Large Language Models in Reflect
Reflect uses large language models (LLMs) from OpenAI and Google (Gemini) to power its AI capabilities. The specific model used may vary depending on the feature and current system conditions.
LLM Providers
Reflect integrates with the following providers:
OpenAI
Google Gemini
Model selection is handled automatically within the system.
Why Reflect Uses Multiple Providers
Using multiple LLM providers allows Reflect to:
Improve reliability and uptime
Optimize performance
Maintain high-quality responses
Provide redundancy if a provider becomes unavailable
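The redundancy described above typically follows a simple failover pattern: try the preferred provider first and fall back to the next one if it is unavailable. The sketch below is illustrative only; the provider stubs and the `complete_with_failover` function are hypothetical and do not reflect Reflect's internal code.

```python
# Hypothetical sketch of multi-provider failover (not Reflect's actual code).

class ProviderUnavailableError(Exception):
    """Raised when a provider cannot serve the request."""

def complete_with_failover(prompt, providers):
    """Try each provider in priority order; fall back to the next on failure."""
    errors = []
    for provider in providers:
        try:
            return provider(prompt)
        except ProviderUnavailableError as exc:
            errors.append(exc)  # record the failure and try the next provider
    raise RuntimeError(f"All providers failed: {errors}")

# Illustrative stubs: the primary provider is down, so the fallback answers.
def openai_stub(prompt):
    raise ProviderUnavailableError("simulated outage")

def gemini_stub(prompt):
    return f"response to: {prompt}"

print(complete_with_failover("hello", [openai_stub, gemini_stub]))
```

Because the routing happens before any response is returned, a single-provider outage is invisible to the end user, which is the point of the redundancy noted above.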
Model Selection
Model routing is managed internally and is not user-configurable unless explicitly documented.
Data Processing
Reflect securely sends only the prompt data required for each request to the selected provider for processing. All data handling follows SmartBear security and privacy standards.
Expected Behavior
Reflect’s use of OpenAI and Google Gemini is part of the product’s intended architecture. This behavior is expected and does not indicate a configuration issue.