cogency.llm

Classes

AnthropicLLM

LLM provider backed by Anthropic's Claude models. Like every LLM implementation in the cogency framework, it supports:

- Streaming execution for real-time output
- Key rotation for high-volume usage
- Rate limiting via the yield_interval parameter
- A unified interface across providers
- Dynamic model/parameter configuration


AnthropicLLM(
  self,
  api_keys: Union[str, List[str]] = None,
  model: str = 'claude-3-5-sonnet-20241022',
  timeout: float = 15.0,
  temperature: float = 0.7,
  max_tokens: int = 4096,
  max_retries: int = 3,
  **kwargs
)
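Each provider accepts api_keys as either a single key or a list of keys. A minimal sketch of how such a constructor might normalize this argument (normalize_keys is a hypothetical helper for illustration, not part of cogency):

```python
from typing import List, Optional, Union

def normalize_keys(api_keys: Union[str, List[str], None]) -> Optional[List[str]]:
    # Coerce the api_keys argument into a list of keys (or None),
    # so a single string and a list of strings are handled uniformly.
    if api_keys is None:
        return None
    if isinstance(api_keys, str):
        return [api_keys]
    return list(api_keys)
```

With two or more keys, a provider can hand the list to KeyRotator; with a single string it behaves like a plain api_key.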
                    

BaseLLM

Base class for all LLM implementations in the cogency framework. All LLM providers support:

- Streaming execution for real-time output
- Key rotation for high-volume usage
- Rate limiting via the yield_interval parameter
- A unified interface across providers
- Dynamic model/parameter configuration


BaseLLM(self, api_key: str = None, key_rotator=None, **kwargs)
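The streaming and yield_interval behavior described above can be pictured as an async generator that pauses between yielded chunks. This is an illustrative sketch, not cogency's actual implementation; the chunk source is simulated:

```python
import asyncio
from typing import AsyncIterator, List

async def stream_chunks(chunks: List[str], yield_interval: float = 0.0) -> AsyncIterator[str]:
    # Yield response chunks, sleeping yield_interval seconds between them
    # to throttle how fast output is pushed to the consumer.
    for chunk in chunks:
        yield chunk
        if yield_interval > 0:
            await asyncio.sleep(yield_interval)

async def main() -> str:
    # Collect a simulated streamed response.
    parts = []
    async for part in stream_chunks(["Hel", "lo"], yield_interval=0.01):
        parts.append(part)
    return "".join(parts)

print(asyncio.run(main()))  # → Hello
```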
                    

GeminiLLM

LLM provider backed by Google's Gemini models. Like every LLM implementation in the cogency framework, it supports:

- Streaming execution for real-time output
- Key rotation for high-volume usage
- Rate limiting via the yield_interval parameter
- A unified interface across providers
- Dynamic model/parameter configuration


GeminiLLM(
  self,
  api_keys: Union[str, List[str]] = None,
  model: str = 'gemini-2.5-flash',
  timeout: float = 15.0,
  temperature: float = 0.7,
  max_retries: int = 3,
  **kwargs
)
                    

GrokLLM

LLM provider backed by xAI's Grok models. Like every LLM implementation in the cogency framework, it supports:

- Streaming execution for real-time output
- Key rotation for high-volume usage
- Rate limiting via the yield_interval parameter
- A unified interface across providers
- Dynamic model/parameter configuration


GrokLLM(
  self,
  api_keys: Union[str, List[str]] = None,
  model: str = 'grok-beta',
  timeout: float = 15.0,
  temperature: float = 0.7,
  max_retries: int = 3,
  **kwargs
)
                    

KeyRotator

Simple key rotator that cycles through a list of API keys, spreading requests across them to avoid per-key rate limits.


KeyRotator(self, keys: List[str])
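KeyRotator's method names aren't shown here, but the core idea is plain round-robin cycling over the key list. A self-contained sketch of that idea (the class and method names below are illustrative assumptions, not cogency's API):

```python
from itertools import cycle
from typing import List

class RoundRobinKeys:
    # Illustrative round-robin rotator: each call returns the next key,
    # wrapping around so load is spread evenly across all keys.
    def __init__(self, keys: List[str]):
        if not keys:
            raise ValueError("at least one API key is required")
        self._cycle = cycle(keys)

    def next_key(self) -> str:
        return next(self._cycle)

rotator = RoundRobinKeys(["key-a", "key-b"])
print([rotator.next_key() for _ in range(3)])  # → ['key-a', 'key-b', 'key-a']
```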
                    

MistralLLM

LLM provider backed by Mistral AI's models. Like every LLM implementation in the cogency framework, it supports:

- Streaming execution for real-time output
- Key rotation for high-volume usage
- Rate limiting via the yield_interval parameter
- A unified interface across providers
- Dynamic model/parameter configuration


MistralLLM(
  self,
  api_keys: Union[str, List[str]] = None,
  model: str = 'mistral-large-latest',
  timeout: float = 15.0,
  temperature: float = 0.7,
  max_tokens: int = 4096,
  max_retries: int = 3,
  **kwargs
)
                    

OpenAILLM

LLM provider backed by OpenAI's models. Like every LLM implementation in the cogency framework, it supports:

- Streaming execution for real-time output
- Key rotation for high-volume usage
- Rate limiting via the yield_interval parameter
- A unified interface across providers
- Dynamic model/parameter configuration


OpenAILLM(
  self,
  api_keys: Union[str, List[str]] = None,
  model: str = 'gpt-4o',
  timeout: float = 15.0,
  temperature: float = 0.7,
  max_retries: int = 3,
  **kwargs
)
                    

Functions

auto_detect_llm

Auto-detect an LLM provider from environment variables.

Fallback chain:
1. OpenAI
2. Anthropic
3. Gemini
4. Grok
5. Mistral

Returns:
    BaseLLM: Configured LLM instance.

Raises:
    RuntimeError: If no API keys are found for any provider.


auto_detect_llm() -> cogency.llm.base.BaseLLM
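The fallback chain amounts to a first-match scan over environment variables. A sketch under the assumption that each provider is detected via a conventionally named variable (the exact variable names cogency checks may differ):

```python
from typing import Dict, List, Tuple

# Hypothetical provider -> env var mapping, in the documented fallback order.
PROVIDER_ENV_VARS: List[Tuple[str, str]] = [
    ("openai", "OPENAI_API_KEY"),
    ("anthropic", "ANTHROPIC_API_KEY"),
    ("gemini", "GEMINI_API_KEY"),
    ("grok", "GROK_API_KEY"),
    ("mistral", "MISTRAL_API_KEY"),
]

def detect_provider(env: Dict[str, str]) -> str:
    # Return the first provider whose API key is present, mirroring the
    # fallback chain above; raise RuntimeError if none is configured.
    for provider, var in PROVIDER_ENV_VARS:
        if env.get(var):
            return provider
    raise RuntimeError("No API keys found for any provider")

print(detect_provider({"GEMINI_API_KEY": "g-123"}))  # → gemini
```

In the real function the winning provider would be instantiated and returned as a BaseLLM; here only the selection logic is shown.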