BaseLLM

Base class for all LLM implementations in the cogency framework. All LLM providers support:

- Streaming execution for real-time output
- Key rotation for high-volume usage
- Rate limiting via the yield_interval parameter
- A unified interface across providers
- Dynamic model/parameter configuration

Constructor


BaseLLM(self, api_key: str = None, key_rotator=None, **kwargs)
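
BaseLLM is the base class, so a concrete provider subclasses it and supplies the actual API calls. Below is a minimal sketch of such a subclass; the EchoLLM class, its import path, and its word-by-word streaming are illustrative assumptions, not part of cogency:

import asyncio
from typing import AsyncIterator, Dict, List

from cogency.llm import BaseLLM  # import path is an assumption

class EchoLLM(BaseLLM):
    """Toy provider that echoes the last user message (hypothetical)."""

    def invoke(self, messages: List[Dict[str, str]], **kwargs) -> str:
        # A real provider would call its API here, using the api_key
        # (or key rotator) passed to the BaseLLM constructor.
        return messages[-1]["content"]

    async def stream(
        self,
        messages: List[Dict[str, str]],
        yield_interval: float = 0.0,
        **kwargs,
    ) -> AsyncIterator[str]:
        # Honor yield_interval by pausing between chunks.
        for word in self.invoke(messages, **kwargs).split():
            yield word + " "
            if yield_interval:
                await asyncio.sleep(yield_interval)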

Methods

ainvoke

LangGraph compatibility method: an async wrapper around invoke().


ainvoke(self, messages: List[Dict[str, str]], **kwargs) -> str
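
A quick sketch of awaiting it, assuming ainvoke is a coroutine as LangGraph expects, and reusing the hypothetical EchoLLM subclass from above:

import asyncio

async def main() -> None:
    llm = EchoLLM()
    reply = await llm.ainvoke([{"role": "user", "content": "Hello"}])
    print(reply)

asyncio.run(main())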

invoke

Generate a response from the LLM given a list of messages.

Args:
    messages: List of message dictionaries with 'role' and 'content' keys.
    **kwargs: Additional parameters for the LLM call.

Returns:
    String response from the LLM.


invoke(self, messages: List[Dict[str, str]], **kwargs) -> str
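
A minimal call, using the hypothetical EchoLLM subclass sketched above and treating invoke as synchronous, per the signature shown:

llm = EchoLLM()
reply = llm.invoke(
    [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize BaseLLM in one line."},
    ]
)
print(reply)  # EchoLLM simply returns the last user message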

stream

Generate a streaming response from the LLM given a list of messages.

Args:
    messages: List of message dictionaries with 'role' and 'content' keys.
    yield_interval: Minimum time between yields for rate limiting (seconds).
    **kwargs: Additional parameters for the LLM call.

Yields:
    String chunks from the LLM response.


stream(
  self,
  messages: List[Dict[str, str]],
  yield_interval: float = 0.0,
  **kwargs
) -> AsyncIterator[str]
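
A consumption sketch, again using the hypothetical EchoLLM subclass with a 50 ms yield_interval; the async-for usage follows from the AsyncIterator[str] return type:

import asyncio

async def main() -> None:
    llm = EchoLLM()
    async for chunk in llm.stream(
        [{"role": "user", "content": "Stream something."}],
        yield_interval=0.05,  # at least 50 ms between chunks
    ):
        print(chunk, end="", flush=True)

asyncio.run(main())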