AnthropicLLM

Anthropic provider for the cogency framework's shared LLM interface. All cogency LLM providers support:

- Streaming execution for real-time output
- Key rotation for high-volume usage
- Rate limiting via the yield_interval parameter
- A unified interface across providers
- Dynamic model/parameter configuration

Constructor


AnthropicLLM(
  self,
  api_keys: Optional[Union[str, List[str]]] = None,
  model: str = 'claude-3-5-sonnet-20241022',
  timeout: float = 15.0,
  temperature: float = 0.7,
  max_tokens: int = 4096,
  max_retries: int = 3,
  **kwargs
)
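
A minimal construction sketch. The import path (cogency.llm) is an assumption and may differ in your installation; the key strings are placeholders. Passing a list of keys enables the documented key rotation.

from cogency.llm import AnthropicLLM  # assumed import path

# Single key, default model and sampling parameters
llm = AnthropicLLM(api_keys="sk-ant-your-key")

# A list of keys rotates across calls for high-volume usage;
# model and sampling parameters can be tuned per instance
llm = AnthropicLLM(
    api_keys=["sk-ant-key-1", "sk-ant-key-2"],
    model="claude-3-5-sonnet-20241022",
    temperature=0.2,
    max_tokens=1024,
)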

Methods

ainvoke

LangGraph compatibility method: an async wrapper around invoke().


ainvoke(self, messages: List[Dict[str, str]], **kwargs) -> str
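
A usage sketch, assuming the import path above. Since ainvoke exists for LangGraph's async runnable interface, it is awaited with the same arguments as invoke().

import asyncio

from cogency.llm import AnthropicLLM  # assumed import path

async def main() -> None:
    llm = AnthropicLLM(api_keys="sk-ant-your-key")
    # ainvoke accepts the same messages and kwargs as invoke()
    reply = await llm.ainvoke([{"role": "user", "content": "Hello"}])
    print(reply)

asyncio.run(main())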

invoke

Generate a response from the LLM given a list of messages.

Args:
  messages: List of message dictionaries with 'role' and 'content' keys.
  **kwargs: Additional parameters for the LLM call.

Returns:
  String response from the LLM.


invoke(self, messages: List[Dict[str, str]], **kwargs) -> str
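
A basic call sketch. It assumes invoke is a coroutine, consistent with the async stream method below; if your version exposes it synchronously, drop the await and the asyncio scaffolding.

import asyncio

from cogency.llm import AnthropicLLM  # assumed import path

async def main() -> None:
    llm = AnthropicLLM(api_keys="sk-ant-your-key")
    messages = [{"role": "user", "content": "Summarize key rotation in one sentence."}]
    # Extra kwargs are passed through to the underlying LLM call
    reply = await llm.invoke(messages, temperature=0.0)
    print(reply)

asyncio.run(main())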

stream

Generate a streaming response from the LLM given a list of messages.

Args:
  messages: List of message dictionaries with 'role' and 'content' keys.
  yield_interval: Minimum time between yields for rate limiting (seconds).
  **kwargs: Additional parameters for the LLM call.

Yields:
  String chunks from the LLM response.


stream(
  self,
  messages: List[Dict[str, str]],
  yield_interval: float = 0.0,
  **kwargs
) -> AsyncIterator[str]
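
A consumption sketch, again assuming the import path above. stream returns an async iterator, so chunks are read with async for; here yield_interval=0.05 throttles output to at most one yield every 50 ms.

import asyncio

from cogency.llm import AnthropicLLM  # assumed import path

async def main() -> None:
    llm = AnthropicLLM(api_keys="sk-ant-your-key")
    messages = [{"role": "user", "content": "Tell me a short story."}]
    # Chunks arrive as they are generated; yield_interval rate-limits emission
    async for chunk in llm.stream(messages, yield_interval=0.05):
        print(chunk, end="", flush=True)

asyncio.run(main())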