fix(auth): create_llm() should handle usage_id to avoid registry conflicts #1964

@enyst


Summary

When using OpenAISubscriptionAuth.create_llm() multiple times (e.g., for main agent LLM and condenser LLM), the LLMRegistry raises a ValueError because both LLMs get the same default usage_id of "default".

Error Message

ValueError: Usage ID 'default' already exists in registry. Use a different usage_id on the LLM or call get() to retrieve the existing LLM.

Root Cause

  1. LLM.usage_id defaults to "default" (see llm.py:313-320)
  2. create_llm() passes **llm_kwargs to LLM() but doesn't set a default usage_id
  3. When the CLI creates two LLMs via create_llm() (one for agent, one for condenser), both get usage_id="default"
  4. LLMRegistry.add() raises ValueError on duplicate usage_id
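
The collision is easy to reproduce with a minimal sketch; the classes below are simplified stand-ins for illustration, not the real OpenHands implementations:

```python
from dataclasses import dataclass

@dataclass
class LLM:
    model: str
    usage_id: str = "default"  # mirrors the "default" default in llm.py

class LLMRegistry:
    """Simplified stand-in: a dict keyed by usage_id."""

    def __init__(self) -> None:
        self._llms: dict[str, LLM] = {}

    def add(self, llm: LLM) -> None:
        if llm.usage_id in self._llms:
            raise ValueError(
                f"Usage ID '{llm.usage_id}' already exists in registry. "
                "Use a different usage_id on the LLM or call get() to "
                "retrieve the existing LLM."
            )
        self._llms[llm.usage_id] = llm

registry = LLMRegistry()
registry.add(LLM(model="gpt-5.2-codex"))      # agent LLM -> usage_id="default"
try:
    registry.add(LLM(model="gpt-5.2-codex"))  # condenser LLM -> also "default"
except ValueError as exc:
    print(exc)  # the same ValueError as in the report
```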

Current Workaround

Callers must explicitly pass usage_id:

```python
# Main LLM
llm = chatgpt_auth.create_llm(usage_id="agent", ...)

# Condenser LLM
condenser_llm = chatgpt_auth.create_llm(usage_id="condenser", ...)
```

Proposed Solutions

Option 1: Auto-generate a unique usage_id

```python
def create_llm(
    self,
    model: str = "gpt-5.2-codex",
    usage_id: str | None = None,
    ...
) -> LLM:
    if usage_id is None:
        usage_id = f"llm-{uuid4().hex[:8]}"
    # ... rest
```

Option 2: Make `usage_id` a required parameter

Make `usage_id` a required parameter for `create_llm()`:

```python
def create_llm(
    self,
    usage_id: str,  # Required: no default, so callers must choose a unique ID
    model: str = "gpt-5.2-codex",
    credentials: OAuthCredentials | None = None,
    ...
) -> LLM:
```

Option 3: Document the requirement

Add clear documentation that callers must pass unique `usage_id` values when creating multiple LLMs.
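
If Option 3 is chosen, a minimal sketch of what that could look like (the docstring wording and stub below are illustrative, not the actual OpenHands signature):

```python
def create_llm(model: str = "gpt-5.2-codex", usage_id: str = "default"):
    """Create an LLM bound to this subscription's credentials.

    Args:
        model: Model name passed through to the LLM.
        usage_id: Key under which the LLM is registered. Callers creating
            multiple LLMs (e.g. agent + condenser) must pass distinct
            values; a duplicate makes LLMRegistry.add() raise ValueError.
    """
    raise NotImplementedError  # illustrative stub only
```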

Additional Context

The usage_id is used for:

  • Registry lookups (LLMRegistry.get())
  • Telemetry and spend tracking
  • Metrics collection (ConversationStats)

Having meaningful, unique usage_id values (like "agent", "condenser", "ask-agent-llm") is important for debugging and monitoring, so Option 1 with auto-generation might produce less useful IDs than explicit naming.
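
For illustration, a toy sketch (names are hypothetical, not the OpenHands API) of why distinct, meaningful usage_id values make spend tracking readable:

```python
from collections import defaultdict

# Toy spend tracker keyed by usage_id (hypothetical helper, not OpenHands code).
# Costs are in integer cents to keep the arithmetic exact.
spend: dict[str, int] = defaultdict(int)

def record_cost(usage_id: str, cents: int) -> None:
    spend[usage_id] += cents

# With meaningful IDs, per-role totals are immediately readable:
record_cost("agent", 12)
record_cost("condenser", 3)
record_cost("agent", 5)

print(dict(spend))  # {'agent': 17, 'condenser': 3}
```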

Related

Found while implementing ChatGPT subscription login in OpenHands CLI.
See also: LLMRegistry in llm_registry.py
