# ccproxy.llms.formatters.context
Context helpers for formatter conversions using async contextvars.
## register_request
Record the most recent upstream request for streaming conversions.
Source code in ccproxy/llms/formatters/context.py
## get_last_request
Return the cached upstream request for the active conversion, if any.
## get_last_instructions
Return the cached instruction string from the last registered request.
## register_request_tools
Cache request tool definitions for downstream streaming responses.
## get_last_request_tools
Return cached request tool definitions, if any.
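The tools pair mirrors the request pair: cache on registration, read back during streaming. A minimal sketch, assuming tool definitions are plain dicts (the real types in `ccproxy` may be richer models):

```python
from contextvars import ContextVar
from typing import Any

# Tool definitions cached for the active conversion; empty tuple = none registered.
_request_tools: ContextVar[tuple[dict[str, Any], ...]] = ContextVar(
    "request_tools", default=()
)


def register_request_tools(tools: list[dict[str, Any]]) -> None:
    """Cache request tool definitions for downstream streaming responses."""
    # Store an immutable copy so later mutation of the caller's list
    # cannot change what the streaming side observes.
    _request_tools.set(tuple(tools))


def get_last_request_tools() -> list[dict[str, Any]]:
    """Return cached request tool definitions, or an empty list."""
    return list(_request_tools.get())
```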
## register_openai_thinking_xml
Cache OpenAI thinking serialization preference for active conversions.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `enabled` | `bool \| None` | Whether thinking blocks should be serialized with XML wrappers. | *required* |
**Note**

The value is stored in a `ContextVar`, so concurrent async requests keep independent preferences without leaking into each other.
## get_openai_thinking_xml
Return the OpenAI thinking serialization preference for active conversions.