GptOssForCausalLM#
Validated models#
vLLM#

Engine documentation: GPT-OSS in vLLM supported models (architecture GptOssForCausalLM).

Status: Validated with LMCache.
Start the LMCache MP server:

```bash
lmcache server --l1-size-gb 100 --eviction-policy LRU
```
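For local experiments, a minimal shell sketch (assuming the server and vLLM run on the same host) is to start the server in the background and clean it up when the session ends:

```bash
#!/usr/bin/env bash
# Sketch: run the LMCache MP server in the background and stop it on exit.
# Uses only the flags shown above; size --l1-size-gb to your CPU RAM budget.
lmcache server --l1-size-gb 100 --eviction-policy LRU &
SERVER_PID=$!
trap 'kill "$SERVER_PID" 2>/dev/null' EXIT

sleep 5   # give the server a moment to come up before vLLM connects
# ...run one of the vllm serve commands below...
wait "$SERVER_PID"
```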
gpt-oss-120b (2 GPUs):
```bash
vllm serve openai/gpt-oss-120b \
  --tensor-parallel-size 2 \
  --enable-auto-tool-choice \
  --tool-call-parser openai \
  --kv-transfer-config \
  '{"kv_connector":"LMCacheMPConnector", "kv_role":"kv_both"}'
```
gpt-oss-20b (1 GPU):
```bash
vllm serve openai/gpt-oss-20b \
  --enable-auto-tool-choice \
  --tool-call-parser openai \
  --kv-transfer-config \
  '{"kv_connector":"LMCacheMPConnector", "kv_role":"kv_both"}'
```
Adjust --tensor-parallel-size to match your hardware. In the --kv-transfer-config JSON, kv_role set to kv_both lets the instance both save KV cache to LMCache and load it back. For the generic LMCache + vLLM wiring (ports, remote hosts, in-process mode), see Quick Start. If you run into issues with vLLM setup, refer to the vLLM Recipes for more details.
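To sanity-check the deployment end to end, you can send the same long prompt twice and compare latency; the second request should be served largely from the KV cache. A minimal sketch, assuming vLLM's default OpenAI-compatible endpoint on localhost:8000 (adjust the model name to the checkpoint you launched):

```python
# Sketch: time two identical requests against the vLLM OpenAI-compatible API.
# Assumes the default endpoint http://localhost:8000/v1.
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")
prompt = "Summarize the following text: " + "lorem ipsum " * 500  # long shared prefix

for attempt in ("cold", "warm"):
    start = time.perf_counter()
    client.chat.completions.create(
        model="openai/gpt-oss-120b",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=32,
    )
    # The warm request should show a noticeably lower time to first token.
    print(f"{attempt}: {time.perf_counter() - start:.2f}s")
```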
SGLang#

Status: Not validated with LMCache.
TensorRT-LLM#

Status: Not supported. LMCache TRT-LLM integration is in progress.
CacheBlend support#
Compression support#
| Method | Status | Notes |
|---|---|---|
|  | Not validated |  |
Caveats#
None known.