MistralForCausalLM#

Validated models#

Engine documentation: see Mistral / Devstral in the vLLM supported models list (architecture `MistralForCausalLM`).

Status: Validated with LMCache.

Start the LMCache MP server:

lmcache server --l1-size-gb 100 --eviction-policy LRU

Start vLLM with the LMCache MP connector:

vllm serve mistralai/Devstral-2-123B-Instruct-2512 \
    --tensor-parallel-size 2 \
    --enable-auto-tool-choice \
    --tool-call-parser mistral \
    --kv-transfer-config \
    '{"kv_connector":"LMCacheMPConnector", "kv_role":"kv_both"}'

Adjust --tensor-parallel-size to match your hardware. For the generic LMCache + vLLM wiring (ports, remote hosts, in-process mode), see Quick Start.
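The `--kv-transfer-config` flag above takes a JSON object on the command line, which is easy to get wrong with shell quoting. A minimal sketch of building that string programmatically (the helper function is hypothetical; the connector and role values are the ones from the command above):

```python
import json

def kv_transfer_config(connector: str = "LMCacheMPConnector",
                       role: str = "kv_both") -> str:
    """Build the JSON string passed to vLLM's --kv-transfer-config flag.

    Hypothetical helper for illustration: it simply serializes the two
    fields shown in the serve command above into compact, valid JSON.
    """
    cfg = {"kv_connector": connector, "kv_role": role}
    return json.dumps(cfg)

print(kv_transfer_config())
```

The result can then be passed as `--kv-transfer-config '<output>'`, wrapped in single quotes so the shell does not interpret the double quotes inside the JSON.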

If you run into issues with the vLLM setup, refer to the vLLM Recipes for more details.

Status: Not validated with LMCache.

Status: Not supported with TensorRT-LLM; the LMCache TRT-LLM integration is in progress.

CacheBlend support#

Compression support#

| Method | Status | Notes |
|---|---|---|
| CacheGen | Not validated | |

Caveats#

None known.