Gemma4ForConditionalGeneration#
Validated models#
Engine documentation: Gemma 4 in vLLM supported models (architecture Gemma4ForConditionalGeneration).

Status: Validated with LMCache.
Start the LMCache MP server:

```shell
lmcache server --l1-size-gb 100 --eviction-policy LRU
```
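As a rough sanity check on the `--l1-size-gb` budget, you can estimate how many cached tokens fit in 100 GB. The model dimensions below are illustrative assumptions, not the actual gemma-4-31B-it configuration; substitute the values from your model's config:

```python
# Back-of-envelope L1 capacity estimate. All model dimensions here are
# placeholder assumptions -- replace them with your model's real config.
num_layers = 48        # assumed
num_kv_heads = 8       # assumed (grouped-query attention)
head_dim = 128         # assumed
bytes_per_elem = 2     # fp16/bf16 KV cache

# K and V each store num_layers * num_kv_heads * head_dim elements per token.
bytes_per_token = 2 * num_layers * num_kv_heads * head_dim * bytes_per_elem
l1_bytes = 100 * 1024**3
tokens_that_fit = l1_bytes // bytes_per_token
print(f"{bytes_per_token} bytes/token, ~{tokens_that_fit} tokens fit in L1")
```

Under these assumptions the cache holds roughly half a million tokens; a larger KV footprint per token shrinks that proportionally.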
Start vLLM with the LMCache MP connector:

```shell
vllm serve google/gemma-4-31B-it \
  --tensor-parallel-size 2 \
  --kv-transfer-config \
  '{"kv_connector":"LMCacheMPConnector", "kv_role":"kv_both"}'
```
Adjust --tensor-parallel-size to match your hardware. For the
generic LMCache + vLLM wiring (ports, remote hosts, in-process mode),
see Quick Start.
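The `--kv-transfer-config` value must be valid JSON, and shell quoting mistakes can break it silently. A minimal sketch that builds the string programmatically before pasting it into the command:

```python
import json

# Build the --kv-transfer-config value from a dict so the JSON is
# guaranteed well-formed; the keys mirror the command above.
kv_config = {"kv_connector": "LMCacheMPConnector", "kv_role": "kv_both"}
arg = json.dumps(kv_config)
print(arg)  # wrap this in single quotes when passing it on the command line
```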
If you run into issues with the vLLM setup, refer to the vLLM Recipes for more details.
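Once the server is up, `vllm serve` exposes an OpenAI-compatible API (port 8000 by default). The snippet below builds such a request with the standard library; the endpoint URL assumes the default host and port, so adjust it if you changed them:

```python
import json
import urllib.request

# Assemble a chat-completions request against vLLM's OpenAI-compatible API.
# localhost:8000 is vLLM's default; change it to match your deployment.
payload = {
    "model": "google/gemma-4-31B-it",
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 16,
}
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# Uncomment to actually send the request once the server is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
print(req.full_url, req.get_method())
```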
TRT-LLM#

Status: Not supported. LMCache TRT-LLM integration is in progress.

CacheBlend support#

Status: Not validated with LMCache.
Compression support#

| Method | Status | Notes |
|---|---|---|
|  | Not validated |  |
Caveats#
None known.