Welcome to LMCache!#

Redis for LLMs.

Note

We are currently in the process of upgrading our documentation to provide better guidance and examples. Some sections may be under construction. Thank you for your patience!

LMCache lets LLMs prefill each text only once. By storing the KV caches of all reusable texts, LMCache can reuse the KV cache of any repeated text (not necessarily a prefix) in any serving engine instance. This reduces prefill delay, i.e., time to first token (TTFT), and saves precious GPU cycles and memory. By combining LMCache with vLLM, LMCache achieves 3-10x reductions in delay and GPU cycles in many LLM use cases, including multi-round QA and RAG.
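The reuse idea above can be sketched as a toy chunk-level cache. This is an illustrative sketch only, not LMCache's actual API or data layout: the chunk size, class, and method names below are hypothetical, the "KV cache" is a placeholder string instead of real tensors, and real attention KV values depend on the preceding context, which this toy ignores.

```python
# Conceptual sketch of chunk-level KV-cache reuse.
# NOT LMCache's real API: names and layout here are invented for illustration,
# and cross-chunk attention dependencies are deliberately ignored.
import hashlib

CHUNK = 4  # tokens per cache block (toy value)

class ToyKVStore:
    """Maps a hash of each token chunk to its (pretend) KV cache."""
    def __init__(self):
        self.store = {}
        self.prefills = 0  # chunks we had to compute from scratch

    def _key(self, chunk):
        return hashlib.sha256(" ".join(chunk).encode()).hexdigest()

    def get_or_prefill(self, tokens):
        kv = []
        for i in range(0, len(tokens), CHUNK):
            chunk = tuple(tokens[i:i + CHUNK])
            key = self._key(chunk)
            if key not in self.store:
                self.prefills += 1  # cache miss: pay the prefill cost once
                self.store[key] = f"kv({','.join(chunk)})"  # stand-in for KV tensors
            kv.append(self.store[key])
        return kv

store = ToyKVStore()
doc = [f"d{i}" for i in range(8)]          # a shared document: two chunks
store.get_or_prefill(["q1"] * 4 + doc)     # 3 chunks, all new -> 3 prefills
store.get_or_prefill(["q2"] * 4 + doc)     # doc reused even though it is not a prefix
print(store.prefills)                      # 4, not 6: the doc chunks were prefilled once
```

The second request shares only a suffix with the first, yet its document chunks still hit the cache; only the new question chunk is prefilled. This is the non-prefix reuse that distinguishes the approach from plain prefix caching.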

For more information, check out the following:


Documentation#


KV Cache offloading and sharing


Disaggregated prefill



KV Cache Optimizations


Use LMCache in production


Developer Guide