Google has introduced TurboQuant, a compression algorithm that reduces large language model (LLM) memory usage by at least 6x while boosting performance, addressing one of AI’s most persistent bottlenecks: memory. The breakthrough lowers inference costs and expands deployment options across cloud and edge environments.
In-depth: Google TurboQuant cuts LLM memory 6x, resets AI inference cost curve
27 Mar