
FlashMLA: DeepSeek's High-Performance Attention Kernel Library, Driving V3 Models to 660 TFLOPS
FlashMLA is DeepSeek's optimized attention kernel library powering the DeepSeek-V3 models. It features token-level sparse attention with FP8 KV cache support and reaches up to 660 TFLOPS on NVIDIA H800 GPUs.
DeepSeek · 2026/1/23
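For orientation, below is a minimal decoding-time usage sketch adapted from the public deepseek-ai/FlashMLA README. The shapes (batch size, head counts, the 576/512 latent head dimensions, paged-KV block size) and all variable names are illustrative assumptions, and the `get_mla_metadata` / `flash_mla_with_kvcache` signatures may differ between releases; treat this as a sketch, not a definitive API reference.

```python
import torch
# Assumes the FlashMLA package is installed and a Hopper GPU is available.
from flash_mla import get_mla_metadata, flash_mla_with_kvcache

# Hypothetical decode-step shapes (illustrative only).
b, s_q = 4, 1                 # batch size; one query token per decode step
h_q, h_kv = 128, 1            # query heads; MLA uses a single latent KV head
d, dv = 576, 512              # q/k head dim (512 latent + 64 RoPE) and v head dim
block_size = 64               # paged KV cache block size
max_seqlen = 1024
num_blocks = b * (max_seqlen // block_size)

cache_seqlens = torch.full((b,), max_seqlen, dtype=torch.int32, device="cuda")
block_table = torch.arange(num_blocks, dtype=torch.int32, device="cuda").view(b, -1)
q = torch.randn(b, s_q, h_q, d, dtype=torch.bfloat16, device="cuda")
kvcache = torch.randn(num_blocks, block_size, h_kv, d, dtype=torch.bfloat16, device="cuda")

# Plan the tile schedule once per decode step, then run the kernel
# (in a real model this call would repeat per layer with that layer's q/kvcache).
tile_scheduler_metadata, num_splits = get_mla_metadata(
    cache_seqlens, s_q * h_q // h_kv, h_kv
)
o, lse = flash_mla_with_kvcache(
    q, kvcache, block_table, cache_seqlens, dv,
    tile_scheduler_metadata, num_splits, causal=True,
)
```

The two-phase design (metadata planning, then the attention kernel proper) lets the scheduling cost be amortized across layers within a decode step, which matters when each step processes only one query token per sequence.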






