
Latest Articles
410 articles in total



DeepSeek Releases FlashMLA: An Efficient MLA Decoding Kernel Optimized for Hopper GPUs, Dramatically Boosting AI Inference Performance
FlashMLA is an efficient MLA decoding kernel optimized for NVIDIA Hopper GPUs, reaching up to 3000 GB/s of memory bandwidth in memory-bound configurations and 580 TFLOPS of peak compute in compute-bound configurations, while cutting KV cache requirements by 93.3% for faster, more cost-effective AI inference. (A back-of-the-envelope sketch of the cache savings follows this entry.)
DeepSeek · 2026/1/23
Read full article →
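The 93.3% figure is DeepSeek's published comparison against its own earlier dense model, so the exact ratio depends on the baseline; the minimal sketch below, using illustrative (assumed) dimensions rather than any published configuration, only shows why MLA's compressed latent cache is so much smaller than a standard per-head KV cache.

```python
# Back-of-the-envelope KV cache comparison: standard multi-head attention
# vs. MLA's compressed latent cache. All dimensions below are illustrative
# assumptions, not DeepSeek's published configuration.

BYTES_BF16 = 2

def mha_kv_bytes_per_token(n_layers, n_heads, head_dim):
    # Standard MHA caches a full K vector and a full V vector
    # for every head, in every layer.
    return n_layers * n_heads * head_dim * 2 * BYTES_BF16

def mla_kv_bytes_per_token(n_layers, latent_dim, rope_dim):
    # MLA caches one compressed latent vector (plus a small decoupled
    # RoPE key) per layer, shared across all heads.
    return n_layers * (latent_dim + rope_dim) * BYTES_BF16

mha = mha_kv_bytes_per_token(n_layers=60, n_heads=128, head_dim=128)
mla = mla_kv_bytes_per_token(n_layers=60, latent_dim=512, rope_dim=64)
print(f"MHA cache: {mha / 1024:.0f} KiB per token")
print(f"MLA cache: {mla / 1024:.1f} KiB per token")
print(f"reduction: {100 * (1 - mla / mha):.1f}%")
```

With these assumed dimensions the reduction comes out even higher than 93.3%; the point is the mechanism (one shared latent vector per layer instead of full K/V per head), not the exact number.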
DeepSeek FlashMLA Code Analysis: Unveiling the Undisclosed MODEL1 Efficient Inference Architecture
DeepSeek's FlashMLA repository reveals two distinct model architectures: V3.2, optimized for maximum performance and precision, and MODEL1, designed for efficiency and deployability with a lower memory footprint and specialized long-sequence handling.
DeepSeek · 2026/1/23
Read full article →
FlashMLA: DeepSeek's Open-Source Efficient MLA Decoding Kernel, Optimized for NVIDIA Hopper GPUs
FlashMLA is an open-source, high-performance Multi-head Latent Attention (MLA) decoding kernel optimized for NVIDIA Hopper-architecture GPUs, designed to handle variable-length sequences efficiently. It improves memory and computational efficiency through optimized KV caching and BF16 support, achieving up to 3000 GB/s of memory bandwidth and 580 TFLOPS of compute on H800 SXM5 GPUs. FlashMLA targets large language model (LLM) inference and other NLP workloads that need efficient decoding. (A usage sketch follows this entry.)
DeepSeek · 2026/1/23
Read full article →
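For orientation, here is a minimal single-step decoding sketch following the usage pattern shown in the FlashMLA README (`get_mla_metadata` plus `flash_mla_with_kvcache`). The tensor shapes, block size, and sequence lengths below are assumptions for illustration, and the current API may differ in detail; it requires a Hopper GPU and the installed package.

```python
# Minimal one-step decode sketch in the style of the FlashMLA README
# (github.com/deepseek-ai/FlashMLA). Shapes and sizes are assumed values.
import torch
from flash_mla import get_mla_metadata, flash_mla_with_kvcache

b, s_q, h_q, h_kv = 4, 1, 128, 1      # batch, query len, Q heads, KV heads (assumed)
d, dv, block_size = 576, 512, 64      # head dim incl. RoPE part, value dim (assumed)
max_blocks = 32                       # paged-KV blocks per sequence (assumed)

q = torch.randn(b, s_q, h_q, d, dtype=torch.bfloat16, device="cuda")
kvcache = torch.randn(b * max_blocks, block_size, h_kv, d,
                      dtype=torch.bfloat16, device="cuda")
block_table = torch.arange(b * max_blocks, dtype=torch.int32,
                           device="cuda").view(b, max_blocks)
cache_seqlens = torch.full((b,), 1024, dtype=torch.int32, device="cuda")

# Scheduler metadata is computed once per decoding step and reused by
# every layer's kernel launch.
tile_scheduler_metadata, num_splits = get_mla_metadata(
    cache_seqlens, s_q * h_q // h_kv, h_kv
)
o, lse = flash_mla_with_kvcache(
    q, kvcache, block_table, cache_seqlens, dv,
    tile_scheduler_metadata, num_splits, causal=True,
)
```

The variable-length support comes from `cache_seqlens` and the paged `block_table`: each sequence in the batch can have a different cached length without padding the cache to the maximum.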
FlashMLA: Breaking the Transformer Bottleneck with a Next-Generation Efficient Attention Engine
FlashMLA is an optimized multi-head attention algorithm that dramatically improves inference performance through streaming chunking, online normalization, and register-level pipelining, reducing memory usage and increasing speed while maintaining numerical stability. (See the online-normalization sketch after this entry.)
AI Large Models · 2026/1/23
Read full article →
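To make "streaming chunking plus online normalization" concrete, here is a minimal NumPy sketch (not the FlashMLA kernel itself): scores are processed chunk by chunk with a running max and running denominator, so the full attention matrix is never materialized and the softmax stays numerically stable. The real kernel does this per tile in registers and shared memory on the GPU.

```python
# Online-softmax attention over one query: accumulate chunk by chunk
# with a running max (m), running denominator (denom), and running
# weighted sum of V (acc), rescaling previous partial sums whenever
# the running max increases.
import numpy as np

def online_attention(q, K, V, chunk=128):
    """q: (d,), K/V: (n, d). Returns softmax(q.K^T / sqrt(d)) @ V
    without ever storing all n scores at once."""
    scale = 1.0 / np.sqrt(q.shape[-1])
    m = -np.inf
    denom = 0.0
    acc = np.zeros_like(V[0], dtype=np.float64)
    for start in range(0, K.shape[0], chunk):
        s = (K[start:start + chunk] @ q) * scale
        m_new = max(m, s.max())
        correction = np.exp(m - m_new)   # rescale earlier partial sums
        p = np.exp(s - m_new)
        denom = denom * correction + p.sum()
        acc = acc * correction + p @ V[start:start + chunk]
        m = m_new
    return acc / denom

# Matches the naive reference up to floating-point error:
rng = np.random.default_rng(0)
q = rng.normal(size=(64,))
K, V = rng.normal(size=(1000, 64)), rng.normal(size=(1000, 64))
s = (K @ q) / np.sqrt(64)
ref = (np.exp(s - s.max()) / np.exp(s - s.max()).sum()) @ V
assert np.allclose(online_attention(q, K, V), ref)
```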
DeepSeek Open-Sources FlashMLA: The Ultimate Decoding Acceleration Kernel for Hopper GPUs, Greatly Improving LLM Inference Efficiency
FlashMLA is an efficient MLA decoding kernel optimized for Hopper GPUs (specifically the H800) and variable-length sequences, significantly accelerating inference for large language models.
DeepSeek · 2026/1/23
Read full article →
FlashMLA: DeepSeek's High-Performance Attention Kernel Library Powering V3 Models at 660 TFLOPS
FlashMLA is DeepSeek's optimized attention kernel library powering the DeepSeek-V3 models, featuring token-level sparse attention with FP8 KV cache support and achieving up to 660 TFLOPS on NVIDIA H800 GPUs. (A token-selection sketch follows this entry.)
DeepSeek · 2026/1/23
Read full article →
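To illustrate only the token-level sparsity idea (this is not the FlashMLA kernel, and it ignores the FP8 cache), the hypothetical sketch below gathers a per-query subset of KV tokens by index and runs a dense softmax over just that subset, so cost scales with the number of selected tokens rather than the full cache length.

```python
# Token-level sparse attention sketch: each query attends only to a
# small set of selected KV token indices (e.g. from an indexer/top-k
# stage) instead of the whole cache. Illustrative NumPy only; FlashMLA
# implements this as a fused GPU kernel over an FP8 KV cache.
import numpy as np

def sparse_attention(q, K, V, sel_idx):
    """q: (d,), K/V: (n, d), sel_idx: (k,) indices of selected tokens."""
    Ks, Vs = K[sel_idx], V[sel_idx]          # gather only selected tokens
    s = (Ks @ q) / np.sqrt(q.shape[-1])
    w = np.exp(s - s.max())
    w /= w.sum()                             # softmax over the selected set
    return w @ Vs

rng = np.random.default_rng(0)
q = rng.normal(size=(64,))
K, V = rng.normal(size=(4096, 64)), rng.normal(size=(4096, 64))
# Hypothetical selection stage: keep the 256 highest-scoring tokens.
sel_idx = np.argpartition(K @ q, -256)[-256:]
out = sparse_attention(q, K, V, sel_idx)
```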
Mastra: A TypeScript Framework for Rapid Enterprise AI Application Development
Mastra is a TypeScript framework designed for rapid AI application development, featuring multi-model support, visual workflow design, knowledge base integration, automated testing, and seamless API connectivity.
AI Large Models · 2026/1/23
Read full article →