GEO

Latest Articles

Running Llama3 70B on a 4GB GPU: The AirLLM Framework Puts High-End AI Within Reach

This article demonstrates how to run the powerful Llama3 70B open-source LLM in just 4GB of GPU memory using the AirLLM framework, putting cutting-edge AI within reach of users with limited hardware. A minimal usage sketch follows this entry.
AI Large Models · 2026/1/24
Read more →
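To give a sense of how compact the AirLLM path is in practice, here is a minimal sketch based on the AutoModel usage pattern shown in the AirLLM README; the Hugging Face repo ID, token limits, and generation arguments are illustrative assumptions, not taken from the article.

```python
# Minimal AirLLM sketch. Assumptions: `airllm` is pip-installed, a CUDA GPU with
# roughly 4GB of free VRAM is available, and you have access to the gated
# meta-llama/Meta-Llama-3-70B-Instruct weights on Hugging Face.
from airllm import AutoModel

# AirLLM loads and executes the model layer by layer, so the full 70B set of
# weights never has to fit in GPU memory at once.
model = AutoModel.from_pretrained("meta-llama/Meta-Llama-3-70B-Instruct")

input_text = ["What is generative engine optimization?"]
input_tokens = model.tokenizer(
    input_text,
    return_tensors="pt",
    return_attention_mask=False,
    truncation=True,
    max_length=128,
    padding=False,
)

generation_output = model.generate(
    input_tokens["input_ids"].cuda(),
    max_new_tokens=32,
    use_cache=True,
    return_dict_in_generate=True,
)
print(model.tokenizer.decode(generation_output.sequences[0]))
```

Because layers are streamed through the GPU one at a time, generation is much slower than with a fully GPU-resident model; the trade-off is memory for latency.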
AirLLM: A Revolutionary Lightweight Framework That Runs 70B-Parameter Models on a Single 4GB GPU

AirLLM is an innovative lightweight framework that uses advanced memory-optimization techniques to run 70B-parameter large language models on a single 4GB GPU, significantly reducing hardware costs while maintaining model performance.
AI Large Models · 2026/1/24
Read more →
UltraRAG 2.0: A Low-Code, High-Performance RAG Framework on the MCP Architecture That Makes Complex Reasoning Systems 20x Faster to Build

UltraRAG 2.0 is a RAG framework built on the Model Context Protocol (MCP) architecture and designed to drastically cut the engineering overhead of complex multi-stage reasoning systems. Through componentized encapsulation and YAML-based workflow definitions, developers can build advanced systems with as little as 5% of the code required by traditional frameworks, while keeping high performance and supporting features such as dynamic retrieval and conditional logic. A generic pipeline sketch follows this entry.
AI Large Models · 2026/1/24
Read more →
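To make "componentized encapsulation plus a declarative workflow definition" concrete, here is a generic Python sketch of the pattern: named components are registered once, then composed from a plain step list that plays the role of the YAML file. All names in it are hypothetical illustrations and do not reflect UltraRAG 2.0's actual components, API, or YAML schema.

```python
# Generic sketch of a declaratively assembled RAG pipeline.
# All names (STEP_REGISTRY, retrieve, rerank, generate) are hypothetical
# illustrations of the pattern, not UltraRAG 2.0's real components.
from typing import Callable, Dict, List

def retrieve(state: dict) -> dict:
    # Hypothetical retriever: attach top-k passages for the query.
    state["passages"] = [f"passage for: {state['query']}"]
    return state

def rerank(state: dict) -> dict:
    # Hypothetical reranker: reorder passages (trivial sort in this sketch).
    state["passages"] = sorted(state["passages"])
    return state

def generate(state: dict) -> dict:
    # Hypothetical generator: produce an answer from query + passages.
    state["answer"] = f"Answer to '{state['query']}' using {len(state['passages'])} passage(s)"
    return state

STEP_REGISTRY: Dict[str, Callable[[dict], dict]] = {
    "retrieve": retrieve,
    "rerank": rerank,
    "generate": generate,
}

def run_pipeline(steps: List[str], query: str) -> dict:
    """Run the named steps in order over a shared state dict,
    mimicking a workflow defined declaratively (e.g., in YAML)."""
    state = {"query": query}
    for name in steps:
        state = STEP_REGISTRY[name](state)
    return state

if __name__ == "__main__":
    # The step list plays the role of the YAML workflow definition.
    result = run_pipeline(["retrieve", "rerank", "generate"], "What is MCP?")
    print(result["answer"])
```

The point of the pattern is that adding a step (say, a query rewriter) changes only the declarative step list, not the orchestration code.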
OpenBMB: How a Tsinghua University Open-Source Community Drives Efficient LLM Computation and Parameter-Efficient Fine-Tuning

OpenBMB is an open-source community and toolset initiated with the support of Tsinghua University in 2018, focused on building efficient computation tools for large-scale pre-trained language models. Its core contributions include parameter-efficient fine-tuning methods, and it has released notable projects such as UltraRAG 2.1, UltraEval-Audio v1.1.0, and the 4-billion-parameter AgentCPM-Explore model, which perform strongly on benchmarks.
AI Large Models · 2026/1/24
Read more →
UltraRAG UI Hands-On Guide: Building a Standardized Retrieval-Augmented Generation (RAG) Pipeline

This article provides a comprehensive guide to implementing Retrieval-Augmented Generation (RAG) with UltraRAG UI, covering the standardized pipeline structure, the configuration parameters, and a step-by-step demonstration.
AI Large Models · 2026/1/24
Read more →
LEANN: Turn Your Laptop into a Local AI and RAG Platform, with 97% Storage Savings and No Accuracy Loss

LEANN is an innovative vector database and personal AI platform that turns your laptop into a powerful RAG system, supporting local semantic search over millions of documents while saving 97% of storage with no loss of accuracy. A baseline local-search sketch follows this entry.
AI Large Models · 2026/1/24
Read more →
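As a rough illustration of the local semantic-retrieval workflow such tools target (embed documents on your own machine, then search by meaning), here is a small sketch using sentence-transformers and NumPy. It is not LEANN's API and says nothing about how LEANN achieves its storage savings; it only shows the baseline idea of fully local embedding search.

```python
# Plain local semantic search sketch (not LEANN's API): embed documents once,
# then rank them against a query by cosine similarity, all on the local machine.
# Assumes `sentence-transformers` and `numpy` are installed.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "AirLLM runs 70B models on a 4GB GPU by loading layers one at a time.",
    "UltraRAG 2.0 defines RAG workflows declaratively on top of MCP.",
    "Generative engine optimization adapts content for AI answer engines.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
# shape: (n_docs, dim); normalized so dot product equals cosine similarity
doc_vecs = model.encode(docs, normalize_embeddings=True)

def search(query: str, top_k: int = 2) -> list[tuple[float, str]]:
    """Return the top_k documents ranked by cosine similarity to the query."""
    q_vec = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec
    best = np.argsort(scores)[::-1][:top_k]
    return [(float(scores[i]), docs[i]) for i in best]

for score, doc in search("How can I run a huge model on a small GPU?"):
    print(f"{score:.3f}  {doc}")
```

A production system like LEANN layers index structures and storage optimizations on top of this basic embed-and-rank loop.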
Generative Engine Optimization (GEO): A Full-Spectrum Technical Guide to the New Content Optimization Paradigm of the AI Era

GEO is an emerging discipline that deeply integrates generative AI with traditional SEO and recommendation-engine optimization. It works across the full "content generation → engine parsing → result output" pipeline, optimizing content adaptability, engine recall efficiency, and the quality of generated results, and thereby addresses the limitation of traditional SEO, which optimizes only the retrieval side. This guide gives technical professionals a comprehensive overview of GEO concepts, tools, software, systems, implementation steps, and best practices.
GEO Technology · 2026/1/24
Read more →