GEO

Latest Articles

Deep Dive into Retrieval-Augmented Generation (RAG): Principles, Modules, and Applications

RAG (Retrieval-Augmented Generation) is an AI technique that enhances large language models' performance on knowledge-intensive tasks by retrieving relevant information from external knowledge bases and injecting it into the model's prompt. This approach significantly improves answer accuracy, especially for tasks requiring specialized knowledge.
AI Large Models · 2026/1/24
Read more →
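The retrieve-then-generate flow described above can be sketched in a few lines. This is a toy illustration, not a production pipeline: a keyword-overlap scorer stands in for a real embedding index, and all names are illustrative.

```python
def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query and return the top-k.
    A real RAG system would use embeddings and a vector index instead."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, documents):
    """Assemble the retrieved passages into the prompt sent to the LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "RAG retrieves passages from a knowledge base before generation.",
    "Quantization compresses model weights to lower precision.",
]
print(build_prompt("How does RAG use a knowledge base?", docs))
```

The prompt that comes out is what gets sent to the model; answer quality then depends mostly on how well the retriever ranks the passages.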
Clippy: A Nostalgic Desktop App for Running LLMs Locally

Clippy is a desktop application that lets users run various large language models locally on their computers, wrapped in a nostalgic 1990s Microsoft Office-style interface, with offline operation, easy setup, and customizable model support.
LLMs · 2026/1/24
Read more →
LLMs.txt: A New Standard Giving AI Agents Structured Access to Documentation

LLMs.txt and llms-full.txt are specialized document formats that give Large Language Models (LLMs) and AI agents structured access to programming documentation and APIs, particularly useful in Integrated Development Environments (IDEs). The llms.txt format serves as an index file of links with brief descriptions, while llms-full.txt bundles all detailed content into a single file. Key considerations include file-size limits imposed by LLM context windows and integration through MCP servers such as mcpdoc.
LLMs · 2026/1/24
Read more →
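Per the llms.txt proposal, the index file is plain Markdown: an H1 project title, a block-quoted summary, then sections of links with one-line descriptions. A hypothetical index for an imaginary library (all URLs and names illustrative) might look like:

```markdown
# ExampleLib

> ExampleLib is a hypothetical HTTP client library.

## Docs

- [Quickstart](https://example.com/quickstart.md): install and make a first request
- [API reference](https://example.com/api.md): all public classes and functions

## Optional

- [Changelog](https://example.com/changelog.md): release history
```

By contrast, llms-full.txt would inline the full content of every linked page into one file, which is why its total size against the model's context window becomes the limiting factor.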
Browser-Use: AI-Driven Browser Automation That Lets AI Operate the Web Like a Human

Browser-Use is an open-source, AI-powered browser automation platform that enables AI agents to interact with web pages like humans: navigating, clicking, filling forms, and scraping data, driven by natural-language instructions or program logic. It bridges AI models and browsers, supports multiple LLMs, and offers both no-code interfaces and SDKs for technical and non-technical users.
AI Large Models · 2026/1/24
Read more →
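The agent loop behind tools like this can be sketched conceptually: the model sees the current page state, chooses the next action, and the browser executes it, repeating until the goal is met. A toy version with stubbed components (the stub model and fake pages are illustrative, not Browser-Use's API):

```python
def stub_llm(page_text, goal):
    """Stand-in for an LLM call: decide the next browser action from the
    page state. A real agent sends the DOM plus goal to a model and parses
    a structured action out of its reply."""
    if goal.lower() in page_text.lower():
        return ("done", None)
    return ("click", "next-page")

def run_agent(goal, pages):
    """Drive a fake browser until the model reports the goal is satisfied."""
    history, i = [], 0
    while True:
        action, _target = stub_llm(pages[i], goal)
        history.append(action)
        if action == "done":
            return history
        i += 1  # a "click" advances to the next fake page

pages = ["Home page", "Search results", "Pricing: $10/month"]
print(run_agent("pricing", pages))  # ['click', 'click', 'done']
```

The real platform replaces the stubs with an actual browser session and an LLM of your choice, but the observe-decide-act loop is the same shape.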
Building Effective LLM Agents: A Practical Guide to Patterns and Best Practices

This guide from Anthropic shares practical insights on building effective LLM agents, emphasizing simplicity over complexity. It distinguishes workflows (predefined code paths) from agents (dynamic, self-directed systems), presents concrete patterns such as prompt chaining, routing, and parallelization, and advises on when to use frameworks versus direct API calls. The article stresses starting with simple solutions and adding complexity only when necessary, illustrated with real-world customer implementations.
LLMs · 2026/1/24
Read more →
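Of the patterns named above, prompt chaining is the simplest to sketch: each step's output feeds the next prompt, with a programmatic gate between steps. A toy version with a stub standing in for the model call (all names illustrative):

```python
def stub_model(prompt):
    """Stand-in for an LLM API call; a real chain would call a model here."""
    if prompt.startswith("Outline:"):
        return "1. intro 2. body"
    return f"Draft based on [{prompt}]"

def prompt_chain(topic):
    """Prompt chaining: decompose a task into fixed, sequential LLM calls,
    gating on each intermediate result before continuing."""
    outline = stub_model(f"Outline: {topic}")
    if "1." not in outline:  # gate: stop early if the outline looks malformed
        raise ValueError("bad outline")
    return stub_model(f"Write from outline: {outline}")

print(prompt_chain("LLM agents"))
```

Routing and parallelization follow the same spirit: fixed code decides which model call runs next, rather than letting the model direct its own control flow, which is what the article means by a workflow rather than an agent.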
AirLLM: Run 70B Models on a 4GB GPU, No Quantization Needed

AirLLM is a lightweight inference framework for large language models that enables 70B-parameter models to run on a single 4GB GPU without quantization, distillation, or pruning.
LLMs · 2026/1/24
Read more →
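The core trick is layer-sharded inference: only one transformer layer's weights are resident in GPU memory at a time, loaded from disk, applied to the activations, then freed. A toy sketch of that idea in pure Python (no real model; each "layer" just adds its index):

```python
def load_layer(i):
    """Stand-in for reading one layer's weights from disk; this toy
    'layer' just adds its index to the activation."""
    return lambda x: x + i

def layered_inference(x, num_layers):
    """AirLLM-style sharded inference: keep only one layer in memory at a
    time, so peak memory is one layer's weights, not the whole model."""
    for i in range(num_layers):
        layer = load_layer(i)  # load exactly one layer
        x = layer(x)           # run it on the current activations
        del layer              # free it before loading the next
    return x

print(layered_inference(0, 5))  # 0+0+1+2+3+4 = 10
```

The trade-off is latency: repeatedly streaming weights from disk is far slower than keeping the whole model resident, which is why this approach targets memory-constrained setups rather than throughput.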
Running Llama3 70B on a 4GB GPU: AirLLM Puts High-End AI Within Reach

This article shows how to run the powerful open-source Llama3 70B model on just 4GB of GPU memory using the AirLLM framework, putting cutting-edge AI within reach of users with limited hardware.
AI Large Models · 2026/1/24
Read more →
AirLLM: A Revolutionary Lightweight Framework Running 70B Models on a Single 4GB GPU

AirLLM is an innovative lightweight framework that runs 70B-parameter large language models on a single 4GB GPU through advanced memory-optimization techniques, significantly reducing hardware costs while maintaining performance.
AI Large Models · 2026/1/24
Read more →
UltraRAG 2.0: A Low-Code, High-Performance RAG Framework on MCP That Makes Complex Reasoning Systems 20× Faster to Build

UltraRAG 2.0 is a novel RAG framework built on the Model Context Protocol (MCP) architecture, designed to drastically reduce the engineering overhead of implementing complex multi-stage reasoning systems. Through componentized encapsulation and YAML-based workflow definitions, developers can build advanced systems with as little as 5% of the code required by traditional frameworks, while maintaining high performance and supporting features such as dynamic retrieval and conditional logic.
AI Large Models · 2026/1/24
Read more →
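A YAML-based workflow definition of the kind described could look roughly like the following. This is a hypothetical sketch of the idea only, not UltraRAG's actual schema; every field name here is invented, so consult the project documentation for the real syntax.

```yaml
# Hypothetical pipeline sketch: all field names are illustrative only
pipeline:
  - retriever.search        # fetch candidate passages
  - branch:                 # conditional logic on an intermediate result
      if: judge.needs_more_evidence
      then:
        - retriever.search  # dynamic second-round retrieval
  - generator.answer        # final answer from the accumulated context
```

The appeal of such declarative definitions is that the multi-stage control flow lives in configuration while each step maps to a reusable component, which is where the claimed code reduction comes from.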