GEO

Category: LLMS

LLMs.txt: A New Standard for Giving AI Agents Structured Documentation Access

llms.txt and llms-full.txt are specialized document formats that give large language models (LLMs) and AI agents structured access to programming documentation and APIs, which is particularly useful in integrated development environments (IDEs). llms.txt serves as an index file of links with brief descriptions, while llms-full.txt consolidates all detailed content into a single file. Key considerations include file-size limits imposed by LLM context windows and integration through MCP servers such as mcpdoc.
LLMS · 2026/1/24
Read more →
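The index format described in the summary above is simple enough to sketch. Following the published llms.txt convention (an H1 project name, a blockquote summary, then H2 sections containing link lists), a minimal file might look like the following; the project name and URLs here are made up for illustration:

```markdown
# ExampleLib

> ExampleLib is a hypothetical data-processing library. This file indexes its documentation for LLMs and AI agents.

## Docs

- [Quickstart](https://example.com/docs/quickstart.md): Install the library and run a first pipeline
- [API Reference](https://example.com/docs/api.md): Every public class and function

## Optional

- [Changelog](https://example.com/changelog.md): Release history
```

A companion llms-full.txt would inline the content behind each of these links into one file, trading index compactness for fewer fetches, which is why context-window size becomes the limiting factor.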
Building Effective LLM Agents: A Practical Guide to Patterns and Best Practices

This comprehensive guide from Anthropic shares practical insights on building effective LLM agents, emphasizing simplicity over complexity. It distinguishes between workflows (predefined code paths) and agents (dynamic, self-directed systems), presents concrete patterns such as prompt chaining, routing, and parallelization, and offers guidance on when to use frameworks versus direct API calls. The article stresses starting with simple solutions and adding complexity only when necessary, illustrated with real-world examples from customer implementations.
LLMS · 2026/1/24
Read more →
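Of the patterns the guide names, prompt chaining is the easiest to see in code: each step's output becomes the next step's input along a predefined path (a workflow, in the guide's terminology). A minimal sketch, where `call_llm` is a hypothetical stand-in for any chat-completion API, not a real client:

```python
def call_llm(prompt: str) -> str:
    """Placeholder for an LLM API call; a real implementation
    would send `prompt` to a chat-completion endpoint."""
    return f"[response to: {prompt}]"


def prompt_chain(task: str, steps: list[str]) -> str:
    """Run each step's instruction on the previous step's output, in order."""
    result = task
    for step in steps:
        result = call_llm(f"{step}\n\nInput:\n{result}")
    return result


# Each stage refines the previous stage's output along a fixed path.
output = prompt_chain(
    "Write release notes for v2.0",
    ["Draft an outline", "Expand the outline into prose", "Polish the tone"],
)
```

Routing and parallelization differ only in shape: routing picks one branch per input instead of running every step, and parallelization fans steps out concurrently and merges the results.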
AirLLM: Running 70B Models on a 4GB GPU Without Quantization

AirLLM is a lightweight inference framework for large language models that enables 70B-parameter models to run on a single 4GB GPU without quantization, distillation, or pruning.
LLMS · 2026/1/24
Read more →
LLMs.txt Generator API Deprecation Guide: Migration Path for Tools That Generate LLM Training Files from Website Content

This API generates consolidated text files from websites specifically for LLM training and inference. The service is powered by Firecrawl but will be deprecated after June 30, 2025 in favor of the main endpoints.
LLMS · 2026/1/24
Read more →
The Rise of the llms.txt Standard: A New Norm for AI Transparency

A curated directory showcasing companies and products that have adopted the llms.txt standard across sectors such as AI, finance, developer tools, and websites, with token counts indicating the scale of each implementation.
LLMS · 2026/1/24
Read more →
The Semantic Evolution and Contemporary Innovative Uses of the Chinese Term "新型"

This article explores the semantic evolution and contemporary applications of the Chinese term '新型' (new type / novel form), tracing its linguistic development from classical literature to modern technological contexts. The analysis shows how this adjective has become a key descriptor for innovation across fields including artificial intelligence, materials science, and social transformation.
LLMS · 2026/1/24
Read more →
Assessing the Quality of LLM-Generated Content in Academic Papers: Technical Standards and Ethical Guidelines

This article provides a comprehensive overview of quality-assessment standards for LLM-generated content in academic papers, covering technical evaluation criteria, ethical considerations, and practical implementation guidelines for researchers and editors.
LLMS · 2026/1/24
Read more →
Just 250 Malicious Documents Can Compromise a Large Language Model: The Bar for Data Poisoning Is Far Lower Than Expected

A joint study reveals that poisoning a large language model requires only a fixed number of malicious documents (as few as 250), regardless of model size or training-data volume, challenging previous assumptions about the feasibility of such attacks.
LLMS · 2026/1/24
Read more →