GEO

Category: AI Large Models

What Is Papr Memory? How an AI Memory Layer Enables Multi-hop RAG | Geoz.com.cn

Papr Memory is an advanced memory layer for AI systems that enables multi-hop RAG with state-of-the-art accuracy through real-time data ingestion, smart chunking, entity extraction, and dynamic knowledge graph creation. It supports various data sources and provides intelligent retrieval with query expansion, hybrid search, and contextual reranking.
AI Large Models · 2026/2/13
Read more →
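The multi-hop retrieval described above can be illustrated with a toy knowledge graph: chaining two edges answers a question that no single stored fact states. The graph data and hop logic below are purely illustrative, not Papr Memory's actual API.

```python
# Toy sketch of multi-hop retrieval over a knowledge graph,
# illustrating the idea behind multi-hop RAG (not Papr Memory's real API).

from collections import defaultdict

# Knowledge graph as (subject, relation, object) triples, as might be
# produced by entity extraction over ingested documents.
TRIPLES = [
    ("Papr Memory", "is_a", "memory layer"),
    ("memory layer", "serves", "AI agents"),
    ("AI agents", "use", "multi-hop RAG"),
]

graph = defaultdict(list)
for subj, rel, obj in TRIPLES:
    graph[subj].append((rel, obj))

def multi_hop(start: str, hops: int) -> list[list[str]]:
    """Return all relation paths of exactly `hops` edges starting at `start`."""
    paths = [[start]]
    for _ in range(hops):
        nxt = []
        for path in paths:
            for rel, obj in graph[path[-1]]:
                nxt.append(path + [rel, obj])
        paths = nxt
    return paths

# Two hops from "Papr Memory" chain facts that no single triple states.
print(multi_hop("Papr Memory", 2))
```

A real memory layer would score and rerank candidate paths rather than enumerate them exhaustively, but the hop-chaining structure is the same.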
How to Build a Knowledge Graph with LangExtract: A 2025 Hands-on Guide to Google's Open-Source Tool | Geoz.com.cn

LangExtract is Google's open-source programmatic extraction tool that transforms unstructured text into structured, traceable data with character-level offsets. It enables efficient long-document processing, multi-round extraction for recall, and direct structured output, reducing traditional RAG overhead. This guide demonstrates building a knowledge graph chatbot using Streamlit, Agraph, and LangExtract with dynamic few-shot template selection.
AI Large Models · 2026/2/12
Read more →
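The character-level offsets mentioned above are what make extractions traceable: every field carries the exact span it came from, so a UI can highlight it in the source. The sketch below mimics that grounding idea with a plain regex extractor; it is not LangExtract's actual API.

```python
# Toy sketch of offset-grounded extraction: every extracted field carries
# character offsets into the source text, mimicking the traceability that
# LangExtract provides (this is NOT LangExtract's actual API).

import re
from dataclasses import dataclass

@dataclass
class Extraction:
    extraction_class: str
    text: str
    start: int  # character offset of the span in the source
    end: int

def extract_dates(source: str) -> list[Extraction]:
    """Extract ISO-style dates with exact source offsets for highlighting."""
    return [
        Extraction("date", m.group(), m.start(), m.end())
        for m in re.finditer(r"\d{4}-\d{2}-\d{2}", source)
    ]

doc = "Released on 2025-07-30, updated 2025-08-15."
spans = extract_dates(doc)
# The offsets round-trip: slicing the source at them recovers each extraction.
assert all(doc[s.start:s.end] == s.text for s in spans)
print([(s.text, s.start, s.end) for s in spans])
```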
How Does Tencent Cloud BI's ChatBI Assistant Enable Conversational Data Analysis? A 2026 Enterprise BI Guide | Geoz.com.cn

Tencent Cloud BI provides end-to-end business intelligence capabilities from data source integration to visualization, featuring ChatBI, an AI-powered analytics agent that enables conversational data analysis, interpretation, and optimization recommendations.
AI Large Models · 2026/2/10
Read more →
LangExtract in Practice: An Enterprise-Grade Data Extraction Solution for 2025 | Geoz.com.cn

LangExtract is Google's official open-source Python library designed for extracting structured data (JSON, Pydantic objects) from text, PDFs, and invoices. Unlike standard prompt engineering, it's built for enterprise-grade extraction with three core advantages: precise grounding (mapping fields to source coordinates), schema enforcement (ensuring output matches Pydantic definitions), and model agnosticism (compatible with Gemini, DeepSeek, OpenAI, and LlamaIndex). This guide provides practical insights for Chinese developers on local configuration, cost optimization, and handling long documents.
AI Large Models · 2026/2/9
Read more →
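Schema enforcement, the second advantage listed above, means model output is rejected unless its fields and types match a declared structure. LangExtract validates against Pydantic models; the stdlib-only sketch below stands in for that idea with a dataclass and explicit type checks (the `Invoice` schema is a made-up example).

```python
# Minimal sketch of schema-enforced extraction output, using only the
# standard library. LangExtract itself validates against Pydantic models;
# here a dataclass plus a type check stands in for that idea.

from dataclasses import dataclass, fields

@dataclass
class Invoice:
    vendor: str
    total: float
    currency: str

def enforce_schema(raw: dict) -> Invoice:
    """Reject model output whose fields or types don't match the schema."""
    typed = {}
    for f in fields(Invoice):
        if f.name not in raw:
            raise ValueError(f"missing field: {f.name}")
        value = raw[f.name]
        if not isinstance(value, f.type):
            raise TypeError(f"{f.name}: expected {f.type.__name__}")
        typed[f.name] = value
    return Invoice(**typed)

# Well-formed model output passes; a malformed one would raise.
ok = enforce_schema({"vendor": "ACME", "total": 99.5, "currency": "EUR"})
print(ok)
```

Pydantic adds coercion, nested models, and richer error reports on top of this basic contract, which is why production pipelines prefer it.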
How to Extract Structured Information from Text: A 2024 Guide to the LangExtract Library | Geoz.com.cn

LangExtract is a Python library powered by large language models (like Gemini) that extracts structured information from unstructured text with precise source localization and interactive visualization capabilities. It offers reliable structured output, long-document optimization, domain adaptability, and is open-source under the Apache 2.0 license.
AI Large Models · 2026/2/9
Read more →
What Is RLHF? A Detailed Look at Reinforcement Learning from Human Feedback | Geoz.com.cn

Reinforcement Learning from Human Feedback (RLHF) is a machine learning technique that optimizes AI agent performance by training a reward model on direct human feedback. It is particularly effective for tasks with complex, ill-defined, or hard-to-specify objectives, such as improving the relevance, accuracy, and ethics of large language models (LLMs) in chatbot applications. RLHF typically involves four phases: model pre-training, supervised fine-tuning, reward model training, and policy optimization, with proximal policy optimization (PPO) as a key algorithm. While RLHF has produced remarkable results in training AI agents for complex tasks from robotics to NLP, it faces limitations including the high cost of human preference data, the subjectivity of human opinions, and risks of overfitting and bias.
AI Large Models · 2026/2/8
Read more →
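The reward-model phase described above can be sketched in a few lines: fit a scalar reward so that human-preferred responses score higher than rejected ones, using the Bradley-Terry pairwise loss. The one-dimensional "helpfulness" feature and the training pairs below are invented for illustration; real systems use neural reward models and then run PPO against them.

```python
# Toy reward-model step from RLHF's third phase: fit a scalar reward so that
# human-preferred responses outscore rejected ones (Bradley-Terry loss,
# pure Python). Real pipelines use neural rewards and PPO afterwards.

import math

# Each pair: (features of preferred response, features of rejected response).
# Here a response is reduced to a single hand-crafted "helpfulness" feature.
pairs = [([0.9], [0.2]), ([0.7], [0.1]), ([0.8], [0.4])]

w = [0.0]  # reward model weight, initially indifferent

def reward(w, x):
    """Linear reward: dot product of weights and response features."""
    return sum(wi * xi for wi, xi in zip(w, x))

# Gradient ascent on log sigmoid(r_preferred - r_rejected).
for _ in range(200):
    for good, bad in pairs:
        margin = reward(w, good) - reward(w, bad)
        grad = 1.0 - 1.0 / (1.0 + math.exp(-margin))  # d/dm log sigmoid(m)
        for i in range(len(w)):
            w[i] += 0.5 * grad * (good[i] - bad[i])

# The learned reward now ranks every preferred response above its rejected one.
print(all(reward(w, g) > reward(w, b) for g, b in pairs))
```

In the subsequent policy-optimization phase, PPO maximizes this learned reward while a KL penalty keeps the policy close to the supervised fine-tuned model.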
Cognee In-Depth Review: How an Open-Source AI Memory Engine Reshapes Knowledge Management and LLM Reasoning

Cognee is an innovative open-source AI memory engine that combines knowledge graphs and vector storage technologies to provide dynamic memory capabilities for large language models (LLMs) and AI agents. This comprehensive evaluation covers its functional features, installation and deployment, use cases, and commercial value.
AI Large Models · 2026/2/6
Read more →
Cognee: An Open-Source AI Memory Engine with 92.5% Retrieval Accuracy, Reshaping AI Agent Memory

Cognee is an open-source AI memory platform that transforms fragmented data into structured, persistent memory for AI agents through its ECL pipeline and dual-database architecture, achieving 92.5% answer relevance compared to traditional RAG's 5%.
AI Large Models · 2026/2/6
Read more →
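The Extract-Cognify-Load (ECL) pipeline and dual-database design mentioned above can be sketched as three tiny stages feeding a graph store and a vector store. Everything below, including the crude entity heuristic and the bag-of-characters "embedding", is illustrative only and not Cognee's actual API.

```python
# Toy Extract-Cognify-Load (ECL) pipeline in the spirit of the summary:
# extract candidate entities, "cognify" them into graph edges, and load both
# a graph store and a toy vector store. Illustrative only, not Cognee's API.

import re

def extract(text: str) -> list[str]:
    """Extract: pull capitalized tokens as candidate entities."""
    return re.findall(r"[A-Z][a-zA-Z]+", text)

def cognify(entities: list[str]) -> set[tuple[str, str]]:
    """Cognify: link entities that co-occur in the same text chunk."""
    return {(a, b) for a in entities for b in entities if a < b}

def load(edges, graph_store: set, vector_store: dict):
    """Load: persist edges to the graph and a crude bag-of-chars 'embedding'."""
    graph_store |= edges
    for a, b in edges:
        for node in (a, b):
            vector_store[node] = sorted(set(node.lower()))

graph, vectors = set(), {}
ents = extract("Cognee gives Agents persistent Memory.")
load(cognify(ents), graph, vectors)
print(sorted(graph))
```

The dual stores serve different queries: the graph answers relational questions (who connects to what), while the vectors support similarity search; combining both is what the summary credits for the relevance gain over plain vector RAG.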