
LangExtract Hands-On Guide: Enterprise-Grade Data Extraction for 2025 | Geoz.com.cn

2026/2/9
AI Summary (BLUF)

LangExtract is Google's officially open-sourced Python library for extracting structured data (JSON, Pydantic objects) from text, PDFs, and invoices. Unlike generic prompt engineering, it is built for enterprise-grade extraction, with three core advantages: precise grounding (fields map back to source-text coordinates), schema enforcement (output is guaranteed to match Pydantic definitions), and model agnosticism (compatible with Gemini, DeepSeek, OpenAI, and LlamaIndex). Drawing on real-world project experience, this guide covers environment configuration for developers in China, API cost optimization, and long-document handling.

Introduction

LangExtract is an officially open-sourced Python library from Google, designed for extracting structured data (JSON, Pydantic objects) from text, PDFs, and invoices. Unlike generic prompt engineering, LangExtract is built for enterprise-grade data extraction, with three core advantages:

  1. Precise grounding: every extracted field maps back to exact coordinates in the source text, making manual verification easy.
  2. Schema enforcement: output is guaranteed to strictly follow the Pydantic-defined data structure, eliminating hallucinated fields.
  3. Model agnosticism: although built natively for Google Gemini, it works seamlessly with DeepSeek, OpenAI, and the LlamaIndex ecosystem.

It is a production-grade replacement for fragile regular expressions and unstable prompts.

Why is this guide needed? While the official documentation is comprehensive, it is often vague about configuration in mainland China, adapting local models (such as DeepSeek/Llama), and production deployment. This guide is based on real-world project experience, covering network configuration, API cost optimization, and long-document handling, to help you avoid common pitfalls.

Quick Start

1. Install

Install via pip. Requires Python 3.9+.

pip install langextract

Acceleration in China
If the download is slow, you can use the Tsinghua mirror:

pip install -i https://pypi.tuna.tsinghua.edu.cn/simple langextract

2. Configure the API Key

By default, LangExtract uses Google Gemini. You need to obtain a key from Google AI Studio.

export LANGEXTRACT_API_KEY="your-api-key-here"

🔑 How do I get the LANGEXTRACT_API_KEY?
LANGEXTRACT_API_KEY is simply a Google Gemini API key. Visit Google AI Studio to request one for free, then assign it to the LANGEXTRACT_API_KEY environment variable.
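As a small sketch (plain Python, standard library only; the helper name resolve_api_key is my own, not part of LangExtract), the key can also be resolved in code and passed explicitly through the api_key parameter that lx.extract accepts:

```python
import os

def resolve_api_key(explicit_key=None):
    """Return an API key, preferring an explicit argument over the environment."""
    key = explicit_key or os.environ.get("LANGEXTRACT_API_KEY")
    if not key:
        raise RuntimeError(
            "No API key found: pass one explicitly or set LANGEXTRACT_API_KEY"
        )
    return key

# Simulate the environment variable being set, then resolve it
os.environ["LANGEXTRACT_API_KEY"] = "your-api-key-here"
print(resolve_api_key())  # → your-api-key-here
```

Resolving the key in one place makes it easier to switch providers later, since the OpenAI and DeepSeek examples in this guide also accept an explicit api_key argument.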

3. First Extraction Example (Demo)

Extract character information from a short piece of text. LangExtract requires at least one few-shot example to guide the model.

import langextract as lx

# Define the extraction prompt
prompt = "Extract characters and their emotional states from the text."

# Provide a few-shot example to guide the model
examples = [
    lx.data.ExampleData(
        text="Romeo: But soft! What light through yonder window breaks?",
        extractions=[
            lx.data.Extraction(
                extraction_class="character",
                extraction_text="Romeo",
                attributes={"emotional_state": "wonder"}
            ),
        ]
    )
]

# Input text
text = "Juliet gazed at the stars, longing for Romeo."

# Run the extraction (uses a Gemini Flash model by default)
result = lx.extract(
    text_or_documents=text,
    prompt_description=prompt,
    examples=examples,
    model_id="gemini-2.5-flash",
)

print(result.extractions)
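Grounding means each extraction carries character offsets into the source text. The sketch below illustrates the idea using plain dicts as stand-ins for the library's result objects (the real objects expose offsets through attributes whose exact names may differ, e.g. a char_interval):

```python
# Source text and extractions with character offsets, as grounding provides
text = "Juliet gazed at the stars, longing for Romeo."
extractions = [
    {"class": "character", "text": "Juliet", "start": 0, "end": 6},
    {"class": "character", "text": "Romeo", "start": 39, "end": 44},
]

# Grounded output can be checked mechanically: the extracted text
# must equal the source slice at the reported offsets.
for e in extractions:
    assert text[e["start"]:e["end"]] == e["text"]
    print(f'{e["class"]}: "{e["text"]}" at [{e["start"]}, {e["end"]})')
```

This mechanical check is what makes human review and audit trails practical: every field is verifiable against the original document rather than taken on faith.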

4. Interactive Visualization 📊

This is LangExtract's killer feature: it generates an interactive HTML report for easy manual review.

# 1. Save the results as JSONL
lx.io.save_annotated_documents([result], output_name="extraction_results.jsonl", output_dir=".")

# 2. Generate the interactive HTML report
html_content = lx.visualize("extraction_results.jsonl")
with open("visualization.html", "w") as f:
    f.write(html_content.data if hasattr(html_content, 'data') else html_content)

Framework Integrations

LangExtract slots seamlessly into an existing AI tech stack.

LlamaIndex Integration

Combine LlamaIndex's RAG (Retrieval-Augmented Generation) capabilities with LangExtract's precise extraction: use LlamaIndex to retrieve the relevant document chunks, then produce cleaned, schema-conformant JSON with LangExtract.
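The division of labor can be sketched as below. The retrieve function is a naive keyword scorer standing in for a real LlamaIndex query engine, and run_extract stands in for a call to lx.extract; both names are my own, not library APIs:

```python
def retrieve(query, chunks, top_k=2):
    """Naive keyword retriever standing in for a LlamaIndex query engine."""
    scored = [(sum(w in c.lower() for w in query.lower().split()), c) for c in chunks]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [c for score, c in scored[:top_k] if score > 0]

def run_extract(text):
    """Stand-in for lx.extract: a real call would return schema-validated records."""
    return {"source_text": text, "records": []}

chunks = [
    "Invoice #42, total due: $1,300.",
    "The meeting notes cover Q3 planning.",
    "Invoice #57, total due: $250.",
]

# 1. Retrieve only the relevant chunks; 2. extract structured data from them
relevant = retrieve("invoice total", chunks)
results = [run_extract(c) for c in relevant]
```

In a real pipeline you would swap retrieve for something like a LlamaIndex retriever over your index, and run_extract for lx.extract with your prompt and few-shot examples.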

LangChain Support

LangExtract can easily be wrapped as a Runnable component in a LangChain pipeline, which is ideal for building complex agents that must reliably "read" documents and populate databases.
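As a sketch of the wrapping idea without importing LangChain, the toy class below mimics the Runnable invoke/pipe interface; in a real pipeline you would instead wrap a function that calls lx.extract in langchain_core.runnables.RunnableLambda:

```python
class ExtractRunnable:
    """Toy Runnable-style wrapper: .invoke(input) -> output, chainable with |."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        return ExtractRunnable(lambda v: other.invoke(self.invoke(v)))

# Stage 1: pretend-extraction (a real version would call lx.extract here)
extract = ExtractRunnable(lambda text: {"chars": len(text), "text": text})
# Stage 2: load the structured record into a "database" (a list here)
db = []
load = ExtractRunnable(lambda record: db.append(record) or record)

pipeline = extract | load
pipeline.invoke("Juliet gazed at the stars.")
```

The pipe composition is what makes the extraction step drop into larger agent graphs: upstream document loaders feed it text, and downstream stages consume the structured record.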

LLM Configuration Guide

How do you connect different models?

Local Models (Ollama) 🏠

Privacy-safe and zero cost; recommended for users in China.

  1. Install Ollama: visit ollama.com to download.
  2. Pull a model: ollama pull gemma2:2b or ollama pull llama3
  3. Start the service: ollama serve

Code configuration:

import langextract as lx

result = lx.extract(
    text_or_documents=text,
    prompt_description=prompt,
    examples=examples,
    model_id="gemma2:2b",  # Automatically selects the Ollama provider
    model_url="http://localhost:11434",
    fence_output=False,
    use_schema_constraints=False
)

OpenAI GPT-4 🧠

Suitable for complex reasoning tasks. Requires installing the extra dependency: pip install langextract[openai]

export OPENAI_API_KEY="sk-..."

import os
import langextract as lx

result = lx.extract(
    text_or_documents=text,
    prompt_description=prompt,
    examples=examples,
    model_id="gpt-4o",  # Automatically selects the OpenAI provider
    api_key=os.environ.get('OPENAI_API_KEY'),
    fence_output=True,
    use_schema_constraints=False
)

Note
OpenAI models require fence_output=True and use_schema_constraints=False, because LangExtract has not yet implemented schema constraints for OpenAI.
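To see why the fenced-output flag matters: without native schema constraints, the model typically returns JSON wrapped in a Markdown code fence, which must be stripped before parsing. A conceptual sketch (my own, not LangExtract's internal implementation):

```python
import json
import re

def strip_fence(raw):
    """Remove a surrounding Markdown code fence (```json ... ```), if present."""
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", raw, flags=re.DOTALL)
    return match.group(1) if match else raw.strip()

# Build a fenced model response (the fence string is assembled from "`" * 3
# so the backticks don't clash with this article's own formatting)
fence = "`" * 3
raw_model_output = fence + 'json\n{"character": "Juliet", "emotion": "longing"}\n' + fence

data = json.loads(strip_fence(raw_model_output))
print(data["character"])  # → Juliet
```

With providers that do support schema constraints (such as Gemini), the model is forced to emit bare, schema-valid JSON, so no fence handling is needed.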

Full Support for Chinese Domestic LLMs (DeepSeek, Doubao, Qwen, etc.) 🔌

LangExtract works with any domestic LLM that is compatible with the OpenAI protocol. Tested and confirmed:

  • DeepSeek (V3, R1)
  • ByteDance Doubao
  • Alibaba Qwen (Tongyi Qianwen)
  • Zhipu AI (GLM-4)
  • MiniMax

DeepSeek

import langextract as lx

result = lx.extract(
    text_or_documents=text,
    prompt_description=prompt,
    examples=examples,
    model_id="deepseek-chat",  # DeepSeek V3/R1
    api_key="your-api-key",
    language_model_params={
        "base_url": "https://api.deepseek.com/v1"
    },
    fence_output=True,
    use_schema_constraints=False
)

Qwen (Tongyi Qianwen)

import langextract as lx

result = lx.extract(
    text_or_documents=text,
    prompt_description=prompt,
    examples=examples,
    model_id="qwen-turbo",   # or qwen-plus, qwen-max
    api_key="sk-...",        # Alibaba Cloud DashScope API key
    language_model_params={
        "base_url": "https://dashscope.aliyuncs.com/compatible-mode/v1"
    },
    fence_output=True,
    use_schema_constraints=False
)

Other Models (OpenAI-Compatible)

# Works for Doubao, MiniMax, GLM-4, etc.
result = lx.extract(
    ...,
    model_id="your-model-id",
    api_key="your-api-key",
    language_model_params={
        "base_url": "https://your-provider-api.com/v1"
    },
    fence_output=True,
    use_schema_constraints=False
)

Scaling to Longer Documents 📚

How do you handle long books or PDFs that exceed the context window? LangExtract has built-in chunking and parallel processing. There is no need to split the text manually; just pass in a URL or the long text directly:

# Example: process the full text of "Romeo and Juliet"
result = lx.extract(
    text_or_documents="https://www.gutenberg.org/files/1513/1513-0.txt",
    prompt_description=prompt,
    examples=examples,
    # Key parameters
    extraction_passes=3,    # Multiple passes to improve recall
    max_workers=20,         # Number of parallel workers; greatly improves throughput
    max_char_buffer=1000    # Controls the context buffer size
)
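Conceptually, what max_char_buffer and the merge step do can be sketched in plain Python (a simplification of the idea, not LangExtract's actual algorithm; extract_from_chunk is a toy stand-in for a per-chunk LLM call): split the text into buffers of bounded size, extract from each chunk, then merge the per-chunk results.

```python
def chunk_text(text, max_chars=1000):
    """Split text into consecutive buffers of at most max_chars characters."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def extract_from_chunk(chunk):
    """Toy stand-in for a per-chunk LLM extraction call: capitalized words."""
    return [w for w in chunk.split() if w.istitle()]

long_text = "Romeo loves Juliet. " * 200          # ~4000 characters
chunks = chunk_text(long_text, max_chars=1000)

# Extract per chunk (LangExtract parallelizes this step via max_workers),
# then merge the per-chunk results into one list.
merged = [item for c in chunks for item in extract_from_chunk(c)]
print(len(chunks), len(merged))
```

A smaller max_char_buffer means more, cheaper LLM calls with tighter context; extraction_passes re-runs the whole process to catch entities missed on earlier passes, trading extra token cost for higher recall.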

LangExtract vs. Competitors

How does LangExtract differ from LlamaExtract or Docling?

Feature             LangExtract 🚀                         LlamaExtract             Docling
Core Focus          Structured data extraction             Document parsing / RAG   Format conversion (PDF to Markdown)
Grounding           ✅ Native support (character-level)     —                        —
Schema Validation   ✅ Strict (Pydantic)                    —                        —
Model Support       Gemini, DeepSeek, any LLM              Primarily OpenAI         Local / cloud
Use Case            Complex extraction needing grounding   Quick RAG setup          Markdown conversion

Frequently Asked Questions (FAQ)

Q: What's the difference between LangExtract and Docling?
A: Docling focuses on parsing PDFs/documents into Markdown and excels at document layout analysis; LangExtract focuses on extracting structured data (such as JSON) from text. You can combine them: parse the document with Docling first, then extract the key data with LangExtract.

Q: Is LangExtract an official Google product?
A: Yes, LangExtract is an officially open-sourced Google library (GitHub: google/langextract). This guide aims to fill the gaps the official documentation leaves around localized deployment and Chinese-language practice, helping developers apply it more effectively.

Q: Can I use DeepSeek or other Chinese domestic LLMs?
A: Absolutely. LangExtract supports any model compatible with the OpenAI interface; just set base_url to the API endpoint of DeepSeek/Doubao/Qwen. Both DeepSeek V3 and R1 work well.

Q: How do I handle long documents (books, long PDFs) that exceed the context window?
A: LangExtract has a built-in chunking mechanism. See the long-document example above: it automatically splits long texts, extracts in parallel or serially, and merges the results, so you never have to write the splitting logic yourself.

Q: Does it support local, private deployment?
A: Yes. Through the Ollama integration you can run models such as Llama 3, Qwen, and Gemma locally. This is completely free, and data never leaves your machine, which makes it ideal for sensitive data such as contracts or medical records.

Q: Is LangExtract free?
A: The LangExtract library itself is 100% open source and free. Costs come solely from the LLM provider you use (e.g., token fees for Google Gemini or OpenAI). With local models via Ollama, it is completely free.

