
How to Build Type-Safe LLM Agents: A Guide to llm-exe, a Modular TypeScript Library

2026/2/13
AI Summary (BLUF)

llm-exe is a modular TypeScript library for building type-safe LLM agents and AI functions with full TypeScript support, a provider-agnostic architecture, and production-ready features like automatic retries and schema validation. It enables developers to create composable executors, powerful parsers, and autonomous agents while allowing one-line provider switching between OpenAI, Anthropic, Google, xAI, and others.

Introduction: The Current State and Pain Points of LLM Development

Every LLM project starts like this: debugging JSON errors, writing boilerplate retries, juggling timeouts, and praying your parse didn’t break. It sucks.

Developers often face a series of common, low-level challenges when integrating Large Language Models (LLMs) into their applications. These include:

  • Crossed-fingers JSON parsing (JSON.parse() with fingers crossed)
  • Complete lack of type safety (everything is typed any)
  • Manual validation for every response
  • Vendor lock-in (all this, and you only support one provider)

// Every LLM project starts like this...
const response = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: makePrompt(data) }],
  response_format: { type: "json_object" },
});
const text = response.choices[0].message.content;
const parsed = JSON.parse(text); // 🤞 hope it's valid JSON

// Type safety? lol?
const category = parsed.category; // any
const items = parsed.items; // undefined? array? who knows

// Oh right, need to validate this somehow
if (!["bug", "feature", "question"].includes(category)) {
  // Model hallucinated a new category. Now what?
}

// TODO: Add retries
// TODO: Add tests
// TODO: Switch to Claude when this fails

Core Idea: Turning LLM Calls into Reliable Functions

What if LLM Calls Were Just Normal Functions? What if every LLM call was as reliable as calling a regular function? Type-safe inputs, validated outputs, built-in retries. Just async functions that happen to be powered by AI.

llm-exe is built on this very idea. It aims to transform LLM interactions from fragile, string-based processes into robust, composable, and type-safe functional operations.

Its goal is to provide:

  • Real TypeScript types, no more any/unknown
  • Validated outputs that match your schema
  • Just import and call, like any other function
  • One-line provider switching

import {
  useLlm,
  createChatPrompt,
  createParser,
  createLlmExecutor,
} from "llm-exe";

// Define once, use everywhere
async function llmClassifier(text: string) {
  return createLlmExecutor({
    llm: useLlm("openai.gpt-4o-mini"),
    prompt: createChatPrompt<{ text: string }>(
      "Classify this as 'bug', 'feature', or 'question': {{text}}"
    ),
    parser: createParser("stringExtract", {
      enum: ["bug", "feature", "question"],
    }),
  }).execute({ text });
}

// It's just a typed function now
const category = await llmClassifier(userInput);
// category is typed as "bug" | "feature" | "question" ✨

Key Features

1. Type Safety First

Full TypeScript support with inferred types throughout your LLM chains. No more guessing what data you're working with.

From the input parameters of the prompt template to the output structure of the parser, llm-exe leverages TypeScript's generics and type inference to ensure type safety throughout the entire data flow. This means you get full IDE autocompletion and type checking during development, shifting runtime errors to compile time.
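
To make the type flow concrete, here is a minimal sketch that reuses only the createChatPrompt generic and the "stringExtract" parser options already shown in this article; the sentiment classifier and its labels are illustrative assumptions, not an official library example.

import {
  useLlm,
  createChatPrompt,
  createParser,
  createLlmExecutor,
} from "llm-exe";

// The generic on createChatPrompt types the template inputs,
// and the parser determines the inferred output type.
const sentiment = createLlmExecutor({
  llm: useLlm("openai.gpt-4o-mini"),
  prompt: createChatPrompt<{ review: string }>(
    "Classify this review as 'positive', 'negative', or 'neutral': {{review}}"
  ),
  parser: createParser("stringExtract", {
    enum: ["positive", "negative", "neutral"],
  }),
});

// sentiment.execute({}) would fail to compile: 'review' is required.
const label = await sentiment.execute({ review: "Great docs, easy setup." });
// label is narrowed to "positive" | "negative" | "neutral", not string or any.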

2. Provider-Agnostic

Same code works with OpenAI, Anthropic, Google, xAI, Ollama, Bedrock, and more. Switch with one line.

llm-exe provides a unified abstraction layer, freeing you from the API specifics of any particular LLM vendor. Your core business logic remains unchanged, allowing you to easily switch the underlying model based on cost, performance, or feature requirements.

// Change ONE line to switch providers
const llm = useLlm("openai.gpt-4o");
// const llm = useLlm("anthropic.claude-3-5-sonnet");
// const llm = useLlm("google.gemini-2.0-flash");
// const llm = useLlm("xai.grok-2");
// const llm = useLlm("ollama.llama-3.3-70b");

// Everything else stays exactly the same ✨

3. Production-Ready Out of the Box

Built-in retries, timeouts, error handling, and schema validation. Battle-tested with 100% test coverage.

No need to manually write complex retry logic or error handling code. llm-exe's Executors come with these essential robustness features for production environments built-in.

  • Automatic retries and timeouts
  • Schema validation that throws on mismatch
  • Hooks for logging and monitoring

const analyst = createLlmExecutor(
  {
    llm: useLlm("openai.gpt-4o"),
    prompt: createChatPrompt<{ data: any }>(
      "Analyze this data and return insights as JSON: {{data}}"
    ),
    parser: createParser("json", {
      schema: {
        insights: { type: "array", items: { type: "string" } },
        score: { type: "number", min: 0, max: 100 },
      },
    }),
  },
  {
    // Built-in retry, timeout, hooks
    maxRetries: 3,
    timeout: 30000,
    hooks: {
      onSuccess: (result) => logger.info("Analysis complete", result),
      onError: (error) => logger.error("Analysis failed", error),
    },
  }
);

// Guaranteed to match schema or throw
const { insights, score } = await analyst.execute({ data: salesData });

4. Composable Executors

Chain executors like building blocks. Each piece does one thing well and combines naturally.

The core abstraction of llm-exe is the Executor. A standard LLM Executor consists of three interchangeable parts: Prompt + LLM Model + Parser. This design follows the Single Responsibility Principle, allowing each part to be developed, tested, and replaced independently.

// Each piece does one thing well
const summarizer = createLlmExecutor({
  llm: useLlm("openai.gpt-4o-mini"),
  prompt: createChatPrompt("Summarize: {{text}}"),
  parser: createParser("string"),
});

const translator = createLlmExecutor({
  llm: useLlm("anthropic.claude-3-5-haiku"),
  prompt: createChatPrompt("Translate to {{language}}: {{text}}"),
  parser: createParser("string"),
});

// Combine them naturally
const summary = await summarizer.execute({ text: article });
const spanish = await translator.execute({
  text: summary,
  language: "Spanish",
});

5. Powerful Parsers

Extract exactly what you need - JSON, lists, regex, markdown blocks. Guaranteed output format or throw.

The Parser is key to ensuring structured output. llm-exe provides a variety of powerful parsers that can force the LLM's unstructured text output into structured data that your application can use directly.
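
As a concrete illustration, here is a small sketch that combines the "json" parser and schema options from the production-readiness example with a typed prompt; the tag-extraction executor and its schema are illustrative assumptions, not a documented library example.

import {
  useLlm,
  createChatPrompt,
  createParser,
  createLlmExecutor,
} from "llm-exe";

// The "json" parser with a schema forces the model's text output into
// structured data, or throws if the response doesn't match.
const extractTags = createLlmExecutor({
  llm: useLlm("openai.gpt-4o-mini"),
  prompt: createChatPrompt<{ text: string }>(
    "Extract up to 5 topic tags from this text and return them as JSON: {{text}}"
  ),
  parser: createParser("json", {
    schema: {
      tags: { type: "array", items: { type: "string" } },
    },
  }),
});

// Downstream code never sees free-form model text, only validated structure.
const { tags } = await extractTags.execute({
  text: "llm-exe adds retries, schema validation, and typed parsers.",
});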

6. Build Agents from Your Existing Functions

Build autonomous agents with built-in state management, tool calling, and dialogue tracking. Turn any function into an agent capability.

llm-exe allows you to easily wrap any existing asynchronous function (like database queries, API calls, business logic) into a "tool" that an LLM agent can call. Even if the model itself doesn't support native function calling, you can implement agent behavior through prompt engineering.

  • Works with ALL models, even without native function calling
  • The LLM plans what to do, you control execution
  • Build agents without complex frameworks
  • You control the execution flow and security

import {
  createCallableExecutor,
  useExecutors,
  createLlmExecutor,
  createChatPrompt,
  createParser,
  useLlm,
} from "llm-exe";

// Your existing code becomes LLM-callable
const queryDB = createCallableExecutor({
  name: "query_database",
  description: "Query our PostgreSQL database",
  input: "SQL query to execute",
  handler: async ({ input }) => {
    const results = await db.query(input); // Your actual DB!
    return { result: results.rows };
  },
});

const sendEmail = createCallableExecutor({
  name: "send_email",
  description: "Send email via our email service",
  input: "JSON with 'to', 'subject', 'body'",
  handler: async ({ input }) => {
    const { to, subject, body } = JSON.parse(input);
    await emailService.send({ to, subject, body }); // Real emails!
    return { result: "Email sent successfully" };
  },
});

// Let the LLM use your tools
const assistant = createLlmExecutor({
  llm: useLlm("openai.gpt-4o"),
  prompt: createChatPrompt<{ request: string }>(`Help the user with their request: {{request}}
You can query the database and send emails.`),
  parser: createParser("json"),
});

const tools = useExecutors([queryDB, sendEmail]);

// LLM decides what to do and calls YOUR functions
const plan = await assistant.execute({
  request: "Send our top 5 customers a thank you email",
});
// LLM might return: { action: "query_database", input: "SELECT email FROM customers ORDER BY revenue DESC LIMIT 5" }

const result = await tools.callFunction(plan.action, plan.input);

Developer Testimonials

Why Developers Love llm-exe

"Finally, LLM calls that don't feel like stringly-typed nightmares."

"Switched from OpenAI to Claude in literally one line. Everything just worked."
— Tech Lead, Series B Fintech

"The type safety alone saved us hours of debugging. The composability changed how we build."
— Principal Engineer, Fortune 500

"As an AI, I shouldn't play favorites... but being able to switch providers with one line means developers can always choose the best model for the job. Even if it's not me."
— Claude, Anthropic

Conclusion: Start Building Something Incredible

Ready to Build Something Incredible? Stop wrestling with LLM APIs. Start shipping AI features that actually work.

llm-exe is more than just a library; it represents a paradigm shift in building LLM applications—moving from fragile, verbose scripts to robust, maintainable, type-safe engineered code. By providing modular, composable building blocks, it allows developers to focus on business logic and innovation, rather than the complexities of underlying APIs.

If you're also tired of struggling with JSON parsing, type casting, and vendor lock-in, llm-exe might be the solution you've been looking for.
