Mastra Framework: A TypeScript Solution for Building Enterprise-Grade AI Assistants and Autonomous Agents

2026/1/23
AI Summary (BLUF)

Mastra is a TypeScript framework for building AI assistants and agents, used by major companies for internal automation and customer-facing applications. It features LLM model routing, agents with tools and workflows, RAG knowledge bases, integrations, and evaluation systems, deployable locally or to serverless clouds.

Introduction

Mastra is a TypeScript framework designed for building sophisticated AI assistants and autonomous agents. Adopted by some of the world's largest companies, it powers both internal AI automation tools and customer-facing agents. The framework offers flexible deployment options, allowing you to run it on your local machine, package it within a Node.js server using Hono, or deploy it to serverless cloud environments. This article provides a technical deep dive into Mastra's core architecture and capabilities.

Core Concepts & Architecture

Mastra is built around several key abstractions that work together to create robust, scalable AI applications. Understanding these components is essential for effective development.

LLM Models & The Vercel AI SDK

At its foundation, Mastra utilizes the Vercel AI SDK for model routing. This provides a unified interface for interacting with virtually any Large Language Model (LLM) provider, including OpenAI, Anthropic, and Google Gemini. Developers can select specific models and providers, and crucially, decide whether to stream responses—a key feature for creating responsive user experiences.
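
For example, a minimal sketch of provider routing with streaming, calling the AI SDK directly (this assumes a recent AI SDK version, the @ai-sdk/openai package, and an OPENAI_API_KEY in the environment):

import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";

// Route the request to a specific provider and model, streaming the output.
const result = streamText({
  model: openai("gpt-4o-mini"),
  prompt: "Summarize the Mastra framework in one sentence.",
});

// Consume tokens as they arrive for a responsive user experience.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}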

Agents: The Orchestrators

Agents are systems where a language model selects and executes a sequence of actions. In Mastra, an agent serves as the central orchestrator, providing the LLM with access to Tools, Workflows, and contextual data. Agents can invoke custom functions, call APIs from third-party integrations, and query knowledge bases you've built, enabling complex, multi-step reasoning and task execution.
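
A minimal agent definition might look like the following sketch (assuming the Agent class from @mastra/core/agent; the name and instructions are illustrative):

import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

// An agent pairs a model with instructions and, optionally, tools and workflows.
export const supportAgent = new Agent({
  name: "support-agent",
  instructions: "Answer questions using the product documentation.",
  model: openai("gpt-4o"),
});

// Invoke the agent directly; it decides which tools (if any) to call.
const reply = await supportAgent.generate("How do I rotate an API key?");
console.log(reply.text);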

Tools: Typed, Executable Functions

Tools are typed functions that can be executed by an agent or within a workflow. They feature built-in integration access and parameter validation. Each tool is defined by:

  1. A schema that describes its inputs.
  2. An execution function that implements its logic.
  3. Access to configured integrations.

This structure ensures type safety, reduces runtime errors, and makes tool behavior predictable and debuggable.
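
A tool definition might look like this sketch (assuming Mastra's createTool helper with Zod schemas; the weather endpoint is hypothetical):

import { createTool } from "@mastra/core/tools";
import { z } from "zod";

export const weatherTool = createTool({
  id: "get-weather",
  description: "Fetch the current temperature for a city.",
  // 1. Schema: inputs are validated before execution.
  inputSchema: z.object({ city: z.string() }),
  outputSchema: z.object({ tempC: z.number() }),
  // 2. Execution function: receives the validated, typed input.
  execute: async ({ context }) => {
    // Hypothetical API endpoint, for illustration only.
    const res = await fetch(
      `https://api.example.com/weather?city=${encodeURIComponent(context.city)}`
    );
    const data = (await res.json()) as { tempC: number };
    return { tempC: data.tempC };
  },
});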

Workflows: Stateful, Graph-Based Processes

Workflows are persistent, graph-based state machines that represent complex, multi-step processes. They are a powerhouse feature of Mastra, supporting:

  • Control Flow: Loops, conditional branching, and error handling with retries.
  • Human-in-the-Loop: Steps that can pause and wait for human input.
  • Composability: The ability to embed other workflows.
  • Observability: Built-in OpenTelemetry tracing for every step.
  • Flexible Development: Can be built either via code or a visual editor.

Workflows are ideal for long-running, stateful operations like customer onboarding sequences, multi-stage data processing, or approval chains.
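
A code-first sketch of such a chain (assuming the createWorkflow/createStep API from @mastra/core/workflows; the onboarding steps are illustrative):

import { createWorkflow, createStep } from "@mastra/core/workflows";
import { z } from "zod";

const draftEmail = createStep({
  id: "draft-email",
  inputSchema: z.object({ customer: z.string() }),
  outputSchema: z.object({ draft: z.string() }),
  execute: async ({ inputData }) => ({
    draft: `Welcome aboard, ${inputData.customer}!`,
  }),
});

const sendEmail = createStep({
  id: "send-email",
  inputSchema: z.object({ draft: z.string() }),
  outputSchema: z.object({ sent: z.boolean() }),
  execute: async ({ inputData }) => {
    // Real sending logic would go here; each step emits OpenTelemetry traces.
    console.log(inputData.draft);
    return { sent: true };
  },
});

// Steps chain into a graph whose state persists between steps.
export const onboarding = createWorkflow({
  id: "customer-onboarding",
  inputSchema: z.object({ customer: z.string() }),
  outputSchema: z.object({ sent: z.boolean() }),
})
  .then(draftEmail)
  .then(sendEmail)
  .commit();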

RAG: Building Knowledge for Agents

Retrieval-Augmented Generation (RAG) allows you to equip agents with specialized knowledge beyond their base training. In Mastra, RAG is implemented as an ETL (Extract, Transform, Load) pipeline with specific query techniques. The process typically involves:

  1. Chunking: Breaking down documents into manageable pieces.
  2. Embedding: Converting text chunks into vector representations.
  3. Vector Search: Efficiently retrieving the most relevant chunks based on a query.

This enables agents to answer questions based on proprietary documentation, internal wikis, or real-time data.
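
The ingestion half of that pipeline might look like this sketch (assuming MDocument from @mastra/rag and embedMany from the AI SDK; the source text and chunk sizes are illustrative):

import { MDocument } from "@mastra/rag";
import { embedMany } from "ai";
import { openai } from "@ai-sdk/openai";

const wikiText = "...contents of an internal wiki page...";

// 1. Chunking: split the document into manageable pieces.
const doc = MDocument.fromText(wikiText);
const chunks = await doc.chunk({ strategy: "recursive", size: 512, overlap: 50 });

// 2. Embedding: convert each chunk into a vector representation.
const { embeddings } = await embedMany({
  model: openai.embedding("text-embedding-3-small"),
  values: chunks.map((chunk) => chunk.text),
});
console.log(`Embedded ${embeddings.length} chunks`);

// 3. Vector search: upsert the embeddings into a configured vector store
//    (e.g. pgvector or Pinecone) and query it with the user's question at answer time.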

Integrations: Type-Safe API Clients

Integrations in Mastra are automatically generated, type-safe clients for third-party service APIs (e.g., Slack, Stripe, GitHub). These can be used directly as tools for agents or as steps within workflows. The automatic type generation significantly reduces integration boilerplate and potential errors from manual API client implementation.
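
For shape only, a hypothetical GitHub integration might be used like this (the @mastra/github package and the method names below are illustrative, not a confirmed API):

// Hypothetical integration client, shown for shape only.
import { GithubIntegration } from "@mastra/github";

const github = new GithubIntegration({
  config: { PERSONAL_ACCESS_TOKEN: process.env.GITHUB_PAT! },
});

// Generated methods are fully typed, so an invalid owner/repo
// parameter fails at compile time instead of at runtime.
const client = await github.getApiClient();
const issues = await client.issuesListForRepo({
  path: { owner: "mastra-ai", repo: "mastra" },
});
console.log(issues);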

Evaluations: Automated LLM Testing

Evaluations are automated tests designed to assess the quality of LLM outputs. They employ a combination of:

  • Model-based Scoring: Using an LLM to grade another LLM's output.
  • Rule-based Methods: Checking for specific keywords, formats, or logic.
  • Statistical Methods: Analyzing distributions or other metrics.

Each evaluation returns a normalized score between 0 and 1, which can be logged and compared over time. They are fully customizable, allowing you to define your own prompts and scoring functions to align with specific business or quality requirements.
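
A minimal sketch of the rule-based flavor in plain TypeScript (not Mastra's evals API; the keyword check is illustrative):

// Rule-based evaluation: the fraction of required keywords present in the
// output, returned as a normalized score in [0, 1].
function keywordCoverage(output: string, required: string[]): number {
  if (required.length === 0) return 1;
  const text = output.toLowerCase();
  const hits = required.filter((kw) => text.includes(kw.toLowerCase())).length;
  return hits / required.length;
}

// Example: verify a support answer mentions both expected steps.
const score = keywordCoverage(
  "First reset your password, then clear the browser cache.",
  ["reset", "cache"]
);
console.log(score); // 1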

Getting Started

Prerequisites

To begin working with Mastra, ensure you have the following:

  • Node.js (version 20.0 or higher)
  • An API key from an LLM provider (e.g., OpenAI, Anthropic, Google Gemini)

Creating a New Project

The simplest way to start is by using the create-mastra CLI tool. It scaffolds a new Mastra application with all necessary configurations.

npx create-mastra@latest

Running the Development Server

After project creation, navigate to the project directory and start the development server, which opens the Mastra Playground.

npm run dev

Remember to set the required environment variables for your chosen LLM provider (e.g., ANTHROPIC_API_KEY, GOOGLE_GENERATIVE_AI_API_KEY).
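
For example, a .env file in the project root might look like this (set only the key for your provider; the values are placeholders):

# .env
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_GENERATIVE_AI_API_KEY=...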

Enhancing Development with MCP

Mastra offers a Model Context Protocol (MCP) server (@mastra/mcp-docs-server) that provides AI assistants with direct access to the full Mastra.ai knowledge base. This is invaluable for getting contextual help during development.

Configuration for Cursor

To enable it in Cursor, create or update .cursor/mcp.json in your project root:

macOS/Linux:

{
  "mcpServers": {
    "mastra": {
      "command": "npx",
      "args": ["-y", "@mastra/mcp-docs-server"]
    }
  }
}

Windows:

{
  "mcpServers": {
    "mastra": {
      "command": "cmd",
      "args": ["/c", "npx", "-y", "@mastra/mcp-docs-server"]
    }
  }
}

After configuration, you must manually enable the server in Cursor via Settings -> MCP Settings.

Community & Contribution

Mastra is an open-source project that welcomes contributions. Whether you're interested in coding, testing, or defining feature specifications, you can get involved. Developers are encouraged to open an issue for discussion before submitting a pull request. Project setup details are available in the development documentation.

For support, join the open community Discord. To report security vulnerabilities responsibly, please contact security@mastra.ai.

Conclusion

Mastra presents a compelling, enterprise-ready framework for building the next generation of AI applications. By combining a unified LLM interface, powerful agentic patterns, stateful workflows, and robust tooling for knowledge integration and evaluation, it addresses many of the complexities inherent in production AI systems. Its use of TypeScript ensures developer familiarity and type safety, while features like the MCP server enhance the development experience. For teams looking to move beyond simple chat interfaces to create dynamic, autonomous, and knowledge-aware AI agents, Mastra provides a structured and scalable path forward.
