How OpenViking Solves the AI Agent Memory Dilemma: A File-System-Style Memory Solution for 2026
Introduction: The AI Agent's "Memory" Dilemma
Shrimp farmers have a new toy to play with lately, but claws alone aren't enough; the memory has to keep up.
OpenClaws are busy "building memory" every day, yet context engineering has become the new pain point: vector fragments scattered everywhere, tokens burning money like water, retrieval returning a jumble of garbled fragments, long-running tasks forgotten within three days…
Now an open-source project has upgraded the Agent's "brain" into an operable file system.
Just recently, the Volcano Engine Viking team quietly released OpenViking.
It is a context database built specifically for AI Agents, using a file-system paradigm to manage memory, resources, and skills in one place: retrieval is observable, loading happens on demand, and the context can even iterate on itself.
Less than a month after going open source, it has already soared past 8.3k GitHub stars.
Core Pain Point: Traditional RAG's "Black Box" and High Cost
When an AI handles long-running tasks, the background material keeps piling up. The traditional approach is to have the AI re-read all of it every single time it works. This not only makes it easy to lose the thread, it also burns a massive amount of tokens (and in the AI world, burning tokens is burning money). And don't bother yelling at it; the complaining costs money too.
In real-world tests, the OpenClaw Memory plugin boosted task completion rates by 43%–49% and cut input tokens by a full 91%!
Adopt this and think how many tokens (i.e., money) you'd save — especially relevant for all the shrimp farmers out there.
OpenViking's Core Idea: From "Vector Black Box" to "Operable File System"
So how capable is OpenViking? It installs with a single command and is responsible for giving Agents a stable, governable supply of long-term memory and context.
Interestingly, OpenViking's core selling point is exactly this: "turning context from a vector black box into an operable file system."
The Limits of Traditional RAG
Everyone knows the traditional RAG pipeline: split into chunks → embed as vectors → semantic recall → stitch the context back together. The retrieved results are like loose parts, and the Agent still has to assemble them itself.
Traditional RAG is a "black box": it hands you an answer, but when the answer is wrong you have no idea which "book" it misread, so correcting it is hard.
The File-System Paradigm: Structured, Traceable Management
To solve these pain points, OpenViking introduces a clever, down-to-earth concept: manage the AI's brain the way everyone already manages a computer, with folders.
Instead of smashing the AI's data into one undifferentiated mass, it turns what the AI needs — "memory" (chat history), "resources" (reference documents), and "skills" (usable tools) — into folders much like the ones on our own machines.
The AI can browse them like a C: or D: drive, storing and retrieving information by category in an orderly way.
OpenViking implements this idea through a virtual file system. Its core design includes:
- Unified protocol and storage: all memory, resources, and skills live in a single virtual file system, addressed by the viking:// protocol.
- Clear directory structure: resources/, user/, agent/, memories/, and skills/.
- File-like operations: the Agent runs ls, find, grep, and tree as if on local files, jumping straight to whatever it wants to inspect.
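To make the file-like operations above concrete, here is a minimal, purely illustrative sketch: a toy in-memory "context file system" supporting ls/find/grep over viking:// paths. The `ContextFS` class and its methods are hypothetical inventions for demonstration only, not OpenViking's actual API.

```python
import fnmatch

class ContextFS:
    """Toy in-memory file system keyed by viking:// style paths (illustrative only)."""

    def __init__(self):
        self.files = {}  # path (scheme stripped) -> text content

    def write(self, uri, content):
        self.files[uri.removeprefix("viking://")] = content

    def ls(self, prefix=""):
        """List immediate children under a directory prefix."""
        prefix = prefix.removeprefix("viking://")
        children = set()
        for path in self.files:
            if path.startswith(prefix):
                rest = path[len(prefix):].lstrip("/")
                children.add(rest.split("/")[0])
        return sorted(children)

    def find(self, pattern):
        """Glob-match full paths, roughly like `find`."""
        return sorted(p for p in self.files if fnmatch.fnmatch(p, pattern))

    def grep(self, needle):
        """Return paths whose content contains the search string."""
        return sorted(p for p, c in self.files.items() if needle in c)

fs = ContextFS()
fs.write("viking://memories/2026-01-user-prefs.md", "user prefers dark mode")
fs.write("viking://resources/api-guide.md", "REST endpoints overview")
fs.write("viking://skills/search.md", "web search tool description")

print(fs.ls())                    # top-level directories
print(fs.find("memories/*.md"))   # glob over paths
print(fs.grep("dark mode"))       # content search
```

The point of the sketch is the mental model: the Agent navigates and searches its context the way a shell user navigates a disk, so every retrieval step is an inspectable operation rather than an opaque similarity lookup.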
Key Techniques: Three-Tier Context Loading and Memory Evolution
Three-Tier Context Loading (L0/L1/L2)
OpenViking also implements three-tier context loading (L0/L1/L2).
- L0: a one-sentence abstract summary, lightweight and always resident.
- L1: overview information, loaded on demand.
- L2: full details, pulled only when genuinely needed.
It's like how a person researches: first skim only the "book title and table of contents"; if that looks relevant, load the "chapter summaries"; only once it's clearly useful, read the "full text." This sharply reduces the odds of the AI reading irrelevant filler and greatly lowers the cost of model calls.
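The tiered loading described above can be sketched as lazy, cached accessors: the L0 abstract is always resident, while L1 and L2 are fetched only on first request. The class name and loader callbacks below are illustrative assumptions, not OpenViking's real interface.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ContextEntry:
    """One context item with an always-resident L0 and lazily loaded L1/L2 (sketch)."""
    abstract: str                      # L0: a few tokens, always in context
    load_overview: Callable[[], str]   # L1: fetched on demand
    load_full: Callable[[], str]       # L2: fetched only when truly needed
    _overview: Optional[str] = None
    _full: Optional[str] = None

    def overview(self) -> str:
        if self._overview is None:     # pay the L1 cost at most once
            self._overview = self.load_overview()
        return self._overview

    def full(self) -> str:
        if self._full is None:         # pay the L2 cost at most once
            self._full = self.load_full()
        return self._full

calls = []  # records which tiers were actually loaded
entry = ContextEntry(
    abstract="API guide: REST endpoints for the billing service",
    load_overview=lambda: calls.append("L1") or "Chapter summaries...",
    load_full=lambda: calls.append("L2") or "Full document text...",
)

# Screening many entries touches only L0 abstracts -- no loads happen yet.
assert calls == []
# Only when the abstract looks relevant do we pay for L1, then maybe L2.
print(entry.overview())  # triggers exactly one L1 load
print(entry.full())      # triggers exactly one L2 load
print(calls)             # ['L1', 'L2']
```

The token saving falls out of the structure: an Agent can scan hundreds of L0 abstracts for the price of a few lines each, and escalate to L1/L2 for only the handful of entries that matter.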
With this setup, token consumption plummets, and the context becomes more structured and traceable.
The entire retrieval trajectory can also be visualized. When debugging, you look straight at the path map: which recall step went bad, which directory level got missed, all at a glance.
Automatic Tidying, Smarter with Use (Memory Self-Evolution)
When you've been chatting with the AI for a long time and the transcript gets very long, OpenViking automatically performs "memory slimming" for you.
It automatically compresses the inconsequential chit-chat, extracts the important "long-term memories," and archives them. No manual intervention needed; the AI's brain stays clear on its own.
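A rough sketch of that "memory slimming" step: old turns are scanned, throwaway chit-chat is dropped, and durable facts are archived as long-term memory. The keyword heuristic below is a deliberate stand-in for the LLM-based summarization the article implies; the function, markers, and threshold are all hypothetical simplifications.

```python
# Crude stand-in for LLM summarization: phrases that suggest a durable fact.
LONG_TERM_MARKERS = ("prefer", "always", "my name is", "deadline")

def compact_history(turns, keep_recent=2):
    """Split turns into (recent working window, archived long-term memories)."""
    old, recent = turns[:-keep_recent], turns[-keep_recent:]
    # Keep only old turns that look like durable facts; the rest of the
    # chit-chat is "compressed away" (here: simply dropped).
    archived = [t for t in old if any(m in t.lower() for m in LONG_TERM_MARKERS)]
    return recent, archived

history = [
    "hi there!",
    "My name is Alice and I prefer concise answers.",
    "lol nice weather today",
    "The project deadline is March 3.",
    "ok let's continue",
    "what's next on the task list?",
]
recent, memories = compact_history(history)
print(recent)    # last two turns stay in the working context
print(memories)  # durable facts archived as long-term memory
```

In a real system the filter would be a summarization pass rather than keyword matching, and the archived facts would land under something like the memories/ directory described earlier, but the shape of the process — shrink the window, distill the keepers — is the same.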
Project Background and Significance
A word on the team behind it. OpenViking was initiated and is maintained by ByteDance's Volcano Engine Viking team, which focuses on unstructured data and AI-native infrastructure and has deep prior experience in areas like cloud-native data lakes and vector engines.
This time their bet is very clear: enable AI Agents to truly stay "always online, continuously evolving."
Context stops being temporary token stuffing and instead, like an operating system's file system, becomes the Agent's persistent "root directory."
Personally, I think this is excellent. Back when the lobster craze first took off, I kept wondering why nothing like this existed to manage context. Agents desperately need it!
Project Address
https://github.com/volcengine/openviking
FAQ
What are OpenViking's core advantages over traditional RAG?
OpenViking shifts AI memory from traditional RAG's "vector black box" to an operable file-system paradigm, enabling structured, traceable management. This solves the problems of unobservable retrieval and difficult error correction, while sharply cutting token costs.
How does OpenViking help AI Agents complete more tasks?
Through its three-tier context loading mechanism and self-evolving memory, OpenViking intelligently organizes and recalls memory resources. In real-world tests, the Memory plugin raised task completion rates by 43%–49% and reduced input tokens by 91%.
How does OpenViking's file-system paradigm actually work?
It manages the AI's memory, resources, and skills in a unified virtual file system organized like computer folders and addressed via the viking:// protocol. The AI browses and accesses information by category, just like browsing a disk, enabling observable retrieval and on-demand loading.