GEO

Category: AI Large Models

FinRobot: An Open-Source Financial AI Agent Platform for LLM-Powered Analysis and Decision-Making

FinRobot is an open-source AI agent platform designed specifically for financial applications. It uses large language models (LLMs) to build specialized agents capable of complex financial analysis and decision-making, and applies Financial Chain-of-Thought (CoT) prompting to decompose intricate problems into logical steps. Its modular architecture comprises four layers (Financial AI Agents, Financial LLM Algorithms, LLMOps/DataOps, and Multi-source LLM Foundation Models) and supports agents for market forecasting, document analysis, and trading strategies. FinRobot aims to democratize access to professional financial LLM tools and broaden the adoption of AI in financial decision-making. A minimal sketch of the chain-of-thought prompting idea follows this card.
AI Large Models · 2026/1/25
Read more →
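The Financial Chain-of-Thought idea can be illustrated with a minimal prompt-construction sketch. The helper below is hypothetical and is not FinRobot's actual API; it only shows how a financial question might be decomposed into explicit reasoning steps before being sent to an LLM.

```python
# Minimal, hypothetical sketch of Financial Chain-of-Thought (CoT) prompting.
# The step list and helper name are illustrative; this is not FinRobot's API.

FIN_COT_STEPS = [
    "1. Identify the company, time period, and metrics the question refers to.",
    "2. List the relevant financial data points and where they come from.",
    "3. Compute or compare the metrics step by step, showing intermediate values.",
    "4. State the conclusion and the key risks or caveats.",
]

def build_fin_cot_prompt(question: str) -> str:
    """Wrap a financial question with explicit reasoning steps."""
    steps = "\n".join(FIN_COT_STEPS)
    return (
        "You are a financial analyst. Answer the question by reasoning "
        f"through the following steps before giving a final answer:\n{steps}\n\n"
        f"Question: {question}\n"
    )

if __name__ == "__main__":
    prompt = build_fin_cot_prompt(
        "Did Company X's gross margin improve year over year, and why?"
    )
    print(prompt)  # In practice this prompt would be sent to an LLM client.
```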
FinRobot: An Open-Source Financial AI Agent Platform That Goes Beyond FinGPT

FinRobot is an open-source AI agent platform built for financial analysis. It extends beyond FinGPT by integrating a range of AI technologies, including specially tuned LLMs, financial chain-of-thought prompting, and a multi-layer architecture, to provide a comprehensive solution for financial applications.
AI Large Models · 2026/1/25
Read more →
Google's Generative AI Ecosystem Explained: How Gemini Models Power Next-Generation Application Development

Google's generative AI ecosystem brings together Gemini models, Google AI Studio, Firebase, Project IDX, and Studio Bot so that developers can build AI-powered applications efficiently. These tools rely on large language models trained on vast datasets to predict and generate text, images, video, and audio, changing how teams create and innovate. A hedged example of calling a Gemini model follows this card.
AI Large Models · 2026/1/25
Read more →
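As a concrete illustration of building on Gemini, the snippet below uses the google-generativeai Python SDK. The model name and the environment variable holding the API key are assumptions made for the example, not details taken from the article.

```python
# Hedged example: calling a Gemini model via the google-generativeai SDK.
# The model name "gemini-1.5-flash" and the GOOGLE_API_KEY variable are assumptions.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content(
    "Summarize the main components of a retrieval-augmented generation system."
)
print(response.text)
```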
UltraRAG: Tsinghua University's Zero-Code RAG Framework for Knowledge-Augmented AI Applications

UltraRAG is a comprehensive RAG framework developed by Tsinghua University and partner teams. It offers a zero-code WebUI, automated knowledge-base adaptation, and a modular design that serves both research and production use, and it integrates techniques such as KBAlign and DDR to improve retrieval and generation performance across models and tasks. A generic retrieval-plus-generation sketch follows this card.
AI Large Models · 2026/1/25
Read more →
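To make the retrieval-plus-generation pattern behind UltraRAG concrete, here is a minimal, self-contained RAG sketch. It illustrates the general idea only; it does not use UltraRAG's actual modules, KBAlign, or DDR, and the toy corpus and scoring are invented for the example.

```python
# Generic RAG sketch: retrieve the most relevant passage, then build a grounded prompt.
# This shows the retrieval-augmented generation pattern, not UltraRAG's API.

KNOWLEDGE_BASE = [
    "UltraRAG provides a zero-code WebUI for building RAG pipelines.",
    "KBAlign adapts a knowledge base to the target model and task.",
    "DDR tunes retrieval and generation jointly for downstream performance.",
]

def retrieve(query: str, docs: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:top_k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Assemble a prompt that grounds the answer in the retrieved context."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only the context below.\nContext:\n{context}\nQuestion: {query}"

if __name__ == "__main__":
    question = "What does KBAlign do?"
    prompt = build_prompt(question, retrieve(question, KNOWLEDGE_BASE))
    print(prompt)  # The prompt would then be passed to an LLM for generation.
```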
OpenBMB: An Open-Source Large-Model Toolchain That Lowers the Barrier to AI Development

OpenBMB (Open Lab for Big Model Base) is an open-source initiative that aims to build a complete ecosystem around large-scale pre-trained language models. It provides a full toolchain covering data processing, model training, fine-tuning, compression, and inference, significantly lowering the cost and technical barriers of working with models at the tens-of-billions-of-parameters scale. The suite includes BMTrain for efficient training, BMCook for model compression, BMInf for low-cost inference, OpenPrompt for prompt learning, and OpenDelta for parameter-efficient fine-tuning, and the project fosters a collaborative community to standardize and democratize large-model development and application. A plain-PyTorch sketch of parameter-efficient fine-tuning follows this card.
AI Large Models · 2026/1/25
Read more →
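The parameter-efficient fine-tuning idea that tools like OpenDelta implement can be sketched in plain PyTorch: freeze the base model and train only a small added module. The LoRA-style class below is an illustrative sketch under that assumption, not OpenDelta's or OpenPrompt's actual API.

```python
# Illustrative parameter-efficient fine-tuning: freeze the base weights and
# train only a small low-rank "delta" added to one linear layer.
# This sketches the idea behind tools like OpenDelta; it is not their API.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # frozen pretrained weights
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)   # the delta starts at zero

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.lora_b(self.lora_a(x))

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable {trainable} / total {total} parameters")
```

Only the low-rank adapters receive gradients, which is why this style of tuning needs a small fraction of the memory and storage of full fine-tuning.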
UltraRAG: A Low-Code, Visual RAG Development Framework Built on the MCP Architecture

UltraRAG is a low-code RAG development framework built on the Model Context Protocol (MCP) architecture, with an emphasis on visual orchestration and reproducible evaluation workflows. Core components such as retrieval, generation, and evaluation are packaged as independent MCP Servers, and an interactive UI and pipeline builder make the development process transparent and repeatable. A plain-Python analogy for this component decomposition follows this card.
AI Large Models · 2026/1/25
Read more →
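The idea of splitting retrieval, generation, and evaluation into independently addressable components can be sketched as follows. This is a plain-Python analogy for the MCP-Server decomposition described above; the class names and the `run` interface are assumptions, and no actual MCP protocol code or UltraRAG API is used.

```python
# Plain-Python analogy for decomposing a RAG pipeline into independent services,
# each behind the same small interface. Component names and the `run` signature
# are illustrative assumptions, not UltraRAG's MCP Servers.
from typing import Protocol

class PipelineComponent(Protocol):
    def run(self, payload: dict) -> dict: ...

class Retriever:
    def run(self, payload: dict) -> dict:
        payload["passages"] = [f"passage about {payload['query']}"]
        return payload

class Generator:
    def run(self, payload: dict) -> dict:
        payload["answer"] = f"Answer based on {len(payload['passages'])} passage(s)."
        return payload

class Evaluator:
    def run(self, payload: dict) -> dict:
        payload["score"] = 1.0 if payload["answer"] else 0.0
        return payload

PIPELINE: list[PipelineComponent] = [Retriever(), Generator(), Evaluator()]

state = {"query": "What is MCP?"}
for component in PIPELINE:          # each stage could live in a separate process or server
    state = component.run(state)
print(state)
```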
UltraRAG 2.0: An MCP-Based Open-Source Framework That Simplifies Complex RAG Systems with YAML Configuration

UltraRAG 2.0 is an open-source framework based on the Model Context Protocol (MCP) architecture that simplifies the development of complex RAG systems through YAML configuration, enabling low-code implementation of multi-step reasoning, dynamic retrieval, and modular workflows. It targets the engineering bottlenecks of RAG applications in both research and production settings. A hypothetical YAML-pipeline example follows this card.
AI Large Models · 2026/1/25
Read more →
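To illustrate a YAML-driven workflow of this kind, the snippet below parses a small pipeline configuration and walks its steps. The configuration keys are invented for the example and are not UltraRAG 2.0's actual schema.

```python
# Hypothetical example of describing a RAG pipeline in YAML and reading it in Python.
# The configuration keys below are illustrative; they are not UltraRAG 2.0's schema.
import yaml  # requires PyYAML (pip install pyyaml)

CONFIG = """
pipeline:
  - step: retrieve
    top_k: 5
  - step: rerank
    model: cross-encoder
  - step: generate
    max_tokens: 512
"""

config = yaml.safe_load(CONFIG)
for step in config["pipeline"]:
    name = step.pop("step")
    print(f"running step '{name}' with options {step}")
```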
AirLLM: An Open-Source Framework for Running 70B-Parameter LLMs on a 4GB GPU

AirLLM is an open-source framework that makes it possible to run 70B-parameter large language models on a single 4GB GPU through layer-wise offloading and memory-optimization techniques, rather than traditional compression methods, broadening access to cutting-edge AI. A conceptual sketch of layer-wise inference follows this card.
AI Large Models · 2026/1/25
Read more →
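The layer-wise offloading idea can be sketched in plain PyTorch: keep only one layer's weights on the GPU at a time, loading each layer from disk just before it runs and freeing it afterwards. This is a conceptual sketch of the technique with toy layers, not AirLLM's actual implementation or file format.

```python
# Conceptual sketch of layer-wise inference: only one layer's weights are resident
# on the GPU at any moment, so peak memory stays near the size of a single layer.
# The file layout and toy layers are illustrative, not AirLLM's implementation.
import torch
import torch.nn as nn

NUM_LAYERS, HIDDEN = 4, 64
device = "cuda" if torch.cuda.is_available() else "cpu"

# Pretend each layer's weights were sharded to disk ahead of time.
for i in range(NUM_LAYERS):
    torch.save(nn.Linear(HIDDEN, HIDDEN).state_dict(), f"layer_{i}.pt")

hidden = torch.randn(1, HIDDEN, device=device)
for i in range(NUM_LAYERS):
    layer = nn.Linear(HIDDEN, HIDDEN)
    layer.load_state_dict(torch.load(f"layer_{i}.pt"))  # load one layer's shard
    layer.to(device)
    with torch.no_grad():
        hidden = layer(hidden)        # run only this layer
    del layer                         # free it before loading the next layer
    if device == "cuda":
        torch.cuda.empty_cache()
print(hidden.shape)
```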
Pushing the Limits: AirLLM Runs 70B-Model Inference on a 4GB GPU Without Lossy Compression

AirLLM introduces a memory-optimization approach that runs 70B-parameter LLM inference on a single 4GB GPU through layer-wise execution, flash-attention optimization, and model file sharding, without resorting to performance-degrading compression techniques such as quantization or pruning. A conceptual sketch of per-layer checkpoint sharding follows this card.
AI Large Models · 2026/1/24
Read more →
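The model-file sharding mentioned above can be illustrated by splitting a single checkpoint into per-layer files, so that inference only ever has to read one small file at a time. Again, this is a conceptual sketch with a toy model; the grouping scheme is an assumption, not AirLLM's on-disk format.

```python
# Conceptual sketch of model file sharding: split one checkpoint into per-layer
# files keyed by layer index, so each file can be loaded (and freed) on its own.
# The key grouping here is an assumption, not AirLLM's on-disk format.
import collections
import torch
import torch.nn as nn

model = nn.Sequential(*[nn.Linear(64, 64) for _ in range(3)])
state = model.state_dict()  # keys look like "0.weight", "0.bias", "1.weight", ...

shards = collections.defaultdict(dict)
for key, tensor in state.items():
    layer_idx = key.split(".")[0]      # group parameters by their layer prefix
    shards[layer_idx][key] = tensor

for layer_idx, shard in shards.items():
    torch.save(shard, f"shard_layer_{layer_idx}.pt")
    size_mb = sum(t.numel() * t.element_size() for t in shard.values()) / 1e6
    print(f"layer {layer_idx}: {len(shard)} tensors, {size_mb:.2f} MB")
```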