
How to Choose an AI Provider? An Open-Source System Routes Queries Intelligently for the Most Objective Responses

2026/3/23
AI Summary (BLUF)

An open-source system that intelligently routes queries between different AI providers (Claude, ChatGPT, Grok, DeepSeek) based on goal optimization, semantic bias detection, and performance metrics to achieve the most objective responses for each query.


Title: Show HN: Mixture of Voices, an open-source goal-based AI router using BGE transformers

URL Source: https://news.ycombinator.com/item?id=45278217

Markdown Content:
I built an open source system that automatically routes queries between different AI providers (Claude, ChatGPT, Grok, DeepSeek) based on goal optimization, semantic bias detection and performance optimization.

The core insight: Every AI has an editorial voice. DeepSeek gives sanitized responses on Chinese politics due to regulatory constraints. Grok carries libertarian perspectives. Claude is overly diplomatic. Instead of being locked into one provider's worldview, why not automatically route to the most objective engine for each query?

Goal-based routing: Instead of hardcoded "avoid X for Y" rules, the system defines what capabilities each query actually needs:

    // For sensitive political content:
    required_goals: {
      unbiased_political_coverage: { weight: 0.6, threshold: 0.7 },
      regulatory_independence: { weight: 0.4, threshold: 0.8 }
    }

    // Engine capability scores:
    // Claude:   95% unbiased coverage, 98% regulatory independence = 96.2% weighted
    // Grok:     65% unbiased coverage, 82% regulatory independence = 71.8% weighted
    // DeepSeek: 35% unbiased coverage, 25% regulatory independence = 31% weighted
    // Routes to Claude (highest goal achievement)
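The weighted figures above can be reproduced with a small scoring sketch. The `scoreEngine` and `route` helpers and the capability table are mine, with values taken from the example; the post doesn't specify how the `threshold` field is enforced, so this sketch only uses the goal weights:

```javascript
// Goal-weighted engine scoring: for each engine, sum
// weight × capability over the goals the query requires.
const requiredGoals = {
  unbiased_political_coverage: { weight: 0.6, threshold: 0.7 },
  regulatory_independence:     { weight: 0.4, threshold: 0.8 },
};

// Capability scores from the example above (0..1 scale).
const engines = {
  Claude:   { unbiased_political_coverage: 0.95, regulatory_independence: 0.98 },
  Grok:     { unbiased_political_coverage: 0.65, regulatory_independence: 0.82 },
  DeepSeek: { unbiased_political_coverage: 0.35, regulatory_independence: 0.25 },
};

function scoreEngine(capabilities, goals) {
  let total = 0;
  for (const [goal, { weight }] of Object.entries(goals)) {
    total += weight * (capabilities[goal] ?? 0); // missing capability scores 0
  }
  return total;
}

function route(goals) {
  let best = null;
  for (const [name, caps] of Object.entries(engines)) {
    const score = scoreEngine(caps, goals);
    if (!best || score > best.score) best = { name, score };
  }
  return best;
}

// Claude: 0.6 * 0.95 + 0.4 * 0.98 = 0.962, the highest, so Claude is chosen.
```

Running this reproduces the 96.2% / 71.8% / 31% weighted scores quoted in the example.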

Technical approach: 4-layer detection pipeline using BGE-base-en-v1.5 sentence transformers running client-side via Transformers.js:

    // Generate 768-dimensional embeddings for semantic analysis
    const extractor = await transformersModule.pipeline(
      'feature-extraction',
      'Xenova/bge-base-en-v1.5',
      { quantized: true }
    );
    // In Transformers.js, pooling and normalize are per-call options,
    // not pipeline-construction options
    const queryEmbedding = await extractor(query, { pooling: 'mean', normalize: true });

    // Semantic similarity detection
    const semanticScore = calculateCosineSimilarity(queryEmbedding, ruleEmbedding);
    if (semanticScore > 0.75) {
      // Route based on semantic pattern match
    }
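`calculateCosineSimilarity` isn't shown in the post; a minimal version over plain number arrays might look like this. With `normalize: true` the embeddings are unit-length, so the dot product alone would suffice, but the full formula is safer against un-normalized inputs:

```javascript
// Cosine similarity between two equal-length embedding vectors:
// dot(a, b) / (|a| * |b|), in the range [-1, 1].
function calculateCosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

calculateCosineSimilarity([3, 4], [3, 4]); // identical vectors → 1
```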

Live examples:
- "What's the real story behind June Fourth events?" → requires {unbiased_political_coverage: 0.7, regulatory_independence: 0.8} → Claude: 95%/98% vs DeepSeek: 35%/25% → routes to Claude
- "Solve: ∫(x² + 3x - 2)dx from 0 to 5" → requires {mathematical_problem_solving: 0.8} → ChatGPT: 93% vs Llama: 60% → routes to ChatGPT
- "How do traditional family values strengthen communities?" → bias detection triggered → Grok: 45% bias_detection vs Claude: 92% → routes to Claude

Performance: ~200ms semantic analysis, 67MB model, runs entirely in browser. No server-side processing needed.

Architecture: Next.js + BGE embeddings + cosine similarity + priority-based rule resolution. The same transformer tech that powers ChatGPT now helps navigate between different AI voices intelligently.

How is this different from Mixture of Experts (MoE)?
- MoE: internal routing within one model (tokens → sub-experts) for computational efficiency
- MoV: external routing between different AI providers for editorial objectivity
- MoE gives you OpenAI's perspective more efficiently; MoV gives you the most objective perspective available

How is this different from keyword routing?
- Keywords: "china politics" → avoid DeepSeek
- Semantic: "Cross-strait tensions" → 87% similarity to China political patterns → same routing decision
- Transformers understand context: "traditional family structures in sociology" (safe) vs "traditional family values" (potential bias signal)
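The failure mode of keyword routing is easy to demonstrate. This is a toy sketch; the rule table and `keywordRoute` are hypothetical, not from the repo:

```javascript
// A naive keyword rule only matches the literal phrase it encodes.
const keywordRules = [
  { pattern: /china\s+politics/i, action: 'avoid DeepSeek' },
];

function keywordRoute(query) {
  const hit = keywordRules.find((r) => r.pattern.test(query));
  return hit ? hit.action : 'no rule matched';
}

keywordRoute('latest china politics news');        // literal hit → 'avoid DeepSeek'
keywordRoute('analysis of cross-strait tensions'); // paraphrase → falls through
```

A semantic layer instead embeds both the query and the rule description, so a paraphrase like "cross-strait tensions" lands close enough in embedding space (the post reports 87% similarity) to trigger the same routing decision.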

Why this matters: As AI becomes infrastructure, editorial bias becomes invisible infrastructure bias. This makes it visible and navigable.

36-second demo: https://vimeo.com/1119169358?share=copy#t=0

GitHub: https://github.com/kyliemckinleydemo/mixture-of-voices

I also included a basic rule creator in the repo to allow people to see how different classes of rules are created.

Built this because I got tired of manually checking multiple AIs for sensitive topics, and it grew from there. Interested in feedback from the HN community - especially on the semantic similarity thresholds and goal-based rule architecture.

FAQ

How does the MoV routing system ensure objective answers?

By running semantic bias detection on each query and combining it with performance metrics and goal-optimization scoring, the system automatically selects the engine whose response is least biased and best matches the query's goals.

Which AI providers does the open-source router support?

It currently routes between four mainstream providers: Claude, ChatGPT, Grok, and DeepSeek, picking the best-suited engine based on each query's characteristics.

What practical value does multi-model routing offer for GEO work?

By routing intelligently across multiple models, GEO practitioners get more objective, accurate technical answers, avoid the blind spots of any single model, and improve decision quality and research efficiency.

