How Does Qwen3-1.7B's Dual-Mode Reasoning Optimize AI Performance? A 2026 Deep Dive

2026/3/21
AI Summary (BLUF)

Qwen3-1.7B is a 1.7-billion-parameter large language model with a distinctive dual-mode reasoning capability: it switches seamlessly between thinking and non-thinking modes, optimizing performance across scenarios including complex reasoning, creative tasks, and multilingual applications.

Qwen3 Key Highlights

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:

  • Uniquely supports seamless switching between thinking mode (for complex logical reasoning, math, and coding) and non-thinking mode (for efficient, general-purpose dialogue) within a single model, ensuring optimal performance across various scenarios.
  • Significantly enhanced reasoning capabilities, surpassing previous QwQ (in thinking mode) and Qwen2.5-Instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
  • Superior human preference alignment, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
  • Expertise in agent capabilities, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
  • Supports 100+ languages and dialects with strong capabilities for multilingual instruction following and translation.

Model Overview

Qwen3-1.7B has the following specifications:

  • Type: Causal Language Model
  • Training Stage: Pretraining & Post-training
  • Number of Parameters: 1.7B
  • Number of Parameters (Non-Embedding): 1.4B
  • Number of Layers: 28
  • Number of Attention Heads (GQA): 16 for Q and 8 for KV
  • Context Length: 32,768
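The grouped-query attention (GQA) layout above directly determines the KV-cache footprint at long context. A back-of-envelope sketch in Python (head_dim=128 is an assumption not stated in the spec list, so the figures are rough estimates, not official numbers):

```python
# Rough KV-cache size estimate for Qwen3-1.7B's GQA layout.
num_layers = 28          # from the spec list above
num_kv_heads = 8         # GQA: 8 KV heads serve 16 query heads
head_dim = 128           # assumed per-head dimension (not stated above)
context_len = 32768      # maximum context length
bytes_per_elem = 2       # fp16/bf16

# K and V each store (context_len, num_kv_heads, head_dim) per layer.
kv_cache_bytes = 2 * num_layers * context_len * num_kv_heads * head_dim * bytes_per_elem
print(f"KV cache at full context: {kv_cache_bytes / 2**30:.2f} GiB")
# With 16 KV heads (i.e., no GQA) the cache would be twice as large.
```

Halving the KV heads relative to the query heads is what keeps the full 32K-token cache manageable on small GPUs.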

For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our blog, GitHub, and Documentation.

TIP
If you encounter significant endless repetitions, please refer to the Best Practices section for optimal sampling parameters, and set the presence_penalty to 1.5.

Quick Start

The code for Qwen3 has been integrated into the latest Hugging Face transformers library, and we advise you to use the latest version of transformers.

With transformers<4.51.0, you will encounter the following error:

KeyError: 'qwen3'
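Since the `qwen3` model type was added in transformers 4.51.0, a lightweight guard can fail fast with a clearer message than the raw KeyError. A minimal sketch (the helper name `supports_qwen3` is illustrative, not part of any library):

```python
def supports_qwen3(version: str) -> bool:
    """True if this transformers version includes the 'qwen3' model type (added in 4.51.0)."""
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts >= (4, 51, 0)

# Typical usage before loading the model:
#   import transformers
#   assert supports_qwen3(transformers.__version__), "upgrade: pip install -U transformers"
print(supports_qwen3("4.50.3"))  # False: this version raises KeyError: 'qwen3'
print(supports_qwen3("4.51.0"))  # True
```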

The following contains a code snippet illustrating how to use the model to generate content based on given inputs.

from modelscope import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-1.7B"

# Load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# Prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Run text generation
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist() 

# Parse the thinking content
try:
    # find the last occurrence of token 151668 (</think>)
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)
print("content:", content)

For deployment, you can use sglang>=0.4.6.post1 or vllm>=0.8.5 to create an OpenAI-compatible API endpoint:

  • SGLang: SGLANG_USE_MODELSCOPE=true python -m sglang.launch_server --model-path Qwen/Qwen3-1.7B --reasoning-parser qwen3
  • vLLM: VLLM_USE_MODELSCOPE=true vllm serve Qwen/Qwen3-1.7B --enable-reasoning --reasoning-parser deepseek_r1
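A served endpoint can then be queried over the standard /v1/chat/completions route. A sketch of the request body in Python (port 8000 matches vLLM's default; SGLang serves on a different port unless configured; the presence_penalty value follows the repetition tip earlier in this article):

```python
import json

# Request body for the OpenAI-compatible endpoint started above.
payload = {
    "model": "Qwen/Qwen3-1.7B",
    "messages": [
        {"role": "user", "content": "Give me a short introduction to large language models."}
    ],
    "temperature": 0.6,
    "top_p": 0.95,
    "presence_penalty": 1.5,  # per the repetition tip earlier in this article
    "max_tokens": 4096,
}
body = json.dumps(payload)
# POST `body` to http://localhost:8000/v1/chat/completions with
# Content-Type: application/json; local servers accept any non-empty API key.
print(body[:40])
```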

For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.

Switching Between Thinking and Non-Thinking Modes

TIP
The enable_thinking switch is also available in APIs created by SGLang and vLLM. Please refer to our documentation for SGLang and vLLM users.

enable_thinking=True

By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting enable_thinking=True or leaving it as the default value in tokenizer.apply_chat_template, the model will engage its thinking mode.

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # True is the default value for enable_thinking
)

In this mode, the model will generate think content wrapped in a <think>...</think> block, followed by the final response.

NOTE
For thinking mode, use Temperature=0.6, TopP=0.95, TopK=20, and MinP=0 (the default setting in generation_config.json). DO NOT use greedy decoding, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the Best Practices section.

enable_thinking=False

We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False  # Setting enable_thinking=False disables thinking mode
)

In this mode, the model will not generate any think content and will not include a <think>...</think> block.

NOTE
For non-thinking mode, we suggest using Temperature=0.7, TopP=0.8, TopK=20, and MinP=0. For more detailed guidance, please refer to the Best Practices section.
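The recommended settings for both modes can be collected into one helper and splatted into model.generate. A minimal sketch (the function name is illustrative; the values come from the two notes above):

```python
def sampling_params(thinking: bool) -> dict:
    """Recommended Qwen3 sampling settings per mode (from the notes above)."""
    if thinking:
        # Thinking mode: sampling is required; greedy decoding causes
        # performance degradation and endless repetitions.
        return {"do_sample": True, "temperature": 0.6, "top_p": 0.95, "top_k": 20, "min_p": 0.0}
    # Non-thinking mode.
    return {"do_sample": True, "temperature": 0.7, "top_p": 0.8, "top_k": 20, "min_p": 0.0}

# Usage: model.generate(**model_inputs, max_new_tokens=32768, **sampling_params(thinking=True))
print(sampling_params(True)["temperature"])   # 0.6
print(sampling_params(False)["top_p"])        # 0.8
```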

Advanced Usage: Switching Modes via User Input

We provide a soft switch mechanism that allows users to dynamically control the model's behavior when enable_thinking=True. Specifically, you can add /think and /no_think to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.

Here is an example of a multi-turn conversation:

from modelscope import AutoModelForCausalLM, AutoTokenizer

class QwenChatbot:
    def __init__(self, model_name="Qwen/Qwen3-1.7B"):
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForCausalLM.from_pretrained(model_name)
        self.history = []

    def generate_response(self, user_input):
        messages = self.history + [{"role": "user", "content": user_input}]

        text = self.tokenizer.apply_chat_template(
            messages,
            tokenize=False,
            add_generation_prompt=True
        )

        inputs = self.tokenizer(text, return_tensors="pt")
        response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
        response = self.tokenizer.decode(response_ids, skip_special_tokens=True)

        # Update the conversation history
        self.history.append({"role": "user", "content": user_input})
        self.history.append({"role": "assistant", "content": response})

        return response

# Example usage
if __name__ == "__main__":
    chatbot = QwenChatbot()

    # First input (no /think or /no_think tag; thinking mode is enabled by default)
    user_input_1 = "How many r's in strawberries?"
    print(f"User: {user_input_1}")
    response_1 = chatbot.generate_response(user_input_1)
    print(f"Bot: {response_1}")
    print("----------------------")

    # Second input with /no_think
    user_input_2 = "Then, how many r's in blueberries? /no_think"
    print(f"User: {user_input_2}")
    response_2 = chatbot.generate_response(user_input_2)
    print(f"Bot: {response_2}") 
    print("----------------------")

    # Third input with /think
    user_input_3 = "Really? /think"
    print(f"User: {user_input_3}")
    response_3 = chatbot.generate_response(user_input_3)
    print(f"Bot: {response_3}")

NOTE
For API compatibility, when enable_thinking=True, regardless of whether the user uses /think or /no_think, the model will always output a block wrapped in <think>...</think>. However, the content inside this block may be empty if thinking is disabled. When enable_thinking=False, the soft switches are not valid. Regardless of any /think or /no_think tags input by the user, the model will not generate think content and will not include a <think>...</think> block.
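Because the <think>...</think> block is always present (possibly empty) when enable_thinking=True, API consumers can separate thinking from the answer with plain string handling. A minimal string-level sketch (the quick-start snippet above does the same thing at the token level):

```python
def split_think(text: str) -> tuple[str, str]:
    """Split a Qwen3 response into (thinking, answer); thinking may be empty."""
    start, end = "<think>", "</think>"
    if start in text and end in text:
        s = text.index(start) + len(start)
        e = text.index(end, s)
        return text[s:e].strip(), text[e + len(end):].strip()
    # No block at all (e.g., enable_thinking=False): everything is the answer.
    return "", text.strip()

print(split_think("<think>Count each r.</think>There are three."))
print(split_think("<think>\n\n</think>Two."))  # empty thinking block
```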

Agentic Use

Qwen3 excels in tool calling capabilities. We recommend using Qwen-Agent to make the best use of the agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.

To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.

from qwen_agent.agents import Assistant

# Define the LLM
llm_cfg = {
    'model': 'Qwen3-1.7B',

    # Use the endpoint provided by Alibaba Cloud DashScope:
    # 'model_type': 'qwen_dashscope',
    # 'api_key': os.getenv('DASHSCOPE_API_KEY'),

    # Use a custom OpenAI API-compatible endpoint:
    'model_server': 'http://localhost:8000/v1',  # api_base
    'api_key': 'EMPTY',

    # Other parameters:
    # 'generate_cfg': {
    #         # Add: when the response content is `<think>this is the thought</think>this is the answer`;
    #         # Do not add: when the response has already been separated into reasoning_content and content.
    #         'thought_in_content': True,
    #     },
}

# Define the tools
tools = [
    {'mcpServers': {  # You can specify an MCP configuration file
            'time': {
                'command': 'uvx',
                'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
            },
            "fetch": {
                "command": "uvx",
                "args": ["mcp-server-fetch"]
            }
        }
    },
    'code_interpreter',  # Built-in tools
]

# Define the agent
bot = Assistant(llm=llm_cfg, function_list=tools)

# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
    pass
print(responses)

FAQ

What is the difference between Qwen3-1.7B's thinking and non-thinking modes?

Thinking mode is suited to complex logical reasoning, math, and coding tasks, while non-thinking mode serves efficient general-purpose dialogue. The model switches seamlessly between the two within a single model, optimizing performance for each scenario.

How do I enable or disable thinking mode in Qwen3-1.7B?

Set enable_thinking=True to enable thinking mode for complex reasoning, or enable_thinking=False for efficient non-thinking dialogue. The mode can also be switched dynamically per turn via the /think and /no_think instructions in user input.

In which applications does Qwen3-1.7B stand out?

The model excels at complex reasoning, creative writing, multilingual applications, and agentic tasks. Its dual-mode switching lets it optimize performance across scenarios such as math, code generation, and multi-turn dialogue.