Qwen3 Deep Dive: Breakthroughs in AI Reasoning and Agent Capabilities
Qwen3 is the latest generation of large language models in the Qwen series, featuring groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support. It uniquely supports seamless switching between thinking and non-thinking modes within a single model, offers superior human preference alignment, and excels in agentic tasks with tool-calling capabilities. The model natively supports a context length of 32,768 tokens, extendable to 131,072 tokens with YaRN scaling.
Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- Unique support for seamless switching between thinking mode (for complex logical reasoning, math, and coding) and non-thinking mode (for efficient, general-purpose dialogue) within a single model, ensuring optimal performance across various scenarios.
- Significantly enhanced reasoning capabilities, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- Superior human preference alignment, excelling in creative writing, role-playing, multi-turn dialogue, and instruction following, delivering a more natural, engaging, and immersive conversational experience.
- Expertise in agent capabilities, enabling precise integration with external tools in both thinking and non-thinking modes, and achieving leading performance among open-source models in complex agent-based tasks.
- Support for 100+ languages and dialects, with strong capabilities for multilingual instruction following and translation.
Model Overview
Qwen3-8B has the following features:
- Type: Causal Language Model
- Training Stage: Pretraining & Post-training
- Number of Parameters: 8.2B
- Number of Parameters (Non-Embedding): 6.95B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: 32,768 tokens natively, extendable to 131,072 tokens with YaRN.
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our blog, GitHub, and Documentation.
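The YaRN extension mentioned above is typically enabled by adding a rope_scaling block to the model's config.json, as described in Qwen's documentation (serving frameworks such as vLLM also accept equivalent command-line flags). A sketch of the config fragment, where factor 4.0 corresponds to 4 × 32,768 ≈ 131,072 tokens:

```json
{
  "rope_scaling": {
    "rope_type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768
  }
}
```

Note that static YaRN applies the same scaling to all inputs, so it is generally advisable to enable it only when long contexts are actually needed.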
Quickstart
The code of Qwen3 has been integrated into the latest Hugging Face transformers library, and we advise you to use the latest version of transformers.
With transformers<4.51.0, you will encounter the following error:
KeyError: 'qwen3'
The following code snippet shows how to use the model to generate content from a given input.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-8B"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# parse thinking content: find the last occurrence of the special token 151668 (</think>)
try:
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)
print("content:", content)
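The rindex-based split at the end of the snippet can be isolated and checked on a toy token list (the IDs below are made up, except 151668, which the snippet treats as the end-of-thinking marker):

```python
THINK_END = 151668  # token ID the snippet above searches for

def split_thinking(output_ids):
    """Split generated token IDs at the LAST occurrence of THINK_END.

    Returns (thinking_ids, content_ids); thinking_ids includes the marker
    itself, mirroring the slicing logic in the snippet above.
    """
    try:
        # distance from the end of the reversed list gives the index
        # just past the last THINK_END in the original order
        index = len(output_ids) - output_ids[::-1].index(THINK_END)
    except ValueError:
        index = 0  # no thinking block found
    return output_ids[:index], output_ids[index:]

# toy IDs: [10, 11] stand in for "thinking" tokens, [20, 21] for the answer
thinking, answer = split_thinking([10, 11, THINK_END, 20, 21])
```

Searching from the end matters: if the thinking content itself were ever followed by multiple markers, only the last one separates thinking from the final answer.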
For deployment, you can use sglang>=0.4.6.post1 or vllm>=0.8.5 to create an OpenAI-compatible API endpoint:
- SGLang:
python -m sglang.launch_server --model-path Qwen/Qwen3-8B --reasoning-parser qwen3
- vLLM:
vllm serve Qwen/Qwen3-8B --enable-reasoning --reasoning-parser deepseek_r1
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers have also supported Qwen3.
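Once such a server is up, it can be queried like any OpenAI-compatible endpoint. A minimal standard-library sketch; the base URL and port are assumptions matching vLLM's default, so adjust them for your deployment:

```python
import json
from urllib import request

def build_chat_request(prompt, base_url="http://localhost:8000/v1"):
    """Build (but do not send) an OpenAI-style chat completion request.

    base_url assumes vLLM's default port; adjust for your deployment.
    """
    payload = {
        "model": "Qwen/Qwen3-8B",
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("Give me a short introduction to large language models.")
# To actually send it: body = request.urlopen(req).read()
```

The official openai Python client works the same way against these endpoints by pointing its base_url at the local server.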
Switching Between Thinking and Non-Thinking Modes
The enable_thinking switch is also available in the APIs created by SGLang and vLLM. Please refer to our documentation for SGLang and vLLM users.
enable_thinking=True
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting enable_thinking=True or leaving it as the default value in tokenizer.apply_chat_template, the model will engage its thinking mode.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # True is the default value for enable_thinking
)
In this mode, the model will generate thinking content wrapped in a <think>...</think> block, followed by the final response.
For thinking mode, use Temperature=0.6, TopP=0.95, TopK=20, and MinP=0 (the default settings in generation_config.json). DO NOT use greedy decoding, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the Best Practices section.
enable_thinking=False
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for improving efficiency.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False  # Setting enable_thinking=False disables thinking mode
)
In this mode, the model will not generate any thinking content and will not include a <think>...</think> block.
For non-thinking mode, we suggest using Temperature=0.7, TopP=0.8, TopK=20, and MinP=0. For more detailed guidance, please refer to the Best Practices section.
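The two recommended sampling profiles can be kept as plain keyword-argument dictionaries and spread into model.generate (or an equivalent API). A small sketch; the helper name is just for illustration:

```python
# Recommended sampling settings per mode (values from the text above).
# do_sample=True rules out greedy decoding, which is discouraged in thinking mode.
SAMPLING = {
    "thinking":     dict(do_sample=True, temperature=0.6, top_p=0.95, top_k=20, min_p=0.0),
    "non_thinking": dict(do_sample=True, temperature=0.7, top_p=0.8,  top_k=20, min_p=0.0),
}

def generation_kwargs(thinking: bool, max_new_tokens: int = 32768):
    """Return keyword arguments for model.generate() for the chosen mode."""
    mode = "thinking" if thinking else "non_thinking"
    return {"max_new_tokens": max_new_tokens, **SAMPLING[mode]}

# Usage sketch: model.generate(**model_inputs, **generation_kwargs(thinking=True))
```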
Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft-switch mechanism that lets users dynamically control the model's behavior when enable_thinking=True. Specifically, you can add /think and /no_think to user prompts or system messages to switch the model's thinking mode turn by turn. In multi-turn conversations, the model follows the most recent instruction.
Here is an example of a multi-turn conversation:
from transformers import AutoModelForCausalLM, AutoTokenizer

class QwenChatbot:
    def __init__(self, model_name="Qwen/Qwen3-8B"):
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForCausalLM.from_pretrained(model_name)
        self.history = []

    def generate_response(self, user_input):
        messages = self.history + [{"role": "user", "content": user_input}]
        text = self.tokenizer.apply_chat_template(
            messages,
            tokenize=False,
            add_generation_prompt=True
        )
        inputs = self.tokenizer(text, return_tensors="pt")
        response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
        response = self.tokenizer.decode(response_ids, skip_special_tokens=True)

        # update the conversation history
        self.history.append({"role": "user", "content": user_input})
        self.history.append({"role": "assistant", "content": response})
        return response

# Example usage
if __name__ == "__main__":
    chatbot = QwenChatbot()

    # First input (no /think or /no_think tag; thinking mode is enabled by default)
    user_input_1 = "How many r's in strawberries?"
    print(f"User: {user_input_1}")
    response_1 = chatbot.generate_response(user_input_1)
    print(f"Bot: {response_1}")
    print("----------------------")

    # Second input with /no_think
    user_input_2 = "Then, how many r's in blueberries? /no_think"
    print(f"User: {user_input_2}")
    response_2 = chatbot.generate_response(user_input_2)
    print(f"Bot: {response_2}")
    print("----------------------")

    # Third input with /think
    user_input_3 = "Really? /think"
    print(f"User: {user_input_3}")
    response_3 = chatbot.generate_response(user_input_3)
    print(f"Bot: {response_3}")
For API compatibility, when enable_thinking=True, regardless of whether the user uses /think or /no_think, the model will always output a block wrapped in <think>...</think>. However, the content inside this block may be empty when thinking is disabled.
When enable_thinking=False, the soft switches are not valid. Regardless of any /think or /no_think tags in the user input, the model will not generate thinking content and will not include a <think>...</think> block.
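The "most recent instruction wins" rule for the soft switch can be sketched as plain logic. This is illustrative only: the model itself interprets the tags, and the helper below is a hypothetical aid for reasoning about a conversation's effective mode:

```python
def effective_mode(messages, default="thinking"):
    """Illustrative: report which mode the last /think or /no_think tag
    in the user/system messages would select. The real switching happens
    inside the model; this just mirrors the stated rule."""
    mode = default
    for msg in messages:
        if msg["role"] not in ("user", "system"):
            continue
        # scan in message order, so later tags override earlier ones
        if "/no_think" in msg["content"]:
            mode = "non-thinking"
        elif "/think" in msg["content"]:
            mode = "thinking"
    return mode
```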
FAQ
What is the difference between Qwen3's thinking mode and non-thinking mode?
Thinking mode is suited to complex logical reasoning, math, and coding tasks, while non-thinking mode is for efficient, general-purpose dialogue. Qwen3 supports seamless switching between the two modes within a single model, ensuring optimal performance across scenarios.
How do I enable Qwen3's thinking mode?
Set enable_thinking=True in your code to enable thinking mode. This mode is designed for tasks that require deep reasoning, such as mathematical computation and code generation, and can significantly improve the model's performance on them.
How long a context does Qwen3 support?
Qwen3 models natively support a context length of 32,768 tokens. With YaRN scaling, this can be extended to 131,072 tokens, making the models suitable for long-text tasks.