
DeepSeek-R1-Distill-Qwen-1.5B

Introduction

DeepSeek introduces its first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrates remarkable reasoning performance. Through RL, DeepSeek-R1-Zero naturally developed numerous powerful and interesting reasoning behaviors. However, it encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance, DeepSeek also introduces DeepSeek-R1, which incorporates cold-start data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
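
For readers who want to try the distilled 1.5B checkpoint directly, here is a minimal sketch using Hugging Face transformers. It assumes the weights are available under the Hub id deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B; substitute a local directory if you downloaded the weights instead. The sampling temperature of 0.6 follows DeepSeek's published usage recommendation.

```python
# Minimal sketch: run DeepSeek-R1-Distill-Qwen-1.5B with Hugging Face
# transformers. The model id below is the Hub path; swap in a local
# directory if you downloaded the weights from this page instead.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# R1-style models emit their chain of thought inside <think>...</think>
# before the final answer, so leave generous room in max_new_tokens.
messages = [{"role": "user", "content": "What is 17 * 23? Think step by step."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024, temperature=0.6, do_sample=True)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```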

Model Summary


Post-Training: Large-Scale Reinforcement Learning on the Base Model

  • DeepSeek directly applies reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) reasoning for solving complex problems, resulting in DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is described as the first open research to validate that the reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT; a sketch of the kind of rule-based reward this relies on follows this list. This breakthrough paves the way for future advances in the area.

  • DeepSeek introduces its pipeline for developing DeepSeek-R1. The pipeline incorporates two RL stages, aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that seed the model's reasoning and non-reasoning capabilities. DeepSeek believes the pipeline will benefit the industry by enabling the creation of better models.
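
The exact training recipe lives in the DeepSeek-R1 report; purely as an illustration of the rule-based rewards it describes (a format reward for wrapping the chain of thought in <think> tags, plus an accuracy reward for a verifiable final answer), here is a hedged sketch. The regular expression and the scoring weights are assumptions for illustration, not DeepSeek's actual values.

```python
import re

# Illustrative sketch of a rule-based RL reward in the spirit of the
# DeepSeek-R1 report: a format reward for wrapping the chain of thought
# in <think> tags, plus an accuracy reward for a verifiable final answer.
# The tag names follow the paper's description; the weights are assumptions.
THINK_RE = re.compile(r"<think>(.*?)</think>\s*(.*)", re.DOTALL)

def reward(completion: str, gold_answer: str) -> float:
    match = THINK_RE.fullmatch(completion.strip())
    if match is None:
        return 0.0                      # no format reward without the tags
    _, final_answer = match.groups()
    score = 0.1                         # format reward (assumed weight)
    if final_answer.strip() == gold_answer.strip():
        score += 1.0                    # accuracy reward (assumed weight)
    return score

# Example: a well-formatted, correct completion scores highest.
print(reward("<think>17*23 = 391</think> 391", "391"))  # 1.1
print(reward("391", "391"))                             # 0.0 (missing tags)
```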


Distillation: Smaller Models Can Be Powerful Too

  • DeepSeek demonstrates that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance compared to the reasoning patterns discovered through RL on small models. The open-source DeepSeek-R1, as well as its API, is intended to benefit the research community in distilling better smaller models in the future.
  • Using the reasoning data generated by DeepSeek-R1, DeepSeek fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. DeepSeek open-sources distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on the Qwen2.5 and Llama3 series to the community; a minimal fine-tuning sketch of this distillation step follows this list.
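
To make the distillation step concrete, the sketch below fine-tunes a small dense student on reasoning traces sampled from DeepSeek-R1. The student base model, the r1_traces.jsonl file, its prompt/response fields, and the hyperparameters are all illustrative assumptions; they are not DeepSeek's published recipe.

```python
# Illustrative sketch of the distillation recipe: supervised fine-tuning
# of a small dense student on reasoning traces generated by DeepSeek-R1.
# The student model id, data file, field names, and hyperparameters are
# assumptions for illustration only.
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

student_id = "Qwen/Qwen2.5-1.5B"                  # assumed student base model
tokenizer = AutoTokenizer.from_pretrained(student_id)
model = AutoModelForCausalLM.from_pretrained(student_id)
model.train()

# Each record is assumed to hold a prompt and an R1-generated trace that
# already contains the <think>...</think> reasoning plus the final answer.
records = [json.loads(line) for line in open("r1_traces.jsonl")]

def encode(rec):
    # Assumed plain-text formatting; DeepSeek's exact template is not public.
    text = rec["prompt"] + "\n" + rec["response"] + tokenizer.eos_token
    return tokenizer(text, return_tensors="pt").input_ids[0]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
for rec in records:                               # batch size 1 for brevity
    ids = encode(rec).unsqueeze(0)
    loss = model(input_ids=ids, labels=ids).loss  # standard next-token SFT loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Masking the prompt tokens out of the loss, batching, and learning-rate scheduling are omitted for brevity; a production run would add all three.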