Qwen2.5-VL-7B-Instruct

Introduction

Since Qwen2-VL's release, numerous developers have built new models on the Qwen2-VL vision-language models and provided valuable feedback. During that period, the Qwen team focused on building even more useful vision-language models. Qwen2.5-VL is the latest addition to the Qwen family.

Key Enhancements:

  • Understand things visually: Qwen2.5-VL is not only proficient in recognizing common objects such as flowers, birds, fish, and insects, but it is highly capable of analyzing texts, charts, icons, graphics, and layouts within images.

  • Being agentic: Qwen2.5-VL acts directly as a visual agent that can reason and dynamically direct tools, making it capable of computer use and phone use.

  • Understanding long videos and capturing events: Qwen2.5-VL can comprehend videos of over 1 hour, and this time it gains the new ability of capturing events by pinpointing the relevant video segments.

  • Capable of visual localization in different formats: Qwen2.5-VL can accurately localize objects in an image by generating bounding boxes or points, and it can provide stable JSON outputs for coordinates and attributes.

  • Generating structured outputs: for data such as scans of invoices, forms, and tables, Qwen2.5-VL supports structured outputs of their contents, benefiting use cases in finance, commerce, and beyond.
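The localization bullet above mentions stable JSON outputs for coordinates and attributes. As a minimal sketch of how a downstream application might consume such output, here is a stdlib-only parser; the exact schema and field names (`bbox_2d`, `label`) are illustrative assumptions, not a documented format:

```python
import json

# Hypothetical grounding output of the kind described above: a JSON array of
# detected objects, each with a label and a bounding box given as absolute
# pixel coordinates [x1, y1, x2, y2].
raw_output = """
[
  {"bbox_2d": [110, 45, 280, 210], "label": "bird"},
  {"bbox_2d": [300, 60, 420, 330], "label": "flower"}
]
"""

def parse_detections(text):
    """Parse a JSON array of detections, skipping degenerate boxes."""
    results = []
    for det in json.loads(text):
        x1, y1, x2, y2 = det["bbox_2d"]
        if x1 >= x2 or y1 >= y2:
            continue  # skip boxes with non-positive width or height
        results.append((det["label"], (x1, y1, x2, y2)))
    return results

for label, box in parse_detections(raw_output):
    print(label, box)
```

Validating each box before use is worthwhile in practice, since model-generated JSON can occasionally contain malformed coordinates.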

Model Architecture Updates:

  • Dynamic Resolution and Frame Rate Training for Video Understanding:

Qwen2.5-VL extends dynamic resolution to the temporal dimension by adopting dynamic FPS sampling, enabling the model to comprehend videos at various sampling rates. Accordingly, the time dimension of mRoPE is updated with IDs and absolute time alignment, enabling the model to learn temporal sequence and speed and ultimately acquire the ability to pinpoint specific moments.
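The idea of dynamic FPS sampling with absolute time alignment can be sketched in a few lines: sample frames from a clip at a chosen rate and pair each frame with its absolute timestamp, which is what lets temporal position IDs reflect real time. The function and parameter names below are illustrative, not Qwen's implementation:

```python
def sample_frames(duration_s, native_fps, target_fps, max_frames=768):
    """Return (frame_index, timestamp_seconds) pairs sampled at target_fps."""
    step = native_fps / target_fps            # native frames per sampled frame
    total = int(duration_s * native_fps)      # total native frames in the clip
    indices = [int(i * step) for i in range(int(total / step))][:max_frames]
    # Pair each frame index with its absolute time in seconds, so downstream
    # position encodings can align to real time rather than frame count.
    return [(idx, idx / native_fps) for idx in indices]

# Sampling a 10-second 30 fps clip at 2 fps yields 20 frames, 0.5 s apart.
frames = sample_frames(10.0, 30.0, 2.0)
print(len(frames), frames[:3])
```

Because the timestamps (not just the frame ordinals) are carried along, two clips sampled at different rates still map the same moment to the same absolute time.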

  • Streamlined and Efficient Vision Encoder

Qwen2.5-VL enhances both training and inference speeds by strategically implementing window attention into the ViT. The ViT architecture is further optimized with SwiGLU and RMSNorm, aligning it with the structure of the Qwen2.5 LLM.
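Of the two Qwen2.5-LLM-style components mentioned above, RMSNorm is compact enough to sketch in plain Python: it rescales a vector by the reciprocal of its root mean square and applies a learned per-channel gain, subtracting no mean (unlike LayerNorm). This is a generic illustration of the technique, not the model's actual code:

```python
import math

def rms_norm(x, gain, eps=1e-6):
    """Normalize x by its root mean square, then apply a per-channel gain."""
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [g * v / rms for g, v in zip(gain, x)]

v = [1.0, -2.0, 3.0]
out = rms_norm(v, gain=[1.0, 1.0, 1.0])
print(out)  # with unit gain, the output's RMS is (almost exactly) 1
```

Dropping the mean subtraction saves a pass over the activations, one reason RMSNorm is a common LayerNorm replacement in recent LLMs.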

The series includes three models with 3, 7 and 72 billion parameters. This repository contains the instruction-tuned 7B Qwen2.5-VL model.
