Alibaba Qwen AI Releases Compact Dense Qwen3-VL 4B/8B (Instruct and Thinking) Models with FP8 Checkpoints
Do you really need a huge VLM when a dense Qwen3-VL 4B/8B (Instruct/Thinking) model with FP8 checkpoints runs in low VRAM while retaining the 256K→1M context window and the full feature surface? The Alibaba Qwen team expands its multimodal lineup...
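To make the low-VRAM claim concrete, here is a minimal sketch of loading one of the compact FP8 checkpoints for a single image-text query. It assumes the Instruct variant is published under the Hugging Face id `Qwen/Qwen3-VL-4B-Instruct-FP8` and that a recent `transformers` release maps it to the generic `AutoModelForImageTextToText` / `AutoProcessor` classes; both the model id and the exact API surface are assumptions for illustration, not details confirmed by the announcement.

```python
# Minimal sketch (assumptions: model id and transformers support as noted above).
from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "Qwen/Qwen3-VL-4B-Instruct-FP8"  # hypothetical checkpoint id

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    device_map="auto",   # place weights on the available GPU(s)
    torch_dtype="auto",  # keep the precision stored in the checkpoint
)

# One user turn containing an image plus a text instruction.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/chart.png"},
            {"type": "text", "text": "Summarize this chart in one sentence."},
        ],
    }
]

# Build model inputs from the chat template and generate a short answer.
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128)
answer = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)[0]
print(answer)
```

The same pattern would apply to the 8B and Thinking variants by swapping the checkpoint id; the FP8 weights are what keep the memory footprint small enough for a single consumer GPU.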