BitNet b1.58 2B4T Focuses on Speed, Efficiency, Open Access

Microsoft released what it describes as the largest 1-bit AI model to date, BitNet b1.58 2B4T.
Unlike traditional large language models that depend on GPUs and massive infrastructure, the model is built to operate efficiently on CPUs, including Apple’s M2 chip, Microsoft researchers said. The computing giant claims the model, released under the permissive MIT license, marks a significant move toward more accessible and energy-efficient AI systems.
BitNet works by simplifying the internal architecture of AI models. Instead of relying on full-precision or multi-bit quantization for their weights – the parameters that define the model's behavior – BitNet uses just three values: -1, 0 and 1. This quantization reduces the computational and memory requirements, making the model much lighter and faster to run on hardware with limited resources.
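The idea of mapping weights to three values can be illustrated with a short sketch. The snippet below uses absmean scaling, the quantization scheme described in the published BitNet papers; it is an illustrative approximation, not Microsoft's actual implementation.

```python
import numpy as np

def ternary_quantize(w: np.ndarray):
    """Quantize a weight matrix to {-1, 0, 1} via absmean scaling
    (sketch of the scheme described in the BitNet papers)."""
    # Scale by the mean absolute value of the weights.
    gamma = np.abs(w).mean() + 1e-8
    # Round each scaled weight to the nearest value in {-1, 0, 1}.
    w_q = np.clip(np.round(w / gamma), -1, 1)
    return w_q, gamma

w = np.array([[0.42, -0.07, -0.95],
              [0.01,  0.63, -0.30]])
w_q, gamma = ternary_quantize(w)
# w_q contains only -1, 0 and 1; gamma is kept to rescale at inference time
```

Because every weight is one of three values, matrix multiplications reduce largely to additions and subtractions, which is why such models can run efficiently on CPUs.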
The model has 2 billion parameters and was trained on a dataset containing 4 trillion tokens. For context, that’s equivalent to 33 million books.
In benchmark testing, BitNet b1.58 2B4T reportedly outperformed models including Meta’s Llama 3.2 1B, Google’s Gemma 3 1B and Alibaba’s Qwen 2.5 1.5B. These evaluations included tasks such as GSM8K, which consists of grade-school-level math problems, and PIQA, which measures basic physical common sense reasoning.
Performance isn’t just about accuracy. Microsoft’s research team claims BitNet b1.58 2B4T runs significantly faster than its peers, sometimes clocking in at twice the speed, while consuming far less memory. That combination of speed and efficiency positions the model as a potential fit for environments where power and processing capabilities are limited.
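The memory savings behind those claims follow from simple arithmetic: storing a weight in roughly 1.58 bits (log2 of 3) instead of 16 bits shrinks the weight footprint by about an order of magnitude. A back-of-the-envelope estimate for a 2-billion-parameter model:

```python
def weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
    """Estimate memory for model weights alone; excludes activations,
    KV cache and runtime overhead."""
    return n_params * bits_per_weight / 8 / 1e9

params = 2e9                                # 2 billion parameters
fp16_gb = weight_memory_gb(params, 16)      # 16-bit weights: ~4.0 GB
ternary_gb = weight_memory_gb(params, 1.58) # ~1.58-bit weights: ~0.4 GB
```

Actual memory use will be higher than these figures because of activations and framework overhead, but the roughly 10x gap in weight storage is what makes CPU-only deployment plausible.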
The model's benefits come with caveats. The performance metrics depend on the use of bitnet.cpp, Microsoft's custom inference framework. The framework delivers the model's runtime performance, but its hardware compatibility is limited: bitnet.cpp does not yet support GPUs, the dominant platform for training and deploying AI models.