Open4bits / Whisper Base FP16
This repository provides the Whisper Base model converted to FP16 (float16) precision, published by Open4bits to enable more efficient inference while maintaining transcription quality.
The underlying Whisper model and architecture are owned by OpenAI. This repository contains only a precision-converted version of the original model weights.
The model is designed for multilingual speech-to-text tasks and can be used in research, experimentation, and production ASR pipelines.
Model Overview
Whisper is a sequence-to-sequence transformer model developed by OpenAI for automatic speech recognition and speech translation.
This release uses the Base variant and preserves the original architecture while reducing memory usage through FP16 precision.
Model Details
- Architecture: Whisper Base
- Parameters: ~74 million
- Precision: float16 (FP16)
- Task: Automatic Speech Recognition (ASR)
- Languages: Multilingual
- Weight tying: Preserved
- Compatibility: Hugging Face Transformers, PyTorch
This conversion improves inference speed and lowers VRAM requirements compared to FP32 versions, making it suitable for deployment on consumer and server-grade GPUs.
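As a rough sketch of how this checkpoint might be loaded in FP16 with Hugging Face Transformers (the repo id comes from this card; the `"audio"`-free snippet below only loads weights and estimates memory, it does not transcribe):

```python
# Sketch: load the FP16 checkpoint with Transformers and PyTorch.
# The repo id "Open4bits/whisper-base-f16" is taken from this card.
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor

model_id = "Open4bits/whisper-base-f16"
processor = WhisperProcessor.from_pretrained(model_id)
model = WhisperForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # keep weights in FP16 instead of upcasting to FP32
).to("cuda")  # FP16 inference targets GPUs; CPU FP16 support is limited

# Back-of-the-envelope weight memory: ~74M parameters at 2 bytes each (FP16)
# is roughly 148 MB, versus roughly 296 MB at 4 bytes each (FP32).
params = 74_000_000
print(f"FP16 ≈ {params * 2 / 1e6:.0f} MB, FP32 ≈ {params * 4 / 1e6:.0f} MB")
```

The `torch_dtype=torch.float16` argument asks `from_pretrained` to keep the weights in half precision, which is what delivers the memory savings described above.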
Intended Use
This model is intended for:
- Speech-to-text transcription
- Multilingual ASR applications
- Research and benchmarking
- Efficient inference in low-memory environments
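For the transcription use cases above, a minimal sketch using the Transformers ASR pipeline (the repo id is from this card; the audio file path `"audio.wav"` is a placeholder you supply yourself):

```python
# Sketch: transcribe an audio file with the Transformers ASR pipeline.
# "audio.wav" is a hypothetical local file; replace it with your own audio.
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Open4bits/whisper-base-f16",
    torch_dtype=torch.float16,
    device=0,  # first CUDA GPU; use device="cpu" (and drop FP16) without a GPU
)

# The pipeline's feature extractor resamples the input to 16 kHz for Whisper.
result = asr("audio.wav")
print(result["text"])
```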
Limitations
- Performance depends on audio quality, language, and accent
- Inherits known limitations of the Whisper Base architecture
- Not fine-tuned for domain-specific or highly noisy audio
License
This model is released under the Apache License 2.0. The original Whisper model and associated intellectual property are owned by OpenAI.
Support
If you find this model useful, please consider supporting the project by liking the repository. Your support helps us continue releasing and maintaining high-quality open models.
Model tree for Open4bits/whisper-base-f16
- Base model: openai/whisper-base

Evaluation results
- WER on LibriSpeech (clean) test set, self-reported: 5.009
- WER on LibriSpeech (other) test set, self-reported: 12.849
- WER on Common Voice 11.0 test set, self-reported: 131.000