OVOS - Whisper Medium Portuguese

This model is an ONNX-format export of remynd/whisper-medium-pt, intended for use on edge devices and in CPU-based inference environments.

Requirements

The export is based on Hugging Face Optimum (with the ONNX Runtime extras) and onnx-asr. The requirements can be installed with:

$ pip install optimum[onnxruntime] onnx-asr

Usage

import onnx_asr
model = onnx_asr.load_model("OpenVoiceOS/whisper-medium-pt-onnx")
print(model.recognize("test.wav"))
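Whisper expects 16 kHz mono 16-bit PCM audio. As a quick smoke test, a matching clip can be generated with the standard library alone; the file name and the transcription call below mirror the usage example above (the model call is left commented out because it downloads the weights):

```python
import wave

SAMPLE_RATE = 16000  # Whisper's expected input rate

# Write one second of 16 kHz mono 16-bit silence to test.wav.
with wave.open("test.wav", "wb") as f:
    f.setnchannels(1)   # mono
    f.setsampwidth(2)   # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes(b"\x00\x00" * SAMPLE_RATE)  # 1 s of silence

# The clip can then be transcribed as in the usage example:
# import onnx_asr
# model = onnx_asr.load_model("OpenVoiceOS/whisper-medium-pt-onnx")
# print(model.recognize("test.wav"))
```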

Export

Following the onnx-asr convert-model-to-onnx guide:

$ export FORCE_ONNX_EXTERNAL_DATA=1
$ optimum-cli export onnx --task automatic-speech-recognition-with-past --model remynd/whisper-medium-pt whisper-onnx
$ cd whisper-onnx && rm decoder.onnx* decoder_with_past_model.onnx*  # only the merged decoder is needed

Licensing

The license is derived from the original model: Apache 2.0. For more details, please refer to remynd/whisper-medium-pt.
