This is an MXFP4_MOE quantization of the model LFM2-8B-A1B.

Original model: https://huggingface.co/unsloth/LFM2-8B-A1B

Format: GGUF
Model size: 8B params
Architecture: lfm2moe
Quantization: 4-bit (MXFP4)
Model repository: noctrex/LFM2-8B-A1B-MXFP4_MOE-GGUF
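
Below is a minimal usage sketch with llama-cpp-python, assuming a llama.cpp build recent enough to support the lfm2moe architecture; the GGUF filename is hypothetical, so substitute the actual file shipped in this repository.

```python
# Minimal sketch: load the MXFP4 GGUF quant with llama-cpp-python.
# The filename below is hypothetical - replace it with the .gguf file
# from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="LFM2-8B-A1B-MXFP4_MOE.gguf",  # hypothetical filename
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize mixture-of-experts models in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```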