Created a GGUF/Quantized version for you guys
I love you guys' work and I am a big fan of @inflatebot, so:
I went ahead and converted your released model here from safetensors to GGUF (BF16), and then created several quantized versions from it as well.
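For anyone curious, here's a minimal sketch of the usual pipeline using llama.cpp's conversion and quantization tools. The file paths, model directory name, and the Q4_K_M target are illustrative assumptions, not the exact commands I ran:

```python
# Minimal sketch of the safetensors -> GGUF -> quantized pipeline,
# assuming a local llama.cpp checkout with its Python requirements
# installed and llama-quantize already built. Paths are hypothetical.
import subprocess

# 1. Convert the HF safetensors checkpoint to a BF16 GGUF.
subprocess.run(
    ["python", "llama.cpp/convert_hf_to_gguf.py", "./Tigerlily-R3",
     "--outtype", "bf16", "--outfile", "Tigerlily-R3-BF16.gguf"],
    check=True,
)

# 2. Quantize the BF16 GGUF down to a smaller type (Q4_K_M as an example).
subprocess.run(
    ["llama.cpp/build/bin/llama-quantize",
     "Tigerlily-R3-BF16.gguf", "Tigerlily-R3-Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)
```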
Hopefully you find it helpful/useful! If not, I'm sorry to have bothered you guys - it was not my intention to annoy you or anything like that.
You can find the GGUF/Quantized release here:
Tigerlily-R3 GGUF and Quantization Release
(Edit: As of this moment I am still waiting for the quant files to finish uploading, but the BF16 is already up.)
Just wanted to mention that this also includes the MMPROJ vision projector(s): the Q8_0 version is already uploaded, and my slow connection is working on finishing the upload of the F16 and F32 versions as well.
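For anyone who wants to try the vision side, llama.cpp's multimodal CLI takes the projector file via `--mmproj` (llama-mtmd-cli in recent builds; older builds shipped llama-llava-cli with the same flags). A hedged example; the binary path, file names, and prompt are assumptions:

```python
# Run the quantized model together with its MMPROJ vision projector
# through llama.cpp's multimodal CLI. All paths here are illustrative.
import subprocess

subprocess.run(
    ["llama.cpp/build/bin/llama-mtmd-cli",
     "-m", "Tigerlily-R3-Q8_0.gguf",          # quantized language model
     "--mmproj", "mmproj-Tigerlily-R3-F16.gguf",  # vision projector
     "--image", "photo.png",
     "-p", "Describe this image."],
    check=True,
)
```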
I appreciate it, but models here are not intended for people to actually use.
This is one reagent for an ongoing project; I'll make a post when it's actually finished.
(You don't have to delete anything, public stuff is public lol, just letting you know x3)
Ah, understood! I'll leave it up since you stated that would be okay to do,
but I'll wait to convert any other versions of this until you announce a full release.
Thanks btw! It's nice to "meet" you.
Hey @inflatebot, sorry to bother you (and please tell me if I am)!
BUT: it's kinda crazy and awesome that there have been 551 downloads
of the GGUF quants release I made within the first day!
(Obviously Mradermacher, being a much more renowned and respected member,
has more downloads of this model for their GGUF conversion and quantization -
but that's only for their iMatrix release.
The standard (non-iMatrix) quants from my release seem to be decently popular.
Personally? I thought that was pretty cool and worth poking ya.)
Okay, I'll stop tagging you and bothering you. Thanks again!
(Sorry for bugging you, honestly - I just get excited...
It's something I've been working to curb, so as not to bother people.)