Fixed Nanonets GGUFs!

#3 opened by shimmyshimmer (Unsloth AI org)

Hey guys, we've reuploaded the GGUFs with some llama.cpp and chat-template fixes, which should drastically improve performance.

Example:

Extract the text from the above document as if you were reading it naturally. Return the tables in html format. Return the equations in LaTeX representation. If there is an image in the document and image caption is not present, add a small description of the image inside the <img></img> tag; otherwise, add the image caption inside <img></img>. Watermarks should be wrapped in brackets. Ex: <watermark>OFFICIAL COPY</watermark>. Page numbers should be wrapped in brackets. Ex: <page_number>14</page_number> or <page_number>9/22</page_number>. Prefer using ☐ and ☑ for check boxes.
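
In case it is useful, here is a minimal way to try that prompt from the command line, assuming the llama-mtmd-cli example tool that ships with recent llama.cpp builds; the model/mmproj paths, sample_page.png and the -ngl value are placeholders, and the -p string is the prompt quoted above (abbreviated here):

# paths, image name and -ngl value are placeholders / assumptions
llama-mtmd-cli \
  -m Nanonets-OCR-s-UD-Q6_K_XL.gguf \
  --mmproj mmproj-F16.gguf \
  --image sample_page.png \
  --temp 0.0 -ngl 99 \
  -p "Extract the text from the above document as if you were reading it naturally. ..."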

For llama.cpp, do you recommend the following parameters, or are they recommended for Ollama only?
{
  "temperature": 0.0,
  "min_p": 0.01,
  "repeat_penalty": 1.0
}

Unsloth AI org

I'm not sure what Nanonets officially recommends.
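
For what it's worth, those three values map directly onto llama.cpp flags, so they can be used with llama-server or llama-cli as well (and, if I remember right, Ollama exposes the same knobs as PARAMETER lines in a Modelfile). A minimal llama-server sketch with placeholder paths:

# same values as the JSON above, expressed as llama.cpp flags
llama-server \
  -m Nanonets-OCR-s-UD-Q6_K_XL.gguf \
  --mmproj mmproj-F16.gguf \
  --jinja \
  --temp 0.0 --min-p 0.01 --repeat-penalty 1.0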

With the old version the output was okay, but never as good as in their announcement. I was using Q6 XL.

For me the new version is far worse than the old one. For the new release I tried Q6 XL and Q8 XL. The new version starts looping when it tries to create an HTML table. Also, in the new version the system prompt is not passed even when using --jinja, so I have been putting the prompt in as the agent message instead (see the request sketch after my settings below).

The following are my settings. I also tried the sampling parameters mentioned in my post above.

llama.cpp b5757, Vulkan backend:

${llama-cpp} \
  -m /home/tipu/.lmstudio/models/unsloth/Nanonets-OCR/Nanonets-OCR-s-UD-Q6_K_XL.gguf \
  --mmproj /home/tipu/.lmstudio/models/unsloth/Nanonets-OCR/nanonets-mmproj-F16.gguf \
  --jinja \
  -n -1 \
  -ngl 99 \
  --repeat-penalty 1.05 \
  --temp 0.0 \
  --top-p 1.0 \
  --min-p 0.0 \
  --top-k -1 \
  -t 4 \
  --no-webui \
  -a Nanonets-OCR \
  -c 10240 \
  --no-context-shift \
  --mlock \
  --seed 3502 \
  --swa-full
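
For the system-prompt issue above, the workaround I mean is to put the prompt explicitly into the request body instead of relying on the server-side template. A rough sketch against the server started with the settings above (localhost:8080 is the default llama-server port, page.png is a placeholder, and I am assuming the OpenAI-style image_url/base64 format that recent multimodal llama-server builds accept):

# send the prompt explicitly in the messages array; page.png is a placeholder
IMG_B64=$(base64 -w0 page.png)
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d @- <<EOF
{
  "model": "Nanonets-OCR",
  "temperature": 0.0,
  "messages": [
    {"role": "system", "content": "PUT THE SYSTEM TEXT HERE"},
    {"role": "user", "content": [
      {"type": "image_url", "image_url": {"url": "data:image/png;base64,${IMG_B64}"}},
      {"type": "text", "text": "Extract the text from the above document as if you were reading it naturally."}
    ]}
  ]
}
EOF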

I tried the model via llama.cpp. While it gives nice results for text-only input, image encoding and decoding take a lot of time per request (300s).
Any idea how to accelerate text + image instructions?
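
Not an official answer, but two things that usually help: check that the vision projector is actually running on the GPU (as far as I know, recent llama.cpp builds offload the mmproj by default when a GPU backend is active), and downscale very large scans before sending them, since encode time grows quickly with image size. A quick ImageMagick sketch, with the 1536 px cap being an arbitrary example value:

# shrink only if the image is larger than 1536 px on its longest side (ImageMagick 7; use convert with v6)
magick input_page.png -resize '1536x1536>' page_small.png

Then pass page_small.png to the model instead of the original scan.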

Can you give me your llama.cpp version and settings? For me it is not working well on the llama.cpp server, but in LM Studio it works fine.
