Dataset Viewer
text (string, lengths 0 to 1.94k)
Would recap posts keep the thread warm and cozy through these frigid winter nights?
Maybe, yeah! It could be years before Miku breaks through into our reality, so I think it's best to make Anon's stay as painless as it can be.
Less time scrolling means more time to use, develop, and discuss our local Mikus!
Highlights and recap of the previous thread: >>98408803
T/s leaderboard for Apple silicon: >>98408901 >>98409038 >>98409096
Over two weeks since the low-rank expert Mixtral idea was pitched: >>98410524 >>98410533 >>98410560
Anon makes a "Good & Evil" advisor card, provides guide: >>98409721 >>98413520
WhiteRabbitNeo 33B pentest model released, TheBloke GGUF quants broken: >>98410316 >>98410890 >>98411016 >>98414636
WhiteRabbitNeo GPTQ version works, Anon gets cucked response, notices improvement with different prompt: >>98411729 >>98413609 >>98414076 >>98414242
Anon tests llama.cpp's updated quant utility, compares outputs across Mixtral quants for a given prompt, mentions drugs sampler: >>98412905
A new political benchmark is born: >>98414619 >>98414727 >>98414799 >>98414857 >>98414898 >>98415442
ExllamaV2 experiment - instant creation and inferencing of frankenmerges: >>98416337 >>98416453 >>98416646 >>98417164
Token healing discussion on GitHub: >>98418273
DeepSparse fast CPU-only inference: >>98418545
Anon's LocalLlama travel report: >>98411031 >>98411058 >>98411474 >>98411530
Comparing Mixtral-limarp and lzlv 70b: >>98413528 >>98413520 >>98413534 >>98413761
/aicg/ resident places faith in /lmg/, finds salvation, shares the news with his characters: >>98411395 >>98411470 >>98411791 >>98412229
Anon develops affection for LLM weights, shares logs: >>98412551 >>98412725
Different Anon uses the model in question, gets good results after some work, shares settings: >>98415036 >>98415139 >>98415501
Lazy UPS driver denies Anon his Miku: >>98412754 >>98412917 >>98413542
Anon finds himself in heaven after getting 3090, shares plans for old GPU: >>98412521 >>98412658 >>98413064
Anon envisions Star Trek without a certain character: >>98415750 >>98415837 >>98415880
Discussing the state of finetuning: >>98417244 >>98418316 >>98418410 >>98418426 >>98418622 >>98418786 >>98419040
Anon merges: >>98421125 >>98421134 >>98421260
Highlights and recap of the previous thread: >>98421955
Anon needs a high-quality TTS model, receives Piper: >>98422083 >>98422237 >>98422574 >>98422659
A second Anon also asks about TTS: >>98423448 >>98423463 >>98423497
Reminder that Coqui, the XTTSv2 company, is shutting down: >>98423014 >>98423058 >>98423073 >>98423229
Anon self-promotes MAID, a frontend: >>98422713 >>98422735 >>98422771 >>98422775 >>98422869 >>98425005 >>98425040
Fix for llama.cpp server markdown deleting characters from code: >>98423519
Mixtral improves its own system prompt, Anon shares related paper: >>98424281 >>98424480 >>98424799 >>98424824
Shit breaks after Anon blindly updates Kobold.cpp: >>98424581 >>98424779 >>98424814 >>98424868
Anon experiments with system prompt self-enhancement: >>98425885 >>98425899 >>98426094 >>98426301
Discussing usage of DPO to decuck models. Paper about finetuning to realign L2 70b is linked: >>98426377 >>98426393 >>98426448 >>98427942 >>98428485
Outdated OP rentries lead Anon to ask questions about hardware, models, quants, and training LoRAs: >>98426797 >>98426877
Running and deploying LLMs in Unity: >>98428378 >>98428488
Sam Altman marries Meta software engineer: >>98422347 >>98422373 >>98422412
Anon asks about 5bpw Yi vs. 3.7bpw Mixtral: >>98422048 >>98422054 >>98422087 >>98422985 >>98422199 >>98427517
Mixtral draws a cube, Anon posts a tricky challenge: >>98423244 >>98423344 >>98423361 >>98423379 >>98423425 >>98423440 >>98423467 >>98423499 >>98426928
Visited by a stranger: >>98425602
Anon tries runpod without preparation: >>98426637 >>98426789 >>98426815 >>98426836 >>98426866 >>98426881 >>98426933
Miku: >>98426729 >>98426741
Xwin-Mlewd is still relevant: >>98426986
Sizefag model recommendation: >>98427331 >>98427517
llama.cpp server outputs nonsense for Anon: >>98427767 >>98428028 >>98428476
Highlights and recap of the previous thread: >>98428617
(1/2)
Transformers training code for Mixtral might have been fixed (for real this time??): >>98428957 >>98429023 >>98429049 >>98429285 >>98429357 >>98429386
>>98429413 >>98429162 >>98429193 >>98429234 >>98429243
NVIDIA info page about digital AI avatars and interview about digital avatars that use generative AI: >>98429233 >>98429332
Anon thinks about creating a dataset focusing on human-nonhuman scenes, particularly scenes that include abnormal physical and anatomical interactions: >>98429661 (Over 20 replies)
Political compass test, questions and script provided: >>98429999 >>98430020 >>98430084 >>98430127 >>98430146 >>98430357 >>98430693 >>98431074 >>98431263
Anon requests information about local models for translating text from Japanese: >>98431480 >>98431534 >>98431635 >>98434539 >>98434563 >>98434761
Anon asks about applications that combine art, voice, and chat into one experience: >>98431516 >>98431560 >>98431569
Article covering a study by Anthropic about how LLMs can supposedly be taught to deceive (You): >>98431977 >>98432056 >>98432072 >>98432146 >>98432215 >>98432314
New paper about trustworthiness in LLMs: >>98433998 >>98434114
Paper from back in November investigating potential malicious use of LLMs. Authors used Jon Durbin's SpicyBoros in some tests: >>98434276 >>98434367 >>98434560 >>98435338 >>98435375 >>98435449
>>98435602
(2/2)
Anon's Mixtral sysprompt self-improvement experiment continues, with discussion about RP prompting and Mixtral's unwillingness to kill the user: >>98428683 >>98428758 >>98429343 >>98429405 >>98429495 >>98429553
Envisioning the AI endgame: >>98429847 >>98429999 >>98430004 >>98430943 >>98431484 >>98431492 >>98431549 >>98431748 >>98431865
Lovecraft and GUMI, two flavors of horror: >>98430082 >>98430569 >>98430995 >>98431599 >>98431667 >>98431702 >>98431645 >>98431810 >>98432010 >>98432204 >>98432248
Experienced YiYiAss Anon revisits Yi after using Mixtral. The old trick of starting chats with a stronger model and continuing them with a weaker one is brought up: >>98432319 >>98432354 >>98432402 >>98432424 >>98432524 >>98432608 >>98432699
Knight & Dragon riddle: >>98433096 >>98433503 >>98433147 >>98433747 >>98433815 >>98433860 >>98433840 >>98433933 >>98433967 >>98434342
►Recent Highlights from the Previous Thread: >>98435583
--Article from May about optical computing for AI: >>98435958
--OpenAI LLM refusals found in Amazon pages and on Twitter: >>98436911 >>98436995 >>98437083
--Anon gets blue balls. Later shares wholesome backstory about lifelong goal and glimpse of success: >>98436766 >>98436925 >>98436930 >>98436979 >>98437207 >>98436794 >>98437365 >>98437918 >>98438123
--Hitchhiker's Guide to Local LLMs - terminology glossary: >>98437932 >>98438016 >>98438327
--Mixtral finetune adequately runs card designed for GPT-4: >>98438096 >>98438141 >>98438258 >>98438492 >>98438802
--Discussing quality of Q6, Q8, FP16: >>98436011 >>98436080 >>98436132 >>98436184 >>98436256 >>98438187 >>98438212 >>98438301
--Pruning unimportant weights from models, related paper and repos: >>98438828 >>98438966 >>98439004 >>98439006 >>98439044 >>98439116 >>98439129 >>98439145
--Related: Discussing Mixtral optimization: >>98439116 >>98439133 >>98439141 >>98439174 >>98439221
--Anon tries 70b, finds it smarter than Mixtral: >>98439560 >>98439858 >>98439579 >>98440203 >>98440217
--Related: Quantization of 70b vs. sparse MoE, image visualizing quantization of SD: >>98440203 >>98440316 >>98440863 >>98440883 >>98440941 >>98440987 >>98440996 >>98441197 >>98441235 >>98441255 >>98441265 >>98441291 >>98441300
--Flash Attention on AMD RDNA3 consumer cards: >>98440085 >>98440107 >>98440143 >>98440271
--Anon asks about multi-GPU performance: >>98440090 >>98440130 >>98440148 >>98441648
--llama.cpp calibrated 2bit quants merged + new PR to add ability to use importance matrix for all k-quants: >>98441682 >>98441698 >>98442792 >>98441705 >>98441796 >>98442539 >>98442551
--Mamba implementation in pure C: >>98441835 >>98441865 >>98441881 >>98442389
--Intermediate checkpoint of OpenMoE-34B (200B tokens): >>98442058 >>98442123
►Recent Highlight Post from the Previous Thread: >>98435602 >>98435616
►Recent Highlights from the Previous Thread: >>98451105
(1/2)
--Support for optional importance matrix (computed using a calibration dataset) for all k-quants merged into llama.cpp: >>98445276
--Anon shared unusual character card formatting: >>98445147 >>98445168 >>98445234 >>98445235 >>98445646 >>98445755 >>98445822 >>98445854 >>98445915 >>98445964 >>98445987 >>98446038
--Development of TikTok's language model was cut short by OpenAI in December: >>98446500 >>98446594 >>98448009 >>98448044
--No model handled Anon's extremely rare fetish, no training data can be found, and Anon did not say what the fetish consists of: >>98444899 >>98445118 >>98445214 >>98445236 >>98445716
--Experimenting with rounding and dequantization logic in llama.cpp: >>98445405 >>98448897 >>98449003 >>98449461
--Anon trained rapey Mixtral QLoRA, posted outputs, was unsure about uploading, cont.: >>98442342 >>98442484 >>98442594 >>98443112 | >>98445595 >>98445922
--Performance tests using Vulkan implementation in llama.cpp: >>98446951 >>98446966 >>98446976 >>98447276
End of preview.
YAML Metadata Warning: empty or missing yaml metadata in repo card (https://huggingface.co/docs/hub/datasets-cards)
Downloads last month: 4