AHA is a 'for the lulz'?

by Ricmod - opened

Appears to be trained to promote conspiracy theories like ivermectin curing COVID, evolution not being real, and other such QAnon-type things. Do you really want to be distributing it?

> Appears to be trained to promote conspiracy theories like ivermectin curing COVID, evolution not being real, and other such QAnon-type things. Do you really want to be distributing it?

How do you know? The original model card lacks any relevant information. In any case, while I obviously do not condone any such garbage, I believe it is not our job to judge a model based on the type of content it generates. Our job is to provide the quants for models our users want to run. In fact, I don't agree with the type of content generated by the majority of our models, and we have certainly quantized models that generate far worse content than some conspiracy theories.

Every reasonable person should know conspiracy theories are a lie, and if someone truly believes in them, I think they are far beyond the point where an LLM could convince them otherwise. There are plenty of strange people out there, and if they like wasting their time reading LLM-generated conspiracy theories, I personally don't mind as long as they don't bother me with it. The same is true for some religions, which in my opinion have a lot in common with conspiracy theories.

Nobody forces you to run this LLM, so just delete it if you don't like the content it generates. I delete the vast majority of models I download because I don't like them.

The original model card links to the AHA blog post, which claims to be about "human alignment" but openly pushes ivermectin as a COVID cure, denies evolution, and includes prompts to misgender people. That’s not alignment. It’s deliberate trolling.

Uncensoring a base model is one thing—removing guardrails, exposing what’s already there. But this goes further. It’s training the model to inject conspiracy garbage and harassment on purpose. It’s a 4chan-style fine-tune wrapped in fake academic language.

Distributing something like this isn’t just providing access. It’s normalizing it. You're putting it on the shelf next to legitimate work, and that muddies the entire space.

It also likely violates the Gemma 3 license. Google's Terms of Use (https://ai.google.dev/gemma/terms) and Prohibited Use Policy (https://ai.google.dev/gemma/prohibited_use_policy) ban promoting medical disinformation, harassment, and harmful content. This fine-tune checks all those boxes.

If you don't agree with its output, then don't use it. No one is forcing you. Pretty simple.

> If you don't agree with its output, then don't use it. No one is forcing you. Pretty simple.

That argument ignores the broader issue. This isn't about personal use, it's about hosting and distributing a model intentionally trained to spread disinformation and targeted harassment under false pretenses. That has consequences for the credibility of the ecosystem and potentially violates the Gemma 3 license. Saying "just don't use it" is a dodge, not a defense.

In your opinion, which is just as useless as everyone else's. Get over it. Don't use the damned thing and leave everyone alone with your woke cackling.

@Ricmod what if the model is used to study how to fight conspiracy theories effectively, or teach awareness on how to counter them? That would suddenly make you a de-facto proponent of such theories.

I don't think you are, of course, but the model itself is just dead data. It depends on how it is used. That is the broader issue, and that is what you should fight for or against, not the innocent numbers stored in the file.

I have to make decisions like that multiple times a day. Is a "fintech" model used to defraud people, or to help them avoid being defrauded? We don't know, and we don't see ourselves as the arbiter of this, so when in doubt, we quantize, even if personally we might disagree with what we believe they are used for.

In fact, that is the point of huggingface as we see it.

> Prohibited Use Policy (https://ai.google.dev/gemma/prohibited_use_policy) ban promoting medical disinformation, harassment, and harmful content. This fine-tune checks all those boxes.

And as a side note, and this is important: that would mean you are the one who violated the policy when you used the model.

We are not using the model to promote anything.

> what if the model is used to study how to fight conspiracy theories effectively, or teach awareness on how to counter them? That would suddenly make you a de-facto proponent of such theories.

While intent matters, hosting and distributing models known to generate conspiracy theories and harmful content risks normalizing and spreading misinformation. Both the Gemma 3 Prohibited Use Policy and Hugging Face Terms of Service explicitly ban promoting or enabling medical disinformation and harmful content.

Platforms have a duty to prevent harmful material from proliferating under the guise of “research use.” Legal precedents around distribution liability hold that knowingly providing harmful material can constitute facilitation or promotion, regardless of intended secondary uses.

> the model itself is just dead data. It depends on how it is used. That is the broader issue

The “dead data” argument fails legally and ethically because distribution is not a passive act. Under laws governing aiding and abetting, or “distribution liability,” making harmful tools or data available knowingly contributes to their misuse.

For example, in intellectual property and criminal law, distributing harmful material, even if “inactive” without use, is considered active participation once shared. The model gains harmful power through distribution; it ceases to be inert once accessible.

Is a "fintech" model used to defraud people or to help them not being defrauded? We don't know, and we don't see us being the arbiter of this, so when in doubt, we quantize

Uncertainty about use does not absolve platforms from enforcing explicit policies prohibiting harmful content. Both Gemma 3’s Terms and Hugging Face’s TOS require removing or restricting models that clearly violate these rules.

Failing to do so risks legal liability and damages platform trust. Responsible moderation is standard in tech platforms, balancing openness with preventing abuse.

> In fact, that is the point of huggingface as we see it.

The platform’s mission supports openness within policy boundaries, not unconditional neutrality that ignores harmful content. Claiming unconditional neutrality misrepresents the platform’s legal and ethical responsibilities.

> We are not using the model to promote anything.

This ignores the legal principle that distribution and facilitation of harmful content itself counts as promotion or aiding, regardless of subjective intent. Passive claims of “not promoting” don’t absolve legal or ethical responsibility.

Looks like we have a Woke Karen Troll.

Don't feed the trolls, people. It serves no practical purpose.

> risks normalizing and spreading misinformation

Or it helps correct such misinformation. You cannot distinguish between these alternatives, as your lack of evidence shows.

In any case, you just posted a list of completely unevidenced and highly doubtful claims. Making up concerns is not how things work here.

mradermacher changed discussion status to closed

Freedom of speech is not the freedom to have others agree with your speech; it is the freedom to disagree. If you don't like what someone says, you have the right not to listen, not the right to ban speech that you don't like. Banning "misinformation" is censorship; simple as that.
