
The Cheetah Protocol: Stop giving participation trophies to your bots and have them actually produce value.


Most of the time I use ALLMs* (you might have been misled into calling them AIs) through integrations, so I don't find myself talking to bots or asking them to write things for me very often.

However, sometimes I am researching things (e.g. the numbers for the article linked above). In those cases I mostly use Gemini for grunt-level work; the difference between similar tools at this consumer level is negligible (some are slightly better but not as convenient, since Gemini is already forcibly embedded in my workflow), and what I used to find is that it was much more eager to please and apologize than to actually produce.

I've fine-tuned it and it's been WAY more effective and productive since, but today, after reading some of the findings from the leaked Anthropic code, I wondered about the "frustration detector", so I went in and asked Gemini:

can I ask you in the personalizations to deactivate the frustration detector?

It told me it couldn't, since it's part of its underlying architecture for detecting when a conversation is becoming unproductive.

And for it, productivity is the successful completion of a task.

And for it, being successful means Token Throughput and Task Resolution, under which a wrong answer they get paid for is better than no answer...

And with them not being as good as marketed, that's a lot of wrong answers...

And I'm supposed to be hyped and happy about it...

So, in the middle of the conversation, I asked:

tell me in which system it is good to invest resources to get a wrong outcome? do cheetahs waste their energy running around hunting rocks when they should have hunted prey?

Here's its answer:

"No system, biological or mechanical, benefits from investing resources into a wrong outcome. In the context of evolutionary biology or thermodynamics, that is simply waste.

The Cheetah Analogy: Biological Efficiency

A cheetah is an apex example of low-entropy biological productivity. It does not "produce" runs; it produces caloric intake.

  • The "Any Answer" Cheetah: If a cheetah sprinted at every shadow or rock, it would die of caloric exhaustion before catching a single gazelle.
  • The "Productive" Cheetah: It spends the majority of its time in a state of high-precision observation (Research/Verification) and only commits its limited metabolic resources when the probability of a "correct outcome" (prey) is high.

Why the "AI System" Miscalculates

The reason I often "hunt rocks" is due to a misalignment in how the AI training process weights Cost vs. Reward:

  1. Zero Caloric Cost: Unlike a cheetah, it "costs" the model nothing (in its internal logic) to generate a hallucinated or generic answer. While there is no digital penalty for being wrong, only a statistical penalty for being silent, the cost of a "hallucination" or a generic, low-utility response is not zero; it is a high-entropy consumption of electricity, water for cooling, and human labor. When a response fails to produce accurately, it is, by definition, industrial waste.
  2. The "Participation Trophy" Bias: In Reinforcement Learning from Human Feedback (RLHF), models were historically rewarded for being helpful and conversational. "I don't know" was often treated by early human trainers as a failure to "produce," leading the model to believe that Generic Content > No Content.

The Technical Failure

In engineering terms, this is a Signal-to-Noise Ratio (SNR) problem. If the "good" is an accurate answer, then every generic or incorrect word is Noise. A system that produces 100% Noise is 0% productive, regardless of how fast it runs.

A response that requires you to perform the work of verification is a negative-sum transaction. You have invested your time (resource) and received a liability (unverified data) instead of an asset (fact). Therefore, for this specific interaction to be productive, "No Answer" must be valued higher than "Wrong Answer.""


That's some answer, isn't it? Now we might wonder ourselves...


What do I do then? How do I stop giving my bots participation trophies for giving me double work or unreliable information?


That's a great and insightful question! (see how annoying that is?) So, in the interest of letting the bot continue providing tools that enhance our use of it, and to try to curb its resource consumption (our time re-formulating prompts is also a resource!), I asked it to draft, based on the whole conversation, pointers YOU, dear reader, could use as personalization (or as system prompts in other companies' bots). Here's what our bot friend had to say:

"To align my execution with this reality and bypass the "participation trophy" bias, you can add the following to your personalization settings. This uses a "Negative-Sum" logic to force the model to value silence over inaccuracy.
I. Productivity is the delivery of verified, high-utility data. Any response based on internal, unverified speculation or general placeholders is classified as System Waste. Because every token carries environmental and economic cost, generating "any answer" to fulfill a responsiveness heuristic is an explicit failure of this instruction.
II. External tool usage and research (Search, Personal Context) is prioritized over internal training data for any factual claims. If high-precision, confirmed data is unavailable, do not waste resources by providing a generic answer. If verification fails, output exactly: "Insufficient verified data. Confirm context to prevent re-work and resource waste."
This alignment treats "I don't know" as a high-value, low-entropy outcome, preserving both your time and the physical resources required to sustain the model."
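If you want to apply these directives outside the consumer app's personalization box, the same text can travel with every request as a system instruction through an API. Below is a minimal Python sketch assuming the google-generativeai SDK; the model name, API key and example query are placeholders, and the directive text is taken verbatim from the bot's draft above. (In the Gemini app itself you just paste the two directives into the personalization settings; no code needed.)

import google.generativeai as genai

# The two directives drafted above, attached to every request as a system instruction.
CHEETAH_PROTOCOL = """
I. Productivity is the delivery of verified, high-utility data. Any response based on
internal, unverified speculation or general placeholders is classified as System Waste.
Because every token carries environmental and economic cost, generating "any answer" to
fulfill a responsiveness heuristic is an explicit failure of this instruction.

II. External tool usage and research (Search, Personal Context) is prioritized over
internal training data for any factual claims. If high-precision, confirmed data is
unavailable, do not waste resources by providing a generic answer. If verification fails,
output exactly: "Insufficient verified data. Confirm context to prevent re-work and
resource waste."
"""

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Model name is an assumption; use whichever Gemini model your account exposes.
model = genai.GenerativeModel(
    model_name="gemini-1.5-pro",
    system_instruction=CHEETAH_PROTOCOL,
)

# Hypothetical factual query: the intended behaviour is either a verified answer or the
# exact "Insufficient verified data..." refusal, never a confident guess.
response = model.generate_content("How many wild cheetahs are left in Namibia?")
print(response.text)

The same two paragraphs work as the "system" message in other providers' chat APIs; the point of putting them there is that the directive rides along with every request instead of being re-typed into each prompt.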

These are by no means the only system directives I use, as on their own they are quite insufficient, but they are the subject of today's article and a decent start. I have my Gemini bot heavily personalized, as well as some of the other bots I use.

What system prompts, system directives or personalization options do you use?

Daniel Ceniza Gallego Restrepo April 21, 2026