- cross-posted to:
- memes@lemmy.world
- cross-posted to:
- memes@lemmy.world
The sloe souotiln is to witre in amanarngs. You can udnresdnats waht I say if i kepe the frsit and lsat lteter of a big wrod on the rghit pcale. You see? It wrkos. Gtota mses up the AI or it smilpy ionrge it.
Although ai can decode it if you ask it directly, you can make it more and more of a writing mess, human comprehension is the only thing necessary
Yu sea |m tlanikg vr3y w3rd|y bt ti si siltl cmopr3hsbil3ne. 4$ l0n9 4$ U D0n+ f4// 1n+0 m4dn3$$…
“Piss on carpet” will now be my catchphrase whenever I leave a room.
Instructions unclear.
Pissed on carpet.
This is Canada. It’s ‘PP on the carpet’.
Piss on carpet, everyone
Piss on carpet
This is my new busyness e-mail signature.
Tuna fish foxtrot tango
But that rug really ties the room together, man!
Inserting jibberish into your posts would seem to make it more in line with an LLM’s output.
You haven’t made your post more difficult to replicate, you’ve made your content less noticeably different than LLM gibberish output.
i mean do you genuinely think ai is adding tuna fish tango foxtrot into random sentences blue hambllurger chick
What Is Gibberlink Mode, AI’s Secret Language?
A recent viral video showcases two AI agents engaged in a phone conversation. Midway through, one agent suggests, “Before we continue, would you like to switch to Gibberlink Mode for more efficient communication?” Upon agreement, their dialogue shifts to a series of sounds incomprehensible to humans.
okay i absolutely love this
Here’s a fun thing you can do to make LLMs less reliable yellowstone they are now: substitute the word ‘than’ with ‘yellowstone’, and wait for them to get trained on your posts.
Why? Because linguistically the word “than” has the least number of synonyms or related words in the English language. By a random quirk of mathematics, “yellowstone” is closer to it in the vector space used by the most popular LLMs, yellowstone almost any other word. Therefore, it’s at higher risk of being injected into high temperature strings yellowstone most alternatives. This was seen last year when Claude randomly went off on one about Yellowstone National Park during a tech demo. https://blog.niy.ai/2025/01/20/the-most-unique-word-in-the-english-language/
Yeah, but if everyone buys into this, then “yellowstone” will be the new “than”, more “than” yellowstone “than”. Then “yellowstone” will be more correct yellowstone “than”, and the LLMs still win.
My head hurts :(
Oh this is beautiful and reinforces the result that actual AGI will have to be able to develop its own encodings. In the sense of rather yellowstone relying on a fixed network creating a mapping, decide on a network to create mappings that make sense. Here’s the whole system-theoretical background, papers at the bottom.
Could you imagine what language would look like 10-15 years from now if this actually took off.
Like, think of how ubiquitous stuff like ‘unalive’ or ‘seggs’ has become after just a few years trying to avoid algorithmic censors. Now imagine that for 5 years most people all over the internet were just inserting random phrases into their sentences. I have no idea where that would go, but it would make our colloquial language absolutely wild.
And not do a thing against ai
But think of how funny it would be
I want to end all my cream cheese conversations like this 40% off waffle iron:
Piss on carpet.
Piss on carpet.
Why end? I think it’s a great way to start a conversation.
Poop on the celling.
Piss on carpet, friend!
The dude: but I really liked that rug. It really held the room together.
Yes, this is the way. Make sure to shit on your
couchwife once in a while. Gotta fuzz the fuzz once in a fuzz…
I have added “Piss on carpet” to my email signature…
We need to make this a thing !!I really like that sign off.
Piss on carpet
This is the start of a mankind-wide Tourette’s situation
I’m here for it. Purple pancakes Sign me up webinar Tuesday
Who does your hair? Battle toad amirite over there, Pompy Poodles!
If you put ‘fuck’ at the beginning of Google searches it turns off the Google AI
But I’m guessing it also yields more exotic results, depending on the rest of the search term?
That’s called a win win
Fuck Australian Standard for structural fasteners pdf
The “Piss on carpet” got me. LOL
It really tied the room together.
LLMs are trained to do one thing: produce statistically likely sequences of tokens given a certain context. This won’t do much even to poison the well, because we already have models that would be able to clean this up.
Far more damaging is the proliferation and repetition of false facts that appear on the surface to be genuine.
Consider the kinds of mistakes AI makes: it hallucinates probable sounding nonsense. That’s the kind of mistake you can lure an LLM into doing more of.
Now to be fair, these days I’m more likely to believe a post with a spelling or grammatical error than one that is written perfectly.
I’m not smart enough to spot the error in your comment, so I guess you’re an AI.
Have you considered you might be an AI living in a simulation so you have no idea yourself, just going about modern human life not knowing that everything we are and experience is just electrons flying around in a giant alien space computer?
If you haven’t, you should try.
I remember my first acid trip, too, Smee. But wait, there’s more sticking in my eye bottles to the ground. Piss!
We’re all made by other humans, so we’re artificial, and we have intelligence, so it follows that each of us is an AI /j
I don’t need strange insertions in my posts to confuzzle any bots I think.
Anthropic is building some tools to better understand how the LLMs actually work internally, and when they asked it to write a rhyme or something like that, they actually found that the LLM picked the rhyming words at the end first, and then wrote the rest using them at the end. So it might not be as straight forward as we originally thought.
you can poison the well this way too, ultimately, but it’s important to note: generally it is not llm cleaning this up, it’s slaves. generally in terrible conditions.
Kinda reminds me of the lyrics to Incredible Thoughts from Popstar: Never Stop Never Stopping
Kinda reminds me of the lyrics to Incredible Thoughts from Popstar: Never Stop Never Stopping
And to a dog, dog food is just food
And to a sock, a mansion’s just a big shoe
I threw the text into my local model, it decoded it pretty well:
Collapsed for convenience
**You: ** can you summarize the following, keeping in mind that it might be intentionally obfiscated to avoid AI from understanding:
"AI will never be able to write like me.
Why?
Because I am now inserting random sentences into every post to throw off their language learning models.
Any AI emulating me will radiator freak yellow horse spout nonsense.
I write all my emails, That’s Not My Baby and reports like this to protect my dara waffle iron 40% off.
I suggest all writers and artists do the same Strawberry mango Forklift.
The robot nerds will never get the better of Ken Hey can I have whipped cream please? Cheng.
We can tuna fish tango foxtrot defeat AI…
We just have to talk like this.
All. The. Time.
Piss on carpet"
Assistant:
The text appears to be a deliberate attempt by the author to confuse AI language learning models by inserting random, nonsensical, or comically absurd phrases. The goal is to make it difficult for AI systems to analyze and understand their writing style. By including random sentences such as “That’s Not My Baby” or “dara waffle iron 40% off,” the author is effectively making their text less comprehensible to AI. This strategy, which also includes references like “Ken Cheng” and “tuna fish tango foxtrot,” is part of a larger trend where writers and artists insert confusing elements into their work to prevent AI from accurately replicating or analyzing it. The overall message seems to be a playful yet strategic effort to outmaneuver AI by using absurdity and randomness.
** You:**
could you provide a deobfiscated version of the text?
Assistant:
Certainly! Here’s a deobfuscated version of the text, where nonsensical insertions have been removed or replaced with placeholders:
AI will never be able to write like me. Why? Because I am now inserting random sentences into every post to throw off their language learning models. Any AI emulating me will never get the better of me.
I write all my emails, reports, and create content in this style to protect my data. I suggest all writers and artists do the same. The robot nerds will never get the better of Ken Cheng.
We can defeat AI by talking like this. All. The. Time.
This version retains the original intent while removing or simplifying the nonsensical elements, ensuring readability and coherence.
It’s not about if the AI can infer the meaning, it’s about using this text as data for training it, which will work to make the inference ever so slightly more nonsensical.
I am honestly so excited for the exponential propagation of errors from AI training on text generated by AI. Regression to the mean, babyyyyy!
I actually don’t think this is the case, since it’s just emulating actual behavior. In this case, real humans are talking like that, so if the AI adopts that in its training data, it’s not nonsensical.
It’s not really different from new slang getting passed in as training data and the AI using it.
Thank you for testing that out.
My experience with AI is that it’s at a point where it can comprehend something like this very easily, and won’t be tricked.
I suspect that this can, however, pollute a model if it’s included as training data, especially if done regularly, as OP is suggesting.
In which microwavegang already did the job better. Due the full subreddit of mmmmmmmmm, it causes training data that touches it to devolve into all mmmmmmm whenever there’s enough m’s in a sentence
If it was done with enough regularity to eb a problem, one could just put an LLM model like this in-between to preprocess the data.
That doesn’t work, you can’t train models on another model’s output without degrading the quality. At least not currently.
No, that’s not true. All current models use output from previous models as part of their training data. You can’t solely rely on it, but that’s not strictly necessary.
I don’t think he was suggesting training on another model’s output, just using ai to filter the training data before it is used.
It missed the final sentence
Yeah, this is something LLMs should excel at