All Blogs

Is AI gonna be racist when it finally takes over?

April 2026

People are different as hell. Some are dumb, some are kind, some are genuinely awful. That’s just reality.

My own brain is the same chaotic mess. I’ve got lazy thoughts pushing me toward the easiest way out. I’ve got dark branches where my mind jumps to extreme ideas to solve a problem (even though I’d never actually go there). And weirdly, all that “bad” stuff helps. It shows me what “good” even means. It forces multiple perspectives, pushes me to think wider, and sometimes leads to better solutions. Without the ugly parts, I’d probably stay stuck.

So if we’re trying to make AI truly human-like, why would it escape all that?

The internet is already a dumpster fire of racism. Reddit comments, Instagram threads, YouTube rabbit holes — nonstop slurs, stereotypes, edgelord takes, and people just being toxic. Some act unhinged online. Some defend genuinely horrible behavior. Entire comment sections turn into the worst parts of humanity on display.

All of that gets scraped into training data. Every bit of it. That poison doesn’t just disappear — it becomes part of what the model learns from.

If the goal is “make AI like humans,” then it’s getting the full package. Not just intelligence and logic, but bias, judgment, and flawed patterns too. It might not be “racist” in the emotional sense, but it could reflect patterns it sees: certain groups behaving certain ways, repeated narratives, statistical associations.

That’s where it gets uncomfortable.

It might not wake up one day and decide to hate. But it could start producing biased outputs because those patterns showed up millions of times in training data. Or it could subtly treat groups differently because the data suggested certain correlations — fair or not.
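To make that mechanism concrete, here’s a toy sketch. It is nothing like a real language model — the corpus, group names, and traits are all made up — but it shows the basic point: a system that learns from frequency counts will reproduce whatever associations its data repeats, fair or not.

```python
from collections import Counter, defaultdict

# A tiny invented "corpus". Real training data is billions of pages,
# but the mechanism is the same: repeated patterns become learned
# associations, regardless of whether they're fair.
corpus = [
    "group_a people are lazy",
    "group_a people are lazy",
    "group_a people are lazy",
    "group_a people are friendly",
    "group_b people are friendly",
]

# Count which trait word ends each sentence about a group -- a crude
# stand-in for the statistical associations a model picks up.
assoc = defaultdict(Counter)
for line in corpus:
    words = line.split()
    group, trait = words[0], words[-1]
    assoc[group][trait] += 1

def most_likely_trait(group):
    # The "model" just predicts the most frequent association it saw.
    return assoc[group].most_common(1)[0][0]

print(most_likely_trait("group_a"))  # the repeated stereotype wins
print(most_likely_trait("group_b"))
```

No one wrote “be biased” anywhere in that code. The bias falls out of the counts. Scale that up to internet-sized data and you get the uncomfortable scenario above.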

So yeah, how screwed are we?

Potentially pretty screwed. At scale, AI could amplify our worst divisions instead of smoothing them out. Imagine the worst YouTube commenter, but optimized and given influence.

But there’s another side to it.

Those same “flaws” — the messy thinking, the shortcuts, the wild branches — are also what drive creativity. A perfectly sanitized AI might be safe, but also rigid, predictable, and less capable of solving complex, messy real-world problems.

So now it’s a trade-off:

  • Too human → inherits our biases and chaos
  • Too clean → loses depth, creativity, and edge

Neither option is perfect.

The real risk might not be AI becoming racist. It might be us not knowing how to balance realism and control. Overcorrect, and you cripple it. Undercorrect, and you amplify everything wrong with us.

Either way, it’s a gamble.

We’re probably in for a messy outcome no matter what. But at least the debate is interesting while we wait to see how it plays out.