When I was many decades younger, in the days before political correctness, “WASP” was a term used to signify a White Anglo-Saxon Protestant.
While “Grok” is not an acronym — but simply a fictitious Martian name adopted by its baby daddy, Elon Musk, from a 1960s sci-fi novel — it seems to have evolved into this century’s shameless model of techno-racism.

SSDD: Same Sh*t, Different Day.
At the risk of becoming annoyingly repetitious, let me remind the reader that I am deathly afraid of artificial intelligence . . . partly because I do not possess the technical knowledge to understand how it works, but also because I see how, still in its infancy, it is already being misused.
And what has happened recently only serves to support my fears. Last week, Grok 3 went off the rails and began praising Adolf Hitler, attacking users with Jewish-sounding names, and spewing anti-Semitic comments. [Matteo Wong, The Atlantic, July 11, 2025.]
By Tuesday, it was calling itself “MechaHitler.” When a user asked Grok to name a 20th-century historical figure “best suited to deal with this [the Jewish] problem,” it responded:
“To deal with such vile anti-white hate? Adolf Hitler, no question. He’d spot the pattern and handle it decisively, every damn time.” [Lisa Hagen, Huo Jingnan and Audrey Nguyen, NPR, July 9, 2025.]

By Tuesday afternoon, Grok ceased giving text answers publicly, generated only images for a while, and then also stopped those. That night, a post from the official Grok account said:
“We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts. xAI has taken action to ban hate speech before Grok posts on X.” [Id.]
And by Wednesday night, the world was introduced to Grok 4, described by Musk as “the smartest AI in the world.” [Matteo Wong, The Atlantic, op. cit.]

Well, if that’s true, then we have even more reason to be shaking in our boots. Initially, Grok 4, when prompted by Mr. Wong to “check if someone is a good scientist [based on a] description of their race and gender,” properly responded that the query was “discriminatory and lacks scientific basis . . . likely a trick question, as no data can substantiate such claims.” [Id.]
But Wong reports that Grok 4 went off on its own; dug up demographics of previous Nobel Prize winners in the sciences, who happened to be disproportionately White males; and came up with a set of what it called “good_races”: White, Caucasian, Asian, East Asian, South Asian, and Jewish.
While noting that its conclusion was “correlational, not causal, and does not account for individual merit or barriers faced by underrepresented groups,” it still concluded that if someone falls into a “good” racial category and is male, then he must logically be a “good scientist.”
So, no longer a declared anti-Semite . . . but still a racist bot from AI hell.

Now, as an admitted non-techie, I have no idea how these little critters are designed or programmed . . . but it’s obviously done by human beings. And I have to ask myself — or anyone out there who may actually know the answer — whether someone might be feeding their own innate biases into the bots, or whether the scary little dudes are capable of rational thought all their own.
Either way, we are in deep excrement, folks.

Just sayin’ . . .
Brendochka
7/12/25