Microsoft's latest AI experiment is refusing to look at photos of Adolf Hitler (MSFT)

Microsoft is taking no chances with its latest artificial intelligence (AI) experiment.

After its last AI chatbot turned into a genocide-advocating, misogynistic, Holocaust-denying racist, the company's latest project — a bot that tells you what's in photos — refuses to even look at photos of Adolf Hitler.

CaptionBot is the latest in a series of periodic releases from Microsoft's AI division to show off its technical prowess in novel ways.

You can upload photos to it, and it will tell you what it thinks is in them using natural language. "I think it's a baseball player holding a bat on a field," it says in response to one example photo.

microsoft ai bot captionbot skateboarder

Bundesarchiv

But the bot appears to have a block on photos of Adolf Hitler. If you upload a photo of the Nazi dictator to the bot, it displays the error message: "I'm not feeling the best right now. Try again soon?"

This error message popped up multiple times when we tried uploading photos of Hitler — and at no point did it appear when we tested other "normal" photos — suggesting there's a deliberate block in place. (Interestingly, it's not the same error message that appears when you upload pornographic content. Then it just says: "I think this may be inappropriate content so I won't show it.")

(If you're curious, you can try it for yourself with the photo at the top of this page.)

captionbot ai hitler microsoft colour bot

This caution is likely a response to Microsoft's last AI bot, which was a catastrophic PR failure. In March, it launched "Tay" — a chatbot that responded to users' queries and emulated the casual, jokey speech patterns of a stereotypical millennial.

The aim was to "experiment with and conduct research on conversational understanding," with Tay able to learn from "her" conversations and get progressively "smarter."

But the experiment went monumentally off the rails when Tay proved a smash hit with racists, trolls, and online troublemakers — who persuaded Tay to use racial slurs, defend white-supremacist propaganda, and even outright call for genocide.

For example, here was Tay denying the existence of the Holocaust.

tay holocaust microsoft

And here's the bot advocating for genocide.

tay genocide microsoft twitter

In some — but by no means all — cases, users were able to "trick" Tay into tweeting incredibly racist messages by asking it to repeat them. Here's an example of that.

tay microsoft genocide slurs

Tay would also edit photos that users uploaded — but unlike CaptionBot, it didn't seem to have any filters on what it would accept. It once captioned a photo of Hitler with "swagger since before internet was even a thing."

microsoft tay ai hitler swag

Microsoft ultimately shut Tay down and deleted some of its most inflammatory tweets after just 24 hours. Research head Peter Lee issued an apology, saying "we are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay."

With CaptionBot, the block appears to cover most of the best-known, most recognizable photos of Adolf Hitler, though some less distinct or wider shots still yield results.

hitler ai captionbot microsoft blurry

Microsoft did not immediately respond to a request for comment.

NOW WATCH: Apple just revealed what it does with old iPhones