The world of online advertising can be a dizzying swirl of acronyms - you start talking about Google advertising, and you end up with GDN, SEO, DSP and DBM all littering the conversation.
But one thing is clear - Google has a problem here.
A serious problem that goes to the heart of what this company does, and what it stands for.
Look in the Oxford dictionary and you'll find that "media" refers to "mass communication" - and few things fall into that category more squarely than our social media giants.
Over a billion people are regular YouTube users and hundreds of millions of videos - possibly billions, depending on which statistics you believe - are watched every day.
And yet Google says it's not a media company.
It insists it is, instead, solely a technology company, and should be regulated as such.
YouTube, so the logic runs, is nothing more profound than a website that allows people to post content online, rather than a structure for judging the worth of that content.
It just so happens that it's one of the three most visited websites in the world.
There are guidelines, and users are invited to report content that they find offensive.
But YouTube doesn't proactively manage the content on its platform; it steps in only when someone complains.
What it does do, though, is charge advertisers for putting their content on YouTube, or any of Google's various other platforms.
And that's why this row has such potential for changing the nature of Google, both as a company and as a service.
Advertisers spend money because they want people to take an interest in their brand, or their products.
They want consumers - that's you and me - to feel positive or intrigued.
What those advertisers emphatically don't want is for their new campaigns to appear next to offensive content.
"It is frankly ridiculous that Google can take our money but not curate the content on its own website," one executive at a FTSE100 told me.
"They want to have the money, but not the responsibility."
He told me that both Google and Facebook had "for too long claimed to be digital pioneers and denied responsibility for their own content".
It is a problem echoed in the ongoing debates over "fake news" - the simple fact that clever algorithms may be able to perform almost limitless calculations, but presently struggle to tell lies from truth, or to distinguish the offensive from the entertaining.
The answer may be something very old-fashioned - human beings monitoring content and sifting out what's acceptable, and what isn't.