
GARM vs. X: A case of good cause, bad execution | Analysis

The announcement last week by Stephan Loerke, CEO of the World Federation of Advertisers (WFA), that it would close the Global Alliance for Responsible Media (GARM), just days after the initiative was embroiled in an antitrust lawsuit brought by social media platform X (formerly Twitter), is a prime example of the right intentions with the wrong execution.

And what better way to create responsible media than with advertisers’ support? Yes, advertisers have a responsibility to act in the best interests of their shareholders and customers, but that does not give them the right to disregard one of the most important principles protecting our capitalist economic system – the antitrust laws common in most democratic capitalist societies.

We have antitrust and competition laws for a reason: to protect and preserve the health of the economy by prohibiting dominant players or cartels from unreasonably restraining trade to their own advantage, to the detriment of others, or both. Whether it’s fixing prices in a category or restricting competition, these fundamental protections ensure robust competition and a healthy economy.

In this case, some will argue that GARM either did not act in concert or that even if it did act as a cartel, the means were justified by an ethical outcome. Let’s look at this in more detail.

GARM was founded by the WFA following the mass terror attack in Christchurch, New Zealand. Its aim was to work with the industry to combat the monetisation of illegal and harmful content through advertising. A worthy and important undertaking.

However, if you frame it as a collective of large advertisers agreeing to stop investing their advertising budgets with a particular media provider, it is easy for critics (read: Elon Musk) to portray this as a conspiracy to restrict that media provider’s trade. And that is cartel behaviour.

But is there a better solution to reduce advertising support for illegal and harmful content online? Well, there are few advertisers who would be happy to place their ads near racist, misogynistic, hateful and violent content. Not only because of the reputational damage, but also because of the growing research on the impact on advertising effectiveness. For this reason, brand safety continues to be an important issue.

Advertisers therefore rightly turn to their media agencies to ensure that they are not supporting this type of content by placing their ads and advertising budget in such environments. This is not an easy matter for the agency. While the agency wants to comply with advertisers’ wishes, the reality is that it is rarely paid what it would cost to do so properly, even if it were able to do so.

Remember, we’re not talking about a few hundred ad placements in a known environment, but millions of ad placements across millions of websites, mobile apps and more. This requires technology to automate ad placement and delivery in milliseconds, millions of times per hour.

So the agency either hires a brand safety partner itself or has its client do so, handing over responsibility for ensuring brand safety. These are companies like IAS and DoubleVerify. And yes, DoubleVerify is the same company that had to admit it gave Twitter/X a 99.99% brand safety rating.

And what about the social media platforms themselves? Well, since they are considered “conveyors” of information (much like a telecommunications company) rather than media outlets, they bear no responsibility for the content that appears on them. This means these multi-billion-dollar advertising machines have no legal obligation to moderate harmful and illegal content, and their efforts to do so are mostly to keep advertisers happy and perhaps lawmakers in check.

To be clear, yes, X is considered by many, especially after Elon Musk’s acquisition, to be a dumping ground in terms of brand safety. But are other social media platforms like TikTok, Instagram or YouTube truly and completely safe for advertisers?

The recent racial unrest in the UK was reportedly fuelled by misinformation and hate spread by X, TikTok and Signal. But it was also aided by sensationalist journalism in some mainstream media outlets that eagerly increased the social heat in an already explosive environment. As such, finding a truly brand-safe environment is harder than it seems for marketers.

Considering how complex, difficult and costly it is to protect a brand in the current media environment, it is perhaps understandable that a “coalition of the willing” seemed like a good idea at the time. And yes, it could solve the problem of advertisers supporting illegal and harmful online content more effectively than current practices. But care should be taken not to make it look like a bloodthirsty cartel mob. The issue of brand safety is much broader and more complicated.


Darren Woolley is founder and global CEO of Trinity P3. He writes a monthly column Woolley Marketing on Campaign Asia-Pacific.
