Before the widespread adoption of digital media, ‘brand safety’ was a term seldom, if ever, used. Since then, it has become one of the most pressing concerns in media, shifting in the last few years from a phrase often bundled with other digital best-practice terms to a widespread issue that is not as easily solved through ‘approved list’ optimisations as marketers confidently thought it might be.
Unpacking brand safety
The most basic definition of brand safety is the set of measures marketers put in place to protect their brand, and its reputation, from negative or harmful digital content. The inclusion of the word ‘digital’ in that definition may be the reason the problem evolved to the extent that it almost feels unmanageable in 2020.
For a long time that word, ‘digital’, seemed to encompass only the likes of publisher sites, which meant that brand safety was often a box we ticked when running programmatic campaigns. Blocked lists, approved lists and keyword checks were created to make sure that a brand’s programmatic advertising didn’t end up on any unwanted sites and that ads were not served next to anything untoward.
Fast forward to 2017: Google was found to be serving brand content alongside extremist videos on YouTube, and despite its attempts to control and fix the issue, brand content was still found running alongside climate change misinformation videos in 2020.
The issue of brand safety has come even more to the forefront in the last few months with the widespread boycott of Facebook under the #StopHateforProfit movement. Brands are holding Facebook accountable for its proliferation of racism, hate speech, fake news and bullying content by pulling their spend, some for the month of July and others indefinitely until the problem is solved.
Many questions arise when looking into this complex topic.
Why was it forgotten, for so long, that social media platforms also fall into the umbrella term ‘digital’?
This question raises some key issues about the nature of social platforms. These platforms exist because of user-generated content. How do marketers, or even the big tech companies that invented them, draw a clear line between exercising control over the content created and a person’s right to freedom of speech? Can big social impose rules and regulations that remove certain user-generated content in an effort to increase brand safety, and in turn, increase their ability to sell media space? And if they do, what human rights, if any, does that infringe upon?
The explosion of new social platforms like TikTok makes this an even more complex issue. As social media platforms evolve, so does consumers’ use of them. Instead of aesthetically produced feeds, content curation, putting your best foot forward and social peacocking, consumers use TikTok more as an unfiltered stream of consciousness, expressing any thought, opinion, lifestyle choice and even kink that they may feel like at the time. This spreads uneducated, racist and anti-Semitic content on a worldwide scale – and consumers feel it is their right to be able to do so.
The question is: when does TikTok step in, and how do brands navigate these murky waters as they start advertising on this platform?
Are efforts being focused on social platforms purely because they are big, consumer facing organisations?
The fact is, the issue is inherently built into the way all digital content, not just social platforms, is set up. Should marketers not be looking for a way to completely change the trajectory of how digital content is bought and consumed, instead of going after the apparent ‘big guys’?
There is currently no easy answer to either of these first two questions. Mars’ global head of media describes brand safety as an ongoing game of Whack-a-Mole – there is a need to continually squash issues whenever and wherever they arise.
However, the strategy of Whack-a-Mole should not and cannot be seen as a long-term solution.
A recent WARC article summed it up nicely: “this problem requires so much more thought than just tweaking technical systems. These problems are philosophical, jurisprudential, and these conversations should all have taken place a long time ago.”
There needs to be widespread change, enforced not by the companies themselves but at the level of the law. These issues are no longer, and possibly never have been, digital media issues; they are societal issues that need accountability larger than the digital media world can provide.
This leads into the last two questions.
Should brand safety only exist within the digital landscape?
It is quite clear to see that we no longer have a ‘digital world’ and an ‘analogue world’ – what you do online and what you do offline are now one and the same. This makes it even more complicated for companies trying to keep their brands ‘safe’. If you sponsor a sports team or bring on a brand ambassador or influencer, what control do you have over their expression on social media? And if they decide to align with a specific social movement, what implications does this have for you as a brand? All of this is further amplified by the proliferation of ‘cancel culture’: no matter how much research and vetting a brand might do, no organisation or person is safe from being called out by society and cancelled.
So, what measures are brands putting in place not to shy away from these issues, but rather to have a strong plan for how they can both associate with personalities, entities, media platforms and societal issues and injustices, and tackle the inevitable problems that will arise?
Lastly, are brands the victims of poor brand safety or should they be held accountable for where they show up?
What should brands be doing to be both socially accountable but also brand safe?
Because of the way digital media platforms are set up, brands have, just by advertising, been unknowingly funding non-brand-safe content. Generally, brands demand a refund of advertising funds if their adverts appear next to non-brand-safe content. But with these issues so widely known and understood in the public domain, can brands continue to put 100% of the responsibility on media owners? Should brands not begin to be held accountable for where they choose to book media space? The solution may lie in brands being ruthlessly selective, supporting only media companies, influencers and entities that follow strict brand safety guidelines, even if this results in short-term loss.
The answer to what a brand should be doing is two-fold. The first is to align ruthlessly to the brand safety measures already in place and to remove partners that do not meet those requirements, no matter the business consequence.
The second, and arguably more important, thing is to have a clearly outlined and authentic brand purpose that is strictly followed by both the company and all its partners. Brand purpose is so closely aligned to brand safety that some marketers are foregoing the latter term for ‘brand suitability’ – not only where is it safe for my brand to show up, but also where does my brand have the right, or the authenticity, to show up. Having a clear brand purpose makes it that much easier to understand your brand’s role in not only brand safety issues, but societal issues at large. The key to understanding this is identifying what brand purpose means for your brand. Not all brands need to have a societal purpose; some brands just need to have a purpose – and that purpose might simply be the product or service you offer society. Trying to retrofit brands into having a societal purpose when they don’t, or haven’t historically, might further complicate issues of not only brand safety, but also of showing up to consumers with a believable and authentic proposition.
Brand safety is a complex and ever-changing issue, and it is now up to brands to hold themselves and their partners accountable for the kind of content that exists in media. That means taking a proactive rather than reactive stance on the issue – one that is 100% authentic and aligned to their broader brand purpose.