The recent Stop Funding Hate Twitter campaign was designed to expose examples of advertising alongside hate campaigns in mainstream media. The campaign currently has 92,000 followers, highlighting growing dissatisfaction among consumers, who tend to react to these exposés with swift product boycotts.
One recent example highlighted EE’s continued advertising in the Daily Mail. In one instance, the brand’s advert appeared alongside an article containing comments such as “We don’t even create a ‘hostile environment’ for illegals and convicted foreign criminals. Instead we give them council houses, legal aid, and thousands of pounds from the poor box. If this is hostile, it’s nowhere near hostile enough.”
Consumers have responded with comments such as: “@EE my contract is about to run out (next month) and I probably would stay on if you stop advertising with the Daily Mail. Quite happy to cancel it if not, watch me!” “Sorry @ee say bye bye to three contracts,” and “So @EE I have been a customer for many years; ever since one2one days. But it’s time to put my money where my mouth is. If you don’t stop funding hate I will be moving my account by the end of this month. There are plenty of other good providers.”
Brands like EE can easily resolve cases like the one above. Digital media, however, presents a much bigger, much harder-to-solve problem that affects every brand that advertises online.
Programmatic online media buying differs from print buying in that advertisers do not have one-to-one relationships with publishers. Instead, ad placements are auctioned by ad exchanges that receive inventory from hundreds of thousands of websites. These auctions complete in milliseconds, and the information used to decide whether or not to bid is limited to user demographics, ad size, domain, and subdomain. What is missing is detailed information about the content and sentiment of the web page on which the ad will appear.
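To make the gap concrete, here is a minimal sketch of what a bidder typically gets to see. The field names and the `decide_bid` logic are illustrative inventions, not a real exchange's schema:

```python
# Hypothetical sketch of the limited signal available in a programmatic auction.
# Field names are illustrative, not a real bid-request schema.

bid_request = {
    "user": {"age_range": "25-34", "gender": "f"},
    "ad": {"width": 300, "height": 250},
    "site": {"domain": "example.com", "subdomain": "news"},
    # Conspicuously absent: the page's actual content or sentiment.
}

def decide_bid(request: dict) -> bool:
    """All this bidder can check is demographics, ad size, and (sub)domain."""
    return request["site"]["domain"] not in {"known-bad.example"}

print(decide_bid(bid_request))  # True -- the page content was never inspected
```

The point of the sketch is what is *not* in the request: a domain-level allow/deny check is the only content signal the bidder can act on in those milliseconds.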
Take the following examples of more complex, nuanced cases of hate speech: “I am not a sexist, but girls do not know mathematics and physics” and “There is no comparing the vileness of Mohammed to Jesus or Buddha, or Lao Tse.”
Traditional blacklists work by identifying individual words that are inappropriate or extremist. In the first example, a blacklist might have picked up the word “sexist” and prevented ads from appearing. The individual words in the second sentence, however, all appear safe—it’s only when you look at the semantics that the sentence’s true meaning can be understood.
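A toy sketch shows why word-level filtering breaks both ways. The blacklist contents and the tokenizing are invented for illustration:

```python
# Hypothetical sketch: a naive keyword blacklist, for illustration only.
BLACKLIST = {"sexist", "racist", "extremist"}

def keyword_flagged(text: str) -> bool:
    """Flag a page if any blacklisted word appears, regardless of context."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return bool(words & BLACKLIST)

s1 = "I am not a sexist, but girls do not know mathematics and physics"
s2 = "There is no comparing the vileness of Mohammed to Jesus or Buddha, or Lao Tse."

print(keyword_flagged(s1))  # True  -- tripped by the word "sexist" alone
print(keyword_flagged(s2))  # False -- no single word matches, yet the meaning is hateful
```

The first sentence is flagged for containing a word, not for its meaning; the second sails through because no individual token matches the list.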
Solving the issue of understanding semantics and context doesn’t entirely solve the problem though. Some brands may not see the above sentences as inappropriate and would be happy for their ads to appear next to the content—particularly if their target audience is small and they need to maximize reach.
What’s required here is an appreciation of the level of extremism and the ability for brands to set their own tolerance levels.
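One way to picture a configurable tolerance is a per-page extremism score compared against a per-brand threshold. The scores and threshold values below are invented purely to illustrate the idea:

```python
# Hypothetical sketch: per-brand tolerance thresholds over an extremism score.
# Scores and thresholds are invented for illustration.

def should_bid(page_score: float, brand_tolerance: float) -> bool:
    """Bid only when the page's extremism score stays within the brand's tolerance."""
    return page_score <= brand_tolerance

pages = {"news-article": 0.15, "heated-op-ed": 0.55, "extremist-forum": 0.92}

cautious = 0.20      # e.g. a family-friendly advertiser
broad_reach = 0.60   # a brand prioritizing reach over caution

print(should_bid(pages["heated-op-ed"], cautious))      # False -- too risky for this brand
print(should_bid(pages["heated-op-ed"], broad_reach))   # True  -- within this brand's tolerance
```

The same page gets a different decision depending on the brand's own setting, which is exactly the flexibility a single global blacklist cannot offer.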
More than three-quarters (78%) of marketers say brand safety scandals have hurt their brand’s reputation. Blacklists, whitelists, and individual keyword analysis are not enough to tackle brand safety in today’s online environments.
To overcome the latest challenges of effectively identifying and evaluating grey areas, staying on top of new forms of unsuitable or toxic forms of content, and tackling the growing presence of extremely polarized political content, brands need a blended approach to brand safety.
Defining hard-and-fast rules is the first thing any brand needs to do when putting a brand safety strategy in place. This process includes identifying categories of websites that are 100% inappropriate, such as pornography, extremist politics, or animal cruelty. This will protect you from the most extreme, and therefore most harmful, exposure.
Artificial intelligence (AI) is a term often misused in advertising technology, but when it comes to brand safety, true AI does have a place. Natural-language processing (NLP) largely began in the 1950s, although earlier work exists. In 1950, Alan Turing published an article titled “Computing Machinery and Intelligence,” which proposed what is now called the Turing test as a criterion of intelligence.
NLP involves the computerized ingestion of language and its ‘translation’ into meaning. The most common application of NLP to date has been translating one language into another, which requires an understanding of context and sentiment. For example, the sentence “The man was in tears” cannot be translated effectively word by word, because the individual words “man” and “tears” have different meanings in different contexts. Once the whole sentence is understood, it’s clear that the man is crying.
Transferring NLP to brand safety opens up opportunities for brands to maintain (or increase) reach. Individual words such as “sexism” could be removed from blacklists, and brands would still be protected against content that promotes sexism, while ads would continue to run against content about fighting sexism.
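The idea can be sketched as stance-aware filtering: block a page only when its stance *endorses* the topic, not whenever the topic word appears. The cue-word “detector” below is a deliberately crude stand-in for a real NLP model, invented purely to show the decision logic:

```python
# Hypothetical sketch of stance-aware filtering. The cue-word heuristic is a
# toy stand-in for a real semantic model, for illustration only.

COUNTER_CUES = {"fight", "fighting", "against", "combat", "ending", "tackling"}

def stance_toward(topic: str, text: str) -> str:
    """Return a crude stance label: 'absent', 'opposes', or 'endorses'."""
    words = [w.strip(".,!?\"'").lower() for w in text.split()]
    if topic not in words:
        return "absent"
    return "opposes" if set(words) & COUNTER_CUES else "endorses"

def block_ad(text: str) -> bool:
    """Block only pages whose stance endorses sexism, not every mention of it."""
    return stance_toward("sexism", text) == "endorses"

print(block_ad("Ten charities fighting sexism in the workplace"))      # False: ads can run
print(block_ad("Casual sexism is fine and people should accept it"))   # True: blocked
```

A keyword blacklist would have blocked both pages; the stance-aware version recovers the reach on the first one while still blocking the second.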
Both YouTube and Facebook have invested significant amounts of money in people and teams that manually review content and prevent ads from appearing against unsuitable material. Given the vast amount of video and written content added to the internet every day, there will never be enough human hours to review every piece of content. Human review is not a full-scale solution on its own, but it is still required to tackle the problem effectively.
Content that is identified as unsuitable by humans—along with a classification or description of why the content is inappropriate—can be fed into AI systems to augment the learning process. This improves the effectiveness of the algorithms and keeps the AI as up to date as possible.
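The human-in-the-loop cycle described above can be sketched as reviewed items, each carrying a label and a reason, being folded back into a training set. The class names and fields are hypothetical, chosen only to illustrate the flow:

```python
# Hypothetical sketch of the human-review feedback loop: labeled decisions,
# plus the reviewer's reason, are appended to the data the model retrains on.
from dataclasses import dataclass, field

@dataclass
class Review:
    text: str
    unsuitable: bool
    reason: str  # why the human flagged (or cleared) it

@dataclass
class TrainingSet:
    examples: list = field(default_factory=list)

    def ingest(self, review: Review) -> None:
        """Fold a human decision back into the model's training data."""
        self.examples.append((review.text, review.unsuitable, review.reason))

data = TrainingSet()
data.ingest(Review("…", True, "incites violence"))
data.ingest(Review("…", False, "news reporting, not endorsement"))
print(len(data.examples))  # 2
```

The reason field matters as much as the label: it lets the model learn *why* something was unsuitable, which is what keeps the classifier current as new forms of toxic content appear.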
While many brands might feel they’re fighting a losing battle for brand safety online, implementing a blended approach is the best way to ensure you’re fully armed and able to fight that battle as effectively and efficiently as possible. It might seem like a mountainous task, but it’s vital that all parts of the digital media industry persevere, regardless of what the bad actors do next.
Anant Joshi is the CRO of Factmata, a tool that uses advanced NLP to solve the problem of misinformation and protect brands online.

Reblogged from www.clickz.com