SOLUTION

Brand Safety
for Advertisers and Brands


Brand Safety Solutions for Programmatic & Contextual Advertising

Problems

Brand safety is an evolving challenge for advertisers: a previously safe publisher can post inappropriate content at any time.
TAG reports that 85% of UK consumers would boycott their favourite brands if ads appeared near Covid-19 conspiracies. Most consumers would also reduce purchases from a favourite brand that advertised near hate speech (89%), illegal content (89%), or terrorist recruiting materials (93%).
Current brand-safety methods that rely on keywords are blunt instruments and can cut ad revenue for legitimate sites and good-quality content.

Solution

By scoring online content through our API, we identify 19 distinct signals of content that is unsafe or unsuitable for brands. This helps advertisers:

Prevent ad spend from going to unsafe publishers

Align ads with more suitable content, increasing conversions

Reduce risk of public or specialist backlash

Adjust their risk thresholds on a signal-by-signal basis, depending on brand values and target audiences
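As a rough illustration of how per-signal thresholds could work in practice, here is a minimal sketch. The signal names, the 0.0–1.0 score range, and the response shape are all assumptions for illustration, not the real Content Score API:

```python
# Hypothetical sketch: applying per-signal brand-safety thresholds to a
# content-scoring response. Signal names, the 0.0-1.0 score range, and the
# response shape are illustrative assumptions, not the real API.

def is_placement_safe(scores: dict, thresholds: dict) -> bool:
    """Return True only if every scored signal stays below the brand's threshold."""
    for signal, score in scores.items():
        limit = thresholds.get(signal, 0.5)  # assumed default risk tolerance
        if score >= limit:
            return False
    return True

# Example: a cautious brand sets a stricter limit on hate speech but
# tolerates satire, reflecting its own values and audience.
page_scores = {"hate_speech": 0.12, "misinformation": 0.40, "satire": 0.80}
brand_thresholds = {"hate_speech": 0.10, "satire": 0.95}

print(is_placement_safe(page_scores, brand_thresholds))  # False: hate_speech over limit
```

Because each signal has its own threshold, two brands can evaluate the same page and reach different placement decisions.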

In addition, we can help you place ads in brand-relevant contexts based on:

Stance: whether a page is "for" or "against" a given topic

Sentiment and emotional content of pages

Satire and other forms of humour

Subjectivity and controversy
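A contextual-alignment filter built on these signals might look like the sketch below. The field names ("stance", "sentiment") and the page records are invented for illustration; they are not the real API output:

```python
# Hypothetical sketch: selecting pages whose context aligns with a campaign,
# using stance and sentiment signals. Field names and records are invented.

def matches_context(page: dict, topic: str, want_stance: str, min_sentiment: float) -> bool:
    """Keep pages that take the desired stance on a topic with a positive enough tone."""
    stance = page.get("stance", {}).get(topic)
    return stance == want_stance and page.get("sentiment", 0.0) >= min_sentiment

pages = [
    {"url": "a.example", "stance": {"electric_cars": "for"}, "sentiment": 0.7},
    {"url": "b.example", "stance": {"electric_cars": "against"}, "sentiment": 0.9},
]
aligned = [p["url"] for p in pages if matches_context(p, "electric_cars", "for", 0.5)]
print(aligned)  # ['a.example']
```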

"Sovrn is passionate about working with independent publishers of quality content. To offer further quality metrics to our buyers, we have chosen to work with you to help build new Inclusions lists of inventory that are free of hate speech, politically extreme content, and fake/spoof content…this is a new offering in the programmatic advertising market, and you're a strong partner in this space. We are excited to be part of your journey to help indirect programmatic offer a cleaner, healthier environment for brands."

Andy Evans
Former CMO at Sovrn and founder of Scroll

Solution / Testimonial

Get started with Content Score

CASE STUDY

Client

Situation

Taboola, a world-leading online advertising company, was manually maintaining lists of bad sites to prevent wasted ad spend for their clients through their own automated system. Around 10,000 websites had already been flagged.

Task

Taboola wanted to review and update their internal analysis using our AI.

Action

We provided a service in which up to 10,000 websites already flagged by Taboola were second-screened by us each week.

Our tool enabled their team to conduct more ad-hoc risk analyses, giving feedback to our AI on each domain, which allowed it to learn their content-policy preferences and risk levels.

Result

  • Our work led to 95 propaganda and anti-semitic websites, each with 100,000 impressions a month, being removed from the network.
  • We saved 2,100 hours a month, around $1.3m in moderator equivalents.

Content Score Method

CASE STUDY

Grey-area harmful content detection trial

Situation

Multiple SSPs, DSPs, trading desks, and ad agencies were struggling with ads listed next to harmful content on ad exchanges, wasting advertisers' ad spend and damaging brand reputation. Each had already filtered inventory through blocklists and keyword-blocking systems, but these were not sufficiently accurate.

Task

These SSPs, DSPs, trading desks, and ad agencies provided us with sample inventories to analyse, including campaign log files, so they could see content scores.

Action

We used our content-scoring AI to find content on which brands did not want to place ads. The majority of flagged placements were funding propaganda, hate, and violence.

Result

  • 86% of domains were correctly labelled as misinformation or disinformation
  • 0.91–23.9% of publishers contained at least one hateful or hyperpartisan page
  • Up to 7% of pages were rated as unsafe, toxic, or highly questionable
  • Up to 5 sites per network were found by our "deceptive language" algorithm, in addition to a blocklist of ~400 known fake sites