We are on your side. Our independent solution ensures brand-safe inventory in a digital world and brings clarity to the grey areas.
Our AI rapidly scans content rather than URLs
Machine learning algorithms classify nuanced content rapidly by analysing the semantic meaning of the content on individual pages. We don’t just look at the domain level or pick out isolated keywords – our technology highlights specific sections of pages, as well as whole pages, for nuances your clients may want to target.
Open up new inventory to advertisers
Excluding entire high-traffic domains throttles brand-safe traffic, and inclusion lists narrow reach by chaining advertisers to certain domains. Content Score identifies the individual pages that are risky, preserving scale while protecting brand safety, and creates nuanced content segments that go far beyond topics alone.
Increase inventory transparency and segmentation
Build trusted relationships with your buyers by guaranteeing that ads won’t appear on questionable content, based on your own parameters and tolerances. Create contextual segments based on new subtleties.
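As a sketch of how such parameters and tolerances could work in practice, the snippet below filters pages by per-signal thresholds. The signal names, scores and `is_brand_safe` helper are illustrative assumptions, not the actual Content Score API:

```python
# Hypothetical per-signal tolerances a buyer might configure
# (lower limit = stricter; 0.0 would mean zero tolerance).
tolerances = {"hate_speech": 0.2, "obscenity": 0.4, "clickbait": 0.7}

def is_brand_safe(page_scores, tolerances):
    """A page is eligible only if every configured signal score
    stays at or below the buyer's tolerance for that signal."""
    return all(page_scores.get(signal, 0.0) <= limit
               for signal, limit in tolerances.items())

# Illustrative page-level scores, as a per-signal probability map.
pages = {
    "example.com/recipe": {"hate_speech": 0.05, "obscenity": 0.10, "clickbait": 0.30},
    "example.com/rant":   {"hate_speech": 0.85, "obscenity": 0.60, "clickbait": 0.90},
}

# Only pages within every tolerance remain in the eligible inventory.
safe = [url for url, scores in pages.items()
        if is_brand_safe(scores, tolerances)]
```

Because the tolerances are per buyer, the same scored inventory can serve different risk appetites without re-classifying any pages.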
More accurate detection of subtle content
Using our unique, community driven annotation, Content Score’s classification methods are unbiased and able to understand the most complex, subtle forms of problematic content such as opinionated satire.
Content Score at work for brand safety
How we present it
Our hate speech systems work at the sentence level. When a larger text, such as a news article, is presented as input, it is split into sentences and each sentence is scored for hate speech independently. Each score is a probability between 0 and 1, and the overall score for an article can be aggregated with custom parameters.
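The split-score-aggregate flow can be sketched as follows. The sentence splitter and the keyword-based `score_sentence` stand in for the real model purely for illustration; only the overall shape (independent sentence scores combined by a pluggable aggregation function) reflects the description above:

```python
import re

def split_sentences(text):
    # Naive splitter on sentence-ending punctuation; a production
    # system would use a proper NLP sentence tokenizer.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def score_sentence(sentence):
    # Placeholder for the hate-speech model: returns a probability
    # in [0, 1]. A trivial keyword heuristic stands in for it here.
    flagged = {"hate", "attack"}
    return 1.0 if set(sentence.lower().split()) & flagged else 0.1

def article_score(text, aggregate=max):
    """Score every sentence independently, then combine the scores with
    a custom aggregation function (max by default: the worst sentence
    determines the article)."""
    scores = [score_sentence(s) for s in split_sentences(text)]
    return aggregate(scores) if scores else 0.0
```

Swapping `aggregate` for a mean, or a count of sentences above a threshold, yields different article-level policies from the same sentence scores.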
It should also be noted that prominent brands have ads placed on the page around this article, which may damage their reputation, especially if the page is high traffic and likely to pick up attention in the news cycle.
Content Score’s AI technology detects this type of content so it can be excluded from brand advertising placements, protecting brands against extremely harmful contextual associations.
The above is an example of our Hyperpartisanship signal detection, which specialises in picking up extreme-right and extreme-left news content that is clearly non-neutral.
We now have 19 models to help build a brand safe environment:
- Spam: Spam and phishing content
- Harmful: Racism, Sexism, Hate speech, Obscenity or insult content
- Hyperpartisanship: Hyperpartisan, extremely biased and aggressive political content
- Threat: Threatening language
- Clickbait: Clickbait title with low quality content
- Controversy: Conflict, disagreement and heated discussions
- Toxicity: Insults, threats, attacks, humiliation or degrading content
- Obscenity: Abusive words and graphic language
- Hate speech: Racist, sexist or abusive content
- Sarcasm: Sarcastic content that may mock or annoy
- Subjectivity: Highly opinionated and prejudiced content
- Sexism: Gender demeaning and abusive content
- Racism: Racist content
- Insult: Scornful or abusive remarks
- Satire: Humour, exaggeration and ridicule
- Emotion: Either negative emotion (sadness, anger, disgust or fear) or positive emotion (joy or trust)
- Fake News: Fake, low-quality, misleading news content
- Sentiment: Either negative or positive sentiment content
- Stance: The sentiment towards a specific topic/subject, i.e. whether a page overall is for or against the given topic/subject
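To make the signal list concrete, here is a sketch of what a per-page classification result might look like. The field names, URL and score values are hypothetical assumptions for illustration, not the real Content Score API schema:

```python
import json

# Illustrative only: NOT the actual Content Score API response shape,
# just a sketch of per-signal scores for a single classified page.
response = json.loads("""
{
  "url": "https://example.com/article",
  "signals": {
    "hate_speech": 0.03,
    "hyperpartisanship": 0.91,
    "clickbait": 0.12,
    "stance": {"topic": "climate policy", "pro": 0.80, "against": 0.20}
  }
}
""")

# A buyer could route the page into (or out of) segments per signal,
# e.g. flagging any scalar signal whose score exceeds 0.5.
risky = [name for name, score in response["signals"].items()
         if isinstance(score, float) and score > 0.5]
```

Returning every signal for every page, rather than a single pass/fail verdict, is what lets different buyers build different segments from the same classification run.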
At Content Score, we apply our Brand Safety Solution API to ensure a brand safe environment for advertisers. By combining advanced technology with human intelligence, our team analyses more nuanced language and always takes your feedback into account. Contact us to schedule a trial now!