How Content Score can help

How Content Score Works

Tools and integration

1. Risk identification & classification

Detect

Pick up the grey-area items that other automated systems might miss and that your users might not flag.

Prioritise

We help you triage your trust and safety queues, categorising items so they can be filtered and searched.

Investigate

Via our categorisation and interpretable scores, we show the reasoning behind your take-down decisions by listing the items that should be investigated, questioned, or left alone.

Retrain

We retrain our models on your feedback for every URL we score, and we specialise in active learning and relabelling policies for skewed datasets; a minimal sketch follows.
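
As an illustration of what such a relabelling policy can look like, here is a minimal uncertainty-sampling sketch in Python. It is purely illustrative: the Item shape, the budget, and the selection rule are assumptions for this page, not our production pipeline.

    from dataclasses import dataclass

    @dataclass
    class Item:
        url: str
        score: float  # model's probability that the item is harmful

    def select_for_relabelling(items, budget=100, boundary=0.5):
        # Items scored closest to the decision boundary are the ones the
        # model is least certain about, so fresh human labels there teach
        # the retrained model the most -- especially useful on skewed
        # datasets where harmful items are rare.
        return sorted(items, key=lambda it: abs(it.score - boundary))[:budget]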

2. Customer API

Through our API integration, you can submit a content item, as text or a URL, retrieve its scores, and send individual feedback.

Authorisation

We set up your API key and can configure your models for you.

Scoring Content

Send a URL or extracted text for scoring. Monitor completion status via the API, or optionally be notified by email or webhook once scoring completes.
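
A minimal sketch of a scoring request, assuming a REST-style endpoint; the base URL, path, field names, and the returned job handle are illustrative placeholders, not the documented API.

    import requests

    # Placeholder endpoint and key -- your real values come from the
    # authorisation step above.
    API_BASE = "https://api.example.com/v1"               # assumption
    HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

    # Submit a URL (or extracted text) for scoring.
    resp = requests.post(f"{API_BASE}/score",
                         json={"url": "https://example.com/article"},
                         headers=HEADERS)
    resp.raise_for_status()
    job_id = resp.json()["job_id"]  # assumed handle for polling completion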

Retrieving Scores

Call our API to get scores for one or more URLs in JSON format.
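
Continuing the sketch above with the same placeholder endpoint and headers; the response shape shown in the comment is an assumption.

    import requests

    API_BASE = "https://api.example.com/v1"               # assumption
    HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

    # Fetch the scores for one or more URLs as JSON.
    resp = requests.get(f"{API_BASE}/scores",
                        params={"url": "https://example.com/article"},
                        headers=HEADERS)
    resp.raise_for_status()
    print(resp.json())  # e.g. {"url": ..., "scores": {"toxicity": 0.93, ...}}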

Feedback

Send individual feedback on content scores.
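
And a matching feedback call, again with an assumed payload shape rather than the documented one.

    import requests

    API_BASE = "https://api.example.com/v1"               # assumption
    HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

    # Dispute (or confirm) a score so the model can be retrained on it.
    requests.post(f"{API_BASE}/feedback",
                  json={"url": "https://example.com/article",
                        "model": "toxicity",
                        "agree": False,
                        "comment": "Satirical piece, not a genuine attack"},
                  headers=HEADERS).raise_for_status()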

3. Brand Safety Integration

Campaign Mapping

We map your brand campaign targets against our database of harmful sites to flag risks early. We can also help you place ads in brand-relevant contexts, making for cleaner, contextually relevant ad placements.
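
In spirit, the early-flag check is a lookup of your targets against our database. A toy sketch, with invented site names standing in for real data:

    # Toy illustration: flag campaign targets found in an exclusion set.
    exclusion_list = {"badsite.example", "scam.example"}  # stand-in for our database
    campaign_targets = ["news.example", "badsite.example", "blog.example"]

    flagged = [site for site in campaign_targets if site in exclusion_list]
    print(flagged)  # -> ["badsite.example"]: flag these before the campaign runs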

URL/content moderation

We monitor each URL and impression to enable granular tuning.

Log Files Analysis

Analyse log files to find out which agencies, content networks, and sites are most harmful and should be excluded from future campaigns.
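
A hedged sketch of that aggregation, assuming a CSV export with site and risk_score columns; actual column names and file layouts vary by DSP.

    import csv
    from collections import defaultdict

    totals = defaultdict(float)
    counts = defaultdict(int)
    with open("campaign_log.csv", newline="") as f:
        for row in csv.DictReader(f):  # assumed columns: site, risk_score
            totals[row["site"]] += float(row["risk_score"])
            counts[row["site"]] += 1

    # Rank sites by average risk score; the worst are exclusion candidates.
    worst = sorted(totals, key=lambda s: totals[s] / counts[s], reverse=True)
    print(worst[:10])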

Post-bid campaign analysis

Analyse your campaign log files post-bid and drive allocation changes with your DSP. Our AI has been trained to understand nuance and does not judge websites arbitrarily; instead, it monitors up to 100 pages of each site daily to provide constantly updated risk scores.

Website List Maintenance

Content Score offers a monthly monitoring service for the inclusion and exclusion lists of your website inventory. For any targeting list you give us, we deliver a monthly report of each website's risk score, across multiple languages.

To build algorithms capable of classifying content in subtle, nuanced ways, we collect high-quality, detailed annotations from experts in each subject.

Subject-matter Experts

Extract relevant arguments and claims, and determine their stance within the context of the issue.

Journalists, Linguists, Researchers

Review content efficiently for quality of argument, evidence, and bias.

Scientific & Medical Experts

Judge scientific arguments and claims against the available evidence.

Advocacy Groups & Charities

Find subtle offensive memes and know the legal definitions of different types of hate speech in different countries.

What content we can detect

We have 19 models currently running to detect harmful content:

To identify prolonged public disagreement or heated discussion, usually concerning a matter of conflicting opinion or point of view.

To identify misleading, low-quality content, deliberately written to sound authentic and truthful, based on different linguistic cues.

To identify content with extremely biased language that is hyperpartisan in nature or pushing a certain agenda very aggressively towards one entity, person or viewpoint.

To identify the use of humour and exaggeration to provide alternate commentary on a person, event, or organisation, usually as ridicule.

To identify language used to mock or annoy someone, or for humorous effect. Sarcasm may employ ambivalence, although it is not necessarily ironic.

To identify opinionated content, where the writer discusses a topic with little to no factual language, often based on individual prejudices and experiences.

To identify the sentiment towards something, especially as expressed in a publicly stated opinion.

To identify the overall binary sentiment of content (positive or negative); the predicted confidence scores can be used as a proxy for neutral sentiment (see the sketch after this list).

To identify and recognise types of feelings expressed in text. We adapt Robert Plutchik's eight-emotion wheel, which suggests eight primary emotions grouped into opposing positive/negative pairs: joy vs. sadness, anger vs. fear, trust vs. disgust, and surprise vs. anticipation.

To identify article headlines that, at the expense of being informative, are designed to entice readers into clicking the accompanying link.

To identify unsolicited messages to large numbers of recipients for the purpose of commercial or non-commercial advertising, or for any prohibited purpose (e.g. phishing).

To identify hateful or toxic language targeting an individual on the basis of ethnicity, religion, or demographic identity.

To identify scornful or abusive remarks directed towards an individual or entity.

To identify demeaning and abusive language based on people's group identity, with a focus on gender and stereotypes.

To identify direct expressions of a wish or intention for pain, injury, or violence against an individual or group.

To identify direct expressions of a wish or intention for pain, injury, or violence against an individual or group.

To identify insults, threats and attacks on any individual or group with the purpose of humiliating, degrading or excluding that person or group.

To identify the use of abusive words and graphic language.

To identify harmful and toxic language in general, both targeted and untargeted (racism, sexism, hate speech, obscenity, insults, etc.).
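
As a hedged illustration of the neutral-sentiment proxy mentioned in the binary sentiment model above: the cut-off below is an assumption for this sketch, not a published Content Score setting. The Plutchik pairs are the ones listed for the emotion model.

    def sentiment_label(positive_prob, threshold=0.65):
        # Illustrative only: treat low-confidence binary predictions as
        # neutral; the 0.65 threshold is an assumed value.
        if positive_prob >= threshold:
            return "positive"
        if positive_prob <= 1.0 - threshold:
            return "negative"
        return "neutral"  # the model is not confident either way

    # Plutchik's eight primary emotions, grouped into opposing pairs.
    PLUTCHIK_PAIRS = {"joy": "sadness", "anger": "fear",
                      "trust": "disgust", "surprise": "anticipation"}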

Try Content Score

A sneak peek at how Content Score works.

Contact us

We would like to chat with you about how we can help you recognise online harm!