How Utopia Analytics’ AI moderation software helped Proto Thema reclaim the comments section

For Greek news publisher Proto Thema, the site’s comments section has always presented a dilemma, says web technology officer George Koiliaris. Comments could boost engagement and revenue, but toxic discourse and the burden of manual moderation threatened to overwhelm the newsroom or even shut down the comments section entirely.

“I don’t know what’s wrong with people,” he says. “Even if the story is about a kid saving a cat from a tree, they will always find something bad to say.” And when the story is about politics, the comments section can get ugly. 

Proto Thema needed a solution that could handle high-volume moderation without sacrificing quality or consuming editorial resources. Koiliaris found it in Utopia Analytics, an AI-powered comment moderation platform. 

The results were transformative: Proto Thema increased reader engagement three-fold while freeing journalists to focus on journalism.

Three reasons to use Utopia

  • Utopia’s AI handles 80-90% of comment moderation automatically, eliminating delays and costs associated with manual review while maintaining high-quality discourse.
  • Context-aware technology understands complex meaning rather than just chasing keywords, accurately flagging subtle insults, sarcasm and toxic content in conversational context.
  • Beyond moderation, the platform provides actionable audience insights, detailing which topics drive engagement and where toxicity originates.

Newsroom overview

Covering national and international news, sports and lifestyle features, Proto Thema is one of the largest online publishers in Greece, with about 1.3 million unique visitors per day alongside its popular weekly print edition and an English-language webpage aimed at the Greek diaspora. 

The newsroom operates with 20 to 30 staff members who previously divided their time between core journalism duties such as reporting, writing and editing, and the time-consuming task of moderating reader comments.

This setup reflects a broader industry challenge.

“Still today, surprisingly, a lot of news media do (comment moderation) manually,” says Santiago Osorio, trust and safety director at Utopia Analytics.

“We’re in the 21st century,” he adds, “technology can already do it.”

Problem: Shutting down conversation

Proto Thema likes to “leave things pretty open for people to leave their comments,” Koiliaris says. They don’t require a login or user verification. That resulted in “many, many, many comments that were inappropriate, to say the least.”

Newsroom staff manually monitored these comments, often taking several hours to review submissions. These delays created a cascade of problems: reduced reader interaction, shorter site visits (a crucial advertiser metric) and frustrated journalists.

“Nobody wants to do moderation,” Utopia’s Osorio says. “I’m sorry to say, but it’s a very shitty job,” and one which takes time away from the journalism they want to do. 

Plus, people just aren’t very consistent about it. 

Two people, given the same moderation guidelines, will come to different conclusions about whether to publish a comment.

One of Utopia’s clients hired moderators and paid them per comment. The client discovered that some of them “were just clicking accept, accept, accept,” Osorio says. “If I’m being paid by the click, I’m just going to click like crazy. That’s when they realized, ‘Oh, that’s why the moderation is so bad.’”

This is why publishers often flip-flop over the comments section.

“We’ve seen these cycles of ups and downs, when news media say, ‘Yes, let’s bring comments back. They’re engaging,’” Osorio says. “And then they deal with moderation, and it doesn’t work, it’s too much work, the toxicity is all over the place… And then they’re like, ‘No, let’s shut this down.’”

Solution: AI-driven editorial efficiency

Older automated moderation systems relied on dictionaries or rule sets to flag specific words. Users quickly found ways around this by substituting characters, resulting in a “never-ending race” between users and the moderation tool.

Utopia’s AI model is built on the principle that “a machine [should] look at data the way that we humans look at data,” Osorio says, quoting one of Utopia’s founders. The AI doesn’t just look for individual words, but analyzes the context by looking at metadata, including:

  • article category (politics, fashion, sports);
  • article title and description;
  • comment type (main comment or reply); and
  • conversation history (up to six preceding lines).
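As a rough illustration of how that context might travel with a comment, here is a hypothetical request payload bundling the comment with the metadata listed above. The field names and structure are illustrative assumptions, not Utopia's actual API.

```python
# Hypothetical payload for a context-aware moderation call.
# Field names are illustrative, not Utopia's actual API.

def build_moderation_request(comment, article, thread):
    return {
        "comment_text": comment["text"],
        # Main comment vs. reply, inferred from whether it has a parent.
        "comment_type": "reply" if comment.get("parent_id") else "main",
        "article_category": article["category"],  # e.g. "politics"
        "article_title": article["title"],
        "article_description": article["description"],
        # Up to six preceding lines of the conversation for context.
        "conversation_history": thread[-6:],
    }

request = build_moderation_request(
    comment={"text": "you are bananas", "parent_id": 42},
    article={"category": "politics", "title": "Election results",
             "description": "Live coverage of the vote count."},
    thread=["First!", "Interesting analysis.", "I disagree."],
)
print(request["comment_type"])  # reply
```

The point of sending the surrounding article and thread, rather than the comment alone, is that the same sentence can be harmless under one story and abusive under another.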

This allows the AI to infer meaning, even when language is obscured. For example, the AI can recognize that “you are nuts” and “you are bananas” both carry the same insulting meaning as “you are crazy” and flag them accordingly.

And it can understand context, such as when a seemingly harmless sentence like, “this is the best thing that could happen,” becomes inappropriate when posted under an article about an attack, Osorio says.

Utopia creates a tailor-made AI model for each publisher. The first step is to train the system on the client’s historical data. That helps the model learn what kinds of comments a publisher will tolerate and what it won’t, which varies from outlet to outlet. With enough data to start with, the system can be up and running in two weeks.

If historical data isn’t available, Utopia can start with an LLM. This generic approach will catch the most obviously toxic content. Within two or three months, the system will have gathered enough data to transition into the core, customized AI system. 

The system assigns confidence scores to each comment, allowing publishers to set thresholds for human review. Utopia recommends clients set that bar low to begin with, giving the user “space to get to trust the robot.” Then they can ratchet it up week by week.
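The routing logic described above can be sketched as follows. This is a minimal illustration, not Utopia's implementation; the function names and the 0-to-1 score scale are assumptions.

```python
# Minimal sketch of confidence-based routing: the model scores each
# comment, and anything below the publisher's cutoff goes to a human
# moderator. The cutoff is adjusted week by week as the publisher
# gains trust in the model. Names and the 0-1 scale are illustrative.

REVIEW_THRESHOLD = 0.6

def route_comment(ai_decision, confidence, threshold=REVIEW_THRESHOLD):
    """Return where a comment goes based on model confidence."""
    if confidence >= threshold:
        return ai_decision      # "publish" or "reject", applied automatically
    return "human_review"       # low confidence: a person decides

print(route_comment("publish", 0.92))  # publish
print(route_comment("reject", 0.35))   # human_review
```

Raising the threshold sends more borderline comments to people; lowering it lets the AI act alone more often.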

Impact: Triple the comments, 80% time savings

With Utopia handling the moderation workload, Koiliaris estimates that Proto Thema journalists gained approximately 80% of the time they had previously dedicated to comment review. This means reporters and editors are doing what they were hired for: interviewing sources, writing articles and editing copy. 

Koiliaris notes that Utopia also helped Proto Thema monitor the performance of their remaining human moderators. Reports provided by Utopia identified those who were accepting or rejecting all comments en masse, “because they’re too busy, they’re bored, they’re slacking,” Koiliaris says. “We can tell who’s doing the actual work and who’s not.”

With AI allowing comments to publish in near-real time, Proto Thema has seen an explosion in audience engagement. Comments have roughly tripled since they started using Utopia’s system, to roughly 250,000 per month. Readers are staying on the site longer to read the comments. And the system has attracted a segment of the audience that doesn’t even read the article, instead reading the headline and diving straight into the comments section to see what people are saying about it.

Utopia’s data also provides actionable editorial intelligence. Analytics reveal which stories generate the most engagement and how publication timing affects audience interaction. The data also identifies bad actors, with Osorio noting that “60 to 70% of toxic content typically comes from just 3 or 4% of users.” Removing these troublemakers dramatically improves the comment environment.

Pricing for Utopia’s AI Moderator starts around $2,000 per month and tops out in the tens of thousands for big media conglomerates. So, it’s not cheap. But Osorio noted that the AI is doing 80-90% of the work that several people used to do (and hated).

Utopia also offers flexible pricing. The company typically retrains the AI every two weeks based on user data, but can cut the frequency to lower the price.

Security and privacy

The company’s website doesn’t offer much detail, but Osorio says, “both security and privacy are very important for the sort of clients we deal with, many of which are renowned brands, hence we usually go through a scrutinized process of reviewing these aspects carefully with their legal (teams).”

The Finnish company is subject to the European Union’s strict General Data Protection Regulation, and Osorio says “we are proud to say we followed GDPR practices even before GDPR came into force.” This applies regardless of where the client is based. 

Utopia is also transparent about being ethically sustainable and, according to its website, bases its practices on the United Nations’ Universal Declaration of Human Rights (UDHR).

Verdict: Essential engagement without exhaustion

For publications seeking to foster a vibrant and loyal online community, Utopia Analytics is a powerful, cost-effective tool. It leverages advanced AI to shoulder the job of moderating comments, transforming the comments section from a toxic liability into a real-time source of engagement, audience data and potential revenue. 

Alternatives to Utopia Analytics for comment moderation

Perspective API: Google’s tool is free but offers limited customization.

Coral: Vox Media’s open-source platform allows complete customization and control over data, which appeals to privacy-conscious organizations and those with specific technical requirements. But Coral requires more technical expertise to implement and maintain.

Viafoura: This Toronto-based platform offers community engagement features such as user identities and reply notifications, and it includes paywall optimization and other tools to increase monetization.

OpenWeb: Backed by investors that include The New York Times, OpenWeb touts a proprietary LLM that claims high accuracy figures. Like Viafoura, it also includes community engagement features.

These companies don’t advertise their prices. Contact them for a custom quote.

Written by Steve Baragona

Steve Baragona is an award-winning science writer and editor. He spent eight years in research labs before deciding writing about science was more fun than doing it.