How Softonic turned around negative brand sentiment in AI search
Softonic x LLM Pulse: from legacy reputation to AI-accurate brand perception
TL;DR
Softonic was being flagged by AI models as a risky download source, even though the business had spent years cleaning up its catalog and trust signals. Within four months of adopting LLM Pulse, they identified the specific prompts driving that perception, built a targeted content and PR plan around them, and lifted positive sentiment by 7pp across the tracked non-brand prompts.
About Softonic
Softonic is one of the largest software discovery platforms in the world, serving more than 100 million users a year across 20+ languages. Their catalog covers everything from open-source utilities to commercial productivity apps, with editorial reviews and safety scans powering each listing.
The problem: an AI sentiment gap they couldn't see
Traditional SEO and brand monitoring tools were telling Softonic everything was fine. Organic traffic was healthy, branded search was stable, and review sites showed strong scores.
But when their team started spot-checking ChatGPT, Perplexity and Gemini, the picture was very different:
- "Is Softonic safe?" returned hedged, cautious answers in 3 out of 5 models.
- Long-form prompts about software recommendations frequently included disclaimers about historical issues from a decade ago.
- Competitors with smaller catalogs were being recommended over Softonic in 47% of head-to-head prompts.
They needed a way to quantify this gap, prioritize what to fix, and measure whether the fixes were working.
What they did with LLM Pulse
1. Mapped the prompts that actually mattered
Using LLM Pulse's Prompt Tracking, the team set up 450 high-intent prompts in English and Spanish, the two languages driving most of their AI-search risk. Each prompt was run weekly across ChatGPT, Perplexity, Gemini, Google AI Mode and Google AI Overviews.
Within the first execution cycle, the team had a clear ranked list of the prompts where Softonic had the lowest visibility, the most negative sentiment, or both.
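That ranking step can be sketched in a few lines. This is a hypothetical illustration, not LLM Pulse's actual API: it assumes each weekly run can be exported as records of prompt, model, whether the brand was mentioned, and a sentiment score, and that you want the worst prompts (lowest visibility, most negative sentiment) at the top of the worklist.

```python
from collections import defaultdict

# Hypothetical weekly export: one record per (prompt, model) run.
# Field names are illustrative, not LLM Pulse's real schema.
runs = [
    {"prompt": "is softonic safe", "model": "chatgpt", "mentioned": True, "sentiment": -1},
    {"prompt": "is softonic safe", "model": "gemini", "mentioned": True, "sentiment": 0},
    {"prompt": "best free video editor", "model": "perplexity", "mentioned": False, "sentiment": 0},
    {"prompt": "best free video editor", "model": "chatgpt", "mentioned": True, "sentiment": 1},
]

stats = defaultdict(lambda: {"runs": 0, "mentions": 0, "sent_sum": 0})
for r in runs:
    s = stats[r["prompt"]]
    s["runs"] += 1
    s["mentions"] += r["mentioned"]
    s["sent_sum"] += r["sentiment"] if r["mentioned"] else 0

def priority(item):
    # Sort ascending, so low visibility and negative average
    # sentiment both push a prompt toward the top of the worklist.
    prompt, s = item
    visibility = s["mentions"] / s["runs"]
    avg_sentiment = s["sent_sum"] / s["mentions"] if s["mentions"] else 0
    return (visibility, avg_sentiment)

worklist = sorted(stats.items(), key=priority)
```

At real scale this runs over 450 prompts times five models per week, but the ranking logic stays the same.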
2. Used sentiment tracking to find the narrative
LLM Pulse's Sentiment Tracking broke down each mention of Softonic into:
- Polarity (positive / neutral / negative)
- Topics driving the sentiment (safety, ads, bundling, app quality)
- Sources the AI was citing as evidence
This is where it clicked: 78% of negative sentiment was driven by old forum threads and a single 2014 incident report that AI models kept retrieving via outdated citations.
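A back-of-the-envelope version of that breakdown, assuming exported mentions carry polarity, topic, and the cited source (all field names and records here are hypothetical):

```python
from collections import Counter

# Illustrative mention export, not real Softonic data.
mentions = [
    {"polarity": "negative", "topic": "safety", "source": "forum-thread-2013"},
    {"polarity": "negative", "topic": "bundling", "source": "incident-report-2014"},
    {"polarity": "negative", "topic": "safety", "source": "incident-report-2014"},
    {"polarity": "positive", "topic": "app quality", "source": "review-2024"},
]

negative = [m for m in mentions if m["polarity"] == "negative"]
by_source = Counter(m["source"] for m in negative)

# Share of negative sentiment attributable to each cited source.
for source, count in by_source.most_common():
    print(f"{source}: {count / len(negative):.0%}")
```

Grouping negatives by cited source is what surfaces a pattern like "one 2014 incident report drives most of the damage".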
3. Acted with citations + content recommendations
From there, the playbook wrote itself:
- PR + outreach to refresh or replace the cited sources where possible; 95% of the negative citations were more than ten years old.
- Content rewrites of the 22 highest-impact pages flagged by Content Recommendations, focusing on safety, transparency and trust signals.
- New canonical landing pages for the questions where AI models had nothing fresh to cite.
- GEO Testing to A/B test layout and trust-signal placement.
4. Measured everything, weekly
Every Monday, the team reviewed:
- Visibility per prompt cluster (Brand Visibility + AI Visibility Score)
- Sentiment shift week-over-week
- New citations earned by the rewritten pages
- Share of Voice vs. their top 4 competitors
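One common definition of a net sentiment score is the share of positive mentions minus the share of negative ones, in percentage points. A minimal sketch of the week-over-week shift the team reviewed, with illustrative counts rather than Softonic's actual data:

```python
def net_sentiment(positive, neutral, negative):
    """Net sentiment in percentage points: share positive minus share negative."""
    total = positive + neutral + negative
    return 100 * (positive - negative) / total

# Illustrative weekly mention counts (not real Softonic data).
last_week = net_sentiment(positive=120, neutral=200, negative=80)
this_week = net_sentiment(positive=140, neutral=190, negative=70)
shift = this_week - last_week  # week-over-week change in pp
```

Tracking the shift rather than the raw score keeps the Monday review focused on whether the latest fixes moved the needle.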
The results (4 months in)
The team's Net Sentiment Score improved by 7pp in just four months. Users asking AI models about Softonic now get a more accurate picture of the present-day business, rather than one anchored to incidents from a decade ago. The team isn't stopping here and is committed to continuing to improve Softonic's reputation in AI search.
"Understanding how LLMs perceive our brand is no longer optional — it is the next frontier of a modern SEO strategy. LLM Pulse provides the deep insight we need to see exactly what influences our AI visibility. It has turned our feelings into a roadmap of actionable steps, ensuring our content is perceived accurately across every major model."
Ferran Gavin, Director of Catalog and Traffic, Softonic
Why it worked
- Multi-model coverage. Softonic stopped optimizing for a single model and started optimizing for the answer they want users to get, regardless of which AI is asked.
- Prompts as the unit of work. Instead of vague "improve brand" goals, they had a finite list of prompts and a clear definition of done for each.
- Sentiment + citations together. Knowing the polarity wasn't enough. The citations told them which sources to influence.
Want a similar playbook for your brand?
Softonic's wins are repeatable for any brand with a long history online. If you want help running this play with LLM Pulse, start a free trial or talk to us.