API — LLMPulse
Build dashboards, ETL pipelines and automations with a clean, well-typed API.
LLM Pulse is an AI visibility analytics platform that monitors how your brand appears in AI-generated responses from ChatGPT, Perplexity, Gemini, and other LLMs. The API gives you programmatic access to all your visibility data — including brand mentions, citation sources, sentiment analysis, and share of voice metrics. Use it to build custom dashboards, automate reports, or integrate AI visibility tracking into your existing tools.
Authentication
Authenticate by sending your API key as a Bearer token:
Authorization: Bearer YOUR_API_KEY
Base URL
Use this base URL for all endpoints below. Examples already include the full path.
https://api.llmpulse.ai/api/v1
Quick start
The logical flow is: list resources → (optionally) fetch dimensions → query metrics. The first call to /dimensions/projects already validates your API key.
- List your projects: GET /dimensions/projects
- (Optional) Fetch project dimensions (competitors, models, locales, tags)
- Request metrics under /metrics/*
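A minimal end-to-end run with curl, using only the parameters documented below (key and project ID are placeholders):

curl "https://api.llmpulse.ai/api/v1/dimensions/projects" -H "Authorization: Bearer YOUR_API_KEY"
curl "https://api.llmpulse.ai/api/v1/metrics/timeseries?project_id=YOUR_PROJECT_ID&metrics=mentions,visibility&granularity=day&range=30" -H "Authorization: Bearer YOUR_API_KEY"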
API Playground
curl -X GET "https://api.llmpulse.ai/api/v1/ping" -H "Authorization: Bearer YOUR_API_KEY"
Authentication
Send your key in the Authorization header as a Bearer token. Rotate/revoke under Settings → API Keys.
Missing or malformed headers return 401 ERR_MISSING_AUTH. Unknown keys return 401 ERR_INVALID_API_KEY. Revoked keys return 403 ERR_REVOKED_API_KEY.
Each call runs in the context of the API key’s user. Projects must belong to that user; otherwise 404 ERR_PROJECT_NOT_FOUND.
Versioning & Rate limits
Current version: v1. Future breaking changes will bump the path (e.g. /api/v2).
Rate limiting: per-key burst + rolling window. Contact us for higher quotas.
Metrics endpoints support conditional GET (ETag/Last-Modified); see Cache & Conditional GET.
Dimensions endpoints
GET /dimensions/projects
Lists the authenticated user’s projects (also serves as an auth check).
GET /dimensions/competitors
Lists competitors for the project.
GET /dimensions/collections
Lists collections (Tags) belonging to the project.
GET /dimensions/tags
Alias for /dimensions/collections. Returns the same payload, but named as tags for consistency with the UI.
GET /dimensions/models
Lists models present in daily metrics for the project.
GET /dimensions/locales
Lists countries and languages present in daily metrics for the project.
GET /dimensions/sentiments
(Optional) Lists available sentiment buckets and their metric names.
- Params: project_id (required).
GET /dimensions/prompts
Lists prompts in a project.
- Params: project_id (required), collection_id, country_code, language_code, from, to, page, per_page (≤ 100).
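For example, paging through the prompts of one collection (IDs are placeholders):

curl "https://api.llmpulse.ai/api/v1/dimensions/prompts?project_id=YOUR_PROJECT_ID&collection_id=COLLECTION_ID&page=1&per_page=50" -H "Authorization: Bearer YOUR_API_KEY"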
GET /dimensions/prompt_executions
Lists prompt executions in a project.
- Params: project_id (required), model, collection_id, country_code, language_code, from, to, page, per_page.
GET /dimensions/sources
Lists non-rejected sources produced by prompt executions.
- Params: project_id (required), model, collection_id, country_code, language_code, from, to, page, per_page.
GET /dimensions/mentions
Lists mentions created by your project.
- Params: project_id (required), collection_id, from, to, page, per_page.
GET /dimensions/citations
Lists citations created by your project.
- Params: project_id (required), collection_id, from, to, page, per_page.
GET /dimensions/competitor_mentions
Lists competitor mentions detected in your executions.
- Params: project_id (required), competitors (CSV ids), from, to, page, per_page.
GET /dimensions/competitor_citations
Lists competitor citations detected in your executions.
- Params: project_id (required), competitors (CSV ids), from, to, page, per_page.
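For example, competitor citations for two competitors over a fixed window (IDs and dates are placeholders):

curl "https://api.llmpulse.ai/api/v1/dimensions/competitor_citations?project_id=YOUR_PROJECT_ID&competitors=1,2&from=2025-01-01&to=2025-03-31" -H "Authorization: Bearer YOUR_API_KEY"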
GET /dimensions/projects/:id
Get detailed information about a project including matching names, industry, business model, and statistics.
GET /dimensions/competitors/:id
Get detailed information about a competitor including matching names and app store IDs.
Answers (AI Responses)
Access the actual AI-generated responses with full content, mentions, citations, and sentiment analysis.
GET /answers
List AI responses with their content. Returns paginated results with truncated response text (max 10,000 chars).
- Params: project_id (required), model, collection_id, country_code, language_code, from, to, page, per_page.
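For example, listing answers for a single model with pagination (the model value is illustrative; use a value returned by /dimensions/models):

curl "https://api.llmpulse.ai/api/v1/answers?project_id=YOUR_PROJECT_ID&model=MODEL&page=1&per_page=20" -H "Authorization: Bearer YOUR_API_KEY"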
GET /answers/:id
Get a single AI response with full details including mentions, citations, sentiments, and sources.
- Params: project_id (required), id (answer ID in the URL path).
Detailed Sentiments
Access detailed sentiment analysis with comments, topics, and scores. This is different from /dimensions/sentiments which only returns sentiment categories.
GET /sentiments
List detailed sentiment records with full analysis context.
- Params: project_id (required), model, competitor_id, brand_only, analysis (very_positive / positive / neutral / negative / very_negative), collection_id, country_code, language_code, from, to, page, per_page.
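For example, fetching only negative sentiment records (IDs are placeholders):

curl "https://api.llmpulse.ai/api/v1/sentiments?project_id=YOUR_PROJECT_ID&analysis=negative&per_page=50" -H "Authorization: Bearer YOUR_API_KEY"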
Filters & Common params
- project_id: required for all project-scoped endpoints.
- metrics or metric: CSV of mentions, citations, visibility, avg_position, sentiment_very_positive, sentiment_positive, sentiment_neutral, sentiment_negative, sentiment_very_negative, net_sentiment.
- granularity: day | week | month (weeks start Monday).
- Time window: either range (days) or from & to (ISO 8601). If you send to, from is required. Defaults with from/to: missing from = 30 days ago (start of day); missing to = now (end of day).
- Filters: model, collection_id (tag id), country_code, language_code, competitors (CSV IDs).
- include_project (default true): set false to exclude your project (return only competitors).
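These filters combine freely across the /metrics/* endpoints; for example, a summary restricted to an explicit window and competitors only (IDs and dates are placeholders):

curl "https://api.llmpulse.ai/api/v1/metrics/summary?project_id=YOUR_PROJECT_ID&metric=mentions&from=2025-11-01&to=2025-11-30&competitors=1,2&include_project=false" -H "Authorization: Bearer YOUR_API_KEY"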
Metrics — Semantics & aggregation
These semantics are aligned 1:1 with the Overview UI.
- mentions: daily sums of mentions_count. For week/month we compute the PERIOD TOTAL (Mon–Sun for weeks) and repeat that same total for every day in the period. If a period's total is 0, we "carry" the last non-zero period total (sticky).
- citations: same behavior as mentions (PERIOD TOTAL repeated daily, with sticky carry when the period total is 0).
- visibility: percentage per actor, using the project-level executions as denominator under the same filters (mentions_count / prompt_executions_count × 100). • day: compute the daily ratio. • week/month: compute the ratio-of-sums for the period (Σ mentions / Σ executions) and repeat that value for every day in the period. If the period's denominator is 0, carry forward the last non-null value.
- ai_visibility_score: position-weighted visibility metric that accounts for mention prominence. Uses reciprocal position weighting (position 1 = 100%, position 2 = 50%, position 3 = 33%, etc.). Higher scores indicate both more mentions AND better positioning. • day: compute daily weighted score / executions. • week/month: ratio-of-period-sums with sticky carry, same as visibility.
- avg_position: daily average excluding nulls. • week/month: average within the period (ignoring nulls), then propagate (carry) the last known value to days without data, matching the Overview chart smoothing.
- Sentiment metrics (sentiment_very_positive, sentiment_positive, sentiment_neutral, sentiment_negative, sentiment_very_negative): for each actor and day, the percentage of sentiments in that bucket over all sentiments for that actor. • day: daily ratio (bucket_count / total_sentiments × 100). Days with no sentiments return null. • week/month: we compute a ratio-of-period-sums (Σ bucket / Σ total × 100) and repeat that percentage for every day in the period. If the period's denominator is 0, we carry forward the last non-null value (sticky), matching the Overview smoothing.
- net_sentiment: for each actor and day, a score in [-100, 100]: ((very_positive + positive) − (negative + very_negative)) / total_sentiments × 100. • day: we take the daily percentages of each bucket (the sentiment_* metrics) and compute net = (pos + very_pos) − (neg + very_neg). • week/month: the score inherits the same aggregation/smoothing semantics as the sentiment buckets, because it is computed from those series.
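For example, an actor whose daily split is 40% very_positive, 30% positive, 20% neutral, 5% negative and 5% very_negative scores net_sentiment = (40 + 30) − (5 + 5) = 60 for that day.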
In /metrics/summary, total is a sum for count metrics and an average for avg_position and net_sentiment.
GET /metrics/timeseries
Returns time series for one or more metrics, grouped by actor (your project + selected competitors). Weekly/monthly values follow the period semantics above.
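For example, weekly visibility and net sentiment over the last 90 days (each weekly value repeats for every day of its week, per the semantics above):

curl "https://api.llmpulse.ai/api/v1/metrics/timeseries?project_id=YOUR_PROJECT_ID&metrics=visibility,net_sentiment&granularity=week&range=90" -H "Authorization: Bearer YOUR_API_KEY"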
GET /metrics/summary
Aggregates per actor and metric (total/min/max/last). Ideal for KPI tiles.
GET /metrics/sov
Share of Voice based on mentions, aligned with Overview.
- over_time: for each date, SOV = (actor_mentions / sum_all_mentions) × 100. If the daily sum is 0, the last value is carried forward (sticky).
- current: uses the latest date with a non-zero total; if none, it uses totals over the whole range.
- breakdown: top 4 actors plus an aggregated Others row (and a detailed others list).
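For example (competitor IDs are placeholders):

curl "https://api.llmpulse.ai/api/v1/metrics/sov?project_id=YOUR_PROJECT_ID&competitors=1,2,3&range=30" -H "Authorization: Bearer YOUR_API_KEY"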
GET /metrics/top_sources
Ranks source domains by number of responses and average visibility (% of executions that surfaced that domain).
- The dataset includes non-rejected sources linked to prompt executions within the window and current filters.
- total_responses: number of unique prompt executions that produced at least one source on that domain.
- avg_visibility: (count of source rows for the domain / total executions) × 100.
- Sorting: sort=total_responses (default) or sort=avg_visibility.
- Pagination: page (≥ 1), per_page (default 20, max 100).
- Filter by domain: query (case-insensitive LIKE), e.g. query=github.
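For example, ranking by average visibility and filtering to GitHub domains:

curl "https://api.llmpulse.ai/api/v1/metrics/top_sources?project_id=YOUR_PROJECT_ID&sort=avg_visibility&query=github&per_page=20" -H "Authorization: Bearer YOUR_API_KEY"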
Cache & Conditional GET
Metrics endpoints (timeseries, summary, sov, top_sources) support conditional GET. We compute an ETag from the project version and a hash of your query params, and use the project’s updated_at as Last-Modified.
Send If-None-Match or If-Modified-Since to receive 304 Not Modified when nothing changed.
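For example, replaying a request with the ETag from a previous 200 response; if nothing changed, the API answers 304 with an empty body:

curl -i "https://api.llmpulse.ai/api/v1/metrics/summary?project_id=YOUR_PROJECT_ID&metric=mentions&range=30" -H "Authorization: Bearer YOUR_API_KEY" -H 'If-None-Match: "ETAG_FROM_PREVIOUS_RESPONSE"'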
Errors
Errors are consistent and machine-parsable:
| Code | HTTP | Meaning |
|---|---|---|
| ERR_MISSING_AUTH | 401 | Missing or malformed Authorization header |
| ERR_INVALID_API_KEY | 401 | Token not recognized |
| ERR_REVOKED_API_KEY | 403 | Key revoked |
| ERR_PROJECT_NOT_FOUND | 404 | project_id not accessible by this user |
| ERR_INVALID_PARAM | 422 | Validation failed (see message) |
Error body shape:
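A minimal sketch of the error body, assuming a code/message envelope consistent with the codes above (the field names are an assumption, not the confirmed schema):

{
  "error": {
    "code": "ERR_INVALID_PARAM",
    "message": "invalid granularity"
  }
}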
Typical 422 cases:
- invalid metrics: only mentions, citations, visibility, avg_position, sentiment_very_positive, sentiment_positive, sentiment_neutral, sentiment_negative, sentiment_very_negative, net_sentiment are allowed.
- invalid granularity: must be day | week | month.
- from/to must be ISO 8601; from is required when to is sent.
- range is required when neither from nor to is sent.
- unknown competitor ids: ... (IDs not found under the project).
Client snippets
Request examples in multiple languages are available alongside each endpoint in the interactive API reference.
Postman — Download & Environment
Use this curated Postman collection to explore the API quickly and securely. It mirrors the examples and metric semantics described above (period totals with sticky carry, ratio-of-period-sums for visibility, and propagation for avg_position), so you can validate responses and prototype integrations in minutes.
Download the collection
Download LLMPulse API v1 Postman Collection
Import the file in Postman (File → Import), then select or create the environment shown below. The collection uses variables, so you won’t need to edit each request manually.
Environment (placeholders — keep your IDs private)
Create a new Environment in Postman with the following variables. Use placeholders instead of real IDs or keys in any public context (docs, screenshots, etc.):
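The collection references at least these variables (all values below are placeholders):

| Variable | Example value |
|---|---|
| base_url | https://api.llmpulse.ai/api/v1 |
| api_key | YOUR_API_KEY |
| project_id | YOUR_PROJECT_ID |
| competitors | 1,2,3 |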
How it’s used:
- The collection reads {{base_url}} and injects Authorization: Bearer {{api_key}} automatically.
- Set {{project_id}} to a project you own; {{competitors}} is CSV (e.g. 1,2,3).
- You can override per request (e.g. send from/to instead of range) and add filters like model, collection_id, country_code, language_code.
- Keep your API key in your private environment only; do not paste it into shared documents.
MCP (AI Integration)
MCP (Model Context Protocol) allows AI assistants like Claude and ChatGPT to access your LLM Pulse data directly. Ask questions in natural language and let the AI fetch and analyze your visibility data.
What is MCP?
MCP is an open protocol that lets AI assistants connect to external data sources. Instead of copying data or making API calls manually, you can simply ask your AI assistant questions like "What's my visibility trend this month?" and it will fetch the data automatically.
Endpoint
Configuration
Add this to your AI client's MCP settings (e.g., Claude Desktop, ChatGPT Developer Mode).
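Exact settings keys vary by client; a minimal sketch in Claude Desktop-style JSON, where the server name, the url/headers keys, and the endpoint placeholder are assumptions:

{
  "mcpServers": {
    "llmpulse": {
      "url": "YOUR_MCP_ENDPOINT",
      "headers": { "Authorization": "Bearer YOUR_API_KEY" }
    }
  }
}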
Available Tools
These tools are available to your AI assistant.
| Tool | Description |
|---|---|
| list_projects | List all your projects |
| list_competitors | List competitors for a project |
| list_collections | List prompt tags/collections |
| list_models | List AI models being tracked |
| list_locales | List available locales |
| list_prompts | List prompts with pagination |
| list_mentions | Get brand mentions in AI responses |
| list_citations | Get URL citations |
| list_sources | Get source URLs cited by AI |
| list_sentiments | Get sentiment distribution |
| get_timeseries | Get metrics over time |
| get_summary | Get aggregated metrics |
| get_sov | Get share of voice vs competitors |
| get_top_sources | Get top cited sources |
Protocol
MCP uses JSON-RPC 2.0. Here's an example of listing available tools:
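A standard JSON-RPC 2.0 request for the MCP tools/list method:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list"
}

The response enumerates the tools from the table above.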
For a simpler guide, see the MCP Setup page.
Changelog
- 2025-08: Initial public API docs (v1)
- 2025-09: Metrics aligned with Overview (period totals with sticky carry, ratio-of-period-sums for visibility, propagation in avg_position), SOV breakdown, query filter in Top Sources, caching notes, Postman download & environment section
- 2025-12: Sentiment metrics added to /metrics/timeseries (sentiment_very_positive/positive/neutral/negative/very_negative) and documented semantics