Use cases

PLAYBOOKS BY LIFECYCLE STAGE.

Each row matches the Use Cases database: Lifecycle Stage sets the section on this page, and Slug is the anchor. The long-form copy below follows the same structure as in Notion (problem, solution, steps, credits, tips).

Pre-production

Before you ship: comparables, validation, and evidence for stakeholders — grounded in signal, not slides alone.

Pre-production

Benchmark against comparable games

Use the game database to explore how similar titles perform across emotion, topic, polarity, playtime, and player demographics. Understand your competitive landscape and set realistic expectations for your own game.

The problem

Every studio has a mental model of their competitive landscape — "we're like Game X but with better combat" or "our audience overlaps with Game Y". But these assumptions are rarely tested against real data. You don't actually know how players feel about Game X's combat, what Game Y's audience cares about, or how your target market's expectations compare to what you're building.

Without comparative data, you're navigating by assumption.

How GameDataCore solves this

The Game Database lets you search, explore, and analyse any game on Steam — not just your own. Add comparable titles to your catalogue, import their reviews, and run the full analysis pipeline to understand how players in your genre think, feel, and behave.

Compare polarity (sentiment), emotion/topic patterns, player demographics, and emotional profiles across multiple games to benchmark your own title against the competition — not a single aggregate score per title.

Step by step
  1. Open the Game Database and search for comparable titles in your genre
  2. Add 2–5 competitor or reference games to your catalogue as "tracked" titles
  3. Import reviews for each game and run analysis
  4. Compare across games — look at polarity and emotion breakdowns, top topics, dominant named emotions, and player demographics
  5. Identify gaps and opportunities — where do competitors fall short? What do players wish for that nobody delivers?
What you'll see
  • Game-level polarity and emotion distributions across your catalogue
  • Topic comparisons — which themes dominate feedback for each game
  • Player demographic differences — how the audiences compare in playtime, engagement, and experience
  • Emotional profiles — how each game makes its players feel
Credit cost

Analysing competitor games costs the same as analysing your own: 1 credit per snippet during analysis. Use filters to focus on the most relevant reviews and keep costs manageable.

💡 Tip: You don't need to analyse all 50,000 reviews for a major title. Import the most recent 200–500, filter by your area of interest (e.g. negative polarity or frustration tagged to combat), and analyse that focused set. You'll get the patterns you need without the cost.
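GameDataCore handles the import inside the Inbox, but if you want to sanity-check what a focused slice looks like, Steam's public appreviews endpoint exposes the same raw review data. A minimal sketch (TypeScript, Node 18+) using Valve's documented parameters; the 300-review cap mirrors the tip above:

```ts
// Minimal sketch: pull a focused slice of recent negative reviews straight
// from Steam's public appreviews endpoint. The URL and query parameters are
// Valve's documented review API; the 300-review cap follows the tip above.
async function fetchRecentNegativeReviews(appId: number, max = 300) {
  const reviews: { review: string; playtimeAtReview: number; votedUp: boolean }[] = [];
  let cursor = "*";
  while (reviews.length < max) {
    const url =
      `https://store.steampowered.com/appreviews/${appId}?json=1` +
      `&filter=recent&review_type=negative&language=english` +
      `&num_per_page=100&cursor=${encodeURIComponent(cursor)}`;
    const page = await (await fetch(url)).json();
    if (!page.success || page.reviews.length === 0) break;
    for (const r of page.reviews) {
      reviews.push({
        review: r.review,
        playtimeAtReview: r.author.playtime_at_review, // minutes at review time
        votedUp: r.voted_up,
      });
    }
    cursor = page.cursor; // opaque pagination token returned by Steam
  }
  return reviews.slice(0, max);
}
```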

When to use this
  • You're in pre-production and want to understand audience expectations in your genre
  • You're positioning your game against specific competitors
  • You want to validate that your unique selling points actually resonate differently from the competition
Systems used

Game Database → Inbox → Snippets → Clusters (repeated per game)

Pre-production

Validate your concept before you build

Before committing your team, analyse feedback from comparable games to understand what players love, hate, and wish for in your genre. Use real evidence to de-risk your design decisions early.

The problem

Pre-production is where the most consequential decisions are made with the least data. You're choosing genre, setting, mechanics, tone, and scope based on creative vision and market intuition — but you have no structured evidence of what your target audience actually values, tolerates, or rejects.

By the time you have your own player feedback, you've already spent months (or years) building. The cost of a wrong assumption at this stage is enormous.

How GameDataCore solves this

You don't need your own game to be live to use GameDataCore. Analyse feedback from comparable games — titles in your genre, with your target audience, facing similar design challenges — to understand what players love, hate, and wish for before you commit your team.

Build clusters around the themes that matter to your design — combat feel, narrative depth, onboarding clarity, endgame loop — and use real player evidence to stress-test your assumptions.

Step by step
  1. Identify 3–5 reference games that share your genre, audience, or design pillars
  2. Add them to your catalogue and import reviews for each
  3. Run analysis — focus on the topics relevant to your design (e.g. filter by topic or keyword)
  4. Build clusters around your core design questions: "How do players feel about crafting systems in survival games?" or "What frustrates players about roguelike progression?"
  5. Generate insights to synthesise findings across games — look for patterns that hold across multiple titles, not just one
What you'll see
  • Cross-game patterns — themes that recur across multiple comparable titles
  • Audience expectations — what players in your genre take for granted vs. what delights them
  • Risk signals — features or design choices that consistently generate frustration
  • Opportunity gaps — things players wish for that no game in the space delivers well
Credit cost

Same as any analysis: 1 credit per snippet when you run analysis, 2 credits per accepted cluster, 5 credits per cluster when accepting an insight. Start small — 200 reviews per game is enough to surface the dominant themes.
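To make the budgeting concrete, here is a back-of-envelope estimator using only the prices named above; the 2–5 snippets-per-review yield is taken from the Launch guide further down, and the function name is our own:

```ts
// Back-of-envelope credit estimator from the stated pricing: 1 credit per
// snippet, 2 per accepted cluster, 5 per cluster attached to an accepted
// insight. Snippet yield per review (2-5, ~3 typical) is from the Launch guide.
function estimateCredits(opts: {
  reviews: number;
  snippetsPerReview?: number;   // typically 2-5
  acceptedClusters?: number;
  insightClusterLinks?: number; // clusters attached across accepted insights
}) {
  const { reviews, snippetsPerReview = 3, acceptedClusters = 0, insightClusterLinks = 0 } = opts;
  return reviews * snippetsPerReview + acceptedClusters * 2 + insightClusterLinks * 5;
}

// Five reference games at 200 reviews each (~3 snippets/review), 10 accepted
// clusters, and two insights covering 3 clusters each:
estimateCredits({ reviews: 1000, acceptedClusters: 10, insightClusterLinks: 6 }); // => 3050
```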

💡 Tip: Frame your clusters as design questions, not just topics. Instead of a cluster called "Combat", build one called "What frustrates players about melee combat in action RPGs?" — filter for negative polarity or frustration + combat topic + high playtime. The evidence you get back directly answers your design question.

When to use this
  • You're in concept or pre-production and want to de-risk design decisions
  • You're choosing between two creative directions and want evidence to inform the choice
  • You're building a pitch and want to show you understand the audience landscape
Systems used

Game Database → Inbox → Snippets → Clusters → Insights (across multiple reference games)

Pre-production

Build evidence for publisher and investor conversations

Use structured player data, emotion and polarity trends, and audience profiles as evidence in funding and publishing conversations. Show stakeholders exactly how players respond to your game — backed by data, not anecdotes.

The problem

When you're talking to a publisher or investor, "players love our game" isn't enough. They want specifics: how many players? What do they love? How does that compare to similar titles? What's the evidence?

Most studios go into these conversations with Steam review counts, a positive/negative percentage, and a handful of cherry-picked quotes. That's not evidence — it's anecdote dressed up as data.

How GameDataCore solves this

GameDataCore gives you structured, quantified player evidence you can present with confidence: polarity breakdowns with real numbers, topic clusters showing what players care about most, rich emotional taxonomies so you can name how players feel (not just thumbs up/down), player demographic profiles, and trend data showing momentum.

You're not saying "players like the combat" — you're saying "412 snippets across 6 clusters indicate strong positive sentiment toward melee combat, primarily from players with 50+ hours, with satisfaction trending upward 12% over the last 60 days." And you can go further: emotion tags and VAD-style signals show whether that positivity is admiration, relief, or excitement — not a flat sentiment score.

Step by step
  1. Analyse your own game's reviews through the full pipeline (Inbox → Snippets → Clusters → Insights)
  2. Analyse 2–3 comparable titles to show how your game compares
  3. Build clusters and insights around the themes that matter to your pitch — audience engagement, emotional resonance, competitive differentiation
  4. Pull key data points: snippet counts, sentiment percentages, topic and emotion/taxonomy breakdowns, player demographics, trend direction
  5. Use evidence in your materials — pitch decks, one-pagers, and live conversations backed by structured data
What you'll see
  • Quantified sentiment — not just "positive", but "72% positive, with 'world-building' and 'atmosphere' as the top drivers". Rich emotional taxonomies go beyond that baseline: named emotions (e.g. frustration vs. boredom vs. disappointment) rather than a single polarity bucket, so stakeholders see what kind of player experience you're delivering
  • Competitive positioning — how your game's feedback profile compares to reference titles
  • Audience depth — player demographics showing who your audience is (experience, engagement, playtime commitment)
  • Momentum indicators — polarity and emotion trends, review velocity, and emerging themes over time
  • Evidence chains — every claim traces back through insights → clusters → snippets → original reviews
Credit cost

Standard pipeline costs apply. For a pitch-ready analysis of your own game plus 2–3 comparables, budget approximately 2,000–5,000 credits total (depending on review volume and analysis depth).

💡 Tip: Frame your evidence around the questions publishers actually ask: "Who is your audience?" "How does player experience (emotion, topic, polarity) compare to competitors?" "What's your retention story?" "What are the biggest risks?" Build a cluster and insight for each question, and you'll walk into the meeting with answers, not hopes.

When to use this
  • You're preparing a pitch deck for a publisher or investor meeting
  • You need to justify continued investment in a live title
  • You want to prove product-market fit with structured data instead of anecdotes
Systems used

Full pipeline: Game Database → Inbox → Snippets → Clusters → Insights → CoreDecisions + CoreProfile for audience depth

Demo / Early Access

Limited players, high intent — separate material signal from noise before you scale marketing or lock roadmap.

Demo / Early Access

Prepare for Early Access with real data

Analyse player expectations from comparable titles before your Early Access launch. Understand what your target audience cares about most, so you can shape your messaging, feature set, and roadmap around real player evidence.

The problem

Early Access is a high-stakes moment. You're exposing your game to paying players for the first time, and their feedback will shape your reputation, your roadmap, and your Steam algorithm performance. But you're launching into uncertainty — you don't know what your audience expects, what they'll tolerate at this stage, or what will trigger a wave of negative reviews.

Most studios enter Early Access with assumptions about their audience that haven't been tested.

How GameDataCore solves this

Analyse feedback from comparable Early Access titles to understand what players in your genre expect from an EA launch. What do they forgive? What do they punish? What features do they consider essential on day one vs. acceptable as "coming soon"?

Use this evidence to shape your EA feature set, messaging, store page copy, and early roadmap — before a single player touches your build.

Step by step
  1. Find comparable EA titles in the Game Database — games in your genre that launched into Early Access
  2. Import their reviews and filter for the EA period (use date filters to isolate early feedback)
  3. Run analysis and focus on negative signal — polarity, named emotions, and topics — what did players complain about most in the first weeks?
  4. Build clusters around EA-specific themes: missing features, performance expectations, content depth, communication cadence
  5. Compare across titles to identify patterns — what do EA players in your genre consistently expect?
What you'll see
  • EA expectation patterns — the features and polish levels players in your genre consider table stakes
  • Forgiveness thresholds — what players tolerate in EA vs. what triggers refunds and negative reviews
  • Communication themes — how dev responsiveness and roadmap transparency affect player emotion and narrative (not just average polarity)
  • Early churn signals — what drives players to leave negative reviews in the first 2 hours
Credit cost

Same as any analysis pipeline. Focus your budget: import 200–300 reviews per reference game, filter for the first 30 days of EA, and analyse that focused set.

💡 Tip: Pay special attention to reviews from players with under 2 hours of playtime and strong negative emotion or polarity. These are your early churn signals — the things that make players bounce before giving your game a real chance. If you can address these before your EA launch, you'll have a much stronger first impression.
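As a concrete sketch of that slice, the filter below selects negative reviews from the first 30 days of EA by players with under 2 hours. Field names match Steam's appreviews JSON (unix-second timestamps, playtime in minutes); the 30-day window and 2-hour cutoff come from this guide:

```ts
// Hypothetical sketch of the "early churn" slice from the tip above.
// timestamp_created is unix seconds; playtime_at_review is minutes.
interface SteamReview {
  review: string;
  voted_up: boolean;
  timestamp_created: number;
  author: { playtime_at_review: number };
}

function earlyChurnSignals(reviews: SteamReview[], eaLaunch: Date): SteamReview[] {
  const start = eaLaunch.getTime() / 1000;
  const end = start + 30 * 24 * 60 * 60; // first 30 days of EA
  return reviews.filter(
    (r) =>
      !r.voted_up &&
      r.timestamp_created >= start &&
      r.timestamp_created < end &&
      r.author.playtime_at_review < 120 // bounced before the 2-hour mark
  );
}
```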

When to use this
  • You're 1–3 months from an Early Access launch
  • You're deciding which features to include in your EA build vs. your post-EA roadmap
  • You want to shape your store page messaging around what EA players actually care about
Systems used

Game Database → Inbox → Snippets → Clusters → Insights (across EA reference titles)

Launch

Ship week and the window after: fast, structured reads on what players are saying, feeling, and doing.

Launch

Understand what players are really saying

Import your Steam reviews and let the system break them into snippets — individual points of praise, complaint, or suggestion — so you can see exactly what players care about, not just whether they're positive or negative.

The problem

Steam reviews are a goldmine of player feedback — but reading hundreds or thousands of them manually is impossible. Even when you do, you're left with impressions, not structure. You know players are unhappy, but you can't say precisely what they're unhappy about, how many feel the same way, or who these players are.

Most teams skim the first page of reviews, react to the loudest voices, and hope they're making the right call.

How GameDataCore solves this

GameDataCore breaks every review into snippets — individual points of praise, complaint, suggestion, or observation. Each snippet is classified with topic, emotion (rich taxonomies, not just "happy/sad"), sentiment (polarity), and player context, turning unstructured text into structured, filterable data.

Instead of reading 500 reviews, you explore 1,500 snippets — each one a precise signal you can sort, filter, group, and act on.
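The exact snippet schema isn't published; the hypothetical TypeScript shape below is only meant to make the filtering model concrete, and every field name is illustrative:

```ts
// Hypothetical shape of a snippet record, to make the filtering model
// concrete. The real schema isn't published; field names are illustrative.
interface Snippet {
  quote: string;                       // the exact player phrase
  topic: string;                       // e.g. "Combat", "Performance"
  emotion: string;                     // named emotion, e.g. "Frustration"
  polarity: "positive" | "negative" | "mixed";
  player: {
    playtimeHours: number;
    experienceBand: string;            // e.g. "Veteran", "Newcomer"
    sampleWeight: number;              // reliability weight from CoreProfile
  };
  reviewDate: Date;
  language: string;
}
```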

Step by step
  1. Add a game to your catalogue from the Game Database
  2. Open the Inbox and import your first batch of Steam reviews (up to 1,000 on first load)
  3. Filter reviews by polarity (sentiment), emotion, playtime, language, or date to focus on the feedback that matters most
  4. Run analysis — the system breaks your selected reviews into snippets, each tagged with topic, emotion/taxonomy, and polarity
  5. Explore snippets — filter by topic (e.g. "Combat", "Performance"), named emotion (e.g. "Frustration"), and polarity to isolate specific themes
What you'll see
  • Snippet cards showing the exact player quote, tagged with topic, emotion, and polarity (sentiment)
  • Player context for each snippet — who said this, how long they played, how reliable their data is
  • Filter controls to narrow by any dimension: topic, emotion, polarity, language, playtime, date range
Credit cost

Analysis costs 1 credit per snippet classified. A typical review produces 2–5 snippets, so analysing 100 reviews might cost 200–500 credits.

💡 Tip: Start with 50–200 focused reviews (e.g. negative polarity or strong negative emotion, high playtime) rather than analysing everything at once. You'll get meaningful results faster and spend fewer credits.

When to use this
  • You've just launched and want to understand first impressions
  • You're preparing a patch and need to know what players care about most
  • You want to move beyond "positive/negative" into the specific topics and emotions driving player experience
Systems used

Inbox → Snippets → optionally Clusters and Insights for deeper analysis

Launch

Spot recurring themes and patterns

Group related snippets into clusters to surface the topics that keep coming up — whether it's a beloved mechanic, a persistent bug, or a feature players are asking for. Filter by emotion, topic, polarity, and player segment to focus on what matters.

The problem

Individual reviews are anecdotes. One player complains about matchmaking; another loves the art style; a third wants more endgame content. Without structure, you can't tell whether matchmaking is a widespread crisis or one person's bad day.

Teams that rely on reading reviews individually miss the patterns hidden across hundreds of data points. They react to the loudest voice instead of the biggest signal.

How GameDataCore solves this

Clusters group related snippets together to reveal the themes that keep recurring across your feedback. Instead of "37 reviews mention bugs", you get a cluster called "Persistent crash on level 3 transition" with 37 snippets, emotion and polarity breakdowns, temporal trends, and player profile data — so you can see not just what players are saying, but how many, how strongly, how they feel, and who.

The system suggests clusters automatically using multi-dimensional analysis across topic, emotion/taxonomy, polarity (sentiment), playtime, experience band, and more. You can also build your own clusters manually by selecting and grouping snippets around a hypothesis.

Step by step
  1. Run analysis on your reviews in the Inbox (if you haven't already)
  2. Open Clusters to see system-generated suggestions based on statistical patterns in your snippets
  3. Review suggestions — each one shows the theme, snippet count, emotion and polarity distribution, and a confidence indicator
  4. Accept or customise — accept a suggestion as-is, refine it by adding or removing snippets, or build your own from scratch
  5. Explore the cluster detail view — volume, velocity, temporal charts, taxonomy breakdowns, emotional VAD shift, and polarity bars
What you'll see
  • Suggested clusters with theme labels, snippet counts, and emotion/polarity distributions
  • Cluster detail view with temporal charts (when did this feedback appear?), topic breakdowns, emotion analysis, and player signal aggregates
  • Sub-clusters for drilling into a pattern without losing the parent theme's context
Credit cost

Accepting a cluster costs 2 credits. Exploring suggestions and previewing clusters is free — you only pay when you confirm.

💡 Tip: Start by reviewing the system's suggested clusters before building your own. They're generated from statistical patterns you might not spot manually — high-lift combinations of topic, emotion, polarity, and player segment that stand out from the baseline.
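For intuition, "lift" here means how much more often a combination co-occurs than independence would predict. A sketch over the hypothetical Snippet shape from the previous guide; the real suggestion engine is more involved:

```ts
// Sketch of the lift idea behind suggested clusters: how over-represented a
// (topic, emotion) pair is relative to independence. lift > 1 means the pair
// stands out from the baseline. Assumes the hypothetical Snippet shape above.
function lift(snippets: Snippet[], topic: string, emotion: string): number {
  const n = snippets.length;
  const pTopic = snippets.filter((s) => s.topic === topic).length / n;
  const pEmotion = snippets.filter((s) => s.emotion === emotion).length / n;
  if (pTopic === 0 || pEmotion === 0) return 0;
  const pBoth =
    snippets.filter((s) => s.topic === topic && s.emotion === emotion).length / n;
  return pBoth / (pTopic * pEmotion); // e.g. 3.2 => pair is 3.2x over-represented
}
```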

When to use this
  • You have 200+ snippets and want to see what themes dominate
  • You suspect a recurring issue but want to quantify it before acting
  • You want to compare how different player segments experience the same theme
Systems used

Snippets → Clusters → optionally Insights for synthesis and CoreDecisions for action

Launch

Understand who your players are

Every review is connected to a player profile with experience bands, engagement levels, playtime commitment, and data confidence scores. Use Audience Intelligence to see aggregate cohort breakdowns — filter by segment and instantly understand the composition of who is speaking, not just what they're saying.

The problem

A review that says "the controls are terrible" means something very different coming from a player with 500 hours than from someone who played for 20 minutes. But on Steam, every review looks the same — a block of text with a thumbs up or down.

Without player context, you can't tell whether negative feedback comes from your core audience or from players who were never your target. You can't segment, you can't weight, and you can't understand who is actually speaking.

How GameDataCore solves this

CoreProfile builds a behavioural profile for every reviewer. Each profile includes experience band, engagement level, playtime commitment, data quality score, and confidence metrics across games, reviews, playtime, badges, and achievements.

This context flows through the entire pipeline — snippets inherit player data, clusters show aggregate player demographics, and insights can be weighted by reviewer reliability.

Step by step
  1. Open Player Profiles for a game in your catalogue
  2. Browse profiles to see experience bands, engagement levels, playtime commitment, and quality scores
  3. Click a profile to see the full detail: summary, library, and signal tabs
  4. Filter by segment — compare feedback from veterans vs newcomers, high-playtime vs casual, by region, platform, or commitment level
  5. Cross-reference with snippets — see how the same theme plays differently across player segments
What you'll see
  • Experience band — Steam account maturity based on library size, level, and playtime history
  • Engagement band — how actively this player participates in the Steam community
  • Playtime commitment — dedicated, invested, sampled, tried, or minimal
  • Data quality — how publicly visible the profile is, affecting score reliability
  • Confidence scores — per-dimension confidence (games, reviews, playtime, badges, achievements)
  • Sample weight — a combined score used to weight this player's snippets in aggregate analysis
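As a sketch of how sample weight might feed aggregate analysis (the actual weighting formula isn't published), here is a weighted negative share over the hypothetical Snippet shape from the previous guide:

```ts
// Sketch of weighting aggregate polarity by CoreProfile sample weight, so
// well-attested profiles count for more than thin ones. Illustrative only.
function weightedNegativeShare(snippets: Snippet[]): number {
  let weighted = 0;
  let total = 0;
  for (const s of snippets) {
    total += s.player.sampleWeight;
    if (s.polarity === "negative") weighted += s.player.sampleWeight;
  }
  return total === 0 ? 0 : weighted / total; // 0..1 share of weighted negativity
}
```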
Credit cost

Exploring player profiles is free. Profiles are computed automatically when reviews are imported — no additional credit cost.

💡 Tip: Filter by "high playtime + negative polarity or strong negative emotion" to surface your most critical power-user feedback. These are the players who loved your game enough to invest serious time — and something drove them to leave a negative review. That signal is gold.

When to use this
  • You want to understand whether negative feedback comes from your target audience or drive-by reviewers
  • You're comparing how different player segments experience the same feature
  • You need to weight evidence in a cluster or insight by player credibility
Going deeper with Audience Intelligence

Audience Intelligence takes the player-level data in CoreProfile and gives you a cohort view of the entire player population for a game. Rather than inspecting one profile at a time, you can filter the full reviewer population by experience band, playtime commitment, polarity, emotion, language, or any combination of signals and see aggregate breakdowns instantly.

This is especially useful when you want to know: "Of all the players who left negative reviews, what proportion are veteran players vs casual newcomers?" or "Do high-playtime players feel differently (named emotions, not just thumbs down) about this issue than samplers do?"

Open Audience Intelligence from the sidebar under CoreProfile → Audience Intelligence.

Systems used

CoreProfile (cross-cutting) — feeds into Snippets, Clusters, and Insights

Audience Intelligence — cohort-level view of the full reviewer population

Launch

Understand the emotional landscape of your game

Go beyond positive and negative. See exactly which emotions players experience — joy, frustration, surprise, boredom — and how intensely they feel them. Understand not just what players think, but how your game makes them feel.

The problem

Polarity (positive/negative) tells you which side of the line a review sits on — but games are emotional experiences, and that alone doesn't capture the difference between frustration, boredom, disappointment, and anger. A player frustrated by a boss fight and a player bored by empty exploration can both look "negative" — but they need completely different responses.

Traditional review analysis flattens the emotional richness of player feedback into a binary sentiment signal. GameDataCore is built around rich emotional taxonomies layered on top of polarity.

How GameDataCore solves this

Every snippet is classified not just by sentiment, but by emotion — joy, frustration, surprise, boredom, anger, admiration, and more. The system also tracks emotional VAD (Valence, Arousal, Dominance) at the cluster level, showing how player emotion shifts over time.

This lets you understand not just what players think, but how your game makes them feel — and whether that's changing.
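As a rough illustration of a VAD time series: per-snippet VAD values and the monthly bucketing below are assumptions, since the product computes VAD at the cluster level.

```ts
// Illustrative VAD time series: bucket points by month and average
// valence/arousal/dominance per bucket. Values assumed to be in [-1, 1].
interface VadPoint { date: Date; valence: number; arousal: number; dominance: number }

function vadByMonth(points: VadPoint[]) {
  const buckets = new Map<string, { v: number; a: number; d: number; n: number }>();
  for (const p of points) {
    const month = p.date.toISOString().slice(0, 7); // e.g. "2026-01"
    const b = buckets.get(month) ?? { v: 0, a: 0, d: 0, n: 0 };
    b.v += p.valence; b.a += p.arousal; b.d += p.dominance; b.n += 1;
    buckets.set(month, b);
  }
  return [...buckets.entries()].map(([month, b]) => ({
    month, valence: b.v / b.n, arousal: b.a / b.n, dominance: b.d / b.n,
  }));
}
```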

Step by step
  1. Run analysis on your reviews (or use existing snippets)
  2. Filter snippets by emotion — isolate frustration, boredom, joy, or any other emotional signal
  3. Build clusters around emotional themes — e.g. "Frustration with combat difficulty" vs. "Frustration with UI responsiveness"
  4. Explore the cluster detail view — check the emotional VAD shift chart to see how emotion trends over time
  5. Compare emotions across player segments — do veterans feel differently from newcomers about the same feature?
What you'll see
  • Emotion tags on every snippet — precise emotional classification beyond positive/negative
  • Emotion distribution in clusters — what's the dominant emotional response to a theme?
  • VAD temporal charts — how valence (positive/negative), arousal (intensity), and dominance (feeling in control) shift over time
  • Segment-level emotion differences — how different player types experience the same content emotionally
Credit cost

Emotion classification is included in the standard analysis cost: 1 credit per snippet. No additional charge for emotional signals.

💡 Tip: Combine emotion filters with playtime filters for the most actionable signals. "Frustration + 20 hours played" tells you your most dedicated players are hitting a wall — that's a retention risk. "Boredom + under 2 hours" tells you your opening isn't hooking players — that's an acquisition risk. Same emotion, very different problems.
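That tip translates directly into two predicates over the hypothetical Snippet shape from earlier; the emotion names and thresholds are the ones used in the tip:

```ts
// Same emotion, two different risks, depending on playtime (per the tip above).
const retentionRisk = (s: Snippet) =>
  s.emotion === "Frustration" && s.player.playtimeHours >= 20; // dedicated players hitting a wall

const acquisitionRisk = (s: Snippet) =>
  s.emotion === "Boredom" && s.player.playtimeHours < 2; // opening isn't hooking players
```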

When to use this
  • You want to understand the emotional experience of playing your game, not just whether players "like" it
  • You're designing around feel — pacing, difficulty curves, narrative beats — and need evidence
  • You want to detect emotional shifts after a patch or content update
Systems used

Snippets (emotion filters) → Clusters (VAD analysis) → optionally Insights for synthesis

Live Service

Rhythm: each season or patch gets a before/after read — recurring themes, cohort context, and impact you can track.

Live Service

Prioritise what to fix or build next

Turn clusters into insights and evidence-backed decisions. See how many players are affected, how strongly they feel, and who they are — so you can prioritise with confidence instead of gut instinct.

The problem

Every studio has more things to fix and build than they have time for. The question is never "what should we work on?" — it's "what should we work on first?" Without evidence, prioritisation defaults to whoever speaks loudest in the meeting, the last bug report that landed in Discord, or gut instinct.

The result: teams ship patches that don't move the needle, ignore issues that are quietly driving churn, and can't defend their decisions to stakeholders.

How GameDataCore solves this

Insights synthesise your clusters into actionable findings with severity ratings, evidence trails, and player impact data. Instead of "players don't like the combat", you get: "142 snippets across 3 clusters indicate frustration (named emotion) with hitbox detection, primarily from players with 20+ hours. Severity: High. 68% negative polarity, frustration trending upward over the last 30 days."

You can then turn an insight into a Decision — a tracked commitment with linked evidence, success metrics, and impact monitoring.

Step by step
  1. Review your clusters — identify the ones with the highest volume, strongest emotional signal, or fastest growth
  2. Generate insights by selecting related clusters — the system synthesises them into a structured finding
  3. Review the insight — check severity, linked evidence, emotion and polarity distribution, and player demographics
  4. Create a decision from the insight — record what you're going to do about it and why
  5. Set success metrics so you can measure whether the change worked
What you'll see
  • Insight cards with severity (Critical/High/Medium/Low), type, and status
  • Evidence trails linking back through clusters → snippets → original reviews
  • Player demographics showing who is affected — veterans vs newcomers, high-playtime vs casual
  • Decision records with linked evidence, status tracking, and revision alerts
Credit cost
  • Generating candidate insights is free — you can preview before committing
  • Accepting an insight costs 5 credits per attached cluster
  • Monitoring a decision costs 1 credit per tracked metric per day

💡 Tip: Generate insights from multiple related clusters rather than one at a time. An insight built from 3 clusters that tell the same story is far more compelling than 3 separate single-cluster insights.

When to use this
  • You're planning a sprint and need to decide what to prioritise
  • You need to justify a technical investment to your lead, publisher, or board
  • You want to compare the severity of competing issues before committing resources
Systems used

Clusters → Insights → CoreDecisions

Live Service

Track the impact of updates and patches

Monitor how player emotion, topic drivers, and polarity shift after a release or patch. See whether your changes landed the way you intended, and catch emerging issues before they snowball into negative review trends.

The problem

You ship a patch. You fix the thing players were complaining about. But did it actually work? Did polarity improve, and did the named emotions you cared about (e.g. frustration) move? Did the complaints stop? Did new issues emerge that you didn't anticipate?

Most studios have no structured way to measure the impact of their own decisions. They ship, move on, and hope for the best. If a patch fails to land, they don't find out until the next wave of negative reviews — by which point the damage is done.

How GameDataCore solves this

CoreDecisions lets you record a decision, link it to the evidence that prompted it, set success and revision metrics, and then monitor how player signal changes after implementation — polarity, emotions, and topics together, not a single sentiment dial.

When you mark a decision as "Implemented", the system freezes a baseline snapshot of your current polarity, topic, and emotion data. From that point forward, it tracks daily changes so you can compare before and after — and catch problems early if reality diverges from expectations.
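A minimal sketch of that freeze-and-compare idea; the metric name and the 10-point alert threshold are illustrative, not GameDataCore's actual internals:

```ts
// Freeze a metric snapshot at implementation, diff current values against it,
// and flag divergence. Metric names and threshold are illustrative.
interface MetricSnapshot { [metric: string]: number } // e.g. { "matchmaking.negativeShare": 0.52 }

function revisionAlerts(
  baseline: MetricSnapshot,
  current: MetricSnapshot,
  threshold = 0.10 // alert if a metric worsens by more than 10 points
): string[] {
  return Object.keys(baseline).filter(
    (m) => (current[m] ?? baseline[m]) - baseline[m] > threshold
  );
}

// Negative share rose from 52% to 65% post-patch => ["matchmaking.negativeShare"]
revisionAlerts(
  { "matchmaking.negativeShare": 0.52 },
  { "matchmaking.negativeShare": 0.65 }
);
```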

Step by step
  1. Create a decision from an insight (or directly from Clusters)
  2. Fill in the decision record — what you're doing, why, and what evidence supports it
  3. Set success metrics (e.g. "negative polarity about matchmaking drops below 40%" or "frustration-tagged snippets in matchmaking fall by X%") and revision metrics (e.g. "if frustration increases post-patch, revisit")
  4. Mark as Implemented when the change ships — this freezes the baseline snapshot
  5. Monitor the dashboard — compare before/after polarity and emotion drivers, track emerging themes, and watch for revision alerts
What you'll see
  • Baseline vs current comparison — polarity distribution, topic frequency, and named emotion intensity before and after your change
  • Trend charts — daily tracking of key metrics over time
  • Revision alerts — automatic notifications when reality diverges from expectations (e.g. polarity worsens or frustration spikes instead of improving)
  • Evidence graph — trace the full chain from decision → insight → clusters → snippets → original reviews
Credit cost

Monitoring costs 1 credit per tracked metric per day. A typical decision tracks 2–3 metrics, costing 2–3 credits/day. Remove metrics you're no longer watching to save credits.

💡 Tip: Don't monitor everything. Pick 2–3 metrics that directly relate to the problem you were solving. If your decision was about matchmaking, track matchmaking-related emotion tags (e.g. frustration) and polarity — not overall game sentiment as one number. Focused monitoring gives clearer signals and costs less.

When to use this
  • You've shipped a patch or update and want to know if it worked
  • You're running a live service and need to track emotion and polarity trends over time
  • You want to build an evidence base for what works (and what doesn't) to inform future decisions
Using Decision Surface alongside impact tracking

The Decision Surface gives you a real-time evidence pack for any game — showing the top drivers of player experience (topics, emotions, polarity), trend comparisons across time windows, and a narrative summary of decision pressure. Use it before creating a decision to understand the current state of play, and return to it after shipping to see how the driver rankings and emotional landscape have shifted.

For example: if "Matchmaking" was the top driver before your patch, check the Decision Surface after shipping to see whether it has dropped in the rankings and whether negative polarity and frustration in that area have decreased.

Systems used

CoreDecisions (with evidence from Insights → Clusters → Snippets)

Decision Surface — for real-time driver analysis before and after implementation

Live Service

Monitor player cohorts with Audience Intelligence

Filter your full reviewer population by experience band, playtime, polarity, emotion, language, or any combination of signals to see aggregate cohort breakdowns. Understand not just what players are saying, but which kinds of players are saying it — and how that balance shifts over time.

The problem

Player feedback is not uniform. A game with 10,000 reviews might have veteran players praising the depth while casual newcomers bounce off the difficulty. Both groups are writing reviews, but without a way to see them separately, their voices blend into a single aggregate polarity score that hides who feels what — a number that is almost useless for decisions.

You need to know which players are speaking — and whether the segment you care most about (your core audience, your churned players, your long-term fans) feels differently from the crowd.

How GameDataCore solves this

Audience Intelligence gives you a cohort-level view of the full reviewer population for any game in your catalogue. Apply filters — experience band, playtime commitment, polarity, emotion, language, region, or platform — and see aggregate breakdowns for the resulting segment: how many players, polarity and emotion distribution, top topics, engagement levels, and more.

This sits alongside the individual player profiles in CoreProfile and is designed for population-level questions rather than one-at-a-time inspection.
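As a sketch of the population-level query, again over the hypothetical Snippet shape from the Launch guide, since the real API isn't published:

```ts
// Sketch of a cohort query in the spirit of Audience Intelligence: filter by
// a predicate, then summarise the matching segment. Illustrative shapes only.
function cohortBreakdown(snippets: Snippet[], match: (s: Snippet) => boolean) {
  const cohort = snippets.filter(match);
  const count = (key: (s: Snippet) => string) =>
    cohort.reduce(
      (m, s) => m.set(key(s), (m.get(key(s)) ?? 0) + 1),
      new Map<string, number>()
    );
  return {
    size: cohort.length, // matching snippets (a proxy for reviewer count)
    polarity: count((s) => s.polarity),
    topTopics: [...count((s) => s.topic).entries()].sort((a, b) => b[1] - a[1]).slice(0, 5),
  };
}

// Compare "Veteran + negative" against "Newcomer + negative" for the same game:
// cohortBreakdown(snippets, s => s.player.experienceBand === "Veteran" && s.polarity === "negative");
// cohortBreakdown(snippets, s => s.player.experienceBand === "Newcomer" && s.polarity === "negative");
```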

Step by step
  1. Open Audience Intelligence from the sidebar under CoreProfile → Audience Intelligence, or from the game navigation bar
  2. Select a game from your catalogue
  3. Apply filters to define the cohort you want to examine — for example: Experience Band = Veteran, Polarity = Negative
  4. Review the aggregate breakdown — see how many players match, their polarity and emotion distribution, top topics, and engagement profile
  5. Compare cohorts by adjusting the filters — swap Veteran for Newcomer and see how the same game lands differently across segments
  6. Cross-reference with clusters and snippets — if a cohort shows unexpectedly high negativity on a specific topic, open Snippets filtered to that segment to read the actual feedback
What you'll see
  • Cohort size — how many reviewers match your current filter combination
  • Polarity breakdown — positive / negative / mixed distribution for the cohort (plus emotion tags where available)
  • Top topics — the subjects this cohort mentions most
  • Experience and engagement profile — the composition of the cohort by band and level
  • Playtime distribution — how much time this cohort had invested before reviewing
Credit cost

Audience Intelligence is free to browse. Player profiles are computed automatically when reviews are imported — no additional credits are required to explore cohort views.

💡 Tip: Compare "High playtime + Negative" vs "Low playtime + Negative" for the same game. If veteran players are unhappy about something different from casual players, those are two separate problems that likely need two different solutions. Treating them as one issue is a common and expensive mistake.

When to use this
  • You want to know whether core players and casual players feel differently about a recent change
  • You're preparing a decision and need to understand the demographic breakdown of the evidence
  • You're monitoring a live service title and want to watch how cohort emotion and polarity shift over time
  • You need to build a player-segmented argument for a stakeholder or publisher conversation
Systems used

Audience Intelligence (CoreProfile) — cross-references with Snippets, Clusters, and CoreDecisions

Long-form guides above match the Notion Use Cases database (Published). Edit the markdown files under src/content/useCases/bodies/ when Notion changes, or rely on the in-app Knowledge Base for the live source.
