GameDataCore
Now Open: Q1 2026 Pilot Program

Turn player feedback into decision-grade signals.

CoreFeedback ingests Steam reviews and extracts recurring experience patterns, friction clusters, player archetype splits, and pre/post change signal shifts.

So you can make roadmap decisions with measurable evidence — not instinct.

Apply for Pilot

Free. Limited credits. Cohort access only.

A 30-Day Pilot Built Around One Core Question

Did our last decision improve player experience — and for whom?

Most teams already have feedback. What they lack is:

  • Pattern detection across thousands of reviews
  • Consistent thematic grouping
  • Segment-aware analysis
  • Time-window comparison around patches
  • A way to verify impact

The pilot runs you through that full loop.

Example Signal

Combat pacing

Recurring friction around reload speed and combat pacing, concentrated in competitive play.

51 reports · 1.2k volume · +42% impact · High confidence
Critical severity · Gameplay

Event

Patch 1.04 — Reload speed buff

Before patch

  • High recurrence of "slow reload" friction
  • Concentrated within competitive players

Top keywords

reload · pacing · competitive · combat

After patch

  • Friction cluster drops 38%
  • Positive pacing mentions increase
  • Casual segment unaffected

Confidence: High (recurrence threshold met)

This isn't sentiment. It's pattern movement.

What Actually Happens Under the Hood

No black-box summarisation.

  • Ingests full review datasets
  • Breaks reviews into atomic experience snippets
  • Classifies snippets against a fixed taxonomy
  • Measures recurrence and intensity
  • Links themes to player profile attributes
  • Compares signals across defined time windows

The output is structured, reproducible, and auditable.
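
As a rough illustration only (the function names and keyword taxonomy below are invented for this sketch, not CoreFeedback's actual API), the snippetise → classify → measure loop might look like:

```python
# Hypothetical sketch of the stages above: snippetise -> classify -> measure.
# TAXONOMY and all names here are illustrative assumptions.
from collections import Counter

TAXONOMY = {"reload": "combat", "inventory": "qol", "stamina": "gameplay"}

def snippetise(review: str) -> list[str]:
    """Break a review into atomic experience snippets (here: sentences)."""
    return [s.strip() for s in review.split(".") if s.strip()]

def classify(snippet: str) -> str:
    """Map a snippet onto a fixed taxonomy via keyword match (simplified)."""
    for keyword, theme in TAXONOMY.items():
        if keyword in snippet.lower():
            return theme
    return "other"

def measure(reviews: list[str]) -> Counter:
    """Count theme recurrence across all snippets in the dataset."""
    return Counter(classify(s) for r in reviews for s in snippetise(r))

counts = measure([
    "Reload feels slow. Inventory is clunky",
    "Slow reload again. Great story though",
])
```

Because every stage is a plain deterministic function, the same dataset always produces the same counts, which is what makes the output auditable.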

Reviews move through these stages before they become clusters and signals.

Feedback processing

  • Raw: 40 (awaiting)
  • Queued: 12 (in pipeline)
  • Processing: 13 (analysing)
  • Completed: 15 (snippets)

What CoreFeedback Produces

Clusters, archetype splits, time tracking, review-level signals, evidence quality, and emotional journey — so you see what’s recurring, who it affects, and how it moves.

1. Recurring Experience Clusters

What appears repeatedly across the dataset — not just what's loud.

Inventory UX friction

Volume 421 · Intensity 0.72 · Confidence High

Who's affected

45% veteran · 30% mid · 25% new

So you see what's really repeated, who it affects, and how confident you can be.
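
A minimal sketch of how a recurrence threshold could gate cluster confidence; the threshold, fields, and dataset size are assumptions for illustration, not CoreFeedback's actual model:

```python
# Illustrative sketch: a cluster earns "High" confidence only when its
# recurrence clears a threshold. All numbers here are assumed, not real.
from dataclasses import dataclass

@dataclass
class Cluster:
    theme: str
    volume: int        # snippets mentioning the theme
    intensity: float   # 0..1 average friction intensity

    def confidence(self, dataset_size: int, threshold: float = 0.05) -> str:
        recurrence = self.volume / dataset_size
        return "High" if recurrence >= threshold else "Low"

c = Cluster("Inventory UX friction", volume=421, intensity=0.72)
level = c.confidence(dataset_size=5000)  # 421/5000 = 8.4%, above threshold
```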

2. Player Archetype Splits

The same complaint can come from different player types. CoreFeedback splits signals by who said it — story-focused vs min-maxers — so you know who you're fixing for.

Same theme, split by who said it

  • Immersion-focused: e.g. lore inconsistencies
  • Optimisation-focused: e.g. drop-rate imbalance

Targeted decisions instead of one-size-fits-all fixes.
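
A toy sketch of the split itself, assuming hypothetical archetype labels and data:

```python
# Sketch: split one theme's mentions by player archetype.
# The archetype labels and records below are invented examples.
from collections import Counter

mentions = [
    {"theme": "lore", "archetype": "immersion"},
    {"theme": "lore", "archetype": "immersion"},
    {"theme": "lore", "archetype": "optimisation"},
]

def split_by_archetype(mentions: list[dict], theme: str) -> Counter:
    """Count who raised a given theme, per archetype."""
    return Counter(m["archetype"] for m in mentions if m["theme"] == theme)

split = split_by_archetype(mentions, "lore")
```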

3. Time-Based Signal Tracking

See how signals change before vs after a patch, campaign, or season — not just a single snapshot.

Pre-patch: baseline → Post-patch: measured

Vol +12% · Intensity ↑ · Segment divergence

So you can validate whether a change or campaign actually moved the needle.
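
A minimal sketch of a before/after window comparison around a patch date; the dates, field names, and snippet records are invented for illustration:

```python
# Sketch: count a theme's snippet volume in the windows before and after a
# patch date, then compute the relative delta. All data here is made up.
from datetime import date

snippets = [
    {"day": date(2026, 1, 10), "theme": "reload"},
    {"day": date(2026, 1, 12), "theme": "reload"},
    {"day": date(2026, 1, 20), "theme": "reload"},
]
patch_day = date(2026, 1, 15)

def window_volume(snippets, theme, start, end):
    """Snippets matching the theme within [start, end)."""
    return sum(1 for s in snippets if s["theme"] == theme and start <= s["day"] < end)

before = window_volume(snippets, "reload", date(2026, 1, 1), patch_day)
after = window_volume(snippets, "reload", patch_day, date(2026, 2, 1))
delta_pct = (after - before) / before * 100  # negative = friction dropped
```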

4. Structured Review Signals

Every review and snippet gets deterministic signals: sentiment, quality, engagement, and taxonomy.

Review analysis

Sentiment −0.3 · Engagement 0.7 · Quality 0.82

Taxonomy

frustration · inventory ux · analytical

So every insight and weight rests on the same per-review signals — filter and compare consistently.
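
A toy sketch of deterministic per-review scoring; the word lists and formulas are invented, but they illustrate the point: the same review always yields the same signal record:

```python
# Sketch: every review maps to one deterministic signal record, so filtering
# and comparison stay consistent. Scoring rules here are toy assumptions.
from dataclasses import dataclass

NEGATIVE = {"slow", "clunky", "broken"}
POSITIVE = {"great", "smooth", "fun"}

@dataclass(frozen=True)
class ReviewSignals:
    sentiment: float    # -1..1
    engagement: float   # 0..1, proxied here by review length
    tags: tuple[str, ...]

def score(review: str) -> ReviewSignals:
    words = review.lower().split()
    neg = sum(w in NEGATIVE for w in words)
    pos = sum(w in POSITIVE for w in words)
    sentiment = (pos - neg) / max(pos + neg, 1)
    engagement = min(len(words) / 50, 1.0)
    tags = ("frustration",) if neg > pos else ()
    return ReviewSignals(sentiment, engagement, tags)

signals = score("Reload is slow and the inventory is clunky")
```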

5. Evidence Quality & Weighting

Experience and engagement bands, reliability scores, and skew warnings so decisions aren't distorted by noise.

Author signals

Experience: mid · Engagement: high
Review reliability: 0.88
Evidence: Sufficient

So you weight feedback by who said it and how reliable it is — not one vote per review.
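
A sketch of reliability weighting with made-up scores, showing how it can differ sharply from a one-vote-per-review average:

```python
# Sketch: reviews count in proportion to a reliability score instead of one
# vote each. Both records and their weights below are invented examples.
reviews = [
    {"sentiment": -0.8, "reliability": 0.9},  # experienced, detailed review
    {"sentiment": 0.6, "reliability": 0.2},   # low-signal drive-by review
]

def weighted_sentiment(reviews: list[dict]) -> float:
    """Average sentiment, weighted by per-review reliability."""
    total_w = sum(r["reliability"] for r in reviews)
    return sum(r["sentiment"] * r["reliability"] for r in reviews) / total_w

plain = sum(r["sentiment"] for r in reviews) / len(reviews)  # near neutral
weighted = weighted_sentiment(reviews)                       # clearly negative
```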

6. Emotional Journey

Within a single review, sentiment can move from frustrated to satisfied — you see the pattern, volatility, and resolution.

Segment → sentiment

1: −0.2 (frustrated)
2: 0.1 (mixed)
3: 0.5 (satisfied)

Pattern: recovery · Resolution: satisfied

So you spot recovery or escalation patterns, not just an average score.
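
One way such a journey could be classified; the pattern names follow the example above, while the thresholds and branch logic are assumptions for this sketch:

```python
# Sketch: classify a review's ordered segment sentiments into a journey
# pattern. Thresholds and labels are illustrative assumptions.
def journey_pattern(segments: list[float]) -> str:
    """Map ordered segment sentiments (-1..1) to a journey pattern."""
    if len(segments) < 2 or segments[0] == segments[-1]:
        return "flat"
    if segments[-1] > segments[0]:
        # crossing from negative to non-negative = a genuine recovery
        return "recovery" if segments[0] < 0 <= segments[-1] else "improving"
    return "escalation" if segments[-1] < 0 else "declining"

pattern = journey_pattern([-0.2, 0.1, 0.5])  # the example review above
```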

Who This Is For

For anyone responsible for prioritisation, roadmap calls, or performance interpretation.

01

Devs & Live Ops

Verify tuning changes and systemic adjustments.

02

Studio Leadership

Align around what is structurally recurring, not anecdotal.

03

Marketing

Identify mismatched expectations vs actual player experience.

04

Publishers & Investors

Assess product risk and trend trajectory.

What You Leave With

  • A structured map of recurring experience patterns
  • Segment-specific friction breakdown
  • A ranked list of high-recurrence opportunity clusters
  • A time-window comparison around one defined change (for impact validation)
  • Decision records linked to evidence — with predicted vs actual outcomes and learnings
  • A reusable workflow for ongoing feedback analysis

No lock-in. Exportable outputs.

Example from your pilot

Decision record · linked to evidence

Outcome: Positive · Effectiveness 100%

Original question

Should we rework the mount stamina system?

Decision

Remove stamina drain outside combat; double regen in combat.

Context: Mount stamina was the #2 complaint. Players felt punished for exploring.

Signal movement · predicted vs actual

Sentiment: +12% predicted → +14.3% actual
Emotion: frustration dominant → satisfaction dominant
Cluster volume: 421 → 180 reviews
Segment mix: veteran 45% → 52%
Top theme: stamina friction ↓ · exploration freedom ↑

Lesson learned

Removing a punishing exploration mechanic exceeded predictions — reusable pattern for future systems.

The CoreFeedback Loop

Ingest → Snippetise → Classify → Signal Manifestation → Cluster → Compare → Decide → Verify → Learn → Loop

01 · Ingest

Ingest Steam reviews during the pilot. Single source, single feed: all signals flow into one unified pipeline.

02 · Snippetise

Break feedback into analysable pieces. One review becomes many snippets, segmented by theme.

03 · Classify

Tag and categorise automatically against the taxonomy (Gameplay · Balance · QoL). Example tags: frustration, inventory ux, analytical. Auto-tagged per snippet.

04 · Signal Manifestation

Additional signals from enrichment, classification & NLP, calculated for snippets & profiles: sentiment, recurrence, segment, quality. These signals feed clustering and comparison.

05 · Cluster

Group similar themes into topics. Example: Inventory UX friction (Volume 421 · Intensity 0.72 · Confidence High). Who's affected: 45% veteran · 30% mid · 25% new.

06 · Compare

Benchmark topics and cohorts. Pre-patch (baseline): Vol 847 · 78%. Post-patch (measured): Vol 423 · +12% delta vs baseline.

07 · Decide

Commit to a path with clear criteria. Example: Option B (QoL + roadmap) selected over "ship now" and "wait", with success criteria of Retention +5% and NPS +10: clear targets to verify.

08 · Verify

Track implementation and impact. Example: QoL shipped in 3.2, 75% implemented · Retention +6%, with per-item status (done, in progress, pending). Then learn, and the loop repeats.
This is not a one-off report. It's a repeatable decision system.

The pilot includes the full core decision loop. Clustering and the decision layer are simplified, and not every signal from the full product is surfaced yet, but there is enough to run the loop and verify impact.

Apply for the pilot.

Deterministic Feedback Analysis. Not Black-Box LLM Summaries.

Apply for Pilot
Product
  • Pilot Program
  • Decision Reports
Company
  • About
  • Support
Resources
  • News
  • Substack
Legal
  • Privacy
  • Terms
  • For investors
Social
  • Discord
  • LinkedIn
  • Bluesky
  • Instagram
© 2026 GameDataCore. All rights reserved.