CoreFeedback ingests Steam reviews and extracts recurring experience patterns, friction clusters, player-archetype splits, and signal shifts before and after changes.
So you can make roadmap decisions with measurable evidence — not instinct.
Free. Limited credits. Cohort access only.
Most teams already have feedback. What they lack is a repeatable loop from signal to decision to verified impact. The pilot runs you through that full loop.
Example Signal
Recurring friction around reload speed and combat pacing, concentrated in competitive play.
Event
Patch 1.04 — Reload speed buff
Top keywords, compared before vs after patch
Confidence: High (recurrence threshold met)
This isn't sentiment. It's pattern movement.
No black-box summarisation.
The output is structured, reproducible, and auditable.
Reviews move through these stages before they become clusters and signals.
Feedback processing
Raw: 40 (awaiting) · Queued: 12 (in pipeline) · Processing: 13 (analyzing) · Completed: 15 (snippets)
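The stage flow above can be sketched as a minimal state machine. All names, counts, and functions here are illustrative, not CoreFeedback's actual pipeline:

```python
from collections import Counter

# Illustrative stages; the real pipeline is not public.
STAGES = ["raw", "queued", "processing", "completed"]

def advance(stage: str) -> str:
    """Move a review to the next stage; completed reviews stay completed."""
    i = STAGES.index(stage)
    return STAGES[min(i + 1, len(STAGES) - 1)]

def stage_counts(reviews: list[dict]) -> Counter:
    """Tally how many reviews sit in each stage, as in the dashboard above."""
    return Counter(r["stage"] for r in reviews)

reviews = [{"id": n, "stage": "raw"} for n in range(3)]
reviews[0]["stage"] = advance(reviews[0]["stage"])  # raw -> queued
print(stage_counts(reviews))  # Counter({'raw': 2, 'queued': 1})
```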
Clusters, archetype splits, time tracking, review-level signals, evidence quality, and emotional journey — so you see what’s recurring, who it affects, and how it moves.
What appears repeatedly across the dataset — not just what's loud.
Inventory UX friction
Who's affected
45% veteran · 30% mid · 25% new
So you see what's really repeated, who it affects, and how confident you can be.
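The difference between recurring and merely loud can be sketched like this: count the distinct reviews that mention a theme, not the total mentions, and apply a recurrence threshold. Function names and themes are hypothetical:

```python
from collections import defaultdict

def recurring_themes(reviews, threshold=3):
    """Count DISTINCT reviews per theme, so one loud review repeating itself
    does not outweigh a theme that recurs across many reviews."""
    seen = defaultdict(set)
    for r in reviews:
        for theme in set(r["themes"]):  # dedupe mentions within one review
            seen[theme].add(r["id"])
    return {t: len(ids) for t, ids in seen.items() if len(ids) >= threshold}

reviews = [
    {"id": 1, "themes": ["inventory_ux"] * 5},  # loud, but a single voice
    {"id": 2, "themes": ["reload_speed"]},
    {"id": 3, "themes": ["reload_speed"]},
    {"id": 4, "themes": ["reload_speed"]},
]
print(recurring_themes(reviews, threshold=3))  # {'reload_speed': 3}
```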
The same complaint can come from different player types. CoreFeedback splits signals by who said it — story-focused vs min-maxers — so you know who you're fixing for.
Same theme, split by who said it
Immersion-focused
e.g. lore inconsistencies
Optimisation-focused
e.g. drop-rate imbalance
Targeted decisions instead of one-size-fits-all fixes.
See how signals change before vs after a patch, campaign, or season — not just a single snapshot.
Pre-patch
Baseline
Post-patch
Measured
So you can validate whether a change or campaign actually moved the needle.
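A minimal sketch of a before/after comparison, assuming each review carries a posted date and a list of themes (the field names are hypothetical):

```python
from collections import Counter
from datetime import date

def signal_shift(reviews, patch_day):
    """Compare per-theme review share before vs after a patch date."""
    before, after = Counter(), Counter()
    for r in reviews:
        bucket = before if r["posted"] < patch_day else after
        bucket.update(set(r["themes"]))  # one vote per theme per review

    def share(counts):
        total = sum(counts.values()) or 1
        return {t: n / total for t, n in counts.items()}

    b, a = share(before), share(after)
    return {t: a.get(t, 0.0) - b.get(t, 0.0) for t in set(b) | set(a)}

reviews = [
    {"posted": date(2024, 1, 1), "themes": ["reload_speed"]},
    {"posted": date(2024, 1, 2), "themes": ["reload_speed"]},
    {"posted": date(2024, 2, 1), "themes": ["combat_pacing"]},
]
shift = signal_shift(reviews, patch_day=date(2024, 1, 15))
print(shift["reload_speed"])  # -1.0: the complaint receded after the patch
```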
Every review and snippet gets deterministic signals: sentiment, quality, engagement, and taxonomy.
Review analysis
Taxonomy
So every insight and weight rests on the same per-review signals — filter and compare consistently.
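A toy sketch of what deterministic means here: the same review text always maps to the same signals, with no sampling involved. The word lists, thresholds, and field names are invented for illustration:

```python
# Illustrative word lists and thresholds; the real taxonomy is richer.
POSITIVE = {"great", "fun", "smooth"}
NEGATIVE = {"broken", "grind", "laggy"}
TAXONOMY = {"combat", "inventory", "ui", "performance"}

def review_signals(text: str, hours_played: float) -> dict:
    """Derive deterministic per-review signals: the same input always yields
    the same output, which keeps results reproducible and auditable."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return {
        "sentiment": "positive" if score > 0 else "negative" if score < 0 else "neutral",
        "quality": min(len(words) / 50, 1.0),  # longer reviews score higher, capped at 1
        "engagement": "high" if hours_played >= 100 else "low",
        "taxonomy": sorted({w for w in words if w in TAXONOMY}),
    }

a = review_signals("Combat is great but inventory is broken broken", 120)
b = review_signals("Combat is great but inventory is broken broken", 120)
assert a == b  # deterministic: identical input, identical signals
```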
Experience and engagement bands, reliability scores, and skew warnings so decisions aren't distorted by noise.
Author signals
So you weight feedback by who said it and how reliable it is — not one vote per review.
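A minimal sketch of reliability weighting, assuming each review carries a per-author reliability score in [0, 1] (the schema is hypothetical):

```python
def weighted_signal(reviews):
    """Aggregate sentiment weighted by author reliability,
    instead of one vote per review."""
    total_w = sum(r["reliability"] for r in reviews)
    if total_w == 0:
        return 0.0
    return sum(r["sentiment"] * r["reliability"] for r in reviews) / total_w

reviews = [
    {"sentiment": -1.0, "reliability": 0.9},  # experienced, consistent reviewer
    {"sentiment": +1.0, "reliability": 0.1},  # throwaway account
]
print(weighted_signal(reviews))  # close to -0.8: reliable voices dominate
```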
Within a single review, sentiment can move from frustrated to satisfied — you see the pattern, volatility, and resolution.
Segment → sentiment
Pattern: recovery · Resolution: satisfied
So you spot recovery or escalation patterns, not just an average score.
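The journey classification can be sketched from per-segment sentiment scores; the labels mirror the example above, but the scoring and thresholds are invented:

```python
def journey(segment_scores):
    """Classify a within-review sentiment trajectory into
    pattern, volatility, and resolution."""
    start, end = segment_scores[0], segment_scores[-1]
    swings = [abs(b - a) for a, b in zip(segment_scores, segment_scores[1:])]
    volatility = max(swings) if swings else 0
    if end > start:
        pattern = "recovery"
    elif end < start:
        pattern = "escalation"
    else:
        pattern = "flat"
    resolution = "satisfied" if end > 0 else "frustrated" if end < 0 else "neutral"
    return {"pattern": pattern, "volatility": volatility, "resolution": resolution}

# Frustrated opening, satisfied close: a recovery, as in the example above.
print(journey([-2, 0, 1]))
# {'pattern': 'recovery', 'volatility': 2, 'resolution': 'satisfied'}
```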
Verify tuning changes and systemic adjustments.
Align around what is structurally recurring, not anecdotal.
Identify mismatched expectations vs actual player experience.
Assess product risk and trend trajectory.
No lock-in. Exportable outputs.
Example from your pilot
Decision record · linked to evidence
Original question
Should we rework the mount stamina system?
Decision
Remove stamina drain outside combat; double regen in combat.
Context: Mount stamina was the #2 complaint. Players felt punished for exploring.
Signal movement · predicted vs actual
Lesson learned
Removing a punishing exploration mechanic exceeded predictions — reusable pattern for future systems.
Loop
Learn
Signal Manifestation
Additional signals from enrichment, classification & NLP, calculated for snippets & profiles. These signals feed clustering and comparison.
This is not a one-off report. It's a repeatable decision system.
The pilot includes the full core decision loop. Clustering and the decision layer are simplified, and not every signal from the full product is surfaced yet, but there is enough to run the loop and verify impact.