Before Greenlight

Should this game exist in this form?

Greenlight decisions commit years of work, capital, and people. Most failures aren’t caused by bad ideas, but by decisions made without defensible audience evidence.

Once made, these decisions are expensive to reverse.

Why This Decision Is Hard

What teams are up against

  • Ambiguous early signals
  • Conflicting internal opinions
  • Survival and funding needs
  • Publisher pressure
  • Bias from past successes

What usually fills the gap

  • Gut feel
  • Anecdotes
  • Vanity metrics
  • Over-indexing on loud voices
  • Estimated and inaccurate data

The decisions below determine whether confidence is built on real audience alignment — or fragile assumptions.

Audience Reality

The Decision

This decision determines whether the game’s core appeal aligns with real player motivations — or only internal expectations.

What Goes Wrong Without Evidence

  • Marketing targeting the wrong audience
  • Misaligned tone or mechanics
  • Strong early praise followed by rapid drop-off

How GameDataCore Supports This

GameDataCore connects behavioural, emotional, and community signals to determine whether an audience fit actually exists — and how fragile that fit is.

What Evidence Looks Like Here

  • Behavioural patterns from comparable titles
  • Expectation language in community discussions
  • Motivational clustering across genres
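
To make “motivational clustering across genres” concrete, here is a minimal, purely illustrative Python sketch: the titles, motivation dimensions, and scores are invented, and this is not GameDataCore’s pipeline or API.

    # Illustrative sketch only: invented titles and motivation scores,
    # not GameDataCore's actual data model or pipeline.
    import numpy as np
    from sklearn.cluster import KMeans

    # Each row profiles a (hypothetical) title on three player motivations:
    # [mastery, narrative, social]. Scores are made-up values in [0, 1].
    titles = ["Roguelike A", "Story RPG B", "Co-op Shooter C", "Puzzle D", "MMO E"]
    profiles = np.array([
        [0.9, 0.2, 0.1],
        [0.3, 0.9, 0.2],
        [0.6, 0.3, 0.9],
        [0.8, 0.1, 0.1],
        [0.4, 0.4, 0.9],
    ])

    # Group titles by motivation profile rather than by genre label.
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(profiles)

    for title, label in zip(titles, kmeans.labels_):
        print(f"{title}: cluster {label}")

    # A prospective concept scored on the same dimensions can then be compared
    # against the nearest cluster centroid to gauge how close its intended
    # appeal sits to audiences that demonstrably exist.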

Community Signals

The Decision

This decision determines whether early community reactions reflect genuine player alignment — or short-term noise that will collapse under pressure.

What Goes Wrong Without Evidence

  • Over-reacting to loud minority voices
  • Misreading sentiment as intent
  • Feature changes that satisfy no one
  • Community trust eroding before launch

How GameDataCore Supports This

GameDataCore tracks how emotional signals and expectations form, spread, and stabilise — helping teams distinguish meaningful signals from transient noise before trust is lost.

What Evidence Looks Like Here

  • Emotional signal trajectories over time
  • Repeated themes across independent discussions
  • Language shifts as expectations solidify
  • Early pressure points that predict conflict
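
As a toy illustration of an “emotional signal trajectory over time”, the sketch below smooths per-post sentiment with a short rolling window to separate a one-day spike from a stabilising trend. The posts and sentiment values are invented, the scoring model is assumed to exist elsewhere, and none of this reflects GameDataCore’s internals.

    # Toy illustration: invented post data and stand-in sentiment scores,
    # not GameDataCore's models or data.
    import pandas as pd

    posts = pd.DataFrame({
        "date": pd.to_datetime([
            "2024-01-01", "2024-01-02", "2024-01-03", "2024-01-04",
            "2024-01-05", "2024-01-06", "2024-01-07", "2024-01-08",
        ]),
        # Hypothetical per-post sentiment in [-1, 1], from any sentiment model.
        "sentiment": [0.6, 0.5, -0.8, 0.4, 0.5, 0.6, 0.55, 0.6],
    })

    daily = posts.set_index("date")["sentiment"].resample("D").mean()

    # A short rolling window smooths single-day spikes; comparing it with the
    # raw series helps separate transient noise (the -0.8 day) from a
    # trajectory that is actually stabilising around a level.
    trend = daily.rolling(window=3, min_periods=1).mean()

    print(pd.DataFrame({"raw": daily, "rolling_3d": trend}).round(2))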

Market Context

The Decision

This decision determines whether the game is positioned within a viable market context — or evaluated in isolation from real competitive expectations.

What Goes Wrong Without Evidence

  • Overestimating differentiation
  • Competing on the wrong dimensions
  • Pricing or scope mismatches
  • Being compared unfavourably at launch

How GameDataCore Supports This

GameDataCore situates a project within its real competitive landscape, grounding creative ambition in how players actually evaluate similar games.

What Evidence Looks Like Here

  • Behavioural patterns across comparable titles
  • Expectation baselines within the genre
  • Shifts in player tolerance over time
  • Where similar games succeed — and fail
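
An “expectation baseline within the genre” can be pictured as simple medians over comparable titles, then checked against the planned project. The comparables, prices, lengths, and review shares below are invented for illustration only.

    # Illustrative only: the comparison titles and figures are invented,
    # not market data from GameDataCore.
    from statistics import median

    # (price_usd, hours_to_credits, positive_review_share) per comparable title.
    comparables = {
        "Comparable A": (19.99, 12, 0.88),
        "Comparable B": (24.99, 18, 0.81),
        "Comparable C": (29.99, 25, 0.92),
        "Comparable D": (19.99, 10, 0.76),
    }

    prices, lengths, scores = zip(*comparables.values())
    baseline = {
        "median_price": median(prices),
        "median_length_h": median(lengths),
        "median_positive_share": median(scores),
    }

    # A planned (hypothetical) project is then read against the baseline
    # players already use when they evaluate the genre.
    planned = {"price": 34.99, "length_h": 9}
    print(baseline)
    print("Price above genre median:", planned["price"] > baseline["median_price"])
    print("Length below genre median:", planned["length_h"] < baseline["median_length_h"])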

Idea Stress-Testing

The Decision

This decision determines whether the core concept holds up under real player scrutiny — or only works in theory.

What Goes Wrong Without Evidence

  • Concepts that sound compelling but don’t convert
  • Mechanics players abandon after novelty fades
  • Narrative hooks that fail to sustain engagement
  • Costly iteration on weak foundations

How GameDataCore Supports This

GameDataCore surfaces where similar ideas succeed or break down, helping teams pressure-test concepts before commitment becomes irreversible.

What Evidence Looks Like Here

  • Early rejection patterns in similar concepts
  • Drop-off points tied to mechanic or tone
  • Player language around “promise” versus “delivery”
  • Signals of sustained engagement, not novelty
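
One simple way to read “drop-off points tied to mechanic or tone” is to align where players stop with the content segment active at that point. The segments, player counts, and threshold below are invented for illustration and are not a GameDataCore heuristic.

    # Illustrative sketch with invented numbers, not GameDataCore output.
    # Idea: align where players stop with the mechanic or tonal beat active
    # at that point, to see whether drop-off clusters around a specific idea.
    segments = [
        # (segment, players entering, players finishing)
        ("tutorial",           1000, 930),
        ("first stealth arc",   930, 610),
        ("open hub",            610, 560),
        ("second stealth arc",  560, 300),
        ("finale",              300, 280),
    ]

    for name, entered, finished in segments:
        drop = 1 - finished / entered
        flag = "  <-- concentrated drop-off" if drop > 0.3 else ""
        print(f"{name:<20} {drop:5.1%}{flag}")

    # If drop-off clusters on the same mechanic each time it appears (here the
    # hypothetical stealth arcs), the concept itself, not polish, is the risk.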

Effort vs Impact

The Decision

This decision determines whether development effort meaningfully increases player confidence — or simply adds complexity.

What Goes Wrong Without Evidence

  • Over-investing in low-impact features
  • Polishing areas players don’t value
  • Neglecting friction that actually drives drop-off
  • Burning time without increasing confidence

How GameDataCore Supports This

GameDataCore helps teams identify where effort translates into real confidence — and where it doesn’t — before resources are irreversibly spent.

What Evidence Looks Like Here

  • Player tolerance thresholds
  • Features that correlate with recommendation or abandonment
  • Emotional responses tied to specific systems
  • Diminishing returns on added complexity
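
A minimal sketch of “features that correlate with recommendation or abandonment”: correlate per-cohort engagement with each system against a recommendation signal before investing further effort in it. The cohorts, systems, and numbers are invented, the method is deliberately simplistic, and it is not GameDataCore’s methodology.

    # Minimal sketch with invented data, not GameDataCore's methodology.
    # Idea: check which features actually move a recommendation signal before
    # committing more effort to them.
    import numpy as np

    # Rows = hypothetical player cohorts; columns = engagement with three systems.
    engagement = np.array([
        # crafting  photo_mode  co_op
        [0.9,       0.1,        0.8],
        [0.7,       0.9,        0.6],
        [0.2,       0.8,        0.1],
        [0.8,       0.2,        0.9],
        [0.3,       0.7,        0.2],
    ])
    recommended = np.array([1.0, 1.0, 0.0, 1.0, 0.0])  # share who recommend

    features = ["crafting", "photo_mode", "co_op"]
    for i, name in enumerate(features):
        r = np.corrcoef(engagement[:, i], recommended)[0, 1]
        print(f"{name:<11} corr with recommendation: {r:+.2f}")

    # Effort on a system with near-zero (or negative) correlation adds
    # complexity without adding confidence.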

Opportunity Cost

The Decision

This decision determines what the studio is choosing not to do, and whether that trade-off is justified.

What Goes Wrong Without Evidence

  • Locking into a path that limits future options
  • Carrying technical or design debt forward
  • Sacrificing long-term flexibility for short-term gains
  • Repeating avoidable mistakes across projects

How GameDataCore Supports This

GameDataCore frames decisions in terms of downstream impact, helping studios understand not just what a choice enables — but what it forecloses.

What Evidence Looks Like Here

  • Patterns of downstream risk from similar choices
  • Long-tail impact on live support or sequels
  • Studio-level consequences across a catalogue
  • Where past trade-offs quietly failed

Compounding Decision Intelligence

HOW TEAMS USE THESE DECISIONS

Teams use Before Release decisions to:

  • Decide what not to change late
  • Adjust messaging without scope creep
  • Enter launch with eyes open, not hopeful
  • Protect long-term trust over short-term optics

This is not about perfect launches.
It’s about defensible ones.

RELATIONSHIP TO OTHER DECISIONS

Before Release decisions connect directly to:

  • Audience Reality — who the game was built for
  • Community Signals — how expectations formed
  • After Release — whether outcomes compound or decay

What you lock in here determines how everything that follows is interpreted.

WHAT THIS IS NOT

  • Not a launch checklist
  • Not QA tooling
  • Not marketing optimisation

Our decision engine supports judgement at the point where being wrong becomes public.

A Shared Evidence Layer for Real Decisions

Used daily to align teams around the same underlying reality

Ground decisions in behaviour, motivation, and emotional evidence — not opinion

Replace fragmented analytics, documents, and gut-feel with shared judgement