Client Context
A mid-market private equity firm trying to modernise origination without adding more point tools.
A European private equity firm focused on software and tech-enabled services, with a fund of approximately USD 680 million, wanted to transform deal origination. Leadership had already experimented with custom GPTs, enrichment tools, and third-party scoring products over an 18-month period, but the tools remained disconnected.
The core problem was not appetite. It was architecture. Fragmented AI adoption was replicating the firm's underlying data problem instead of solving it.
Leadership had already seen enough AI demos to know the opportunity was real. What they needed was a system that could operate inside the reality of deal origination rather than another disconnected experiment.
The Challenge
Too many leads, too little depth, and no shared intelligence layer.
Origination had consumed the largest share of team time over the prior 12 to 24 months. The research and investment teams were operating at an estimated 150% of capacity, while the firm could deeply evaluate only a small number of opportunities at any given time.
Shallow first-pass filtering
More than 500 leads per year were triaged manually, with most rejected on basic filters such as geography, ownership type, and headcount ranges.
Intelligence leakage
Banker decks, expert calls, newsletters, and internal memos were constantly being lost because no single system captured and connected them.
Weak proxies
Traditional heuristics that used headcount as a proxy for revenue were becoming less reliable as AI changed how businesses scaled.
Limited evaluation depth
Only the top 10 to 15 prospects received the kind of multidimensional analysis that could materially improve deal selection.
In practice, that meant the firm was making high-consequence kill decisions on thin evidence, while valuable internal knowledge remained trapped in emails, calls, decks, and individual memory.
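To make the "thin evidence" problem concrete, the shallow first-pass screen described above can be sketched as a few hard rules. This is an illustrative assumption of how such triage typically works; the field names, regions, and thresholds are hypothetical, not the firm's actual criteria.

```python
from dataclasses import dataclass

@dataclass
class Lead:
    name: str
    country: str
    ownership: str   # e.g. "founder-owned", "PE-backed", "public"
    headcount: int

def passes_basic_filters(lead: Lead) -> bool:
    """Shallow screen on geography, ownership type, and headcount range.

    Thresholds are illustrative; the point is that a single crude rule
    can kill a lead with no deeper analysis.
    """
    in_region = lead.country in {"DE", "FR", "NL", "SE", "UK"}
    right_owner = lead.ownership == "founder-owned"
    right_size = 50 <= lead.headcount <= 500
    return in_region and right_owner and right_size

# A lead just outside the headcount band is rejected outright,
# regardless of business quality.
lead = Lead("ExampleCo", "DE", "founder-owned", 40)
print(passes_basic_filters(lead))  # False: killed on headcount alone
```

A filter like this is cheap to run over 500+ leads a year, but it encodes exactly the weak proxies the firm wanted to move beyond.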
The Approach
Working architecture first, strategy deck never.
Leadership rejected the traditional consulting cadence and asked for a working architecture in three weeks. 60x ran an immersive discovery across investment, research and origination, value creation, tech due diligence, operations, and finance.
- Week 1: stakeholder interviews and end-to-end workflow mapping.
- Weeks 2 to 3: architecture design, prioritisation, and MVP scraper concepts for the steering group.
- Weeks 3 to 8: build and deploy the AI brain in parallel with targeted use cases.
- Ongoing: retained strategic partnership to evolve the platform as capabilities changed.
That process surfaced a consistent message across teams: origination was the highest-value place to start, but only if the firm first built a permissioned knowledge fabric capable of supporting every downstream use case.
What We Built
A unified knowledge graph with AI scoring across the full origination funnel.
60x designed a permissioned knowledge graph that ingests internal and external sources, then feeds an AI scoring engine capable of evaluating every lead at a far deeper level than the team could achieve manually.
Unified data layer
Public data, banker materials, expert insights, emails, and internal research were brought into one queryable system.
AI scoring at scale
Every lead received analysis across business quality, growth signals, sector fit, and competitive context rather than crude filter logic.
Live company views
Top prospects were further enriched with proprietary internal knowledge, creating a view no external platform could replicate.
The system flipped the funnel from purely top-down thematic research to a model where machine-generated, bottom-up ideas fed the top of origination and humans focused on the highest-signal work.
Crucially, this was not just a better sourcing workflow. It was a new way for the firm to capture, retain, and reuse proprietary intelligence throughout the investment process.
The Results
10x more leads evaluated in depth, a 5x productivity gain, and zero intelligence leakage.
- Lead coverage: the firm could evaluate 10x more opportunities at meaningful depth.
- Analyst productivity: manual triage was eliminated, delivering a 5x productivity improvement.
- Knowledge capture: banker decks, emails, expert calls, and newsletters were automatically captured instead of lost.
- Decision quality: kill decisions moved from shallow proxies toward richer business model and market analysis.
The most important shift was qualitative: depth of analysis was no longer rationed to a tiny shortlist by analyst hours.
Operational Shift
The team moved from manual triage toward conviction-building.
Before 60x, significant analyst time was spent moving data between systems, rereading the same external inputs, and performing low-depth screening. After 60x, that time was redeployed into strategy, relationship building, and deeper work on the opportunities most worth pursuing.
That mattered because the firm's goal was never headcount reduction. It was to increase the quality and speed of judgement across the funnel.
Why It Matters
The advantage was not just speed. It was depth of judgement at scale.
The most important outcome was not that the team processed more leads. It was that every lead received the level of analytical attention previously reserved for the shortlist.
For a fund targeting 3x returns over five years, better deal selection, earlier conviction, and stronger portfolio support compound into a structural edge. Human capacity was not removed. It was redeployed into strategy, relationships, and winning the best opportunities faster.
For private equity, that is the difference between using AI as a convenience layer and using it as a true edge in fund performance.