All projects

MCC AI Blueprints: mapping 266 workflows to a future-state strategy

Took 266 discovered workflows across seven enterprise domains, mapped them against 43 AI Blueprint initiatives, measured coverage against a 70% target, and shipped the entire analysis as a live multi-page Railway site.

Year
2025 — 2026
Client
IBM Consulting
Scope
7 domains, 266 processes, 43 initiatives
Built with
Figma, Claude Code, custom skill, Railway
Live
Combined coverage view: 266 processes, 124 mapped, 142 gaps, 47% aggregate coverage against the 70% agentification target.

Two streams of work, no crosswalk between them

IBM's Marketing Communications Center engagement produced two parallel streams of work that never met in the middle. The strategy team had built an AI Blueprint of 43 transformation initiatives across seven enterprise domains. Separately, an iX discovery effort had documented 266 existing workflows in PDF form, one domain at a time, with no consistent format and no way to query across them.

The blueprint lived in a master Excel and a strategy deck. The discovery lived in a folder of PDFs with names like Operations_E2E_Process.pdf and CSC_E2E_Process_Flow.pdf. Both sides were meticulous. Neither side could answer the one question stakeholders kept asking: how much of what we already do is covered by what we plan to build?

Answering that meant reading 266 workflow descriptions, holding 43 initiatives in your head, and mapping them by intent, tooling, stakeholder, and outcome. Doing it once was a multi-week effort. Doing it every time a new domain landed was untenable. So nobody was doing it, and the coverage story stayed invisible at exactly the moment leadership needed it most: heading into the agentification investment review.

I took it on as side work to make the relationship visible and, more importantly, to build a system where it could stay visible without re-doing the work each cycle.

The starting state
266
workflows discovered, no normalized format
43
blueprint initiatives across 7 domains
7
PDFs of varying length, structure, and naming
0
existing crosswalk between the two streams

A repeatable mapping system, packaged as a Claude Code skill

Manually mapping 266 workflows would have taken weeks and decayed the moment a new domain landed. Instead I built the work as a repeatable system: a custom Claude Code skill that handled extraction, normalization, mapping, scoring, and rendering end-to-end. Each new domain runs through the same pipeline, produces output in the same format, and ships to the same Railway service. The work compounds instead of restarting.

I want to be precise about what "AI did the work" means here, because it's almost always the wrong framing. The AI didn't make the strategic decisions. I designed a system where the strategic decisions could happen reliably and at scale. Claude Code was the runtime. The skill encoded my judgment, my prompts, my mapping rubric, and my output format. The value lives in the skill, not in any single domain's mapping.

Skill anatomy · /mcc-ix-mapping
STAGE 01
Extract
Run pdftotext over a domain's discovery PDFs, normalize headings, emit a structured markdown corpus.
STAGE 02
Compile
Discrete process records pulled from the markdown, deduped, tagged with source PDF and section path.
STAGE 03
Map
Each process scored against every blueprint initiative for the domain. Strong, medium, or no match.
STAGE 04
Score
Coverage rate and target gap calculated against the 70% agentification benchmark for the domain.
STAGE 05
Render
Self-contained HTML fragment emitted into the deploy folder, slotted into the live shell via fetch.
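The five stages can be sketched as a thin Python pipeline. Everything below is illustrative: the real skill encodes these steps as prompts inside Claude Code, and every function and field name here is invented for the sketch.

```python
import subprocess
from dataclasses import dataclass, field
from pathlib import Path

# Illustrative sketch of the five stages -- the real skill runs these as
# prompt-driven steps inside Claude Code, not as plain Python.

@dataclass
class ProcessRecord:
    name: str
    body: str
    matches: list[str] = field(default_factory=list)  # matched initiative IDs

def extract(pdf: Path) -> str:
    """Stage 01: shell out to pdftotext; -layout preserves column structure."""
    return subprocess.run(["pdftotext", "-layout", str(pdf), "-"],
                          capture_output=True, text=True, check=True).stdout

def compile_records(corpus_md: str) -> list[ProcessRecord]:
    """Stage 02: one record per '## ' heading in the normalized corpus."""
    records = []
    for chunk in corpus_md.split("\n## ")[1:]:
        name, _, body = chunk.partition("\n")
        records.append(ProcessRecord(name.strip(), body.strip()))
    return records

def score(records, initiatives, judge):
    """Stages 03-04: judge(record, desc) -> 'strong' | 'medium' | None.
    A process counts as mapped if it matches at least one initiative."""
    for rec in records:
        rec.matches = [iid for iid, desc in initiatives.items()
                       if judge(rec, desc) in ("strong", "medium")]
    mapped = sum(1 for r in records if r.matches)
    coverage = round(100 * mapped / len(records)) if records else 0
    return {"total": len(records), "mapped": mapped,
            "coverage": coverage, "target_gap": coverage - 70}

def render(domain: str, stats: dict) -> str:
    """Stage 05: emit a self-contained fragment for deploy/public/."""
    return (f'<section id="ix-{domain}"><h2>{domain.upper()}</h2>'
            f'<p>{stats["mapped"]}/{stats["total"]} mapped · '
            f'{stats["coverage"]}% coverage</p></section>')
```

The point of the shape: each stage consumes the previous stage's output and nothing else, which is what lets a new domain run through the same pipeline untouched.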

What the skill actually does to a domain

01
Normalize the source material
Discovery PDFs vary wildly. Some are flat process flows. Others are dense narratives with embedded tables. The first stage runs pdftotext, strips noise, identifies heading hierarchies, and produces one markdown record per process. This is the unglamorous step that makes everything downstream possible.
02
Pull initiatives from the master Excel
The blueprint lives in a master Excel with one row per initiative: ID, name, description, owning domain, capability tags. The skill reads it directly so there's no manual transcription, and so changes to the blueprint flow automatically into the next mapping run.
03
Score every process against every initiative
For a 30-process domain with 6 initiatives, that's 180 pairwise comparisons. The skill scores each pair as strong match, medium match, or no match using a rubric I encoded as part of the skill prompt: shared intent, overlapping tooling, common stakeholders, equivalent outcome.
04
Calculate coverage and identify gaps
Coverage rate = mapped processes ÷ total processes. Anything under the 70% agentification target becomes a gap, with a target-gap value (in points) so leadership can see at a glance how far each domain is from the goal. Initiatives with zero mapped processes become opportunities; processes with no initiative become candidates for scope expansion.
05
Emit a styled HTML fragment
Output is a self-contained ix-{domain}.html fragment matching the site's design system: header with summary stats, coverage bar against the target, a card grid of every blueprint initiative with mapped process counts, and the per-process mapping list. Drop it into the deploy folder, push to Railway, done.
A single domain output: T&O. 27 processes, 6 initiatives, 14 mapped, 52% coverage, –18 pts from target.

Where the blueprint matched, and where it didn't

Once all seven domains ran through the skill, the coverage spread became the story. Two domains were essentially fully covered. Three sat 14 to 25 points below target. One — CSR — was at 19% on a base of 105 processes, which made it the loudest signal in the deck.

The variance pointed directly at the next round of strategy work. The blueprint either needed to expand to cover the gaps, or explicitly de-scope them. Either answer was useful. Until this view existed, neither answer was possible.

≥ 70%
< 70%
Domain               Processes   Mapped   Gaps   Coverage (vs. 70% target)
Select Marketing         20        19       1    95%
Content & Creative       19        18       1    95%
Named Account            30        20      10    67%
MMAPI                    32        18      14    56%
T&O                      27        14      13    52%
Corporate Affairs        33        15      18    45%
CSR                     105        20      85    19%
Total processes
266
Mapped
124
Aggregate coverage
47%
Target gap
−23 pts
The same data, rendered live on the site. One row per domain, plus a combined total.
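The totals follow directly from the per-domain counts. A quick script (figures copied from the comparison table above) reproduces the aggregate line:

```python
# Per-domain (processes, mapped) counts, copied from the comparison table.
domains = {
    "Select Marketing":   (20, 19),
    "Content & Creative": (19, 18),
    "Named Account":      (30, 20),
    "MMAPI":              (32, 18),
    "T&O":                (27, 14),
    "Corporate Affairs":  (33, 15),
    "CSR":                (105, 20),
}

TARGET = 70  # agentification target, in percent

total = sum(p for p, _ in domains.values())    # 266
mapped = sum(m for _, m in domains.values())   # 124
gaps = total - mapped                          # 142
coverage = round(100 * mapped / total)         # 47
target_gap = coverage - TARGET                 # -23

print(f"{total} processes, {mapped} mapped, {gaps} gaps, "
      f"{coverage}% coverage ({target_gap:+d} pts vs. target)")
```

Same arithmetic the skill runs in its scoring stage: coverage is mapped ÷ total, and the target gap is that rate minus 70.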

Why a Railway site beat a slide deck

The default move would have been a deck. Decks were the wrong tool for this. 266 workflows, 43 initiatives, 7 domains, per-process mapping notes, coverage bars against a target, comparison tables, drill-in modals, and cross-domain views — there is no honest way to compress that into slides without throwing away the parts that make it useful. PowerPoint forces linear reading, flattens hierarchy, and turns every interactive question into a new slide.

A website with one page per domain handled the volume natively. Coverage charts rendered as actual charts. Modals opened on click for any process to show its mapping notes. Domains compared side by side in a sortable table. Every view was a hyperlink instead of a slide jump. And because it was a live URL, nothing ever needed re-distributing — stakeholders bookmarked it once and saw the latest state every time they returned. No deck versions, no version sprawl, no "is this the current one?"

Getting from "a folder of PDFs" to a site that could carry that much density wasn't a code problem, it was a design problem. I worked it the same way I'd work any product surface: built UX flows in Figma to figure out how stakeholders would move between the combined view, the per-domain pages, and the per-process drill-ins; defined an internal design system (type scale, color tokens, card patterns, table treatments, modal behavior) so every fragment the skill emitted looked like it belonged to the same product; and only then wrote the templates the skill would render against. The skill produces consistent output because the design system gave it consistent rules to follow.

mcc-ai-blueprints/
└─ deploy/
   └─ public/
      ├─ index.html // shell + nav + fetch loader
      ├─ ix-combined.html // 266 / 47%
      ├─ ix-to.html // T&O · 52%
      ├─ ix-ca.html // Corp Affairs · 45%
      ├─ ix-mmapi.html // MMAPI · 56%
      ├─ ix-na.html // Named Account · 67%
      ├─ ix-sm.html // Select Mktg · 95%
      ├─ ix-csr.html // CSR · 19%
      └─ ix-cc.html // Content & Creative · 95%

An index.html shell holds the navigation and shared chrome. Each domain is a self-contained HTML fragment loaded via fetch. Adding a domain means dropping a new fragment and adding one nav item. No framework, no build step, no cold starts.

# 1. run the skill against a new domain
$ claude-code /mcc-ix-mapping content-creative
  → wrote deploy/public/ix-cc.html

# 2. ship it
$ git add deploy/public/ix-cc.html
$ git commit -m "add content & creative"
$ railway up --detach \
    --service mcc-ai-blueprints

# 3. live in seconds
  → mcc-ai-blueprints-production.up.railway.app

The whole loop, from "new PDF arrives" to "stakeholders can read it on the live URL," is minutes. No local dev server. No build pipeline. The same Railway pattern I use for RepoIntel and the rest of my prototypes.

Live sidebar: every domain reachable in one click, with its coverage % visible at a glance.

A coverage story stakeholders can actually use

Stakeholders stopped asking "is this covered?" and started asking "why is CSR at 19%?" — which is a much better question, and the one the blueprint team needed to answer next.
39 / 43
Blueprint initiatives mapped to at least one discovered process. The remaining four became an explicit watchlist for the next strategy cycle.
−23 pts
Aggregate gap below the 70% agentification target. A single number that turned an abstract goal into a measurable backlog.
7 / 7
Domains shipped through the same skill, in the same format, on the same Railway service. The next domain is a command, not a project.
URL
The deliverable lives somewhere it can keep being used, not somewhere it has to be re-presented every time leadership wants to see the picture.

The artifact matters, but the system it was built on is the part that lasts. The next domain doesn't require a new project. It requires running the skill. The next stakeholder doesn't need a meeting. They need a URL. That is the version of AI tooling I want to keep building: repeatable systems that compress weeks of analysis into a single command, and ship the output somewhere humans can actually read it.

Next project
RepoIntel