
May 4, 2026

Research · Issue 02

The AI hiring filter doesn’t exist yet.

Companies mention AI tools constantly. They require them almost never. AI is the new Excel.

TL;DR

We’re seeing an expectation change, not a replacement change. Across 37,920 deduplicated April 2026 job postings, AI fluency is being embedded into knowledge work everywhere from engineering to customer success. It is almost never showing up as a hard hiring requirement.

This runs counter to what Goldman, McKinsey, and the AI labs themselves keep publishing. They describe a near-term step change in which AI fluency becomes a hiring filter. The data does not support it. AI tools have spread to 15% of postings, but only 1.28% of tool mentions appear in required-skills sections. AI is the new Excel: pervasive, normalized, and assumed rather than required.

Non-technical roles are already part of the shift. Designers, product marketers, and customer success managers all reference AI tools at meaningful rates with no “AI” in the title. The strategic question for job seekers, hiring teams, and AI tool vendors isn’t which tool to bet on. It’s how to operate inside a labor market where AI fluency is being assumed silently.

Dataset under CC BY 4.0. Cite as “Four-Leaf 2026 AI Stack Index, Q2.”

By the numbers.

0.02%
of jobs require Cursor by name. The $9B+ IDE is mentioned in 1.3% of postings but rarely required.
9
jobs require ChatGPT or GitHub Copilot combined, out of 37,058.
17%
of “AI Engineer” / “ML Engineer” jobs name no specific AI tool.
26% / 25% / 19%
of designer / product marketing / customer success jobs mention AI tools. None say “AI” in the title.
1.28%
of all 10,891 AI tool mentions sit in required-skills sections. The rest are tech-stack signals.
0
foundation models (GPT, Claude, Llama, Gemini) are required in more than 4 jobs each.

AI is the new Excel.

The Excel pattern.

Excel became universal across white-collar work in the 1990s without a single “Excel certified” hiring filter. Companies didn’t require it. They assumed it. The expectation embedded in the work, not the job description. Our data shows AI tools following the same pattern in 2026. AI tools appear in 15% of April postings, including roles with nothing to do with AI on paper. Companies are not yet gatekeeping on AI fluency. The filter will probably form. It hasn’t yet.

The demand-side lag.

Cursor, GitHub Copilot, and ChatGPT have all hit nine-figure ARR. None appear as required skills in more than a handful of jobs. Most published commentary on AI hiring conflates enterprise revenue with JD requirements. The gap between the two is large and worth measuring on its own.

The hidden tax is gentle.

Designers and product marketers are already expected to use AI tools. They are not yet being filtered on it. The strategic question shifts from “which AI tools should I learn” to “how broadly AI-comfortable do I need to be to keep up.” The answer is broader than the discourse suggests. The bar is lower.

What this means.

For job seekers

Don’t over-optimize for any one tool.

The data does not support spending six months becoming a Cursor expert to be hireable. No single AI tool appears in enough required-skills sections to function as a hiring filter. What does help is being broadly AI-comfortable. If you can credibly demonstrate you use ChatGPT, Claude, or a coding assistant fluently, you clear the implicit bar across the entire 15% of postings that mention AI tools at all. Specialize after you have offers, not before.

For hiring teams

You’re already screening on AI tacitly.

The data shows AI tool mentions in 26% of designer roles, 25% of product marketing, and 19% of customer success. If you’re a hiring manager in any of these functions, your JDs already carry AI assumptions. The decision to make explicit is whether you screen for AI fluency in the interview process now, or wait for the rest of the market to catch up. Most teams are doing it implicitly through “feels like the right candidate.” Making it explicit shrinks bias and improves consistency.

For AI tool vendors

Your enterprise revenue isn’t in the JD yet.

For every named AI tool with reported nine-figure ARR, the count of jobs requiring it by name is in the single digits. There is a real demand-side gap between adoption inside companies and required-status in the JDs they publish. This is partly a methodology issue (companies require “LLM experience” rather than “Cursor”) and partly a market-maturity issue. The vendor that pushes its name into “required skills” columns first wins something durable: a hiring-filter moat that compounds.

The mention-vs-require gap.

Top 15 named AI tools by total mentions across 37,058 April 2026 job postings, with each mention classified as required, preferred, or mentioned-only. The lopsided ratio across nearly every tool is the central finding: tools are named in JDs without being required.

Agentic / AI agents [T2]: 3,032 (8.2%)
Cursor [T1]: 487 (1.3%)
RAG [T2]: 486 (1.3%)
OpenAI API [T2]: 482 (1.3%)
Prompt engineering [T2]: 461 (1.2%)
Anthropic API [T2]: 418 (1.1%)
Fine-tuning [T2]: 349 (0.9%)
Claude (Anthropic) [T1]: 343 (0.9%)
Gemini (model family) [T3]: 305 (0.8%)
Embeddings [T2]: 302 (0.8%)
MCP (Model Context Protocol) [T2]: 284 (0.8%)
Copilot (unspecified) [T1]: 267 (0.7%)
LangChain [T2]: 234 (0.6%)
ChatGPT [T1]: 227 (0.6%)
GitHub Copilot [T1]: 213 (0.6%)

Where AI tools have spread.

AI-tool mention rate by job role, for buckets with at least 200 deduplicated April 2026 postings (control buckets excluded). The range, from 84% for AI / ML engineer at the top to low single digits at the bottom, shows the spread is uneven but meaningful nearly everywhere.

AI / ML Engineer: 84% (n = 568)
Data Scientist: 41% (n = 695)
Sales Engineer: 31% (n = 845)
Backend Engineer: 29% (n = 228)
Designer: 26% (n = 315)
Product Marketing: 25% (n = 220)
Product Manager: 24% (n = 952)
Software Engineer (general): 24% (n = 4,931)
Data Engineer: 21% (n = 306)
Customer Success Manager: 19% (n = 401)
Security Engineer: 18% (n = 351)
DevOps / SRE: 17% (n = 656)
Technical Program Manager: 17% (n = 297)
BizOps / Strategy: 13% (n = 215)
Brand Marketing: 13% (n = 420)
Non-software Engineer: 11% (n = 4,085)
Data Analyst: 11% (n = 530)
Customer Support: 11% (n = 449)
Finance / Accounting: 8.8% (n = 635)
QA / Test Engineer: 8.5% (n = 473)
Project / Program Manager: 7.5% (n = 816)
Sales Manager: 7.3% (n = 668)
Operations (general): 7.1% (n = 1,629)
Account Manager: 6.1% (n = 574)
Account Executive: 6.1% (n = 541)
Supply Chain: 2.8% (n = 423)
Hardware Engineer: 0.6% (n = 321)
Aerospace / Defense Engineer: 0.0% (n = 220)

The soft tax on non-engineering work.

Non-engineering roles with at least 150 postings, ranked by AI-tool mention rate. None of these job titles include “AI”. None require AI fluency at meaningful rates. But a substantial share of them name AI tools in the JD, embedding an expectation that shows up in neither the role title nor the interview rubric. This is the part of the data that contradicts the “AI is for engineers” framing most strongly.

Designer: 26% (n = 315)
Product Marketing: 25% (n = 220)
Product Manager: 24% (n = 952)
Customer Success Manager: 19% (n = 401)
Technical Program Manager: 17% (n = 297)
Sales Operations: 14% (n = 165)
BizOps / Strategy: 13% (n = 215)
Brand Marketing: 13% (n = 420)
Customer Support: 11% (n = 449)
Legal Counsel: 9.0% (n = 155)
Finance / Accounting: 8.8% (n = 635)
Project / Program Manager: 7.5% (n = 816)
Sales Manager: 7.3% (n = 668)
Operations (general): 7.1% (n = 1,629)
Account Manager: 6.1% (n = 574)
Account Executive: 6.1% (n = 541)
Supply Chain: 2.8% (n = 423)

Tools closest to becoming filters.

Among AI tools with at least 50 total mentions, ranked by what share of those mentions appear in required-skills sections rather than tech-stack or role-overview text. Even the leaders sit in the low single digits. The hiring filter is not yet here, but if it comes, these are the tools most likely to lead it.

Prompt engineering [T2]: 461 mentions, 4.3% required
RAG [T2]: 486 mentions, 2.7% required
LangChain [T2]: 234 mentions, 2.6% required
GitHub Copilot [T1]: 213 mentions, 2.3% required
LangGraph [T2]: 132 mentions, 2.3% required
Copilot (unspecified) [T1]: 267 mentions, 1.9% required
ChatGPT [T1]: 227 mentions, 1.8% required
Cursor [T1]: 487 mentions, 1.2% required
LlamaIndex [T2]: 82 mentions, 1.2% required
Fine-tuning [T2]: 349 mentions, 0.9% required
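The required-share ranking is a straightforward aggregation. A minimal Python sketch, assuming the mention data has been flattened into (tool, placement) pairs; the input shape, names, and `required_share` helper are illustrative, not Four-Leaf's pipeline:

```python
from collections import Counter

def required_share(mentions, min_mentions=50):
    """Rank tools by the share of their mentions classified as
    'required', keeping only tools with at least min_mentions."""
    totals = Counter(tool for tool, _ in mentions)
    required = Counter(tool for tool, p in mentions if p == "required")
    rows = [
        (tool, totals[tool], required[tool] / totals[tool])
        for tool in totals
        if totals[tool] >= min_mentions
    ]
    # Highest required share first
    return sorted(rows, key=lambda r: r[2], reverse=True)
```

The 50-mention floor matters: with a handful of mentions, one required-skills hit would swing a tool's share by double digits.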

What we can’t claim.

This is a single-month snapshot. The hiring filter could form quickly, and a Q4 follow-up may show very different numbers. If GitHub announces a Copilot certification and 50,000 Microsoft customers begin requiring it, the required-skills column changes overnight. We commit to one month of clean data and don’t make growth-rate claims.

JDs do not capture tacit requirements. Companies routinely screen on AI fluency through take-home tests, live coding, and interview prompts that never make it into the published JD. We can only measure what was published. The required-skill rate is therefore a lower bound on actual gatekeeping, possibly a generous one.

The dataset skews English and US. About 70% of postings come from English-speaking markets, and 28% of the deduplicated rows come from the top 10 employers (we cap each employer at 5% before publishing headlines, but the raw sample still leans toward large US employers). The findings are most reliable for the US tech-and-tech-adjacent labor market and weakest for non-US roles.

Roughly a third of titles in our deduplicated dataset don’t fit any of our role buckets. Most are healthcare, finance, and specialty roles outside the research target, but a small share are software roles with cryptic titles we couldn’t classify. The leaderboard above excludes them.

Sample composition.

Raw English postings: 48,053
After dedup + length filter: 37,920 (79% of raw)
After 5% per-employer cap: 37,058 (headline analysis sample)
Tools in taxonomy: 75 (across 4 tiers)

Top employers (deduplicated, pre-cap):

Amazon: 2,758 (7.3%)
Anduril Industries: 1,331 (3.5%)
Citi: 1,320 (3.5%)
SpaceX: 1,021 (2.7%)
Nvidia: 885 (2.3%)
AbbVie: 721 (1.9%)
Capital One: 714 (1.9%)
GE Vernova: 686 (1.8%)
Stryker: 684 (1.8%)
Medtronic: 641 (1.7%)

How this was built.

Four-Leaf scrapes job postings from public ATS feeds (Greenhouse, Workday, Ashby, Lever, SmartRecruiters, Eightfold, Amazon ATS) for ~3,000 employers. We took the 2026-04-01 to 2026-05-04 window, kept English postings with at least 500-character descriptions, deduplicated on (company, title), and capped each employer at 5% of the dataset.
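The filter, dedup, and cap steps can be sketched in a few lines. This is an illustrative reconstruction of the described pipeline, not Four-Leaf's production code; the posting dicts, field names, and `dedupe_and_cap` helper are assumptions:

```python
from collections import defaultdict

def dedupe_and_cap(postings, cap=0.05):
    """Keep postings with >= 500-char descriptions, dedupe on
    (company, title), then cap any one employer at `cap` of the
    deduped total, mirroring the steps described in the text."""
    seen, deduped = set(), []
    for p in postings:
        if len(p["description"]) < 500:
            continue  # length filter
        key = (p["company"], p["title"])
        if key in seen:
            continue  # duplicate (company, title)
        seen.add(key)
        deduped.append(p)

    max_per_employer = int(len(deduped) * cap)
    counts, capped = defaultdict(int), []
    for p in deduped:
        if counts[p["company"]] < max_per_employer:
            counts[p["company"]] += 1
            capped.append(p)
    return capped
```

With 37,920 deduped rows and a 5% cap, no employer can contribute more than ~1,896 postings to the headline sample, which is why Amazon drops from 2,758 rows pre-cap.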

For each remaining job, we ran a tiered taxonomy of 75 named AI tools, frameworks, and foundation models against the structured parsed_jd jsonb. Disambiguation rules suppress false positives (the surname Claude, Microsoft 365 Copilot vs. GitHub Copilot, the unrelated “Bedrock” products).
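A sketch of how such disambiguation rules might look as Python regexes. The patterns below are illustrative stand-ins for three of the 75 taxonomy entries, not the published regex:

```python
import re

# Illustrative disambiguation rules; the published taxonomy is larger.
PATTERNS = {
    "GitHub Copilot": re.compile(r"\bGitHub\s+Copilot\b", re.I),
    # Bare "Copilot" only when not qualified as GitHub / Microsoft / 365.
    "Copilot (unspecified)": re.compile(
        r"(?<!GitHub\s)(?<!Microsoft\s)(?<!365\s)\bCopilot\b", re.I
    ),
    # "Claude" only near an AI-context word, to skip the surname.
    "Claude (Anthropic)": re.compile(
        r"\bClaude\b(?=.{0,40}\b(AI|LLM|model|Anthropic)\b)", re.I | re.S
    ),
}

def detect_tools(text):
    """Return the taxonomy names whose pattern fires on the JD text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]
```

Note the negative lookbehinds keep "Microsoft 365 Copilot" out of the GitHub-adjacent buckets, and the context window after "Claude" is what suppresses surname false positives.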

For each (job, tool) pair, the tool is classified as required (named in required_skills or qualifications text), preferred (in nice_to_have_skills or near a preferred-language phrase), or mentioned (only in key_technologies, role overview, or responsibilities). See the methodology page for the full taxonomy and regex.
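The three-way classification amounts to a precedence check over the parsed JD fields. A minimal sketch using the field names given above (`role_overview` is an assumed key for the role-overview prose, and the preferred-language-phrase heuristic is omitted for brevity):

```python
import re

def classify_placement(parsed_jd, tool_pattern):
    """Classify a tool hit as required > preferred > mentioned,
    scanning the JD fields in precedence order."""
    def hit(*fields):
        return any(
            tool_pattern.search(parsed_jd.get(f) or "") for f in fields
        )

    if hit("required_skills", "qualifications"):
        return "required"
    if hit("nice_to_have_skills"):
        return "preferred"
    if hit("key_technologies", "role_overview", "responsibilities"):
        return "mentioned"
    return None  # tool not present in this JD
```

The precedence order means a tool named in both required skills and the tech stack counts once, as required, which keeps the 1.28% required-share figure conservative rather than double-counted.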

Citation

The dataset is licensed under CC BY 4.0. Suggested citation:

Four-Leaf. “The 2026 AI Stack Index, Q2.” May 4, 2026. https://four-leaf.ai/research/ai-stack-index-2026-q2

Try it free

Be broadly AI-comfortable. Not Cursor-certified.

Four-Leaf helps you pass the implicit AI bar without over-specializing. Tailored resumes, voice-enabled mock interviews, and a job search that reads what the JD actually asks for.

Start your free trial

3-day trial. No credit card required.