By the numbers.
AI is the new Excel.
The Excel pattern.
Excel became universal across white-collar work in the 1990s without a single “Excel certified” hiring filter. Companies didn’t require it. They assumed it. The expectation was embedded in the work, not the job description. Our data shows AI tools following the same pattern in 2026: they appear in 15% of April postings, including in roles with nothing to do with AI on paper. Companies are not yet gatekeeping on AI fluency. The filter will probably form. It hasn’t yet.
The demand-side lag.
Cursor, GitHub Copilot, and ChatGPT have all hit nine-figure ARR. None appear as required skills in more than a handful of jobs. Most published commentary on AI hiring conflates enterprise revenue with JD requirements. The gap between the two is large and worth measuring on its own.
The hidden tax is gentle.
Designers and product marketers are already expected to use AI tools, but they are not yet being filtered on that fluency. The strategic question shifts from “which AI tools should I learn” to “how broadly AI-comfortable do I need to be to keep up.” The expectation is broader than the discourse suggests. The bar is lower.
What this means.
Don’t over-optimize for any one tool.
The data does not support spending six months becoming a Cursor expert to be hireable. No single AI tool appears in enough required-skills sections to function as a hiring filter. What does help is being broadly AI-comfortable. If you can credibly demonstrate you use ChatGPT, Claude, or a coding assistant fluently, you clear the implicit bar across the entire 15% of postings that mention AI tools at all. Specialize after you have offers, not before.
You’re already screening on AI tacitly.
The data shows AI tool mentions in 26% of designer roles, 25% of product marketing roles, and 19% of customer success roles. If you’re a hiring manager in any of these functions, your JDs already carry AI assumptions. The decision is whether to screen for AI fluency explicitly in the interview process now, or wait for the rest of the market to catch up. Most teams are doing it implicitly through “feels like the right candidate.” Making it explicit reduces bias and improves consistency.
Your enterprise revenue isn’t in the JD yet.
For every named AI tool with reported nine-figure ARR, the count of jobs requiring it by name is in the single digits. There is a real demand-side gap between adoption inside companies and required status in the JDs those companies publish. This is partly a methodology issue (companies require “LLM experience” rather than “Cursor”) and partly a market-maturity issue. The vendor that pushes its name into “required skills” columns first wins something durable: a hiring-filter moat that compounds.
The mention-vs-require gap.
Top 15 named AI tools by total mentions across 37,058 April 2026 job postings. Bar segments are split into required (dark), preferred (medium), and mentioned-only (light). The lopsided ratio across nearly every tool is the central finding: tools are named in JDs without being required.
Where AI tools have spread.
AI-tool mention rate by job role, for buckets with at least 200 deduplicated April 2026 postings (control buckets excluded). The 16x range from AI / ML engineer at the top to operations at the bottom shows the spread is uneven but meaningful everywhere.
The soft tax on non-engineering work.
Non-engineering roles with at least 150 postings, ranked by AI-tool mention rate. None of these job titles includes “AI”. None requires AI fluency at meaningful rates. But a meaningful share of them name AI tools in the JD, embedding an expectation that shows up in neither the role title nor the interview rubric. This is the part of the data that most strongly contradicts the “AI is for engineers” framing.
Tools closest to becoming filters.
Among AI tools with at least 50 total mentions, ranked by the share of those mentions that appears in required-skills sections rather than tech-stack or role-overview text (a computation sketch follows the table). Even the leaders sit in the low single digits. The hiring filter is not here yet, but if it comes, these are the tools most likely to lead it.
| Tool | Tier | Total mentions | Required share |
|---|---|---|---|
| Prompt engineering | T2 | 461 | 4.3% |
| RAG | T2 | 486 | 2.7% |
| LangChain | T2 | 234 | 2.6% |
| GitHub Copilot | T1 | 213 | 2.3% |
| LangGraph | T2 | 132 | 2.3% |
| Copilot (unspecified) | T1 | 267 | 1.9% |
| ChatGPT | T1 | 227 | 1.8% |
| Cursor | T1 | 487 | 1.2% |
| LlamaIndex | T2 | 82 | 1.2% |
| Fine-tuning | T2 | 349 | 0.9% |
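The required-share column is simply required mentions divided by total mentions, per tool. A minimal sketch of that computation over per-(job, tool) records, assuming an illustrative record shape rather than our production schema:

```python
from collections import Counter

# One record per (job, tool) pair, carrying the section classification
# assigned during extraction. These rows are illustrative placeholders,
# not real dataset entries.
mentions = [
    {"tool": "Cursor", "status": "mentioned"},
    {"tool": "Cursor", "status": "required"},
    {"tool": "RAG", "status": "preferred"},
]

MIN_MENTIONS = 50  # the floor used for the table above

totals = Counter(m["tool"] for m in mentions)
required = Counter(m["tool"] for m in mentions if m["status"] == "required")

for tool, total in totals.most_common():
    # With real data; the toy rows above won't clear the floor.
    if total >= MIN_MENTIONS:
        print(f"{tool}\t{total}\t{required[tool] / total:.1%}")
```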
What we can’t claim.
This is a single-month snapshot. The hiring filter could form quickly, and a Q4 follow-up may show very different numbers. If GitHub announces a Copilot certification and 50,000 Microsoft customers begin requiring it, the required-skills column changes overnight. We commit to one month of clean data and don’t make growth-rate claims.
JDs do not capture tacit requirements. Companies routinely screen on AI fluency through take-home tests, live coding, and interview prompts that never make it into the published JD. We can only measure what was published. The required-skill rate is therefore a lower bound on actual gatekeeping, possibly a generous one.
The dataset skews English and US. About 70% of postings come from English-speaking markets, and 28% of the deduplicated rows come from the top 10 employers (we cap each employer at 5% before publishing headlines, but the raw sample still leans toward large US employers). The findings are most reliable for the US tech-and-tech-adjacent labor market and weakest for non-US roles.
Roughly a third of titles in our deduplicated dataset don’t fit any of our role buckets. Most are healthcare, finance, and specialty roles outside the research target, but a small share are software roles with cryptic titles we couldn’t classify. The leaderboard above excludes them.
Sample composition.
How this was built.
Four-Leaf scrapes job postings from public ATS feeds (Greenhouse, Workday, Ashby, Lever, SmartRecruiters, Eightfold, Amazon ATS) for ~3,000 employers. We took the 2026-04-01 to 2026-05-04 window, kept English postings with at least 500-character descriptions, deduplicated on (company, title), and capped each employer at 5% of the dataset.
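A condensed sketch of the filter, dedup, and cap steps, assuming a pandas frame with illustrative column names (the real pipeline and schema differ):

```python
import pandas as pd

# Hypothetical input file and column names, for illustration only.
jobs = pd.read_parquet("postings_2026_04.parquet")

# Window, language, and minimum-description-length filters.
jobs = jobs[
    (jobs["posted_at"] >= "2026-04-01")
    & (jobs["posted_at"] <= "2026-05-04")
    & (jobs["language"] == "en")
    & (jobs["description"].str.len() >= 500)
]

# Deduplicate on (company, title): one row per title per employer.
jobs = jobs.drop_duplicates(subset=["company", "title"])

# Cap each employer at 5% of the deduplicated sample. Single-pass
# approximation: shuffle, then keep at most `cap` rows per employer.
cap = int(len(jobs) * 0.05)
jobs = jobs.sample(frac=1, random_state=0).groupby("company").head(cap)
```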
For each remaining job, we ran a tiered taxonomy of 75 named AI tools, frameworks, and foundation models against the structured parsed_jd jsonb. Disambiguation rules suppress false positives (the surname Claude, Microsoft 365 Copilot vs. GitHub Copilot, the unrelated “Bedrock” products).
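For flavor, here is what a disambiguation rule can look like. These patterns are simplified illustrations, not the published regexes (those are on the methodology page), and they apply a document-level veto where the production rules presumably work per mention:

```python
import re

# Each tool gets a match pattern plus suppression patterns that veto
# likely false positives. Illustrative rules only.
RULES = {
    "Claude": {
        "match": re.compile(r"\bClaude\b"),
        # Suppress the surname usage: "Claude" preceded by a capitalized
        # first name is likely a person, not the model.
        "suppress": [re.compile(r"\b[A-Z][a-z]+ Claude\b")],
    },
    "GitHub Copilot": {
        "match": re.compile(r"\b(GitHub\s+)?Copilot\b"),
        # Microsoft 365 Copilot is a different product; veto it.
        "suppress": [re.compile(r"\bMicrosoft\s+365\s+Copilot\b", re.I)],
    },
}

def detect(text: str) -> list[str]:
    """Return tool names matched in `text` after suppression rules."""
    hits = []
    for tool, rule in RULES.items():
        if rule["match"].search(text) and not any(
            s.search(text) for s in rule["suppress"]
        ):
            hits.append(tool)
    return hits
```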
For each (job, tool) pair, the tool is classified as required (named in required_skills or qualifications text), preferred (in nice_to_have_skills or near a preferred-language phrase), or mentioned (only in key_technologies, role overview, or responsibilities). See the methodology page for the full taxonomy and regex.
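A sketch of that per-(job, tool) classification, assuming the parsed_jd sections named above are plain text fields; the production version also checks proximity to preferred-language phrases, which this omits:

```python
import re

def classify(parsed_jd: dict, tool_pattern: re.Pattern) -> str | None:
    """Classify one tool's status within one parsed job description."""
    def hit(*sections: str) -> bool:
        # Missing sections fall back to empty strings.
        return any(
            tool_pattern.search(parsed_jd.get(s) or "") for s in sections
        )

    if hit("required_skills", "qualifications"):
        return "required"
    if hit("nice_to_have_skills"):
        return "preferred"
    if hit("key_technologies", "role_overview", "responsibilities"):
        return "mentioned"
    return None  # tool not named in this posting
```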
Citation
The dataset is licensed under CC BY 4.0. Suggested citation:
Four-Leaf. “The 2026 AI Stack Index, Q2.” May 4, 2026. https://four-leaf.ai/research/ai-stack-index-2026-q2