
Patent Representation Index

We built a composite score that ranks 91 tech companies on how well they translate women's technical work into patent inventorship. The score combines five sub-indices into a single 0-100 number, normalized across the dataset and weighted to reward both workforce diversity and patent inventor representation.

This is version 2 of an index we first published in April 2026. The first version measured the gap between the share of women in a company's STEM workforce and the share of women among its patent inventors, but that metric had a structural flaw: a small gap could mean a company was doing well, or it could mean the company was male-heavy across the board with little distance between two low numbers. The composite score fixes that. A company can no longer rank well simply because both its workforce and its inventor pools are male-heavy.

Patent data comes from Google BigQuery (patents-public-data.patents.publications), covering all US granted patents. Workforce data comes primarily from government-mandated EEO-1 filings (OFCCP FOIA release, 2016-2020) supplemented by company diversity reports.

Published April 2026, version 2. This is a patent-side representation index, not a complete innovation census or a claim about any individual company's culture. See methodology for the formula, sub-indices, and limitations.

The Big Picture

91 · Companies scored on the composite (workforce-matched subset)
79.4 · Top score: Starbucks (45.7% women in STEM, 16.8% women inventors)
26 · Median composite score (most companies cluster in the lower half)
9.4 · Bottom score: Lumentum (27.7% women in STEM, 1.3% women inventors)

Only companies with both patent and workforce data are eligible for the composite. The 91-company subset is drawn from a larger 169-company patent dataset by joining with EEO-1 workforce filings.

The Composite Score Leaderboard

The top 10 companies in the index, ranked by composite score. Each score combines five sub-indices: women in STEM workforce, women inventors as a share of all inventors, the ratio between them, the trend over the last 24 months (sample-size weighted), and inventor concentration. Companies need 100 or more total US patents to appear in the leaderboard. All 88 leaderboard-eligible companies are in the full ranked table below.

1. Starbucks · 79.4
Women in STEM: 45.7% · Women inventors: 16.8% · Patents: 235
2. Etsy · 66.2
Women in STEM: 55.0% · Women inventors: 13.0% · Patents: 113
3. Procter & Gamble · 62.5
Women in STEM: 41.7% · Women inventors: 12.5% · Patents: 19,095
4. Airbnb · 58.9
Women in STEM: 27.5% · Women inventors: 10.1% · Patents: 206
5. Johnson & Johnson · 51.0
Women in STEM: 47.4% · Women inventors: 9.4% · Patents: 4,525
6. Tableau · 49.4
Women in STEM: 30.2% · Women inventors: 8.8% · Patents: 292
7. Pinterest · 46.2
Women in STEM: 43.5% · Women inventors: 8.0% · Patents: 131
8. Intuit · 44.5
Women in STEM: 46.0% · Women inventors: 8.5% · Patents: 5,051
9. Corning · 42.7
Women in STEM: 29.8% · Women inventors: 7.4% · Patents: 22,487
10. Cray · 41.6
Women in STEM: 14.3% · Women inventors: 5.5% · Patents: 423
The top of this index is not where many would expect. Big tech (Apple, Microsoft, Google, Amazon, Meta) sits solidly mid-pack in our composite. In this dataset, many of the highest composite scores come from consumer-facing tech (Starbucks, Etsy, Airbnb, Pinterest), CPG with strong R&D (Procter & Gamble, Johnson & Johnson, Corning), and diversified software (Intuit, Tableau). The composite is measuring patent inventor representation, not workplace culture; see the methodology section for what the score does and does not capture.

How to read a high score in context

The index is intentionally two-dimensional. It separately rewards having women in technical roles (Foundation) and crediting them on patents (Output and Ratio). Some companies will score well on one and poorly on the other. That tension is not noise. It is exactly why we publish the underlying numbers next to the composite score.

Cray is the cleanest illustration. Cray sits at #10 on the leaderboard despite having the lowest share of women in its STEM workforce of any company in the 91-company dataset (14.3%). The composite puts them in the top 10 because their representation ratio is the highest in the index: roughly 39% of Cray's women STEM workers are credited as inventors. Cray has very few women in engineering, but the few they have are being recognized.

This is not a workplace-culture ranking. A high composite should not be read as "this is the best place for a woman engineer to work." Foundation matters too, and the full ranked table shows both Foundation and Output next to each composite score so readers can hold both numbers in mind at once.

The Bottom 10

The lowest composite scores in the index. Almost entirely semiconductor, networking infrastructure, storage, and legacy industrial companies. All have women inventor rates between 1.3% and 2.5%.

#79 VMware · 14.1
Women in STEM: 28.3% · Women inventors: 2.0% · Patents: 5,547
#80 Tesla · 13.7
Women in STEM: 17.4% · Women inventors: 1.9% · Patents: 814
#81 QUALCOMM · 13.1
Women in STEM: 20.3% · Women inventors: 2.5% · Patents: 42,096
#82 General Electric · 13.0
Women in STEM: 24.7% · Women inventors: 1.7% · Patents: 1,097
#83 Sandisk · 12.7
Women in STEM: 23.3% · Women inventors: 1.9% · Patents: 13,283
#84 Equinix · 12.7
Women in STEM: 24.2% · Women inventors: 1.7% · Patents: 168
#85 Synopsys · 12.6
Women in STEM: 21.5% · Women inventors: 1.8% · Patents: 2,484
#86 Akamai · 11.2
Women in STEM: 25.7% · Women inventors: 1.6% · Patents: 621
#87 Marvell Technology · 9.4
Women in STEM: 20.8% · Women inventors: 1.4% · Patents: 9,010
#88 Lumentum · 9.4
Women in STEM: 27.7% · Women inventors: 1.3% · Patents: 446
An industry pattern in the bottom 10. The bottom 10 is concentrated in semiconductors, networking and infrastructure, storage, and legacy industrial firms (Synopsys, Marvell, Sandisk, Lumentum, Akamai, Equinix, General Electric, and others). These categories share two patterns in our data: a smaller share of women in STEM workforce than consumer-facing tech, and significantly lower rates of crediting the women they do have as inventors. The composite captures both patterns at once. We are not making a population-level claim about every semiconductor or networking company, only describing what the bottom 10 of this 88-company leaderboard looks like.

Why we built v2: the Tesla story

Version 1 of this index used a single metric: the gap between the share of women in a company's STEM workforce and the share of women among its patent inventors. By that metric, Tesla had one of the smaller gaps in the dataset at 15.5 points (ranked 7th-smallest of 91), well below the median gap of 25 points. On v1, Tesla looked like a relative leader.

But Tesla's STEM workforce holds barely more than half the share of women that IBM's does (17.4% vs 32.2%). The "small gap" was small because Tesla had so few women to begin with. Of the distinct inventors named on Tesla's patents, only 1.9 percent are women, which is in the bottom 12 of the women inventor rates we measured. The gap metric was treating Tesla as a leader because both its workforce and inventor pools were male-heavy, not because it was doing anything well.

The composite score fixes this. Tesla now ranks #80 of 88 on the leaderboard. IBM, which appeared "worse" on the v1 gap metric (gap of 26.2 points), is now the highest-scoring big tech company in the index at #20 of 88, because IBM has both the highest women STEM workforce share in big tech (32.2%) and the highest women inventor share (6.0%). The composite rewards balance, not the absence of women.

How the Score Works

The composite combines five sub-indices, each normalized to a 0-100 scale across the 91-company dataset, then weighted and summed. A score of 100 would mean a company is the best in the dataset on every dimension. A score of 0 would mean worst on every dimension. Real companies fall between roughly 9 and 79.

Foundation (25%)

Women in the STEM workforce, as a percentage. From EEO-1 Categories 2 (Professionals) and 3 (Technicians). The baseline. You cannot have inventor representation without workforce representation.

Output (35%)

Women inventors as a percentage of all distinct inventors at the company. The most heavily weighted sub-index because it is the actual outcome being measured: whether women in the engineering workforce are being credited as inventors when patents are filed.

Representation Ratio (25%)

The fraction of women in a company's STEM workforce who appear as patent inventors. Computed as Output divided by Foundation. Captures structural fairness independent of workforce size. Caveat: the ratio can favor companies that credit a high fraction of a small women-in-STEM base, which is why the score should always be read alongside Foundation and Output values.

Trend (10%)

Relative change in the women inventor rate over the last 24 months, weighted by the size of the recent patent sample. We use relative change (not absolute) so improvement from a low baseline counts as real progress: 2% → 4% is a 100% relative gain, not a 2-point one. Sample-size weighting pulls noisy small samples toward zero so a few patents cannot dominate.

Distribution (5%)

The inverse of overall inventor concentration. Specifically, 100 minus the share of patents accounted for by the company's top 5% most prolific inventors of any gender. A less concentrated inventor pool gives more engineers a path into the patent system, which may create more room for women to appear. This is a weak proxy signal, not a direct gender measure, and the 5% weight reflects that it is a tiebreaker rather than a primary metric.

The formula

Composite = Foundation × 0.25 + Output × 0.35 + Ratio × 0.25 + Trend × 0.10 + Distribution × 0.05

Each sub-index is min-max normalized 0-100 across the 91-company dataset before weighting. Note: rankings should be read as relative within this dataset, not as absolute scores comparable across future versions without recalibration.
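
To make the normalization and weighting concrete, here is a minimal sketch in Python. The function and variable names are illustrative, not the production script's, and the toy input stands in for the full 91-company dataset:

```python
WEIGHTS = {"foundation": 0.25, "output": 0.35, "ratio": 0.25,
           "trend": 0.10, "distribution": 0.05}

def min_max_normalize(values):
    """Scale raw sub-index values to 0-100 across the dataset."""
    lo, hi = min(values), max(values)
    return [100.0 * (v - lo) / (hi - lo) for v in values]

def composite(normalized_row):
    """Weighted sum of the five normalized sub-indices for one company."""
    return sum(WEIGHTS[k] * normalized_row[k] for k in WEIGHTS)

# Best-in-dataset on every dimension scores 100; worst on every
# dimension scores 0, matching the bounds described above.
best = {k: 100.0 for k in WEIGHTS}
print(round(composite(best), 6))  # 100.0
```

Because min-max normalization ties every score to the dataset's current best and worst values, the scores are relative to this dataset, which is the recalibration caveat above.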

How we handle small samples in the trend

The trend sub-index uses relative change rather than absolute change. Going from 2% women inventors to 4% counts as a 100% relative improvement, not a 2 percentage-point absolute one. We chose relative change deliberately so the index recognizes meaningful improvement from low baselines, where most companies in this dataset still sit.

But relative change has a known weakness: it explodes near zero. A company that filed only 3 patents in the last 24 months and happened to credit one woman would look like a 33% women inventor rate, swamping any real signal. To handle this we apply sample-size shrinkage toward zero, the same regularization technique used by baseball sabermetrics, IMDB Top 250 ratings, and small-sample polling. The formula:

adjusted_change = relative_change × (recent_patents / (recent_patents + 50))

This pulls noisy small samples toward zero (no change) while leaving large samples close to face value. A company with 5 recent patents has its trend signal trusted at 9 percent of face value. A company with 5,000 recent patents has its signal trusted at 99 percent. Big tech with thousands of recent patents shows their real trend. Small companies with noisy samples have that noise neutralized. We chose k = 50 based on the dataset's median recent-patent count of roughly 220, where it gives 81 percent confidence at the median.

Relative change is also capped at the bounds +2.0 (200% improvement) and -1.0 (100% decline) before shrinkage is applied. The cap exists precisely because relative change near zero is unstable, and prevents any single outlier from dominating the trend score even if shrinkage is incomplete.
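
Putting the cap and the shrinkage together, the trend adjustment can be sketched as follows (names are illustrative; k = 50 as described above):

```python
def trend_signal(old_rate, new_rate, recent_patents, k=50):
    """Capped relative change in the women inventor rate, shrunk
    toward zero for small recent-patent samples."""
    if old_rate == 0:
        return 0.0  # undefined baseline: treat as no signal
    relative_change = (new_rate - old_rate) / old_rate
    # Cap at +200% improvement / -100% decline before shrinkage.
    relative_change = max(-1.0, min(2.0, relative_change))
    # Sample-size shrinkage: small samples are pulled toward zero.
    return relative_change * (recent_patents / (recent_patents + k))

# 2% -> 4% is a +100% relative gain. With only 5 recent patents it is
# trusted at 5/55, about 9% of face value; with 5,000 patents, about 99%.
print(round(trend_signal(0.02, 0.04, 5), 3))     # 0.091
print(round(trend_signal(0.02, 0.04, 5000), 3))  # 0.99
```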

Robustness Check: Are the Rankings Stable?

The natural critique of any composite index is that the rankings might be artifacts of the specific weights chosen. To test that, we ran 4,000 sensitivity analyses across two Monte Carlo regimes plus six named alternative weight schemes. The findings give us confidence that the headline rankings are robust to plausible reweightings.

Test 1: Bounded perturbation around our weights

We sampled 2,000 weight vectors with each sub-index allowed to vary up to ±5 percentage points from its baseline. This asks: if our specific weights are slightly wrong, do the rankings hold?

9 of the 10 leaderboard companies remained in the top 10 in at least 80 percent of runs. 8 of the 10 bottom companies remained in the bottom 10 at the same threshold. Tesla stayed in the bottom 10 in 96 percent of runs. The only top 10 entry that moved meaningfully was Cray (in the top 10 in 60 percent of runs), reflecting that Cray's score depends heavily on the Representation Ratio sub-index — exactly what the Cray section above explains. We use 80 percent as a practical stability threshold here, not a formal statistical cutoff.
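
One way to draw such a bounded weight vector, sketched in Python with NumPy (an illustrative sketch, not the production code; note that renormalizing after the shift can nudge a weight slightly past the ±5-point band):

```python
import numpy as np

rng = np.random.default_rng(0)
# Baseline weights: Foundation, Output, Ratio, Trend, Distribution.
BASELINE = np.array([0.25, 0.35, 0.25, 0.10, 0.05])

def perturbed_weights(max_shift=0.05):
    """Draw one weight vector within ±5 points of the baseline,
    clipped to non-negative and renormalized to sum to 1."""
    w = BASELINE + rng.uniform(-max_shift, max_shift, size=5)
    w = np.clip(w, 0.0, None)
    return w / w.sum()
```

Scoring all 91 companies under 2,000 such draws, and counting how often each company keeps its top-10 or bottom-10 position, yields the stability percentages quoted above.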

Test 2: Pure random weights (Dirichlet)

We then sampled 2,000 weight vectors from a symmetric Dirichlet distribution, allowing any combination of weights summing to 1.0 with no constraints. This is a worst-case stress test, not an alternative design universe. It includes degenerate cases like 100 percent on a single sub-index, which is not what a thoughtful designer would actually publish. We treat it as a stress test rather than a competing methodology, but the result is meaningful: it answers how much of the leaderboard survives even under weights nobody would choose on purpose.

Under this much harsher regime, 5 of the 10 top companies remained in the top 10 in at least 80 percent of runs. Starbucks (98.6%), Etsy (99.6%), Airbnb (95.1%), Procter & Gamble (85.9%), and Pinterest (89.5%) were the most robust top 10 entries. Mid-pack companies moved more, which is expected: when companies sit close together on a composite, small weight changes naturally swap their relative positions.
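
The Dirichlet regime can be sketched the same way, assuming a `scores` matrix of normalized sub-index values (companies by five sub-indices); the toy matrix here stands in for the real 91-company data:

```python
import numpy as np

rng = np.random.default_rng(0)

def top10_stability(scores, n_draws=2000):
    """For each company, the fraction of random-weight draws in which
    it lands in the top 10 of the resulting composite.

    scores: (n_companies, 5) array of 0-100 normalized sub-index values.
    Weights come from a symmetric Dirichlet, so any combination summing
    to 1.0 is possible, including near-degenerate single-index weightings.
    """
    in_top10 = np.zeros(scores.shape[0])
    for _ in range(n_draws):
        w = rng.dirichlet(np.ones(5))  # random weights, sum to 1
        comp = scores @ w              # composite per company
        in_top10[np.argsort(comp)[-10:]] += 1
    return in_top10 / n_draws

# Toy sanity check: a company that is best on every sub-index stays in
# the top 10 of every draw, no matter the weights.
toy = rng.uniform(0, 80, size=(20, 5))
toy[0] = 100.0
```

The per-company percentages quoted in this section are exactly this kind of frequency statistic.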

Test 3: Named alternative weight schemes

Finally, we ran six specific alternative weighting schemes that a reviewer might propose, including equal weights, Output-dominant (60% on Output), Foundation-dominant (50% on Foundation), Foundation + Output only, composite minus Distribution, and Ratio-heavy. Here is how the headline anchor companies fared across all six:

Company | Baseline | Equal | Output-dom | Foundation-dom | F + O only | No Distribution | Ratio-heavy | Range
Starbucks | #1 | #1 | #1 | #1 | #1 | #1 | #1 | 1–1
Etsy | #2 | #2 | #2 | #2 | #2 | #2 | #4 | 2–4
Procter & Gamble | #3 | #4 | #3 | #4 | #3 | #3 | #3 | 3–4
IBM | #20 | #48 | #18 | #24 | #19 | #19 | #18 | 18–48
Microsoft | #36 | #42 | #34 | #38 | #37 | #36 | #35 | 34–42
Apple | #49 | #70 | #48 | #53 | #45 | #42 | #46 | 42–70
Amazon | #54 | #60 | #51 | #46 | #47 | #52 | #55 | 46–60
Tesla | #80 | #78 | #81 | #87 | #87 | #82 | #78 | 78–87
Lumentum | #88 | #86 | #87 | #78 | #84 | #88 | #87 | 78–88

Starbucks ranks #1 in all six alternative schemes. Etsy ranks #2 in five of six. Tesla and Lumentum stay in the bottom 10 of every scheme. The Range column makes the asymmetry visible: the top three move at most 0–2 positions, the bottom two move 9–10 positions, and the mid-pack big tech names move anywhere from 8 to 30 positions.

What the sensitivity analysis actually shows

Reading the three tests together, the most defensible interpretation is more specific than "the rankings are robust." It is:

  • The top of the index is fairly stable. Starbucks, Etsy, and Procter & Gamble survive almost any reasonable weighting.
  • The bottom of the index is fairly stable. Lumentum, Marvell, and Tesla stay near the bottom under almost any reasonable weighting.
  • Mid-pack positions are meaningfully more sensitive. Big tech (Apple, Microsoft, Amazon, Intel, Google) sits in a region where small changes to the weights can swap rankings.
  • Cray is notably less stable than the other top 10 names (in the top 10 in only 60 percent of bounded perturbation runs). That is honest about what the Cray section above explains: Cray is in the top 10 because of a single sub-index, not because of balanced strength across all five.
  • Tesla remains in the lower tier under almost anything reasonable. The v1-to-v2 correction holds.

What this analysis tests and does not test: The sensitivity tests above examine robustness to weight choices. They do not test robustness to other modeling choices, such as the min-max normalization scheme, the choice of which sub-indices to include, or the gender-from-first-name estimation method. A more complete robustness program would test all of those independently. We treat this analysis as a meaningful first robustness layer, not the final word.

The bottom line. The sensitivity analysis shows that our headline leaders and laggards are not artifacts of one arbitrary weighting choice, even though mid-pack ranks remain more fluid. A reviewer can argue with our specific weights, but the identity of who leads and who lags this index is itself a reproducible result of the public methodology we describe here. The full per-company sensitivity data is available alongside the index for anyone who wants to test alternative weightings themselves.

The Full Ranked Index: 91 Companies

All 91 workforce-matched companies, ranked by composite score. Companies marked "small portfolio" have fewer than 100 patents and are excluded from the curated leaderboard but appear here for transparency. Their scores should be interpreted with caution because percentage estimates from small samples have wide confidence intervals.

Rank Company Composite Women in STEM Women Inventors Representation Ratio Patents
1 Starbucks 79.4 45.7% 16.8% 37% 235
2 Etsy 66.2 55.0% 13.0% 24% 113
3 Procter & Gamble 62.5 41.7% 12.5% 30% 19,095
4 Airbnb 58.9 27.5% 10.1% 37% 206
Expedia (small portfolio) 54.3 27.1% 9.4% 35% 35
5 Johnson & Johnson 51.0 47.4% 9.4% 20% 4,525
6 Tableau 49.4 30.2% 8.8% 29% 292
7 Pinterest 46.2 43.5% 8.0% 18% 131
8 Intuit 44.5 46.0% 8.5% 18% 5,051
9 Corning 42.7 29.8% 7.4% 25% 22,487
10 Cray 41.6 14.3% 5.5% 38% 423
11 Palantir 40.5 32.3% 6.8% 21% 1,607
12 Nordstrom 40.4 72.1% 4.4% 6% 286
13 Zillow 39.7 41.5% 6.7% 16% 125
14 Lyft 38.2 34.4% 6.5% 19% 535
15 AT&T 37.9 29.6% 6.3% 21% 44,227
16 Xerox 36.5 32.2% 5.0% 16% 39,104
17 Netflix 35.4 43.5% 5.0% 11% 503
18 Salesforce 35.4 34.3% 5.8% 17% 4,513
19 Coinbase 34.9 31.6% 5.6% 18% 129
20 IBM 32.8 32.2% 6.0% 19% 142,040
21 Bloomberg 32.7 27.9% 5.5% 20% 165
22 Uber 32.2 32.3% 4.6% 14% 946
23 Weyerhaeuser 32.1 31.1% 5.3% 17% 1,291
24 Boeing 31.8 26.5% 5.0% 19% 23,940
25 Docusign 31.6 37.7% 5.1% 14% 168
26 3M 31.5 36.9% 4.9% 13% 11,037
27 Snap 31.2 32.1% 5.1% 16% 3,303
28 Conduent 30.6 49.8% 4.0% 8% 338
29 Meta (Facebook) 30.6 33.5% 4.8% 14% 10,174
30 Twilio 30.5 31.0% 5.1% 16% 333
31 Alphabet (Google) 30.2 27.8% 4.9% 18% 39,529
32 Rapid7 30.0 28.9% 4.6% 16% 295
33 Splunk 30.0 30.9% 4.8% 16% 1,963
34 Workday 29.7 43.7% 3.7% 8% 269
35 HP Inc. 29.4 30.8% 5.0% 16% 43,755
36 Microsoft 28.8 29.4% 4.8% 16% 56,572
37 eBay 28.6 31.9% 4.6% 14% 3,715
38 Raytheon 28.4 24.9% 4.3% 17% 13,975
39 Lockheed Martin 27.9 24.3% 4.5% 19% 6,193
40 Zoom 27.9 29.0% 4.4% 15% 590
41 Garmin 27.6 17.1% 3.7% 22% 872
42 Northrop Grumman 27.5 24.2% 4.0% 17% 3,582
43 PayPal 26.8 37.7% 4.4% 12% 2,656
44 Electronic Arts 26.6 24.9% 4.1% 16% 607
45 ServiceNow 25.8 36.1% 3.6% 10% 1,103
46 MicroStrategy 25.8 31.5% 4.0% 13% 354
47 Adobe 25.7 36.4% 4.0% 11% 6,857
48 KLA 25.3 18.8% 3.4% 18% 17,111
49 Apple 25.1 27.2% 4.2% 15% 40,741
50 Oracle 24.7 32.7% 3.9% 12% 13,294
51 Applied Materials 24.1 20.7% 3.9% 19% 14,202
52 TE Connectivity 23.9 29.3% 3.7% 13% 1,776
53 SAP 23.8 37.3% 3.0% 8% 9,465
54 Amazon 23.7 30.6% 3.8% 12% 23,216
55 Cloudflare 23.5 31.2% 3.6% 12% 317
56 Micron Technology 22.8 18.3% 3.3% 18% 40,930
T-Mobile (small portfolio) 22.3 43.0% 1.8% 4% 27
57 Honeywell 22.2 24.2% 3.3% 14% 27,018
58 Dell 22.2 30.0% 3.3% 11% 12,696
59 Intel 21.4 25.8% 3.6% 14% 82,391
60 Seagate 21.0 23.5% 3.6% 15% 8,847
61 Motorola Solutions 20.8 20.7% 3.0% 14% 2,201
62 NetApp 20.4 25.4% 2.8% 11% 3,291
63 Cisco 19.4 30.8% 2.8% 9% 22,698
64 Cadence Design Systems 19.1 19.3% 2.5% 13% 2,542
65 F5 Networks 19.1 23.6% 2.3% 10% 473
66 Keysight 19.0 31.0% 2.7% 9% 671
67 CrowdStrike 19.0 25.7% 2.2% 9% 150
68 AMD 19.0 18.3% 3.4% 19% 12,950
69 Broadcom 18.8 28.0% 2.6% 9% 11,447
70 Fortinet 18.6 18.3% 2.6% 14% 981
71 Snowflake 18.4 29.4% 2.6% 9% 996
72 Analog Devices 18.0 23.4% 2.7% 12% 4,777
73 Texas Instruments 17.4 20.1% 2.9% 14% 31,886
74 Caterpillar 17.1 26.3% 2.4% 9% 15,002
75 Cirrus Logic 17.0 16.8% 2.0% 12% 2,651
76 NVIDIA 16.2 18.3% 2.0% 11% 5,363
77 Palo Alto Networks 14.8 28.5% 1.7% 6% 701
78 Juniper Networks 14.8 24.9% 1.8% 7% 4,717
79 VMware 14.1 28.3% 2.0% 7% 5,547
80 Tesla 13.7 17.4% 1.9% 11% 814
81 QUALCOMM 13.1 20.3% 2.5% 12% 42,096
82 General Electric 13.0 24.7% 1.7% 7% 1,097
Rackspace (small portfolio) 13.0 21.3% 1.5% 7% 88
83 Sandisk 12.7 23.3% 1.9% 8% 13,283
84 Equinix 12.7 24.2% 1.7% 7% 168
85 Synopsys 12.6 21.5% 1.8% 8% 2,484
86 Akamai 11.2 25.7% 1.6% 6% 621
87 Marvell Technology 9.4 20.8% 1.4% 7% 9,010
88 Lumentum 9.4 27.7% 1.3% 5% 446

Composite scores are normalized 0-100 across the dataset.

Want the deeper data?

The full data explorer for all 169 companies includes 24-month trend movements, top improvers and decliners, CPC technology areas where women are filing, and the underlying year-over-year breakdown.

Explore the Full Data →

The National Trend

Women Inventor Rate from USPTO data, 2005-2024. Progress is real but slow. At the current rate of roughly 0.3 percentage points per year, parity would take more than 100 years.

[Chart: women inventor rate, 2005-2024. USPTO-reported values run through the 12.8% benchmark; the ~15% figure for 2024 is estimated.]

Methodology and Limitations

Patent Data

Google Patents Public Datasets on BigQuery (patents-public-data.patents.publications). Complete coverage of all US granted patents with disambiguated inventor and assignee names. 1,583,778 patents analyzed across 169 companies. Free tier: 1 TB/month.

Company Universe

200 companies were queried: top 100 tech by market cap plus major employers in Washington, Texas, California, and other regions. 169 returned meaningful patent data (more than 10 patents). 91 of those 169 also have workforce data, which is the subset scored on the composite.

Gender Estimation

Gender is estimated from inventor first names using the same methodology USPTO uses for its annual reports. Gender-neutral and non-Western names may be misclassified. Approximately 5-10 percent of inventor names are unclassifiable and excluded.

Workforce Data

Government-mandated EEO-1 filings released by OFCCP via FOIA in 2026, covering 2016-2020. STEM proxy is EEO-1 Categories 2 (Professionals) plus 3 (Technicians). Where EEO-1 was unavailable, company-published diversity reports were used and tagged Tier B in the source CSV.

Composite Score Choices

Weights of 25/35/25/10/5 reflect a deliberate emphasis on actual patent representation (Output 35%) and workforce foundation (Foundation 25%) as the load-bearing measures. Trend gets only 10% because it is noisier and easier to game. These weights are documented choices, not derived from data.

Key Limitations

This is a representation index, not a causal measure. We cannot tell from the data whether women are not contributing inventions, not being listed on patents they contributed to, not being put forward for patent review, or some combination. The composite measures the outcome, not the cause.

What changed in v2

Version 1 of the index used a single metric: the gap between women in STEM workforce share and women inventor share. We discovered during a public review that the gap metric had a structural flaw. A small gap could mean a company was doing well, or it could mean the company was male-heavy across the board with two low numbers sitting close together. Tesla ranked among the smallest gaps on v1 (7th-smallest of 91) despite having one of the lowest women inventor rates in the dataset (1.9%, in the bottom 12).

Version 2 replaces the single gap metric with the five-sub-index composite described above. Weights and methodology are documented. The composite score is on a 0-100 scale and no longer mechanically rewards companies for being male-heavy across both workforce and inventor pools.

This is honest, not perfect. Patent inventor gender is estimated from first names. Workforce data is from government-mandated EEO-1 filings (2016-2020), not current-year data. Composite weights are deliberate choices we are willing to defend, not derived from data. We show our work: the v1 to v2 reasoning, the methodology, and the underlying script and CSV are available on request.

Where does your company stand?

We can analyze your patent portfolio against your workforce data and show you exactly where the gap is. Which technology areas are unprotected. Which teams are filing and which are not.

The patent data is public. The insight is knowing where to look and what to do about it.

We also build tools that close the gap. Our scanner surfaces strategic concepts in your codebase so the engineers who would never self-nominate can still have their innovations recognized.

Talk to Us About Your Score

Or scan your own codebase to see what strategic concepts your engineers are building right now.

Explore the detailed time-series data for all 169 companies →

Results are strategic concepts, not legal conclusions. Review with a patent attorney.