Software Patents After Alice

58% of software patents get approved. 88.6% of Section 101 appeals fail. The difference is how you describe your invention.

In 2014, the Supreme Court decided Alice Corp. v. CLS Bank International and changed the test for software patent eligibility. A decade of Federal Circuit case law has clarified what survives. This guide covers what gets approved, how attorneys navigate prosecution, and what developers should document before filing.

Last updated: February 2026. This page is informational only and not legal advice. Consult a patent attorney for your specific situation.

The Test

The Alice decision created a two-step filter for patent eligibility under 35 U.S.C. Section 101.

Step 1: Is the claim directed to an abstract idea?

If you could describe your invention as "doing [familiar thing] but on a computer," it is probably directed to an abstract idea. Intermediated settlement, organizing data, filtering content, calculating a price. These are abstract ideas.

If NO → Patent eligible. Stop here.

Step 2: Does it add something more?

If the claim is directed to an abstract idea, does it include an "inventive concept" that transforms it? This means more than running the idea on a generic computer. The claim must show a specific technical improvement or an unconventional arrangement of components.

If YES → Patent eligible.

The core question: Does your invention improve the technology itself, or does it just use technology to implement a known idea?
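The two-step filter above can be sketched as a decision function. This is an illustrative toy only: the real analysis turns on nuanced legal judgment at each step, not on two booleans. The function name and inputs are hypothetical.

```python
def alice_eligible(directed_to_abstract_idea: bool,
                   has_inventive_concept: bool) -> bool:
    """Illustrative sketch of the Alice two-step filter.

    Step 1: if the claim is not directed to an abstract idea,
    it is eligible and the inquiry ends there.
    Step 2: an abstract-idea claim is eligible only if it adds
    an inventive concept (a specific technical improvement or an
    unconventional arrangement), not just a generic computer.
    """
    if not directed_to_abstract_idea:
        return True  # Step 1: not abstract, so eligible
    return has_inventive_concept  # Step 2

# Business method on a generic computer (the Alice claims themselves):
assert alice_eligible(True, False) is False
# Improvement to how the computer stores data (Enfish, resolved at Step 1):
assert alice_eligible(False, False) is True
```

The point of the sketch is the asymmetry: passing Step 1 ends the inquiry, while failing Step 1 forces you to win at Step 2.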

The Numbers

Software patents get through at lower rates than mechanical patents. But the approval rate is far from zero.

  • 58%: software patent allowance rate (multiple sources, 2024)
  • 74%: mechanical patent allowance rate (PatentBots, 2024)
  • 77%: AI patent office actions containing a Section 101 rejection (Juristat, 2024)
  • 88.6%: PTAB affirmance rate on Section 101 rejections (PTAB statistics, 2024)

The gap: Software faces a 16-point approval disadvantage versus mechanical. AI patents face 101 rejections in 77% of office actions. If you lose at the examiner level and appeal, you have an 11.4% chance of winning. The numbers favor getting it right the first time.

What Survives Alice

These Federal Circuit cases established categories of software patents that survive the Alice test.

  • Enfish v. Microsoft (2016), self-referential database: improved how the computer itself stored and retrieved data.
  • McRO v. Bandai (2016), automated lip-sync rules: replaced subjective human judgment with specific, rule-based automation.
  • BASCOM v. AT&T (2016), internet content filter: unconventional arrangement of known components at a specific network location.
  • Finjan v. Blue Coat (2018), behavior-based virus scanning: new technical approach, analyzing code behavior rather than matching signatures.
  • Core Wireless v. LG (2018), small-screen UI improvement: solved a specific technical problem (limited screen size) with a specific solution.
  • Berkheimer v. HP (2018), document processing: shifted the burden, requiring examiners to prove with evidence that claim elements are conventional.

The Pattern

  • Improves the computer itself, not just uses it (Enfish)
  • Replaces human judgment with a specific technical rule (McRO)
  • Unconventional arrangement of components producing new capability (BASCOM)

Component Count Matters

Claims combining components from different technical areas survive at different rates.

  • 1-2 components: ~40% survival. Too abstract; reads like "applying it on a computer."
  • 3-5 components: ~70% survival. Optimal: sufficient complexity, clear integration.
  • 6+ components: ~55% survival. Overly complex; harder to articulate the inventive concept.

What Fails Alice

  • Alice v. CLS Bank (2014), intermediated settlement: a business method on a generic computer, with no improvement to computer functionality.
  • Electric Power Group (2016), power grid data collection: collecting and analyzing information is abstract; no unconventional approach.
  • IV v. Symantec (2016), email filtering: organizing data into categories is a fundamental practice.
  • American Axle (2020), vibration attenuation: applied a law of nature without specifying how; too abstract.
  • Recentive Analytics v. Fox (2025), ML-based scheduling: applied conventional machine learning to a new data domain without improving the ML itself.

2025 Warning: AI/ML Patents

Recentive Analytics v. Fox Corp. (Federal Circuit, April 2025) is the first major ruling on ML patent eligibility. The court held that applying known machine learning to scheduling data was abstract. "We used AI to do X" is not patentable. "We designed a specific architecture that solves [technical problem] by [unconventional approach]" might be.

The Failure Pattern

  • Automates a known human process on a generic computer (Alice)
  • Applies a formula or algorithm without connecting it to a specific technical problem (American Axle)
  • Uses established technology on new data without improving the technology (Recentive Analytics)

How Attorneys Navigate It

Frame the Technical Problem

The single most important prosecution strategy. Frame the invention around a technical problem, not a business outcome.

Abstract problem (fails) → technical problem (survives):

  • "Faster checkout process" → "Database queries time out when the cart table exceeds 1M rows under concurrent load"
  • "Better recommendations" → "Cold-start latency exceeds 500ms because collaborative filtering requires a minimum interaction history"
  • "Improved scheduling" → "The O(n²) scheduling algorithm fails above 10K events because pairwise constraint checking exhausts memory"
  • "Smarter data organization" → "Cross-table joins require full table scans without the proposed index structure, consuming 40% of database capacity"

Layered Claim Strategy

Write claims at multiple levels of abstraction. Each level serves a different purpose.

Broadest (independent claim): the technical improvement without implementation details. Captures competitors who solve the same problem differently.

  "A method for reducing authentication latency in a distributed computing system by caching verification results bound to session identifiers."

Medium (dependent claim): adds the general technique. Narrows scope but covers variations.

  "The method of claim 1, wherein the caching comprises a session-scoped data store with time-bounded invalidation."

Narrowest (specific claim): the exact implementation. A fallback if broader claims face prior art.

  "The method of claim 2, wherein the session-scoped data store uses a hash table with HMAC-SHA256 binding and 24-hour TTL."
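To make the claim ladder concrete, here is a minimal sketch of the kind of mechanism the narrowest claim describes: a session-scoped cache with HMAC-SHA256 binding and time-bounded invalidation. Everything here, class and method names included, is hypothetical illustration, not language from any actual patent or filing.

```python
import hashlib
import hmac
import time

class SessionScopedCache:
    """Hypothetical sketch: caches verification results keyed to a
    session identifier, with HMAC-SHA256 binding and time-bounded
    invalidation, mirroring the narrowest example claim."""

    def __init__(self, secret: bytes, ttl_seconds: int = 24 * 3600):
        self.secret = secret
        self.ttl = ttl_seconds
        self._store = {}  # hash table: HMAC(session_id) -> (result, expiry)

    def _key(self, session_id: str) -> str:
        # Bind the cache entry to the session via HMAC-SHA256
        return hmac.new(self.secret, session_id.encode(),
                        hashlib.sha256).hexdigest()

    def put(self, session_id: str, verification_result: bool) -> None:
        expiry = time.time() + self.ttl  # time-bounded invalidation
        self._store[self._key(session_id)] = (verification_result, expiry)

    def get(self, session_id: str):
        entry = self._store.get(self._key(session_id))
        if entry is None:
            return None
        result, expiry = entry
        if time.time() > expiry:  # expired: invalidate and miss
            del self._store[self._key(session_id)]
            return None
        return result
```

Note how each rung of the ladder is visible in the code: the cached verification result (broadest), the session-scoped store with expiry (medium), and the hash table with HMAC-SHA256 binding and a TTL (narrowest).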

Hardware Integration Language

How you describe software-hardware interaction affects Alice survival.

Software-only (risky) → hardware-anchored (stronger):

  • "A method for classifying data" → "A system comprising a processor and memory storing instructions that, when executed, classify sensor data received from an imaging device"
  • "An algorithm that optimizes" → "An apparatus comprising a network interface controller configured to optimize packet routing based on real-time congestion measurements"
  • "Outputting results on a device" → "Rendering a diagnostic visualization on a display coupled to an imaging sensor, mapping spatial coordinates to detected anomalies"

Specification Depth Matters

The more technical detail in the specification, the harder it is to characterize as abstract. Detail also strengthens Section 112 (enablement), which reinforces Section 101.

  • Minimal specification (1-2 pages): high Alice risk, high Section 112 risk
  • Moderate specification (5-10 pages): medium risk on both
  • Comprehensive specification (20-30 pages): low risk on both

What Developers Should Document

Every item below maps to an Alice defense strategy. Document these before meeting your patent attorney.

  • Specific technical problem: anchors Step 1 away from the abstract idea. Example: "Sequential searches cause memory fragmentation, degrading retrieval by 40%."
  • Prior approaches and their limitations: establishes context and supports non-obviousness. Example: "Existing caches store credentials, not verification results."
  • Hardware interaction: provides a physical anchor for claims. Example: "Receives raw pixel data from a CMOS sensor; renders on a calibrated display."
  • Performance benchmarks: prove the technical improvement at Step 2. Example: "P99 latency: 180ms before, 2ms after. Memory: 4.2GB to 890MB."
  • Why the steps are not mentally feasible: defeats the "mental process" categorization. Example: "Classifying 10M packets/sec using 47-dimensional vectors."
  • Architecture choices: show non-obvious design decisions. Example: "B-tree with bloom filter over a hash table, because collisions degrade linearly above 1M."
  • Alternatives considered: broadens the written description. Example: "Evaluated LRU cache (cold-start problem) and round-robin (ignores load variance)."
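If you do not yet have benchmark numbers to document, a small harness like the following captures the latency percentiles the list above asks for. This is a hedged sketch: the function name is made up here, and the lambdas are stand-in workloads you would replace with your own before-and-after implementations.

```python
import statistics
import time

def measure_latency_ms(fn, runs: int = 1000) -> dict:
    """Run `fn` repeatedly and report latency percentiles in milliseconds.
    `fn` is a placeholder for the operation being benchmarked."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        # Nearest-rank p99 over the sorted samples
        "p99_ms": samples[int(0.99 * (len(samples) - 1))],
    }

# Stand-in comparison: an old and a new implementation of the same step.
old = measure_latency_ms(lambda: sum(range(10_000)))
new = measure_latency_ms(lambda: 10_000 * 9_999 // 2)
```

Record the before-and-after numbers, the hardware, and the workload in your invention log; a dated, reproducible measurement is far more persuasive than a later estimate.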

What Weakens Your Position

  • Business outcomes only: "improved revenue" is not a technical improvement. Instead, pair the business benefit with its technical mechanism.
  • Generic computer language: "using a computer to process data" reads as abstract. Instead, name specific components and their functions.
  • Unsupported broad claims: "applicable to all industries" without examples. Instead, describe 2-3 concrete embodiments.
  • Implementation without a problem: "we built X" misses the problem framing. Instead, lead with the technical limitation you overcame.

Pre-Filing Checklist

Use these 10 items before meeting your patent attorney.

1. Can you state the technical problem in one sentence without business terms? If it includes "revenue" or "user experience," reframe around the technical limitation.
2. Can you quantify the improvement? Latency, memory, accuracy, throughput. A number turns "better" into "technically improved."
3. Does your invention interact with any physical hardware? Sensors, displays, network controllers, GPUs. Document the interaction.
4. Could a human perform these steps mentally? If yes, document the scale, speed, or dimensionality that makes human performance impossible.
5. What existing approaches did you consider, and why did they fail?
6. What surprised you during development? Unexpected results are a strong signal of an inventive concept.
7. Can you describe the approach without naming specific libraries or languages? If it requires "Python" or "TensorFlow," it may be too narrow for broad claims.
8. Have you documented at least two alternative implementations?
9. Can you explain why this specific combination of components produces unexpected results?
10. Would someone in a different technical field recognize this as solving a known problem in their area?

Is Your Patent at Risk?

  • Is the core concept mathematical, mental, or a business method? HIGH RISK → emphasize the technical implementation.
  • Can the steps be performed in someone's head? HIGH RISK → add scale, speed, and data-volume requirements.
  • Is it essentially "X on a computer"? HIGH RISK → show an unconventional technical arrangement.
  • Are there specific, measurable technical improvements? LOW RISK if documented with metrics.
  • Is there a comprehensive technical specification? LOW RISK if 20+ pages with working examples.

Frequently Asked Questions

Are software patents dead after Alice?

No. Software allowance rates are approximately 58%. The Alice ruling eliminated certain types of claims (business methods on generic computers) but left room for genuine technical improvements. Every major Federal Circuit case since 2016 has identified categories of software that survive. The test is specific: does your invention improve the technology itself?

What about AI and machine learning inventions?

AI patents face higher scrutiny. About 77% of office actions in AI-related technology centers include Section 101 rejections. Recentive Analytics v. Fox (April 2025) held that generic ML applied to a new data domain is abstract. To survive, AI claims must show a specific architectural improvement or a specific technical problem solved in an unconventional way. The USPTO's 2024 examples (47-49) illustrate what works.

Do I need hardware to get a software patent?

Not strictly. Enfish survived with a pure database improvement and no hardware claims. But hardware anchoring provides an additional defense layer. Patents that tie algorithms to specific physical devices (sensors, displays, controllers) have stronger Section 101 positioning than those relying on generic references to "an electronic device."

What happens if I get a Section 101 rejection?

Most software patents face at least one. Common responses: point to specific technical improvements (cite Enfish or McRO), emphasize unconventional component arrangement (cite BASCOM), or challenge the examiner's conventionality assertion (cite Berkheimer, which requires examiners to provide evidence). Technical benchmarks and inventor declarations strengthen all responses. PTAB affirms 88.6% of 101 rejections on appeal, so the examiner-level response matters.

What is the Patent Eligibility Restoration Act?

Proposed federal legislation to reform 35 U.S.C. Section 101 by narrowing the "abstract idea" judicial exception. Would replace judicial exceptions with a statutory framework. Introduced multiple times in Congress. Not yet passed as of February 2026. If enacted, it would significantly change the Alice landscape by reducing examiner discretion.