
Tight Ownership Protocols for AI Code Teams

Your team shipped a feature with Copilot last sprint. Now it is six months later and a patent attorney is asking: who conceived the inventive idea? Not who wrote the code. Who conceived it. If your answer is a shrug and a git log, you have a problem.

A 2024 Checkmarx study found that 99% of development teams use AI for code generation. The Stack Overflow 2025 Developer Survey reported 84% of respondents use or plan to use AI tools. Nearly every team faces the same question: when a feature reaches the patent stage, can you identify who conceived it?

U.S. patent law is unambiguous. Under 35 U.S.C. § 100(f), an inventor must be a natural person. The Federal Circuit confirmed in Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022), that AI systems cannot be listed as inventors. The law does not bar patents on AI-assisted work. It requires that at least one human contributed to the conception of the claimed invention. Your team's job is to make that contribution traceable.

Last updated: March 2026. This page is informational only and not legal advice. Consult a patent attorney for your specific situation.

99% of dev teams use AI for code generation (Checkmarx 2024)

84% of developers use or plan to use AI tools (Stack Overflow 2025)

§ 116 is the 35 U.S.C. section governing joint inventorship; each inventor must contribute to at least one claim

2025 is the year of the USPTO's revised guidance confirming AI is a tool, not an inventor

Conception vs. Reduction to Practice: The Distinction That Matters

Most developers assume that whoever builds a feature is the inventor. Patent law disagrees.

Inventorship turns on conception, not reduction to practice. Conception means forming a definite and permanent idea of the complete invention, including every element of at least one patent claim. Reduction to practice is building, testing, or demonstrating that the idea works. You can reduce someone else's idea to practice without being an inventor.

This distinction matters when AI generates code. If Copilot produces a novel algorithm and a developer pastes it in without meaningful intellectual contribution, the developer may have reduced the idea to practice without conceiving it. That is not inventorship. If a developer identifies a specific technical problem, engineers a series of prompts to explore solutions, selects and modifies the output, and integrates it into a broader architecture, that human contribution may satisfy the conception standard.

The USPTO's revised 2025 guidance on AI-assisted inventions states that AI systems are tools used by human inventors. The same legal standard for determining inventorship applies regardless of whether AI was involved. Your team protocol needs to capture the human decisions, not just the AI outputs.

Key distinction: Whoever builds the feature is not automatically the inventor. The inventor is whoever formed the complete inventive idea, including the decision to pursue a particular technical approach. Documenting that decision at the time it is made is the entire purpose of an inventorship protocol.

Inventorship Is Not Ownership

Before building a documentation workflow, understand a second distinction. Inventorship and ownership are separate concepts.

Inventorship identifies who conceived the claimed invention. Under 35 U.S.C. § 116, joint inventors can each contribute to different claims. Under Pannu v. Iolab Corp., 155 F.3d 1344 (Fed. Cir. 1998), each named inventor must make a significant contribution to at least one claim. Ownership depends on contracts. Most employees assign patent rights to their employer through invention assignment agreements. A developer can be a named inventor while the employer holds all commercial rights.

Why does this matter for your workflow? Incorrect inventorship creates serious enforceability problems. Errors may require correction under 35 U.S.C. § 256, and correction is not always straightforward. If your team names a manager who only supervised, or omits a junior engineer who conceived a key claim element, the patent is vulnerable to challenge.

Assignment agreements handle who owns the patent rights. Inventorship protocols handle who gets named. Both need to be correct.

Inventorship

Who conceived the claimed invention. Determined by patent law. Cannot be changed by contract. Errors create enforceability risk.

Ownership

Who holds commercial rights. Determined by assignment agreements. Most employees assign rights to their employer at hire.

Patent Protection

Best when the inventive approach is novel and you want exclusivity. Requires narrow, claim-specific inventorship analysis.

Trade Secret / Copyright

Better when the innovation is hard to reverse-engineer from the shipped product or when inventorship is difficult to establish. Your documentation protocol supports all three IP strategies.

A Six-Step Team Protocol for Inventorship Evidence

The goal is a lightweight, repeatable workflow that captures inventorship evidence at the moment of creation. Not six months later during a filing scramble or a litigation hold. Here is a practical protocol you can integrate into GitHub, Jira, Linear, or Notion.

1. Flag patentable work early. Not every ticket needs inventorship tracking. Train your team to recognize features that solve a technical problem in a novel way. When a developer or product lead identifies a potentially patentable approach, tag the ticket or PR with an “invention-candidate” label. This triggers the remaining steps.

2. Log AI tool usage on flagged work. For invention-candidate tasks, require developers to record which AI tools were used, the specific prompts entered, the outputs generated, and the context in which each tool was applied. Store these in version control, a linked document, or a metadata field in your project tracker. Only log prompts tied to the inventive feature, not your entire workflow.

3. Document the human inventive contribution. For each flagged feature, the developer writes a short narrative (3 to 5 sentences) answering: What technical problem did I identify? What approach did I choose and why? How did I modify, combine, or extend the AI output? The narrative should describe the inventive decision, not just the implementation steps.

4. Peer review for conception, not just code quality. Add a lightweight review where a second engineer confirms that the documented human contribution reflects a genuine inventive decision. This is a sanity check: did the developer actually shape the solution, or did they accept the first AI suggestion wholesale?

5. Complete an invention disclosure form. When a feature is ready for IP review, consolidate the evidence: problem statement, prior approaches considered, AI tools used and their role, specific prompts and outputs, specific human modifications or selections, and a list of contributors with their respective contributions. Your patent counsel uses this to determine who qualifies as an inventor for each claim.

6. Archive with timestamps and audit trails. Store all logs, narratives, peer reviews, and disclosure forms in a system with reliable timestamps. Git commit histories and access-controlled document systems with version tracking both work. The goal is a record that can withstand scrutiny during prosecution, assignment disputes, or litigation.
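The logging and archiving steps above can be sketched as a small script that appends timestamped entries to an append-only log kept in version control. This is a minimal illustration under assumptions, not a prescribed format: the file path, field names, and function name are all hypothetical.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical location; keeping the log in the repo means every entry
# also inherits git's commit history as a secondary audit trail.
LOG_PATH = Path("docs/ip/ai-usage-log.jsonl")

def log_ai_usage(ticket, tool, prompt, output_summary, human_decision):
    """Append one timestamped AI-usage entry for an invention-candidate ticket."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "ticket": ticket,
        "tool": tool,
        "prompt": prompt,
        "output_summary": output_summary,
        # The inventive judgment, not just the mechanics -- this field
        # is the seed of the step-3 narrative.
        "human_decision": human_decision,
    }
    LOG_PATH.parent.mkdir(parents=True, exist_ok=True)
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Example usage for a flagged caching feature:
record = log_ai_usage(
    ticket="CACHE-142",
    tool="GitHub Copilot",
    prompt="Explore lock-free data structures for concurrent-write caching",
    output_summary="Three candidate structures; two partially usable",
    human_decision="Accepted eventual consistency; combined elements of outputs 1 and 3",
)
```

Because the log is JSON Lines rather than a single mutable document, each entry is an independent, timestamped record, which is exactly the shape a later disclosure form or audit needs.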
Weak vs. strong disclosure entries:

Weak: “Used Copilot to write the caching module.”

Strong: “I identified that our standard caching approach would fail under concurrent writes at scale. I prompted the AI to explore lock-free data structures, reviewed three candidate outputs, and combined elements from two of them to produce a solution the AI did not generate independently. The key inventive decision was recognizing that eventual consistency was acceptable in this context, which narrowed the solution space significantly.”
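Strong entries are easier to elicit when the three narrative questions are baked into the team's PR template. Here is one hypothetical markdown fragment; the section heading and wording are illustrative, not a standard.

```markdown
## Inventorship narrative (invention-candidate PRs only)

**What technical problem did I identify?**
<!-- e.g., standard caching fails under concurrent writes at scale -->

**What approach did I choose and why?**
<!-- e.g., lock-free structures; eventual consistency is acceptable here -->

**How did I modify, combine, or extend the AI output?**
<!-- e.g., combined elements of two candidate outputs the AI did not merge itself -->

**AI tools used:** <!-- tool name, plus link to the prompt log -->
```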

What Prompt Logs Actually Prove (and What They Do Not)

Teams sometimes assume that saving every prompt and AI output creates a complete inventorship record. It does not.

A detailed prompt history shows that AI was used. It shows the developer's inputs and the model's outputs. But a patent examiner or opposing counsel will care more about the human's contribution to the claimed idea than about the volume of prompts. A hundred prompts requesting variations of the same approach may show extensive AI use without demonstrating any human conception.

A well-documented decision, “I recognized that the standard caching approach would fail under concurrent writes, so I directed the AI to explore lock-free structures and then combined two partial outputs,” is often more probative than raw logs alone. The human narrative explains the inventive judgment. The prompt logs provide supporting context. Both can matter.

The hierarchy of evidence: The primary evidence is the developer's narrative explaining the inventive decision. Prompt logs are supporting context, not the main event. Log the prompts tied to patentable features, and invest your documentation effort in the human story alongside the AI transcript.

One more nuance: the USPTO's 2025 guidance treats AI as a tool, but courts have not yet extensively tested prompt-engineering scenarios. What constitutes a significant contribution in iterative AI code generation remains a case-by-case factual determination. Good documentation gives you the raw material to make the strongest argument regardless of how courts eventually draw the line.

Common Mistakes and Red Flags

These patterns create risk during patent prosecution, due diligence, or litigation.

Naming everyone on the PR as an inventor. If five engineers reviewed the code but only one conceived the inventive approach, the other four are not inventors. Over-naming is as dangerous as under-naming. Under 35 U.S.C. § 256, inventorship errors may be correctable, but they invite challenges and delays.

Naming no one because “the AI wrote it.” If a human identified the problem, guided the AI, selected among outputs, and modified the result, that human likely contributed to conception. The AI is a tool, not an inventor, and not a bar to patentability.

Treating missing logs as a misconduct problem. Inequitable conduct requires specific intent to deceive the USPTO, a high bar under Therasense, Inc. v. Becton, Dickinson & Co. (Fed. Cir. 2011). Missing logs are primarily an evidentiary weakness, not an automatic misconduct issue. Weak evidence is still a problem worth avoiding.

Reconstructing inventorship months after the fact. Memory fades. Developers change teams or leave the company. If you wait until the patent filing to figure out who conceived what, you are reconstructing instead of documenting. Reconstruction is less credible and more expensive.

Logging everything, everywhere. A process that requires every developer to log every prompt on every task will fail within a week. Scope your documentation to invention-candidate work only. Keep the overhead proportional to the IP value at stake.

Filing Readiness Checklist

Before your team submits an invention disclosure to patent counsel, confirm the following.

1. At least one natural person is identified as having contributed to the conception of each proposed claim.

2. Each identified inventor has a written narrative describing their specific inventive contribution.

3. AI tool usage is documented for the relevant feature, including which tools were used, the prompts entered, the outputs generated, and how the human modified or selected the output.

4. A peer review confirms that the documented human contributions reflect genuine inventive decisions.

5. All evidence (commits, logs, narratives, disclosure forms) is archived with timestamps and audit trails.

6. Invention assignment agreements are in place for all named inventors.

7. The team has considered whether a patent, trade secret, or copyright is the best protection path for this specific innovation.

Frequently Asked Questions

What specific human actions count as conception when an AI tool generates most of the code?

Conception means forming a definite and permanent idea of the complete invention. Actions that can support a conception claim include: identifying a specific technical problem that existing solutions do not address, designing the overall approach or architecture before prompting the AI, engineering prompts that direct the AI toward a particular solution space, selecting among multiple AI outputs based on technical judgment, and modifying or combining AI outputs to produce something the AI did not independently generate. Simply accepting AI output without meaningful intellectual contribution is unlikely to satisfy the standard. The key question is whether the human exercised inventive judgment over the claimed solution, not whether the human typed every line.

Do we need to log every prompt and output, or only those tied to patentable features?

Only those tied to patentable features. Logging every AI interaction across your entire workflow is impractical and creates noise that obscures the evidence that matters. Focus your documentation on work tagged as invention candidates. For routine coding with no IP significance, standard version control is sufficient. Invest documentation overhead where the IP value justifies it.

If multiple engineers and a manager reviewed AI-generated code, who should be named as inventors?

Only those who made a significant contribution to the conception of at least one claim. Under Pannu v. Iolab Corp., 155 F.3d 1344 (Fed. Cir. 1998), each joint inventor must contribute meaningfully to the inventive idea, not merely to its implementation or review. A manager who approved the feature but did not shape the inventive approach is not an inventor. An engineer who only ran tests is not an inventor. An engineer who identified a novel technical approach and directed the AI to explore it likely is. Your patent counsel makes the final determination based on the disclosure form and supporting evidence.

How do we document AI use without slowing down shipping or exposing trade secrets?

Integrate documentation into existing workflows rather than adding a separate process. Tag invention candidates in your project tracker. Add a short narrative field to PR templates for flagged work. Store AI interaction logs in a private, access-controlled repository. The total overhead for a flagged feature should be around 10 to 15 minutes of developer time. For trade secret protection, keep all documentation in systems your team controls so your code and documentation stay within your infrastructure.
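The tag-plus-template approach can also be enforced mechanically. As one hypothetical sketch, a GitHub Actions workflow can fail any PR that carries the invention-candidate label but lacks a narrative section in its description; the label name and section heading here are assumptions, and any CI system with label-aware triggers could play the same role.

```yaml
# Hypothetical CI check: invention-candidate PRs must include a narrative.
name: inventorship-narrative-check
on:
  pull_request:
    types: [opened, edited, labeled]
jobs:
  check-narrative:
    if: contains(github.event.pull_request.labels.*.name, 'invention-candidate')
    runs-on: ubuntu-latest
    steps:
      - name: Require narrative section in PR body
        uses: actions/github-script@v7
        with:
          script: |
            const body = context.payload.pull_request.body || "";
            if (!body.includes("## Inventorship narrative")) {
              core.setFailed(
                "Invention-candidate PRs must include an '## Inventorship narrative' section."
              );
            }
```

Because the check only fires on labeled PRs, unflagged routine work incurs zero overhead, which keeps the process proportional to the IP value at stake.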

Does the USPTO investigate whether AI was used in creating an invention?

The USPTO presumes that named inventors are correct and does not routinely investigate AI use absent a specific challenge. The real risk is not a proactive audit. It is that an opponent in an invalidity challenge, or an acquirer in due diligence, will question whether a named inventor actually conceived the claimed invention. Good records answer that question directly. Poor records leave it open and create leverage for challengers.

Can we patent software features where AI contributed significant portions of the code?

Yes, provided at least one natural person made a significant contribution to the conception of each claimed element. The USPTO's revised 2025 guidance confirms that AI-assisted inventions are not categorically unpatentable. The AI is treated as a tool. The relevant question is whether a human conceived the inventive idea, not whether a human wrote every line of code. Your documentation protocol exists to prove that human contribution when it is eventually questioned.

Every commit is a potential filing date you are missing.

Your team is already building novel technical approaches. The question is whether you can prove who conceived them when it matters. ObviouslyNot scans your codebase locally to surface patentable patterns before they slip into the prior art record.

No code leaves your device.

Try the scanner
