By: Nate Bek
Startup developers should treat their AI coding assistants as copilots, not autopilots.
That’s my biggest takeaway from CenterForce’s Governance & Strategy Summit, held Wednesday at Bell Harbor International Conference Center on a sunny day in Seattle’s Belltown neighborhood. In a room full of legal minds from across the country, my goal was to pull out what matters most to startup founders working with AI.
Two high-profile AI legal battles framed most of the discussions on stage: one centered on patents, the other on copyright.
In the patent case, computer scientist Stephen Thaler challenged the U.S. Patent and Trademark Office after it refused to grant patents for inventions created by his AI system. In the copyright case, artist Jason M. Allen sued the U.S. Copyright Office after it denied protection for an award-winning image he generated using AI.
“We have some information from cases that are in different contexts, where multiple humans are debating who is an author of a work,” said Eric Tuttle, a partner at Wilson Sonsini, speaking on stage in a session titled “Copyright in the Age of AI: Navigating New Frontiers.”
Courts are likely to use that standard as a starting point, but how it applies to AI-generated content is still unclear.
Brian McMahon, senior copyright counsel at Microsoft, emphasized a practical takeaway from the latest copyright guidance. While prompts alone are not eligible for copyright, there may still be a path to protection if human input shapes the final result.
“As a company, if you're using the AI tool in your business, so long as you’re not just cranking out the output, throwing that into the stream of commerce… and you're instead taking that extra step and modifying the output in some way, I think that's a good opportunity,” McMahon said.
He explained that if a human’s expressive contribution can be detected in the AI-assisted output, that work may qualify for copyright protection.
Tuttle then raised a key concern for software companies. If AI tools are generating a significant portion of your code, can that code and the software built from it be protected under copyright law? That remains an open question that has not yet been tested in court, he said.
For startup founders building with AI, I believe this is the crux. If your AI system is generating code, how much of it can you actually claim? How much human input is required before that code becomes yours?
At that point, I asked the panelists to double-click: If you're a startup, the whole ethos is to move fast and break things. So what is your pro bono advice on AI coding tools? Is it to slow down with Cursor or GitHub Copilot rather than put yourself at legal risk? Or is it simply to stay aware of these cases making their way through the legal system and keep building fast with AI?
Here’s what they told me:
Tuttle said AI is being widely adopted and it is unrealistic to expect developers to avoid tools that improve efficiency when writing code. He emphasized the importance of understanding the risks that come with relying on AI, especially in the context of intellectual property.
When it comes to copyright, Tuttle warned that generating substantial portions of a codebase with AI could make it harder to protect that software. He encouraged founders to consider other ways to safeguard their work, such as intellectual property protections, technological measures, and contracts.
Tuttle pointed to the idea of “autopilot” versus “copilot.” The more humans are involved in revising, editing, integrating, and making decisions about the code, the stronger the case for authorship. He added that being able to document that human input could be important if the code ever becomes the subject of a legal dispute.
Glory Francke, GitHub’s head of privacy and its data protection officer, advised founders to pay close attention to the tools their developers are using.
She recommended sticking with commercially offered versions of AI tools rather than free ones. Commercial tools are more likely to protect user privacy by not retaining prompts or using them for training.
Francke also urged teams to review and configure their settings carefully. Some tools, for instance, let you block code suggestions that match publicly available code so those matches never show up in completions. Others offer code-referencing features that surface where a suggested snippet comes from and what license it carries.
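To make that concrete for non-lawyers: those public-code filters are black boxes from the outside, but the underlying idea is fingerprinting a suggestion and checking it against an index of known public code. Here’s a rough sketch of that concept in Python. It’s purely illustrative, a toy of my own making, and not how GitHub Copilot or any other product actually implements the feature.

```python
import hashlib

# Illustrative only. Real public-code filters work at much larger scale and with
# far more sophisticated matching. This toy just normalizes whitespace,
# fingerprints short windows of lines, and looks those fingerprints up in an
# index built from "public" code.

WINDOW = 3  # consecutive non-blank lines per fingerprint

def fingerprints(code: str):
    """Yield hashes of normalized, overlapping windows of lines."""
    lines = [ln.strip() for ln in code.splitlines() if ln.strip()]
    for i in range(max(len(lines) - WINDOW + 1, 1)):
        chunk = "\n".join(lines[i:i + WINDOW])
        yield hashlib.sha256(chunk.encode()).hexdigest()

def matches_public_code(suggestion: str, public_index: set) -> bool:
    """True if any window of the suggestion appears in the public-code index."""
    return any(fp in public_index for fp in fingerprints(suggestion))

# Toy usage: index one "public" snippet, then screen an AI suggestion against it.
public_snippet = "def add(a, b):\n    total = a + b\n    return total\n"
public_index = set(fingerprints(public_snippet))

suggestion = "def add(a, b):\n    total = a + b\n    return total"
print(matches_public_code(suggestion, public_index))  # True: identical once normalized
```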
Francke encouraged teams to “get to know your admin” and make use of those settings to reduce risk and stay compliant.
While much about AI is still unknown and outside of our control, Francke said companies can take practical steps now by choosing the right tools, using the right settings, and making sure prompts are not being used for training if that is not the company’s intent.
“Use the tools that are there to help protect against the harms that you're concerned about,” Francke said.
Jonathan Talcott, shareholder at Buchalter, stressed the importance of understanding open-source licensing when using AI coding tools. He advised companies to put systems in place to manage open-source compliance, whether through features built into tools like GitHub Copilot or external scanners like FOSSA.
“You want to be making sure that you have some type of tool in place,” he said, to track and address licensing issues before they become legal problems.
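For founders wondering what “some type of tool in place” looks like in practice, dedicated scanners such as FOSSA inventory your dependencies and flag license conflicts automatically. As a toy illustration of the first step in that process, here’s a short Python sketch (mine, not any panelist’s) that lists installed packages and the licenses declared in their metadata. It’s a far cry from real compliance tooling, but it shows the kind of raw inventory those scanners build on.

```python
# A toy inventory, not real compliance tooling: list installed Python packages
# and the license declared in their metadata. Scanners like FOSSA go much
# further (transitive dependencies, policy checks, reports).
from importlib import metadata

def declared_license(dist) -> str:
    """Best-effort read of a package's declared license from its metadata."""
    meta = dist.metadata
    license_field = (meta.get("License") or "").strip()
    if license_field and license_field.upper() != "UNKNOWN":
        return license_field.splitlines()[0]
    # Fall back to trove classifiers such as "License :: OSI Approved :: MIT License".
    for classifier in meta.get_all("Classifier") or []:
        if classifier.startswith("License ::"):
            return classifier.split("::")[-1].strip()
    return "unknown"

def main() -> None:
    for dist in sorted(metadata.distributions(),
                       key=lambda d: (d.metadata["Name"] or "").lower()):
        name = dist.metadata["Name"] or "unknown-package"
        print(f"{name}: {declared_license(dist)}")

if __name__ == "__main__":
    main()
```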
On the topic of IP and copyright ownership for AI-generated code, Talcott said much is still uncertain and will depend on how current legal cases are decided.
“That’s going to be impacted by how these cases play out,” he said.
Still, Talcott echoed what others on the panel made clear. Avoiding these tools entirely is not a realistic option for software startups.
“Unless you want to be left behind,” he said.
Disclosures: I’m no lawyer, and this isn’t legal advice. The speakers quoted here also made clear they were speaking for themselves, not their respective companies. Ascend is a client of Wilson Sonsini.