Claude Code skills: Automating FDA-required documentation for software as a medical device

Building software as a medical device (SaMD) comes with a big admin challenge: extensive documentation requirements for the FDA. These requirements cover everything from detailed architecture and risk control measures to change control and bug tracking. Traditionally, this meant hours upon hours of manual work. But Claude Code skills offer a path to automation.

Chad sat down with thoughtbot alum Damian Galarza, who’s currently a Senior Software Engineer at August Health, to demo how to create compliant documentation without drowning in tedium. Watch the full video here or read on for the highlights of automating software documentation.

The FDA and software documentation

When you’re building standalone software for medical purposes, it’s classified as software as a medical device (SaMD) in the eyes of the FDA. This means there’s a whole raft of documentation necessary for 510(k) submission, a process where you demonstrate your product’s safety and effectiveness before it can launch.

One of the things the FDA cares a lot about is making sure there’s a history of your decision-making and how you’ve gotten to where you are today. This documentation traces everything from high-level architecture down to individual modules and classes. An authentication system alone might require documenting every class involved, as well as user paths, external service interfaces, and security controls.

Building a Claude Code skill for FDA documentation

Whether you’re working in a regulated industry like healthcare or not, we see huge value in being able to keep documentation up-to-date automatically. Manually written software documentation is often out of date almost as soon as it’s written, but a Claude Code skill can continuously and automatically update documentation to create a powerful reference for everything from release notes to team collaboration.

Claude Code skills are essentially specialized instruction sets stored in a markdown file, conventionally named SKILL.md. The skill file lives in a folder within the project and contains YAML front matter with a name and description, followed by detailed instructions. The description is critical: it tells Claude when to use the skill. Everything else stays out of the context window until the skill is invoked.
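The anatomy described above can be sketched as a minimal skill file (the name and wording here are placeholders, not a real skill):

```markdown
---
name: my-skill-name
description: Use when <trigger condition> to <what the skill does>.
---

Detailed instructions for Claude go here. Only the name and description
are loaded up front; this body enters the context window once the skill
is invoked.
```

The front matter is what Claude sees when deciding whether the skill applies, which is why a precise, action-oriented description matters so much.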

In our demo, we worked with a Rails project to experiment with building out a documentation skill. This software isn’t technically SaMD, but it still gave us a useful testing ground. We named our skill “SDS documentation” (for Software Design Specifications), and the description simply said: “Use to generate 510(k) SDS documentation.” For instructions, we listed our goal as generating SDS documentation for the FDA 510(k) process and asked for a high-level view of authentication components, modules, and their interactions, as well as detailed specifications per module.
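Putting those pieces together, the skill file looked roughly like this (reconstructed from the description above, not copied verbatim from the demo):

```markdown
---
name: sds-documentation
description: Use to generate 510(k) SDS documentation.
---

Generate Software Design Specification (SDS) documentation for the
FDA 510(k) process. Provide a high-level view of authentication
components, modules, and their interactions, followed by detailed
specifications for each module.
```

Note how little it takes: a short description for triggering, plus a few sentences of instructions, was enough to produce the results below.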

How well did our skill perform?

We were impressed by the first pass. Our skill used typical 510(k) formatting and included versioning identifiers, risk classification, and regulatory context. The documentation covered authentication, credential verification, how session management works, and even some diagrams. It also captured relationships between middleware components, outlined security measures, addressed multi-tenancy, and included some data modeling. A lot of detail for not much work on our end.

As we continued to experiment, the real power emerged when we integrated skills with development tools. Instead of authentication, we asked our new skill to look at interface specifications, specifically internal APIs, external integrations, and user interfaces. Using the Linear MCP (Model Context Protocol) server, our skill could automatically fetch related issues and create traceability links. When documenting an approval workflow module, Claude searched Linear for relevant tickets and included them in the generated documentation with their IDs, titles, statuses, and how they related to the code.
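The traceability step might be expressed in the skill’s instructions along these lines (hypothetical wording; it assumes the Linear MCP server is already configured in Claude Code):

```markdown
When documenting a module:

1. Search Linear (via the Linear MCP server) for issues related to the
   module's feature area.
2. For each relevant issue, record its ID, title, and status.
3. In the generated SDS section, add a traceability table linking each
   requirement to its Linear issue and the code it affects.
```

Spelling out the traceability expectation in the skill, rather than in ad hoc prompts, is what lets every documentation run produce consistent requirement-to-ticket links.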

Automate with AI but verify with humans

The generated documentation wasn’t perfect. During testing, Claude insisted on referencing “devise authentication” when our Rails app used has_secure_password and custom controllers, not the Devise gem. It also sometimes generated placeholder requirement IDs instead of linking to actual Linear tickets.

A human verification pass caught these issues, but you can also prompt Claude to review its own documentation for factual correctness. When we did this, it identified missing information, documentation gaps, and inconsistencies—though it still missed the Devise error until explicitly corrected. We recommend treating AI-generated documentation as a strong first draft that requires review, not a final product.

Are you improving an existing product or creating something new in a highly regulated industry? We can help.

About thoughtbot

We've been helping engineering teams deliver exceptional products for over 20 years. Our designers, developers, and product managers work closely with teams to solve your toughest software challenges through collaborative design and development. Learn more about us.