Risks and Guardrails

AI is the ultimate InnerSource contributor. Like any external contributor, AI agents generate code that must be reviewed, validated, and integrated thoughtfully into your systems. The same InnerSource practices that enable trusted external contributions—code review, clear guidelines, transparent decision-making, and systems thinking—are exactly what you need to safely and sustainably adopt AI in development.

Adopting AI without these guardrails can deliver short-term gains in speed and productivity, but at the cost of long-term risks to quality, security, and maintainability. The good news: if your organization has built a strong InnerSource culture, you already have the foundations in place.

Short-term speed vs. long-term risk

AI coding tools can deliver impressive short-term productivity gains. The danger is that teams take on more risk than they realize: releasing AI-generated content with fewer human reviews, skipping tests, or accepting code they do not fully understand. These gains can erode over time as technical debt, security vulnerabilities, and maintenance burden accumulate. InnerSource practices like mandatory code review, clear ownership, and contribution guidelines act as a natural brake on this tendency, ensuring that speed does not come at the expense of reliability.
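
Concretely, projects hosted on GitHub-style platforms often encode ownership in a CODEOWNERS file, which branch protection rules can use to require a human owner's approval before any change, including an AI-generated one, is merged. A minimal sketch follows; the team names and paths are placeholders:

```
# CODEOWNERS (illustrative): every path has a human team accountable for review
*               @org/platform-maintainers

# Sensitive areas get an additional, more specialized set of reviewers
/src/auth/      @org/security-team
/docs/          @org/docs-maintainers
```

Combined with a branch protection rule that requires code-owner review, this makes "a human looked at it" a structural property of the repository rather than a habit that erodes under deadline pressure.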

Mitigating AI slop

“AI slop” refers to low-quality, generic, or incorrect content produced by AI systems without adequate human oversight. In a development context, this can mean boilerplate code that does not fit the project’s conventions, misleading documentation, or subtly incorrect implementations. InnerSource’s emphasis on transparency—keeping things traceable and open for inspection—directly mitigates this risk. When contributions (whether from humans or AI) go through visible review processes in shared repositories, quality issues are caught earlier and patterns of slop become visible to the community.
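
One lightweight way to make that transparency routine is a pull request template that asks contributors to disclose AI assistance, so reviewers know where to look harder. The sketch below is illustrative, not a standard; adapt the path and wording to your own contribution guidelines:

```markdown
<!-- .github/pull_request_template.md (illustrative path and wording) -->
## Summary of changes

## AI assistance disclosure
- [ ] Portions of this change were generated or drafted with AI tooling
- [ ] All generated code was reviewed, understood, and tested by a human
- [ ] Generated code follows project conventions (naming, error handling, docs)
```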

Defining boundaries for proprietary knowledge

As organizations use InnerSource practices to capture and share knowledge for AI training, they must define clear boundaries between what can be shared broadly and what must remain protected. Not all internal knowledge is appropriate for AI training—sensitive research, competitive intelligence, and regulated data require careful handling. InnerSource governance practices—clear ownership, access controls, and contribution guidelines—provide a natural framework for making these distinctions explicit.

The goal is to separate human creation outcomes (the knowledge and artifacts that can be shared) from the creation process itself and from proprietary assets that need safeguarding. Organizations should establish policies that specify which content can be used for AI training, which requires restricted access, and which must remain outside AI systems entirely. This is especially important for organizations with sensitive internal research or regulated data, where compliance and appropriate access controls are non-negotiable.
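
A hedged sketch of how such a policy could be made explicit and machine-readable follows. The file name, schema, and paths are hypothetical, not an established standard; the point is that classifications live in a reviewable, version-controlled artifact rather than in individual judgment calls:

```yaml
# ai-training-policy.yml (hypothetical schema, for illustration only)
classification:
  shareable:        # may be included in AI training or retrieval corpora
    - "docs/**"
    - "public-examples/**"
  restricted:       # internal access only; excluded from training corpora
    - "research/internal/**"
  prohibited:       # must never enter AI systems (sensitive or regulated data)
    - "customer-data/**"
    - "regulated/**"
owner: "@org/data-governance"   # hypothetical team that handles exceptions
```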

Transparency and stakeholder involvement

Involving stakeholders and keeping development transparent supports responsible AI deployment. When decisions about tools, patterns, and policies are visible and discussable, teams can align on what is acceptable and what is not. This aligns with InnerSource principles of openness and collaboration and helps prevent AI from being used in ways that conflict with organizational values or compliance requirements.
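
One common convention for keeping such decisions visible is a lightweight architecture decision record (ADR) checked into a shared repository, where stakeholders can comment before a policy takes effect. The skeleton below is a sketch, and the decision shown is invented for illustration:

```markdown
# ADR 0012: Permitted uses of AI coding assistants (illustrative example)

Status: Proposed
Context: Teams are adopting AI coding tools at different rates, with no
  shared policy on review requirements or data handling.
Decision: AI-generated changes follow the same review process as any other
  InnerSource contribution; prompts must not include regulated data.
Consequences: Slower initial adoption, but consistent accountability and a
  visible, discussable record of why the policy exists.
```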

Leading people and agents

As AI agents take on more development tasks, leaders face a new challenge: managing both people and AI agents. This goes beyond tooling decisions into questions of work design, accountability, and organizational structure. Who is responsible when an agent produces incorrect or harmful output? How do you balance workloads between human contributors and automated agents? How do you ensure that institutional knowledge continues to be built by people even as agents handle more of the routine work?

InnerSource program leads should think proactively about these questions rather than waiting to react as problems emerge. Clear contribution guidelines that apply to both human and AI contributors, transparent review processes, and explicit accountability structures will help organizations navigate this transition. The goal is to design work practices that get the best from both people and agents while preserving the collaborative culture that makes InnerSource effective.
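
As one concrete accountability practice, some teams attribute agent-generated work explicitly in commit metadata using widely used git trailers (Co-authored-by is recognized by GitHub for attribution; Reviewed-by is a common convention from projects like the Linux kernel), so responsibility remains traceable long after merge. The agent identity and email addresses below are hypothetical:

```text
git commit -m "Refactor session-token rotation

Co-authored-by: coding-agent[bot] <agent@example.com>
Reviewed-by: Ada Lovelace <ada@example.com>"
```

A record like this answers the accountability question directly: the trailer names the agent that produced the change and the person who accepted responsibility for it.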