“The important shift is that software contribution itself is becoming programmable,” said Eugene Neelou, head of AI security at API security company Wallarm, who also leads the industry Agentic AI Runtime Security and Self‑Defense (A2AS) project.
“Once contribution and reputation building can be automated, the attack surface moves from the code to the governance process around it. Projects that rely on informal trust and maintainer intuition will struggle, while those with strong, enforceable AI governance and controls will remain resilient,” he pointed out.
A better approach is to adapt to this new reality. “The long-term solution is not banning AI contributors, but introducing machine-verifiable governance around software change, including provenance, policy enforcement, and auditable contributions,” he said. “AI trust needs to be anchored in verifiable controls, not assumptions about contributor intent.”
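The controls Neelou names can be sketched concretely. The following is a minimal, hypothetical Python illustration of a merge gate that combines the three ideas he lists: provenance (the change must match an attested hash), policy enforcement (only allowlisted signers, complete records), and auditability (signed, verifiable records). The field names, the `POLICY` structure, and the use of HMAC shared secrets are illustrative assumptions for the demo, not taken from A2AS or any real standard; production systems would use asymmetric signatures and attestation formats.

```python
import hashlib
import hmac
import json

# Illustrative policy: which identities may sign contributions, and what
# a provenance record must contain. Structure is hypothetical.
POLICY = {
    "allowed_signers": {"release-bot", "maintainer-ci"},
    "required_fields": {"author", "signer", "diff_sha256"},
}

# Shared secrets keep the demo self-contained; real systems would use
# asymmetric keys (e.g. signature verification against a public key).
SIGNER_KEYS = {
    "release-bot": b"bot-secret",
    "maintainer-ci": b"ci-secret",
}

def sign_record(record: dict, key: bytes) -> str:
    """Produce a verifiable signature over a canonicalized record."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_contribution(record: dict, signature: str, diff: bytes) -> bool:
    """Admit a change only if policy, signature, and provenance all check out."""
    # Policy enforcement: provenance record must be complete.
    if not POLICY["required_fields"] <= record.keys():
        return False
    # Policy enforcement: signer must be on the explicit allowlist,
    # rather than trusted on reputation or intuition.
    signer = record["signer"]
    if signer not in POLICY["allowed_signers"]:
        return False
    key = SIGNER_KEYS.get(signer)
    if key is None:
        return False
    # Auditability: the record is signed, so tampering is detectable.
    expected = sign_record(record, key)
    if not hmac.compare_digest(expected, signature):
        return False
    # Provenance: the attested hash must match the actual change.
    return record["diff_sha256"] == hashlib.sha256(diff).hexdigest()

diff = b"--- a/app.py\n+++ b/app.py\n+print('hi')\n"
record = {
    "author": "contributor@example.com",
    "signer": "maintainer-ci",
    "diff_sha256": hashlib.sha256(diff).hexdigest(),
}
sig = sign_record(record, SIGNER_KEYS["maintainer-ci"])
print(verify_contribution(record, sig, diff))         # True: all checks pass
print(verify_contribution(record, sig, diff + b"x"))  # False: diff tampered
```

The point of the sketch is that every gate is machine-checkable: no step asks whether the contributor seems trustworthy, only whether the verifiable record satisfies an explicit policy.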