
The Infrastructure Behind Safe AI Coding at Scale, with Orit Golowinski

May 8, 2026

AI can now generate code faster than teams can govern it. And as autonomous agents enter development workflows, the real challenge is shifting from speed to control and accountability.

Orit Golowinski, Head of Product at JetBrains, explains why leading AI-native teams are redesigning the software lifecycle around AI, not just adding AI tools into existing workflows. She shares how organisations can balance developer autonomy with enterprise guardrails through smarter governance, workflow orchestration, and AI-native development practices.


AI is already deeply embedded in developer workflows across MENA. In your view, what separates teams that are truly driving productivity gains from those just scratching the surface?

What separates the teams that are truly driving productivity from those just experimenting with AI is the level at which they apply it.

Many teams are still using AI in a fairly basic way. Outside of engineering, that often means using tools like ChatGPT to help with presentations, documentation, summaries, or research. Within development teams, early adoption often looks like code suggestions, autocomplete, or developers accepting AI-generated code without fully challenging the quality, context, or long-term maintainability of what is being produced.

The more advanced teams are thinking beyond individual tasks. They are looking across the full software development lifecycle and asking: where are the real bottlenecks, where are the repetitive manual tasks, and where can AI meaningfully reduce friction? These teams are starting to connect AI agents and automated workflows across the lifecycle, from writing code to reviewing it, running CI checks, identifying failures, suggesting fixes, and helping move changes through the pipeline.

But the most important difference is how they measure success. Teams that are only scratching the surface often rely on shallow productivity metrics, such as the number of lines of code generated by AI or an increase in PR volume. But more code does not necessarily mean better outcomes. In fact, with AI, writing code is becoming less of a bottleneck. The new bottleneck is often review, validation, and understanding code that was generated quickly and at scale.

That is why mature teams are shifting the conversation from “how much code did AI write?” to “what value did we actually deliver?” They are measuring things like reduced cycle time, fewer human hours spent on repetitive work, faster delivery of customer value, improved reliability, and ultimately, business impact. The teams that win with AI are not just generating more output; they are redesigning their workflows so that AI helps them deliver better software, faster, with stronger controls around quality and value.

 

As AI coding agents move closer to real adoption, development is shifting toward more autonomous systems. What are the biggest readiness gaps you see organisations underestimating?

The biggest readiness gap is that many organisations are preparing for AI coding agents as if they were just more advanced developer tools, when in reality, they introduce a new operating model: non-deterministic systems that can take actions, make decisions, and affect production workflows.

The most underestimated area is governance and guardrails. Many enterprises still rely on static policies that were designed for human-driven workflows. But autonomous agents behave differently: they can generate code, trigger pipelines, request resources, make architectural choices, and interact with multiple systems. Organisations need guardrails that are designed specifically for agentic workflows, including isolation, sandboxing, just-in-time permissions, approval gates, and limits on what an agent can access or change. Reducing the blast radius is critical.
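The guardrails described above can be sketched in code. The following is a minimal, hypothetical illustration of just-in-time, scoped permissions with a deny-by-default check; every name here (`Grant`, `request_grant`, `authorize`) is invented for the example, not a real API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Grant:
    """A short-lived, narrowly scoped permission issued to one agent."""
    agent_id: str
    scope: str                # e.g. "repo:billing-service:write"
    expires_at: datetime
    approved_by: str          # the human who approved the grant

    def is_valid(self, now=None):
        return (now or datetime.now(timezone.utc)) < self.expires_at

def request_grant(agent_id, scope, approver, ttl_minutes=15):
    """Issue a just-in-time grant that expires automatically."""
    return Grant(
        agent_id=agent_id,
        scope=scope,
        expires_at=datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
        approved_by=approver,
    )

def authorize(grant, agent_id, scope):
    """Deny by default: the grant must match the agent and scope, and be live."""
    return grant.agent_id == agent_id and grant.scope == scope and grant.is_valid()
```

Because grants expire and are scoped to a single resource, an agent that is compromised or misbehaving has a small blast radius: anything outside the grant is simply denied.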

A second major gap is accountability. If an agent writes code, modifies infrastructure, or triggers a workflow, organisations need to know which human initiated it, what context the agent had, what decisions it made, and how the output was validated. Without this auditability, it becomes very hard to manage risk, compliance, and trust.
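The audit trail described here can be captured as a simple record per agent run. This is an illustrative sketch, not a real schema; the field names are assumptions chosen to mirror the questions above (who initiated it, what context it had, what it did, how it was validated).

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AgentAuditRecord:
    """One auditable record per agent-driven task."""
    initiated_by: str                 # the accountable human
    agent_id: str
    task: str
    context_sources: list             # what context the agent was given
    actions: list = field(default_factory=list)   # timestamped decisions/tool calls
    validated_by: Optional[str] = None            # how the output was checked

    def log_action(self, action: str):
        self.actions.append((datetime.now(timezone.utc).isoformat(), action))
```

With records like this, compliance questions ("which human initiated this change, and what did the agent actually do?") become queries over structured data instead of forensic reconstruction.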

The third gap is technical readiness: documentation, APIs, data contracts, architectural guardrails, and scalable testing. Agents need clear system context and non-negotiable architectural rules; otherwise, they may produce code that looks correct in isolation but breaks existing assumptions. And because the volume of AI-generated changes can grow quickly, manual review alone will not scale. Organisations need automated testing, CI validation, sandbox environments, and evaluation frameworks that can check business logic, security, and compliance before changes reach production.
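The automated validation described above amounts to a gate of independent checks that every change must pass before it can merge. A minimal sketch, with stand-in checks (real gates would invoke test runners, secret scanners, and license tooling):

```python
def run_checks(change, checks):
    """Run every named check against a change.
    Returns (passed, failures) so CI can block the merge and report why."""
    failures = [name for name, check in checks if not check(change)]
    return (not failures, failures)

# Illustrative gates; the keys on `change` are assumptions for this example.
checks = [
    ("unit_tests", lambda c: c.get("tests_pass", False)),
    ("security",   lambda c: not c.get("secrets_detected", True)),
    ("compliance", lambda c: c.get("license_ok", False)),
]
```

The deliberate choice is that every check defaults to failure when information is missing: an AI-generated change that CI cannot fully evaluate is blocked, not waved through.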

Finally, many companies underestimate cost governance. Autonomous agents may make decisions about architecture, infrastructure, scalability, or repeated retries in CI that have real financial implications. Without cost prediction, usage limits, and observability, an agentic workflow can create unexpected cloud, compute, or tooling costs.
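Cost governance can start as simply as pricing each agent action against a budget before it runs. A hypothetical sketch (the class and its numbers are invented for illustration):

```python
class CostGuard:
    """Per-agent budget guard: estimate the cost of each action up front,
    and reject anything that would exceed the remaining budget."""

    def __init__(self, budget_usd):
        self.budget_usd = budget_usd
        self.spent_usd = 0.0

    def try_spend(self, action, estimated_cost_usd):
        if self.spent_usd + estimated_cost_usd > self.budget_usd:
            return False          # blocked: would exceed the budget
        self.spent_usd += estimated_cost_usd
        return True
```

This is where runaway CI retries get caught: an agent that keeps re-running an expensive pipeline exhausts its budget and is stopped, instead of silently accumulating cloud spend.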

So the readiness question is not just “can the agent write code?” It is “can the organisation safely govern, test, trace, and control autonomous work at scale?” That is where many companies still have work to do.

 

The region’s strong tilt toward mobile, web, and AI-powered applications is shaping its ecosystem. How is this influencing the next generation of developer tools and platforms?

The next generation of developer tools is being shaped by two forces at the same time: the region’s demand for fast digital innovation, and the need to operate within enterprise-grade requirements around security, compliance, and data residency.

In MENA, many organisations are building mobile-first, web-first, and AI-powered services at a very high pace. That is pushing developer tools to become much more cloud-based, automated, and AI-native. Development is moving beyond the traditional local IDE setup toward environments that can run anywhere: in the cloud, in sovereign cloud environments, on remote machines, and eventually even from mobile or voice-driven interfaces. The direction is clear: development is becoming more accessible, more distributed, and much faster.

At the same time, regional requirements matter. Data residency, regulatory expectations, and sovereign cloud strategies mean that platforms cannot assume a one-size-fits-all public cloud model. Enterprises need tools that can support hybrid and sovereign-ready architectures, where AI models, code, data, and development environments can be hosted in the right location and under the right controls.

AI is also changing who can build and how they build. The idea that “anyone can be a builder” is becoming more realistic as AI lowers the barrier to creating applications, prototypes, automations, and integrations. But for professional software teams, the bigger change is that developers will increasingly manage fleets of AI agents that perform well-defined tasks: writing code, testing, reviewing, fixing pipeline failures, generating documentation, or validating compliance requirements.

So the next generation of developer platforms will not just be better editors or faster CI systems. They will be orchestration platforms for human and AI collaboration. The individual contributor may look more like a lead managing a team of agents, while still remaining accountable for quality, security, performance, and business outcomes. That combination of speed, automation, AI-native workflows, and sovereign enterprise control is what will define developer tools in the region.

 

20% of developers report difficulty accessing paid IDEs, more than double the global average. How should the industry respond to these structural and economic barriers?

The industry needs to respond on two levels: by lowering economic barriers, and by making AI-powered development more interoperable and accessible across environments.

As AI workflows and agentic coding become more common, the IDE remains incredibly important. For the human developer, it is the most convenient and productive place to work. But for an AI agent, the IDE becomes something even more strategic: the semantic layer that provides context about the codebase, dependencies, project structure, inspections, tests, and developer intent. That context is what allows agents to be useful, safe, and accurate.

So the answer is not to move away from IDEs, but to make them more AI-native while also ensuring that access to AI agents is not locked behind a single proprietary environment. We need both: deeply integrated IDE experiences for developers who have access to them, and open, interoperable ways for agents to work across different tools and workflows.

That is why open protocols matter. MCP is one important step in allowing agents to connect to external tools and context. In addition, we have been working on Agent Client Protocol, or ACP, as a shared protocol for IDE-agent interoperability. The goal is to treat agents as first-class participants in the development lifecycle, not just as plugins inside one tool.

At the same time, we need to address the economic side directly through community access, education programs, startup support, flexible licensing, and entry points such as CLI or cloud-based workflows. This is especially important in regions like MENA, where talent is strong but access to paid tools can be uneven.

Ultimately, democratising AI development does not mean removing the IDE from the equation. It means making the IDE smarter, more agent-ready, and more open, while giving every developer, regardless of budget or geography, a realistic path to participate in the next generation of software development.

 

With AI embedded into coding workflows, governance and safety are becoming critical. How are you approaching guardrails that enable innovation without slowing developers down?

The key is to move from governance as a gatekeeper to governance by design. Traditional governance often relies on static policies, approval boards, and manual review processes. That may work when AI is used in isolated experiments, but it does not scale once agents are embedded into real software delivery workflows.

As agents become more autonomous, they do not just suggest code. They can generate changes, call tools, query context, trigger automations, open pull requests, and interact with CI/CD systems. That means governance needs to operate at runtime, not only through policy documents.

Our approach is to build security and governance directly into the agentic platform. Enterprises need to be able to define who is allowed to use which agents, which models they can access, which tools, skills, and MCP servers are approved, and what parts of the development environment or codebase they are allowed to touch.

A major part of this is isolation and blast-radius control. Agents should be able to run in controlled environments where their behaviour can be observed, evaluated, and audited before they are granted more autonomy. Permissions should be least-privilege and, where possible, just-in-time: enough for the agent to complete its task, but not broad enough to create unnecessary risk.

Visibility is just as important as control. Organisations need to understand what an agent did, which tools it used, what decisions it made, which human initiated the workflow, and what it cost. Without that, governance becomes aspirational rather than operational.

The goal is not to slow developers down. It is the opposite: to give developers and enterprises the confidence to adopt AI faster. When guardrails are built into the workflow, in the IDE, in the agentic environment, and in the organisational control plane, teams can innovate safely without waiting for manual governance checkpoints at every step. Done well, governance becomes an enabler of scale, not a blocker to productivity.

 

As organisations scale AI-assisted development, developer autonomy and enterprise control can start to clash. How should platforms evolve to strike that balance in a practical, scalable way?

The balance starts with recognising that developer autonomy does not mean unrestricted access to everything. Developers need room to experiment, move quickly, and adopt AI in ways that improve their workflows, but that experimentation should happen within boundaries that reflect the organisation’s risk appetite.

One of the biggest risks enterprises underestimate is that insider risk is not always malicious. A developer may have good intentions, but an AI agent acting with too much access, too little context, or unclear constraints can still create security, compliance, or operational issues. So the question is not whether to allow autonomy or impose control. The question is how to create safe autonomy.

Platforms need to evolve toward federated governance: clear enterprise-level guardrails combined with flexibility at the team and developer level. Organisations should be able to define approved models, tools, skills, MCP servers, data sources, permissions, and environments. Within those predefined boundaries, developers should have the freedom to experiment and build without waiting for manual approvals at every step.
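Federated governance of this kind can be modelled as layered policies: enterprise-wide allowlists, with team-level overrides that may only narrow, never widen, what is permitted. The policy shapes and names below are assumptions for illustration only.

```python
# Enterprise-wide guardrails: the outer boundary no team can exceed.
ENTERPRISE_POLICY = {
    "approved_models": {"model-a", "model-b"},
    "approved_tools":  {"git", "test-runner", "mcp:issue-tracker"},
}

# Team overrides may only restrict further (here, a hypothetical payments team).
TEAM_POLICY = {
    "payments": {"approved_models": {"model-a"}},
}

def is_allowed(team, kind, name):
    """Allowed only if both the enterprise and the team policy permit it.
    A team with no override inherits the enterprise policy unchanged."""
    enterprise_ok = name in ENTERPRISE_POLICY.get(kind, set())
    team_scope = TEAM_POLICY.get(team, {}).get(kind)
    team_ok = team_scope is None or name in team_scope
    return enterprise_ok and team_ok
```

Within these predefined boundaries, a developer never waits for a manual approval: the check is evaluated at runtime, and only requests outside the boundary need escalation.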

The practical model is: allow experimentation when it is within the organisation’s risk appetite, but make sure the right safeguards are in place. That means least-privilege access, isolated environments, observability into agent actions, auditability back to the human who initiated the workflow, and fast rollback if something goes wrong.

Leadership also needs visibility without turning governance into surveillance or bureaucracy. Platforms should provide dashboards that show where AI is being used, which workflows are driving value, where risks are emerging, and whether guardrails need to be adjusted. That allows organisations to evolve policies dynamically instead of relying on blanket restrictions that slow everyone down.

The goal is to make control feel invisible when things are operating safely, and decisive when risk increases. That is how platforms can give developers the autonomy to innovate while giving enterprises the confidence to scale AI-assisted development responsibly.

Software Development
AI Development
Dev Ops
Software Engineering
Developer Tools
Enterprise AI
Agentic AI
Digital Transformation

Orit Golowinski is a product executive with 15+ years of experience, leading strategy and growth across developer tools, AI-driven platforms, and enterprise software. Currently, she is working as the Product Lead for Enterprise AI Safety at JetBrains, driving innovation in AI governance, enterprise developer experience, and secure remote development. She leads initiatives that help organizations manage and scale AI-assisted coding responsibly across their developer ecosystems.

Recognized among Sharebird’s Top 50 Product Management Mentors (2024 & 2025) and a DevOps Institute Ambassador, she is passionate about applying AI responsibly to accelerate developer experience and enterprise transformation.


JetBrains is a global software company specializing in the creation of intelligent, productivity-enhancing tools for software developers and teams. It’s headquartered in Prague, Czech Republic, with 13 offices globally. JetBrains employs more than 2,200 people and has grown organically, with no external funding. Its product catalog covers every stage of the software development cycle as well as major technologies, programming languages, and educational processes.

Learn more at jetbrains.com