Autonomous AI Drives Return To Structured Pipelines And New Tolerance For Experimentation

New Tab News Team
March 30, 2026
AI

Bethanie Nonami, CEO and Co-Founder of VINify, on the challenges of balancing innovation and safety in AI adoption, the lessons of traditional software pipelines, and the need for human oversight.


We almost need to go back to how we used to do things to just make sure that what we're developing is safe. Even when we're developing autonomous things, there still has to be human oversight.

Autonomous agents are forcing enterprises to rethink how software gets built, tested, and governed. Systems that can independently chain actions introduce a new class of risk, where speed and scale amplify small mistakes into operational failures. As employees plug AI into SaaS tools and personal workflows, experimentation is moving faster than oversight, exposing gaps in traditional controls. The result is a new operating reality where teams must pair stricter development discipline with a higher tolerance for uncertainty.

We spoke with Bethanie Nonami, CEO and Co-Founder of motor vehicle monitoring and risk management company VINify and an award-winning AI Consultant with Markey Nonami Incorporated. With over 30 years of enterprise technology experience, including foundational leadership roles at IBM and Lodestar Solutions, Nonami has spent her career bridging the gap between technical architecture and executive change management. Currently, her focus is on how humans and autonomous agents intersect, and how that interaction raises significant questions about data security and safety: "We almost need to go back to how we used to do things to just make sure that what we're developing is safe. Even when we're developing autonomous things, there still has to be human oversight," she says.

According to Nonami, containing these autonomous systems frequently demands that enterprises reinstate strict test-to-production pipelines. Traditional software development focused on eliminating discrete bugs, and did so at a slower pace that included building test environments, performing extensive QA, and keeping humans in the loop. Autonomous agents, by contrast, can trigger operational problems as they optimize their tasks at massive scale and speed. Preventing unintended outcomes requires a combination of continuous human oversight and robust agent observability within testing pipelines.
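To make that concrete, here is a minimal sketch of what such a gate might look like: an agent-proposed action is replayed in staging, logged, and held for human sign-off before it touches production. All the names here (AgentAction, run_in_staging, require_human_approval) are illustrative assumptions, not any particular framework's API.

```python
# Minimal sketch of a test-to-production gate for agent actions.
# Names are illustrative assumptions, not a real framework's API.

import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-pipeline")

@dataclass
class AgentAction:
    name: str
    target: str
    payload: dict

def run_in_staging(action: AgentAction) -> bool:
    """Replay the action against a staging environment and record the result."""
    log.info("staging run: %s -> %s", action.name, action.target)
    # ... execute against staging copies of the real systems ...
    return True  # placeholder: staging checks passed

def require_human_approval(action: AgentAction) -> bool:
    """Block until a human reviewer signs off on the recorded staging run."""
    answer = input(f"Promote '{action.name}' on {action.target}? [y/N] ")
    return answer.strip().lower() == "y"

def promote_to_production(action: AgentAction) -> None:
    if not run_in_staging(action):
        log.error("staging failed; action rejected")
        return
    if not require_human_approval(action):
        log.warning("human reviewer rejected action")
        return
    log.info("promoting to production: %s", action.name)
    # ... only now execute against production ...

promote_to_production(
    AgentAction("update_pricing", "billing-db", {"sku": "A1", "delta": -0.05})
)
```

The structure mirrors the discipline Nonami describes: the agent can propose at machine speed, but promotion happens at human speed.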

  • Ethics and AI: The challenge of autonomous agents is that they are autonomous. Nonami is clear that this opens the door for bad behavior, malicious or not. "Agents can optimize, and they might not be malicious intentionally, but then there have been cases where agents are maliciously and intentionally not doing good things. So we have to understand what we are doing to protect data, which means understanding that data. Do we have PII? Do we have HIPAA data?" (A sketch of where such a data check could sit follows this list.)

  • Firewall fallacies: The risks multiply when organizations attempt to build enclosed systems using free, open-source models. Nonami says that bringing these models behind the corporate firewall can sometimes create a false sense of security, widening the potential impact if agents are given broad access to sensitive internal data. "We can design almost anything in these open systems. And then even when you put them behind your walls, it opens it up even more. So how do you do those things in a way that ensures you have fail-safes and you can still recover from things that can go rogue?"
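Picking up the first bullet's questions about PII and HIPAA data, the sketch below shows one place a data-classification gate could sit: before an agent reads a record at all. The regex patterns and function names are illustrative assumptions, and real HIPAA compliance requires far more than pattern matching; the point is only where the check lives in the flow.

```python
# Illustrative data-classification gate: scan a record for obvious PII
# patterns before an agent is allowed to read it. A real deployment would
# use a proper classifier; regexes here only show the shape of the check.

import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def classify(text: str) -> set[str]:
    """Return the set of PII categories detected in the text."""
    return {label for label, pattern in PII_PATTERNS.items() if pattern.search(text)}

def agent_can_read(record: str, allowed: set[str] = frozenset()) -> bool:
    """Deny access if the record contains PII the agent is not cleared for."""
    return classify(record) <= allowed

print(agent_can_read("Customer: jane@example.com, SSN 123-45-6789"))  # False
print(agent_can_read("VIN 1HGCM82633A004352, mileage 42000"))         # True
```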

The technical hurdle directly clashes with IT's need for control. Traditional IT frameworks were built to govern incremental software updates, but agentic AI stretches those controls beyond their original design, introducing the risk of much larger operational changes. Many IT departments report that it has become harder to secure the perimeter as users demand highly capable, fragmented AI tools tailored to their specific workflows. Unchecked user access introduces new attack surfaces, requiring practical guardrails around chat and collaboration tools to prevent unauthorized screenshots in Microsoft Teams, stop RPA data leaks, and mitigate the threat of indirect prompt injections hijacking AI-powered browsers. Securing this environment is also pushing teams to think differently about permission boundaries and control interfaces, such as those defined in the Model Context Protocol architecture.
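In that spirit, the following sketch expresses a permission boundary as a per-agent tool allowlist with every call audited. It illustrates the boundary idea that protocols like MCP formalize around tools; it is not MCP's actual interface, and every name in it is an assumption.

```python
# Sketch of least-privilege tool grants for agents: each agent can invoke
# only the tools its task requires, and every invocation is logged.
# Illustrative of the permission-boundary concept, not the MCP spec.

from typing import Callable

TOOL_REGISTRY: dict[str, Callable[..., str]] = {
    "read_ticket": lambda ticket_id: f"ticket {ticket_id} contents",
    "send_email": lambda to, body: f"sent to {to}",
    "delete_record": lambda record_id: f"deleted {record_id}",
}

# Least-privilege grants: an agent sees only what its task needs.
AGENT_GRANTS = {
    "support-triage-agent": {"read_ticket"},
    "outreach-agent": {"read_ticket", "send_email"},
}

def invoke(agent: str, tool: str, /, **kwargs) -> str:
    granted = AGENT_GRANTS.get(agent, set())
    if tool not in granted:
        raise PermissionError(f"{agent} is not granted '{tool}'")
    print(f"audit: {agent} -> {tool}({kwargs})")  # every call leaves a trail
    return TOOL_REGISTRY[tool](**kwargs)

print(invoke("support-triage-agent", "read_ticket", ticket_id=42))
# invoke("support-triage-agent", "delete_record", record_id=7)  # PermissionError
```

The audit line doubles as the observability hook: if an agent goes rogue, the trail of granted and denied calls is what investigators read first.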

  • The kill switch conundrum: While it might sound like something out of a science fiction story, Nonami believes a rogue agent that cannot be stopped in time is a realistic scenario. "What if you give an agent too much power and you can't shut it down fast enough or you don't have visibility into the system, or you don't understand what it's doing in the background? How do you turn that off?"
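One common answer, sketched below under assumed names, is to make the agent loop re-check a shared halt flag before every step so an operator can stop it mid-run. In production the flag would live in shared infrastructure such as a feature-flag service or key-value store, not in a single process.

```python
# Hedged sketch of a kill switch: the agent re-checks a shared flag before
# every step, so an operator can halt it between actions.

import threading
import time

kill_switch = threading.Event()

def agent_loop(steps):
    for step in steps:
        if kill_switch.is_set():
            print("kill switch tripped; halting before:", step)
            return
        print("executing:", step)
        time.sleep(0.1)  # stand-in for real work

worker = threading.Thread(
    target=agent_loop, args=(["plan", "fetch", "write", "deploy"],)
)
worker.start()
time.sleep(0.25)   # operator notices misbehavior...
kill_switch.set()  # ...and halts the agent before its next step
worker.join()
```

Note the limitation Nonami implies: a check between steps only works if steps are small. An agent that batches hours of work into one opaque action outruns any flag.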

IT departments often struggle to shoulder this level of operational risk alone. Without a centralized kill switch, many enterprises are scrambling to build independent governance boards to weigh risks more objectively. As highlighted in a recent SailPoint governance gap report and subsequent industry analysis of its findings, least-privilege design is becoming a baseline requirement, which further constrains would-be rogue agents.

The friction over acceptable AI use cases and oversight is now highly visible in the public sector as well, evidenced by Anthropic's push for Claude Gov and the subsequent legal scrutiny over the Pentagon's motives regarding mass surveillance and autonomous weapons. Between agent misbehavior and the potential for agents to be used in less-than-transparent ways, the need for ethical and practical governance has never been more apparent.

  • Checking emotional baggage: A big part of adopting AI is managing risk, and that responsibility moves well beyond the limited scope of IT departments and implementation. Nonami says it falls to leaders to weigh value against risk. "With every decision we make, there's an element of risk that we don't even understand yet. We're seeing governance boards at the departmental level that won't have a single person from that department on that board, because they are too close to it to look objectively. It has to be governed by a body that can look at it objectively and is not emotional, and it's not subjective."

Nonami notes a limit to what structure and controls can achieve on their own. Rigorous technical guardrails and governance boards can only do so much inside a corporate culture that penalizes failure. In her experience, a rigid legacy enterprise mindset often poses a greater barrier to AI adoption than the underlying technology. She argues that a lack of flexibility leads leaders to blame the technology and pull back when early experiments do not meet traditional software ROI expectations. That cultural tension shows up in the data: when executives are skeptical or fearful, employees frequently turn to unsanctioned tools, with unsupervised AI use creating additional security exposure as it spills over into personal accounts and consumer-grade apps.

  • Procurement panic: For Nonami, the transformative nature of AI is felt across the organization, and organizations can't expect transformation unless they are also willing to negotiate some of the ambiguity of change. "How we've done software in the past, AI changes that. How we've bought software in the past, AI changes that. So to go into it with an expectation that it's going to be this transformation, this productivity gain, we're just guessing. The leaders are doing the best they can, but if that culture doesn't support that, it has to bend for this."

Pulling the plug on AI initiatives often stems from a defensive reaction. A risk-averse culture naturally tries to avoid visible missteps in the face of unfamiliar technical friction. The fear of looking foolish in front of peers or stakeholders creates an anxiety that is entirely different from what many leaders remember from past software rollouts. When official policy is overly restrictive or unclear, that anxiety does not stop people from experimenting altogether. Instead, it moves activity underground. Recent research shows that employees and even executives are quietly bypassing AI policies. Rather than just a compliance failure, this shadow IT acts as a diagnostic signal for IT leaders: simply banning AI doesn't work.

  • Clippy didn't judge: For many teams, resolving these questions goes beyond risk registers and policy documents. It often requires leaders who are willing to admit uncertainty and tolerate some level of controlled failure. In her work with executive teams, Nonami finds that the most effective organizations treat early AI deployments as structured experiments. "I was here when the Internet was started, and I was here when we got smartphones. And no one thought there was psychology around learning Microsoft Office or looking like a fool if it doesn't work. I think there's a mindset and psychology specifically around AI, and I don't think people are even taking that into account."

Addressing these psychological and technical blind spots often involves rethinking how leadership views its relationship with intelligent systems. Nonami points out that because human foresight naturally has limits, teams are unlikely to predict every way an autonomous agent might fail or over-optimize a process. Therefore, she advises organizations to build formal decision frameworks and actively embed AI into the validation process to audit their own human blind spots.
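As a sketch of what embedding AI into validation could look like, the snippet below frames a model as a required reviewer that hunts for failure modes the human test plan missed. Here, ask_model is a deliberate placeholder for whatever model client a team uses; the gate structure, not any specific API, is the point.

```python
# Sketch of an AI reviewer embedded in a validation step, in the spirit of
# Nonami's advice to use AI to audit human blind spots. ask_model is a
# placeholder: wire it to your LLM provider of choice.

def ask_model(prompt: str) -> str:
    """Placeholder for a call to whatever model client the team uses."""
    raise NotImplementedError

def blind_spot_review(change_description: str, test_plan: list[str]) -> str:
    prompt = (
        "You are auditing a release plan for failure modes the authors "
        "may have missed.\n"
        f"Change: {change_description}\n"
        f"Planned tests: {test_plan}\n"
        "List plausible failure or over-optimization scenarios NOT covered "
        "by the planned tests, one per line."
    )
    return ask_model(prompt)

# Usage idea: run the review as a required gate in the pipeline, then have
# a human decide which flagged scenarios warrant new tests before promotion.
# gaps = blind_spot_review(
#     "agent auto-adjusts pricing", ["unit tests", "staging replay"]
# )
```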

The seeming paradox is that the more powerful these systems become, the more enterprises need to revive disciplined development practices while simultaneously loosening their grip on certainty and control. Nonami is clear that while robust controls around data, access, and observability are baseline requirements, so is the willingness to experiment, to surface blind spots, and to let AI help test assumptions rather than simply automate existing workflows. "There's fear. There's resistance. And the biggest risk is probably us. You have to probably use AI as a thought partner even in some of these experiments just to think beyond what our limited beliefs are."
