Enterprises Take Accountability for Agentic AI Through Explicit Risk and Access Decisions

New Tab News Team
March 22, 2026
AI

Quinten Steenhuis, Co-Director of the Legal Innovation and Technology Lab at Suffolk University Law School, discusses the impact of agents on existing infrastructure, and how that impact will push rapid changes in governance, accountability, and ethics in AI.

Credit: Outlever

“The machines are a tool. They're not responsible. You are, and you have to do your due diligence on your output.”

The evolution of artificial intelligence is moving beyond conversational capabilities toward autonomous agents capable of executing real-world tasks: updating databases, accessing systems, and completing workflows. That leap from instruction to execution holds significant potential for the speed and scalability of modern systems, but it also exposes fragile assumptions about fairness, access, and cost that have kept critical infrastructure afloat.

We spoke with Quinten Steenhuis, Co-Director of the Legal Innovation and Technology Lab at Suffolk University Law School and a leader in legal technology with a deep professional background in software development and systems administration. Steenhuis specializes in developing applications to improve access to justice, most recently demonstrated with his work on the courtformsonline.org project. Recognized as an ABA Legal Rebel and a Fastcase 50 honoree for his work at the intersection of law and technology, he has a unique perspective on how technology and government intersect.

When it comes to AI, Steenhuis says that the true value of this new era is unlocked the moment an agent stops talking and starts doing. “Agentic AI becomes truly valuable the moment it stops just telling you what to do and actually does the work for you, closing the loop from instruction to execution.”

The move from communication to execution defines the boundary between chatbots and agents. Chatbots talk and respond; agents act, interacting directly with databases, logging into systems with temporary credentials, and running through multi-step workflows. This is the true dividing line, according to Steenhuis: a feedback loop in which an agent can try a task, see what works, and correct itself without constant human intervention, fundamentally changing the future of automation.

  • Barriers by design: As agents remove friction from digital systems, they also eliminate the informal limits that have quietly governed access and demand. Steenhuis frames those barriers as a form of metering rather than protection: getting through government hoops has long been a way to ration services. "Because these services are underfunded, they erect artificial barriers that make them harder to access," he explains. "We saw this with Medicare and Medicaid, where the approach wasn't an explicit cut, but a decision to make it much harder to keep the service. Magically, money is saved because people won't bother to go through all the hoops anymore." In a human-driven system, that effort naturally limits usage. In an agent-driven one, where tasks execute instantly and at scale, that constraint disappears, forcing organizations to explicitly decide who gets access, how much usage they can support, and what those decisions cost.

  • Pay to play: The erosion of these artificial barriers forces decisions that were once avoided, turning access into something that must be explicitly controlled. Steenhuis points to the strain this creates across public systems and the open internet, where smaller actors can no longer absorb automated demand. “People who were willing to put up a quick hobby website with information can’t possibly pay the cost of a million AI agents pulling that data every day. They may conclude that they simply can’t afford to put up that hobby website anymore,” he says, pushing organizations toward tighter controls on AI-driven traffic and clearer limits on who gets access and at what cost.

Then there's the question of performance. Proponents argue that agentic AI boosts performance across many industries, but for Steenhuis, the intense focus on performance creates a paradox. The real concern isn't just that AI makes mistakes, but how it makes them: unlike human error, an AI's mistakes are deterministic and scalable. That distinction helps explain public unease, clarifies what AI agents mean for businesses and society, and forces a re-evaluation of the economic promise of these technologies.

  • Bias at scale: Steenhuis articulates a major challenge for performance and scalability in agentic AI: widespread bias. "The real issue is that if an AI has a bias, it's unfair to the same people every time. Before, with a hundred different people, you had a random chance of encountering someone biased against you. With these AI tools, the same people might get the same negative outcomes more regularly because the process is more deterministic."

  • Trust, but verify: As agents scale, so does the impact of their bias, concentrating accountability with the humans who deploy them and forcing clearer governance boundaries. Steenhuis emphasizes that autonomy does not remove oversight but reshapes it. “In our lab at Suffolk, we use agentic AI to maintain a database of courts. The agent reviews our database, verifies each court’s current location, and, if it determines there has been a change, identifies three or four credible news sources to support that change," he explains. "It’s all about gathering information so it can be successfully validated by a human using clear, rules-based checklists.” The model he describes has agents surfacing evidence while humans make the final call.
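
The pattern Steenhuis describes, where an agent proposes a change with supporting evidence and a rules-based checklist gates it before human sign-off, can be sketched roughly as follows. All names and thresholds here are hypothetical illustrations, not the Suffolk lab's actual code:

```python
from dataclasses import dataclass, field

@dataclass
class ProposedChange:
    """A change the agent proposes but never applies directly."""
    court_id: str
    field_name: str
    old_value: str
    new_value: str
    sources: list = field(default_factory=list)  # supporting citations

MIN_SOURCES = 3  # mirrors the "three or four credible news sources" rule

def passes_checklist(change: ProposedChange) -> bool:
    """Clear, rules-based checks a human reviewer can rely on."""
    if len(change.sources) < MIN_SOURCES:
        return False  # not enough supporting evidence
    if change.new_value == change.old_value:
        return False  # nothing actually changed
    return True

def triage(changes: list) -> tuple:
    """Split agent output into a human-review queue and a reject pile."""
    ready, rejected = [], []
    for change in changes:
        (ready if passes_checklist(change) else rejected).append(change)
    return ready, rejected
```

The key design choice is that the agent only fills in ProposedChange objects; actually applying a change remains a human action.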

Such hands-on governance extends to managing an agent's operational leash and planning for inevitable errors. Steenhuis recommends keeping agents on a short, one-off timeline, noting that he personally doesn't step away for more than an hour before reviewing the output. That kind of human-in-the-loop vigilance is paired with an engineering mindset that accepts a quantified error rate from the start. "You have to decide upfront if you are okay with a 99% accuracy rate. That means one out of a hundred times, it will do something unexpected. It's not a matter of blame, because that failure rate was a known risk. You have to build that into your expectations from the beginning and have a disaster recovery plan to fix things when they go wrong."
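
The arithmetic behind that expectation is worth spelling out: a 99% per-task accuracy compounds quickly across many runs. A minimal sketch, assuming for illustration that task outcomes are independent:

```python
def p_at_least_one_failure(per_task_accuracy: float, n_tasks: int) -> float:
    """Chance of at least one unexpected action across n independent tasks."""
    return 1 - per_task_accuracy ** n_tasks

# At 99% per-task accuracy, a hundred tasks are more likely than not
# to include at least one failure.
print(round(p_at_least_one_failure(0.99, 100), 2))  # → 0.63
```

In other words, an agent that is "right 99% of the time" will more likely than not do something unexpected over a hundred tasks, which is exactly why Steenhuis argues for building a disaster recovery plan in from the start.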

Ultimately, despite their autonomy, these agents are prompting many organizations to make and formalize tough decisions they had previously avoided. The measurability of AI can challenge plausible deniability, in turn compelling some leaders to go on the record about performance, bias, and what an acceptable failure really looks like—all while navigating the challenges of securing agent-based systems.

  • The explicit bargain: Accountability cuts both ways, and measurability may force enterprise leaders' hands: they will have to quantify risk and bias and, as Steenhuis argues, demonstrate their decision-making within formal governance frameworks. "We have to make decisions we never had to make before. We didn't get to say we're okay with a 10% error rate among people because we never measured it. We do have to do that with AI."

The act of using a powerful tool, whether it's a legal database or an intelligent agent, does not absolve the user of their responsibility for the final output. While the temptation might be to offload accountability into the black box of an AI model, the future of automation will, according to Steenhuis, be built on governance frameworks and transparent decision-making by business and tech leaders. "The machines are a tool. They're not responsible. You are, and you have to do your due diligence on your output."
