ASU’s CISO Pushes AI Data Governance Upstream To Procurement

New Tab News Team
April 19, 2026
CXO Spotlight

Lester Godsey, CISO at Arizona State University, explains why AI vendor contracts have become the frontline data privacy battle in higher education, and how ASU is winning it.



Few environments expose the tension between AI ambition and data responsibility more sharply than higher education. Universities sit on troves of sensitive research, student, and institutional data that AI vendors want access to, and the contract terms they're proposing to get it have grown increasingly aggressive. Data ownership, model training permissions, and rights to derivative outputs in perpetuity are now standard points of contention in procurement negotiations. The security leaders navigating this most effectively aren't the ones with the strictest controls. They are the ones treating privacy as a prerequisite before the contract is signed and discovering that clear boundaries actually accelerate AI adoption rather than slow it down.

Lester Godsey is the Chief Information Security Officer at Arizona State University, the largest public university in the United States by enrollment. At that scale, sensitive data moves constantly across third-party vendor relationships: student records, research outputs, and faculty intellectual property. Before joining ASU, he served as CISO for Maricopa County, the fourth largest county in the United States, and for the City of Mesa, accumulating more than 30 years of public-sector security leadership. Recently named one of the top 100 CISOs in North America, Godsey has made his approach to AI adoption at ASU a model for how security can enable, rather than obstruct, institutional innovation. To Godsey, the hardest AI governance questions facing enterprises today are not technical. They are showing up in the fine print of vendor contracts.

"Vendors want to embed AI into their products, but can we do that in a responsible way? We're seeing terms around ownership and even rights to use derivative data in perpetuity, and we have to decide what's acceptable," says Godsey. At a university, that question carries particular weight. Student records fall under FERPA. Research data is often proprietary or IRB-protected. What once required a routine legal flag now requires a decision about long-term data sovereignty.

For universities, the vendor ecosystem problem is acute. Academic environments run on third-party tools, from learning management systems to research platforms to collaboration software, and vendors across that stack are now embedding AI into their products. Supply chain complexity deepened during the COVID-19 cloud acceleration, and the AI wave has compounded it: a new generation of AI-native vendors has entered the mix alongside established players retrofitting models into existing contracts. The incentive misalignment is structural. Vendors improve their models with customer data. Universities need to protect it, and in higher education that obligation carries legal and ethical weight that goes well beyond standard corporate data governance. That tension lands directly in the contract, forcing security teams to maintain supply chain accountability and scrutinize vendor terms and conditions.

  • Forever is a long time: "We have terms where third parties can't use our data to train their models. That's pretty common, but of course, vendors want to do that. We've seen some very interesting proposed language regarding not only data ownership (which security has always pushed back on, and isn't unique to AI) but also the right to use created or derivative data in perpetuity," says Godsey. The perpetuity clause is the sharpest edge of a broader pattern: vendors are arriving at the negotiating table with contract language that treats customer data as a long-term asset, not a temporary input.

  • Guardrails as gas pedals: "Even in higher education, there's a certain population that is very reticent about using AI. The more focused and clear we are in terms of privacy, the greater the trust and, by extension, the greater adoption," says Godsey. At ASU, that equation has proven out: treating privacy as a design principle rather than a compliance checkbox has lowered the barrier to AI uptake across a campus of hundreds of thousands of students and staff.

ASU's response to aggressive vendor contract terms is to implement a targeted administrative checkpoint: security is embedded into procurement before departments make a commitment. Every new vendor goes through a Vendor IT Risk Assessment (VITRA), timed to align with the standard purchasing workflow. In a university environment, where faculty and researchers often move fast on new tools and vendor relationships develop quickly, getting ahead of the contract is everything. Positioning the assessment at the procurement stage means expectations are set early, before a department has committed, before a contract is signed, and before security becomes the obstacle rather than the advisor. For this to work, Godsey emphasizes clear ownership of the process and security acting as a partner to the business units it serves, not an obstacle they route around.

  • Surgical speed bumps: "We want to be very precise and effective because we're cognizant that any friction we add is looked at as a negative from an organizational perspective. We don't want to take a blunt-object approach," says Godsey. Where possible, automation compresses the assessment window so the delay is minimal. "We want to get as close as possible to the initial workflow so they don't get too far down the road, think it's a slam dunk, and then all of a sudden, information security throws a monkey wrench into that process. We're purposely putting in friction, but with the lightest touch that we can," he adds. When the risk assessment surfaces a high-risk vendor, the process doesn't end there. Godsey's team works with the department to identify mitigating controls that could bring the risk to an acceptable level, keeping the collaboration intact even when the answer isn't a straightforward yes.

  • Translating the threat: Security has a well-worn reputation as the department that slows things down. Godsey's antidote is deliberate: lead with business impact, not technical detail. "When we're able to put it in terms from a business perspective, those conversations tend to go better," he says.

In a university, the question of who owns risk is never simple. When a faculty member adopts a new AI tool, a department head approves a vendor, or a research team shares data with an outside partner, the decision to accept or reject that risk cannot sit with security. It sits with the people making the business decision. Godsey illustrates the principle with a moment from his tenure at Maricopa County, where he addressed the board of supervisors directly on the question of ransomware accountability.

  • Who holds the bag: "I took that as an opportunity to let the entire board know that if and when we were ever hit with ransomware, they would ultimately be the ones to decide whether or not to pay the ransom," says Godsey, whose work on public sector risk and accountability has consistently placed the decision at the business level. "Cybersecurity cannot accept business risk for an organization," he says. "The business or the component within the organization as a whole ultimately has to either accept it, not accept it, or decide how to treat it." His position reflects a broader industry standard: clear accountability structures require that the function with the most at stake operationally is the one that makes the final call.

  • Line in the sand: For Godsey, drawing this line is also a matter of structural integrity. "Ultimately, we can't accept the risk for the business because we don't have context. We don't have awareness. We don't understand the business as well as the business itself does," he says. The separation is not just philosophical. An entity assessing risk cannot also be the one accepting it on behalf of the organization it serves.

For Godsey, AI's most consequential effect is not invention of new threats but amplification of traditional ones: risks that organizations have managed for decades are now operating at a scale and speed that changes their character entirely. Prompt injection attacks illustrate the point: new in syntax, but structurally similar to SQL injection, where attackers insert commands into a form field to pull information from a backend system. Data privacy, ownership, and exfiltration are decades-old concerns. In a university environment, where student records, research data, and faculty IP flow through dozens of vendor relationships, AI has simply made the blast radius larger. Instead of viewing this expanded threat surface as a reason to panic, Godsey frames it as the exact reason his collaborative model for business risk ownership is so vital.

"The problems that have plagued organizations for decades have simply gotten significantly larger as a result of AI," Godsey notes. "The ability to de-anonymize that information has almost been commoditized. The impact of data flowing out of your environment through third-party relationships is potentially significantly larger because that data is being used to train models by third parties we have no insight into."
