Protecting productivity as AI adoption outpaces governance: Colgate University CISO Blake Penn

Island News Desk
September 18, 2025
CXO Spotlight

Blake Penn, CISO at Colgate University, highlights the risks of using consumer-grade AI tools for company work without understanding data implications.

Credit: Outlever

There's a dangerous disconnect playing out in workplaces everywhere. According to the 2025 State of the CIO report, 97% of IT leaders see significant risks in the unchecked use of generative AI, while a staggering 91% of employees see no harm at all. Many on the front lines disregard constant warnings from IT in the name of productivity. But for security leaders, it's a five-alarm fire.

We spoke with Blake Penn, CISO at Colgate University, who has spent over 20 years leading security programs for organizations ranging from startups to Fortune 500 companies. For Penn, the risk is rooted in a fundamental misunderstanding, one that can only be solved with a new strategic playbook.

  • Data knowledge gaps: "The fundamental issue is that most AI users really don't understand what's happening with the data, where it's going, how it's being used," Penn says. Employees have spent decades learning the "why" behind familiar controls like firewalls and encryption. But AI is a new kind of black box. When an employee uses a public AI tool to summarize meeting notes or draft an email, they aren't thinking about data exfiltration. But that's exactly what's happening.

  • The danger of consumer-grade AI: "With AI, users often don't understand that when they open up ChatGPT or any other AI tools for company work, that data is no longer confined to the company's infrastructure," Penn explains. "It's like taking corporate data that's sensitive or confidential and unwittingly giving it to parties outside your organization that you don't have authorization to give it to." While businesses share data with third parties all the time for things like payroll, those exchanges are governed by strict legal contracts. With public AI, there is no contract, no governance, and no confidentiality.

According to Penn, this isn't the first time technology adoption has outpaced security. He draws a direct parallel to the dot-com boom, where organizations embraced the web first and were forced to retroactively build cybersecurity and privacy frameworks to clean up the mess. The difference now is the subtlety of the risk.

  • A losing strategy: Penn is not anti-AI, however. He calls it "operationally useful" and admits to using it himself, both personally and professionally. That pragmatic stance gives weight to his core argument: you cannot win by simply banning tools that offer real utility.

  • Bringing work home, and home to work: "This is one of the first major technologies people started using at home and brought into work, instead of the other way around," he notes. "That makes it something you can't stop, and something you have to manage." Once an employee gets a taste of the productivity boost from an AI tool, the genie is out of the bottle. They will find a way to use it.

If banning AI is a losing strategy, managing it requires a formal, two-pronged approach. First, the solution must be institutional. "The president and the board need to understand the importance of AI safety and what it means from a reputation standpoint," Penn says. This means forming a dedicated AI governance committee to address the full spectrum of risks, from security and privacy to intellectual property and fairness. The second prong is a bottom-up strategy for winning hearts and minds, built on a simple principle: never just say "no."

  • Finding alternatives: "You don't want to stop people from using AI. You want to give them a good alternative, one that does the same thing but doesn't put institutional data at risk," Penn advises. "If you just tell them 'no' without giving them a good alternative, they're still going to want that function." This strategy involves providing a menu of approved, powerful AI tools and educating users on the "why" by tailoring the risks to their roles, framing it as an intellectual property threat for faculty or a confidentiality breach for staff.

  • Sharing responsibility: Only after providing a safe and effective alternative can an organization fairly demand compliance. "Once you have a program that shows them why they should use it another way, it's their responsibility," Penn says. "If they continue to go outside the approved tools, that's just like any other policy violation."

Ultimately, Penn forecasts that this challenge will create a new, dedicated role within organizations: an expert who functions as a CISO for artificial intelligence. It's a necessary evolution to manage a technology that is becoming deeply embedded in every facet of work. The goal is a framework where innovation and safety are not mutually exclusive. "It's really the best of both worlds: you get the work done without introducing the inherent risks of unmanaged AI."
