State and local governments are moving quickly from artificial intelligence (AI) experimentation to production use, and that shift is forcing a new kind of security posture – one that can keep up with constantly changing models, data pipelines, and cloud services.
Recent research suggests that as generative AI use spreads, states are already putting security guardrails and oversight mechanisms in place, and that the next challenge is maintaining visibility over time. In the 2025 State CIO Survey from the National Association of State Chief Information Officers (NASCIO), 82% said employees in the CIO organization are using GenAI tools in their daily work, while 84% said their states have implemented a GenAI inventory and are documenting uses in agencies and applications. At the same time, 88% reported implementing “responsible use, flexible guardrails, security, ethics,” underscoring how closely adoption and risk management are moving together.
A separate “From Vision to Value” survey of 100 state and local IT decision-makers shows why that shift matters: 97% of respondents say cyber risk still keeps them up at night. The same research found that 55% have already piloted AI in at least one mission-critical workflow – a signal that AI at scale is arriving alongside the need for stronger, more continuous visibility and controls to protect sensitive data and systems.
As GenAI moves from pilots into day-to-day operations, state and local IT leaders are finding that the toughest part isn’t launching new tools – it’s sustaining oversight as models, data sources, and cloud services evolve. Those realities framed a MeriTalk webinar discussion, “Making AI Safe at Scale: Visibility, Controls, and Oversight,” with William Patton, solutions engineering manager for cloud security firm Wiz; Greg Carpenter, senior security partner strategist at cloud computing giant Amazon Web Services; and Ben Palacio, senior information technology analyst for Placer County, Calif. Their message was clear: Safety at scale depends less on one-time approvals and more on continuous visibility, data-driven controls, and cross-team operating discipline.
Visibility is the first control plane for AI
As agencies adopt AI, inventory and oversight can’t be static, the experts stressed.
“In the snapshot in time that we’re used to, every week, every month, every quarter, we report out bad ports and IPs exposed. We report out our CVEs,” Patton said, but cautioned that these traditional governance rhythms – quarterly spreadsheets and point-in-time reviews – don’t match the pace of AI services and feature releases in the cloud.
“Now that we’ve got that broad API availability, we can … move beyond the spreadsheet and actually interrelate these things in a way that people can understand the risk,” he noted.
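To make that concrete, the sketch below shows one way an agency might pull exposure data directly from cloud APIs and relate it to specific workloads instead of maintaining a spreadsheet. It assumes an AWS environment, the boto3 SDK, and a hypothetical “workload=ai” tag convention – an illustration of the approach the panelists described, not a prescribed tool.

```python
# A minimal sketch, assuming an AWS environment, the boto3 SDK, and a
# hypothetical "workload=ai" tag convention: pull live data from two APIs
# and join it so exposure findings are tied to the workloads they affect.
import boto3

ec2 = boto3.client("ec2")

# 1. Find security groups that allow inbound traffic from anywhere.
open_groups = set()
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg.get("IpPermissions", []):
        if any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])):
            open_groups.add(sg["GroupId"])

# 2. Cross-reference running instances tagged as AI workloads (hypothetical tag).
paginator = ec2.get_paginator("describe_instances")
filters = [{"Name": "tag:workload", "Values": ["ai"]},
           {"Name": "instance-state-name", "Values": ["running"]}]
for page in paginator.paginate(Filters=filters):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            exposed = [g["GroupId"] for g in instance["SecurityGroups"]
                       if g["GroupId"] in open_groups]
            if exposed:
                print(f"{instance['InstanceId']} is internet-exposed via {exposed}")
```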
For local governments, Palacio said early AI wins are often intentionally low risk, which can help agencies build momentum while they strengthen their guardrails. In Placer County, for example, a public-facing chatbot is backed by AI, but “it’s also low security risk, because it’s [leveraging] publicly posted information,” Palacio said.
But even when the initial use case feels safe, the operational question becomes repeatability, Patton observed: how to keep discovering AI usage as it expands and shows up where teams don’t expect it, from new cloud features to third-party libraries embedded in applications. Teams need tools that help them understand “how those models are being used [and] where that data is,” he said.
Data classification turns AI security into practical decisions
Across the discussion, “secure the data” emerged as the most actionable starting point – especially because AI systems can accelerate exposure of sensitive information that may already be poorly governed.
Palacio put the risk in plain terms: “O365 or [Microsoft] Copilot can look at your entire network, and if you have that Excel spreadsheet … hidden in the corner of the network that nobody knows about, the AI doesn’t care. It’s going to go find it, and now it’s going to be in front of everybody.”
Baseline configuration checks are a good place to start as agencies work to secure data, but they can’t be one-size-fits-all. Rather, they should be driven by scope and data sensitivity, Carpenter cautioned.
“Encryption is obviously something everybody should be looking at when you’re looking at data at rest or data in transit, [and] obviously, identity access management,” he said. But agencies should also match controls to the use case. For a lower-risk workload, he noted, “You don’t necessarily need the same baseline configuration checks that you might need for something that’s handling Social Security numbers or PII data.”
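A minimal sketch of what sensitivity-driven baseline checks could look like in practice follows. It assumes AWS S3, the boto3 SDK, and a hypothetical “data-classification” bucket tag; the tag values and the checks themselves are illustrative, not drawn from the webinar.

```python
# A minimal sketch, assuming AWS S3, the boto3 SDK, and a hypothetical
# "data-classification" bucket tag: every bucket gets the same baseline
# check, while buckets holding sensitive data get a stricter encryption check.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def classification(bucket: str) -> str:
    """Read the hypothetical data-classification tag, defaulting to 'public'."""
    try:
        tags = s3.get_bucket_tagging(Bucket=bucket)["TagSet"]
        return {t["Key"]: t["Value"] for t in tags}.get("data-classification", "public")
    except ClientError:
        return "public"

for bucket in s3.list_buckets()["Buckets"]:
    name, level = bucket["Name"], classification(bucket["Name"])

    # Baseline for every bucket: public access should be fully blocked.
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        if not all(cfg.values()):
            print(f"{name} ({level}): public access not fully blocked")
    except ClientError:
        print(f"{name} ({level}): no public access block configured")

    # Stricter check only for sensitive classifications: require KMS default encryption.
    if level in ("pii", "restricted"):
        try:
            rules = s3.get_bucket_encryption(Bucket=name)[
                "ServerSideEncryptionConfiguration"]["Rules"]
            algorithms = [r["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"]
                          for r in rules]
            if "aws:kms" not in algorithms:
                print(f"{name} ({level}): sensitive data without KMS default encryption")
        except ClientError:
            print(f"{name} ({level}): no default encryption configured")
```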
The focus on data is also reflected in broader state and local AI readiness research. “From Vision to Value” found that agencies see AI as inseparable from modernization, but data readiness is a persistent gap. Just 24% of state and local IT decision-makers described their data practices as “seamless and strategic.”
Threat detection must keep pace with AI-powered attackers
Security teams are also contending with a new reality: Threat actors are using AI to move faster and probe more targets at once, raising the bar for speed and automation in cyber defense.
Threat actors are “attempting to find those zero days. They’re attempting to find those exposed services constantly. And AI is only accelerating their ability to do that, and it also requires us to use [AI] to try to respond,” Patton observed.
State leaders are looking at AI to expand cyber defenses without growing head count. In another recent discussion, Mississippi CIO Craig Orgeron said, “There’s been a lot of conversation in Mississippi about how we augment our cybersecurity efforts. We really think AI could be a way for us to scale cyber services pretty quickly and dramatically.”
In the current environment, Patton said agencies need to move beyond log review as a manual, human-scale task. “They can’t look through the millions and millions of log lines and analyze every single call to every API and SDK; they’re just not capable of doing that,” he said.
Instead, AI can accelerate triage and prioritization. It enables analysts to quickly identify behavior changes and move faster to remediation steps such as isolating systems or tightening permissions, he noted.
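As a simple illustration of that triage pattern, the sketch below flags identities whose activity suddenly diverges from their own history, so analysts start from a short review list rather than millions of raw log lines. It uses a basic statistical baseline rather than a trained AI model, and the log counts are hypothetical.

```python
# A minimal sketch of automated triage over call-volume logs, using a simple
# statistical baseline (not a trained AI model): flag identities whose activity
# today deviates sharply from their own history. All data here is illustrative.
from statistics import mean, pstdev

# identity -> daily API call counts over recent days (hypothetical history)
history = {
    "svc-chatbot":  [120, 130, 125, 118, 127],
    "svc-etl":      [900, 880, 910, 905, 895],
    "analyst-jdoe": [40, 35, 45, 38, 42],
}
today = {"svc-chatbot": 124, "svc-etl": 4200, "analyst-jdoe": 41}

def flag_anomalies(history, today, threshold=3.0):
    """Return identities whose activity today is > threshold std devs off baseline."""
    flagged = []
    for identity, counts in history.items():
        mu, sigma = mean(counts), pstdev(counts) or 1.0
        z = (today.get(identity, 0) - mu) / sigma
        if abs(z) > threshold:
            flagged.append((identity, round(z, 1)))
    return flagged

# Analysts review only the flagged identities, then isolate systems or
# tighten permissions as needed.
print(flag_anomalies(history, today))  # only the shifted identity, svc-etl, is flagged
```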
Operating models matter as much as tools
While technology enables visibility and automation, Carpenter stressed that day-to-day operating practices determine whether controls actually stick across application, data, and security teams. First, organizations need to map the roles of each team, and the teams need to understand how they can work together.
He highlighted a model that successful organizations use to bridge those gaps: embedding security liaisons in the application and data science teams who champion AI and work directly with the security team.
From there, teams can use AI to automate model validation and monitor models for potential drift. Wiz, for example, offers a single-pane-of-glass view for monitoring applications and security, Carpenter noted.
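One common way to watch for drift, sketched below, is to compare a feature’s production distribution against its training baseline using the population stability index (PSI). This is a generic illustration with assumed data and thresholds, not a description of Wiz’s or AWS’s tooling.

```python
# A minimal sketch of drift monitoring, assuming scored inputs are logged:
# compare one feature's production distribution against its training baseline
# using the population stability index (PSI). Data and thresholds are illustrative.
import numpy as np

def psi(baseline, production, bins=10):
    """Population stability index between two samples of a single feature."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    prod_pct = np.histogram(production, bins=edges)[0] / len(production)
    # Floor the bin fractions to avoid division by zero and log(0).
    base_pct = np.clip(base_pct, 1e-6, None)
    prod_pct = np.clip(prod_pct, 1e-6, None)
    return float(np.sum((prod_pct - base_pct) * np.log(prod_pct / base_pct)))

rng = np.random.default_rng(0)
training = rng.normal(loc=0.0, scale=1.0, size=5_000)    # training-time feature values
production = rng.normal(loc=0.8, scale=1.3, size=5_000)  # recent inputs have shifted

score = psi(training, production)
# Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant drift.
print(f"PSI = {score:.2f}", "-> investigate" if score > 0.25 else "-> stable")
```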
Start, and then keep going
Patton warned agencies against waiting for perfect plans before acting. “Just get started,” he said. He also framed transparency as a critical accelerant to AI security progress: “Sunlight is the best disinfectant.”
That “start now, mature continuously” posture is increasingly aligned with what state CIOs say they’re doing: NASCIO found 90% of states are running GenAI pilot projects and 71% are training employees, indicating that experimentation and workforce enablement are happening in parallel with governance efforts.
AI safety at scale, the panelists suggested, isn’t a single project or checklist. It’s an operating stance built on continuous visibility, data-first controls, and security practices that move at the same speed as delivery.