Insight / Government & AI Strategy / April 2026

AI Modernization for Local Government

The first wave of municipal AI adoption is now behind us, and it produced a fairly consistent set of lessons. Cities and counties that started with narrow internal-facing use cases, ran them with explicit governance, and resisted the urge to launch resident-facing chatbots in the first six months are doing well. The cities that did the opposite have, in roughly equal proportions, retrenched, lawyered up, or replaced the chief information officer.

This essay is for the city manager, county administrator, or municipal CIO who is now in the second wave: aware of the failure modes, under pressure from elected officials to demonstrate AI competence, and constrained by budget. The framework below has held up well across mid-size US cities and county governments we have worked with or observed closely. It is opinionated, plainly written, and short on hype.

The five-step framework

Five steps, in order. Each builds on the previous. Skipping a step does not get you to the next one faster; it gets you to the same crisis the cities that skipped it are in now.

Step 1: Establish governance before you adopt

Before any AI tool is procured, before any pilot is approved, the city or county should have a written governance policy. The policy does not need to be long; it needs to be specific. Who can approve an AI use case? What categories are off-limits without elected-official sign-off? What is the disclosure expectation to residents? What records-retention obligation applies? How must vendors document training-data provenance? What evidence is required before a use case moves to production? What are the kill criteria?

This sounds slow. It is faster than the alternative. The cities we have seen burned have, in every case, been burned because they started without a governance policy and reverse-engineered one after a public incident.
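The policy elements above can be sketched as a structured checklist. This is a minimal illustration, not a standard; every field name and example value here is hypothetical and should be replaced with the city's own terms:

```python
from dataclasses import dataclass

@dataclass
class AIGovernancePolicy:
    """Illustrative structure for the policy elements above; fields are hypothetical."""
    use_case_approver: str              # who can approve an AI use case
    restricted_categories: list[str]    # off-limits without elected-official sign-off
    resident_disclosure: str            # what residents are told, and when
    records_retention: str              # retention obligation for prompts and outputs
    vendor_provenance_docs: list[str]   # training-data provenance documentation required
    production_evidence: list[str]      # evidence required before moving to production
    kill_criteria: list[str]            # conditions that halt the use case

policy = AIGovernancePolicy(
    use_case_approver="CIO, with department-head concurrence",
    restricted_categories=["eligibility", "benefits", "policing", "child welfare", "housing"],
    resident_disclosure="AI assistance disclosed on any resident-facing artifact",
    records_retention="prompts and outputs retained per the state records schedule",
    vendor_provenance_docs=["training-data sources", "model update cadence"],
    production_evidence=["documented baseline", "pilot results", "quality measure met"],
    kill_criteria=["quality below baseline", "undisclosed model change", "public incident"],
)
```

The value of writing it down this way is that every field forces an answer; a blank field is a policy gap you can see before a vendor or an auditor does.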

Step 2: Inventory candidate use cases and triage

Run a structured inventory across departments. Some questions are obvious (what document-heavy work is currently consuming staff time), others less so (what work is currently outsourced because the city cannot retain staff to do it, what backlog is currently politically embarrassing). Triage candidate use cases into three buckets: approved for pilot, conditional pending governance review, and disapproved.

Most municipal AI programs we have seen have ten to twenty viable candidate use cases on first pass. The temptation is to start them all. Resist. Pick the two or three that best meet the criteria of low political stakes, clear quality measurability, and high time-savings potential.
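The three-bucket triage above can be sketched as a simple rule, using the same criteria named in the text (political stakes, quality measurability, time-savings potential). The field names and thresholds are illustrative assumptions, not a fixed rubric:

```python
def triage(candidates):
    """Bucket candidate use cases per the criteria above; fields are illustrative."""
    buckets = {"approved_for_pilot": [], "conditional": [], "disapproved": []}
    for c in candidates:
        if c["political_stakes"] == "high" or not c["quality_measurable"]:
            buckets["disapproved"].append(c["name"])
        elif c["political_stakes"] == "low" and c["time_savings"] == "high":
            buckets["approved_for_pilot"].append(c["name"])
        else:
            buckets["conditional"].append(c["name"])  # pending governance review
    return buckets

candidates = [
    {"name": "FOIA triage", "political_stakes": "low",
     "quality_measurable": True, "time_savings": "high"},
    {"name": "Resident chatbot", "political_stakes": "high",
     "quality_measurable": True, "time_savings": "high"},
    {"name": "Permit summaries", "political_stakes": "medium",
     "quality_measurable": True, "time_savings": "medium"},
]
result = triage(candidates)
```

Anything high-stakes or unmeasurable is disapproved on first pass; the conditional bucket is where the governance review earns its keep.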

Step 3: Pilot, narrowly

Run pilots that are scoped to a single department, a single workflow, and a single quality measure. Pilots that try to cross departmental boundaries fail at organizational politics, not at technology. Pilots that try to span workflows fail at the seam between them. Pilots that lack a single quality measure cannot be evaluated against go/no-go criteria.

Document the baseline. Document the pilot results. Decide explicitly whether to scale, refine, or kill. Most municipal pilots that "succeed" without a documented baseline are actually claiming success without evidence; this matters when the city auditor or the inspector general comes around.
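The scale / refine / kill decision can be made mechanical once a baseline is documented. A minimal sketch, assuming a quality measure where higher is better (e.g., routing accuracy); the 10% improvement threshold is an assumption to be set per use case in the pilot plan:

```python
def pilot_decision(baseline, pilot, min_improvement=0.10):
    """Explicit scale / refine / kill call against a documented baseline.

    Assumes higher-is-better quality; the threshold is illustrative."""
    if baseline <= 0:
        # no documented baseline means no defensible claim of success
        raise ValueError("document a baseline before evaluating the pilot")
    improvement = (pilot - baseline) / baseline
    if improvement >= min_improvement:
        return "scale"
    if improvement > 0:
        return "refine"
    return "kill"
```

For example, a pilot that moves routing accuracy from 0.70 to 0.85 clears the threshold and scales; 0.70 to 0.72 is a refine; any regression is a kill. The raised error is the point: a pilot without a baseline cannot even be evaluated.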

Step 4: Scale carefully, with workforce retraining

If a pilot demonstrates value, scaling is not a copy-paste exercise. It is a workforce-change exercise. Staff whose work is now AI-assisted need training on the tool, on the governance expectations, and on what is now their job (typically, more review, less first-draft production). Staff whose work is being eliminated need explicit conversations and, where possible, redeployment paths.

Cities that have skipped this step have produced internal backlash that has, in several cases, led elected officials to halt AI programs entirely. The technology was not the problem.

Step 5: Sustain and audit

Once an AI use case is in production, it requires ongoing oversight. Model versions change. Vendor pricing changes. Underlying data changes. Quality drifts. The governance policy needs a review cadence (quarterly is reasonable for most cities) where each in-production use case is reviewed against its original criteria and either continued, modified, or retired.
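The review cadence above is easy to operationalize: flag every in-production use case whose last review is older than the cadence. A minimal sketch; the record fields are illustrative:

```python
from datetime import date, timedelta

def reviews_due(use_cases, today, cadence_days=90):
    """List in-production use cases due for review; quarterly cadence by default."""
    return [uc["name"] for uc in use_cases
            if today - uc["last_reviewed"] >= timedelta(days=cadence_days)]

in_production = [
    {"name": "FOIA triage", "last_reviewed": date(2026, 1, 5)},
    {"name": "311 routing", "last_reviewed": date(2026, 3, 20)},
]
due = reviews_due(in_production, today=date(2026, 4, 15))
```

The point of automating the flag is that drift is silent; nobody notices a quarterly review was skipped until the auditor does.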

This is also where audit fits in. Internal audit, the city auditor, or the inspector general should be a stakeholder in the AI governance program, not an adversary discovering the program after a public incident.

Risk inventory specific to local government

Cities and counties carry a risk inventory that does not map cleanly onto enterprise AI risk frameworks. The four categories below should be addressed in any municipal AI program plan.

Ethical risk

AI applied to eligibility, benefits, policing, child welfare, or housing carries ethical exposure that is qualitatively different from AI applied to expense-report processing. Bias in training data, opacity of decision logic, and disparate impact across protected classes are all genuine risks. Cities that have rushed AI into these high-stakes domains have, in some cases, been sued. We strongly counsel municipal clients to keep AI human-in-the-loop in any decision affecting individual residents' rights, benefits, or liberty until the governance program has been exercised across multiple lower-stakes use cases.

Legal risk

State open-records and FOIA laws generally apply to AI prompts, outputs, and intermediate artifacts created in the course of public-agency work. Records-retention obligations apply. Some states have begun to legislate specifically on government use of AI; the landscape is evolving and varies state by state. Cities should engage their city attorney early in the AI program design and treat the legal review as an ongoing partnership, not a one-time clearance.

Technical risk

Vendor lock-in. Data leakage into vendor training sets. Model deprecation. Latency tail. Cost variance month over month. None of this is unique to local government, but local-government IT teams typically have less staff to manage it than their enterprise counterparts. The mitigation is architectural (abstraction layers, multi-vendor fallbacks where appropriate) and contractual (training-data restrictions, exit provisions, pricing predictability).

Political risk

Elected officials' tolerance for AI varies widely. The city council that publicly demands AI competence in March can publicly halt the AI program in October if a single resident-facing incident makes the local news. Municipal AI programs survive across election cycles when they are framed as workforce productivity and service delivery, run with disclosure to residents, and visibly governed. They die when they are framed as transformation and run quietly.

Procurement vehicles for municipal AI

One of the most common reasons municipal AI programs stall is procurement. Cities do not always have IT procurement staff dedicated to AI, and the major AI vendors do not always have municipal-government contracting expertise on their side of the table. The vehicles below cover most municipal AI procurement we have seen succeed.

Cooperative purchasing

NASPO ValuePoint, Sourcewell (formerly NJPA), and OMNIA Partners (which absorbed U.S. Communities) all offer cooperative-purchasing contracts that municipal entities can leverage without running their own RFP. Several major cloud and software vendors hold cooperative-purchasing contracts. For AI specifically, coverage is uneven; some contracts include AI services, others do not. Verify before relying.

State contract vehicles

Most US states maintain statewide contracts that are available to county and municipal governments inside the state. State contract vehicles are usually faster than running a city RFP and offer better pricing than direct purchasing. AI-specific coverage varies by state.

Piggybacking on larger entity contracts

Many cities can piggyback on a county, state, or larger-city contract. The clauses vary; the city attorney needs to verify that the underlying contract permits piggybacking and that the price book applies to the smaller entity.

GSA Schedule 70 for federal-pass-through funding

State and local governments can access GSA Schedule 70, now consolidated into the Multiple Award Schedule, for information technology through GSA's Cooperative Purchasing Program. Where a municipal AI program is funded in part by federal grants (DHS, DOJ, HUD, DOT), the funding source typically dictates which procurement options apply; cities should confirm with the grant program officer.

Direct RFP

For larger AI programs, a direct RFP is often appropriate. We help municipal clients draft RFP language that elicits responsive bids: explicit governance expectations, training-data restrictions, defined evaluation criteria, exit provisions, and references in similar municipal environments. RFPs that omit these consistently produce vendor responses that paper over the issues.

Closing

Municipal AI is not magic and it is not a passing fad. The cities and counties that adopt it well will, over the next decade, deliver materially better service to residents at materially lower cost. The cities that adopt it poorly will produce headlines we'd rather not be associated with. The difference, in nearly every case we have observed, is the discipline of the program: governance first, narrow pilots, clear quality criteria, workforce engagement, and ongoing audit.

If you are building or rebuilding a municipal AI program and want a senior outside read, we are happy to help. Request our capability statement and we will respond within one business day.

Frequently asked questions

What is the right first AI use case for a city or county?

A narrow, internal-facing, back-office use with low political stakes and clear quality criteria. Document classification, FOIA-request triage, vendor-bid summarization, and 311 complaint routing are common starting points. Avoid resident-facing chatbots as a first move.

How should a city manage hallucination liability for AI in eligibility or benefits decisions?

Treat eligibility and benefits as human-in-the-loop only. AI can summarize, organize, and surface evidence; the human decides. Document the decision chain. Make the AI's role explicit in any communication to residents.

Do open-records laws apply to AI-generated content?

In most US states, yes. AI-generated drafts, prompts, and outputs created in the course of public-agency work are generally subject to state open-records / FOIA laws. Plan retention accordingly. Consult counsel.

What does AI procurement look like for cities without dedicated IT procurement staff?

Most cities use cooperative purchasing (NASPO ValuePoint, Sourcewell, OMNIA) or piggyback on a larger entity's contract. State contract vehicles are also common. We help cities map the candidate vehicles per use case and select the one with the cleanest path.

How do you evaluate vendors making AI claims?

With a written rubric. Specifics on the underlying model, evidence of evaluation against domain-relevant data, governance documentation, security posture, contractual terms (training-data restrictions, exit provisions), and reference customers in similar agencies. Vague claims fail the rubric automatically.
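The "vague claims fail automatically" rule can be encoded directly into the rubric. A minimal sketch; the criterion names mirror the list above, and the pass/fail logic is an illustrative simplification of a weighted rubric:

```python
RUBRIC = [  # criteria from the answer above; unweighted for illustration
    "model_specifics", "domain_evaluation_evidence", "governance_docs",
    "security_posture", "contract_terms", "municipal_references",
]

def score_vendor(responses):
    """Any criterion with no specifics (missing or empty) fails the whole response."""
    for criterion in RUBRIC:
        if not responses.get(criterion):
            return 0, f"fail: no specifics on {criterion}"
    return len(RUBRIC), "pass: all criteria answered with specifics"

complete = {c: "documented in response" for c in RUBRIC}
vague = dict(complete, contract_terms="")  # e.g., no training-data or exit terms
```

A real rubric would weight criteria and score partial answers; the automatic-fail gate is the part worth keeping, because it forces vendors to commit in writing.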

What about resident-facing AI assistants for city services?

Possible, but late in the program. The political and reputational cost of a hallucinating resident-facing assistant is substantial. We recommend cities prove out internal use first, then move to resident-facing only after the governance and evaluation framework has been exercised on lower-stakes use.

Need a senior outside read on a municipal AI program?

Send us your current state and the question you are trying to answer. We respond within one business day with a capability statement and a discovery call.

Request Capability Statement