After ChatGPT burst on the scene last November, some government officials raced to prohibit its use. Italy banned the chatbot. The New York City, Los Angeles Unified, Seattle, and Baltimore school districts either banned or blocked access to generative AI tools, fearing that ChatGPT, Bard, and other content-generation sites could tempt students to cheat on assignments, induce rampant plagiarism, and impede critical thinking. This week, the US Congress heard testimony from OpenAI CEO Sam Altman and AI researcher Gary Marcus as it weighed whether and how to regulate the technology.

In a rapid about-face, however, a few governments are now embracing a less fearful and more hands-on approach to AI. New York City Schools Chancellor David Banks announced yesterday that NYC is reversing its ban because “the knee jerk fear and risk overlooked the potential of generative AI to support students and teachers, as well as the reality that our students are participating in and will work in a world where understanding generative AI is crucial.” And yesterday, City of Boston Chief Information Officer Santiago Garces sent guidelines to every city official encouraging them to start using generative AI “to understand their potential.” The city also enabled Google Bard as part of its enterprise-wide Google Workspace deployment so that all public servants have access.

The “responsible experimentation approach” adopted in Boston—the first policy of its kind in the US—could, if used as a blueprint, revolutionize the public sector’s use of AI across the country and change how governments at every level approach the technology. By promoting greater exploration of how AI can improve government effectiveness and efficiency, and by focusing on how to use AI for governance rather than only on how to govern AI, the Boston approach might help reduce alarmism and focus attention on using AI for social good.

Boston’s policy outlines several scenarios in which public servants might want to use AI to improve how they work, and even includes specific how-tos for effective prompt writing.

Generative AI, according to the May 18 email the CIO sent to all city officials, is a great way to get started on memos, letters, and job descriptions, and might help alleviate the workload of overburdened public servants.

The tools can also help public servants “translate” government-speak and legalese into plain English, making important information about public services more accessible to residents. The policy explains that public servants can indicate the reading level or target audience in the prompt, so the model generates text suited to elementary school students or another specific audience.
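The policy doesn’t prescribe a particular tool, but the reading-level technique it describes carries over to any chat-based API. Here is a minimal sketch of the idea, assuming the OpenAI Python client; the model name, notice text, and prompt wording are illustrative choices, not taken from Boston’s guidelines:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder legalese a public servant might want to rewrite.
notice = (
    "Pursuant to Section 7-2.1 of the Municipal Code, all abutters are hereby "
    "notified that curbside collection shall be suspended on designated holidays."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model would do
    messages=[
        {"role": "system",
         "content": "You rewrite municipal notices in plain English."},
        {"role": "user",
         "content": "Rewrite the following notice at a 6th-grade reading level, "
                    "keeping all rules and dates intact:\n\n" + notice},
    ],
)

print(response.choices[0].message.content)
```

The only “technique” here is the one the policy names: stating the intended reading level or audience directly in the prompt.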

Generative AI can also help with translation into other languages, so that a city’s non-English-speaking residents can enjoy equal and easier access to information about the policies and services affecting them.

City officials were also encouraged to use generative AI to condense lengthy pieces of text or audio into concise summaries, which could make it easier for them to engage in conversations with residents.
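As an illustration of what that might look like in practice (again a sketch assuming the OpenAI Python client, not an example from the city’s guidance), a transcript of a public meeting could be condensed and the main resident concerns surfaced with a single prompt; the file name and word limit below are hypothetical:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Transcript produced elsewhere, e.g. by a speech-to-text service.
with open("community_meeting_transcript.txt") as f:
    transcript = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "user",
         "content": "Summarize this public-meeting transcript in no more than "
                    "200 words, then list the top five concerns raised by "
                    "residents as bullet points:\n\n" + transcript},
    ],
)

print(response.choices[0].message.content)
```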
