Is AI a public good?

With the advent of AI, we now have machines that can learn from large datasets to do everything from having human-like conversations to processing medical images. Many futurists think this tech can revolutionise how we run the world.

But what kind of revolution are we talking about?

“We’re in another era of a hammer in search of a nail,” said Renee Sieber, Associate Professor in Geography at McGill University and one of 2025’s 100 Brilliant Women in AI Ethics, on the McGill Delve podcast.

Organisations across sectors are looking for ways to use AI, even if they don’t have a specific problem for it to solve, she said. In profit-driven companies, this can be a way to pre-empt their competition. They want to be the first to effectively use AI to supercharge their marketing, optimise their supply chains, or develop cutting-edge new products.

But in the public sector, this can be a dangerous game, said Sieber. Fundamental differences in incentives and societal functions mean governments should approach AI with caution, if at all.

“It’s become a useful tool,” she said. “But we should always remain sceptical of it.”

In pursuit of inefficiency

Popular AI-driven chatbots like ChatGPT put a friendly face on a powerful and versatile piece of technology. At its core, AI is a tool that can process copious amounts of data, interpret it, and offer insights or recommendations based on its analysis. That’s how chatbots are “trained” to speak like humans. A social media company like X can program its AI to analyse all user exchanges on its platform. Then the AI can communicate based on what it considers “normal” linguistic behaviour, as determined by patterns found in its database. Other AI tools operate in a similar way. But instead of language, they can analyse, interpret, and generate other kinds of data.
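The pattern-learning idea above can be shown with a toy sketch. The snippet below is a deliberately simplified illustration, not how production chatbots like ChatGPT actually work (those use large neural networks): it counts which word tends to follow which in a tiny made-up corpus, then generates text by always picking the most common continuation. The corpus and every name in the code are invented for the example.

```python
from collections import Counter, defaultdict

# Toy illustration of learning linguistic patterns from data:
# count word-to-next-word frequencies in a tiny corpus, then
# generate text by following the most common continuations.
corpus = [
    "good morning how are you",
    "good morning how is the weather",
    "how are you doing today",
]

# Count how often each word follows each other word.
transitions = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        transitions[current][nxt] += 1

def next_word(word):
    """Return the word most often seen after `word`, or None."""
    followers = transitions.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Starting from "good", follow the most likely continuations.
word, output = "good", ["good"]
while (word := next_word(word)) is not None and len(output) < 6:
    output.append(word)

result = " ".join(output)
print(result)  # prints "good morning how are you doing"
```

The sketch captures the core idea Sieber describes: the model has no understanding of language, only statistics about what is "normal" in its training data, and its output is entirely determined by those patterns.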

It’s easy to imagine how this can be useful in the public service, said Sieber. Governments manage all sorts of critical services that can be optimised with the help of AI computing. It can help governments deploy city buses more efficiently during peak commuting hours, track water usage during droughts, or quickly process documents.

But what makes AI so efficient is also what makes it risky, said Sieber.

“What if it’s your algorithm that determines who gets child protective services? If it gets it wrong, it is possible that a child might die,” she said.

That’s a core difference between public and private applications of AI, said Sieber. Few private ventures carry risks to human life. But in the public sector, where services exist primarily to serve and protect citizens, accountability is a far more salient issue.

That’s why inefficiency in government, in some cases, may not be a bad thing, said Sieber. It’s about giving humans time to properly do the work that affects other humans.

“It is all too easy to remove humans from the loop,” she said.

On the McGill Delve podcast, Professor Renee Sieber continues the conversation. She shares more thoughts on the opportunities and perils of applying AI in the public sector, whether AI is the pathway towards a leisure society, and whether government should be engaging with AI at all. To listen, search “McGill Delve” in your favourite podcast player.

This article was written by Eric Dicaire, Managing Editor, McGill Delve

Featured experts

Renee Sieber
Associate Professor, Geography
Bieler School of Environment
McGill University