
Building responsible AI stewardship in the public sector

In her summer placement policy report, MPP alumna Laman Ahmadova offers a way forward for the responsible management of AI.

Artificial intelligence (AI) is transforming the way governments operate, promising to enhance efficiency and decision-making. But navigating the complexities of AI adoption requires careful stewardship, especially in public institutions.

Why responsible AI matters

AI’s potential in the public sector is immense: from streamlining bureaucratic processes to improving service delivery, it could boost labour productivity growth by 0.1 to 0.6% annually through 2040. But governments are grappling with understanding what AI can and cannot do effectively, and with determining how to ensure its responsible use to advance the public good. The question is: how can governments ensure the responsible stewardship of AI in the public sector?

G7 leaders have recognised the responsible use of AI technologies as a top global priority. Without actively engaging in the practical application of AI, governments risk implementing regulations that are either overly strict or poorly targeted, ultimately slowing progress in realising AI’s full potential for the public sector.

I interviewed 12 senior executives, policy advisors, project leaders, academics, and unit directors from a range of organisations and institutions in Canada, the UK, and the Netherlands, who provided a broad perspective on how different sectors in these three countries are approaching the responsible stewardship of AI in the public sector.

Chaos, errors, and ROI expectations

According to one respondent, problems will arise less from intentional misuse of AI and more from chaos and errors caused by insufficient knowledge and guidance. Governments and institutions face challenges such as identifying AI-generated content, adhering to confidentiality agreements, and addressing the risks of biased or unsafe AI outputs. For instance, the UK White Paper on AI reports an incident in which an AI assistant recommended a dangerous activity to a member of the public without context, leading to physical harm.

Participants also raised concerns about private sector involvement in the provision of training grants, which can inadvertently create expectations of a return on investment. Without clear supervision, this could result in the private sector implicitly lobbying for its own ideas.

Balancing expertise, regulation, and fair competition

Interview findings indicated that governments often face challenges in regulating AI effectively due to limited expertise in development, safety protocols, and risk assessment. Participants warned that inconsistent regulatory frameworks could lead to a fragmented landscape, creating geographic silos where companies might prioritise regions with more flexible legal environments for product development and launches.

The interviews also highlighted a significant challenge for AI startups: the monopolisation of the market by large, established companies. This lack of competition, participants noted, risks stifling innovation and undermining efforts to ensure safety and effectiveness. They emphasised that regulations must be nuanced and avoid a one-size-fits-all approach to foster a fair and competitive ecosystem.

Cultural resistance to change

It is crucial to address the cultural resistance and bureaucratic barriers that often impede the adoption of new technologies within government and academic institutions. A Summary Note developed by the World Bank underscores this challenge, revealing that the pace of AI adoption is uneven across countries. Many of the nations supported by the Bank are either unprepared or only at the initial stages of AI implementation, exacerbating existing inequalities.

What next?

Develop comprehensive internal guidelines

Based on insights from the interviews, my report outlines a five-step process for developing and maintaining organisational guidelines on AI use:

1. Co-produce guidelines with input from domain experts, academia, industry, and civil society.

2. Train the workforce on the principles and application of the guidelines.

3. Develop evaluation metrics to assess the effectiveness of AI guidelines.

4. Monitor and collect feedback on a case-by-case basis to identify issues.

5. Periodically review and iterate the guidelines to ensure relevance.

Build cross-sectoral training partnerships

Several OECD countries have recognised the importance of government involvement in AI literacy programmes. For example, the Dutch government has supported AI development in education through funding from the National Growth Fund, which includes investments in initiatives such as NOLAI and AiNed. Similarly, in Canada, the government has established three AI Excellence Centres (AMII in Edmonton, MILA in Montreal, and Vector in Toronto) to advance fundamental AI research and talent development. In the UK, a Google-backed AI Campus in Camden has been launched to enhance AI education among young people.

Adopt flexible regulation and prevent monopolies

Multidisciplinary teams, combining technical and non-technical expertise, are essential for industries to adopt AI in ways that balance innovation with ethical standards. Governments should implement policies to prevent monopolies by large tech firms, fostering a competitive environment where startups and SMEs can thrive. While initial regulatory requirements may delay product launches, they help build consumer trust and promote responsible, safe innovation.

Establish public-private partnerships

A fundamental shift in mindset and culture in governments is necessary for the responsible and safe utilisation of AI. To effectively navigate cultural shifts and bureaucratic hurdles in AI adoption, governments should establish a public-private partnership model for process innovation that emphasises global cooperation and the cultivation of local AI champions through intergovernmental collaboration.

Conclusion

Moving forward, governments must overcome institutional inertia, embrace proactive leadership, and collaborate across sectors to create a cohesive, forward-looking approach to AI governance, ensuring AI's benefits are harnessed responsibly and equitably.

This blog draws insights from the report Understanding Responsible AI Stewardship in the Public Sector, a project I conducted during my summer placement at the University of Sheffield. Over two months, under the supervision of Denis Newman-Griffis, I interviewed 12 senior executives, policy advisors, project leaders, academics, and unit directors from diverse organisations, including research funding bodies, local councils, government agencies, academic institutions, non-profits, and multinational companies. The participants, representing organisations in Canada, the UK, and the Netherlands, provided a broad perspective on how different sectors within these three case study countries are approaching responsible stewardship of AI in the public sector.

The report, now published, offers key findings and actionable recommendations for governments worldwide. It was funded by the Blavatnik School of Government and the University of Sheffield Information School.

Laman was a Republic of Azerbaijan State Scholar.
