Note: Since I haven’t posted anything here in a long time, this is a mini-literature review I wrote as a part of my Master of Public Policy program at Ohio University. I felt it kind of fit the mold here, so I figured I’d grace this blog with some new content.
Introduction
While opinions vary on what constitutes machine intelligence, in 1950 the English mathematician Alan Turing proposed a test intended to determine whether or not a machine was a thinking machine (Pană-Micu, 2024). Having noticed a strong connection between intelligence and computing as early as 1936, Turing, in his 1950 paper Computing Machinery and Intelligence, posed a simple question: "Can machines think?" (Li et al., 2019, p. 3618). He then proposed a methodology for benchmarking machine intelligence: an "imitation game" played non-verbally, through writing or an intermediary, with a human interrogator and two human subjects. Through a line of questioning, the interrogator's goal is to determine which subject is a man and which is a woman. The first subject tries to deceive the interrogator and may lie, while the second attempts to help the interrogator, making the truth a primary strategy. In the context of the game, Turing then posed a second question that supersedes the first: what happens if a machine takes part as a subject in the game? Will the interrogator decide wrongly as often in this version of the game, when trying to determine which player is human and which is a machine, as when the game is played between a man and a woman? (Turing, 1950)
The concept and practice of artificial intelligence (AI) have a rich history, with discussions of its ethical implications tracing back to the 1960s (Morley et al., 2020). Today, AI has moved from the realm of science fiction to become part of the larger strategy of many business sectors. AI and Large Language Models (LLMs) have reached a point of development where they are expected to disrupt a host of industries, including healthcare, manufacturing, education, and citizens' interactions with government (Dwivedi et al., 2021). Leveraging sophisticated algorithms and data analytics, AI can process extensive data sets, recognize patterns, and generate predictions with unmatched precision and efficiency (Pană-Micu, 2024). AI is already being implemented in technologies like self-driving cars, enhancements to healthcare services, and various technologies in the financial sector (Cath et al., 2017), and a study by the McKinsey Global Institute estimates that by 2030, AI will contribute US $13 trillion to the global economy and nearly 70% of companies will be using AI (Bughin et al., 2018, as cited in Dwivedi et al., 2021).
To Turing's point, we may soon discover what happens when a machine "becomes a player in the game." But regarding his second question on machine intelligence: will we be able to tell whether we are interacting with man or machine? More important than the philosophical musings of the Turing Test, what does the potential for machine replication of human tasks mean for industries poised for disruption? And what does it mean for humanity from an ethical, humanitarian, and economic perspective? With such disruption on the horizon, a monumental question looms over society as a whole: what is the appropriate governmental role, both in creating a regulatory environment and in making use of this emerging technology for the benefit of the public that government serves? Policy regarding AI is only beginning to emerge; many nations have barely scratched the surface of policy strategy, and few have adopted formal regulations, though the 2020s are expected to bring "significant policymaking activity" (Schiff, 2023, citing CIFAR, 2020; OECD, 2021; Perry & Uuk, 2019; Zhang et al., 2022).
Scholars agree that the emergence of AI marks a monumental shift in the human habitat and that governments will not be immune to its impact. Cath et al. (2017) write that AI is shifting our habitat into an infosphere and will reshape the human environment, our interactions, and ultimately our lives. Schwab (2024) notes that the third industrial revolution, which automated production through electronics and information technology, is giving way to what is being called the fourth industrial revolution, which builds on its predecessor's automation with emerging technologies that blur the physical, digital, and biological spheres. As AI-integrated technologies mature, the centralized policymaking and public engagement roles of government will come under pressure to give way to what technology has to offer. Given AI's disruptive economic and social effects, governments are in a position at once unique and precarious: they are anticipated to be among the largest adopters of AI, grappling with inadequate resources and the scale of their operations while simultaneously being responsible for regulating this emerging technology (Dwivedi et al., 2021).
On Use in Government
Studies find common ground that, as users, governments can leverage AI to improve service delivery, optimize resource allocation, enhance efficiency, and make it easier for citizens to engage with services (Schwab, 2024). Scholars theorize that by transforming their service offerings with AI, governments could deliver individualized services that reduce time and cost for both government and citizens; institute "predictive service delivery," using analytics to connect people to benefits, scholarships, or other services; and customize services to each individual's specific needs, whether accommodating a physical handicap or a literacy limitation, or even auto-generating simpler tax returns for individuals at lower income levels (Dwivedi et al., 2021). Governments could also use chatbots to answer citizen inquiries and employ AI for sentiment analysis, fraud detection, prevention, investigation, compliance, and risk management (Pană-Micu, 2024).
In current practice, AI is already rendering many services that benefit both governments and citizens the world over. Giest & Klievink (2024) highlight two cases: in the Netherlands, AI was used to assess risk on applications for government childcare allowance; in the state of Michigan, the government deployed AI to lower operating costs and detect fraudulent activity in the unemployment system while retroactively examining claims from the previous six years (Behringer, 2016, as cited in Giest & Klievink, 2024). Pană-Micu (2024) highlights five further applications of AI in government, three of which are: the Italian Social Security and Welfare Administration (INPS), which uses an open-source ML model to automatically generate case codes from incoming emails, drastically decreasing the workload of INPS workers, who saw applications for various public benefits double between 2019 and 2023; the Basque Government Informatic Society (EJIE), which uses AI connected to a neural network to convert trial proceedings into structured, searchable text used for judicial review; and the Infocomm Development Authority of Singapore (IDA), which deployed an application called "Ask Jamie" with the goal of making public service websites more accessible to the public.
While each of these applications improves governmental service delivery or communication with the citizenry, studies note that such implementations also present challenges and significantly affect the role of bureaucrats. Giest & Klievink (2024) acknowledge the extensive conversation regarding street-level bureaucrats and their roles as government functions become more fully automated. Bureaucrats once handled the vast majority of decision-making, but with increased technology integration, their role has been reduced to handling the exceptions requiring additional information that the application's engineers could not anticipate. However, Dwivedi et al. (2021) point out that AI automation benefits both the government and street-level bureaucrats, as it frees their skills for higher-value work (Eggers et al., 2017, as cited in Dwivedi et al., 2021). AI automation can also build trust in government by implementing "digital discretion," automating away opportunities for public servant corruption. Research has also examined the public's acceptance of AI in service delivery. Gesk and Leyer (2022) found that acceptance depends primarily on whether the service is specific or general and on whether the user can choose between AI-provided services and help from a clerk. They also found that for more abstract, general public services, citizens might not even be aware they are interacting with an AI and accept the interaction at face value. They note, however, that expectations differ across levels of government regarding in-person interactions: there is an inherent remoteness to federal service delivery, and more of an expectation of in-person interaction at the municipal level.
On Ethics
As a corollary to public accountability and government integration, there is consensus in the literature that society and government are responsible for creating a policy framework that ensures AI development is compatible with human values, is inclusive, and stays on a course that improves the lives of the citizenry (Dwivedi et al., 2021; Schwab, 2024). Smit, Zoet, and van Meerten (2020) analyzed the literature on AI principles across numerous types of organizations and found that the five most important ethical values for the design and deployment of AI were Do Good, Accountability, Equality, Privacy, and Education. Cath et al. (2017) synthesized three reports on readiness and the future of AI, each published independently in October 2016: by the Office of Science and Technology Policy (OSTP) at the White House, the Committee on Legal Affairs of the European Parliament, and the United Kingdom's House of Commons Science and Technology Committee. Transparency, accountability, and a 'positive impact' on the economy and society emerged as key values across all three. Taken together, the reports suggest that the United States, European Union, and United Kingdom intend for AI to contribute to the social good, while developing a plan for how stakeholders are held responsible, how they cooperate, and what values underpin what would be called a 'good AI society.' Morley et al. (2020) write that while there is only a tenuous thematic consensus on ethical principles across multiple stakeholders, it nevertheless presents a foundation on which to build an environment that fosters the ethical development of machine learning.
While high-level goals for ethical restraint have been noted, it also stands to reason that governments can and will have at their fingertips new capabilities for increasing control over their populations, instituting new-era surveillance systems, and tightening their grip on digital infrastructure, even as citizens could use AI to circumvent their authority (Schwab, 2024). This is why the development and implementation of ethical AI is so urgent and important for governments all over the world.
With broad agreement on ethical goals, many scholars have advocated adopting various standards. Some suggest methodologies that prompt developers to acknowledge the direct and indirect impact of what they are building at each stage of development, ensuring ethical accountability (Morley et al., 2020), while others propose frameworks to assist public policy practitioners and governments looking to deploy a specific AI system. One such framework, TAM-DEF (Transparency & Audit, Accountability & Legal Issues, built-in Misuse Protection, Digital Divide & Data Deficit, Ethics, Fairness & Equity), offers a toolkit that guides practitioners in assessing how safe and socially beneficial an AI system is, suggesting that regulators must tackle these six challenges before implementing any AI solution (Dwivedi et al., 2021). Cath et al. (2017) recommend that governments take the lead in convening an international consortium of governments, corporations, civil society, and researchers to establish an independent, multi-stakeholder Council on AI and Data Ethics, which would provide foresight and advice on the future of ethical AI development and regulation; similarly, Dwivedi et al. (2021) advocate a unified framework for identifying the 'Public Policy Challenges of AI.' This is in line with the sixth goal of Romania's National Strategy in the Field of Artificial Intelligence 2024-2027, which calls for the development of a governance and regulatory system for AI (Pană-Micu, 2024).
In March 2024, the European Union passed the AI Act, which categorizes AI systems by level of potential danger and applies different degrees of penalties to each level (Pop, 2024). In spite of this work to develop regulations, there is still consensus that regulating the AI space will be difficult. Dwivedi et al. (2021) note that the sheer amount of portable computing power available, combined with a network of easily accessible open-source or commoditized AI modules, will inevitably make AI hard to regulate, while Pop (2024) observes that international enforcement of AI regulations will be difficult because there is no central international body to compel enforcement. Pop further notes that states tend to be averse to international regulation, and that despite the efforts of actors like the European Union and Romania, such regulations remain effectively voluntary beyond their jurisdictions and cannot be enforced when violations occur (Pop, 2024).
Conclusion
In conclusion, AI adoption across government remains ripe in theory but relatively nascent in policy practice. The impacts of its rapid development and adoption will likely be felt in virtually every sector, but they will be compounded for government, which must simultaneously juggle the societal impacts, its regulatory responsibility, and the internal adoption of this groundbreaking technology. While governments worldwide grapple with the responsibility of ensuring the ethical development of AI, scholars urge the creation of an international consortium that brings a cross-disciplinary, multi-stakeholder group to the table to provide global guidance on policy adoption. Governments at every level have already begun implementing AI and ML technologies in their operations, increasing efficiency and upending the existing roles of street-level bureaucrats. Where this road will lead, time will tell, but rest assured: governance will be part of the picture in the story of artificial intelligence.
References
Pană-Micu, F. (2024). Artificial intelligence in the public sector – Challenges, opportunities and best practices. Journal of Public Administration, Finance and Law, 32, 393–399. https://doi.org/10.47743/jopafl-2024-32-29
Li, L., Zheng, N.-N., & Wang, F.-Y. (2019). On the Crossroad of Artificial Intelligence: A Revisit to Alan Turing and Norbert Wiener. IEEE Transactions on Cybernetics, 49(10), 3618–3626. https://doi.org/10.1109/TCYB.2018.2884315
Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, 59(236), 433–460. http://www.jstor.org/stable/2251299
Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., Duan, Y., Dwivedi, R., Edwards, J., Eirug, A., Galanos, V., Ilavarasan, P. V., Janssen, M., Jones, P., Kar, A. K., Kizgin, H., Kronemann, B., Lal, B., Lucini, B., … Williams, M. D. (2021). Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. International Journal of Information Management, 57, 101994. https://doi.org/10.1016/j.ijinfomgt.2019.08.002
Schiff, D. S. (2023). Looking through a policy window with tinted glasses: Setting the agenda for U.S. AI policy. Review of Policy Research, 40(5), 729–756. https://doi.org/10.1111/ropr.12535
Schwab, K. (2024). The Fourth Industrial Revolution: What it means, how to respond. In Z. Simsek, C. Heavey, & B. C. Fox (Eds.), Handbook of Research on Strategic Leadership in the Fourth Industrial Revolution (pp. 29–34). Edward Elgar Publishing. https://doi.org/10.4337/9781802208818.00008
Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2017). Artificial Intelligence and the ‘Good Society’: The US, EU, and UK approach. Science and Engineering Ethics. https://doi.org/10.1007/s11948-017-9901-7
Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2020). From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices. Science and Engineering Ethics, 26(4), 2141–2168. https://doi.org/10.1007/s11948-019-00165-5
Giest, S. N., & Klievink, B. (2024). More than a digital system: How AI is changing the role of bureaucrats in different organizational contexts. Public Management Review, 26(2), 379–398. https://doi.org/10.1080/14719037.2022.2095001
Gesk, T. S., & Leyer, M. (2022). Artificial intelligence in public services: When and why citizens accept its usage. Government Information Quarterly, 39(3), 101704. https://doi.org/10.1016/j.giq.2022.101704
Smit, K., Zoet, M., & van Meerten, J. (2020). A Review of AI Principles in Practice. PACIS 2020 Proceedings. 198. https://aisel.aisnet.org/pacis2020/198
Pop, M. (2024). Legal Frameworks for Artificial Intelligence: A Comparative Analysis of Romania, the European Union, and International Perspectives. Journal of Law & Administrative Sciences, 21, 75–87.