Date of Conferral

2-11-2025

Degree

Ph.D.

School

Public Policy and Administration

Advisor

Gregory Campbell

Abstract

Technology remains largely unregulated, which raises safety concerns about the automated decision-making processes of multiagent systems (MAS) that use AI applications. This study explored whether a policy-making rule for decisions can protect the public from the social implications of biases, challenges, risks, and uncertainty, and whether the absence of such a rule exposes a trust gap. Because computing now pervades daily life, the research examined how people can trust the decision-making processes of MAS when no guardrails or regulations govern these systems. A generic qualitative inquiry was used to explore how concerned computer users are about MAS decision making and whether a policy-making rule for decisions can protect the public interest. The research question addressed how online university students and graduates perceive MAS decision-making processes and whether regulatory policies can foster trust. Interviews were conducted with 10 participants, aged 18 to 65, living in the United States. Mixed scanning theory served as the framework for explaining the trust phenomenon, supporting a regulatory policy approach to building trust in MAS decision making. The findings revealed that, without such protections, participants would continue to worry about trust; their concern rested on collective action, a clear course of action, and protections that shape decision-making policies. Given these overarching results, policymakers are more likely to recognize the need to develop embedded support tools that benefit society, with implications for positive social change.

Included in

Public Policy Commons
