Posted on 2024-06-24, 10:59. Authored by Gordon Bowen, Richard Bowen, Deidre Bowen, Sebastian Okafor.
Artificial intelligence (AI) systems are inherently risky because their inner workings are not knowable in detail, nor is how the different elements of an algorithm interact. Monitoring over time will improve understanding of AI systems, but testing a system comprehensively would take a long time, and even then the interactions among its elements would not be known with great certainty. Hence, there is a need for qualitative and quantitative metrics to measure and quantify the risks inherent in AI systems and to predict an AI system's response or behaviour. Lessons about the social acceptance of AI systems could be learned from the introduction of electric vehicle technology, which engaged the public and involved stakeholders. Technoeconomic analysis draws out the financial implications and economic benefits of technologies such as AI systems. Risk in AI systems is trust-based, and minimising risk will enhance their adoption. However, focusing on AI regulation alone to minimise risks does not guarantee the safety of AI; AI risk reduction requires both regulation and innovation.
History
Refereed: Yes
Publisher: Global Business and Technology Association
Conference proceeding: Global Business and Technology Association (GBATA) 25th Annual International Conference