Anglia Ruskin Research Online (ARRO)

AI: Risk management and measurement

conference contribution
posted on 2024-06-24, 10:59 authored by Gordon Bowen, Richard Bowen, Deidre Bowen, Sebastian Okafor
Artificial intelligence (AI) systems are inherently risky because their internal workings cannot be known in detail, nor can the way the different elements of an algorithm interact. Monitoring over time will improve understanding of AI systems, but testing a system comprehensively would take a long time and would still leave the interactions between its elements uncertain. Hence, there is a need for qualitative and quantitative metrics to measure and quantify the risks inherent in AI systems and to predict a system's response or behaviour. Lessons on the social acceptance of AI systems could be drawn from the introduction of electric vehicle technology, which engaged the public and involved stakeholders. Technoeconomic analysis draws out the financial implications and economic benefits of technologies such as AI systems. Risk in AI systems is trust-based, and minimising risk will enhance their adoption. However, focusing on AI regulation alone to minimise risks does not guarantee the safety of AI; AI risk reduction requires both regulation and innovation.

History

Refereed

  • Yes

Publisher

Global Business and Technology Association

Conference proceeding

Global Business and Technology Association (GBATA) 25th Annual International Conference

Name of event

GBATA 25th Annual International Conference

Location

Altis Grand Hotel in Lisbon, Portugal

Event start date

2024-07-09

Event finish date

2024-07-12

Affiliated with

  • School of Management Outputs
