Design of a Capability and Maturity Model for the Development of Trustworthy ADM Systems Based on Principled AI

Daniel Varona Cordero, The University of Western Ontario

Abstract

Automatic decision-making (ADM) systems have permeated every sphere of society where large amounts of data are managed to fulfill prediction and classification needs. The enhanced capabilities ADM systems have brought to their application domains have driven their evolution toward more complex and less transparent machine learning algorithms and models (MLA & M). Nowadays, various predictions or suggested decisions supported by MLA & M are found to be misleading or discriminatory, resulting in heated academic and public debates, since these MLA & M are being applied in socially and politically sensitive areas such as crime prevention and justice management, among others. Thus, there is increasing concern among scholars and regulators regarding biased decisions made with complex, non-transparent MLA & M, leading to the pursuit of an ethical development process for creating ADM solutions. Available approaches lean toward the regulatory aspects of this problem, with a focus on international human rights law, to define the supposed trustworthiness of artificial intelligence (AI). There is still a need to explore how this approach intersects and harmonizes with the design-based engineering pursuit of fairer decisions. This dissertation proposes a capability and maturity model for trustworthy ADM solutions to help reduce the social gap experienced by minorities, such as the Hispanic community, as a result of discriminatory automated decisions. First, the specialized literature on bias in ADM systems is analyzed to identify the current limitations of ML in achieving fairness. The so-called international regulatory framework on “principled AI” is also studied to determine which of its elements can be leveraged to achieve design-based trustworthy ADM solutions. Variables relevant within the principled AI context, such as Discrimination, Bias, Fairness, and Trustworthiness, are explored and incorporated into the model. The findings of this research project 1) highlight the limitations of ML that amplify and perpetuate bias and 2) stress the constraints of the international AI regulatory framework as complementary methodological support for engineering ADM solutions. This reinforces the need for policymakers and software developers to join efforts to ensure fairer outcomes produced by ADM systems.