Doctor of Philosophy
Suárez, Juan L.
University of Calgary
Automated decision-making (ADM) systems have permeated every sphere of society where large amounts of data are managed to meet prediction and classification needs. The enhanced capabilities that ADM systems have brought to their applied domains have driven their evolution toward more complex and less transparent machine learning algorithms and models (MLA & M). Today, predictions or suggested decisions supported by MLA & M are often found to be misleading or discriminatory, sparking heated academic and public debate, since these MLA & M are applied in socially and politically sensitive areas such as crime prevention and justice administration. There is thus growing concern among scholars and regulators about biased decisions made with complex, non-transparent MLA & M, leading to the pursuit of an ethical development process for ADM solutions. Available approaches lean toward the regulatory side of this problem, drawing on international human rights law to define what makes trustworthy artificial intelligence (AI) trustworthy. How this regulatory approach intersects and harmonizes with the design-based engineering pursuit of fairer decisions remains to be explored. This dissertation proposes a capability and maturity model for trustworthy ADM solutions to help reduce the social gap experienced by social minorities, such as the Hispanic community, as a result of discriminatory automated decisions. First, the specialized literature on bias in ADM systems is analyzed to identify the current limitations of ML in achieving fairness. The so-called international regulatory framework on “principled AI” is also studied to determine which of its elements can be leveraged to achieve design-based trustworthy ADM solutions. Variables relevant within the principled-AI context, such as Discrimination, Bias, Fairness, and Trustworthiness, are explored and incorporated into the model.
The findings of this research project 1) highlight the limitations of ML that amplify and perpetuate bias, and 2) stress the constraints of the international regulatory framework on AI as a complementary methodological support for the engineering of ADM solutions. This reinforces the need for policymakers and software developers to join efforts to ensure fairer outcomes produced by ADM systems.
Summary for Lay Audience
This dissertation is the culmination of a research project aiming to reduce the discriminatory outcomes of ADM tools that use AI and ML, by articulating a software engineering methodological model to ensure fairer decisions from trustworthy ADM solutions. The proposed model follows a structure similar to CMMI, a widely used quality assurance model that develops a series of quality characteristics across different process areas organized into capability and maturity levels. The trustworthiness-related variables resemble quality variables already in use in the software industry, where they currently carry only a functional dimension. The model therefore redefines these variables and integrates them from an ethical perspective, enhancing the quality assurance approach available in the software industry. To do so, exploratory studies were conducted of the current engineering methodological approach to ADM solutions and of the principled-AI international framework (a set of regulatory mechanisms, centred on international human rights law, that seeks to reduce discriminatory outcomes produced by ADM technology). The capability and maturity model for trustworthy ADM solutions proposed in this thesis is important because, by influencing the design of trustworthy ADM solutions early in the development process, it helps reduce the social gap experienced by minorities, including the Hispanic community, as a result of discriminatory automated decisions.
Varona Cordero, Daniel. "Design of a Capability and Maturity Model for the Development of Trustworthy ADM Systems Based on Principled AI" (2022). Electronic Thesis and Dissertation Repository. 8391.