Framework for Bias Detection in Machine Learning Models: A Fairness Approach

Alveiro Alonso Rosado Gomez, Maritza Liliana Calderón Benavides

Research output: Book / Book Chapter / Report › Research Books › peer-review

Abstract

The research addresses bias and inequity in binary classification problems in machine learning. Although ethical frameworks for artificial intelligence exist, they provide little detailed guidance on the practices and techniques needed to address these issues. The main objective is to identify and analyze the theoretical and practical components involved in detecting and mitigating biases and inequalities in machine learning. The proposed approach combines best practices, ethics, and technology to promote the responsible use of artificial intelligence in Colombia. The methodology covers the definition of performance and fairness objectives, interventions at the preprocessing, processing, and post-processing stages, and the generation of recommendations and model explainability.
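The abstract describes the framework only at a high level. As a purely illustrative sketch (not the authors' implementation), the snippet below shows what the fairness-metric and post-processing steps could look like for a binary classifier with a binary protected attribute: it computes demographic parity and equal opportunity differences, then applies hypothetical per-group thresholds to equalize selection rates. The synthetic data, model choice (scikit-learn's LogisticRegression), and helper names are assumptions made for demonstration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative sketch: synthetic data with a binary protected attribute g.
rng = np.random.default_rng(0)
n = 2000
g = rng.integers(0, 2, size=n)                       # protected group (0 or 1)
X = rng.normal(size=(n, 3)) + g[:, None] * 0.5       # group-correlated features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
scores = clf.predict_proba(X)[:, 1]
pred = (scores >= 0.5).astype(int)

def demographic_parity_diff(pred, g):
    """Absolute difference in positive-prediction rates between groups."""
    return abs(pred[g == 1].mean() - pred[g == 0].mean())

def equal_opportunity_diff(y, pred, g):
    """Absolute difference in true-positive rates between groups."""
    tpr = lambda k: pred[(g == k) & (y == 1)].mean()
    return abs(tpr(1) - tpr(0))

print("Demographic parity difference:", demographic_parity_diff(pred, g))
print("Equal opportunity difference:", equal_opportunity_diff(y, pred, g))

# Post-processing intervention: choose a per-group score threshold so that
# both groups are selected at (roughly) the same overall rate.
target_rate = pred.mean()
adjusted = np.zeros_like(pred)
for k in (0, 1):
    thr = np.quantile(scores[g == k], 1 - target_rate)
    adjusted[g == k] = (scores[g == k] >= thr).astype(int)

print("DP difference after adjustment:", demographic_parity_diff(adjusted, g))
```

In the terminology of the abstract, the threshold adjustment is a post-processing intervention; preprocessing interventions (e.g., reweighting or resampling the training data) and processing interventions (e.g., fairness-constrained training) would act earlier in the pipeline.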

Original language: English
Title of host publication: WSDM 2024 - Proceedings of the 17th ACM International Conference on Web Search and Data Mining
Publisher: Association for Computing Machinery, Inc
Pages: 1152-1154
Number of pages: 3
ISBN (Electronic): 9798400703713
DOIs
State: Published - 4 Mar 2024
Event: 17th ACM International Conference on Web Search and Data Mining, WSDM 2024 - Merida, Mexico
Duration: 4 Mar 2024 – 8 Mar 2024

Publication series

Name: WSDM 2024 - Proceedings of the 17th ACM International Conference on Web Search and Data Mining

Conference

Conference: 17th ACM International Conference on Web Search and Data Mining, WSDM 2024
Country/Territory: Mexico
City: Merida
Period: 4/03/24 – 8/03/24

Keywords

  • bias mitigation
  • explainability
  • machine learning fairness
  • supervised learning
