Show simple item record

dc.contributor: Universitat de Vic - Universitat Central de Catalunya. Elisava, Facultat de Disseny i Enginyeria de Barcelona
dc.contributor.author: Delgado, Janet
dc.contributor.author: de Manuel, Alicia
dc.contributor.author: Parra, Iris
dc.contributor.author: Moyano, Cristian
dc.contributor.author: Rueda, Jon
dc.contributor.author: Guersenzvaig, Ariel
dc.contributor.author: Ausin, Txetxu
dc.contributor.author: Cruz, Maite
dc.contributor.author: Casacuberta, David
dc.contributor.author: Puyol, Angel
dc.date.accessioned: 2025-06-30T09:59:42Z
dc.date.available: 2025-06-30T09:59:42Z
dc.date.issued: 2022-07-22
dc.identifier.citation: Delgado, J., de Manuel, A., Parra, I., Moyano, C., Rueda, J., Guersenzvaig, A., Puyol, A. (2022). Bias in algorithms of AI systems developed for COVID-19: A scoping review. Journal of Bioethical Inquiry, 19, 407-419. https://doi.org/10.1007/s11673-022-10200-z
dc.identifier.issn: 1176-7529
dc.identifier.uri: http://hdl.handle.net/10854/180262
dc.description.abstract: To analyze which ethically relevant biases have been identified by the academic literature in artificial intelligence (AI) algorithms developed either for patient risk prediction and triage, or for contact tracing to deal with the COVID-19 pandemic. Additionally, to specifically investigate whether the role of social determinants of health (SDOH) has been considered in these AI developments or not. We conducted a scoping review of the literature, which covered publications from March 2020 to April 2021. Studies mentioning biases in AI algorithms developed for contact tracing and medical triage or risk prediction regarding COVID-19 were included. From 1054 identified articles, 20 studies were finally included. We propose a typology of biases identified in the literature based on biases, limitations, and other ethical issues in both areas of analysis. Results on health disparities and SDOH were classified into five categories: racial disparities, biased data, socioeconomic disparities, unequal accessibility and workforce, and information communication. SDOH need to be considered in the clinical context, where they still seem underestimated. Epidemiological conditions depend on geographic location, so the use of local data in studies to develop international solutions may increase some biases. Gender bias was not specifically addressed in the articles included. The main biases are related to data collection and management. Ethical problems related to privacy, consent, and lack of regulation have been identified in contact tracing, while some bias-related health inequalities have been highlighted. There is a need for further research focusing on SDOH and these specific AI applications.
dc.format.extent: 13 p.
dc.language.iso: eng
dc.publisher: Springer
dc.relation.ispartof: Journal of bioethical inquiry, 19, 407-419
dc.subject.other: Artificial intelligence
dc.subject.other: Bias
dc.subject.other: Digital contact tracing
dc.subject.other: COVID-19
dc.subject.other: Patient risk prediction
dc.title: Bias in algorithms of AI systems developed for COVID-19: A scoping review
dc.type: info:eu-repo/semantics/article
dc.description.version: info:eu-repo/semantics/publishedVersion
dc.embargo.terms: cap
dc.identifier.doi: https://doi.org/10.1007/s11673-022-10200-z
dc.rights.accessLevel: info:eu-repo/semantics/openAccess

