Human–Computer Interaction

  • Amon Rapp, University of Torino
  • https://doi.org/10.1093/acrefore/9780190236557.013.47
  • Published online: 24 May 2023

Human–Computer Interaction (HCI) is a multidisciplinary field of research that focuses on understanding and designing the interaction between humans and computers. HCI has its roots in human factors and ergonomics and in the cognitive sciences, but over the years it has undergone a series of deep transformations, importing a variety of approaches, theories, and methods from other disciplines, such as anthropology and sociology. Theoretical perspectives like phenomenology, social practice theories, and grounded theory are now fruitfully used by HCI researchers to interpret the behavior of people interacting with technology and to ground the design of new interactive systems. In the same vein, HCI techniques for understanding, designing, and evaluating interaction span from ethnography, semi-structured interviews, participatory design, and scenario-based design to controlled experiments, usability testing, and research-through-design methods. At the beginning of the third decade of the 21st century, HCI tackles practically every aspect of people's lives, including matters like techno-spirituality, global crises, death, sexuality, and physical and cognitive disabilities, as well as technologies like wearable devices, shape-changing interfaces and bio-interfaces, robots, and virtual, mixed, and augmented reality applications. In this complex landscape, several promising lines of HCI research, which intertwine the individual, social, and organizational levels of technology use, are "gameful" interaction, self-tracking and behavior change technologies, and conversational agents.

  • human–computer interaction
  • interaction design
  • HCI theories
  • HCI methods
  • human–machine interaction

Printed from Oxford Research Encyclopedias, Psychology.

Research paper: Human–computer interaction

Published in:

Volume 6, Issue 1, January 2019 | eISSN: 2349-5162

UGC Approved Journal no. 63975 | Impact Factor 7.95

Published Paper ID: JETIREQ06003

Registration ID: 308403


Author: Rajni Sharma



Human–Computer Interaction (HCI) Advances to Re-Contextualize Cultural Heritage toward Multiperspectivity, Inclusion, and Sensemaking


1. Introduction

How can HCI research approach CH re-contextualization to enable multiperspectivity, inclusion, and sensemaking?
  • Case 1 presents a methodological approach to enable multiperspectivity already in the Ideation stage of HCI projects, fostering inclusion early in technological development.
  • Case 2 reflects on Exploration and Prototyping as a means of communicating with Indigenous user groups to achieve a shared understanding of different cultural values more quickly.
  • Case 3 focuses on 3D scanning and modeling in Cultural Landscape Analysis and Documentation and preservation to support decision-making in CH management and discusses digital tools to promote sensemaking through active stakeholder participation.
  • Case 4 discusses digital data legacies and the development of personal cultural heritage through social media and online platforms, highlighting the need for further Evaluation of the increasing HCI components in CH entanglements (Figure 6).

2. Background

2.1. Interactive CH for Preservation, Experience, and Sensemaking

2.2. Multiperspectivity and Socio-Cultural Inclusion

2.3. Decolonialism and Pluriversality

3. Case Study Selection and Collaborative Reflection

4. Case Studies

4.1. Re-Contextualizing Intangible CH as a Factor of Inclusion and Cohesion in Design Ideation

4.2. Exploring Designs through Prototypes for Indigenous Museums

4.3. Digital Landscape and Community Involvement of the Bamiyan World Heritage (Afghanistan)

4.4. Re-Contextualizing Digital CH: The Legacy of Social Media Data

4.5. Summary of Case Studies

5. Discussion

5.1. The Role of the HCI–CH Relationship and Entanglements in CH Re-Contextualization

5.2. Consequences for HCI Technology and Research

5.3. Limitations and Outlook

6. Conclusions

Author Contributions, Institutional Review Board Statement, Informed Consent Statement, Data Availability Statement, Acknowledgments, Conflicts of Interest

  • Karabanow, J.; Naylor, T. Using Art to Tell Stories and Build Safe Spaces: Transforming Academic Research Into Action. Can. J. Community Ment. Health 2015 , 34 , 67–85. [ Google Scholar ] [ CrossRef ]
  • Bergum, V.; Godkin, D. Nursing Research and the Transformative Value of Art. In Handbook of the Arts in Qualitative Research: Perspectives, Methodologies, Examples, and Issues ; Knowles, J.G., Cole, A.L., Eds.; SAGE: Thousand Oaks, CA, USA, 2008; pp. 603–612. [ Google Scholar ]
  • Stengers, I. Another Science Is Possible: A Manifesto for Slow Science ; Muecke, S., Translator; Polity Press: Cambridge, UK, 2018. [ Google Scholar ]
  • Frith, U. Fast Lane to Slow Science. Trends Cogn. Sci. 2020 , 24 , 1–2. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Odom, W.; Selby, M.; Sellen, A.; Kirk, D.; Banks, R.; Regan, T. Photobox: On the design of a slow technology. In DIS ’12: Proceedings of the Designing Interactive Systems Conference, Newcastle Upon Tyne, UK, 11–15 June 2012 ; ACM: New York, NY, USA, 2012. [ Google Scholar ] [ CrossRef ]
  • Falk, J.; Frauenberger, C.; Kannabiran, G. How Shortening or Lengthening Design Processes Configure Decision Making. In NordiCHI ’22: Nordic Human–Computer Interaction Conference, Aarhus, Denmark, 8–12 October 2022 ; ACM: New York, NY, USA, 2022. [ Google Scholar ] [ CrossRef ]
  • Keskitalo, P.; Virtanen, P.K.; Olsen, T. Introduction. In Indigenous Research Methodologies in Sámi and Global Contexts ; Brill: Leiden, The Netherlands, 2021; pp. 1–6. [ Google Scholar ]
  • Silvén, E. Contested Sami heritage: Drums and sieidis on the move. In National Museums and the Negotiation of Difficult Pasts ; EuNaMus Report; Academia: Singapore, 2012; Volume 8, pp. 173–186. [ Google Scholar ]
  • Harlin, E.K. Repatriation as knowledge sharing–returning the Sámi cultural heritage. UTIMUT: Past Heritage-Future Partnerships: Discussions on Repatriation in the 21st Century ; Gabriel, M., Dahl, J., Eds.; IWGIA: Copenhagen, Denmark, 2008; pp. 192–200. [ Google Scholar ]
  • Porsanger, J. An Indigenous Sámi museum and repatriation on a Sámi drum from the XVII century. Dutkansearvvi Dieđalaš Áigečála 2022 , 6 , 72–90. [ Google Scholar ]
  • Hornecker, E. “I don’t understand it either, but it is cool”-visitor interactions with a multi-touch table in a museum. In Proceedings of the 2008 3rd IEEE International Workshop on Horizontal Interactive Human Computer Systems, Amsterdam, The Netherlands, 1–3 October 2008; pp. 113–120. [ Google Scholar ]
  • Miettinen, S.; Rontti, S.; Kuure, E.; Lindström, A. Realizing design thinking through a service design process and an innovative prototyping laboratory: Introducing Service Innovation Corner (SINCO). In Proceedings of the DRS2012, Bangkok, Thailand, 1–4 July 2012. [ Google Scholar ]
  • Colley, A.; Suoheimo, M.; Häkkilä, J. Exploring VR and AR tools for service design. In MUM ’20: Proceedings of the 19th International Conference on Mobile and Ubiquitous Multimedia, Essen, Germany, 22–25 November 2020 ; Association for Computing Machinery: New York, NY, USA, 2020; pp. 309–311. [ Google Scholar ]
  • Colley, A.; Pfleging, B.; Alt, F.; Häkkilä, J. Exploring public wearable display of wellness tracker data. Int. J. Hum.-Comput. Stud. 2020 , 138 , 102408. [ Google Scholar ] [ CrossRef ]
  • Häkkilä, J.; Paananen, S.; Suoheimo, M.; Mäkikalli, M. Pluriverse perspectives in designing for a cultural heritage context in the digital age. In Artistic Cartography and Design Explorations towards the Pluriverse ; Routledge: London, UK, 2022; pp. 134–143. [ Google Scholar ]
  • Dupree, L. Inside Afghanistan; Yesterday and Today a Strategic Appraisal. Strateg. Stud. 1979 , 2 , 64–83. [ Google Scholar ]
  • Crews, R.D. Afghan Modern: The History of a Global Nation ; Harvard University Press: Cambridge, MA, USA, 2015. [ Google Scholar ]
  • Press Realease March 9: General Assemby ‘Appalled’ by the Edict on Destruction of Afghan Shrines; Strongly Urges Taliban to Halt Implementation. 2001. Available online: https://press.un.org/en/2001/ga9858.doc.htm (accessed on 2 May 2024).
  • UN General Assembly. The Destruction of Relics and Monuments in Afghanistan: Resolution Adopted by the General Assembly. In Proceedings of the 55th Session, New York, NY, USA, 1 May 2001. [ Google Scholar ]
  • Manhart, C. The Afghan Cultural Heritage Crisis: UNESCO’s Response to the Destruction of Statues in Afghanistan. Am. J. Archaeol. 2001 , 105 , 387–388. [ Google Scholar ] [ CrossRef ]
  • Chiovenda, M.K. Sacred Blasphemy: Global and Local Views of the Destruction of the Bamyan Buddha Statues in Afghanistan. J. Muslim Minor. Aff. 2014 , 34 , 410–424. [ Google Scholar ] [ CrossRef ]
  • Klimburg-Salter, D. Entangled Narrative Biographies of the Colossal Sculptures of Bāmiyān: Heroes of the Mythic History of the Conversion to Islam. In The Future of the Bamiyan Buddha Statues: Heritage Reconstruction in Theory and Practice ; Nagaoka, M., Ed.; Springer International Publishing: Cham, Switzerland, 2020. [ Google Scholar ] [ CrossRef ]
  • Toubekis, G.; Jansen, M.; Jarke, M. Long-Term Preservation of the Physical Remains of the Destroyed Buddha Figures in Bamiyan (Afghanistan) Using Virtual Reality Technologies for Preparation and Evaluation of Restoration Measures. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017 , IV-2/W2 , 271–278. [ Google Scholar ] [ CrossRef ]
  • Toubekis, G.; Jansen, M.; Jarke, M. Cultural Master Plan Bamiyan (Afghanistan)—A Process Model for the Management of Cultural Landscapes Based on Remote-Sensing Data. In Digital Heritage. Progress in Cultural Heritage: Documentation, Preservation, and Protection, Proceedings of the 8th International Conference, EuroMed 2020, Virtual Event, 2–5 November 2020 ; Ioannides, M., Fink, E., Cantoni, L., Champion, E., Eds.; LNCS (12642); Springer: Cham, Switzerland, 2021; pp. 115–126. [ Google Scholar ] [ CrossRef ]
  • Toubekis, G.; Jansen, M. The Giant Buddha Figures in Afghanistan: Virtual Reality for a Physical Reconstruction? In ’Archaeologizing’ Heritage? Transcultural Entanglements between Local Social Practices and Global Virtual Realities ; Falser, M., Juneja, M., Eds.; Transcultural Research–Heidelberg Studies on Asia and Europe in a Global Context; Springer: Berlin/Heidelberg, Germany, 2013; pp. 143–166. [ Google Scholar ] [ CrossRef ]
  • Toubekis, G. Requirements for the Protection of the UNESCO World Heritage Cultural Landscape and Archaeological Remains of the Bamiyan Valley (Afghanistan). In Cultural Heritage and Development in Fragile Contexts ; Loda, M., Abenante, P., Eds.; Research for Development; Springer: Cham, Switzerland, 2024; pp. 71–87. [ Google Scholar ] [ CrossRef ]
  • De Marco, L.; Hadzimuammedovich, A.; Kealy, L. ICOMOS-ICCROM Guidance on Post-Disaster and Post-Conflict Recovery and Reconstruction for Heritage Places of Cultural Signifcance and World Heritage Cultural Properties ; International Council on Monuments and Sites: Charenton-le-Pont, France, 2023. [ Google Scholar ]
  • Seifert, C.; Bailer, W.; Orgel, T.; Gantner, L.; Kern, R.; Ziak, H.; Petit, A.; Schlötterer, J.; Zwicklbauer, S.; Granitzer, M. Ubiquitous Access to Digital Cultural Heritage. J. Comput. Cult. Herit. 2017 , 10 , 1–27. [ Google Scholar ] [ CrossRef ]
  • Amato, F.; Moscato, V.; Picariello, A.; Colace, F.; Santo, M.D.; Schreiber, F.A.; Tanca, L. Big Data Meets Digital Cultural Heritage: Design and Implementation of SCRABS, A Smart Context-awaRe Browsing Assistant for Cultural EnvironmentS. J. Comput. Cult. Herit. 2017 , 10 , 1–23. [ Google Scholar ] [ CrossRef ]
  • Heath, C.P.R.; Coles-Kemp, L. Drawing Out the Everyday Hyper-[In]Securities of Digital Identity. In CHI ’22: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April–5 May 2022 ; Association for Computing Machinery: New York, NY, USA, 2022. [ Google Scholar ] [ CrossRef ]
  • Pinter, A.T.; Brubaker, J.R. Behold the Once and Future Me: Online Identity After the End of a Romantic Relationship. Proc. ACM Hum.-Comput. Interact. 2022 , 6 , 1–35. [ Google Scholar ] [ CrossRef ]
  • Petrosyan, A. Average Daily Time Spent Using the Internet by Online Users Worldwide from 3rd Quarter 2015 to 3rd Quarter 2023. 2024. Available online: https://www.statista.com/statistics/1380282/daily-time-spent-online-global/ (accessed on 4 February 2024).
  • Zaleppa, P.; Dudley, A. Ethical, Legal and Security Implications of Digital Legacies on Social Media. In Social Computing and Social Media. Design, Ethics, User Behavior, and Social Network Analysis ; Meiselwitz, G., Ed.; Springer International Publishing: Cham, Switzerland, 2020; pp. 419–429. [ Google Scholar ]
  • Gerlitz, C. What Counts? Reflections on the Multivalence of Social Media Data. Digit. Cult. Soc. 2016 , 2 , 19–38. [ Google Scholar ] [ CrossRef ]
  • González-Larrea, B.; Hernández-Serrano, M.J. Digital identity built through social networks: New trends in a hyperconnected world. In Proceedings of the TEEM’20: Eighth International Conference on Technological Ecosystems for Enhancing Multiculturality, Salamanca, Spain, 21–23 October 2020; Association for Computing Machinery: New York, NY, USA, 2021; pp. 940–944. [ Google Scholar ] [ CrossRef ]
  • Kemp, S. Digital 2023 DEEP-DIVE: Is Social Media Really Dying? 2023. Available online: https://datareportal.com/reports/digital-2023-deep-dive-the-worlds-top-social-media-platforms (accessed on 21 March 2024).
  • Doyle, D.T.; Brubaker, J.R. Digital Legacy: A Systematic Literature Review. Proc. ACM Hum.-Comput. Interact. 2023 , 7 , 1–26. [ Google Scholar ] [ CrossRef ]
  • Gulotta, R.; Gerritsen, D.B.; Kelliher, A.; Forlizzi, J. Engaging with Death Online: An Analysis of Systems that Support Legacy-Making, Bereavement, and Remembrance. In DIS ’16, Proceedings of the 2016 ACM Conference on Designing Interactive Systems, Brisbane, QLD, Australia, 4–8 June 2016 ; Association for Computing Machinery: New York, NY, USA, 2016; pp. 736–748. [ Google Scholar ] [ CrossRef ]
  • Holt, J.; Nicholson, J.; Smeddinck, J.D. From Personal Data to Digital Legacy: Exploring Conflicts in the Sharing, Security and Privacy of Post-mortem Data. In WWW’21, Proceedings of the Web Conference 2021, Ljubljana, Slovenia, 19–23 April 2021 ; Association for Computing Machinery: New York, NY, USA, 2021; pp. 2745–2756. [ Google Scholar ] [ CrossRef ]
  • Morse, T.; Birnhack, M. The posthumous privacy paradox: Privacy preferences and behavior regarding digital remains. New Media Soc. 2022 , 24 , 1343–1362. [ Google Scholar ] [ CrossRef ]
  • Spiekermann, S.; Grossklags, J.; Berendt, B. E-privacy in 2nd generation E-commerce: Privacy preferences versus actual behavior. In EC ’01, Proceedings of the 3rd ACM Conference on Electronic Commerce, Tampa, FL, USA, 14–17 October 2001 ; Association for Computing Machinery: New York, NY, USA, 2001; pp. 38–47. [ Google Scholar ] [ CrossRef ]
  • Norberg, P.A.; Horne, D.R.; Horne, D.A. The Privacy Paradox: Personal Information Disclosure Intentions versus Behaviors. J. Consum. Aff. 2007 , 41 , 100–126. [ Google Scholar ] [ CrossRef ]
  • Brucker-Kley, E.; Keller, T.; Kurtz, L.; Pärli, K.; Pedron, C.; Schweizer, M.; Studer, M. Passing and Passing on in the Digital World ; IADIS: Lisbon, Portugal, 2013. [ Google Scholar ]
  • GmbH, D.E. Article 17 GDPR. Right to Erasure (‘Right to Be Forgotten’). 2014. Available online: https://gdpr-text.com/read/article-17/ (accessed on 4 February 2024).
  • Bergram, K.; Djokovic, M.; Bezençon, V.; Holzer, A. The Digital Landscape of Nudging: A Systematic Literature Review of Empirical Research on Digital Nudges. In CHI ’22, Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April–5 May 2022 ; Association for Computing Machinery: New York, NY, USA, 2022. [ Google Scholar ] [ CrossRef ]
  • Caraban, A.; Karapanos, E.; Gonçalves, D.; Campos, P. 23 Ways to Nudge: A Review of Technology-Mediated Nudging in Human–Computer Interaction. In CHI ’19, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019 ; Association for Computing Machinery: New York, NY, USA, 2019; pp. 1–15. [ Google Scholar ] [ CrossRef ]
  • Lindemann, L.; Volkmann, T.; Jochems, N. Building Bridges Through Design: Game Design Strategies to Empower Young Adults Taking Social Offers - Results From a Pilot Study. In MuC ’23, Proceedings of Mensch Und Computer 2023, Rapperswil Switzerland, 3–6 September 2023 ; Association for Computing Machinery: New York, NY, USA, 2023; pp. 460–466. [ Google Scholar ] [ CrossRef ]
  • Fuchs, K.; Meusburger, D.; Haldimann, M.; Ilic, A. NutritionAvatar: Designing a future-self avatar for promotion of balanced, low-sodium diet intention: Framework design and user study. In CHItaly ’19, Proceedings of the 13th Biannual Conference of the Italian SIGCHI Chapter: Designing the next Interaction, Padua, Italy, 23–25 September 2019 ; Association for Computing Machinery: New York, NY, USA, 2019. [ Google Scholar ] [ CrossRef ]
  • Frauenberger, C. Entanglement HCI The Next Wave? ACM Trans. Comput.-Hum. Interact. 2019 , 27 , 1–27. [ Google Scholar ] [ CrossRef ]
  • Hespanhol, L. Human-computer intra-action: A relational approach to digital media and technologies. Front. Comput. Sci. 2023 , 5 , 1083800. [ Google Scholar ] [ CrossRef ]
  • Barad, K. Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning ; Duke University Press: Durham, NC, USA, 2007. [ Google Scholar ]
  • Irani, L.; Vertesi, J.; Dourish, P.; Philip, K.; Grinter, R.E. Postcolonial computing: A lens on design and development. In CHI ’10, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Atlanta, GA, USA, 10–15 April 2010 ; Association for Computing Machinery: New York, NY, USA, 2010; pp. 1311–1320. [ Google Scholar ]
  • Cernadas, E.; Calvo-Iglesias, E. Gender perspective in Artificial Intelligence (AI). In Proceedings of the TEEM’20: Eighth International Conference on Technological Ecosystems for Enhancing Multiculturality, Salamanca, Spain, 21–23 October 2020; Association for Computing Machinery: New York, NY, USA, 2021; pp. 173–176. [ Google Scholar ] [ CrossRef ]
  • Kaplan, M. Introduction: Adding a cultural dimension to human factors. In Cultural Ergonomics ; Emerald Group Publishing Limited: Bingley, UK, 2004; pp. XI–XVII. [ Google Scholar ]
  • Hallnäs, L.; Redström, J. Slow Technology-Designing For Reflection. Pers. Ubiquitous Comput. 2001 , 5 , 201–212. [ Google Scholar ] [ CrossRef ]
  • To, A.; Sweeney, W.; Hammer, J.; Kaufman, G. “They Just Don’t Get It”: Towards Social Technologies for Coping with Interpersonal Racism. Proc. ACM Hum. Comput. Interact. 2020 , 4 , 1–29. [ Google Scholar ] [ CrossRef ]
  • Willis, A.M. Ontological designing. Des. Philos. Pap. 2006 , 4 , 69–92. [ Google Scholar ] [ CrossRef ]
  • Fry, T. Design Futuring ; University of New South Wales Press: Sydney, Australia, 2009; pp. 71–77. [ Google Scholar ]


Table. Overview of the four cases (columns: Case, HCI for, Location, Project Phase, with context notes per case).

Case: Intangible CH as a Factor in Design Ideation | HCI for: Inclusion, Multiperspectivity | Location: England, Europe | Project Phase: Ideation
  • methodological choices and flexibility to foster inclusion and cohesion
  • communicating individual social and cultural values to others
  • co-design, arts-based methods
  • researcher diversity, reflexivity, arts-based methods, slow science
  • pre-figuring

Case: Interaction Design for Indigenous Museum and CH | HCI for: Multiperspectivity, Sensemaking | Location: Northern Finland, Europe | Project Phase: Exploration and Prototyping
  • value of listening and flexibility
  • exploring sensitive topics, Indigenous-led project
  • participatory design, prototyping, iteration
  • empowering locals, involving experts, co-design
  • pre-figuring, enabling and facilitating

Case: CH Management for Cultural Landscape | HCI for: Inclusion, Sensemaking | Location: Afghanistan, Asia | Project Phase: Analysis to Documentation
  • visualize the impact of planning decisions to local stakeholders
  • foster feedback and stakeholder participation
  • field survey, remote sensing, 3D modeling, community involvement
  • self-determination on heritage values, reconciliation after conflict
  • mediating and converging

Case: Social Media Data as Digital CH | HCI for: Sensemaking | Location: Germany, Europe | Project Phase: Evaluation
  • facilitating sensemaking of and empowerment over one's own data
  • data bequest of one's digital identity and possessions
  • web application testing, lab study
  • AI assistance, data ownership, future digital cultures and CH
  • converging

Share and Cite

Hirsch, L.; Paananen, S.; Lengyel, D.; Häkkilä, J.; Toubekis, G.; Talhouk, R.; Hespanhol, L. Human–Computer Interaction (HCI) Advances to Re-Contextualize Cultural Heritage toward Multiperspectivity, Inclusion, and Sensemaking. Appl. Sci. 2024, 14, 7652. https://doi.org/10.3390/app14177652



Title: Exploring User Acceptance of Portable Intelligent Personal Assistants: A Hybrid Approach Using PLS-SEM and fsQCA

Abstract: This research explores the factors driving user acceptance of Rabbit R1, a newly developed portable intelligent personal assistant (PIPA) that aims to redefine user interaction and control. The study extends the technology acceptance model (TAM) by incorporating artificial intelligence-specific factors (conversational intelligence, task intelligence, and perceived naturalness), user interface design factors (simplicity in information design and visual aesthetics), and user acceptance and loyalty. Using a purposive sampling method, we gathered data from 824 users in the US and analyzed the sample through partial least squares structural equation modeling (PLS-SEM) and fuzzy set qualitative comparative analysis (fsQCA). The findings reveal that all hypothesized relationships, including both direct and indirect effects, are supported. Additionally, fsQCA supports the PLS-SEM findings and identifies three configurations leading to high and low user acceptance. This research enriches the literature and provides valuable insights for system designers and marketers of PIPAs, guiding strategic decisions to foster widespread adoption and long-term engagement.
Subjects: Human-Computer Interaction (cs.HC); Artificial Intelligence (cs.AI)
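A core step in the fsQCA analysis reported in the abstract is testing how consistently a fuzzy-set condition (or configuration of conditions) is sufficient for the outcome. The sketch below illustrates the standard consistency and coverage measures; the condition and outcome names and all membership scores are invented for illustration, not data from the study:

```python
# Standard fsQCA sufficiency measures, illustrated on made-up fuzzy-set
# membership scores (values in [0, 1]); not code or data from the paper.

def consistency(x, y):
    """Sufficiency consistency: sum(min(x_i, y_i)) / sum(x_i)."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

def coverage(x, y):
    """Coverage of the outcome: sum(min(x_i, y_i)) / sum(y_i)."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(y)

# Hypothetical membership in "high conversational intelligence" (x)
# and "high user acceptance" (y) for five respondents.
x = [0.9, 0.7, 0.8, 0.4, 0.6]
y = [1.0, 0.8, 0.9, 0.6, 0.5]

print(round(consistency(x, y), 3))  # 0.971: x is largely sufficient for y
```

A configuration is typically retained as "leading to high acceptance" only when its consistency clears a threshold such as 0.8.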


An Exploration into Human–Computer Interaction: Hand Gesture Recognition Management in a Challenging Environment

  • Original Research
  • Open access
  • Published: 12 June 2023
  • Volume 4, article number 441 (2023)


  • Victor Chang, ORCID: orcid.org/0000-0002-8012-5852 (1)
  • Rahman Olamide Eniola (2)
  • Lewis Golightly (2)
  • Qianwen Ariel Xu (1)


Scientists are developing hand gesture recognition systems to enable authentic, efficient, and effortless human–computer interaction without additional gadgets, particularly for the speech-impaired community, which relies on hand gestures as its primary mode of communication. Unfortunately, the speech-impaired community has been underrepresented in most human–computer interaction research, such as natural language processing and other automation fields, which makes it harder for its members to interact with systems and people through these advanced technologies. The system's algorithm has two phases. The first is region-of-interest segmentation, based on a color-space segmentation technique with a pre-set color range that separates the pixels of the region of interest (the hand) from the background (pixels outside the desired area). The second phase feeds the segmented images into a Convolutional Neural Network (CNN) model for image categorization; the Python Keras package was used for training. The system demonstrated the need for image segmentation in hand gesture recognition: the optimal model reaches 58 percent accuracy, about 10 percentage points higher than the accuracy obtained without image segmentation.


Introduction

British Sign Language recognition is a project based on the notion of image processing and machine learning classification. Much recent work has studied gesture recognition using various machine learning and deep learning approaches, often without fully explaining the methods used to obtain the results. This study focuses on lowering the cost and increasing the robustness of the proposed system by using a mobile phone camera, while also detailing the steps taken to reach its conclusions.

Management is a collection of operations (including planning and decision-making, organizing, directing, and supervising) aimed at an organization's resources (human, financial, physical, and informational) to attain organizational goals effectively and efficiently [ 10 ]. Unquestionably, good management is management the business can depend on in the face of new and unexpected difficulties. Nevertheless, socioeconomic, political, and, most recently, health challenges have significantly impacted the efficacy and efficiency of management processes in modern organizations. Consequently, the internal and external elements affecting the organizational management process should be attended to and evaluated.

Internal factors such as workplace culture, personnel, finances, and current technologies are under the influence of the company, while extrinsic variables such as politics, competitors, the economic system, clients, and the climate are beyond the management's control but can significantly influence the productivity and accomplishments of the organization. Therefore, the management framework of a company must be critically assessed. As a firm with a rich history spanning more than a century (116 years), BMW was founded in 1916 in Munich, Germany. This establishment, a few years younger than Ford (1903) and Rolls-Royce (1907), has developed some of the finest automobiles [ 45 ]. In this research, we critically reviewed BMW's management strategy throughout the 2008–2011 global economic crisis to determine why BMW effectively navigated the crisis while other companies failed.

Research Questions

This study aims to clarify and explain the following six research questions (RQs).

RQ1: What image processing approaches can improve picture quality and the generalizability of the project?

RQ2: What image segmentation techniques are available for separating the foreground (hand motion) from the background?

RQ3: What machine learning and deep learning approaches are available for image classification and hand gesture recognition?

RQ4: What hardware and/or software is required?

RQ5: What are the benefits of the proposed approaches over currently existing methods? Comparing our method with other approaches indicates which aspects of our techniques should be improved in future work.

RQ6: What ethical issues does the initiative raise?

Research Contributions

This study aims to contribute to the understanding of various approaches used to enhance picture quality in an imaging task. Specifically, we investigate image processing techniques such as erosion, resizing, and normalization, as well as segmentation features like HSV color separation and thresholding. In addition, this study explores machine learning approaches used in image classification projects, particularly for hand gesture recognition, and considers hitherto unutilized machine learning methods as potential alternatives. We then evaluate the feasibility of the project based on the available materials and quantify the model's performance in comparison to prior studies. Finally, we identify and address ethical concerns that may arise in hand gesture recognition due to the potential impact of advanced algorithms on people. Overall, this study seeks to contribute to the fields of hand gesture recognition and image processing, with the goal of improving human–computer interaction and addressing potential ethical issues.
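Two of the preprocessing steps named above, thresholding and erosion, can be sketched in plain Python on a tiny grayscale image. The image values and the 3x3 structuring element below are illustrative, not the study's actual data or code:

```python
# Illustrative thresholding and 3x3 erosion on a tiny grayscale image,
# represented as a list of rows of 0-255 intensities.

def threshold(img, t):
    """Binarize: pixels >= t become 1 (foreground), others 0."""
    return [[1 if p >= t else 0 for p in row] for row in img]

def erode(mask):
    """3x3 erosion: a pixel stays 1 only if every in-bounds neighbour
    (including itself) is 1, shrinking the foreground blob."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] = int(all(
                mask[i + di][j + dj]
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if 0 <= i + di < h and 0 <= j + dj < w
            ))
    return out

img = [
    [10, 200, 210, 205, 12],
    [15, 220, 230, 225, 14],
    [11, 205, 215, 208, 13],
]
mask = threshold(img, 128)   # [[0, 1, 1, 1, 0]] repeated for each row
print(erode(mask))           # only the blob's core (column 2) survives
```

Erosion like this is typically applied after thresholding to strip pixel-level noise from the edges of the segmented hand region.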

Related Literature

Hand Gesture Recognition (HGR) is a complicated process that involves many components, such as image processing, segmentation, pattern matching, machine learning, and even deep learning. The approach to hand gesture recognition can be divided into several phases: data collection, image processing, hand segmentation, feature extraction, and gesture classification. Furthermore, while static hand motion recognition tasks use single frames of imagery as inputs, dynamic sign languages utilize video, which provides continuous frames of varying imagery [ 5 ]. The technique used for data collection distinguishes computer vision-based approaches from sensor- and wearable-based systems. This section discusses the methods and strategies used by static computer vision-based gesture recognition researchers.
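The phases listed above (data collection, image processing, hand segmentation, feature extraction, and gesture classification) form a sequential pipeline. The toy sketch below shows only that stage wiring; every function body, value, and label in it is an invented stand-in, not the paper's method:

```python
# Toy sketch of the HGR phase pipeline; each stage is an illustrative stub.

def acquire():               # data collection: a stub 2x2 "frame"
    return [[0, 180], [200, 30]]

def preprocess(frame):       # image processing: normalize to [0, 1]
    return [[p / 255 for p in row] for row in frame]

def segment(frame):          # hand segmentation: keep bright pixels
    return [[p for p in row if p > 0.5] for row in frame]

def extract_features(mask):  # feature extraction: count foreground pixels
    return sum(len(row) for row in mask)

def classify(features):      # gesture classification: a toy decision rule
    return "open_hand" if features >= 2 else "fist"

result = acquire()
for stage in (preprocess, segment, extract_features, classify):
    result = stage(result)
print(result)  # open_hand
```

In a real system the classification stage would be a CNN and segmentation would use a color-space technique, but the stage boundaries stay the same.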

Human–Computer Interaction and Hand Gesture Recognition

The recent technological breakthrough in computational capabilities has produced powerful computing devices that affect people's everyday lives. Humans can now engage with a wide range of apps and platforms created to solve most day-to-day challenges. With the advancement of information technology in our civilization, we may anticipate more computer systems integrated into our society. These settings will impose new requirements on human–computer interaction, including easy and robust platforms. When these technologies are used naturally (i.e., similar to how people communicate with one another through speech or gestures), interaction with them becomes easier. Another change is the recent evolution of computer user interfaces, which has influenced modern developments in the devices and methodologies of human–computer interaction. The keyboard, the natural choice for text-based user interfaces, remains one of the most common human–computer interaction devices [ 15 ].

Human–Computer Interaction/Interfacing (HCI), also known as Man–Machine Interaction or Interfacing, has emerged gradually with the advent of advanced computers [ 11 , 12 , 19 ]. HCI is a field of research involving the creation, analysis, and deployment of interactive computer systems for human use and the investigation of the phenomena associated with them [ 11 ]. Indeed, the logic is self-evident: even the most advanced systems are useless unless they can be operated effectively by humans. This foundational argument summarizes the two crucial elements to consider when building HCI: functionality and usability [ 19 ]. A system's functionality is the collection of activities or services it delivers to its clients. Nevertheless, functionality matters only if people can employ it effectively. The usability of a system, in turn, refers to the extent and depth to which the system can be used effectively and adequately to achieve specific objectives for the user. The real value of a computer is attained when the system's functionality and usability are adequately balanced [ 11 , 12 ].

Hand Gesture Recognition (HGR) is a critical component of Human–Computer Interaction (HCI), which studies computer technology designed to understand human commands. Interacting with these technologies is simpler when it happens in a natural manner (i.e., just as humans interact with each other using voice or gestures). Nonetheless, owing to the influence of illumination and complicated backgrounds, most visual hand gesture detection systems function only in limited settings. Hand gestures are a kind of body language communicated via the center of the palm, finger position, and hand shape. Hand gestures are divided into two types, dynamic and static, as shown in Fig.  1 below. A static gesture refers to a fixed hand shape, while a dynamic gesture consists of a sequence of hand motions, like waving. Hand motions also vary within a gesture: a handshake, for instance, differs from one individual to another and depends on time and location. The main distinction between posture and gesture is that the former focuses on the shape of the hand, while the latter focuses on hand motion.

figure 1

Features of hand gesture recognition

The fundamental objective of gesture recognition research is to develop a technology capable of recognizing distinct human gestures and utilizing them to communicate information or control devices [28]. As a result, it incorporates monitoring hand movement and translating such motion into instructions. Hand Gesture Recognition methods for HCI systems can be classified into two types: wearable-based and computer vision-based recognition [30]. The wearable-based approach collects hand gesture data using several sensor types; these devices are mounted on the hand and record its position and movement, after which the data are analyzed for gesture recognition [30, 38]. Wearable devices allow gesture recognition in different ways, including data gloves, EMG sensors, and Wii controllers. Wearable-based hand gesture identification systems have a variety of drawbacks and ethical challenges, covered later in this paper.

In contrast, computer vision-based solutions are a widespread, appropriate, and adaptable approach that employs a camera to capture imagery for hand gesture recognition, enabling contactless communication between people and computers [30, 38]. The vision-based recognition technique uses various image processing techniques to obtain hand position and movement data, detecting gestures based on shape, position, features, color, and hand movement (Fig. 2). However, vision-based recognition has certain limitations in that it is affected by lighting conditions and cluttered surroundings [38].

figure 2

Computer vision-based gesture recognition

Image Processing

The human eye can perceive and grasp the objects in a photograph; accurate algorithms and considerable training are necessary to make computers comprehend images the way people do [13, 14, 16]. Image data account for about 75 percent of the information acquired by an individual. When we receive and use visual information, we refer to this as vision, cognizance, or recognition; when a computer collects and processes visual data, it is called image processing and recognition. The median and Gaussian filters are two prevalently used filtering techniques for minimizing distortion in collected images [5]. Zhang et al. [46] adopted the median filter approach to remove noise from the gesture image and generate a more suitable image for subsequent processing. Piao et al. [33] likewise applied Gaussian and bilateral filter strategies to de-noise and enhance the image. Treece [41], on the other hand, proposed a unique filter claimed to retain edges and details better than the median filter, with noise-reducing performance comparable to the Gaussian filter, suitable for a wide range of signal and noise kinds. Scholars have also researched other filtering algorithms. For example, Khare and Nagwanshi [21] presented a review of nonlinear filter methods that may be utilized for image enhancement; they conducted a thorough investigation and performance comparison of the Histogram Adaptive Fuzzy (HAF) filter and other filters based on PSNR (Peak Signal-to-Noise Ratio).

Morphological transformation is another image processing procedure often used to eliminate undesirable content from an image. A notable example is Hassanpour et al. (2015), who used morphological transformations to improve the quality of various medical photographs. Morphological transformation has also been utilized in many studies to extract features and contour areas critical for recognition or classification tasks [6, 39, 43]. Lastly, Histogram Equalization is another image enhancement technique that has received considerable attention. Xie et al. [44] investigated the basic concept of histogram equalization for image enhancement and showed that it can improve image effect and contrast. Abdullah-Al-Wadud et al. [1] addressed a dynamic histogram equalization (DHE) method that splits the histogram at local minima and allocates a specific grey-level range to each partition before equalizing the partitions individually, controlling the impact of classical HE so that it enhances the image without sacrificing detail. They asserted that the DHE technique outdoes other current methods by improving contrast without introducing adverse effects or unacceptable artifacts.

Image Segmentation

The segmentation stage entails splitting images into numerous separate regions to isolate the Region of Interest (ROI) from the rest of the imagery. Scholars have proposed different methods for image segmentation, discussed below. Skin color segmentation is typically done in different color spaces, depending on the image type and content. Muhammad and Abu-Bakar [27] suggested a color space blend of HSV and YCgCr for skin detection and segmentation that responds well to various skin tones while being less sensitive to background pixels that resemble skin.

Shaik et al. [37] conducted a thorough literature review, surveying various color spaces used for skin color identification, and found that the RGB color space is not favored for color-based identification and assessment because it blends color (chrominance) with intensity (luminance) and has non-uniform characteristics. They further argued that luminance- and hue-based strategies discriminate color from intensity even under bad lighting conditions, a claim backed by their experimental results demonstrating the performance of the YCbCr color space in the segmentation and detection of skin color in color images.

Saini and Chand [35] addressed the application and retrieval of skin pixels in the RGB color model and demonstrated the need for changing color models by monitoring the impacts of variables such as noise and illumination conditions. They also discussed various color models commonly utilized in research, such as the HSI, HSV, TSL, and YUV color spaces, and noted that illumination, shadows, and interference can alter the appearance of skin color and make segmentation and detection difficult. As a result, their study introduced an RGB-based skin segmentation method for retrieving skin pixels, along with a computerized method for automatically converting between color models, such as RGB into HSV or vice versa, to obtain the most distinguishable image pixels.

Other methods, aside from skin color-based segmentation, have been extensively researched in the literature. Phung et al. [32] examined pixel-wise skin segmentation based on color pixel classification, revealing that the histogram-based Bayesian classifier and the multilayer perceptron (MLP) performed better than other methods, including the piece-wise linear and Gaussian classifiers. They argued that the Bayesian classifier combined with the histogram method is practical for the skin color pixel classification problem owing to the low dimension of the feature space and the availability of an enormous training set, although it consumes far more memory than the MLP or other algorithms. Concerning color representations, their investigation using a Bayesian classifier demonstrated that the choice of color model does not affect pixel-wise skin segmentation. They concluded, nevertheless, that using chrominance channels alone reduces segmentation quality and that there are considerable efficiency differences across chrominance options.

Gesture Recognition and Machine Learning Algorithms

Various machine learning and deep learning algorithms have recently been utilized for hand gesture recognition and the classification of static and dynamic gestures. Different machine learning algorithms have been utilized for static hand gesture recognition [7, 25, 29]. Liu et al. [25] introduced a Hu-moments and Support Vector Machine (SVM)-based approach: first, the Hu invariant moments are extracted into a seven-dimensional feature vector; second, an SVM classifier is used to determine a decision boundary between the gesture classes. Feng and Yuan [7] and Nagashree et al. [29], on the other hand, extracted Histogram of Oriented Gradients (HOG) features and trained an SVM classifier, which is extensively utilized for classification, on these attributes. At testing time, a decision is made using the previously learned SVMs, and recognition rates are compared across distinct illumination scenarios. The findings reveal that the HOG feature extraction and multivariate SVM classification approaches achieve substantial recognition accuracy, and that the system is more robust to lighting.
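A minimal sketch of the HOG-plus-SVM pipeline described above, using scikit-image and scikit-learn, is shown below. The synthetic bar images stand in for real hand photos, and all parameters are illustrative assumptions rather than the settings of [7, 25, 29].

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def make_sample(label):
    # Toy "gestures": a vertical bar (class 0) vs. a horizontal bar (class 1)
    # plus mild noise, standing in for real hand images.
    img = rng.normal(0, 0.05, (32, 32))
    if label == 0:
        img[:, 14:18] += 1.0
    else:
        img[14:18, :] += 1.0
    return img

X, y = [], []
for _ in range(40):
    for label in (0, 1):
        # HOG descriptor: histograms of gradient orientations per cell,
        # normalized over overlapping blocks.
        feat = hog(make_sample(label), orientations=9,
                   pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        X.append(feat)
        y.append(label)

# Train a linear SVM on the first 60 descriptors, test on the remaining 20.
clf = SVC(kernel="linear").fit(X[:60], y[:60])
acc = clf.score(X[60:], y[60:])
print(acc)
```

Because HOG encodes gradient orientation rather than raw intensity, the learned boundary is relatively insensitive to global lighting changes, which matches the robustness result reported above.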

An Artificial Neural Network (ANN) is a computer processing technology with functional properties resembling those of human neural systems. For ANN-based hand gesture recognition, we reviewed several works in the literature [8, 17, 31].

Oyedotun and Khashman [31] suggested using a deep convolutional neural network to recognize all 24 hand gestures from Thomas Moeslund’s gesture recognition repository. They demonstrated that more biologically oriented deep networks, such as the convolutional neural network and the stacked denoising autoencoder, can grasp the complicated hand gesture identification challenge with reduced misclassification. Islam et al. [17] reported a static hand gesture recognition approach based on CNN; data augmentation techniques such as re-scaling, resizing, shearing, translation, and width and height shifting were applied to the pictures used to train the model. Flores et al. [8] suggested techniques for recognizing the static hand gesture alphabet of the Peruvian sign language (LSP). They used image processing methods to remove or minimize noise, boost contrast under different lighting conditions, segment the hand from the image background, and ultimately recognize and crop the area holding the hand gesture. They then used convolutional neural networks (CNN) to categorize the 24 hand gestures, creating two CNN designs with varying numbers of layers and attributes per layer. Testing revealed that the CNN trained without data augmentation had lower accuracy than the one trained with it.
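The augmentation operations cited above (rotation, translation, intensity re-scaling) can be sketched with SciPy; the transformation ranges below are our own assumptions for illustration, not the settings used by Islam et al. or Flores et al.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)

def augment(img):
    """Randomly rotate, shift, and rescale intensities of a grayscale image,
    illustrating common augmentation steps for gesture datasets."""
    angle = rng.uniform(-15, 15)                       # small random rotation
    out = ndimage.rotate(img, angle, reshape=False, mode="nearest")
    shift = rng.uniform(-2, 2, size=2)                 # small random translation
    out = ndimage.shift(out, shift, mode="nearest")
    return np.clip(out * rng.uniform(0.9, 1.1), 0, 255)  # intensity re-scaling

base = np.zeros((32, 32))
base[8:24, 12:20] = 200.0   # stand-in for a hand silhouette

# Each call yields a slightly different variant of the same sign.
augmented = [augment(base) for _ in range(5)]
print(len(augmented), augmented[0].shape)
```

Expanding the training set this way exposes the network to pose and lighting variation it would otherwise only meet at test time, which is why the augmented CNN above outperformed the unaugmented one.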

Timing mismatch makes it impossible to match two different gestures in dynamic gesture recognition using Euclidean distance alone. Nevertheless, scholars have developed sophisticated techniques and algorithms for detecting and identifying dynamic hand gestures in real time [23, 24, 40]. The authors of [40] created a method that uses image entropy and density clustering to extract key frames from hand gesture video for further feature extraction, potentially improving recognition efficiency. They also presented a pattern fusion technique to enrich feature representation and enhance system performance.

Lai and Yanushkevich [24] suggested a hybrid of convolutional neural networks (CNN) and recurrent neural networks (RNN) for automatic hand gesture identification utilizing depth and skeletal data. In their study, recurrent neural networks recognized sequences of motion for every skeleton joint from the skeleton details, while the CNN retrieved spatial data from depth images. Köpüklü et al. [23] suggested a two-level hierarchical system of a detector and a classifier that allows offline-trained convolutional neural network (CNN) models to run online effectively using the matching template technique. They used several CNN designs and compared them in terms of offline classification accuracy, number of parameters, and computing efficiency, and employed the Levenshtein distance to assess single-time activations of the identified gestures.

Applications of Hand Gesture Recognition

Hand gesture recognition has several applications in various industries, such as virtual worlds, robotics, intelligent surveillance, sign language translation, and healthcare systems. The sections below delve into a few of these application areas.

Applications in Healthcare

Over the years, interactive systems have expanded significantly as several research efforts have proved their value and influence in different sectors, including drug production, medicine, and healthcare. Given the success of interactive systems in medicine, there has been a desire for such systems to be used more widely in the healthcare and medical fields. Many countries have therefore been interested in building interactive systems, including medical robots, by providing financing as well as scholarship opportunities to promote research and innovation in the Human–Computer Interaction sector [36].

The continual progress of this kind of innovation is now recognized and welcomed in practically all industries. Human–computer interaction systems will benefit the healthcare and medical industries, which make extensive use of such novel concepts. Yearly, new variations, designs, aesthetics, and maneuverability are developed, particularly human-inspired or humanoid robots that can think and behave like people. The continued advancement of this technology is lauded and has significantly influenced the medical and healthcare fields [36].

Undoubtedly, the prospect of computerized assistance will result in a significant boost in the quality of care. However, the performance and viability of such systems will be determined by how effective the interactive systems are in the medical and healthcare sector and how valuable they are to patients. Equally important are measures of doctors’ confidence in the effectiveness of the new technology systems to be deployed in the healthcare profession [36].

It is essential to keep the environment aseptic during medical surgery. However, during surgery the physician must also view the patient’s clinical visual information on a computer, which must remain sterile. Existing human–computer interfaces are therefore difficult for staff to handle during an operation; they raise the workload and the number of operational workers needed in the theatre, making it challenging to ensure a speedy, accurate, and safe procedure.

The drawback mentioned above can be overcome using the hand gesture recognition approach. A notable example of a Hand Gesture Recognition application is the optimization of the surgical process utilizing the Kinect for Windows hardware and SDK created by a team at Microsoft Research Cambridge. The device allows clinicians to adjust, relocate, or enlarge scans, magnetic resonance imaging (MRI) images, and other medical data using simple hand motions (Douglas).

Wachs et al. [42] created a gesture-based system for sterile browsing of radiological images, another notable example of Hand Gesture Recognition (HGR) research and application in the surgical theatre. Sterile human–machine interaction is critical since it is how the physician handles clinical data while preventing contamination of the patient, the surgical theatre, and the accompanying doctors. The gesture-based technology might substitute the touchscreen displays already used in many medical operating rooms, which must be enclosed to prevent contamination from accumulating or spreading and which need flat surfaces that must be thoroughly cleaned after each treatment, though sometimes they are not. With healthcare infection rates currently at alarmingly high levels, hand gesture recognition technology offers a viable alternative [9].

Another noteworthy application of HGR technology is Sathiyanarayanan and Rajan’s [36] MYO diagnostics system, which can interpret Electromyography (EMG) patterns (graphs), vector data, and the electrical signals of the complex anatomy within the hand. To identify hand movement, the system employs complex algorithms whose outputs are interpreted as instructions. The system allows for collecting massive amounts of data and investigating series of EMG lines to identify medical issues and hand motions.

Applications in Robotics

Van den Bergh et al. [3] created an automated hand gesture detection system using the readily available Kinect sensor. The sensor enables complicated three-dimensional motions to be captured while remaining robust to distracting objects or people in the environment. The technology is embedded into an interactive robot (based on ROS), enabling automated interaction with the robot through hand gestures: pointing motions are translated into objectives that determine the robot’s direction.

Robot technology has advanced considerably in recent years. However, a hurdle remains in developing the robot’s capacity to comprehend its surroundings, for which sensing is crucial [4]. Hand gesture recognition algorithms are advisable for managing the robot’s behavior efficiently and effectively, and they have become a new research hotspot in robot vision and control. A notable example is the Smartpal service robot, which utilizes Kinect technology and allows users to operate the robot with their gestures, with the robot simulating the users’ actions [4]. As time progresses, using hand gestures to instruct a robot to perform different tasks is no longer unrealistic.

Applications in Gaming and Virtual Environments (VE)

There has been a trend in the gaming industry toward hand gesture recognition systems, where gestures serve as instructions for video games rather than the traditional approach of pressing keys on a keypad or using a controller. It is essential for these modern interfaces to distinguish accidental movements from deliberate gestures so that the user can have a more natural experience. Kang et al. [18] provided a unique approach to gesture identification that blends gesture detection and classification, distinguishing between purposeful and unintended motions within a particular visual sequence.

Methodology

In this study, we first review previous literature on human–computer interaction and hand gesture recognition. We then select a dataset of images for analysis. In the image enhancement and segmentation step, the quality of the raw images is improved by reducing background noise, applying color space conversions, and isolating the main subject from its background. After that, machine learning algorithms such as CNNs are employed to learn the relevant attributes and perform hand gesture recognition. We then analyze the results of the algorithms in two parts, focusing on image enhancement and hand gesture recognition. Finally, we discuss the results, including challenges and limitations, ethical considerations, and opportunities for future work (Fig. 3).

figure 3

Research framework

British Sign Language

British Sign Language (BSL) is a visual language used by persons with hearing or speech disabilities to convey meaning via word-level gestures, nonmanual features such as facial expression and body posture, and fingerspelling (spelling words using hand movements) [26]. Figures 4 and 5 below depict the BSL alphabet with two hands and one hand, respectively. We utilized the one-hand alphabet in this project because many features of the two-hand BSL alphabet make identification difficult.

figure 4

Two-hand British sign language

figure 5

One-Hand British sign language

In this paper, we worked on five one-hand BSL letters: A, I, L, R, and V. For each of the five letters utilized in this study, we developed a data collection of 90 distinct signs. A single signer performed each hand sign five times under varied lighting and timing circumstances. Moreover, to improve real-world performance and prevent over-fitting, we employed a validation dataset obtained under different conditions from the training dataset.

The images used in this project were captured with an iPhone 11 Pro Max camera, which has triple 12 MP Ultra-Wide, Wide, and Telephoto lenses; the Ultra-Wide lens has an ƒ/2.4 aperture and a 120° field of view. We collected two photos for the hand segmentation (color separation) task, one with and the other without a background, and stored them in separate folders. Each picture was processed and segmented before being augmented to expand the dataset for each sign. As part of the augmentation, the photos were randomly resized and rotated. The dataset was then subdivided into folders for each letter.

The first step in every image processing system is processing the raw pictures. Image processing is essential to keep all images uniform and consistent, which increases the accuracy and efficacy of the subsequent segmentation and feature extraction methods. Consequently, background noise should be decreased and color space conversions applied to enhance the images and better emphasize the region of interest. All the images should be converted to different color spaces to determine the optimal color space for color separation, enabling image segmentation.

Many recent efforts have focused on image segmentation, a critical stage in image processing that analyzes a digital image by dividing it into several sections and is used to separate distinct elements of an image into foreground and background based on various criteria, such as grey-level values or texture [20, 34]. Image segmentation is acknowledged as the initial and most fundamental procedure in numerous computer vision tasks, including hand gesture recognition, medical imaging, robotic vision, and geographical imaging. Scholars have previously examined several segmentation approaches and algorithms in the literature. These solutions overcome several limitations of traditional hand gesture recognition systems. However, no single method can be deemed superior for all types of images; each strategy is only appropriate for a particular image and purpose.

Thresholding, region growing, region merging and splitting, clustering, edge detection, and model-based approaches are the six main types of image segmentation methods. All of them are based on two fundamental properties of intensity values: discontinuity and similarity. Discontinuity-based segmentation relies on abrupt changes in intensity levels in the picture; here we are primarily interested in recognizing distinct points and edges. The other strategy groups pixels that are similar within a region according to predefined criteria, and it comprises procedures such as thresholding, region growing, and region splitting and merging.

Color Model

A color model is a mathematical abstraction representing colors as tuples of numbers with three or four values, or color components. The collection of generated colors is referred to as a “color space” when the color model is paired with a precise description of how the components are to be interpreted and of the viewing conditions. Color spaces can also be used to explain how human color vision is simulated in a range of applications, including computer vision, image analysis, and graphic design; in practice, the terms color space and color model are often used interchangeably. There are many color space families, including the luminance-based models (YUV, YCbCr, and YIQ), the hue-based models (HSI, HSV, and HSL), and the RGB family (RGB, normalized RGB) [2, 22]. By default, the Python OpenCV library loads pictures in BGR format; a BGR picture can be converted into any other color space using conversion functions. RGB color space is the most fundamental kind of picture representation, yet for some applications working in alternative color spaces is more convenient.

Kolkur et al. [22] stated that choosing a color space is the first step in skin color segmentation. They acknowledged that a suitable threshold for recognizing skin pixels in a given image may be provided by a combination of one or more color spaces, and that the appropriate color space is usually determined by the skin recognition application.

RGB Color Model

A natural image’s default color model is RGB (Red, Green, and Blue). In this form, an image is represented by an m × n × 3 array of color pixels, where each pixel is a triplet of the three colors red, green, and blue at spatial position (m, n) (Hema and Kannan [16]; Appendix 1). The three color components can be thought of as a stack of three separate layers: pixels in an image have a red layer, a blue layer, and a green layer, resulting in an RGB image [16]. These color components can be seen as a three-dimensional model. In additive color mixing, when all three color channels have a value of 0, no illumination is emitted and the resulting color is black; when all three channels are set to their peak value, 255, the resulting color is white. TV screens are excellent illustrations of RGB color mixing in use. This color space is more susceptible to noise than others since it mixes illumination and chromatic information [2].

HSV Color Space

HSV (Hue, Saturation, Value) is a color space much closer than RGB to the way people describe and interpret colors. Humans see hue as the dominant color; saturation refers to the amount of white light mixed in with the color; and value represents the brightness or intensity. Hue corresponds to tint, saturation to shade, and value to tone. The HSV color space may be seen as a geometric cylinder, with the angular dimension representing Hue (H), beginning with primary red at 0°, progressing to primary green at 120° and primary blue at 240°, and eventually wrapping back to red at 360°. Saturation (S) refers to the distance from the central axis of the HSV cylinder: a saturation value toward the outer border indicates that the colorfulness of the color described by the hue is reaching its peak. Value (V) runs along the central vertical axis of the HSV color space, extending from black at the bottom (value 0) to white at the top (value 1) (Appendix 1). As shown in Fig. 9 below, the color components in this space can be separated easily, unlike in the RGB color space discussed above. Therefore, we selected the HSV color space as the optimal color model for segmentation based on color.
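The hue angles of the cylinder described above can be verified with Python's standard-library `colorsys` module, which works with RGB and HSV components in [0, 1] (so hue 1/3 corresponds to 120°):

```python
import colorsys

# Primary red, green, and blue map to hue angles 0°, 120°, and 240°,
# each fully saturated (S = 1) and at full brightness (V = 1).
for name, rgb in [("red", (1, 0, 0)), ("green", (0, 1, 0)), ("blue", (0, 0, 1))]:
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    print(name, round(h * 360), s, v)
# → red 0 1.0 1
# → green 120 1.0 1
# → blue 240 1.0 1
```

This stdlib check is independent of OpenCV's 0–179 hue encoding, so it shows the geometry of the cylinder directly in degrees.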

Color Image Segmentation Using HSV Space

The detection of skin color is an excellent example of applying color-based image segmentation to recognize a particular item by its color. Determining skin color via distinct color spaces is a popular image segmentation step. Segmenting the image’s foreground object to identify and recognize the hand area is the first stage of this project’s three-phase technique, shown in the flowchart in Fig. 6 below. The first step is to choose a Region of Interest (ROI) from the provided picture; the second is to adjust the HSV values inside the ROI to extract a mask; the last is to select the ROI using the image mask. As shown in Fig. 6, segmenting the picture in either the BGR or RGB color space is highly challenging and will not yield the optimal result, so we converted the image to HSV and investigated its segmentation potential.

figure 6

HSV-based segmentation flowchart

The flowchart works as follows. First, an RGB image is transformed into an HSV image using the HSV color space conversion algorithm. Next, the resulting HSV components (Hue, Saturation, and Value) are separated into constituent values, as shown in Fig. 6, which are then represented as ranges using the Python library. Finally, the optimal HSV range for a particular image of a specific hand gesture is determined by interactively changing the values of each Hue, Saturation, and Value component.

Hand Gesture Recognition Algorithm

Deep learning is quickly emerging as a prominent sub-field of machine learning owing to its exceptional performance over many data sources. Convolutional neural networks are an excellent deep learning approach for categorizing images, and the Keras Python package makes creating a CNN straightforward. A CNN uses a multi-layer architecture that includes an input layer, an output layer, and hidden layers composed of numerous convolutional layers, pooling layers, and fully connected layers.
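A minimal Keras model of the kind described can be sketched as follows. The layer sizes and the 64 × 64 input are illustrative choices, not the exact architecture used in this study; only the five-way output matches the five BSL letters considered here.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Input layer, two convolution + pooling blocks (hidden layers), then
# fully connected layers ending in a five-class softmax output.
model = keras.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(5, activation="softmax"),   # one unit per BSL letter
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Training then reduces to `model.fit(train_images, train_labels, validation_data=...)` on the augmented dataset, with the validation split drawn from different capture conditions as described above.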

Convolution Layer

Convolution is a mathematical operation on two functions that yields a third function expressing how the shape of one is modified by the other. Convolutional neural networks comprise several layers of artificial neurons: mathematical functions that compute the weighted sum of many inputs and output an activation value, an approximate replica of their biological counterparts. When an image is fed into a ConvNet, each layer produces many activation maps, which are passed on to the next layer. Typically, the first layer extracts basic features, such as horizontal or vertical edges. This output is sent to the subsequent layer, which recognizes more complicated characteristics, like corners or combinations of edges. Deeper into the network, layers recognize increasingly sophisticated features, such as objects and characters.

Pooling Layer

Like the convolutional layer, the pooling layer is responsible for reducing the spatial dimension of the convolved feature. Decreasing the dimensions reduces the computing power necessary to analyze the data. Pooling comes in two types: average pooling and max pooling. Max pooling takes the highest pixel value within the kernel-covered region of the picture; it also acts as a noise suppressant, discarding noisy activations while performing de-noising and dimension reduction. Average pooling produces the mean of all values in the kernel’s section of the picture, reducing dimensionality with only mild noise suppression. For this reason, max pooling is often considered to outperform average pooling.
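The two pooling variants can be demonstrated directly in NumPy on a small feature map; this toy implementation (2 × 2 non-overlapping windows) is for illustration only, since frameworks like Keras provide these layers built in.

```python
import numpy as np

def pool(x, size=2, mode="max"):
    """Non-overlapping size x size pooling over a 2-D array."""
    h, w = x.shape
    blocks = x.reshape(h // size, size, w // size, size)
    # Reduce each size x size block to one value: its max or its mean.
    return blocks.max(axis=(1, 3)) if mode == "max" else blocks.mean(axis=(1, 3))

x = np.array([[1, 3, 2, 0],
              [4, 2, 1, 1],
              [0, 0, 5, 6],
              [1, 2, 7, 8]], dtype=float)

print(pool(x, mode="max"))    # → [[4. 2.] [2. 8.]]
print(pool(x, mode="mean"))   # → [[2.5  1.  ] [0.75 6.5 ]]
```

Each 4 × 4 input shrinks to 2 × 2: max pooling keeps the strongest activation per window, while average pooling blends all four values, which is why weak noisy activations survive averaging but not the max.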

The hand gesture recognition system is separated into two tasks, which are explained at length below.

Image Enhancement and Segmentation Phase

The first task is image segmentation, which entails removing the image’s background content and producing an image without the background. Before beginning image segmentation, we must first do image processing, as described in our methodology above.

Image Enhancement

To start, we examine a sample image using its histogram, as shown in Fig. 7 below. The hand gesture image was loaded using the OpenCV library, which stores the image in BGR channel order. The difference between the RGB and BGR color spaces can be seen in Fig. 7.

figure 7

Histogram of a sample image

Figure 8 depicts the intensity values and counts of the RGB color model from one of our dataset sample images. In addition, we produced a clearer histogram based on the RGB and HSV color spaces for better comprehension. We attempted to improve the image before moving on to segmentation, and the task was to evaluate whether the image enhancement outcome was favorable. First, we employed several denoising algorithms to filter the picture and decrease noise. We applied the median and mean filter approaches to the example picture and found no visible difference between the original and filtered images, as illustrated in Fig. 8 below.

Fig. 8: RGB and HSV histogram of a sample hand gesture
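The median and mean filters mentioned above can be sketched with NumPy's sliding windows (in practice one would use `cv2.medianBlur` and `cv2.blur`). The toy example below also shows the classic difference between the two: a median filter removes an isolated salt-noise pixel entirely, while a mean filter only smears it. Requires NumPy ≥ 1.20 for `sliding_window_view`.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def filter3x3(img, mode="median"):
    """3x3 median or mean filter on a 2-D grayscale image (borders cropped)."""
    win = sliding_window_view(img, (3, 3))          # shape (H-2, W-2, 3, 3)
    reduce = np.median if mode == "median" else np.mean
    return reduce(win, axis=(-2, -1))

img = np.full((5, 5), 10.0)
img[2, 2] = 255.0                  # a single salt-noise pixel
med = filter3x3(img, "median")     # noise removed: every output value is 10
avg = filter3x3(img, "mean")       # noise smeared: centre becomes (8*10 + 255)/9
```

On an already-clean image, both filters return values close to the input, which is consistent with the "no visible difference" observation above.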

Finally, we employed the Histogram Equalization approach to improve image quality and discovered that it did not provide the best results. Figure 9 compares the original image with the one obtained after Histogram Equalization; equalization did not lead to an improved image. Therefore, we examined the histograms of the multi-channel image in the RGB and HSV color spaces to better understand the impact of equalization, as shown in Fig. 9 below.

Fig. 9: RGB and HSV histogram after equalization of a sample hand gesture (a left, b right)
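For reference, classic histogram equalization (what `cv2.equalizeHist` computes for an 8-bit grayscale image) is a remapping through the cumulative distribution function. The sketch below stretches a low-contrast ramp to the full 0–255 range; the input image is a made-up example:

```python
import numpy as np

def equalize(gray):
    """Histogram equalization for an 8-bit grayscale image."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                       # first non-zero CDF value
    lut = np.round((cdf - cdf_min) * 255.0 / (gray.size - cdf_min))
    lut = np.clip(lut, 0, 255)                      # guard unused low bins
    return lut.astype(np.uint8)[gray]               # apply the lookup table

# Low-contrast image: intensities only span 100..109
gray = np.tile(np.arange(100, 110, dtype=np.uint8), (10, 1))
out = equalize(gray)               # intensities now span the full 0..255 range
```

Stretching contrast this way can also amplify noise and distort color balance, which is one reason equalization can make a hand image worse rather than better, as observed here.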

Comparing figures a and b with figures c and d shows that image enhancement based on histogram equalization is counterproductive, so it was removed from the system. After carefully analyzing the selected sample image from the hand gesture dataset, we moved on to the image segmentation portion of the research. In the segmentation phase, we tried two thresholding algorithms and one color-space segmentation approach, explained further below. We attempted to segment a hand gesture image by binarizing it, iteratively searching for the optimal threshold to segment the image and eliminate the background.

Figure 9 above shows the results obtained from image segmentation using different thresholding functions. The Yen and Otsu thresholds retain most of the information in the image. The section below applies both the Yen and Otsu thresholding methods to image segmentation: Fig. 9a shows the outcome of the Otsu threshold, whereas Fig. 9b shows the Yen threshold.
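Otsu's method picks the threshold that maximises the between-class variance of the resulting foreground/background split. A NumPy sketch is below; in practice scikit-image provides both `skimage.filters.threshold_otsu` and `skimage.filters.threshold_yen` ready-made. The toy bimodal image is an assumption for illustration, not data from this project:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximising between-class variance (Otsu)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                        # class-0 (background) probability
    mu = np.cumsum(p * np.arange(256))          # cumulative mean
    mu_t = mu[-1]                               # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b2)))

# Toy bimodal "hand vs background" image: half the pixels at 50, half at 200
gray = np.repeat([50, 200], 50).astype(np.uint8)
t = otsu_threshold(gray)           # lands between the two modes
binary = gray > t                  # foreground mask
```

Yen's method follows the same recipe but maximises an entropy-based criterion instead of between-class variance, which is why the two can retain slightly different amounts of detail on the same image.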

We tried different color segmentation approaches to separate the background from the foreground. The first approach segments the image based on upper and lower blue ranges in the HSV space, with the result shown below. Our final approach to hand segmentation uses an iterative search for the best Hue, Saturation, and Value bounds in the sample image. This method proved the most effective and yielded the best result, as illustrated in the image below.

As the image above shows, the region of interest needed for the hand gesture recognition task is highlighted and the background has been removed. We can now loop through our images and apply the HSV-based segmentation algorithm we created. The next stage discusses the Machine Learning results obtained in the project.
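The HSV-range segmentation applied in the loop above reduces to a per-channel bound check, the NumPy equivalent of `cv2.inRange`. The bounds below are hypothetical placeholders, not the values found by the iterative search, and the input is assumed to already be in HSV (e.g. via `cv2.cvtColor(img, cv2.COLOR_BGR2HSV)`, where hue spans 0–179):

```python
import numpy as np

# Hypothetical skin-tone bounds (H, S, V) -- stand-ins, not the tuned values
LO = np.array([0, 30, 60])
HI = np.array([35, 255, 255])

def hsv_mask(hsv):
    """Boolean foreground mask: True where every channel lies in [LO, HI]."""
    return np.all((hsv >= LO) & (hsv <= HI), axis=-1)

hsv = np.array([[[20, 100, 150],      # skin-like pixel  -> kept
                 [90, 100, 150]]])    # background pixel -> dropped
mask = hsv_mask(hsv)                  # [[True, False]]
```

Multiplying the original image by this mask zeroes out the background, leaving only the segmented hand region for the classifier.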

Hand Gesture Recognition

The first step in the machine learning stage of the project is to load all image datasets into our system and carefully explore the dataset to understand the training and validation data distribution (Fig. 10).

Fig. 10: Train and validation classes

For hand gesture prediction, we first classified the unsegmented images and then compared the results of the two models. We observed that image segmentation markedly improved the performance of our model: we achieved an accuracy of 45 percent on the unsegmented images and 58 percent on the segmented images. However, the system did not achieve an optimal result, owing to the quality and the limited number of sample images used. The images and table below summarize the models used in this project. The author has also identified areas for improvement, highlighted in this project's conclusion section.

The accuracy and loss for both training and validation images, before and after image segmentation, are given in Figs. 11 and 12. Moreover, Table 1 illustrates the confusion matrix summary obtained by predicting a set of images under different lighting conditions.
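A confusion matrix like the one summarised in Table 1 is computed by tallying (true class, predicted class) pairs; overall accuracy is then the trace divided by the total. The labels below are made up for illustration, not the project's predictions:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows = true class, columns = predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(cm, (y_true, y_pred), 1)    # count each (true, pred) pair
    return cm

y_true = np.array([0, 0, 1, 1, 2, 2])     # illustrative gesture labels only
y_pred = np.array([0, 1, 1, 1, 2, 0])
cm = confusion_matrix(y_true, y_pred, 3)  # [[1,1,0],[0,2,0],[1,0,1]]
accuracy = np.trace(cm) / cm.sum()        # 4 correct out of 6
```

Reading the off-diagonal cells per row shows which gestures are confused with which, which is more informative than the single accuracy figure when lighting conditions vary.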

Fig. 11: Accuracy and loss before image segmentation

Fig. 12: Accuracy and loss after image segmentation

This project extensively studied computer image processing and analyzed a wide range of literature and techniques. As a result, the author is familiar with the different approaches and algorithms required for image classification and hand gesture recognition; the findings of this research project are summarized below. While it is critical to analyze the data and pre-process images for better results, no single approach works for all images and image types. Therefore, the data scientist or image processor must analyze the picture accurately in order to choose the optimal enhancement for the image and the task. As the results above show, the image-enhancement processes tried here were unproductive and did not provide a satisfactory outcome.

Recommendations

Various factors influence image quality. The type of camera used to take the photographs, the direction and angle of capture, the lighting conditions, the subject's skin tone, and other characteristics are all likely to affect image processing and the hand gesture recognition task. Unlike a straightforward Machine Learning classification or regression task, where it is easier to fix the features used to predict the target class, deep learning classification involves several hyper-parameters and requirements that an image must meet to be usable; for example, the image's shape and size must match what the chosen algorithm specifies. Moreover, biased or racially skewed results from hand gesture detection systems may have risky and uncontrolled consequences, so any discriminatory tendencies of a model must be weighed against its benefits in order to provide a fair model.
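Meeting a network's fixed input shape usually means resizing every image first (with OpenCV, `cv2.resize`). A minimal nearest-neighbour resize, sketched in NumPy as an illustration rather than the project's actual pre-processing step:

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize of a 2-D (grayscale) image."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return img[rows[:, None], cols]

img = np.arange(16).reshape(4, 4)
small = resize_nearest(img, 2, 2)          # [[0, 2], [8, 10]]
```

Production pipelines typically prefer bilinear or area interpolation for downscaling, but the fixed-shape requirement is the same regardless of the interpolation chosen.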

Ethical Considerations

If the true goal of a hand gesture recognition system is to deal with sign language efficiently, then all the multiple and diverse elements of sign language should be considered, which means that sign language must be researched in full and sign language recognition systems fully implemented. In this sense, scholars working on sign language must take an interdisciplinary approach with the assistance of the speech-impaired community and experts. Numerous studies gave scant attention to the effect of including or excluding specific words from the task of hand gesture recognition.

The implications of this omission may be detrimental to users who rely on such systems. Another oversight in hand gesture recognition systems is the omission or under-representation of some demographics or groups in the training dataset, which results in a biased system that overgeneralizes the training data. This could result in deploying machine learning systems that fail to perform efficiently when used by an underrepresented user. Since an image's luminance level significantly affects a vision-based hand gesture recognition system, systems may malfunction when employed in lighting conditions not included in the training data.

The project is a rudimentary static gesture recognition system that cannot perform dynamic gesture recognition tasks. Since the dataset utilized is not diverse (it consists of the author's own hand gesture photographs), the system may fail when applied to a different dataset. The pictures were taken with a simple camera and are of poor quality, and the system is sensitive to lighting conditions, so it may not perform optimally on pictures with varying illumination. Through the study, the research team uncovered other methodologies worth examining. First, several image-enhancement algorithms should be researched and applied in future work. Second, feature extraction, such as the silhouette image-based technique, should be studied in future research. Third, optimizing, selecting, and weighting the extracted features will be investigated to simplify computations. In addition, the algorithm design stage will examine recognition accuracy and robustness, ease of use, and operational efficiency. Finally, future studies should incorporate more modern technologies and approaches, such as tracking to enable dynamic gesture detection, and keep up with the newest technology in the sector to enhance the system's performance.

The hand gesture recognition project in this article was produced using skin color segmentation in the HSV color space. This algorithm exploits the robust skin color segmentation properties of the HSV space to counteract the impact of changing illumination conditions on gesture detection. In addition, several image-enhancement procedures were performed on the image prior to hand segmentation. The hand gesture orientation was generalized after the Region of Interest was segmented, using the data generator function for batch gradient descent. These processes mitigate the effect of variations in gesture orientation on gesture recognition. The generalization ability of the algorithm is improved during the gesture recognition stage by integrating embedded deep sparse auto-encoders in the classifier. The experimental findings reveal that, following segmentation, the suggested technique is robust and considerably preferable to the alternative in classification performance and recognition consistency.

Data availability

The data used to support the findings of this study are available from the authors upon request. The data are not publicly available due to the presence of sensitive biological information that may compromise the privacy and confidentiality of research participants.



Part of this work is supported by VC Research (VCR 0000198).

Author information

Authors and Affiliations

Aston University, Aston St, Birmingham, B4 7ET, UK

Victor Chang & Qianwen Ariel Xu

Teesside University, Campus Heart, Southfield Rd, Middlesbrough, TS1 3BX, UK

Rahman Olamide Eniola & Lewis Golightly


Corresponding author

Correspondence to Victor Chang .

Ethics declarations

Conflict of interest.

The authors confirm that there is no conflict of interest with anyone involved.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the topical collection "Emerging Technologies and Services for post-COVID-19" guest edited by Victor Chang, Gary Wills, Flavia Delicato and Mitra Arami.

Appendix 1: Color Space

Figs. 13, 14, 15 and 16.

Fig. 13: RGB color space

Fig. 14: RGB and BGR color space

Fig. 15: HSV color space

Fig. 16: HSV color space of a sample image

Appendix 2: Hand Representations

Figs. 17, 18 and 19.

Fig. 17: Wearable data glove

Fig. 18: Median filter of a sample hand gesture

Fig. 19: Effect of histogram equalization on a sample hand gesture

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Chang, V., Eniola, R.O., Golightly, L. et al. An Exploration into Human–Computer Interaction: Hand Gesture Recognition Management in a Challenging Environment. SN COMPUT. SCI. 4, 441 (2023). https://doi.org/10.1007/s42979-023-01751-y


Received: 23 December 2022

Accepted: 21 February 2023

Published: 12 June 2023


  • Hand recognition
  • Human–computer interaction
  • Machine learning
  • Convolutional neural network (CNN)

Why Larp?! A Synthesis Paper on Live Action Roleplay in Relation to HCI Research and Practice

New citation alert added.

This alert has been successfully added and will be sent to:

You will be notified whenever a record that you have chosen has been cited.

To manage your alert preferences, click on the button below.

New Citation Alert!

Please log in to your account

Information & Contributors

Bibliometrics & citations, view options, index terms.

Human-centered computing

Human computer interaction (HCI)

Recommendations

Unmaking@chi: concretizing the material and epistemological practices of unmaking in hci.

Design is conventionally considered to be about making and creating new things. But what about the converse of that process – unmaking that which already exists? Researchers and designers have recently started to explore the concept of “unmaking” to ...

Reprioritizing the relationship between HCI research and practice: bubble-up and trickle-down effects

There has been an ongoing conversation about the role and relationship of theory and practice in the HCI community. This paper explores this relationship privileging a practice perspective through a tentative model, which describes a "bubble-up" of ...

Feminist HCI: taking stock and outlining an agenda for design

Feminism is a natural ally to interaction design, due to its central commitments to issues such as agency, fulfillment, identity, equity, empowerment, and social justice. In this paper, I summarize the state of the art of feminism in HCI and propose ...

Information

Published in.

cover image ACM Transactions on Computer-Human Interaction

Association for Computing Machinery

New York, NY, United States

Publication History

Check for updates, author tags.

  • game research
  • design methods
  • Research-article

Contributors

Other metrics, bibliometrics, article metrics.

  • 0 Total Citations
  • 0 Total Downloads
  • Downloads (Last 12 months) 0
  • Downloads (Last 6 weeks) 0

View options

View or Download as a PDF file.

View online with eReader .

Login options

Check if you have access through your login credentials or your institution to get full access on this article.

Full Access

Share this publication link.

Copying failed.

Share on social media

Affiliations, export citations.

  • Please download or close your previous search result export first before starting a new bulk export. Preview is not available. By clicking download, a status dialog will open to start the export process. The process may take a few minutes but once it finishes a file will be downloadable from your browser. You may continue to browse the DL while the export process is in progress. Download
  • Download citation
  • Copy citation

We are preparing your search results for download ...

We will inform you here when the file is ready.

Your file of search results citations is now ready.

Your search export query has expired. Please try again.

IEEE Account

  • Change Username/Password
  • Update Address

Purchase Details

  • Payment Options
  • Order History
  • View Purchased Documents

Profile Information

  • Communications Preferences
  • Profession and Education
  • Technical Interests
  • US & Canada: +1 800 678 4333
  • Worldwide: +1 732 981 0060
  • Contact & Support
  • About IEEE Xplore
  • Accessibility
  • Terms of Use
  • Nondiscrimination Policy
  • Privacy & Opting Out of Cookies

A not-for-profit organization, IEEE is the world's largest technical professional organization dedicated to advancing technology for the benefit of humanity. © Copyright 2024 IEEE - All rights reserved. Use of this web site signifies your agreement to the terms and conditions.

  • DOI: 10.1109/CSCWD57460.2023.10152686
  • Corpus ID: 259235859

An Intelligent Human-Agent Interaction Support System in Medicine

  • Peng Liu , Liang Xiao
  • Published in International Conference on… 24 May 2023
  • Medicine, Computer Science

Related Papers

Showing 1 through 3 of 0 Related Papers

COMMENTS

  1. A Systematic Review on Human and Computer Interaction

    As technology continues to advance at an unprecedented pace, the interaction between humans and computers has become an integral part of our daily lives. This study provides a comprehensive review of the evolving landscape of human-computer interaction (HCI) research, focusing on the key concepts, methodologies, and advancements in this interdisciplinary field. The review begins by presenting ...

  2. A Review Paper on Human Computer Interaction

    Research experiments in human computer interaction involves the young age group of people that are educated and technically knowledgeable. This paper focuses on the mental model in Human Computer ...

  3. A Review on Human-Computer Interaction (HCI)

    Human-Computer Interaction (HCI), has risen to prominence as a cutting-edge research area in recent years. Human-computer interaction has made significant contributions to the development of hazard recognition over the last 20 years, as well as spawned a slew of new research topics, including multimodal data analysis in hazard recognition experiments, the development of efficient devices and ...

  4. A Systematic Review of Human-Computer Interaction and Explainable

    Artificial intelligence (AI) is one of the emerging technologies. In recent decades, artificial intelligence (AI) has gained widespread acceptance in a variety of fields, including virtual support, healthcare, and security. Human-Computer Interaction (HCI) is a field that has been combining AI and human-computer engagement over the past several years in order to create an interactive ...

  5. Proceedings of the ACM on Human-Computer Interaction

    The Proceedings of the ACM on Human Computer Interaction (HCI) is a journal series for research relevant to multiple aspects of the intersection between human factors and computing systems. Characteristics of humans from individual cognition, to group effects, to societal impacts shape and are shaped by computing systems. Human and computer interactions affect multiple aspects of daily life ...

  6. Advances in Human-Computer Interaction

    Advances in Human-Computer Interaction is an interdisciplinary open access journal that publishes theoretical and applied papers covering the broad spectrum of interactive systems. As part of Wiley's Forward Series, this journal offers a streamlined, faster publication experience with a strong emphasis on integrity.

  7. A Systematic Review of Human-Computer Interaction (HCI) Research in

    This article provides a systematic review of research related to Human-Computer Interaction techniques supporting training and learning in various domains including medicine, healthcare, and engine...

  8. Implications of Human-Computer Interaction Research

    Salimzadeh SHe GGadiraju U (2024)Dealing with Uncertainty: Understanding the Impact of Prognostic Versus Diagnostic Tasks on Trust and Reliance in Human-AI Decision MakingProceedings of the CHI Conference on Human Factors in Computing Systems 10.1145/3613904.3641905(1-17)Online publication date: 11-May-2024.

  9. Human-Computer Interaction

    Human-Computer Interaction publishes research on interaction Science and system design, looking at how people learn and use computer systems.

  10. Human-Engaged Computing: the future of Human-Computer Interaction

    Debates regarding the nature and role of Human-Computer Interaction (HCI) have become increasingly common. This is because HCI lacks a clear philosophical foundation from which to derive a coherent vision and consistent aims and goals. This paper proposes a conceptual framework for ongoing discussion that can give more meaningful and pertinent direction to the future of HCI; we call the ...

  11. Computer-Human Interaction Research and Applications

    The papers selected to be included in this book contribute to the understanding of relevant trends of current research on computer-human interaction, including Interaction design, human factors, entertainment, cognition, perception, user-friendly software and systems, pervasive technologies and interactive devices.

  12. Systematic Review of Multimodal Human-Computer Interaction

    This document presents a systematic review of Multimodal Human-Computer Interaction. It shows how different types of interaction technologies (virtual reality (VR) and augmented reality, force and vibration feedback devices (haptics), and tracking) are used in different domains (concepts, medicine, physics, human factors/user experience design, transportation, cultural heritage, and industry ...

  13. Human-Computer Interaction

    Summary. Human-Computer Interaction (HCI) is a multidisciplinary field of research that focuses on the understanding and design of interaction between humans and computers. HCI has its roots in Human Factors and Ergonomics and cognitive sciences, but over the years it has underwent a variety of deep transformations, by importing a variety of ...

  14. A Systematic Literature Review for Human-Computer Interaction and

    Human-computer interaction (HCI) has been challenged in recent years because of advanced technology requiring adoption of new applications and investigations of connection with other disciplines, to enhance its theoretical knowledge. Design thinking (DT), an...

  15. Human-Computer Interaction

    Subjects: Human-Computer Interaction (cs.HC) [3] arXiv:2408.11673 [ pdf, other ] Title: Improved Visual Saliency of Graph Clusters with Orderable Node-Link Layouts Nora Al-Naami, Nicolas Médoc, Matteo Magnani, Mohammad Ghoniem Subjects: Human-Computer Interaction (cs.HC); Social and Information Networks (cs.SI) [4] arXiv:2408.11667 [ pdf, html ...

  16. (PDF) Human Computer Interaction Research Through the Lens of a

    Abstract and Figures Human Computing Interaction (HCI) is an expansive research field that covers various disciplines from computer science and engineering to human factors and social science.

  17. 182978 PDFs

    The essence of biomimetics in human-computer interaction (HCI) is the inspiration derived from natural systems to drive innovations in modern-day technologies. With this in mind, this paper ...

  18. Human-Computer Interaction: A Systematic Review

    This paper presents an overview of the significance of Human-Computer Interaction (HCI) in modern technology and its influence on different fields. The study of HCI involves understanding how people interact with technology and the various techniques and methodologies used to make technology more user-friendly. The paper aims to explore the applications of HCI in different areas such as ...

  19. A Review Paper on Human Computer Interaction

    The main purpose of practical research in human-computer interaction is to disclose unknown perception about behavior of humans and its relationship to technology. Resilience is just a set of routines that allow us to recover from obstacles. The term resilience has been applied to almost everything from the economy, real estate, events, sports ...

  20. Applied Sciences

    Today's social and political movements against dominant Western narratives call for a re-contextualization of cultural heritage (CH) toward inclusivity, multiperspectivity, and sensemaking. Our work approaches this challenge from a Human-Computer Interaction (HCI) perspective, questioning how HCI approaches, tools and methods can contribute to CH re-contextualization.

  21. Promises and challenges of generative artificial intelligence for human

    Generative artificial intelligence (GenAI) holds the potential to transform the delivery, cultivation, and evaluation of human learning. This Perspective examines the integration of GenAI as a tool for human learning, addressing its promises and challenges from a holistic viewpoint that integrates insights from learning sciences, educational technology, and human-computer interaction. GenAI ...

  22. Exploring User Acceptance Of Portable Intelligent Personal Assistants

    This research explores the factors driving user acceptance of Rabbit R1, a newly developed portable intelligent personal assistant (PIPA) that aims to redefine user interaction and control. The study extends the technology acceptance model (TAM) by incorporating artificial intelligence-specific factors (conversational intelligence, task intelligence, and perceived naturalness), user interface ...
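    TAM-style constructs such as these are typically operationalized by averaging multiple Likert-scale survey items into a score per construct before any model fitting. A minimal sketch of that scoring step, with hypothetical item names and responses that are not taken from the study:

    ```python
    # Aggregate Likert-scale survey items into TAM-style construct scores.
    # Item ids, constructs, and ratings below are hypothetical illustrations.

    def construct_scores(responses, constructs):
        """Average each construct's items for one respondent.

        responses: dict mapping item id -> Likert rating (e.g., 1-7)
        constructs: dict mapping construct name -> list of item ids
        """
        return {
            name: sum(responses[item] for item in items) / len(items)
            for name, items in constructs.items()
        }

    constructs = {
        "conversational_intelligence": ["ci1", "ci2", "ci3"],
        "task_intelligence": ["ti1", "ti2"],
        "perceived_naturalness": ["pn1", "pn2"],
    }
    responses = {"ci1": 6, "ci2": 7, "ci3": 5, "ti1": 4, "ti2": 6, "pn1": 7, "pn2": 6}

    scores = construct_scores(responses, constructs)
    ```

    The resulting per-construct means would then feed a structural model relating the AI-specific factors to acceptance.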

  23. An Exploration into Human-Computer Interaction: Hand Gesture

    Human-computer interaction systems will benefit the healthcare and medical industries, which extensively utilize these novel concepts. New variations, designs, aesthetics, and maneuverability improvements are developed yearly, particularly human-inspired or humanoid robots that can think and behave like people.

  24. Why Larp?! A Synthesis Paper on Live Action Roleplay in Relation to HCI

    Donghee Yvette Wohn, Emma J Freeman, and Katherine J Quehl. 2017. A Game of Research: Information Management and Decision-making in Daily Fantasy Sports. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play (CHI PLAY '17).

  25. Research Paper on Human Computer Interaction (HCI)

    The study of computer technology's use and design with a particular emphasis on human-computer interfaces is known as human-computer interaction. Researchers in human-computer interaction (HCI) study how people use computers and create new technologies to enable creative computer usage.

  26. HUMAN COMPUTER INTERACTION

    Improvements in the development of computer technology have contributed to the concept of Human-Computer Interaction (HCI), since computer systems have interfaces which can easily be ...

  27. Study on relationship between adversarial texts and language errors: a

    Each LLM is measured in language understanding ability and robustness within a human-computer interaction context. To further disclose the differences between language errors and adversarial texts, we measured each LLM under 6 metrics, including ...
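    Robustness comparisons like this typically reduce to scoring each model on clean inputs and on perturbed variants (language errors or adversarial texts), then reporting the per-metric drop. A minimal sketch of that bookkeeping, with hypothetical metric names and scores rather than the paper's data:

    ```python
    # Compare model scores on clean vs. perturbed inputs, metric by metric.
    # Metric names and values below are hypothetical, for illustration only.

    def robustness_drop(clean, perturbed):
        """Return the score drop per metric (clean - perturbed)."""
        return {m: round(clean[m] - perturbed[m], 4) for m in clean}

    clean_scores = {"accuracy": 0.91, "f1": 0.88, "exact_match": 0.75}
    error_scores = {"accuracy": 0.86, "f1": 0.84, "exact_match": 0.69}  # language errors
    adv_scores = {"accuracy": 0.62, "f1": 0.58, "exact_match": 0.41}    # adversarial texts

    drop_errors = robustness_drop(clean_scores, error_scores)
    drop_adv = robustness_drop(clean_scores, adv_scores)
    ```

    Comparing the two drop tables makes the contrast explicit: a larger drop under adversarial texts than under natural language errors indicates the model is more fragile to deliberate perturbation.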

  28. Interaction Design: Beyond Human-Computer Interaction, 6th Edition

    A delightful, engaging, and comprehensive overview of interaction design. Effective and engaging design is a critical component of any digital product, from virtual reality software to chatbots, smartphone apps, and more. In the newly updated sixth edition of Interaction Design: Beyond Human-Computer Interaction, a team of accomplished technology, design, and computing professors delivers an ...

  29. Human-Computer Interaction: Innovations and Challenges in Virtual

    In an effort to shed light on the advances and difficulties shaping the area of Virtual Reality (VR), this research paper delves into the ever-evolving world of Human-Computer Interaction (HCI) within the context of VR. We have found important insights with theoretical and practical applications via a careful research methodology comprising mathematical modelling, data collection, and ...

  30. An Intelligent Human-Agent Interaction Support System in Medicine

    A multi-agent dialogue system for answering post-operative thyroid eye disease questions using an intent understanding agent, a knowledge processing agent, and an intelligent interaction support agent, responsible respectively for intent recognition, knowledge base building, and conversation generation. Conversational robots have been widely used in the medical field to monitor, diagnose, and manage ...