
LINKS WE LIKE #30

AI and ML for People with Disabilities: Innovations and Challenges

One of the biggest opportunities that Artificial Intelligence (AI) and Machine Learning (ML) technologies have opened up is the chance to design solutions that can positively impact the lives of people with disabilities. According to the World Health Organization, about 15% of the global population (1.2 billion people) has some type of disability. In the push toward inclusivity, companies, organizations and individuals have developed AI tools capable of assisting people in performing daily tasks. These technologies are often described as assistive technology: any item, program or system used to increase, maintain or improve the functional capabilities of a person.

AI and ML technologies are currently being used to address four main problems: facilitating communication with others, increasing mobility, enabling independent living, and ensuring equal access to services. Tools such as Voiceitt and Project Euphonia train speech recognition models on multiple recordings to help people with speech impairments or atypical speech patterns who struggle to be understood. For those with mobility-related disabilities, initiatives like the one started by Brazilian start-up HOOBOX Robotics and Intel are changing the landscape with a technology that lets a person control the movements of an electric wheelchair through facial expressions. Similarly, apps such as Seeing AI help people with visual impairments move around by narrating written text as well as the objects and individuals that surround them. Voice assistants such as Amazon's Alexa (on devices like the Echo Show) and Apple's Siri are also being adapted into autonomy-enhancing tools for people with disabilities: voice control enables them to move around their house and perform different tasks.

Unfortunately, one of the greatest challenges for this part of the population is the lack of access to basic opportunities such as education and employment. Experts at Vanderbilt, Cornell, Yale and Georgia Tech joined forces to develop initiatives that respond to the distinct needs that people with Autism Spectrum Disorder (ASD) have in the workplace. These initiatives, which range from interview simulators for practicing social skills to virtual team-based tasks, aim to bring more people with ASD into stable employment where they can use their skills to excel professionally.

One of the most interesting aspects of this topic is that artificial intelligence and machine learning can be applied in unique ways to the specific needs that arise from each disability. At the same time, the increasing use of AI has also brought challenges and has sometimes even contributed to rendering the lives and experiences of people with disabilities invisible. In this edition of Links We Like we explore some of the ways in which organizations and companies are leveraging the power of AI and ML to enhance the lives of people with disabilities, but also some of the ways in which these technologies are limiting them. Shedding light on both sides can help us create a more inclusive and accessible society for all.

Machine learning (ML) is a powerful approach to artificial intelligence (AI) that identifies patterns in collected data, and it has an increasing presence in our daily lives. ML systems aim to predict how things will behave in the future, allowing decisions to be made based on those predictions. However, Professor Jutta Treviranus points out that one of the constraints holding back the improvement of machine learning systems for people with disabilities is the use of proprietary software. When building big data sets, the tricky process of refining the data that fuels ML engines (to clean out information that disrupts the broader pattern) can cause problems, since not all outlier data is irrelevant. Although such edge data makes it harder for ML systems to reach a quick and tidy decision, it can represent the experience of real people. "Research is not very supportive to diversity (…) and we have basically transferred that to machine learning", Treviranus says. To address this issue, she suggests using open, transparent systems and creating technologies that work for people with disabilities and are thus more inclusive.
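
To make Treviranus's point concrete, here is a minimal sketch (in Python, with entirely hypothetical numbers) of how a routine outlier-removal step can silently discard the records of users who rely on assistive technology and therefore behave differently from the statistical majority:

```python
import numpy as np

# Hypothetical interaction data: task completion times (seconds) for 1,000 users.
# Most users cluster around 30s; a small group using assistive input devices
# legitimately takes far longer.
rng = np.random.default_rng(0)
typical_users = rng.normal(loc=30, scale=5, size=980)
assistive_tech_users = rng.normal(loc=120, scale=20, size=20)
times = np.concatenate([typical_users, assistive_tech_users])

# A common "data cleaning" step: drop anything more than 3 standard deviations
# from the mean before training a model.
z_scores = (times - times.mean()) / times.std()
cleaned = times[np.abs(z_scores) < 3]

print(f"Records before cleaning: {times.size}")
print(f"Records after cleaning:  {cleaned.size}")
# Nearly all of the dropped records belong to the assistive-technology users,
# so a model trained on `cleaned` never sees their experience at all.
```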

The implementation of AI in organizational processes, such as human resource management, has increased considerably in recent years, but its unwanted negative effects on people with disabilities remain relatively unaddressed. As Laurie Henneborn states in this brief article, close to "76% of organizations with 100 or more employees use algorithms to assess performance on hiring tests, and 40% use [AI] when screening potential candidates". However, decision-making processes commonly overlook how algorithms, AI and screening tools fail to take disabilities into account. To tackle these issues and close the ableist bias gap in organizations, Accenture Research, in collaboration with Disability:IN and the American Association of People with Disabilities (AAPD), has proposed four guiding principles for inclusive design. What they call R(AI)S stands for Responsible (adopting and scaling AI responsibly and ethically), Accessible (ensuring all AI ventures prioritize accessibility), Inclusive (acting with fairness in mind, considering the lived experiences of people with disabilities and using de-biasing techniques), and Secure (ensuring privacy is not put at risk). The partnership has put these principles forward to assess the consequences of AI on every aspect of the employment experience, to rethink organizational processes, and to "R(AI)S the AI game".

Historically, not enough data has been collected to train personalized object recognition models for people with disabilities. Last year, Microsoft's AI for Accessibility program granted City, University of London, support to launch the Object Recognition for Blind Image Training (ORBIT) project, a research effort to build a public dataset from video submissions by people who are blind or have low vision. The lack of large datasets has been an ongoing challenge for researchers and developers seeking to enhance the daily lives of people with disabilities. In the first phase of ORBIT, the team collected an initial set of videos, vetted to ensure they contained no information that could be used to personally identify individuals. It is now the largest dataset of its kind, and the overall objective is to make it publicly available so that other organizations and researchers can leverage it. The article also touches on a workshop hosted by Microsoft at NYU's AI Now Institute in 2019 on how to develop AI systems that do not further marginalize people with disabilities. Undoubtedly, expanding the public datasets used to train AI systems will be a continuous effort towards inclusivity.
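
ORBIT is aimed at few-shot, personalized ("teachable") object recognition, where a model learns a user's own objects from just a handful of their clips. The sketch below is a hypothetical illustration of that per-user setup; the directory layout and helper name are assumptions for the example, not part of the actual ORBIT toolkit:

```python
from collections import defaultdict
from pathlib import Path
import random

def split_user_clips(user_dir: Path, support_per_object: int = 2, seed: int = 0):
    """Split one user's video clips into a support set (for personalizing a
    model) and a query set (for evaluating it), grouped by object label.

    Assumes a hypothetical layout: user_dir/<object_label>/<clip>.mp4
    """
    rng = random.Random(seed)
    support, query = defaultdict(list), defaultdict(list)
    for object_dir in sorted(p for p in user_dir.iterdir() if p.is_dir()):
        clips = sorted(object_dir.glob("*.mp4"))
        rng.shuffle(clips)
        support[object_dir.name] = clips[:support_per_object]
        query[object_dir.name] = clips[support_per_object:]
    return support, query

# Usage (hypothetical path): personalize a recognizer on each user's support
# clips, then report accuracy on that same user's held-out query clips.
# support, query = split_user_clips(Path("orbit_dataset/user_017"))
```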

Accessibility concerns rise as AI advances and data tools become more sophisticated. Cat Noone, CEO of Stark, a startup that strives to make software more accessible, argues that it is not only AI and ML systems themselves that are inaccessible: flawed data is also putting people with disabilities at risk. In a piece for TechCrunch, Noone explains that AI is structured to find patterns and form groups, boxes into which people with disabilities often cannot fit. AI systems classify everything and everyone; how do they classify a person in a wheelchair, as a pedestrian or under a vehicle category? As head of Stark, Noone promotes five pillars that need to be followed to guarantee that an innovation is mindful of people with disabilities. In her company, engineers, designers, product managers and all employees have to address: 1) what data they are collecting, 2) why they are collecting it, 3) how the innovation will be used (and misused), 4) simulating IFTTT ("if this, then that") scenarios to picture how the data could be used, and 5) whether to ship or trash the idea. Overall, innovation requires ethical stress tests that guarantee accessibility and take everyone into account. As Noone explained: "we all have to acknowledge the data in front of us and think about why we collect it and how we collect it. That means dissecting the data we're requesting and analyzing what our motivations are".

Accessibility, disability and artificial intelligence have become intertwined. Research in machine learning and AI is increasingly interested in either medically treating disabilities or accommodating them better. As Musser notes, this trend is particularly exciting in research focused on individuals on the autism spectrum. For diagnosis, Liu, Li and Yi (2014) developed a machine learning model that helps identify ASD cases through eye movement, using data already collected from ASD cases in previous studies. The model can predict the presence of ASD in an individual with 88.51% accuracy.
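
As a rough illustration of how an eye-movement-based classifier of this kind can be built (the features, classifier choice and data below are assumptions for the sketch, not the authors' actual pipeline), each recording is typically reduced to a fixed-length feature vector and fed to a standard classifier evaluated with cross-validation:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical data: one row per participant, columns are eye-movement features
# (e.g., fixation time on eye, mouth and background regions, saccade counts).
rng = np.random.default_rng(42)
n_participants, n_features = 60, 8
X = rng.normal(size=(n_participants, n_features))
y = rng.integers(0, 2, size=n_participants)  # 1 = ASD, 0 = typically developing

# A standard pipeline: scale the features, fit a support vector classifier,
# and estimate accuracy with 5-fold cross-validation.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(model, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2%}")
```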

Another group of researchers, from Duke University, created a non-invasive computer vision (CV) tool to help diagnose autism; it automates part of the clinical screening process and achieved an inter-rater agreement score of 75%. Another successful ML model (Wall et al., 2012), reported to retain 100% accuracy, is helping reduce the work it takes clinicians to evaluate ASD by 66%, cutting the number of evaluation items from 29 to 9 with no loss of accuracy. For therapy, the MIT Media Lab placed a robot inside a clinic that assists in providing children with therapy through a personalized model that caters to the needs of each child it interacts with. The robot can now appropriately mimic a therapist through suggestions and prompts such as "why are you sad?".
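
The item-reduction result mentioned above is, at heart, a feature-selection problem: find the smallest subset of evaluation items that preserves classification accuracy. A minimal sketch of that idea, using recursive feature elimination on hypothetical questionnaire data rather than the authors' actual method, could look like this:

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Hypothetical data: 200 completed evaluations, 29 scored items each,
# with a binary ASD / non-ASD label.
rng = np.random.default_rng(7)
X = rng.integers(0, 4, size=(200, 29)).astype(float)
y = rng.integers(0, 2, size=200)

# Recursively eliminate items until only 9 remain, keeping those the
# classifier relies on most.
selector = RFE(DecisionTreeClassifier(random_state=0), n_features_to_select=9)
X_reduced = selector.fit_transform(X, y)
kept_items = np.flatnonzero(selector.support_) + 1  # 1-based item numbers
print(f"Retained items: {kept_items.tolist()}")

# Compare cross-validated accuracy with the full vs. the reduced item set.
full = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5).mean()
reduced = cross_val_score(DecisionTreeClassifier(random_state=0), X_reduced, y, cv=5).mean()
print(f"Accuracy with 29 items: {full:.2%}, with 9 items: {reduced:.2%}")
```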

AI for inclusivity of the population with disabilities

AI projects to help the disability community

Designing fair AI 

AI and Disability
