RESNA Annual Conference - 2023

A Machine Learning-Based Approach to Enhance the Accuracy of Sound Measurements in iOS Devices for Accessibility Applications

Sayeda F. Aktar1, Mason D. Drake2, Shiyu Tian1, Roger O. Smith2, Sheikh I. Ahamed1

1Marquette University, 2University of Wisconsin-Milwaukee


In the era of advanced technology, mobile devices are playing a significant role in helping people with disabilities (PWD) enhance their quality of life by allowing them to be more independent in performing daily activities and by facilitating social inclusion. At the same time, the rapid development of technology presents a major challenge for data collected by electronic devices. Research demonstrates reliability challenges with electronic devices and illustrates barriers in application reliability, usability, and validation methods. Furthermore, accessibility measurements and assessments are affected by frequent technology updates. One primary issue is the rapid rate of change in software and hardware in the technology industry, which creates a formidable barrier to adequately depending on data collected by mobile devices that rely on cameras, microphones, and sensors. Because collected data differ from device to device, this variability affects the accessibility data compiled by applications (light, sound, slope, etc.). This research presents a machine learning-based system solution to minimize the hardware dependency of iOS mobile devices and provide more reliable and valid accessibility measurements. The proposed solution analyzes the hardware (microphone) of twenty-five different models of iPhone and twenty-five models of iPad. The solution maps relationships among the different device models using a piecewise linear least-squares fit algorithm. With the model applied, all iOS devices produce sound measurements similar to those of a sound meter measuring the same sound source. The solution is then integrated with the accessibility measurement application called AccessSound, which measures sound and interprets the accessibility of the sound level [1][2].
The intended solution provides sound measurements that are reliable and device-independent to interpret the auditory accessibility of an environment with increased accuracy. This solution contributes to improving data collection reliability, making iPhone and iPad devices more accessible, and measuring sound accessibility for PWD.


PWD have to overcome many physical, sensory, and cognitive barriers when interacting in public environments. In many cases, people are unaware of these barriers until they face them, which can create obstacles for PWD when participating in the community [2][3].

The iPhone and iPad provide numerous benefits, including portability, location awareness, and accessibility. Worldwide, these devices have been used in many fields of modern society, such as education, economics, government, engineering, and healthcare. Ensuring equitable access to all areas is essential, and iPhone and iPad apps can assist in determining which environments are the most accessible to PWD. This gives individuals accessibility information so they may plan alternatives, bring assistance, or avoid certain barriers [3][8].

Presently, accessibility assessment applications such as Access Ratings for Buildings (ARB) evaluate and report an individual's specific accessibility needs while also displaying accessibility information such as a comprehensive accessibility score for a building. However, this kind of accessibility measurement app faces one significant issue during data collection: the reliability of mobile device sensor data [8]. Because the microphones of different iOS devices vary, the measurements may not be consistent and reliable. Therefore, when users rely on summary reports, the report is sensor dependent and may not be accurate. Similarly, the personal accessibility ratings that visitors provide for a particular building may be inaccurate due to device sensor dependency [2][4].

To resolve this problem, a novel system-based solution was proposed using a machine learning-based study. Machine learning (ML) has vast advantages and helps create ways of modernizing technology [5]. The proposed ML model was built using a popular ML library [5]. The solution can help revise the existing approach to sound measurement in different iOS applications. The increased reliability among iOS devices can ensure that collected sound measurements are reliable and device independent [5]. Moreover, application developers using the sensors in iOS mobile devices can be made aware of the current variability among sound measurements collected by different device models and versions [9].


The aim of this study is to propose a solution for reducing device hardware dependency to provide PWD with consistent and accurate accessibility measurements. We designed a prototype as a proof of concept. Our system design has three main parts: data collection, the ML model, and implementation of the algorithm in an accessibility sound measurement app.

Figure 1 is a screenshot of the 1) iPhone, 2) iPad, 3) Bluetooth speaker, and 4) iPhone with tone generator app.
Figure 1: Analyzed iOS device configurations.

Data collection: iOS receiving devices (iPhone, iPad), sound-emitting device (Bluetooth speaker), iPhone with tone generator app, sound meters (red and gray), wooden chair, ADA CAT test kit.

Figure 2 illustrates how data were collected for the sound test. The image shows data collection on an iPad and iPhone, with the sound generator placed at an 18-inch distance from a sound meter and device.
Figure 2: Data collection process [1][2]

Researchers placed two sound meters 6 inches apart on the floor for data collection. A Bluetooth speaker was suspended 18 inches above them, and sound was played downward through the speaker via a tone-generating app. This app allowed researchers to initiate sound and manipulate frequency and volume. Ten frequencies were measured, with ten volume levels recorded for each frequency. For example, sound at 50 Hz would be recorded at 10% volume, 20% volume, and so on. Once all measurements had been taken up to 100% volume, the sound meters were replaced with iOS devices and the process was repeated. For sampling data from the iOS devices, the devices' front-facing microphones were used with the same data collection methods. A total of three thousand measurements were collected, and the results demonstrated variation between the sound measurements of the iOS devices [2]. Devices could be grouped into three categories based on the configuration of their microphones, which differed in location, number, and position. Furthermore, the data collected using the iOS devices differed from that collected using the sound meters.
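The measurement protocol above amounts to a grid of frequency/volume combinations recorded per device. The sketch below illustrates that grid in Python; the specific frequency values are an assumption for illustration only (the paper names 50 Hz as one example), not the study's actual tone list.

```python
# Hypothetical tone list: ten frequencies, doubling upward from 50 Hz.
# The study's actual frequencies are not fully listed in the paper.
frequencies_hz = [50, 100, 200, 400, 800, 1600, 3200, 6400, 12800, 16000]
# Ten volume levels per frequency: 10%, 20%, ..., 100%.
volume_levels = [v / 100 for v in range(10, 101, 10)]

# One pass of the protocol: every frequency at every volume level.
measurement_plan = [(f, v) for f in frequencies_hz for v in volume_levels]
print(len(measurement_plan))  # 100 combinations per device per pass
```

Repeating this pass across the fifty device models (plus the reference sound meters) accounts for the several thousand measurements reported.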

Machine learning-based model: In this study, a continuous piecewise linear function was created using the pwlf package [5]. The pwlf package uses two nested optimizations to find the best piecewise model. The outer optimization is differential evolution, used to locate the breakpoints of the data series. Given the calculated breakpoints, a least-squares fit, the inner optimization, finds the best continuous piecewise linear function. The input to the ML model is the sound data recorded by the iPhones and iPads (only iOS device-based data, because this study focuses on iOS devices). The A-weighted decibel (dBA) data is the output of the model. A-weighting is applied to instrument-measured sound levels to account for the relative loudness perceived by the human ear, which is less sensitive to low frequencies. The mobile devices cannot measure very low or very high frequencies because of speaker and microphone sensitivity limits, so the research considered frequencies from 400 Hz to 6400 Hz when building the algorithm. The output of the model is the relative threshold values for each microphone hardware configuration. After computing the threshold values, they are applied to the iOS devices' sound data [2].
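To make the inner optimization concrete: with a breakpoint already chosen (in the study, by differential evolution), fitting a continuous two-segment line reduces to ordinary least squares over a "hinge" basis. The sketch below shows that step in plain Python with a single, pre-chosen breakpoint and synthetic device-vs-meter dBA values; it is an illustration of the technique, not the study's actual fitted model or data.

```python
def fit_piecewise(x, y, breakpoint):
    """Least-squares fit of a continuous two-segment line:
    y ~ a + b*x + c*max(0, x - breakpoint)."""
    # Design matrix rows over the hinge basis [1, x, max(0, x - breakpoint)].
    rows = [[1.0, xi, max(0.0, xi - breakpoint)] for xi in x]
    n = 3
    # Normal equations (A^T A) coeffs = A^T y.
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    aty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(n)]
    # Solve the 3x3 system by Gauss-Jordan elimination with partial pivoting.
    m = [ata[i] + [aty[i]] for i in range(n)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def predict(coeffs, breakpoint, xi):
    a, b, c = coeffs
    return a + b * xi + c * max(0.0, xi - breakpoint)

# Synthetic calibration pairs: raw device dBA vs. reference meter dBA,
# with the device's response changing slope above 60 dBA.
device = [40, 45, 50, 55, 60, 65, 70, 75, 80]
meter = [0.9 * d + 5 if d <= 60 else 59 + 1.1 * (d - 60) for d in device]
coeffs = fit_piecewise(device, meter, 60.0)
print(round(predict(coeffs, 60.0, 50), 1))  # meter-equivalent value for a raw 50 dBA reading
```

In the study itself the pwlf package performs both steps, searching for the breakpoints with differential evolution and running this least-squares fit inside each candidate configuration.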

Figure 3 is a screenshot of the sound measurement interface for AccessSound that displays sound levels and sound level descriptions.
Figure 3: AccessSound application [1][2]

After obtaining the equations for each device, researchers integrated the solution with the AccessSound application with the intention of creating a comprehensive accessibility assessment tool for measuring sound accessibility. Investigators plan to implement the solution in AccessSound so all users can measure the sound level in any environment reliably. This will eliminate the uncertainty in accessibility measurements caused by varying hardware between devices and provide accurate measurements and interpretations of environmental sound.
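Inside the app, the integration amounts to mapping each raw device reading through the fitted equation for its microphone-configuration group before interpreting it. The sketch below illustrates that flow; the coefficient values, group names, and interpretation cutoffs are all hypothetical placeholders, and a single linear segment stands in for the full piecewise model for brevity.

```python
# Hypothetical per-group calibration coefficients (slope, intercept), keyed by
# microphone configuration group; real values would come from the fitted model.
CALIBRATION = {
    "group_a": (0.92, 4.8),
    "group_b": (1.05, -2.1),
    "group_c": (0.88, 6.3),
}

def calibrated_dba(raw_dba, mic_group):
    """Map a raw device reading to a meter-equivalent dBA value."""
    slope, intercept = CALIBRATION[mic_group]
    return slope * raw_dba + intercept

def interpret(dba):
    """Illustrative accessibility interpretation (cutoffs are assumptions)."""
    if dba < 55:
        return "quiet"
    if dba < 70:
        return "moderate"
    return "loud"

reading = calibrated_dba(62.0, "group_a")
print(round(reading, 2), interpret(reading))
```

Because the correction is keyed by microphone-configuration group rather than by individual handset, a small set of fitted equations can cover the full range of supported models.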


A total of fifty different models of iOS devices (twenty-five iPhones and twenty-five iPads) were considered. Based on their microphone configurations, all examined iOS devices could be allocated to one of three categories. More than three thousand sound measurements were collected during the earlier data collection that identified the issue. Following alterations to the app algorithm, results demonstrated improved sound measurement accuracy on the iPhone 12 (Figure 4), with measurement discrepancies between the sound meter and iOS devices reduced significantly. Although the implemented alteration has improved the accuracy of the AccessSound application, a discrepancy margin remains, and researchers are currently working to reduce it so users receive the most accurate accessibility information possible with consistency across iOS devices.

Figure 4: Improvement of the sound measurement using the AccessSound application. The left side (red) shows the previous data difference between the iPhone 12 and the red sound meter; the right side (green) shows the improved sound measurement data on the iPhone 12.


With a focus on improving consistency across devices, our study provides a system solution that can improve the accuracy of sound-measuring applications on iOS devices. With the increased consistency among devices that our proposed solution delivers, collected sound measurements are accurate and can interpret the auditory accessibility of an environment with more precision. Additionally, rehabilitation app developers using the sensors on iOS mobile devices should be aware of the variability among sound measurements collected by different device models and versions. In the future, we will extend our solution to all other devices.


  1. Drake, M.D., Sizer, S. & Smith, R.O. (2022). An Exploratory Study Investigating the Accuracy of Sound Measurements in iOS Devices for Accessibility Applications. Poster presentation at the American Congress of Rehabilitation Medicine (ACRM) Annual Conference, Chicago, IL.
  2. Aktar, S.F., Drake, M.D. & Smith, R.O. (2022). A Machine Learning-Based Approach to Enhance the Accuracy of Sound Measurements in iOS Devices for Accessibility Applications. Poster presentation at IEEE/ACM Connected Health: Applications, Systems and Engineering Technologies (CHASE) 2022, Washington D.C.
  3. Johnson, N., Saxena, P., Williams, D., Bangole, O. C., Hasan, K., Ahamed, S. I., et al. (2015). Smartphone-based light and sound intensity calculation application for accessibility measurement. Paper presented at the RESNA 38th International Conference on Technology and Disability: Research, Design, Practice, and Policy (poster), Denver, CO.
  4. Damaceno, R.J.P., Braga, J.C., Chalco, J.P.M., et al. (2016). Mobile device accessibility for the visually impaired: Problems mapping and empirical study of touch screen gestures. Paper presented at IHC: Brazilian Symposium on Human Factors in Computing Systems.
  5. Moon, N.W., Baker, P.M., & Goughnour, K. (2019, August). Designing wearable technologies for users with disabilities: Accessibility, usability, and connectivity factors. Journal of Rehabilitation and Assistive Technologies Engineering.
  6. "Get Started: Why Accessible Technology Matters."
  7. Kane, S.K., Jayant, C., Wobbrock, J.O., & Ladner, R.E. (2009, October). Freedom to roam: A study of mobile device adoption and accessibility for people with visual and motor disabilities.
  8. Williams, D., Johnson, N., Bangole, O. C., Hasan, K., Tomashek, D., Ahamed, S. I., et al. (2015). Access Tools: Developing a usable smartphone-based tool for determining building accessibility. Paper presented at the RESNA 38th International Conference on Technology and Disability: Research, Design, Practice and Policy, Denver, CO.
  9. Johnson, N., Saxena, P., Williams, D., Bangole, O. C., Hasan, K., Ahamed, S. I., et al. (2015). Smartphone-based light and sound intensity calculation application for accessibility measurement. Paper presented at the RESNA 38th International Conference on Technology and Disability: Research, Design, Practice and Policy (poster), Denver, CO.


This work was developed in part under grants from the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR grant numbers H133G100211 and 90IFDV0006). NIDILRR is a Center within the Administration for Community Living (ACL), Department of Health and Human Services (HHS). The content of this work does not necessarily represent the policy of NIDILRR, ACL, or HHS, and you should not assume endorsement by the Federal Government.