Measuring Air Quality Through Our Subjective Experience Part II
As I described in my earlier blogpost, I have been working on an experimental project, WearAQ, in which we worked with students at Marner Primary School in Tower Hamlets, London. The students went out into the surrounding neighbourhood, measured air quality both technologically and through their own perceptions, and recorded their subjective experience using low-tech wearable devices that catalogued their gestures. This data was compared with measurements from expensive, highly calibrated pollution-monitoring equipment, as well as other data such as temperature, wind and humidity, to look for correlations and contrasts.
The results of the experiment have been revealing: our machine learning model correctly predicted students' perceptual data in 8 out of 8 cases, and we obtained 6/8 accuracy when we compared the recorded perceptual data with data from the mobile pollution-monitoring equipment. We recognised that the dataset was small; however, it was adequate for a first prototype, and the experiment suggests a correlation between perceptual data and actual air quality measurements.
In my previous blogpost, I discussed the challenges we faced in designing the technology and our experience of structuring participation with the students. Here, I'm going to talk about the findings from the workshop sessions and how the results of Experiment 1 came about.

In Experiment 1, we took a closer look at the perceptual data recorded by the students by comparing it with air quality measurements from the AirBeam device and with photos taken by the students during the exploration walk. As the sample is small, there is little room for statistical significance; however, it is adequate for a first prototype.
Study 1: Comparing average gesture feedback to data collected from the AirBeam
We started by comparing participants' gesture feedback with the wearable AirBeam data collected at the same location and time as the gesture data. The 8 timestamps represent locations 1–8 and the average gesture data collected at each specific time and location. We were looking for correlation between the PM2.5 data and the students' feedback. Of the 8 locations, 6 registered a similar pattern to the PM2.5 data (i.e. both PM2.5 and gesture data rose or fell at the same location), yielding an accuracy level of 75%.
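The rise/fall comparison above can be sketched in a few lines of Python. This is a minimal illustration of the directional-agreement idea, not our actual analysis code, and the PM2.5 and gesture values below are invented placeholders rather than workshop measurements:

```python
# Sketch of the directional-agreement check: for each step between
# consecutive locations, do PM2.5 and the average gesture rating move
# in the same direction (rise, fall, or stay level)?

def directional_agreement(pm25, gestures):
    """Fraction of location-to-location steps where both series
    move in the same direction."""
    def directions(series):
        # +1 for a rise, -1 for a drop, 0 for no change
        return [(b > a) - (b < a) for a, b in zip(series, series[1:])]
    matches = sum(d1 == d2 for d1, d2 in zip(directions(pm25), directions(gestures)))
    return matches / (len(pm25) - 1)

# Hypothetical readings for 8 locations (PM2.5 in ug/m3, gesture 1-3)
pm25 = [12.0, 15.5, 9.8, 21.3, 14.2, 13.0, 10.1, 16.7]
gestures = [1.2, 2.1, 1.4, 2.8, 2.0, 2.2, 1.3, 1.9]

print(directional_agreement(pm25, gestures))
```

With real data, each series would be the per-location averages taken at the matched timestamps.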

Suggestions for improvement
As only two workshops were conducted, with some technical malfunctions, we would need to run more workshops with participants in order to show that the findings are valid. We also had no means of validating that the air quality data recorded by the AirBeam was accurate; this could be improved by using multiple types of mobile air quality devices. We have talked to a few air quality experts, but most of them expressed doubt about the validity of mobile air quality devices, since technically one would need a stable volume of air input for the machine to measure the particle content reliably. More investigation and testing of different mobile air quality devices would be needed to ascertain this.
Study 2: Cataloguing gesture data and images collected by the same student
We used coding techniques to look at gesture data in relation to pictures taken by the same student of the parts of the environment they thought related to their gesture feedback. During the exploration walk, apart from having students record their gesture data at the 8 identified locations, we also had them use the phone each was given to take pictures of things in the environment that contributed to their perception of the air quality. As each phone was connected to an individual wearable device, we were able to pinpoint which set of photos corresponded to which set of gesture data. The data was later recorded in a table format (see below).
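As a rough illustration of how the photo and gesture records can be joined into such a table via the shared device ID — the device names, filenames and ratings below are invented placeholders, not our actual records:

```python
# Sketch of pairing each student's photos with their gesture ratings
# using the wearable device ID that both records share.
from collections import defaultdict

gestures = [
    {"device": "wearable-01", "location": 1, "rating": 1},
    {"device": "wearable-02", "location": 1, "rating": 3},
]
photos = [
    {"device": "wearable-01", "location": 1, "file": "veh_junction.jpg"},
    {"device": "wearable-02", "location": 1, "file": "park_path.jpg"},
]

# Index photos by (device, location) so each gesture row can pull
# in its matching set of pictures.
photo_index = defaultdict(list)
for p in photos:
    photo_index[(p["device"], p["location"])].append(p["file"])

table = [
    {**g, "photos": photo_index[(g["device"], g["location"])]}
    for g in gestures
]
for row in table:
    print(row)
```

Each row of the resulting table holds one student's rating at one location alongside the photos they took there, which is the structure we then coded by hand.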
In coding, any part of the data that relates to a code topic is tagged with the appropriate label. This process involves close inspection of the images. By contrasting each student's data with similar data from other students, we derived a few observations:
- Students were able to make a comparative judgement on the quality of air
The comparison between the pictures taken by students and the recorded gesture data shows that the gestures recorded are highly dependent on the location and relative to each participant's own experience. For example, at location 1, six out of ten students recorded 1 (good air), although the images taken show a few passing vehicles, while at location 4, six out of ten students recorded 3 (bad air), where the images taken are also of passing vehicles but relatively high in number. In another comparison, between locations 3 and 7 (both in a park), most students perceived the air to be of variable quality.
- There is a degree of social learning among the students
Gestures were recorded with students' eyes closed, making the recorded data independent of social learning. During photo-taking time, however, some students were still seen copying what their friends were doing. For example, at location 3, one student spotted animal faeces on the ground, and some students got excited and crowded round to take pictures of it (as shown in the data collected).
- Students were aware that moving vehicles are a major source of air pollution
Students were able to judge that vehicles adversely affect the air quality when being driven but not when parked. This can be inferred by comparing data from location 4 (next to a traffic junction), where all students gave a rating of either 2 (so-so air) or 3 (bad air) and took pictures of passing vehicles at the junction, with location 8, where students took pictures of parked vehicles and rated the air either 1 (good air) or 2 (so-so air).
- Students linked cleanliness of environment with quality of air
At location 5, which sits between a park and a residential estate, five out of ten students recorded 3 (bad air) and the rest 2 (so-so air) even though there were few passing vehicles, and some of the pictures taken were of rubbish left at the roadside.

Suggestions for improvement
Although students were visually isolated when recording their gestures, there was still a degree of influence between them during photo-taking time. Limiting social learning between students would have helped establish each perception as independent data. Possible improvements might include:
- upgrading the mobile phone app such that pictures can be taken of the environment immediately when a student records a gesture
- making workshops longer to allow more time at each location to prepare students to take pictures in isolation (we were running on a tight workshop timeline)
- getting students to go out in smaller groups, in pairs, or far apart from each other to control for participant interaction during the experiment (concerns about safety and management of the groups would have to be taken into consideration)
As the sample is small, we would need to conduct more workshops in order to prove that the technique and findings are valid.
The results from Experiment 1 have demonstrated the possibility of measuring air quality through our subjective experience. We wanted to interrogate this question further by combining the data with machine learning experiments to see whether our subjective experience can reflect sensor data.
In Part III of this blogpost series, data scientist Usamah Khan will discuss the machine learning experiments and the results from Experiment 2!