Human Fall Detection Using Passive Infrared Sensors with Low Resolution: A Systematic Review
Received 19 July 2021
Accepted for publication 9 October 2021
Published 13 January 2022, Volume 2022:17, Pages 35–53
Editor who approved publication: Prof. Dr. Nandu Goswami
Grégory Ben-Sadoun,1,2 Emeline Michel,3,4 Cédric Annweiler,1,5–7 Guillaume Sacco3,5,8
1Department of Geriatric Medicine and Memory Clinic, Research Center on Autonomy and Longevity, University Hospital of Angers, Angers, France; 2Normandie Université, UNICAEN, INSERM, COMETE, CYCERON, CHU Caen, Caen, 14000, France; 3Université Côte d’Azur, Centre Hospitalier Universitaire de Nice, Clinique Gériatrique du Cerveau et du Mouvement, Nice, France; 4Université Côte d’Azur, LAMHESS, Nice, France; 5Laboratoire de Psychologie des Pays de la Loire, Univ Angers, Université de Nantes, EA 4638 LPPL, SFR CONFLUENCES, Angers, F-49000, France; 6School of Medicine, Health Faculty, University of Angers, Angers, France; 7Robarts Research Institute, Department of Medical Biophysics, Schulich School of Medicine and Dentistry, The University of Western Ontario, London, ON, Canada; 8Université Côte d’Azur, CoBTek, Nice, France
Correspondence: Grégory Ben-Sadoun
Department of Geriatric Medicine and Memory Clinic, Research Center on Autonomy and Longevity, University Hospital of Angers, Angers, France
Email [email protected]; [email protected]; [email protected]
Abstract: Systems using low-resolution passive infrared sensors have recently been proposed to resolve the effectiveness–ethics dilemma of human fall detection by Information and Communication Technologies (ICTs) in older adults. How effective is this type of system? We performed a systematic review to identify studies that investigated the metrological qualities of passive infrared sensors with a maximum resolution of 16×16 pixels for fall detection. The search was conducted on PubMed, ScienceDirect, SpringerLink, IEEE Xplore Digital Library, and MDPI up to November 26–28, 2020. We focused on studies testing only this type of sensor. Thirteen articles were “conference papers”, five were “original articles” and one was found on arXiv.org (an open-access repository of scientific research). Since four groups of authors “duplicated” their study in two different journals, our review finally analyzed 15 studies. The studies were very heterogeneous with regard to experimental procedures and detection methods, which made it difficult to draw formal conclusions. All studies tested their systems in controlled conditions, mostly in empty rooms. Except for two studies, the overall performance reported for fall detection exceeded 85–90% in accuracy, precision, sensitivity or specificity. Systems using two or more sensors and particular detection methods (eg, 3D CNN, CNN with 10-fold cross-validation, LSTM with CNN, LSTM and Voting algorithms) seemed to give the highest levels of performance (>90%). Future studies should further test this type of system in real-life conditions.
Keywords: fall detection, older adults, passive infrared sensor, thermal sensor, thermopile
One of the defining events of the 20th century and the beginning of the 21st century is the aging of the population. In 2019, people aged over 65 represented 20.3% of the European population and people aged over 85 represented 5.8%.1
According to the World Health Organization
With increasing age, numerous underlying physiological changes occur, and the risk of chronic disease rises. By age 60, the major burdens of disability and death arise from age-related losses in hearing, seeing and moving, and non-communicable diseases, including heart disease, stroke, chronic respiratory disorders, cancer and dementia.2
The associations between physiological dysfunctions and chronic diseases can lead to disability. Disability refers to difficulties encountered in any or all of three areas of functioning: impairments (problems in body function or structure), activity limitations (difficulties in executing activities) and participation restrictions (problems with involvement in any area of life).3,4 In 2012, falls were among the world’s top 10 health conditions associated with disability in people aged over 60 years.2 The social and economic costs associated with falls in older adults are also substantial. For example, falls have been identified as among the leading risk factors for nursing home admission worldwide.5
To our knowledge, research on falls in older adults began in the second half of the 20th century. Campbell et al6 reported in 1981 that respectively 34%, 45% and 54% of people aged over 65, 80 and 90 years had experienced at least one fall in the previous year. More than 400 risk factors for falls are recognized. They can be classified as modifiable (eg, polypharmacy or environmental factors) or non-modifiable (eg, age, gender, cognitive decline).
The management of falls relies on various strategies: avoiding the first fall, fast intervention and treatment when a fall occurs, and preventing recurrence. While the assessment and treatment of fall consequences are better understood today (see the recommendations of the American Geriatrics Society and British Geriatrics Society7), early fall detection remains an important issue, especially for socially isolated older adults. The challenge is to initiate an early rescue process to limit the physiological and psychological consequences of the fall, especially those related to the time spent on the floor. To reduce this time, the use of Information and Communication Technologies (ICTs), called human fall detection systems, has been proposed over the last two decades.8,9 Human fall detection systems were initially distinguished by the type of devices and sensors used.10–13 Systems using wearable devices must be worn by a person under or over their clothes. They include different sensors such as gyroscopes, accelerometers, tilt-meters, myowaves and oscilloscopes. They have the advantage of being individual-centered, being easily combined in the same system14,15 or in several sub-systems placed at different areas on the body,16 allowing indoor as well as outdoor use, and including a manual or an automatic alarm. However, their efficiency can decrease because of inconsistent use of the device and difficulty in using the alarm, especially for older adults with cognitive decline. Systems using non-wearable devices must be deployed in the environment of the individuals.
They include different technologies such as ambient-based sensors (pressure sensors, floor sensors, infrared sensors, microphones), vision-based sensors (normal, depth or thermal video cameras), and radio-frequency sensors (based on tracking fluctuating radio-frequency signals or wireless channel state information, such as WiFi or Bluetooth, to detect rapid and intense body movements that cause abnormal changes in radio-frequency signals; see17 for more details). The absence of any action required from the individual is the major advantage of non-wearable devices. Also, systems using these types of sensors can be connected to the electrical network and therefore do not need an internal battery. They are used almost exclusively indoors, although outdoor use is possible (see18 for an example of recent advances in this topic). Ambient-based devices are prone to environmental noise. Vision-based devices (known to be the most accurate of all) and acoustic-based devices pose ethical issues regarding the protection of privacy (detailed recording of movements, conversations, and ambient objects).
Although each of these sensors can be connected directly to a computer containing the detection algorithms, they are mainly designed as a mini-system (ie, a capturing device) containing the sensor (or multiple sensors, termed “sensor fusion”), a microcontroller (containing the detection algorithms) and a wired (USB) or wireless (Wi-Fi, XBee, Bluetooth, 3G) remote connection element. This type of system can easily be worn by the subject (on the clothes) or placed in a room (on the ceiling or on a wall). It can easily communicate with other similar systems or a server, and/or send alarms directly to a computer or a smartphone when a fall occurs (see for example14–16,19).
Today, the influx of increasingly ingenious algorithmic detection methods over the past decade highlights the possibility of achieving very high levels of fall detection for all existing systems.12,13 Consequently, the effectiveness–ethics dilemma (also called the security–privacy balance) seems to be the real challenge for human fall detection by ICTs, particularly in the older population8 and in nursing homes.20
Low-resolution infrared sensors could be an efficient answer to this dilemma. Fall detection with systems using low-resolution passive infrared sensors was first proposed by Sixsmith & Johnson.21 After the disappointing results of their study, these sensors experienced renewed enthusiasm 10 years later, thanks to the “good performance” highlighted by the studies of Mashiyama et al.22,23 Low-resolution passive infrared sensors measure thermal radiation from the scene and map the spatial distribution of temperature in an array of pixels to give an image (generally not exceeding 32×32 pixels). They only capture the shape of the individual (difference in thermal radiation between the individual and the objects in the room), and provide a representation of the analyzed area in one (eg, a 1×8 pixel temperature distribution) or two dimensions (eg, an 8×8 pixel temperature distribution). However, from 16×16 pixels upwards, these sensors seem sufficient to clearly identify the silhouette of an individual.24 With a resolution lower than 16×16 pixels, privacy seems guaranteed, but the effectiveness of fall detection depends even more on the architecture of the system and on the algorithms used. For this reason, the objective of this systematic review is to draw up an exhaustive inventory of the uses of very-low-resolution passive infrared sensors in human fall detection, and to question the effectiveness of this type of system.
Materials and Methods
Protocol and Registration
This systematic review was designed and conducted according to the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA, version 2020).25 This protocol has not been registered in any international database of prospectively registered systematic reviews.
The goal of this review was to identify studies that investigated the metrological qualities of passive infrared sensors with a very low resolution to identify falls or related conditions (eg, lying on the floor).
Article databases, keywords related to the objective of the review, and eligibility conditions were identified in a pre-search round carried out in July 2020. Only original research articles and conference papers published in English, from database inception up to 26–28 November 2020, were included in this review. Review articles and grey literature (including PhD and Master’s theses) were not retained. When the full text of an article was not available, the authors were asked for a copy, with the objectives of our systematic review specified.
PICO (Population, Intervention, Comparator, Outcome) criteria were: P) human adults, I) infrared sensors with a low resolution, C) evaluator observation or video recording, O) fall detection. Inclusion criteria were: i) protocols including a passive infrared sensor with a resolution of up to 16×16 pixels, and ii) protocols including fall detection or related conditions (eg, lying on the floor). Exclusion criteria were: i) protocols including pyroelectric infrared sensors (infrared sensors that do not give the spatial distribution of thermal radiation), and ii) protocols combining another sensor with the passive infrared sensor (eg, inertial, video, RGB-D, ultrasonic, accelerometer, pedometer, gyroscope, global positioning system, or other similar sensors).
Information Sources and Search
The bibliographic searches were conducted in several databases: IEEE Xplore Digital Library, MEDLINE (PubMed), MDPI, SpringerLink and ScienceDirect.
Keywords used to perform the search were “fall detection”, “infrared”, “thermal” and “thermopile”. Keywords were associated with Boolean operator in each database. This strategy was chosen to avoid uncountable quantity responses regarding the multiple topics around infrared sensors. For example, in IEEE Xplore Digital Library, the three following search strings were employed: i) (“All Metadata”:“fall detection”) AND (“All Metadata”: infrared), ii) (“All Metadata”:“fall detection”) AND (“All Metadata”:thermal), iii) (“All Metadata”:“fall detection”) AND (“All Metadata”:thermopile).
All identified articles were first recorded in Zotero and saved in “.ris” and “.enw” formats. Then, they were uploaded to Rayyan QCRI,26 a web-based abstract selection program. Duplicates were eliminated from the search results. Using Rayyan QCRI, potentially eligible articles were placed in the “maybe” folder and excluded articles were placed in the “excluded” folder. All articles were first sorted using their title and abstract. When the title and abstract were not informative enough to sort an article, the full text was analyzed. Finally, the full text of articles in the “maybe” folder was analyzed to determine the “included” articles.
In addition to these bibliographic searches, manual searches in the reference list of the “included” articles and empirical searches in Google Scholar and Semantic Scholar were carried out.
The whole paper selection process was conducted by a single member of the team, who carried out all the steps from topic definition to the analysis of the “included” articles.
Data Collection Process and Items
The data were collected in a predefined table including: author’s information, year of publication, title of publication, sensor’s characteristics (eg, manufacturer, resolution, number of images per second chosen by the authors), type of postures (eg, stand up, sit on the chair or floor, lie on the floor) and/or movements tested (eg, standing up, walking, sitting, lying, falling), experimental procedures (type of room, number and position of the sensors, area of capture, ambient temperature, number of participants, procedures for performing postures and/or movements and total number of actions collected), detection methods (eg, background acquisition, foreground acquisition, signal filtration, background subtraction, the thresholds to extract the features, type of features, type of algorithms for action classification), and detection performance (eg, accuracy, precision, sensitivity, specificity, F1-score).
Risk of Bias Assessment
To our knowledge, there is no standard for assessing the risk of bias for studies that assess the accuracy of computer systems for detecting human activity. Consequently, a relative quality assessment of studies was conducted by the authors to identify potential study biases. The main identified biases will be presented at the end of the manuscript.
The Steps of the Article Selection Process
At first, 1674 articles were identified through database searches and seven were added after manual searches in article bibliographies and empirical searches in Google Scholar and Semantic Scholar. After exclusion of duplicates and full-text reading, 19 articles were included21–23,27–42 (see details in Figure 1). Four groups of authors “duplicated” their study in two different journals. The article of Tao et al39 also contained results of Tao et al.40 We included Tao et al40 to report the detection method of this study. The article of Tao et al40 was a preprint accessible from arXiv.org, an open-access repository of scientific research. These two articles shared a common experimental procedure. The two publications of Taramasco et al41,42 were similar and were published as “original articles” in two different scientific journals. The two publications of Hayashida et al33,34 were similar and were duplicates published as “conference papers” in two different scientific journals. The two publications of Fan et al30,31 were similar and were published as “conference papers” in two different scientific journals, but the second publication contains additional results.31
Figure 1 PRISMA flow diagram.
Notes: PRISMA figure adapted from Page MJ, McKenzie JE, Bossuyt PM et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71.25.
Therefore, our systematic review reported the results of 15 reports.
Table 1 Studies Characteristics Regarding Their Sensors Used, Experimental Procedures, Detection Methods and Detection Performance
Article Types and Sources
Thirteen articles were “conference papers”, five were “original articles”29,35,37,41,42 and Tao et al40 was a preprint found on arXiv.org (Table 1). The drafting formalism of these articles was similar. All studies except that of Sixsmith & Johnson21 were published from 2014 onwards (Figure 2).
Figure 2 Publication dates of studies.
Sensor Types and System Architectures
Panasonic’s Grid-EYE® AMG88xx (8×8 pixels) was used in nine studies (Table 1, second column).22,23,27,29–35,39,40 The Melexis MLX9062x (16×4 pixels) was used in three studies.28,36,37 The Omron D6T-1616-L-06 (16×16 pixels), the Omron D6T-8L-06 (1×8 pixels) and the Irisys (16×16 pixels) were each used in one study.21,38,41,42
The majority of the studies used a system containing at least the sensor(s) and a microcontroller.22,23,27–34,36,37,41,42 Only Chen & Wang29 needed to add a battery to power their system (a mini-robot). Only Hayashida et al33,34 gave information concerning power consumption. A summary of system architectures is presented in Table 2.
Table 2 Summary of System Architectures Used in the Studies
The majority of the studies were conducted in empty rooms, except for Taramasco et al41,42 and Sixsmith & Johnson,21 who tested their systems in furnished rooms (eg, with sofa, television, armchair, tables, chairs; Table 1, third column and Table S1). Also, Taniguchi et al38 included a single bed against the wall of their otherwise empty room.
Studies used 1–4 sensors. Most of them used 1–2 sensors mounted on the ceiling22,23,27,33–36,39,40 or on the wall.21,28,30,31,37 Chen & Wang29 proposed to place the sensor on a mini-robot that follows the subject. This placement was technically close to wall mounting. Also, Taramasco et al41,42 proposed a system designed to be placed on a pole, in a corner of a living room, measuring the heat received in two horizontal planes (upper and lower planes). Finally, two studies placed sensors on both the wall and the ceiling. Taniguchi et al38 synchronized one sensor on the ceiling and one on the wall, whereas Gochoo et al32 synchronized one sensor on the ceiling with two sensors on two different walls to “generate” a 3D representation of the capture area (see Table S1 for details).
Sample sizes were usually small, with 1–10 participants. Participants had to perform posture and/or locomotion conditions. Adolf et al,27 Gochoo et al32 and Shelke & Aksanli37 tested only static postures. The participants had to repeat a sequence of postures held for a few seconds, especially standing, sitting (on a chair or on the floor) and lying (on the floor) positions. The sitting-on-the-floor and lying postures were interpreted as fall conditions. The twelve other studies21–23,28–31,33–36,38–42 proposed both posture and locomotion conditions. In the locomotion conditions, the participants had to repeat dynamic actions, especially walking, sitting down on a chair from a standing position, lying down on the floor from a standing or a sitting position, and falling. The falls were generally characterized by the “abrupt” transition from a standing position or walking action to lying on the floor. Most studies proposed only forward fall actions, whereas Chen & Ma,28 Chen & Wang,29 Sixsmith & Johnson21 and Taramasco et al41,42 proposed multiple fall conditions (eg, falls forward, backward, to the side, or slides; Table 1, fourth column).
Detection methods could be globally described as a six-step sequence (where steps ii) to v) were optional, depending on the detection method): i) acquisition of the raw thermal data, ii) post-treatment of the raw data, iii) definition of thresholds (temperature, number of frames to separate the foreground from the background or to consider human activity), iv) subtraction of the background, v) extraction of additional and specific spatial-temporal features (depending on the chosen classifier), and vi) definition of classifiers to identify the ongoing human action (with more emphasis on fall detection).
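As a rough illustration, the six-step sequence can be sketched as a toy Python pipeline over simulated 8×8 thermal frames. All names, thresholds and the final classification rule here are illustrative assumptions, not code from any reviewed study.

```python
import numpy as np

def detect_fall(frames, background, t_threshold=2.0):
    """Toy sketch of the six-step sequence. `frames` is an (N, 8, 8)
    array of raw thermal frames in degrees C (step i), `background`
    an 8x8 background map. All thresholds are illustrative."""
    # ii) post-treatment: temporal smoothing over 3 consecutive frames.
    smoothed = (frames[:-2] + frames[1:-1] + frames[2:]) / 3.0
    # iii)-iv) threshold and background subtraction: pixels warmer than
    # the background by t_threshold form the foreground (the person).
    foreground = (smoothed - background) > t_threshold
    # v) feature extraction: vertical centroid of the warm pixels per
    # frame (row index 0 = top of the image).
    rows = np.arange(frames.shape[1]).reshape(1, -1, 1)
    mass = foreground.sum(axis=(1, 2))
    centroid = (foreground * rows).sum(axis=(1, 2)) / np.maximum(mass, 1)
    # vi) classification: a crude rule -- a fall is a fast downward jump
    # of the centroid (a large increase of its row index between frames).
    drops = np.diff(centroid)
    return bool(drops.size and drops.max() > 3.0)
```

A real system would replace the crude rule in step vi) with one of the classifiers discussed below, but the data flow is the same.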
Methods to acquire raw thermal data were based on the principle of obtaining, at each frame or from the average of N given frames, a temperature distribution map on a grid (ie, a matrix, eg, a grid of 64 pixels for the Panasonic Grid-EYE with a resolution of 8×8 pixels). When several sensors were used, the heatmaps from each sensor could be: concatenated into a single heatmap32 or combined38 to obtain a 3D representation of the capture area, joined to extend the 2D representation of the capture area,36,37,41,42 or used concurrently28 (eg, the sensor closest to the subject recorded their movements; see Table S1 for details).
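The multi-sensor combinations described above can be illustrated with two simulated 8×8 heatmaps; the values and layout are assumptions for illustration, not data from any reviewed study.

```python
import numpy as np

# Two simulated 8x8 frames (in degrees C) from two sensors mounted
# side by side. Values are illustrative only.
left = np.full((8, 8), 21.0)
right = np.full((8, 8), 21.0)
right[2:5, 0:2] = 27.0   # a warm silhouette near the shared edge

# "Joined" maps extend the 2D capture area (cf. the approach of
# Ogawa & Naito and Shelke & Aksanli):
extended = np.hstack([left, right])   # one 8x16 heatmap

# "Concatenated" maps stack the views into a single classifier input
# (cf. the approach of Gochoo et al):
stacked = np.vstack([left, right])    # one 16x8 heatmap
```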
To eliminate the influence of other heat sources and the temperature background on participant detection, the raw thermal data were post-treated in a few studies. Shelke & Aksanli37 applied a Gaussian filter. Fan et al30,31 compared three filters (Wavelet, Median and Gaussian). Chen & Ma28 used a multi-frame averaging filtering method. Liu et al35 used bicubic interpolation. Gochoo et al32 upscaled their heatmap by a factor of two (48×8 instead of 24×8; Table 1, “detection methods” columns).
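As an example of such post-treatment, a 3×3 median filter (one of the filters compared by Fan et al30,31) can be sketched in a few lines of NumPy; the implementation below is our own illustration, not code from the study.

```python
import numpy as np

def median_filter_3x3(frame):
    """Denoise a low-resolution thermal frame with a 3x3 median
    filter; edges use edge-padding. Illustrative sketch only."""
    padded = np.pad(frame, 1, mode="edge")
    # Collect the 9 shifted views covering each pixel's neighbourhood.
    stack = np.stack([
        padded[i:i + frame.shape[0], j:j + frame.shape[1]]
        for i in range(3) for j in range(3)
    ])
    return np.median(stack, axis=0)
```

A median filter is well suited to isolated hot-pixel noise: a single outlier in a 3×3 neighbourhood never reaches the median.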
A background was generated in most studies. To differentiate the foreground from the background and/or to extract the features, empirically defined temperature and frame-count thresholds (ranging from 0.6 to 2.5 °C and from 3 to 20 frames) were used by Chen & Ma,28 Chen & Wang,29 Hayashida et al,33,34 Liu et al,35 Mashiyama et al,22,23 Ogawa & Naito,36 Shelke & Aksanli37 and Tao et al39,40 (Table 1, “detection methods” columns and Table S1).
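A minimal sketch of such a background model and thresholding step might look as follows; the learning rate and default thresholds are illustrative assumptions (the reviewed studies used empirically defined temperature thresholds of 0.6–2.5 °C).

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Running-average background model; alpha is an illustrative
    learning rate, not a value from any reviewed study."""
    return (1 - alpha) * background + alpha * frame

def foreground_mask(background, frame, t_threshold=1.5, min_pixels=3):
    """Pixels warmer than the background by t_threshold form the
    foreground; a minimum pixel count suppresses isolated noise."""
    mask = (frame - background) > t_threshold
    return mask if mask.sum() >= min_pixels else np.zeros_like(mask)
```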
Feature extraction was not relevant for studies using Neural Networks (NN) as machine learning classifiers. Fan et al30,31 and Sixsmith & Johnson21 used the Multi-Layer Perceptron (MLP). Shelke & Aksanli37 used another feed-forward NN (fNN). Fan et al30,31 were the only ones to use Recurrent NNs (RNN), namely Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU). Adolf et al,27 Gochoo et al32 and Tao et al39 used a Convolutional NN (CNN; Table 1, “detection methods” columns and Table S1 for detailed architectures). Taramasco et al41,42 used several combinations of CNN and RNN.
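To illustrate this family of classifiers, the sketch below trains a small feed-forward network (scikit-learn’s MLPClassifier) on synthetic flattened 8×8 frames; the data, layer sizes and labels are all assumptions for illustration, not reproductions of any reviewed study.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def synthetic_frame(fallen):
    """Synthetic 8x8 frame: warm pixels low in the image for a
    'fallen' posture, high up for a standing one (toy data)."""
    frame = rng.normal(21.0, 0.2, (8, 8))
    frame[slice(6, 8) if fallen else slice(0, 2), 3:5] += 6.0
    return frame.ravel()

X = np.array([synthetic_frame(i % 2 == 1) for i in range(200)])
y = np.array([i % 2 for i in range(200)])

# A small feed-forward network, in the spirit of the MLP/fNN
# classifiers used in the reviewed studies (layer sizes arbitrary).
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X, y)
```

The point of the NN approaches is visible here: the flattened temperature grid goes in directly, with no hand-crafted feature step.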
For the other studies,22,23,28,29,33–36,38 feature extraction preceded the classifier. The features globally integrated spatial-temporal data to define the position (coordinates, areas, specific regions) of the pixels of interest (pixels exceeding the temperature threshold), its duration, and temperature intensities (peak value, difference between background and foreground, variance). The movement features (trajectories, distances, speeds, accelerations) corresponded to the displacements of the pixels of interest between frames. Only Tao et al40 used Discrete Cosine Transforms to define their space-time features. Taniguchi et al38 and Hayashida et al33,34 defined their own classifiers without using machine learning, whereas the other studies used one or more machine learning classifiers such as Adaptive Boosting (AdaBoost), Bootstrap Aggregation (Bagging), Decision Tree (DecTree), k-Nearest Neighbors (k-NN), Linear Discriminant Analysis (LDA), Logistic Regression (Logistic), Naive Bayes (NaiveB), Random Forest (RandFor), Support Vector Machine (SVM), and Voting algorithms (Table 1, “detection methods” columns and Table S1).
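A hand-crafted feature extractor of this kind could be sketched as follows; the feature names and formulas are illustrative assumptions, not reproduced from any reviewed study.

```python
import numpy as np

def spatial_temporal_features(mask_prev, mask_curr, fps=10.0):
    """Illustrative features of the kind described above (area of
    the warm region, its vertical centroid, its downward speed).
    Masks are boolean 8x8 arrays of pixels over the threshold."""
    rows = np.arange(mask_curr.shape[0], dtype=float)
    def centroid_row(mask):
        n = mask.sum()
        return float((mask.sum(axis=1) * rows).sum() / n) if n else float("nan")
    area = int(mask_curr.sum())                  # size of the warm region
    c_prev = centroid_row(mask_prev)
    c_curr = centroid_row(mask_curr)
    v_down = (c_curr - c_prev) * fps             # rows/second, + = downward
    return {"area": area, "centroid_row": c_curr, "vertical_speed": v_down}
```

Features like these would then feed one of the classifiers listed above (SVM, k-NN, Random Forest, etc.).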
Given the variety of experimental procedures and detection methods, inter-study comparisons were difficult. The “performance” columns of Table 1 and Table S1 detail the detection performance for postures, movements, and falls. We summarize here the main detection performance reported by the studies, as ranges or rounded values (sometimes reporting only the best results when many were available).
In most studies,27,30–32,37,38,41,42 true positive (TP), false positive (FP), true negative (TN) and false negative (FN) rates were used to calculate accuracy (Ac), precision (Pr), sensitivity (Se), specificity (Sp), error of sensitivity (ESe), error of specificity (ESp) and F1-score (F1), following the equations:
Ac = (TP + TN)/(TP + TN + FP + FN) × 100 (1)
Pr = TP/(TP + FP) × 100 (2)
Se = TP/(TP + FN) × 100 (3)
Sp = TN/(TN + FP) × 100 (4)
ESe = FN/(TP + FN) × 100 (5)
ESp = FP/(TN + FP) × 100 (6)
F1 = 2 × (Pr × Se)/(Pr + Se) (7)
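Expressed in code, these metrics amount to the following (since Pr and Se are already percentages, F1 needs no further scaling):

```python
def detection_metrics(tp, tn, fp, fn):
    """Ac, Pr, Se, Sp, ESe, ESp and F1 from the four confusion-matrix
    counts, all returned as percentages."""
    ac = (tp + tn) / (tp + tn + fp + fn) * 100
    pr = tp / (tp + fp) * 100
    se = tp / (tp + fn) * 100
    sp = tn / (tn + fp) * 100
    e_se = fn / (tp + fn) * 100   # missed falls among real falls
    e_sp = fp / (tn + fp) * 100   # false alarms among non-falls
    f1 = 2 * (pr * se) / (pr + se)
    return {"Ac": ac, "Pr": pr, "Se": se, "Sp": sp,
            "ESe": e_se, "ESp": e_sp, "F1": f1}
```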
Sometimes, accuracy was instead defined by the following equations:
Ac = the number of correct classifications/the number of all activities × 100,22 (8)
Ac = the number of correctly classified frames/the number of total frames × 100,23 (9)
Concerning the studies analyzing static postures, Adolf et al27 reported poor performance, with Se ranging from 41% to 50% and Sp ranging from 82% to 90% when considering their five static postures. Gochoo et al32 reported extremely high detection performance, with Ac, Pr, Se and Sp > 99% for all postures and with all machine learning classifiers. Shelke & Aksanli37 also reported extremely high detection performance (all Ac and F1 > 99%), except when they used the NaiveB algorithm (Ac ranging from 65% to 76% and F1 from 0% to 55%).
Concerning the studies analyzing dynamic movements, the detection performance was heterogeneous. The poorest fall detection performance was reported by Sixsmith & Johnson,21 with 36% TP when comparing fall versus non-fall actions. Fan et al30,31 considered only fall movements. They found poorer Pr, Se and F1 (ranging from 67% to 92%) during front-side capture compared to side capture (Pr, Se and F1 at 100% with the Median filter plus LSTM algorithm). In the comparison between fall and non-fall actions, Mashiyama et al22 reported a fall detection Ac of 95%, Chen & Ma28 reported fall detection Ac, Se and Sp ranging from 90% to 95%, and Liu et al35 reported fall detection Pr, Se and F1 ranging from 86% to 100%. Hayashida et al33,34 reported a decrease in fall detection Ac when the ambient temperature increased (from 97% at 18 °C to 83% at 28 °C; see Table S1 for details). Chen & Wang29 reported a better overall detection Ac (fall and non-fall) at 1.5 m (95%) compared to 1.2 m (93%) and 1.8 m (88%). Their overall detection Ac was poorer when several actions were performed without turning off the recording (70% at 1.5 m from the sensor; see Table S1 for details). Finally, in the comparison between falls and several other actions, Taramasco et al41,42 reported their best overall detection Ac, Se and Sp with the CNN plus bi-LSTM algorithm (93% each). Ogawa & Naito36 reported very different overall detection Ac depending on the algorithm used (ranging from 40% to 98%). Their performance was extremely high (98%) when using the Voting algorithm (based on the three most accurate algorithms, in this case linear discrimination, k-NN and bagging). Mashiyama et al23 and Tao et al39 reported detection Ac > 95% for all actions, except for the sitting movement (79% and 90%, respectively). Taniguchi et al,38 who were the only ones to include bed-related actions, reported an overall detection Ac, error of Sp and error of Se of 89%, 17% and 6.5%, respectively.
Discussion
The objective of this systematic review was to present studies using only passive infrared sensors with a low resolution (up to 16×16 pixels) for fall detection. We found very heterogeneous associations between the choices of experimental procedures and detection methods. These choices could explain the very heterogeneous detection performance. The 15 retrieved studies were thus hardly comparable.
Influence of the Sensor
The type of sensor does not seem to influence detection performance among the studies carried out from 2014 onwards. It is difficult to conclude whether the poor detection performance published by Sixsmith & Johnson21 depended on the sensor used (the Irisys sensor, in a study carried out at least 10 years before any other), on the experimental procedure or on the detection method, because these elements were not well defined by the authors.
Influence of the Type of Experimental Procedure
Number and Position of the Sensors
Mounting a single sensor on the wall seems to encounter more spatial constraints than mounting a sensor on the ceiling. Fan et al30,31 showed that fall detection performance was weaker during front-side capture compared to side capture. Furthermore, in Chen & Wang,29 fall detection performance depended on the distance between the individual and the sensor. However, mounting at least two sensors on the wall (or on a pole) at two different heights from the floor (one above the other) could partly explain the very good detection performance obtained by Taramasco et al41,42 and Shelke & Aksanli.37 Taramasco et al41,42 tested their system in quasi-real conditions (in a furnished room, and with several activities of daily living). They reported an overall detection Ac of 93%. Furthermore, Shelke & Aksanli37 reported extremely high detection performance for every posture tested. Similarly, Gochoo et al32 reported results comparable to those of Shelke & Aksanli37 with the combination of their three sensors (two on the walls and one on the ceiling) and with other machine learning algorithms. However, it would be interesting to test this combination in an experimental design including actions such as those carried out by Taramasco et al.41,42
Ambient Room Temperature
The influence of ambient conditions on detection performance has rarely been studied. Hayashida et al33,34 were the only ones to study variations in the thermal environment, and they reported a decrease in fall detection performance as ambient temperature increased. The proposed explanation was that a higher ambient temperature comes closer to the temperature of the participant as recorded by the sensor. This recorded temperature is below 37 °C because the participant stands at a certain distance from the sensor, resulting in an increased risk of misinterpretation by the system (confusion between the background and the participant) at warmer ambient temperatures. Thus, future studies using passive infrared sensors should systematically test their fall detection systems over wide temperature ranges.
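This contrast-loss mechanism can be illustrated with a trivial sketch; the apparent body temperature and threshold are assumed values, not measurements from Hayashida et al.33,34

```python
def person_detected(ambient, body_apparent, t_threshold=2.0):
    """Toy illustration of the confusion risk described above: the
    person is only segmented when their apparent temperature exceeds
    the ambient background by the detection threshold. All values
    are illustrative assumptions."""
    return (body_apparent - ambient) > t_threshold

# The apparent temperature at the sensor is well below 37 degrees C
# because of the sensor-subject distance; assume ~29 for this sketch.
print(person_detected(18.0, 29.0))  # True: cool room, clear contrast
print(person_detected(28.0, 29.0))  # False: warm room, contrast lost
```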
Influence of the Type of Detection Method
The subtraction of the background seems useful for limiting the impact of the variations in ambient temperature reported above. Nevertheless, subtracting or comparing the background to the foreground does not seem to guarantee high detection performance. We did not systematically observe better performance when these procedures were implemented in the detection methods. It would be interesting to test the influence of background subtraction on detection performance within the same experimental procedure and with the same machine learning algorithm.
Finally, concerning the algorithms tested, those created by Hayashida et al33,34 and Taniguchi et al38 do not seem to give different detection performance from the well-known algorithms used in the other studies. The NaiveB algorithm, used by Ogawa & Naito36 and Shelke & Aksanli,37 systematically yielded very low detection performance. Also, the use of CNN and/or LSTM (or BiLSTM) seems to promote better performance compared to other algorithms30–32,39–42 (except for the study of Adolf et al27). More interestingly, the Voting algorithm, which votes according to the detection performance of the three best other algorithms used, seems to potentiate the detection performance.36 It would be interesting to test this type of machine learning algorithm in more studies.
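As an illustration of such a voting scheme, the sketch below combines LDA, k-NN and Bagging (the three algorithms Ogawa & Naito36 found most accurate) with scikit-learn’s hard-voting ensemble on synthetic frames; the data and hyperparameters are assumptions, not those of the study.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

def frame(fallen):
    """Synthetic flattened 8x8 frame: class 1 ('fall') has warm
    pixels low in the image, class 0 high up (toy data only)."""
    f = rng.normal(21.0, 0.2, (8, 8))
    f[slice(6, 8) if fallen else slice(0, 2), 3:5] += 6.0
    return f.ravel()

X = np.array([frame(i % 2 == 1) for i in range(120)])
y = np.array([i % 2 for i in range(120)])

# Majority vote over the three base learners; each prediction is
# the class chosen by at least two of them.
vote = VotingClassifier([
    ("lda", LinearDiscriminantAnalysis()),
    ("knn", KNeighborsClassifier(n_neighbors=3)),
    ("bag", BaggingClassifier(random_state=0)),
], voting="hard")
vote.fit(X, y)
```

Hard voting can only outperform its base learners when their errors are not strongly correlated, which may be why the reviewed study applied it to its three most accurate but methodologically distinct algorithms.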
Methodological Limitations in the Included Studies
Several methodological limitations in the included studies should be considered regarding the context of fall detection in the living environments of older adults. A summary of study limitations is presented in Table S2.
Firstly, most studies tested their fall detection systems in empty rooms and at one (or two) fixed ambient temperatures (between 16 °C and 26 °C depending on the study). Only two studies considered the presence of furniture,21,41,42 one included a bed,38 and one compared various ambient temperatures. However, the rooms of older adults contain a lot of furniture, which they often use for support as they move around. In addition, the thermal environment can change with geographical location, climate, season, time of day, or even the financial resources residents can devote to energy expenditure (eg, housing insulation, purchase of air conditioners or radiators). Moreover, most studies did not specify or fix the locations of actions and postures, whereas the heat intensity of the subject captured by the sensor varies with the sensor–subject distance. For ceiling-mounted sensors, given the limited coverage of the sensor (maximum 4 m × 4 m on the ground), we can assume this intensity varies only slightly for the same posture at different locations. For wall-mounted sensors, the heat intensity varies more widely and leads to large differences in performance, as demonstrated by Chen & Wang.29 In view of these two limitations, it is difficult to draw conclusions about the robustness of this type of system despite the encouraging performance reported in most studies. Future studies should systematically control and test their systems at various ambient temperatures and with subjects at various locations when performing their actions or postures.
Secondly, power consumption is a crucial factor for the usability of systems using battery-powered wearable devices (see for example19,43). In this systematic review, the study by Chen & Wang29 was the only one requiring a battery, for their mini-robot, but the authors gave no information about the robot’s autonomy. The other studies used mains power, as do most ambient and vision-based devices. Nevertheless, it would have been interesting to know system power consumption in order to estimate the electricity cost of a large-scale installation in care facilities, which sometimes contain several hundred rooms. Only one study paid attention to the power consumption of its system, but without evaluating it accurately.33,34 Therefore, we cannot discuss this topic even though it has been one of the challenges identified by Igual et al9 for almost a decade. Future studies should provide more information about power consumption, preferably in kilowatt-hours for pragmatic reasons.
Thirdly, taken together, the included studies cover a wide variety of experimental conditions. Taken individually, however, most articles tested only a limited set of conditions, such as standing still, walking, sitting down, lying down and falling forward.
Fourthly, the numbers of participants seem too low to conclude that the detection methods can accommodate the diversity of human morphological characteristics (fewer than 5 participants in half of the studies and fewer than 10 in the other half). Moreover, the characteristics of the participants were rarely described. The next step would be to experiment in real-life conditions, with a wide range of activities (including falls), and with participants of various ages, morphologies and clinical characteristics.
Finally, despite a similar article structure, the articles paid uneven attention to the information provided in the sections describing experimental conditions, detection methods and performance analysis, which complicates article analysis and inter-study comparisons. For this type of technology, it would be useful to define all the relevant information to be reported in the “methods” section of articles (original articles and conference papers).
Comparison Results with Other Sensors and Systems
Comparison with Studies Using Other Sensor Types
Faced with the large number of studies on fall detection using other types of devices (see for example8,13), it is difficult to make reliable comparisons without undertaking a large meta-analysis. However, some relevant comparisons can be drawn from recent reviews. Other studies more often include larger samples (up to several hundred participants) and older adults (up to 81 years old).8,9,44 Moreover, we observe trends similar to those of most studies: (i) in fall detection performance regardless of the type of sensor (most studies report accuracy > 85–90%),9,12–14,17,44 (ii) in the predominant use of CNN and RNN type algorithms, particularly over the past 5 years,8,12,13 and (iii) in the superior effectiveness of systems based on 3D CNN,39 CNN with 10-fold cross-validation32 and LSTM with CNN,41,42 as observed by Islam et al.13
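The four metrics compared across the reviewed studies (accuracy, precision, sensitivity and specificity) all derive from the same binary confusion matrix, as sketched below; the counts are hypothetical and do not correspond to any particular study.

```python
# Hypothetical confusion matrix for a binary fall/no-fall classifier:
tp, fn = 90, 10   # falls correctly detected / falls missed
tn, fp = 95, 5    # non-falls correctly ignored / false alarms

accuracy    = (tp + tn) / (tp + tn + fp + fn)  # overall correctness
precision   = tp / (tp + fp)                   # detected falls that were real
sensitivity = tp / (tp + fn)                   # real falls that were detected (recall)
specificity = tn / (tn + fp)                   # non-falls correctly ignored

print(accuracy, precision, sensitivity, specificity)
# 0.925 0.9473684210526315 0.9 0.95
```

The distinction matters for fall detection: a missed fall (low sensitivity) can delay care, whereas frequent false alarms (low precision or specificity) erode caregivers' trust in the system, so a single metric above 85–90% does not by itself characterize a system.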
Comparison with Studies Using Sensor Fusion Including Passive Infrared Sensors
Some studies have combined passive infrared sensors with another sensor. The fusion of passive infrared and ultrasonic sensors is interesting with regard to the effectiveness–ethical considerations dilemma. This combination has been tested by Chen and collaborators.29,45 They reported better performance with sensor fusion than with the infrared sensor alone, with accuracy ranging from 91.3–99.7% for per-action recordings (versus 88.7–94.7%, Table 1) and from 81–100% for continuous recordings (versus 55–75%, Table 1).29 This type of sensor fusion was also used by Asbjørn & Jim,46 but with a higher-resolution passive infrared sensor (80 × 60 pixels).
Future Directions and Challenges
The major innovation allowed by this type of sensor is the ability to detect a fall in an environment similar in size to a living room (4 m × 4 m) while minimizing privacy concerns. However, many challenges remain.
First, more studies in controlled conditions are needed to validate this type of fall detection system regardless of the ambient temperatures encountered in living rooms and of the range of distances between the subject and the sensor. Also, while still minimizing privacy concerns, the gain in accuracy allowed by sensor fusion (for example with an ultrasonic sensor)29,45 must be explored. Finally, the effects of a second person in the capture area have not been explored at all.
Second, although studies based on passive infrared sensors are very recent, the challenges presented in 2013 by Igual et al9 still need to be considered. Performance in real-life conditions must be verified. These conditions must include at least: (i) the target population (older adults), (ii) the potential places (eg, at home, in care facilities; in the bedroom, the toilet, the dining room), (iii) the period of use (eg, during the night or the day), and (iv) the potential modes of interaction (eg, alone, with a visitor or a caregiver). Usability must also be evaluated, covering: (i) the type of device attached to the fall detection system that alerts when a fall occurs (eg, smartphone, computer), and (ii) the user experience as assessed by patients and caregivers, particularly its ergonomic, pragmatic and hedonic dimensions (including acceptability).
Methodological Limitations in This Systematic Review
Several methodological limitations in this review should also be considered.
Due to the biases identified in the included studies and the heterogeneity of study designs, it was not possible to pool the results into a meta-analysis. To avoid amplifying these biases, we chose to present the studies in as much detail as possible (Table 1 and Table S1). However, some information may still be overly summarized and could lead to misinterpretations despite our precautions.
We followed PRISMA and its checklist to conduct this review. However, because the keywords “passive infrared”, “thermal” or “fall” cover many topics in humans as well as in other fields (eg, geology and cosmology), we chose to combine “fall detection” with “passive infrared”, “thermal” or “thermopile” in the databases. Moreover, we included only articles written in English. It is therefore possible that we omitted articles relevant to the topic of this review despite additional manual searches.
This systematic review presents an overview of studies on fall detection using the passive infrared sensors with a low resolution. The studies were very heterogeneous with regard to experimental procedures and detection methods, which made it difficult to draw formal conclusions.
All studies tested their systems in controlled conditions, mostly in empty rooms. Except in two studies,21,27 the overall performance reported for fall detection exceeded 85–90% in accuracy, precision, sensitivity or specificity. These levels seem similar to those of current fall detection systems using other types of sensors. Systems combining two or more sensors with a detection method using 3D CNN, CNN with 10-fold cross-validation, LSTM with CNN, but also LSTM alone and Voting algorithms, seemed to give the highest levels of performance (> 90%). Also, it seemed easier to reach 100% accuracy when classifications focused on postures rather than actions. By contrast, placing a single sensor on a wall seemed irrelevant, as did using the NaiveB algorithm.
These first results encourage further exploration of the potential of such sensors, and we hope this review highlights current advances and the challenges that remain. Future studies should focus on achieving high fall detection performance despite variations in ambient temperature and subject position in the capture area. Experiments in real-life conditions (eg, with older adults, in care facilities, in furnished rooms, over long periods, with other people in the capture zone, and with a fall alarm system) should also become the priority.
The authors acknowledge Elodie Prin for correcting the English, and Amandine Dubois for her occasional scientific help.
The authors report no conflicts of interest in this work.
1. Population structure and ageing. Available from: https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Population_structure_and_ageing.
2. WHO. World report on ageing and health 2015. Available from: http://www.who.int/ageing/events/world-report-2015-launch/en/.
3. World Report on Disability. Available from: https://www.who.int/teams/noncommunicable-diseases/sensory-functions-disability-and-rehabilitation/world-report-on-disability.
4. International Classification of Functioning, Disability and Health (ICF). Available from: https://www.who.int/standards/classifications/international-classification-of-functioning-disability-and-health.
5. Vejux J, Ben-Sadoun G, Piolet D, Bernat V, Ould-Aoudia V, Berrut G. [Screening risk and protective factors of nursing home admission]. Geriatr Psychol Neuropsychiatr Vieil. 2019;17(1):39–50. doi:10.1684/pnv.2019.0784. French.
6. Campbell AJ, Reinken J, Allan BC, Martinez GS. Falls in old age: a study of frequency and related clinical factors. Age Ageing. 1981;10(4):264–270. doi:10.1093/ageing/10.4.264
7. Panel on Prevention of Falls in Older Persons, American Geriatrics Society and British Geriatrics Society. Summary of the Updated American Geriatrics Society/British Geriatrics Society clinical practice guideline for prevention of falls in older persons. J Am Geriatr Soc. 2011;59(1):148–157. doi:10.1111/j.1532-5415.2010.03234.x
8. Wang X, Ellul J, Azzopardi G. Elderly Fall Detection Systems: a Literature Survey. Front Robot AI. 2020;7:71. doi:10.3389/frobt.2020.00071
9. Igual R, Medrano C, Plaza I. Challenges, issues and trends in fall detection systems. Biomed Eng Online. 2013;12(1):66. doi:10.1186/1475-925X-12-66
10. Mubashir M, Shao L, Seed L. A survey on fall detection: principles and approaches. Neurocomputing. 2013;100:144–152. doi:10.1016/j.neucom.2011.09.037
11. Qi J, Yang P, Waraich A, Deng Z, Zhao Y, Yang Y. Examining sensor-based physical activity recognition and monitoring for healthcare using Internet of Things: a systematic review. J Biomed Inform. 2018;87:138–153. doi:10.1016/j.jbi.2018.09.002
12. Singh K, Rajput A, Sharma S. Human Fall Detection Using Machine Learning Methods: a Survey. Int J Math, Eng, Manag Sci. 2019;5(1):161–180. doi:10.33889/IJMEMS.2020.5.1.014
13. Islam MM, Tayan O, Islam MR, et al. Deep Learning Based Systems Developed for Fall Detection: a Review. IEEE Access. 2020;8:166117–166137. doi:10.1109/ACCESS.2020.3021943
14. Islam M, Neom N, Imtiaz M, Nooruddin S, Islam M, Islam M. A Review on Fall Detection Systems Using Data from Smartphone Sensors. ISI. 2019;24(6):569–576. doi:10.18280/isi.240602
15. Rahman MM, Islam M, Ahmmed S, Khan SA. Obstacle and Fall Detection to Guide the Visually Impaired People with Real Time Monitoring. SN Comput Sci. 2020;1(4):219. doi:10.1007/s42979-020-00231-x
16. Ali Hashim H, Mohammed SL, Gharghan SK. Accurate fall detection for patients with Parkinson’s disease based on a data event algorithm and wireless sensor nodes. Measurement. 2020;156:107573. doi:10.1016/j.measurement.2020.107573
17. Ren L, Peng Y. Research of Fall Detection and Fall Prevention Technologies: a Systematic Review. IEEE Access. 2019;7:77702–77722. doi:10.1109/ACCESS.2019.2922708
18. Ko M, Kim S, Kim M, Kim K, Novel A. Approach for Outdoor Fall Detection Using Multidimensional Features from A Single Camera. Applied Sciences. 2018;8(6):984. doi:10.3390/app8060984
19. Nooruddin S, Milon islam MD, Sharna FA. An IoT based device-type invariant fall detection system. Internet of Things. 2020;9:100130. doi:10.1016/j.iot.2019.100130
20. Fehling P, Dassen T. A critical systematic review and synopsis of the alignment of scientific developments in surveillance technology in nursing care facilities. J Nurs. 2017;4(1):1. doi:10.7243/2056-9157-4-1
21. Sixsmith A, Johnson N. A smart sensor to detect the falls of the elderly. IEEE Pervasive Computing. 2004;3(2):42–47. doi:10.1109/MPRV.2004.1316817
22. Mashiyama S, Hong J, Ohtsuki T. A fall detection system using low resolution infrared array sensor.
23. Mashiyama S, Hong J, Ohtsuki T. Activity recognition using low resolution infrared array sensor.
24. Liang Q, Yu L, Zhai X, Wan Z, Nie H. Activity Recognition Based on Thermopile Imaging Array Sensor.
25. Page MJ, McKenzie JE, Bossuyt PM, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71. doi:10.1136/bmj.n71
26. Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan—a web and mobile app for systematic reviews. Syst Rev. 2016;5(1):210. doi:10.1186/s13643-016-0384-4
27. Adolf J, Macas M, Lhotska L, Dolezal J. Deep neural network based body posture recognitions and fall detection from low resolution infrared array sensor.
28. Chen W-H, Ma H-P. A fall detection system based on infrared array sensors with tracking capability for the elderly at home.
29. Chen Z, Wang Y. Infrared–ultrasonic sensor fusion for support vector machine–based fall detection. J Intell Mater Syst Struct. 2018;29(9):2027–2039. doi:10.1177/1045389X18758183
30. Fan X, Zhang H, Leung C, Shen Z. Robust unobtrusive fall detection using infrared array sensors.
31. Fan X, Zhang H, Leung C, Shen Z. Fall Detection with Unobtrusive Infrared Array Sensors. In: Lee S, Ko H, Oh S editors. Multisensor Fusion and Integration in the Wake of Big Data, Deep Learning and Cyber Physical System. Lecture Notes in Electrical Engineering. Cham: Springer International Publishing; 2018:253–267. doi:10.1007/978-3-319-90509-9_15
32. Gochoo M, Tan T, Batjargal T, Seredin O, Huang S. Device-Free Non-Privacy Invasive Indoor Human Posture Recognition Using Low-Resolution Infrared Sensor-Based Wireless Sensor Networks and DCNN.
33. Hayashida A, Moshnyaga V, Hashimoto K. The use of thermal ir array sensor for indoor fall detection.
34. Hayashida A, Moshnyaga V, Hashimoto K. New approach for indoor fall detection by infrared thermal array sensor.
35. Liu Z, Yang M, Yuan Y, Chan KY. Fall Detection and Personnel Tracking System Using Infrared Array Sensors. IEEE Sens J. 2020;20(16):9558–9566. doi:10.1109/JSEN.2020.2988070
36. Ogawa Y, Naito K. Fall detection scheme based on temperature distribution with IR array sensor.
37. Shelke S, Aksanli B. Static and Dynamic Activity Detection with Ambient Sensors in Smart Spaces. Sensors. 2019;19(4):804. doi:10.3390/s19040804
38. Taniguchi Y, Nakajima H, Tsuchiya N, Tanaka J, Aita F, Hata Y. A falling detection system with plural thermal array sensors.
39. Tao L, Volonakis T, Tan B, Zhang Z, Jing Y, Smith M. 3D convolutional neural network for home monitoring using low resolution thermal-sensor array.
40. Tao L, Volonakis T, Tan B, Jing Y, Chetty K, Smith M. Home Activity Monitoring using Low Resolution Infrared Sensor. arXiv:181105416 [cs]; 2018. Available from: http://arxiv.org/abs/1811.05416.
41. Taramasco C, Rodenas T, Martinez F, et al. A Novel Monitoring System for Fall Detection in Older People. IEEE Access. 2018;6:43563–43574. doi:10.1109/ACCESS.2018.2861331
42. Taramasco C, Lazo Y, Rodenas T, Fuentes P, Martínez F, Demongeot J. System Design for Emergency Alert Triggered by Falls Using Convolutional Neural Networks. J Med Syst. 2020;44(2):50. doi:10.1007/s10916-019-1484-1
43. Gharghan SK, Mohammed SL, Al-Naji A, et al. Accurate Fall Detection and Localization for Elderly People Based on Neural Network and Energy-Efficient Wireless Sensor Network. Energies. 2018;11(11):2866. doi:10.3390/en11112866
44. Pang I, Okubo Y, Sturnieks D, Lord SR, Brodie MA. Detection of Near Falls Using Wearable Devices: a Systematic Review. J Geriatr Phys Ther. 2019;42(1):48–56. doi:10.1519/JPT.0000000000000181
45. Chen Z, Liu H, Wang Y, Wang Y. A Sensor Fusion Based Pan-Tilt Platform for Activity Tracking and Fall Detection.
46. Asbjørn D, Jim T. Recognizing Bedside Events Using Thermal and Ultrasonic Readings. Sensors. 2017;17(6):1342. doi:10.3390/s17061342
47. Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z. Rethinking the Inception Architecture for Computer Vision. arXiv:151200567; 2015. Available from: http://arxiv.org/abs/1512.00567.