Article

A Review on Scaling Mobile Sensing Platforms for Human Activity Recognition: Challenges and Recommendations for Future Research

1 COPELABS, University Lusofona, 1990-124 Lisboa, Portugal
2 Fortiss GmbH—Research Institute of the Free State of Bavaria Associated with Technical University of Munich, 80805 Munich, Germany
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Submission received: 28 September 2020 / Revised: 19 November 2020 / Accepted: 21 November 2020 / Published: 29 November 2020

Abstract

Mobile sensing has been gaining ground due to the increasing capabilities of mobile and personal devices that are carried around by citizens, giving access to a large variety of data and services based on the way humans interact. Mobile sensing brings several advantages in terms of the richness of available data, particularly for human activity recognition. Nevertheless, the infrastructure required to support large-scale mobile sensing requires an interoperable design, which is still hard to achieve today. This review paper contributes to raising awareness of challenges faced today by mobile sensing platforms that perform learning and behavior inference with respect to human routines: how current solutions perform activity recognition, which classification models they consider, and which types of behavior inferences can be seamlessly provided. The paper provides a set of guidelines that contribute to a better functional design of mobile sensing infrastructures, keeping scalability as well as interoperability in mind.


1. Introduction

The daily activities of billions of people today rely on mediating technology in the form of smart heterogeneous personal mobile devices and other smart sensor devices. Such devices integrate a significant number of sensors, large storage, and high processing capabilities [1]. Moreover, personal mobile devices also integrate short-range wireless interfaces, such as Bluetooth and Wi-Fi, which make it easy to share data among neighboring devices [2].
Such novel features and, in particular, the non-intrusive sensing capabilities of personal devices are relevant in two ways. Firstly, the sensed data can assist in raising individual awareness of quality-of-life and well-being aspects, e.g., physical health conditions and individual activity awareness and management [3]. Secondly, such smart data [4,5] are relevant for assisting mobile crowd sensing services to evolve in a way that allows the network to best adjust to the topological variability that is highly tied to aspects of human routines [5]. A major practical benefit of pervasive mobile sensing is the possibility of bringing awareness to aspects concerning individual human behaviors [6]. By understanding social interaction aspects, such as similarities in human routines, it is possible to improve individual and collective well-being and the quality of life [7,8].
Several mobile sensing platforms and studies have been trying to provide a better understanding of different dimensions of well-being using data captured with multiple sensors [9,10,11]. Such data can be used to perform activity recognition and to find regularities in routines [12,13]. Even though there is an increasing interest from the research community in solutions and platforms that perform mobile sensing, there is no clear understanding of how to best develop such tools: which sensors are best applicable to which type of activity detection and/or recognition; which models best suit behavior/routine learning and inference; where and when to capture data; and how, when, and where to treat the captured data. In addition, keeping in mind that personal devices equipped with a multitude of sensors are carried around by users, there are computational aspects that arise from the distributed nature of such systems [14,15,16]. It should also be highlighted that, while mobile crowd sensing solutions have been around for a decade, the evolution of the Internet of Things and its networking and computational models from edge to cloud, coupled with the most recent evolutions in machine learning (ML), brings the possibility of further exploring mobile crowd sensing in diverse scenarios related to our lives [5]. The aforementioned technological evolution and the recent COVID-19 situation again strengthen the need to re-think the decentralization of large-scale sensing platforms. This paper is focused on debating such aspects, as there is currently a major gap in terms of paths to follow concerning large-scale sensing platforms.
The review provided in this paper is based on an extensive review of papers concerning pervasive mobile sensing work focused on promoting well-being. This review comprises an analysis of papers from 2009 until 2020 based on the paper keywords (mobile sensing; cloud-edge computing; human behavior inference; activity recognition; context awareness), which reflect areas of work that are relevant to the interests of the authors and that are currently highly relevant in the context of emergency management, such as for COVID-19. Out of the papers analyzed, we have selected a relevant subset of work, described in Section 3, to provide a comparison of features with regard to simple and complex activity recognition for social interaction and well-being promotion. The selection of this subset was based on the following criteria: (i) The work provides extensible open-source software; (ii) the work has been described in peer-reviewed publications with a high impact factor; (iii) the work has been applied in studies and the data are available.
The paper’s contributions are three-fold. Firstly, it provides a fresh look into the most relevant mobile sensing networking solutions used to bring awareness about different aspects of human routine behavior (e.g., activities, likes and dislikes, mood). The study of these solutions and their implications for the network is relevant, particularly in the context of improving the quality of life and social well-being. Secondly, it provides a comparison of features, such as the type of sensing, sets of sensors used vs. recognized activities, and applied classification models. Thirdly, it considers challenges that future mobile crowd-based sensing frameworks should overcome.
The remainder of the document is organized as follows. Section 2 goes over work that has studied mobile sensing networking solutions that attempt to capture and to model human routine properties. Section 3 discusses the types of activities that such platforms can recognize today, the sensing approaches they rely upon, and the classification models used. Challenges that future frameworks need to work on are discussed in Section 4. Section 5 provides recommendations to assist the future development of large-scale sensing platforms focused on behavior inference. Section 6 concludes this work.

2. Related Work

A first line of related work concerns surveys focused on sensing platforms that track human routine aspects. In this context, Lane et al. provide a survey on sensing algorithms, systems, and applications. Their survey formulates a first architectural framework for discussing open issues and challenges in mobile sensing [17]. Atallah and Yang contribute a debate on how pervasive mobile sensing can discriminate approaches to behavior pattern clustering and variability, and on the influence of interaction between people and objects [18]. In a subsequent paper, the authors survey existing pervasive sensing system solutions focused on personal healthcare, e.g., elderly or neonatal support [19]. Draghici et al. investigate the various methods and techniques for capturing crowd behavior through physical sensors, focusing on automatically detecting information by positioning, tracking, and measuring collections of people [20]. This line of work is focused on centralized sensing frameworks, where the captured data are centrally treated (on the cloud), while our work focuses on edge-based platforms (personal mobile devices).
A second line of related work concerns the integration of human routine aspects into pervasive mobile sensing infrastructures. The survey by Rosi et al. addresses the integration of social and pervasive sensing infrastructures [21]. In this line of work, related surveys concern the development of sensing middleware, focusing on aspects such as data capture, data treatment, and visualization. In contrast, our survey contributes to the field of mobile sensing platforms a categorization of open-source approaches focused on the inference of human behavior.
A third line of work related to this survey concerns analyses of how current mobile sensing networking frameworks perform activity recognition. That line of work categorizes recognized activities with human routine as a basis. For instance, Shoaib et al. study mobile systems developed to recognize physical activities [22]. The authors consider a characterization of work that comprises both theoretical and experimental work focused on sensor selection and resource management. Avci et al. study the application and process of activity recognition for healthcare via inertial sensors [23]. Lockhart et al. describe and categorize a variety of activity-recognition-based applications to assist in reinforcing the development of mobile sensing middleware for activity recognition [24]. Lara and Labrador [25] survey the use of wearable sensors and activity recognition. The authors propose a specific taxonomy where human activity recognition systems are categorized based on response time and learning scheme. Incel et al. cover activity recognition based on sensors integrated in personal devices with a special focus on personal health and well-being [26]. Lane and Georgiev [27] applied deep learning to the inference of activity recognition. Wang et al. [8] and Nweke et al. [28] debate the application of deep learning for assisting activity recognition. The attributes considered are the specific deep learning model to apply [8,28] and the sensor type [8].
While prior work has been focused on the design of activity recognition, our work focuses on analyzing which sensors become relevant for performing different types of activity recognition and which challenges need to be further addressed, as explained in Section 4.
A fourth category of related work concerns context awareness. The application of context awareness is vast, and several related works have dealt with context awareness in ubiquitous and pervasive computing systems [29,30]. For instance, Saeed and Waheed discuss architectures that can support context-aware middleware, comparing aspects such as fault-tolerance, adaptability, interoperability, architectural style, discoverability, and location transparency [31]. Makris et al. conducted a survey on context awareness in mobile and wireless environments and propose a context-aware abstract architecture for these specific environments [32]. Bettini et al. studied context modeling and reasoning [33]. Bandyopadhyay et al. conducted a survey on existing popular Internet of Things middleware solutions [34]. Bellavista et al. surveyed context distribution for mobile ubiquitous systems [35].
Edge computing, also known as fog computing [36], is a set of paradigms that assist computation, networking, and storage between the edges of the network and the cloud. The main goal of edge computing is to extend the cloud’s capabilities to the edges of the network, thereby supporting real-time data processing and latency-sensitive applications. In edge computing, resources are dynamically distributed across the cloud and network elements based on quality of service (QoS) requirements [37].
In this context, the mobile edge computing architecture, currently under specification by the European Telecommunications Standards Institute (ETSI) [38], provides a relevant architectural model for mobile crowd sensing platforms. Edge computing provides cloud systems that are deployed closer to the users to meet their needs regarding processing and delay with minimum help from the Internet infrastructure [39]. Edge computing can assist in lowering latency by allowing data and computation of data to be placed closer to the end user. The idea in this context is that edge devices, such as gateways, switches, and routers, can store/serve application modules before they are sent to the cloud [40].
Our survey contributes to raising awareness of the need to consider context awareness in order for pervasive mobile sensing tools to become more effective in the context of quality of life and social well-being.

3. Behavior Characterization: Activity Recognition

3.1. Analysis of Selected Mobile Sensing Tools

Activity recognition via mobile sensing platforms is a relevant area being applied in diverse aspects of well-being analysis [41,42].
This section discusses selected work focused on behavior inference analysis and awareness related to aspects of social interaction. The selection methodology, which covered papers between 2009 and 2020 and which has already been described in Section 1, considered aspects such as the possibility to reuse the selected work and its scientific impact, among other aspects. A categorization is provided in terms of the sensors used and the type of recognized activity, among other parameters. Our analysis relies on multiple features that are relevant for defining a mobile sensing system intended to perform activity recognition from an end-to-end perspective. These dimensions have been selected from related work and are summarized in Table 1. The table holds the following fields: Column 1 contains the tool analyzed and the year when the tool was first released. We highlight that, over the years, several tools, such as EmotionSense and NSense, have had updates and have been used in several studies that are cited in this paper. Column 2 contains the activities, i.e., the types of activities that the tool recognizes; Column 3 contains the type, referring to an activity being simple (S) or complex (C) [43]. A simple activity can be seen as an action where there is a repeated pattern, such as walking. A complex activity involves several actions, such as cooking, driving, and biking. Column 4 holds the type of sensor(s) considered in the tool, while Column 5 concerns the operating system and/or type of device. Column 6 includes the type of sensing approach followed, namely, opportunistic (O) [44] or participatory (P) [45] sensing, while Column 7 describes the classification tools relied upon. Column 8 describes the metrics used in activity recognition, while Column 9 describes the type of underlying network architecture used to compute and to store the data. For instance, the tool may send all of the data to a cloud server, or part of the data can be locally classified and stored (edge).
The first middleware described in Table 1 is CenceMe (2007–2008) [46], a personal mobile sensing platform designed to capture activities, disposition (happy, sad, etc.), habits, and surrounding context (e.g., temperature). Based on a client/server model, CenceMe relies on a J48 decision tree classifier for motion detection derived from accelerometer data. A second classifier handles location detection (indoors/outdoors) by relying on GPS, Wi-Fi, and Bluetooth data, such as signal strength. The surrounding context, such as temperature, is also used to infer indoor/outdoor positioning. A third classifier is applied to the classification of mobility (stationary/walking/driving). This classifier considers GPS, Wi-Fi, and Bluetooth measures with regard to neighboring nodes, e.g., changes in the number of nodes around and received signal strength. A fourth classifier detects whether a user is engaged in a conversation based on the microphone. Moreover, CenceMe interacts with social networking applications (presence detection), opening up the possibility of creating new associations on social networks. CenceMe is, therefore, a relevant and interesting tool that already considers partial computation on embedded devices, even though it is still based on a client/server architecture.
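To make the classification step more concrete, the sketch below illustrates the general approach of training a decision tree on simple time-domain accelerometer features. It is a minimal illustration only, not CenceMe's actual code: it uses scikit-learn's DecisionTreeClassifier in place of the Weka J48 classifier mentioned above, and the feature set, window size, and synthetic training data are assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier


def accel_features(window):
    """Time-domain features from one window of 3-axis accelerometer samples.

    window: array of shape (n_samples, 3) holding x, y, z acceleration.
    Returns the mean, standard deviation, and peak of the magnitude signal.
    """
    magnitude = np.linalg.norm(window, axis=1)
    return [magnitude.mean(), magnitude.std(), magnitude.max()]


# Synthetic labeled windows (0 = stationary, 1 = walking, 2 = running);
# larger noise stands in for more vigorous movement.
rng = np.random.default_rng(0)
windows = [rng.normal(loc=9.8, scale=s, size=(128, 3))
           for s in (0.05, 0.8, 2.5) for _ in range(20)]
labels = [0] * 20 + [1] * 20 + [2] * 20

X = np.array([accel_features(w) for w in windows])
y = np.array(labels)

# A J48-style decision tree (scikit-learn implements CART, which behaves similarly here).
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

new_window = rng.normal(loc=9.8, scale=0.9, size=(128, 3))
print(clf.predict([accel_features(new_window)]))  # expected: walking (1)
```

The same windowed-feature approach applies, in broad terms, to the other tools in Table 1 that rely on accelerometer-based motion detection.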
SoundSense (2009) [47] relies on the microphone to perform activity recognition derived from sound analysis. It integrates a pre-processing module that provides data source adaptation (in this case, frame adaptation), and then relies on decision tree classification to perform detection of ambient sound, music, and speech patterns. In terms of human behavior, SoundSense is a relevant tool for the detection of speech and silence only. All of the computation is locally performed on the device.
EmotionSense (2010) [48] is a mobile pervasive communication platform intended to assist in mood detection derived from, among other aspects, social interaction. EmotionSense captures emotional states based on sensed data, e.g., interaction between devices (Bluetooth beacons), speech vs. silence, and measurement of activity and location. By correlating the different types of data and relying as well on feedback from the user based on surveys periodically provided by the tool, EmotionSense is an interesting tool to assist eHealth studies. In terms of activity recognition, EmotionSense has been devised to consider only simple activity recognition. Its modular design implies that it can be easily extended. As for data storage and computation, the inference is performed on the device. Inferred data and user participatory data, provided upon consent, are sent to the cloud for further emotional inference.
AndWellness (2011) [49] is an open mobile sensing platform. AndWellness captures data via multiple sensors on a mobile device, e.g., the camera, GPS, accelerometer, and Wi-Fi. It combines such data with data provided by the user (participatory sensing). The main purpose of AndWellness is to assist eHealth participatory studies, e.g., tracking the well-being of breast cancer survivors and young mothers. AndWellness assists people in further personal behavior awareness [49]. Its architecture is based on a client–server model. On the end-user side, an application (Android) monitors daily habits and collects indicators of individual behavior. On the server side, specific campaigns can be configured, and the sensors, as well as the types of data to be collected, can be selected. The server stores all collected data and provides a front-end interface for the users to view results in real time. AndWellness relies on the accelerometer and GPS to recognize simple activities, such as motion and location. For motion detection, it relies on a C4.5 decision tree classifier. AndWellness is, therefore, an interesting tool to assist in longitudinal eHealth studies involving the need to provide surveys. Its classification is mostly based on the accelerometer: Indoors, it considers only the accelerometer, which is enough to distinguish between being still, walking, and running. Outdoors, it requires GPS to further detect complex activities, such as driving and biking. As for computation and storage, AndWellness relies on the cloud.
BeWell and its successor, BeWell+ [51], are mobile sensing systems for eHealth that consider three health dimensions: sleep, physical activity, and social interaction. BeWell provides users with smart feedback on well-being, contributing to a better perception of the well-being component. Relying on three sensors (GPS, accelerometer, and microphone), BeWell performs activity recognition for mobility, sleep, and driving, as well as location and speaking/not speaking. For activities such as mobility and sleeping, it relies on a Naive Bayes classifier. For sleep detection, it provides a model derived from data entered by the user and from statistics over time (e.g., duration and frequency of sleep periods) correlated with aspects such as mobile phone charging, use of Wi-Fi, and periods of near silence. BeWell then computes scores for well-being based on cloud computing; inference of behavior is locally processed and then sent to the cloud together with data provided directly by the user. Classification is, therefore, done on the device (edge); score computation and data storage are based on the cloud.
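As a rough illustration of the Naive Bayes style of inference described for BeWell, the sketch below classifies context windows into asleep/sedentary/mobile states from a handful of features. The feature set (movement variance, ambient sound level, charging state) and the training values are assumptions made for illustration, not BeWell's actual model.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Each row: [movement variance, ambient sound level (dB), phone charging (0/1)].
# Labels: 0 = asleep, 1 = sedentary, 2 = mobile. All values are illustrative only.
X = np.array([
    [0.01, 28, 1], [0.02, 30, 1], [0.01, 25, 1],   # asleep: still, quiet, charging
    [0.10, 45, 0], [0.15, 50, 1], [0.12, 48, 0],   # sedentary: little movement, some sound
    [1.50, 60, 0], [2.10, 65, 0], [1.80, 62, 0],   # mobile: high movement variance
])
y = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])

# Naive Bayes assumes feature independence given the class, which keeps the
# model light enough to run on a personal device.
model = GaussianNB().fit(X, y)

# Classify the most recent window of sensed context (values assumed).
print(model.predict([[0.015, 27, 1]]))  # expected: asleep (0)
```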
InSense has been developed to assist in collaborative studies focused on aspects of human behavior. The middleware collects accelerometer and audio data from multiple smartphones and then relies on cloud computing to support an analysis of similarity in terms of audio. The middleware relies on participatory and opportunistic sensing and has been applied in small-scale sensing studies involving elderly users and signs of dementia. Opportunistic sensing has been used, for instance, to assist in detecting repetitive body movements, variations in walking gait, and abnormal audio patterns (e.g., crying, shouting), while questionnaires were used to evaluate aspects of social interaction. The collected data are locally filtered and then sent to the cloud, where an estimate of proximity (based on filtering of the individual audio fingerprints) is computed.
SociableSense (2011) [50] is a mobile sensing tool that infers levels of sociability derived from user habits. It relies on the classification approaches explored in EmotionSense, introducing an edge–cloud computational approach that takes into consideration energy consumption and that distributes tasks across local and cloud resources, taking into consideration network constraints and the need to provide feedback to the user in close-to-real time.
StudentLife (2014) [53] relies on opportunistic sensing to track, among other indicators, academic performance and behavioral trends, including human interaction, to infer aspects concerning mental health. StudentLife has been heavily applied in studies involving college students. StudentLife relies on multiple sensors and detects simple activities, such as movement vs. standing still, or periods of conversation vs. silence. It also provides a model for sleep detection, a complex activity. StudentLife relies on different classifiers for different activities. A decision tree model is used for the detection of motion vs. standing still. A Markov model is applied to detect periods of conversation without resorting to storing raw data. StudentLife then relies on additional aspects, such as the level of social interaction, to infer status.
Sleep as Android (2015) [54] is a sleep-cycle-tracking middleware developed to analyze aspects such as sleep duration and sleep quality in a pervasive way. It relies on accelerometer measurements and recognition of movement activities. The algorithm used considers behavior learning derived from data provided by the user, such as usual sleep intervals. It also considers Google activity tracking, such as application usage, and detects "regularities", i.e., regular habit patterns (e.g., when sleep starts on specific days of the week).
NSense (2016) [55] has been developed as a tool to infer nearness levels, i.e., levels of physical and psychological proximity. NSense relies on opportunistic sensing and on a diversified set of sensors to, in a non-intrusive way, detect levels of social interaction and aspects that assist in finding habit correlations between different users. The user first configures a set of “interests”, which are the basis to detect psychological proximity (similarity in interests). Such interests are directly exchanged among nearby users, if there is consent for such distribution, with the aim of fighting perceived isolation. A key difference of this tool from the others is the fact that the inference of behavior, as well as classification, is performed on the edge, i.e., solely locally on the devices.
CrowdMeter [56] is a mobile sensing tool that captures real-time congestion levels in train stations. CrowdMeter relies on sensed data provided by regular users during their daily commutes. CrowdMeter leverages the location and context data of the passenger, recognizing patterns in user behavior; for instance, walking. CrowdMeter can also sense environmental features, such as surrounding sound level. Such data are relevant for providing context to a specific routine. For instance, if the surrounding sound level is high and the user is located in a specific station, then congestion in the station can be identified.

3.2. Main Sensors Being Used to Perform Pervasive and Non-Intrusive Behavior Recognition

As hardware-based sensors become smaller, more portable, and, consequently, less intrusive, they are increasingly used to collect data for the analysis of human behavior [57]. Non-intrusive behavior recognition is based on sensors available on common devices carried and controlled by end users. In contrast, intrusive behavior recognition is provided by sensors specifically installed for that purpose, e.g., a biometric sensor. For instance, Servia-Rodriguez et al. [58] provided a longitudinal study that crossed data collected via mobile phones (e.g., location data, microphone data, accelerometer measurements, and call/SMS logs), i.e., opportunistic sensing, with self-reporting (participatory sensing) assessments happening twice a day for 18,000 users; these data were collected over 3 years to predict people’s moods based on their activities, sociability, and psychological dimensions (e.g., perception of health and life satisfaction). Data analysis was performed using restricted Boltzmann machines (RBMs) for mood classification. Based on the study, they concluded that mood is interconnected with people’s routines and that mobile sensing can be used to predict the user’s mood with an accuracy of about 70%. In addition, Krupitzer et al. [59] propose a generalized self-adaptive fall detection framework that is robust to the heterogeneity of real-life situations and that adapts, at runtime, to the inevitable changes in the position of the sensor by determining the current sensor position based on the user’s movement pattern. They combine sensor data from four datasets. For fall detection, they used algorithms from other works, implemented with Weka, including configurations of Support Vector Machines (SVM), k-Nearest Neighbors (k-NN), Random Forests, and J48 decision trees, for comparison. The authors conclude that fall detection algorithms are often customized for the datasets used. Still in the context of crowd sensing, Depatla et al. presented a crowd counting system that counts the number of persons inside an area by embedding the inter-arrival times between line-of-sight (LoS) blockages into a renewal stochastic process that models human motion mathematically, based on the Received Signal Strength Indicator (RSSI) in a through-the-wall scenario, leveraging Wi-Fi technology to count the total number of people walking inside a building [60].
Despite some advancement, the most popular sets of sensors in use are the accelerometer/gyroscope and GPS. As shown in Table 1, all of the analyzed tools, with the exception of SoundSense (2009), rely on at least two of these sensors to recognize simple activities such as walking [57,61,62]. Social interaction detection is often derived from sensed data based on location (GPS) [63,64]. Some pervasive sensing tools, such as NSense, consider interaction based on Wi-Fi and Bluetooth data, i.e., the application of received signal strength to compute relative distance. This process is advantageous for indoor environments, as also noted for CrowdMeter [65,66].
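The received-signal-strength approach mentioned above is typically based on a path-loss model. The sketch below uses the log-distance path-loss model as one common formulation; the reference power and path-loss exponent are assumed values that would need per-environment calibration and are not taken from any of the surveyed tools.

```python
def rssi_to_distance(rssi_dbm: float,
                     tx_power_dbm: float = -59.0,
                     path_loss_exp: float = 2.5) -> float:
    """Estimate relative distance (metres) from RSSI via the log-distance path-loss model.

    tx_power_dbm: assumed RSSI measured at 1 m from the transmitter.
    path_loss_exp: assumed environment-dependent exponent (~2 in free space, 2.5-4 indoors).
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))


# Weaker signals map to larger estimated distances under these assumptions:
for rssi in (-59, -70, -80):
    print(f"{rssi} dBm -> ~{rssi_to_distance(rssi):.1f} m")
```

In practice, such estimates are noisy and are therefore usually used as coarse proximity indicators (e.g., "same room") rather than precise ranges.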
Most cases analyzed rely on a set of one to five sensors, usually considering the recognition of simple activities; mobility is the most commonly recognized activity. A few mobile sensing tools, such as NSense, BeWell, and AndWellness, have started to address the recognition of complex activities.
Moreover, several tools combine sensor measurement with context awareness and with data provided by the user to infer more complex aspects of human behavior, such as emotions, stress, or perceived isolation (reduction in nearness levels), as occurs with EmotionSense, StudentLife, and NSense.
In recent years, tools have been using additional sensors, such as the microphone, to help correlate levels of sound with interaction, as is the case of EmotionSense and NSense, among others [67,68].

3.3. Preferred Sensing Approaches

There are currently two main approaches for sensing: opportunistic and participatory sensing [69]. Opportunistic sensing does not require user intervention: it is based on sensors available on devices, which passively capture data. In participatory sensing, by contrast, the user intervenes, either via self-reports or via specific actions that an application requests. In both cases, prior consent is mandatory.
As shown in Table 1, all existing tools rely upon opportunistic sensing. A few tools (AndWellness, EmotionSense, SoundSense, and StudentLife) add participatory sensing, as this brings in the possibility to confirm specific aspects of data detection.
What this analysis corroborates is that opportunistic sensing is the preferred choice for solutions that concern well-being. More complete solutions can be built by combining opportunistic with participatory sensing [69,70,71].

3.4. Classification of Activities and Their Placement

While the classification of simple activities (e.g., motion, surrounding sound, conversation, proximity, location) is usually performed based on data collected from a single sensor, as explained in Section 3.1, the classification of more complex activities, such as social interaction and sleep patterns, requires data from more than one sensor. Currently, such classification is commonly based on two or three sensors, e.g., accelerometer and GPS, not due to precision aspects, but due to the ubiquity of these sensors.
For instance, BeWell and NSense classify physical activity based on decision tree models, while EmotionSense makes use of a discriminator function classifier. Sociometer makes use of a hidden Markov model to classify group interactions based on microphone and infrared data, while NSense classifies social interaction and propinquity, i.e., “the probability of social interaction to occur” [55]. For this purpose, NSense relies on the Wi-Fi, Bluetooth, accelerometer, and microphone sensors. Regarding sleep patterns, BeWell uses Gaussian models to derive patterns related to phone movement and surrounding sound based on data collected from the accelerometer and microphone. These sensed data are also combined with energy consumption.
Current pervasive mobile communication solutions rely on eager classification models without considering if such classification intends to focus on simple or complex activity recognition. However, eager classification models present significant limitations for operation in the fringes of the network, given that personal devices, such as smartphones, have limited resources [5]. Eager learning requires continuous sensing strategies that are able to supply a significant amount of data, which has implications in terms of processing and energy usage, for instance.
As for where such computation is performed, all of the mobile sensing tools analyzed, with the exception of AndWellness, support classification of activities on the edge. This relates to the fact that such classification is mostly associated with the recognition of simple activities. Behavior inference is then performed in the cloud. The single exception to this is NSense, which performs both classification and behavior inference on the mobile devices.

3.5. Current Applied Classification Metrics

The adequate selection of classification models and metrics is essential for assisting prediction of human interaction patterns and habits. Prediction is relevant both to provide better feedback to the user and to increase the efficiency of platforms. For instance, by applying machine learning to Wi-Fi-derived indicators (e.g., visited networks, neighborhood density), it is possible to predict indoor occupancy [72].
Moreover, prediction of behavior patterns is useful for the assessment and detection of abnormal behavior, which works as a control trigger in the system. Such a trigger can assist the system in adjusting different aspects in real time. For instance, the system may shift the set of sensors used to capture data [5], or the system may adjust the feedback provided to the user to prevent information overload [72]. Such an adjustment should take into consideration not only the prior digital footprint, but also external conditions (the context in time and space) that may or may not contribute to a deviation from the usual modeled pattern.
Currently, the preferred classification metrics are based on statistical properties, such as mean, variance, and standard deviation [25]. This selection is due to the ease of implementation and is not based on application or user/service requirements. However, a few solutions, such as AndWellness, consider more sophisticated metrics, such as the measurement of quality of participation in studies over time. NSense considers a measure of sociability (feedback to the user). EmotionSense also considers similar measures.
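As a concrete illustration of such statistically based metrics, the sketch below computes the mean, variance, and standard deviation over overlapping windows of a raw sensor stream, the kind of features most of the surveyed tools feed into their classifiers. The window length, overlap, and sampling rate are assumed values, not taken from any specific tool.

```python
import numpy as np


def windowed_stats(signal: np.ndarray, window: int = 128, overlap: float = 0.5) -> np.ndarray:
    """Compute mean, variance, and standard deviation over overlapping windows.

    signal: 1-D sensor stream (e.g., accelerometer magnitude).
    window: samples per window (assumed 128, roughly 2.5 s at 50 Hz).
    overlap: fraction of overlap between consecutive windows.
    """
    step = max(1, int(window * (1 - overlap)))
    features = []
    for start in range(0, len(signal) - window + 1, step):
        w = signal[start:start + window]
        features.append([w.mean(), w.var(), w.std()])
    return np.array(features)


# Example: 10 s of a synthetic 50 Hz accelerometer magnitude stream.
stream = 9.8 + 0.3 * np.sin(np.linspace(0, 20 * np.pi, 500))
print(windowed_stats(stream).shape)  # (number of windows, 3 features)
```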
Technological evolution assists the implementation of more sophisticated metrics that combine statistical accuracy with social interaction classification aspects, such as sociability levels, awareness levels, etc. It is, therefore, relevant to consider metrics that target accuracy and efficiency in terms of classification, but that also incorporate interdisciplinary indicators, which are better suited to assist platforms in scaling.

4. Discussion: Challenges for Mobile Sensing Infrastructures

A modular design for pervasive mobile communication platforms can assist in developing solutions that better address human behavior aspects. A modular design also assists in service decentralization. Such a framework design is expected to have at least four functional modules [17,22]: data capture, learning, inference, and feedback (individual and collective). Figure 1 illustrates the different modules and their interactions, as well as the processes that feed each module. Figure 2 illustrates a simple taxonomy of the different blocks discussed in this paper.
The methodology followed in this section to discuss current challenges for mobile sensing infrastructures is derived from the four identified functional blocks. Data capture is backed by an adequate sensing methodology, as discussed in Section 4.1. Learning and inference rely on contextualization and classification, discussed in Section 4.2 and Section 4.4, respectively. Feedback requires adequate prediction and QoS management. All of these functional modules need to take privacy and anonymity support into consideration, as discussed in Section 4.3.

4.1. Sensing

As discussed in Section 3.1, even though all of the studied solutions rely on opportunistic sensing, there is no consensus about the best paradigm to be applied for pervasive sensing systems.
Both opportunistic and participatory sensing have pros and cons concerning implementation complexity and data handling. Participatory sensing provides a user with a sense of control, and eventually with a reward [73]. The user can, therefore, control data to be shared [74]. The data provided by the user also help in fine-tuning the system. However, participatory sensing requires the sound support of recruitment strategies to collect meaningful data [75,76,77,78,79].
In opportunistic sensing, the user provides consent, but data are collected passively (in the background) based on specific application requirements, e.g., using geolocation or battery usage [80,81,82]. Furthermore, it is possible to explore network overhearing without adding network entropy (e.g., the need for probing). Therefore, opportunistic sensing lowers the need for incentives and well-thought-out recruitment campaigns. The downside concerns the need to integrate mechanisms that assist in better service decentralization [83] and the need to improve the efficiency of large-scale sensing, which requires adequate security mechanisms [84,85].
As has been discussed and as is observable in the tools analyzed (cf. Section 3.1), opportunistic sensing is the prevalent approach, and prior studies have discussed its advantages, i.e., how it is better suited for large-scale environments [86,87,88,89]. Independently of the data collection approach used, the amount of data, information, and generated knowledge is expected to be substantial, and the platforms need to be able to adequately manage the resources required for this support [90].
Moreover, the use of mobile devices and multiple sensors for activity recognition is expected to increase, as shown in the previous sections and as corroborated by the recent COVID-19 situation. Thus, the extraction of relevant data, the composition of information, and the generation of knowledge, as well as its proper representation, need to be carefully addressed. If data are poorly selected, composed, and generated, the operation of the underlying networked system may be endangered. Moreover, user and data privacy may also be at risk. By compromising the infrastructure that supports data storage and computation, there is the risk of producing invalid results in terms of behavior, activities, and habits.

4.2. Adequate Contextualization

The environment (i.e., context) where the user finds him/herself may influence his/her well-being [91]. With automatic context recognition, it is possible, for instance, to detect abnormal patterns, e.g., isolation of a user, falls, etc. [92,93,94]. Environmental indicators, e.g., geolocation and temperature, are also relevant in assisting sensing platforms to provide a more accurate activity recognition [43].
Moreover, social context is relevant as well, as it allows one to infer aspects concerning user and device interaction over time and space [95]. For this purpose, it is relevant to define “social” context. Currently, in the context of network architectures, social context is derived from human interaction, and models for the social context are often simplified; the indicators used are, for instance, encounter duration. Human interaction can be modeled based on sociability levels [48,55]. Therefore, context awareness should take into consideration three different dimensions: physical (space, co-presence), social (embedding in groups), and relational (e.g., identification of similarities of interests or behavior patterns). According to Vaiseman et al., the context-aware component must be discreet and should not require the person to adjust their behavior so that the application can succeed on a large scale [92].
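As a minimal sketch of how the three dimensions listed above could be carried through a sensing pipeline, the data structure below groups physical, social, and relational indicators into a single context record. All field names and example values are assumptions made purely for illustration.

```python
from dataclasses import dataclass, field


@dataclass
class ContextSnapshot:
    """One context observation covering the physical, social, and relational dimensions."""
    # Physical: space and co-presence.
    location: tuple[float, float]          # latitude, longitude
    nearby_devices: int                    # co-present devices seen in the last scan
    # Social: embedding in groups.
    encounter_duration_s: float            # time spent near the same group of devices
    # Relational: similarity of interests or behavior patterns.
    shared_interests: set[str] = field(default_factory=set)


snapshot = ContextSnapshot(location=(38.76, -9.10), nearby_devices=4,
                           encounter_duration_s=620.0,
                           shared_interests={"music", "running"})
print(snapshot)
```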
Adequate contextualization is also relevant for providing a more efficient data transmission. For instance, via context-aware mechanisms, the network can opt to keep data on the edges or send them to the cloud [96]. This can be dependent on several aspects, e.g., node availability, network status, and data types and volume. It can also be dependent on social interaction aspects [97].

4.3. Privacy and Anonymity

Any pervasive mobile communication framework needs to take privacy and anonymity into account [98,99,100]. Pervasive sensing applications require the cooperation of strangers who will not trust each other [17] and, therefore, incentives to back such schemes are essential [101,102,103].
One relevant aspect concerns the need to protect user/device privacy. Even with obfuscation techniques, the data collected via sensors can reveal, for instance, user location [104] and user habits [105]. Keeping and treating data locally is a technique that can be used to circumvent this issue. For instance, assuming a large-scale event (such as a music festival), the data to be exchanged would track an increase in, e.g., device movement or surrounding noise in a specific cluster of devices, rather than considering the devices, their users, or the conversations being held. Therefore, pervasive sensing platforms should consider data aggregation support [99,102]. Moreover, it is essential to obfuscate parameters such as device identifiers, e.g., MAC address. Furthermore, and above all, independently of data being locally stored or sent to the cloud, the user must provide his/her prior consent.
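One way to apply the obfuscation and aggregation ideas above directly on the device is sketched below: device identifiers are replaced with salted hashes, and only aggregate neighborhood statistics leave the edge. The salt handling, field names, and aggregation choices are simplified assumptions for illustration; a production scheme would rotate salts and follow applicable regulation.

```python
import hashlib
import secrets

# A per-deployment random salt; in practice it should be rotated periodically
# and never leave the device (an assumption of this sketch).
SALT = secrets.token_bytes(16)


def obfuscate_mac(mac: str) -> str:
    """Return a salted SHA-256 pseudonym for a MAC address."""
    return hashlib.sha256(SALT + mac.lower().encode()).hexdigest()[:16]


def aggregate_neighbourhood(scan_results: list[dict]) -> dict:
    """Aggregate a Wi-Fi/Bluetooth scan locally: export only device counts and
    aggregate signal statistics, never raw identifiers."""
    rssis = [entry["rssi"] for entry in scan_results]
    return {
        "devices_seen": len({obfuscate_mac(entry["mac"]) for entry in scan_results}),
        "mean_rssi": sum(rssis) / len(rssis) if rssis else None,
    }


scan = [{"mac": "AA:BB:CC:DD:EE:01", "rssi": -62},
        {"mac": "AA:BB:CC:DD:EE:02", "rssi": -71}]
print(aggregate_neighbourhood(scan))
```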
Some authors have been working on improving the privacy and anonymity of sensed data. For instance, Alsheikh et al. [106] proposed a secure framework based on incentives for crowd sensing. In this framework, the users can configure specific data anonymity levels individually. In [107,108,109], the authors presented a nonparametric privacy optimization framework with an interactive optimization algorithm to further enhance the privacy, but they did not take data fusion into consideration.

4.4. Classification

Routine modeling and an adequate identification of a digital behavior footprint across time and space require pervasive solutions to passively, and in real time, learn and adjust to the individual user’s routine [5].
Prediction of behavior patterns is useful for assisting the network in better adjusting to the demand of different users. Such an adjustment should take into consideration not only the prior digital footprint [110], but also external conditions (the context in time and space) that may or may not contribute to a deviation from the usual network behavior or from human interaction patterns. For this purpose, classification models have to be applied.
Eager learning algorithms [111], such as decision trees (DT) [112,113,114] and neural networks (NN) [11,106,110,115], build explicit descriptions of target functions based on training data sets. Generalization beyond the training data is attempted before queries are received.
Lazy learning algorithms [111,116], such as the k-nearest neighbor (k-NN) algorithm, are more commonly applied in wireless sensor networks. Lazy learning algorithms store the training data and wait until a query (test tuple) is performed [13,106]. Hence, this category of algorithms has a low computational cost during training, but may have a high computational cost at query time. In the context of the online analysis of mobile sensing data, where it may be necessary to continually retrain an eager learner, running a lazy learning algorithm with storage in the cloud may prove beneficial and improve computational performance.
Case-based reasoning (CBR) [117,118,119] stores “cases”, namely, prior experiences (dataset contextualization) and the solutions for those experiences. CBR assumes that problems recur and that similar problems have similar solutions, providing a simple way to solve new problems by reusing its (prior) dataset of solved problems. CBR is often used in recommendation systems [120] and is a lazy learning method that uses the k-NN approach.
Memory-based reasoning (MBR) [106] is a lazy learning method that usually relies on k-NN to operate [121]. The process is to store all of the training data and retrieve the instances from memory that are most similar to the query instance. The result is applied in the classification of the current instance. MBR differs from CBR in that CBR uses some form of domain theory for the case matching and adaptation process, while MBR relies entirely on similar examples from memory found in the training data and avoids the knowledge engineering phase employed by other artificial intelligence approaches. This property makes MBR a powerful tool for classification in the context of the analysis of fused data from pervasive sensing devices. An additional advantage of the k-NN algorithm compared to other popular machine learning algorithms is its simplicity of understanding and implementation.
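The sketch below illustrates the lazy, k-NN-based classification discussed above: "training" merely stores the labeled feature vectors, and all distance computation is deferred to query time. The two features (movement variance and ambient sound level) and the labels are illustrative assumptions only.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Stored "memory" of labeled feature vectors (illustrative values):
# [movement variance, ambient sound level (dB)].
# Labels: 0 = alone/still, 1 = in conversation.
X_memory = np.array([[0.02, 30], [0.03, 35], [0.9, 62],
                     [1.1, 65], [0.05, 58], [0.04, 60]])
y_memory = np.array([0, 0, 1, 1, 1, 1])

# "Training" a lazy learner only stores the data; the cost is paid per query.
knn = KNeighborsClassifier(n_neighbors=3).fit(X_memory, y_memory)

# At query time, the classifier compares the new window against stored instances.
query = np.array([[0.06, 59]])
print(knn.predict(query))  # expected: in conversation (1)
```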
To mitigate the aforementioned limitations, researchers have been investigating methods to help integrate eager classification models into pervasive mobile networking architectures. One possibility, for instance, is to reduce the resources consumed during continuous sensing activities. Alternatively, it is feasible to send data to the cloud and perform classification learning there. Examples of methods aiming to reduce the resources used in sensing activities include hierarchical sensor management strategies, balancing the performance of applications and sensing activities, or monitoring topology changes and adapting the rate of sensing queries. Concerning the exploitation of cloud computing for the development of pervasive mobile communication platforms, in their work on SociableSense [50], Rachuri et al. show that tools that rely on continuous sensing require adaptation of cloud computing for efficient data upload. SociableSense relies on classification to assist task placement on the edges or on the cloud, taking into consideration energy consumption, latency, and the resulting throughput.
In what concerns behavior inference, the standard approach for data mining until recently has been to collect raw data and send them to cloud servers, where they would then be filtered, classified, and mined to identify and analyze statistical properties, e.g., mobility or interaction (encounters, distance) patterns [122]. This process is time and resource consuming [112], and above all, it raises critical privacy issues. Hence, an important challenge to address in the context of mobile sensing networking concerns the use of data mining techniques on the edge to the detriment of or to complement sending data to the cloud [122].
Eager classification models, such as neural networks, seem better suited for crowd sensing in wireless environments [123,124]. The reason is that eager classification seems to fit devices on the edges, for instance, personal devices such as smartphones [5]. This could be achieved via a hierarchical classification strategy. Such a strategy holds benefits in terms of data capture, as data can be kept locally instead of being sent to the cloud.
It is worth highlighting that data extracted from a single sensor (such as an accelerometer) are already often classified locally and used to recognize different activities, such as sitting, jogging, or walking. Nevertheless, concerning sensing middleware, several sensors are usually applied to perform activity recognition, as explained in Section 3. With data fusion, the classification process becomes more complex in terms of data volume and possible features. The risk of misclassification also increases due to the higher data variability. Hence, a first requirement for classifying data acquired by multiple sensors on the network edges is to reduce the computational cost of the required classification models. However, there are still few studies focused on this paradigm [27,125,126]. To fulfill such a requirement, it is necessary to evaluate whether or not lazy classification models suit the limitations of the edge of the network. A recent paper specifically focused on healthcare discusses an ML recommendation system on the edge, where ML and recipe search tasks are placed on the edge, thus reducing the overall latency and computational impact on mobile end-user devices [127].
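One way to realize the hierarchical strategy discussed above is to run a lightweight classifier locally and defer a window to the cloud only when the local prediction is not confident enough. The sketch below illustrates this pattern; the confidence threshold, the tiny local model, and the commented-out upload step are assumptions made for illustration and do not describe a mechanism used by the surveyed tools.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

CONFIDENCE_THRESHOLD = 0.8  # assumed value; tuned per deployment


def classify_on_edge(local_model: DecisionTreeClassifier, features: np.ndarray):
    """Classify locally; flag the window for cloud offload when confidence is low."""
    proba = local_model.predict_proba([features])[0]
    label, confidence = int(np.argmax(proba)), float(np.max(proba))
    if confidence >= CONFIDENCE_THRESHOLD:
        return label, "edge"
    # Below threshold: queue the (already filtered) features for the cloud model.
    # send_to_cloud(features)  # hypothetical upload step, omitted in this sketch
    return label, "deferred-to-cloud"


# Tiny illustrative local model trained on two features per window:
# [movement variance, ambient sound level].
X = np.array([[0.02, 30], [0.03, 32], [1.2, 64], [1.4, 66]])
y = np.array([0, 0, 1, 1])
edge_model = DecisionTreeClassifier(max_depth=2).fit(X, y)

print(classify_on_edge(edge_model, np.array([0.025, 31])))  # confident -> handled on edge
```

The appeal of this split is that raw data never need to leave the device for the common, easy cases, which also relates to the privacy considerations of Section 4.3.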

4.5. Categorization of the Different Approaches

Leveraging a new generation of mobile sensing platforms that can cope with new Internet challenges, such as mobility and distributed services, requires support on different fronts, as has been debated throughout the previous sections. To better assist future work, Table 2 provides a summary of the studied work according to the area. The table starts with data capture, providing related work that has been discussed for opportunistic sensing, participatory sensing, and hybrid approaches. Learning and contextualization work is categorized based on the focus on learning of routine habits (e.g., human interaction, sociability levels, inter-contact times) and on context-awareness applicability aspects. Inference and classification work is categorized in terms of activities being recognized, placement of classification models (e.g., edges of the network), specific models being applied, and classification metrics in use. Feedback and behavior inference work is split into providing behavior awareness or supporting well-being, as well as according to aspects of inference of human behavior. Security is split into privacy, anonymity, and incentives, which are a relevant aspect for bootstrapping pervasive mobile sensing systems, particularly in large-scale environments. Last, related work that has provided contributions to the specific topic is listed.

5. Recommendations for Future Research

Pervasive sensing middleware is highly relevant in the context of contributions to societal well-being and quality of life, as can be seen today with, for instance, the COVID-19 situation. A common modular design for such frameworks, where the different sensors are adequately mapped to activity recognition, is relevant for creating tools that can more efficiently achieve their purpose. Overall, pervasive mobile sensing is expected to grow further in the context of areas such as mobile crowd sensing and the Internet of Things. Due to this, it is relevant to debate how to approach future solutions, particularly those based on a consolidated view of sensing, activity recognition, and the computational and networking support required to sustain mobile sensing frameworks.
By understanding social interaction aspects, such as similarities in human routines, it is feasible to assist in improving well-being and quality of life. This paper provides a thorough review of related work focused on behavior inference and social interaction awareness. The paper then discusses the different functional modules that mobile sensing platforms for activity recognition need to consider. We have reviewed the most relevant open-source pervasive sensing solutions developed to bring awareness about different aspects of human routine behavior, and warn about the current challenges and limitations faced.
Derived from the current analysis, we propose the following recommendations for future work:
  • The data collected should be kept private and anonymous, as mandated today by privacy regulations, such as the General Data Protection Regulation (GDPR). This aspect requires adequate data treatment and filtering, and it must be ensured that feedback and visualization do not endanger individuals in any way. For that purpose, the network architecture should consider that data should be treated as much as possible in end-user devices or as close to the end user as possible (edge of the network). The discussion of aspects concerning privacy and anonymity is covered in Section 4.3.
  • The analysis described in this paper, based on the extensive related work, shows that behavior inference for simple activities—as well as for complex activities, as demonstrated by the middleware NSense—can be at least partially located on the edge. Furthermore, Edge AI [134] is addressing this aspect today via the distribution of artificial intelligence applications across the cloud–edge continuum. Therefore, whenever feasible (due to the associated computational cost), classification and inference mechanisms should be made available on the edge, thus reducing the need for users to always be online. The possibility to export data should be given to the user, but should not be an underlying assumption. Moreover, the selection of specific classification models needs to take data fusion into consideration. Data fusion can provide a lighter software design. Data fusion is also relevant for providing finer-grained behavior inference. Classification and behavior inference aspects and today’s approaches have been discussed in Section 4.4.
  • Mobile sensing platforms need to be designed with energy consumption aspects in mind. In pervasive sensing platforms, the use of multiple sensors implies heavy energy consumption, thus limiting the potential of these solutions in large-scale scenarios. From this perspective, which has been discussed in Section 4.2, it is also important to highlight the role of opportunistic wireless routing approaches that take energy consumption into consideration [103,135,136].

6. Conclusions

This article provides a review of the challenges faced by large-scale mobile sensing platforms that have been devised for human activity recognition. The review integrates an analysis of selected open-source mobile sensing tools and their categorization in terms of different aspects, such as types of sensors used, computation placement, recognized activities, and types of classification. Derived from this analysis, the review provides recommendations for the future design of mobile sensing platforms, namely, aspects to assist in overcoming the identified challenges in future applications. Mobile sensing applications—and, in particular, mobile crowd sensing—are again gaining ground as a category of technology to assist with different aspects of physical and social well-being. Nonetheless, specific frameworks to provide a better design in terms of usability, behavior inference, or even the specific sensors and classifiers to apply to large-scale decentralized analysis are missing; this is, therefore, a relevant field of work for current and future research in the context of edge/cloud computing, the Internet of Things, and decentralized application architectures.

Author Contributions

Both authors contributed as follows: Conceptualization, L.I.C. and R.C.S.; Methodology, L.I.C. and R.C.S.; Formal Analysis, L.I.C.; Investigation, L.I.C.; Writing Original Draft Preparation, L.I.C. and R.C.S.; Writing Review & Editing, L.I.C. and R.C.S.; Visualization, L.I.C. and R.C.S.; Supervision, R.C.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by FCT reference number UID/MULTI/04111/2019.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ma, H.; Zhao, D.; Yuan, P. Opportunities in mobile crowd sensing. IEEE Commun. Mag. 2014, 52, 29–35. [Google Scholar] [CrossRef]
  2. Guo, B.; Wang, Z.; Yu, Z.; Wang, Y.; Yen, N.Y.; Huang, R.; Zhou, X. Mobile crowd sensing and computing: The review of an emerging human-powered sensing paradigm. ACM Comput. Surv. (CSUR) 2015, 48, 1–31. [Google Scholar] [CrossRef]
  3. Hänsel, K. Wearable and ambient sensing for well-being and emotional awareness in the smart workplace. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct, New York, NY, USA, 12–16 September 2016; pp. 411–416. [Google Scholar]
  4. García-Gil, D.; Luengo, J.; García, S.; Herrera, F. Enabling smart data: Noise filtering in big data classification. Inf. Sci. 2019, 479, 135–152. [Google Scholar] [CrossRef]
  5. Sofia, R.C.; Carvalho, L.I.; Pereira, F.M. The Role of Smart Data in Inference of Human Behavior and Interaction, Smart Data: State-of-the-Art Perspectives in Computing and Applications; Li, K.-C., Di Martino, B., Yang, L.T., Zhang, Q., Eds.; CRC Press: Boca Raton, FL, USA, 2019. [Google Scholar]
  6. Pantic, M.; Pentland, A.; Nijholt, A.; Huang, T.S. Human computing and machine understanding of human behavior: A survey. In Artifical Intelligence for Human Computing; Springer: Berlin, Germany, 2007; pp. 47–71. [Google Scholar]
  7. Li, J.; de Ridder, H.; Vermeeren, A.; Conrado, C.; Martella, C. Designing for crowd well-being: Current designs, strategies and future design suggestions. In Proceedings of the 5th International Congress of International Association of Societies of Design Research, Tokyo, Japan, 26–30 August 2013; pp. 2278–2289. [Google Scholar]
  8. Wang, J.; Chen, Y.; Hao, S.; Peng, X.; Hu, L. Deep learning for sensor-based activity recognition: A survey. Pattern Recognit. Lett. 2019, 119, 3–11. [Google Scholar] [CrossRef] [Green Version]
  9. Zhang, Y.; Chen, M.; Mao, S.; Hu, L.; Leung, V.C. CAP: Community activity prediction based on big data analysis. IEEE Netw. 2014, 28, 52–57. [Google Scholar] [CrossRef]
  10. Hinckley, K.; Pierce, J.; Sinclair, M.; Horvitz, E. Sensing techniques for mobile interaction. In Proceedings of the 13th annual ACM Symposium on User Interface Software and Technology, San Diego, CA, USA, 6–8 November 2000; pp. 91–100. [Google Scholar]
  11. Srivastava, M.; Abdelzaher, T.; Szymanski, B. Human-centric sensing. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2012, 370, 176–197. [Google Scholar] [CrossRef]
  12. Hong, X.; Nugent, C.; Mulvenna, M.; McClean, S.; Scotney, B.; Devlin, S. Evidential fusion of sensor data for activity recognition in smart homes. Pervasive Mob. Comput. 2009, 5, 236–252. [Google Scholar] [CrossRef]
  13. Dernbach, S.; Das, B.; Krishnan, N.C.; Thomas, B.L.; Cook, D.J. Simple and complex activity recognition through smart phones. In Proceedings of the IEEE 2012 Eighth International Conference on Intelligent Environments, Guanajuato, Mexico, 26–29 June 2012; pp. 214–221. [Google Scholar]
  14. Satyanarayanan, M.; Simoens, P.; Xiao, Y.; Pillai, P.; Chen, Z.; Ha, K.; Hu, W.; Amos, B. Edge analytics in the internet of things. IEEE Pervasive Comput. 2015, 14, 24–31. [Google Scholar] [CrossRef] [Green Version]
  15. Garcia Lopez, P.; Montresor, A.; Epema, D.; Datta, A.; Higashino, T.; Iamnitchi, A.; Barcellos, M.; Felber, P.; Riviere, E. Edge-Centric Computing: Vision and Challenges. In Proceedings of the ACM SIGCOMM Computer Communication Review, London, UK, 17–21 August 2015; Volume 45, pp. 37–42. [Google Scholar]
  16. Bellavista, P.; Chessa, S.; Foschini, L.; Gioia, L.; Girolami, M. Human-enabled edge computing: Exploiting the crowd as a dynamic extension of mobile edge computing. IEEE Commun. Mag. 2018, 56, 145–155. [Google Scholar] [CrossRef] [Green Version]
  17. Lane, N.D.; Miluzzo, E.; Lu, H.; Peebles, D.; Choudhury, T.; Campbell, A.T. A survey of mobile phone sensing. IEEE Commun. Mag. 2010, 48, 140–150. [Google Scholar] [CrossRef]
  18. Atallah, L.; Yang, G.Z. The use of pervasive sensing for behaviour profiling—A survey. Pervasive Mob. Comput. 2009, 5, 447–464. [Google Scholar] [CrossRef]
  19. Atallah, L.; Lo, B.; Yang, G.Z. Can pervasive sensing address current challenges in global healthcare? J. Epidemiol. Glob. Health 2012, 2, 1–13. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Draghici, A.; Steen, M.V. A survey of techniques for automatically sensing the behavior of a crowd. ACM Comput. Surv. (CSUR) 2018, 51, 1–40. [Google Scholar] [CrossRef] [Green Version]
  21. Rosi, A.; Mamei, M.; Zambonelli, F.; Dobson, S.; Stevenson, G.; Ye, J. Social sensors and pervasive services: Approaches and perspectives. In Proceedings of the 2011 IEEE International Conference on Pervasive Computing and Communications Workshops (PERCOM Workshops), Seattle, WA, USA, 21–25 March 2011; pp. 525–530. [Google Scholar]
  22. Shoaib, M.; Bosch, S.; Incel, O.D.; Scholten, H.; Havinga, P.J. A survey of online activity recognition using mobile phones. Sensors 2015, 15, 2059–2085. [Google Scholar] [CrossRef] [PubMed]
  23. Avci, A.; Bosch, S.; Marin-Perianu, M.; Marin-Perianu, R.; Havinga, P. Activity recognition using inertial sensing for healthcare, wellbeing and sports applications: A survey. In Proceedings of the 23rd International Conference on Architecture of Computing Systems, Hannover, Germany, 22–25 February 2010; pp. 1–10. [Google Scholar]
  24. Lockhart, J.W.; Pulickal, T.; Weiss, G.M. Applications of mobile activity recognition. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing, Pittsburgh, PA, USA, 5–8 September 2012; pp. 1054–1058. [Google Scholar]
  25. Lara, O.D.; Labrador, M.A. A survey on human activity recognition using wearable sensors. IEEE Commun. Surv. Tutor. 2012, 15, 1192–1209. [Google Scholar] [CrossRef]
  26. Incel, O.D.; Kose, M.; Ersoy, C. A review and taxonomy of activity recognition on mobile phones. BioNanoScience 2013, 3, 145–171. [Google Scholar] [CrossRef]
  27. Lane, N.D.; Georgiev, P. Can deep learning revolutionize mobile sensing? In Proceedings of the 16th International Workshop on Mobile Computing Systems and Applications, Santa Fe, NM, USA, 12–13 February 2015; pp. 117–122. [Google Scholar]
  28. Nweke, H.F.; Teh, Y.W.; Al-Garadi, M.A.; Alo, U.R. Deep learning algorithms for human activity recognition using mobile and wearable sensor networks: State of the art and research challenges. Expert Syst. Appl. 2018, 105, 233–261. [Google Scholar] [CrossRef]
  29. Perera, C.; Zaslavsky, A.; Christen, P.; Georgakopoulos, D. Context aware computing for the internet of things: A survey. IEEE Commun. Surv. Tutor. 2013, 16, 414–454. [Google Scholar] [CrossRef] [Green Version]
  30. Altshuler, Y.; Fire, M.; Aharony, N.; Volkovich, Z.; Elovici, Y.; Pentland, A.S. Trade-offs in social and behavioral modeling in mobile networks. In International Conference on Social Computing, Behavioral-Cultural Modeling, and Prediction; Springer: Berlin, Germany, 2013; pp. 412–423. [Google Scholar]
  31. Saeed, A.; Waheed, T. An extensive survey of context-aware middleware architectures. In Proceedings of the 2010 IEEE International Conference on Electro/Information Technology, Normal, IL, USA, 20–22 May 2010; pp. 1–6. [Google Scholar]
  32. Makris, P.; Skoutas, D.N.; Skianis, C. A survey on context-aware mobile and wireless networking: On networking and computing environments’ integration. IEEE Commun. Surv. Tutor. 2012, 15, 362–386. [Google Scholar] [CrossRef]
  33. Bettini, C.; Brdiczka, O.; Henricksen, K.; Indulska, J.; Nicklas, D.; Ranganathan, A.; Riboni, D. A survey of context modelling and reasoning techniques. Pervasive Mob. Comput. 2010, 6, 161–180. [Google Scholar] [CrossRef]
  34. Bandyopadhyay, S.; Sengupta, M.; Maiti, S.; Dutta, S. Role of middleware for internet of things: A study. Int. J. Comput. Sci. Eng. Surv. 2011, 2, 94–105. [Google Scholar] [CrossRef]
  35. Bellavista, P.; Corradi, A.; Fanelli, M.; Foschini, L. A survey of context data distribution for mobile ubiquitous systems. ACM Comput. Surv. (CSUR) 2012, 44, 1–45. [Google Scholar] [CrossRef] [Green Version]
  36. Shi, W.; Cao, J.; Zhang, Q.; Li, Y.; Xu, L. Edge computing: Vision and challenges. IEEE Internet Things J. 2016, 3, 637–646. [Google Scholar] [CrossRef]
  37. Sarddar, D.; Barman, S.; Sen, P.; Pandit, R. Refinement of Resource Management in Fog Computing Aspect of QoS. Int. J. Grid Distrib. Comput. 2018, 11, 29–44. [Google Scholar] [CrossRef]
  38. Hu, Y.C.; Patel, M.; Sabella, D.; Sprecher, N.; Young, V. Mobile edge computing—A key technology towards 5G. ETSI White Pap. 2015, 11, 1–16. [Google Scholar]
  39. Bilal, K.; Khalid, O.; Erbad, A.; Khan, S.U. Potentials, trends, and prospects in edge technologies: Fog, cloudlet, mobile edge, and micro data centers. Comput. Netw. 2018, 130, 94–120. [Google Scholar] [CrossRef] [Green Version]
  40. Carvalho, L.I.; Silva, D.; Sofia, R.C. Leveraging Context-awareness to Better Support the IoT Cloud-Edge Continuum. arXiv 2020, arXiv:2005.00121. [Google Scholar]
  41. Riboni, D.; Bettini, C.; Civitarese, G.; Janjua, Z.H.; Helaoui, R. Smartfaber: Recognizing fine-grained abnormal behaviors for early detection of mild cognitive impairment. Artif. Intell. Med. 2016, 67, 57–74. [Google Scholar] [CrossRef] [Green Version]
  42. Dawadi, P.N.; Cook, D.J.; Schmitter-Edgecombe, M. Automated cognitive health assessment using smart home monitoring of complex tasks. IEEE Trans. Syst. Man Cybern. Syst. 2013, 43, 1302–1313. [Google Scholar] [CrossRef] [Green Version]
  43. Liu, Y.; Nie, L.; Liu, L.; Rosenblum, D.S. From action to activity: Sensor-based activity recognition. Neurocomputing 2016, 181, 108–115. [Google Scholar] [CrossRef]
  44. Liang, Q.; Cheng, X.; Huang, S.C.; Chen, D. Opportunistic sensing in wireless sensor networks: Theory and application. IEEE Trans. Comput. 2013, 63, 2002–2010. [Google Scholar] [CrossRef]
  45. Burke, J.A.; Estrin, D.; Hansen, M.; Parker, A.; Ramanathan, N.; Reddy, S.; Srivastava, M.B. Participatory Sensing. Available online: https://escholarship.org/uc/item/19h777qd (accessed on 21 November 2020).
  46. Miluzzo, E.; Lane, N.D.; Eisenman, S.B.; Campbell, A.T. CenceMe–injecting sensing presence into social networking applications. In European Conference on Smart Sensing and Context; Springer: Berlin/Heidelberg, Germany, 2007; pp. 1–28. [Google Scholar]
  47. Lu, H.; Pan, W.; Lane, N.D.; Choudhury, T.; Campbell, A.T. SoundSense: Scalable sound sensing for people-centric applications on mobile phones. In Proceedings of the 7th International Conference on Mobile Systems, Applications, and Services, Krakow, Poland, 22–25 June 2009; pp. 165–178. [Google Scholar]
  48. Rachuri, K.K.; Musolesi, M.; Mascolo, C.; Rentfrow, P.J.; Longworth, C.; Aucinas, A. EmotionSense: A mobile phones based adaptive platform for experimental social psychology research. In Proceedings of the 12th ACM International Conference on Ubiquitous Computing, Copenhagen, Denmark, 26–29 September 2010; pp. 281–290. [Google Scholar]
  49. Hicks, J.; Ramanathan, N.; Kim, D.; Monibi, M.; Selsky, J.; Hansen, M.; Estrin, D. AndWellness: An open mobile system for activity and experience sampling. In Proceedings of the Conference on Wireless Health, San Diego, CA, USA, 5–7 October 2010; pp. 34–43. [Google Scholar]
  50. Rachuri, K.K.; Mascolo, C.; Musolesi, M.; Rentfrow, P.J. Sociablesense: Exploring the trade-offs of adaptive sampling and computation offloading for social sensing. In Proceedings of the 17th Annual International Conference on Mobile Computing and Networking, Las Vegas, NV, USA, 19–23 September 2011; pp. 73–84. [Google Scholar]
  51. Lin, M.; Lane, N.D.; Mohammod, M.; Yang, X.; Lu, H.; Cardone, G.; Ali, S.; Doryab, A.; Berke, E.; Campbell, A.T.; et al. BeWell+ multi-dimensional wellbeing monitoring with community-guided user feedback and energy optimization. In Proceedings of the Conference on Wireless Health, La Jolla, CA, USA, 22–25 October 2012; pp. 1–8. [Google Scholar]
  52. Castro, L.A.; Beltrán, J.; Perez, M.; Quintana, E.; Favela, J.; Chávez, E.; Rodriguez, M.; Navarro, R. Collaborative opportunistic sensing with mobile phones. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication, Seattle, WA, USA, 13–17 September 2014; pp. 1265–1272. [Google Scholar]
  53. Wang, R.; Chen, F.; Chen, Z.; Li, T.; Harari, G.; Tignor, S.; Zhou, X.; Ben-Zeev, D.; Campbell, A.T. StudentLife: Assessing mental health, academic performance and behavioral trends of college students using smartphones. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Seattle, WA, USA, 13–17 September 2014; pp. 3–14. [Google Scholar]
  54. Akbar, F.; Weber, I. #Sleep_as_Android: Feasibility of Using Sleep Logs on Twitter for Sleep Studies. In Proceedings of the 2016 IEEE International Conference on Healthcare Informatics (ICHI), Chicago, IL, USA, 4–7 October 2016; pp. 227–233. [Google Scholar]
  55. Sofia, R.; Firdose, S.; Lopes, L.A.; Moreira, W.; Mendes, P. NSense: A people-centric, non-intrusive opportunistic sensing tool for contextualizing nearness. In Proceedings of the 2016 IEEE 18th International Conference on e-Health Networking, Applications and Services (Healthcom), Munich, Germany, 14–16 September 2016; pp. 1–6. [Google Scholar]
  56. Elhamshary, M.; Youssef, M.; Uchiyama, A.; Yamaguchi, H.; Higashino, T. Crowdmeter: Congestion level estimation in railway stations using smartphones. In Proceedings of the 2018 IEEE International Conference on Pervasive Computing and Communications (PerCom), Athens, Greece, 19–23 March 2018; pp. 1–12. [Google Scholar]
  57. Onnela, J.P.; Waber, B.N.; Pentland, A.; Schnorf, S.; Lazer, D. Using sociometers to quantify social interaction patterns. Sci. Rep. 2014, 4, 5604. [Google Scholar] [CrossRef] [Green Version]
  58. Servia-Rodríguez, S.; Rachuri, K.K.; Mascolo, C.; Rentfrow, P.J.; Lathia, N.; Sandstrom, G.M. Mobile sensing at the service of mental well-being: A large-scale longitudinal study. In Proceedings of the 26th International Conference on World Wide Web, Perth, Australia, 3–7 May 2017; pp. 103–112. [Google Scholar]
  59. Krupitzer, C.; Sztyler, T.; Edinger, J.; Breitbach, M.; Stuckenschmidt, H.; Becker, C. Hips do lie! a position-aware mobile fall detection system. In Proceedings of the 2018 IEEE International Conference on Pervasive Computing and Communications (PerCom), Athens, Greece, 19–23 March 2018; pp. 1–10. [Google Scholar]
  60. Depatla, S.; Mostofi, Y. Crowd counting through walls using WiFi. In Proceedings of the 2018 IEEE International Conference on Pervasive Computing and Communications (PerCom), Athens, Greece, 19–23 March 2018; pp. 1–10. [Google Scholar]
  61. Garcia-Ceja, E.; Brena, R. Long-term activity recognition from accelerometer data. Procedia Technol. 2013, 7, 248–256. [Google Scholar] [CrossRef] [Green Version]
  62. Preece, S.J.; Goulermas, J.Y.; Kenney, L.P.; Howard, D. A comparison of feature extraction methods for the classification of dynamic activities from accelerometer data. IEEE Trans. Biomed. Eng. 2008, 56, 871–879. [Google Scholar] [CrossRef] [PubMed]
  63. Liu, X.; Gong, L.; Gong, Y.; Liu, Y. Revealing travel patterns and city structure with taxi trip data. J. Transp. Geogr. 2015, 43, 78–90. [Google Scholar] [CrossRef] [Green Version]
  64. Wirz, M.; Schläpfer, P.; Kjærgaard, M.B.; Roggen, D.; Feese, S.; Tröster, G. Towards an Online Detection of Pedestrian Flocks in Urban Canyons by Smoothed Spatio-Temporal Clustering of GPS Trajectories. Available online: https://0-dl-acm-org.brum.beds.ac.uk/doi/proceedings/10.1145/2063212 (accessed on 21 November 2020).
  65. Werb, J.; Lanzl, C. Designing a positioning system for finding things and people indoors. IEEE Spectr. 1998, 35, 71–78. [Google Scholar] [CrossRef]
  66. Kawaguchi, N.; Yano, M.; Ishida, S.; Sasaki, T.; Iwasaki, Y.; Sugiki, K.; Matsubara, S. Underground positioning: Subway information system using WiFi location technology. In Proceedings of the 2009 Tenth International Conference on Mobile Data Management: Systems, Services and Middleware, Taipei, Taiwan, 18–20 May 2009; pp. 371–372. [Google Scholar]
  67. Rahman, T.; Adams, A.T.; Zhang, M.; Cherry, E.; Zhou, B.; Peng, H.; Choudhury, T. BodyBeat: A mobile system for sensing non-speech body sounds. In Proceedings of ACM MobiSys 2014, Bretton Woods, NH, USA, 16–19 June 2014. [Google Scholar]
  68. Guo, B.; Yu, Z.; Zhou, X.; Zhang, D. From participatory sensing to mobile crowd sensing. In Proceedings of the 2014 IEEE International Conference on Pervasive Computing and Communication Workshops (PERCOM WORKSHOPS), Budapest, Hungary, 24–28 March 2014; pp. 593–598. [Google Scholar]
  69. Lane, N.D.; Eisenman, S.B.; Musolesi, M.; Miluzzo, E.; Campbell, A.T. Urban sensing systems: Opportunistic or participatory? In Proceedings of the 9th Workshop on Mobile Computing Systems and Applications (HotMobile 2008), Napa Valley, CA, USA, 25–26 February 2008; pp. 11–16. [Google Scholar]
  70. Guo, B.; Chen, C.; Zhang, D.; Yu, Z.; Chin, A. Mobile crowd sensing and computing: When participatory sensing meets participatory social media. IEEE Commun. Mag. 2016, 54, 131–137. [Google Scholar] [CrossRef] [Green Version]
  71. Avvenuti, M.; Bellomo, S.; Cresci, S.; La Polla, M.N.; Tesconi, M. Hybrid crowdsensing: A novel paradigm to combine the strengths of opportunistic and participatory crowdsensing. In Proceedings of the 26th International Conference on World Wide Web Companion, Perth, Australia, 3–7 April 2017; pp. 1413–1421. [Google Scholar]
  72. Wang, W.; Chen, J.; Hong, T. Occupancy prediction through machine learning and data fusion of environmental sensing and Wi-Fi sensing in buildings. Autom. Constr. 2018, 94, 233–243. [Google Scholar] [CrossRef] [Green Version]
  73. Ganti, R.K.; Pham, N.; Ahmadi, H.; Nangia, S.; Abdelzaher, T.F. GreenGPS: A participatory sensing fuel-efficient maps application. In Proceedings of the 8th International Conference on Mobile Systems, Applications, and Services, San Francisco, CA, USA, 15–18 June 2010; pp. 151–164. [Google Scholar]
  74. Christin, D.; Reinhardt, A.; Kanhere, S.S.; Hollick, M. A survey on privacy in mobile participatory sensing applications. J. Syst. Softw. 2011, 84, 1928–1946. [Google Scholar] [CrossRef]
  75. Reddy, S.; Estrin, D.; Srivastava, M. Recruitment framework for participatory sensing data collections. In International Conference on Pervasive Computing; Springer: Berlin, Germany, 2010; pp. 138–155. [Google Scholar]
  76. Koutsopoulos, I. Optimal incentive-driven design of participatory sensing systems. In Proceedings of the 2013 IEEE INFOCOM, Turin, Italy, 14–19 April 2013; pp. 1402–1410. [Google Scholar]
  77. Dua, A.; Bulusu, N.; Feng, W.C.; Hu, W. Towards trustworthy participatory sensing. In Proceedings of the 4th USENIX Conference on Hot topics in Security, Montreal, QC, Canada, 11 August 2009; p. 8. [Google Scholar]
  78. Luo, T.; Tan, H.P.; Xia, L. Profit-maximizing incentive for participatory sensing. In Proceedings of the IEEE INFOCOM 2014-IEEE Conference on Computer Communications, Toronto, ON, Canada, 27 April–2 May 2014; pp. 127–135. [Google Scholar]
  79. Tuncay, G.S.; Benincasa, G.; Helmy, A. Participant recruitment and data collection framework for opportunistic sensing: A comparative analysis. In Proceedings of the 8th ACM MobiCom Workshop on Challenged Networks, Miami, FL, USA, 30 September–4 October 2013; pp. 25–30. [Google Scholar]
  80. Higuchi, T.; Yamaguchi, H.; Higashino, T. Mobile devices as an infrastructure: A survey of opportunistic sensing technology. J. Inf. Process. 2015, 23, 94–104. [Google Scholar] [CrossRef] [Green Version]
  81. Eisenman, S.B.; Lane, N.D.; Miluzzo, E.; Peterson, R.A.; Ahn, G.S.; Campbell, A.T. Metrosense project: People-centric sensing at scale. In Workshop on World-Sensor-Web (WSW 2006); Citeseer: Boulder, CO, USA, 2006. [Google Scholar]
  82. Zhao, D.; Ma, H.; Liu, L.; Zhao, J. On opportunistic coverage for urban sensing. In Proceedings of the 2013 IEEE 10th International Conference on Mobile Ad-Hoc and Sensor Systems, HangZhou, China, 14–16 October 2013; pp. 231–239. [Google Scholar]
  83. Menchaca-Mendez, R.; Luna-Nuñez, B.; Menchaca-Mendez, R.; Yee-Rendon, A.; Quintero, R.; Favela, J. Opportunistic mobile sensing in the fog. Wirel. Commun. Mob. Comput. 2018, 2018. [Google Scholar] [CrossRef]
  84. Jayaraman, P.P.; Perera, C.; Georgakopoulos, D.; Zaslavsky, A. Efficient opportunistic sensing using mobile collaborative platform mosden. In Proceedings of the 9th IEEE International Conference on Collaborative Computing: Networking, Applications and Worksharing, Austin, TX, USA, 20–23 October 2013; pp. 77–86. [Google Scholar]
  85. Cornelius, C.; Kapadia, A.; Kotz, D.; Peebles, D.; Shin, M.; Triandopoulos, N. Anonysense: Privacy-aware people-centric sensing. In Proceedings of the 6th International Conference on Mobile Systems, Applications, and Services, Breckenridge, CO, USA, 17–20 June 2008; pp. 211–224. [Google Scholar]
  86. Sun, X.; Hu, S.; Su, L.; Abdelzaher, T.F.; Hui, P.; Zheng, W.; Liu, H.; Stankovic, J.A. Participatory sensing meets opportunistic sharing: Automatic phone-to-phone communication in vehicles. IEEE Trans. Mob. Comput. 2015, 15, 2550–2563. [Google Scholar] [CrossRef]
  87. Issarny, V.; Mallet, V.; Nguyen, K.; Raverdy, P.G.; Rebhi, F.; Ventura, R. Dos and don’ts in mobile phone sensing middleware: Learning from a large-scale experiment. In Proceedings of the 17th International Middleware Conference, Trento, Italy, 12–16 December 2016; pp. 1–13. [Google Scholar]
  88. Habibzadeh, H.; Qin, Z.; Soyata, T.; Kantarci, B. Large-scale distributed dedicated- and non-dedicated smart city sensing systems. IEEE Sens. J. 2017, 17, 7649–7658. [Google Scholar] [CrossRef]
  89. Salim, F.; Haque, U. Urban computing in the wild: A survey on large scale participation and citizen engagement with ubiquitous computing, cyber physical systems, and Internet of Things. Int. J. Hum. Comput. Stud. 2015, 81, 31–48. [Google Scholar] [CrossRef]
  90. Riboni, D. Opportunistic pervasive computing: Adaptive context recognition and interfaces. CCF Trans. Pervasive Comput. Interact. 2019, 1, 125–139. [Google Scholar] [CrossRef] [Green Version]
  91. D’Alessandro, D.; Buffoli, M.; Capasso, L.; Fara, G.M.; Rebecchi, A.; Capolongo, S. Green areas and public health: Improving wellbeing and physical activity in the urban context. Epidemiol. Prev. 2015, 39, 8–13. [Google Scholar]
  92. Vaizman, Y.; Ellis, K.; Lanckriet, G. Recognizing detailed human context in the wild from smartphones and smartwatches. IEEE Pervasive Comput. 2017, 16, 62–74. [Google Scholar] [CrossRef] [Green Version]
  93. Cardenas, C.; Garcia-Macias, J.A. ProximiThings: Implementing Proxemic Interactions in the Internet of Things. Procedia Comput. Sci. 2017, 113, 49–56. [Google Scholar] [CrossRef]
  94. Forkan, A.R.M.; Khalil, I.; Tari, Z.; Foufou, S.; Bouras, A. A context-aware approach for long-term behavioural change detection and abnormality prediction in ambient assisted living. Pattern Recognit. 2015, 48, 628–641. [Google Scholar] [CrossRef]
  95. Cao, L.; Wang, Y.; Zhang, B.; Jin, Q.; Vasilakos, A.V. GCHAR: An efficient Group-based Context-Aware human activity recognition on smartphone. J. Parallel Distrib. Comput. 2018, 118, 67–80. [Google Scholar] [CrossRef]
  96. Roman, R.; Lopez, J.; Mambo, M. Mobile edge computing, fog et al.: A survey and analysis of security threats and challenges. Future Gener. Comput. Syst. 2018, 78, 680–698. [Google Scholar] [CrossRef] [Green Version]
  97. Bellavista, P.; Belli, D.; Chessa, S.; Foschini, L. A social-driven edge computing architecture for mobile crowd sensing management. IEEE Commun. Mag. 2019, 57, 68–73. [Google Scholar] [CrossRef]
  98. Zhong, S.; Zhong, H.; Huang, X.; Yang, P.; Shi, J.; Xie, L.; Wang, K. Connecting physical-world to cyber-world: Security and privacy issues in pervasive sensing. In Security and Privacy for Next-Generation Wireless Networks; Springer: Berlin, Germany, 2019; pp. 49–63. [Google Scholar]
  99. Guan, Z.; Zhang, Y.; Wu, L.; Wu, J.; Li, J.; Ma, Y.; Hu, J. APPA: An anonymous and privacy preserving data aggregation scheme for fog-enhanced IoT. J. Netw. Comput. Appl. 2019, 125, 82–92. [Google Scholar] [CrossRef]
  100. Chen, Q.; Zheng, S.; Weng, Z. Leveraging mobile nodes for preserving node privacy in mobile crowd sensing. Wirel. Commun. Mob. Comput. 2018, 2018. [Google Scholar] [CrossRef] [Green Version]
  101. Xiong, J.; Ma, R.; Chen, L.; Tian, Y.; Lin, L.; Jin, B. Achieving incentive, security, and scalable privacy protection in mobile crowdsensing services. Wirel. Commun. Mob. Comput. 2018, 2018. [Google Scholar] [CrossRef] [Green Version]
  102. Zhang, X.; Liang, L.; Luo, C.; Cheng, L. Privacy-preserving incentive mechanisms for mobile crowdsensing. IEEE Pervasive Comput. 2018, 17, 47–57. [Google Scholar] [CrossRef]
  103. Lin, J.; Yang, D.; Li, M.; Xu, J.; Xue, G. Frameworks for privacy-preserving mobile crowdsensing incentive mechanisms. IEEE Trans. Mob. Comput. 2017, 17, 1851–1864. [Google Scholar] [CrossRef]
  104. Mach, P.; Becvar, Z. Mobile edge computing: A survey on architecture and computation offloading. IEEE Commun. Surv. Tutor. 2017, 19, 1628–1656. [Google Scholar] [CrossRef] [Green Version]
  105. Wang, Y.; Cai, Z.; Tong, X.; Gao, Y.; Yin, G. Truthful incentive mechanism with location privacy-preserving for mobile crowdsourcing systems. Comput. Netw. 2018, 135, 32–43. [Google Scholar] [CrossRef]
  106. Alsheikh, M.A.; Lin, S.; Niyato, D.; Tan, H.P. Machine learning in wireless sensor networks: Algorithms, strategies, and applications. IEEE Commun. Surv. Tutor. 2014, 16, 1996–2018. [Google Scholar] [CrossRef] [Green Version]
  107. Sun, M.; Tay, W.P.; He, X. Toward information privacy for the Internet of Things: A nonparametric learning approach. IEEE Trans. Signal Process. 2018, 66, 1734–1747. [Google Scholar] [CrossRef]
  108. He, X.; Tay, W.P.; Sun, M. Privacy-aware decentralized detection using linear precoding. In Proceedings of the 2016 IEEE Sensor Array and Multichannel Signal Processing Workshop (SAM), Rio de Janeiro, Brazil, 10–13 July 2016; pp. 1–5. [Google Scholar]
  109. He, X.; Sun, M.; Tay, W.P.; Gong, Y. Multilayer nonlinear processing for information privacy in sensor networks. arXiv 2017, arXiv:1711.04459. [Google Scholar]
  110. Ignatov, A. Real-time human activity recognition from accelerometer data using Convolutional Neural Networks. Appl. Soft Comput. 2018, 62, 915–922. [Google Scholar] [CrossRef]
  111. Wang, J.; Zucker, J.D. Solving Multiple-Instance Problem: A Lazy Learning Approach. Available online: http://cogprints.org/2124/ (accessed on 11 March 2011).
  112. Lu, H.; Yang, J.; Liu, Z.; Lane, N.D.; Choudhury, T.; Campbell, A.T. The Jigsaw Continuous Sensing Engine for Mobile Phone Applications. Available online: https://0-dl-acm-org.brum.beds.ac.uk/doi/proceedings/10.1145/1869983 (accessed on 21 November 2020).
  113. Rokach, L.; Maimon, O.Z. Data Mining with Decision Trees: Theory and Applications; World Scientific: Singapore, 2008; Volume 69. [Google Scholar]
  114. Reddy, S.; Mun, M.; Burke, J.; Estrin, D.; Hansen, M.; Srivastava, M. Using mobile phones to determine transportation modes. ACM Trans. Sens. Netw. (TOSN) 2010, 6, 1–27. [Google Scholar] [CrossRef]
  115. Haykin, S. Neural Networks: A Comprehensive Foundation; Prentice-Hall, Inc.: Upper Saddle River, NJ, USA, 2007. [Google Scholar]
  116. Wettschereck, D.; Aha, D.W.; Mohri, T. A review and empirical evaluation of feature weighting methods for a class of lazy learning algorithms. Artif. Intell. Rev. 1997, 11, 273–314. [Google Scholar] [CrossRef]
  117. Richter, M.M.; Weber, R.O. Case-Based Reasoning; Springer: Berlin, Germany, 2016. [Google Scholar]
  118. Schank, R.C. Dynamic Memory: A Theory of Reminding and Learning in Computers and People; Cambridge University Press: Cambridge, UK, 1983. [Google Scholar]
  119. Kolodner, J. Case-Based Reasoning; Morgan Kaufmann: Burlington, MA, USA, 2014. [Google Scholar]
  120. Kofod-Petersen, A.; Aamodt, A. Contextualised ambient intelligence through case-based reasoning. In European Conference on Case-Based Reasoning; Springer: Berlin, Germany, 2006; pp. 211–225. [Google Scholar]
  121. Berry, M.J.; Linoff, G.S. Data Mining Techniques: For Marketing, Sales, and Customer Relationship Management; John Wiley & Sons: Hoboken, NJ, USA, 2004. [Google Scholar]
  122. Chandra, A.; Weissman, J.; Heintz, B. Decentralized edge clouds. IEEE Internet Comput. 2013, 17, 70–73. [Google Scholar] [CrossRef]
  123. Erdogan, S.Z.; Bilgin, T.T. A data mining approach for fall detection by using k-nearest neighbour algorithm on wireless sensor network data. IET Commun. 2012, 6, 3281–3287. [Google Scholar] [CrossRef]
  124. Kulkarni, R.V.; Förster, A.; Venayagamoorthy, G.K. Computational intelligence in wireless sensor networks: A survey. IEEE Commun. Surv. Tutor. 2010, 13, 68–96. [Google Scholar] [CrossRef]
  125. Radu, V.; Lane, N.D.; Bhattacharya, S.; Mascolo, C.; Marina, M.K.; Kawsar, F. Towards multimodal deep learning for activity recognition on mobile devices. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct, Heidelberg, Germany, 12–16 September 2016; pp. 185–188. [Google Scholar]
  126. Hammerla, N.Y.; Halloran, S.; Plötz, T. Deep, convolutional, and recurrent models for human activity recognition using wearables. arXiv 2016, arXiv:1604.08880. [Google Scholar]
  127. Lee, J.; Lee, J. Juice Recipe Recommendation System Using Machine Learning in MEC Environment. IEEE Consum. Electron. Mag. 2020, 9, 79–84. [Google Scholar] [CrossRef]
  128. Calabrese, F.; Ferrari, L.; Blondel, V.D. Urban sensing using mobile phone network data: A survey of research. ACM Comput. Surv. (CSUR) 2014, 47, 1–20. [Google Scholar] [CrossRef]
  129. Yang, D.; Xue, G.; Fang, X.; Tang, J. Crowdsourcing to smartphones: Incentive mechanism design for mobile phone sensing. In Proceedings of the 18th ACM Annual International Conference on Mobile Computing and Networking, Istanbul, Turkey, 22–26 August 2012; pp. 173–184. [Google Scholar]
  130. Stisen, A.; Blunck, H.; Bhattacharya, S.; Prentow, T.S.; Kjærgaard, M.B.; Dey, A.; Sonne, T.; Jensen, M.M. Smart devices are different: Assessing and mitigating mobile sensing heterogeneities for activity recognition. In Proceedings of the 13th ACM Conference on Embedded Networked Sensor Systems, Seoul, Korea, 1–4 November 2015; pp. 127–140. [Google Scholar]
  131. Jaimes, L.G.; Vergara-Laurens, I.J.; Raij, A. A survey of incentive techniques for mobile crowd sensing. IEEE Internet Things J. 2015, 2, 370–380. [Google Scholar] [CrossRef]
  132. Zebin, T.; Scully, P.J.; Peek, N.; Casson, A.J.; Ozanyan, K.B. Design and implementation of a convolutional neural network on an edge computing smartphone for human activity recognition. IEEE Access 2019, 7, 133509–133520. [Google Scholar] [CrossRef]
  133. Miao, C.; Su, L.; Jiang, W.; Li, Y.; Tian, M. A lightweight privacy-preserving truth discovery framework for mobile crowd sensing systems. In Proceedings of the IEEE INFOCOM 2017-IEEE Conference on Computer Communications, Atlanta, GA, USA, 1–4 May 2017; pp. 1–9. [Google Scholar]
  134. Shi, W.; Dustdar, S. The promise of edge computing. Computer 2016, 49, 78–81. [Google Scholar] [CrossRef]
  135. Martín-Campillo, A.; Crowcroft, J.; Yoneki, E.; Martí, R. Evaluating opportunistic networks in disaster scenarios. J. Netw. Comput. Appl. 2013, 36, 870–880. [Google Scholar] [CrossRef]
  136. Loreti, P.; Bracciale, L. Optimized neighbor discovery for opportunistic networks of energy constrained IoT devices. IEEE Trans. Mob. Comput. 2019, 19, 1387–1400. [Google Scholar] [CrossRef]
Figure 1. Modules and their interaction.
Figure 2. Building blocks of mobile sensing frameworks.
Table 1. Selected mobile sensing platforms for human behavior awareness.
| Tool | Behavior Inference Aspect | Activity | Type | Sensor | OS, Device | Sensing | Classification | Metrics | Edge/Cloud |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CenceMe, 2007 [46] | Social interaction | Movement, Conversation, Location (Indoor/Outdoor) | S,C | Accelerometer, GPS, microphone, Wi-Fi, Bluetooth | Linux, iOS, Nokia N95 (Symbian) | O | J48 Decision Tree; different classifiers for different activities | Mean, Standard deviation, Number of peaks | Edge classification; cloud computation and storage |
| SoundSense, 2009 [47] | Social interaction | Music, speech, silence | S | Microphone | Apple | O,P | Decision trees | Zero crossing rate, Low energy frame, Spectral | Edge (device) |
| EmotionSense, 2010 [48] | Emotions | Movement; location (indoor vs. outdoor); conversation | S | Accelerometer, GPS, Bluetooth, microphone | Nokia Symbian s60 | O,P | Gaussian Mixture Model (GMM) for speech; discriminant function classifier for movement; GMM classifier for emotions | Mean, average; mode levels | Edge classification; inferred data stored in the cloud |
| AndWellness, 2011 [49] | Social interaction | Movement, location, interaction | S,C | Accelerometer, GPS, Wi-Fi | Android | O,P | C4.5 decision tree for mobility | Quality of participation over time: battery charge level, mobility feedback, etc. | Cloud |
| SociableSense, 2011 [50] | Social interaction | Movement, location, interaction | S | Accelerometer, Bluetooth | Android | O,P | Gaussian Mixture Model for speech; discriminant function classifier for movement | Mean, average | Edge and cloud |
| BeWell/BeWell+, 2012 [51] | Social interaction | Movement, speaking, sleep, location | S,C | Accelerometer, microphone, GPS | Android, Nexus | O | Naive Bayes for mobility, speaking; specific model for sleep | Spectral roll-off, mean, variance | Cloud, edge classification; inferred data and other data stored in the cloud |
| InCense, 2014 [52] | Social interaction | Movement, relative location, speaking | S,C | Accelerometer, microphone | Android | O,P | Audio fingerprinting (filter) | Audio similarity | Cloud |
| StudentLife, 2014 [53] | Stress, mental health | Motion, conversation, sleep, location and co-location | S | Accelerometer, GPS, Bluetooth, microphone, light sensor | Android | O,P | Decision tree for motion; Markov model for conversation vs. silence; specific model for sleep | Mean, standard deviation | Edge classification; behavior inference stored and computed on the cloud |
| Sleep as Android, 2015 [54] | Sleep issues, stress | Sleep | C | Accelerometer, gyroscope, microphone, screen, sonar, oximeter | Android | O | Noise graph, actigraphy, hypnography | Mean, average; sleep scores | Edge classification; behavior inference stored and computed on the cloud |
| NSense, 2016 [55] | Nearness, social interaction, preferred locations | Movement and mobility preferences, location (indoor/outdoors), level of surrounding noise, proximity | S,C | Accelerometer, GPS, microphone, Wi-Fi, Bluetooth | Android | O | Decision trees | Sociability level, propinquity level, affinity of shared interests | Edge (device) |
| CrowdMeter, 2018 [56] | Train congestion levels | Walking vs. standing, location | S | Gyroscope, accelerometer, magnetometer, barometer, microphone, Wi-Fi | Android | O,P | Estimation/maximization | Fine-grained congestion levels | Edge classification and cloud classification for collective behavior |
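As a complement to Table 1, the following minimal sketch illustrates the feature-plus-decision-tree pattern that several of the listed platforms rely on (e.g., the J48 classifier of CenceMe and the decision trees of NSense): per-window statistics such as mean and standard deviation of the accelerometer magnitude feed a shallow decision tree that separates, for instance, still from walking periods. The data are synthetic stand-ins and the parameter choices are illustrative; only the feature/classifier pairing mirrors the table.

```python
# Illustrative sketch of the feature + decision-tree pattern in Table 1.
# The "accelerometer" data below are synthetic; only the pairing of
# window statistics (mean, standard deviation) with a decision tree
# reflects the surveyed platforms.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)


def synth_windows(n: int, motion_std: float, label: int):
    """Generate n synthetic accelerometer-magnitude windows with a given
    variability, and return (features, labels) with mean/std per window."""
    windows = rng.normal(loc=9.8, scale=motion_std, size=(n, 128))
    feats = np.column_stack([windows.mean(axis=1), windows.std(axis=1)])
    return feats, np.full(n, label)


# Simplifying assumption: "still" windows vary little, "walking" windows vary a lot.
x_still, y_still = synth_windows(200, 0.05, 0)
x_walk, y_walk = synth_windows(200, 1.50, 1)
X = np.vstack([x_still, x_walk])
y = np.concatenate([y_still, y_walk])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```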
Table 2. Categorization of the different approaches.
| Area | | Studied Work |
| --- | --- | --- |
| Data capture/Sensing | Opportunistic | [17,44,55,69,80,81,82,83,84,85,128] |
| | Participatory | [45,58,73,74,75,76,77,78,79,129,130] |
| | Hybrid | [2,69,71,86,87,88,89,131] |
| Learning/Contextualization | Routine Habits | [12,13,55,97] |
| | Context awareness | [29,30,43,91,92,93,94,95,96,112,128] |
| Inference/Classification | Activity recognition | [12,13,22,23,24,25,26,27,28,43,57,61,62,92,112,128,130] |
| | Placement | [14,15,16,50,96,132] |
| | Models | [8,13,27,28,106,111,112,113,114,115] |
| Behavior inference | Behavior awareness | [3,6,49,57,112] |
| | Well-being | [7,8,9,10,11,58] |
| | Human interaction | [5,18,20,48,55,97,128] |
| Security | Privacy/anonymity | [84,85,98,99,100,102,104,105,128,130,131] |
| | Incentives | [79,101,102,103,106,129,131] |
| Platforms | | [13,34,46,47,48,49,51,53,54,55,56,59,60,81,128,133] |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
