The purpose of this study is to measure MAAI and value co-creation in Japanese-style service interaction by quantifying the interaction between a customer and a salesperson in retail stores. In this paper, we propose a technique for measuring MAAI in retail-store service interaction. We confirmed that MAAI measurement enables the modeling of value co-creation in this customer service situation. As a result, the proposed MAAI measurement technique is expected to be useful for managing and computing customer service.
In this paper, we explore the potential impact that Internet of Things (IoT) technology may have on the cosplay community. We developed a costume (an IoT Skullfort) and embedded IoT technology to enhance its capabilities and user interactions. Sensing technologies are widely used in many different wearable domains, including cosplay. However, in most of these scenarios the typical interaction pattern is that the costume responds to its environment or the player's behaviour (e.g., the colour of the lights may change when the player moves their hands). In contrast, our research explores scenarios where the audience (a third party) gets to manipulate the costume's behaviour (e.g., the audience changes the colour of the Skullfort using a mobile application). We believe such audience-influenced (third-party) cosplay brings new opportunities for enhanced entertainment. However, it also creates significant challenges. We report the results gathered through a focus group conducted in collaboration with cosplay community experts.
Many applications aim to make the indoor environments where most people spend much of their time (home, office, transportation, public spaces) smarter, but they need long-term, low-cost human sensing and monitoring capabilities. Small capacitive sensors match most requirements well, such as privacy, power, cost, and unobtrusiveness, and, importantly, they do not rely on wearables or specific human interactions. However, long-range capacitive sensors often need advanced data processing to increase their performance. Experimental results from our ongoing research show that four 16 cm × 16 cm capacitive sensors deployed in a 3 m × 3 m room can taglessly track the movement of a person with a root mean square error as low as 26 cm. Our system uses a median and a low-pass filter for sensor signal conditioning before an autoregressive neural network that we trained to infer the location of the person in the room.
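A minimal sketch of the signal-conditioning stage described above, assuming hypothetical raw capacitive readings and a hypothetical sampling rate; the median and low-pass filters use standard SciPy routines, and the downstream regressor is only a stand-in for the paper's autoregressive network.

```python
# Illustrative sketch (not the authors' implementation): condition four
# capacitive-sensor channels with a median filter and a low-pass filter
# before a location regressor.
import numpy as np
from scipy.signal import medfilt, butter, filtfilt

FS = 20.0  # assumed sampling rate in Hz (hypothetical)

def condition(raw):                              # raw: (n_samples, 4) sensor matrix
    out = np.empty_like(raw, dtype=float)
    b, a = butter(4, 1.0, btype="low", fs=FS)    # 1 Hz cut-off (assumed)
    for ch in range(raw.shape[1]):
        x = medfilt(raw[:, ch], kernel_size=5)   # suppress spikes
        out[:, ch] = filtfilt(b, a, x)           # smooth residual noise
    return out

# A simple regressor standing in for the autoregressive neural network:
# it maps conditioned readings to an (x, y) position in the room.
from sklearn.neural_network import MLPRegressor
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
# model.fit(conditioned_readings, xy_positions)
```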
The development of augmented reality capabilities on smartphones has led to the emergence of a range of AR apps, including AR compasses. Some of these apps claim to be as good as professional magnetic navigation compasses and suitable for navigation use. This poster presents detailed measurements of compass deviation (error) curves and offset errors for augmented reality compass apps on 17 mobile devices. The magnitude of the deviation errors measured casts serious doubt on claims that the apps are appropriate for navigation purposes. This in turn emphasizes the need for the ubiquitous computing community to help ensure adequate awareness of the limitations of some onboard sensors, including compasses, on devices such as smartphones.
FMCW radar can detect an object's range, speed, and angle of arrival; its advantages include robustness to bad weather and good range and speed resolution. In this paper, we consider FMCW radar as a novel interaction interface for laptops. We merge sequences of an object's range, speed, and azimuth information into a single input, which we then feed to a convolutional neural network to learn spatial and temporal patterns. Our model achieved 96% accuracy on the test set and in real-time tests.
In this paper, we propose a system based on a Convolutional Long Short-Term Memory (ConvLSTM)-Attention Mechanism (AM) to preserve the spatial features and temporal characteristics of surface electromyography (sEMG) signals. We assume that this method can perform more robustly than a Convolutional Neural Network (CNN) when trained on small data. To test the performance of this method, we measured sEMG signals for eight hand gestures. Whereas the pure CNN-based model and the ConvLSTM model (without AM) had accuracies of 88.2% and 89.7%, respectively, our proposed ConvLSTM-AM method achieved 92.8% accuracy. Thus, the proposed method has a better recognition rate than the CNN, which only uses spatial features. Based on the experimental results, we believe that the proposed method can effectively improve the robustness of deep learning on small-sample sEMG signals.
Signature verification, when muscle memory is taken into account, is a biometric identification technology. To access muscle memory, we use a motion sensor consisting of an accelerometer and a gyroscope to implement a signature verification system. The motion sensor records six motion values, namely three-axis accelerations and angular velocities, while the name is signed. Fourteen signature features are extracted from the sequences of accelerations and angular velocities. A support vector machine (SVM) is then applied to verify the signatures. The proposed method was applied to verify Chinese signatures. The SVM is trained with training data from each person. The true positive rate of the proposed method reaches 95.66%. Fake signatures generated by tracing true signatures can also be recognized by the proposed method.
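As an illustration of this kind of pipeline (statistical features from six motion channels, then a per-user SVM), the following hedged sketch uses scikit-learn; the feature set shown is hypothetical and smaller than the paper's 14 features.

```python
# Illustrative feature extraction + SVM verification
# (hypothetical features; not the paper's exact 14-feature set).
import numpy as np
from sklearn.svm import SVC

def features(sig):
    # sig: (n_samples, 6) = 3-axis acceleration + 3-axis angular velocity
    return np.concatenate([sig.mean(axis=0), sig.std(axis=0),
                           [len(sig) / 100.0]])   # duration at 100 Hz (assumed)

def train_verifier(genuine, forged):
    X = np.array([features(s) for s in genuine + forged])
    y = np.array([1] * len(genuine) + [0] * len(forged))
    return SVC(kernel="rbf", gamma="scale").fit(X, y)

# verdict = train_verifier(genuine_sigs, forged_sigs).predict([features(new_sig)])
```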
Urban villages emerge with the rapid urbanization process in many developing countries and bring serious social and economic challenges to urban authorities, such as overcrowding and low living standards. A comprehensive understanding of the locations and regional boundaries of urban villages in a city is crucial for urban planning and management, especially when urban authorities need to renovate these regions. Traditional methods rely greatly on surveys and investigations by city planners, which consume substantial time and human labor. In this work, we propose a low-cost and automatic framework to accurately identify urban villages from high-resolution remote sensing satellite imagery. Specifically, we leverage the Mask Regional Convolutional Neural Network (Mask-RCNN) model for end-to-end urban village detection and segmentation. We evaluate our framework on city-wide satellite imagery of Xiamen, China. Results show that our framework successfully detects 87.18% of the urban villages in the city and accurately segments their regional boundaries with an IoU of 74.48%.
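A minimal torchvision-based sketch of how end-to-end detection and segmentation with Mask R-CNN can be set up for a two-class problem (background vs. urban village); the hyperparameters, data loader, and training loop are assumptions and do not reproduce the authors' configuration.

```python
# Illustrative Mask R-CNN setup for urban-village segmentation
# (two classes: background + urban village); training loop abbreviated.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights=None, num_classes=2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

def train_one_epoch(loader):             # loader yields (list of images, list of targets)
    model.train()
    for images, targets in loader:       # targets: boxes, labels, masks per image tile
        losses = model(images, targets)  # dict of classification/box/mask losses
        loss = sum(losses.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```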
Shadows are ubiquitous in our lives. Through the phenomenon of shadows, we explore a novel way to change how people observe their surroundings and to create natural outdoor interaction for children in their daily lives. In this paper, we present a mobile AR game that uses shadows as clues in a treasure-hunting game mechanic to connect children with their outdoor surroundings. In a field experiment at a kindergarten, children (n = 6) participated in the outdoor interaction experience with Shadower. We conducted a preliminary user study using the Smileyometer to evaluate children's reactions to our prototype. Qualitative results indicate that using shadows from the outdoor environment as AR markers has the potential to facilitate children's engagement through outdoor interaction.
Understanding the mobility patterns of large groups of people is essential in transport planning. Today's assessments rely on questionnaires or self-reported data, which are cumbersome, expensive, and prone to errors. With recent developments in mobile and ubiquitous computing, it has become feasible to automate this process and classify transportation modes using data collected by users' smartphones. Previous work has mainly considered GPS and accelerometers; however, the achieved accuracies were often insufficient. We propose a novel method which also considers the proximity patterns of WiFi and Bluetooth (BT) devices in the environment, which are expected to be quite specific to the different transportation modes. In this poster, we present the promising results of a preliminary study in Zurich.
A large literature evaluates how virtual agents impact the lives of people with dementia using perceptions of technology. We assess how a home virtual agent from "Living Well with Anne" impacts the quality of life of elderly people with dementia, rather than only their perceptions of the technology. Assessing impact on life alongside technology perception is pertinent given the importance of a person's perceived quality of daily home life and the fact that positive technology perception does not always lead to actual use. We propose an approach to evaluating assistive technology for elderly people with dementia by assessing impact on life using semi-structured interviews and the QOL-AD scale. A preliminary proof-of-concept study tests whether perceptions of a virtual agent, actual use of the agent, and participants' quality of life are related, and whether a virtual agent improves quality of life.
We often perceive other people's presence implicitly, through the traces of their interactions with physical objects. What if our urban environments could mediate these traces, allowing remotely located people to perceive each other's presence collectively? We developed Pneuxels, a network of programmable inflatables placed at remote sites that allows visitors at one site to perceive the presence of visitors at other sites, thereby promoting a sense of collective awareness and place-making. Pneuxels (Pneumatic Pixels) are pneumatically actuated pixels, connected through a web-socket platform, that change their physical state based on input from other Pneuxels, from the environment, or from users. We discuss our experiences in designing, prototyping, and testing Pneuxels, and we report results from preliminary user studies.
This article describes the methodology and technology for collecting data on the hand gestures of master craftspeople, with a case study tracing the Intangible Cultural Heritage practice of the Horse Tail Embroiderers of the Shui ethnic minority, in collaboration with GYPEC1 in Guiyang, Southwestern China. We describe unique technological design solutions that enable mobility to remote villages and freedom of movement to capture the hand gestures of master craftspeople. This work is significant in that it outlines a technique to digitally record, analyse, and archive the intricate dynamics of craft practices. We contextualise the research within a contemporary setting and describe the aims and objectives of the research for an interactive media museum interface that provides new insights into traditional practices. Finally, we propose future potential to investigate embodied knowledge and alternative pedagogical applications for innovative contemporary design.
In this work, we present the SmartLobby, an intelligent environment system integrated into the lobby of a research institute. The SmartLobby runs 24/7, i.e., it can be used at any time by anyone without any preparation. The goal of the system is to conduct research in the domain of human-machine cooperation. An important first step towards this goal is detailed human state modeling and estimation, with head-eye tracking as the key component. The SmartLobby mainly integrates state-of-the-art algorithms that enable a thorough analysis of human behavior and state. These algorithms constitute the fundamental basis for the development of higher-level system components. Here, we present our system with its various hardware and software components. We focus on head-eye tracking as a key component to continuously observe persons using the system and customize the content shown to them. The results of a multi-week experiment demonstrate the effectiveness of the system.
In our daily lives, we use places shared by multiple people, such as homes and offices. Although many methods for individual recognition have been proposed, when used in such places they impose psychological and physical burdens. Therefore, in this research, we propose a method to recognize the person who entered or left a room by using the opening and closing movement of a door. In the proposed method, an angular velocity sensor is installed on the doorknob, and the user is identified based on the personal characteristics of the door motion that naturally accompanies entering and leaving a room. We implemented the proposed system on a door with a lever-handle doorknob, targeting a general household scenario. Experimental results show F values of 0.90 for leaving motions and 0.73 for entering motions, confirming the effectiveness of our method.
In recent years, the elderly population has been increasing rapidly, and robot therapy for elderly people has been researched. However, there are few common platforms for robot therapy for elderly people and little research on robot therapy for blind elderly people. In this research, we develop a data distribution system for robot therapy for elderly people that collectively manages data on the cloud and promotes data utilization. The developed system collects, transfers, and visualizes sensor data related to elderly people interacting with robots. As a case study, we also develop a conversation-based robot therapy system for blind elderly people that induces a positive mental state. We conducted an experiment with three blind elderly participants and confirmed the feasibility of conversation-based robot therapy for blind elderly people.
In this paper, we explore the influence of street-space visual qualities on human physiology and perception of comfort at selected street corners (i.e., Fumin, Changde, Xinle, and Donghu Road, Jing'an district, Shanghai). The visual qualities of the street were characterized by variations in physical space: sky visibility, wall continuity, and cross-sectional proportion. These three variables contribute to the "enclosure index", a dimensionless number that defines the occupant's perception of the street space. We used a custom biosensor kit to collect 15 participants' average heart rates, measured for one minute at the aforementioned street corners. We compared participants' heart rates when they looked toward the intersection (open street space) and down the street (enclosed street space), and we asked them to complete a questionnaire on comfort level. The questionnaire data were then compared to the corresponding heart rates. The results show that participants' heart rates were lower when looking at the street view (more enclosed) than when looking at the corner view (less enclosed).
Advances in wearable biosensor technologies in recent years have produced an unprecedented boom, as interest in fitness and health data grows exponentially. Learning about one's metrics and sharing these data on various platforms has arguably become a daily practice. This research aims to discover potential applications of wearable technology and fitness health data by building on existing ubiquitous computing hardware and software for wearable biosensors and pushing them further: we propose a methodology and visualization of users' responses to their surroundings, and discuss how this application could become a critical feedback mechanism for individual users as well as for planners, decision makers, and designers of our built environment.
In the paper at hand, a model-based design and energy estimation approach for wireless sensor nodes in human activity recognition systems is extended. Entire wireless body area sensor networks are modeled and analyzed with respect to the real-time capabilities of different software mappings at the system level.
The high-dimensional, co-evolving data streams sensed by mobile devices typically exhibit time delays that form cause-and-effect patterns. Understanding the informative causal patterns in multivariate time series is critical but challenging for inference tasks with sensing data. Although a large body of statistical learning methods has advanced causal pattern recognition, most are still limited by unreliable causal analysis, high computational complexity, and susceptibility to environmental noise. To this end, we propose a novel directed information (DI)-aided approach to efficiently select causal patterns from a set of feature streams collected from mobile devices. The proposed approach has been evaluated on a real blood glucose sensing dataset. The results demonstrate that our approach outperforms traditional methods in cost efficiency and inference accuracy.
Wearable sensors are central to human activity recognition, and researchers are continuously inventing new technology to detect human activity properly. Earables open up interesting possibilities for monitoring personal-scale behavioral activities. In this paper, we explore the eSense earable, a multisensory stereo device, for personal-scale behavior analysis and propose an activity recognition framework built around it. The device has a microphone, a 6-axis inertial measurement unit, and dual-mode Bluetooth. We use eSense accelerometer data to detect head- and mouth-related behavioral activities. We developed a data collection framework that gathers eSense data through our smartphone application via Bluetooth. From the collected data, a few statistical features are computed to classify six personal-scale activities related to head and neck movement: speaking, eating, headshaking, and head nodding, as well as staying still and walking. We aggregate the time-series data into action labels that summarize the user's activity over a time interval, and then train a predictive model for activity recognition. We explore both machine learning and deep learning approaches for classification, using a Support Vector Machine, Random Forest, K-Nearest Neighbor, and a Convolutional Neural Network, and achieve satisfactory recognition accuracy. The findings show promising prospects for eSense-based personal-scale activity recognition in healthcare monitoring services. To our knowledge, this is the first study of this kind to report satisfactory findings.
Because of the diversity of document layouts and reading styles, detecting reading activities in real life is a challenging task compared to detection in laboratory settings. To contribute to the implementation of robust reading detection algorithms, we introduce a dataset containing 220 hours of sensor signals from JINS MEME electrooculography glasses and corresponding ground-truth activity labels. As a baseline study, we propose a statistical-feature-based reading detection approach and evaluate it on the dataset.
In this paper, we explore the wearable ambient display as a clothing accessory. Most wearable display research so far focuses on using displays as a channel to present concrete information and notifications. The aim of our research is to develop an understanding of, and identify the preferred characteristics of, wearable displays as everyday accessories and embellishments. We designed the Scarf Set, headphones in the form of a hooded scarf, as an instance for exploring the design space. The Scarf Set is used as a probe to provoke conversation around wearable displays as everyday clothing and accessory items. The initial user study shows that our participants appreciated the interactive features enabled by the wearable display, but the constant changing and movement, which are part of the interaction, were not well liked.
Predicting the popularity of outdoor billboards is crucial for many applications, such as guiding billboard placement and estimating advertising cost. Recently, some researchers have worked on leveraging traffic data alone to assess the performance of billboards, which often leads to coarse-grained performance estimation and undesirable ad placement plans. To solve this problem, we propose a data-driven system, named BoradWatch, for fine-grained billboard popularity prediction. In particular, we extract three types of critical features based on multi-source urban data: billboard profile, geographic features, and commercial features. Furthermore, we propose a hybrid model named the Tree-Enhanced Regression Model (TERM) built on the extracted features, which takes full advantage of the feature transformation of decision tree models to enhance the prediction performance of the linear model. Experimental results on real-world outdoor billboard data and multi-source urban data demonstrate the effectiveness of our work.
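The general idea of letting decision trees transform raw features before a linear model can be sketched as follows; this is a generic gradient-boosted-leaf encoding offered only as an illustration of the pattern, not the authors' TERM implementation.

```python
# Illustrative "trees transform features, linear model predicts" sketch
# (generic GBDT-leaf encoding; not the authors' exact TERM model).
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import Ridge

def fit_tree_enhanced(X_train, y_train):
    gbdt = GradientBoostingRegressor(n_estimators=100, max_depth=3)
    gbdt.fit(X_train, y_train)
    leaves = gbdt.apply(X_train).reshape(len(X_train), -1)   # leaf index per tree
    enc = OneHotEncoder(handle_unknown="ignore").fit(leaves)
    lin = Ridge().fit(enc.transform(leaves), y_train)        # linear model on leaf features
    return gbdt, enc, lin

def predict(gbdt, enc, lin, X):
    leaves = gbdt.apply(X).reshape(len(X), -1)
    return lin.predict(enc.transform(leaves))
```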
Forecasting body motion has many potential applications, such as sports and entertainment. Previous studies have mainly employed cameras and optical motion capture to measure the joint positions of a person and have predicted them about 0.5 seconds ahead using deep neural networks. However, two difficulties must be solved to deploy such a forecasting system in the real world: camera- and optics-based methods must account for environmental settings and occlusion problems, and previous studies have not considered multiple persons. In this paper, we propose a multi-person motion forecasting system that uses inertial measurement unit (IMU) motion capture to overcome both difficulties simultaneously, and we demonstrate a preliminary result.
Auditory-verbal or speech interactions with in-vehicle information systems have become increasingly popular. This opens up a whole new realm of possibilities for serving drivers with proactive speech services such as contextualized recommendations and interactive decision-making. However, prior studies have warned that such interactions may consume considerable attentional resources and thus degrade driving performance. This work aims to develop a machine learning model that can find opportune moments for the driver to engage in proactive speech interaction using vehicle and environment sensor data. Our machine learning analysis shows that opportune moments for interruption can be conservatively inferred with an accuracy of 0.74.
Increasing the number of chews per meal can help reduce obesity. Nevertheless, it is difficult for a person to keep track of their mastication rate without the help of an automatic mastication counting device. Such devices do exist, but they are big, non-portable, and unsuitable for daily use. In our previous work, we proposed an optimization model for classifying chewing, swallowing, and speaking activities using sound data collected by a bone conduction microphone in a natural eating environment. In this paper, we aim to implement a system that can automatically recognize a person's eating gestures (e.g., mastication, swallowing, and utterance) in real time. To realize this, it is necessary to add other sounds, such as background noise, to the model so that it is more robust to natural meal environments. Therefore, in this study, we propose an optimized classification method that adds these other sounds to the three eating activities.
Nowadays, many researchers analyze reading behavior with eye trackers. Various traits of reading, such as engagement or text difficulty, have been observed in laboratory settings. However, their automatic application in daily life is usually prevented by one question: when is somebody reading? We have developed a tool to classify short sequences of fixations from eye gaze data into reading and not reading. Our specific use case is the Vocabulometer, a website for learning English by reading texts. We used supervised learning on data from non-native English speakers to train decision trees for the classification. With features based on vertical eye movement, we achieved 93.1% correct classifications.
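A hedged sketch of the classification step: fixation-sequence features, here hypothetical vertical-movement statistics and counts, fed to a decision tree; the actual feature set used in the paper is not reproduced.

```python
# Illustrative decision-tree classifier on fixation-sequence features
# (feature choices are hypothetical stand-ins for the paper's features).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fixation_features(fixations):
    # fixations: (n, 3) array of (x, y, duration) for a short sequence
    dy = np.diff(fixations[:, 1])
    return [dy.mean(), dy.std(), np.abs(dy).sum(),   # vertical-movement statistics
            fixations[:, 2].mean(), len(fixations)]

# X = [fixation_features(seq) for seq in sequences]; y = reading labels (0/1)
clf = DecisionTreeClassifier(max_depth=5)
# clf.fit(X, y); clf.predict([fixation_features(new_sequence)])
```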
Smart eyewear that detects eye movements and head motions has been applied in studies on detecting one's mental state [5], engagement level in social interactions, and concentration [4]. However, such smart eyewear has seldom been used for wearable device control offering intriguing and novel interactive experiences. As an application of this type of wearable, we used J!NS MEME, which enables head motion data to control smart devices hands-free, in a pet toy system to facilitate human-animal interaction for people with limited mobility, whether due to physical disability or to the lack of sufficient space. For our study, we primarily focused on exploring interaction possibilities between pet owners who experience physical limitations due to the latter context and their pets. The study results show a promising start for opening up new interaction opportunities for our targeted audience in the context of physical limitation; however, the impact on those with physical disabilities is still arguable, as it has yet to be evaluated. Nevertheless, since smart eyewear can act as an unobtrusive and useful body extension for people with limited mobility, we believe that it can be an alternative input option applicable not just to human-animal interaction but also to wider domains, such as smart devices and home systems, for these users.
The range of comfortable interpersonal distance varies with environment, gender, culture, and individual. Thus, being aware of interpersonal distance is an important factor in modern society. In this work, we aim to call people's attention to personal distance. We developed a prototype system, TONG, consisting of a wearable device and a mobile phone application. The system detects people approaching from behind within a set distance and reminds them to keep a proper interpersonal distance. To test the effectiveness of this system, we conducted quantitative and qualitative experiments. The results showed that our system expresses interpersonal distance more clearly. Further studies will be conducted using multiple modes to suit different interpersonal distance needs.
Knit e-textile sensors can be used to detect stretch or strain, and when integrated directly into garments, they can be used to detect movement of the human body. However, before they can reliably be used in real-world applications, the garment construction technique and the effects of wear due to washing need to be considered. This paper presents a study examining how thermal bonding and washing affect piezo-resistive textile sensors. Three textile strain sensors are considered, all using Technik-tex P130B as the conductive material: i) conductive fabric only, ii) conductive fabric bonded on one side to Eurojersey fabric, and iii) conductive fabric with Eurojersey bonded on top and bottom. The sensors' performance is evaluated using a tensile tester while monitoring their electrical resistance before and after washing. The findings show that a single layer of bonding is the ideal construction and that after three wash cycles the sensor remains reliable.
Detecting stress during user experience (UX) evaluation is particularly important. Studies have shown that skin conductance (SC) is a physiological signal highly associated with stress. This paper investigates how SC Responses (SCRs) can contribute to the development of a publicly available stress detection mechanism. Specifically, SCRs located in users' self-reported stress periods were used as a training dataset for the creation of our UDSP+ predictor. A lab study was conducted to evaluate the accuracy of our approach. The SC of 24 participants was recorded using the wearable Nexus10 sensor. Moreover, participants' self-reported emotional ratings (valence-arousal) were obtained retrospectively using the Affect Grid tool. The performance of UDSP+ was tested using machine learning. For the 2-class classification problem (stress vs. non-stress), an accuracy of up to 86% was achieved. This demonstrates the potential of users' self-reported periods, together with SCRs, to act as a dataset creation mechanism.
Learning how to cook presents at least two significant challenges. First, it can be difficult for novices to find appropriate recipes based on the ingredients available in one's pantry and/or refrigerator. Second, it can be difficult to focus on cooking tasks and following a recipe at the same time. In this poster, we present the design process and implementation of a system that uses deep learning to address the first of these two problems. Our initial design work focuses on streamlining the process of entering and tracking potential ingredients on hand and determining appropriate recommendations for recipes that utilize these ingredients. Here, we present the current state of our project, explaining in particular our contributions to minimizing the overhead of tracking kitchen ingredients and converting this inventory information into effective recipe recommendations using a multimodal machine learning approach.
We present an implementation of a textile sensing sleeve with attached strain sensors to capture the shape of body parts. A shape reconstruction algorithm was developed to reconstruct a geometrical model of the deformed sleeve using the elongation measurements obtained from the sensors and an optimisation process. The current system achieves a 0.44 mm error when reconstructing the radius of a conical shape. We discuss future improvements required to form a more reliable 3D shape sensing device. After further development, this sleeve could be used for health care applications such as muscle density measurement, movement tracking, or replacing plaster or thermoplastic casting in the fabrication of orthoses and prostheses.
It is expensive to collect trajectory data on a mobile phone by continuously pinpointing its location, which limits the application of trajectory data mining (e.g., trajectory prediction). In this poster, we propose a method for trajectory prediction that collects cell-id trajectory data without explicit locations. First, it exploits the spatial correlation between cell towers using a graph embedding technique. Second, it employs the sequence-to-sequence (seq2seq) framework to train the prediction model with a novel spatial loss function. Experimental results based on real datasets demonstrate the effectiveness of the proposed method.
Augmented Reality (AR) adds additional layers of information on top of real environments. Recently, Pervasive AR has extended this concept to an AR experience that is continuous in space and that is aware of and responsive to the user's context and pose (position and orientation). This paper focuses on an exploratory user study with 27 participants meant to better understand aspects of Pervasive AR, such as how users explore, select, recognize, and manipulate virtual content in uninterrupted AR experiences, as well as their preferences. The approach used to provide this sort of engaging experience allows the creation of indoor, persistent, location-based experiences with a high level of accuracy and resilience to changes in dynamic environments. Results concerning user acceptance of uninterrupted AR experiences were encouraging. In particular, users were positively impressed by the continuous display of virtual content and were willing to use this technology more often and in different contexts.
We present an approach to identify bike types using smart-phone sensors. Knowledge of the bike type is necessary to provide ubiquitous services such as navigation services that consider bike-specific road conditions in route planning to improve driving safety. In order to differentiate between bike types, we use four machine learning classifiers. To evaluate our approach, we collected sensor readings on two routes with six rides each for two bike types. The evaluation shows very good predictive performance for all classifiers with F1 scores of up to 0.94. Overall, the convolutional neural network (CNN) classifier yields the best results for both bike types and both routes.
Smartphones have the potential to produce new habits, i.e., habitual phone usage sessions consistently associated with explicit contextual cues. Although there is evidence that habitual smartphone use is perceived as meaningless and addictive, little is known about what such habits are, how they can be detected, and how their disruptive effect can be mitigated. In this paper, we propose a data-analytic methodology based on association rule mining to automatically discover smartphone habits from smartphone usage data. By assessing the methodology with more than 130,000 smartphone sessions collected in the wild, we show evidence that smartphone use can be characterized by different types of complex habits, which are highly diversified across users and involve multiple apps. To promote discussion and present our future work, we introduce a mobile app that exploits the proposed methodology to assist users in monitoring and changing their smartphone habits through implementation intentions, i.e., "if-then" plans where the if's are contextual cues and the then's are goal-related behaviors.
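To illustrate how habitual sessions can be mined as association rules between contextual cues and app launches, a minimal sketch with the mlxtend Apriori implementation follows; the item names and thresholds are assumptions, not the authors' pipeline.

```python
# Illustrative association-rule mining over smartphone sessions
# (item names and thresholds are hypothetical).
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

sessions = [["home", "evening", "instagram", "youtube"],
            ["commute", "morning", "spotify"],
            ["home", "evening", "instagram"]]

te = TransactionEncoder()
df = pd.DataFrame(te.fit_transform(sessions), columns=te.columns_)
frequent = apriori(df, min_support=0.3, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.6)
# Rules like {home, evening} -> {instagram} correspond to candidate habits.
print(rules[["antecedents", "consequents", "support", "confidence"]])
```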
This study investigates the effect of different types of cues, in terms of modality and message form, applied in a turn-taking interaction with a virtual agent. We divided message form into implicit and explicit, and modality into visual and auditory. The results from a 2 (message form: implicit vs. explicit) x 2 (modality: auditory vs. visual) between-subjects experiment revealed that an implicit auditory cue has a significantly positive effect on perceived contingency and perceived intelligence in human-agent conversation. These results suggest that continuous turn-taking cues should be carefully crafted, informed by research in areas such as social psychology and technical usability. Finally, the implications and limitations of these findings are discussed.
Herein, we studied a multiuser human body communication system that uses multiple human bodies as a transmission channel, assuming data sharing among users. We evaluated electric field distributions around human bodies and transmission characteristics between wearable transceivers through electromagnetic field simulation using whole-body human models. Simulation results showed that the electric field propagates strongly between human bodies via physical contact; however, the contact area between two users has little influence on the electric field. In addition, the dominant factor controlling the transmission characteristics is the position of the receiver.
We propose and implement a data transmission method for capacitive touch panels using a physical interface and a microcomputer that can generate touches on the panel. The proposed device consists of a one-board microcomputer, an external power supply, and multiple touch control devices that can be individually controlled. Each touch control device is in contact with one touch area, and touches are generated individually on the touch panel. Our software on the touch panel detects touch patterns, and a snapshot of the detected touch patterns is interpreted as a bit string. By controlling the touch generation devices at high speed, long bit strings can be transmitted.
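The decoding idea, interpreting a snapshot of which touch areas are active as a bit string and concatenating consecutive snapshots into a message, can be sketched as follows; the area layout and frame format are assumptions.

```python
# Illustrative decoding of touch-panel snapshots into a bit string
# (the area layout and frame format are assumptions).
def snapshot_to_bits(active_areas, n_areas=8):
    # active_areas: set of area indices currently reporting a touch
    return [1 if i in active_areas else 0 for i in range(n_areas)]

def frames_to_message(frames, n_areas=8):
    bits = []
    for active in frames:                # one snapshot per transmission slot
        bits.extend(snapshot_to_bits(active, n_areas))
    return bits

# Example: two consecutive snapshots carry 16 bits.
print(frames_to_message([{0, 3, 7}, {1, 2}]))
```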
It is important for teachers to grasp students' engagement in order to improve the quality of lectures. However, in an e-learning environment, there is no teacher to gauge the students' engagement, which may lead to ineffective learning. The purpose of this study is to estimate students' engagement using a pressure mat and a web camera. We recorded students' postural data, that is, upper-body pressure distribution and upper-body pose, during e-learning lectures. We then extracted 38 features from the upper-body pressure distribution and 33 features from the upper-body pose for every minute, selected appropriate features, and trained classifiers to estimate whether a student was engaged in the lecture. The average accuracy was 79.3% for student-dependent estimation. This result shows that it is possible to predict a student's engagement automatically.
Nowadays, various branches of industry are based on continuous processes. Therefore, efficient and accurate data analysis has become crucial for maintaining control and optimizing the monitored trials. This paper presents a novel solution for in-situ analysis of complex numerical data. The proposed system employs mixed-reality technology to visualize data and enable collaborative analysis across remote locations. The system was implemented using Microsoft HoloLens and tested in a laboratory environment, with its proof-of-concept version applied to a real expert analysis task. After the experiment, NASA TLX and SUS questionnaires were administered; the results indicated improved performance and showed that the system enables more extensive analysis, including of spatio-temporal features.
A smartphone app to screen for neonatal jaundice has a large potential impact in reducing neonatal death and disability. Our app, neoSCB, uses a colour measurement of the sclera to make a screening decision. Although there are numerous benefits of a smartphone-based approach, smartphone colour measurement that is accurate and repeatable is a challenge. Using data from a clinical setting in Ghana, we compare sclera colour measurement using an ambient subtraction method to sclera colour measurement using a standard colour card method, and find they are comparable provided the subtracted signal-to-noise ratio (SSNR) is sufficient. Calculating a screening decision metric via the colour card method gave 100% sensitivity and 69% specificity (n=87), while applying the ambient subtraction method gave 100% sensitivity and 78% specificity (SSNR>3.5; n=50).
Temperature is an important source of information for personal health, weather forecasting, and thermal management in buildings and computing infrastructure. Temperature measurement can be achieved with low-cost hardware at reasonable accuracy. However, large-scale distributed thermal sensing is non-trivial and currently relies on dedicated hardware. In this paper, we design the first software sonic thermometer (SST) using commercial off-the-shelf acoustic-enabled devices. We utilize the on-board dual microphones of commodity mobile devices to estimate the speed of sound, which has a known relationship with temperature. SST is portable, contactless, and cost-effective, making it suitable for ubiquitous sensing. We implement SST on an Android smartphone and an acoustic sensing platform. Evaluation results show that SST can achieve a median accuracy of 0.54°C even at varying humidity levels.
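The speed-of-sound-to-temperature relationship underlying this idea can be written down directly: in dry air, c ≈ 331.3·sqrt(1 + T/273.15) m/s, so temperature follows from an estimated speed. The sketch below assumes a hypothetical microphone spacing and time-of-flight value; it does not reproduce SST's actual speed-estimation method.

```python
# Illustrative inversion of the sound-speed/temperature relation
# (microphone spacing and time of flight are hypothetical values).
def speed_of_sound(mic_spacing_m, time_of_flight_s):
    return mic_spacing_m / time_of_flight_s

def temperature_from_speed(c):
    # c = 331.3 * sqrt(1 + T/273.15)  =>  T = 273.15 * ((c/331.3)**2 - 1)
    return 273.15 * ((c / 331.3) ** 2 - 1.0)

c = speed_of_sound(0.15, 0.000433)       # 15 cm spacing, ~433 us (assumed)
print(round(temperature_from_speed(c), 1), "degC")
```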
We propose a framework targeting mental health in the workplace by tracking environmental and physiological variables. Environmental variables (noise, air quality, light intensity, etc.) are collected by sensor nodes installed for each variable, and physiological changes are measured by recording the electrocardiogram (ECG) signal, which provides information about heart activity. The ECG data are used to derive Heart Rate Variability (HRV) values, indicating the physiological state based on Polyvagal Theory. Analyzing data from these two sources (environment and physiology) together helps generate system alerts suggesting ways (a short walk, listening to music) to manage mental health throughout the day without aggravating it. This framework creates awareness of an individual's mental state, giving them the opportunity to restore it and protect it from getting worse. Because mental health affects a person's productivity, targeting mental health improves productivity in the workplace. This paper proposes the framework and suggests future deployment steps. The framework improves upon existing methods by considering the dependence of environmental factors on the individual's autonomic state.
This paper describes the various problems and challenges encountered during the development of, and remote data collection with, a cross-platform hybrid application for remote monitoring of participants, and the solutions implemented to mitigate them. These problems and challenges are universal to hybrid applications, and this paper examines them in depth in the domain of large-scale, long-duration mHealth research studies. From technical issues to issues with user compliance, this paper discusses the core problems inherent to these types of studies and technologies, and how to mitigate them.
Current techniques for tracking sleep are either obtrusive (polysomnography) or low in accuracy (wearables). In this early work, we model a sleep classification system using an unobtrusive ballistocardiography (BCG)-based heart sensor signal collected from a commercially available pressure-sensitive sensor sheet. We present DeepSleep, a hybrid deep neural network architecture comprising CNN and LSTM layers. We further employ a 2-phase training strategy to build a pre-trained model and to tackle the limited dataset size. Our model achieves classification accuracies of 74%, 82%, 77%, and 63% on the Dozee BCG, MIT-BIH ECG, Dozee ECG, and Fitbit PPG datasets, respectively. Furthermore, our model shows a positive correlation (r = 0.43) with SATED perceived sleep quality scores. We show that BCG signals are effective for long-term sleep monitoring, but currently not suitable for medical diagnostic purposes.
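A minimal Keras-style sketch of a hybrid CNN-LSTM classifier over windows of a heart signal; the layer sizes, window length, and number of output classes are assumptions and do not reproduce DeepSleep's architecture or its 2-phase training.

```python
# Illustrative CNN + LSTM classifier over signal windows
# (layer sizes and window length are assumptions, not DeepSleep's exact design).
import tensorflow as tf

def build_model(window_len=3000, n_classes=2):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window_len, 1)),
        tf.keras.layers.Conv1D(32, 7, activation="relu"),   # local waveform features
        tf.keras.layers.MaxPooling1D(4),
        tf.keras.layers.Conv1D(64, 5, activation="relu"),
        tf.keras.layers.MaxPooling1D(4),
        tf.keras.layers.LSTM(64),                           # temporal context across the window
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

model = build_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(windows, labels, epochs=...)  # pre-training phase omitted
```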
Long-term stress is a leading cause of global health loss. Despite its clear influence on productivity and on overall social and economic development, mental health continues to be neglected in work environments. To help reduce this problem, we present LightStress, a tangible affective artifact that adapts itself to the user's needs in order to improve their mental well-being. Our prototype is inspired by existing techniques and coping mechanisms collected during the two ethnographic studies we conducted (a cultural probe and an online imagery study).
Visual attention is guided by the integration of two streams: the global, that rapidly processes the scene, and the local, that processes details. For people with autism, the integration of these two streams can be disrupted by the tendency to privilege details (local processing) instead of seeing the big picture (global processing). Consequently, people with autism struggle with typical visual attention, evidenced by their verbal description of local features when asked to describe overall scenes, disrupting their social understanding. This work aims to explore an augmentation for global processing by digitally filtering visual stimuli. This work contributes initial prototypes to improve global processing and leverages an eye tracking dataset to compare results as a validation technique.
With the rapid growth in the number of Internet of Things (IoT) devices with wireless communication capabilities, and sensitive information collection capabilities, it is becoming increasingly necessary to ensure that these devices communicate securely with only authorized devices. A major requirement of this secure communication is to ensure that both the devices share a secret, which can be used for secure pairing and encrypted communication. Manually imparting this secret to these devices becomes an unnecessary overhead, especially when the device interaction is transient. In this work, we empirically investigate the possibility of using an out-of-band communication channel - vibration, generated by a custom smartRing - to share a secret with a compatible IoT device. Through a user study with 12 participants we show that in the best case we can exchange 85.9% messages successfully. Our technique demonstrates the possibility of sharing messages accurately, quickly and securely as compared to several existing techniques.
In this paper, we propose PASNIC, a lightweight privacy-aware sensor node for image capturing. PASNIC masks the region of faces and private screens (smartphone and portable computer displays) in captured visible images before sending them to remote sites. Thermal information, which can also be captured with a thermal image sensor, is used to detect the regions of faces and private screens. Our masking algorithms are so simple that even sensor nodes with low processing power can sufficiently process them.
Custom-made textile sensors encounter design and manufacturing challenges that differ from those of conventional printed circuit board-based sensors. The field of e-textiles commonly deploys such sensors on the human body, meaning that overcoming these challenges is crucial for reliable sensor performance. In this paper, we present and evaluate the design of trousers with embedded fabric sensors. Two iterative prototypes were manufactured and tested in two user studies, focusing on mechanical aspects of the design for applications in capturing body movement within social interaction. We report on failures and risks of the design, and furthermore propose solutions for a more robust, yet soft, wearable sensing system.
This paper provides a novel, best-in-domain automatic classifier of sleep stages based on wrist photoplethysmography and 3D-accelerometer data obtained from smartwatches. Sleep is classified into rapid-eye-movement (REM), Light, Deep, or Wake stages. State-of-the-art classifiers based on wearable sensors suffer from motion artifacts and apnea events. The proposed novel techniques for eliminating artifacts and apnea events result in high accuracy of sleep stage classification and robustness to apnea events. The model provides a Cohen's Kappa score of 0.65 and an accuracy of 0.80 on 254 night logs of 173 subjects with apnea-hypopnea indices broadly distributed up to 90. The approach is applicable to unobtrusive sleep monitoring with wearables.
A smart home equipped with various smart devices allows a service provider to automatically identify daily living activities from sensor/appliance data, but it is risky for dwellers to upload all the data generated in the home. In this paper, we define a threat model in which attackers can access all or part of the smart home data uploaded to an untrusted cloud server and can physically observe activities. Hence, an attacker can identify the association between the data and the home by matching the uploaded data with the observed data. The proposed method employs k-anonymity to let dwellers decide whether the data should be uploaded or not. We computed values of k from existing datasets and asked 18 participants to answer upload/no-upload for each pair of activity and time zone. The results show that our k-anonymity based method can reflect the dweller's sensitivity to privacy when uploading the data.
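The k-anonymity check behind the upload decision can be sketched as counting, for each (activity, time-zone) pair, how many homes in the dataset share that pair; the data layout and threshold below are hypothetical, not the paper's dataset.

```python
# Illustrative k computation: for each (activity, time_zone) pair, k is the
# number of homes sharing that pair (data layout is hypothetical).
import pandas as pd

records = pd.DataFrame({
    "home":      ["A", "B", "C", "A", "B"],
    "activity":  ["cooking", "cooking", "sleeping", "sleeping", "sleeping"],
    "time_zone": ["evening", "evening", "night", "night", "night"],
})

k = (records.drop_duplicates()
            .groupby(["activity", "time_zone"])["home"]
            .nunique()
            .rename("k"))
print(k)   # upload only when k meets the dweller's chosen threshold
```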
The increasing number of people with lifestyle-related diseases has become a social problem. It is well known that improving lifestyle habits is effective in preventing illness. We focus on regulating the rhythm of daily life and investigate a support technology that adjusts the user's schedule to enhance health outcomes. Its main goals are to schedule the ideal goal (target action at desired time given by the user) and advise what actions should be taken and when to perform them to achieve the goal. This paper proposes a method to find trigger actions whose scheduling is flexible and that should precede the target action. Our study started with the collection of activity logs and flex logs. The flex logs record the periods in which the user feels the actions could be performed. We report on the preliminary findings from analyzing the collected logs and discuss how the findings can be used in designing future studies.
There are limitations and poor adherence to pharmacological and non-pharmacological interventions when treating children with ADHD. Wearable technology has great potential to overcome these limitations due to its portability and capability to measure biosignals. Unfortunately, the acceptance of this technology by children with ADHD has not been explored. The purpose of this study was to understand the acceptability of wearables and to propose application designs for enhancing executive functioning in these children. Results from a qualitative study involving children with ADHD and their teachers showed that the children's concerns outweighed the advantages of using smartwatch technology, and suggested that an application should mimic the current self-regulation practices taught at their school and be able to monitor self-reflection of mood and behavior. These findings suggest that wearables can potentially be used as interventions supporting the executive function of children with ADHD.
In this research, we explored the design of navigation technology for wildland firefighters. We worked within a set of empirically informed design constraints to create prototypes of a wearable system that provides peripheral navigation cues via visual and haptic feedback. We used physical and interactive prototypes of this system as technology probes to provoke discussions with wildland firefighters about their navigation and location technology needs. Our pilot study results indicate that our prototypes helped to uncover ideas for future technical work in the domain of wildland firefighting, as well as on mobile and wearable navigation systems, more broadly.
Although encounters with spiders are commonplace for people almost everywhere in the world, fear of spiders is one of the most frequently diagnosed phobias, and immediate contact is widely perceived as unfavourable. We present a system for indirect, quasi-tangible interaction with spiders, to be applied in an exhibition context: a robotic arm, steered through gestural input, which mimics the user's actions and enables indirect physical interaction with the spider. The proof-of-concept prototype was tested with N=15 users in a museum-like environment. The concept of adding an interactive modality to the exhibition was regarded as an asset in terms of amusement and educational aid, whilst also being a promising endeavour towards phobia-overcoming exercises.
Accurate and efficient estimation of caloric expenditure during daily activities is desirable for tracking personal activity and health. Kinetic energy harvesting (KEH) has created an opportunity for wearable devices with limited battery power to realize long-term human health monitoring. We propose using KEH device data, instead of accelerometer data, as a new source for calorie estimation. In this paper, we utilize the output voltage of a kinetic energy harvester to classify activity intensity types and then develop activity-specific regression models, combined with anthropometric characteristics, for caloric expenditure estimation using random forests. To validate our approach, we built a KEH hardware platform and collected a dataset of seven different activities in free-living conditions from ten participants. Experimental results validate that the KEH device can act as a substitute for an accelerometer in estimating calorie expenditure.
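The two-stage estimation described above, classifying activity intensity from harvester-voltage features and then applying an activity-specific regression with anthropometric inputs, can be sketched as below; the feature choices and model settings are hypothetical.

```python
# Illustrative two-stage calorie estimation from KEH voltage features
# (feature choices and model settings are hypothetical).
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def voltage_features(window):            # window: 1-D array of harvester voltage
    return [window.mean(), window.std(), np.abs(np.diff(window)).mean()]

def fit(Xv, intensity, anthro, calories):
    clf = RandomForestClassifier(n_estimators=100).fit(Xv, intensity)
    regs = {}
    for level in set(intensity):          # one regressor per intensity type
        idx = [i for i, y in enumerate(intensity) if y == level]
        Xr = np.hstack([np.array(Xv)[idx], np.array(anthro)[idx]])
        regs[level] = RandomForestRegressor(n_estimators=100).fit(
            Xr, np.array(calories)[idx])
    return clf, regs

def estimate(clf, regs, xv, a):
    level = clf.predict([xv])[0]
    return regs[level].predict([np.concatenate([xv, a])])[0]
```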
We propose a method to estimate thermal comfort by combining a thermal camera and a wristband sensor. The wristband sensor continuously monitors physiological data (i.e., heart rate, skin temperature, and electrodermal activity), whereas the thermal camera captures the temperature distribution of facial parts only when the user's face is in the camera frame. When the thermal camera cannot capture the user's face, our method estimates thermal comfort based on the current features obtained from the wristband sensor and the past estimation results from the thermal camera. To investigate the effectiveness of the method in reducing the energy consumed by air conditioning, we evaluated it on data collected from 15 subjects over 128 days. The results show that our method achieves an F-measure of 0.85 for estimating thermal comfort when allowing shifts to neighboring classes.
This paper outlines a pilot study aimed at identifying salient themes in the awareness of health and well-being and the effects that this can have on one's home life and work behavior. We also sought to identify the factors that lead to lapses in one's awareness of health and well-being issues. Interviews were conducted with individuals who are known to work for extended durations (taxi / Uber drivers). A number of proven methods were used for the collection of pertinent data (i.e., sensing technologies, self-reported information, and interpersonal observations). Finally, design guidelines are presented for the development of technology-based solutions aimed at raising awareness of health and well-being among workers in high-stress occupations.
Existing activity recognition technologies empower the smart home for perceiving the ambient environment. Efficient activity prediction, based on activity recognition, can enable the smart home to provide timely personalized services. However, predicting the next activity and its precise occurrence period are challenging due to the complexity of modelling human behaviour. In this work, we aim to understand whether the temporal information integrated into the deep learning networks can improve the prediction accuracy in both predicting the next activity and its timing. We develop two Long Short-Term Memory (LSTM) models, both with deep contextualized word representation on sensor labels, one with temporal information and one without. Our results highlight that if temporal information is used appropriately, the model with timestamp can outperform the model without this information. While modelling human activity prediction, comprehending the contextual-temporal dynamics is highly important.
The efficacy of behavioural activation in the treatment of major depressive disorders has been established in a number of studies over the last four decades. Although a number of recent studies show that behavioural activation administered via a smartphone application has the potential to be effective in the treatment of depression, these opportunities are tempered by the problem that these interventions have high dropout rates. However, recent research finds that personalisation of content can positively influence engagement. We present MindTick, a smartphone-based behavioural activation application using a recommender system to deliver personalized content to encourage users to engage in behavioural activation activities.
Wearable gestural interfaces for effect control during concerts enable artists to experiment with new ways to express their work. These interfaces can be seen as novel musical instruments; however, compared to traditional musical instruments, they often lack fine-grained haptic feedback. The artist effectively plays these instruments "blind". In this paper, we present an easily built pressure-feedback device to close the loop between gesture interaction and haptic feedback. This can give the artist an idea of the currently enabled musical effect and indicate its strength, which can serve as a warning signal when the effect reaches an otherwise invisible limit with respect to the starting gesture. To evaluate the system, study participants were asked to distinguish different levels of pressure, giving insights into the accuracy of such a haptic interface.
Inertial sensors have been used for tracking applications at every scale for decades. Commercial IMUs suffer from a spectrum of errors, such as axis misalignment and bias, leading to drift in the output. Approximation models, which are highly sensor-dependent and validated under constrained testing, are usually implemented to address these issues, and they sometimes rely on complementary infrastructure, which limits their use in the wild. In this demonstration we introduce 'AiRite', an effective solution for 3-D tracking of a smart device using only the onboard IMU. Trajectories of basic shapes and cursive words written in the air using smart devices are visualized in 3-D, which are the first observations of their kind in the field. We demonstrate the device-independence and ubiquity of our tracking method by writing in the air with different smartphones and smartwatches.
Nowadays, 4G devices are pervasive, and most homes and offices in modern cities are covered by LTE signals. While it is very attractive to leverage ubiquitous LTE signals and use hand gestures to control home appliances remotely, no such contactless gesture interaction system has been reported yet. In this work, we present an LTE-based contactless gesture interaction system that recognizes various hand gestures around a 4G terminal such as a mobile phone, which can be used to control the switch, channel, and volume of a TV set remotely without holding any device. The results show that the proposed system can recognize different hand gestures accurately using LTE signals without training, and achieves remote TV control in real time in different settings.
In this demo, we present a smart eyewear toolchain consisting of smart glasses prototypes and a software platform for cognitive and social interaction assessments in the wild, with several application cases and a demonstration of real-time activity recognition. The platform is designed to work with Jins MEME, smart EOG-enabled glasses. The user software is capable of data logging, posture tracking, and recognition of several activities, such as talking, reading, and blinking. During the demonstration we will walk through several applications and studies that the platform has been used for.
We have designed and implemented a real-time hybrid activity recognition system which combines supervised learning on inertial sensor data from mobile devices and context-aware reasoning. We demonstrate how the context surrounding the user, combined with common knowledge about the relationship between this context and human activities, can significantly increase the ability to discriminate among activities when machine learning over inertial sensors has clear difficulties.
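As a minimal illustration of the kind of fusion described above (a hedged sketch, not the authors' reasoning engine), one can treat the inertial classifier's softmax output as a likelihood and re-weight it with a context-derived prior over activities; the activities and probabilities below are purely illustrative.

```python
# Hedged sketch of fusing an inertial classifier's posterior with a
# context-derived prior over activities; all numbers are illustrative.
import numpy as np

activities = ["walking", "cycling", "sitting"]

# Posterior from the inertial-sensor classifier (e.g. a softmax output).
p_sensor = np.array([0.40, 0.45, 0.15])

# Prior derived from context/common-sense knowledge: the user is indoors,
# so cycling is considered very unlikely.
p_context = np.array([0.45, 0.05, 0.50])

fused = p_sensor * p_context
fused /= fused.sum()                 # renormalize to a probability distribution

print(dict(zip(activities, fused.round(3))))
# Cycling is suppressed by context even though the sensor model slightly favored it.
```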
We developed a highly-visible head-mounted novelty wearable to be used in social settings. We tested our Interactive Social Novelty Wearable (iSNoW) prototype in a partner-based user study to see if perceptions of the experience would change if the information displayed on the wearable was contextually relevant. Thematic analyses revealed important considerations for the design of future devices, regarding distraction and pressure to understand the rules of the game. Participants wearing contextually relevant information were more likely to recommend the device to their friends. We highlight future opportunities for exploration in this relatively untouched space.
We present 'True Colors': a social wearable prototype designed to augment co-located social interaction of players in a LARP (live action role play). We designed it to enable the emergence of rich social dynamics between wearers and non-wearers. True Colors is Y-shaped, worn around the upper body, and has front and back interfaces to distinguish between actions taken by the wearer (front), and actions taken by others (back). To design True Colors, we followed a Research-through-Design approach, used experiential qualities and social affordances to guide our process, and co-designed with LARP designers. 13 True Colors wearables were deployed in a 3-day LARP event, attended by 109 people. From all the functionalities and interactivity the device afforded, players gravitated towards ones that emphasized the social value of experiencing vulnerability as a prompt to get together. This project was recently presented in CHI '19 [1] and may offer useful insights to others in the Ubi-Comp/ISWC community who develop technology to support co-located social experience.
As a biometric characteristic, the handwritten signature has been widely used in banking, government, and education. Verifying handwritten signatures manually incurs substantial human cost, and its high probability of error can threaten property safety and even social stability. Therefore, an automatic verification system is needed. This paper proposes ASSV, a device-free on-line handwritten signature verification system that provides a paper-based signature verification service. To our knowledge, ASSV is the first system that uses changes in acoustic signals to realize signature verification. ASSV differs from previous on-line signature verification work in two aspects: 1. It requires neither a special sensor-instrumented pen nor a tablet; 2. People do not need to wear a device such as a smartwatch on the dominant hand for hand tracking. Differing from previous acoustic-based sensing systems, ASSV uses a novel chord-based method to estimate phase-related changes caused by tiny actions. Based on this estimation, frequency-domain features are extracted by a discrete cosine transform (DCT). Moreover, a deep convolutional neural network (CNN) model fed with distance matrices is designed to verify signatures. Extensive experiments show that ASSV is a robust, efficient, and secure system, achieving an AUC of 98.7% and an EER of 5.5% with low latency.
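To illustrate the frequency-domain feature step only (this is not ASSV's full pipeline; the signal and the number of coefficients are assumptions), the sketch below applies a discrete cosine transform to an estimated phase-change sequence.

```python
# Hedged sketch: turning an estimated phase-change sequence into
# frequency-domain features with a discrete cosine transform (DCT).
# This illustrates the general idea, not ASSV's exact pipeline.
import numpy as np
from scipy.fft import dct

def dct_features(phase_sequence: np.ndarray, num_coeffs: int = 32) -> np.ndarray:
    """Return the first `num_coeffs` DCT-II coefficients of a 1-D phase signal."""
    coeffs = dct(phase_sequence, type=2, norm="ortho")
    return coeffs[:num_coeffs]

# Example: a synthetic phase signal standing in for the estimated phase changes.
t = np.linspace(0, 1, 512)
phase = 0.3 * np.sin(2 * np.pi * 5 * t) + 0.05 * np.random.randn(t.size)
features = dct_features(phase)
print(features.shape)  # (32,)
```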
The Ubiquitous Cognitive Assessment Tool (UbiCAT) is a wearable technology designed for 'in-the-wild' cognitive assessment. UbiCAT includes three smartwatch-based applications adapted from the Stroop color-word, n-back, and two-choice reaction time tests, respectively. UbiCAT aims to measure selective attention and processing speed, working memory, and inhibition control. UbiCAT can be used for real-life cognitive assessment and for experiments on human cognitive performance. Within the field of ubiquitous computing, it contributes to cognition-aware systems.
In this work we investigate the coordination of human-machine interactions from a bird's-eye view using a single panoramic color camera. Our approach replaces conventional physical hardware sensors, such as light barriers and switches, by location-aware virtual regions. We employ recent methods from the field of pose estimation to detect human and robot joint configurations. By fusing 2D human and robot pose information with prior scene knowledge, we can lift these perceptions to a 3D metric space. In this way, our system can initiate environmental reactions induced by geometric events among humans, robots and virtual regions. We demonstrate the diverse application possibilities and robustness of our system in three use cases.
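A simplified version of such a 2D-to-3D lifting step, assuming the points of interest lie on the ground plane and that four image-to-floor correspondences are known, could look like the following sketch; the calibration values are invented for illustration.

```python
# Hedged sketch: mapping 2-D image points onto a metric ground plane via a
# homography, assuming the points of interest (e.g. feet) lie on the floor.
# The calibration correspondences below are made up for illustration.
import numpy as np
import cv2

# Four image points (pixels) and their known floor positions (metres).
img_pts = np.float32([[102, 640], [1180, 655], [1105, 210], [160, 225]])
floor_pts = np.float32([[0.0, 0.0], [6.0, 0.0], [6.0, 4.0], [0.0, 4.0]])

H = cv2.getPerspectiveTransform(img_pts, floor_pts)

# A detected ankle joint in pixel coordinates, lifted into floor coordinates.
ankle_px = np.float32([[[620, 500]]])
ankle_floor = cv2.perspectiveTransform(ankle_px, H)
print(ankle_floor)  # approximate (x, y) position on the floor in metres
```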
This demonstration shows "ARIA", an interactive urban flood damage prediction system that simulates urban flooding, affected people, and network failures in an integrated manner. For disaster mitigation, it is important to identify the affected area and issue an evacuation advisory. ARIA predicts flood damage, such as the number of affected people or the locations of flooded roads, and determines the appropriate timing of an evacuation advisory by assimilating actual measurements observed during a flood, such as precipitation, river water level, and person-flow data. The proposed flood damage prediction system federates conventional proprietary simulators for flood, evacuation, and network damage analysis using the simulation and emulation federation platform "Smithsonian". ARIA aims to accurately simulate actual disaster phenomena in order to consider how flood damage affects evacuation behaviour through the mutual impact of road conditions and network damage caused by floods.
Recent advances in positional tracking systems for virtual and augmented reality have opened up the possibilities for ubiquitous motion capture technology in the consumer market. However, for many applications such as in performance art, athletics, neuroscience, and medicine, these systems remain too bulky, expensive, and limited to tracking a few objects at a time. In this work, we present a small wireless device that takes advantage of existing HTC Vive lighthouse tracking technology to provide affordable, scalable, and highly accurate positional tracking capabilities. This open hardware and open software project contains several elements, and the latest contributions described in this paper include: (1) a characterization of the optical distortions of the lighthouses, (2) a new cross-platform WebBLE interface, and (3) a real-time in-browser visualization. We also introduce new possibilities with an adaptive calibration to estimate transformation matrices of lighthouses, and an FPGA approach to improve precision and adaptability. Finally, we show how the new developments reduce setup costs and increase the accessibility of our tracking technology.
Conventional car navigation systems that require manual input of a user's destination are not frequently used for familiar routes such as daily commutes and regular shopping. In this paper, we propose and realize proactive car navigation, which integrates a daily destination prediction system into a car navigation system to eliminate user input by automatically displaying information related to predicted destinations. Through a questionnaire-based evaluation, we found that proactive car navigation transforms the driving experience and offers users various advantages, even for regularly frequented destinations.
We present a demonstration, based on a MobileHCI 2019 paper, that uses eye gaze to selectively render or obscure text. Obscuring text is used to "simulate" a specific kind of dyslexia, while selectively rendering text gives users a more private reading experience in public spaces.
In this paper, we demonstrate LiftSmart, a novel smart wearable to detect, track, and analyse weight training activities. LiftSmart is the first wearable for weight training based on unsupervised machine learning techniques, eliminating the reliance on labelled data, which is expensive to collect, computationally intensive to process, and requires the tuning of multiple key parameters.
We present the design of an always-on system connecting long-distance couples at bedtime, a time and space partners normally share together. The system offers a novel, real-time shared inking space for creative interactivity and a slow photo stream that balances privacy and remote presence. It adapts to the local light level in order to stay in the background, but can also be configured to reflect the remote light level to provide an additional communication channel.
New user interfaces afford innovations in the user experience of computers; the interactive textile approach posed by Jacquard is another step in this evolution [1, 5]. Although Jacquard demonstrates four gestures via the Levi's Jacquard Jacket [4], we experiment with potential applications to explore new forms of user input. We have built a toolkit that gives developers full access to the Jacquard technology and custom gesture creation [2, 3]. We subsequently used this toolkit to develop a custom Force Touch gesture, which is now integrated into the aforementioned iOS toolkit so that developers may use it to implement original jacket functionality in their own applications. The playground application helps developers understand how to effectively utilize the library. In parallel, we evaluated the Jacquard hardware by conducting a detailed teardown of the jacket cuff and tag. We mapped how such sensing technology can be removed from the jacket and applied to other textiles with conductive threads. Moreover, our findings motivate future testing on other garments and accessories, as we isolated the Jacquard swatch from the cuff. To evaluate the intuitiveness of the jacket gestures and test the learnability of the new custom gestures, we designed a user assessment application with the toolkit and conducted a user study with 25 subjects. Results showed that Brush In and Brush Out had the easiest learning curve and were perceived as most intuitive. Brush In (48%) stood out as the most preferred gesture, followed by Double Tap (20%). Although the custom gesture, Force Touch, had the lowest perceived intuitiveness, it serves as a first key step towards a future of rapidly prototyped and consumer-ready custom gestures.
Researchers, makers, and hobbyists rely on plastics for creating their DIY electronics. Enclosures, battery holders, buttons, and wires are used in most prototypes in a temporary way, generating waste. This research aims to extend the boundaries of biomaterial applications into electronics. Mycelium, the fast-growing vegetative part of a fungus, adapts to different shapes when grown in a mold and decomposes as organic waste after 90 days in a natural environment. In order to create more sustainable prototypes, we use mycelium composites with common digital fabrication techniques to replace plastic in electronics. We present our method for growing mycelium, our design process for using digital fabrication techniques with mycelium, applications for embedding electronics in mycelium boards, making enclosures for electronics, and using mycelium within electronics. This paper contributes to the merging of biomaterials and electronics, an approach that is still under exploration.
Human activity data sets are fundamental for intelligent activity recognition in context-aware computing and intelligent video analysis. Surveillance videos include rich human activity data that are more realistic than data collected in a controlled environment. However, there are several challenges in annotating large data sets: 1) they are inappropriate for crowd-sourcing because of public privacy, and 2) manually selecting people's activities from busy scenes is tedious.
We present Skeletonotator, a web-based annotation tool that creates human activity data sets using anonymous skeletonized poses. The tool generates 2D skeletons from surveillance videos using computer vision techniques, and visualizes and plays back the skeletonized poses. Skeletons are tracked between frames, and a unique id is automatically assigned to each skeleton. For the annotation process, users can add annotations by selecting the target skeleton and applying activity labels to a particular time period, while only watching skeletonized poses. The tool outputs human activity data sets which include the type of activity, relevant skeletons, and timestamps. We plan to open source Skeletonotator together with our data sets for future researchers.
Intrusion detection plays an important role in many applications, such as asset protection and elder care. Since we cannot place any requirements on the intruder, a device-free passive approach to intrusion detection is much more promising and practical. To achieve robust passive intrusion detection, various techniques have been proposed, including video-based, infrared-based, and sensor-based approaches, most of which require dedicated device installation. In this work, we present RR-Alarm, a real-time and robust device-free intrusion detection system. By reusing existing Wi-Fi signals, RR-Alarm is able to detect human intrusion in real time while requiring no additional hardware installation. By utilizing the Doppler effects incurred by human motion on multiple Wi-Fi devices, RR-Alarm not only accurately detects intrusions without any extra human effort but also avoids a large number of false alarms caused by human motion outside the house. A long-term trial in a nursing home verifies the effectiveness of our Wi-Fi-based RR-Alarm system.
Physical models are a key component of the architectural process and play an important role in understanding material and space relationships. We present Tangible Urban Models, an approach that leverages conductive material for 3D printed architectural prototypes. This enables non-interactive objects, such as buildings, to become tangible without the need to attach additional components. We combine this capability with an augmented reality (AR) app and explore the use of gestures for interacting with digital and physical content. The multi-material 3D printed buildings consist of thin layers of white plastic filament and a conductive wireframe to enable touch gestures. In this way, we enable two-way interaction either with the physical model or with the mobile AR interface.
Intention recognition is the process of using behavioural cues to infer an agent's goals or future behaviour. In face-to-face communication, our gaze implicitly signals our point of interest within the environment and therefore inadvertently leaks our unspoken intentions to others. In our published body of work, we leverage this implicit function of gaze together with the tendency of humans to plan before executing their actions, resulting in an artificial agent that can project humans' intentions while human players engage in a competitive game. In this demo, we created a path-planning game to demonstrate the capability of our artificial agent in a playful manner. The agent projects players' future plans by combining the implicit gaze of human players with an AI planning-based model. The demo aims to illustrate that gaze is intentional and that socially interactive agents can harness gaze implicitly, as a natural input, to assist humans collaboratively with knowledge of their intentions.
For patients with speech and motion impairments, there is an indispensable need to facilitate their communication with other people, using approaches such as eyeball tracking. However, these systems are usually complex and expensive. In this demo, we propose WiMorse, a WiFi-based contactless text input system. The system allows these patients to communicate with other people by using WiFi signals to track single-finger movements and encoding them as Morse code to input text. However, we note that a small change in the target's location would lead to a significant change in the received WiFi signal pattern, making it impossible to recognize the finger gestures. To tackle this problem, we propose a signal transformation mechanism to obtain a consistent and stable signal pattern at various locations. By deploying only a pair of COTS WiFi devices, WiMorse can achieve real-time recognition of finger-generated Morse code with high accuracy, and is robust against input position, environment change, and user diversity.
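For readers unfamiliar with the encoding step, the sketch below shows how a stream of recognized dot/dash/pause gestures could be decoded into text using the standard International Morse Code table; it illustrates only the encoding idea, not WiMorse's signal processing.

```python
# Illustrative sketch: decoding recognized finger gestures (dot/dash/pause)
# into text using the standard International Morse Code table. This shows the
# encoding idea only, not WiMorse's signal-processing pipeline.
MORSE_TO_CHAR = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E", "..-.": "F",
    "--.": "G", "....": "H", "..": "I", ".---": "J", "-.-": "K", ".-..": "L",
    "--": "M", "-.": "N", "---": "O", ".--.": "P", "--.-": "Q", ".-.": "R",
    "...": "S", "-": "T", "..-": "U", "...-": "V", ".--": "W", "-..-": "X",
    "-.--": "Y", "--..": "Z",
}

def decode(gestures: list) -> str:
    """Decode a stream of '.', '-' and ' ' (letter boundary) symbols."""
    text, symbol = [], ""
    for g in gestures + [" "]:          # trailing space flushes the last letter
        if g == " ":
            if symbol:
                text.append(MORSE_TO_CHAR.get(symbol, "?"))
                symbol = ""
        else:
            symbol += g
    return "".join(text)

print(decode(list("... --- ...")))      # prints "SOS"
```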
The grand vision of pervasive computing and the Internet of Things (IoT) involves providing people with a range of seamless functionality, be it through automation, information delivery, etc. However, much of the IoT is opaque; it is often difficult for users to uncover and understand how and why particular functionality occurs, the sources of information, the entities involved, and so forth. We argue that automation scripts, as well as logs and provenance records could be leveraged to assist in illuminating the workings of connected and automated environments.
This paper explores the use of voice assistants (an accessible, intuitive and increasingly common interface) as a means for allowing users to interrogate what is happening in the IoT systems that surround them. In presenting an exploratory Alexa 'Skill', we discuss several considerations for the implementation of such a system. This work represents a starting point for considering how such assistants could help people better understand---and indeed, evaluate, challenge, and accept---technology that is increasingly pervading our world.
This paper proposes a device that automatically generates various touch interactions, including multi-touch, to realize high-speed, continuous, and hands-free touch interaction. The proposed device does not require any additional software to be installed on the touch panel, and it can be expected to improve efficiency and automate work that uses the touch panel. The proposed device consists of an electrode sheet printed with conductive ink and a voltage control circuit, and it generates touch interactions by changing the capacitance of the touch panel temporally and spatially.
Electronics workbenches often become messy as various electronic parts and tools are used repeatedly. To solve this problem, we propose "PartsSweeper", a system that cleans up both parts and tools on electronics workbenches. PartsSweeper mainly consists of a customized XY plotter and GUI software on a tablet. We attached several magnets, servomotors, and lift mechanisms to the head of the XY plotter. As most electronic parts and tools are ferromagnetic, the system can move them along with the head while the magnet is lifted. Moreover, the system can selectively move parts and tools using two magnets of different strength; while the strong magnet can move both tools and parts, the weak magnet moves only small parts.
This paper presents a novel virtual reality demonstration program that provides people with an immersive urban experience, i.e., it helps people understand characteristics of a city's atmosphere from big data analysis of GPS location logs and search query logs. In contrast to other demonstration systems that show the characteristics of areas of interest using tag clouds, or VR systems using 3D computer graphics, our system synthesizes both functionalities by showing 3D point clouds in which each particle represents search queries made by people staying in the area of interest. This synthesis lets people feel the atmosphere of the city in a way traditional VR systems could not offer. To make the demonstration system effective, a new feature extraction process for search query logs is proposed, focusing on the users who visit the area of interest. In the demonstration at the conference, visitors can enjoy an immersive urban experience with a VR headset. Furthermore, the paper also presents an empirical evaluation of our new feature representation derived from search query logs.
The IoT era demands ad-hoc wireless device association for a convenient and spontaneous cross-device interaction experience. Currently, users associate two devices by selecting the advertising device (e.g. a mouse) from a list displayed by the scanning device (e.g. a laptop). However, the association can be misplaced since it is often more convenient to initiate association from the advertising device (e.g. when switching a mouse between two computers). Tap2Pair allows users to simply tap on an advertising device to associate it with the target scanning device. It does not require any modification of existing wireless devices and is compatible with most wireless protocols. Tapping near the advertising device's antenna changes the strength of the signal received by scanning devices. Scanning devices can then calculate signal features and initiate association if certain criteria are met. We demonstrate two association strategies for different scenarios: 1. Hold and tap an advertising device near the target scanning device; 2. Tap at the corresponding frequency of the target scanning device.
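One plausible way for a scanning device to test whether its received-signal-strength stream contains tap-induced fluctuations near an expected frequency is sketched below; the sampling rate, band, and thresholds are assumptions, not Tap2Pair's actual parameters.

```python
# Hedged sketch: checking whether an RSSI stream sampled by the scanning
# device contains periodic tap-induced fluctuations near a target frequency.
# Sampling rate, band, threshold and frequencies are illustrative assumptions.
import numpy as np

def tap_frequency_detected(rssi: np.ndarray, fs: float,
                           target_hz: float, tol_hz: float = 0.3,
                           snr_threshold: float = 3.0) -> bool:
    """Return True if the dominant low-frequency component of the RSSI
    signal lies within tol_hz of the expected tap frequency."""
    x = rssi - rssi.mean()
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    band = (freqs > 0.5) & (freqs < 10.0)        # plausible hand-tap band
    peak = freqs[band][np.argmax(spectrum[band])]
    snr = spectrum[band].max() / (np.median(spectrum[band]) + 1e-9)
    return abs(peak - target_hz) < tol_hz and snr > snr_threshold

# Synthetic RSSI: taps at ~2 Hz on top of noise, sampled at 50 Hz.
fs = 50.0
t = np.arange(0, 5, 1 / fs)
rssi = -60 + 2.0 * np.sin(2 * np.pi * 2.0 * t) + 0.5 * np.random.randn(t.size)
print(tap_frequency_detected(rssi, fs, target_hz=2.0))  # likely True
```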
Gait rehabilitation is a common method of postoperative recovery after a user sustains an injury or disability. However, traditional gait rehabilitation is usually performed under the supervision of rehabilitation specialists, meaning patients cannot receive adequate care continuously. In this paper, we propose IMU-Kinect, a novel system to remotely and continuously monitor gait rehabilitation via a wearable kit. The system consists of a wearable hardware platform and a user-friendly software application. The hardware platform is composed of four Inertial Measurement Units (IMUs), which are attached to the shanks and thighs. The software application estimates the rotation and displacement of these sensors, then reconstructs the gait movements and calculates gait parameters according to a geometric model of the human lower limbs. With the IMU-Kinect system, users undergoing gait rehabilitation just need to walk normally while wearing the IMU-Kinect kit, and the rehabilitation specialists can then analyse the status of postoperative recovery by remotely viewing animations of the users' gait movements and charts of the general gait parameters. Extensive experiments in real environments show that our system can efficiently track gait movements with 9% rotation and displacement error.
The past few years have witnessed the great potential of exploiting channel state information (CSI) retrieved from COTS WiFi devices for respiration monitoring. However, existing approaches only work when the target is close to the WiFi transceivers, and performance degrades significantly when the target is far away. This sensing range constraint greatly limits the application of the proposed approaches in real life. Different from existing approaches that apply the raw CSI readings of an individual antenna for sensing, we employ the ratio of CSI readings from two antennas, whose noise is mostly canceled out by the division operation, to significantly increase the sensing range. In this demo, we will demonstrate FarSense, a CSI-ratio-model-based house-level real-time respiration monitoring system using COTS WiFi devices.
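The core CSI-ratio idea can be illustrated with synthetic data as in the sketch below (a hedged illustration, not the FarSense implementation): dividing the complex CSI of one antenna by that of a second antenna on the same receiver cancels phase noise common to both chains, leaving the respiration-induced variation.

```python
# Hedged sketch of the CSI-ratio idea: dividing the complex CSI of one antenna
# by that of a second antenna on the same receiver cancels noise common to
# both RF chains, leaving a cleaner respiration-induced phase variation.
# The synthetic signal below is purely illustrative.
import numpy as np

fs = 20.0                                            # assumed CSI sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)
breath = 0.05 * np.sin(2 * np.pi * 0.25 * t)         # ~15 breaths per minute

common_phase_noise = np.exp(1j * np.cumsum(0.2 * np.random.randn(t.size)))
csi_ant1 = (1.0 + breath) * np.exp(1j * breath) * common_phase_noise
csi_ant2 = 0.8 * common_phase_noise                  # static reference path

csi_ratio = csi_ant1 / csi_ant2                      # common noise cancels in the ratio
respiration = np.unwrap(np.angle(csi_ratio))

# The dominant frequency of `respiration` now reflects the breathing rate.
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
spectrum = np.abs(np.fft.rfft(respiration - respiration.mean()))
print(f"Estimated breathing rate: {freqs[spectrum.argmax()] * 60:.1f} per minute")
```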
As we work, play, shop, and communicate in digital interfaces, we continuously generate traces of information. To turn such noisy sources of personal data into actual insight, my research introduces Personal Bits, a service that enables personalized task support in various kinds of information tasks such as instant message handling, information retrieval, and text entry. Personal Bits mines a user's interaction traces with web apps and native mobile apps and extracts task-centric entities. I present three example apps for Personal Bits: Deja Wu, MessageOnTap, and ContextBoard, to address inefficiencies present in these information tasks. Personal Bits acts as the central nexus for intelligence between apps and interaction traces, making it easy for apps to acquire personally relevant task entities at fine granularity.
Individuals with mobility disabilities face physical and social barriers due to a lack of accessible clothing. E-textile garments and smart clothing could help make garments more accessible by incorporating assistive technologies directly into clothing, but there are limited methods for co-designing prototypes so that users can be involved in the design process. This is due to the specialized knowledge needed for designing smart clothing (garment construction, e-textiles, computing). To help with these issues I am developing a co-design toolkit for wearable e-textiles called Wearable Bits. Wearable Bits expands upon the swatchbook tradition in e-textiles by making swatches that can connect to form any wearable garment. This document covers the proposed studies I will be conducting to evaluate the co-design toolkit and the expected contributions of this thesis project.
Note-taking, an active learning strategy, has a long history in the educational setting. Learners tend to take notes for a variety of reasons, such as planning their learning activities, extracting useful information from the learning content, and reflecting on their understanding of the learning material [9]. Research studies conducted in the educational domain show that note-taking can help learners think, understand, and create their knowledge, thereby increasing their learning performance. Although note-taking is crucial for learning, the activity is highly complex and requires comprehension and selection of information. As a result, resource-demanding cognitive operations are triggered that must coordinate in rapid succession, making note-taking cognitively effortful [12]. Another drawback of this activity is the split-attention effect induced when learners split their attention between the act of writing notes and attending to the learning content [10]. This results in the withdrawal of attention from the global context towards writing selective information, which can increase cognitive load and adversely affect learning performance.
The pioneering concept of connected vehicles has transformed the way researchers and entrepreneurs think by enabling vehicles to collect relevant data from nearby objects. However, this data is useful for a specific vehicle only. Moreover, vehicles receive a large amount of data (e.g., traffic, safety, and multimedia infotainment) on the road. Vehicles therefore require adequate storage for this data, but it is infeasible to equip each vehicle with large memory. Hence, the vehicular cloud computing (VCC) framework was introduced to provide a storage facility by connecting a road-side unit (RSU) with the vehicular cloud (VC). In this framework, data should be stored in encrypted form to preserve security, but searching for information over encrypted data is challenging. Furthermore, many vehicular communication schemes are inefficient for data transmission due to poor performance and are vulnerable to fundamental security attacks. Accordingly, on-device performance is critical, but data damage and secure, on-time connectivity are also significant challenges in a public environment. Therefore, we propose reliable data transmission protocols for this cutting-edge architecture that support searching data in storage, resist various security attacks, and provide better performance. The proposed data transmission protocol is thus useful in diverse smart city applications (business, safety, and entertainment) for the benefit of society.
Indoor localisation has been an active area of research for decades, and while substantial research aims to increase localisation accuracy, little has been done to develop localisation data analytics for indoor spaces. There is a wide range of scenarios and applications in which efficiency is of the essence and localisation data could be used to optimise the general flow of people. For instance, hospital Operating Rooms (ORs) cost up to $1,500 per hour even when not being used, and therefore improving staff and patient flow to maximise OR utilisation is important. By using indoor localisation and a long-term deployment to identify delays and timeliness in the steps that lead to a surgery, a hospital can better schedule surgeries to increase the up-time of ORs. Likewise, moving heavy assets through multi-floored construction sites can result in injuries and high costs. Minimising these movements by studying the flow of workers and assets can potentially result in a safer and healthier working environment and lower overall costs. Museums, zoos, festivals, and other exhibit-based sites can benefit from a more streamlined deployment and analysis of people's flow and from insights on historical data. As of now, the process of turning indoor localisation data into useful analytics is not straightforward, and remains bespoke and costly.
Older people experience difficulties when managing their security and privacy in mobile environments. However, support from the older adult's social network, and especially from close-tie relations such as family and close friends, is known to be an effective source of help in coping with technological tasks. On the basis of this existing phenomenon, I investigate how new methods can increase the availability of social support to older adults and enhance learning in tackling privacy and security challenges. I will develop and evaluate several technological interventions in the support process within social networks for older adults: finding methods that increase seekers' technology learning and methods that increase help availability and quality.
In my Ph.D., I plan to conduct three studies: the first study aims to analyze existing approaches and scenarios of social support to older adults. The initial results suggest that people have a significant willingness to help their older relatives (specifically, their parents), but the actual instances in which they do so are much rarer. We conclude that the potential for social help is far from being exploited. In the second study, I plan to explore social support as a system to increase older adults' self-efficacy and collective efficacy in overcoming privacy and security problems. The final study will investigate physiological signals to identify when an older adult requires help with mobile security and privacy issues. A successful outcome will be a theoretical model of social support, focused on the domain of privacy and security, and based on vulnerable populations such as older adults. From a practical standpoint, the thesis will offer and evaluate a set of technologies that enable and encourage social support for older adults on mobile platforms.
Advances in mobile, wearable and embedded sensing technology have created new opportunities for research into a variety of health conditions. This has led to the field of mobile health (mHealth), which covers a full spectrum of works, including but not limited to disease surveillance, treatment support, epidemic outbreak tracking, and chronic disease management. An important sub-field which has been rising in the past years is the application of mHealth technology in the field of mental and behavioral health, enabling researchers to study stress, depression, mood, personality change, schizophrenia, physical activity and addictive behavior, among other things.
Internet of Things (IoT) and artificial intelligence (AI) technologies are transforming our lives into more convenient and advanced ones. This revolutionary transformation is arising in the healthcare and medical fields. IoT devices make it feasible to examine patient status in daily life. Clinicians say that continuous examination of patient status outside the hospital would be valuable for managing a target disorder or disease. Moreover, engineers expect that AI techniques could find novel treatments for a disorder or disease based on huge amounts of daily patient data. We have investigated such technology-supported methodologies with a urologist at Severance Hospital to provide better treatment to patients who have nocturnal enuresis.
Digital learning environments provide rich and engaging experiences for students to develop different knowledge and skills. However, learning systems in these environments generally lack the capacity to assess student difficulties in real time. The lack of timely assessment and guidance can result in unproductive floundering and associated frustration [3, 9]. Standard measures are mostly focused on the use of questionnaires, interviews, or think-aloud protocols for capturing learners' subjective feedback on affect, mental effort, perceived learning, and user preferences [5]. Though these instruments are effective in capturing overall sentiments and reactions, they do not provide enough granularity to conduct detailed analyses of how specific parts of a lecture affect the learning experience.
As next-generation space exploration missions necessitate increasingly autonomous systems, there is a critical need to better detect and anticipate crewmember interactions with these systems. The success of present and future autonomous technology in exploration spaceflight is ultimately dependent upon safe and efficient interaction with the human operator. Optimal interaction is particularly important for surface missions during highly coordinated extravehicular activity (EVA), which consists of high physical and cognitive demands with limited ground support. Crew functional state may be affected by a number of variables including workload, stress, and motivation. Real-time assessments of crew state that do not require a crewmember's time and attention to complete will be especially important to assess operational performance and behavioral health during flight. In response to the need for objective, passive assessment of crew state, the aim of this work is to develop an accurate and precise prediction model of human functional state for surface EVA using multi-modal psychophysiological sensing. The psychophysiological monitoring approach relies on extracting a set of features from physiological signals and using these features to classify an operator's cognitive state. This work aims to compile a non-invasive sensor suite to collect physiological data in real-time. Training data during cognitive and more complex functional tasks will be used to develop a classifier to discriminate high and low cognitive workload crew states. The classifier will then be tested in an operationally relevant EVA simulation to predict cognitive workload over time. Once a crew state is determined, further research into specific countermeasures, such as decision support systems, would be necessary to optimize the automation and improve crew state and operational performance.
In the past, augmented reality (AR) research focused mostly on visual augmentation, which requires a visual rendering device like head-mounted displays that are usually obtrusive, expensive, and socially unaccepted. In contrast, wearable audio headsets are already popularized and the auditory sense also plays an important role in everyday interactions with the environment. In this PhD project, we explore audio augmented reality (AAR) that augments objects with 3D sounds, which are spatialized virtually but are perceived as originating from real locations in the space. We intend to design, implement, and evaluate such AAR systems that enhance people's intuitive and immersive interactions with objects in various consumer and industrial scenarios. By exploring AAR using pervasive and wearable devices, we hope to contribute to the vision of ubiquitous AR.
The existing smart-home ecosystem is capable of perceiving the ambient environment using cutting-edge sensing technologies, but is limited in its ability to react autonomously and in a timely manner. Successfully predicting the subsequent human activity can effectively infer human intention and instruct the smart home to react in a timely, customized, and accurate way. However, predicting the next activity and its precise occurrence period is challenging due to the complexity of modelling human behaviour. In this paper, a Long Short-Term Memory (LSTM) network equipped with temporal information is investigated to understand whether integrating temporal information into the model yields better prediction performance. Our results highlight that accurately integrating temporal information into the models brings better prediction accuracy. For modelling and further predicting human activity, comprehending the contextual-temporal dynamics is highly significant.
Smartphone apps are becoming ubiquitous in our everyday life. Apps on smartphones sense users' behaviors and activities, providing a lens for understanding users, which is an important topic in the ubiquitous computing community. At UbiComp 2018, we successfully held the first international workshop, AppLens 2018: mining and learning from smartphone apps for users. At UbiComp 2019, we would like to run the second international workshop, AppLens 2019. It seeks participants interested in characterizing users from their use of smartphone apps, discovering cultural and social phenomena by analyzing app usage, recognizing app usage behaviors, studying smartphone apps, user privacy issues, etc. To attract more participants, we will release two app datasets. The workshop will include paper sessions, invited talks, and a panel session to provide a forum for participants to communicate and discuss issues that promote this emerging research field. Moreover, we will select a few accepted papers to be extended and published in a prestigious journal special issue.
In everyday life, eating follows patterns and occurs in context. We present an approach to discover daily eating routines of a population following a multidimensional representation of eating episodes, using data collected with the Bites'n'Bits smartphone app. Our approach integrates multiple contextual cues provided in-situ (food type, time, location, social context, concurrent activities, and motivations) with probabilistic topic models, which discover representative patterns across these contextual dimensions. We show that this approach, when applied on eating episode data for over 120 people and 1200 days, allows describing the main eating routines of the population in meaningful ways. This approach, resulting from a collaboration between ubiquitous computing and nutrition science, can support interdisciplinary work on contextual analytics for promotion of healthy eating.
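As a rough illustration of how a probabilistic topic model can surface routines from categorical context (a hedged sketch with invented episodes, not the Bites'n'Bits pipeline), one could run LDA over tokenized eating-episode attributes:

```python
# Hedged sketch: discovering eating "routines" as topics over categorical
# context attributes of eating episodes with LDA. The episodes and tokens
# below are made up for illustration.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Each eating episode is described by categorical context tokens.
episodes = [
    "snack evening home alone watching_tv craving",
    "meal noon work colleagues hunger",
    "meal evening home family hunger",
    "snack afternoon work alone stress",
    "meal noon restaurant friends social",
]

vec = CountVectorizer()
X = vec.fit_transform(episodes)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Each topic is a distribution over context tokens, i.e. a candidate routine.
vocab = np.array(vec.get_feature_names_out())
for k, topic in enumerate(lda.components_):
    top = vocab[np.argsort(topic)[::-1][:4]]
    print(f"Routine {k}: {', '.join(top)}")
```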
Smartphones are linked with individuals and are valuable and readily available sources for characterizing users' behavior and activities. A user's location is among the characteristics of each individual that can be utilized in the provision of location-based services (LBSs) in numerous scenarios, such as remote healthcare and interactive museums. Mobile phone tracking and positioning techniques approximate the position of a mobile phone, and thereby its user, by estimating the actual coordinates of the phone. Despite advances in positioning techniques, indoor positioning remains a challenging issue because the coverage of satellite signals is limited in indoor environments. One of the promising solutions for indoor positioning is fingerprinting, in which the signals of some known transmitters are measured at several reference points (RPs). This measured data, called the dataset, is stored and used to train a mathematical model that relates the received signals from the transmitters (model input) to the location of the user (model output). Despite all the improvements in indoor positioning, there is still a gap between practical solutions and the optimal solution that provides near-theoretical accuracy for positioning. This accuracy directly impacts the level of usability and reliability of the corresponding LBSs. In this paper, we develop a smartphone app with the ability to be trained and to detect users' locations accurately. We use Gaussian Process Regression (GPR) as a probabilistic method to find the parameters of a non-linear and non-convex indoor positioning model. We collect a dataset of received signal strength (RSS) at several RPs using software that we prepared and installed on an Android smartphone. We also find the accurate 2σ confidence interval for the presented GPR method and evaluate the performance of the proposed method on data measured in a realistic scenario. The measurements confirm that our proposed method outperforms some conventional methods, including KNN, SVR, and PCA-SVR, in terms of accuracy.
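A minimal sketch of the fingerprinting-plus-GPR idea is shown below, using synthetic RSS data in place of the collected dataset; the kernel choice and simulation parameters are assumptions rather than the paper's configuration.

```python
# Hedged sketch: fingerprint-based positioning with Gaussian Process Regression,
# mapping RSS vectors measured at reference points to 2-D coordinates. The data
# are synthetic; a real deployment would use the collected RSS dataset.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
n_rps, n_aps = 60, 5
locations = rng.uniform(0, 10, size=(n_rps, 2))          # reference-point coordinates (m)
ap_positions = rng.uniform(0, 10, size=(n_aps, 2))

# Simple log-distance RSS model plus noise, standing in for real measurements.
dists = np.linalg.norm(locations[:, None, :] - ap_positions[None, :, :], axis=-1)
rss = -40 - 20 * np.log10(dists + 0.1) + rng.normal(0, 2, size=dists.shape)

kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(rss, locations)

# Predict the location for a new RSS vector, with a 2-sigma confidence interval.
query = rss[0] + rng.normal(0, 1, size=n_aps)
mean, std = gpr.predict(query.reshape(1, -1), return_std=True)
print("estimate:", mean[0], "+/-", 2 * std[0])
```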
Snapchat is a highly popular smartphone app that allows personalised multimedia communication for spontaneous experiences, where the shared content disappears after a short period of time. In this paper, we examine the predictors of Snapchat usage based on a range of data collected through surveys and from interaction with the handset, using a cohort of 64 recruited participants. The results show that age, Smartphone Addiction, happiness and the use of the popular chatting apps WhatsApp and Facebook Messenger are significant predictors for Snapchat usage. We discuss the implications of these findings against the related literature, and also against the design of the app itself.
Activity recognition is an increasingly relevant topic in the context of a wide variety of end-user services. In outdoor environments, activity recognition based on close-to-real-time information is key to providing the user with awareness of their surroundings in a timely and user-friendly manner, thus allowing the user to improve their overall experience (Quality of Experience). In this context, it is relevant to understand how data extracted from multiple sensors can be fused, interpreted, and classified to best provide feedback to the user. Taking Mobile Augmented Reality Systems for outdoor environments as the target case, this paper presents a first analysis of how smart data captured via multiple sensors can assist activity recognition and provide adequate feedback to the user. The paper also discusses the existing restrictions imposed by application usage in these environments, describing possible use scenarios and presenting results of an experiment on discriminating activities when using common sensors, such as the accelerometer.
Much of the research in wearable technology focuses on the primary user's experiences and interactions. However, many wearables are inherently social, even public, by nature as they are visible to nearby others. Wearables carry meanings about the wearer (e.g., lifestyle, attitudes, interests, social status). Some wearables are even designed to enable interaction between collocated users or to enhance group experience. While the functions of the technology are often seen to justify its existence, reaching high acceptance of technology requires that social and cultural aspects also be considered. In this workshop, we look into the dynamic and communicative nature of wearable technology designed for both individuals and groups. We are particularly interested in social experiences emerging around personal wearables and the possibilities for technology to enhance group experiences. The goal of this workshop is to bring together a community of researchers, designers, and practitioners who have designed or are interested in designing wearable technology, to discuss the research agenda and challenges in designing wearable technology as social, communicative artefacts.
This abstract discusses the EEG Visualising Pendant (2012) [3], an emotive wearable that maps and visualises EEG data from a NeuroSky MindWave Mobile Bluetooth EEG headset. It has been developed through several iterations as a doctoral research prototype for studies evaluating the use of bespoke, aesthetic wearables in the role of nonverbal communication.
Wearable computing solutions have previously been used to support children and adults with autism and ADHD. The design of these technologies, however, requires a deep understanding not only of what is possible from a clinical standpoint but also of how the children themselves might understand and orient towards wearable technologies, such as a smartwatch. In this work, we were interested in supporting children and their caregivers in participating in co-design to explore the issue of transitioning from self-regulation to co-regulation.
We present 'True Colors,' a social wearable prototype designed to augment co-located social interaction among players in a LARP (live action role play). We designed it to enable the emergence of rich social dynamics between wearers and non-wearers. True Colors is Y-shaped, worn around the upper body, and has distinct front and back interfaces to afford actions taken by the wearer (front), and actions taken by others (back). To design True Colors [3], we followed a Research-through-Design approach, used experiential qualities and social affordances to guide our process, and co-designed with LARP designers. 13 True Colors wearables were deployed in a 3-day LARP event, attended by 109 people. Out of all the functionalities and interactivity the device afforded, players gravitated most towards those that emphasized the social value of experiencing vulnerability as a prompt to get together.
Cosplay, where people dress up in special costumes resembling or inspired by an imaginary character derived, e.g., from a game or comic, is an interesting area of social clothing design. In our ongoing research, we consider cosplay as a source of inspiration for wearable computing. We investigate different aspects related to identity and expression with cosplay costumes, and seek to identify factors that affect engagement and user satisfaction when participating in cosplay. We believe that wearable computing research can learn from cosplay and take elements that help in designing future wearables.
The Internet of Things (IoT) provides streaming, large-scale, and multimodal sensing data over time. The statistical properties of these data often differ greatly across time and sensing modalities, and are hard to capture with conventional learning methods. Continual and multimodal learning allows integrating, adapting, and generalizing the knowledge learned from previous, heterogeneous experiential data to new situations. Therefore, continual and multimodal learning is an important step towards improving the estimation, utilization, and security of real-world data from IoT devices. Major challenges in combining continual learning and multimodal learning on real-world data include 1) how to accurately match, fuse, and transfer knowledge between multimodal data from fast-changing, dynamic physical environments, 2) how to learn accurately despite missing, imbalanced, or noisy data in continual learning under multimodal sensing scenarios, 3) how to effectively combine information collected from different sensing modalities to improve the understanding of CPS while retaining privacy and security, and 4) how to develop usable systems that handle high-volume streaming multimodal data on mobile devices.
We organize this workshop to bring together people working in different disciplines to tackle these challenges. The workshop aims to explore the intersection and combination of continual machine learning and multimodal modeling with applications in the Internet of Things. It welcomes work addressing these issues in different applications and domains, as well as algorithmic and systematic approaches to leveraging continual learning on multimodal data. We further seek to develop a community that systematically handles the streaming multimodal data widely available in real-world ubiquitous computing systems. Preliminary and ongoing work is welcome.
We present a variety of new visual features as an extension to the TED-LIUM corpus. We re-aligned the original TED talk audio transcriptions with the official TED.com videos. By utilizing state-of-the-art models for face and facial landmark detection, optical character recognition, and object detection and classification, we extract four new visual features that can be used in Large-Vocabulary Continuous Speech Recognition (LVCSR) systems: facial images, landmarks, text, and objects in the scenes. The facial images and landmarks can be used in combination with audio for audio-visual acoustic modeling, where the visual modality provides robust features in adverse acoustic environments. The contextual information, i.e., the extracted text and detected objects in the scene, can be used as prior knowledge to create contextual language models. Experimental results showed the efficacy of using visual features on top of acoustic features for speech recognition in overlapping speech scenarios.
The design of computational methods for detecting abnormal lung sounds (e.g., wheeze) associated with the advent of a pulmonary attack (e.g., asthma), and the subsequent characterization of the 'severity' or progressive exacerbation in pulmonary patients, is a relevant problem in ubiquitous computing. While a few recent works address on-body-sensor and smartphone-sensor based lung activity detection, designing a comprehensive architecture for the detection and characterization of abnormal lung sounds (e.g., wheeze) is an open issue. In this paper, we present mLung++, a comprehensive pulmonary care system for respiration-cycle-based detection and subsequent characterization of wheezing in chronic pulmonary patients using audio and inertial sensors embedded in a smartphone. For the design, training, and evaluation, we use audio and Inertial Measurement Unit (IMU) data collected by smartphone and watch from 131 human subjects (91 pulmonary patients, 40 healthy controls). We show empirical evidence that the performance of mLung++ for characterizing abnormal lung sounds (accuracy 93.4% and F1 score 77.94%) is comparable with that of high-quality on-body sensor based characterization, which is usually done in a hospital or clinical setup.
Figures are human-friendly but difficult for computers to process automatically. In this work, we investigate the problem of figure captioning. The goal is to automatically generate a natural language description of a given figure. We create a new dataset for figure captioning, FigCAP. To achieve accurate generation of labels in figures, we propose the Label Maps Attention Model. Extensive experiments show that our method outperforms the baselines. A successful solution to this task allows figure content to be accessible to those with visual impairment by providing input to a text-to-speech system; and enables automatic parsing of vast repositories of documents where figures are pervasive.
With the increasing number of low-cost sensing modalities, large amounts of spatial and temporal data are collected and accumulated from building systems. Substantial information about occupant behavior and actions can be extracted from the gathered data. Understanding these data provides an opportunity to decode movement patterns and circulation flow, i.e., how an occupant tends to move inside the building, and to extract occupant presence impressions. Occupant presence can be defined as the digital trace of spatial coordinates (x, y) of an occupant moving within the monitored space, represented by a chronologically ordered sequence of those position coordinates. This study analyzes occupant presence inside a building and makes predictions about the next location, i.e., where an occupant is likely to be in the future. This paper introduces a predictive model for occupant presence prediction using data collected from an instrumented commercial building spanning over 30 days (May 2019 to June 2019). The proposed prediction model, named PRECEPT, is a variant of the Recurrent Neural Network known as the Gated Recurrent Unit (GRU) network. PRECEPT is capable of learning mobility patterns and predicting presence impressions based on the occupant's past spatial coordinates. We evaluate the performance of PRECEPT using metrics such as Mean Squared Error (MSE) and Mean Absolute Error (MAE) for each training epoch. The model achieves a Root Mean Squared Error (RMSE) of 4.79 centimeters for a single occupant. We also illustrate how the prediction model can be used to identify important zones and extract unique space-usage patterns. This could further assist Building Management System (BMS) authorities in reducing energy waste and performing efficient HVAC control and intelligent building operations.
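To make the modelling idea concrete, the sketch below shows a minimal GRU-based next-location regressor in PyTorch; the layer sizes and training setup are assumptions, not PRECEPT's actual configuration.

```python
# Hedged sketch of a GRU-based next-location predictor: it reads a sequence of
# past (x, y) coordinates and regresses the next position. Layer sizes and the
# training setup are assumptions, not PRECEPT's actual configuration.
import torch
import torch.nn as nn

class NextLocationGRU(nn.Module):
    def __init__(self, hidden_dim: int = 64):
        super().__init__()
        self.gru = nn.GRU(input_size=2, hidden_size=hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)   # predicted (x, y)

    def forward(self, coords):                 # coords: (batch, seq_len, 2)
        _, h = self.gru(coords)
        return self.head(h[-1])

model = NextLocationGRU()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                         # MSE, as reported per epoch

# One illustrative training step on synthetic trajectories.
past = torch.rand(16, 30, 2)                   # 16 sequences of 30 positions
target = torch.rand(16, 2)                     # the position that follows each
loss = loss_fn(model(past), target)
loss.backward()
optimizer.step()
print(float(loss))
```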
While hearing a case, the judge must fully understand the case and clarify the disputed issues between the parties, which is the cornerstone of a fair trial. However, manually mining the key issues of a case from the statements of the litigating parties is a bottleneck, and currently relies on methods such as keyword searching and regular-expression matching. To complete this time-consuming and laborious task, judges need sufficient prior knowledge of cases belonging to different causes of action. We apply event extraction technology to capture the focus of a case faster. However, there is no proper definition of events covering the types of focus in the judicial field, and existing event extraction methods cannot handle multiple events sharing the same arguments or trigger words within a single sentence, which is very common in case materials. In this paper, we present a mechanism to define focus events, and a two-level labeling approach, which can handle multiple events sharing the same arguments or trigger words, to automatically extract focus events from case materials. Experimental results demonstrate that the method can identify the focus of a case accurately. To our knowledge, this is the first time that event extraction technology has been applied to the judicial field.
Air pollution monitoring is a pressing issue in urban management. Nowadays, vehicle-based mobile sensing systems are deployed to enhance sensing granularity. However, in these systems, the number of online sensors may change over time, and long-term maintenance is costly. Therefore, an inference algorithm is necessary to maintain the high spatiotemporal granularity of the pollution field under both dense and sparse sampling. In this paper, we propose an algorithm based on a super-resolution scheme. To address the challenges of complex external factors and spatiotemporal dependencies, three modules are included: a heterogeneous data fusion subnet to extract useful information from external data, a spatiotemporal residual subnet to capture the spatiotemporal dependencies in the pollution field, and an upsampling subnet to generate the fine-grained pollution map. Experiments on real-world datasets show that our model outperforms state-of-the-art baselines.
Despite significant advances in the performance of sensory inference models, their poor robustness to changing environmental conditions and hardware remains a major hurdle for widespread adoption. In this paper, we introduce the concept of unsupervised domain adaptation which is a technique to adapt sensory inference models to new domains only using unlabeled data from the target domain. We present two case-studies to motivate the problem and highlight some of our recent work in this space. Finally, we discuss the core challenges in this space that can trigger further ubicomp research on this topic.
Human activity recognition (HAR) is essential to many context-aware applications in mobile and ubiquitous computing. A human's physical activity can be decomposed into a sequence of simple actions or body movements, corresponding to what we denote as mid-level features. Such mid-level features ("leg up," "leg down," "leg still," ...), which we contrast with high-level activities ("walking," "sitting," ...) and low-level features (raw sensor readings), can be developed manually. While proven to be effective, this manual approach is not scalable and relies heavily on human domain expertise. In this paper, we address this limitation by proposing AttriNet, a machine learning method based on deep belief networks. Our AttriNet method automatically constructs mid-level features and outperforms baseline approaches. Interestingly, we show in experiments that some of the features learned by AttriNet correlate highly with manually defined features. This result demonstrates the potential of using deep learning techniques for learning mid-level features that are semantically meaningful, as a replacement for handcrafted features. More generally, this empirical finding provides an improved understanding of deep learning methods for HAR.
Autonomous checkout at retail stores could bring a large array of benefits to both consumers (no lines, better user experience) and retailers (lower operational cost, detailed insights about customer behavior). Existing approaches include self-checkout stations, on-item sensing (e.g., RFID) and infrastructure-based sensing (e.g., vision and weight). While each has its own pros and cons, the latter offers a good tradeoff between information richness and operational cost. However, several challenges currently limit its accuracy. In particular, visual item recognition is constrained by the huge amount of training data required and by the domain adaptation gap that usually exists between the training distribution (e.g., a well-lit environment) and the testing distribution (e.g., each store might have different lighting conditions, camera angles, etc.). In this preliminary work we explore different ways to leverage multi-modal sensing (e.g., weight load cells on shelves) to automatically label frames of customers picking up or putting down items on the shelf. Those annotated frames could then be used to continuously expand the initial visual model and tailor it to each store's conditions.
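One way such weight-driven auto-labeling could work is sketched below. It is a simplified illustration, not the authors' pipeline: it assumes synchronized load-cell readings and frame timestamps, a tolerance threshold, and a hypothetical product catalog mapping items to weights.

```python
import numpy as np

def label_frames_from_weight(weights, w_times, frame_times, catalog, tol=10.0):
    """Minimal sketch: detect pick-up/put-down events from a shelf load cell
    and attach the inferred item label to the video frames around each event.

    weights: 1-D array of load-cell readings (grams); w_times: their timestamps (s)
    frame_times: timestamps of camera frames (s)
    catalog: dict item_name -> weight in grams (hypothetical product catalog)
    """
    labels = {}
    diffs = np.diff(weights)
    for i, d in enumerate(diffs):
        if abs(d) < tol:          # ignore noise below the tolerance
            continue
        action = "pickup" if d < 0 else "putdown"
        # Match the magnitude of the weight change to the closest catalog item
        item = min(catalog, key=lambda k: abs(catalog[k] - abs(d)))
        # Label frames within +/- 1 s of the event time
        t_event = w_times[i + 1]
        for f_idx in np.where(np.abs(frame_times - t_event) <= 1.0)[0]:
            labels[int(f_idx)] = (item, action)
    return labels

# Toy usage: one item of ~330 g is removed at t = 2 s
w = np.array([1000.0, 1000.0, 668.0, 668.0])
print(label_frames_from_weight(w, np.arange(4.0), np.linspace(0, 3, 31),
                               {"soda_can": 330.0, "chips": 150.0}))
```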
With advances in the Internet of Things, many opportunities arise if the challenges of continual learning in a multimodal setting can be tackled. One common issue in online learning is obtaining labelled data, which is generally costly. Active Learning is a popular approach to collecting labelled data efficiently, but it typically rests on unrealistic assumptions.
In this work we present a first step towards a taxonomy of Interactive Learning strategies in a multimodal and dynamic setting. By relaxing assumptions of standard Active Learning, the strategies become better suited for real-world settings and can achieve better performance.
Speaker recognition is a key component of emerging Internet of Things (IoT) smart services, such as voice control and personalized applications. Although speaker recognition systems can attain excellent performance on curated datasets, operation in the real world can lead to a significant degradation in performance. The key reason for this is the lack of sufficient labeled data for model adaptation, primarily due to the cost of manual annotation and enrollment. A recent solution to this problem is to use cross-modal identifiers (e.g., WiFi sniffing) to gradually associate an identity with a certain vocal feature, as in Simultaneous Clustering and Naming (SCAN). In this paper we demonstrate how to further improve the performance of these cross-modal systems in the wild by iteratively adapting the feature extractor based on the output of the noisy association and clustering step. We show how this feedback loop can improve not only overall accuracy but also labeling coverage in the association results. iSCAN is a further step towards a robust and zero-effort speaker recognition system for the IoT.
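The feedback loop might be pictured roughly as below. This is a heavily simplified sketch, not iSCAN itself: clustering is done with k-means, "naming" uses the majority co-occurring WiFi identity, and a supervised LDA projection stands in for actually fine-tuning the vocal feature extractor on the pseudo-labels.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def iterative_cluster_and_adapt(embeddings, wifi_ids, n_speakers, n_iters=3):
    """Simplified sketch of an iterative cross-modal association loop:
    1) cluster vocal embeddings, 2) name clusters by the most frequent co-occurring
    WiFi identity, 3) adapt the feature space on the noisy pseudo-labels
    (here an LDA projection stands in for fine-tuning the extractor)."""
    feats = embeddings
    pseudo = None
    for _ in range(n_iters):
        clusters = KMeans(n_clusters=n_speakers, n_init=10).fit_predict(feats)
        # Associate each cluster with its majority WiFi identity (naming step)
        pseudo = np.empty(len(feats), dtype=object)
        for c in range(n_speakers):
            members = np.where(clusters == c)[0]
            ids, counts = np.unique(wifi_ids[members], return_counts=True)
            pseudo[members] = ids[np.argmax(counts)]
        # "Adapt" the feature space using the noisy pseudo-labels
        lda = LinearDiscriminantAnalysis(
            n_components=min(n_speakers - 1, embeddings.shape[1] - 1))
        feats = lda.fit_transform(embeddings, pseudo)
    return pseudo

# Toy usage: 3 speakers, 6-D synthetic embeddings, WiFi names observed alongside
rng = np.random.default_rng(0)
emb = np.vstack([rng.normal(i, 0.3, (20, 6)) for i in range(3)])
wifi = np.repeat(["alice", "bob", "carol"], 20)
print(iterative_cluster_and_adapt(emb, wifi, n_speakers=3)[:5])
```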
In real-world ubiquitous computing systems, it is difficult to acquire the significant amounts of data needed to obtain accurate information through purely data-driven methods. The performance of data-driven methods relies on the quantity and quality of data: they perform well when a sufficient amount of data is available, which is regarded as the ideal condition. However, in real-world systems, collecting data can be costly or impossible due to practical limitations. On the other hand, it is promising to utilize physical knowledge to alleviate these issues of data limitation. Such physical knowledge includes domain knowledge from experts, heuristics from experience, analytic models of the physical phenomena, and so on.
The goal of the workshop is to explore the intersection between (and the combination of) data and physical knowledge. The workshop aims to bring together domain experts who explore the physical understanding of data, practitioners who develop systems, and researchers in traditional data-driven domains. The workshop welcomes papers that focus on addressing these issues in different applications and domains, as well as algorithmic and systematic approaches to applying physical knowledge. We further seek to develop a community that systematically analyzes data quality with regard to inference and evaluates the improvements gained from physical knowledge. Preliminary and ongoing work is welcome.
With the increasing popularity of consumer wearable devices augmented with sensing capabilities (smart bands, smart watches), there is a significant focus on extracting meaningful information about human behaviour from large-scale real-world wearable sensor data. The focus of this work is to develop techniques to detect human activities using a large dataset of wearable data for which no ground truth about the actual activities performed is available. We propose a deep learning variational autoencoder activity recognition model, Motion2Vector. The model is trained using large amounts of unlabelled human activity data to learn a representation of a time period of activity data. The learned activity representations can be mapped into an embedded activity space and grouped according to the nature of the activity type. To evaluate the proposed model, we applied our method to a public dataset, the Heterogeneity Human Activity Recognition (HHAR) dataset. The results showed that our method achieves improved results on the HHAR dataset. In addition, we collected our own lab-based activity dataset. Our experimental results show that our system achieves good accuracy in detecting such activities and has the potential to provide additional insights into real-world activity in situations where no ground truth is available.
Coronary Artery Disease (CAD) is a leading cause of death globally. Coronary angiography, the clinical diagnosis for CAD, involves surgery and admission to hospital. While this is a proven gold standard, a less exact, low-cost, non-invasive screening method would be very helpful for mass diagnosis and pre-diagnosis. However, all physiological manifestations of CAD either appear late in the disease course or are non-specific surrogate markers. With the advent of Artificial Intelligence (AI), there is new hope in multi-modal non-invasive sensing and analysis. In this paper, we combine domain knowledge with AI-based data analysis to propose a novel two-stage approach that effectively incorporates multiple CAD markers from various non-invasive cardiovascular signals into an improved diagnosis system. In the first stage, a hierarchical rule engine identifies the high-cardiac-risk population using patient demography and medical history; these patients are further analysed in the second stage using numeric features from various cardiovascular signals. Results show that the proposed approach achieves sensitivity = 0.96 and specificity = 0.91 in classifying CAD patients on an in-house hospital dataset recorded using commercially available sensors.
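The two-stage structure can be illustrated with the minimal sketch below. The rule thresholds, feature set, and random-forest second stage are all illustrative assumptions (not clinical rules and not the authors' rule engine); the point is only the flow from a rule-based triage to a signal-feature classifier.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rule_engine(patient):
    """Stage 1 sketch: flag high cardiac risk from demography and history.
    Thresholds here are purely illustrative, not clinical guidance."""
    if patient["age"] > 60 and (patient["diabetic"] or patient["smoker"]):
        return True
    if patient["family_history"] and patient["hypertension"]:
        return True
    return False

def two_stage_screen(patients, signal_features, labels_for_training):
    """Stage 2 sketch: only patients flagged by the rule engine are passed to a
    classifier trained on numeric features from cardiovascular signals."""
    flagged = [i for i, p in enumerate(patients) if rule_engine(p)]
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(signal_features[flagged], labels_for_training[flagged])
    preds = np.zeros(len(patients), dtype=int)          # default: not CAD
    if flagged:
        preds[flagged] = model.predict(signal_features[flagged])
    return preds

# Toy usage with random signal features for two patients
patients = [{"age": 65, "diabetic": True, "smoker": False,
             "family_history": False, "hypertension": False},
            {"age": 40, "diabetic": False, "smoker": False,
             "family_history": False, "hypertension": False}]
X = np.random.rand(2, 5)
y = np.array([1, 0])
print(two_stage_screen(patients, X, y))
```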
Rotation invariance is hard to obtain with present neural network engineering, yet rotation symmetry is an important characteristic of our physical world. In image recognition, rotated images can largely decrease the performance of neural networks. This seriously hinders the application of neural networks in the real world, for example in human tracking, self-driving cars, and intelligent surveillance. In this paper, we present a rotation-equivariant design of convolutional neural network ensembles to counteract the problem of processing rotated images. The ensemble combines multiple convolutional neural networks, each trained on a different range of rotation angles. In our proposed approach, the model lowers the training difficulty by learning over smaller ranges of random rotation angles instead of a single large one. Experiments are reported in this paper. The convolutional neural network ensembles reach 96.35% on the rotated MNIST dataset, 84.9% on rotated Fashion-MNIST, and 91.35% on rotated KMNIST. These results are comparable to current state-of-the-art performance.
A large and diversified dataset is the cornerstone for the analysis of many real-world systems. Data collection, especially when it involves living beings, is a time- and effort-consuming process. Data augmentation using analytical models is therefore gaining traction in computing systems because of data scarcity in real-world applications. In this paper, we demonstrate a case study of such data expansion for radar-based human gesture detection. We present a physical-model-based simulation framework for obtaining radar signals corresponding to different human gestures. Public radar datasets have so far been scarce because of the high cost and low availability of radar hardware. On the contrary, Kinect-based datasets are easily available because of the market dominance and commercialization of devices such as the Xbox Kinect. Thus, for the simulation framework, we use a publicly available dataset of gaming gestures based on Kinect data (obtained from the Microsoft Cambridge joint initiative). This Kinect data, containing the coordinates of the joint positions of human skeletons performing different gaming gestures, is fed to the radar simulation framework to calculate the radar data. Furthermore, we tune different parameters of the simulation framework to generate radar micro-Doppler signatures in a controlled way, keeping the physical constraints in mind. On this expanded data, machine learning algorithms have been applied to evaluate gesture detection accuracy. The enlarged dataset for four gestures shows an accuracy of around 94.7% for 10-fold cross-validation in the training phase. A comparison of simulated radar data with experimentally obtained hardware radar data for different gestures is also reported to show the relevance of the proposed method.
Identifying the causal features in multi-dimensional physical data streams is one of the underpinnings of successful data-driven inference tasks. Prior research uses correlation or mutual information to select features, which ignores the crucial inherent causality behind the data. In this work, we consider the problem of selecting causal features from streaming physical sensing data. Inspired by a metric from information theory that captures both instantaneous and temporal relations, we formulate causal feature selection as a cardinality-constrained joint directed information maximization (Max-JDI) problem. We then propose a near-optimal greedy algorithm for streaming feature selection and present an information-theoretic solution for presetting the cardinality constraint. The proposed method is evaluated on a real-world case study involving feature selection for power distribution network event detection. Compared with other selection baselines, the proposed method increases the detection accuracy by around 5% while reducing the computation time from several weeks to under a minute. The promising results demonstrate that it can be applied to optimize energy operation and enhance the resilience of power systems.
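A greedy, cardinality-constrained selection loop of this flavor is sketched below. To keep the sketch self-contained it uses plain mutual information with the target plus a correlation-based redundancy penalty as a stand-in for the paper's joint directed information objective, which is not reproduced here.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def greedy_select(X, y, k):
    """Greedy cardinality-constrained feature selection sketch.
    NOTE: the paper maximizes joint directed information (Max-JDI); here plain
    mutual information with the target is used as a simplified stand-in,
    with a redundancy penalty against already-selected features."""
    n_features = X.shape[1]
    selected = []
    relevance = mutual_info_classif(X, y, random_state=0)
    for _ in range(k):
        best, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            # Penalize redundancy: mean absolute correlation with chosen features
            red = (np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected])
                   if selected else 0.0)
            score = relevance[j] - red
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

# Toy usage on a random event-detection-style dataset
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = (X[:, 2] + 0.5 * X[:, 7] > 0).astype(int)
print(greedy_select(X, y, k=3))
```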
In the renewable resource recycling market, the recycling price is a key factor that influences the market. Predicting the price of renewable resources is therefore important and helps to guide the development of the market. However, it is difficult to make accurate predictions because recycling price data are highly random and complex. Moreover, there is a time lag between predicted prices and actual prices. In this paper, we propose a combined model to solve these problems. Our model decomposes the prediction into two parts: trend price prediction with a Moving Average (MA), and residual price prediction with Empirical Mode Decomposition (EMD) and a Long Short-Term Memory (LSTM) neural network. Evaluations on a real-world dataset show that our model outperforms classical prediction models, reducing the error by over 70% and solving the time lag problem.
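The decomposition idea can be illustrated with the short sketch below: a moving average extracts the trend and a small LSTM forecasts the residual. The EMD step applied in the paper is deliberately omitted, and the window sizes, network width, and training loop are illustrative assumptions only.

```python
import numpy as np
import torch
import torch.nn as nn

def decompose(prices, window=5):
    """Split a price series into a moving-average trend and a residual.
    (The paper additionally applies EMD to the residual; that step is omitted here.)"""
    kernel = np.ones(window) / window
    trend = np.convolve(prices, kernel, mode="same")
    return trend, prices - trend

class ResidualLSTM(nn.Module):
    """Tiny LSTM that predicts the next residual value from a short history."""
    def __init__(self, hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (batch, seq_len, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # predict one step ahead

# Toy usage: decompose, then briefly train on sliding windows of the residual
prices = np.sin(np.linspace(0, 20, 200)) * 10 + np.linspace(50, 80, 200)
trend, residual = decompose(prices)
seq = 10
X = np.stack([residual[i:i + seq] for i in range(len(residual) - seq)])[..., None]
y = residual[seq:][:, None]
model = ResidualLSTM()
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()
Xt, yt = torch.tensor(X, dtype=torch.float32), torch.tensor(y, dtype=torch.float32)
for _ in range(50):
    opt.zero_grad()
    loss = loss_fn(model(Xt), yt)
    loss.backward()
    opt.step()
# Final forecast = last trend value + predicted residual for the next step
print(trend[-1] + float(model(Xt[-1:]).item()))
```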
Air pollution is a global health threat. Nowadays, with the increasing amount of air pollution monitoring data from conventional official stations and mobile sensing systems, the role of deep learning methods in pollution map recovery is becoming increasingly apparent. To address the challenges of irregular sampling from mobile sensing and the non-interpretability of deep learning models, we propose an inference algorithm based on a deep autoencoder framework. By separating the processes of pollution generation and data sampling, this framework can handle sampling at arbitrary irregular intervals in time and space. We also adopt a convolutional long short-term memory (ConvLSTM) structure to model pollution generation, after revealing its internal connections with an atmospheric dispersion model. Our algorithm is evaluated on a three-month real-world data collection in Tianjin, China. Results show that our method obtains up to a 2x performance improvement over existing methods, benefiting from its high robustness to different background pollution levels and occasional sensor errors.
We propose a context-free semantic localisation approach to visualise and analyse indoor movements. We focus on settings where indoor location or rooms have strongly associated semantics, such as hospitals. We describe an approach that can work with different localisation systems, with little knowledge of the physical space properties, and with minimal bootstrapping required. We propose a movement representation that consists of time-encoded strings, and discuss how our approach can be used for analysing and visualising longitudinal indoor localisation data.
Mobile vision systems, often battery-powered, are now incredibly powerful in capturing, analyzing, and understanding real-world events, opening up countless opportunities for new applications in life-logging, cognitive augmentation, security, safety, wildlife surveillance, and more. There are two competing challenges in the design of a mobile vision system today: improving recognition accuracy while keeping energy consumption to a minimum. In this work, we posit that best-effort sensing with degradable featurization and an elastic inference pipeline offers an interesting avenue to bring energy autonomy to mobile vision systems while ensuring acceptable recognition performance. Borrowing principles from Intermittent Computing and Numerical Computing, we propose such best-effort sensing using a degradable inference pipeline supported by a parameterized Discrete Cosine Transform (DCT) based featurization and an anytime Deep Neural Network. These two principles aim to extend the lifetime of a mobile vision system while minimizing compute and communication cost without compromising recognition performance. We report the design and early characterization of our proposed solution.
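The notion of a parameterized, degradable DCT featurization can be made concrete with the small sketch below: the fraction of retained DCT coefficients acts as the knob trading feature quality against compute. The block size and budget parameter are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from scipy.fftpack import dct

def degradable_features(image_block, energy_budget=1.0):
    """Sketch of parameterized DCT featurization: a 2-D DCT of an image block is
    truncated to a fraction of its coefficients, so feature quality can be traded
    against compute/energy (energy_budget in (0, 1])."""
    coeffs = dct(dct(image_block, axis=0, norm="ortho"), axis=1, norm="ortho")
    n = image_block.shape[0]
    keep = max(1, int(round(n * energy_budget)))   # keep the top-left keep x keep block
    return coeffs[:keep, :keep].ravel()

# Toy usage: full vs. degraded features of an 8x8 block
block = np.random.rand(8, 8)
print(degradable_features(block, 1.0).shape)   # (64,) full-quality features
print(degradable_features(block, 0.5).shape)   # (16,) low-energy features
```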
A major challenge in human activity recognition over long periods with multiple sensors is clock synchronization of independent data streams. Poor clock synchronization can lead to poor data and classifiers. In this paper, we propose a hybrid synchronization approach that combines NTP (Network Time Protocol) and context markers. Our evaluation shows that our approach significantly reduces synchronization error (20 ms) when compared to approaches that rely solely on NTP or sensor events. Our proposed approach can be applied to any wearable sensor where an independent sensor stream requires synchronization.
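A minimal sketch of how context markers could refine a coarse NTP offset is shown below. It assumes corresponding marker events (e.g., a shared shake or clap) have already been detected in both streams; the specific numbers and the median-based refinement are illustrative, not the authors' exact procedure.

```python
import numpy as np

def estimate_offset(ntp_offset, marker_times_a, marker_times_b):
    """Sketch of hybrid synchronization: start from the coarse NTP offset between
    two sensor clocks, then refine it with context markers (events observed by
    both sensors)."""
    # Residual offsets of corresponding markers after applying the NTP offset
    residuals = (np.asarray(marker_times_b) - np.asarray(marker_times_a)) - ntp_offset
    return ntp_offset + np.median(residuals)   # median is robust to outlier markers

# Toy usage: NTP says the streams differ by 120 ms, markers reveal ~20 ms extra drift
a = np.array([10.000, 20.000, 30.000])          # marker times on sensor A (s)
b = np.array([10.141, 20.139, 30.140])          # same markers seen by sensor B
print(round(estimate_offset(0.120, a, b), 3))   # refined offset, ~0.140 s
```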
Vehicle flow estimation has many potential smart cities and transportation applications. Many cities have existing camera networks which broadcast image feeds; however, the resolution and frame-rate are too low for existing computer vision algorithms to accurately estimate flow. In this work, we present a computer vision and deep learning framework for vehicle tracking. We demonstrate a novel tracking pipeline which enables accurate flow estimates in a range of environments under low resolution and frame-rate constraints. We demonstrate that our system is able to track vehicles in New York City's traffic camera video feeds at 1 Hz or lower frame-rate, and produces higher traffic flow accuracy than popular open source tracking frameworks.
In this work, we propose P-Loc, a device-free indoor localization system based on the power-line network within a building. P-Loc measures the electromagnetic (EM) coupling between a human body and the existing power lines, which are simultaneously used for electric power transmission. To avoid the impact of the AC mains and noise from other electrical sources, we inject a signal of a specific frequency into the ground line to generate the occupant location fingerprint. An average error distance of 0.48 m is obtained in preliminary experiments.
Computing devices worn on the human body have a long history in academic and industrial research, most importantly in wearable computing, mobile eye tracking, and mobile mixed and augmented reality. As humans receive most of their sensory input via the head, it is a very interesting body location for simultaneous sensing and interaction as well as cognitive assistance. Eyewear computing devices have recently emerged as commercial products and can provide a research platform for a range of fields, including human-computer interaction, ubiquitous computing, pervasive sensing, psychology and the social sciences. The proposed workshop will bring together researchers from a wide range of disciplines, such as mobile and ubiquitous computing, eye tracking, optics, computer vision, human vision and perception, usability, and systems research. This year it will also bring in researchers from psychology, with a focus on the social and interpersonal aspects of eyewear technology. The workshop is a continuation of the 2016/2018 editions and will focus on discussing application scenarios as well as on eyewear sensing and supporting social interactions.
In this position paper, we encourage the use of novel 3D gaze tracking possibilities in the field of gaze-based interaction. Smooth pursuit offers great benefits over other gaze interaction approaches, like the ability to work with uncalibrated eye trackers, but also has disadvantages like the produced visual clutter in more complex user interfaces. We examine the basic concept of smooth pursuits, its hardware and algorithmic requirements and how this can be applied to real world problems. Then we evaluate how the recent change in availability of 3D eye tracking hardware can be used to approach the challenges of 2D smooth pursuit interaction. We take a look at different research opportunities, show concrete ideas and discuss why they are relevant for future research.
Movements of the human eyes provide useful insights into a person's physical and mental condition and abilities, and have attracted high attention in various industrial and academic fields. Although many methods for sensing eye movements have been proposed, each has problems sensing eye movements constantly in all situations. In this research, in order to realize a device that constantly recognizes eye movements in every scene, we adopt a sensing approach based on the uplift movements of the skin that accompany eye movements. The proposed method attaches an infrared distance sensor to the inside of a pair of glasses and senses eye movements from changes in the distance between the eye and the sensor. Necessary parameters such as eye movement direction are then recognized from extracted features using machine learning. We conducted two kinds of early-stage experiments and confirmed the feasibility of our proposed method for eye movement recognition.
Due to the explicit and implicit facets of gaze-based interaction, eye tracking is a major area of interest within the field of cognitive industrial assistance systems. In this position paper, we describe a scenario which includes a wearable platform built around a mobile eye tracker, which can support and guide an industrial worker throughout the execution of a maintenance task. The potential benefits of such a solution are discussed and the key components are outlined.
We have become accustomed to flat screens over a long time because flat screens are both easy to manufacture and easy to carry around. However, there is a huge gap between the 3D experiences that we see directly with our eyes and those viewed through a flat screen. This paper compares 3D experiences through eyewear and other media. Based on the strengths of eyewear, we suggest several design considerations for eyewear applications vividly conveying the actual 3D experience.
Commercial smart eyewear products designed for day-to-day use have drastically improved in recent years. However, the current utility and applications of smart eyewear can easily be substituted with other smart wearables. In this paper, we propose the integration of an artificial agent capable of performing gaze-based intention recognition on smart eyewear to extend the device's capabilities. As smart eyewear affords unobtrusive tracking of the user's gaze while the user interacts naturally with the world, it serves as a perfect platform to discreetly identify the user's intentions through gaze, allowing the agent to provide relevant, personalised and proactive assistance. We believe that integrating our proposed agent into smart eyewear is an achievable goal in the coming years, given the rapid progress in computer vision, wearable technology and socially interactive artificial agents. This paper therefore discusses our proof-of-concept intention-aware agent, followed by future opportunities and existing challenges for its integration in smart eyewear.
Emerging smart eyewear solutions offer new ways to present information. In addition, new technology provides an unobtrusive way to study human cognitive and attentional functions in real life. This combination presents a unique possibility to design more usable and intuitive interfaces and information visualizations for users. Flexible and fluent allocation of attention to relevant information is a key function needed in today's work life, especially in knowledge work. Supporting these essential functions, e.g., with intelligent glasses, can increase working fluency and decrease work-related strain. This study shows how work-related stress is associated with the ability to flexibly disengage and reorient cognitive (attention) resources. The results suggest that cognitive weariness related to job burnout impairs fluent and flexible attentional allocation.
Contemporary Head-Mounted Displays (HMDs) are progressively becoming socially acceptable by approaching the size and design of normal eyewear. Apart from the exciting interaction design prospects, HMDs bear significant potential for hosting an array of physiological sensors very adjacent to the human skull. As a proof of concept, we illustrate EEGlass, an early wearable prototype comprised of plastic eyewear frames that approximate the form factor of a modern HMD. EEGlass is equipped with an OpenBCI board and a set of EEG electrodes at the contact points with the skull for unobtrusively collecting data related to the activity of the human brain. We tested our prototype with one participant performing cognitive and sensorimotor tasks while wearing an established Electroencephalography (EEG) device to obtain a baseline. Our preliminary results show that EEGlass is capable of accurately capturing resting state and detecting motor actions and electrooculographic (EOG) artifacts. Further experimentation is required, but our early trials with EEGlass are promising in that HMDs could serve as a springboard for moving EEG outside of the lab and into our everyday life, facilitating the design of neuroadaptive systems.
In this paper we present a system prototype capable of assessing electrodermal activity (EDA), also referred to as galvanic skin response (GSR), in a smart eyewear form factor. EDA has a long history in psychological research and has been used for ground-truth recording in multiple scenarios. Human skin sweating has been shown to be related to the emotional state of the user, and we believe this could enable us to build systems that help us gain deeper insights into our behaviour, emotions and stress factors throughout the day. In this paper we discuss the background of EDA studies, present the concept and outline potential applications. We hope this work can inspire fruitful discussions and potential collaborations during the workshop.
In this paper we present an approach to detecting and estimating cognitive load [8, 9] by measuring temperature changes of the forehead and the nose bridge using smart eyewear. The presented system consists of a pair of glasses with four temperature sensors attached. We have found signs of a correlation between facial temperature changes and cognitive load using this approach, which makes it possible to develop a wearable device to quantitatively estimate cognitive engagement. Although we are still investigating the effect of different types of cognitive tasks and different difficulty levels on facial temperature, we believe this approach could spark interesting discussions during the workshop.
Premature technology, privacy, intrusiveness, power consumption, and user habits are all factors potentially contributing to the lack of social acceptance of smart glasses. After investigating the recent development of commercial smart eyewear and its related research, we propose a design space for ubiquitous smart eyewear interactions while maximising interactivity with minimal obtrusiveness. We focus on implicit and explicit interactions enabled by the combination of miniature sensor technology, low-resolution display and simplistic interaction modalities. Additionally, we are presenting example applications outlining future development directions. Finally, we aim at raising the awareness of designing for ubiquitous eyewear with implicit sensing and unobtrusive information output abilities.
Wearable displays with augmented reality are in an early phase of adoption by the public. The uptake is slow and user studies regarding acceptance and use are scarce and have limitations to understand long-term use and their influence on daily life. The complexity of understanding long-term use requires a multidisciplinary approach to different stakeholder perspectives. A variety of complementary theoretical perspectives are needed to create a framework to study the impact and perils of the use of smart eyewear. In this position paper, we develop such a framework, integrating theoretical perspectives from Philosophy, Psychology, Science and Technology Studies and Information Systems. The resulting framework has concrete implications for future studies regarding the design, acceptance and long-term use of smart eyewear.
The recognition of complex and subtle human behaviors from wearable sensors will enable next-generation human-oriented computing in scenarios of high societal value (e.g., dementia care). This will require large-scale human activity corpuses and much improved methods to recognize activities and the context in which they occur. This workshop deals with the challenges of designing reproducible experimental setups, running large-scale dataset collection campaigns, designing activity and context recognition methods that are robust and adaptive, and evaluating systems in the real world. We wish to reflect on future methods, such as lifelong learning approaches that allow open-ended activity recognition. This year HASCA will welcome papers from participants to the Second Sussex-Huawei Locomotion and Transportation Recognition Competition and Open Lab Nursing Activity Recognition Challenge in special sessions.
Sensor-based recognition of locomotion and transportation modes has numerous application domains including urban traffic monitoring, transportation planning, and healthcare. However, the use of a smartphone in a fixed position and orientation in previous research considerably restricted user behavior, and the performance of naive methods in position-independent settings has not been satisfactory. In this research, we have designed a position- and orientation-independent deep ensemble network (POIDEN) to classify eight modes of locomotion and transportation activities. The proposed POIDEN architecture consists of a Recurrent Neural Network (RNN) with LSTM that is assigned the task of selecting the optimum general classifier (random forest, decision tree, gradient boosting, etc.) to classify the activity labels. We have trained the RNN architecture using an intermediate feature set (IFS), whereas the general classifiers have been trained using a statistical classifier feature set (SCFS). The RNN chooses the classifier with the highest probability of recognizing a particular activity sample. We have also utilized the rotation of acceleration and magnetometer values from phone coordinates to earth coordinates, a proposed jerk feature, and position-insensitive features, along with parameter adjustment, to make the POIDEN architecture position and orientation independent. Our team "Gradient Descent" has presented this work for the "Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge".
In this paper, our team (Orange Labs) addresses the task of the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge (2019), which consists of recognizing the user's transportation mode from smartphone inertial sensor data, using recurrent neural networks. A bidirectional LSTM architecture is proposed to solve this challenge. The model was trained on rotation- and translation-invariant features in order to ignore the smartphone's orientation and location. Our preliminary results show that the proposed method reaches an accuracy of 60.4% for user transportation mode recognition, using the hand validation data as testing data. Building on this SHL recognition challenge, we also plan to take advantage of the obtained model and integrate it into our continuous identity authentication project.
The goal of the SHL recognition challenge 2019 is to recognize eight locomotion and transportation activities from the inertial sensor data of a smartphone. The dataset contains information from different phone placements (torso, bag, hips, hand). Participants must provide predictions on test data that contains sensor information from the Hand phone only. Only a small amount of labeled Hand phone data exists in the validation data, and the training data consist only of torso-, bag- and hips-placed devices. Team DB proposes to apply deep semi-supervised learning. As the basis of our model we chose the Adversarial Autoencoder (AAE) and employ convolutional networks for feature extraction. We show that semi-supervised learning makes it possible to utilize unlabeled test data during AAE training together with a small amount of labeled validation data and to achieve high model accuracy for the human activity recognition task.
For the Nurse Care Activity Recognition Challenge, an activity recognition algorithm was developed by Team TDU-DSML. A spatial-temporal graph convolutional network (ST-GCN) was applied to process 3D motion capture data included in the challenge dataset. Time-series data was divided into 20-second segments with a 10-second overlap. The recognition model with a tree-structure graph was then created. The prediction result was set to one-minute segments on the basis of a majority decision from each segment output. Our model was evaluated by using leave-one-subject-out cross-validation methods. An average accuracy of 57% for all six subjects was achieved.
Human activity recognition using multiple sensors is a challenging but promising task in recent decades. In this paper, we propose a deep multimodal fusion model for activity recognition based on the recently proposed feature fusion architecture named EmbraceNet. Our model processes each sensor data independently, combines the features with the EmbraceNet architecture, and post-processes the fused feature to predict the activity. In addition, we propose additional processes to boost the performance of our model. We submit the results obtained from our proposed model to the SHL recognition challenge with the team name "Yonsei-MCML."
Recent advances in Machine Learning, in particular Deep Learning have been driving rapid progress in fields such as computer vision and natural language processing. Human activity recognition (HAR) using wearable sensors, which has been a thriving research field for the last 20 years, has benefited much less from such advances. This is largely due to the lack of adequate amounts of labeled training data. In this paper we propose a method to mitigate the labeled data problem in wearable HAR by generating wearable motion data from monocular RGB videos, which can be collected from popular video platforms such as YouTube. Our method works by extracting 2D poses from video frames and then using a regression model to map them to sensor signals. We have validated it on fitness exercises as the domain for which activity recognition is trained and shown that we can improve a HAR recognition model using data that was produced from a YouTube video.
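A stripped-down version of the video-to-sensor mapping idea is sketched below. It is not the authors' regression model: it assumes flattened 2-D joint coordinates per frame, a short temporal window, and a ridge regression as the pose-to-accelerometer mapping, purely to illustrate the data flow.

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_pose_to_imu(poses, imu, window=5):
    """Sketch of mapping 2-D poses extracted from video frames to wearable sensor
    signals: a ridge regression predicts the accelerometer sample at time t from a
    short window of flattened joint coordinates (the paper's regression model and
    pose format may differ)."""
    X, y = [], []
    for t in range(window, len(poses)):
        X.append(poses[t - window:t].reshape(-1))   # window of (joints x 2) coords
        y.append(imu[t])                            # 3-axis accelerometer sample
    return Ridge(alpha=1.0).fit(np.array(X), np.array(y))

# Toy usage: 13 joints, synthetic pose and accelerometer streams of equal length
rng = np.random.default_rng(0)
poses = rng.normal(size=(300, 13, 2))
imu = rng.normal(size=(300, 3))
model = fit_pose_to_imu(poses, imu)
print(model.predict(poses[95:100].reshape(1, -1)).shape)   # (1, 3)
```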
This article introduces the architecture of a Long Short-Term Memory (LSTM) network for classifying transportation modes from smartphone data and evaluates its accuracy. Using an LSTM with common preprocessing steps such as normalisation, an F1 score of 63.68% was achieved on an internal test dataset. We participated as team "GanbareAMT" in the "SHL recognition challenge".
Convolutional Neural Network (CNN) filters learned in one domain can be used as feature extractors in another, similar domain. Transferring filters allows reusing datasets across domains and reduces labelling costs. In this paper, four activity recognition datasets were analyzed to study the effects of transferring filters across the datasets. A spectro-temporal ResNet was implemented as a deep, end-to-end learning architecture. We analyzed the number of transferred CNN residual blocks with respect to the size of the target-adaptation data. The analysis showed that transfer learning using small adaptation subsets is more useful when the target domain contains a small number of different activities. Furthermore, the similarity between the domains participating in the transfer learning scenario seems to play a role in its success. The most successful transfer achieved an F1-score of 93%, an increase of 9 percentage points compared to a domain-specific baseline model.
Human activity recognition is a challenging task due to the complexity and variation of human movements when activities are performed by different subjects. Extracting features that model the temporal evolution of different movements plays an important role in this task. In this paper, we present the approach followed by our team, Dark_Shadow, to recognize complex nurse activities in the "Nurse Care Activity Recognition Challenge" [1]. We present a deep learning method that captures the movements of essential body parts from time series of human activity data collected by sensors and then classifies them. Deep learning approaches have provided satisfactory results in various human activity recognition tasks. In this work, we propose a Gated Recurrent Unit (GRU) model with an attention mechanism to recognize the nurse activities. We obtain approximately 66.43% accuracy with leave-one-person-out cross-validation.
This paper describes an activity recognition method for the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge by Team TDU-DSML. The CNN model reported in our 2018 SHL Challenge submission was adopted. Five-second FFT spectrogram images from all axes of the acceleration and gyroscope sensor data were used as input. We confirmed that a multiple-sensor input model combining the acceleration and gyroscope sensors improves the recognition rate. However, there was insufficient training data in the SHL dataset for the target position labeled Hand. To overcome this difficulty, transfer learning was applied from models pre-trained on the other positions, labeled Hips and Torso. After evaluating all combinations of sensors, the transfer-learning model using Acc_norm and Gyr_x from Hips and Torso achieved the best recognition rate of 82.1% on Hand in the submission phase.
The Sussex-Huawei Locomotion Challenge 2019 was an open competition in activity recognition where the participants were tasked with recognizing eight different modes of locomotion and transportation. The main difficulty of the challenge is that the training data was recorded with a smartphone that was placed in a different body location than the test data. Only a small validation set with all locations was provided to enable transfer learning. This paper describes our (team JSI First) approach, in which we first derived additional sensor streams from the existing ones and on them calculated a large body of features. We then used cross-location transfer learning via specialized feature selection, and performed two-step classification. Finally, we used Hidden Markov Models to alter the predictions in order to take their temporal dependencies into account. Internal tests using this methodology yielded an accuracy of 83%.
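The final smoothing step, using an HMM to exploit the temporal dependency of consecutive predictions, can be illustrated with the Viterbi sketch below. The "sticky" transition matrix and class probabilities are illustrative assumptions; the actual transition model used by the team is not reproduced here.

```python
import numpy as np

def hmm_smooth(prob, stay=0.95):
    """Sketch of temporal smoothing of per-frame class probabilities with a simple
    'sticky' HMM: transitions strongly favor staying in the same activity, and the
    Viterbi path replaces the raw argmax predictions."""
    n_frames, n_classes = prob.shape
    trans = np.full((n_classes, n_classes), (1 - stay) / (n_classes - 1))
    np.fill_diagonal(trans, stay)
    log_p, log_t = np.log(prob + 1e-12), np.log(trans)
    score = np.zeros_like(log_p)
    back = np.zeros((n_frames, n_classes), dtype=int)
    score[0] = log_p[0]
    for t in range(1, n_frames):
        cand = score[t - 1][:, None] + log_t          # (from_state, to_state)
        back[t] = cand.argmax(axis=0)
        score[t] = cand.max(axis=0) + log_p[t]
    path = np.zeros(n_frames, dtype=int)
    path[-1] = score[-1].argmax()
    for t in range(n_frames - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

# Toy usage: a spurious one-frame class switch gets smoothed away
p = np.array([[0.9, 0.1], [0.8, 0.2], [0.4, 0.6], [0.9, 0.1]])
print(hmm_smooth(p))   # [0 0 0 0]
```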
For the last two decades, more and more complex methods have been developed to identify human activities using various types of sensors, e.g., motion capture, accelerometers, and gyroscopes. To date, most research has mainly focused on identifying simple human activities, e.g., walking, eating, and running. However, many of our daily-life activities are usually more complex than those. To stimulate research in complex activity recognition, the "Nurse Care Activity Recognition Challenge" [1] was initiated, in which six nurse activities are to be identified based on location, air pressure, motion capture, and accelerometer data. Our team, "IITDU", investigates the use of simple methods for this purpose. We first extract features from the sensor data and use one of the simplest classifiers, namely K-Nearest Neighbors (KNN). Experiments using an ensemble of KNN classifiers demonstrate that it is possible to achieve approximately 87% accuracy on 10-fold cross-validation and 66% accuracy on leave-one-subject-out cross-validation.
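One simple way to build such a KNN ensemble is sketched below; the bootstrap-and-vote scheme, ensemble size, and k are illustrative assumptions rather than the team's exact configuration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.utils import resample

def knn_ensemble_predict(X_train, y_train, X_test, n_models=10, k=5):
    """Sketch of an ensemble of K-Nearest Neighbors classifiers: each member is
    trained on a bootstrap sample of the training features and the ensemble
    prediction is a majority vote."""
    votes = []
    for i in range(n_models):
        Xb, yb = resample(X_train, y_train, random_state=i)
        member = KNeighborsClassifier(n_neighbors=k).fit(Xb, yb)
        votes.append(member.predict(X_test))
    votes = np.array(votes)                      # (n_models, n_test)
    # Majority vote per test sample
    return np.array([np.bincount(col).argmax() for col in votes.T])

# Toy usage with random features for 3 activity classes
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))
y = rng.integers(0, 3, 120)
print(knn_ensemble_predict(X, y, X[:5]))
```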
In this paper, we propose a gesture recognition method using acceleration data weighted by sEMG. Acceleration and sEMG are collected as training data, and gestures are recognized using only acceleration as input data. The dynamic time warping (DTW) algorithm is used for the distance calculation. Three-axis acceleration data and sEMG were collected for three types of baseball pitching forms (overarm, sidearm, and underarm), each beginning from three types of preliminary motions (no windup, quick, and windup). We investigate the changes in sEMG during the pitching motion. The distance calculation is adjusted according to the sEMG amplitude, reducing the influence of unstable motions. In evaluation experiments, the proposed method achieved higher accuracy for the windup form than a comparison method that does not use sEMG, even when the amount of training data changes.
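The idea of scaling the DTW local distance by an sEMG-derived weight can be sketched as below. How the weight is derived from the sEMG envelope, and the specific weighting scheme, are illustrative assumptions; the paper's exact distance modification may differ.

```python
import numpy as np

def weighted_dtw(query_acc, template_acc, template_weight):
    """Sketch of DTW where the local distance is scaled by a per-frame weight
    derived from the template's sEMG amplitude, so unstable (low-activation)
    portions of the motion contribute less to the total distance."""
    n, m = len(query_acc), len(template_acc)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = template_weight[j - 1] * np.linalg.norm(
                query_acc[i - 1] - template_acc[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy usage: 3-axis acceleration sequences; weight from a normalized sEMG envelope
rng = np.random.default_rng(0)
template = rng.normal(size=(30, 3))
query = template + rng.normal(scale=0.1, size=(30, 3))
semg = np.abs(rng.normal(size=30))
weight = semg / semg.max()
print(round(weighted_dtw(query, template, weight), 3))
```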
Although activity recognition has been studied for a long time, research and applications have focused on physical activity recognition. Even though many application domains require the recognition of more complex activities, research on such activities has attracted less attention. One reason for this gap is the lack of datasets with which to evaluate and compare different methods. To promote research in such scenarios, we organized the Open Lab Nursing Activity Recognition Challenge, focusing on the recognition of complex activities in the nursing domain. Nursing is one of the domains that can benefit enormously from activity recognition but has been little researched due to the lack of datasets. The competition used the CARE-COM Nurse Care Activity Dataset, featuring 7 activities performed by 8 subjects in a controlled environment with accelerometer sensors, motion capture and an indoor location sensor. In this paper, we summarize the results of the competition.
Numerous studies on emotion recognition from physiological signals have been conducted in laboratory settings. However, differences in the data on emotions elicited in the lab and in the wild have been observed. Thus, there is a need for systems collecting and labelling emotion-related physiological data in ecological settings. This paper proposes a new solution to collect and label such data: an open-source mobile application (app) based on the appraisal theory. Our approach exploits a commercially available wearable physiological sensor connected to a smartphone. The app detects relevant events from the physiological data, and prompts the users to report their emotions using a questionnaire based on the Ortony, Clore and Collins (OCC) Model. We believe that the app can be used to collect emotional and physiological data in ecological settings and to ensure high quality of ground truth labels.
Mode of transport recognition is an important part of understanding a person's context, and a mobile phone is the best device on which to do this since a person typically carries it around for most of the day. In this paper, we present a method to recognize the mode of transport (also called locomotion recognition) using data collected from Android mobile phones. The method is the solution submitted by "Team Jellyfish" to the Sussex-Huawei Locomotion-Transportation recognition challenge. The goal is to develop a body-position-independent classifier that uses data from a set of commonly available phone sensors: accelerometer, gyroscope, magnetometer and barometer. The solution is an ensemble of XGBoost and neural network classifiers. The precision and recall using 5-fold cross-validation on the validation set are 0.946 and 0.945, respectively.
A big challenge in activity data collection is that it unavoidably relies on users, who must be kept motivated to provide labels. In this paper, we propose the idea of exploiting gamification points to motivate users for activity data collection, using an uncertainty-based active learning approach to compute those points. The novel idea behind this is that we score unlabeled examples according to the current model's uncertainty in its prediction of the corresponding activity labels, and use that score as gamification points. Users are thus motivated by receiving gamification points as feedback based on the quality of their data annotation. We validate the proposed method on 1,236 activity labels collected with smartphone sensors. The evaluation shows that our proposed method improves data quality, data quantity, and user engagement, reflecting an overall improvement in activity data collection.
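One straightforward way to turn model uncertainty into points is via prediction entropy, as in the sketch below; the normalization and the maximum point value are illustrative assumptions, not the paper's scoring function.

```python
import numpy as np

def gamification_points(class_probs, max_points=10):
    """Sketch of converting a model's predictive uncertainty for an unlabeled
    activity sample into gamification points: higher entropy (the model is less
    sure) means the user's label is worth more points."""
    p = np.asarray(class_probs, dtype=float)
    entropy = -np.sum(p * np.log(p + 1e-12))
    max_entropy = np.log(len(p))                 # entropy of a uniform distribution
    return int(round(max_points * entropy / max_entropy))

# Toy usage: a confident prediction earns few points, an uncertain one earns many
print(gamification_points([0.97, 0.01, 0.01, 0.01]))   # 1
print(gamification_points([0.30, 0.25, 0.25, 0.20]))   # 10
```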
This paper presents a method to collect training labels for human activity recognition by using a dialogue system. To show the feasibility of dialogue-based annotation, we implemented the dialogue system and conducted experiments in a lab setting. The preliminary activity recognition performance attained an F-measure of 0.76. We also analyze the collected data to provide a better understanding of what users expect from the system, how they interact with it, and its other potential uses. Finally, we discuss the results obtained and possible directions for future work.
This paper presents a hybrid ballroom dance step type recognition method using video and wearable sensors. Learning ballroom dance is very difficult for less experienced dancers because it has many complex types of steps. Therefore, our purpose is to recognize the various step types to support step learning. While the major approach to recognizing dance performance is to use video, we cannot simply adopt it for ballroom dance because the dancers' images overlap each other. To solve this problem, we propose a hybrid step recognition method combining video and wearable sensors to enhance accuracy and robustness. We collect seven dancers' video and wearable sensor data, including acceleration, angular velocity, and body part location changes. We then pre-process the data and extract feature values to recognize the step types. Using Random Forest for recognition, we confirmed that our approach achieved an F1-score of 0.760 for recognizing 13 step types. Finally, we will open our ballroom dance dataset to the HASCA community for further research opportunities.
Traveling by public transportation is a daily activity for many people, so the number of passengers can provide useful congestion information to other passengers. In this paper, we describe a practicability study of using polynomial regression to predict the number of waiting passengers at a bus stop by passively monitoring Wi-Fi activity from mobile devices. We also present experimental results comparing the predicted number of passengers to the actually observed number of passengers at the bus stop.
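The regression step itself is simple enough to sketch directly; the toy device counts, passenger counts, and polynomial degree below are invented for illustration and are not the study's data.

```python
import numpy as np

# Sketch: fit a polynomial mapping from the number of distinct Wi-Fi probe sources
# observed at a bus stop to the manually counted number of waiting passengers,
# then use it to predict counts from new Wi-Fi observations.
wifi_devices = np.array([5, 12, 18, 25, 33, 41, 52])   # distinct MACs per window (toy)
real_waiting = np.array([3,  8, 11, 16, 20, 26, 31])   # observed passengers (toy)

coeffs = np.polyfit(wifi_devices, real_waiting, deg=2)  # 2nd-order polynomial fit
predict = np.poly1d(coeffs)

print(round(float(predict(30)), 1))   # predicted passengers for 30 observed devices
```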
Human activity recognition is an important research topic in machine learning, and many algorithms have been proposed for it over the years, mainly focused on physical activity recognition. Nurse care activity recognition, in contrast, is a new area. As members of team "Data Digger", we accepted the "Nurse Care Activity Recognition Challenge" and used the Carecom nurse care activity dataset [14]. We compare several machine learning algorithms on this dataset. The proposed method was trained and evaluated on the Carecom nurse care activity recognition dataset and achieved 69% accuracy on a held-out split.
Crowdsensing applications are a popular and common research tool, because they allow volunteering participants to provide valuable data via their mobile phones with minimal effort. In most scenarios, it is an important goal to gather data in a reliable and continuous way, while the app runs in the background to avoid disturbing the user. However, in recent versions, Android as well as iOS severely restrict the functionality of an app when it does not have the authorization of a foreground process.
In this work, we present a structured overview of the technical state of background service restrictions under iOS (12) and Android (9). We demonstrate a practical approach for working with these restrictions by utilizing the respective operating system's location provider solution.
The Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge provides a large dataset including training, validation and test data. Our team name is ICT-BUPT League, and our main task is to recognize eight transportation modes. Our method combines a CNN and an LSTM. In this system, the CNN learns feature representations that are robust for transportation mode recognition. We then apply LSTM units to the CNN output, which act as a structured dimensionality reduction of the feature vector and lead to drastic improvements in transportation mode recognition performance. As a result, the CNN+LSTM transportation mode recognition system predicts the eight classes with a success rate of 76%.
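A generic CNN+LSTM stack of this kind is sketched below. The channel counts, kernel sizes, window length, and sampling rate are illustrative assumptions and do not reflect the team's actual hyperparameters.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Sketch of a CNN+LSTM classifier for transportation mode recognition:
    1-D convolutions extract local features from multichannel inertial data,
    an LSTM summarizes the sequence, and a linear head predicts one of 8 modes."""
    def __init__(self, n_channels=6, n_classes=8, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):             # x: (batch, channels, time)
        feat = self.cnn(x)            # (batch, 64, time/4)
        feat = feat.transpose(1, 2)   # (batch, time/4, 64) for the LSTM
        out, _ = self.lstm(feat)
        return self.head(out[:, -1])  # logits for the 8 classes

# Toy usage: a batch of 5-second windows at 100 Hz with 6 inertial channels
model = CNNLSTM()
logits = model(torch.randn(4, 6, 500))
print(logits.shape)   # torch.Size([4, 8])
```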
Vision-based human activity recognition can provide rich contextual information but has traditionally been computationally prohibitive. We present a characterisation of five convolutional neural networks (DenseNet169, MobileNet, ResNet50, VGG16, VGG19) implemented with TensorFlow Lite and running on three state-of-the-art Android mobile phones. The networks have been trained to recognise 8 modes of transportation from camera images using the SHL Locomotion and Transportation dataset. We analyse the effect of thread count and backend (CPU, GPU, Android Neural Networks API) on classifying the images provided by the rear camera of the phones. We report processing time and classification accuracy.
The goal of the SHL recognition challenge 2019 is to recognize transportation modalities in a sensor-placement-independent manner. In this paper, the performance of shallow neural networks on the challenge dataset is benchmarked by Team Orion in such a manner, using 156 handcrafted temporal and spectral features per sensor and applying parallel processing and an out-of-memory architecture. Using the scaled conjugate gradient back-propagation (SCGB) algorithm, combining classes 7 and 8, and taking 5000 frames of bag-hips-torso data from the validation set, a classification accuracy of 87.2% was obtained on the validation data of the same labels with a shallow two-layer feed-forward network. An accuracy of 71% was obtained on the validation set for classes 7 and 8 via transfer of 2500 frames using another shallow neural network of similar architecture. Using an empirically observed, variable-based transfer of 7088 frames from the hand validation data to the training dataset, 77.5% accuracy was obtained on the hand validation data for classes 1 to 7/8, and 70% classification accuracy for classes 7 and 8 via transfer of 1809 frames from the hand validation data. The results illustrate how carefully crafted features, coupled with empirical transfer of labeled knowledge and combination of problematic classes, can tune a neural classifier to work in a new feature space.
During the dataset creation process for activity and context recognition research, manual annotation of ground truth events can be a time-consuming and error-prone task. In the typical use case, one or more annotators have to go over the videos recorded during the experiments and label what happens at what time of the experiment. In this paper, we introduce a new open source, web-based tool to assist researchers to create event annotations easily and to leverage group work by supporting intuitive collaboration features. We provide an overview of the main technical components and their respective technologies used to realize the tool. The first version of the tool with the core features for video annotation is implemented and publicly available. By using containerized services, the deployment involves only a small number of steps and dependencies. We point out some possible direction for future development and customization options for using the tool in other annotation tasks.
Human activity recognition requires constructing a model that has learned, in advance, from sensor data with annotations, i.e., ground-truth activity labels. Therefore, a large and diverse set of annotated data is needed to improve and evaluate model performance. Since it is difficult to judge the user's situation even after seeing the acceleration data, annotations must be added to the collected acceleration data. In this paper we propose a method that estimates the user and device situations from the user's response to notifications generated by a device such as a smartphone. The user and device situations are estimated from the user's response time to the notification and the acceleration values on the device. Estimation results with high confidence are attached to the sensor data as annotations. In the evaluation experiment with seven annotation classes, average precisions of 0.769 and 0.963 were achieved for user-independent and user-dependent experiments, respectively.
This paper presents the first attempt to create a corpus of human-to-human multi-modal communication among multiple persons in group discussions. Our corpus includes not only video of the conversations but also head movement and eye gaze. In addition, it includes detailed labels of the behaviors that appeared in the discussions. Since we focus on micro-behavior, we classified general behaviors into more detailed behaviors based on their meaning; for example, we distinguish four types of smile: response, agreement, interest, and sympathy. Because creating such a corpus with multiple sensor streams and detailed labels takes a great deal of effort, to our knowledge no such corpus has existed. In this work, we created a corpus called the "M3B Corpus (Multi-Modal Meeting Behavior Corpus)," which includes 320 minutes of discussion among 21 Japanese students in total, by developing a recording system that can handle multiple sensors and a 360-degree camera simultaneously and synchronously. In this paper, we introduce our recording system and report the details of the M3B Corpus.
In this paper, activity recognition is performed using an optical motion capture system that measures the three-dimensional positions of reflective markers attached to the body. The individual markers detected by motion capture are automatically associated with the body part they are attached to. However, due to occlusion by obstacles and other body parts, and to misplacement of markers, markers may be hidden from the camera and enter a blind spot, which frequently causes a marker to be associated with the wrong body part. Usually, these errors need to be corrected manually after measurement, but this work is time-consuming, cumbersome and requires some skill. In this research, we argue that activities can still be recognized even if the effort of correcting the marker-to-body correspondence after measurement is omitted. Because feature quantities are extracted from the activity data when performing recognition, errors in part of the marker data have only a small effect: the correct feature quantities are selected and the other marker data can compensate for the error. In addition, we propose a method to recognize activities from data recorded without the human body template preparation normally required before motion capture measurement, which is part of the marker-body matching work. The verification showed that even when the marker-body matching operation was omitted, the activities could be recognized with high accuracy.
Pedestrian dead reckoning (PDR) is a method of estimating relative position from an initial position using only an accelerometer and a gyroscope. In recent years, hearable devices have become increasingly popular, and there is much research on head pose estimation with them. In this paper, we aim to realize PDR that takes head pose into account using only sensor data obtained from hearable devices. However, horizontal head swing affects the estimation of the traveling direction in PDR when the sensor is worn on the ear. We use the difference between the accelerations measured at the two ears to detect head swing. For evaluation, we created a device with an accelerometer and gyroscope attached to the left and right speakers of a headphone. As a result of the evaluation, the accuracy of swing motion estimation is 88.0% and the F-measure of head swing detection is 0.87. In conclusion, the swing detection result is incorporated into PDR, realizing PDR that uses sensor data obtained from a hearable device.
In this paper we summarize the contributions of participants to the Sussex-Huawei Transportation-Locomotion (SHL) Recognition Challenge organized at the HASCA Workshop of UbiComp 2019. The goal of this machine learning/data science challenge is to recognize eight locomotion and transportation activities (Still, Walk, Run, Bike, Bus, Car, Train, Subway) from the inertial sensor data of a smartphone in a placement-independent manner. The training data was collected with smartphones placed at three body positions (Torso, Bag and Hips), while the testing data was collected with a smartphone placed at another body position (Hand). We introduce the dataset used in the challenge and the protocol for the competition. We present a meta-analysis of the contributions from 14 submissions, their approaches, the software tools used, computational cost and the achieved results. Overall, three submissions achieved F1 scores between 70% and 80%, five between 60% and 70%, five between 50% and 60%, and one below 50%, with a maximum latency of 5 seconds.
We present our submission (team S304) to the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge 2019. The goal is to recognize 8 modes of transport and locomotion from 5 second frames of inertial sensor data of a smartphone carried in the hand, while most of the labelled data provided for classifier training consists of data from three other smartphone placements: hips, torso and bag. Only a small dataset from a smartphone carried in the hand was provided. Model training is complicated by the fact that the data distribution differs between the phone positions. To optimize classification performance for data from the Hand phone, we employ an ensemble of Multilayer Perceptrons, each trained with data from a different particular smartphone placement, including the small dataset of the Hand phone. We propose an iterative re-weighting scheme for combining the classifiers that takes their agreement with the specialized Hand classifier into account. The proposed method achieves 74% average per-class Recall, significantly improving the performance achieved when training with mixed data from all phone placements (59%) and training with data from the Hand phone only (66%). The ensemble-based method also outperforms domain adaptation by Feature Augmentation, which achieves 70% average Recall.
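The agreement-based re-weighting of placement-specific classifiers can be pictured roughly as below. This is a simplified sketch with invented weighting and stopping rules, not the team's exact scheme; it only illustrates how weights could be updated from agreement with the Hand-specialized classifier.

```python
import numpy as np

def reweighted_ensemble(probs_by_placement, hand_key="hand", n_iters=5):
    """Sketch of combining placement-specific classifiers by iteratively
    re-weighting them according to their agreement with the Hand-specialized
    classifier. probs_by_placement maps placement name -> (n_samples, n_classes)
    probability array."""
    keys = list(probs_by_placement)
    hand_pred = probs_by_placement[hand_key].argmax(axis=1)
    combined = None
    for _ in range(n_iters):
        # Agreement of each classifier's hard predictions with the reference
        agree = {k: (probs_by_placement[k].argmax(axis=1) == hand_pred).mean()
                 for k in keys}
        total = sum(agree.values())
        weights = {k: agree[k] / total for k in keys}
        # Recompute the reference predictions from the weighted ensemble
        combined = sum(weights[k] * probs_by_placement[k] for k in keys)
        hand_pred = combined.argmax(axis=1)
    return combined.argmax(axis=1), weights

# Toy usage: three placements, 10 samples, 8 classes
rng = np.random.default_rng(0)
probs = {k: rng.dirichlet(np.ones(8), size=10) for k in ["hand", "hips", "torso"]}
preds, w = reweighted_ensemble(probs)
print(preds, w)
```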
There are many inertial-sensor-based indoor localization methods for smartphones, for example SINS and PDR. However, most MEMS sensors in smartphones are not precise enough for these methods. We previously proposed an end-to-end walking speed estimation method using deep learning to perform robust walking speed estimation with a low-precision sensor. Currently, we use input data in a fixed format of 200 samples at 100 Hz. However, the sampling rate and sequence length should be adapted to the required accuracy and terminal performance. They are critical factors when using our method for a long time on a terminal, because continuous processing of a large amount of data shortens battery life. In this paper, we evaluate the accuracy of the speed estimated by our method when changing the sampling rate and sequence length. Across the five combinations tested, the estimation accuracy hardly changed.
We propose a human activity recognition method using an Independently Recurrent Neural Network (IndRNN) on the spectrum for the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge 2019. The proposed method takes advantage of the FFT and the IndRNN to obtain short-time and long-time features, respectively. To be specific, since the signal obtained by smartphone sensors is strongly periodic, it is first transformed into a spectrum by the FFT in a one-second sliding window to obtain short-time features. Then, the spectra of 21 overlapping windows extracted from the 5-second data are processed by the IndRNN to exploit the correlation among the FFT spectra of all windows and obtain long-time features for the final activity classification. To obtain a phone-position-independent model, the training data from three phone locations (bag, hips, torso) is used to train the IndRNN model, which is further fine-tuned with half of the validation data from the hand position (a relatively small amount of data that can be obtained in practical applications). The proposed method achieved 82.6% validation accuracy on the hand position, and the results have been submitted to the SHL recognition challenge as "UESTC_IndRNN".
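To illustrate the short-time feature extraction described above, the following Python sketch slices a 5-second, single-channel signal into 21 overlapping one-second windows and returns the magnitude spectrum of each; the 100 Hz sampling rate and the hop computation are assumptions for illustration only:

    import numpy as np

    def short_time_spectrum(signal, fs=100, win_sec=1.0, n_windows=21):
        """Magnitude spectra of 21 overlapping 1-second windows from a 5-second signal.

        The hop length is chosen so that exactly n_windows fit; the 100 Hz rate is
        an assumption. The result (shape [n_windows, win // 2 + 1]) is the kind of
        spectrum sequence that could be fed to the IndRNN.
        """
        win = int(win_sec * fs)
        hop = (len(signal) - win) // (n_windows - 1)
        frames = np.stack([signal[i * hop : i * hop + win] for i in range(n_windows)])
        return np.abs(np.fft.rfft(frames, axis=1))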
The Sussex-Huawei Transportation-Locomotion (SHL) Recognition Challenge organized at the HASCA Workshop of UbiComp 2019 presents a large and realistic dataset with different activities and transportation modes. The goal of this machine learning/data science challenge is to recognize eight modes of locomotion and transportation from the inertial sensor data of a smartphone in a phone-placement-independent manner. In this paper, our team (We can fly) summarizes our submission to the competition. We propose a 1D DenseNet model, a deep learning method for transportation classification. We first convert the sensor readings from the phone coordinate system to the navigation coordinate system. Then, we normalize each sensor using its own maximum and minimum and construct a multichannel sensor input. Finally, the 1D DenseNet model outputs the predictions. In the experiments, we used three internal datasets to train our model and achieved an averaged F1 score of 0.78 on four internal datasets.
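As a simple illustration of the per-sensor normalization and channel-stacking step mentioned above (not the authors' code), each channel can be scaled with per-channel minima and maxima estimated from the training data before it is fed to the 1D CNN:

    import numpy as np

    def normalize_and_stack(sensors, mins, maxs):
        """Per-sensor min-max normalization to [0, 1], then channel stacking.

        sensors: list of 1-D arrays of equal length (one per sensor channel);
        mins, maxs: per-channel constants estimated from the training data.
        Returns an array of shape [n_channels, n_samples] for the 1D CNN.
        Channel layout and constants are illustrative assumptions.
        """
        x = np.stack(sensors)                       # [n_channels, n_samples]
        mins = np.asarray(mins)[:, None]
        maxs = np.asarray(maxs)[:, None]
        return (x - mins) / (maxs - mins + 1e-8)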
Individuals increasingly use mobile, wearable, and ubiquitous devices capable of unobtrusive collection of vast amounts of scientifically rich personal data over long periods (months to years), and in the context of their daily life. However, numerous human and technological factors challenge longitudinal data collection, often limiting research studies to very short data collection periods (days to weeks), spawning recruitment biases, and affecting participant retention over time. This workshop is designed to bring together researchers involved in longitudinal data collection studies to foster an insightful exchange of ideas, experiences, and discoveries to improve the studies' reliability, validity, and perceived meaning of longitudinal mobile, wearable, and ubiquitous data collection for the participants.
In social interaction systems, the formation and testing of theories is significantly difficult because such systems cannot be easily manipulated and controlled. It is also not possible to reproduce large-scale systems in a lab setting or within a short, fixed time duration. Detecting short-term, non-recurrent interactions between individuals is very different from studying an individual's long-term social group(s). However, over the last decade the availability of digital data from smartphones and wearables has increased consistently at a high pace, which allows social scientists to gain a comprehensive understanding of how groups form and evolve over time using recurrent in-person interaction networks. In this paper, we design a long-term data-driven study on a finite student population of a residential university campus. Our aim is to study a student's recurrent in-person interactions, or long-term social groups, from the time one enters a cohort, e.g. the Class of 2022, until that cohort graduates. In this sensor-data-driven study using state-of-the-art interaction-detection algorithms, we monitor parameters such as social group size, formation time and longevity. We also conduct a retrospective cohort analysis of self-reported social group parameters, e.g. social group size, time spent with each group type and associated satisfaction. Preliminary results make a strong case for a longitudinal study, especially as indicated by the evolution of one's social circles over a long period of time.
In order to build fairer Artificial Intelligence applications, a thorough understanding of human morality is required. Given the variable nature of human moral values, AI algorithms will have to adjust their behaviour based on the moral values of their users in order to align with end-user expectations. Quantifying human moral values is, however, a challenging task that cannot easily be completed using, for example, surveys. In order to address this problem, we propose the use of game theory in longitudinal mobile sensing deployments. Game theory has long been used in disciplines such as economics to quantify human preferences by asking participants to choose between a set of hypothetical options and outcomes. The behaviour observed in these games, combined with the use of mobile sensors, enables researchers to obtain unique insights into the effect of context on participant convictions.
Whilst the literature is rich in lessons learned from recruitment and retention of participants in longitudinal studies, papers sharing practical experience of implementing such studies with or about ICT are lacking. We discuss the challenges and lessons learned in four longitudinal studies with older adults and chronic disease patients for the assessment of self-care technology. Though apparently prosaic, everyday challenges and potential threats to studies with non-mainstream audiences may be hard to anticipate. A reflection by the researchers leading these studies led to three main themes associated with the studies' timelines, which are described with practical examples.
Mental health issues affect a significant portion of the world's population and can result in debilitating and life-threatening outcomes. To address this increasingly pressing healthcare challenge, there is a need to research novel approaches for early detection and prevention. Toward this, ubiquitous systems can play a central role in revealing and tracking clinically relevant behaviors, contexts, and symptoms. Further, such systems can passively detect relapse onset and enable the opportune delivery of effective intervention strategies. However, despite their clear potential, the uptake of ubiquitous technologies into clinical mental healthcare is slow, and a number of challenges still face the overall efficacy of such technology-based solutions. The goal of this workshop is to bring together researchers interested in identifying, articulating, and addressing such issues and opportunities. Following the success of this workshop in the last three years, we aim to continue facilitating the UbiComp community in developing a holistic approach for sensing and intervention in the context of mental health.
Smartwatches provide a unique opportunity to collect more speech data because they are always with the user and also have a more exposed microphone compared to smartphones. Speech data can be used to infer various indicators of mental well-being such as emotions, stress and social activity. Hence, real-time voice activity detection (VAD) on smartwatches could enable the development of applications for mental health monitoring. In this work, we present VADLite, an open-source, lightweight system that performs real-time VAD on smartwatches. It extracts mel-frequency cepstral coefficients and classifies speech versus non-speech audio samples using a linear Support Vector Machine. The real-time implementation is done on the Wear OS Polar M600 smartwatch. An offline and online evaluation of VADLite using real-world data showed better performance than WebRTC's open-source VAD system. VADLite can be easily integrated into Wear OS projects that need a lightweight VAD module running on a smartwatch.
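A minimal sketch of the MFCC-plus-linear-SVM pipeline that the VADLite abstract describes could look as follows in Python; the 16 kHz rate, 13 coefficients and use of librosa/scikit-learn are illustrative assumptions, since VADLite itself runs on Wear OS:

    import numpy as np
    import librosa
    from sklearn.svm import LinearSVC

    def mfcc_features(audio, sr=16000, n_mfcc=13):
        """Average MFCCs over one audio frame (speech/non-speech feature vector)."""
        mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
        return mfcc.mean(axis=1)

    # Training on labelled snippets (X_train: feature rows, y_train: 1 = speech, 0 = non-speech):
    # clf = LinearSVC().fit(X_train, y_train)
    # At run time, classify each incoming frame:
    # is_speech = clf.predict(mfcc_features(frame)[None, :])[0]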
Listening to music has been studied as a method for combating the rapidly increasing stress levels of adolescents. Previous studies yielded inconsistent results and neglected specific factors, including the time relative to the stressor and the duration for which participants listened to music. We conducted a survey and a lab experiment to investigate the impact of these factors on the stress-reducing effect of music. The survey contained questions regarding music preference, stress, and use of music for stress reduction. In the lab experiment, the math task of the Trier Social Stress Test (TSST) was used to induce stress in participants. Three experimental groups listened to music for either five minutes before the stressor, five minutes after the stressor, or ten minutes after the stressor, with the control group not listening to any music. Heart rate variability was continuously monitored with a wearable device, the Empatica, and used to derive stress levels. The survey received 251 responses and 42 students participated in the lab experiment. The results showed that listening to music before the stressor resulted in significantly lower stress levels than listening to music after the stressor (p < 0.01). This finding, contrary to our survey results, revealed that the "preventive" effect of listening to music prior to the stressor was stronger than the "remedial" effect of listening after the stressor.
Research in the development of tools to successfully deliver psychological interventions through smartphones is growing rapidly. As the research body grows towards more cutting-edge solutions that utilize the smartphone's advanced technical capabilities, various challenges are uncovered to successfully and efficiently deliver safe interventions in critical scenarios and situations. We present the SyMptOMS platform, a configurable set of tools that allows therapists to specify, deploy and follow-up location- and sensor-based assessments and interventions for various mental disorders, run and delivered remotely via the patient's smartphone at any place (ecological) and time (momentarily). From our experience in developing and running experiments with SyMptOMS, we overview and discuss technical challenges and open research questions involved in sensor-based interventions using smartphones.
Recognizing human cognitive performance is important for preserving working efficiency and preventing human error. This paper presents a method for estimating cognitive performance by leveraging multiple information available in a smartphone. The method employs the Go-NoGo task to measure cognitive performance, and fuses contextual and behavioral features to identify the level of performance. It was confirmed that the proposed method could recognize whether cognitive performance was high or low with an average accuracy of 71%, even when only referring to inertial sensor logs. Combining sensing modalities improved the accuracy up to 74%.
Differences in voice features such as tone, volume, intonation, and rate of speech have been suggested as sensitive and valid measures of mental illness. Researchers have used analysis of voice recordings from phone calls, responses to IVR systems, and smartphone-based conversational agents as markers for continuous monitoring of symptoms and the effect of treatment in patients with mental illness. While these methods of recording the patient's voice have been considered efficient, they come with a number of issues in terms of adoption, privacy, security, and data storage. To address these issues, we propose a smart-speaker-based conversational agent, "Hear me out". In this paper, we describe the proposed system, the rationale behind using smart speakers, and the challenges we are facing in the design of the system.
The reactions of the human body to physical exercise, psychophysiological stress and heart disease are reflected in heart rate variability (HRV). Thus, continuous monitoring of HRV can contribute to determining and predicting issues in well-being and mental health. HRV can be measured in everyday life by consumer wearable devices such as smartwatches, which are easily accessible and affordable. However, their accuracy is debatable owing to limited sensor stability. We hypothesize a systematic error which is related to the wearer's movement. Our evidence builds upon explanatory and predictive modeling: we find a statistically significant correlation between the error in HRV measurements and the wearer's movement. We show that this error can be minimized by bringing additional available sensor information, such as accelerometer data, into context. This work demonstrates our research-in-progress on how neural learning can minimize the error of such smartwatch HRV measurements.
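As a rough illustration of the idea (not the paper's model), a small regressor could be trained to predict the HRV measurement error from accelerometer-derived movement features and then used to correct the smartwatch reading; all variable names here are hypothetical:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # X: per-window accelerometer features (e.g. mean magnitude, variance of magnitude),
    # y: difference between the smartwatch HRV and a reference-grade HRV measurement.
    # Both arrays are hypothetical stand-ins for the study's data.
    def fit_error_model(X, y):
        """Learn to predict the movement-related HRV error from accelerometer features."""
        model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
        return model.fit(X, y)

    # Correcting a new reading:
    # model = fit_error_model(X, y)
    # corrected_hrv = watch_hrv - model.predict(X_new)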
Parkinson's Disease (PD) is a long-term neurodegenerative disorder that affects over four million people worldwide. State-of-the-art mobile and wearable sensing technologies offer the prospect of enhanced clinical care pathways for PD patients through integration of automated symptom tracking within current healthcare infrastructures. Yet, even though sensor data collection can be performed efficiently today using these technologies, automated inference of high-level severity scores from such data is still limited by the lack of validated evidence, despite a plethora of published research. In this paper, we introduce PDkit, an open source toolkit for PD progression monitoring using multimodal sensor data obtained by smartphone apps or wearables. We discuss how PDkit implements an information processing pipeline incorporating distinct stages for data ingestion and quality assessment, feature and biomarker estimation, and clinical scoring using high-level clinical scales. Finally, we demonstrate how PDkit facilitates outcome reproducibility and algorithmic transparency in the CUSSP clinical trial, a pilot, dual-site, open label study.
We discuss the importance of designing self-tracking technologies for serious mental illness (SMI) that allow individuals with SMI to collect, share, and sense-make over data with a dynamic set of support system members. Our collaborative work with individuals diagnosed with bipolar disorder has suggested the following design and technical challenges for supporting social practices around personal data in long-term mental health management: allowing for fine-grained control over data disclosure by individuals with SMI, supporting dynamism in relationships and roles over long-term use of a system, and allowing individuals flexibility in the variables that they self-track. We discuss these challenges and how they relate to the goals of predictive modelling and intervention in mental health personal informatics systems.
Human touch triggers the release of the hormone oxytocin, so touch can be an effective treatment to alleviate depression and anxiety, such that patients do not need to seek help from counsellors or drug medication to improve their mental condition. In this paper, we have developed a wearable robot that mimics human affective touch to build social bonds and regulate emotion and cognitive functions. The touch-stimulated emotion is measured by brainwaves from 4 EEG electrodes placed on the parietal, prefrontal, and left and right temporal lobe regions of the brain. A novel deep learning emotion decoder has been designed to identify human affective, non-affective and neutral emotions. This paves the way for a future intelligent, self-adaptive robot that understands human emotions and adjusts its touch stimulation patterns accordingly to regulate human mental states and treat depression and anxiety.
The 8th Workshop on Pervasive Urban Applications (PURBA 2019) aims to build on the success of the previous workshops organized in conjunction with the Pervasive (2011-12) and UbiComp (2013, 2015-18) to continue to disseminate the results of the latest research outcomes and developments of ubiquitous computing technologies for urban applications. All workshop contributions are published in supplemental proceedings of the UbiComp 2019 conference and included in the ACM Digital Library.
Regulatory compliance is an essential exercise in modern societies, confirming safety and preventing harm to consumers. Despite many efforts from international and national quality control authorities, transparency and accountability in regulatory compliance remain a challenging technical-legal problem that rests on a heavy reliance on trust. This paper presents a theoretical model of regulatory compliance aimed at improving accountability in systems and data audit and introducing a higher degree of transparency in management and quality control. It explores the technical aspects of two emerging technologies, the Internet of Things (IoT) and blockchain, and, using a common practical use case, shows how to better align these technologies with legal concerns and trust in regulatory compliance.
Intelligent public transportation systems are a cornerstone of any smart city, particularly given the advancements made in self-driving autonomous vehicles. For autonomous buses, it is difficult to systematize a way to identify the arrival at a bus stop on the fly so that the bus can halt appropriately and notify its passengers. This paper proposes an automatic and intelligent bus stop recognition system built on computer vision techniques, deployed on a low-cost single-board computing platform with minimal human supervision. The on-device recognition engine extracts the features of a bus stop and its surrounding environment, which eliminates the need for a conventional Global Positioning System (GPS) look-up, thereby alleviating network latency and accuracy issues. The dataset proposed in this paper consists of images of 11 different bus stops taken at different locations in Chennai, India during day and night. The core engine consists of a convolutional neural network (CNN) of size ~260 kB that is computationally lightweight for training and inference. In order to automatically scale and adapt to the dynamic landscape of bus stops over time, incremental learning (model updating) techniques were explored on-device with real-time incoming data points. Real-time incoming streams of images are unlabeled, hence suitable ground-truthing strategies (such as active learning) help establish labels on the fly. Lightweight Bayesian active learning strategies using Bayesian neural networks with dropout (capable of representing model uncertainties) enable selection of the most informative images to query from an oracle. Intelligent rendering of the inference module by iteratively looking for better images on either side of the bus stop environment propels the system towards human-like behavior. The proposed work can be integrated seamlessly into existing widespread vision-based self-driving autonomous vehicles.
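The Bayesian active learning step described above can be illustrated, under assumptions, by scoring each unlabeled image with the predictive entropy of Monte Carlo dropout passes and querying the highest-entropy images; stochastic_predict is a hypothetical callable returning class probabilities with dropout kept active at inference time:

    import numpy as np

    def mc_dropout_entropy(stochastic_predict, image, n_passes=20):
        """Predictive entropy from Monte Carlo dropout.

        stochastic_predict: hypothetical callable returning class probabilities
        with dropout kept active at inference time. Images with the highest
        entropy are the most informative ones to query from the oracle.
        """
        probs = np.stack([stochastic_predict(image) for _ in range(n_passes)])
        mean_p = probs.mean(axis=0)                      # averaged class probabilities
        return -np.sum(mean_p * np.log(mean_p + 1e-12))  # uncertainty score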
Natural public places in cities often serve as places for recreation and relaxation. Additionally, such places often carry historic and social significance about which visitors would like to know more. Screens and other currently existing technology would, however, destroy the natural beauty of such places. Attention-aware and unobtrusive interfaces seem to offer a solution to this problem. In our approach, we conducted a qualitative survey with 19 people and an epoché at a decommissioned cemetery which is mainly used for recreation and leisure. Overall, the results show that the majority would like to know more about the deceased, with the information placed close to the grave, but without disturbing the natural, mystical atmosphere of the cemetery. In this work-in-progress report, we present our research approach to attention-aware, unobtrusive and context-sensitive interactive prototypes that preserve the natural beauty and recreational character of such a place.
A blossoming coffee culture has given rise to coffee shops in many cities around the world. In today's coffee industry, it is no longer customer satisfaction alone that matters but the customer experience. This work presents the development of an intelligent coffee plate system called iCoff, which aims at enhancing the coffee shop customer experience with a platform that allows a barista to communicate with a coffee drinker and enables the coffee drinker to learn more about his/her coffee, such as ingredients, temperature, and weight, through an interactive coffee plate. This paper describes its hardware and software components as well as a preliminary user experience study. It is an applied ubiquitous/pervasive technology in the context of the coffee shop experience as part of today's urban living.
Public transport plays an essential role in sustainable urban mobility. The increasing availability of public transport data and the dissemination of interactive devices in the public and in public transport specifically provide a basis for smart mobility systems. Urban mobility is characterized by rapid context changes and a very personalized and situational information need of users. Smart mobility systems therefore support users in their mobility in intelligent ways and bring together ubiquitous and mobile computing, the Internet of Things as well as context-awareness. In this paper, we focus on the delivery of mobility information in public transport and present a model and an adaptation scheme for context-aware choices of output modality and device. Our approach enables a smart mobility system to choose output modality and device based on the user's situation and their preferences as part of a context-aware application design. The model and adaptation process are part of our ongoing work in the field of context-aware smart mobility applications.
We analyze the consumer-age-specific patterns of restaurant preferences in commercial areas of Seoul, through the mining of place recommendation results from the Naver Place online service. We calculate indices for 188 distinct areas of Seoul measuring the heterogeneity of taste across age groups, and the dominance of any one age group over the general options presented to the public. Our results suggest that both high-traffic and rapidly changing commercial areas present diverse options appealing to all age groups, and that this diversity is primarily driven by the tastes of younger age groups. Recognizing these patterns may help stakeholders predict gentrification and proactively shape neighborhood transformation from business turnover. This study contributes to the broader literature on applying online behavioral data to study urban economic activity.
Unsupervised anomaly detection in time-series data is crucial for both machine learning research and industrial applications. Over the past few years, the operational efficiency of logistics agencies has decreased because of a lack of understanding of how best to address potential client requests. However, current anomaly detection approaches have been inefficient in distinguishing normal and abnormal behaviors in high-dimensional data. In this study, we aimed to assist decision makers and improve anomaly detection by proposing a Long Short-Term Memory (LSTM) approach with dynamic threshold detection. In the proposed methodology, data are first processed and input into an LSTM network to capture temporal dependencies. Second, a contextualized dynamic threshold is determined to detect anomalies. To demonstrate the practicality of our model, real operational data were used for evaluation, and our model was shown to detect anomalies more accurately, with precision and recall of 0.836 and 0.842, respectively.
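A minimal sketch of one common form of dynamic thresholding on LSTM prediction errors (not necessarily the authors' exact contextualized rule) is shown below; the window size and the factor k are illustrative:

    import numpy as np

    def dynamic_threshold_anomalies(errors, window=100, k=3.0):
        """Flag time steps whose LSTM prediction error exceeds a rolling threshold.

        The threshold is the mean plus k standard deviations of the errors in the
        preceding window; window size and k are illustrative, not the study's values.
        """
        flags = np.zeros(len(errors), dtype=bool)
        for t in range(window, len(errors)):
            recent = errors[t - window:t]
            flags[t] = errors[t] > recent.mean() + k * recent.std()
        return flags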
Population flow data (the flows of crowds from one region to another) is of great value in a wide range of fields, from urban traffic resource allocation to public security protection. Since individual-level mobility data requires privacy protection, it is hard to collect detailed population flow data. There have been published works on generating population flow from aggregated data, but they are limited to considering the flow between neighboring regions only, or to modeling the mapping from aggregated population data to flow with a simple physical model that does not take regional diversity into account. However, long-range dependencies and regional diversity are very important. Since population flow contains more information than aggregated population variation, generating the detailed former from the aggregated latter is quite difficult. In this paper, we propose an end-to-end deep learning model to generate population flow from aggregated historical population variation data. We use real-world datasets to compare the performance of our model with several baselines, which shows the superiority of our model. This demonstrates the potential of using deep learning for population flow generation.
Looking out for each other's safety is always a nice thing to do. Crowdsourcing enables us to do just that: we have developed a system called Safe Street Rangers that allows a user, or ranger, to monitor and report via a mobile app the level of safety concern of any street segment regarding seven aspects, including traffic signs, road obstacles, brightness, road condition, animals, solitariness, and traffic accidents, in relation to a transport mode (i.e., driving or walking). The system consists of three main components: a mobile app, a web app, and a data server. Each submitted report is verified and approved by an admin user via our web app. The data server handles all data storage and processing. It checks for overlaps between reported street segments and existing ones and updates the safety rating values of street waypoints accordingly. The system has been tested with real users, who perceived it as highly useful.
The direct and flexible use of any network connectivity that is available within an urban scenario is essential for the successful operation of ubiquitous systems. We demonstrate seamless communication across different networks without the use of middleware, proxies, tunnels, or address translation, with minimal (near-zero) packet loss to communication flows as handoff occurs between networks. Our solution does not require any new functions in existing networks, will work on existing infrastructure, and does not require applications to be re-designed or re-engineered. Our solution requires only modifications to the end-systems involved in communication, so can be deployed incrementally only for those end-systems that require the functionality. We describe our approach and its design, based on the use of the Identifier-Locator Network Protocol (ILNP), which can be realised directly on IPv6. We demonstrate the efficacy of our solution with testbed experiments based on modifications to the Linux kernel v4.9 LTS, operating directly over IPv6, and using unmodified binary applications utilising directly the standard socket(2) POSIX.1-2008 API, and standard C library calls. As our approach is 'end-to-end', we also describe how to maintain packet-level secrecy and identity privacy for the communication flow as part of our approach.
Wearables that combine practical functionality and physical functionality are part of a new and growing field. The main goal of wearables for health applications is to improve the quality of life overall by designing a usable, comfortable, and fashionable device with a purpose. Constructing wearables requires knowledge from a variety of different areas, which can result in a diversity of new ideas. Customization of wearables gives the user comfort and personalized style while also increasing health benefits. One major area for wearable opportunities is adaptive clothing for individuals with physical and/or intellectual disabilities. Adaptive clothing is currently in limited supply and often tries to group multiple disabilities together, when different disabilities have distinctly different needs. Wearables, however, do have technical barriers when it comes to cleaning, cost, and analyzing qualitative data from research participants.
Smart textiles are garments with integrated sensors and actuators which have great potential to benefit our future society. These textiles can monitor our health condition during daily life, remind us to do some extra fitness activities, or warn us when our blood pressure is too high or our heartbeat becomes irregular. They can be used in hospitals for the continuous monitoring of patients, but also to help soldiers navigate in the dark and athletes optimize their performance.
During the last 10 years, three major smart-textile-related developments can be noted. First of all, the sensors for wearables have become smaller, more reliable and easier to use. Secondly, in the field of textile engineering, new and better electrically conducting yarns have been developed which can be handled in textile production processes such as knitting and weaving. Thirdly, in the field of information technologies, much more efficient algorithms for data processing and interpretation have been developed, and more can be expected from the developments in artificial intelligence.
Based on this, the market for smart textiles is expected to grow exponentially, but although researchers have been working on smart textiles for a long time, we do not yet see them in the shop next door.
In this paper we will discuss the developments and trends, list the challenges, and propose a strategy for arriving at the next generation of smart textiles.
Wearable technologies are body-worn devices, including smart clothes, e-textiles, and accessories. Wearables embed computational capabilities in garments to provide information, services and resources for end users. By being continuously worn, promptly accessible, and unobtrusive, wearable devices are well-suited to serve as assistive technologies meeting the needs of users with diverse abilities. For neurodiverse users specifically, belts have been explored to control impulsive speaking, smart clothes (such as t-shirts and caps) have been employed for emotional expression, and badges have been assessed to raise proximity awareness. Among diverse form factors, wrist-worn devices stand out due to their conventional look and popular usage. They have been explored for emotional regulation, touch monitoring, stress release, and self-regulation. Despite a growth in research and development of wearables, their application as assistive technologies for neurodiverse users is still underexplored. Challenges emerge in designing for heterogeneous user profiles, automating assistance, increasing accuracy, and managing users' privacy. This paper discusses the main opportunities and affordances for wearables to support neurodiverse users as well as the grand challenges in the field.
Smart textiles are being used in multiple technologies for various health care applications. In my recent research, I have designed a textile-based smart sanitary napkin sensor. It helps in identifying multiple gynaecological diseases through regular monitoring of menstrual blood loss volume. This paper discusses the technical and physical challenges I faced while designing the smart sensor and the potential directions for future research to address these challenges.
In Orthopedic surgery, our goal is to return people to better levels of musculoskeletal function. There are many areas where there is still room for improvement, including in identifying poor movement patterns and helping people correct those movement patterns. The University of Minnesota Wearable Technology Lab has developed a stitched, textile-based strain sensor that provides a variable resistance response when flexed or stretched. The sensor is fabricated using common industrial sewing techniques, and can be incorporated into regular athletic clothing. Prior work has assessed the ability of this sensor to reliably measure healthy knee flexion with accuracy comparable to standard electrogoniometry when integrated into tight-fitting leggings. Preliminary results have indicated the ability to detect more nuanced movements such as valgus knee flexion through strategic sensor placement.
This technology has the potential to benefit any person who needs an analysis of their movement patterns. In particular, it targets active individuals who place their bodies under the greatest stress and are therefore most at risk of injury. The goal is to identify poor movement patterns and alter the individual's movement prior to, or after, injury. This technology could, however, easily be extended to people working in manual labor jobs, to identify repetitive movement patterns that place them at risk of overuse injury. It could also be applied to patients undergoing rehabilitation following surgeries that require mobility training as part of their rehabilitation.
In this position paper, we discuss the role of smart garments for stress reduction. Stress management plays a vital role in healthy living. Stress is an essential component of human life, but too much of it is a risk factor contributing to problems with concentration and a wide variety of illnesses including hypertension, heart problems, disorders of the immune system, and depression. Proposed solutions include mindfulness exercises, biofeedback, muscle-relaxation, and breathing exercises. Most of these solutions are either low-tech or rely on smart-phone apps. In this position paper, we argue that there is a great design opportunity to embed stress reduction solutions in everyday garments. Garments are a natural interface to the human body and enable practical ways to anchor stress reduction in the rituals of daily life. Another advantage is that we move attention away from the smartphone, which too often is a source of information overload by itself. We demonstrate an aesthetically pleasing soft actuator in a garment, which is realized by embroidery of conductive yarn, and which can be used as both a subtle break reminder or as a tool guiding breathing exercises. We reflect on the various design choices during the process of creating this demonstrator.
Users are increasingly confronted with a tremendous amount of information proactively provided via notifications from versatile applications and services, delivered ubiquitously through the multiple devices and screens in their environment. However, human attention is limited. Further, the latest computing trends, including versatile IoT devices and contexts such as smart cities and vehicles, compete further for this limited attention. To counter this challenge, "attention management", including attention representation, sensing, prediction, analysis, and adaptive behavior, is needed in our computing systems. Following the successful UbiTtention 2016, 2017 and 2018 workshops with up to 50 participants, the UbiTtention 2019 workshop brings together researchers and practitioners from academia and industry to explore the management of human attention and notifications across versatile devices and contexts to overcome information overload and over-choice.
In the last decade, the effects of interruptions through mobile notifications have been extensively researched in the field of Human-Computer Interaction. Breakpoints in tasks and activities, cognitive load, and personality traits have all been shown to correlate with individuals' interruptibility. However, concepts that explain interruptibility in a broader sense are needed to provide a holistic understanding of its characteristics. In this paper, we build upon the theory of social roles to conceptualize and investigate the correlation between individuals' private and work-related smartphone usage and their interruptibility. Through our preliminary study with four participants over 11 weeks, we found that application sequences on smartphones correlate with individuals' private and work roles. We observed that participants engaged in these roles tend to follow specific interruptibility strategies - integrating, combining, or segmenting private and work-related engagements. Understanding these strategies breaks new ground for attention and interruption management systems in ubiquitous computing.
Smartphones offer different modalities to inform about incoming notifications. They are perceived differently in terms of pleasantness and disruptiveness, depending on the receptivity and interruptibility of the user, among others. Contextual factors such as the user's location, activity, and task engagement level further influence this perception. Within a lab study with 40 tech-savvy participants, we investigated suitable notification modalities for different place types. We found that a user's receptivity, the disruptiveness of a notification, and the task engagement correlate and that they differ per place type with statistical significance and small to large effect sizes. Due to their unobtrusive nature, silent mode and vibration are preferred notification modalities at all places - silent mode especially at "do not disturb" locations ("library", "movie theater"), at places where users tend to be in company ("café", "restaurant"), or where users have to focus ("university", "work"). Ringtone is considered obtrusive and undesired and is only tolerated at a few places at which users tend to be alone ("home") or which are rather loud so that the auditory alert does not disturb others too much ("gas station").
As people use instant messaging (IM) to communicate with people of various relationships, they pay different amounts of attention to, and have different communication practices with, contacts of different relationships. However, there has been no close investigation of how users' IM communication patterns relate to different groups of IM contacts. We collected IM logs of 547 sender-recipient pairs from 33 smartphone users over the course of 4 weeks, and used k-means clustering to identify 6 clusters of these users' IM communication patterns. We illustrate the characteristics of the IM patterns of these distinct clusters, as well as how the patterns relate to the relationship between the senders and the recipients within each cluster.
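For illustration, clustering per-pair communication features with k-means (k = 6, as in the paper) might look like the following Python sketch; the listed features and placeholder values are hypothetical, not the paper's actual feature set:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # Each row describes one sender-recipient pair, e.g. messages per day,
    # median reply latency, night-time messaging ratio. The random values are
    # placeholders standing in for real IM-log statistics.
    pair_features = np.random.rand(547, 3)
    X = StandardScaler().fit_transform(pair_features)
    cluster_labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(X)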
Ambient displays are a promising means to reduce notification overload and work towards the vision of Calm Computing. In this paper, we present electrochromic displays as a novel class of displays to convey information. Electrochromic displays are non-light-emitting, flexible, free-form, transparent, energy-efficient, easily integrated, and slow-switching, making them ideal candidates for information that changes over time and does not require immediate user attention. We describe the key features of electrochromic displays as well as application areas, and provide an outlook on future developments of the technology.
The measurement of participant attention is a frequent by-product of mobile sensing-based studies, which typically focus on user interruptibility or the effectiveness of notification deliveries. We note that, despite the popularity of interruptibility research within our discipline, research focused on attention is surprisingly scarce. This omission may be due to (a combination of) methodological, technological, or disciplinary constraints. In this paper, we argue how attention levels can be effectively measured with existing technologies and methodologies by adapting continuous measurements of attention fluctuations. Many clinically researched technologies, as well as sensing-based analysis methods, could be leveraged for this purpose. This paper invites co-researchers to assess the use of novel ways to measure attention in their future endeavours.
According to the latest science of human performance, we are wired to thrive and adapt from discomfort. This workshop explores how to leverage that science to improve human wellbeing and to improve sustainability as a side-effect of designing ubiquitous technology to prepare, practice and perform discomfort, for social benefit. We will use Design Jams as a key activity to explore and build up this Uncomfortable Design Methodology. There will be prizes.
Under certain conditions, breathing abnormally fast or slow can improve health and performance. Even breathing in a way that intentionally restricts the amount of oxygen delivered to the body can be desirable in some situations. However, while these various "uncomfortable" breathing exercises can lead to benefits when practiced in a controlled and supervised setting, they can also be potentially dangerous if practiced incorrectly. In this paper, we discuss ethical considerations for ensuring end-user safety when designing technology-mediated breathing exercises.
Maintaining a consistent indoor temperature creates one of the largest energy demands in the UK. UK buildings are famously poorly insulated and expensive to heat and cool. This is set to become ever more challenging in a warming and rapidly changing climate. What if we allowed ourselves to be more uncomfortable and took more charge of our thermal comfort? Wouldn't we then be healthier, more thermally delighted, more productive? Would we not also save energy and the related carbon emissions? We offer this provocation, and set the challenge of identifying how this should change the role of future ubiquitous environments.
In this paper, we describe how by embracing a first-person design perspective we engaged with the uncomfortable to successfully gain insight into the design of affective technologies. Firstly, we experience estrangement that highlights and grounds our bodies as desired in the targeted technology interaction. Secondly, we understand design preconceptions, risks and limitations of the design artifacts.
With climate change becoming a more prominent issue as time goes on, it is important to lower our carbon emissions as a way to prevent the worst effects from occurring. Many applications are currently on the market to help individuals lower their carbon emissions; however, these apps do not leverage positive spill-over techniques. Implementing techniques that promote positive spill-over makes users more likely to stay invested and to have a lower impact on the environment.
Advancements in ubiquitous technologies and artificial intelligence have paved the way for the recent rise of digital personal assistants in everyday life. The Fourth International Workshop on Ubiquitous Personal Assistance (UPA'19) aims to continue discussing the latest research developments and outcomes on digital personal assistants. UPA'19 builds on the success of our three previous workshops, organized in conjunction with UbiComp'16-18. We welcome contributions focusing on the advancements toward digital assistants that provide a high level of personalization, through proactive and effective support of users' activities, in an unobtrusive manner. All workshop contributions will be published in the supplemental proceedings of the Ubicomp/ISWC 2019 conference and included in the ACM Digital Library.
Conversational agents are increasingly becoming digital partners in our everyday computational experiences. Although rich and fresh in content, they are oblivious to users' locality beyond geospatial weather and traffic conditions. We introduce conversational agents that are hyper-local, embedded deeply into the urban infrastructure, providing rich, purposeful, detailed, and in some cases playful information relevant to a neighborhood. These agents are spatially constrained, and one can only interact with them once one is in close vicinity, at street-level granularity. In other words, the city provides a personal, stateful, spontaneous service to its citizens through the agents installed in urban landmarks. Drawing lessons from two user studies, we identify the requirements for this system. We then discuss the architecture of these agents, which leverage covert communication channels and machine learning algorithms running on edge and wearable devices to offer a meaningful conversational experience in urban settings.
With the emergence of AI-powered products and services, the hospitality industry has started to adopt service robots to transform the guest experience. Despite this growing interest, Henn-na Hotel, the world's first robot hotel, recently announced that it would abandon half of its robots. This study aims to unveil factors leading to the adoption failure of service robots in the hospitality context, using Henn-na Hotel as a case study. By mining online guest reviews from four leading online booking sites, we conducted a thematic content analysis of 250 negative online reviews. Six themes emerged from our data (e.g., human intervention, usefulness, embodiment), illustrating various factors resulting in the adoption failure. Based on this, we derive six design implications for future researchers and designers to rethink the interaction process between humans and robots, and how service robots could be better designed and used in hospitality settings to fulfill guest needs.
The eyes are a particularly interesting modality for cognitive industrial assistance systems, as gaze analysis can reveal cognition- and task-related aspects, while gaze interaction provides a lightweight and fast method for hands-free machine control. In this paper, we present mobEYEle, a body-worn eye tracking platform that performs the entire computation directly on the user, as opposed to streaming the data to a centralized unit for online processing and hence restricting its pervasiveness. The applicability of the platform is demonstrated through extensive performance and battery runtime tests. Moreover, a self-contained calibration method is outlined that enables the usage of mobEYEle without a supervisor or a digital screen.
Personal smart assistance systems make people's lives easier and enable exceptional convenience, e.g. by supporting users during bothersome tasks. While personal intelligent assistants offer a lot of comfort to their users, there are also worries about data protection and data security, since personal data about users is collected, aggregated and analyzed for ubiquitous assistance systems. Smart assistance systems can, for example, be found in cars. Connected to other Internet of Things devices, those assistants can help with the search for free parking spaces in a crowded city or enable easy refueling in cooperation with intelligent charging stations. As users' motivation to engage with such smart assistance systems is still largely unexplored, we investigate the influence of several potential drivers on the intention to use smart assistance systems in cars. This study uses survey data (N = 150) and structural equation modeling as the analysis method. Our results provide empirical evidence that convenience motives, performance expectancy, personal innovativeness, and perceived risk are drivers of consumers' intention to use smart assistance systems in cars. Moreover, we motivate further research in the field of smart assistance systems and discuss academic and practical implications.
Communication technologies (CTs) have made it possible for employees to stay connected with their jobs beyond the traditional boundaries of the workplace and workday. Some researchers have suggested that CT may become an "electronic leash" [1], since employees may feel trapped or excessively tied to their CT device [2]. In contrast to these findings, we follow the design science research approach [3], aiming at the development of a smart assistant that reflects the variety of individuals' availability preferences. To better understand the needs and demands of potential users, we conducted standardized qualitative open-ended interviews with 67 employees from different backgrounds (knowledge workers, social workers, and blue-collar workers). The interviews revealed many important features for this technological solution, such as filtering by the priority of the contacting person, time, location and current life domain, as well as location tracking, different availability stages, and feedback loops.
The sedentary lifestyle of elderly people may cause chronic diseases and reduce muscle mass and mobility. Moreover, physical inactivity in the elderly increases susceptibility to mood disorders, such as anxiety and depression, and can permanently lead to disability. Physical activity reminders might be a useful tool to motivate the elderly to exercise. Despite their importance, related works only use interactive calendars for setting activity reminders, without personal and autonomous feedback about the user's activities. Some studies have also combined physical activity recognition in a smart-home context with interactive calendars; this is a personalized approach, but the activity recognition is limited to a small environment, excluding the possibility of the elderly exercising outdoors. Therefore, in this work, we present the Elderly Physical Activity Reminder System (EPARS), a digital assistant that reminds elderly people to practice physical exercise in an unobtrusive way. EPARS includes an activity recognition system using wearable and mobile devices with 98.63% accuracy and a context-appropriate activity reminder module, providing a barrier-free and non-intrusive approach to guide elderly people to a healthier lifestyle. EPARS also includes a database of data collected from 32 elderly volunteers, which helped us feed the activity recognition system.
With the advancements in ubiquitous computing, ubicomp technology has deeply spread into our daily lives, including office work, home and housekeeping, health management, transportation, and even urban living environments. Furthermore, beyond the initial metrics of computing, such as "efficiency" and "productivity", the benefits that people (users) gain from such ubiquitous technology from a well-being perspective have attracted great attention in recent years. In our second "WellComp" (Computing for Well-being) workshop, we intensively discuss the contribution of ubiquitous computing to users' well-being, covering physical, mental, and social wellness (and their combinations), from the viewpoints of various layers of computing. With strong international organization members from various ubicomp research domains, WellComp 2019 will bring together researchers and practitioners from academia and industry to explore versatile topics related to well-being and ubiquitous computing.
The paradigm of wellness consists of both physical and mental wellness. One important parameter of physical wellness is the monitoring of cardiac health during activity, which requires measurement of ambulatory heart rate (HR). With the advent of smart wearable devices, measurement of heart rate using photoplethysmography (PPG) has become a commodity. However, arriving at a reliable heart rate measurement in real time during daily activities is an open research problem. In this paper, we propose a method based on the Wiener filter to estimate correct HR values in the presence of motion, while being computationally efficient enough to run on-device. Results presented on a public dataset demonstrate the efficacy and efficiency of the proposed method.
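As a rough, simplified illustration of the principle (not the paper's algorithm), a Wiener-style gain can attenuate PPG frequency bins whose energy is dominated by motion estimated from the accelerometer; both signals are assumed to be equal-length 1-D arrays sampled at the same rate:

    import numpy as np

    def wiener_clean_ppg(ppg, acc_magnitude):
        """Attenuate PPG frequency bins dominated by motion energy.

        ppg and acc_magnitude: equal-length 1-D arrays at the same sampling rate.
        This is only a simplified illustration of a Wiener-style gain, not the
        paper's estimator.
        """
        P = np.fft.rfft(ppg)
        noise_power = np.abs(np.fft.rfft(acc_magnitude)) ** 2
        signal_power = np.abs(P) ** 2
        gain = signal_power / (signal_power + noise_power + 1e-12)   # Wiener-style gain
        return np.fft.irfft(gain * P, n=len(ppg))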
The inclusion of mobile computing in outdoor recreation raises important questions about its ability to contribute meaningfully to activities without detracting from their benefits to well-being. In this paper, we present results from our research, which seeks to explore and set directions for computing's place in outdoor recreation. Our work addresses smartphone use while hiking. Our position is that computing already has a place in outdoor recreation and can contribute meaningfully to well-being in the outdoors now and in the future.
Major depressive disorder is a complex and common mental health disorder that is heterogeneous and varies between individuals. Predictive measures have previously been used to predict depression in individuals. Given the complexity and heterogeneity of major depressive disorder across individuals, and the scarcity of labelled objective depressive behavioural data, predictive measures have shown limited applicability in detecting the early onset of depression. We present a system we have developed that collects smartphone sensor data similar to that used in previous predictive analysis studies. We argue that anomaly detection and entropy analysis methods are best suited for developing new metrics for the early detection of the onset and progression of major depressive disorder.
Cognitive functioning is a crucial aspect of an individual's mental health and affects people's daily activities. We have developed the Ubiquitous Cognitive Assessment Tool (UbiCAT), comprising three cognitive assessment apps on the Fitbit smartwatch. In this paper, we present the design and formative evaluation of the UbiCAT apps, conducted with 5 participants who had a background in design and/or human-computer interaction. Moreover, we investigated the adoption of the wearable devices by our participants.
This study discusses measurement of well-being in the context of smart environments. We propose an experimental design which induces variation in an individual's flow, stress, and affect for testing different measurement methods. Both qualitative and quantitative measuring methods are applied, with a variety of wearable sensors (EEG sensor, smart ring, heart rate monitor) and video monitoring. Preliminary results show significant agreement with the test structure in the readings of wearable stress and heart rate sensors. Self-assessments, on the contrary, fail to show significant evidence of the experiment structure, reflecting the difficulty of subjective estimation of short-term stress, flow and affect.
A driver's wellbeing has a positive impact on driving behavior and experience. In turn, a driver's wellbeing depends on their daily lifestyle, demography, and traffic and road conditions. Poor conditions in such factors are responsible for low wellbeing and also induce driving stress. A simple technology approach can play an important role in monitoring a driver's wellbeing and help provide better ways of increasing self-awareness regarding wellbeing. Here, we conducted a quantitative study of 88 drivers and present a low-cost wearable approach to support drivers in the context of Bangladesh for better wellbeing.
With the advent of a 24/7 technology-driven society, there are rising discomforts and increasing concerns over sleep. Quality of sleep critically affects human well-being and everyday performance. A good night's sleep, therefore, is essential to prepare one's mind and body for the next day. In this paper, we propose SleepThermo, a system that identifies the effect of in-clothing monitored body temperature change during sleep on human well-being. The primary purpose of this research is to determine the relationship between in-clothing body temperature and mental/physical conditions, and then elicit insights to improve sleep quality. Our evaluation showed that body temperature can potentially be used to identify mental and physical conditions. In addition, personalized models achieved better balanced accuracy than overall models for all subjects.
Resting heart rate (RHR) and heart rate variability (HRV) reflect the autonomic control of cardiac chronotropic activity, and they associate with cardiovascular fitness, acute and chronic health status, and mental stress. Relatively low RHR and relatively high HRV are generally seen as marks of better health, performance, and recovery levels. Nevertheless, the values are highly individual and comparison between individuals is not straightforward. On the other hand, evolution of wearable devices has made it possible to follow the course of individual RHR and HRV as long-term time series, which in turn enables observation of how behavioral, societal and seasonal factors affect RHR and HRV at individual and population scale. In this article, data measured by the Oura ring is used to study how alcohol and training affect these values, and moreover, how societal and seasonal factors affect us as a population.
In Parkinson's disease (PD), patients' motor functionality is assessed with various tests. Spiral drawing is one of the proven techniques for assessing the severity of PD motor symptoms. Commonly, the test is performed with pen and paper, followed by visual observation by a clinician. This paper describes the implementation of a digitized version of the spiral drawing test for Android devices. Moreover, the application extends the spiral test with an analogous square-shape drawing task. The artifact was tested in a trial with 8 PD patients and 6 age-matched controls. The results showed an observable difference in drawing accuracy and speed between PD and non-PD users.
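The kind of accuracy and speed measures mentioned above could, for example, be extracted from the touch trace as sketched below: accuracy as the mean distance of drawn points from an ideal Archimedean spiral template, and speed as path length over drawing duration. The template parameters and feature definitions are assumptions for illustration; the paper's Android implementation may compute different measures.

```python
# Hypothetical feature extraction, not the paper's implementation: compare a
# drawn trace (arrays x, y in screen units, t in seconds) against an ideal
# Archimedean spiral template r = a + b * theta.
import numpy as np

def spiral_features(x, y, t, a=0.0, b=5.0, turns=3):
    theta = np.linspace(0, 2 * np.pi * turns, 2000)
    template = np.c_[(a + b * theta) * np.cos(theta),
                     (a + b * theta) * np.sin(theta)]
    trace = np.c_[x, y]
    # Accuracy: mean distance from each drawn point to the nearest template point.
    dists = np.linalg.norm(trace[:, None, :] - template[None, :, :], axis=2)
    accuracy_error = dists.min(axis=1).mean()
    # Speed: total path length divided by drawing duration.
    path_len = np.linalg.norm(np.diff(trace, axis=0), axis=1).sum()
    speed = path_len / (t[-1] - t[0])
    return accuracy_error, speed
```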
Physical exercise can improve sleep quality. However, how physical exercise should be performed to achieve the best possible improvement is not clear. In this article, we build predictive models based on a large volume of real data collected from wearable devices to predict sleep efficiency from users' daily exercise information. To the best of our knowledge, this is the first study to investigate the prediction of sleep efficiency from large-scale physical exercise data collected in the real world.
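A minimal sketch of such a predictive model is shown below, regressing sleep efficiency on a few daily exercise features. The file name, feature names, and model choice are assumptions for illustration only; the article's actual data schema and models are not specified here.

```python
# Illustrative only: predict sleep efficiency from the previous day's exercise
# summary; column names and the CSV export are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("wearable_days.csv")                       # one row per user-day
X = df[["steps", "active_minutes", "exercise_intensity"]]   # assumed exercise features
y = df["sleep_efficiency"]                                  # e.g., time asleep / time in bed

model = GradientBoostingRegressor(random_state=0)
print(cross_val_score(model, X, y, cv=5, scoring="r2"))     # cross-validated fit quality
```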
With the spread of smartphones, more and more mothers operate smartphones while breast-feeding. This kind of activity is sometimes labeled as bad behavior, although it is one of the few moments of respite in parenting. In this paper, we investigate whether smartphone use affects breast-feeding from the viewpoints of the mother's posture and the quality of communication between mother and baby. We measured the mothers' behavior with and without a smartphone using wearable sensors and a video camera. The sensor data did not show a significant difference in the mothers' posture. In the video observations, however, the inclination of the mother's back differed depending on whether she was operating a smartphone. Regarding communication with the infant, mothers took longer to notice changes in their baby while operating a smartphone. In future work, in order to reduce mothers' stress, we will consider how smartphones can be used appropriately during nursing instead of prohibiting their use.
The EU-funded project WellCo aims to deliver a new mobile app with a virtual coach that encourages users towards healthier behaviour choices in order to improve their physical, cognitive, mental, and social well-being. Healthy nutrition can substantially contribute to health and wellbeing. In the WellCo project, we use different techniques for dietary assessment: eating detection by gesture recognition using a wrist-worn device, and estimation of diet quality by self-reporting using a Food Frequency Questionnaire (FFQ). This paper describes the latter. We designed a short FFQ, compared it to validated questionnaires, and developed a web service and a web application that determine a dietary quality score for each user from the designed FFQ.
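To make the idea of a questionnaire-derived score concrete, the sketch below maps weekly consumption frequencies of a few food groups to a single diet-quality value. The food groups, weights, and capping rule are illustrative assumptions and do not reflect WellCo's actual FFQ items or scoring rules.

```python
# Illustrative scoring only; the WellCo FFQ items and scoring rules differ.
HEALTHY = {"vegetables": 1.0, "fruit": 1.0, "fish": 0.5, "whole_grains": 0.5}
UNHEALTHY = {"sugary_drinks": 1.0, "processed_meat": 0.5, "sweets": 0.5}

def diet_quality_score(frequencies):
    """frequencies: dict mapping a food group to servings per week."""
    score = 0.0
    score += sum(w * min(frequencies.get(item, 0), 7) for item, w in HEALTHY.items())
    score -= sum(w * min(frequencies.get(item, 0), 7) for item, w in UNHEALTHY.items())
    return score

print(diet_quality_score({"vegetables": 7, "fruit": 5, "sugary_drinks": 3}))
```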
Stress detection is becoming a popular field in machine learning, and this study focuses on recognizing stress using the sensors of commercially available smartwatches. Most previous studies base stress detection partly or fully on the electrodermal activity (EDA) sensor. However, if the final aim is to build a smartwatch application, relying on the EDA signal is problematic, as the smartwatches currently on the market do not include a sensor to measure it. Therefore, this study surveys which sensors the smartwatches currently on the market include and which of them third-party developers have access to. Moreover, we study how accurately stress can be detected user-independently using different sensor combinations, how detection rates vary between study subjects, and what effect window size has on recognition rates. All experiments are based on the publicly available WESAD dataset. The results show that the EDA signal is indeed not necessary for user-independent stress detection, and therefore commercial smartwatches can be used for recognizing stress when the window length used is long enough. However, recognition rates vary considerably between study subjects.
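A user-independent evaluation of the kind described above is commonly implemented as leave-one-subject-out cross-validation over sliding-window features, as sketched below using heart-rate-derived features only (i.e., without the EDA channel). The window length, features, and classifier here are assumptions, not the study's exact setup.

```python
# Sketch of user-independent stress detection without EDA: sliding-window
# heart-rate features with leave-one-subject-out evaluation (assumed setup).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def window_features(hr, labels, subject_id, win=120, step=30):
    """hr: 1 Hz heart-rate array, labels: per-sample stress labels (0/1)."""
    X, y, g = [], [], []
    for start in range(0, len(hr) - win, step):
        seg = hr[start:start + win]
        X.append([seg.mean(), seg.std(), seg.min(), seg.max()])
        y.append(int(round(labels[start:start + win].mean())))
        g.append(subject_id)
    return np.array(X), np.array(y), np.array(g)

# X, y, groups are built by concatenating window_features() over all subjects;
# one score per held-out subject then comes from:
# cross_val_score(RandomForestClassifier(), X, y, groups=groups, cv=LeaveOneGroupOut())
```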
In today's fast-paced and demanding society, more and more people suffer from stress-related problems; however, intelligent environments can be equipped with facilities that help keep stress under control. This paper presents CaLmi, a system for intelligent homes that aims to reduce residents' stress by (a) monitoring their stress level through a combination of biometric measurements from a wearable device and information about the user's everyday life, and (b) enabling the ubiquitous presentation of relaxation programs that deliver multi-sensory, context-aware, personalized interventions.
With the decreasing cost and increasing capability of sensor and mobile technology, along with the proliferation of data from social media, the ambient environment, and other sources, new concepts for digital prognostics and the technological quantification of well-being are emerging. These concepts are referred to as digital phenotyping. One of the main challenges in developing these technologies is the design of easy-to-use, personalised devices that benefit from interventional feedback by leveraging real-time on-device processing. Tangible interfaces designed for well-being can reduce anxiety or help manage panic attacks, thus improving quality of life for both the general population and vulnerable members of society. Real-time biofeedback paired with Artificial Intelligence (AI) presents new opportunities for inferring mental well-being, allowing individually personalised interventional feedback to be applied automatically. This research explores future directions for biofeedback, including the opportunity to fuse multiple AI-enabled feedback mechanisms that can then be utilised collectively or individually.
Maintaining mental health, as well as physical health, is essential for our daily lives. We envisage using a "mood" meter daily or regularly to check our own mental health condition, much as we use a weight scale. When performing emotion recognition from human speech, either the linguistic information or the prosodic features of the speech are typically used. However, by capturing both aspects of speech, which is a means of human communication, emotion recognition can be realized more accurately. Based on this background, we have developed PNViz, a Positive-and-Negative Polarity Visualizer, an application running on an Android phone that shows the positivity of the user's mental state from a short recorded voice message. PNViz consists of the smartphone application and an analysis server, where the recorded voice is processed with both lexical and phonetic analyses to calculate a score ranging from -1 to 1. The calculated score is continuously logged and shown to the user, and it is thus expected to encourage the user to take refreshing breaks or holidays.
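The server-side fusion of the two analyses could, in its simplest form, look like the sketch below, which combines a lexical polarity score and a prosodic polarity score into a single value in [-1, 1]. The fusion weight and the assumption that both component scores already lie in [-1, 1] are illustrative; PNViz's actual analysis pipeline is not specified here.

```python
# Illustrative fusion of lexical and prosodic polarity into one [-1, 1] score;
# the weight and component analysers are assumptions, not PNViz internals.
def positivity_score(lexical_score, prosodic_score, w_lexical=0.5):
    """Both inputs are assumed to already lie in [-1, 1]."""
    score = w_lexical * lexical_score + (1 - w_lexical) * prosodic_score
    return max(-1.0, min(1.0, score))

# A user's log is then a time series of positivity_score() values, one per voice message.
```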
The issue of the ageing population is gaining significant attention across the world, while the psychological burden that a variety of geriatric symptoms place on caregivers is often overlooked. Efficient collaboration between the elderly and caregivers has great potential to relieve the caregivers' psychological burden and improve caregiving quality. Activity prediction provides a promising approach to cultivating this efficient collaboration. Given the ability to predict an elderly patient's activities and their timing, caregivers can provide timely and appropriate care, which not only relieves caregiving stress for professional or family caregivers but also reduces unwanted conflicts between the two parties. In this paper, we train an activity predictor by integrating activity temporal information into a Long Short-Term Memory (LSTM) network. The approach leads to significant improvements in prediction accuracy for both the next activity and its precise occurrence time.
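One plausible way to integrate temporal information into an LSTM predictor, consistent with the description above, is to concatenate an activity embedding with per-step temporal features and attach two output heads, one for the next activity class and one for its occurrence time. The layer sizes, feature encoding, and head design below are assumptions for illustration, not the paper's architecture.

```python
# Sketch of an LSTM consuming (activity embedding + temporal features) and
# predicting the next activity and its occurrence time; sizes are assumptions.
import torch
import torch.nn as nn

class ActivityPredictor(nn.Module):
    def __init__(self, n_activities, emb_dim=32, time_dim=2, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(n_activities, emb_dim)
        self.lstm = nn.LSTM(emb_dim + time_dim, hidden, batch_first=True)
        self.next_activity = nn.Linear(hidden, n_activities)  # classification head
        self.next_time = nn.Linear(hidden, 1)                 # regression head, e.g. minutes until next activity

    def forward(self, activities, times):
        # activities: (batch, seq) activity ids; times: (batch, seq, time_dim),
        # e.g. time of day and elapsed time since the previous activity.
        x = torch.cat([self.embed(activities), times], dim=-1)
        out, _ = self.lstm(x)
        last = out[:, -1]                 # hidden state after the latest observed activity
        return self.next_activity(last), self.next_time(last)
```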