In this paper, we propose a battery-free place recognition system that uses solar cells as sensors for localization. Our system combines multiple solar cells with different responses to the light environment. As initial work, we select five kinds of commercially available solar cells and investigate their characteristics. We then show the potential for estimating place based on variations in the amount of electricity generated by the solar cells. Finally, we show that our proposed system distinguishes nine places with 88.0% accuracy.
Grief resulting from the death of a loved one has a great impact on both the personal and social lives of bereaved persons. Grief has also become important to explore in a technological context, considering how technologies have recently helped address the grieving needs of bereaved persons. This paper presents initial work drawn from the continuing bonds theory. The theory explains how bereaved persons are continually reminded of their lost loved ones when they encounter certain memory triggers (such as familiar sounds from the past, faces that resemble the deceased person, and places they once visited with the deceased). Hence, as lifelogging technologies (technologies that continually capture and preserve memorable moments) become pervasive, it is necessary to understand how they can support these memory-related events associated with grieving.
In this work, we discuss the application of printed haptic actuators based on Electroactive Polymer (EAP) for HCI. We envision a printed haptic layer that can be used to augment objects and surfaces with feedback. The printing process offers unique opportunities to fabricate actuators of various shapes, sizes and layouts. We show that printed actuators provide strong output that is clearly perceived even under challenging conditions. We discuss the current possibilities and limitations, and provide HCI example cases that show the benefits of printed haptic actuators.
Moving livestock from one location to another is a tedious but necessary task in many farmland environments. Agersens, an agri-tech startup, is launching eShepherd: a world-first autonomous shepherd based on virtual fencing technology. This IoT device enables farmers to geo-fence, move, and monitor their cattle using a smartphone or tablet. eShepherd uses Global Navigation Satellite System (GNSS) technology to localise livestock relative to a virtual fence boundary and subsequently applies control signals based on shepherding decisions. The critical barrier to long-term operation of the collar is power management, as GNSS/GPS-based localisation consumes excessive power during continuous operation. This paper proposes an intelligent power management scheme for eShepherd collars based on animal behaviour classification and adaptive GPS sampling. Simulations derived from field trial data confirm that significant reductions in position observations are achievable while maintaining localisation of the animal relative to the boundary and without unobserved boundary crossings.
We test ActiVibe, a previously reported vibrotactile method for communicating numeric values between 1 and 10, in the face of an audio distractor task, as well as when conveying not just one numeric value in a single message but three values in succession. Results of a 12-participant user study comparing three different rendering methods indicate that ActiVibe maintains its advantage over two duration-based methods when conveying a single value, but largely loses this advantage when presenting three sequential values. In these challenging conditions, the more concise duration-only approach may be preferable, since it uses less power and demands attention for less time.
Smiling and saying hello to people can increase social inclusion or belongingness, which is one of the basic human needs. We present HelloBot, a social robot that proactively greets passersby and reacts to their smiles to induce positive feelings. We performed an in-the-wild trial to evaluate the emotional effect of interacting with HelloBot. We observed 123 students, 32 of whom answered the accompanying questionnaire. 92 of the 123 participants laughed or smiled naturally, and 50% (N=16) of respondents reported that they felt better after interacting with HelloBot. Our results show that interactive social greeting bots have the potential to induce social inclusion and positive feelings.
Emotion regulation is an important part of human life: not only are negative emotions harmful, but excessive positive emotion can also prevent someone from achieving their goals, for example, high excitement that prevents falling asleep. In our research, we explore the use of directional vibration patterns embedded in furniture to provide easier access to useful emotional states. We have designed three vibration patterns and implemented them on a chair-like cushion to evaluate the generated emotions in a user study.
Visual crowdsensing (VCS) uses the built-in cameras of smart devices and asks people to capture the details of interesting objects or views in the real world. To decrease incentive costs, we propose a task allocation method that assigns light subtasks of multi-facet VCS tasks to participants. We also develop a task assistant in a mobile app to help participants collect the required photos efficiently. The experimental results show that both the task allocation ratio and the covered-facet ratio are improved, and that participants can accomplish multi-facet VCS tasks more easily.
The car sharing market is growing at breakneck speed. Due to the continuing trend toward shared vehicles, various new functions will be introduced into them. Accordingly, the user interfaces (UIs) for drivers, in-vehicle systems, mobile devices, etc. will become more important, and studies on customized UIs will be essential under these circumstances. In this paper, we aim to create a UI environment that enables users riding in shared vehicles owned by others to feel the same emotions as they would in their own vehicles. We designed a personalized conversational AI voice user interface as part of the infotainment system, and investigated whether an agent system equipped in a shared vehicle would influence the perceived intimacy of the vehicle's interior space. A pilot study showed that the conversational voice agent increased the perceived intimacy of the shared vehicle's interior space.
Computational offloading has become an important strategy for augmenting the computational capabilities of resource-constrained devices. While several techniques and frameworks have been proposed in the literature, quantifying the cost of offloading in the wild remains challenging. Indeed, the performance of computational offloading is affected by contextual factors ranging from network quality to device energy consumption and the load of the offloading infrastructure. This makes it infeasible for a single device to capture all possible contexts and make optimal offloading decisions. To improve the design of offloading mechanisms, we propose MobileCloudSim, a novel context-aware simulation toolkit that can be used to assess the impact of computational offloading across a large variety of complex contexts. Unlike traditional simulators that generate synthetic data, MobileCloudSim relies on crowdsensing to create realistic offloading conditions that characterize different offloading contexts.
The power of cloud computing lies in its inherent ability to scale and share resources at Internet scale, while the power of edge computing lies in the ubiquity of its resources and their proximity to data sources. The collaborative use of these two computing platforms (collaborative distributed computing) is of great practical importance. In this research, we study the seamless integration of edge devices, edge servers, and cloud resources as a collaborative distributed system. As a first step towards realizing collaborative distributed computing, we investigate the feasibility of edge-based video summarization as a case study. Experimental results show that the effective use of edge servers in close proximity saves Internet bandwidth and supports the processing of latency-sensitive jobs.
Children with autism typically show difficulty in sensory processing, which directly affects their ability to communicate and build connections with others. To enhance their senses, we present a multi-sensory interactive system, Expressive Plant, that provides sensory training in vision, hearing, touch, and smell. A pilot study with autistic children and their teachers was conducted. The results indicate that our system is promising in helping children with autism engage in sensory training and improve their ability to sense the environment.
Nowadays, many people rely on smartphones in their daily lives for chatting on social networking services (SNS), voice communication, searching for shopping information, watching videos, and more. In previous work, we attempted to predict which application was used on a smartphone based on Call Detail Records (CDRs). However, we found it difficult to classify small messages as either SNS messages or simple Web browsing. In this work, we focus on improving the prediction of SNS messages by detecting the notifications delivered to receivers before they actually access the message body. We conducted a small experiment to distinguish SNS notifications from short background messages on Android. The results show an F1 score of 0.87.
This paper proposes Sneaking Detector, a system that recognizes when other people peek at a laptop screen and alerts the owner through several interventions. We utilize a pre-trained deep learning network to estimate the eye gaze of onlookers captured by a front-facing camera. Since most cameras equipped on laptop computers cannot cover a wide enough range, a commercial wide-angle lens attachment and image processing are applied in our system. On a dataset of nine participants across four experiments, our system estimates horizontal eye gaze and recognizes whether an onlooker is looking at the screen with 78% accuracy.
Low Power Wide Area (LPWA) technologies are monumental for the IoT sector. In this paper, we explore a LoRaWAN (LoRa Wide Area Network) sensor for human activity recognition. We propose an activity recognition framework that exploits a LoRaWAN sensor and its accelerometer data. In our framework, we combine an Arduino Uno, an Arduino Lucky Shield carrying a number of different sensors, and LoRaWAN into one compact system. Through a LoRaWAN gateway, we successfully transfer the sensor data to the SORACOM cloud platform. From the cloud data, a few statistical features are computed to classify three activities: walking, staying still, and running. We aggregate the time-series data into different action labels that summarize the user's activity over a time interval, and then train a predictive model for activity recognition. We explore K-Nearest Neighbors (KNN) and Linear Discriminant Analysis (LDA) for classification, achieving recognition accuracies of 80% with KNN and 73.3% with LDA. These results provide promising prospects for LoRaWAN sensors in improving healthcare monitoring services.
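The feature-extraction and KNN steps of such a pipeline can be sketched as below. This is an illustrative example, not the authors' implementation: the feature set (per-axis mean, standard deviation, and range), window size, and choice of k are all assumptions.

```python
import numpy as np

def extract_features(window):
    """Statistical features from one window of 3-axis accelerometer samples.
    window: array of shape (n_samples, 3); returns a 9-dim feature vector."""
    return np.concatenate([window.mean(axis=0),
                           window.std(axis=0),
                           window.max(axis=0) - window.min(axis=0)])

def knn_predict(train_X, train_y, x, k=3):
    """Classify feature vector x by majority vote among its k nearest
    training vectors (Euclidean distance)."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```

In use, each labeled window of cloud-side accelerometer data would be turned into a feature vector with `extract_features`, and new windows classified with `knn_predict`.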
Bluetooth Low Energy (BLE) beacon technology is projected to be the leading proximity technology. Various business sectors are rapidly applying it because of its automatic location sensing capability, low cost, and high accuracy. However, understanding of how people adopt beacon-based location sensing applications is still very limited. We developed a BLE beacon based system for automating class attendance taking. Our first field study with 42 students showed that about 38% of the students adopted it. Students had several misunderstandings and concerns about the technology that challenged its adoption. We revised the design by integrating a participatory sensing approach in which users can manually check in to their class by explicitly sharing their location using GPS. We conducted a second field study with 45 students under the same instructor in the following semester, and overall adoption of the attendance taking system increased to 80%.
Daily life stress is known to cause irregular behavioral changes in a person's daily work timeline. Such changes not only affect work but also fill a person with depression and anxiety. Thus, managing stress is key to improving quality of life. In this poster, we propose StressWatch, a mobile system that helps users find the root causes of their stress in daily life. The system monitors the user's contexts and heart rate variability (HRV) features using the sensor data stream from the user's smartwatch. It then extracts a series of stress levels from the HRV data and matches them with user contexts in order to explore the sources of the user's stress. We also performed a motivational study to investigate the feasibility of opportunistic HRV monitoring in daily life.
In this paper, a smartphone application, iTrack, is introduced for young people with autism that provides learning and emergency support for safety skills related to fire and rain. The application also provides support to caregivers to promote collaboration between them. Our proposed application was evaluated with two autistic individuals. Fire and rain safety skills were taught to the children using video modelling via a smartphone. A Single Subject Design (SSD), specifically an A-B design, was used in this study along with a maintenance phase to evaluate the effectiveness of the proposed system. Results show that iTrack provides assistance and improves learning of fire and rain safety skills. Furthermore, the autistic individuals felt satisfied and were not annoyed by using the iTrack application.
In this paper, we present how physiological measures, including heart rate (HR), electrodermal activity (EDA), and blood volume pulse (BVP), can be retrieved from a wristband device such as the E4 and used to detect a user's interest during a reading task. Using data from 13 university students reading 18 newspaper articles, we classified their interest level into four classes with an accuracy of 50%, and achieved 68% with binary classification (interesting or boring). This research can be incorporated into real-time prediction of a user's interest while reading, for the betterment of future designs of human-document interaction.
In this paper, we present TanCreator, a tangible authoring tool that helps children create games based on Augmented Reality (AR) and sensor technologies. Combining AR elements and sensors in games closely bridges the virtual world and real surroundings, providing a more joyful and intuitive creation experience for children. Children can boost their creativity by creating their own maze games in daily life with paper tokens and sensors. It is also great training for children's motor skills, such as hand-eye coordination.
In this paper, a novel wearable respiration sensor using an ultrasound transducer is proposed. Respiration is an interesting physiological signal affected by both voluntary and involuntary motions. Hence, respiration reflects conscious and unconscious aspects of a person's state, such as sleep and speaking. The ultrasound transducer installed in the sensor can detect small abdominal movements as impedance variations under pressure. In addition, its wide dynamic range allows respiration to be measured in various situations in daily living. Moreover, our wearable sensor includes generic devices such as an accelerometer, and its software development kit enables users to handle the data obtained from these sensors. We demonstrate through an experiment that the proposed sensor can measure respiration precisely in various conditions.
Today, retailers spend considerable effort to provide a personalized shopping experience to their customers. As data-driven marketing helps to meet customer requirements, it is important to understand individual needs. However, offline stores---unlike their online counterparts---have great difficulty knowing their customers' needs due to a lack of proper context information. In this paper, we propose a framework for estimating customer interests using various sensor devices. The participants in our pilot study expected that recommendation services reflecting their interests would help reduce their shopping time. As a result, shop assistants will be better able to understand, analyze, and even predict customer interests in the near future.
The purpose of this study is to understand (1) college students' goals and the strategies needed to achieve these goals through a bottom-up approach and (2) the use of mobile technology to support such strategies. To do this, we conducted a user study with a total of 295 undergraduate and graduate students. We identified four primary goals and eight strategies. We mapped each strategy to mobile sensor data and derived specific use scenarios. By analyzing and visualizing the collected sensor data, we developed a smartphone app which is expected to help students achieve their goals.
Many office workers and students tend to overlook the potential dangers of infection spread by coughing. To address this problem, we propose CoughCCTV, a group-wise cough management system. It aims to increase awareness of coughing in a group and induce group-wise infection-defensive actions (e.g., frequently washing hands, ventilating, and wearing masks). In this paper, we share our initial design of CoughCCTV and report preliminary results from a 5-day deployment study with 6 participants. We found that CoughCCTV was effective in increasing awareness of coughing in an office, yet not enough to motivate infection-defensive actions.
In Wireless Body Area Networks (WBANs), body-attached devices communicate with each other to form services around the human body. For WBAN design, Intra-body Communication (IBC) uses electromagnetic (EM) wave signals and utilizes the human body as the communication medium, rather than using the surrounding air with RF signals. IBC leaves the air medium free for other services while offering WBAN services, and shows lower energy consumption compared to RF-based nodes. In this work, we conducted a pair of experiments to characterize the signal attenuation and signal propagation latency of EM waves on the human body. Our results show that the average attenuation observed on a human body was -27 dB in a capacitively-coupled circuit and -36 dB in a galvanically-coupled circuit, and the propagation delay of EM waves on the human body was 0.94 ns per cm. These findings can be further utilized to design more complex protocols for IBC networks.
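The reported figures lend themselves to quick back-of-the-envelope protocol budgeting. The sketch below is purely illustrative (the 150 cm link distance is an assumed example, not from the experiments): it converts the measured attenuations from dB to amplitude ratios and estimates the propagation delay over a body-length link.

```python
DELAY_NS_PER_CM = 0.94  # propagation delay reported above

def propagation_delay_ns(distance_cm):
    """Estimated EM-wave propagation delay along the body, in nanoseconds."""
    return DELAY_NS_PER_CM * distance_cm

def db_to_amplitude_ratio(db):
    """Convert an attenuation in dB to an output/input amplitude ratio."""
    return 10 ** (db / 20)

# Illustrative 150 cm link (e.g. wrist to ankle; the distance is assumed):
print(round(propagation_delay_ns(150), 1))   # 141.0 ns
print(round(db_to_amplitude_ratio(-27), 3))  # capacitive coupling: 0.045
print(round(db_to_amplitude_ratio(-36), 3))  # galvanic coupling: 0.016
```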
Self-reported perceived stress often does not correlate with physiological and behavioral stress responses. Current perceived-stress self-report methods require users to answer many questions at different times of the day. Reducing this to one question at multiple time points throughout the day, using microinteraction-based Ecological Momentary Assessment (micro-EMA), allows us to identify smaller or more subtle changes in physiology and the corresponding emotional reactions that reflect experiences of stress. We identify optimal micro-EMAs by finding single-item questions that correlate with intended stressors and are most predictable from physiological signals. Physiological signals were collected in the lab with a flexible wearable sensor that captured R-R inter-beat intervals (IBI) and motion from 22 female participants performing multiple stressful and non-stressful activities. Results show that simply asking how stressed a person is, with a 7-point Likert scale response, yields a 0.63 correlation with intended stressful activities and a 68% F1-score in predicting stress. We further report on the acceptability and feasibility of using this sensor.
Garbage is intricately linked with our daily life. We are investigating a method that estimates regional amounts of garbage using motion sensors mounted on garbage trucks. In this paper, we report our analysis of garbage amounts using national census data. We could obtain insightful information from the motion sensors mounted on garbage trucks alone.
The advertising and media industry has grown rapidly in the past few decades, in line with the increasing popularity of mobile phones. As a result, advertising firms tend to try new technologies to capture the target audience attractively. Since its method of identification is independent of the application, audio fingerprinting has been used for various purposes, including content-based audio retrieval. This project aims to demonstrate one potential application of an audio fingerprinting algorithm in the media industry: a mobile application that can detect and identify advertisement tracks and notify the user of related details and offers. The audio fingerprinting algorithm extracts attributes from an audio file, processes them into audio fingerprints, and compares them with a database of audio fingerprints to find the closest-matching audio file. The proposed application has been tested on a small-scale database and has shown respectable performance.
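The extract-hash-match pipeline can be sketched as below. This is a drastically simplified toy, not the project's algorithm: real landmark-based fingerprinting hashes constellations of time-frequency peaks, whereas this sketch keeps only the strongest frequency bin per frame and hashes consecutive-peak pairs.

```python
import numpy as np

def fingerprint(signal, frame_len=1024, hop=512):
    """Toy fingerprint: the strongest frequency bin of each windowed frame,
    hashed as (peak_i, peak_{i+1}) pairs."""
    peaks = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(frame_len)))
        peaks.append(int(np.argmax(spectrum)))
    return {(a, b) for a, b in zip(peaks, peaks[1:])}

def best_match(query, database):
    """Return the database key whose fingerprint shares the most hashes
    with the query fingerprint."""
    return max(database, key=lambda k: len(database[k] & query))
```

A deployed system would additionally encode the time offsets of the peaks and use an inverted index, so that matching scales to a large advertisement database.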
We propose a new method for human activity recognition using a single accelerometer and additional sensors for training. The performance of inertial sensors on complex activities drops considerably compared with simple activities due to inter-class similarities. In such cases, deploying more sensors may improve performance, but this strategy is often not feasible in practice due to cost or privacy concerns, among others. In this context, we propose a new method that uses additional sensors only in the training phase. We introduce the idea of mapping the test data to a codebook created from the additional sensor information. Using the Berkeley MHAD dataset, our preliminary results show this worked positively, improving both the average F1-score and the average accuracy by 10.0%. Notably, the improvement was higher for the stand, sit, and sit-to-stand activities, typical activities for which a wrist-worn inertial sensor is less informative.
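One way to realize such a codebook can be sketched as follows. This is an illustrative interpretation, not the paper's implementation: it assumes the codebook is a k-means clustering of concatenated [wrist | extra] training features, and that at test time the nearest codeword, matched on the wrist part alone, fills in the missing extra-sensor information.

```python
import numpy as np

def build_codebook(wrist_feats, extra_feats, k=4, iters=20, seed=0):
    """k-means codebook over concatenated [wrist | extra] training features."""
    joint = np.hstack([wrist_feats, extra_feats])
    rng = np.random.default_rng(seed)
    centers = joint[rng.choice(len(joint), k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((joint[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = joint[assign == j].mean(axis=0)
    return centers

def map_to_codebook(wrist_feat, centers, wrist_dim):
    """At test time only wrist features exist: find the nearest codeword by
    its wrist part and return the full codeword, which carries the
    extra-sensor information learned during training."""
    d = ((centers[:, :wrist_dim] - wrist_feat) ** 2).sum(axis=1)
    return centers[np.argmin(d)]
```

A classifier could then be trained and evaluated on the mapped codewords rather than on the raw single-sensor features.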
We propose Tiger, an eyeglasses-type wearable system to help users follow the 20-20-20 rule to alleviate Computer Vision Syndrome (CVS) symptoms such as eyestrain, headaches, and dry eyes. It monitors the user's screen viewing activities and provides real-time feedback to help users take appropriate actions. We present a system architecture with an initial version of the hardware prototype. We also report preliminary results assessing its feasibility.
Many people worldwide suffer from panic disorder. Panic disorder adversely affects overall social relations as well as mental and physical health, degrading quality of life. Thus, we introduce Moonglow, a wrist-wearable device with a user-centered design aimed at supporting cognitive behavioral therapy for panic disorder patients. Moonglow helps patients relieve panic disorder symptoms by consistently training them and correcting distorted thought patterns through diary records. We confirmed the need for Moonglow through a literature review and a preliminary study on cognitive behavioral therapy, and designed the device based on the gleaned insights.
In the current global fashion market, the utilization of Radio-Frequency Identification (RFID) technology has continuously enhanced retail sales performance and productivity. The purpose of this study was to review the literature on RFID utilization in the See-Now-Buy-Now fashion business model, which requires immediate order fulfillment after runway collections are presented. This study identified five conditions fashion companies must meet for efficient RFID incorporation into this model: (a) essential RFID components, (b) a compatible cloud-based solution, (c) predictive analysis capability, (d) a strong IT infrastructure for operating mobile commerce, and (e) a vertically integrated supply chain. The research also identified major challenges of current RFID use in the fashion industry, such as passive RFID tag limitations, cloud system interoperability, and privacy and security concerns around RFID data sharing.
Mobile instant messengers lack awareness of the social appropriateness of conversations, which leads to embarrassing situations when an unwanted message is unexpectedly exposed to people nearby. To avoid such situations, we develop a relationship-aware mobile messenger that takes into account the receiver's relationship to the sender and to people nearby. Based on the in-situ relationship, it selectively shows or hides the content of incoming messages in the notification pop-up. We develop a messenger prototype and show its usefulness via a deployment study.
Commercial site recommendation based on big data is one of the innovative applications of the new retail era. Most recent studies use regression analysis or collaborative filtering to recommend optimal sites based on features extracted from commercial, geographic, and other heterogeneous data. Unlike manual features, which are difficult to define well, deep learning can automatically extract features and give a nonlinear, in-depth description of the relationships between variables. Therefore, this paper applies deep learning to commercial site recommendation. We first study the use of NeuMF, a neural collaborative filtering method, for commercial site recommendation. Then we propose NeuMF-RS, a method based on NeuMF. Finally, we evaluate our proposed model on a real-world dataset collected from Dianping.com. The results indicate that NeuMF-RS outperforms the state-of-the-art methods in commercial site recommendation.
This paper describes our effort to reconnect people with their surroundings in an urban setting, making them aware of their immediate environment. We present an initial exploration, as well as a first prototype, to support urban wandering. Our first insights are encouraging: our prototype appears to enhance the experience of wandering and helps users with lower ambiguity tolerance engage in wandering.
Sensory substitution has been a research subject for decades, yet its applicability outside of research remains very limited, creating scepticism among researchers as to whether full sensory substitution is even possible. In this paper, we do not substitute the entire perceptual channel. Instead, we follow a different approach that reduces the captured information drastically. We present the concepts and implementation of two mobile applications that capture the user's environment, describe it in the form of text, and then convey this textual description to the user through a vibrotactile wearable display. The applications target users with hearing and vision impairments.
Our overall goal is to provide a personalized method to categorize and find media content of interest for individual users, focusing especially on implicit feedback (facial expressions, posture, and other reactions). This paper presents an initial study to understand tagging and annotation processes for music videos, focusing on Korean Pop videos. We present an initial experimental setup of 10 users watching 5 selected K-Pop videos. We collected the words, key terms, and phrases users use to describe and search for the music video content in question, and recorded eye gaze as well as body posture and facial expressions of the participants. In addition, we explore tag identification associated with the video content in a study, to look into cultural and individual differences.
Vocabulary acquisition is a complex task that requires continuous review. Existing vocabulary acquisition tools take into account only the correctness of answers. However, it is important to know whether the user has a knowledge misconception or answered by chance. This information can be assessed through the user's confidence.
We aim to construct a confidence estimation system that will help users review more effectively. We analyze the keystroke information of 12 participants answering 120 vocabulary questions on a smartphone and estimate whether they feel confident about their answers, obtaining an accuracy of 89.1% in a user-independent setting.
Our communication depends highly on nonverbal cues, especially facial expressions. This paper presents the mapping of spontaneous facial expressions in daily conversation using optical sensors on smart eyewear and an unsupervised learning method (Self-Organizing Map) to see which expressions are potentially detectable. We conducted a case study of five to ten minutes of unscripted conversation with each of five users. It showed that our system could map various facial expressions of the users, such as social smiles and smiles of enjoyment. The study also demonstrated that a map trained on the datasets of all five users could categorize similar expressions of each user into clusters shared among the users.
We present a 5-month experiment with 30 residents of a caregiving facility to collect large-scale data and analyze the correlation between sleep and daytime activities. Analysis of sleep is useful for health care and self-analysis. In particular, knowing how daytime activities are influenced by sleep quality, and vice versa, is as important as analyzing the sleep status itself. Existing analyses of the correlation between sleep and daytime activities have collected data for at most one week, and the recorded activities are primitive, such as 'activity levels'. In this paper, we performed a sensing experiment at a caregiving facility to collect data from 30 subjects over 5 months. Furthermore, by collecting care records for each subject, we were able to collect daytime activity data. Finally, we analyzed the data of five subjects over 26 days and found that (1) sleep status can predict whether specific users will exercise during the day with 91% accuracy, and (2) daytime exercise influences the time at which specific users start to sleep.
Human activity recognition (HAR) is challenging, particularly in natural settings, due to issues like confounding gestures present in different activities, diversity in performing the same activity, and the wide range of possible human activities. Acceleration and rotation rate, two of the most widely used sensing modalities for HAR, are limited in addressing these issues. Moreover, many wearable solutions are focused on particular activities and do not generalize to others. One challenge is to develop underlying generic techniques for activity recognition that can be used in many different wearable-based applications. We present a set of general-purpose techniques for activity recognition using wearables. The techniques are based on quaternions, which represent the orientation of a device in three-dimensional space. They can be used for different purposes, such as reducing computation, increasing robustness and accuracy, and better understanding movements for HAR.
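As a minimal illustration of the quaternion representation (a generic sketch, not the paper's specific techniques), the following rotates a device-frame vector into a reference frame via the Hamilton product v' = q v q*, the basic operation behind orientation-aware feature computation:

```python
import numpy as np

def quat_multiply(q, r):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(v, q):
    """Rotate 3-vector v by unit quaternion q: v' = q v q*."""
    qv = np.concatenate([[0.0], v])
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_multiply(quat_multiply(q, qv), q_conj)[1:]

# Sanity check: a 90-degree rotation about z maps the x axis onto the y axis.
q = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
assert np.allclose(rotate(np.array([1.0, 0.0, 0.0]), q), [0.0, 1.0, 0.0])
```

With the device orientation expressed as such a quaternion (e.g. from a sensor-fusion filter), device-frame acceleration can be mapped to a body- or world-aligned frame before feature extraction, which removes the dependence on how the wearable happens to be worn.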
In this study, an Arduino-based mini nurse robot has been developed to assist elderly people. In Bangladesh, a low-income developing country, elderly people (aged 60+) make up 7% of the total population. Elderly people are often neglected in this developing society, where providing support within the family structure is challenging. An automated robot can assist them in such situations, especially in healthcare. In developing countries, automated solutions are not as available or affordable as in developed countries. Our major contribution is a low-cost healthcare robot that assists elderly people in taking medicines on schedule.
In this research, we introduce an experience-sharing platform that enables users to record and play back experiences accompanied by haptics. The platform records and reproduces haptic data, adding tactile sensation to video and audio on a smartphone using the MPEG-4 file format. In addition, users can share experiences including haptics by uploading and downloading the haptic data to and from a cloud system. We expect applications such as finding haptic similarity using tags added at recording time. In this paper, we focus on the design and verification of the haptic device, the haptic data format, and the cloud system. As a result, we demonstrate effective compression and storage of vibrotactile data in the MPEG-4 container.
Conventional tactile diagrams for visually impaired people are usually limited to a finite number of annotations, and their associated Braille legends can extend to several pages, making tactile books bulky and cumbersome. To address this issue, we present the design of ColorTact, a finger-worn assistive device that lets visually impaired users explore tactile diagrams with audio feedback. The device leverages color-printed tactile diagrams and associates audio supplements with different colors to provide additional information about a single-page tactile diagram. ColorTact consists of a color sensor paired with a smartphone app that plays preset audio information about the distinctive areas of a tactile diagram. Furthermore, the device is designed not to hinder natural finger movements or tactile sensation. By eliminating the need for additional appendices, ColorTact aims to support visually impaired students in learning graphics-intensive subjects in Science, Technology, Engineering, and Mathematics (STEM).
In technology-enabled self-tutoring systems, it could be useful to monitor learners' cognitive and affective states in real time to provide useful feedback when a student faces difficulty or seems disengaged. Biometric sensors such as EEG, GSR, and eye trackers have been used to indicate the cognitive and affective states of individuals. In order to make EEG relevant and affordable for such HCI applications, we need to be able to use consumer-grade devices. We conducted a pilot study using a wearable consumer-grade MUSE© headband to record EEG signals. We used the ratio of theta to alpha band power (theta/alpha) to investigate cognitive load, and frontal asymmetry in the alpha band to investigate learners' affective states. We found evidence supporting the use of theta/alpha as a cognitive-load metric. However, alpha frontal asymmetry was not indicative of affective states. We discuss the results and possible implications of our study.
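The theta/alpha cognitive-load index mentioned above can be computed from an EEG trace by estimating power in the theta (roughly 4-8 Hz) and alpha (roughly 8-13 Hz) bands. The sketch below is a minimal illustration using a simple FFT periodogram and a synthetic signal; the abstract does not specify the exact band limits or estimator used.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Average power in the [lo, hi) Hz band from an FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= lo) & (freqs < hi)
    return psd[band].mean()

def theta_alpha_ratio(signal, fs):
    """Cognitive-load index: theta (4-8 Hz) over alpha (8-13 Hz) power."""
    return band_power(signal, fs, 4, 8) / band_power(signal, fs, 8, 13)

# Synthetic example: a theta-dominated signal yields a ratio above 1.
fs = 256
t = np.arange(0, 4, 1.0 / fs)
eeg = 3.0 * np.sin(2 * np.pi * 6 * t) + 1.0 * np.sin(2 * np.pi * 10 * t)
ratio = theta_alpha_ratio(eeg, fs)
```

In practice, Welch averaging over short windows gives a more stable estimate than a single periodogram, but the band-ratio idea is the same.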
In a classroom, it is important for teachers to grasp students' engagement in order to lecture effectively. However, grasping students' engagement is difficult when there are many students. The purpose of this study is therefore to measure students' engagement with a pressure-mat-equipped chair rather than relying on the teacher. We recorded students' upper-body pressure distributions while they took e-learning lectures, 145 lectures in total. We then extracted 56 features per lecture, selected the most relevant features, and trained classifiers to determine whether a student was engaged in the lecture. The average accuracy was 75.2% for student-dependent models. This result shows that it is possible to predict students' engagement automatically, which will help teachers give lectures more effectively.
In this study, an automated assistance system, "RICHSHAW BUDDY", has been developed for three-wheeler Auto Rickshaws. In developing countries like Bangladesh, Auto Rickshaws are a key part of transportation. They are also involved in most fatal accidents (~85%) because of speeding, lack of maintenance, and the absence of a rear viewer. In this research, we have developed a complete assistance system with rear-view monitoring, a rear obstacle sensor, an over-speed sensor, and a maintenance scheduler, all integrated into a user-friendly app. The system is designed to overcome these downsides with minimal cost and an easy installation process. To appraise its effectiveness, we installed the system in a real Auto Rickshaw on the road. Drivers responded enthusiastically during tests and demonstrations. This work creates a new pathway to increasing road safety and awareness of driving rules.
This study proposes a system that visualizes the pressures applied to body parts in a bed using textile pressure sensors, to help caregivers learn posture-change skills for pressure-ulcer prevention. In an evaluation with 21 nursing students, the comparison group that learned with our system scored better than the control group in understanding the body-pressure distribution of sub-body parts: head (6/6, 100%) and right leg (3/6, 50%).
This paper proposes a new method of early change detection for people-flow analysis. Conventional methods often focus on a single location (spot) to show how the number of people changes over time. In contrast, our method takes the links between spots into account to detect early signs of congestion at a specific spot as soon as possible. The main advantage of the proposed method is that it describes not only the characteristics of each spot but also the relationships among spots, i.e., whether their connections are strong or weak. We adopt the idea of PageRank, which is based on graph-theoretic centrality, and extend it to represent the amount of people flow among spots. We call the extended method "SpotRank". SpotRank assigns an importance score to each spot, calculated from the number of paths and the amount of people flow from other spots; the more paths and flow a spot receives, the higher its importance score (ranking). The proposed method computes SpotRank every 10 min and then performs change detection, i.e., measures how much the ranking changes over time. In our experiments, we measured people flow using Wi-Fi packet sensors over a period of 16 weeks. We confirmed the effectiveness of the proposed method, which successfully detected early signs of congestion at a restaurant in our university.
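A PageRank-style score over a flow-weighted spot graph can be sketched as follows. This is only an illustration of the general idea of extending PageRank with people-flow weights; the actual SpotRank formulation, damping value, and toy numbers here are assumptions, not taken from the paper.

```python
import numpy as np

def spot_rank(flow, damping=0.85, iters=100):
    """PageRank-style importance over a people-flow matrix.

    flow[i][j] = observed number of people moving from spot i to spot j.
    Spots receiving more paths and more flow get higher scores.
    """
    flow = np.asarray(flow, dtype=float)
    n = len(flow)
    out = flow.sum(axis=1, keepdims=True)
    # Row-normalize to transition probabilities; spots with no
    # outgoing flow spread their score uniformly.
    P = np.where(out > 0, flow / np.where(out == 0, 1, out), 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * (r @ P)
    return r / r.sum()

# Toy network: most flow converges on spot 2 (e.g., a restaurant),
# so it should receive the highest score.
flow = [[0, 5, 20],
        [5, 0, 30],
        [5, 5, 0]]
scores = spot_rank(flow)
```

Recomputing such scores every 10 minutes and tracking ranking changes over time would then give the change-detection signal the abstract describes.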
Training for any kind of sport requires not only dedication but also the right way of obtaining the necessary information. To maximize training, we explore how to bridge spectating and practice by connecting virtual and physical space. We introduce the concept of ubiquitous sports training in UbiTrain, where the user can train anytime, anywhere with the help of mixed reality (MR) and virtual reality (VR). The contributions of this work are as follows: 1) it leverages physical and virtual space for sports training, 2) it adapts to whatever physical space the user is currently in, allowing ubiquitous usage, and 3) it combines practice and observation into an effective learning package.
Quick Response (QR) codes are rapidly becoming pervasive in our daily lives because of their fast readability and the popularity of smartphones with built-in cameras. However, recent research raises security concerns: QR codes can be easily sniffed and decoded, which can lead to private information leakage or financial loss. To address this issue, we present mQRCode, which exploits patterns with specific spatial frequencies to camouflage QR codes. When the targeted receiver places a camera at the designated position (e.g., 30cm and 0° above the camouflaged QR code), the original QR code is revealed through the Moiré phenomenon. Malicious adversaries at any other position see only the camouflaged code. Our experiments show that the decoding rate of mQR codes is 95% or above within 0.83 seconds. When the camera is 10cm or 15° away from the designated location, the decoding rate drops to 0, so the code is secure from attackers.
Several recent mobile operating systems allow users to put the smartphone into a "driving mode". This mode suppresses incoming SMS/call notifications so that they do not distract from driving. However, currently available driving-mode implementations block all notifications, which decreases their practical usability. We identify this as a problem and survey drivers' preferences regarding incoming smartphone notifications while driving. Specifically, we asked 74 survey participants about the need for a smartphone driving mode and for differentiating incoming call/SMS contacts when driving mode is active. Our results show that the need for a designated driving mode scored 4.3 on average on a 0-6 scale. When asked what criteria would be ideal for differentiating contacts for notification prioritization, ~59% chose communication frequency with the contact as the main criterion. Overall, our results suggest the need for careful design when implementing a smartphone driving mode for incoming notification control.
Current clinimetric assessment of Parkinson's disease (PD) is insensitive, episodic, subjective, and provider-centered. Ubiquitous technologies such as smartphones promise to fundamentally change PD assessment. To enable frequent remote assessment of PD tremor severity, we present a 39-month smartphone research study conducted in a real-world setting without supervision. More than 15,000 consented participants used the smartphone application, mPower, to perform self-administered active tasks. Within the scope of this abstract, we developed an objective smartphone measure of PD tremor severity, called mPower Tremor Scores (mPTS), using machine learning. The efficacy and reliability of mPTS were further tested and validated in a separate cohort in real-world and in-clinic settings. This study demonstrates the utility of structured activities with built-in smartphone sensors for measuring PD tremor severity remotely and objectively in a real-world setting, using more than 1,100 participants.
In this paper, we propose PrivacyShield, a mobile system for on-demand, selective, just-in-time privacy provisioning. PrivacyShield leverages the smartphone's screen digitizer to capture gestures even with the screen turned off. Based on these subtle gestures, various privacy-protection policies can be configured on the fly.
Smartphones with embedded and connected sensors play a vital role in healthcare through various apps and mHealth platforms. RADAR-base is a modern mHealth data-collection platform built around Confluent and Apache Kafka. RADAR-base enables study design and setup as well as active and passive remote data collection. It provides secure data transmission and scalable solutions for data storage, management, and access. The platform is currently used in the RADAR-CNS study to collect data from patients suffering from Multiple Sclerosis, Depression, and Epilepsy. Beyond RADAR-CNS, RADAR-base is being deployed across a number of other funded research programmes.
The traditional hospital set-up is not appropriate for long-term epilepsy seizure detection in naturalistic ambulatory settings. To explore the feasibility of seizure detection in such a setting, an in-hospital study was conducted to evaluate three wearable devices and a data-collection platform for ambulatory seizure detection. The platform collects and processes data for study administrators, clinicians, and data scientists, who use it to build seizure-detection models. For that purpose, all data collected from the wearable devices are additionally synchronized with the hospital EEG and video, with gold-standard seizure labels provided by trained clinicians. The data collected by the wearable devices show potential for seizure detection in out-of-hospital, ambulatory settings.
Tabletop computers set out to change the way people work by providing a display on the table surface. Still, they remain flat displays with digital content behind a glass screen. Tangibles extended the interface into the third dimension, yet the content still mostly sticks to the screen or is projected onto rigid objects. In this work, we present OverTop, a concept for a fully three-dimensional interaction experience for both content and interface. We introduce three layers where content and interfaces can be presented: Below, On, and Above. Using these three layers, OverTop extends the design space of tabletop interfaces to the area above traditional tabletop workplaces, opening up new interaction and visualization possibilities.
Recent epidemiological studies suggest that age-related cognitive decline---particularly the stages between normal cognitive changes in aging and early dementia---is adversely affected by environmental exposures such as long-term air pollution and traffic noise. Although monitoring outdoor air pollution is now commonplace, and smart-home solutions for monitoring indoor air quality are becoming prevalent, ways for the elderly to record long-term environmental exposures and adopt healthy lifestyle changes remain little explored. We present myCityMeter, a pollution-exposure management tool for older adults and their caregivers. myCityMeter measures the pollutants shown to be associated with cognitive impairment in older adults: PM2.5 and ambient noise. Using a set of neighborhood-level stationary and personal mobile sensors, myCityMeter helps users monitor their environmental exposures, anticipate exposures when planning activities, journal cognitive performance, and take day-to-day actions to avoid the environmental risk factors for early dementia.
In this paper we present Flair, an interactive fiction game intended to serve as psycho-educational material for the therapeutic treatment of Social Anxiety Disorder (SAD). Along with the game-design approach, we extensively discuss how cognitive-behavioral therapy (CBT) techniques were incorporated into Flair's story and the psychological benefits of doing so. The initial results are encouraging, as patients (15 users) and a therapist responded positively to the current game design.
E-learning with digital learning materials is widespread because it gives learners freedom in choosing when and where to study. Often, these materials are provided uniformly regardless of the learner's knowledge and attention. However, to maximize the power of digitized materials, the presented content can be dynamically adapted to the learner's state, in particular their degree of concentration. Based on this background, we propose MARTO, a system that changes the learning content based on the learner's concentration level measured by EEG. We have also examined how accurately the concentration level is estimated from EEG by comparing it with the result obtained from eye-gaze movement.
This paper presents a new technique of adaptive auditory feedback in desktop assistance for people with visual impairments. The adaptive auditory feedback switches between speech-only and non-speech-only (i.e., Spearcon) sounds based on the user's mode: active users are assigned speech-only feedback, while engaged users receive non-speech-only feedback. Ten participants performed tasks on desktop computers in a within-subject design. The proposed adaptive auditory feedback is more efficient than speech-only and non-speech-only feedback and can improve the user experience in the desktop environment. It may also help researchers use adaptive auditory feedback as a substitute for speech-only or non-speech feedback when designing accessible software, tools, and systems.
According to the theory of Embodied Cognition, our behavior results from real-time interaction among our surroundings, our cognitive skills, and the nervous system. From this perspective, researchers are considering learning environments that promote physical activity to achieve cognitive tasks. Such Natural User Interfaces (NUIs) make use of gesture-based sensors like the Microsoft Kinect, yet we lack in-depth studies of how they improve the learning process. In this paper, we present observations from two deployment studies that focus on different roles NUIs can play in learning activities. We deploy two Kinect-based applications in real-life scenarios: Yoga Soft, a digital yoga instructor, and Mudra, a Kinect-based learning system. The first study was conducted at the residences of preadolescent children in Gurgaon, India; the second at an education center specializing in the care of kindergarten children in Pilani, India.
To expand the possibilities of thermochromic displays, we design Eunoia, a series of animations on textile patterns that control the area and timing of color-change activation using the properties of fabrics, thermochromic pigments, and conductive threads. The six design patterns draw on the length of the conductive threads, voltage regulation, fabric density, the activation temperature of the thermochromic pigment, fade-in and fade-out times, and maintenance time. For example, we can control the area of color change around a conductive thread by regulating the applied voltage, or control when a pattern is activated by using different fabrics. We collaborated with a graphic artist to design animations using each of these properties. In this way, we aim for Eunoia to provide guidance to graphic designers, fashion designers, and artists in creating novel kinds of dynamic fabrics.
Story Teller, an augmented reality (AR) application, is designed to teach preschool children Chinese words. To train their motor skills, holdable items are used as the AR markers, which can be combined to construct a story. To retain children's interest, varying stories can be presented in different locations and at different times.
We propose a method that interpolates missing values from sensor nodes in a sensor network using Nonnegative Matrix Factorization. Since nearby sensor nodes produce similar values, these values enable more reliable interpolation. We carried out experiments and evaluations using data from sensors deployed in a real environment.
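NMF-based interpolation can be sketched with multiplicative updates applied only to observed entries, so that correlated sensors drive the reconstruction of the missing ones. This is a minimal illustration of the general technique, not the paper's exact formulation; the toy data and rank are assumptions.

```python
import numpy as np

def nmf_interpolate(V, mask, rank=2, iters=500, seed=0):
    """Fill missing sensor readings via masked nonnegative matrix factorization.

    V: (sensors x time) nonnegative matrix; mask: 1 where observed, 0 missing.
    Multiplicative updates fit W @ H to the observed entries only; the
    product W @ H then also provides estimates for the missing entries.
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, m)) + 0.1
    MV = mask * V
    for _ in range(iters):
        WH = mask * (W @ H)
        H *= (W.T @ MV) / (W.T @ WH + 1e-9)
        WH = mask * (W @ H)
        W *= (MV @ H.T) / (WH @ H.T + 1e-9)
    return W @ H

# Toy example: sensor 2 always reads twice sensor 1; its last reading is
# missing and should be interpolated to roughly 8.
V = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 4.0, 6.0, 0.0]])
mask = np.array([[1, 1, 1, 1],
                 [1, 1, 1, 0]])
filled = nmf_interpolate(V, mask, rank=1)
```

The rank controls how much shared structure across sensors is exploited; for strongly correlated nearby nodes, a low rank suffices.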
An increasing number of travelers edit and share movies while traveling, owing to the widespread use of smartphones. While short movies have garnered much attention, manually concatenating highlights and commentaries is often considered too complicated for producing good movies. We therefore propose a method for creating impressive short movies by reconstructing travel movies based on their emotional arcs, i.e., the emotional trajectories of stories over time. Our method extracts adjectives from travel-movie utterances, calculates emotional scores using a sentiment dictionary, and rearranges scenes based on emotional-arc templates derived from 194 popular cinema screenplays. We conducted a user study in which travelers evaluated short travel movies reconstructed by our method; their impressions changed according to the emotional-arc templates.
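The pipeline of scoring utterances by adjective sentiment and rearranging scenes to follow a target arc can be sketched as below. The tiny lexicon, the greedy assignment, and the example arc are all hypothetical stand-ins for the paper's sentiment dictionary and screenplay-derived templates.

```python
# Hypothetical sentiment lexicon; the paper uses a full sentiment dictionary.
LEXICON = {"beautiful": 0.9, "delicious": 0.8, "tired": -0.6, "crowded": -0.4}

def scene_score(utterance):
    """Mean sentiment of the lexicon adjectives found in a scene's utterance."""
    hits = [LEXICON[w] for w in utterance.lower().split() if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

def reorder_by_arc(scenes, template):
    """Greedily assign scenes to arc positions.

    template gives the target emotional score per position, e.g. a
    'rags-to-riches' arc rising from negative to positive.
    """
    remaining = list(scenes)
    ordered = []
    for target in template:
        best = min(remaining, key=lambda s: abs(scene_score(s) - target))
        remaining.remove(best)
        ordered.append(best)
    return ordered

scenes = ["so crowded here", "what a beautiful sunset", "i am tired"]
arc = [-0.6, -0.4, 0.9]          # rising emotional arc
ordered = reorder_by_arc(scenes, arc)
```

With a rising arc, the negative scenes are placed first and the most positive scene closes the movie.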
Sensing our daily activities is essential for ubiquitous computing. Although smartphones and wearable devices can easily collect their owner's data, continuous sensing drains the battery and consumes CPU power on those devices. This paper proposes a new framework for symbiotically constructing an individual's location dataset, one of the most typical forms of daily-activity data, by combining infrequently sensed location logs from serendipitously nearby users, including strangers. The proposed method estimates where the user was by using other users' sparse location logs. The user can thus obtain a detailed, dense location history without increasing the sensing frequency or CPU load of their device beyond normal daily use. Experimental results show that a rich location dataset can be estimated for the user.
Health monitoring using wearable devices is gaining increasing research interest. Several longitudinal studies have been conducted with college students, taking advantage of the robust data acquisition of wearable devices. In these studies, watch-type wearable devices and mobile phones are often used to collect data. However, longitudinal activity tracking with an eyewear-type device is challenging because such devices' bulky hardware often causes significant discomfort. This paper reports on a longitudinal study of high school students using commercial electrooculography glasses to track study activity. Initial results indicate that weekly feedback via text messages can contribute to longer use of the device.
Wearable technology is becoming increasingly common in primary and high school settings. Furthermore, research has shown that augmented reality, another type of wearable technology, has the potential to benefit student learning. Combining these two technologies to support creativity and problem-solving for children has not been well explored. In this work, we describe our initial exploration of the contextual variables that distinguish low- from high-creativity moments. Ultimately, we aim to use wearable technology (and its sensors) to support children's creativity by detecting moments of creative fixation and providing feedback in these low-creativity moments to spur creativity. The main contributions of this work are a preliminary pilot study with ten primary-school students (K-6), our initial results, and a discussion of how children's creativity can be supported or enhanced using wearable technology and augmented reality.
We analyzed a large dataset from a mobile exercise application to find the preferred running situations of a large number of users. We categorized users according to their running behavior (e.g., regularly active or rarely active over the year), then studied the influence of 15 features, including temporal, geographical, and weather-based features, for the different user groups. We found that geographical features influence the behavior of less active runners.
Syncope and strokes in toilets can lead to severe injuries and can even be life-threatening. However, owing to privacy concerns, vision-based fall detection cannot be applied in such a scenario. In this poster, we propose ToiFall, a prototype for syncope detection in toilet environments. ToiFall collects Channel State Information (CSI) from commodity Wi-Fi devices. Different human movements form distinct textures on CSI images, and these textures can be used for feature extraction and classification. Experimental results show an accuracy of over 98% for fall detection with satisfactory reliability.
Language development has long been recognized as one of the core areas of deficiency in autism spectrum disorder (ASD). A large number of technology-based intervention applications have been developed to support, enhance, and facilitate language and literacy development among individuals with ASD. However, although flexible, personalized customization to accommodate ASD children's individual developmental trajectories is highly desirable, relatively few systems have been built to allow such personalization and adaptivity. In this demo, we present two flexible and individually adaptable picture-based word-learning mobile applications, Swipe-N-Tag and Snap-N-Recognize, which are lightweight in terms of the technologies utilized. Both applications allow caregivers and teachers to build up a picture database based on a child's learning progress in the classroom and the items available at home (or outdoors during an activity). As such, words acquired in the classroom can hopefully transfer to real-life situations. The iOS version of Swipe-N-Tag (for both iPhone and iPad) is currently being alpha-tested with a limited number of parents and teachers at a private autism center in the city, with promising results.
Detecting human interaction borders in urban environments, the geographical perimeters of dense human mobility interaction, is important for mobility-related city resource allocation and urban planning. However, as transportation technology advances, dwellers' movements become convenient and flexible, raising doubts about whether such interaction borders exist and how they come into being. This poster investigates these questions via a cellular mobility dataset from Shanghai, China. From this large-scale dataset, we first build an urban flow network in which nodes are city blocks and edges are weighted by human-mobility interaction frequency. On this network, we discover several spatial communities and extract their borders as the interaction borders. We find, surprisingly, that the interaction borders do exist, because the city blocks in each spatial community are geographically connected. We further find that the borders are influenced not only by administrative policy but also by geographical and cultural factors.
PECS, the Picture Exchange Communication System, involves teaching individuals with autism spectrum disorder (ASD) to use picture cards for requesting items and activities; it has been used with notable success not only by individuals with ASD but also by their families and teachers. Despite much progress, to our knowledge very few dedicated PECS (software) applications are available in Chinese. Motivated by this previous research, we are developing a highly customizable and contextually expandable PECS system embedded with augmented aids and additional communication abilities. In particular, our system not only offers content-authoring tools but also expands picture vocabularies by automatically offering context-sensitive picture cards (via location and event sensing) for users to make on-demand requests through WeChat, one of the most widely used social communication tools in China. Last but not least, a simpler version with typical PECS capabilities and a message-sending module has been loaded onto a children's tablet, which makes our PECS affordable for Chinese children in rural areas.
We present how crowdsensing with civil officers contributes to collecting larger amounts of urban data with high spatio-temporal coverage and enables understanding urban features through analysis of the data. We implemented a crowdsensing system that fits into daily city management and conducted a long-term experiment with the garbage section of Fujisawa City, Japan. Based on analysis of the collected data, we show that our system provides fine-grained urban data compared to crowdsensing with citizens, while improving the efficiency of city management. In addition, we analyzed our dataset together with demographic urban data and clarified the relationship between patterns of uncollectible garbage disposal and the features of residential areas.
Head-mounted display (HMD) virtual reality (VR) games have shown promise beyond entertainment. Work has shown that playing VR games for even 10 minutes can give players valuable levels of physical exertion, much higher than their perceived exertion. Despite this promise as an engaging form of exercise, there is a risk that players become over-exerted because they are so immersed in VR games, potentially leading to injury. Activity trackers like smart watches and chest heart-rate monitors can help track exertion, but the player is inherently locked out of the real world while in the game; the data can only be viewed if the player takes off the HMD, interrupting the game session. At present, the design of suitable exertion feedback in HMD VR games has received little attention. In this paper, we show how fully immersive VR and data from activity trackers can be combined in real time so that players can track their level of exertion, helping prevent over-exertion during gameplay.
This paper describes a pilot study involving alcohol-dependent patients and their family members, aimed at identifying salient themes to guide the design of a mobile support system. We then developed a phone-based support system that facilitates communication between patients and their family members. The patient self-administers breath alcohol tests using a portable breathalyzer and self-reports moods and alcohol cravings. This information can then be shared with selected family members, who subsequently provide responses such as encouragement or suggestions. The patients do not directly reply to these responses, but rather use the same app to indirectly rate the family's responses according to their effectiveness in helping them remain sober. These indirect ratings, as well as other data provided by patients and family members, can be used during subsequent group interventions to assist counselors (the treatment team) in making suggestions about future intra-family communication, with the aim of breaking many of the negative loops that would otherwise impede positive interactions and the effectiveness of treatment.
In China, transaction behavior is recorded on invoices. An invoice records several fields, including the transaction content, the transaction code (in accordance with the Tax Classification and Coding for Commodities and Services issued by the state), and the transaction unit. In this paper, we propose a compositional CNN-RNN framework with an attention mechanism that classifies transaction behavior collected from tax invoices into the corresponding category based on the official transaction code. This is of great importance to tax supervision and provides a new perspective for analyzing the industrial structure of a region. Preliminary experiments show that the overall accuracy of classifying transaction behavior reaches 75%.
Mouse-based target selection requires much movement when locating targets across long distances on large displays. Eye tracking can locate targets more easily and quickly across long distances and thus has high potential for fast target selection on large displays. However, eye tracking still faces challenges such as low accuracy and a high error rate in selection operations, especially for gaze-only interaction. This paper proposes a multimodal interaction method that combines gaze with gestures: gaze is used for rough selection first, and a hand gesture then confirms the accurate selection. Furthermore, when targets are small and crowded, we use a semi-fixed gaze cursor and a secondary selection mechanism to optimize the selection process. Finally, we conducted a pilot study; the results showed that the selection speed and accuracy of our method were higher than gaze-only input and similar to mouse input.
As digital machine tools like 3D printers have become cheaper and more popular, even laypeople are becoming involved in personal fabrication. There are many web services for sharing instructions; however, people often have difficulty creating such instructions. To solve this problem, we propose a system that supports the creation of video instructions for manufacturing using smartwatches. Our system consists of a smartphone application for recording, a web server for data storage and a machine-learning platform, and a browser-based application for data presentation and manual tagging. The system can automatically add chapters to video instructions and provides a smart fast-forward function.
We will demonstrate a mobile app test system in VR environments. The system enables developers to use a real smartphone in VR and to evaluate their apps at locations of interest under various network conditions. The system consists of two core engines. The first is the real-world reproduction engine, which reproduces 3D city models and network environments in virtual environments. The second is the laboratory-virtual environment mapping engine, which brings a real device into VR and applies the environmental information in VR to the device.
This demo presents an interactive experiment environment for physics and electrical engineering students, aiming to provide deep insight into basic electrical theories (e.g., Ohm's and Kirchhoff's laws) using a real-time sensing system with augmented reality (AR) visualization enhancements. To this end, a system was designed to measure current, voltage, and AC frequency, and to perform RFID-based 2D positioning. A mixed reality headset, the HoloLens, was used to provide the visualization and augmentation. A HoloLens application was designed to provide different visualizations of nodal measurements and the detected circuit schematic. Using this interactive experiment setting, the goal is to reduce the cognitive load on learners while allowing a more enjoyable and intuitive learning experience.
In this demo paper, we assess the performance of mid-air gestures for web browsing. We built a Google Chrome extension that interfaces with the Leap Motion controller (LMC). In our user study evaluation, this interaction method proved adequate for basic actions, but limitations arise when performing complex gestures.
We propose a meal-photo SNS (called HealthyStadium) for improving users' eating habits through mutual assessment of each other's health. The application evaluates pictures of meals, which leads to sustainable journaling. In addition, we implement competitive awareness that motivates users to improve their eating habits by letting them share a healthiness ranking among users. Through our study, we confirmed that our system motivates users to improve their eating habits. We also found that the number of records increased with our system.
We introduce an intelligent parking space detection system based on PassiveVLC, a visible light backscatter communication technology. Its substrate is retroreflection: by instrumenting each parking space with a retroreflector, we can detect the vacancy of a space based on whether the lighting infrastructure receives reflected light from that particular spot. We further design and implement a MAC protocol to improve the scalability of our system.
Popular games for smartphones and tablets focus on touchscreen-based interaction. Here we describe a speculative research exploration motivated by the desire to enable traditional gaming experiences on mobile devices by incorporating tactile input controls. The design and ergonomics of our prototype accessory concept were informed by anecdotal feedback from a variety of gamers.
We describe Display Cover, a concept smartphone cover with a secondary screen designed to improve productivity and convenience. Motivated by user research highlighting some of the limitations of current smartphones, the aim of this concept was to explore a practical solution that allows users to be more productive.
In this paper, we present a novel approach to the realization of a battery-free soil profile probe that uses the temperature difference between the near-surface air and the underground soil as a power source. The temperature change in the underground soil is slower than that in the near-surface air, and thus a large temperature difference occurs between the near-surface air and the underground soil for most of the day. Hence, we developed a sensor prototype driven by a thermoelectric generator (TEG) that directly converts this temperature difference into electricity. The results of an experimental implementation of the prototype proved that when the difference in temperature between the near-surface air and the underground soil is only 3 °C, which is much lower than the average temperature difference in an actual field, the measured output power is about 80 μW. Because the typical sensing interval of a soil profile probe is 1 h, the average power consumption (e.g., for a Texas Instruments CC2650) is about 5 μW, which is much lower than the expected amount of harvested energy.
We introduce a Graphical User Interface (GUI) that lets anyone create intelligent interactive documents. An intelligent interactive document is a document that displays content dynamically according to a reader's behavior. To the best of our knowledge, creating such documents currently requires considerable effort and implementation skill. With our system, users, including those without technical expertise, can create interactive documents without any programming. Our system thus broadens who can design new human-document interactions.
The Vocabulometer is a reading assistant system designed to record the reading activity of a user and to extract mutual information about users and the documents they read. The Vocabulometer is a web platform that offers services for analyzing which word is read at which time, estimating the reading skill of the user and the level of difficulty of a document, and recommending reading materials according to the user's vocabulary. For now, the system is based on eye tracking, but in the future we plan to build a smartphone application that does not need additional hardware.
Education is the Achilles heel of successful resuscitation in cardiac arrest. Therefore, we aim to contribute to educational efficiency by providing a novel augmented reality (AR) guided interactive cardiopulmonary resuscitation (CPR) trainer. For this trainer, a mixed reality headset, the Microsoft HoloLens, and a CPR manikin covered with pressure sensors were used. To introduce the CPR procedure to a learner, we designed an application with an interactive virtual teacher model. The teaching scenario consists of two main parts, theory and practice. In the theoretical part, the virtual teacher provides all information about the CPR procedure. Afterward, the user is asked to perform CPR cycles in three different stages. The first two stages aim to build muscle memory through audio and visual feedback. In the end, the performance of the participant is evaluated by the virtual teacher.
We introduce RF-Wear, a low-cost, washable and wearable solution to track movements of a user's body using passive RFIDs embedded in their clothing. RF-Wear processes wireless signals reflected off these tags to a compact single-antenna RFID reader in the user's pocket. In doing so, RF-Wear enables a first-of-its-kind skeleton tracking mechanism that is lightweight and convenient for day-to-day use, without relying on external infrastructure. We implement and evaluate a prototype of RF-Wear on commercial RFID readers and tags and demonstrate its performance in skeleton tracking. Our results reveal a mean error of 8-12° in tracking angles at joints that rotate along one degree of freedom, and of 21° in azimuth and 8° in elevation for joints supporting two degrees of freedom.
Parkinson's disease (PD) is the second most common neurological disorder and affects up to 10 million people worldwide. It has an evolving nature, and symptoms may vary from patient to patient. Thus, to increase the effectiveness of PD treatment, a personalized medication plan is necessary. Currently, PD patients undergo symptom observation during semiannual clinical visits. This work aims at developing a new mode of observation via smartphones, while at the same time offering PD patients a tool to better understand their medication needs. Our mobile application leverages the smartphone's built-in sensors to keep track of the subject's medication adherence throughout the day, taking the shape of a short accelerometer-based game played several times a day, and allows PD patients to record when they took medication. The combination of collected datasets can be used in further studies to estimate changes in PD severity and medication effectiveness over time.
In this research, we propose a system that allows one to find hidden spots that are scenic yet unpopulated, with the intention of improving the satisfaction of one's trip. These hidden spots are not yet noted in existing tourism services; however, they can provide new places for tourists to discover. We developed a prototype application that analyzes and pinpoints hidden spots using venue data from Foursquare. We acquired 19,705 venue records around Tokyo and calculated a relative hidden-spot degree for each. This made it possible to provide users with hidden-spot information through a smartphone application.
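The abstract does not define how the hidden-spot degree is computed. Purely as an illustration, one plausible formulation rewards scenic quality while damping popularity, using the kind of check-in counts Foursquare venue data provides; all names and weights below are hypothetical:

```python
import math

def hidden_spot_degree(scenic_score, checkin_count):
    """Hypothetical score: reward scenic quality, damp popular venues.

    scenic_score in [0, 1]; checkin_count >= 0 (e.g., Foursquare check-ins).
    """
    return scenic_score / (1.0 + math.log1p(checkin_count))

# two made-up venues: a famous, crowded one and a quiet, scenic one
venues = [
    {"name": "famous shrine",  "scenic": 0.9, "checkins": 50000},
    {"name": "riverside path", "scenic": 0.8, "checkins": 40},
]
ranked = sorted(venues,
                key=lambda v: hidden_spot_degree(v["scenic"], v["checkins"]),
                reverse=True)
```

Under this scoring, the quiet riverside path outranks the heavily visited shrine despite its slightly lower scenic score, which is the intended "scenic yet unpopulated" behavior.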
We present eSense, an open, multi-sensory in-ear wearable platform to detect and monitor human activities. eSense is a true wireless stereo (TWS) earbud with dual-mode Bluetooth and Bluetooth Low Energy, augmented with a 6-axis inertial measurement unit and a microphone. We showcase the eSense platform, its data APIs for capturing real-time multi-modal sensory data in a data exploration tool, and its manifestation in a 360° workplace well-being application.
In this demo, we present a smartwatch app designed to help users eat meals at an appropriate rate. Our app measures the user's eating rate in real time and delivers feedback to slow down when the user eats too fast. To this end, we explore four different types of feedback - graphics, text, clock, and vibration - in order to find the most suitable type of feedback for each eating environment.
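As an illustration of the kind of real-time logic such an app needs, the sketch below computes an eating rate over a sliding window of detected-bite timestamps and maps it to a feedback message. The window length and threshold are hypothetical, not values from the demo:

```python
def eating_rate(bite_timestamps, window_s=60.0, now=None):
    """Bites per minute over the trailing window of detected bites."""
    now = bite_timestamps[-1] if now is None else now
    recent = [t for t in bite_timestamps if now - t <= window_s]
    return len(recent) * 60.0 / window_s

def feedback(rate_bpm, fast_threshold=8.0):
    """Pick the message to render as graphics, text, clock, or vibration."""
    return "slow down" if rate_bpm > fast_threshold else "ok"

# a bite every 5 seconds for a full minute is well above the threshold
bites = [i * 5.0 for i in range(13)]   # t = 0, 5, ..., 60 seconds
msg = feedback(eating_rate(bites))
```

In a deployed app the bite timestamps would come from the watch's inertial sensing pipeline, and the returned message would select among the four feedback modalities.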
In this demonstration, we introduce an animated emoji feedback system that helps users eat at an appropriate rate. The system measures the user's eating rate in real time and sends appropriate feedback to the user, in accordance with the measured rate, via Samsung's animated AR Emoji.
Aiming at the development of functional sheets, this research proposes a new fabrication method for origami robots: combining origami geometry with a flexible printed circuit board to create an electronic circuit that can transform into a three-dimensional (3D) shape and move by itself. Inspired by caterpillar movement, we develop a crawling robot to demonstrate this approach. The body of the robot is made of a P-Flex™ sheet and actuated by a shape memory alloy (SMA). The robot uses the contraction force generated by the SMA and the elasticity of the sheet itself to produce periodic changes in shape. Since the fold lines maintain their elasticity, the robot can also be returned to a flat state for storage. The materials we use are commercially available and inexpensive enough for mass production. Moreover, this design method provides an easy way to fabricate a variety of new functional origami robots and mechanisms by designing more complex origami patterns and built-in circuits.
Bodyweight exercises, such as push-ups, sit-ups, and squats, are effective forms of strength training for maintaining good health. To improve people's exercise experience and provide feedback, much prior work has monitored bodyweight exercises by requiring people to wear special sensors on the body. In contrast, this demo shows WiFit, a non-intrusive system that uses surrounding Wi-Fi signals to monitor bodyweight exercises without any attached sensors. It can not only recognize the exercise type but also count exercise repetitions for diverse populations, even in different environments.
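The demo does not disclose WiFit's algorithm; repetition counting over a periodic signal (such as Wi-Fi channel amplitude modulated by body movement) is commonly done by smoothing the signal and counting prominent local maxima. A toy sketch under that assumption:

```python
def moving_average(xs, k=3):
    """Simple centered moving average to suppress high-frequency noise."""
    half = k // 2
    return [sum(xs[max(0, i - half):i + half + 1]) /
            len(xs[max(0, i - half):i + half + 1])
            for i in range(len(xs))]

def count_reps(amplitude, min_height=0.5):
    """Count local maxima of the smoothed signal above min_height."""
    smooth = moving_average(amplitude)
    reps = 0
    for i in range(1, len(smooth) - 1):
        if smooth[i] > smooth[i - 1] and smooth[i] >= smooth[i + 1] \
                and smooth[i] > min_height:
            reps += 1
    return reps

# toy signal: three squat-like oscillations
signal = [0, 1, 2, 1, 0, 1, 2, 1, 0, 1, 2, 1, 0]
reps = count_reps(signal)
```

A real system would first isolate the movement-induced component of the Wi-Fi signal; the peak-counting step is the same in spirit.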
We propose FonLog, a mobile app that can be used as a data collection tool for human activity recognition in nursing services. The app has the essential features for collecting activity data efficiently, identified through feedback from staff at nursing facilities: recording of activity targets, a user-friendly interface, recognition feedback, customizable detailed records, instant activity, and offline functionality.
Researchers, particularly in the area of ubiquitous and wearable computing, often spend a significant amount of effort and time developing devices and/or apps for data collection. Most of these apps and devices are customized with limited options, and they are usually non-generic or publicly unavailable. In this paper, we present WaDa, an easy-to-use app for sensor data collection from commercial off-the-shelf Android smartwatches. It provides handy features such as user-defined labels, time synchronization, sensor selection, and data navigation. WaDa, freely available for research and academic purposes, facilitates prompt data collection without requiring expertise in, or effort for, custom device/app development.
In this study, a set of transmitter/receiver modules was designed to facilitate human body communication (HBC) between pairs of users. The modules, which use a direct digital synthesizer (DDS) to produce frequency shift keying (FSK) modulated signals at arbitrary frequencies, are compatible with a wide range of systems through the use of a USB port interface. The proposed method was tested by implementing an LED color control system where the HBC modules were paired with a smartphone to successfully transmit color information between a pair of users.
Anime music videos (AMVs) are fan-made music videos created by editing and synthesizing anime footage and music. Anime fans create AMVs and share them on the Internet and with their friends. However, AMV creation takes time and effort, and it is hard to synchronize scene changes with the musical rhythm. We propose a system to ease these difficulties. Our system splits a video into frames and crops the face images of the characters. The system then associates the face images with the characters' scenes and classifies the face images by character. Using this information, our system automatically extracts the scenes as clips for each character, drastically reducing the time and effort needed to create an AMV.
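As an illustration of the classification step, one simple approach (not necessarily the one this system uses) assigns each detected face embedding to the nearest character centroid and groups scenes accordingly; the embeddings and names below are made up:

```python
def dist2(a, b):
    """Squared Euclidean distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def assign_character(face_embedding, centroids):
    """centroids: {character_name: embedding}; return the nearest character."""
    return min(centroids, key=lambda name: dist2(face_embedding, centroids[name]))

# hypothetical 2-dim face embeddings; real ones come from a face-recognition model
centroids = {"alice": [1.0, 0.0], "bob": [0.0, 1.0]}
scene_faces = {"scene_01": [0.9, 0.1], "scene_02": [0.2, 0.8]}

clips = {}
for scene, emb in scene_faces.items():
    clips.setdefault(assign_character(emb, centroids), []).append(scene)
```

Once every scene is grouped by character, extracting per-character clips is a lookup in `clips`.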
Recent work has revealed the sensing theory of human respiration outside the First Fresnel Zone (FFZ) using commodity Wi-Fi devices. However, there is still no theoretical model to guide human respiration detection when the subject is located within the FFZ. In our work, we propose a diffraction-based sensing model to investigate how to effectively sense human respiration in the FFZ. We present this demo system to show how human respiration sensing performance varies with the subject's location and posture. By deploying the respiration detection system using COTS Wi-Fi devices, we observe that the respiration sensing results match the theoretical model well.
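For reference, the First Fresnel Zone the abstract refers to is the ellipsoid around the transmitter-receiver line where the path difference stays under half a wavelength; its boundary radius at a point d1 from the transmitter and d2 from the receiver follows the standard formula r_n = sqrt(n·λ·d1·d2/(d1+d2)). A quick worked example (the Wi-Fi numbers are illustrative):

```python
import math

def fresnel_radius(wavelength_m, d1_m, d2_m, n=1):
    """Radius of the n-th Fresnel zone boundary at a point between Tx and Rx."""
    return math.sqrt(n * wavelength_m * d1_m * d2_m / (d1_m + d2_m))

# 5 GHz Wi-Fi: wavelength ~ 0.06 m; Tx and Rx 4 m apart, evaluated at midpoint
r1 = fresnel_radius(0.06, 2.0, 2.0)   # ~0.24 m: the FFZ is widest mid-link
```

The zone is widest at the midpoint and narrows toward either endpoint, which is one reason sensing behavior depends so strongly on where the subject stands.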
This paper discusses the interaction between a user and a wearable device that is being charged. Many wearable devices do not engage with the user while charging. This paper presents a prototype using a pair of wireless earbuds to review the interaction while the device is charging.
1D Eyewear introduces designs in which 1D arrays of LEDs and pre-recorded holographic symbols enable minimal head-worn displays that can be embedded in normal-looking eyeglasses frames. We developed a set of optical designs (transmissive, reflective, and steerable), and a proof-of-concept optics implementation of a self-contained symbology display. To enable very compact symbol generation, we use computer-generated holograms (CGHs) to encode diffraction gratings, which project a pre-recorded static image when illuminated with coherent light.
We introduce programmable material and electromechanical control to enable a set of hybrid watch user interfaces that symbiotically leverage the joint strengths of electro-mechanical hands and a dynamic watch dial. This approach enables computation and connectivity with existing materials to preserve the inherent physical qualities and abilities of traditional analog watches.
We augment the watch's mechanical hands with micro-stepper motors for control, positioning and mechanical expressivity. We extend the traditional watch dial with programmable pigments for non-emissive dynamic patterns. Together, these components enable a unique set of interaction techniques and user interfaces beyond their individual capabilities.
Digital cameras and smartphones have made it exceedingly convenient for people to take photographs. Anyone can easily take and store hundreds of pictures. With film cameras, the general assumption was that users paid more attention to the scenes, owing to the higher cost of photography materials. We hypothesize that user attentiveness has diminished because of the ease provided by digital cameras. To improve memory recall, we propose a viewer that, during photograph review, provides feedback of the user's heart rate at the time of shooting. We believe this will enhance memory recall more than extant photo viewers.
This demo presents a genetic algorithm for optimizing the length and intensity parameters of vibration signals used in vibration patterns. Participants can interact with the system and personalize their vibration signals.
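The demo does not specify the GA's encoding or fitness. A minimal sketch might encode each candidate as a (length, intensity) pair and, standing in for the participants' interactive ratings, use a synthetic fitness that prefers a target pattern; everything below is a hypothetical illustration:

```python
import random

random.seed(0)  # deterministic for the example

def fitness(ind, preferred):
    """Higher is better; in the demo, user ratings would replace this."""
    length, intensity = ind
    pl, pi = preferred
    return -((length - pl) ** 2 + (intensity - pi) ** 2)

def evolve(preferred, pop_size=20, generations=40):
    # individuals: (length in ms, normalized intensity)
    pop = [(random.uniform(0, 500), random.uniform(0, 1)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, preferred), reverse=True)
        parents = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = ((a[0] + b[0]) / 2 + random.gauss(0, 10),    # crossover + mutation
                     (a[1] + b[1]) / 2 + random.gauss(0, 0.05))
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda ind: fitness(ind, preferred))

best = evolve(preferred=(200.0, 0.6))
```

In the interactive setting, each generation's candidates would be played to the participant and the ratings would drive selection instead of the synthetic fitness.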
Supermarkets offer a wide range of products, which makes it challenging for consumers to choose between the different options and find the items they are looking for. Augmented Reality (AR) applications, however, have high potential to enrich real-world objects with information that can be leveraged to improve this process. We developed an application that runs on a regular smartphone and helps users choose between packaged groceries based on factors such as calories or sugar, rated on a scale from red (bad) to green (good). Compared to previous work, no a priori knowledge about product locations is needed, making the system suitable for many use cases. Moreover, information maps precisely onto the outline of the product rather than onto its approximate shelf position, and no modifications of the objects, such as specialized tags, are necessary. Additionally, users can find items just by entering their names; virtually highlighting the packaging helps locate the desired product. It is also possible to make a binary distinction between groceries that do or do not contain specific ingredients.
Most existing wireless power transfer (WPT) solutions are limited to 2-D configurations, which limits mobility when charging electronics. What is needed is 3-D WPT, which can deliver power anywhere in large volumes (e.g., factories, rooms, toolboxes). WPT using quasistatic cavity resonators (QSCR) proposed a route toward truly ubiquitous WPT that safely charges devices as they enter a WPT-enabled space. However, this approach has several drawbacks, such as the need for a central pole and spatially non-uniform power availability. To address these issues, we demonstrate a WPT system based on a "multimode" QSCR; this structure possesses two resonant modes: a pole-dependent (PD) mode, which resembles the previous QSCR work, and a pole-independent (PI) mode, which works whether or not the pole exists. This structure enables two operations: (i) pole-less operation, which works to the same degree as the previous QSCR without the central pole, and (ii) dual-mode operation, which, although it requires the central pole, enables high-efficiency WPT throughout the volume.
In this demonstration, we present a full implementation of Tacita, a display personalisation system designed to address viewer privacy concerns whilst still capable of providing relevant content to viewers and therefore increasing the value of displays.
Wireless charging pads such as Qi are rapidly gaining ground, but their limited power supply range still requires precise placement on a specific point. 2-D wireless power transfer (WPT) sheets consisting of coil arrays are one well-known solution to this issue. However, these approaches require custom-made designs by experts; what we need is a system that can be reconfigured by simply placing ready-made modules on the intended surface.
In this demo, we present "Alvus", a reconfigurable 2-D wireless charging system that enables such simple construction of wireless charging surfaces. Alvus is based on multihop WPT, which constructs "virtual power cords" out of three types of ready-made resonator modules: transmitters, relays, and receivers. Our system instantly and interactively forms wireless charging surfaces on everyday objects (e.g., floors, walls, tables) simply by placing the resonator modules.
We propose a cuttable wireless power transfer sheet that allows users to modify its size and shape. This intuitive manipulation allows users to easily add wireless power transmission capabilities to everyday objects. Properties of the sheet such as thinness, flexibility, and lightness make it highly compatible with various configurations. The sheet employs H-tree wiring, which keeps it functional even when it is cut from the outside inward. To show its feasibility, we present three applications combining our sheet with daily objects: wireless charging furniture, a bag, and a jacket.
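To illustrate why an H-tree layout tolerates cutting: the wiring is a self-similar recursive tree, so severing an outer branch disconnects only that branch's subtree while everything else stays connected to the central feed. A purely geometric sketch of how such segments are generated (coordinates and scaling are illustrative, not the sheet's actual layout):

```python
def h_tree(x, y, length, depth, segments=None):
    """Recursively generate H-tree wire segments as ((x1, y1), (x2, y2)) pairs.

    Each level draws one horizontal bar plus two half-length vertical bars,
    then recurses from the four vertical-bar endpoints at half scale.
    """
    if segments is None:
        segments = []
    if depth == 0:
        return segments
    segments.append(((x - length / 2, y), (x + length / 2, y)))  # horizontal bar
    for end_x in (x - length / 2, x + length / 2):
        segments.append(((end_x, y - length / 4), (end_x, y + length / 4)))
        for end_y in (y - length / 4, y + length / 4):
            h_tree(end_x, end_y, length / 2, depth - 1, segments)
    return segments

wires = h_tree(0.0, 0.0, 1.0, depth=3)   # 3 + 4*(3 + 4*3) = 63 segments
```

Cutting near the sheet edge removes only the outermost (smallest) subtrees, which is what lets users trim the sheet's size and shape without losing the rest of it.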
Understanding gestures helps improve communication between humans, and between humans and robots or agents, because gestures express various feelings of a speaker. We focus on gestures and aim to create a corpus of the relationships between the gestures performed by a speaker and his or her feelings. Associating feelings with gestures is difficult because it requires subjective evaluation and answering many questions. We propose a system, YUGE, that allows us to collect and evaluate gestures in a playful way. YUGE's visual interaction lets users enjoy inputting various gestures. Through a preliminary experiment, we found that our proposed system can classify gestures with almost the same accuracy as the conventional method.
Throughout the 2010s, interest in both organic user interfaces and various tangible haptics systems has grown steadily. Prior art has demonstrated transmitting shape as a means of novel interaction. Oobleck is a mixture of water and cornstarch that exhibits properties of both a solid and a liquid, depending on whether force is applied to it. Prior art mainly used sound and vibration to activate the oobleck; the results were somewhat effective but lacked precise control. However, by embedding magnetic particles and some chemical additives in the oobleck, finer and more dynamic control can be attained. A magnetic matrix was conceived as a user interface to test the oobleck medium. As a proof of concept, four solenoids with small neodymium magnets attached were used; neodymium magnets were chosen over electromagnets because of their extremely dense and strong magnetic fields. The interface was designed to communicate fingertip touch between parties. Four touch sensors were placed on a mouse-shaped interface; when triggered, they deactivate relays controlling the solenoids, which spring upward to sit near a thin receptacle holding the oobleck. A user can then feel the shape of the oobleck change in specific locations, reflecting the touch of the person pressing their fingertips against the sensor area.
Respiratory Sinus Arrhythmia biofeedback-based Breathing Training (RSA-BT) is a cardiorespiratory intervention that has been commonly used as a complementary treatment for diseases such as asthma, and as an effective exercise to reduce anxiety. In this demo, we propose BreathCoach, a smart and unobtrusive system using sensors on a smartwatch and smartphone-based VR that enables in-home RSA-BT coaching. Specifically, BreathCoach uses off-the-shelf devices to continuously monitor key bio-signals, including breathing pattern (BP), inter-beat interval (IBI), and the amplitude of RSA, and intelligently calculates the optimal breathing pattern based on current and historical measurements. The recommended breathing pattern is then conveyed to the user in the form of an intuitive VR game to provide an immersive training experience. We will showcase a research prototype implemented on an Android smartphone and smartwatch with two proof-of-concept VR game designs.
An anime music video (AMV) is a fan-made music video consisting of scenes from Japanese anime set to an audio track, often songs or promotional trailer audio. The Quote MAD is a genre of AMV composed of video clips of quotes. Quote MAD creators must search for and extract the necessary scenes from anime; since both searching and extracting take a lot of time, they burden the creators. We developed a system called PoChotto that enables creators to search for the scenes they are looking for. The system helps creators search for scenes using quote information in anime and provides an interface for extracting scenes. In our preliminary experiments, we found that scene search can be up to three times faster than with the conventional method, implying that our system can reduce the burden on creators.
In this study, we propose AOBAKO, a testbed for context-aware applications that focuses on testing mobile applications using Bluetooth Low Energy (BLE) for indoor positioning. AOBAKO emulates BLE-beacon communications based on accurate frame-level emulation. In the proposed testbed, the emulation result is transferred to a physical space and beacon frames are emitted via radio waves. AOBAKO makes it possible to verify context awareness in a mobile application and can reduce the cost of field surveys.
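One ingredient such a testbed needs is a radio model mapping a virtual beacon's distance to the RSSI an app would observe; the log-distance path-loss model is a standard choice for BLE beacons. The parameters below are typical illustrative values, not AOBAKO's:

```python
import math

def rssi_at_distance(d_m, rssi_at_1m=-59.0, n=2.0):
    """Log-distance path-loss model: RSSI = RSSI@1m - 10*n*log10(d).

    rssi_at_1m is the beacon's calibrated power; n is the path-loss exponent
    (roughly 2 in free space, higher indoors).
    """
    return rssi_at_1m - 10.0 * n * math.log10(d_m)

def distance_from_rssi(rssi_dbm, rssi_at_1m=-59.0, n=2.0):
    """Inverse mapping, as an indoor-positioning app would apply."""
    return 10.0 ** ((rssi_at_1m - rssi_dbm) / (10.0 * n))
```

Emulating frames with RSSI values drawn from such a model (plus noise) lets the app under test exercise its ranging logic without a physical deployment.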
In recent years, human respiration detection based on Wi-Fi signals has drawn much attention due to its good user acceptance and great potential for real-world deployment. However, recent studies show that respiration sensing performance varies across locations due to the nature of Wi-Fi radio wave propagation in indoor environments; that is, respiration detection may perform poorly at certain locations, which we call "blind spots". In this demo, we will demonstrate a human respiration detection system that achieves full location coverage with no blind spots.
In this paper, we present the design and implementation of the HABits necklace, a neck-worn device that estimates behavior. This neck-worn device is continuously evolving to provide researchers with the ability to use it in multiple applications including eating, gesture, and activity recognition. Our proposed HABits necklace generates four signal streams using three embedded sensors including a proximity sensor, ambient light sensor, and a 9-axis inertial measurement unit (IMU). In this work, we highlight its ability to characterize eating episodes. We apply algorithms to estimate chew count, number of feeding gestures, posture, and activity intensity. Users will be able to visually see the necklace data and the result of the algorithms on the smartphone in real time.
Human vitality information is pivotal to many sensing applications. By vitality, we mean the status of a human target in a multi-room environment: whether he or she is still, and which room he or she is located in. Continuous monitoring of human vitality helps us obtain important high-level contexts such as one's emotions, living habits, and physical condition. Unlike most existing solutions, which require human effort in offline training or calibration, in this demo we present WiVit, a training-free, contactless, Wi-Fi-based sensing platform that can capture human vitality information around the clock. In typical indoor environments, WiVit achieves 98% accuracy in vitality detection and nearly 100% accuracy in area detection.
Maintaining a positive and respectful attitude during a group discussion is the key to a healthy collaboration. Automatically capturing and effectively showcasing the attitudes expressed through one's face, voice, and quote can make the group members more aware of their emotions and behaviors. My work is on developing fully automated systems to analyze group discussions and provide feedback to improve discussion experience and team cohesion. I explore the automatic capture and analysis of audio-video data from the discussions, and the effective design principles of machine mediation for effective behavior changes. In this proposal, I describe the developed system that senses group dynamics and provides automated feedback for videoconferencing based group discussions, and discuss the ways in which it can evolve to address different challenges regarding effective collaboration.
Interconnect joint failure is a common theme in wearable technology but has yet to be addressed or resolved. The purpose of this thesis is to begin isolating and identifying ways to relieve the interconnect joint stress that is common in e-textiles and wearable technology. An overview of related work in the area of delocalizing strain in an interconnect joint is presented, followed by a proposed experimental design to determine the cause-effect relationships between the variables that affect interconnect joint failure.
Ubicomp/HCI researchers are increasingly using smartphones to collect human-labelled data 'in the wild'. While this allows for the collection of a wide range of interesting data in authentic settings and surroundings, humans are notoriously inconsistent in the quality of their contributions. Improving the quality of data collected with mobile devices is a largely unexplored, but highly relevant field. The primary objective of this workshop is to share insights, ideas, and discoveries on the quality of mobile human contributions. The work presented in the International Workshop on Mobile Human Contributions (MHC '18) explores methods, tools, and novel approaches towards increasing the reliability of human data submissions with mobile devices.
The collection of human contributions through mobile devices is increasingly common across a range of methodologies. However, possible quality issues of these contributions are often overlooked. As the quality of human data has a direct impact on study reliability, more should be done to improve the accuracy of these contributions. We identify and categorise solutions aimed at increasing the accuracy of contributions prior to, during, and following data collection. Our categorisation assists in positioning future work in this area and fosters the use of cross-methodological practices.
We discuss two methods designed to increase the accuracy of human-labeled data. First, Peer-ceived Momentary Assessment (Peer-MA), a novel data collection method inspired by the concept of Observer Reported Outcomes in clinical care. Second, mQoL-Peer, a platform aiming to equip researchers with tools to assess and maintain the accuracy of the data collected by participants and peers during mobile human studies. We describe the state of the research and specific contributions.
We propose Information Transmission as a novel perspective on the mobile Experience Sampling Method (ESM) to frame a research agenda with a sharpened focus on increasing data quality in ESM studies. In this view, good experience sampling transmits valid, relevant, and "noise-free" information from users' in-situ experiences to remote researchers. We identify key transmission channels, which motivate combinations of objective and subjective data (i.e. device sensors and machine learning, plus asking users). We discuss opportunities and challenges, and give examples from our previous and ongoing work on ESM tools.
Smartphones are personal ubiquitous devices that provide an immense source of information via diverse applications (apps) that contribute to our decision-making throughout the day and improve our quality of life in the long term. In the past, an app had only one or a few specific functions, while nowadays, within the same interface, an app provides multiple interactive services to its users. However, we still have a weak understanding of user expectations and experiences with these apps. Toward this end, we extended our previous smartphone logging app into the new 'mQoL Lab' for mobile Quality of Life, to strategically trigger user surveys and achieve a better understanding of users' actions in popular Android apps such as Spotify, WhatsApp, Instagram, Maps, Chrome, Facebook, and Facebook Messenger. We present and discuss the results, and their implications, acquired during our first pilot study conducted with five users over four weeks in our Living Lab settings.
The rise of the smartphone opens up new possibilities for researchers to observe users in everyday life situations. Researchers from diverse disciplines use in-field studies to gain new insights into user behavior and experiences. However, the collected datasets are mostly not available to the public, and thus results are neither falsifiable nor reproducible. Community datasets attempt to counter this problem, among other things by sharing the cost of the data collection. One example is the crowdfunding campaign CrowdSignals. In this paper, we report on our experiences in doing research with crowdfunded data, drawing on the example of this dataset. By "zooming into" specific aspects of the data, we juxtapose the expectations we had when co-funding the data collection with our findings when analyzing the dataset. We highlight shortcomings and benefits of crowdfunded datasets, draw lessons and discuss how future crowdsourced data collection campaigns might be improved.
Thanks to the emergence of mobile computing technologies, location-based services (LBS) have been widely used. Massive data on LBS user activities are useful for studying human mobility and urban computing. In this paper, we design and implement LBSLab, a system which facilitates large-scale data collection of mobile user activities and provides data visualization in an informative way. LBSLab interacts with users via a mini-program built in the WeChat app, offering several representative location-related functions, such as check-ins. A mobile user can join a data collection experiment by simply scanning a QR code. LBSLab can serve as an efficient system to study the user behavior of LBS-related applications.
In most crowdsensing systems, the quality of the collected data varies and is difficult to evaluate, while existing crowdsensing quality control methods are mostly based on a central platform, which cannot be fully trusted in practice and thus invites fraud and other problems. To address these problems, a novel crowdsensing quality control model is proposed in this paper. First, the idea of blockchain is introduced into the model: a credit-based verifier selection mechanism and a twice-consensus mechanism are proposed to realize the non-repudiation and non-tampering of information in crowdsensing. Then, quality grading evaluation (QGE) is put forward, in which the method of truth discovery and ideas from fuzzy theory are combined to evaluate the quality of sensing data, and a garbled circuit is used to ensure that the evaluation criteria cannot be leaked. Finally, experiments show that our model is feasible in running time and effective in quality evaluation.
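The truth-discovery step mentioned above can be sketched as an iterative reweighting loop. The function below is a minimal, illustrative CRH-style variant (names and weighting function are ours, not the paper's, and the paper additionally protects the criteria with garbled circuits):

```python
import numpy as np

def truth_discovery(reports, iters=10):
    """Minimal truth-discovery sketch: alternately estimate the truth as a
    weighted average of workers' reports, then re-weight each worker by how
    far their reports deviate from that estimate."""
    reports = np.asarray(reports, dtype=float)   # shape (n_workers, n_items)
    weights = np.ones(len(reports))
    for _ in range(iters):
        truth = (weights[:, None] * reports).sum(axis=0) / weights.sum()
        err = ((reports - truth) ** 2).sum(axis=1) + 1e-12
        weights = -np.log(err / err.sum())       # accurate workers weigh more
    return truth
```

With two honest workers and one adversarial worker, the estimate converges towards the honest reports rather than the plain mean of all reports.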
This paper presents our preliminary design of the Reaching the Frail Elderly Patient for Optimizing Diagnosis of Atrial Fibrillation (REAFEL) system that helps to improve accuracy in interpretation of Electrocardiography (ECG) recordings by leveraging multi-modal user-labeled data and other contextual information from mobile devices. We describe the methods to collect and visualize the data, discuss the challenges associated with the project and conclude the paper by outlining future work.
Nowadays, the app stores host a variety of mobile health solutions. Smartphone users can choose from tens of thousands of applications, designed to prevent or manage certain diseases, or induce behavior change to improve health and life quality in general. However, the value of most applications remains unclear, as they stop short of documenting adherence to medical evidence. We review the fundamental mobile health challenges and propose Mobile Quality of Life Lab (mQoL), a mobile health platform which addresses the identified challenges and leverages recent developments to facilitate the deployment of much-needed longitudinal, multidimensional, evidence-based studies that are minimally obtrusive for the participants, yet provide high value in terms of the collected datasets, as well as potential for behavior change towards improving Quality of Life.
In mobile crowdsourcing, labour can be opportunistically elicited by sending notifications to workers who complete tasks on-the-go. While much work has focused on optimizing the work quality and quantity in mobile crowdsourcing, surprisingly few studies have explored the type of tasks that might be suitable for different user contexts. This paper presents results from a proof-of-concept user study that aimed to uncover where, when and what type of tasks mobile workers are willing to complete. We find that different contexts do affect the type of work users are willing to complete. Finally, we lay out a complete design, key challenges and opportunities for a longer field trial that we hope to conduct in the near future.
In affective computing (AC) field studies it is impossible to obtain an objective ground truth. Hence, self-reports in the form of ecological momentary assessments (EMAs) are frequently used in lieu of ground truth. Based on four paradigms, we formulate practical guidelines to increase the accuracy of labels generated via EMAs. In addition, we detail how these guidelines were implemented in a recent AC field study of ours. During our field study, 1081 EMAs were collected from 10 subjects over a duration of 148 days. Based on these EMAs, we perform a qualitative analysis of the effectiveness of our proposed guidelines. Furthermore, we present insights and lessons learned from the field study.
Everyday mobility encompasses different forms of public and private transportation. However, everyday mobility often does not involve substantial levels of physical activity. The goal of this workshop is to investigate ways of integrating physical activity into everyday mobility in accordance with widely accepted health recommendations. We aim to explore wearable and ambient systems that sense and support active navigation as well as conceptual aspects from a variety of perspectives, such as persuasive technologies. Researchers from different disciplines are invited to contribute their point of view by means of position papers, posters, and demonstrations. One planned outcome of this workshop is a set of design guidelines for navigation systems that explicitly consider health aspects. The workshop explores requirements and design challenges in a creative setting.
With the significant rise in the number of senior citizens living independently, there is considerable research focus on the wellness and care of this geriatric population. While heavily instrumenting these citizens with technology could aid in better prediction and prevention of certain ailments, such a setup interferes with the need for independent living. We therefore explore the possibilities of a ubiquitous and pervasive monitoring system using smartwatches that allows useful inferences about the subject's activities of daily living (ADL).
We report results from a pilot study that focuses mainly on understanding the everyday life quality of patients suffering from multiple sclerosis through the lens of connected Nokia Health devices. Our dataset comprises 198 individuals (184 females and 14 males), and the study lasted over six months. By analyzing carefully crafted user studies and correlating them with personal sensor data collected with Nokia devices, we found that the level of fatigue is one of the main sources of discomfort across the majority of the patients. We further perform an exploratory analysis, which provides an early indication that by actively monitoring and perturbing users' daily activity levels, for example increasing daily step counts and sleep duration and decreasing body weight, patients can potentially reduce their daily fatigue level.
Sensors in our phones and wearables leave digital traces of our activities. With active user participation, these devices serve as personal sensing devices, giving insights into human behavior, thoughts, intents and personalities. We discuss how acoustical environment data from hearing aids, coupled with motion and location data from smartphones, may provide new insights into physical and mental health. We outline an approach to model soundscape and context data to learn preferences for personalized hearing healthcare. Using Bayesian statistical inference, we investigate how physical motion and acoustical features may interact to capture behavioral patterns. Finally, we discuss how such insights may offer a foundation for designing new types of participatory healthcare solutions, serving as preventive measures for cognitive and physical health.
Advances in user interfaces, pattern recognition, and ubiquitous computing continue to pave the way for better navigation towards our health goals. Quantitative methods which can guide us towards our personal health goals will help us optimize our daily life actions and environmental exposures. Ubiquitous computing is essential for monitoring these factors and actuating timely interventions in all relevant circumstances. We need to combine the events recognized by different ubiquitous systems and derive actionable causal relationships from an event ledger. Understanding of user habits and health should be teleported between applications rather than these systems working in silos, allowing systems to find the optimal guidance medium for required interventions. We propose a method through which applications and devices can enhance the user experience by leveraging event relationships, leading the way to a more relevant, useful, and, most importantly, pleasurable health guidance experience.
Tactile patterns are a means to convey general direction information to pedestrians (for example when turning right) and specific navigation instructions (for example when approaching the stairs). Tactile patterns are especially helpful for people with visual impairments in navigation scenarios and can also be used to deliver general notifications. This workshop paper is intended to spark a discussion about correctly identifying the requirements and other needs of the visually impaired population in order to create a guidance tool that could eventually replace the white cane as their primary navigation tool.
With the advancements in ubiquitous computing, ubicomp technology has spread deeply into our daily lives, including office work, housekeeping, health management, transportation, and even urban living environments. Furthermore, beyond the initial metrics of computing, such as "efficiency" and "productivity", the benefits that such ubiquitous technology brings to users' well-being have attracted great attention in recent years. In our first "WellComp" (Computing for Well-being) workshop, we will intensively discuss the contribution of ubiquitous computing to users' well-being, covering physical, mental, and social wellness (and their combinations), from the viewpoints of different layers of computing. With a strong international team of organizers active in various ubicomp research domains, WellComp 2018 will bring together researchers and practitioners from academia and industry to explore versatile topics related to well-being and ubiquitous computing.
Recent technological development offers new possibilities for taking people's personal wellness data into account when adjusting environmental conditions. For example, users' heart rate and facial expression, together with room temperature and CO2 data, could be used to adjust lighting, temperature, and air conditioning to support people's well-being in a smart environment. Using personal wellness data for adaptive control of smart environment conditions seems inviting; however, before expensive implementations are started, it is essential to know people's perceptions of such systems in order to create environments that people want to use. We conducted an anticipated user experience study of an adaptive, well-being-supporting meeting room concept video with 48 participants. Based on our results, we present four design challenges for future research.
What does well-being mean in the context of smart environments? What restrictions are set, how can well-being be measured and predicted? Can smart environments control or influence individual well-being? We seek to answer these questions by aggregating existing research on well-being and identifying the concepts relevant for smart environments. As a result, we provide a falsifiable definition candidate for well-being in smart environments and outline the experiments necessary for verifying the validity of the definition.
Promoting students' well-being is instrumental to fostering their academic performance and preventing health issues or dropouts. Since an average student spends 4 hours a day in the classroom, the emotional climate during lectures is likely to affect a student's overall well-being. We propose using the physiological synchrony among students as a measure of the classroom emotional climate. We analyze an existing data set containing electrodermal activity signals and affective state self-reports of 24 different students during 42 lectures. We show that an increment in group physiological synchrony is related to an improvement of the classroom emotional climate. The possibility to measure the classroom emotional climate unobtrusively has important implications for classroom experience sensing systems: it allows teachers to monitor and potentially control the classroom emotional climate, thereby improving students' teaching-learning performance and overall well-being.
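One simple way to quantify group physiological synchrony, which may differ from the measure used in the paper, is the mean pairwise Pearson correlation of the students' electrodermal activity signals over a lecture window:

```python
import numpy as np

def group_synchrony(eda):
    """Mean pairwise Pearson correlation across students' signals.
    eda: array-like of shape (n_students, n_samples)."""
    r = np.corrcoef(eda)                 # n_students x n_students matrix
    iu = np.triu_indices_from(r, k=1)    # each unique student pair once
    return float(r[iu].mean())

# Two students moving in lockstep, one moving oppositely:
signals = [[0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 2.0, 3.0], [3.0, 2.0, 1.0, 0.0]]
```

Identical signals give a synchrony of 1; adding an anti-correlated student pulls the group value down, so the metric rises as the class moves together emotionally.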
Subjective well-being (SWB) refers to people's subjective evaluation of their own quality of life. Previous studies show that environmental pollution, such as air pollution, has generated significant negative impacts on one's SWB. However, such works are often constrained by the lack of appropriate representation of SWB specifically related to air quality. In this study, we develop UMeAir, which collects one's real-time SWB, specifically, one's momentary happiness at a given air quality, pre-processes input data and detects outliers via Isolation Forests, trains and selects the best model via Support Vector Machines and Random Forests, and predicts the momentary happiness towards any air quality one experienced. Unlike traditional representation of air quality by pollution concentration/Air Pollution Index, UMeAir intends to represent air quality in a more user-comprehensible way, by connecting the air quality experienced at a particular time and location with the corresponding momentary happiness perceived towards the air. The higher the momentary happiness, the better the air quality one experienced. Our work is the first attempt to predict momentary happiness towards air quality in real-time, with the development of the first-of-its-kind UMeAir Happiness Index (HAPI) towards air quality via machine learning.
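The outlier-detection step of such a pipeline can be illustrated with a simple robust filter. The paper itself uses Isolation Forests; the median-absolute-deviation rule below is only a stand-in, and the example ratings are invented:

```python
import numpy as np

def mad_filter(x, threshold=3.5):
    """Keep values whose deviation from the median is within `threshold`
    median-absolute-deviations; a crude stand-in for Isolation Forests."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))     # assumes mad > 0 for this sketch
    return x[np.abs(x - med) / mad < threshold]

# A cluster of plausible momentary-happiness ratings plus one entry error:
ratings = [0.9, 0.95, 1.0, 1.05, 1.1, 50.0]
```

A median-based rule is used here because a single extreme value inflates the mean and standard deviation enough to mask itself from a plain z-score filter.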
We propose a cross-modal approach for conversational well-being monitoring with a multi-sensory earable. It consists of motion, audio, and BLE models on earables. Using the IMU sensor, the microphone, and BLE scanning, the models detect speaking activities, stress and emotion, and participants in the conversation, respectively. We discuss the feasibility of qualifying conversations with our purpose-built cross-modal model in an energy-efficient and privacy-preserving way. With the cross-modal model, we develop a mobile application that qualifies ongoing conversations and provides personalised feedback on social well-being.
Iris is an interactive reward and information system that aims to increase wellbeing amongst parents and children, by supporting treatment compliance in clubfoot patients. This paper discusses the design and a pilot test of Iris, exploring the role of interactive solutions in increasing compliance amongst clubfoot patients.
We spend almost one third of our lives asleep. Sleep is one of the main contributors to our health and wellbeing. Sleep disorders are known to have adverse health effects, but studies have also shown that too little or too much sleep is correlated with a greater risk of death. Sleep trackers have introduced new tools for sleep-related studies by providing detailed, long-term sleep data. In this study, data collected with a wearable wellness device, the Oura ring, is used to reveal how people sleep in terms of duration, consistency and timing. It is shown that, on average, Oura users sleep approximately 7 hours per night and that following a consistent sleep schedule is associated with more efficient sleep.
Hundreds of well-being apps aim to manage stress. Despite some successes in developing personalized regimes for stress coaching, current apps are still far from offering a compelling user experience. We discuss the requirements and technical challenges underlying the design of a virtual coach, including the critical need to support both personalization and conversation.
Towards well-being awareness in computing, research on estimating users' emotions from smartphone sensor data has been actively conducted as smartphones become more and more ubiquitous. Most studies construct emotion estimation models via machine learning, using contextual data from the smartphone and ground-truth labels self-reported by users, often collected via the Experience Sampling Method (ESM). However, since our emotions change frequently in daily life, collecting the ground truth of such volatile emotions leads to a flood of ESM prompts, which can burden users. In order to find better ESM methods, we propose and compare three ESM designs: Randomized ESM, which triggers at random times; Trigger ESM, which triggers when the user's behavior changes; and Unlocking ESM, which presents the questionnaire on the unlock screen. We constructed emotion estimation models with four levels of time granularity (1 day, 1/3 day, 3 hours, 1 hour) in a four-week experiment with eight participants. Unlocking ESM achieved the highest response rate and, in most cases, the highest estimation accuracy.
Overuse of smartphone applications can lead to smartphone addiction, which affects the user's physical and mental health. Interventions such as providing persuasive messages and controlling the use of applications need to assess the level of smartphone addiction. To measure this addiction level, the Smartphone Addiction Proneness Scale (SAPS) has been proposed. However, it requires the user to answer 15 questions, which makes it burdensome, unreliable (because it is based on user responses rather than actual behavior), and slow (the estimation takes time). To overcome these limitations, we propose a technique for automatic estimation of the SAPS score based on actual daily use of the smartphone. Our technique estimates the SAPS score using a regression model that takes the smartphone's states of use as explanatory variables (features). We describe the effective features and the regression model.
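A regression model of this kind can be sketched with ordinary least squares. The features and numbers below are invented for illustration and are not taken from the paper:

```python
import numpy as np

# Hypothetical per-user features: [daily use hours, unlock count,
# night-time sessions]; targets are self-reported SAPS scores.
X = np.array([[2.0,  40.0,  1.0],
              [5.5,  90.0,  6.0],
              [8.0, 150.0, 12.0],
              [1.0,  25.0,  0.0]])
y = np.array([20.0, 35.0, 50.0, 15.0])

# Fit a linear model with an intercept via least squares.
Xb = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

def predict_saps(features):
    """Estimate a SAPS score from a new user's usage features."""
    return float(np.append(features, 1.0) @ w)
```

The point of such a model is that, once fitted, the score comes directly from logged device behavior, removing the questionnaire burden the abstract describes.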
Biofeedback is commonly used to regulate one's state, for example to manage stress. The underlying idea is that by perceiving feedback about their physiological activity, a user can act upon it. In this paper we describe, through two recent projects, how biofeedback could be leveraged to share one's state at a distance. Such an extension of biofeedback could address the need for belonging, further widening the technology's applications in terms of well-being.
Although designing interactive technologies for motivating children to be physically more active has gained much attention, less focus has been directed towards young athletes. Interactive sports garments are an interesting area with much potential from both utilitarian and hedonic design drivers. In this paper, we describe the design process for an interactive figure-skating dress. We designed and prototyped a dress that detects the spinning movement of the skater and displays it with changing colors.
Past studies have shown that commercial sports tracking technologies often do not provide the desired level of awareness of one's own body, and they are often abandoned after intermittent usage. In this position paper, we explore design possibilities for wearable displays for increasing bodily awareness while users are engaged in sports. This work builds our vision of future sports displays, and provides a framework and inspiration for developing interactive sports technologies utilizing wearable displays. Our work contributes new directions in developing wearable devices for enhancing the experience of physical activity.
Telemedicine is an emerging answer to the shortage of qualified professionals, particularly in under-resourced regions. In telemedicine, physical assessment by a non-physician is a practice that uncovers the essential symptoms of a patient who needs to consult a doctor. We aim to facilitate this stage with a conversational chatbot that assesses the patient through conversation. Adopting the procedures of physical assessment, one critical type of conversation involved is self-diagnosis, and it turns out that the most useful questions for a chatbot at this stage are yes/no questions. We discovered that particular difficulties lie in patients' ambiguous replies: a patient may reformulate the question before answering yes or no, give a response that does not correspond to the question, give a reply that is partly yes and partly no, and so on. Focusing on this particular type of question, we introduce a text classifier using Long Short-Term Memory (LSTM) networks and build a training corpus from Twitter.
An interactive tangible interface has been developed to capture and communicate emotions between people who are missing and longing for loved ones. EmoEcho measures the wearer's pulse, touch and movement to provide varying vibration patterns on the partner device. During an informal evaluation of two prototype devices, users acknowledged how EmoEcho could help counter the negative feeling of missing someone through the range of haptic feedback offered. In general, we believe tangible interfaces offer a non-obtrusive means of interpreting and communicating emotions to others.
An increasing number of devices are getting enough computing and storage capacity to adapt their behaviour to the needs and preferences of their users. However, in multi-device systems, this will require new techniques that allow several devices to take the contextual information of their users into account and to adapt or coordinate themselves so they can improve their users' well-being. This paper presents the Liquid Context and the Situational Context concepts. Both concepts are focused on facilitating the migration of virtual profiles among different devices and on the computation of the contextual information of several entities in order to coordinate the surrounding devices. The main goal of this coordination is to improve elders' well-being through better adaptation of the surrounding technology to their needs and preferences.
In recent years, the number of people suffering from stress and anxiety disorders has grown dramatically, reaching over 40 million adults in the United States alone. Meditation, amongst many other relaxation techniques, is emerging as a popular practice for dealing with stress and anxiety. However, so far meditation and its effect on well-being have mostly been measured based on subjective feedback or, in some cases, in laboratory settings where bulky and expensive devices are used to capture physiological data, making them unfit for daily life use. In this paper, we present the early design of Calmify, a mobile application that aims to measure the effectiveness of personalized meditation techniques using a smartphone and off-the-shelf wearable devices.
Blood pressure (BP) measurement is the most commonly performed medical office test. We developed a system that uses exclusively wristband-collected photoplethysmogram (PPG) signals to estimate BP. A dataset was collected and annotated during the daily activities of 22 subjects. Preprocessing was applied to remove signal noise and artefacts, the signal was segmented into cycles, and features were computed. The RReliefF algorithm was used to select a subset of relevant features. The approach was validated in a person-independent leave-one-subject-out (LOSO) experiment, which was then extended with personalization to improve the results. The lowest mean absolute error (MAE) was 6.70 mmHg for systolic and 4.42 mmHg for diastolic BP. An ensemble of regression trees achieved the best results, which borderline meet the requirements set by two standards for BP estimation devices.
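The person-independent protocol can be sketched as a leave-one-subject-out loop; `fit` and `predict` below stand in for any regressor (the paper's best results came from an ensemble of regression trees), and a trivial mean-predictor is included only as a runnable placeholder:

```python
import numpy as np

def loso_mae(X, y, groups, fit, predict):
    """Leave-one-subject-out evaluation: for each subject, train on all
    other subjects, test on the held-out one, and average the MAE."""
    errors = []
    for subject in np.unique(groups):
        test = groups == subject                      # held-out subject's rows
        model = fit(X[~test], y[~test])
        errors.append(np.mean(np.abs(predict(model, X[test]) - y[test])))
    return float(np.mean(errors))

# Trivial baseline regressor: always predict the training mean.
fit_mean = lambda X, y: y.mean()
predict_mean = lambda m, X: np.full(len(X), m)
```

Because every fold excludes the test subject entirely, the reported MAE reflects how the model generalizes to a previously unseen person, which is the stricter claim the abstract makes.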
Improving people's well-being through relevant products and services is a designer's task. Designers achieve this by combining innovative ideas with appropriate technologies. While the Internet of Things (IoT) brings vast opportunities in this regard, it significantly raises rapid-prototyping complexity. In this paper, we look at the challenge of designing for wheelchair users' well-being: how do we empower designers to effectively envision relevant products and services that improve wheelchair users' well-being? We propose the concept of Domain-Specific Design Platforms (DSDP) to help designers inform, rapid-prototype and evaluate their design concepts.
Advancements in ubiquitous technologies and artificial intelligence have paved the way for the recent rise of digital personal assistants in everyday life. The Third International Workshop on Ubiquitous Personal Assistance (UPA'18) aims to build on the success of both of our previous workshops (named SmartGuidance), organized in conjunction with UbiComp'16/17, and to continue discussing the latest research outcomes of digital personal assistants. We invite the submission of papers within this emerging, interdisciplinary research field of ubiquitous personal assistance that focuses on understanding, design, and development of such digital helpers. We also welcome contributions that investigate human behaviors, underlying recognition, and prediction models; conduct field studies; as well as propose novel HCI techniques to provide personal assistance. All workshop contributions will be published in the supplemental proceedings of the UbiComp'18 conference and included in the ACM Digital Library.
Falling is a serious threat to elderly people. One severe fall can cause hazardous problems such as bone fractures and may lead to permanent disability or even death. It has therefore become essential to continuously monitor the activities of elderly people so that, in case of a fall, they can be rescued in time. For this purpose, many fall monitoring systems have been proposed for the ubiquitous personal assistance of elderly people, but most of them focus on detecting the fall incident only. However, if a fall monitoring system can also recognize the way in which the fall occurs, it can better assist people in preventing or reducing future falls. Therefore, in this study, we propose a fall monitoring system that not only detects a fall but also recognizes the pattern of the fall using supervised machine learning. The proposed system effectively distinguishes between falling and non-falling activities and recognizes the fall pattern with high accuracy.
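The fall-versus-non-fall distinction can be illustrated with a toy supervised classifier over simple acceleration features. The features, data, and nearest-centroid learner below are ours for illustration; the paper does not specify its exact features or model here:

```python
import numpy as np

def extract_features(window):
    """Features from a 1-D acceleration-magnitude window: the peak value
    (impact spike) and the variance of the second half (post-impact stillness)."""
    window = np.asarray(window, dtype=float)
    return np.array([window.max(), window[len(window) // 2:].var()])

def nearest_centroid(train_X, train_y, x):
    """Toy supervised classifier: assign the label of the closest class
    centroid (a stand-in for the paper's trained ML model)."""
    labels = sorted(set(train_y))
    centroids = {
        label: np.mean([f for f, t in zip(train_X, train_y) if t == label], axis=0)
        for label in labels
    }
    return min(labels, key=lambda label: np.linalg.norm(x - centroids[label]))

# Synthetic training windows: a fall shows an impact spike then stillness,
# walking shows a steady oscillation around gravity (~9.8 m/s^2).
fall = [9.8, 9.8, 9.8, 9.8, 30.0, 0.5, 0.5, 0.5, 0.5, 0.5]
walk = [9.8, 11.0, 9.0, 11.0, 9.0, 11.0, 9.0, 11.0, 9.0, 11.0]
train_X = [extract_features(fall), extract_features(walk)]
train_y = ["fall", "walk"]
```

Extending the label set beyond "fall"/"walk" (e.g., forward fall, backward fall) is what turns this from fall detection into the fall-pattern recognition the abstract argues for.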
Despite the growth of personal digital information use, both in scale and application diversity, conventional user models are still reliant on limited user input data to improve a variety of services for specific applications and tasks. This trend toward increased application diversity renders it difficult for a system to generate inferences about a user's evolving interests and naturalistic tasks in real-life settings. This workshop paper introduces a novel approach, aimed at training a user model to recognize real-life tasks on the basis of naturalistic user behavioral data via continuous screen monitoring. The resulting task model could be used in real-life settings for personal information assistance, which proactively retrieves useful documents and resources for the user, on a personal computer, with respect to the task context and information demand.
Passwords and PINs are used to protect all kinds of services against adversaries in our everyday lives. To serve their purpose, passwords require a certain degree of complexity, which is often enforced through password policies. This results in complicated passwords, which might not only be hard for users to create, but also hard to remember. Furthermore, users might reuse passwords which they feel are secure. We present a scheme for deterministic password generation that solves these problems by assisting the user in generating and remembering passwords. The passwords are generated based on previously stored metadata (e.g., policies) and a master password. Since the password generation is deterministic, only the master password is required to recreate the passwords. As proof of concept we implemented a mobile app and pre-evaluated it. The pre-evaluation indicates that our scheme offers good usability.
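The core of such a scheme can be sketched with a standard key-derivation function. The metadata handling below is simplified (real policies also constrain character classes), and the function name and parameters are ours, not the paper's:

```python
import base64
import hashlib

def derive_password(master: str, service: str, length: int = 16) -> str:
    """Deterministically derive a per-service password from the master
    password and stored metadata (here: service name and a length policy).
    A slow KDF makes brute-forcing the master password expensive."""
    key = hashlib.pbkdf2_hmac("sha256", master.encode(), service.encode(), 100_000)
    # Map the derived key to printable characters and apply the length policy.
    return base64.b85encode(key).decode()[:length]
```

Because the derivation is deterministic, the same master password regenerates every service password on demand; only non-secret metadata needs to be stored, and different services yield unrelated passwords.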
Important goals of the process industry are efficient, safe and resource-saving production. High expectations have been formulated concerning the linkage of automation technologies, digitization and data-driven analytics methods like machine learning. This paper investigates the adaptation challenges of the process industry when it comes to the introduction of digital services in the plant. Specifically, we discuss the potential of virtual assistant systems to address these challenges. We discuss virtual assistant systems in their role as 1) support directly embedded into the work process and 2) integration point for new services. Based on this, we outline a research agenda for virtual assistant systems for the process industry.
Research has suggested that asking participants to generate their own arguments or ideas for the purpose of behavioral improvement (e.g., quitting smoking) can lead to better intervention outcomes. Based on this concept, the opportunity for participants to personalize or generate their own behavioral prompts is intended to engage them in deeper self-reflection, with the ultimate goal of making the intervention more effective. However, this theory of active involvement has not yet been tested in longer-term behavior change interventions. This paper seeks to bridge this gap by examining participant reactions toward personalizing and inventing new behavioral prompts in a mobile intervention for sustainable living. The data highlight both the advantages and disadvantages of this intervention approach.
Diabetes mellitus, a failure of the body's insulin control system, is a common disease in today's population. It can harm the patient when not treated appropriately with insulin injections. The complex functioning of the human body, paired with very individual circumstances, makes appropriate treatment a hard task, even for committed patients. Modern sensor technology and personal monitoring equipment such as smartphones can help us retrieve more information about a patient's blood glucose level and their habits regarding food intake and physical activity. Based on this information, we propose an assisting system that gives the patient helpful advice by predicting the blood glucose level for the near future, incorporating their activities and heart rate. Deep convolutional neural networks are used to learn the glucose-level predictor. In our evaluation we see moderate improvements over existing systems, which we expect can be further improved with more data.
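The paper learns its predictor with deep convolutional networks over glucose, activity, and heart-rate inputs. As a much simpler illustration of the same short-horizon forecasting task, a linear autoregressive baseline on glucose alone (our simplification, not the paper's model) looks like this:

```python
import numpy as np

def fit_ar(series, lags=3):
    """Fit a linear autoregressive model that predicts the next glucose
    reading from the previous `lags` readings (least squares)."""
    series = np.asarray(series, dtype=float)
    X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
    Xb = np.hstack([X, np.ones((len(X), 1))])     # intercept column
    w, *_ = np.linalg.lstsq(Xb, series[lags:], rcond=None)
    return w

def predict_next(series, w, lags=3):
    """One-step-ahead prediction from the most recent readings."""
    recent = np.asarray(series[-lags:], dtype=float)
    return float(np.append(recent, 1.0) @ w)
```

A deep model replaces the fixed linear map with learned convolutional features and can additionally condition on activity and heart rate, but the input/output shape of the problem is the same.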
Smartphone applications (apps) are becoming ubiquitous in our everyday life. Apps on smartphones can sense users' behaviors and activities, providing a lens for understanding users, an important topic in the ubiquitous computing community. In UbiComp 2018, we will run a workshop on mining and learning from smartphone apps for users (AppLens). It seeks participants interested in understanding users from their use of smartphone apps, such as mining user attributes and discovering life patterns; discovering cultural and social phenomena by analyzing app usage, such as social event detection; recognizing app usage behaviors, such as app overuse detection; and studying smartphone apps themselves, such as app categorization and app popularity prediction. The workshop will include paper sessions, two invited talks, and a panel session to keep participants engaged and make the workshop more interactive. It provides a forum for participants to communicate and discuss issues to promote this emerging research field. Moreover, we will select a few accepted papers to be extended and published in a prestigious journal special issue.
With the rapid development of online social networks (OSNs), many people have linked their accounts across multiple OSN sites and share content across them. In this work, we conduct an empirical study of the usage of the Swarm app's cross-site sharing feature, i.e., the feature that enables Swarm users to share their check-ins to Twitter, and reveal factors that influence Swarm users' sharing behavior. We classify the factors into two groups, check-in-related factors and profile-related factors, and examine their individual and combined influence on sharing behavior. Our work provides a reference for researchers who collect Swarm check-ins from Twitter to study the characteristics of Swarm check-ins, helping them determine whether Twitter-collected check-ins are representative of randomly selected check-ins collected directly from Swarm. OSN sites can also use our findings to improve the design of their sharing features.
We describe the vision, implementation and initial deployment experience of an app-based crowdsourcing system for the analysis and prediction of the influence of aggregated individual behavior patterns on the spread of infectious diseases. The app builds on InfluenzaNet, a well-established European participatory network of self-reporting web platforms for monitoring Influenza-Like Illness (ILI), and moves from the current practice of explicit self-reporting towards the use of sensor-derived behavior information. The app has been cleared by the Regional Research Ethics Committee (CCER) in Geneva, Switzerland, and was test-deployed through the Google Play Store in Switzerland during last year's influenza season. It is currently being prepared for a broader European deployment.
Recent revelations about data breaches have heightened users' consciousness about the privacy of their online activity. An often overlooked avenue for the collection of users' personal information is registration processes and social logins, such as login with Facebook or Google, implemented by smartphone apps. Although the extent of social login implementations on websites has been widely studied, there is negligible research on the extent to which login features are implemented in smartphone apps. Hence, this work furthers the understanding of the smartphone app ecosystem by investigating whether smartphone apps use login features, and what relationships exist between login features and app popularity. To address this research gap, this work presents a systematic analysis of the publicly available Rico dataset, which contains 72,000 unique UI screen designs describing the design properties of 9,717 Android apps.
The proliferation of smart devices prompts the explosive usage of mobile applications, which increases network traffic load. Characterizing application-level traffic patterns from an individual perspective is valuable for operators and content providers when making technical and business strategies. In this paper, we identify several typical traffic patterns and predict per-user traffic demand using an application usage dataset from a cellular network. Our primary contributions are twofold. First, we design a novel three-stage model combining factor analysis and machine learning to extract the traffic patterns of individuals. By detecting the latent temporal structure of their application usage, users in the network are grouped into six typical patterns. Second, we implement a Wavelet-ARMA-based model to forecast per-user application-level traffic demand. Evaluation on a real-world dataset indicates that the model improves prediction accuracy by 7 to 8 times compared with benchmark solutions.
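The core idea behind wavelet-based forecasting is to decompose a traffic series into a smooth trend and high-frequency detail, then fit an autoregressive model on the trend. The sketch below is not the paper's Wavelet-ARMA model; it is a minimal illustration of the principle, using a one-level Haar transform and an AR(1) fit (both function names and the AR(1) simplification are our assumptions).

```python
import numpy as np

def haar_decompose(x):
    """One-level Haar transform: pairwise averages (trend) and
    halved pairwise differences (detail). Assumes even length."""
    x = np.asarray(x, dtype=float).reshape(-1, 2)
    approx = x.mean(axis=1)
    detail = (x[:, 0] - x[:, 1]) / 2.0
    return approx, detail

def ar1_forecast(series, steps):
    """Fit x[t] = a * x[t-1] + b by least squares, then roll forward."""
    prev, curr = series[:-1], series[1:]
    A = np.column_stack([prev, np.ones_like(prev)])
    (a, b), *_ = np.linalg.lstsq(A, curr, rcond=None)
    out, last = [], series[-1]
    for _ in range(steps):
        last = a * last + b
        out.append(last)
    return np.array(out)
```

In a fuller pipeline, each wavelet sub-band would get its own ARMA model and the forecasts would be recombined by the inverse transform; here only the trend band is forecast for brevity.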
In recent years, eye tracking techniques have advanced greatly. However, due to the hardware limitations of mobile devices, images captured by mobile device cameras have low resolution, so existing image-processing-based eye tracking techniques for mobile devices still have low accuracy. This paper proposes an approach, MobiET, to compute gaze fixation on a mobile device. Based on image processing of the eye image, the rectangular eye area is extracted; then the geometric center of this area (EC) and the iris center of gravity (IC) are detected. The EC-IC vector is then computed, and by means of calibration, the mapping between this vector and the gaze fixation coordinates can be established. The evaluation results show that the proposed approach achieves an accuracy of 2.34 to 4.69 degrees of visual angle when the distance between the eyes and the smartphone screen is 22 to 28 centimeters.
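The abstract describes calibrating a mapping from the EC-IC vector to screen coordinates but does not give its form. A common choice, shown here as an illustrative sketch (the affine form and function names are our assumptions, not MobiET's actual model), is a least-squares affine fit over the calibration targets:

```python
import numpy as np

def fit_gaze_mapping(ecic_vectors, screen_points):
    """Fit an affine map  screen = [v, 1] @ M  from EC-IC vectors
    observed while the user fixates known calibration targets."""
    V = np.column_stack([ecic_vectors, np.ones(len(ecic_vectors))])
    M, *_ = np.linalg.lstsq(V, screen_points, rcond=None)
    return M  # shape (3, 2): linear part plus bias row

def map_gaze(M, v):
    """Map one EC-IC vector to estimated screen coordinates."""
    return np.array([v[0], v[1], 1.0]) @ M
```

With a 9-point calibration grid, five or more fixation samples already over-determine the six affine parameters, and the least-squares fit averages out per-frame detection noise.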
Smartphone applications (apps) have become an indispensable part of our everyday lives. Users decide which apps to use depending on their personal needs and interests, and app usage behaviors therefore reveal rich clues about one's personal attributes. This makes it possible to predict smartphone users' demographic attributes from their app usage behaviors. In this paper, we predict users' gender and income level on a large-scale dataset of app usage records from 10,000 Android users. More specifically, we first extract features from app usage behaviors in terms of app, category, and app usage sequence. Then, we assess the predictive ability of individual features and combinations of features for gender and income level. With the best set of features, we achieve an accuracy of 82.49%, precision of 82.01%, recall of 81.38% and F1 score of 0.82 for gender. For income level (three classes), we achieve an accuracy of 69.71%, precision of 70.31%, recall of 70.38% and F1 score of 0.70.
As transborder data flows of personal data are increasing in volume and frequency, a jurisdiction's capacity to enforce personal data protection laws outside its territory is becoming more necessary and more difficult. As is shown in this paper, there have been three main approaches to dealing with this issue: the jurisdiction-to-jurisdiction, organization-to-organization, and data localization approaches. While the jurisdiction-to-jurisdiction approach makes transborder data flows contingent upon the existence of adequate/equivalent national data protection laws, the organization-to-organization approach makes it the responsibility of individual data controllers to meet basic standards of data protection when those data are processed offshore. The data localization approach, on the other hand, obliges third parties to store personal data within the boundaries of the country of operation. The fact that each of these models has its strengths and weaknesses, and that different jurisdictions have adopted different approaches based on different motivations and interests, makes the actual pursuit of international data protection increasingly complex.
Science fiction movies sometimes depict urban spaces with robots amidst humans as if this were normal. The Internet of Things continues to be explored in many domains, and as things get "smarter" and become embedded with autonomous decision-making and AI, we envision the rise of the Internet of Robotic Things. The humanoid robot is just one view of robots; the Internet of Robotic Things could give rise to robots in public spaces in forms that are more subtle and pervasive. This paper first highlights the issues and considerations arising from robotic things in public and suggests the need for regulations and cyber-institutions for such robotic things. The paper also outlines a notion of layers of behaviour and then introduces the notion of favour networks for robotic things.
In the domestic IoT domain, data is often collected by physical sensors and actuators embedded in the household and used to provide contextually relevant services to end users. Given that this data is often personal, the EU's General Data Protection Regulation can implicate IoT app developers, requiring them to adhere to "data protection by design and default" to ensure safeguards that protect a data subject's rights. Yet the simple-to-use, task-oriented development environments that are commonly used to build domestic IoT apps provide little support for developers to engage with data protection measures. In this paper we present an overview of an IoT development environment that has been designed to help developers engage with data protection at app design time. We describe a data tracking feature, which makes all personal data flows in an app explicit at development time and which provides the foundation for an additional set of data protection measures, including personal data disclosure risk assessments, transparency of processing, and runtime inspection.
The new European General Data Protection Regulation has introduced several new rights designed to empower users and regulate imbalances of power between those who collect and control data and those to whom the data refer. In this paper we focus on one particular right, the right to data portability, and examine how it is being implemented. We discuss the responses to 230 real-world data portability requests, and examine the file formats returned and difficulties in making and interpreting requests. We find variation in file formats, not all of which meet the GDPR requirements, and confusion amongst data controllers about the various GDPR rights.
Data protection regulations generally afford individuals certain rights over their personal data, including the rights to access, rectify, and delete the data held on them. Exercising such rights naturally requires those with data management obligations (service providers) to be able to match an individual with their data. However, many mobile apps collect personal data, without requiring user registration or collecting details of a user's identity (email address, names, phone number, and so forth). As a result, a user's ability to exercise their rights will be hindered without means for an individual to link themselves with this 'nameless' data. Current approaches often involve those seeking to exercise their legal rights having to give the app's provider more personal information, or even to register for a service; both of which seem contrary to the spirit of data protection law. This paper explores these concerns, and indicates simple means for facilitating data subject rights through both application and mobile platform (OS) design.
Human behavior is increasingly sensed and recorded and used to create models that accurately predict the behavior of consumers, employees, and citizens. While behavioral models are important in many domains, the ability to predict individuals' behavior is the focus of growing privacy concerns. Legal and technological measures for privacy do not adequately recognize and address the ability to infer behavior and traits. In this position paper, we first analyze the shortcomings of existing privacy theories in addressing AI's inferential abilities. We then point to legal and theoretical frameworks that can adequately describe the potential of AI to negatively affect people's privacy. Finally, we present a technical privacy measure that can help bridge the divide between legal and technical thinking with respect to AI and privacy.
In 1997, Rosalind Picard introduced fundamental concepts of affect recognition. Since then, multimodal interfaces such as brain-computer interfaces (BCIs), RGB and depth cameras, physiological wearables, and multimodal facial and physiological data have been used to study human emotion. Much of the work in this field focuses on a single modality to recognize emotion. However, there is a wealth of information available for recognizing emotions when incorporating multimodal data. Considering this, the aim of this workshop is to look at current and future research activities and trends for ubiquitous emotion recognition through the fusion of data from various multimodal, mobile devices.
Recent research on the enteric nervous system, sometimes called the second brain, has revealed the potential of the digestive system for predicting emotion. Even though people regularly experience changes in their gastrointestinal (GI) tract that influence their mood and behavior multiple times per day, robust measurements and wearable devices for such phenomena are not yet well developed. Meanwhile, other manifestations of the autonomic nervous system, such as electrodermal activity, heart rate, and facial muscle movement, have been extensively used as measures of emotion or in biofeedback applications, while the gut has been neglected. We present electrogastrography (EGG), i.e., recordings of the myoelectric activity of the GI tract, as a possible measure for inferring human emotions. In this paper, we also wish to bring to light some fundamental questions about emotions, which are often taken for granted in the field of Human-Computer Interaction but are still greatly debated in the fields of cognitive neuroscience and psychology.
To design more context-aware systems for smart environments, especially smart cities, psychological user states such as emotion should be considered in addition to environmental information. In this study, we focus on the tourism domain as a typical use case, and propose a multimodal method for recognizing tourists' emotions during sightseeing. We employ behavioural cues (eye and head/body movements) and audio-visual features to recognise emotion. In real-world experiments with tourists, we achieved an average recall of up to 0.71 in a 3-class emotion recognition task with feature-level fusion.
Neonates need intensive health care during their early life. The cost of this care is high, and it might not be accessible to neonates in rural areas. In this paper, we propose the development of a smart, accessible, and cost-effective system for monitoring neonates' health conditions using advanced technologies of artificial intelligence and ubiquitous computing. The paper also presents an approach for automatically measuring neonates' pain, which is a main indicator of an underlying health condition. While the preliminary results of our approach are encouraging, several extensions should be considered in future work.
In this paper we present a method for recognizing emotions using video and audio data captured from a mobile phone. We also present a mobile application that captures audio and video data, which are used to predict emotion with a convolutional neural network. We show results of our deep network on images taken from the BP4D+ database and audio signals taken from the RAVDESS dataset, which were also used to train the CNN used in the presented mobile application.
During college, students make important decisions that will affect the rest of their lives while also dealing with the burdens of social and academic expectations, often while living alone for the first time. For these reasons, stress can deteriorate their mental health and ability to relax. Meditation has been shown to reduce stress and mitigate its negative effects, and EEG activity in the alpha band has been shown to increase during meditative sessions and has been linked to feelings of relaxation. In this study, students who had not been given training in meditative practices were asked to try to clear their minds and meditate for 10 minutes. Participants were also given a questionnaire to help determine whether they were able to clear their thoughts. Results show that 77.14% of participants were not able to clear their minds sufficiently for proper meditation. This suggests that students do not usually have a natural ability to relax and meditate.
Virtual/augmented reality headsets, smart sensing glasses and similar "smart eyewear" have recently emerged as commercial products and can provide an interesting research platform for a range of research fields, including human-computer interaction, ubiquitous computing, pervasive sensing, and the social sciences. The proposed workshop will bring together researchers from a wide range of computing disciplines, such as mobile and ubiquitous computing, eye tracking, optics, computer vision, human vision and perception, usability, and systems research. The workshop continues the 2016 edition and will focus on discussing application scenarios and shared-interest use cases for eyewear computing between corporate and academic researchers.
Sensors can be worn at various positions on the body, for example in wristbands, clothes, armbands, or rings. The head, however, is a strategic position, since it is close to the brain, eyes, ears, nose and mouth. We present a short state of the art of existing technologies for sensing mental states and their possible applications on eyewear. Analyzing physical activity by sensing body movement has been broadly covered, but mental states require finer analysis of physiological signals such as blood volume pulse, skin conductance, and brain activity. These signals can give information about the emotions and knowledge of users.
The development of ubiquitous computing and wearable sensing devices has led to a situation where off-the-shelf hardware can be used for ubiquitous computing applications. We argue for using these available devices to design unobtrusive context-aware systems that are capable of detecting in-situ changes in complex cognitive states. We believe this also marks the time to take studies of context-awareness out of constrained settings and into the wild, in order to test these systems in the real-world situations in which they will later be deployed.
Humans sense most of their environment through their eyes. Detecting people's activities and context from their visual behavior using mobile eye trackers is well studied. Typically, these systems require some kind of calibration prior to usage, which is unsuitable for end users. Cameras that actively record the user's field of view have become a valid alternative for lifelogging applications, but such approaches do not include gaze information and may raise privacy challenges. In this work we present an in-depth analysis of the first long-term corneal imaging dataset (i.e., the reflection of the environment on the human eye). All data were manually labeled with information about the attended objects, and the labels were compared to an automatic approach using a state-of-the-art neural network.
The availability of artificial intelligence and smart glasses equipped with cameras and displays presents a strong foundation on which to build a wearable cognitive assistant, a device that provides the user with context-aware information. Although technical challenges such as power consumption and high latency have plagued early versions of such products, developments in fog computing present a potential solution. In this paper, we present an application that uses fog computing to provide real-time face recognition on smart glasses. To understand the impact of fog computing on latency and battery consumption, we compare it with an adapted version that does not rely on fog computing.
Looking is a two-way process: we use our eyes to perceive the world around us, but we also use our eyes to signal to others. Eye contact in particular reveals much about our social interactions, and as such can be a rich source of information for context-aware wearable applications. But when designing these applications, it is useful to understand the effects that the head-worn eye-trackers might have on our looking behavior. Previous studies have shown that we moderate our gaze when we know our eyes are being tracked, but what happens to our gaze when we see others wearing eye trackers? Using gaze recordings from 30 dyads, we investigate what happens to a person's looking behavior when the person with whom they are speaking is also wearing an eye-tracker. In the preliminary findings reported here, we show that people tend to look less to the eyes of people who are wearing a tracker, than they do to the eyes of those who are not. We discuss possible reasons for this and suggest future directions of study.
Our everyday work is becoming increasingly complex and cognitively demanding. What we pay attention to during our day influences how effectively our brain prepares itself for action, and how much effort we apply to a task. To address this issue we present AttentivU, a system that uses wearable electroencephalography (EEG) to measure a person's attention in real time. When the user's attention level is low, the system provides subtle real-time haptic or audio feedback to nudge the person to become attentive again. We tested a first version of the system, which uses an EEG headband, on 48 adults over several sessions in both lab and classroom settings. The results show that the biofeedback redirects the attention of the participants to the task at hand and improves their performance on comprehension tests. We next tested the same approach in the form of glasses on 6 adults in a lab setting, as the glasses form factor may be more acceptable in the long run. We conclude with a discussion of an improved third version of AttentivU, currently under development, which combines a custom-made glasses form factor with built-in electrooculography (EOG) and EEG electrodes as well as auditory feedback.
Head-mounted display (HMD) based augmented reality (AR) allows users to access digital information related to real-world objects without being distracted from their real-world task. With the introduction of the HoloLens (and, increasingly, other similar devices), a platform allowing widespread practical application of such solutions has finally become available. However, few such real-world applications exist today, and it is not yet fully understood which specific applications can benefit most from the technology, and in what form. As a promising novel application that leverages the advantages of HMD-based AR, we propose a HoloLens-based solution to support the debugging of electronic circuits. As a first step we have implemented a HoloLens interface to an oscilloscope. In this paper we describe the general idea and the first implementation of the oscilloscope interface, and summarize initial qualitative feedback that we have received from users.
The 7th Workshop on Pervasive Urban Applications (PURBA 2018) aims to build on the success of the previous workshops organized in conjunction with the Pervasive (2011-12) and UbiComp (2013, 2015--17) to continue to disseminate the results of the latest research outcomes and developments of ubiquitous computing technologies for urban applications. All workshop contributions are published in supplemental proceedings of the UbiComp 2018 conference and included in the ACM Digital Library.
We apply a novel clustering technique to London's bikesharing network, deriving distinctive behavioral patterns and assessing community interactions and spatio-temporal dynamics. The analyses reveal self-contained, interconnected and hybrid clusters that mimic London's physical structure. Exploring changes over time, we find that geographically isolated and specialized communities are relatively consistent, while the rest of the system exhibits volatility. Our findings improve understanding of the collective behavior of bikesharing users.
For efficient operation of taxis, it is important to provide taxi drivers with detailed information about passenger demand. In this paper, we propose an algorithm that predicts future taxi demand using real-time population data generated from cellular networks. We evaluated the effect of real-time population data on the accuracy of taxi demand prediction using stacked denoising autoencoders. The results of an offline experiment indicate that when real-time population data were used, the root mean squared prediction error of the proposed algorithm was 1.370, as opposed to 1.513 when population data were not used. In addition, we conducted a field test: we implemented a real-time prediction system based on real-time population data, the first such online real-world test conducted worldwide. In the trial, 26 participating drivers used our demand forecast system. The results showed that the sales of participating drivers improved by 1,409 JPY per person per day, a 3.9% increase in sales on average compared to drivers who did not use the system.
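A denoising autoencoder, the building block of the stacked model mentioned above, is trained to reconstruct clean inputs from corrupted ones. The sketch below is a minimal single-layer illustration in plain numpy, not the paper's implementation; the masking-noise corruption, layer sizes, and hyperparameters are our assumptions.

```python
import numpy as np

def train_denoising_autoencoder(X, hidden=3, noise=0.3, lr=0.1,
                                epochs=200, seed=0):
    """Train one denoising-autoencoder layer with plain gradient descent.

    Inputs are corrupted by masking noise; the network learns to
    reconstruct the clean input. In a stacked model, the learned hidden
    codes would become the input to the next layer.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, d)); b2 = np.zeros(d)
    losses = []
    for _ in range(epochs):
        Xn = X * (rng.random(X.shape) > noise)       # masking corruption
        H = 1.0 / (1.0 + np.exp(-(Xn @ W1 + b1)))    # sigmoid encoder
        Xhat = H @ W2 + b2                           # linear decoder
        losses.append(float(np.mean((Xhat - X) ** 2)))
        G = 2.0 * (Xhat - X) / X.size                # backprop through MSE
        dW2, db2 = H.T @ G, G.sum(0)
        dZ = (G @ W2.T) * H * (1.0 - H)
        dW1, db1 = Xn.T @ dZ, dZ.sum(0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return (W1, b1, W2, b2), losses
```

Stacking several such layers and adding a regression head on top would yield a demand predictor of the kind the paper evaluates.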
This paper presents an interactive visualization tool for resource usage monitoring, discussing the development process and focusing on implementation techniques for data preparation and visualization. A web application called Energis was developed as the first prototype. It features an interactive 3D map with numerical and animated infographics, and is designed to enable real-time user-to-user data sharing. A user experience study was conducted to evaluate the potential usage of the tool. Preliminary results from a standardized questionnaire show that the voluntary participants, based on their hands-on experience, rated the application at a consistently high level. With the availability of meter data providers, the tool can be utilized in location-based management to understand resource usage in both temporal and spatial domains.
Commuting is recurring travel between home and workplace, and accounts for most trips made daily. Understanding commuting patterns and flows is therefore essential for city and transport system design and planning. Traditionally, commuting flow information was collected using surveys and interviews, which are expensive and time-consuming. This paper introduces a way to extract commuting flow and route choice information from mobile phone communication logs. We present two new methods for inferring individual commuting route choices, which collectively constitute city-level commuting flows. Both morning and evening flows are inferred and visualized. We believe that our methods and results contribute to both theory and practice, especially in the interdisciplinary field of urban computing and city science.
This paper presents an online platform, called Eventity, for visualizing and analyzing city event and tourism information. The tool allows the user to interact with the information by viewing it within a selected period. It visualizes both contextual and geographical information; displays past, ongoing, and upcoming events; and assists the user in analyzing the information with heatmap and statistics modules. The tool has been evaluated with real users and was well received as a useful tool. Eventity can be useful for general users with its assistive viewing and planning functionalities, as well as for urban planners with its analysis feature.
This paper introduces Jerney, a public transit system providing peer-to-peer shared rides on fixed routes. It takes advantage of fixed-route shared public transit (such as bus and rail systems) to keep fares down compared to taxicabs, while providing the convenience of peer-to-peer rides (such as Uber and Lyft) over traditional public transit. The Jerney system consists of three main components: a passenger app, a driver app, and a dispatch system. A passenger books a shared ride via a mobile app by specifying origin and destination locations. The dispatch system then assigns a driver to the passenger and informs the passenger of the pick-up and drop-off stations. The driver's location can be monitored by the passenger once the ride has been booked. The Jerney system has been evaluated with real users in a user experience study, which yielded positive feedback: users felt that the system was useful and easy to start using.
The soundscape, or acoustic landscape, is an important spatial characteristic that can influence real estate development and urban economics. Understanding soundscapes is thus essential for urban design and planning. This paper presents the development and implementation of our Soundscape system, which senses and visualizes the soundscape of an area on campus as a case study. Unlike other soundscape sensing approaches that can only capture instantaneous sound levels in an area, our system can collect and visualize longitudinal soundscape information that better reflects aggregate spatial characteristics.
Urban planners and economists alike have a strong interest in understanding the interdependency of land use and people flow. This two-pronged problem entails systematic modeling and understanding of how land use impacts crowd flow to an area and, in turn, how the influx of people to an area (or lack thereof) can influence the viability of business entities in that area. With cities becoming increasingly sensor-rich, for example through digitized payments for public transportation and constant trajectory tracking of buses and taxis, understanding and modelling crowd flows at the city scale, as well as at finer granularity such as the neighborhood level, has now become possible. Integrating such understanding with heterogeneous data such as land use profiles, demographics, and social media enables richer studies of land use and its interdependence with mobility. In this work, we share findings from our preliminary efforts and identify key lines of research inquiry that can help urban planners make data-driven policy decisions.
This paper presents an approach to preserving privacy for content sharing in online social networks. The approach is based on the concept of friendship strength and social ties within a friendship circle. Friends can be categorized into different groups according to social strength, e.g., close friends, just friends, and distant friends, with corresponding levels of privacy concern. The paper describes a mechanism that categorizes a friend list based on "social context," which then allows the user to decide with which social ties the content ought to be shared.
Pedestrian-vehicle accidents cause many human injuries and deaths. To address this challenge, vision-based traffic systems have focused on detecting the behavior of traffic-related objects, such as vehicle position and velocity relative to pedestrians. In this paper, we propose a new and simple model for effectively recognizing the overhead front point of vehicles using only a single stationary camera capturing from an oblique angle. The proposed system uses a Faster R-CNN model to detect the object bounding box and mask, projects the mask's extreme points down to find the car's ground front point, and transforms these coordinates from the oblique to the overhead frame of reference. Our experimental results show that this method is effective for recognizing the overhead front point of a car (accuracy: 92.4%) within a certain tolerance.
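The final step, transforming a ground point from the oblique camera view to an overhead frame, is typically done with a planar homography estimated from known reference points on the road plane. The sketch below shows the standard direct linear transform (DLT) formulation; the paper does not specify its transform, so treat the function names and the 4-point setup as illustrative assumptions.

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (4+ point pairs)
    using the standard DLT formulation, solved via SVD."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)        # null-space vector of the constraints
    return H / H[2, 2]              # fix the projective scale

def warp_point(H, p):
    """Apply the homography to one 2D point (homogeneous coordinates)."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

In practice the four source points would be road markings with known overhead positions, and `warp_point` would map each detected ground front point into the overhead frame.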
In this study, we developed a method that applies machine learning in combination with a heuristic optimization algorithm, a genetic algorithm (GA), to solve the vehicle routing problem (VRP). Further, we developed a knowledge-based algorithm for a knowledge learning system that learns to classify (unlabeled) coordinates into regions. Dividing routing calculations into regions (clusters) provides many benefits over traditional methods: it improves routing cost over the traditional company method by up to 25.68% and over the classical GA by up to 8.10%. We also show that our proposed method can reduce traveling distance compared to previous methods. Finally, the prediction of future customer regions has an accuracy of up to 0.72 for predicted unlabeled customer coordinates. This study can contribute toward the creation of more efficient and environmentally friendly urban freight transportation systems.
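Once customers are clustered into regions, each region reduces to an ordering problem that a GA can tackle. The sketch below is a minimal illustrative GA for ordering the stops of one cluster, not the paper's algorithm; the operators (order crossover, swap mutation, best-half elitism) and all parameters are our assumptions.

```python
import numpy as np

def tour_length(order, pts):
    """Total Euclidean length of visiting pts in the given order."""
    route = pts[np.asarray(order)]
    return float(np.linalg.norm(np.diff(route, axis=0), axis=1).sum())

def ga_route(pts, pop_size=30, generations=100, seed=0):
    """Order the stops of one delivery cluster with a minimal GA:
    order crossover, swap mutation, and elitism (best half survives)."""
    rng = np.random.default_rng(seed)
    n = len(pts)
    pop = [rng.permutation(n) for _ in range(pop_size)]

    def crossover(a, b):
        i, j = sorted(rng.choice(n, 2, replace=False))
        child = -np.ones(n, dtype=int)
        child[i:j] = a[i:j]                     # keep a slice of parent a
        fill = [g for g in b if g not in child[i:j]]
        child[child < 0] = fill                 # fill the rest in b's order
        return child

    def mutate(order):
        i, j = rng.choice(n, 2, replace=False)  # swap two stops
        order[i], order[j] = order[j], order[i]
        return order

    for _ in range(generations):
        pop.sort(key=lambda o: tour_length(o, pts))
        elite = pop[:pop_size // 2]             # elitism: best half survives
        children = []
        while len(elite) + len(children) < pop_size:
            ia, ib = rng.choice(len(elite), 2, replace=False)
            child = crossover(elite[ia], elite[ib])
            if rng.random() < 0.2:
                child = mutate(child)
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda o: tour_length(o, pts))
```

Because the best individual always survives, the best tour length is non-increasing over generations; running one such GA per cluster is what makes the region-based decomposition cheaper than solving the whole VRP at once.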
Fixed sensor stations are the primary means of collecting environmental data in cities. Yet their high cost of deployment and maintenance often results in accurate but geographically sparse monitoring. We advocate a drive-by approach to urban sensing in which vehicles are used as sensors to scan the city with high spatiotemporal resolution. We present City Scanner, a highly customizable, self-sufficient platform that allows for cost-efficient drive-by sensing. City Scanner modules can be deployed on existing vehicles (e.g., buses and taxis) without interfering with their operations. We describe our first prototype, which includes sensing modules for air quality, temperature, humidity, and thermal imaging. We discuss the challenges we encountered during an eight-month deployment of the platform on trash trucks, and derive implications for the design of future drive-by sensing systems.
We present the activity of strolling as an artistic practice. With this we extend an invitation to be lost in the geography of an urban setting. The main concept is to provide the physical space with a communication layer that is pleasant to discover while wandering without a destination. We want to support flânerie as a way of recapturing the dimensions of both time and space against the division between work, conviviality, rest, and family life that is typical of modernism. The Post-Modern paradigm of valuing heterotopias therefore drives this contribution. Along with considerations about semiotics, sonification, ecological psychology, and digital jewellery, we would like to furnish a possible way to imbue sub-urban areas with genius loci, the spirit of a place.
This paper provides a fog-based approach to the traffic light optimization problem that utilizes the Adaptive Traffic Signal Control (ATSC) model. ATSC systems demand the ability to strictly reflect the real-time traffic state. The proposed fog computing framework, named FogFly, meets this requirement by its nature: location awareness, low latency, and responsiveness to changes in traffic conditions. As traffic data is updated in a timely manner and processed at fog nodes deployed close to the data sources (i.e., vehicles at intersections), traffic light cycles can be optimized efficiently while the virtualized resources available at network edges are efficiently utilized. Evaluation results show that services running in FogFly perform better than those running in cloud computing approaches.
Our world is increasingly interconnected via a wide variety of computers, IoT, wearable, and mobile devices. The information provided collectively through these devices offers insight into our everyday lives, daily patterns, mood, behaviour, and surrounding environment. Our workshop brings together researchers interested in collecting and augmenting context to understand device-specific behaviour and routines, human behaviour and mood, and changes in the environment. The outcomes of this workshop are new tools, methodologies, and potential collaborations for sensing the world around us as well as ourselves.
The mood of a community influences work productivity, socioeconomic outcomes, and the general quality of life of its members, so being able to measure it opens a wealth of opportunities, such as informing policies, scheduling events, and possibly discovering the contexts that bring about undesirable moods within a community. Though there is a plethora of methods for measuring the emotional states of individuals in lab settings (e.g., self-report, analysis of nonverbal behaviours, physiological sensors), they do not scale well to large numbers of people or to in-the-wild settings. This paper examines the feasibility of inferring the mood of a community by measuring the walking speed of pedestrians, a technique that is unobtrusive, scalable, and readily available. Preliminary results from our data collection reveal differences in walking speed at different times of the day, demonstrating the feasibility of our approach. We discuss the implications of our findings, followed by the future steps of this work.
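As a minimal sketch of the measurement underlying this approach, average walking speed can be computed from timestamped pedestrian positions. The sampling format and values below are assumptions for illustration, not the paper's pipeline:

```python
import math

def walking_speed(track):
    """Average speed (m/s) from a list of (t_seconds, x_m, y_m) samples."""
    dist = sum(math.hypot(x2 - x1, y2 - y1)
               for (_, x1, y1), (_, x2, y2) in zip(track, track[1:]))
    elapsed = track[-1][0] - track[0][0]
    return dist / elapsed

# A pedestrian covering 5.6 m in 4 s walks at 1.4 m/s; speeds aggregated
# over many pedestrians and times of day could then be compared.
track = [(0, 0.0, 0.0), (2, 2.8, 0.0), (4, 5.6, 0.0)]
print(round(walking_speed(track), 2))  # -> 1.4
```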
The use of mobile phones has transcended their initial role as communication devices; they have become a medium for fulfilling social needs. While this has been beneficial to some, studies have exposed adverse effects such as depression and social media addiction. Since it is not always clear which category a user belongs to, we propose a mobile application that records the social media interaction patterns of a user. The application also captures their mood before and after each social media use, until it can automatically infer the mood of the user from their social media interaction pattern. Consenting users can transmit the data collected on their device to a central location for further analysis by researchers of human behaviour or by mobile application developers, in order to provide interventions to users or design guidelines for mobile applications.
As new generations of sensors and modalities become available, the possible sensor modalities to apply for occupant sensing accumulate. The fusion of heterogeneous sensing technologies has superseded sensing scenarios where data is based on only one sensor modality. This paper presents a mapping and evaluation framework distilled from categorizing several heterogeneous sensing modalities and their limitation factors. The paper introduces four limitation factors (geometrical, tampering, reliability, and environmental) that have repercussions on performance. In this paper we mainly highlight two limitation factors, geometry and tampering, and apply them in a case scenario considering diverse sensing modalities.
Classic measurement grids, with their static and expensive infrastructure, are unfit to meet modern air quality monitoring needs, such as source apportionment, pollution tracking, or the assessment of personal exposure. Fine-grained air quality assessment (both in time and space) is the future. Different approaches have been proposed, ranging from measurement with low-cost sensors, through advanced modeling and remote sensing, to combinations of these. This position paper summarizes our previous contributions in this field and lists what we see as open challenges for future research.
Pollution exposure assessment at the population level is an established enterprise for environmental scientists and public health officials---but efforts to help individuals monitor and track their personal pollution exposures have just begun to garner research interest. Self-tracking pollution exposure is challenging for several reasons, including current limitations in sensor size, accuracy, and cost; frequent calibration requirements; and the fact that people's daily activities often interfere with data quality in wearable sensing. The goal of this research is to develop a human-centered computing framework for the emerging field of personal pollution exposure assessment. To that end, in this position paper, we propose a Bayesian approach to combining environmental sensing data at different spatiotemporal resolutions, such as from citywide national monitoring stations, neighborhood-wide lightweight sensing nodes, and personal wearables.
Our work investigates the use of a Near InfraRed Spectroscopy scanner for the identification of liquids. While previous work has shown promising results for the identification of solid objects, identifying liquids poses additional challenges. These challenges include light scattering and low reflectance caused by the transparency of liquids, which interfere with the infrared measurement. We develop a prototype solution consisting of a 3D printed clamp that attaches to a tube, such that it blocks ambient light from interfering. Our preliminary results indicate that our prototype works, and we demonstrate this by measuring sugar levels in a liquid solution.
Mobile-phone based activity recognition has been successfully applied to many useful scenarios, such as measuring the 'calories burnt' by a person. Unlike activities performed by a person alone, many activities are performed in a group setting, for example 'classroom teaching'. Because people often make friends with those they spend time with, it is natural to look for communities in which people are engaged in similar physical activities. Automated ways to learn such communities involve fusing physical sensor data from multiple users, and hence pose a challenging problem. In this research, we measured the physical activities of seventy-two students located on two different university campuses for ten days. Using this data, we propose a model to detect communities based on similar physical activities. Detecting such communities could be of great use: for example, it allows inviting new members who could be interested in similar activities, and finding those members who are in the community but not actively engaged.
Radio-Frequency based device-free localisation systems are able to pinpoint people's location in a given area without their cooperation. They work by analysing the perturbations that the presence of a person causes on the communications exchanged by a high number of radio devices installed around the area. Typical figures are some tens of devices for an area the size of a single-family house, with an accuracy of around one meter and a high sensitivity to even the slightest movement. The literature on device-free localisation systems typically concentrates on improving sensitivity, accuracy, and discriminating power, without worrying much about the number of radio devices involved. In this paper, for the first time, we demonstrate that device-free localisation can work, with reduced performance, with as few as four anchors in an environment composed of two large rooms. Being able to use so few anchors opens up several use cases where installing tens of devices is not desirable.
As ubiquitous computing advances, users are increasingly confronted with a tremendous amount of information proactively provided via notifications from versatile applications and services, through multiple devices and screens in their environment. Human attention is becoming a significant new bottleneck. Further, the latest computing trends, with emerging devices including versatile IoT devices and contexts such as smart cities and vehicles, are accelerating this situation. In such situations, "attention management", including attention representation, sensing, prediction, analysis, and adaptive behavior, is needed in our computing systems. Following the successful UbiTtention 2016 and UbiTtention 2017 workshops with up to 50 participants, the UbiTtention 2018 workshop brings together researchers and practitioners from academia and industry to explore the management of human attention and notifications across versatile devices and smart situations to overcome information overload and overchoice.
User interaction is an essential part of many mobile devices such as smartphones and wristbands. Only by interacting with the user can these devices deliver services, enable proper configuration, and learn user preferences. Push notifications are the primary method used to attract user attention on modern devices. However, these notifications can be ineffective and even irritating if they prompt the user at an inappropriate time. The discontent is exacerbated by the large number of applications that compete for limited user attention. We propose a reinforcement learning-based personalization technique, called Nurture, which automatically identifies the appropriate time to send notifications for a given user context. Through simulations with the crowd-sourcing platform Amazon Mechanical Turk, we show that our approach successfully learns user preferences and significantly improves the rate of notification responses.
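Nurture's exact reinforcement learning formulation is not given in the abstract; a bandit-style tabular sketch of the core idea (learning, per user context, whether sending a notification is rewarded by a response) could look like the following, with all context names and reward values invented:

```python
import random

# Tabular sketch: state = coarse user context, action = send (1) or defer (0).
# A response yields positive reward; an ignored or dismissed prompt, negative.
ALPHA, EPSILON = 0.5, 0.1
Q = {}  # (context, action) -> estimated value

def choose(context):
    if random.random() < EPSILON:
        return random.choice([0, 1])  # explore occasionally
    return max([0, 1], key=lambda a: Q.get((context, a), 0.0))  # exploit

def update(context, action, reward):
    old = Q.get((context, action), 0.0)
    Q[(context, action)] = old + ALPHA * (reward - old)  # incremental average

# A user who responds while "commuting" but not in a "meeting" pushes the
# learned values apart, so sending is preferred only in the first context.
for _ in range(20):
    update("commuting", 1, reward=1.0)
    update("meeting", 1, reward=-1.0)
print(Q[("commuting", 1)] > Q[("meeting", 1)])  # -> True
```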
College students are exposed to smartphone distraction during study-related contexts (e.g., classrooms, self and group studies). This constant distraction may lower their academic performance. In this work, we built a simple context-aware proactive blocking prototype to explore the patterns of focusing contexts, and to evaluate user experiences of proactive blocking for distraction management in study-related contexts for college students. Our preliminary user study shows the positive effects of proactive blocking. We discuss several design implications for context-aware proactive blocking and semi-automated logging for distraction management.
One of the primary ways that people interact with applications on their mobile phones is through notifications. However, Android and iOS treat notifications differently, one being opt-in and the other being opt-out. We explore, through two identical survey studies spaced 2.5 years apart, changing practices in choosing to enable or disable notifications for certain types of mobile applications and how the power of defaults has lessened over time. We conclude with implications for apps that wish to use notifications to increase engagement.
This paper outlines the roadmap and preliminary results of a data-driven study aimed at providing real-time notifications via ambient displays, tailored to people who have solar panels and home storage batteries installed in their households. The objective of the research presented in this paper is to manage householders' attention so that they can adapt to a lifestyle that helps them maximise the use of renewable energy.
A hallmark of ambient displays is their constant presence in the periphery of the user's attention, such as the Ambient Orb that changes color based on the outdoor weather. Users of such a device can explicitly turn their attention to the device if they are curious about the current weather conditions, and can also be notified of a change by noticing the light changing abruptly, e.g., as a thunderstorm suddenly begins. However, especially in a mobile context, it can be difficult to have truly continuous indicators that are not fatiguing or annoying and do not consume considerable power. We propose "pseudo-ambient" displays that are not continuous, yet are nearly always accessible since they are triggered at regular intervals. Our contention is that such displays can potentially provide most of the benefits of a fully continuous ambient display, with limited drawbacks. In this work we focus on haptic pseudo-ambient displays, yet we believe the same approach can apply to other modalities, such as visual and audio.
When it comes to attention and notification management, most previous attempts to visualise notifications and smartphone usage have focused on digital representations on screens that are not fully embedded in the users' environment. Today, the constant development of hardware and embedded systems, including mini displays, LEDs, and actuators, as well as digital fabrication, has begun to provide new opportunities for representing data physically in surrounding environments. In this paper, we introduce a new way of visualising notification data using physical representations that are deeply integrated within the physical space and everyday objects. Based on our preliminary design and prototypes, we identify a variety of design challenges for embedded data representation and suggest opportunities for future research.
To prevent undesirable effects of attention grabbing at times when a user is occupied with a difficult task, ubiquitous computing devices should be aware of the user's cognitive load. However, inferring cognitive load is extremely challenging, especially when performed without obtrusive, expensive, purpose-built equipment. In this study we examine the potential for inferring one's cognitive load using merely cheap wearable sensing devices. We subject 25 volunteers to varying cognitive load using six different primary tasks. In parallel, we collect physiological data with a cheap device, extract features, and construct machine learning models for cognitive load prediction. As metrics for the load we use one subjective measure, the NASA Task Load Index (NASA-TLX), and two objective measures: task difficulty and reaction time. The leave-one-subject-out evaluation shows a significant influence of the task type and the chosen cognitive load metric on the prediction accuracy.
From not disturbing a focused programmer to entertaining a restless commuter waiting for a train, ubiquitous computing devices could greatly enhance their interaction with humans, should they only be aware of the user's cognitive load. However, current means of assessing cognitive load are, with a few exceptions, based on intrusive methods requiring physical contact between the measurement equipment and the user. In this paper we propose Wi-Mind, a system for remote cognitive load assessment through wireless sensing. Wi-Mind is based on a software-defined radio-based radar that measures sub-millimeter movements related to a person's breathing and heartbeats, which, in turn, allows us to infer the person's cognitive load. We built and tested the system with 23 volunteers engaged in different tasks. Initial results show that while Wi-Mind manages to detect whether one is engaged in a cognitively demanding task, inferring the exact cognitive load level remains challenging.
Alerts and notifications are increasingly being used in emergency situations. However, notifying users when an alert is not contextually useful results in those users opting out of this mode of alerting. There is a need to extensively evaluate such notifications and understand their utility before adopting them in real-world alert systems.
In this paper, we describe OpenAlerts, open-source software that allows creating and evaluating the impact of simulated alerts and notifications. The proposed system is an extension of a testbed that was used by over two hundred users across two university campuses to evaluate potential improvements to the US-based Wireless Emergency Alerts. The system allows A/B-type user testing of many potential enhancements, including embedded maps, fine geo-targeting of alerts, embedded URLs, and speech-to-text.
By enabling driver mode on a smartphone, the phone can autonomously detect driving activity and suppress incoming notifications so that drivers can focus their attention on driving. However, current driving mode implementations are limited in that they frequently misjudge driving activity and only offer the option of nullifying all incoming notifications. In this work we propose two major improvements to current driving mode implementations: the first improves driving activity detection, and the second diversifies the notification mechanisms used when the user is detected to be driving. While the work is still at a preliminary stage, we foresee it as a meaningful effort to initiate the design of smartphone notification systems more tailored to users' needs.
Users are confronted with more and more notifications in their lives. Multiple device types in the users' environment use visual, tactile, and auditory cues to inform them about messages, events, and updates. All these devices differ in the modalities used to inform the users and in the configuration options offered for these modalities. Prior work investigated the distracting effects of notifications and how people interact with them; however, related work often focuses on only one platform at a time. Instead, we use interviews to investigate how users experience and deal with notifications generated by their different devices in everyday life. Our results show that users have developed strategies to deal with notifications on their devices, such as disabling (or not enabling) notifications, uninstalling applications, using do-not-disturb functionality, muting devices, or even putting devices away. Only a few users change the notification settings on their devices. As a consequence, the default settings selected by device manufacturers can drastically change how notifications affect users.
In the past decade, the number of always-connected mobile devices exploded. Smartphones are always with the user and host a large number of applications and services that use notifications to gain the user's attention. These notifications and their effects on users have been extensively researched in the context of human-computer interaction. In a body of prior work, numerous small- and large-scale studies were conducted to understand notifications as well as their effects. A common theme in these studies is the need for accessing users' notifications, often for logging purposes. In this paper, we present an open-source framework for notification research on mobile devices. The framework has been used as the foundation of multiple in-the-wild and in-lab studies, and has been downloaded by over 60,000 users. We explain the requirements, the architecture, and past application scenarios of the framework. The scenarios range from enabling reflection on mobile notifications to rich experiences in multi-device environments.
Real-world ubiquitous computing systems face the challenge of requiring a significant amount of data to obtain accurate information through pure data-driven approaches. The performance of these data-driven systems greatly depends on the quantity and 'quality' of data. In ideal conditions, pure data-driven methods perform well due to the abundance of data. However, in real-world systems, collecting data can be costly or impossible due to practical limitations. Physical knowledge, on the other hand, can be used to alleviate these issues of data limitation. This physical knowledge can include domain knowledge from experts, heuristics from experiences, as well as analytic models of the physical phenomena.
This workshop aims to explore the intersection between (and the combination of) data and physical knowledge. The workshop will bring together domain experts who explore the physical understanding of the data, practitioners who develop systems, and researchers in traditional data-driven domains. The workshop welcomes work addressing these issues in different applications and domains, as well as algorithmic and systematic approaches to applying physical knowledge. We further seek to develop a community that systematically analyzes data quality with regard to inference and evaluates the improvements gained from physical knowledge. Preliminary and ongoing work is welcome.
PPG-based continuous heart rate estimation is challenging due to the effects of physical activity. Recently, methods based on time-frequency spectra have emerged to compensate for motion artefacts. However, existing approaches are highly parametrised and optimised for specific scenarios. In this paper, we first argue that cross-validation schemes should be adapted to this topic, and show that the generalisation capabilities of current approaches are limited. We then introduce deep learning, specifically CNN models, to this domain. We investigate different CNN architectures (e.g., the number of convolutional layers, applying batch normalisation, or ensemble prediction), and report insights based on our systematic evaluation on two publicly available datasets. Finally, we show that our CNN-based approach performs comparably to classical methods.
Long Short-Term Memory (LSTM) recurrent neural networks have been shown to be capable of learning long time dependencies and have been successfully applied in many studies, such as machine translation, speech recognition, and air pollution concentration prediction. Prior research has shown that the presence of missing data can dramatically degrade the results of data mining and categorical prediction with machine learning techniques, including LSTM networks. Therefore, this paper focuses on the imputation of missing data in time series of air pollutants using LSTM networks to improve PM2.5 concentration prediction accuracy. Experimental results show that the proposed LSTM-based imputation method yields better PM2.5 concentration prediction accuracy than mean imputation and moving average imputation.
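The two baseline methods named above are straightforward to state precisely; the sketch below shows plausible implementations of mean imputation and moving average imputation for a PM2.5 series with gaps (the LSTM-based method itself would require a deep learning framework and is not reproduced here; the sample values are invented):

```python
def mean_impute(series):
    """Replace None gaps with the mean of all observed values."""
    obs = [v for v in series if v is not None]
    m = sum(obs) / len(obs)
    return [m if v is None else v for v in series]

def moving_avg_impute(series, k=2):
    """Replace None with the mean of up to k preceding (already filled) values."""
    out = []
    for v in series:
        if v is None:
            recent = out[-k:]
            v = sum(recent) / len(recent)
        out.append(v)
    return out

pm25 = [35.0, 40.0, None, 50.0]        # hourly PM2.5 with one missing reading
print(mean_impute(pm25))               # gap filled with the global mean (~41.7)
print(moving_avg_impute(pm25))         # -> [35.0, 40.0, 37.5, 50.0]
```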
Perspiration level monitoring enables numerous applications such as physical condition estimation, personal comfort monitoring, health/exercise monitoring, and inference of environmental conditions of the user. Prior works on perspiration (sweat) sensing require users to manually hold a device or attach adhesive sensors directly onto their skin, limiting user mobility and comfort. In this paper, we present a low-cost and novel wearable sensor system that is able to accurately estimate an individual's sweat level based on measuring moisture. The sensor is designed in a threadlike form factor, allowing it to be sewn into the seams of clothing, rather than having to act as a standalone sensor that the user must attach to their body. The system is comprised of multiple cotton-covered conductive threads that are braided into one sensor. When a person sweats, the resistance between the braided conductive threads changes as moisture becomes trapped in the cotton covering of the threads. The braided three-dimensional structure allows for robust estimation of perspiration level in the presence of external forces that may cause sensor distortion, such as motion. We characterize the relationship between the volume of sweat and measured resistance between the braided threads. Finally, we weave our sensors into the fabric of a shirt and conduct on-body experiments to study users' sweating level through various activities.
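The characterization step above, mapping measured resistance between the braided threads to sweat volume, can be illustrated with a simple least-squares calibration. The calibration pairs and the linear form below are hypothetical, for illustration only; the paper characterizes the actual relationship empirically.

```python
def linear_fit(xs, ys):
    """Ordinary least squares fit y = a*x + b (pure Python)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical calibration: as trapped moisture increases, resistance drops.
vol = [0.0, 10.0, 20.0, 30.0]     # sweat volume (uL)
res = [100.0, 80.0, 60.0, 40.0]   # measured resistance (kOhm)

a, b = linear_fit(res, vol)       # invert: estimate volume from resistance
print(round(a * 70.0 + b, 1))     # a reading of 70 kOhm -> 15.0 uL
```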
The multitude of data generated by the sensors available on users' mobile devices, combined with advances in machine learning techniques, supports context-aware services in recognizing the current situation of a user (i.e., physical context) and optimizing the system's personalization features. However, context-awareness performance mainly depends on the accuracy of the context inference process, which is strictly tied to the availability of large-scale labeled datasets. In this work, we present a framework developed to collect datasets containing heterogeneous sensing data derived from personal mobile devices. The framework was used by 3 volunteer users for two weeks, generating a dataset with more than 36K samples and 1331 features. We also propose a lightweight approach to modeling the user context that can efficiently perform the entire reasoning process on the user's mobile device. To this aim, we used six dimensionality reduction techniques to optimize the context classification. Experimental results on the generated dataset show that we achieve a 10x speed-up and a feature reduction of more than 90% while keeping the accuracy loss below 3%.
Large-scale fine-grained air pollution information has both financial benefits for city managers and health benefits for all city residents. Installing sensors on a fleet of vehicles (such as taxis) to collect air pollution data provides a low-cost, low-maintenance, and potentially high-coverage approach. The challenges here are that: 1) sensing coverage is sparse in some areas, and 2) it changes over time.
This paper presents PGA, a physics-guided and adaptive approach to estimating fine-grained air pollution with vehicle fleets. We combine the advantages of a physics-guided model and a data-driven model to achieve high accuracy with high spatio-temporal resolution. To evaluate the system, we deployed our air pollution sensing hardware on 29 taxis in the city of Shenzhen and collected around 26.3 million data samples within 14 days. The results show that our system achieves up to a 4.0× reduction in average error compared to state-of-the-art approaches.
Radio frequency radar is gaining traction indoors owing to its promise of extended coverage and device-free operation. However, while the well-behaved radar sensing model affords clear advantages, the cluttered indoor environment presents numerous challenges for reliable human sensing. Classic radar techniques are hard to call upon, since the kinematic and clutter behaviours in aerospace are vastly different from their indoor counterparts. We demonstrate the peculiarities of indoor radar using a commercial 2D-array commodity device in the 6 to 8.5 GHz band. We then present a set of processing tools suited for indoor radar human sensing. We show that excessive indoor clutter and erratic human kinematics can be largely mitigated by building on such processing tools, without resorting to low-level techniques unsupported by commercial commodity radars.
With the development of social media, a huge number of users are attracted to social platforms such as Twitter. Emojis are widely used by social network users when posting messages; therefore, it is important to mine the relationships between plain texts and emojis. In this paper, we present a neural approach to predicting the multiple emojis evoked by plain tweets. Our model contains three modules: a character encoder that learns representations of words from their original characters using a convolutional neural network (CNN), a sentence encoder that learns representations of sentences using a combination of a long short-term memory (LSTM) network and a CNN, and a multi-label classification module that predicts the emojis evoked by a tweet. In addition, an attention mechanism is applied at the word level to select important contexts. Our approach is self-labeling and free from expensive and time-consuming manual annotation. Experiments on real-world datasets show that our model outperforms several automatic baselines as well as humans in this task.
Bag-of-Words (BoW) is one of the important techniques for activity recognition. Instead of dividing a continuous sensor stream into sliding windows of fixed time duration, it builds activity recognition models using histograms of primitive motion symbols. However, this BoW method loses the sequential information in the symbol sequences, which limits the performance of activity recognition. In this paper, we propose an activity recognition approach that overcomes this limitation and considers longer time dependencies by capturing local features from the symbol sequences. We use a set of small sliding windows inside the symbol sequences to capture local features. Our algorithm utilizes the physical knowledge that a sequence of symbols of the selected window size reflects the context and order of an activity. We evaluate the activity recognition approaches on two public datasets. The results show that our approach achieves stable improvements on both datasets, compared with traditional statistical and BoW approaches.
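The difference between the classic BoW histogram and local-window features can be sketched as follows: two symbol sequences with identical histograms but different local order are indistinguishable to BoW, while counts of length-w subsequences separate them. The symbols and window size below are invented for illustration:

```python
from collections import Counter

def bow_features(symbols):
    """Order-free histogram of primitive motion symbols (classic BoW)."""
    return Counter(symbols)

def local_features(symbols, w=3):
    """Histogram of length-w subsequences: preserves local symbol order."""
    return Counter(tuple(symbols[i:i + w]) for i in range(len(symbols) - w + 1))

a = list("ABABAB")  # e.g. alternating motion primitives
b = list("AAABBB")  # same symbol counts, different order
print(bow_features(a) == bow_features(b))      # -> True  (BoW cannot tell them apart)
print(local_features(a) == local_features(b))  # -> False (local order is captured)
```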
Occupant activity level information is essential in many smart home applications, such as energy management and elderly care. Various methods have been proposed for detecting occupant activities through vision-, acoustic-, or radio frequency-based methods. However, the visual-based methods function only when occupants are in the visual field, the acoustic-based methods are sensitive to noise, and the radio-based methods usually require occupants to carry receivers all the time. These requirements increase the difficulty of deployment and maintenance in typical indoor smart home scenarios.
To overcome these shortcomings, we propose a structural vibration-based approach. Specifically, we develop a system with a sparse sensor configuration deployed in the floor to monitor the activity levels of different areas. Compared to vision- and acoustic-based methods, our method is not restricted by line-of-sight and is less influenced by noise. Compared to the access control system (ground truth), our system enables finer-grained activity level estimation with a comparable resolution. We evaluated our system in a real-world deployment in an office building, using the building access control as the ground truth system. Our system shows a correlation coefficient of 0.836 when compared to the ground truth system.
Volatile organic compound (VOC) recognition systems can be helpful tools for monitoring today's living environments, which are surrounded by harmful chemicals including dangerous VOCs. With a mobile system in which users can easily detect VOC materials in their surroundings, people can avoid VOC-contaminated environments or take actions to improve their living conditions. Unfortunately, current VOC detection systems require bulky devices, and current technology does not allow this detection and classification process to take place in real time near the user. In this work, we introduce a novel VOC recognition process using a smartphone camera and paper-based fluorometric sensors. Fluorometric sensors change their color patterns as they are exposed to different VOC materials, and the smartphone camera combined with simple machine learning algorithms can be used to classify different VOC materials. Specifically, we introduce how a fluorometric sensor dataset of different VOC materials is gathered, and present a set of preliminary machine learning algorithms for VOC classification using smartphones. Our results show up to ~88% accuracy in classifying eight different types of VOC materials using an LDA model.
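The paper's classifier is an LDA model; as a simplified stand-in that conveys the idea of matching an observed sensor color pattern to a VOC class, a nearest-centroid classifier over hypothetical mean RGB readings could look like this (the VOC names and color values are invented):

```python
import math

# Illustrative stand-in for the paper's LDA model: each VOC class is
# represented by a hypothetical mean RGB reading of its fluorometric
# sensor response, and an observation is assigned to the nearest one.
centroids = {
    "ethanol": (200, 60, 60),
    "acetone": (60, 200, 60),
    "toluene": (60, 60, 200),
}

def classify(rgb):
    """Assign an observed mean sensor color to the closest VOC centroid."""
    return min(centroids, key=lambda voc: math.dist(rgb, centroids[voc]))

print(classify((190, 70, 55)))  # -> ethanol
```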
Radio frequency (RF) signals have been used extensively to enable (indoor) localization and proximity detection based on Received Signal Strength Indication (RSSI). However, localization systems often suffer from large data collection and calibration overhead, especially when being deployed in a new environment. RSSI fingerprinting-based localization systems require the construction of a fingerprinting database, and this localization data acquisition is a hindrance for the proliferation of localization systems in practice. Similarly, RSSI proximity applications require an RSSI calibration for the receiver hardware and the deployment environment. To overcome these problems, we propose the usage of visual 3D models, which enable 6DOF localization and distance measurement with high accuracy. We then fuse this physical knowledge with RF data (1) for the automated acquisition of fingerprinting data and (2) for easy calibration of an RF propagation model for proximity estimation.
In this paper, we present a structure-adaptive approach for monitoring human gait using footstep-induced floor vibrations. Human gait information is critical for timely and accurate assessment and diagnosis of many health conditions. Footstep-induced vibration monitoring has been introduced in prior works as an accurate and cost-effective means to provide continuous gait monitoring in indoor environments. Prior works in this field typically rely on inverse modeling approaches to map signal characteristics to gait information, but are limited due to varying signal characteristics in each new deployment structure. These changing characteristics introduce errors when attempting to use a model trained in one structure in another structure. To overcome this challenge, we propose a structure-adaptive approach that enables the transfer of a model trained in one structure to a new structure where no training data is present. To this end, we first find a lower-dimensional space in which vibration responses share similarity across various structures and label the data in the target structure. Then, we use the labeled data to train a new model for footstep detection and monitoring in that structure. We evaluated our approach through real-world experiments in three types of structures (wood, concrete, and steel). Our evaluation results show that our approach achieves a footstep detection accuracy of between 85% and 97% and an F1-score of between 86% and 97%, which represent 3-10X and 2.2-15X improvements over the baseline approach, respectively.
The recognition of complex and subtle human behaviors from wearable sensors will enable next-generation human-oriented computing in scenarios of high societal value (e.g., dementia care). This will require large-scale human activity corpora and much improved methods to recognize activities and the context in which they occur. This workshop deals with the challenges of designing reproducible experimental setups, running large-scale dataset collection campaigns, designing activity and context recognition methods that are robust and adaptive, and evaluating systems in the real world. We wish to reflect on future methods, such as lifelong learning approaches that allow open-ended activity recognition.
New this year, HASCA will welcome papers from participants in the Sussex-Huawei Locomotion and Transportation Recognition Competition in a special session.
This study introduces OpenHAR, a free Matlab toolbox that combines and unifies publicly open data sets. It provides easy access to the accelerometer signals of ten publicly open human activity data sets, all provided in the same format. In addition, units, measurement ranges, labels, and body position IDs are unified, and data sets with different sampling rates are unified using downsampling. Moreover, the data sets have been visually inspected to find visible errors, such as sensors in the wrong orientation; OpenHAR improves the re-usability of the data sets by fixing these errors. Altogether, OpenHAR contains over 65 million labeled data samples, equivalent to over 280 hours of 3D accelerometer data, covering 211 study subjects performing 17 daily human activities with sensors worn in 14 different body positions.
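The sampling-rate unification step could look like the following sketch. OpenHAR itself is a Matlab toolbox, so the function name and the 50 Hz target rate here are illustrative assumptions, not the toolbox's API; `resample_poly` applies an anti-aliasing filter before decimation.

```python
from math import gcd

import numpy as np
from scipy import signal

def unify_sampling_rate(acc, fs_in, fs_target=50):
    """Resample a (samples x axes) accelerometer array to a common rate.
    A Python sketch of the unification idea; fs_target is an assumed
    common rate, not necessarily the one OpenHAR uses."""
    if fs_in == fs_target:
        return acc
    g = gcd(fs_in, fs_target)   # rational resampling factor
    return signal.resample_poly(acc, fs_target // g, fs_in // g, axis=0)

x = np.random.randn(200, 3)              # 2 s of 100 Hz, 3-axis data
y = unify_sampling_rate(x, fs_in=100)    # -> 50 Hz
print(y.shape)                           # (100, 3)
```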
Body weight training (BWT) is training that utilizes one's own body weight instead of weight machines. Feedback on form and recommendation of a proper training menu are important for maximizing the effect of BWT. The objective of this study is to realize a novel support system that allows beginners to perform effective BWT alone in a wearable computing environment. To provide effective feedback, it is necessary to recognize the BWT type with high accuracy. However, since the accuracy is greatly affected by the position of wearable sensors, we need to know the sensor positions that achieve high accuracy in recognizing the BWT type. We investigated recognition accuracy for 10 types of BWT for each sensor position. We found that the waist is the best position when only one sensor is used; when two sensors are used, the best combination is the waist and the wrist. We conducted an evaluation experiment to show the effectiveness of sensor position. As a result of leave-one-person-out cross-validation over 13 subjects, we obtained an F-measure of 93.5% when sensors are placed on both the wrist and the waist.
The robustness and consistency of sensory inference models under changing environmental conditions and hardware is a crucial requirement for the generalizability of recent innovative work, particularly in the field of deep learning, from the lab to the real world. We measure the extent to which current speech recognition cloud models are robust to background noise, and show that hardware variability is still a problem for real-world applicability of state-of-the-art speech recognition models.
Since taking detailed records of rehabilitation is difficult due to practitioners' busy schedules, opportunities for quantitative analysis of rehabilitation are lost. Automatic rehabilitation recording by activity recognition using wearable sensors offers the dual prospect of decreasing practitioners' load and enabling quantitative analysis of rehabilitation. For highly accurate activity recognition, it is desirable to use a large number of sensors. On the other hand, a larger number of wearable sensors increases patients' discomfort and decreases practicality. Thus, we investigate the suitable number and positions of wearable sensors for activity recognition in rehabilitation. Experiments were carried out with 16 healthy subjects performing 10 different rehabilitation behaviors while wearing seven inertial measurement units (IMUs) on the body. We then examined recognition accuracy while reducing the number of sensors over all combinations of sensor positions. As a result, an F-measure of 0.833 can be obtained with three sensors at the waist, right thigh, and right lower leg.
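The exhaustive search over sensor subsets can be sketched as below. The position names and the toy scoring function are illustrative assumptions; in the study, the score would be the F-measure of a classifier trained and evaluated on the IMU recordings for that subset.

```python
from itertools import combinations

POSITIONS = ["waist", "chest", "right_wrist", "left_wrist",
             "right_thigh", "right_lower_leg", "left_lower_leg"]

def best_sensor_subset(score_fn, k):
    """Evaluate every k-sensor placement and return the best one.
    score_fn maps a tuple of positions to a quality score (here a
    stand-in for the cross-validated F-measure)."""
    return max(combinations(POSITIONS, k), key=score_fn)

# toy score: pretend lower-body positions carry the most information
toy_weights = {"waist": 0.40, "right_thigh": 0.25, "right_lower_leg": 0.18}

def score(subset):
    return sum(toy_weights.get(p, 0.05) for p in subset)

print(best_sensor_subset(score, 3))
# -> ('waist', 'right_thigh', 'right_lower_leg')
```

With seven positions the search space is small (35 subsets of size three), so brute force is practical.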
Training classifiers for human activity recognition systems often relies on large corpora of annotated sensor data. Crowdsourcing is one way to collect and annotate large amounts of sensor data, but it often depends on unskilled workers. In this paper we explore machine learning of classifiers based on human activity data collected and annotated by non-experts. We consider the entire process, starting from data collection through annotation, including machine learning, and ending with the final application implementation. We focus on three issues: 1) can non-expert annotators overcome the technical challenges of data acquisition and annotation, 2) can they annotate reliably, and 3) to what extent might we expect their annotations to yield accurate and generalizable event classifiers. Our results suggest that non-expert users can collect video and data and produce annotations that are suitable for machine learning.
In this paper we present a case study on drinking gesture recognition from a dataset annotated by Experience Sampling (ES). The dataset contains 8825 "sensor events", and users reported 1808 "drink events" through experience sampling. We first show that the annotations obtained through ES do not accurately reflect true drinking events. We then present how we maximize the value of this dataset through two approaches aimed at improving the quality of the annotations post-hoc. First, we use template matching (Warping Longest Common Subsequence, WLCSS) to spot a subset of events that are highly likely to be drinking gestures. We then propose an unsupervised approach that performs drinking gesture recognition by combining K-Means clustering with WLCSS. Experimental results verify the effectiveness of the proposed method.
Motion capture generates data that are, by virtue of its physical specifications, often more accurate than data captured by multiple accelerometer sensors. Based on the observation that accelerometer data can be obtained as the second derivative of position data from motion capture, we propose a simulator, called MEASURed, for activity recognition classifiers. MEASURed can accommodate any number of virtual accelerometer sensors on the body based on given motion capture data. Therefore, MEASURed can evaluate activity recognition classifiers in settings with different numbers, placements, and sampling rates of accelerometer sensors. Our results show that the F1-score estimated by MEASURed is close to that obtained with real accelerometer data.
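The core idea, differentiating marker positions twice to obtain virtual acceleration, can be sketched as follows. This is a minimal illustration, not MEASURed's implementation; a full simulator would also add gravity in the sensor frame and resample to the target sensor rate.

```python
import numpy as np

def virtual_accelerometer(pos, fs):
    """Simulate an accelerometer from motion-capture positions.
    pos: (N, 3) marker positions in metres, sampled at fs Hz."""
    vel = np.gradient(pos, 1.0 / fs, axis=0)   # first time-derivative
    return np.gradient(vel, 1.0 / fs, axis=0)  # second time-derivative

fs = 100.0
t = np.arange(0, 1, 1 / fs)
# marker moving as x(t) = t^2, i.e. constant acceleration of 2 m/s^2
pos = np.stack([t**2, np.zeros_like(t), np.zeros_like(t)], axis=1)
acc = virtual_accelerometer(pos, fs)
print(acc[10])   # interior samples recover ~[2, 0, 0] m/s^2
```

Central differences are exact for this quadratic trajectory away from the boundary samples, where one-sided differences introduce error.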
In this paper, we explore LoRaWAN (Long Range Wide Area Network) sensors for human activity recognition. In particular, we want to explore the relation between packet loss and activity recognition accuracy for LoRaWAN sensor data, and to estimate the amount of packet loss of realistic sensors. In LoRaWAN technology, the number of sensor nodes connected to a single gateway has an impact on the sensors' data-sending capability in terms of packet loss. Using a single gateway, we successfully transfer the LoRaWAN sensor data to a cloud platform and evaluate LoRaWAN accelerometer data for human activity recognition. We explore Linear Discriminant Analysis (LDA), Random Forest (RnF), and K-Nearest Neighbor (KNN) for classification, achieving recognition accuracies of 94.44% with LDA, 84.72% with RnF, and 98.61% with KNN. We then simulate a packet loss environment in our dataset to explore the relation between packet loss and accuracy. In a real caregiving center, we experimented with 42 connected LoRaWAN sensor nodes to evaluate packet reception and packet loss performance.
Many research papers focus on the group communication skills and information transmission capabilities of university students and business people. Group discussions and poster session presentations are major topics in human interaction research. Aiming at benefits for students and teachers, we opened a living laboratory at Tokyo Denki University. In this paper, we introduce the sensing system in the living laboratory.
We propose a method to translate between multiple modalities using an RNN encoder-decoder model, and build an activity recognition system on top of this translation model. The idea of equivalence of modalities was investigated by Banos et al.; this paper replaces their approach with deep learning. We compare the performance of translation with and without clustering and sliding windows. Preliminary activity recognition performance attained an F1 score of 0.78.
As it is generally difficult to use GPS positioning indoors, methods such as Wi-Fi positioning and PDR (pedestrian dead reckoning) positioning have been proposed. Wi-Fi positioning in particular has attracted attention because, in addition to estimating absolute position, its introduction cost is low thanks to Wi-Fi base stations already installed in various facilities. In Wi-Fi positioning, position estimation is mainly performed using received signal strength (RSSI). However, at observation time the user's body shields the radio waves, which reduces the RSSI and degrades positioning accuracy. Therefore, in this research, the radio environment map is represented using a GMM (Gaussian mixture model), and positioning accuracy is improved by accounting for radio attenuation by the human body in particle-filter-based Wi-Fi positioning.
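A minimal sketch of the GMM radio-map idea: per-location RSSI is modelled with a mixture whose components can capture unobstructed and body-shadowed readings. The dBm values and the two-component choice are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical RSSI readings (dBm) from one access point at one grid
# point: an unobstructed mode plus a weaker body-shadowed mode.
rng = np.random.default_rng(0)
rssi = np.concatenate([rng.normal(-55, 2, 300),    # line of sight
                       rng.normal(-68, 3, 150)])   # body attenuation
gmm = GaussianMixture(n_components=2, random_state=0).fit(rssi.reshape(-1, 1))

# A particle filter would weight each particle by the likelihood of the
# observed RSSI under the map's GMM at that particle's position.
likelihood = np.exp(gmm.score_samples([[-56.0]]))[0]
print(sorted(gmm.means_.ravel()))   # recovers the two modes
```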
We present a solution to the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge (team "S304"). Our experiments reveal two potential pitfalls in the evaluation of activity recognition algorithms: 1) unnoticed overfitting due to autocorrelation (i.e. dependencies between temporally close samples), and 2) the accuracy/generality trade-off due to idealized conditions and lack of variation in the data. We show that evaluation with a random training/test split suggests highly accurate recognition of eight different travel activities with an average F1 score of 96% for single-participant/fixed-position data, whereas with proper backtesting the F1 score drops to 84%, for data of different participants in the SHL Dataset to 61%, and for different carrying positions to 54%. Our experiments demonstrate that results achieved 'in-the-lab' can easily become subject to an upward bias and cannot always serve as reliable indicators for the future performance 'in-the-field', where generality and robustness are essential.
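The autocorrelation pitfall can be reproduced on synthetic data: a 1-nearest-neighbour classifier looks excellent under a random split because temporally adjacent, near-duplicate samples leak between train and test, but collapses under a chronological (backtesting) split. All data below are synthetic; only the evaluation contrast mirrors the abstract.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# 20 recordings ("segments") of 50 temporally-close, near-duplicate
# samples each; the segment offset is unrelated to the activity label.
X, y = [], []
for seg in range(20):
    centre = rng.normal(0, 5, size=3)
    X.append(centre + rng.normal(0, 0.05, size=(50, 3)))
    y.append(np.full(50, seg % 4))            # 4 activity classes
X, y = np.vstack(X), np.concatenate(y)

clf = KNeighborsClassifier(n_neighbors=1)

# Random split: near-duplicates leak into the test set.
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
acc_random = clf.fit(Xtr, ytr).score(Xte, yte)

# Backtesting: train on the first 15 segments, test on the last 5.
cut = 15 * 50
acc_chrono = clf.fit(X[:cut], y[:cut]).score(X[cut:], y[cut:])
print(acc_random, acc_chrono)   # the random split is wildly optimistic
```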
In recent years, activity recognition (AR) has become prominent in ubiquitous systems. Following this trend, the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge provides a unique opportunity for researchers to test their AR methods against a common, real-life and large-scale benchmark. The goal of the challenge is to recognize eight everyday activities including transit. Our team, JSI-Deep, utilized an AR approach based on combining multiple machine-learning methods following the principle of multiple knowledge. We first created several base learners using classical and deep learning approaches, then integrated them into an ensemble, and finally refined the ensemble's predictions by smoothing. On the internal test data, the approach achieved 96% accuracy, which is a significant leap over the baseline 60%.
In this paper, we use the smartphone-sensor-based benchmark Sussex-Huawei Locomotion-Transportation (SHL) dataset for rich locomotion and transportation analytics. We show a comparison of different sensor-based features for identifying a specific activity level. In addition, we propose a "Mod technique", which increases the accuracy of classifier outputs in offline processing. As the basis of this technique, we assume that the minimum switching time from one transportation activity to another is one minute; based on this assumption, we correct wrong classifier predictions within each one-minute data frame. Our proposed method brings a 4.56% accuracy increase in the Random Forest (RnF) classifier output. Our team, "Confusion Matrix", developed this algorithm pipeline for the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge.
At the SHL recognition challenge 2018, Team Tesaguri developed a human activity recognition method. First, we obtained the FFT spectrogram from 60-second acceleration and gyro sensor data for each of six axes. A five-second sliding window was used for FFT processing. About 70% of the spectrogram figures from the Sussex-Huawei Locomotion-Transportation dataset were used for training data. Our model was based on CNN using FFT spectrogram images. After training for 50 epochs, F-measure was about 90% for acceleration data and 85% for gyro data. Next, considering the results of each sensor axis, to improve the recognition rate, we combined the information of multiple sensors. Specifically, we synthesized new images by combining the FFT spectrogram figures of two axes and the best combination condition was examined by correlation analysis. The highest score, 93% recognition, came from the vertically arranged images derived from the norm of acceleration and the y-axis gyro.
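The spectrogram computation described above can be sketched with SciPy. The 2 Hz test tone and the noise level are illustrative; the five-second window follows the abstract, and the 100 Hz rate matches the SHL smartphone data.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 100                        # SHL inertial data are sampled at 100 Hz
t = np.arange(0, 60, 1 / fs)    # one 60-second frame
rng = np.random.default_rng(0)
# hypothetical acceleration norm: a 2 Hz gait component plus noise
acc_norm = np.sin(2 * np.pi * 2 * t) + 0.1 * rng.normal(size=t.size)

# five-second sliding window for the FFT, as in the Tesaguri pipeline
f, tt, Sxx = spectrogram(acc_norm, fs=fs, nperseg=5 * fs)
print(Sxx.shape)                           # (frequency bins, time slices)
print(f[np.argmax(Sxx.mean(axis=1))])      # dominant frequency ~ 2 Hz
```

The resulting `Sxx` array can then be rendered as an image and fed to a CNN, as the team did.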
The Sussex-Huawei Locomotion-Transportation recognition challenge presents a unique opportunity to the activity-recognition community - providing a large, real-life dataset with activities different from those typically being recognized. This paper describes our submission (team JSI Classic) to the competition that was organized by the dataset authors. We used a carefully executed machine learning approach, achieving 90% accuracy classifying eight different activities (Still, Walk, Run, Bike, Car, Bus, Train, Subway). The first step was data preprocessing, including a normalization of the phone orientation. Then, a wide set of hand-crafted domain features in both frequency and time domain were computed and their quality was evaluated. Finally, the appropriate machine learning model was chosen (XGBoost) and its hyper-parameters were optimized. The recognition result for the testing dataset will be presented in the summary paper of the challenge.
In this paper we summarize the contributions of participants to the Sussex-Huawei Transportation-Locomotion (SHL) Recognition Challenge organized at the HASCA Workshop of UbiComp 2018. The SHL challenge is a machine learning and data science competition, which aims to recognize eight transportation activities (Still, Walk, Run, Bike, Bus, Car, Train, Subway) from the inertial and pressure sensor data of a smartphone. We introduce the dataset used in the challenge and the protocol for the competition. We present a meta-analysis of the contributions from 19 submissions, their approaches, the software tools used, computational cost and the achieved results. Overall, two entries achieved F1 scores above 90%, eight with F1 scores between 80% and 90%, and nine between 50% and 80%.
This paper explores the relevance of an approach based exclusively on deep neural networks for locomotion recognition. This work is done within the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge as team Power of Things. The data used during the experiments are part of the SHL dataset, whose adaptability to different applications of ubiquitous computing we emphasize; this quality emerges from the broad spectrum of modalities that the dataset encompasses, 16 in total. More than 500 different convolutional and hybrid architectures are evaluated, and a Bayesian optimization procedure is used for hyper-parameter space exploration. The influence of these hyper-parameters on performance is analyzed using the fANOVA framework. The best models achieve a recognition rate of about 92% as measured by the F1 score.
Human activity recognition based on smartphone sensors has the potential to impact a wide range of applications such as healthcare, smart homes, and remote monitoring. Simple activities like "Sit" and "Walk" can be distinguished relatively easily. However, distinguishing similar transportation activities, such as "Train", "Bus" and "Subway", remains an open problem. To tackle this problem, this paper uses the recently proposed Independently Recurrent Neural Network (IndRNN) to process data of different lengths in order to capture temporal patterns at different granularities. The unique property of IndRNN in terms of gradient propagation allows it to construct deep RNN networks and extract good features. The proposed method has been applied and submitted to the SHL recognition challenge as "UOW_AMRL".
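For intuition, an IndRNN step differs from a vanilla RNN in that the recurrent weight is a per-neuron vector applied element-wise rather than a full matrix, which is what makes gradients controllable in deep stacks. Below is a minimal NumPy sketch of a single forward pass; the random weights and dimensions are illustrative, not the authors' network.

```python
import numpy as np

def indrnn_layer(x_seq, W, u, b):
    """One IndRNN layer: h_t = relu(W @ x_t + u * h_{t-1} + b), where
    the recurrent term u * h is element-wise (u is a vector, not a
    matrix as in a standard RNN)."""
    h = np.zeros_like(b)
    hs = []
    for x in x_seq:
        h = np.maximum(0.0, W @ x + u * h + b)
        hs.append(h)
    return np.stack(hs)

T, d_in, d_h = 300, 6, 8        # e.g. 3-axis acc + gyro over a window
rng = np.random.default_rng(0)
out = indrnn_layer(rng.normal(size=(T, d_in)),
                   rng.normal(size=(d_h, d_in)) * 0.1,
                   rng.uniform(0, 1, d_h),
                   np.zeros(d_h))
print(out.shape)   # (300, 8): one hidden vector per time step
```

Keeping each entry of `u` at or below 1 bounds the recurrent contribution, which is the lever IndRNN uses to train over long sequences.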
Human activity recognition using multimodal sensors has been widely studied in recent years. In this paper, we propose an end-to-end deep learning model for activity recognition that fuses features of multiple modalities based on automatically determined confidence scores. The confidence scores efficiently regulate the level of contribution of each sensor. We conduct an experiment on the latest activity recognition dataset, and the results confirm that our model outperforms existing methods. We submit the proposed model to the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge under the team name "Yonsei-MCML".
In this paper, our method (Team Tk-2) for the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge is described. First, the sliding window technique was applied to the sensor data with six window sizes of 60, 30, 10, 6, 3 and 1 second to obtain recognition instances. Then, an SVM classifier for each instance size was trained on the mean, variance, maximum value, minimum value, and zero-crossing rate. After the test data are recognized by the six SVM models, the final result is determined by voting among the SVM models.
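The per-window features can be sketched as below; the function name and the 1 Hz test signal are illustrative, while the five features are those listed in the abstract.

```python
import numpy as np

def window_features(x):
    """Mean, variance, max, min and zero-crossing rate of one window."""
    zcr = np.mean(np.abs(np.diff(np.signbit(x).astype(int))))
    return np.array([x.mean(), x.var(), x.max(), x.min(), zcr])

fs = 100
x = np.sin(2 * np.pi * 1.0 * np.arange(0, 3, 1 / fs))  # 3-second window
print(window_features(x))   # mean ~0, var ~0.5, max 1, min -1, small ZCR
```

A voting step would then take the majority label across the six per-window-size SVMs.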
The goal of the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge is to classify 8 modes of transportation and locomotion activities recorded using a smartphone inertial sensor. In this paper, Team Orion extracts 36 quantitative features per sensor (226 features in total) from the dataset provided in the SHL recognition challenge and provides a processing pipeline for training the classifiers that embraces parallel computation and out-of-memory processing. A one-vs-one quadratic Support Vector Machine (1-1 QSVM) and Bagged Decision Trees with 45 learners (EoC-45) were used for classification of the DU Mobility Dataset. The same features/pipeline provided 91% and 92% classification accuracies, respectively, on the SHL recognition challenge dataset. Using a one-vs-one cubic Support Vector Machine (1-1 CSVM) on SHL, we achieved the highest classification accuracy of 92.8%. Afterwards, an Artificial Neural Network (ANN) was applied to the dataset and accuracies of 93.6%, 88.18% and 90.05% were obtained for the training, testing and validation phases. The results provide promising prospects of supervised and neural network classification for locomotion analysis.
The objective of this work is to recognize modes of locomotion and transportation accurately, with special emphasis on precise detection of transitions between different activities. The recognition of activities of daily living (ADLs), specifically modes of locomotion and transportation, provides an important context for many ubiquitous sensing applications. The precise detection of activity transition time is also important for applications that require immediate response. Many prior signal processing techniques use a fixed-length window for signal segmentation, which leads to poor performance for detecting activity transitions due to the limitation of a single window size. In this paper, we construct weak classifiers based on different window sizes and propose a decision level fusion approach to effectively classify and assign a label for each sample by fusing the decisions from all weak classifiers. Moreover, we propose a set of phone orientation independent features to ensure the system can work with arbitrary phone orientation. Our team, The Drifters, attained an F-score improvement of 1.9%, increasing from 94% to 95.9%, using our proposed method compared to using a single fixed-size window segmentation technique.
In this paper, we, Ubi-NUTS Japan, introduce a multi-stage activity inference method that can recognize a user's mode of locomotion and transportation using mobile device sensors. We use the Sussex-Huawei Locomotion-Transportation (SHL) dataset to tackle the SHL recognition challenge, where the goal is to recognize 8 modes of locomotion and transportation (still, walk, run, bike, car, bus, train, and subway) from the inertial sensor data of a smartphone. We adopt a multi-stage approach where the 8-class classification problem is divided into multiple sub-problems considering the similarity of each activity. Multimodal sensor data collected from a mobile phone are inferred using a proposed pipeline that combines feature extraction and 4 different types of classifiers generated using the random forest algorithm. We evaluated our method using data from over 271 hours of daily activities of one participant, with 5-fold cross-validation. Evaluation results demonstrate that our method clearly recognizes the 8 types of activities with an average F1-score of 97%.
This paper describes a simple yet resource-efficient method to train and test a classifier based on the Sussex-Huawei Locomotion-Transportation (SHL) dataset for the SHL recognition challenge. Our team, "Maximum Analytics", placed special emphasis on simple feature design and fast training and recognition. The computational resources necessary for training a classifier on the whole dataset were not available; therefore, an effective subset was chosen from the entire dataset, which also resolved the class imbalance present in the initial dataset. We then trained on the subset using popular machine learning algorithms. The maximum overall accuracy achieved is 82.8%.
The objective of this work is to determine various modes of locomotion and in particular identify the transition time from one mode of locomotion to another as accurately as possible. Recognizing human daily activities, specifically modes of locomotion and transportation, with smartphones provides important contextual insight that can enhance the effectiveness of many mobile applications. In particular, determining any transition from one mode of operation to another empowers applications to react in a timely manner to this contextual insight. Previous studies on activity recognition have utilized various fixed window sizes for signal segmentation and feature extraction. While extracting features from larger window size provides richer information to classifiers, it increases misclassification rate when a transition occurs in the middle of windows as the classifier assigns only one label to all samples within a window. This paper proposes a hierarchical signal segmentation approach to deal with the problem of fixed-size windows. This process begins by extracting a rich set of features from large segments of signal and predicting the activity. Segments that are suspected to contain more than one activity are then detected and split into smaller subwindows in order to fine-tune the label assignment. The search space of the classifier is narrowed down based on the initial estimation of the activity, and labels are assigned to each sub-window. Experimental results show that the proposed method improves the F1-score by 2% compared to using fixed windows for data segmentation. The paper presents the techniques employed in our team's (The Drifters) submission to the SHL recognition challenge.
Traditional machine learning approaches for recognizing modes of transportation rely heavily on hand-crafted feature extraction methods, which require domain knowledge. We therefore propose a hybrid deep learning model: Deep Convolutional Bidirectional-LSTM (DCBL), which combines convolutional and bidirectional LSTM layers and is trained directly on raw sensor data to predict the transportation modes. We compare our model to the traditional machine learning approaches of training Support Vector Machines and Multilayer Perceptron models on extracted features. In our experiments, DCBL performs better than the feature-based methods in terms of accuracy and simplifies the data processing pipeline. The models are trained on the Sussex-Huawei Locomotion-Transportation (SHL) dataset. The submission of our team, Vahan, to the SHL recognition challenge uses an ensemble of DCBL models trained on raw data using different combinations of sensors and window sizes, and achieved an F1-score of 0.96 on our test data.
This paper describes our submission (team name: Ideas Lab @ UT) to the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge. The SHL recognition challenge considers the problem of human activity recognition from inertial sensor data collected at 100 Hz from an Android smartphone. Our data analysis pipeline contains three stages: a pre-processing stage, a classification stage, and a time stabilization stage. Our main finding is that performing classification on "raw" data features (i.e. without feature extraction) over extremely short time windows (e.g. 0.1 seconds of data) and then stabilizing the activity predictions over longer time windows (e.g. 15 seconds) results in much higher accuracy than directly performing classification on the longer windows. Our submitted model uses a random forest classifier and attains a mean F1 score over all activities of 0.973 on a 10% hold-out sample of the training data, suggesting that activity recognition can be performed quite accurately on the SHL dataset.
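The stabilization step, majority-voting the short-window predictions over a longer horizon, can be sketched as follows. The block length and label codes are illustrative assumptions; only the classify-short-then-smooth structure follows the abstract.

```python
import numpy as np

def stabilize(labels, window=150):
    """Replace each prediction with the majority label of its block,
    e.g. 150 predictions at 0.1 s per window ~= a 15-second horizon."""
    out = labels.copy()
    for i in range(0, len(labels), window):
        out[i:i + window] = np.bincount(labels[i:i + window]).argmax()
    return out

# hypothetical per-0.1s predictions: a spurious burst of class 3
# inside a long run of class 1 gets voted away
noisy = np.array([1] * 40 + [3] * 10 + [1] * 100)
print(np.array_equal(stabilize(noisy), np.ones(150, dtype=int)))  # True
```

Smoothing exploits the fact that locomotion modes change on the scale of seconds, not tenths of a second.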
In this paper we, as part of the Sussex-Huawei Locomotion-Transportation (SHL) Recognition Challenge organizing team, present reference recognition performance obtained by applying various classical and deep-learning classifiers to the testing dataset. We aim to recognize eight modes of transportation (Still, Walk, Run, Bike, Bus, Car, Train, Subway) from smartphone inertial sensors: accelerometer, gyroscope and magnetometer. The classical classifiers include naive Bayesian, decision tree, random forest, K-nearest neighbour and support vector machine, while the deep-learning classifiers include fully-connected and convolutional deep neural networks. We feed different types of input to the classifier, including hand-crafted features, raw sensor data in the time domain, and in the frequency domain. We employ a post-processing scheme to improve the recognition performance. Results show that convolutional neural network operating on frequency-domain raw data achieves the best performance among all the classifiers.
The approach of UCLab (submission 1) to the SHL recognition challenge is based on using Random Forest and letting it select important features. Using the accelerometer, gyroscope, magnetometer, gravity and pressure sensors as input data, features such as mean, variance, max, difference of max and min, and main frequency are calculated. We find that the activities Still, Train, and Subway are highly similar and hard to distinguish. To achieve robust recognition, we make predictions for every 3-second segment and produce the final prediction based on these predictions. Moreover, to deal with the case that one line contains two or more activities, we use rule-based post-processing to predict these activity labels. As a result, using the last 20% of lines in the training dataset as a validation set, predictions for 3-second segments achieve an F1-score of around 0.879, and predictions for lines around 0.942.
For high-precision estimation in the SHL recognition challenge, we use a deep learning framework based on convolutional layers and LSTM recurrent units (ConvLSTM). We, UCLab (submission 2), propose a model that combines two different ConvLSTMs: one whose convolutional layers have a large kernel size and one with a small kernel size. We expect these two ConvLSTMs to extract different kinds of features, such as global and local features. We concatenate the outputs of the two ConvLSTMs, feed them to a fully connected layer, and finally convert the output of the fully connected layer to a probability distribution with a soft-max function. We input 10 sensor axes to our model: 3 axes of linear acceleration, 3 axes of gyroscope, 3 axes of normalized gravitational acceleration, and the pressure difference from the previous value. As a result, using the last 20% of lines for validation, predictions achieve an F1-score of around 0.931.
Human-computer interaction is progressively shifting towards natural language communication, determining the rise of conversational agents. In the context of ubiquitous computing, the opportunities for interacting with new services and systems in a conversational manner are increasing and, nowadays, it is common to talk to home assistants to interact with a smart environment or to write to chatbots to access an online service. This workshop aims at bringing together researchers from academia and industry in order to establish a multidisciplinary community interested in discovering and exploring the challenges and opportunities coming from the ubiquity of conversational agents.
The ability to engage the user in a conversation and the credibility of the system are two fundamental characteristics of virtual coaches. In this paper, we present the architecture of a conversational e-coach for promoting healthy lifestyles in older age, developed in the context of the NESTORE H2020 EU project. The proposed system allows multiple access points to a conversational agent via different interaction modalities: a tangible companion that embodies the virtual coach will leverage voice and other non-verbal cues in the domestic environment, while a mobile app will integrate a text-based chat for a ubiquitous intervention. In both cases, the conversational agent will deliver personalized interventions based on behavior change models and will promote trust by means of emotionally rich conversations.
In today's media landscape, emotionally charged topics including issues related to politics, religion, or gender can lead to the formation of filter bubbles, echo chambers, and subsequently to strong polarization. An effective way to help people break out of their bubble is engaging with other perspectives. When caught in a filter bubble, however, it can be hard to find an opposed conversation partner in one's social surroundings to discuss a particular topic with. Also, such conversations can be socially awkward and are, therefore, rather avoided. In this project, we aim to train different chat-bots to become highly opinionated on specific topics (e.g., pro/against gun laws) and provide the opportunity to discuss political views with a highly biased opponent.
In this paper, we describe a new theoretically motivated application framework, Wenner, for developing tailored interactions for an exercise-oriented well-being application. The framework allows us to conduct in-lab and 'in the wild' studies to directly deploy and evaluate the effects of our interventions. We describe two systems that have been developed using the framework. The first, WennerStep, is an individually tailored, text-based approach to delivering adaptive persuasive messaging using a smartwatch. The second, WennerAgent, is a virtual embodied coach that adaptively reacts to user activity with emotionally appropriate responses. We conclude with a discussion of outstanding challenges for our approach and relate these to current work on chatbot deployments for well-being.
This article presents the design of a conversational agent whose goal is to coach people who wish to improve their food lifestyle. The data gathering method is easy and especially fast, in order to simplify the life of the user. The goals are defined by the user and the follow-up is done daily. Two choices are available to the user: reducing their consumption of meat, or consuming more fruits and vegetables. User tests were conducted with 36 people. Only 11% of the challenges were successful, but in 65% of cases the testers managed to improve their consumption compared to before.
Mental health issues affect a significant portion of the world's population and can result in debilitating and life-threatening outcomes. To address this increasingly pressing healthcare challenge, there is a need to research novel approaches for early detection and prevention. Toward this, ubiquitous systems can play a central role in revealing and tracking clinically relevant behaviors, contexts, and symptoms. Further, such systems can passively detect relapse onset and enable the opportune delivery of effective intervention strategies. However, despite their clear potential, the uptake of ubiquitous technologies into clinical mental healthcare is rare, and a number of challenges still face the overall efficacy of such technology-based solutions. The goal of this workshop is to bring together researchers interested in identifying, articulating, and addressing such issues and opportunities. Following the success of this workshop in the last two years, we aim to continue facilitating the UbiComp community in developing a holistic approach for sensing and intervention in the context of mental health.
Although mental illness is one of the most serious social problems, stress does not necessarily have a negative effect; on the contrary, an optimal amount of stress can enable individuals to perform better in tasks. While there have been several studies on estimating stress level from smartphone usage logs, and several effective features were revealed as a result, there are few studies on estimating cognitive performance. Thus, in this paper, we explore several factors affecting the estimation of cognitive performance from smartphone logs. To conduct the analysis, we collected smartphone usage logs and Go/No-Go task data for 6 weeks from 39 participants in the wild. We found that the time taken to find an intended app is related to cognitive performance. This result suggests that measuring this time can be an effective feature for estimating cognitive performance from smartphone logs.
Social interactions have multifaceted effects on individuals' mental health statuses, including mood and stress. As a proxy for the social environment, Bluetooth encounters detected by personal mobile devices have been used to improve mental health prediction and have shown preliminary success. In this paper, we propose a vector space model representation of Bluetooth encounters in which we convert encounters into spatiotemporal tokens within a multidimensional feature space. We discuss multiple token designs and feature value schemes and evaluate the predictive power of the resulting features for stress recognition tasks using the StudentLife and Friends & Family datasets. Our findings motivate further discussion and research on bag-of-words approaches for representing raw mobile sensing signals for health outcome inference.
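The vector-space idea above can be illustrated with a small sketch: each Bluetooth encounter (device, time) is mapped to a spatiotemporal token, and a period of encounters becomes a count vector over the token vocabulary, i.e. a bag-of-words. The specific token design below (`dev@hour`) is one hypothetical choice; the paper evaluates several designs and feature value schemes.

```python
# Bag-of-words over spatiotemporal Bluetooth-encounter tokens.
from collections import Counter

def tokenize(encounters):
    """encounters: iterable of (device_id, hour_of_day) pairs."""
    return [f"dev{d}@hour{h}" for d, h in encounters]

def vectorize(days, vocab=None):
    """Turn per-day encounter lists into count vectors over a shared vocab."""
    bags = [Counter(tokenize(day)) for day in days]
    if vocab is None:
        vocab = sorted({t for bag in bags for t in bag})
    vectors = [[bag.get(t, 0) for t in vocab] for bag in bags]
    return vectors, vocab
```

The resulting vectors can then be fed to any standard classifier for stress recognition, which is what makes the bag-of-words framing attractive for raw mobile sensing signals.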
A number of studies have investigated the use of mobile phone sensing to predict mood in unipolar (depression) and bipolar disorder. However, most of these studies included a small number of people, making it difficult to understand the feasibility of this method in practice. This paper reports on mood prediction from a large (N=129) sample of bipolar disorder patients. We achieved prediction accuracies of 89% and 58% in personalized and generic models, respectively. Moreover, we shed light on the "cold-start" problem in practice and show that the accuracy depends on the labeling strategy of euthymic states. The paper discusses the results, the difference between personalized and generic models, and the use of mobile phones in mental health treatment in practice.
The advances in mobile and wearable sensing have led to a myriad of approaches for stress detection in both laboratory and free-living settings. Most of these methods, however, rely on some combination of physiological signals measured by the sensors to detect stress. While these solutions work well in a lab or a controlled environment, the performance in free-living situations leaves much to be desired. In this work, we explore the role of the user's context in free-living conditions and how it affects users' perceived stress levels. To this end, we conducted an 'in-the-wild' study with 23 participants, where we collected physiological data from the users, along with 'high-level' contextual labels and perceived stress levels. Our analysis shows that context plays a significant role in users' perceived stress levels and, when used in conjunction with physiological signals, leads to much better stress detection results than relying on physiological data alone.
Timely detection of an individual's stress level has the potential to expedite and improve stress management, thereby reducing the risk of adverse health consequences that may arise due to unawareness or mismanagement of stress. Recent advances in wearable sensing have resulted in multiple approaches to detect and monitor stress with varying levels of accuracy. The most accurate methods, however, rely on clinical-grade sensors strapped to the user. These sensors measure a person's physiological signals and are often bulky, custom-made, expensive, and/or in limited supply, limiting their large-scale adoption by researchers and the general public. In this paper, we explore the viability of commercially available off-the-shelf sensors for stress monitoring. The idea is to use cheap, non-clinical sensors to capture physiological signals and to make inferences about the wearer's stress level from that data. We describe a system involving a popular off-the-shelf heart-rate monitor, the Polar H7, and evaluate it in a lab setting with three well-validated stress-inducing stimuli and 26 participants. Our analysis shows that using the off-the-shelf sensor alone, we were able to detect stressful events with an F1 score of 0.81, on par with clinical-grade sensors.
Exercise therapy for dementia care helps patients improve balance, muscle strength, endurance, flexibility, and posture. Usually, a therapist develops a physical training program to help patients retain their locomotor abilities, but in many cases the challenge is to motivate and engage participants. To help the therapist engage participants, we introduce the anthropomorphic social robot Kiro. Aiming to support the therapist with a predefined routine, Kiro follows the therapist's instructions to perform several exercises, moving its arms and legs while motivating patients with personalized, motivational phrases. In this work, we report a preliminary user study consisting of two sessions with seven persons with dementia, in which the robot successfully engaged the patients and kept them motivated. Finally, we discuss the intervention design, adoption, and user interaction.
Emerging mobile health (mHealth) and eHealth technology could provide opportunities for remote monitoring and interventions for people with mental health and neurological disorders. RADAR-base is a modern mHealth data collection platform built around Confluent and Apache Kafka. Here we report progress on studies into two brain disorders: major depressive disorder and epilepsy. For depression an ambulatory study is being conducted with patients recruited to three sites and for epilepsy an in-hospital study is being carried out at two sites. Initial results show smartphones and wearable devices have potential to improve care for patients with depression and epilepsy.
Collaborative writing has been a common activity in different professions for many years. Researchers from multiple disciplines, including HCI and CSCW, have experimented with and evaluated a number of collaborative writing tools. However, with the emergence of more modern technologies like cloud-based Google Docs and newer versions of Microsoft Word, collaborative writing practices are gaining in popularity and use. Yet we know little about the accessibility of these systems and how they should be designed to best support the increasingly common collaborative practices that constitute everyday work groups and collaboration. Our research goal is to study how people with visual impairments collaboratively write with sighted and/or visually impaired peers in practice, whether and how they use computer-mediated collaborative tools, and to design new assistive technologies that facilitate collaboration among groups of writers with visual impairments or mixed abilities.
I am a fifth year Ph.D. Student (enrolled January '14) working in Data Science (Department of Mathematics) at Shiv Nadar University, India. I have an undergraduate major in Computer Science, and my research interests lie in two primary areas - (i) developing systems for event detection and pattern recognition using smartphone sensor data, and (ii) leveraging micro-events for deep context mining including human-human interactions and behavior. Having been an active part of the ubiquitous computing research community during my undergraduate as well as during my Ph.D. (expected completion Summer '19), my future aspirations are to join as a post-doctoral researcher in the same field, and continue to build upon my skills and contribute both as an active researcher as well as a mentor for budding undergraduate students.
Our goal is to explore changes in drivers' behavior in a developing-country context, where behavior changes through numerous factors and brings discomfort to daily life. Our implemented system is able to give insight into cardiovascular changes, which also indicate discomfort. We are trying to develop a system that may help reduce this discomfort.
With the vision of building "A Smart World", the Internet of Things (IoT) plays a crucial role, in which users, computing systems, and objects with sensing and actuating capabilities cooperate with unparalleled convenience. Among the many applications of IoT, healthcare is among the fastest-emerging today, as new technological advancements create opportunities for early detection of illnesses, quick decision-making, and even aftercare monitoring. Nowadays, it has become a reality for many patients to be monitored remotely, overcoming traditional logistical obstacles. However, these e-health applications raise concerns about the security, privacy, and integrity of medical data. For secure transmission in IoT healthcare, data gathered from sensors in a patient's body area network needs to be sent to the end user and might need to be aggregated, visualized, and/or evaluated before being presented. Here, trust is critical. Therefore, an end-to-end trustworthy system architecture can guarantee the reliable transmission of a patient's data and confirm the success of an IoT healthcare application.
Electrophoretic devices, also known as Electronic Paper (E-paper), are gaining popularity due to their merits, including low energy consumption, flexible form factors, and their non-glowing nature. These advantages make E-paper an interesting alternative to conventional glowing LCD displays, which are generally believed to have a negative impact on users' physiological and psychological well-being. Existing research on E-paper devices focuses on evaluating readability, usability, and user acceptance. However, there is little research on applying E-paper devices in scenarios other than reading. In this research, we aim to bridge this gap by investigating the potential of using E-paper devices in input-heavy application scenarios that are realistic for both office work and school education, in pursuit of high efficiency, better usability, and a healthier lifestyle.
Activity recognition and understanding are very important research areas. Video (RGB-depth) sensors, accelerometers, LoRaWAN, and other modalities have been explored for activity recognition in different applications. This short paper adds research areas related to these fields. The core research challenges will be discussed with the participants. I have been exploring sensor-based activity recognition and healthcare support systems in Bangladesh and Japan for the last few years. I would therefore like to share my research work and discuss further ideas and research partnerships.
Sensor-based activity recognition is my core research area. As a Ph.D. student, I have been working in this area on healthcare monitoring systems, fall detection, and elderly support systems. My research approach is to improve activity recognition methods by achieving better recognition performance, especially for abnormal activities of elderly people, which may lead to serious health concerns if left unattended. Another target is to solve the sensor-network data loss problem, one of the important issues in detecting human activity properly. This workshop will be highly motivational and crucial for my learning and future research endeavors.
Chronic kidney diseases are one of the leading causes of untimely deaths worldwide, taking the lives of millions of people every year. Bangladesh, as a developing country with a dense population and inadequate resources, is very vulnerable to the problem. We are working on a cloud-based solution that tries to compensate for the doctor-patient ratio, which strongly affects the quality of healthcare. The system connects patients, doctors, and caregivers. It presents all the necessary information about the patients to the doctor and helps them analyze the patients' condition in less time. The system provides interfaces for communication among doctors, patients, and caregivers. Patients and caregivers are notified of the doctor's instructions, which helps them navigate the different phases of the disease.
Autism has emerged as one of the most critical issues globally in recent times. The situation is more complicated in developing countries like Bangladesh, where infrastructure is not up to the mark. The education system for autistic people in Bangladesh lacks the aid of technology, which could make the system more efficient. There is a communication gap between teachers and the caregivers of autistic kids. Our work addresses these issues by developing a school management guide for autism schools, which will make the management of these schools much more efficient. The system is designed to help communication between parents and teachers by providing adequate information to the parents of autistic kids about the daily teaching procedures in the school.
Sentiment analysis is a research field of computer science that analyzes human feedback and responses based on text inputs. So far there has been a significant amount of work on sentiment analysis of recorded text inputs. In this paper, we propose a multi-dimensional aspect-based sentiment analysis. This analysis represents the influence of human emotion and social factors on the responses of individuals in different events. Social media platforms have already shown that people's opinions vary to a large extent on any given topic. We want to present a data-driven visualization of this common scenario and find out the causes behind it. The primary results of this experiment point out the significant influence of human factors on an individual's response to a common scenario. The next steps of this research include collecting larger user-study data and identifying the most intriguing key factors. Thus, this multi-dimensional aspect-based sentiment analysis uncovers the background story of a text input in addition to detecting its polarity.
This paper outlines an ongoing PhD dissertation research, seeking to explore the user experience of a non-traditional form of haptic modality --- compression, through the use of computer-mediated on-body compression systems with embedded active materials that are capable of being simultaneously functional, remotely-controllable, and inconspicuous. The motivation for this work is to address the need to expand our understanding of different haptic sensations, in this case, the objective and subjective effects of compression on a user, as well as how might the use of active, dynamically-controllable garments affect traditional compression garment use paradigms.
Foot-based interaction for skill transfer is an underexplored domain. The goal of this research is to explore systems for transferring foot movements from an instructor to a recipient in a real-time performance-oriented context. Given this, we designed a wearable haptic-visual system consisting of a pair of smart shoes equipped with LED lights and haptic-enabled socks. This system focuses on portability and non-intrusiveness.
In this project, a Secret Bot has been designed that is controlled remotely by an Android device and can secretly capture and transmit video of any incident to the user. In situations such as terrorist attacks or hostage-taking, or anywhere human intervention becomes risky, this bot can be operated independently to learn about the inside scenario. This small, silent, camouflage-textured bot can be used by law enforcement agencies for better planning in emergencies.
I explore designing and creating sensing modules for older adults to guide them in avoiding prolonged exposure to harmful pollutants, such as PM2.5 and noise, which recent epidemiological studies have shown to be positively associated with mild cognitive impairment. Mild cognitive impairment is a stage of cognitive decline that falls between the cognitive changes of normal aging and early-onset dementia. My work explores designing and deploying a sensing wearable, accompanied by a smartphone application, for older adults to increase their awareness of how much they have been exposed to these pollutants and to suggest possible actions to reduce exposure.
In this study, an automated healthcare assistant, "Nurse Bot", has been developed for the elderly population. Over the past few decades, life expectancy has increased and the global percentage of elderly people is rising. But the growth of this ageing population brings various health-related problems. Therefore, the elderly need extra care and some assistance to make their lives comfortable. In particular, elderly people tend to forget to take their medicines on time. In this research, we have developed an automated mini robot with a scheduler and a monitoring system, which delivers medicines to the elderly person at the scheduled time. We have constructed a cost-effective system so that the majority of people can afford it. To evaluate the efficacy and efficiency of this system, we visited a retirement home. During our demonstration, we received enthusiastic and positive responses from the authority and the residents of the retirement home.
Human security and privacy are two different aspects, and making them work together is quite complicated. With the development of cellular devices, location-tracking capabilities have improved remarkably. People are able to share their exact location anywhere and anytime, but problems related to user privacy have arisen. In order to strike a balance between security and privacy, I am currently doing my research in a project named "I'm Here".
People with visual impairments (PVI) experience difficulties with daily tasks that require visual cues, making PVI dependent on sighted people. Emergent assistive technology has proven to enable PVI to perform complex tasks, such as item identification, by using phone applications or stand-alone assistive devices. Existing technologies currently fail to provide seamless interactions in which users can easily access information at any location and time without performing multiple steps. In this research, we focus on augmenting PVI through wearable technology to provide seamless support in a mobile environment, such as the supermarket.
Research in previous decades has explored foot interaction via foot interfaces, namely pressure-sensitive insoles, in applications such as gait analysis, rehabilitation, explicit gestures, and implicit interactions. However, there is still a vast amount of unexplored information about the feet. Focusing research on pressure-sensitive insoles is beneficial, as this type of foot interface is less obtrusive than wearing bulky cameras or multiple IMUs attached to the leg and foot. An insole can simply be worn inside footwear and remain unnoticed by others. The aim of this PhD research is to contribute new concepts that complement previous research in the domain of mobile computing. To accomplish this, three application scenarios related to foot interfaces that have not yet been broadly explored will be investigated: evaluating exercise performance, inferring the user's mental state, and distinguishing physical activities and detecting anomalies.
The goal of my research is to study how individuals perform self-experiments and to build behavior-powered systems that help them run such experiments. I have developed SleepCoacher, a sleep-tracking system that provides and evaluates the effect of actionable personalized recommendations for improving sleep. Going further, my aim is to go beyond sleep and develop the first guided self-experimentation system, which educates users about health interventions and helps them plan and carry out their own experiments. My thesis aims to use self-experimentation to help people take better care of their well-being by uncovering the hidden causal relationships in their lives.
Emotions are the driving force for posting content on social media platforms. Although methods for modeling and predicting emotion from social media have been proposed, they often require post-hoc investigation such as sentiment analysis. This limits users' awareness of negative emotional states while publishing content on social media. In this project, we propose an emotion detection technique that extracts the physiological responses of social media users from commodity smartphone sensors. We aim to provide design opportunities for social media platforms to raise users' emotional self-awareness while they use social media on the smartphone.
The explosion of IoT devices means that our daily activities will continue to be influenced by such technologies. This paves the way for a deluge of notifications that interrupt us frequently. Such interruptions can result in frequent task switching, where users might put off their activities to respond to notifications. We devise a strategy to mitigate these notifications by relying on the user's typing patterns. Furthermore, with the help of a reinforcement learning algorithm, the feedback from these typing patterns is used by the system to manage the notifications.
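One simple way to realize the reinforcement learning idea above is a contextual epsilon-greedy bandit that decides, given a coarse typing-activity context, whether to deliver or defer a notification. The states, actions, and reward signal below are illustrative assumptions, not the abstract's actual formulation.

```python
# Contextual epsilon-greedy bandit for notification delivery vs. deferral.
import random

class NotificationAgent:
    ACTIONS = ("deliver", "defer")

    def __init__(self, epsilon=0.1, seed=0):
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.q = {}   # (context, action) -> running-average reward
        self.n = {}   # (context, action) -> visit count

    def act(self, context):
        """Explore with probability epsilon, otherwise pick the best action."""
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.ACTIONS)
        return max(self.ACTIONS, key=lambda a: self.q.get((context, a), 0.0))

    def update(self, context, action, reward):
        """Incremental-mean update of the action value from observed reward."""
        key = (context, action)
        self.n[key] = self.n.get(key, 0) + 1
        old = self.q.get(key, 0.0)
        self.q[key] = old + (reward - old) / self.n[key]
```

Here the reward could encode whether the user engaged with a notification without abandoning the task they were typing in; over time the agent learns to defer notifications in busy-typing contexts.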
This is my application document for the UbiComp 2018 Broadening Participation Workshop. In this application, I first introduce myself, including the Ph.D. program I am currently enrolled in. Then, I go through the topics I am working on, such as RNN (Recurrent Neural Network)-based malicious-account detection designs on LBSNs (Location-Based Social Networks), and measurement work on the cross-site linking function that links user accounts across multiple OSNs. Some of my work has been published. Finally, I summarize the future work on these topics. I classify the related work on malicious users into four categories according to the motivations of the malicious behaviors, and I plan to deal with more types of malicious behavior by exploiting the cross-site linking function.
Inappropriate alcohol drinking may cause health and social problems. Although controlling alcohol intake is an effective remedy, tracking consumption manually is laborious. A system that automatically records the amount of alcohol consumed has the potential to improve drinking behavior. Existing devices and systems support drinking-activity detection and liquid-intake estimation, but our target scenario additionally requires determining the alcohol concentration of a beverage.
My colleagues and I developed Al-light, a smart ice cube that detects the alcohol concentration of a beverage using an optical method (Fig. 1). Al-light measures 31.9 × 38.6 × 52.6 mm, and users can simply drop it into a beverage for estimation. It embeds near-infrared (1450 nm) and visible LEDs and measures the magnitude of light absorption. Our device design integrates prior patented technology that exploits the different light absorption properties of water and ethanol to determine alcohol concentration. Through our studies, we found that light at a wavelength of 1450 nm has strong distinguishing power even across different types of commercially available beverages. Our quantitative examinations revealed that Al-light achieves an alcohol concentration estimation accuracy of approximately 2% v/v on 13 commercially available beverages. Although our current approach needs a regressor trained for a particular ambient light condition, or the sensor to be calibrated using measurements with water, it does not require beverage-dependent models, unlike prior work. Al-light enables different applications around alcoholic beverage drinking in addition to simple tracking, as shown in Fig. 2.
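The estimation step described above can be sketched as a calibrated linear regression from measured 1450 nm absorption to concentration in % v/v, with a water-only reading as the baseline. The linear form and all numbers here are illustrative assumptions; the actual regressor and calibration procedure are those described in the paper.

```python
# Least-squares fit from baseline-corrected absorbance to % v/v concentration.
import numpy as np

def fit(absorbance, concentration):
    """Ordinary least squares: concentration ~ slope * absorbance + bias."""
    A = np.column_stack([absorbance, np.ones_like(absorbance)])
    coef, *_ = np.linalg.lstsq(A, concentration, rcond=None)
    return coef                      # (slope, bias)

def estimate(coef, reading, water_baseline):
    """Predict % v/v from a raw reading, calibrated against water."""
    slope, bias = coef
    return slope * (reading - water_baseline) + bias
```

Calibrating against a water measurement is what lets the same model be reused across ambient light conditions without retraining a beverage-dependent regressor.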
Mental-health monitoring and biometric security analysis have a large impact in the smart world, and speech recognition and voice identification are key technologies in these fields. However, these are very challenging research areas because voice features can vary with gender, physical or mental condition, and environmental noise. In our paper, we identify emotional status based on cepstral and jitter coefficients. The cepstral coefficients play an important role since they carry the maximum information of the voice signal. Rather than using the entire voice signal, we use short, significant frames, which are enough to identify the emotional condition of the speaker. Our hybrid framework is realistic and inexpensive because it computes on a part of the voice signal while considering jitter. We support our method with improved accuracy and true acceptance rate.
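The two feature families named above can be illustrated with a short NumPy sketch: the real cepstrum of a frame, and jitter as the cycle-to-cycle variation of the pitch period. The frame handling and the particular (local) jitter definition are common textbook choices, not necessarily those of the paper.

```python
# Real cepstrum of a short frame, and local jitter from pitch periods.
import numpy as np

def real_cepstrum(frame):
    """Real cepstrum: inverse FFT of the log magnitude spectrum."""
    spectrum = np.abs(np.fft.fft(frame))
    return np.fft.ifft(np.log(spectrum + 1e-12)).real   # small eps avoids log(0)

def jitter(periods):
    """Mean absolute difference of consecutive pitch periods,
    relative to the mean period (local jitter, as a fraction)."""
    periods = np.asarray(periods, dtype=float)
    return np.mean(np.abs(np.diff(periods))) / periods.mean()
```

A steady voice yields jitter near zero, while emotional arousal tends to perturb the pitch periods, which is what makes jitter informative alongside the cepstral coefficients.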
Life transitions are part of human experience. For women, transitions are an integral part of their lives, mainly due to the inherent biological changes unique to them. Life transitions such as menarche, pregnancy, miscarriage, childbirth, and menopause have profound implications for women's physical and mental wellness. For a woman, transition to motherhood is a joyous as well as stressful period in her life. Lack of care and support during this time can result in long-term adverse consequences for both mother and baby. Previous studies have identified a gap in expected and received support for new mothers, which can sometimes lead to postpartum depression (PPD). Compassion, which is empathy in action, may help reduce this gap. My research aims to develop a framework for understanding and designing compassionate interactions to foster maternal wellness during transition to motherhood.
Systems that are disconnected from human-centered thinking tend to create a gap. This work presents initiatives that emphasize connecting the two.
I, Sahiti Kunchay, am a student of the College of Information Sciences and Technology (IST) at The Pennsylvania State University, enrolled in the IST's PhD program's Fall 2018 cohort, with the expected date of graduation being May 2023. My undergraduate background in Computer Science and Sociology offered me the unique perspective of identifying and investigating pressing issues through the combined lens of research methodologies in computing as well as social sciences.
As a part of my research career, I have been involved in multiple projects, which yielded publications accepted at workshops co-located with conferences such as ACM CHI, ACM SenSys and ACM MobiSys. Along with being awarded the Student Travel Grant by SIGMOBILE, I have also been awarded the ACM-W Scholarship to present my work at the aforementioned venues.
This research project aims to develop a scalable manufacturing method for garment-integrated technologies that preserves user comfort and works within the constraints of typical apparel manufacturing processes while providing the electrical performance and durability required by the system. We have developed a method for attaching discrete surface-mount components and have rigorously tested it. To demonstrate the scalability of the method, test prototype garments are under development; these will be used to evaluate the durability of the manufacturing process at garment scale, as well as the impact of integrating electronic technology on labor, equipment, and cost.
In recent times, stroke has become a deadly, life-threatening condition, and its incidence is increasing at an alarming rate globally. A stroke occurs when blood flow to the brain is interrupted. There is now a strong demand for computational expertise in stroke detection. The proposed stroke-prediction system focuses on potential and crucial risk factors of stroke in designing the model. The dataset was collected from Dhaka Medical College in Bangladesh, and unnecessary risk factors were pruned using data-mining techniques. The input data is classified using a fuzzy inference system and a fuzzy C-means classifier. We then generate fuzzy if-then rules with the fuzzy inference system to build a better prediction model. The developed predictive model gained physicians' satisfaction as it provides higher accuracy. The developed model will not only aid those in need but will also help medical experts.
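The C-means fuzzy classifier mentioned above refers to fuzzy C-means clustering, which can be sketched in plain NumPy as below. The fuzzifier m=2 and the fixed iteration count are standard defaults; the paper's rule generation on top of the memberships is not reproduced here.

```python
# Fuzzy C-means: each point gets a membership in every cluster, summing to 1.
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """X: (n, d) data. Returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)            # random fuzzy partition
    for _ in range(iters):
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]       # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))                     # inverse-distance weights
        u /= u.sum(axis=1, keepdims=True)                  # renormalise memberships
    return centers, u
```

The soft memberships, rather than hard cluster labels, are what the fuzzy if-then rules can then be built on.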
"RICKSHAW BUDDY" is a low-cost automated assistance system for three-wheeler auto rickshaws to reduce the high rate of accidents in the streets of developing countries like Bangladesh. It is a given fact that the lack of over speed alert, back camera, detection of rear obstacle and delay of maintenance are causes behind fatal accidents. These systems are absent not only in auto rickshaws but also most public transports. For this system, surveys have been done in different phases among the passengers, drivers and even the conductors for a useful and successful result. Since the system is very cheap, the low-income drivers and owners of vehicles will be able to afford it easily making road safety the first and foremost priority.
Human beings are social by nature and like to form groups with like-minded people. Therefore, to understand an individual's behaviour, we need to look not only at information about the individual but also at the individual's interactions in physical-world groups. Likewise, workers form various formal and informal groups in an organization for mutual interaction. To monitor the context as well as the behaviours of workers, identifying such groups and understanding their dynamics are essential. Moreover, this information can be used to analyse organizational efficiency. In an educational environment, knowledge of how student groups form helps the instructor analyse student performance; conversely, proper selection of groups leads to the improvement of the students. Understanding group dynamics in sports is also important: proper interpretation of group dynamics reveals a team's strategy, which is useful to the opposing team in future sporting events.
We propose a novel method for knitting advanced smart garments (e.g., garments with targeted electrical or mechanical properties) using a single, spatially-varying, multi-material monofilament created using additive manufacturing (AM) techniques. By strategically varying the constitutive functional materials that comprise the monofilament along its length, it is theoretically possible to create targeted functional regions within the knitted structure. If spaced properly, functional regions naturally emerge in the knit as loops in adjacent rows align. To test the feasibility of this method, we evaluated the ability of a commercially available knitting machine (a Passap® E6000) to knit a variety of experimental and commercially available, spatially-variant monofilaments. Candidate materials were tested both to characterize their mechanical behavior and to determine whether they could be successfully knitted. A repeatable spatial mapping relationship between 1D filament location and 2D knit location was established, enabling the creation of a variety of 2D functional pathways (straight, linear, nonlinear) in the knit structure using a single monofilament input. Using this approach, a multi-material monofilament can be designed and manufactured to create advanced functional knits with spatially-variant properties.
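The 1D-to-2D mapping described above can be idealized as follows, assuming a constant loop length per stitch, a fixed number of stitches per row, and a serpentine (back-and-forth) carriage path. These assumptions, and the function and parameter names, are ours for illustration; the paper's actual mapping was established empirically on the knitting machine.

```python
def filament_to_knit(s, loop_len, stitches_per_row):
    """Map a 1-D position s along the monofilament (same units as loop_len)
    to a (row, column) stitch coordinate in the knit, under an idealized
    constant-loop-length, serpentine knitting model."""
    stitch = int(s // loop_len)        # index of the stitch containing s
    row = stitch // stitches_per_row
    col = stitch % stitches_per_row
    if row % 2 == 1:                   # odd rows are knit in reverse
        col = stitches_per_row - 1 - col
    return row, col

# A hypothetical functional segment spanning 25-35 length-units of filament
# (loop length 1.0, 10 stitches per row) lands in these knit cells; note how
# the segment folds back so loops in adjacent rows align vertically.
cells = [filament_to_knit(s + 0.5, 1.0, 10) for s in range(25, 35)]
```

Inverting this mapping is what lets a designer place a functional material segment at a chosen 1D filament location so that it emerges at a target 2D region of the knit.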
In this document I introduce my PhD research, which investigates responsive and emotive wearable technology. I outline my objectives for attending the Broadening Participation Workshop and explain how attending would be valuable for the next steps of my career.