Identifying the finger used for touching and measuring the force of the touch provides valuable information on manual interactions. This information can be inferred from electromyography (EMG) of the forearm, measuring the activation of the muscles controlling the hand and fingers. We present Touch-Sense, which classifies finger touches using a novel neural network architecture and estimates their force on a smartphone in real time, based on data recorded from the sensors of an inexpensive wireless EMG armband. Using data collected from 18 participants with force ground truth, we evaluate our system's performance and limitations. Our system could allow for new interaction paradigms with appliances and objects, which we showcase in four example applications.
We propose a method to improve ultrasound-based in-air gesture recognition by altering the acoustic characteristics of a microphone. The Doppler effect is often utilized to recognize ultrasound-based gestures. However, increasing the number of gestures is difficult because of the limited information obtained from the Doppler effect. In this study, we partially shield a microphone with a 3D-printed cover. The cover alters the sensitivity of the microphone and the characteristics of the obtained Doppler effect. Since the proposed method utilizes a 3D-printed cover with a single microphone and speaker embedded in a device, it does not require additional electronic devices to improve gesture recognition. We design four different microphone covers and evaluate the performance of the proposed method on six gestures with eight participants. The evaluation results confirm that recognition accuracy is increased by 15.3% by utilizing the proposed method.
We present SeeSaw, a synchronous gesture interface for commodity smartwatches that supports watch-hand-only input with no additional hardware. Our algorithm, which uses correlation to determine whether the user is rotating their wrist in synchrony with a tactile and visual prompt, minimizes false-trigger events while maintaining fast input during situational impairments. Results from a 12-person evaluation of the system, used to respond to notifications on the watch while walking and during simulated driving, show interaction speeds of 4.0 s - 5.5 s, which is comparable to the swipe-based interface control condition. SeeSaw is also evaluated as an input interface for watches used in conjunction with a head-worn display. A six-subject study showed a 95% success rate in dismissing notifications and a 3.57 s mean dismissal time.
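The synchrony test at the heart of such an approach can be sketched as a windowed correlation check. The following is a minimal, hypothetical sketch: the function names, the 0.8 threshold, and the example signals are illustrative assumptions, not details taken from the SeeSaw implementation.

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    sd_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    sd_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    if sd_a == 0 or sd_b == 0:
        return 0.0
    return cov / (sd_a * sd_b)

def is_synchronous(gyro_window, prompt_window, threshold=0.8):
    """Trigger only when wrist rotation closely tracks the prompt."""
    return pearson(gyro_window, prompt_window) >= threshold

prompt = [math.sin(2 * math.pi * t / 20) for t in range(40)]       # prompt waveform
gyro_in_sync = [0.9 * p + 0.05 for p in prompt]                    # follows the prompt
gyro_out_of_sync = [float((-1) ** t) for t in range(40)]           # unrelated jitter
print(is_synchronous(gyro_in_sync, prompt))      # True
print(is_synchronous(gyro_out_of_sync, prompt))  # False
```

Because correlation is invariant to amplitude scaling, the user need not match the prompt's magnitude, only its timing, which is what makes the gesture robust against incidental wrist motion.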
The photo reflective sensor (PRS), a tiny distance-measurement module, is a popular electronic component widely used in wearable user interfaces. An unavoidable issue with such wearable PRS devices in practical use is the need for per-user training to achieve high gesture recognition accuracy. Each new user has to re-train the device by providing new training data (we call this the inter-user setup). Even worse, re-training is ideally also necessary every time the same user re-wears the device (we call this the intra-user setup). In this paper, we propose a domain adaptation framework to reduce this training cost for users. Specifically, we adapt a pre-trained convolutional neural network (CNN) for both inter-user and intra-user setups to keep recognition accuracy high. We demonstrate, with an actual PRS device, that our framework significantly improves the average classification accuracy of the intra-user and inter-user setups to 87.43% and 80.06%, compared with the baseline (non-adapted) setups at 68.96% and 63.26% respectively.
In our daily lives, we rely heavily on our visual and auditory channels to receive information from others. In the case of impairment, or when large amounts of information are already transmitted visually or aurally, alternative methods of communication are needed. A haptic language offers the potential to provide information to a user when visual and auditory channels are unavailable. Previous haptic languages have deconstructed acoustic signals into features displayed through a haptic device, or adapted Braille or Morse code; however, these approaches are unintuitive, slow at presenting language, or require a large surface area. We propose using a multi-sensory haptic device called MISSIVE, which can be worn on the upper arm and is capable of producing brief cues, sufficient in quantity to encode the full English phoneme set. We evaluated our approach by teaching subjects a subset of 23 phonemes, and demonstrated 86% accuracy in a 50-word identification task after 100 minutes of training.
We present Movelet, a self-actuated bracelet that can move along the user's forearm to convey feedback via its movement and positioning. In contrast to other eyes-free modalities such as vibro-tactile feedback, which only works momentarily, Movelet is able to provide sustained feedback via its spatial position on the forearm, in addition to momentary feedback through movement. This allows the device to continuously inform the user about the changing state of information using their haptic perception. In a user study using the Movelet prototype, we found that users can blindly estimate the device's position on the forearm with an average deviation of 1.20 cm from the actual position and estimate the length of a movement with an average deviation of 1.44 cm. This shows the applicability of position-based feedback using haptic perception.
This paper investigates the effects of using passive haptic learning to train the skill of comprehending text from vibrotactile patterns. The method of transmitting messages, skin-reading, is effective at conveying rich information but its active training method requires full user attention, is demanding, time-consuming, and tedious. Passive haptic learning offers the possibility to learn in the background while performing another primary task. We present a study investigating the use of passive haptic learning to train for skin-reading.
We present a study comparing the effect of real-time wearable feedback with traditional training methods for cardiopulmonary resuscitation (CPR). The aim is to ensure that students can deliver CPR with the right compression speed and depth. On the wearable side, we test two systems: one based on a combination of visual feedback and tactile information on a smartwatch, and one based on visual feedback and audio information on a Google Glass. In a trial with 50 subjects (23 trainee nurses and 27 novices), we compare these modalities to the standard human teaching used in nurse training. While a single traditional teaching session tends to improve only the percentage of correct depth, it has less effect on the percentage of effective CPR (depth and speed correct at the same time). By contrast, in a training session with the wearable feedback device, the average percentage of time when CPR is effective improves by up to almost 25%.
This paper presents a structured survey of 119 publications from the proceedings of the International Symposium on Wearable Computers (ISWC) from 2013 to 2017. This survey of research methods and purposes is based upon a classification schema widely used in HCI research. An extra dimension, Research Domains, was added to the classification of research methods in order to provide a more insightful overview of the field. An analysis of the research methods, purposes, and domains of ISWC is presented. Additionally, the citation impact of ISWC publications is evaluated and compared across these dimensions. Current tendencies of the research presented at ISWC are identified, with a focus on the contextual setting of the research. Opportunities for future research at ISWC are also identified.
Deep neural networks, including recurrent networks, have been successfully applied to human activity recognition. Unfortunately, the final representation learned by recurrent networks might encode some noise (irrelevant signal components, unimportant sensor modalities, etc.). Besides, it is difficult to interpret recurrent networks to gain insight into the models' behavior. To address these issues, we propose two attention models for human activity recognition: temporal attention and sensor attention. These two mechanisms adaptively focus on important signals and sensor modalities. To further improve the understandability and mean F1 score, we add continuity constraints, considering that continuous sensor signals are more robust than discrete ones. We evaluate the approaches on three datasets and obtain state-of-the-art results. Furthermore, qualitative analysis shows that the attention learned by the models agrees well with human intuition.
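As a rough illustration of the temporal-attention mechanism described above, the pure-Python sketch below pools per-timestep recurrent outputs with softmax-normalized scores. The context vector and feature values are made-up placeholders; in the actual models these would be learned end-to-end.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def temporal_attention(hidden, context):
    """hidden: T x D recurrent outputs; context: learned D-dim vector.
    Returns attention weights over timesteps and the weighted summary."""
    scores = [sum(h_i * c_i for h_i, c_i in zip(h, context)) for h in hidden]
    weights = softmax(scores)
    dim = len(hidden[0])
    summary = [sum(w * h[d] for w, h in zip(weights, hidden))
               for d in range(dim)]
    return weights, summary

hidden = [[0.1, 0.0], [0.9, 0.2], [0.2, 0.1]]   # 3 timesteps, 2 features
context = [1.0, 0.5]                             # placeholder "learned" vector
weights, summary = temporal_attention(hidden, context)
print(weights)  # the salient middle timestep receives the largest weight
```

Sensor attention works analogously, except the softmax is taken over sensor modalities rather than timesteps.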
With the fast evolution of processing units and sensors, wearable devices are becoming more popular among people of all ages. Recently, there has been renewed interest in exploiting the capabilities of wearable sensors for recognizing a person while they undertake their normal daily activities. In this paper, we focus on utilizing motion information of known daily activities gathered from wearable sensors to recognize the person. The analysis of the results demonstrates that different fundamental classification factors have an impact on person recognition success rates. Furthermore, the comparison among subjects shows that some subjects achieve high classification results and are easily identifiable, while others have high confusability rates. Lastly, subject classification success rates improved significantly for activities with little or no movement, which distinguish among persons more successfully, and hence produce higher classification results, than activities with large movement.
Sliding window based activity recognition chains represent the state-of-the-art for many mobile and embedded scenarios as they are common in wearable computing. The length of the analysis frames is a crucial system parameter that directly influences the effectiveness of the overall approach. In this paper we present a method that optimizes the window length - individually for each target activity. Instead of employing a single, multi-class recognition system that is based on a generic window length, we combine individually optimized activity detectors into an Ensemble based recognition approach. We demonstrate the effectiveness of the approach through an experimental evaluation on eight benchmark datasets. The proposed method leads to significant improvements across a range of activity recognition application domains.
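The two stages described above can be sketched as a per-activity grid search over window lengths followed by a confidence-voting ensemble of binary detectors. The validation scores and detector functions below are stand-in placeholders, not the paper's actual evaluation procedure.

```python
def best_window_length(activity, candidates, score_fn):
    """Grid-search the analysis-frame length for one target activity."""
    return max(candidates, key=lambda w: score_fn(activity, w))

def ensemble_predict(detectors, sample):
    """Each per-activity detector votes with a confidence score; the
    highest-scoring activity wins."""
    best_act, best_conf = None, float("-inf")
    for activity, detect in detectors.items():
        conf = detect(sample)
        if conf > best_conf:
            best_act, best_conf = activity, conf
    return best_act

# Stand-in validation scores: pretend 'walk' is detected best with a
# 2 s window and 'run' with a 1 s window.
val_scores = {("walk", 1.0): 0.7, ("walk", 2.0): 0.9,
              ("run", 1.0): 0.85, ("run", 2.0): 0.6}
score = lambda a, w: val_scores[(a, w)]
lengths = {a: best_window_length(a, [1.0, 2.0], score)
           for a in ("walk", "run")}
print(lengths)  # {'walk': 2.0, 'run': 1.0}
```

Each detector then runs with its own optimized window length at recognition time, in contrast to a single multi-class classifier tied to one generic length.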
Feature extraction is a critical step in sliding-window based standard activity recognition chains. Recently, distribution based features have been introduced that showed excellent generalization capabilities across a wide range of application domains in human activity recognition scenarios based on body-worn sensors. These features capture the data distribution of individual analysis frames, yet they ignore temporal structure inherent to the signal of a frame. We explore four variants of adding temporal structure to distribution based features and demonstrate their potential for statistically significant improvements of activity recognition in general. The addition of temporal structure comes with a moderate increase in computational complexity rendering the proposed methods applicable to mobile and embedded scenarios.
Effective training without risk of injury, based on quantitative data, is important for all athletes. In bicycle racing, pedaling skill analysis with electromyography (EMG) is gaining attention as a technique to use in addition to power training and heart rate training, which already have effective measuring devices. However, EMG is not easy for non-specialists to use because of the difficulties in analysis and interpretation. We therefore propose a multilateral pedaling skill evaluation technique that enables non-specialists to analyze pedaling quantitatively and practically with surface-EMG wear. This technique can extract strong and weak pedaling skill items that are difficult to detect through visual observation. It also removes the need to acquire additional data solely for preprocessing and is well adapted to measurement in an actual environment over a long period. We verify the technique's expressiveness and feasibility through experiments involving 14 participants. The results indicate its potential for pedaling skill training.
Muscle fatigue detection and tracking has gained significant attention as sports science and rehabilitation technologies have developed. It is known that muscle fatigue can be evaluated through surface electromyography (sEMG) sensors, which are portable, non-invasive, and applicable to real-time systems. There are plenty of fatigue tracking algorithms, many of which use the frequency, time, and time-frequency behavior of sEMG signals. The most commonly used sEMG-based fatigue detection methods include mean frequency (MNF), median frequency (MDF), zero-crossing rate (ZCR), and the continuous wavelet transform (CWT). However, all of these muscle fatigue calculation methods are adversely affected by the dynamically changing sEMG contraction amplitude, since the EMG spectrum also shifts with the changing signal RMS: powerful contractions shift it toward higher frequency bounds and weak contractions do the opposite. To overcome this, we propose an adaptive algorithm which learns the effect of contraction power on the sEMG power spectral density (PSD) and subtracts that amount of frequency shift from the PSD.
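For reference, the median-frequency (MDF) baseline mentioned above can be sketched as the frequency below which half of the total spectral power lies; fatigue manifests as a downward drift of this value. The sketch below shows only the plain MDF computation on toy PSDs, not the paper's adaptive amplitude-compensation step, and the PSD itself would normally come from a windowed FFT of the sEMG signal.

```python
def median_frequency(freqs, psd):
    """Frequency below which half of the total spectral power lies."""
    total = sum(psd)
    cum = 0.0
    for f, p in zip(freqs, psd):
        cum += p
        if cum >= total / 2:
            return f
    return freqs[-1]

freqs = [10, 20, 30, 40, 50]     # Hz, toy frequency bins
fresh = [1, 2, 4, 2, 1]          # power concentrated around 30 Hz
fatigued = [4, 3, 2, 1, 0]       # spectrum shifted toward low frequencies
print(median_frequency(freqs, fresh))     # 30
print(median_frequency(freqs, fatigued))  # 20
```

The adaptive method proposed in the abstract would additionally estimate how much of any observed shift is explained by contraction amplitude (RMS) and subtract that component before attributing the remainder to fatigue.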
In modern showjumping and cross-country riding, the success of the horse-rider pair is measured by the ability to finish a given course of obstacles without penalties within a given time. A horse performs a successful (penalty-free) jump if no element of the fence falls during the jump. The success of each jump is determined by the correct take-off point of the horse in front of the fence and the number of strides the horse takes between fences. This paper proposes a solution for tracking gaits and jumps using a smartphone attached to the horse's saddle. We propose an event detection algorithm based on the Discrete Wavelet Transform and peak detection to detect jumps and canter strides between fences. We segment the signal to find gait and jump sections, evaluate statistical and heuristic features, and classify the segments using different machine learning algorithms. We show that horse jumps and canter strides are detected with a precision of 94.6% and a recall of 89.8%. All gaits and jumps are further classified with an accuracy of up to 95.4% and a Kappa coefficient (KC) of up to 93%.
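The DWT-plus-peak-detection step can be illustrated with a one-level Haar decomposition, whose detail coefficients highlight abrupt transients such as a jump impact. This is a simplified stand-in: the actual system's wavelet choice, decomposition depth, and peak criteria are not specified in the abstract, and the threshold below is a made-up value.

```python
import math

def haar_detail(signal):
    """One-level Haar DWT: detail coefficients highlight transients."""
    return [(signal[2 * i] - signal[2 * i + 1]) / math.sqrt(2)
            for i in range(len(signal) // 2)]

def detect_peaks(coeffs, threshold):
    """Indices where |detail| exceeds the threshold and is a local maximum."""
    peaks = []
    for i in range(1, len(coeffs) - 1):
        c = abs(coeffs[i])
        if c > threshold and c >= abs(coeffs[i - 1]) and c >= abs(coeffs[i + 1]):
            peaks.append(i)
    return peaks

accel = [0, 0, 0, 0, 1, 9, 0, 0, 0, 0]   # a jump-like spike in saddle acceleration
detail = haar_detail(accel)
print(detect_peaks(detail, threshold=2.0))  # [2]
```

In practice a library such as PyWavelets would supply multi-level decompositions, but the principle, thresholding wavelet detail coefficients to localize events, is the same.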
Human Activity Recognition (HAR) with body-worn sensors has been studied intensively in the past decade. Existing approaches typically rely on data from inertial sensors. This paper explores the potential of using point cloud data gathered from wearable depth cameras for on-body activity recognition. We discuss effects of different granularity in the depth information and compare their performance to inertial sensor based HAR. We evaluated our approach with a total of sixteen participants performing nine distinct activity classes in three home environments. 10-fold cross-validation results of KNN and Random Forests classification exhibit a significant increase in F-score from inertial data to depth information (by > 12 percentage points) and show a further improvement when combining low-resolution depth matrices and sensor data. We discuss the performance of the different sensor types for different contexts and show that overall, depth sensors prove to be suitable for HAR.
Deep Learning methods have become very attractive in the wider, wearables-based human activity recognition (HAR) research community. The majority of models are based on either convolutional or explicitly temporal models, or combinations of both. In this paper we introduce attention models into HAR research as a data driven approach for exploring relevant temporal context. Attention models learn a set of weights over input data, which we leverage to weight the temporal context being considered to model each sensor reading. We construct attention models for HAR by adding attention layers to a state-of-the-art deep learning HAR model (DeepConvLSTM) and evaluate our approach on benchmark datasets achieving significant increase in performance. Finally, we visualize the learned weights to better understand what constitutes relevant temporal context.
E-textiles that enable distribution of electronic components have advantages for wearable technology, in that functionality, power, and networking can be spread over a much larger area while preserving hand-feel and wearability. However, textile-embedded circuitry often must be machine-washable to conform to user expectations for care and maintenance, particularly for garments. In this study, we evaluate the robustness to home laundering of a previously-developed cut-and-sew technique for assembling e-textile circuits. Alternative surface insulation materials, textile substrate properties, and soldered component joints are evaluated. After around 1000 minutes (16.67 hours) of rigorous washing and drying, we measured a best-case 0% failure rate for component solder joints, and a best-case 0.38 ohm/m maximum increase in trace resistance. Liquid silicone seam sealer was effective in protecting 100% of solder joints. Two tape-type alternative surface insulation materials were effective in protecting bare traces and component attachment points respectively. Overall, results demonstrate the feasibility of producing insulated, washable cut-and-sew circuits for smart garment manufacturing.
This paper presents a new approach to implement wearable haptic devices using Shape Memory Alloy (SMA) wires. The proposed concept allows building silent, soft, flexible and lightweight wearable devices, capable of producing the sense of pressure on the skin without any bulky mechanical actuators. We explore possible design considerations and applications for such devices, present user studies proving the feasibility of delivering meaningful information and use nonlinear autoregressive neural networks to compensate for SMA inherent drawbacks, such as delayed onset, enabling us to characterize and predict the physical behavior of the device.
In this paper, we propose a novel method for knitting advanced smart garments (e.g., garments with targeted electrical or mechanical properties) using a single, spatially-varying, multi-material monofilament created using additive manufacturing (AM) techniques. By strategically varying the constitutive functional materials that comprise the monofilament along its length, it is theoretically possible to create targeted functional regions within the knitted structure. If spaced properly, functional regions naturally emerge in the knit as loops in adjacent rows align. To test the feasibility of this method, we evaluated the ability of a commercially available knitting machine (a Passap® E6000) to knit a variety of experimental and commercially available, spatially-variant monofilaments. Candidate materials were tested both to characterize their mechanical behavior as well as to determine their ability to be successfully knitted. A repeatable spatial mapping relationship between 1D filament location and 2D knit location was established, enabling the ability to create a variety of 2D functional pathways (straight, linear, nonlinear) in the knit structure using a single monofilament input. Using this approach, a multi-material monofilament can be designed and manufactured to create advanced functional knits with spatially-variant properties.
The sensation of touch is integral to everyday life. Current haptics research focuses mainly on vibrations, tap, and point pressures, but the sensation of distributed pressures such as compression are often overlooked. We investigated the subjective comfort and emotional effects of applied on-body compression, specifically on the torso and upper arms, through a pilot user study incorporating a novel, low-profile, and actively-controllable compression garment. The active compression garment was embedded with contractile shape memory alloys (SMAs) to create dynamic compression on the body. Qualitative interview data collected (n=8) were used to generate a list of findings to inform the future creation of a computer-mediated compression garment that is wearable, comfortable, and safe for use.
Washing hands is one of the easiest yet most effective ways to prevent spreading illnesses and diseases. However, not adhering to thorough handwashing routines is a substantial problem worldwide. For example, in hospital operations lack of hygiene leads to healthcare associated infections. We present WristWash, a wrist-worn sensing platform that integrates an inertial measurement unit and a Hidden Markov Model-based analysis method that enables automated assessments of handwashing routines according to recommendations provided by the World Health Organization (WHO). We evaluated WristWash in a case study with 12 participants. WristWash is able to successfully recognize the 13 steps of the WHO handwashing procedure with an average accuracy of 92% with user-dependent models, and with 85% for user-independent modeling. We further explored the system's robustness by conducting another case study with six participants, this time in an unconstrained environment, to test variations in the handwashing routine and to show the potential for real-world deployments.
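The HMM-based analysis underlying such a system can be sketched with standard Viterbi decoding, which recovers the most likely sequence of hidden steps from observed sensor features. The toy model below uses only two hypothetical steps and invented probabilities (the real model covers all 13 WHO steps with learned parameters).

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for an observation sequence."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max((V[-2][p] * trans_p[p][s] * emit_p[s][o], p)
                             for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# Invented two-step toy model: rubbing palms vs. rinsing.
states = ("rub_palms", "rinse")
start_p = {"rub_palms": 0.8, "rinse": 0.2}
trans_p = {"rub_palms": {"rub_palms": 0.7, "rinse": 0.3},
           "rinse": {"rub_palms": 0.1, "rinse": 0.9}}
emit_p = {"rub_palms": {"circular": 0.9, "still": 0.1},
          "rinse": {"circular": 0.2, "still": 0.8}}
seq = viterbi(["circular", "circular", "still"], states,
              start_p, trans_p, emit_p)
print(seq)  # ['rub_palms', 'rub_palms', 'rinse']
```

The transition matrix encodes the expected ordering of the routine, which is what lets the decoder smooth over ambiguous individual observations.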
We present a spotting network composed of Gaussian Mixture Hidden Markov Models (GMM-HMMs) to detect sparse natural gestures in free living. The key technical features of our approach are (1) a method to mine non-gesture patterns that deals with arbitrary data (the Null class), and (2) an optimisation based on multipopulation genetic programming to approximate the spotting network's parameters across target and non-target models. We evaluate our GMM-HMM spotting network on a novel free-living dataset comprising a total of 35 days of annotated inertial sensor recordings from seven participants. Drinking was chosen as the target gesture. Our method reached an average F1-score of over 74% and clearly outperformed an HMM-based threshold model approach. The results suggest that our spotting network approach is viable for sparse natural pattern spotting.
We introduce a method of using wrist-worn accelerometers to measure non-verbal social coordination within a group that includes autistic children. Our goal was to record and chart the children's social engagement - measured using interpersonal movement synchrony - as they took part in a theatrical workshop that was specifically designed to enhance their social skills. Interpersonal synchrony, an important factor of social engagement that is known to be impaired in autism, is calculated using a cross-wavelet similarity comparison between participants' movement data. We evaluate the feasibility of the approach over 3 live performances, each lasting 2 hours, using 6 actors and a total of 10 autistic children. We show that by visualising each child's engagement over the course of a performance, it is possible to highlight subtle moments of social coordination that might otherwise be lost when reviewing video footage alone. This is important because it points the way to a new method for people who work with autistic children to be able to monitor the development of those in their care, and to adapt their therapeutic activities accordingly.
In this paper, we present I4S, a system that identifies item interactions of customers in a retail store through sensor data fusion from smartwatches, smartphones and distributed BLE beacons. To identify these interactions, I4S builds a gesture-triggered pipeline that (a) detects the occurrence of "item picks", and (b) performs fine-grained localization of such pickup gestures. By analyzing data collected from 31 shoppers visiting a midsized stationary store, we show that we can identify person-independent picking gestures with a precision of over 88%, and identify the rack from where the pick occurred with 91%+ precision (for popular racks).
Virtual and augmented reality headsets are unique in that they have access to our facial area: an area that presents an excellent opportunity for always-available input and insight into the user's state. Their position on the face makes it possible to capture bio-signals as well as facial expressions. This paper introduces PhysioHMD, a modular software and hardware interface built for collecting affect and physiological data from users wearing a head-mounted display. The PhysioHMD platform is a flexible architecture that enables researchers and developers to aggregate and interpret signals in real time, and to use them to develop novel, personalized interactions and evaluate virtual experiences. It offers an interface that is not only easy to extend but is also complemented by a suite of tools for testing and analysis. We hope that PhysioHMD can become a universal, publicly available testbed for VR and AR researchers.
Order picking accounts for 55% of the annual $60 billion spent on warehouse operations in the United States. Reducing human-induced errors in the order fulfillment process can save warehouses and distributors significant costs. We investigate a radio-frequency identification (RFID)-based verification method wherein wearable RFID scanners, worn on the wrists, scan passive RFID tags mounted on an item's bin as the item is picked; this method is used in conjunction with a head-up display (HUD) to guide the user to the correct item. We compare this RFID verification method to pick-to-light with button verification, pick-to-paper with barcode verification, and pick-to-paper with no verification. We find that pick-to-HUD with RFID verification enables significantly faster picking, provides the lowest error rate, and provides the lowest task workload.
We extend the interaction space of low-cost mobile virtual reality (VR) by introducing bidirectional scrolling and discrete selection using magnetic sensing. Our design uses the original Google Cardboard v1 input components, modifying only the cardboard mounted on the side. Users slide the magnetized washer around a circular track on the outer layer, which drags a magnet on the inner layer across asymmetric patterned ridges. The phone's magnetometer detects the position of the magnet as it moves around the track and slots into each ridge, emulating a click wheel. The phone's accelerometer is used to recognize center button taps. We compare our system against the current best practice (gaze) with 12 participants across four VR navigation and selection tasks. Finally, we demonstrate our system robustly handles continuous input, despite some minor deterioration of the cardboard, using a motorized rig over an 8-hour period.
Teleconferencing is touted to be one of the main and most powerful uses of virtual reality (VR). While subtle facial movements play a large role in human-to-human interactions, current work in the VR space has focused on identifying discrete emotions and expressions through coarse facial cues and gestures. By tracking and representing the fluid movements of facial elements as continuous range values, users are able to more fully express themselves. In this work, we present Buccal, a simple yet effective approach to inferring continuous lip and jaw motions by measuring deformations of the cheeks and temples with only 5 infrared proximity sensors embedded in a mobile VR headset. The signals from these sensors are mapped to facial movements through a regression model trained with ground truth labels recorded from a webcam. For a streamlined user experience, we train a user independent model that requires no setup process. Finally, we demonstrate the use of our technique to manipulate the lips and jaw of a 3D face model in real-time.
1D Eyewear uses 1D arrays of LEDs and pre-recorded holographic symbols to enable minimal head-worn displays. Our approach uses computer-generated holograms (CGHs) to create diffraction gratings which project a pre-recorded static image when illuminated with coherent light. Specifically, we develop a set of transmissive, reflective, and steerable optical configurations that can be embedded in conventional eyewear designs. This approach enables high resolution symbolic display in discreet digital eyewear.
We present CASPER, a charging solution to enable a future of wearable devices that are much more distributed on the body. Instead of having to charge every device we want to adorn our bodies with, be it distributed health sensors or digital jewelry, we can instead augment everyday objects such as beds, seats, and frequently worn clothing to provide convenient charging base stations that charge devices on our body serendipitously as we go about our day. Our system works by treating the human body as a conductor and capacitively charging devices worn on the body whenever a well-coupled electrical path is created during natural use of everyday objects. In this paper, we performed an extensive parameter characterization for through-body power transfer and, based on our empirical findings, we present a design trade-off visualization to aid designers looking to integrate our system. Furthermore, we demonstrate how we utilized this design process in the development of our own smart bandage device and an LED-adorned temporary tattoo that charges at hundreds of micro-watts using our system.
SkinMorph is an on-skin interface which can selectively transition between soft and rigid states to serve as a texture-tunable wearable skin output. This texture change is made possible through the material design of smart hydrophilic gels. These gels are soft in their resting state, yet when activated by heat (>36°C), they undergo a micro-level structural change which results in observable stiffening. These gels are encapsulated in thin silicone patterned with resistive wires through a sew-and-transfer fabrication approach. We demonstrate application examples using the texture-tunable skin overlay as wearable, interactive protection for scenarios including: a carpal tunnel splint for rehabilitation, a protective layer for joints when engaging in high impact activities, and foot pads when wearing uncomfortable shoes. Our evaluation shows that the gel is 10 times stiffer when activated, and that users find the device skin-conformable.
This paper proposes DeepAuth, an in-situ authentication framework that leverages the unique motion patterns exhibited when users enter passwords as behavioural biometrics. It uses a deep recurrent neural network to capture the subtle motion signatures during password input, and employs a novel loss function to learn deep feature representations that are robust to noise, unseen passwords, and malicious imposters even with limited training data. DeepAuth is by design optimised for resource-constrained platforms, and uses a novel split-RNN architecture to slim inference down to run in real time on off-the-shelf smartwatches. Extensive experiments with real-world data show that DeepAuth significantly outperforms the state of the art in both authentication performance and cost, offering real-time authentication on a variety of smartwatches.
Glanceability and low access time are arguably the key assets of a smartwatch. Smartwatches are designed for, and excel at, micro-interactions: simple tasks that only take seconds to complete. However, if a user desires to transition to a task requiring sustained usage, we show that there are additional factors that prevent longer usage of the smartwatch. In this paper, we conduct a study with 18 participants to empirically demonstrate that interacting with the smartwatch on the wrist leads to fatigue after only a few minutes. In our study, users performed three tasks in two different poses while using a smartwatch. We demonstrate that after only three minutes of use, the change in perceived exertion of the user was anchored as "somewhat strong" on the Borg CR10 survey scale. These results place an upper bound on smartwatch usage that needs to be considered in application and interaction design.
We present LYRA, a modular in-flight system that enhances service and assists flight attendants during their work. LYRA enables passengers to browse and order services from their smartphones. Smart glasses and a smart shoe-clip with an RFID reader module provide flight attendants with situated information. We gained first insights into how flight attendants and passengers used the system during a long-distance flight from Frankfurt to Houston.
The form factors of current wearable devices are designed for, and limited to, specifically defined on-body locations (such as the wrist), which can limit interaction capabilities owing to physical constraints on body movement and positioning. We investigate the design of a multi-functional wearable input device that can be worn at various locations on the body and can also be mounted onto objects in the environment. This allows users to adjust the device's location to the affordances of varying situations and use cases. We present SnapBand, such a multi-location touch input device that can be quickly snapped to different locations.
This paper introduces the idea of using wearable, multi-modal body and brain sensing, in a theatrical setting, for neuroscientific research. Wearable motion capture suits are used to track the body movements of two actors while they enact a sequence of scenes together. One actor additionally wears a functional near-infrared spectroscopy (fNIRS)-based headgear to record the activation patterns on his prefrontal cortex. Repetitions in the movement data are then used to automatically segment the fNIRS data for further analysis. This exploration reveals that the semi-structured and repeatable nature of theatre can provide a useful laboratory for neuroscience, and that wearable sensing is a promising method to achieve this. This is important because it points to a new way of researching the brain in a more natural, and social, environment than traditional lab-based methods.
Fitness-related wearables have become ubiquitous in the recent past. Nevertheless, the short battery life of these devices is still a pressing issue. Limited battery capacity in a small form factor and the power-hungry continuous monitoring of an accelerometer have been significant concerns in this regard. To address these issues, we propose a novel low-power step-counting solution based on an electromagnetic energy-harvesting mechanism. The extremely simple nature of the step counter removes the requirement for any step-detection algorithm, thereby reducing power consumption, while the energy harvester generates a portion of the energy requirement, prolonging battery life.
Context recognition using wearable sensors enables a variety of services. In this study, we focus on contexts acquired from sensor data in the nostrils. Nostrils can provide various contexts on breathing and nasal congestion, as well as higher-level contexts including psychological and health states. In this paper, we propose a context recognition method using information from the nostrils. We develop a system that acquires the temperature in the nostrils using small temperature sensors connected to glasses. Our evaluations show that the proposed system can detect workload with an accuracy of 96.4%.
Muscle fatigue monitoring is important for injury prevention in sports. Previous studies have shown that wearable electromyography (EMG) sensors on limb muscles can be used to monitor the level of muscle fatigue continuously. However, there are two main problems in using these sensors for sports: users wearing them create motion artifacts that affect the sensor data, and the sensors disturb limb motion. To address these problems, we propose a method for estimating biceps fatigue with an e-textile headband. There is a strong correlation between the muscle activity of frowning and jaw clenching and the physical effort made when exercising. Our method uses both the EMG signals of the temporal muscles, which indicate jaw clenching, and motion artifacts, which indicate frowning. Experimental results show that the root mean square values calculated from the headband sensor values significantly increased when the biceps were fatigued.
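The windowed root mean square feature mentioned above is a standard EMG amplitude measure. A minimal sketch, assuming non-overlapping windows and a synthetic signal (the window length and signal are illustrative, not taken from the headband study):

```python
import numpy as np

def rms_windows(signal, window_size):
    """Root mean square of a signal in non-overlapping windows,
    a common EMG amplitude/fatigue feature."""
    n = len(signal) // window_size
    trimmed = np.asarray(signal[: n * window_size], dtype=float)
    windows = trimmed.reshape(n, window_size)
    return np.sqrt((windows ** 2).mean(axis=1))

# Synthetic signal whose amplitude grows over time, mimicking
# rising muscle activity: the per-window RMS increases.
t = np.linspace(0, 1, 1000)
sig = (1 + t) * np.sin(2 * np.pi * 50 * t)
print(rms_windows(sig, 250))
```

Tracking how this per-window RMS trends over an exercise session is one simple way such fatigue-related increases can be quantified.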
We describe the development of a system for recognizing dolphin whistles on the CHAT (cetacean hearing and telemetry) wearable underwater computer system. An Nvidia Jetson TK1 single-board computer was installed in the existing CHAT systems to improve processing power and overall system power efficiency. The inclusion of a GPU allowed the system to recognize, in real time, dolphin whistles that vary in both pitch and time, using a 192 kHz Fast Fourier Transform for spectral analysis, linear convolution filters for pattern extraction, and dynamic time warping for pattern recognition.
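As a rough illustration of the pattern-recognition step, dynamic time warping compares whistle contours that may be stretched or compressed in time. A minimal sketch over 1-D pitch contours, with illustrative values not taken from the CHAT system:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences,
    e.g. whistle pitch contours extracted from FFT spectra."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

template = [1, 2, 3, 4, 3, 2, 1]              # whistle template
stretched = [1, 1, 2, 2, 3, 3, 4, 4, 3, 3, 2, 2, 1, 1]  # slower rendition
other = [4, 4, 1, 1, 4, 4, 1]                 # different contour
print(dtw_distance(template, stretched), dtw_distance(template, other))
```

Because DTW aligns sequences elastically in time, the time-stretched rendition scores a much lower distance to the template than the unrelated contour, which is what makes it suitable for whistles varied in both pitch and time.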
Sleep breathing disorders are a serious threat to a large share of the population. This paper presents a low-cost, tiny sensor system based on volatile organic compound (VOC) sensing for the detection of sleep apnea/hypopnea. We present two designs, discuss wearability aspects, and show that the sensor performs comparably to gold-standard polysomnography (PSG).
Haptic technology can be used as a tool for learning. Can even the haptic elements in a smartwatch teach a new skill? Here we present a case of using a smartwatch for passive tactile learning. We use the Sony Smartwatch 3 to teach users Morse code while they wear the watch but focus on unrelated tasks. An initial hypothesis predicted that the stimulation from the smartwatch, typically used for message alerts, would be too subtle to enable haptic learning; however, we find significant improvements in six participants using the technique. Furthermore, we expose participants to two different durations of stimulation and find differing results.
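A passive-tactile-learning system of this kind needs to turn each letter into a sequence of timed vibration pulses. A hypothetical sketch, assuming standard Morse encodings; the pulse durations are illustrative, not those used in the study:

```python
# Standard International Morse encodings for a few letters.
MORSE = {"a": ".-", "b": "-...", "s": "...", "o": "---"}

# Illustrative pulse timings (ms): short for dot, long for dash,
# a fixed silent gap between pulses. Not the study's actual values.
DOT_MS, DASH_MS, GAP_MS = 100, 300, 100

def vibration_schedule(word):
    """Return (duration_ms, vibrate) pairs driving a watch motor."""
    schedule = []
    for ch in word:
        for symbol in MORSE[ch]:
            schedule.append((DOT_MS if symbol == "." else DASH_MS, True))
            schedule.append((GAP_MS, False))  # silent gap between pulses
    return schedule

print(vibration_schedule("s"))
```

Varying `DOT_MS`/`DASH_MS` is one way the two stimulation durations compared in the study could be realized.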
This paper introduces Augmented Jump, a backpack multirotor system for augmenting jumping ability. Augmented Jump hovers and supports the user's weight with a constant upward thrust. With Augmented Jump, users can jump higher and stay in the air longer than usual. We designed and developed a first proof-of-concept prototype that can be controlled as an octocopter and can support up to 50 kg of the user's weight. In our experiments, we found that the system enabled the user to jump in simulated 75% reduced gravity. A user study showed that our system was effective in extending both the height and duration of jumps.
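The relation between thrust and simulated gravity can be sketched with simple statics: if a constant thrust offsets part of the user's weight, the effective gravity is g minus thrust over mass. The user mass below is a hypothetical figure chosen to match the reported numbers, not one stated in the abstract:

```python
G = 9.81  # gravitational acceleration, m/s^2

def effective_gravity(user_mass_kg, support_kgf):
    """Effective gravity when a constant thrust equivalent to
    `support_kgf` kilograms-force offsets part of the user's weight."""
    thrust_n = support_kgf * G
    return G - thrust_n / user_mass_kg

# With the prototype's maximum 50 kg of support, a hypothetical
# ~66.7 kg user would experience roughly 75% reduced gravity.
g_eff = effective_gravity(66.7, 50.0)
print(round(1 - g_eff / G, 3))  # fraction of gravity cancelled
```

This is a static sketch only; in practice, thrust control and aerodynamics during a jump are more involved.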
Wearable toolkits simplify the integration of micro-electronics into fabric. However, they require basic knowledge of electronics to interconnect parts, a technical aspect that might be perceived as a barrier. We propose YAWN, a bus-based, modular wearable toolkit that simplifies interconnection by relying on a pre-fabricated three-wire fabric band. This allows quick reconfiguration, ensures washability, and reduces the number of connection problems.
We design and 3D print conductive lines and EMG electrodes on eyeglasses temples. We evaluate the electrical properties and EMG signal quality of the printed components, reporting line resistance, electrode surface resistance, and EMG signal quality. We find that the signal quality is comparable to that of non-printed lines and electrodes. Our work shows that 3D printing conductive lines and electrodes on custom-shaped eyeglasses frames is feasible for chewing monitoring.
This work translates an inspiration derived from the Disney classic `Sleeping Beauty' into a smart child's costume with color-changing capabilities. The dress uses addressable multi-color NeoPixel LEDs to change its surface color based on input from two wand controllers, consistent with the animated sequence in which Princess Aurora's fairy godmothers argue over the best color for her gown. The garment's construction emphasizes diffusion of the emitted light to achieve a more uniform color distribution, and explores the benefits and drawbacks of two manufacturing methods for embedding LED pixels: a manual method based on hand-soldered wire connections consistent with typical electronic craft, and a more scalable surface-mount method based on stitched traces and reflow soldering.
The Embodisuit allows its wearer to map signals onto different places on their body. Informed by embodied cognition, the suit receives signals from an IoT platform, and each signal controls a different haptic actuator on the body. Knowledge is experienced ambiently, without requiring the conscious mind to interpret symbols. The suit empowers wearers to reconfigure the boundaries of their selves, strengthening their connection to the people, places, and things that are meaningful to them. It both critiques and offers an alternative to current trends in wearable technology. Most wearables harvest data from their users to be sent and processed elsewhere; the Embodisuit flips this paradigm so that data is instead taken in through the body. Furthermore, we believe that changing the way people live with data will change the type of data that people create.
The Empathy Amulet is a wearable interpretation of Philip K. Dick's empathy box from his novel Do Androids Dream of Electric Sheep?. In the novel, thousands of people are anonymously connected with each other, both haptically and emotionally, when they grab the handles of their empathy boxes. The Empathy Amulet similarly networks a group of strangers together through shared experiences of physical warmth. It is not yet another technology for staying in touch with people you already know (and falling short). Rather, it encourages its wearer to make a deliberate and generous choice to invest time and energy in connecting with strangers, and it incorporates reciprocity into its design, such that helping oneself means helping other people. In today's world, people are less likely to feel empathy towards those outside their immediate network of family and friends, and, despite a proliferation of connective technologies, loneliness is on the rise [2, 5]. Surprisingly, it is the perceived sense of loneliness, and not actually being physically alone, that has numerous health consequences for a significant portion of the population. Lakoff and Johnson's theory of the embodied mind asserts that our physical and subjective experiences are inextricably linked, and the Empathy Amulet leverages the powerful connection between the physical experience of warmth and the subjective experience of social connectedness to combat loneliness and cultivate a stronger sense of connection with strangers [1, 4].
We present the design and prototype of the Idle Stripes shirt, an aesthetic, clothing-integrated display that reflects the wearer's physical activity in an ambient manner. The design is targeted at office workers as smart business-wear, creating awareness of immobility periods during typical sitting-intensive office work. Long periods of such sitting are a known health risk. The Idle Stripes shirt promotes healthy working by encouraging the wearer to break up office desk work with walking breaks. The prototype is constructed of a fabric with integrated optical fibers, which are illuminated based on the sitting time detected by an app running on the wearer's mobile phone.
The University of Minnesota Mechanical Engineering Department, in collaboration with the Theater Arts & Dance Department, designed and built smart-material-actuated angel wings for the theater production of José Rivera's play Marisol. The design challenge coordinated aesthetic, structural, mechanical, and electrical requirements for a successful theatrical effect. The aesthetic design drew influence from Baroque Catholic art and 1990s grunge fashion for dramatic effect. The wing structure was inspired by the skeletal structure of swan wings, which resulted in the design of a Nickel-Titanium (NiTi) shape memory alloy (SMA) actuated mechanical linkage that mimicked the shape and motion of swan wings. A hidden electrical circuit with a simple pushbutton switch provided the actor with control over the wing movement. The unique design constraints that came with designing a mechanism for theatrical use, along with the close collaboration between the two departments, led to a successful wearable mechanism with dramatic stage effect.
To explore creative smart clothing design for sports and fitness, one of the fastest-growing wearable markets, this study presents an interactive cycling outfit incorporating solar-powered LED sensor lights. The completed garment design contributes to eco-design approaches for sustainability, to fashion history (the 1890s Rational Cycling Outfit), and to the aesthetic appeal of clothing. Semi-transparent Hanji fabrics (made from paper mulberry using Korean traditional paper-making techniques) and high-tech trim materials (e.g., 3M reflective strips, stretch air-mesh fabric, fluorescent tape) were used to complete the garment. This study fills a gap in the literature by exploring creative design ideas and techniques for smart cycling clothing. This line of smart clothing research can help educators create impactful new interdisciplinary design programs and course content that incorporate wearable technology, with emerging concepts that help students expand their personal and professional spheres in the global industry. For designers and marketers in the wearable technology industry, this study also suggests future research directions on smart clothing design that address the aesthetic aspects desired by consumers. Finally, this design sheds new light on late Victorian fashion history through its reinterpretation of the Rational Cycling Outfit.
The mechanism of astronaut injuries inside rigid spacesuits is not well understood, owing to the difficulty of visualizing intra-spacesuit body motions. An alternative method is to record muscle activation signals with electromyography (EMG); however, the conventional EMG procedure requires a bulky, extensive electrode/wire setup and is susceptible to signal noise. To address these challenges for aerospace applications of EMG, we designed a garment-based system to collect EMG data from upper-body muscle activity inside spacesuits. Constructed with form-fitting textiles and careful management of on-garment tensions, our garment provides a viable, non-invasive EMG study solution that maximizes applicability and subject mobility while resisting motion artifacts. The functionality and usability of our design were validated in a human subject test, which showed standard-quality signals, an easy don/doff process, minimal electrode/wire setup, and simple wire bulk management, in contrast to conventional methods.
The Power of Proximity fashion-tech collection creates a network between garments to show what wearers have in common with each other. The collection uses a mesh network to link the garments, with pre-processed data taken from social media stored locally on a micro-controller. Once the garments are connected to the same network, they exchange data and an algorithm determines the commonalities between the users. Infrared signalling is used to tell whether the wearers are facing each other; when they are, an array of LEDs displays the level of commonality, which changes as the users spend time together. The two garments are knitted from merino wool and cashmere and designed so that the electronics are seamlessly embedded into them, creating a soft interface around the body.
Human perception has long been influenced by technological breakthroughs. An intimate mediation of technology lies between our direct perceptions and the environment we perceive. Through three extreme ideal types of perceptual machines, this project defamiliarizes and questions the habitual ways in which we interpret, operate in, and understand a visual world mediated by digital media. The three machines create: hyper-sensitive vision, a speculation on social media's amplification effect and our filtered communication landscape; hyper-focused vision, an analogue version of searching behavior on the Internet; and hyper-commoditized vision, a monetized vision that meditates on the omnipresent advertisements targeted all over our visual field. The site of intervention is the visual field in a technologically augmented society. All three machines have both an internal state and an external signal; this duality allows them to be seen from outside and experienced from inside.
Sense is a responsive garment that visualizes the global phenomenon of coral bleaching. Sense examines the current scientific shift in which technology converges with biology, and embodies the beauty, power, and fragility of nature. This project explores aesthetic territory at the intersection of traditional textile techniques and wearable technologies. In contemporary fashion design, biomimicry provides alternative ways to express and communicate, in the networked, hybrid physical-digital domain, the urgency of dying coral reefs.
Nanogami is a bioresponsive garment that visualizes the importance of the microbiome to collective wellbeing. The microbiome is the community of bacteria, viruses, and cells that live within and on our bodies. This galaxy of particles makes up more than half of the human body and is thought to be responsible for overall health and mood.
Working from nano- to micro-scale sensations, Nanogami's dichroic origami fabric seeks to visualize this micro-galaxy with mood sensing, breath monitoring, and inflatable actuation that assist in ideal homeostasis. Within the concept of extimacy, showing internal states to the external world, Nanogami illuminates the microbiome to promote awareness: a self-monitoring, mediated textile that provokes wellbeing with real-time visual and haptic feedback.