Engaging users with public displays has been a major challenge in public display research. Interactive displays often suffer from being ignored by potential users. Research has shown that user representations are a valid way to partially address this challenge, e.g., by attracting attention, conveying interactivity, and serving as entry points to gesture and touch interaction. We believe that user representations, particularly personalized avatars, could further increase the attractiveness of public displays if carefully designed. In this work, we provide first insights into how such avatars can be designed and which properties are important to users. In particular, we present AVotar, a voting application for mobile devices that lets users design the avatars that represent them. In a user study we found that users appreciate high degrees of freedom in customization and focus on expressive facial features. Finally, we discuss the findings, which yield useful implications for designers of future public display applications employing avatars.
The amplification of human senses has been a focus of contemporary research for the past decades. Apart from the replication of human organs, the functionality of the human body has been enhanced. While many approaches aim to augment existing sensory channels, our research purpose is to explore the creation of a new sense, namely a sense of weather awareness. To this end, we present a concept based on the presentation of thermal stimuli. We initially explored the perception and suitability of thermal feedback stimuli for communicating weather information, particularly precipitation, in an experiment with 16 participants. From the qualitative and quantitative results we derive important findings that help us advance the realization of our concept in future research, including a field study to further evaluate the creation of a sense of weather awareness.
Current head-mounted displays (HMDs) for Augmented Reality (AR) have narrow fields of view (FOV). The narrow FOV further decreases the already limited human visual range and worsens the problem of objects going out of view. Therefore, we explore the utility of augmenting head-mounted AR devices with MonoculAR, a peripheral light display comprising twelve radially positioned light cues that point towards out-of-view objects. In this work, we present two implementations of MonoculAR: (1) on-screen virtual light cues and (2) off-screen LEDs. In a controlled user study we compare both approaches and evaluate search-time performance for locating out-of-view objects in AR on the Microsoft HoloLens. Key results show that participants find out-of-view objects faster when the light cues are presented on the screen. Furthermore, we provide implications for building peripheral HMDs.
The performance of users with motor impairments producing stroke-gesture input on touchscreens has been little examined so far, despite the wide prevalence of mobile devices and the benefits they bring to users' quality of life. In this work, we present the first empirical results on this subject, drawn from 915 gestures collected from 10 participants with motor impairments (spastic tetraplegia and tetraparesis) and 10 participants without known impairments. We report that different motor abilities lead to different performance in terms of gesture production time. We also show that the production times of gestures articulated by users with motor impairments can be accurately predicted, with an absolute error of just 150 ms and a relative error of only 3.7% with respect to actual times (user-independent tests), a result that will enable designers to estimate human performance a priori when prototyping gesture UIs for users with motor impairments.
We introduce an approach to user interface testing with a particular focus on non-native GUI-based mobile applications. We particularly address the domain of entertainment and education software, including mobile games. We describe a prototype system based on inexpensive components and open-source software, intended to support the product development cycle of companies on a lean budget. Based on the prototype system discussed in this paper, we expect to develop a distributed infrastructure that would allow users to contribute their own computers and connected devices to a common testing framework. The approach presented in this work is also suitable for a wider range of mobile applications with a high variety of human-computer interaction mechanisms.
The sense of hearing provides humans with information about their surroundings and is the primary means of communication, yet it is limited in its ability to focus on particular stimuli. To provide this ability, we designed and built S5, a mobile proof-of-concept prototype that allows Selective Sensing of Single Sound Sources. Our design consists of a head-mounted directional microphone attached to a smartphone, which acts as controller, filter, and amplifier. Users hear the selected signal through headphones and activate the device by touching their ear. To evaluate this sensory augmentation, we conducted a study with 16 participants, which showed that the system was appealing and perceived as useful. Based on our findings, we conclude that the proposed augmentation is feasible, and we provide insights for further development of the concept.
Asthma is a complex disease caused by genetic and environmental factors, affecting 235 million people worldwide. The effects of chronic respiratory diseases such as asthma can be particularly aggressive on islands, due to the climate conditions and the rich nature and flora. Conceived as a case study on Madeira island, we applied a user-driven innovation strategy to inform the design of a mobile app that assists users in tracking asthma symptoms through an intuitive interface and a smart analog button, providing real-time personalized recommendations. The app is the result of a workshop conducted with 28 17-year-old students from Madeira. The workshop's goal was to involve users in thinking about how to exploit data about air quality and weather conditions collected by a pervasive low-cost infrastructure spread across the island.
For pedestrian navigation support, we report on how the feeling of being in control over receiving updates impacts navigation efficiency and user experience. In an exploratory field study, 24 participants navigated to previously unknown targets using a wristband that conveyed tactile information about the target's bearing. Information was either pulled by the user at times of their choosing via a simple arm gesture, or pushed by the wristband at a regular, preset interval. While the push mode resulted in higher efficiency, more users preferred actively pulling information, possibly because this afforded a greater feeling of control. Interestingly, mode preference was independent of individual navigation ability. The results suggest that the properties of the specific navigation context should determine whether an interface offers push or pull modes for navigation support.
Previous research shows that couples favour online information sources when seeking support for preconception health and pregnancy planning, with mobile applications (apps) becoming increasingly popular. This study aimed to establish what smartphone apps currently exist to support couples when preparing for pregnancy. A functionality review was conducted to explore app content, and an analysis of user reviews was undertaken to investigate user views towards existing apps. 25 apps were analysed, which provided information on diet, weight, alcohol, smoking and caffeine, amongst other topics. Positive reviews mainly referred to the helpfulness of the apps; negative comments focused on the oversimplification of information. Overall, user comments showed a positive response towards existing preconception care apps, but users reported concerns about information accuracy and reliability. Further work will be undertaken to evaluate whether existing apps engage users to improve their preconception health care and whether these apps fulfil user requirements.
Collaboration is essential in both working and learning contexts. Although there is a wide range of collaboration tools and technologies, they mostly focus on one specific application and are thus seldom reusable in other domains. To provide reusable solutions, it seems reasonable to design not for one specific application but for recurring activities that collaborators perform. We identified typical collaborative activities in mixed-focus collaboration, e.g., creating content or presenting results, derived from a literature survey, observation, and existing tools. Collaboration activities aim to classify solutions and enable designers and developers to reuse them in other scenarios. We illustrate this approach with mobile interaction techniques for two collaboration activities. The presented interactions address collaborative comparing and sharing and are applicable to various scenarios. This approach extends other collaboration processes by focusing on recurring activities and improving the reusability of tools.
Online forms often require users to provide a lot of personal information when registering for services, which puts their privacy at risk. While recent legislation has focused on how personal data is handled by organizations, the Cambridge Analytica revelations expose the limitations of relying on organizations to adhere to legislation or even their own privacy policies. In this research, we tackle this problem by taking the first steps towards providing users with more control over their personal data when registering for services. We employ a user-centered approach to design a privacy protection app, which, through the use of avatars, would provide users with greater control and flexibility over the personal information they disclose during online registrations. This paper reports on a set of design findings and observations extracted from a series of design workshops conducted to identify the design elements users would prefer in this novel privacy protection app.
Freedom of expression is a fundamental human right. Unfortunately, this right is denied to the majority of people because they cannot read and write: most modern means of communication rely on textual interfaces that are not inclusive of less educated and visually impaired people. However, basic and feature mobile phones are becoming widely available to these populations. In this paper, we present Mehfil, an IVR-based citizen journalism platform that was deployed in Pakistan for 41 days. It received 789 calls from 535 users (2.4% of them blind) from all provinces of Pakistan. Mehfil provides a platform for its users to report local problems by recording their grievances on a range of social issues, including unemployment, personal safety, health, education, corruption, and the rights of disabled people (especially the visually impaired). This paper reveals a demand for mobile phone-based citizen journalism and grievance reporting platforms among low-literate people in Pakistan.
This paper describes I-Set, a mobile set for video-makers that transforms traditional video equipment into smart objects, making it possible to control the equipment and the shooting through a mobile application. I-Set has been designed according to circular economy principles, giving users the possibility of installing the modules on existing equipment and of printing its parts using recyclable materials. I-Set follows a user-centered design approach, involving end users in all project phases.
We present a "privacy facts" label, which aims to help non-experts understand how an Internet of Things (IoT) device collects and handles data. We describe our design methodology and detail the results of our user study with 31 participants, assessing the efficacy of the label. The results suggest that the label was perceived positively by the participants and is a promising solution for helping users make informed decisions.
Virtual reality (VR) glasses enable users to be present in one environment while their physical body is located in another place. Recent mobile VR glasses enable users to be present in any environment they want, at any time and physical place. Still, mobile VR glasses are rarely used. One explanation is that it is not considered socially acceptable to immerse oneself in another environment in certain situations. We conducted an online experiment that investigates the social acceptance of VR glasses in six different contexts. Our results confirm that social acceptability depends on the situation. In bed, in the metro, or on a train, mobile VR glasses seem to be acceptable. However, when the user is surrounded by other people with whom social interaction is expected, such as in a living room or a public cafe, the acceptance of mobile VR glasses is significantly reduced. Whether one or two persons wear the glasses seems to have a negligible effect. We conclude that the social acceptability of VR glasses depends on the situation and is lower when the user is expected to interact with surrounding people.
Interacting with traditional user interfaces can be challenging for people with cognitive impairments. Speech-based conversational interfaces and virtual assistants such as Amazon's Alexa and Apple's Siri might hold great potential for this user group. Yet, knowledge about how cognitively impaired people perceive such conversational interfaces, and about their special requirements and opportunities, is scarce. To gain first insights, we conducted a group interview with participants with mild to moderate cognitive impairments. They expressed high expectations and imagined mobile conversational assistants for controlling vending machines, for example. Yet, they emphasized that smart assistants must not replace but rather complement personal contact with humans.
Recently, commercial Voice User Interfaces (VUIs) have been introduced to the market (e.g., Amazon Echo and Google Home). Although they have drawn much attention from users, little is known about their usability, user experience, and usefulness. In this study, we conducted a web-based survey to investigate the usability, user experience, and usefulness of the Google Home smart speaker. A total of 114 users, who are active in a social-media-based interest group, participated in the study. The findings show that the Google Home is usable and user-friendly, and shows potential for international users. Based on the users' feedback, we identified the challenges encountered by the participants. The findings from this study can inform researchers and developers in future research on VUIs.
Autism is a rapidly growing problem all over the world, but in developing countries like Bangladesh, technological support for autism is not adequate. Among the most acute problems children with autism face are communication problems. An Android app was developed to meet these needs of children with autism. A study of user feedback from a survey reviewing the app suggested an important addition: the learning process through the app can be made much more efficient by involving the parents of children with autism, as the emotional attachment towards parents is a great fulcrum for teaching. This paper gives an overview of the development of the Android app and the integrated reinforcement-based learning component. To let parents aid the learning process, we have included options for them to customize the app's contents.
This work explores the use of pressure sensing to capture cues of stress in smartphone users while typing. In a controlled laboratory study, 11 participants were asked to write about a recent stressful and a recent relaxing experience in counterbalanced order. Preliminary results show a significant positive correlation between the increase in typing pressure and self-reported stress across the two conditions (r=0.75, p=0.0081). In addition, we observed a significant negative correlation between the typing-pressure baseline and self-reported stress (r=-0.74, p=0.0093). These findings can help inform the development of less invasive methods for stress measurement.
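As a rough illustration of the analysis behind such a result, the reported r values are Pearson correlation coefficients, which can be computed as below. The data here are hypothetical placeholders, not the study's measurements.

```python
from statistics import mean
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-participant values (NOT the study's data):
# change in mean typing pressure vs. self-reported stress rating.
pressure_delta = [0.12, 0.30, 0.05, 0.40, 0.22, 0.18, 0.35, 0.08, 0.27, 0.15, 0.33]
stress_rating  = [1, 3, 0, 4, 2, 2, 4, 1, 3, 1, 3]

r = pearson_r(pressure_delta, stress_rating)  # positive for these data
```

A significance test (the reported p values) would additionally compare r against its sampling distribution, e.g. via a t-statistic with n-2 degrees of freedom.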
Commodity mobile devices have front-facing cameras that can be used to precisely track facial expressions, such as winking, providing an additional input modality that co-exists with touchscreen input. We evaluate and compare three types of wink-based interactions (single wink, double wink, and long wink) in three mobile usage scenarios: sitting, walking, and lying down. Results show that a single wink has an error rate similar to touch input and is preferred over touch input for targets in the corner regions of a smartphone.
Humor appreciation is one of the determinants of an individual's mood and can be assessed through jokes. We developed a functional prototype called Humoris, which asks users to select the funniest punchline and register their affective response to the jokes. Based on users' responses, the application predicts and displays their short-term mood using emoticons. Our smartphone prototype was evaluated using the think-aloud method with 9 participants. The usability of Humoris was examined with the System Usability Scale questionnaire, which gave an average score of 79.44 (SD=8.08). Based on our findings, participants liked the application interface as well as the mood prediction, but some of them found certain jokes difficult to understand.
Scentery proposes a novel approach to creating calming multisensory environments by displaying visualizations, playing audio, and activating olfactory sensations. Drawing on recent literature, we introduce an initial Emotive Design Taxonomy that maps emotions to colors, sounds, and scents. Scentery's users switch between different multisensory scenarios that promote a sense of calm. The first VR scenario immerses the user in a lavender field, which bursts into a carnival of purple, a lavender scent, and ambient instrumental sound. The other scenario is a rainforest scene with a ylang-ylang scent and nature sounds. Scentery was developed with Unity 3D for creating the 3D scenarios, Unity Remote for camera control and the viewer's perspective, and a microcontroller for triggering the scents in a vaporizer.
With the advent of low-cost headsets, Virtual Reality (VR) applications have gained enormous attention and popularity. Object selection is the primary task in VR interfaces. Although numerous studies have examined object selection, there is limited understanding of how input methods affect it in sparse, dense, and occluded dense Virtual Environments (VEs). In this paper, we investigate two commonly used input methods (capacitive touch and dwell gaze) for a smartphone-based VR head-mounted device. We evaluate task completion time, error rates, ease of use, comfort, and user preference across the proposed VEs. The results indicate that capacitive touch was the preferred method of selection in dense and occluded dense VEs, while users preferred dwell gaze in the sparsely populated VE.
The process of installing and removing smartphone apps is simple, but choosing which apps to install or remove requires users' time and effort, because users need to categorize and think about which apps they will still use based on their memory and behavior. Although there have been several studies on app recommendation, there is relatively sparse understanding of what motivates users to delete apps they no longer use. In this study, we investigate (1) whether users feel it necessary to remove smartphone apps and, if so, (2) why and when users decide to delete installed apps, and which types of apps are deleted. We also suggest (3) five categories of burdens that smartphone users may feel when they attempt to delete installed apps. Finally, this paper suggests how to automate smartphone app removal and develop a system for it.
Bornomala AR is an Augmented Reality application for Android devices that provides an easy way to learn the Bengali alphabet. The application was developed for children aged 3 to 5 in Bangladesh, to make them more familiar with their mother language. We present a study in which we measured the learning efficiency of 20 children from two preschools in Dhaka, Bangladesh, using the application. We found that, without the application, the average learning efficiency is 41.67% per week, whereas with the application it becomes 58.33% per week, an improvement of roughly 17 percentage points.
Mobile interfaces will be central in connecting end-users to the smart grid and enabling their active participation. Services and features supporting this participation do, however, rely on high-frequency collection and transmission of privacy-sensitive energy usage data by smart meters. Successfully communicating privacy to end-users via consumer interfaces will therefore be crucial to ensure smart meter acceptance and consequently enable participation. Current understanding of user privacy concerns in this context is not very differentiated, and user privacy requirements have received little attention. A preliminary user questionnaire study was conducted to gain a more detailed understanding of the differing perceptions of various privacy risks and the relative importance of different privacy-ensuring measures. The results underline the significance of open communication, restraint in data collection and usage, user control, transparency, communication of security measures, and a good customer relationship.
The combination of augmented reality and pervasive games in urban environments can be exploited to enable situated and informal learning through ludic and engaging activities. In this research, we explore how to use such technologies to improve citizens' awareness about their urban environment by means of an AR pervasive game to learn about a specific urban space. Since the game is pervasively played in a physical space, many usability issues need to be assessed before evaluating whether it can engage citizens whilst promoting some sense of urban awareness. In this paper, we introduce a usability study of the application and a preliminary set of factors to be analysed to assess user engagement.
The constant barrage of updates and novel applications to explore creates a ceaseless cycle of new layouts and interaction methods that we must adapt to. One way to address these challenges is through in-context interactive tutorials. Most applications provide onboarding tutorials that use visual metaphors to guide the user through the core features. However, these tutorials are limited in scope and are often inaccessible to blind people. In this paper, we present AidMe, a system-wide tool for authoring and playing through non-visual interactive tutorials. Tutorials are created via user demonstration and narration. In a user study with 11 blind participants using AidMe, we identified issues with instruction delivery and user guidance, providing insights into the development of accessible, non-visual interactive tutorials.
We investigate how a task-sensitive personal assistant on smartphones can support users in public space. Therefore, we designed, implemented, and evaluated
Research in mobile text entry has long focused on speed and input errors in lab studies. However, little is known about how input errors emerge in real-world situations or how users deal with them. We present findings from an in-the-wild study of everyday text entry and discuss their implications for future studies.
Even though Overactive Bladder is a treatable condition (75% of cases can be cured or ameliorated), it remains mostly undertreated due to embarrassment, lack of knowledge, or misperceptions. In this paper, we investigate the key features of a mobile phone application that could help people with Overactive Bladder symptoms adhere to a self-managed rehabilitation program. We also investigate which methods are appropriate to use with end users who have an embarrassing condition. Our results show that it is important to include all stakeholders (health professionals and end users) in the iterative design process, as contradictions were found both between and within stakeholder groups.
This paper presents work done within the EU project STARR, in which technologies to empower and support stroke survivors are developed. We report on the iterative development of an outdoor activity game for stroke survivors, discuss design choices and experiences from initial testing, and outline potential future developments.
We describe the rationale, design, and development of an iOS framework to enhance the end-user experience of car-related apps. The car framework fuses the smartphone sensors' raw data to detect user behaviour and car events, and provides relevant and timely information that an app can use to relieve the user from manually inputting data and to foster implicit interaction. Thus, app developers can focus on user needs and UX rather than on code complexity. We show how the detected events map to common needs of car-related apps.
Tablet-based screenings have been shown to enhance the diagnosis of symptoms of neuropsychiatric disorders such as Alzheimer's or Parkinson's by providing more diagnostic data through digital media in addition to conventional paper-and-pencil tests. However, user acceptance and experience among older patients have not been systematically researched in this context. We present the design and evaluation of a tablet-based prototype of a neuropsychiatric screening of cognitive symptoms. After developing two layouts, one identical to the original pencil-and-paper test (TI) and one optimized for tablets (TO), the tablet-based versions were compared with the original version in a user test (n = 20). Results showed user acceptance and experience to be positive for all versions, with TO favored over the other versions. Test-retest reliability was maintained in the tablet-based versions, and differences in digitizer features between healthy and cognitively impaired participants were explored.
Recent advances allow for driverless cars that do not rely on support systems necessary for humans, such as traffic lights. However, humans will still walk the streets in the future, so we must think about integrating pedestrians into a system of completely autonomous vehicles. We developed several mobile UI prototypes based on three navigational concepts, showing how future navigation systems for pedestrians within automated traffic systems may display information via augmented reality. In a small qualitative study, we gathered first feedback to determine future working directions. We found that our participants liked information about autonomous vehicles, their intents, and safe zones, but were concerned about information overload.
As research on speech interfaces continues to grow in HCI, there is a need for design guidelines that help solve the usability and learnability issues of hands-free speech interfaces. While several sets of established guidelines exist for GUIs, an equivalent set of principles for speech interfaces does not. This matters because speech interfaces are widely used in mobile contexts, a field that itself evolved its design guidelines as it matured. We explore design guidelines for GUIs and analyze how they apply to speech interfaces. To this end, we identified 21 papers that reflect on the challenges of designing (predominantly mobile) voice interfaces. We present an investigation of how GUI design principles apply to such hands-free interfaces and discuss how this can serve as the foundation for a taxonomy of design guidelines for hands-free speech interfaces.
Graphical user authentication schemes typically require users to draw a secret on a background image or select images on a grid. Although it is known that various image-related attributes affect the security and memorability of the generated passwords, current state-of-the-art approaches deliver image content either randomly or based on end-users' selections. Motivated by sociocultural theories holding that the meaning of an image varies across people depending on their sociocultural background and experiences, in this paper we elaborate on a multi-layer image-content delivery approach, supported by an initial framework, that aims to deliver background images tailored to the unique sociocultural experiences of users. By doing so, we aim to trigger users' sociocultural episodic memories and ultimately help the creation of more secure and memorable passwords. Initial experimental results on the value of this approach are also presented.
Users unfamiliar with the terminology, technical meaning, or intended functionality of mobile ICT may be reluctant to use it and miss out on potential benefits. This also prevents users from exploiting the true potential of ICT and hinders the uptake and use of services, including those of societal relevance. We present an alternative that focuses on improving the overall user experience and accessibility through recommendations for a harmonized terminology covering basic, commonly used ICT features in English, French, German, Italian, and Spanish. The Technical Committee Human Factors (TC HF) of the European Telecommunications Standards Institute (ETSI) has initiated this ongoing work to develop these freely available recommendations, to be published in August 2019.
The spread of smartphones allows us to freely capture video and data from diverse hardware sensors (e.g., accelerometer, gyroscope). While recording such data is relatively simple, it is often challenging to share and restream it to other people and applications. Such a capability is valuable for a range of applications, such as a context-aware prototyping/development platform, an integrated data recording and analysis tool, or a sensor-data-based video editing system. To enable such complex operations, we propose Senbay, a platform for instant sensing, integration, and restreaming of multiple sensor data streams. The platform embeds the collected sensor data into each video frame as an animated two-dimensional barcode via real-time video processing. The video with embedded sensor data, dubbed Senbay Video, can easily be restreamed to other people and reused by data-rich, context-aware applications.
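Senbay's actual encoding is not specified in this abstract; as a purely hypothetical sketch, a per-frame sensor sample could be serialized into a compact payload suitable for rendering as a 2D barcode in that frame, and decoded back on the receiving side. All names and the payload format below are assumptions, not Senbay's implementation.

```python
import base64
import json
import zlib

def frame_payload(sensor_sample: dict, timestamp: float) -> str:
    """Pack one sensor sample into a compact string that could be encoded
    as a 2D barcode in the corresponding video frame (format hypothetical)."""
    record = {"t": timestamp, **sensor_sample}
    raw = json.dumps(record, separators=(",", ":")).encode()
    # Compress and base64-encode so the payload stays small enough
    # for a barcode while remaining a plain string.
    return base64.b64encode(zlib.compress(raw)).decode()

def decode_payload(payload: str) -> dict:
    """Recover the sensor record from a frame's barcode payload."""
    return json.loads(zlib.decompress(base64.b64decode(payload)))

sample = {"acc": [0.01, 9.79, 0.12], "gyro": [0.0, 0.02, -0.01]}
payload = frame_payload(sample, timestamp=12.345)
restored = decode_payload(payload)
```

A receiver scanning the barcode stream frame by frame would then rebuild the synchronized sensor timeline alongside the video.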
People have become more aware of their environment and pay more attention to conditions such as air quality and UV light exposure. Conventional technologies for reading environmental conditions are expensive, bulky, and situated, and do not meet people's need for a mobile, portable tool for environmental fingerprinting on demand. We present a mobile client-server system for personalized environmental fingerprinting and crowdsourced environmental fingerprint datasets, using a smartphone and a portable, credit-card-sized, NFC-powered sensor board.
Using serious games to screen for dyslexia has been a successful approach for English, German, and Spanish. In a pilot study with a desktop game, we addressed the screening of pre-readers, that is, younger children who have not yet acquired reading or writing skills. Based on our results, we redesigned the game content and created new interactions with visual and musical cues. Here we present a tablet game, DGames, which has the potential to predict dyslexia in pre-readers. This could benefit the roughly 10% of the population affected by dyslexia, as children would gain more time to learn to cope with the challenges of learning to read and write.
Visual information can be decoded very fast, letting us perceive and process large amounts of data in parallel, and a large body of knowledge on GUI design is organized as guidelines and recommendations. However, for blind people, who perceive the world through auditory and haptic channels, GUIs might not fit their needs. In this paper we present a prototype of LETS Math (Learning Environment for Tangible Smart Mathematics), a tangible system for mathematics learning for blind children. LETS Math consists of tangible blocks with tactile and auditory feedback, a working space, and a tablet-mediated audio game.
Capturing natural emotions as they occur in the wild is a known challenge. Participant compliance is often low and often the most important events are not captured. We present a new application design for capturing emotional assessments in the wild, using phone based sensors to determine events of interest and to generate timely in situ prompts. We present a flexible design that allows both quick assessments (less than 30s) and in depth journaling. The application is designed to be beautiful, engaging and informative while capturing rich data to inform insights and algorithms.
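The sensor-triggered prompting could be sketched as follows. The trigger threshold, cooldown policy, and class names here are illustrative assumptions, not the application's actual logic:

```python
import time

PROMPT_COOLDOWN_S = 1800  # assumed policy: no re-prompt within 30 minutes

class EmotionPromptScheduler:
    """Hypothetical sketch of sensor-triggered in-situ prompting: fire a
    quick emotional assessment when an activity signal spikes, throttled
    so that over-prompting does not hurt compliance."""

    def __init__(self, threshold=2.5):
        self.threshold = threshold        # activity-magnitude trigger (assumed)
        self.last_prompt = float("-inf")  # time of last prompt

    def on_sensor_sample(self, magnitude, now=None):
        now = time.time() if now is None else now
        if magnitude >= self.threshold and now - self.last_prompt >= PROMPT_COOLDOWN_S:
            self.last_prompt = now
            return "prompt"  # show the quick (<30 s) assessment
        return None          # stay silent

sched = EmotionPromptScheduler()
print(sched.on_sensor_sample(3.0, now=0))   # prompt
print(sched.on_sensor_sample(3.0, now=60))  # None (within cooldown)
```

A real implementation would replace the single magnitude threshold with richer event detection, but the throttling pattern is what keeps quick assessments from degrading into ignored notifications.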
Responsive Snippets is the accompanying software of my recent work on responsive text summarization, an approach to web design that allows desktop web pages to be read in a form adapted to the size of the device a user is browsing with. Responsive Snippets lets designers create HTML summaries quickly and effortlessly, simply by using CSS selectors and media queries. Responsive Snippets can be especially useful for news and blog sites, as a means to let users quickly get a glimpse of the main body content, although it can be applied to any HTML element containing text. The software is publicly available, so that others can build upon this work.
We discuss the design and initial evaluation of Gravity of Thought Waterfall, a prototype designed to be surprising and creative. Users can control the visual "gravity" effect of the waterfall with an EEG headset. The potentially surprising interactions with the prototype were evaluated through a set of questionnaires and a survey. Results show that the prototype is perceived as surprising and creative; almost all participants were positively surprised when interacting with it.
Staying healthy is one of the most important things in life, and our daily decisions determine how healthy or unhealthy we are. We present PHARA, an augmented reality (AR) mobile assistant that supports decision-making for food products at grocery stores. Using a user-centered design approach, we investigated the possibilities of AR technology for presenting food product information. Then, following an iterative design process, we implemented a mobile AR application to support users with typical decision-making tasks at grocery stores. In this paper, we present the working prototype of PHARA and its use case in detail.
Networked digital sharing economy services enable the effective and efficient sharing of vehicles, housing, and everyday objects. However, contemporary online sharing platforms face several challenges related to the establishment of trust among peers, as well as difficulties in dealing with the growing number of intermediaries (e.g., payment, insurance) needed to ensure adequate service delivery. We designed and developed "Just Share It" (JSI), an interactive system that enables the sharing of personal physical possessions (e.g., power tools, toys, sports gear) by directly connecting lenders and borrowers, as peers, through mobile technology. The JSI system utilizes a blockchain ledger and smart contracting technologies to improve peer trust and limit the number of required intermediaries, respectively. In this submission, we briefly review emergent challenges in this space, describe the JSI prototype system and its trust model, and reflect on future architectural opportunities for an eventual "in the wild" deployment.
Touchscreens are the most successful input method for smartphones. Despite their flexibility, touch input is limited to the location of taps and gestures. We present Palm Touch, an additional input modality that differentiates between touches of fingers and the palm. Touching the display with the palm can be a natural gesture since moving the thumb towards the device's top edge implicitly places the palm on the touchscreen. We developed a model that differentiates between finger and palm touch with an accuracy of 99.53% in realistic scenarios. In this demonstration, we exhibit different use cases for Palm Touch, including the use as a shortcut and for improving reachability. In a previous evaluation, we showed that participants perceive the input modality as intuitive and natural to perform. Moreover, they appreciate Palm Touch as an easy and fast solution to address the reachability issue during one-handed smartphone interaction compared to thumb stretching or grip changes.
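The distinction the model draws can be illustrated with a toy feature-based classifier: palms produce much larger and more elongated contact blobs than fingertips. The features and thresholds below are illustrative assumptions; they stand in for the learned model that reaches the reported 99.53% accuracy and are not taken from it:

```python
def classify_touch(area_mm2, major_axis_mm, eccentricity):
    """Toy finger-vs-palm classifier on touch-blob features.
    Thresholds are illustrative assumptions, not the actual trained model:
    palm contacts tend to be large and elongated, fingertips small and round."""
    if area_mm2 > 150 or major_axis_mm > 25 or eccentricity > 0.85:
        return "palm"
    return "finger"

print(classify_touch(area_mm2=60, major_axis_mm=10, eccentricity=0.3))   # finger
print(classify_touch(area_mm2=400, major_axis_mm=40, eccentricity=0.9))  # palm
```

Once touches are labeled this way, palm contacts can be routed to shortcuts (e.g., a reachability mode) instead of being rejected as accidental input.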
Data visualisation is one of the most common mechanisms for exploring data. It is therefore no surprise that a broad array of techniques and tools is available today to visually explore data. However, data may also be perceived through other sensory channels, such as touch, taste, or sound. In this paper we propose Musical Data, a novel interactive demo that transforms mobile usage data into music. In the same way that there is a visual language for interpreting data visualisations, we can draw on the musical language to interpret the music generated from the data. Musical Data offers two key advantages: first, it enables visually impaired individuals to make sense of complex data; second, Musical Data, used by itself or combined with data visualisations, opens new possibilities for customer understanding and human-computer interaction, as musical patterns may provide a novel perspective for understanding the behavior of mobile users.
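A minimal sonification mapping might quantize data values onto a musical scale so that larger values sound higher. This is a hypothetical sketch of the general technique, not Musical Data's actual mapping:

```python
PENTATONIC = [0, 2, 4, 7, 9]  # scale degrees in semitones above the root

def value_to_midi(value, vmin, vmax, root=60, octaves=2):
    """Map a data value onto a pentatonic pitch (illustrative mapping,
    not Musical Data's actual scheme): larger values -> higher MIDI notes.
    root=60 is middle C."""
    span = len(PENTATONIC) * octaves
    step = int((value - vmin) / (vmax - vmin) * (span - 1)) if vmax > vmin else 0
    octave, degree = divmod(step, len(PENTATONIC))
    return root + 12 * octave + PENTATONIC[degree]

# Hypothetical example: daily app-usage minutes turned into a rising melody
usage_minutes = [5, 30, 120, 240]
melody = [value_to_midi(v, 0, 240) for v in usage_minutes]
print(melody)  # [60, 62, 69, 81]
```

Constraining notes to a pentatonic scale keeps arbitrary data from sounding dissonant, which is one way a "musical language" can stay readable the way a well-chosen color scale does in a chart.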
Defining a drone mission for inspection purposes may require the operator or pilot to manage a large amount of mission-specific data. This process is time-consuming, and validation prior to the flight is highly recommended in order to fulfil the mission with minimum risk and maximum efficiency. This paper explores the use of wearable Mixed Reality (MR) as a tool to enhance interaction with information when planning drone-based missions. We provide a HoloLens-adapted set of automated functions for defining complex missions through geometrical shapes, together with mission updates and information-layer visualization. The MR wearable enables the operator to easily configure, view, and manage the mission over the virtual terrain.
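Shape-based mission definition can be sketched by expanding a geometric primitive into concrete waypoints; a circle drawn around an inspection target, for instance, becomes evenly spaced orbit points. The function below is a hypothetical illustration (flat-earth approximation, fine at inspection-scale radii), not the system's actual implementation:

```python
import math

def circular_mission(center_lat, center_lon, radius_m, n_waypoints=8, alt_m=30.0):
    """Hypothetical sketch of shape-based mission definition: expand a circle
    drawn on the virtual terrain into evenly spaced (lat, lon, alt) waypoints.
    Uses a flat-earth approximation, acceptable for small radii."""
    m_per_deg_lat = 111_320.0  # approx. metres per degree of latitude
    m_per_deg_lon = m_per_deg_lat * math.cos(math.radians(center_lat))
    waypoints = []
    for k in range(n_waypoints):
        theta = 2 * math.pi * k / n_waypoints
        dlat = radius_m * math.sin(theta) / m_per_deg_lat
        dlon = radius_m * math.cos(theta) / m_per_deg_lon
        waypoints.append((center_lat + dlat, center_lon + dlon, alt_m))
    return waypoints

# Hypothetical example: 4-point orbit, 50 m around a structure
wps = circular_mission(41.39, 2.17, radius_m=50, n_waypoints=4)
print(len(wps))  # 4
```

Generating waypoints from a shape parameterization is also what makes pre-flight validation tractable: the planner can check a handful of shape parameters rather than dozens of hand-entered coordinates.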
We introduce Quiri, a holistic design concept for pain assessment and documentation. In this paper, we present design studies of two mobile interfaces: a tablet app designed especially for children and adolescents, and a smartwatch app for adults with chronic pain. This case study aims, first, to gain a better understanding of the preferences and needs of pain patients and physicians and, second, to describe the design process we took to arrive at the final design of Quiri. Quiri was developed based on the IEC 62366 usability standard for medical devices. The designs resulting from these studies form an interdependent system, complemented by a web user interface that lets physicians evaluate the data collected in the mobile applications. Initial qualitative feedback from physicians indicates that Quiri is useful and appropriate for supporting pain therapy.
Speech synthesis is a key enabling technology for wearable devices. We discuss the design challenges in customising speech synthesis for the Sony Xperia Ear and the Oakley Radar Pace smartglasses. To support speech interaction designers working on novel interactive eyes-free mobile devices, specific functionality is required, including: flexibility in performance, memory footprint, disk requirements, and server or local configurations; methods for personification and branding; architectures for fast, reactive interfaces; and customisation for content, genres, and speech styles. We describe implementations of this functionality, how it can be made available to engineers and designers working on third-party devices, and the impact it can have on user experience. To conclude, we discuss why some customers are reluctant to depend on speech services from well-known providers such as Google and Amazon, and consider the barrier to entry for custom-built personal digital advisors.
For many people, parking in large indoor venues can be a challenging and demanding task. Advanced positioning technologies and accurate indoor maps offer technical solutions for developing an indoor navigation system that helps drivers with the challenges of indoor parking. However, in order to develop a meaningful indoor navigation system, we first need to understand the experiential context of indoor parking. To do so, we investigated the indoor parking experience through explorative studies and design-led explorations. In this paper, we present the design process and prototyping activities involved in gaining an understanding of the indoor parking context. Furthermore, we reflect on how design-led research has helped us iteratively reframe the research question, and we present some of the relevant findings to inspire future academic or industrial research on this topic.
We propose an intelligent factory training system based on smart AR glasses, supported by spatial cognition and positioning technology, to achieve a simple and intuitive self-learning interface. We have designed four training scenarios: inspecting equipment components, troubleshooting malfunctioning instruments, factory route guidance, and factory area information display and communication. Users can complete factory training and guidance without additional human assistance. Finally, we summarize the user experience: feedback was largely positive, and the AR intelligent trainer was well accepted by users.
Since its development, the technology acceptance model (TAM) has been adapted for a multitude of different technologies and has proven very useful in a research context. These TAM adaptations are, however, less appropriate in product development, since they do little to guide design and branding. We have revised the original TAM with the specific aim of application during new product development (NPD) and applied this model, NPDTAM, in a study on the acceptance of smart payment cards. The results provide helpful insights into the relevance of different potential benefits, suggesting that usefulness perception is most impacted by increased convenience, improved transaction overview, and usage fun. Further, the model suggests that a good fit with who we are, rather than who we wish to be or feel we ought to be, is of special importance for usage intention. The application of NPDTAM in product development can be recommended.
Recent research has been devoted to designing mobile applications that encourage users to complete microtasks in everyday contexts, known as "mobile crowdsourcing". In this case study, we present our ongoing work on a publicly available mobile application, Crowdsource, that has over 540,000 users from 200 countries or regions. Over 15 million sessions have been performed since its initial launch in August 2016. To better support users' motivations and goals, we conducted a survey with our active users and validated their feedback with a set of usability studies. Our findings suggest design considerations for crowdsourcing microtasks with mobile users at a global scale.
Mobile apps allow TV viewers to use their smartphone to control and interact with their TV. Our study set out to understand real-world practices of smartphone interaction with TV and to uncover the reasons people use smartphones instead of their remote controls. An online survey was conducted to determine which activities viewers perform when they use their smartphone to interact with TV, how they perform these activities, and why they choose their smartphone over their remote. Results reveal that the most popular activities are considered easier to complete on a smartphone than with a remote. Our findings outline these activities and provide a discussion of how the experience of smartphone control of TV can be improved. These discussion points are relevant as design teams decide how much to invest in mobile apps versus remote controls, and as companies evaluate whether the smartphone can replace the remote control.
My research explores methods to enhance the reading experience across both digital and physical media. So far in my PhD, I have designed and developed prototypes that use paper as input for digital reading devices. I have been exploring methods to augment the reading experience by adding tangible controls to digital reading devices, and methods of communicating with printed books. This paper briefly outlines my research goals, research to date, and future work.
Our research explores a seamless interaction with smart devices, which we divide into three stages: (i) device recognition, (ii) user input recognition, and (iii) inferring the appropriate action to be carried out on the device (cf. Figure 1). We leverage wearable computers and combine them into one interaction system. This makes interactions ubiquitous in two ways: While smart devices are becoming increasingly ubiquitous, our wearable system will be ubiquitously available to the user, making it possible to interact everywhere, any time.
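The three stages can be sketched as a simple pipeline. The recognizer functions and the rule table below are hypothetical stand-ins for the wearable system's actual classifiers and action-inference logic:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Interaction:
    device: Optional[str] = None   # result of stage (i)
    gesture: Optional[str] = None  # result of stage (ii)
    action: Optional[str] = None   # result of stage (iii)

# Hypothetical rules mapping (device, gesture) pairs to actions
RULES = {
    ("lamp", "swipe_up"): "increase_brightness",
    ("speaker", "swipe_up"): "increase_volume",
}

def run_pipeline(recognize_device, recognize_input, sensor_frame):
    """Sketch of the three-stage interaction pipeline:
    (i) device recognition, (ii) user input recognition,
    (iii) inferring the action to carry out on the device."""
    ia = Interaction()
    ia.device = recognize_device(sensor_frame)      # stage (i)
    ia.gesture = recognize_input(sensor_frame)      # stage (ii)
    ia.action = RULES.get((ia.device, ia.gesture))  # stage (iii)
    return ia

result = run_pipeline(lambda f: "lamp", lambda f: "swipe_up", sensor_frame=None)
print(result.action)  # increase_brightness
```

The same gesture maps to different actions depending on the recognized device, which is what lets one wearable system serve as a universal controller for many smart devices.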
The objective of my research is to investigate how everyday objects can be augmented with embedded technologies to support public opportunistic sociality (i.e., social curiosity, awareness, and face-to-face interaction). I investigate how inhabitants of public spaces discover and use the augmented objects, and the effects on daily sociality. I explore how different modes of engagement impact interactions and how these vary across locations. I address my research questions by deploying technological prototypes in month-long studies in the wild. I rely on both quantitative and qualitative methods to analyse how heterogeneous and variably recurrent users discover, make sense of, and appropriate technologies over time in relation to situated sociality.
Advances in technology have enabled a variety of convenient channels for couples in long-distance relationships to communicate and interact over vast distances. However, this hyper-connectivity can be a double-edged sword. Prior research has pointed out that mainstream communication technologies focus on the transmission of explicit information, neglecting the mediation of emotional communication that is needed in intimate relationships. As a result, there is a gap between understanding users' needs in research and designing technologies for them in practice. My research has been dedicated to applying design thinking to investigate how such technologies can be redesigned and humanized to create a subtle and poetic cue of the presence of a distant loved one.
Wearables, like augmented reality glasses, are increasingly commercially available, but uptake has been slow and concerns about their social and ethical implications have been raised. Current adoption theories can provide insights into the adoption or rejection process, but very few studies in this field address the social and ethical implications of wearables. This paper argues for including different perspectives, namely diffusion and adoption theories, mutual shaping perspectives, and the philosophy of technology, to study the social interactions and ethical implications of wearables as well. Following this path, we would like to develop a model of appropriation in the future, to gain a better understanding of the acceptance of, and interactions with, emerging technologies such as wearable augmented reality.
We report on our research on usable transparency in the context of mobile health (mhealth) tracking. Usable transparency refers to the usability of transparency-enhancing tools (TETs), which seek to aid users of online data services in improving their privacy. Focusing on fitness tracking scenarios, our research addresses the conceptual and technical demands of such tools in terms of usability.
As mobile technology becomes embedded in people's collocated social interactions, it affects self-presentation. Viewing self-presentation as a performance, we might gain insights into people's everyday practices with augmented personal information. This research looks into how people could make use of personal information dynamically integrated into their context, appearance, and actions. With a focus on practice, this research combines field studies and interventions with personal wearable prototypes to reveal insights for design.
The aim of this doctoral study is to generate knowledge on how the digital payments ecosystem in India can be (re)designed to benefit the poor. This is not to say that digital money offers the best solution to drivers' financial problems. Rather, choosing to use cash for financial transactions is different from an inevitable dependence on cash. This study, therefore, situates digital payments within the broader ecosystem of financial practices and the 'acceptance network' of digital (card- or mobile-based) money. It shows how different parts of the ecosystem are shaped by the practices of the poor and need to be integrated for digital payments to be usable and beneficial to the poor in substantive terms.