Multiplanes is a virtual reality (VR) drawing system that provides users with the flexibility of freehand drawing and the ability to draw perfect shapes. Through the combination of beautified and 2D drawing, Multiplanes addresses challenges in creating 3D VR drawings. To achieve this, the system beautifies users' strokes based on the most probable intended shapes while the user is drawing them. It also automatically generates snapping planes and beautification trigger points based on previous and current strokes and the current controller pose. Based on geometrical relationships to previous strokes, beautification trigger points act as guides inside the virtual environment; users can hit these points to explicitly trigger a stroke beautification. In contrast to other systems, Multiplanes does not require users to manually set guides or perform any special gesture to activate them, allowing the user to focus on the creative process.
We introduce SkinBot, a lightweight robot that moves over the skin surface with a two-legged suction-based locomotion mechanism and captures a wide range of body parameters with an exchangeable multipurpose sensing module. We believe that robots that live on our skin, such as SkinBot, will enable a more systematic study of the human body and offer great opportunities to advance areas such as telemedicine, human-computer interfaces, body care, and fashion.
We present CritiqueKit, a mixed-initiative machine-learning system that helps students give better feedback to peers by reusing prior feedback, generalizing it to be useful beyond its original context, and retraining the system in real time about what is useful. CritiqueKit exploits the fact that novices often make similar errors, leading reviewers to reuse the same feedback on many different submissions. It takes advantage of all prior feedback and classifies feedback as the reviewer types it. CritiqueKit continually updates the corpus of feedback with newly added comments, and it guides reviewers to improve their feedback, and thus the entire corpus, over time.
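As a rough illustration of how prior feedback might be surfaced as the reviewer types, consider a bag-of-words cosine similarity over the feedback corpus (a minimal sketch; CritiqueKit's actual classifier is not described here, and the corpus strings are invented):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def suggest(partial_comment, corpus, k=2):
    """Rank prior feedback comments by similarity to the partial comment."""
    query = Counter(partial_comment.lower().split())
    scored = [(cosine(query, Counter(c.lower().split())), c) for c in corpus]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [c for s, c in scored[:k] if s > 0]

# Invented example corpus of prior design feedback.
corpus = [
    "Consider increasing the contrast between text and background.",
    "The alignment of these elements is inconsistent.",
    "Nice use of whitespace, but the font size is too small.",
]
print(suggest("the contrast of the text", corpus, k=1))
```

A production system would pair such retrieval with a trained usefulness classifier updated from reviewer behavior.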
Physical controls are fabricated through the complicated assembly of parts, require expensive machinery, and are prone to mechanical wear. One solution is to embed controls directly in interactive surfaces, but the proprioceptive part of gestural interaction that makes physical controls discoverable and usable by hand gestures alone is lost and has to be compensated for, e.g., by vibrotactile feedback. However, vibrotactile actuators face the same issues as physical controls. We propose printed vibrotactile actuators and sensors, printed on plastic sheets with piezoelectric ink for actuation and silver ink for conductive elements such as wires and capacitive sensors. These printed actuators and sensors make it possible to design vibrotactile widgets on curved surfaces without complicated mechanical assembly.
Virtual reality (VR) using head-mounted displays (HMDs) is becoming popular. Smartphone-based HMDs (SbHMDs) are so low cost that users can easily experience VR. Unfortunately, their input modality is quite limited. We propose a real-time eye tracking technique that uses the built-in front-facing camera to capture the user's eye. It realizes stand-alone pointing functionality without any additional device.
We present PhyShare, a new haptic user interface based on actuated robots. Virtual reality has recently been gaining wide adoption, and effective haptic feedback in these scenarios can strongly support users' senses in bridging the virtual and physical worlds. Since participants do not directly observe these robotic proxies, we investigate multiple mappings between physical robots and virtual proxies that make the most of the resources needed to provide a well-rounded VR experience. PhyShare bots can act either as directly touchable objects or as invisible carriers of physical objects, depending on the scenario. They also support distributed collaboration, allowing remotely located VR collaborators to share the same physical feedback.
We demonstrate a haptic feedback method that generates a compliance illusion on a rigid surface using a tangential force sensor and a vibrotactile actuator. The method assumes a conceptual model in which a virtual object rests on a textured surface and is tethered to four walls by four springs. A two-dimensional tangential force vector measured from the rigid surface is mapped to the position of the virtual object on the textured surface. By playing vibration patterns that simulate the friction-induced vibrations caused by the movement of the virtual object, we create the illusion that the rigid surface is moving. We also demonstrate that the perceptual properties of the illusion, such as the stiffness of the virtual springs and the maximum travel distance of the virtual object, can be programmatically controlled.
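The force-to-position mapping of such a spring model can be sketched as follows, assuming a simple linear spring law x = F/k; the stiffness and travel-limit values are illustrative, not those of the actual system:

```python
import math

def virtual_position(force_xy, k=200.0, travel_max=0.01):
    """Map a measured 2D tangential force (N) to the virtual object's
    displacement (m) under a linear spring model, x = F / k,
    clipped to the maximum travel distance."""
    fx, fy = force_xy
    x, y = fx / k, fy / k
    r = math.hypot(x, y)
    if r > travel_max:
        # Saturate at the virtual object's maximum travel distance.
        x, y = x * travel_max / r, y * travel_max / r
    return (x, y)

# A larger k (stiffer virtual springs) yields less displacement per newton,
# so the surface feels stiffer.
print(virtual_position((1.0, 0.0), k=200.0))
print(virtual_position((4.0, 0.0), k=200.0))  # clipped to the travel limit
```

Vibration patterns would then be driven by the velocity of this virtual position to simulate friction-induced vibration.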
Sensor technologies have been adapted to the performing arts, and owing to the recent advancement of low-cost mobile electroencephalography devices, brain-computer interfaces (BCIs) are being integrated into dance performances as well. Nevertheless, BCIs are less accessible to artists than other sensors because signal processing and machine learning are required. This paper presents a work-in-progress example of BCI applications for performances, designed in collaboration with contemporary dancers. Its contribution is that the piece is not an add-on to a performance; rather, the implementation reflects the practices of contemporary dance.
Recent developments in wearable robots and human augmentation open up new possibilities for designing computational interfaces integrated with the body. In particular, supernumerary robotics is a recently established field of research that investigates the radical idea of adding robotic limbs to users. Such augmentations, however, limit how much we can add to the body due to weight or interference with other body parts. To address this, we explore the use of soft robots as supernumerary robotic fingers. We present a pair of soft robotic fingers driven by cables and servomotors, along with applications using the robotic fingers in various contexts.
We propose a tactile element that can generate both an electrostatic force and an electrostimulus, and can be used to provide tactile feedback on a wide area of human skin such as the palm of the hand. Touching a flat surface through our proposed tactile element allows the user to feel both uneven and rough textures. In addition, the element can be fabricated using double-sided inkjet printing with conductive ink. The use of a flexible substrate, such as PET film or paper, allows the user to design a free-form tactile element. In this demonstration, we describe the implementation of the proposed stimulus element and show example applications.
We propose a new type of printing system that incorporates sensors in a handheld printer to reflect user intent in the printed results on paper in real time. The system provides two key functions: "real-time embellishment," which alters printed content by reading the user's hand movements with pressure and optical sensors, and "local transcription," which selects content to be output by tracing existing content on paper with a linear camera. We performed experiments to measure the accuracy of both techniques and evaluate their usefulness.
We propose a novel shape-changing technique called Filum, which makes it possible to alter the shapes of textiles to better suit the requirements of people and the environment. Using strings and various sewing methods, ordinary textiles can be automatically shortened or shrunk into curved shapes. We demonstrate a series of novel interactions between people and textiles via three applications.
We demonstrate TrussFab's editor for creating large-scale structures that are sturdy enough to carry human weight. TrussFab achieves the large scale by using plastic bottles as beams that form structurally sound node-link structures, also known as trusses, allowing it to handle the forces resulting from scale and load. During this hands-on demo at UIST, attendees will use the TrussFab software to design their own structures, validate their design using integrated structural analysis, and export their designs for 3D printing.
We provide a hands-on demonstration of the potential of interactive systems based on electrical muscle stimulation (EMS). These wearable devices allow attendees, for example, to physically learn how to manipulate objects they have never seen before, feel walls and forces in virtual reality, and so forth. In our demo we not only demonstrate several of these EMS-based prototypes but also provide instructions and free hardware for people to conduct their first projects using EMS.
This paper proposes a new approach to enhancing interaction with general applications on smartphones. Physical objects held against the back surface of the smartphone, captured by the rear camera with a mirror, work as input devices or controllers. The approach requires no additional electronic devices yet offers tactile feedback. The occlusion problem does not occur when using the smartphone's back side, in terms of both display and camera viewing. We implemented the approach on an Android smartphone and confirmed that it provides richer interaction with low latency (100 ms).
We propose the concept of the "Internet of Haptics" (IoH), which enables sharing the experiences of others through the sense of touch. IoH allows haptic sensations to be multicast from one Sensor-Node (Inter-Node) to multiple Actuator-Nodes (Ceive-Nodes), and from multiple Inter-Nodes to multiple Ceive-Nodes, via the Internet. As a proof of concept, we developed the "HaptI/O" device, a physical network node that can act as a gateway for inputting or outputting haptic information to/from the human body or tangible objects. We use WebRTC as the baseline communication protocol. Users can access the IoH Web using a smartphone or PC and experience haptic sensations by selecting an Inter-Node and a Ceive-Node from a web browser. Multiple HaptI/O devices can be connected to the IoH server and transmit haptic information from one node to multiple nodes, as well as in one-to-one mutual connections, so that HaptI/O enables us to share our experiences through the sense of touch.
This paper proposes a dynamic acoustic field generation system for spot audio directed toward a particular person indoors. Spot audio techniques have been explored by generating ultrasound beams toward a target person in a certain area; however, everyone in that area can hear the sound. Our system recognizes the position of each person indoors using motion capture and 3D model data of the room. It then controls the direction of a parametric speaker in real time so that the sound reaches only a particular person, by calculating the reflection of sound off surfaces such as walls and the ceiling. We calculate the direction of the parametric speaker using a beam tracing method. We present methods for generating dynamic acoustic fields in our system and conducted human-factor experiments to evaluate the performance of the proposed system.
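The one-bounce case of such a beam-tracing calculation can be illustrated with the image-source method: mirror the listener across the reflecting plane and aim the speaker at the mirror image. This is a minimal sketch assuming a flat horizontal ceiling; the system's full beam tracer would handle arbitrary room geometry from the 3D model:

```python
def mirror_across_ceiling(point, ceiling_z):
    """Image-source method: reflect a 3D point across the horizontal
    plane z = ceiling_z."""
    x, y, z = point
    return (x, y, 2 * ceiling_z - z)

def aim_direction(speaker, listener, ceiling_z):
    """Unit vector the parametric speaker should face so that its beam
    reaches the listener after a single ceiling bounce."""
    image = mirror_across_ceiling(listener, ceiling_z)
    dx = tuple(i - s for i, s in zip(image, speaker))
    norm = sum(c * c for c in dx) ** 0.5
    return tuple(c / norm for c in dx)

speaker = (0.0, 0.0, 1.0)   # speaker position (m)
listener = (4.0, 0.0, 1.0)  # target person's head position (m)
print(aim_direction(speaker, listener, ceiling_z=3.0))
```

Re-running this per motion-capture frame yields the real-time steering direction as the target moves.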
Chalktalk is a computer-based visual language based around real-time interaction with virtual objects in a blackboard-style environment. Its aim is to be a presentation and communication tool, using animation and interactivity to allow easy visualization of complex ideas and concepts. This demonstration will show Chalktalk in action, with focus on its ability to link these objects together to send data between them, as well as the flexible type system, named Atypical, that underpins this feature.
There are many everyday situations in which users need to enter their user identification (user ID), such as logging in to computer systems and entering secure offices. In such situations, contactless passive IC cards are convenient because users can input their user ID simply by passing the card over a reader. However, these cards cannot be used for successive interactions. To address this issue, we propose AccelTag, a contactless IC card equipped with an acceleration sensor and a liquid crystal display (LCD). AccelTag utilizes high-function RFID technology so that the acceleration sensor and the LCD can also be driven by a wireless power supply. With its built-in acceleration sensor, AccelTag can acquire its direction and movement when it is waved over the reader. We demonstrate several applications using AccelTag, such as displaying several types of information on the card depending on the user's requirements.
We present Feeling Fireworks, a tactile firework show. Feeling Fireworks is aimed at making fireworks more inclusive for blind and visually impaired users in a novel experience that is open to all. Tactile effects are created using directable water jets that spray onto the rear of a flexible screen, with different nozzles for different firework effects. Our approach is low-cost and scales well, and allows for dynamic tactile effects to be rendered with high spatial resolution. A user study demonstrates that the tactile effects are meaningful analogs to the visual fireworks that they represent. Beyond the specific application, the technology represents a novel and cost-effective approach for making large scalable tactile displays.
An actuated shape-changing interface with faster response and smaller pixel size using a liquid material can provide real time tangible interaction with the digital world in physical space. To this end, we demonstrate an interface that displays user-defined patterns dynamically using liquid metal droplets as programmable micro robots on a flat surface. We built a prototype using an array of embedded electrodes and a switching circuit to control the jump of the droplets from electrode to electrode. The actuation and dynamics of the droplets under the finger provides mild tactile feedback to the user. Our demo is the first to show a planar visio-tactile display using liquid metal, and is a first step to make shape-changing physical ephemeral widgets on a tabletop interface.
In this paper we propose a method of non-contact stirring. Ultrasonic waves have been studied for various applications. However, devices using ultrasound have so far been designed to specialize in a single role. We aim to generalize airborne ultrasonic equipment for various applications, such as tactile presentation and super-directional speakers, and propose applications close to our daily life.
The computer keyboard is a widely used device for operating computers, e.g., for text entry and command execution. Typically, keystrokes are detected as binary states (e.g., "pressed" vs. "not pressed"). As a result, more complex input commands require multiple simultaneous key presses, up to four keys at a time, such as pressing "Cmd + Shift + Opt + 4" to take a screenshot to the clipboard on macOS. We present GestAKey, a technique that enables multifunctional keystrokes on a single key, providing new interaction possibilities on familiar keyboards. The system consists of touch-sensitive keycaps and a software backend that recognizes micro-gestures performed on individual keys to execute system commands or input special characters. In this demo, attendees will have the chance to interact with several GestAKey-enabled proof-of-concept applications.
shapeShift is a compact, high-resolution (7 mm pitch), mobile tabletop shape display. We explore potential interaction techniques in both passive and active mobile scenarios. In the passive case, the user is able to freely move and spin the display as it renders elements. We introduce use cases for rendering lateral I/O elements, exploring volumetric datasets, and grasping and manipulating objects. On an active omnidirectional-robot platform, shapeShift can display moving objects and provide both vertical and lateral kinesthetic feedback. We use the active platform as an encounter-type haptic device combined with a head-mounted display to dynamically simulate the presence of virtual content.
Conquer it! is a lightweight proof-of-concept exertion game that demonstrates Body Channel Communication (BCC) in a smart environment. BCC employs the human body as a communication medium to transfer digital data between physical objects using electric fields coupled to the body. During the game, participants are provided with BCC wearables, each of which represents a specific RGB color. When a user stands on, walks across, or touches the BCC tiles with a hand, communication is automatically established: the corresponding sensor area decodes the message (RGB value) originating from the wearable and lights up in that color for two seconds. The goal of the game is to light up as many tile cells simultaneously as possible. Participants can keep the colors alive by continuously moving around on the tiles. In the multiuser version, by stepping on or touching a blinking cell, users can immediately claim the area and overwrite the color of that subtile.
The recent emergence of digital fabrication allows everyday designers to make, deploy, and enjoy their creations. However, the excitement over digital fabrication presumes that users have sufficient domain knowledge to create complex models by understanding the underlying principles, and that they can be creative without computational support. This paper presents a new fabrication framework that lowers the barrier to solving everyday issues with fabrication. We propose a formalism and an accompanying finite state machine (FSM) model that endow a fabrication machine with the intelligence to interpret humans' design actions, with a view toward a new fabrication framework enabling collaborative, incremental fabrication. Building on this framework, this paper envisions a future of fabrication that pushes the ceiling: a collaborative fabrication process that treats intermittent, unpredictable events as live input and reflects them in the emerging outcome through a co-design process between a designer and a machine.
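A toy version of such an FSM might look like the following; the states and events here are purely illustrative inventions, not the model proposed in the paper:

```python
# Illustrative states of a collaborative fabrication machine.
# Unknown (state, event) pairs leave the state unchanged.
TRANSITIONS = {
    ("idle", "design_action"): "printing",
    ("printing", "design_action"): "replanning",  # user edits mid-print
    ("printing", "layer_done"): "printing",
    ("printing", "model_done"): "idle",
    ("replanning", "plan_ready"): "printing",
}

def step(state, event):
    """Advance the fabrication FSM by one event."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["design_action", "layer_done", "design_action", "plan_ready"]:
    state = step(state, event)
print(state)  # back to "printing" after replanning around the mid-print edit
```

The key property this models is that a design action arriving mid-print is handled as live input rather than an error.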
As computing becomes increasingly embedded into the fabric of everyday life, systems that understand people's context of use are of paramount importance. Regardless of whether the platform is a mobile device, wearable, or smart infrastructure, context offers an implicit dimension that is vital to increasing the richness of human-computer interaction. In my thesis work, I introduce multiple enabling technologies that greatly enhance context awareness in highly dynamic platforms, all without costly or invasive instrumentation. My systems have been deployed across long periods and multiple environments, the results of which show the versatility, accuracy and potential for robust context sensing. By combining novel sensing with machine learning, my work transforms raw signals into intelligent abstractions that can power rich, context-sensitive applications, unleashing the potential of next-generation computing platforms.
Traditional wearable tactile displays transfer information through firm contact between the tactile stimulator (tactor) and the skin. The firm contact, however, might limit where wearable tactile displays can be placed and might be a source of discomfort when the skin is exposed to prolonged contact. This motivated us to pursue a non-contact wearable tactile display, which can transfer information without contact. Based on a literature review, we concluded that among the various non-contact stimulation methods we should focus on airflow-based tactile displays. In my previous work, I proposed the concept of a non-contact wearable tactile display using airflows and explored its feasibility. Focusing on airflow-based wearable tactile displays, I am investigating their expressivity and feasibility in real-world environments. I expect my dissertation to provide empirical grounds and guidelines for the design of airflow-based wearable tactile displays.
My dissertation is aimed at enabling people to collaborate to create complex artifacts: for example, to develop software, sketch GUI prototypes, play music together, or write a novel. Such creative processes are not well defined and can evolve dynamically. We introduce interactive systems that help users collaborate and communicate in these open-ended processes. In particular, we investigate the benefits of integrating asynchronous interactions into real-time collaborations and of having real-time components in asynchronous collaborative settings. The systems provide tools that combine the two types of interaction techniques, and we validate them via user studies, participatory performing arts, and online deployments of systems and crowdsourced tasks. The hybrid methods are designed to help users recover collaborative context, make the process approachable to nonexperts, engage online crowds on demand in real time, and sustain liveness during collaboration. The dissertation will yield cross-domain knowledge about designing collaborative systems and help us create a framework for future intelligent systems that will help people solve more complex tasks effectively.
Incorporating accurate physics-based simulation into interactive design tools is challenging. However, adding the physics accurately is crucial to several emerging technologies. For example, in virtual/augmented reality (VR/AR) videos, faithful reproduction of the surrounding audio is required to bring immersion to the next level. Similarly, as personal fabrication is made possible with accessible 3D printers, more intuitive tools that respect physical constraints can help artists prototype designs. One main hurdle is the sheer computational complexity of accurately reproducing real-world phenomena through physics-based simulation. In my thesis research, I develop interactive tools that implement efficient physics-based simulation algorithms for automatic optimization and intuitive user interaction.
The Internet of Things (IoT) promises to revolutionize the way people interact with their environment and the objects within it by creating a ubiquitous network of physical devices. However, current advancement in IoT has been heavily targeted towards creating battery-powered electronics. There remains a huge gap between the collection of smart devices integrated into the IoT and the massive number of everyday physical objects. The goal of my research is to bridge this gap in the current IoT framework to enable novel sensing and interactive applications with daily objects.
In the virtual world, changing properties of objects such as their color, size or shape is one of the main means of communication. Objects are hidden or revealed when needed, or undergo changes in color or size to communicate importance. I am interested in how these features can be brought into the real world by modifying the optical properties of physical objects and devices, and how this dynamic appearance influences interaction and behavior. The interplay of creating functional prototypes of interactive artifacts and devices, and studying them in controlled experiments forms the basis of my research. During my research I created a three level model describing how physical artifacts and interfaces can be appropriated to allow for dynamic appearance: (1) dynamic objects, (2) augmented objects, and (3) augmented surroundings. This position paper outlines these three levels and details instantiations of each level that were created in the context of this thesis research.
More and more of the discussion that happens now takes place on the web, whether for work, communities of interest, political and civic discourse, or education. However, little has changed in the design of online discussion systems, such as email, forums, and chat, in the decades they have been available, even as discussions grow in size and scope. As a result, online communities continue to struggle with issues stemming from growing participation, a diversity of participants, and new application domains. To solve these problems, my research is on building novel online discussion systems that give members of a community direct control over their experiences and information within these systems. Specifically, I focus on: 1) tools to make sense of large discussions and extract usable knowledge from them, 2) tools to situate conversations in the context of what is being discussed, and 3) tools to give users more fine-grained control over the delivery of content, so that messages only go to those who wish to receive them.
We present HapticDrone, a concept for generating controllable and comparable force feedback for direct haptic interaction with a drone. As a proof of concept, this paper focuses on creating haptic feedback in one dimension only. To this end, an encountered-type, safe, and untethered haptic display is implemented. We provide an overview of the system and details on how to control the force output of drones. Our current prototype generates forces up to 1.53 N upwards and 2.97 N downwards. This concept serves as a first step toward introducing drones as mainstream haptic devices.
We present Ani-Bot, a modular robotics system that allows users to construct Do-It-Yourself (DIY) robots and interact with them using a mixed-reality approach. Ani-Bot enables a novel user experience by embedding Mixed-Reality Interaction (MRI) in the three phases of interacting with a modular construction kit: Creation, Tweaking, and Usage. In this paper, we first present the system design that allows users to perform MRI instantly once they finish assembling the robot. We then discuss the specific augmentations offered by MRI in each of the three phases.
In this project, by combining thermal feedback with Virtual Reality (VR) and using thermal stimuli to present weather temperature data, we attempt to provide a multi-sensory experience that enhances users' perception of the environment in virtual space. By integrating thermal modules with a current VR head-mounted display to provide thermal feedback directly on the face, and by tuning the thermal stimulus to approximate the feeling of real air temperature, we developed an application in which users can "feel" the weather in a VR environment. A user experiment was conducted to evaluate our design; it verified that thermal feedback can improve users' experience in perceiving the environment. This research also provides a new approach to setting thermal feedback for presenting environmental information in virtual space.
The emergence of social reading services has enabled readers to participate actively in reading activities by means of sharing and feedback. Readers can state their opinion on a book by providing feedback. However, because current e-books are published with fixed, unchangeable content, it is difficult to reflect the reader's feedback on them. In this paper, we propose a system for an adaptive e-book that dynamically updates itself on user participation. To achieve this, we designed a Feedback Block Model and a Feedback Engine. In the Feedback Block Model, at the time of publication, the author defines the type of feedback expected from readers. After publication, the Feedback Engine collects and aggregates the readers' feedback. The Feedback Engine can be configured with drag-and-drop block programming, and hence, even authors inexperienced in programming can create an adaptive e-book.
Global Positioning System (GPS) technology is widely used for outdoor navigation, but it remains challenging to apply to mid-scale or indoor environments, raising issues of reliability, deployment cost, and maintenance. Recently, companies like Google have begun to provide accurate indoor mapping. However, current implementations rely on Wi-Fi and cellular technologies, which have a hard time identifying the user's exact location indoors. This paper addresses two research questions: (1) How do we design a flexible and cost-efficient indoor navigation system for organizations? (2) How do we find an optimized path in a mid-scale/local environment? We propose Jaguar, a novel navigation system that combines a customized KML map with NFC technologies to address these questions. Our system includes an Android mobile application, a web-based map authoring tool, and an implementation of a Cartesian-plane-based path-finding algorithm. Initial testing shows successful adaptation to a school campus environment.
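A minimal sketch of path finding on a Cartesian grid, here using breadth-first search over a 4-connected occupancy grid (Jaguar's actual algorithm and map representation may differ; the grid below is invented):

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search over a 4-connected occupancy grid.
    grid[r][c] == 0 is walkable; returns the list of cells from
    start to goal, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        r, c = q.popleft()
        if (r, c) == goal:
            # Walk the predecessor chain back to the start.
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                q.append((nr, nc))
    return None

# 1s mark obstacles (e.g., walls between corridors).
grid = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
print(shortest_path(grid, (0, 0), (2, 0)))
```

NFC tags scanned along the way would anchor the user's current cell on such a grid.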
Voice assistant technology has expanded the design space for voice-activated consumer products and audio-centric user experiences. To navigate this emerging design space, Speech Synthesis Markup Language (SSML) provides a standard for characterizing synthetic speech through parametric control of prosody elements, i.e., pitch, rate, volume, contour, range, and duration. However, existing voice assistants utilizing Text-to-Speech (TTS) lack expressiveness. The need for a new production workflow for more efficient and emotionally expressive audio content using TTS is discussed. A prototype that allows a user to produce TTS-based content in any emotional tone using voice input is presented. To evaluate the new workflow enabled by the prototype, an initial comparative study is conducted against the parametric approach. Preliminary quantitative and qualitative results suggest the new workflow is more efficient in terms of time to complete tasks and number of design iterations, while maintaining the same level of user-preferred production quality.
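A minimal sketch of how an emotion label might be lowered to SSML prosody attributes. The `pitch`, `rate`, and `volume` attributes are standard SSML `<prosody>` attributes; the emotion-to-prosody presets are invented for illustration and are not the prototype's actual mapping:

```python
def ssml_prosody(text, pitch="+0%", rate="medium", volume="medium"):
    """Wrap text in an SSML <prosody> element inside a <speak> root."""
    return ('<speak><prosody pitch="{}" rate="{}" volume="{}">{}'
            '</prosody></speak>').format(pitch, rate, volume, text)

# Hypothetical emotion-to-prosody presets.
EMOTIONS = {
    "excited": {"pitch": "+15%", "rate": "fast", "volume": "loud"},
    "calm": {"pitch": "-5%", "rate": "slow", "volume": "soft"},
}

print(ssml_prosody("Your package has arrived!", **EMOTIONS["excited"]))
```

In a voice-input workflow, the emotion label would come from the designer's spoken delivery rather than being typed as parameters.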
Virtual Reality (VR) has numerous mechanisms for making a virtual scene more compellingly real. Most effort has been focused on visual and auditory techniques for immersive environments, although some commercial systems now include relatively crude haptic effects through handheld controllers or haptic suits. We present results from a pilot experiment demonstrating the use of Electrical Muscle Stimulation (EMS) to trick participants into thinking a surface is dangerously hot even though it is below 50 °C. This is accomplished by inducing an artificial heat-withdrawal reflex by contracting the participant's biceps shortly after contact with the virtual hot surface. Although the effects of multiple experimental confounds need to be quantified in future work, results so far suggest that EMS could potentially be used to modify temperature perception in VR and AR contexts. Such an illusion has applications for VR gaming as well as emergency response and workplace training and simulation, in addition to providing new insights into the human perceptual system.
We propose a method for determining grip force based on active bone-conducted sound sensing, a form of active acoustic sensing. In our previous studies, we estimated joint angle, hand pose, and contact force by emitting a vibration into the body. To expand the applications of active bone-conducted sound sensing, we estimate grip force with a wrist-worn device. The grip force is determined using the power spectral density as features and gradient boosted regression trees (GBRT). In evaluation experiments, the average error of the estimated grip force was around 15 N, confirming that grip force can be determined with reasonable accuracy.
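The feature-extraction step can be sketched as band-averaged power spectral density values computed from the vibration signal, which would then be fed to a GBRT regressor (e.g., scikit-learn's `GradientBoostingRegressor`). The sampling rate, band count, and synthetic signal below are illustrative, not the paper's actual settings:

```python
import numpy as np

def psd_features(signal, fs=4000, n_bands=8):
    """Band-averaged power spectral density of a vibration signal:
    a plausible fixed-length feature vector for grip-force regression."""
    # Periodogram-style PSD estimate from the one-sided FFT.
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    bands = np.array_split(spectrum, n_bands)
    return np.array([band.mean() for band in bands])

fs = 4000
t = np.arange(fs) / fs
# Synthetic stand-in for a bone-conducted response: 200 Hz tone plus noise.
rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * 200 * t) + 0.1 * rng.standard_normal(fs)
features = psd_features(sig, fs)
print(features.shape)  # one value per frequency band
```

Each grip trial would yield one such vector, paired with a ground-truth force reading for GBRT training.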
Perceptual illusions enable designers to go beyond hardware limitations to create rich haptic content. Nevertheless, spatio-temporal interactions for thermal displays have not been studied thoroughly. We focus on the apparent motion of hot and cold thermal pulses delivered at the thenar eminence of both hands. Here we show that 1000 ms hot and cold thermal pulses overlapping for about 40% of their actuation time are likely to produce a continuous apparent motion sensation. Furthermore, we show that the quality of the illusion (defined as the motion's temporal continuity) was more sensitive to changes in stimulus onset asynchrony (SOA) for cold pulses than for hot pulses.
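For equal-duration pulses, overlap and stimulus onset asynchrony are directly related: an overlap of 40% of a 1000 ms pulse corresponds to an SOA of about 600 ms. A one-line sketch of that arithmetic:

```python
def soa_for_overlap(duration_ms, overlap_fraction):
    """Stimulus onset asynchrony (ms) for two pulses of equal duration
    that overlap for a given fraction of their actuation time."""
    return duration_ms * (1.0 - overlap_fraction)

# The reported condition: 1000 ms pulses overlapping ~40% of the time,
# i.e. an SOA of about 600 ms, yield continuous apparent motion.
print(soa_for_overlap(1000, 0.40))
```

Sweeping the overlap fraction is one way a designer could explore the continuity of the illusion around this operating point.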
The main goal of our research is to develop a haptic display that conveys the shapes, hardness, and textures of objects displayed on near-future 3D haptic TVs. We present a novel handheld device, GraspForm. This device renders the surface shapes and hardness of a virtual object that is represented in an absolute position in real space. GraspForm has a 2×2 matrix of actuated hemispheres for one fingertip, two actuated pads for the palm, and a force feedback actuator for the thumb. Our first experimental results showed that eight participants succeeded in recognizing the side geometries of a cylinder and a square prism regardless of the availability of visual information.
Recent work in 3D printing has focused on tools and techniques to design deformation behaviors using mechanical structures such as joints and metamaterials. In this poster, we explore how to embed and control mechanical springs to create deformable 3D-printed objects. We propose an initial design space of 3D-printable spring-based structures that support a wide range of expressive behaviors, including stretching and compressing, bending, twisting, and all possible combinations. The poster concludes with a brief feasibility test and enumerates future work.
Crowd-powered conversational assistants have been found to be more robust than automated systems, but at the cost of higher response latency and monetary costs. One promising direction is to combine the two approaches for high-quality, low-cost solutions. However, traditional offline approaches to building automated systems with the crowd require first collecting training data from the crowd and then training a model before an online system can be launched. In this paper, we introduce Evorus, a crowd-powered conversational assistant with online-learning capability that automates itself over time. Evorus extends a previous crowd-powered conversation system, reducing its reliance on the crowd over time while maintaining the robustness and reliability of human intelligence, by (i) allowing new chatbots to be added to contribute candidate answers, (ii) learning to reuse past responses to similar queries, and (iii) learning to reduce the amount of crowd oversight necessary to retain quality. Our deployment study with 28 users shows that automated responses were chosen 12.84% of the time and that voting cost was reduced by 6%. Evorus introduces a new framework for constructing crowd-powered conversation systems that gradually automate themselves using machine learning, a concept that we believe can generalize to other types of crowd-powered systems.
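A minimal sketch of the response-reuse idea in (ii), assuming a toy word-overlap (Jaccard) similarity and an illustrative threshold; Evorus's actual retrieval and learned confidence model are more sophisticated, and all names here are our own.

```python
def jaccard(a, b):
    """Word-overlap similarity between two utterances."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def suggest_response(query, history, threshold=0.5):
    """Reuse a past response if a sufficiently similar past query exists."""
    best = max(history, key=lambda qr: jaccard(query, qr[0]), default=None)
    if best and jaccard(query, best[0]) >= threshold:
        return best[1]
    return None  # fall back to the crowd

history = [("what is the weather today", "It's sunny."),
           ("book me a table", "Done!")]
```

For "what is the weather" the similarity to the first stored query is 0.8, so the past response is reused; unrelated queries fall through to the crowd.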
We developed methods and implemented a prototype system to help people find specific signboards in areas where signboards are densely located. We also examined whether the proposed methods reduce the time needed to find a specific signboard. The results showed that the proposed method was superior in cases where multiple signboards had to be searched and background saturation was low.
Presbyopia is a condition in which the lens of the eye loses elasticity, making it difficult to focus on nearby objects. As smartphone use increases, the age at which presbyopia symptoms appear is gradually decreasing. The closer a smartphone is held to the eyes, the harder it is to focus, making the letters and pictures on the screen appear blurred. In this study, we investigated interactions that help improve eye health for people with presbyopia or with habits that may accelerate its onset. Our prototype increases the font size as the smartphone is moved farther from the eyes to improve readability, and it was tested with 20 participants.
Audio podcasts have gained popularity because they are a compelling form of storytelling and are easy to consume. However, they are not as easy to produce: resources are invested in the research, recording, and editing process, and the average length of an episode is over an hour. Some audio podcasts could benefit from visuals to increase engagement and learning, but manually curating them can be arduous and time-consuming. We introduce a tool for automatically visualizing audio podcasts, currently focused on the genre of travelogues. Our system first time-aligns the transcript of a given podcast, uses NLP techniques to extract entities and track how interesting or relevant they are throughout the podcast, and then retrieves appropriate visual data to describe them, either through transitions on a map or professionally taken photographs with captions. By automatically creating a visual narrative to accompany a podcast, we hope our tool can give listeners a better sense of the podcast's topic.
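As a toy sketch of the relevance-tracking step, the snippet below counts mentions of known place entities in a transcript segment and picks the most salient one to drive a map transition or photo choice. The entity set and bag-of-words matching are illustrative stand-ins for the system's NLP pipeline.

```python
from collections import Counter

def salient_entity(segment, entities):
    """Return the most-mentioned known entity in a transcript segment, or None."""
    counts = Counter(w for w in segment.lower().split() if w in entities)
    return counts.most_common(1)[0][0] if counts else None

entities = {"kyoto", "tokyo", "osaka"}
top = salient_entity("We left Tokyo and arrived in Kyoto where Kyoto temples await",
                     entities)
```

Here "kyoto" wins with two mentions; a real system would run this per time-aligned segment so the visuals track the narration.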
In augmented and virtual reality, there may be many 3D planar windows with 2D texts, images, and videos on them. Projective Windows is a technique using projective geometry to bring any near or distant window instantly to the fingertip and then to scale and position it simultaneously with a single, continuous flow of hand motion.
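One way to see the projective-geometry idea: if a window is brought from distance d to distance d′ from the eye, scaling its width by d′/d preserves its apparent (angular) size. The sketch below is our own simplification of that relationship, not the paper's implementation.

```python
def rescale_for_distance(width, dist, new_dist):
    """Scale a window so its apparent (angular) size stays constant."""
    return width * (new_dist / dist)

# A 2 m wide window at 4 m, pulled to 1 m from the eye, shrinks to 0.5 m
# while subtending the same visual angle for the viewer.
w = rescale_for_distance(2.0, 4.0, 1.0)
```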
In Japan, the necessity of saving energy has risen since the nuclear accident caused by the Great East Japan Earthquake on March 11, 2011. Reducing energy usage is also required due to rapid increases in electricity consumption during the scorching summer heat of recent years. Information on energy consumption is commonly provided through "visualization." In contrast, olfactory stimulation can be delivered while people are working and remains effective even when arousal is low. This study considers applications based on the concept of "smellization" of information using olfactory stimulation. In this paper, we introduce the configuration and example operation of a system developed to evoke public energy-conservation behavior using smell.
"Walk-In Music" is a system that provides a new walking experience through synchronized music and pseudo-gravity. The system synchronizes each step with the music being listened to, creating the feeling of generating music through walking. This produces a Walk-In state in which music and walking are always consistent. In this state, changing the speed of the music can make the pedestrian feel pseudo-gravity based on pseudo-haptics. Our results indicate that changing the speed of the music during the Walk-In state made walking speed faster or slower; we call this a Walk-Shift. This demonstrates the possibility of controlling personal walking through music. Walk-In Music creates a pleasant musical experience and proposes a new relationship between people and music that leads to behavior change.
In this paper, we propose a grouping scheme that classifies applications into groups for individual users by utilizing their geometrical information on a tabletop system. The proposed scheme investigates the geometrical information of each application, such as its position on the display and its rotational information, and then groups the applications of each individual user by utilizing a classifier with the geometrical information. We evaluate the proposed scheme with lab experiments, and the results show that, on average, 95.6% of applications are well classified into their users.
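The paper trains a classifier on each window's position and rotation; the sketch below substitutes a much simpler nearest-facing-angle heuristic (with a hypothetical seat layout and angles) just to illustrate how rotational information can indicate which user a window belongs to.

```python
# Hypothetical seats around a rectangular tabletop, keyed by the rotation
# (degrees) of windows oriented toward that user.
SEAT_ANGLES = {"north": 180, "south": 0, "east": 270, "west": 90}

def assign_to_user(rotation_deg):
    """Assign a window to the seat with the nearest facing angle (mod 360)."""
    def angular_dist(a, b):
        d = abs(a - b) % 360
        return min(d, 360 - d)
    return min(SEAT_ANGLES, key=lambda s: angular_dist(rotation_deg, SEAT_ANGLES[s]))
```

A window rotated 10° is assigned to the "south" seat, one rotated 175° to "north"; the paper's trained classifier additionally uses display position to reach its reported 95.6% accuracy.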
Screencasts, in which a computer's screen is broadcast to a large audience on the web, are becoming popular as an online educational tool. Among the various types of screencast content, those involving text editing, such as computer programming, are especially popular. Emerging platforms support such text-based screencasts by recording every character insertion and deletion from the creator and reconstructing the playback on the viewer's screen. However, these platforms lack rich support for creating and editing the screencast itself, mainly due to the difficulty of manipulating recorded text changes: the changes are tightly coupled in sequence, so modifying an arbitrary part of the sequence is not trivial. We present a non-linear editing tool for text-based screencasts. With the proposed selective history rewrite process, our editor allows users to substitute an arbitrary part of a text-based screencast while preserving the overall consistency of the rest of the screencast.
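A minimal sketch of why rewriting recorded text changes requires care: replacing one recorded insertion shifts the positions of every later operation, so the remainder of the log must be adjusted for playback to stay consistent. The operation format and rewrite rule here are illustrative, not the paper's.

```python
def replay(ops):
    """Reconstruct the final text from a log of (kind, pos, string) ops."""
    text = ""
    for kind, pos, s in ops:
        if kind == "ins":
            text = text[:pos] + s + text[pos:]
        else:  # "del": s is the deleted string
            text = text[:pos] + text[pos + len(s):]
    return text

def rewrite_insert(ops, index, new_text):
    """Replace one recorded insertion, shifting later ops to stay consistent."""
    kind, pos, old = ops[index]
    assert kind == "ins"
    delta = len(new_text) - len(old)
    out = list(ops)
    out[index] = (kind, pos, new_text)
    for i in range(index + 1, len(out)):
        k, p, s = out[i]
        if p >= pos + len(old):  # op lands after the edited span: shift it
            out[i] = (k, p + delta, s)
    return out

ops = [("ins", 0, "hello "), ("ins", 6, "world")]
ops2 = rewrite_insert(ops, 0, "hi ")
```

Replaying `ops` yields "hello world"; after the rewrite, replaying `ops2` yields "hi world" because the second insertion was shifted from position 6 to 3. Real screencast logs also interleave deletions and cursor moves, which is what makes arbitrary edits non-trivial.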
Wearable devices combined with VR/AR technology have become a research hotspot in recent years. In some research, tactile displays are placed on the skin and synchronized with the VR/AR environment; researchers use these displays to simulate varied embodied feelings to enhance immersion. In game entertainment, a scenario may call for presenting the user with the feeling of something passing through the body, which is physically impossible. We therefore explore simulating this feeling with thermal feedback. We use two thermal modules bonded to the two sides of the wrist (inside and outside). When the two modules are actuated sequentially, users perceive the stimuli and interpret them as a feeling of passing through. In this paper, we introduce the interface and describe an experiment to determine the principles of this thermo-tactile passing-through illusion.
As communication technologies continue to rapidly evolve, older adults face challenges to access systems and devices, which may increase their social isolation. Our project investigates the design of a digital pen and paper-based communication system that allows users to connect to their family and friends' e-mail inboxes. Given the unique needs of older adults, we opted for a participatory design approach, prototyping the system with 22 older adults through a series of design workshops in two locations. Four individuals used our resulting system over a period of two weeks. Based on their feedback and a review of design workshops, we are currently in the process of refining our interface and preparing for a larger deployment study.
Blind users browse the web using screen readers. Screen readers read the content on a web page sequentially via synthesized speech. The linear nature of this process makes it difficult to obtain an overview of the web page, which creates navigation challenges. To alleviate this problem, we have developed ScreenTrack, a browser extension that summarizes a web page's accessibility features into a short, dynamically generated soundtrack. Users can quickly gain an overview of the presence of web elements useful for navigation on a web page. Here we describe ScreenTrack and discuss future research plans.
Children with ASD (Autism Spectrum Disorder) have difficulty expressing their feelings and needs, so their teachers must be very familiar with them to adjust teaching content in training lessons. In this paper, we present an adaptive training system with EEG (electroencephalogram) devices for autistic children. We designed an EEG helmet to monitor children's attention levels, and a video chat system with virtual cartoon faces overlaid on the teacher's face. The cartoon faces are synchronized with the performer's facial movements to help trainers express themselves in an exaggerated way. When the EEG helmet detects a drop in attention, the cartoon face adjusts automatically and tries to draw attention back by changing the cartoon type, colors, brightness, and so on. Each change, and the child's response to it, is traced by the helmet and analyzed for improvements. Through continuous iterative learning, the system becomes better at avoiding children's physical exhaustion. The system was introduced as a specific training lesson at an ASD school, and a preliminary experiment indicated encouraging results.
Children with ASD (Autism Spectrum Disorder) have social communication difficulties partly due to their abnormal avoidance of eye contact with human faces, yet they show a normal visual processing strategy for cartoon faces. In this paper, we present KinToon, a face-to-face communication enhancement system to help ASD children in their training lessons. Our system uses a Kinect to scan the human face, extracts key points from the facial contour, and matches them to corresponding key points on a cartoon face. The modified cartoon face is projected onto the communicator's face to achieve the effect of dynamic "makeup." ASD children then talk to a communicator wearing dynamic cartoon makeup, which reduces the stress of interacting with people and makes it easier for them to understand emotions. The interactive devices were used in an ASD training lesson, and eye tracking showed our approach to be relatively effective in encouraging ASD children to gather more emotional information and make more eye contact with people.
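As a toy sketch of the key-point matching step, the snippet below fits a uniform scale and translation that map detected face key points onto corresponding cartoon key points (ignoring rotation for brevity). The point sets and the simplified transform model are illustrative, not KinToon's actual registration.

```python
import numpy as np

def fit_scale_translation(src, dst):
    """Fit dst ≈ s * src + t for corresponding 2-D point sets."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    s = dst.std() / src.std()                    # uniform scale
    t = dst.mean(axis=0) - s * src.mean(axis=0)  # translation
    return s, t

# Hypothetical face-contour key points (src) and cartoon key points (dst).
src = [(0, 0), (2, 0), (2, 2), (0, 2)]
dst = [(1, 1), (5, 1), (5, 5), (1, 5)]
s, t = fit_scale_translation(src, dst)
```

Here the fitted transform is a scale of 2 and a translation of (1, 1); applying it to the cartoon mesh would align it with the tracked face before projection.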