Affective Learning Companion
Developing learning experiences that facilitate self-actualization
and creativity is among the most important goals of our society in preparation
for the future. To facilitate deep understanding, learners must have
the opportunity to develop multiple and flexible perspectives. The process
of becoming an expert involves failure, understanding failure, and the
motivation to move onward. Meta-cognitive awareness and personal strategies
can play a role in developing an individual's ability to persevere
through failure and to combat other influences that dilute motivation. This research develops
theory and a new system for using affective sensing and appropriate relational-agent
interactions to support learning and meta-cognitive strategies for perseverance
through failure. We are investigating, designing, building, and evaluating
relational agents that may act as intelligent tutors, virtual peers, or
a group of virtual friends to support learning, creativity, playful imagination,
and motivation, and to foster the development of meta-cognitive skills that
persist beyond interaction with the technology.
Affective Tangibles
People naturally express frustration through the use
of their motor skills. The purpose of the Affective Tangibles project
is to develop physical objects that can be grasped, squeezed, thrown,
or otherwise manipulated via a natural display of affect. Current tangibles
include a PressureMouse, affective pinwheels that are mapped to skin
conductivity, and a voodoo doll that can be shaken to express frustration.
We have found that people often increase the intensity of muscle movements
when experiencing frustrating interactions.
Affective-Cognitive Framework for Learning and Decision-Making
Recent affective neuroscience and psychology indicate that human affect
and emotional experience play a significant, and useful, role in human
learning and decision-making. Most machine-learning and decision-making
models, however, are based on older, purely cognitive theories, and are
slow, brittle, and awkward to adapt. We aim to redress many of these
classic problems by developing new models that integrate affect with
cognition. Ultimately, such improvements will allow machines to make
smarter and more human-like decisions for better human-machine interactions.
Combining Multiple Modalities to Detect Learner's Interest
We are interested in combining multiple modalities to detect affect.
So far, most of the work in affective computing focuses on only a single
channel of information. This work extends earlier work by incorporating
information from multiple modalities. The problem is posed as a combination
of classifiers in a probabilistic framework that naturally explains
the concepts of experts and critics. Each channel of data has an associated
expert that generates beliefs about the correct class using
only that modality. These beliefs are combined using probabilistic models
of error and critics, which predict the performance of each individual
expert on the current input. We
demonstrate the multi-sensor classification scheme on the task of detecting
the affective state of interest in children trying to solve a puzzle.
Sensory information from the face, posture, and the state of
the puzzle is combined in a probabilistic framework, and we demonstrate
that this method achieves much better recognition accuracy than classification
based on individual channels. Further, the critic-driven averaging,
which is a special case of the proposed framework, outperforms all the
other classifier combination methods applied to this problem.
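As a minimal sketch of this critic-driven combination (the two-class setup,
channel names, and weights below are illustrative assumptions, not the
project's actual implementation):

    import numpy as np

    def combine_experts(expert_posteriors, critic_reliabilities):
        # Critic-driven averaging: weight each modality expert's class
        # posterior by its critic's predicted reliability on this input.
        weights = np.asarray(critic_reliabilities, dtype=float)
        weights /= weights.sum()                   # normalize critic weights
        combined = np.zeros_like(np.asarray(expert_posteriors[0], dtype=float))
        for posterior, w in zip(expert_posteriors, weights):
            combined += w * np.asarray(posterior)  # weighted average of beliefs
        return combined / combined.sum()           # renormalize to a distribution

    # Hypothetical experts for "interested" vs. "not interested":
    face, posture, puzzle = [0.70, 0.30], [0.55, 0.45], [0.20, 0.80]
    # Critics judge the face channel unreliable on this particular input:
    print(combine_experts([face, posture, puzzle], [0.2, 0.9, 0.7]))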
Digital Story Explication as it Relates to Emotional Needs and Learning
This project aims to address emotional needs and develop
emotional intelligence. The system, G.I.R.L.S. Talk (Girls Involved
in Real Life Sharing), will allow users to reflect actively upon the
emotions related to their situations through the construction of pictorial
narratives. Users will be able to gain new knowledge and understanding
about themselves and others through the exploration of authentic and
personal experiences. The system will employ new, common-sense reasoning
technology, enabling it to infer affective content from the users' stories
and support emotional reflection. A similar story will be extracted
from the database and displayed to the users, allowing them to hear
real stories, share their feelings and experiences, and reflect upon
these in relation to their personal situations. We expect that such
reflection will facilitate development of new perspectives on dealing
with life's events.
EmoteMail
EmoteMail is an email client augmented to convey to the recipient
aspects of the writer's state during the composition of an email.
The client captures facial expressions and typing speed
and introduces them as design elements. These contextual cues provide
extra information that can help the recipient decode the tone of the
email. Moreover, the contextual information is gathered and automatically
embedded as the sender composes the email, allowing an additional channel
of expression.
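As a hedged illustration of how one such contextual cue might be gathered,
the sketch below tracks typing speed per paragraph; the per-paragraph
granularity and class names are assumptions, not EmoteMail's actual design:

    import time

    class TypingSpeedTracker:
        # Accumulates keystroke timestamps and reports characters per
        # second for each paragraph of the message being composed.
        def __init__(self):
            self.paragraphs = [[]]  # one timestamp list per paragraph

        def on_keystroke(self, char):
            self.paragraphs[-1].append(time.monotonic())
            if char == "\n":        # a newline starts a new paragraph
                self.paragraphs.append([])

        def speeds(self):
            out = []
            for stamps in self.paragraphs:
                if len(stamps) > 1 and stamps[-1] > stamps[0]:
                    out.append(len(stamps) / (stamps[-1] - stamps[0]))
            return out              # characters/second per paragraph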
Emotional DJ
The technology in this project changes facial expressions in videos
without the system knowing anything in particular about the person's
face ahead of time. There are a few reasons to create something like
this: first, it provides an artistic tool with which to alter photos
or videos; second, it could be set up to let people open-endedly explore
their facial communication and expressiveness by playing with a real-time
video of their own current face; finally, if made to work with regular
video, it would demonstrate an unexpected way in which
we cannot always trust the video we love to consume.
Guilt Detection
The goal of this project is to produce a guilt detector. We have created
an experiment that is designed to produce feelings of guilt of varying
levels in different groups while we record EKG and skin conductivity.
By examining the differences in physiology across the conditions, we
hope to build a classifier to determine which condition, and thus which
level of guilt, an individual is experiencing.
INNER-active Journal
The purpose of the INNER-active Journal system is to provide a way for
users to reconstruct their emotions around events in their lives, and
to see how recall of these events affects their physiology. Expressive
writing, a task in which the participant is asked to write about extremely
emotional events, is presented as a means towards story construction.
Previous use of expressive writing has shown profound benefits for both
psychological and physical health. In this system, measures of skin
conductivity, instantaneous heart rate, and heart stress entropy are
used as indicators of activities occurring in the body. Users have the
ability to view these signals after taking part in an expressive writing
task.
Moral Sensors
The computer's emerging capacity to communicate an individual's affect
raises critical ethical concerns. Additionally, designers of perceptual
computer systems face moral decisions about how the information gathered
by computers with sensors can be used. As humans, we have ethical considerations
that come into play when we observe and report each other's behavior.
Computers, as they are currently designed, do not employ such ethical
considerations. The subject of this project will be evaluations that
assess the ethical acceptability of perceptual computers. The goal is
to make a perceptual computer that behaves ethically, in the eyes of
its users. More specifically, this project will conduct a series of
evaluations of affect-mediating communication systems whose designs are
motivated by different ethical philosophies.
Mouse-Behavior Analysis and Adaptive Relational Agents
The goal of this project is to develop tools to sense and adapt to a
user's affective state based on his or her mouse behavior. We are developing
algorithms to detect frustration level for use in usability studies.
We are also exploring how more permanent personality characteristics
and changes in mood are reflected in the user's mouse behavior.
Ultimately, we seek to build adaptive relational agents that tailor
their interactions with the user based on these sensed affective states.
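A minimal sketch of the kind of features such algorithms might compute from
a mouse-event log (the specific features and tuple format are illustrative
assumptions):

    import math

    def mouse_features(events):
        # events: time-ordered (t_seconds, x, y, clicked) tuples.
        dists, speeds, clicks = [], [], 0
        for (t0, x0, y0, _), (t1, x1, y1, c1) in zip(events, events[1:]):
            d = math.hypot(x1 - x0, y1 - y0)
            dists.append(d)
            if t1 > t0:
                speeds.append(d / (t1 - t0))
            clicks += bool(c1)
        duration = events[-1][0] - events[0][0] if len(events) > 1 else 0.0
        return {
            "mean_speed": sum(speeds) / len(speeds) if speeds else 0.0,
            "path_length": sum(dists),
            "click_rate": clicks / duration if duration > 0 else 0.0,
        }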
Pattern Recognition and Learning
This project develops basic theories and tools that enable computers
to make inferences from data, such as determining a user's affective
states. The approach is Bayesian: formulating probabilistic models on
the basis of domain knowledge and training data, and then performing
inference according to the rules of probability theory. Bayesian approaches
have been implemented in the context of curve fitting, mixture-density
estimation, principal-components analysis (PCA), automatic relevance
determination, and spectral analysis. Current work has yielded a Bayesian
spectral analysis tool for nonstationary and unevenly sampled signals,
such as electrocardiogram (EKG) signals, which outperforms other known
methods. We have developed a new adaptive Monte Carlo method, which
can be applied to any generalized linear model and which greatly speeds
up the sampling process. Additionally, we have proposed a new, principled
way to combine multiple classifiers in a Bayesian framework. Recently,
we have developed Bayesian conditional random fields for joint classification
of structured data, such as sequences, images, and webs.
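The common thread in these tools is the standard Bayesian recipe: a
posterior over model parameters $\theta$ given data $\mathcal{D}$, with
predictions obtained by averaging over that posterior:

    p(\theta \mid \mathcal{D}) = \frac{p(\mathcal{D} \mid \theta)\, p(\theta)}
                                      {\int p(\mathcal{D} \mid \theta')\, p(\theta')\, d\theta'},
    \qquad
    p(x_* \mid \mathcal{D}) = \int p(x_* \mid \theta)\, p(\theta \mid \mathcal{D})\, d\theta .

The methods above differ mainly in how they approximate these generally
intractable integrals.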
Personal Heart-Stress Monitor
The saying, "if you can't measure it, you can't manage it"
may be appropriate for stress. Many people are unaware of their stress
level, and of what is good or bad for it. The issue is complicated by
the fact that while too much stress is unhealthy, a certain amount of
stress can be healthy as it motivates and energizes. The "right"
level varies with temperament, task, and other factors, many of which
are unknown. There seems to be no data analyzing how stress levels vary
for the average healthy individual, over day-to-day activities. We would
like to build a device that helps to gather and present data for improving
an individual's understanding of both healthy and unhealthy stress in
his or her life. The device itself should be comfortable and should
not increase the user's stress. (It is noteworthy that stress monitoring
is also important in human-computer interaction for testing new designs.)
Currently, we are building a new, wireless, stress-monitoring system
by integrating Fitsense's heart-rate sensors and Motorola's iDEN cell
phone with our heart-rate-variability estimation algorithm.
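For illustration, two textbook heart-rate-variability summaries that such a
system could compute from beat-to-beat (RR) intervals are sketched below;
these are generic measures, not the group's own estimation algorithm:

    import math

    def hrv_measures(rr_ms):
        # rr_ms: consecutive beat-to-beat intervals in milliseconds.
        n = len(rr_ms)
        mean_rr = sum(rr_ms) / n
        sdnn = math.sqrt(sum((r - mean_rr) ** 2 for r in rr_ms) / (n - 1))
        diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
        rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
        return {"mean_hr_bpm": 60000.0 / mean_rr,
                "sdnn_ms": sdnn, "rmssd_ms": rmssd}

    print(hrv_measures([812, 790, 845, 828, 800, 910, 870]))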
Posture Recognition Chair
We have developed a system to recognize posture patterns and associated
affective states in real time, in an unobtrusive way, from a set of
pressure sensors on a chair. This system discriminates states of children
in learning situations, such as when the child is interested, or is
starting to take frequent breaks and looking bored. The system uses
pattern recognition techniques, while watching natural behaviors, to
"learn" what behaviors tend to accompany which states. The
system thus detects the surface-level behaviors (postures) and their
mappings during a learning situation in an unobtrusive manner, so that
sensing does not interfere with the natural learning process. Through the
chair, we can reliably detect nine static postures, and four temporal
patterns associated with affective states.
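A minimal sketch of one way static postures could be classified from the
chair's pressure maps (nearest-centroid matching is an illustrative stand-in
for the system's actual pattern-recognition techniques):

    import numpy as np

    def train_centroids(pressure_maps, labels):
        # Average the flattened seat/back pressure maps for each posture label.
        groups = {}
        for m, lab in zip(pressure_maps, labels):
            groups.setdefault(lab, []).append(np.asarray(m, float).ravel())
        return {lab: np.mean(vs, axis=0) for lab, vs in groups.items()}

    def classify_posture(pressure_map, centroids):
        # Assign the posture whose centroid is nearest in pressure space.
        v = np.asarray(pressure_map, float).ravel()
        return min(centroids, key=lambda lab: np.linalg.norm(v - centroids[lab]))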
Robotic Computer
A robotic computer that moves its monitor "head" and "neck,"
but that has no explicit face, is being designed to interact with users
in a natural way for applications such as learning, rapport-building,
interactive teaching, and posture improvement. In all these applications,
the robot will need to move in subtle ways that express its state and
promote appropriate movements in the user, but that don't distract or
annoy. Toward this goal, we are giving the system the ability to recognize
and to produce subtle expressions.
Wearable Relational Devices for Stress Monitoring
This research aims to build a system for data collection, annotation,
and feedback that is part of a longer-term research plan to gather data
to understand more about stress and physiological signals involved in
its expression. The first phase consists of building a wearable apparatus
for gathering data. The challenge here is getting as many accurate labels
(annotations) from the user as possible, while he or she goes about
natural daily activities. The problem is that getting such annotations
is disruptive, and is itself likely to increase stress, which can interfere
with the signals being measured, and make users less likely to collect
a lot of data. The hypothesis is that some ways of interrupting would
be less stressful than others. Thus, the second phase focuses on implementing
different means of interrupting the user for annotations. These ways
will be informed by prior results on both relational and attentional
strategies. Overall, this research should contribute not only a new
system for gathering annotations useful for studies of stress, but also
new insights into the value of using relational/attentional
strategies in a task that involves a large number of interruptions.
Prior Projects:
AboutFace
AboutFace is a user-dependent system that learns patterns
and discriminates between the facial movements characterizing confusion
and interest. The system uses a piezoelectric sensor to detect eyebrow
movements and begins with a training session to calibrate the unique
values for each user. After the training session, the system uses these
levels to develop an expression profile for the individual user. The
system has many potential uses, ranging from computer and video-mediated
conversations to interactions with computer agents. This system is an
alternative to using camera-based computer vision analysis to detect
faces and expressions. Additionally, when communicating with other people,
users of this system also have the option of conveying their expressions
anonymously by wearing a pair of glasses that conceals their expressions
and the sensing device.
Adaptive, Wireless, Signal Detection and Decoding
In this project, we propose a new Bayesian receiver for signal detection
in flat-fading channels. First, the detection problem is formulated
as an inference problem in a hybrid dynamic system that has both continuous
and discrete variables. Then, an expectation propagation algorithm is
proposed to address the inference problem. As an extension of belief
propagation, expectation propagation efficiently approximates a Bayesian
estimation by iteratively propagating information between different
nodes in the dynamic system and projecting exact messages into the exponential
family. Compared to sequential Monte Carlo filters and smoothers, the
new method has much lower computational complexity since it makes analytically
deterministic approximations instead of Monte Carlo approximations.
Our simulations demonstrate that the new receiver achieves accurate
detection without the aid of any training symbols or decision feedback.
Future work involves joint decoding and channel estimation, where convolutional
codes are used to protect signals from noise corruption. Initial results
are promising.
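One common form of such a hybrid dynamic system, written here with a
first-order Gauss-Markov fading process as an illustrative choice:

    y_t = h_t s_t + v_t, \qquad h_t = \alpha h_{t-1} + w_t,

where $s_t$ is the discrete transmitted symbol, $h_t$ the continuous fading
coefficient, $v_t$ and $w_t$ Gaussian noise, and $|\alpha| < 1$ controls how
quickly the channel decorrelates. Expectation propagation approximates the
posterior over the mixed discrete-continuous state $(s_t, h_t)$ with a
tractable exponential-family distribution.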
Affect in Speech: Assembling a Database
The aim of this project is to build a database of natural speech showing
a range of affective variability. It is an extension of our ongoing
research focused on building models for automatic detection of affect
in speech. At a very basic level, training such systems requires a large
corpus of speech containing a range of emotional vocal variation. A
traditional approach to this research has been to assemble databases
where actors have provided the affective variation on demand. However,
this method often results in unnatural sounding speech and/or exaggerated
expressions. We have developed a prototype of an interactive system
that guides a user through a question-and-answer session. Without any
rehearsals or scripts, the user navigates, through touch and spoken language,
an interface guided by embodied conversational agents that prompt the
user to speak about an emotional experience. Some of the issues we are
addressing include the design of the text and character behavior (including
speech and gesture) so as to obtain a convincing and disclosing interaction
with the user.
Affective Carpet
The "Affective Carpet" is a soft, deformable surface made
of cloth and foam, which detects continuous pressure with excellent
sensitivity and resolution. It is being used as an interface for projects
in affective expression, including as a controller to measure a musical
performer's direction and intensity in leaning and weight-shifting patterns.
Affective Mirror
The Affective Mirror is an attempt to build a fully automated system
that intelligently responds to a person's affective state in real time.
Current work is focused on building an agent that realistically mirrors
a person's facial expression and posture. The agent detects affective
cues through a facial-feature tracker and a posture-recognition system
developed in the Affective Computing group; based on what affect a person
is displaying, such as interest, boredom, frustration, or confusion,
the system responds with matching facial affect and/or posture. This
project is designed to be integrated into the Learning Companion Project,
as part of an early phase of showing rapport-building behaviors between
the computer agent and the human learner.
Affective Social Quest
ASQ investigates ways to teach social-emotion skills to children interactively
with toys. One of the first goals is to help autistic children recognize
expressions of emotion in social situations. The system uses four "dwarf"
plush dolls expressing sadness, happiness, surprise, and anger; each communicates
wirelessly with the system, which detects which doll the child has selected.
The computer plays short entertaining video clips displaying examples
of the four emotions and cues the child to pick a dwarf that closely
matches that emotion. Future work includes improving the ability of
the system to recognize direct displays of emotion by the child.
Affective Tigger
The Affective Tigger is a plush toy designed to recognize and react
to certain emotional behaviors of its playmate. For example, the toy enters
a state of "happy," moving its ears upward and emitting a
happy vocalization when it recognizes that the child has postured the
toy upright and is bouncing it along the floor. Tigger has five such
states, involving recognizing and responding with an emotional behavior.
The resulting behavior Tigger demonstrates allows it to serve as an
affective mirror for the child's expression. This work involved designing
the toy and evaluating play sessions with dozens of children.
The toy was shown to successfully communicate some aspects of emotion,
and to prompt behaviors that are interesting to researchers trying to
learn about the development of human emotional skills such as empathy.
AffQuake
AffQuake is an attempt to incorporate signals that relate to a player's
affect into id Software's Quake II in a way that alters game play. Several
modifications have been made that cause the player's avatar within Quake
to alter its behaviors depending upon one of these signals. In StartleQuake,
when a player becomes startled, his or her avatar also becomes startled
and jumps back. AffQuake also changes the size of the player's avatar in
relation to the player's arousal, representing excitement by average
skin-conductivity level and growing the avatar when this level
is high.
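A minimal sketch of such a mapping (the baseline normalization and scale
range are illustrative assumptions, not the mod's actual tuning):

    def avatar_scale(scl_window, baseline, lo=0.8, hi=1.6):
        # Map a window of skin-conductance samples to an avatar scale
        # factor: readings above the player's baseline grow the avatar.
        mean_scl = sum(scl_window) / len(scl_window)
        excitement = max(0.0, min(1.0, (mean_scl - baseline) / baseline))
        return lo + (hi - lo) * excitement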
Automatic Facial Expression Analysis
Recognizing non-verbal cues, which constitute a large percentage of
our communication, is a prime facet of building emotionally intelligent
systems. Facial expressions and movements such as a smile or a nod are
used either to fulfill a semantic function, to communicate emotions,
or as conversational cues. We are developing an automatic tool using
computer vision and various machine-learning techniques, which can detect
the different facial movements and head gestures of people while they
are interacting naturally with the computer. Past work on this project
determined techniques to track upper facial features (eyes and eyebrows)
and detect facial actions corresponding to those features (eyes squinting
or widening, eyebrows raised). The ongoing project is expanding its
scope to track and detect facial actions corresponding to the lower
facial features. Further, we hope to integrate the facial-expression analysis
module with other sensors developed by the Affective Computing group
to reliably detect and recognize different emotions.
BioMod
BioMod is an integrated interface for users of mobile and wearable devices,
monitoring various physiological signals such as the electrocardiogram,
with the intention of providing useful and comfortable feedback about
medically important information. The first version of this system includes
new software for monitoring stress and its impact on heart functioning,
and the ability to wirelessly communicate this information over a Motorola
cell phone. One application under development is the monitoring of stress
in patients who desire to stop smoking: the system will alert an "on-call"
trained behavior-change assistant when the smoker is exhibiting physiological
patterns indicative of stress or likely relapse, offering an opportunity
for encouraging intervention at a point of weakness. Challenges in this
project include the development of an interface that is easy and efficient
to use on the go, is sensitive to user feelings about the nature of
the information being communicated, and accurately recognizes the patterns
of physiological signals related to the conditions of interest.
Car Phone Stress
We are building a system that can watch for certain signs of stress
in drivers, specifically stress related to talking on the car phone,
as may be caused by increased mental workload. To gather data for training
and testing our system, subjects were asked to 'drive' in a simulator
past several curves while keeping their speed close to a predetermined
desired constant value. In some cases they were simultaneously asked
to listen to random numbers from speech-synthesis software and to
perform simple mathematical tasks over a telephone headset. Several
measures drawn from the subjects' driving behavior were examined as
possible indicators of the subjects' performance and of their mental
workload. When subjects were instructed (by a visible sign) to brake,
most braked within 0.7-1.4 seconds after the sign came into view. However,
in a significant number of incidents, subjects never braked or braked
1.5-3.5 seconds after the message; almost all of these incidents were
when subjects were on the phone. On average, we found that drivers on
the phone braked 10% slower than when not on the phone; additionally,
the variance in their braking time was four times higher -- suggesting
that although delayed driver reactions were infrequent, when delays
happened they could be large and potentially dangerous. Furthermore,
their infrequency could create a false sense of security. In future
experiments, subjects' physiological data will be analyzed jointly with
measures of workload, stress and performance.
Cardiac PAF Detection and Prediction
PAF (paroxysmal atrial fibrillation) is a dangerous form of cardiac
arrhythmia that poses severe health risks, sometimes leading to heart
attacks, the recognized number-one killer in the developed world. The
technical challenges for detecting and predicting PAF include accurate
sensing, speedy analysis, and a workable classification system. To address
these issues, electrocardiogram (ECG) data from the PhysioNet Online
Database will be analyzed using new spectrum estimation techniques to
develop a program able to predict, as well as recognize, the onset of
specific cardiac arrhythmias such as PAF. The system could then be incorporated
into wearable/mobile medical devices, allowing for interventions before
cardiac episodes occur, and potentially saving many lives.
Conductive Chat
While instant-messaging clients are widely used for interpersonal
communication, they lack the richness of face-to-face conversations.
Without the benefit of eye contact and other non-verbal "back-channel
feedback," text-based chat users frequently resort to typing "emoticons"
and extraneous punctuation in an attempt to incorporate contextual affect
information in the text communication. Conductive Chat is an instant
messenger client that integrates users' changing skin conductivity levels
into their typewritten dialogue. Skin conductivity level (also referred
to as galvanic skin response) is frequently used as a measure of emotional
arousal, and high levels are correlated with cognitive states such as
high stress, excitement, and attentiveness. On an expressive level,
Conductive Chat communicates information about each user's arousal in
a consistent, intuitive manner, without needing explicit controls or
explanations. On a communication-theory level, this new communication
channel allows for more "media rich" conversations without
requiring more work from the users.
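As a hedged sketch of how arousal might be rendered in the dialogue, the
function below interpolates a message color between a calm and an aroused
endpoint; the color scheme and calibration points are assumptions, not
Conductive Chat's actual design:

    def message_color(scl, calm_scl, aroused_scl):
        # Interpolate from cool blue (calm) to warm red (aroused)
        # based on the sender's current skin-conductivity level.
        t = (scl - calm_scl) / max(aroused_scl - calm_scl, 1e-9)
        t = max(0.0, min(1.0, t))
        r, b = int(255 * t), int(255 * (1 - t))
        return "#{:02x}{:02x}{:02x}".format(r, 64, b)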
Detection and Analysis of Driver Stress
Driving is an ideal test bed for detecting stress in natural situations.
Four types of physiological signals (electrocardiogram, electromyogram,
respiration, and skin conductivity related to autonomic nervous system
activation) were collected in a natural driving situation under various
driving conditions. The occurrence of natural stressors was designed
into the driving task and validated using driver self-reports, real-time
third-party observations, and independently coded video records of road
conditions and facial expression. Features reflecting heart-rate variability
derived from the adaptive Bayesian spectrum estimation, the rate of
skin-conductivity orienting responses, and the spectral characteristics
of respiration were extracted from the data. Initial pattern-recognition
results show separation for the three types of driving states: rest,
city, and highway, and some discrimination within states for cases in
which the state precedes or follows a difficult turn-around or toll
situation. These results yielded from 89-96 percent accuracy in recognizing
stress level. We are currently investigating new, advanced means of
modeling the driver data.
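For illustration, one simple way to count skin-conductivity orienting
responses, a feature mentioned above, is sketched here; the rise threshold
and window length are illustrative choices, not the study's criteria:

    def count_orienting_responses(scl, fs, rise_uS=0.05, window_s=1.0):
        # scl: skin-conductance samples (microsiemens); fs: sample rate (Hz).
        # Count rises exceeding rise_uS within a window_s-second window.
        w = max(1, int(window_s * fs))
        count, i = 0, 0
        while i + w < len(scl):
            if scl[i + w] - scl[i] > rise_uS:
                count += 1
                i += w          # skip past the detected response
            else:
                i += 1
        return count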
Gene Expression Data Analysis
This research aims to classify gene expression data sets into different
categories, such as normal vs. cancer. The main challenge is that thousands
of genes are measured in the micro-array data, while only a small subset
of genes are believed to be relevant for disease classification. We
have developed a novel approach called "predictive automatic relevance
determination;" this method brings Bayesian tools to bear on the
problem of selecting which genes are relevant, and extends our earlier
work on the development of the "expectation propagation" algorithm.
In our simulations, the new method outperforms several state-of-the-art
methods, including support-vector machines with feature selection and
relevance-vector machines.
Interface Tailor
The Interface Tailor is an agent that attempts to adapt a system in
response to affective feedback. Frustration is being used as a fitness
function to select between a wide variety of different system behaviors.
Currently, the Microsoft Office Assistant (or Paperclip) is one example
interface that is being made more adaptive. Ultimately the project seeks
to provide a generalized framework for making all software more tailor-able.
Learning Companion
"I can't do this" and "I'm not good at this" are
common statements made by kids while trying to learn. Usually triggered
by affective states of confusion, frustration, and hopelessness, these
statements represent some of the greatest problems left unaddressed
by educational reform. Education has emphasized conveying a great deal
of information and facts, and has not modeled the learning process.
When teachers present material to the class, it is usually in a polished
form that omits the natural steps of making mistakes (feeling confused),
recovering from them (overcoming frustration), deconstructing what went
wrong (not becoming dispirited), and finally starting over again (with
hope and maybe even enthusiasm). Learning naturally involves failure
and a host of associated affective responses. This project aims to build
a computerized learning companion that facilitates the child's own efforts
at learning. The goal of the companion is to help keep the child's exploration
going, by occasionally prompting with questions or feedback, and by
watching and responding to the affective state of the child: watching
especially for signs of frustration and boredom that may precede quitting,
for signs of curiosity or interest that tend to indicate active exploration,
and for signs of enjoyment and mastery, which might indicate a successful
learning experience. The companion is not a tutor that knows all the
answers but rather a player on the side of the student, there to help
him or her learn, and in so doing, learn how to learn better.
Mr. Java: Customer Support
Mr. Java is the Media Lab's wired coffee machine, which keeps track
of usage patterns and user preferences. The focus of this project is
to give Mr. Java a tangible customer-feedback system that collects data
on user complaints or compliments. "Thumbs-up" and "thumbs-down"
pressure sensors were built and their signals integrated with the state
of the machine to gather data from customers regarding their ongoing
experiences with the machine. Potentially, the data gathered can be
used to learn how to improve the system. The system also portrays an
affective, social interface to the user: helpful, polite, and attempting
to be responsive to any problems reported.
Online Emotion Recognition
This project is aimed at building a system to recognize emotional expression
given four physiological signals. Data was gathered from a graduate
student with acting experience as she intentionally tried to experience
eight different emotional states daily over a period of several weeks.
Several features are extracted from each of her physiological signals.
The first classifiers gave a classification result of 88% success when
discriminating among 3 emotions (pure chance would be 33.3%), and of
51% when discriminating among 8 emotions (pure chance 12.5%). New, improved
classifiers reach an 81% success rate when discriminating among all
8 emotions. Furthermore, an online classifier has now been built using
the old method, which gives a success rate only 8% less than its old
offline counterpart (i.e. 43%). We expect this percentage to sharply
increase when the new methods are adapted to run online.
Recognizing Affect in Speech
This research project is concerned with building computational models
for the automatic recognition of affective expression in speech. We
are in the process of completing an investigation of how acoustic parameters
extracted from the speech waveform (related to voice quality, intonation,
loudness and rhythm) can help disambiguate the affect of the speaker
without knowledge of the textual component of the linguistic message.
We have carried out a multi-corpus investigation, which includes data
from actors and spontaneous speech in English, and evaluated the model's
performance. In particular, we have shown that the model exhibits
speaker-dependent performance that reflects human evaluation of these
particular data sets and that, held against human recognition benchmarks,
the model begins to perform competitively.
Relational Agents
Relational Agents are computational artifacts designed to build and
maintain long-term, social-emotional relationships with their users.
Central to the notion of relationship is that it is a persistent construct,
spanning multiple interactions. Thus, Relational Agents are explicitly
designed to remember past history and manage future expectations in
their interactions with users. Since face-to-face conversation is the
primary context of relationship-building for humans, our work focuses
on Relational Agents as a specialized kind of embodied conversational
agent (animated humanoid software agents that use speech, gaze, gesture,
intonation, and other nonverbal modalities to emulate the experience
of human face-to-face conversation). One major achievement was the development
of a Relational Agent for health behavior change, specifically in the
area of exercise adoption. A study involving 100 subjects interacting
with this agent over one month demonstrated that trusting, caring relationships
can be developed, and that such agents can be used to achieve beneficial
behavior change outcomes.
The Conductor's Jacket
The Conductor's Jacket is a unique wearable device that measures physiological
and gestural signals. Together with the Gesture Construction, a musical
software system, it interprets these signals and applies them expressively
in a musical context. Sixteen sensors have been incorporated into the
Conductor's Jacket in such a way as to not encumber or interfere with
the gestures of a working orchestra conductor. The Conductor's Jacket
system gathers up to sixteen data channels reliably at rates of 3 kHz
per channel, and also provides real-time graphical feedback. Unlike
many gesture-sensing systems, it gathers not only position and acceleration
data but also muscle tension sensed at several locations on each arm.
We will demonstrate the Gesture Construction, a musical software system
that analyzes and performs music in real-time based on the performer's
gestures and breathing signals. A bank of software filters extracts several
of the features that were found in the conductor study, including beat
intensities and the alternation between arms. These features are then
used to generate real-time expressive effects by shaping the beats,
tempos, articulations, dynamics, and note lengths in a musical score.
The Galvactivator
The galvactivator is a glove-like wearable device that senses the wearer's
skin conductivity and maps its values to a bright LED display. Increases
in skin conductivity across the palm tend to be good indicators of physiological
arousal, causing the galvactivator display to glow brightly. The galvactivator
has many potentially useful purposes, ranging from self-feedback for
stress management, to facilitation of conversation between two people,
to new ways of visualizing mass excitement levels in performance situations
or visualizing aspects of arousal and attention in learning situations.
One of the findings in mass-communication settings was that people tended
to "glow" when a new speaker came onstage, and during live
demos, laughter, and live audience interaction. They tended to "go
dim" during powerpoint presentations. In smaller educational settings,
students have commented on how they tend to glow when they are more
engaged with learning.
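A minimal sketch of the conductivity-to-brightness mapping (the linear
mapping and calibration range are illustrative assumptions):

    def led_brightness(conductance_uS, floor_uS=1.0, ceil_uS=10.0):
        # Map palm skin conductance (microsiemens) to an 8-bit LED level;
        # higher arousal, and thus higher conductance, glows brighter.
        t = (conductance_uS - floor_uS) / (ceil_uS - floor_uS)
        return int(255 * max(0.0, min(1.0, t)))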
Touch-Phone
The Touch-Phone was developed to explore the use of objects to mediate
the emotional exchange in interpersonal communication. Through an abstract
visualization of screen-based color changes, a standard telephone is modified
to communicate how it is being held and squeezed. The telephone receiver
includes a touch-sensitive surface which conveys the user's physical response
over a computer network. The recipient sees a small colored icon on his
computer screen which changes in real time according to the way his conversational
partner is interacting with the telephone object.