  • Deadline for early registration is on June 15, 2013.
  • Deadline for submission of camera-ready papers is on June 15, 2013.
  • Deadline for submission of abstracts has been moved to May 5, 2013.
  • Deadline for submission of full papers has been moved to May 10, 2013.
Research has made considerable strides in building computational models of emotion. In recent years, Computer Science researchers have realized that emotion models cannot be effectively used in real-world applications by themselves: they must be analyzed in light of human interactions and treated, together with other non-verbal cues, as social signals from which meaning can be extracted.

There is now a need for human-centered systems, i.e., systems that are seamlessly integrated into everyday life, easy to use, multimodal, and anticipatory. These systems broaden the range of people who can use computing systems, from the very young to the elderly, including the physically challenged. Empathic systems are human-centered systems.

Empathic computing systems are software or physical context-aware computing systems capable of building user models and providing richer, naturalistic, system-initiated empathic responses, with the objective of delivering intelligent assistance and support. We view empathy as a cognitive act that involves the perception of the user's thought, affect (i.e., emotional feeling or mood), intention or goal, activity, and/or situation, together with a response, arising from this perception, that is supportive of the user. An empathic computing system is ambient intelligent, i.e., it consists of seamlessly integrated, ubiquitous networked sensors, microprocessors, and software that allow it to perceive the user's behavioral patterns from multimodal inputs.
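As a rough sketch only (every class and function name here is hypothetical, not taken from any IWEC system), the perceive-and-respond cycle described above could look like:

```python
from dataclasses import dataclass, field

@dataclass
class UserModel:
    """Rolling estimate of the user's state, built from sensor readings."""
    affect: str = "neutral"      # emotional feeling or mood
    activity: str = "idle"       # current activity
    history: list = field(default_factory=list)

def perceive(model: UserModel, readings: dict) -> UserModel:
    """Fuse multimodal readings (e.g. face, voice, posture) into the model.
    A trivial majority vote stands in for real sensor fusion here."""
    votes = [v for v in readings.values() if v is not None]
    if votes:
        model.affect = max(set(votes), key=votes.count)
    model.history.append(dict(readings))
    return model

def respond(model: UserModel) -> str:
    """System-initiated empathic response, chosen from the perceived affect."""
    responses = {
        "frustrated": "Would you like me to simplify this task?",
        "sad": "Shall I play something you usually enjoy?",
    }
    return responses.get(model.affect, "")  # empty string: no intervention

model = perceive(UserModel(), {"face": "frustrated",
                               "voice": "frustrated",
                               "posture": None})
```

A real system would replace the majority vote with trained classifiers over each modality, but the loop structure (multimodal perception, user model update, supportive response) is the same.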

Empathic computing systems may be applied to areas such as e-health, geriatric domestic support, empathic homes/spaces, productivity systems, entertainment, and e-learning. This approach draws upon expertise in, and theories of, ubiquitous sensor-rich computing, embedded systems, affective computing, user-adaptive interfaces, image processing, digital signal processing, and machine learning in artificial intelligence.

In its fourth year, IWEC-13 focuses on the ambient intelligent, socio-affective context of empathic computing and on how machine learning approaches can be used to build robust, reliable, and scalable empathic systems. While primarily data-driven, this year's workshop will investigate how domain knowledge and contextual information can be used to reduce the complexity of emotion analysis and synthesis, as well as of empathic response modeling. We invite original and unpublished papers on, but not limited to, the following topics:
  • Emotion and Mood Recognition
  • Intention Recognition
  • Behavior/Activity Recognition
  • Motion/Gesture Detection
  • Multimodal Communication
  • Sensor Networks for Human Tracking
  • Social Signal Processing
  • Wearable or Implantable Sensor Integration
  • Sensor Networks for Intelligent Interfaces
  • Data Fusion in Intelligent Ambient Spaces
  • Multimodal Approaches for Improved Decision-Making
  • Motivational Aids in Intelligent Education Systems
  • Advanced Home Automation Systems
  • e-Health and Geriatrics Care
  • Social Agents
  • Machine Learning and Data Mining for Empathy
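To illustrate the workshop theme of using domain knowledge and contextual information to reduce the complexity of emotion analysis, here is a minimal, purely illustrative sketch (the labels and context rules are invented for the example, not drawn from any submitted system):

```python
# Full label space a data-driven emotion classifier might face.
ALL_EMOTIONS = ["joy", "boredom", "frustration", "fear", "surprise", "sadness"]

# Hypothetical domain knowledge: which emotions are plausible per context.
CONTEXT_PRIORS = {
    "e-learning": {"joy", "boredom", "frustration", "surprise"},
    "survival-horror-game": {"fear", "surprise", "joy"},
}

def candidate_labels(context: str) -> list[str]:
    """Reduce the classifier's label space using contextual priors,
    falling back to the full label set for unknown contexts."""
    allowed = CONTEXT_PRIORS.get(context, set(ALL_EMOTIONS))
    return [e for e in ALL_EMOTIONS if e in allowed]
```

Restricting the label space this way shrinks both the classification problem and the amount of labeled data needed per context, which is one sense in which domain knowledge can tame a data-driven pipeline.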
The workshop will be of interest to researchers working on affective computing, ambient intelligent systems, the Internet of Things/wireless sensor networks, and digital signal processing, as well as to psychologists. IWEC-13 aims to serve as a venue for these researchers to discuss and share ideas, raise concerns and technical issues, and form relationships for future research collaboration.
Organizing Committee
Merlin Teodosia Suarez
Center for Empathic Human-Computer Interactions
De La Salle University (Philippines)
Masayuki Numao
Department of Architecture for Intelligence
Osaka University (Japan)
The Duy Bui
Human Machine Interaction Laboratory
Vietnam National University - Hanoi (Vietnam)
Ma. Mercedes Rodrigo
Ateneo Laboratory for the Learning Sciences
Ateneo de Manila University (Philippines)
Advisory Board
Dirk Heylen
Human Media Interaction Laboratory
Computer Science, University of Twente, The Netherlands
Toyoaki Nishida
Department of Intelligence Science and Technology
Graduate School of Informatics
Kyoto University, Japan
Catherine Pelachaud
Centre National de la Recherche Scientifique
CNRS - Telecom Paris Tech, France
Program Committee
Eriko Aiba, Japan Advanced Industrial Science and Technology (Japan)
Arnulfo Azcarraga, De La Salle University (Philippines)
Judith Azcarraga, De La Salle University (Philippines)
Rafael Cabredo, Osaka University (Japan)
Nick Campbell, Trinity College (Ireland)
Jocelynn Cu, De La Salle University (Philippines)
Kenichi Fukui, Osaka University (Japan)
Masashi Inoue, Yamagata University (Japan)
Paul Salvador Inventado, Osaka University (Japan)
Akihiro Kashihara, University of Electro-Communications (Japan)
Satoshi Kurihara, Osaka University (Japan)
Nelson Marcos, De La Salle University (Philippines)
Koichi Moriyama, Osaka University (Japan)
Radoslaw Niewiadomski, Telecom Paris Tech (France)
Noriko Otani, Tokyo City University (Japan)
Raymund Sison, De La Salle University (Philippines)
Khiet Truong, University of Twente (The Netherlands)
Jerome Urbain, University of Mons (Belgium)
Important Dates
Submission of Abstracts: May 5, 2013
Submission of Full Papers: May 10, 2013
Paper Acceptance Notification: May 20, 2013
Camera-ready Paper Submission: June 15, 2013
4th International Workshop on Empathic Computing: August 3-5, 2013
Paper Submission
Submitted papers must be formatted according to IJCAI guidelines and submitted electronically through the paper submission system. Full instructions including formatting guidelines and electronic templates are available on the IJCAI 2013 website.

At least one author of each accepted paper is required to attend the conference to present the work. Authors will be required to agree to this requirement at the time of submission.
Please register your attendance to the workshop through the IJCAI registration page.
Invited Speaker

Communication with a Virtual Agent via Facial Expressions

Prof. Kaoru Sumi
Future University Hakodate
Hokkaido, Japan

Our research group has studied persuasive technology via human-agent interaction using facial expressions and dialogue, with the goal of developing a virtual agent that can persuade a human. We have found that certain combinations of facial expressions and dialogue can make an agent persuasive. Building on these results, this talk introduces a dialogue system with a virtual agent that uses facial expressions. The purpose of the system is to communicate a Japanese style of service-mindedness, as typified by paying close attention to customers; the system thus provides educational training for users in a customer service role. It consists of a facial expression recognition system using brain wave measurement equipment, a speech recognition system, a speech synthesis system, and a dialogue control system. The virtual agent exhibits seven facial expressions, namely "smiling", "laughing", "angry", "sad", "disgusted", "frightened", and "surprised", and its mouth forms vowel shapes, animated fluidly through morphing technology. The system was evaluated in an experiment with subjects who used it to learn how to serve customers, and this talk presents some of the experimental results.
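As a hedged illustration of the dialogue control component described in the abstract (the function and its expression-utterance pairings are invented for this sketch, not the actual system's policy):

```python
# The agent's seven facial expressions named in the talk abstract.
AGENT_EXPRESSIONS = {"smiling", "laughing", "angry", "sad",
                     "disgusted", "frightened", "surprised"}

def select_agent_turn(user_expression: str) -> tuple[str, str]:
    """Pick an agent facial expression and utterance for one turn of a
    customer-service training dialogue. Pairings are illustrative only."""
    policy = {
        "angry": ("sad", "I am very sorry. Let me fix that right away."),
        "surprised": ("smiling", "Allow me to explain our service."),
        "sad": ("sad", "I understand. How may I help?"),
    }
    expression, line = policy.get(user_expression,
                                  ("smiling", "How may I help you today?"))
    assert expression in AGENT_EXPRESSIONS  # controller stays within the repertoire
    return expression, line
```

In the described system, `user_expression` would come from the brain-wave-based facial expression recognizer, and the returned utterance would be voiced by the speech synthesis component.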

Kaoru Sumi is a professor in the Department of Media Architecture, Future University Hakodate, Japan. Prof. Sumi received her Ph.D. in engineering from the University of Tokyo. Her recent research interests include human-computer interaction, artificial intelligence, media informatics, persuasive technology, and digital storytelling. She previously worked at ATR MI&C Research Laboratories, Communications Research Laboratory (CRL), and Osaka University, where she researched human-computer interaction, knowledge engineering, and the application of artificial intelligence. After Prof. Sumi worked on media informatics and human-agent interaction at the National Institute of Information and Communications Technology (NICT), she was an associate professor at Hitotsubashi University.

Schedule (August 4, 2013)

Presentations
INVITED TALK: Communication with a Virtual Agent via Facial Expressions
Kaoru Sumi, PhD
Future University Hakodate, Hokkaido, Japan
An analysis of player affect in survival horror game using physiological signals and player self-reports
Vanus Vachiratamporn, Roberto Legaspi, Paul Inventado, Ken-Ichi Fukui, Koichi Moriyama and Masayuki Numao
Generation of Rhythm for Melody in a Constructive Adaptive User Interface
Noriko Otani, Ryoko Kamimura, Yu Yamano and Masayuki Numao
Modeling Affect in Self-directed Learning Scenarios
Paul Salvador Inventado, Roberto Legaspi, Koichi Moriyama, Ken-Ichi Fukui and Masayuki Numao
Using Empathy of the Crowd for Simulating Mirror Neurons Behavior
Rafal Rzepka, Marek Krawczyk and Kenji Araki
Beacon-TDMA Medium Access Control Protocol for Wireless Sensor Networks
Arlyn Verina Ong and Gregory Cu
Personalization Approach in Health Information Retrieval System
Ira Puspitasari, Ken-Ichi Fukui, Koichi Moriyama and Masayuki Numao
An Exploratory Study on Naturalistic Laughter Synthesis
Bernadyn Cagampan, Henry Ng, Kevin Panuelos, Kyrstyn Uy, Jocelynn Cu and Merlin Suarez
Determining Product Emotion using Automatic Facial Expression Recognition
Edward Philippe Choi and Merlin Teodosia Suarez