Gesture Generation by Imitation
From Human Behavior to Computer Character Animation
Editions: Paperback ($25.95, 277 pages) | PDF eBook ($17, 647 KB)
Publisher: Dissertation
Pub date: 2005
Pages: 277
ISBN-10: 1581122551
ISBN-13: 9781581122558
Categories: Computer Science, Computers
Abstract
In an effort to extend traditional human-computer interfaces, research has introduced embodied agents that use the modalities of everyday human-human communication, such as facial expressions, gestures, and body postures. However, giving computer agents a human-like body introduces new challenges. Because human users are highly sensitive to and critical of bodily behavior, agents must act naturally and individually in order to be believable.

This dissertation focuses on conversational gestures. It shows how to generate conversational gestures for an animated embodied agent based on annotated text input. The central idea is to imitate the gestural behavior of a human individual: using TV show recordings as empirical data, key gesture parameters are extracted for the generation of natural, individual gestures. The gesture generation task is solved in three stages: observation, modeling, and generation. A software module was developed for each stage.
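As a rough illustration of how the three stages might fit together, here is a minimal Python sketch of the observation-modeling-generation data flow. All function names, data shapes, and the toy "most probable gesture" strategy are assumptions invented for this sketch; they do not reproduce the dissertation's actual modules (ANVIL, NOVALIS, and NOVA, described below).

from collections import Counter

def observe(clips):
    """Stage 1 (hypothetical): turn annotated clips into gesture transcriptions."""
    return [g for clip in clips for g in clip["gestures"]]

def model(transcriptions):
    """Stage 2 (hypothetical): estimate a simple profile as relative frequencies."""
    counts = Counter(transcriptions)
    total = sum(counts.values())
    return {gesture: n / total for gesture, n in counts.items()}

def generate(words, profile):
    """Stage 3 (hypothetical): attach the speaker's most frequent gesture to each word."""
    best = max(profile, key=profile.get)
    return [(word, best) for word in words]

clips = [{"gestures": ["beat", "point", "beat"]}, {"gestures": ["cup", "beat"]}]
profile = model(observe(clips))
print(generate(["hello", "world"], profile))
# [('hello', 'beat'), ('world', 'beat')]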
For observation, the video annotation research tool ANVIL was created. It allows efficient transcription of gesture, speech, and other modalities on multiple layers. ANVIL is application-independent: users can define their own annotation schemes, it provides various import/export facilities, and it is extensible via a plug-in interface, making it suitable for a wide variety of research fields. For this work, selected clips of the TV talk show "Das Literarische Quartett" were transcribed and analyzed, yielding a total of 1,056 gestures.

For the modeling stage, the NOVALIS module was created to compute individual gesture profiles from these transcriptions with statistical methods. A gesture profile models the handedness, timing, and function of gestures for a single human individual using estimated conditional probabilities. The profiles are based on a shared lexicon of 68 gestures assembled from the data.

Finally, for generation, the NOVA generator was devised to create gestures from gesture profiles in an overgenerate-and-filter approach. Annotated text input is processed in a graph-based representation in multiple stages: semantic data is added, the locations of potential gestures are determined by heuristic rules, and gestures are added and filtered according to a gesture profile. NOVA outputs a linear, player-independent action script in XML.
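To make the modeling and generation ideas concrete, the following is a hedged toy version of a gesture profile as estimated conditional probabilities, combined with an overgenerate-and-filter pass. The class, the handedness-only profile, and the 0.2 threshold are assumptions for illustration; they do not reproduce NOVALIS or NOVA.

from collections import defaultdict
from itertools import product

class GestureProfile:
    """Toy profile: P(handedness | gesture lexeme), estimated from counts."""

    def __init__(self, observations):
        # observations: (lexeme, handedness) pairs from transcriptions
        counts = defaultdict(lambda: defaultdict(int))
        for lexeme, hand in observations:
            counts[lexeme][hand] += 1
        # Estimate conditional probabilities P(hand | lexeme).
        self.p = {
            lexeme: {h: n / sum(hands.values()) for h, n in hands.items()}
            for lexeme, hands in counts.items()
        }

    def prob(self, lexeme, hand):
        return self.p.get(lexeme, {}).get(hand, 0.0)

def overgenerate(lexemes, hands):
    """Propose every lexeme/handedness combination as a candidate gesture."""
    return list(product(lexemes, hands))

def filter_by_profile(candidates, profile, threshold=0.2):
    """Keep only candidates this speaker would plausibly produce."""
    return [c for c in candidates if profile.prob(*c) >= threshold]

observations = [
    ("cup", "both"), ("cup", "both"), ("cup", "right"),
    ("point", "right"), ("point", "right"),
]
profile = GestureProfile(observations)
candidates = overgenerate(["cup", "point"], ["left", "right", "both"])
print(filter_by_profile(candidates, profile))
# [('cup', 'right'), ('cup', 'both'), ('point', 'right')]

The overgenerate step deliberately proposes implausible candidates (e.g. left-handed "point" gestures this speaker never produced); the profile then filters them out, which is the essence of the overgenerate-and-filter design.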
About the Author
Michael Kipp studied Computer Science, Mathematics, and Psychology at Saarland University, Germany, and the University of Edinburgh, UK. From 1997 on he worked at the German Research Center for Artificial Intelligence (DFKI) in fields as diverse as neural networks, machine translation, embodied agents, and virtual theater. After finishing his Doctor of Engineering in 2003, he changed careers, joining the National Theater of the Saarland as a director's assistant.