
Interactive motion mapping for real-time character control

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Interactive motion mapping for real-time character control. / Rhodin, Helge; Tompkin, James; Kim, Kwang In et al.
In: Computer Graphics Forum, Vol. 33, No. 2, 2014, p. 273-282.

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Harvard

Rhodin, H, Tompkin, J, Kim, KI, Varanasi, K, Seidel, H-P & Theobalt, C 2014, 'Interactive motion mapping for real-time character control', Computer Graphics Forum, vol. 33, no. 2, pp. 273-282. https://doi.org/10.1111/cgf.12325

APA

Rhodin, H., Tompkin, J., Kim, K. I., Varanasi, K., Seidel, H-P., & Theobalt, C. (2014). Interactive motion mapping for real-time character control. Computer Graphics Forum, 33(2), 273-282. https://doi.org/10.1111/cgf.12325

Vancouver

Rhodin H, Tompkin J, Kim KI, Varanasi K, Seidel H-P, Theobalt C. Interactive motion mapping for real-time character control. Computer Graphics Forum. 2014;33(2):273-282. doi: 10.1111/cgf.12325

Author

Rhodin, Helge ; Tompkin, James ; Kim, Kwang In et al. / Interactive motion mapping for real-time character control. In: Computer Graphics Forum. 2014 ; Vol. 33, No. 2. pp. 273-282.

Bibtex

@article{b4c427af3a814a3a9d357835f1dcb040,
title = "Interactive motion mapping for real-time character control",
abstract = "It is now possible to capture the 3D motion of the human body on consumer hardware and to puppet in real time skeleton-based virtual characters. However, many characters do not have humanoid skeletons. Characters such as spiders and caterpillars do not have boned skeletons at all, and these characters have very different shapes and motions. In general, character control under arbitrary shape and motion transformations is unsolved - how might these motions be mapped? We control characters with a method which avoids the rigging-skinning pipeline — source and target characters do not have skeletons or rigs. We use interactively-defined sparse pose correspondences to learn a mapping between arbitrary 3D point source sequences and mesh target sequences. Then, we puppet the target character in real time. We demonstrate the versatility of our method through results on diverse virtual characters with different input motion controllers. Our method provides a fast, flexible, and intuitive interface for arbitrary motion mapping which provides new ways to control characters for real-time animation.",
author = "Helge Rhodin and James Tompkin and Kim, {Kwang In} and Kiran Varanasi and Hans-Peter Seidel and Christian Theobalt",
year = "2014",
doi = "10.1111/cgf.12325",
language = "English",
volume = "33",
pages = "273--282",
journal = "Computer Graphics Forum",
issn = "0167-7055",
publisher = "Wiley-Blackwell",
number = "2",

}

RIS

TY - JOUR

T1 - Interactive motion mapping for real-time character control

AU - Rhodin, Helge

AU - Tompkin, James

AU - Kim, Kwang In

AU - Varanasi, Kiran

AU - Seidel, Hans-Peter

AU - Theobalt, Christian

PY - 2014

Y1 - 2014

N2 - It is now possible to capture the 3D motion of the human body on consumer hardware and to puppet in real time skeleton-based virtual characters. However, many characters do not have humanoid skeletons. Characters such as spiders and caterpillars do not have boned skeletons at all, and these characters have very different shapes and motions. In general, character control under arbitrary shape and motion transformations is unsolved - how might these motions be mapped? We control characters with a method which avoids the rigging-skinning pipeline — source and target characters do not have skeletons or rigs. We use interactively-defined sparse pose correspondences to learn a mapping between arbitrary 3D point source sequences and mesh target sequences. Then, we puppet the target character in real time. We demonstrate the versatility of our method through results on diverse virtual characters with different input motion controllers. Our method provides a fast, flexible, and intuitive interface for arbitrary motion mapping which provides new ways to control characters for real-time animation.

AB - It is now possible to capture the 3D motion of the human body on consumer hardware and to puppet in real time skeleton-based virtual characters. However, many characters do not have humanoid skeletons. Characters such as spiders and caterpillars do not have boned skeletons at all, and these characters have very different shapes and motions. In general, character control under arbitrary shape and motion transformations is unsolved - how might these motions be mapped? We control characters with a method which avoids the rigging-skinning pipeline — source and target characters do not have skeletons or rigs. We use interactively-defined sparse pose correspondences to learn a mapping between arbitrary 3D point source sequences and mesh target sequences. Then, we puppet the target character in real time. We demonstrate the versatility of our method through results on diverse virtual characters with different input motion controllers. Our method provides a fast, flexible, and intuitive interface for arbitrary motion mapping which provides new ways to control characters for real-time animation.

U2 - 10.1111/cgf.12325

DO - 10.1111/cgf.12325

M3 - Journal article

VL - 33

SP - 273

EP - 282

JO - Computer Graphics Forum

JF - Computer Graphics Forum

SN - 0167-7055

IS - 2

ER -
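
Illustrative sketch

The abstract describes learning a motion mapping from a few interactively specified sparse pose correspondences between a source point sequence and a target mesh, and then puppeting the target in real time. As an illustration only, and not the authors' method, the sketch below fits a Gaussian RBF (kernel ridge) regressor from a flattened source point configuration to target mesh vertex positions; the class name, feature layout, kernel choice, and hyperparameters are assumptions made for this example.

# A minimal, hypothetical sketch (not the paper's implementation): learn a
# mapping from a handful of user-specified pose correspondences with
# Gaussian RBF regression, then drive the target mesh from a new source pose.
import numpy as np

class SparsePoseMapper:
    def __init__(self, sigma=1.0, reg=1e-3):
        self.sigma = sigma   # RBF kernel width (assumed hyperparameter)
        self.reg = reg       # ridge regularization (assumed hyperparameter)

    def fit(self, source_poses, target_meshes):
        # source_poses: (n, d_src) flattened 3D source points, one row per correspondence
        # target_meshes: (n, d_tgt) flattened target mesh vertex positions
        self.X = np.asarray(source_poses, dtype=float)
        Y = np.asarray(target_meshes, dtype=float)
        K = self._kernel(self.X, self.X)
        n = K.shape[0]
        # Solve (K + reg * I) W = Y for the regression weights
        self.W = np.linalg.solve(K + self.reg * np.eye(n), Y)
        return self

    def map(self, source_pose):
        # Map one new source pose to a full target mesh pose.
        k = self._kernel(np.atleast_2d(source_pose), self.X)  # (1, n)
        return (k @ self.W).ravel()                           # (d_tgt,)

    def _kernel(self, A, B):
        # Gaussian RBF kernel between the row vectors of A and B.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # e.g. 6 interactively chosen correspondences: 15 source points -> 300 target vertices
    src = rng.normal(size=(6, 15 * 3))
    tgt = rng.normal(size=(6, 300 * 3))
    mapper = SparsePoseMapper(sigma=2.0).fit(src, tgt)
    live_pose = src[0] + 0.05 * rng.normal(size=src.shape[1])
    target_vertices = mapper.map(live_pose).reshape(300, 3)   # puppet the target mesh
    print(target_vertices.shape)

Because the regressor is fit to only a handful of correspondences, evaluating it costs a few small matrix products per frame, which is what makes a real-time puppeting loop plausible in this toy setting.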