
Modeling spatial and temporal variation in motion data

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Modeling spatial and temporal variation in motion data. / Lau, Manfred; Bar-Joseph, Ziv; Kuffner, James.
In: ACM Transactions on Graphics, Vol. 28, No. 5, 171, 12.2009.


Harvard

Lau, M, Bar-Joseph, Z & Kuffner, J 2009, 'Modeling spatial and temporal variation in motion data', ACM Transactions on Graphics, vol. 28, no. 5, 171. https://doi.org/10.1145/1618452.1618517

APA

Lau, M., Bar-Joseph, Z., & Kuffner, J. (2009). Modeling spatial and temporal variation in motion data. ACM Transactions on Graphics, 28(5), Article 171. https://doi.org/10.1145/1618452.1618517

Vancouver

Lau M, Bar-Joseph Z, Kuffner J. Modeling spatial and temporal variation in motion data. ACM Transactions on Graphics. 2009 Dec;28(5):171. doi: 10.1145/1618452.1618517

Author

Lau, Manfred ; Bar-Joseph, Ziv ; Kuffner, James. / Modeling spatial and temporal variation in motion data. In: ACM Transactions on Graphics. 2009 ; Vol. 28, No. 5.

Bibtex

@article{acb069c69b0f49b8b5b993dcd9b2a387,
title = "Modeling spatial and temporal variation in motion data",
abstract = "We present a novel method to model and synthesize variation in motion data. Given a few examples of a particular type of motion as input, we learn a generative model that is able to synthesize a family of spatial and temporal variants that are statistically similar to the input examples. The new variants retain the features of the original examples, but are not exact copies of them. We learn a Dynamic Bayesian Network model from the input examples that enables us to capture properties of conditional independence in the data, and model it using a multivariate probability distribution. We present results for a variety of human motion, and 2D handwritten characters. We perform a user study to show that our new variants are less repetitive than typical game and crowd simulation approaches of re-playing a small number of existing motion clips. Our technique can synthesize new variants efficiently and has a small memory requirement.",
author = "Manfred Lau and Ziv Bar-Joseph and James Kuffner",
year = "2009",
month = dec,
doi = "10.1145/1618452.1618517",
language = "English",
volume = "28",
journal = "ACM Transactions on Graphics",
issn = "1557-7368",
publisher = "Association for Computing Machinery (ACM)",
number = "5",

}

RIS

TY - JOUR

T1 - Modeling spatial and temporal variation in motion data

AU - Lau, Manfred

AU - Bar-Joseph, Ziv

AU - Kuffner, James

PY - 2009/12

Y1 - 2009/12

N2 - We present a novel method to model and synthesize variation in motion data. Given a few examples of a particular type of motion as input, we learn a generative model that is able to synthesize a family of spatial and temporal variants that are statistically similar to the input examples. The new variants retain the features of the original examples, but are not exact copies of them. We learn a Dynamic Bayesian Network model from the input examples that enables us to capture properties of conditional independence in the data, and model it using a multivariate probability distribution. We present results for a variety of human motion, and 2D handwritten characters. We perform a user study to show that our new variants are less repetitive than typical game and crowd simulation approaches of re-playing a small number of existing motion clips. Our technique can synthesize new variants efficiently and has a small memory requirement.

AB - We present a novel method to model and synthesize variation in motion data. Given a few examples of a particular type of motion as input, we learn a generative model that is able to synthesize a family of spatial and temporal variants that are statistically similar to the input examples. The new variants retain the features of the original examples, but are not exact copies of them. We learn a Dynamic Bayesian Network model from the input examples that enables us to capture properties of conditional independence in the data, and model it using a multivariate probability distribution. We present results for a variety of human motion, and 2D handwritten characters. We perform a user study to show that our new variants are less repetitive than typical game and crowd simulation approaches of re-playing a small number of existing motion clips. Our technique can synthesize new variants efficiently and has a small memory requirement.

UR - http://www.scopus.com/inward/record.url?scp=77949304883&partnerID=8YFLogxK

U2 - 10.1145/1618452.1618517

DO - 10.1145/1618452.1618517

M3 - Journal article

AN - SCOPUS:77949304883

VL - 28

JO - ACM Transactions on Graphics

JF - ACM Transactions on Graphics

SN - 1557-7368

IS - 5

M1 - 171

ER -
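
Illustrative sketch (not from the paper)

The abstract describes learning a Dynamic Bayesian Network from a few example motions and sampling statistically similar spatial and temporal variants. As a rough illustration of that idea only, and not the authors' actual model, the sketch below fits the simplest possible DBN, a first-order linear-Gaussian Markov model, to a few toy trajectories and samples a new variant. All function names, parameters, and the toy data are assumptions made for illustration.

# Illustrative sketch only: a first-order Gaussian Markov model (the simplest
# kind of Dynamic Bayesian Network) fit to a few example trajectories, then
# sampled to produce statistically similar variants. This is NOT the paper's
# actual model; names and structure here are assumptions.
import numpy as np

def fit_linear_gaussian_dbn(examples):
    """Fit p(x_t | x_{t-1}) = N(A x_{t-1} + b, Sigma) from example sequences.

    examples: list of arrays, each of shape (T_i, D) -- a few input motions.
    """
    prev = np.vstack([seq[:-1] for seq in examples])   # previous frames
    curr = np.vstack([seq[1:] for seq in examples])    # next frames
    X1 = np.hstack([prev, np.ones((len(prev), 1))])    # append bias column
    W, *_ = np.linalg.lstsq(X1, curr, rcond=None)      # least squares for [A; b]
    A, b = W[:-1].T, W[-1]
    resid = curr - X1 @ W
    Sigma = np.cov(resid.T) + 1e-6 * np.eye(curr.shape[1])  # residual covariance
    return A, b, Sigma

def sample_variant(A, b, Sigma, x0, T, rng):
    """Roll the learned transition forward with Gaussian noise to generate a
    new trajectory that varies around (but is not a copy of) the examples."""
    out = np.empty((T, len(x0)))
    out[0] = x0
    for t in range(1, T):
        mean = A @ out[t - 1] + b
        out[t] = rng.multivariate_normal(mean, Sigma)
    return out

# Toy usage: three noisy sine-like "motions" as stand-ins for motion channels.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 60)
examples = [np.stack([np.sin(t + p), np.cos(t + p)], axis=1)
            + 0.02 * rng.standard_normal((60, 2)) for p in (0.0, 0.1, 0.2)]
A, b, Sigma = fit_linear_gaussian_dbn(examples)
variant = sample_variant(A, b, Sigma, examples[0][0], 60, rng)

The published model captures richer conditional-independence structure than this toy first-order chain; the sketch is only meant to make the "learn from a few examples, then sample new variants" idea concrete.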