Reference Papers

Author: Hubert P.H. Shum, Taku Komura, Masashi Shiraishi, Shuntaro Yamasaki
Title: Interaction Patches for Multi-Character Animation
Link: ACM Transactions on Graphics, 2008
Summary: - a data-driven approach that automatically generates scenes with many interacting characters (crowds): flexible and automated
- during off-line processing, the close interactions between the characters are precomputed by expanding a game tree
- stored as data structures called interaction patches
- during run-time, the system concatenates the interaction patches spatially (interactions involving more than two characters) and temporally (long series of interactions) to create scenes
- bottom-up approach: builds the individual interactions first and combines them to design the whole scene

Approach:
Off-line processes
1. capture the motion of a single person using mocap
2. create the action-level motion graph (nodes: postures, edges: actions; a sketch follows this list)
3. compose the set of minimal units of interaction, the interaction patches (specify patterns of actions, sample the initial condition of the two characters, simulate their interactions)
4. generate two tables, showing temporal and spatial concatenation of the interaction patches
On-line process
5. compose a scene by concatenating the interaction patches according to high-level commands (locomotion engine)
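As a rough illustration of step 2, the action-level motion graph could be represented as below. This is a minimal sketch; all class and field names are assumptions for illustration, not from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Posture:
    joint_angles: list[float]      # one value per degree of freedom

@dataclass
class Action:
    clip_frames: list[Posture]     # the short motion clip this edge plays
    src: int                       # index of the starting posture node
    dst: int                       # index of the ending posture node

@dataclass
class MotionGraph:
    postures: list[Posture] = field(default_factory=list)
    actions: list[Action] = field(default_factory=list)

    def outgoing(self, node: int) -> list[Action]:
        """Actions that can be performed from a given posture node."""
        return [a for a in self.actions if a.src == node]
```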

Interaction patch: composed of the initial condition of the two characters (their distance, their orientation, and the delay before the first action starts) and the list of short motion clips performed by each of them.
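A minimal sketch of such a patch as a data structure, together with the two concatenation tables from step 4 as plain lookup maps; all names here are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class InitialCondition:
    distance: float       # distance between the two characters
    orientation: float    # their relative orientation (radians)
    delay: float          # delay before the first action starts

@dataclass
class InteractionPatch:
    init: InitialCondition
    clips_a: list[str]    # short motion clips performed by character A
    clips_b: list[str]    # short motion clips performed by character B

# The two tables from step 4:
temporal_table: dict[int, list[int]] = {}  # patch id -> patches that can follow it in time
spatial_table: dict[int, list[int]] = {}   # patch id -> patches that can be attached spatially
```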
Motion Refinement: done by traditional inverse kinematics and physically-based animation (ODE).

Large-scale animations, close (stylized) interactions
The process of composing the scene is fully automatic
Control Interface
High-level commands
Many degrees of freedom to adjust the characters' movements before and after the interactions

Future:
1. increase the number of interaction patches, using a hierarchical structure to store them
2. Parametric techniques to deform and interpolate the existing patches
3. Global planner


Author: Nam Nguyen, Nkenge Wheatland, David Brown, Brian Parise, C. Karen Liu, Victor Zordan
Title: Performance Capture with Physical Interaction
Link: ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 2010
Summary: The technique
- combines motion capture performance with physical simulation
- allows actors to interact with virtual objects including other actors
- works in real-time (while the performance is being recorded)
- depends on good, real-time motion capture data
- allows user control over the outcome, temporally and spatially
- supports remote users interacting physically in a shared virtual world

Methodology:
Seamless integration of the two animation signals (kinematic and dynamic) is accomplished by transforming the kinematic signal into a dynamics representation, then balancing the influence of the original data and the physical response across the body and across time, yielding both coordination and responsiveness (a sketch follows the comparison below).

Integrated Kinematic-Dynamic control

Kinematic
maintain global Cartesian positions
resist external forces – unrealistic
Cartesian forces – more precise control
Cartesian forces – aid in balance
stronger influence in the lower body part: aid in balance

Dynamic
make corrections in local joint coordinates
respond in more believable manner
Joint torques – limited amount of control
Joint torques – no balance
stronger influence in the upper body part: more directed, physical interactions

Kinematic/Dynamic: blend the two to get the best of both
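A minimal sketch of the blend, assuming a simple PD tracker stands in for the kinematic signal and per-joint weights favour kinematics in the lower body and physics in the upper body; the gains, weights, and joint list are invented for illustration, not values from the paper.

```python
import numpy as np

def pd_tracking_torque(q, q_ref, qd, kp=300.0, kd=20.0):
    """PD torque pulling the simulated pose q toward the mocap pose q_ref."""
    return kp * (q_ref - q) - kd * qd

def blended_torque(q, q_ref, qd, tau_physics, w_kin):
    """Per-joint blend: w_kin=1 follows mocap exactly, w_kin=0 is pure physics."""
    return w_kin * pd_tracking_torque(q, q_ref, qd) + (1.0 - w_kin) * tau_physics

# Example weighting across the body: kinematics dominates the lower body
# (balance), physics dominates the upper body (directed interactions).
joints = ["hip", "knee", "ankle", "spine", "shoulder", "elbow", "wrist"]
w_kin = np.array([0.9, 0.9, 0.9, 0.6, 0.3, 0.3, 0.3])
# tau = blended_torque(q, q_ref, qd, tau_physics, w_kin)  # arrays of len(joints)
```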

Motion Capture: Vicon Motion System
Real-time Simulation Engine: ODE + collision handler

Scenarios: Tether ball, balloon, stapler, tug of war, giant, glider

Future work
- more prudent use of the kinematic control forces
- balance scenarios pending
- multi-person capture
- performance with physics: a space still to be explored


Author: Taesoo Kwon, Young-Sang Cho, Sang Il Park, and Sung Yong Shin
Title: Two-character Motion Analysis and Synthesis
Link: IEEE Transactions on Visualization and Computer Graphics, Vol. 14, No. 3, May/June 2008
Summary:

It is a real-time approach dealing with motion synthesis and analysis of two interacting characters.
• Synthesising novel motions of martial arts performed by a pair of humanlike characters while reflecting their interactions.
• Motions are segmented and classified into groups according to their similarities
• Motions are synthesised by traversing a motion transition graph, guided by an interaction model built on a Bayesian network


Motion Modelling:
Each single-player motion stream is segmented into actions, which are classified into groups according to their similarities. The stream is then modelled as a motion transition graph in which nodes and edges represent the action groups and transitions, respectively.
a) Motion segmentation: They used the local minima of the height of the centre of mass (COM) above the ground to segment the example motion stream. Small spurious minima caused by some actions are suppressed by filtering with larger smoothing kernels. Each action represents a motion unit containing a single force peak.
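A small sketch of this segmentation step, assuming SciPy's Gaussian smoothing stands in for the paper's smoothing kernels; the kernel width is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def segment_at_com_minima(com_height: np.ndarray, sigma: float = 15.0) -> list[int]:
    """Return frame indices of local minima of the smoothed COM height;
    a larger sigma suppresses small spurious minima."""
    smoothed = gaussian_filter1d(com_height, sigma=sigma)
    # A frame is a local minimum if it is lower than both neighbours.
    interior = (smoothed[1:-1] < smoothed[:-2]) & (smoothed[1:-1] < smoothed[2:])
    return (np.where(interior)[0] + 1).tolist()

# Consecutive cut indices then bound one action each: a motion unit
# containing a single force peak.
```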
b) Motion Classification: They defined a motion vocabulary of seven motion aspects; the choice of vocabulary determines the level of user control. To classify the motions, they used a multiclass support vector machine (MSVM) in addition to rule-based classification.
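A sketch of the classification step; scikit-learn's SVC handles multiclass problems (one-vs-one) out of the box. The feature vectors stand in for the seven motion aspects, and the RBF kernel is an assumption.

```python
from sklearn.svm import SVC

def train_action_classifier(feature_vectors, labels):
    """feature_vectors: one list of floats per action (the motion aspects);
    labels: the action-group label for each training action."""
    clf = SVC(kernel="rbf")  # kernel choice is an assumption
    clf.fit(feature_vectors, labels)
    return clf

# Usage: group = clf.predict([aspects_of_new_action])[0]
# Actions the MSVM is unsure about could fall back to the rule-based classification.
```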
c) Graph Construction: A two-player example motion stream is represented as a coupled motion transition graph. Each character performs its own sequence of actions, which may partially overlap in time with those of the other. The motions of each player are decomposed into basic motion units and then stitched together according to motion specifications reflecting the captured interactions between the players.

Interaction Modelling:
The motion transition graph is searched in a probabilistic manner for a proper reaction for the other character, guided by the interaction model (built on a Bayesian network).
a) Motion transition model: Each player gathers information about the opponent via observations and fuses it to determine his next action. The two-character motion stream is represented as a series of transition patterns, modelled with a dynamic Bayesian network. A player's next action is chosen based on the current action labels of both players.
b) Estimation: They estimated the conditional probabilities both from training data and from probability distributions over the constituent action elements, then combined the two alternatives using joint probabilities.
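A simplified sketch of the transition model and of the probabilistic selection used later during synthesis: a flat conditional table P(next action | my action, opponent's action) estimated from counts, then sampled. The paper's dynamic Bayesian network additionally factors this distribution over the constituent action elements; this sketch collapses that structure.

```python
import random
from collections import Counter, defaultdict

# counts[(my_action, opp_action)][next_action] = number of occurrences
counts: dict[tuple[str, str], Counter] = defaultdict(Counter)

def observe(my_action: str, opp_action: str, next_action: str) -> None:
    """Accumulate one observed transition from the training streams."""
    counts[(my_action, opp_action)][next_action] += 1

def sample_next(my_action: str, opp_action: str) -> str:
    """Sample the next action in proportion to its estimated probability."""
    dist = counts[(my_action, opp_action)]
    actions, weights = zip(*dist.items())  # assumes this pair was seen in training
    return random.choices(actions, weights=weights, k=1)[0]
```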

Motion Synthesis
A pair of characters exchange actions and reactions via the cross edges while traversing their respective single-player motion transition graphs
a) Motion Selection: They choose the next action randomly among candidate actions according to their estimated probabilities (as in the sampling sketch above).
b) Interactive Model: While synthesising an action, the avatar may perform other actions in response to the opponent's actions.

Results: Visually good results, real-time performance (more than 30fps)

Limitations:
• Because of the way actions are chosen, the two characters may not be perfectly positioned to perform their respective actions, which can result in unexpected collisions.
• Their captured actions differ from real martial-arts situations, since the performers try to avoid injuring each other (both in action and in reaction)

Future Work:
• How to determine the ideal position of each character, free from unexpected collisions, while avoiding motion discontinuity or jerkiness.
• Improve their dataset with physically more natural motions.
• Generalise the approach to other sports such as ping-pong and tennis, in which a variety of dynamic motions are performed to hit a ball.

