Vision-based reaching for autonomous virtual humans
Item Type: Conference Paper
Citation: Peters, C. and O'Sullivan, C., 'Vision-based reaching for autonomous virtual humans', in Aylett, R. and Canamero, L. (eds), AISB'02 Symposium: Animating Expressive Characters for Social Interactions, London, UK, 2002, pp. 69-72.
aisb02Peters.pdf (Final paper), 54.14 KB
Abstract: A method for generating realistic, real-time, goal-directed arm motion for virtual humans is presented. Agents are endowed with a rudimentary synthetic vision and memory system that gathers and stores data about objects in their vicinity. Agents then plan reaching arm motions using this perceived object data rather than the global object database. Our method differs from previous approaches to goal-directed motion generation in that it uses the sensory information available to the agent to distinguish between movements towards objects currently visible to the agent and movements towards memorised object locations. The generation of appropriate arm configurations under these circumstances is based on results from neurophysiology.
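The perception-and-memory scheme summarised in the abstract could be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: all names, data structures, and the selection logic are assumptions, showing only how an agent might plan a reach from perceived data versus memorised locations rather than from the global scene database.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryEntry:
    # Hypothetical memory record (an assumption, not the paper's design):
    # the last perceived location of an object and whether it is in view now.
    position: tuple   # last perceived (x, y, z) location
    visible: bool     # currently within the agent's field of view?

@dataclass
class Agent:
    memory: dict = field(default_factory=dict)  # object id -> MemoryEntry

    def perceive(self, visible_objects):
        """Update memory from one synthetic-vision frame.

        `visible_objects` maps object ids to perceived positions; anything
        not in the frame is marked as no longer visible but stays in memory.
        """
        for entry in self.memory.values():
            entry.visible = False
        for obj_id, pos in visible_objects.items():
            self.memory[obj_id] = MemoryEntry(position=pos, visible=True)

    def reach_target(self, obj_id):
        """Return (goal position, visually_guided flag) for a reach.

        A visible object is reached at its perceived position; an object
        out of view is reached at its memorised location, mirroring the
        visible-versus-memorised distinction described in the abstract.
        """
        entry = self.memory.get(obj_id)
        if entry is None:
            return None, False  # never perceived: no reach can be planned
        return entry.position, entry.visible

agent = Agent()
agent.perceive({"cup": (0.4, 0.1, 0.9)})    # cup in view: visually guided
goal, visually_guided = agent.reach_target("cup")
agent.perceive({})                          # cup leaves the view
goal2, guided2 = agent.reach_target("cup")  # same location, memory-guided
```

In a full system the `visually_guided` flag would select between the two arm-configuration strategies the paper derives from neurophysiological results; here it merely labels which kind of data the goal came from.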
Type of material: Conference Paper
Availability: Full text available