Due to the massive dissemination of consumer-grade Head-Mounted Displays (Oculus, Samsung, HTC) and the huge investments from major industrial players (Microsoft, Google, Facebook, SONY), avatars are becoming a major requirement in immersive virtual reality applications, such as immersive media and movies, video games, virtual communities and social networks, sports and industrial training simulations, or medical cybertherapies. In all these applications, ensuring that users feel embodied in their avatar is crucial for delivering a truly immersive and effective experience. For these reasons, an ideal VR experience should involve automatically, easily and quickly acquiring a high-quality representation (photo-realistic or stylised) of the user to display in the virtual environment, automatically animating this avatar based on the user’s movements, e.g., using intuitive and non-invasive input devices, while enabling users to “sense” the properties of the virtual environment (e.g., physical, social) through their avatar.
However, current avatars do not yet elicit such immersive and effective experiences. In particular, they often fail to convey a strong sense of embodiment or interaction, due to limitations at several levels of the experience. First, avatars suffer from technological limitations in their acquisition, simulation and control. For instance, because of the complexity of acquiring visual representations, most applications rely on low-quality generic avatars, which are not adapted to their users. These low-quality representations are also typically animated using standard input devices, with control schemes that were not designed with avatar control in mind and that do not enable users to “feel” the virtual world through their avatar. Second, current limitations are also largely due to a lack of perceptual understanding of how we perceive and interact with avatars. Such processes involve complex psychological factors, which are beginning to be explored in the VR, psychology and neuroscience communities, but much work remains to be done to fully apprehend these complex human-avatar interactions.
Objective & Challenges
For these reasons, the objective of this project is to push the limits of perception and interaction through our avatars further. In designing this next generation of avatars, we aim for avatars that provide a stronger sense of embodiment, in order to enable natural and expressive interactions with virtual worlds, to let users better feel the digital content by means of multisensory feedback, and to better share virtual experiences. Naturally, such an objective raises multiple questions: How can we acquire and model faithful or stylised representations of the users? How can we render and animate them in highly expressive or symbolic ways? How can we enable complex 3D interaction capabilities? How can we design and identify the best means of conveying multisensory information to users through their avatar? To reach these objectives, we have identified three main scientific challenges, as well as a transversal challenge consisting in characterising the perceptual experience and the resulting sense of embodiment across the whole process of acquiring, simulating, interacting and “sensing” through the avatar:
- The acquisition, real-time simulation and animation of avatars. The displayed avatars should become “customizable” and meet the requirements of the targeted user experience (e.g. realistic vs. cartoon display) or the hardware constraints (e.g. upper-body vs. full-body tracking).
- The design of novel interaction paradigms based on avatars. Successful interaction capabilities should take into account the context (e.g. display type, or game/simulation situation), the representation of the avatar (e.g. 1st vs. 3rd person, realistic vs. artistic) and its control type (e.g. gestural interaction, gamepad, keyboard).
- The design of multisensory feedback algorithms for avatars. The VR experience should successfully reflect the interactions between the avatar and its surrounding digital world, and users should perceive the properties of the virtual environment (including other avatars) through their own avatar.
In terms of organisation, addressing these multidisciplinary challenges involves the complementary expertise of several Inria teams at all levels. For example, addressing the complex problem of simultaneously animating and rendering high-quality avatars requires coupling expertise in reconstruction (MORPHEO), biomechanics (MIMETIC) and rendering (GRAPHDECO). Similarly, achieving multisensory interaction for users through their avatar requires expertise in animation (MIMETIC), in creating novel intuitive input devices (LOKI), and in multisensory feedback (HYBRID), as well as strong know-how in human-computer interaction (POTIOC), in order to design these novel interaction techniques in a manner that is conceived for and centered on avatars. Our scientific challenges are also strongly interconnected with fundamental knowledge in perception, psychology and neuroscience, provided by Prof. Mel Slater (Univ. Barcelona, ERC laureate), a world expert on the fundamental psychological and neuroscience aspects of “virtual embodiment”. Our results will then be illustrated by means of relevant demonstrators in the field of immersive cinema, developed in collaboration with a leading industrial partner, Technicolor, as well as in the field of VR training, in collaboration with Faurecia.
This initiative is a unique opportunity to gather, inside the same institute, all the scientific and technological expertise needed to cover the entire pipeline and thus address, all in one, the acquisition, simulation and display of avatars and the interaction with them, together with the necessary perceptual and psychological perspective brought by a leading expert in the field of “virtual embodiment”. These collaborations will enable the creation of the next generation of avatars, capable of being deployed in the majority of immersive applications, from home to industrial setups, where the avatar will thus become an effective tool truly enabling communication within shared experiences.