Dominique Vaufreydaz

Maître de Conférences at Université Grenoble Alpes / Inria / LIG laboratory

I am Maître de Conférences (Associate Professor) in Computer Science at Université Grenoble Alpes and a researcher in the Pervasive Interaction team of the LIG laboratory and Inria.

My current research interests concern multimodal perception and behavior analysis, mainly of humans, in the contexts of smart spaces/ubiquitous computing, healthcare and assistive technologies, and affective computing. This research can be applied to social robot companions, autonomous cars, smart homes, or any human/agent interaction.

Research

Research keywords:

• Multimodal perception

• Affective computing

• Sociable interaction

• Smart spaces/Ambient Assisted living

Multimodal perception of People, Behaviors and Emotions

Using several sensors (microphones, cameras, RGB-D, LIDAR, ...) to perceive humans.

This research addresses the use of multimodal sensors (microphones, cameras, RGB-D, LIDAR, ...) to perceive humans, their behaviors and their mental states. It spans low-level signal processing up to high-level machine learning. Deep learning is now part of our machine learning toolbox, given its performance on some of our perception tasks.

Sociable Interaction with Humans

Using perception of humans and sociable feedback from the system in the interaction loop.

In the first part of the interaction loop, we use perception of humans as input for intelligent systems (robot companions, social robots, autonomous cars...). This information makes it possible to anticipate human needs or to predict human behaviors. In the second part of the interaction loop, feedback from the system is studied. For instance, in the case of a social companion, its animation must reflect its internal state and must be directly readable/understandable by its human partner(s). For mobile devices, mobile robots or autonomous cars, navigation must be socially acceptable and predictable.

Smart spaces/Ambient Assisted living

Perception within smart spaces, from two points of view.

On this research topic, two points of view are addressed. The first is how to distribute perception systems in smart spaces or ubiquitous environments. This work led to Omiscid, a middleware for distributed (perception) systems in such environments. The second investigates how perception research can help people in their daily lives at home (notably the elderly) or at work. We also study the use of IoT objects to complement human perception and system feedback in smart spaces and smart homes.

Software and datasets

Software and dataset contributions

Under construction.

In the meantime, you can take a look at my github page, at MobileRGBD, and at a publication on the BRAF-100 French corpus.

Projects

List of research projects

Since 1998, I have been involved in many research projects. I have also initiated personal projects on specific topics, such as the MobileRGBD project.

CEEGE, VALET, MobileRGBD, Expressive Figurines, Equipex Amiqual

Pramad, PAL, ICT Labs, CASPER, CHIL, FAME, NESPOLE!, C-STAR II

Publications

Figurines, a multimodal framework for tangible storytelling

6th Workshop on Child Computer Interaction (WOCCI 2017) at the 19th ACM International Conference on Multimodal Interaction (ICMI 2017).

This paper presents Figurines, an offline framework for narrative creation with tangible objects, designed to record storytelling sessions with children, teenagers or adults. This framework uses tangible diegetic objects to record a free narrative from up to two storytellers and construct a fully annotated representation of the story. This representation is composed of the 3D position and orientation of the figurines, the position of decor elements, and an interpretation of the storytellers' actions (facial expression, gestures and voice). While maintaining the playful dimension of the storytelling session, the system must tackle the challenge of recovering the free-form motion of the figurines and the storytellers in uncontrolled environments. To do so, we record the storytelling session using a hybrid setup with two RGB-D sensors and figurines augmented with IMU sensors. The first RGB-D sensor complements the IMU information to identify the figurines and track them, as well as decor elements. It also tracks the storytellers jointly with the second RGB-D sensor.

Natural Vision Based Method for Predicting Pedestrian Behaviour in Urban Environments

IEEE 20th International Conference on Intelligent Transportation Systems, Oct 2017, Yokohama, Japan.

This paper proposes to model pedestrian behaviour in urban scenes by combining the principles of urban planning and the sociological concept of Natural Vision. This model assumes that the environment perceived by pedestrians is composed of multiple potential fields that influence their behaviour. These fields are derived from static scene elements like side-walks, cross-walks, buildings and shop entrances, and from dynamic obstacles like cars and buses. Using this model, autonomous cars increase their level of situational awareness in the local urban space, with the ability to infer probable pedestrian paths in the scene to predict, for example, legal and illegal crossings.

Read the article 
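To give an intuition of the potential-field idea described above, here is a minimal, hypothetical sketch (not the paper's implementation): it sums an attractive field for a static cross-walk and a repulsive field for a car on a 2D grid, then picks the lowest-potential cell as the most plausible pedestrian goal. All positions, weights and widths are illustrative assumptions.

```python
import numpy as np

def gaussian_field(grid_x, grid_y, cx, cy, weight, sigma):
    """One potential field centred on (cx, cy); a negative weight attracts,
    a positive weight repels."""
    d2 = (grid_x - cx) ** 2 + (grid_y - cy) ** 2
    return weight * np.exp(-d2 / (2.0 * sigma ** 2))

# 10 m x 10 m scene discretised at 0.1 m resolution.
x, y = np.meshgrid(np.linspace(0, 10, 101), np.linspace(0, 10, 101))

# Superposition of fields: a cross-walk attracts, a car repels.
potential = (
    gaussian_field(x, y, 5.0, 9.0, -1.0, 1.5)    # cross-walk (attractive)
    + gaussian_field(x, y, 4.0, 5.0, +2.0, 1.0)  # car (repulsive)
)

# The most plausible pedestrian goal is the lowest-potential cell.
iy, ix = np.unravel_index(np.argmin(potential), potential.shape)
print(x[iy, ix], y[iy, ix])  # expected near (5.0, 9.0), the cross-walk
```

In the paper's setting, such fields would be derived from perceived scene elements rather than hand-placed; the sketch only shows how superposed attractive/repulsive fields yield probable paths.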

Making Movies from Make-Believe Games

6th Workshop on Intelligent Cinematography and Editing (WICED 2017), Apr 2017, Lyon, France

Pretend play is a storytelling technique, naturally used from very young ages, which relies on object substitution to represent the characters of the imagined story. We propose "Make-believe", a system for making movies from pretend play by using 3D printed figurines as props. We capture the rigid motions of the figurines and the gestures and facial expressions of the storyteller using Kinect cameras and IMU sensors and transfer them to the virtual story-world. As a proof-of-concept, we demonstrate our system with an improvised story involving a prince and a witch, which was successfully recorded and transferred into 3D animation.

Read the article 

The Smartphone-Based Offline Indoor Location Competition at IPIN 2016: Analysis and Future Work

Sensors, MDPI, 2017, 17 (3), 557, 17 p.

This paper presents the analysis and discussion of the off-site localization competition track, which took place during the Seventh International Conference on Indoor Positioning and Indoor Navigation (IPIN 2016). Five international teams proposed different strategies for smartphone-based indoor positioning using the same reference data. The competitors were provided with several smartphone-collected signal datasets, some of which were used for training (known trajectories), and others for evaluating (unknown trajectories). The competition permits a coherent evaluation method of the competitors' estimations, where inside information to fine-tune their systems is not offered, and thus provides, in our opinion, a good starting point to introduce a fair comparison between the smartphone-based systems found in the literature. The methodology, experience, feedback from competitors and future working lines are described.

Read the article 

Curriculum vitae

  • 2005-

    Maître de Conférences (Associate Professor) in Computer Science.

    I am currently Maître de Conférences (Associate Professor) in Computer Science at Université Grenoble Alpes and a member of the Pervasive Interaction team of Inria.

  • 2002-2005

    Postdoc in the PRIMA team of the GRAVIR laboratory and Inria.

    I was involved in the European projects FAME and CHIL, working on the integration of context (linguistic, thematic, situation awareness) into acoustic perception (speech recognition, speaker localization) within an intelligent environment. I also worked on an intelligent virtual cameraman.

  • 2002

    Ph.D. in Computer Science within the GEOD team of the CLIPS laboratory.

    My Ph.D. thesis was about "statistical language modeling using Internet documents for continuous speech recognition". My work on speech recognition was used within the C-STAR II and NESPOLE! international projects.

  • 2001-2002

    ATER (Research and teaching assistant).

    In 2001/2002, I was an ATER (research and teaching assistant) in computer science at Université Pierre Mendès-France.

Campus

  Image courtesy of JP Guilbaud.  

Teaching and misc.

Contact

Inria Rhône-Alpes

Zirst Montbonnot, 655, avenue de l’Europe

38334 Saint Ismier Cedex. France

+33476615519 Dominique.Vaufreydaz@inria.fr

Université Grenoble Alpes

BATEG

BP 47

38040 Grenoble cedex 9. France

+33476827836 Dominique.Vaufreydaz@univ-grenoble-alpes.fr