Dominique Vaufreydaz

Maître de Conférences - HDR at University Grenoble Alpes / Inria / LIG laboratory

I am Maître de Conférences - HDR (Associate Professor) in Computer Science at University Grenoble Alpes and a researcher in the Pervasive Interaction team of the LIG laboratory and Inria.

My current research interests concern multimodal perception and behavior analysis, mainly of humans, in the context of smart spaces/ubiquitous computing, healthcare and assistive technologies, and affective computing. This research can be applied to sociable robot companions, autonomous cars, smart homes, or any human/agent interaction.

Research

Research topics:

• Multimodal perception for interaction

• Smart spaces/Ambient Assisted living

• Affective computing

• Sociable interaction with robots and autonomous cars

Multimodal perception of People, Behaviors and Emotions

Machine Learning (including Deep Learning) with several sensors (microphones, cameras, RGB-D, LIDAR, ...) for the perception of humans.

This research addresses the use of multimodal sensors (microphones, cameras, RGB-D, LIDAR, ...) to perceive humans, their behaviors and their mental states. It spans low-level signal processing up to high-level machine learning. Deep learning is now part of our machine learning toolbox, given its performance on some of our perception tasks (a minimal code sketch follows the keyword list below).

Keywords:

• Machine Learning

• Deep Learning

• Computer vision

• Multimodal processing

• Sociable robot
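As a rough illustration of what multimodal perception can look like in code, here is a minimal late-fusion sketch in PyTorch: one small encoder per modality, with features concatenated before a shared classification head. The modalities, feature dimensions and class count are arbitrary placeholders for the example, not our actual pipeline.

    import torch
    import torch.nn as nn

    class LateFusionClassifier(nn.Module):
        """Toy late-fusion model: one encoder per modality,
        features concatenated before a shared classification head."""
        def __init__(self, audio_dim=40, depth_dim=128, hidden=64, n_classes=4):
            super().__init__()
            self.audio_enc = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
            self.depth_enc = nn.Sequential(nn.Linear(depth_dim, hidden), nn.ReLU())
            self.head = nn.Linear(2 * hidden, n_classes)

        def forward(self, audio_feats, depth_feats):
            # Encode each modality independently, then fuse by concatenation.
            a = self.audio_enc(audio_feats)
            d = self.depth_enc(depth_feats)
            return self.head(torch.cat([a, d], dim=-1))

    model = LateFusionClassifier()
    audio = torch.randn(8, 40)    # e.g., MFCC features from microphones
    depth = torch.randn(8, 128)   # e.g., pooled features from an RGB-D frame
    logits = model(audio, depth)  # (8, n_classes) scores per sample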

Sociable Interaction with Humans

Use perception of humans and sociable feedback from the system in the interaction loop.

In the first part of the interaction loop, we use perception of humans as input for intelligent systems (robot companions, social robots, autonomous cars...). This information makes it possible to anticipate human needs or to predict human behaviors. In the second part of the interaction loop, we study feedback from the system. For instance, in the case of a social companion, its animation must reflect its internal state and must be directly readable/understandable by its human partner(s). For mobile devices, mobile robots or autonomous cars, navigation must be socially acceptable and predictable.
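To make the idea of socially acceptable navigation concrete, here is a minimal sketch of a proxemics-inspired cost map: each detected person adds a Gaussian penalty around their position, which a path planner can then avoid. The grid size, spread and positions are illustrative assumptions, not values from our experiments.

    import numpy as np

    def proxemics_cost_map(people, shape=(100, 100), sigma=5.0):
        """Toy social cost map: a 2D Gaussian penalty around each person.

        people : list of (row, col) positions on the grid
        sigma  : spread of the personal-space penalty, in cells
        """
        rows, cols = np.indices(shape)
        cost = np.zeros(shape)
        for (pr, pc) in people:
            d2 = (rows - pr) ** 2 + (cols - pc) ** 2
            cost += np.exp(-d2 / (2.0 * sigma ** 2))
        return cost  # a planner adds this to its usual travel cost

    cost = proxemics_cost_map([(30, 40), (60, 70)])
    print(cost[30, 40], cost[0, 0])  # high near a person, near zero far away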

Smart spaces/Ambient Assisted living

Perception within smart spaces, from two points of view.

On this research topic, two points of view are addressed. The first is how to distribute perception systems in smart spaces or ubiquitous environments; this work led to Omiscid, a middleware for distributed (perception) systems in such environments. The second investigates how perception research can help people in their daily lives at home (notably the elderly) or at work. We also study the use of IoT objects to complement human perception and system feedback in smart spaces and smart homes.
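A key ingredient of such a middleware is service discovery, so that perception components find each other on the network. As a generic stand-in for that idea (not Omiscid's actual API), here is a minimal sketch using the python-zeroconf library to advertise a hypothetical perception service; the service type, instance name, address and properties are invented for the example.

    import socket
    from zeroconf import Zeroconf, ServiceInfo

    # Advertise a (hypothetical) perception service so that other
    # components in the smart space can discover and connect to it.
    info = ServiceInfo(
        "_perception._tcp.local.",               # made-up service type
        "FaceTracker._perception._tcp.local.",   # made-up instance name
        addresses=[socket.inet_aton("192.168.1.10")],
        port=5000,
        properties={"modality": "rgbd"},
    )

    zc = Zeroconf()
    zc.register_service(info)      # now discoverable via DNS-SD on the LAN
    try:
        input("FaceTracker advertised; press Enter to stop.\n")
    finally:
        zc.unregister_service(info)
        zc.close()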

Projects, software and datasets

List of research projects, software and datasets contributions

Since 1998, I have been involved in many research projects. I have also initiated personal projects on specific topics, such as the MobileRGBD project.

CEEGE, VALET, MobileRGBD, Expressive Figurines, Equipex Amiqual

Pramad, PAL, ICT Labs, CASPER, CHIL, FAME, NESPOLE!, C-STAR II

Contributions.

You can take a look at my GitHub page, at the MobileRGBD project, or at a publication on the BRAF-100 French corpus.

Publications

Building Prior Knowledge: A Markov Based Pedestrian Prediction Model Using Urban Environmental Data

ICARCV 2018 - 15th International Conference on Control, Automation, Robotics and Vision

Autonomous vehicles navigating in urban areas need to understand and predict future pedestrian behavior for safer navigation. This high level of situational awareness requires observing pedestrian behavior and extrapolating their positions to know future positions. While some work has been done in this field using Hidden Markov Models (HMMs), one of the few observed drawbacks of the method is the need for informed priors for learning behavior. In this work, an extension to the Growing Hidden Markov Model (GHMM) method is proposed to solve some of these drawbacks. This is achieved by building on existing work using potential cost maps and the principle of Natural Vision. As a consequence, the proposed model is able to predict pedestrian positions more precisely over a longer horizon compared to the state of the art. The method is tested over "legal" and "illegal" behavior of pedestrians, having trained the model with sparse observations and partial trajectories. The method, with no training data, is compared against a trained state-of-the-art model. It is observed that the proposed method is robust even in new, previously unseen areas.

Read the article 
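As a rough illustration of the prediction principle behind this line of work (a plain first-order Markov model over discretized positions, not the paper's Growing HMM), here is a toy sketch that learns a transition matrix from example trajectories and extrapolates a pedestrian's future position; the corridor, trajectories and horizon are invented for the example.

    import numpy as np

    N = 5  # toy 1-D corridor with N discrete cells

    # Invented training trajectories: sequences of visited cells.
    trajectories = [[0, 1, 2, 3, 4], [0, 1, 2, 2, 3], [1, 2, 3, 4, 4]]

    # Count observed transitions, then row-normalize into probabilities.
    T = np.full((N, N), 1e-6)          # small prior to avoid zero rows
    for traj in trajectories:
        for a, b in zip(traj, traj[1:]):
            T[a, b] += 1
    T /= T.sum(axis=1, keepdims=True)

    # Propagate a belief over positions h steps into the future.
    belief = np.zeros(N)
    belief[1] = 1.0                    # pedestrian currently in cell 1
    for _ in range(3):                 # 3-step prediction horizon
        belief = belief @ T
    print(belief.round(3))             # probability of each future cell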

Personal space of autonomous car's passengers sitting in the driver's seat

Intelligent Vehicle 2018

This article deals with the specific context of an autonomous car navigating in an urban center within a shared space between pedestrians and cars. The driver delegates the control to the autonomous system while remaining seated in the driver's seat. The proposed study aims at giving a first insight into the definition of human perception of space applied to vehicles by testing the existence of a personal space around the car. It aims at measuring proxemic information about the driver's comfort zone in such conditions. Proxemics, or human perception of space, has been largely explored when applied to humans or to robots, leading to the concept of personal space, but poorly when applied to vehicles. In this article, we highlight the existence and the characteristics of a zone of comfort around the car which is not correlated to the risk of a collision between the car and other road users. Our experiment includes 19 volunteers using a virtual reality headset to look at 30 scenarios filmed in 360° from the point of view of a passenger sitting in the driver's seat of an autonomous car. They were asked to say "stop" when they felt discomfort visualizing the scenarios. As noted, the scenarios deliberately avoid any collision effect, as we want to measure discomfort rather than fear. The scenarios involve one or three pedestrians walking past the car at different distances from the wings of the car, relative to the direction of motion of the car, on both sides. The car is either static or moving straight forward at different speeds. The results indicate the existence of a comfort zone around the car in which intrusion causes discomfort. The size of the comfort zone is sensitive neither to the side of the car where the pedestrian passes nor to the number of pedestrians. In contrast, the feeling of discomfort is relative to the car's motion (static or moving). Another outcome from this study is an illustration of the usage of first-person 360° video and a virtual reality headset to evaluate the feelings of a passenger in an autonomous car.

Read the article 

Figurines, a multimodal framework for tangible storytelling

6th Workshop on Child Computer Interaction (WOCCI 2017) at the 19th ACM International Conference on Multimodal Interaction (ICMI 2017).

This paper presents Figurines, an offline framework for narrative creation with tangible objects, designed to record storytelling sessions with children, teenagers or adults. This framework uses tangible diegetic objects to record a free narrative from up to two storytellers and construct a fully annotated representation of the story. This representation is composed of the 3D position and orientation of the figurines, the position of decor elements and an interpretation of the storytellers' actions (facial expression, gestures and voice). While maintaining the playful dimension of the storytelling session, the system must tackle the challenge of recovering the free-form motion of the figurines and the storytellers in uncontrolled environments. To do so, we record the storytelling session using a hybrid setup with two RGB-D sensors and figurines augmented with IMU sensors. The first RGB-D sensor complements the IMU information in order to identify the figurines and track them, as well as the decor elements. It also tracks the storytellers jointly with the second RGB-D sensor. The framework has been used to record preliminary experiments to validate the interest of our approach. These experiments evaluate figurine tracking and the combination of motion with the storytellers' voice, gestures and facial expressions. In a make-believe game, this story representation was re-targeted onto virtual characters to produce an animated version of the story. The final goal of the Figurines framework is to enhance our understanding of the creative processes at work during immersive storytelling.

Read the article 

Curriculum vitae

  • 2005-

    Maître de Conférences - HDR (Associate Professor) in Computer Science.

    I am currently Maître de Conférences - HDR (Associate Professor) in Computer Science at Grenoble Alpes University and in the Pervasive Interaction team of Inria.

  • 2002-2005

    Postdoc in the PRIMA team of the GRAVIR laboratory and Inria.

    I was involved in the European projects FAME and CHIL, working on integrating context (linguistic, thematic, situational awareness) into acoustic perception (speech recognition, speaker localization) within an intelligent environment. I also worked on an intelligent virtual cameraman.

  • 2002

    Ph.D. in Computer Science in the GEOD team of the CLIPS laboratory.

    My Ph.D. thesis was about "statistical language modeling using Internet documents for continuous speech recognition". My work on speech recognition was used within the C-STAR and NESPOLE! international projects.

  • 2001-2002

    ATER (Research and teaching assistant).

    In 2001/2002, I was an ATER (research and teaching assistant) in Computer Science at University Pierre Mendès-France.

Campus

  Image courtesy of JP Guilbaud.


Contact

Inria Rhône-Alpes

Zirst Montbonnot, 655, avenue de l’Europe

38334 Saint Ismier Cedex. France

+33 4 76 61 55 19 - Dominique.Vaufreydaz@inria.fr

Université Grenoble Alpes

BATEG

BP 47

38040 Grenoble cedex 9. France

+33 4 76 82 78 36 - Dominique.Vaufreydaz@univ-grenoble-alpes.fr