David Whitney, Professor

Closed (1) Individual differences in face perception.

Applications for fall 2021 are now closed for this project.

The human perceptual system processes faces in a unique manner. Humans perceive faces holistically (i.e., as a whole) rather than as a set of separate features. The face inversion effect is one of the most compelling pieces of evidence for holistic processing of faces: upright faces are identified faster than inverted faces, and, more interestingly, this difference is disproportionately larger for faces than for objects. The most widely supported explanation is that inverting a face disrupts holistic processing, which is key for face recognition. In contrast, non-face objects are not processed holistically, so inverting them does not affect recognition to the same degree. Previous work demonstrates that some individuals are better at this holistic type of processing than others. The ability to perceive faces as a whole has been shown to correlate with other individual measures, such as face recognition abilities and impairments.

Preliminary data from our lab suggest that there are not only individual differences in holistic processing of faces in general, but also that specific faces processed holistically by one observer are not processed holistically by another. The aim of this project is to investigate whether such unique individual differences in the holistic processing of specific faces exist. For this purpose, we use Mooney faces as stimuli: two-tone images that are readily perceived as faces despite lacking clear, separate face-specific features. Mooney faces are ideal for this project because 1) they can only be recognized holistically, since they lack separate, identifiable face-like parts, and 2) there is anecdotal evidence of individual differences in which Mooney faces people find easy or hard to recognize. In a set of tasks, we will calculate the inversion effect for each subject and each Mooney face. By measuring each subject's ability to identify real versions of the Mooney faces, their lifetime experience with particular sets of faces, and other face recognition abilities, we expect to find a way to predict which faces will be processed most holistically by each subject. Later phases of the project will involve analyzing and categorizing the specific features of each Mooney face. We expect to explore whether one or more individual factors, such as face learning experiences or recognition biases, shape the set of faces that each subject processes holistically.
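
As a concrete illustration of the kind of per-subject, per-face analysis described above, the following Python sketch computes an inversion effect as the difference in recognition accuracy between upright and inverted presentations of the same Mooney face. This is a minimal sketch only; the column names and the accuracy-based measure are illustrative assumptions, not the lab's actual analysis code.

# Illustrative sketch (not the lab's analysis code): per-subject, per-face
# inversion effect as upright-minus-inverted recognition accuracy.
# Column names ("subject", "face_id", "orientation", "correct") are hypothetical.
import pandas as pd

def inversion_effects(trials: pd.DataFrame) -> pd.DataFrame:
    """Return one row per (subject, face_id) with the upright-minus-inverted
    accuracy difference; larger values suggest stronger holistic processing."""
    acc = (trials
           .groupby(["subject", "face_id", "orientation"])["correct"]
           .mean()
           .unstack("orientation"))            # columns: "upright", "inverted"
    acc["inversion_effect"] = acc["upright"] - acc["inverted"]
    return acc.reset_index()

# Minimal usage example with made-up trial data
example = pd.DataFrame({
    "subject":     [1, 1, 1, 1, 2, 2, 2, 2],
    "face_id":     ["A", "A", "B", "B", "A", "A", "B", "B"],
    "orientation": ["upright", "inverted"] * 4,
    "correct":     [1, 0, 1, 1, 1, 1, 0, 0],
})
print(inversion_effects(example))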

The research apprentice will be involved in designing stimuli, running experiments, collecting data, and analyzing data. Students will be trained in research methods and statistics. Students will be responsible for writing Matlab and Python code and applying statistical toolboxes to analyze the data. The research apprentice will also be expected to read the relevant literature and write reports on the ongoing results of the project. The student will also meet with the supervisor weekly to discuss preliminary data and background literature, in addition to having the opportunity to attend weekly lab meetings.

Most tasks and meetings will be remote, but some in-person data collection will be required.


Day-to-day supervisor for this project: Teresa Canas-Bajo, Graduate Student

Qualifications: Required: High motivation, interest in visual perception, basic programming skills, ability to work independently on certain projects. Desirable but not essential: basic knowledge of Adobe Photoshop; experience with programming languages Python and/or Matlab

Weekly Hours: to be negotiated

Related website: https://www.frontiersin.org/articles/10.3389/fpsyg.2020.585921/full

Closed (2) Eye movement behavior during autonomous vehicle-human interaction.

Applications for fall 2021 are now closed for this project.

The advent of autonomous vehicles (AVs) over the last decade has changed the traditional role of drivers. The autonomous cars currently being developed and commercialized are not fully autonomous. In fact, commercially available AVs take control of only some driving functions, such as speed, or are highly automated systems that still need the driver's input and control in challenging situations (for instance, heavy traffic or poor weather conditions). To the extent that cars aren't perfectly autonomous, some work falls on the drivers' shoulders, including monitoring the system's behavior. This proposed project aims to investigate whether drivers are capable of efficiently monitoring an AV system and to provide an accurate measure of driver inattentiveness.

One of the major challenges facing autonomous vehicle research is developing the ability to communicate the vehicle's next actions and intentions to humans, both pedestrians and drivers. In the first phase of this project, we will approach the question of how drivers' attention changes when riding in an autonomous vehicle that they do not need to control. When driving a manual car, drivers need to selectively attend to the critical areas of the driving environment at all times. In an autonomous vehicle, however, the idea is that the driver would be able to disengage from the driving environment. The question arises as to whether humans are able to supervise such a system efficiently and whether automation increases the chances of the driver becoming inattentive. The present project uses state-of-the-art mobile eye-tracking devices to record drivers' eye movements while they interact with autonomous and manual cars. For this project, drivers will wear a mobile eye-tracker while performing a task in a driving simulator located in McLaughlin Hall.

The findings from this proposed project will help clarify whether drivers can efficiently monitor an automated system. If we find that the distribution of attention in automated vehicles differs from attention when manually driving a car, we will provide evidence that when drivers delegate control of the vehicle to an automated system, their attention may shift away from critical areas. This information could then be implemented in an in-car monitoring system that helps drivers maintain attention or alerts them when attention drops. Our goal is also to contribute to the BDD repository with datasets of drivers' gaze in a more realistic environment, which can be used to train attention prediction models or be incorporated into autonomous driving models.

This project is funded by and in collaboration with the Berkeley Deep Drive group. Previous work from this collaboration can be found here: https://deepdrive.berkeley.edu/node/232


The research apprentice will be involved in one or two of these tasks:

1) Setting up experiments with the driving simulator using Matlab Simulink and Python.
2) Collecting data and running subjects in experiments.
3) Analyzing eye-tracking and behavioral data: this includes applying deep neural networks and Python toolboxes to categorize the eye movement videos and writing Matlab and Python code to investigate eye movement patterns (a minimal illustration follows this list).
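
As one hedged example of what investigating eye movement patterns can involve, the following Python sketch implements dispersion-threshold fixation detection (I-DT) on raw gaze samples. It is an illustrative sketch under assumed data formats (gaze coordinates in degrees of visual angle, timestamps in seconds) and assumed thresholds, not the project's actual analysis pipeline.

# Minimal I-DT fixation detection sketch; data format and thresholds are
# illustrative assumptions, not the project's actual pipeline.
import numpy as np

def _dispersion(x, y, i, j):
    # Spread of the window [i, j): (max x - min x) + (max y - min y)
    return (x[i:j].max() - x[i:j].min()) + (y[i:j].max() - y[i:j].min())

def detect_fixations(x, y, t, max_dispersion=1.0, min_duration=0.1):
    """Return (start_time, end_time, centroid_x, centroid_y) tuples."""
    fixations = []
    n = len(t)
    start = 0
    while start < n:
        # Grow an initial window that spans at least the minimum duration
        end = start
        while end < n and t[end] - t[start] < min_duration:
            end += 1
        if end >= n:
            break
        if _dispersion(x, y, start, end + 1) <= max_dispersion:
            # Keep adding samples while the window stays compact enough
            while end + 1 < n and _dispersion(x, y, start, end + 2) <= max_dispersion:
                end += 1
            fixations.append((t[start], t[end],
                              x[start:end + 1].mean(), y[start:end + 1].mean()))
            start = end + 1
        else:
            # Too dispersed: slide the window forward by one sample
            start += 1
    return fixations

# Usage with synthetic data: 2 s of roughly stable gaze sampled at 60 Hz
rng = np.random.default_rng(0)
t = np.arange(0, 2, 1 / 60)
x = 5 + rng.normal(0, 0.05, t.size)
y = 3 + rng.normal(0, 0.05, t.size)
print(detect_fixations(x, y, t))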

Students will be trained to set up and record data with the Pupil Labs eye-tracker and to operate the driving simulator and simulation software required for the project. The apprentice will also meet with the supervisor weekly to discuss preliminary data and background literature, in addition to having the opportunity to attend weekly lab meetings. Lastly, the student will have the opportunity to attend the November Berkeley Deep Drive meeting and meet the industry sponsors.

Most tasks and meetings will be remote, but some in-person data collection will be required.

Day-to-day supervisor for this project: Teresa Canas-Bajo, Graduate Student

Qualifications: Required: High motivation, interest in visual perception and cognitive science, interest in learning about driving research and simulators, experience with the programming languages Python and/or Matlab, ability to work independently on certain projects. Desirable but not essential: basic knowledge of deep neural networks.

Weekly Hours: to be negotiated

Related website: https://whitneylab.berkeley.edu/index.html
Related website: https://deepdrive.berkeley.edu/node/232

Closed (3) How do we register the flow of time in natural movies?

Closed. This professor is continuing with Spring 2021 apprentices on this project; no new apprentices needed for Fall 2021.

Accurately registering the timing of different events happening around us is a crucial ability for all human beings. In natural scenes, such as movies or everyday life, dynamic events can happen in either the left or the right visual field. Given that information from the left visual field is processed in the right hemisphere of the brain and information from the right visual field is processed in the left hemisphere, how is the brain able to register the timing of events that may be happening in different halves of our vision? Our study aims to investigate this question by showing observers well-controlled dynamic movies and probing their sensitivity in discriminating the timing of different objects' appearance.
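
To illustrate one common way such timing sensitivity can be quantified, the Python sketch below fits a cumulative Gaussian psychometric function to hypothetical temporal order judgment data and reads off the point of subjective simultaneity (PSE) and a just-noticeable difference (JND). The design, stimulus onset asynchronies, and response proportions are assumptions for demonstration only, not the study's actual task or data.

# Illustrative sketch only, assuming a temporal order judgment design:
# fit a cumulative Gaussian to the proportion of "left object appeared
# first" responses as a function of stimulus onset asynchrony (SOA).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(soa, pse, sigma):
    """Cumulative Gaussian: probability of reporting 'left first' at each SOA."""
    return norm.cdf(soa, loc=pse, scale=sigma)

# Hypothetical data: SOA in ms (negative = right object appeared first),
# and the observed proportion of "left first" responses at each SOA.
soa = np.array([-120, -80, -40, 0, 40, 80, 120], dtype=float)
p_left_first = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])

(pse, sigma), _ = curve_fit(psychometric, soa, p_left_first, p0=[0.0, 50.0])

# PSE is the SOA at which responses are at 50%; a common JND estimate is
# sigma * z(0.75), the SOA change needed to go from 50% to 75% responses.
jnd = sigma * norm.ppf(0.75)
print(f"PSE = {pse:.1f} ms, JND = {jnd:.1f} ms")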

The research apprentice will be involved in coding the experiments, running experiments, and analyzing data. This work mainly provides practice with Matlab. In addition, students will be trained in the basic skills of psychophysics experiments, including but not limited to color correction, parameter adjustment, and more advanced statistics for individual data analysis. The apprentice will also meet with the supervisor weekly to discuss background literature and preliminary data, in addition to having the opportunity to attend weekly lab meetings.

Day-to-day supervisor for this project: Zixuan Wang, Graduate Student

Qualifications: 1. Required: High motivation, effective communication, and problem-solving skills. 2. Desirable but not essential: experience with Python or Matlab; experience in movie editing; basic knowledge of human perception from the course C126.

Weekly Hours: 6-8 hrs

Related website: https://whitneylab.berkeley.edu/index.html