David Whitney, Professor

Closed (1) Behavioral and fMRI experiments of emotion perception with dynamic videos

Applications for Spring 2019 are now closed for this project.

Emotion recognition is a core aspect of human experience and is important for social functioning. It is widely assumed that registering facial expressions is the key to recognizing emotion, and models of emotion recognition have accordingly focused mainly on facial and bodily features under static, unnatural conditions. However, emotion perception is continuous and dynamic, and an individual's face and body are usually perceived within a meaningful context. These face-based models and measures are therefore of limited use for understanding the mechanisms underlying emotion recognition and for developing real-world applications.

Here we aim to understand the mechanisms underlying efficient and robust emotion perception. An alternative view is that emotion recognition is, at its heart, a context-mediated process: context makes a significant and direct contribution to the perception of emotion in a precise spatial and temporal manner. Human perceptual systems are exquisitely sensitive to context and gist information in dynamic natural scenes. Such dynamic gist information could carry rich affect-relevant signals, including the presence of other people, visual background scene information, and social interactions; this is unique emotional information that cannot be obtained from an individual's face and body alone. In fact, a recent meta-analysis found that facial expressions often do not reflect our actual emotional states but instead convey our intentions or social goals. For example, a smiling face could accompany completely different internal emotions depending on the context: it could be faked to hide nervousness in an interview, signal friendliness when celebrating another person's success, or show hostility when teasing or mocking others.

Therefore, we hypothesize that emotion recognition may be efficiently driven by dynamic visual context, independent of information from facial expressions and body postures. Our project uses a large dataset of hundreds of dynamic, naturalistic video clips, in combination with deep neural networks and advanced human behavioral measures. During the spring semester, we also plan to design fMRI experiments and collect data to investigate the neural mechanisms of context-based emotion perception. We believe that our findings will benefit computer vision, neural, and social-cognitive models, as well as psychological measures of emotional intelligence.

There are many tasks for students to accomplish, and we will assign manageable tasks every week depending on each assistant's skill set and preferences. Students will mainly be responsible for preparing experimental materials, running experiments, and analyzing data. Specific tasks will include processing outputs from deep neural network models for video analysis in Python, editing videos using software such as Adobe Premiere Pro, managing data collection and credit assignment through RPP, writing data analysis code in Python, and editing unpublished manuscripts for submission to academic journals. Students will receive training for the tasks assigned, and we will meet regularly to review progress. Students will be expected to demonstrate significant progress toward completion. We will also spend weekly meetings discussing background papers and connecting them to the broader literature. Students will be expected to develop the leadership and professional skills requisite for future academic and professional pursuits.
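To give a concrete sense of the Python work involved, here is a minimal sketch of one such task: aggregating per-frame emotion scores produced by a deep network into smoothed per-clip time courses. The file layout and column names (clip_id, frame, valence, arousal) are hypothetical illustrations, not the lab's actual pipeline.

```python
# Sketch: aggregate per-frame emotion scores from a video model into
# per-clip time courses. The CSV layout is a hypothetical example of
# deep-network output, not the lab's actual format.
import pandas as pd

def load_clip_timecourses(path):
    """Return one smoothed valence/arousal time course per video clip."""
    frames = pd.read_csv(path)  # expected columns: clip_id, frame, valence, arousal
    timecourses = {}
    for clip_id, clip in frames.groupby("clip_id"):
        clip = clip.sort_values("frame")
        # Light temporal smoothing to suppress frame-level jitter.
        smoothed = clip[["valence", "arousal"]].rolling(window=5, min_periods=1).mean()
        timecourses[clip_id] = smoothed.to_numpy()
    return timecourses

timecourses = load_clip_timecourses("frame_scores.csv")  # hypothetical export file
print(f"Loaded {len(timecourses)} clip time courses")
```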

Day-to-day supervisor for this project: Zhimin Chen, Graduate Student

Qualifications: Required: High motivation, effective communication, and problem-solving skills. Desirable but not essential: Experience in video editing with Adobe Premiere Pro; experience with Python or Matlab; basic knowledge of deep neural networks; basic knowledge of human perception from class C126.

Weekly Hours: to be negotiated

Related website: https://whitneylab.berkeley.edu/index.html

Closed (2) Eye movement behavior during autonomous vehicle-human interaction

Applications for Spring 2019 are now closed for this project.

One of the major challenges autonomous vehicle research faces is developing the ability to communicate to humans, both pedestrians and drivers, the vehicle's next actions and intentions. In the first phase of this project, we will ask how the lack of a driver in a car changes pedestrians' eye movements while they decide whether or not to cross at an intersection. When facing a manually driven car, pedestrians get information about the vehicle's intention and next action from two sources: the driver's face and the bumper of the car. The driver's face can communicate the driver's awareness of the pedestrian, while the rate at which the bumper approaches reveals the time to impact. What happens when there is no driver (as in autonomous vehicles) to convey this information? The present project uses state-of-the-art mobile eye-tracking devices to record pedestrians' eye movements while they interact with autonomous and manually driven cars.
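As background on why the bumper's rate of approach specifies time to impact: a classic formalization from the perception literature (Lee's "tau", cited here as context rather than as this project's specific model) estimates time to contact directly from the optical expansion of the approaching object:

```latex
% Time to contact from optical expansion (Lee's tau):
% \theta(t) is the visual angle subtended by the car's bumper at time t.
\tau(t) \approx \frac{\theta(t)}{\mathrm{d}\theta(t)/\mathrm{d}t}
```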

The second phase of this project will focus on how semi-autonomous vehicles change drivers' attention. Specifically, it will address changes in eye movements during driving. For this phase, drivers will wear a mobile eye-tracker while performing a task in a driving simulator.

The research apprentice will be involved in running experiments and analyzing data. Students will be trained to set up and record data with the Pupil Labs eye-tracker. Analysis of the data includes applying deep neural networks and Python toolboxes to categorize the eye movement videos and writing Matlab and Python code to investigate eye movement patterns. The apprentice will also meet with the supervisor weekly to discuss preliminary data and background literature, and will have the opportunity to attend weekly lab meetings.
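As a rough illustration of this analysis step, the sketch below loads exported gaze positions and computes how often gaze falls in a screen region of interest. The filename and column names follow a common Pupil Labs export format (gaze_positions.csv with normalized 0-1 coordinates and a per-sample confidence value), but should be treated as assumptions to verify against the actual export.

```python
# Sketch: proportion of gaze samples inside a rectangular region of interest.
# Assumes a Pupil Labs-style export; column names are assumptions.
import pandas as pd

def gaze_in_roi(csv_path, x_range=(0.4, 0.6), y_range=(0.5, 0.9), min_confidence=0.8):
    """Fraction of gaze samples falling inside the ROI."""
    gaze = pd.read_csv(csv_path)
    gaze = gaze[gaze["confidence"] >= min_confidence]  # drop low-quality samples
    in_roi = (gaze["norm_pos_x"].between(*x_range)
              & gaze["norm_pos_y"].between(*y_range))
    return in_roi.mean()

print(gaze_in_roi("gaze_positions.csv"))  # e.g., an ROI around the driver's seat
```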

Day-to-day supervisor for this project: Teresa Canas-Bajo, Graduate Student

Qualifications: Required: High motivation, interest in visual perception and cognitive science, basic programming skills, ability to work independently on certain projects. Desirable but not essential: basic knowledge of deep neural networks; experience with programming languages Python and/or Matlab.

Weekly Hours: to be negotiated

Related website: https://whitneylab.berkeley.edu/index.html

Closed (3) Individual Differences in Object Localization

Applications for Spring 2019 are now closed for this project.

Object localization refers to the ability to locate the position of objects in the visual field, and it plays a crucial role in many everyday tasks such as grasping and reaching. Past research has mostly assumed that object localization is a universal mechanism that does not vary across individuals. However, given accumulating structural and anatomical evidence of profound individual differences in the visual system, it is quite possible that people show systematic, idiosyncratic variation even in basic object localization. To test this hypothesis, we designed an adjustment task in which observers were briefly shown an image on screen and then used the mouse to indicate its center. We found that every observer has a unique error pattern, and this pattern remained consistent when observers were re-tested several weeks later.
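A minimal sketch of the kind of analysis behind this claim (the data layout and column names are hypothetical): compute each observer's mean localization error vector at each target position, then correlate the error maps across sessions to quantify test-retest consistency.

```python
# Sketch: per-observer localization error maps and test-retest consistency.
# Columns (session, target_x, target_y, click_x, click_y) are hypothetical.
import numpy as np
import pandas as pd

def error_map(trials):
    """Mean (dx, dy) localization error at each target position."""
    trials = trials.assign(dx=trials["click_x"] - trials["target_x"],
                           dy=trials["click_y"] - trials["target_y"])
    return trials.groupby(["target_x", "target_y"])[["dx", "dy"]].mean()

def test_retest_r(trials):
    """Correlate one observer's error map across two sessions."""
    m1 = error_map(trials[trials["session"] == 1])
    m2 = error_map(trials[trials["session"] == 2])
    m1, m2 = m1.align(m2, join="inner")  # keep shared target positions only
    return np.corrcoef(m1.to_numpy().ravel(), m2.to_numpy().ravel())[0, 1]
```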
Given this, the next step of our project is to expand this finding both horizontally and vertically. "Horizontally" means testing the task on different observer groups (for example, twins or mTurk workers) to increase the power of the current finding; "vertically" means asking when these substantial individual differences in object localization emerge early in life and how they change with aging, which may involve testing children and older adults.

The research apprentice will be involved in coding experiments, running experiments, and analyzing data; this work is mainly done in Matlab, so it offers good practice with the language. In addition, students will be trained in basic psychophysics skills, including but not limited to color correction, parameter adjustment, and the statistics needed for individual-differences analyses. The apprentice will also meet with the supervisor weekly to discuss background literature and preliminary data, and will have the opportunity to attend weekly lab meetings.

Day-to-day supervisor for this project: Zixuan Wang, Graduate Student

Qualifications: Required: High motivation, effective communication, and problem-solving skills. Desirable but not essential: Experience with Python or Matlab; basic knowledge of human perception from class C126.

Weekly Hours: to be negotiated

Related website: https://whitneylab.berkeley.edu/index.html

Closed (4) Individual differences in holistic perception of faces

Applications for Spring 2019 are now closed for this project.

The human perceptual system processes faces in a unique manner. Humans perceive faces holistically (i.e., as a whole) rather than as a set of separate features. The face inversion effect is one of the most compelling pieces of evidence for holistic processing of faces: upright faces are identified faster than inverted faces, and, more interestingly, this difference is disproportionately larger for faces than for objects. The best-supported explanation is that inverting a face disrupts holistic processing, which is key for face recognition; non-face objects, in contrast, are not processed holistically, so inverting them does not impair recognition to the same degree. Previous work demonstrates that some individuals are better at this holistic type of processing than others, and the ability to perceive faces as a whole has been shown to correlate with other individual measures such as face recognition abilities and impairments. Preliminary data from our lab suggest not only that there are individual differences in holistic processing of faces, but also that specific faces processed holistically by one observer were not processed holistically by others.

The aim of this project is therefore to investigate whether there are such unique individual differences in the holistic processing of specific faces. For this purpose, we use Mooney faces as stimuli: two-tone images readily perceived as faces despite lacking clear, separate face-specific features. Mooney faces are ideal for this project because 1) they can only be recognized holistically, since they lack separate, identifiable face-like parts, and 2) anecdotal evidence suggests individual differences in which Mooney faces people find easy or hard to recognize. In a set of tasks, we will calculate the inversion effect for each subject for each Mooney face. By measuring each subject's ability to identify real versions of the Mooney faces, their lifetime experience with particular sets of faces, and other face recognition abilities, we expect to find a way to predict which faces will be most holistically processed by each subject. Later phases of the project will involve analysis and categorization of the specific features of each Mooney face. We expect to explore whether one or more individual factors, such as face learning experiences or recognition biases, shape the set of faces that each subject processes holistically.
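To make the planned measure concrete, here is a minimal sketch of computing a per-subject, per-face inversion effect as the difference in recognition accuracy between upright and inverted presentations. The trial-log layout (subject, face_id, orientation, correct) is a hypothetical illustration, not the lab's actual data format.

```python
# Sketch: per-subject, per-face inversion effect for Mooney faces.
# Columns (subject, face_id, orientation, correct) are hypothetical.
import pandas as pd

def inversion_effects(trials):
    """Upright minus inverted recognition accuracy, per subject and face."""
    acc = (trials.groupby(["subject", "face_id", "orientation"])["correct"]
                 .mean()
                 .unstack("orientation"))
    return acc["upright"] - acc["inverted"]

trials = pd.read_csv("mooney_trials.csv")  # hypothetical trial log
effects = inversion_effects(trials)
print(effects.sort_values(ascending=False).head())  # most holistically processed pairs
```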

The research apprentice will be involved in designing stimuli, running experiments, and analyzing data. Students will be trained to set up and record data with the Pupil Labs eye-tracker. Students will be responsible for writing Matlab and Python code and applying statistical toolboxes to analyze the data. The research apprentice will also be expected to read the relevant literature and write reports on the ongoing results of the project. The student will also meet with the supervisor weekly to discuss preliminary data and background literature, and will have the opportunity to attend weekly lab meetings.

Day-to-day supervisor for this project: Teresa Canas-Bajo, Graduate Student

Qualifications: Required: High motivation, interest in visual perception, basic programming skills, ability to work independently on certain projects. Desirable but not essential: basic knowledge of Adobe Photoshop; experience with programming languages Python and/or Matlab.

Weekly Hours: to be negotiated

Related website: https://whitneylab.berkeley.edu/index.html