Subjective performance evaluation is an important part of hiring and promotion decisions. We combine experiments with administrative data to understand what drives gender bias in such evaluations in the technology industry. Our results highlight the role of personal interaction. Leveraging 60,000 mock video interviews on a platform for software engineers, we find that average ratings for code quality and problem solving are 12 percent of a standard deviation lower for women than for men. Half of these gaps remain unexplained when we control for automated measures of coding performance.
To test for statistical and taste-based bias, we analyze two field experiments. Our first experiment shows that providing evaluators with automated performance measures does not reduce the gender gap. Our second experiment removes video interaction and compares blind with non-blind evaluations; no gender gap is present in either case. These results rule out traditional economic models of discrimination. Instead, we show that gender gaps widen with extended personal interaction and are larger for evaluators educated in regions where implicit association test scores are higher.