



Years ago, LinkedIn discovered that the recommendation algorithms it uses to match job candidates with opportunities were producing biased results. The algorithms were ranking candidates partly on the basis of how likely they were to apply for a position or respond to a recruiter. The system wound up referring more men than women for open roles simply because men are often more aggressive at seeking out new opportunities. After discovering the problem, LinkedIn built another AI program to counteract the bias in the results of the first.

Meanwhile, some of the world’s largest job search sites, including CareerBuilder, ZipRecruiter, and Monster, are taking very different approaches to addressing bias on their own platforms, as we report in the newest episode of MIT Technology Review’s podcast “In Machines We Trust.” Since these platforms don’t disclose exactly how their systems work, though, it’s hard for job seekers to know how effective any of these measures are at actually preventing discrimination.

“You typically hear the anecdote that a recruiter spends six seconds looking at your résumé, right?” says Derek Kan, vice president of product management at Monster. “When we look at the recommendation engine we’ve built, you can reduce that time down to milliseconds.”

Most matching engines are optimized to generate applications, says John Jersin, the former vice president of product management at LinkedIn. These systems base their recommendations on three categories of data: information the user provides directly to the platform; data assigned to the user based on others with similar skill sets, experiences, and interests; and behavioral data, like how often a user responds to messages or interacts with job postings.

In LinkedIn’s case, these algorithms exclude a person’s name, age, gender, and race, because including these characteristics can contribute to bias in automated processes.
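To make the general pattern concrete, here is a minimal, purely illustrative Python sketch, not LinkedIn’s actual system or data. It assumes a hypothetical matching score built from a skills signal plus a behavioral “likelihood to apply” signal (the kind of proxy the article says introduced the bias), followed by a separate re-ranking step that interleaves the top candidates from each group so that one group’s higher behavioral scores cannot push the other group out of view. All names, fields, and weights are invented for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Candidate:
    name: str                # shown to recruiters, never used for scoring
    group: str               # protected attribute: used only to audit/re-rank, never to score
    skills_match: float      # 0..1, overlap with the job's listed skills
    apply_likelihood: float  # 0..1, behavioral signal: how likely they are to apply

def score(c: Candidate) -> float:
    """Hypothetical matching score: skills plus the behavioral signal.
    The behavioral term is the kind of proxy that produced the skewed results."""
    return 0.6 * c.skills_match + 0.4 * c.apply_likelihood

def rerank_for_parity(candidates: List[Candidate]) -> List[Candidate]:
    """Post-processing step in the spirit of a bias-correcting second program:
    rank within each group, then interleave the groups' top candidates."""
    ranked = sorted(candidates, key=score, reverse=True)
    by_group = {}
    for c in ranked:
        by_group.setdefault(c.group, []).append(c)
    groups = list(by_group.values())
    result = []
    depth = max(len(g) for g in groups) if groups else 0
    for i in range(depth):
        for g in groups:
            if i < len(g):
                result.append(g[i])
    return result

if __name__ == "__main__":
    pool = [
        Candidate("A", "women", 0.90, 0.3),
        Candidate("B", "men",   0.80, 0.9),
        Candidate("C", "men",   0.70, 0.8),
        Candidate("D", "women", 0.85, 0.4),
    ]
    print([c.name for c in sorted(pool, key=score, reverse=True)])  # raw: ['B', 'C', 'D', 'A']
    print([c.name for c in rerank_for_parity(pool)])                # re-ranked: ['B', 'D', 'C', 'A']
```

In this toy example the raw ranking places both men ahead of both women because of the behavioral term, while the re-ranking step alternates groups without touching the underlying scores, mirroring the article’s description of a second program layered on top of the first.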
