Advanced technologies can address bias and strengthen hiring decisions
Emerging technologies such as artificial intelligence, machine learning, and big data analytics will shape the leading HR organizations of the future, and employers must be willing to invest the time and effort to use these powerful tools responsibly.
But that first means overcoming the fear of what could go wrong, and instead resolving to harness the power of technology to improve decision-making and revolutionize talent management.
SHRM Online discussed the future of work with Eric Sydell, Ph.D., an industrial-organizational psychologist and expert in artificial intelligence and machine learning, executive vice president of innovation at the hiring technology company Modern Hire, and co-author of a new book, Decoding Talent (Fast Company Press, 2022).
SHRM Online: People often react to advanced technology with trepidation. In the case of using AI in the workplace, government regulators place well-intentioned restrictions on the use of data because they fear that employers may abuse employee privacy and workers may be harmed by bias.
How can people move past these initial reactions to fully harness the benefits of this advanced technology while also addressing its risks?
Sydell: It has been observed that we are creating advanced technology faster than we can absorb it. And throughout history this has often been the case: regulations and guidelines are typically written after the fact, to rein in a new technology.
Artificial intelligence is perhaps the most powerful technology that humans have ever developed.
And, as with any powerful tool, AI can be used for either benevolent or malicious purposes. In many cases, well-intentioned AI produces harmful results through unintended consequences. But as we all know, AI can also greatly improve our world in many ways.
Privacy and bias are two of the biggest problems with unfettered applications of AI. As a society, we must find ways to mitigate these problems so that we can reap the benefits of the technology.
Of course, there are plenty of businesses out there that want private personal data so they can better target ads and other offerings, and bias is often buried deep in algorithms that otherwise produce beneficial results.
So balancing protection against privacy and bias problems with enabling AI to be effective and useful is a delicate dance between business and human interests.
In my opinion, we do not yet have sufficient constraints on algorithmic and AI development to harness AI for the benefit of humanity. The key word in that sentence is "humanity," not corporate interests. AI should benefit not only businesses but also individuals; it must make our lives better. It is not enough simply to ensure that private data is not used or to mitigate algorithmic bias.
These issues are often interrelated. For example, we often need to know which demographic groups people belong to so that we can verify that algorithms are not biased against any particular group, yet some regulations limit access to demographic information because it is considered private or could be used to discriminate.
We still have a lot of work to do if we are to harness AI and algorithms for the benefit of people.
SHRM Online: The most popular news stories about the use of artificial intelligence in hiring decisions typically depict the negative consequences of the technology, including ethical lapses, legal violations, and privacy breaches.
How can artificial intelligence and big data be used to remove bias from hiring?
Sydell: Early on, AI developers were excited about the technology and released features that weren't adequately vetted. This led to high-profile incidents, such as when Microsoft released its Tay chatbot, which was trained on Twitter data.
Almost immediately, Twitter users began feeding racist remarks to Tay, which it learned from and started posting on its own. Microsoft quickly pulled Tay and has since learned that you cannot allow an AI to learn from users' responses in such an unconstrained manner.
However, at its core, AI is just a capability for statistical analysis, and that capability can be directed to find and root out bias.
While poorly developed AI can amplify bias, the same techniques can be used to identify bias and thereby make hiring decisions that are fair to all groups of people. Remember that AI is just a tool. It is up to governments to control how it is used, and developers need to be aware of the downsides of poorly developed systems.
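The kind of bias detection described here can start with something as simple as comparing selection rates across demographic groups. A minimal sketch in Python, applying the EEOC's four-fifths (80 percent) rule to hypothetical hiring counts (the group labels and numbers are illustrative, not from the interview):

```python
# Adverse-impact check using the four-fifths (80%) rule: each group's
# selection rate should be at least 80% of the highest group's rate.
# All data below is hypothetical.

def selection_rates(outcomes):
    """outcomes: {group: (hired, applicants)} -> {group: selection rate}"""
    return {g: hired / applicants for g, (hired, applicants) in outcomes.items()}

def four_fifths_check(outcomes):
    """Return {group: (impact ratio vs. best group, passes 80% rule)}."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: (rate / top, rate / top >= 0.8) for g, rate in rates.items()}

outcomes = {"group_a": (30, 100), "group_b": (18, 100)}
for group, (ratio, ok) in four_fifths_check(outcomes).items():
    print(group, round(ratio, 2), "OK" if ok else "flag for review")
```

A ratio below 0.8 does not prove bias by itself, but it flags a selection step for closer review, which is exactly the kind of audit that requires knowing candidates' demographic group membership.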
SHRM Online: If the key to effective AI use is capturing the right data for analysis, how does an organization begin to identify and act on that data?
Sydell: We all intuitively understand that some types of data are more useful than others. But the truth is that it is very difficult to know which data points will ultimately prove most predictive and fair.
As humans, we often think we know. We are very adept at constructing narratives to explain the world around us. But one of the promises of big data and artificial intelligence is that they can help make sense of complex, chaotic, and unstructured data in ways that weren't possible before.
Some types of data are likely to be more valuable than others. I divide candidate data into the following four categories:
- Incidental. This refers to non-job-related data such as social media profiles, a person's voice, or interview video. This type of data has not been found to be highly predictive of job success, and it certainly carries a lot of potentially biased information. It also tends to be seen as invasive by candidates.
- Behavioral. This is online behavioral data such as mouse movements and similar interaction metrics. This type of information is also not predictive of job success.
- Narrative. This refers to more job-related but unstructured information such as LinkedIn profiles, cover letters, and resumes. This type of data is useful in hiring, but it also carries many bias factors, so it should be used with caution.
- Deliberate response. This is the gold standard for data-driven recruitment. It refers to responses candidates give on purpose, such as interview answers that can be scored using artificial intelligence and answers to job-related assessments. These data are not invasive, and because they are quantifiable, they can be validated and bias can be measured.
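One way to act on this taxonomy is to tag each candidate data source with its category and restrict automated scoring to deliberate-response data. A hypothetical sketch (the field names and category labels are illustrative, not from the book):

```python
# Tag candidate data sources with one of the four categories and
# keep only deliberate-response fields for automated scoring.
# Field names and labels below are illustrative assumptions.

CATEGORY = {
    "social_media_profile": "incidental",
    "mouse_movements": "behavioral",
    "resume_text": "narrative",
    "structured_interview_answers": "deliberate",
    "job_knowledge_test_score": "deliberate",
}

def deliberate_only(candidate):
    """Filter a candidate record to fields tagged 'deliberate'."""
    return {k: v for k, v in candidate.items()
            if CATEGORY.get(k) == "deliberate"}

candidate = {
    "social_media_profile": "...",          # incidental: excluded
    "resume_text": "...",                   # narrative: excluded
    "structured_interview_answers": ["answer 1", "answer 2"],
    "job_knowledge_test_score": 42,
}
print(deliberate_only(candidate))
```

Keeping the category map explicit makes the scoring pipeline auditable: anyone can see which data sources feed the model and which are excluded by policy.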
SHRM Online: Talent acquisition professionals want to be able to predict job candidates' success, but they struggle sometimes.
How can emerging AI technology better assess talent?
Sydell: Decisions about whom to hire are human in nature. We humans are not good at making logical, high-quality, and fair decisions about other humans. Our brains are wired to take in large amounts of data and make quick, intuitive decisions. And we do that with candidates: we form an impression of who they are in literally seconds, and those first impressions are often hard to override even as more data comes in.
While there are a lot of recruitment techniques available today, many of them do not help us overcome the shortcomings inherent in human decision-making.
Therefore, we must turn to structured, scientifically grounded tools that measure very specific candidate characteristics that have been shown to be predictive of job performance.
A typical example is a validated, job-related assessment, which is often the most valid and predictive part of the hiring process.
Over two decades of assessment research, we have produced many examples of how validated assessments lead to much higher ROI [return on investment] and much greater diversity among new hires.
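Validating an assessment typically means checking criterion-related validity: how strongly scores collected at hiring time correlate with later job performance. A minimal sketch with hypothetical numbers (the scores and ratings below are made up for illustration):

```python
# Criterion-related validity: Pearson correlation between assessment
# scores at hiring time and later job-performance ratings.
# All numbers below are hypothetical.

import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

assessment = [55, 62, 70, 74, 81, 90]          # scores at hiring time
performance = [2.9, 3.1, 3.6, 3.4, 4.0, 4.4]   # later manager ratings

r = pearson(assessment, performance)
print(f"validity coefficient r = {r:.2f}")
```

The higher the coefficient, the more the assessment actually predicts the outcome it is supposed to predict; the same machinery can then be run per demographic group to check whether the predictions are equally accurate for everyone.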
Artificial intelligence allows us to score more than just tests; it allows us to significantly expand the set of candidate information that can be quantified and thus studied. Essentially, AI lets us quantify huge amounts of data that recruiters and hiring managers previously had to evaluate by eye.
Ultimately, this helps shorten the hiring process dramatically from weeks to days or even hours, increases the quality of those hiring decisions, and does all of this with a level of fairness that humans cannot match.