If you are one of those who embellish their résumé with skills they have never really used, this will interest you. The pandemic encouraged the use of remote interviews, and with it came greater difficulty for employers in vetting candidates.

Some would-be hires have become notorious for being fed interview answers by someone off-camera, in Cyrano de Bergerac-style Zoom calls. Others exploit remote testing by having someone else take the cognitive or programming tests for them. Still others make sure that the person who is interviewed and hired is not the same person who shows up for the job.

This fraud creates a frustrating, not to say expensive, problem for businesses. Not only do they spend money hiring new candidates, but they also bear the costs of lost productivity from bringing in workers who lack the qualifications they claim to have, which undermines team morale. There are also security issues: the FBI recently warned US companies of the growing risk of people with criminal records applying for remote technology jobs that give them access to sensitive company and customer data.

A recent study by Glider AI, a recruiting software company, found that candidate fraud has nearly doubled since the pandemic began. The company's executives estimate that around 10% of prospective recruits attempt some type of fraud. Cameron Edwards, vice president of client strategy and operations at Matlen Silver, a recruiting agency serving Fortune 500 companies, explains that the phenomenon occurs across all industries and functions, but is most common in technical, IT, and developer roles.

“It’s common for people to fatten up their résumés, but more and more we’re dealing with people who just aren’t who they say they are,” Edwards told Business Insider. “Once there was someone hiding under the table to answer a candidate’s interview questions; you could even see the top of that person’s head. But there are other cases of fraud that are difficult to detect.” That is why employers rely on tools that help them recognize when a job applicant is trying to cheat the hiring process. Staying ahead of unscrupulous tactics isn’t easy, especially as technology advances so quickly. But at a time when remote hiring is becoming the norm, a growing number of companies are determined to take action.

Remote hiring creates opportunities to cheat companies

Candidates lying in interviews or misrepresenting their skills to land a job are nothing new, of course. A 2020 survey by Checkster, a reference checking company, revealed that 78% of job applicants admitted they had falsified CV details or had considered doing so.

Although this is common practice, experts warn that these behaviors have become more prevalent in recent times, as previous social norms around work have weakened.

“Everything we thought we knew about work has been called into question in the last two years,” says G. James Lemoine, a professor at the University at Buffalo School of Management.

“People don’t usually go to the office, and at the time of the ‘Great Resignation’ they quit more often,” explains Lemoine. “People start to think about what other rules can be broken.” The limitations of remote contact are another factor, according to Lindsey Zuloaga, chief data scientist at HireVue, a recruitment technology company. Before the pandemic, most candidates came into the office to be interviewed or tested under someone else’s supervision.

“When hiring in person, the candidate would write code in front of you on a whiteboard, and when they showed up for work, you knew they were the right person because you’d already met them,” she explains. “It was much more difficult to pull off these kinds of tricks.” The shortage of talent in the US has compounded the problem, adds Ben Walker, vice president of operations at Glider AI.

“Companies can’t find enough skilled technical workers, so they have to vet people faster,” says the expert. “This makes it more difficult to know whether candidates can do what they say they can.”

How AI identifies fraudulent candidates

Artificial intelligence has become a standard component in current selection processes. According to the Society for Human Resource Management, 88% of companies around the world use AI in some way to screen candidates.

Using AI-powered tools to spot fraudulent candidates is a natural next step in the evolution of hiring, employers say. Some tools, for example, can detect cheating and plagiarism in technical assessments by identifying questionable behavior patterns, such as whether candidates are using their phones or multiple screens.

Other capabilities can detect outside voices during an interview that would otherwise be barely noticeable to a remote interviewer. Some can also detect whether other people are in the room by scanning the candidate’s surroundings, or whether a remote desktop is being used. The tools raise red flags that prompt human hiring managers to take a closer look at a candidate’s behavior or examine code for similarities to published sources.
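As an illustration, the red-flag logic described above can be sketched as a simple aggregator that counts suspicious proctoring events in a session and surfaces repeated ones for human review. The event names and threshold below are invented for this example; real proctoring tools derive such signals from audio and video models.

```python
from dataclasses import dataclass

# Hypothetical event labels a proctoring pipeline might emit.
SUSPICIOUS_EVENTS = {
    "extra_face_detected",       # a second person visible on camera
    "secondary_voice_detected",  # an off-camera voice during an answer
    "remote_desktop_detected",   # screen driven from another machine
    "phone_in_frame",            # candidate glancing at a phone
}

@dataclass
class RedFlag:
    event: str
    count: int

def flag_session(events: list[str], threshold: int = 2) -> list[RedFlag]:
    """Return a red flag for each suspicious event seen at least
    `threshold` times, for a human hiring manager to review."""
    counts: dict[str, int] = {}
    for event in events:
        if event in SUSPICIOUS_EVENTS:
            counts[event] = counts.get(event, 0) + 1
    return [RedFlag(e, n) for e, n in sorted(counts.items()) if n >= threshold]

session = [
    "face_ok", "secondary_voice_detected", "face_ok",
    "secondary_voice_detected", "phone_in_frame",
]
for flag in flag_session(session):
    print(f"review needed: {flag.event} x{flag.count}")
```

The threshold keeps one-off noise (a TV in the background, a family member passing by) from flagging an honest candidate, mirroring the idea that the tools alert humans rather than reject people automatically.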

“With artificial intelligence we can tell when a candidate is opening a page in their browser to copy and paste answers, or when an outside person is accessing the candidate’s microphone during the interview,” Edwards explains. “Since using these tools, we’ve been able to catch more fraudulent applications that we might not otherwise have identified.” Vik Kalra, co-founder of Mindlance, a recruiting firm for Fortune 1000 companies, says he started using AI just before covid-19 hit, out of concern about human bias in the selection process and a desire to obtain more objective data on candidates’ abilities.

Since the pandemic began, he has also relied on these tools to identify fraudulent candidates.

“Interviews are based on intuition and common sense, combined with open questions and other tests designed to assess the candidate’s ability,” says the expert. “I need to validate that there is no fraud and that the candidate is the best person for the position.”

Kalra says the biggest benefit to his company has been facial recognition technology that checks candidates to make sure they are the same person who was interviewed before they are hired. Almost 75% of the candidates who pass his company’s selection process, which includes the Glider AI assessment, are selected by his clients; before using the AI tool, that figure was 30%.

Critics stress that over-reliance on these tools can be a problem. Some lawmakers have called for more regulation of AI in candidate evaluation, arguing that racial or gender stereotypes hidden in databases and algorithms mean the software can pick up human biases. This is especially true in the technology field, where progress on diversity is still slow. For this reason, those responsible for AI platforms often insist on the need to combine technology with human judgment.
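The facial-recognition check Kalra describes typically works by mapping each face image to an embedding vector and comparing the vector captured at interview time with the one captured when the new hire starts. A minimal sketch of that comparison follows; the vectors and the 0.8 threshold are made up for illustration, and real systems obtain embeddings from a deep face-recognition model.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def same_person(interview_emb: list[float], day_one_emb: list[float],
                threshold: float = 0.8) -> bool:
    """Compare the face embedding from the interview with the one
    captured on the hire's first day; flag a mismatch below the threshold."""
    return cosine_similarity(interview_emb, day_one_emb) >= threshold

interview = [0.12, 0.87, 0.45, 0.30]
day_one   = [0.10, 0.85, 0.47, 0.33]   # near-identical sample vector: match
impostor  = [0.90, 0.10, 0.05, 0.60]   # very different sample vector: mismatch

print(same_person(interview, day_one))   # True for these sample vectors
print(same_person(interview, impostor))  # False for these sample vectors
```

In practice the threshold trades off false rejections of legitimate hires against missed impostors, which is one reason vendors pair these checks with human review.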

“The pace of change in applicant fraud is such that being successful means combining the right tools with the best people,” says Danny Jones, product manager at Dice, a technology recruiting platform. “For every automated process or AI model we employ, there is a human behind the scenes.”
