By Dr Steven Finch, CTO, DaXtra Technologies
An edited version of this article was originally published in The Global Recruiter
Take a quick scan of the recruitment industry’s blogosphere and you might leave convinced we are heading rapidly toward a robot-led future of total automation, a world cut from the same cloth as The Matrix or Blade Runner. Or, should you read articles from the opposing view, you might think that nothing has really changed, and all this talk about AI is little more than hot air and hype.

As usual, a sober analysis reveals the truth is somewhere in the middle. Robots aren’t ‘taking over’ any time soon, but they are playing an ever-larger role in our workflow.
The problem with the chatter around AI is that the conversation is shaped by buzzwords rather than facts. There’s pressure to be ‘buzzword compliant’, and it motivates some technology companies to freely issue statements about advancements in ‘machine learning’ simply to show they are on top of the latest trends.
But sophisticated concepts such as ‘machine learning’ (and its subset, ‘deep learning’) lose their meaning when used casually for marketing purposes. That makes it all the more important to define these phrases and, in turn, to find a way of characterising and evaluating AI’s impact on recruitment technology (RecTech) that is genuinely accurate.
The buzz around machine learning
Without a doubt, machine learning – and especially deep learning – is at the centre of AI development today, with Google’s tremendous effort in this area leading the charge. Deep learning algorithms learn their own representations from the data they process, rather than following hand-coded, task-specific instructions. In terms of RecTech, deep learning should ultimately enable parsing solutions to read nuances in CV language – an R&D challenge we’re tackling aggressively here at DaXtra.
While the intended destination is clear to all – intelligent software that improves itself over time and reads nuance and context – the reality is we still have a very long way to go.
A good example of where we stand with the technology is the Google gorilla fiasco. For the uninitiated, here’s a brief (and embarrassing) summary: in 2015 a software engineer pointed out that the Google Photos algorithms had auto-classified his black friends as ‘gorillas’. The algorithm’s mistake was bad enough. But it gets worse: after more than two years, the tech giant’s only solution has been to prevent any images from being labelled ‘gorilla’, ‘chimpanzee’ or ‘monkey’.
This means that not even Google, which has billions to spend on R&D, has solved the problem of coding algorithms to read nuance and context accurately. Every human intuitively knows that people are not gorillas. But a computer? Not so much. It draws its conclusions from a pre-determined system of classifications.
Google certainly didn’t cause this problem on purpose. But the bigger point stands: as of 2018, machine learning is not where it needs to be. It is still a great big black box – mapping statistical regularities in its training data and applying them to new data to draw conclusions and make predictions, often making errors in the process.
As for how this applies to recruitment, the risk is obvious: the near-infinite variation of everyday language leaves plenty of room for CV parsing errors. For example, let’s say your last name is ‘Orlando’, or you’re a medical doctor who describes yourself as an MD.
In the first instance, the algorithms could confuse your name with a place – the city of Orlando, Florida. In the second, there are even more possibilities for error: the computer might think you mean Managing Director, or perhaps even the US state of Maryland (often abbreviated MD).
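To see why this is hard, consider what a context-free matcher is up against. The toy Python sketch below (an illustration only, not DaXtra’s parser) shows that the token alone gives the software nothing to choose between the competing readings:

```python
# A deliberately naive, context-free lookup: the token by itself
# carries no signal about which interpretation is correct.
AMBIGUOUS_TOKENS = {
    "MD": ["Doctor of Medicine (degree)",
           "Managing Director (job title)",
           "Maryland (US state)"],
    "Orlando": ["surname", "city in Florida, US"],
}

def naive_interpretations(token):
    """Return every possible reading; without surrounding context
    there is no principled way to pick one."""
    return AMBIGUOUS_TOKENS.get(token, ["unknown"])

print(naive_interpretations("MD"))       # three competing readings
print(naive_interpretations("Orlando"))  # two competing readings
```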
Rules-based systems – the way forward
Does all of this give you a headache? If so, that’s understandable: the challenge of programming computers with processing capabilities that resemble human intuition and deductive reasoning is formidable.
So, what can be done? At DaXtra we are using machine learning technologies to develop rules-based algorithms that can improve themselves over time. These algorithms relate each piece of information to the data surrounding it when making predictions – in short, they eventually learn the nuances of context. For example, if MD is listed right before information about a medical school on a CV, the algorithm knows this MD reference doesn’t stand for Maryland (see the sketch below).
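As an illustration only – the categories and regular expressions here are hypothetical, not DaXtra’s production rules – a contextual rule for ‘MD’ might inspect the words around the token before committing to a label:

```python
import re

# Hypothetical context patterns: medical-education vocabulary suggests
# a degree, while corporate vocabulary suggests a job title.
MEDICAL_CONTEXT = re.compile(r"\b(medicine|medical|residency|physician|hospital)\b", re.I)
TITLE_CONTEXT = re.compile(r"\b(director|managing|ltd|plc|inc)\b", re.I)

def classify_md(window):
    """Label the token 'MD' using the text window it appears in."""
    if MEDICAL_CONTEXT.search(window):
        return "degree: Doctor of Medicine"
    if TITLE_CONTEXT.search(window):
        return "job title: Managing Director"
    return "location: Maryland (low confidence)"

print(classify_md("MD, Johns Hopkins School of Medicine, 2004"))  # degree
print(classify_md("MD, Acme Recruitment Ltd"))                    # job title
print(classify_md("Baltimore, MD"))                               # location
```

A real parser combines many such signals and weights them, and machine learning can tune those weights as corrected examples accumulate – which is how a rules-based system improves itself over time.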
Of course, rules-based algorithms can still make errors. But the difference is that they can be fixed – and, as deep learning matures, they should increasingly be able to fix themselves.
Again, we’ve barely left the starting gate with this new technology. No matter what any marketing department says, the fact is the underlying algorithms haven’t fundamentally changed in the past 30 years. And so, apologies to all you sci-fi enthusiasts: the robots aren’t taking over any time soon. We’re still in the early stages of teaching them how to read nuances in information.
So the next time you hear a technology company say it has increased its software’s accuracy by 30% thanks to improvements in deep learning, proceed with caution. Even Google hasn’t solved that riddle – yet.