This past weekend, my wife and I enjoyed a quiet dinner at a neighbor’s house. They provided the kitchen; my wife provided the pho. We took our time catching up on everything from neighborhood happenings to family milestones, and eventually we got to my new job at an AI company, and with it came a flood of questions and concerns. Many of them were immediately recognizable from the torrent of social media headlines that trade fear and paranoia for likes and shares. (Be sure to like and subscribe!)
In between bites, I answered these questions in the simplest terms possible. It was an exercise in “explain it like I’m 5,” which I consider one of the best ways to demonstrate understanding of a topic.
As fast as technology is advancing and being integrated into our daily lives (“I didn’t ask to start using AI in Facebook, it’s just there, forced on me.”), it’s all the more important to demystify the jargon and concepts for the non-technical. I believe that by learning how these technologies work at a high level, and maybe even understanding a little of how the behind-the-scenes systems “think,” people can feel less apprehensive about them. I also feel that those of us involved in AI’s development have a responsibility to make it accessible and understandable to all.
“I just don’t want AI making important decisions that could negatively affect us.”
A future where a faceless intelligence with the entire internet at its digital fingertips makes logical and emotionless decisions about your healthcare, your finances, or your future is a frightening one. The unfortunate truth is that we are already at a point in our society where decisions are made by faceless entities based on algorithms the public knows very little about. It’s not hopeless, though, and I think AI can actually help solve this.
Think about the last time you applied for a job. You likely dealt with the frustration of tweaking your resume to beat keyword-matching systems just so it would be seen by an actual human who might care enough to give you a chance. AI systems can improve this process by understanding your resume as a whole picture, much as a real human would if given the time to do so.
At my previous job I was involved in the recruitment process, where we would get hundreds of resumes a day for multiple job roles. It would take hours to sort through them all, so many candidates were not given a fair chance. We did our best, but it was simply too much data. Of the candidates who did make it through the pipeline into real interviews, as many as 50% were quickly found not to make the cut.
One of the first production AI systems I implemented was an autonomous agent that would read resumes, sort them into the appropriate job-role category based on our descriptions, and rank them against our criteria. The quality of in-person interviews jumped an astounding 45%, with nearly 95% of interviewed candidates receiving offers.
In this system, the AI made two decisions, and we humans made one final decision. The AI’s decisions enhanced ours by efficiently sorting and prioritizing candidates, surfacing top picks against criteria we personally defined. The final decision was still made by humans, and I think that’s tremendously important.
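To make that concrete, here’s a minimal sketch of what those two AI decisions can look like in code. This is not the actual system I built; the role list, the criteria, and the `call_llm()` helper are all placeholders for whatever model API and hiring rubric you actually use.

```python
# The two AI decisions described above: (1) classify a resume into one of our
# job-role categories, and (2) score it against human-defined criteria.
# The final hiring decision stays with people.

import json
from dataclasses import dataclass

JOB_ROLES = ["Solutions Architect", "Data Engineer", "Recruiter"]  # example roles
CRITERIA = "5+ years relevant experience; clear communication; cloud certifications a plus"

@dataclass
class TriageResult:
    candidate: str
    role: str        # decision 1: which role this resume belongs to
    score: int       # decision 2: 1-10 fit against CRITERIA
    rationale: str   # short explanation a human reviewer can sanity-check

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call. Replace with your provider's client.
    Expected to return a JSON string like:
    {"role": "...", "score": 7, "rationale": "..."}"""
    raise NotImplementedError("Wire this to your model API of choice.")

def triage_resume(candidate: str, resume_text: str) -> TriageResult:
    prompt = (
        "You are helping sort resumes. "
        f"Job roles: {JOB_ROLES}. Criteria: {CRITERIA}.\n"
        "Return JSON with keys: role, score (1-10), rationale.\n\n"
        f"Resume:\n{resume_text}"
    )
    data = json.loads(call_llm(prompt))
    return TriageResult(candidate, data["role"], int(data["score"]), data["rationale"])

def shortlist(results: list[TriageResult], top_n: int = 10) -> list[TriageResult]:
    # The third decision belongs to humans: this just orders the list for their review.
    return sorted(results, key=lambda r: r.score, reverse=True)[:top_n]
```

The important part isn’t the model call; it’s that the output is only a ranked shortlist, built from criteria people wrote, that people then review.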
As a Solutions Architect at an AI company, I see these types of use cases all the time. Most of them are actually a little boring: writing product descriptions for websites, building chatbots that are factual and accurate, parsing vast knowledge bases so the information is easier to consume, and so on. We’re hardly at the point of relying on AI to make mission-critical or life-altering decisions for us. But we should keep a few things in mind as we move forward.
Addressing These Concerns
- Human oversight is essential: Humans should be “in the loop” for every decision an AI system makes, and regular audits and testing should ensure that AI systems behave in accordance with strict, human-defined guidelines and criteria. AI should augment, not replace, important decision-making. (A toy sketch of this pattern follows the list below.)
- Building understanding and trust: Regular fact-checking in media should be standardized, with clear, easy-to-understand definitions of terms. Tools like “Ask Meta” should clearly explain what they do and how they operate, and their documentation should be easily accessible.
- Informed policymakers: We need policymakers in office who truly understand what technology is and how it operates, without embarrassing themselves. (Anybody remember the questions Zuckerberg was asked during his 2018 testimony to Congress? “Is Twitter the same as what you do?”) I’d almost rather have an AI write policy than live with some of the ineffective and misguided regulations we’ve all seen come from our uneducated leaders.
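As promised above, here’s a toy, hypothetical illustration of “human in the loop”: the AI’s output is treated as a suggestion, every suggestion is logged so it can be audited later, and nothing happens until a person approves it. The names are illustrative, not taken from any real system.

```python
# Human-in-the-loop gate: the AI proposes, a person decides, and an audit
# trail records what was proposed and when.

import datetime

AUDIT_LOG: list[dict] = []

def human_in_the_loop(candidate: str, ai_recommendation: str) -> str:
    """Record the AI's suggestion, then defer the actual decision to a person."""
    AUDIT_LOG.append({
        "time": datetime.datetime.now().isoformat(),
        "candidate": candidate,
        "ai_recommendation": ai_recommendation,
    })
    answer = input(f"AI suggests '{ai_recommendation}' for {candidate}. Approve? [y/N] ")
    return ai_recommendation if answer.strip().lower() == "y" else "needs human review"
```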
It’s important that we have open and honest discussions with our friends and family, and that we provide clear, honest answers. I know it can be hard, particularly when you imagine this discussion with that one specific relative (you know the one), but if we all do our part, maybe the future where Skynet rules will remain fiction.