The epiphany happened on Indeed. I specialize in human due diligence, so I set up my interviewing process as I’d advise my clients to do—get as much information as possible to screen people out before they make it to the interview. What I didn’t anticipate was that some applicants would use ChatGPT to answer my custom questions and that their folly would save me time. Their answers became a screen within a screen for independent thought and integrity beyond the actual answers given.
Here is the backstory and how you can copycat the time hack.
Using Customized Questions to Elevate the Hiring Process
From the perspective of getting information early, I took advantage of Indeed’s option to add custom questions. I wanted to assess abstract reasoning, independent thought processes, attention to detail, and work style preferences. The answers to these questions appear before the applicant’s resume. At the time, I wasn’t thinking about AI—I was only thinking about gaining a fast window into the minds of the applicants.
ChatGPT Answers to Customized Questions
This is what happened. My post received over 100 applications in less than 24 hours, and I paused the listing while I figured out how to sort them quickly. About a third of the way through the applications, I realized that some people were using ChatGPT to answer the questions. As soon as I recognized the pattern, I could decline those applications without even looking at the resumes. I’d just cut a one-to-two-minute assessment down to a few seconds. Instead of reading through all of the responses and resumes, my process became: Chat-generated? Decline. No Chat? Read.
The flags that separated ChatGPT-generated answers from human ones included the following:
1. The ChatGPT answers were longer and used a more complex writing style without adding significant information. Great writers use a variety of sentence structures, but their content is also dense with information; the extra words extend the concept. Some people write longer answers because they have a chatty personality, but you can see them attempting to add detail or explain a previous thought. ChatGPT answers were simply long without adding meaningful information.
2. I had a question that ChatGPT could not answer for the applicant: which software they personally use. The answers to these personal questions were shorter AND written in a different style than the Chat answers.
3. The Chat answers sounded like a politician (no offense to politicians): diplomatic answers that attempted to cover all bases while never offering an actual opinion.
4. One of the Chat answers contained a significant “tell” in response to my question about how a person conducts research: the end of the applicant’s answer said they’d go to their local library.
5. If applicants didn’t have professionally written resumes, there were significant differences in sentence structure and tone between the Chat answers and their resume phraseology.
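The signals above relied on human judgment, but two of them—padding without information, and style shifts between a candidate’s answers—could in principle be roughly quantified. Here is a minimal illustrative sketch in Python; the example answers and metrics are invented for demonstration and do not reproduce the screening process described in this article.

```python
import re

# Purely illustrative: rough proxies for "length without information"
# and for style differences between a candidate's answers.

def lexical_density(text: str) -> float:
    """Fraction of distinct words. Padded answers that circle the same
    concept without adding information tend to score lower."""
    words = [w.strip(".,;:!?\"'()").lower() for w in text.split()]
    words = [w for w in words if w]
    return len(set(words)) / len(words) if words else 0.0

def avg_sentence_length(text: str) -> float:
    """Mean words per sentence; a crude style proxy, useful for spotting
    shifts between one answer and another by the same applicant."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    return sum(len(s.split()) for s in sentences) / len(sentences)

# Invented sample answers (not from real applicants):
padded = ("Research is a multifaceted endeavor. Research requires a "
          "multifaceted approach. A multifaceted approach to research "
          "is essential for any endeavor.")
direct = "I start with trade databases, then call two people in the field."

print(lexical_density(padded), lexical_density(direct))
print(avg_sentence_length(padded), avg_sentence_length(direct))
```

A large gap in these numbers between an applicant’s opinion answers and their personal-experience answers would mirror flags 1 and 5 above; in practice, a human reader spots the same pattern in seconds.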
AI as a Signal of Innovative Thought versus Intellectual Laziness
I emailed one of my clients, a business owner who worked through his anxiety about the human-like responses of some AI and now uses it extensively to build additional businesses. He initially said that my assessment made him more paranoid. Then he followed with a good question: couldn’t it be said that a person who used Chat is an early adopter who can integrate tech advances into their thought processes?
While the general answer to that question is “yes,” integrating AI into a thought process differs from using it to replace independent thought. If I’d found a person who had used Chat to get ideas but then integrated it with their own perspective, I would either not have noticed the presence of AI or I would have noticed their personal opinions coming through.
Look for Pattern Interrupts Rather than Specifics
In my business mastermind group, someone latched onto the “library” comment as the definitive way I distinguished human from ChatGPT answers. While it was low-hanging fruit, neglecting the rest of the signals is a mistake. Patterns give us information. A person may not mention items that are irrelevant or outdated, but a difference in sentence style between personal-experience answers and personal opinions raises a flag. Conversely, someone may use an esoteric reference that, by itself, could be attributed to ChatGPT, but if it flows with the rest of their thought pattern, the individual may simply be more unique or “old-school” than others.
After the experience, I looked online and found tools that identify text likely to be ChatGPT-generated, but from what I found, many of them required hundreds of words to work. By asking a mixture of question types instead, one can spot patterns within short answers without doing extra research or adding a tool to identify AI-generated responses.
Beyond Employee Screening – A Larger Perspective on AI Growth
The overt reason for this article is to save people time in screening applicants. However, I also want to share a larger perspective. I’ve followed the reactions to AI growth, and some of them remind me of people’s responses to other changes in our culture: the invention of cars, or the skeptical and fear-based responses to the growth of the Internet. I recall going to a tech conference a few years ago and being surprised by a consistent theme: “as tech advances, the distinctly human skills will become more valuable.”
In my case, the existence of ChatGPT didn’t replace the humanity of my applicants or myself. It highlighted their thought processes (or lack thereof) and made my life much easier.