I’m working on an app using AI. My commitment to using artificial intelligence comes from a deep-seated belief that the only way I can truly understand the challenges and opportunities it poses is by using it. Little did I know that the process would highlight my human blind spots.
I always thought I was great at communicating.
What caught my attention recently is that the communication challenges, both my weaknesses and those of the platform, mimic real life. Let me rip the band-aid off: I think I’m amazing at communication. I believe that I’ve said everything with absolute clarity, and I get genuinely confused when I have to repeat myself. So a couple of years ago, when I hired someone to help me and the product I received was different from what I asked for, I was annoyed. Inside my head, I sounded like every boss, spouse, and parent who says, “But I told you….”
I’m a big-picture thinker. I synthesize information quickly, and a lot of the time, connecting the dots feels like common sense. Hence, I get pretty frustrated when people don’t “get it.” Conversely, I tend to feel like I’m patronizing people if I break a concept down into concrete steps. The intersection of those tendencies is that what is clear to me may be perceived very differently by the person on the other side.
AI mirrored human mistakes.
The first thing I noticed in communicating with AI was the parallel to the same challenges I’ve had with humans. “No, that’s not what I meant. What I mean is this.” The difference is that I can be annoyed, frustrated, and think that the AI agent is stupid, but I don’t have to worry about hurting its feelings. I don’t need to worry about patronizing it or having my tone of voice show on my face.
And I noticed something else. AI makes the same mistakes I do. It assumes that I have knowledge that I don’t. The process of creating an app feels as foreign to me as when I googled “how to build a website” 10 years ago. AI tells me to open the server terminal on my computer. I explain to AI that I don’t know what that is or how to get to it. Along the way, after MANY iterations, I learn how the terminal works, what TextEdit files are, and get real-time exposure that takes me back to my 8th-grade computer programming class. Repeatedly, I ask it to break its instructions into VERY specific steps that contain no assumed knowledge.
Like AI, or perhaps as AI mirrors humans, I assume my knowledge base is shared by others. I often think I’m being specific, but I’m not. In my mind, I’m breaking a concept down to its most basic components, but in reality, I may be asking someone to read at a 5th-grade level when they haven’t learned the alphabet.
On one hand, this underestimation of my own knowledge base reflects humility; on the other hand, it makes me blind. When I make the mistake of overestimating shared knowledge, I risk false conclusions about the person on the receiving end. I may assume they aren’t smart, or that they weren’t listening, or that they were too lazy to execute. In reality, maybe they are on a learning curve, and I’m assuming knowledge to which they have not been exposed. I may also make them feel dumb when I make assumptions about what they know instead of checking their level of familiarity first.
I teach leaders how to communicate, but honestly, AI is showing me that I still have room to grow.