Now, I wish it were universally true that combining artificial intelligence with good old-fashioned human smarts always resulted in superior outcomes. If it were, we could all cheer for this symbiosis and be happy forever—but real life, as usual, delights in disappointing us. Many studies of actual deployments suggest something altogether less cheerful.
Science has an amusing way of reminding us just how peculiar humans can be. It turns out that people tend to mistrust AI precisely where it could lend a mighty helpful hand, yet confidently trust it in spots where it's about as useful as a screen door on a submarine. This peculiar behavior likely arises from our charming array of biases—anchoring, confirmation, negativity—you name it, we humans have it. Moreover, we have a curious habit of attributing human intentions to our digital offspring, feeling personally betrayed when an AI stumbles, as if it had intentionally spilled chutney on our Italian shoes. Conversely, if the contraption simply smiles (or convincingly pretends to), we trust it implicitly, never mind its actual qualifications.
The path to positive human augmentation via AI requires us first to understand—and perhaps gently mock—these human peculiarities. Only then can we weave that understanding into our systems, processes, and even our personal habits. I find myself continually needing to self-correct, wrestling with anchoring bias especially, since AI capabilities are shifting quicker than the weather on Mauna Loa.
Ultimately, understanding social and human psychology is just as critical as technical proficiency when implementing an AI service. Recognizing the whims, worries, talents, and quirks of the humans served by AI is essential if we are to make the equation AI + Human truly greater than either alone.