Do you remember the climax of Mission: Impossible? An exploding helicopter hurls Tom Cruise through the Chunnel. He saves his own life by grabbing hold of the Eurostar train traveling 300 kilometers per hour.
Up to that point, the movie was a gripping, fast-paced thriller. With the final scene, it crossed the line into the farcical and absurd.
It’s curious how our human brains are wired. We enjoy reading books and watching movies that tell stories we know are not true. Science fiction, fantasy, and adventure tales take us into worlds of the impossible and the make-believe. Yet, paradoxically, no matter how phantasmic these visions, we demand that they preserve enough logical consistency that we can imagine them to be real.
The willing suspension of disbelief takes us only so far. It also takes us to this week’s entry in the Ethical Lexicon:
Verisimilitude (ver·i·si·mil·i·tude / ver-uh-si-mil-i-tood) noun
The appearance or semblance of truth, genuineness, and authenticity
The truth is, we don’t mind blurring fantasy and reality. But that’s only so long as we retain some sense of the boundary between the two. Violate that boundary, and we feel cheated, manipulated, and betrayed.
It’s bad enough when it happens in the movies. But when it happens in real life, then we start to feel as if the foundations of the world are convulsing beneath our feet.
THE BENEFITS AND THE DANGERS ARE ENDLESS
The tremors of existential instability intensified when ChatGPT arrived on the scene, straight out of the futuristic novels of our collective youth. The interactive chatbot takes AI to an awe-inspiring—and equally horrifying—new level, scouring the internet to produce in seconds a story, thesis, email, or college essay indistinguishable from human writing.
The benefits are endless, as are the dangers.
- How many jobs will this technology eliminate?
- How might unscrupulous individuals or groups exploit the technology to further erode public discourse?
- How will it corrupt our educational institutions and the concept of education itself?
We’ve learned to live with computers performing many of the tasks we do, often faster and better than we can. At the same time, we need to affirm our own humanity by the assurance that mankind will not become obsolete. Our own sense of self-worth demands that we recognize the dividing line between human and artificial intelligence.
The problem of verisimilitude is that once make-believe becomes too real, we stand to lose our grip on reality altogether. Is this the real thing, or is this fantasy? Caught in a tech landslide, can we escape to reality? When we open our eyes, will we look up to see ourselves as characters in a macabre opera or prisoners inside The Matrix?
MEETING FEAR OF THE UNKNOWN HEAD ON
How ChatGPT might affect workplace culture is an open question. The more that businesses can do with technology, the more streamlined business becomes. In direct proportion, however, the more easily a sense of insecurity, disempowerment, and panic can spread among employees.
Fear of the unknown is best combated with transparency. It is the responsibility of leadership to anticipate and allay employees’ concerns about their own future. That’s why ethical leaders have an obligation to:
- Communicate what innovations are being considered and tested
- Explain how these novelties might be integrated into the current structure
- Articulate a (genuine) commitment to safeguarding job security throughout the organization
- Ensure that employees will be prepared to adapt to coming changes
- Establish their own reputation for trustworthiness to ensure that their explanations and assurances preserve calm and quell panic
Despite Frankensteinian forebodings, history offers some basis for optimism. Across the ages, new technologies have been met with doomsday predictions that rarely, if ever, came to pass. Socrates feared that the written word would undermine the tradition of oral education and scholarship. Skeptics decried Gutenberg’s movable type as a threat to the integrity of authentic church writings. And the popularization of the telephone, it was feared, would lead to mass psychosis and social disintegration.
Concerns over the effects of television and the internet may be less overblown, but human beings have proven remarkably adaptable. Still, what if this time the Cassandras are right?
A flicker of hope has emerged in the person of Edward Tian. The Princeton senior was amazed when he first saw what ChatGPT could do, but also alarmed. Eventually, he set out to do something about it.
If this new steroidal AI couldn’t be stopped, perhaps it could be contained. Edward developed software to do just that, not putting the genie back in the bottle but at least preventing it from disguising itself in human form. He designed an app to identify whether content has been written by human beings or by AI. The simple ability to expose content as machine- versus human-generated might be all we need to prevent the fabric of society from unraveling around us.
Of course, it’s only a matter of time before the next technological threat to human security rears its head. But even those of us who are not programming geniuses like Edward Tian can still look for creative ways to offset the destabilizing effects of untested technology.
The time-honored leadership tools of honest and clear communication, empathy, consistency, and character serve as leaders’ most effective measures for dispelling the appearance of chaos and maintaining an atmosphere of security in our workplaces, our homes, and our communities.