Does ChatGPT mean the end of ‘thought leaders’?

Over the past two days, I must have had 20 conversations with people anxious about the impacts of AI on their work and livelihoods. Of course, I might have been seeking out those conversations because, as an academic and contributor to several business publications, I am also anxious!

Creators, artists, coaches, programmers, doctors, writers, experts, and thought leaders are grappling with the implications of generative AI such as ChatGPT, DALL-E, and other advanced AI products for their jobs, passions, and income.

On the one hand, it may seem like a dream come true. You can benefit from AI helping you work faster. Perhaps better. On the other hand: So does everyone else and their hamster. Especially since it seems that anyone can look like an expert or a creator with the help of AI. It can, after all, write a job application good enough to be short-listed, pass portions of the U.S. medical licensing exam and the bar exam, produce code, text, and sellable art, and (sort of) provide mental health therapy.

If any of that is making you want to cry, punch, or throw something—stop reading this article for a bit. Grab a pillow, or a sweater, or whatever you can squeeze, yell into, punch, or throw without causing destruction. Take a deep breath.

Now, let’s be honest and talk about anxiety. (Or is it AI-xiety?)

THE END OF THE WORLD AS WE KNOW IT 

Don’t let anyone gaslight you: The unease and existential dread you feel aren’t unfounded. Top career experts grapple with these feelings, along with early-career professionals, five-time pivoters, and everyone in between.

“For years, we’ve been hearing that automation and AI will decimate the rote, boring jobs that none of us want, anyway, but we’ll be fine as long as we’re ‘creative’ and ‘human,’” says Dorie Clark, author of The Long Game, Fast Company contributor, and a master of pivoting. “Alas, ChatGPT just wiped the floor with us so . . . apparently we’ll need a new plan.”

And that new plan should include networking, along with building a reputation “so powerful and distinctive, people will choose to work with you,” Clark says. Building portfolio careers involving a variety of skills is also more critical than ever. 

Yet, for many, the unease is not just about the practicalities of having a new career plan. The struggle goes deeper. My clients and colleagues are wrestling with a threat to our very identities as creators, experts, writers, programmers, artists, doctors, lawyers, etc.

There is also the question of justice. People who already had to pull themselves up by their bootstraps out of poverty, or build careers despite discrimination or the stigma of disability, will also have to work harder than others to pivot. People who invested their lives in creating intellectual property and thought leadership are threatened by AI-supported plagiarism. This means that while some of the anxieties can be addressed on the individual level by pivoting and adapting, others require collective and systemic solutions. And those systemic solutions will take a collective focus on human ethics.

Let’s look at some of the issues specific to content creation such as digital art and copywriting, expertise, and thought leadership—and the place of human thought leadership in creating solutions. 

WHO WILL BE IMPACTED?

Marketing and personal branding expert Mark Schaefer has bad news for aspiring content creators. He predicts that AI will do to content creation what the advent of digital music production did to the music industry. According to Schaefer, those who lost their jobs were generally good-enough-but-“commodity” studio musicians, while the elite musicians, most-respected songwriters, producers, and technicians thrived. Schaefer’s conclusion is that those providing “informational content” or generic SEO content for corporate websites may want to think about pivoting. Clark agrees. The threat to many jobs in marketing and copywriting is real. This may also apply to digital art, building websites, and many other careers.

It sucks and it hurts. Whoever glorifies disruptions has not felt the full blow of being on the receiving end of one—especially without a social or familial safety net. The current disruption, in particular, hurts so much because the threat is much more than economic. Robots encroach on things many of us enjoy doing, including drawing and writing and creating. Fun things. I would not object for a second if robots took over cleaning the toilets, but here we are.

Individuals may need to consider whether their creativity can also be expressed via pursuits less susceptible to AI solutions. But from the point of view of societal justice and ethics, the question only humans can answer is whether businesses that benefit from this disruption are obligated to support pivoting and re-skilling, and what type of societal safety net can help prevent career disruptions from becoming personal and societal disasters. This is a societal concern because a significant loss of jobs in any sector has a domino effect on the other sectors that people can no longer afford to patronize.

Of course, not every type of creative work, like writing or art, is equally impacted. Business journalist and leadership coach Natasha D’Souza believes, for example, that human journalism will remain important. “As a journalist, plagiarism is professional suicide, and having a distinct POV, in addition to the analytical ferocity and editorial timbre, are essential in order to stand out.” D’Souza believes that in journalism, or any field that relies on the uniqueness of human perspective and lived experience, AI tools can never replace human imagination and intuition.

I like the emphasis on imagination and intuition. But what about our expertise?

WHO ARE THE EXPERTS?

Ironically, it seems that while generative AI might have lowered the bar for calling oneself an expert, it has raised the bar for actually being an expert. Copy-pasting from ChatGPT can help non-experts sound impressive even if they are barely versed in the topic (taking some of the advantage away from those naturally good at fibbing). The effect is increased competition among impressive-sounding non-experts.

At the same time, proving oneself as an actual expert will likely require an exceptional level of human expertise. According to Tomas Chamorro-Premuzic, Fast Company contributor and author of the upcoming book, I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique, being a true human expert will take knowing more about the subject matter than the “wisdom of the crowds” on which the AI draws. Human experts must be able to detect errors, bias, and fake references in “crowdsourced” knowledge, producing work more accurate and useful than what the AI can generate. They also must be able to put the knowledge into practice. 

For those who hire experts—such as consultants—and are not interested in cookie-cutter advice, D’Souza suggests looking for deep human expertise. Moreover, this expertise should be “deliberately curated and applied to a specific set of business circumstances or objectives.”  

D’Souza also believes that AI can never replace true thought leadership.

IS THOUGHT LEADERSHIP STILL HUMAN?

Thought leaders (a term that has unfortunately become overused and misused) build on expertise by adding a unique perspective, creating new understanding, and influencing the thinking of others. Their expertise is not only deep and based on research and experience, but also steeped in passion and conviction.

Denise Brosseau, author of Ready to Be a Thought Leader?, believes that ChatGPT and similar advanced AI may be both “one of the best and one of the worst tools to come around in a long time for individuals and organizations aspiring to build their reputation as thought leaders.” It can provide a compilation of the mainstream ideas related to any topic. If you use that compilation as a jumping-off point, then AI is a useful tool. But “if you re-write a sentence or two, add a catchy headline and hashtag, and stick your name on it as an original piece of content, then we are all the worse off.” Creating noise is not the same as thought leadership.

But is thought leadership truly “safe” from problematic applications of AI? I have concerns, both on the individual and societal levels.

THE PLAGIARISM PROBLEM

Doing unique work won’t necessarily protect you from one of the more nefarious aspects of AI use: plagiarism. If anything, those who put decades into developing their expertise, craft, and intellectual property might be particularly vulnerable to having a lifetime of work swiped by a faceless, nameless AI-giarist.

Yes, plagiarism, including technology-assisted plagiarism, has existed for a long time. But AI makes the scale of plagiarism unprecedented. Now, a person’s life’s work may not just be plagiarized by another person, but commodified by a robot. The thought of so much human passion and effort being taken without compensation, attribution, or acknowledgment is terrifying and infuriating. But addressing the ethics of AI use is one of the areas where human leadership, fueled by a (human) sense of right and wrong, is essential.

HUMAN LEADERSHIP IS NEEDED

Without human leadership, the proliferation of generative AI may very well result in doomsday economic and humanitarian scenarios. Leadership in ethical decision-making, the consideration of human consequences, and the prevention of misinformation are some of the areas most in need of human and human-focused thinking.

1. Protecting human life and social justice

It appears that large companies find layoffs in the name of AI-driven productivity acceptable, and heads of AI companies believe that the risks of job loss “don’t outweigh the positive consequences.” However, who stands to benefit? And is the fact that unemployment is associated with a twofold to threefold increase in suicide risk considered in those calculations? Do the profits of AI companies outweigh the risks of the loss of human life?

It is up to humans to address the many ethical questions of AI use. For example, has AI’s potential to deepen the already drastic economic inequalities by pushing individuals and families from the middle class into poverty been considered? What about the risk of a mental health crisis linked to the loss of not only livelihoods, but professional identities? It could easily exceed the mental health crisis of the pandemic.

AI can help analyze multiple scenarios of job loss, economic trajectories, and the potential of universal income and re-training programs to ameliorate human harm. However, only ethics-driven humans can make decisions in the interest of human well-being.

2. Preventing disinformation and establishing the truth

One rarely discussed drawback of generative AI is the potential to multiply disinformation and privilege the viewpoints that serve commercial or political agendas, at the expense of human well-being. Caroline Stokes, an advisor to leaders in the tech sector and author of Elephants Before Unicorns, points to potential dangers. “If we thought people discrediting vaccines, fake news from leaders trying to move everything to their will were bad, AI can scale that. Imagine AI sources multiplying ‘information’ that’s wrong—and people not researching, assessing, fact-checking data properly. The more the data is used, the more it becomes a ‘truth.’”

Separating what is popular and often repeated from what is true is a human dilemma. Solving it will require human research and critical thinking—the very skills humanity might be tempted to outsource to ChatGPT and the like.

But even when humans are willing and able to exert the effort, the dilemma of establishing truth is exacerbated by the non-transparent nature of ChatGPT and its lack of referencing the origins of information.

3. References, please

Remember the good old days when Wikipedia was seen as a threat to education? Now, Wikipedia seems positively saintly. There are references. There are ways to flag, challenge, and correct erroneous or biased entries. There is the tracking of edits.

On the other hand, the sources of information in the black box of ChatGPT are untraceable to the user. When asked to provide references, it spits out non-existent, made-up experts, books, and studies. (Yes, I even had an argument with it about made-up sources in my area of expertise.) ChatGPT said that, as an artificial intelligence, it could not possibly make up references. I believe it, but that raises another question: What was it trained on?

The use of legitimate sources without attribution might be, in some ways, an even bigger problem than made-up references. Derivative use of copyrighted art, writing, and other intellectual property by generative AI is a significant ethical and potentially legal issue. If ChatGPT plagiarizes for you, who is the plagiarist? And is the use of AI tools that lack attribution capabilities ethical at all?

Ironically, generative AI appears to have created an urgent need for human thought leadership, thorough research, and critical thinking. We can choose to channel our anxieties destructively or constructively, turning them into caring and compassion. It is our responsibility not to outsource human, genuinely informed, and ethical decision-making.

Article link – https://www.fastcompany.com/90842966/chatgpt-end-thought-leaders