Everyone has an opinion about Chat-GPT and artificial intelligence. Engineers and entrepreneurs see it as a new frontier: a brave new world in which to create products, services, and solutions. Social scientists and journalists are concerned, with prominent New York Times author Ezra Klein referring to it as an “information warfare machine.” What hath God wrought?
Let me state right away that I see enormous potential here. And, as with all new technologies, we cannot yet fully predict their impact. There will be setbacks and failures along the way, but I believe the end result will be overwhelmingly positive.
What Is Chat-GPT?
Simply put, this technology (and many others like it) is a “language machine” that uses statistics, reinforcement learning, and supervised learning to index words, phrases, and sentences. While it lacks true “intelligence” (it doesn’t know what a word “means,” but it knows how to use it), it can answer questions, write articles, summarise information, and do other things very well.
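To make the “it doesn’t know what a word means, but it knows how to use it” point concrete, here is a toy bigram model in Python. This is a deliberately miniature stand-in for the statistics GPT-style systems compute over billions of examples; the corpus and function names are my own illustration, not anything from OpenAI.

```python
from collections import defaultdict, Counter

# A toy "language machine": it never knows what a word *means*,
# only which word tends to follow which, and how often.

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the statistically most likely next word, or None."""
    candidates = follows.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

corpus = (
    "the model predicts the next word "
    "the model learns word statistics "
    "the model predicts the next word"
)
model = train_bigrams(corpus)
print(predict_next(model, "the"))   # → "model" (follows "the" most often)
print(predict_next(model, "next"))  # → "word"
```

Scale this idea up by many orders of magnitude, add neural networks and reinforcement from human feedback, and you get the fluent (but meaning-free) behaviour described above.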
Chat-GPT engines are “trained” (programmed and reinforced) to mimic writing styles, avoid specific types of conversations, and learn from your questions. In other words, more advanced models can refine answers as more questions are asked, and then store what they have learned for future use.
While this isn’t a novel concept (we’ve had chatbots for over a decade, including Siri, Alexa, Olivia, and others), the level of performance in GPT-3.5 (the most recent version) is astounding. I’ve asked it questions like “what are the best practices for recruiting” and “how do you build a corporate training programme,” and it’s given me good answers. Yes, the answers were quite basic and somewhat incorrect, but with further training, they will undoubtedly improve.
It also has a variety of other capabilities. It can answer historical questions (such as who was president of the United States in 1956), write code (Satya Nadella believes that 80% of code will be generated automatically), and write news articles, information summaries, and more.
One vendor I spoke with last week is using a GPT-3 derivative to generate automatic quizzes from courses and act as a “virtual Teaching Assistant.” That brings me to the potential use cases.
How Can Chat-GPT and Similar Technologies Be Used?
Before I get into the market, let me explain why I think this will be so massive. The corpus (database) of information that these systems index “trains and educates” them. The GPT-3 system has been trained using the internet and highly validated data sets, so it can answer almost any question. That is, in some ways, “stupid,” because “the internet” is a jumble of marketing, self-promotion, news, and opinion. To be honest, I think we all have enough trouble determining what is true (try searching for health information on your latest ailment; you’ll be surprised at what you find).
The Google competitor to GPT-3 (rumoured to be Sparrow) was designed with “ethical rules” in mind from the start. It includes ideas like “do not give financial advice,” “do not discuss race or discriminate,” and “do not give medical advice,” according to my sources. I’m not sure if GPT-3 has this level of “ethics,” but you can bet that OpenAI (the company developing it) and Microsoft (one of their major partners) are working on it (announcement here).
So, while “conversation and language” are important, some very erudite people (I won’t name names) are actually kind of jerks. As a result, chatbots like Chat-GPT require refined, in-depth content to truly build industrial-strength intelligence. It’s fine if the chatbot works “pretty well” if you’re using it to break through writer’s block. However, if you want it to work consistently, it must source valid, deep, and expansive domain data.
I suppose one example would be Elon Musk’s overhyped self-driving software. I, for one, do not want to drive or even be on the road with a bunch of 99% safe cars. Even 99.9% safety is insufficient. This could be a “disinformation machine” if the information corpus is flawed and the algorithms aren’t “constantly checking for reliability.” And one of the most senior AI engineers I know told me that Chat-GPT will almost certainly be biased due to the data it consumes.
Consider the possibility that the Russians used GPT-3 to create a chatbot about “United States Government Policy” and directed it to every conspiracy theory website ever written. This doesn’t appear to be a difficult task, and if they put an American flag on it, I’m sure many people would use it. As a result, the source of information is critical.
Because AI engineers are well aware of this, they believe that “more data is better.” OpenAI CEO Sam Altman believes that, as the data set grows larger, these systems will “learn” to work around invalid data. While I understand that concept, I believe the opposite. I believe that one of the most valuable applications of OpenAI in business will be directing this system to refined, smaller, validated, deep databases that we trust. (As a major investor, Microsoft has its own Ethical Framework for AI, which we must assume will be enforced based on their partnership.)
The most impressive solutions I’ve seen in demos over the years are those that focus on a single domain. Olivia, a Paradox AI chatbot, is intelligent enough to screen, interview, and hire a McDonald’s employee with remarkable efficiency. A vendor created a chatbot for bank compliance that functions as a “chief compliance officer,” and it works very well.
As I discuss in the podcast, imagine if we built an AI that directed us to all of our HR research and professional development. It would be a “virtual Josh Bersin,” possibly smarter than me. (We are currently prototyping this.)
Last week, I saw a demonstration of a system that took existing courseware in software engineering and data science and generated quizzes, a virtual teaching assistant, course outlines, and even learning objectives automatically. This type of work typically necessitates a significant amount of cognitive effort on the part of instructional designers and subject matter experts. When we “direct” the AI toward our content, we suddenly unleash it on the world at large. And we can train it behind the scenes as experts or designers.
Consider the hundreds of business applications: recruiting, onboarding, sales training, manufacturing training, compliance training, leadership development, and even personal and professional coaching. If the AI is trained on a trusted domain of content (which most companies have in abundance), it can solve the “expertise delivery” problem at scale.
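The “trusted domain” idea running through the last few paragraphs can be sketched in code: before answering, retrieve the most relevant passage from a small, validated corpus and answer only from it. This is a minimal illustration under stated assumptions — the corpus, function names, and word-overlap scoring are placeholders for what a production system would do with embeddings and a real language model.

```python
import re

# Ground a chatbot in a trusted corpus: score each validated document
# against the question by word overlap and answer only from the best
# match. The corpus below is invented for illustration.

def tokenize(text):
    """Lowercase and strip punctuation, returning a set of words."""
    return set(re.findall(r"[a-z]+", text.lower()))

def best_passage(question, corpus):
    """Return the trusted passage sharing the most words with the
    question, or None if nothing in the corpus is relevant."""
    q = tokenize(question)
    scored = [(len(q & tokenize(p)), p) for p in corpus]
    score, passage = max(scored)
    return passage if score > 0 else None

trusted_corpus = [
    "Onboarding checklist: new hires complete compliance training in week one.",
    "Recruiting policy: structured interviews reduce bias in hiring decisions.",
    "Sales training: certification is renewed annually by the enablement team.",
]

print(best_passage("What does our recruiting policy say about interviews?",
                   trusted_corpus))
```

The design choice matters: because the system can only answer from passages a company has validated, a flawed public internet never enters the loop — which is exactly the “expertise delivery” argument above.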
Where Will This Market Go?
As with any new technology, the early adopters frequently receive arrows in the back. So, while Chat-GPT appears miraculous, we must predict that innovators will rapidly advance, extend, and refine this. I’m willing to bet that most VC firms are now writing blank checks to startups in this space, so there will be plenty of competition.
My gut feeling is that companies like OpenAI and Microsoft will compete with a slew of other players (Google, Oracle, Salesforce, ServiceNow, Workday, and so on), and that every major vendor will “bulk up” on AI and machine learning expertise. If Microsoft integrates OpenAI APIs into Azure, thousands of innovators will use the platform to create domain-specific offerings, new products, and creative solutions. However, it is still too early to tell, and I believe that industry-specific and domain-specific solutions will triumph.
Consider the number of “opportunity spaces” available. Leadership development, fitness coaching, psychological counselling, technical training, and customer service are just a few examples. That is why, as early as this market is, I continue to believe the opportunity is “enormous.” (I recently attempted to contact PayPal via their chatbot and became so frustrated that I decided to close my account.)
This technology reminds me of the early days of “mobile computing.” We initially saw it as an “add-on” to our corporate systems. Then it expanded, matured, and grew. Today, most digital systems are designed for mobile first, with entire tech stacks built around mobile, and we study behaviour, markets, and consumers via their phones. The same thing is going to happen here. Imagine being able to see all of the questions your customers have about your products. The potential is simply astounding.
As I discuss in the podcast, many jobs will change. I just completed an analysis of all the jobs directly impacted by Chat-GPT (editors, reporters, analysts, customer service agents, QA engineers, and so on) and discovered that today, with approximately 10.3 million jobs open, approximately 8% (800,000) will be impacted immediately. These jobs will not be eliminated, but they will be upgraded and enhanced over time by these systems. (In addition, many new jobs, such as “Chatbot trainer,” are being created.)
There’s a lot more to talk about on this subject, so I invite you to join us as a Josh Bersin Academy or Corporate Member to learn more. And if you have your own experience or are working on something cool, we’d love to see it.
Onward and upward: think of this as one of the brightest stars in our future, and try to keep it under control.