What ChatGPT and Generative AI Could Mean for Learning

When Barbara Mistick and I surveyed people for our book Stretch, we asked them what they did to stay current. The number one answer: I hang around with smart people.

I got the chance to do exactly that last week when I facilitated a conversation on generative artificial intelligence with Sourabh Bajaj, CTO of CoRise, and a group of heads of learning or talent who sit on the GSV Ventures Workforce Insights Board. Sourabh has spent his career in artificial intelligence at companies from Coursera to Google.

In this article, I hope to provide a quick primer and broaden the conversation about what generative AI will mean for us in the talent and learning space.

Part I: A Primer on Generative AI. Is ChatGPT All There Is?

Not by a long shot. But before we get there, here’s a tiny primer in lay terms from a non-technologist attempting to translate for our field of practice. Skip to Part II if you’re the type who likes to understand the implications right off the bat.

We’re living through an incredible shift in computing capability that has led to this moment in time. A quote widely cited by tech bloggers in the last few months comes from Lenin: “There are decades where nothing happens, and there are weeks where decades happen.”

Three big factors have contributed to the weeks we live in now: computing power, transformers, and large language models (LLMs). Hang with me just a moment, because understanding these three factors at a very high level will help you grasp the magnitude of the changes that are about to happen.

Computing Power and The Three Waves of Machine Learning

You’ve probably heard of Moore’s Law, the observation that the number of transistors that fit on an integrated circuit doubles roughly every two years. Think of it like the classic investing-literacy question: Would you rather have $1 million or a penny that doubles every day for a month? You might remember that the doubling penny breaks $1 million on day 28. On day 29, it’s closer to $3 million than $2 million. One more day and you’re over $5 million. Keep going until the end of the year and the numbers are so big they’re beyond conceptual understanding for most people – or at least for me.
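
If you’d like to check the penny arithmetic yourself, here’s a minimal sketch; it’s just the compound-doubling math from the analogy, nothing AI-specific.

```python
# A minimal sketch of the doubling-penny arithmetic from the analogy above:
# one cent on day 1, doubling every day after that.
value_cents = 1
for day in range(1, 31):
    print(f"Day {day:2d}: ${value_cents / 100:,.2f}")
    value_cents *= 2

# ...and if the doubling kept going to the end of the year:
print(f"Day 365: ${2 ** 364 / 100:.2e}")
```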

In computing power, we’re now reaching beyond the penny-doubling month of the analogy, leading to what’s now a third wave of power. The first wave was good old-fashioned artificial intelligence – GOFAI – where humans hand-crafted the rules and trained the machine. Many applications you might use rightfully claim to be AI-powered on the strength of this first wave of GOFAI. The second wave, deep learning, took off after 2012 and can extract attributes from raw data without the need for specific programming. In a sense, computers became able to learn the way humans do – by example and through pattern recognition in an architecture of neural networks. See the chart below for the number of AI applications introduced in this first wave, in the second wave of deep learning, and in the current large-scale wave. The key takeaway: not only is computing power unprecedented and growing, we’ve reached a stage where previously inconceivable power is available to fuel sophisticated machine learning.
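
To make the contrast concrete, here’s a deliberately toy sketch: a hand-written rule standing in for first-wave AI, and a simple scikit-learn classifier (standing in for a real neural network, which works at vastly larger scale) that learns the same pattern from labeled examples. The spam example is my own illustration, not something from the conversation.

```python
# Toy contrast: first-wave AI (a human writes the rule) versus learning by
# example (a model infers the pattern from labeled data). A simple
# scikit-learn classifier stands in here for a real deep neural network.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# First wave: the rule is hand-crafted by a person.
def is_spam_rule(message: str) -> bool:
    return "free money" in message.lower()

# Second wave: the pattern is learned from labeled examples.
messages = [
    "Claim your free money now",
    "Meeting moved to 3pm",
    "Free money is waiting for you",
    "Can we reschedule lunch tomorrow?",
]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(messages)
model = LogisticRegression().fit(features, labels)

test = "free money inside"
print(is_spam_rule(test))                                # hand-written rule
print(model.predict(vectorizer.transform([test]))[0])    # learned model
```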

Transformers. Not the Sci-Fi Action Movies

This newly available computing power and the advent of deep learning have allowed the creation of transformers. A transformer model is a neural network that tracks relationships in sequential data – such as the words in a sentence – and thus learns context and meaning. Because the computation doesn’t have to be done in a strictly linear fashion, output is generated much faster. Transformers might take millions of compute hours to train properly, but when released openly, they can be shared by others as foundational building blocks.

Before the advent of transformers, users had to label objects for the machine to learn from. This breakthrough, introduced by Google researchers in 2017 and only broadly usable in the last few years, gives machines the ability to pay attention to the most important features in the data and ignore what isn’t relevant, mimicking human cognition. ChatGPT is one specific use of a transformer model, with many others emerging. Because transformers process information in parallel and pay attention to relationships, they can go beyond analyzing existing content to generating new content.

Transformer models that can take an input of text, images, audio, and more, and generate essentially new output are what we now call generative artificial intelligence. An older GOFAI model might have helped find and index content on your corporate intranet. A newer transformer might look at everything on your intranet and, in response to “write a training policy for the employee handbook in conversational English,” generate that content in 45 seconds or so. Below is a graphic that shows how many of these transformers have been developed in just the last year. We have to assume these are the equivalent of the first computers. In other words, you ain’t seen nothing yet. Key takeaway: ChatGPT, amazing as it is, is just the beginning, with many more transformers coming soon.
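
If you want to see the generate-from-a-prompt mechanic first-hand, here’s a minimal sketch using a small, openly released transformer (GPT-2) through the Hugging Face transformers library. The model and prompt are illustrative stand-ins – the output will be nowhere near ChatGPT quality – and this is not the system CoRise or OpenAI actually runs.

```python
# Minimal sketch: prompt a small open transformer model (GPT-2) to generate
# new text. Assumes the transformers package is installed; the point is the
# mechanic, not the quality of the output.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "Write a short training policy for the employee handbook:",
    max_new_tokens=80,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```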

Large Language Models

Large language models are a subset of transformers. I asked ChatGPT to explain LLMs in lay terms: “Large language models are computer programs that can understand and generate human language at a scale never seen before. They are built using a special type of artificial intelligence called deep learning, which allows them to learn from vast amounts of text data and find patterns in the language.

These models can perform a wide range of language-related tasks, including translation, summarization, question-answering, and even creative writing. One of the most remarkable things about these models is their ability to generate new, coherent language that sounds like it was written by a human. They do this by using complex statistical algorithms to predict the most likely words to follow a given sequence of text, based on what they’ve learned from the vast amounts of language data they’ve been trained on.” Key takeaway: we are closer than we have ever been to human-machine interaction that feels like human-to-human conversation.
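
You can see that “predict the most likely next words” idea directly with a small open model. The sketch below again uses GPT-2 as a stand-in (ChatGPT itself isn’t downloadable) and prints the model’s top guesses for the token that follows a prompt.

```python
# Sketch of next-token prediction with a small open model (GPT-2 as a
# stand-in for much larger LLMs). Requires the transformers and torch packages.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The best way to stay current in your field is to"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits           # scores for every possible next token

probs = torch.softmax(logits[0, -1], dim=-1)  # probabilities at the last position
top = torch.topk(probs, 5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {p:.1%}")
```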

These three components, not just ChatGPT, convey the enormity of the changes that lie before us – and soon.

Part II: What Does it Mean for Talent & Learning?

Do you remember the famous big miss in predicting the future by IBM’s president, Thomas J. Watson? In the 1940s, he reputedly said: “I think there is a world market for maybe five computers.” We should remember that and not underestimate what this tectonic shift will mean. Here are just a few of the ideas we talked about:

  • Content development. Indeed, CoRise is already experimenting with limiting ChatGPT to a known body of content and then generating a course outline in seconds. A few seconds later, a module of content was written. A few seconds after that, Sourabh had generated a test. (A rough sketch of this pattern appears after this list.) Can we anticipate enormous productivity and efficiency gains in content creation? Almost assuredly yes.
  • AI-augmented SMEs. How many times has your course development schedule been thrown off track by a lack of SME access? What if 80% of what a course developer needs could be gathered virtually, and only the last 20% refined with the SME?
  • Engagement @ scale. Could this be a new era of online learning, where the system continuously monitors progress and offers suggestions, tips, and advice, whether in a course or in the flow of work?
  • Adaptive learning on steroids. Adaptive learning at scale has been more hope than reality, with a few exceptions. Will we be able to realize that promise – placing learners at the right starting point and, with continuous monitoring, moving them through learning at the right pace for their unique needs?
  • Personalized learning. If a new smart learning system can see what other systems you use during the day, can it suggest learning based on your unique usage? Or based on your job role, tenure, or other characteristics?
  • Collaborative learning. Will smart systems be able to find other people doing the same tasks as you and connect you to learn together?
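
On the content development point above, here’s a rough sketch of what “constrain the model to a known body of content and ask for a course outline” might look like with the OpenAI API. The model name, prompts, and placeholder source text are my own illustration, not CoRise’s actual implementation.

```python
# Rough sketch: ask a chat model for a course outline, constrained to a
# supplied body of content. Assumes the (pre-1.0) openai package and an API
# key; the model name and source text are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

source_content = """(Paste the known body of content here, e.g. your
internal documentation or subject-matter notes.)"""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "Use only the provided source content. Do not invent facts."},
        {"role": "user",
         "content": f"Source content:\n{source_content}\n\n"
                    "Create a course outline with modules and learning objectives."},
    ],
)
print(response["choices"][0]["message"]["content"])
```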

Perhaps the biggest implication for education and learning is how we will have to support our organizations as nearly every job is eventually affected. Computers did become ubiquitous, against Thomas Watson’s projection, and people and organizations adapted. As learning professionals, we should prepare to equip people with the new skills they’ll need to leverage the technology that is just about at our doorstep.

Part III: What Should We Be Worried About? Or Not Worried About?

With anything new and unknown, there are aspects of concern. Here are just a few.

  1. Hallucinations. Just like humans, machines can learn the wrong thing and still think it’s right. In AI these are called hallucinations: cases where the deep learning model is essentially following the wrong patterns. For example, in self-driving mode my Tesla can read pavement changes or passing clouds as obstructions, triggering a known bug called phantom braking. (I love my Tesla, and I trust it’ll eventually get worked out.) When and how will hallucinations in other systems get detected and averted?
  2. Two truths and a lie. You know the icebreaker – tell two truths and a lie about yourself. At this stage, I feel ChatGPT writes two truths and a lie. My name is unique enough that there shouldn’t be much ambiguity about me on the internet, so I asked it to write my bio. (Try this, and if your name is shared by others, constrain the request to your company, city, or school.) It got it almost right, but not quite. All my education was completely wrong. Interestingly, though, the ChatGPT bio emphasized some things I hadn’t really ever focused on and made me think I should – for example, which media outlets I’ve been interviewed by. Really, this is the same problem as hallucinations, but a bit more deceptive since it’s almost right.
  3. Mass job displacement. Many writers have already raised the alarm about the end of jobs for content creators of any type, from writers to artists to analysts to contact center representatives to software developers. This same alarm was raised for bank tellers when ATMs came out, but instead the number of bank tellers grew because demand grew and more branch banks were built. Will the demand for personalization and other services grow, allowing machines to do the more mundane work? Or will machines displace workers?
  4. Isolation. If so much can be done by one human working with many systems, will we lose touch with other humans? What will be the psychological effects of interacting with a system that seems human but isn’t?

As you can see, an hour’s discussion raised more questions than answers, so the GSV team is scheduling another call to continue the conversation. If you aren’t already aware of it, GSV Ventures and Arizona State University hold a tremendous conference each year in San Diego, the ASU+GSV Summit. Sourabh and I will both be presenting there, along with many extraordinary speakers and our own LTG CEO, Jonathan Satchell.

We’ll also be at ATD ICE in San Diego in May. More info on the ASU+GSV Summit program can be found here, and if you’re a corporate learning leader, drop a comment requesting the special discount code and the GSV team will get you more than 60% off the registration ticket. Hope to see you there to continue the exploration on this topic together.

About the Authors

Dr. Karie Willyerd
Karie Willyerd is the only CLO to have taken two companies to number one in the world. Most recently she was the CLO for Visa. Prior to that she was head of SAP’s global customer education business. She came into SAP when Jambok – a video-based learning platform where she was the cofounder and CEO – was acquired by SAP SuccessFactors. Dr. Willyerd served as the CLO for Sun Microsystems and head of talent or learning for Solectron, Heinz, and Lockheed Martin Tactical Air. Dr. Willyerd is a prolific author on the future of work, learning, and human capital leadership, with two best-selling books and articles/blogs published in Harvard Business Review, Forbes, and all the major learning trade journals.
