Think you're skilled? Think again.
In a world where over two hundred million adults use a cognitive prosthesis, we need to take a long, hard look at what skill is. Then we'll see: old skill is fading, and a new kind is on the rise.
Should my kids learn to code? Can I keep up with my coworkers? Can this person do the job I’m hiring them for? Should I trust this company with my problem? Do we have the workforce we need for our industry’s problems?
The very public advent of generative AI - and the multifaceted assistance it can provide - means questions like these are being asked and answered anew, around the globe. In case you need to get up to speed on the apparent disruption, I recommend this short video of a naive user - someone with no training in software development - using conversational English prompts to create a functional game that runs over the internet on their mobile phone. I watched many of my Master's students do something similar as they analyzed and plotted a large dataset via python - most with no prior coding experience. We've seen comparable examples with reading, writing, and basic math, but also with web design, graphic design, scientific research, customer service, statistics, project planning, and medicine, just to name a few.
New results abound, and we want them. In fact they’re probably more important than ever, given the complex problems we’re facing. But the way to do our old tasks with these new tools is now a bit murky, and we have a thicket of unanswered, critical questions:
Does genAI mean we’ve arrived in an era where knowing exactly how to do the work is the equivalent of knowing arithmetic by hand when a calculator is available? When does it not mean that?
Does getting better results with genAI mean we’re more skilled? Less skilled?
What skills do we really need to get results, and how can we trust if someone has them?
How do you build and maintain skills, if you’re relying on genAI?
Absent clear answers, we’re all making bets.
Parents, deciding on school options. Recruiters, deciding on candidates. Educational administrators and EdTech providers, deciding on curricula. Clients, deciding on law firms. Patients, deciding on doctors. Learning & Development professionals, deciding on corporate training. Teachers, deciding on class design. Administrators and executives, deciding on strategic investments. Politicians, deciding on policy. Even kids, looking at code, writing, and art they get just by prompting for it in English - some are asking and answering for themselves, too.
Long before we had the science to explain it, we've known that the right skills beget results, power, and opportunity. So we've worked hard to provide up-to-date paths for skill development - paths that help learners contribute in ways that meet society's needs, and that often give them a personal leg up in pay, dignity, and job quality. While it's as old as the human species, this kind of adaptation is clearly the story of all classic education, vocational training, higher education, corporate L&D, and formal apprenticeship programs: the world identifies the outcomes it's after and the skills required to achieve them, then sets about helping humans build those skills and put them to work.
None of that can keep up with what’s happening right now.
I've been studying the implications of intelligent technology for skill development for over ten years now. Before that, my work was likewise focused on skill development and learning for another decade. And to write my upcoming book, I immersed myself in research on skill development spanning more than a dozen disciplines and a hundred and fifty years. Based on all that, I can confidently say the skills target is moving and changing at unprecedented speed - and we're not ready for it. We've got to rethink skill right away, or many of our bets are going to turn up losers.
Skill at the Jagged Frontier
Let me ground all of this by referring to the now deservedly-well-known "Jagged Frontier" paper. The findings here are, objectively, bananas: consultants using genAI were more productive and did better quality work by wide margins, and those with below-average skill improved their performance 2.5 times as much as those with above-average skill. If measured by results, all of these consultants got better, but the weakest ones saw the biggest boost (it's important to note that a *wonderful* study of Kenyan entrepreneurs showed the opposite effect).
Some have claimed this means those consultants are, by definition, more skilled. Skill is that which allows you to get results, and users' results are better, therefore they have more skill, Q.E.D. Ethan Mollick and I had an interesting and informative conversation about this issue and the Jagged Frontier paper on Twitter, where he started out with this view. And it definitely has some truth to it. But not enough for games with consequences. Yet have no doubt: people are using this logic to apply and select for jobs or work gigs, make decisions about reskilling their workforce and training budgets, and predict the socioeconomic impact of genAI.
There are at least four serious problems with this line of thinking:
It conflates individual skills with bundles of skill (aka jobs, activities). Being skilled at writing a research brief is *very* different from being a skilled consultant. In my fall experiment, my Master's students hit their stride producing code, but really struggled to complete their first individual assignment. Why? They had to get their code into GitHub Codespaces - a cloud-based development environment that lets users write and run code from their web browser. Many folks would call all of this "coding in python" and miss the stark difference between producing code on your local machine and working through platforms. The professionals who curate O*NET - the U.S. government's occupational dictionary of tasks for all known occupations - work hard to avoid this trap, but most of us gloss right over it. Most work takes a bundle of skills.
It presumes away surprise, change, and complexity. To riff off Mike Tyson, everyone's got a skill until they get punched in the mouth. Sometimes writing a memo is just writing a memo, but other times we need to fly to Bangalore to interview an expert because their internet and phone are down - and the client bought the memo precisely for that expertise. Sometimes a crossing guard has to deal with a 500-person Bike Life group, right when school lets out and the kids need to get home. Sometimes reality intrudes - it's complex and dynamic. And we expect people to handle quite a bit of it. Until we have practiced using genAI to handle these predictable surprises, our apparent new "skill" is shiny but brittle.
It ignores task characteristics. We can trust genAI too much. And it hallucinates. None of that is okay on a task where failure is not an option, or where technology access is problematic, for example. In a robotic partial nephrectomy (removing a cancerous tumor from a kidney), if you nick the iliac artery, your patient could be dead on the table in three to five minutes. What do you have to do to solve this problem? Undock the robot from the patient - which involves removing four laparoscopic instruments and backing the thousand-pound apparatus six feet away - page a vascular surgeon, gown and glove up, and convert to open surgery via a large, manual incision. The robot is no longer useful - it's a hindrance. Many other task characteristics matter here - time pressure, irreversibility of decisions, task feedback… we have to take many of them into account to assess skill.
It ignores learning. Hard to say genAI boosts our skill if our results get better but our broader abilities stagnate or decay through use. I wrote about this last year: AI can dumb you down. Not because you get worse at the thing you’re doing. Because you get better at it. If you get a response you’re satisfied with, much faster, you have much less opportunity to struggle, and to encounter the broader context for your work task through “inefficient” exploration. You likewise lose an opportunity to ask for help or offer it in return. Challenge, complexity, and connection, in the language of my upcoming book. It’s this robust, semi-focused process that allows us to build and maintain valuable skills.
If we take just those four critiques seriously, a lot of the genAI-enabled "skill" out there is spindle-thin, fragile, hyper-specialized, and static. It's not deeply trustworthy, at least in the way we're used to trusting skill. It would need to be measured differently. Its social and economic impacts would be different. We have not studied this - or even modeled it - and we need to, because it really is an empirical fact that hundreds of millions of us are applying this kind of skill in consequential work.
The new wave of skill: Interactional expertise
Setting that aside, however, I think this flood of new "talent" into the marketplace highlights a kind of skill that has always been with us, but will matter much more in the days, months, and years ahead.
A Master’s student of mine unwittingly (and cynically) highlighted it when she said:
“I don’t think this qualifies me to work at a software company, but I guess I could gaslight them into employing me.”
She was referring to the software development skill she'd acquired through the quarter-long, intensive experience her class had just been through. Together they'd gone from having no coding expertise to doing data analytics on their own via python, then working in teams to build a functional, web-based software tool for technical project managers. This required many hours of staring at and tweaking code, figuring out how to run it, seeing it fail, debugging it. Then stitching it all together and running it live in front of the class. They thought it was impossible, but it worked.
But did they have coding skill? Clearly, in key senses, they did not. My amazing TA Brandon Lepine and I interviewed a representative sample of them after the fact, and as part of that interview we showed them a set of very simple code snippets. A “for loop”. A “dictionary”. Some recursion. Many had no idea what they were looking at, other than “code”. Some saw a few key words they understood, though. “I know that’s a print statement”, some said. Or “that’s a list of data.” If you took ChatGPT away from them, or turned off the code interpreter feature, or asked them to think through a problem computationally, they’d be dead in the water. So there’s your thin, fragile, hyper-specialized, static “skill.”
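To make that concrete, here's the flavor of snippet we showed - a hedged reconstruction for illustration, not the exact interview materials:

```python
# A "for loop": repeat an action for each item in a list
for name in ["Ada", "Grace", "Alan"]:
    print(name)

# A "dictionary": a lookup table mapping keys to values
ages = {"Ada": 36, "Grace": 45}
print(ages["Ada"])  # 36

# Some recursion: a function that calls itself on a smaller problem
def factorial(n):
    if n <= 1:
        return 1
    return n * factorial(n - 1)

print(factorial(5))  # 120
```

Snippets like these are the alphabet of computational thinking, and many of the students couldn't sound them out unaided.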
They did, however, build a lot of a very different kind of skill. Skill that’s about a reasonable understanding of the system of work they were involved in, its vocabulary, its dynamics, its processes and tasks, its norms, roles and responsibilities, and its output. They roughly knew what was going on, and could go through a lot of the motions with some fluency. That's a special and powerful kind of skill - useful for a lot more than gaslighting an employer into hiring you.
Harry Collins called this "interactional expertise." It's knowing enough about a context to be "conversant" with local experts. To understand the basics of their work to their satisfaction. This allows outsiders to plausibly interact with insiders. Interactional expertise is what allows managers, journalists, and organizational ethnographers like me to do our jobs. It enables interdisciplinary collaboration and peer review. It lets the public engage with complex issues. It is the grease in the cogs of society, science, and commerce.
If you use generative AI to do something you couldn’t before - even a little bit - you have an opportunity to build this kind of expertise. Especially if you configure it to push you in that direction (here’s my updated, first-pass how-to on that). You’ll be in a position to build better, more productive relationships with experts outside your areas of solid skill, and to identify places you’d like to go in your skills journey.
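To give a flavor of what I mean by "configure" (an illustrative sketch, not the full how-to linked above), a standing instruction like this one can do a lot of work:

"Whenever you solve a problem for me, briefly explain the key terms a practitioner would use, tell me who normally does this kind of work and how, and end with one question an expert would ask me about my request."

Instructions like that turn each answer into a small apprenticeship in the surrounding system of work.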
But the benefits of interactional expertise are about to mushroom, Oppenheimer-style.
Skill is dead, long live skill
David Autor - one of my intellectual heroes, an economist at MIT - recently wrote a marvelous piece for Noema (and NBER), tantalizingly titled "How AI could actually help rebuild the middle class." It's a *must* read, full stop. Tl;dr, however? Our society has concentrated wealth and power in the hands of those who can make expert decisions, and generative AI could break all that in fruitful ways for the middle class. In David's words, the genAI we'll soon have will likely:
…support and supplement judgment, thus enabling a larger set of non-elite workers to engage in high-stakes decision-making. It would simultaneously temper the monopoly power that doctors hold over medical care, lawyers over document production, software engineers over computer code, professors over undergraduate education, etc.
Sounds incredible, but it's quite plausible. If hard sci-fi had an equivalent in labor economics, this would be it. Lots of social science predicts elite experts won't go quietly into that good night, but if these systems get really good, in certain senses they'll have no choice. Urgent medical care in rural areas. Legal help in impoverished ones. Technical tutoring and design help for kids with no nearby instructors. Against all the research on the sociology of work and organizing, occupations, occupational jurisdiction, and so on: it's going to happen, at least in pockets.
But to take advantage of this newly-available, nearly-free elite judgment, users will need… you guessed it: interactional expertise!
You have to know roughly the game you're playing - at least a few moves deep - to put forward a cogent request to your elite-expert-in-a-box. You have to express your problem in terms that would allow it to make a precise judgment. Sure, it will ask clarifying questions to try to help you, but if you have interactional expertise in a given domain, you'll be able to get great results much, much faster. That will in part be because you'll be able to convey the output to real people in the real world as you try to turn it into value. You'll have to explain. Convince. Enroll. Rebut. Modify. Learn. If you're only thin, fragile, hyper-specialized, and static, you're toast.
Interactional expertise is the new metaskill you'll need to program GPT-4+ level systems in ways that loosen the chokehold of elite experts on the economy and your life. Not that many of those experts want to have their hands around your metaphorical neck, mind you - doctors, lawyers, plumbers, teachers - they're all super stressed, and are well aware they can't help enough people fast enough. I think, in part, that's why we'll see some surprising movement on this front - the benefits to otherwise underserved people will be too significant, and arrive too quickly, to hold back.
Even if our prior social structures and ways of working remain intact - even if elite expert judgment still wins the day - interactional expertise will afford the rest of us a great deal more agency in our work and lives. Being a conversant generalist is the new superskill, and if you use it right, genAI can help you develop it faster than ever before.