AI and the Future of Higher Ed: Insights from Paul LeBlanc
Paul LeBlanc is President of Southern New Hampshire University. Over 20 years, he has led SNHU’s explosive growth, expanding access to over 250,000 students and making it the largest nonprofit provider of online higher education in the U.S. Paul shares his perspective on how artificial intelligence and the rise of tools like ChatGPT will bring deep shifts to higher education. He outlines the need to reinvent learning models and curricula, deploy AI judiciously, and coordinate global data sharing. Ultimately, Paul expresses optimism that AI’s transformative potential can usher in positive change for society, if anchored by human relationships and wisdom. This forward-looking discussion provides higher education leaders with timely insights on navigating AI-driven disruption through coordinated innovation that keeps learner needs at the center.
Please follow, rate, and review Work Forces on Apple, Spotify, or wherever you are listening. Also, please follow Kaitlin and Julian on LinkedIn.
Transcript
Julian Alssid: Welcome to Work Forces. I'm Julian Alssid.
Kaitlin LeMoine: And I'm Kaitlin LeMoine. And we speak with the innovators who shape the future of work and learning.
Julian: Together, we unpack the complex elements of workforce and career preparation and offer practical solutions that can be scaled and sustained.
Kaitlin: Let's dive in.
Julian: Today, we're thrilled to be joined by Paul LeBlanc, President of Southern New Hampshire University. Paul joined SNHU in 2003 and has transformed the university into the largest nonprofit provider of online higher education in the country, expanding its student base from 2,800 to over 250,000. Paul's been recognized for his innovative leadership and has earned accolades from Fast Company, Forbes Magazine, and Washington Monthly. And in 2018, Paul won the prestigious TIAA Institute Hesburgh Award for Leadership Excellence in Higher Education. Paul's work extends beyond SNHU. He served as senior policy advisor to Under Secretary Ted Mitchell at the US Department of Education, and on several national committees focused on education, quality, and innovation. As a first-generation college grad with degrees from Framingham State University, Boston College, and the University of Massachusetts, Paul's career has spanned from directing a technology startup for Houghton Mifflin to presiding over Marlboro College before joining SNHU. And most recently, Paul announced that he is stepping down from his role at SNHU and is embarking on an exciting new chapter to focus on the integration of AI into higher education. Thanks so much for joining us today, Paul.
Paul LeBlanc: It's really great to be with both of you.
Kaitlin: So wonderful to see you, Paul. Thank you for joining our podcast. So though we've given this bio, can you please tell us a bit about your background in your own words and your journey of bringing SNHU to its current scale?
Paul: Sure. Let me do the first one in broad strokes. My family immigrated from a very impoverished rural farming area of New Brunswick, Canada, when I was just a kid. I was the youngest of five and the first in my extended family to go to college. My father had an eighth grade education. My mother had a sixth grade education. My mom worked in a factory until she was in her seventies, and my dad was a day laborer and a construction worker. And like all immigrant stories, they came because there was work and a better life. And indeed there was. So I have a kind of schmaltzy love of the American dream story, because I've watched the way my two grown daughters have a life now that their grandparents could scarcely imagine if they were still with us. And that was because I had access to high quality, affordable higher education, which changed the whole trajectory of my life and thus theirs. I think that dream has increasingly slipped out of reach for too many Americans and new immigrants and working people. So SNHU has been…what I loved about this place when I came in 2003, almost 21 years ago now, was that in its DNA it had always focused on those learners. It wasn't a typical college that started with a campus and 17 year olds and that bucolic setting we see on view books. It started on the second floor over the Palace Fruit Company on Hanover Street in Manchester, and it wasn't until 1968 that it had a campus.
So when I arrived in 2003, it had about 2,500 students. It had continuing education centers, and those were evening programs for working adults, but it also had a small online program at a time when a lot of not-for-profit higher ed was still looking down its collective nose at online learning. They thought it was inferior quality, couldn't be as good, that you needed to be in a classroom with their faculty to have a great experience. And yet we watched as for-profit higher ed rushed to fill that vacuum, and people forget that at its height the for-profit sector educated about 12% of all American college students. And I thought, A, we could do a better job. We learned a lot from them. I tell people all the time, when the University of Phoenix first came along, they really did good work. They thought hard about what a working adult needs in terms of support and service. They only started to go sideways when they became publicly traded, and quarterly reports and shareholder value trumped good behavior, and then we all know how they went off the rails. So we set out to really look hard at what it would take to grow a nonprofit online institution that was laser focused on that 32 year old with three kids and a dead-end job, maybe some credits. How do they improve their life? And that's really been the driving mission of this institution. We've never wavered from it, even though we're very large now, as you said, Julian. And while we're financially very robust and healthy, we didn't raise our tuition for 10 years. We're still an affordable option. And we also didn't get distracted by chasing status. Sometimes, because we do have a campus, people will say, why don't we do Division 1 sports? That would put us on the map. And it was like, how would that make the life of our students better? Tell me again how that works for us.
Or across higher ed, people will say, we want to get to a place where we can be more selective and get better students. No, I love the students we serve. I don't want better students. I want us to be better for our students. And that's been probably our big driver. So the rest of the story about how we got to this size has lots of good luck and missteps. We didn't know how to scale when we started scaling. In 2012, I think, Babson listed us as number 50 on their list of the 50 largest nonprofit providers of online education. Just three years later, we were number four. And in those three years, we were hiring 40 to 50 people, and every Monday we were onboarding. We had mountains of Dell computers out in the hallways, and more. And God, we had no idea what we were doing. We just broke everything. We had no idea how to scale. So we learned some lessons and we had our stumbles, but I think part of what got us through all of that was this unwavering focus on our mission and doing really good work for our students. So that's really the story of our growth.
Julian: It really is such an amazing story, Paul. The theme of effectively leveraging technology in education has run through this story and contributed so much to the growth, right? As you've said. But reflecting on these last couple of decades, what are the trends and challenges that you've seen as the technology has evolved and ushered in AI?
Paul: So I think, among the challenges we've faced, certainly, is the regulatory framework. We are a heavily regulated industry, and I think regulators have often also shared some skepticism about what was possible in online education. And look, the skeptics weren't wholly wrong 20 years ago. Clay Christensen taught us this lesson about disruptive innovation: it usually isn't as good as the incumbent thing it later comes to replace or compete with. It gets better, though, on a very steep curve. It gets better fast. And we got better fast. We started to harness data and data analytics. Every month we have a kind of open day for visiting institutions. We're very collaborative in terms of people who want to learn what we do. And I would say that the showstopper almost every time is when that visiting team hears how we use data, sees 75 people on our data analytics team, and realizes we measure everything. We measure how long it takes to complete a financial aid package. How long does it take to get someone enrolled? How are they performing in their classes? We monitor every section of every course 24-7. We know when students are stumbling. Eventually, almost always, there'll be a provost or a president who turns to his or her team and says, how come we can't do this? Why don't I get to ask these questions? So I think one of the things we grapple with is really the poor state of data in higher education. At the institutional level, it sits in silos. There's a lot of data collected, but there's not a lot of data analyzed and put to work. And the data, generally speaking, in the industry is the same. When I look at IPEDS, I don't recognize us, me, or you, right? It's weirdly out of date. It's not real time. So I think that's probably our biggest challenge, and related to that would be our legacy systems.
You know, someone said to me recently, because I have worked my whole career in higher ed, that I'm sort of like the goldfish who sees the castle and goes, oh, there it comes again. I just assumed that we were a big lucrative market and people would want to build great technology solutions for us. That's not the case. We are dwarfed by K-12. We are dwarfed by finance. We are dwarfed by healthcare. The legacy systems that we work with in higher education tend not to be the most cutting edge or the most up to date. So I think some of our progress came from leveraging industry leaders, like in CRM technology. We deploy Salesforce, as a lot of people do now. But when we first did it, there was a lot of, what is Salesforce and how do you use it? And by the way, what's a CRM? Those are the things, looking back, that I think we were able to push through and move pretty quickly on. But moving forward, I am in the camp that thinks AI radically changes everything. I think the world is not the same after November 30th, 2022, with the advent of ChatGPT. And while AI has been around for a long time, embedded in our phones and in our GPS and in our back office operations, the idea that any of us can now augment our intelligence with natural language, by just speaking or typing a prompt in English, is revolutionary. So looking ahead, I think there are massive challenges for higher education, because there will be massive challenges for the workforce. I think the workforce will be utterly changed and transformed. And a university's primary job is to prepare people for a world of work. As an idealist, I think it's about a lot of other things as well, but that's its first job. It's why society funds higher education: to prepare people to serve society. We are a knowledge factory in a knowledge economy, and now knowledge has to be fundamentally redefined. By extension, we have to redefine universities.
I've argued elsewhere that all curricula became instantly out of date on November 30th, '22. If you aren't looking at everything you teach and asking, does this major still matter? Would you advise people to go into accounting? I wouldn't. I think those jobs are going to get enormously displaced. Will there be no accountants? No. But I think it might be 10% of the current number, because AI does it so well and faster and smarter. And we can go through field after field. Then, as McKinsey argues, something like 65% of jobs won't go away, but they'll be radically redefined. Well, you'd better start redefining your major then. What are you teaching? And I chuckle when I hear faculty in some instances say, we need to ban the use of ChatGPT in our classes. And I'm like, wow, so you would actually ban the very tool that will get students jobs, right? It's sort of crazy. So yeah, I think it's so exciting. A lot of things have to happen, to go to your technology question. We have a lot of work to do. We're very early. We're not going to figure this out overnight. We have a lot of work to do to understand AI, how to best use it, what kinds of tools. We need to think about our policies. We need to think about things like academic integrity, the use of writing, what counts for knowledge, and we can go deeper on this one if you like. And then we need to think about how we use data as a sector. One of the things that I've been working on with George Siemens, who's joined us as chief scientist on our new venture, is a proposal for a global data consortium. The idea is that you build better AI applications and get new and better insights when you have more data. And as a sector, we're terrible at our data. So could we create a global data consortium where large-scale providers would make their data available?
It would be pooled and protected in a synthetic layer to protect student privacy and individual data, but we could all then have better insights. So the Gates Foundation has given us initial funding on the design. ACE, the American Council on Education, has agreed to house it. It shouldn't live with an institution, right? It needs a neutral home, so ACE is going to house it. And from the initial conversations we've had with various large-scale providers, if everyone joined who says they want to, and the devil's in the details, that's what we're working on now, we would have more than 30 million students represented globally in that data consortium. So I think it's a very powerful idea. And with that, we can start to counter things like algorithmic bias. We can ask different kinds of questions. We can combat cultural hegemony in the data. There are just lots of things that get better when you can do this globally. So yeah, it's an exciting, exciting time. I fret over all the ways we could get it wrong. But I do think there's a version of this that is incredibly optimistic and hopeful. So much of the AI discussion I hear is about real fears, and well-founded fears, by the way, a lot of fear about the ways in which it will disrupt our society and maybe displace human beings in various ways. And I'm looking around and thinking, is this working really well for us? What part of society is working well today? I don't know, maybe human intelligence is overrated, and maybe a little bit of augmentation would be a good thing. I'm not looking forward to our robot overlords, but there is an argument that says, and I know this is grossly oversimplified, but if the fundamental notion of the university and the college major is to answer the question, what do I need to know in order to be an X, then what if you can know everything? What if it's all a prompt away?
So maybe the better question we should be answering is, what do we need to know, or how do we need to think about the world, so that we're better human beings, so there are better societies and better communities? George Siemens argues, and I've come to agree with him, that the fundamental shift facing us, and it's a profoundly existential one for universities, is that we will spend less time worrying about epistemological questions of knowledge, because we're no longer the most powerful knowledge entities on the planet, and maybe a lot more time on ontological questions of being. What does it mean to be in this world? What does it mean to be a good human? And honestly, that would be sort of refreshing to my mind. It might be that we see a decline in status for a lot of STEM and maybe a resurgence of the humanities, looking very different than they do today. I think they're kind of super out of date, but isn't that really what the questions of the humanities are about? So I think it's potentially a super exciting time.
Julian: As you're talking, one of the issues we're dealing with in the workforce world is that there are a lot of very esteemed researchers, for example, looking at taxonomies related to skills. And well, if we have the technology to see those skills and read them all, then maybe our jobs are going to shift a little bit, and it'll have all sorts of implications for the kind of dissertations we write and the kind of research we do. And as you say, maybe we can focus on making this world a better place. Not that we're not doing that already, but really focus our efforts on how we can improve quality of life and work and ensure that more and more people have productive and happy careers.
Paul: If you think about knowledge work, Julian, Stuart Russell is a wonderful computer scientist from Berkeley. If your listeners haven't heard them, the BBC has something called the Reith Lectures, and Stuart Russell delivered four of them in 2021. He makes a wonderful point. He said, if you had told our ancestors that someday in the future, in this thing called a knowledge economy, you'd go to work in a big sealed box called an office building, and then you'd sit in a small glass box called a cubicle and stare at a really little box with a bright thing called a computer screen, and you'd do that all day, they would say, that sounds like hell. Right? The work that's wonderful is, can I be with people, can I be in nature, can I be making a difference in my community? And honestly, we have enormous need for those human jobs. I've argued this in lots of places: we should flood our schools with great teachers and coaches and social workers, right? We should rebuild a mental health care system that is broken. I mean, just decimated. We don't have a health care system in America anymore…mental health care system, excuse me. That's gone. We need to fix our whole notion of criminal justice. System after system in what I call, in my 2022 book Broken, systems of care. Those are all jobs that AI can't and won't do. They may be helped by AI, but these are distinctly human jobs. But if you think about all the jobs I just listed, our society doesn't like to pay very much for them. It tries to have as few of them as possible, and they don't enjoy status or support. That's what I think could shift. I think it will have to shift, because those are the jobs that AI won't do well, the jobs left to us.
And they're actually jobs that bring people an enormous sense of meaning and impact when they're well supported, when you don't burn people out, when you don't mistreat them. Carlota Perez, the economist, talks about this, right? This is one of those moments where a new paradigm-shifting technology comes about. It's usually around for a while in various forms, but it all comes together in a catalytic moment. So AI has been around for decades. ChatGPT is the catalytic moment, and now we're in the middle of a kind of in-between time. And in the in-between time, all bets are off. The world gets thrown into upheaval. And honestly, scarily, sometimes bad things happen. Wars happen, revolutions happen, because all the old assumptions go away. It feels like we're in that moment right now. And then, as she argues, in every case, you come out of it into a kind of golden age. The world just gets demonstrably better. It doesn't feel that way in the middle of the chaos. In the middle of the Industrial Revolution, factories are being built, workers are being displaced, the agrarian culture is sort of going to hell, railroads are coming through, stinking up the place. Go back and read English poetry of the early 1800s and it's a kind of hellish version of things. But then look what happens when you come out of that: you raise the level of society, pull people out of poverty, improve lives. All of this good stuff happens. That's my hope.
Kaitlin: So building upon that point, Paul, that we're in that middle or early phase of this work. From your perspective, what do college presidents, or leaders in higher ed or business more generally, need to consider with AI becoming more mainstream? What does it look like to navigate this moment? From a leadership perspective, what are some recommendations you could provide?
Paul: So one, get educated, right? We all need to learn a whole lot and play with the tools. If you're a college president today who's not using these tools, I think you're remiss. I would encourage all my people, as we have at SNHU, to be playing with these tools. And then, as articulated in a book called Power and Prediction, which is about AI but came out before ChatGPT, and really predicted this in many ways, I think organizations have to approach this in two ways. They have to look at the organization as they know it today, and then they have to think about the ways they can deploy AI to improve what they do. And I would say most are doing that. I see it at SNHU too, all over the place. Everyone's using AI. There are 1.7 billion users of ChatGPT already. I mean, this is crazy. No technology has ever been adopted this fast. And you know, I was in Mexico recently and I was talking to an Uber driver who said his wife is a dentist and he does this on the side, he has various side hustles. But she does TikTok videos, kind of informational videos to support her business, and he writes the scripts using ChatGPT. And I thought, this is frankly a conversation you never would have dreamed of five years ago. Literally a dentist doing TikTok informational videos written and scripted with ChatGPT by her Uber-driving husband. Crazy, right? So everyone's using it. So we need to be using it. And we'll see enormous productivity gains as we deploy what are essentially AI point solutions. Marketing will use it a certain way, and admissions will use it a certain way, and HR will use it a certain way, and they'll use different AI tools depending on what they're trying to do, and they all will see enormous productivity gains. Let me give you one quick example.
Our marketing team of seven people produces these informational videos about attending college, and they help our SEO, search positioning, etc. Prior to ChatGPT, those seven people could produce about 20 short form videos a month. Using ChatGPT today, they produce 150 a month. That's just an enormous leap, right? So we're going to see all of that. But the authors of Power and Prediction, who happen to be three economists from the University of Toronto's Rotman School, also argue that the real power of AI is unleashed when you do a full system redesign. And I would argue that every university needs a sandbox with a small team, not a big team, who are given permission to do a blank-sheet-of-paper reinvention of the work. It could be, how do we reimagine this program in a dramatically different way, this major, or some aspect of the organization? And that's what we're doing with the creation of Human Systems, which is our new public benefit company. So at SNHU, we're doing the first part, point solutions everywhere, massive impact and productivity. But the small team, the merry band of seven people in Human Systems, has been asked: what if you could be unconstrained in your thinking and invent a new model of learning, a new university if you will, that keeps human relationships at the center, so human-centered, but relentlessly uses AI in every way? That's the work that team is setting out to do, and that's the work I'll be doing after I hand over the keys to SNHU on June 30th. So university leaders have to really be getting smart, using the tools, encouraging people to use the tools, challenging their leaders to think about how to deploy AI within their respective areas. That's the point-solution strategy. And then creating enough of a sandbox to have some people really experimenting with more radical rethinking of what they do. Otherwise, and this is Clay Christensen's theory, right?
And Clay very much influenced my thinking. He was a dear friend of 40 years and was on my boards. What he would say is that if you don't want to be disrupted with your incumbent work, do a great job on that, but make sure you have some other group on the boundary rethinking what the next iteration of your organization will be. And the problem is that most universities are built to be self-preserving entities. So the idea of, could we reinvent and disrupt ourselves, is a scary one. It's really hard to do. But we've always had this approach at SNHU: if we don't figure it out, someone else will figure it out and do it to us. So we might as well be agents of our own change and evolution.
Kaitlin: Right. I think therein lies the complexity, right, of where we are now versus what you were saying about the future state of, you know, what are these human focused skills and what does it look like to, you know, pay teachers well or healthcare workers well and what are the in-between steps along the way that get us there? Because societally there's just so much that we'll need to change and evolve in order to get us to that next level. So it feels like we're in that exciting, opportunistic and maybe scary space.
Paul: Completely. And I'm in the camp that thinks we will see, as I said earlier, massive displacement in the workforce. They tend to put it in four buckets. There will be jobs that get obliterated. In other words, you just wouldn't put human beings on them because AI does it so much better and faster. There will be jobs that are untouched. That's the other end of the spectrum. If you're a dancer, you're probably not losing sleep over AI. AI is not going to do that. But then there are two other categories. There will be jobs that get created that don't exist today, or hardly exist. There are actually about 880,000 data science and AI-related job openings right now, and we'll see that go up because everyone's trying to find AI and data scientists. So those will grow. Now, one of the interesting questions is, will AI get really good at doing those jobs such that we're not filling them with humans? That's a legitimate question. That's not facetious. And then there's a category that's probably the bulk of the workforce, which is jobs that stay but look so dramatically different that we have a lot of work to rethink. If I were leading an organization, I'd probably spend a lot of my time on that one. I know I have to hire my data scientists and AI people, but I would challenge my HR department to go through and tell me how these jobs are going to change. I suspect, I believe, that in the future, big statement here, and I'm probably wrong, being a physician will look so dramatically different, and doctors will spend a lot less time on diagnosis and prognosis. What AI is really good at is prediction, what Chris Dede at Harvard would call reckoning. It's way better than us already in area after area. What AI is not good at is judgment.
The example I use is that AI will be much better than your physician at diagnosing your illness, and also at predicting the course of that illness, and also at giving you treatment options. But you don't want AI having a conversation with you at the end of that diagnosis, saying, God forbid, hey Julian, you have to go home and have this tough conversation with your family. Let's talk about that. Or, how do you want to think about quality of life? Or, what are your support systems? Assessing your resilience: how's your emotional resilience, your psychological resilience? Are you alone in the world? Are you part of a tight-knit family and clan that I know will rally around you? Those are distinctly human questions. They are tied to judgment, right? They're tied to things that AI is not good at. And so if you're the dean of a medical school, you should be thinking 10 years out: how does the medical curriculum change? We know today, demonstrably, that AI is better than a radiologist at diagnosing breast cancer, a malignant tumor. But you do not want AI holding your hand and saying, it's okay, we'll get through this together. That ain't making me feel any better, thank you very much. So maybe this allows medical practitioners, health care practitioners, to actually do the thing they're not always great at, which is being human beings. Right? If you look at every survey of patients in every major health care system, the number one complaint is, I want my health care provider to know me. That's a human question. No one's saying, I want my doctor to be a better doctor. They usually say, I want my doctor to be a better human. Stop looking at the laptop and entering data, which serves the system, and ask me about me.
Julian: You know, Paul, I was even thinking back to your earlier framing comments about the challenges at SNHU dealing with legacy systems. And there's a part of me that thinks part of the problem there is similar to the doctors. The engineers aren't exactly trained to be human centric. And maybe we'll have to ditch all those legacy systems, or leapfrog them, and have engineers that are trained to be much more end-user oriented.
Paul: It may not be engineers who are writing our code for us any longer. 46% of all new code going into GitHub is written by AI. But it may be that software engineering becomes a much more human exercise of curating code in ways that put human beings at the heart of things. So we'll see. It's actually a very optimistic and hopeful vision of AI, and my family would accuse me of being neurotically optimistic. So I hope I'm right. I hope I'm right. Because I think it's a small window, and I think we have to get it right. I don't know if you agree with me, but I can see the way we got it horribly, horribly wrong with social media, and we did enormous damage to our children and our young people, everybody, I would argue, and our societal institutions. AI will make social media look like a day at the beach. If you've been tracking the recent election in Pakistan, the way that AI was used, with false avatars, deep fake videos, voice impersonations. Crazy stuff.
Kaitlin: And then what does it look like to educate people to navigate that complexity?
Paul: You know, I've been trying to formulate this framework. I don't have it quite right, but I think we're in store for an enormous number of what I would call flips, where systems that operated very clearly in a binary get flipped on their heads. One is that knowledge jobs, which enjoy so much status and reward in our country, whether in finance or tech or healthcare, all of a sudden aren't worth very much. We won't want to pay very much for them, because the AI system can do them much better, and human jobs become the status jobs, the ones we pay for. Another flip might be, Kaitlin, if you think about journalism and the production of information, writers and editors are kind of the heart of the work. Well, when writing and editing can increasingly be done through AI at a very high level, the value of that work may go down. And in a newsroom, the lowest status job right now is the fact checker, the person who says, this is accurate. In a world where we don't trust anything, veracity checking, if I can use that phrase, may actually go way up in importance. What tools, what knowledge do we have to have? How do we know? How do you build systems that reassure people, so when I look at that story, I have some way of knowing that it's actually accurate and not a creation? And look, Europeans are much more proactively trying to get at this through regulation as well. If you use AI in creating an image, you have to label it as AI created, right? The US typically is more of a Wild West show, so we'll see how we do on that front.
Julian: Well, Paul, the higher ed world is very fortunate to have you, your eternal optimism, and your brain and experience on the task of figuring out how to make this work for higher ed. And I stand behind that. As we wind up our conversation, how can listeners continue to follow what you're doing?
Paul: We've been out there talking a lot about this, and we'll increasingly write about it. Human Systems as an organization, we don't even have a website yet. We're trying not to do what so much of EdTech does, which is make huge pronouncements about what you can do when there isn't actually anything under the surface yet. Instead, we're actively building what we think is a dramatically different kind of learning platform, and we'll unveil that later in 2024. We have a lot of partners lining up to kick the tires on it and give it a try, and I think we have some distinctly different ways of thinking about it. In a world of AI tutors like Khanmigo and others, I think this is a distinctly different alternative, if we get it right. That's a big if, because what we're building is pretty ambitious and hard to do, but it's exciting. So in some ways it's a little bit of a stay tuned. We'll be at ASU+GSV, which, for your listeners who may not know, is a huge ed tech summit that's become a go-to event in April in San Diego. We'll do a little bit of a reveal there, and then by mid fall we will be testing our first commercial products. There'll be a lot to see. We'll see how close we get.
Kaitlin: That's exciting.
Paul: It is exciting. I think the need is enormous, as you well know. We have so many people for whom higher ed is either financially out of reach or just doesn't work for them. And then we widen the aperture of the lens and look globally. I met with the Prime Minister of Rwanda and the Minister of Education in October. Because we are trying to build this global data consortium, they're very eager to participate. When we described how we're thinking about the learning model, what we're trying to build, I was there to persuade them: could we pilot this with a couple hundred users in Rwanda? Because learning practices look different culturally, right? Even soft skills assessment. In a US context, we're trying to train our students to look someone squarely in the eye and give them a firm handshake. In a Rwandan context, that would look disrespectful if the person on the other side is your elder; your gaze is downcast, and in some instances you're almost literally covering your mouth. So we need to learn and integrate all of this. But my point in the story was that the Prime Minister's response was not, yeah, we can give you 200 students. He said, how fast can we have this? We have a national need and we need to roll this out. We can't build enough physical universities fast enough. Which is exciting, right? But it speaks to the scale of the need: by 2050, a quarter of the world's population will be in Africa, and there's an enormous need for education. So we're excited about the prospect.
Kaitlin: Well, this has been an amazing conversation. We really appreciate your time, your thoughtfulness, and the window into the world ahead. For so many of us, the change is upon us and navigating the uncertainty is challenging, but having an optimistic view is important. And it's been really interesting to hear about all of the different resources and materials you've been pulling together to frame your thinking.
Paul: Well, thank you, Kaitlin. I think of Alan Kay's famous line that the best way to predict the future is to invent it, right? Like we can wait, sit, wringing our hands, or we can get busy and try to do the best work we can, shaping it. I know you have two young kids and I am going to have a first grandchild coming in June, which I'm super excited about. I was like, we need a world that's better than the one we're in for them. And AI could make it a lot more complicated and worse, or it could open up dizzying opportunity. But we need to be proactive about shaping the positive and really mitigating against the negative.
Julian: Well, thank you so much, Paul. It's always so wonderful speaking with you. Good luck with the new venture. There are great things in store, and we look forward to tracking you and continuing the conversation.
Paul: Well, thank you. I look forward to doing just that. It's so nice to see both of you again. Thank you for having me. I’m flattered.
Kaitlin: Thank you. That's all we have for you today. Thank you for listening to Work Forces. We hope that you take away nuggets that you can use in your own work. Thank you to our producer, Dustin Ramsdell. Work Forces is available on Apple, Amazon, Google, and Spotify. We hope you will subscribe, like, and share the podcast with your colleagues and friends. If you have interest in sponsoring this podcast, please contact us through the podcast notes.