LEARN Podcasts

ShiftED Podcast #91 In Conversation with Amanda Bickerstaff: The Pandora's Box Problem - Why AI Literacy Can't Be Optional

LEARN Episode 91


AI is flooding into classrooms faster than educators can keep up — but are we actually building AI literacy, or just assuming it'll happen on its own? In this episode, Chris sits down with Amanda Bickerstaff, founder of AI for Education, to unpack why "they're already using it" isn't good enough. Amanda draws on lessons from digital literacy, the illusion of easy AI tools, and a broken assessment system that generative AI is now exposing. A sharp, honest conversation about what intentional AI literacy really looks like — and why educators can't afford to wait.

Chris Colley

Welcome back, everyone, to another episode of the ShiftED Podcast. Today I have Amanda Bickerstaff, who is an educator and researcher. She's the founder of AI for Education, and that is one of the big reasons why I wanted to have her join us on ShiftED to talk about AI in all its glory, or not. We're gonna analyze all of that today, in 25 minutes, no less. Amanda, thanks so much for joining me today on the ShiftED Podcast.

Amanda Bickerstaff

Happy to be here.

Chris Colley

So, Amanda, I like to start these with a little foundational question: what were some of the occasions that brought you to starting up AI for Educators? Or AI for Education, sorry.

Amanda Bickerstaff

No worries. Yeah. So I have been a teacher, I've been a researcher, like you said. I also have been an ed tech CEO over in Australia. So I've had like every job in education. And my first interaction with generative AI was actually not ChatGPT. I was working with a technical co-founder back in January, February, March of 2023, thinking about something I care a lot about, which is student well-being. And we were thinking about building a transformer model, which is the architecture underneath ChatGPT, that could be used to ask students questions about their well-being and then match them to people. It was kind of like still potentially something that doesn't exist in the world that much, but something that could lead to emotional support from humans, not from the AI. But at the same time, I was like, I need to understand these tools. And so I started using ChatGPT for the first time around the end of March 2023. And it was such a fascinating moment, as someone that's been on all angles of education, how quickly I recognized two things that I talk about a lot. One is that this was and continues to be a transformational technology. Having built technology and knowing how hard it was to do something as simple as a rubric or a lesson plan, and then to be able to do it in like 10 seconds and get an okay base, was something that was just not possible before. And now it's hard for us to remember that, if you've used it at all. And I realized very quickly that it was going to start changing the ways in which we think about learning and teaching. But at the same time, these tools have a trap. The trap is they look incredibly easy.
It's like, oh, you can type or you can speak, you can use generative AI. But it actually requires an enormous amount of both technical knowledge of how it works, mindsets of how to actually use the tools, and then these practices that we really focus on: safe, ethical, and effective practices. But it looks so easy that it kind of does the opposite of what you need it to do. In the best-case scenario, generative AI would say, actually, Chris, I do not have enough information, and ask you questions to get more, instead of just giving you an answer that potentially could be incomplete. So I started AI for Education like that day. I was posting on LinkedIn for the first time in a year. Funnily enough, I'm not really a social media person, but that has changed over the last couple of years. And then I built a website in a weekend with a prompt library. It's very funny, it's still there from the very first day. People have said prompting doesn't matter, but I'm here to say it still does.

Chris Colley

It sure does, yeah.

Amanda Bickerstaff

Yeah, but it will be three years on April 17th. And what's crazy is that we will be releasing the first generative AI literacy framework on like the 10th of April. So, three years later, to be able to take everything we've learned and actually put that into a hopefully really digestible, if not comprehensive, framework has been really exciting.

Chris Colley

Yeah, that's great. I look forward to the release of that. Yeah, AI flooded in there really quickly, and it's like every conference or workshop is embedding it in these kind of novel ways. I think we're putting the cart a little bit before the horse when it comes to AI literacy. I mean, we've had digital literacy, we've had media literacy, those foundational skills that students need to have, and I would multiply that for teachers, who need to have them as well. It seems to be kind of like, yeah, we acknowledge it, but let's get back to the tool. How do we get AI literacy moving forward in more of a systematic way? Because I'm not saying there's no AI literacy going on, but I think it's in small pockets, and it's kind of like everybody's job and nobody's job within a school curriculum. How do you see AI literacy integrating better into a school? And does that touch on your framework a little bit as well? Is there something within the framework that helps establish AI literacy within a school?

Amanda Bickerstaff

Well, I think the first thing is going back to that trap. One of the reasons why we don't see this intentional AI literacy is that we just think people are using it and that somehow that leads to literacy. And it is the same thing that happened with digital literacy: but kids have devices, they use it more than we do; my teachers are using it. There are these kind of straw-man arguments that are like, but it's already happening. But what we have seen from digital literacy is that without intent, right, without intentional digital literacy, students and young people have not been safe online. They fall for internet scams. It's changed their relationship to themselves and each other, their mental health and self-image. It's been something that has already impacted their academic work and socialization. So I think that is the really big issue here: people are just thinking, oh, it's already happening. You know, I really do not love some of the analogies out there, but: Pandora's box is open, what could we possibly do? And so the tools themselves, especially gen AI application layers, meaning the ed tech tools or maybe design tools built on top of a foundation model like Gemini or OpenAI's ChatGPT, almost make it seem easier, that we just need the tools and the tools themselves are the goal.

Chris Colley

And they'll fix the other stuff.

Amanda Bickerstaff

Yeah, or that the only purpose of this is to have efficiency or new things, instead of understanding that this actually needs to change the ways in which we think about teaching and learning. So the way that we've set up AI literacy is that we do this kind of try-it-first approach. And the thing we have found that is quite interesting, because we don't want to blow anybody away immediately, is that we do a demo, a 10-minute think-aloud demo on technical AI literacy. We show things like the fact that generative AI tools do not have a database, they are always making things up and predicting, that they are creative engines, they've been designed to be creative, that they're sycophantic, they will say yes pretty much no matter what, and also that they hallucinate and can make mistakes. And we do that in 10 minutes. And I'm telling you right now, no one will ever look at generative AI the same way without thinking, you know what, I cannot just trust these tools. And so the way our framework is set out is that it's knowledge, mindsets, and practices. The knowledge itself: training datasets, the fact that AI isn't new, that it's predicting, that even choices like calling something a "thinking model" are choices that do not actually reflect the way these tools work. And the reason these tools always answer, even when incorrect, is because that's how they've been trained, right? It's better to be pleasing than to be accurate. So then what we do is we look at that, but that leads to these mindsets about intentionality, right? Being responsible for the work that you do; critical evaluation is an enormous part of this. These are mindsets that have to shift the ways in which we both engage directly with these tools and engage with what's been developed from them. And then the final thing is to build out what it means to be safe personally with these tools.
What does it mean to be ethical externally, which is, I think, a lot of where the focus of AI literacy sits: just don't cheat. And then, how is it effective? This is the thing. Generative AI is the first tool where it really makes an enormous difference how the user interacts with it. Even thinking about traditional coding, right, there's a lot that goes into who's able to be the best coder. But when it comes down to it, if you're really good at asking questions, giving feedback, analyzing, being creative, you can do so much more with the same model than someone that doesn't know those things. And so that user ability, that user impact, especially for non-technical people, is enormous, and something that we do not think gets a lot of attention in AI literacy.

Chris Colley

Yeah, absolutely. I've been kind of dabbling with some custom agents within our system to support teachers. And I am starting to notice the thought process and the creativity and all of those amazing things that we want to develop, I'm having to access those all the time. And getting user feedback has been super helpful, because you need to know how they're gonna interact with it, you know? It's important, even if it comes down to something looking just too busy, having to acknowledge those kinds of things. If we're shifting the focus away from tools and more toward thought and creativity when it comes to using AI, how do we start teaching that? What are some solutions or remedies to getting that mindset shift to start to happen? And you said a great quote, and this might be paraphrasing, but you had mentioned that it's very hard to have a functioning ecosystem for students if you don't have a functioning ecosystem for teachers. How do we start shifting teacher mindset in the hope that that will influence student mindset? It's not about the tool, you know, it's really about that AI literacy. How do teachers get that entry point so that they can start to feel like they're teaching, you know, about literacy and awareness and all the great things too, creativity around using it and stuff like that?

Amanda Bickerstaff

Well, I think the first thing is it needs to be made a priority by those that are in positions of leadership. This is something that should be a national, an international priority. AI literacy, generative AI literacy specifically, should be a goal of all leaders, in all schools, and for all stakeholders. And a lot of that comes down to the leaders themselves not having generative AI literacy and not having space or time themselves. What we've seen to be the most effective is to uncover how these tools work and where they are. But we really focus on foundation models. So if we're gonna use it, we're gonna go right to the source, we're gonna go into the foundation model chatbots, your Claudes, your Geminis, your ChatGPTs, because that's what powers every application layer. If we can show you how those tools work, that's where there's the most creativity and of course potentially the most risk, but that's where that balance comes into play, really understanding the two sides of this coin, right? Both the potential for benefit and the potential for harm. And so what we do is we really want people to understand and to have time to use the tools. And the thing is, even in our AI literacy framework, we actually say this is a unique time where teachers and students can learn together. If I were you, Chris, if you're building those agents, get in front of that classroom and say, hey, I was building this agent, and I noticed that I did not double-check my work because I felt like it had been so accurate. Or, look at this lesson plan, look at how I changed it. Do we actually think this is better or not? How could I have prompted it differently? What about this opportunity?
But modeling the actual practices, the lens, the thinking through this, is going to allow the teacher to get more confident in why they're actually doing what they're doing, but also to model those important skills around transparency, around evaluation, around the need to always have authority and agency in the process. And what we see is actually very interesting: our leadership PD versus our teacher PD versus our student PD, that kind of foundational generative AI work, is like 98% the same. It doesn't matter the audience; the use cases, the applications, the way we talk about integrity might shift. The way we talk about the importance of leaders leaning in might shift. But I think this is a really unique time. These tools work a certain way, and it isn't necessarily that we need to be like, oh, but kids know, right? If you're a high school student, you're potentially gonna find this on your own at home, you're gonna use this in your social-emotional life, your entertainment, your academic pursuits. I would much rather, as a teacher, a leader, a school, be a part of the process of creating that best set of practices moving forward. And what we see is that the gap between teacher and student AI literacy that is being done in structured ways is enormous. And it's why we actually released a free student course. We never planned to get into the student AI literacy game until the end of last year, just because we did not see it happening at scale almost anywhere in the US, and also in Canada.

Chris Colley

Yeah. Oh, totally, totally. It kind of feels a little bit like the Wild West of AI, you know, where everybody's kind of doing their own thing. And you're so right, it's common, right? Like when, for example, social media came out, it affected kids in Australia the same way it affected kids in Canada and the US. These are global issues that all kids will encounter if they're interacting with these things. The same thing's gonna happen with AI. I mean, you could almost sign off on it, right? If we don't start making stuff universal... What, in your opinion, is that disconnect? I know that policies tend to be very local, but if you're looking countrywide, or in our case province-wide, there's some kind of thread that ties it all together, right? So at least you have the same starting spot where you go from there. As long as you have that base of some kind of literacy, of ethical use and, you know, safe use. Why the disconnect, Amanda? It baffles me, from the past until the present, and I'm still trying to figure it out. It can be easier than this, it seems, in my opinion.

Amanda Bickerstaff

I think it's the fragmentation of the education system. I mean, I was in Australia, and we work with Canadian schools, and, you know, in Western contexts there's less central control. If you look at countries that are doing more around this, they have more federal control, right? You look at places like Singapore, Korea, and some other places that are doing more, Estonia, for example. I think the issue that we're finding, and we continue to find, is that our education systems are fragmented in terms of regulation and control, for all kinds of reasons. And our systems are also quite rigid. And so those two things hit against each other. I joke that the thing the education system is best at is not changing. Our systems are incredibly rigid and hard to move and change, even at the local level. And with something like this, you cannot discount how little people know at any level of power. I would love to have every parliamentarian sit with me and do the same training I do with 15-year-olds and educators, but that is not really happening as much as it should. And so there is a disconnect here. And I think what ends up happening is you think, oh, AI skills, we're gonna build AI skills, we're gonna build an AI workforce, without realizing that these tools are going to actually change what those skills even are going forward. It so systematically needs to make us rethink what the skills of the future are. What are our uniquely human skills? Where are the places in which, even if AI can do things, we decide not to have it do them? Those are incredibly large things for us to have to think through.
There was a paper that came out today, or this week, about what they termed cognitive surrender. AI looks so authoritative that it becomes a third system of thinking; people believe that they have evaluated its output because it seemed so good, and it's rewiring our brains in a way that means even if a kid wants to be evaluative, their brain says, no, this format, this confirmation bias, means that I believe this. And I think that until people understand that and wrap their heads around it, in some cases this doesn't feel like a priority. In some cases, it feels like Gartner's hype cycle: oh, the AI bubble is gonna pop. Well, no, it's up again right now, because agents came out, right? And they're working. But there are those types of things that I think make this incredibly difficult. And I will say, though, it has only been three years. And I have seen at least a shift in some ground-level understanding of the importance. There are many, many school districts, many, many different colleges that are prioritizing some of this. It is just not enough yet, at this point. And I'm gonna say that very strongly. It's just not enough.

Chris Colley

Yeah, no, I totally agree with you. And I worry about our kids. We saw the worry at the onset of social media that was unfiltered, and we're starting to see those vestiges come back again through AI and how kids are using it. Because, yeah, I love what you said too, that we have this inherent need to want to trust technology, right? Like, oh no, it's technology, don't worry about it. They thought about it, it has its thing.

Amanda Bickerstaff

Right, yeah, like, people pay for it.

Chris Colley

That's right, that's right. And I guess with AI too, whenever these tipping points happen, teachers react in certain ways. And I think with AI, it's been, you know, either all in or hot and cold, all great or all negative. But one thing it did, I find, is that it started to expose the system and how we assess kids. AI was kind of seen as cheating because it cheated the system that was set in place, and, well, how dare you? Seeing as AI has now exposed assessment's shortcomings a little bit, how do you think assessment can start to adapt, so that teachers can, I don't want to say AI-proof, but can shift their evaluations or tweak them? I mean, yeah, if you need to go and get some foundational stuff, great; it's like going and using Google. How do you think assessment can be reimagined in the age of AI?

Amanda Bickerstaff

Well, I think the first thing is we need to just be really honest that there is not a traditional assessment in the world of any type that is AI-resistant. It doesn't matter if it's the MCAT, the GRE, a high school essay, a science project. We have to kind of move past this idea of AI-resistant. The only things that are truly AI-resistant are ones in which you grade the process. You provide space for students to dive in and have to interrogate, to collaborate, to create. There need to be opportunities in which young people are building durable skills that are going to matter for the future. Are we actually building systems in which students are agile learners? And how are we even doing that when most of our assessments are one-and-done? And if you don't do very well, that's it, you know? We have essentially shifted you off of our high-achieving landscape into remediation, which often tends to have its own impacts. I think that assessment has been broken for a long time for most kids. For some kids, you know what, it hasn't, right? For some high-achieving kids that are great at tests, it's been great. But realistically, we know this system is broken. I talk about generative AI in the sense that it feels like it's exposed cracks. But what it's actually done is hit the cracks that were already there and started to push over those walls, right? It's not like there were stable foundations of assessment before; it is just like a bulldozer knocking down these broken walls that were already there, held together by these constructs, right? This importance of high-stakes testing and other types of things.
And so when we talk about the need for this work, we have to actually be honest about the systems we have created and the ones we need to create. We have a piece called Beyond the AI Inflection Point. It's a futurist piece that came out in January, a narrative about a fictional school district. It's in the US, but it could be anywhere really; I'm sure you would recognize yourself if you're in a school in Canada or Australia or the UK as well. And what it does is it posits that this change is really hard. Even with guidelines and AI literacy and even an innovative school approach, there are so many complexities. The risk we have today is not just not changing, but either re-entrenching in traditional assessment, or going all in on AI, where we go to the other extreme and AI is everything, versus this kind of messy middle in which we actually look at the structures and really have deep, evidence-based conversations about what works and what doesn't work for young people, and create flexible systems. There has never been a time when education has needed to be as flexible as it is today, even compared with COVID. This is something that is going to have so much more of an impact. So we need to start building flexibility into our systems, whether it's assessment or not. And unfortunately, at the stage we're at right now, the resistance to AI in general from many quarters makes those types of conversations almost impossible to have in some areas.

Chris Colley

Interesting. Interesting. Well, Amanda, this has been a great talk. Thank you for sharing your insight and very thoughtful reflections on, I mean, things that we need to just keep talking about. I think that's the thrust of this: we can't just, la-di-da, move on. It's a shifting point in our history that's gonna affect everything from here on out. So those conversations are super important to have. And I'm glad that you're getting your voice out there for teachers to hear, because it is something that will not go away. And if we don't address it, in the end our kids get the short end, unfortunately. But we can change all that, people. We can change all that. And I'm leaving with tons of optimism and hope, Amanda, because I truly believe that things will start to shift as long as we keep it at the forefront of what we're doing. Awesome. Well, thank you so much. Check Amanda's site out, AI for Education. It's got great resources, and we look forward to your new documentation and your framework coming out. You said April, April tenth, yeah?

Amanda Bickerstaff

So the AI literacy framework will be out on April 10th.

Chris Colley

Awesome. Well, thanks so much, Amanda. It's been great.

Amanda Bickerstaff

Thank you, Chris.