In the Loop

Teaching in the AI era

Episode Summary

How are faculty navigating teaching with AI? We continue our conversation on AI with Professors Karen Reid and Daniel Zingaro in a roundtable discussion.

Episode Notes

[02:02] How are faculty approaching teaching with genAI?
[08:43] How do students build the skills to direct AI effectively?
[13:51] Students want to learn; my job is to provide the right opportunities
[24:36] The definition of learning is enduring
[27:14] Importance of teaching metacognition
[30:48] Shifting course outcomes
[36:14] Advice for students

Credits

Audio editing done by the University of Toronto's Arts and Science Digital Teaching & Learning Studio, located in the Sidney Smith Building at the St. George Campus.

Episode Transcription

Introduction

Mario: Hello listeners! In Episode 1, we heard from students about how they’re using AI in their learning and coursework. I thought they were creative in their uses, and insightful about possible risks.

Diane: I agree Mario! And in today’s episode, we sit down for a roundtable discussion with two faculty who are experts on teaching and learning with AI.

Diane: Karen Reid is a Professor, Teaching Stream in Computer Science at the University of Toronto.

Diane: She focuses on teaching systems courses, and has also taught hundreds of students about building software for use in the real world, through both large open-source software systems and software tools for teaching.

Diane: She has won numerous teaching awards, including the University of Toronto's highest honour for teaching, the President's Teaching Award.

Diane: And very relevant to this episode, Karen served on the University of Toronto Task Force on Artificial Intelligence, co-chairing the working group that looked at AI's impact on Teaching & Learning.

Diane: The task force's report on the AI-ready university was recently released.

Diane: Dan Zingaro is an Associate Professor, Teaching Stream in the Department of Mathematical and Computational Sciences at the University of Toronto, Mississauga.

Diane: He is an award-winning teacher, most recently winning a national award, the CS-Can|Info-Can Excellence in Teaching Award.

Diane: He is also an accomplished researcher in CS education, and author of multiple textbooks.

Diane: Recently, his research has focused on teaching with AI, and he co-authored the textbook "Learn AI-Assisted Python Programming".

Diane: He is an in-demand speaker on how AI is impacting teaching and learning.

Diane: We are so fortunate to have Karen and Dan as our guests today!

Karen: Hi, I'm Karen Reid. I'm a professor in the teaching stream in the Department of Computer Science here at the St. George campus.

Dan: What's up, folks? I'm Dan Zingaro, I'm a teaching stream professor at the University of Toronto Mississauga.

Mario: All right. Great.

How are faculty approaching teaching with genAI?

Diane: All right, I'm going to kick it off with a question for both of you. There's a huge range of approaches faculty are taking to teaching with generative AI. And I wonder what you're seeing and what are some of the approaches that you find to be effective?

Karen: I think we're in the early stages of all of us figuring out how to teach with genAI effectively. And so I think we see a lot of differences in how people are using it, everything from "please don't use it at all" to fully embracing the technology and using it for all aspects of the course. I think the thing that people are worrying about mostly is how do we ensure that students are learning, or how do students themselves know that they're actually learning the things that we think are important for them to learn?

Mario: Are those things changing, those things that we believe are important to learn?

Karen: I think they will shift over time. But so far, I'm still of the opinion that people need a good foundation of knowledge in order to make the best use of these tools. That's not particularly grounded in research at this point, but I'm not hearing anybody telling us which concepts we can now abandon completely, because genAI will just do it for us.

Dan: It's been fun for me to watch how professors have been changing their opinions and what they expect students to do. I know it's only been a couple of years, but I subscribe to an email list with a bunch of CS educators. And for the first year or so, it was very divided: some people were embracing the new tools, and others were trying to come up with more and more clever ways of tricking the AI.

Dan: And I think it goes without saying, but a lot of these approaches also trick students, right? For example, there were proposals to make assignments extremely convoluted so that the AI would get confused, but like, yo, students are going to get confused too.

Dan: But more recently, the discussion on the email lists has calmed down to the point where I think professors are starting to maybe come to grips with the fact that it cannot be ignored.

Dan: I think sometimes there are changes to technologies where we can pretend that they don't really influence academics that much, and occasionally that may be the right thing to do. But I think for this one, it's irrevocably changing how our courses are being taught. And from what I'm seeing, I think professors are understanding that at this point. And now it's just a matter of how exactly they want to adapt.

Diane: And are you suggesting, Dan, that it's hopeless to create assessments that AI can't solve?

Dan: I think so, especially because if they can't solve it today, they will probably be able to solve it tomorrow.

Dan: And the rate at which these models are advancing... I've had people two years ago telling me that we'd reached the maximum of what these models can do, and they've been extremely wrong. And I don't know if anybody knows what the maximum these models can do is. I mean, already it's an incredible programmer.

Dan: I use AI daily at this point to do, you know, programming with me. And it's already at that point where it's very, very impressive. I can only imagine what the next model is going to be. And I don't know if there's a cap to this, but whatever the cap is, it's super high.

Karen: I also think, philosophically, trying to trick the AI or trying to construct problems that the AI can't solve isn't a pedagogical approach that helps students learn anything. It becomes an arms race, and it's about the product at the end of the day, not the process and not the actual learning that's happening.

Mario: Well, is it- So I know lots of faculty now are, you know, trying to familiarize themselves with AI, but they're also trying to teach their students maybe how to use AI, or when to use AI. But is that based off of their own anecdotes, their own understanding of AI, or is it, as you're saying, rooted in pedagogy? Like, how do we know when we should use AI in our classrooms?

Karen: Well, I think it's a learning process. I don't think we know yet. And because that research is ongoing and we haven't come to any conclusions yet, we're all experimenting and trying to figure out what those approaches are.

Mario: So nobody has an answer?

Karen: No. Well, not that I've seen so far.

Diane: We've got to try stuff, see what works.

Karen: I do think one of the biggest things I'm seeing now is that students really want some guidance in how to use it, and that makes it challenging for faculty who didn't grow up learning how to program with these tools. It kind of challenges us to go out and use them ourselves, so that we've got the experience and can combine that experience with the knowledge we've built up over the years to be able to provide some guidance for students going forward.

Dan: I just want to support something that Karen said. Please play with it. Play with the AI. If you're an instructor or a teacher, I don't think there's a more important thing you can do right now for your students than to just open it up. You have no idea what the AI can do, or the ways that our students are using it, unless you start playing around with it.

Dan: For example, until a couple months ago, I used to think that our students were using code completion, like in Visual Studio Code, for example. I thought that they were starting to type the code for a function, and then the AI completes the function, and then they press Tab to accept the code. I thought that that was their workflow.

Dan: But in recent research that we've been doing, students don't do this. Students just use the chat interface; they tend to use the chat to ask for what they want more than starting to write the first line of a function, for example. Or they'll use these agentic tools where the AI works in a loop: it produces code, it runs it against test cases.

Dan: If the test cases fail, it makes changes, and it keeps going and going and doing this, and it will do it for maybe half an hour or something until it gets the code perfect. And these are workflows I did not know the students were using. So the students are not just using a single tool; they're using tools that we haven't even heard of.
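[Editor's note: for readers who want a concrete picture of the agentic loop Dan describes, here is a minimal Python sketch. The helpers ask_model and run_tests are hypothetical stand-ins for a real LLM API call and a real test harness; the generate-test-fix loop structure is the point, not any particular product.]

    # Minimal sketch of an agentic coding loop (ask_model and run_tests
    # are hypothetical stand-ins, not a real API).
    def agentic_solve(task, tests, max_rounds=10):
        code = ask_model("Write Python code for: " + task)  # first draft
        for _ in range(max_rounds):
            failures = run_tests(code, tests)  # run the suite, collect failures
            if not failures:
                return code                    # all tests pass: done
            code = ask_model(                  # feed the failures back
                "This code:\n" + code +
                "\nfails these tests:\n" + str(failures) + "\nFix it."
            )
        return code                            # best effort after max_rounds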

Dan: And I think one of the most important things we can do, it's a lot to ask, but I think it's important to just see what the newest tools are because it's changing so quickly. I don't think we can afford to not know.

How do students build the skills to direct AI effectively?

Karen: I think what I saw last year was students were afraid to tell their professors that they were using AI, because there's still so much fear around academic integrity and accusations of cheating. What I'm hoping we're going to see next year, and I'm going to try to foster in my own classes, is more of a collaboration between students and faculty, so that I can learn from my students how they're using these AI tools and then help guide them in using them effectively, so that they have some control over their own learning.

Karen: So it comes back to what are they really learning in the end, not what have they produced at the end of the day, although that's important too.

Mario: Like, I have a feeling that, you know, fostering this understanding of how to use AI is very important for what they're going to be doing after they get their degree, right? Because it's not like industry is going to say, no, we're not using this because we don't want productivity, right?

Diane: For sure.

Karen: I think so. In talking with some students this past year, one thing I heard was students who were afraid that how they were using AI was getting in the way of learning what they needed to learn, what they wanted to learn. And so I think this is a place where I'm hoping we can work with the students and each other to develop our new pedagogies, to figure out how to be really transparent with students and how to help them understand that they're achieving their learning goals.

Dan: Mario, guidelines are starting to come out from companies now. I don't know if you all saw the recent guidelines from Google. There was a post a little while ago, and overall what it says is that the direction given to Google's software engineers is that they are expected to be using it. It's pretty much what it says.

Dan: It says, for purposes of productivity, you're expected to be using it. So I think every single employee is expected to figure out how to use it. And this is early still; it's just going to keep going. Our students are going to be entering a workforce where, if they can't use it, I don't know if they're going to get hired.

Karen: And I think the real concern from our perspective is, how do students build up the skills that they need to be able to use these tools as effectively as possible? Because they still make mistakes, and they still are going to go in a direction that maybe wasn't what you intended originally. And so somehow you still need the ability and the knowledge and the skills to be able to direct these tools to go in the direction you want them to, and to use them as a collaborator rather than letting them rule your life, I guess is the way I want to say it.

Diane: To still keep oversight on what you're getting back.

Karen: Well, I think if we don't keep oversight on what these tools are producing, then we'll just have exponentially more crappy code that doesn't do exactly what we want it to do, and we'll all be drowning in software that doesn't quite work correctly.

Mario: And aren't we already?

Karen: I said exponentially more.

Diane: It could be worse.

Dan: The ways that AI code fails are very different from how human code fails, too. If we know that a person wrote the code, I feel like we're pretty good at coming up with test cases that might make it fail. Like, I think all of us in this room know how to break our students' code. For example, you know, try zero, try negative one.

Dan: But AI code, I don't think we understand fully when it fails or how it fails. And it might look like there's nothing wrong with it. It's very easy to be tricked, right, by code that looks good but is hiding some bug that we might not otherwise catch.
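[Editor's note: a toy illustration of the "try zero, try negative one" probes Dan mentions. The average function is invented for this example; its bug hides at exactly the kind of boundary those probes are meant to hit.]

    # A typical small student function with a boundary bug.
    def average(numbers):
        return sum(numbers) / len(numbers)  # crashes on an empty list

    assert average([2, 4]) == 3.0   # ordinary case: passes
    assert average([0]) == 0.0      # "try zero": passes
    assert average([-1]) == -1.0    # "try negative one": passes
    # average([])  # the edge case: raises ZeroDivisionError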

Mario: We're talking about how the process is important, whether that process be like, learning something new or whether that process be learning how to use AI. But at the end of the day, a lot of our assessment now is disconnected because the assessment is based on the product, not the process.

Mario: So how do we assess process?

Karen: I think that's the challenge. And because we've all been working in the environment we're in for long enough, it takes a mindset shift to be able to think in those directions. One thing I did this year that worked to some degree was I asked students to write descriptions of test cases in English, not code, for the ordinary cases.

Karen: So I wanted them to be able to express as test cases what the program was supposed to do, and a lot of students struggled with that. Some students asked ChatGPT to do it for them. But it gave the students a way, I think, to check their own understanding, to check whether they really were able to express what the program was supposed to be doing.

Karen: We'll see if we can continue that going forward or what other approaches we'll need to adopt.
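[Editor's note: a hypothetical illustration of the exercise Karen describes, using an invented function that returns the longest word in a sentence. Students would write the English descriptions first; runnable checks come later, if at all.]

    # Step 1: describe the ordinary test cases in English, no code.
    #   "A sentence with words of different lengths returns the longest word."
    #   "If two words tie for longest, the first one is returned."
    #   "A one-word sentence returns that word."

    # Step 2 (later): the descriptions become runnable checks.
    def longest_word(sentence):
        return max(sentence.split(), key=len)  # max keeps the first of any tie

    assert longest_word("we all love operating systems") == "operating"
    assert longest_word("quick brown fox") == "quick"  # tie: quick vs. brown
    assert longest_word("hi") == "hi"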

Students want to learn; my job is to provide the right opportunities

Karen: I like to come at this from the point of view that students genuinely want to learn the material and they want to build up their skills. They want to become good programmers. They want to have a good understanding of how databases or operating systems or data structures all work and all fit together. And so part of the way I'm trying to think about it is that it's on me to figure out how to present them with tasks, and to present them with opportunities to stretch that understanding and to build those skills. And even if AI can do parts of it, the students who really care about their learning will get some evidence that they're actually making progress. And that's a little different than just running automated tests on the code that they produced. And so I'm trying to figure out ways of getting at that.

Karen: I have a bit of an analogy that I don't know whether it'll work, but every year I publish past tests for students to use for practice. I never publish the solutions to those tests.

Karen: And students always ask me for them. And I explain to them that the reason why I don't give them the solutions is because it's really easy to just look at a solution and say, oh yeah, I get it. But if you got exactly the same question on the test two days later, you wouldn't know how to do it, because you didn't really understand it.

Karen: But your brain kind of tricked you into thinking that you understood it. And I feel the same way about what's generated by ChatGPT sometimes. It's really easy to read a solution produced by ChatGPT and say, oh yeah, I get it. Like, I see what it's doing, now I understand this algorithm, now I understand how to use this system call. But at the end of the day, that's not the evidence that they need to really, genuinely understand it.

Karen: And so the challenge for us is to come up with strategies and tasks and exercises so that the students really can feel like they're making that kind of progress.

Mario: So I think that's interesting, because we actually sent out a survey to some students to see who might want to be on this episode, and some of the strategies they described for how they use AI were to get it to describe or explain certain concepts to them. But that, to me, sounds kind of like just reading the notes again.

Mario: Right? Like, it's not as effective a learning tool as actually trying out an exercise or trying out a problem.

Karen: I agree. I mean, part of the problem is that learning is actually uncomfortable, because it means admitting that you don't know how to do something and working through the struggle to figure out how to do it, and then it becomes easy. So it's no surprise to me that students, like all of us, want to take some shortcuts. But learning is always going to be a struggle.

Dan: I mean, this, to me, is maybe the most difficult challenge we're going to have to figure out as professors, and it's what Karen just said: before, we used to be able to withhold resources for the benefit of students. There's this thing in education called desirable difficulties, where you want students to be struggling. I mean, you don't want them to be struggling for hours.

Dan: You know, but you want them to be struggling for the right amount of time. And before, we could just say, well, we're not giving you the solutions for the midterm, so go figure it out. But now... I work in a basement office at home, and I keep my chocolate, like, two stories up. And the reason is because if I kept the chocolate at my desk, I would eat the whole bowl.

Dan: And I think my biggest worry right now for AI is that the chocolate is right next to them, and it's irresistible. And so it's very difficult, especially when you're confronted with all sorts of deadlines and stress about getting into your program. And you're working a job to help your family pay for the tuition. Students are under immense stress right now.

Dan: Are you kidding me? If I was a student right now, I would be AI-ing everything. And so I don't place the blame on students if they're using AI inappropriately. I think they've found a shortcut and they're going to use it. And they would have done that before too, except that it wasn't that easy to get solutions for midterms or assignments.

Dan: Somehow it was maybe easier to just do the practice yourself. But now it's definitely not, right? You have to exercise a lot of cognitive control to do that when you have the answer literally right next to you. And it's an extremely difficult thing for humans to do.

Diane: And it's not just cognitive. It's affective. It's about your feelings, like feeling stress, feeling worry. And yeah, you've got to fight that too. You've got to build the motivation to do the hard, hard work.

Karen: And sometimes that's the right call. Like, sometimes getting a night's sleep is more important than struggling through a calculus problem or a programming problem.

Diane: How do we teach students? This is the question, how do we teach them to know the difference? And how do we motivate them to actually do the hard work at the appropriate time? How do we create those circumstances?

Karen: So this is going to sound either radical or silly, but I actually think that the more we can get students working together, like physically together, talking to each other, wrestling with ideas, that's both fun and productive.

Diane: Okay. So that comes up against another worry I have in the genAI era, which is how to get students to come to class. Because we are all people who are passionate about teaching, and we try to create those really wonderful circumstances in the classroom for light bulb moments and connection and community and all of that beautiful stuff.

Diane: But if you can just, you know, use genAI to help you with every single thing, including the learning in the first place, why go to class? How do we get them to come?

Mario: For me, it comes back to access, and lectures, in my opinion, have never really been accessible. They're a little bit more accessible now, because I'm able to provide recordings and stuff like that, but that's asynchronous. Now it's like, should I be live streaming my lectures? And that kind of, you know, detracts from the "I want everybody in the same room to learn together" type of thing.

Mario: So I don't know, there's always been that tension for me. And it's not really about AI; it's just about, you know, people don't want to commute an hour and a half to the university for, I don't know, however many lectures. Like, that's a long commute there and back every day.

Dan: I mean, students do what they think is optimal, right? So maybe it's optimal not to go to class. That's certainly a possibility, especially if we don't offer anything that they can't otherwise get, right? Like, we're no longer the sole source of information for students. We have to, I mean, you know, we have to swallow that pill, right? It's not an easy one, but it's true.

Karen: It also means we have to rethink the assumptions we've made about what education really means. And I think we've used a lot of things, like attendance in class, and assignments that are really hard or that are going to take you a long time to do, as ways to implicitly achieve some of the things that we want students to achieve. I'm trying to avoid using the word "coerced".

Diane: Reward. We're trying to reward them for these things.

Karen: But effectively, the reason why we've asked students to write essays and write large programs and do big assignments and come to class is because there are aspects of those that contribute to their learning that aren't written down as part of the goals of the course. And I think one of the things we need to do is be much more explicit with students about what's beneficial about coming to class. Right now we just say, oh, we think you should come to class because it's a better way to learn.

Karen: It's like, well, why is it a better way to learn? And for whom is it a better way to learn? And maybe I'm naive to hope that some of that will resonate with at least some of the students and will inspire them to come to class. But I think that's also motivational in terms of getting students to do some of the work that we all need to do to be able to learn something or to build a new skill.

Dan: And office hours too, eh Karen?

Karen: Oh, I know.

Dan: Attendance is way down in office hours. And again, like, what are we offering them, right? Like, if they can get answers from ChatGPT, then maybe they're getting the answers that they think they need. Now, what they think they need might be different from what we think. And that's, again, what worries me about the whole chocolate thing. It's like, yeah, they're going to get an immediate answer from ChatGPT that helps them right now.

Dan: I had this experience recently. I don't know if this is a sidebar or something, but I recently wanted to understand this probability thing, and I used ChatGPT to understand it. And it took maybe five minutes, and I was pretty happy. I thought, oh, like, it would have taken me hours before. I would have had to ask somebody who knows. This stuff is amazing.

Dan: And then, like two days later, I found myself looking it up again because I didn't remember. Like, I remember the feeling of knowing it, and then two days later I was like, what was it again? And then I had to go back to ChatGPT and look at it again. And you know what? Honestly, if I tried to tell you what it was right now, I'd probably have to go back and look it up a third time, because I'm not remembering it.

Dan: So it feels good in the moment, like the endorphins are kicking in and stuff, but I don't remember this thing, and this is not an isolated occurrence. Like, I've noticed this when I learn something from it. And this is from a person who apparently knows how to learn things, right? I was, you know, in school for a long time. And students don't even have this advantage, I guess, that we all have in this room.

Dan: Right. But it's just so easy to be content with learning something. Whereas if I had learned it the prior way, if I had talked to, you know, one of my statistics colleagues: I went to their office, you know, I took time out of my day and I went to their office, and I felt a little guilty about wasting half an hour of their time, and, you know, I brought them like a cupcake or something, and we had a little chat about it. I'm never forgetting that, right? Like, whatever they taught me that day, I'm never forgetting it. But when I learn it with ChatGPT, I have my music on, I have my email open.

Diane: It's handed to you on a silver platter.

Dan: Yeah.

Diane: You didn't have to wrestle with it.

The definition of learning is enduring

Dan: It's not enduring, and the definition of learning is enduring. Like, if somebody shows you something and you do it and you forget it five seconds later, it's not learning, right? Learning means you know it for an extended period of time, and no one agrees on what that period is, whether it's a week or a month or a year or whatever.

Dan: But it has to be enduring. And one of the worries I have is that it's so easy that it slides into your brain and slides out, and then you don't know it. At the same time, though, if you need it back, you can get it. So I've really been wondering, like, what level of learning is required?

Dan: Is it because I just love learning that I want to know this? Or is it, you know, something that I really do need to know? Like, if I really have to know this probability fact, I can just go look it up again. What's so wrong with that?

Diane: This is so juicy. This - Oh go ahead Karen.

Karen: That's what I was going to say. It comes back to what you were saying right at the beginning, about whether students need to know how to write a loop anymore, or can we just ask ChatGPT to do it for us. And I think what we don't know yet is: what are the fundamental pieces of programming, of computer science, of any domain you pick, that we really need to have at our fingertips cognitively to be able to do the tasks that we want to do in our lives, to be able to work a job effectively, or to carry out some activity that we want to participate in?

Diane: Yeah, some things it's no trouble if you have to look it up again because you only need to access it rarely. But if it's something that you need to access ten times an hour to do the kinds of thinking and work you want to do, then it's very unproductive to have to look it up over and over.

Dan: You know, have you all heard the expression, you need to know the thing, or you need to know someone else who knows the thing? Yeah. Now we know someone else who knows everything.

Karen: Yeah, but we can't be 100% sure that they're right.

Diane: Mmhmm.

Karen: I think we're more inclined to believe things that are said with a sound of authority, which is how the AI tools currently speak. They don't give us an indication of how much they believe in what they're saying. Whereas if we're having a conversation and somebody says, are you really sure, we're much more likely to back off and say, oh, well, I feel pretty sure, but maybe there's a chance I could be wrong.

Mario: You don't get body language from ChatGPT?

Karen: Yeah.

Mario: Right.

Karen: Well, the body language we get is, "Certainly, I can give you this answer."

Importance of teaching metacognition

Diane: Can we back up to something Dan raised and broaden it out? It's really about metacognition. It's about knowing how memory works, knowing how learning works, knowing what you've learned and what you haven't learned, and so on. And I noticed, Karen, that this came up in the recommendations of the AI task force that you worked on this year.

Diane: And recently the report came out, and it talked about the importance of teaching students metacognitive skills. I'm wondering if we could get specific about that. How do we get students, how do we give them the tools, so that they can know what they've learned, or know how to assess where they're at?

Karen: I think some of it is making things that we've always thought of as implicit, explicit. So when we ask students a question on a test, most of the time there's a reason behind it. Like, we're thinking about a particular learning outcome we're testing; we want students to demonstrate a particular skill. But we don't always tell students that that's what these questions are all about.

Karen: And so I think students often feel like these are just hurdles to jump over, rather than unpacking what's going on behind them. And so I think some of the skills we want to develop are in ourselves as teachers: being much more explicit about how the tasks, the activities, the exam questions, the oral tests or oral interviews if we go in that direction, how those speak to the metacognition, to the skills that we want students to learn.

Mario: When you're saying those things, right? Like, we got to, like, teach them metacognition. We got to figure out how to make them understand that these questions were asked for a reason. But at the end of the day, like, it comes down to grades, right? So if you're, if you're not giving them grades for developing these metacognitive skills, then they might not do it or they probably won't do it because they have enough things on their plate.

Mario: Right? So now all of a sudden I have a 15% midterm and then, I don't know, I have another 5% assessment on their metacognitive skills reflecting on that midterm, I'm not sure. But then you could just ask AI to do that as well. So, I'm at a loss.

Karen: So I agree, except that we've given students tests before. I mean, tests serve two purposes. They serve the purpose of assessing, of evaluating a student's progress or a student's ability. And they're also formative, in the sense that students are learning while they're doing those tests.

Karen: And I'm optimistic that we may not need to shift the tests or the activities as much as we think, or as much as we're afraid we might need to, if the students are more aware of the value of them. Because in my experience, students are quite willing to do things that they think are valuable to them, that they believe will help them build their skills and prepare them for their future life as researchers or software developers or whatever path they choose to take.

Karen: And so if they see the things we ask them to do as part of the course, the things they get grades for, as something that will push them in that direction and not just hurdles to jump over, then I'm hopeful that they will buy into it more.

Shifting course outcomes

Dan: This is why, yeah Karen, I agree, this is why I think our learning outcomes do have to change. Because if they don't, then students will not be motivated to do the stuff that AI can already do. For example, if we focus on syntax in CSC108, then students are not going to be motivated to learn it, because the AI gets syntax right every time.

Dan: So I do think this is a major shift in the outcomes. I don't know if it changes outcomes for every course, but I think it dramatically changes them for some courses, just so that we can have students do more with the new tools that are available. Like, this is on the scale of students getting computers in their homes, or the internet becoming available.

Dan: That's what I'm talking about. Like, that's the extent to which I think things are about to change. Or when compilers came into existence: maybe before compilers existed, we would have students write a loop with, you know, x86 branching instructions. And we don't do that.

Diane: I did that.

Dan: We don't do that now, because, why would you? What motivation could you possibly have?

Mario: Uh, we still do.

Karen: Well, another example is in an operating systems class, we often ask students to write a substantial amount of code, or grapple with a 100,000-line code base as their starter code, because that helps them practice their basic programming skills. It helps build their understanding of taking a whole large code base and trying to make sense of it, and to organize it in their brains as a coherent piece, or to drill down and find the part of the code that they're actually going to need to change to solve a particular problem.

Karen: And we never really told them explicitly that these are the things you're going to learn by grappling with this great big code base. And now ChatGPT comes out, or genAI comes along, and tells them, oh, here's exactly what you need to do in five minutes, and it can solve problems that involve big code bases in a relatively small amount of time.

Karen: We either need to admit that students no longer need to build those skills. Or, if we discover, as I think we probably will, that students still need some of those skills, because they're still going to be taking maybe larger pieces of code and trying to figure out how they all fit together, then we need to figure out different ways of teaching that and making those explicit learning goals.

Karen: And maybe that does come down to having a conversation with students, or having group projects where students describe to each other how a code base works, or what steps you would follow to debug this problem. There's all kinds of things in there that I think we could do.

Diane: Could we boil this down to introductory programming? Dan, I know you've taught the course using AI from day one. And you've also written a book that teaches it that way. And you gave a webinar with your colleague Leo Porter that practically broke the internet, talking about how to do that. So, can you tell us what you learned from that, and how did you change your learning outcomes?

Dan: So the learning outcomes for intro programming, for me and Leo, have changed quite a bit. They are much more focused on large programs. We have students building programs the size of which has never been seen, to my knowledge, in past CS 1s; they're building large projects. There are three projects in our CS 1. If you were to go back in time like five years and listen to this podcast, and I told you that my intro students were doing three huge projects...

Dan: They were making a complete data science analysis, they were doing an image manipulation project, and they were making a complete game. You would, you know, walk me out the door. That's way too much for a first-year course. But we can do it now. And students are doing incredible projects, and they're super motivated to do these projects.

Dan: They're allowed to use AI as much as they want, and they are making RPG video games, like, they're making actual games. They're doing data analysis on data that they care about, about things that are interesting to them or their families or communities. So we've really tried to broaden the course, to get away from syntax and these little programs like: how many vowels does the string have?

Dan: Like, no one cares how many vowels a string has. We love those questions because they help students learn the basics of programming. But now we can't have students engage with these little functions. We used to do it not because we thought they were interesting, we knew they weren't, but because it forced students to engage with these concepts.

Dan: But now they don't have to. So, in my opinion, we need to change the outcomes of the course to keep students interested, to make the course relevant for them. We definitely don't have too many answers for intro CS, and we have even fewer answers for what the follow-on courses look like. Like, what should a data structures course look like? I have really no idea right now. So that's what we're currently thinking through.
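[Editor's note: for readers outside CS, the vowel-counting exercise Dan mentions above looks something like this. Nobody needs the answer; the value was always the practice with loops and conditionals.]

    def count_vowels(s):
        count = 0
        for ch in s.lower():   # practice with loops...
            if ch in "aeiou":  # ...and with conditionals
                count += 1
        return count

    assert count_vowels("Education") == 5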

Advice for students

Karen: So my advice for students is to use AI to help you learn, not to do your work for you.

Karen: So I think it's a great idea to use these tools to explain something you didn't quite get from the lecture, or to ask questions and iterate over a problem, or to create new study materials for yourself to help you study for a test, even if we're not quite as up to date as Dan's class in building interesting programs using AI.

Karen: And be honest with yourself about what you're actually learning. And when you're unsure, go ask somebody about it. Go ask your prof, or go ask your TA, or talk to your fellow students about it.

Mario: Dan, that was plenty of time to think.

Dan: I'm still thinking. Currently I'm doing deep research; I'm synthesizing a bunch of websites.

Dan: Try to determine or ask your professor why you're doing specific tasks.

Dan: It's not always clear what the purpose of the work in a course is. So you have a specific assignment. Generally, students don't know what purpose the instructor has in mind for the assignment. The purpose is typically not for you to do the assignment and submit it. That's what they're asking you to do because they need to ask you to do something.

Dan: But typically there's a bigger purpose for the assignment: what do they want you to get out of it? I think if you have a sense of the goal beyond just submitting the thing and getting a good grade, if you have a sense of the goal for the assignment or the project, I think that might help you calibrate how to use AI.

Diane: Thank you both so much. This was a really invigorating conversation. We have so much more to think about.

Mario: Yeah. Thank you guys.

Karen: Definitely.

Dan: Thanks. Thanks everyone.

Karen: Thanks. It was fun.

Key takeaways and the future of AI in education

Diane: I hope our listeners find it interesting to hear what’s on our minds as faculty dealing with this AI revolution.

Mario: Yeah I mean, everything’s in flux and we’re trying to find our way through. I think it’s clear that we need to revisit our learning outcomes and make some that may have been implicit, explicit. In the past, we might have said that after completing this assignment, you’ll know how to do X and Y. But now we have to make explicit other things that you will have acquired through having done certain tasks yourself.

Diane: Yes, so that students will know what skills or knowledge they can gain from wrestling with the work directly. They also need to know how to tell whether or not they have acquired those skills or that knowledge. And that was the other big theme that came through for me in our discussion: the importance of meta-cognition. We need to think hard about how to support students in developing meta-cognitive skills.

Mario: From the student point of view, the takeaways are, first, to be aware of the learning outcomes, of why you are being asked to do the tasks on your assignment. What are you supposed to gain from that? And if it's not clear to you, ask. Second, try to have that meta-level awareness of your learning. How do I know I understood this thing that AI helped me with? How can I challenge that understanding to see if it's really there?

Diane: Yeah, and the students we interviewed in Episode 1 were clearly thinking along these lines. As the AI revolution continues, students will need to continue building these learning skills, and faculty like us will need to continue building their teaching skills to support this.

Mario: We live in exciting times.

Diane: Indeed we do.

Mario: Okay, that’s it for this episode. I’m Mario Badr.

Diane: And I’m Diane Horton

Both: And you are In the Loop.