Episode Transcript
[00:00:00] Speaker A: Welcome to FHSU Tilt Talk, a podcast about educational technologies, teaching and learning, scholarly research and service, hosted by teaching innovation and learning technology staff.
Welcome to our special series where we dive into generative AI at Fort Hays State University.
In this series, we interview faculty and staff about their work with generative AI, exploring how they've integrated it into their teaching and research.
I'm Madeline Moyne. I am the AI task force chair at Fort Hays State University. I'll let Dr. Anderson introduce himself in a second.
One thing we haven't done a lot of yet is talk with students about their experiences with AI. We've been focusing a lot on faculty and staff, and you may have experienced some faculty fear, and "fear" is probably the right word, around AI. I work in Teaching Innovation and Learning Technologies, our teaching and learning center at Fort Hays, so I don't interface a lot with students.
But our hope for this next year is to get the student perspective: what is your experience with AI? What do you think is good about AI? What are some of your fears about AI? This is the first one we've done with students, so we're really grateful that Professor Sharon would let us come talk to you, and hopefully you'll give us some insightful information back. We will try to start doing trainings for students next year. One of the things we're looking at doing is a UNIV 101 module on AI, so that everyone comes in and gets a little bit of background on it. That's been met with some fear from faculty as well. So a lot of what we're doing is education, hopefully trying to get people over some of that fear. We're not trying to force AI on people; we just believe that it's here and it's going to stay, and those who want to take advantage of it, or learn how to use it in their own next steps professionally, should have that opportunity. That's what we're trying to do.
Would you like to introduce yourself, Gary?
[00:02:19] Speaker B: Sure.
Hi, I'm Gary Anderson, and I am an associate professor in the Advanced Education Department. We work with the grad students, and I am the coordinator of the Transition to Teaching program, an alternative licensure program in the state of Kansas, and possibly the largest.
So I have an interest in AI, use it myself a bit, and am exploring it.
I also have an interest in what students are doing with it these days. And so welcome to our podcast. I'd like to hear some of you talk about your introduction to generative AI. I know it's been around since it kind of made a big splash last spring, but what was your first encounter with AI and what were the circumstances and what did you think when you first saw it?
[00:03:27] Speaker A: Okay, honestly, my first encounter with it was on Snapchat.
It was just when I got the new update and it said, you know, AI, and it was an accident. I accidentally snapped the AI person, or robot, whatever, and it kind of freaked me out because it responded immediately. I think I had my hair kind of crazy, like it is right now, and it was like, nice hair. And I was like, this is weird. I never wanted to do that again; it just made me feel really weird. But I can see how people could use that if they were wanting to talk to someone.
I don't know.
That was my approach.
[00:04:08] Speaker B: That's cool.
Somebody else.
[00:04:12] Speaker A: My first experience with AI was also probably on Snapchat. However, I personally did not allow it to communicate with me, because you have to accept that, and it goes into all your different information. I was like, no, that's weird. So I did not accept any of that, and it didn't let me chat with it or anything. At least that's how it was on my Snapchat; I don't know if it was that way on anyone else's, but it told me I had to accept something to be able to communicate with it.
And so that was technically my first experience. But my first actual usage of AI was probably for homework. Honestly, it was a really good use for answering some of the questions I had and helping me understand things on a deeper basis, so I like that aspect. But the Snapchat aspect, that was really creepy and weird. It made me feel really uncomfortable.
[00:05:18] Speaker B: Sounds like Snapchat is the winner so far. Anybody else?
[00:05:25] Speaker A: I heard about it in class for the first time, because the teacher generated parent emails and then showed them to our class.
I do want to kind of follow up on the homework piece. I think that's where a lot of the fear is. A lot of faculty don't know. What is your interaction so far with Fort Hays faculty and AI use in coursework, or question generation, or just, I need some more help around this topic?
I've had teachers who do both. Some say, oh, you can use it if you have a question or you need help, and I've had teachers say, oh no, if you use this, you literally fail. So I think it depends on the teacher that you have. But from my experience, it's like what Dayton said: if you have a question and you want further knowledge on what you're learning, it's a good thing. But at the same time, yeah, you also have that fear that if you use it, you're going to get docked, or you're going to get accused that you.
[00:06:46] Speaker C: Plagiarized or something like that.
[00:06:49] Speaker A: It sounded like there may have been a shift from fall to spring semester, even within a semester.
Does anyone want to speak more about that? Maybe personal experience within a classroom where you see that change?
I feel like a lot of the AI usage I see in my education classes is around lesson plan development. They don't want you to have to stare at your blank screen waiting for an idea to come up; you can just go on AI and have it create a lesson plan for you. But most of the time they're pretty vague and nonspecific to your classroom, so you're able to get an idea of what you could teach or how you could teach it, and then modify it to your specific environment.
I would like to believe that education majors, myself and Gary included, since we both work in education, are maybe a little bit more open minded. Obviously, I've worked in higher ed for a long time, so that's probably really biased.
Have you seen that in other departments, with faculty who are maybe not education professors?
So I took a statistics class two semesters ago. I obviously had to take that class for my education degree, but I don't necessarily think his stance was based on it being an education requirement; it was just a stats class. I would definitely say he was super against it, but I can see why. Because if we just went to AI and said, solve this problem for me, we could maybe see the way it solved it, but we wouldn't be doing the work to get the equation. So I would definitely say he was very against it and definitely did not want us to use it. But to a certain extent, I did kind of understand why, because we weren't necessarily learning it and doing it ourselves. We were mainly just getting the answers and seeing them, which is a helpful thing, but we're never going to learn if we don't actually do it ourselves.
So you're all going into a classroom, right? K-12.
Have you heard about any uses in the K-12 setting, or any fears around that?
I would say last year, when I went to a professional development, there's MagicSchool, I think, that they use, and they were basically telling us that it could be a good thing to help. You don't even have to think about what you're teaching; just type it in and it'll give you everything. I feel like that's good and bad. It's a great start for an idea, but you shouldn't let an AI do your teaching.
[00:09:44] Speaker C: At my internship this semester, I've noticed teachers using a lot more technology-based resources that help students with their practice. The AI of it would kind of see what subjects or what areas they were needing more assistance in, and then it would build practice sections around those ideas for the student. But I haven't seen or heard a lot about students directly using AI the same way we do, or the way I would think we do, in high school or college.
[00:10:20] Speaker B: I'm just curious.
A couple things. One: on a scale of one to ten, with ten being the highest, how do you rate generative AI as a tool?
That's the usefulness of it for you personally and professionally. I'm just curious, do you see the potential for it or not?
[00:10:55] Speaker A: To be honest, I'd probably rate it an eight or nine out of ten, if it's used correctly. I know when I went in for a couple of interviews a couple months ago, interviewing out of high school, they asked me what my thoughts were on AI and if I would let my kids use it in my classroom. I told them I would, and they kind of looked at me with their mouths wide open. I think a lot of it just has to do with something new, technology-wise, being introduced to the world: we're terrified of it. We don't know how it works, we don't know what it is, and we don't know what it's supposed to do. But once we start to figure that thing out, we start to use it as a tool that helps us. I mean, I was on Facebook the other day and I saw a recipe for something I wanted to make, and I used AI to figure out what ingredients I needed off of a picture. Then it gave me directions on how to make it, and I went and bought the stuff, and it was great. But I think it's really helpful if you use it correctly, to generate ideas, and not just take what it spits out and copy and paste. There still needs to be a part of you in what's being generated.
I think that's a really interesting point. A lot of it is, you know, what's the purpose of higher education, or education in general, if our goal isn't to have you learn something? Since you're all education majors, I'm interested to know: what would you want faculty to know about it? What could encourage them, if we think it's useful for some purpose?
What might you say to them to encourage them to either facilitate it in a class or at least be less scared for students to use it?
I would encourage them in the sense that it's a piece of technology, and we are moving into a more technologically advanced world. So give us the knowledge and the experience to be able to use it, so that we know how to use it correctly when we get out into the field. We kind of have to jump in and get over those fears because, ready or not, it's already part of the world, and we have to be prepared for it.
So we have examples of students using AI to completely write a paper.
I'm not saying any of you have done that, but certainly my department deals with how to react and respond to this. We have had hearing boards for people who allegedly have used AI to write their entire paper. So one of the things we're looking to do is encourage students not to do that, but also encourage faculty to incorporate AI into some part of their lessons, so that it's very clear where to draw the line between using it for brainstorming versus using it to write a full paper. Right?
Certainly, English composition and history classes are really interested in where this line might be. So I'm interested to hear your thoughts on where that line might be for you all. How much would you use AI to help you brainstorm or come up with ideas? Can it help you revise your papers? Where should that line be, from your perspectives? And would you like your faculty members, your professors, to tell you, this is exactly how I would like you to use it? Would that be helpful for you all in your educational journey?
I feel like if you give students access to AI, even if they on accident ask AI to write them an essay, once you read that essay, it's kind of already ingrained into you. And I feel like that's what would make it difficult. I feel like the line would be, you can use AI to help you create an outline, or students can use Grammarly to help them write their essays, as opposed to an AI essay generator. I feel like Grammarly would be fine that way; they're still generating their own ideas. Otherwise, you'll probably have to have them write their entire essay on paper.
And there are classes doing that. There are blue books out in this building right now being written in, on actual paper. I think that's a response.
That's my perspective, at least. Is there a response back here?
A great outline is how you control it in a classroom.
[00:15:28] Speaker B: Can I follow up on that question about boundaries?
So I'm thinking about, okay, let's drill down a little bit.
Yeah. There's controversy over AI. Some people never want to see it used ever.
Other people are wide open: it's the greatest. But if we're going to make a decision about this as a university, as faculty members, as students, we almost have to get to a philosophical level about what we want AI to do for us or not do for us.
And so let me ask you to probe yourselves. What is that boundary about? I mean, the boundary of whether it's cheating or not cheating, the boundary of we should use it for this, or we shouldn't use it for something else. What's underneath that?
What do you think should be the basis on which we make decisions about whether or not to use AI in certain circumstances?
How do we make that decision?
It's almost an ethical dilemma. What do you think?
[00:16:53] Speaker A: I just feel like you can't really put restrictions on it. You can always tell your students not to use it for certain things, but in all reality, they're still going to use it the way they want. So I think that's the difficult part. Just remind them it should be for ideas, and not for, maybe, assignments that just have, like, three questions. But I feel like, at the end of it, no restrictions can be put on it, because it is an AI. Everyone has access to it, and they'll use it as they will.
I think something that can be used as a good boundary, I guess, for teachers is having expectations and not being so easygoing about AI. I have a teacher this semester, and there have been a lot of questions I've had to ask her, like, I don't know how to create something like this, or I don't necessarily know how to write something like that. And she's like, okay, well, write out a simple sentence, be easy on yourself, and type it into AI and ask it to write it professionally. Which is, like, great, thank you. But it's also, okay, it's still not my own words; I'm just formulating an easygoing sentence and making it look more professional by typing it into AI. And with her saying that, it makes it more okay for me to go ahead and continue to do that. If it's laid back on the teacher's end, then it's laid back on me, and I feel like it's fine to just do whatever I want. Whereas if the teacher is more consistent, saying just use it to help generate ideas, that's different; I don't think there's anything wrong in it helping polish a sentence. But if it's so laid back on the teacher's end, students aren't going to have enough respect for themselves and their words to want to put forth the effort. So it has to be professional on the teacher's end as well, consistently, for it to be professional on the student's part.
That professionalism is a really interesting point. One of the things we did this spring was an AI institute for faculty and staff, where we had a student panel and asked them, how do you feel about professors using AI for grading papers?
We got into a little bit of a tiff, I guess, where students were really upset to think that their professors are maybe doing less of the work that you're paying them to do and relying on it more. So, any thoughts around where you think the line should be for faculty members using AI when it comes to grading things?
I feel like I'm here to learn. So if I have a teacher who's trying to teach me a curriculum and different information, I kind of expect them to put forth the effort and give me feedback on what I'm doing correctly and what I'm not doing correctly. And I don't think that AI can do that. AI is not, well, AI is not, like, here; it can't see the effort put into something, see that a student is putting in a lot of effort even if it's not technically right. So I just think that in a lot of areas, even if I get a bad grade or I'm not doing well, I want that physical teacher who is teaching me to be the one to grade it. There are other areas where I think it's fine for the teacher to use AI, whether that's finding lessons or information to teach with, I guess. But when it comes to grading, I kind of expect my teachers to put forth that effort, because I'm here as a student learning, and I want the most accurate response from the teacher I'm working with.
[00:20:54] Speaker C: I feel like AI is very black and white. So really it depends upon what they are grading. If it's something like an essay, then I'd say they shouldn't be able to grade it, because the AI doesn't see that. But if it's math equations, with a right or wrong answer, then I can see them grading that.
But for a lot of things, the logic and such, teachers have to see the student who is doing the work.
[00:21:26] Speaker A: There's a creative process with the things that we make. I don't know. When I put in a lot of effort, and I know I might have messed up on a few things, and then I'm looking in the gradebook and see it's a perfect score, I'm like, immediately, they did not look at it. That's almost frustrating to me, because I'm like, okay, I spent a lot of time on this and you're just going to go through and give everyone hundreds? What are we here for, you know? Feedback matters. When you get personal feedback from a teacher, it means a lot, and you can tell when it's personal and they know you. That's just something AI can't replace.
I feel like this might be controversial, but if they're expecting us not to use it through the entire thing, then we should expect them not to use it to grade, either. I feel like it should be a two-way street. It's important for them to give us feedback, because at the end of the day, what you remember the most is the feedback that the teacher gave you. If you're just getting it from an AI the entire year, then what was the point of putting in all that effort? So I feel like it should be a two-way street.
[00:22:37] Speaker C: The other thing I think about is that if the instructor is just using AI to grade, it doesn't seem like they would be looking at themselves and knowing where their weaknesses are in teaching, or how they might restructure the course or anything. So to me, it's not just important that we as students are getting that feedback from the teacher. Being an educator is an ongoing process, where you learn to adapt and gain experience and learn what you need to do to make your class better for your students. I think that if an instructor is just using AI to grade and give feedback, and they're not taking the time to look at it themselves and see the patterns among the students of which subject areas are weaker, then they're not going to be able to fix issues they may have had in their instruction or their planning for that course.
So it matters not only for the students in that course, but for future courses and that instructor's ability to teach the subject overall.
[00:23:49] Speaker B: Just curious, does anybody have a greatest hope or greatest fear about AI?
[00:24:04] Speaker A: My fear is that it will get to a point where there are kids who don't know how to write a paper or don't know how to do a simple math equation, because they can simply just get on their computer.
If it gets to that point, what's it going to come to? They still have to know how to do these sorts of things, because if something ever does happen to it, we're going to be some sort of mindless society with no idea what to do.
And you're kind of already seeing it with kids today.
Well, thank you all so much for letting me and Gary come, you know, invade your classroom and take over for a little bit. Hopefully you'll see some of our work in the future. We will be pushing out policies in the fall. Well, "policies" is not the right word; guidelines for faculty and students is kind of more where we're shifting our focus. We will be doing listening sessions in the fall, so as we push out these guidelines, if you all want to give us more feedback, you're welcome to come find us. But I really appreciate it. Thank you so much.
[00:25:19] Speaker B: Thank you.
[00:25:21] Speaker A: Thanks, Gary.
Thank you for listening to this episode of FHSU Tilt Talk. Subscribe on Spotify and Amazon, and check us out on the Tiger Learn blog or the TILT social media pages for updates. We'll see you next time.