Editor’s note: This article was not made with the use of AI tools.
CUNY SPS students are cautiously supportive of using generative AI tools for classes, though many expressed worries about academic integrity and data privacy when using artificial intelligence.
During the panel discussion "Let's Talk AI: Student Roundtable," held last week, students voiced their concerns and agreed that CUNY policy should include more specific frameworks for the use of AI in classes.
The roundtable was hosted by Ruru Rusmin, Assistant Dean of Faculty Development and Instructional Technology at CUNY SPS, and Carolee Ramsay, Director of Student Conduct/Academic Integrity at CUNY SPS.
The rise of AI is accelerating, and students and faculty agree it will only become a larger part of daily life. Generative AI chatbot ChatGPT hardly needs an introduction, while other tools like Microsoft Copilot, Midjourney, and DALL-E are rapidly growing in popularity. Meanwhile, higher education is grappling with how these tools will be used by students across the country and around the world, though many institutions' outlooks are optimistic.
Before the roundtable, CUNY SPS Student Life sent out a survey to students to get a sense of how they felt about AI for academic purposes. Although the respondents made up only a small fraction of the student body, the survey offered a glimpse into how students are using this emerging technology.
Out of the 106 respondents, over 60 had no experience using AI in their academic studies, while around 40 had at least some experience, according to the survey. Despite their limited experience, over 50 students said they plan to use AI tools for their academics in the future, while over 45 said they would need some sort of guidance with AI before making a decision. Just under 30 students, however, said they were not interested in using AI tools at all.
The students who did have experience with AI tools used them mainly for finding answers or ideas for discussion prompts and for other assignments, drafting content for written assignments, and creating study and review materials.
“AI seems to be a very popular tool for supporting writing in terms of brainstorming or rephrasing, rewriting, and providing citations,” Rusmin explained. “The other broad area that people responded positively about was tutoring, using AI as a tutor to help review, to help clarify.”
Student opinions
Some students were very supportive of AI in the classroom, as long as students expressed their thoughts in their own words. Others pointed out that AI tools could be especially useful for students with disabilities.
“In terms of having AI in line with Microsoft Office, which is really fantastic, it will help you draft emails,” one student said. “So whether you have a physical disability, cognitive disability, if you are low vision or blind, it will be your best friend in creating all kinds of different ways to write papers, to write emails, and to just make things.”
Others were less supportive of AI after seeing it overused or misused by other students, but were nonetheless curious to learn more about its uses.
“My thing is the clearly lazy usage of it where it’s just like, put the discussion board prompt in, copy and paste to the point where I’ve been in classes where people’s responses literally said, ‘I can’t access this report because I am an AI blah blah blah,’” one student remarked. “It’s lazy and honestly it’s disrespectful to the rest of your classmates.” The student went on to add that AI could be a useful study aid for summarizing texts or tutoring.
Others warned about the larger implications of using AI, such as risks to academic credibility, especially when it isn't clear whether a student has written content using AI tools.
“If you can’t figure out a way to creatively communicate your thoughts, AI can help you do that,” one student said. “But at a point, you can’t use AI for the content of what you need to do because there’s also problems that can come in with credibility.”
“There’s just something missing that is not always clear,” another student added. “There’s no connection between your personality, your personal experience, with what you’re writing [when using AI]. There are no examples for me to be able to digest what you’re communicating. But honestly, at the same time, I would still like to know how it could be useful.”
Another student cautioned against the long-term future of AI in higher education. “The AI issue needs to be broken down into the practical issues about potentially violating school policies like taking away from the collective learning experience by having AI-generated responses and discussion board posts…especially when students are aware of that happening. I’ve experienced that myself. It can be very difficult.
“But there’s also the existential side of it, which is something that we probably can’t handle in this meeting, let alone as a university or as a system,” the student continued. “What it poses for a future society, what it means to be working with AI tools, because I do think that proverbial cat is out of the bag and we’re not putting it back in. Things are going to stay this way and most likely progress.”
CUNY is currently working to add AI-specific language in its academic integrity policy so that students can understand what constitutes an acceptable use of AI and what constitutes a violation, according to Ramsay. For now, the current policy stands, which means that students should ask their professors for authorization to use AI tools or check their syllabus for an AI policy, she added.