With the advent of new, easy-to-use artificial intelligence (AI) tools such as ChatGPT, or even the recently released chatbot on Snapchat, some students and professors are using the programs for academic purposes, like STEM research. However, there is growing concern over AI becoming a tool for academic dishonesty, forcing some to consider what the future of AI in the academic world will look like.
In the fall of 2022, the San Francisco-based company OpenAI released its AI chatbot, ChatGPT. The platform allows users to input questions and prompts and receive answers written in natural language. Unlike many other AI tools, the answers ChatGPT produces are largely original text rather than responses copied from existing sources.
An anonymous computer science junior said he uses ChatGPT to generate code and to sort, filter, transform and format data. He said professors have encouraged him to use the tool in class, and that he has also used it in a biology class to get ideas for responding to challenging questions. The student asked to remain anonymous to avoid retaliation from professors.
“AI is a sleeping giant, [because] while it is in its infancy now, it has the capability to develop and become more and more powerful,” the computer science junior said.
He said current AI platforms have the capability to complete lower-division computer science classes and that “there is nowhere to go but up” with the development of artificial intelligence.
The computer science junior said ChatGPT is helpful for physics because it is good at conceptualizing problems and identifying which calculations are needed, but that it “sucks at math” and cannot execute the calculations itself.
He said he has seen people use ChatGPT in class, as well as on discussion posts without attribution.
It is “obvious” to him when something is AI-generated, even though ChatGPT’s output is “natural language and supposedly undetectable,” he said.
Computer science professor Franz Kurfess regularly encounters ChatGPT as he teaches classes such as Introduction to Artificial Intelligence.
Kurfess said ChatGPT is a useful tool for STEM majors, particularly computer science majors, because it can provide brief summaries of complicated publications.
Kurfess still encourages his students to use ChatGPT for their research papers, as long as they disclose that they used the program to assist in their research.
He said the problem with ChatGPT does not lie with its use; it lies with who gets credit for the responses it produces.
Kurfess said he struggles to grade research projects where ChatGPT was used because he “doesn’t know how much of the good grade to attribute to the use of the program and how much to attribute to the student themself.”
Additionally, Kurfess said that ChatGPT’s responses make it difficult to identify sources without asking the engine hyper-specific questions, which he said could create further issues with attribution in the future.
Kurfess also said that ChatGPT’s responses are often factually incorrect.
Kurfess said ChatGPT can be problematic because of how it operates behind the screen.
ChatGPT does not understand what its responses say the way a human would. Instead, it produces text by predicting what the most likely next word is, based on patterns in the vast amounts of text it was trained on from across the internet.
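For readers curious what “predicting the next word” looks like in practice, the sketch below is a deliberately tiny, hypothetical illustration, not OpenAI’s actual method: ChatGPT relies on a massive neural network, while this toy Python example simply counts which word follows which in a small sample text and picks the most frequent successor.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word tends to
# follow each word in a small sample corpus, then generate text by
# repeatedly choosing the most frequent successor. Real systems like
# ChatGPT use large neural networks trained on internet-scale text,
# but the core task of guessing the next word from context is the same.

corpus = (
    "students use chatgpt to write code . "
    "students use chatgpt to answer questions . "
    "professors use detection tools to catch cheating ."
).split()

# Count how often each word follows each other word (a bigram model).
successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent word observed after `word`."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

def generate(start, max_words=8):
    """Generate text by greedily predicting one word at a time."""
    words = [start]
    for _ in range(max_words):
        nxt = predict_next(words[-1])
        if nxt is None or nxt == ".":
            break
        words.append(nxt)
    return " ".join(words)

print(generate("students"))  # e.g. "students use chatgpt to write code"
```

The gap between this toy model and ChatGPT is enormous, but it shows why such systems can produce fluent text without any human-like understanding of what the words mean.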
Communications associate professor Jnan Blau said he sees Cal Poly’s STEM programs adjusting to the new technology, even expecting that Cal Poly will create a new major dedicated to harnessing the power of artificial intelligence. He said once AI becomes more widely understood and used, STEM programs at Cal Poly will not hesitate to make university-wide accommodations focused on the development and use of artificial intelligence.
Blau said he is concerned that liberal arts majors, who “largely rely on subjectivity for their academic pursuits,” may use the AI tool to complete essays or gather information, violating academic integrity policies.
Ancient history lecturer Jason Linn acknowledged that AI can be “just another tool in the toolbox” and that using ChatGPT for inspiration is similar to brainstorming with a colleague. However, amid what he called an “arms race” in the development of AI technology, he said academic dishonesty violations may become more prevalent.
“With new technologies that students could use to cheat, there will be new platforms that instructors can use to detect that,” Linn said.
Linn said he would put his students’ essays into GPT Zero, a platform that can be used to detect whether artificial intelligence generated a section of text. The computer science department does not use GPT Zero, but instead uses a similar program called “Moss,” developed at Stanford, according to the computer science junior.
“You can’t put the genie back in the bottle,” Linn said of combating the use of AI for academic dishonesty.
Linn said it would be difficult for the university to ban or regulate artificial intelligence use, because “where there is a will, there is a way.”
The Office of Student Rights and Responsibilities, which is responsible for claims of student academic dishonesty at Cal Poly, did not respond to a request for comment.
Linn said another concern is that some people see ChatGPT as a perversion of human intelligence, believing that writing and brainstorming on one’s own helps students better absorb the material.
History freshman Luke Saenz echoed this concern. He said he wouldn’t consider using ChatGPT to complete his essays and written responses because most of what it produces is “word vomit.”
He advised against using the program because professors could tell the writing is not a student’s own based on the responses’ lack of opinion.
“[ChatGPT’s responses] are so committed to being neutral that oftentimes, the program says nothing of value,” Saenz said.
Kurfess suggests that students “make an effort to be educated on artificial intelligence and the way it works to realize that it does not yet have the capability to replicate your own subjectivity.”