Those wanting to know what ChatGPT does can ask the artificial intelligence itself.
The Nov. 30, 2022, arrival of ChatGPT, an AI program developed by AI research and development company OpenAI to generate human-like dialogue, has opened a new era of human and AI interaction. Yet it also raises concerns about increased cheating and academic misconduct.
According to OpenAI's website, the AI's conversational format enables it to answer follow-up questions, admit mistakes and challenge incorrect premises.
Huan Sun, a professor of computer science and engineering, said one of the goals of conversational AI is to mimic natural language and conversation.
“Conversational AI is an area where researchers and developers create systems that can have conversations or dialogues with humans to guide them through certain tasks,” Sun said.
Sun said this guidance could be as simple as cooking soup but could become as complex as learning to code.
“You can ask all kinds of questions, and it will adjust the plan or adjust the steps for you to still make sure that you can finish the task successfully,” Sun said.
Kui Xie, a professor of educational psychology and learning technologies, said ChatGPT “harvests” big data — large, fast-growing sets of information — from the internet to generate its responses. Although big data is not a new concept, the way ChatGPT uses it is, he said.
“I think why it suddenly draws our attention is the capacity of generating human writing, human language,” Xie said. “And in some sense, that has changed the routine of our academia.”
However, concerns have circulated that the AI could compromise academic work, including reports of the platform being used to pass law and business school exams and to write essays and assignments.
To deter plagiarism, Xie said, educators can develop strategies to combat improper use, such as converting take-home assignments into group work.
“Instructors may think about, ‘How do [we] design assessment processes or strategies meant to reduce this type of cheating?’” Xie said.
Sun said she recommends clarifying what constitutes academic dishonesty.
“I think what we educators need to do is to make really clear the expectations of the students on what they can or cannot use,” Sun said. “You need to clarify the expectations and also inspire the students to follow the rules.”
According to the Office of Academic Integrity and Misconduct, the university’s Code of Student Conduct defines academic misconduct as “any activity that tends to compromise the academic integrity of the University, or subvert the educational process.” Many behaviors fall under academic dishonesty, the website states, such as knowingly providing or using unapproved assistance for coursework.
Meanwhile, Xie said ChatGPT presents many positive learning opportunities.
“Maybe students can also learn from this tool,” Xie said. “Maybe you can write a draft first and then compare it to what has been generated by the AI. Maybe there’s something to learn about it.”
Sun said students can use ChatGPT and similar technologies to augment their work by asking the tool to critique an existing thesis.
Sun said many open-ended questions remain. The model is not perfect and will sometimes generate incorrect, unsubstantiated or biased responses.
“It will continue to impress us,” Sun said. “But also on the other hand, I think researchers in the community are actively improving, or at least discussing, how to improve the existing issues and mitigate the harm to the society.”
Sun and Xie said they look forward to the future of human and AI interaction.
“The goal is not to replace human cognition in the equation,” Xie said. “It’s to support, it’s to facilitate [learning].”