“I was hoping to retire before AI became disruptive,” said James Foster with a chuckle. As a professor of computer science at Walla Walla University, he has spent the last 50 years listening to the industry predict the replacement of humans with AI, and he is finally witnessing its revolution in education.
The introduction of programs such as ChatGPT, Google Bard, and Microsoft’s Bing AI assistant has brought major changes to every field, particularly academia.
Within computer science, programmers must still understand the user’s needs and envision a solution, even if AI is now responsible for writing the code. As Foster put it, “While AI and programming tools are getting dramatically better, humans seem to be as difficult to understand as they have been for thousands of years.” 
For Jerry Hartman, a professor of film and communications, understanding how humans interact with AI may be more important than its capabilities. “It’s a really clear point of something new, like the printing press, affecting how we communicate, and as a society we have to figure out what that becomes.” 
Hartman is concerned with what the software already tells us about ourselves. He noted that in his own testing, image generation programs like DALL-E and Midjourney demonstrated clear biases regarding race and gender. This was reflected in a series of images he generated for this article, which predominantly featured white, male figures. How these problems will be addressed remains to be seen, but standards are actively being created.
On October 30, President Biden issued an executive order on AI safety. According to the statement released by the White House, “The Executive Order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.” 
Within the University, these standards were recently addressed in the Academic Integrity Policy. The new policy, which was voted into effect on October 20, addresses many complex questions of authorship and originality. “We understand those questions are not resolved, probably not likely to be resolved, but the principle is still important to us,” said Cynthia Westerbeck, chair of the English department and one of several faculty and staff who drafted its overhaul.
The policy now states, “Central to any scholarly endeavor is a commitment to seek the highest quality information, verify the accuracy of that information, and acknowledge the source.”  Further on, AI is mentioned directly: “Tools such as generative AI or problem-solving software should only be used if explicitly permitted for a particular assignment or class and must always be properly acknowledged.” 
Source accuracy and citation have been the primary concerns in the field of history. AI is known to invent information and sources when prompted to write a scholarly paper, which has led Professor Hillary Dickerson to forbid the use of AI in her classes to maintain standards of scholarship.
As Dickerson described it, “For students to be meeting what we would consider the basics of good historical scholarship means that they’re doing their own reading, they’re doing their own analysis, they’re doing their own research.”
Dickerson and Westerbeck both expressed frustration with the effect the programs have had on students, particularly the ways they undermine students’ effort and participation in scholarship. As Westerbeck put it, “We believe to the core of our very being that the act of writing for ourselves is how we think, and if you give that over to something else […] we’re losing something so incredibly important.”
But AI has had positive impacts in a few departments. Brian Hartman, a professor of science education at Walla Walla University, encouraged students to familiarize themselves with the technology and its capabilities. “As opposed to other departments that have policies about what you can’t do with it,” he said, “our strategy has been to teach students how to use it properly.” 
Because teachers’ worksheets are not published, the use of AI in education comes with far fewer problems. It is standard practice in the field to share resources within the community of professionals, particularly online, and these resources all fall under fair use. An AI pulling information from across the web is merely a streamlined version of the work teachers already do. “We encourage students to use AI, but we just want them to cite it,” said Hartman.
Acknowledgement of AI-generated content was a central theme among the professors interviewed, and as societal expectations such as this develop, many more changes are likely to follow. Students can expect to see a shift toward more old-school methods of evaluation, such as handwritten essays and oral exams, and teachers can expect continued conversations over AI’s use.
Whatever happens, one thing remains constant: humans are as complicated as ever.
Interview with James Foster, 10/31/2023.
Interview with Jerry Hartman, 10/31/2023.
Fact sheet: President Biden issues executive order on safe, secure, and trustworthy artificial intelligence. (2023, October 30). The White House. https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
Interview with Cynthia Westerbeck, 10/31/2023.
Academic integrity policy. (2023, October 20). Walla Walla University. https://www.wallawalla.edu/academics/academic-administration/academic-policies/academic-integrity-policy
Interview with Hillary Dickerson, 11/02/2023.
Interview with Brian Hartman, 10/31/2023.
AI and Academia primary photo. Photo generated through Midjourney, courtesy of Jerry Hartman.
AI and Academia secondary photo. Photo set generated from the prompt “photo for college newspaper on AI and academia.” Photo generated through Midjourney, courtesy of Jerry Hartman.