The Future of AI

Hollywood has been warning us about it for years. Think “I, Robot,” in which a conspiracy spearheaded by an artificial intelligence attempts to enslave the human race. Or the science-fiction action classic “The Terminator,” which sees Schwarzenegger’s titular character trying to ensure humankind’s extinction at the hands of machines. Even back in 1968, when computers were still huge, unwieldy hunks of metal and plastic, Kubrick’s “2001: A Space Odyssey” featured a murderous supercomputer, HAL 9000, as its antagonist. HAL’s chilling, iconic refusal to open the pod bay doors was many audiences’ first introduction to an independently intelligent and malevolent machine, and it planted an idea of AI as something dangerous that would stick around for decades to come.

These depictions have all leaned into violent visions of the robot apocalypse, helping to predispose many people to fear AI that can look, sound, and act human. Yet it now seems the movies’ dramatized plots may have been overcomplicating things. AI is not going to take over the world with a bang, but with a generative prompt.

It has been roughly a year since OpenAI launched the AI chatbot ChatGPT, following its AI image generator DALL-E, and the two have already completely shifted the discourse around the technology. The old worries of AI rising up against humanity are not entirely gone, but they now feel distant next to the pressing concerns of the technology’s current implications.

For those less familiar with the programs, ChatGPT is a text-generating chatbot. Its primary purpose is to converse with human users about anything from popular culture to the current state of the stock market, drawing on a vast store of available data, some of which comes from its own past conversations. An even more notable feature is its ability to suggest and generate content: ask ChatGPT for rainy-day activities or places to visit in New York City, and in seconds you’ll have a twenty-point list filled with ideas.

DALL-E generates images from human-created text prompts, drawing on patterns learned from a vast trove of pictures. The results are entirely new images that have never existed. Some are more fantastical than others and thus easier to recognize as AI, but others are deceptively real. Take the image of the Pope wearing a puffer coat that went viral a few months ago. Unless you notice the warped fingers on one of his hands, it is difficult to spot as a fake. Generating human hands and limbs is something image generators like DALL-E have struggled with since their release, but they are continually improving.

These programs have sparked worries about disinformation and made many nervous about their job security. Alphabet, Google’s parent company, cut 12,000 jobs earlier this year, citing “a different economic reality” as it seeks to incorporate AI into many of its services. Despite examples like this, many still view humans competing with AI for jobs as something on the semi-distant horizon.

Perhaps the area most immediately impacted by the launch of ChatGPT has been higher education. Students and professors returned in the spring semester of 2023 to a brave new world in which a chatbot could churn out a five-paragraph essay about as fast as one could type out the prompt requesting it.

Colleges and universities across the country have had varied responses, from outright banning the use of AI to incorporating it into certain classes. One professor at Texas A&M accused students of using ChatGPT to generate their final essays after feeding them into a checker made to detect AI writing. He failed them, leaving graduating seniors panicking. After a large backlash, the university stated that no students would be prevented from graduating. AI checkers are known to be prone to false positives, flagging human writing as artificial.

At Grand View University, the response to the rise of AI has not been quite as extreme. Rather, the plan is to proceed with caution and experimentation. 

Currently, GVU’s academic dishonesty policy does not specifically mention AI. Academic Dean Todd Knealing said this was because AI usage already fell under the policy’s existing definition of plagiarism.

“If you use AI to generate something, you’d have to attribute it to the AI in the paper, or it would be the same thing as taking what another author did and not citing them,” Knealing said.

He added that GVU sees very few cases of academic dishonesty, around 10 a semester. Whether that number will grow remains to be seen, but Knealing is sure that AI will remain prevalent in future conversations.

“I don’t think it’s going to go away. I think the models are going to get better and better and stronger and stronger, faster and faster and so on. It’s going to be more integrated in all kinds of different areas and that’s going to be scary and frightening to all kinds of people because it could represent a very big disruption,” Knealing said. 

That disruption is already present in college classrooms, especially from professors’ perspectives. Though not all are responding as strongly as that Texas A&M professor, you would be hard-pressed to find a professor without some kind of opinion or plan on how AI should be dealt with. 

Simone Sorterberg, Associate Professor of Education and Director of the Center of Excellence in Teaching and Learning (CETL) at GVU, discussed different reactions to ChatGPT she has seen among professors.

“You can be resistant to artificial intelligence and try to design assignments that are resistant. You can also embrace it wholeheartedly and integrate it and talk about its ethical use in your class, which is probably a pretty important thing to do at this point,” Sorterberg said. 

Resistant assignments could take different forms. ChatGPT is unable to draw heavily from copyrighted material and struggles to perform more in-depth analyses, like comparing and contrasting separate texts. It also has no ability to cite the material it is referencing, though this is something OpenAI plans to implement in the future.

“It can’t necessarily reference, and it will even tell you that because of the copyright laws, it can’t reference certain works,” Sorterberg said, explaining the chatbot’s limitations. “So, all it could do is take general ideas from each author and compare those general ideas, even though they weren’t necessarily coming from those exact texts.”

While AI-resistant assignments may be used, Sorterberg noted that ignoring the technology’s development in the classroom was also unrealistic.

“I don’t think we can avoid talking about it in our classes, because where else are our students going to learn ethical use? It’s going to be the teachers that lead society, as we do on many issues, to thinking through the ethical uses of something and how it might impact others negatively,” Sorterberg said.

Through CETL, Sorterberg has made information available to professors on how to engage with AI, and she will continue to do so.

Some professors, like English Professor Paul Brooke, have already brought AI into the classroom. In his Writing for Business class, Brooke created an assignment that lets students see the differences between human and AI writing.

“They have to do a ChatGPT-driven AI memo and then they have to do one of their own. What you notice when you do the [ChatGPT] one is that it goes on and on. For the students, I would expect them to have fewer main points. It wouldn’t be so over the top. It would be more directed,” Brooke said.

Brooke explained his thoughts on AI in the classroom by stressing its inevitability. 

“You can’t exclude it. Students are going to use it. Students are going to probably, you know, try it out and see if they can get away with it. Why wouldn’t you? I think we should embrace it,” Brooke said. 

Brooke also teaches all of GVU’s creative writing classes, including fiction, poetry and non-fiction. Though he believes that AI’s capabilities to write creatively are currently nowhere near the level of human talent, he still acknowledges that it will present issues. 

“What’s going to happen is it could just X out a lot of writers and leave them kind of out in the cold. So that’s an issue. Right now, the quality is low but it’s machine learning, so in time it’ll probably be pretty close to many of the writers who are, you know, maybe not the greatest writers, but they’re okay. And then will they be replaced? Probably. So that is a deep concern,” Brooke said. 

Currently, many creatives are worried about AI’s capabilities. There has been talk of AI-generated novels and movies. Disney was recently revealed to have used AI technology to scan the faces and bodies of background actors on its streaming show “WandaVision” to create digital replicas. This was done without their consent, potentially to cut costs on hiring extras in future projects.

“Heart On My Sleeve,” a song using the AI-generated vocals of musicians Drake and The Weeknd, amassed hundreds of thousands of streams on Spotify and millions of views on TikTok before it was removed due to copyright issues. Recording Academy CEO Harvey Mason Jr. said the song would have been eligible for the 2024 Grammys had its removal not made it commercially unavailable.

These events have generally been met with widespread backlash. But that backlash is undergirded by a creeping trepidation about AI taking over creative industries.

Despite this, GVU Art Professor Rachel Schwaller said that innate differences set human art and AI art apart.

“The robot, or the AI, or the technology feels colder than that. It doesn’t feel like it might give you new content. It might show you something that you haven’t thought about in your own work, but it doesn’t have the human quality that sort of bounces ideas back and you go, yeah, that’s a great idea,” Schwaller said. 

When it came to businesses using AI in place of artists, Schwaller was similarly skeptical.

“That’s a money thing, and I don’t want to discount somebody’s small firm, and you can’t afford a person within it. But if you have a person who can write or can create the visual things, and you choose to go with the AI over it, perhaps you missed the opportunity to see greatness for somebody who creates something that’s really interesting and new,” Schwaller said. 

Many of these discussions around AI seem to have been magnified by their relative immediacy. Concerns about AI’s effects on academic honesty and on creative work simply didn’t exist in the same way a year ago. That immediacy makes it difficult to determine what kind of staying power different aspects of generative AI will have.

Schwaller admitted that coming changes were undeniable, but was hopeful about the future of human collaboration. 

“I think people will get to a point where, technology is awesome, but human contact is desired. I think there’s a point where people go ‘Yeah, but can we just have a conversation? Can we just share ideas?’” Schwaller said. 

When asked if the negative effects of AI would outweigh the positive ones, Schwaller was also optimistic. 

“I don’t know if the bad will outweigh the good. I think it might be dominant for a while. But I always ask that question, you know, what does the human do for us? And the human has advanced and evolved, and all these things. So, I don’t think it’s 1984 and I don’t think it’s like, so sci-fi,” Schwaller said. 

Uncertainty seems to be a factor in all discussions of the future of AI. The sources interviewed for this story were asked to describe their feelings towards AI in a few words, and the responses were often at odds with each other. AI is puzzling, yet innovative. It is banal, yet shiny. And it is scary, yet fascinating. 

The true direction AI is taking us, in the classroom or in broader contexts, may not be fully understood for years to come, but there is no question that the technology is here to stay. Attempting to deny or work around its existence would only work against ourselves. Rather, it is imperative that we have meaningful discussions about the uses and effects of AI before we see it used in potentially damaging ways.

These discussions may take place in faculty meetings, statehouses, or between friends, but they will need to happen. Having them is what will, hopefully, map out issues and disputes that are currently uncharted territory. 

In comparison with our reality, the stories we have seen in the movies about AI and robot uprisings are exaggerated and a little ridiculous. ChatGPT is no Terminator, and DALL-E is no HAL 9000. Still, there may be something to take from them: a sense of caution around the use of AI.

It is guaranteed that someone, somewhere, will use these new tools in unethical ways. It already has happened. Living with that while continuing to explore, debate, and even regulate AI’s uses is all that we can do.

The ironic fact of the matter is that the pod bay doors have been opened, and there is no way for anyone to close them. 
