Thumbnail artwork by Iliana Garner / North by Northwestern

I was stunned when I first watched the Pixar film WALL-E. Earth was practically dead, and civilization had escaped to spaceships, its people grown sluggish as technology performed every task for them. Even the captain of the ship reassigned most of his duties to an AI that ended up becoming the story’s main villain.

ChatGPT feels reminiscent of this plotline. Today, 1.16 billion people use the AI tool that was released just six months ago. It can answer questions, tell jokes and write text – poems, academic essays and everything in between.

Mohammad Hosseini, a postdoctoral scholar in the division of health and biomedical informatics within Northwestern's Department of Preventive Medicine, devotes significant research time to studying the potential and risks of AI.

“The possibilities are limited to the extent of your own creativity,” Hosseini said. “Because it's so new, because it's so powerful, we haven't even had time to think about what norms should be regulating the use of this new technology.”

The Responsible AI Student Organization (RAISO) on campus is trying to raise awareness of the societal impact of AI and modern technology. They do this through weekly discussions, guest speakers and a newsletter called “Hold the Code.”

The organization, led by Weinberg third-year and RAISO President Tiffany Lou, is pressuring the computer science department to include ethics in the major’s core curriculum. In a RAISO discussion meeting, students talked about ChatGPT’s potential to favor Western norms given that it was created by U.S. developers and is unavailable in some countries like China.

“There’s no completely unbiased structure in ChatGPT algorithms. If they’re built by humans, there’ll be some sort of selection [bias] in the data or algorithm they build,” Lou said. “If your software engineers are coming from the U.S. or Western world, it’s not going to have a global view.”

The robo-research assistant

While it's hard to predict the direction ChatGPT will develop, Hosseini said academic research is likely to move faster than ever before. Scientists already use ChatGPT to write research abstracts, brainstorm ideas and, he predicts, eventually collect accurate data. Automating these time-consuming processes could shift the dynamics of academic research within a few years, according to Hosseini. Essentially, ChatGPT will be like those research assistants and TAs you work with in class labs, minus the overworked graduate students.

“The class of ‘26 would be among the first groups to enter a completely different research dynamic. By that time, I expect to see a lot of manual and non-automated tasks to be automated,” Hosseini said.

Hosseini encourages students interested in research to use AI responsibly and focus on learning unique skills that can’t be automated, especially any manual task beyond what’s needed to complete a paper. For example, he said that ChatGPT can tell you about climate change, but it can’t build a wind turbine to fix it.

“In theory, I think research is going to move a lot faster than it has prior. But in practice, we need things to happen on the ground,” Hosseini said. “Neither ChatGPT nor other AI applications have been able to do things.”

The substitute teacher’s a robot?

Northwestern has already offered faculty training on the impact of ChatGPT on education. Nina Wieda is an assistant professor of instruction in the Chicago Field Studies program, and part of her course encourages students to analyze existing trends to predict the future.

For one assignment, students use ChatGPT to generate a research paper in order to critique the AI’s ability to come up with ideas, gather sources and write.

“NU emphasizes the importance of approaching ChatGPT critically as a potential tool that can be useful but also used for harm,” Wieda said. “Our goal as teachers is to help students use it without undermining the quality of the eventual output, without giving up an opportunity to learn in the process.”

Wieda believes that ChatGPT can be useful in classes that aim to teach critical thinking and brainstorming, but it should not be used in writing classes. Instead, she said it could become a new Wikipedia.

“We’re aware it exists and use it to some extent but also understand its limitations and so seek information beyond it,” Wieda said.

Ears in the room

ChatGPT can revolutionize healthcare by automating tedious tasks, allowing physicians to focus on strengthening relationships with patients, said Daniel Liebovitz, an internal medicine physician at Northwestern’s Feinberg School of Medicine. The AI tool won’t make jobs in the industry obsolete. Instead, Liebovitz said that future physicians will have more time to cultivate inherently human abilities, like listening for emotional cues and building rapport, that can’t be automated.

The tool is already helping physicians brainstorm medical decisions and draft paperwork for patients. Once developers implement data privacy measures, it can be used by doctors to analyze patient data and vet possible diagnoses, Liebovitz said. ChatGPT can be all ears: Liebovitz added that it might be able to listen in on physician-patient meetings and create live records of a patient’s diagnosis along with recommended courses of action for both parties like a smart scribe.

“We have a workforce shortage overall. If we could use the individuals that we have more effectively by freeing them up from some of the drudgery work, that will be huge in terms of job satisfaction, as well as patient safety and efficiency,” Liebovitz said.

The caveat: ChatGPT frequently produces inaccurate information or references faulty studies. For example, some outdated research efforts led to the inaccurate conclusion that certain medicines are more effective in white patients than in Black patients. Liebovitz recommends that doctors carefully fact-check ChatGPT results and eliminate bias by writing prompts that specifically ignore variables like race.

Better Call ChatGPT

Those in need of affordable legal services may also benefit from the automation of tedious tasks. Northwestern Pritzker School of Law Professor John McGinnis forecasts that most legal contracts will be written by ChatGPT within the next two years, potentially replacing paralegals and generic templates that already exist. This could democratize legal assistance, making it more affordable by removing the need to pay a paralegal to produce the same outcome, he said.

“There are a lot of unmet legal needs for people, particularly middle class and poor people who really can't afford the hourly rate of some lawyers,” McGinnis said. “Well, they may be able to afford 15 minutes of a lawyer's time, who will input something with ChatGPT and check on it.”

Shine bright like a diamond

Don’t fear automation, McGinnis said. In other words, we won’t end up like the people in WALL-E. He instead thinks that the responsibilities of assistant jobs will shift.

“People thought that bank tellers would decline once ATMs were brought in. Yet the amount of bank tellers grew. They just did different things,” McGinnis said.

Rather than eliminating a technology that may render some tasks and jobs obsolete, Hosseini suggested adapting to a world that makes the most of ChatGPT.

The way to do that in research, teaching, healthcare, law and beyond is to be what McGinnis encourages: a “superstar.”

“Just add your little sprinkle of magic, your little genius, on it and make things better,” McGinnis said.