Brandeis Magazine
(Artificial) Intelligence Quotient
AI is taking higher education by storm, disrupting age-old academic traditions. How are Brandeis professors responding to the threats and opportunities this new technology presents?
By David Levin
Illustrations by Charlotte de Greling
Since the much-hyped launch of ChatGPT in 2022, generative AI tools — which can create text, video, images, and other types of content when given simple prompts — have been forcing universities to grapple head-on with a technological challenge to traditional ways of teaching and learning.
As a 2024 report on AI in The Chronicle of Higher Education notes, universities have long prided themselves on helping people acquire knowledge, critical-thinking skills, and problem-solving abilities through careful, methodical work. Generative AI seems to fly in the face of this approach. In some cases, it can even hamstring students who seek a thorough understanding of a subject.
AI’s very existence poses existential questions for higher ed. If ChatGPT can research and write a half-decent term paper in seconds, how do schools maintain their academic integrity? Can students learn to pursue truth and self-reflection in a largely post-truth world?
In a recent study conducted by market-research firm Tyton Partners, nearly 60% of students said they use AI regularly, yet only 36% of instructors said the same. Asking whether these numbers pose a threat to education is sort of a Rorschach test; everyone who looks at them sees something different. The real benefits and risks that generative AI poses may depend on how and when it’s used.
“There are disciplines, like studio art or theater arts, where AI is probably not going to change what faculty traditionally do in the classroom,” says Irina Dubinina, a professor of Russian. “But in disciplines like writing and literature, the consequences of something like ChatGPT are much more direct.”
Dubinina is a member of the newly formed Brandeis AI Task Force, which aims to set policy for AI use on campus. Until 2024, she was also the faculty liaison at Brandeis’ Center for Teaching and Learning, where she hosted gatherings, formed working groups, and held daylong symposia that focused on how to use the technology in the classroom.
Students are not the only ones who will have to adapt to the new normal, she says. Whether faculty members are AI champions or skeptics, they’ll inevitably have to adjust the ways they teach and reconfigure their assignments to either discourage AI tools or embrace them constructively.
Brandeis professors’ opinions on generative AI are as varied as their fields. Here, some of them describe the ways they’re approaching the technology, the successes and pitfalls they’ve experienced, and the lessons they’ve learned.
I’ve used AI very successfully in our entrepreneurship classes. AI tools can help students go through the prototyping process — like creating product sketches or pitches to investors — faster than ever before, and speed is an important factor when you want to launch a product or a service. You don’t want someone else to steal your idea or get it to market before you. With AI, students can spend more of their time developing ideas instead of getting bogged down in little details, like fixing margins on a PowerPoint presentation.
AI is actually helping my students realize the value of what we teach best at Brandeis: thinking critically, writing clearly, and debating ideas. We promote these skills across all our departments, from engineering to business — and they’re not something ChatGPT can ever replace. If students leave here knowing how to apply this knowledge to AI tools, they’ll have a major advantage in the workplace.
Generative AI sets the bar for students’ work. If you’re at a consulting firm and you’re asked to write a brief on a topic, you’d better be able to deliver something that’s much higher-quality than what ChatGPT can offer. You have to reach further.
That said, AI is definitely a powerful tool for developing ideas. All you really need is a concept for something you want to create, and, within minutes, you can get an AI-generated drawing or image. Even if it’s not perfect, it will still help you communicate your ideas quickly so you can get other people on board. Fundamentally, it’s a tool that lets my students reach further, faster.
AI proponents often come from the perspective that students can use AI to generate a first draft of something and then mold it into the thing they need.
I don’t buy that. As an anthropologist, I want students to learn how to think critically. I always tell them I don’t care if, 10 or 20 years from now, they know some little fact, or concept, or theory I taught them. I do care that they are able to reason. I’m not convinced AI teaches them that skill.
I don’t use AI in my classes at all, and I don’t let my students use it on assignments. There’s no real way of knowing whether they do, so I have to put guardrails in place, like asking them to cite a certain number of instances from my lectures when they write a paper. AI won’t have access to those.
The downside is that AI increases the amount of work my colleagues and I have to do, which makes it harder and harder to teach. But, at the end of the day, if we can’t think, we won’t succeed as a society.
In my lower-level classes, generative AI has pushed me to use a mastery learning approach: Either students can demonstrate they understand their code and can meet learning objectives, or they have to revise their work and resubmit it.
I usually require students to create videos of themselves running their code, explaining why it’s correct, and then showing me — in detail — how one interesting part of it works. This way, I can tell they really understand what they’re creating, even if AI helped them write it.
As a computer scientist, I do research on AI. As an educator, though, I don’t claim to use AI in my classroom in any unique way. I expect students will use it for coding, which is not a bad thing. That’s just part of the landscape.
But I still give students paper exams and paper-based projects, where they have nothing on their desk except a blue book. The storage room across from my office holds about 2,000 of these notebooks, and I’ll probably use them until I die. Even if my students use AI tools to code, they’ll still need to show me they can think on their feet and retrieve concepts from their own memory.
Tools like ChatGPT are made to process language, not to calculate things, which is a nuance a lot of students may not initially grasp.
In my biology classes, I use a simple exercise to illustrate this shortcoming. I’ll give students data from a short sequence of RNA, which looks something like “AUGGUCGCAGGAUUAAGGGAG.” For molecular biologists, it’s pretty easy to figure out which amino acids that bit of RNA can make. But if you ask ChatGPT for an answer, it’ll spit out a bunch of different solutions, some of which are totally wrong.
When students see this, it shows them pretty quickly they need to be more skeptical of AI’s accuracy.
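The translation itself is mechanical: the RNA is read three letters at a time, and the standard genetic code maps each triplet to exactly one amino acid. As a minimal sketch, offered purely as an illustration and using a codon table trimmed to just the seven triplets in this example, a few lines of Python settle the question the same way every time:

CODON_TABLE = {
    "AUG": "Met", "GUC": "Val", "GCA": "Ala", "GGA": "Gly",
    "UUA": "Leu", "AGG": "Arg", "GAG": "Glu",
}

def translate(rna):
    # Read the sequence three bases at a time and look up each codon
    # in the (partial) table above.
    return [CODON_TABLE[rna[i:i + 3]] for i in range(0, len(rna) - len(rna) % 3, 3)]

print(translate("AUGGUCGCAGGAUUAAGGGAG"))
# ['Met', 'Val', 'Ala', 'Gly', 'Leu', 'Arg', 'Glu']

Run it a hundred times and the answer never changes; ask a chatbot the same question and it might.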
Every Brandeis undergraduate has to take a University Writing Program class. In the program’s spring 2024 seminar, we asked incoming students to use AI technology at every stage of a research paper, from brainstorming an idea, to narrowing a topic, to getting feedback on drafts. We discussed what the AI produced so students could understand the technology’s strengths and weaknesses.
Students clearly had conflicting emotions about AI, and the majority of them asked for guidance on how to use it ethically. They were worried about inadvertently cheating.
As faculty, we have to be very clear in our policies and explanations moving forward. Once we give students guidelines, they can choose whether to use AI or not, and they should definitely leave Brandeis knowing how to write a prompt for AI, verify the answer it gives them, and understand the ethics behind using the technology.
AI-powered tools, like text-to-speech technology, can break down language barriers for international students and improve access to class materials for students with disabilities. If used properly, AI can make education more inclusive overall — but that’s a big “if.”
My main concerns are access and equity. For instance, do students who can afford the subscription version of ChatGPT have an advantage over those who can’t? Also, AI systems have inherent human biases baked into them, including the assumptions made by the algorithms themselves and the data AI tools are trained on. Generative AI learns by processing millions of documents, all written by humans. It reflects the systemic inequalities that exist in our society.
We’re already seeing AI tools amplify systemic disparities that disproportionately affect marginalized communities — for example, in policing and other areas of the criminal legal system, in environmental issues, and in mortgage-lending algorithms. Our first priorities should be teaching students to think critically about AI’s output and implementing a structure that helps them use it responsibly.
I teach students to use online Spanish-English dictionaries properly. We should do the same for generative AI, because all good writers know how to use their resources responsibly and effectively.
Last year, I started incorporating ChatGPT into my Spanish language courses. If you ask it to generate text in Spanish, it won’t do a perfect job, but analyzing its flaws can actually be very helpful. Students can learn a lot by picking ChatGPT-generated text apart. They can identify the good ideas and the helpful vocabulary, but they can also spot the weaknesses, like prose that’s too generic or inconsistent in tone.
These conversations can improve students’ own writing, teach them to critically evaluate AI-generated content, and help them understand AI’s potential and limitations.
ChatGPT excels at writing simple code for temperature control or feedback in lab instruments. I’ve encouraged my students to use it so they don’t get mired in programming. I want them to focus on the physics concepts I’m teaching.
Interestingly, a lot of students got frustrated with ChatGPT and ended up writing the code themselves, which took longer. The problem was that they were asking the machine to do too much, and what they got back didn’t work. They didn’t know how to break the problem down into smaller parts.
I had to explain that using AI is like teaching. If you ask a student a question in physics, even if their answer is wrong, the way they answer can reveal their thought process. It can imply they have a certain mental model of how nature and physics work, and this model has some errors.
But I can’t just tell them, “No, wrong answer,” because they’ll still have the wrong fundamental understanding. I have to ask another question that reveals to them some contradiction in their thinking, so they can unlearn and then relearn. That’s basically AI prompt engineering — you break things down into parts and course-correct if something goes wrong along the way.
I don’t see AI as the end of education. It’s a tool that increases productivity, like having a better hammer or a better search engine. You still need critical-thinking skills to assess the veracity of AI’s responses.
Sooner or later, everyone will have to deal with AI in their classroom. That’s a fact.
Faculty need to teach students how to use it ethically. We can’t do this with a punitive approach. Instead of outright prohibiting AI use, we need to create assignments that teach students how to use AI responsibly, or change our teaching in a way that deters students from using it, such as creating assignments that render AI useless.
There are lots of ways to do this. Teachers can require students to submit small sections of a term paper as they write it, focusing on writing as a process rather than a product. They can ask students to create a podcast or integrate a topic discussed in class into a project. If teachers understand what a machine is unable to do, they can make that a required component of anything students produce.