An AI tool named after Albert Einstein (pictured) was taken down shortly after it was released. Credit: GK History Images/Alamy
On 23 February, academics across the world took to social media to decry the death of education as we know it. The day before, a technology start-up company called Companion had released an artificial-intelligence platform that pledged to free students from tedious coursework.
Such a pledge might not seem controversial at a time when AI tools exist for nearly everything, were it not for the fact that the program, called Einstein, promised on its website to do so much more. The company said that students could grant the tool access to their account on a virtual learning environment, such as Canvas. Once they did, Einstein could watch lectures, read course material, participate in discussions, complete quizzes, and write and submit homework, all with minimal oversight from the students themselves.
Companion’s chief executive, Advait Paliwal, told the technology news outlet CNET that Einstein “makes ChatGPT look like a toy”, while educators called it “a cheating app”, “evil” and “the ultimate brain smoothing machine”. After the backlash, language on the tool’s website shifted to downplay the AI’s capabilities, and by 26 February the bot was no longer accessible, following a ‘cease-and-desist’ demand. Paliwal told Times Higher Education that he would now “concentrate on promoting how the wider Companion AI can be used by students”. (Attempts by Nature to reach Paliwal received no reply.)
Game over
Einstein’s moment in the sun might have been short, but it is part of a wider reckoning over how students should be educated today. AI tools are being marketed as time savers for teachers overburdened by administrative tasks, yet some faculty members are instead spending more time battling bad-faith use by students, prompting a push to return to ‘de-digitized’ curricula that place less emphasis on computers.
“My first thought when I saw Einstein was ‘game over’,” says Lilian Edwards, a specialist in Internet law and technology policy at Newcastle University, UK. Circumventing the tool, she says, would require instructors “to rearrange [their] assessment strategy entirely”, which would take substantial effort. “AI can certainly be useful,” she adds, but the majority of people she knows “think it’s driving a stake through the heart of conventional educational assessment”.
AI has lots of legitimate uses in academia — including writing code, translating texts and correcting grammar — and David Jurgens, a computer scientist at the University of Michigan in Ann Arbor, says it’s nearly impossible to avoid in his field. As such, he often faces many of the same ethical quandaries as his students. Jurgens came across another AI, called Professor Feynman, which is essentially Einstein for academics: it promises to free them from the ‘busywork’ of reading and grading essays, responding to discussions and even holding online office hours, by creating a ‘digital twin’ that mimics their voice, mannerisms and teaching style.
“You can imagine a nightmare situation where classes become AIs talking to AIs, with no people actually interacting,” he says.
Rather than adapting his assessments to AI platforms, Jurgens has engaged his students in thoughtful classroom discussions.
“Teachers are always going to have to spend time developing and updating their curriculums, and so I’ve tried to make it a more collaborative process,” he says. “It feels like a better use of my time, and as a result, I do see students being more aware that they’re only hurting themselves in the long term if they’re replacing themselves with these tools.”