What if I Were an AI Professor? The Promises and Risks of an Automated Academic Industry

By Florian Schneider

September 2025

The rapid developments in generative artificial intelligence (genAI) have forcefully drawn attention to the challenges our societies face, especially when it comes to labour and work. These challenges are keenly felt in almost all industries, but certainly in academia. They are not necessarily new: from the introduction of the written word to printing presses to early computers, scholars have asked themselves what we gain when we outsource some aspect of our human experience to a device, and what we might lose. When is the ‘cognitive offloading’ that technologies enable helpful, and when does it crowd out essential skills? When does tech free up our time… and when does it threaten to make us redundant?

The rapid growth of genAI is now again prompting such questions, leading to a great deal of soul-searching in higher education. At many universities, the debates over whether and how genAI should be integrated into teaching and research have been heated. Should scholars use these digital systems? Should they permit their students to use them?

In this short article, I reflect on the thorny issue of how developments in genAI might relate to automation and labour in academia. But let me start off with a simple question: if you could recreate yourself as an AI chatbot, would you?

AI personas and labour

This is no longer a question for science fiction or philosophical thought experiments. GenAI already creates simulacra of people. But the issue has especially come to the fore in entertainment industries, where corporations recreate fictional characters formerly played by flesh-and-blood actors.

Take the following case. In the spring of 2025, the company behind the online multiplayer experience Fortnite used genAI to create a non-player character (NPC) for its game: the infamous Star Wars villain Darth Vader. This NPC was built using the original (and: legally licensed) voice of the late James Earl Jones, the actor who lent Vader his menacing baritone in the films. The result is a fully interactive, personalised version of the famous character that players can talk to about anything from game-related matters like combat strategies to what they had for lunch.

This may seem banal, but it is anything but. The innovation has far-reaching implications for entire industries, even societies. The game industry, much like the porn industry, has always been at the forefront of digital innovation, and what happens here is a litmus test for what consumers will soon be confronted with elsewhere.

Much has been made of the lawsuit the company was quickly confronted with, as its AI practice obviates the need to hire expensive human talent to emulate the voice of James Earl Jones. There is more to unpack about this than I can cover here. Fortnite is a multi-billion-dollar consumer product, and the fact that the corporation behind it relentlessly pushes forward to make human labour redundant is deeply worrying. And much like the prospect of being confronted with interactive versions of fictional or deceased persons, these practices spell out a wider future for how corporate actors treat labour. I’ll leave it to the political economists to properly unpack what is going on here, and what we should make of the case. I am more interested in what this case from the gaming world might mean for the way we think about digital information and communication, and specifically for the way we think about knowledge production.

Digital Darth Vader is a promise: what if we could all have access to fully interactive, personalised bots that simulate a specific character? That promise is not entirely new. The rollout of ChatGPT in late 2022 was followed by a flood of chatbots that emulated everything from philosophers to celebrities to specific professions like therapists. The initial results were often eerie, and not necessarily compelling. Since then, the field has made significant strides. Despite all its remaining flaws, the fact that Fortnite’s interactive Darth Vader is now available not just as a text model, but as a fully realised visual, animated, voiced version of the beloved character is stunning. It shows that tech entrepreneurs and designers are one step closer to creating personalised AI assistants as well as automated interfaces that emulate specific people, like your doctor, or your lawyer, or your university professor. Or: you.

You will have your own answer to the question of whether you’d like to see yourself recreated this way. Personally, I wouldn’t mind having an AI version of myself. Maybe that guy can then take care of all the boring stuff in my life: answering my emails, commenting on student work (at this point itself mostly written by AI), or peer-reviewing the n-th generic research article (again, probably also written by AI). That would certainly free me up to spend more time with friends and family. Or to do some actual thinking.

Of course, we don’t live in a world where I could then just enjoy the simple pleasures of life. Sure, if my tedious admin work were automated, I could significantly reduce the hours in my work week. Responding to other people’s emails alone takes up about 30% of what I do these days. Add other admin tasks and I could probably automate three days of work per week.

Of course, I could then be a good Protestant (or Confucian) worker and diligently fill those free days with fresh, shiny new work. I could read more. I could write more. I could teach more. If that is motivating and engaging enough to fill five workdays, then maybe we can still justify the 38-hour work week.

But why would we? I can just as easily imagine a week in which I kick back and smell the flowers most of the time but am then in the right state of mind to have truly meaningful thoughts and interactions on the remaining days. That would benefit my organisation, and society, far more than if I make sure I still clock in for the right amount of work each day, regardless of what that work is. Instead, I could do things that AI cannot (yet) do: I could meet with folks to discuss what’s on our minds. I could think up novel ideas and imagine new futures. I could ignite curiosity for what I do, instil courage in my students, help them grow.

Unfortunately, that is not the way our societies operate. As soon as I were able to ‘save’ a day of work, the organisation I work for would command me to fill it with something else to do. And if I couldn’t justify that day of work, then that organisation would likely ask: why are we paying that guy for that day? I mean, what a great way to save some money. If that lazy professor only works two days a week, then maybe only pay for two days a week; no need to hand out a monthly salary on which he can live.

The writing is on the wall, and this is of course why workers living in precarity are so concerned, be they Darth Vader voice actors or copywriters, coders, translators, artists, teaching assistants, or really anyone doing a job that deals with information and communication. Which is to say: pretty much every white-collar worker out there. Because, of course, administrators, managers, and investors are already asking themselves how they can automate away as much staff as possible and pay as little as possible for whatever labour remains. That is the dystopia we are headed for, unless we collectively act against that future. While this is a dystopia that will not stop at the hallowed halls of academia, it is certainly felt acutely in a field that relies so heavily on information and communication.

AI and the shift in academic work

I sympathise with the reaction of many of my colleagues to try and put the genie back in the bottle. Maybe if academics committed to not using AI, it would send an important signal. Maybe if we ban all electronics from the classroom, students will learn to resist the siren call of the chatbot. Maybe the changes wrought by AI can still be rolled back. However, I worry that many of these efforts are misplaced. Academia has already changed, and that change will not be undone. Large language models (LLMs) only speed up a process that has long been underway.

Today, LLMs make it possible to survey large bodies of literature and distil them to their essence. This has cut the time it takes to conduct a literature review to a fraction of what it used to take ‘by hand’. Similarly, if you are analysing anything that consists of information, then that analysis can now be automated, especially if you go through the trouble of learning how to set up your own AI workflow. And if you want to write all this up? Why not feed your past writings to an LLM, then ask it to put your thoughts into the format of the generic, highly structured genre of the ‘research article’, while faithfully emulating your writing style? Three weeks of writing have now turned into two hours of quick editing.
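
For readers who wonder what ‘setting up your own AI workflow’ might involve in practice, here is a minimal sketch of one possible literature-survey step. It assumes the OpenAI Python SDK, an API key in the environment, and a local folder of plain-text abstracts; the folder path, model name, and prompt wording are illustrative placeholders rather than recommendations.

```python
# A minimal sketch of one possible literature-survey step, assuming the
# OpenAI Python SDK and a local folder of plain-text abstracts. The folder
# path, model name, and prompt are illustrative placeholders only.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarise_abstracts(folder: str, question: str) -> str:
    """Distil a folder of .txt abstracts into a short thematic overview."""
    abstracts = [p.read_text(encoding="utf-8") for p in sorted(Path(folder).glob("*.txt"))]
    prompt = (
        f"Research question: {question}\n\n"
        "Summarise the main themes, points of agreement, and open gaps across "
        "the following abstracts in roughly 300 words:\n\n"
        + "\n\n---\n\n".join(abstracts)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Example use, assuming ./abstracts/ holds the collected files:
# print(summarise_abstracts("abstracts", "How is 'cognitive offloading' defined?"))
```

The point is not this specific tooling, but how little glue code now stands between a folder of sources and a first-draft synthesis.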

You may think I am exaggerating. Surely LLMs aren’t that good yet. And in some cases, using genAI may actually cost more time than it saves, for instance when scholars have to go through lengthy prompt engineering and edits to create materials that hold up to academic scrutiny. Or when teachers waste time having to check their students’ papers for AI plagiarism. I have also talked to experts on very niche issues, and they convincingly argue that their work does not benefit from probabilistic systems that churn out content that emulates mainstream concerns. Overall, however, genAI tools have radically improved since ChatGPT’s high-profile rollout, and using these tools often does save time.

And so the automation train has left the station, never to return: while most scholars (and: many other white-collar workers) are likely to be circumspect about this in public, my private conversations with colleagues across disciplines suggest that many now use LLMs to automate parts of their work. Some do so casually, through AI-powered tools that have become integrated into their browsers and search engines and task managers and email programs. Others do so strategically, by building their own AI research workflows on top of existing AI infrastructures like ChatGPT, Gemini, or Claude. And in Asia, I know of colleagues who unapologetically outsource anything they can to AI workflows or AI agents. These tools have drastically multiplied their output. Your organisation demands that you write one paper per year? Why not write four. It might just get you that promotion. And those who used to struggle with the hegemony that the English language holds in academic publishing can now rest easy: an LLM can fix their writing or generate it from whole cloth. The so-called Global South can finally become part of conversations that had, up until recently, been dominated by the highly trained elites of the carefully gatekept Anglosphere. Or at least that is the hope of many.

Of course, these changes are already raising red flags. Some of the concerns are understandable. They include echoes of concerns in other industries, e.g. about copyright implications, environmental impacts, the digital biases of ‘hallucinating’ bots, the devolution of language, and more. But they also extend to issues specific to the world of universities: will automation harm academic integrity? Will AI use lead to generic, uninspired conclusions that merely reproduce the going consensus? And if so: what happens when the already heavily burdened world of academic publishing becomes ever more inundated with polished but generic article and book manuscript submissions? Well, maybe the editorial work can be automated as well, but that only threatens to further fuel the ever-mushrooming number of uninspired publications. Again, we can find ways to automate the task of sifting through the bloat and the AI slop, but to what end? In short: the very nature of academic knowledge is in the process of shifting.

Again, these concerns are valid, certainly the moral and environmental concerns. That said, criticism of how innovation and critical thinking in academia are threatened by the tendencies of AI to produce generic, common-denominator clichés risks misdiagnosing the problem. As it is, the ability of academia to produce interesting, novel insights has long been under attack, and not by AI. Over the past decades, academic knowledge production has come to resemble a conveyor-belt production line that favours mainstream arguments, created primarily to satisfy data-driven tenure assessment systems. AI fits so neatly into this system precisely because LLMs are so good at creating the probabilistically perfect run-of-the-mill averageness for which the system selects.

Clearly something has gone wrong here, and it has been going wrong for some time. GenAI is deeply entangled with the problem, but it is not its cause. If anything, the much-evoked AI revolution shines a spotlight on what the problem truly is.

As I see it, that problem is an unfortunate fusion of two developments, each of which has been self-reinforcing in its own right, but which together have been a potent, runaway force shaping academic knowledge production. The first is industrial-scale organisation; the second is the inherent inertia of organisational cultures.

The academic industry

First, consider the industrial-scale organisation apparent in the modern production of goods and services, which can take on baffling scales and speeds. The ready-made meal sold in a supermarket today is a product of industrialisation that is far removed from what it looks like to procure, prepare, and cook that same meal from scratch, as a human.

The overarching issue is that scaling up any human project comes with the temptation to do so in an impersonal, inhuman fashion. The sociologist Lewis Mumford stressed this throughout his work. In fact, he himself would not have described this as a mere ‘temptation’ but considered it a foundational part of all modernity: we cannot have large human projects without authoritarian, hierarchical structures, which inevitably do substantial violence to their subjects – at least this was Mumford’s understanding. I won’t get into the weeds here about why I am more hopeful than this, but Mumford’s studies of 1960s America are hard to fault empirically, and much of what he wrote (and: warned) about has proven prophetic.

Academia is precisely one of these human endeavours that has scaled to a national, regional, and near-global scope. This means that it has adopted and developed systems to handle its growing complexities. We now have systems for assessing what is correct knowledge and what is not. We use systems for ensuring that knowledge reaches others, in formats that are useful. We have systems that assess who and what is important, systems of reward and punishment, systems for hiring and firing and retiring people, systems for teaching and grading and commenting. The modern university uses systems within systems within systems, many borrowed from business and management. And this is why academia, much like those other complex industrial-scale work environments, lends itself so readily to AI-fuelled automation. Because what is AI if not an advanced way to manage systems?

But this is the crux: rather than fretting about ways to shield these various systems from AI intervention, we should be asking whether we truly need these systems in the first place. Which of them can be dismantled? Can we revert some of them to an earlier scale?

For instance, peer-reviewed research articles are a modern solution to an arguably timeless problem: how can scholars communicate their insights in an accountable fashion? However, it is not at all a given that the answer must take the form of 8,000-word written pieces of text. And the idea that each of those texts should be checked anonymously by two other scholars from that field? That is an artifact of a development trajectory that could have been envisioned differently. In fact, it has been envisioned differently: in much of computer science, peer feedback does not take this form. There, scholars comment on each other’s work as it develops, offering comments and advice along the way. That is also peer review. We could, then, envision entirely different ways of sharing our insights as scholars while keeping ourselves accountable, for instance by using interactive forums.

Academic cultures

But what of that second unfortunate dynamic, the inertia of organisational cultures? Academia consists of many large-scale organisations, across geographical places and scholarly fields. Universities, institutes, departments, think tanks, publishing houses: each does what human organisations do, anywhere humans come together to do things structurally over time. They build a culture, they generate a history, they cultivate rituals and practices, and they rely on norms that have emerged over the course of years, decades, or even centuries of interaction. Some of these practices may become best practices that are shared across time and space, and that shape how the entire endeavour functions. Peer review is again an example of this, but so are degree programmes, tenure models, or even core ideals such as ‘verifiability’.

As much as many academics like to think of themselves as progressive thinkers, and regardless of how much the academy might get misleadingly maligned and systematically attacked for supposedly being biased towards progressive ideas, the truth is that these organisations and their practices are just like those found anywhere else: they are inherently conservative. They provide stability. They provide certainty. They are the structures that allow the whole endeavour to function. Or at least it seems that way. That makes them sticky institutions that have a great deal of inertia.

This is what organisational cultures are like. But why is this a problem? I’ll leave aside here that such inherent conservatism might blind scholars to new ways of seeing the problems around them, a risk particularly acute in an endeavour that is supposed to produce novel insights. And I am emphatically including myself in that at-risk group: I am of course just as prone to stay in my comfort zone as anyone else. However, here I am more concerned with the way that organisational inertia might prevent us from seeing how some institutional practices we take at face value are merely cultural artifacts that we can change.

Think of the student thesis. It is international practice in most fields to have university students at all levels produce such a piece of writing at the end of their studies, roughly emulating the genre and format of the peer-reviewed research article. There are many reasons why this can be a good idea: writing down your thoughts on a complex topic helps structure those thoughts. Writing is thinking. Also, working through a project like this offers students a chance to showcase that they have learned the fundamentals of academic work: how to ask the right questions, how to conduct original research to answer those questions, how to position that research in a wider landscape of knowledge, and how to communicate the results to a professional audience. All of this has value.

There is then much to like about the idea of the written thesis. The reality, however, does not necessarily match the ideals. In practice, such documents are often highly formulaic, to the point of being generic. Any academic grading undergraduate theses for an extended period of their work life will be familiar with the sinking feeling one gets when reading the same takes on the same subjects, year in and year out.

What is more, the format privileges a particular way of thinking and communicating: it is a formal, written essay, and many instructors expend a great deal of energy policing what such writing should look like. It also relies heavily on rationalist ideas of what ‘reasoning’ looks like. Roundabout ways of building an argument don’t fit the bill. Neither do slow, gradual build-ups to a big reveal, like we would find in a mystery novel. That is not what an academic essay does. As a consequence, and for better or worse, thesis writing does not cover other ways of thinking about, exploring, or communicating ideas. Some higher education institutions acknowledge this and support students who want to set up alternative graduate projects, such as creating documentary films, websites or apps, podcasts or exhibits, or other kinds of artistic expression. But those efforts are noteworthy precisely because they are not the norm. The standard is the written thesis, and the conventions of how to produce such a piece of text are highly entrenched.

Small wonder, then, that almost all stages of a thesis-writing project can now be automated using LLMs. Higher education is in turmoil over this. Some educators plead with their students in the hope that they will understand the inherent value of doing such a project ‘by hand’ (writing is thinking, remember?). Others are issuing stern warnings or bans against using AI, threatening dire consequences for those who ignore the commandments. Some organisations are hoping to use digital systems (increasingly fuelled by AI themselves) to detect AI writing, so that transgressions can be identified and duly punished.

I sympathise with many of these reactions, especially those that warn students that they may just be cheating their way out of their own education. But again, much of the criticism gets the problem wrong. If a cornerstone of our education system, such as the written thesis, turns out to be so vulnerable to AI emulation, then we may have to ask the tough question of why that is. We may need to ask whether graduate theses (and: academic essays more broadly) truly embody the values we thought they would reflect. And we may then need to stop treating them as a sacred cow: if the well-intended project of having students write graduate essays has in practice turned into the kind of formulaic, repetitive task any LLM can do, then we need to find new kinds of projects that capture what we want academic learning to achieve. Maybe the way forward is to have students draw their thesis, or sing about it, or dance it. Or maybe it is to have them use AI to generate a thesis, then use that text as an initial starting point for a much deeper discussion of what that kind of process can teach us. And maybe – and here’s a radical thought – we can use AI to automate away enough of our boring work to make time to actually talk to our students about their insights. In person. Over coffee or tea. Like real humans.

The future of academic knowledge

What, then, of academic knowledge production? What might the AI future hold? Personally – and I say this as someone who loves writing – I suspect that the genre of the long-form written academic text will become a quaint curiosity, akin to the physical book, the music record, or the videogame cartridge. Much like with such physical media, there will always be those who appreciate what that format has to offer, myself included. However, maybe it will no longer be the norm to produce and share ideas this way. That does not have to be a problem. It might just unshackle the format of the written non-fiction text and save it from its slow march into progressively more advanced inanity. If it is no longer necessary for every academic to churn out these texts, then the genre is left to those who still want to deploy it, who see value in it, and who can consequently return it to what it was always meant to be: a deeply personal way to think through a problem and share that thought process with others. You know, like real humans do.

What might take the place of the academic text? I suppose we’ll have to find new formats through which to share our insights. Maybe this will be database-like structures through which academics capture the essence of their research; maybe it will be lists of simple statements and propositions. I can certainly envision myself putting together a bullet-point list of my findings and conclusions on a project rather than writing a long-form text. If that list, together with the data I’ve based my arguments on, is available in an open repository, then anyone with an internet connection can have those raw materials processed into a format they find useful. An undergrad student might ask their AI assistant to distil the essence of what I have found into an infographic. A PhD researcher may combine my materials with those of dozens or hundreds of other scholars to produce a two-page summary of the field. A colleague of mine might have their LLM of choice emulate my writing style to create a traditional research article they can print onto old-fashioned paper and read on the train. A policymaker might ask to have the findings reproduced as slides for a presentation, a schoolteacher might have them turned into a short video for their students, using an AI-generated voice for narration. Maybe it will be my voice. Maybe it will be an AI avatar that emulates me. An AI professor.

In principle, I would have no issue with any of this, as long as all these processes still observe the values that sit at the core of the academic endeavour. The information I provide this way still needs to be checked for accuracy by my colleagues in the field; maybe this can be the kind of interactive, open, crowd-sourced process the computer-science folks have put to such successful use. The result will need to be publicly available, and with provisions that limit, or prohibit, commercial (ab)uses. We have Creative Commons licences that can serve as the foundation for the necessary legal framework. And whatever happens with my information next needs to be transparent: those using these resources need to credit and reference me so that others can verify where the information came from.
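
To illustrate, rather than prescribe, what such an openly shared record could look like, here is a hypothetical sketch; every field name and value is an invented placeholder rather than an existing standard, and the listed claims merely paraphrase arguments made in this article.

```python
# A hypothetical sketch of an openly shared 'findings record' with the
# accountability features described above (open review, open licence,
# provenance, attribution). All field names and values are invented
# placeholders, not an existing standard or an actual dataset.
finding = {
    "author": "F. Schneider",
    "project": "Automation and academic labour",
    "date": "2025-09",
    "claims": [
        "GenAI accelerates, rather than causes, the industrialisation of academic knowledge production.",
        "Formats such as the written thesis are cultural artifacts that can be rethought.",
    ],
    "evidence": ["dataset-01.csv", "fieldnotes-2024.md"],  # pointers to openly archived materials
    "reviewed_by": ["peer-comment-thread-001"],            # open, crowd-sourced review trail
    "licence": "CC BY-NC 4.0",                             # limits commercial (ab)uses
    "cite_as": "https://example.org/repository/schneider-2025",  # where to credit and verify
}
```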

And there is another rub: something like this only works if my employer acknowledges that my work still has value. Automating any part of academia is only smart if academics are still paid their weekly salary, regardless of whether they spent 38 hours per week creating knowledge or 16 hours. But that, like so much else about AI, will require a fundamental rethinking of what work is, what knowledge creation is, and what we want our societies to look like in the AI-accelerated future. For academia, it will require difficult discussions about whether we have lost sight of what knowledge production and learning are supposed to achieve. I, for one, will certainly engage with AI to suss out the boundaries of what such automation can (and: cannot) achieve, but I will hold off on creating my own AI version of myself. I wouldn’t want my AI prof to optimise me out of my job.
