CHALLENGES OF AI IN ACADEMIA

By Mickey Skidmore, AMHSW, ACSW, FAASW

Whether we like it or not, and whether we agree with it, the genie has been let out of the bottle, and we cannot put it back. Whatever your views about artificial intelligence (AI), most experts in the field acknowledge that its impact on humanity will be realised sooner rather than later, probably far sooner than we’d like to admit.

Early speculation was that AI would first come for blue-collar jobs, where technological innovation could outperform humans at mundane, repetitive tasks, much as machines replaced humans in the Industrial Revolution. The issue is now receiving increased attention with the realisation that AI is coming for white-collar jobs as well.

I envision AI fundamentally altering our existence. How will we make money to buy and pay for things if AI has taken most of the jobs? What will we do with our time? How will we find or determine purpose for our lives without work, vocation or careers? This may in fact be the quintessential challenge for humanity as it strives to determine how humans survive and function in the age of AI. Such a scenario bolsters the case for some version of universal income; however, it is becoming clear that transcending this challenge will fundamentally transform the functioning of humanity, likely in ways we may not yet have the framework to imagine. Some have suggested this will be upon us by 2050; others have suggested it is closer to 2035 or 2030. Whatever the actual timeline, AI experts generally concur that it will happen sooner than anyone thinks.

Before we get there, however, this editorial will focus on the current challenges AI poses in (tertiary) education in particular. The rapid increase in AI use in academic assignments is disconcerting and raises alarming concerns about academic integrity and ethics. Not only are students inputting the task instructions and submitting AI-generated assignments; educators can likewise use AI to mark those assignments. This not only highlights tendencies towards laziness; it reveals issues of intellectual dishonesty, fraud and theft, and exposes some fundamental threats to the process of education in general.

As a Social Work educator, I make no secret that an accredited Social Work MSWQ program emphasises two fundamental objectives: cultivating critical thinking (teaching SW students not WHAT to think, but rather HOW to think) and embracing critical reflection in their practice. AI seems to be a direct threat that undermines both of these principles and efforts.

For example, as part of a recent experiential learning exercise I broke my class into small groups of 4-5 to discuss and explore what information (content) they felt should be included in an assessment. As I meandered through each group, I encountered one group who had inputted my instructions into ChatGPT and received a response. Spank my butt and call me silly, but I’m pretty sure this is not an example of critical thinking.

I do recognise that there may be some areas of SW practice where AI could be a useful tool. Using AI for a comprehensive literature review or meta-analysis as part of a research effort can often be done in considerably less time than spending hours in a library or online. While I note that AI can also generate bogus references, these can be easily checked. Likewise, identifying resources in the community that may be helpful to clients may also be a useful AI application (although this will only yield services that are included in any particular database). AI could also help international students, for whom English is not their first language, by reviewing their completed assignments to assist with grammar correction or enhancement. And I leave room for other, yet to be identified, applications as well.

Ideally, I believe the aim for any University would be to take the lead in teaching professional, ethical and moral applications of AI in academic and professional practice, clearly establishing the boundaries of academic misconduct. Unfortunately, the reality is that most Universities are behind on this challenge and have not yet figured out what it might actually look like. There has, however, been a shift towards in-class assignments to combat the growing impact of AI.

The fact that (international) students are pushing back on such efforts only confirms how pervasive such practices have become. Consider an excerpt from a student email:

“First, while I understand the intent behind banning AI use, completely excluding it without fostering education on its ethical application is misaligned with the modern academic environment. AI can serve as a valuable tool for tasks such as research, structuring essays, and grammar correction. Rather than an outright ban, providing guidelines to encourage transparent and ethical use of AI would be more effective. Reverting to outdated assessment methods to ensure academic integrity reflects a lack of innovation and diligence on the part of the university and its faculty in exploring better alternatives …”

“Second, prohibiting electronic devices … undermines the quality of the assignment and fails to evaluate creative thinking or analytical skills …” (Really?)

“Third, digital tools and literacy are essential competencies in today’s society. Excluding electronic devices hinders students’ ability to develop skills necessary for professional environments, particularly for social workers …” (again, Really?)

“I fully support the emphasis on academic integrity behind this assignment. However, the current restrictions risk undermining the learning experience and compromising fairness and practicality. I respectfully urge you to consider these concerns and explore alternative approaches …”

I confess that my initial curiosity on reading these comments was to ask whether this was a stream of consciousness from a student’s critical reflection or a self-serving, AI-generated justification reinforcing the need for digital technology. The fact that students can deny this is precisely one of the dilemmas. (If you cannot determine whether a thought is generated by a human or a machine, is it ethical?) Moreover, as noted previously, relying on AI for basic Social Work functioning undermines efforts to cultivate critical thinking. I do not need to know what AI thinks should be included in an assessment. Rather, I want to stimulate students’ critical reflection to consider a human response to a particular human context.

I may be old school about this; however, for professions that are essentially relationally based (e.g. Social Work, Psychology and others), the primary focus of the education process needs to be unapologetically experiential and social in nature. (The first word of the Social Work profession is social, for goodness’ sake!) Endorsing such reliance on technology does not adequately prepare students for the demanding and complex issues that await them in Social Work practice. Yielding to the notion of relying on digital technology (whether AI or something else) to carry out the basic functions of inter-relational Social Work practice will, in my view, only lead to a profound crisis for the profession, which is already viewed as the “less than” red-headed stepchild of the allied health professions.

Again, I acknowledge the genie is out of the bottle. I even acknowledge that one aspiration of academia might be to ensure that students completing a degree attain a basic proficiency in the professional, ethical and moral use of AI. Perhaps at a more profound level this raises questions about the fundamental covenant between students and educators. Yet it may all be a moot point if we do not rise to the challenge of transforming how we will function in the age of AI. If there are no more jobs, perhaps tertiary education itself becomes increasingly irrelevant.

You might have noticed the proliferation of dystopian-themed TV shows and movies across our entertainment landscape. They often depict apocalyptic zombies roaming the earth. I envision a different type of zombie in the AI age: intellectual zombies, a majority of surviving humans who have learned to stop (critically) thinking for themselves, as we have phones, tablets, computers and an endless range of screens with AI digital technology that we increasingly rely on to think for us.