Kayt Davies: AI will take media jobs but will free up time for fun stuff


COMMENTARY: By Kayt Davies in Perth

I wasn’t good at French in my final year of high school. My classmates had five years of language studies behind them. I had three. As a result of my woeful grip on the language, I wrote a truly bad essay in my final French exam.

The more I read of ChatGPT’s output, the more I am reminded of that final French essay. In French I could not express the complex ideas I wrote in my English essays, so instead I repeated the question a lot and clumped together words and phrases that sounded like they kind of went together. There was no logical thread, no cogent argument.

It was a bit like the perplexing, digressive, buzzword-rich oratory stylings of Donald Trump.

I have been a university lecturer, tutor and marker for coming on two decades now, and late last year a student submitted an essay that I sent off to the university integrity team, explaining that it was “bad in a new and different way”.

According to Turnitin (our detection software), it wasn’t plagiarised. It didn’t read like it had been written in another language and run through Google Translate. The grammar was impeccable but there were glaring non-sequiturs and it danced around the question, which it repeated several times, but didn’t actually answer.

I didn’t hear back from the integrity people. They probably didn’t know what to do about it and may have been busy, as it was the end of the teaching year. I had also said it wasn’t urgent, as it had failed against my marking key, meaning the student, whose marks had been poor all along, would have to repeat the unit anyway.

New teaching year
A couple of weeks later, ChatGPT was released to the public, joining the dozen or so other AI writers on offer to people who want AI to string their sentences together.

Journalism lecturer Dr Kayt Davies . . . graduates will need to be focused on things only humans can do to make the world a better place. Image: Kayt Davies/Curtin University

Now, heading back into a new teaching year, having spent the summer chatting with ChatGPT, I am in conversations with my colleagues about how we should proceed. I teach journalism and my colleagues are from a range of arts and communications disciplines.

Collectively our feelings are mixed, but I’m looking forward to letting my students know about this leap forward in communications technology.

I plan to explain it in the context of the other leaps and lurches I’ve lived through.

This won’t be the first technology to make swathes of workers redundant. I remember the angst in my industry about digital typesetting usurping the compositors and typesetters, replacing vast numbers of them with far fewer graphic designers.

ChatGPT will undoubtedly take some jobs, but mostly the donkey work of the writing professions. It frees us up to do the innovative, fun stuff. Also, while ChatGPT is big and shiny, we have known for a long time that AI writing is on its way.

In 2018, Noam Lemelshtrich Latar summed up the progress in our field to date in his book Robot Journalism: Can human journalism survive? He documented the many workplaces already using AI writing software and concluded that there was still work to be done. There still is.

Essay capacity underwhelming
Much of the media racket over ChatGPT this summer has been about its capacity to write essays, so I have read several of the essays it has written. I can happily report that I am underwhelmed by them, but also fascinated by the challenge we face in getting better at describing the ways in which they are bad.

This task is part of the mission humanity more broadly is facing in figuring out what it is that people can do that robots can’t. If robots/AI writers are going to do the donkey-work writing in workplaces, that is not something we need to be training graduates to do.

Graduates need to be able to do things an AI language model can’t, and they need to be able to articulate their skill sets.

So, I will be generating AI content in my classrooms and we are going to set to work pulling it apart, in search of its failings and foibles. We’ll do this together and learn about it and ourselves as we go.

Another big theme in the media hype has been ChatGPT’s ability to “do the marking for us”. This, in my opinion, is rubbish. Sure, you can copy-and-paste some text into ChatGPT and ask it for a comment and a grade, but every university I know of demands more of the markers than a simple comment and grade.

If only it were that simple. But, no. We have to describe the specific criteria every piece of work will be assessed against, and the expectations attached to each criterion that result in the award of a specific number of marks. This forms a table called a rubric, which is embedded in our unit websites. Getting the assignments and rubrics out of that software and into ChatGPT would take longer than the tight time allocation we get to mark each piece.

Besides, the software we mark in is already replete with time-saving tricks, like a record function so you can speak rather than type feedback, and the ability to save commonly used comments.

‘Getting to know students’
In addition, failing to read the assignments would inhibit the “getting to know your students” process that marking their work facilitates. I imagine it as the sort of drain-circling behaviour used by failing teachers on their way out of the profession: student assessment of teachers who cheat in their marking is going to be on par with teacher assessment of students who cheat in their assessments.

Cheating is a key word here. While ChatGPT is new, universities have longstanding policies and charters that use words like “honesty and fairness” in relation to academic integrity. These are being underscored and highlighted in preparation for the start of semester and hyperlinked to paragraphs about AI writing.

Honest use of ChatGPT will involve disclosure about how it was used, and what measures have been taken to verify its content and iron out its wrinkles. It then joins the swathe of online tools we encourage our students to use to prepare them for the professions they’ll enter when they graduate.

For my first-year students, these will be professions that have adjusted to the existence of AI language models, and so their new-graduate brilliance will need to be focused on things only humans can do to make the world a better place. This is how I’m going to frame it in my classes when our next semester starts.

Dr Kayt Davies is a lecturer in journalism at Curtin University. She is a contributor to Pacific Journalism Review. The article was first published in The West Australian and is republished by Café Pacific with the author’s permission.
