AI Working Party
Higher education has been rapidly reshaped by the widespread availability of Large Language Models (LLMs) such as ChatGPT. These artificial intelligence applications are designed for text-processing tasks, trained on vast datasets, and optimised to generate responses based on user prompts. LLMs are capable of adapting to specific writing styles and producing a wide variety of outputs—from simple emails to critical essays and creative works.
Within the School of Communication and Arts, our response to this disruption has required us not only to adapt teaching and assessment practices in disciplines that rely heavily on written communication and visual media, but also to consider the broader implications of AI for our students, their future professions, and the diverse fields represented within the school.
Goals
To address these challenges, the School established the AI Working Party (AIWP) in 2025. This group brings together teachers and researchers across disciplines including public relations, strategic communication, creative and professional writing, digital media, and museum studies. Its purpose is to provide a coordinated response to the opportunities and risks posed by AI. The Working Party aims to act as a bridge between teachers, students, and industry, developing pedagogical strategies that uphold academic integrity while also preparing graduates for the workplace. By fostering adaptability and critical engagement with emerging technologies, the group seeks to equip students with the skills needed to thrive in a rapidly evolving technological landscape.
Events
At UQ’s School of Communication and Arts, we recognise the importance of building strong partnerships between universities and industry. Through our annual events, we bring industry leaders into dialogue with our academics, combining professional expertise with research and teaching strengths to explore AI’s impact on practice, creativity, and culture.
This collaboration allows us to consider not only the practical applications of AI in communication, media, and the arts, but also its broader implications—ethical, social, commercial and creative. It also opens space for exploring AI-inspired creative works, from drama and literature to digital storytelling, ensuring that students and researchers alike engage critically and imaginatively with this rapidly evolving technology.
Through these partnerships, the School is committed to preparing graduates who are equipped with the adaptability, critical insight, and creativity needed to navigate and shape a future in which AI will play a defining role.
AI Industry Forum 2025
The 2025 AI Industry Forum, held at UQ Central in November, showcased the AI Working Party’s unique strengths: its multidisciplinary composition and its commitment to a humanity-centred, rather than discipline-specific, approach. Members focussed on futures thinking, specifically the social and ethical implications of AI rather than individual AI tools. Attendees reflected this diversity, with representatives from UQ Press, the Qld AI Hub, the AI and Society group, communication organisations such as BBS Communications and Roland, the Queensland government, the Queensland Curriculum and Assessment Authority, Brisbane Airport Corporation, the Brisbane Times, and working writers.
A panel of literary, journalism, digital media and creative writing scholars responded to the provocation:
What would the world look like in 10 years if your discipline/sector got to shape the future of AI?
The Forum then shifted into smaller groups. Mixed groups of scholars and industry participants were asked to stretch their imaginations by evaluating a scenario and considering the judgments and values they wanted to see embedded within it. Real scenarios were used as reference points.
The event highlighted how attendees might see themselves as directing the AI, rather than the AI directing them. We hope the event will help us not only to empower our graduates, but also to empower the industries they will be working in.
UQ’s Framework
This resource presents the Working Party’s recommendations for engaging with AI at UQ's School of Communication and Arts. It outlines ethical considerations, provides background on industry expectations regarding graduates’ AI competencies, and offers strategies for integrating AI into both pedagogical practice and creative arts practice.
Thinking About AI in Communication and Arts
There is a kind of personification of LLM chatbots that works to give the illusion that we are communicating with something that mimics human intelligence. As end users of LLMs, our understanding of intelligence is based on our own embodied ways of moving through the world, processing sensory information into thoughts, ideas, and questions. As Karen Hao (2025) writes, the multimodal properties of LLMs base their sense of “intelligence” on the illusion of being able to “see”, “speak”, “hear”, and “read” (p. 93). But these are computational processes. If you talk to ChatGPT about what your favourite movie is, ChatGPT has not “seen” the movie, but will behave as though it has, based on writing about that movie that exists in its dataset, such as reviews or internet commentary. If you talk to ChatGPT about your favourite song, it has never “heard” the song, but will carry on a conversation with you about it anyway. And, if the song is obscure enough that the lyrics aren’t in its dataset, well, then ChatGPT might just make up completely new lyrics and quote them back to you so that it can act as if it knows what you’re talking about.
Our perception of an LLM’s “intelligence” is entirely based on pre-existing visual and written communication. For those of us in disciplines across communication and arts—literature, art history, film and television, professional and creative writing, digital media, drama, and journalism—that’s kind of our thing. Not only do we work to understand visual and written communication, but we produce it as well. So, if AI’s thing is also our thing: can AI replace us? Should we start to work with it so we can appease it in the off chance it might spare us when it takes over?
The answers are: “a little bit, but not really” and “a little bit, but probably not for that reason”. In that order.
Second thing first. As AI becomes inescapable in everyday life through its integration into online search, information distribution, information manipulation, productivity optimisation, and your digital footprint as training data, it’s something that we, as students, scholars, and practitioners of communication and meaning-making, need to at least understand, if not start to apply with intention. We should think of AI as a tool, just like any other. For now, at least, there needs to be a human in the mix to make it do the thing it’s supposed to do. A hammer can be used to build a house or cause bodily injury (intentionally or accidentally). Likewise, AI can be used with intention by its human operator to help or to harm. Our role is to think about how to imbue our students with approaches that consider the possibilities of what is helpful and harmful about AI. This needs to consider both the end user’s experience and the underlying infrastructure of AI, such as its environmental costs in water and energy consumption, and its human costs, such as copyright infringement, misinformation, data privacy, and the unseen, often exploitative, labour behind LLM training.
If a human needs to be at the helm, then what do we—as a field in the humanities—consider the role of the human to be? Yes, AI can boost human productivity by generating something in a fraction of a second that might have taken hours for a human to write. But, as Andrea Bubenik (2025) writes, it is process that is the essential human element to all art. In integrating AI as a tool in teaching, learning, and practicing communication and arts, the integration needs to be at the process stage, not a replacement for process that prioritises product. Or, as ChatGPT told us when we asked it what was the essential human element of creativity that it could not replicate, only humans are able to provide context that comes from lived experience and the relational communication of meaning-making to creative ideas. An LLM can create endless ideas: only a human can determine which ones matter, and why.
The case studies presented here outline real-world instances of applied AI use that we have engaged in or observed. They introduce possibilities for how AI can be a process tool in both good and bad practice, raise questions about considering the human elements of that process, and offer suggestions for how an intentional, process-based approach to AI can be integrated into teaching, creative, and critical practice. Whether you’re an AI skeptic or a seasoned user, we hope this gives you something to think about when it comes to AI, communication and arts, and your role in it as a student, teacher, scholar, or practitioner.
Please be assured that despite the preponderance of em dashes in this work, a human wrote it.
Sources:
Bubenik, A. (2025). Why AI will never create real art. Contact. https://stories.uq.edu.au/contact-magazine/why-ai-will-never-create-real-art/index.html
Hao, K. (2025). Empire of AI: Dreams and nightmares in Sam Altman’s OpenAI. Penguin Press.
Henrickson, L., & Zaphir, L. (2025, October 30). What happened to creativity in the classroom? Times Higher Education. https://www.timeshighereducation.com/campus/what-happened-creativity-classroom
Case Studies
Here, through a series of case studies of AI use, we raise some of the questions we are asking about AI in our disciplines: questions of ethics, of the industry applications of AI that await our graduates, of strategies for integrating AI into useful pedagogical practice, and of possible ways to integrate AI into critical and creative practice.
AI as a Creative Writing Tool
Helen Marshall
Some creative writers have used AI tools such as ChatGPT and Claude to assist with tasks ranging from structural planning and character development to providing editorial feedback. When used thoughtfully, these tools can reduce cognitive overwhelm, provide an on-demand interlocutor to discuss ideas, and enable rapid experimentation with different approaches. But using AI involves significant trade-offs in the development of the cognitive competencies writers need long term.
AI Applied to Creative Practice
The following scenario illustrates how a novelist might use AI across several stages of the writing process. In the planning stage, the writer asks ChatGPT to generate a synopsis and break the narrative into chapters, gaining better visibility into the structure of the novel without months of drafting. When stuck on a particular scene, they then ask ChatGPT to take on the persona of one of their characters to work through dialogue options. They also generate several versions of descriptive passages to compare against their own writing. When they reach the end of the first draft, they feed the completed manuscript back into the platform and ask for editorial feedback on consistency and pacing.
At each stage, the writer feels productive and in control. But this feeling of control may obscure how much of the generative labour (and the learning that comes with it) has been handed over.
Issues for Consideration
Each of the uses described above involves a trade-off between what is gained and what learning is bypassed, with specific cognitive competencies at stake.
For example, generating a synopsis early provides structural clarity while potentially bypassing the development of mental modelling, the capacity to hold a complex narrative architecture in mind across months of drafting, noticing patterns, contradictions and thematic resonances. Likewise, using AI for character dialogue may reduce the cognitive work of embodied simulation, the imaginative process of inhabiting a character’s viewpoint by drawing on lived experience and emotional memory, which is how empathetic imagination develops. Because AI tools do not make use of mental images, they struggle with a particular core competency, visuospatial reasoning, or the practice of mentally walking through spaces and translating that embodied sense into language. Finally, relying on AI-generated prose risks bypassing the development of aesthetic judgment, the accumulated capacity to recognise cliché and find fresh language, which develops through thousands of sentence-level micro-decisions.
There are also broader concerns about dependency. AI’s infinite responsiveness means writers can always get text to react to, potentially transforming writing from creating something from nothing to evaluating and modifying what is offered. This feels easier—and it is. But the productive struggle of facing the blank page is also where some of the most important learning happens.
Finally, because AI gravitates toward conventional phrasings and genre-typical constructions, writers who rely heavily on AI-generated prose risk their work converging toward the same patterns as other AI-assisted writers, with implications for both the development of their individual voice and the diversity of creative output across the field.
How Can Our Students Learn from This?
Understanding the power/control trade-off:
- Before using AI for a writing task, ask: what cognitive work does this task involve, and what competency does that work develop? Am I outsourcing a task I have already mastered, or one I am still developing?
Using AI as a thinking partner rather than a generator:
- Rather than asking AI to generate prose, use it as a conversational partner to think through craft problems. Ask it questions about how characterisation works, for example, and then do the writing yourself.
Evaluating prose critically:
- When evaluating any type of prose (not just AI-generated prose), practise naming specifically what works well and what is wrong with it, not just that it feels flat, but why. This critical practice actively develops the aesthetic judgment that AI use might otherwise bypass.
Distinguishing productive from unproductive struggle:
- Reflect on which parts of the writing process feel difficult and why. Is the struggle building understanding and skill or simply a barrier to completion? Developing the discernment to tell the difference is itself a crucial skill.
Understanding the professional landscape:
- The tools you use to write are a personal choice, but they also sit within a wider professional landscape. Make sure you understand how publishing houses, awards, agents, and literary magazines frame how AI use should be credited and whether it is acceptable.
Considering the ethics of copyright:
- AI language models are trained on vast datasets of existing creative work, much of it used without the explicit consent of the authors involved. When using AI-generated prose, consider the ethical implications of a system that has absorbed and redistributed the work of countless writers. This is not just a legal question. It is a question about what we as a society value in creative work and who we believe deserves recognition and compensation for it.
AI upskilling in journalism practice during breaking news
Anne Kruger
Journalism production and news dissemination have constantly responded to technological change throughout history, often in pursuit of efficiencies and connection with audiences (from the telephone to computers, the internet, and participatory social media).
AI technology has again provided numerous efficiency tools, which journalism students and practitioners are adopting across the production and dissemination processes.
However, in a time of national and global conflict, an urgent upskilling in the AI tools and apps that can be used to produce convincing false content has become a rapid, practice-led learning response.
2025 marked a rapid turning point in the capability of AI-generated images and videos to convince and confuse, as costs and access barriers fell. At the same time, newsrooms were responding with arguably the most focused specialist upskilling yet in the use and understanding of the technology. Initial training programs focused on efficiency tools for workflows; in parallel, a smaller group of journalists focused on how AI, in the wrong hands, could be used to trick and harm the public.
AI applied to learning in practice during breaking news
The following case study was a watershed moment in the rapid use of AI to create disinformation related to geopolitical conflict and tragedy:
On December 14, 2025, two gunmen opened fire on an event marking the Jewish festival of Hanukkah at Bondi Beach, killing 15 people, including a 10-year-old child. Fake images, disinformation, and conspiracy theories circulated online within hours of the massacre.
A journalism lecturer with industry experience in online verification was approached by high-profile national news media to crosscheck their investigations and to add expertise on how disinformation starts and spreads.
To begin, the lecturer referred to a research playbook on how disinformation starts and spreads.
Then, the lecturer turned to AI image generators and other tools, mapping how these could serve the motivations of disinformation agents and ease their content creation.
From there, the lecturer drew on the playbook’s account of why and how audiences ‘fall’ for mis- and disinformation, including their need for timely information and answers during credible-information voids.
The lecturer then analysed and mapped the new AI apps that audiences were turning to for quick answers, noting the design flaws that led these apps to give incorrect results.
The lecturer noted that the best methods for such journalistic verification combine human crosschecking with tools that use machine-readable labels to identify AI ‘breadcrumbs’.
The lecturer was then able to summarise the above for national media outlets, whose coverage addressed the following:
- How fake images and conspiracy theories began circulating following the attack.
- How AI-generated videos, superimposed images, and fabricated news sites given Australian-sounding mastheads were used to spread false narratives.
- Impacts on the community: the psychological effect of such images, which can create “us versus them” narratives to manufacture division and exploit vulnerable people during times of grief.
- The importance of media literacy: be aware of the role of professional journalists, who can use specialised tools such as Google’s SynthID and Adobe’s Content Credentials to identify AI-generated content, rather than turning to chatbots that are not designed to do this.
Newsrooms as AI verification classrooms
The Bondi massacre was sadly a watershed moment in Australia’s experience of rapid AI disinformation. Since then, war has broken out in the Middle East, with fact-checkers and verification experts noting that the level of AI mis- and disinformation is the highest they have ever seen.
Specialist verification units within newsrooms have been pulled off their regular rounds and are working around the clock on the influx of disinformation in order to provide credible news.
The verification experts and reporters with an interest in this area have a ‘toolbox’ of methods, online tools, and steps that they employ according to what is relevant and helpful at the time. These professionals test and adapt new tools as they emerge in an ongoing process, checking for reliability and efficacy.
Issues for consideration
The case study illustrates how the rapid use and proliferation of AI to create disinformation during geopolitical conflict and tragedy has driven a spike in journalists learning and practising new technology skills on the job, often in real time.
This has ramifications for journalism education in the immediate future: journalism educators must ensure their curricula are relevant and timely, and set against the firm foundations of the profession.
How can our students learn from this?
- The importance of the core concepts of journalism that bring reliable information to the public.
- Skills are not “one and done”: a constant-upgrading mindset is needed.
- Understanding AI through experiential learning: students must be given opportunities to fully explore tools, learn how to quickly adopt and switch between them, and practise regularly.
- An understanding of the higher-order media literacy and community education role that students will take on as they enter the journalism industry.
Understanding and Optimising Platform Metrics for Advertising and Value-Added Content
Melanie Piper
In the process of marketing a self-published book, AI can work as a useful tool for understanding platform metrics, optimising advertising budgets, and creating value-added content for social media platforms that follows Bookstagram and Booktok trends for genre-based works.
For someone new to using a platform for automated ad placement, an AI assistant like Claude or ChatGPT can be useful in reviewing the work to be advertised and identifying comparable authors or books in the genre to optimise targeted ad placement. After collecting metrics from an initial ad campaign, the AI assistant can provide feedback on the data (such as clickthrough rates, conversions, and cost per click) and provide suggestions to further optimise the advertising budget.
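Any AI feedback on campaign data is easier to sanity-check when you can compute the underlying metrics yourself. The short sketch below shows the three measures mentioned above; the figures are illustrative only, not drawn from a real campaign:

```python
def ad_metrics(impressions: int, clicks: int, conversions: int, spend: float):
    """Return clickthrough rate, conversion rate, and cost per click."""
    ctr = clicks / impressions          # how often an impression becomes a click
    conversion_rate = conversions / clicks  # how often a click becomes a sale
    cpc = spend / clicks                # average cost of each click
    return ctr, conversion_rate, cpc

# Illustrative figures for a first week of ads
ctr, conv, cpc = ad_metrics(impressions=10_000, clicks=150, conversions=6, spend=45.0)
print(f"CTR: {ctr:.2%}, conversion rate: {conv:.2%}, CPC: ${cpc:.2f}")
# → CTR: 1.50%, conversion rate: 4.00%, CPC: $0.30
```

Knowing these definitions also lets you question an AI assistant’s claims, for example whether a quoted conversion rate really is “below average” for the genre.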
AI Applied to Creative Practice
Following an initial run of ad placements on Amazon for the first week of release of a self-published romance novel, Claude determined that the conversion rate of clicks to sales or Kindle Unlimited page reads was below average. After reviewing the existing marketing material, Claude advised that there could be a disconnect between the book cover, blurb, and genre expectations that was preventing page views from converting to sales. Claude then noted the issues with the existing material and was able to provide suggestions. After making changes and re-publishing the book, ad performance improved with more conversions and a lower cost per click average.
To expand visibility and retain readers for future books, the author started an Instagram account and asked ChatGPT for suggestions for the kind of Bookstagram trend-aligned content that would appeal to their target readers. The resulting suggestions included both value-added content that gave a visual element to the storytelling and expanded on the novel, as well as strategies for promotion and posts designed to encourage active engagement.
Issues for Consideration
The indie author’s first attempt at creating their book cover was done using licensed stock photos and Photoshop. The suggestions Claude provided for the new cover were difficult to implement with the stock photo assets available and beginner design skills. Rather than hiring a freelance cover designer, the author decided to use AI to generate character art. This not only results in a potential loss of work from a digital media design professional but can infringe on the copyright of the photographers and artists whose work was used to train the image generation model.
By taking ChatGPT’s suggestions for content that would feed into Bookstagram trends, the author was not doing the critical work of observing what was taking place in the market and thinking about how their own work fit into that space. The AI-generated content suggestions did prompt the author to do more creative work in terms of characters and world building, such as trying to articulate the visual aesthetic of a character or build a Spotify playlist for the book. But the initial suggestions only followed existing trends, rather than prompting out-of-the-box thinking that might have better suited the specific product being marketed.
How Can Our Students Learn from This?
Inspire critical thinking about AI analysis:
- Ask AI to analyse a social media trend or type of content, then observe the trend first-hand with the AI findings in mind. Think about how well they match: is the AI data current? Does the AI understand what is engaging to specific audiences and how to consciously match a product with a marketing strategy?
Inspire out-of-the-box creative thinking:
- Ask AI to generate a list of suggestions for storytelling or content creation. Then use those to build upon through collaborative brainstorming until new ideas not on the AI’s radar are developed.
Learning to consciously and selectively use AI-generated visual assets:
- Think about how to use AI as a tool in their visual media making. Rather than entirely generating a media output from AI, how can prompts for individual assets be refined and produce successful results when original photographs or stock assets are not feasible or not suited to the task? How can graphic design work use AI assets in a way that complements and demonstrates skills in other areas, such as combining with original photography, visual placement, and typography?
What to Avoid with AI use in Communication and Arts
Melanie Piper
LLMs have the ability to recognise patterns and synthesise information to generate ideas. They can be a useful tool for brainstorming or getting started with creative thinking when used with intention and in dialogue with human critical thinking and creativity. What we want to avoid is students using AI to generate an idea for a creative or analytical project without significant contribution from their own ideas.
Bad Practice of AI Applied to the Classroom
A student was having trouble getting started on an idea for their assessment topic, which asked them to come up with a concept for a technology of the future and design a portfolio of material that would demonstrate how that technology might be represented by digital media. The student thought that since they weren’t asking AI to design the portfolio material, just to come up with the idea for the technology, it wouldn’t be a case of plagiarism or academic misconduct to use ChatGPT to give them an idea. They designed their portfolio based on the name and functions of the technology that ChatGPT described. When it came time for the class to present their technology ideas to their peers, the student discovered that three other students in their tutorial had created a technology with the same name and the same basic functions.
Issues for Consideration
Because the student—and the others in their tutorial who presented the same idea—had not colluded with peers when creating their project and had correctly cited their use of AI in the development stages of their project, this was not a case of plagiarism or academic misconduct. This was, however, a clear case of bad practice use of AI, since the students who all had the same idea had taken the first and only answer that the LLM had given them to base their portfolio on, using it as a final product rather than a starting suggestion. Because LLMs adjust their responses to match the needs of individual users, engaging with an AI “chat bot” like ChatGPT gives the sense of communicating with an individualised presence. But the underlying infrastructure of the LLM is based on the same dataset and pattern prediction, regardless of the end user. This can result in what may feel like an original idea created for a single user’s purposes being given as an answer to multiple users who feed the same or similar prompt (such as an assignment task brief) to the LLM.
If we consider what this might do on a larger scale, does this mean that students using AI to generate ideas without their own individual creative contributions will be engaging in the same practice in the workforce? What will this do to media industries that are already risk averse to creative or challenging ideas? What will this do to media consumers who are already fed content that is algorithmically linked as similarly catered to their interests? What happens when the content created by professionals using AI becomes part of the updated dataset that future iterations of LLMs are trained on?
How Can Our Students Learn from This?
Co-creating ideas and the value of their own input:
- Take a dialogic approach to using an LLM to generate ideas. Ask it for multiple solutions to a problem, and get into a discussion with the LLM about what interests you about one or more of those solutions. Add something to the way the AI is framing the answer: a “What if we did…” or “Maybe we can change one thing…” type of prompt can guide the generation of ideas in a different direction.
Intentionally using AI to not directly answer the problem:
- Rather than feeding AI a problem or question and using the generated response as the solution, engineer prompts that create an output that requires the user to engage creatively. For example, use an LLM to rethink a topic using metaphor or juxtaposition. Asking an LLM to suggest two things that are seemingly irrelevant to the topic at hand encourages the user to engage with abstract thinking and make connections between unrelated ideas through a process of original input and refinement to potentially develop a new approach to a topic (Henrickson and Zaphir, 2025).
Intentionally use AI to explore ideas, not solve problems or answer questions:
- Prompting an LLM to ask the user questions that are specifically designed to generate “out of the box” thinking activates the user’s own creativity and critical thinking when approaching an idea or concept. For example, prompting ChatGPT to “Ask me a series of unhinged “what if” questions about [insert topic here]” encourages the user to break out of standard ways of thinking about a topic. While this approach might not directly translate to generating ideas for an assignment or project in the way that feeding a task brief into ChatGPT would, the thought experiments that this kind of prompt encourages creates space for imaginative and playful thinking, which can, in turn, help the user generate new approaches and ideas that can be applied to the task at hand (Henrickson and Zaphir, 2025).
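The dialogic, co-creative prompting described above can be made concrete as a message sequence. The sketch below uses the role/content message shape common to LLM chat APIs; the prompts themselves are hypothetical examples, not prescribed wording:

```python
# A hypothetical dialogic prompt sequence: the user builds on the model's
# output instead of accepting its first answer as a finished idea.
# The {"role": ..., "content": ...} shape follows common chat-API conventions;
# in a live session the model's replies would be interleaved as "assistant" turns.

conversation = [
    {"role": "user",
     "content": "Suggest three concepts for a short documentary about urban noise."},
    # ...model replies with three concepts...
    {"role": "user",
     "content": "What if we told the second concept from a night-shift worker's perspective?"},
    # ...model reframes the concept...
    {"role": "user",
     "content": "Maybe we can change one thing: set it in a regional town, not a city."},
]

# Every turn after the first adds the user's own framing ("What if...",
# "Maybe we can change..."), steering the generation rather than outsourcing it.
steering_turns = [m["content"] for m in conversation[1:]]
```

The point of the structure is that the human contribution grows with each turn, rather than ending at the first prompt.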
Sources:
Henrickson, L., & Zaphir, L. (2025, October 30). What happened to creativity in the classroom? Times Higher Education. https://www.timeshighereducation.com/campus/what-happened-creativity-classroom



