Jumpstart a presentation assessment by generating a loose outline of talking points. Record a group meeting and ask for a synopsis and actions arising. Upload your unit study notes and create practice exam questions. Having trouble understanding a complex theory? Ask for metaphors to explain it to you. Take your original content and get it copyedited. Text genAI can do all this and more for you.
New genAI tools are created and released every day. With hundreds of tools to choose from, selecting one depends on the work output you need. This page focuses on text outputs from genAI tools and the limitations you may encounter. It’s important to recognise these limitations and identify how they affect the reliability of genAI-produced output.
Why? Because not questioning the output of genAI tools can have disastrous consequences. Play the Spot the Troll game and reflect on the ripple effects of not being aware of genAI limitations.
Interested in learning more about specific risks and limitations of text-based genAI output? Click on the plus (+) icons below to open the accordion and explore the risk area you’re keen to know more about.
Ethical concerns spanning a wide range of issues affect creating, sharing or even just reading genAI text outputs. Some of those concerns are explored further on this page (e.g. bias, fairness, copyright and legal, misuse and manipulation). As a student, you’re expected to develop critical thinking skills, which means you need to interrogate how the genAI tool was made, who owns it, how others are using it, and how you can use it within an ethical framing. Overall, it’s about being a global citizen who engages ethically and productively with technologies.
In relation to text-based genAI outputs, we want you to consider ethical questions like:
Watch this video for a leading AI expert’s opinion on ethical genAI use.
Be aware that genAI text tools can be misused and manipulated in a range of ways. In relation to text-based genAI outputs, some examples of what misuse and manipulation might look like include:
Watch this video that explores what happens when genAI text generates false information that isn’t checked. To appreciate the vast depth of the misinformation crisis, explore the library’s Misinformation Guide.
The quality of genAI text can vary, and it is affected by unclear instructions and ambiguity. Achieving consistently high-quality results across diverse applications therefore remains a challenge.
I asked Bing's Copilot (available free to you as a student or staff member of Deakin) to produce a short, engaging poem about AI and Australia. This is what it produced:
Binary whispers echo 'neath gum tree shade,
AI dances with Dreamtime, a digital serenade.
In the land of kangaroos and Southern Cross above,
Wisdom blooms anew—a pixelated love.
Looking at the poem above, it’s easy to see how genAI tools struggle to comprehend or reproduce content involving sarcasm, tone or cultural nuances. This is why prompting is so important: the quality of your prompt directly affects the quality of the output.
To get better at prompting, practise using our prompt engineering module.
Be conscious of in-built bias arising from underlying patterns in the information the genAI tool is trained on. If the training data behind a generative text model is biased, the generated text will exhibit those biases too. This can reinforce stereotypes, discrimination, or other forms of bias present in the training data.
Using genAI text verbatim means you risk creating plagiarised content, forgeries, or content that infringes copyright and intellectual property rights, raising legal issues, because genAI tools can’t create original work.
Remember, you need to acknowledge any work that contains generative AI content, as per the Deakin Guidelines that can be found here.
Hannah has been using ChatGPT to explore some background information for her report on the impacts of climate change on agriculture. What are the risks with using ChatGPT and what can Hannah do to minimise them?
In the activity below, categorise which items are risks and which are minimising strategies when using genAI. You can rerun the activity as many times as you like.
Working through this page, you’ve read, watched or listened to different scenarios and stories that unpack the relationship between genAI and text. Below is a final real-world tale that shares the potential of what generative text can do and the ethical implications that can arise. It also touches on questions of trust and misinformation.
Read the text story below about how genAI can hallucinate references when asked to support information it has provided you with. It’s important to include sources for your information; however, genAI can simply make things up, so you must always check the references it provides. Find out how below.
Key takeaway
Not everything you read is real or trustworthy. genAI can generate amazing stories, poems, reports and ideas. However, highly biased text and hallucinated content are known problems for text genAI tools.
Consider
Here are some key points to keep in mind when producing or engaging with generated text: