GenAI limitations

Text Outputs

How could you use it?

Jumpstart a presentation assessment by generating a loose outline of talking points. Record a group meeting and ask for a synopsis and the actions arising. Upload your unit study notes and create practice exam questions. Having trouble understanding a complex theory? Ask for metaphors to explain it. Take your original content and get it copyedited. Text GenAI can do all this and more for you.

What are text outputs?

New GenAI tools are created and released every day. With hundreds of tools to choose from, selecting one depends on the work output you need. This page focuses on text outputs from GenAI tools and the limitations you can encounter. It’s important to recognise these limitations and identify how they affect the reliability of GenAI-produced output.

Why? Because not questioning the output of GenAI tools can have disastrous consequences. Play the Spot the Troll game and reflect on the ripple effects of not being aware of GenAI limitations.

Specific risks and limitations

Interested in learning more about specific risks and limitations of text-based GenAI output? Click on the plus (+) icons below to open the accordion and explore the risk area you’re keen to know more about.

Ethical Concerns

There are multiple ethical concerns, spanning a wide range of issues, that affect creating, sharing or even just reading GenAI text outputs. Some of these concerns are explored further on this page (e.g. bias, fairness, copyright and legal issues, misuse and manipulation). As a student, you’re expected to develop critical thinking skills, which means interrogating how a GenAI tool was made, who owns it, how others are using it, and how you can use it within an ethical framing. Overall, it’s about being a global citizen who engages ethically and productively with technologies.

In relation to text-based GenAI outputs, we want you to consider ethical questions like:

  • Are you adding more noise to the mis-, dis- and mal-information sphere? Find out more in the Misinformation Guide
  • Is it your information to upload to a tool?
  • Is the information private or sensitive, and do you want it added to a large language model dataset?
  • Do you know where your information is going?
  • By using text generation, are you undermining your creative writing or learning?
  • Are you comfortable using a text-generation tool that may have stolen the work of writers in developing its training data?
  • Not everyone has access to text-based GenAI tools; what does that mean for your use?
  • Have you thought about the energy consumption of large AI models like Bing Chat, ChatGPT or Google Bard?
  • Who is maintaining the text-based GenAI tool, and what are their working conditions?

Watch this video for a leading AI expert’s opinion on ethical GenAI use.

Misuse and Manipulation

Be aware that text-based GenAI tools can be misused and manipulated in a range of ways. In relation to text-based GenAI outputs, some examples of what misuse and manipulation might look like include:

  • Impersonating the written voice of individuals or organisations for use in scams or phishing attacks
  • Impersonating the written voice of individuals or organisations to discredit them
  • Creating biased narratives (aka propaganda) in text-based outputs to persuade or convince people of a particular ideology or way of thinking
  • Using fabricated information that looks authoritative in an essay or report
  • Taking credit for created content without acknowledging the use of GenAI.

Watch this video that explores what happens when GenAI text generates false information that isn’t checked. To appreciate the vast depth of the misinformation crisis, explore the library’s Misinformation Guide.

Inconsistent or Not-Quite-Right Quality

The quality of GenAI text can vary and is affected by unclear or ambiguous instructions. Achieving consistently high-quality results across diverse applications therefore remains a challenge.

When I asked Bing's Copilot (available free to you as a student or staff member of Deakin) to produce a short, engaging poem about AI and Australia, this is what it produced:

Binary whispers echo 'neath gum tree shade,
AI dances with Dreamtime, a digital serenade.
In the land of kangaroos and Southern Cross above,
Wisdom blooms anew—a pixelated love.

Looking at the poem above, it is easy to see how GenAI tools can struggle to comprehend or reproduce content involving sarcasm, tone or cultural nuance. This is why prompting is so important: the quality of your prompt affects the quality of the output.

To get better at prompting, practise using our prompt engineering module.

Bias and Fairness

Be conscious of in-built bias arising from underlying patterns in the information a GenAI tool is trained on. If the data used to train a generative text model is biased, the generated text will exhibit the same biases. This could reinforce stereotypes, discrimination or other forms of bias present in the training data.

Legal and Copyright Issues

Using GenAI text verbatim can amount to plagiarism, forgery or infringement of copyright and intellectual property rights, and raises legal issues, as GenAI tools reproduce patterns from existing work rather than creating original work.

Remember, you need to acknowledge any work that contains generative AI content, as per the Deakin Guidelines.


How to minimise the risks

Hannah has been using ChatGPT to explore some background information for her report on the impacts of climate change on agriculture. What are the risks with using ChatGPT and what can Hannah do to minimise them?

In the activity below, categorise which items are risks and which are minimising strategies when using GenAI. You can rerun the activity as many times as you like.


Stories

Working through this page, you’ve read, watched or listened to different scenarios and stories that unpack the relationship between GenAI and text. Below is a final real-world tale that shows both the potential of generative text and the ethical implications that can arise. It also touches on questions of trust and misinformation.

Read the text story below about how GenAI can hallucinate references when asked to support information it has given you. It is important to include sources for your information; however, GenAI can simply make things up, so you must always check the references it provides. Find out how below.

Remember and reflect

Key takeaway

Not everything you read is real or trustworthy. GenAI can generate amazing stories, poems, reports and ideas; however, highly biased text and hallucinated content are known problems for text GenAI tools.

Consider

Here are some key points to keep in mind when producing or engaging with generated text: