

Library AI Evaluation approach

Our evaluations offer insights into the implications of emergent AI, considered through the Library’s areas of expertise. We focus on how AI is changing the ways people find, assess, and work with information across increasingly complex digital environments. Our goal is to support confident, critical, and principled engagement with AI.

We evaluate AI that our community is already using, whether through freely available platforms, personal subscriptions, or tools embedded in library-managed systems.

Key advice

Library AI Evaluations are designed to inform, not endorse, AI use. Each evaluation reflects Deakin’s Generative AI Framework and Principles to support principled engagement with AI.

Apply your own professional, academic, or student judgement to assess how different AI tools fit your needs, responsibilities, and values.


Evaluation approach

AI Evaluations are grounded in Deakin’s Generative AI Framework and Principles and leverage Library expertise. We take a structured, principled approach aimed at supporting our community to engage confidently but critically with AI.

Each evaluation contributes to the Library’s advisory knowledge base and reinforces our enablement role in Deakin’s digital fluency goals.

These evaluations are distinct from enterprise-level technology assessments or procurement processes, which are delivered through separate, established University channels. They are not designed to guide subscription decisions, though they may inform them in library-specific contexts. Our focus is on AI that our community may be using freely, buying personal subscriptions for, or engaging with through library-provided platforms.

Library areas of expertise informing AI Evaluations

  • Expert searching and information discovery
  • Digital literacies or digital capabilities development
  • Data and information management (excluding areas outside our subject matter expertise, such as cybersecurity)
  • Open Scholarship (including Open Access and Open Educational Resources)
  • Copyright
  • Publishing landscape
  • Information landscape

AI evaluation process

Each AI Evaluation follows a consistent process to ensure that evaluations are rapid but also rigorous, evidence-informed, and grounded in our professional knowledge areas.

Explore the steps below to see how we work and our processes.

1. Spin up an evaluation workspace

Purpose of this step
Establish a single, traceable space for every artefact—notes, evidence, drafts, and the final report—before any evaluation work begins.

What’s included

  • Confluence AI evaluation page generated from the standard template
  • SharePoint/Drive folder for raw evidence, exports, and the finished PDF
  • Pre-populated Information collection checklist (headings and instructional guidance only)
  • Pre-populated Overview table (headers and instructional guidance only) to steer later data entry
  • Pre-populated Library AI Evaluation table (headers, criteria, and instructional guidance only) to support assessment
  • Pre-populated Key advice, Summary findings, Considerations and implications sections (with instructional guidance and worked examples)
  • Automated link-back to the master evaluation register for instant discoverability by library staff

How we do it

  1. In the Library AI Evaluations Confluence space, click Create → Library SME Evaluation to generate a new page.
    • The template auto-prompts framing, criteria, and version-history macros.
  2. Create a companion folder in the AI Evaluations SharePoint space and name it “Toolname AI eval-Month Year” (see the sketch after this list).
  3. Add a link to the folder in the Confluence evaluation workspace so peers can locate artefacts quickly.
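
As a concrete illustration of the folder naming convention in step 2, here is a minimal Python sketch (the helper is hypothetical and not part of the Library workflow; folders are created manually in SharePoint):

    from datetime import date
    from typing import Optional

    def evaluation_folder_name(tool_name: str, when: Optional[date] = None) -> str:
        """Build a folder name in the 'Toolname AI eval-Month Year' format."""
        when = when or date.today()
        return f"{tool_name} AI eval-{when.strftime('%B %Y')}"

    # Example with a hypothetical tool name:
    # evaluation_folder_name("ExampleTool", date(2025, 3, 1)) -> "ExampleTool AI eval-March 2025"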

Where it appears
Internal only—this step is not visible in the public report but underpins version control and transparency.

Recommended time
≈ 10 minutes (one-off admin).

2. Gather background evidence

Purpose of this step
Ground the evaluation in up-to-date sector context and source material.

What’s included

  • Sector, institutional, and policy scans
  • Scholarly & grey-literature sweeps
  • Terms-of-use / licensing checks
  • Comparable tools & prior evaluations

How we do it

  1. Use the structured search checklist saved in the workspace. Each row of the checklist records a Category, the Sources / Actions to check, and a Findings field completed as evidence is gathered (see the illustrative sketch after this list). Categories include:
    • Other Deakin evaluations: determine whether other Deakin teams (e.g., Digital Learning Environments, Infrastructure & Digital) have evaluated, or are in the process of evaluating, the tool from their domain perspectives.
    • Vendor transparency: is it easy to find and understand information about how the vendor uses data and information (inputs and outputs)?
    • Cost & access models: identify and briefly list the subscription structure, free vs paid tiers, and institutional licenses; note access to information and digital divide considerations; and record whether the tool is freely available or part of Deakin software options.
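
A minimal sketch of that checklist structure in Python, illustrative only (the record type and the wording of the prompts are assumptions, not the Library’s actual checklist format):

    from dataclasses import dataclass

    @dataclass
    class ChecklistItem:
        """One row of the structured search checklist."""
        category: str
        sources_actions: list[str]
        findings: str = ""  # completed as evidence is gathered

    checklist = [
        ChecklistItem(
            category="Other Deakin evaluations",
            sources_actions=["Have other Deakin teams evaluated, or are they evaluating, the tool?"],
        ),
        ChecklistItem(
            category="Vendor transparency",
            sources_actions=["How does the vendor use data and information (inputs and outputs)?"],
        ),
        ChecklistItem(
            category="Cost & access models",
            sources_actions=[
                "Subscription structure, free vs paid tiers, institutional licenses",
                "Access to information and digital divide considerations",
                "Freely available, or part of Deakin software options?",
            ],
        ),
    ]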

Where it appears
Feeds the Overview and supports citations throughout the report.

Recommended time
≈ 1 hour per evaluator.

3. Build the Overview snapshot

Purpose of this step
Capture a concise “at-a-glance” profile of the AI tool, model, or function.

What’s included

Fields and typical content:

  • Name & link: Direct URL or access point
  • Vendor / Host: Organisation(s) responsible
  • Primary function: Clear, outcome-focused description
  • Impacted areas: Research, education, enterprise, etc.
  • Alternatives: Comparable tools & prior evaluations
  • Evaluators & dates: Currency flags and attribution
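
A minimal sketch of how this snapshot could be captured as a simple record (illustrative only; the values are placeholders, not a Library template):

    # Overview fields with placeholder values, populated early and refined as evidence emerges.
    overview = {
        "Name & link": "https://example.com/tool",  # placeholder access point
        "Vendor / Host": "Organisation(s) responsible",
        "Primary function": "Clear, outcome-focused description",
        "Impacted areas": ["Research", "Education", "Enterprise"],
        "Alternatives": ["Comparable tools", "Prior evaluations"],
        "Evaluators & dates": "Evaluator names; first evaluated Month Year",
    }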

How we do it

  • Populate early, refine as evidence emerges.
  • Keep plain language; remove unused field labels.
  • Hyperlink out for verification.

Where it appears
First page of every published evaluation report.

Recommended time
≈ 30 minutes per evaluator.

4. Evaluate through library lenses

Purpose of this step
Apply professional library expertise to surface benefits, risks, and affordances.

What’s included (adaptable menu)

  • Expert searching & information retrieval
  • Metadata & data management
  • Copyright & licensing
  • Digital literacies & capability building
  • Human-centred systems & bias
  • Vendor & publishing landscape
  • Open scholarship & open practice
  • User support & training

How we do it

  1. Primary evaluators draft bullet commentary per lens.
  2. Bring in colleagues for specialist input as needed.
  3. Store notes in the “Domain Notes” table.

Where it appears
Appendix (full commentary) and Summary Findings (distilled points).

Recommended time
≈ 3 hours total per evaluator.

5. Draft the evaluation report

Purpose of this step
Turn evidence into reader-friendly insights (Key advice, Summary Findings, Considerations & implications).

How we do it

  • Keep Summary Findings descriptive; interpretation lives in Considerations.
  • Anchor every claim to the Evidence Log.

Where it appears
Central body of the published report.

Recommended time
≈ 2 hours per evaluator.

6. Finalise, publish & share

Purpose of this step
Produce a peer-reviewed, citable report and share findings widely.

How we do it

  1. Copy content into the report template; remove instructional notes.
  2. Peer review for rigour and alignment with GenAI principles.
  3. Export to PDF and add a version-history banner (first evaluation date + update log; see the example after this list).
  4. Publish to the LibGuide and notify stakeholders via agreed channels.
  5. Schedule cyclical reviews to maintain currency.
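
As an illustration of what the version-history banner in step 3 might record (the structure below is an assumption, not a prescribed format):

    # Illustrative version-history banner entries (values are placeholders).
    version_history = [
        {"version": "1.0", "date": "Month Year", "note": "First evaluation published"},
        {"version": "1.1", "date": "Month Year", "note": "Cyclical review: links and access model updated"},
    ]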

Where it appears
Downloadable report plus overview on the public guide.

Recommended time
≈ 2 hours (polish + peer review).


Deakin Library and digital capabilities

We provide a structured, evidence-informed, and principled evaluation of AI that intersects with information science. Each evaluation strengthens the Library’s AI literacy and advisory knowledge base - an essential resource for our scholarly services outreach and for our own decision making.

Through this work, Deakin Library enables our community to make informed decisions about their use of AI. This aligns with our commitment to equipping our people to live, learn, and work in a digital society (JISC Digital Capabilities Framework). Where appropriate, these evaluations feed into broader Deakin conversations and knowledge repositories on AI, strengthening institutional awareness and positioning the Library as a key player in the university’s digital future.