Our evaluations offer insights into the implications of emerging AI, viewed through the Library's areas of expertise. We focus on how AI is changing the ways people find, assess, and work with information across increasingly complex digital environments. Our goal is to support confident, critical, and principled engagement with AI.
We evaluate AI that our community is already using: in freely available platforms, through personal subscriptions, or embedded in library-managed systems.
Library AI Evaluations are designed to inform, not endorse, AI use. Each evaluation reflects Deakin's Generative AI Framework and Principles to support principled engagement with AI.
Apply your own professional, academic, or student judgement to assess how different AI fits your needs, responsibilities, and values.
AI Evaluations are grounded in Deakin's Generative AI Framework and Principles and draw on Library expertise. We take a structured, principled approach that supports our community in engaging confidently but critically with AI.
Each evaluation contributes to the Library’s advisory knowledge base and reinforces our enablement role in Deakin’s digital fluency goals.
These evaluations are distinct from enterprise-level technology assessments or procurement processes, which are delivered through separate, established University channels. They are not designed to guide subscription decisions, though they may inform them in library-specific contexts. Our focus is on AI that our community uses freely, subscribes to personally, or engages with through library-provided platforms.
Each AI Evaluation follows a consistent process to ensure that evaluations are rapid but also rigorous, evidence-informed, and grounded in our professional knowledge areas.
Purpose of this step
Establish a single, traceable space for every artefact—notes, evidence, drafts, and the final report—before any evaluation work begins.
What’s included
How we do it
Where it appears
Internal only—this step is not visible in the public report but underpins version control and transparency.
Recommended time
≈ 10 minutes (one-off admin).
Purpose of this step
Ground the evaluation in up-to-date sector context and source material.
What’s included
How we do it
Category | Sources / Actions | Findings |
---|---|---|
Other Deakin evaluations | | |
Vendor transparency | | |
Cost & access models | | |
Where it appears
Feeds the Overview and supports citations throughout the report.
Recommended time
≈ 1 hour per evaluator.
Purpose of this step
Capture a concise “at-a-glance” profile of the AI tool, model, or function.
What’s included
Field | Typical content |
---|---|
Name & link | Direct URL or access point |
Vendor / Host | Organisation(s) responsible |
Primary function | Clear, outcome-focused description |
Impacted areas | Research, education, enterprise, etc. |
Alternatives | Comparable tools & prior evaluations |
Evaluators & dates | Currency flags and attribution |
How we do it
Where it appears
First page of every published evaluation report.
Recommended time
≈ 30 minutes per evaluator.
Purpose of this step
Apply professional library expertise to surface benefits, risks, and affordances.
What’s included (adaptable menu)
How we do it
Where it appears
Appendix (full commentary) and Summary Findings (distilled points).
Recommended time
≈ 3 hours total per evaluator.
Purpose of this step
Turn evidence into reader-friendly insights (Key advice, Summary Findings, Considerations & implications).
How we do it
Where it appears
Central body of the published report.
Recommended time
≈ 2 hours per evaluator.
Purpose of this step
Produce a peer-reviewed, citable report and share findings widely.
How we do it
Where it appears
Downloadable report plus overview on the public guide.
Recommended time
≈ 2 hours (polish + peer review).
We provide structured, evidence-informed, and principled evaluations of AI that intersects with information science. Each evaluation strengthens the Library's AI literacy and advisory knowledge base - an essential resource for our scholarly services outreach and for our own decision making.
Through this work, Deakin Library enables our community to make informed decisions about their use of AI. This aligns with our commitment to equipping our people to live, learn, and work in a digital society (JISC Digital Capabilities Framework). Where appropriate, these evaluations feed into broader Deakin conversations and knowledge repositories on AI, strengthening institutional awareness and positioning the Library as a key player in the university’s digital future.