

Elicit - Library AI Evaluation

This evaluation is part of Deakin Library’s AI Evaluation series, providing structured, practice-informed insights into emerging AI technologies. Evaluations are designed to support critical decision-making and responsible engagement with AI, guided by Deakin’s Generative AI Framework and Principles.

Our findings and assessment are shared to inform your judgement. Evaluations are not an endorsement.

Evaluations like this one are not about deciding whether the AI “works,” but about understanding what kind of thinking and behaviours it encourages and whether that aligns with our pedagogical, scholarly, and professional values.

Key advice

Use Elicit for early-stage exploration of academic literature, but not for formal synthesis or in-depth research.

Elicit helps speed up literature discovery by using conversational-style search to surface and summarise research. It extracts structured summaries of papers, often highlighting study focus and methods, which supports early-stage exploration or initial scoping.

Avoid uploading third-party or published content, as this may breach copyright or licensing conditions.


Elicit overview

Tool: Elicit: the AI Research Assistant - Basic plan (free)
Vendor: Elicit (Co-founder and CEO: Andreas Stuhlmuller)
Primary function:
  • Literature discovery
  • Literature summarisation
  • Literature review / systematic review automation
  • Data extraction from studies
  • Research reports
Impacted areas:
  • Research with potential for educational impact
  • Research integrity and researcher development
  • Library research services
Existing alternatives:
Summary findings

Evaluation snapshot

  • Elicit is a semantic search tool that leverages the Semantic Scholar database. It enables literature discovery and summarisation, and offers automation for literature, systematic, and rapid reviews as well as research report generation.
  • Freemium product that includes Enterprise options. 
  • An AI-enabled systematic review feature was added February 2025 to aid in searching, screening, data extraction, and report generation, promoting Living Reviews. It integrates with Zotero but limits screening to 500 papers. Searches are not directly reproducible and currently lack transparency required for systematic review reporting. 
  • The recently introduced Elicit Reports feature, with deep research functionality, enables rapid reviews. Free-version users can screen up to 50 papers and extract data from 8 papers, with reports following the PRISMA flowchart structure. However, Elicit Reports require a clearly defined research question and do not support overview-style questions.

Benefits

  • Good level of transparency regarding privacy and data collection, usage, and storage (including international servers). A named privacy contact is provided for enquiries about Elicit's management of personal information.
  • Clear information is provided on data sources (Semantic Scholar), but the provenance of individual records is unclear (e.g. partner attribution is not specified).
  • Elicit workflows enhance critical thinking and study evaluation but lack formal critical appraisal functionalities, requiring researchers to assess study quality and relevance.  
  • Transparency in research is promoted, aligning with ethical standards from the Australian Code for the Responsible Conduct of Research. Elicit offers a transparent review screening process using large language models (LLMs), preserving human decision-making and enabling efficient data verification. Academic integrity is also encouraged through citation practice recommendations.
  • A key advantage is integration with tools like Zotero, facilitating effective referencing and source management. Elicit also supports open practices by allowing users to export results into a CSV for further analysis and share projects internally and externally. 
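Exported CSVs can be post-processed with standard tools before further analysis. The sketch below, using only Python's standard library, deduplicates records and filters by year; the column names ("Title", "Year", "DOI") are assumptions for illustration only and should be checked against the headers of an actual Elicit export.

```python
# Minimal sketch of post-processing a CSV export with Python's standard
# library. Column names ("Title", "Year", "DOI") are hypothetical; check
# the headers of your actual export file.
import csv
import io

# Stand-in for the contents of a downloaded export file.
sample_export = """Title,Year,DOI
Study A,2021,10.1000/a
Study B,2019,10.1000/b
Study A,2021,10.1000/a
"""

def dedupe_and_filter(text, min_year):
    """Drop duplicate records (by DOI) and keep studies from min_year onward."""
    seen, kept = set(), []
    for row in csv.DictReader(io.StringIO(text)):
        if row["DOI"] in seen:  # skip records already seen
            continue
        seen.add(row["DOI"])
        if int(row["Year"]) >= min_year:
            kept.append(row["Title"])
    return kept

print(dedupe_and_filter(sample_export, 2020))  # → ['Study A']
```

This kind of scripted check supports the transparency practices discussed elsewhere in this evaluation, since filtering decisions are recorded in code rather than applied by hand.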

Limitations or risks

  • Elicit limitations (accuracy, potential misinterpretation of underlying data, risk of English language bias) are acknowledged and communicated.
  • Currently, searches lack reproducibility and transparency for systematic review reporting. 
  • Elicit's upload functionality increases the likelihood of copyright violation by users, because rights and permissions vary in nuanced ways across different articles and journals.

Notable points

  • Developers claim Elicit outputs are about 80–90% accurate, urging users to critically evaluate results.
  • Elicit reports follow the PRISMA flowchart structure (see Documenting your process and PRISMA advice).
  • Searches are not directly reproducible and currently lack the transparency required for systematic review reporting.
  • Although Elicit is user-friendly and supported by training resources, users should be aware of copyright and licensing issues, especially when uploading PDFs, and may encounter bugs or incomplete features as Elicit evolves.

Considerations and implications

This section surfaces key reflections from the evaluation, moving beyond tool description to sense-making. It considers how AI may influence library practice, sector responsibilities, and the broader information landscape. As a boundary-spanning part of the university, the Library draws on its expertise in information practice and knowledge management to surface impacts across cohorts.

Our insights are provisional and reflective, emphasising conditions and contexts rather than certainties or prescriptions.

Our role is not to endorse tools like Elicit but to make meaning: to examine how different forms of AI may reshape how information is produced, accessed, and understood. These considerations are part of the academic library’s role in supporting the organisation, interrogation, and circulation of knowledges within our academic contexts.

The sections below explore considerations related to digital literacies, user behaviours and needs, educator capability, open practices, and library practices:

Instructional approaches and searching behaviours

  • Library-led search instruction should iterate to accommodate tools like Elicit and to reflect shifting information-seeking behaviours.
  • More exploration and evaluation is required in relation to information discovery and AI use in systematic reviews, to ensure our community critically engages with AI outputs and understands reproducibility limitations.
  • The direction of the Library's search/research development and AI literacies approach should be further considered through both a critical engagement and an enablement lens, incorporating human-led evaluative judgement of AI-assisted outputs (in alignment with the Australian Code for the Responsible Conduct of Research).

Strengthen information and digital literacies instruction

  • Library AI guidance and instructional content should incorporate reasonable cautionary copyright advice at the point of engagement. For tools like Elicit, this means addressing the copyright implications of AI-enabled search and research, including the risks of uploading third-party content.
  • Strengthened focus on the ethics of tool use, data source provenance, transparency, and limits of AI/automation.
  • Metadata and provenance awareness should be embedded in digital capability offerings, with training and guidance evolving to include awareness of FAIR principles, data lineage, and data transparency in AI use and outputs.
  • Position AI in instruction as one of many lenses learners can use for information discovery, alongside disciplinary inquiry and search methods.

Embed reflexivity and critical use in AI literacy

  • Include guidance and examples that model how to interrogate AI outputs: what’s missing, what’s overrepresented, and what assumptions underpin the model’s recommendations.
  • Integrate reflection and sense-making activities into instructional materials, especially where AI tools are used in early research ideation, to strengthen metacognitive practices and resist over-reliance on AI-generated framing.
  • Encourage learners and staff to document their use of AI tools during information discovery (what was used, when, and how) as part of reproducibility and transparency.
  • Reinforce the notion of AI tools as scaffolds in research and inquiry, not authoritative sources, especially in areas like evidence synthesis, topic exploration, or summarisation.

Targeted capability development for library staff

  • Librarian capability development should be designed in response to the changed environment and shifting user behaviours, enabling the required changes to instructional content.
  • A targeted capability development stream should be introduced to equip librarians with critical knowledge of both general-purpose and domain-specific AI tools used in literature searching.
  • Programs should include hands-on training in Elicit and similar tools, with demonstration of technology through the lens of principles-led engagement.
  • Metadata and provenance awareness should be embedded in digital capability offerings; training and guidance should evolve to include awareness of FAIR principles, data lineage, and model/data transparency in AI-generated outputs.

Broader strategic reflections

  • Academic libraries play a key role in shaping institutional engagement with AI in research discovery and literature searching.
  • Academic libraries should identify appropriate areas for digital capability leadership and partner with strategic units to align with institutional goals.
  • Consider forming strong partnerships with researcher development programs, study skills teams, or learning support divisions to ensure holistic, principled, and accessible instruction around AI-mediated search and discovery.
  • It is essential to map how AI-powered tools intersect with existing discovery ecosystems, both in institutional, library-managed databases and in broader repositories.
  • Academic libraries should explore and highlight findings around the value of controlled vocabularies, subject indexing, and full-text access in contrast to AI models or platforms trained on open web or abstract-level data.
  • Staff and student guidance should address where and how AI tools integrate (or fail to integrate) with trusted library-curated platforms, reinforcing the importance of scholarly infrastructure and surfacing issues of content licensing, reliability, and access.


Evaluation currency

This evaluation was first published in May 2025 as part of Deakin Library’s AI Evaluation series.

  • Last reviewed: June 2025
  • Next review scheduled: October 2025

This is a living document and may be updated as tool features, functionality, or institutional context evolve.