

EBSCO AI functions - Library AI Evaluation

This evaluation is part of Deakin Library’s AI Evaluation series, providing structured, practice-informed insights into emerging AI technologies. Evaluations are designed to support critical decision-making and responsible engagement with AI, guided by Deakin’s Generative AI Framework and Principles.

Our findings and assessment are shared to inform your judgement. Evaluations are not an endorsement.

Evaluations like this one are not about deciding whether the AI “works,” but about understanding what kind of thinking and behaviours it encourages and whether that aligns with our pedagogical, scholarly, and professional values.

Key advice

EBSCO's pilot AI tools, Natural Language Search and AI Insights, provide opportunities for enhanced discovery but still require critical engagement.

These AI tools can support users during their initial research phase by offering simplified search functionalities and concise, bullet-point summaries of selected sources. When functioning effectively, these tools have the potential to streamline the process of locating and accessing relevant sources. However, there are considerations regarding digital literacy, particularly in scenarios where the tools do not perform optimally. Users may become overly reliant on Natural Language Search and may not develop proficiency in other search techniques. Additionally, AI-generated insights are limited to licensed full text within specific databases, which may influence users' material selection based on availability.

EBSCO AI functions overview

Tool
  • Natural Language Search and AI Insights (beta AI functions in EBSCO products)
Vendor
  • EBSCO
Primary function
  • EBSCO has introduced two AI functions into its products:
    • Natural Language Search supports plain-language research questions or topics, without the user needing keywords or Boolean operators.
    • AI Insights provides 3-5 key summary points from licensed full-text content to help users decide whether to read an article.
  • EBSCO Discovery Service is Deakin Library's discovery search engine; EBSCOhost is the platform for many databases subscribed to by the Library.
Impacted areas
  • Education and Research
  • All discipline areas
Existing alternatives

Summary findings

Evaluation snapshot

  • Natural Language Search (NLS) is an opt-in function available for the EBSCOhost research platform and EBSCO Discovery Service (EDS).
  • EBSCO NLS performs a search based on a research question or topic, without the need to develop a search with keywords or Boolean operators. It uses a large language model (Claude 3.5 Sonnet) to translate these plain-language questions or topics into a search (an illustrative sketch of this pattern follows this list).
  • AI Insights provides summaries of articles to help users decide whether to read them. It is an opt-in function available for licensed full-text content on select databases.
  • Summaries are generated using Retrieval Augmented Generation (RAG), with an additional checking process by human subject matter experts.
  • EBSCO's pilot initiative integrates responsible AI functionality into academic research platforms, aiming to enhance the discovery experience for students and researchers.
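For readers interested in the mechanics behind Natural Language Search, the sketch below illustrates the general "plain-language question in, keyword search out" pattern described above. It is a minimal illustration only, not EBSCO's code: the prompt wording, the stubbed query_llm() function, the extract_terms() and rank() helpers, and the two-record toy index are all hypothetical stand-ins used to make the flow concrete.

```python
# Illustrative sketch only: a generic "plain-language question -> keyword search ->
# ranked results" pipeline. It does not reflect EBSCO's implementation; the prompt
# wording, the query_llm() stub, and the two-record toy index are hypothetical.

PROMPT = (
    "Rewrite this research question as a Boolean keyword search for a "
    "bibliographic database. Return only the search string.\n\nQuestion: {q}"
)

def query_llm(prompt: str) -> str:
    """Stand-in for a call to a large language model such as Claude 3.5 Sonnet.
    Returns a canned answer so the sketch runs without any external service."""
    return '(telehealth OR telemedicine) AND "aged care" AND rural'

def extract_terms(boolean_query: str) -> set[str]:
    """Naively strip operators and punctuation to get terms for the toy ranker."""
    cleaned = boolean_query.replace("(", " ").replace(")", " ").replace('"', " ")
    return {t.lower() for t in cleaned.split() if t.upper() not in {"AND", "OR", "NOT"}}

def rank(records: list[dict], terms: set[str]) -> list[dict]:
    """Stand-in for traditional relevance ranking: score by term overlap in title and abstract."""
    def score(record: dict) -> int:
        text = f"{record['title']} {record['abstract']}".lower()
        return sum(term in text for term in terms)
    return sorted(records, key=score, reverse=True)

if __name__ == "__main__":
    question = "How effective is telehealth for older adults in rural aged care?"
    search_string = query_llm(PROMPT.format(q=question))
    toy_index = [
        {"title": "Telehealth in rural aged care", "abstract": "A review of telemedicine uptake."},
        {"title": "Urban hospital staffing", "abstract": "Workforce trends in metropolitan hospitals."},
    ]
    print(search_string)
    print([record["title"] for record in rank(toy_index, extract_terms(search_string))])
```

The key design point, consistent with the description above, is that the language model only rewrites the question; retrieval and relevance ranking remain the platform's traditional search machinery.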

Benefits

  • Overall, the AI functions make databases easier to use with minimal search skills: searches can be phrased in natural language, and key points from selected materials are highlighted in search results.
  • Both Natural Language Search and AI Insights are intuitive to use and would require little or no training.
  • Natural Language Search allows students to retrieve relevant documents with low levels of cognitive effort.
  • Natural Language Search results were equal to or superior to equivalent Boolean searches.
  • When the Natural Language Search function is enabled, users can either use this feature or opt for traditional search functions.
  • AI Insights enables students to identify key points from material, aiding decisions about relevance.
  • Aligns with emerging user search and discovery behaviours.
  • There are minimal copyright or licensing risks for users.

Limitations or risks

  • AI Insights currently has limited availability, leading to inconsistent user experience and difficulty in managing user expectations of the AI functions.
  • Users may over-rely on Natural Language Search, leading to a loss of skill in traditional search techniques.
  • Natural Language Search struggles with narrow topics because it generalises queries rather than searching for specific terms. As a result, specific topics are less likely to appear due to the probabilistic nature of the results.
  • In testing, AI Insights offered no advantage over the abstract for deciding whether to read an article, though they had no negative effect either.
  • Users may engage more with material that includes AI Insights, leading to potential bias in material selection.
  • Users may over-rely on summaries rather than engaging with primary material.

Notable points

  • EBSCO NLS is more suitable for undergraduate-level queries; traditional search is better suited to researcher-level questions.
  • Counterintuitively, EBSCO user testing of different personas found that AI Insights led to better engagement with the full text of articles.
  • Random samples of AI Insights are checked for quality by subject matter experts (SMEs).
  • EBSCO uses Retrieval Augmented Generation (RAG) over the articles it indexes, prompting Claude to generate the result (either a list of articles or a summary of an individual article). This grounding reduces the risk of false or misleading outputs (see the illustrative sketch after this list).
  • User data is not stored or used for training the model.
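The sketch below illustrates the general Retrieval Augmented Generation pattern referred to in the points above: retrieve the licensed full text, constrain the prompt to that text, then generate a handful of bullet points. It is a hypothetical illustration, not EBSCO's implementation; the retrieve_full_text(), build_prompt(), and summarise_with_llm() functions are invented stand-ins, and the SME spot-checking step is not shown.

```python
# Illustrative sketch only: the general Retrieval Augmented Generation (RAG) pattern
# described above, not EBSCO's implementation. summarise_with_llm() is a stub for the
# model call; grounding the prompt in retrieved full text is what limits fabrication.

def retrieve_full_text(article_id: str, licensed_index: dict[str, str]) -> str | None:
    """Return the licensed full text for an article, or None if it is not licensed."""
    return licensed_index.get(article_id)

def build_prompt(full_text: str) -> str:
    """Constrain the model to the retrieved text so it summarises rather than invents."""
    return (
        "Using only the article text below, list 3-5 key points as short bullets. "
        "Do not add information that is not in the text.\n\n" + full_text
    )

def summarise_with_llm(prompt: str) -> list[str]:
    """Placeholder for the generation step; returns canned bullets so the sketch runs offline."""
    return [
        "Telehealth improved specialist access for rural aged-care residents.",
        "Uptake depended on staff training and reliable connectivity.",
        "Residents reported high satisfaction with video consultations.",
    ]

def ai_insights(article_id: str, licensed_index: dict[str, str]) -> list[str] | None:
    """Retrieve licensed text, ground the prompt in it, then generate summary bullets."""
    full_text = retrieve_full_text(article_id, licensed_index)
    if full_text is None:
        return None  # no summary where full text is not licensed
    return summarise_with_llm(build_prompt(full_text))

if __name__ == "__main__":
    index = {"A1": "Full text of a study on telehealth in rural aged care..."}
    print(ai_insights("A1", index))   # bullets generated from licensed text
    print(ai_insights("A2", index))   # None: no licensed full text, no insights
```

Grounding the generation step in retrieved, licensed text (rather than asking the model to answer from its own training data) is what the RAG approach relies on to reduce false or misleading outputs; it also explains why AI Insights is only available where licensed full text exists.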

Considerations and implications

This section surfaces key reflections from the evaluation, moving beyond tool description to sense-making. It considers how AI may influence library practice, sector responsibilities, and the broader information landscape. As a boundary-spanning part of the university, the Library draws on its expertise in information practice and knowledge management to surface impacts across cohorts.

Our insights are provisional and reflective, emphasising conditions and contexts rather than certainties or prescriptions.

Our role is not to endorse tools like these but to make meaning: to examine how different forms of AI may reshape how information is produced, accessed, and understood. These considerations are part of the academic library’s role in supporting the organisation, interrogation, and circulation of knowledges within our academic contexts.

The sections below explore considerations related to digital literacies, user behaviours and needs, educator capability, open practices, and library practices:

Vendor in-product AI functionality

  • EBSCO’s AI functionality reflects AI trends in both search and summarisation in other vendor platforms, normalising AI-assisted engagement.
  • The in-product functionality provides a level of support in licensed material not available in freemium tools with less transparent workings and data sources.
  • Aligns with evolving user expectations for AI-enhanced search tools; disabling such features may push users towards freemium AI search options that are less transparent and carry more risk, bypassing library-subscribed resources.
  • Natural Language Search
    • An opt-in function.
    • Uses AI to convert plain-language questions into a keyword search, which is then run using traditional search functions such as relevance ranking.
  • AI Insights
    • In beta, may change over time.
    • Available in both EBSCOhost and EDS.
    • Summarises key points from full text.
    • Risk of bias in source selection (see below).
  • The low proportion of articles with AI Insights may lead users to preferentially select those that do have them. This introduces a risk of bias in source selection, as users may overlook equally relevant or more relevant articles without AI Insights.

User behaviours and information searching practices

  • The tools can be used without understanding how information is produced, stored, or organised, which limits digital literacy development.
  • Users are likely to satisfice—settling for the first acceptable result—rather than optimise their search.
  • While satisficing is sometimes appropriate, knowing when to go deeper requires evaluative judgement.
  • Exclusive use of Natural Language Search may prevent users from developing the evaluative skills needed to optimise when necessary.
  • Natural Language Search may reduce the number of user enquiries if it has a high uptake and provides relevant results.
  • Users seeking direct answers may still prefer generative AI tools that provide both answers and sources, potentially bypassing embedded tools like those in EBSCO.
  • The increased ease of use of the tools may result in fewer Library service enquiries, while the remaining enquiries could be more complex, as simpler questions are resolved independently.
  • Search skill gaps
    • Users relying on Natural Language Search may not develop the skills needed to construct effective Boolean searches, which may affect their ability to conduct more complex searching when required.
    • In cases where results from Natural Language Search are not relevant to the information need, users may be unequipped to refine their search strategy.

Information and digital literacies education

  • Highlight the strengths and limitations of search tools in digital literacy education.
  • Encourage students to use tools selectively and strategically as part of a thoughtful search process.
  • Education can help students understand when and how to use different tools at various stages of research.
  • Reinforce critical evaluation skills when interpreting AI outputs in research contexts. As vendor AI tools blur the line between search and summary, users need support to understand when a tool is surfacing data, interpreting content, or shaping conceptual direction.
  • Educate users on the purpose and limitations of AI Insights summary points, particularly the risks of relying on summaries without engaging with full-text content.
  • AI Insights offer opportunities to teach comparative evaluation: encouraging students to test AI summaries against abstracts and full-texts can build evaluative judgement and highlight the limitations of surface-level AI outputs.

Train the trainer - capability development of educators

  • Instructors need confidence to critique and explain embedded AI functionality across a variety of platforms: not just generative tools like ChatGPT, but also the increasingly common in-product tools from vendors such as ProQuest, EBSCO, and JSTOR.
  • A dedicated capability stream on vendor-integrated AI could help staff build fluency in tool function, critique, and pedagogical framing, thereby enabling more agile teaching interventions and advisory support.
  • Professional development programs for information and search professionals should have a scaffolded and specialist AI training stream. Librarians focused on academic and research services need support to respond to changing user behaviours and to confidently deliver updated instructional content that reflects the AI-integrated research environment.
  • Instructional framing must shift from “how to use the platform or tool” towards “how the platform or tool mediates information”, acknowledging AI as both a functional layer and a lens.

Broader strategic reflections

  • EBSCO AI functionality reflects an industry shift in information discovery and is becoming a standard user expectation.
  • Libraries should approach in-product AI not as neutral functionality but as an active influence on discovery behaviours and academic habits. Library outreach must include shaping the narrative: clarifying when, why, and how AI should support, rather than shortcut, scholarly practices.
  • The presence of embedded AI in trusted academic systems may lead to uncritical use if not addressed through deliberate teaching and policy framing.
  • Evaluations like this are not only about tool performance; they highlight the behaviours and epistemologies (ways of thinking) these tools promote. Libraries must remain responsive, not reactive, to these developments.


Evaluation currency

This evaluation was first published in June 2025 as part of Deakin Library’s AI Evaluation series.

  • Last reviewed: June 2025
  • Next review scheduled: November 2025

This is a living document and may be updated as tool features, functionality, or institutional context evolve.