

Consensus - Library AI Evaluation

This evaluation is part of Deakin Library’s AI Evaluation series, providing structured, practice-informed insights into emerging AI technologies. Evaluations are designed to support critical decision-making and responsible engagement with AI, guided by Deakin’s Generative AI Framework and Principles.

Our findings and assessment are shared to inform your judgement. Evaluations are not an endorsement.

Evaluations like this one are not about deciding whether the AI “works,” but about understanding what kind of thinking and behaviours it encourages and whether that aligns with our pedagogical, scholarly, and professional values.

Key advice

Consensus is a search tool best suited to early research exploration and sparking curiosity.

It offers value at the start of your research by surfacing relevant studies and showing how strongly the evidence aligns on a specific claim. However, its outputs are largely drawn from abstracts and lack the depth needed for critical evaluation. Use Consensus to support early discovery but rely on more comprehensive searching and your own full-text analysis to develop rigorous, informed understanding.


Consensus overview

Field Details
Platform
  • Consensus: AI-powered search engine for evidence-based answers from scientific research.  
Vendor
  • Consensus Inc. An independent AI company (Seed funded, $11.5 million raised in 2024) developing tools for academic research search and synthesis. 
  • Explore Consensus mission statement. 
Primary function
  • Supports exploration of a research question in scholarly literature. 
  • Performs semantic search across indexed databases, summarises key findings, and visualises alignment in the literature.
  • Uses NLP (Natural Language Processing) to extract, summarise, and assess consensus across academic literature.
  • The “Consensus meter” provides a visual indicator of how strongly the research evidence aligns on a specific claim or finding.
Impacted areas
  • Education and Research
  • All discipline areas but particularly sciences, technology, and health research
  • Research integrity and researcher development
  • Library research services
  • Academic skills services
Existing alternatives
  • Comparable functions:
    • Semantic searching and summarisation.
    • Research metrics and research claim consensus.
    • Progressive refinement of research questions through conversational querying.
  • Tools with comparable functions:
    • Elicit (by Ought) – Structured support for literature review and research synthesis. 
    • Scite – Citation-level analysis and research claims validation via Smart Citations. 
    • Perplexity AI Pro (Academic Mode) – Peer-reviewed evidence retrieval with citation traceability.
    • ChatGPT (with Web or Pro plug-ins) – Generates answers using some academic content but lacks traceable sourcing mechanisms and consistent accuracy.

Summary findings

Evaluation snapshot

  • Consensus is a semantic academic search tool designed to support exploratory, question-led research.
  • Draws from publicly available abstracts via Semantic Scholar, OpenAlex, and additional academic content aggregated through its own data collection.
  • Cited claims are extracted and summarised, with a “Consensus meter” visualising where research appears to align.
  • Outputs are predominantly based on abstracts rather than full-text synthesis. 
  • Supports exploratory academic inquiry but lacks the depth, transparency, and reproducibility required for systematic review-level research.
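Consensus does not publish how its meter is calculated, but the general idea of tallying per-paper stance labels into an agreement breakdown can be sketched briefly. Everything below (function name, stance labels, rounding scheme) is an illustrative assumption, not the vendor’s method:

```python
from collections import Counter

def consensus_meter(stances):
    """Aggregate per-paper stance labels ("yes", "possibly", "mixed", "no")
    into the percentage breakdown a Consensus-meter-style widget displays.
    Purely illustrative: Consensus does not document its actual formula."""
    counts = Counter(stances)
    total = len(stances)
    return {label: round(100 * counts[label] / total)
            for label in ("yes", "possibly", "mixed", "no")}

# Example: 10 abstracts classified against one claim
meter = consensus_meter(["yes"] * 6 + ["mixed"] * 2 + ["no"] * 2)
print(meter)  # {'yes': 60, 'possibly': 0, 'mixed': 20, 'no': 20}
```

A breakdown like this makes visible why the meter depends entirely on how each abstract is classified; papers whose abstracts hedge or qualify a claim could land in any category.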

Benefits

  • Uses semantic search and NLP (Natural Language Processing) to rapidly extract, summarise, and display consensus-level insights from academic literature.
  • Helps users quickly orient in a research topic by surfacing relevant studies and linked citations.
  • The “Consensus meter” provides a novel way to visualise scholarly agreement, encouraging question refinement.
  • Encourages evidence awareness and citation searching by linking directly to peer-reviewed sources. 
  • Intuitive interface and familiar functionality require minimal onboarding and training.
  • Supports export to common reference management tools like Zotero and EndNote (Deakin’s preferred bibliographic software). 
  • Encourages question refinement and engagement with scholarly consensus, providing a low-barrier entry point to research.  
  • Institutional subscription model available, with potential use in academic skills and research training contexts. 

Limitations or risks

  • Reliance on abstracts limits access to context, methodological detail, and full-text analysis, regardless of whether the articles are Open Access. 
  • Visualisation methods lack clarity; users are not told how “consensus” is measured or calculated, which can lead to misinterpretation.
  • Simplified summaries may obscure scholarly disagreement, nuance, or discipline-specific reasoning. 
  • Over-reliance on summarised claims may discourage deeper reading or critical engagement with full texts.
  • The “Ask Paper” feature allows PDF uploads, which may introduce copyright risk. Some publishers restrict the use of AI with their content. Users need to check licences or terms before uploading into Consensus. Contact your librarian for help.
  • Vendor information on data sources, model training, and retention is limited; the privacy policy, in particular, contains ambiguous clauses.
  • Indexed content coverage is strongest in science, technology, health, and medical research, with more limited representation in other disciplines.
  • While well-suited to scoping and topic familiarisation, the tool lacks the transparency, methodological depth, and reproducibility required for systematic reviews or advanced synthesis work. 

Notable points

  • Consensus states the tool is designed for exploratory, question-led research, not for producing evidence syntheses or systematic reviews. 
  • Although it may appear to provide answers, it instead surfaces literature summaries and indicates where research aligns or diverges, rather than offering definitive conclusions.
  • Best used as a supplement to, not a substitute for, expert search strategies, full-text reading, and critical analysis.
  • Already informally used by academics in higher education, including Deakin, though it is not currently licensed institutionally. 
  • Aligns with open scholarship principles through citation traceability but is not integrated with institutional discovery systems. 
  • Consensus meter is a distinctive feature but must be framed carefully, as it does not account for dissenting perspectives or explain how agreement is measured.

Considerations and implications

This section surfaces key reflections from the evaluation, moving beyond tool description to sense-making. It considers how AI may influence library practice, sector responsibilities, and the broader information landscape. As a boundary-spanning part of the university, the Library draws on its expertise in information practice and knowledge management to surface impacts across cohorts.

Our insights are provisional and reflective, emphasising conditions and contexts rather than certainties or prescriptions.

Our role is not to endorse tools like Consensus but to make meaning: to examine how different forms of AI may reshape how information is produced, accessed, and understood. These considerations are part of the academic library’s role in supporting the organisation, interrogation, and circulation of knowledges within our academic contexts.

The sections below explore considerations related to digital literacies, user behaviours and needs, educator capability, open practices, and library practices:

In-product AI functionality

  • Uses NLP (Natural Language Processing) to:
    • Identify research papers relevant to a specific question.
    • Extract and summarise key findings from those papers.
    • Present an aggregated view of how strongly the literature supports a particular claim, using a visual “Consensus meter”.
  • Employs a hybrid search approach to maximise relevance:
    • Semantic search captures the intent of natural language queries.
    • Keyword search anchors results in exact term matches.
    • Relevance scoring compares the query against titles and abstracts.
  • These combined functionalities surface papers that match both the user’s language and their research intent.
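The hybrid approach described above can be sketched generically. Consensus does not document its ranking internals, so the function names, the `alpha` weight, and the toy embedding vectors below are all assumptions, not the product’s actual code:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors (semantic component)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def keyword_overlap(query, text):
    """Fraction of query terms appearing verbatim in the text (keyword component)."""
    q_terms = set(query.lower().split())
    t_terms = set(text.lower().split())
    return len(q_terms & t_terms) / len(q_terms) if q_terms else 0.0

def hybrid_score(query, query_vec, doc, doc_vec, alpha=0.7):
    """Blend semantic similarity with exact keyword matching against the
    title and abstract; alpha weights the semantic component (an assumed value)."""
    semantic = cosine(query_vec, doc_vec)
    keyword = keyword_overlap(query, doc["title"] + " " + doc["abstract"])
    return alpha * semantic + (1 - alpha) * keyword
```

Ranking documents by a blended score of this kind means a paper can surface either because its embedding sits close to the query’s intent or because it matches the query’s exact terms, which is why results reflect both language and intent.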

User behaviours and information searching practices

  • Reflects a broader behavioural shift toward speed, conversational querying, and AI-mediated research.
  • Caters to user preference for fast answers and surface-level summaries (satisficing). 
  • While Consensus supports rapid orientation and curiosity-driven scoping, it also introduces tensions around depth, critical analysis, and scholarly rigour.
  • These shifts call for evolving search instruction that bridges AI exploration with formal research practices and literacies.

Information and digital literacies education

  • Teach citation searching alongside algorithmic critique, and encourage questioning of what is included or excluded.
  • Discuss how Consensus selects, summarises, and frames cited content.
  • Highlight how AI-mediated searching often bypasses Boolean logic and structured search, affecting precision and risking a decline in skills such as Boolean reasoning and query construction.
  • Educators should use tools like Consensus as a prompt to revisit and refresh foundational search skills in contemporary contexts. 
  • Upload features introduce potential copyright and licensing risks when users input full-text PDFs or proprietary content; provide guidance on these risks before users upload full texts.
  • Library guidance is needed to support informed, responsible use aligned with legal and ethical norms. Frame tools like Consensus as a starting point in a deeper, layered research process.

Train the trainer – capability development for educators or instructors 

  • Support educators in recognising how AI tools influence knowledge framing, not just how they function.
  • Emphasise that AI tools like Consensus filter, summarise, and prioritise information in ways that influence user understanding.
  • Encourage “teachable moments” through activities like:
    • Summary validation
    • Citation tracing
    • Interrogating evidence strength
  • Provide training on vendor-integrated AI to assess risks and value when tools like Consensus are used in learning and research contexts.
  • Encourage educators to move beyond tool-led instruction, integrating AI literacy into the curriculum as a capability (not just a skill) linked to enquiry, evaluation, and scholarly practices.
  • Promote reflective teaching by supporting educators in their explorations of how AI can reinforce surface-level learning and how deliberate pedagogical framing can counter this.
  • Build educator confidence through upskilling; support them to model critical, informed use of AI and to lead peer capability building within their institutions.

Open practices considerations

  • Consensus supports open discovery by using public abstracts and promoting citation awareness.
  • Does not integrate with institutional repositories or OER platforms. This limits contribution to Open Access, Open Education, and interoperable scholarly workflows.
  • Library and institutional engagement should assess AI tools like Consensus on both user-facing functionality and their alignment with open knowledge ecosystems.

Broader strategic reflections

  • Aligns with higher education trends towards productivity, efficiency, and digital-first research, but may encourage surface-level scholarly engagement.
  • For universities, tools like this highlight tensions between innovation and academic rigour, especially in disciplines where depth, debate, and methodological transparency are essential.
  • Library framing should position AI-mediated search as a catalyst for further enquiry, critical thinking development, and source evaluation rather than as a shortcut.
  • Strategic engagement with AI-assisted tools must involve both capability building and cultural shifts; positioning critical AI use as a core graduate and research competency.


Evaluation currency

This evaluation was first published in June 2025 as part of Deakin Library’s AI Evaluation series.

  • Last reviewed: June 2025
  • Next review scheduled: November 2025

This is a living document and may be updated as AI tools, features, functionality, or institutional context evolve.