Evidence synthesis is built on the principles of rigor, transparency and replicability. Yet syntheses are also time-consuming and expensive, and they struggle to keep up with funders' and users' needs. Artificial Intelligence (AI) has the potential to make the process faster and cheaper, but it also brings challenges. Right now, many AI tools could undermine the very principles evidence synthesis relies on, risking a flood of unreliable, biased and low-quality reviews.
In recent years there has been a rapid surge of new AI tools for systematic reviews and evidence syntheses, but they are emerging faster than they can be properly evaluated, leaving a limited evidence base to guide their use.
A recent paper evaluated generative AI (genAI) tools across the different stages of a review and concluded that genAI is currently unpromising for the searching step, but promising for the screening and data extraction steps.
Takeaway: AI should be used with human oversight: as a companion, not a replacement!
Responsible AI in evidence SynthEsis (RAISE) is an initiative focused on promoting the responsible development, evaluation and use of AI tools in evidence synthesis, aiming to establish best-practice standards and build a stronger evidence base for tool selection and application.
RAISE guidance includes three documents:
In summary, these are the recommendations for reviewers:
Contribute to the ecosystem to help all roles continue to develop and grow.
More details can be found in RAISE 1.
The newly formed AI Methods Group is a collaboration between four leading evidence synthesis organisations (the Cochrane Collaboration, JBI, the Campbell Collaboration and the Collaboration for Environmental Evidence (CEE)) focusing on AI and automation in evidence synthesis.