Artificial Intelligence Research Assistants: A New Age in Academic Research and Publishing

Leadership for Change

Introduction

The advent of artificial intelligence (AI) has ushered in substantial, and sometimes daunting, changes across many sectors, including academia. AI research assistants, in particular, are carving out an essential role within certain research practices, offering significant advantages while also bringing forward the conversation about their use in academic research and publishing. This Insights Article, grounded in a presentation delivered at the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) conference in November 2023, looks at some of the AI uses discussed there and at strategies for mitigating the risks they may pose.

AI research assistants, while traditionally used for conducting systematic reviews, have found potential applications in diverse tasks such as grant proposals and manuscript preparation. However, the incorporation of AI tools in academia remains a matter of contention, with some researchers embracing these tools and others expressing reservations. Three experts presented their views on the matter during the ISPOR session, and their insights are discussed in this article. Drawing on the perspectives of researchers, industry professionals, and publishers, it provides a holistic view of the current state of AI in academic research.

The Role of AI in the Industry

Yumi Asukai, representing the industry perspective, presented her views on the use of AI-based research assistants in the field of Health Technology Assessment (HTA). She outlined three primary objectives for their use in Health Economics and Outcomes Research (HEOR): automating tasks, drafting content, and generating insights to predict HTA outcomes. The goal is to achieve time and cost savings while maintaining or improving the quality of output.

Objectives of AI Use in HEOR

The first objective, task automation, involves using AI to conduct systematic literature reviews (SLRs) and other work that has traditionally required human effort. For instance, AI can automatically run searches and extract data, saving time and resources. Asukai also acknowledged the concerns associated with using AI in HTA: for automated SLRs, the question is whether AI can truly replace human researchers without compromising quality.
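
To make this concrete, a screening step of this kind can be scripted against a general-purpose LLM API. The following is a minimal sketch only; the model name, prompt wording, and inclusion criteria are illustrative assumptions rather than the setup described in the presentation.

```python
# Illustrative sketch only: screening a title/abstract against inclusion
# criteria with an LLM. Model, prompt, and criteria are assumptions, not
# the setup described in the presentation.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

INCLUSION_CRITERIA = (
    "- Randomised controlled trial design\n"      # hypothetical criterion
    "- Reports a health-economic outcome\n"       # hypothetical criterion
)

def screen_abstract(title: str, abstract: str) -> str:
    """Return an include/exclude recommendation with a one-sentence reason."""
    prompt = (
        "You are assisting with a systematic literature review.\n"
        f"Inclusion criteria:\n{INCLUSION_CRITERIA}\n"
        f"Title: {title}\nAbstract: {abstract}\n"
        "Answer INCLUDE or EXCLUDE, then give a one-sentence reason."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output is preferable for screening
    )
    # Each AI decision would still be logged and checked against a human
    # reviewer's decision so that disagreements can be audited.
    return response.choices[0].message.content
```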

The second objective is drafting content. Here, AI can generate text for various documents, such as briefing documents for early scientific advice interactions. By using AI to draft these documents, researchers can save time and potentially improve the quality of the output. For example, AI could draft a briefing document from a given set of data, ensuring that all relevant information is included and presented in a clear, concise manner. The challenge with drafting content is whether AI can produce text that is both time-saving and accurate, since “hallucinations” and information bias are known weaknesses of AI tools.
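
As a rough sketch of what such a drafting step might look like, the example below assembles a prompt from structured inputs; the field names and instructions are hypothetical. Asking the model to flag statements that need citations gives the human reviewer an explicit checklist, which goes some way towards mitigating the hallucination risk.

```python
# Illustrative sketch only: assembling a drafting prompt from structured
# inputs. Field names and instructions are hypothetical.
from openai import OpenAI

client = OpenAI()

def draft_briefing(product: str, indication: str, evidence_summary: str) -> str:
    prompt = (
        "Draft a briefing document for an early scientific advice meeting.\n"
        f"Product: {product}\n"
        f"Proposed indication: {indication}\n"
        f"Evidence summary:\n{evidence_summary}\n"
        "Use clear headings, and flag every statement that needs a citation "
        "so a human reviewer can verify it against the source data."
    )
    response = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content
```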

The third objective, generating insights, involves using AI to predict HTA outcomes from early development inputs such as a Target Product Profile (TPP). A TPP is a strategic development tool that outlines a drug product’s proposed indications, labelling concepts, and other critical attributes. For example, an AI system could be fed a TPP and asked to predict the likely HTA outcome, providing valuable input for evidence generation planning. However, as Asukai pointed out, the reliability of these AI-generated predictions is a critical concern that needs to be addressed.
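
A minimal sketch of this idea, assuming a simplified TPP serialised as JSON (every field below is invented), might look as follows. Requesting an explicit rationale alongside the prediction at least makes the output auditable.

```python
# Illustrative sketch only: a simplified, entirely invented TPP serialised
# as JSON and passed to an LLM for a qualitative HTA-outcome prediction.
import json

from openai import OpenAI

client = OpenAI()

tpp = {
    "indication": "second-line treatment of condition X",
    "comparator": "current standard of care",
    "expected_effect": "30% relative improvement in the primary endpoint",
    "safety_profile": "comparable to comparator",
}

prompt = (
    "Given this Target Product Profile, predict the likely HTA outcome "
    "(e.g. recommended / restricted / not recommended) and explain which "
    "attributes drive the prediction:\n" + json.dumps(tpp, indent=2)
)
response = client.chat.completions.create(
    model="gpt-4", messages=[{"role": "user", "content": prompt}]
)
print(response.choices[0].message.content)
```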

Data Security Concerns in AI Use

One of the major concerns raised about the use of AI in HEOR and HTA is data security. Asukai pointed out that there is a general mistrust, especially when company proprietary data is required for large language models (LLMs) to produce outputs. Stakeholders worry about the safety of their data, fearing that once proprietary data is fed into an LLM, it might become accessible to the next user of the model.

Another concern stems from misunderstanding how AI models learn and are trained. Many fear that any information provided to an AI model will be used to train it, potentially leading to the disclosure of sensitive information. However, feeding data into a model to perform a specific task (inference) is not the same as training the model. It is crucial to distinguish between these two processes and to reassure stakeholders that the proprietary information they provide will not be used to train the model or become accessible to other users.
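
The distinction can be made concrete in code. In the sketch below, which uses the OpenAI Python client purely as an example, an inference call and a fine-tuning job are entirely separate operations.

```python
# Illustrative sketch of the distinction: sending data to a model at
# inference time is a separate operation from training (fine-tuning) it.
from openai import OpenAI

client = OpenAI()

# 1) Inference: the proprietary text is used to produce this one output;
#    the call does not update the model's weights.
answer = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Summarise: <proprietary text>"}],
)

# 2) Training: weights change only through an explicit fine-tuning job,
#    created deliberately from a dataset the owner has uploaded.
# job = client.fine_tuning.jobs.create(
#     training_file="file-id-of-uploaded-dataset",  # hypothetical file ID
#     model="gpt-3.5-turbo",
# )
```

That said, whether a vendor retains prompts for future training is ultimately a contractual and policy question that the calling code does not control, so reassurances to stakeholders must rest on the vendor’s terms as well as on this technical distinction.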

Current Concerns and Mitigation Strategies: Artificial Intelligence Research Assistants

Asukai highlighted the need for short-term mitigation strategies, such as running SLRs in parallel with human researchers and checking the quality of AI-generated content. For long-term mitigation, she emphasised the necessity for an education campaign to build a basic level of knowledge about LLMs and their interfaces.

The Future of AI in HTA

Asukai concluded that while the field of HTA is still in its early stages of AI use, the potential benefits of AI research assistants cannot be ignored. She suggested using AI research assistants but treating them like junior research assistants, closely monitoring their work to ensure quality and accuracy. She also stressed the need for a company-wide, consistent policy to avoid duplication of effort and ensure the secure use of AI.

The Use of AI in SLR and Value Brief Creation

Nikolaos Takatzoglou presented two case studies demonstrating the potential use of AI in HTA. The first concerned using AI for SLRs. Takatzoglou and his team used a generative AI model, GPT-4, to support an SLR: they fed the model their search terms and used it to identify additional ones. The resulting search retrieved 2,406 titles and abstracts, which were then screened by both human researchers and the AI model.

The second case study concerned using AI to create a value brief for a key Johnson & Johnson product. The team used GPT-4 to summarise a 1,000-page evaluation document from the National Institute for Health and Care Excellence (NICE). The aim was to create a value brief at the push of a button, which would then be refined by human researchers with the support of AI.
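
A 1,000-page document far exceeds an LLM’s context window, so a common workaround is chunked, two-stage (“map-reduce”) summarisation. Whether the team used this exact pattern is an assumption; the sketch below, with an arbitrary chunk size, only illustrates the general approach.

```python
# Illustrative sketch only: two-stage ("map-reduce") summarisation of a
# document too long for the model's context window. The chunk size is
# arbitrary, and this is not necessarily the approach the team used.
from openai import OpenAI

client = OpenAI()

def summarise(text: str, instruction: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"{instruction}\n\n{text}"}],
    )
    return response.choices[0].message.content

def summarise_long_document(full_text: str, chunk_chars: int = 12_000) -> str:
    # Map: summarise each chunk independently.
    chunks = [full_text[i:i + chunk_chars]
              for i in range(0, len(full_text), chunk_chars)]
    partials = [summarise(chunk, "Summarise the key value messages in this "
                                 "excerpt of an HTA evaluation document.")
                for chunk in chunks]
    # Reduce: merge the partial summaries into one draft.
    return summarise("\n\n".join(partials),
                     "Combine these partial summaries into a single draft "
                     "value brief with consistent flow.")
```

Notably, the lack of flow reported in the results below is a known artefact of stitching chunk-level summaries together.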

Results and Concerns

Takatzoglou presented the preliminary results of the SLR case study. Human researchers and the AI model disagreed with each other’s inclusion decisions in 9% and 8% of cases, respectively, and in several cases the AI model persuaded the human researchers to change their initial decisions. On its own, the AI model achieved an accuracy of 98.4%, but its sensitivity was relatively low at 44.9%; combined with human researchers, sensitivity increased to 77.2%.
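
These figures follow the standard definitions of accuracy and sensitivity. The sketch below walks through the arithmetic on an invented confusion matrix (the actual counts were not presented) and shows why accuracy can be high while sensitivity stays low when relevant papers are rare.

```python
# Invented confusion matrix: the counts below are NOT the study's data and
# are chosen only to show the arithmetic behind the two metrics.
tp, fn = 44, 54    # relevant papers: correctly included vs. missed
fp, tn = 4, 2304   # irrelevant papers: wrongly included vs. correctly excluded

accuracy = (tp + tn) / (tp + tn + fp + fn)  # share of all decisions correct
sensitivity = tp / (tp + fn)                # share of relevant papers found

# Prints: accuracy = 97.6%, sensitivity = 44.9%. With so few relevant papers,
# a screener can score high accuracy while still missing over half of them.
print(f"accuracy = {accuracy:.1%}, sensitivity = {sensitivity:.1%}")
```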

In the value brief case study, the team reported that using AI saved around 20 human working hours compared with doing the task entirely in-house. However, the quality of the AI-generated draft was only moderate, owing to a lack of flow and a word count limit.

Quality and Efficiency with AI

Takatzoglou concluded that the use of AI in HTA could deliver significant time savings without compromising research quality. The AI model adhered more strictly to the inclusion and exclusion criteria and helped identify articles that human researchers might otherwise have missed. He emphasised, however, that human review remains critical for accuracy and that the final draft will always require a human touch. Despite the challenges, Takatzoglou believes that the use of AI in HTA is not only feasible but unavoidable.

AI in Publishing: An Editor’s Perspective

Laura Dormer, an Editorial Director at Becaris Publishing, discussed the use of AI in publishing workflows from an editor’s perspective. She highlighted several areas where AI is currently having an impact, as well as potential future applications, acknowledging that the field is evolving rapidly and that guidelines are still catching up with the technology.

AI in the Writing Process

Dormer highlighted several areas where AI is already being used in the writing process. Language and grammar checking tools, some of which are specifically trained on scholarly content, are being used to improve the readability of manuscripts. Generative AI tools such as GPT and Google Bard are being used to generate text for publication, including abstracts. AI tools are also being used to help authors select journals to submit their work to, matching content with the scope of specific journals.
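
Journal-matching tools of this kind typically rank candidate journals by how similar a manuscript is to each journal’s scope. The sketch below uses simple TF-IDF cosine similarity as a stand-in; real services use richer models, and the journal names and scope descriptions are invented.

```python
# Illustrative sketch only: ranking journals by the textual similarity
# between a manuscript abstract and each journal's scope statement.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

journal_scopes = {  # invented journals and scope descriptions
    "Hypothetical Journal of Health Economics":
        "cost-effectiveness analysis, health technology assessment, HEOR",
    "Hypothetical Clinical Trials Methods":
        "randomised controlled trial design, statistics, methodology",
}

abstract = ("We assess the cost-effectiveness of a new therapy "
            "to support a health technology assessment submission.")

texts = [abstract] + list(journal_scopes.values())
tfidf = TfidfVectorizer().fit_transform(texts)
scores = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()

# Print candidate journals, best match first.
for name, score in sorted(zip(journal_scopes, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {name}")
```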

AI in Peer Review and Post-Publication

During the presentation, Dormer discussed the use of AI in peer review and post-publication processes. AI tools are being used to assist with technical checks and screenings, such as formatting and word count checks, to improve the quality of methodology, and to check for plagiarism and image manipulation. AI is also being used to identify peer reviewers, matching papers with people who have the right expertise. Post-publication, AI tools are used to enrich article metadata, improving discoverability and linking articles into sensible collections.
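
Some of these technical checks are simple enough to express as plain rules. The sketch below shows a minimal pre-review screen covering word count and required sections; the limit and section names are invented examples rather than any journal’s actual requirements.

```python
# Illustrative sketch only: a rule-based pre-review technical check of the
# kind that sits beneath heavier AI screens (plagiarism, image forensics).
REQUIRED_SECTIONS = ("Abstract", "Methods", "Results", "Conclusion")
MAX_WORDS = 5000  # hypothetical journal limit

def technical_check(manuscript: str) -> list[str]:
    """Return a list of problems found; an empty list means the checks pass."""
    problems = []
    if len(manuscript.split()) > MAX_WORDS:
        problems.append(f"Exceeds the {MAX_WORDS}-word limit.")
    lowered = manuscript.lower()
    for section in REQUIRED_SECTIONS:
        if section.lower() not in lowered:
            problems.append(f"Missing required section: {section}.")
    return problems
```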

Concerns and Risks

Despite the potential benefits, Dormer highlighted several concerns and risks associated with the use of AI in publishing. One major concern is the problem of paper mills, where papers are essentially fabrications. This problem could potentially be exacerbated by the availability of generative AI tools. Other concerns include a lack of transparency, with authors using AI tools without disclosing it, potential confidentiality issues, and the risk of bias in AI tools due to the data they are trained on.

Publishing Guidelines on AI Use

Dormer discussed the current guidelines on the use of AI in publishing set out by the International Committee of Medical Journal Editors (ICMJE) and the World Association of Medical Editors (WAME). Both bodies agree that AI tools such as GPT cannot be listed as authors, largely because they cannot take accountability, and that any use of AI tools during authorship should be disclosed to editors. They also agree that each author remains responsible for the material they submit. The ICMJE further states that AI-generated material cannot be referenced as a primary source. Regarding peer review, both bodies agree on maintaining the confidentiality of manuscripts and improving disclosure; WAME goes further, recommending that if editors use AI tools, this too should be disclosed to authors.

AI as a Publishing Assistant

Dormer recommended that AI be used in the publishing industry, but with care. She emphasised the need for governance and detection tools to support the use of AI, as well as the importance of human oversight and judgement. She also highlighted the need for publishers to collaborate in creating these tools and in ensuring that researchers, editors, and others are trained in their best use.

Conclusion

To sum up, the emergence of AI in academic research and publishing marks a watershed moment for both fields. AI offers enticing potential advantages, such as reduced costs and increased efficiency and quality; addressing the related risks and concerns, such as data security, transparency, and bias, is just as crucial.

As we move through this new terrain, we must find a way to harness AI’s capabilities while ensuring that academic research and publishing remain of high quality. As with the launch of any new technology, there will be adaptations and learning curves. With proper preparation, well-defined objectives, and continuous communication among all parties involved, we can use AI to improve our work while upholding the principles and standards of our fields.