Use of Generative AI in a Research Setting


Although many of our partner academic institutions have provided guidance on the use of Generative Artificial Intelligence (AI) tools in academic settings, that guidance has focused on student and instructor usage and may not be directly applicable to research staff, scientists, and trainees engaged in research. As reports of research teams using tools such as ChatGPT to support their work increase, the WHRI aims to provide initial support to investigators in initiating these conversations. On October 18th, our CW Digital Health Research Manager, Dr. Beth Payne, held a Lunch & Learn with PHSA Director, Research Integration and Innovation, Dr. Holly Longstaff. The event focused on individuals' use of AI in their research roles, and enabled consultation and feedback from the CW research community on current usage and desired supports for managing AI use within their teams.

As a first step in response to the Lunch & Learn discussion, we are reaching out with some general information and potential questions to support investigators, trainees, and research teams in their conversations regarding appropriate use, and transparency of use, of generative artificial intelligence. We recognize that the AI field is dynamic and quickly advancing. To ensure the research institutes remain informed and are effectively using these advanced technologies, our planned next steps are to 1) develop staff-specific guidance on the use of AI at work, and 2) convene an AI in the Research Workplace Working Group for WHRI and BCCHR researchers, trainees, and staff to inform guidance and keep us up to date on recent advances in the AI field.

What is Generative Artificial Intelligence?

Generative AI is a form of AI that primarily uses machine learning algorithms to generate text, images or sound. Generative AI tools can be used for a variety of purposes such as summarizing information in plain language, reducing the word count of text, editing text for grammatical errors, and more.
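To make the "learns patterns, then generates" idea concrete, here is a toy illustration (our own sketch, not how tools like ChatGPT actually work internally): a tiny model that learns which word tends to follow which in a sample text, then samples from those learned transitions to generate new text. Modern generative AI tools apply vastly larger versions of this same statistical idea.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Learn word-to-next-word transitions from a sample text."""
    words = text.split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, max_words=10, seed=0):
    """Generate text by repeatedly sampling a learned next word."""
    random.seed(seed)
    output = [start]
    for _ in range(max_words - 1):
        candidates = model.get(output[-1])
        if not candidates:
            break  # no learned continuation; stop generating
        output.append(random.choice(candidates))
    return " ".join(output)

corpus = ("the model generates text the model learns patterns "
          "the model generates plausible text")
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

Because the model only recombines patterns it has seen, its output can sound fluent while being wrong, which is the small-scale analogue of the "hallucination" limitation discussed below.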

Why does this matter in a research setting?

There is general consensus among users that generative AI tools, such as ChatGPT, Grammarly, and GitHub Copilot, can create efficiencies in the workplace; however, a research setting raises particular and unique considerations. There are many known limitations, including plagiarism, general inaccuracies in output, and hallucinations: plausible-sounding citations and facts that are not real.

What can you do now?

As we move to develop our additional guidance and working group, you can initiate conversations about generative AI with your research team. Table 1 provides questions and considerations to assist you in informed decision-making regarding the use of generative AI in a research setting. It is intended to support responsible use of generative AI by research institute members, trainees, and staff, and is adapted from Government of Canada Guidance1 and several other guidance notes2,3,4. We encourage you to talk to your staff and trainees about their current and planned use of generative AI at work. This guidance is based on four general steps in approaching use of AI in a research setting (Figure 1): consult, verify, review, and disclose.

Figure 1. Steps to approaching AI use in a research setting


Table 1. Stepwise guidance for the use of generative AI in a research setting

Step 1: Consult
  • Have you (the PI), or members of your team, discussed the use of generative AI within the team?
  • Consult with Dr. Beth Payne if you or your team have questions or would like more information on this guidance and the resources available to staff.
  • Have you let the PI/your manager/supervisor know that you plan to use AI prior to using it for any given work task?
  • Disclose planned use of AI to your manager/supervisor before using it for work-related tasks.
  • Do any privacy or ethics concerns need to be addressed prior to use? For example, further consultation may be required if you respond yes to any of the following questions:
  • Are unpublished data or results included as part of the prompt or disclosed to the AI tool during planned use?
  • Does information provided to the AI tool include any personal identifiers?
  • Is accuracy of output critical to the project, including the need for accurate citations and verifiable sources?
  • Consult with Dr. Holly Longstaff to ensure planned use meets ethical and privacy standards when needed.
Step 2: Verify
  • Are the data sources used to develop the AI model you plan to use known and disclosed by the company that owns it, and are any biases resulting from these data sources understood?
  • Review the product's development and usage documentation, as well as its privacy and security statements.
  • Have you reviewed your prompts and tested them to ensure no additional sources of bias are introduced (see for training resources)? Prompts are known to have a significant impact on the quality and content of the output.
  • Try several variations of your prompts to understand the impact of the language you are using, and ensure the context and direction are clear and appropriate to achieve your goals.
Step 3: Review
  • Have you considered the accuracy and quality of generated output in light of known limitations to generative AI tools such as plagiarism and hallucinations?
  • Output has been carefully reviewed to ensure results are free of bias.
  • Output has been carefully reviewed to ensure any source information is accurately cited.
  • Output has been carefully reviewed to confirm accuracy of information presented.
Step 4: Disclose
  • Have you disclosed how and why you used AI?
  • Final AI-generated output included in any publication, communication, or report includes a clear statement indicating that generative AI was used to develop the content, naming the tool used and describing any prompts used.
  • Responsibility for the accuracy of the final AI-generated output is acknowledged by the tool user. Generative AI tools cannot be given authorship, so all responsibility for content generated by these tools rests with the user.
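Some of the questions in Table 1, such as the Step 1 check for personal identifiers in a prompt, can be partially supported by a simple automated pre-screen. The sketch below is illustrative only: the regex patterns are our own examples for a few common identifier formats, a real screen would need institution-specific rules (e.g. personal health numbers, medical record numbers), and no automated check replaces human review.

```python
import re

# Illustrative patterns only; a real screen would need
# institution-specific rules and should not replace human review.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-. ]\d{3}[-. ]\d{4}\b"),
    "date_of_birth": re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),
}

def screen_prompt(prompt):
    """Return the identifier types found in a prompt, so the user
    can remove them before sending text to a generative AI tool."""
    return sorted(name for name, pattern in PII_PATTERNS.items()
                  if pattern.search(prompt))

flags = screen_prompt("Summarize notes for jane.doe@example.org, DOB 1980-04-12")
if flags:
    print("Do not submit; possible identifiers:", ", ".join(flags))
```

A check like this catches only formats it knows about; free-text identifiers (names, addresses) will slip through, which is why the consultation steps in Table 1 remain essential.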

We welcome your feedback on this information, any relevant resources you would like to share, and your thoughts on how we can best support you and your research team in this evolving landscape. Please contact Beth Payne with any questions or feedback, and Kathryn Dewar regarding concerns about AI use.