About the use of AI

     The burgeoning use of artificial intelligence (AI) tools in scientific research and publication has given rise to a complex interplay of ethical, moral, and legal considerations. However, the generative capabilities of AI technologies can be harnessed to support ethical writing and publication of scientific articles:

  1. Generate research ideas: By inputting a few keywords, researchers can leverage AI to generate novel research ideas, providing a stimulating starting point for their work.
  2. Summarize literature: AI can efficiently summarize relevant scientific articles and studies, helping researchers grasp key concepts and identify critical gaps in the literature.
  3. Gather and organize data: AI's ability to cluster and categorize information can facilitate the collection and organization of relevant data.
  4. Detect plagiarism and ensure originality: AI can assist in identifying potential instances of plagiarism and ensuring that citations are accurate. Researchers must, however, take care to avoid leaving copies of their work in AI repositories to prevent self-plagiarism.
  5. Verify AI-generated content: Researchers should employ tools to verify if content has been generated by AI.
  6. Enhance writing style: AI can offer suggestions for improving the clarity, coherence, and overall quality of written text.

     Despite these benefits, the misuse of AI can lead to significant societal harms, including the propagation of misinformation, the erosion of trust in scientific research, and the infringement of intellectual property rights. Specific concerns in scientific research include:

  1. Data quality and bias: The quality and representativeness of the data used to train AI models can significantly impact the reliability of the results.
  2. Limited data availability: Constraints on data availability may limit the generalizability of findings.
  3. Privacy and security: The collection and use of data for AI training raise concerns about privacy and security.
  4. Skill requirements: The effective use of AI tools requires specialized knowledge and skills.

     To ensure the integrity, transparency, and quality of scientific research, the following guidelines are established for authors, reviewers, and publishers.

Authors:

EASI Journal does not permit the use of AI to write articles independently, nor the listing of a large language model (LLM) as an author of an article. AI cannot be held accountable for written content and lacks the capacity to consent to publication as an author. Authors who use AI tools must:

  1. Understand the terms of use of any AI tool, particularly regarding its rights to reuse content submitted in prompts.
  2. Verify the AI tool's policy regarding the handling and privacy of sensitive data. Ensure that such data is used with the informed consent of the relevant parties and that publicly available data carries the necessary permissions for use.
  3. Conduct a thorough analysis of the prompts and data used to train the models. This involves identifying potential biases in the data, such as gender, race, or class inequalities. Consider robust security measures to protect confidential data in the design of prompts, such as encryption, access control, and anonymization protocols.
  4. Evaluate models for algorithmic bias once they are trained. This may include specific tests to detect discrimination or inequality in predictions.
  5. Assess and declare how model-based decisions will affect stakeholders. This may involve consulting ethics experts and groups affected by the model's decisions. In brief, ask: what are the implications for the scientific community or for society as a whole?
  6. Always cite the use of generative AI when writing or creating text, figures, or other content for scientific publication, after verifying the original source of the material.
  7. Verify facts and references suggested by a generative AI tool.

Reviewers:

  1. Critical assessment: Rigorously assess the use of AI in submitted manuscripts, ensuring that it is appropriate and ethical.
  2. Identify biases: Be vigilant for potential biases in AI-generated content.
  3. Provide constructive feedback: Offer constructive feedback on the use of AI, including suggestions for improvement.
  4. Disclose AI use: Declare the use of AI tools in the review process.

Publisher:

     Guidelines concerning the use of AI must be regularly reviewed and updated to reflect technological advancements and evolving ethical and legal considerations pertaining to the responsible use of AI in the peer review and editing of scientific articles. All articles will undergo a rigorous peer-review process, including originality and AI-use checks performed with Turnitin.

     While there are currently no definitive metrics limiting the extent to which generative AI tools can be employed in the production and review of scientific manuscripts, decisions regarding the acceptance or rejection of articles that exhibit AI usage will be based on the following criteria:

  1. Critical and well-supported arguments presented by human experts should supersede any AI-generated content within scientific manuscripts.
  2. The credibility and quality of submitted manuscripts, even those that have utilized AI tools, must remain paramount. Evaluations will focus on the scientific merit of the work, irrespective of the tools employed. To this end, the EASI Editorial Board will adhere to the following ranges for the acceptance or rejection of AI-assisted manuscripts:
  • Low range (0-15%): Content is generally considered original and acceptable. Additional validation of AI-derived content may not be required. The number of authors is limited to five or fewer.
  • Moderate range (16-50%): Acceptance is contingent upon the inclusion of a methodology section addressing the validation and verification of AI-generated content. Authors may describe specific methods used to confirm the validity and originality of results, including the verification and citation of sources supporting AI-generated claims. If AI-generated text detection analysis exceeds 50%, the manuscript will be automatically rejected.

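The acceptance thresholds above can be summarized as a simple decision rule. The sketch below is purely illustrative (the function name and the automated framing are assumptions of this sketch, not an official EASI tool; final decisions rest with the Editorial Board):

```python
def ai_usage_decision(detected_pct: float) -> str:
    """Illustrative mapping of an AI-text detection percentage to the
    policy outcome described above. Hypothetical helper, not an
    official EASI tool.
    """
    if not 0 <= detected_pct <= 100:
        raise ValueError("percentage must be between 0 and 100")
    if detected_pct <= 15:
        # Low range (0-15%): content generally considered original and acceptable.
        return "acceptable"
    if detected_pct <= 50:
        # Moderate range (16-50%): acceptance contingent on a methodology
        # section validating and verifying the AI-generated content.
        return "conditional"
    # Above 50%: automatic rejection.
    return "rejected"
```

For example, a manuscript with a 30% detection score would fall in the moderate range and require a methodology section validating the AI-generated content.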

References:

  1. Leung TI, de Azevedo Cardoso T, Mavragani A, Eysenbach G. Best Practices for Using AI Tools as an Author, Peer Reviewer, or Editor. J Med Internet Res. 2023 Aug 31;25:e51584. doi: 10.2196/51584. PMID: 37651164; PMCID: PMC10502596.
  2. Marušić A. JoGH policy on the use of artificial intelligence in scholarly manuscripts. J Glob Health. 2023 Feb 3;13:01002. doi: 10.7189/jogh.13.01002. PMID: 36730184; PMCID: PMC9894504.
  3. Tiedrich L. Editorial. Journal of AI Law and Regulation. 2024;1(1). doi: 10.21552/aire/2024/1/3
  4. The Use of Artificial Intelligence in Writing Scientific Review Articles. Curr Osteoporos Rep. 2023 Apr 27. doi: 10.1007/s11914-023-00852-0
  5. Microsoft (2024). Copilot (April 27, 2024 version) [Large language model]. https://copilot.microsoft.com/?showconv=1
  6. Google (2024). Gemini (May 24, 2024 version) [Large language model]. https://gemini.google.com/app