A comparison of cover letters written by ChatGPT-4 or humans

Dan Med J. 2023 Nov 23;70(12):A06230412.

Abstract

Introduction: Artificial intelligence has started to become a part of scientific studies and may help researchers with a wide range of tasks. However, no scientific studies have been published on its usefulness in writing cover letters for scientific articles. This study aimed to determine whether Generative Pre-Trained Transformer (GPT)-4 is as good as humans at writing cover letters for scientific papers.

Methods: In this randomised non-inferiority study, we included two parallel arms consisting of cover letters written by humans and by GPT-4. Each arm had 18 cover letters, which were assessed by three different blinded assessors. The assessors completed a questionnaire in which they rated the cover letters with respect to impression, readability, criteria satisfaction, and degree of detail. Subsequently, we performed readability tests using the Lix score and the Flesch-Kincaid grade level.

Results: No significant or relevant difference was found on any parameter. A total of 61% of the blinded assessors guessed correctly whether a cover letter had been written by GPT-4 or by a human. GPT-4 scored higher on our objective readability tests, indicating more complex text. Nevertheless, it outperformed human writing on readability in the subjective assessments.

Conclusion: We found that GPT-4 was non-inferior to humans at writing cover letters. This may be used to streamline cover-letter writing for researchers, giving all researchers an equal chance of advancing to peer review.

Funding: This study received no financial support from external sources.

Trial registration: This study was not registered before the study commenced.

Publication types

  • Randomized Controlled Trial

MeSH terms

  • Artificial Intelligence*
  • Comprehension
  • Humans
  • Writing*