Researchers use ChatGPT in their writing more out of trust in the technology than perceived usefulness

27 Feb 2026

ChatGPT has attracted the attention of the scientific community. Unlike conventional writing tools, which check style, grammar, and spelling in text that has already been written, ChatGPT has generative capabilities that enable it to produce syntactically coherent and semantically plausible content across a range of computational linguistics applications (e.g., chatbots, text summarization, sentiment analysis, and question-answering systems). While ChatGPT lacks hermeneutic comprehension and does not exhibit the domain-specific inferential reasoning characteristic of human researchers, it nonetheless constitutes a valuable assistive mechanism in scholarly composition. For instance, it can be employed to generate drafts of structured sections of research articles or to condense extensive literature reviews into thematically organized narratives.

Considering the rapid development of AI and the viability of ChatGPT as an academic writing tool, investigating whether researchers intend to adopt this software, as well as the factors that influence their decision, is of scholarly and societal relevance. While recent studies have examined researchers’ awareness, perceptions, and attitudinal dispositions toward ChatGPT, there remains a paucity of research explicating the cognitive, technological, and contextual contingencies shaping their adoption decisions.

This study examined factors driving the adoption of generative artificial intelligence tools like ChatGPT for research writing through an integrated framework combining the Technology Acceptance Model, Task-Technology Fit, and Trust in Specific Technology. Responses from 564 researchers in 12 countries were analyzed using structural equation modeling. Intriguingly, perceived usefulness and ease of use were not significant predictors, despite being considered the strongest drivers of behavioral intention in countless studies. Instead, researchers prioritize trusting beliefs and the compatibility between a technology and a task when considering its use. It was also found that trust in the technology has greater explanatory power than task-technology compatibility, and that this trust is shaped by beliefs that ChatGPT is a socially and academically accepted tool for manuscript writing. Overall, this study contributes new insights for researchers, funding bodies, publishers, policymakers, and the academic community as they navigate the evolving role of AI in scholarly writing.

Author: Manuel B. Garcia (College of Education, University of the Philippines Diliman | Educational Innovation and Technology Hub, FEU Institute of Technology | Graduate School of Education, Korea University)

Read the full paper: https://www.tandfonline.com/doi/full/10.1080/10447318.2025.2499158

Image by Sanket Mishra from Pexels
