GrantCheck-an AI Solution for Guiding Grant Language to New Policy Requirements: Development Study.

Bibliographic Details
Title: GrantCheck-an AI Solution for Guiding Grant Language to New Policy Requirements: Development Study.
Authors: Shi Q; Center for Clinical and Translational Science, UMass Chan Medical School, Worcester, MA, United States., Oztekin A; Manning School of Business, UMass Lowell, Lowell, MA, United States., Matthew G; Information Technology, UMass Chan Medical School, Worcester, MA, United States., Bortle J; Information Technology, UMass Chan Medical School, Worcester, MA, United States., Jenkins H; Information Technology, UMass Chan Medical School, Worcester, MA, United States., Wong SK; Department of Population and Quantitative Health Sciences, UMass Chan Medical School, Worcester, MA, United States., Langlois P; Information Technology, UMass Chan Medical School, Worcester, MA, United States., Zaki A; Information Technology, UMass Chan Medical School, Worcester, MA, United States., Coleman B; Information Technology, UMass Chan Medical School, Worcester, MA, United States., Luzuriaga K; Center for Clinical and Translational Science, UMass Chan Medical School, Worcester, MA, United States., Zai AH; Department of Population and Quantitative Health Sciences, UMass Chan Medical School, Worcester, MA, United States.
Source: JMIR formative research [JMIR Form Res] 2025 Nov 27; Vol. 9, pp. e79038. Date of Electronic Publication: 2025 Nov 27.
Publication Type: Journal Article
Language: English
Journal Info: Publisher: JMIR Publications Country of Publication: Canada NLM ID: 101726394 Publication Model: Electronic Cited Medium: Internet ISSN: 2561-326X (Electronic) Linking ISSN: 2561326X NLM ISO Abbreviation: JMIR Form Res Subsets: MEDLINE
Imprint Name(s): Original Publication: Toronto, ON, Canada : JMIR Publications, [2017]-
MeSH Terms: Artificial Intelligence* , Natural Language Processing* , Writing* , Research Support as Topic*, Humans
Abstract: Background: Academic institutions face increasing challenges in grant writing due to evolving federal and state policies that restrict the use of specific language. Manual review processes are labor-intensive and may delay submissions, highlighting the need for scalable, secure solutions that ensure compliance without compromising scientific integrity.
Objective: This study aimed to develop a secure, artificial intelligence-powered tool that assists researchers in writing grants consistent with evolving state and federal policy requirements.
Methods: GrantCheck (University of Massachusetts Chan Medical School) was built on a private Amazon Web Services virtual private cloud, integrating a rule-based natural language processing engine with large language models accessed via Amazon Bedrock. A hybrid pipeline detects flagged terms and generates alternative phrasing, with validation steps to prevent hallucinations. A secure web-based front end enables document upload and report retrieval. Usability was assessed using the System Usability Scale.
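The detect-and-rewrite pipeline described in the Methods can be sketched in miniature as follows. This is an illustrative reconstruction, not GrantCheck's implementation: the flagged-term lexicon, the `suggest_alternative` mapping (which stands in for the LLM call made via Amazon Bedrock), and the validation rule are all placeholder assumptions.

```python
import re

# Illustrative flagged-term lexicon; GrantCheck's actual list is
# institution-specific and not published in this record.
FLAGGED_TERMS = {
    "vulnerable populations": "populations at elevated risk",
    "health disparities": "differences in health outcomes",
}

def detect_flagged_terms(text: str) -> list[tuple[str, int]]:
    """Rule-based pass: return (term, character offset) for each hit."""
    hits = []
    for term in FLAGGED_TERMS:
        for m in re.finditer(re.escape(term), text, flags=re.IGNORECASE):
            hits.append((term, m.start()))
    return sorted(hits, key=lambda h: h[1])

def suggest_alternative(term: str) -> str:
    """Stand-in for the LLM rewrite step (Amazon Bedrock in GrantCheck)."""
    return FLAGGED_TERMS[term]

def validate(suggestion: str) -> bool:
    """Validation step: reject a suggestion that still contains flagged
    language (a simple analog of the paper's anti-hallucination checks)."""
    return not any(t in suggestion.lower() for t in FLAGGED_TERMS)

def review(text: str) -> list[dict]:
    """Produce a report of flagged spans with validated alternatives."""
    report = []
    for term, offset in detect_flagged_terms(text):
        alt = suggest_alternative(term)
        if validate(alt):
            report.append({"term": term, "offset": offset, "suggestion": alt})
    return report

draft = "We will recruit vulnerable populations to study health disparities."
for item in review(draft):
    print(item["term"], "->", item["suggestion"])
```

The separation into detection, rewrite, and validation mirrors the hybrid design: deterministic rules decide *what* is flagged, the generative model proposes *how* to rephrase, and a final check guards against the model reintroducing restricted terms.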
Results: GrantCheck achieved high performance in detecting and recommending alternatives for sensitive terms, with a precision of 1.00, a recall of 0.73, and an F1-score of 0.84, outperforming general-purpose models including GPT-4o (OpenAI; F1=0.43), DeepSeek R1 (High-Flyer; F1=0.40), Llama 3.1 (Meta AI; F1=0.27), Gemini 2.5 Flash (Google; F1=0.58), and even Gemini 2.5 Pro (Google; F1=0.72). Usability testing among 25 faculty and staff yielded a mean System Usability Scale score of 85.9 (SD 13.4), indicating high user satisfaction and strong workflow integration.
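The reported F1-score is the harmonic mean of precision and recall, so the figures in the Results can be checked directly; the function below is a generic formula, not code from the study.

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall: 2PR / (P + R)."""
    return 2 * precision * recall / (precision + recall)

# GrantCheck's reported precision (1.00) and recall (0.73)
# reproduce the stated F1 of 0.84.
print(round(f1_score(1.00, 0.73), 2))  # 0.84
```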
Conclusions: GrantCheck demonstrates the feasibility of deploying institutionally hosted, artificial intelligence-driven systems to support compliant and researcher-friendly grant writing. Beyond administrative efficiency, such systems can indirectly safeguard public health research continuity by minimizing grant delays and funding losses caused by language-related policy changes. By maintaining compliance without suppressing scientific rigor or inclusivity, GrantCheck helps protect the pipeline of research that advances biomedical discovery, health equity, and patient outcomes. This capability is particularly relevant for proposals in sensitive domains, such as social determinants of health, behavioral medicine, and community-based research, that are most vulnerable to evolving policy restrictions. As a proof-of-concept development study, our implementation is tailored to one institution's policy environment and security infrastructure, and findings should be interpreted as preliminary rather than universally generalizable.
(©Qiming Shi, Asil Oztekin, George Matthew, Jeffrey Bortle, Hayden Jenkins, Steven (Koon) Wong, Paul Langlois, Anaheed Zaki, Brian Coleman, Katherine Luzuriaga, Adrian H Zai. Originally published in JMIR Formative Research (https://formative.jmir.org), 27.11.2025.)
Contributed Indexing: Keywords: artificial intelligence; grant applications; large language model; natural language processing; research compliance; usability evaluation; user-computer interface
Entry Date(s): Date Created: 20251127 Date Completed: 20251127 Latest Revision: 20251127
Update Code: 20251128
DOI: 10.2196/79038
PMID: 41308189
Database: MEDLINE