Putting ChatGPT’s Medical Advice to the (Turing) Test
Authors: Oded Nov, PhD, MS [1,2]; Nina Singh, BS [1]; Devin M. Mann, MD, MS [1,3]

Affiliations:
[1] NYU Grossman School of Medicine, Department of Population Health, New York, NY, USA
[2] Department of Technology Management, NYU Tandon School of Engineering, Brooklyn, NY, USA
[3] NYU Langone Health, Medical Center Information Technology, New York, NY, USA
Corresponding Author:
Oded Nov,
New York University
New York, NY, 11201
onov@nyu.edu
Abstract
Importance: Chatbots could play a role in answering patient questions, but patients' ability to distinguish between provider and chatbot responses, and their trust in chatbot functions, are not well established.
Objective: To assess the feasibility of using ChatGPT or a similar AI-based chatbot for patient-
provider communication.
Design: Survey conducted in January 2023
Setting: Online survey
Participants: A US-representative sample of 430 participants aged 18 and above was recruited on Prolific, a crowdsourcing platform for academic studies. Of these, 426 completed the full survey. After removing participants who spent less than 3 minutes on the survey, 392 respondents remained. Of the respondents analyzed, 53.2% were women, and their average age was 47.1 years.
Exposure(s): Ten representative, non-administrative patient-provider interactions were extracted from the electronic health record (EHR). The patients' questions were entered into ChatGPT with a request that the chatbot respond using approximately the same word count as the human provider's response. In the survey, each patient's question was followed by either the provider's response or the ChatGPT-generated response. Participants were informed that five responses were provider-generated and five were chatbot-generated. Participants were asked, and financially incentivized, to correctly identify the source of each response. Participants were also asked about their trust in chatbots' functions in patient-provider communication, using a 1-5 Likert scale.
Main Outcome(s) and Measure(s): Main outcome: Proportion of responses correctly classified
as provider- vs chatbot-generated. Secondary outcomes: Average and standard deviation of
responses to trust questions.
Results: Correct classification of responses ranged from 49.0% to 85.7% across questions. On average, chatbot responses were correctly identified 65.5% of the time, and provider responses 65.1% of the time. Responses about patients' trust in chatbots' functions were, on average, weakly positive (mean Likert score: 3.4), with trust declining as the health-related complexity of the task in question increased.
Conclusions and Relevance: ChatGPT responses to patient questions were only weakly distinguishable from provider responses. Laypeople appear to trust the use of chatbots to answer lower-risk health questions. Continued study of patient-chatbot interaction is important as chatbots move from administrative to more clinical roles in healthcare.
Keywords: AI in Medicine; ChatGPT; Generative AI; Healthcare AI; Turing Test