Oded Nov, Nina Singh, and Devin Mann’s “Putting ChatGPT’s Medical Advice to the (Turing) Test: Survey Study” appeared in JMIR Medical Education Volume 9. The research investigated how well sophisticated chatbots can handle patients’ questions, and whether patients would trust the responses they received.
To accomplish this, 10 genuine patient questions were drawn from the electronic health record in January 2023 and anonymized. ChatGPT was given the queries and prompted to generate its own responses; for ease of comparison, it was also prompted to keep each answer roughly as long as the human health care provider’s. Survey respondents then had two key questions to answer: Could they tell which answers were written by the bot, and did they trust the ones that were?
Almost 400 participants’ results were tabulated, and they proved interesting. The researchers note in the study that “On average, chatbot responses were identified correctly in 65.5% (1284/1960) of the cases, and human provider responses were identified correctly in 65.1% (1276/1960) of the cases.” That is just under two-thirds of the time overall, and there also appeared to be a limit to the sort of health care support participants wanted from ChatGPT: “trust was lower as the health-related complexity of the task in the questions increased. Logistical questions (eg, scheduling appointments and insurance questions) had the highest trust rating,” the study states.