SINGAPORE – Protecting the privacy and confidentiality of a person’s data is of paramount importance in biomedical research. So is obtaining informed consent and respecting the individual’s rights and autonomy.
But what if it is not feasible for researchers to get a person’s consent for the specific use of their data in a study powered by artificial intelligence (AI) and big data?
When should societal benefit take priority over data privacy, and vice versa? If research relies on an AI model, how does the model arrive at its decisions, and who should be held responsible when those decisions are wrong?
These are among the questions raised in an ongoing public consultation paper, titled Ethical, Legal And Social Issues Arising From Big Data And Artificial Intelligence Use In Human Biomedical Research.
The 103-page paper, released online in May, was initiated by the Bioethics Advisory Committee (BAC), an independent national advisory body set up by the Cabinet in late 2000 to review ethical, legal and social issues arising from biomedical research and its applications in Singapore.
The final advisory report will be used to guide various groups, including academics, researchers and healthcare professionals, that want to use big data and AI in human biomedical research.
Current safeguards include the Personal Data Protection Act (PDPA) 2012 that came into full effect in 2014, and the AI in Healthcare Guidelines 2021, which the Ministry of Health co-developed with the Health Sciences Authority and the Integrated Health Information Systems.
These do not cover every eventuality or possible development in big data (which in healthcare includes data from clinical records, patient health records, results of medical examinations and health-monitoring devices) and AI.
It is a transformative, fast-growing area that can help with the early detection of diseases, disease prevention, better treatments, and better quality of life, for instance, said the chair of BAC’s big data and AI review group, Professor Patrick Tan from Duke-NUS Medical School’s cancer and stem cell biology programme.
However, he said that when it comes to big data research, individuals may sometimes not know what their data is used for, and it may not be possible to get consent from every individual each time their data is used.
Data privacy vis-a-vis societal benefits is an issue highlighted in the paper. The potential of big data research is huge, but risks to data privacy could increase, and there has to be a fair balance, Prof Tan added.
For individuals, a possible future risk is facing discrimination when buying insurance, should the privacy of their data be compromised.
“For instance, if your insurance company knows that you have the BRCA1 (breast cancer gene 1) mutation, but you don’t have breast cancer yet, should you still be eligible for insurance?
“In the United States, there are laws that prohibit this sort of pre-disease profiling,” said Prof Tan, who is also the executive director of the Genome Institute of Singapore and director of the SingHealth Duke-NUS Institute of Precision Medicine.
“There’s even some evidence of (researchers) using big data in AI where the cadence of your voice… the speed at which you type your letters on the keyboard are all registers of mental acuity.
“Let’s say that you’re at risk for something, and if you can be identified, would that put you at risk (of) being discriminated against?” he said, citing hypothetical examples.