WCM-Q to host “AI Harms in Healthcare” symposium
“As AI is evolving, our knowledge about the benefits and harms is also evolving.”

At this pioneering stage of AI in healthcare, hospitals and medical practitioners alike face thorny new dilemmas around liability. As AI tools become increasingly widespread, what happens when harm occurs? Who is responsible – the doctor, the hospital, or the developer? These are the timely questions that will be explored in depth at the “AI Harms in Healthcare: Who Is Liable?” symposium on February 12 in Doha. Targeted at a wide range of healthcare professionals, from hospital leaders and decision-makers to allied health professionals, physicians, nurses, pharmacists, researchers, dentists, educators, and students, the Weill Cornell Medicine-Qatar (WCM-Q) symposium will compare international approaches, examine real-life case studies, and provide practical guidance on policy-making and risk mitigation.
Course directors and co-presenters Dr. Barry Solaiman, Assistant Dean and Assistant Professor of Law at Hamad Bin Khalifa University (HBKU) and Adjunct Assistant Professor of Medical Ethics in Clinical Medicine at WCM-Q, and Dr. Thurayya Arayssi, Vice Dean for Academic and Curricular Affairs and Professor of Clinical Medicine at WCM-Q, were interviewed about the latest developments in AI in healthcare and the layers of risk that healthcare professionals should be aware of. In 2024, Dr. Solaiman co-edited The Research Handbook on Health, AI and the Law with Dr. I. Glenn Cohen, Deputy Dean and James A. Attwood and Leslie Williams Professor of Law at Harvard Law School and one of the world’s foremost experts on the intersection of medical ethics and law. The open-access title is available via Edward Elgar Publishing.
The symposium is coordinated by WCM-Q’s Division of Continuing Professional Development (CPD), which supports lifelong learning for healthcare professionals.
What are the main legal issues hospitals and healthcare professionals need to be aware of when using AI for diagnosis, triage and treatment?
Dr. Solaiman: There are six legal issues identified by myself and co-editor I. Glenn Cohen in The Research Handbook on Health, AI and the Law. Firstly, there is the risk of discrimination and bias. When you’re training an AI system to be used in a clinical setting, what sort of data are you feeding it? If you’re using a system that hasn’t been trained on broad enough data, or data of good quality, then the outputs won’t be very good. For example, some clinicians have been experimenting with the use of ChatGPT in mental health cases, which is concerning, because it has not been approved or designed for such use.
Secondly, there is the issue of data privacy, with legislation varying from jurisdiction to jurisdiction. Thirdly, healthcare providers need to be on top of data security: how do they ensure data is secure?
And then there is the issue of intellectual property law. If an AI system has made poor recommendations, it could be because the underlying code is poor, but it would be quite hard to get the developers to reveal their code because they can claim that its development is a trade secret. There’s new case law emerging in this area, but it’s not yet clear.
The issues where clinicians are on the front line are informed consent and medical liability.
Currently, there’s no consensus on whether a doctor should even tell a patient that they are using AI in their care. In the U.S., analysis of the existing law suggests you don’t have to tell the patient, but in the EU, scholars are arguing that you should. I argue that you might as well. What’s the harm in informing the patient that AI is being used in their care in order to get consent and avoid problems further down the line?
Another challenge is that AI can lie and make up information, so do you truly have informed consent? Some experts think this is a non-issue on the basis that AI is merely a tool that doctors use like any other tool, so as long as we ensure that the tool has gone through certain safeguards during its development, then that’s enough. Others disagree and argue that AI is a unique tool with the potential to have a particularly pervasive role in doctor-patient interactions, so we may need more safeguards. Based on what I have heard at medical conferences, there is disagreement among some of the leading professors and regulators around the world.
The final big issue on the clinical side is medical liability: who is responsible if you use AI in healthcare and a patient is harmed? The answer right now is that the doctor or the hospital is still ultimately responsible. But what happens if doctors are required to follow care standards set by AI? If AI is making the decisions, who do you then hold responsible? Should it be the developer? The problem is that there is no easy avenue to find developers liable under existing law. I don’t think we have any clear answers yet.
There aren’t really any rules in place for AI use in clinical practice. Some jurisdictions are working on this: the UAE, for example, has developed healthcare AI policies in Abu Dhabi and Dubai that have binding force and stringent rules which apply to almost any entity using AI in any form of healthcare.
What are the main benefits of taking part in this symposium for hospital decision-makers, doctors and healthcare professionals?
Dr. Arayssi: As AI is evolving, our knowledge about its benefits and harms is also evolving. There is no question about the opportunities, such as increased accuracy in diagnosis and a reduced risk of burnout among health professionals by freeing them from mundane, routine activities. Yet at the same time, we need to be cognizant of the potential harms from AI at the system level, the individual level, and the legal level.
And, as is the case with any technological innovation, the technology tends to move much faster than the regulation, so we’re looking at ways to put guardrails in place. At a time when there is a large margin of uncertainty, healthcare leaders can benefit from listening to other leaders in the sector discuss how they are making decisions, as well as hearing the concerns and opinions of healthcare practitioners.
What is unique about this conference is that we are bringing speakers from various healthcare disciplines together so they can talk to each other, learn from each other, and discuss the future together. This is in contrast to typical medical knowledge sharing, which tends to be siloed, with physicians, nurses, technologists and researchers all attending separate conferences.
Dr. Solaiman: A lot of healthcare professionals are operating in an opaque space right now, where they don’t understand the implications of AI use in terms of liability or informed consent – and neither, really, do hospitals. Different hospitals are dealing with this in different ways. The symposium is a good starting point for medical professionals and organizations to understand the issues and how to approach them.
It contributes to informed thinking about how to create internal standards that protect against risks while harnessing the technology: running healthcare more efficiently while also adhering to best practices in ethics and law. When I speak at medical conferences, I always ask for a show of hands: “Who is using ChatGPT in their practice?” Typically, half of the people in the room will raise their hands. This is a red flag that tells me clinicians and health professionals don’t understand the risks they are taking.
How do Arab hospitals and healthcare organizations compare internationally in terms of AI adoption?
Dr. Arayssi: There’s significant funding and a large appetite for AI integration into healthcare systems at a government level. There’s also a big focus on preparing medical professionals for integrating AI into their day-to-day practice. There is certainly a lot of support from the government here in Qatar. However, because the field is moving so quickly, there is a need to upskill healthcare practitioners and ensure they are aware of emerging trends, the potential risks for patients, and evolving regulations.
Dr. Solaiman: There isn’t clear regional data yet, but many countries are investing heavily in robust implementation of AI in healthcare, notably Saudi Arabia and the UAE. In Qatar, AI is being integrated into medical education as well as the main hospital system under Hamad Medical Corporation.
As is the case internationally, the most widespread use cases are in radiology and cardiology. There’s also interest in dermatology, and researchers in the region have been awarded grants to investigate more specific, niche use cases.
AI is being integrated into triage and clinical decision-support systems; it can determine treatment plans and identify considerations that doctors have not taken into account.
And on an administrative level, AI is used to manage the flow of patients on wards, to deal with staff rostering, and bed allocation.
AI is also integrated into robotic devices performing surgery to predict outcomes with a very high level of precision. For example, during a knee operation, AI robots are able to predict precisely what the incision area will look like after surgery, how much of the bone, cartilage, and muscle has been cut, and so on.
What are the main challenges associated with regulating AI in healthcare?
Dr. Solaiman: While AI in healthcare is regulated through medical device regulations – for example, under the Food and Drug Administration (FDA) in the U.S. – the problem is that this approach focuses solely on the medical device. When regulators talk about a life-cycle approach to regulating AI in healthcare, they’re talking about the product life cycle, but when I think of the life cycle, I’m thinking about AI from its inception. When you’re performing R&D in a lab, what protocols are being followed from day one, before the medical device is even created? I led a grant for three years at HBKU, for which the Ministry of Public Health (MoPH) of Qatar was an official adviser, to develop guidelines for best practices when researchers are developing AI systems for healthcare. The next regulatory stage is the product itself, and the final stage is determining a regulatory framework for clinical practice. A true conception of the life cycle of AI runs from ideation to use in clinical practice, and there’s still a lot of work to be done in that space.
The symposium is accredited in Qatar by the Department of Healthcare Professions Accreditation Section (DHP-AS) of the Ministry of Public Health (MoPH) and internationally by the Accreditation Council for Continuing Medical Education (ACCME).
https://wcmq.cloud-cme.com/course/courseoverview?P=5&EID=4707













