Artificial intelligence (AI) is being used in medicine to improve the diagnosis and treatment of diseases. However, if not properly designed, tested, and used with care, AI medical devices could harm patients and worsen health inequities. An international task force, which included a bioethicist from the University of Rochester Medical Center, has issued recommendations on the ethical development and use of AI medical devices. The task force calls for increased transparency about the accuracy and limitations of AI, and for ensuring that all people, regardless of race, ethnicity, gender, or wealth, have access to AI medical devices that work for them.

While responsibility for proper design and testing lies with AI developers, healthcare providers are ultimately responsible for using AI appropriately and should not rely solely on AI predictions for patient care decisions. Doctors must understand how a specific AI medical device is intended to be used, how well it performs, and what its limitations are, and they must communicate this information to their patients.

AI developers, in turn, should provide accurate information about their device's intended use, clinical performance, and limitations. They should also build alerts into the device or system that inform users about the level of uncertainty in AI predictions. Developers must carefully define the data used to train and test AI models and evaluate model performance against clinically relevant criteria. AI models should be designed to be useful and accurate across the full range of clinical contexts and patient populations in which they will be deployed. To avoid deepening health inequities, AI models should be calibrated for all racial and gender groups by training them on representative datasets.

These recommendations apply broadly to AI medical devices beyond nuclear medicine and medical imaging. As the technology continues to advance rapidly, establishing an ethical and regulatory framework for AI medical devices is increasingly important.
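The recommendation to check calibration for every demographic subgroup, rather than only in aggregate, can be illustrated with a minimal sketch. The group labels, threshold, and data below are all hypothetical, invented purely for illustration; a real device audit would use established calibration metrics and clinically justified thresholds.

```python
# Hypothetical sketch of a per-subgroup calibration audit.
# A model is miscalibrated for a subgroup when its mean predicted
# risk diverges from the observed event rate in that subgroup.

def calibration_gap(predicted_probs, outcomes):
    """Absolute gap between mean predicted risk and observed event rate."""
    mean_pred = sum(predicted_probs) / len(predicted_probs)
    event_rate = sum(outcomes) / len(outcomes)
    return abs(mean_pred - event_rate)

def per_group_calibration(records, threshold=0.05):
    """Flag subgroups whose calibration gap exceeds a chosen threshold.

    records: iterable of (subgroup_label, predicted_probability, outcome).
    The 0.05 threshold is an arbitrary placeholder, not a clinical standard.
    """
    groups = {}
    for group, prob, outcome in records:
        probs, outs = groups.setdefault(group, ([], []))
        probs.append(prob)
        outs.append(outcome)
    return {
        group: {"gap": calibration_gap(probs, outs),
                "flag": calibration_gap(probs, outs) > threshold}
        for group, (probs, outs) in groups.items()
    }

# Toy data: (subgroup, predicted probability, observed outcome 0/1).
records = [
    ("group_a", 0.2, 0), ("group_a", 0.8, 1),
    ("group_a", 0.4, 0), ("group_a", 0.6, 1),
    ("group_b", 0.9, 0), ("group_b", 0.8, 1),
    ("group_b", 0.7, 0), ("group_b", 0.8, 1),
]
report = per_group_calibration(records)
# Here the model is well calibrated for group_a but systematically
# overestimates risk for group_b, so group_b gets flagged.
```

A device trained on an unrepresentative dataset can pass an aggregate calibration check while failing for a specific subgroup, which is exactly the inequity the task force warns about; auditing each subgroup separately surfaces the problem.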