Public Hospital Network CEO Pushes AI to Replace Radiologists, Doctors Push Back
Hospitals are used to making quiet changes. New machines arrive, workflows adjust, and patients rarely notice what happens behind the scenes. What Mitchell Katz is proposing is not that kind of change. It is direct, public, and aimed at one of the most specialised roles in medicine. The head of NYC Health + Hospitals has said he is ready to replace a large share of radiologists with artificial intelligence, at least for certain types of imaging.
The remark, made during a panel discussion in New York, cuts to a question that has been building for years. Artificial intelligence has been used in radiology for some time, mostly as a support tool. What Katz is suggesting goes further. He is talking about shifting the first read of scans to machines, with doctors stepping in only when something looks abnormal.
It is a proposal that blends cost concerns with access to care. It also sets up a direct conflict between hospital management and medical professionals, each looking at the same technology but drawing very different conclusions.
A System Under Pressure Turns to Automation
Katz runs the largest public hospital network in the United States, a system that includes 11 hospitals and serves more than a million patients each year. That scale shapes how decisions are made. Even small changes in workflow can have large financial and operational effects.
Imaging is one of the areas where those pressures are most visible. Demand for scans has grown steadily, from routine screenings to more complex diagnostics. At the same time, radiologists remain in limited supply, and their salaries have risen as demand has increased. For a public system that operates under tight budgets, that combination creates strain.
Katz’s proposal reflects that reality. The idea is to use AI systems to read low-risk scans such as routine mammograms. If the scan appears normal, it would be cleared without a radiologist’s review. If the system flags something unusual, a doctor would then examine the case.
The logic is straightforward. Machines can process images quickly and at scale. If they can handle a large share of routine cases, hospitals can reduce costs and increase the number of patients screened. Katz has linked this approach to expanding access, particularly for communities that rely on public healthcare.
The system he oversees has already invested heavily in imaging equipment, including a $224 million upgrade tied to GE Healthcare. The next step, in his view, is to use that hardware with software that can handle more of the diagnostic work.
Supporters of this approach point to early performance data. At Westchester Medical Center, executives said AI systems miss very few cancers in low-risk cases, with error rates measured in single digits per ten thousand scans. That figure, they argue, compares favourably with human performance.
For hospitals, the appeal is not only about accuracy. It is about throughput. Faster reads mean more patients can be screened. In a system where delays can affect outcomes, speed carries its own weight.
Yet the proposal depends on more than technology. Current rules in New York require radiologists to review imaging studies. For Katz’s plan to move forward, those rules would need to change. That shifts the debate from hospitals to regulators, where questions of safety and accountability take centre stage.
Doctors Push Back as Debate Moves Past Technology
If hospital administrators see efficiency, many radiologists see risk. The pushback has been sharp and, at times, blunt. Critics argue that relying on AI alone for initial reads could lead to missed diagnoses, especially in cases that fall outside standard patterns.
Mohammed Suhail described the idea as dangerous, warning that it could lead to harm if machines are trusted without sufficient oversight. His argument reflects a broader concern within the profession. Radiology is not limited to identifying patterns in images. It involves clinical judgment, context, and communication with other doctors.
This is where the debate moves beyond numbers. Even if AI systems perform well on average, medicine often deals with exceptions. A rare case that falls outside training data can carry serious consequences. For doctors, the question is not only how often errors occur, but what happens when they do.
There is also the matter of responsibility. If an AI system clears a scan and a problem is later found, who is accountable? Hospitals, software providers, and regulators all play a role, but the lines are not always clear. That uncertainty makes large-scale changes harder to implement.
The discussion also touches on how medical work is defined. Radiologists do more than read images. They prioritise urgent cases, guide treatment decisions, and train future doctors. Reducing their role to image review risks overlooking those functions.
Even within the technology sector, views are not uniform. Nvidia CEO Jensen Huang has said that earlier claims about radiology being replaced by AI were overstated. He pointed out that faster image analysis can increase the volume of scans handled, which in turn can increase demand for radiologists rather than reduce it.
That perspective suggests a different path. Instead of replacing doctors, AI could change how they work, allowing them to focus on more complex cases while machines handle routine tasks. In practice, however, the boundary between assistance and replacement is not fixed. It shifts based on policy, economics, and trust.
Katz’s proposal brings that tension into the open. It is not a theoretical discussion about future technology. It is a plan tied to a specific system, with defined costs and measurable outcomes.
The timing is also notable. Healthcare systems across the United States are dealing with rising costs, staffing shortages, and uneven access to care. AI offers one way to address those issues, but it also introduces new questions about safety and control.
For patients, the debate may feel distant, but its effects are direct. The person reading a scan, whether human or machine, plays a role in diagnosis and treatment. Changes in that process carry weight.
For regulators, the challenge is to balance access and safety. Allowing AI to take on a larger role could increase screening rates and reduce costs. At the same time, it requires confidence that the technology can handle real-world variation.
What makes this moment different is not just the technology itself, but the willingness to act on it. Katz has made clear that he sees AI as ready for wider use, at least in specific cases. Others remain cautious, pointing to limits that are not yet resolved.