Concealed Republican
News

Would you want AI making decisions for your doctor while you are under the knife in the operating room?

Jim Taft
Last updated: February 16, 2026 4:29 pm

Never before have we seen a technology that offers such an impressive veneer of competence yet demonstrates such dangerous incompetence when it actually matters. It’s what happens when government works together with the largest tech companies to monopolize the public square, prematurely promote AI for the wrong uses, and paper over its limitations. “Just good enough” can work for some functions of life, but not when you are on the operating table.

Reuters is reporting, based on lawsuits from several injured patients, that in the rush to approve AI-assisted devices for surgery, the FDA is receiving a record number of malfunction reports tied to injuries during surgery. Companies are also being forced to recall these products at a record pace.

Specifically, the report highlights TruDi from Acclarent, software that provides imaging and real-time navigation feedback to ENT surgeons during delicate procedures. By 2021, the product had been on the market for three years, over which time the FDA had received seven complaints of malfunctions and one complaint of a patient injury resulting from an error, a tally within the range of normal baseline adverse-event reporting. In 2021, however, Acclarent introduced machine-learning algorithms into the software.

Since then, the FDA has received 100 unconfirmed reports of malfunctions and eight instances of serious injuries.

What sort of injuries? In numerous instances, the software allegedly hallucinated, misinforming surgeons about the location of their instruments while those instruments were inside patients’ heads. While causation has yet to be proven, patients who underwent operations with TruDi guidance since 2021 have reported:

  • Cerebrospinal fluid reportedly leaking from the nose.
  • The surgeon mistakenly puncturing the base of the skull.
  • Two patients suffering a stroke after a major artery was wrongly cut.

Anyone familiar with using LLMs can easily understand how AI could misidentify anatomy. “The product was arguably safer before integrating changes in the software to incorporate artificial intelligence than after the software modifications were implemented,” one of the suits alleges.

TruDi is one of at least 1,357 AI-enabled medical devices now approved by the FDA. That is double the number the agency had allowed through 2022, which means the FDA somehow managed to properly scrutinize nearly 700 AI medical devices in just three years. Only 25 scientists currently work in the Division of Imaging, Diagnostics and Software Reliability, the key agency unit that assesses the safety of these products.
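To put those numbers in perspective, here is a back-of-the-envelope check. The 1,357 total, the "double through 2022" claim, and the 25-reviewer headcount are from the reporting above; the pre-2022 split and the even three-year workload are inferred for illustration:

```python
# Sanity check of the approval figures cited above. Totals are as reported;
# the pre-/post-2022 split is inferred from the "double" claim.
total_approved = 1357
approved_through_2022 = total_approved // 2          # "double" implies roughly half by 2022
approved_since_2022 = total_approved - approved_through_2022

reviewers = 25                                       # reported division headcount
years = 3
per_reviewer_per_year = approved_since_2022 / (reviewers * years)

print(approved_since_2022)                # 679 devices cleared in ~3 years
print(round(per_reviewer_per_year, 1))    # 9.1 devices per reviewer per year
```

Even spread evenly, that pace leaves each reviewer clearing a new AI device roughly every five to six weeks.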

The apparent rush to market on overhyped and exaggerated LLM capabilities is clearly reflected in the recall data. Researchers from Yale and Johns Hopkins recently found that 60 FDA-authorized AI medical devices were linked to 182 product recalls, with 43% of those recalls occurring less than a year after the devices were approved. According to the study, published in JAMA, that is roughly twice the recall rate of all devices authorized under similar FDA protocols.
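Taken at face value, those figures imply each recalled device was pulled more than once. A small sketch using only the numbers reported above:

```python
# JAMA figures as reported: 60 AI-enabled devices linked to 182 recalls,
# with 43% of recalls coming within a year of authorization.
devices = 60
recalls = 182
early_share = 0.43

recalls_per_device = recalls / devices         # how concentrated the recalls are
early_recalls = round(recalls * early_share)   # approximate count within year one

print(round(recalls_per_device, 1))   # 3.0 recalls per affected device
print(early_recalls)                  # 78 recalls inside the first year
```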

Notably, most of the companies associated with the recalls in the JAMA analysis were publicly traded companies. “The association between public company status and higher recalls may reflect investor-driven pressure for faster launches, warranting further study,” warn the authors.

According to one lawsuit in Dallas, the doctor using the TruDi system was “misled and misdirected,” leading him to cut a carotid artery — which resulted in a blood clot and stroke.

The plaintiff’s lawyer told a judge that the doctor’s own records showed he “had no idea he was anywhere near the carotid artery.” The patient, Ralph, had to have a portion of his skull removed as part of the remedial treatment, and he is still struggling to recover his daily functions a year later.

This is part of a broader problem: user laziness and the appetite for speed and shortcuts are creeping into health care. In a recent study published in Nature Medicine, researchers from Oxford found that among 1,300 participants who used LLMs to diagnose medical problems, many received a mix of bad and accurate information. They found that while AI chatbots now “excel at standardized tests of medical knowledge,” their use as a frontline medical tool would “pose risks to real users seeking help with their own medical symptoms.”

Again, “just good enough” is nowhere near good enough for health care. That a majority of the information is correct makes it more dangerous, not less: the accurate answers build a trust that the wrong ones then exploit.

The problem with LLMs is that they present themselves as supremely qualified, knowledgeable experts capable of adapting to a dynamic situation. But for all their confidence, fluency, and coherence, they lack the capacity for judgment refined through error and revision. When humans outsource their measured judgment to what poses as an expert but lacks internal resistance when unsure of facts, you get catastrophic failure.


In public policy, and particularly in FDA approval of AI technology in health care, we must not fall into the trap of prioritizing speed over safety; safety must be the guiding principle in deploying these technologies. The money thrown at these technologies, and the fact that the return on investment still lags, should not stampede us into frenetic, rushed approvals.

As a percentage of GDP, AI investment now exceeds the railroad expansion of the 1850s, the moon program of the 1960s, and the decades-long construction of the U.S. interstate highway system from the 1950s through the 1970s, according to the Wall Street Journal. The difference is that much of this is debt-financed spending that has yet to produce meaningful revenue. Now these companies are desperately paying “influencers” to shame people into using their products.

Hopefully the technology will get better, but we should not continue prioritizing this technology in its current iteration without major changes. Nor should we ever mistake generative AI as a replacement for the human mind rather than a potential tool for augmentation of the human mind. Safety always comes first, and God created human judgment and human ethics powered by a human brain to be the last line of defense against danger.




© 2025 Concealed Republican. All Rights Reserved.