Agency: Guidance Favors Market Innovation Over Federal Scrutiny

New artificial intelligence-enabled health wearable devices and clinical decision support software will not face U.S. Food and Drug Administration regulatory scrutiny, provided the technology meets certain criteria, such as being “low-risk,” the agency said this week.
The policy stance is spelled out in two separate new guidance documents and in statements by agency leadership.
The FDA published two non-binding guidance documents to coincide with public comments by FDA Commissioner Marty Makary during the Consumer Electronics Show in Las Vegas on Tuesday and in an interview with Fox Business that same day, which Makary also reposted on X.
The FDA said that “general wellness” devices, including those enabled by AI, that are “low risk” and not marketed as “medical-grade” – such as by specifying a disease or medical condition or by providing medical recommendations – are typically not subject to FDA review.
Examples of such devices include wrist-worn wearable products “intended to assess activity and recovery that outputs multiple biomarkers, among which are hours slept, sleep quality, pulse rate, and blood pressure,” the FDA said in the guidance.
“Sleep is measured via an accelerometer, while pulse rate and blood pressure are measured via a photoplethysmogram. The claim relates to general wellness and does not refer to a specific disease or medical condition, and thus is a general wellness claim,” the FDA said.
“The technology for monitoring these biomarkers does not pose a risk to the safety of users and other persons if specific regulatory controls are not applied,” the FDA added.
That hands-off approach would change if device marketing started claiming that the product could be used in a medical or clinical context, the agency warned.
As for the FDA’s new clinical decision support guidance, the agency describes the types of CDS software functions that would not meet the definition of a regulated device and thus would not fall under FDA regulatory scrutiny.
“We want to let companies know in clear guidance that if their device or software is simply providing information, they can do that without FDA regulation,” Makary told Fox Business.
Software functions that solely display or print medical information typically are not considered CDS, the FDA said.
That includes CDS products that provide information from “well-understood and accepted sources” – which the FDA said are those that reflect knowledge and practices widely recognized and accepted within the medical or scientific community and are supported by scientific evidence, including peer-reviewed literature, clinical guidelines, or authoritative consensus documents.
“If something is simply providing information, like ChatGPT or Google, we’re not going to outrun that lion,” Makary told Fox Business.
“The FDA’s policy changes are in keeping with the Trump administration’s business-friendly AI strategy, which involves eliminating safeguards in order to fully unleash the technology’s potential,” said Andrew Nixon, a U.S. Department of Health and Human Services spokesman, in a statement Wednesday to Information Security Media Group.
Makary, in his interview with Fox Business, emphasized that the agency’s job is to “get out of the way as a regulator” and stated that “we’re here to promote AI,” Nixon said.
Axel Wirth, chief security strategist at medical device cybersecurity firm MedCrypt, said the guidance is in keeping with a trend of reduced FDA regulatory oversight of “low risk devices” such as health wearables with non-invasive sensing not explicitly marketed as medical devices.
“The applied logic is obvious: lower market entry burden makes new technologies available quicker, but obviously this needs to be monitored to ensure we are not moving from over-regulation to under-regulation,” he said.
