Mender blog

Success with AI in MedTech depends on the software infrastructure beneath it

Device Talks Minnesota brings together regulatory leaders, clinicians, engineers, and product teams to discuss the future direction of medical technology. Two themes dominated the conversations during this week’s event: artificial intelligence (AI) in clinical practice and cybersecurity as a patient safety requirement. The challenging regulatory environment and the expanding threat landscape underscored both themes.

However, what stood out most was something the panels rarely named directly. The AI panel spent most of its time on algorithm design, data integrity, and US Food and Drug Administration (FDA) expectations. The cybersecurity session focused on threat modeling and design priorities. Sitting between both topics is a question of mindset: are we building software to launch, or software to sustain across a lifecycle? Once an algorithm is validated and the device is in the field, how does the software actually change?

Robust software update infrastructure answers that question, and it is becoming the layer that decides whether AI-enabled medical devices can deliver on their promises. As data grows in importance and compliance drives (or stalls) time-to-market and overall success, the mechanisms to securely and successfully manage software play an increasingly important role.

AI in MedTech is intertwined with the regulatory environment

Featuring leaders from the Mayo Clinic, Medtronic, and the University of Minnesota, the AI panel covered the now-familiar applications: diagnostics, workflow optimization, personalized monitoring, and new models of care. Speakers walked through the regulatory considerations that come with these new solutions, including algorithm validation, data integrity, and meeting evolving FDA expectations.

What received less attention is that AI models on medical devices are not static. They are retrained when new data becomes available. They are refined when clinical performance drifts. They are sometimes rolled back when an update behaves unexpectedly in the field. Each of those events is a software change, and under the FDA framework, AI/ML model retraining or redeployment is classified as a regulated software update. Each cycle requires re-verification, re-validation, and traceable documentation.

One panelist captured the broader frustration well: "A lot of times the regulatory environment is harder than the technology you're developing." For AI-enabled medical devices, the regulatory environment and the technology are now intertwined. The model is the device, and the device updates as often as the model does.

Robust software updates make secure AI possible

The cybersecurity session reinforced this fundamental truth: security cannot be a late-stage exercise. Consider a connected infusion pump that reaches pre-launch testing before security review uncovers an unauthenticated communication channel between the device and its gateway. Addressing it at this stage means redesigning the firmware's communication layer, re-running verification, and updating the risk management file and 510(k) documentation, ultimately pushing the launch by months and incurring high costs. If the threat model had been built alongside the system architecture, the channel would have been authenticated by design, with no rework required.

The same logic applies to AI models with even greater force. Their complexity, opacity, and rapid update cadence make late-stage security work fundamentally harder than with traditional software. Imagine discovering during pre-submission validation that the training pipeline lacked integrity controls. Model behavior cannot be patched the way traditional software can; the remedy is re-curating data, retraining, and re-validating. Dataset provenance, signed model artifacts, and pipeline integrity controls are dramatically cheaper to design from the outset than to retrofit after the fact.

A poisoned model, a corrupted training pipeline, or an unauthorized firmware change all produce the same outcome: a device that no longer behaves as regulators approved it to.
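To make the pipeline-integrity idea concrete, here is a minimal sketch of dataset provenance checking: record a digest for every input to the training pipeline, then verify those digests before retraining. The file names and contents are invented for illustration, and a production pipeline would additionally sign the manifest with an asymmetric key so the manifest itself cannot be silently replaced.

```python
import hashlib


def digest(data: bytes) -> str:
    """SHA-256 hex digest of a blob (stand-in for a dataset file)."""
    return hashlib.sha256(data).hexdigest()


def build_manifest(files: dict[str, bytes]) -> dict[str, str]:
    """Record a digest for every file that feeds the training pipeline."""
    return {name: digest(blob) for name, blob in files.items()}


def verify_manifest(files: dict[str, bytes], manifest: dict[str, str]) -> list[str]:
    """Return the names of files whose content no longer matches the manifest."""
    return [name for name, blob in files.items()
            if manifest.get(name) != digest(blob)]


# Curate the training set and record its provenance manifest.
dataset = {"ecg_batch_01.csv": b"patient,reading\n1,0.82\n",
           "ecg_batch_02.csv": b"patient,reading\n2,0.79\n"}
manifest = build_manifest(dataset)

# Later, before retraining: detect that a batch was silently altered.
dataset["ecg_batch_02.csv"] = b"patient,reading\n2,9.99\n"
print(verify_manifest(dataset, manifest))  # ['ecg_batch_02.csv']
```

The point is not the specific hash function but the discipline: if provenance is captured when data is curated, a poisoned batch is caught before it ever reaches training, rather than after a model ships.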

A robust software update infrastructure addresses security risks from multiple angles and throughout the product lifecycle:

  • Cryptographically signed artifacts ensure that only authorized model updates reach the device, satisfying FDA Section 524B requirements for secure update capabilities.
  • A/B partition schemes with automatic rollback keep devices operational if a new model performs poorly in the field, which is critical when the device is delivering patient care.
  • Phased rollouts and fleet observability let manufacturers validate model performance on a small subset of devices before expanding, mirroring the staged clinical validation regulators expect.
  • Comprehensive audit logs turn each model deployment into traceable evidence for post-market surveillance and 510(k) submissions.
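The phased-rollout mechanism above can be sketched in a few lines: hash each device ID together with a deployment ID to assign the device a deterministic bucket, then open buckets one at a time as confidence grows. The device and deployment names here are hypothetical, and real fleet managers layer health checks and rollback triggers on top of this kind of bucketing.

```python
import hashlib


def rollout_phase(device_id: str, deployment_id: str, num_phases: int = 10) -> int:
    """Deterministically bucket a device into one of num_phases rollout waves.

    Hashing the device ID together with the deployment ID reshuffles the
    buckets per deployment, so the same devices are not always first in line.
    """
    h = hashlib.sha256(f"{deployment_id}:{device_id}".encode()).digest()
    return int.from_bytes(h[:4], "big") % num_phases


def in_current_wave(device_id: str, deployment_id: str,
                    phases_open: int, num_phases: int = 10) -> bool:
    """A device receives the update once its bucket falls within the open phases."""
    return rollout_phase(device_id, deployment_id, num_phases) < phases_open


# First wave of a model rollout across a hypothetical fleet of pumps.
fleet = [f"pump-{i:03d}" for i in range(1000)]
wave1 = [d for d in fleet if in_current_wave(d, "model-v2.1", phases_open=1)]
print(len(wave1))  # roughly 10% of the fleet
```

Because the assignment is deterministic, each wave is a strict superset of the previous one, and the audit trail can state exactly which devices ran which model version at any point in the rollout.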

Another panelist framed the broader shift well. The device used to be the center of the universe, but now the data the device produces is the axis. That observation carries a corollary worth noting: data is only as trustworthy as the device producing it, and the device is only as trustworthy as the software it is currently running. Without a reliable way to update and verify that software, the device's data loses its value the moment a vulnerability or model drift goes unaddressed.

The next generation of medical devices is defined by what happens after launch

The conversations at Device Talks Minnesota outlined the dynamic, data-driven direction the medical device industry is moving in. Medical device innovation is no longer about a single approved version of a product. It is about how that product evolves, how its AI components are retrained, how its security posture is maintained, and how its compliance documentation keeps pace with both. Software updates are the connective tissue between all of those factors.

For OEMs building AI-enabled medical devices, the strategic question is no longer whether to invest in a software update infrastructure. It is whether that infrastructure is built for the regulatory, security, and operational realities of healthcare. The software foundation must empower secure, compliant, and continuously validated AI in MedTech, from launch through decommissioning.
