HIMSS 2026: Cutting Through the AI Noise in Healthcare

By: HT Snowday
Senior Director, Midmark RTLS

April 23, 2026

If you walked the HIMSS floor this year, you probably felt it: AI was everywhere—and clarity was nowhere. 

Every booth had a story. Every solution claimed intelligence. Every demo promised transformation. And yet, talking to health system leaders throughout the week, the most common reaction I heard wasn’t excitement—it was uncertainty. 

What’s real? What’s hype? And more importantly—what actually helps clinicians and operations teams today? 

After dozens of conversations, one thing became clear: healthcare needs a simpler, more practical path to action. 

 

The Problem Isn’t Data—It’s What Happens Next 

Healthcare already has more data than it can use. 

From EHRs to RTLS to device integrations, the issue isn’t visibility—it’s what happens after you see it. 

Most solutions still follow the same pattern: 

  • Log into a dashboard
  • Interpret the data
  • Decide what to do
  • Take action somewhere else

That model worked when software was the only interface we had. But it’s also why so many health systems feel stuck with fragmented point solutions—asset tracking systems here, patient flow tools there, compliance dashboards somewhere else. 

It’s not just a technology problem. It’s an interface problem. 

 

The Real Shift: AI as the Interface 

What stood out at HIMSS wasn’t just the volume of AI—it was how narrowly most of it is being applied. 

A lot of solutions still use AI the way we’ve always used software: to analyze data and present it in a better dashboard, albeit one with more actionable insights. 

But that’s not the real shift. The real shift is this: AI agents will fundamentally change the interface for healthcare software. Today we buy software as point solutions—hand hygiene dashboards, patient flow tools, asset portals. But in an agent-driven world, the interface becomes the agent itself, embedded directly in systems like the EHR. Instead of dashboards, clinicians receive real-time guidance: ‘This patient is ready to move,’ or ‘This IV pump is available nearby.’ 

In an agent-enabled world, clinicians and operators don’t log into ten different systems to figure out what’s happening. They interact with an AI agent that can understand context, pull from multiple systems, recommend next steps and even take action. 
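
To make that concrete, here is a minimal sketch of what “agent as interface” could look like in code. This is an illustration, not a real product API: every function below is a hypothetical stub standing in for whatever EHR, RTLS and asset-system calls a health system actually exposes to its agents.

  # A minimal sketch, not a real product API: every function below is a
  # hypothetical stub standing in for EHR, RTLS and asset-system calls.

  def get_patient_status(patient_id: str) -> dict:
      """Stub for an EHR query (hypothetical)."""
      return {"patient_id": patient_id, "discharge_order": True, "unit": "4W"}

  def find_available_equipment(asset_type: str, near_unit: str) -> list:
      """Stub for an RTLS query (hypothetical)."""
      return [{"asset": "IV pump 1182", "room": "4W-412", "in_use": False}]

  def recommend_next_step(patient_id: str) -> str:
      """Pull context from multiple systems; return guidance, not a dashboard."""
      status = get_patient_status(patient_id)
      if status["discharge_order"]:
          return f"Patient {patient_id} is ready to move."
      pumps = find_available_equipment("IV pump", status["unit"])
      if pumps and not pumps[0]["in_use"]:
          return f"{pumps[0]['asset']} is available in {pumps[0]['room']}."
      return "No action needed right now."

  print(recommend_next_step("pt-001"))

The point isn’t the code; it’s that the clinician never opens a dashboard to get that answer.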

That fundamentally changes the role of software. Applications stop being destinations. They become services. 

And many of today’s point solutions—built around users navigating screens and workflows—start to become unnecessary. The real value shifts to high-quality data and insights delivered directly into clinical workflows. 

 

What an Agent-Enabled Platform Actually Looks Like

This is where we think the industry needs to be clearer. 

An agent-enabled platform isn’t just AI layered on top of data. It’s a system that is: 

  • Accessible: exposing the right data in real time 
  • Actionable: allowing actions to be triggered programmatically 
  • Interoperable: working across systems, not inside a silo 
  • Context-aware: understanding location, workflow and clinical state 

In other words, it’s not about building a better interface. It’s about making the interface optional. 
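
What might that look like mechanically? One plausible pattern, borrowed from how today’s LLM tool-calling interfaces work, is for the platform to publish its data and actions as machine-readable tool definitions that a health system’s own agents can discover and call. The specific tools and fields below are illustrative assumptions, not a Midmark API.

  # Illustrative only: an "agent-ready" platform might publish tool
  # definitions like these for a health system's own agents to consume.
  AGENT_TOOLS = [
      {
          # Accessible: real-time data exposed on request
          "name": "get_asset_location",
          "description": "Return the current room-level location of an asset.",
          "parameters": {
              "type": "object",
              "properties": {"asset_id": {"type": "string"}},
              "required": ["asset_id"],
          },
      },
      {
          # Actionable: actions triggered programmatically, not via a screen
          "name": "request_equipment_dispatch",
          "description": "Ask transport to deliver an asset type to a room.",
          "parameters": {
              "type": "object",
              "properties": {
                  "asset_type": {"type": "string"},
                  "destination_room": {"type": "string"},
              },
              "required": ["asset_type", "destination_room"],
          },
      },
  ]

Interoperability and context-awareness then live in the data behind those tools, not in any particular screen.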

And that leads to an important point: It’s not always the vendor’s job to build the agent. 

Health systems are going to define their own agents—aligned to their workflows, policies and clinical priorities. Our role is to make sure our systems are ready for that world. 

Agent-ready, not agent-siloed. 

 

Where RTLS Fits: Making the Physical World Usable by Agents 

If AI agents are going to operate effectively in healthcare, they need more than clinical data. 

They need real-world context. 

Where are patients right now? Where are staff available? Where is critical equipment—and is it in use? 

That’s where RTLS becomes foundational. 

At Midmark, we see RTLS not as another application, but as part of the infrastructure that makes agent-driven workflows possible. 

Our hybrid BLE/IR approach is designed around that reality: 

  • BLE provides scalable, facility-wide visibility 

  • Infrared (IR) provides room-certain precision where workflows demand it 

  • Together, they create a continuous, reliable stream of location context 
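
To illustrate, here is a simplified sketch of how those two signal types might be merged into a single stream. The field names and the “prefer IR when present” rule are assumptions for illustration; production RTLS platforms handle this with far more nuance.

  # Simplified illustration of fusing BLE and IR reads into one location event.
  from dataclasses import dataclass

  @dataclass
  class LocationEvent:
      tag_id: str
      source: str       # "IR" (room-certain) or "BLE" (zone-level)
      location: str
      room_certain: bool

  def resolve(reads: list) -> LocationEvent:
      """Prefer the latest room-certain IR read; otherwise fall back to BLE."""
      ir_reads = [r for r in reads if r.source == "IR"]
      return ir_reads[-1] if ir_reads else reads[-1]

  reads = [
      LocationEvent("pump-1182", "BLE", "Zone 4-West", room_certain=False),
      LocationEvent("pump-1182", "IR", "Room 4W-412", room_certain=True),
  ]
  print(resolve(reads).location)  # prints "Room 4W-412"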

But the key is what happens next. 

That data shouldn’t live in a dashboard. It should be accessible to any agent that can use it. 

Dispatch equipment. Prioritize patient movement. Trigger workflows. 

All of it available through any of the health system's AI agents, freeing clinicians from juggling multiple point solutions. 
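
As a sketch of that last step, an agent acting on a clinician’s request might turn location context into a direct action call rather than another screen to check. The endpoint, payload and service below are hypothetical.

  # Hypothetical: an agent dispatching equipment via a workflow service API.
  import json
  import urllib.request

  def dispatch_equipment(asset_id: str, destination_room: str) -> None:
      """POST a dispatch request to a (hypothetical) workflow service."""
      payload = json.dumps({
          "asset_id": asset_id,
          "destination": destination_room,
          "requested_by": "agent:patient-flow",
      }).encode()
      req = urllib.request.Request(
          "https://workflow.example.health/api/dispatch",  # placeholder URL
          data=payload,
          headers={"Content-Type": "application/json"},
          method="POST",
      )
      urllib.request.urlopen(req)  # error handling omitted for brevity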

 

A Different Perspective on Innovation 

At HIMSS, there was no shortage of impressive demos. 

But the health systems we spoke with weren’t asking for more features. They were asking a simpler question: 

How do we make all of this actually work together? 

Our answer is intentionally pragmatic: 

  • Build reliable, scalable data foundations (like RTLS) 

  • Make that data agent-accessible in real time

  • Enable agents to take action—not just provide information—via human instruction 

  • Design for a future where the primary user is a human-directed agent, not just a human clicking through screens 

It’s a different way of thinking about value. 

Not as another solution to buy. But as infrastructure that makes everything else work better. 

 

Final Thought: Designing for a World Without Dashboards 

If there was one takeaway from HIMSS this year, it’s that healthcare is at an inflection point. 

We can keep building more software, more interfaces, more dashboards. 

Or we can start designing for a world where clinicians don’t have to navigate systems at all. 

Where the interface is conversational. Where the intelligence is ambient. Where the action happens automatically. 

That’s the promise of agent-first thinking. 

And the organizations that win won’t be the ones with the most software. They’ll be the ones whose systems are ready to work with whatever comes next.