FHIR-Native by Design: Why Legacy System Conversions Can't Compete with Purpose-Built Interoperability
As healthcare systems race to meet 21st Century Cures Act mandates, a critical question emerges: retrofit or rebuild? Mike O'Neill, CEO of MedicaSoft, explains why FHIR-native architecture delivers fundamentally different interoperability outcomes than legacy systems with API layers bolted on. This conversation cuts through vendor marketing to examine the structural, semantic, and operational advantages of building healthcare IT from the ground up on HL7 FHIR standards.
O'Neill draws on extensive experience leading P&L, engineering, and operations across healthcare IT startups and public companies to explain what "FHIR-native" actually means in practice—and why it matters for CIOs evaluating vendor claims. Learn how purpose-built FHIR architecture eliminates middleware complexity, reduces integration costs, and enables real-time clinical data exchange that retrofitted systems struggle to deliver.
Mike O'Neill, CEO, MedicaSoft
Megan Antonelli, Founder & CEO, HealthIMPACT Live
00:00:00 Intro: Welcome to Digital Health Talks. Each week we meet with healthcare leaders making an immeasurable difference in equity, access, and quality. Hear about what tech is worth investing in and what isn't as we focus on the innovations that deliver. Join Megan Antonelli, Jenny Sharp, and Shahid Shah for a weekly no-BS deep dive on what's really making an impact in healthcare.
00:00:30 Megan Antonelli: Hi everyone. This is Megan Antonelli, and you're listening to HealthIMPACT's Digital Health Talks, where healthcare technology leaders make informed decisions. Today, I am thrilled to be talking with Mike O'Neill. He is the CEO of MedicaSoft and an industry veteran who has led IT teams through product launches, rapid growth phases, and acquisition integration across startups and public company environments. He's managed international teams, negotiated complex partnerships, and brings P&L accountability to engineering decisions, experience that informs today's discussion on why architectural choices made at the foundation level determine long-term interoperability success. Hi, Mike, welcome to Digital Health Talks.
00:01:16 Mike O'Neill: I'm very glad to be here, Megan.
00:01:18 Megan Antonelli: Yeah, it's great to meet you. I know we share a friend in common, Shahid Shah, who goes back to when you worked at the VA. But tell us a little bit about your background, what you've been working on, and how you got to your role with MedicaSoft.
00:01:37 Mike O'Neill: Sure, yes. Well, actually, as you mentioned, the VA was kind of the entry point for me into the world of health IT. I came there as really more of a hardware person, even a semiconductor person, but I came to the VA to help them establish an innovation program, and that amounted to doing a lot of interesting work to bring new technology into VA health care. Health IT was a big part of it, working on things like Blue Button and so forth, which I remember very well. In any case, that was really my introduction to the challenges of working with healthcare data. And when I moved on from the VA and had a chance to start a new company, which is MedicaSoft, that's where we started to say, well, let's look at data: how do we solve the problem of handling data from all kinds of different systems and making it useful for people? Well, it's been an interesting road, but that brings us to today.
00:02:37 Megan Antonelli: Yeah, that's great. I mean, the VA has always been part of the conversation in terms of all of the work that they did to create those original EHRs and systems, and of course now with their transitions. But we've watched the importance of data work its way through healthcare, and what interoperability means. And of course, I can't even remember exactly when it was, but when HL7 released FHIR, the work, and the level to which things have accelerated since then.
00:03:13 Mike O'Neill: Yeah.
00:03:13 Megan Antonelli: There's a lot of discussion around that. And I think we hear the vendors and the data companies talking about FHIR compatibility and what that means. I'd love to hear from your perspective, coming all the way from what was happening with Blue Button to now: what does the vendor pool really look like with respect to who's using FHIR, and what might FHIR-native systems mean?
00:03:46 Mike O'Neill: Sure. Well, maybe it's worth spending a minute on the last thing that you said there: native FHIR, or FHIR-native systems, versus systems that support FHIR. Just for clarity, what we mean by native FHIR is this: most people these days think of FHIR in terms of interoperability, its role as an API for sharing or exchanging data. But we can't forget that there's another part to the standard, the data model: how you represent all the different flavors of data that occur in healthcare. And FHIR, I feel, is a really outstanding and flexible data model. So when we say a FHIR-native system, what we mean is that it uses FHIR internally as its data model. Versus, you may have an EHR that has had a proprietary data model for years and years and needs to support FHIR. So you add an API and a translation layer, and you're able to speak FHIR, let's say, to the external world, but internally it's something else. So that's native FHIR versus supports FHIR, but not natively. And I think in the industry there are a lot of FHIR implementations doing narrower use cases, a lot of "I'll expose a FHIR API so that an application can be integrated here." Think of EHRs supporting even Meaningful Use requirements for patient access. So that's good. It means there's an awareness of FHIR out there in the industry, and there's a lot of proliferation of FHIR APIs. Maybe we'll get into where we go from here; I think that's just a first step, not the final step.
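To make the "supports FHIR but not natively" pattern concrete, here is a minimal sketch of what a translation layer at the API boundary might look like. This is an illustration only: the proprietary row fields and the mapping are hypothetical, not any specific vendor's schema; a FHIR-native system would store the Observation shape directly and skip this conversion step.

```python
# Hedged sketch: a translation layer converting a hypothetical proprietary
# EHR table row into a FHIR Observation resource on the way out.

def to_fhir_observation(legacy_row: dict) -> dict:
    """Map a proprietary vitals row into a FHIR Observation (dict form).
    In a non-native system this runs on every external API call; in a
    FHIR-native system the data is already stored in this shape."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {
            "coding": [{
                "system": "http://loinc.org",
                "code": legacy_row["loinc"],
                "display": legacy_row["test_name"],
            }]
        },
        "subject": {"reference": f"Patient/{legacy_row['patient_id']}"},
        "valueQuantity": {
            "value": legacy_row["result"],
            "unit": legacy_row["units"],
        },
    }

# Hypothetical row from a proprietary vitals table.
row = {"patient_id": "123", "loinc": "8867-4",
       "test_name": "Heart rate", "result": 72, "units": "beats/min"}
obs = to_fhir_observation(row)
```

The cost O'Neill alludes to lives in functions like this one: every new data type needs its own mapping, and the internal model and the external FHIR view can drift apart.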
00:05:36 Megan Antonelli: Yeah. And you talked a little bit about why that's important in terms of native versus just being compatible, and what that means later as we look at levels of interoperability, right? So tell us a little bit about what that means as you go through where data is used, how it gets used, and what that means across the architecture and ecosystem of our relatively very complex healthcare system.
00:06:13 Mike O'Neill: Sure. Well, you mentioned levels of interoperability. There's an almost academic way to think of it: we often talk about foundational, structural, semantic, and organizational. Foundational is, can you and I even exchange data? And I think across the industry we have that; we've moved beyond paper and fax to electronic. Structural is, how do we format the data we're talking about? Is a blood pressure just a string of text I send to you, or is it a thing that has fields? Semantic is, do you and I both mean the same thing? And then organizational is, okay, after we've done all that, can we really get the value out of the data that we were hoping for? FHIR offers a great solution for structural and semantic interoperability. It gives us the formats, if you will, and a way to agree on what the meanings are and how we capture those meanings in the data. And the hope is that if we all do that, then we've got the foundation for organizational interoperability. Then we've reached a point where I've got multiple sources of data, wherever it comes from, and I'm able to bring it together, understand what it means, understand the governance and provenance and so forth, and then actually use it in my day-to-day workflows, right?
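O'Neill's blood-pressure example can be sketched directly. The structured version below follows the standard FHIR Observation pattern for a blood pressure panel (LOINC 85354-9, with systolic 8480-6 and diastolic 8462-4 as components); the specific values are illustrative.

```python
# Foundational interoperability only: the reading arrives, but as an
# unstructured string with no fields and no agreed meaning.
bp_as_text = "BP 120/80"

# Structural + semantic interoperability: the same reading as a FHIR
# Observation. Fields give it structure; shared LOINC codes give it meaning.
bp_as_fhir = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"coding": [{"system": "http://loinc.org",
                         "code": "85354-9",
                         "display": "Blood pressure panel"}]},
    "component": [
        {"code": {"coding": [{"system": "http://loinc.org",
                              "code": "8480-6",
                              "display": "Systolic blood pressure"}]},
         "valueQuantity": {"value": 120, "unit": "mm[Hg]"}},
        {"code": {"coding": [{"system": "http://loinc.org",
                              "code": "8462-4",
                              "display": "Diastolic blood pressure"}]},
         "valueQuantity": {"value": 80, "unit": "mm[Hg]"}},
    ],
}

# Because both sides resolve the same LOINC code, a receiving system can
# pull out "systolic" without guessing at string formats.
systolic = next(c["valueQuantity"]["value"] for c in bp_as_fhir["component"]
                if c["code"]["coding"][0]["code"] == "8480-6")
```

The text version can only be displayed; the structured version can be queried, trended, and fed into decision support, which is the organizational layer O'Neill describes.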
00:07:48 Megan Antonelli: Which I think is so important. We talk about interoperability so frequently in healthcare, and we think about it very much as organizations talking to organizations. And the reality is that, from the beginning, it's also been within the organization that it's hard to use the data, let alone getting to a place where you're actually pushing it between systems and making that easy for the patient or for providers outside of the system. So we look at it sometimes and think, how is it going so slowly? But it's because of the complexity of it all. And this data rule, in the scheme of things, has been a relatively recent structure that has then allowed for this to occur. So as we look at where the Cures Act and the final rule mandate standardization and using FHIR: does that actually distinguish between FHIR-native systems and legacy platforms? Are there advantages? If you're a CIO making these decisions, what do you have to compare or contrast in terms of the compliance advantage, if any?
00:09:11 Mike O'Neill: Yeah. So, a couple of things that you mentioned in there. First, regulation doesn't really distinguish between a native FHIR implementation and an implementation that translates data to FHIR. And honestly, I think that's a good thing. I don't think we want regulation to dictate or try to control internal implementations; that's probably not its role. But regulations do support and hopefully encourage, or sometimes lead, the evolution in the industry. In that case, you could think of how ONC-type health IT regulations over the years have gone from pushing people off of paper and fax to electronic systems and electronic data, providing some formats for that (C-CDAs and USCDI and so forth), encouraging people to move to APIs, then requiring them to move to APIs. And I think the latest ONC push from a regulatory point of view is to say, okay, we're now going to push people a little bit out of that comfort zone. Just because you can exchange CDA documents, even if it's via a queue or something, we really want that data to be more liquid: cloud-based systems accessible via not just APIs but FHIR APIs, and not just read-only to support a few use cases, but read and write, supporting a broader set of data, with performance metrics you can actually meet so that real-world systems can use it. So I think the regulatory evolution is going in that direction.
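For a sense of what "read and write via FHIR APIs" means at the wire level, here is a minimal sketch of FHIR's RESTful URL conventions. The base URL is a placeholder, not a real server, and the helpers simply build the request targets rather than issuing live HTTP calls.

```python
# Hedged sketch of FHIR RESTful conventions: read is GET [base]/[type]/[id],
# search is GET [base]/[type]?param=value, and a write is a POST of a
# resource body to [base]/[type]. The server address is hypothetical.

BASE = "https://fhir.example.org"  # placeholder FHIR endpoint

def read_url(resource_type: str, resource_id: str) -> str:
    """FHIR read interaction: fetch one resource by type and id."""
    return f"{BASE}/{resource_type}/{resource_id}"

def search_url(resource_type: str, **params: str) -> str:
    """FHIR search interaction: query a resource type by parameters."""
    query = "&".join(f"{k}={v}" for k, v in params.items())
    return f"{BASE}/{resource_type}?{query}"

# A write ("create") would be an HTTP POST of a resource body, e.g. with a
# real client: requests.post(f"{BASE}/Observation", json=observation_dict).
patient_read = read_url("Patient", "123")
bp_search = search_url("Observation", code="85354-9")
```

The regulatory shift O'Neill describes is essentially about how much of this surface a certified system must expose: not just the read, but searches, writes, and acceptable response times.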
00:11:01 Megan Antonelli: Right.
00:11:02 Mike O'Neill: If it's not, you know, answering kind of two for one: you mentioned a really key thing along the way in asking that question, which is that we normally think of interoperability as exchanging data between organizations, but internal interoperability is actually a thing that needs work. I feel like the better you're doing with internal interoperability, the better participant you are in external interoperability. To come back to FHIR, native FHIR, and architecture, I have to tell you, Megan, I'm still an engineer at heart, so I like talking about architecture. One of the things we really like about FHIR as a data model is that it's a very good way to implement a consistent, flexible enterprise data architecture. If you have a common data model for how you're going to represent data, and tools that allow you to bring data into that model and share it, it helps you avoid or eliminate the data silos that tend to accumulate around an enterprise: every time you solve a problem, you create a new database, and then you find out they don't work well together, and so forth. A common data architecture, with FHIR as the data model and a way to exchange data between applications as well as organizations, helps you solve a lot of those problems that cost CIOs a lot of money and generate a lot of headaches for them.
00:12:43 Megan Antonelli: Yeah. I mean, we were just in New York this past week, and the island of Manhattan is soon just going to be one hospital complex, where you go from Northwell to Mount Sinai to NYU Langone as they all grow their buildings and acquire practices outside. So we talk about internal interoperability, but as you bring in other organizations through acquisition or partnership, that complexity, and the need to handle it relatively quickly, becomes even more urgent. So when we think about integration budgets and the financial impact, I imagine that having that data architecture helps quite a bit. Can you talk a little bit about that?
00:13:38 Mike O'Neill: Yeah, sure. For a native FHIR system, there's nothing inherently more or less complex that would say, hey, if you implement a native FHIR data platform, it's just automatically cheaper. It's still a system that requires design and support and maintenance and so forth. I think the way it helps an organization in terms of cost and complexity is that it allows you to structure how you solve all of these problems. If you're a CIO who has to deal with the acquisition of the latest health system and integrating that data, EHR integration, supporting new clinical workflows, how AI fits into this, it's a lot of problems that touch everything from policy to patient consent to technology and so forth. So having a structured way to tackle those really helps. It may just help with your sanity, but it also helps with cost and complexity and being able to deploy teams effectively to solve the problems. You can think of a data architecture as a foundational thing. If you can say, look, I've got a common approach to my data, it's FHIR-based, and I have tools that work on that, then when I'm integrating a new healthcare system, I have a structured way to talk about sharing data, migrating data, and so forth. When we're adding a new class of applications that involve sensitive data, how am I sure that I'm protecting privacy? You have a data model that understands how to implement things like patient consent. You're not redoing your data every single time you tackle a new problem. I've got a foundation, and now I start to build on top of it.
That's why, even though it can be a little bit nerdy to talk about data architecture and data models and data standards, it's really foundational to a lot of the things that we deal with.
00:15:49 Megan Antonelli: Right. Well, let's take it into the real world a little bit. Take that FHIR-native architecture and how it impacts system performance for clinical workflows: if you're not using a native system and you've got translation delays or semantic inconsistencies, what friction could come about that affects patient care or provider communication? Where do you see that happening?
00:16:19 Mike O'Neill: Yeah. So I think there are two areas with a native FHIR system versus a translational system. There can be absolutely just raw performance delays. If I have data that's accessible faster, in a workflow where someone, a clinician, is making decisions, that can matter. So a native FHIR system can provide better performance and help you in that way. But I think what we've found is that the even bigger impact is when you don't have the common data approach that a native FHIR platform allows you to support: you end up not being able to provide all of the data that you'd like to that workflow. I used the term data silos before. With data silos, sometimes you're not able to bring all of the data you want to bear in a given workflow, and now you either don't have a complete patient record, say, or you don't have the complete context that you would like. And so it starts to limit the effectiveness or the scope of what you can put into a workflow. So, to recap: there are some benefits in just raw performance, the response time that users see in the user interface, but I think there's even more impact in being able to make all of the data that you have in your enterprise available to the workflow.
00:17:50 Megan Antonelli: Right. And we've been hearing a lot from folks saying, we don't want more point solutions, we want platform solutions. But the reality of healthcare, particularly right now, is that new solutions are coming in faster than people can even keep up. And at the end of the day, the clinicians want them, your operations teams want them, your administrative teams want them, because they do improve performance. As you bring in new vendors, or even switch vendors, are there benefits there as well?
00:18:24 Mike O'Neill: Yeah, sure. In that world of always wanting to add new capabilities, vendor lock-in, to be blunt about it, can be a real problem. Thinking about it from the data point of view, a native FHIR solution, and a well-architected data system in general, does make it easier if you're going to switch vendors: you have a structured way to approach data migration and the incorporation of new data into the system. But I think even more powerful is that you are not tied to one vendor to solve every one of your problems just because they're the ones that hold your data. One of the big benefits of a standard like FHIR, including a data model and an API, is that if your data is in such a system, there's a large number of people who can then access that data and use it for everything from population-health-type analytics to mobile apps for patients to computing clinical quality measures. There's a lot of work going on in the industry to do things using FHIR data. In some industries you would call it a best-of-breed approach: I don't have to switch vendors just to do something new, because otherwise the vendor who has my data won't let me. You're not locked in that way; you're able to make all of your data available to a number of companies, vendors, applications, and workflows, and really take a best-of-breed approach.
00:20:13 Megan Antonelli: Right. Yeah, absolutely. We talk about the importance of the data, and yes, organizations talking to each other, but the ability to look at your own organizational data right now, I think, is so important. And as AI applications come in to do more predictive work on your patients and on what's working within your organization, I imagine this becomes that much more important in terms of the entire ecosystem of apps and analytic tools coming into the organization. So when you look at that, what does it mean for the future of what enterprise healthcare IT departments look like today?
00:21:06 Mike O'Neill: Yeah. Each question you ask touches so many interesting things. Let me tie in the benefit of a FHIR-native platform. We've talked about performance and the ability to bring disparate sources of data together. There's another aspect that touches the future topics you mentioned, including introducing AI, which is improving the quality of your data. Healthcare data, as we know and probably all say, is all over the place in terms of data quality and completeness. So being able to say, look, I've got a common approach: as I bring in data, I'm able to do things like implement national code sets, or translate proprietary code sets to national standards, or process notes and extract codable concepts to make more computable data. But these are things that are always evolving. The things that I do this year, I probably didn't even think of three years ago. If I have to start over from the ground up every time I want to improve or augment my data, that's very painful in every dimension: cost, complexity, people's time. So having a system that gives me a structured way to address data and improve data quality without starting over, and I think the FHIR standard provides some structured ways to do this, that's powerful. So now, as you say, I'm introducing AI applications, and for the data I want to make available there, I need to know its provenance, I need to understand the governance, I need to understand the quality. Having a data architecture gives you the structure to do that with confidence.
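The code-set normalization step O'Neill mentions can be sketched simply. The local codes and the mapping table below are invented for illustration (the two LOINC codes shown are real), and real pipelines would use a terminology service rather than a hardcoded dictionary; the point is keeping both the national code and the original local code, which preserves the provenance he calls out.

```python
# Hedged sketch: translating proprietary lab codes to a national standard
# (LOINC) while retaining the local code for provenance. The local code
# system and mapping table are hypothetical.

LOCAL_TO_LOINC = {
    "LAB_GLU": ("2345-7", "Glucose [Mass/volume] in Serum or Plasma"),
    "LAB_HGB": ("718-7", "Hemoglobin [Mass/volume] in Blood"),
}

def normalize_code(local_code: str) -> dict:
    """Return a FHIR CodeableConcept (dict form) carrying the national
    code first when a mapping exists, with the original local code kept
    alongside it so the translation is auditable."""
    coding = [{"system": "urn:example:local-codes", "code": local_code}]
    if local_code in LOCAL_TO_LOINC:
        loinc, display = LOCAL_TO_LOINC[local_code]
        coding.insert(0, {"system": "http://loinc.org",
                          "code": loinc, "display": display})
    return {"coding": coding}
```

Unmapped codes pass through with only the local coding, so downstream quality checks can flag them instead of silently dropping data.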
00:23:13 Megan Antonelli: Yeah. As much as we hear about it, and certainly in my own experience coming to all of this and thinking, wow, how powerful it would be if these organizations could talk to each other, from a research perspective and from a treatment and diagnostics perspective, we are only just now, after I won't say how many years, beginning to scratch the surface on that. You mentioned data sets that live outside the healthcare organization, like social determinants of health, and bringing them in, where the power of all of that becomes real and operational, with benefits for patients, for clinicians, and for all members of the healthcare ecosystem. And with data, garbage in, garbage out: true then, true now. It feels like it just becomes that much more important as we are both collecting so much and then using it to make really important decisions. So as we have a few more minutes left, we always like to end on a very positive note, although most of this has been very positive in terms of the progress that's been made. When you look towards the future, the next three to five years, with this rapid acceleration of technology adoption, rapid proliferation of data, and the fundamental adoption of FHIR-native architecture happening across healthcare, what are some of the things that stand out as real possible improvements to care, or competitive advantages for those healthcare systems that lean in that direction?
What are your bright spots as you look to the future?
00:25:19 Mike O'Neill: Yeah, sure. I would say this: we've said in healthcare for a while that being data-driven is important, whether we're talking about outcomes and patient health or operations and efficiency. And the thing that excites me, the reason I love doing what we do here, is that I think we really are getting to a point where we can make that a true statement across a very broad set of data in all kinds of workflows. I think it's always been true, but a little bit narrow in scope, and every year it gets broader. Look at it this way: if we can help people get a good, solid data architecture, then they'll have more and better data, and they can worry less about the plumbing, if I can say it that way. Okay, that's solid; you don't have to churn it every single year. Not that you don't have to continue to work on your data, but it allows you to shift more and more focus to: what am I doing with the data? What information am I extracting from it? And as they do that, I think they're making that statement come true. Their care delivery and their operations are becoming more and more data-driven. And I feel like that's a great thing.
00:26:47 Megan Antonelli: That is a great thing. And I love the analogy you used. We actually just came out of HealthIMPACT, where we were using the builders, the architects, and the data, and interoperability was the pipes: we have to build this foundation to realize the future of what we want in healthcare, which is to be able to access our data and care from wherever we are. And if you use that analogy of a home or a building: if the systems aren't organized and recognizable, if you don't know where the walls are and where the pipes are to find them, then it becomes messy. So the idea of native systems being the foundation, I think, is so important.
00:27:46 Mike O'Neill: Great, I agree.
00:27:47 Megan Antonelli: Yeah. Well, thank you, Mike, so much for joining us. It's really a pleasure talking to you. And I've learned so much about all of this. And I know our audience has as well. Tell them how to contact you.
00:27:58 Mike O'Neill: You can always find the company on the web at the MedicaSoft website, and you can find me from there. I'm also on LinkedIn.
00:28:08 Megan Antonelli: Perfect. Yes. And we'll have to get you and Shahid together soon at a future HealthIMPACT as well. Well, thanks so much, Mike, and thank you to our listeners. I hope you've enjoyed this conversation on why architectural decisions made at the foundation level determine long-term interoperability outcomes, and on the difference between FHIR-native systems and legacy platforms with translation layers. For more insights on healthcare technology and strategy, be sure to subscribe to Digital Health Talks wherever you listen. That's Megan Antonelli signing off.
00:28:48 Outro: Thank you for joining us on Digital Health Talks, where we explore the intersection of healthcare and technology with leaders who are transforming patient care. This episode was brought to you by our valued program partners: Automation Anywhere, revolutionizing healthcare workflows through intelligent automation; Natera, advancing contactless vital signs monitoring; Elite Groups, delivering strategic healthcare IT solutions; SailPoint, securing healthcare identity management and access governance. Your engagement helps drive the future of healthcare innovation. Subscribe to Digital Health Talks on your preferred podcast platform, share these insights with your network, and follow us on LinkedIn for exclusive content and updates. Ready to connect with healthcare technology leaders in person? Join us at the next HealthIMPACT event. Visit the HealthIMPACT forums for dates and registration. Until next time, this is Digital Health Talks, where changemakers come together to fix healthcare.