Medical Connectivity

February-16-2015

9:00

Assessing the magnitude and significance of cyber threats has at least two important purposes. One is to determine the extent of measures that have been or should be taken to respond to or counter the threat. This is part of the rational deployment of resources across the multiple risks that we face, whether cyber or otherwise; in this regard it is simply not possible, or necessary, to respond to all risks with equal vigor. A second purpose can be to communicate threat significance to or among interested parties. For such communication there is a tendency to reduce complex, multifaceted issues to a simple, broad summary word, e.g., the threat level is “Guarded”. Such simplicity may be attractive, but it is not necessarily meaningful with regard to what to do with the information communicated.

Of interest here is how those issuing threat assessments are making their determinations. One approach is quasi-mathematical: components of a threat are “scored”, and those scores are then combined in some way, with the net score indicating something of supposed significance. Those of us who have used methods such as those in ISO 14971 are familiar with scoring severity and probability, and then presenting these in a two-dimensional grid which is divided into some number of risk zones (often three). A “risk score” is also often determined by multiplying the severity score by the probability score, although there is no theoretical basis for such a multiplication. Nonetheless, with a multiplication scheme additional factors can also be considered, scored and multiplied in. The best known of these additional factors is detectability, especially when risk assessment is applied to manufacturing and inspection is meant, ideally, to find and eliminate bad product from the production stream, thereby reducing the risk of a manufacturing defect reaching the end user. My colleague and I have discussed the limitations of these kinds of fake mathematical calculations here and here. Limitations do not mean that a method should not be used; rather, it means the method should be used knowingly and with caution.

In the cybersecurity space a different kind of math has emerged, as used for example by the National Health Information Sharing and Analysis Center (NH-ISAC). Here four factors are combined by addition and subtraction using the relationship Severity = (Criticality + Lethality) – (System Countermeasures + Network Countermeasures). Each of these four factors has a 1-5 scale with word descriptions for each level, e.g., a criticality of 3 is “Less critical application servers” while a lethality of 3 is “No known exploit exists; Attacker could gain root or administrator privileges; Attacker could commit degradation of service.” As far as I have been able to determine there is no theoretical basis for or validation of linearly adding and subtracting the four individual scores, nor is there a basis for each factor having the same 1-5 scale. In addition, the use of the same scale for each factor introduces a false symmetry, e.g., a Criticality of 4 with a Lethality of 2 has the same effect as a Criticality of 2 with a Lethality of 4. There is also a false relativity with respect to, for example, 4 being logically perceived as twice as bad as 2.
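
As a rough illustration of the arithmetic (a minimal sketch, not NH-ISAC's own tooling; the range check and example values are mine), the formula amounts to this:

```python
# Sketch of the additive scoring scheme described above. The factor names
# follow the published formula; everything else is illustrative.

def severity_score(criticality, lethality, system_cm, network_cm):
    """Severity = (Criticality + Lethality) - (System + Network Countermeasures)."""
    for value in (criticality, lethality, system_cm, network_cm):
        if not 1 <= value <= 5:
            raise ValueError("each factor is scored on a 1-5 scale")
    return (criticality + lethality) - (system_cm + network_cm)

# The false symmetry noted above: two quite different threats score identically.
print(severity_score(4, 2, 3, 3))  # -> 0
print(severity_score(2, 4, 3, 3))  # -> 0
```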

The conversion of the calculated number into a particular threat level also appears to be more or less arbitrary. In NH-ISAC there are 5 threat levels: Low, Guarded, Elevated, High, and Severe (with corresponding color codes Green, Blue, Yellow, Orange, Red). The score ranges for these 5 threat levels are distributed with intervals of 3, 2, 3, 2 and 2, which also seems arbitrary. This scheme also results in many different combinations of the four factors leading to the same overall rating. In addition, the end result is highly sensitive to the inherent uncertainty in the individual scores. Furthermore, there is an inherent limitation in knowing only the end result, e.g., Guarded: it is not possible to work backwards to find out what contributed to that level. For example, in a particular instance, was it high criticality or low protection? Nor can it be determined how much an action that moves an individual score up or down would help, unless the raw score is reported along with the level. How individual Severity scores are combined to produce a global Severity score is not described. In fact, as used by NH-ISAC, it is not even possible to publicly determine what actual threat or threats they have considered in arriving at their overall threat level. If you don’t know the particular threat, it is not possible to respond, even if a response were appropriate. Given that this methodology has no available documentation, we also don’t know anything about inter-rater variability, i.e., to what degree two independent raters would arrive at the same scoring.
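
The degeneracy is easy to demonstrate by brute force. The sketch below (again, not NH-ISAC tooling) enumerates all combinations of the four factors and counts how many collapse onto each raw score; it makes no attempt to reproduce NH-ISAC's unpublished cut-points between levels.

```python
from collections import Counter
from itertools import product

# Enumerate all 5^4 = 625 combinations of the four 1-5 factors and count how
# many yield each raw Severity score. The theoretical range runs from
# (1+1)-(5+5) = -8 up to (5+5)-(1+1) = +8.
score_counts = Counter(
    (crit + leth) - (sys_cm + net_cm)
    for crit, leth, sys_cm, net_cm in product(range(1, 6), repeat=4)
)

for score in sorted(score_counts):
    print(f"raw score {score:+d}: {score_counts[score]:3d} combinations")

# A raw score of 0, for example, can be reached by 85 different combinations,
# so knowing only the resulting level says little about what drove it.
```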

A different group with an online presence in threat rating at first offered me the following explanation of their methodology (from an email): “The determination is somewhat more subjective, how the rubric is calculated has not been published to my knowledge. We essentially get together online and have a discussion about the current issue.” This methodology clearly is not transparent. Subsequent to my inquiry they added to their website a linear combination scoring system that has at least some of the same limitations as the one discussed above.

Another characteristic of third-party threat assessment is that those doing the assessing are not those who have to respond to the threat. Nor do they have to rank the cybersecurity threat in comparison to other threats to patient and system safety that have nothing to do with cybersecurity. Yes, such threats do still exist. This can lead to a Chicken Little mentality in which those enmeshed in their own arena of doom come to believe that their perceived doom is the only doom, or at least a much worse doom than other people’s doom. Moreover it leads to the self-aggrandizement of their own domain and attempted self-propagation of their importance. In this regard the announcement of the recent collaboration between two cybersecurity threat rankers included the assertion that, “This landmark partnership will enhance health sector cyber security by leveraging the strengths of…” the participants and “…will facilitate improved situational awareness information sharing,” all before they had done anything. How grand!

Health related cybersecurity is no doubt an issue that has to be treated seriously, but this must be done in the context of all the other issues that challenge the healthcare enterprise and the safety of patients. In this context we should not confuse potential vulnerabilities with actionable risk, nor confuse any risk with a critical risk. In this regard it is important to remember that safety is the absence of unreasonable risk, not the absence of risk.

William Hyman is Professor Emeritus of the Department of Biomedical Engineering at Texas A&M University. He now lives in New York City where he continues his professional activities.


January-26-2015

11:00

My first exposure to Mobisante and their disruptive diagnostic ultrasound system was at the mHealth Summit in November of 2010. At that time, the consumerization of medical devices had been gaining traction, mostly in the physician office market. Consumerization offers medical device manufacturers advantages in lower design costs, shorter time-to-market, lower product costs, increased usability and lower training costs.

I recently got Sailesh Chutani, co-founder and CEO of Mobisante, on the phone and we discussed their product strategy — a software based diagnostic ultrasound that runs on a variety of consumer electronics platforms.

Your product is clearly a diagnostic ultrasound medical device, but one can’t help but notice the rather unique design and choice of components. What were the factors driving the eventual design and appearance of your diagnostic ultrasound?

For us, in terms of where we started, our goal was to make ultrasound imaging universally accessible; to democratize it. Currently, there are three very significant barriers to broader adoption of ultrasound imaging: cost, complexity and the difficulty of integration with workflows.

Traditional ultrasounds are the way they are because historically the only way to get the high performance and image quality you need was to do custom hardware, custom everything. This is by necessity very expensive. Then, in 2007, Qualcomm came up with Snapdragon chipsets. Now, for the first time, you had enough computing power in a smartphone to do the processing required for real-time ultrasound imaging. So, we were looking at all of that and thinking, “Okay, so what are some of the cost drivers?” Doing custom hardware is a pretty major driver of costs. If we could now use commodity electronics as building blocks for these devices, our costs would be dramatically lower.

The second big barrier was that a lot of the complexity you see in traditional devices comes from designs that are kind of one-off designs. And these devices are also designed for highly trained sonographers or experts. Those devices have dozens of knobs and controls. Sometimes weeks of training are required just to learn how to master a conventional ultrasound system. We questioned whether all of that complexity was necessary. Certainly, this complexity is an impediment if you’re going to make the diagnostic functionality more broadly accessible, especially to non-experts. As consumers, all of us are getting trained on the interaction paradigm of smartphones and tablets, so why not just make using ultrasound look like any other application you’d download? Stick to the interaction paradigm that the whole community has been trained on, and leverage that. So now, it takes someone five minutes to learn to operate the device versus three weeks of classes.

Breaking the third barrier entailed leveraging the connectivity that comes for free in all of these smartphones and tablets. So, we leverage those connectivity capabilities to offer functionality beyond image capture: managing and organizing these images through the diagnostic life cycle. We offer cloud-based image management, and eventually we’ll offer over-read services and analytics. These capabilities simplify integration of our devices into clinical workflows.

Those are the three key insights and drivers we had for our product, and I think you can clearly see them emerge out of the design and the actual product today. We are leveraging commodity smartphones, tablets and other off-the-shelf hardware, focusing on designing a simplified user interface that piggybacks off the training we all have in gestures and touch. The third piece is connectivity and increasing the value of the solution by not just capturing images at the device but also providing image management and access to other complementary services.

It seems that you intended to create a disruptive medical device from the get-go?

That’s an accurate statement. [chuckle]

Our driver was really: how do you increase access? And maybe I can step back and ask, “Why does this even matter?” If you look at the broader issue, and this is really a global issue, the big question is, “How do you increase access to healthcare and do it in an affordable manner?”

There are two main drivers that determine the cost of healthcare. One is where are you delivering care? Are you delivering care in places like hospitals, in clinics, or in people’s own homes? And the other is who’s delivering that care? Is it highly trained MDs or nurses, nurse practitioners, or community workers or maybe patients themselves?

So, if you can move care closer to the patients, whether it’s a clinic out in the community or their own home, and you can move more of the care delivery to mid-level professionals and eventually patients themselves – if you can move along those two dimensions, that’s where you start to break the cost curve.

The problem is, in order to do that, you need a different kind of medical device or toolbox. Less skilled users need diagnostic and procedural guidance that you don’t have today. Today’s medical toolbox, the diagnostic toolbox included, is designed to be operated in a hospital environment by highly trained professionals. So, in some sense, that’s the problem we wanted to solve, and we picked ultrasound imaging as the tool to focus on first because it has very broad applicability.

Point of care imaging can take the guesswork out of medicine, right? So, you don’t have to palpate or poke to try and figure out what’s happening inside someone’s body – you can image and see what’s happening. That is the driver for what we’re doing. And yes, we also wanted to explore, how do you have a business model that would be difficult for incumbents to copy, and an approach, which as the newcomer, we could pioneer and build into a very strong position in the market?

Your product design has allowed you to carve out a segment of the market that really didn’t exist before.

That is correct. That is correct because, ultimately, if you think about who do we compete with? We compete with non-consumption (pdf), to use Clayton Christensen’s term, right? We are really making ultrasound imaging available to people who either didn’t have access before, had inconvenient access, or couldn’t afford it. Right? So, for them, they’re not looking for the dozens of features of a $100,000 device. They’re looking for something basic that allows them to do triage, quick looks. Essentially, answer yes/no questions.

So, while your disruptive product design significantly reduced the purchase price and per unit revenue for your product compared to traditional ultrasound systems, it sounds like it’s also created new business opportunities and new revenue opportunities?

Absolutely. An example would be, in addition to the imaging device, offering people image management in the cloud. Besides automating the diagnostic workflow, clinics and hospital systems can essentially start to use that to do quality control on the images being acquired. They can use it to do training. Radiologists can start to offer 24x7 over-read services for ultrasounds, a service that exists in CT and MRI but is not as prevalent in ultrasound.

So, yes, you’re absolutely right: you’ve got new opportunities to provide better care and new revenue opportunities for us. But there’s one point I want to highlight. Traditionally, devices have been sold in a fee-for-service reimbursement world. Manufacturers sell big-ticket medical devices justified by the fees health care providers can charge for using the device to do procedures. In this scenario, capital equipment costs are less important than the provider’s revenue potential from procedure fees. Providers have an incentive to do as many procedures as possible because they’re receiving a fee for providing that service. Now with the change in the health care system, the Affordable Care Act, I think people are starting to look at costs, clinical effectiveness and overall value. This creates a very different kind of business environment where you’re looking at tools and asking the questions, “Does it help you provide better care? Does it help you provide cheaper care?”

I think that’s the big opportunity for innovation in point-of-care devices, because they essentially allow you to do much more effective triage very early, to see who needs the more expensive modality and who doesn’t. For example, you’re at your community clinic in a rural area, complaining of abdominal pain. Today, if they don’t have imaging, they’re referring you to a hospital due to the serious conditions your symptoms might indicate. You want to rule out a whole class of things. If they have a device like ours, they can do it right away, and then in a number of cases they will be able to have confidence: “Yeah, you don’t need that extra level of screening or diagnosis. You’re fine. You can go home. You just have gas, or you ate something that didn’t agree with you.” Early in the health care delivery process, these point-of-care devices make it possible to really reduce the number of unneeded procedures and screenings that have been the norm.

Up to this point, we’ve talked mostly about market facing factors that have both driven your design and are a consequence of your design approach. What factors or what kind of impacts were there internally in your company as a consequence of taking this pretty radical approach to medical devices? And I’m thinking things like new core competencies, regulatory impacts, purchasing components, all those kinds of issues.

Oh, it’s huge. It starts with what kind of team you put in place, right? We needed folks not only from the medical device community, but people who knew how to operate in this environment where some of the building blocks are off the shelf. The next consideration was that we were going to be building on a platform that would evolve very rapidly. We had to learn to architect our product’s stack so that it could cope with the rapid change that occurs with consumer electronics without requiring extensive redesign. And then tied to that was, “Well, how do we approach our regulatory strategy?” If every time something minor changes we have to get a new 510(k), you’d go out of business pretty quickly.


January-5-2015

14:43

If it were possible to be unaware of the general problem of cybersecurity, the recent Sony hack, with its public disclosures of “private” e-conversations and subsequent terroristic blackmail, following the earlier release of celebrity cloud photos, ought to have provided notice that what is electronically stored is likely to be available to those determined to have it. Moreover, we know that cybersecurity can in principle also impact the function and availability of connected systems (Sony again) and/or the information they contain. We also need to be concerned about the malicious alteration of information or disruption of device performance. You may remember the hacked insulin pump story, which is already a few years old, and the story that the wireless function of Vice President Cheney’s pacemaker was disabled to protect against hacking.

In this broad context it may be worth taking a look at the FDA’s now-posted contents of the October 21-22, 2014 FDA workshop on “Collaborative Approaches for Medical Device and Healthcare Cybersecurity”. There is also a link there to the October 29 FDA webinar on the Final Guidance on Premarket Submissions for Management of Cybersecurity in Medical Devices. (If that link doesn’t work, as it didn’t for me, try here.) I had not been aware that October was National Cybersecurity Awareness Month under the auspices of the Department of Homeland Security (DHS).

The stated purpose of the workshop was to bring together all stakeholders in the healthcare and public health sector including medical device manufacturers, healthcare facilities and personnel (e.g. healthcare providers, biomedical engineers, IT system administrators), professional and trade organizations (including medical device cybersecurity consortia), insurance providers, cybersecurity researchers, local, State and Federal Governments, and information security firms, in order to identify cybersecurity challenges and ways the sector can work together to address these challenges. Some 1300 people were there.

The posted contents provide a rich after-the-fact resource from the workshop, including separate videos of each session, the slides presented, and a word-for-word transcript of the proceedings. The availability of the materials in relatively small chunks allows for selective viewing, or for digesting the workshop over several sittings; for those into binge viewing, you can also do two days straight. There were ten sessions, beginning with Framing the Question, traversing Gaps and Challenges, and addressing the NIST Framework for Improving Critical Infrastructure, and Risk Assessment. The concluding session considered Building Potential Cybersecurity Solutions/Paths. Each of these sessions was either a panel discussion or had one or more presenters and a group of discussants, resulting in a great deal of material and many perspectives. There were also four keynotes: Marty Edwards (DHS), Edward Gabriel (Assistant Secretary of Preparedness and Response), Michael Daniel (Special Assistant to the President), and Mary Logan (AAMI).

A potentially interesting sequel to the workshop is the creation of a limited-access discussion forum provided by MITRE. Set up on its Handshake website, the intent is to continue the dialogue from the workshop around common challenges and possible paths forward in medical device and healthcare cybersecurity. Among its benefits, the collaboration space is said to afford the community the opportunity to share best practices and to join one or more of the 5 subgroups of specific interest. Of course no discussion venue these days can be free of its own privacy (or lack thereof) statement. MITRE states that “the user’s name, profile photo, connections (social graph), and activity stream of non-access controlled activities are visible to all participants in this collaborative space”. When I joined this forum there were 42 members but a minimal level of activity, so it remains to be seen whether this resource actually becomes of any value.

One might note in this regard that there is no shortage of electronic ways to discuss cybersecurity or anything else these days, and that in most cases discussion by itself does not solve problems as compared to actually doing something. This is perhaps the empty promise of social media where passing around snippets of ideas is confused with actual work and accomplishment. In fact social media participation often takes place instead of actual work. And yes, I realize the irony of making this observation in a blog post.

There is little doubt that cybersecurity concerns, in all their forms, will be with us for some time to come. I currently have three different identity security accounts provided by three different breached entities, including healthcare and the federal government. While cyber risk management practices provide some level of protection, and must be put in place, monitored and maintained, it seems that the threats will continue to exist and to evolve. Of course there are those who benefit from this challenge, reminding us that one person’s adversity is often another person’s source of income.

Nonetheless, spend some time with the FDA workshop. There is much to learn.

William Hyman is Professor Emeritus of the Department of Biomedical Engineering at Texas A&M University. He recently retired and has moved to New York City where he continues his professional activities.


November-10-2014

16:53

The Office of the Inspector General (OIG) of the U.S. Department of Health and Human Services has released a report (pdf) outlining its 2015 work plan. Among a host of subjects is “Information Technology Security, Protected Health Information, and Data Accuracy” with the subsection “Controls over networked medical devices at hospitals”. The focus here is on the security of patient electronic health information, which is to be protected under law. Other risks associated with device networking are not addressed.

The relevant subsection (page 22) is relatively brief:

We will examine whether CMS oversight of hospitals’ security controls over networked medical devices is sufficient to effectively protect associated electronic protected health information (ePHI) and ensure beneficiary safety. Computerized medical devices, such as dialysis machines, radiology systems, and medication dispensing systems that are integrated with electronic medical records (EMRs) and the larger health network, pose a growing threat to the security and privacy of personal health information. Such medical devices use hardware, software, and networks to monitor a patient’s medical status and transmit and receive related data using wired or wireless communications. To participate in Medicare, providers such as hospitals are required to secure medical records and patient information, including ePHI. (42 CFR § 482.24(b).) Medical device manufacturers provide Manufacturer Disclosure Statement for Medical Device Security (MDS2) forms to assist health care providers in assessing the vulnerability and risks associated with ePHI that is transmitted or maintained by a medical device.

Note that this is the OIG’s intention to examine what CMS is doing, not directly what hospitals are doing. However, it might be expected that CMS would endeavor to assure its performance in order to pass muster under OIG review.

The reference to MDS2 in this subsection should be noted. MDS2 is intended as a format for medical device manufacturers to provide a standard set of security/risk information to hospitals, to be used in the hospital’s network risk management plan. MDS2 was developed by HIMSS and ACCE, and then further standardized through cooperation with other organizations. The use of MDS2 by manufacturers remains voluntary and is driven by customer demand for this information in the MDS2 format. Some manufacturers have made their MDS2 forms openly available on the web, which is certainly a good thing. Others have provided web availability to registered users, which is a marginally good thing. And of course some manufacturers have not made them web available (assuming they have them), which is a bad thing.

But what does the sentence in the subsection about MDS2 mean? I ask this question in the personal context of having recently been attentive to the use of should, shall, may and must in requirements documents, as well as the advice of some to avoid all of these words. (This is a subject for a separate post.) The latter approach of not using such words appears to be the path that HHS has taken in crafting this sentence. As written, is it a statement that this is what all manufacturers are doing? Or is it an instruction to manufacturers, a demand on manufacturers, or an instruction/demand on hospitals? If the latter, does it mean that CMS during an inspection would expect the hospital to have, and be able to produce, its MDS2s for all networked devices? The reference to MDS2 here can be contrasted with the FDA’s recent Guidance on Content of Premarket Submissions for Management of Cybersecurity in Medical Devices (pdf), which makes no mention of MDS2.

Whether or not having MDS2s is mandatory, hospitals requiring such forms from manufacturers and then actually using them is a good idea. As a consumer-driven resource, the more that hospitals ask for (or demand) the MDS2, the more likely the forms are to be readily available. This might remind us that the reason to do things is not limited to the government or other authority having jurisdiction (AHJ) forcing us to, and the absence of anyone forcing us is not in turn a reason not to do things. In an ideal world (in which we do not live) that which we were mandated to do would be the same as that which was otherwise the right thing to do.

For reference, 42 CFR 482.24(b) is part of the Medicare Conditions of Participation-Medical Record Services (i.e., these are hospital requirements). Part (b) is best understood in the context of the full section 482, but here I include only the general statement and point 3 of part (b):

The hospital must have a medical record service that has administrative responsibility for medical records. A medical record must be maintained for every individual evaluated or treated in the hospital.

(b) Standard: Form and retention of record. The hospital must maintain a medical record for each inpatient and outpatient. Medical records must be accurately written, promptly completed, properly filed and retained, and accessible. The hospital must use a system of author identification and record maintenance that ensures the integrity of the authentication and protects the security of all record entries.

(b) (3) The hospital must have a procedure for ensuring the confidentiality of patient records. Information from or copies of records may be released only to authorized individuals, and the hospital must ensure that unauthorized individuals cannot gain access to or alter patient records. Original medical records must be released by the hospital only in accordance with Federal or State laws, court orders, or subpoenas.

It is interesting here that the second and third sentences of (b)(3) seem to speak to the deliberate release of records. The first sentence might just mean that as well, but in the network context it is apparently being given a much broader interpretation. The emphasis here on procedure is also noteworthy. The requirement is not simply that the hospital has maintained confidentiality (i.e., no breaches) but that it has a methodology in place to prevent breaches.

In summary, the OIG intends to audit how CMS is assuring that the requirements for patient data security are being met by hospitals, here in the specific context of networked systems. This may mean that CMS will pay particular attention to this subject. This in turn means that hospitals probably need to do the same.

William Hyman is Professor Emeritus of the Department of Biomedical Engineering at Texas A&M University. He recently retired and has moved to New York City where he continues his professional activities.


November-4-2014

10:45

When I do presentations on the use of standards, I invariably have a slide which defines interoperability as “the ability of a system or a product to work with other systems or products without special effort on the part of the customer.” My second slide then defines syntactic and semantic interoperability.

Syntactic interoperability occurs when two or more systems are capable of communicating and exchanging data; it is usually attainable through the use of physical standards, data standards, and messaging structures. Semantic interoperability is the ability to automatically interpret the information exchanged meaningfully and accurately in order to produce useful results as defined by the end users of both systems.
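
A toy example of the distinction (the codes, field names and unit mapping below are invented for illustration, not drawn from any standard): two readings can both be parseable, i.e. syntactically interoperable, and still be misread unless both systems share a reference model for concepts and units.

```python
# Two syntactically valid observations from two hypothetical systems. A
# receiver can parse both, but without a shared reference model it cannot
# know that "GLU" and "glucose_serum" name the same concept, or that one
# value is in mg/dL and the other in mmol/L.
reading_a = {"code": "GLU", "value": 100, "unit": "mg/dL"}
reading_b = {"code": "glucose_serum", "value": 5.6, "unit": "mmol/L"}

# Semantic interoperability requires an agreed mapping to a common
# terminology and canonical unit (the mapping here is illustrative only).
CODE_MAP = {"GLU": "glucose", "glucose_serum": "glucose"}
TO_MG_DL = {"mg/dL": 1.0, "mmol/L": 18.0}  # approximate conversion for glucose

def normalize(reading):
    return {
        "concept": CODE_MAP[reading["code"]],
        "value_mg_dl": round(reading["value"] * TO_MG_DL[reading["unit"]], 1),
    }

print(normalize(reading_a))  # {'concept': 'glucose', 'value_mg_dl': 100.0}
print(normalize(reading_b))  # {'concept': 'glucose', 'value_mg_dl': 100.8}
```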

Semantic interoperability is usually achieved with a common information exchange reference model in which the content of the information exchange requests is unambiguously defined, i.e., what is sent is the same as what is understood. In order to have the type of interoperability defined above, systems should drive their integration goals towards semantic interoperability. This idea of attempting to attain semantic interoperability was highlighted in two conversations I had this summer while working on a project.

This project entailed identifying and analyzing healthcare organizations that are integrating remote monitoring or patient-generated data into their EMRs. With the proliferation of remotely generated data expanding greatly due to mobile application and wearable sensor vendors offering cloud-based solutions, it is believed that personalized healthcare will become more prevalent. In addition, it is thought that if all of that data is aggregated and analyzed (Big Data), a better understanding of the underlying symptoms, possible diagnoses and ultimately cures for diseases will come more rapidly. Notwithstanding that utopian vision, as usual, the hard work is done in the ‘trenches’ building the infrastructures, devices and workflows. Given the objective of incorporating patient-generated medical device data into the EMR, it is useful to compare this to how medical device data is acquired and used in the acute care setting.

In the traditional healthcare enterprise medical device data integration workflow, a medical device is connected to the hospital network and the data is aggregated by a medical device data system (MDDS) provided by the medical device manufacturer or by a device integration vendor. The MDDS then sends HL7 messages to the integration broker of the EMR application. The data coming from the MDDS is generally delivered to the EMR integration broker every 30 seconds or 1 minute. Usually, the MDDS has a server with some buffering capability such that the device data can be queried, retrieved, or stored for later forwarding if there is a problem at the interface.
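
For the syntactic leg of that workflow, device observations commonly travel as HL7 v2 ORU^R01 result messages from the MDDS to the integration broker. The sketch below assembles a minimal one by hand; the application names, IDs and field contents are illustrative, and a real MDDS would use a proper HL7 interface engine and site-specific conventions.

```python
from datetime import datetime

def build_oru_message(patient_id, loinc_code, description, value, unit):
    """Assemble a minimal, illustrative HL7 v2 ORU^R01 message as a string."""
    ts = datetime.now().strftime("%Y%m%d%H%M%S")
    segments = [
        # MSH: sending app/facility, receiving app/facility, timestamp, type, version
        f"MSH|^~\\&|MDDS|ICU|EMR|HOSPITAL|{ts}||ORU^R01|MSG0001|P|2.6",
        f"PID|||{patient_id}",
        f"OBR|1|||{loinc_code}^{description}^LN|||{ts}",
        # OBX: numeric observation with units, status F (final)
        f"OBX|1|NM|{loinc_code}^{description}^LN||{value}|{unit}|||||F",
    ]
    return "\r".join(segments)  # HL7 v2 separates segments with carriage returns

# Example: a heart-rate observation forwarded every 30-60 seconds by the MDDS.
msg = build_oru_message("123456", "8867-4", "Heart rate", 72, "/min")
print(msg.replace("\r", "\n"))  # newlines only for readable display
```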

From the perspective of a clinician in the hospital using a charting application in an EMR that has integrated medical device data, the data available for annotation into the flowsheet is usually displayed in intervals, and the clinician chooses one of the data points within that interval as the validated parameter for the EMR application. So, if the clinical protocol for that particular patient is Q15, i.e., the required documentation frequency for the specific physiological parameters is every 15 minutes, then 29 of the 30 data points delivered at a 30-second rate (or 14 of the 15 delivered at a one-minute rate) are discarded once the clinician chooses one for charting purposes. The source of the data being integrated into the EHR is usually located in the hospital or clinic (a controlled environment), the device is ‘trustworthy,’ and the data being entered into the EMR is validated by the clinician. The data ‘provenance’ of location of generation, trustworthiness of the measurement device, and validation is assumed based on the clinical workflow and technical infrastructure supporting integration.

In the remotely monitored or patient-generated data situation, there is usually a vendor-provided service which offers ‘cloud’ based access to the data. This server usually resides outside of the healthcare enterprise infrastructure, and the data is not traditionally integrated into the healthcare EMR application. In this situation, the quality of the ‘data provenance’ changes greatly. Compared to the acute care setting, the frequency or volume of data available is greatly reduced, perhaps to once daily. Additionally, the clinical workflow does not have an immediate or near real-time clinician validation step. Lastly, the environment in which the physiological measurement is taken is not as controlled as a clinic or hospital, data generation may depend on patient actions or technique, and the medical device or sensor may not be as accurate as that found in the clinic or hospital setting.

EMR/EHRs are considered medical-legal documents, such that clinicians retain liability for any information in the document. This makes clinicians and healthcare institutions vet more carefully any information or data that is integrated into the EMR/EHR. In addition to the medical-legal reason cited above, there is the trust issue regarding data veracity. If a clinician doesn’t have the correct level of trust in the data, then any action they may or may not take can be inappropriate.

There are very few healthcare organizations that are integrating both remotely monitored *and* in-hospital medical device data into their EMRs. Those that are integrating it are following these approaches: integrated data viewing in a patient context without data co-mingling, and/or integrated viewing with a separate co-mingled data repository which preserves separate data provenance. Integrated patient-context viewing creates a viewing environment in which it may seem the data resides in the same place when, in fact, it may be in separate physical repositories. Data co-mingling means the data resides in the same physical data repository.

Another idea that was discussed was that the use of the HL7 3.0 messaging standard (object oriented and XML-based) merely ‘kicked the work’ downstream. “XML doesn’t solve the difficult problem of identifying data, it just allows for data to be identified with tags that have no semantics. The problem is not really syntax, but semantics. The different data codes do not identify when they are to be used or the difference between instances.” (See the VA link below.) This is a bit of an extension of the idea above of data having provenance. Again, with semantic interoperability, the data item being integrated would be understood with regard to not only its value and type, but also where it came from, along with a measure of the confidence one might have in the value (this would help build clinician trust with regard to data veracity).

The current data standards that are recommended for use in medical device information integration do not take ‘provenance’ into account. The need to define provenance comes into play if or when EMRs start making use of data gathered outside of the ‘controlled’ healthcare environment.
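
To make the missing ‘provenance’ concrete, here is a hypothetical sketch of what an observation record would need to carry beyond value and units; the field names are invented for illustration and are not drawn from any existing standard.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Observation:
    """A device observation with explicit, hypothetical provenance fields."""
    concept: str                 # e.g. "heart_rate", mapped to a shared terminology
    value: float
    unit: str
    measured_at: datetime
    # Provenance: where the reading came from, from what, and how much to trust it.
    setting: str                 # "hospital", "clinic", "home"
    device_model: str
    operator: str                # "sonographer", "nurse", "patient"
    clinician_validated: bool = False
    confidence: Optional[float] = None   # 0-1, device/vendor supplied estimate

# An in-hospital, clinician-validated reading vs. a patient-generated one.
icu = Observation("heart_rate", 72, "/min", datetime(2014, 11, 4, 10, 0),
                  setting="hospital", device_model="bedside-monitor",
                  operator="nurse", clinician_validated=True, confidence=0.99)
home = Observation("heart_rate", 95, "/min", datetime(2014, 11, 4, 7, 30),
                   setting="home", device_model="wearable-sensor",
                   operator="patient", clinician_validated=False, confidence=0.7)
```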

For even more examples of the issues with remote monitoring data and semantics, readers may wish to visit the Center for Connected Health blog as well as read about the VA Telehealth integration activity (pdf file which will download). Both of these US organizations are ahead in thinking about and implementing the incorporation of remotely generated patient data into their EHRs. Each is using a different approach, and yet both understand the difference between syntactic and semantic interoperability at a pragmatic level.

So, as a healthcare organization starts planning to integrate medical device data generated outside the controlled healthcare enterprise, it should think about data provenance and how to move effectively towards semantic interoperability. Merely using standards at the interfaces will not guarantee semantic interoperability. A determination of the data-gathering location, quality and timeliness will be required to place the data in the proper context for a clinician to appropriately use it. Providing this context will go a long way toward alleviating clinician fears regarding the use of patient-generated data and/or allow the clinician to better judge any actions they may take based on the data.

Bridget A. Moorman, CCE, is president of BMoorman Consulting, LLC, providing consulting to healthcare providers, standards promulgation organizations, and medical device and information technology companies regarding their medical device integration strategies. She can be reached via email or at her website.


Blog url: 
http://medicalconnectivity.com
