6 questions to ask about EHR rating methodologies
According to research, which electronic health record (EHR) vendor is ranked number one? It may seem like a simple question, but if you’ve tried to answer it, you’ve likely found a wide range of responses.
Analyst firms, researchers, industry associations and vendors all take on different perspectives when they rank EHR platforms. It’s important to know your sources and understand their market definitions and methodologies before you weigh these results in your next EHR purchasing decision.
Like a good old-fashioned reporter, you should know the “Who, What, When, Where, Why and How” behind these findings. More specifically, here are questions to help you evaluate the next EHR rating that crosses your inbox:
Who did the research? Firms like Gartner and IDC take a traditional, analyst-opinion approach. Other firms, like Black Book and KLAS Research, rank vendors based on survey data collected electronically or through personal interviews. But ask: who participated? Were they clinicians or administrators? Were they executive level? Do they come from organizations similar in size to mine?
Equally important is knowing who initiated, paid for and conducted the research. What are their qualifications? Do they have any conflicts of interest that could skew the results? How well does the interviewer understand the technology?
What product is being evaluated? Vendors often offer a portfolio of several solutions, each with multiple versions. For example, a functionality gap that respondents complained about might already have been resolved in a more recent product release.
What do they ask? Do they cover a wide or narrow range of key performance indicators? Does this scope match your area of interest?
When did the research take place, and over what period of time? Knowing this information can help you place the research results in context with other announcements or developments.
Where was the survey conducted: in large health systems or small independent physician practices? This again comes back to size and budget. If you are using research to support a product decision, do the responding providers look like your organization?
Are there geographic considerations? For example, a survey of rural hospitals may not apply to urban areas, or U.S.-based findings may be irrelevant in Europe.
Why do researchers conduct the survey? Why do vendors participate? Are there financial considerations? For example, do respondents pay to participate and/or receive results?
How did researchers conduct the survey? Did they email questionnaires, or conduct phone interviews? How could the method impact results?
How were people invited to participate? Was the selection process random or targeted? Some firms speak with the same respondents year after year, asking questions those respondents can only answer from experiences that are 12 to 24 months old.
Remember, no single answer to these questions is inherently good or bad. The answers simply help you determine whether the findings are a good fit for your needs.
However, answering “I don’t know” or “I can’t tell” should signal caution. Organizations that are transparent about their methodology are more credible than those that are not. And once you are armed with that knowledge, you can apply the results far more effectively to your decision-making process.