In the last two segments of this series, I talked about evaluating your plan’s Model of Care (MOC), including data, measures and the mistakes some SNPs make during evaluations. Now let’s get down to business and cover some practical considerations for quality improvement processes and recommendations for your next plan year. If this is your first written evaluation, what format will you use and what will your stakeholders expect? Will you follow the plan format used for your annual quality improvement evaluation? Do you have stakeholders who need the data in a certain format so they can better digest it? If you did not indicate in your MOC how you would present your evaluation, it’s time to hold an internal discussion about the best way to explain your data and outcomes.
Did you make your list of data points from MOC 4? If not, please do so. After you look through your HRA performance data and care plan completion rates, step back and ask yourself, “What other operational areas within our plan that are not reflected here might put us at risk?”
The purpose of your annual MOC evaluation is to identify the clinical, operational and outcomes aspects of your model that need improvement. Plans must put corrective action processes or interventions to improve the model into place and articulate a process for measuring the resulting changes. This is where many plans fall short. Instead of showing a truly effectuated “Plan, Do, Study, Act” cycle with measured improvements, some plans simply change their goals. Now, what does that say?!
Denied Claims and Out-of-Network
Most plans list goals like “improving access to care” and/or “appropriate use of benefits in the target population.” If your plan’s care management system is not configured to map interventions to a benefit class or category, how is your plan identifying and measuring access? Are you looking retrospectively at claims data/utilization? Mining claims data for genuine outcomes can be much more valuable than any general or supposed outcomes you would get from collecting HEDIS data or gathering data through exclusion methodology.
To start, take claims data from your SNP population for the prior year and filter it by PCP visits and visits to the predominant specialty providers. Sort claims for each of these categories against ICD-9 and ICD-10 codes and revisit the disease statistics you laid out in MOC 1. Do they align? Do you need to re-evaluate the disease states of your target population in light of your findings? Don’t overlook a service encounter just because a claim was denied or the service was delivered out-of-network. In fact, it makes sense to analyze out-of-area claims for access issues. Does the specialty provider encounter data align with the disease states of individual members? Does the data reveal that your care plans or assessments are missing anything? Has your plan aligned ICD-10 logic with your population’s disease prevalence as it was first outlined in your MOC? Are you capturing your data events and mapping the data correctly?
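The filtering-and-sorting step above can be sketched in a few lines of pandas. This is a minimal, hypothetical illustration: the column names (`member_id`, `provider_specialty`, `icd10_code`, `claim_status`) and the inline sample rows are assumptions, and you would swap the sample for your plan’s actual prior-year claims extract.

```python
import pandas as pd

# Hypothetical sample of prior-year SNP claims. In practice, load your
# plan's claims extract instead (e.g. pd.read_csv on the prior-year file).
claims = pd.DataFrame({
    "member_id":          ["A1", "A1", "B2", "B2", "C3"],
    "provider_specialty": ["primary_care", "cardiology", "primary_care",
                           "dermatology", "cardiology"],
    "icd10_code":         ["E11.9", "I50.9", "E11.9", "L20.9", "I50.9"],
    "claim_status":       ["paid", "denied", "paid", "paid", "paid"],
})

# Keep PCP and predominant-specialty encounters. Note that denied and
# out-of-network claims stay in: a denied claim is still a service encounter.
focus = claims[claims["provider_specialty"].isin(["primary_care", "cardiology"])]

# Diagnosis prevalence by specialty, for comparison against the disease
# statistics laid out in MOC 1.
prevalence = (focus.groupby(["provider_specialty", "icd10_code"])["member_id"]
                   .nunique())
print(prevalence)
```

The comparison itself stays a human judgment: you line the resulting counts up against the disease prevalence documented in MOC 1 and ask whether the population you described is the population your claims show.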
For the sake of goal measurement, let’s discuss community services access. As we all know, many SNPs cannot successfully manage their populations without community providers. Since these providers are typically not contracted with plans, out-of-network utilization patterns cannot be tracked until claims come in. Thus, plans must rely on the integration of these providers into care plans. Ask yourself whether you are effectively measuring the use of community providers. Do you need to configure your system to document their use? Get specific! Don’t just make the generic statement: “We refer 20 percent of our patients to the community food bank.” Instead, identify the disease states of patients who are accessing community services and document why they are using them. What does this tell you about how well your care managers are managing plans (or not) and, thus, whether they are fulfilling the purpose of your SNP?
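Getting past the generic “20 percent of our patients” statement means breaking community-service use down by disease state and documented reason. A minimal sketch, assuming a referral log exists in your care management system; the table name, columns, and sample rows here are all hypothetical:

```python
import pandas as pd

# Hypothetical referral log pulled from the care management system.
# Columns and values are illustrative assumptions, not a real extract.
referrals = pd.DataFrame({
    "member_id":     ["A1", "B2", "C3", "D4"],
    "disease_state": ["diabetes", "CHF", "diabetes", "COPD"],
    "service":       ["food_bank", "transportation", "food_bank",
                      "meal_delivery"],
    "reason":        ["food insecurity", "missed appointments",
                      "food insecurity", "homebound"],
})

# Who is using which community service, broken out by disease state --
# the specific picture, not a single population-wide percentage.
by_condition = (referrals.groupby(["service", "disease_state"])["member_id"]
                         .nunique())
print(by_condition)
```

A breakdown like this is what lets you connect community-provider use back to individual care plans and ask whether your care managers are directing the right members to the right services.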
Dig for Root Causes
While many D-SNPs have the enrollment verification process down, I-SNPs and C-SNPs have to rely on documentation from folks outside the plan to qualify enrollments. How does your I-SNP or C-SNP fare in this process? Does the outside institution verify levels of care and pass that information along to you in a timely manner? Are lengths-of-stay meeting requirements? Is a quality improvement program needed for this process? Have your internal monitoring and auditing teams identified any issues? How is the corrective action coming along?
I have posed many questions in this blog post because addressing them is an important exercise for plans as they work through their annual evaluations from start to finish. Ask tough questions and keep digging for root causes so you can redesign. Get the audit protocol document, review the universe format for the evaluation and performance sections, validate that your plan can indeed populate it, then identify start dates, end dates, and baselines or benchmarks. Many plans wrote their MOCs several years ago and have not reviewed their MOC 4 section structures. These need to align with audit expectations for the universe data.
A few simple and smart steps can ease the pain of writing your evaluation and actually make it a fun, worthwhile exercise! Who would ever think that writing an annual evaluation could be fun? ;-)
Jane Scott is senior vice president of Professional Services for Health Integrated.