Summary

Divyangkumar Joshi discusses the Data Modernization Project and how we're using it to innovate and meet the needs of our business.

Q&A: Medicare Payment System Modernization

OIT works around the clock to ensure CMS systems that manage Part A and B Claims (FISS/MCS/CWF/VMS) keep data secure while meeting the business needs of our customers.  

With the ever-increasing amount of insurance and claims data available, CMS is gaining much better insight into what customers want, how they use and purchase insurance, and what they think of our services. This information can be used to make better decisions across CMS, from product and service design to marketing and aftercare.  

The Medicare Payment System Modernization (MPSM) project emerged because the Fee-for-Service systems are becoming more complex, more costly, and less feasible to maintain and grow every year. Furthermore, CMS employees, nurses, and analysts reviewing beneficiary and claim information had difficulty accessing and retrieving all of that information in one place.

MPSM is notable because it addresses the needs of our customers and promotes the use of cloud technology to improve system modernization efforts across the agency. We asked Divyangkumar Joshi, a Product Manager in OIT’s Application Management Group (AMG), to discuss the rewards and challenges of MPSM and what the project means for CMS.

Q: How did Data Modernization (DMod) come about? 

A: This project came about because Medicare Administrative Contractor (MAC) Part A/B Claims Analysts, MAC Customer Service Representatives, Benefits Coordination and Recovery Center (BCRC) analysts, nurses, and other users of these legacy systems had to review multiple green screens to gather relevant beneficiary information, perform medical reviews of Part A/B and DME adjudicated claims, and access the Medicare system to process the beneficiary's information. Imagine having to review over 100 different screens to retrieve data about just one Medicare beneficiary.

Besides the sheer number of screens, each screen has to be served up one at a time by our mainframe system, so green screens have a slow response time. There were cases where beneficiaries using MyMedicare.gov wanted the latest information on coverage, deductibles, and claim status, but that data was not always available in real time.

Since green screens are complex for everyone, we instituted mainframe-hosted data access Application Programming Interfaces (APIs). But these APIs were unavailable during the nightly mainframe batch cycles, when the systems do the bulk of their work, so MACs asked us to make those APIs available 24/7. The Medicare Payment System Modernization (MPSM) team came up with a low-risk solution that also allowed us to build the experience and infrastructure that would set us up for a successful eventual migration.

Q: Why is the DMod initiative important to CMS? 

A: MPSM is working to enable long-term, large-scale modernization of the Medicare payment system: one that is responsive and human-centered and makes data available in a timely, reliable way, giving users the best possible experience. This experience and infrastructure bridge the gap between data replication and data modernization, which directly supports data integrity. The initiative aligns with CMS's strategic pillars of driving innovation and fostering excellence. It spans the entire agency and fosters an environment of collaboration and inclusivity, developing solutions that solve the pain points of Medicare providers and beneficiaries.

Q: What are the main issues or problems driving this effort that DMod needed to resolve?  

A: The main problems were green screens, screen scraping, slow response times, and the lack of real-time data. Beyond those, the following problems inherent to legacy Virtual Storage Access Method (VSAM) data will eventually be improved by DMod:

1.  VSAM data is not self-describing. We needed to set up a process with the VDCs so we can stay in sync as data schemas (copybooks) are updated.

2.  VSAM files are heterogeneous. Multiple different record layouts are possible in a single VSAM file, and since VSAM isn't self-describing, we need application logic that can determine a record's format before translation.

3.  VSAM data is weakly typed. There is no data type enforcement in VSAM files, which leads to quality issues, compounded by the complexity of COBOL "redefines," where different fields with incompatible data types can be defined to occupy the same space within a serialized record (although only one is in use at a time). This makes it very difficult to verify the accuracy of data translation (see the sketch after this list).
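
As an illustration of items 2 and 3, here is a minimal sketch in Python of the kind of translation logic involved. The record-type code in the first two bytes, the field offsets, and the EBCDIC code page are all assumptions made up for the example; real layouts would be generated from the copybooks.

from decimal import Decimal

EBCDIC = "cp037"  # a common EBCDIC code page; the actual code page is an assumption

def unpack_comp3(raw: bytes, scale: int) -> Decimal:
    # COBOL packed decimal (COMP-3): two digits per byte, sign in the final nibble.
    digits = "".join(f"{b >> 4}{b & 0x0F}" for b in raw[:-1])
    digits += str(raw[-1] >> 4)
    sign = -1 if (raw[-1] & 0x0F) == 0x0D else 1
    return Decimal(sign * int(digits)) / (10 ** scale)

# Hypothetical layouts keyed by a record-type code in the first two bytes.
# VSAM is not self-describing, so the translator must carry this knowledge itself;
# in practice the offsets would come from the copybooks.
LAYOUTS = {
    "01": {"kind": "claim_header"},
    "02": {"kind": "claim_line"},
}

def translate(record: bytes) -> dict:
    rec_type = record[0:2].decode(EBCDIC)      # discriminator field (item 2)
    layout = LAYOUTS.get(rec_type)
    if layout is None:
        raise ValueError(f"unknown record type {rec_type!r}")
    out = {"kind": layout["kind"]}
    out["beneficiary_id"] = record[2:13].decode(EBCDIC).strip()
    if layout["kind"] == "claim_header":
        # In this layout, bytes 13-17 hold a COMP-3 dollar amount; another layout
        # could REDEFINE the same bytes as text, which is what makes translation
        # hard to verify (item 3).
        out["paid_amount"] = unpack_comp3(record[13:18], scale=2)
    else:
        out["units"] = unpack_comp3(record[13:15], scale=0)
    return out

A redefined field shows the verification problem directly: the same bytes decode to different values depending on which layout is assumed, so a translation can only be checked against the layout the writing program actually used.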

Q: How is DMod solving the problem? 

A: This initiative makes legacy VSAM data available in Amazon Web Services (AWS) by providing real-time, continuous streaming of VSAM data into a target platform, along with a transformation process that makes the VSAM data readable by modern applications and databases. In the long term, data modernization will address the inherent VSAM problems mentioned above.
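
To make that streaming path concrete, here is a minimal sketch that assumes the replicated change events arrive on an Amazon Kinesis stream and the translated records land in a DynamoDB table. The stream, the table name, and the translate() helper from the earlier sketch are all assumptions; the article does not name the actual target platform components.

import boto3

# Hypothetical names; the article does not identify the actual stream or table.
kinesis = boto3.client("kinesis")
claims_table = boto3.resource("dynamodb").Table("modernized-claims")

def process_batch(shard_iterator: str) -> str:
    # Read one batch of replicated VSAM change events and land each record,
    # translated into typed fields, in the cloud datastore.
    resp = kinesis.get_records(ShardIterator=shard_iterator, Limit=100)
    for event in resp["Records"]:
        raw_vsam_record = event["Data"]     # raw record bytes captured by replication
        doc = translate(raw_vsam_record)    # translate() from the earlier sketch
        claims_table.put_item(Item=doc)     # now queryable by modern applications
    return resp["NextShardIterator"]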

Q: What are the three most significant benefits of DMod? 

A: The three biggest benefits of this effort are 24/7 availability of data, easier access to and management of the data, and, eventually, resolution of the VSAM complexities noted above.

Q: What were the biggest challenges that your team faced? 

A: Some of the biggest challenges faced by the team are replicating and handling VSAM data and working with a brand-new IBM product (InfoSphere's Remote Source module). It has taken extensive collaboration with IBM to reach our current level of maturity. Also, finding engineering resources with the right skill set for this type of product has been a challenge. 

Q: How does the DMod team ultimately plan to address the drawbacks of data replication? 

A: Replication is a step in the right direction, but because it creates additional maintenance costs and system complexity while keeping the legacy datastores intact, it’s not intended to be a permanent solution. The team ultimately plans to retire the legacy datastores and data replication by migrating the VSAM data into a modern cloud datastore and developing access methods for the legacy systems to read from the cloud. 
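
As a rough illustration of what such an access method could look like (the actual design is not described here), a small keyed-read service over the cloud datastore could answer the same lookups the legacy code performs against VSAM today. The Flask route, table name, and key attribute below are hypothetical.

from decimal import Decimal

import boto3
from flask import Flask, jsonify

app = Flask(__name__)
claims_table = boto3.resource("dynamodb").Table("modernized-claims")  # hypothetical table

@app.get("/claims/<beneficiary_id>")
def get_claim(beneficiary_id: str):
    # The same keyed lookup the legacy code performs against VSAM,
    # answered instead from the cloud datastore.
    resp = claims_table.get_item(Key={"beneficiary_id": beneficiary_id})
    item = resp.get("Item")
    if item is None:
        return jsonify({"error": "not found"}), 404
    # DynamoDB returns numbers as Decimal; convert them for JSON serialization.
    return jsonify({k: (float(v) if isinstance(v, Decimal) else v) for k, v in item.items()})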

Q: Why did these systems need to be replicated and migrated to the cloud? 

A: Data replication makes legacy VSAM data available in AWS by providing real-time, continuous streaming of VSAM data into a target platform. This transformation process makes the VSAM data readable by modern applications and databases.
