So you've decided to migrate all mainframe workloads to Unix, Linux and/or Windows platforms. Never mind the figures showing that mainframes hosting 20 or more applications have lower long-term costs than their distributed counterparts; short-term costs rule, and you need cost savings now.
As recently as six years ago, at the end of the last great IT cost-cutting push, deep-sixing the mainframe would have been very difficult. However, migration of large, enterprise-critical applications from the mainframe is now much easier. One reason for the improved migration outlook is the effort IBM and many users have put into modernising mainframe apps and middleware. Another reason is that the target platforms themselves are more standardised and offer more migration tool choices. Finally, there are now vendors making a living at providing services for mainframe migration.
Thus, migration off the mainframe, in most cases, can be done in reasonable time (for example, one or two years for 200 apps), with few -- if any -- resulting problems. The catch here is that it's easy to try a common-sense approach to migration and fail miserably. To succeed, users need to adopt the right overall strategy and best practices to implement that strategy.
Mainframe migration: The three-finger strategy
The basis of a successful mainframe migration strategy is understanding that the aim is not to move mainframe apps to a new platform as quickly or cheaply as possible. Rather, its aim is to move them with minimal disruption to the end users who depend on them.
The minimal-disruption aim leads to a strategy that can be ticked off with three fingers of one hand.
- Triage the software.
- Stage the process.
- Use third parties.
Below, I explain these three components of the strategy.
Step One: Triage the software
Broadly speaking, today's enterprises can employ three approaches for moving a piece of mainframe software to a Unix/Linux or Windows platform.
- Migrate. The program's source or binary code is moved to another platform with little or no change, and the developer applies tools on the new platform to add any needed new technologies.
- Regenerate. The program is first reverse-engineered, a process that creates an abstracted design model of the application. The application is then regenerated from the design model on the new platform with new technologies included.
- Replace. IT discards the existing application and writes an entirely new one on the new platform. The new application supposedly incorporates at least the same functionality as the old. Folding in the new technologies is part of the development process.
It may sound counterintuitive, but the best approach, if possible, is to regenerate. Remember, the aim is to cause minimal disruption to the end user. Regenerating a program allows the migration tool software to automatically re-tune the application for the new environment. As a result, apps that ran well in the mainframe's scale-up environment are pretty likely to run well in the new environment. And, with the design model in hand, it is relatively easy to detect and fix any remaining performance problems.
So triaging the mainframe software is a matter of identifying which programs can be regenerated; failing that, which ones can be migrated; and failing that, which must be replaced. Triaging the software results in a much greater proportion of apps that are ported with little or no end-user disruption.
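The triage logic above can be sketched as a simple decision function. This is a hypothetical illustration, not a real migration tool's API; the field names (`model_extractable`, `source_available`, `portable_code`) are assumptions standing in for whatever assessment data your own inventory produces.

```python
# Hypothetical triage helper: classify each mainframe app into the
# preferred porting approach. All field names are illustrative
# assumptions, not a real tool's API.

def triage(app: dict) -> str:
    """Return 'regenerate', 'migrate', or 'replace' for one app."""
    # Best case: a reverse-engineering tool can build a design model,
    # so the app can be regenerated on the new platform.
    if app.get("model_extractable"):
        return "regenerate"
    # Next best: source or binary code can move with little change.
    if app.get("source_available") and app.get("portable_code"):
        return "migrate"
    # Last resort: rewrite the application from scratch.
    return "replace"

portfolio = [
    {"name": "payroll", "model_extractable": True},
    {"name": "billing", "source_available": True, "portable_code": True},
    {"name": "legacy-asm", "source_available": False},
]

for app in portfolio:
    print(app["name"], "->", triage(app))
```

In practice the inputs would come from a portfolio assessment, but the ordering of the checks is the point: try regeneration first, fall back to migration, and replace only when neither is feasible.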
Step Two: Stage the migration process
In the past, migrations used to be "either-or." Either there was a grand switch-over day, in which the old mainframe was turned off and the new applications were turned on, or the old mainframe was kept around forever -- just in case.
Today, you can avoid both extremes by staging the switch-over. Specifically, this means that you should plan to take the following steps.
- Don't turn off the old app when the new one starts running.
- If possible, stage use of the new app by departments or functions.
- Create a network "switch" that allows easy routing of interactions to either the old or new app.
- Do make sure to port all apps concurrently, and turn off the mainframe about six months after end users have been fully transferred to all ported apps.
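The routing "switch" in the steps above can be sketched in a few lines. This is a minimal illustration under stated assumptions: the department names and the old/new app stubs are placeholders, and a real deployment would put this logic in a network proxy or load balancer rather than application code.

```python
# Minimal sketch of the network "switch": route each interaction to
# the old (mainframe) or new app based on which departments have been
# cut over. Department names and app stubs are assumptions.

CUT_OVER = {"finance", "hr"}  # departments already moved to the new app

def old_app(request):
    # Stand-in for the still-running mainframe application.
    return f"mainframe handled request {request['id']}"

def new_app(request):
    # Stand-in for the ported application on the new platform.
    return f"new platform handled request {request['id']}"

def route(request):
    """Send the request to whichever app serves this department."""
    target = new_app if request["department"] in CUT_OVER else old_app
    return target(request)

print(route({"id": 1, "department": "finance"}))  # cut over: new app
print(route({"id": 2, "department": "sales"}))    # not yet: old app
```

Because both apps keep running, falling a department back to the mainframe is just a one-line change to the cut-over set, which is exactly the "insurance" the staged approach is meant to provide.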
Step Three: Use third-party migration providers
What this really says is that today, IT departments with sufficient expertise in both mainframes and the new platforms, as well as efficient tools, are few and far between, if they exist at all. Mainframe applications may require modernisation (Web interfaces and Web-servicisation) to run effectively in the new environment, and that requires tools you probably lack. The applications may also lack documentation, so you no longer have the in-depth knowledge needed to tune them in their new environment. You almost certainly will not have any in-house-developed regeneration tools. Third parties can supply all of these things, and their expertise and methods for handling these situations can provide additional "insurance" against transition disruption.
In the next installment, I will discuss criteria for choosing a migration solution provider. Here, I will simply note that in the last couple of years, providers have compiled an impressive record of successfully migrating medium-scale enterprises off the mainframe.
An effective migration strategy will take you surprisingly far toward successfully migrating off the mainframe. Given a good strategy, best practices in implementation are often a matter of "don't screw it up." However, the best implementers recognise that migration is an opportunity as well as a requirement. In other words, at very little additional cost, implementers can improve their software during the migration process as well as their ability to upgrade or migrate software in the future -- a very important consideration for companies considering cloud computing. In Part II, I consider how best-practice mainframe-migration tools can deliver migration speed, cost reductions and better software.
ABOUT THE AUTHOR: Wayne Kernochan is president of Infostructure Associates, an affiliate of Valley View Ventures. Infostructure Associates aims to provide thought leadership and sound advice to vendors and users of information technology. This document is the result of Infostructure Associates-sponsored research. Infostructure Associates believes that its findings are objective and represent the best analysis available at the time of publication.
This was first published in October 2009