Let us suppose that you have carried out a full port of a piece of mainframe software, and it runs on the target platform. Will the resulting program seem the same to end users? Not necessarily, because Windows and Unix/Linux differ in significant ways both from each other and from z/OS. To minimise disruption to end users, you need to tune the new software version so that it runs effectively, with adequate performance and scalability, the desired security, and approximately the same functionality.
To tune migrated mainframe apps, you need to understand how the new operating environment will change the way an app runs. Here are some of the key differences among Windows, Unix/Linux and z/OS that often lead to a need for tuning after migration.
- Windows and Unix/Linux are client-server; z/OS is "master-slave."
- Windows, Unix/Linux and z/OS have significantly different security schemes.
- Windows, Unix/Linux and z/OS have significantly different resource-access approaches, especially in I/O.
The good news is that compensating for these differences is usually not technically difficult. The best practice for target-platform tuning is simply to anticipate these problems and budget the time and effort to fix them. Typically, only those who skip this step in the name of cost savings or migration speed run into trouble.
The z/OS operating system (and therefore most mainframe applications) was built in an era in which the number of end users of an application numbered at most in the hundreds or thousands, and there was no such thing as a multiplexer. Communication with the outside world was "master-slave" -- dumb terminals could communicate with the mainframe only when the mainframe asked for input. So legacy mainframe apps are built to assume that only one or a few instances of the app will be running at a time, that most of their time will be spent processing, not ingesting end-user input, and that communications will be "bursty" -- delivered and sent in large chunks.
On Windows or Unix/Linux, this approach often does not perform or scale well. These are scale-out platforms, with more frequent and smaller communications, where the remote client can initiate input to the server. As a result, the migrated mainframe app finds it difficult to respond swiftly to end-user requests. And because of the complexity of today's Web architectures, it can be difficult to detect where in the network the bottleneck is occurring.
Typically, the way to analyse the new app's network is with an end-to-end application management tool. Once you understand the likely usage patterns of the app, you should be able to fix many of the problems by adding CICS-like scale-out multiplexing software. Because the target platforms are often under-utilised, it should be possible to "throw processors" at input processing once a request arrives at a particular machine. In other words, tuning the communications software and the process scheduling should handle most, if not all, performance/scalability problems.
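The "throw processors at input processing" idea can be sketched concretely. Below is a minimal, hypothetical illustration in Python of replacing one-request-at-a-time, mainframe-style input handling with a worker pool that services many concurrent client requests; the function and request names are illustrative assumptions, not part of any real migration toolkit.

```python
# Sketch: dispatching client-initiated requests to a pool of workers so
# that idle processors absorb bursts of input, instead of blocking on
# each request in turn. All names here are illustrative assumptions.
from concurrent.futures import ThreadPoolExecutor

def handle_request(request: str) -> str:
    # Stand-in for the ported app's per-request processing logic.
    return request.upper()

REQUESTS = ["balance inquiry", "funds transfer", "statement print"]

# map() farms the requests out across up to four workers and returns
# the results in request order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_request, REQUESTS))

print(results)
```

In a real deployment the pool would sit behind the communications layer (the "CICS-like scale-out multiplexing software" above), but the scheduling principle is the same.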
Calling the difference in security schemes a "problem" is a little alarmist. This is not a problem on the scale of denial-of-service attacks or malware. Rather, it stems from the fact that Unix/Linux, Windows and z/OS each have a different core way of controlling access to software and applications.
z/OS provides sophisticated, specialised security software -- Resource Access Control Facility (RACF) -- that allows a fine-grained specification of security for each end user. By contrast, Unix and Linux were built around very simple security primitives (read, write and execute permissions for a file's owner, its group and everyone else), which make any security built on top of them less fine-grained. Windows (more specifically, the network operating system extension of Windows) offers more primitives and does not confine users to the "user, group, administrator" categories.
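The coarseness of the classic Unix/Linux primitives is easy to see in code. The following sketch (a temporary file is used purely for illustration, and the permission bits shown assume a POSIX system) sets and reads back the entire vocabulary of traditional Unix file security:

```python
# Sketch of the coarse Unix/Linux security primitives described above:
# read/write/execute bits for owner, group and everyone else -- nothing
# finer-grained without ACL extensions.
import os
import stat
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

# Owner may read/write; group may read; others get nothing.
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)

# Read back just the permission bits: 0o640 on a POSIX system.
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))
os.remove(path)
```

Everything a RACF profile can express -- per-user rules, resource classes, audit settings -- has to be approximated with roughly these nine bits (plus group membership) on a vanilla Unix/Linux system.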
The result is that when you port a z/OS app to Unix/Linux, you undergo a "double approximation": the environment approximates the desired security scheme using Unix/Linux primitives, and the migrated app then approximates its old security scheme on top of that already-approximate environment. Windows allows a closer approximation to the mainframe's security scheme.
In either case, the danger of security that is too loose or too strict is not very great; both target platforms have made great strides in providing finer-grained security over the last two decades. However, "small danger" is not the same as "no danger." This is exactly the type of situation for which the function and stress test suites of testing tools were built. Careful application of these suites should either identify any problems or reassure you that there are none. At that point, incremental changes to the security of particular groups of end users should take care of any problems you identify.
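A function test for the approximated security scheme can be as simple as comparing each migrated file's effective permission bits against what the app intends. This is a hypothetical sketch -- the `audit` helper and the intended modes are assumptions for illustration, not part of any testing product:

```python
# Sketch of a function-test style check: does a migrated file's
# effective Unix permission mode match the security the app intends?
import os
import stat
import tempfile

def audit(path: str, intended_mode: int) -> bool:
    """Return True if the file's permission bits match the intended mode."""
    return stat.S_IMODE(os.stat(path).st_mode) == intended_mode

fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o600)                # migration left the file owner-only

too_strict = not audit(path, 0o640)  # the group was supposed to read it
print(too_strict)
os.remove(path)
```

Here the check flags the file (prints `True`) because the group lost its read access in migration; the fix is the kind of incremental permission change described above.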
Over the years, mainframe and Windows/Unix/Linux handling of some system resources (e.g., processors and storage) has converged. However, the platforms still differ in their approach to I/O to and from disk. z/OS offers ISAM, VSAM and roll-your-own data I/O; Unix/Linux provides a simple, not-very-scalable file-oriented indexing scheme (based on inode indexing); Windows offers an approach that includes some, but not all, of the mainframe's sophistication. The practical result is that mainframe I/O not handled by mainframe programming languages and data management tools does not translate well to vanilla Unix/Linux, and sometimes not to Windows either. In particular, older legacy mainframe apps that include highly tuned assembler data management code may not scale if the migration simply translates this code into the equivalent Unix/Linux or Windows I/O primitives.
The best fix for this problem is to make sure that the migration tool translates this code into more scalable Windows/Unix/Linux I/O commands in the first place. That is what the database companies do: They provide a different bypass of Unix/Linux I/O for each Unix/Linux implementation. Failing that, you should either do performance testing on these apps and then search for and fix the offending I/O code, or just do a global search and replace.
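To make the idea of "more scalable I/O commands" concrete, here is a minimal sketch of mapping mainframe-style keyed (ISAM/VSAM-like) record access onto an indexed facility on the target platform -- in this illustration, a SQLite table -- rather than translating it line-for-line into flat-file seeks. The table and record layout are assumptions for the example:

```python
# Sketch: keyed record access on the target platform. The PRIMARY KEY
# index gives a logarithmic-time lookup, roughly the access path a
# VSAM KSDS provided on the mainframe, instead of a linear file scan.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (key TEXT PRIMARY KEY, payload TEXT)")
conn.executemany(
    "INSERT INTO records VALUES (?, ?)",
    [("CUST0001", "Smith"), ("CUST0002", "Jones"), ("CUST0003", "Lee")],
)

# Keyed read, analogous to a READ ... KEY against a VSAM cluster.
row = conn.execute(
    "SELECT payload FROM records WHERE key = ?", ("CUST0002",)
).fetchone()
print(row[0])
conn.close()
```

This is essentially what the database companies' Unix/Linux I/O bypasses do at much larger scale: substitute an indexed, buffered access method for the platform's naive file primitives.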
Anticipating the ways that target-platform peculiarities can affect app performance and security can save a lot of grief for both the migrator and the end user of the application. Moreover, this marks the end stage of a successful process: once an app runs, performs and scales, and provides adequate security on the target platform, the main goals of migration off the mainframe have been fulfilled.
However, it is likely that there will be more challenges in the near future. What if the application needs to be moved to an internal or external cloud in the near future? Get-the-job-done mainframe migration does not guarantee readiness for a cloud. Is there a possibility that the app may need to be moved from Windows to Linux or back? Can you get more out of the migrated app, such as composition with other apps to enhance or integrate business processes? What if you have to combine the app with another from a company you just acquired?
A little forethought during the migration can save a lot of time in future projects like these. In part 4 of the mainframe migration series, I'll discuss the relatively straightforward ways that a migrated app can be spruced up during the migration process to meet future needs.
ABOUT THE AUTHOR: Wayne Kernochan is president of Infostructure Associates, an affiliate of Valley View Ventures. Infostructure Associates aims to provide thought leadership and sound advice to vendors and users of information technology. This document is the result of Infostructure Associates-sponsored research. Infostructure Associates believes that its findings are objective and represent the best analysis available at the time of publication.
This was first published in November 2009