Thursday, December 22, 2016

OSPF - Open Shortest Path First

An important aspect of any large operation is the performance and availability of the underlying z/OS platform.
Many applications require high availability. In a z/OS infrastructure environment it is also important to have fault tolerance comparable to platforms such as HP NonStop.
For this, z/OS supports the Open Shortest Path First (OSPF) routing protocol. OSPF is a dynamic routing technique that replaces the static routing traditionally used in mainframe environments, and it makes the z/OS platform fully tolerant of network failures through automatic failover and recovery.
The failover process
Failover means that when one machine fails, another machine takes over and resumes service. 
To address this, and to make further improvements to the z/OS configuration, all z/OS environments can be migrated into the OSPF network.
Open Shortest Path First

As written above, OSPF stands for ‘Open Shortest Path First’. It is a technique where z/OS network traffic is routed through the most efficient path. 
- With OSPF we are able to stay fully available on the network in case of a (network) disaster. 
- The failover will be performed automatically and no manual intervention is required. 
With OSPF, z/OS now has its own logical network. This means that there will be no interference with other network changes or failovers outside of z/OS.
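On z/OS, OSPF is typically provided by the OMPROUTE routing daemon in the Communications Server. As a minimal, illustrative sketch only (the router ID, interface name, addresses and area number are placeholders that must match your own network design), the OMPROUTE configuration for one OSPF interface looks roughly like this:

RouterID=10.1.1.1;

Area
     Area_Number=0.0.0.0;

OSPF_Interface
     IP_Address=10.1.1.1
     Name=OSAQDIO1
     Subnet_Mask=255.255.255.0
     Attaches_To_Area=0.0.0.0
     MTU=1500;

With definitions like these on each z/OS system and on the neighbouring routers, OMPROUTE learns alternate paths dynamically, which is what allows traffic to be rerouted automatically when a link or interface fails.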

Friday, February 26, 2016

Dev-Ops in Mainframe

DevOps is short for ‘development and operations’, and this buzzword is increasingly used to describe the delivery of software and infrastructure changes.
DevOps is about the rapid ‘code, build, test, package, release, configure and monitor’ cycle of software deployments; this is also called agile infrastructure. Successful DevOps implementation revolves around developing a DevOps culture, which requires effective communication between all those involved in the delivery pipeline.
Here are five developments we need to know about to compete.
1. Large enterprises will get fully on-board
In most large enterprises, DevOps isn’t new. Small teams are using DevOps principles to work on discrete projects, and after a few years of experimenting, they’re starting to rack up successes. But in general, DevOps hasn’t been adopted widely throughout the enterprise. As a result, software releases in the enterprise are still too slow, too buggy and too costly.
Now that multiple teams are proving the value of adopting DevOps practices, C-level executives are taking notice and beginning to wonder whether DevOps might be an answer to some of their organisation’s top business challenges. They’re starting to engage with IT and asking how they can employ DevOps principles at scale throughout the enterprise to bring more speed and quality to business applications.
Interest at the C-level is a positive development because DevOps can’t succeed in large enterprises without executive support. DevOps requires a fair amount of experimentation and a tolerance for failure—the kind of experimentation and failure that might not be acceptable to the organisation’s leaders unless they feel confident that the eventual outcome will be worth it.
As enterprises begin to modernise legacy applications in 2016 and beyond, DevOps will play a central role. Within five years, DevOps will be the norm when it comes to software development.
2. Standards will emerge
DevOps currently has no defined standards, so what passes for DevOps in one organisation might not look much like DevOps in another. That means DevOps entails a certain amount of risk—and large enterprises are notoriously risk averse. Even if your small teams are documenting wins, scaling out DevOps successes to the broader organization can be a process of trial and error, which most enterprises don’t tend to engage in willingly.
As different teams experiment with DevOps and share their successes, there will be opportunities to standardise best practices gleaned from the lessons learned. Standardisation will help mitigate the risk of scaling DevOps practices and could involve everything from testing processes to determining the best deployment tools, even how to use internal coaching across teams.
Eventually, as best practices rise to the top, they will likely become adopted and pervasive across industries.
3. Security will increasingly become integrated with DevOps
Whether they learned the hard way or by watching other organisations get burnt, enterprises know that a security problem caught by users is far more damaging than a security problem they catch internally before it’s released into production.
As the pace of software delivery increases, it poses a challenge for security teams because their primary focus is on releasing and maintaining safe and secure applications. Doing things faster doesn’t necessarily give them the time to thoroughly vet applications before they get into end users’ hands. The challenge lies in finding the right combination of processes that allows thorough security assessments and keeps software releases flowing at a rapid pace. Bringing security and DevOps teams together offers a solution.
Recent high-profile security breaches have made it clear that security cannot be an afterthought. Security best practices and testing must be built into the development process from the beginning—and that means making it a part of the DevOps team.
That said, the full integration of security and DevOps has not become mainstream yet. In 2016 and beyond, security team members will become increasingly integrated into DevOps practices. This will occur with security experts coaching DevOps on how to effectively and efficiently embed application security within software development, deployment, and production cycles.
4. Key technology adoptions that enable DevOps will take off
Because DevOps is still in the beginning stages of adoption, there are few defined tool chains for DevOps and no accepted single standards. However, as organisations learn and share success from their DevOps practices, this will start to change in 2016 and beyond, with a few key technology concepts helping IT maximise speed and quality throughout the software development lifecycle.
Increasing automation
Automated testing, infrastructure and application deployments speed up cycles and reduce errors. By automating routine and repetitive tasks, you decrease cycle times in the software delivery lifecycle and also ensure repeatability. As enterprises look to adopt DevOps, the first few wins from a technology tool chain perspective will occur through the adoption of automation that will accelerate tasks, eliminate manual handoffs, and cut down error-prone processes.
Decreasing latency
As organisations move to increase the pace of application delivery, they must look at each stage of the lifecycle and identify and remove the biggest hurdles that impede a rapid, high quality delivery cadence of software releases to customers. You can make huge progress by identifying the biggest bottlenecks in the ‘conveyor belt’ delivery pipeline. It’s not particularly useful to remove smaller bottlenecks first because the major bottlenecks can still cause technical ‘debt’ to build up earlier in the pipeline or reduce key resources further down in the pipeline.
Increasing visibility
For high quality applications to be delivered at a rapid pace from inception to the hands of end users, it is important to continuously assess and monitor them at every stage of the lifecycle. You must monitor and measure key metrics against business demands including application user experience, health and availability of the application and infrastructure, and threat and risk monitoring, sharing results across the team through continuous feedback loops. If they don’t meet the business needs, you must then improve and iterate, making constant forward progress.
5. Job roles will evolve
Most IT organisations’ adoption of DevOps will force everyone to adopt new skills—not just from a technical perspective, but also from a cultural one. As developers become more familiar with infrastructure, and operations staff gets more familiar with code, it’s inevitable that jobs will begin to morph and evolve.
In 2016 and beyond, those changes will go beyond development and operations to impact business analysts, planning teams, and even C-level executives. For example, traditional system administrator roles will become less relevant as automation takes over many tasks, while ‘full-stack’ engineers, who are familiar with the entire application technology stack, will start to become more critical.
Roles will evolve as teams become more horizontally embedded around products and services, and multiple roles become part of the extended DevOps delivery chain.
Conclusions for DevOps in 2016

In the digital revolution where software will determine business leadership in the marketplace, it’s critical that enterprises understand the power of DevOps to help deliver higher quality software faster.
DevOps is no longer a fringe movement or even simply an idea for so-called ‘unicorns.’ It’s the way enterprise IT must operate to compete and stay relevant in the marketplace. Armed with these predictions for DevOps, you can be ready for the changes to come—and take steps to be among the innovators, not those left behind.

Tuesday, February 16, 2016

BPXBATCH utility - How to run C/C++ Programs thru JCL in Z/OS Batch.

BPXBATCH utility

This information provides a quick reference for the IBM-supplied BPXBATCH program. BPXBATCH makes it easy for you to run shell scripts and z/OS® XL C/C++ executable files that reside in z/OS UNIX files through the z/OS batch environment. If you do most of your work from TSO/E, use BPXBATCH to avoid going into the shell to run your scripts and applications.
In addition to using BPXBATCH, if you want to perform a local spawn without being concerned about environment setup (that is, without having to set specific environment variables, which could be overwritten if they are also set in your profile), you can use BPXBATSL. BPXBATSL provides an alternate entry point into BPXBATCH and forces a program to run using a local spawn instead of fork or exec, as BPXBATCH does. This ultimately allows a program to run faster.
BPXBATSL is also useful when you want to perform a local spawn of your program but also need subsequent child processes to be forked or executed. Formerly, with BPXBATCH, this could not be done because BPXBATCH and the requested program shared the same environment variables. BPXBATSL is provided as an alternative to BPXBATCH. It forces the target program to run in the same address space in which the job itself was initiated, so that all resources for the job, for example DD allocations, can be used by the target program. In all other respects, it is identical to BPXBATCH.
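As a small illustration, the JCL below runs a z/OS UNIX program through BPXBATSL and passes environment variables in an in-stream STDENV file; the program path, user directory and variable values are placeholders only.

//jobname JOB …
//RUNPGM  EXEC PGM=BPXBATSL,PARM='PGM /u/user1/bin/myprog arg1'
//STDENV  DD *
HOME=/u/user1
LOGNAME=USER1
PATH=/bin:/usr/bin
/*
//STDOUT  DD PATH='/u/user1/myprog.out',PATHOPTS=(OWRONLY,OCREAT,OTRUNC),
//           PATHMODE=SIRWXU
//STDERR  DD PATH='/u/user1/myprog.err',PATHOPTS=(OWRONLY,OCREAT,OTRUNC),
//           PATHMODE=SIRWXU

Because BPXBATSL runs the target program with a local spawn in the job's own address space, the program can also use any other DD allocations defined in the job.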
For information on c89 commands, see c89 — Compiler invocation using host environment variables.

BPXBATCH usage

The BPXBATCH program allows you to submit z/OS batch jobs that run shell commands, scripts, or z/OS XL C/C++ executable files in z/OS UNIX files from a shell session. You can invoke BPXBATCH from a JCL job, from TSO/E (as a command, through a CALL command, from a REXX EXEC).
JCL: Use one of the following:
  • EXEC PGM=BPXBATCH,PARM='SH program-name'
  • EXEC PGM=BPXBATCH,PARM='PGM program-name'
TSO/E: Use one of the following:
  • BPXBATCH SH program-name
  • BPXBATCH PGM program-name
BPXBATCH allows you to allocate the z/OS standard files stdin, stdout, and stderr as z/OS UNIX files for passing input, for shell command processing, and for writing output and error messages. If you do allocate standard files, they must be z/OS UNIX files. If you do not allocate them, stdin, stdout, and stderr default to /dev/null. You allocate the standard files by using the options of the data definition keyword PATH.
Note: The BPXBATCH utility also uses the STDENV file to allow you to pass environment variables to the program that is being invoked. This can be useful when not using the shell, such as when using the PGM parameter.
Example: For JCL jobs, specify PATH keyword options on DD statements; for example:
//jobname JOB …
 
//stepname EXEC PGM=BPXBATCH,PARM='PGM program-name parm1 parm2'
 
//STDIN   DD  PATH='/stdin-file-pathname',PATHOPTS=(ORDONLY)
//STDOUT  DD  PATH='/stdout-file-pathname',PATHOPTS=(OWRONLY,OCREAT,OTRUNC),
//            PATHMODE=SIRWXU
//STDERR  DD  PATH='/stderr-file-pathname',PATHOPTS=(OWRONLY,OCREAT,OTRUNC),
//            PATHMODE=SIRWXU
⋮
You can also allocate the standard files dynamically through use of SVC 99.
For TSO/E, you specify PATH keyword options on the ALLOCATE command. For example:
ALLOCATE FILE(STDIN) PATH('/stdin-file-pathname') PATHOPTS(ORDONLY)
ALLOCATE FILE(STDOUT) PATH('/stdout-file-pathname')
         PATHOPTS(OWRONLY,OCREAT,OTRUNC) PATHMODE(SIRWXU)
ALLOCATE FILE(STDERR) PATH('/stderr-file-pathname')
         PATHOPTS(OWRONLY,OCREAT,OTRUNC) PATHMODE(SIRWXU)
 
BPXBATCH SH program-name
You must always allocate stdin as read. You must always allocate stdout and stderr as write.

Parameter

BPXBATCH accepts one parameter string as input. At least one blank character must separate the parts of the parameter string. When BPXBATCH is run from a batch job, the total length of the parameter string must not exceed 100 characters. When BPXBATCH is run from TSO, the parameter string can be up to 500 characters. If neither SH nor PGM is specified as part of the parameter string, BPXBATCH assumes that it must start the shell to run the shell script allocated by stdin.
SH | PGM
Specifies whether BPXBATCH is to run a shell script or command or a z/OS XL C/C++ executable file that is located in a z/OS UNIX file.
SH
Instructs BPXBATCH to start the shell, and to run shell commands or scripts that are provided from stdin or the specified program-name.
Note: If you specify SH with no program-name information, BPXBATCH attempts to run anything read in from stdin.
PGM
Instructs BPXBATCH to run the specified program-name as a called program.
If you specify PGM, you must also specify program-name. BPXBATCH creates a process for the program to run in and then calls the program. The HOME and LOGNAME environment variables are set automatically when the program is run, only if they do not exist in the file that is referenced by STDENV. You can use STDENV to set these environment variables, and others.
program-name
Specifies the shell command or the z/OS UNIX path name for the shell script or z/OS XL C/C++ executable file to be run. In addition, program-name can contain option information.
BPXBATCH interprets the program name as case-sensitive.
Note: When PGM and program-name are specified and the specified program name does not begin with a slash character (/), BPXBATCH prefixes your initial working directory information to the program path name.

Usage notes

You should be aware of the following:
  1. BPXBATCH is an alias for the program BPXMBATC, which resides in the SYS1.LINKLIB data set.
  2. BPXBATCH must be invoked from a user address space running with a program status word (PSW) key of 8.
  3. BPXBATCH does not perform any character translation on the supplied parameter information. You should supply parameter information, including z/OS UNIX path names, using only the POSIX portable character set.
  4. A program that is run by BPXBATCH cannot use allocations for any files other than stdin, stdout, or stderr.
  5. BPXBATCH does not close file descriptors except for 0, 1, and 2. Other file descriptors that are open and not defined as "marked to be closed" remain open when you call BPXBATCH. BPXBATCH runs the specified script or executable file.
  6. BPXBATCH uses write-to-operator (WTO) routing code 11 to write error messages to either the JCL job log or your TSO/E terminal. Your TSO/E user profile must specify WTPMSG so that BPXBATCH can display messages to the terminal.

Files

The following list describes the files:
  • SYS1.LINKLIB(BPXMBATC) is the BPXBATCH program location.
  • The stdin default is /dev/null.
  • The stdout default is /dev/null.
  • The stdenv default is /dev/null.
  • The stderr default is the value of stdout. If all defaults are accepted, stderr is /dev/null.

Tuesday, February 9, 2016

Overview of Z/OS Crypto


Cryptographic software : ICSF

The Integrated Cryptographic Service Facility or ICSF is the system component that provides
the interface to the hardware. As new functions are implemented in hardware, new versions
of ICSF are made available to exploit those functions. ICSF is available as a component of
z/OS. The most current versions are available at:

http://www.ibm.com/servers/eserver/zseries/zos/downloads/

A number of crypto instructions are now available in the CP and can be coded directly in an
application. However most of the crypto functions can only be accessed by using the ICSF
Application Programming Interfaces (APIs). The API passes the cryptographic request to
ICSF, which determines what hardware is available and which device can best service the
request. The ICSF Application Programmer’s Guide provides a table that describes the
hardware that is required to support the APIs.

Saturday, February 6, 2016

How Mainframe(System z) addresses ATM, EFT, and POS processing Requirements?

ATM and POS transactions are business-critical transactions that require a platform
(hardware and software) that provides robust levels of reliability, availability, and scalability.

The IBM System z server and the z/OS operating system provide an ideal environment to host
payment systems. In the following sections, we list the processing requirements that can be
met when running on a System z platform.

1. Availability
2. Parallel Sysplex Clustering
3. Manageability
4. Security
5. Scalability
6. Dynamic Workload balancing
7. IBM Integrated Cryptographic service Facility
8. Integration with External Authorization Systems.

1. Availability:

System z provides 24-hour-a-day, 7-day-a-week availability, which includes scheduled
maintenance. Continuous availability goes beyond just hardware fault tolerance; it is
achieved by a combination of hardware, application code, and good system management
practices.

On a server basis, System z systems are equipped with features that provide for very high
availability:
Redundant I/O interconnect
Concurrent Capacity Backup Downgrade (CBU Undo)
Concurrent memory upgrade
Enhanced driver maintenance
Capacity backup upgrade
On/Off capacity

2. Parallel Sysplex clustering

When configured properly, a Parallel Sysplex® cluster has no single point-of-failure and can
provide customers with near continuous application availability over planned and unplanned
outages. Events that otherwise seriously impact application availability (such as failures in
hardware elements or critical operating system components) have no, or reduced, impact in a
Parallel Sysplex environment.

With a Parallel Sysplex cluster, it is possible to construct a parallel processing environment
with no single point-of-failure. Because all systems in the Parallel Sysplex can have
concurrent access to all critical applications and data, the loss of a system due to either
hardware or software failure does not necessitate loss of application availability. Peer
instances of a failing subsystem executing on remaining healthy system nodes can take over
recovery responsibility for resources that are held by the failing instance.

Alternatively, the failing subsystem can be automatically restarted on still-healthy systems
using automatic restart capabilities to perform recovery for work in progress at the time of the
failure. While the failing subsystem instance is unavailable, new work requests can be
redirected to other data-sharing instances of the subsystem on other cluster nodes to provide
continuous application availability across the failure and subsequent recovery, which
provides the ability to mask planned and unplanned outages from the end user.

A Parallel Sysplex cluster consists of up to 32 z/OS images coupled to one or more Coupling
Facilities (CFs or ICFs) using high-speed specialized links for communication. The Coupling
Facilities, at the heart of a Parallel Sysplex cluster, enable high speed, read/write data
sharing and resource sharing among all of the z/OS images in a cluster. All images are also
connected to a common time source to ensure that all events are properly sequenced in time.

The flexibility of System z, z/OS, and Parallel Sysplex allows customers to create many high
availability system designs, from multiple LPARs in a Parallel Sysplex on multiple System z
servers, to dual LPARs in a Parallel Sysplex on a single System z server.

3. Manageability

A wide array of tools, which include the IBM Tivoli® product and other operational facilities,
contribute to continuous availability. IBM Autonomic Computing facilities and tools provide for
completely fault tolerant, manageable systems that can be upgraded and maintained without
downtime.

Autonomic computing technologies that provide Self-Optimizing, Self-Configuring, and
Self-Healing characteristics go beyond simple hardware fault tolerance. Additionally, the
System z hardware environment provides:

Fault Detection
Automatic switching to backups where available
(Chipkill memory, ECC cache, CP, Service Processor, system bus, Multipath I/O, and so on)
Plug and Play and Hot swap I/O
Capacity Upgrade on Demand

4. Security

On March 14, 2003, IBM eServer™ zSeries 900 was the first server to be awarded EAL5
security certification. The System z architecture is designed to prevent the flow of information
among logical partitions on a system, thus helping to ensure that confidential or sensitive data
remains within the boundaries of a single partition.

On February 15, 2005, IBM and Novell® announced that SUSE® Linux Enterprise Server 9
successfully completed a Common Criteria (CC) evaluation to achieve a new level of security
certification (CAPP/EAL4+). IBM and Novell also achieved United States (US) Department of
Defense (DoD) Common Operating Environment (COE) compliance, which is a Defense
Information Systems Agency requirement for military computing products.

On March 2, 2006, z/OS V1.7 with the RACF® optional feature achieved EAL4+ for
Controlled Access Protection Profile (CAPP) and Labeled Security Protection Profile (LSPP).
This prestigious certification assures customers that z/OS V1.7 goes through an extensive
and rigorous testing process and conforms to standards that the International Standards
Organization sanctions.

These certification efforts highlight the IBM ongoing commitment to providing robust levels of
security to assist customers in protecting their business critical data.

5. Scalability


The Capacity Upgrade on Demand (CUoD) capability allows you to non-disruptively add one
or more Central Processors (CPs), Internal Coupling Facilities (ICFs), System z Application
Assist Processor (zAAP), and Integrated Facility for Linux (IFLs) to increase server resources
when they are needed, without incurring downtime. Capacity Upgrade on Demand can
quickly add processors up to the maximum number of available inactive engines. Also,
additional books (up to a maximum of four in total) can be installed concurrently, providing
additional processing units and memory capacity to a z9® or z10® server.

In addition, the new Enhanced Book Availability function also enables a memory upgrade to
an installed z9 or z10 book in a multi-book server. This feature provides customers with the
capacity for much needed dynamic growth in an unpredictable ATM/EFT world.
The CUoD functions include:

Non-disruptive CP, ICF, IFL, and zAAP upgrades
Dynamic upgrade of all I/O cards in the I/O Cage
Dynamic upgrade of memory

The Parallel Sysplex environment can scale nearly linearly from two to 32 systems. This
environment can be a mix of any servers that support the Parallel Sysplex environment.

6. Dynamic workload balancing

To end users and business applications, the entire Parallel Sysplex cluster can be seen as a
single logical resource. Just as work can be dynamically distributed across the individual
processors within a single SMP server, so too can work be directed to any node in a Parallel
Sysplex cluster that has the available capacity, which avoids the need to partition data or
applications among individual nodes in the cluster or to replicate databases across multiple
servers.

7. IBM Integrated Cryptographic Service Facility

In addition to the external hardware security modules (HSMs) that are available on other platforms,
BASE24-eps on System z can take full advantage of the IBM Crypto Express 2 card using the
Integrated Cryptographic Service Facility (ICSF) for very high speed and highly available
cryptographic services, such as Personal Identification Number (PIN) translation and
verification and Message Authentication Code (MAC) generation and validation.

8. Integration with external authorization systems

On all platforms, BASE24-eps can use data communications to send requests and receive
responses from external transaction authorization systems. On System z only, other means
of communicating with external authorization systems are available, such as:

IBM WebSphere MQ CICS Gateway for communicating synchronously or asynchronously
with local CICS-based authorization systems. (Synchronous communications is
recommended only for suitably reliable and low-latency systems.)

The IBM External CICS Interface (EXCI) for communicating synchronously with suitably
reliable and low-latency local CICS authorization systems with the lowest possible CPU
cost.

IBM IMS Connect for communicating with local IMS-based authorization systems.

Tuesday, February 2, 2016

THE CICS JOURNEY CONTINUES - By Nick Garrod

CICS recently celebrated 46 years as the premier transaction processor (some IMS fans may disagree here—that is their right) but what does this mean all these years later? Well, for starters, we have a piece of software that has lasted 46 years in the market and that is not something one sees every day (again, IMS fans could take issue as they are a year more mature). CICS has evolved over the years to be rather good at enterprise-grade, mixed-language application serving, and the recent announcement of CICS Transaction Server V5.3 (CICS TS V5.3), along with the CICS Tools, makes a compelling proposition that 46 years hence CICS will still be leading the field.

When CICS V5 entered the market, it was underpinned by three main themes:
• Service Agility
• Operational Efficiency
• Cloud enablement.

Internally, we also talked about the enhancements we were making to core CICS function, the foundation of CICS. The three releases have built on these themes (and the core foundation) and remain largely similar with a slight amendment to the final point, which in release 3 we refer to as cloud with DevOps. This is not just a marketing ploy to jump on the DevOps bandwagon; there is serious value to the DevOps teams to be found in this release.

Service Agility

The improvements in this space are largely around two key areas, support for Java and support for the WebSphere Liberty Profile, providing better interoperability, simplified management and enhanced Java SE support.

A set of new WebSphere Liberty profile features provide support for a wider range of Java web APIs and application frameworks. CICS TS V5.3 augments the previously provided Liberty profile features with:

• Contexts and dependency injection (CDI)*
• Enterprise JavaBean (EJB) Lite subset
• Managed Beans*
• MongoDB*
• OSGi Console*
• Session persistence (JDBC type 4 driver only)*.
  * Also available for CICS TS V5.2 via APAR PI25503.
These features are new, but work has also gone into improving existing Liberty profile features. Specifically, the work that has been done is in the area of adding EAR support for bundles, making Java-based web applications even more portable, adding SQLJ support for use with the DB2 type 2 driver data sources and adding transaction support to the Blueprint feature.

Enhanced interoperability is addressed in CICS TS V5.3 with technology that provides the ability for Java programs in a Liberty profile Java Virtual Machine (JVM) server and non-Java programs to call each other using standard API calls. This enables Java applications to use the standard JEE Connector Architecture (JCA) to invoke CICS programs in any supported language. Non-Java CICS programs can issue an EXEC CICS LINK to call a Java application running in a Liberty profile JVM server.

The IBM z/OS Connect feature is now supported by CICS TS (in CICS TS V5.2 via APAR PI25503); it provides RESTful APIs and accepts JavaScript Object Notation (JSON) payloads between CICS, mobile devices and cloud environments. Java applications and JVM system objects can now be monitored using Java Management Extensions (JMX). The following JMX-related Liberty profile features are now supported:

• Local JMX connector*
• Monitoring*
• REST connector (for JMX)*.
  * Also available for CICS TS V5.2 via APAR PI25503.
Users of the Liberty profile JVM server can now manage and monitor applications and system objects locally by using the JMX client API, or remotely by using the Jconsole monitoring tool included in Java SE.

JVM server administration is improved by simplifying the process of managing log files that include controls for a maximum number of zFS logs, the ability to redirect log files to the MVS JES log and the standardization of timestamps.

Java SE programs that run in a CICS OSGi JVM server can now use the WebSphere MQ classes for the Java Message Service (JMS) as an alternative to proprietary WebSphere MQ classes for Java. Developers familiar with the JMS API can easily access WebSphere MQ resources with enhancements made to the CICS MQ attachment facility to support the required new commands.

This JMS support is not available in a Liberty profile JVM server; it is currently limited to Java programs that run in an OSGi JVM server. Support for the WebSphere MQ classes for JMS is provided in WebSphere MQ for z/OS V7.1 and V8. However, there are several requirements:

• V7.1 requires MQ APAR PI29770 (built on fix pack 7.1.0.6) or later
• V8.0 requires base APAR PI28482 and fix pack 8.0.0.2 or later
• CICS TS V5.2 is also supported and requires APAR PI32151.

Operational Efficiency

The improvements in this theme concern performance optimizations, better metrics and additional security. These optimizations are largely in the area of web services. Since its introduction into CICS TS V3.1 in 2005, web services has become one of the most popular methods of interacting with CICS applications and in this new release there are a number of significant optimizations.

The pipeline processing of HTTP requests has been improved, removing the need for an intermediate web attach task (CWXN transaction) in the majority of use cases. This will reduce the CPU and memory overhead for most types of SOAP and JSON based HTTP CICS web services. This optimization may also be used for inbound HTTPS requests, where SSL support is provided by the Application Transparent Transport Layer Security (AT-TLS) feature of IBM Communications Server. CICS TCPIPSERVICE resources may be configured as AT-TLS aware to obtain security information from AT-TLS. HTTPS implementations using CICS SSL support still use the CWXN transaction but should also see performance improvements due to reduced TCB switches in these scenarios.

CICS TS V5.3 sees performance improvements in many areas that help to reduce CPU overhead. These include exploitation of a number of new hardware instructions introduced in the IBM z9, cache alignment of some key CICS control blocks, the use of prefetch, reduced lock contention within monitoring algorithms, improvements to the MRO session management algorithms and tuning of internal procedures. Improvements in efficiency have noticeable gains in the CICS trace facility, the CICS monitoring facility and for MRO connections with high session counts.

CICS transaction tracking identifies relationships between tasks in an application as they flow across CICS systems and can be visualized in the CICS Explorer. In this release, transaction tracking has been extended to transactions started by the CICS-WebSphere MQ bridge, expanding the scope of transactions that can use this facility to help with reporting, auditing and problem determination.
A number of new metrics have been added into the global CICS statistics for transaction CPU time measurements and are captured without the need for CICS monitoring to be active. This allows greater insight into the CPU resource usage of the regions without the SMF 110 records processing overhead.

Additional security features include new support for the Enhanced Password Algorithm, implemented in RACF APAR OA43999, allowing stronger encryption of passwords. There is enhanced support for Kerberos providing an EXEC CICS SIGNON TOKEN command, avoiding the need to flow a password. Applications can now validate a Kerberos security token (as determined by an external security manager) and associate a new user ID with the current terminal.

A new EXEC CICS REQUEST PASSTICKET API can be used for outbound requests from the current task, where basic authentication is required, which avoids the need to flow passwords. RACF or similar external security manager builds the PassTicket. There are further opportunities to offload authentication requests to open TCBs, which reduces contention on the resource owning TCB.

Cloud With DevOps

The underlying capability in the area of cloud and DevOps enables adoption of a continuous development integration model for automated CICS deployments. The value comes from four main areas:
• Automated builds
• Scripted deployments
• UrbanCode Deploy support
• Enhanced cloud enablement.

Bundles and cloud applications are a convenient way to package and manage components, resources and dependencies in CICS.

In this release, a command line interface for automating the building of CICS projects created in the CICS Explorer is introduced in the shape of a CICS build toolkit. CICS cloud applications and bundles, as well as OSGi components, can now be automatically built from source code.
Scripts call the CICS build toolkit to create a build, which automatically runs when application updates are made. A build script would typically check out the latest application version from source control, along with its dependencies. It then calls the CICS build toolkit to build the projects from the application.

In a final step the script initiates a copy of the built projects to an appropriate location—such as an artifact repository or a staging area on zFS. The CICS build toolkit is supported on z/OS, Linux and Microsoft Windows, and supports CICS TS V4.1 and later (see Figure 1).

A built CICS project that resides in zFS may be programmatically deployed across CICS systems by using a set of scripting commands new to this release and can simplify and automate application deployments.

DFHDPLOY is a new batch utility to support the automated provisioning of CICS bundles, OSGi bundles within CICS bundles and CICS applications by using the following simple commands:
• SET APPLICATION
• SET BUNDLE
• DEPLOY APPLICATION
• UNDEPLOY APPLICATION
• DEPLOY BUNDLE
• UNDEPLOY BUNDLE.
You can then call DFHDPLOY from a script to deploy bundles and applications across CICS systems and set them to a desired state, such as “enabled” or “available.” It can also be used to undeploy and remove them when they are no longer required (see Figure 2).
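As an illustrative sketch only, a DFHDPLOY batch step might look like the following; the load library, CICSplex, platform and application names are placeholders, and the exact DD names and command operands should be verified against the CICS TS V5.3 documentation.

//DEPLOY   EXEC PGM=DFHDPLOY,REGION=100M
//STEPLIB  DD DISP=SHR,DSN=CICSTS53.CICS.SDFHLOAD
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
SET CICSPLEX(PLEX1);
SET APPLICATION(CATALOG) PLATFORM(PLATFORM1) VERSION(1.0.0);
DEPLOY APPLICATION STATE(AVAILABLE);
/*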

IBM UrbanCode Deploy orchestrates and automates the deployment of applications, middleware configurations and database changes. CICS has made available a plugin for UrbanCode Deploy that supports the deployment of CICS applications as part of these orchestrations (see Figure 3).
Multiple deployment steps can be coordinated in a single action with UrbanCode Deploy. Similar applications and environments such as development systems or more tightly controlled test and production environments can reuse these deployment processes.

The UrbanCode plugin provides functions for installing and removing resources, issuing NEWCOPY and PHASEIN for programs and performing a pipeline scan. Batch utilities like DFHDPLOY can be reused using the z/OS utility plugin (see Figures 4 and 5). The plugin is available for CICS TS V4.1 or later and may be downloaded from the UrbanCode Deploy plugin website at https://developer.ibm.com/urbancode/plugin/cics-ts.

In addition to these deployment capabilities, there are a number of improvements made to the core CICS cloud function to increase the value to CICS customers. Threshold policies were introduced in CICS TS V5 and in this recent release they have been enhanced by providing the ability to supply a threshold policy for the number of IBM MQ requests, DL/1 requests, named counter requests and shared temporary storage requests issued by a CICS task. This now brings the number of thresholds against which an action can be triggered to 14.

Program and URI entry points have been available before but this release sees the addition of transaction entry points. This provides the ability to scope policies to be specific to a particular transaction ID. Recovery of the application infrastructure is enhanced, so that the available or unavailable state of an application is automatically recovered across CICS restarts.

CICS TS version 5 has seen many new innovations: It has been the first version to have a family release (five CICS Tools were announced on the same day as the run-time); we have seen faster development cycles between major releases; we have seen rich feature packs available for download; we have seen more requirements delivered than ever before (by the time release 3 hits the streets, more than 300 customer requirements will have been satisfied); and we have seen a move toward design thinking in the way this latest release was developed. Advances in the hardware provide potential benefits to CICS customers such as:
• Exploitation of simultaneous multithreading (SMT) by CICS Java applications
• Greater data compression and reduced data transfer time for CICS SMF data
• Cryptographic acceleration for CICS transactions using SSL or TLS
• Planned availability of large memory on z/OS (for storage above the bar or JVM heap storage).

All these things, together with advances in the hardware CICS runs on, the mainframe, provide supreme additional value for your CICS environment and a compelling reason to upgrade. If you just want to try out some of the exciting new capability, you can always download the CICS TS Developer Trial as many times as you wish.

Wednesday, January 27, 2016

COBOL Future

Recently I attended a seminar on COBOL which was given by Captain COBOL.

Just wanted to record a few things from that session... :)

COBOL will be replaced by …

• Fortran: 1960s
• PL/I: 1970s
• PASCAL: 1980s
• C: 1990
• C++: 1995
• Java: 1998
• C#: 2001
• What's next?

COBOL is constantly evolving

• Java interoperability
• Unicode support
– (UTF-16, more and more UTF-8)
• XML processing for Web Services
• JSON support on the way

COBOL can be Java!

• You can write Java classes in COBOL
• Invoke Java methods from COBOL
• Inherit from Java classes in COBOL
• COBOL can be C!
• Write C functions with return value in COBOL
• Invoke C functions from COBOL
• COBOL can be multithreaded

COBOL is the Language of the Future!

• 54 million new lines of COBOL written every year
• The base is growing, not shrinking
• 1 million COBOL programmers


Sunday, January 3, 2016

DB2 10 Conversion Mode - Reduce resource consumption on DB2 10 for z/OS

To utilize the DB2 10 improvements, packages must be rebound to create the new 64-bit thread structures.
Large z/OS Page Frames
Among the infrastructure changes in DB2 10 is the way that PGFIX(YES) buffer pools are managed on a System z10* or zEnterprise* 196 (z196) processor. The System z10 processor with z/OS* 1.10 introduced 1 MB page frames. As the amount of memory increases, large page frames help z/OS improve CPU performance. IBM laboratory testing has measured CPU improvements of a few percent for specific transaction workloads. To exploit large frames, define the buffer pools with PGFIX(YES) and specify the LFAREA z/OS system parameter. LFAREA is the amount of real storage that z/OS uses for large frames.
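As a hedged sketch of what that setup involves (the buffer pool name, size and LFAREA amount are placeholders, and the -DB2A command prefix stands for your own subsystem), the two pieces look like this:

-DB2A ALTER BUFFERPOOL(BP1) VPSIZE(100000) PGFIX(YES)

IEASYSxx:  LFAREA=2G

PGFIX(YES) takes effect the next time the buffer pool is allocated, and changing LFAREA requires an IPL, so both changes need to be planned rather than applied on the fly.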

RELEASE(DEALLOCATE)

The RELEASE(DEALLOCATE) option has been part of the DB2 BIND/REBIND command for a long time, but DB2 10 makes the function more useful. The dramatic Database Services Address Space (DBM1) virtual-storage constraint relief in DB2 10 achieved with rebind makes it possible to make more use of RELEASE(DEALLOCATE). This change saves up to 10 percent of CPU time for high-volume transactions with short-running SQL, without changing applications or Data Definition Language (DDL).
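For illustration, a package can be rebound with RELEASE(DEALLOCATE) through the TSO batch program IKJEFT01; the subsystem name, load library, collection and package names below are placeholders.

//REBIND   EXEC PGM=IKJEFT01,DYNAMNBR=20
//STEPLIB  DD DISP=SHR,DSN=DSNA10.SDSNLOAD
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
  DSN SYSTEM(DB2A)
  REBIND PACKAGE(COLLID1.PKGNAME1) RELEASE(DEALLOCATE)
  END
/*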

Distributed Applications

For Distributed Data Facility (DDF) work, after rebinding packages with RELEASE(DEALLOCATE), the customer must issue the MODIFY DDF PKGREL(BNDOPT) command to allow DB2 to use RELEASE(DEALLOCATE) processing for packages bound with RELEASE(DEALLOCATE). DDF inactive thread processing (CMTSTAT=INACTIVE) takes advantage of new high-performance database-access threads (DBATs) to increase distributed DBAT thread reuse. Issuing the MODIFY DDF PKGREL(COMMIT) command reverts to RELEASE(COMMIT) behavior when, for example, you need to allow utilities to break in. With DB2 for Linux*, UNIX* and Windows* 9.7 Fix Pack 3, the Call Level Interface (CLI) and JDBC packages are bound with RELEASE(DEALLOCATE) by default. RELEASE(DEALLOCATE) assumes the use of well-behaved applications that adhere to required locking protocols and issue frequent commits.
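For reference, switching between the two behaviors is done with console commands like the following, where the -DB2A command prefix is a placeholder for your own subsystem:

-DB2A MODIFY DDF PKGREL(BNDOPT)
-DB2A MODIFY DDF PKGREL(COMMIT)

The first form honors each package's bind option, including RELEASE(DEALLOCATE); the second reverts DDF threads to RELEASE(COMMIT) behavior so that, for example, utilities and DDL can break in.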
DB2 10 further improves overall Distributed Relational Database Architecture (DRDA) application performance for result sets from SELECT statements that contain the FETCH FIRST 1 ROW ONLY clause by combining the OPEN, FETCH and CLOSE requests into a single network request. DB2 10 also offers improved DDF performance through restructuring distributed processing on the server, particularly the interaction between the DDF and DBM1 address spaces.
Migrating to DB2 10 Conversion Mode, rebinding packages, exploiting large page frames and exploiting RELEASE(DEALLOCATE) can save up to 10 percent of the CPU for transaction workloads and up to 20 percent for native SQL-procedure applications. Much greater CPU savings is possible for queries that contain large numbers of index Stage 1 predicates, as well as IN-list predicates. Also, more Stage 2 predicates can be pushed down to Stage 1.

Capacity Improvements

Since most of the thread storage has been moved above the bar, DB2 10 can support more threads than DB2 9, thereby making it possible to reduce the number of DB2 data-sharing members, or at least hold the number of members constant while increasing transaction throughput.
If you were previously unable to increase the MAXKEEPD zparm value due to a DBM1 virtual-storage constraint in DB2 9, you may be able to increase MAXKEEPD now that the local statement cache has moved above the bar. A larger MAXKEEPD may avoid prepares for more SQL statements and provide additional CPU savings.

I/O Improvements

Other CPU improvements apply in more specific scenarios. Some of these are related to I/O improvements since I/Os are one of the significant consumers of CPU time. These include an improved, dynamic prefetch sequential-detection algorithm. List prefetch support for indexes helps minimize index I/Os when scanning a disorganized index. The number of log I/Os is also reduced and long-term page fixing of the log buffers saves CPU time as well.

zIIP and zAAP Exploitation

Buffer pool prefetch and deferred write Supervisor Request Blocks (SRBs) ordinarily aren’t big CPU consumers, but DB2 10 makes this specific processing eligible for System z* Integrated Information Processors (zIIPs); zIIP processing can reduce the number of billed MIPS a company uses. This CPU savings is more significant when using index compression.
As in DB2 9, DB2 10 can direct up to 80 percent of CPU-intensive parallel-query processing to run on an available zIIP. DB2 10 makes more queries eligible for query parallelism, which can result in more zIIP exploitation. In DB2 10, portions of the RUNSTATS utility are eligible to run on a zIIP.
XML schema validation and nonvalidation parsing of XML documents are eligible for zIIP or System z Application Assist Processing (zAAP). If XML parsing is done under DDF enclave threads, it’s eligible for zIIP. If the XML parsing is done under a batch utility, it’s eligible for zAAP.

Query Performance

Range-list index scan is a new type of access path DB2 10 uses to significantly improve the performance of certain scrolling-type applications where the returned result set is only part of the complete result set. The alternative in DB2 9 was to use multi-index access (index ORing), which isn’t as efficient as single-index access. Prior to DB2 10, list prefetch couldn’t be used for matching IN-list access. In DB2 10, list prefetch can be used for IN-list table access, ACCESSTYPE=’IN’.
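As a hedged example, the kind of scrolling (cursor repositioning) query that range-list access targets looks like the following, where the table, columns and literal values are purely illustrative and an index is assumed on (LASTNAME, FIRSTNAME):

SELECT LASTNAME, FIRSTNAME, PHONENO
  FROM EMP
  WHERE (LASTNAME = 'JONES' AND FIRSTNAME > 'WENDY')
     OR (LASTNAME > 'JONES')
  ORDER BY LASTNAME, FIRSTNAME
  FETCH FIRST 20 ROWS ONLY;

Prior to DB2 10 this shape of predicate typically drove multi-index (index ORing) access; with range-list access, DB2 10 can satisfy both OR branches with a single range-list scan of the same index.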
The process of putting rows from a view or nested table expression into a work file for additional query processing is called physical materialization, which is an overhead. Additionally, it limits the number of join sequences that can be considered and can limit the administrator’s capability to apply predicates early in the processing sequence. The join predicates on materialization work files are also not indexable. In general, avoiding materialization is desirable. In DB2 10, materialization can be avoided in additional areas, particularly for view and table expressions involved in outer joins.
The processing of Stage 1 and nonindex matching predicates has also been enhanced. DB2 now processes non-Boolean predicates more efficiently when accessing an index and Stage 1 data-access predicates. You don’t need to rebind your static applications to take advantage of some of these optimization improvements. However, a rebind is required to take full advantage. More complex queries with many predicates show higher improvement. Queries that scan large amounts of data also show higher CPU savings.
DB2 10 also contains some SQL-sorting enhancements. DB2 10 introduces hash support for large sorts, which potentially reduces the number of merge passes needed to complete these sorts. Hashing can be used to identify duplicates on input or to sort, if functions such as DISTINCT or GROUP BY are specified. Some additional cases also exist where, when FETCH FIRST N ROWS is used, DB2 10 can avoid the sort process altogether.

Insert Performance

The performance of high concurrent inserts is better in DB2 10, with tests showing a typical range of 5- to 40- percent reduction in CPU time when compared with DB2 9 performance. Higher CPU performance is achieved when processing sequential inserts. The amount of improvement also depends on the table-space type, with Universal Table Spaces seeing higher improvements.
When a series of inserts are sequential with respect to the cluster key–but the key is less than the highest key in the index–a new page-selection algorithm helps minimize getpages, which can help reduce CPU cost.
DB2 10 contains some referential integrity-checking improvements on inserts that may result in reduced CPU utilization and I/O processing.

Utilities and Large Objects

Generally speaking, DB2 utility CPU performance in DB2 10 is equivalent to that of DB2 9, but DB2 9 already introduced performance improvements of up to 50 percent CPU savings compared to DB2 8 for various utilities processing.
DB2 10 also contains numerous large object (LOB) enhancements and one of them applies to Conversion Mode, namely LOB materialization avoidance. For LOBs, IBM has observed up to a 16-percent reduction in CPU consumption for DDF Inserts.

Add Up the Savings

All of the CPU savings described here apply to Conversion Mode. Some of the performance improvements are available upon installation of DB2 10, without additional changes, while others require very minimal changes, such as a change to installation parameters. Some require a System z10 processor and perhaps changing the buffer-pool parameters. Most require appropriate rebinds to fully realize the benefits. Additional CPU benefits can be obtained when migrating to New Function Mode, which will be outlined in a future article.

Interview with Father of DB2 - Don Haderle



IBM Fellow Don Haderle is the only person who can claim to have been both the “father” and “mother” of DB2*. 

In the 1980s, more than one person who had worked on the DB2 project had been called the “Father of DB2” in press announcements. So during a database show in San Francisco, when Haderle and several panelists were asked their titles, he responded
“ ‘The Mother of DB2 because the Father of DB2 title was already taken.’ So they called me the Mother of DB2 for quite some time,” he recalls. However, Janet Perna, the management executive lead for DB2 on open systems at the time, later dubbed him the “official” Father of DB2. “And when I retired, that was the title they gave me in a press article. So it was a name IBM gave me; I didn’t make it up.”

Haderle spoke about the development of the database management system (DBMS) and its lasting impact on the mainframe and beyond.

What spurred the development of DB2?
Don Haderle (DH): IBM derived most of its revenue and profit from mainframe hardware, including peripherals. DBMS customers used more storage and processing capacity than others, so IBM sought to drive greater DBMS adoption. However, IBM depended on ISVs to support the latest IBM hardware. These vendors often delayed doing so until the new hardware enjoyed a strong installation base. This slowed hardware sales. As a result, IBM’s storage division funded the development of an advanced DBMS and transaction systems in 1976.

What had come before DB2 and what made a new approach necessary?

DH:
 Early DBMSs, such as IBM’s IMS* and Cullinet’s Integrated Database Management System, supported bill of materials, material resource planning and other applications critical to business processes in manufacturing, finance, retail and other industries. These products featured hierarchical or network data models and provided both database and transaction management services. 
However, database schema changes required rewriting application programs, and programmers had to understand the complex principles of concurrency and consistency—advanced thoughts at the time. As a result, application upgrades were often complicated and time-consuming; and it was difficult to share a database with distinctly different applications.

From a mainframe user’s perspective, what need were you seeking to address?

DH: Customers pressed for a solution to their application backlog, which was measured in terms of years in some cases. They wanted a database that could respond to rapid development for diverse applications doing transactions, batch and interactive access. They were willing to suffer some hit in cost performance over IMS and CICS* to respond to their business initiatives quickly. This was the design point for DB2—orders of magnitude improvement in the application development cycle for databases to perform transactions and business intelligence.

Where did your team begin?
DH:
 The relational prototype from IBM Research, called System R, offered a great base for starting. IBM Fellow Ted Codd published his famous paper in 1969 for the exposition of the relational database model. The IBM Research team, among others, put together a prototypical example of that model. They came up with the concrete specification, which was SQL, and they put together that prototype and wrote a paper in 1976 that exposed the language and the prototype, which was System R. That was a “Wow!” back in that era. The folks that developed System R worked right next door to us in San Jose, and so we went up to chitchat with them.

What is the origin of the DB2 name? 
DH:
 IMS DL/I was IBM’s first database—hence, DB ONE. There was a contest—I was heads down on the technical stuff—and the marketing folks came up with DB2.
How did it revolutionize databases?
DH:
 The revolution was a single database and the relational model, for transactions and business intelligence.

At the time, did your team realize the impact DB2 would have?
DH:
 Did we think it would last for 30 years? We were hopeful, but first we needed to ensure the long-term survival of the mainframe and port DB2 to open systems platforms, like UNIX*, on IBM and non-IBM hardware. But that is a longer story.

Though it launched in 1983, you point to 1988 and DB2 V2 as a seminal point in its development. Why? 
DH:
 In 1988, DB2 V2 proved it was viable for online transactional processing [OLTP], which was the lifeblood of business computing. It had already proven its capability to perform analytical queries, but 1988 made it viable for the heart of business processing —OLTP. At that point, it could yield adequate performance metrics—transactions per second—though it was still more expensive in compute cost vis-à-vis IMS DL/I, but Moore’s Law would continue to narrow that gap.

What has been the lasting impact of DB2 at IBM? 
DH:
 DB2 was key to establishing the IBM software business and making IBM an overall solution provider—hardware, software, services. As I said earlier, IBM was a hardware company in the 1970s and the software was sponsored by the hardware businesses to support their platforms. We reported to our respective hardware executives.
DB2’s early success, coupled with IBM WebSphere* in the 1990s, led the transformation of the business and engendered the investments that made that happen. Thus the survival of DB2 is a product of the completeness and competitiveness of the IBM software portfolio, not just the excellence of DB2 itself.

Looking back, what’s been the most surprising use for DB2?
DH:
 When you say “surprising,” I tend to think of applications—the moon launch, open-heart surgery or some other gee-whiz application. The closest thing for DB2 was the database system for the Olympics for several games [the 1992 Summer Olympics in Barcelona, 1996 Summer Olympics in Atlanta and the 1998 Winter Olympics in Nagano] when there was extreme pressure on performance and any failure was visible to the world. What made that a pressure moment was that as soon as the game was played, the race was run and the scores were posted, it was immediately available to all the press around the world. If they didn’t have the information immediately after the event—and there were hundreds of thousands of queries done on this thing—then we had egg all over our faces.

Where do you see DB2 in 30 years?
DH:
 In the same position as today—a vital database infrastructure for core business processing within enterprises. While there’s another business need being addressed by NOSQL and NEWSQL databases that will evolve, DB2 will serve the basic business transactions and business intelligence as it does today, evolving to respond to the big data business needs that demand aggregation of data within and outside the enterprise in surprising volumes with surprising velocity.