Friday, February 26, 2016

DevOps on the Mainframe

DevOps is short for ‘development and operations’, and this buzzword is increasingly used to describe the delivery of software and infrastructure changes.
DevOps is about the rapid ‘code, build, test, package, release, configure and monitor’ cycle of software deployments; it is also called agile infrastructure. Successful DevOps implementation revolves around developing a DevOps culture, which requires effective communication between all those involved in the delivery pipeline.
Here are five developments we need to know about to compete.
1. Large enterprises will get fully on-board
In most large enterprises, DevOps isn’t new. Small teams are using DevOps principles to work on discrete projects, and after a few years of experimenting, they’re starting to rack up successes. But in general, DevOps hasn’t been adopted widely throughout the enterprise. As a result, software releases in the enterprise are still too slow, too buggy and too costly.
Now that multiple teams are proving the value of adopting DevOps practices, C-level executives are taking notice and beginning to wonder whether DevOps might be an answer to some of their organisation’s top business challenges. They’re starting to engage with IT and asking how they can employ DevOps principles at scale throughout the enterprise to bring more speed and quality to business applications.
Interest at the C-level is a positive development because DevOps can’t succeed in large enterprises without executive support. DevOps requires a fair amount of experimentation and a tolerance for failure—the kind of experimentation and failure that might not be acceptable to the organisation’s leaders unless they feel confident that the eventual outcome will be worth it.
As enterprises begin to modernise legacy applications in 2016 and beyond, DevOps will play a central role. Within five years, DevOps will be the norm when it comes to software development.
2. Standards will emerge
DevOps currently has no defined standards, so what passes for DevOps in one organisation might not look much like DevOps in another. That means DevOps entails a certain amount of risk—and large enterprises are notoriously risk averse. Even if your small teams are documenting wins, scaling out DevOps successes to the broader organisation can be a process of trial and error, which most enterprises don’t tend to engage in willingly.
As different teams experiment with DevOps and share their successes, there will be opportunities to standardise best practices gleaned from the lessons learned. Standardisation will help mitigate the risk of scaling DevOps practices and could involve everything from testing processes to determining the best deployment tools, even how to use internal coaching across teams.
Eventually, as best practices rise to the top, they will likely become adopted and pervasive across industries.
3. Security will increasingly become integrated with DevOps
Whether they learned the hard way or by watching other organisations get burnt, enterprises know that a security problem caught by users is far more damaging than a security problem they catch internally before it’s released into production.
As the pace of software delivery increases, it poses a challenge for security teams because their primary focus is on releasing and maintaining safe and secure applications. Doing things faster doesn’t necessarily give them the time to thoroughly vet applications before they get into end users’ hands. The challenge lies in finding the right combination of processes that allows thorough security assessments and keeps software releases flowing at a rapid pace. Bringing security and DevOps teams together offers a solution.
Recent high-profile security breaches have made it clear that security cannot be an afterthought. Security best practices and testing must be built into the development process from the beginning—and that means making it a part of the DevOps team.
That said, the full integration of security and DevOps has not become mainstream yet. In 2016 and beyond, security team members will become increasingly integrated into DevOps practices. This will occur with security experts coaching DevOps on how to effectively and efficiently embed application security within software development, deployment, and production cycles.
4. Key technology adoptions that enable DevOps will take off
Because DevOps is still in the beginning stages of adoption, there are few defined tool chains for DevOps and no accepted single standards. However, as organisations learn and share success from their DevOps practices, this will start to change in 2016 and beyond, with a few key technology concepts helping IT maximise speed and quality throughout the software development lifecycle.
Increasing automation
Automated testing, infrastructure and application deployments speed up cycles and reduce errors. By automating routine and repetitive tasks, you decrease cycle times in the software delivery lifecycle and also ensure repeatability. As enterprises look to adopt DevOps, the first few wins from a technology tool chain perspective will come from adopting automation that accelerates tasks, eliminates manual handoffs, and cuts down error-prone processes.
Decreasing latency
As organisations move to increase the pace of application delivery, they must look at each stage of the lifecycle and identify and remove the biggest hurdles that impede a rapid, high quality delivery cadence of software releases to customers. You can make huge progress by identifying the biggest bottlenecks in the ‘conveyor belt’ delivery pipeline. It’s not particularly useful to remove smaller bottlenecks first because the major bottlenecks can still cause technical ‘debt’ to build up earlier in the pipeline or reduce key resources further down in the pipeline.
Increasing visibility
For high quality applications to be delivered at a rapid pace from inception to the hands of end users, it is important to continuously assess and monitor them at every stage of the lifecycle. You must monitor and measure key metrics against business demands including application user experience, health and availability of the application and infrastructure, and threat and risk monitoring, sharing results across the team through continuous feedback loops. If they don’t meet the business needs, you must then improve and iterate, making constant forward progress.
5. Job roles will evolve
Most IT organisations’ adoption of DevOps will force everyone to adopt new skills—not just from a technical perspective, but also from a cultural one. As developers become more familiar with infrastructure, and operations staff gets more familiar with code, it’s inevitable that jobs will begin to morph and evolve.
In 2016 and beyond, those changes will go beyond development and operations to impact business analysts, planning teams, and even C-level executives. For example, traditional system administrator roles will become less relevant as automation takes over many tasks, while ‘full-stack’ engineers, who are familiar with the entire application technology stack, will start to become more critical.
Roles will evolve as teams become more horizontally embedded around products and services, and multiple roles become part of the extended DevOps delivery chain.
Conclusions for DevOps in 2016

In the digital revolution where software will determine business leadership in the marketplace, it’s critical that enterprises understand the power of DevOps to help deliver higher quality software faster.
DevOps is no longer a fringe movement or even simply an idea for so-called ‘unicorns.’ It’s the way enterprise IT must operate to compete and stay relevant in the marketplace. Armed with these predictions for DevOps, you can be ready for the changes to come—and take steps to be among the innovators, not those left behind.

Tuesday, February 16, 2016

BPXBATCH utility - How to run C/C++ programs through JCL in z/OS batch

BPXBATCH utility

This information provides a quick reference for the IBM-supplied BPXBATCH program. BPXBATCH makes it easy for you to run shell scripts and z/OS® XL C/C++ executable files that reside in z/OS UNIX files through the z/OS batch environment. If you do most of your work from TSO/E, use BPXBATCH to avoid going into the shell to run your scripts and applications.
In addition to using BPXBATCH, if you want to perform a local spawn without being concerned about environment set-up (that is, without having to set specific environment variables, which could be overwritten if they are also set in your profile), you can use BPXBATSL. BPXBATSL provides an alternate entry point into BPXBATCH and forces a program to run using a local spawn instead of the fork or exec that BPXBATCH uses. This ultimately allows a program to run faster.
BPXBATSL is also useful when you want to perform a local spawn of your program, but also need subsequent child processes to be forked or executed. Formerly, with BPXBATCH, this could not be done because BPXBATCH and the requested program shared the same environment variables. BPXBATSL is provided as an alternative to BPXBATCH. It forces the target program to run in the same address space in which the job itself was initiated, so that all resources for the job, such as DD allocations, can be used by the target program. In all other respects, it is identical to BPXBATCH.
For information on c89 commands, see c89 — Compiler invocation using host environment variables.

BPXBATCH usage

The BPXBATCH program allows you to submit z/OS batch jobs that run shell commands, scripts, or z/OS XL C/C++ executable files in z/OS UNIX files from a shell session. You can invoke BPXBATCH from a JCL job or from TSO/E (as a command, through a CALL command, or from a REXX EXEC).
JCL: Use one of the following:
  • EXEC PGM=BPXBATCH,PARM='SH program-name'
  • EXEC PGM=BPXBATCH,PARM='PGM program-name'
TSO/E: Use one of the following:
  • BPXBATCH SH program-name
  • BPXBATCH PGM program-name
BPXBATCH allows you to allocate the z/OS standard files stdin, stdout, and stderr as z/OS UNIX files for passing input, for shell command processing, and for writing output and error messages. If you do allocate standard files, they must be z/OS UNIX files. If you do not allocate them, stdin, stdout, and stderr default to /dev/null. You allocate the standard files by using the options of the data definition keyword PATH.
Note: The BPXBATCH utility also uses the STDENV file to allow you to pass environment variables to the program that is being invoked. This can be useful when not using the shell, such as when using the PGM parameter.
Example: For JCL jobs, specify PATH keyword options on DD statements; for example:
//jobname JOB …
 
//stepname EXEC PGM=BPXBATCH,PARM='PGM program-name parm1 parm2'
 
//STDIN   DD  PATH='/stdin-file-pathname',PATHOPTS=(ORDONLY)
//STDOUT  DD  PATH='/stdout-file-pathname',PATHOPTS=(OWRONLY,OCREAT,OTRUNC),
//            PATHMODE=SIRWXU
//STDERR  DD  PATH='/stderr-file-pathname',PATHOPTS=(OWRONLY,OCREAT,OTRUNC),
//            PATHMODE=SIRWXU
⋮
You can also allocate the standard files dynamically through use of SVC 99.
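To illustrate the STDENV note above, here is a minimal sketch of a JCL step that passes environment variables to a program started with the PGM parameter. The program path, variable names and output file paths are placeholders rather than values from the text above, and the use of an in-stream STDENV data set is an assumption to confirm for your environment:
//RUNAPP   EXEC PGM=BPXBATCH,PARM='PGM /u/user1/bin/myapp parm1'
//* STDENV supplies environment variables, one NAME=value pair per line
//STDENV   DD  *
HOME=/u/user1
LOGNAME=USER1
MYAPP_TRACE=ON
/*
//STDOUT   DD  PATH='/u/user1/myapp.out',PATHOPTS=(OWRONLY,OCREAT,OTRUNC),
//             PATHMODE=SIRWXU
//STDERR   DD  PATH='/u/user1/myapp.err',PATHOPTS=(OWRONLY,OCREAT,OTRUNC),
//             PATHMODE=SIRWXU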
For TSO/E, you specify PATH keyword options on the ALLOCATE command. For example:
ALLOCATE FILE(STDIN) PATH('/stdin-file-pathname') PATHOPTS(ORDONLY)
ALLOCATE FILE(STDOUT) PATH('/stdout-file-pathname')
         PATHOPTS(OWRONLY,OCREAT,OTRUNC) PATHMODE(SIRWXU)
ALLOCATE FILE(STDERR) PATH('/stderr-file-pathname')
         PATHOPTS(OWRONLY,OCREAT,OTRUNC) PATHMODE(SIRWXU)
 
BPXBATCH SH program-name
You must always allocate stdin as read. You must always allocate stdout and stderr as write.

Parameter

BPXBATCH accepts one parameter string as input. At least one blank character must separate the parts of the parameter string. When BPXBATCH is run from a batch job, the total length of the parameter string must not exceed 100 characters. When BPXBATCH is run from TSO, the parameter string can be up to 500 characters. If neither SH nor PGM is specified as part of the parameter string, BPXBATCH assumes that it must start the shell to run the shell script allocated by stdin.
SH | PGM
Specifies whether BPXBATCH is to run a shell script or command or a z/OS XL C/C++ executable file that is located in a z/OS UNIX file.
SH
Instructs BPXBATCH to start the shell, and to run shell commands or scripts that are provided from stdin or the specified program-name.
Note: If you specify SH with no program-name information, BPXBATCH attempts to run anything read in from stdin.
PGM
Instructs BPXBATCH to run the specified program-name as a called program.
If you specify PGM, you must also specify program-name. BPXBATCH creates a process for the program to run in and then calls the program. The HOME and LOGNAME environment variables are set automatically when the program is run, only if they do not exist in the file that is referenced by STDENV. You can use STDENV to set these environment variables, and others.
program-name
Specifies the shell command or the z/OS UNIX path name for the shell script or z/OS XL C/C++ executable file to be run. In addition, program-name can contain option information.
BPXBATCH interprets the program name as case-sensitive.
Note: When PGM and program-name are specified and the specified program name does not begin with a slash character (/), BPXBATCH prefixes your initial working directory information to the program path name.
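As a hedged illustration of this prefixing behaviour (the paths below are hypothetical), assume the submitting user's initial working directory is /u/user1:
//* Absolute path: BPXBATCH runs exactly this file
//STEP1    EXEC PGM=BPXBATCH,PARM='PGM /u/user1/bin/myapp'
//* Relative path: BPXBATCH prefixes the initial working directory,
//* so this also resolves to /u/user1/bin/myapp
//STEP2    EXEC PGM=BPXBATCH,PARM='PGM bin/myapp'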

Usage notes

You should be aware of the following:
  1. BPXBATCH is an alias for the program BPXMBATC, which resides in the SYS1.LINKLIB data set.
  2. BPXBATCH must be invoked from a user address space running with a program status word (PSW) key of 8.
  3. BPXBATCH does not perform any character translation on the supplied parameter information. You should supply parameter information, including z/OS UNIX path names, using only the POSIX portable character set.
  4. A program that is run by BPXBATCH cannot use allocations for any files other than stdin, stdout, or stderr.
  5. BPXBATCH does not close file descriptors except for 0, 1, and 2. Other file descriptors that are open and not defined as "marked to be closed" remain open when you call BPXBATCH. BPXBATCH runs the specified script or executable file.
  6. BPXBATCH uses write-to-operator (WTO) routing code 11 to write error messages to either the JCL job log or your TSO/E terminal. Your TSO/E user profile must specify WTPMSG so that BPXBATCH can display messages to the terminal.

Files

The following list describes the files:
  • SYS1.LINKLIB(BPXMBATC) is the BPXBATCH program location.
  • The stdin default is /dev/null.
  • The stdout default is /dev/null.
  • The stdenv default is /dev/null.
  • The stderr default is the value of stdout. If all defaults are accepted, stderr is /dev/null.

Tuesday, February 9, 2016

Overview of Z/OS Crypto


Cryptographic software : ICSF

The Integrated Cryptographic Service Facility or ICSF is the system component that provides
the interface to the hardware. As new functions are implemented in hardware, new versions
of ICSF are made available to exploit those functions. ICSF is available as a component of
z/OS. The most current versions are available at:

http://www.ibm.com/servers/eserver/zseries/zos/downloads/

A number of crypto instructions are now available in the CP and can be coded directly in an
application. However, most of the crypto functions can only be accessed by using the ICSF
Application Programming Interfaces (APIs). The API passes the cryptographic request to
ICSF, which determines what hardware is available and which device can best service the
request. The ICSF Application Programmer’s Guide provides a table that describes the
hardware that is required to support the APIs.

Saturday, February 6, 2016

How does the Mainframe (System z) address ATM, EFT, and POS processing requirements?

ATM and POS transactions are business-critical transactions that require a platform
(hardware and software) that provides robust levels of reliability, availability, and scalability.

The IBM System z server and z/OS operating system provide an ideal environment to host
payment systems. In the following sections, we list the processing requirements that can be
met when running on a System z platform.

1. Availability
2. Parallel Sysplex Clustering
3. Manageability
4. Security
5. Scalability
6. Dynamic Workload balancing
7. IBM Integrated Cryptographic Service Facility
8. Integration with External Authorization Systems.

1. Availability

System z provides 24-hours-a-day, 7-days-a-week availability, which includes scheduled
maintenance. Continuous availability goes beyond just hardware fault tolerance; it is
achieved by a combination of hardware, application code, and good system management
practices.

On a server basis, System z systems are equipped with features that provide for very high
availability:
Redundant I/O interconnect
Concurrent Capacity Backup Downgrade (CBU Undo)
Concurrent memory upgrade
Enhanced driver maintenance
Capacity backup upgrade
On/Off capacity

2. Parallel Sysplex clustering

When configured properly, a Parallel Sysplex® cluster has no single point-of-failure and can
provide customers with near continuous application availability over planned and unplanned
outages. Events that otherwise seriously impact application availability (such as failures in
hardware elements or critical operating system components) have no, or reduced, impact in a
Parallel Sysplex environment.

With a Parallel Sysplex cluster, it is possible to construct a parallel processing environment
with no single point-of-failure. Because all systems in the Parallel Sysplex can have
concurrent access to all critical applications and data, the loss of a system due to either
hardware or software failure does not necessitate loss of application availability. Peer
instances of a failing subsystem executing on remaining healthy system nodes can take over
recovery responsibility for resources that are held by the failing instance.

Alternatively, the failing subsystem can be automatically restarted on still-healthy systems
using automatic restart capabilities to perform recovery for work in progress at the time of the
failure. While the failing subsystem instance is unavailable, new work requests can be
redirected to other data-sharing instances of the subsystem on other cluster nodes to provide
continuous application availability across the failure and subsequent recovery, which
provides the ability to mask planned and unplanned outages from the end user.

A Parallel Sysplex cluster consists of up to 32 z/OS images coupled to one or more Coupling
Facilities (CFs or ICFs) using high-speed specialized links for communication. The Coupling
Facilities, at the heart of a Parallel Sysplex cluster, enable high speed, read/write data
sharing and resource sharing among all of the z/OS images in a cluster. All images are also
connected to a common time source to ensure that all events are properly sequenced in time.

The flexibility of System z, z/OS, and Parallel Sysplex allows customers to create many high
availability system designs, from multiple LPARs in a Parallel Sysplex on multiple System z
servers, to dual LPARs in a Parallel Sysplex on a single System z server.

3. Manageability

A wide array of tools, which include the IBM Tivoli® product and other operational facilities,
contribute to continuous availability. IBM Autonomic Computing facilities and tools provide for
completely fault tolerant, manageable systems that can be upgraded and maintained without
downtime.

Autonomic computing technologies that provide Self-Optimizing, Self-Configuring, and
Self-Healing characteristics go beyond simple hardware fault tolerance. Additionally, the
System z hardware environment provides:

Fault Detection
Automatic switching to backups where available
(Chipkill memory, ECC cache, CP, Service Processor, system bus, Multipath I/O, and so on)
Plug and Play and Hot swap I/O
Capacity Upgrade on Demand

4. Security

On March 14, 2003, IBM eServer™ zSeries 900 was the first server to be awarded EAL5
security certification. The System z architecture is designed to prevent the flow of information
among logical partitions on a system, thus helping to ensure that confidential or sensitive data
remains within the boundaries of a single partition.

On February 15, 2005, IBM and Novell® announced that SUSE® Linux Enterprise Server 9
successfully completed a Common Criteria (CC) evaluation to achieve a new level of security
certification (CAPP/EAL4+). IBM and Novell also achieved United States (US) Department of
Defense (DoD) Common Operating Environment (COE) compliance, which is a Defense
Information Systems Agency requirement for military computing products.

On March 2, 2006, z/OS V1.7 with the RACF® optional feature achieved EAL4+ for
Controlled Access Protection Profile (CAPP) and Labeled Security Protection Profile (LSPP).
This prestigious certification assures customers that z/OS V1.7 has gone through an extensive
and rigorous testing process and conforms to standards sanctioned by the International
Organization for Standardization.

These certification efforts highlight the IBM ongoing commitment to providing robust levels of
security to assist customers in protecting their business critical data.

5. Scalability


The Capacity Upgrade on Demand (CUoD) capability allows you to non-disruptively add one
or more Central Processors (CPs), Internal Coupling Facilities (ICFs), System z Application
Assist Processors (zAAPs), and Integrated Facilities for Linux (IFLs) to increase server resources
when they are needed, without incurring downtime. Capacity Upgrade on Demand can
quickly add processors up to the maximum number of available inactive engines. Also,
additional books (up to a maximum of four in total) can be installed concurrently, providing
additional processing units and memory capacity to a z9® or z10® server.

In addition, the new Enhanced Book Availability function also enables a memory upgrade to
an installed z9 or z10 book in a multi-book server. This feature provides customers with the
capacity for much needed dynamic growth in an unpredictable ATM/EFT world.
The CUoD functions include:

Non-disruptive CP, ICF, IFL, and zAAP upgrades
Dynamic upgrade of all I/O cards in the I/O Cage
Dynamic upgrade of memory

The Parallel Sysplex environment can scale near linearly from two to 32 systems. This
environment can be a mix of any servers that support the Parallel Sysplex environment.

6. Dynamic workload balancing

To end users and business applications, the entire Parallel Sysplex cluster can be seen as a
single logical resource. Just as work can be dynamically distributed across the individual
processors within a single SMP server, so too can work be directed to any node in a Parallel
Sysplex cluster that has the available capacity, which avoids the need to partition data or
applications among individual nodes in the cluster or to replicate databases across multiple
servers.

7. IBM Integrated Cryptographic Service Facility

In addition to the external hardware security modules (HSMs) that are available on other platforms,
BASE24-eps on System z can take full advantage of the IBM Crypto Express 2 card using the
Integrated Cryptographic Service Facility (ICSF) for very high speed and highly available
cryptographic services, such as Personal Identification Number (PIN) translation and
verification and Message Authentication Code (MAC) generation and validation.

8. Integration with external authorization systems

On all platforms, BASE24-eps can use data communications to send requests and receive
responses from external transaction authorization systems. On System z only, other means
of communicating with external authorization systems are available, such as:

IBM WebSphere MQ CICS Gateway for communicating synchronously or asynchronously
with local CICS-based authorization systems. (Synchronous communication is
recommended only for suitably reliable and low-latency systems.)

The IBM External CICS Interface (EXCI) for communicating synchronously with suitably
reliable and low-latency local CICS authorization systems with the lowest possible CPU
cost.

IBM IMS Connect for communicating with local IMS-based authorization systems.

Tuesday, February 2, 2016

THE CICS JOURNEY CONTINUES - By Nick Garrod

CICS recently celebrated 46 years as the premier transaction processor (some IMS fans may disagree here—that is their right) but what does this mean all these years later? Well, for starters, we have a piece of software that has lasted 46 years in the market, and that is not something one sees every day (again, IMS fans could take issue as they are a year more mature). CICS has evolved over the years to be rather good at enterprise-grade, mixed-language application serving, and the recent announcement of CICS Transaction Server V5.3 (CICS TS V5.3), along with the CICS Tools, makes a compelling case that 46 years hence CICS will still be leading the field.

When CICS V5 entered the market, it was underpinned by three main themes:
• Service Agility
• Operational Efficiency
• Cloud enablement.

Internally, we also talked about the enhancements we were making to core CICS function, the foundation of CICS. The three releases have built on these themes (and the core foundation) and remain largely similar with a slight amendment to the final point, which in release 3 we refer to as cloud with DevOps. This is not just a marketing ploy to jump on the DevOps bandwagon; there is serious value to the DevOps teams to be found in this release.

Service Agility

The improvements in this space are largely around two key areas: support for Java and support for the WebSphere Liberty profile, providing better interoperability, simplified management and enhanced Java SE support.

A set of new WebSphere Liberty profile features provide support for a wider range of Java web APIs and application frameworks. CICS TS V5.3 augments the previously provided Liberty profile features with:

• Contexts and dependency injection (CDI)*
• Enterprise JavaBean (EJB) Lite subset
• Managed Beans*
• MongoDB*
• OSGi Console*
• Session persistence (JDBC type 4 driver only)*.
*Also available for CICS TS V5.2 via APAR PI25503
These features are new, but work has also gone into improving existing Liberty profile features. Specifically, the work that has been done is in the area of adding EAR support for bundles, making Java-based web applications even more portable, adding SQLJ support for use with the DB2 type 2 driver data sources and adding transaction support to the Blueprint feature.

Enhanced interoperability is addressed in CICS TS V5.3 with technology that provides the ability for Java programs in a Liberty profile Java Virtual Machine (JVM) and non-Java programs to call each other using standard API calls. This enables Java applications to use the standard JEE Connector Architecture (JCA) to invoke CICS programs in any supported language. Non-Java CICS programs can issue an EXEC CICS LINK to call a Java application running in a Liberty profile JVM server.

The IBM z/OS Connect feature is now supported by CICS TS (in CICS TS V5.2 via APAR PI25503), which provides RESTful APIs and accepts JavaScript Object Notation (JSON) payloads between CICS, mobile devices and cloud environments. Java applications and JVM system objects can now be monitored using Java Management Extensions (JMX). The following JMX-related Liberty profile features are now supported:

• Local JMX connector*
• Monitoring*
• REST connector (for JMX)*.
*Also available for CICS TS V5.2 via APAR PI25503
Users of the Liberty profile JVM server can now manage and monitor applications and system objects locally by using the JMX client API, or remotely by using the JConsole monitoring tool included in Java SE.

JVM server administration is improved by simplifying the process of managing log files that include controls for a maximum number of zFS logs, the ability to redirect log files to the MVS JES log and the standardization of timestamps.

Java SE programs that run in a CICS OSGi JVM server can now use the WebSphere MQ classes for the Java Message Service (JMS) as an alternative to proprietary WebSphere MQ classes for Java. Developers familiar with the JMS API can easily access WebSphere MQ resources with enhancements made to the CICS MQ attachment facility to support the required new commands.

This JMS support is not available in a Liberty profile JVM server; it is currently limited to Java programs that run in an OSGi JVM server. Support for the WebSphere MQ classes for JMS is provided in WebSphere MQ for z/OS V7.1 and V8. However, there are several requirements:

• V7.1 requires MQ APAR PI29770 (built on fix pack 7.1.0.6) or later
• V8.0 requires base APAR PI28482 and fix pack 8.0.0.2 or later
• CICS TS V5.2 is also supported and requires APAR PI32151.

Operational Efficiency

The improvements in this theme concern performance optimizations, better metrics and additional security. These optimizations are largely in the area of web services. Since its introduction in CICS TS V3.1 in 2005, web services support has become one of the most popular methods of interacting with CICS applications, and in this new release there are a number of significant optimizations.

The pipeline processing of HTTP requests has been improved, removing the need for an intermediate web attach task (CWXN transaction) in the majority of use cases. This will reduce the CPU and memory overhead for most types of SOAP and JSON based HTTP CICS web services. This optimization may also be used for inbound HTTPS requests, where SSL support is provided by the Application Transparent Transport Layer Security (AT-TLS) feature of IBM Communications Server. CICS TCPIPSERVICE resources may be configured as AT-TLS aware to obtain security information from AT-TLS. HTTPS implementations using CICS SSL support still use the CWXN transaction but should also see performance improvements due to reduced TCB switches in these scenarios.

CICS TS V5.3 sees performance improvements in many areas that help to reduce CPU overhead. These include exploitation of a number of new hardware instructions introduced in the IBM z9, cache alignment of some key CICS control blocks, the use of prefetch, reduced lock contention within monitoring algorithms, improvements to the MRO session management algorithms and tuning of internal procedures. Improvements in efficiency have noticeable gains in the CICS trace facility, the CICS monitoring facility and for MRO connections with high session counts.

CICS transaction tracking identifies relationships between tasks in an application as they flow across CICS systems and can be visualized in the CICS Explorer. In this release, transaction tracking has been extended to transactions started by the CICS-WebSphere MQ bridge, expanding the scope of transactions that can use this facility to help with reporting, auditing and problem determination.
A number of new metrics have been added to the global CICS statistics for transaction CPU time measurements, and these are captured without the need for CICS monitoring to be active. This allows greater insight into the CPU resource usage of the regions without the overhead of processing SMF 110 records.

Additional security features include new support for the Enhanced Password Algorithm, implemented in RACF APAR OA43999, allowing stronger encryption of passwords. There is enhanced support for Kerberos providing an EXEC CICS SIGNON TOKEN command, avoiding the need to flow a password. Applications can now validate a Kerberos security token (as determined by an external security manager) and associate a new user ID with the current terminal.

A new EXEC CICS REQUEST PASSTICKET API can be used for outbound requests from the current task, where basic authentication is required, which avoids the need to flow passwords. RACF or similar external security manager builds the PassTicket. There are further opportunities to offload authentication requests to open TCBs, which reduces contention on the resource owning TCB.

Cloud With DevOps

The underlying capability in the area of cloud and DevOps enables adoption of a continuous development integration model for automated CICS deployments. The value comes from four main areas:
• Automated builds
• Scripted deployments
• UrbanCode Deploy support
• Enhanced cloud enablement.

Bundles and cloud applications are a convenient way to package and manage components, resources and dependencies in CICS.

In this release, a command line interface for automating the building of CICS projects created in the CICS Explorer is introduced in the shape of a CICS build toolkit. CICS cloud applications and bundles, as well as OSGi components, can now be automatically built from source code.
Scripts call the CICS build toolkit to create a build, which automatically runs when application updates are made. A build script would typically check out the latest application version from source control, along with its dependencies. It then calls the CICS build toolkit to build the projects from the application.

In a final step, the script copies the built projects to an appropriate location, such as an artifact repository or a staging area on zFS. The CICS build toolkit is supported on z/OS, Linux and Microsoft Windows, and supports CICS TS V4.1 and later (see Figure 1).

A built CICS project that resides in zFS may be programmatically deployed across CICS systems by using a set of scripting commands new to this release and can simplify and automate application deployments.

DFHDPLOY is a new batch utility to support the automated provisioning of CICS bundles, OSGi bundles within CICS bundles and CICS applications by using the following simple commands:
• SET APPLICATION
• SET BUNDLE
• DEPLOY APPLICATION
• UNDEPLOY APPLICATION
• DEPLOY BUNDLE
• UNDEPLOY BUNDLE.
DFHDPLOY can be used in a script to deploy them across CICS systems and set them to a desired state, such as “enabled” or “available.” It can also be used to undeploy and remove them when they are no longer required (see Figure 2).
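As a rough sketch (not taken from the announcement), a DFHDPLOY job might look like the following. The load library, CICSplex name, bundle name, bundle directory, scope and the exact keyword spellings are illustrative assumptions to be checked against the CICS TS V5.3 documentation, and in practice additional SET commands (for example, to identify the CMCI connection) may be required:
//DEPLOY   EXEC PGM=DFHDPLOY,REGION=0M
//* Placeholder library name; point STEPLIB at your CICS TS V5.3 libraries
//STEPLIB  DD  DISP=SHR,DSN=CICSTS53.CICS.SDFHLOAD
//SYSTSPRT DD  SYSOUT=*
//SYSIN    DD  *
SET CICSPLEX(PLEX1);
DEPLOY BUNDLE(PAYROLL)
       BUNDLEDIR(/u/cicsdev/bundles/payroll_1.0.0)
       SCOPE(TESTGRP1)
       STATE(ENABLED);
/*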

IBM UrbanCode Deploy orchestrates and automates the deployment of applications, middleware configurations and database changes. CICS has made available a plugin for UrbanCode Deploy that supports the deployment of CICS applications as part of these orchestrations (see Figure 3).
Multiple deployment steps can be coordinated in a single action with UrbanCode Deploy. Similar applications and environments such as development systems or more tightly controlled test and production environments can reuse these deployment processes.

The UrbanCode plugin provides functions for installing and removing resources, performing NEWCOPY and PHASEIN for programs and performing a pipeline scan. Batch utilities like DFHDPLOY can be reused using the z/OS utility plugin (see Figures 4 and 5). The plugin is available for CICS TS V4.1 or later and may be downloaded from the UrbanCode Deploy plugin website at https://developer.ibm.com/urbancode/plugin/cics-ts.

In addition to these deployment capabilities, there are a number of improvements made to the core CICS cloud function to increase the value to CICS customers. Threshold policies were introduced in CICS TS V5 and in this recent release they have been enhanced by providing the ability to supply a threshold policy for the number of IBM MQ requests, DL/I requests, named counter requests and shared temporary storage requests issued by a CICS task. This brings the number of thresholds against which an action can be triggered to 14.

Program and URI entry points have been available before but this release sees the addition of transaction entry points. This provides the ability to scope policies to be specific to a particular transaction ID. Recovery of the application infrastructure is enhanced, so that the available or unavailable state of an application is automatically recovered across CICS restarts.

CICS TS version 5 has seen many new innovations: It was the first version to have a family release (five CICS Tools were announced on the same day as the run-time); we have seen faster development cycles between major releases; we have seen rich feature packs available for download; we have seen more requirements delivered than ever before (by the time release 3 hits the streets, more than 300 customer requirements will have been satisfied); and we have seen a move toward design thinking in the way this latest release was developed. Advances in the hardware provide potential benefits to CICS customers such as:
• Exploitation of simultaneous multithreading (SMT) by CICS Java applications
• Greater data compression and reduced data transfer time for CICS SMF data
• Cryptographic acceleration for CICS transactions using SSL or TLS
• Planned availability of large memory on z/OS (for storage above the bar or JVM heap storage).

All these things, together with advances in the hardware CICS runs on, the mainframe, provide supreme additional value for your CICS environment and a compelling reason to upgrade. If you just want to try out some of the exciting new capability, you can always download the CICS TS Developer Trial as many times as you wish.