Tuesday, December 29, 2015

IBM MobileFirst Platform

The IBM MobileFirst Platform provides an integrated development and testing environment for mobile applications to access data in DB2 for z/OS. 

The development environment is simplified by combining multiple sets of tools, frameworks, and codebases into one single development environment, with one codebase to develop and maintain.

In other words, you can build your Android, iPad, and iPhone applications, and more, using one codebase.









The IBM MobileFirst Server provides a set of adapters for server-side development.

The results from the adapters are converted to JSON format, which can be easily consumed in the mobile application.

To build a DB2 for z/OS mobile application, you need IBM MobileFirst and DB2 Connect. 

Optionally, you can install an Android development environment, which allows you to write native Android applications and/or run a mobile application in an Android emulator.

Sunday, December 27, 2015

Four-year cost of moving 1 TB of data daily into the Consolidated Solution


Quantifying Data Movement Costs

A typical IBM mainframe customer moves multiple terabytes of OLTP data from z Systems to distributed servers every day.
A recent IBM analysis found that this activity, often called extract, transform, and load (ETL), can consume 16% to 18% of a customer's total MIPS. For some, the figure approaches 30%.


The table below shows the results of the data movement study, which focused on two large banking customers, one in Europe and one in Asia, each of which routinely moved its OLTP data off-platform for analysis.

            Distributed Core Consumption   Total MIPS Consumption
EU Bank     28%                            16%
Asian Bank  8%                             18%
Therefore, moving data to a separate analytical platform clearly consumes a lot of resources. But what does this mean in terms of dollars and cents?

To quantify the cost of this intensive ETL activity, IBM conducted a separate, laboratory-based study that resembled the way the example banks moved their data from their z Systems environment onto an x86 server (in this case, a pre-integrated competitor V4 eighth unit single database node). A four-year amortization schedule was used to spread out the cost of the system (hardware, software, maintenance, and support), along with network, storage, and labor expenses.







The result was a unit cost per GB, and per ETL job, to move data off the z Systems platform. These metrics were used to compute the cost of moving 1 TB of data each day using a simple z Systems software stack, including the IBM z/OS® operating system, IBM DB2® for z/OS, and various DB2 tools. The data would be moved from an IBM z13 to an operational data store (ODS) and then on to three data marts.


As shown in the figure, the study projected total data movement costs of more than $10 million over the four-year period. The study assumed four cores on the z13 running at 85% utilization and 12 cores on each of the x86 servers running at 45% utilization. In this scenario, ETL activity burned 519 MIPS and used 10 x86 cores per day.
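The arithmetic behind such a projection is straightforward: a per-GB unit cost multiplied by the volume moved over the amortization window. The sketch below illustrates the method only; the unit cost is a hypothetical figure chosen for illustration, not a number published in the study.

```python
# Back-of-the-envelope sketch of the study's method. The per-GB unit
# cost is hypothetical (standing in for amortized hardware, software,
# network, storage, and labor costs), not a figure from the IBM study.
GB_PER_DAY = 1024          # 1 TB moved off-platform daily
DAYS = 4 * 365 + 1         # four-year amortization window (one leap year)
UNIT_COST_PER_GB = 6.70    # hypothetical unit cost in dollars

total_cost = GB_PER_DAY * DAYS * UNIT_COST_PER_GB
print(f"Projected four-year data movement cost: ${total_cost:,.0f}")
```

Even a modest single-digit cost per GB lands in the "more than $10 million over four years" range at 1 TB per day.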


The primary focus of this ETL study was the cost of extracting and loading data, not transforming it. So the true cost to the banks, or any company, would be substantially higher than is shown here if you added the expense of data transformation using tools such as IBM DataStage®, Ab Initio, Informatica, or others.

Thursday, December 24, 2015

Moving Data Costs Millions for Banks/FIs

The IBM z Systems platform is the business world’s preferred system of record, responsible for more than 70% of the operational data generated by banks, retailers, and other large
enterprises around the globe.

And each day, much of that data is moved to separate analytics environments to gain new
business insights.

Moving data costs money.

The additional MIPS that are used by operational systems contribute to software, hardware, storage, and labor costs that can total millions of dollars.

Companies mostly overlook these costs and continue to analyze their operational data, such as online transaction processing (OLTP) records, by copying it to distributed servers. One oft-quoted anecdote states that for every OLTP database, there are seven or more copies being maintained in other locations for analytics or other purposes.


No one questions the value of analyzing OLTP data. You cannot overstate the benefit of finding hidden trends in sales reports, insurance claims, and so on. But at what cost?

Conventional thinking has been that off-platform analytics is the best option, and that data movement expenses are insignificant.

But conventional thinking is wrong.

The IBM zEnterprise® Analytics System 9700 solution lets you analyze data as rapidly as ever while dramatically reducing the expense of moving it.


Tuesday, December 22, 2015

How mainframe applications have changed in the past 50 years

















1. Green screen – Limited user community with a limited and very controlled UI.

2. Client/server – The user community remains limited; however, the UI is increasingly richer.

3. Web/desktop – Wider user community (anyone with a browser); the UI is controlled by the web page (the first z Systems web pages were similar in layout to green screens).

4. SOA – Much wider user community, with service-oriented architectures reusing existing services and exposing them through decoupled integration methods. The UI is controlled by a service requester application or, in the case of Web 2.0, by the user.

5. Mobile – The user community is anyone with a mobile device; a rich, flexible, integrated UI, often reusing services developed for SOA.

Monday, December 21, 2015

What is z/OS Connect?


Typical application architecture for a mobile system with z/OS Connect

The following is a typical application architecture for a mobile system-of-engagement app that uses Bluemix as the Mobile-Backend-as-a-Service (MBaaS).

The Node.js runtime orchestrates calls to various APIs to serve the mobile UI.

API Management is used to expose the REST APIs surfaced by z/OS Connect as custom APIs in the Bluemix catalog, where they are consumed by the Node.js runtime.

The Secure Gateway service secures the passage from Bluemix to the corporate data center.
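Because z/OS Connect surfaces backend transactions as plain REST/JSON, the consuming side needs nothing mainframe-specific. The sketch below parses the kind of payload such an API might return; the field names and values are invented for illustration.

```python
import json

# Hypothetical JSON payload, shaped like what a z/OS Connect REST API
# might return for a CICS/IMS-backed inquiry service. All field names
# and values here are invented for illustration.
payload = '''
{
  "accountNumber": "100234",
  "balance": 2517.42,
  "currency": "USD"
}
'''

record = json.loads(payload)
print(f"Account {record['accountNumber']} balance: "
      f"{record['balance']:.2f} {record['currency']}")
```

The Node.js runtime in the architecture above would do essentially the same thing: call the API through API Management and consume the JSON directly.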



z System mobile integration solutions

1. Provide a REST JSON interface to your mainframe with z/OS Connect














2. Extend the reach of business services with REST and SOAP APIs with IBM API Management














3. Create a mobile security gateway with IBM DataPower Gateway















4. Build a strategic enterprise service bus with IBM Integration Bus











5.  Integrate with an adapter framework using IBM MobileFirst Server



Wednesday, December 16, 2015

Mainframe - Definition

Mainframe
A state-of-the-art computer for mission-critical tasks. Today, "mainframe" refers to a class of ultra-reliable servers from IBM that are designed for enterprise-class and carrier-class operations.
What’s the difference
HP, Unisys, Sun, and others make machines that compete with IBM mainframes in many industries, but these are mostly referred to as servers. In addition, non-IBM mainframe datacenters have hundreds or thousands of servers, whereas IBM mainframe datacenters have only a few machines.
There is a difference
One might wonder why mainframes cost hundreds of thousands of dollars when the raw gigahertz (GHz) rating of their CPUs may be only twice that of a PC costing 1,000 times less. Read on to learn why.
Lots of processors, memory, and channels
Mainframes support symmetric multiprocessing (SMP) with several dozen central processors (CPU chips) in one system. They are highly scalable. CPUs can be added to a system, and systems can be added in clusters. Built with multiple ports into high-speed caches and main memory, a mainframe can address thousands of gigabytes of RAM. They connect to high-speed disk subsystems that can hold petabytes of data.
Enormous throughput
A mainframe provides exceptional throughput by offloading its input/output processing to a peripheral channel, which is a computer itself. Mainframes can support hundreds of channels, and additional processors may act as I/O traffic cops that handle exceptions (channel busy, channel failure, etc.).
All these subsystems handle the transaction overhead, freeing the CPU to do real “data processing” such as computing balances in customer records and subtracting amounts from inventories, the purpose of the computer in the first place.
Super reliable
Mainframe operating systems are generally rock solid because a lot of circuitry is designed to detect and correct errors. Every subsystem may be continuously monitored for potential failure, in some cases even triggering a list of parts to be replaced at the next scheduled maintenance. As a result, mainframes are incredibly reliable with mean time between failure (MTBF) up to 20 years!
Here to stay
Once upon a time, mainframes meant “complicated” and required the most programming and operations expertise. Today, networks of desktop clients and servers are just as complex, if not more so. Large enterprises have their hands full supporting thousands of PCs along with Windows, Unix and Linux and possibly some Macs for good measure.
With trillions of dollars worth of IBM mainframe applications in place, mainframes will be around for quite a while. Some even predict they are the wave of the future!

Wednesday, November 18, 2015

Job Entry Subsystem (JES)

The Job Entry Subsystem is pretty important to z/OS's ability to run batch workloads.

Description

– Component of z/OS that provides the necessary functions to get jobs into, and out of, the system.

– JES2 and JES3 are available.

– Manages jobs before and after running the program; the base control program manages them during processing.


The Job Entry Subsystem does just what the name implies – manages the entry (and exit) 
of jobs into z/OS.  When a batch job (or other kinds of work, actually) needs to run on z/OS, 
JES is responsible 

    - For taking that work in, 
    - Preparing for the allocation of resources, and 
    - Then passing the job into z/OS to be dispatched and run.  

JES manages the output generated by the running programs – it SPOOLs the output to the 
JES SPOOL (pretty much like a print spool on your PC) and temporarily stores it until it can 
be printed or stored or deleted.

"JES2" and "JES3": IBM actually supplies two different JESes.
They perform the same basic system functions, but with somewhat different interfaces. Both were developed in the field in the early days of the operating system. Some customers prefer one, some the other. JES2 is more widely deployed, although there is a smaller, yet enthusiastic, JES3 user community as well.

JCL is the “language” of batch jobs.  A programmer or user creates a JCL file (or as some old-timers say, a “deck”) that describes how a program executes.  The JCL identifies the program, its input and output files, and parameters that influence how the program runs.  
Multiple programs can be strung together in a “job stream” where data is passed to and from the programs in the stream.  

JCL files consist of a series of records, or “cards” that begin with two forward slashes.  
So if you ever see a file that has a bunch of lines that start with //, you’ll know it’s a JCL file that controls the running of a batch job stream.  
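That rule of thumb can be sketched as a tiny classifier. The categories and sample lines below are illustrative only, not a full JCL parser.

```python
def classify(line: str) -> str:
    """Rough classification of a line in a JCL member (illustrative only)."""
    if line.startswith('//*'):
        return 'comment'
    if line.startswith('//'):
        return 'JCL statement'
    if line.startswith('/*'):
        return 'delimiter'
    return 'in-stream data'

sample = [
    '//TSOJOB  JOB CLASS=A,NOTIFY=&SYSUID,MSGCLASS=H',
    '//        EXEC PGM=IKJEFT01',
    '//SYSTSIN  DD *',
    "SEND 'Hello, World' U(ORIPOME)",
    '/*',
]
for line in sample:
    print(f'{classify(line):15} | {line}')
```

Note the ordering: the comment test (`//*`) must come before the statement test (`//`), since every comment also starts with two slashes.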

z/OS Parallel Sysplex


It is designed for application availability of 99.999%.

Monday, November 16, 2015

DB2 for z/OS vs. other clustered DBMSs


Processing Unicode data in COBOL applications

COBOL supports UTF-16 data, but it has no support for UTF-8 data.

About this task

DB2® for z/OS®, however, supports both UTF-8 and UTF-16 data.

Procedure

To process Unicode data in COBOL applications for DB2 for z/OS, perform the following recommended actions:
  • Use one of the national data types for Unicode data. For example, use the COBOL PIC N(n) USAGE NATIONAL data type for Unicode character data. These data types are UTF-16 and enable COBOL to support Unicode data.
    Although COBOL does not have a native UTF-8 data type, you can still use a COBOL application to retrieve UTF-8 data from DB2. DB2 converts the output to the format that is required by the application. For example, if you query the DB2 catalog, DB2 converts the data for the COBOL application from UTF-8 to either UTF-16 (for PIC N USAGE NATIONAL variables) or EBCDIC (for PIC X variables). However, you should not store unconverted UTF-8 data in a COBOL variable. For example, if you have UTF-8 data in a PIC X variable, COBOL thinks that the data is EBCDIC and the data could get corrupted. Even something as simple as moving this UTF-8 value from one variable to another variable could corrupt the data, because COBOL pads the variable with X'40' for EBCDIC instead of X'20' for UTF-8.
  • Store your data in DB2 in UTF-16. This format often requires more space than UTF-8. However, you gain CPU savings in processing because DB2 and COBOL are both using UTF-16 data, and no conversions are needed.
  • Use the DB2 coprocessor to prepare your application.
  • Specify the appropriate CCSID for your COBOL application source and data according to the instructions in Specifying a CCSID for your application.
    Recommendation: Use the ENCODING bind option to specify the CCSID of the data. This option typically yields the best performance. However, depending on the situation, you might consider the other options for Specifying a CCSID for your application.
  • Do not specify ENCODING UNICODE as a bind option if your program uses PIC X variables and specifies the COBOL compiler option NOSQLCCSID. If you do specify ENCODING UNICODE in this situation, DB2 interprets these character variables as UTF-8, but COBOL does not support UTF-8. Thus, DB2 might misinterpret the data.
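The padding hazard described above is easy to demonstrate off-mainframe. In the sketch below, Python's cp037 codec stands in for the EBCDIC side (the space character is X'40' in all the common EBCDIC code pages), and the final line shows how raw UTF-8 bytes turn to garbage when treated as EBCDIC.

```python
# COBOL pads character variables with the EBCDIC space X'40', while
# UTF-8 text uses the space X'20'. Python's cp037 codec stands in for
# the EBCDIC side of this demonstration.
ebcdic_space = ' '.encode('cp037')
utf8_space = ' '.encode('utf-8')
print(ebcdic_space.hex())   # 40
print(utf8_space.hex())     # 20

# Treating unconverted UTF-8 bytes as EBCDIC garbles the data:
garbled = 'é'.encode('utf-8').decode('cp037')
print(garbled)              # 'Cz'
```

This is the corruption the text warns about: even a simple move between variables mixes X'40' padding into what the application believes is UTF-8 text.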

http://www-01.ibm.com/support/knowledgecenter/api/content/SSEPEK_11.0.0/com.ibm.db2z11.doc.char/src/tpc/db2z_processunidatacobol.html


64 Bit Address Space


With the release of zSeries mainframes in 2000, IBM extended the addressability
of the architecture to 64 bits. 

With 64-bit addressing, the potential size of a z/OS address
space expands to a size so vast that we need new terms to describe it.

Each address space,called a 64-bit address space, is 16 exabytes (EB) in size; an exabyte is slightly more than one billion gigabytes. 

The new address space logically has 2^64 addresses. It is 8 billion times the size of the former 2 GB address space. The number is 16 with 18 zeros after it:
16,000,000,000,000,000,000 bytes, or 16 EB (see the slide).

We say that the potential size is 16 exabytes because z/OS, by default, continues to create
address spaces with a size of 2 GB. The address space exceeds this limit only if a
program running in it allocates virtual storage above the 2 GB address. If so, the z/OS
operating system increases the storage available to the user from 2 GB to 16 EB.
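The sizes quoted above are easy to verify with integer arithmetic:

```python
GB = 2 ** 30
EB = 2 ** 60

old_space = 2 * GB        # the former 2 GB address space (2**31 bytes)
new_space = 2 ** 64       # 64-bit addressing

print(new_space // EB)         # 16 -> a 16 EB address space
print(new_space // old_space)  # 8589934592 -> roughly 8 billion times larger
```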

The 16 MB address became the dividing point between the two previous architectures (the 24-bit addressability introduced with MVS/370 and the 31-bit addressing introduced in the operating system MVS Extended Architecture or MVS/XA), and is commonly called the line.  The area that separates the virtual storage area below the 2 GB address from the user private area is called the bar.

Z/OS components

The parts on the bottom of the diagram are the z/OS operating system components, and the parts in the top gray box are related to the application work that runs ON z/OS.  Note the key types of application workloads listed there:

Transaction managers – CICS, IMS TM, and WebSphere Application Server fall into this category. z/OS is a key platform for high-volume, high-value transaction management.
Database – DB2 is the key database these days, but IMS DB and several third-party DBMSs such as IDMS and ADABAS also fall into the database management category.

Batch – batch is one of the key differentiators on z/OS.  Contrary to popular belief, batch is NOT dead.  It remains a very important application model and will continue to do so as long as IT systems exist.  z/OS is well-suited for running batch because of the OS support via job scheduling, the Job Entry Subsystem, and the ability for System z to process very large volumes of data simultaneously.

End user access – Users of z/OS applications can be what we think of as “green screen” users of the old 3270 technology.  But nowadays, it is rare for customers to build new systems using 3270.  Most new z/OS applications use web interfaces or SOA services interfaces, just as non z systems do.



And all of these application and database middleware and systems are architected to interlock with System z security software.

Thursday, November 5, 2015

Why USS Supports only EBCDIC 1047?

So why is C designed around EBCDIC 1047? Because z/OS UNIX System Services (USS) is also designed around it.

When IBM created USS for z/OS, it had to work in EBCDIC: the POSIX standard for UNIX doesn't require the use of ASCII, and z/OS is an EBCDIC operating system. IBM really didn't have a choice.

The problem is that UNIX, and its core programming language C, rely on characters that don't exist in some EBCDIC codepages.

EBCDIC 1047 is designed to include all the characters USS needs - effectively all the characters from Extended ASCII: ISO8859-1.

So EBCDIC 1047 is the default EBCDIC codepage used in USS.

 All parameter and help files are usually supplied in EBCDIC 1047, the C compiler expects code in EBCDIC 1047, and all UNIX file contents default to EBCDIC 1047.

Wednesday, November 4, 2015

Code Pages : C & REXX are exceptional

How C and REXX Mess Things Up

You might assume that code pages only change characters that aren't needed when programming or administering the mainframe. That holds for traditional z/OS features, and for programming languages such as COBOL and PL/I, but it isn't always true.

But there are a couple of exceptions, and you can safely bet that more will follow.

Take REXX for example.

A common thing to do in REXX is concatenate two strings. This is done using the two vertical bars ('||'). For example, our sample REXX to get z/OS information has the line:

Say 'z/OS version:' zos_ver || '.' || zos_rel

The bad news is that the vertical bar character '|' isn't one of those 'standard' EBCDIC characters. The byte that displays as a vertical bar in EBCDIC 0037 looks like an exclamation mark (!) in Sweden's EBCDIC 0278. The REXX interpreter doesn't care what the character looks like, as long as it has a code of 79. So if you're in Sweden and using EBCDIC 0278, the line above becomes:

Say 'z/OS version:' zos_ver !! '.' !! zos_rel
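You can reproduce the effect off-mainframe with Python's EBCDIC codecs. Python ships no codec for Sweden's EBCDIC 0278, so cp273 (the German/Austrian EBCDIC page, which makes the same substitution at code point 79) stands in here:

```python
# The REXX interpreter sees the code point, not the glyph.
# cp037 is US EBCDIC; cp273 (Germany/Austria) stands in for Sweden's
# EBCDIC 0278, which Python does not ship, and shows the same effect.
bar = '|'.encode('cp037')
print(bar[0])               # 79 (X'4F')
print(bar.decode('cp273'))  # '!'
```

Same byte, code 79 either way; only the glyph on the screen differs.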

C is another problem child. It uses funky characters like the square brackets '[]', curly brackets '{}' and broken vertical bar '¦'. These move around (or disappear) depending on your code page. But with C there's another catch: it's designed to use EBCDIC 1047, not EBCDIC 0037. So if you're using arrays in C, the line:

char cvtstuff[140];

is fine if you're using IBM1047. For IBM0037, it becomes:

char cvtstuffÝ140´

If you're using EBCDIC 0050, another common EBCDIC code page, it becomes:

char cvtstuffÝ140"

Tuesday, November 3, 2015

DB2 TIPS


Select unknown values only




For performance reasons it is better to select as few columns as possible; the optimizer may then choose techniques like index-only access. Besides, it makes the query easier to read and maintain.
The following query selects five columns of table FOTB100. The first two columns are also used in the WHERE clause with "=" predicates, so the values of these columns are no surprise, and there is no need to select them. Leave them out of the column list.
 SELECT
        REK_NR
      , PE_TX_REF
      , ST_TS
      , ST
      , PRI
   INTO
        :REK-NR    OF FINTXSTH
      , :PE-TX-REF OF FINTXSTH
      , :ST-TS     OF FINTXSTH
      , :ST        OF FINTXSTH
      , :PRI       OF FINTXSTH
   FROM FOTB100
  WHERE REK_NR    = :REK-NR    OF FINTXST
    AND PE_TX_REF = :PE-TX-REF OF FINTXST



EBCDIC - Few details...

From a very early age, most of us are taught about ASCII, and how this is used by computers to convert single byte numbers to the characters we see on our screen. So an 'a' is really 97 as far as the computer is concerned. So imagine my surprise when I found out that mainframes don't use ASCII, but EBCDIC.


I remember my reaction: "You've got to be kidding! Didn't EBCDIC die out years ago?"

Nope. And it's not just z/OS that uses it.
       IBM i,
       Fujitsu BSD2000/OSD,
       Unisys MCP,
       z/VSE,
       z/VM and
       z/TPF all happily continue to use EBCDIC today.

To them an 'a' is really 129, not 97.
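Python ships EBCDIC codecs, so this is easy to check for yourself:

```python
# The same letter has different code points in ASCII and EBCDIC.
# Python's cp037 codec implements EBCDIC 0037.
print(ord('a'))                      # 97  (ASCII/Unicode)
print('a'.encode('cp037')[0])        # 129 (EBCDIC 0037)
print(bytes([177]).decode('cp037'))  # £   (byte 177 in EBCDIC 0037)
```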

This all worked out fine for many years. In fact EBCDIC was the most popular encoding system in the world until the Personal Computer revolution brought ASCII to the limelight.

But EBCDIC falls down when we need to display languages other than English. Words like "på" (Swedish) and "brève" (French) need special characters not necessarily available in the standard EBCDIC table. 

What's worse, there's no way that all these special characters for all the languages in the world are going to fit into the 256 values that an eight-bit number allows. To get around this, IBM created code pages.

EBCDIC Code Pages

Today there's no such thing as a single EBCDIC code table. You can find a few websites that claim to convert from ASCII to EBCDIC. But the chances are that they're really converting from ASCII to EBCDIC code page 37, or EBCDIC 0037.

EBCDIC 0037 is the default code page used by the United States and other English speaking countries when working with MVS: the traditional side of z/OS. It has all the normal a-z, A-Z, 0-9 characters, and other symbols like +, () and *. It also includes a few of the foreign characters for when we've borrowed foreign words like "resumé".

However, if you're in France, the chances are that you'll be using EBCDIC 0297. In EBCDIC 0297, the standard a-z, A-Z, and 0-9 characters are the same as in EBCDIC 0037, but other code points map to other characters so that French words display correctly.

For example, the byte 177 is:
       - a pound sign (£) in EBCDIC 0037, and
       - a cross-hash (#) in EBCDIC 0297.

There are many different code pages for all the different regions. From Spain and Iceland to Thailand and Japan.

Compare with ASCII

This is not a lot different from ASCII, which has gone from the original 7-bit ASCII code to ISO8859 with its different sub-definitions.

For example
  - ISO8859-1 is the standard 'Extended ASCII' that we all love,
  - ISO8859-2 is better for Eastern Europe, and
  - ISO8859-4 for countries like Latvia, Lithuania, Estonia and Greenland.

IBM controls these EBCDIC code pages, and assigns an ID to them called the Coded Character Set Identifier (CCSID).
 - The CCSID for EBCDIC 0037 is, you guessed it, 37.
 - IBM has also set CCSIDs for other character sets - CCSID 1208 is Unicode (UTF-8).

Friday, October 30, 2015

SPUFI TIPS

SQL Debugging
Finding a bad character in SQL with SPUFI

You know the situation: you have forty lines of SQL that you are testing in SPUFI, you get ILLEGAL SYMBOL ',', and you can't find it. Well, you can: those funny numbers after the message give you a pointer.

Here is an example:


SELECT                                                                  
'A',                                                                    
'BBB' ,                                                                 
,'C'                                                                   
FROM                                                                    
SYSIBM.SYSDUMMY1                                                        
;                                                                       
---------+---------+---------+---------+---------+---------+---------+--
---------+---------+---------+---------+---------+---------+---------+--
---------+---------+---------+---------+---------+---------+---------+-
DSNT408I SQLCODE = -104, ERROR:  ILLEGAL SYMBOL ",". SOME SYMBOLS THAT MIGH
         LEGAL ARE: ( + - ? : CASE CAST USER <FLOAT> CURRENT <DECIMAL>  
         <INTEGER>                                                      
DSNT418I SQLSTATE   = 42601 SQLSTATE RETURN CODE                        
DSNT415I SQLERRP    = DSNHPARS SQL PROCEDURE DETECTING ERROR            
DSNT416I SQLERRD    = 0  0  0  -1  217  0 SQL DIAGNOSTIC INFORMATION    
DSNT416I SQLERRD    = X'00000000'  X'00000000'  X'00000000'  X'FFFFFFFF'
         X'000000D9'  X'00000000' SQL DIAGNOSTIC INFORMATION      

Look at the first SQLERRD line. The fifth number shows the character position within your SQL at which the error was detected. SPUFI lines are limited to 72 characters, so position 217 means line 4, position 1 (217 / 72 = 3, remainder 1).
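That arithmetic generalizes to any SQLERRD position; a small sketch:

```python
def spufi_position(pos: int, width: int = 72):
    """Translate a 1-based SQLERRD character position into a
    1-based (line, column) pair, assuming 72-character SPUFI lines."""
    line, col = divmod(pos - 1, width)
    return line + 1, col + 1

print(spufi_position(217))  # (4, 1) -- the example from the error above
```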

JCL Tips

Cleaning up sequence numbers in JCL
It is well known that JCL statements occupy columns 1-72, but SYSIN cards occupy columns 1-80.


Therefore, when NUM is ON while editing JCL that has SYSIN cards in it, it is good practice to do UNNUM followed by NUM OFF. This ensures that sequence numbers do not remain in columns 73-80 of the SYSIN cards, so the results are predictable.
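What UNNUM does to each card can be sketched simply: blank out columns 73-80 of the 80-byte record. A minimal illustration:

```python
def unnum(card: str) -> str:
    """Sketch of UNNUM's effect on one card: blank out the
    sequence-number field in columns 73-80 of an 80-byte record."""
    card = card.ljust(80)
    return card[:72] + ' ' * 8

# A SYSIN card with a sequence number left in columns 73-80:
card = '//SYSIN   DD *'.ljust(72) + '00010000'
print(repr(unnum(card)[72:]))   # '        ' -- sequence number gone
```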

Tuesday, October 27, 2015

JCL - II

JCL Syntax

Now that you've run the JCL and seen that it works, let's look at the syntax.

First, you'll notice that most lines start with two slashes. The two slashes mark a line as part of the JCL. Lines that do not contain those slashes, such as the last two lines in this job, are usually embedded input files.
//TSOJOB  JOB CLASS=A,NOTIFY=&SYSUID,MSGCLASS=H
- This line is the job header. It defines a job called TSOJOB

- The CLASS parameter specifies the job's priority, the maximum amount of resources the job is allowed to consume, and so on. CLASS=A is a good default in most installations, at least for short jobs like the ones we'll use here.

- The NOTIFY parameter specifies that a user should be notified when the job ends. It could be the name of a user to notify, but here it is &SYSUID, which is a macro that expands to the user who submits the job.

- The MSGCLASS parameter specifies that the output of the job needs to be held. This makes it accessible afterward, as you will see in Section 1.3.5, "Viewing the Job Output."
//        EXEC PGM=IKJEFT01

This line starts an execution step, that is, a step in the batch job that runs a program. Steps can be named using an identifier immediately after the two slashes. However, this is a very simple job, so there is no need to identify this step.

The program that this step executes is IKJEFT01, which is the TSO interpreter.
//SYSTSPRT DD SYSOUT=*
This line is a data definition. It defines the data stream called SYSTSPRT, which is the output of TSO. SYSOUT=* means that this data stream will go to the standard output of the job. In the next section, you will learn how to get to this output to read it.
//SYSTSIN  DD *
This line is another data definition. It defines SYSTSIN, which is the input to the TSO interpreter. The value * means that the text that follows is the data to be placed in SYSTSIN.
SEND 'Hello, World' U(ORIPOME)
/*
This is the input to the TSO interpreter. The first line is a command, the same "Hello, World" command we executed in Section 1.2.3, " 'Hello, World' from TSO." The second line, /*, is a delimiter that means the end of the file.

JCL

Entering commands from TSO is one way to accomplish tasks in z/OS, but many other ways exist. 

-   One of the most popular and powerful ways is to create files that contain lists of things to do. These lists are called batch jobs and are written in z/OS Job Control Language (JCL).

-  It fulfills roughly the same role as shell scripting languages in UNIX.

Introduction to JCL

JCL is a language with its own unique vocabulary and syntax.

You use JCL to create batch jobs. A batch job is a request that z/OS will execute later.

z/OS will choose when to execute the job, and how many z/OS resources the job can have, based upon the policies that the system administrator has set up.

z/OS can manage multiple diverse workloads (jobs) based upon the service level that the installation wants.

For example, online financial applications will be given higher priority and, therefore, more z/OS resources, and noncritical work will be given a lower priority and, therefore, fewer z/OS resources.


In your batch job, you will tell z/OS this information:
  • You'll give the name of your job, with a //JOB statement
  • You'll specify the program you want to execute, with a //EXEC PGM=<program name> statement
  • If your program uses or creates any data, you'll point to the data using a //DD statement

Example Job:

This job executes an IBM-provided z/OS program called IEFBR14. This is a dummy program that tells z/OS "I'm done and all is well." It requires no input and produces no output other than an indication to the operating system that it completed successfully.

//ARUNAMS JOB CLASS=A,NOTIFY=&SYSUID,MSGCLASS=H
//         EXEC PGM=IEFBR14


You can also run TSO as a batch job by using JCL to tell z/OS this information:
  • The name of the job
  • The program to run, which is the TSO interpreter IKJEFT01
  • Where to get the input for IKJEFT01 and the commands that you want to execute
  • Where to put the output from IKJEFT01, the output from TSO, and the commands that you execute
//TSOJOB  JOB CLASS=A,NOTIFY=&SYSUID,MSGCLASS=H
//        EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
SEND 'Hello, World' U(ORIPOME)
/*

Friday, October 23, 2015

Few Shell Commands

skulker [-irw] [-l logfile] directory days_old 


Removes files in a directory that are older than the specified number of days.

A shell script supplied in /samples:

- copy it to /bin/skulker or /usr/sbin/skulker
- it can be modified by the installation
- protect it from hackers! (make it non-writable)

e.g. skulker /tmp/ 100

- deletes files in /tmp older than 100 days
- the trailing slash follows a /tmp symlink to another directory
- use cron to schedule it to run regularly
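The core of what skulker does can be sketched in a few lines of Python. This illustrates only the "delete files older than N days" behavior, not the actual /samples script, which also handles logging, recursion, and interactive prompts via its flags.

```python
import os
import time

def skulker(directory: str, days_old: int) -> None:
    """Sketch of skulker's core behavior: remove plain files in
    `directory` whose modification time is older than `days_old` days."""
    cutoff = time.time() - days_old * 86400
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
```

Like the real script, this deletes without confirmation, which is exactly why the original warns you to protect it.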

fuser [-cfku] file ...

- Lists the process IDs of processes with open files
- Useful for finding the current users of a file, or of a filesystem (e.g. before unmount)
- e.g. fuser -cu /usr/lpp/dfs shows who is using the containing filesystem

uptime

- Displays how long the system has been IPLed
- e.g. uptime
  01:02PM up 14 day(s), 01:15, 58 users, load average: 0.00, 0.00, 0.00

Finding open files in UNIX

rbd2/u/drbd:>fuser -u $LOGS/evtlog.2016-04-12
/plex/var/trb/rbd2/dps1/logs/evtlog.2016-04-12: 33620442(DRBD)
rbd2/u/drbd:>ps -p 33620442
       PID TTY       TIME CMD

  33620442 ?         1:09 /plex/var/trb/rbd2/dps1/lib/sislib/sievtq.exe

z/OS UNIX Environment


1. Programs that use USS are normally written in REXX or C.
2. These languages map onto USS assembler callable services (syscalls) via the POSIX API.
3. The assembler services can also be called directly.

Basic Terminology in z/OS UNIX Services

USS – z/OS UNIX System Services

Process – a program that uses UNIX system services

Thread – a unit (task) of work in a process

Dub – establish a z/OS UNIX environment for an address space, i.e. create a process

Fork/Spawn – methods by which a parent process creates a child process

Syscall / System Call / Callable Service – a request by an active process for a service to be performed by z/OS UNIX System Services

Zombie – an address space with OMVS resources remaining after a dubbed process terminates
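The fork/wait relationship above is the standard POSIX parent/child model, and it can be sketched with ordinary POSIX calls (the child's return code of 7 is an arbitrary illustrative value):

```python
import os

# Minimal POSIX parent/child sketch: fork() creates the child process,
# waitpid() collects its exit status -- the same model that z/OS UNIX
# exposes. The child's return code (7) is arbitrary.
pid = os.fork()
if pid == 0:
    os._exit(7)            # child: terminate immediately with rc 7
_, status = os.waitpid(pid, 0)
child_rc = (status >> 8) & 0xFF
print(f"child {pid} exited with rc {child_rc}")
```

If the parent never collects the status, the child lingers as a zombie, which is exactly the term defined above.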


Facts and Advantages of z/OS UNIX

Fast facts about z/OS UNIX

  • It is a certified UNIX system and an integral element of z/OS. 
  • WebSphere® Application Server, CICS®, IMS™ , Java™  Runtime, Tuxedo, DB2®, WebSphere MQ, SAP R/3, Lotus Domino, and Oracle Web Server all use z/OS UNIX.
  • z/OS UNIX applications can communicate with DB2®, CICS, IMS, and WebSphere MQ. WebSphere Application Server on z/OS using the Optimized Local Adapters support (WOLA) provides direct communication between z/OS UNIX applications and Websphere Application Server applications on z/OS.
  • z/OS UNIX is built for the enterprise where you can prioritize workloads for high performance when running with a mixed workload.
  • There is a broad range of ISV applications ported to z/OS UNIX.
  • z/OS UNIX has a hierarchical file system familiar to UNIX users.
  • Applications can work with data in both  the z/OS UNIX file systems and traditional MVS™ data sets.  MVS programs can access UNIX files, and UNIX programs can access MVS data sets. 
  • The SMB File and Print Server enables a distributed file sharing infrastructure for z/OS UNIX files and Windows® workstations.
  • Users can choose which interface they want to use: the standard shell, 3270, or the ISPF interfaces.
Advantages of z/OS UNIX
  • z/OS applications can take advantage of all the enterprise services
  • Extensive support in the marketplace
  • Enterprise-class applications and middleware
  • Access to advanced security features of RACF®
  • Access to the two-phase commit protocol provided by Resource Recovery Services (RRS)
  • Workload Manager to manage the allocation of physical resources to maximize system responsiveness.
  • High performance Database Access

z/OS UNIX and traditional UNIX


z/OS UNIX system:
  • Manage and secure system resources from a single point.
  • Consistently runs near 100% utilization.
  • WLM gives UNIX applications extra resources to manage tasks.

Traditional UNIX system:
  • Manage and secure each system's resources individually.
  • A dedicated UNIX system usually runs at about 50-70% utilization.
  • Without any job management, traditional UNIX systems can become disabled.