Tuesday, December 29, 2015

IBM MobileFirst Platform

The IBM MobileFirst Platform provides an integrated development and testing environment for mobile applications to access data in DB2 for z/OS. 

The development environment is simplified by combining multiple sets of tools, frameworks, and codebases into a single development environment, with one codebase to develop and maintain.

In other words, you can build your Android, iPad, and iPhone applications, and more, from one codebase.









The IBM MobileFirst Server provides a set of adapters for server-side development.

The results from the adapters are converted to JSON format, which can be easily consumed by the mobile application.
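
To make that concrete, here is a minimal TypeScript sketch of a client consuming such a JSON result. The adapter name, endpoint path, and result fields (isSuccessful, resultSet, and the column names) are illustrative assumptions, not the actual MobileFirst adapter API.

    // Hypothetical shape of a JSON result returned by a MobileFirst adapter
    // that queries DB2 for z/OS; the field and column names are assumptions.
    interface AccountRow {
      ACCOUNT_ID: string;
      BALANCE: number;
      CURRENCY: string;
    }

    interface AdapterResult {
      isSuccessful: boolean;
      resultSet: AccountRow[];
    }

    // Minimal sketch of a client call that consumes the JSON payload.
    // The URL is a placeholder; a real app would use the MobileFirst
    // client API and go through the server's security checks.
    async function loadAccounts(serverUrl: string): Promise<AccountRow[]> {
      const response = await fetch(`${serverUrl}/adapters/AccountsAdapter/getAccounts`);
      if (!response.ok) {
        throw new Error(`Adapter call failed: HTTP ${response.status}`);
      }
      const result = (await response.json()) as AdapterResult;
      return result.isSuccessful ? result.resultSet : [];
    }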

To build a DB2 for z/OS mobile application, you need IBM MobileFirst and DB2 Connect. 

Optionally, you can install the Android development environment, which allows you to write native Android applications and/or run a mobile application in an Android emulator.

Sunday, December 27, 2015

Four-year cost of moving 1 TB of data daily into the Consolidated Solution


Quantifying Data Movement Costs

A typical IBM mainframe customer moves multiple terabytes of OLTP data from z Systems to distributed servers every day.
A recent IBM analysis found that this activity, often called extract, transform, and load (ETL), can consume 16% to 18% of a customer's total MIPS. For some, the figure approaches 30%.


The table below shows the results of the data movement study, which focused on two large banking customers, one in Europe and one in Asia, each of which routinely moved its OLTP data off-platform for analysis.

Customer     Distributed core consumption   Total MIPS consumption
EU Bank      28%                            16%
Asian Bank   8%                             18%
Therefore, moving data to a separate analytical platform clearly consumes a lot of resources. But what does this mean in terms of dollars and cents?

To quantify the cost of this intensive ETL activity, IBM conducted a separate, laboratory-based study that resembled the way the example banks moved their data from their z Systems environment onto an x86 server (in this case, a pre-integrated competitor V4 eighth unit, single database node). A four-year amortization schedule was used to spread out the cost of the system (hardware, software, maintenance, and support), along with network, storage, and labor expenses.







The result was a unit cost per GB, or per ETL job, to move data off the z Systems platform. These metrics were used to compute the cost of moving 1 TB of data each day using a simple z Systems software stack, including the IBM z/OS® operating system, IBM DB2® for z/OS, and various DB2 tools. The data would be moved from an IBM z13 to an operational data store (ODS) and then on to three data marts.


As shown in the figure, the study projected total data movement costs of more than $10 million over the four-year period. The study assumed four cores on the z13 running at 85% utilization and 12 cores on each of the x86 servers running at 45% utilization. In this scenario, ETL activity burned 519 MIPS and used 10 x86 cores per day.
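
As a rough illustration of the arithmetic behind such a projection, the short TypeScript sketch below amortizes the cost of moving 1 TB per day over four years. Every rate in it is a placeholder assumption chosen only to show the shape of the calculation; none of them are the figures from the IBM study.

    // Rough four-year cost model for moving 1 TB of data off-platform daily.
    // All rates below are placeholder assumptions, not figures from the study.
    const GB_PER_DAY = 1024;           // 1 TB moved every day
    const DAYS_PER_YEAR = 365;
    const YEARS = 4;                   // amortization period used in the study

    const costPerGbMoved = 1.5;        // assumed $/GB for the MIPS, network, and storage consumed by extract/load
    const annualFixedCost = 750_000;   // assumed $/year for target hardware, software, maintenance, and labor

    const variableCost = costPerGbMoved * GB_PER_DAY * DAYS_PER_YEAR * YEARS;
    const fixedCost = annualFixedCost * YEARS;
    const totalCost = variableCost + fixedCost;

    console.log(`Four-year data movement cost: $${totalCost.toLocaleString()}`);
    // Even with these made-up rates the total lands in the millions;
    // the study's measured unit costs pushed it past $10 million.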


The primary focus of this ETL study was the cost of extracting and loading data, not transforming it. So the true cost to the banks, or any company, would be substantially higher than is shown here if you added the expense of data transformation using tools such as IBM DataStage®, Ab Initio, Informatica, or others.

Thursday, December 24, 2015

Moving Data Costs Millions to Banks/FIs

The IBM z Systems platform is the business world’s preferred system of record, responsible for more than 70% of the operational data generated by banks, retailers, and other large
enterprises around the globe.

And each day, much of that data is moved to separate analytics environments to gain new
business insights.

Moving data costs money.

The additional MIPS that are used by operational systems contribute to software, hardware, storage, and labor costs that can total millions of dollars.

Companies mostly overlook these costs and continue to analyze their operational data, such as online transaction processing (OLTP) records, by copying it to distributed servers. One
oft-quoted anecdote states that for every OLTP database, there are seven or more copies being maintained in other locations for analytics or other purposes.


No one questions the value of analyzing OLTP data. You cannot overstate the benefit of finding hidden trends in sales reports, insurance claims, and so on. But at what cost?

Conventional thinking has been that off-platform analytics is the best option, and that data movement expenses are insignificant.

But conventional thinking is wrong.

The IBM zEnterprise® Analytics System 9700 solution lets you analyze data as rapidly as ever while dramatically reducing the expense of moving it.


Tuesday, December 22, 2015

How mainframe applications have changed in the past 50 years

















1. Green screen: A limited user community with a limited and very controlled UI.

2. Client/server: The user community remains limited; however, the UI is increasingly rich.

3. Web/desktop: A wider user community (anyone with a browser); the UI is controlled by the web page (the first z Systems web pages were similar in layout to green screens).

4. SOA: A much wider user community, with service-oriented architectures reusing existing services and exposing them through decoupled integration methods. The UI is controlled by a service requester application or, in the case of Web 2.0, by the user.

5. Mobile: The user community is anyone with a mobile device, with a rich, flexible, integrated UI, often reusing services developed for SOA.

Monday, December 21, 2015

What is z/OS Connect?


Typical application architecture for a mobile system with z/OS Connect

The following is a typical application architecture for a mobile system-of-engagement app that uses Bluemix as the Mobile-Backend-as-a-Service (MbaaS).

The NodeJS runtime orchestrates calls to various APIs to serve the mobile UI.

API Management is used to expose the REST APIs surfaced by z/OS Connect as Custom APIs in the Bluemix Catalog, where they are consumed by the NodeJS runtime.

The Secure Gateway service secures the passage from Bluemix to the corporate data center.
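
A minimal TypeScript sketch of that orchestration tier is shown below. It assumes a Node.js/Express service running in Bluemix that calls a REST API surfaced by z/OS Connect through API Management; the endpoint URL, the API key header, and the response shape are placeholder assumptions, not the actual APIs.

    // Minimal sketch of the NodeJS orchestration tier described above.
    // The endpoint, header, and response shape are illustrative assumptions;
    // the real call would flow through API Management and the Secure Gateway
    // before reaching z/OS Connect in the corporate data center.
    import express from "express";

    const app = express();

    // Placeholder URL for a REST API surfaced by z/OS Connect via API Management.
    const CUSTOMER_API = "https://api.example.com/zosconnect/customers";

    app.get("/mobile/customer/:id", async (req, res) => {
      try {
        // z/OS Connect returns JSON, so no data conversion is needed in this tier.
        const backend = await fetch(`${CUSTOMER_API}/${req.params.id}`, {
          headers: { "X-IBM-Client-Id": process.env.API_CLIENT_ID ?? "" }, // assumed API key header
        });
        const customer = await backend.json();

        // Shape the payload for the mobile UI: this is the orchestration point.
        res.json({ id: req.params.id, profile: customer });
      } catch (err) {
        res.status(502).json({ error: "Backend call failed" });
      }
    });

    app.listen(3000, () => console.log("Mobile back end listening on port 3000"));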



z Systems mobile integration solutions

1. Provide a REST JSON interface to your mainframe with z/OS Connect














2. Extend the reach of business services with REST and SOAP APIs with IBM API Management














3. Create a mobile security gateway with IBM DataPower Gateway















4. Build a strategic enterprise service bus with IBM Integration Bus











5.  Integrate with an adapter framework using IBM MobileFirst Server



Wednesday, December 16, 2015

Mainframe - Definition

Mainframe
A state-of-the-art computer for mission-critical tasks. Today, mainframe refers to a class of ultra-reliable servers from IBM that are designed for enterprise-class and carrier-class operations.
What’s the difference
HP, Unisys, Sun, and others make machines that compete with IBM mainframes in many industries but are mostly referred to as servers. In addition, non-IBM mainframe datacenters have hundreds or thousands of servers, whereas IBM mainframe datacenters have only a few machines.
There is a difference
One might wonder why mainframes cost hundreds of thousands of dollars when the raw gigahertz (GHz) rating of their CPUs may be only twice that of a PC costing 1,000 times less. Read on to learn why.
Lots of processors, memory, and channels
Mainframes support symmetric multiprocessing (SMP) with several dozen central processors (CPU chips) in one system. They are highly scalable. CPUs can be added to a system, and systems can be added in clusters. Built with multiple ports into high-speed caches and main memory, a mainframe can address thousands of gigabytes of RAM. They connect to high-speed disk subsystems that can hold petabytes of data.
Enormous throughput
A mainframe provides exceptional throughput by offloading its input/output processing to a peripheral channel, which is a computer itself. Mainframes can support hundreds of channels, and additional processors may act as I/O traffic cops that handle exceptions (channel busy, channel failure, etc.).
All these subsystems handle the transaction overhead, freeing the CPU to do real “data processing” such as computing balances in customer records and subtracting amounts from inventories, the purpose of the computer in the first place.
Super reliable
Mainframe operating systems are generally rock solid because a lot of circuitry is designed to detect and correct errors. Every subsystem may be continuously monitored for potential failure, in some cases even triggering a list of parts to be replaced at the next scheduled maintenance. As a result, mainframes are incredibly reliable, with a mean time between failures (MTBF) of up to 20 years!
Here to stay
Once upon a time, mainframes meant “complicated” and required the most programming and operations expertise. Today, networks of desktop clients and servers are just as complex, if not more so. Large enterprises have their hands full supporting thousands of PCs along with Windows, Unix and Linux and possibly some Macs for good measure.
With trillions of dollars' worth of IBM mainframe applications in place, mainframes will be around for quite a while. Some even predict they are the wave of the future!