Wednesday, January 27, 2016

COBOL Future

Recently I attended a seminar on COBOL given by Captain COBOL.

Just wanted to record a few things from that session... :)

COBOL will be replaced by …

• Fortran: 1960s
• PL/I: 1970s
• PASCAL: 1980s
• C: 1990
• C++: 1995
• Java: 1998
• C#: 2001
• What's next?

COBOL is constantly evolving

• Java interoperability
• Unicode support
– (UTF-16, more and more UTF-8)
• XML processing for Web Services
• JSON support on the way

COBOL can be Java!

• You can write Java classes in COBOL
• Invoke Java methods from COBOL
• Inherit from Java classes in COBOL
COBOL can be C!

• Write C functions with return values in COBOL
• Invoke C functions from COBOL

COBOL can be multithreaded

COBOL is the Language of the Future!

• 54 million new lines of COBOL written every year
• The base is growing, not shrinking
• 1 million COBOL programmers


Sunday, January 3, 2016

DB2 10 Conversion Mode - Reduce resource consumption on DB2 10 for z/OS

To utilize the DB2 10 improvements, packages must be rebound to create the new 64-bit thread structures.
Large z/OS Page Frames
Among the infrastructure changes in DB2 10 is the way that PGFIX(YES) buffer pools are managed on a System z10* or zEnterprise* 196 (z196) processor. The System z10 processor with z/OS* 1.10 introduced 1 MB page frames. As the amount of memory increases, large page frames help z/OS improve CPU performance. IBM laboratory testing has measured CPU improvements of a few percent for specific transaction workloads. To exploit large frames, define the buffer pools with PGFIX(YES) and specify the LFAREA z/OS system parameter. LFAREA is the amount of real storage that z/OS uses for large frames.
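As a rough sketch of what "define the buffer pools with PGFIX(YES) and specify LFAREA" involves (the buffer pool name and the 1 GB size are illustrative assumptions, not values from the article):

```
/* DB2 side: page-fix the pool in real storage so z/OS can back it  */
/* with 1 MB frames; takes effect the next time the pool is         */
/* allocated                                                        */
-ALTER BUFFERPOOL(BP1) PGFIX(YES)

/* z/OS side, in SYS1.PARMLIB member IEASYSxx: reserve real storage */
/* for large page frames (requires an IPL to change)                */
LFAREA=1G
```

Both halves are needed: PGFIX(YES) alone keeps the pool fixed in 4 KB frames unless LFAREA has set aside real storage for 1 MB frames.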

RELEASE(DEALLOCATE)

The RELEASE(DEALLOCATE) option has been part of the DB2 BIND/REBIND command for a long time, but DB2 10 makes the function more useful. The dramatic Database Services Address Space (DBM1) virtual-storage constraint relief that DB2 10 achieves with rebind makes it possible to make more use of RELEASE(DEALLOCATE). This change saves up to 10 percent of CPU time for high-volume transactions with short-running SQL, without changing applications or Data Definition Language (DDL).
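A minimal example of the rebind (the collection and package names are hypothetical):

```
REBIND PACKAGE(MYCOLL.MYPKG) RELEASE(DEALLOCATE)
```

With RELEASE(DEALLOCATE), parent locks and package structures are held until the thread is deallocated rather than being freed and reacquired at every commit, which is where the CPU savings for short-running SQL comes from.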

Distributed Applications

For Distributed Data Facility (DDF) work, after rebinding packages with RELEASE(DEALLOCATE), you must issue the MODIFY DDF PKGREL(BINDOPT) command to allow DB2 to use RELEASE(DEALLOCATE) processing for packages bound with that option. DDF inactive-thread processing (CMTSTAT=INACTIVE) takes advantage of new high-performance database access threads (DBATs) to increase distributed DBAT thread reuse. The MODIFY DDF PKGREL(COMMIT) command switches back to RELEASE(COMMIT) behavior when, for example, you need to allow utilities to break in. With DB2 for Linux*, UNIX* and Windows* 9.7 Fix Pack 3, the Call Level Interface (CLI) and JDBC packages are bound with RELEASE(DEALLOCATE) by default. RELEASE(DEALLOCATE) assumes well-behaved applications that adhere to required locking protocols and issue frequent commits.
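The two DDF commands described above, side by side (issued from the DB2 console; the leading hyphen is the usual command-prefix convention):

```
-MODIFY DDF PKGREL(BINDOPT)  /* honor each package's RELEASE bind option   */
-MODIFY DDF PKGREL(COMMIT)   /* force RELEASE(COMMIT) behavior so that     */
                             /* utilities, DDL and BINDs can break in      */
```

A common pattern is to run with PKGREL(BINDOPT) during the online day and switch to PKGREL(COMMIT) ahead of a maintenance window.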
DB2 10 further improves overall Distributed Relational Database Architecture (DRDA) application performance for result sets from SELECT statements that contain the FETCH FIRST 1 ROW ONLY clause by combining the OPEN, FETCH and CLOSE requests into a single network request. DB2 10 also offers improved DDF performance through restructured distributed processing on the server, particularly the interaction between the DDF and DBM1 address spaces.
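For example, a hypothetical existence-check query of the kind that benefits from the combined OPEN/FETCH/CLOSE network flow (the table, column and host-variable names are illustrative):

```sql
SELECT 1
  FROM ORDERS
 WHERE CUST_ID = :custid
 FETCH FIRST 1 ROW ONLY
```

Because the server knows at most one row will flow back, the whole cursor lifecycle can be collapsed into one network round trip.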
Migrating to DB2 10 Conversion Mode, rebinding packages, exploiting large page frames and exploiting RELEASE(DEALLOCATE) can save up to 10 percent of the CPU for transaction workloads and up to 20 percent for native SQL-procedure applications. Much greater CPU savings are possible for queries that contain large numbers of index Stage 1 predicates, as well as IN-list predicates. Also, more Stage 2 predicates can be pushed down to Stage 1.

Capacity Improvements

Since most of the thread storage has been moved above the bar, DB2 10 can support more threads than DB2 9, thereby making it possible to reduce the number of DB2 data-sharing members, or at least hold the number of members constant while increasing transaction throughput.
If you were previously unable to increase the MAXKEEPD zparm value due to the DBM1 virtual-storage constraint in DB2 9, you may now be able to, because the local statement cache has moved above the bar. Increasing MAXKEEPD may reduce the number of prepares for SQL statements and provide additional CPU savings.

I/O Improvements

Other CPU improvements apply in more specific scenarios. Some of these are related to I/O improvements since I/Os are one of the significant consumers of CPU time. These include an improved, dynamic prefetch sequential-detection algorithm. List prefetch support for indexes helps minimize index I/Os when scanning a disorganized index. The number of log I/Os is also reduced and long-term page fixing of the log buffers saves CPU time as well.

zIIP and zAAP Exploitation

Buffer pool prefetch and deferred write Supervisor Request Blocks (SRBs) ordinarily aren’t big CPU consumers, but DB2 10 makes this specific processing eligible for System z* Integrated Information Processors (zIIPs); zIIP processing can reduce the number of billed MIPS a company uses. This CPU savings is more significant when using index compression.
As in DB2 9, DB2 10 can direct up to 80 percent of CPU-intensive parallel-query processing to run on an available zIIP. DB2 10 makes more queries eligible for query parallelism, which can result in more zIIP exploitation. In DB2 10, portions of the RUNSTATS utility are eligible to run on a zIIP.
XML schema validation and nonvalidation parsing of XML documents are eligible for zIIP or System z Application Assist Processing (zAAP). If XML parsing is done under DDF enclave threads, it’s eligible for zIIP. If the XML parsing is done under a batch utility, it’s eligible for zAAP.

Query Performance

Range-list index scan is a new type of access path DB2 10 uses to significantly improve the performance of certain scrolling-type applications where the returned result set is only part of the complete result set. The alternative in DB2 9 was to use multi-index access (index ORing), which isn’t as efficient as single-index access. Prior to DB2 10, list prefetch couldn’t be used for matching IN-list access. In DB2 10, list prefetch can be used for IN-list table access, ACCESSTYPE=’IN’.
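The classic scrolling pattern that range-list access targets looks like the following; the table, the index on (LASTNAME, FIRSTNAME) and the host variables are assumptions for illustration:

```sql
SELECT LASTNAME, FIRSTNAME
  FROM CUST
 WHERE LASTNAME > :last
    OR (LASTNAME = :last AND FIRSTNAME > :first)
 ORDER BY LASTNAME, FIRSTNAME
```

In DB2 9 the OR typically forced multi-index access; DB2 10 can satisfy both OR branches with a single range-list scan of the one index, preserving the sort order for free.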
The process of putting rows from a view or nested table expression into a work file for additional query processing is called physical materialization, which is an overhead. Additionally, it limits the number of join sequences that can be considered and can limit the administrator’s capability to apply predicates early in the processing sequence. The join predicates on materialization work files are also not indexable. In general, avoiding materialization is desirable. In DB2 10, materialization can be avoided in additional areas, particularly for view and table expressions involved in outer joins.
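A sketch of the kind of outer-join table expression involved (all names are hypothetical):

```sql
SELECT O.ORDER_ID, C.NAME
  FROM ORDERS O
  LEFT OUTER JOIN (SELECT CUST_ID, NAME
                     FROM CUSTOMER
                    WHERE STATUS = 'A') AS C
    ON O.CUST_ID = C.CUST_ID
```

In DB2 9 the table expression on the null-supplying side of such a join was often materialized into a work file first; DB2 10 can merge it into the outer query, keeping the join predicate indexable.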
The processing of Stage 1 and nonindex matching predicates has also been enhanced. DB2 now processes non-Boolean predicates more efficiently when accessing an index and Stage 1 data-access predicates. You don’t need to rebind your static applications to take advantage of some of these optimization improvements. However, a rebind is required to take full advantage. More complex queries with many predicates show higher improvement. Queries that scan large amounts of data also show higher CPU savings.
DB2 10 also contains some SQL-sorting enhancements. DB2 10 introduces hash support for large sorts, which potentially reduces the number of merge passes needed to complete these sorts. Hashing can be used to identify duplicates on input or to sort, if functions such as DISTINCT or GROUP BY are specified. Some additional cases also exist where, when FETCH FIRST N ROWS is used, DB2 10 can avoid the sort process altogether.
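For instance, with a hypothetical top-N query like this one, DB2 10 can track just the ten "best" rows in memory instead of sorting the entire qualifying result set into a work file:

```sql
SELECT TX_DATE, AMOUNT
  FROM ACCT_HISTORY
 WHERE ACCT_ID = :acct
 ORDER BY TX_DATE DESC
 FETCH FIRST 10 ROWS ONLY
```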

Insert Performance

The performance of highly concurrent inserts is better in DB2 10, with tests showing a typical reduction of 5 to 40 percent in CPU time compared with DB2 9. The improvement is higher when processing sequential inserts, and the amount of improvement also depends on the table-space type, with Universal Table Spaces seeing higher improvements.
When a series of inserts is sequential with respect to the cluster key, but the key is less than the highest key in the index, a new page-selection algorithm helps minimize getpages, which can help reduce CPU cost.
DB2 10 contains some referential integrity-checking improvements on inserts that may result in reduced CPU utilization and I/O processing.

Utilities and Large Objects

Generally speaking, DB2 utility CPU performance in DB2 10 is equivalent to that of DB2 9, but DB2 9 already introduced performance improvements of up to 50 percent CPU savings compared to DB2 8 for various utilities processing.
DB2 10 also contains numerous large object (LOB) enhancements and one of them applies to Conversion Mode, namely LOB materialization avoidance. For LOBs, IBM has observed up to a 16-percent reduction in CPU consumption for DDF Inserts.

Add Up the Savings

All of the CPU savings described here apply to Conversion Mode. Some of the performance improvements are available upon installation of DB2 10, without additional changes, while others require very minimal changes, such as a change to installation parameters. Some require a System z10 processor and perhaps changing the buffer-pool parameters. Most require appropriate rebinds to fully realize the benefits. Additional CPU benefits can be obtained when migrating to New Function Mode, which will be outlined in a future article.

Interview with Father of DB2 - Don Haderle



IBM Fellow Don Haderle is the only person who can claim to have been both the “father” and “mother” of DB2*. 

In the 1980s, more than one person who had worked on the DB2 project had been called the “Father of DB2” in press announcements. So during a database show in San Francisco, when Haderle and several panelists were asked their titles, he responded: “ ‘The Mother of DB2, because the Father of DB2 title was already taken.’ So they called me the Mother of DB2 for quite some time,” he recalls. However, Janet Perna, the management executive lead for DB2 on open systems at the time, later dubbed him the “official” Father of DB2. “And when I retired, that was the title they gave me in a press article. So it was a name IBM gave me; I didn’t make it up.”

Haderle spoke about the development of the database management system (DBMS) and its lasting impact on the mainframe and beyond.

What spurred the development of DB2?
Don Haderle (DH): IBM derived most of its revenue and profit from mainframe hardware, including peripherals. DBMS customers used more storage and processing capacity than others, so IBM sought to drive greater DBMS adoption. However, IBM depended on ISVs to support the latest IBM hardware. These vendors often delayed doing so until the new hardware enjoyed a strong installation base. This slowed hardware sales. As a result, IBM’s storage division funded the development of an advanced DBMS and transaction systems in 1976.

What had come before DB2 and what made a new approach necessary?

DH: Early DBMSs, such as IBM’s IMS* and Cullinet’s Integrated Database Management System, supported bill of materials, material resource planning and other applications critical to business processes in manufacturing, finance, retail and other industries. These products featured hierarchical or network data models and provided both database and transaction management services.
However, database schema changes required rewriting application programs, and programmers had to understand the complex principles of concurrency and consistency—advanced thoughts at the time. As a result, application upgrades were often complicated and time-consuming; and it was difficult to share a database with distinctly different applications.

From a mainframe user’s perspective, what need were you seeking to address?

DH: Customers pressed for a solution to their application backlog, which was measured in terms of years in some cases. They wanted a database that could respond to rapid development for diverse applications doing transactions, batch and interactive access. They were willing to suffer some hit in cost performance over IMS and CICS* to respond to their business initiatives quickly. This was the design point for DB2—orders of magnitude improvement in the application development cycle for databases to perform transactions and business intelligence.

Where did your team begin?
DH: The relational prototype from IBM Research, called SystemR, offered a great base for starting. IBM Fellow Ted Codd published his famous paper in 1969 for the exposition of the relational database model. The IBM Research team, among others, put together a prototypical example of that model. They came up with the concrete specification, which was SQL, and they put together that prototype and wrote a paper in 1976 that exposed the language and the prototype, which was SystemR. That was a “Wow!” back in that era. The folks that developed SystemR worked right next door to us in San Jose, and so we went up to chitchat with them.

What is the origin of the DB2 name?
DH: IMS DL/I was IBM’s first database—hence, DB ONE. There was a contest—I was heads down on the technical stuff—and the marketing folks came up with DB2.

How did it revolutionize databases?
DH: The revolution was a single database and the relational model, for transactions and business intelligence.

At the time, did your team realize the impact DB2 would have?
DH: Did we think it would last for 30 years? We were hopeful, but first we needed to ensure the long-term survival of the mainframe and port DB2 to open systems platforms, like UNIX*, on IBM and non-IBM hardware. But that is a longer story.

Though it launched in 1983, you point to 1988 and DB2 V2 as a seminal point in its development. Why? 
DH: In 1988, DB2 V2 proved it was viable for online transaction processing [OLTP], which was the lifeblood of business computing. It had already proven its capability to perform analytical queries, but 1988 made it viable for the heart of business processing—OLTP. At that point, it could yield adequate performance metrics—transactions per second—though it was still more expensive in compute cost vis-à-vis IMS DL/I, but Moore’s Law would continue to narrow that gap.

What has been the lasting impact of DB2 at IBM? 
DH: DB2 was key to establishing the IBM software business and making IBM an overall solution provider—hardware, software, services. As I said earlier, IBM was a hardware company in the 1970s and the software was sponsored by the hardware businesses to support their platforms. We reported to our respective hardware executives.
DB2’s early success, coupled with IBM WebSphere* in the 1990s, led the transformation of the business and engendered the investments that made that happen. Thus the survival of DB2 is a product of the completeness and competitiveness of the IBM software portfolio, not just the excellence of DB2 itself.

Looking back, what’s been the most surprising use for DB2?
DH: When you say “surprising,” I tend to think of applications—the moon launch, open-heart surgery or some other gee-whiz application. The closest thing for DB2 was the database system for the Olympics for several games [the 1992 Summer Olympics in Barcelona, 1996 Summer Olympics in Atlanta and the 1998 Winter Olympics in Nagano] when there was extreme pressure on performance and any failure was visible to the world. What made that a pressure moment was that as soon as the game was played, the race was run and the scores were posted, it was immediately available to all the press around the world. If they didn’t have the information immediately after the event—and there were hundreds of thousands of queries done on this thing—then we had egg all over our faces.

Where do you see DB2 in 30 years?
DH: In the same position as today—a vital database infrastructure for core business processing within enterprises. While there’s another business need being addressed by NoSQL and NewSQL databases that will evolve, DB2 will serve the basic business transactions and business intelligence as it does today, evolving to respond to the big data business needs that demand aggregation of data within and outside the enterprise in surprising volumes with surprising velocity.