"It's a good idea even in a static environment
to look at what is being backed up, as well
as how and where it's stored on an annual
basis at least."
-Harry Batten, executive IT architect, IBM
Batten says. "The backups we are talking about here are for easy recovery of day-to-day operations. Compliance data is self-explanatory: the government tells us that we need to retain this information for x number of years. Not all remaining data sets need to be backed up; however, there are almost always system data sets that should be.
"A question we ask is, 'What
would be the result of losing or
corrupting this data set?' If we
aren't sure whether it should be
backed up, it's fairly easy to put
it in a management class that will
migrate it, then delete it after x
days of non-usage."
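For illustration, the migrate-then-delete behavior Batten describes maps onto standard DFSMS management class attributes. A hypothetical class might look like this in ISMF (the class name and day counts are invented for this sketch):

    MANAGEMENT CLASS: MCTEMP (hypothetical)
      PRIMARY DAYS NON-USAGE  . . . . 15        migrate after 15 idle days
      LEVEL 1 DAYS DATE/DAYS  . . . . 30        then move to migration level 2
      COMMAND OR AUTO MIGRATE . . . . BOTH
      EXPIRE AFTER DAYS NON-USAGE . . 90        delete after 90 idle days
      EXPIRE AFTER DATE/DAYS  . . . . NOLIMIT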
Clients must consider whether their data is static (i.e., the data rarely changes) or dynamic (i.e., records within the data are regularly added, modified or deleted). The type of data dictates the backup methodology that should be used.
"For more static data," Batten
says, "we could use a generation
data group, while dynamic data
would benefit from an incremental method."
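As a sketch of the first approach, a generation data group is defined once with IDCAMS, and each backup run then catalogs a new generation (the data set name and limit here are hypothetical):

    //DEFGDG   EXEC PGM=IDCAMS
    //SYSPRINT DD SYSOUT=*
    //SYSIN    DD *
      DEFINE GDG (NAME(PROD.STATIC.BACKUP) -
                  LIMIT(7)                 -
                  SCRATCH NOEMPTY)
    /*
    //* Each backup job then writes the next generation, e.g.:
    //* DSN=PROD.STATIC.BACKUP(+1),DISP=(NEW,CATLG,DELETE)

Once more than seven generations exist, the oldest is scratched automatically, which caps how much storage the static backups consume.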
While the actual exercise of
backing up data may be relatively
simple, Batten notes that understanding when backups are no longer current, or even usable, is a more challenging, but imperative, task. Data that was backed up
years ago and is no longer pertinent can be taking
up valuable storage space on backup disk or tape.
Identifying this data and understanding its lifecycle
is a step toward being able to automate the backup
and recovery process.
"Once the lifecycle is understood, it's a good idea
to use automated methods, such as data facility
storage management subsystems with properly
defined management classes and automatic class
selection routines," Batten says. "The currently
available tooling for products like hierarchical storage management and tape management will allow
the user to produce reports that can show when a
particular data set was backed up and when it was
last accessed. Even a simple listing of a backup
volume can show this information."
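As an example of such a listing, the DFSMShsm user command below, run here in a batch TSO step, reports the backup versions recorded in the backup control data set, including the date each was made (the data set name is hypothetical):

    //HSMLIST  EXEC PGM=IKJEFT01
    //SYSTSPRT DD SYSOUT=*
    //SYSTSIN  DD *
      HLIST DATASETNAME('PROD.STATIC.MASTER') BCDS
    /*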
A successful disaster recovery plan identifies
and accounts for the data that's required for system
recovery. Batten says many z Systems clients use
synchronous or asynchronous data-replication
methods to ensure all current data is present at
a remote site in the event of a disaster. In other
words, clients must know not only which data is
necessary for system recovery, but also where it's
stored. For example, a client may have data migrated to tape that is needed to restore to a point in time. In this case, the client's recovery plan should note that the tape must also be replicated and available at the remote site.
GDPS* is designed to guarantee data consistency
for z Systems data and automate the entire recovery
process. GDPS utilizes IBM remote copy solutions
such as Metro Mirror, Global Mirror and z/OS* Global
Mirror. The automation achieved with GDPS is based
upon Tivoli* System Automation (SA) for z/OS, which
is the only automation product designed to exploit
the Parallel Sysplex* environment. Tivoli SA provides
the functionality and ability to manage the whole
sysplex from a central point.
Data should be backed up at
a time that will cause the least
amount of disruption to users.
Batten says modern technologies
have allowed for more dynamic
data backup; certain databases
or data set types can be backed
up while remaining open. In
other cases, the backup software
could lock the data and cause
delays in application programs.
"For data sets that are updated
perhaps via a batch job, backup
should be triggered by a scheduler after satisfactory completion
of the job," he says.
Batten notes clients should
consider revisiting their backup
and recovery plans periodically, but especially after the
adoption of new technologies
(e.g., encryption) or when new
data-compliance regulations are
put into effect.
"It's a good idea even in a
static environment to look at
what is being backed up, as well
as how and where it's stored
on an annual basis at least,"
Batten says. "It's not uncommon
for applications to be retired or
rewritten and the old data never
removed from storage."
He also advises clients
to develop robust naming
conventions for their data sets.
This ensures each backup file has a unique identifier. Publishing a list of existing management classes and their triggers likewise allows clients to quickly determine the details of a backup. When a new application is introduced, this list lets clients determine whether the application fits into an existing class or whether a new one should be created.
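As an illustrative sketch, an automatic class selection routine can key off those naming conventions to route each data set to the right management class (the filter patterns and class names are invented for this example):

    PROC MGMTCLAS
      FILTLIST TEMPWORK INCLUDE(**.TEMP.**,**.WORK.**)
      SELECT
        WHEN (&DSN = &TEMPWORK)        /* scratch data: migrate,  */
          SET &MGMTCLAS = 'MCTEMP'     /* then expire when unused */
        WHEN (&HLQ = 'PROD')
          SET &MGMTCLAS = 'MCPROD'     /* standard backup cycle   */
        OTHERWISE
          SET &MGMTCLAS = 'MCSTD'
      END
    END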
According to Batten, z Systems
data audits have uncovered