If you have read my previous two entries, you will know that I am experienced with the RDBMS offerings from IBM, Oracle, and Microsoft.
Today let's look at adjusting database parameters using the various vendor options. We will focus on log archiving for this exercise, since it is a key component of RDBMS backup and recovery. For instance, all vendors allow point-in-time (PIT) recovery to varying degrees, but if you don't have database log archiving in place you can usually only restore from your last full backup. The default configuration in most cases is to run in NOARCHIVELOG mode (Oracle terminology). Logging is also very important for the distributed D/R options: SQL Server 2005/2008 mirroring, Oracle Data Guard, and DB2 HADR.
SQL Server 2000, 2005, 2008:
In SQL Server 2000 you could use Query Analyzer or OSQL from the command line to enter database commands. In 2005 and beyond, Management Studio (MS) replaces both Query Analyzer and Enterprise Manager. SQL Server provides a set of system procedures to access information, though Microsoft is pushing us toward catalog and dynamic management views. I still like the procedures: their names point you to the function, their parameters are easy to find online, and I learned them first when starting out on SQL 2000.
From MS:
sp_helpdb TESTDB -- Look for the status, filename, and size/maxsize/growth information
go
alter database TESTDB set recovery full
go
sp_helpdb TESTDB
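If you want to try the catalog-view route Microsoft is steering us toward, the recovery model is also visible in sys.databases (2005 and later); a minimal equivalent to the sp_helpdb check:
select name, recovery_model_desc from sys.databases where name = 'TESTDB'
go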
All of the above can be done through the GUI in MS as well as by entering the actual commands. I am a firm believer that if you only know a GUI you will never have the old-school knowledge that makes the new stuff easier to grasp.
If the TESTDB database is in SIMPLE mode, the sp_helpdb command will show this in the STATUS field of the output. SIMPLE mode means that you can't issue a BACKUP LOG TESTDB TO LOGDEVICE, which copies the log and then truncates the inactive portion of the transaction log. If you allow the transaction log of a database in FULL mode to fill up, the database will stop. Shops will many times set up a maintenance plan to back up the transaction log every 15 minutes or so to a backup device or a file. IMHO, the SQL Server log handling is the weakest of the big 3 because of this: the others (DB2 and Oracle) switch automatically to the next available log when one fills, and on a restore you have to feed every BACKUP LOG output file to RESTORE with the NORECOVERY option, whereas with DB2 and Oracle you can do everything in one command (a sketch of the SQL Server sequence follows). You can then use the STOPAT operand of RESTORE LOG on the last log file restored to bring the DB online. Finally, if your LOGDEVICE is disk, SQL Server maintenance plans let you keep the area pruned based on various choices. That is a very nice feature.
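To make the restore sequence concrete, here is a minimal sketch of the round trip, assuming a TESTDB database in FULL mode, a hypothetical full-backup device FULLDEVICE, and two log backups sitting on LOGDEVICE (the device names and file numbers are illustrative):
backup log TESTDB to LOGDEVICE
go
-- Disaster strikes; restore the full backup first, leaving the DB in restoring state
restore database TESTDB from FULLDEVICE with norecovery
go
-- Feed in each log backup in sequence
restore log TESTDB from LOGDEVICE with file = 1, norecovery
go
-- On the last log, stop at a point in time and bring the DB online
restore log TESTDB from LOGDEVICE with file = 2, stopat = 'Dec 27, 2009 10:00 AM', recovery
go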
DB2 UDB on LUW and Z/OS:
In DB2 for LUW (Linux, UNIX, and Windows) the database architecture is very similar to SQL Server in that each database can have its own set of parameters.
In the case of DB2 on LUW you have a set of parameters for each database that can be accessed from the Control Center (GUI) or from the DB2 command window. I must be getting old as I love commands; the key, though, is to understand why you are entering them. I have worked with people who know all the commands but don't know when to enter them; if you have the concepts down, you can always find syntax on Google or in the online documentation each vendor provides. The below is from a Windows server, where you can easily navigate to the command window from START > PROGRAMS > IBM DB2 > DB2COPY > Command Line Tools > Command Window
db2 connect to TRAIN
db2 get db cfg for TRAIN
The below shows the key log-related parameters after you issue the get command. On LUW you can pipe the output to grep or FIND, depending on the platform, for further filtering (see the sketch after the listing). The output shows that LOGRETAIN is set to RECOVERY, which means log files are retained for roll-forward recovery. I am very impressed with DB2 UDB's log handling capabilities; it allows for overflow and mirror log paths and sizing options similar to the others. The parameters below are pretty self-explanatory and easily looked up in the online documentation.
Log file size (4KB) (LOGFILSIZ) = 10000
Number of primary log files (LOGPRIMARY) = 3
Number of secondary log files (LOGSECOND) = 9
Changed path to log files (NEWLOGPATH) =
Path to log files = E:\LOGS\TRAIN\
Overflow log path (OVERFLOWLOGPATH) =
Mirror log path (MIRRORLOGPATH) =
First active log file = S0000500.LOG
Block log on disk full (BLK_LOG_DSK_FUL) = NO
Block non logged operations (BLOCKNONLOGGED) = NO
Percent max primary log space by transaction (MAX_LOG) = 0
Num. of active log files for 1 active UOW(NUM_LOG_SPAN) = 0
Group commit count (MINCOMMIT) = 1
Percent log file reclaimed before soft chckpt (SOFTMAX) = 100
Log retain for recovery enabled (LOGRETAIN) = RECOVERY
User exit for logging enabled (USEREXIT) = OFF
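If you only want the log-related lines out of that long listing, pipe through the platform's filter as mentioned above; a quick sketch from a Windows command window (swap find for grep on Linux or UNIX; /i makes the match case-insensitive):
db2 get db cfg for TRAIN | find /i "log"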
Assuming log archiving is not turned on, you can then issue commands similar to the below for each database where you want log archiving enabled.
db2 connect to TRAIN
db2 get db cfg for TRAIN
db2 update db cfg using newlogpath e:\logs\TRAIN
db2 update db cfg using logretain recovery
db2stop
db2start
In many cases using the db2 update command does not require a restart for the changes to take effect; the log parameters shown here, however, only take effect once the database is deactivated, hence the db2stop/db2start above.
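To see which parameter changes are pending versus already in effect, DB2 can show current and delayed values side by side; a quick sketch (requires a connection to the database):
db2 connect to TRAIN
db2 get db cfg for TRAIN show detail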
If your log path is on disk, I recommend having a script in place that tells you when the directory is getting close to full so that you can proactively clean things up.
I wrote the below for such usage on Windows, and have similar code for Linux, UNIX, and Z/OS, to keep old files that accumulate cleaned up based on age.
' Filename: delagedfle.vbs
'
' Author: Me
'
' Date: May 20, 2004
'
' Last Revised: May 20, 2004
'
' This WSH script is run from the command prompt and is given a directory, a filename pattern, and an age in days.
'
' Usage: [CScript | WScript] delagedfle.vbs path name age [simulate]
' Example: cscript delagedfle.vbs temp filex 10 -- deletes files in \temp whose names contain filex and were last modified more than 10 days ago
Dim file
Dim Diff
Dim Diffc
Dim name
Dim namex
Dim FileSystemObj
Dim ObjFSO
Dim objFile
Dim num
Dim tage
Dim k
Dim args
Dim msg
Dim objWMIService
Dim colFiles
Dim Delcount
Dim Delete
Dim difx
set args = WScript.Arguments
num = args.Count
delcount = 0
diff = 0
tage = 0
Delete = "YES"
'wscript.echo num
' Procedure to report a parameter error:
Sub ParmError()
WScript.Echo "Usage: [CScript | WScript] delagedfle.vbs
WScript.Quit 1
End Sub
Select Case num
Case 0, 1, 2
Call ParmError
Case 3
Wscript.Echo "Proper parms selected"
Case 4
Wscript.Echo "Simulated delete"
Delete = "NO"
Case Else
Call ParmError
End Select
msg = "Process started: " & time
WScript.Echo msg
File = args.Item(0)
namex = args.Item(1)
tage = args.Item(2)
strComputer = "."
set objFSO = CreateObject("Scripting.FileSystemObject")
Set objWMIService = GetObject("winmgmts:" & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
Set colFiles = objWMIService.ExecQuery("Select * from CIM_DataFile where Path ='\\" & file & "\\'") ' ExecQuery("Select * from CIM_DataFile where FileName Like '%" & name & "%'")
WScript.Echo "Found " & colFiles.Count & " files"
For Each objFile in colFiles
If instr(objFile.name, namex) then
'DateInfo = DateInfo & Now & VbCrLf
Set objFile = objFSO.GetFile(objFile.Name)
diff = DateDiff("d", objFile.DateLastModified,Date)
difx = cint(diff)
tage = cint(tage)
'Wscript.Echo difx
'Wscript.Echo tage
If tage < difx Then
Wscript.Echo "Target age in days for del: " & tage & " Last modified " & difx & " days ago:"
delcount = delcount + 1
Wscript.Echo "File delete: " & objFile.Path
Wscript.Echo "Date last modified: " & objFile.DateLastModified
If Delete = "YES" Then
objFSO.DeleteFile(objFile.Path)
End If
End If
End If
Next
If delcount = 0 Then
Wscript.Echo "No Files deleted in this run"
Else
Wscript.Echo "Number of files deleted was: " & delcount
End If
msg = "Process completed: " & time
WScript.Echo msg
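A minimal sketch of wiring the script to the Windows task scheduler so the pruning happens nightly; the task name, script path, directory, pattern, and 30-day age below are all hypothetical:
schtasks /create /tn "PruneArchLogs" /tr "cscript //nologo C:\scripts\delagedfle.vbs logs S00 30" /sc daily /st 02:00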
DB2 for Z/OS:
I will make a passing comment about DB2 for Z/OS, which no one uses anymore (just kidding, mainframers; TSO/REXX is still my favorite language). Yeah right: almost every brick-and-mortar Fortune 1000 company will most likely have at least one DB2 Z/OS application on the floor. I don't work much on the system side of DB2 for Z/OS anymore; however, from earlier in my career, one of the things a DB2 SYSPROG did was prepare DSNZPARM (ZPARM). Remember from my bio that DB2 on the mainframe has more division of labor (a nice Frederick Taylor term from industrial management) than the other platforms: the DB2 SYSPROG and the DB2 DBA are usually two separate people. The ZPARM is where the key system parameters are kept; in later releases the -SET SYSPARM command was introduced to allow dynamic ZPARM changes. ZPARM is also where the DB2 SYSPROG sets up the logging parameters, which are similar to what we saw earlier in DB2 UDB, though there are several major nuances, such as the bootstrap data set (BSDS) on the mainframe, which you don't have on UDB. DB2 on Z/OS also did not allow full PIT recovery until DB2 V9 or DB2 X, I believe, so your options were TOLASTFULLCOPY and TORBA (your last quiesce point) for the RECOVER utility. Because of size and mainframe capabilities, many people archive the logs directly to tape instead of disk, which lets the tape management software keep up with the tape cycling. The log information is kept in the BSDS. The IBM mainframe is an awesome machine.
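For flavor, a dynamic ZPARM reload and a log status check from the DB2 console look something like the below; a sketch assuming the default "-" command prefix for your subsystem:
-SET SYSPARM RELOAD
-DISPLAY LOG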
Oracle for LUW:
Oracle logging works the same across all the LUW platforms. Oracle was originally written in assembler but was later ported to C, which made export files portable across all supported platforms. A little-known fact is that Oracle runs on Z/OS as well; I know of a company that does this.
Oracle, like DB2, allows a very robust set of options for managing the logs. Oracle logging is at the level of the entire database, which is slightly different from a database in SQL Server or DB2. In Oracle a SCHEMA relates pretty well to a database in SQL Server or DB2, though not exactly: a SQL Server or DB2 database can hold multiple schemas, but most people seem to use one database per application with a single schema in it, though that is changing.
The below shows how you can turn on archive logging in Oracle, which is disabled by default.
You have to connect as SYSDBA to do this in Oracle:
sqlplus "/ as sysdba" -- I kind of miss connect internal though it was not secure
SQL> SQL>
SQL>
SQL> archive log list;
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination /u01/app/oracle/admin/SID/arch
Oldest online log sequence 535
Current log sequence
SQL*Plus: Release 10.2.0.4.0 - Production on Sun Dec 27 10:34:29 2009
Copyright (c) 1982, 2007, Oracle. All Rights Reserved.
Enter password:
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> show parameter log
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
audit_syslog_level string
db_create_online_log_dest_1 string
db_create_online_log_dest_2 string
db_create_online_log_dest_3 string
db_create_online_log_dest_4 string
db_create_online_log_dest_5 string
log_archive_config string
log_archive_dest string
log_archive_dest_1 string
log_archive_dest_10 string
log_archive_dest_2 string
log_archive_dest_3 string
log_archive_dest_4 string
log_archive_dest_5 string
log_archive_dest_6 string
log_archive_dest_7 string
log_archive_dest_8 string
log_archive_dest_9 string
log_archive_dest_state_1 string enable
log_archive_dest_state_10 string enable
log_archive_dest_state_2 string enable
log_archive_dest_state_3 string enable
log_archive_dest_state_4 string enable
log_archive_dest_state_5 string enable
log_archive_dest_state_6 string enable
log_archive_dest_state_7 string enable
log_archive_dest_state_8 string enable
log_archive_dest_state_9 string enable
log_archive_duplex_dest string
log_archive_format string %t_%s_%r.arc
log_archive_local_first boolean TRUE
log_archive_max_processes integer 2
log_archive_min_succeed_dest integer 1
log_archive_start boolean FALSE
log_archive_trace integer 0
log_buffer integer 10215424
log_checkpoint_interval integer 0
log_checkpoint_timeout integer 1800
log_checkpoints_to_alert boolean FALSE
log_file_name_convert string
logmnr_max_persistent_sessions integer 1
remote_login_passwordfile string EXCLUSIVE
SQL>
SQL> show parameter dest_1
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_create_online_log_dest_1 string
log_archive_dest_1 string
log_archive_dest_10 string
SQL>
SQL>
SQL>
SQL>
SQL> list
1* alter system set log_archive_dest_1='LOCATION=/u01/app/oracle/admin/$ORACLE_SID/arch' scope=both
SQL> /
System altered.
SQL>
SQL> show parameter dest_1
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_create_online_log_dest_1 string
log_archive_dest_1 string LOCATION=/u01/app/oracle/admin
/$ORACLE_SID/arch
log_archive_dest_10 string
SQL>
SYS>shutdown immediate;
SYS>startup mount;
SYS>alter database archivelog;
SYS>alter database open;
SYS>archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u01/app/oracle/admin/SID/arch
Oldest online log sequence 543
Next log sequence to archive 545
Current log sequence 545
SYS>
In Oracle the initial log groups are created at database creation time via DBCA or a CREATE DATABASE script (which is what DBCA runs under the covers). You should have at least two members in each group, on separate disks if possible.
You can add log groups later on with an ALTER DATABASE ADD LOGFILE GROUP command. Each time a group fills, Oracle switches to the next group and the archiver copies the previous group to the archive destination. Therefore, if you don't have enough groups you can run into a hung instance if the logs switch before the archiver copies them over. Check your alert log and determine the frequency of log file switches; if the log is switching every couple of minutes and you are not doing a massive bulk load, you need to resize the logs, which you can do on the fly by creating larger groups and dropping the old ones (see the sketch below). Always make sure you back up your control file (similar to the DB2 BSDS on Z/OS) to trace. If you are not using RMAN and your archive destination is not tape, then you should employ a script that keeps the archive destination pruned and alerts you when, say, the mount point gets within 10% of being full. Many shops still use scripts to handle backups and alerts instead of GRID CONTROL and RMAN, though that is the way you should be headed, to take advantage of features such as BLOCK RECOVER.
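Putting the resize-on-the-fly and control file advice together, a minimal sketch; the group number, member paths, and 200M size are hypothetical, and a group can only be dropped once it is inactive and archived:
SQL> alter database backup controlfile to trace;
SQL> alter database add logfile group 4 ('/u02/oradata/SID/redo04a.log', '/u03/oradata/SID/redo04b.log') size 200M;
SQL> alter system switch logfile;
SQL> alter database drop logfile group 1;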
So stay tuned for the next edition of the multi-platform DBA. It will not take me as long to get to my next post, even though I have a day job and a night job!