Sunday, December 27, 2009

Database parameter changes and logging:

With it being the holidays (my favorite time of the year!) and a little extra rest on hand, it is time to write another entry in my blog for the multi-platform DBA. I have graduated from arguing about which vendor is better, as they all provide similar features, though some are more mature than others. I will let you figure that one out! In this day and age the DBA must be a businessman in addition to a technician. Most companies have cut back, and a DBA who is strong in one or more RDBMSs and decent in the others will have a leg up on competitors who have often developed a prejudice toward a single RDBMS. Remember the programmer who said FORTRAN was the only language he would ever need? (Probably not!)

If you have read my previous two entries you will note I am experienced with the RDBMS offerings from IBM, Oracle, and Microsoft.

Today let's look at adjusting database parameters using the various vendor options. We will focus on log archiving for this exercise, a key component of RDBMS backup and recovery. All vendors allow for point-in-time (PIT) recovery to varying degrees, but if you don't have database log archiving in place you can usually only restore from your last full backup. The default configuration in most cases is to run in NOARCHIVELOG mode (Oracle terminology). Logging is also very important for the distributed D/R options: SQL Server 2005/2008 mirroring, Oracle Data Guard, and DB2 HADR.

SQL Server 2000, 2005, 2008:

In SQL Server 2000 you could utilize Query Analyzer or osql from the command line to enter database commands. In 2005 and beyond, Management Studio (MS) replaces Query Analyzer and Enterprise Manager. SQL Server provides a bunch of system procedures to access information, though Microsoft is pushing us toward the catalog and dynamic management views. I still like the procedures: their names point you to the function, the parameters are easy to find online, and I learned them first when starting out on SQL 2000.

From MS:

sp_helpdb TESTDB -- Look for the status, filename, size, maxsize, and growth information
go
alter database TESTDB set recovery full
go
sp_helpdb TESTDB
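
If you prefer the newer route, the 2005-and-beyond catalog views return the same information; a quick sketch (sys.databases and recovery_model_desc are the standard catalog names):

select name, recovery_model_desc from sys.databases where name = 'TESTDB'
go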

All of the above can also be done through the GUI in MS as well as by entering the actual commands. I am a firm believer that if you only know a GUI you will never have the old-school knowledge that makes the new stuff easier to grasp anyway.

If the TESTDB database is in SIMPLE mode, the sp_helpdb command will show this in the STATUS field of the output. SIMPLE mode means you can't issue a BACKUP LOG TESTDB TO LOGDEVICE, which copies the log and truncates the inactive portion of the transaction log. If you allow the transaction log of a SQL Server database in FULL mode to fill, the database will stop. Shops will many times set up a maintenance plan to back up the transaction log every 15 minutes or so to a backup device or a file. IMHO, the SQL Server log handling is the weakest of the big 3 because of this: the others (DB2 and Oracle) switch automatically to the next available log when one fills, and at restore time you have to feed every BACKUP LOG output file into RESTORE with the NORECOVERY option, whereas with DB2 and Oracle you can do everything in one command. Afterwards, you can utilize the STOPAT operand of RESTORE on the last log file restored to bring the database online at a point in time. Finally, if your LOGDEVICE is disk, SQL Server maintenance plans allow you to keep the area pruned based on various choices. That is a very nice feature.
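
To make the restore chain concrete, here is a minimal sketch (device and file names are invented for illustration):

backup log TESTDB to disk = 'e:\backups\TESTDB_log1.trn'
go
-- restore: full backup first, then every log backup in sequence
restore database TESTDB from disk = 'e:\backups\TESTDB_full.bak' with norecovery
go
restore log TESTDB from disk = 'e:\backups\TESTDB_log1.trn' with norecovery
go
restore log TESTDB from disk = 'e:\backups\TESTDB_log2.trn' with stopat = 'Dec 27, 2009 12:00 PM', recovery
go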

DB2 UDB on LUW and Z/OS:

In DB2 for LUW (Linux, UNIX, and Windows) the database architecture is very similar to SQL Server in that each database can have its own set of parameters.

In the case of DB2 on LUW you have a set of parameters for each database that can be accessed from the Control Center (GUI) or from the DB2 command window. I must be getting old, as I love commands; the key, though, is to understand why you are entering them. I have worked with people who know all the commands but don't know when to use them; if you have the concepts down, you can always find syntax on Google or in the online documentation each vendor provides. The below is from a Windows server, where you can easily navigate to the command window from Start > Programs > IBM DB2 > DB2COPY > Command Line Tools > Command Window.

db2 connect to TRAIN

db2 get db cfg for TRAIN

The below shows the key log-related parameters after you issue the get command. In LUW you could pipe the output to grep or FIND, depending on the platform, to narrow it down. Note that LOGRETAIN is set to RECOVERY, which means the database is in log-retain (archive logging) mode. I am very impressed with DB2 UDB's log handling capabilities; UDB allows for overflow and sizing options similar to the others. The parameters below are pretty self-explanatory and easily looked up in the online documentation.


Log file size (4KB) (LOGFILSIZ) = 10000
Number of primary log files (LOGPRIMARY) = 3
Number of secondary log files (LOGSECOND) = 9
Changed path to log files (NEWLOGPATH) =
Path to log files = E:\LOGS\TRAIN\
Overflow log path (OVERFLOWLOGPATH) =
Mirror log path (MIRRORLOGPATH) =
First active log file = S0000500.LOG
Block log on disk full (BLK_LOG_DSK_FUL) = NO
Block non logged operations (BLOCKNONLOGGED) = NO
Percent max primary log space by transaction (MAX_LOG) = 0
Num. of active log files for 1 active UOW(NUM_LOG_SPAN) = 0

Group commit count (MINCOMMIT) = 1
Percent log file reclaimed before soft chckpt (SOFTMAX) = 100
Log retain for recovery enabled (LOGRETAIN) = RECOVERY
User exit for logging enabled (USEREXIT) = OFF
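
For example, on Windows you could narrow the output down to just the log parameters like so (findstr plays the role of grep):

db2 get db cfg for TRAIN | findstr /i "LOG"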

Assuming log archiving is not turned on, you can easily issue commands similar to the below for each database where you want it enabled.


db2 connect to TRAIN

db2 get db cfg for TRAIN
db2 update db cfg for TRAIN using NEWLOGPATH e:\logs\TRAIN
db2 update db cfg for TRAIN using LOGRETAIN RECOVERY
db2stop
db2start

In many cases using the db2 update command does not require a restart for the changes to take effect.
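
One caution worth noting: when you switch LOGRETAIN on, DB2 places the database in backup pending status, so plan on taking a full backup right away before anyone needs to connect (the backup path here is just an example):

db2 backup database TRAIN to e:\backups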

If your log path is to disk, I recommend that you have a script in place that will tell you when the directory is getting close to full so that you may proactively clean things up.

I wrote the below for such usage on Windows, and I have similar code for Linux, UNIX, and Z/OS as needed, to keep old files that accumulate cleaned up based on age.



' Filename: delagedfle.vbs
'
' Author: Me
'
' Date: May 20, 2004
'
' Last Revised: May 20, 2004
'
' This WSH script is run at the command prompt and is given a directory to check, a file name pattern, and an age in days.
'
' Usage: [CScript | WScript] delagedfle.vbs <path> <name-pattern> <age-in-days> [SIMULATE]
' Example: cscript delagedfle.vbs temp\\logs filex 10 -- deletes files under \temp\logs whose names contain "filex" and whose last modified date is beyond 10 days (note the doubled backslashes the WMI query expects)


Dim file
Dim Diff
Dim Diffc
Dim name
Dim namex
Dim FileSystemObj
Dim ObjFSO
Dim objFile
Dim num
Dim tage
Dim k
Dim args
Dim msg
Dim objWMIService
Dim colFiles
Dim Delcount
Dim Delete
Dim difx
Dim strComputer

set args = WScript.Arguments
num = args.Count
delcount = 0
diff = 0
tage = 0
Delete = "YES"
'wscript.echo num


' Procedure to check parms:
Sub ParmError()
WScript.Echo "Usage: [CScript | WScript] delagedfle.vbs "
WScript.Quit 1
End Sub

Select Case num
    Case 0, 1, 2
        Call ParmError
    Case 3
        Wscript.Echo "Proper parms selected"
    Case 4
        Wscript.Echo "Simulated delete"
        Delete = "NO"
    Case Else
        Call ParmError
End Select

msg = "Process started: " & time
WScript.Echo msg

File = args.Item(0)
namex = args.Item(1)
tage = args.Item(2)

strComputer = "."
set objFSO = CreateObject("Scripting.FileSystemObject")
Set objWMIService = GetObject("winmgmts:" & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
Set colFiles = objWMIService.ExecQuery("Select * from CIM_DataFile where Path ='\\" & file & "\\'") ' ExecQuery("Select * from CIM_DataFile where FileName Like '%" & name & "%'")
WScript.Echo "Found " & colFiles.Count & " files"
For Each objFile in colFiles
    If InStr(objFile.Name, namex) Then
        ' Swap the WMI object for an FSO file object to get at DateLastModified
        Set objFile = objFSO.GetFile(objFile.Name)
        diff = DateDiff("d", objFile.DateLastModified, Date)
        difx = CInt(diff)
        tage = CInt(tage)
        'Wscript.Echo difx
        'Wscript.Echo tage
        If tage < difx Then
            Wscript.Echo "Target age in days for del: " & tage & " Last modified " & difx & " days ago:"
            delcount = delcount + 1
            Wscript.Echo "File delete: " & objFile.Path
            Wscript.Echo "Date last modified: " & objFile.DateLastModified
            If Delete = "YES" Then
                objFSO.DeleteFile(objFile.Path)
            End If
        End If
    End If
Next
If delcount = 0 Then
    Wscript.Echo "No Files deleted in this run"
Else
    Wscript.Echo "Number of files deleted was: " & delcount
End If

msg = "Process completed: " & time
WScript.Echo msg



DB2 on Z/OS:

I will make a passing comment about DB2 on Z/OS, which no one uses anymore (just kidding, mainframers, as TSO/REXX is still my favorite language). Yeah, right: almost every brick-and-mortar Fortune 1000 company will most likely have at least one DB2 Z/OS application on the floor. I don't work very much on the system side of DB2 for Z/OS anymore; however, earlier in my career one of the things a DB2 SYSPROG did was prepare DSNZPARM (ZPARM). Remember from my bio that DB2 on the mainframe has more division of labor (a nice Frederick Taylor term from industrial management) than the other platforms: a DB2 SYSPROG and a DB2 DBA are usually two separate people. The ZPARM is where the key system parameters are kept, and in later releases the -SET SYSPARM command was introduced to allow for dynamic ZPARM changes. ZPARM is where the DB2 SYSPROG will set up the logging parameters, which are similar to what we saw earlier in DB2 UDB, though there are several major nuances, such as the bootstrap data set (BSDS) on the mainframe, which you don't have on UDB. DB2 on Z/OS also did not allow general PIT recovery until DB2 V9 or DB2 X, I believe, so your options were TOLASTFULLCOPY and TORBA (your last quiesce point) on a RECOVER. Because of size and mainframe capabilities, many people will archive the logs directly to tape instead of disk, which lets the tape management software keep up with the tape cycling. The log information is kept in the BSDS. The IBM mainframe is an awesome machine.
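
For the curious, the dynamic ZPARM reload and a TORBA recovery look roughly like the below (the load module name is whatever your shop assembled; the tablespace name and RBA value are made up for illustration):

-SET SYSPARM LOAD(DSNZPARM)

RECOVER TABLESPACE DAPP01S2.TSPART01 TORBA X'00055A01C000'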

Oracle for LUW:

Oracle logging works the same way across LUW. Oracle was originally written in assembler language but later ported to C, which made export files portable across all supported platforms. A little-known fact is that Oracle runs on Z/OS as well; I know of a company that does this.

Oracle, like DB2, allows a very robust set of options for managing the logs. Oracle logging is at the level of the entire database, which is slightly different from a database in SQL Server or DB2. In Oracle, a schema relates pretty well to a database in SQL Server or DB2, though not necessarily: a SQL Server or DB2 database can hold multiple schemas, but most people seem to use one database per application with a separate schema in it, though that is changing.

The below shows how you can turn on log archiving in Oracle, which is disabled by default.

You have to connect as SYSDBA to do this in Oracle:

sqlplus "/ as sysdba" -- I kind of miss connect internal though it was not secure


SQL*Plus: Release 10.2.0.4.0 - Production on Sun Dec 27 10:34:29 2009

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> archive log list;
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination /u01/app/oracle/admin/SID/arch
Oldest online log sequence 535
Current log sequence



SQL> show parameter log

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
audit_syslog_level string
db_create_online_log_dest_1 string
db_create_online_log_dest_2 string
db_create_online_log_dest_3 string
db_create_online_log_dest_4 string
db_create_online_log_dest_5 string
log_archive_config string
log_archive_dest string
log_archive_dest_1 string
log_archive_dest_10 string
log_archive_dest_2 string

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_3 string
log_archive_dest_4 string
log_archive_dest_5 string
log_archive_dest_6 string
log_archive_dest_7 string
log_archive_dest_8 string
log_archive_dest_9 string
log_archive_dest_state_1 string enable
log_archive_dest_state_10 string enable
log_archive_dest_state_2 string enable
log_archive_dest_state_3 string enable

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_state_4 string enable
log_archive_dest_state_5 string enable
log_archive_dest_state_6 string enable
log_archive_dest_state_7 string enable
log_archive_dest_state_8 string enable
log_archive_dest_state_9 string enable
log_archive_duplex_dest string
log_archive_format string %t_%s_%r.arc
log_archive_local_first boolean TRUE
log_archive_max_processes integer 2
log_archive_min_succeed_dest integer 1

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
log_archive_start boolean FALSE
log_archive_trace integer 0
log_buffer integer 10215424
log_checkpoint_interval integer 0
log_checkpoint_timeout integer 1800
log_checkpoints_to_alert boolean FALSE
log_file_name_convert string
logmnr_max_persistent_sessions integer 1
remote_login_passwordfile string EXCLUSIVE
SQL>

SQL> show parameter dest_1

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_create_online_log_dest_1 string
log_archive_dest_1 string
log_archive_dest_10 string
SQL>


SQL> list
1* alter system set log_archive_dest_1='LOCATION=/u01/app/oracle/admin/$ORACLE_SID/arch' scope=both
SQL> /

System altered.

SQL>


SQL> show parameter dest_1

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_create_online_log_dest_1 string
log_archive_dest_1 string LOCATION=/u01/app/oracle/admin
/$ORACLE_SID/arch
log_archive_dest_10 string
SQL>


SYS>shutdown immediate;
SYS>startup mount;
SYS>alter database archivelog;
SYS>alter database open;


SYS>archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u01/app/oracle/admin/SID/arch
Oldest online log sequence 543
Next log sequence to archive 545
Current log sequence 545
SYS>

In Oracle the initial log groups are created at database creation time via DBCA or a CREATE DATABASE script, which DBCA invokes under the covers. You should have at least two members in each group, on separate disks if possible.

You can add log groups later with an ALTER DATABASE ADD LOGFILE GROUP command. Each time a group fills, Oracle switches to the next group and the archiver copies the previous group to the archive destination. Therefore, if you don't have enough groups, you can run into a hung instance if the logs switch back around before the archiver has copied them. Check your alert log and determine the frequency of log file switches; if the log is switching every couple of minutes and you are not doing a massive bulk load, you need to resize the logs, which you can do by creating larger groups on the fly and dropping the old ones. Always make sure you back up your control file (similar to the DB2 BSDS on Z/OS) to trace. If you are not using RMAN and your archive destination is not tape, you should employ a script to keep the archive destination pruned and to alert you when, say, the mount point gets within 10% of being full. Many shops still utilize scripts and such to handle backups and alerts instead of Grid Control and RMAN, though that is the way you should be headed, to take advantage of things such as BLOCK RECOVER.
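
A sketch of the group housekeeping just described (group numbers, sizes, and file names are examples only; a group must be inactive before you can drop it):

SQL> alter database add logfile group 4 ('/u02/oradata/SID/redo04a.log','/u03/oradata/SID/redo04b.log') size 200M;
SQL> alter system switch logfile;
SQL> alter database drop logfile group 1;
SQL> alter database backup controlfile to trace;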

Stay tuned for the next edition of the multi-platform DBA. It will not take me as long to get to my next post, even though I have a day and a night job!

Sunday, December 21, 2008

Command line options for DB2, Oracle, and SQL Server

I have noticed, especially in the SQL Server world, a tendency to believe that the GUI is the holy grail of database management for the DBA. You can't beat it for certain types of look-ups and quick hitters, specifically if you don't feel like Googling for syntax.

However, grizzled vets know that the command line interface allows DBAs to script commands to run in batch. This is wonderful if, say, you need to create 100 database users for a request that comes up for the next day just as you are ready to go home. You could utilize Management Studio in SQL Server 2005 or 2008 and be there all night, or, if they supplied you a soft copy of the users, you could edit that information into a file, wrap SP_ADDUSER or CREATE USER (if 2008) around it, and quickly add it as a scheduled task for osql (IIRC, Microsoft has a replacement for osql in SQL 2008, sqlcmd if memory serves, though osql is still supported!).
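
A sketch of that approach (user names, paths, and the instance are invented): put the wrapped statements in a file, then point osql at it.

-- c:\scripts\newusers.sql
CREATE USER user001 FOR LOGIN user001
GO
CREATE USER user002 FOR LOGIN user002
GO

osql -E -S server\instance -d TESTDB -i c:\scripts\newusers.sql -o c:\scripts\newusers.log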

Let's examine your options in the big 3 database vendors for command line execution of scripts.


DB2 for Z/OS:


You can use SPUFI from ISPF, or you can execute DSNTEP2 (one of the DB2 sample programs that comes with the RDBMS) using JCL similar to the below, with slight modifications for your installation to the job card and the PLAN and LIB parameters. By submitting a job manually, or by having a scheduling tool submit it, you can execute code as needed. Many dynamic DB2 SQL statements on the IBM mainframe can be executed this way by updating the SQL under the SYSIN card.


//JOBNAME  JOB (ACCT),'DSNTEP2',CLASS=Z,MSGCLASS=R,
//         NOTIFY=&SYSUID
//* ALL JOBS NEED A JOB CARD
//STEP1    EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSTSIN  DD *
 DSN SYSTEM (DB2D)
 RUN PROGRAM(DSNTEP2) PLAN(DSNTEP2) LIB('DB2.RUNLIB.LOAD')
 END
/*
//SYSIN    DD *
 SELECT DBNAME,TSNAME,PQTY,SQTY
 FROM SYSIBM.SYSTABLEPART
 WHERE DBNAME = 'DAPP01S2'
 ORDER BY PQTY DESC;
/*

The above is just a simple catalog query for space information for a particular database.

DB2 UDB on LUW:

In DB2 on LUW you can script your code using the db2 command processor (db2 before the command) and embed it in the scripting language of choice for the platform. In the below example you invoke the db2 command processor by prefixing each command with db2 and then specifying a file of SQL statements. The below is for Windows and resides in a .cmd file. You could then invoke it from the command line, or you can use the DB2 Task Center to schedule the command file as a job (you would not want a pause in the script if it was to be a scheduled job, of course).


@echo off
REM
REM -- run from c:\scripts
REM
db2 connect to SAMPLE
echo "Control C to kill before continuing else press enter"
pause
db2 -f input.sql -t > output.log
echo "check log for errors"

In the above example the user executing the command file must have access to a database called SAMPLE. If so, the code inside input.sql runs once the connection is made.

Oracle on LUW:

In Oracle you can easily use SQLPLUS to run Oracle processes in batch. You can also use DBMS_SCHEDULER in 10g and above, or DBMS_JOB if you are at a really back-level release of Oracle, to schedule tasks.
Many times it is easier to use cron instead of the internal tools, especially if you want to control things even when the instance is down. We use some wrapper scripts that allow us to easily execute different tasks with the same script over and over, without more PL/SQL.
I utilize the below example on UNIX and Linux to create a job that clones a database to another server.


Note that the syntax for executing the script is in the comments of the script:
more xrcpfiles.sql

REM collect the names of the oracle files in the database

-- ---------------------------------------------------- --
-- xrcpfiles.sql - RCP files from one machine to --
-- another. --
-- will build rcpcopy.sh in place --
-- --
-- sqlplus -s / @xrcpfiles targethost targetprefix --
-- ---------------------------------------------------- --

set echo off
set feedback off
set heading off
set linesize 200
set pagesize 300
set trimspool on
set verify off

define nhost ='&1'
define pref ='&2'

whenever sqlerror exit sql.sqlcode

spool $ORACLE_BASE/admin/@/create/rcpcopy.sh

prompt #! /bin/ksh
prompt #Name - rcpcopy.sh - copies database files via rcp
prompt
prompt if [ -z "$1" ]
prompt then
prompt echo "supply email address"
prompt exit 2
prompt fi
prompt
prompt if [ -z "$2" ]
prompt then
prompt echo "Is it a hot copy YES or NO"
prompt exit 2
prompt fi
prompt

prompt echo "start `date`"
prompt

prompt if [ $2 = "NO" ]
prompt then
select 'rcp -p '||rtrim(name)||' &nhost:&pref'||rtrim(name)||';print rc=$?' from v$controlfile;
select 'rcp -p '||rtrim(member)||' &nhost:&pref'||rtrim(member)||';print rc=$?' from v$logfile;
prompt fi
select 'rcp -p '||rtrim(name)||' &nhost:&pref'||rtrim(name)||';print rc=$?' from v$datafile;
select 'rcp -p '||rtrim(name)||' &nhost:&pref'||rtrim(name)||';print rc=$?' from v$tempfile;

prompt
prompt echo "end `date`"

prompt
prompt echo "rcp from `uname -na` completed" mailx -s "rcp_from_`uname -na`" $1
prompt
prompt exit 0

spool off

exit

sqlplus -s / @xrcpfiles targethost targetprefix

The -s says to run in silent mode, and the script name follows the @ sign. The targethost parameter is the machine to copy to, and targetprefix (the &2 substitution) is the path prefix to prepend on the target.
I use operating system authentication in my Oracle script environments so that I don't have to specify passwords for cron jobs; that is what the / means above.
If your environment variables and SQLPATH are set up, or if you run the above from the directory where the script lives, you can use SQLPLUS to drive many batch processes.
Any SQLPLUS script can then be executed this way, as a cron scheduled task, or as a Windows scheduled task if done under Windows.


SQL Server:


In SQL Server, one option you have if you are running the SQL Server Agent is to create jobs that batch up various T-SQL processes using Management Studio (you can also do it using the appropriate sp_ stored procedures).

There are times, though, with one-off processes, that this can be a bunch of work given the various options available in the GUI.

Many times it is simpler for me to just place the commands in a file and pass them to osql, which I can run at the command line or even run as a Windows scheduled task. I believe in 2008 there is a replacement for osql, which appears to be sun-setting; the replacement should not be much different to use.


osql -d dbase -E -S server\instance -i e:\scripts\truncates.sql


The above uses -E, which is a trusted connection, meaning you are using Windows authentication. The -d specifies the database where the script will run, -S is the named or default instance, and -i specifies the input file. The command would execute the truncate statements in the file passed with the -i switch.


I hope you have gotten something out of this look at ways to execute SQL statements from the command line in the various RDBMS systems. There are other ways to do this that we did not discuss.



Next time we will look at some high availability options.

Saturday, October 11, 2008

My DBA journey and database compression options

Career:

In October 2008, JAW decided to join the blogosphere!

Back in 1985, I started out, after matriculating through college (can you say COBOL?), as a 3800 laser printer/computer operator. I thought I knew a lot about IT coming out of college, but I found out that I didn't. Anyway, I started to immerse myself in the technology of the time and got promoted steadily. I developed a love for CLIST and REXX programming.

IIRC, my first professional experience with an RDBMS (can you say VSAM, ISAM, or QSAM?) was DB2 V2.1 around July 1989, when I was what is known as an MVS (Z/OS) systems programmer. At various points I was a DB2 systems programmer, a CICS systems programmer, and a VTAM/NCP/TCPIP systems programmer. Most of these jobs today are done by mainframe outsourcing outfits instead of in-house, as the skill set is dwindling. I even have a file on the CBT mainframe shareware site at http://cbttape.org/. Around this time I started to learn as much as I could about databases, as I began to see a brighter future there. In the mid-to-late 90's I was doing DB2 system programming and DB2 application support, which are usually separate roles in a mainframe shop, along with becoming stronger in the programming languages of that platform. I have worked on DB2 all the way up to V9.1 on Z/OS and V9.7 on Windows.

In 1998, as a DB2 systems programmer, I was given the opportunity to install and configure the Oracle Transparent Gateway for DB2. This allowed a DB2 subsystem's data to be accessed from Oracle 7.3, which had just made its way into the shop by 1998. Once you set up the gateway correctly, you could go into Oracle, create a database link and a synonym, and not know the data was in DB2 unless you SET AUTOTRACE ON or the like inside of SQLPLUS (or unless you were pulling big data over the wire). The gateway on the mainframe was a DB2 application that understood Oracle and did the heavy lifting.

At this point I said DB2 is the king and what is this Oracle stuff?

However, I decided not to be like some peers, and I jumped into this thing with both feet. As a young guy who could learn new stuff, I started to buy Oracle books and pick things up from the contract Oracle DBA brought in to help build a UNIX-based application that required data from mainframe DB2. The company also started to send me to class to learn Oracle and MQSeries, as queuing was coming into play.

During the push to Y2K (I was still primarily a mainframer), the company announced it would outsource my department because mainframe support was too costly and all new development would be Oracle. Since I was spending my own time and money to learn shell programming, AWK, SED, and Oracle, I was kept on for the new millennium and became a full-time Oracle DBA in the 1st quarter of 2000! I was working on V7.3 and V8, and starting to study 8i (8.1.5, can you say shared pool contention???), which was installed on database systems at my shop that year. I worked with some more experienced Oracle guys, picking up all I could. I was able to quickly map what I knew about DB2, JCL, COBOL, BAL, and REXX onto the similar Oracle/UNIX constructs. Eventually I became the lead Oracle DBA, as fate would have it. IIRC, it was around 2003 that SQL Server 2000 made its way into the shop in a big way. Management decreed that new applications that did not require Oracle would be done on SQL Server.

They say that those who don't learn from history are doomed to repeat it. In 2004 I started to buy SQL Server books (LOL!) and began to learn SQL Server 2000. I found DTS to be pretty good for quick-and-dirty stuff. I started to learn VBSCRIPT and VB, and took classes on DOTNET and C# development somewhere in this period, as C# was to become the new platform for the company. I even dusted off some DOS shell programming. I eventually rewrote all of my Oracle UNIX scripts in VBscript to support Oracle on Windows, and I became a big proponent of RHEL Linux as well. I started to play with ADO, and later ADO.NET, and had some fun with that. We brought in strong SQL Server people, and I picked things up from them over time. When SQL Server 2005 came out I started learning about it and spent time comparing and contrasting the features of the different platforms. In my spare time I am trying to learn something about SQL2008. Being a jack of all trades and master of none is tough. Some of the most fun I had in this period was using the various DOTNET providers for Oracle from Visual Studio.

Most of my work time is spent with Oracle and DB2. Today I am in the midst of learning Oracle 11g, Oracle Application Express (we just created our first production APEX application with help from the APEX forum), and SQL Developer (we are trying to get rid of TOAD), in addition to my regular work. I have found ADDM and the SQL Tuning Advisor to be great for my Oracle 10g applications.

Of the big three, I think Oracle is the most feature rich, followed by DB2 and then SQL Server. As you should be able to tell, I would not have said that years ago when I was a mainframe bigot!


Compression:

A concept where less is usually more! Every experienced IT person understands the concept.

In the DB2 world I first dealt with compression in V3.1, back in the mid-to-late nineties. It was a huge saver of disk. IIRC, we had a mainframe with DB2 sort assist to move some of the cycles off the mainframe into the firmware. The thing I liked most about it was that the software came at no special cost as long as you had the proper hardware. This saved us reams of DASD over time, and performance improved for us, including our OLTP applications, because more data was quickly available for processing! Basically, adding a COMPRESS operand to CREATE TABLESPACE or ALTER TABLESPACE did the trick.
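
The whole change amounted to something like the below (names hypothetical), followed by a REORG of the tablespace to build the compression dictionary:

ALTER TABLESPACE DBTEST01.TSTEST01 COMPRESS YES;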

In the Oracle world, compression showed up around Oracle 9i. However, it was primarily of use for OLAP applications, with a few restrictions that you can read about. To be fair, one thing Oracle had before DB2 was the ability to compress indexes, which can buy you a lot with composite indexes. In Oracle 11g, Advanced Compression came about, and I would make comments to my Oracle representative about the country being in a recession and ask why this is not in the base EE license. Larry needs a new boat, I guess. Based on my DB2 experience I can see getting the same boost for OLTP if you pay the extra money. I am still mad about that one, though.
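
For reference, a sketch of both features (table and index names invented; COMPRESS FOR ALL OPERATIONS is the 11gR1 spelling of the extra-cost OLTP option):

CREATE INDEX app.big_table_ix1 ON app.big_table (col1, col2, col3) COMPRESS 2;
ALTER TABLE app.big_table COMPRESS FOR ALL OPERATIONS;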

On the SQL2008 side of the house, MSOFT has introduced compression. I need to read more about it and play with it when the chance presents itself. If you have tested it, let me know. I would think you would not use it in production until a few service packs go by. However, in SQL 2005 I was surprised how reliable mirroring was, which is similar to Oracle Data Guard or DB2's HADR.
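
When I do get to play with it, I expect the test to look roughly like the below (table name invented; both statements are 2008 syntax as I understand it):

EXEC sp_estimate_data_compression_savings 'dbo', 'BigTable', NULL, NULL, 'PAGE'
go
ALTER TABLE dbo.BigTable REBUILD WITH (DATA_COMPRESSION = PAGE)
go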

Next time we can talk about command line options for the big 3.

To paraphrase Nature Boy Ric Flair, "Diamonds are forever and so is database technology, WOO!".