Todo

Prio Task
A A console-based installer for the UNIX platform is needed. Based on this, we can provide prebuilt packages such as an RPM package for Linux.

A A prebuilt package for Windows is more than desirable, so we are still looking for someone who is willing to take on this job.

B Test environments or test users are needed for the AIX, HP-UX and Solaris platforms, so that recurring tests can be run on those platforms.

Ideas & problems waiting for a solution...

Codename Description
dbtubackup
(added 2002-07-22)
Tool for one of the core processes - doing a database backup. The tool needs to be able to do Online/Offline Backups; SET WRITE SUSPEND/RESUME should be covered as well.
For every database there will be a configuration file (XML?). This file defines different "backup classes". A backup class sets the backup type (Online/Offline/...), the backup media (DISK, ADSM, ...) and additional information (Compress Backup yes/no, Number of Generations to hold, ...). Defining rules will be available too - this will be done with formulas (containing variables such as DB_LOGFILE_SIZE, DB_CURRENT_SIZE, LAST_BACKUP_DAYS_SINCE and LAST_BACKUP_LOGFILES_SINCE).
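A minimal sketch (Python, just for illustration) of what such a configuration file and its rule evaluation could look like. The XML layout, the element/attribute names and the use of eval() are assumptions, not a fixed format - only the rule variables come from the description above.

    import xml.etree.ElementTree as ET

    # Hypothetical per-database configuration with two backup classes.
    CONFIG = """
    <database name="SAMPLE">
      <backupclass name="daily_online" type="ONLINE" media="DISK"
                   compress="yes" generations="7">
        <rule formula="LAST_BACKUP_LOGFILES_SINCE &gt; 50 or LAST_BACKUP_DAYS_SINCE &gt;= 1"/>
      </backupclass>
      <backupclass name="weekly_offline" type="OFFLINE" media="ADSM"
                   compress="no" generations="4">
        <rule formula="LAST_BACKUP_DAYS_SINCE &gt;= 7"/>
      </backupclass>
    </database>
    """

    def due_classes(xml_text, variables):
        """Return the names of all backup classes whose rule formula is true."""
        due = []
        for cls in ET.fromstring(xml_text).findall("backupclass"):
            formula = cls.find("rule").get("formula")
            # eval() is only a placeholder for a real, restricted formula parser.
            if eval(formula, {"__builtins__": {}}, dict(variables)):
                due.append(cls.get("name"))
        return due

    # Example values, e.g. collected from snapshots and db2 commands.
    print(due_classes(CONFIG, {"DB_LOGFILE_SIZE": 4096, "DB_CURRENT_SIZE": 250000,
                               "LAST_BACKUP_DAYS_SINCE": 2,
                               "LAST_BACKUP_LOGFILES_SINCE": 12}))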
dbtuclonecfg
(added 2002-07-22)
Small tool for capturing the configurations (DBM CFG, DB CFG or db2set variables) and writing them to a file (ready for processing with db2 -tf <file>).
Thanks to Natalie Goldstein (for pointing this out) and to Dave Harvey for his existing tool (capturing the DB CFG with a Perl script).
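A rough Python sketch of the DB CFG part of the capture, assuming the usual "Description (TOKEN) = value" layout of the db2 get db cfg output; the exact output format may differ between DB2 versions, and parameters without a simple value are skipped here. The DBM CFG and db2set variables would be handled the same way.

    import re
    import subprocess
    import sys

    def clone_db_cfg(dbname, outfile):
        """Capture the DB CFG of dbname into UPDATE statements that can be
        replayed later with:  db2 -tf <outfile>"""
        out = subprocess.run(["db2", f"get db cfg for {dbname}"],
                             capture_output=True, text=True).stdout
        # Lines typically look like:  Database heap (4KB)  (DBHEAP) = 1200
        token_value = re.compile(r"\((\w+)\)\s*=\s*(\S+)\s*$")
        with open(outfile, "w") as f:
            f.write(f"-- DB CFG of {dbname}, captured by dbtuclonecfg sketch\n")
            for line in out.splitlines():
                m = token_value.search(line)
                if m:   # parameters without a value are skipped
                    f.write(f"UPDATE DB CFG FOR {dbname} USING {m.group(1)} {m.group(2)};\n")

    if __name__ == "__main__":
        clone_db_cfg(sys.argv[1], sys.argv[2])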
dbtureorg
(added 2002-07-22)
Another tool for one of the core processes. It should combine RUNSTATS and REORG TABLE into one solution.
The reorganization is done only for those tables that are flagged by the formulas found in the Administration Guide (resulting in a "Reorg on Demand"). An index name (or the token PK, standing for the primary key of the table) can be defined to reorder the table during the reorganization.
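A possible shape of the core loop (Python sketch driving the CLP, assuming an existing connection). The check shown is only an example in the spirit of the Administration Guide formulas (roughly F1: overflow rows should stay below about 5% of the table's rows); the real tool would evaluate the complete set of formulas.

    import subprocess

    def db2(cmd):
        """Run one command through the DB2 CLP (assumes an existing connection)."""
        return subprocess.run(["db2", "-x", cmd], capture_output=True, text=True).stdout

    def reorg_on_demand(schema, table, index=None):
        # Refresh the statistics first, so the formula sees current values.
        db2(f"runstats on table {schema}.{table} and indexes all")
        card, overflow = (int(v) for v in db2(
            f"select card, overflow from syscat.tables "
            f"where tabschema = '{schema}' and tabname = '{table}'").split())
        # Example check only - roughly formula F1 (overflow rows < ~5% of rows).
        if card > 0 and 100 * overflow / card > 5:
            stmt = f"reorg table {schema}.{table}"
            if index:                     # index name or the PK index of the table
                stmt += f" index {schema}.{index}"
            db2(stmt)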
dbturevokeall
(added 2002-06-04)
This will be a tool that reads the SYSCAT views of a given database and generates a DCL file containing all possible REVOKEs. Running that DCL file leaves you with a database without any granted rights. This is a reasonable step after creating the database and all of its objects. After running all the REVOKEs you can run only the GRANTs you really need.
Every REVOKE within the generated DCL file will be preceded by a comment with something like "granted on <timestamp> by <user>". At the beginning and end of the file a comment with the current timestamp should be included as well.
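A Python sketch for one of the SYSCAT views (SYSCAT.TABAUTH); the real tool would walk all *AUTH views (DBAUTH, SCHEMAAUTH, INDEXAUTH, PACKAGEAUTH, ...). As far as I can tell the SYSCAT views do not expose a grant timestamp, so the per-REVOKE comment here only records the grantor.

    import subprocess
    from datetime import datetime

    def db2(cmd):
        return subprocess.run(["db2", "-x", cmd], capture_output=True, text=True).stdout

    def revoke_all_tabauth(outfile):
        rows = db2("select grantor, grantee, granteetype, tabschema, tabname "
                   "from syscat.tabauth").splitlines()
        now = datetime.now().isoformat(sep=" ", timespec="seconds")
        with open(outfile, "w") as f:
            f.write(f"-- dbturevokeall sketch, generated {now}\n")
            for row in rows:
                if not row.strip():
                    continue
                grantor, grantee, gtype, schema, table = row.split()
                who = f"USER {grantee}" if gtype == "U" else f"GROUP {grantee}"
                f.write(f"-- granted by {grantor}\n")
                f.write(f"REVOKE ALL ON {schema}.{table} FROM {who};\n")
            f.write(f"-- end of file, generated {now}\n")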
<noname>
(added 2002-07-22)
Creating a small tool that checks whether it runs in a PE/WE/EE or in an EEE environment. This can be useful for running checks within scripts, ...
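One possible heuristic (Python sketch): under a v7-style UNIX instance layout only a partitioned (EEE) instance carries a db2nodes.cfg in its sqllib directory - this is an assumption and would need to be verified per platform and version.

    import os

    def is_eee_instance():
        """Heuristic: only a partitioned (EEE) instance has a db2nodes.cfg
        in the instance owner's sqllib directory (assumption, UNIX only)."""
        instance = os.environ.get("DB2INSTANCE", "")
        return os.path.isfile(os.path.expanduser(f"~{instance}/sqllib/db2nodes.cfg"))

    if __name__ == "__main__":
        # scripts can check the exit code: 0 = EEE, 1 = PE/WE/EE
        print("EEE" if is_eee_instance() else "PE/WE/EE")
        raise SystemExit(0 if is_eee_instance() else 1)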
<noname>
(added 2002-06-04)
While building an application that needs to snapshot itself, I have looked for a function to get the Application Handle/Application ID of my own connection.
I haven't found anything useful - there are special registers for the Application Name, the User ID and the Client NNAME. The Client NNAME is often not set, and the other two are often shared by a whole group of processes.
I was thinking about the Unix getpid() function compared with an application-level snapshot (which has a Client Process ID element). This seems to be relatively unique, but duplicate PIDs are also possible, and Application Name, User ID and Client NNAME still don't make the situation any better. Combining this with the Inbound Communication Address element could also help, but it brings two problems with it: first, there must be a mapping to the network setup (different protocols, different NICs, ...), and second, I'm not sure about the possible use of NAT between client and server.
The db2batch command must have the same problem - how have they solved it?
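A sketch of the getpid() idea in Python, assuming the application snapshot reports a "Process ID of client application" element for locally connected applications; the exact label text and its availability per platform/protocol are assumptions.

    import os
    import re
    import subprocess

    def my_application_handle(dbname):
        """Find the application handle of this process's own (local) connection
        by matching the reported client process id against os.getpid()."""
        out = subprocess.run(["db2", f"get snapshot for applications on {dbname}"],
                             capture_output=True, text=True).stdout
        handle = None
        for line in out.splitlines():
            if line.startswith("Application handle"):
                handle = line.split("=")[-1].strip()
            elif "Process ID of client application" in line:
                pid = re.search(r"=\s*(\d+)", line)
                if pid and int(pid.group(1)) == os.getpid():
                    return handle
        return None   # e.g. remote client, or the element is not reported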
<noname>
(added 2002-06-04)
Creating a testbed with 2 or 3 tables and a few hundred records each (every table should span 25 or more pages). These tables should exist twice - one group with statistics, one without.
The records are inserted by a small application, which inserts one record, takes a lock snapshot and then commits. This way we learn the internal ID of every record - this information has to be written to a file (see the sketch below).
Then we can run different statements (selects on single tables for read/update, joins, create tables, updates, inserts, deletes, ...) using a properly configured DDL file and take a lock snapshot before the commit/rollback (doing nothing should also be tested - to have a baseline). The snapshot data can be mapped to the collected information about the internal IDs. Running this with different isolation levels gives a great environment to test and play with the locking mechanisms of DB2 (for a better understanding).
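A sketch of the insert/snapshot/commit loop in Python, using the ibm_db driver purely for illustration (any CLI or embedded SQL program would do) and taking the lock snapshots through the CLP. The table name and layout (t1(id, filler)) are hypothetical.

    import subprocess
    import ibm_db

    def lock_snapshot(dbname):
        """Take a lock snapshot via the CLP and return its raw text."""
        return subprocess.run(["db2", f"get snapshot for locks on {dbname}"],
                              capture_output=True, text=True).stdout

    def fill_table(dbname, user, password, rows=300):
        conn = ibm_db.connect(dbname, user, password)
        ibm_db.autocommit(conn, ibm_db.SQL_AUTOCOMMIT_OFF)
        filler = "x" * 200            # padding so the table spans enough pages
        with open("rowids.txt", "w") as f:
            for i in range(rows):
                ibm_db.exec_immediate(conn, f"insert into t1 values ({i}, '{filler}')")
                # The row lock held until the commit reveals the internal ID of
                # the record just inserted; keep the snapshot for later mapping.
                f.write(f"--- row {i} ---\n")
                f.write(lock_snapshot(dbname))
                ibm_db.commit(conn)
        ibm_db.close(conn)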
<noname>
(added 2002-06-04)
Building a wrapper for the monitoring API, enabling Perl and/or Java to take snapshots. The snapshot will be taken during creation of the "snapshot object". The data will be hidden; to access it there needs to be a method working internally with the Dbtu_Parser_FindSnapshotBufferElement() and Dbtu_Converter_SnapshotBufferElement2String() functions.
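A ctypes-based Python sketch of the intended "snapshot object". Only the two Dbtu_* parser/converter names come from the project; the library name, all function signatures and the Dbtu_Snapshot_Take() call that fills the buffer are assumptions.

    import ctypes

    class Snapshot:
        """The planned "snapshot object": the buffer is captured in the
        constructor and kept private; elements are read through the parser
        and converter functions of the project."""

        def __init__(self, libpath="libdbtu.so"):
            self._lib = ctypes.CDLL(libpath)
            # hypothetical call that takes the snapshot and returns the buffer
            self._lib.Dbtu_Snapshot_Take.restype = ctypes.c_void_p
            self._buffer = self._lib.Dbtu_Snapshot_Take()

        def element(self, element_id):
            """Look up a single element and return it as a string."""
            find = self._lib.Dbtu_Parser_FindSnapshotBufferElement
            find.restype = ctypes.c_void_p
            to_str = self._lib.Dbtu_Converter_SnapshotBufferElement2String
            to_str.restype = ctypes.c_char_p
            elem = find(ctypes.c_void_p(self._buffer), ctypes.c_int(element_id))
            text = to_str(ctypes.c_void_p(elem))
            return text.decode() if text else None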