Oracle Ramblings

Tips and wisdom from 15 years of DBA experience

Friday, October 21, 2011

Audit SELECT on one table

What seems like such a simple task is a bit more painful than it needs to be. The task is "Capture all SQL statements against table X."

Setting the audit_trail parameter:


$ sqlplus "/ as sysdba"
SQL> show parameter audit_trail;
(if it shows "db,extended", you are done with setup. Go on to the next section)
SQL> alter system set audit_trail=db,extended scope=spfile;
SQL> shutdown immediate;
SQL> startup;
SQL> show parameter audit_trail;
(verify that the parameter equals "db,extended")

Turning on Auditing:

$ sqlplus "/ as sysdba"
SQL> audit select on schema.table by access; -- if you do not specify by access, then by default "by session" will be used which only audits the first statement of a given session.
SQL> delete from sys.aud$; -- only do this if you are SURE you can safely delete all audit data...makes debugging simpler.
SQL> select count(*) from dba_audit_trail;
(should be zero)
SQL> exit;
$ sqlplus non-sys-user/pw
SQL> select count(*) from schema.table; -- select from the audited table...
SQL> exit;
$ sqlplus "/ as sysdba"
SQL> select count(*) from dba_audit_trail;
(should be non-zero)
The interesting columns are sql_text and sql_bind in dba_audit_object.  To look at the data you can do something like this:
SQL> column sql_bind format a50
SQL> column sql_text format a50
SQL> set lin 200
SQL> select username, to_char(timestamp, 'DD-MON-YYYY HH24:MI:SS'), sql_bind, sql_text from dba_audit_object;
That's it.
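One last housekeeping note: once you have captured what you need, you can switch the audit off again. A sketch (substitute your own schema and table names; dba_obj_audit_opts shows which statement options are still being audited):

```sql
SQL> noaudit select on schema.table;
SQL> select object_name, sel from dba_obj_audit_opts
     where owner = 'SCHEMA' and object_name = 'TABLE';
```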

Monday, November 29, 2010

Transform data in Oracle Golden Gate

In order to transform data in Oracle Golden Gate, you will need to understand the MAP command well. Although there are cases where you will want to execute commands on the source in order to place them into additional payload that will be sent to the target, the vast majority of activity in transformation will be done on the target. 
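For the source-side case mentioned above, GoldenGate's TOKENS clause is one way to capture extra values on the source and carry them in the trail as additional payload. This is only a sketch - the token name (tk_host) and the target column (src_host) are hypothetical:

```
-- In the source Extract parameter file:
TABLE schema1.tabA, TOKENS (tk_host = @GETENV ('GGENVIRONMENT', 'HOSTNAME'));

-- In the target Replicat parameter file:
MAP schema1.tabA, TARGET schema1.tabA, COLMAP (USEDEFAULTS, src_host = @TOKEN ('tk_host'));
```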

In the following scenario we place a value into a given column based on the value of an existing column:

MAP schema1.tabA, 
    TARGET schema1.tabA, 
        COLMAP (USEDEFAULTS, colA = @CASE(colB,23,14,24,15,16));

This statement reads: if colB is 23, set colA to 14; else if colB is 24, set colA to 15; in all other cases, set colA to 16.

For more details on the @CASE() function, see the Oracle Golden Gate Reference Guide.

Friday, November 12, 2010

How To Put NULL In Target Column With Golden Gate

Golden Gate has a large API which makes most tasks very simple. One item that took me a bit longer to figure out than it should have was how to tell Golden Gate to place NULL into the target column. Here is how to do it:

MAP source.table, TARGET target.table, COLMAP(USEDEFAULTS, null_column = @COLSTAT(NULL));

I have no idea why it took me so long to figure this out.

Saturday, September 11, 2010

Set up RMAN Encryption

This example will set up RMAN Transparent Data Encryption for RMAN. It is a bit confusing from the documentation as to how to set up RMAN encryption, for you first need to learn about Oracle Wallets. I will spare you the details, and instead just give you the steps you need to take to make TDE work for RMAN. This method has been tested on 10gR2, but will probably work on 11g as well.

1. Add the following to sqlnet.ora on the host that you are backing up:

ENCRYPTION_WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /super/secure/location)
    )
  )

2. Log into the database you want to back up and issue:
alter system set encryption key identified by "Super_secure_password";

(note that the password is case-sensitive. You will not have to enter this password to backup/restore, but you will to open the Wallet)

3. Verify that the wallet is open:
select * from v$encryption_wallet;

Note that when you bounce the instance, you will need to re-open the wallet with this command:
alter system set encryption wallet open identified by "Super_secure_password";

4. Configure rman:
$ rman target /
RMAN> CONFIGURE ENCRYPTION FOR DATABASE ON;

new RMAN configuration parameters:
CONFIGURE ENCRYPTION FOR DATABASE ON;
new RMAN configuration parameters are successfully stored


You can also change the default algorithm of AES128 that Oracle uses by issuing, for example, CONFIGURE ENCRYPTION ALGORITHM 'AES256'; (the available algorithms are listed in V$RMAN_ENCRYPTION_ALGORITHMS).

5. That's it! Now just perform your backups as usual.
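As a quick sketch, the next backup you take will now produce encrypted pieces with no extra syntax at all (the tag here is arbitrary):

```
$ rman target /
RMAN> backup incremental level 1 database tag 'ENC_TEST';
```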

6. Verify that the encryption is working:
  • Perform a full backup (with or without encryption)
  • Create a test table and insert a row with a string you can easily grep for, like SuperDuperSecret or something
  • Perform an incremental backup without encryption on.
  • Do hexdump -C <backup piece> | grep SuperDuperSecret on each incremental backup piece. You should get a hit.
  • Insert a second row into the table with a different string like MoreSuperDuperSecret
  • Turn on encryption
  • Perform a backup
  • Do hexdump -C <backup piece> | grep MoreSuperDuperSecret on each incremental backup piece. You should not be able to find that string. Voilà!

Wednesday, February 10, 2010

Clear Space in the Flash Recovery Areas

A common problem related to the Flash Recovery Area is:

ORA-19809: limit exceeded for recovery files
ORA-19804: cannot reclaim 44040192 bytes disk space from 429496729600 limit

If you can confirm that the error is coming from the flash recovery area, you can simply issue the following:

SQL> show parameter db_recovery_file_dest_size

NAME                           TYPE        VALUE
------------------------------ ----------- -------
db_recovery_file_dest_size     big integer 400G

SQL> alter system set db_recovery_file_dest_size=200G;

SQL> alter system set db_recovery_file_dest_size=400G;

Temporarily setting db_recovery_file_dest_size to a smaller value will cause Oracle to remove the oldest files... of course you will lose some of the history you can flash back to, so be aware of this.
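If you would rather not touch the parameter at all, you can also ask RMAN to clean up directly - a sketch, assuming your retention policy is already set the way you want it:

```
$ rman target /
RMAN> crosscheck backup;
RMAN> delete noprompt obsolete;
```

crosscheck marks pieces that have vanished from disk as expired, and delete obsolete removes whatever falls outside the retention policy.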

Wednesday, February 3, 2010

Streams vs. Golden Gate

In late 2009, Oracle purchased Golden Gate software whose technology is now the stated platform that Oracle will build upon for all new replication features. Streams will still be supported, but the new goodies and major focus will be on Golden Gate. My initial experiences with Golden Gate have been very positive, and for those of you who have ever used streams, you will be excited to learn that there is a single command line interface to do *everything*, and transformations are far, far simpler. Here is the breakdown:


Streams:
Set up database links, hope that your global names are set properly, use dbms_streams_adm.maintain_schemas(), or if you are really a masochist, build your capture & apply queues, set up propagation, and then spend a couple of hours digging through V$ views to find out why things aren't working.

Golden Gate:
Untar/unzip the software on both source and target to a non-NFS-mounted volume - I had problems using GG when the connection to the NFS server was slow, and GG does not support this config. Who installs binaries on NFS-mounted volumes anyway? Duh.

type ./ggsci
> info all -- (gives you everything you need to know about what is running)

I found the easiest config was to walk through the steps in the install guide, then use the first example to create a reporting system in the admin guide. It's so simple and straightforward.
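To give a taste of how small a working config is, here is a sketch of an Extract parameter file - the group name, user, password, and trail path are all hypothetical:

```
EXTRACT ext1
USERID ggadmin, PASSWORD ggpassword
EXTTRAIL ./dirdat/aa
TABLE schema1.*;
```

You register it from ggsci with ADD EXTRACT ext1, TRANLOG, BEGIN NOW and ADD EXTTRAIL ./dirdat/aa, EXTRACT ext1.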


Streams:
Create dml handlers or apply handlers, and have fun debugging them looking for errors in v$applyblahblah, v$captureblahblah...

Golden Gate:
Add a line like this to your source to only pull rows where col1 is greater than 1000:
TABLE schema1.table1, FILTER(col1 > 1000);
Other transformations are equally simple. Exclude a column?
TABLE schema1.table1, COLSEXCEPT (col2);

...send data from one schema to another?
MAP schema1.*, TARGET schema2.*;
...map a column from one table to another?
MAP schema1.tab1, TARGET schema1.tab2, COLMAP (USEDEFAULTS, tab2_col = tab1_col);

My experience with GG proved that GG is faster than Streams, even when using real-time downstream capture. Oh, that's another thing - ever struggled setting up redo transport services for real-time downstream capture in Streams? Yeah... none of that with GG. It does not use any Oracle-specific mechanism to transport the data. There is a manager process on each server involved in the replication that listens on a user-defined port (default 7809). The extract process reads from the online redo log (or the archive log if necessary) and sends what are called trail files over to the target. Those trail files are then read by the replicat process and applied on the target. Overall, GG looks very promising.

For more information on Golden Gate, see Oracle's Golden Gate documentation.

Friday, December 11, 2009

DB Links and streams not working

I am probably not the only person who has experienced frustration when trying to set up db links between databases on different hosts in order to use streams. Streams requires that global_names be set to true. If you remember this simple rule, your life will be simpler when trying to diagnose errors like:

ERROR at line 1:
ORA-02085: database link SOURCE.SJM.COM connects to SOURCE

Rule (from Oracle's CREATE DATABASE LINK reference): "If you specify only the database name, then Oracle Database implicitly appends the database domain of the local database."

This rule is probably where most of our problems stem from. It is why, when you create a db link named "SOURCE", you see "SOURCE.DOMAIN.COM" if you do a select db_link from dba_db_links. You will also notice that "DOMAIN.COM" matches the value shown by "show parameter db_domain".

Secondly, the next line from the Oracle reference states: "If the value of the GLOBAL_NAMES initialization parameter is TRUE, then the database link must have the same name as the database to which it connects." This explains why the name of the db link you create MUST match the value returned from "select * from global_name" on the database the link connects to.
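Putting the two rules together, here is a sketch of a link that satisfies GLOBAL_NAMES - the user, password, and TNS alias are hypothetical:

```sql
-- On the remote database: find its global name
SQL> select * from global_name;   -- e.g. SOURCE.SJM.COM

-- On the local database: the link name must match that global name
SQL> create database link SOURCE.SJM.COM
     connect to repadmin identified by repadmin_pw
     using 'source_tns';

-- Quick sanity check
SQL> select * from dual@SOURCE.SJM.COM;
```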