Tips and wisdom from 15 years of DBA experience

Friday, December 11, 2009

DB Links and streams not working

I am probably not the only person who has experienced frustration when trying to set up db links between databases on different hosts in order to use streams. Streams requires that global_names be set to true. If you remember this simple rule, your life will be simpler when trying to diagnose errors like:

ERROR at line 1:
ORA-02085: database link SOURCE.SJM.COM connects to SOURCE

Rule (from Oracle's CREATE DATABASE LINK reference): "If you specify only the database name, then Oracle Database implicitly appends the database domain of the local database."

This rule is probably where most of our problems stem from. It is why, when you create a db link named "SOURCE", you see "SOURCE.DOMAIN.COM" if you do a select db_link from dba_db_links. You will also notice that "DOMAIN.COM" matches the value shown by "show parameter db_domain".

Secondly, the next line from the Oracle reference states: "If the value of the GLOBAL_NAMES initialization parameter is TRUE, then the database link must have the same name as the database to which it connects." This explains why the db link that you created MUST be the same as the value returned from "select * from global_name" on the database the db link is connecting to.
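As a quick sanity check (the object names here are illustrative, not from any real system), you can confirm the remote database's global name and then create the link to match it exactly:

```sql
-- On the database the link will point TO, find its global name:
select * from global_name;
-- e.g. SOURCE.DOMAIN.COM

-- On the local database, name the link exactly that:
create database link source.domain.com
  connect to strmadmin identified by xxxxxxx using 'source';
```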

Friday, December 4, 2009

How to view Grid Control agent "raw" metrics

Sometimes Alerts will not clear properly in OEM; specifically, I have seen this happen with regard to invalid objects. In order to find out what is really going on, it is sometimes helpful to look at the actual data that the agent is presenting to OEM for debugging. It is simple to do this, as described in Metalink note 276350.1:

First, check whether Metric Browsing is enabled on the agent. The server name below is the server that you are monitoring - the server where the agent is running, NOT the oem repository server:


If you get a 403 error from this url, then do the following on that server:

$ vi /sysman/config/

Then find the line that reads:

# To enable the metric browser, uncomment the following line
# This is a reloadable parameter

...and un-comment this line:


Then reload the agent:

$ <AGENT_HOME>/bin/emctl reload agent

You should now be able to access the URL that returned a 403 previously. It will list all of the various metrics that the agent has, so you can see where any issues are. In my case, it was very helpful: I was getting invalid object warnings, but through sqlplus I could query dba_objects and see that there were no invalids in the schema it was complaining about. It turns out that the agent was viewing BIN$... objects from the recycle bin as invalid. I cleared the recycle bin, and voila! I was able to click the "reevaluate alert" button and clear the alert. Problem solved.

Wednesday, December 2, 2009

Remove Agent from Grid Control

I had a circumstance where the hostname changed on a box, and although there was only one agent on the box, grid control was confused about the original agent, and its status was "Agent Unreachable". Oddly, it showed that the last upload time was the same as the reachable agent's. The solution is quite simple: perform the following steps as indicated in metalink doc 735062.1:

"After any change on the host configuration, apply the following steps.

In Grid Control navigate to:

Deployment Tab
-> Refresh Host Configuration link
-> Select the host in the "Available Host" menu
-> Click on "Refresh Hosts" button.

Make sure that the submitted job finishes correctly and check the host home page to confirm that the changes have been propagated to the repository"

It worked perfectly for me, and the old agent is gone from GC now. Yeah!

Thursday, October 22, 2009

How to set Oracle Apex Checkbox default value

As often is the case when I blog about something, I spent waaaaay too much time trying to figure out how to do something that is seemingly very simple. In this case my trivial requirement was to create a single checkbox that, when checked, puts "Yes" in the database, and when unchecked, puts "No" in the database. Here is how to do it:

- Create your Checkbox item.
- Make sure that the following are set in the "List of Values" section of the checkbox item:

Named LOV: - Select Named LOV -
Display Extra Values: No
Dynamic Translation: - Not Translated -
Number Of Columns: 1
Display Null: No
Null Display Value:
Null Return Value:
List of values definition: STATIC:;Yes

Save these changes, if you made any, and then add a computation to the page as follows:

Click the "Create" icon.
Choose "Item on This Page"
click next...

Compute Item: (The checkbox you want to modify)
Sequence: (choose the default)
Computation Point: After Submit
Computation Type: Static Assignment
click next...

Computation: (Value you want to return) for example: No
click next...

Condition Type: Value of Item in Expression 1 is NULL
Expression 1: (name of checkbox you want to modify. Example: P1_DATA_FG)
click create...

Voila! Give it a shot, and it should work for you. I learned all this stuff from here:

Craig Glendenning

Tuesday, October 13, 2009

ORA-22921 and streams

The most likely issue is that you are trying to stream between two different character sets. In my case, I ran into this when running dbms_streams_adm.maintain_schemas() with a source db on WE8MSWIN1252 and a destination on AL32UTF8. I had run csscan on the source schema and there were no issues or truncations returned, but the problem still haunted me. I read on an Oracle forum that there may be a bug associated with this, but when I tried to search on metalink, it just hung, thanks to the "New and Improved" metalink. Grrr. I will take a running Ford Focus over a non-running Cadillac any day of the week.

Saturday, October 3, 2009

Viewing SYSDBA and SYSOPER operations through Grid Control (OEM)

Auditors need to see that I track operations performed as SYSOPER or SYSDBA. Here is how I do it. First, in the database, log in as SYSDBA and issue:

SQL> alter system set audit_trail=xml scope=spfile;
SQL> alter system set audit_sys_operations=true scope=spfile;
SQL> shutdown immediate;
SQL> startup;

Ok, now be sure to create an OS-level space monitor to do something with those .xml files that will be rapidly filling your $ORACLE_BASE/admin/(db_name)/adump directory. I will explain in a moment why you need to use XML instead of audit_trail=DB. Now, go into Grid Control and click on the home page for the instance you just restarted. You must be sure to log into the database as SYSDBA and be sure to choose "Save as Preferred Credential" for this instance. Next, go to the Reports tab, and go to Reports -> Security -> Database Targets -> SYS User Operations. On the next page, click the flashlight and choose the instance you just restarted and click continue.

If you receive an error like "Failed to render element. Error: This report only applies to database with version 10.2 and up." it means that you did not properly log into the database instance as SYSDBA and set the preferred credentials. Try going back to the instance home page, logging out, then logging back in being sure to click "Save as Preferred Credential". I have seen cases where immediately after checking the checkbox, the page refreshes, and un-checks it. This can be easily overlooked while you are going to click the "Login" button. Now retry the steps above on the Reports tab. If you experience a problem, please post a comment on this page so someone can help you!

If all went well, you should now have a listing of your audit trail. I like to choose "Show All xxx" and then click the "Time" heading twice to sort by most recent activity. Voila! Now you know what is going on in your systems. The next step is to put alerts on the things that you care about, like "Page me when someone logs into my instance as SYSDBA or SYSOPER."

Ok, so why do you need to set audit_trail=XML? I believe it is a requirement by OEM. I don't recall the exact message, but setting audit_trail=DB did not display any records and gave a message indicating that you must set audit_trail to DB or XML. Since I was already set to DB, I decided to try XML, and it worked. Will some other version of GC show audit records with audit_trail=DB? I dunno.
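If you want to look at the same XML audit data from sqlplus, 10.2 exposes the audit files through the V$XML_AUDIT_TRAIL view. A sketch (I am going from memory on the column names - verify with DESC before relying on it):

```sql
-- Recent SYS/SYSDBA activity read from the XML audit files
select db_user, os_user, userhost, extended_timestamp
  from v$xml_audit_trail
 order by extended_timestamp desc;
```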

Wednesday, September 23, 2009

RMAN-05001 resolution with both db_file_name_convert and NOFILENAMECHECK

If you are experiencing RMAN-05001: auxiliary filename conflicts with a file used by the target database, then you may be as confused as I was about how to proceed. If you are converting ALL files from various locations to new locations, then use db_file_name_convert. If you want to retain the exact same locations with no conversion, use NOFILENAMECHECK. But what if you want to convert some, and not convert others? I was not sure whether NOFILENAMECHECK would then override db_file_name_convert, but it turns out that you can use both in concert. For those files that are in the same location, it will not complain, and the others that you want to convert will be converted properly from the db_file_name_convert entries in your pfile. Nice.
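As a sketch of what that combination looks like (the paths and the auxiliary database name are illustrative, not from my actual system):

```
# In the auxiliary instance pfile: convert only the files that move
db_file_name_convert=('/u01/app/oradata/','/u02/app/oradata/')

# Then in RMAN, suppress the name check for the files that stay put
RMAN> duplicate target database to auxdb nofilenamecheck;
```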

Thursday, September 17, 2009

How to connect to Active Directory through Oracle

Connecting to an AD server through Oracle can be accomplished using DBMS_LDAP. This particular example was tested in Oracle 9i, but will probably work in more recent versions of Oracle. Note that the l_ldap_user variable MUST contain the ldap domain.

I highly recommend parameterizing the password, and would make certain that I am using all security precautions recommended by Oracle before implementing in production.

-- 9/17/09 - Craig Glendenning - a test procedure based on Oracle sample code to check ldap connectivity
create or replace procedure test_ldap_connect as  -- procedure name is illustrative
  l_ldap_host   VARCHAR2(256) := '';
  l_ldap_port   VARCHAR2(256) := '389';
  l_ldap_user   VARCHAR2(256) := '';
  l_ldap_passwd VARCHAR2(256) := 'your_pw';
  l_retval      PLS_INTEGER;
  l_session     DBMS_LDAP.session;
begin
  -- Choose to raise exceptions.
  DBMS_LDAP.USE_EXCEPTION := TRUE;
  -- Connect to the LDAP server.
  l_session := DBMS_LDAP.init(l_ldap_host, l_ldap_port);
  DBMS_OUTPUT.PUT_LINE('Init done ..Session is ' || l_session);
  l_retval := DBMS_LDAP.simple_bind_s( ld     => l_session,
                                       dn     => l_ldap_user,
                                       passwd => l_ldap_passwd );
  -- Disconnect from the LDAP server.
  l_retval := DBMS_LDAP.unbind_s(l_session);
  dbms_output.put_line('All Done!!');
exception
  when others then
    raise_application_error(-20001,'Error - '||SQLCODE||' '||SQLERRM);
end;
/

Monday, September 14, 2009

What is Oracle Supplemental Logging?

Here is a quick synopsis of supplemental logging. I have simply distilled the information gathered from the "Oracle Database Utilities" manual, Chapter 17, for 10g R2. Because Streams uses LogMiner, and LogMiner requires supplemental logging, Streams requires supplemental logging.

Types of Supplemental Logging:
You can add supplemental logging at either the table level, or the database level.

Database Level Supplemental Logging:
There are two types of database level supplemental logging: minimal logging, and identification key logging. Identification key logging can impose significant overhead. Minimal Supplemental Logging "logs the minimal amount of information needed for LogMiner to identify, group, and merge the redo operations associated with DML changes." You turn it on with:


Database Level Identification Key logging is necessary when the redo logs will not be mined at the source database instance. Here are the options:

  • ALL system-generated unconditional supplemental log group


  • PRIMARY KEY system-generated unconditional supplemental log group


  • UNIQUE system-generated conditional supplemental log group


  • FOREIGN KEY system-generated conditional supplemental log group

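The corresponding commands, as I remember them from the manual, are:

```sql
alter database add supplemental log data (all) columns;
alter database add supplemental log data (primary key) columns;
alter database add supplemental log data (unique) columns;
alter database add supplemental log data (foreign key) columns;
```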

Table-level Supplemental Logging
Table-level Identification key Logging:
This configuration offers the same options as those provided at the database level and "whenever a column in any database table is changed, the entire row containing that column (except columns for LOBs, LONGs, and ADTs) will be placed in the redo log file".
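For example (the table name is illustrative):

```sql
alter table hr.employees add supplemental log data (all) columns;
```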


Table-level User-Defined Supplemental Log Groups.
This option gives maximum control over the columns that are logged. You get to specify which columns' before-images you wish to add to the redo record. Not only that, but there are unconditional and conditional log groups. Here is an example of an unconditional log group:
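Something like this (the group and table names are illustrative, modeled on the manual's example):

```sql
alter table hr.employees
  add supplemental log group emp_parttime
  (employee_id, last_name, department_id) always;
```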


This will add EMPLOYEE_ID, LAST_NAME, and DEPARTMENT_ID to the redo record *even* if none of those columns were modified - this is a result of the ALWAYS parameter. In contrast, here is an example of a conditional user-defined supplemental log group:
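The same group without the ALWAYS keyword makes it conditional (names again illustrative):

```sql
alter table hr.employees
  add supplemental log group emp_fulltime
  (employee_id, last_name, department_id);
```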


In this case, only when one of the three listed columns is updated will the before image of those columns be logged...and finally, here is how you "associate this column with other columns in the named supplemental log group such that any modification to the NO LOG column causes the other columns in the supplemental log group to be placed in the redo log file. This might be useful, for example, if you want to log certain columns in a group if a LONG column changes. You cannot supplementally log the LONG column itself; however, you can use changes to that column to trigger supplemental logging of other columns in the same row"
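As I recall it from the manual, the NO LOG form looks like this (names illustrative):

```sql
alter table hr.employees
  add supplemental log group emp_parttime
  (department_id no log, employee_id);
```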


Note the use of the "NO LOG" clause. This means that the DEPARTMENT_ID column will not be logged, but if it changes, be sure to log the EMPLOYEE_ID.

For more detail on Supplemental Logging, see the Oracle Docs Here.

Wednesday, September 2, 2009

WE8MSWIN1252, WE8ISO8859P1 and UTF-8

In an ideal world, all applications would be unicode compliant, but in reality, we sometimes have to deal with applications that do not do proper character set conversion when they are using an encoding like WE8MSWIN1252 and encounter a database in UTF-8. So here is a very good listing of the 27 code points where WE8MSWIN1252 differs from Unicode and WE8ISO8859P1. Symbols like the Euro symbol are not even defined in WE8ISO8859P1, so they will show up as an upside-down question mark under some circumstances. Those circumstances I have yet to fully understand - there is a lot of complexity in the conversion process.

Tuesday, September 1, 2009

How To Set Up Oracle Streams

The purpose of this document is to describe the procedures for setting up a streams environment, starting the various streams capture and apply processes, and monitoring the streams functionality through OEM. The document is broken into the following sections, which should be executed in sequence:
Cleanup: Use this section if you want to start from a clean system after a failed or previously installed setup.
Setup: This contains the information to prepare the streams environment to replicate from one schema to a different schema in a different database.
Testing: Add some testing data and verify the data made it through.
Monitoring: How to watch the Stream in action through OEM.
Additional Notes: If you want to change the schema names, etc., here are some tips.
Unknowns and things to learn: Anything related to additional research needed to understand something about streams.

Usage Notes: Anything that needs to be run on the source system will have a Source heading and anything that needs to be run on the target system will have a Target heading. This has been tested on Oracle only.



Cleanup:

Source:

$ sqlplus "/ as sysdba"
SQL> exec dbms_streams_adm.remove_streams_configuration;
SQL> drop user strmadmin cascade;
SQL> drop user streams_source cascade;


Target:

$ sqlplus "/ as sysdba"
SQL> exec dbms_streams_adm.remove_streams_configuration;
SQL> drop user strmadmin cascade;
SQL> drop user streams_source cascade;

If you hit any errors while trying to execute the above commands, issue the following, then try again:

$ sqlplus "/ as sysdba"
SQL> purge dba_recyclebin;

Sometimes during the run of dbms_streams_adm.maintain_schemas, things fail, but you can’t simply re-try the operation because it is tracked as a recoverable operation, so you have to do the following:


$ sqlplus "/ as sysdba"
SQL> alter system set global_names=false;
SQL> select script_id from DBA_RECOVERABLE_SCRIPT ;
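With the script_id in hand, you can then purge the failed recoverable operation. A sketch (the script_id value is whatever the query above returned; the placeholder below is obviously not a real id):

```sql
begin
  dbms_streams_adm.recover_operation(
    script_id      => hextoraw('your_script_id_here'),  -- value from DBA_RECOVERABLE_SCRIPT
    operation_mode => 'PURGE');
end;
/
```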

If you receive an error like this when executing the above command…

ERROR at line 1:
ORA-06550: line 1, column 8:
PLS-00352: Unable to access another database 'TARGET'
ORA-06550: line 1, column 8:
PLS-00201: identifier 'DBMS_STREAMS_RPC@TARGET' must be declared
ORA-06550: line 1, column 8:
PL/SQL: Statement ignored
ORA-06512: at "SYS.DBMS_STREAMS_MT", line 2255
ORA-06512: at "SYS.DBMS_STREAMS_ADM", line 2907
ORA-06512: at line 1

Then do this:

$ sqlplus "/ as sysdba"
SQL> create database link TARGET connect to system identified by xxxxxxx using 'target';

Then try the DBMS_STREAMS_ADM command again.


Setup:

Source and Target:
These commands must first be run on both systems. Note that the Source and Target databases can be on the same host, but this document assumes that you are using different databases. That is a much more robust proof of concept than replicating data within a single database.

$ sqlplus "/ as sysdba"
SQL> alter system set global_names=true;
SQL> alter system set streams_pool_size = 200m;


Add a tnsnames.ora entry for your target database. Note that, because streams requires global_names, the alias name must match the target sid. DO NOT use “TARGET” as the alias. It will not work.

target =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ussy-targethost)(PORT = 1524))
    (CONNECT_DATA =
      (SID = target)
    )
  )

$ tnsping target

Create a streams administrator user. This is where the various streams tables and objects will reside. Choose a different tablespace if users is not where you want that data stored.

$ sqlplus "/ as sysdba"
SQL> create user strmadmin identified by strmadmin default tablespace users;
SQL> grant connect, resource, dba to strmadmin;
SQL> grant select_catalog_role, select any dictionary to strmadmin;

SQL> begin
  dbms_streams_auth.grant_admin_privilege(
    grantee => 'strmadmin',
    grant_privileges => true);
end;
/

Create a user that will contain all testing source tables. We will add rows to a table to show how this table gets transferred by dbms_datapump during a later step.

SQL> create user streams_source identified by streams_source;
SQL> grant connect,resource to streams_source;
SQL> grant select on $awreport to streams_source;
(I chose this table because it has a BLOB column, which is always good to test)

SQL> connect streams_source/streams_source;
SQL> create table source_tab as select * from $awreport where dbms_lob.getlength(awlob) > 0;
SQL> alter table source_tab add primary key(ps#,gen#);
SQL> alter table source_tab add supplemental log data (primary key,unique) columns;

SQL> connect strmadmin/strmadmin;
Note that the name of the db link must match the tns alias you added earlier. You MUST change the names below to match your database!
SQL> create database link target connect to strmadmin identified by strmadmin using 'target';
SQL> create directory dp_streams as '/where/you/want/datapump/files';


Target:

Add a tnsnames.ora entry for your source database:

source =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ussy-sourcehost)(PORT = 1524))
    (CONNECT_DATA =
      (SID = source)
    )
  )

$ tnsping source

$ sqlplus "/ as sysdba"
SQL> create user strmadmin identified by strmadmin;
SQL> grant connect, resource, dba to strmadmin;
SQL> grant select_catalog_role, select any dictionary to strmadmin;

SQL> begin
  dbms_streams_auth.grant_admin_privilege(
    grantee => 'strmadmin',
    grant_privileges => true);
end;
/

SQL> connect strmadmin/strmadmin
Note that the name of the db link must match the tns alias you added earlier. You MUST change the names below to match your database!
SQL> create database link source connect to strmadmin identified by strmadmin using 'source';
SQL> create directory dp_streams as '/where/you/want/datapump/files';


$ sqlplus strmadmin/strmadmin

The following command will set up the capture and apply queues, etc. This will copy over a schema to the same schema name in another database. This procedure may take a long time to run, as it uses dbms_datapump to move the data between the two databases. See the dbms_streams_adm reference for the full list of parameters. There are a *lot* of them!
Note that the name of the source database and target database must match the tns alias you added earlier PLUS the domain name. It must be a fully qualified domain name specified below. The script will fail if you run it as-is: you MUST change the names below to match your database!

SQL> begin
  dbms_streams_adm.maintain_schemas(
    schema_names => 'streams_source',
    source_directory_object => 'dp_streams',
    destination_directory_object => 'dp_streams',
    source_database => '',
    destination_database => '',
    include_ddl => true);
end;
/

If you receive an error like this…

ERROR at line 1:
ORA-23616: Failure in executing block 6 for script
ORA-06512: at "SYS.DBMS_STREAMS_MT", line 7673
ORA-06512: at "SYS.DBMS_STREAMS_ADM", line 2606
ORA-06512: at "SYS.DBMS_STREAMS_ADM", line 2667
ORA-06512: at line 2

…then review the following table:
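The table I would check first (from memory - verify the name in your release) is DBA_RECOVERABLE_SCRIPT_ERRORS:

```sql
select script_id, block_num, error_message
  from dba_recoverable_script_errors;
```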


Source and Target:

Check the row counts to get a baseline on each system:

$ sqlplus streams_source/streams_source
SQL> select count(*) from source_tab;


Source:

$ sqlplus streams_source/streams_source
SQL> select max(ps#), max(gen#) from source_tab;
SQL> create sequence ps start with
SQL> create sequence gen start with
SQL> insert into source_tab select ps.nextval, gen.nextval,extnum,awlob,objname,partname from source_tab;
SQL> commit;
SQL> select count(*) from source_tab;


Check the row count in the target table. It could take several minutes for the data to propagate, depending on how much data has been inserted.

Target:

$ sqlplus streams_source/streams_source
SQL> select count(*) from source_tab;


Monitoring Using OEM:

From the OEM home page, go to:
• Targets Tab
• Databases tab in the upper left tab list
• Choose the source database
• Go to the Maintenance tab
• Choose “Management” under the Streams section (Login to the database as sys if you are asked to login)

You should now see tabs allowing you to drill into the capture, propagation and apply queues. Remember that the Apply queue will be on the target database in the setup we created above.

Monitoring Using strmmon:

Open two screens with strmmon, one connected to the source, and one to the target database.

$ strmmon -interval 1 -count 1000 -user sys -passw xxxxxxxx -dbname source -sysdba
(In another Shell…)
$ strmmon -interval 1 -count 1000 -user sys -passw xxxxxxxx -dbname target -sysdba

Additional Notes:

• Here is what you need to do to set up a custom transformation rule:


SQL> connect streams_source/streams_source
SQL> alter table source_tab add version varchar2(100);
SQL> connect strmadmin/strmadmin

-- This function assumes that you added a VERSION column to source_tab. You can modify as you wish - it's very simple, and a practically useless example - but you can see how data is transformed.
SQL> CREATE OR REPLACE FUNCTION strmadmin.transform_capture (in_any IN SYS.AnyData)
RETURN SYS.AnyData
IS
  lcr              SYS.LCR$_ROW_RECORD;
  rc               PLS_INTEGER;
  cmd              VARCHAR2(10);
  convert_value    SYS.AnyData;
  newvalue_anydata SYS.AnyData;
  ob_name          VARCHAR2(30);
  ob_owner         VARCHAR2(30);
  oldvalue_anydata SYS.AnyData;
BEGIN
  rc := in_any.GETOBJECT(lcr);
  ob_owner := lcr.GET_OBJECT_OWNER();
  ob_name  := lcr.GET_OBJECT_NAME();
  cmd      := lcr.GET_COMMAND_TYPE();
  convert_value := SYS.ANYDATA.ConvertVarchar2('converted!');
  newvalue_anydata := lcr.GET_VALUE('new','VERSION');
  oldvalue_anydata := lcr.GET_VALUE('old','VERSION');
  IF ((newvalue_anydata IS NOT NULL) OR (oldvalue_anydata IS NOT NULL)) THEN
    lcr.SET_VALUE( value_type   => 'new',
                   column_name  => 'VERSION',
                   column_value => convert_value);
  END IF;
  RETURN SYS.ANYDATA.ConvertObject(lcr);
END transform_capture;
/

SQL> set linesize 150
SQL> select rule_owner,rule_name, rule_type, streams_type, streams_name from dba_streams_schema_rules;

SQL> begin
2 dbms_streams_adm.set_rule_transform_function (
3 rule_name => 'STRMADMIN.',
4 transform_function => 'STRMADMIN.TRANSFORM_CAPTURE');
5 end;
6 /

In my case, I specified the DML capture rule, but you may put the transformation on other rules if you wish.

• If you use dbms_streams_adm.maintain_schemas and the data is not initially propagated, the most common reason is that the schema or table you are trying to propagate already exists in the target. If that is the case, you will need to use another method if you want to start streaming to that existing schema or table…you can’t use dbms_streams_adm.maintain_schemas. Look in the *.dlg log file in the source and target database server directory specified by dp_streams.

• If you want to know about the various methods that can be called on an lcr, see this:

• If you ever get into a state where the capture queue is sitting in “PAUSED FOR FLOW CONTROL”, the Apply queue is up and running and in “DEQUEUE MESSAGES”, first check the alert logs for both databases. I saw this situation in one case, and in the alert log for the source database, I saw “Propagation Schedule for (SYS.DB1$CAPQ, "SYS"."DB1$APPQ"@TARGET.YOURDOMAIN.COM) encountered following error: ORA-01031: insufficient privileges”

• Metalink Doc 418985.1 explains the following error, which I saw in the alert log immediately after dbms_streams_adm.maintain_schemas successfully completed and then my instance died!

LOGMINER: Begin mining logfile for session 74 thread 1 sequence 2152, /u16/db01/redo03.log
Thu Aug 20 18:40:22 2009
Errors in file /u16/app/oracle/admin/db1/bdump/db1_pmon_17365.trc:
ORA-00600: internal error code, arguments: [kotgtoid-x-1], [0], [], [], [], [], [], []
Thu Aug 20 18:40:23 2009
PMON: terminating instance due to error 472
Instance terminated by PMON, pid = 17365

Thursday, August 27, 2009

How Do I Shrink Datafiles to Reclaim space?

There is probably a lot of available space that could be re-claimed by your database server operating system, but it is locked up in Oracle datafiles. Fortunately it is possible to shrink Oracle datafiles. This works in 10.2 and possibly earlier versions, but I am not sure - if you are on anything earlier than 10.2, you have other issues to worry about : )

Here is what this Tom Kyte script will give you:

  • The block size your db is using
  • The possible savings by shrinking each datafile and a total possible savings
  • The SQL to shrink each datafile as far as it can go

-- This script was written by Tom Kyte and retrieved from

set pages 0
set lin 150
set verify off
column file_name format a60 word_wrapped
column smallest format 999,990 heading "Smallest|Size|Poss."
column currsize format 999,990 heading "Current|Size"
column savings  format 999,990 heading "Poss.|Savings"
column sum format 999,999,999
break on report
compute sum of savings on report

column value new_val blksize
select value from v$parameter where name = 'db_block_size'
/

select file_name,
       ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) smallest,
       ceil( blocks*&&blksize/1024/1024) currsize,
       ceil( blocks*&&blksize/1024/1024) -
       ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) savings
from dba_data_files a,
     ( select file_id, max(block_id+blocks-1) hwm
         from dba_extents
        group by file_id ) b
where a.file_id = b.file_id(+)
/

column cmd format a95 word_wrapped

select 'alter database datafile '''||file_name||''' resize ' ||
       ceil( (nvl(hwm,1)*&&blksize)/1024/1024 )  || 'm;' cmd
from dba_data_files a,
     ( select file_id, max(block_id+blocks-1) hwm
         from dba_extents
        group by file_id ) b
where a.file_id = b.file_id(+)
  and ceil( blocks*&&blksize/1024/1024) -
      ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) > 0
/

Wednesday, August 19, 2009

HIPAA Compliance Documents

HIPAA, the “Health Insurance Portability and Accountability Act” of 1996, is a law, and the U.S. Department of Health and Human Services created a set of rules that need to be followed in order to adhere to the law. Of course, your company may have additional standards to comply with, so don't let this information override anything that your company may require.

Must an entity (business, person, etc.) comply with HIPAA?

Where can I read a Summary of the Rules that need to be followed to comply with HIPAA?

I am so anxious to read more! Where can I read the law?

Wow! This is so exciting! Where can I read the Rules that the US Dept. of Health and Human Services developed in order to help Health care providers adhere to the law?

Tuesday, August 18, 2009

Audit Logons in Oracle

There is more than one way to audit logons in Oracle. I will provide what I believe to be the simplest method. This will give you the who, when, and where of Oracle logons. A couple of notes about this method: it will store a new record at each Oracle session creation, and if you use OEM, you may get spammed with records from DBSNMP (or whatever user you are telling OEM to log in as).


$ sqlplus "/ as sysdba"
SQL> show parameter audit_trail;
(if it shows "db", you are done with setup. Go onto the next section)
SQL> alter system set audit_trail=db scope=spfile;
SQL> shutdown;
SQL> startup;
SQL> show parameter audit_trail;
(verify that the parameter equals "db")

Turning on Auditing:

$ sqlplus "/ as sysdba"
SQL> audit create session by session;
SQL> select count(*) from dba_audit_trail;
(should be zero)
SQL> exit;
$ sqlplus non-sys-user/pw
SQL> exit;
$ sqlplus "/ as sysdba"
SQL> select count(*) from dba_audit_trail;
(should be non-zero)

Done. Short and Sweet.
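To actually pull out the who, when, and where, a query along these lines works (these columns are all in 10.2's DBA_AUDIT_TRAIL):

```sql
-- Most recent logons first
select username, os_username, userhost, terminal,
       timestamp, action_name, returncode
  from dba_audit_trail
 order by timestamp desc;
```

A non-zero returncode is the ORA- error number for a failed logon attempt, which is handy for spotting password-guessing.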


If you want to capture logons from SYS or anyone connecting with SYSOPER or SYSDBA, you need to issue the following, and bounce the database:

alter system set audit_sys_operations=true scope=spfile;

Monday, August 10, 2009

Oracle Apex on 10g using the PL/SQL Gateway? Yes!

Oracle Application Express is a very solid tool for rapid web application development that does not require knowledge of a programming language or HTML. It comes free with the Oracle database and handles session tracking, security, database access, etc. very simply and intuitively. For example, I had a form with pick lists, and I wanted to create a modifiable version of that form. I created an updateable report so that for each row you can click a little modify icon to the left of the row, which brings up the form. The form had pick lists that were populated from a lookup table, and I was impressed that Oracle "knew" that I did not want the values from the lookup table but from the table that contained the row I just clicked on to modify. Nice.

Below are instructions for installing Apex 3.2 on Oracle 10g using the Embedded PL/SQL Gateway (EPG) and the http server that comes with the XML DB. This is not an "officially" supported configuration by Oracle; they do support APEX on 10g using an external web server. I have at least 5 forms used in production now that confirm that 10g using the EPG works, and I am going to show you exactly what you need to do to make it work. The Oracle documentation lacks some crucial elements, and the diagnostic information produced by the XML DB http server is non-existent as far as I can discern. If anyone knows how to get the equivalent of an apache http log from the XML DB http server, please post a comment to share your wisdom!

APEX installation instructions:

First, download Oracle Application Express from Oracle OTN then unzip Oracle APEX and begin the installation process as follows:

$ unzip
$ cd apex
$ sqlplus "/ as sysdba"
SQL> @apexins sysaux sysaux temp /i/
SQL> @apxchpwd
(jot down the password that you chose)
SQL> @apex_epg_config /stuff


VERY IMPORTANT!! the /stuff directory above MUST be the directory that is one level above where the "apex" directory is located that you unzipped during installation. For example, if your apex directory is in /home/oracle/stuff, then choose /home/oracle/stuff. DO NOT choose /home/oracle/stuff/apex. Just a common stumbling block...

$ sqlplus "/ as sysdba"
SQL> @apxldimg.sql /stuff
(same dir you listed in previous step)
SQL> @apxxepwd.sql admin


This bit of code was off the OTN forums, and is necessary to prevent the browser from popping up the annoying authentication dialog for user anonymous 5 times before letting you get to the web page authentication.

SET SERVEROUTPUT ON
DECLARE
  l_configxml XMLTYPE;
  l_value     VARCHAR2(5) := 'true'; -- (true/false)
BEGIN
  l_configxml := DBMS_XDB.cfg_get();

  IF l_configxml.existsNode('/xdbconfig/sysconfig/protocolconfig/httpconfig/allow-repository-anonymous-access') = 0 THEN
    -- Add missing element.
    SELECT insertChildXML(l_configxml,
                          '/xdbconfig/sysconfig/protocolconfig/httpconfig',
                          'allow-repository-anonymous-access',
                          XMLType('<allow-repository-anonymous-access xmlns="http://xmlns.oracle.com/xdb/xdbconfig.xsd">' ||
                                  l_value ||
                                  '</allow-repository-anonymous-access>'),
                          'xmlns="http://xmlns.oracle.com/xdb/xdbconfig.xsd"')
    INTO l_configxml
    FROM dual;

    DBMS_OUTPUT.put_line('Element inserted.');
  ELSE
    -- Update existing element.
    SELECT updateXML(DBMS_XDB.cfg_get(),
                     '/xdbconfig/sysconfig/protocolconfig/httpconfig/allow-repository-anonymous-access/text()',
                     l_value,
                     'xmlns="http://xmlns.oracle.com/xdb/xdbconfig.xsd"')
    INTO l_configxml
    FROM dual;

    DBMS_OUTPUT.put_line('Element updated.');
  END IF;

  DBMS_XDB.cfg_update(l_configxml);
END;
/



If you have problems logging in, or with images not displaying correctly:

SQL> select * from dba_epg_dad_authorization;

DAD_NAME       USERNAME
-------------- ------------------------------
APEX           ANONYMOUS


If the query above returns no rows or different output, do this:

begin
  dbms_epg.authorize_dad (
    dad_name => 'APEX',
    user     => 'ANONYMOUS');
end;
/


Re-check the dba_epg_dad_authorization query:

SQL> select * from dba_epg_dad_authorization;


If you are still having problems, check out the various dad attributes:

set serveroutput on size 1000000
DECLARE
  l_dadNames   DBMS_EPG.varchar2_table;
  l_paths      DBMS_EPG.varchar2_table;
  l_attrNames  DBMS_EPG.varchar2_table;
  l_attrValues DBMS_EPG.varchar2_table;
BEGIN
  DBMS_EPG.get_dad_list(l_dadNames);
  FOR d IN 1..l_dadNames.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE('DAD: '||l_dadNames(d));
    DBMS_EPG.get_all_dad_attributes(l_dadNames(d), l_attrNames, l_attrValues);
    FOR a IN 1..l_attrNames.COUNT LOOP
      DBMS_OUTPUT.PUT_LINE('- '||RPAD(l_attrNames(a),25)||' : '||l_attrValues(a));
    END LOOP;
    DBMS_EPG.get_all_dad_mappings(l_dadNames(d), l_paths);
    FOR a IN 1..l_paths.COUNT LOOP
      DBMS_OUTPUT.PUT_LINE('- '||RPAD('mapping',25)||' : '||l_paths(a));
    END LOOP;
    FOR a IN ( SELECT username FROM dba_epg_dad_authorization WHERE dad_name = l_dadNames(d) ) LOOP
      DBMS_OUTPUT.PUT_LINE('- '||RPAD('authorized',25)||' : '||a.username);
    END LOOP;
  END LOOP;
END;
/


If nothing from this output helps in your troubleshooting efforts, you can turn on the highest level of logging (debugging), which in theory writes to udump, though I have not seen anything show up there...:

SQL> execute dbms_epg.set_global_attribute('log-level', 7);

Friday, August 7, 2009

Proper Pagination design in Oracle using Analytic Functions

A common web development problem is how to prevent slogging the database with full table scans when trying to paginate a web page. I defer the solution to Tom Kyte, re-posting the section "Pagination in Getting Rows N Through M" in this article:

Pagination in Getting Rows N Through M

The last thing I'll take a look at is performing pagination with analytics. Pagination is common in Web-based applications: The user inputs a set of criteria, which runs a query and then lets the user click the Previous and Next buttons to page through the result set. To achieve this paging functionality, the application needs to be able to get a certain set of rows from the query. In the prior column dealing with ROWNUM and in the "Top-n Query" section above, I demonstrated how to achieve that pagination, but it required two levels of inline views and could be considered rather unwieldy. A syntactically easier way to write that pagination query is

select *
  from ( select /*+ first_rows(25) */
                your_columns,
                row_number() over (order by something unique) rn
           from your_tables )
 where rn between :n and :m
 order by rn;

For example, suppose you wanted to paginate through the ALL_OBJECTS view, wanted to show 25 rows on a page, and needed to show page 5. The query would be

SQL> variable n number
SQL> variable m number
SQL> exec :n := 101; :m := 125
PL/SQL procedure successfully completed.

SQL> select *
2 from (
3 select /*+ first_rows(25) */
4 object_id,object_name,
5 row_number() over
6 (order by object_id) rn
7 from all_objects)
8 where rn between :n and :m
9 order by rn;

OBJECT_ID OBJECT_NAME  RN
--------- ----------- ---
      102 I_SEQ1      101
      103 I_OBJAUTH1  102
...
      124 I_ACCESS1   123
      125 I_TRIGGER1  124
      126 I_TRIGGER2  125

25 rows selected.

I chose 25 for the FIRST_ROWS hint because that is my page size and pagination-style queries should be optimized to get the first page as fast as possible. I assign ROW_NUMBER to each row after sorting by OBJECT_ID. Then I use a simple BETWEEN to retrieve the specific rows I need (101 through 125 in this case).
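The arithmetic that maps a page number to the :n and :m bind values is simple; here is a small sketch (Python, with a hypothetical helper name of my own) of how an application might compute the bounds:

```python
def page_bounds(page, page_size=25):
    """Compute the row_number() bounds (:n, :m) for a 1-based page.
    This helper is illustrative, not part of the original article."""
    n = (page - 1) * page_size + 1   # first row of the page
    m = page * page_size             # last row of the page
    return n, m

# Page 5 at 25 rows per page covers rows 101 through 125,
# matching the :n := 101; :m := 125 example above.
```

Binding n and m (rather than concatenating them into the SQL) keeps the pagination query shareable in the library cache across pages.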

You need to order by something unique for these pagination queries, so that ROW_NUMBER is assigned deterministically to the rows each and every time. Otherwise, you end up with a problem similar to what I pointed out with the salary top-n query. For more details on deterministic queries and complete information on using analytic functions, see chapter 21 of the Data Warehousing Guide. In my opinion, analytic functions are the coolest thing to happen to SQL since the SELECT keyword was introduced.

Thursday, August 6, 2009

How do I apply metric thresholds to multiple databases in Oracle Enterprise Manager (OEM)?

I am very much a command line junkie, but have been warming up to Oracle Enterprise Manager quite a bit lately. My ol' monitoring toolbox of scripts is getting good competition from the performance tab, with the advent of active session history graphing and the ability to look at the state of a database at prior moments in the past. That said, there was one thing that took me way too long to figure out this morning. I simply wanted to change a metric warning and critical threshold on Process Limit Usage (%) and apply it to about 20 databases. No big deal. However, I could not do it through any of the related links on the database home page. Googling for related keywords yielded no direct hits. (If you are reading this from a Google search, then you owe me $42.6 billion.) Let me introduce you to OEM Monitoring Templates.

From OEM (Tested on EM 10g):

1. Click the small "Setup" link in the upper righthand corner

2. From the vertical navigation bar, click Monitoring Templates

3. Click "Create". Click the little flashlight to search for a target that you are going to use to build your metrics template from and click "Continue".

4. Choose a name for the template, and click on Metric Thresholds. You can modify the values here that were copied from your target. What I did was choose "Select All", unselect the metric and values I wanted to apply to all databases, and click the "Remove Metrics from Template" button to strip out the rest. Then I hit "OK".

5. Here is where it gets good: you can compare this template against your hosts by clicking "Compare Settings" to diff them before you blindly apply it to your entire set of DBs. Very Nice. Click OK when you are done.

6. You should be back on the "Monitoring Templates" screen where you can now click "Apply" to apply this template to various targets.

It would be nice to have this a bit more visible from within OEM. But then again it kinda makes me excited about finding new nuggets of simplification. Like a scavenger hunt, I guess...sort of?...nevermind.