Tips and wisdom from 15 years of DBA experience

Wednesday, September 23, 2009

RMAN-05001 resolution with both db_file_name_convert and NOFILENAMECHECK

If you are experiencing RMAN-05001: auxiliary filename conflicts with a file used by the target database, then you may be as confused as I was about how to proceed. If you are converting ALL files from various locations to new locations, use db_file_name_convert. If you want to retain the exact same locations with no conversion, use NOFILENAMECHECK. But what if you want to convert some files and not others? I was not sure whether NOFILENAMECHECK would override db_file_name_convert, but it turns out that you can use both in concert. RMAN will not complain about the files that stay in the same location, and the files you do want to relocate will be converted according to the db_file_name_convert entries in your pfile. Nice.
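
Here is a minimal sketch of the combination, assuming a source database prod being duplicated to dup (all names, passwords, and paths here are placeholders). The conversion pairs go in the auxiliary instance's pfile:

db_file_name_convert=('/u01/oradata/prod/','/u02/oradata/dup/')

and NOFILENAMECHECK goes on the DUPLICATE command itself:

$ rman target sys/xxxxxx@prod auxiliary sys/xxxxxx@dup
RMAN> DUPLICATE TARGET DATABASE TO dup NOFILENAMECHECK;

Files matching a db_file_name_convert pattern are relocated; everything else keeps its original path without raising RMAN-05001.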

Thursday, September 17, 2009

How to connect to Active Directory through Oracle

Connecting to an AD server through Oracle can be accomplished using DBMS_LDAP. This particular example was tested in Oracle 9i, but will probably work in more recent versions of Oracle. Note that the l_ldap_user variable MUST contain the ldap domain.

I highly recommend parameterizing the password, and would make certain that I am using all security precautions recommended by Oracle before implementing in production.


-- 9/17/09 - Craig Glendenning - a test procedure based on Oracle sample code to check ldap connectivity
create or replace PROCEDURE LDAP_CONNECT AS
  l_ldap_host   VARCHAR2(256) := 'YOUR_LDAP_DOMAIN.com';
  l_ldap_port   PLS_INTEGER   := 389;  -- standard (non-SSL) LDAP port
  l_ldap_user   VARCHAR2(256) := 'ad_user@YOUR_LDAP_DOMAIN.com';
  l_ldap_passwd VARCHAR2(256) := 'your_pw';
  l_retval      PLS_INTEGER;
  l_session     DBMS_LDAP.session;
BEGIN
  -- Choose to raise exceptions.
  --DBMS_LDAP.USE_EXCEPTION := TRUE;
  DBMS_OUTPUT.PUT_LINE('Connecting');
  -- Connect to the LDAP server.
  l_session := DBMS_LDAP.init(l_ldap_host, l_ldap_port);
  DBMS_OUTPUT.PUT_LINE('Init done ..Session is ' || l_session);
  -- Authenticate; the dn MUST include the ldap domain (user@domain).
  l_retval := DBMS_LDAP.simple_bind_s( ld     => l_session,
                                       dn     => l_ldap_user,
                                       passwd => l_ldap_passwd);
  DBMS_OUTPUT.PUT_LINE('Connected');
  -- Disconnect from the LDAP server.
  l_retval := DBMS_LDAP.unbind_s(l_session);
  DBMS_OUTPUT.PUT_LINE('L_RETVAL: ' || l_retval);
  DBMS_OUTPUT.PUT_LINE('All Done!!');
EXCEPTION
  WHEN OTHERS THEN
    raise_application_error(-20001, 'Error - ' || SQLCODE || ' ' || SQLERRM);
END LDAP_CONNECT;
/
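
To exercise it from SQL*Plus:

SQL> set serveroutput on
SQL> exec ldap_connect

With USE_EXCEPTION left at its default, a failed bind generally surfaces as a non-zero return code from simple_bind_s rather than an exception; uncomment the USE_EXCEPTION line if you would rather have failures raised.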

Monday, September 14, 2009

What is Oracle Supplemental Logging?

Here is a quick synopsis of supplemental logging. I have simply distilled the information gathered from the "Oracle Database Utilities" manual, Chapter 17, for 10g R2. Because Streams uses LogMiner, and LogMiner requires supplemental logging, Streams requires supplemental logging.

Types of Supplemental Logging:
You can add supplemental logging at either the table level or the database level.

Database Level Supplemental Logging:
There are two types of database-level supplemental logging: minimal logging and identification key logging. Identification key logging can impose significant overhead. Minimal supplemental logging "logs the minimal amount of information needed for LogMiner to identify, group, and merge the redo operations associated with DML changes." You turn it on with:


ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;


Database Level Identification Key logging is necessary when the redo logs will not be mined at the source database instance. Here are the options:


  • ALL system-generated unconditional supplemental log group

  • SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

  • PRIMARY KEY system-generated unconditional supplemental log group

  • SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

  • UNIQUE system-generated conditional supplemental log group

  • SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;

  • FOREIGN KEY system-generated conditional supplemental log group

  • SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;
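
You can confirm which of these database-level options are enabled by querying V$DATABASE:

SQL> select supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui, supplemental_log_data_fk, supplemental_log_data_all from v$database;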


Table-level Supplemental Logging
Table-level Identification Key Logging:
This configuration offers the same options as those provided at the database level and "whenever a column in any database table is changed, the entire row containing that column (except columns for LOBs, LONGs, and ADTs) will be placed in the redo log file".

ALTER TABLE HR.EMPLOYEES ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;


Table-level User-Defined Supplemental Log Groups:
This option gives maximum control over the columns that are logged. You get to specify which columns' before-images you wish to add to the redo record. Not only that, but there are unconditional and conditional log groups. Here is an example of an unconditional log group:

ALTER TABLE HR.EMPLOYEES
ADD SUPPLEMENTAL LOG GROUP emp_parttime (EMPLOYEE_ID, LAST_NAME,
DEPARTMENT_ID) ALWAYS;

This will add EMPLOYEE_ID, LAST_NAME, and DEPARTMENT_ID to the redo record *even* if none of those columns were modified - this is a result of the ALWAYS parameter. In contrast, here is an example of a conditional user-defined supplemental log group:

ALTER TABLE HR.EMPLOYEES
ADD SUPPLEMENTAL LOG GROUP emp_fulltime (EMPLOYEE_ID, LAST_NAME,
DEPARTMENT_ID);

In this case, the before-images of those columns are logged only when one of the three listed columns is updated. And finally, here is how you "associate this column with other columns in the named supplemental log group such that any modification to the NO LOG column causes the other columns in the supplemental log group to be placed in the redo log file. This might be useful, for example, if you want to log certain columns in a group if a LONG column changes. You cannot supplementally log the LONG column itself; however, you can use changes to that column to trigger supplemental logging of other columns in the same row":

ALTER TABLE HR.EMPLOYEES
ADD SUPPLEMENTAL LOG GROUP emp_parttime(
DEPARTMENT_ID NO LOG, EMPLOYEE_ID);


Note the use of the "NO LOG" clause. This means that the DEPARTMENT_ID column itself will not be logged, but any change to DEPARTMENT_ID causes EMPLOYEE_ID to be placed in the redo log.
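
To see which supplemental log groups exist on a table and which columns they cover (including the LOG / NO LOG property), you can check the data dictionary:

SQL> select log_group_name, always from dba_log_groups where owner = 'HR' and table_name = 'EMPLOYEES';
SQL> select log_group_name, column_name, logging_property from dba_log_group_columns where owner = 'HR' and table_name = 'EMPLOYEES';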

For more detail on supplemental logging, see the "Oracle Database Utilities" manual cited above.

Wednesday, September 2, 2009

WE8MSWIN1252, WE8ISO8859P1 and UTF-8

In an ideal world, all applications would be Unicode compliant, but in reality we sometimes have to deal with applications that do not do proper character set conversion when they are using an encoding like WE8MSWIN1252 and encounter a database in UTF-8. So here is a very good listing of the 27 code points where WE8MSWIN1252 differs from Unicode and WE8ISO8859P1. Symbols like the Euro symbol are not even defined in WE8ISO8859P1, so they will show up as an upside-down question mark under some circumstances. I have yet to fully understand those circumstances; there is a lot of complexity in the conversion process.
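
If you want to poke at this yourself, CONVERT and DUMP make the difference visible. A rough sketch, assuming a single-byte database character set (byte 0x80 is the Euro sign in WE8MSWIN1252 but is not a defined character in WE8ISO8859P1, so the conversion has to substitute a replacement character):

SQL> select dump(convert(chr(128), 'WE8ISO8859P1', 'WE8MSWIN1252'), 16) from dual;

The exact replacement byte you get back depends on the character sets involved, which is part of why the symptom varies.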


Tuesday, September 1, 2009

How To Set Up Oracle Streams

The purpose of this document is to describe the procedures for setting up a Streams environment, starting the various Streams capture and apply processes, and monitoring the Streams functionality through OEM. The document is broken into the following sections, which should be executed in sequence:
Cleanup: Use this section if you want to start from a clean system after a failed or previously installed setup.
Setup: This contains the information to prepare the Streams environment to replicate from one schema to a different schema in a different database.
Testing: Add some testing data and verify that the data made it through.
Monitoring: How to watch the Stream in action through OEM.
Additional Notes: Tips for changing the schema names and handling other variations.
Unknowns and things to learn: Anything related to additional research needed to understand Streams more fully.

Usage Notes: Anything that needs to be run on the source system will have a Source heading and anything that needs to be run on the target system will have a Target heading. This has been tested on Oracle 10.2.0.4 only.


Cleanup

Source:

$ sqlplus "/ as sysdba"
SQL> exec dbms_streams_adm.remove_streams_configuration;
SQL> drop user strmadmin cascade;
SQL> drop user streams_source cascade;


Target:

$ sqlplus "/ as sysdba"
SQL> exec dbms_streams_adm.remove_streams_configuration;
SQL> drop user strmadmin cascade;
SQL> drop user streams_source cascade;


If you hit any errors while trying to execute the above commands, issue the following, then try again:

$ sqlplus "/ as sysdba"
SQL> purge dba_recyclebin;


Sometimes dbms_streams_adm.maintain_schemas fails partway through, and you can't simply re-try the operation because the failed run is tracked as a recoverable operation; you first have to purge it as follows:

Source:

$ sqlplus "/ as sysdba"
SQL> alter system set global_names=false;
SQL> select script_id from DBA_RECOVERABLE_SCRIPT ;
SQL> exec DBMS_STREAMS_ADM.RECOVER_OPERATION('&script_id','PURGE');


If you receive an error like this when executing the above command…

ERROR at line 1:
ORA-06550: line 1, column 8:
PLS-00352: Unable to access another database 'TARGET'
ORA-06550: line 1, column 8:
PLS-00201: identifier 'DBMS_STREAMS_RPC@TARGET' must be declared
ORA-06550: line 1, column 8:
PL/SQL: Statement ignored
ORA-06512: at "SYS.DBMS_RECOVERABLE_SCRIPT", line 71
ORA-06512: at "SYS.DBMS_STREAMS_MT", line 2255
ORA-06512: at "SYS.DBMS_STREAMS_ADM", line 2907
ORA-06512: at line 1

Then do this:

$ sqlplus "/ as sysdba"
SQL> create database link TARGET connect to system identified by xxxxxxx using 'target';


Then try the DBMS_STREAMS_ADM command again.

Setup

Source and Target:
These commands must first be run on both systems. Note that the Source and Target databases can be on the same host, but this document assumes that you are using different databases. That is a much more robust proof of concept than replicating data within a single database.

$ sqlplus "/ as sysdba"
SQL> alter system set global_names=true;
SQL> alter system set streams_pool_size = 200m;


Source:

Add a tnsnames.ora entry for your target database. Note that because Streams requires global_names, the alias name must match the target database name. Do not literally use "TARGET" as the alias unless your target database is actually named target; it will not work otherwise.

TARGET =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = ussy-targethost)(PORT = 1524))
)
(CONNECT_DATA =
(SID = target)
(SERVER = DEDICATED)
)
)

$ tnsping target


Create a streams administrator user. This is where the various streams tables and objects will reside. Choose a different tablespace if users is not where you want that data stored.

$ sqlplus "/ as sysdba"
SQL> create user strmadmin identified by strmadmin default tablespace users;
SQL> grant connect, resource, dba to strmadmin;
SQL> grant select_catalog_role, select any dictionary to strmadmin;

SQL> begin
dbms_streams_auth.grant_admin_privilege
(
grantee => 'strmadmin',
grant_privileges => true
);
end;
/


Create a user that will contain all testing source tables. We will add rows to a table to show how this table gets transferred by dbms_datapump during a later step.

SQL> create user streams_source identified by streams_source;
SQL> grant connect,resource to streams_source;
SQL> grant select on sys.aw$awreport to streams_source;
(I chose this table because it has a BLOB column, which is always good to test)

SQL> connect streams_source/streams_source;
SQL> create table source_tab as select * from sys.aw$awreport where dbms_lob.getlength(awlob) > 0;
SQL> alter table source_tab add primary key(ps#,gen#);
SQL> alter table source_tab add supplemental log data (primary key,unique) columns;

SQL> connect strmadmin/strmadmin;
Note that the name of the db link must match the tns alias you added earlier. You MUST change the names below to match your database!
SQL> create database link target connect to strmadmin identified by strmadmin using 'target';
SQL> create directory dp_streams as '/where/you/want/datapump/files';
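
Before moving on, it is worth confirming that the link actually resolves. With global_names=true, a link whose name does not match the remote database's global name fails here rather than later:

SQL> select * from global_name@target;

(Run the equivalent check on the target after creating its link back to the source below.)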

Target:

Add a tnsnames.ora entry for your source database:

SOURCE =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = ussy-sourcehost)(PORT = 1524))
)
(CONNECT_DATA =
(SID = source)
(SERVER = DEDICATED)
)
)

$ tnsping source

$ sqlplus "/ as sysdba"
SQL> create user strmadmin identified by strmadmin;
SQL> grant connect, resource, dba to strmadmin;
SQL> grant select_catalog_role, select any dictionary to strmadmin;

SQL> begin dbms_streams_auth.grant_admin_privilege
(grantee => 'strmadmin',
grant_privileges => true);
end;
/

SQL> connect strmadmin/strmadmin
Note that the name of the db link must match the tns alias you added earlier. You MUST change the names below to match your database!
SQL> create database link source connect to strmadmin identified by strmadmin using 'source';
SQL> create directory dp_streams as '/where/you/want/datapump/files';


Source:

$ sqlplus strmadmin/strmadmin


The following command will set up the capture and apply queues, etc. This will copy over a schema to the same schema name in another database. This procedure may take a long time to run, as it uses dbms_datapump to move the data between the two databases. See the dbms_streams_adm reference for the full list of parameters. There are a *lot* of them!
Note that the names of the source database and target database must match the tns aliases you added earlier PLUS the domain name; fully qualified names must be specified below. The script will fail if you run it as-is: you MUST change the names below to match your databases!

SQL> begin
dbms_streams_adm.maintain_schemas(
schema_names => 'streams_source',
source_directory_object => 'dp_streams',
destination_directory_object => 'dp_streams',
source_database => 'source.yourdomain.com',
destination_database => 'target.yourdomain.com',
include_ddl => true);
end;
/
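
While maintain_schemas runs, you can watch its progress from another session. A quick sketch against the same DBA_RECOVERABLE_SCRIPT view used in the Cleanup section (column list per 10.2):

SQL> select script_id, status, done_block_num, total_blocks from dba_recoverable_script;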


If you receive an error like this…

ERROR at line 1:
ORA-23616: Failure in executing block 6 for script
718A198AB6AF5314E040007F01003E43
ORA-06512: at "SYS.DBMS_RECOVERABLE_SCRIPT", line 560
ORA-06512: at "SYS.DBMS_RECOVERABLE_SCRIPT", line 583
ORA-06512: at "SYS.DBMS_STREAMS_MT", line 7673
ORA-06512: at "SYS.DBMS_STREAMS_ADM", line 2606
ORA-06512: at "SYS.DBMS_STREAMS_ADM", line 2667
ORA-06512: at line 2

…then review the following view:

SQL> SELECT * FROM DBA_RECOVERABLE_SCRIPT_ERRORS;


Source and Target:

Check the row counts to get a baseline on each system:

$ sqlplus streams_source/streams_source
SQL> select count(*) from source_tab;


Source:

$ sqlplus streams_source/streams_source
SQL> select max(ps#), max(gen#) from source_tab;
SQL> create sequence ps start with &max_ps_plus_one;
SQL> create sequence gen start with &max_gen_plus_one;
(substitute each START WITH value with the corresponding max from the query above, plus one, so the newly generated keys do not collide with existing rows)
SQL> insert into source_tab select ps.nextval, gen.nextval, extnum, awlob, objname, partname from source_tab;
SQL> commit;
SQL> select count(*) from source_tab;


Target:

Check the row count in the target table. It could take several minutes for the data to propagate, depending on how much data has been inserted.

$ sqlplus streams_source/streams_source
SQL> select count(*) from source_tab;


Monitoring


From the OEM home page, go to:
• Targets Tab
• Databases tab in the upper left tab list
• Choose the source database
• Go to the Maintenance tab
• Choose “Management” under the Streams section (log in to the database as SYS if you are prompted)

You should now see tabs allowing you to drill into the capture, propagation and apply queues. Remember that the Apply queue will be on the target database in the setup we created above.

Monitoring Using strmmon:

Open two screens with strmmon, one connected to the source, and one to the target database.

$ strmmon -interval 1 -count 1000 -user sys -passw xxxxxxxx -dbname source -sysdba
(In another Shell…)
$ strmmon -interval 1 -count 1000 -user sys -passw xxxxxxxx -dbname target -sysdba


Additional Notes:


• Here is what you need to do to set up a custom transformation rule:

Source:

SQL> connect streams_source/streams_source
SQL> alter table source_tab add version varchar2(100);
SQL> connect strmadmin/strmadmin

-- This function assumes that you added a VERSION column to source_tab. You can modify as you wish - it's very simple, and a practically useless example - but you can see how data is transformed.
SQL> CREATE OR REPLACE FUNCTION strmadmin.transform_capture (in_any IN SYS.AnyData)
  RETURN SYS.AnyData
IS
  cmd              VARCHAR2(10);
  convert_value    SYS.AnyData;
  lcr              SYS.LCR$_ROW_RECORD;
  newvalue_anydata SYS.AnyData;
  ob_name          VARCHAR2(30);
  ob_owner         VARCHAR2(30);
  oldvalue_anydata SYS.AnyData;
  rc               NUMBER;
BEGIN
  IF in_any.GETTYPENAME = 'SYS.LCR$_ROW_RECORD' THEN
    rc := in_any.GETOBJECT(lcr);
    ob_owner := lcr.GET_OBJECT_OWNER();
    ob_name  := lcr.GET_OBJECT_NAME();
    cmd      := lcr.GET_COMMAND_TYPE();
    convert_value := SYS.ANYDATA.ConvertVarchar2('converted!');
    IF ob_owner = 'STREAMS_SOURCE' AND cmd = 'UPDATE' THEN
      IF ob_name = 'SOURCE_TAB' THEN
        newvalue_anydata := lcr.GET_VALUE('new', 'VERSION');
        oldvalue_anydata := lcr.GET_VALUE('old', 'VERSION');
        -- Only rewrite VERSION when the update actually touched it.
        IF (newvalue_anydata IS NOT NULL) OR (oldvalue_anydata IS NOT NULL) THEN
          lcr.SET_VALUE( value_type   => 'new',
                         column_name  => 'VERSION',
                         column_value => convert_value);
        END IF;
      END IF;
    END IF;
    RETURN SYS.ANYDATA.CONVERTOBJECT(lcr);
  END IF;
  RETURN in_any;
END transform_capture;
/

SQL> set linesize 150
SQL> select rule_owner,rule_name, rule_type, streams_type, streams_name from dba_streams_schema_rules;

SQL> begin
dbms_streams_adm.set_rule_transform_function (
rule_name => 'STRMADMIN.', -- append the DML capture rule name returned by the query above
transform_function => 'STRMADMIN.TRANSFORM_CAPTURE');
end;
/


In my case, I specified the DML capture rule, but you may put the transformation on other rules if you wish.
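
To see the transformation fire, update a row on the source and then check it on the target. A simple smoke test along these lines (the literal value is arbitrary):

Source:

SQL> connect streams_source/streams_source
SQL> update source_tab set version = 'not yet converted' where rownum = 1;
SQL> commit;

Target (give the change a moment to propagate):

SQL> connect streams_source/streams_source
SQL> select version from source_tab where version is not null;

The row should show 'converted!' on the target, because the function overwrites the new value of VERSION whenever an update touches that column.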

• If you use dbms_streams_adm.maintain_schemas and the data is not initially propagated, the most common reason is that the schema or table you are trying to propagate already exists in the target. If that is the case, you will need to use another method if you want to start streaming to that existing schema or table; you can’t use dbms_streams_adm.maintain_schemas. Look in the *.dlg log file in the source and target database server directory specified by dp_streams, and check the apply error queue on the target, as sketched below.
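
On the target, rows in the apply error queue mean changes were dequeued but could not be applied:

SQL> select apply_name, local_transaction_id, error_message from dba_apply_error;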


• If you want to know about the various methods that can be called on an lcr, see this: http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/t_lcr.htm

• If you ever get into a state where the capture queue is sitting in “PAUSED FOR FLOW CONTROL” while the apply queue is up and running in “DEQUEUE MESSAGES”, first check the alert logs for both databases. I saw this situation in one case, and in the alert log for the source database, I saw “Propagation Schedule for (SYS.DB1$CAPQ, "SYS"."DB1$APPQ"@TARGET.YOURDOMAIN.COM) encountered following error: ORA-01031: insufficient privileges”
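
The current states of the capture and apply processes can also be read straight from the dynamic performance views:

(on the source)
SQL> select capture_name, state from v$streams_capture;

(on the target)
SQL> select apply_name, state from v$streams_apply_reader;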

• Metalink Doc 418985.1 explains the following error, which I saw in the alert log immediately after dbms_streams_adm.maintain_schemas completed successfully, right before my instance died!

LOGMINER: Begin mining logfile for session 74 thread 1 sequence 2152, /u16/db01/redo03.log
Thu Aug 20 18:40:22 2009
Errors in file /u16/app/oracle/admin/db1/bdump/db1_pmon_17365.trc:
ORA-00600: internal error code, arguments: [kotgtoid-x-1], [0], [], [], [], [], [], []
Thu Aug 20 18:40:23 2009
PMON: terminating instance due to error 472
Instance terminated by PMON, pid = 17365
