Tips and wisdom from 15 years of DBA experience

Tuesday, September 1, 2009

How To Set Up Oracle Streams

The purpose of this document is to describe the procedures for setting up a streams environment, starting the various streams capture and apply processes, and monitoring the streams functionality through OEM. The document is broken into the following sections, which should be worked through in sequence:
Cleanup: Use this section if you want to start from a clean system after a failed or previously installed setup.
Setup: This contains the information to prepare the streams environment to replicate from one schema to a different schema in a different database.
Testing: Add some testing data and verify that the data made it through.
Monitoring: How to watch the Stream in action through OEM.
Additional Notes: Tips for changing the schema names and making other adjustments.
Unknowns and things to learn: Open questions about streams that still need additional research.

Usage Notes: Anything that needs to be run on the source system will have a Source heading and anything that needs to be run on the target system will have a Target heading. This has been tested on Oracle 10.2.0.4 only.


Cleanup

Source:

$ sqlplus "/ as sysdba"
SQL> exec dbms_streams_adm.remove_streams_configuration;
SQL> drop user strmadmin cascade;
SQL> drop user streams_source cascade;


Target:

$ sqlplus "/ as sysdba"
SQL> exec dbms_streams_adm.remove_streams_configuration;
SQL> drop user strmadmin cascade;
SQL> drop user streams_source cascade;


If you hit any errors while trying to execute the above commands, issue the following, then try again:

$ sqlplus "/ as sysdba"
SQL> purge dba_recyclebin;


Sometimes dbms_streams_adm.maintain_schemas fails partway through, and you cannot simply re-try it because the failed run is left behind as a recoverable operation. You first have to purge that operation:

Source:

$ sqlplus "/ as sysdba"
SQL> alter system set global_names=false;
SQL> select script_id from DBA_RECOVERABLE_SCRIPT;
SQL> exec DBMS_STREAMS_ADM.RECOVER_OPERATION('&script_id','PURGE');


If you receive an error like this when executing the above command…

ERROR at line 1:
ORA-06550: line 1, column 8:
PLS-00352: Unable to access another database 'TARGET'
ORA-06550: line 1, column 8:
PLS-00201: identifier 'DBMS_STREAMS_RPC@TARGET' must be declared
ORA-06550: line 1, column 8:
PL/SQL: Statement ignored
ORA-06512: at "SYS.DBMS_RECOVERABLE_SCRIPT", line 71
ORA-06512: at "SYS.DBMS_STREAMS_MT", line 2255
ORA-06512: at "SYS.DBMS_STREAMS_ADM", line 2907
ORA-06512: at line 1

Then do this:

$ sqlplus "/ as sysdba"
SQL> create database link TARGET connect to system identified by xxxxxxx using 'target';


Then try the DBMS_STREAMS_ADM command again.
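
If RECOVER_OPERATION still refuses to run, it can help to look at what the failed script actually recorded before you purge it. This is only a sketch; the column list reflects what these views expose on 10.2, so describe the views first if in doubt:

$ sqlplus "/ as sysdba"
SQL> select script_id, status, creation_time from dba_recoverable_script;
SQL> select * from dba_recoverable_script_errors;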

Setup

Source and Target:
These commands must first be run on both systems. Note that the Source and Target databases can be on the same host, but this document assumes that you are using different databases. That is a much more robust proof of concept than replicating data within a single database.

$ sqlplus "/ as sysdba"
SQL> alter system set global_names=true;
SQL> alter system set streams_pool_size = 200m;
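
To confirm that both settings took effect, a quick check from the same SQL*Plus session (nothing streams-specific here, just standard SQL*Plus):

SQL> show parameter global_names
SQL> show parameter streams_pool_size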


Source:

Add a tnsnames.ora entry for your target database. Note that because streams requires global_names, the alias name must match the target SID. Do NOT blindly copy "TARGET" as the alias: unless your target database is actually named target it will not work, so substitute your own names in the entry below.

TARGET =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = ussy-targethost)(PORT = 1524))
)
(CONNECT_DATA =
(SID = target)
(SERVER = DEDICATED)
)
)

$ tnsping target
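
Because global_names=true means the database link you create later has to match the target database's global name, it is worth confirming that name before going any further. A minimal check, run on the target:

$ sqlplus "/ as sysdba"
SQL> select * from global_name;

The value returned (for example TARGET.YOURDOMAIN.COM) is what the db link name and, later, the destination_database parameter of maintain_schemas must match.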


Create a streams administrator user. This is where the various streams tables and objects will reside. Choose a different tablespace if users is not where you want that data stored.

$ sqlplus "/ as sysdba"
SQL> create user strmadmin identified by strmadmin default tablespace users;
SQL> grant connect, resource, dba to strmadmin;
SQL> grant select_catalog_role, select any dictionary to strmadmin;

SQL> begin
dbms_streams_auth.grant_admin_privilege
(
grantee => 'strmadmin',
grant_privileges => true
);
end;
/


Create a user that will own all of the testing source tables. We will populate a table now so that a later step can show it being transferred by dbms_datapump, and then add more rows afterwards to test live replication.

SQL> create user streams_source identified by streams_source;
SQL> grant connect,resource to streams_source;
SQL> grant select on sys.aw$awreport to streams_source;
(I chose this table because it has a BLOB column, which is always good to test)

SQL> connect streams_source/streams_source;
SQL> create table source_tab as select * from sys.aw$awreport where dbms_lob.getlength(awlob) > 0;
SQL> alter table source_tab add primary key(ps#,gen#);
SQL> alter table source_tab add supplemental log data (primary key,unique) columns;
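
To double-check that the supplemental log group was actually created, query the dictionary as sysdba (or as strmadmin, which has select any dictionary). A quick sanity check using the standard DBA_LOG_GROUPS view:

SQL> select log_group_name, log_group_type, always
     from dba_log_groups
     where owner = 'STREAMS_SOURCE' and table_name = 'SOURCE_TAB';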

SQL> connect strmadmin/strmadmin;
Note that the name of the db link must match the tns alias you added earlier. You MUST change the names below to match your database!
SQL> create database link target connect to strmadmin identified by strmadmin using 'target';
SQL> create directory dp_streams as '/where/you/want/datapump/files';

Target:

Add a tnsnames.ora entry for your source database:

SOURCE =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = ussy-sourcehost)(PORT = 1524))
)
(CONNECT_DATA =
(SID = source)
(SERVER = DEDICATED)
)
)

$ tnsping source

$ sqlplus "/ as sysdba"
SQL> create user strmadmin identified by strmadmin;
SQL> grant connect, resource, dba to strmadmin;
SQL> grant select_catalog_role, select any dictionary to strmadmin;

SQL> begin dbms_streams_auth.grant_admin_privilege
(grantee => 'strmadmin',
grant_privileges => true);
end;
/

SQL> connect strmadmin/strmadmin
Note that the name of the db link must match the tns alias you added earlier. You MUST change the names below to match your database!
SQL> create database link source connect to strmadmin identified by strmadmin using 'source';
SQL> create directory dp_streams as '/where/you/want/datapump/files';


Source:

$ sqlplus strmadmin/strmadmin


The following command will set up the capture and apply queues, etc. This will copy over a schema to the same schema name in another database. This procedure may take a long time to run, as it uses dbms_datapump to move the data between the two databases. See the dbms_streams_adm reference for the full list of parameters. There are a *lot* of them!
Note that the names of the source database and target database must match the tns aliases you added earlier PLUS the domain name; they must be specified below as fully qualified names. The script will fail if you run it as-is, so you MUST change the names below to match your databases!

SQL> begin
dbms_streams_adm.maintain_schemas(
schema_names => 'streams_source',
source_directory_object => 'dp_streams',
destination_directory_object => 'dp_streams',
source_database => 'source.yourdomain.com',
destination_database => 'target.yourdomain.com',
include_ddl => true);
end;
/


If you receive an error like this…

ERROR at line 1:
ORA-23616: Failure in executing block 6 for script
718A198AB6AF5314E040007F01003E43
ORA-06512: at "SYS.DBMS_RECOVERABLE_SCRIPT", line 560
ORA-06512: at "SYS.DBMS_RECOVERABLE_SCRIPT", line 583
ORA-06512: at "SYS.DBMS_STREAMS_MT", line 7673
ORA-06512: at "SYS.DBMS_STREAMS_ADM", line 2606
ORA-06512: at "SYS.DBMS_STREAMS_ADM", line 2667
ORA-06512: at line 2

…then review the following view:

SQL> SELECT * FROM DBA_RECOVERABLE_SCRIPT_ERRORS;
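
Once maintain_schemas completes cleanly, you can confirm the components it created without going into OEM. This is just a sketch against the standard streams dictionary views; the capture, propagation and apply names are generated by maintain_schemas, so select them rather than guessing.

On the source, as strmadmin:

SQL> select capture_name, status from dba_capture;
SQL> select propagation_name, destination_dblink from dba_propagation;

On the target, as strmadmin:

SQL> select apply_name, status from dba_apply;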


Testing

Source and Target:

Check the row counts to get a baseline on each system:

$ sqlplus streams_source/streams_source
SQL> select count(*) from source_tab;


Source:

$ sqlplus streams_source/streams_source
SQL> select max(ps#), max(gen#) from source_tab;
SQL> create sequence ps start with <max(ps#) + 1>;
SQL> create sequence gen start with <max(gen#) + 1>;
(Use values one higher than the maximums returned by the query above so the inserted rows get new primary key values.)
SQL> insert into source_tab select ps.nextval, gen.nextval,extnum,awlob,objname,partname from source_tab;
SQL> commit;
SQL> select count(*) from source_tab;


Target:

Check the row count in the target table. It could take several minutes for the data to propagate, depending on how much data has been inserted.

$ sqlplus streams_source/streams_source
SQL> select count(*) from source_tab;
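
If the counts have not converged yet, you can check whether the apply process is still working through the queue or has hit an error. A sketch, run on the target as strmadmin or sysdba:

SQL> select apply_name, state, total_messages_dequeued from v$streams_apply_reader;
SQL> select apply_name, error_number, error_message from dba_apply_error;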


Monitoring


From the OEM home page, go to:
• Targets Tab
• Databases tab in the upper left tab list
• Choose the source database
• Go to the Maintenance tab
• Choose “Management” under the Streams section (log in to the database as SYS if you are asked to log in)

You should now see tabs allowing you to drill into the capture, propagation and apply queues. Remember that the Apply queue will be on the target database in the setup we created above.
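
If you would rather not click through OEM, the same state information is available from SQL. A minimal sketch, run on the source as strmadmin or sysdba (this is also where you would spot states like “PAUSED FOR FLOW CONTROL” mentioned further down):

SQL> select capture_name, state, total_messages_captured, total_messages_enqueued from v$streams_capture;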

Monitoring Using strmmon:

Open two screens with strmmon, one connected to the source, and one to the target database.

$ strmmon -interval 1 -count 1000 -user sys -passw xxxxxxxx -dbname source -sysdba
(In another Shell…)
$ strmmon -interval 1 -count 1000 -user sys -passw xxxxxxxx -dbname target -sysdba


Additional Notes:


• Here is what you need to do to set up a custom transformation rule:

Source:

SQL> connect streams_source/streams_source
SQL> alter table source_tab add version varchar2(100);
SQL> connect strmadmin/strmadmin

-- This function assumes that you added a VERSION column to source_tab. You can modify it as you wish; it is a very simple, practically useless example, but it shows how data gets transformed.
SQL> CREATE OR REPLACE FUNCTION strmadmin.transform_capture (in_any IN SYS.AnyData)
  RETURN SYS.AnyData
IS
  cmd              VARCHAR2(10);
  convert_value    SYS.AnyData;
  lcr              SYS.LCR$_ROW_RECORD;
  newvalue_anydata SYS.AnyData;
  ob_name          VARCHAR2(30);
  ob_owner         VARCHAR2(30);
  oldvalue_anydata SYS.AnyData;
  rc               NUMBER;
BEGIN
  IF in_any.GETTYPENAME = 'SYS.LCR$_ROW_RECORD' THEN
    rc       := in_any.GETOBJECT(lcr);
    ob_owner := lcr.GET_OBJECT_OWNER();
    ob_name  := lcr.GET_OBJECT_NAME();
    cmd      := lcr.GET_COMMAND_TYPE();
    convert_value := SYS.ANYDATA.ConvertVarchar2('converted!');
    IF ob_owner = 'STREAMS_SOURCE' AND cmd = 'UPDATE' THEN
      IF ob_name = 'SOURCE_TAB' THEN
        newvalue_anydata := lcr.GET_VALUE('new', 'VERSION');
        oldvalue_anydata := lcr.GET_VALUE('old', 'VERSION');
        IF (newvalue_anydata IS NOT NULL) OR (oldvalue_anydata IS NOT NULL) THEN
          lcr.SET_VALUE(value_type   => 'new',
                        column_name  => 'VERSION',
                        column_value => convert_value);
        END IF;
      END IF;
    END IF;
    RETURN SYS.ANYDATA.CONVERTOBJECT(lcr);
  END IF;
  RETURN in_any;
END transform_capture;
/

SQL> set linesize 150
SQL> select rule_owner,rule_name, rule_type, streams_type, streams_name from dba_streams_schema_rules;

SQL> begin
dbms_streams_adm.set_rule_transform_function (
rule_name => 'STRMADMIN.',
transform_function => 'STRMADMIN.TRANSFORM_CAPTURE');
end;
/


In my case, I put the transformation on the DML capture rule (substitute the rule name returned by the query above for the 'STRMADMIN.' placeholder), but you may put the transformation on other rules if you wish.
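
To watch the transformation fire, update the VERSION column on the source and then look at what arrives on the target. This is a minimal sketch and assumes the VERSION column DDL from the alter table above has already replicated to the target (include_ddl was set to true):

Source:

SQL> connect streams_source/streams_source
SQL> update source_tab set version = 'typed on source' where rownum = 1;
SQL> commit;

Target (give the change a moment to propagate):

SQL> connect streams_source/streams_source
SQL> select version from source_tab where version is not null;

If the transformation is working, the target shows 'converted!' rather than the value you typed on the source.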

• If you use dbms_streams_adm.maintain_schemas and the data is not initially propagated, the most common reason is that the schema or table you are trying to propagate already exists in the target. If that is the case, you will need another method to start streaming to that existing schema or table; you can’t use dbms_streams_adm.maintain_schemas. Look in the *.dlg log file in the source and target database server directory specified by dp_streams.


• If you want to know about the various methods that can be called on an lcr, see this: http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/t_lcr.htm

• If you ever get into a state where the capture process is sitting in “PAUSED FOR FLOW CONTROL” while the apply process is up and running in “DEQUEUE MESSAGES”, first check the alert logs for both databases (a propagation status query is also sketched below). I saw this situation in one case, and in the alert log for the source database, I saw “Propagation Schedule for (SYS.DB1$CAPQ, "SYS"."DB1$APPQ"@TARGET.YOURDOMAIN.COM) encountered following error: ORA-01031: insufficient privileges”
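
A quicker way to surface that kind of propagation error than digging through the alert log is to ask the propagation schedule itself. A sketch, run on the source as sysdba; DBA_QUEUE_SCHEDULES is the standard AQ view that carries the failure count and the last error text:

SQL> select qname, destination, failures, last_error_msg from dba_queue_schedules;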

• Metalink Doc 418985.1 explains the following error, which I saw in the alert log immediately after dbms_streams_adm.maintain_schemas completed successfully, after which my instance died!

LOGMINER: Begin mining logfile for session 74 thread 1 sequence 2152, /u16/db01/redo03.log
Thu Aug 20 18:40:22 2009
Errors in file /u16/app/oracle/admin/db1/bdump/db1_pmon_17365.trc:
ORA-00600: internal error code, arguments: [kotgtoid-x-1], [0], [], [], [], [], [], []
Thu Aug 20 18:40:23 2009
PMON: terminating instance due to error 472
Instance terminated by PMON, pid = 17365
