Oracle8i Standby Database Concepts and Administration Release 2 (8.1.6) Part Number A76995-01
This chapter describes the following standby database scenarios:
This scenario describes the creation of a standby database STANDBY1 on the same host as the primary database PROD1. The host is a UNIX machine with three file systems, each mounted on a separate disk configuration on a different controller. By placing the standby database on a different file system from the primary database, you protect the primary database from a hard disk failure. By running the same-host standby database in managed recovery mode, you can keep it continuously up-to-date.
After you set up the standby database on the local host, you plan to create a standby database on a remote host for total disaster protection. In this way, even if all disks of the primary database crash or are destroyed in a disaster, you can fail over to the remote standby database and keep the database open.
Because the host uses three file systems, each on its own set of disks with its own controller, you decide to maintain the primary database files on the first file system, the standby database files on the second file system, and the ORACLE_HOME binaries on the third file system. If the primary database disks fail, you can switch to the standby database; if the ORACLE_HOME disks fail, you can switch to the remote standby database.
To host the standby database on the same machine as the primary database, you must set the following parameters in the standby database initialization parameter file:
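The parameter list itself depends on your layout; based on the standby database parameter file used later in this scenario, the settings that distinguish a same-host standby from its primary look like the following (a sketch, with the values used in this scenario):

```
control_files         = (/fs2/dbs/cf1.f)          # the standby's own control file
lock_name_space       = standby1                  # distinct lock name space, since both
                                                  # instances share one host
db_file_name_convert  = ('/fs1/dbs','/fs2/dbs')   # map primary datafile names
log_file_name_convert = ('/fs1/dbs','/fs2/dbs')   # map online redo log names
standby_archive_dest  = /fs2/stdby/               # where received archived logs are placed
```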
Fortunately, most (but not all) of the primary database datafiles and redo log files are in the same directory and are named consistently. You will have to rename some of the files manually using ALTER DATABASE statements.
Because the primary database is shut down every Sunday for an hour for maintenance, you decide to use that time to make a cold, consistent backup. You can then restart the database while you make the necessary configurations for the standby database.
The next step in the procedure is to create the backup that will form the basis for the standby database. You know that you can use either an inconsistent or consistent backup, but because the database must go down every Sunday for maintenance, you decide to make a consistent backup then and use it for the standby database.
On Sunday, before shutting down the primary database, you query the database to determine which datafiles it contains:
SQL> SELECT name FROM v$datafile;

NAME
--------------------------------------------------------------------------------
/fs1/dbs/tbs_01.f
/fs1/dbs/tbs_02.f
/fs1/dbs/tbs_11.f
/fs1/dbs/tbs_12.f
/fs1/dbs/tbs_21.f
/fs1/dbs/tbs_22.f
/fs1/dbs/tbs_13.f
/fs1/dbs/tbs_23.f
/fs1/dbs/tbs_24.f
/fs1/dbs/tbs_31.f
/fs1/dbs/tbs_32.f
/fs1/dbs/tbs_41.f
/fs1/dbs2/tbs_42.f
/fs1/dbs2/tbs_51.f
/fs1/dbs2/tbs_52.f
/fs1/dbs2/tbs_03.f
/fs1/dbs3/tbs_14.f
/fs1/dbs3/tbs_25.f
/fs1/dbs3/tbs_33.f
/fs1/dbs3/tbs_43.f
/fs1/dbs3/tbs_53.f

21 rows selected.
After determining which datafiles are in the database, you shut down the database with the IMMEDIATE option:
SQL> SHUTDOWN IMMEDIATE;
At this point, you decide to back up all of the primary datafiles to a temporary directory as follows:
% cp /fs1/dbs/* /fs1/temp
% cp /fs1/dbs2/* /fs1/temp
% cp /fs1/dbs3/* /fs1/temp
You perform some other routine maintenance operations and then restart the database as follows:
SQL> STARTUP PFILE=initPROD1.ora;
After a few minutes, you create the standby database control file in the same directory in which you stored the consistent backup:
SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/fs1/temp/stbycf.f';
After you have successfully created the standby database control file, you can copy the datafiles and the standby database control file from the primary file system to the standby file system.
Because the transferring of datafiles can take a long time, you first copy the control file, begin copying the datafiles, and then proceed to other tasks (such as network configuration). For example, enter the following at the UNIX command shell:
% cp /fs1/temp/stbycf.f /fs2/dbs/cf1.f
% cp /fs1/temp/tbs* /fs2/dbs
In order to run a standby database in a managed standby environment, you must configure a Net8 connection between the primary and standby databases so that you can archive the redo logs to the standby service.
You use the IPC protocol to connect the primary database to the standby database because both databases are on the same host. Because you do not maintain an Oracle Names server, you must create both a tnsnames.ora entry for the primary database and a listener.ora entry for the standby database.
Edit the tnsnames.ora file.
Your next step is to open the tnsnames.ora file in a text editor:
% vi /fs3/oracle/network/admin/tnsnames.ora
Currently, only one service name entry exists in the file, a TCP/IP connection to the PROD1 database:
prod1 =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=tcp)(PORT=1512)(HOST=dlsun183))
    (CONNECT_DATA=(SID=prod1))
  )
To define an IPC connection between the primary and the standby database, you add an entry with the following format:
standby_service_name =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=ipc)(KEY=keyhandle))
    (CONNECT_DATA=(SID=standby_sid))
  )
Substitute appropriate values for standby_service_name, keyhandle, and standby_sid, as the following example shows:
standby1 =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=ipc)(KEY=kstdby1))
    (CONNECT_DATA=(SID=stdby1))
  )
Edit the listener.ora file.
Your next step is to open the listener.ora file, which is located on file system /fs3:
% vi /fs3/oracle/network/admin/listener.ora
You discover the following list of addresses (where on the host the listener is listening) and SIDs (which connections the listener is listening for):
LISTENER =
  (ADDRESS_LIST=
    (ADDRESS=(PROTOCOL=tcp)(PORT=1512)(HOST=dlsun183))
  )
SID_LIST_LISTENER =
  (SID_LIST=
    (SID_DESC=(SID_NAME=PROD1)(ORACLE_HOME=/fs3/oracle))
  )
Currently, the listener is listening on port 1512 of the host dlsun183 for database PROD1.
You need to edit the listener.ora file and add two entries with the following format:
STANDBY_LISTENER =
  (ADDRESS_LIST=
    (ADDRESS=(PROTOCOL=ipc)(KEY=keyhandle))
  )
SID_LIST_STANDBY_LISTENER =
  (SID_LIST=
    (SID_DESC=(SID_NAME=standby_sid)(ORACLE_HOME=/oracle_home))
  )
The listener.ora file is typically located in the $ORACLE_HOME/network/admin directory on the standby site. Substitute appropriate values for keyhandle, standby_sid, and oracle_home as the following example shows:
STBY1_LISTENER =
  (ADDRESS_LIST=
    (ADDRESS=(PROTOCOL=ipc)    # same node as primary
             (KEY=kstdby1)))   # ORACLE_SID the standby instance is started with
SID_LIST_STBY1_LISTENER =
  (SID_LIST=
    (SID_DESC=(SID_NAME=stdby1)(ORACLE_HOME=/vobs/oracle)))
Now that you have edited the listener.ora file, you must restart the listener so that it recognizes the changes in the file:
% lsnrctl

LSNRCTL for Solaris: Version 8.1.5.0.0 - Production on 05-APR-99 11:39:41

(c) Copyright 1998 Oracle Corporation. All rights reserved.

Welcome to LSNRCTL, type "help" for information.

LSNRCTL> start stby1_listener
As an alternative to the steps outlined in this section, you can use the Net8 Assistant graphical user interface to configure the network files. For additional information, see the Net8 Administrator's Guide.
Now that you have configured the network files, you can edit the primary database initialization parameter file. The primary database is up and running, so these changes take effect only when you restart the instance or issue ALTER SESSION or ALTER SYSTEM statements.
The only changes you need to make to the file involve archiving to the standby service. Currently, the primary database parameter file looks as follows:
db_name=prod1
control_files=(/fs1/dbs/cf1.f,/fs1/dbs/cf2.f)
compatible=8.1.6
log_archive_start = TRUE
log_archive_dest_1 = 'LOCATION=/fs1/arc_dest/ MANDATORY REOPEN=60'
log_archive_dest_state_1 = ENABLE
log_archive_format = log_%t_%s.arc
audit_trail=FALSE
o7_dictionary_accessibility=FALSE
global_names=FALSE
db_domain=regress.rdbms.dev.us.oracle.com
remote_login_passwordfile = exclusive

# default parameters for instance 1
processes=30
sessions=30
transactions=21
transactions_per_rollback_segment=21
distributed_transactions=10
db_block_buffers=1000
db_files=200
shared_pool_size=10000000
Currently, you archive to only one location: a local directory. Because you want to maintain the local standby database in managed recovery mode, you must specify a new archiving location using a service name.
Open the primary database initialization parameter file with a text editor and examine the current archiving location and format:
log_archive_dest_1 = 'LOCATION=/fs1/arc_dest/ MANDATORY REOPEN=60'
log_archive_dest_state_1 = ENABLE
log_archive_format = log_%t_%s.arc
Then add a second destination that archives to the standby service name:

log_archive_dest_2 = 'SERVICE=standby1 OPTIONAL REOPEN=180'
log_archive_dest_state_2 = ENABLE
After editing the primary database initialization parameter file, create a copy for use by the standby database:
% cp /fs1/temp/initPROD1.ora /fs3/oracle/dbs/initSTANDBY1.ora
If the primary database initialization parameter file contains the IFILE parameter, you also need to copy the file referred to by the IFILE parameter to the standby site and, if necessary, make appropriate changes to it.
You know that the initialization parameters shown in Table 5-1 play a key role in the standby database recovery process, and decide to edit them.
Edit the standby database parameter file as follows:
db_name = prod1                  #The same as the primary database
control_files = (/fs2/dbs/cf1.f)
compatible = 8.1.6
log_archive_start = TRUE
log_archive_dest_1='LOCATION=/fs2/arc_dest/'
log_archive_dest_state_1 = ENABLE
log_archive_format = log_%t_%s.arc
standby_archive_dest = /fs2/stdby/
db_file_name_convert = ('/fs1/dbs','/fs2/dbs')
log_file_name_convert = ('/fs1/dbs','/fs2/dbs')
lock_name_space = standby1
audit_trail=FALSE
o7_dictionary_accessibility=FALSE
global_names=FALSE
db_domain=regress.rdbms.dev.us.oracle.com
remote_login_passwordfile = exclusive

# default parameters for instance 1
processes=30
sessions=30
transactions=21
transactions_per_rollback_segment=21
distributed_transactions=10
db_block_buffers=1000
db_files=200
shared_pool_size=10000000
Now that you have configured all network and parameter files, you can enable archiving to the standby database.
Set the ORACLE_SID environment variable to the same SID value used in the tnsnames.ora file on the primary site and the listener.ora file on the standby site as follows:
% setenv ORACLE_SID stdby1
First, you start the standby database instance without mounting the standby database control file, as the following example shows:
SQL> CONNECT sys/change_on_install@standby1
SQL> STARTUP NOMOUNT PFILE=/fs3/oracle/dbs/initSTANDBY1.ora;
SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
Next, write a SQL script to rename the datafiles not captured by the DB_FILE_NAME_CONVERT parameter:
ALTER DATABASE RENAME FILE
  '/fs1/dbs2/tbs_42.f',
  '/fs1/dbs2/tbs_51.f',
  '/fs1/dbs2/tbs_52.f',
  '/fs1/dbs2/tbs_03.f',
  '/fs1/dbs3/tbs_14.f',
  '/fs1/dbs3/tbs_25.f',
  '/fs1/dbs3/tbs_33.f',
  '/fs1/dbs3/tbs_43.f',
  '/fs1/dbs3/tbs_53.f'
TO
  '/fs2/dbs/tbs_42.f',
  '/fs2/dbs/tbs_51.f',
  '/fs2/dbs/tbs_52.f',
  '/fs2/dbs/tbs_03.f',
  '/fs2/dbs/tbs_14.f',
  '/fs2/dbs/tbs_25.f',
  '/fs2/dbs/tbs_33.f',
  '/fs2/dbs/tbs_43.f',
  '/fs2/dbs/tbs_53.f';
Finally, you enable the changes you made to the primary database parameter file so that the standby database can begin receiving archived redo logs:
SQL> CONNECT sys/change_on_install@prod1 as sysdba
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1 OPTIONAL REOPEN=180';
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE;
Because you have enabled the changes to the primary database parameter file, the primary database is now able to archive to the standby service name. Before you can perform managed recovery, however, you must synchronize the standby database by applying those logs containing changes made after the primary database backup, but before the first log received by the standby database.
Write the following SQL script and run it on the standby database:
SELECT high.thread#, "LowGap#", "HighGap#"
FROM
  (
    SELECT thread#, MIN(sequence#)-1 "HighGap#"
    FROM
      (
        SELECT a.thread#, a.sequence#
        FROM
          (SELECT * FROM v$archived_log) a,
          (SELECT thread#, MAX(sequence#) gap1
           FROM v$log_history
           GROUP BY thread#) b
        WHERE a.thread# = b.thread#
          AND a.sequence# > gap1
      )
    GROUP BY thread#
  ) high,
  (
    SELECT thread#, MIN(sequence#) "LowGap#"
    FROM
      (
        SELECT thread#, sequence#
        FROM v$log_history, v$datafile
        WHERE checkpoint_change# <= next_change#
          AND checkpoint_change# >= first_change#
      )
    GROUP BY thread#
  ) low
WHERE low.thread# = high.thread#;
The output of the query is as follows:
SQL> @gap

   THREAD#    LowGap#   HighGap#
---------- ---------- ----------
         1        250        252
Hence, you must apply log sequences 250, 251, and 252 to synchronize the standby database before initiating managed recovery.
The filenames of the archived logs identified by the gap sequence query are constructed from the LOG_ARCHIVE_DEST_1 and LOG_ARCHIVE_FORMAT parameters in the initialization parameter file. Before transmitting the logs from the primary site to the standby site, you must determine the correct filenames.
First, you determine the filenames of the logs in the gap that were archived by the primary database. After connecting to the primary database using SQL*Plus, issue the following SQL query to obtain the names:
SQL> CONNECT sys/change_on_install@prod1
SQL> SELECT name FROM v$archived_log WHERE sequence# IN (250,251,252);

NAME
--------------------------------------------------------------------------------
/fs1/arc_dest/log_1_250.arc
/fs1/arc_dest/log_1_251.arc
/fs1/arc_dest/log_1_252.arc
The gap sequence in this case consists of log_1_250.arc, log_1_251.arc, and log_1_252.arc. The settings for LOG_ARCHIVE_DEST_1 and LOG_ARCHIVE_FORMAT in the standby database parameter file are as follows:
LOG_ARCHIVE_DEST_1 = 'LOCATION=/fs2/arc_dest/' LOG_ARCHIVE_FORMAT = log_%t_%s.arc
You move the gap sequence logs from the primary file system to the standby file system, renaming them according to values for the LOG_ARCHIVE_DEST_1 and LOG_ARCHIVE_FORMAT initialization parameters at the standby site:
% cp /fs1/arc_dest/log_1_250.arc /fs2/arc_dest/log_1_250.arc
% cp /fs1/arc_dest/log_1_251.arc /fs2/arc_dest/log_1_251.arc
% cp /fs1/arc_dest/log_1_252.arc /fs2/arc_dest/log_1_252.arc
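For three logs, explicit cp commands are manageable; for a longer gap sequence, the copy can be scripted. The following is a sketch, assuming thread 1 and the log_%t_%s.arc format used at both sites in this scenario (the function name and argument order are illustrative, not part of Oracle):

```shell
# copy_gap_logs: copy a contiguous range of archived redo logs from the
# primary archive destination to the standby archive destination.
# Assumes the log_%t_%s.arc LOG_ARCHIVE_FORMAT shown above.
copy_gap_logs() {
  src_dir=$1; dst_dir=$2; thread=$3; seq_no=$4; high=$5
  while [ "$seq_no" -le "$high" ]; do
    cp "$src_dir/log_${thread}_${seq_no}.arc" \
       "$dst_dir/log_${thread}_${seq_no}.arc"
    seq_no=`expr $seq_no + 1`
  done
}

# For the gap identified above (thread 1, sequences 250 through 252):
# copy_gap_logs /fs1/arc_dest /fs2/arc_dest 1 250 252
```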
Now you can apply the gap sequence logs using the RECOVER AUTOMATIC STANDBY DATABASE statement. This statement uses the values of the LOG_ARCHIVE_DEST_1 and LOG_ARCHIVE_FORMAT parameters to construct the target filename. Once the gap sequence logs have been applied, the standby database is synchronized with the primary database and can be placed in managed recovery mode.
While connected to the standby database in SQL*Plus, recover the database using the AUTOMATIC option:
SQL> RECOVER AUTOMATIC STANDBY DATABASE;
ORA-00279: change 35083 generated at 08/16/1999 14:08:37 needed for thread 2
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
CANCEL
Media recovery cancelled.
SQL>
After recovering the gap sequence logs, Oracle prompts you for the name of a log that does not exist. The reason is that the recovery process does not know about the logs archived to the STANDBY service by the primary database. Cancel recovery at this point:
SQL> CANCEL
You can now enable managed recovery using the RECOVER MANAGED STANDBY DATABASE statement. You decide to use the TIMEOUT option of the RECOVER statement to specify a time interval of 20 minutes, so that Oracle waits up to 20 minutes for each requested archived log to be written to the directory of the standby database control file. If a requested archived log is not written to that directory within the specified interval, the recovery operation is canceled.
While connected to the standby database using SQL*Plus, place the standby database in managed recovery mode:
SQL> RECOVER MANAGED STANDBY DATABASE TIMEOUT 20;
Oracle now begins managed recovery. As the primary database archives redo logs to the standby site, the standby database automatically applies them.
This scenario describes the creation of a standby database STANDBY1 on a remote host. The following assumptions are being made:
Create the backup that will form the basis for the standby database.
SQL> SELECT name FROM v$datafile;

NAME
-----------------------------------------------------------------------
/vobs/oracle/dbs/dbf_1.f

1 row selected.
SQL> SHUTDOWN IMMEDIATE;
% cp /vobs/oracle/dbs/dbf_1.f /backup
SQL> STARTUP PFILE=PRMYinit.ora;
SQL> ARCHIVE LOG LIST
If the output from the ARCHIVE LOG LIST statement displays "No Archive Mode," perform the following steps:
SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/backup/stbycf.f';
The standby database control file and the primary database datafiles are in the same temporary location at the primary site to make copying to the standby site easier.
Copy the primary database datafiles and the standby control file from the temporary location at the primary site to the standby site, as the following example shows:
% rcp /backup/* STBYHOST:/fs2/oracle/stdby
This scenario assumes that the TCP/IP network protocol is used to connect to the primary and the standby databases. This step involves editing the following parameter files:
You need to edit the tnsnames.ora file and add an entry with the following format:
standby_service_name =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=tcp)(PORT=port_number)(HOST=host_name))
    (CONNECT_DATA=(SID=standby_sid))
  )
The tnsnames.ora file is typically located in the $ORACLE_HOME/network/admin directory on the primary site. Substitute appropriate values for standby_service_name, port_number, host_name, and standby_sid, as the following example shows:
standby1 =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=tcp)(PORT=5112)(HOST=STBYHOST))
    (CONNECT_DATA=(SID=stdby1))
  )
You need to edit the listener.ora file and add two entries with the following format:
STANDBY_LISTENER =
  (ADDRESS_LIST=
    (ADDRESS=(PROTOCOL=tcp)(PORT=port_number)(HOST=host_name))
  )
SID_LIST_STANDBY_LISTENER =
  (SID_LIST=
    (SID_DESC=(SID_NAME=standby_sid)(ORACLE_HOME=/oracle_home))
  )
The listener.ora file is typically located in the $ORACLE_HOME/network/admin directory on the standby site. Substitute appropriate values for port_number, host_name, standby_sid, and oracle_home, as the following example shows:
STDBY1_LISTENER =
  (ADDRESS_LIST=
    (ADDRESS=(PROTOCOL=tcp)(PORT=5112)(HOST=STBYHOST))
  )
SID_LIST_STDBY1_LISTENER =
  (SID_LIST=
    (SID_DESC=(SID_NAME=stdby1)(ORACLE_HOME=/oracle))
  )
Make sure the SID_NAME in the listener.ora file matches the SID in the tnsnames.ora file. Also, make sure the PORT and HOST values are the same in the two files. Note that you have the option of creating a new listener or adding a new address to an existing listener.
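These matching rules can be spot-checked from the shell. The sketch below assumes simple, single-entry files in which each KEY=value pair sits on one line, as in the examples above; the helper name is illustrative, not an Oracle tool:

```shell
# extract_value: print the value of the first (KEY=value) pair found in
# a Net8 configuration file. Crude and line-oriented; not a real parser.
extract_value() {
  sed -n "s/.*($2=\([^)]*\)).*/\1/p" "$1" | head -1
}

# Compare the values that must match between the two files, e.g.:
# [ "`extract_value tnsnames.ora PORT`" = "`extract_value listener.ora PORT`" ] || echo "PORT mismatch"
# [ "`extract_value tnsnames.ora HOST`" = "`extract_value listener.ora HOST`" ] || echo "HOST mismatch"
# [ "`extract_value tnsnames.ora SID`"  = "`extract_value listener.ora SID_NAME`" ] || echo "SID mismatch"
```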
The two entries that you added in step 4 define a listener to listen for connections to standby database stdby1 on port 5112. In this step, you must start the stdby1_listener listener. For example:
% lsnrctl start stdby1_listener
% rcp /vobs/oracle/dbs/PRMYinit.ora STBYHOST:/fs2/oracle/stdby/STBYinit.ora
The STBYinit.ora file looks as follows:
#
# parameter file STBYinit.ora
#
db_name=prod1              #The same as PRMYinit.ora

# The following two parameters have been changed from PRMYinit.ora
control_files=/fs2/oracle/stdby/stbycf.f
lock_name_space=stdby1

# The following parameters are the same as PRMYinit.ora
audit_trail=FALSE
o7_dictionary_accessibility=FALSE
global_names=FALSE
db_domain=regress.rdbms.dev.us.oracle.com
commit_point_strength=1
processes=30
sessions=30
transactions=21
transactions_per_rollback_segment=21
distributed_transactions=10
db_block_buffers=100
shared_pool_size=4000000
ifile=/vobs/oracle/work/tkinit.ora   # Verify that file exists on the standby site
                                     # and that the file specification is valid

# specific parameters for standby database
log_archive_format = stdby_%t_%s.arc
standby_archive_dest=/fs2/oracle/stdby/
log_archive_dest_1='LOCATION=/fs2/oracle/stdby/'
db_file_name_convert=('/vobs/oracle/dbs','/fs2/oracle/stdby')
log_file_name_convert=('/vobs/oracle/dbs','/fs2/oracle/stdby')
log_archive_start=FALSE
log_archive_trace=127
% cp STBYinit.ora Failover.ora
Edit Failover.ora so if you fail over to the stdby1 standby database, you can use the Failover.ora file as the initialization parameter file for the new primary database. Make sure you use appropriate values for the LOG_ARCHIVE_DEST_n parameters.
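A sketch of the kind of edits involved (the destination path is illustrative; the point is that the instance archives locally once it becomes the primary, and the standby-only parameters no longer apply):

```
# Failover.ora: changes relative to STBYinit.ora (illustrative values)
log_archive_start  = TRUE                          # archive automatically as primary
log_archive_dest_1 = 'LOCATION=/fs2/oracle/arc/'   # a local archive destination
# Standby-only parameters such as standby_archive_dest,
# db_file_name_convert, and log_file_name_convert would be removed.
```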
Edit the tnsnames.ora file on the standby site in case failover to the standby database occurs. See step 4 for information on how to configure the tnsnames.ora file.
Start the standby database to enable archiving.
Set the ORACLE_SID environment variable to the same SID value used in the tnsnames.ora file on the primary site and the listener.ora file on the standby site as follows:
% setenv ORACLE_SID stdby1
SQL> CONNECT sys/sys_password as sysdba
SQL> STARTUP NOMOUNT PFILE=STBYinit.ora;
SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1 MANDATORY REOPEN=60'
LOG_ARCHIVE_DEST_STATE_2 = ENABLE
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=standby1 MANDATORY REOPEN=60';
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
SELECT high.thread#, "LowGap#", "HighGap#"
FROM
  (
    SELECT thread#, MIN(sequence#)-1 "HighGap#"
    FROM
      (
        SELECT a.thread#, a.sequence#
        FROM
          (SELECT * FROM v$archived_log) a,
          (SELECT thread#, MAX(next_change#) gap1
           FROM v$log_history
           GROUP BY thread#) b
        WHERE a.thread# = b.thread#
          AND a.next_change# > gap1
      )
    GROUP BY thread#
  ) high,
  (
    SELECT thread#, MIN(sequence#) "LowGap#"
    FROM
      (
        SELECT thread#, sequence#
        FROM v$log_history, v$datafile
        WHERE checkpoint_change# <= next_change#
          AND checkpoint_change# >= first_change#
      )
    GROUP BY thread#
  ) low
WHERE low.thread# = high.thread#;

   THREAD#    LowGap#   HighGap#
---------- ---------- ----------
         1         90         92
SELECT name FROM v$archived_log
WHERE thread#=1 AND sequence#<=92 AND sequence#>=90;

NAME
-----------------------------------------
/vobs/oracle/dbs/r_1_90.arc
/vobs/oracle/dbs/r_1_91.arc
/vobs/oracle/dbs/r_1_92.arc
% rcp /vobs/oracle/dbs/r_1_90.arc STBYHOST:/fs2/oracle/stdby/stdby_1_90.arc
% rcp /vobs/oracle/dbs/r_1_91.arc STBYHOST:/fs2/oracle/stdby/stdby_1_91.arc
% rcp /vobs/oracle/dbs/r_1_92.arc STBYHOST:/fs2/oracle/stdby/stdby_1_92.arc
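Because the standby site uses a different LOG_ARCHIVE_FORMAT (stdby_%t_%s.arc in STBYinit.ora), each log is renamed as it is shipped. A scripted sketch of the same operation (the function and the COPY_CMD variable are illustrative; in this scenario COPY_CMD would be rcp and the destination would carry the STBYHOST: prefix):

```shell
# ship_gap_logs: copy a range of archived logs to the standby site,
# renaming r_<thread>_<seq>.arc to stdby_<thread>_<seq>.arc to match
# the standby LOG_ARCHIVE_FORMAT. COPY_CMD defaults to cp; set it to
# rcp (and use a host: prefix on the destination) for a remote standby.
ship_gap_logs() {
  src_dir=$1; dst_dir=$2; thread=$3; seq_no=$4; high=$5
  copy_cmd=${COPY_CMD:-cp}
  while [ "$seq_no" -le "$high" ]; do
    $copy_cmd "$src_dir/r_${thread}_${seq_no}.arc" \
              "$dst_dir/stdby_${thread}_${seq_no}.arc"
    seq_no=`expr $seq_no + 1`
  done
}

# COPY_CMD=rcp ship_gap_logs /vobs/oracle/dbs STBYHOST:/fs2/oracle/stdby 1 90 92
```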
SQL> RECOVER STANDBY DATABASE;
On the standby database, enable managed recovery by issuing the following SQL statement:
SQL> RECOVER MANAGED STANDBY DATABASE;
This scenario describes the procedures you should follow when a physical change is made in the primary database. The following topics are covered:
To maintain consistency when you add a datafile to the primary database, you must add a corresponding datafile to the standby database. Otherwise, changes in the online redo logs that relate to the new datafile in the primary database will not be applied to the standby database.
SQL> CREATE TABLESPACE tbs_4 DATAFILE 'tbs_4.f' SIZE 2M;
SQL> STARTUP NOMOUNT PFILE=STBYinit.ora;
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
SQL> RECOVER MANAGED STANDBY DATABASE;
This SQL statement causes Oracle to stop applying archived redo logs because a datafile on the primary site does not exist on the standby site. Messages similar to the following are displayed when Oracle attempts to apply the archived redo logs:
ORA-00283: recovery session canceled due to errors
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/vobs/oracle/dbs/stdby/tbs_4.f'
The error messages indicate that the datafile has been added to the standby database control file, but the datafile has not been created yet.
SQL> ALTER DATABASE CREATE DATAFILE '/vobs/oracle/dbs/stdby/tbs_4.f' AS '/vobs/oracle/dbs/stdby/tbs_4.f';
SQL> RECOVER MANAGED STANDBY DATABASE;
In the primary database, assume you have transferred a datafile from another database to the primary database.
SQL> ALTER TABLESPACE tbs_4 BEGIN BACKUP;
% cp tbs_4.f /backup
SQL> ALTER TABLESPACE tbs_4 END BACKUP;
% rcp /backup/tbs_4.f STBYHOST:/vobs/oracle/dbs/stdby
SQL> RECOVER MANAGED STANDBY DATABASE;
After you add a datafile to the primary database and create or copy the corresponding datafile at the standby site, the status column of the V$DATAFILE view shows the value RECOVER for that datafile. The RECOVER value normally indicates that the datafile needs media recovery, but it is not accurate in this situation. To get an accurate value in the status column, you need to refresh the standby database control file. See Refreshing the Standby Database Control File for additional information.
Note: While refreshing the standby database control file will give you an accurate value, it may invalidate managed recovery. If managed recovery is invalidated, your only option is manual recovery.
When you rename one or more datafiles in the primary database, you also need to rename the corresponding datafiles in the standby database.
SQL> ALTER TABLESPACE tbs_4 OFFLINE;
SQL> ALTER TABLESPACE tbs_4 RENAME DATAFILE 'tbs_4.f' TO 'tbs_x.f';
SQL> ALTER DATABASE RECOVER TABLESPACE tbs_4;
SQL> ALTER TABLESPACE tbs_4 ONLINE;
SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS 'stbycf.f';
SQL> RECOVER AUTOMATIC STANDBY DATABASE;
SQL> SHUTDOWN IMMEDIATE;
% rcp stbycf.f STBYHOST:/fs2/oracle/stdby/
% mv tbs_4.f tbs_x.f
SQL> STARTUP NOMOUNT PFILE=STBYinit.ora;
SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
SQL> RECOVER MANAGED STANDBY DATABASE;
If you do not rename the corresponding datafile at the standby site and then attempt to refresh the standby database control file, the standby database will look for the renamed datafile but will not find it. Consequently, you will see error messages similar to the following:
ORA-00283: recovery session canceled due to errors
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/vobs/oracle/dbs/stdby/tbs_x.f'
When you delete one or more datafiles or tablespaces in the primary database, you also need to delete the corresponding datafiles or tablespaces in the standby database. You also need to refresh the standby database control file.
SQL> DROP TABLESPACE tbs_4;
% rm tbs_4.f
% rm tbs_4.f
One method of tuning available to the DBA is changing the size and number of online redo logs. Consequently, when you add or remove an online redo log at the primary site, it is important that you refresh the standby database control file.
SQL> ALTER DATABASE ADD LOGFILE 'prmy3.log' SIZE 100K;
or
SQL> ALTER DATABASE DROP LOGFILE 'prmy3.log';
If you use the CREATE CONTROLFILE statement in the primary database to change one or more database parameters, you need to refresh the standby database control file. Some of the parameters you can change with this statement are the maximum number of redo log file groups, redo log file members, archived redo log files, data files, or instances that can concurrently have the database mounted and open.
SQL> CREATE CONTROLFILE REUSE DATABASE stby1 NORESETLOGS
       LOGFILE 'prmy1.log' SIZE 100K,
               'prmy2.log' SIZE 100K
       DATAFILE 'dbf_1.f' SIZE 10M
       MAXLOGFILES 52
       ARCHIVELOG;
If you use the RESETLOGS clause of the CREATE CONTROLFILE statement, you will invalidate the standby database. Once the standby database is invalidated, your only option is to re-create the standby database.
In some cases, when you make a physical change to the primary database, in addition to making the corresponding change to the standby database, you also need to refresh the standby database control file. The online redo logs do not record changes made to the primary database control file. This section describes the cases where you need to refresh the standby database control file and the steps to follow.
You need to refresh the standby database control file whenever you:
Perform the following steps to keep the standby database control file synchronized with the primary database control file:
SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS 'stbycf.f';
SQL> RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> SHUTDOWN IMMEDIATE;
% rcp stbycf.f STBYHOST:/fs2/oracle/stdby/
SQL> STARTUP NOMOUNT PFILE=STBYinit.ora;
SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
SQL> RECOVER MANAGED STANDBY DATABASE;
You may get the following error messages when you try to enable managed recovery:
ORA-00308: cannot open archived log 'standby1'
ORA-27037: unable to obtain file status
SVR4 Error: 2: No such file or directory
Additional information: 3
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01152: file 1 was not restored from a sufficiently old backup
ORA-01110: data file 1: '/vobs/oracle/dbs/stdby/tbs_1.f'
If you get the ORA-00308 error, cancel recovery by issuing the following statement:
SQL> CANCEL
These error messages are issued when one or more logs in the gap sequence have not been successfully applied. If you receive these errors, repeat steps 7 and 8.
Some physical changes you make to the primary database can invalidate the standby database. Once a standby database is invalidated, your only option is to rebuild it.
The following clauses of the ALTER DATABASE statement invalidate the standby database:
In some SQL statements, the user has the option of specifying the NOLOGGING clause, which indicates that the database operation is not logged in the redo log file. Even though the user specifies the NOLOGGING clause, a redo log record is still written to the redo log. However, when the redo log file is transferred to the standby site and applied to the standby database, a portion of the datafile is unusable and marked as being unrecoverable. When you either activate the standby database, or open the standby database with the read-only option, and attempt to read the range of blocks that are marked as "UNRECOVERABLE," you will see error messages similar to the following:
ORA-01578: ORACLE data block corrupted (file # 1, block # 2521)
ORA-01110: data file 1: '/vobs/oracle/dbs/stdby/tbs_1.f'
ORA-26040: Data block was loaded using the NOLOGGING option
In order to recover after the NOLOGGING clause was specified, you need to copy the datafile that contains the unjournaled data from the primary site to the standby site. Perform the following steps:
SQL> SELECT name, unrecoverable_change# FROM v$datafile;

NAME                                                     UNRECOVERA
-------------------------------------------------------- ----------
/vobs/oracle/dbs/tbs_1.f                                       5216
/vobs/oracle/dbs/tbs_2.f                                          0
/vobs/oracle/dbs/tbs_3.f                                          0
/vobs/oracle/dbs/tbs_4.f                                          0

4 rows selected.
SQL> SELECT name, unrecoverable_change# FROM v$datafile;

NAME                                                     UNRECOVERA
-------------------------------------------------------- ----------
/vobs/oracle/dbs/stdby/tbs_1.f                                 5186
/vobs/oracle/dbs/stdby/tbs_2.f                                    0
/vobs/oracle/dbs/stdby/tbs_3.f                                    0
/vobs/oracle/dbs/stdby/tbs_4.f                                    0

4 rows selected.
Compare the value of the unrecoverable_change# column in both query results. If the value of the unrecoverable_change# column in the primary database is greater than the same column in the standby database, then the datafile needs to be copied from the primary site to the standby site.
In this example, the value of the unrecoverable_change# in the primary database for the tbs_1.f datafile is greater, so you need to copy the tbs_1.f datafile to the standby site.
SQL> ALTER TABLESPACE system BEGIN BACKUP;
SQL> EXIT;
% cp tbs_1.f /backup
SQL> ALTER TABLESPACE system END BACKUP;
In the primary database, issue the following statement:
SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/backup/stbycf.f';
In the standby database, issue the following statement:
SQL> SHUTDOWN IMMEDIATE;
% rcp /backup/* STBYHOST:/fs2/oracle/stdby/
SQL> STARTUP NOMOUNT PFILE=STBYinit.ora;
SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
SQL> RECOVER MANAGED STANDBY DATABASE;
You may get the following error messages when you try to enable managed recovery:
ORA-00308: cannot open archived log 'standby1'
ORA-27037: unable to obtain file status
SVR4 Error: 2: No such file or directory
Additional information: 3
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01152: file 1 was not restored from a sufficiently old backup
ORA-01110: data file 1: '/vobs/oracle/dbs/stdby/tbs_1.f'
If you get the ORA-00308 error, cancel recovery by issuing the following statement:
SQL> CANCEL
These error messages are issued when one or more logs in the gap sequence have not been successfully applied. If you receive these errors, repeat steps 8 and 9.
Every standby database is associated with one and only one primary database. A single primary database can, however, support multiple standby databases. This scenario identifies the kind of information you need in order to decide which of the multiple standby databases to activate.
One of the important things to consider in a multiple standby database configuration is whether the archive destination is mandatory or optional. The following table lists an advantage and disadvantage for each destination:
Consider the following recommendations in a multiple standby database configuration:
Suppose the primary database is located in San Francisco and supports five standby databases as follows:
| Standby | Location | Type | Description |
|---|---|---|---|
| 1 | local directory | Mandatory | Local copy of the archived redo logs. |
| 2 | San Francisco | Mandatory | Fail over to this standby database when there is physical damage at the primary site. This standby site is connected to the primary site by a local area network. |
| 3 | Boston | Optional | Fail over to this standby database when a disaster occurs that affects San Francisco. |
| 4 | Los Angeles | Optional | This standby site receives archived redo logs, but does not apply them. See Scenario 9: Standby Database with No Ongoing Recovery for a description of this type of configuration. |
| 5 | San Francisco | Optional | This standby site receives archived redo logs, and applies them after an 8-hour time lag. See Scenario 10: Standby Database with a Time Lag for a description of this type of configuration. |
Assume that a disaster occurs in San Francisco where the primary site is located, and the primary host is damaged. One of the standby databases must be activated. You cannot assume that the database administrator (DBA) who set up the multiple standby database configuration is available to decide which standby database to fail over to. Therefore, it is imperative to have a disaster recovery plan at each standby site, as well as at the primary site. Each member of the disaster recovery team needs to know about the disaster recovery plan and be aware of the procedures to follow. This scenario identifies the kind of information that the person who is making the decision would need when deciding which standby database to activate. One method of conveying information to the disaster recovery team is to include a ReadMe file at each standby site.
The ReadMe file at each site should describe how to:
There may be firewalls between systems. The ReadMe file should include instructions for going through the firewalls.
The following example shows the contents of a sample ReadMe file:
----------------Standby Database Disaster Recovery ReadMe File----------------

Warning:
********************************************************************************
Perform the steps in this procedure only if you are responsible for failing
over to a standby database after the primary database fails. If you perform
the steps outlined in this file unnecessarily, you may corrupt the entire
database system.
********************************************************************************

Multiple Standby Database Configuration:

No.  Location        Type      IP Address
---  --------------- --------- --------------
1    San Francisco   Primary   128.1.124.25
2    San Francisco   Standby   128.1.124.157
3    Boston          Standby   136.132.1.55
4    Los Angeles     Standby   145.23.82.16
5    San Francisco   Standby   128.1.135.24

You are in system No. 3, which is located in Boston.

Perform the following steps to fail over to the most up-to-date and
available standby database:

1. Log on to the local standby database as a DBA.

   a) Log on with the following user name and password:

      username: Standby3
      password: zkc722Khn

   b) Invoke SQL*Plus as follows:

      % sqlplus

   c) Connect as the DBA as follows:

      CONNECT sys/s23LsdIc AS SYSDBA

2. Connect to as many remote systems as possible. You can connect to a
   maximum of four systems. System 4 does not have a firewall, so you can
   connect to it directly. Systems 1, 2, and 5 share the same firewall
   host. You need to go to the firewall host first and then connect to
   each system. The IP address for the firewall host is 128.1.1.100. Use
   the following user name and password:

      username: Disaster
      password: 82lhsIW32

3. Log on to as many remote systems as possible with the following user
   names and passwords:

   Login information:

   No.  Location        IP Address    username   password
   ---  --------------- ------------- ---------- ----------
   1    San Francisco   128.1.124.25  Oracle8i   sdd290Ec
   2    San Francisco   128.1.124.157 Standby2   ei23nJHb
   3    (L o c a l)
   4    Los Angeles     145.23.82.16  Standby4   23HHoe2a
   5    San Francisco   128.1.135.24  Standby5   snc#$dnc

4. Invoke SQL*Plus on each remote system you are able to log on to as
   follows:

      % sqlplus

5. Connect to each remote database as follows:

      CONNECT sys/password AS SYSDBA

   The DBA passwords for each location are:

   No.  Location        Password
   ---  --------------- -----------
   1    San Francisco   x2dwlsd91
   2    San Francisco   a239s1DAq
   3    (L o c a l)
   4    Los Angeles     owKL(@as23
   5    San Francisco   sad_KS13x

6. If you are able to log on to System 1, invoke SQL*Plus and issue the
   following statements:

      SQL> SHUTDOWN IMMEDIATE;
      SQL> STARTUP PFILE=PRMYinit.ora;

   Note: If you are able to execute the STARTUP statement successfully,
   the primary database has not been damaged. Do not continue with this
   procedure.

7. Issue the following SQL statements on each standby database (including
   the one on this machine) that you were able to connect to:

      SQL> RECOVER MANAGED STANDBY DATABASE CANCEL;
      SQL> SHUTDOWN IMMEDIATE;
      SQL> STARTUP NOMOUNT PFILE=STBYinit.ora;
      SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
      SQL> RECOVER AUTOMATIC STANDBY DATABASE;
      *** press the Return key for each of the prompts
      SQL> SELECT THREAD#, MAX(SEQUENCE#) FROM V$LOG_HISTORY GROUP BY THREAD#;

   Compare the query results of each standby database. Activate the
   standby database with the largest sequence number.

8. Fail over to the standby database with the largest sequence number. On
   that standby database, invoke SQL*Plus and issue the following SQL
   statements:

      SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;
      SQL> SHUTDOWN IMMEDIATE;
      SQL> STARTUP PFILE=Failover.ora;

------------End of Standby Database Disaster Recovery ReadMe File-------------
When a standby database is activated, it becomes a production database, and is no longer capable of serving as a standby database. Client applications need to redirect their connections from the original primary database to the newly activated production database. This scenario describes the following ways to set up client failover:
The tnsnames.ora file can be configured for multiple addresses. In a local TNS configuration, at least one of the addresses should be the address of a standby site. Modify the tnsnames.ora file at each client site to ensure that an address for a standby site has been supplied. The tnsnames.ora file is typically located in the $ORACLE_HOME/network/admin directory. You can assign multiple addresses to one TNS name and use the FAILOVER option. See the Net8 Administrator's Guide for details about how to set multiple addresses and how to use the FAILOVER option. The following example shows an entry that has an address for a standby site in addition to the address for the primary site:
ProductDB =
  (DESCRIPTION=
    (FAILOVER=ON)
    (LOAD_BALANCE=OFF)
    (ADDRESS=(PROTOCOL=tcp)(PORT=1521)(HOST=PRMYHOST.foo.com))
    (ADDRESS=(PROTOCOL=tcp)(PORT=1521)(HOST=STBYHOST.foo.com))
    (CONNECT_DATA=(SID=db1))
  )
In this example, the primary database is located at PRMYHOST.foo.com, and the standby database is located at STBYHOST.foo.com. When the client application connects to ProductDB, it tries to send a connection request to PRMYHOST.foo.com first. If there is no response, the client application tries to send another connection request to STBYHOST.foo.com. When the primary database is down and the standby database is activated, the client application can connect to the new production database automatically.
If the primary database fails after the connection has been established, the client application does not automatically redirect in-progress requests to the newly activated production database. You must establish the connection to the ProductDB database again.
If you are using an Oracle Names server, you can change the TNS name settings on the server. You can assign multiple addresses to one TNS name and use the FAILOVER option. See the Net8 Administrator's Guide for details about how to set multiple addresses and how to use the FAILOVER option on an Oracle Names server. The format for setting up the TNS name on the Oracle Names server is the same as the format for the local tnsnames.ora file. Therefore, the example in Local TNS Configuration also applies to the Oracle Names server configuration.
The following configurations require that the client reconnect to the database server when the primary database fails over to the standby database:
However, if you are using an Oracle Call Interface (OCI) client, you can use transparent application failover (TAF). TAF is the ability of applications to automatically reconnect to the database if the connection fails. If the client application is not involved in a database transaction, then users may not notice the failure of the primary database server. See Oracle8i Designing and Tuning for Performance for details on how to configure TAF. The following example shows address information for the ProductDB database and the Standby1 database:
ProductDB =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=tcp)(PORT=1521)(HOST=PRMYHOST.foo.com))
    (CONNECT_DATA=(SID=db1)
      (FAILOVER_MODE=
        (BACKUP=Standby1)
        (TYPE=session)
        (METHOD=basic)))
  )

Standby1 =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=tcp)(PORT=1521)(HOST=STBYHOST.foo.com))
    (CONNECT_DATA=(SID=db1))
  )
Sequence of events:
Instead of setting the configurations so that the client can automatically fail over to the new production database, you can always choose to manually modify the network settings after the standby database is activated.
You can modify the local tnsnames.ora file. Redirect the TNS name pointing to the original production database to the newly activated production database. For example, assume the original TNS setting is as follows:
ProductDB =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=tcp)(PORT=1521)(HOST=PRMYHOST.foo.com))
    (CONNECT_DATA=(SID=db1))
  )
When the primary database on PRMYHOST.foo.com fails, and the standby database in STBYHOST.foo.com is activated, causing it to become the new production database, you need to edit the tnsnames.ora file and change the entry to the following:
ProductDB =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=tcp)(PORT=1521)(HOST=STBYHOST.foo.com))
    (CONNECT_DATA=(SID=db1))
  )
You do not need to change your client application; subsequent connections to the production database will be sent to the new production database on STBYHOST.foo.com.
If you are using an Oracle Names server, make a similar change for the corresponding entry. All clients using this Oracle Names server will send their subsequent connections to the new production database on STBYHOST.foo.com.
You can also change the settings on the DNS server. Change the settings for the domain name, which is used by the clients to locate the production database.
For example, assume the following:
Change the DNS server entry for the domain name in the DNS server to the following:
ProductDB.foo.com IN A 128.3.151.63
After you change the DNS settings, not all clients learn of the change immediately. The old settings may still be cached, for example by intermediate name servers or by client resolvers, so some clients continue to use the old settings until the cached entries expire and are replaced with the new ones.
When a standby database is in managed recovery mode, the standby database automatically applies archived redo logs as it receives them from the primary database. When the network goes down, automatic archival from the primary database to the standby database stops.
If the standby database is specified as an optional archive destination, then the primary database continues to operate normally.
When the network is up and running again, automatic archival of the archived redo logs from the primary database to the standby database resumes. However, if the standby database is specified as an optional archive destination, and a log switch occurred at the primary site, the standby database has a gap sequence for the time when the network was down. The standby database cannot apply any of the archived redo logs at the standby site until the gap sequence is resolved.
If the standby database is specified as a mandatory archive destination, then the primary database will not archive any redo logs until the network failure is resolved and the primary database is able to archive to the standby site.
The primary database may eventually stall if the network problem is not fixed in a timely manner because the primary database will not be able to switch to an online redo log that has not been archived. You can issue the following SQL query to determine whether the primary database stalled because it was not able to switch to an online redo log:
SELECT DECODE(COUNT(*),0,'NO','YES') "switch_possible"
FROM v$log
WHERE archived='YES';
If the output from the query displays "Yes," a log switch is possible; if the output displays "No," a log switch is not possible.
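The direction of the DECODE is easy to invert, so the following toy restatement uses a hard-coded count (a placeholder, not live V$LOG data) to spell out the rule: zero online log groups with ARCHIVED='YES' means no switch is possible.

```shell
#!/bin/sh
# Toy restatement of DECODE(COUNT(*),0,'NO','YES'). The count below is
# a placeholder for the number of online log groups with ARCHIVED='YES'.
archived_count=0

if [ "$archived_count" -eq 0 ]; then
    switch_possible=NO      # every online log group still needs archiving
else
    switch_possible=YES     # at least one archived group can be reused
fi
echo "switch_possible=$switch_possible"
```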
This scenario describes how to recover after a network failure.
The V$ARCHIVE_DEST view contains the network error and identifies which standby database cannot be reached. On the primary database, issue the following SQL statement:
SQL> SELECT * FROM V$ARCHIVE_DEST;

   DEST_ID STATUS    BINDING   NAME_SP TARGET  REOPEN_SEC DESTINATION      FAIL_DATE FAIL_SEQUE FAIL_BLOCK ERROR
---------- --------- --------- ------- ------- ---------- ---------------- --------- ---------- ---------- --------------------------
         1 VALID     MANDATORY SYSTEM  PRIMARY          0 /vobs/oracle/dbs                    0          0
         2 ERROR     MANDATORY SYSTEM  STANDBY          5 standby1         26-JUL-99         61          0 ORA-12541: TNS:no listener
         3 INACTIVE  OPTIONAL  SYSTEM  PRIMARY          0                                     0          0
         4 INACTIVE  OPTIONAL  SYSTEM  PRIMARY          0                                     0          0
         5 INACTIVE  OPTIONAL  SYSTEM  PRIMARY          0                                     0          0
The query results show there are errors archiving to the standby1 standby database.
The query results in step 1 display the error as TNS:no listener. You should check whether the listener on the standby site is started. If the listener is stopped, then start it.
If you are not able to solve the network problem quickly, and if the standby database is specified as a mandatory destination, you should try to prevent the database from stalling by doing one of the following:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = DEFER;
When the network problem is resolved, you can enable the archive destination again:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE;
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1 OPTIONAL REOPEN=60';
When the network problem is resolved, you can change the archive destination from optional back to mandatory:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1 MANDATORY REOPEN=60';
In some cases, you may not be the person responsible for correcting the problem. You can periodically query the V$ARCHIVE_DEST view to see if the problem has been resolved.
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
When the network is up and running again, and new archived logs are transferred from the primary to the standby database, they will not be applied to the standby database until the logs in the gap sequence have been applied manually.
On the standby database, run the following SQL script to identify the archived redo logs in the gap sequence:
SELECT high.thread#, "LowGap#", "HighGap#"
FROM
  (SELECT thread#, MIN(sequence#)-1 "HighGap#"
   FROM
     (SELECT a.thread#, a.sequence#
      FROM
        (SELECT * FROM v$archived_log) a,
        (SELECT thread#, MAX(next_change#) gap1
         FROM v$log_history
         GROUP BY thread#) b
      WHERE a.thread# = b.thread#
        AND a.next_change# > gap1)
   GROUP BY thread#) high,
  (SELECT thread#, MIN(sequence#) "LowGap#"
   FROM
     (SELECT thread#, sequence#
      FROM v$log_history, v$datafile
      WHERE checkpoint_change# <= next_change#
        AND checkpoint_change# >= first_change#)
   GROUP BY thread#) low
WHERE low.thread# = high.thread#;

   THREAD#    LowGap#   HighGap#
---------- ---------- ----------
         1         90         92
SELECT name FROM v$archived_log
WHERE thread#=1 AND sequence#<=92 AND sequence#>=90;

NAME
-----------------------------------------
/vobs/oracle/dbs/r_1_90.arc
/vobs/oracle/dbs/r_1_91.arc
/vobs/oracle/dbs/r_1_92.arc
% rcp /vobs/oracle/dbs/r_1_90.arc STBYHOST:/fs2/oracle/stdby/stdby_1_90.arc
% rcp /vobs/oracle/dbs/r_1_91.arc STBYHOST:/fs2/oracle/stdby/stdby_1_91.arc
% rcp /vobs/oracle/dbs/r_1_92.arc STBYHOST:/fs2/oracle/stdby/stdby_1_92.arc
SQL> RECOVER AUTOMATIC STANDBY DATABASE;
A network failure can cause multiple gaps in the standby database. The gaps must be resolved one at a time.
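To see why the gaps must be handled lowest-first, the following sketch simulates recovery over made-up sequence numbers (placeholders, not live V$ data): recovery can only consume the sequence that immediately follows the highest log already applied, so a later gap cannot be resolved before an earlier one.

```shell
#!/bin/sh
# Simulated gap resolution with placeholder sequence numbers: logs up to
# sequence 89 are applied; sequences 90-92 and 95-96 are on hand, so a
# second gap at 93-94 blocks progress once the first gap is consumed.
applied_max=89
available="90 91 92 95 96"

next=$((applied_max + 1))
for seq in $available; do
    if [ "$seq" -eq "$next" ]; then
        echo "applying sequence $seq"
        next=$((next + 1))
    else
        echo "stop: sequences $next to $((seq - 1)) are missing"
        break
    fi
done
```

In the simulation, sequences 90 through 92 apply cleanly, and recovery then stops at the missing 93-94 range until those logs are retrieved.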
It is important to specify a local directory at the primary site as a mandatory archive destination, so that all of the archived redo logs reside on the same system as the primary database. When the primary system is unreachable, and the primary database is part of a multiple standby database configuration, you can try to identify the archived redo logs at the other standby sites.
This scenario describes the case where you have failed over to a standby database and have begun using it as a normal production database. After a period of time, you decide you want to fail back to the original production system and make the production database the standby database again. The general steps are:
The detailed steps follow:
% rcp STBYinit.ora PRMYHOST:fallback.ora
The fallback.ora file will become the standby initialization parameter file for the standby database at the original primary site.
The fallback.ora file looks as follows:
#
# parameter file fallback.ora
#
db_name=prod1                  # The same as PRMYinit.ora
control_files=/fs2/oracle/stdby/stbycf.f
lock_name_space=fallback
audit_trail=FALSE
o7_dictionary_accessibility=FALSE
global_names=FALSE
db_domain=regress.rdbms.dev.us.oracle.com
commit_point_strength=1
processes=30
sessions=30
transactions=21
transactions_per_rollback_segment=21
distributed_transactions=10
db_block_buffers=100
shared_pool_size=4000000
ifile=/vobs/oracle/work/tkinit.ora

# specific parameters for standby database
log_archive_format=r_%t_%s.arc
standby_archive_dest=/vobs/oracle/dbs/
log_archive_dest_1='LOCATION=/vobs/oracle/dbs/'
log_archive_dest_state_1=ENABLE
db_file_name_convert=('/fs2/oracle/stdby','/vobs/oracle/dbs')
log_file_name_convert=('/fs2/oracle/stdby','/vobs/oracle/dbs')
log_archive_start=FALSE
log_archive_trace=127
You need to supply appropriate values for the LOG_ARCHIVE_DEST_1 and LOG_ARCHIVE_DEST_STATE_1 parameters.
Note: Step 7 of Scenario 2: Creating a Standby Database on a Remote Host suggested that you make a copy of the standby database initialization parameter file. If you made a copy, then you can modify the copy in this step.
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
SQL> SHUTDOWN IMMEDIATE;
% rcp /fs2/oracle/stdby/stdby_1_102.arc PRMYHOST:/vobs/oracle/dbs/r_1_102.arc
% rcp /fs2/oracle/stdby/stdby_1_103.arc PRMYHOST:/vobs/oracle/dbs/r_1_103.arc
SQL> CONNECT sys/sys_password AS SYSDBA
SQL> STARTUP NOMOUNT PFILE=fallback.ora;
SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
SQL> RECOVER AUTOMATIC STANDBY DATABASE;
SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP PFILE=PRMYinit.ora;
This scenario describes what the DBA needs to do when archived online redo logs are automatically transferred to the standby site, but they are not automatically applied to the standby database.
For example, suppose a standby database is set up on a host running a Web server. The workload for the host is very heavy, and you do not want to add workload to the host by automatically applying redo logs as they arrive from the primary site. You decide to let the host receive archived redo logs without applying them. Thus, the system resources for applying the redo logs will be saved. Later, if you decide to open the standby database as a read-only database to perform some queries, or activate the standby database as a production database, you can stop the Web server and apply all of the redo logs at one time.
This scenario assumes that you already have a standby database set up. Steps 1 through 10 in Scenario 2: Creating a Standby Database on a Remote Host describe how to set up a standby database.
The listener for the standby database should be started. The standby database should be started and mounted, but not in recovery mode. (Thus, the time and resources for applying redo logs will be saved.)
This scenario is similar to Scenario 2: Creating a Standby Database on a Remote Host. In Scenario 2, the standby database is in managed recovery mode. When a standby database is in managed recovery mode, it automatically applies archived redo logs received from the primary site.
When the archived redo logs are not immediately applied to the standby database, there is no ongoing recovery at the standby site. This section describes the tasks a DBA needs to consider when managing a standby database with no ongoing recovery. This section covers the following topics:
When a datafile is added to the primary site, you need to decide what action to take on the standby site. You have the following options:
When you attempt to apply the redo logs, messages similar to the following are displayed:
ORA-00283: recovery session canceled due to errors
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/vobs/oracle/dbs/stdby/tbs_4.f'
Create the corresponding new datafile by issuing the following statement:
SQL> ALTER DATABASE CREATE DATAFILE '/vobs/oracle/dbs/stdby/tbs_4.f' AS '/vobs/oracle/dbs/stdby/tbs_4.f';
If the control file on the primary site has been altered and you want the change reflected on the standby site, apply the redo logs at the standby site by issuing the following statement:
SQL> RECOVER AUTOMATIC STANDBY DATABASE;
After you apply the redo logs at the standby site, refresh the standby database control file. See Refreshing the Standby Database Control File for the steps.
In order to recover after the NOLOGGING clause was specified at the primary site, you must do the following:
SQL> RECOVER AUTOMATIC STANDBY DATABASE;
See Scenario 4: Recovering After the NOLOGGING Clause Was Specified for the procedure to follow.
If you can afford to lose the changes incurred by specifying NOLOGGING at the primary site, you may choose to do nothing.
If this standby site is mandatory, you do not need to do anything at the standby site. If the standby site is optional, you should resolve the gaps immediately by applying the redo logs. It is very important to resolve the gaps in a timely manner. Otherwise, you run the risk of not being able to access the archived redo logs when the primary site is unreachable.
Before you activate the standby database, you should apply all applicable redo logs. You must resolve any gaps in the redo log sequence before you activate the standby database, as outlined in the following steps:
SELECT high.thread#, "LowGap#", "HighGap#"
FROM
  (SELECT thread#, MIN(sequence#)-1 "HighGap#"
   FROM
     (SELECT a.thread#, a.sequence#
      FROM
        (SELECT * FROM v$archived_log) a,
        (SELECT thread#, MAX(sequence#) gap1
         FROM v$log_history
         GROUP BY thread#) b
      WHERE a.thread# = b.thread#
        AND a.sequence# > gap1)
   GROUP BY thread#) high,
  (SELECT thread#, MIN(sequence#) "LowGap#"
   FROM
     (SELECT thread#, sequence#
      FROM v$log_history, v$datafile
      WHERE checkpoint_change# <= next_change#
        AND checkpoint_change# >= first_change#)
   GROUP BY thread#) low
WHERE low.thread# = high.thread#;

   THREAD#    LowGap#   HighGap#
---------- ---------- ----------
         1         90         92
The gap sequence is the LowGap# to the HighGap#. In this example, the gap sequence is 90, 91, and 92 for thread 1. If no gap sequence is selected in this step, go to step 5.
Obtain the filenames of the logs in the gap sequence by performing a query on the V$ARCHIVED_LOG view as follows:
SELECT name FROM v$archived_log
WHERE thread#=1 AND sequence#<=92 AND sequence#>=90;

NAME
-----------------------------------------
/vobs/oracle/dbs/r_1_90.arc
/vobs/oracle/dbs/r_1_91.arc
/vobs/oracle/dbs/r_1_92.arc
% rcp /vobs/oracle/dbs/r_1_90.arc STBYHOST:/fs2/oracle/stdby/stdby_1_90.arc
% rcp /vobs/oracle/dbs/r_1_91.arc STBYHOST:/fs2/oracle/stdby/stdby_1_91.arc
% rcp /vobs/oracle/dbs/r_1_92.arc STBYHOST:/fs2/oracle/stdby/stdby_1_92.arc
It is important to specify a local directory at the primary site as a mandatory archive destination so that all of the archived redo logs reside on the same system as the primary database. When the primary system is unreachable, and the primary database is part of a multiple standby database configuration, you can try to identify the archived redo logs at the other standby sites.
SQL> RECOVER AUTOMATIC STANDBY DATABASE;
Repeat steps 1 through 4 until there are no more gaps.
SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;
SQL> SHUTDOWN IMMEDIATE;
If necessary, build the parameter file for the new production instance. You can build it from the parameter file for the standby database. Then, you can issue the following statement at the standby database:
SQL> STARTUP PFILE=FailOver.ora;
In managed recovery mode, the standby database automatically applies redo logs when they arrive from the primary database. But in some cases, you may not want the logs to be applied immediately, because you want to create a time lag between the archiving of a redo log at the primary site and the application of the log at the standby site. A time lag can protect against the transfer of corrupted or erroneous data from the primary site to the standby site.
For example, suppose you run a batch job every night on the primary database. Unfortunately, you accidentally ran the batch job twice, and you did not realize the mistake until the batch job completed for the second time. Ideally, you need to roll back the database to the point in time before the batch job began. A primary database that has a standby database with a time lag (for example, 8 hours) could help you recover. You could activate the standby database with the time lag and use it as the new production database.
To create a standby database with a time lag, use manual recovery mode instead of managed recovery mode. The online redo log files can still be transferred automatically, just as with managed recovery mode, but the log files are not immediately applied to the standby database. You can run a script periodically to check whether each redo log is old enough to meet your desired time lag; the script moves the redo logs that are old enough to the manual recovery directory and then applies them manually.
This scenario uses a 4-hour time lag in its examples and covers the following topics:
Readers of this scenario are assumed to be familiar with the procedures for creating a typical standby database. The details have been omitted from the steps outlined in this scenario. Refer to Scenario 2: Creating a Standby Database on a Remote Host for details of normal standby database setup.
Perform the following steps to create a standby database with a time lag:
Edit the STANDBY_ARCHIVE_DEST and LOG_ARCHIVE_DEST_1 parameters.
The LOG_ARCHIVE_DEST_1 parameter specifies the location of the archived redo logs. You must use this directory when performing manual recovery. You must use the directory specified by the STANDBY_ARCHIVE_DEST parameter when performing managed recovery.
For example, assume you have set these parameters to the following values:
STANDBY_ARCHIVE_DEST=/fs2/oracle/stdby_log/
LOG_ARCHIVE_DEST_1='LOCATION=/fs2/oracle/stdby/'
The following Perl script performs what is outlined in this step:
#!/usr/local/bin/perl

# How many hours the standby database should lag behind the primary database
$LAG_HOUR = 4;

# The manual recovery directory
$DEST_DIR = '/fs2/oracle/stdby/';

# The flag for whether there are new logs to be applied
$needApply = 0;

# Check the managed recovery directory
while ( </fs2/oracle/stdby_log/*.arc> ) {
    # Get the timestamp of the file
    $file_time = (stat($_))[9];

    # See if the file is "old enough"
    if ( time - $file_time > $LAG_HOUR*60*60 ) {
        print "mv $_ $DEST_DIR\n";
        system "mv $_ $DEST_DIR";
        $needApply = 1;
    }
}

# If redo logs were moved in this round, apply them
if ( $needApply == 1 ) {
    system "/usr/Lagged_Standby/ApplyLog";
}
The shell script (/usr/Lagged_Standby/ApplyLog) used to apply the redo logs consists of the following:

sqlplus internal << EOF
recover automatic standby database;
cancel
exit
EOF
For example, in UNIX, you can write a CRON job file. Issue the man crontab command at the UNIX command shell to get help on how to write a CRON job file.
You should decide how frequently to schedule the job. The more frequently the job runs, the more granularity you can get, and the less volume of logs need to be applied at a time. The minimum granularity is one redo log file.
For example, if you want the job to run every 10 minutes, you can write the following CRON job file (suppose the script file written in step 9 is /usr/Lagged_Standby/lag4.pl):
0,10,20,30,40,50 * * * * /usr/Lagged_Standby/lag4.pl
Suppose the preceding CRON job file is /usr/Lagged_Standby/jobs.cron. You can schedule the CRON job by issuing the following command:
% crontab /usr/Lagged_Standby/jobs.cron
The lag4.pl script will run every 10 minutes. A redo log is moved and applied the first time the script runs after the log reaches the specified age (in this example, 4 hours).
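As a quick sanity check on the effective lag, using the values from this example: a log becomes eligible 240 minutes after it is archived and is moved by the next job run, so the observed lag falls between the specified lag and the lag plus one job interval.

```shell
#!/bin/sh
# Bounds on the observed time lag, using the values from this example:
# a 4-hour lag and a cron job that runs every 10 minutes.
LAG_HOUR=4
JOB_INTERVAL_MIN=10

min_lag=$((LAG_HOUR * 60))                  # log just reached the threshold
max_lag=$((min_lag + JOB_INTERVAL_MIN))     # worst case: the job just ran
echo "observed lag: between $min_lag and $max_lag minutes"
```

Running the job more often narrows this window toward the specified lag; the minimum granularity remains one redo log file.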
The runtime scenario of a standby database with a time lag is slightly different from a standby database with no time lag, because the application of the online redo logs lags behind.
As the DBA, you need to keep the following tasks in mind when managing a standby database with a time lag:
% mv /fs2/oracle/stdby_log/*.arc /fs2/oracle/stdby/
SQL> RECOVER AUTOMATIC STANDBY DATABASE;
This way, the standby database control file is updated, but the lag disappears. You need to wait for the specified time lag (for example, 4 hours) to re-establish the lag.
% mv /fs2/oracle/stdby_log/*.arc /fs2/oracle/stdby/
SQL> RECOVER AUTOMATIC STANDBY DATABASE;
This way, the standby database control file is updated, but the lag disappears. You need to wait for the specified time lag (for example, 4 hours) to re-establish the lag.
If this standby site is mandatory, you do not need to do anything at the standby site. If the standby site is optional, you can resolve the gaps immediately by performing the following steps:
% mv /fs2/oracle/stdby_log/*.arc /fs2/oracle/stdby/
SQL> RECOVER AUTOMATIC STANDBY DATABASE;
This way, the standby database control file is updated, but the lag disappears. You need to wait for the specified time lag (for example, 4 hours) to re-establish the lag.
In the case stated at the beginning of this scenario, you may want to take advantage of the time lag and obtain a production database whose state reflects a specified time (for example, 4 hours) before that of the current production database. You can activate the appropriate time-lagged standby database as follows:
SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP PFILE=Failover.ora;
If you do not want to take advantage of the time lag, you can activate the standby database as a normal standby database with no time lag as follows:
% mv /fs2/oracle/stdby_log/*.arc /fs2/oracle/stdby/
SQL> RECOVER AUTOMATIC STANDBY DATABASE;
SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP PFILE=Failover.ora;
Copyright © 1996-2000, Oracle Corporation. All Rights Reserved.