The parameters shown in the example are valid for the Chicago database when it is running in either the primary or the standby database role. The configuration examples use the names shown in the following table. These parameters control how redo transport services transmit redo data to the standby system and how redo data is archived on the local file system.
These are the recommended settings, and they require standby redo log files (see Section 3). The example shows the additional standby-role initialization parameters on the primary database. These parameters take effect when the primary database is transitioned to the standby role. Specifying these initialization parameters sets up the primary database to resolve gaps, convert new datafile and log file path names from a new primary database, and archive the incoming redo data when this database is in the standby role.
With the initialization parameters for both the primary and standby roles set as described, none of the parameters need to change after a role transition. The following table provides a brief explanation of each parameter setting shown in the examples. This destination transmits redo data to the remote physical standby destination boston. See Section 5. See Oracle Database Administrator's Guide for information about archiving. The following table provides a checklist of the tasks that you perform to create a physical standby database and the database or databases on which you perform each task.
Table: Creating a Physical Standby Database. You can use any backup copy of the primary database to create the physical standby database, as long as you have the necessary archived redo log files to completely recover the database. Then, create the control file for the standby database, and open the primary database to user access, as shown in the following example. Step 1: Copy the primary database parameter file to the standby database.
Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) used by the primary database; a text initialization parameter file can be copied to the standby location and modified.
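A minimal sketch of this step, run on the primary with SQL*Plus (the PFILE path is illustrative, not from the original text):

```sql
-- Write a text parameter file from the server parameter file in use.
-- The resulting file can be copied to the standby host and edited there.
CREATE PFILE='/tmp/initchicago.ora' FROM SPFILE;
```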
Later, in Section 3. Step 2: Set initialization parameters on the physical standby database. Although most of the initialization parameter settings in the text initialization parameter file that you copied from the primary system are also appropriate for the physical standby database, some modifications need to be made.
The example shows the portion of the standby initialization parameter file where values were modified for the physical standby database. Parameter values that differ from the earlier examples are shown in bold typeface. The parameters shown in the example are valid for the Boston database when it is running in either the primary or the standby database role.
If the values differ, redo transport services may be unable to transmit redo data from the primary database to the standby databases. The following table provides a brief explanation of the parameter settings shown in the example that differ from the primary database settings. If a switchover occurs and this instance becomes the primary database, then it will transmit redo data to the remote Chicago destination.
Use an operating system copy utility to copy the following binary files from the primary system to the standby system: the standby control file created in Section 3. On platforms other than Windows, create a password file, and set the password for the SYS user to the same password used by the SYS user on the primary database.
The password for the SYS user on every database in a Data Guard configuration must be identical for redo transmission to succeed. Step 3: Configure listeners for the primary and standby databases. On both the primary and standby sites, use Oracle Net Manager to configure a listener for the respective databases. To restart the listeners so that they pick up the new definitions, enter the LSNRCTL utility STOP and START commands on both the primary and standby systems. On both the primary and standby systems, use Oracle Net Manager to create a network service name for the primary and standby databases that will be used by redo transport services.
The Oracle Net service name must resolve to a connect descriptor that uses the same protocol, host address, port, and service that you specified when you configured the listeners for the primary and standby databases.
The connect descriptor must also specify that a dedicated server be used. To perform a hot backup, put the tablespaces in BACKUP mode one at a time and then copy the datafiles to the backup location. In Oracle10g, a logical standby database is created by transitioning a physical standby database, so a physical standby database must be created first; this is a much simpler task. In this section, it is assumed that the primary database and the standby database reside on separate hosts.
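The hot backup step described above can be sketched as follows (the tablespace name is illustrative; repeat for each tablespace in the database):

```sql
-- Put the tablespace into backup mode, copy its datafiles with an OS
-- utility, then end backup mode before moving on to the next tablespace.
ALTER TABLESPACE users BEGIN BACKUP;
--   (copy the USERS datafiles to the backup location here)
ALTER TABLESPACE users END BACKUP;
```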
A quick review of the standby database creation concepts explained in Chapter 3, "Implementing Standby Databases," is good preparation for this section, as most of the concepts presented here are similar to those of Oracle9i that are described in great detail in that chapter. The following is a step-by-step guide to creating a logical standby database. First, enable forced logging to capture all database changes in the redo log file.
Next, enable supplemental logging to capture additional information that assists the SQL Apply operation on the logical standby database; it is enabled by executing a statement on the primary database. Then create a physical standby database and start managed recovery so that the primary database and the standby database are in sync.
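The forced logging and supplemental logging steps above are typically performed on the primary database with statements of the following form (the primary key/unique index variant of supplemental logging is shown as one common choice):

```sql
-- Force redo generation for all changes, including NOLOGGING operations:
ALTER DATABASE FORCE LOGGING;

-- Log additional key columns so SQL Apply can identify rows to modify:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS;
```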
The method of creating a physical standby database is explained in detail in Chapter 3 of this book. Once the physical standby database is recovered to the last available archived redo log file, start the process of transitioning it to a logical standby database. Modify the initialization parameter file to support the logical standby database.
A logical standby database has its own online log files, which are archived to one of the local archive destinations. The following is an extract of the init.ora file.
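The original extract is not reproduced here; the following hypothetical fragment merely illustrates the kind of settings involved (parameter names are standard, values invented):

```
# Archive the logical standby's own online log files to a local destination
log_archive_dest_1='LOCATION=/u01/arch/ VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)'
log_archive_dest_state_1=ENABLE
```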
The create control file statement also creates the LogMiner dictionary; hence, it may take a few minutes to complete, depending on the availability of computing resources. The primary database generates redo data during the LogMiner dictionary build. This redo data needs to be applied on the physical standby database before activating it as the logical standby database.
To apply these log files, mount the physical standby database and start the recovery. Once all the archived redo logs have been applied to the standby database, control returns to the user. The next step is to change the database name using the nid utility and to reflect the new name in the initialization parameter file. Then create temp files for the temp tablespace, and start the log apply service. These steps are identical to the steps used for Oracle9i, and are explained in Chapter 3 of this text.
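The temp file and log apply steps can be sketched as follows (the tempfile path and size are illustrative):

```sql
-- Add a temp file to the temporary tablespace of the logical standby:
ALTER TABLESPACE temp ADD TEMPFILE '/u01/oradata/boston/temp01.dbf'
  SIZE 100M REUSE;

-- Start the SQL Apply (log apply) service:
ALTER DATABASE START LOGICAL STANDBY APPLY;
```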
The verification techniques remain the same as those explained for Oracle9i. Copy the backup files (backup set, standby control file, archived logs) and the password file to the same location on the standby database server. The DBMS_LOGSTDBY.BUILD procedure waits for all existing transactions to complete.
Long-running transactions executed on the primary database will affect the timeliness of this command. The redo logs will be applied until the LogMiner dictionary is found in the log files.
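The dictionary build referred to above is invoked on the primary database; a minimal sketch:

```sql
-- Build the LogMiner dictionary in the redo stream; the call waits
-- for transactions that are currently open to complete.
EXECUTE DBMS_LOGSTDBY.BUILD;
```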
That could take several minutes. Despite the use of transaction chunks, SQL Apply performance may degrade when processing transactions that modify more than eight million rows. For such transactions, SQL Apply uses the temporary segment to stage some of the internal metadata required to process the transaction.
You will need to allocate enough space in your temporary segment for SQL Apply to successfully process transactions larger than 8 million rows. All transactions start out categorized as small transactions.
Depending on the amount of memory available for the LCR cache and the amount of memory consumed by LCRs belonging to a transaction, SQL Apply determines when to recategorize a transaction as a large transaction. In this case, the log-mining component will page out the first part of the LONG data to read the later part of the column modification. In a well-tuned logical standby database, pageout activities will occur occasionally and should not affect the overall throughput of the system.
Modifications made to the logical standby database do not become persistent until the commit record of the transaction is mined from the redo log files and applied to the logical standby database. Thus, every time SQL Apply is stopped, whether as a result of a user directive or because of a system failure, SQL Apply must go back and mine the earliest uncommitted transaction again.
In cases where a transaction does little work but remains open for a long period of time, restarting SQL Apply from the start could be prohibitively costly because SQL Apply would have to mine a large number of archived redo log files again, just to read the redo data for a few uncommitted transactions.
To mitigate this, SQL Apply periodically checkpoints old uncommitted data. Archived redo log files that are not needed for restart are automatically deleted by SQL Apply. SQL Apply has the following characteristics when applying DML transactions that affect the throughput and latency on the logical standby database:
Batch updates or deletes done on the primary database, where a single statement results in multiple rows being modified, are applied as individual row modifications on the logical standby database.
Thus, it is imperative for each maintained table to have a unique index or a primary key. See Section 4. Direct path inserts performed on the primary database are applied using a conventional INSERT statement on the logical standby database. SQL Apply has the following characteristics when applying DDL transactions that affect the throughput and latency on the logical standby database:
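One way to find maintained tables that lack a usable unique identifier is to query the DBA_LOGSTDBY_NOT_UNIQUE view, excluding tables SQL Apply does not maintain anyway:

```sql
-- Tables without a primary key or non-null unique index;
-- BAD_COLUMN = 'Y' flags rows that cannot be uniquely identified.
SELECT owner, table_name, bad_column
  FROM dba_logstdby_not_unique
 WHERE (owner, table_name) NOT IN
       (SELECT owner, table_name FROM dba_logstdby_unsupported);
```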
DDL transactions are applied serially on the logical standby database. Thus, DDL transactions applied concurrently on the primary database are applied one at a time on the logical standby database. SQL Apply reissues the DDL that was performed at the primary database, and ensures that DMLs that occur within the same transaction on the same object that is the target of the DDL operation are not replicated at the logical standby database.
Thus, the following two cases will cause the primary and standby sites to diverge from each other: the DDL contains a non-literal value that is derived from the state at the primary database (a CREATE TABLE ... AS SELECT statement is one example of such DDL). Password verification functions that check for the complexity of passwords must be created in the SYS schema.
Because SQL Apply does not replicate objects created in the SYS schema, such verification functions will not be replicated to the logical standby database. You must create the password verification function manually at the logical standby database, and associate it with the appropriate profiles. The database guard is set to ALL by default on a logical standby database.
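The guard level can be changed with the ALTER DATABASE GUARD statement; for instance, the STANDBY level lets users modify tables that SQL Apply does not maintain:

```sql
-- ALL (default): no user can modify data on the standby.
-- STANDBY: users may modify tables not maintained by SQL Apply.
-- NONE: the guard is disabled.
ALTER DATABASE GUARD STANDBY;
```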
For example, the ALTER DATABASE GUARD STANDBY statement enables users to modify tables that are not maintained by SQL Apply. The following performance views monitor the behavior of SQL Apply maintaining a logical standby database. The following sections describe the key views that can be used to monitor a logical standby database. By default, the view records the most recent 10,000 events. If SQL Apply stops unexpectedly, the reason for the problem is also recorded in this view.
The view can be customized to contain other information, such as which DDL transactions were applied and which were skipped. For example:
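A typical query against the DBA_LOGSTDBY_EVENTS view (the date format setting is optional, for readable timestamps):

```sql
ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YY HH24:MI:SS';

-- List recent SQL Apply events in the order they occurred:
SELECT event_time, status, event
  FROM dba_logstdby_events
 ORDER BY event_time;
```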
This query shows that SQL Apply was started and stopped a few times. It also shows which DDL statements were applied and which were skipped. The most recent archived redo log file is sequence number 13, and the view also records the time it was received at the logical standby database. This view provides information related to the failover characteristics of the logical standby database, including:
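Apply progress can be checked with a query such as the following, assuming the DBA_LOGSTDBY_PROGRESS view:

```sql
-- Compare the SCN/time applied so far with the newest redo received:
SELECT applied_scn, applied_time, newest_scn, newest_time
  FROM dba_logstdby_progress;
```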
This output is from a logical standby database that has received and applied all redo generated from the primary database. This view provides information about the current state of the various processes involved with SQL Apply, including:
The output shows a snapshot of SQL Apply running. In the event of a restart, SQL Apply will start mining redo records generated after the time shown. The output shows that SQL Apply is running in real-time apply mode and is currently applying redo data received from the primary database; the primary database's DBID is shown, and the LogMiner session identifier associated with the SQL Apply session is 1.
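Such a snapshot can be obtained with a query like the following (the V$LOGSTDBY view; later releases expose similar information through V$LOGSTDBY_PROCESS):

```sql
-- One row per SQL Apply process (reader, builder, preparer,
-- analyzer, and apply slaves), with the SCN each has reached.
SELECT type, high_scn, status
  FROM v$logstdby;
```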
No rows are returned from this view when SQL Apply is not running. This view is only meaningful in the context of a logical standby database. SQL Apply can be in any of six states of progress: initializing SQL Apply, waiting for dictionary logs, loading the LogMiner dictionary, applying redo data, waiting for an archive gap to be resolved, and idle.
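Assuming the V$LOGSTDBY_STATE view (available in later releases), the current state can be read directly:

```sql
-- Returns a single row, e.g. STATE = 'APPLYING' or 'IDLE':
SELECT state FROM v$logstdby_state;
```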
The figure shows the flow of these states. Initializing State. The loading dictionary state can persist for a while; loading the LogMiner dictionary on a large database can take a long time. The dictionary load happens in three phases. The relevant archived redo log files or standby redo log files are mined to gather the redo changes needed to load the LogMiner dictionary. Applying State.