Tuesday, October 7, 2008

Database general activities and issues

In this post, general database activities and issues will be discussed along with their solutions. This post will be updated on a regular basis.
1. Enabling Archiving

To enable archivelog mode, so that Oracle saves the old redo logs to disk after a log switch before overwriting them when it cycles back around, perform the following commands. Oracle requires at least two redo logs (log groups) in order to accommodate archiving, since one log can be spooled to the archivelogs directory by the archiver while the log writer writes the changes currently being made to the database to the other.
$ vi $ORACLE_HOME/dbs/initPROD.ora
Edit init.ora file to contain the archive parameters (directory name and file format and the start flag):
log_archive_dest = /u01/oradata/PROD/archivelogs/
log_archive_format = arch_PROD_%S.arc
log_archive_start = true
("archive log start" can also be used when connected as sysdba to start the arch process, if you don't want it automatically done through init.ora; but, neither of these turn on archivelog mode - they just start the arch process itself.)
$ mkdir /u01/oradata/PROD/archivelogs
$ sqlplus "/ as sysdba"
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
Turns on archivelog mode for the database.
SQL> alter database open;
SQL> archive log list
Shows that you are now in archivelog mode, that archiving is enabled, and shows you the current log sequence numbers. You can "alter system switch logfile;" a few times to see archivelog files going into the log_archive_dest directory.
SQL> select * from v$database;
Also shows that you are in archivelog mode.
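For example, a quick way to confirm that archiving is working (a minimal sketch, using the log_archive_dest shown above) is to force a couple of log switches and then list that directory:
SQL> alter system switch logfile;
SQL> alter system switch logfile;
SQL> !ls -ltr /u01/oradata/PROD/archivelogs
New arch_PROD_*.arc files should appear after each switch.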
To disable archivelog mode, similar commands (when connected as sysdba) are used, except for using the "alter database noarchivelog;" command instead (these commands are also performed after the database is shut down). The log_archive_start parameter in the init.ora file would also need to be set to false to permanently disable the arch process itself.
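For reference, a minimal sketch of those disabling steps would be:
$ sqlplus "/ as sysdba"
SQL> shutdown immediate
SQL> startup mount
SQL> alter database noarchivelog;
SQL> alter database open;
SQL> archive log list
Shows that you are now back in noarchivelog mode. Also set log_archive_start = false in init.ora if you want to permanently disable the arch process itself.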

2. Moving Datafiles

You can move a datafile to another disk, or change its name (such as when restoring a copy of it from your backup tape to another location after a disk failure, or when moving it to another disk that has more room for it), by making the move after shutting down the database and then using the "alter database rename file" command to tell Oracle where the file has been moved, as in the following example, which moves one of the development datafiles from disk u03 to disk u04:
$ sqlplus "/ as sysdba"
SQL> shutdown immediate
SQL> !mv /u03/oradata/PROD/devl_PROD_01.dbf /u04/oradata/PROD
SQL> startup mount
SQL> alter database rename file '/u03/oradata/PROD/devl_PROD_01.dbf'
to '/u04/oradata/PROD/devl_PROD_01.dbf';
SQL> alter database open;
SQL> select * from v$datafile;
Lists the current datafiles, showing your changes.
This can also be done without shutting down the database, but taking the associated tablespace offline first (which prevents others from accessing that tablespace's tables, indexes, and other data):
$ sqlplus "/ as sysdba"
SQL> alter tablespace development offline;
SQL> !mv /u03/oradata/PROD/devl_PROD_01.dbf /u04/oradata/PROD
SQL> alter database rename file '/u03/oradata/PROD/devl_PROD_01.dbf'
to '/u04/oradata/PROD/devl_PROD_01.dbf';
SQL> alter tablespace development online;
SQL> select * from v$datafile;
Finally, the textual control file from the "alter database backup controlfile to trace;" command can be used to change the datafile locations, such as to move all datafiles from one disk to another (u03 to u04 in the example below). The textual control file is created in the database's udump directory, which is found by using either of the following commands:
SQL> select value from v$parameter where name = 'user_dump_dest';
SQL> show parameter user_dump
with a name like /u00/oracle/admin/PROD/udump/prod_ora_16060.trc. (A sample textual control file is shown in the Appendix.)
SQL> connect / as sysdba
SQL> alter database backup controlfile to trace;
SQL> show parameter user_dump
SQL> shutdown immediate
SQL> exit
$ mv /u03/oradata/PROD/*PROD*.dbf /u04/oradata/PROD
$ cd /u00/oracle/admin/PROD/udump
$ ls -ltr *.trc
To find the name of the textual control file just created (the last one in the ls listing, such as prod_ora_16060.trc).
$ vi prod_ora_16060.trc
Edit the textual control file and remove all lines before the STARTUP NOMOUNT line; remove the lines after ALTER DATABASE OPEN (or, if present, ALTER TABLESPACE TEMP ADD TEMPFILE); change the names in the DATAFILE section to match the new disk names, such as from "u03" to "u04" (DO NOT change the order of the file names!); comment out (put # in front of) the RECOVER command; and, for Oracle 9i and above, change all comment lines to start with dashes. You may also want to change the TEMPFILE location, which is not in the DATAFILE section for Oracle 9i+. The vi commands to do these (for 9i+; except for TEMPFILE) are usually:
:1,/STARTUP NOMOUNT/-1d
:/ALTER TABLESPACE TEMP/+2,$d
:g/^DATAFILE/,/;/s/\/u03/\/u04/
:/RECOVER DATABASE/s/^/# /
:1,$s/^#/--/
:wq
[Be sure to read the NOTES below.]
$ sqlplus "/ as sysdba"
SQL> @prod_ora_16060.trc
Recreates the control files with the new disk volume names.
SQL> select * from v$datafile;
NOTES: Be aware that the 9i "alter database backup controlfile to trace;" command creates a textual control file with two "create database" commands. You will use the first set (the one with NORESETLOGS) and edit out the second set while you are changing the directory and/or file names in the textual control file. Also, be sure to include the ADD TEMPFILE command that may be generated following the OPEN command - it is easy to overlook and edit out accidentally.

3. Adding Redo Log Members

To add another member on another disk drive and separate controller to a redo log group, in order to mirror the logs in the group for media failure protection, you will need to specify the new file name and the current group number, as shown below. The new redo log file will be created the same size as all the other redo log files in the group.
SQL> alter database add logfile member
'/u04/oradata/PROD/log_PROD.rdo' to group 1;
SQL> select * from v$logfile;
Lists the current log files, showing your changes.
To add more members than maxlogmembers to a redo log group, you will need to recreate the control files with that new maximum by creating a textual control file, editing the maxlogmembers parameter in it, and running that SQL when connected as sysdba after shutting down the database:
$ sqlplus "/ as sysdba"
SQL> alter database backup controlfile to trace;
SQL> !ls -ltr /u00/oracle/admin/PROD/udump
To find the name of the textual control file just created.
SQL> !vi /u01/oracle/admin/PROD/udump/prod_ora_16060.trc
Edit the textual control file and remove all lines before the STARTUP NOMOUNT line; remove the lines after ALTER DATABASE OPEN (or, if present, ALTER TABLESPACE TEMP ADD TEMPFILE); change the maxlogmembers value from the default 2 to 3 or 4; comment out (put # in front of) the RECOVER command; and, for Oracle 9i and above, change all comment lines to start with dashes. The vi commands to do these (for 9i+) are usually:
:1,/STARTUP NOMOUNT/-1d
:/ALTER TABLESPACE TEMP/+2,$d
:/MAXLOGMEMBERS/s/2/3/
:/RECOVER DATABASE/s/^/# /
:1,$s/^#/--/
:wq
SQL> shutdown immediate
SQL> @/u01/oracle/admin/PROD/udump/prod_ora_16060.trc
Recreates the control files with the new maxlogmembers value.

4. Adding Redo Log Groups

Additional redo log groups can be added to lessen the possibility of the database freezing while waiting on archiving to free up the next log group. The new group number along with the list of new redo logs and the allocation size for those member files is specified as shown:
SQL> alter database add logfile group 4
('/u01/oradata/PROD/log_PROD_4A.rdo',
'/u01/oradata/PROD/log_PROD_4B.rdo') size 500K;
SQL> select * from v$logfile;
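To confirm the size, member count, and status of each group (such as the new group 4 above), you can also query v$log:
SQL> select group#,members,bytes,status from v$log;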

5. Multiplexing Control Files

Multiplexing the control files involves just shutting down the database, copying the current control file to the other disks and renaming them, editing the init.ora file to include the new control file names, and restarting the database. From that point, all datafile header updates and datafile additions and migrations will be reflected in all of the identical multiplexed control files, and the other remaining control file(s) can be used to bring the database back up if one or more of the control files are lost.
$ sqlplus "/ as sysdba"
SQL> shutdown immediate
SQL> host
$ cp -p /u03/oradata/PROD/ctrl_PROD_01.ctl /u01/oradata/PROD/ctrl_PROD_02.ctl
$ cp -p /u03/oradata/PROD/ctrl_PROD_01.ctl /u00/oradata/PROD/ctrl_PROD_03.ctl
Copies the original control file to the other disks, renaming them to match the control_files parameter.
$ vi $ORACLE_HOME/dbs/initPROD.ora
Edit init.ora to add other control files on separate disks to the control_files parameter, using their full pathnames, separated by commas, such as:
control_files = (/u03/oradata/PROD/ctrl_PROD_01.ctl,
/u01/oradata/PROD/ctrl_PROD_02.ctl,
/u00/oradata/PROD/ctrl_PROD_03.ctl)
$ exit
SQL> startup
SQL> select * from v$controlfile;
Lists the current control files, showing your changes.
Multiplexing Archive Log Files
I haven't done this yet, but multiplexing archive log files is available in Oracle 8i and beyond (see chapter 8 in the Oracle9i Database Administrator's Guide). For earlier versions, you could set up a cron job to run every half-hour or so to copy the archive log files not currently in the mirror directory into it, as sketched below. Make sure that the mirror directory is on a different disk than the archive logs directory; otherwise, you defeat the purpose if the archive logs disk goes out.
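A rough sketch of that pre-8i cron job (the mirror directory /u02/oradata/PROD/archmirror here is just an illustration - use whatever second disk you have available):
# File: mirror_archivelogs.shl (run from cron every half-hour or so)
# Copies any archivelog file not already present in the mirror directory.
for FILE in /u01/oradata/PROD/archivelogs/*.arc
do
  if [ ! -f /u02/oradata/PROD/archmirror/`basename $FILE` ]
  then
    cp -p $FILE /u02/oradata/PROD/archmirror
  fi
done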

6. Backups

The DBA's primary job is to make sure that the data is available and accessible to the users during those times that they need it, which means that a complete and well-tested backup and recovery procedure is in place and functioning. This section covers what files need to be included in that backup, the types of backups (cold backups and hot backups), and other processing that you could perform during the backup for proactive maintenance.
What To Back Up
The following types of files making up the database and providing recovery capabilities should be backed up during one backup cycle, such as all of them during a cold backup, or all or subsets of them during a hot backup cycle, assuming the entire set is eventually backed up during that cycle (see Oracle9i User-Managed Backup and Recovery Guide, chapter 2):
• Datafiles (for all tablespaces)
• Control Files (binary and textual versions)
• Redo Log Files (cold backups only, not hot backups)
• Archive Log Files (archived redo logs, if archivelog mode is enabled)
• Parameter Files (init.ora; like $ORACLE_HOME/dbs/initPROD.ora)
• Password Files (like $ORACLE_HOME/dbs/orapwdPROD, if used)
The most basic way to get the names of the datafiles, control files, and redo log files through SQL is shown below, which can be used in both the startup mount state and the database open state:
SQL> select name from v$datafile;
SQL> select name from v$tempfile;
SQL> select name from v$controlfile;
SQL> select member from v$logfile;
You can also get the tablespace name with the datafile and temp file names using:
SQL> select tablespace_name,file_name from dba_data_files order by tablespace_name;
SQL> select tablespace_name,file_name from dba_temp_files;
The list of archive log files which are currently on-line cannot be found through SQL. However, you can get the directory containing those archive logs, the format of the file names, and whether or not archiving was turned on in the init.ora parameter file using:
SQL> select name,value from v$parameter where name in ('log_archive_dest',
'log_archive_format','log_archive_start');
These archiving parameters can also be found in a sysdba session by issuing either of the following commands (but the first one might truncate the dest directory name):
SQL> show parameter archive
SQL> archive log list
To get the list of archive log files for the last 5 days (the date is when Oracle started writing to that redo log, not when it was copied to the archivelogs directory), which may or may not still be on-line, you can query the v$archived_log table in Oracle 8.x and above:
SQL> select name from v$archived_log
where trunc(completion_time) >= trunc(sysdate)-5;
Or, you can query the v$log_history table in Oracle 7.x:
SQL> select archive_name from v$log_history
where trunc(to_date(substr(time,1,8),'MM/DD/YY')) >=
trunc(sysdate)-5;
The name of the parameter file is not kept in the database, but is usually $ORACLE_HOME/dbs/initPROD.ora (substitute your Oracle SID).
Since you can't find the currently existing archive log files, the init.ora file, or the password file through SQL, the best way to get the list of database files to back up (assuming you are using a standard naming convention for all of the files related to a database instance, such as using the Oracle SID in the file names or in one level of the directory hierarchies containing your datafiles, control files, redo log files, and archive log files) is to use the unix find command to get the list of file pathnames that contain that Oracle SID, as shown below (redirected to a file named backemup.dat):
$ find / -name '*PROD*' ! -type d 2>/dev/null >backemup.dat
The addendums to this presentation contain the scripts backup_list.shl and backup_list.sql, which are sample unix and SQL scripts used to create the list of files to be backed up, assuming a standard naming convention. More information on naming conventions can be found in papers on Oracle's OFA, which is referred to as either Oracle Flexible Architecture or Optimal Flexible Architecture.

6.1 Cold Backups

Cold backups are done while the database is shut down, usually after a "shutdown immediate" is issued, and contain the most complete backup snapshot, since all database files, including control files and redo log files, are backed up from the same point in time.

During the time that a cold backup is in progress, the database is not available for querying or updating while those files are being copied to tape or to some other disk drive or directory (possibly compressing (zipping) them at the same time). Cold backups can be done with the database either in archivelog mode or in noarchivelog mode.

To perform cold backups, you will first need to get the list of files to be backed up (putting the file names into backemup.dat in the examples below), including datafiles, control files, and redo log files. You can get this list either by querying the database itself while the database is still up (see "What To Back Up") or by using the unix "find" command to locate files following a standard naming convention, preferably after you have shut down the database (see below), since the archive log files and other database files can also be picked up at that time by that "find" command. Be sure to get a fresh list of files for each backup in order to accommodate structural changes in the database (such as new datafiles) as well as additional archive log files.

Then, you should include the parameter file and password file (if used) in the list. If you are using the "find" command, those files could also be picked up by "find", since they both usually contain the SID name. It would also be a good idea to include a textual control file in the list, which you would create before shutting down the database for the backup, as shown below (which contains the SID name as part of the directory name and, so, could also be picked up by "find"):
$ sqlplus "/ as sysdba"
SQL> alter database backup controlfile to trace;
SQL> shutdown immediate
SQL> exit
$ ls -ltr /u00/oracle/admin/PROD/udump/* | tail -1 | sed 's/^.* \//\//' >>backemup.dat
Puts the name of the textual control file just created in the udump directory (the last one in the ls listing, such as prod_ora_16060.trc) into the list of files to be backed up.
Finally, after shutting down the database, you will need to get the list of the archive log files to back up (the shutdown may cause new archive log files to be written out). You could do this either as part of the "find" command at this point, along with the datafiles, control files, and redo log files, or by just getting the fully-qualified listing of the file pathnames in the archivelogs directory. You could also combine the two and get the list of non-archive files for the backup, followed by the list of archive files (zipped followed by not zipped here), as shown below:
$ find / -name '*PROD*' ! -type d 2>/dev/null | grep -v 'arc$' | grep -v 'gz$' >>backemup.dat
$ ls /u01/oradata/PROD/archivelogs/*.arc.gz >>backemup.dat
$ ls /u01/oradata/PROD/archivelogs/*.arc >>backemup.dat
After the list of files has been created (in backemup.dat) and the database has been shut down, you can do any of the following to back up the files or make copies for the backup. The first two options allow you to start up the database before writing the copied files to tape, which usually results in a shorter down time for the database.
• Copy the files to another staging (backup) directory (such as /u03/oradata/prod1 here) to be written to tape later, and restart the database, or,
• Compress (using GNU Zip here) the files to another staging directory to be written to tape later, and restart the database, or
• Copy the files directly to tape (no staging directory), then, restart the database.
These three options can be performed as shown below (do only one of them):
$ cat backemup.dat | sed 's/\(^.*\)\/\(.*\)$/cp -p \1\/\2 \/u03\/oradata\/prod1\/\2/' | sh
Copies the files to another staging (backup) directory.
$ cat backemup.dat | sed 's/\(^.*\)\/\(.*\)$/gzip -cv1 \1\/\2 >\/u03\/oradata\/prod1\/\2.gz;
touch -r \1\/\2 \/u03\/oradata\/prod1\/\2.gz/' | sh
Compresses (zips) the files to another staging directory (the touch command must be on the same line as the sed command).
$ cat backemup.dat | cpio -ovC64 >/dev/rmt0
Copies the files directly to tape (without using a staging directory).
Then, restart the database after any of the three above commands:
$ sqlplus "/ as sysdba"
SQL> startup
SQL> exit
Be sure to copy the backup disk directory files to tape after you have done the startup for the first two options, such as shown below. (See "Backup/Restore Using Unix cpio Command" for cpio description.)
$ ls /u03/oradata/prod1/* | cpio -ovC64 >/dev/rmt0

6.2 Hot Backups

Hot backups are done while the database is still up and running and being accessed and updated. This is used when continuous 24-hour-per-day / 7-day-per-week operations are required. If you have even a half-hour window when your database can be down, you can probably do a cold backup using the staging directory method above.

Doing hot backups requires that archivelog mode is turned on. It is more taxing on redo logs and archivelogs when the backups are being performed, since entire database blocks are written to the redo logs for the changes that are made, instead of just the individual changes (transaction data).
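Before starting a hot backup cycle, it is worth confirming that archivelog mode is actually on, using either of the following:
SQL> archive log list
SQL> select log_mode from v$database;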

You must NEVER back up the online redo log files when doing hot backups. Restoring a backup of an online redo log file would cause corruption during database recovery. The online redo log files are the most vulnerable items in the hot backup scheme. However, forcing log switches can catch the pending datafile updates in the redo logs and give you some coverage in this area.

You cannot back up the online control files themselves, but you can back up a special copy of them, either a binary copy or a textual copy, as shown below.

To perform hot backups, you will only back up one tablespace's datafiles at a time. In order to reduce the time that the tablespace is in backup mode, it is best to copy its datafiles to a backup disk directory, instead of to tape at the current time. When all copies have been done, you can then copy that backup disk directory to tape while the database is running normally. You must use the "begin backup" and "end backup" commands, as shown below; otherwise, your backups will be corrupted and useless. Note that the data in the tablespace remains available to the users while in backup mode, and any transactions can still be performed on that data. (See section on Disaster Recovery Scenarios for info on database crashes during hot backups.)

For each tablespace, perform the following steps while connected as sysdba (the DEVELOPMENT tablespace is shown here - change the tablespace name and the datafile names as appropriate for each tablespace backup):
SQL> alter tablespace development begin backup;
SQL> !cp -p /u03/oradata/PROD/devl_PROD_*.dbf /u03/oradata/prod1
Copies the tablespace's datafiles to a backup disk directory (/u03/oradata/prod1 here), using standard naming conventions.
SQL> alter tablespace development end backup;
Force a log switch to archive the current log to catch all of the committed transactions in it. Then, copy the archive log files (zipped and not zipped) to your backup disk directory, as shown below:
SQL> alter system switch logfile;
$ ls /u01/oradata/PROD/archivelogs/*.arc.gz >logfile.lst
$ ls /u01/oradata/PROD/archivelogs/*.arc >>logfile.lst
Generates the list of archivelog files to be copied (don't just copy the entire directory - you might catch an archivelog file in mid-creation, but, do keep the archiving process running).
$ sleep 5
Waits for a few seconds to allow current archivelog writes to complete.
$ cat logfile.lst | sed "s/\(.*\/\)\([^\/].*\)/cp -p \1\2 \/u03\/oradata\/prod1\/\2/" >logfile.shl
$ sh logfile.shl
The above commands copy the archivelog files to your backup disk directory.
Create a control file to back up (binary and textual) AFTER all other files have been copied:
SQL> alter database backup controlfile to '/u03/oradata/prod1/controlfile.ctl';
SQL> alter database backup controlfile to trace;
$ ls -ltr /u00/oracle/admin/PROD/udump
To find the name of the textual control file just created (the last one in the ls listing, such as prod_ora_16060.trc).
$ cp -p /u00/oracle/admin/PROD/udump/prod_ora_16060.trc /u03/oradata/prod1
Copies that textual control file to your backup disk directory.
After all datafiles and log files have been copied to your backup disk directory, back up those copies in your backup disk directory to tape. If you wanted to zip those files first with a utility such as GNU Zip to backup the compressed versions (but see below), you would do that before the cpio copy command, as shown below.
$ cd /u03/oradata/prod1
$ gzip -vN1 *
$ find /u03/oradata/prod1 | cpio -ovC64 >/dev/rmt0
Partial hot backups, in which the tablespaces are backed up at separate times, possibly across several days for the entire set of tablespaces, are possible, but not recommended.
NOTE: Only put one tablespace at a time into hot backup mode, and then for as short a time as possible. Hot backups are hard on your redo logs, since entire database blocks are written to them for each change, instead of just the individual changes themselves, so your redo logs tend to fill up fast, resulting in much more frequent log switches while your tablespaces are in hot backup mode. A checksum is written to the beginning and to the end of the database block whenever a change is made. In Oracle7, during recovery, if those two checksums are different (meaning a change occurred in that block while it was being written out during a hot backup), the changed block is retrieved from the archivelogs. In Oracle8 (from what I remember from my Oracle class), those two checksums are compared during the hot backup, and the archivelog block is written out instead of the database block if a change is detected. It is for this reason that entire changed blocks are archived during hot backups, instead of just the changed bytes.

Instead of doing compression after you have copied the files into the backup disk directory, you can use a UNIX pipe (or other operating system equivalent, if available) to compress the files on the fly as you copy or create them, such as in this export example, in which exp writes to the pipe while gzip reads from the pipe (that's a double quote/single quote after exp and a single quote/double quote after sysdba in the export (exp) command):
$ mknod /tmp/exp_pipe p
$ gzip -cNf </tmp/exp_pipe >prod.dmp.gz &
$ exp "'/ as sysdba'" file=/tmp/exp_pipe full=y compress=n log=prod.dmp.log
$ rm -f /tmp/exp_pipe
Other Nightly Processing
In addition to backups, other nightly processing can also be done to guard against data loss and allow for proactive problem detection and correction, such as the following, which are described below:
1. Create and keep current copies of your textual control files.
2. Create and keep current copies of your textual init.ora file if you are using an spfile.
3. Perform full database exports for quick table restores and to check datafile integrity.
4. Generate definitions for all of your tables and indexes.
5. Gather statistics on datafile and index space usage and table extent growth for proactive maintenance.
Keep current copies of the textual control files for each of your database instances, in case a control file is corrupted and you have to rebuild it. For each database instance (such as SEED, TRNG, PPRD, and PROD), switch to that instance (use ". oraenv" and enter SEED, or use "export ORACLE_SID=SEED", to switch to SEED in unix), and enter the following in sqlplus or svrmgr to create the textual control file in the instance's user_dump_dest directory specified in the init.ora file:
SQL> alter database backup controlfile to trace;
Keep a current copy of the textual init.ora file (initialization parameter file) if you are using an spfile (server parameter file) instead of an init.ora file to hold your initialization parameters. If the binary spfile becomes corrupt, the init.ora file can be used in its place to restart your database (after deleting the spfile, since Oracle looks for the spfile first). To create the init.ora file (which is placed in the $ORACLE_HOME/dbs directory on UNIX systems, or the $ORACLE_HOME\database directory on NT systems):
SQL> create pfile from spfile;
Create an export file of the entire database for individual table restores. This will also detect any block corruption in the table data, since a full table scan is performed. Block corruptions are not detected in physical block copies during tape backups, so you wouldn't know that you were propagating bad blocks to all of your backups until a user tried to access data in that block. During export, if a bad data block is encountered, the export will fail.

$ exp "'/ as sysdba'" file=/u03/oradata/prod1.dmp full=y \
log=/u03/oradata/prod1.dmp.log >/dev/null 2>/dev/null
Create an indexfile of that export for table and index definitions. The indexfile can also be used to separate tables into product-based tablespaces, or to split the indexes off into their own tablespace.
$ imp "'/ as sysdba'" file=/u03/oradata/prod1.dmp \
indexfile=/u03/oradata/prod1.idx full=y \
log=/u03/oradata/prod1.idx.log >/dev/null 2>/dev/null
Gather statistics on space usage so that you will know when to reallocate tables to give them larger extents before they run out of extents to allocate (using gurrddl or gurnddl along with export/import), or to add datafiles to tablespaces that are about to fill up (alter tablespace ... add datafile ...), or to recreate indexes that currently have unused or wasted space due to deletions from their associated tables. Below are some sample commands you can use to see potential space problems for tables and tablespaces and to see if an index has a lot of wasted space (and should be recreated at a time of no activity):
SQL> select segment_name,segment_type,extents,max_extents
from sys.dba_segments where extents + 5 > max_extents
or extents > 50 order by segment_name,segment_type desc;
SQL> col "Free" format 9,999,999,999
SQL> col "Tot Size" format 9,999,999,999
SQL> select tablespace_name,sum(bytes) "Tot Size"
from dba_data_files group by tablespace_name;
SQL> select tablespace_name,sum(bytes) "Free"
from dba_free_space group by tablespace_name;
For each index, find the number of deleted rows (may rebuild it later):
SQL> validate index posnctl.nhrfinc_key_index;
SQL> select name,del_lf_rows_len from index_stats;
The above pair are used together (validate clears out the index_stats table before putting its stats into it).

7. Archiving To Tape

The best method for periodically writing the archive log files to tape is to send them to disk first and have a cron job back up those archive logs to tape later on. Never archive directly to tape, since that would require a dedicated tape drive just for the archivelogs, and since the backup would need to complete before cycling back to that online redo log (otherwise, the database will "freeze" until that redo log is available for reuse - the same thing that happens if your archivelog directory gets filled up). If you did have a tape drive that could be dedicated to the archivelogs, they could be backed up more frequently than a normal backup procedure, such as every 10 minutes. In that case, you should only back up the new files which haven't been placed on the tape yet, as sketched below.
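A minimal sketch of that more frequent job (the .last_backup marker file is just an illustration - create it once by hand with touch before the first run):
# File: archive_new_logs.shl (run from cron; copies only archivelogs newer than the marker file)
cd /u01/oradata/PROD/archivelogs
find . -name '*.arc' -newer .last_backup | cpio -ovC64 >/dev/rmt0
touch .last_backup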

8. Disaster Recovery Scenarios

Backups are no good unless you know how to use them to restore your database up to and including the most recent change made to the data. Oracle gives you those capabilities for up-to-the-instant recoveries, as long as you know how to use them. This section describes Oracle's recovery capabilities and how to use them for general recovery of datafiles, tablespaces, or the entire database, and for specific disaster recovery scenarios, which are variations on the themes of the general recovery procedures.
Things To Check
Messages displayed during database startup will usually indicate what kind of problems the database is experiencing. After you get an error message during your initial startup, you will most likely need to do a shutdown before proceeding with database recovery.

Alert Log in the database's bdump directory (you can find bdump using "select value from v$parameter where name = 'background_dump_dest';" or "show parameter background_dump"), named something like /u00/oracle/admin/PROD/bdump/alert_PROD.log.

Other recent trace (.trc) files in the database's bdump (background_dump_dest) and udump (user_dump_dest) directories.

Oracle processes, using "ps -ef | grep ora" at the unix command line. Should see entries like ora_smon_PROD, ora_pmon_PROD, ora_dbwr_PROD, ora_lgwr_PROD, ora_arch_PROD, and other ora_xxxx_PROD processes for each database instance, and oraclePROD for jobsub and other users.

The most recent .lis files in the job submission directory (such as /home/jobsub), at the end of the "ls -ltr /home/jobsub" listing, which may indicate why a user's run crashed or why it filled up a table or archive log directory.
What To Restore
Only restore the datafile that has gone bad if you are recovering up to the current time.

Restore ALL datafiles if you are recovering up to an earlier time than now but after those datafiles were backed up (for point-in-time recovery or cancel-based recovery). Since Oracle requires that all datafiles be synchronized at the same SCN (System Change Number), just recovering one datafile to an earlier time would cause that datafile to have a lower SCN than all the other datafiles, which Oracle wouldn't accept.
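As a quick sanity check (not a required recovery step), you can compare the checkpoint SCNs recorded in the control file with those in the datafile headers to see that synchronization:
SQL> select file#,checkpoint_change# from v$datafile;
SQL> select file#,checkpoint_change# from v$datafile_header;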

Restore all archivelog files that will be needed in the recovery (for the time range between the backup and the point-in-time that you are recovering to), if they are not currently available on-line or if those on-line archivelog files have been corrupted.

NEVER restore control files unless all copies are lost. See "Loss of Control Files" for information about restoring control files.

NEVER restore online redo log files from a hot backup (you shouldn't have backed them up in the first place); it is OK to restore them when doing cold backups, but ONLY IF you are restoring up to the time of that backup or doing a partial restore where those old logs won't be used during the restore. Restoring a backup of an active redo log file would cause corruption during database recovery. If the redo logs are deleted or damaged, just remove the damaged redo log files and reset the logs on open, and Oracle will recreate the redo log files for you (be sure you backup immediately after any resetlogs command, since Oracle can't recover datafiles restored from a backup made prior to resetting the logs):
$ sqlplus "/ as sysdba"
SQL> startup mount
SQL> alter database open resetlogs;
Backup/Restore Using Unix cpio Command
Here is a quick overview of using the unix cpio command for tape backups and restores. The tape drive for our system is /dev/rmt0. (A blocksize of 32K, indicated by C64, is used by these cpio commands. You can change the blocksize of your tape drive in AIX from user root through the smit utility, or by using the chdev command, as in "chdev -l 'rmt0' -a block_size='32768'", which will speed up the writing to that tape drive from the default 512-byte blocksize.)
Saving files to backup:
$ cat backemup.dat | cpio -ovC64 >/dev/rmt0
Listing tape contents:
$ cpio -itvC64 </dev/rmt0
Restoring files to same pathnames:
$ cpio -imvC64 </dev/rmt0
Restoring files individually to new pathnames, entering the new pathname when prompted (also use "u" flag to overwrite newer files if they exist as newpathname; for fullpathname, enclose patterns in double quotes):
$ cpio -irmC64 fullpathname </dev/rmt0
newpathname (or, period (".") for same name, or, enter to skip file)
Disaster Recovery Overview
The first thing you should ALWAYS do when your database crashes is to make a BACKUP of your datafiles, control files, archive log files, and initialization parameters file (init.ora) for your database that crashed, either copying those files to tape or to another directory, or zipping them with something like GNU Zip, or making a tar file of them. DO NOT TRY TO RESTART YOUR DATABASE BEFORE BACKING UP THOSE FILES. Otherwise, you might make the situation worse. If all else fails, the people at SCT or Oracle may be able to get your database going again from that initial backup, even though the crash may have corrupted some files.

After a crash occurs and you've made a backup, do a STARTUP when connected as sysdba. If there are problems encountered during startup, Oracle will display an error message about the first problem, which will usually give you an idea about what type of recovery scenario you will need to apply. In most cases, you will need to do a shutdown after this initial startup attempt before beginning the recovery.

There are three primary recovery options for media failure scenarios:
1) Recover Database
2) Recover Datafile
3) Recover Tablespace
Most of the other recovery scenarios are just variations on these three themes. Recovery steps for specific disaster recovery scenarios are given later on; however, in a generic recovery:
1. The database is shut down or the affected datafile(s) or tablespace is taken offline.
2. The affected datafile(s) is restored from a backup along with the archived redo logs created since that backup.
3. A recover command is issued.
4. The tablespace or datafile(s) is brought back online.
You can see the Oracle9i User-Managed Backup and Recovery Guide for more information on Oracle database disaster recovery.

Basic Recover Database Recovery Option

The Recover Database option is used to recover all datafiles needing recovery, either up to the point of failure, called complete recovery, or up to a point in the past before the failure, called incomplete recovery, which is only available using this option. Recover Database is performed from the MOUNT state, so, the database will have to be SHUT DOWN before using this. All datafiles must be ONLINE to be recovered, in contrast to the other two options where the datafiles are offline in most cases. The basic steps are: restore the datafiles, mount, online the datafiles, recover database, and open, as shown below:
$ cp -p /u03/oradata/prod1/devl_PROD_01.dbf /u03/oradata/PROD
Restores the datafiles from backup, such as devl_PROD_01.dbf here.
$ sqlplus "/ as sysdba"
SQL> startup mount
SQL> select * from v$datafile;
Shows status of datafiles, indicating those that are offline.
SQL> alter database datafile '/u03/oradata/PROD/devl_PROD_01.dbf' online;
Online the offline datafiles shown above, such as devl_PROD_01.dbf here.
SQL> set autorecovery on
Tells recovery not to ask which redo logs to use (in log_archive_dest).
SQL> recover database;
Recovers all datafiles, as needed, up to the current point in time.
SQL> alter database open;
Opens the database for user access.
You may also need to drop and recreate the TEMP tablespace after a database recovery. See Loss of TEMP Datafile on how to do this.
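As a rough sketch (the tempfile name and size are illustrative, and this assumes a 9i locally managed temporary tablespace), recreating TEMP would be along these lines:
SQL> drop tablespace temp;
SQL> create temporary tablespace temp
     tempfile '/u03/oradata/PROD/temp_PROD_01.dbf' size 500M
     extent management local uniform size 1M;
If TEMP is the database default temporary tablespace, you would need to assign another default temporary tablespace before the drop.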
Recover Database can also be used to perform an incomplete recovery up to some point in time before the failure occurred, or before the database was last shut down. To do this, ALL datafiles must be restored from the backup before doing the incomplete recovery (not control files or online redo log files), and ALL datafiles are recovered back to the same point in time - you can't do a partial incomplete recovery.

There are three incomplete recovery options available: time-based, cancel-based, and change-based recovery. Time-based recovery brings the datafiles up to a given date and time. Cancel-based recovery applies entire archive log files and online redo log files one at a time until the word CANCEL is typed in, so you can't use autorecovery with it. Change-based recovery applies all archive log files and online redo log files up to, but NOT including, a given system change number. If you have added datafile(s) since the last backup, and you want to recover up to a point before the datafile(s) were added, you will need to restore the control files and use the "using backup controlfile" option of the recover command (see Loss of Control Files). (I don't know of a way to recover up to a point between datafile additions - it's all or none.) The basic steps are similar to the complete recover database option, except for the incomplete recovery commands, which are shown below (only use one of these):
SQL> recover automatic database until time '2005-02-14:15:45:00';
Automatic is similar to Set Autorecovery On. Use 1 second before the time of failure or of the log to exclude, which, for Oracle 8.x and above, could be found using:
select to_char(a.time,'YYYY-MM-DD:HH24:MI:SS') as time, a.name
  from (select first_time as time, substr(name,1,58) as name,
               first_change# as change from v$archived_log
        union
        select first_time, 'Redo Log Group ' || to_char(group#),
               first_change# from v$log) a
  order by change;
Or, for Oracle 7.x, could be found using:
select to_char(a.time,'YYYY-MM-DD:HH24:MI:SS') as time, a.name
  from (select time, substr(archive_name,1,58) as name,
               low_change# as change from v$log_history
        union
        select first_time, 'Redo Log Group ' || to_char(group#),
               first_change# from v$log) a
  order by change;
SQL> recover database until cancel;
Accept logs until you type in CANCEL.
SQL> recover automatic database until change 43047423;
The system change numbers (SCNs) contained in each archive log file are shown in Oracle 8.x and above using: select name,first_change#,next_change# - 1 from v$archived_log;
Or, for Oracle 7.x, using: select name,low_change#,high_change# from v$log_history;
For the online redo logs, you could use "select max(first_change# - 1) from v$log;" to find the SCN to use for applying all but the current online redo log file.
After an incomplete recovery, you must open the database with the RESETLOGS option, as shown below, and then immediately shut down the database and make a BACKUP, since the old backup is unusable after RESETLOGS is used.
SQL> alter database open resetlogs;

Basic Recover Datafile Recovery Option

The Recover Datafile option is used to recover the specified datafile up to the point of failure, synchronizing it with the other datafiles (complete recovery only). Recover Datafile is performed either from the MOUNT state (after a shutdown), with the datafile ONLINE or OFFLINE, or from the OPEN state, with the datafile OFFLINE. A bad datafile must be taken offline before the database can be opened. Since the SYSTEM tablespace CANNOT be taken offline, you cannot use Recover Datafile from the OPEN state on SYSTEM.

The basic steps for the MOUNT state are: restore the datafile, mount, recover datafile (using automatic here instead of autorecovery), online the datafile if needed, and open, as shown below:
$ cp -p /u03/oradata/prod1/devl_PROD_01.dbf /u03/oradata/PROD
$ sqlplus "/ as sysdba"
SQL> startup mount
SQL> recover automatic datafile '/u03/oradata/PROD/devl_PROD_01.dbf';
SQL> select * from v$datafile;
SQL> alter database datafile '/u03/oradata/PROD/devl_PROD_01.dbf' online;
SQL> alter database open;
The basic steps for Recover Datafile from the OPEN state (except for SYSTEM) while the rest of the database is still up and running, are: offline the datafile, restore the datafile, recover datafile, and online the datafile, as shown below:
$ sqlplus "/ as sysdba"
SQL> alter database datafile '/u03/oradata/PROD/devl_PROD_01.dbf' offline;
SQL> !cp -p /u03/oradata/prod1/devl_PROD_01.dbf /u03/oradata/PROD
SQL> recover automatic datafile '/u03/oradata/PROD/devl_PROD_01.dbf';
SQL> alter database datafile '/u03/oradata/PROD/devl_PROD_01.dbf' online;
Doing a Recover Datafile from the OPEN state (except for SYSTEM) can also be used to bring up your database for use before doing a recovery. The basic steps are: mount, offline the datafile, open, restore the datafile, recover datafile, and online the datafile, as shown below:
$ sqlplus "/ as sysdba"
SQL> startup mount
SQL> alter database datafile '/u03/oradata/PROD/devl_PROD_01.dbf' offline;
SQL> alter database open;
The database is now available to the users, except for that datafile.
SQL> !cp -p /u03/oradata/prod1/devl_PROD_01.dbf /u03/oradata/PROD
SQL> recover automatic datafile '/u03/oradata/PROD/devl_PROD_01.dbf';
SQL> alter database datafile '/u03/oradata/PROD/devl_PROD_01.dbf' online;
You can still access data in tables in a tablespace which has an offlined datafile, just so long as the data is in one of the other datafiles of the tablespace and the table header is not in the offlined datafile.
Basic Recover Tablespace Recovery Option
The Recover Tablespace option is used to recover all datafiles needing recovery in a tablespace up to the point of failure, synchronizing them with the other datafiles (complete recovery only). Recover Tablespace is performed from the OPEN state after taking the tablespace OFFLINE (which brings its datafiles offline). Since the SYSTEM tablespace CANNOT be taken offline, you cannot use Recover Tablespace on SYSTEM. The basic steps while the database is open are: offline the tablespace, restore the datafiles, recover tablespace, and online the tablespace, as shown below:
$ sqlplus "/ as sysdba"
SQL> alter tablespace development offline immediate;
Using Immediate rolls back currently pending transactions.
SQL> !cp -p /u03/oradata/prod1/devl_PROD* /u03/oradata/PROD
SQL> recover automatic tablespace development;
SQL> alter tablespace development online;
Doing a Recover Tablespace after opening the database can be used to bring up your database for use before doing a recovery. The basic steps are: mount, offline the bad datafiles, open, offline the tablespace, restore the datafiles, recover tablespace, and online the tablespace, as shown below:
$ sqlplus "/ as sysdba"
SQL> startup mount
SQL> alter database datafile '/u03/oradata/PROD/devl_PROD_01.dbf' offline;
SQL> alter database open;
SQL> alter tablespace development offline;
SQL> !cp -p /u03/oradata/prod1/devl_PROD* /u03/oradata/PROD
SQL> recover automatic tablespace development;
SQL> alter tablespace development online;
If a write error occurs on one of the datafiles when you offline a tablespace, use "alter tablespace tsname offline temporary;". Then, you can run a recovery on the datafile. If all datafiles in an offlined tablespace have write errors, use "alter tablespace tsname offline immediate;". Then, you can run a recovery on the tablespace.

9. Archivelogs Disk Volume Filled Up

Symptoms: If the disk containing the directory for the archive log files fills up, Oracle will stop all transactions until the archiver process is able to continue writing to that directory. Doing a "df -k" shows that there is no room left for more archivelogs in that directory's disk. Current users' sessions will freeze, and users trying to log on will get "ERROR: ORA-00257: archiver error. Connect internal only, until freed.", since the archiver process is still waiting to archive that redo log group. (Note: Use the oerr utility in UNIX to get the error description, such as "oerr ora 257".) To verify this conclusion:
$ sqlplus "/ as sysdba"
SQL> select value from v$parameter where name = 'background_dump_dest';
Shows the pathname of the bdump directory.
SQL> !tail -200 /u00/oracle/admin/PROD/bdump/alert_PROD.log
Shows "ORA-00272: error writing archive log", indicating that the redo log group can't be written.
SQL> !ls -ltr /u00/oracle/admin/PROD/bdump/arch*
Get the latest archiver trace file in the bdump directory, such as arch_13106.trc (the last one listed).
SQL> !cat /u00/oracle/admin/PROD/bdump/arch_13106.trc
Also shows "ORA-00272: error writing archive log".
Action: You will need to free up some space on that disk volume for Oracle to continue, either by moving files off of that volume, or, more likely, by deleting old archive log files out of that directory which you have already backed up to tape. You must not delete archive log files that have not been backed up, since you wouldn't be able to recover transactions in those files if your database crashes. The following script deletes the old log files earlier than the given number of days (24-hour periods). If you back up nightly, 1 day is the smallest number you should enter, or 3 on Mondays if you don't back up on weekends. You have to be logged in as userid oracle or as root to remove the archive log files.
# File: remove_old_logs.shl
echo "You must be logged in as user Oracle to run this script,"
echo "which removes all archivelog files older than X days."
echo "Enter number of days to keep: \c"
read DAYS_KP; export DAYS_KP
find /u01/oradata/PROD/archivelogs -name '*.arc' -mtime +$DAYS_KP -exec rm {} \;
find /u01/oradata/PROD/archivelogs -name '*.arc.gz' -mtime +$DAYS_KP -exec rm {} \;
echo "Results after deletions:"
du -k
df -k
If you are on a Windows NT-based system (including Windows XP and Windows 2000) or on Windows 2003 (added 6/13/06), I've written a similar script using DOS Batch commands, which you can download by clicking on this: remove_old_logs.bat. This script uses extended DOS commands in Windows NT and Windows 2003, which aren't available in Windows 98 and before. It defaults to Windows 2003, so, if you want to use it on Windows NT, change the osver variable in the script to NT instead of 2003.
If you can't free up any space on the archive log disk, you can redirect the archives to another disk until you are able to free up space on it, such as shown below. This redirection goes away if you restart your database, but you can make it permanent by updating the server parameter file (spfile), also shown below, or the init.ora file before restarting, according to which one of those that you are using. I haven't tried this, but just saw it on the web in a presentation by Chris Lawson on dbspecialists.com.
$ sqlplus "/ as sysdba"
SQL> alter system archive log start to '';
SQL> alter system set log_archive_dest='' scope=spfile;

10. Loss of Control Files

Symptoms: May be none until you try to shut down and start up the database. On shutdown, if the control files were deleted, you would get "ORA-00210: cannot open control file '/u03/oradata/PROD/ctrl_PROD_01.ctl'", or, if the control files were overwritten, you would get "ORA-00201: control file version incompatible with ORACLE version" along with their names. On startup, for both cases, you would get "ORA-00205: error in identifying control file '/u03/oradata/PROD/ctrl_PROD_01.ctl'", along with "ORA-07366: sfifi: invalid file, file does not have valid header block." if overwritten.
Action: If you have a recent up-to-date textual control file (no added datafiles) from "alter database backup controlfile to trace;" in your user_dump_dest directory, you can just edit out the control file header and run it:
$ sqlplus "/ as sysdba"
SQL> shutdown abort
SQL> !ls -ltr /u00/oracle/admin/PROD/udump/*.trc
Get the latest textual control file in user_dump_dest directory, such as prod_ora_31494.trc (the last one listed).
SQL> !vi /u00/oracle/admin/PROD/udump/prod_ora_31494.trc
Edit the textual control file and remove all lines before the STARTUP NOMOUNT line, remove the lines after ALTER DATABASE OPEN, and, for Oracle 9i and above, change all comment lines to start with dashes. The vi commands to do these (for 9i+) are usually:
:1,/STARTUP NOMOUNT/-1d
:/ALTER DATABASE OPEN/+1,$d
:1,$s/^#/--/
:wq
SQL> @/u00/oracle/admin/PROD/udump/prod_ora_31494.trc
If you don't have a textual control file, you'll need to restore all datafiles and control files, but, not online redo log files, from the last backup and do a recovery "using backup controlfile":
$ sqlplus "/ as sysdba"
SQL> shutdown abort
At this point, restore ALL datafiles AND control files from the last backup, along with any archivelogs that are needed since that time, but, NOT the online redo log files.
SQL> connect / as sysdba
SQL> startup mount
SQL> recover automatic database using backup controlfile;
Then, AUTO on "ORA-00308: cannot open archived log ...".
SQL> alter database open resetlogs;
Immediately shut down the database and make a BACKUP, since the old backup is unusable after RESETLOGS is used.
Note that you can combine the recovery options depending on what's needed. For example, I needed to recover up to a point in time before a datafile was added, so, I did the steps above, including deleting the added datafile after shutting down the database, and substituting the following recover command (you won't enter AUTO as above, and, when it's finished the partial recovery, it will show "Media recovery cancelled."):
SQL> recover automatic database using backup controlfile until time '2001-07-16:13:15:00';

Friday, August 1, 2008

AutoConfig and customisation

AutoConfig is a configuration tool that automates the configuration of an Oracle Applications system. The information required for configuring an Applications system is collected into a repository, called the Applications Context; there is one Applications Context for each application tier, and one for the database tier. When AutoConfig runs, it uses information from the Applications Context file to generate all configuration files and update database profiles.
Run the adchkcfg utility to get an html report that lists all the files and profile options that get changed when you run AutoConfig.
1. Execute autoconfig in check mode
2. Check reported changes in $APPL_TOP/admin//out/
3. Update XML file or customize relevant template appropriately
DO NOT UPDATE configuration files directly without modifying the XML file or template appropriately. The Autoconfig custom template, key to autoconfig stabilization, is a very recent feature and this document will help to define the process.
The following steps must be performed in sequence to run AutoConfig in an On Demand environment:
• Identify Customizations
• Implement Config Changes and Customizations
• Run Autoconfig in the correct sequence

Identify Customizations
Downtime is not required for this step, as it is a “read-only” operation on the configuration file.
1. Execute Autoconfig in Check Mode
Log in to the applications (apps) Unix account.
cd $AD_TOP/bin
Execute the following command to run adconfig in test mode:
${AD_TOP}/bin/adchkcfg.sh contextfile= appspass=
This will generate a report containing the differences. The location of the report will be output to your screen; it is generally $APPL_TOP/admin/_/out//cfgcheck.txt.
$ adchkcfg.sh contextfile=PROD_ocvmrh2177.xml appspass=welcome
Testing AutoConfig application and building diffs for this environment.
Loading APPL_TOP environment from /u01/applmgr/1159
[ Test mode ]
Using Context: /u01/applmgr/1159/admin/PROD_ocvmrh2177/out/04051937/PROD_ocvmrh2177.xml
Context Value Management will now update the Context file
Updating Context file ... COMPLETED
Testing templates from all of the product tops ...
Testing AD_TOP
Differences log written out to: /u01/applmgr/1159/admin/PROD_ocvmrh2177/out/04051937/cfgcheck.txt
The logfile for this session is located at: /u01/applmgr/1159/admin/PROD_ocvmrh2177/log/04051937/adconfig.log
2. Identify customization by performing the following steps carefully for every file:
• cd $APPL_TOP/admin/_/out//
o (e.g cd /u01/applmgr/1159/admin/PROD_ocvmrh2051/out/03171715)
• ls cfgcheck.txt
o this will have the summary for customization changes
• ls x_*
o this will show the diff associated with each file
o For example, if there is a customization in httpd.conf for a given instance, you will find ‘x_httpd.conf’.
o An example difference file, x_appsweb_PROD_ocvmrh2177.cfg:
Checking differences between
Existing file(<) : /u01/applmgr/common/html/bin/appsweb_PROD_ocvmrh2177.cfg
New file(>) : /u01/applmgr/1159/admin/PROD_ocvmrh2177/out/04051937/appsweb_PROD_ocvmrh2177.cfg
Note: You will lose the custom changes in the existing file as listed below. Please resolve the differences before you run AutoConfig in normal mode. Use begin custom/end custom blocks to retain your customizations.
=====================================================
Differences
=====================================================
32c32
> serverURL=/forms/formservlet
-----
< serverURL = /forms/formservlet?ifip=100.100.100.100
Implementing Configuration Changes and Customizations
3. Repeat these steps for every configuration file change:
Identify the template file
View the header in the respective configuration file (appsweb_PROD_ocvmrh2177.cfg in the above example).
The second line of the header shows the name of the template file:
Forms Web CGI Configuration File for Oracle Applications 11i
$Header: appsweb.cfg 115.127 2003/11/06 05:15:23 skghosh ship $
Apps templates can be found in $FND_TOP/admin/template or $AD_TOP/admin/template.
Review the type of customization
Check the entry for the respective change in the template file:
Changes which can be implemented in XML file
In the above case, the template file appsweb.cfg has the following entry for serverURL:
serverURL=%s_forms_servlet_serverurl%
If the value is enclosed in % signs, the string within the % signs is the name of a context parameter. When AutoConfig runs, it takes the value of this parameter from the XML file and substitutes it on this line.
To implement this change properly, you need to find this parameter in the XML file and update it to the proper value.
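For example (the context file path here is illustrative, based on the example instance above), you could locate the parameter with grep before updating it:
$ grep s_forms_servlet_serverurl /u01/applmgr/1159/admin/PROD_ocvmrh2177.xml
The next AutoConfig run then regenerates appsweb_PROD_ocvmrh2177.cfg with the corrected serverURL line.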
Changes which require template customizations
If the configuration change is to a value which is hard-coded in the template file, you will need to customize the template file. For example:
< CustomLog "| /u01/applmgr/product/iAS/Apache/Apache/bin/rotatelogs /u01/applmgr/product/iAS/Apache/Apache/logs/access_log 86400 " common
-----
> CustomLog /u01/applmgr/product/iAS/Apache/Apache/logs/access_log common
Corresponding template shows:
CustomLog %s_tp%/Apache/Apache/logs/access_log common
To generate the correct entry as above, the template needs to be modified as follows:
CustomLog "|%s_tp%/Apache/Apache/bin/rotatelogs %s_tp%/Apache/Apache/logs/access_log 86400 " common

Customizing Templates
Once you have identified which template needs to be customized and what changes are required, perform the following steps:
cd $FND_TOP/admin/template (or AD_TOP if the template is there)
mkdir custom (if it doesn't already exist)
cp httpd_ias_1022.conf custom (copy the current template to the custom directory)
Edit custom/httpd_ias_1022.conf and make the changes decided on above.
When making changes to conf files or templates, care must be taken to use macros that are instantiated by AutoConfig, and not to hardcode any values in the conf files that could potentially cause failures in other scenarios such as refresh, clone, etc. (for example, hard coding the ORACLE_SID instead of using the macro %s_dbSid%, which AutoConfig replaces with the proper ORACLE_SID).
Sample MACROS
List of sample macros can be reviewed from the following template file:
$AD_TOP/admin/template/adctxinf.tmp
Differences which can be ignored
If the difference only includes context variables, these can be ignored.
====================================================
Differences
====================================================
262c262
-----
>security.allowedAddresses=127.0.0.1,auohssohi01.oracleoutsourcing.com,%s_allowed_addresses%
Running Autoconfig in correct sequence
For a multi-midtier environment, profile options need to be updated from the Admin node only. To avoid updating profile options from the other nodes, we previously used to run AutoConfig with a wrong password; however, that creates other problems. Here is a better way to do it:

4. Run autoconfig on all MTs except the Concurrent Node with the following options:
adconfig.sh run=INSTE8_SETUP contextfile= appspass=

Previously this was done by running adconfig.sh with a wrong password, so that it could not connect to the database and hence could not update profile options.

5. Run autoconfig on the Concurrent Node (in case of PCP this is the Primary Concurrent Node)
adconfig.sh contextfile= appspass=
If, in the future, a new patch is installed which happens to replace the original template, AutoConfig will detect this (via the version in the "Header" section of the file) and refuse to proceed until the custom version of the conf file is updated. The following error is reported by AutoConfig when such a situation is encountered:
"Version Conflicts among development maintained and customized templates encountered; aborting AutoConfig run."
This error occurs when a patch brings a new version of a template that is already customized. In those cases, you are required to customize the new version of the template by comparing the development-maintained template with the customized template.
6. Perform post-autoconfig steps as per standard process such as starting of services, health check, buddy check etc.


This is regarding the TNS failover entries issue. Check the variable "s_alt_service_instances" in the XML file; if it is already set then some other issue is present, otherwise the steps below should resolve the TNS failover issue.

To generate the failover aliases, update the database tier context variable s_alt_service_instances.
You can specify a comma-separated description list of "servicename:instance" to manage connect time failover. For example:
'SERVICE_NAME:INSTANCE_NAME1,SERVICE_NAME:INSTANCE_NAME2'
This will generate a TNS alias in the tnsnames.ora file that fails over to INSTANCE_NAME1 when the current instance is not available. If INSTANCE_NAME1 is not available, failover is to INSTANCE_NAME2 instead.
To set up the failover listing, perform the following steps in the order listed:
1. Update the context variable s_alt_service_instances in the database tier context file, applying the failover rules as described above
2. Run AutoConfig on all database tiers
3. Run AutoConfig on all application tiers
These steps will generate a failover TNS alias with the _FO suffix, with a description list as configured in s_alt_service_instances. Before you can use these aliases, you will have to set the two_task context variable s_tool_twotask.
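As a rough sketch only (the instance names, host names and port used here are invented for illustration), setting s_alt_service_instances to 'PROD:PROD1,PROD:PROD2' would cause AutoConfig to generate a failover alias in tnsnames.ora along these lines:
PROD_FO=
        (DESCRIPTION_LIST=
            (FAILOVER=ON)
            (LOAD_BALANCE=NO)
            (DESCRIPTION=
                (ADDRESS=(PROTOCOL=tcp)(HOST=host1)(PORT=1521))
                (CONNECT_DATA=(SERVICE_NAME=PROD)(INSTANCE_NAME=PROD1)))
            (DESCRIPTION=
                (ADDRESS=(PROTOCOL=tcp)(HOST=host2)(PORT=1521))
                (CONNECT_DATA=(SERVICE_NAME=PROD)(INSTANCE_NAME=PROD2))))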
All backup configuration files from each AutoConfig session are stored in:
On the application tier:
<APPL_TOP>/admin/<CONTEXT_NAME>/out/<MMDDhhmm>/
On the database tier:
<RDBMS ORACLE_HOME>/appsutil/out/<CONTEXT_NAME>/<MMDDhhmm>/
where <MMDDhhmm> = month, day, hour, and minute of the AutoConfig run
You can run restore.sh (Unix) or restore.cmd (Windows) to roll back an AutoConfig session.

Tuesday, March 25, 2008

Patching in Oracle Part I

A patch is a software code change. Oracle Applications is an extremely flexible and powerful suite of products and software, which runs on many different platforms. As such, the complexities associated with extending its use to every business environment are challenging. Because of its flexibility and rich feature set, Oracle extends the base product functionality through the addition of code changes. These changes are bundled into a “patch”.
Always test the patch on a test system prior to applying to the production database. Because of the rare possibility of problems being encountered, perform a “cold” system backup of your test instance, if one doesn’t already exist.

Patch types Oracle produces

Maintenance Pack: A Maintenance Pack is a consolidation, or bundling, of patches for an entire product area. Maintenance Packs introduce new functionality in a given product and are particularly useful for customers who are beginning their implementation. They can start with the application of a Maintenance Pack, such as the 11.5.3 CD Pack, versus applying the 11.5.1 CD Pack followed by the individual patches released since 11.5.1’s initial availability. Maintenance Packs are cumulative, so a customer does not need to apply 11.5.1, 11.5.2 and 11.5.3 to get to the latest code; they can start with 11.5.3 (or the latest version available). Maintenance Packs can be downloaded from MetaLink but are quite large and should be ordered as a CD Pack from the Oracle Store. A Maintenance Pack is the largest or topmost level of patching done in Oracle Applications. This type of patch contains all of the product-specific family packs for the specified release version.

The readme file for a Maintenance Pack contains the list of patches that are included in the Maintenance Pack and can be downloaded from MetaLink.

The product minor version (11.5.1, 11.5.2, 11.5.3, etc) can change by the application of Maintenance packs.

Special considerations for upgrade customers:
Maintenance Packs are not for upgrading from previous versions of the Applications, such as 10.7 or 11.0.x. In these cases, customers must use the RapidWiz and AutoUpgrade utilities and complete all the steps specified in the Upgrade Manual before applying the Maintenance Pack. Customers must obtain the CD Pack from the Oracle Store because it contains the RapidWiz. The RapidWiz cannot be downloaded from MetaLink.

Family Pack: A Family Pack patch can introduce new functionality in a given product area. It also is a consolidated set of patches specific to a product family area, such as AOL or HRMS. Family Packs are cumulative, meaning all product family packs need to be applied in sequence. Family Packs are denoted with a letter, such as A, B, C etc., “A” preceding “B” and “B” preceding “C” and so on. The functionality changes included in the Family Packs are included in subsequent Maintenance Packs.

Mini-Pack: The terms megapatch, patchset, and mini-pack are interchangeable terms and refer to a group of bug fixes that are applied at one time. Mini-packs can be released for a single product or a group of related products (e.g. Purchasing mini-pack, Foundation mini-pack, Accounts Payable mini-pack, etc.). They are generally given alphabetical suffixes that provide their position in the sequence of a product’s lifecycle such as AP.A, AP.B, AP.C, etc. Mini-packs are cumulative and can contain new functionality.

One-off Patch: A "one-off" patch addresses a single fix or enhancement. One-off patches can introduce new functionality, which is noted in the readme file for the patch. One-off patches are usually issued when an issue requires an immediate code change that can’t wait for distribution in any other form.

All ERP patches are delivered with the software modules that are needed, including the dependencies for those modules. Some modules, such as forms, have a large number of dependencies. Even a small change like correcting an LOV on a form can result in the one-off patch including dozens of other modules to satisfy the dependencies.

Diagnostic Patch: A diagnostic patch is sent to a customer to assist Oracle Support Services and Oracle Development obtain diagnostic information. The patch is provided when a product failure cannot be reproduced in an OSS environment and customer specific data is required to continue the investigation.

Interoperability Patch: Interoperability patches can be required when an Oracle Applications instance is migrated to a newer version of the technology stack. The patch is typically included in the DB and Forms technology stack upgrade. An example of this was when Oracle Applications Release 11 was initially released, it required version 8.0 of the DB. Later on, when the Applications were certified with 8i, the Applications required an interoperability patch to work with 8i. Interoperability patches are quite common in Oracle Applications releases 10.7NCA, 11.0.X and 11i due to the need to synchronize Oracle Applications modules with the core Oracle database and tools. This type of patch can be created when an upgrade occurs in the Oracle software stack and Oracle Applications require additional modules to maintain functionality.

Translated Patch: A fully language translated patch. From R11i on, all applications patches are automatically translated into 28 different languages. The timing and availability of these patches may vary.

A class of patches that contain legislative data has an additional driver called hrglobal, which may need to be applied. Also, for some groups of patches, it may be beneficial to merge the patches into one set of driver files. Depending upon your implementation, you may also need to deal with multi-language patches and multi-node patching. These topics are discussed in the following sections.

Applying Legislative Patches
For Oracle Payroll customers, there is another category of patch required by the system. The hrglobal patch supports the legislative requirements of multiple countries. Given the nature of this patch, it is updated frequently by Oracle. It is often a post-patch requirement for the mandatory patches released for Oracle Payroll.
To find the latest copy of the hrglobal patch, view MetaLink Note 145837.1. This note will contain the latest patch number for the hrglobal patch, along with a link to the patch installation instructions and a change history for the patch. The hrglobal patch can be downloaded from MetaLink like any other patch. Review the patch’s readme file for required prerequisites.
After unpacking the patch, the adpatch utility can be run to install the patch’s u driver. In addition to the u driver, these patches contain a special hrglobal driver. As a result of these differences, there are additional considerations for applying this type of patch.
Once the u driver has been applied, the DataInstall Java utility needs to be run in order to select needed legislations for install. The syntax for this command is as follows:
jre oracle.apps.per.DataInstall apps apps_password thin
[hostname]:[dbport]:[oracle_sid]

When the DataInstall utility has been executed, the Applications DBA will need to select all relevant legislations. Figure 5-6 shows the main menu for DataInstall.
DataInstall Main Menu
1. Select legislative data to install/upgrade
2. Select college data to install/upgrade
3. Select JIT/Geocode or OTL to install/upgrade
4. Exit to confirmation menu

Figure 5-6. The DataInstall Main Menu
Select option 1 to choose which legislative data to install or upgrade. From the resulting menu, shown in Figure 5-7, you should choose to install any legislative data marked as installed. Note that the selection numbers will change depending upon your version of the hrglobal patch. Check the numbers for the appropriate data.
# Localisation Product(s) Leg. Data? Action
1 Global Human Resources Installed
2 Australia Human Resources
3 Australia Payroll...
55 United States Human Resources Installed
56 United States Payroll Installed
- Change Action, where the action is [I : Install, C : Clear]
Figure 5-7. The DataInstall legislative data submenu
Select the legislative data to be installed by entering the localization number and I. If an incorrect number is selected, you can correct the mistake by entering that number with a C to clear the action.
After all legislative data is marked for install, return to the main menu to select any required college data. When all college data is selected, return to the main menu and select 4 to exit the utility. Upon exiting, an Actions Summary will be displayed. Review that summary to ensure that all required actions have been selected.
The final stage of the legislative patch is to run the adpatch utility to apply the hrglobal driver. This driver is copied to the $PER_TOP/patch/115/driver directory by the patch’s u driver. The same adpatch options for applying other drivers should be used for the hrglobal driver.
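As an illustration only (adpatch prompts for several values that are not shown here; answer them as appropriate for your instance), applying the hrglobal driver typically looks like this:
$ cd $PER_TOP/patch/115/driver
$ adpatch
When prompted for the patch directory, give $PER_TOP/patch/115/driver, and when prompted for the driver file name, enter hrglobal.drv.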
Using AD Merge
When applying a group of large patches, such as a Maintenance Pack and a cumulative update, some performance benefit can be gained by using the AD Merge utility to combine the patches into one patch. From my personal experience, merging is one of the best time-reducing features. People often ask how to know which patches to merge and which not to; my answer is always that it depends on analysis, which is done with the readme and the docs associated with each patch. A thorough analysis at the early stage will save you downtime and make the patching smoother and better.
The set of patches to be merged should be copied to a common directory. After the patches are unbundled, the AD Merge utility can be run against the patches. Here is an example:
admrgpch /source_dir /target_dir
The completed merged driver files found in the target directory can be applied as a standard patch would be applied. The merged driver files will have a name like u_merged.drv. A log file, admrgpch.log, will be created in the directory where the utility was run.
For more information, see MetaLink Note 228779.1, “How to Merge Patches Using admrgpch.” The admrgpch utility can be run with several parameters, shown in Table 5-3.
Option Purpose
s Specifies the source directory containing compressed patch files.
d Specifies the destination directory for merged patch files.
verbose Controls the level of detail included in admrgpch output.
manifest Specifies a text file containing the list of patch files to be merged. This is useful if the source directory includes a large number of patch files.
logfile Specifies the log file to contain the output from admrgpch utility.
merge_name Specifies the name of the merged file. This defaults to “merged”, and it should be changed to be more descriptive.
Table 5-3. admrgpch Options
When using this utility, thoroughly test the resulting patch.
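A hypothetical invocation using the options from Table 5-3 (the directory, log file and merge names are invented, and the exact option syntax can vary by AD version - check the admrgpch usage message on your system):
$ admrgpch -s /u01/stage/src_patches -d /u01/stage/merged_patch -merge_name jan_bundle -logfile /u01/stage/admrgpch_jan.log
The merged driver would then be applied with adpatch from /u01/stage/merged_patch as usual.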
Applying NLS Patches
For E-Business Suite installations with multiple language requirements, there are patches available for each additional language. Each required NLS patch needs to be applied to Oracle Applications. Oracle provides some recommendations for dealing with NLS patches; these are outlined in MetaLink Note 174436.1.
The U.S. version of the patch should be applied before any of the translation patches. The translation patches may be applied without downtime to the entire system if users of the affected language are not active.
Using admrgpch, it is possible to merge all U.S. patches into one patch, and then merge all non-U.S. patches into a separate patch. Depending upon the application configuration, some variation of this approach may be necessary.
Performing Multi-Node Patching
There are a couple of options available to optimize patching for multi-node environments. As of Oracle Applications 11.5.10, the system can be designed with a shared application-tier filesystem. The shared application filesystem contains the application’s APPL_TOP, COMMON_TOP, and ORACLE_HOME. (MetaLink Note 233428.1 describes sharing the application-tier filesystem.) As a result of this configuration, patching the shared filesystem applies the patch to all nodes.
Prior to 11.5.10, a shared APPL_TOP did not include the ORACLE_HOME. For these systems, Forms and iAS patches must be applied to each Form and Web Node.
In order to increase the performance of the patching process, Distributed AD will execute workers on remote nodes in a multi-node implementation. Distributed AD improves scalability and resource utilization. Distributed AD is only available with AD Minipack H or later, and with a shared Application Tier Filesystem or shared APPL_TOP. More information on this feature can be found in MetaLink Note 236469.1.
If a shared Application Tier Filesystem is not in use, each filesystem will need to be patched separately. A patched filesystem can be cloned to another node if the downtime required to patch the node exceeds the time required to clone the filesystem.
Patch drivers have different requirements when applying them in a multi-node environment. The c driver must be applied to all APPL_TOPs, the d driver is applied on the Admin Node, the g driver is applied to all APPL_TOPs unless the APPL_TOP is only the Admin Node, and the u driver is applied to all APPL_TOPs on all nodes.
Monitoring and Resolving Patching Problems
Patching problems manifest themselves in many different ways. Typically the adpatch session will display an error or will appear to be hung on one task for a long period of time. The first step in resolving the issue is to review the adpatch log file and associated worker log file. Next, the reason the worker failed must be determined and resolved. After resolution has been obtained, adctrl can be used to continue the patching process.
Reviewing Log Files
During and after the application of patches, it is helpful to review log files of the adpatch session and its workers. These files are found in the $APPL_TOP/admin/$CONTEXT_NAME/log directory. The adpatch log filename is specified during the patch process. See the “Using AD Patch” section earlier in the chapter for more details.
In order to monitor the patch from a telnet session other than the one where the patch was started, a simple UNIX command such as tail -f u[patch#].log will display information as it is written to the log file. This is a useful means for monitoring the progress of a patch that is being applied.
The log files for the workers will be named adwork[xxx].log, where [xxx] is the number of the patch worker process. If a particular worker has failed, examine the related log file for detailed information. This information can be researched on MetaLink or used to open an SR with Oracle Support.
For example, the log file listing for the u driver of patch 11112, applied through adpatch using 5 workers, may look like this:
$ls

adwork001.log
adwork002.log
adwork003.log
adwork004.log
adwork005.log
u11112.log
Using AD Control
The administrative tool used to manage patch workers is AD Control, or adctrl. Frequently workers will fail or hang, which will require the Oracle Applications DBA to interface with adctrl. (Common patching errors will be covered later in this chapter.)
AD Control menu options will vary depending upon the AD patch version applied to the instance. When logged in as the application owner on the Admin Node, execute adctrl to display the menu options shown in Figure 5-8.
AD Controller Menu
----------------------------------------------------
1. Show worker status
2. Tell worker to restart a failed job
3. Tell worker to quit
4. Tell manager that a worker failed its job
5. Tell manager that a worker acknowledges quit
6. Restart a worker on the current machine
7. Exit
Figure 5-8. AD Controller Menu
To execute an adctrl menu option, simply type the menu option and press Enter. If options 2–6 are chosen, either specify the number of the worker that requires action, or press Enter for the action to be executed for all workers.
The “Skip Worker” menu option is a hidden adctrl menu option. If a worker needs to be skipped, start adctrl, enter 8, and then enter the worker number. Only use this option if advised by Oracle Support. Even if you use it without Support's advice, be sure you know what the worker was going to do and how far the patch has gone; sometimes the skipped worker would have been doing work that is required later in the patch, and you may get into deeper trouble by skipping workers just to continue the patch.

Resolving AD Patch Worker Failure
If a worker has failed, the adpatch session will normally display a failed worker message. The status of the worker may also be determined using adctrl. If a worker has failed, the worker error can be obtained by viewing the worker log file. Once the worker issue has been resolved, use adctrl to restart the worker.
If a worker has failed, and it is determined that the step the worker was trying to execute may be skipped, the hidden option 8 of the adctrl menu, “Skip Worker,” may be used to skip the worker. It is only advisable to do this if the step is not critical to the environment being patched.
The following are common worker failures that will be seen by the Applications DBA during patching. The error messages will be displayed by the adpatch session or in the worker log file:
Error message: ORA-01013: user requested cancel of current operation
Resolution to error: If this error occurs, simply use adctrl to restart the worker on the current machine.
Error message: Patch not applied successfully, adpatch did not cleanup its restart files (*rf9).
Resolution to error: If this error occurs, execute the following as the instance owner:
$cd $APPL_TOP/admin/$CONTEXT_NAME
$mv restart restart_old
$mkdir restart
After cleaning up the restart files, you may then restart the adpatch session using adpatch.
Error message: ERROR: Javakey subcommand exited with status 1
Resolution to error: If this error occurs, the identity.obj file needs to be re-created. Re-create the identity.obj file, then use adctrl to restart the failed worker.
Error message: No error message is displayed; rather the worker log file states that the worker is complete, yet adctrl indicates that the worker is still running.
Resolution to error: This patching problem occurs when the worker is complete, but did not update patching tables correctly to notify the adpatch session that it has finished. In this scenario, the adpatch session is still waiting for the finish return code from the worker. When this occurs, use adctrl to fail the worker, then restart the worker.
Any form, library, or report that fails to generate during the patch process can be regenerated manually after all patching and post-patching steps have completed. If the object still fails to compile, open an SR.
Additional Tips for Resolving Patching Issues
If a patch has hung or workers have failed, and the reason for this failure cannot be determined, it is advisable to check the number of invalid objects in the database. If the number of invalid objects is high, recompile the invalid objects in parallel and restart the patching session.
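For example, to check the invalid object count and recompile (utlrp.sql recompiles invalid objects, in parallel on later releases):
$ sqlplus "/ as sysdba"
SQL> select count(*) from dba_objects where status = 'INVALID';
SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql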
If the adpatch session is hung, and all other methods for resolution have been executed, it may be necessary to bounce the database and restart the patch session. This method for resolving patching issues is sometimes necessary, especially when applying large patches, such as Maintenance Packs.
If a failure occurs during the application of a patch, it may be necessary to apply another patch to resolve the issue. If this type of issue occurs during the application of a large patch, you may want to be able to restart the original patch from the point of failure. MetaLink Note 175485.1 provides details for applying a patch with adpatch already running.

Database Patching

Database patching consists of either upgrades or interim fixes. Database upgrades are typically complex in nature and require installation of new software when upgrading from one point release to another. Obsolete and new initialization parameters must be reviewed when upgrading to a new release of the database.
Database upgrades can be accomplished manually or by using dbmig, the database migration utility. Since the method for upgrading the database is version and platform dependent, the associated readme file for the upgrade must be reviewed, and the steps required to perform the upgrade should be documented.
Interim patch fixes for the database are applied as the owner of the database install with the opatch utility or by running an operating system script. Details on how to apply database patches are outlined in the patch’s readme.
Before upgrading or applying a patch to the database, the oraInst.loc file must point to the correct Oracle inventory location for the database ORACLE_HOME. It is also important to cleanly shut down the database before proceeding, and to perform a cold database backup.
The opatch utility is downloaded from MetaLink as patch number 2617419. The opatch utility requires Perl and JDK to function, and they must be installed and specified in the path and library environment variables. Once the opatch utility has been downloaded and unbundled, the Opatch directory of the opatch unbundled patch should be added to the PATH, as in the following example:
$export PATH=$PATH:/[path_of_2617419]/Opatch
The library path of Perl must also be specified with the following PERL5LIB environment variable, as in the following example:
$export PERL5LIB=[path_of_PERL]/lib
To validate that opatch is functioning properly, execute the following command with the lsinventory option:
$opatch lsinventory
Once opatch has been successfully set up, the database interim patch fix may be applied. To do this, first review the readme file for the patch. Make certain that all prerequisites have been met. Document any post-patching steps that are required. Download the patch and unbundle it. Change to the directory where the patch has been unbundled. Verify that the database has been shut down. Apply the patch by executing opatch as the database owner with the apply parameter, as in the following example:
$opatch apply
To verify that a patch has successfully been applied, the lsinventory option can again be executed. This will display all patches that have been applied to the database.
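Putting the steps above together, a minimal sketch of an interim patch application might look like the following (reusing the placeholder style from the commands above):
$ export PATH=$PATH:/[path_of_2617419]/Opatch
$ export PERL5LIB=[path_of_PERL]/lib
$ sqlplus "/ as sysdba"
SQL> shutdown immediate
$ cd /[path_of_unbundled_patch]
$ opatch apply
$ opatch lsinventory
The last command confirms that the patch now appears in the ORACLE_HOME inventory.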
Note: If the opatch fails, there may be a patch_locked file located under the hidden directory $ORACLE_HOME/.patch_storage. The opatch utility may not be executed until the patch_locked file is removed.

Monday, March 10, 2008

oracle-database-questions-series (III)

1. Data pump Advantage & Architecture.

Ans - New utilities in Oracle Database 10g take performance and versatility to new levels. New import and export features introduced in Oracle Database 10g, called Oracle Data Pump, represent a radical departure from the client/server approach to which database users have grown accustomed over the past several generations of Oracle Database. The server now runs export and import jobs. You can load or unload massive amounts of data quickly using parallelization, and you can adjust the degree of parallelism on the fly. Export and import jobs are now restartable, so a failure doesn't necessarily mean starting over. The API is exposed and easy to use; it's simple to create an import or export job from PL/SQL. And once started, such jobs run in the background, but you can check status and make modifications, from anywhere, using the client utilities.
Describing the Oracle Export and Import utilities as slow is like saying the Titanic sprung a small leak. Stories of Export and Import executions running for hours (and sometime days) are commonplace. Oracle has finally created a new data movement utility, called Data Pump, to increase the performance of data transfers. Oracle states that Data Pump’s performance on data retrieval is 60% faster than Export and 15 to 20 times faster on data input than Import. In addition, Oracle states that Data Pump is able to take advantage of parallel processing to increase performance. Hopefully, this will alleviate the performance issues related to transferring data between Oracle databases.

Architecture

Before Oracle Database 10g, (Oracle7 through Oracle9i) the import and export utilities ran as clients and did the bulk of the work. Data being exported was read by the database instance, passed over the connection to the export client, and then written to disk. All the data was single-threaded through the one export process. Data volumes today are often magnitudes larger than when this architecture was first put in place, making that single export process a bottleneck because performance of an export job is limited by the throughput that the export utility can sustain.
With Oracle Database 10g and the new Data Pump architecture, all the work is now done by the database instance, which can parallelize the work in two ways: by creating multiple Data Pump worker-processes to read/write data being exported/imported, and by creating parallel I/O server processes to more quickly SELECT or INSERT that data. Gone is the single-process bottleneck.
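For illustration (the directory object, path, schema and file names are hypothetical), a parallel schema export and import with Data Pump could look like this:
SQL> create directory dp_dir as '/u01/oradata/PROD/dpump';
$ expdp system/<password> directory=dp_dir dumpfile=scott_%U.dmp schemas=scott parallel=4
$ impdp system/<password> directory=dp_dir dumpfile=scott_%U.dmp schemas=scott parallel=4
The %U substitution lets each parallel worker write to (and read from) its own dump file.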

2. What are the new parameters in Oracle10g.
Ans – db_recovery_file_dest
db_unique_name
sga_target
skip_unusable_indexes


3. Physical Standby Database Vs Logical Standby Database.

Physical Standby Databases

A physical standby database is physically identical to the primary database, with on-disk database structures that are identical to the primary database on a block-for-block basis. The database schema, including indexes, must be the same.

Data Guard maintains a physical standby database by performing managed recovery operations. When it is not performing recovery operations, a physical standby database can be open for read-only operations.

* Managed recovery

The physical standby database is maintained by applying the archived redo logs on the standby system using the Oracle recovery mechanism. The recovery operation applies changes block-for-block using the physical row ID. The database cannot be opened for read or read/write operations while redo data is being applied.

* Open read-only

The physical standby database can be open for read-only operations so that you can execute queries on the database. While open for read-only operations, the standby database can continue to receive redo logs but application of the data from the logs is deferred until the database resumes managed recovery operations.

Although the physical standby database cannot perform both managed recovery and read-only operations at the same time, you can switch between them. For example, you can run a physical standby database to perform managed recovery operations, then open it so applications can perform read-only operations to run reports, and then change it back to perform managed recovery operations to apply outstanding archived redo logs. You can repeat this cycle, alternating between managed recovery and read-only operations, as necessary.

In either case, the physical standby database is available to perform backup operations. Furthermore, the physical standby database will continue to receive redo logs even if they are not being applied at that moment.
Benefits of a Physical Standby Database

A physical standby database provides the following benefits:

* Disaster recovery and high availability

A physical standby database enables a robust and efficient disaster recovery and high availability solution. Easy-to-manage switchover and failover capabilities allow easy role reversals between primary and physical standby databases, minimizing the downtime of the primary database for planned and unplanned outages.

* Data protection

Using a physical standby database, Data Guard can ensure no data loss, even in the face of unforeseen disasters. A physical standby database supports all datatypes, and DDL and DML operations that the primary can support. It also provides safeguard against data corruptions and user errors. Storage level physical corruptions on the primary database do not propagate to the standby database. Similarly, logical corruptions or user errors that cause the primary database to be permanently damaged can be resolved. Finally, the redo data is validated when it is applied to the standby database.

* Reduction in primary database workload

Oracle Recovery Manager (RMAN) can use physical standby databases to off-load backups from the primary database saving valuable CPU and I/O cycles. The physical standby database can also be opened in read-only mode to perform reporting and queries.

* Performance

The redo apply technology used by the physical standby database applies changes using low-level recovery mechanisms, which bypass all SQL level code layers and therefore is the most efficient mechanism for applying changes. This makes the redo apply technology a highly efficient mechanism to propagate changes among databases.


Logical Standby Databases

A logical standby database is initially created as an identical copy of the primary database, but it later can be altered to have a different structure. The logical standby database is updated by applying SQL statements. The logical standby database uses LogMiner technology to convert the log information received from the primary database into SQL statements. This allows users to access the standby database for queries and reporting purposes at any time. Thus, the logical standby database can be used concurrently for data protection and reporting operations.

Data Guard automatically applies archived redo log information to the logical standby database by transforming data in the redo logs into SQL statements and then executing the SQL statements on the logical standby database. Because the logical standby database is updated using SQL statements, it must remain open. Although the logical standby database is open for read/write operations, its target tables for the regenerated SQL are available only for read-only operations. While those tables are being updated, they can be used simultaneously for other tasks such as reporting, summations, and queries. Moreover, these tasks can be optimized by creating additional indexes and materialized views on the maintained tables.

A logical standby database has some restrictions on datatypes, types of tables, and types of data definition language (DDL) and data manipulation language (DML) operations. Unsupported datatypes and tables are described in more detail in Section 4.1.4.
Benefits of a Logical Standby Database

A logical standby database provides similar disaster recovery, high availability, and data protection benefits as a physical standby database. It also provides the following specialized benefits:

* Efficient use of standby hardware resources

A logical standby database can be used for other business purposes in addition to disaster recovery requirements. It can host additional database schemas beyond the ones that are protected in a Data Guard configuration, and users can perform normal DDL or DML operations on those schemas at any time. Because the logical standby tables that are protected by Data Guard can be stored in a different physical layout than on the primary database, additional indexes and materialized views can be created to improve query performance and suit specific business requirements.

* Reduction in primary database workload

A logical standby database can remain open at the same time its tables are updated from the primary database, and those tables are simultaneously available for read access. This makes a logical standby database an excellent choice to do queries, summations, and reporting activities, thereby off-loading the primary database from those tasks and saving valuable CPU and I/O cycles.

You cannot have Data Guard with Oracle SE. Either upgrade to EE, or (with or without Enterprise Edition) implement manual (user-managed) recovery for a physical standby. This means writing your own scripts to copy archived redo logs to the standby site and then recovering the standby database using them. This would only simulate "maximum performance" mode.

"Maximum availability" mode and "open read only for reporting" are conflicting goals. If you want to roll your physical standby forward manually, say once every 24 hours, and then open it for read-only reporting each day, that is very reasonable and achievable with Standard Edition (assuming you can write good scripts).


4. Lost 1 table 2 days back how to recover that table explain.

Ans - There are 4 options:
1. Import the table from a dump file.
2. Recover it using Flashback (Flashback Drop / recycle bin in 10g).
3. Recover the database until time (incomplete recovery).
4. Tablespace point-in-time recovery (TSPITR).
Case 1 : The current time is 12 p.m. on 9-Dec-97. Your training DBA just told you he dropped the
employee (EMP) table. The table was dropped around 11:45 a.m. Database activity is minimal because
most staff are currently in a meeting. The table must be recovered.
Steps, explanations and commands:
1. If the database is open, shut it down using either the "normal" or "immediate" option:
SQL> shutdown immediate
2. Restore all datafiles from backup (the most recent if possible):
$ cp /disk1/backup/*.dbf /disk1/data/
3. Mount the database:
SQL> startup mount
4. Recover the database:
SQL> recover database until time '1997-12-09:11:44:00';
5. To synchronize datafiles with control files and redo logs, open the database using the "resetlogs" option:
SQL> alter database open resetlogs;
6. Perform a full database backup.
7. When recovery is successful and the backup has completed, notify users that the database is available for use, and that any data entered after the recovery time (11:44 a.m.) will need to be re-entered.
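For completeness, options 1 and 2 from the list above might look like this on Oracle 10g (the dump file name is hypothetical, and Flashback Drop assumes the recycle bin was enabled when the table was dropped):
$ imp system/<password> file=exp_full.dmp fromuser=scott touser=scott tables=emp ignore=y
SQL> flashback table scott.emp to before drop;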
5. How to check / synchronize Primary Database with Standby database. (views)


Ans - You can verify archived logs and SCNs on the primary and standby databases by comparing the results of the following queries.

COL name FORMAT a48;
SELECT name, first_change#, next_change# FROM v$archived_log;

SELECT name, checkpoint_change#, archive_change# FROM v$database;

SELECT * FROM v$logstdby;

SELECT facility,severity,error_code "Err",
callout "Cal",timestamp,message FROM v$dataguard_status
ORDER BY timestamp;

SELECT severity,message FROM v$dataguard_status
ORDER BY timestamp;

Use this query to verify Log Apply services are functioning properly.

SELECT APPLIED_SCN, APPLIED_TIME, READ_SCN, READ_TIME,
NEWEST_SCN, NEWEST_TIME
FROM DBA_LOGSTDBY_PROGRESS;

When the APPLIED_SCN and NEWEST_SCN values are equal, all available
changes have been applied. If APPLIED_SCN is less than NEWEST_SCN, SQL
Apply has not yet applied all available changes. We can also check archives on both databases. On the primary, run this query:

SELECT name FROM v$archived_log;
On the standby run this query:

SELECT FILE_NAME, SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE#,
TIMESTAMP, DICT_BEGIN, DICT_END, THREAD#
FROM DBA_LOGSTDBY_LOG ORDER BY SEQUENCE#;

This query can also be used to verify archives on the standby database, and the following query can be run on both servers:

SELECT serial#,logstdby_id,type,status FROM v$logstdby;

16. What are the refresh modes of materialized views?
Ans - In Oracle9i, if you specify REFRESH FAST for a single-table aggregate materialized view, you must have created a materialized view log for the underlying table, or the refresh command will fail. When creating a materialized view, you have the option of specifying whether the refresh occurs manually (ON DEMAND) or automatically (ON COMMIT, or on a schedule via DBMS_JOB). To use the fast warehouse refresh facility, you must specify the ON DEMAND mode. To refresh the materialized view, call one of the procedures in DBMS_MVIEW, as in the sketch below.
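A small sketch (the table, column and materialized view names are hypothetical; depending on release, the materialized view log may also need the SEQUENCE clause):
SQL> create materialized view log on emp with rowid (deptno, sal) including new values;
SQL> create materialized view emp_sal_mv
     build immediate
     refresh fast on demand
     as select deptno, sum(sal) sum_sal, count(sal) cnt_sal, count(*) cnt
     from emp group by deptno;
SQL> exec dbms_mview.refresh('EMP_SAL_MV', 'F');
The 'F' parameter requests a fast (incremental) refresh using the materialized view log.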


7. Which permanent data file which store uncommitted data?

Ans - Uncommitted Data on the Data Files
If the user's server process is unable to find a free data buffer, DBWR is notified to flush a section of changed data buffers to their corresponding data files. The section of data buffers being flushed may contain both committed and uncommitted changes. This could result in Oracle having uncommitted and committed data blocks on the data files. Oracle, for performance reasons, will not check each block in the section being flushed to see if it contains uncommitted data. Oracle wants to write that section as quickly as possible.
A key point is that before this "panic flush" of changed data buffers to disk occurs, DBWR will signal LGWR to write all before images of uncommitted blocks in the section being flushed. This process ensures that all before images of uncommitted data changes can be retrieved from the redo log file during a recovery.
Committed Data Not On the Data Files
When a transaction commits, the server process places a commit record in the log buffer and tells LGWR to perform a contiguous write of all the redo log buffer entries up to and including the commit record to the redo log files (not the data files!). Oracle is now able to guarantee that the changes will not be lost even if there is an instance failure. Please note that the flushing of dirty data buffers is performed independently by DBWR and can occur either before or after the commit. This could result in Oracle having committed data on the redo log files but not on the data files.

The Synchronization Process

If a failure occurs and the instance terminates abnormally, Oracle must restore the database to a transaction consistent state just prior to the failure. The database must remove all uncommitted data from the data files (because of "panic flushes") and replay all changes committed and recorded in the redo log files but not recorded on the data files (remember that a commit forces LGWR to flush, not DBWR). Oracle restores the database to a transaction consistent state using roll forward and roll backward processes.

8. How to convert Dictionary Managed Table space to Locally Managed Table space.

Ans - Convert between LMT and DMT:

The DBMS_SPACE_ADMIN package allows DBAs to quickly and easily
convert between LMT and DMT mode. Look at these examples:
Example :
SQL> exec dbms_space_admin.Tablespace_Migrate_TO_Local('ts1');
SQL> exec dbms_space_admin.Tablespace_Migrate_FROM_Local('ts2');


9. Control file lost - how to recover when you don’t have any backup?

Ans - Create the control file manually and recover the database, as sketched below.
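A minimal sketch (the file names, sizes and limits here are hypothetical; in practice you must list every online redo log group and every datafile of the database):
$ sqlplus "/ as sysdba"
SQL> startup nomount
SQL> create controlfile reuse database "PROD" resetlogs archivelog
     maxlogfiles 16
     maxdatafiles 100
     logfile
     group 1 '/u01/oradata/PROD/redo01.log' size 50M,
     group 2 '/u01/oradata/PROD/redo02.log' size 50M
     datafile
     '/u01/oradata/PROD/system01.dbf',
     '/u04/oradata/PROD/devl_PROD_01.dbf';
SQL> recover database using backup controlfile until cancel;
SQL> alter database open resetlogs;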

10. What is Data Guard what are the advantages.

Ans - Data Guard Broker

Oracle's Data Guard Broker is the management framework that is used to create, configure, administer and monitor a Data Guard environment. The Data Guard Broker provides the following benefits:
§ Simplifies the creation of Data Guard environments by providing wizards to create and configure physical or logical standby databases. Data Guard is able to generate all of the files necessary (parameter, tnsnames.ora, etc.) to establish the connectivity between the standby and primary database servers.
§ Allows administrators to invoke a failover or switchover operation with a single command and control complex role changes across all systems in the configuration. A switchover is a planned transfer of control from the primary to the standby while a failover is an unplanned transfer of control due to some unforeseen event. By automating activities such as failover and switchover, the possibility of errors is reduced.
§ Provides performance-monitoring tools to monitor log transport and log apply times.
§ Provides a GUI interface (Data Guard Manager) tool that allows DBAs to administer a primary /multiple standby configuration with a simple point-and-click interface.
§ Administrators are able to manage all components of the configuration, including primary and standby servers and databases, log transport services, and log apply services.
§ Is highly integrated with Oracle Enterprise Manager to provide e-mail and paging capabilities
An Oracle background server process called DMON is started on every site that is managed by the broker. The DMON process is created when the Data Guard Broker monitor is started on the primary or standby database servers. The DMON process is responsible for interacting with the local instance and the DMON processes running on the other servers to perform the functions requested by the Data Guard Manager or command line interface. The DMON process is also responsible for monitoring the health of the broker configuration.
DMON maintains a persistent configuration file on all of the servers managed by the Data Guard Broker framework. The configuration file contains entries that provide details on all objects in the configuration and their statuses. The broker uses this information to send information back to the Data Guard Manager, configure and start the site and database resource objects and control each object's behavior.

11. What is Oracle ASM what are its advantages?

Ans - The advantages of ASM:
· Disk Addition—Adding a disk becomes very easy. No downtime is required and file extents are redistributed automatically.
· I/O Distribution—I/O is spread over all the available disks automatically, without manual intervention, reducing chances of a hot spot.
· Stripe Width—Striping can be fine grained as in Redo Log Files (128K for faster transfer rate) and coarse for datafiles (1MB for transfer of a large number of blocks at one time).
· Buffering—The ASM filesystem is not buffered, making it direct I/O capable by design.
· Kernelized Asynch I/O—There is no special setup necessary to enable kernelized asynchronous I/O, without using raw or third-party filesystems such as Veritas Quick I/O.
· Mirroring—Software mirroring can be set up easily, if hardware mirroring is not available.
· Automatic load balancing
· Software data redundancy
· Software RAID 1 with double or triple mirrors
· Elimination of fragmentation (this seems like a bit of a stretch, as fragmentation may also be eliminated with the use of Locally Managed Tablespaces, a feature introduced in Oracle 8i)
· Elimination of file management

12. What is Mean Time Recovery (MTTR)?
Ans _ Fast-Start Time-Based Recovery
Rather than wait for specific events such as log switches to trigger checkpoints, Oracle9i can be instructed to use fast-start checkpointing architecture to allow the DBWn processes to periodically write dirty buffers to disk and incrementally advance the checkpoint position. This results in a reduced Mean Time To Recovery (MTTR) and a reduction in I/O spikes associated with log switches.

The FAST_START_MTTR_TARGET initialization parameter is used to specify the number of seconds crash recovery should take. Oracle uses this target time to configure the FAST_START_IO_TARGET and LOG_CHECKPOINT_INTERVAL parameters to reduce crash recovery time to a level as close to the target time as possible. The FAST_START_IO_TARGET, LOG_CHECKPOINT_INTERVAL and LOG_CHECKPOINT_TIMEOUT parameters should not be set as they may interfere with the process.

The maximum value for FAST_START_MTTR_TARGET is 3600 (1 hour), with values exceeding this being rounded down. There is no minimum value, but values that are too low may not be possible to achieve as this is limited by the low limit of the target number of dirty buffers, which is 1000. Added to this is the time mounting the database will take.

If the value is set too low, then the effective MTTR target will be the best MTTR target the system can achieve. If the value is set too high, the effective MTTR is estimated based on the whole buffer cache being dirty. The ESTIMATED_MTTR column in the V$INSTANCE_RECOVERY view can be used to view the effective MTTR. If the parameter setting, shown by the TARGET_MTTR column, is consistently different from the effective MTTR, it should be adjusted, since this means it is set at an unrealistic value.

Remember that the extra checkpointing required to reduce the crash recovery time may compromise the system performance. A balance must be reached between general system performance and crash recovery time. Set FAST_START_MTTR_TARGET to zero to disable fast-start checkpointing.

The FAST_START_IO_TARGET initialization parameter is used to specify the maximum number of dirty blocks in the buffer cache. Its use has been deprecated in favour of the FAST_START_MTTR_TARGET. In addition the DB_BLOCK_MAX_DIRTY_TARGET parameter has been removed.
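For example, to target a five-minute crash recovery time and then watch how the instance is tracking it:
SQL> alter system set fast_start_mttr_target = 300;
SQL> select target_mttr, estimated_mttr from v$instance_recovery;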

13. If there is High CPU Usage in the server who do you diagnosis and resolve on Linux / Solaris Box?
Ans - Check the OS CPU usage and find which process is consuming the CPU. If it is an Oracle process, find the SQL statement it is running and tune it, as sketched below.
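A rough sketch of that approach (take the OS PID from top or prstat; the query is only illustrative):
$ top
Note the PID of the oracle process consuming the CPU.
SQL> select s.sid, s.serial#, s.username, q.sql_text
     from v$session s, v$process p, v$sql q
     where p.spid = '&os_pid'
     and s.paddr = p.addr
     and s.sql_address = q.address
     and s.sql_hash_value = q.hash_value;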

14. New features of oracle 9i?

15. New features of oracle 10g?
Ans - 1. Data Pump
2. Flashback
3. Automatic Workload Repository (AWR)
Automatic Workload Repository (AWR) defaults to a collection interval every 30 minutes and collects data that is the foundation for all of the other self-tuning features. AWR is very much like STATSPACK, especially the level-5 STATSPACK collection mechanism where top SQL is collected every hour, based on your rolling thresholds for high-use SQL. In addition to the SQL, AWR collects detailed run-time statistics on the top SQL (disk reads, executions, consistent gets) and uses this information to adjust the rolling collection threshold. This technique ensures that AWR always collects the most resource-intensive SQL.

4. Automatic Storage Management(ASM)
The Automatic Storage Management (ASM) feature allows for the automatic stripe-and-mirror everywhere approach to be used to automatically load balance the disk I/O subsystem and remove the need for the DBA to specify physical file locations when allocating a tablespace.
5. Transportable Tablespace
OS file copy is generally much faster than other traditional means of data movement such as export/import or SQL*Loader. However, in Oracle9i Database and below, a restriction limits its usefulness to only a few cases in which both the target and source database run on the same OS platform—you can't transport tablespaces between Solaris and HP-UX, for example.
In Oracle Database 10g, this restriction has disappeared: you can now transport tablespaces between platforms as long as the OS byte orders are identical

6. Audit Trail
First, FGA now supports DML statements in addition to selects. These changes are recorded in the same place, the table FGA_LOG$, and displayed through the view DBA_FGA_AUDIT_TRAIL. In addition to DMLs, you can now choose to trigger a trail only if all relevant columns are accessed, or even when a few of them are accessed

7. RMAN
RMAN enhancements: incremental backups and compressed backup sets (see the example below).
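For example, with 10g RMAN:
RMAN> backup as compressed backupset database;
RMAN> backup incremental level 1 database;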


16. What is the advantage and disadvantages in Auto extend on and off?
Ans - One disadvantage of autoextend is the performance impact - if a user is inserting rows and fills up the tablespace to the point that the datafile has to extend, then the end user has to incur the cost of waiting for the file to extend. This is avoided if the DBA anticipates the growth and manually pre-extends the datafile, or adds another data file to the tablespace. The advantage of autoextend is that transactions do not fail with out-of-space errors when a tablespace fills unexpectedly, at the cost of less direct control over disk space usage.


17. What are the kernel parameters which are required to change in the time of oracle installation?

shmmax = 2147483648 (To verify, execute: cat /proc/sys/kernel/shmmax)
shmmni = 4096 (To verify, execute: cat /proc/sys/kernel/shmmni)
shmall = 2097152 (To verify, execute: cat /proc/sys/kernel/shmall) (for 10g R1)
shmmin = 1 (To verify, execute: ipcs -lm |grep "min seg size")
shmseg = 10 (It's hardcoded in the kernel - the default is much higher)
semmsl = 250 (To verify, execute: cat /proc/sys/kernel/sem | awk '{print $1}')
semmns = 32000 (To verify, execute: cat /proc/sys/kernel/sem | awk '{print $2}')
semopm = 100 (To verify, execute: cat /proc/sys/kernel/sem | awk '{print $3}')
semmni = 128 (To verify, execute: cat /proc/sys/kernel/sem | awk '{print $4}')
SHMMAX
The SHMMAX parameter is used to define the maximum size (in bytes) for a shared memory segment and should be set large enough for the largest SGA size. If the SHMMAX is set incorrectly (too low), it is possible that the Oracle SGA (which is held in shared segments) may be limited in size. An inadequate SHMMAX setting would result in the following:
ORA-27123: unable to attach to shared memory segment
You can determine the value of SHMMAX by performing the following:
# cat /proc/sys/kernel/shmmax
33554432
SHMMNI
We now look at the SHMMNI parameter. This kernel parameter is used to set the maximum number of shared memory segments system wide. The default value for this parameter is 4096. This value is sufficient and typically does not need to be changed.
You can determine the value of SHMMNI by performing the following:
# cat /proc/sys/kernel/shmmni
4096

Semaphores
Now that we have configured our shared memory settings, it is time to take care of configuring our semaphores. A semaphore can be thought of as a counter that is used to control access to a shared resource. Semaphores provide low level synchronization between processes (or threads within a process) so that only one process (or thread) has access to the shared segment, thereby ensuring the integrity of that shared resource. When an application requests semaphores, it does so using "sets".
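On Linux these values are usually made persistent in /etc/sysctl.conf and loaded with sysctl -p; a sketch using the values listed above (adjust shmmax to your SGA size):
kernel.shmmax = 2147483648
kernel.shmall = 2097152
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
Then, as root:
# sysctl -p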

18. What are the packages install when you install RMAN.
Ans – DBMS_RCVMAN , DBMS_BACKUP_RESTORE, DBMS_RCVCAT


19. What are the things we need to check in stats pack report.
Ans – 1. Instance Efficiency Percentage
a) Buffer Nowait %
b) Buffer Hit%
c) Library Hit%
d) Execute to Parse %
2. Load Profile
a) Redo size
b) Logical reads
c) Physical reads
d) Physical writes
e) Parse
f) Sorts
3. Top 5 Timed Event
a) CPU Time
b) Direct path read Temp
c) Control file parallel write
d) Control file sequential read
4. Table space I/O Stats
5. Other Waits:
a) Buffer Busy waits
b) DB file Sequential Read
c) DB file scattered read
d) Direct Path Read
e) Enqueue
f) Free Buffer Wait
g) Latch Free
h) Library Cache pin
i) Log file space switch / log buffer event
j) Log file sync
k) Undo segment statistics
l) Dictionary cache hits and misses
m) Table space IO / File I/O

20. What are the things we need to check in Explain plan report

Ans - Remove unnecessary large-table full table scans - Unnecessary full table scans cause a huge amount of unnecessary I/O, and can drag down an entire database. The tuning expert first evaluates the SQL based on the number of rows returned by the query. If the query returns less than 40 percent of the table rows in an ordered table, or 7 percent of the rows in an unordered table, the query can be tuned to use an index in lieu of the full table scan. The most common tuning for unnecessary full table scans is adding indexes. Standard B-tree indexes can be added to tables, and bitmapped and function-based indexes can also eliminate full table scans. The decision about removing a full table scan should be based on a careful examination of the I/O costs of the index scan vs. the costs of the full table scan, factoring in the multiblock reads and possible parallel execution. In some cases an unnecessary full table scan can be forced to use an index by adding an index hint to the SQL statement.

Cache small-table full table scans - In cases where a full table scan is the fastest access method, the tuning professional should ensure that a dedicated data buffer is available for the rows. In Oracle7 you can issue alter table xxx cache. In Oracle8 and beyond, the small table can be cached by forcing it into the KEEP pool.

Verify optimal index usage - This is especially important for improving the speed of queries. Oracle sometimes has a choice of indexes, and the tuning professional must examine each index and ensure that Oracle is using the proper index. This also includes the use of bitmapped and function-based indexes.

Verify optimal JOIN techniques - Some queries will perform faster with NESTED LOOP joins, others with HASH joins, while others favor sort-merge joins.
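To generate and read an execution plan in the first place (9iR2 and above; the table name here is hypothetical):
SQL> explain plan for select * from emp where deptno = 10;
SQL> select * from table(dbms_xplan.display);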

21. What is the Advantage of RMAN?
Ans – 1. Configure one time
2. Automatic Backup Control file
3. Backup Space Management: Setting Default Retention Policies
4. Improved Archived Log Backup
5. Centrally Maintained Backup Information Catalog
6. Scripting Available
7. Performance Benefit (multiple channel allocation, parallel option)
8. No generation of extra redo during open DB backups.
9. Block corruption detection.
10. Compressed backup (saves disk space)

22. What is PGA_AGGREGATE_TARGET, SHARED_POOL_ADVICE, PGA_TARGET_ADVICE, DB_CACHE_ADVICE, MTTR

1. Data Cache advice - The v$db_cache_advice utility shows the marginal changes in physical data block reads for different sizes of db_cache_size. Bear in mind that the data from STATSPACK can provide similar data to v$db_cache_advice, and most Oracle tuning professionals use STATSPACK and v$db_cache_advice to monitor the effectiveness of their data buffers.

These advisory utilities are extremely important for the Oracle DBA who must adjust the sizes of the RAM areas to meet current processing demands
Using v$db_cache_advice
The following query can be used to perform the cache advice function, once the db_cache_advice has been enabled and the database has run long enough to give representative results.
-- ***********************************************************
-- Display cache advice
-- ***********************************************************


column c1 heading 'Cache Size (meg)' format 999,999,999,999
column c2 heading 'Buffers' format 999,999,999
column c3 heading 'Estd Phys|Read Factor' format 999.90
column c4 heading 'Estd Phys| Reads' format 999,999,999

select
   size_for_estimate         c1,
   buffers_for_estimate      c2,
   estd_physical_read_factor c3,
   estd_physical_reads       c4
from
   v$db_cache_advice
where
   name = 'DEFAULT'
and
   block_size = (select value from v$parameter
                 where name = 'db_block_size')
and
   advice_status = 'ON';
The output from the script is shown below. Note that the values range from 10 percent of the current size to double the current size of the db_cache_size.

Estd Phys Estd Phys
Cache Size (meg) Buffers Read Factor Reads
---------------- ------------ ----------- ------------
30 3,802 18.70 192,317,943 <== 10% size
60 7,604 12.83 131,949,536
91 11,406 7.38 75,865,861
121 15,208 4.97 51,111,658
152 19,010 3.64 37,460,786
182 22,812 2.50 25,668,196
212 26,614 1.74 17,850,847
243 30,416 1.33 13,720,149
273 34,218 1.13 11,583,180
304 38,020 1.00 10,282,475 Current Size
334 41,822 .93 9,515,878
364 45,624 .87 8,909,026
395 49,426 .83 8,495,039
424 53,228 .79 8,116,496
456 57,030 .76 7,824,764
486 60,832 .74 7,563,180
517 64,634 .71 7,311,729
547 68,436 .69 7,104,280
577 72,238 .67 6,895,122
608 76,040 .66 6,739,731 <== 2x size
From the above listing we see that increasing db_cache_size from 304 MB to 334 MB would result in approximately 700,000 fewer physical reads. The figures can be plotted as a 1/x-style curve, and the point of diminishing returns located where the curve flattens out (its second derivative).

2. PGA_AGGREGATE_TARGET

Oracle9i introduced a new advisory utility dubbed v$pga_target_advice. This view shows the marginal changes in optimal, one-pass, and multi-pass PGA execution for different sizes of pga_aggregate_target, ranging from 10% to 200% of the current value.

SELECT ROUND(pga_target_for_estimate/1024/1024) target_mb,
estd_pga_cache_hit_percentage cache_hit_perc,
estd_overalloc_count
FROM v$pga_target_advice;
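The question also mentions SHARED_POOL_ADVICE; a similar advisory view, v$shared_pool_advice, is available from Oracle9i onward. A minimal query (column names as documented for 9i/10g, to be verified on your release) would be:

select shared_pool_size_for_estimate size_mb,
       shared_pool_size_factor       size_factor,
       estd_lc_time_saved
  from v$shared_pool_advice;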

23. What is the difference between ALTER SYSTEM vs. ALTER DATABASE?
Ans - ALTER DATABASE changes information at the database level;
ALTER SYSTEM changes information at the instance level.
Database - the collection of files storing database information (control files, datafiles, redo logs, etc.)
Instance - the processes and memory used to manage a database.

ALTER DATABASE performs operations that affect the database;
ALTER SYSTEM performs operations that affect the instance.
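For illustration (the parameter value and file name are hypothetical):

SQL> alter system set pga_aggregate_target = 200M;
Changes a memory setting of the instance.
SQL> alter database datafile '/u04/oradata/PROD/users_PROD_01.dbf' resize 500M;
Changes a physical file belonging to the database.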


24. How to take an incremental logical backup?
Ans – Use the exp utility with the INCTYPE parameter, which specifies the type of export being performed: COMPLETE, CUMULATIVE, or INCREMENTAL.
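For example (the user/password and file names are placeholders; incremental exports work only in full-database mode and were deprecated in later releases):

$ exp system/manager full=y inctype=complete file=full_sunday.dmp
$ exp system/manager full=y inctype=cumulative file=cum_wednesday.dmp
$ exp system/manager full=y inctype=incremental file=inc_daily.dmp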

25. What is bulk bind or BULK COLLECT?
Ans – These are PL/SQL features for faster processing of DML (array processing in memory): BULK COLLECT fetches a set of rows into a collection in one step, and FORALL (bulk binding) sends DML for a whole collection to the SQL engine in one step, reducing context switches.
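A minimal PL/SQL sketch (the employees table and the 10% raise are purely illustrative):

declare
  type t_emp_ids is table of employees.employee_id%type;
  l_ids t_emp_ids;
begin
  -- BULK COLLECT: fetch all matching rows into a collection in a single round trip
  select employee_id
    bulk collect into l_ids
    from employees
   where department_id = 10;

  -- FORALL (bulk bind): apply the DML to the whole collection with one context switch
  forall i in 1 .. l_ids.count
    update employees
       set salary = salary * 1.10
     where employee_id = l_ids(i);

  commit;
end;
/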

26. Transportable tablespaces?
Ans - An OS file copy is generally much faster than other traditional means of data movement such as export/import or SQL*Loader. However, in Oracle9i Database and below, a restriction limits its usefulness to only a few cases in which both the target and source database run on the same OS platform: you can't transport tablespaces between Solaris and HP-UX, for example.
In Oracle Database 10g, this restriction has disappeared: you can now transport tablespaces between platforms as long as the OS byte orders (endianness) are identical.
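A hedged outline of the 10g Data Pump steps (the tablespace, directory, and file names are hypothetical; a cross-endian move would additionally need RMAN CONVERT, and v$transportable_platform lists the byte order of each platform):

SQL> alter tablespace example_ts read only;
$ expdp system/password directory=dp_dir dumpfile=example_tts.dmp transport_tablespaces=example_ts
$ cp /u01/oradata/PROD/example_ts_01.dbf /u01/oradata/TRGT/
$ impdp system/password directory=dp_dir dumpfile=example_tts.dmp transport_datafiles='/u01/oradata/TRGT/example_ts_01.dbf'
SQL> alter tablespace example_ts read write;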


27. When can a table span across tablespaces?

Ans – A partitioned table can span multiple tablespaces, because each partition can be stored in its own tablespace (see the example below).
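For example (the table and tablespace names are made up):

create table sales_hist (
  sale_id   number,
  sale_date date,
  amount    number
)
partition by range (sale_date) (
  partition p_2007 values less than (to_date('01-JAN-2008','DD-MON-YYYY')) tablespace ts_2007,
  partition p_2008 values less than (maxvalue) tablespace ts_2008
);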

28. What is database replication / advanced replication?
Ans - Replication is the process of creating and maintaining replica versions of database objects (e.g. tables) in a distributed database system. Replication can improve performance and increase the availability of applications because alternate data access options become available. For example, users can access a local database rather than a remote server to minimize network traffic. Furthermore, the application can continue to function if parts of the distributed database are down, as replicas of the data might still be accessible.
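One simple form of replication is a materialized view refreshed over a database link; a sketch (the link, table, and refresh interval are hypothetical, and fast refresh assumes a materialized view log exists on the master table):

create materialized view emp_mv
  refresh fast
  start with sysdate
  next sysdate + 1/24   -- refresh every hour
as
  select * from emp@remote_db;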

29. What is database auditing?
Ans - Traditional Oracle Database auditing options let you track the actions users perform on objects at the macro level, i.e. on DML / DDL statements.

Standard audit records are reviewed through the DBA_AUDIT_TRAIL view.

Fine-grained auditing (auditing down to specific columns or conditions) is set up with DBMS_FGA.ADD_POLICY.
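A hedged sketch of both levels (the schema, table, and policy names are hypothetical; standard auditing also requires audit_trail to be set in the init.ora):

-- Standard auditing: record DML against a specific table
audit insert, update, delete on scott.emp by access;

-- Fine-grained auditing: record access to the SAL column (SELECT is audited by default)
begin
  dbms_fga.add_policy(
    object_schema => 'SCOTT',
    object_name   => 'EMP',
    policy_name   => 'emp_sal_access',
    audit_column  => 'SAL');
end;
/

-- Review the results
select * from dba_audit_trail;
select * from dba_fga_audit_trail;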


30. What is tkprof and how is it used?
Ans - The tkprof tool is a tuning tool used to determine CPU and execution times for SQL statements. You use it by first setting timed_statistics to true in the initialization file and then turning on tracing, either for the entire database via the sql_trace parameter or for a single session using the ALTER SESSION command. Once the trace file is generated, you run the tkprof tool against the trace file and then look at its output. It can also be used to generate explain plan output.
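For example (the trace file name is hypothetical; the file will be found under user_dump_dest):

SQL> alter session set timed_statistics = true;
SQL> alter session set sql_trace = true;
-- run the statements to be analyzed, then format the resulting trace file:
$ tkprof PROD_ora_12345.trc tkprof_report.txt sys=no sort=prsela,exeela,fchela explain=scott/tiger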

31. When should you increase copy latches? What parameters control copy latches?
Ans - When you get excessive contention for the copy latches, as shown by the "redo copy" latch hit ratio. You can increase the number of copy latches via the initialization parameter LOG_SIMULTANEOUS_COPIES, typically to twice the number of CPUs on your system.
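Contention on the copy latches can be checked in v$latch before changing the parameter (which is set in the init.ora and takes effect at the next startup):

select name, gets, misses, immediate_gets, immediate_misses
  from v$latch
 where name in ('redo copy', 'redo allocation');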

32. When looking at the estat events report you see that you are getting buffer busy waits. Is this bad? How can you find what is causing it?
Ans - Buffer busy waits can indicate contention in redo, rollback, or data blocks. Check the v$waitstat view to see which areas are causing the problem: the "class" column tells you which type of block is involved, and the "count" column tells you how many waits occurred for that class. The UNDO classes point to rollback segments, while DATA BLOCK points to the database buffers.
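A quick way to see which block class is involved:

select class, count, time
  from v$waitstat
 where count > 0
 order by count desc;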