Tuesday, March 25, 2008

Patching in Oracle Part I

A patch is a software code change. Oracle Applications is an extremely flexible and powerful suite of products that runs on many different platforms, and extending its use to every business environment is challenging. Because the suite is so flexible and feature rich, Oracle extends the base product functionality through code changes, which are bundled into a “patch”.
Always test a patch on a test system before applying it to the production database. Because there is always the possibility of encountering problems, perform a “cold” system backup of your test instance if one doesn’t already exist.

Patch types Oracle produces

Maintenance Pack: A Maintenance Pack is a consolidation, or bundling, of patches for an entire product area. Maintenance Packs introduce new functionality and are particularly useful for customers who are beginning their implementation: they can start with the application of a Maintenance Pack, such as the 11.5.3 CD Pack, rather than applying the 11.5.1 CD Pack followed by the individual patches released since 11.5.1’s initial availability. Maintenance Packs are cumulative, so a customer does not need to apply 11.5.1, 11.5.2, and 11.5.3 to get to the latest code; they can start with 11.5.3 (or the latest version available). Maintenance Packs can be downloaded from MetaLink, but they are quite large and should instead be ordered as a CD Pack from the Oracle Store. A Maintenance Pack is the largest, or topmost, level of patching done in Oracle Applications. This type of patch contains all of the product-specific Family Packs for the specified release version.

The readme file for a Maintenance Pack lists the patches included in the Maintenance Pack and can be downloaded from MetaLink.

The product minor version (11.5.1, 11.5.2, 11.5.3, etc.) changes with the application of a Maintenance Pack.

Special considerations for upgrade customers:
Maintenance Packs are not for upgrading from previous versions of the Applications, such as 10.7 or 11.0.x. In those cases, customers must use the RapidWiz and AutoUpgrade utilities and complete all the steps specified in the Upgrade Manual before applying the Maintenance Pack. Customers must obtain the CD Pack from the Oracle Store because it contains RapidWiz; RapidWiz cannot be downloaded from MetaLink.

Family Pack: A Family Pack can introduce new functionality in a given product area. It is a consolidated set of patches specific to a product family, such as AOL or HRMS. Family Packs are denoted with a letter, such as A, B, C, and so on, with “A” preceding “B” and “B” preceding “C”. Family Packs are cumulative: a later Family Pack contains the contents of the earlier ones, so they do not all need to be applied in sequence. The functionality changes included in Family Packs are rolled into subsequent Maintenance Packs.

Mini-Pack: The terms megapatch, patchset, and mini-pack are interchangeable and refer to a group of bug fixes that are applied at one time. Mini-packs can be released for a single product or a group of related products (e.g., a Purchasing mini-pack, Foundation mini-pack, or Accounts Payable mini-pack). They are generally given alphabetical suffixes that indicate their position in the sequence of a product’s lifecycle, such as AP.A, AP.B, AP.C, etc. Mini-packs are cumulative and can contain new functionality.

One-off Patch: A "one-off" patch addresses a single fix or enhancement. One-off patches can introduce new functionality and is noted in the readme file for the patch. One-off patches are usually issued when an issue requires an immediate code change that can’t wait for distribution in any other form.

All ERP patches are delivered with the software modules that are needed, including the dependencies for those modules. Some modules, such as forms, have a large number of dependencies. Even a small change, like correcting an LOV for a form, can result in a one-off patch that includes dozens of other modules to satisfy the dependencies.

Diagnostic Patch: A diagnostic patch is sent to a customer to assist Oracle Support Services and Oracle Development obtain diagnostic information. The patch is provided when a product failure cannot be reproduced in an OSS environment and customer specific data is required to continue the investigation.

Interoperability Patch: Interoperability patches can be required when an Oracle Applications instance is migrated to a newer version of the technology stack. The patch is typically included in the DB and Forms technology stack upgrade. For example, when Oracle Applications Release 11 was initially released, it required version 8.0 of the DB; later, when the Applications were certified with 8i, an interoperability patch was required for the Applications to work with 8i. Interoperability patches are quite common in Oracle Applications releases 10.7NCA, 11.0.X, and 11i due to the need to synchronize Oracle Applications modules with the core Oracle database and tools. This type of patch is created when an upgrade occurs in the Oracle software stack and Oracle Applications requires additional modules to maintain functionality.

Translated Patch: A fully language translated patch. From R11i on, all applications patches are automatically translated into 28 different languages. The timing and availability of these patches may vary.

A class of patches that contain legislative data has an additional driver called hrglobal, which may need to be applied. Also, for some groups of patches, it may be beneficial to merge the patches into one set of driver files. Depending upon your implementation, you may also need to deal with multi-language patches and multi-node patching. These topics are discussed in the following sections.

Applying Legislative Patches
For Oracle Payroll customers, there is another category of patch required by the system. The hrglobal patch supports the legislative requirements of multiple countries. Given the nature of this patch, it is updated frequently by Oracle. It is often a post-patch requirement for the mandatory patches released for Oracle Payroll.
To find the latest copy of the hrglobal patch, view MetaLink Note 145837.1. This note will contain the latest patch number for the hrglobal patch, along with a link to the patch installation instructions and a change history for the patch. The hrglobal patch can be downloaded from MetaLink like any other patch. Review the patch’s readme file for required prerequisites.
After unpacking the patch, the adpatch utility can be run to install the patch’s u driver. In addition to the u driver, these patches contain a special hrglobal driver. As a result of these differences, there are additional considerations for applying this type of patch.
Once the u driver has been applied, the DataInstall Java utility needs to be run in order to select needed legislations for install. The syntax for this command is as follows:
jre oracle.apps.per.DataInstall apps apps_password thin
[hostname]:[dbport]:[oracle_sid]
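For example, a hypothetical invocation (the host name, port, and SID below are placeholders for your own values) might look like this:

jre oracle.apps.per.DataInstall apps apps_password thin dbhost01:1521:PROD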

When the DataInstall utility has been executed, the Applications DBA will need to select all relevant legislations. Figure 5-6 shows the main menu for DataInstall.
DataInstall Main Menu
1. Select legislative data to install/upgrade
2. Select college data to install/upgrade
3. Select JIT/Geocode or OTL to install/upgrade
4. Exit to confirmation menu

Figure 5-6. The DataInstall Main Menu
Select option 1 to choose which legislative data to install or upgrade. From the resulting menu, shown in Figure 5-7, you should choose to install any legislative data marked as installed. Note that the selection numbers will change depending upon your version of the hrglobal patch. Check the numbers for the appropriate data.
# Localisation Product(s) Leg. Data? Action
1 Global Human Resources Installed
2 Australia Human Resources
3 Australia Payroll
...
55 United States Human Resources Installed
56 United States Payroll Installed
- Change Action, where action is [I : Install, C : Clear]
Figure 5-7. The DataInstall legislative data submenu
Select the legislative data to be installed by entering the localization number and I. If an incorrect number is selected, you can correct the mistake by entering that number with a C to clear the action.
After all legislative data is marked for install, return to the main menu to select any required college data. When all college data is selected, return to the main menu and select 4 to exit the utility. Upon exiting, an Actions Summary will be displayed. Review that summary to ensure that all required actions have been selected.
The final stage of the legislative patch is to run the adpatch utility to apply the hrglobal driver. This driver is copied to the $PER_TOP/patch/115/driver directory by the patch’s u driver. The same adpatch options for applying other drivers should be used for the hrglobal driver.
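As a minimal sketch, assuming the u driver has already copied hrglobal.drv into place, the final step looks like this (instance-specific adpatch prompts omitted):

$ cd $PER_TOP/patch/115/driver
$ adpatch
(when prompted for the driver file, specify hrglobal.drv)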
Using AD Merge
When applying a group of large patches, such as a Maintenance Pack and a cumulative update, some performance benefit can be gained by using the AD Merge utility to combine the patches into one patch. From my personal experience, merging is one of the best time-reducing features. People often ask how to know which patches to merge and which not to merge. My answer is always that it depends on analysis, which is done with the readme and the docs associated with each patch. A thorough analysis at an early stage will save you downtime and make the patching smoother.
The set of patches to be merged should be copied to a common directory. After the patches are unbundled, the AD Merge utility can be run against the patches. Here is an example:
admrgpch /source_dir /target_dir
The completed merged driver files found in the target directory can be applied as a standard patch would be applied. The merged driver files will have a name like u_merged.drv. A log file, admrgpch.log, will be created in the directory where the utility was run.
For more information, see MetaLink Note 228779.1, “How to Merge Patches Using admrgpch.” The admrgpch utility can be run with several parameters, shown in Table 5-3.
Option Purpose
-s Specifies the source directory containing compressed patch files.
-d Specifies the destination directory for merged patch files.
-verbose Controls the level of detail included in admrgpch output.
-manifest Specifies a text file containing the list of patch files to be merged. This is useful if the source directory includes a large number of patch files.
-logfile Specifies the log file to contain the output from the admrgpch utility.
-merge_name Specifies the name of the merged file. This defaults to “merged” and should be changed to be more descriptive.
Table 5-3. admrgpch Options
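For illustration, here is a hedged example using the options above (the directory names and merge name are hypothetical):

$ admrgpch -s /patches/source -d /patches/target -merge_name mar2008_bundle -logfile admrgpch.log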
When using this utility, thoroughly test the resulting patch.
Applying NLS Patches
For E-Business Suite installations with multiple language requirements, there are patches available for each additional language. Each required NLS patch needs to be applied to Oracle Applications. Oracle provides some recommendations for dealing with NLS patches; these are outlined in MetaLink Note 174436.1.
The U.S. version of the patch should be applied before any of the translation patches. The translation patches may be applied without downtime to the entire system if users of the affected language are not active.
Using admrgpch, it is possible to merge all U.S. patches into one patch, and then merge all non-U.S. patches into a separate patch. Depending upon the application configuration, some variation of this approach may be necessary.
Performing Multi-Node Patching
There are a couple of options available to optimize patching for multi-node environments. As of Oracle Applications 11.5.10, the system can be designed with a shared application-tier filesystem. The shared application filesystem contains the application’s APPL_TOP, COMMON_TOP, and ORACLE_HOME. (MetaLink Note 233428.1 describes sharing the application-tier filesystem.) As a result of this configuration, patching the shared filesystem applies the patch to all nodes.
Prior to 11.5.10, a shared APPL_TOP did not include the ORACLE_HOME. For those systems, Forms and iAS patches must be applied to each Forms and Web node.
In order to increase the performance of the patching process, Distributed AD will execute workers on remote nodes in a multi-node implementation. Distributed AD improves scalability and resource utilization. Distributed AD is only available with AD Minipack H or later, and with a shared Application Tier Filesystem or shared APPL_TOP. More information on this feature can be found in MetaLink Note 236469.1.
If a shared Application Tier Filesystem is not in use, each filesystem will need to be patched separately. A patched filesystem can be cloned to another node if the downtime required to patch the node exceeds the time required to clone the filesystem.
Patch drivers have different requirements when applying them in a multi-node environment. The c driver must be applied to all APPL_TOPs, the d driver is applied on the Admin Node, the g driver is applied to all APPL_TOPs unless the APPL_TOP is only the Admin Node, and the u driver is applied to all APPL_TOPs on all nodes.
Monitoring and Resolving Patching Problems
Patching problems manifest themselves in many different ways. Typically the adpatch session will display an error or will appear to be hung on one task for a long period of time. The first step in resolving the issue is to review the adpatch log file and associated worker log file. Next, the reason the worker failed must be determined and resolved. After resolution has been obtained, adctrl can be used to continue the patching process.
Reviewing Log Files
During and after the application of patches, it is helpful to review log files of the adpatch session and its workers. These files are found in the $APPL_TOP/admin/$CONTEXT_NAME/log directory. The adpatch log filename is specified during the patch process. See the “Using AD Patch” section earlier in the chapter for more details.
In order to monitor the patch from a telnet session other than the one where the patch was started, a simple UNIX command such as tail -f u[patch#].log will display information as it is written to the log file. This is a useful means for monitoring the progress of a patch that is being applied.
The log files for the workers will be named adwork[xxx].log, where [xxx] is the number of the patch worker process. If a particular worker has failed, examine the related log file for detailed information. This information can be researched on MetaLink or used to open an SR with Oracle Support.
For example, the log file listing for the u driver of patch 111112, applied through adpatch using 5 workers, may look like this:
$ls

adwork001.log
adwork002.log
adwork003.log
adwork004.log
adwork005.log
u111112.log
Using AD Control
The administrative tool used to manage patch workers is AD Control, or adctrl. Frequently workers will fail or hang, which will require the Oracle Applications DBA to interface with adctrl. (Common patching errors will be covered later in this chapter.)
AD Control menu options will vary depending upon the AD patch version applied to the instance. When logged in as the application owner on the Admin Node, execute adctrl to display the menu options shown in Figure 5-8.
AD Controller Menu
----------------------------------------------------
1. Show worker status
2. Tell worker to restart a failed job
3. Tell worker to quit
4. Tell manager that a worker failed its job
5. Tell manager that a worker acknowledges quit
6. Restart a worker on the current machine
7. Exit
Figure 5-8. AD Controller Menu
To execute an adctrl menu option, simply type the menu option and press Enter. If options 2–6 are chosen, either specify the number of the worker that requires action, or press Enter for the action to be executed for all workers.
The “Skip Worker” menu option is a hidden adctrl menu option. If a worker needs to be skipped, start adctrl, enter 8, and then enter the worker number. Only use this option if advised by Oracle Support. Even if you use it without Support’s advice, be sure you know what the worker was going to do and how far the patch has progressed. A skipped worker may have been doing work that is required later in the patch, so skipping workers just to keep the patch moving can lead to deeper trouble.

Resolving AD Patch Worker Failure
If a worker has failed, the adpatch session will normally display a failed-worker message. The status of the worker may also be determined using adctrl. If a worker has failed, the worker error can be obtained by viewing the worker log file. Once the worker issue has been resolved, use adctrl to restart the worker.
If a worker has failed, and it is determined that the step the worker was trying to execute may be skipped, the hidden option 8 of the adctrl menu, “Skip Worker,” may be used to skip the worker. It is only advisable to do this if the step is not critical to the environment being patched.
The following are common worker failures that will be seen by the Applications DBA during patching. The error messages will be displayed by the adpatch session or in the worker log file:
Error message: ORA-01013: user requested cancel of current operation
Resolution to error: If this error occurs, simply use adctrl to restart the worker on the current machine.
Error message: Patch not applied successfully, adpatch did not cleanup its restart files (*rf9).
Resolution to error: If this error occurs, execute the following as the instance owner:
$cd $APPL_TOP/admin/$CONTEXT_NAME
$mv restart restart_old
$mkdir restart
After cleaning up the restart files, you may then restart the adpatch session using adpatch.
Error message: ERROR: Javakey subcommand exited with status 1
Resolution to error: If this error occurs, the identity.obj file needs to be re-created. Re-create the identity.obj file, then use adctrl to restart the failed worker.
Error message: No error message is displayed; rather the worker log file states that the worker is complete, yet adctrl indicates that the worker is still running.
Resolution to error: This patching problem occurs when the worker is complete, but did not update patching tables correctly to notify the adpatch session that it has finished. In this scenario, the adpatch session is still waiting for the finish return code from the worker. When this occurs, use adctrl to fail the worker, then restart the worker.
Any form, library, or report that fails to generate during the patch process can be regenerated manually after all patching and post-patching steps have completed. If the object still fails to compile, open an SR.
Additional Tips for Resolving Patching Issues
If a patch has hung or workers have failed, and the reason for this failure cannot be determined, it is advisable to check the number of invalid objects in the database. If the number of invalid objects is high, recompile the invalid objects in parallel and restart the patching session.
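As a minimal sketch, assuming a 10g database (on 9i, adadmin’s “Compile APPS schema” task or $ORACLE_HOME/rdbms/admin/utlrp.sql serves the same purpose), the check and recompile might look like this:

SQL> SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';
SQL> EXEC UTL_RECOMP.RECOMP_PARALLEL(4);  -- recompile invalid objects using 4 parallel threads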
If the adpatch session is hung, and all other methods for resolution have been executed, it may be necessary to bounce the database and restart the patch session. This method for resolving patching issues is sometimes necessary, especially when applying large patches, such as Maintenance Packs.
If a failure occurs during the application of a patch, it may be necessary to apply another patch to resolve the issue. If this type of issue occurs during the application of a large patch, you may want to be able to restart the original patch from the point of failure. MetaLink Note 175485.1 provides details for applying a patch with adpatch already running.

Database Patching

Database patching consists of either upgrades or interim fixes. Database upgrades are typically complex in nature and require installation of new software when upgrading from one point release to another. Obsolete and new initialization parameters must be reviewed when upgrading to a new release of the database.
Database upgrades can be accomplished manually or by using dbmig, the database migration utility. Since the method for upgrading the database is version and platform dependent, the associated readme file for the upgrade must be reviewed, and the steps required to perform the upgrade should be documented.
Interim patch fixes for the database are applied as the owner of the database install with the opatch utility or by running an operating system script. Details on how to apply database patches are outlined in the patch’s readme.
Before upgrading or applying a patch to the database, the oraInst.loc file must point to the correct Oracle inventory location for the database ORACLE_HOME. It is also important to cleanly shut down the database before proceeding, and to perform a cold database backup.
The opatch utility is downloaded from MetaLink as patch number 2617419. The opatch utility requires Perl and a JDK to function; they must be installed and specified in the path and library environment variables. Once the opatch utility has been downloaded and unbundled, the OPatch directory of the unbundled patch should be added to the PATH, as in the following example:
$export PATH=$PATH:/[path_of_2617419]/Opatch
The library path of Perl must also be specified with the following PERL5LIB environment variable, as in the following example:
$export PERL5LIB=[path_of_PERL]/lib
To validate that opatch is functioning properly, execute the following command with the lsinventory option:
$opatch lsinventory
Once opatch has been successfully set up, the database interim patch fix may be applied. To do this, first review the readme file for the patch. Make certain that all prerequisites have been met. Document any post-patching steps that are required. Download the patch and unbundle it. Change to the directory where the patch has been unbundled. Verify that the database has been shut down. Apply the patch by executing opatch as the database owner with the apply parameter, as in the following example:
$opatch apply
To verify that a patch has successfully been applied, the lsinventory option can again be executed. This will display all patches that have been applied to the database.
Note: If the opatch fails, there may be a patch_locked file located under the hidden directory $ORACLE_HOME/.patch_storage. The opatch utility may not be executed until the patch_locked file is removed.

Monday, March 10, 2008

oracle-database-questions-series (III)

1. Data pump Advantage & Architecture.

Ans - New utilities in Oracle Database 10g take performance and versatility to new levels. New import and export features introduced in Oracle Database 10g, called Oracle Data Pump, represent a radical departure from the client/server approach to which database users have grown accustomed over the past several generations of Oracle Database. The server now runs export and import jobs. You can load or unload massive amounts of data quickly using parallelization, and you can adjust the degree of parallelism on the fly. Export and import jobs are now restartable, so a failure doesn't necessarily mean starting over. The API is exposed and easy to use; it's simple to create an import or export job from PL/SQL. And once started, such jobs run in the background, but you can check status and make modifications, from anywhere, using the client utilities.
Describing the Oracle Export and Import utilities as slow is like saying the Titanic sprung a small leak. Stories of Export and Import executions running for hours (and sometimes days) are commonplace. Oracle has finally created a new data movement utility, called Data Pump, to increase the performance of data transfers. Oracle states that Data Pump’s performance on data retrieval is 60% faster than Export, and 15 to 20 times faster on data input than Import, and that Data Pump is able to take advantage of parallel processing to increase performance. Hopefully, this will alleviate the performance issues related to transferring data between Oracle databases.

Architecture

Before Oracle Database 10g, (Oracle7 through Oracle9i) the import and export utilities ran as clients and did the bulk of the work. Data being exported was read by the database instance, passed over the connection to the export client, and then written to disk. All the data was single-threaded through the one export process. Data volumes today are often magnitudes larger than when this architecture was first put in place, making that single export process a bottleneck because performance of an export job is limited by the throughput that the export utility can sustain.
With Oracle Database 10g and the new Data Pump architecture, all the work is now done by the database instance, which can parallelize the work in two ways: by creating multiple Data Pump worker-processes to read/write data being exported/imported, and by creating parallel I/O server processes to more quickly SELECT or INSERT that data. Gone is the single-process bottleneck.
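As a brief illustration of this server-side, parallel approach (the directory object and file names here are hypothetical), a full export job might be started like this; the degree of parallelism can later be adjusted from the interactive prompt after re-attaching to the job with ATTACH:

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=full%U.dmp LOGFILE=full_exp.log FULL=y PARALLEL=4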

2. What are the new parameters in Oracle10g.
Ans – db_recovery_file_dest
Db_unique_name
Sga_target
Skip_unusable_indexes


3. Physical Standby Database Vs Logical Standby Database.

Physical Standby Databases

A physical standby database is physically identical to the primary database, with on-disk database structures that are identical to the primary database on a block-for-block basis. The database schema, including indexes, must be the same.

Data Guard maintains a physical standby database by performing managed recovery operations. When it is not performing recovery operations, a physical standby database can be open for read-only operations.

* Managed recovery

The physical standby database is maintained by applying the archived redo logs on the standby system using the Oracle recovery mechanism. The recovery operation applies changes block-for-block using the physical row ID. The database cannot be opened for read or read/write operations while redo data is being applied.

* Open read-only

The physical standby database can be open for read-only operations so that you can execute queries on the database. While open for read-only operations, the standby database can continue to receive redo logs but application of the data from the logs is deferred until the database resumes managed recovery operations.

Although the physical standby database cannot perform both managed recovery and read-only operations at the same time, you can switch between them. For example, you can run a physical standby database to perform managed recovery operations, then open it so applications can perform read-only operations to run reports, and then change it back to perform managed recovery operations to apply outstanding archived redo logs. You can repeat this cycle, alternating between managed recovery and read-only operations, as necessary.

In either case, the physical standby database is available to perform backup operations. Furthermore, the physical standby database will continue to receive redo logs even if they are not being applied at that moment.
Benefits of a Physical Standby Database

A physical standby database provides the following benefits:

* Disaster recovery and high availability

A physical standby database enables a robust and efficient disaster recovery and high availability solution. Easy-to-manage switchover and failover capabilities allow easy role reversals between primary and physical standby databases, minimizing the downtime of the primary database for planned and unplanned outages.

* Data protection

Using a physical standby database, Data Guard can ensure no data loss, even in the face of unforeseen disasters. A physical standby database supports all datatypes, and all DDL and DML operations that the primary can support. It also provides safeguards against data corruptions and user errors. Storage-level physical corruptions on the primary database do not propagate to the standby database. Similarly, logical corruptions or user errors that cause the primary database to be permanently damaged can be resolved. Finally, the redo data is validated when it is applied to the standby database.

* Reduction in primary database workload

Oracle Recovery Manager (RMAN) can use physical standby databases to off-load backups from the primary database saving valuable CPU and I/O cycles. The physical standby database can also be opened in read-only mode to perform reporting and queries.

* Performance

The redo apply technology used by the physical standby database applies changes using low-level recovery mechanisms, which bypass all SQL level code layers and therefore is the most efficient mechanism for applying changes. This makes the redo apply technology a highly efficient mechanism to propagate changes among databases.


Logical Standby Databases

A logical standby database is initially created as an identical copy of the primary database, but it later can be altered to have a different structure. The logical standby database is updated by applying SQL statements: it uses LogMiner technology to convert the log information received from the primary database into SQL statements. This allows users to access the standby database for queries and reporting purposes at any time. Thus, the logical standby database can be used concurrently for data protection and reporting operations.

Data Guard automatically applies archived redo log information to the logical standby database by transforming data in the redo logs into SQL statements and then executing the SQL statements on the logical standby database. Because the logical standby database is updated using SQL statements, it must remain open. Although the logical standby database is open for read/write operations, its target tables for the regenerated SQL are available only for read-only operations. While those tables are being updated, they can be used simultaneously for other tasks such as reporting, summations, and queries. Moreover, these tasks can be optimized by creating additional indexes and materialized views on the maintained tables.

A logical standby database has some restrictions on datatypes, types of tables, and types of data definition language (DDL) and data manipulation language (DML) operations. Unsupported datatypes and tables are described in more detail in Section 4.1.4.
Benefits of a Logical Standby Database

A logical standby database provides similar disaster recovery, high availability, and data protection benefits as a physical standby database. It also provides the following specialized benefits:

* Efficient use of standby hardware resources

A logical standby database can be used for other business purposes in addition to disaster recovery requirements. It can host additional database schemas beyond the ones that are protected in a Data Guard configuration, and users can perform normal DDL or DML operations on those schemas at any time. Because the logical standby tables that are protected by Data Guard can be stored in a different physical layout than on the primary database, additional indexes and materialized views can be created to improve query performance and suit specific business requirements.

* Reduction in primary database workload

A logical standby database can remain open at the same time its tables are updated from the primary database, and those tables are simultaneously available for read access. This makes a logical standby database an excellent choice to do queries, summations, and reporting activities, thereby off-loading the primary database from those tasks and saving valuable CPU and I/O cycles.

Note that you cannot have Data Guard with Oracle SE; it requires Enterprise Edition. Without Data Guard, however, you can implement manual (user-managed) recovery for a physical standby. This means writing your own scripts to copy archived redo logs to the standby site and then recover the standby database using them. This would only simulate "maximum performance" mode.

"maximum availability mode" and "open read only for reporting" are conflicting goals. If you want to roll forward your physical standby manually say, once every 24 hours, and then open it for read-only reporting each day, that is very reasonable and achievable with Standard Edition (assuming you can write good scripts).


4. Lost 1 table 2 days back how to recover that table explain.

Ans - There could be 4 Option
1. import the table from the dump file.
2. Recover from the Flash Recovery
3. Recover database Until time
4. Recover TSPITR
Case 1: The current time is 12 p.m. on 9-Dec-97. Your training DBA just told you he dropped the employee (EMP) table. The table was dropped around 11:45 a.m. Database activity is minimal because most staff are currently in a meeting. The table must be recovered. The steps:
1. If the database is open, shut it down using either the “normal” or “immediate” options.
SQL> shutdown immediate
2. Restore all datafiles from backup (the most recent if possible):
$ cp /disk1/backup/*.dbf /disk1/data/
3. Mount the database.
SQL> startup mount
4. Recover the database:
SQL> recover database until time '1997-12-09:11:44:00';
5. To synchronize datafiles with control files and redo logs, open the database using the “resetlogs” option:
SQL> alter database open resetlogs;
6. Perform a full database backup.
7. When recovery is successful and the backup has completed, notify users that the database is available for use, and that any data entered after the recovery time (11:44 a.m.) will need to be re-entered.
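On 10g, option 2 can be as simple as restoring the dropped table from the recycle bin, assuming the recycle bin was enabled when the table was dropped (the table name is illustrative):

SQL> FLASHBACK TABLE emp TO BEFORE DROP;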
5. How to check / synchronize Primary Database with Standby database. (views)


Ans - You can verify archived logs and SCNs on the primary and standby databases by comparing the results of the two following queries.

COL name FORMAT a48;
SELECT name, first_change#, next_change# FROM v$archived_log;

SELECT name, checkpoint_change#, archive_change# FROM v$database;

SELECT * FROM v$logstdby;

SELECT facility,severity,error_code "Err",
callout "Cal",timestamp,message FROM v$dataguard_status
ORDER BY timestamp;

SELECT severity,message FROM v$dataguard_status
ORDER BY timestamp;

Use this query to verify Log Apply services are functioning properly.

SELECT APPLIED_SCN, APPLIED_TIME, READ_SCN, READ_TIME,
NEWEST_SCN, NEWEST_TIME
FROM DBA_LOGSTDBY_PROGRESS;

When the APPLIED_SCN and NEWEST_SCN values are equal, all available
changes have been applied. If APPLIED_SCN is less than NEWEST_SCN, SQL
Apply has not yet caught up. We can also check archives on both databases. On the primary, run this query:

SELECT name FROM v$archived_log;
On the standby run this query:

SELECT FILE_NAME, SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE#,
TIMESTAMP, DICT_BEGIN, DICT_END, THREAD#
FROM DBA_LOGSTDBY_LOG ORDER BY SEQUENCE#;

The following query can also be used to verify archives on the standby database and the primary database; you can run it on both servers.

SELECT serial#,logstdby_id,type,status FROM v$logstdby;

6. What are the modes of materialized view refresh?
Ans – In Oracle9i, if you specify REFRESH FAST for a single-table aggregate materialized view, you must have created a materialized view log for the underlying table, or the refresh command will fail. When creating a materialized view, you have the option of specifying whether the refresh occurs manually (ON DEMAND) or automatically (ON COMMIT, or on a schedule via DBMS_JOB). To use the fast warehouse refresh facility, you must specify the ON DEMAND mode. To refresh the materialized view, call one of the procedures in DBMS_MVIEW.
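As a small sketch of the fast-refresh requirements (the emp table and columns are illustrative), the materialized view log must exist before the REFRESH FAST materialized view is created, and an ON COMMIT aggregate view needs the COUNT columns shown:

CREATE MATERIALIZED VIEW LOG ON emp
  WITH ROWID (deptno, sal) INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW emp_sal_mv
  REFRESH FAST ON COMMIT
AS
SELECT deptno, COUNT(*) cnt, COUNT(sal) cnt_sal, SUM(sal) sum_sal
FROM emp
GROUP BY deptno;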


7. Which permanent data file which store uncommitted data?

Ans - Uncommitted Data on the Data Files
If the user's server process is unable to find a free data buffer, DBWR is notified to flush a section of changed data buffers to their corresponding data files. The section of data buffers being flushed may contain both committed and uncommitted changes. This can result in Oracle having both uncommitted and committed data blocks on the data files. For performance reasons, Oracle will not check each block in the section being flushed to see if it contains uncommitted data; Oracle wants to write that section as quickly as possible.
A key point is that before this "panic flush" of changed data buffers to disk occurs, DBWR will signal LGWR to write all before images of uncommitted blocks in the section being flushed. This process ensures that all before images of uncommitted data changes can be retrieved from the redo log file during a recovery.
Committed Data Not On the Data Files
When a transaction commits, the server process places a commit record in the log buffer and tells LGWR to perform a contiguous write of all the redo log buffer entries up to and including the commit record to the redo log files (not the data files!). Oracle is now able to guarantee that the changes will not be lost even if there is an instance failure. Note that the flushing of dirty data buffers is performed independently by DBWR and can occur either before or after the commit. This can result in Oracle having committed data in the redo log files but not in the data files.

The Synchronization Process

If a failure occurs and the instance terminates abnormally, Oracle must restore the database to a transaction consistent state just prior to the failure. The database must remove all uncommitted data from the data files (because of "panic flushes") and replay all changes committed and recorded in the redo log files but not recorded on the data files (remember that a commit forces LGWR to flush, not DBWR). Oracle restores the database to a transaction consistent state using roll forward and roll backward processes.

8. How to convert Dictionary Managed Table space to Locally Managed Table space.

Ans - Convert between LMT and DMT:

The DBMS_SPACE_ADMIN package allows DBAs to quickly and easily
convert between LMT and DMT mode. Look at these examples:
Example :
SQL> exec dbms_space_admin.Tablespace_Migrate_TO_Local('ts1');
SQL> exec dbms_space_admin.Tablespace_Migrate_FROM_Local('ts2');


9. Control file lost how to recover. You don’t have any backup.

Ans – Create the control file manually (CREATE CONTROLFILE) and recover the database, as sketched below.
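A rough sketch of that approach (the database name, file paths, and sizes below are entirely hypothetical; the CREATE CONTROLFILE statement must list your actual datafiles and redo logs):

SQL> STARTUP NOMOUNT
SQL> CREATE CONTROLFILE REUSE DATABASE "PROD" RESETLOGS ARCHIVELOG
       LOGFILE GROUP 1 ('/u01/oradata/prod/redo01.log') SIZE 50M,
               GROUP 2 ('/u01/oradata/prod/redo02.log') SIZE 50M
       DATAFILE '/u01/oradata/prod/system01.dbf',
                '/u01/oradata/prod/users01.dbf';
SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
SQL> ALTER DATABASE OPEN RESETLOGS;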

10. What is Data Guard what are the advantages.

Ans - Data Guard Broker

Oracle's Data Guard Broker is the management framework that is used to create, configure, administer and monitor a Data Guard environment. The Data Guard Broker provides the following benefits:
* Simplifies the creation of Data Guard environments by providing wizards to create and configure physical or logical standby databases. Data Guard is able to generate all of the files necessary (parameter, tnsnames.ora, etc.) to establish the connectivity between the standby and primary database servers.
* Allows administrators to invoke a failover or switchover operation with a single command and control complex role changes across all systems in the configuration. A switchover is a planned transfer of control from the primary to the standby, while a failover is an unplanned transfer of control due to some unforeseen event. By automating activities such as failover and switchover, the possibility of errors is reduced.
* Provides performance-monitoring tools to monitor log transport and log apply times.
* Provides a GUI tool (Data Guard Manager) that allows DBAs to administer a primary/multiple-standby configuration with a simple point-and-click interface.
* Allows administrators to manage all components of the configuration, including primary and standby servers and databases, log transport services, and log apply services.
* Is highly integrated with Oracle Enterprise Manager to provide e-mail and paging capabilities.
An Oracle background server process called DMON is started on every site that is managed by the broker. The DMON process is created when the Data Guard Broker monitor is started on the primary or standby database servers. The DMON process is responsible for interacting with the local instance and the DMON processes running on the other servers to perform the functions requested by the Data Guard Manager or command line interface. The DMON process is also responsible for monitoring the health of the broker configuration.
DMON maintains a persistent configuration file on all of the servers managed by the Data Guard Broker framework. The configuration file contains entries that provide details on all objects in the configuration and their statuses. The broker uses this information to send information back to the Data Guard Manager, configure and start the site and database resource objects and control each object's behavior.

11. What is Oracle ASM what are its advantages?

Ans – The advantages of ASM:
* Disk addition—Adding a disk becomes very easy. No downtime is required and file extents are redistributed automatically.
* I/O distribution—I/O is spread over all the available disks automatically, without manual intervention, reducing the chance of a hot spot.
* Stripe width—Striping can be fine grained, as for redo log files (128K for a faster transfer rate), or coarse, as for datafiles (1MB for transfer of a large number of blocks at one time).
* Buffering—The ASM filesystem is not buffered, making it direct-I/O capable by design.
* Kernelized asynch I/O—There is no special setup necessary to enable kernelized asynchronous I/O, without using raw or third-party filesystems such as Veritas Quick I/O.
* Mirroring—Software mirroring can be set up easily if hardware mirroring is not available.
* Automatic load balancing
* Software data redundancy
* Software RAID 1 with double or triple mirrors
* Elimination of fragmentation (this seems like a bit of a stretch, as fragmentation may also be eliminated with the use of Locally Managed Tablespaces, a feature introduced in Oracle 8i)
* Elimination of file management

12. What is Mean Time Recovery (MTTR)?
Ans _ Fast-Start Time-Based Recovery
Rather than wait for specific events such as log switches to trigger checkpoints, Oracle9i can be instructed to use fast-start checkpointing architecture to allow the DBWn processes to periodically write dirty buffers to disk and incrementally advance the checkpoint position. This results in a reduced Mean Time To Recovery (MTTR) and a reduction in I/O spikes associated with log switches.

The FAST_START_MTTR_TARGET initialization parameter is used to specify the number of seconds crash recovery should take. Oracle uses this target time to configure the FAST_START_IO_TARGET and LOG_CHECKPOINT_INTERVAL parameters to reduce crash recovery time to a level as close to the target time as possible. The FAST_START_IO_TARGET, LOG_CHECKPOINT_INTERVAL, and LOG_CHECKPOINT_TIMEOUT parameters should not be set explicitly, as they may interfere with the process.

The maximum value for FAST_START_MTTR_TARGET is 3600 (1 hour), with larger values being rounded down. There is no minimum value, but values that are too low may not be achievable, since recovery time is bounded below by the lower limit on the target number of dirty buffers, which is 1000, plus the time needed to mount the database.

If the value is set too low, the effective MTTR target will be the best MTTR target the system can achieve. If the value is set too high, the effective MTTR is estimated based on the whole buffer cache being dirty. The ESTIMATED_MTTR column in the V$INSTANCE_RECOVERY view can be used to view the effective MTTR. If the parameter setting, shown by the TARGET_MTTR column, is consistently different from the effective MTTR, it should be adjusted, since this means it is set at an unrealistic value.

Remember that the extra checkpointing required to reduce the crash recovery time may compromise the system performance. A balance must be reached between general system performance and crash recovery time. Set FAST_START_MTTR_TARGET to zero to disable fast-start checkpointing.

The FAST_START_IO_TARGET initialization parameter is used to specify the maximum number of dirty blocks in the buffer cache. Its use has been deprecated in favour of the FAST_START_MTTR_TARGET. In addition the DB_BLOCK_MAX_DIRTY_TARGET parameter has been removed.
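For instance, the target can be set and then compared against the effective value (the 300-second target is just an example):

ALTER SYSTEM SET fast_start_mttr_target = 300;

SELECT target_mttr, estimated_mttr
FROM v$instance_recovery;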

13. If there is High CPU Usage in the server who do you diagnosis and resolve on Linux / Solaris Box?
Ans – Check the OS CPU usage (for example with top on Linux or prstat on Solaris) and find which process is consuming the CPU. If it is an Oracle process, identify the session and SQL statement it is running and tune that statement.
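One way to map a hot OS process back to its SQL, as a rough sketch (assuming a dedicated server process whose OS PID was taken from top or prstat):

SELECT s.sid, s.serial#, s.username, sq.sql_text
FROM v$process p, v$session s, v$sqlarea sq
WHERE p.addr = s.paddr
AND s.sql_address = sq.address
AND p.spid = '&os_pid';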

14. New features of oracle 9i?

15. New features of oracle 10g?
Ans - 1. Data Pump
2. Flashback
3. Automatic Workload Repository (AWR)
Automatic Workload Repository (AWR) defaults to a collection interval of 60 minutes and collects data that is the foundation for all of the other self-tuning features. AWR is very much like STATSPACK, especially the level-5 STATSPACK collection mechanism where top SQL is collected every hour, based on your rolling thresholds for high-use SQL. In addition to the SQL, AWR collects detailed run-time statistics on the top SQL (disk reads, executions, consistent gets) and uses this information to adjust the rolling collection threshold. This technique ensures that AWR always collects the most resource-intensive SQL.

4. Automatic Storage Management(ASM)
The Automatic Storage Management (ASM) feature allows for the automatic stripe-and-mirror everywhere approach to be used to automatically load balance the disk I/O subsystem and remove the need for the DBA to specify physical file locations when allocating a tablespace.
5. Transportable Tablespace
OS file copy is generally much faster than other traditional means of data movement such as export/import or SQL*Loader. However, in Oracle9i Database and below, a restriction limits its usefulness to only a few cases in which both the target and source database run on the same OS platform—you can't transport tablespaces between Solaris and HP-UX, for example.
In Oracle Database 10g, this restriction has disappeared: you can now transport tablespaces between platforms as long as the OS byte orders are identical

6. Audit Trail
First, FGA now supports DML statements in addition to selects. These changes are recorded in the same place, the table FGA_LOG$, and displayed through the view DBA_FGA_AUDIT_TRAIL. In addition to DMLs, you can now choose to trigger a trail only if all relevant columns are accessed, or even when a few of them are accessed

7. RMAN
RMAN enhancements include improved incremental backups and compressed backup sets.


16. What is the advantage and disadvantages in Auto extend on and off?
Ans - One disadvantage of autoextend is the performance impact: if a user is inserting rows and fills the tablespace to the point that the file has to extend, the end user incurs the cost of waiting for the file to extend. This is avoided if the DBA anticipates the growth and manually pre-extends the datafile, or adds another datafile to the tablespace.


17. What are the kernel parameters which are required to change in the time of oracle installation?

shmmax = 2147483648 (To verify, execute: cat /proc/sys/kernel/shmmax)
shmmni = 4096 (To verify, execute: cat /proc/sys/kernel/shmmni)
shmall = 2097152 (To verify, execute: cat /proc/sys/kernel/shmall) (for 10g R1)
shmmin = 1 (To verify, execute: ipcs -lm |grep "min seg size")
shmseg = 10 (It's hardcoded in the kernel - the default is much higher)
semmsl = 250 (To verify, execute: cat /proc/sys/kernel/sem | awk '{print $1}')
semmns = 32000 (To verify, execute: cat /proc/sys/kernel/sem | awk '{print $2}')
semopm = 100 (To verify, execute: cat /proc/sys/kernel/sem | awk '{print $3}')
semmni = 128 (To verify, execute: cat /proc/sys/kernel/sem | awk '{print $4}')
SHMMAX
The SHMMAX parameter is used to define the maximum size (in bytes) for a shared memory segment and should be set large enough for the largest SGA size. If the SHMMAX is set incorrectly (too low), it is possible that the Oracle SGA (which is held in shared segments) may be limited in size. An inadequate SHMMAX setting would result in the following:
ORA-27123: unable to attach to shared memory segment
You can determine the value of SHMMAX by performing the following:
# cat /proc/sys/kernel/shmmax
33554432
SHMMNI
We now look at the SHMMNI parameters. This kernel parameter is used to set the maximum number of shared memory segments system wide. The default value for this parameter is 4096. This value is sufficient and typically does not need to be changed.
You can determine the value of SHMMNI by performing the following:
# cat /proc/sys/kernel/shmmni
4096

Semaphores
Now that we have configured our shared memory settings, it is time to take care of configuring our semaphores. A semaphore can be thought of as a counter that is used to control access to a shared resource. Semaphores provide low-level synchronization between processes (or threads within a process) so that only one process (or thread) has access to the shared segment, thereby ensuring the integrity of that shared resource. When an application requests semaphores, it does so using "sets".

18. What are the packages install when you install RMAN.
Ans – DBMS_RCVMAN, DBMS_BACKUP_RESTORE, DBMS_RCVCAT


19. What are the things we need to check in stats pack report.
Ans – 1. Instance Efficiency Percentage
a) Buffer Nowait %
b) Buffer Hit%
c) Library Hit%
d) Execute to Parse %
2. Load Profile
a) Redo size
b) Logical reads
c) Physical reads
d) Physical writes
e) Parse
f) Sorts
3. Top 5 Timed Event
a) CPU Time
b) Direct path read Temp
c) Control file parallel write
d) Control file sequential read
4. Table space I/O Stats
5. Other Waits:
a) Buffer Busy waits
b) DB file Sequential Read
c) DB file scattered read
d) Direct Path Read
e) Enqueue
f) Free Buffer Wait
g) Latch Free
h) Library Cache pin
i) Log file space switch / log buffer event
j) Log file sync
k) Undo segment statistics
l) Dictionary cache hits and misses
m) Table space IO / File I/O

20. What are the things we need to check in Explain plan report

Ans - Remove unnecessary large-table full table scans - Unnecessary full table scans cause a huge amount of unnecessary I/O and can drag down an entire database. The tuning expert first evaluates the SQL based on the number of rows returned by the query. If the query returns less than 40 percent of the table rows in an ordered table, or less than 7 percent of the rows in an unordered table, the query can be tuned to use an index in lieu of the full table scan. The most common tuning for unnecessary full table scans is adding indexes. Standard B-tree indexes can be added to tables, and bitmapped and function-based indexes can also eliminate full table scans. The decision about removing a full table scan should be based on a careful examination of the I/O costs of the index scan vs. the costs of the full table scan, factoring in the multiblock reads and possible parallel execution. In some cases an unnecessary full table scan can be forced to use an index by adding an index hint to the SQL statement.

Cache small-table full table scans - In cases where a full table scan is the fastest access method, the tuning professional should ensure that a dedicated data buffer is available for the rows. In Oracle7 you can issue alter table xxx cache. In Oracle8 and beyond, the small table can be cached by forcing it into the KEEP pool.

Verify optimal index usage - This is especially important for improving the speed of queries. Oracle sometimes has a choice of indexes, and the tuning professional must examine each index and ensure that Oracle is using the proper index. This also includes the use of bitmapped and function-based indexes.

Verify optimal JOIN techniques - Some queries will perform faster with NESTED LOOP joins, others with HASH joins, while still others favor sort-merge joins.

21. What is the Advantage of RMAN?
Ans – 1. Configure one time
2. Automatic backup of the control file
3. Backup space management: setting default retention policies
4. Improved archived log backup
5. Centrally maintained backup information catalog
6. Scripting available
7. Performance benefit (multiple channel allocation, parallel option)
8. No generation of extra redo during open-database backups
9. Block corruption detection
10. Compressed backup (saves disk space)

22. What is PGA_AGGREGATE_TARGET, SHARED_POOL_ADVICE, PGA_TARGET_ADVICE, DB_CACHE_ADVICE, MTTR?

1. Data Cache advice - The v$db_cache_advice utility shows the marginal changes in physical data block reads for different sizes of db_cache_size. Bear in mind that STATSPACK can provide similar data, and most Oracle tuning professionals use STATSPACK and v$db_cache_advice together to monitor the effectiveness of their data buffers.

These advisory utilities are extremely important for the Oracle DBA who must adjust the sizes of the RAM areas to meet current processing demands
Using v$db_cache_advice
The following query can be used to perform the cache advice function, once the db_cache_advice has been enabled and the database has run long enough to give representative results.
-- ***********************************************************
-- Display cache advice
-- ***********************************************************


column c1 heading 'Cache Size (meg)' format 999,999,999,999
column c2 heading 'Buffers' format 999,999,999
column c3 heading 'Estd Phys|Read Factor' format 999.90
column c4 heading 'Estd Phys| Reads' format 999,999,999

select
size_for_estimate c1,
buffers_for_estimate c2,
estd_physical_read_factor c3,
estd_physical_reads c4
from
v$db_cache_advice
where
name = 'DEFAULT'
and
block_size = (SELECT value FROM V$PARAMETER
WHERE name = 'db_block_size')
and
advice_status = 'ON';
The output from the script is shown below. Note that the values range from 10 percent of the current size to double the current size of the db_cache_size.

Estd Phys Estd Phys
Cache Size (meg) Buffers Read Factor Reads
---------------- ------------ ----------- ------------
30 3,802 18.70 192,317,943 <== 10% size
60 7,604 12.83 131,949,536
91 11,406 7.38 75,865,861
121 15,208 4.97 51,111,658
152 19,010 3.64 37,460,786
182 22,812 2.50 25,668,196
212 26,614 1.74 17,850,847
243 30,416 1.33 13,720,149
273 34,218 1.13 11,583,180
304 38,020 1.00 10,282,475 Current Size
334 41,822 .93 9,515,878
364 45,624 .87 8,909,026
395 49,426 .83 8,495,039
424 53,228 .79 8,116,496
456 57,030 .76 7,824,764
486 60,832 .74 7,563,180
517 64,634 .71 7,311,729
547 68,436 .69 7,104,280
577 72,238 .67 6,895,122
608 76,040 .66 6,739,731 <== 2x size
From the above listing we see that increasing the db_cache_size from 304 meg to 334 meg would result in approximately 700,000 fewer physical reads. This can be plotted as a 1/x function and the exact optimal point computed from the second derivative of the function.

2. PGA_AGGREGATE_TARGET

Oracle9i has introduced a new advisory utility dubbed v$pga_target_advice. This utility will show the marginal changes in optimal, one-pass, and multipass PGA execution for different sizes of pga_aggregate_target, ranging from 10% to 200% of the current value.

SELECT ROUND(pga_target_for_estimate/1024/1024) target_mb,
estd_pga_cache_hit_percentage cache_hit_perc,
estd_overalloc_count
FROM v$pga_target_advice;

23. What is the difference between alter system Vs alter database?
Ans - alter database changes information at the database level;
alter system changes information at the instance level.
Database - Collection of files storing database information (configuration/datafiles/redo logs, etc.)
Instance - Processes and memory used to manage a database.

Alter database performs operations that affect the database;
alter system performs operations that affect the instance.
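Two quick illustrative commands (the parameter value and file path are hypothetical):

ALTER SYSTEM SET sga_target = 1G SCOPE=SPFILE;  -- instance-level change
ALTER DATABASE DATAFILE '/u01/oradata/prod/users01.dbf' RESIZE 500M;  -- database-level change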


24. How to take incremental logical backup?
Ans – Use exp with INCTYPE=, which specifies the type of export being performed: COMPLETE, CUMULATIVE, or INCREMENTAL.

25. What is Bulk Bind or Collect:
Ans – A PL/SQL feature for faster processing of DML (array processing in memory); see the sketch below.
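A minimal sketch (the emp table and the 10% raise are illustrative):

DECLARE
  TYPE empno_tab IS TABLE OF emp.empno%TYPE;
  l_ids empno_tab;
BEGIN
  -- BULK COLLECT: fetch all matching rows into a collection in one operation
  SELECT empno BULK COLLECT INTO l_ids
  FROM emp
  WHERE deptno = 10;
  -- FORALL: bind the whole collection into a single DML execution
  FORALL i IN 1 .. l_ids.COUNT
    UPDATE emp SET sal = sal * 1.1 WHERE empno = l_ids(i);
  COMMIT;
END;
/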

26. Transportable Tablespace?
Ans - OS file copy is generally much faster than other traditional means of data movement such as export/import or SQL*Loader. However, in Oracle9i Database and below, a restriction limits its usefulness to only a few cases in which both the target and source database run on the same OS platform—you can't transport tablespaces between Solaris and HP-UX, for example.
In Oracle Database 10g, this restriction has disappeared: you can now transport tablespaces between platforms as long as the OS byte orders are identical


27. When did a table spanned across table spaces?

Ans – A partitioned table can span across multiple tablespaces.

28. What is Database replication / Advance Replication?
Ans - Replication is the process of creating and maintaining replica versions of database objects (e.g. tables) in a distributed database system. Replication can improve performance and increase the availability of applications because alternate data access options become available. For example, users can access a local database rather than a remote server to minimize network traffic. Furthermore, the application can continue to function if parts of the distributed database are down, as replicas of the data might still be accessible.

29. What is Database Auditing?
Ans - Traditional Oracle Database auditing (the AUDIT statement) lets you track the actions users perform on objects at the macro level for DML/DDL statements; enable it with the AUDIT_TRAIL initialization parameter and review the records through DBA_AUDIT_TRAIL. Fine-grained auditing (FGA) goes a step further and records access based on content, using dbms_fga.add_policy.
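For instance (the schema, table, and policy names are illustrative):

AUDIT SELECT, UPDATE ON scott.emp BY ACCESS;

BEGIN
  dbms_fga.add_policy(
    object_schema => 'SCOTT',
    object_name   => 'EMP',
    policy_name   => 'AUDIT_EMP_SAL',
    audit_column  => 'SAL');
END;
/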


30. What is tkprof and how is it used?
Ans - The tkprof tool is a tuning tool used to determine CPU and execution times for SQL statements. You use it by first setting timed_statistics to TRUE in the initialization file, then turning on tracing either for the entire database via the sql_trace parameter or for a single session using the ALTER SESSION command. Once the trace file is generated, you run tkprof against it and examine the output; tkprof can also generate EXPLAIN PLAN output.
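Typical usage (the trace file name and credentials are illustrative):

ALTER SESSION SET timed_statistics = TRUE;
ALTER SESSION SET sql_trace = TRUE;
-- run the statements of interest, then at the OS prompt:
tkprof ora_12345.trc tkprof_out.txt sys=no explain=scott/tiger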

31. When should you increase copy latches? What parameters control copy latches?
Ans - When you get excessive contention for the copy latches, as shown by misses on the "redo copy" latch. You can increase the number of copy latches via the initialization parameter LOG_SIMULTANEOUS_COPIES, typically to twice the number of CPUs on your system (the parameter is obsolete in later releases, where Oracle sizes it automatically).
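Contention can be checked directly in v$latch:

SELECT name, gets, misses, immediate_gets, immediate_misses
FROM v$latch
WHERE name = 'redo copy';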

32. When looking at the estat events report you see that you are getting buffer busy waits. Is this bad? How can you find what is causing it?
Ans - Buffer busy waits can indicate contention in redo, rollback, or data blocks, so they are worth investigating. Check the v$waitstat view to see which areas are causing the problem: the "class" column tells you which type of block is involved (UNDO classes point at rollback segments, DATA BLOCK at database buffers), and the "count" column tells you how often waits occurred there.
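For example:

SELECT class, count, time
FROM v$waitstat
ORDER BY count DESC;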

Monday, March 3, 2008

Oracle Database Questions - Series II

1. How many memory layers are in the shared pool?
Ans - a) Library cache
b) Dictionary (row) cache
c) Shared SQL area (held within the library cache)

2. How do you find out from the RMAN catalog if a particular archive log has been backed up?
Ans - LIST BACKUP OF ARCHIVELOG SEQUENCE 453;

3. How can you tell how much space is left on a given file system and how much space each of the file system's subdirectories take up?
Ans - df -h shows the space left on each file system; du -sh * run from inside the file system shows how much each subdirectory uses.

4. Define the SGA and:
• How you would configure SGA for a mid-sized OLTP environment?
• What is involved in tuning the SGA?
Ans - Generally the SGA is sized based on several factors, such as:
1. Size of the database (small, medium, or large)
2. Whether the application is OLTP or DSS
3. Total memory on the server
4. Whether other applications run on the same server - for example, an Oracle ERP application server and database installed on the same machine have competing memory needs
5. Number of users expected to use the application
6. Type of users (super users, light users, or both)
7. Type of session connections (dedicated or multithreaded)
If the database is dedicated to one server and its size is known, take the defaults from the install manual and DBCA and refine the SGA parameter values during the tuning cycle.

Tuning the SGA involves tuning all of its components, i.e.:
a) Shared pool
b) Buffer cache
c) Large pool
d) Java pool
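On 9i and later the current sizes of the dynamically managed components can be checked with:

SELECT component, current_size/1024/1024 AS size_mb
FROM v$sga_dynamic_components;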

5. What is the cache hit ratio, what impact does it have on performance of an Oracle database and what is involved in tuning it?
Ans - A cache hit ratio measures how often a requested block or object is found in memory rather than re-read or re-parsed, so low ratios generally mean extra physical I/O or parsing and poorer performance. The main ones are:
a) Library cache hit ratio, from v$librarycache
b) Dictionary cache hit ratio, from v$rowcache
c) Buffer cache hit ratio, which should generally be above 90%, though physical reads and writes should also be considered (V$SYSSTAT)
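The classic buffer cache calculation from V$SYSSTAT:

SELECT 1 - (phy.value / (cur.value + con.value)) AS buffer_hit_ratio
FROM v$sysstat cur, v$sysstat con, v$sysstat phy
WHERE cur.name = 'db block gets'
  AND con.name = 'consistent gets'
  AND phy.name = 'physical reads';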

6. Other than making use of the statspack utility, what would you check when you are monitoring or running a health check on an Oracle 8i or 9i database?
Ans - a) OEM
b) TKPROF
c) The alert log and trace files

7. How do you tell what your machine name is and what is its IP address?
Ans - Windows / Linux:
a) ipconfig / ifconfig for the IP address
b) hostname for the machine name

8. How would you go about verifying the network name that the local_listener is currently using?
Ans - Run lsnrctl status to see the host and port the listener is using, and cross-check the listener.ora entry and the LOCAL_LISTENER initialization parameter.

9. You have 4 instances running on the same UNIX box. How can you determine which shared memory and semaphores are associated with which instance?

Ans - Set ORACLE_SID to the instance in question and run Oracle's sysresv utility, which reports that instance's shared memory segment and semaphore IDs; ipcs | grep oracle shows them all at the OS level.

10. What view(s) do you use to associate a user's SQLPLUS session with his o/s process?
Ans – v$session and v$process.
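The two views join on the process address (the username is illustrative):

SELECT s.sid, s.serial#, s.username, p.spid AS os_pid
FROM v$session s, v$process p
WHERE s.paddr = p.addr
  AND s.username = 'SCOTT';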

11. What is the recommended interval at which to run statspack snapshots, and why?
Ans - One hour is the usual recommendation: shorter intervals add snapshot overhead, while longer ones average away the peaks you are trying to catch.



12. What spfile/init.ora file parameter exists to force the CBO to make the execution path of a given
statement use an index, even if the index scan may appear to be calculated as more costly?

Ans - optimizer_index_cost_adj
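It defaults to 100; lower values make index access paths look proportionally cheaper to the CBO. It can be set in the spfile/init.ora or tried per session first:

ALTER SESSION SET optimizer_index_cost_adj = 25;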

13. Assuming today is Monday, how would you use the DBMS_JOB package to schedule the execution of a given procedure owned by SCOTT to start Wednesday at 9AM and to run subsequently every other day at 2AM?
Ans - VARIABLE jobno NUMBER;
EXEC dbms_job.submit(:jobno, 'scott.my_proc;', next_day(trunc(SYSDATE),'WEDNESDAY') + 9/24, 'trunc(SYSDATE) + 2 + 2/24');
(The procedure name is a placeholder; the NEXT_DATE expression gives the coming Wednesday at 09:00, and the interval re-evaluates after each run to two days later at 02:00.)

14. How would you edit your CRONTAB to schedule the running of /test/test.sh to run every other day at 2PM?
Ans - Run crontab -e and append the line: 0 14 */2 * * /test/test.sh
(minute 0, hour 14 = 2PM, every second day of the month)

15. What do the 9i dbms_standard.sql_txt() and dbms_standard.sql_text() procedures do?
Ans - The DBMS_STANDARD package provides language facilities that help your application interact with Oracle. Function sql_txt (sql_text OUT ora_name_list_t) RETURN BINARY_INTEGER;

16. In which dictionary table or view would you look to determine at which time a snapshot or MVIEW last successfully refreshed?
Ans - dba_mview_refresh_times (dba_snapshot_refresh_times in older releases).

17. How would you best determine why your MVIEW couldn't FAST REFRESH?
Ans - Run dbms_mview.explain_mview against it; the results (written to MV_CAPABILITIES_TABLE, created by utlxmv.sql) list each refresh capability and the reason it is disallowed.

18. How does propagation differ between Advanced Replication and Snapshot Replication (read-only)?
Ans - With read-only snapshot replication, data flows one way: snapshot sites periodically pull changes from the master via refreshes, and nothing is propagated back. With Advanced (multimaster) Replication, changes made at any master site are propagated to the others, either synchronously or asynchronously via deferred transaction queues.

19. Which dictionary view(s) would you first look at to
understand or get a high-level idea of a given Advanced Replication environment?
Ans - dba_repcatlog ; dba_repsites_new ; dba_repextensions

20. How would you begin to troubleshoot an ORA-3113 error?
Ans - ORA-03113 "end-of-file on communication channel" means the client lost its connection to the server process, usually because that process died or a network component dropped the session. Start with the alert log and trace files for the underlying error, then look at network/firewall timeouts.

21. Which dictionary tables and/or views would you look at to diagnose a locking issue?
Ans - sys.v_$lock, sys.v_$session, sys.obj$, sys.user$, sys.v_$process
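A classic self-join on v$lock shows who is blocking whom:

SELECT l1.sid AS blocker, l2.sid AS waiter
FROM v$lock l1, v$lock l2
WHERE l1.block = 1
  AND l2.request > 0
  AND l1.id1 = l2.id1
  AND l1.id2 = l2.id2;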

22. An automatic job running via DBMS_JOB has failed. Knowing only that "it's failed", how do you approach troubleshooting this issue?
Ans - Check dba_jobs (the FAILURES and BROKEN columns) and the alert log/trace files for the error the job raised (failed runs are logged with ORA-12012), fix the cause, then re-run it with dbms_job.run or let it retry on schedule.

23. How would you extract DDL of a table without using a GUI tool?
Ans - SELECT dbms_metadata.get_ddl('TABLE','DEPT','SCOTT') FROM DUAL;

24. You're getting high "buffer busy waits" - how can you find what's causing it?
Ans - Check STATSPACK, then identify which block class is causing the waits: data blocks, segment header, free list blocks, or undo header.


25. What query tells you how much space a tablespace named "test" is taking up, and how much space is remaining?
Ans - Combine dba_data_files (total space allocated) with dba_free_space (space remaining).
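For example:

SELECT df.total_mb,
       df.total_mb - NVL(fs.free_mb, 0) AS used_mb,
       NVL(fs.free_mb, 0) AS free_mb
FROM (SELECT SUM(bytes)/1024/1024 AS total_mb
        FROM dba_data_files WHERE tablespace_name = 'TEST') df,
     (SELECT SUM(bytes)/1024/1024 AS free_mb
        FROM dba_free_space WHERE tablespace_name = 'TEST') fs;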

26. Database is hung. Old and new user connections alike hang on impact. What do you do? Your SYS SQLPLUS session IS able to connect.
Ans - Check the alert.log for errors first; a classic cause is a stuck archiver (archive destination full), in which case freeing space releases the hang. Also check for blocking locks and for SGA/OS memory exhaustion.

27. Database crashes. Corruption is found scattered among the file system, neither of your doing nor of Oracle's. What database recovery options are available? Database is in archive log mode.
Ans - Complete database recovery: restore the affected files from backup and roll forward through the archived redo logs (recovery can be done at the database, tablespace, or datafile level), losing no committed data.

28. Illustrate how to determine the amount of physical CPUs a Unix Box possesses (LINUX and/or Solaris).
Ans - cat /proc/cpuinfo on Linux; psrinfo on Solaris. (top and vmstat 5 show per-CPU activity but not a clean count.)

29. How do you increase the OS limitation for open files (LINUX and/or Solaris)?
Ans - On Linux: echo 65536 > /proc/sys/fs/file-max (or set fs.file-max via /etc/sysctl.conf); on Solaris, set rlim_fd_max in /etc/system. The per-process limit is raised with ulimit -n.

30. Provide an example of a shell script which logs into SQLPLUS as SYS, determines the current
date, changes the date format to include minutes & seconds, issues a drop table command, displays
the date again, and finally exits.
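Ans - A minimal sketch (the table name is a placeholder):

#!/bin/sh
sqlplus -s "/ as sysdba" <<EOF
SELECT sysdate FROM dual;
ALTER SESSION SET nls_date_format = 'DD-MON-YYYY HH24:MI:SS';
DROP TABLE scott.test_tab;
SELECT sysdate FROM dual;
EXIT;
EOF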

31. Explain how you would restore a database using RMAN to Point in Time?
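Ans - In outline (the timestamp is illustrative): with the database mounted, restore and recover to the target time, then open with RESETLOGS (and take a fresh backup, since a new incarnation begins):

RMAN> RUN {
  SET UNTIL TIME "TO_DATE('2008-03-01 12:00:00','YYYY-MM-DD HH24:MI:SS')";
  RESTORE DATABASE;
  RECOVER DATABASE;
}
RMAN> ALTER DATABASE OPEN RESETLOGS;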

32. How does Oracle guarantee data integrity of data changes?
Ans - Declarative constraints (primary key, foreign key, unique, not null, check) enforce the integrity rules, while transactions backed by undo and redo guarantee the changes themselves are atomic and recoverable.

33. Which environment variables are absolutely critical in order to run the OUI?
Ans - On UNIX: ORACLE_HOME, PATH, ORA_NLS33, LD_LIBRARY_PATH, and DISPLAY (the OUI is a GUI and needs a working X display).

34. What SQL query from v$session can you run to show how many sessions are logged in as a particular user account?
Ans - SELECT username, COUNT(*) FROM v$session GROUP BY username;
(add WHERE username = 'SCOTT' to limit it to one account)

35. Why does Oracle not permit the use of PCTUSED with indexes?
Ans - Index entries must be kept in key order, so Oracle cannot re-insert a new entry into just any block that has free space the way it can with a table block; freed space in an index block can only be reused by keys that belong in that block. Oracle therefore manages index block reuse itself - it can tell the optimal time to relink a block onto the freelist chain - and raises an error if you try to specify PCTUSED for an index.

36. What would you use to improve performance on an insert statement that places millions of rows into that table?
Ans - Use a direct-path insert, disable the constraints, and drop (or mark unusable) the indexes, rebuilding them afterwards; where recoverability allows, combine this with NOLOGGING.
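For example (the table names are illustrative):

ALTER TABLE big_tab NOLOGGING;
INSERT /*+ APPEND */ INTO big_tab SELECT * FROM staging_tab;
COMMIT;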

37. If you're unsure in which script a sys or system-owned object is created, but you know it's in a script from a specific directory, what UNIX command from that directory structure can you run to find your answer?
Ans - grep, e.g. grep -il "object_name" *.sql lists the scripts in that directory that mention the object.

38. How would you configure your networking files to connect to a database by the name of DSS which resides in domain icallinc.com?
Ans - Set NAMES.DEFAULT_DOMAIN = icallinc.com in sqlnet.ora and create a tnsnames.ora alias DSS.icallinc.com whose HOST points at the server running the DSS database; clients can then connect with just "DSS".

39. You create a private database link and, upon connection, it fails with ORA-02085: database link ... connects to ...
What is the problem? How would you go about resolving this error?
Ans - GLOBAL_NAMES is set to TRUE, which requires a database link's name to match the global name of the remote database. Either rename the link to match the remote GLOBAL_NAME or set GLOBAL_NAMES = FALSE.

40. I have my backup RMAN script called "backup_rman.sh". I am on the target database. My catalog username/password is rman/rman. My catalog db is called rman. How would you run this shell script from the O/S such that it would run as a background process?
Ans - nohup ./backup_rman.sh > backup_rman.log 2>&1 &

41. Explain the concept of the DUAL table.
Ans - DUAL is a table owned by SYS but accessible to every user, with exactly one column (DUMMY) and one row; it is used to select expressions, functions, and pseudocolumns that are not tied to any real table, e.g. SELECT SYSDATE FROM dual;

42. What are the ways tablespaces can be managed and how do they differ?
Ans - Two ways: locally managed or dictionary managed. Locally-managed tablespaces have the following advantages over dictionary-managed tablespaces:
a) Local management of extents avoids recursive space management operations, which can occur in dictionary-managed tablespaces if consuming or releasing space in an extent results in another operation that consumes or releases space in a rollback segment or data dictionary table.
b) Local management of extents automatically tracks adjacent free space, eliminating the need to coalesce free extents.
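For example (the file path is illustrative):

CREATE TABLESPACE test_data
  DATAFILE '/u01/oradata/test01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;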

43. From the database level, how can you tell under which time zone a database is operating?
Ans - select DBTIMEZONE from dual;

44. What's the benefit of "dbms_stats" over "analyze"?
Ans - Starting with the introduction of the dbms_stats package, Oracle provides a simple way for the Oracle professional to collect statistics for the CBO. The old-fashioned analyze table and dbms_utility methods for generating CBO statistics are obsolete and somewhat dangerous to SQL performance because they don't always capture high-quality information about tables and indexes. The CBO uses object statistics to choose the best execution plan for all SQL statements.
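A typical call against the standard SCOTT.EMP table (CASCADE gathers the index statistics as well):

EXEC dbms_stats.gather_table_stats(ownname => 'SCOTT', tabname => 'EMP', cascade => TRUE);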

45. Typically, where is the conventional directory structure chosen for Oracle binaries to reside?
Ans - $ORACLE_HOME/bin; under the OFA (Optimal Flexible Architecture) convention, ORACLE_HOME itself is typically /u01/app/oracle/product/<version>.

46. You have found corruption in a tablespace that contains static tables that are part of a database that is in NOARCHIVELOG mode. How would you restore the tablespace without losing new data in the other tablespaces?