Friday, November 13, 2009

Flashback in Oracle

Flashback
The main steps involved are:
Enable archiving.
Set Flashback parameters in database and init.ora
Enable Flashback
Grants and Permissions
Test flashback manually.

Enable archiving.
SQL> alter system set log_archive_dest_1='LOCATION=/u05/app/clvs1app/db/apps_st/data/flashback';
SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Total System Global Area 481259520 bytes
Fixed Size 1314212 bytes
Variable Size 427819612 bytes
Database Buffers 37748736 bytes
Redo Buffers 14376960 bytes
Database mounted.
SQL> !echo $ORACLE_SID
CLVS1
SQL> alter database archivelog;
Database altered.
Set Flashback parameters in database and init.ora
SQL> ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=10080;
System altered.
Note: this sets a retention target of 10080 minutes (7 days), as discussed with Jason.
SQL> alter system set db_recovery_file_dest_size=10g;
System altered.
SQL> alter system set db_recovery_file_dest='/u05/app/clvs1app/db/apps_st/data/flashback';
System altered.
Set both of the above parameters in the init.ora file as well.
Enable Flashback
SQL> ALTER DATABASE FLASHBACK ON;
Database altered.
SQL> alter database open;
Database altered.
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 15
Next log sequence to archive 17
Current log sequence 17
SQL>
Flashback logs are created under /u05/app/clvs1app/db/apps_st/data/flashback/CLVS1/flashback.
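To verify that everything took effect, query V$DATABASE; if the steps above succeeded, it should report YES and ARCHIVELOG:

SQL> select flashback_on, log_mode from v$database;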

Optionally, add supplemental logging (useful if log mining tools such as LogMiner or Streams will read these logs):

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;

Supplemental logging can be disabled with:
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;
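To check which supplemental logging levels are currently enabled, query V$DATABASE:

SQL> select supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui from v$database;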

Grants and Permissions

SQL> grant execute on dbms_flashback to test_user;
Grant succeeded.
SQL> grant select any transaction to test_user;
Grant succeeded.
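With these grants in place, test_user can rewind its session's view of the data through DBMS_FLASHBACK. A minimal sketch (assuming test_user also has SELECT on the table being queried):

SQL> exec dbms_flashback.enable_at_time(systimestamp - interval '10' minute);
SQL> select count(*) from scott.emp1;  -- rows as of 10 minutes ago
SQL> exec dbms_flashback.disable;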
Test flashback manually.
SQL> create table scott.emp1 as select * from scott.emp;
Table created.
SQL> select count(*) from scott.emp1;
COUNT(*)
----------
14
SQL> delete from scott.emp1 where rownum <= 8;
8 rows deleted.
SQL> alter session set nls_date_format='YYYY/MM/DD HH24:MI:SS';
Session altered.
SQL> select sysdate from dual;
SYSDATE
-------------------
2009/09/08 07:57:44
SQL> delete from scott.emp1;
6 rows deleted.
SQL> commit;
Commit complete.
SQL> select sysdate from dual;
SYSDATE
-------------------
2009/09/08 07:58:09
SQL> select count(*) from scott.emp1;
COUNT(*)
----------
0
SQL> alter table scott.emp1 enable row movement;
Table altered.
SQL> flashback table scott.emp1 to timestamp TO_TIMESTAMP('2009/09/08 07:57:44','YYYY/MM/DD HH24:MI:SS');
Flashback complete.
SQL> select count(*) from scott.emp1;
COUNT(*)
----------
6
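Since Flashback Database is now enabled, the entire database could also be rewound. A hedged sketch (not run in this test; unlike FLASHBACK TABLE, this affects the whole database and requires an OPEN RESETLOGS):

SQL> shutdown immediate
SQL> startup mount
SQL> flashback database to timestamp to_timestamp('2009/09/08 07:57:44','YYYY/MM/DD HH24:MI:SS');
SQL> alter database open resetlogs;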

Monday, July 20, 2009

Oracle Streams

Oracle Streams for data movement.
One of the challenges of today's distributed business environments is the sharing of information among a multitude of applications and databases. As the number of applications grows, you end up with several databases, which may even be from different vendors. Sharing information in such a complex and disparate environment can be demanding.
Oracle Streams provides a flexible infrastructure that businesses can use to simplify information sharing tasks.
What Is Streams?
Oracle Streams captures database changes at a source database, stages the changes, propagates the changes to one or more destination databases, and then applies the changes at the destination database(s). Using Oracle Streams, enterprise systems can capture, propagate, and apply information as follows:
• Within an Oracle database
• Between two Oracle databases
• Among multiple Oracle databases
• Between an Oracle database and a non-Oracle database
How Streams Works
Oracle Streams begins by capturing changes. The changes (to data, tables, schemas, and so on) that take place in a database are recorded in the database redo log files. The Streams capture process extracts these changes from the redo log files and formats each change into a logical change record (LCR). The LCRs are then stored in a queue (staged). Next, Streams propagates LCRs from one queue (the producer queue) to another (the consumer queue) and can then apply (or consume) the LCRs from the consumer queue to the destination database.
Oracle Streams can be used in performing tasks such as
• Data replication
• Data warehouse extraction and loading
• Event notification
• Message queuing
• Database platform migration
• Database and application upgrading
This article illustrates how you can set up Streams data replication from one Oracle database to another.
Setting up Streams between Two Oracle Databases
The example in this article illustrates the replication of a source database (TRNG) to a destination database (STREAMTST). (The two databases reside on the same machine in this example, but you can use the same set of steps if they reside on different machines.) You can take the following step-by-step approach to set up a Streams replication environment:
1. Set up ARCHIVELOG mode.
2. Set up the Streams administrator.
3. Set initialization parameters.
4. Create a database link.
5. Set up source and destination queues.
6. Set up supplemental logging at the source database.
7. Configure the capture process at the source database.
8. Configure the propagation process.
9. Create the destination table.
10. Grant object privileges.
11. Set the instantiation system change number (SCN).
12. Configure the apply process at the destination database.
13. Start the capture and apply processes.
The following sections describe the process for each step and include sample code where applicable.
1. Set up ARCHIVELOG mode. The Streams capture process reads information from the redo log files of the database. To ensure that the information in the redo log files is available for capture, you must run the source database in ARCHIVELOG mode. In this example, you must set the source database to run in ARCHIVELOG mode. Refer to Oracle Database Administrator's Guide for instructions on how to set up ARCHIVELOG mode.
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
2. Set up the Streams administrator. The Streams environment is managed by an administrative database user. To manage the Streams environment, the Streams administrator needs some specific privileges and has to create some tables to store Streams-related information.
For each database participating in the Streams data sharing environment, you need to create a user account and designate this account as the Streams administrator. Don't use this account for any other purpose, and don't use the SYS or the SYSTEM user as the Streams administrator.
The Streams administrator creates some tables in its default tablespace. You should specify a tablespace other than SYSTEM for this purpose. You can use the USERS tablespace or another tablespace for storing the tables owned by the Streams administrator, but for ease of management, best practices recommend that you use a separate tablespace for Streams. Use the following command to create the tablespace for the Streams administrator:
CREATE TABLESPACE streams_tbs DATAFILE '/u01/app/trngapp/oracle/db_11g/oradata/streamtst/streams_tbs.dbf' SIZE 25M;
Now create the Streams administrator user in the database, as follows:
CREATE USER strmadmin
IDENTIFIED BY strmadmin
DEFAULT TABLESPACE streams_tbs
TEMPORARY TABLESPACE temp
QUOTA UNLIMITED ON streams_tbs;

Now grant the CONNECT, RESOURCE, and DBA roles to the Streams administrator:
SQL> grant connect, resource, dba to strmadmin;

Grant succeeded.


Use the GRANT_ADMIN_PRIVILEGE procedure in the DBMS_STREAMS_AUTH package to grant the required privileges to the Streams administrator:
SQL> begin dbms_streams_auth.grant_admin_privilege
(grantee => 'strmadmin',
grant_privileges => true);
end;
/
PL/SQL procedure successfully completed.
grant select_catalog_role, select any dictionary to strmadmin;

Grant succeeded.

Target Database: STREAMTST
SQL> conn sys@STREAMTST as sysdba
Enter password:
Connected.
SQL> create user strmadmin identified by strmadmin;

User created.

SQL> grant connect, resource, dba to strmadmin;

Grant succeeded.

SQL> begin dbms_streams_auth.grant_admin_privilege
2 (grantee => 'strmadmin',
3 grant_privileges => true);
4 end;
5 /

PL/SQL procedure successfully completed.

Note that Streams does not require you to use the following roles, but granting these roles can assist with administration:
GRANT SELECT_CATALOG_ROLE
TO strmadmin;
GRANT SELECT ANY DICTIONARY
TO strmadmin;
Complete the preceding steps to set up a Streams administrator on each of the databases participating in the Streams data sharing environment. In this example, these steps need to be executed on both the TRNG and the STREAMTST databases.
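To confirm the setup on each database, you can query DBA_STREAMS_ADMINISTRATOR, which lists one row per user that has been granted Streams administration privileges:

SQL> select username, local_privileges, access_from_remote from dba_streams_administrator;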

3. Set initialization parameters. Set specific initialization parameters at the databases participating in the Streams data sharing. Table 1 describes these initialization parameters and the values they must be set to.

Table 1: Initialization parameters and their values

Parameter              Value
GLOBAL_NAMES           TRUE (source and destination)
COMPATIBLE             9.2.0 or higher (source and destination)
STREAMS_POOL_SIZE      100M or more (source and destination)
JOB_QUEUE_PROCESSES    2 or higher (source)

• The GLOBAL_NAMES parameter must be set to TRUE in both the source and the destination databases. This influences the database link creation in Step 4.
• For Streams to work, the COMPATIBLE parameter must be 9.2.0 or higher in both the source and the destination databases. For this example, use the value 10.1.0, because you are setting up Streams replication in an Oracle 10g database.
• Streams uses system global area (SGA) memory for storing the information required for capture and apply. To allocate memory in the SGA for use by Streams, set up a Streams pool by specifying the STREAMS_POOL_SIZE parameter. Its default value is 0, which means Streams instead uses up to 10 percent of the shared pool, so Oracle recommends setting STREAMS_POOL_SIZE to at least 100MB. Both the source and the destination databases must account for this SGA memory.
Setup INIT parameters
Source Database: TRNG
SQL> conn sys@TRNG as sysdba
Enter password:
Connected.
SQL> alter system set global_names=true;

System altered.

SQL> alter system set streams_pool_size = 100m;

System altered.

Target Database: STREAMTST
SQL> conn sys@STREAMTST as sysdba
Enter password:
Connected.
SQL> alter system set global_names=true;

System altered.

SQL> alter system set streams_pool_size = 100m;

System altered.


• The JOB_QUEUE_PROCESSES parameter must be set to 2 or higher in the source database(s). In this example, you must set JOB_QUEUE_PROCESSES at the source database (TRNG) to 2 or higher.
4. Create a database link. You need to create a database link from the source database to the destination database. In this example, you will create a database link from TRNG to STREAMTST (and a second link from STREAMTST back to TRNG for convenience).
Create Database Link
Source Database: TRNG
SQL> conn strmadmin/strmadmin@TRNG
Connected.
SQL> create database link STREAMTST
2 connect to strmadmin
3 identified by strmadmin
4 using 'STREAMTST';

Database link created.

Target Database: STREAMTST
SQL> conn strmadmin/strmadmin@STREAMTST
Connected.
SQL> create database link TRNG
2 connect to strmadmin
3 identified by strmadmin
4 using 'TRNG';

Database link created.
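A quick sanity check of each link before proceeding (run from the respective database):

SQL> select sysdate from dual@STREAMTST;   -- on TRNG
SQL> select sysdate from dual@TRNG;        -- on STREAMTST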
5. Set up source and destination queues. The data moves from the source to the destination database through queues. Use the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package to set up the queues. By default, this procedure creates a queue table named streams_queue_table and a queue named streams_queue; you can override these names through the queue_table and queue_name parameters of SET_UP_QUEUE. The default names work well, unless you want to create multiple queues and multiple queue tables.

Execute this procedure on both the source and the destination databases. When you execute it on the source database (TRNG), it creates the source queue, and when you execute it on the destination database (STREAMTST), it creates the destination queue. Because you are accepting the default queue name, the names of the source and destination queues will be the same. In a later step, you will associate these two queues with the propagation process.

For the purpose of the replication example in this article, accept the default names by executing the procedure as follows:
Setup Source and Destination queues
Source Database: TRNG
SQL> conn strmadmin/strmadmin@TRNG
Connected.
SQL> EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE();

PL/SQL procedure successfully completed.
Target Database: STREAMTST
SQL> conn strmadmin/strmadmin@STREAMTST
Connected.
SQL> EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE();
PL/SQL procedure successfully completed.
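On either database you can verify that the queue and its queue table were created:

SQL> select name, queue_table, enqueue_enabled, dequeue_enabled from dba_queues where owner = 'STRMADMIN';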
Set up the schema for Streams.
Schema: SCOTT
Table: TAJ
NOTE: Unlock the SCOTT schema first; in 10g it is locked by default.
Source Database: TRNG
SQL> conn sys@TRNG as sysdba
Enter password:
Connected.
SQL> alter user scott account unlock identified by tiger;
User altered.
SQL> conn scott/tiger@TRNG
Connected.
SQL> create table TAJ ( no number primary key,name varchar2(20),ddate date);
Table created.
Target Database: STREAMTST
SQL> conn sys@STREAMTST as sysdba
Enter password:
Connected.
SQL> alter user scott account unlock identified by tiger;
User altered.
SQL> conn scott/tiger@STREAMTST
Connected.
SQL> create table TAJ ( no number primary key,name varchar2(20),ddate date);
Table created.
6. Set up supplemental logging at the source database. Before you start capturing changes at the source database, you need to add supplemental logging on the tables being changed. Supplemental logging puts additional information in the redo logs that helps during the apply process. For more information on supplemental logging, refer to the Oracle Streams Replication Administrator's Guide.
To add supplemental logging, connect to the source database as the schema owner (SCOTT, in this example) and run the following statement:
Setup Supplemental logging at the source database
Source Database: TRNG
SQL> conn scott/tiger@TRNG
Connected.
SQL> alter table taj add supplemental log data (primary key,unique) columns;
Table altered.
7. Configure the capture process at the source database. The capture process initiates replication by capturing the changes in the source database. It then formats each change into an LCR and enqueues the LCRs.
Create a capture process to extract the changes from the redo logs. You can configure a capture process to run on the source database—called a local capture—or remotely on another database—called a downstream capture.
While creating a capture process, you add rules to specify which changes to capture and which ones to discard. The rules are combined into rule sets. The capture process can have a positive rule set and a negative rule set. For a replication capture process to work, you need to add a positive rule set that specifies that the data manipulation language (DML) changes as well as the data definition language (DDL) changes be captured. This procedure call creates the local Streams capture process, named capture_stream. Note that you can use the optional parameter source_database to specify another database for a downstream capture. If you omit this parameter (as in this example) or set the parameter value to NULL, the procedure will create a local capture process.
The streams_type parameter indicates that this procedure will create a capture process. (Note that the same procedure will later be used to create an apply process.) The inclusion_rule parameter with the value set to true means that this procedure will create a positive rule set for the capture process. The true value for the include_dml parameter means that a rule will be created for DML changes, and a true value for the include_ddl parameter means that a rule will be created for DDL changes. The table_name parameter indicates that these rules are associated with the TAJ table in the SCOTT schema. You can also set up rules for a schema or for the entire database, using the ADD_SCHEMA_RULES and ADD_GLOBAL_RULES procedures, respectively. See the PL/SQL Packages and Types Reference manual for more information on these packages.
Do this by using the ADD_TABLE_RULES procedure of the DBMS_STREAMS_ADM package:
Configure capture process at the source database
Source Database: TRNG
SQL> conn strmadmin/strmadmin@TRNG
Connected.
SQL> begin dbms_streams_adm.add_table_rules
( table_name => 'scott.taj',
streams_type => 'capture',
streams_name => 'capture_stream',
queue_name=> 'strmadmin.streams_queue',
include_dml => true,
include_ddl => true,
inclusion_rule => true);
end;
/

PL/SQL procedure successfully completed.
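The new capture process can be inspected in DBA_CAPTURE; it stays DISABLED until it is started in step 13:

SQL> select capture_name, queue_name, status from dba_capture;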

8. Configure the propagation process. Once the changes are captured and enqueued, propagate the changes to the destination database. To do so, you create a propagation process and associate the source queue with the destination queue. (You created the source and destination queues, each named the default streams_queue, in the source and destination databases.) This procedure call creates a propagation process named TRNG_to_STREAMTST, for which the source queue is the streams_queue in the TRNG database and the destination queue is the streams_queue in the STREAMTST database. The procedure also adds DML and DDL rules to the positive rule set.
Configure the propagation process
Source Database: TRNG
SQL> conn strmadmin/strmadmin@TRNG
Connected.
SQL> begin dbms_streams_adm.add_table_propagation_rules
( table_name => 'scott.taj',
streams_name => 'TRNG_TO_STREAMTST',
source_queue_name => 'strmadmin.streams_queue',
destination_queue_name => 'strmadmin.streams_queue@STREAMTST',
include_dml => true,
include_ddl => true,
source_database => 'TRNG',
inclusion_rule => true);
end;
/
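You can confirm that the propagation was created and see which queues it connects:

SQL> select propagation_name, source_queue_name, destination_queue_name, destination_dblink from dba_propagation;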
9. Create the destination table. Before you can start replicating the DDL and DML changes on a source table to the destination database, the table must exist in the destination database. If the table doesn't already exist in the destination database, you have several options for creating the object in the destination database, such as using Data Pump, export/import, RMAN, transportable tablespaces, and so on.
10. Grant object privileges. On the destination database, the Streams administrator applies the changes captured from the source database. To be able to apply the changes to the destination tables, the Streams administrator must have the required privileges on the relevant objects. For example, if the changes need to be applied to the TAJ table of the SCOTT schema, you need to execute the following after connecting to the destination (STREAMTST) database:
GRANT ALL ON scott.TAJ TO strmadmin;
11. Set the instantiation System Change Number (SCN). Now you need to set the instantiation SCN for the table from the source database you want to replicate. This ensures that changes captured from the source table before the instantiation SCN was set will not be applied at the destination database.
This SCN is used to set the instantiation SCN for the TAJ table of the SCOTT schema in the destination database. Note that the SET_TABLE_INSTANTIATION_SCN procedure of the DBMS_APPLY_ADM package is called via the STREAMTST database link, which means that although this procedure is being called while you are connected to TRNG, it will be executed in the STREAMTST database.
If the table didn't previously exist in the destination database and you use export/import, Data Pump, or transportable tablespaces to copy it, the instantiation SCN will be set automatically for the table.
Set the instantiation system change number (SCN)
Source Database: TRNG
SQL> CONN STRMADMIN/STRMADMIN@TRNG
Connected.
SQL> declare
source_scn number;
begin
source_scn := dbms_flashback.get_system_change_number();
dbms_apply_adm.set_table_instantiation_scn@STREAMTST
( source_object_name => 'scott.taj',
source_database_name => 'TRNG',
instantiation_scn => source_scn);
end;
/
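The recorded SCN can be verified on the destination side:

SQL> conn strmadmin/strmadmin@STREAMTST
SQL> select source_object_owner, source_object_name, instantiation_scn from dba_apply_instantiated_objects;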
12. Configure the apply process at the destination database. Now you create an apply process and associate the destination queue with it. You also add rules for the apply process. You achieve this by calling the ADD_TABLE_RULES procedure of the DBMS_STREAMS_ADM package:
Configure the apply process at the destination database
Target Database: STREAMTST
SQL> conn strmadmin/strmadmin@STREAMTST
Connected.
SQL> begin dbms_streams_adm.add_table_rules
( table_name => 'scott.taj',
streams_type => 'apply',
streams_name => 'apply_stream',
queue_name => 'strmadmin.streams_queue',
include_dml => true,
include_ddl => true,
source_database => 'TRNG',
inclusion_rule => true);
end;
/

PL/SQL procedure successfully completed.
This procedure creates an apply process (as indicated by the streams_type parameter) for the TAJ table in the SCOTT schema. The apply process, named apply_stream, is associated with the streams_queue. The apply process also adds DML and DDL rules to the positive rule set (as indicated by the parameter inclusion_rule).
Note that earlier you used DBMS_STREAMS_ADM.ADD_TABLE_RULES to create (and add rules for) a capture process (Step 7).
13. Start the capture and apply processes. Now that you have configured all necessary objects and processes, all you need to do is start the capture and apply processes. To start the capture process, connect to the source database and execute the START_CAPTURE procedure of the DBMS_CAPTURE_ADM package:
Start the capture and apply processes
Source Database: TRNG
SQL> conn strmadmin/strmadmin@TRNG
Connected.
SQL> begin dbms_capture_adm.start_capture
( capture_name => 'capture_stream');
end;
/
Similarly, to start the apply process, connect to the destination database and execute the START_APPLY procedure of the DBMS_APPLY_ADM package. Before you do that, however, best practices recommend that you set the disable_on_error parameter of the apply process to n, so that the apply process will continue even if it encounters errors.
Target Database: STREAMTST
SQL> conn strmadmin/strmadmin@STREAMTST
Connected.
SQL> begin dbms_apply_adm.set_parameter
( apply_name => 'apply_stream',
parameter => 'disable_on_error',
value => 'n');
end;
/

PL/SQL procedure successfully completed.
SQL> begin
dbms_apply_adm.start_apply
( apply_name => 'apply_stream');
end;
/
PL/SQL procedure successfully completed.
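At this point both processes should report ENABLED:

SQL> select capture_name, status from dba_capture;   -- on TRNG
SQL> select apply_name, status from dba_apply;       -- on STREAMTST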
Now the Streams replication environment is ready, and DML and DDL changes in the SCOTT.TAJ table in the source database will be replicated in the corresponding table in the destination database.
SQL> conn scott/tiger@TRNG
Connected.
SQL> --DDL operation
SQL> alter table taj add (flag char(1));
Table altered.
SQL> --DML operation
SQL> begin insert into taj values (1,'first_entry',sysdate,1);
commit;
end;
/
PL/SQL procedure successfully completed.

SQL> conn scott/tiger@STREAMTST
Connected.
SQL> --TEST DDL operation
SQL> desc taj
Name Null? Type
----------------------------------------- -------- ----------------------------
NO NOT NULL NUMBER
NAME VARCHAR2(20)
DDATE DATE
FLAG CHAR(1)
SQL> --TEST DML operation
SQL> select * from taj;
NO NAME DDATE F
---------- -------------------- --------- -
1 first_entry 24-JAN-08 1
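If a change does not show up at the destination, the apply error queue is the first place to look; because disable_on_error is set to 'n', failed transactions are recorded there instead of stopping the apply process:

SQL> select apply_name, local_transaction_id, error_message from dba_apply_error;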

Conclusion
Oracle Streams can capture, propagate, and apply changes in the database automatically, including both DML and DDL changes. Applications requiring replication, data warehouses, database migrations, and database upgrades can all benefit from Oracle Streams.

Saturday, May 30, 2009

10G Database Upgrade document

The current release of the database is 10.1.0.5, and the release to upgrade to is 10.2.0.4.
1. Download the following prerequisites for the installation.
Software Requirements
10.2.0.2 software on Linux x86
10.2.0.2 Companion CD on Linux x86 (for Ultra Search)
10.2.0.4 patch set on Linux x86
NOTE: No hardware or software changes should be required at the O/S level for this upgrade.
Steps:
Please move the current oraInventory to some other location.
Install the 10.2.0.2 software into a separate Oracle home, following the standard installation steps.
Do not create a database; uncheck that option in the installer screens.
Once done it will ask for root.sh to be run. This completes the 10.2.0.2 installation.
After that, start the installation of the companion software. The only thing to take care of there is to specify the ORACLE_HOME correctly, i.e. in this case the new 10.2.0.2 home just created; in the screen, select the 2nd option, and for the rest follow the general instructions.

Once that completes, apply the 10.2.0.4 patch set on the same ORACLE_HOME. This completes the 10.2.0.4 installation.
Once completed, stop the database and listener running from the old ORACLE_HOME, and go to the new 10.2.0.4 $ORACLE_HOME/bin.
Run dbua from there and select the database to be upgraded. If the database is not visible in the options, check oratab; if the entries there are not correct, update them.
In the screens that follow, uncheck the options for Flash Recovery Area, Configure Enterprise Manager, and recompile invalid objects. Once done, run utlrp.sql manually to recompile the invalid objects.
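A short sketch of that manual recompile, run from the new home (utlrp.sql ships under $ORACLE_HOME/rdbms/admin):

SQL> @?/rdbms/admin/utlrp.sql
SQL> select count(*) from dba_objects where status = 'INVALID';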
Once done with that, configure the listener using netca, or manually copy the files from the old ORACLE_HOME to the new ORACLE_HOME and change the paths accordingly.
An issue you may face is ORA-01031: insufficient privileges.
A connection to the database using SQL*Plus fails with this error:
% sqlplus /nolog
SQL> connect / as sysdba
ORA-01031: insufficient privileges
Resolution
1. Add the user oraclep to the dba group in /etc/group.
2. Add the group, i.e. dba16, to config.c under rdbms/lib in the 10.2.0.4 ORACLE_HOME, then relink.

Sunday, May 3, 2009

Cloud Computing Concepts

CLOUD COMPUTING

What is a cloud?
• Virtualized server pool
• Reconfigures to provide different service profiles on demand
• Individual node providing service is unimportant

Cloud computing Technical means:
Providing services on virtual machines allocated on top of a large physical machine pool

Cloud computing in Business means:
A method to address scalability and availability concerns for large scale applications

Cloud computing some people say is Democratized distributed computing

While there are several definitions of what constitutes a Cloud and what the term Cloud Computing really means, simply put, Cloud Computing allows users to tap into a virtually unlimited pool of computing resources over the Internet (the Cloud). Unlike traditional IT, Cloud users have little insight or control over the underlying infrastructure, and they must interact with the Cloud via an API provided by the Cloud vendors. The most exciting part of the Cloud, however, is its elasticity and utility-style pricing. Users can provision resources dynamically in a self-service manner and pay only for what they use, much like how we use utilities like electricity or water.

Amazon Web Services (AWS) offering: The AWS cloud provides reliable and dependable on-demand infrastructure that frees time and expense for you to focus on innovating for your business.

Lowers Costs
• Eliminates up-front capital expenditures
• Significantly reduces ongoing operational expenses
• Offers a pay-as-you-go utility computing model

Reduces Time to Market
• Eliminates the hassle of configuring data centers
• Reduces time to pilot and test projects
• Enables you to focus on adding value to your business

Offloads Heavy Lifting
• Gives you access to massive data centers
• Maintains the flexibility your business demands
• Lets you use only the capacity you need, when you need it


Many Uses for AWS
• Elastic computing
• Media distribution
• Scalable Web sites
• Business continuity (backup/recovery)
• Record retention and management
• Financial applications
• High-performance computing
• Software development/testing

AWS In the Enterprise
• Use development platforms you already know
• Fast access to compute power and storage
• Automatically scale to the needs of your business
• Pay only for what you use, with no commitments
• Secure, dependable, and fast infrastructure services
• Easily provision resources for one-off projects
• Service the needs of the entire organization without jeopardizing in-progress projects

Cloud Computing and Oracle
Cloud computing seems to be where the future of enterprise computing lies and as one of the leading players in the field of enterprise computing, Oracle has taken a proactive approach to the opportunities offered to it by the cloud.
The company, for instance, has announced the release of a product line that is tailor-made for the cloud. These products allow users not only to conveniently deploy their Oracle applications in the cloud, but also to back up their Oracle databases in the cloud, just as they would on an in-house server. This is something that Oracle is doing in partnership with Amazon through its web services arm, whose cloud computing service (known as Elastic Compute Cloud) has proven to be one of the first real successes in this emerging field. Oracle promises to team up with other cloud platform providers in the near future, in a complementary role (where the other providers offer the platforms, while Oracle offers the kind of applications that are able to withstand the demands of the cloud).
Many analysts also see Oracle's ventures into the cloud, though enthusiastic, as more of a way of protecting its enterprise computing turf, since there is a real risk of a client switching software providers during its migration into the cloud if its current provider doesn't develop applications that can run in the cloud.
While the Cloud has created a loyal set of followers – mostly among web startups and developers – enterprises have found it somewhat difficult to fully utilize due to a variety of reasons, including unavailability of enterprise software, data security and privacy issues, etc. Oracle is addressing this challenge by introducing products and services that will help make the Cloud enterprise-ready. Using the new Cloud offerings from Oracle, enterprises will be able to leverage the Cloud to meet transient and unanticipated demands by treating the Cloud as an extension to their on-premises data center.
The ability to back up Oracle Database in the Cloud is a key part of Oracle’s Cloud offering. It allows customers to use Storage Clouds, such as Amazon Simple Storage Service (S3), as their next-generation offsite backup storage destination.
Compared to traditional tape-based offsite storage, Cloud backups are more accessible, faster to restore under most circumstances, and more reliable.
Exciting developments taking place in the area of Cloud Computing present new opportunities for enterprises. By introducing the capability to backup databases in the Cloud in a secure and efficient manner, Oracle is leading the effort to make the Cloud relevant to enterprises. Backups in the Cloud provide enterprises with a next-generation offsite backup storage solution that is always accessible, faster to restore under most circumstances, and more reliable. Continuous accessibility of Cloud backups opens up new possibilities that have not been possible with tape based offsite storage. With its unlimited capacity and extremely attractive consumption-based pricing, the Cloud can help enterprises avoid up-front capital expenditure and become more agile, efficient, and competitive - with Oracle helping pave the way.


Oracle's goal is to make cloud computing relevant to enterprises by:

• Providing the right set of products and services
• Leading the industry efforts in developing Cloud standards
• Ensuring data security, etc.
Amazon is the #1 cloud computing vendor and Oracle's first Cloud partner.
Deploying Oracle Software in the Cloud
Pre-configured virtual machine images (AMIs) are available for EC2, consisting of Oracle Enterprise Linux + Oracle Database 11g + Application Express.

No Real Application Clusters (RAC) support currently (EC2 does not support clusters)

A fully configured hardware and Oracle Database environment can be ready in less than 30 minutes. This suits test, dev, QA, POC, and other short-term projects, which otherwise often get bottlenecked by IT. It can also be used as a "sandbox" to try out new releases and options.

Maximizing Availability and Security
Oracle Data Guard + Availability Zones = High Availability
Scale-out disaster recovery, business continuity, and read scalability

Transparent Data Encryption
Encrypts data on disk without requiring application changes

Network Encryption
Encrypts in-flight data

Virtual Private Database
Users only see the data that they are authorized to see
The Future of Cloud Computing at Oracle
With the early forays it has made into cloud computing, Oracle seems to have secured its future in the face of the computing ‘revolution’ that is shaping up in the cloud.

Friday, January 2, 2009

Cloning of 11i (11.5.8) from Single-Node to Multi-Node

Introduction.

This paper presents a step-by-step approach to completely cloning an Oracle Applications 11i environment (AutoConfig enabled) using Rapid Clone, from one node to three nodes. It covers port selection, the Forms Server, Reports Server, Apache Server and Concurrent Processing, as well as all the scripts and programs used to start up and shut down the services. "Cloning" means making a functional copy of an existing environment.

Some of the reasons to do a cloning process are:

• To create a test environment from an existing production environment to test some patches or to reproduce any production issues.
• To keep a test environment current with the most recent information from a production environment.
• To move any existing environment to other servers.

Just copying the application directories doesn't mean that our new environment will work properly; we need to perform some additional steps and tasks to get a functional environment.

Rapid Install 11.5.8 creates an Oracle Applications environment that is compatible with Rapid Clone. Oracle always recommends applying the latest Rapid Clone patches; for example, I applied patch 3130740, "Patch 11i.ADX.D".

Oracle also recommends using the latest tool for the cloning process, so we will use Rapid Clone. Note that cloning from a multi-node to a single-node installation is still not supported by Oracle, although I have heard that some partners have done it using the manual method (not Rapid Clone). The most important thing, and what I have done, is to keep the single-node installation at the same patch level as the multi-node installation, so that I can refresh my production environment (multi-node installation) into my test environment (single-node installation).

The standard Rapid Clone cloning procedure follows these steps:

• Prepare the source environment to be able to use the Rapid Clone method (some patches may be needed).
• Clone the Oracle Applications environment.
• Copy the source environment.
• Configure the target environment.
• Perform the post-cloning tasks.
Source System Configuration.

On our source system we made a single-node installation; on this server we installed the Oracle RDBMS 8.1.7.4 software, the Forms, Reports and Apache Servers, and the Concurrent Processing tier.
Table 1. Source System Configuration

System Configuration: Single-node, one server
Servers Installed: Database, Administration, Concurrent Processing, Forms, Reports and Apache
Platform: Sun Solaris 5.8 64-bit
Database Version: 8.1.7.4
Application Version: 11.5.8
Forms Client Connectivity: JInitiator (1.1.8.16)
User Interface: Forms, HTML/JavaScript (Self-Service Web)
Processors: 4 at 750 MHz
RAM Memory: 8 GB

Table 2. Target System Configuration

System Configuration: Multi-node, three servers
Database and Administration Server: Sun Solaris 5.8 64-bit, 10 processors at 900 MHz, 20 GB RAM
Forms and Apache Server: Sun Solaris 5.8 64-bit, 8 processors at 900 MHz, 18 GB RAM
Reports and Concurrent Processing Server: Sun Solaris 5.8 64-bit, 4 processors at 900 MHz, 8 GB RAM
Database Version: 8.1.7.4
Application Version: 11.5.8.0.0
Forms Client Connectivity: JInitiator (1.1.8.16)
User Interface: Forms, HTML/JavaScript (Self-Service Web)

Components of 11i architecture.

The components of the 11i architecture have changed considerably. The stack includes two distinct interfaces: the "traditional" Forms (ERP applications) interface and the Web interface (Self-Service, CRM and Workflow).

The next components of the technology stack are installed:

1. Database Server: This is where the database is hosted and where the Oracle RDBMS is installed. Version 8.1.7.4 is installed by default for an 11.5.8 installation. The E-Business software installs two ORACLE_HOMEs, one for the database and the other for the Forms, Apache and Reports Servers.

2. Forms Server: The 11.5.8 E-Business software installs Developer 6i (patchset 9), which includes Forms Server version 6.0.8.18.

3. Reports Server: The 11.5.8 E-Business software installs version 6.0.8.18 of Reports and Graphics.

4. Apache Server: Apache Server 1.3.19 is shipped with E-Business 11.5.8, which also installs Internet Application Server 1.0.2.2.2.

5. Concurrent Processing Server: These processes execute all the jobs and requests submitted through the Oracle Applications software.

6. Administration Server: Each of the other tech stacks needs an administration component in order to perform tasks such as patching the stack, re-linking executables and re-creating Java files. As a consequence, an Administration component is present on the database server (installing it there is optional), the Concurrent Manager server, the Forms server and the Web server. The adpatch and adadmin utilities are part of the Administration tech stack.

7. JInitiator: This browser plug-in (version 1.1.8.16) is installed on the client, which in most cases is a PC.

Backup Methodology.

It is very important to have a valid backup, in case you need to copy the whole application stack from a restore of an application backup; you also have the option to do the copy directly from the server where the technology stack is installed. The database backup can be done online or offline, depending on the system availability. If you are cloning from an online backup, you will need to recover your database (applying the archived logs up to the point you want) before continuing with the application tier script execution.

OS users.

In our initial setup the same user owned both the database and the application stack files. This Unix userid, "citcrp3", belongs to the "dba" group. If you are restoring the files from a tape, it is very important to keep the file ownership and privileges exactly as they were taken, so it is recommended to perform the restore as the root user.

Cloning procedure.

1. Preparing the source environment to be able to use Rapid Clone tool.

If you have a version prior to 11.5.7, which is not AutoConfig enabled, you still have the option to use Rapid Clone; however, you will first need to apply all the patches to enable the AutoConfig tool, and after that an additional patch to use the Rapid Clone method.

The following table lists all the prerequisites:

Table 3. Required versions

Software Minimum version Location Details
Oracle Universal Installer 2.1.0.17 All source system nodes. This version is included in 11.5.7 and above versions, if you have any environment prior to this one, you will need apply patch 2949808.
Perl 5.005 Source and target nodes. Use the Perl shipped with iAS1022 and RDBMS 9i if available or download it from Perl.com. Perl must be in the PATH before cloning. Windows users: do not use Perl from MKSToolKit.
JRE 1.1.8 Source database node. If the RDBMS ORACLE_HOME was not installed using Rapid Install, install JRE 1.1.8 into the <RDBMS ORACLE_HOME>/jre/1.1.8 directory.
JRE 1.3.1 Source database node. Windows users only: install JRE 1.3.1 into the <RDBMS ORACLE_HOME>/jre/1.3.1 directory.
JDK 1.3.1 Target middle-tier applications nodes. Refer to note 130091.1 of Metalink.
Zip 2.3 All source nodes. Download from InfoZip. Zip must be in your PATH for cloning.

a) For Windows users only: apply patch 2237858 to enable long file name support.

b) Migrate to AutoConfig
If you have any version prior to 11.5.7, you must follow all the steps in Metalink Note 165195.1 to enable the AutoConfig option.

c) Apply additional patches
Verify that ZIP version 2.3 is included in the $PATH environment variable of the user that will be used to perform the cloning procedure.

Apply patch 2716534 for the Context Editor tool.
(As of Feb 19, 2004, patch 2873456 is the replacement for patch 2716534.)

Apply patch 2614231, which contains version 2.1.0.17.0 of the OUI.
Installation example, run as the owner of the database files:
unzip -d /oui21 p2614231_11i_.zip

Note: The above OUI version is already installed in 11.5.7 and later versions.

If your environment is AutoConfig enabled, you will need to apply the latest Rapid Clone patch, 3130740 ("Patch 11i.ADX.D"); it enables cloning Oracle Applications environments with 8i, 9.0.1 and 9.2.0 databases.

d) Update to JDK 1.3.1
If Oracle Applications uses a version prior to 1.3.1, you will need to update to 1.3.1; you can follow the steps in Metalink Note 212005.1.

e) Generate the Applications Context files in the database tier.
You must generate the "xml" context file in the database directory, <RDBMS ORACLE_HOME>/appsutil/<CONTEXT_NAME>.xml, and another one for the APPL_TOP directory. If either file doesn't exist, you can create it with the following command:

• You must be connected as the owner of the application file system if it is a single-user installation; if it is a multi-user installation, you must be connected as the root user.
• Start the Context Creation tool on the database tier node (Database Server):

UNIX:
$<RDBMS ORACLE_HOME>/appsutil/editcontext/createcontext

f) Update the Applications Context File

UNIX:
$<AD_TOP>/bin/adconfig.sh contextfile=<CONTEXT_FILE> appspass=<APPS password>
where <CONTEXT_FILE> = $APPL_TOP/admin/<SID>.xml and SID=PROD3 (in our environment).

g) Install the Rapid Clone files in the database server node

Follow the procedure specified in section 4, step 1 of Metalink note 165195.1.
Copying all the AutoConfig files to the RDBMS ORACLE_HOME
• In the Application Tier server, go to the APPL_TOP directory and run the file $APPL_TOP/APPSORA.env to define all the Application’s environment variables.
• Creating the appsutil.zip file.
UNIX:
$perl <$AD_TOP>/bin/admkappsutil.pl
This command will create the appsutil.zip file in the $APPL_TOP/admin/out path.
• On the database server (Database Tier), copy the appsutil.zip file generated in the above step to the <RDBMS ORACLE_HOME> directory.
UNIX:
$cp -p $APPL_TOP/admin/<SID>/out/appsutil.zip <RDBMS ORACLE_HOME>/
$cd <RDBMS ORACLE_HOME>
$unzip -o appsutil.zip

The -o flag means that all the existing files will be overwritten; if this option isn't specified, answer with [A]ll.

The last step updates the Rapid Clone files in the <RDBMS ORACLE_HOME> directory, including all the files generated after the Rapid Install and AutoConfig patch installation.

Because the adpreclone.pl file isn't created just by following the above steps, it is necessary to execute all the steps specified in section 4 of Metalink document 165195.1, which are detailed next:

• Generate the Context File on the Database node by executing the following commands:

UNIX:
$cd <RDBMS ORACLE_HOME>
$. <CONTEXT_NAME>.env
Note: You must not have the Apps environment variables defined.
$cd <RDBMS ORACLE_HOME>/appsutil/bin
$adbldxml.sh tier=db appsuser=<APPS user> appspasswd=<APPS password>

Complete the following AutoConfig steps.
Review Prior Manual Configuration Changes.
The Database Context File may not include manual post-install configuration changes made after the Rapid Install completed. Before running the AutoConfig portion of this patch, review any modifications to specific configuration files and reconcile them with the Database Context file.
Note: Prior modifications include any changes made to configuration files as instructed in patch READMEs or other accompanying documents.

Appendix B of Metalink note 165195.1 describes how to review and resolve manual configuration changes.
Generate and Apply AutoConfig Configuration Files.
Attention: This step performs the conversion to AutoConfig. Once completed, the previous configuration will not be available.

Attention: The database server must remain available during the AutoConfig run. All the other database tier services should be shut down.
1. Execute the following commands:
o On UNIX
cd <RDBMS ORACLE_HOME>/appsutil/bin
adconfig.sh contextfile=<CONTEXT_FILE> appspass=<APPS password>
o On Windows
cd /d <RDBMS ORACLE_HOME>\appsutil\bin
adconfig.cmd contextfile=<CONTEXT_FILE> appspass=<APPS password>
Warning: Running AutoConfig on the database node will update the RDBMS init.ora and network listener files. Be sure to review the configuration changes from step 3. The new AutoConfig files support the use of IFILEs to allow for values to be customized or added as needed.


2. Cloning the Oracle Applications environment.

Use Rapid Clone to create template files for cloning on the source system. After the source system is copied to the target, Rapid Clone updates these templates to contain the new target system configuration settings. Rapid Clone will not change the source system configuration.
There are three phases to the cloning process:

1. Prepare the Source System.
Execute the following commands to prepare the source system for cloning.

a) Prepare the source system database for cloning.

Log on to the source system as the owner of the RDBMS software and run the following commands:

$cd <RDBMS ORACLE_HOME>/appsutil/scripts/<CONTEXT_NAME>
where <CONTEXT_NAME> contains the SID of the source environment.

$perl adpreclone.pl dbTier

After executing the above command, the <RDBMS ORACLE_HOME>/appsutil/clone directory will be created.

b) Prepare the source system application tier for cloning.

Log on to the server as the owner of the application files (in our environment we used the same user) and execute the following commands on every server where an APPL_TOP exists:

$cd <COMMON_TOP>/admin/scripts/<CONTEXT_NAME>
where <CONTEXT_NAME> contains the SID of the source environment.

$perl adpreclone.pl appsTier

After executing the above command, the <COMMON_TOP>/clone directory will be created.


2. Copy the Source System to the Target System.

The following table shows all the directories that were copied to the target server:


Table 4. Source System directories

Directories: Files description: Used space:
/prod02/prodcrp3/prod3appl APPL_TOP 18.82 Gbytes.
/prod05/prodcrp3/prod3comn COMMON_TOP 1.51 Gbytes.
/prod05/prodcrp3/prod3db ORACLE_HOME for 8.1.7.4 database. 1.99 Gbytes.
/prod08/prodcrp3/prod3ora/8.0.6 ORACLE_HOME for apps tech stack 8.0.6 version 2.93 Gbytes.
/prod08/prodcrp3/prod3ora/iAS Directory for iAS product. 2.80 Gbytes.
/prod01/prodcrp3/prod3data/ Directory where the SYSTEM and TEMPORARY datafiles, redologs and controlfiles are stored. 10.04 Gbytes.
/prod03/prodcrp3/prod3data/ Directory for table tablespaces and redologs (groups 3 and 4). 10.35 Gbytes.
/prod06/prodcrp3/prod3data/ Directory for index tablespaces and redologs (groups 3 and 4). 12.07 Gbytes.
/prod08/prodcrp3/prod3data/ Directory for table tablespaces. 2.38 Gbytes.
Total Used Space: 62.89 Gbytes.

We copied all the files onto the database target server node, because we had enough space there and also for future cloning requirements. Our source system is the proesa1880 server, and our production database target node is gpdb. As an example, the following commands were executed on the gpdb server to copy the required files:

rcp -pr proesa1880:/prod01/prodcrp3/prod3data /prod13/archives/PROD3-BACKUP/prod01/prodcrp3/
rcp -pr proesa1880:/prod02/prodcrp3/prod3appl /prod13/archives/PROD3-BACKUP/prod02/prodcrp3/
rcp -pr proesa1880:/prod03/prodcrp3/prod3data /prod13/archives/PROD3-BACKUP/prod03/prodcrp3/
rcp -pr proesa1880:/prod05/prodcrp3/prod3comn /prod12/PROD3-BACKUP/prod05/prodcrp3/
rcp -pr proesa1880:/prod05/prodcrp3/prod3db /prod12/PROD3-BACKUP/prod05/prodcrp3/
rcp -pr proesa1880:/prod06/prodcrp3/prod3data /prod13/archives/PROD3-BACKUP/prod06/prodcrp3/
rcp -pr proesa1880:/prod08/prodcrp3/prod3data /prod12/PROD3-BACKUP/prod08/prodcrp3/
rcp -pr proesa1880:/prod08/prodcrp3/prod3ora /prod12/PROD3-BACKUP/prod08/prodcrp3/
rcp -p proesa1880:/prod01/prodcrp3/prod3data/temp01.dbf /prod12/PROD13-BACKUP/prod01/prodcrp3/prod3data/


a) Copy the application tier file system.

Log on to the source system application tier nodes as the owner of the apps files and do the following tasks:
-Shut down the application tier server processes.
-Copy the following application tier directories from the source node to the target application node:

<APPL_TOP>
<COMMON_TOP>/html
<COMMON_TOP>/java
<COMMON_TOP>/util
<COMMON_TOP>/clone
<806 ORACLE_HOME>
<iAS ORACLE_HOME>
The following table shows the directories that were copied from the source server (single-node) to the target servers (multi-node).

Table 5. Source System directories

Application Tiers

Directory name: Source directory (proesa1880): Target directory (gpapps): Target directory (gprep):
<APPL_TOP> /prod02/prodcrp3/prod3appl /prod01/oraprod/prodappl /prod01/oraprod/prodappl
<COMMON_TOP>/html /prod05/prodcrp3/prod3comn/html /prod02/oraprod/prodcomn/html /prod02/oraprod/prodcomn/html
<COMMON_TOP>/java /prod05/prodcrp3/prod3comn/java /prod02/oraprod/prodcomn/java /prod02/oraprod/prodcomn/java
<COMMON_TOP>/util /prod05/prodcrp3/prod3comn/util /prod02/oraprod/prodcomn/util /prod01/oraprod/prodcomn/util
<COMMON_TOP>/clone /prod05/prodcrp3/prod3comn/clone /prod01/oraprod/prodcomn/clone /prod01/oraprod/prodcomn/clone
<806 ORACLE_HOME> /prod08/prodcrp3/prod3ora/8.0.6 /prod03/oraprod/prodora/8.0.6 /prod03/oraprod/prodora/8.0.6
<iAS ORACLE_HOME> /prod08/prodcrp3/prod3ora/iAS /prod03/oraprod/prodora/iAS /prod03/oraprod/prodora/iAS


As I said above, all the directories were copied to the database server node (gpdb); after that, the application directories were copied to the two application nodes (gpapps and gprep servers). The following commands were used to copy the application directories; they were executed from the gpapps and gprep servers (application tier nodes).

rcp -pr gpdb:/prod13/archives/PROD3-BACKUP/prod02/prodcrp3/ /prod01/oraprod/
rcp -pr gpdb:/prod12/PROD3-BACKUP/prod05/prodcrp3/prod3comn/html /prod02/oraprod/prodcomn/
rcp -pr gpdb:/prod12/PROD3-BACKUP/prod05/prodcrp3/prod3comn/java /prod02/oraprod/prodcomn/
rcp -pr gpdb:/prod12/PROD3-BACKUP/prod05/prodcrp3/prod3comn/util /prod02/oraprod/prodcomn/
rcp -pr gpdb:/prod12/PROD3-BACKUP/prod05/prodcrp3/prod3comn/clone /prod02/oraprod/prodcomn/
rcp -pr gpdb:/prod12/PROD3-BACKUP/prod05/prodcrp3/prod3ora/8.0.6 /prod03/oraprod/prodora/
rcp -pr gpdb:/prod12/PROD3-BACKUP/prod05/prodcrp3/prod3ora/iAS /prod03/oraprod/prodora/


b) Copy the database tier file system.

Log on to the source system database node as the ORACLE user.
-Perform a normal shutdown of the source system database.
-Copy the source system database files to the target system.

The following table shows the directories that were copied from the source server (single-node) to the target servers (multi-node).


Table 6. Source System database directories

Description: Source files and directories: Target files and directories:
Temporary datafiles /prod01/prodcrp3/prod3data/temp01.dbf -> /prod02/oraprod/proddata/temp01.dbf
/prod06/prodcrp3/prod3data/temp02.dbf -> /prod02/oraprod/proddata/temp02.dbf
System datafiles /prod01/prodcrp3/prod3data/system01.dbf through system09.dbf -> /prod03/oraprod/proddata/system01.dbf through system09.dbf
Rollback datafiles /prod03/prodcrp3/prod3data/rbs01.dbf -> /prod02/oraprod/proddata/rbs01.dbf
/prod03/prodcrp3/prod3data/rbs02.dbf -> /prod02/oraprod/proddata/rbs02.dbf
Table tablespaces /prod03/prodcrp3/prod3data/ and /prod08/prodcrp3/prod3data/ -> /prod01/oraprod/proddata/*
Index tablespaces /prod06/prodcrp3/prod3data/ -> /prod02/oraprod/proddata/*


After copying the source database files to the target database node, you can start up all the source services, including the application services.


3. Configure the Target System.

In this section I show the output of the executed commands. The section is divided into two parts: one for the Database Tier and the other for the Application Tiers.

a) Configuring the target system database server.

Execute the following commands as the owner of the target database server files, which in our environment was called "oraprod". Also, don't forget to add /usr/ccs/bin to the PATH environment variable; it is needed to execute the Perl script on this node.

$cd <RDBMS ORACLE_HOME>/appsutil/clone/bin
$perl adcfgclone.pl dbTier

After executing the above command, the following prompts are shown on the screen:


Please enter the APPS password:
apps

First Creating a new context file for the cloned system.
The program is going to ask you for information about the new system:

Provide the values required for creation of the new Database Context file.

Do you want to use a virtual hostname for the target node (y/n) [n] ?:

Target system database SID [PROD3]:PROD

Target system domain name []: omitted for security reasons

Target system RDBMS ORACLE_HOME directory [/prod05/prodcrp3/prod3db/8.1.7]:/prod11/oraprod/proddb/8.1.7

Target system utl_file accessible directories list [/usr/prod3]:/usr/prod

Number of DATA_TOP's on the target system [4]:

Target system DATA_TOP 1 [/prod03/prodcrp3/prod3data]:/prod03/oraprod/proddata

Note: This is the path used for the "SYSTEM" datafiles.

Target system DATA_TOP 2 [/prod01/prodcrp3/prod3data]:/prod01/oraprod/proddata

Note: This is the path used for the "DATA" datafiles.

Target system DATA_TOP 3 [/prod08/prodcrp3/prod3data]:/prod02/oraprod/proddata

Note: This is the path used for the "INDEX", "TEMPORARY" and "ROLLBACK" datafiles.

Target system DATA_TOP 4 [/prod06/prodcrp3/prod3data]:/prod06/oraprod/proddata

The new context file has to contain port numbers related to target system.

Do you want to preserve the port values from the source system on the target system (y/n) [y] ?:

Database port is 1525

Backing up /prod11/oraprod/proddb/8.1.7/appsutil/PROD_gpdb.xml to /prod11/oraprod/proddb/8.1.7/appsutil/PROD_gpdb.xml0.bak

Creating the new Database Context file from :
/prod11/oraprod/proddb/8.1.7/appsutil/template/adxdbctx.tmp

The new database context file has been created :
/prod11/oraprod/proddb/8.1.7/appsutil/PROD_gpdb.xml

Log file located at /tmp/CloneContext_11191007.log

Running Rapid Clone with command:
perl ./adclone.pl java=./../jre/1.3.1 mode=apply stage=/prod11/oraprod/proddb/8.1.7/appsutil/clone component=dbTier method=CUSTOM dbctxtg=/prod11/oraprod/proddb/8.1.7/appsutil/PROD_gpdb.xml showProgress

Using adconfig.zip : 115.340

Beginning database tier Apply - Wed Nov 19 22:09:37 2003
Log file located at /prod11/oraprod/proddb/8.1.7/appsutil/log/PROD_gpdb/ApplyDBTier_11191009.log
[progress spinner output trimmed: 4% ... 29% completed]
Completed Apply...
Wed Nov 19 22:13:08 2003

Beginning APPSDB_PROD registration to central inventory...

ORACLE_HOME NAME : APPSDB_PROD
ORACLE_HOME PATH : /prod11/oraprod/proddb/8.1.7
INVENTORY LOCATION : /var/opt/oracle/oraInst.loc

Log file located at /u01/oraprod/oraInventory/logs/OracleHomeCloner_11191013.log
RC-00123: ORACLE_HOME APPSDB_PROD already exists inside the inventory

Starting database listener for PROD:
Running:
/prod11/oraprod/proddb/8.1.7/appsutil/scripts/PROD_gpdb/addlnctl.sh start PROD

You are running addlnctl.sh version 115.3


Starting listener process PROD ...


Listener PROD has already been started.

addlnctl.sh: exiting with status 0


b) Configuring the target system application tier server nodes.

Execute the following commands as the owner of the target application server files; in our environment we used the same user as for the database server, called "oraprod". Also, don't forget to add /usr/ccs/bin to the PATH environment variable; it is needed to execute the Perl script on this node.

$cd <COMMON_TOP>/clone/bin
$perl adcfgclone.pl appsTier

After executing the above command, the following prompts are shown on the screen:

Please enter the APPS password:
prodfrut

First Creating a new context file for the cloned system.
The program is going to ask you for information about the new system:


Provide the values required for creation of the new APPL_TOP Context file.

Do you want to use a virtual hostname for the target node (y/n) [n] ?:

Target system database SID [PROD3]:PROD

Target system domain name []: omitted for security reasons

Target system database server node [proesa1880]:gpdb

Does the target system have more than one application tier server node (y/n) [n] ?:y

Target system concurrent processing node [proesa1880]:gprep

Target system administration node [proesa1880]:gpapps

Target system forms server node [proesa1880]:gpapps

Target system web server node [proesa1880]:gpapps

Is the target system APPL_TOP divided into multiple mount points (y/n) [n] ?:

Target system APPL_TOP mount point[/prod02/prodcrp3/prod3appl]:
/prod01/oraprod/prodappl

Target system COMMON_TOP directory [/prod05/prodcrp3/prod3comn]:
/prod02/oraprod/prodcomn

Target system 8.0.6 ORACLE_HOME directory [/prod08/prodcrp3/prod3ora/8.0.6]:
/prod03/oraprod/prodora/8.0.6

Target system iAS ORACLE_HOME directory [/prod08/prodcrp3/prod3ora/iAS]:
/prod03/oraprod/prodora/iAS

Location of JDK 1.3.1 on the target system [/usr/j2se]:

The new context file has to contain port numbers related to target system.

Do you want to preserve the port values from the source system on the target system (y/n) [y] ?:
Web Listener port is 8004
Complete port information available at /prod01/oraprod/prodappl/admin/out/PROD_gpapps/portpool.lst

UTL_FILE_DIR on database tier consists of the following directories.

1. /usr/prod -- This directory must exist also the user must have the needed privileges.
2. /prod11/oraprod/proddb/8.1.7/appsutil/outbound/PROD_gpdb
Choose a value which will be set as APPLPTMP value on the target node [1]:

Creating the new APPL_TOP Context file from :
/prod01/oraprod/prodappl/ad/11.5.0/admin/template/adxmlctx.tmp

The new APPL_TOP context file has been created :
/prod01/oraprod/prodappl/admin/PROD_gpapps.xml

Log file located at /tmp/CloneContext_11191134.log

Running Rapid Clone with command:
perl ./adclone.pl java=./../jre/1.3.1 mode=apply stage=/prod02/oraprod/prodcomn/clone component=appsTier method=CUSTOM appctxtg=/prod01/oraprod/prodappl/admin/PROD_gpapps.xml showProgress

Using adconfig.zip : 115.340

Beginning application tier Apply - Wed Nov 19 23:36:51 2003
Log file located at /prod01/oraprod/prodappl/admin/PROD_gpapps/log/ApplyAppsTier_11191136.log

Completed Apply...
Wed Nov 19 23:48:36 2003

Beginning APPSIAS_PROD registration to central inventory...

ORACLE_HOME NAME : APPSIAS_PROD
ORACLE_HOME PATH : /prod03/oraprod/prodora/iAS
INVENTORY LOCATION : /var/opt/oracle/oraInst.loc

Log file located at /u02/oratest/testora/oraInventory/logs/OracleHomeCloner_11191148.log
RC-00123: ORACLE_HOME APPSIAS_PROD already exists inside the inventory

Starting application Services for PROD:
Running:
/prod02/oraprod/prodcomn/admin/scripts/PROD_gpapps/adstrtal.sh APPS/

You are running adstrtal.sh version 115.11

Executing service control script:
/prod02/oraprod/prodcomn/admin/scripts/PROD_gpapps/adapcctl.sh start
script returned:
****************************************************

adapcctl.sh version 115.34

Starting Apache Web Server Listener (dedicated HTTP) ...
Starting Apache Web Server Listener (dedicated PLSQL) ...
/prod03/oraprod/prodora/iAS/Apache/Apache/bin/apachectl start: httpd started

adapcctl.sh: exiting with status 0

.end std out.

****************************************************

Executing service control script:
/prod02/oraprod/prodcomn/admin/scripts/PROD_gpapps/adalnctl.sh start
script returned:
****************************************************

adalnctl.sh version

Checking for FNDFS executable.
Starting listener process APPS_PROD.

adalnctl.sh: exiting with status 0

.end std out.

****************************************************

Executing service control script:
/prod02/oraprod/prodcomn/admin/scripts/PROD_gpapps/adfrmctl.sh start
script returned:
****************************************************

You are running adfrmctl.sh version 115.29

Cannot complete applications logon. You may have entered an invalid applications password, or there may have been a database connect error.
Starting forms server for PROD on port 9004.
Cannot complete applications logon. You may have entered an invalid applications password, or there may have been a database connect error.

adfrmctl.sh: exiting with status 0

.end std out.

****************************************************

Executing service control script:
/prod02/oraprod/prodcomn/admin/scripts/PROD_gpapps/adfmcctl.sh start
script returned:
****************************************************

You are running adfmcctl.sh version 115.16

Cannot complete applications logon. You may have entered an invalid applications password, or there may have been a database connect error.
Starting forms load balancing client for PROD.
Cannot complete applications logon. You may have entered an invalid applications password, or there may have been a database connect error.

adfmcctl.sh: exiting with status 0

.end std out.
****************************************************

Executing service control script:
/prod02/oraprod/prodcomn/admin/scripts/PROD_gpapps/adfmsctl.sh start
script returned:
****************************************************

You are running adfmsctl.sh version 115.12

Cannot complete applications logon. You may have entered an invalid applications password, or there may have been a database connect error.
starting forms metrics server for PROD.
Cannot complete applications logon. You may have entered an invalid applications password, or there may have been a database connect error.

adfmsctl.sh: exiting with status 0

.end std out.
****************************************************

Executing service control script:

/prod02/oraprod/prodcomn/admin/scripts/PROD_gpapps/jtffmctl.sh start
script returned:
****************************************************

You are running jtffmctl.sh version 115.5

Starting Fulfillment Server for PROD on port 9304 ...

jtffmctl.sh: exiting with status 0

.end std out.
****************************************************

Executing service control script:
/prod02/oraprod/prodcomn/admin/scripts/PROD_gpapps/addisctl.sh start
script returned:
****************************************************

addisctl.sh version 115.8

/prod03/oraprod/prodora/8.0.6/vbroker/bin/osagent
Started osagent.
Osagent logs messages to the file /prod03/oraprod/prodora/8.0.6/discwb4/util/osagent.log.
Waiting for OAD to start...
Started OAD.
OAD logs messages to the file /prod03/oraprod/prodora/8.0.6/discwb4/util/oad.log.
Discoverer Locator Started.
Locator logs messages to the file /prod03/oraprod/prodora/8.0.6/discwb4/util/locator.log.
Registering Discoverer Session
Completed registration of repository_id = IDL:DCISessionManager:1.0
object_name = _8004OracleDiscovererSession4
reference data =
path_name = /prod03/oraprod/prodora/8.0.6/discwb4/util/runses.sh
activation_policy = UNSHARED_SERVER
args = (length=4)[-session; _8004OracleDiscovererSession4; -preference; _8004OracleDiscovererPreferences4; ]
env = NONE
for OAD on host
Registering the Collector
Completed registration of repository_id = IDL:DCICollector:1.0
object_name = _8004OracleDiscovererCollector4
reference data =
path_name = /prod03/oraprod/prodora/8.0.6/discwb4/util/runcol.sh
activation_policy = SHARED_SERVER
args = (length=2)[-collector; _8004OracleDiscovererCollector4; ]
env = NONE
for OAD on host
Applying preferences from file : /prod03/oraprod/prodora/8.0.6/discwb4/util/pref.txt
Finished applying preferences

Closing down registry..
Registry File sync...
Registering Discoverer Preference Repository
Completed registration of repository_id = IDL:DCICORBAInterface:1.0
object_name = _8004OracleDiscovererPreferences4
reference data =
path_name = /prod03/oraprod/prodora/8.0.6/discwb4/util/runpref.sh
activation_policy = SHARED_SERVER
args = (length=2)[-preference; _8004OracleDiscovererPreferences4; ]
env = NONE
for OAD on host
addisctl.sh: exiting with status 0

.end std out.
****************************************************

All enabled services for this node are started.
Check logfile /prod02/oraprod/prodcomn/admin/log/PROD_gpapps/11192348.log for details

Exiting with status 0

If you look closely there are some warning messages; they appear because the APPS password on my application tier did not match the APPS password used by the database server scripts.

The next output is from my Concurrent Processing server (in our environment we use two nodes for the application tier).

Execute the following commands as the owner of the target application server files; in our environment we used the same user as on the database server, called “oraprod”. Also, don’t forget to add /usr/ccs/bin to the PATH environment variable, since it is needed to run the perl script on this node.

$cd /clone/bin
$perl adcfgclone.pl appsTier

After executing the above command, the following prompts are shown on the screen:

Please enter the APPS password:
prodfrut

First Creating a new context file for the cloned system.
The program is going to ask you for information about the new system:

Provide the values required for creation of the new APPL_TOP Context file.

Do you want to use a virtual hostname for the target node (y/n) [n] ?:

Target system database SID [PROD3]:PROD

Target system domain name []: omitted for security reasons

Target system database server node [proesa1880]:gpdb

Does the target system have more than one application tier server node (y/n) [n] ?:y

Target system concurrent processing node [proesa1880]:gprep

Target system administration node [proesa1880]:gpapps

Target system forms server node [proesa1880]:gpapps

Target system web server node [proesa1880]:gpapps

Is the target system APPL_TOP divided into multiple mount points (y/n) [n] ?:

Target system APPL_TOP mount point [/prod02/prodcrp3/prod3appl]:
/prod01/oraprod/prodappl

Target system COMMON_TOP directory [/prod05/prodcrp3/prod3comn]:
/prod02/oraprod/prodcomn

Target system 8.0.6 ORACLE_HOME directory [/prod08/prodcrp3/prod3ora/8.0.6]:
/prod03/oraprod/prodora/8.0.6

Target system iAS ORACLE_HOME directory [/prod08/prodcrp3/prod3ora/iAS]:
/prod03/oraprod/prodora/iAS

Location of JDK 1.3.1 on the target system [/usr/j2se]:

The new context file has to contain port numbers related to target system.

Do you want to preserve the port values from the source system on the target system (y/n) [y] ?:

Web Listener port is 8004
Complete port information available at /prod01/oraprod/prodappl/admin/out/PROD_gprep/portpool.lst

UTL_FILE_DIR on database tier consists of the following directories.

1. /usr/prod -- This directory must exist also the user must have the needed privileges.

2. /prod11/oraprod/proddb/8.1.7/appsutil/outbound/PROD_gpdb
Choose a value which will be set as APPLPTMP value on the target node [1]:

Creating the new APPL_TOP Context file from :
/prod01/oraprod/prodappl/ad/11.5.0/admin/template/adxmlctx.tmp

The new APPL_TOP context file has been created :
/prod01/oraprod/prodappl/admin/PROD_gprep.xml

Log file located at /tmp/CloneContext_11191159.log

Running Rapid Clone with command:
perl ./adclone.pl java=./../jre/1.3.1 mode=apply stage=/prod02/oraprod/prodcomn/clone component=appsTier method=CUSTOM appctxtg=/prod01/oraprod/prodappl/admin/PROD_gprep.xml showProgress

Using adconfig.zip : 115.340

Beginning application tier Apply - Thu Nov 20 00:01:51 2003
Log file located at /prod01/oraprod/prodappl/admin/PROD_gprep/log/ApplyAppsTier_11201201.log

Thu Nov 20 00:13:04 2003

Beginning APPSIAS_PROD registration to central inventory...

ORACLE_HOME NAME : APPSIAS_PROD
ORACLE_HOME PATH : /prod03/oraprod/prodora/iAS
INVENTORY LOCATION : /var/opt/oracle/oraInst.loc

Log file located at /u02/oratest/testora/oraInventory/logs/OracleHomeCloner_11201213.log
RC-00123: ORACLE_HOME APPSIAS_PROD already exists inside the inventory

Starting application Services for PROD:
Running:
/prod02/oraprod/prodcomn/admin/scripts/PROD_gprep/adstrtal.sh APPS/

You are running adstrtal.sh version 115.11

Executing service control script:
/prod02/oraprod/prodcomn/admin/scripts/PROD_gprep/adalnctl.sh start
script returned:
****************************************************

adalnctl.sh version

Checking for FNDFS executable.
Starting listener process APPS_PROD.

adalnctl.sh: exiting with status 0

.end std out.
****************************************************

Executing service control script:
/prod02/oraprod/prodcomn/admin/scripts/PROD_gprep/adcmctl.sh start
script returned:
****************************************************

You are running adcmctl.sh version 115.17

Starting concurrent manager for PROD ...
Starting PROD_1120@PROD Internal Concurrent Manager
Default printer is noprint

adcmctl.sh: exiting with status 0

.end std out.
****************************************************

Executing service control script:
/prod02/oraprod/prodcomn/admin/scripts/PROD_gprep/adrepctl.sh start
script returned:
****************************************************

You are running adrepctl.sh version 115.16

Cannot complete applications logon. You may have entered an invalid applications password, or there may have been a database connect error.
starting Reports Server for PROD on port 7004.
Cannot complete applications logon. You may have entered an invalid applications password, or there may have been a database connect error.

adrepctl.sh: exiting with status 0

.end std out.
****************************************************

All enabled services for this node are started.
Check logfile /prod02/oraprod/prodcomn/admin/log/PROD_gprep/11200013.log for details

Exiting with status 0

If you look closely there are some warning messages; they appear because the APPS password on my application tier did not match the APPS password used by the database server scripts.

At this point all the services should be running properly; however, it is very important to perform all the finishing tasks specified in the next section.

3. Finishing tasks.

This section lists tasks that may be necessary depending on your implementation and the intended use of the cloned system.

a) Update profile options: Rapid Clone updates only site level profile options. If any other profile options are set to instance specific values, you must update them manually.
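As a quick, hedged check, the standard FND tables can be queried for values that still reference the old source node (in our case the old hostname was proesa1880); this is only a sanity query, so review the rows before changing anything:

SQL> select profile_option_id, level_id, profile_option_value
from fnd_profile_option_values
where profile_option_value like '%proesa1880%';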

b) Update printer settings: If the new cloned system needs to utilize different printers, update the target system with the new printer settings now.

c) Update workflow configuration settings: Cloning an Oracle Applications instance will not update the host and instance specific information used by Oracle Workflow. Review the following tables and columns to verify there is no instance specific data in the Workflow configuration on the target system:

Table 7. Target system tables with values to be modified.

Table name | Column name | Column value details
WF_NOTIFICATION_ATTRIBUTES | TEXT_VALUE | Values starting with "http://": update to the new web host.
WF_ITEM_ATTRIBUTE_VALUES | TEXT_VALUE | Values starting with "http://": update to the new web host.
WF_SYSTEMS | GUID | Create a new system defined as the new global database name using the Workflow Administrator Web Applications responsibility.
WF_SYSTEMS | NAME | Replace the value with the database global name.
WF_AGENTS | ADDRESS | Update the database link with the new database global name.
FND_FORM_FUNCTIONS | WEB_HOST_NAME | Update with the new web host name.
FND_FORM_FUNCTIONS | WEB_AGENT_NAME | Update to point at the new PLSQL listener name.
FND_CONCURRENT_REQUESTS | LOGFILE_NODE_NAME | * Update with the new Concurrent Processing server name.
FND_CONCURRENT_REQUESTS | OUTFILE_NODE_NAME | * Update with the new Concurrent Processing server name.
FND_CONCURRENT_REQUESTS | LOGFILE_NAME | * Update to the new log file path on the Concurrent Processing server.
FND_CONCURRENT_REQUESTS | OUTFILE_NAME | * Update to the new output file path on the Concurrent Processing server.
* See the steps below.

To update the value of the new Concurrent Processing server, I used the following statements:

UPDATE FND_CONCURRENT_REQUESTS
SET LOGFILE_NODE_NAME = 'GPREP',
OUTFILE_NODE_NAME = 'GPREP';

COMMIT;

To update the new path for the requests in the FND_CONCURRENT_REQUESTS table, I used the following statements:

UPDATE FND_CONCURRENT_REQUESTS
SET LOGFILE_NAME = REPLACE(LOGFILE_NAME, '/prod05/prodcrp3/prod3comn/admin/log/PROD3/', '/prod02/oraprod/prodcomn/admin/log/PROD_gprep/'),
OUTFILE_NAME = REPLACE(OUTFILE_NAME, '/prod05/prodcrp3/prod3comn/admin/out/PROD3/', '/prod02/oraprod/prodcomn/admin/out/PROD_gprep/');

COMMIT;
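In the same spirit, here is a hedged sketch for the Workflow attribute values that embed the old web host; the hostnames are from our environment and the statements are only an illustration, so verify the affected rows with a SELECT first:

UPDATE WF_NOTIFICATION_ATTRIBUTES
SET TEXT_VALUE = REPLACE(TEXT_VALUE, 'http://proesa1880', 'http://gpapps')
WHERE TEXT_VALUE LIKE 'http://proesa1880%';

UPDATE WF_ITEM_ATTRIBUTE_VALUES
SET TEXT_VALUE = REPLACE(TEXT_VALUE, 'http://proesa1880', 'http://gpapps')
WHERE TEXT_VALUE LIKE 'http://proesa1880%';

COMMIT;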
d) After the above changes, it is suggested to complete the following steps so that the AD utilities (adadmin, adpatch) can be executed properly.

It is suggested to rename the topfile.txt file located at $APPL_TOP/admin/topfile.txt; it will be recreated the next time any AD utility is executed.
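For example (the .bak suffix is just my choice):

$ mv $APPL_TOP/admin/topfile.txt $APPL_TOP/admin/topfile.txt.bak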

e) It may be necessary to change, or at least verify, the APPS password in the $IAS_TOP/Apache/modplsql/wdbsvr.app file (as it was in my case); this avoids privilege problems when accessing the application after the cloning procedure.

f) To be able to execute the JAR file regeneration option from the adadmin utility, it will be necessary to execute the following command to regenerate the certification key for the new cloned instance:

$adjkey -initialize

g) If you want to change the default colorScheme of the application accessed through the http://hostname.domain:port/dev60cgi/f60cgi URL, change the colorScheme parameter in the $OA_HTML/bin/appsweb_SID.cfg file, where SID=PROD (in our environment).
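For instance, the relevant line in $OA_HTML/bin/appsweb_PROD.cfg might end up reading as follows; the value "blue" is only an illustration, since several predefined schemes exist:

colorScheme=blue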

h) Avoiding the Yellow Bar Warning. Because we have just generated a new certification file, we need to follow these additional steps and copy the following source files to the target Forms Server node:

$APPL_TOP/admin/appltop.cer: copy to the $APPL_TOP/admin/appltop.cer path on the target Forms Server node.
$APPL_TOP/admin/adsign.txt: copy to the $APPL_TOP/admin/adsign.txt path on the target Forms Server node.
$HOME/identitydb.obj: copy to the $HOME/identitydb.obj path of the owner of the apps files.
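A minimal sketch of the copy, assuming scp access between the nodes and identical paths on both sides (gpapps is our Forms Server node):

$ scp $APPL_TOP/admin/appltop.cer gpapps:$APPL_TOP/admin/appltop.cer
$ scp $APPL_TOP/admin/adsign.txt gpapps:$APPL_TOP/admin/adsign.txt
$ scp $HOME/identitydb.obj gpapps:$HOME/identitydb.obj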

After copying these files, it will be necessary to regenerate the JAR files twice, choosing the option to regenerate the files for all products.

i) It is also recommended to compile all the APPS schema objects; this is done by executing the adadmin utility.
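A quick before-and-after sanity check, run as sysdba (this only counts invalid objects; the compile itself is still done from adadmin):

SQL> select count(*) from dba_objects where status = 'INVALID';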

Target System Configuration.

The next table shows the final configuration of the target system: the directories used across the three nodes gpdb, gpapps, and gprep.

Table 8. Target System Configuration.

Server: gpdb
Tables datafiles: /prod01/oraprod/proddata/ -- 10.58 GB
Index datafiles: /prod02/oraprod/proddata/ -- 9.93 GB
SYSTEM tablespace datafiles: /prod03/oraprod/proddata/ -- 6.41 GB
Rollback segment datafiles: /prod08/oraprod/proddata/rbs01.dbf -- 1.00 GB
                            /prod12/oraprod/proddata/rbs02.dbf -- 1.00 GB
Temporary segment datafiles: /prod10/oraprod/proddata/temp01.dbf -- 2.00 GB
                             /prod09/oraprod/proddata/temp02.dbf -- 2.00 GB
Controlfiles: /prod03/oraprod/proddata/cntrl01.dbf -- 8.4 MB
              /prod01/oraprod/proddata/cntrl02.dbf -- 8.4 MB
              /prod02/oraprod/proddata/cntrl03.dbf -- 8.4 MB
Redolog files (50 MB per member):
  Group 1: /prod11/oraprod/proddata/log01a.dbf, /prod05/oraprod/proddata/log01b.dbf
  Group 2: /prod10/oraprod/proddata/log02a.dbf, /prod04/oraprod/proddata/log02b.dbf
  Group 3: /prod11/oraprod/proddata/log03a.dbf, /prod05/oraprod/proddata/log03b.dbf
  Group 4: /prod10/oraprod/proddata/log04a.dbf, /prod04/oraprod/proddata/log04b.dbf
Archivelogs: /prod13/archives/ -- 50 GB available
Oracle (RDBMS 8.1.7.4): /prod11/oraprod/proddb/8.1.7 -- 2.03 GB

Server: gpapps
APPL_TOP: /prod01/oraprod/prodappl -- 18.41 GB
COMMON_TOP: /prod02/oraprod/prodcomn -- 1.48 GB
ORACLE_HOME: /prod03/oraprod/prodora/8.0.6 -- 2.97 GB
iAS_HOME: /prod03/oraprod/prodora/iAS -- 3.58 GB

Server: gprep
APPL_TOP: /prod01/oraprod/prodappl -- 18.41 GB
COMMON_TOP: /prod02/oraprod/prodcomn -- 1.48 GB
ORACLE_HOME: /prod03/oraprod/prodora/8.0.6 -- 2.97 GB
iAS_HOME: /prod03/oraprod/prodora/iAS -- 3.58 GB

Accessing application.

You can reach the logon screen through either of the following URLs (hostname and domain are omitted here for the same security reasons as above):

1. http://hostname.domain:8004
2. http://hostname.domain:8004/dev60cgi/f60cgi

Testing the new functionality.

It is advisable to keep the source system available so it can be compared against the new target (cloned) system. This document is not intended to list every test you must run to confirm that the cloning procedure worked as intended, since that depends on your own environment; these are the tests I performed:

-Request execution.
-If you have PDF reports, it is advisable to test their output; you may need to change the DISPLAY variable inside the COMMON_TOP scripts.
-If you are using Discoverer Viewer and Discoverer Plus, test their functionality.
-If you are using Oracle Workflow, test its functionality.
-Oracle Alert can also be tested.

Conclusion.

The main goal of this document is to describe the process followed to achieve a successful 1-to-3-node cloning procedure using the Rapid Clone method on version 11.5.8. I started from point g), "Install the Rapid Clone files in the database server node", because our environment was already AutoConfig enabled. The same procedure can be applied to a 1-to-2-node or 1-to-1-node clone.

Note.

This paper is intended to share my experience and issues with other colleagues. I have tried to show all the issues and troubles I ran into while performing a cloning procedure with the Rapid Clone method for the first time. This document is not intended to be a must-follow guide; if you have any other comments or issues to share, they are welcome.

Database general activities and issues II

Continuing the previous topic under the same heading.

Loss of TEMP Datafile
Symptoms: On large sorts (select distinct, order by, group by, union) that can't be done in memory, the sort will fail with "ORA-01157: cannot identify data file 3 - file not found" if the loss happened in the middle of the sort, or "ORA-01116: error in opening database file 3" if the loss happened before the sort started, along with the file name "ORA-01110: data file 3: '/u03/oradata/PROD/temp_PROD_01.dbf'". Nothing is put into the alert.log file, and no trace files will be generated for this.
You may also need to drop and recreate the TEMP tablespace after a database recovery, which may be indicated by an error message like "ORA-25153: temporary tablespace is empty" when you first log into your application (such as SCT Banner), although you probably won't get that message just logging into sqlplus.
Action: All you need to do is to take the datafile offline and drop and recreate the TEMP tablespace, which can be done either while the database is still running (from sqlplus) or beginning at the mount state (when connected as sysdba). Here, we are using a locally managed temporary tablespace.
$ sqlplus "/ as sysdba"
SQL> alter database datafile '/u03/oradata/PROD/temp_PROD_01.dbf' offline;
Then, if from mount state, do an "alter database open;" here.
SQL> select file_name,bytes/1024 kbytes from dba_temp_files;
Shows the size of the TEMP datafile(s) in Kbytes.
SQL> select initial_extent/1024 kbytes from dba_tablespaces
where tablespace_name = 'TEMP';
Shows the uniform size of the extents in Kbytes.
SQL> drop tablespace temp;
SQL> !rm /u03/oradata/PROD/temp_PROD_01.dbf
SQL> create tablespace temp
tempfile '/u03/oradata/PROD/temp_PROD_01.dbf' size 40064K
extent management local uniform size 640K;
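To confirm the recreation, rerun the dictionary query from above; the tempfile should show the size you just specified:

SQL> select file_name, bytes/1024 kbytes from dba_temp_files;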

TEMP Datafile Offline

Symptoms: Similar to Loss of TEMP Datafile, except for the message "ORA-00376: file 3 cannot be read at this time", along with the file name message.
Action: You can offline the datafile and drop and recreate the tablespace, as above, or you can do a datafile recovery with the database open and bring the datafile online.
SQL> recover automatic datafile '/u03/oradata/PROD/temp_PROD_01.dbf';
SQL> alter database datafile '/u03/oradata/PROD/temp_PROD_01.dbf' online;
Loss of INACTIVE Online Redo Log Group (which had already been archived)
Symptoms: The database crashes when an attempt is made to access that redo log group during a logfile switch. Current users will be kicked out with "ORA-01092: ORACLE instance terminated. Disconnection forced", and users trying to log on will get "ERROR: ORA-03114: not connected to ORACLE" and "ERROR: ORA-00472: PMON process terminated with error". Nothing will be placed in the alert.log file; however, there will be pmon (process monitor), dbwr (database writer), and lgwr (log writer) trace files generated in the background dump dest directory. Check these from the unix command line, since the database is not up for you to check them from within it:
$ grep background_dump_dest /u00/oracle/product/v723/dbs/initPROD.ora
Shows the pathname of the bdump directory.
$ cd /u00/oracle/admin/PROD/bdump
$ ls -ltr *.trc
Get the latest trace files in the bdump directory, such as pmon_13612.trc and lgwr_32306.trc (the last ones listed).
$ cat pmon_13612.trc
Shows "ORA-00470: LGWR process terminated with error" (also in dbwr).
$ cat lgwr_32306.trc
Shows "ORA-00313: open failed for members of log group 3 of thread 1" and messages containing the missing log file names like "ORA-00312: online log 3 thread 1: '/u03/oradata/PROD/log_PROD_3B.rdo'".
Action: You will need to startup the database in the mount state, drop the missing logfile group (and remove the group's member files, if they are not already gone), and add the logfile group back, at which time the database can be opened for use.
$ sqlplus "/ as sysdba"
SQL> startup
Shows "ORA-01081: cannot start already-running ORACLE - shut it down first".
SQL> startup force
Or, you could have done a "shutdown abort" followed by a "startup". Shows "Database mounted." and then "ORA-00313: open failed for members of log group 3 of thread 1" with additional messages containing the log file names.
SQL> select bytes/1024 from v$log where group# = 3;
Shows the size in K of the missing group's members.
SQL> select member from v$logfile where group# = 3;
Shows the member (file) names of the missing group.
SQL> alter database drop logfile group 3;
SQL> alter database add logfile group 3
('/u03/oradata/PROD/log_PROD_3A.rdo',
'/u03/oradata/PROD/log_PROD_3B.rdo') size 500K;
Recreates all member files for that group of the given size.
SQL> alter database open;
If just one member of group is lost, the other member(s) will take over and allow Oracle to keep running with no apparent problem. However, alert_PROD.log will show errors such as "ORA-00313: open failed for members of log group 3 of thread 1" and "ORA-00312: online log 3 thread 1: '/u03/oradata/PROD/log_PROD_3A.rdo'" on each log switch for that group. In this case, drop and recreate the member:
SQL> select * from v$log where group# = 3;
If the status is active or current for that group, do an "alter system switch logfile;"
SQL> alter database drop logfile member
'/u03/oradata/PROD/log_PROD_3A.rdo';
SQL> alter database add logfile member
'/u03/oradata/PROD/log_PROD_3A.rdo' to group 3;
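To verify the member was recreated, you can query v$logfile; a newly added member may show an INVALID status until the group is next written to, which is normal:

SQL> select group#, status, member from v$logfile where group# = 3;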

Loss of CURRENT Online Redo Log Group (which needed to be archived)

Symptoms: This behaves like the database freezing when the archivelogs disk volume is filled up; however, doing a "df -k" shows that there is plenty of room for more archivelogs. Current users' sessions will freeze, and users trying to log on will get "ERROR: ORA-00257: archiver error. Connect internal only, until freed.", since the archiver process is still waiting to archive that redo log group after Oracle has cycled through the other groups and needs to use that group again. To verify this conclusion:
$ sqlplus "/ as sysdba"
SQL> select value from v$parameter where name = 'background_dump_dest';
Shows the pathname of the bdump directory.
SQL> !tail -200 /u00/oracle/admin/PROD/bdump/alert_PROD.log
Shows "ORA-00286: No members available, or no member contains valid data", indicating that the redo log group is missing or corrupt, as well as messages such as "ORACLE Instance PROD - Can not allocate log, archival required", "Thread 1 cannot allocate new log, sequence 21", and "All online logs needed archiving".
SQL> !ls -ltr /u00/oracle/admin/PROD/bdump/arch*
Get the latest archiver trace file in the bdump directory, such as arch_22882.trc (the last one listed).
SQL> !cat /u00/oracle/admin/PROD/bdump/arch_22882.trc
Also shows "ORA-00286: No members available, or no member contains valid data".
SQL> shutdown abort
SQL> startup
Shows "ORA-00313: open failed for members of log group 2 of thread 1", or whatever group number has the problem.
Then, if you had tried to just drop and recreate the redo log group from this mount state (as in loss of inactive online redo log group), you would have gotten:
SQL> alter database drop logfile group 2;
Fails with "ORA-00350: log 2 of thread 1 needs to be archived".
Action: This requires an incomplete recovery to just before that group was used, since Oracle can't continue without doing a successful archive, and since you can't drop and recreate the redo log group while the archiver has it marked for archiving. Note that all changes in the lost redo log group will be lost (yuck!). An incomplete time-based recovery is shown below:
SQL> connect / as sysdba
SQL> shutdown abort
At this point, restore ALL datafiles from the last backup, any archivelogs that are needed since that time, AND the lost online redo log group's files (this is the only case in which you would restore those redo logs), but, NOT the control files.
SQL> startup mount
SQL> select group#, sequence#, bytes, first_change#, to_char(first_time,'DD-MON-YY HH24:MI:SS'), status from v$log;
Get the first time for changes in that missing group, subtract 1 second from it, and use that value in the recover command below.
SQL> recover automatic database until time '2005-02-14:12:59:59';
SQL> alter database open resetlogs;
Then, immediately shut down the database and make a BACKUP, since the old backup is unusable after RESETLOGS is used.
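A minimal sketch of that cold backup, assuming a filesystem copy to an illustrative /backup area (include the datafiles, control files, and online redo logs; adapt the patterns to your own file layout):

SQL> shutdown immediate
SQL> !cp /u03/oradata/PROD/*.dbf /backup/PROD/
SQL> !cp /u03/oradata/PROD/*.rdo /backup/PROD/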

Failure During Hot Backup

If you have a failure while you are doing a hot backup on a tablespace, besides doing any recovery that is needed for the particular failure, you will also need to bring those tablespace datafiles back out of hot backup mode. To do this, while the database is in a mount state, do an "end backup" on each of those datafiles before opening the database. This is available in Oracle 7.2 and above - before that, you would have to restore all the datafiles and do an incomplete recovery to the time before the hot backup was started on that tablespace.
$ sqlplus "/ as sysdba"
SQL> startup mount
SQL> select df.name,bk.time from v$datafile df,v$backup bk
where df.file# = bk.file# and bk.status = 'ACTIVE';
Shows the datafiles currently in a hot backup state.
SQL> alter database datafile
'/u03/oradata/PROD/devl_PROD_01.dbf' end backup;
Do an "end backup" on those listed hot backup datafiles.
SQL> alter database open;
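If many datafiles are in backup mode, a statement-generating query saves typing; this is just a sketch, so review the generated commands before executing them:

SQL> select 'alter database datafile '''||df.name||''' end backup;'
from v$datafile df, v$backup bk
where df.file# = bk.file# and bk.status = 'ACTIVE';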

Loss of Server Parameter File (spfile)

If your server parameter file (spfile) becomes corrupt, and you haven't been creating a textual init.ora parameter file as a backup, you can pull the parameters from it using the strings command in UNIX to create an init.ora file. You will need to edit the resulting file to get rid of any garbage characters in it (but don't worry about the "*." characters at the beginning of the lines) and make any corrections to it before using it to start your database, but, at least you will have something to go by:
$ cd $ORACLE_HOME/dbs
$ strings spfilePROD.ora >initPROD.ora
If you have been saving off a textual init.ora parameter file as a backup, you can restore that init.ora file to the $ORACLE_HOME/dbs directory in UNIX (or $ORACLE_HOME\database directory in NT). You will need to delete the corrupt spfile before trying to restart your database, since Oracle looks for the spfile first, and the init.ora file last, to use as the parameter file when it starts up the database (or, you could leave the spfile there and use the pfile option in the startup command to point to the init.ora file). Then, once your database is up, you can recreate the spfile using the following (as sysdba):
SQL> create spfile from pfile;
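Going forward, the reverse command keeps a textual backup in sync whenever you change parameters; run it (as sysdba) after any spfile change:

SQL> create pfile from spfile;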