Upgrade Oracle 9i RAC to Oracle 10g RAC
by Vincent Chan

This article provides the procedures for converting Oracle 9i (9.2.0.4) RAC to Oracle 10g (10.2.0.1) RAC on Red Hat Enterprise Linux 3 (RHEL3).

Oracle Database 10g and Oracle Real Application Clusters (RAC) 10g boast many new and exciting features that simplify database manageability and provide higher availability and scalability. Compared to its predecessors, Oracle 10g RAC is much easier to install, use, and manage, thereby reducing the total cost of ownership.

Overview
The RAC cluster comprises two Intel x86 servers running RHEL3 (kernel 2.4.21-27). Each node has access to shared storage and connectivity to the public and private networks.

This article is structured into the following steps:

1. Preliminary Installation
2. Install Oracle Clusterware
3. Install Oracle RAC Software
4. Upgrade the Database
5. Migrate Database to ASM (Optional Step)

Unless otherwise specified, you should execute all steps on both nodes.

Here's an overview of our Oracle 9i RAC environment:

Host Name   Instance Name   Database Name   $ORACLE_HOME                    Database File Storage
salmon1     prod1a          prod1           /u01/app/oracle/product/9.2.0   OCFS
salmon2     prod1b          prod1           /u01/app/oracle/product/9.2.0   OCFS

File                   File Name            File Storage
Quorum                 /ocfs/prod1/quorum   OCFS
Shared Configuration   /ocfs/prod1/srvm     OCFS

And an overview of the Oracle 10g RAC environment:

Host Name   Instance Name   Database Name   $ORACLE_HOME                          Database File Storage
salmon1     prod1a          prod1           /u01/app/oracle/product/10.2.0/db_1   OCFS
salmon2     prod1b          prod1           /u01/app/oracle/product/10.2.0/db_1   OCFS

File                      File Name            File Storage
Voting Disk               /ocfs/prod1/vdisk1   OCFS
                          /ocfs/prod1/vdisk2
                          /ocfs/prod1/vdisk3
Oracle Cluster Registry   /ocfs/prod1/ocr1     OCFS
                          /ocfs/prod1/ocr2

Step 1: Preliminary Installation
1a. Verify software package versions
Install the required packages. Additional information can be obtained from http://download-east.oracle.com/docs/cd/B19306_01/install.102/b14203/prelinux.htm#sthref380

binutils-2.14
compat-db-4.0.14-5
compat-gcc-7.3-2.96.128
compat-gcc-c++-7.3-2.96.128
compat-libstdc++-7.3-2.96.128
compat-libstdc++-devel-7.3-2.96.128
gcc-3.2
glibc-2.3.2-95.27
make-3.79
openmotif-2.2.3
setarch-1.3-1
1b. Verify kernel parameters
Verify the following kernel parameters. Additional information can be obtained from http://download-east.oracle.com/docs/cd/B19306_01/install.102/b14203/prelinux.htm#sthref418

[root@salmon1]# sysctl -a | grep shm
kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.shmmax = 2147483648
[root@salmon1]# sysctl -a | grep sem
kernel.sem = 250 32000 100 128
[root@salmon1]# sysctl -a | grep -i ip_local
net.ipv4.ip_local_port_range = 1024 65000
[root@salmon1]# sysctl -a | grep -i file-max
fs.file-max = 65536
[root@salmon1]# sysctl -a | egrep "rmem_default|rmem_max|wmem_default|wmem_max"
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_max = 262144
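If any of these values fall short of the requirements, one way to persist them is through /etc/sysctl.conf. A minimal sketch, assuming the target values shown above are appropriate for your system:

[root@salmon1]# cat >> /etc/sysctl.conf <<EOF
kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 1024 65000
fs.file-max = 65536
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_max = 262144
EOF
[root@salmon1]# sysctl -p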
1c. Verify users and groups
Verify that the oracle user primary group is oinstall and the secondary group is dba.

[root@salmon1]# id oracle
uid=500(oracle) gid=501(oinstall) groups=501(oinstall),500(dba)
Verify that the user nobody exists on the node.
[root@salmon1]# id nobody
uid=99(nobody) gid=99(nobody) groups=99(nobody)
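If the oracle user or its groups are missing on a node, a sketch for creating them with the uid/gid values shown above (adjust the IDs to match your environment):

[root@salmon1]# groupadd -g 501 oinstall
[root@salmon1]# groupadd -g 500 dba
[root@salmon1]# useradd -u 500 -g oinstall -G dba oracle
[root@salmon1]# passwd oracle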
1d. Edit the oracle user environment file
Update the oracle user environment file. Note that ORACLE_HOME points to the Oracle Clusterware home for now; it is changed to the database home in Step 3a.

[oracle@salmon1]$ more .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
export PATH=$PATH:$HOME/bin
export ORACLE_SID=prod1a
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/crs
export ORA_CRS_HOME=$ORACLE_BASE/product/crs
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:$PATH
umask 022
1e. Configure the oracle user shell limits
[root@salmon1]# more /etc/security/limits.conf
* soft nproc 2047
* hard nproc 16384
* soft nofile 1024
* hard nofile 65536
[root@salmon1]# grep pam_limits /etc/pam.d/login
session required /lib/security/pam_limits.so
1f. Configure public and private network
Using the information below, make the necessary changes to network interface devices eth0 (public) and eth3 (private).

[root@salmon1]# redhat-config-network

Host Name                     IP Address      Type
salmon1.dbsconsult.com        192.168.0.184   Public (eth0)
salmon2.dbsconsult.com        192.168.0.185   Public (eth0)
salmon1-priv.dbsconsult.com   10.10.10.84     Private (eth3)
salmon2-priv.dbsconsult.com   10.10.10.85     Private (eth3)
salmon1-vip.dbsconsult.com    192.168.0.186   Virtual
salmon2-vip.dbsconsult.com    192.168.0.187   Virtual

1g. Edit the /etc/hosts file
Add the entries for Virtual IP (VIP) addresses in /etc/hosts. VIP is required in Oracle 10g RAC to increase availability by eliminating the need to wait on network timeout which may take as long as 10 minutes. Refer to http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=264847.1 for additional information on VIP configuration.

127.0.0.1 localhost.localdomain localhost
10.10.10.84 salmon1-priv.dbsconsult.com salmon1-priv
10.10.10.85 salmon2-priv.dbsconsult.com salmon2-priv
192.168.0.184 salmon1.dbsconsult.com salmon1
192.168.0.185 salmon2.dbsconsult.com salmon2
192.168.0.186 salmon1-vip.dbsconsult.com salmon1-vip
192.168.0.187 salmon2-vip.dbsconsult.com salmon2-vip
Verify the hostname and the configured network interface devices.

[root@salmon1]# hostname
salmon1.dbsconsult.com
[root@salmon1]# /sbin/ifconfig
1h. Establish user equivalence with SSH
During the Oracle Clusterware and RAC installation, the Oracle Universal Installer (OUI) has to be able to copy the software as oracle to all RAC nodes without being prompted for a password. This can be accomplished using ssh instead of rsh.

To establish user equivalence, generate the user's public and private keys as the oracle user on both nodes.

[oracle@salmon1]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Created directory '/home/oracle/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
5d:8c:42:97:eb:42:ae:52:52:e9:59:20:2a:d3:6f:59 oracle@salmon1.dbsconsult.com
The public key on each node is copied to both nodes. Execute the following on each node.
[oracle@salmon1]$ ssh salmon1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'salmon1 (192.168.0.184)' can't be established.
RSA key fingerprint is 00:d9:70:08:bc:fd:b5:e4:e3:df:a3:c7:d8:46:1e:a5.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'salmon1,192.168.0.184' (RSA) to the list of known hosts.
oracle@salmon1's password:
[oracle@salmon1]$ ssh salmon2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'salmon2 (192.168.0.185)' can't be established.
RSA key fingerprint is 00:d9:70:08:bc:fd:b5:e4:e3:df:a3:c7:d8:46:1e:a5.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'salmon2,192.168.0.185' (RSA) to the list of known hosts.
oracle@salmon2's password:
Test the connection on every node. Verify that you are not prompted for a password when you run the following commands a second time.

ssh salmon1 date
ssh salmon2 date
ssh salmon1-priv date
ssh salmon2-priv date
ssh salmon1.dbsconsult.com date
ssh salmon2.dbsconsult.com date
ssh salmon1-priv.dbsconsult.com date
ssh salmon2-priv.dbsconsult.com date
1i. Configure hangcheck timer kernel module
The hangcheck timer kernel module monitors the system's health and restarts a failing RAC node. It uses two parameters, hangcheck_tick (defines the system checks frequency) and hangcheck_margin (defines the maximum hang delay before a RAC node is reset), to determine if a node is failing.

The hangcheck timer should have already been configured for Oracle 9i (9.2.0.4) RAC. Verify that the following line is in /etc/rc.d/rc.local.

[root@salmon1]# grep insmod /etc/rc.d/rc.local
insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
1j. Create pfile
Save a copy of the pfile for the database upgrade.

SQL> create pfile='/tmp/pfile_singleinst' from spfile;

Step 2: Install Oracle Clusterware Software
Oracle Clusterware requires two files, the Oracle Cluster Registry (OCR) and the voting disk, on shared raw devices or on a Cluster File System (CFS). The Shared Configuration file and quorum file in Oracle 9i RAC have been replaced by the Oracle Cluster Registry and the voting disk, respectively. These files must be accessible to all nodes in the cluster.

To avoid a single point of failure, the OCR and voting disk can now be multiplexed by Oracle Clusterware. You can create up to two OCR files and up to three voting disks.

2a. Oracle Clusterware pre-installation checks
Cluster Verification Utility (CVU) reduces the complexity and time it takes to install RAC. The tool scans all the required components in the cluster environment to ensure all criteria are met for a successful installation. Additional information on CVU can be found at http://download-east.oracle.com/docs/cd/B19306_01/rac.102/b14197/appsupport.htm#BEHIJAJC

Install cvuqdisk RPM prior to running CVU:

rpm -iv /stage/clusterware/rpm/cvuqdisk-1.0.1-1.rpm

To check for shared storage accessibility, run:

/stage/clusterware/cluvfy/runcluvfy.sh comp ssa -n salmon1,salmon2

Upgrade OCFS to the minimum required version if you receive the following warning:

WARNING:
OCFS shared storage discovery skipped because OCFS version 1.0.14 or later is required.

The procedure to upgrade OCFS is located at http://oss.oracle.com/projects/ocfs/dist/documentation/How_To_Upgrade.txt

To perform Oracle Clusterware pre-installation checks, run:

/stage/clusterware/cluvfy/runcluvfy.sh stage -pre crsinst -n salmon1,salmon2 -verbose

CVU reports the error below if non-routable IP addresses such as 192.168.*.*, 172.16.*.* through 172.31.*.*, or 10.*.*.* are used for the public interface (eth0).

ERROR:
Could not find a suitable set of interfaces for VIPs.
Node connectivity check failed.
The error can be safely ignored. As a workaround, invoke the Virtual IP Configuration Assistant (VIPCA) manually during the installation on the second node; a suitable network interface (eth0) will then be detected.

2b. Install Oracle Clusterware software
There are two approaches to installing the Oracle Clusterware:

1. Upgrade the existing Shared Configuration file to Oracle 10g OCR format

2. Create new OCR files.

With option 1, the OUI automatically upgrades the Oracle 9i Shared Configuration file to the 10g OCR format, and the OCR file locator, /etc/oracle/ocr.loc, points to the location of the upgraded Shared Configuration file (/ocfs/prod1/srvm). After the upgrade, the srvconfig_loc entry in /var/opt/oracle/srvConfig.loc is updated to /dev/null.

[oracle@salmon1]$ more /var/opt/oracle/srvConfig.loc
srvconfig_loc=/dev/null
During the upgrade, the OUI does not provide the option of specifying multiple OCR and voting disk file locations. You can, however, use ocrconfig and crsctl to manually multiplex the files afterwards, as sketched below.
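A sketch of adding an OCR mirror and extra voting disks after the upgrade, assuming the 10g file locations used later in this article. Note that on 10.2.0.1 you may need to stop Clusterware and add the -force flag to the crsctl commands, depending on the patch level:

[root@salmon1]# ocrconfig -replace ocrmirror /ocfs/prod1/ocr2
[root@salmon1]# crsctl add css votedisk /ocfs/prod1/vdisk2
[root@salmon1]# crsctl add css votedisk /ocfs/prod1/vdisk3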

With option 2, you simply perform a fresh Oracle Clusterware installation. You have to shut down all Oracle 9i RAC processes (the database instances, listeners, Global Services Daemon, and Oracle Cluster Manager) and rename the Shared Configuration pointer file, srvConfig.loc, to prevent the OUI from detecting the existing Oracle 9i RAC environment.

During the installation, you will be prompted with the option of multiplexing the OCR and voting disk.

My preference is to perform a fresh install. In this demonstration, we will use option 2 to install the Oracle 10g Clusterware software.

Mount the Oracle Clusterware CD or download the software from OTN. The OUI should be launched on only the first node. During installation, the installer automatically copies the software to the second node.

[oracle@salmon1]$ /stage/clusterware/runInstaller

1. Welcome: Click on "Next"
2. Specify Home Details:
a. Name: OraCr10g_home
b. Path: /u01/app/oracle/product/crs
3. Product-Specific Prerequisite Checks: Verify that all checks are successful. Click on "Next."
4. Specify Cluster Configuration:
Cluster Name: crs
a. Public Node Name: salmon1.dbsconsult.com
b. Private Node Name: salmon1-priv.dbsconsult.com
c. Virtual Host Name: salmon1-vip.dbsconsult.com
d. Public Node Name: salmon2.dbsconsult.com
e. Private Node Name: salmon2-priv.dbsconsult.com
f. Virtual Host Name: salmon2-vip.dbsconsult.com
5. Specify Network Interface Usage:
a. Interface Name: eth0 Subnet: 192.168.0.0 Interface Type: Public
b. Interface Name: eth3 Subnet: 10.10.10.0 Interface Type: Private
6. Specify Oracle Cluster Registry (OCR) Location:
a. Select "OCR Configuration"
b. Specify OCR Location: /ocfs/prod1/ocr1
c. Specify OCR Mirror Location: /ocfs/prod1/ocr2
7. Specify Voting Disk Location:
a. Select "Voting Disk Configuration"
b. Voting Disk Location: /ocfs/prod1/vdisk1
c. Additional Voting Disk 1 Location: /ocfs/prod1/vdisk2
d. Additional Voting Disk 2 Location: /ocfs/prod1/vdisk3
8. Summary: Click on "Install"
9. Execute Configuration scripts: Execute /u01/app/oracle/product/crs/root.sh from another window as the root user on each node one at a time. The root.sh script creates the OCR keys, formats the voting disks and starts the CSSD, CRSD and EVMD processes.
10. Execute /u01/app/oracle/product/crs/root.sh as the root user on the second node. The root.sh script invokes the VIPCA automatically and fails with the error: "The given interface(s), "eth0" is not public. Public interfaces should be used to configure virtual IPs." The 192.168.x.x address is considered non-routable by CVU, so it fails to find a suitable public interface. The workaround is to run VIPCA manually.
11. As the root user, manually invoke VIPCA on the second node:
/u01/app/oracle/product/crs/bin/vipca
12. Welcome: Click on "Next"
13. VIP Configuration Assistant, Step 1 of 2: Network Interfaces:
a. Select "eth0"
14. VIP Configuration Assistant, Step 2 of 2: Virtual IPs for cluster nodes:
a. Node name: salmon1
b. IP Alias Name: salmon1-vip
c. IP address: 192.168.0.186
d. Subnet Mask: 255.255.255.0
e. Node name: salmon2
f. IP Alias Name: salmon2-vip
g. IP address: 192.168.0.187
h. Subnet Mask: 255.255.255.0
15. Summary: Click on "Finish"
16. Configuration Assistant Progress Dialog: Verify that all checks are successful. Click on "OK."
17. Configuration Results: Click on "Exit."
18. Return to the Execute Configuration scripts screen on the first node and click on "OK."
19. Configuration Assistants: Verify that all checks are successful. The OUI does a Clusterware
post-installation check at the end. If the CVU fails, correct the problem and re-run the
following command:
/u01/app/oracle/product/crs/bin/cluvfy stage -post crsinst -n salmon1,salmon2
20. End of Installation: Click "Exit."

[oracle@salmon1]$ more /etc/oracle/ocr.loc
ocrconfig_loc=/ocfs/prod1/ocr1
ocrmirrorconfig_loc=/ocfs/prod1/ocr2
local_only=FALSE
[oracle@salmon1]$ srvctl status nodeapps -n salmon1
VIP is running on node: salmon1
GSD is running on node: salmon1
PRKO-2016 : Error in checking condition of listener on node: salmon1
ONS daemon is running on node: salmon1
The PRKO-2016 error can be ignored at this point; the listener is not created until Step 3d.
[oracle@salmon1]$ /u01/app/oracle/product/crs/bin/olsnodes -n
salmon1 1
salmon2 2
[oracle@salmon1]$ ps -ef | egrep "cssd|crsd|evmd"
Step 3: Install Oracle RAC Software
3a. Edit the oracle user environment file
[oracle@salmon1]$ more .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
export PATH=$PATH:$HOME/bin
export ORACLE_SID=prod1a
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORA_CRS_HOME=$ORACLE_BASE/product/crs
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:$PATH
umask 022
[oracle@salmon1]$ mkdir -p /u01/app/oracle/product/10.2.0/db_1
3b. Oracle Database pre-installation checks
Perform Oracle Database pre-installation checks and correct any errors:

/stage/clusterware/cluvfy/runcluvfy.sh stage -pre dbinst -n salmon1,salmon2 -verbose

3c. Install RAC software
Mount the Oracle Database 10g Enterprise Edition CD or download the software from http://www.oracle.com/technology/software/products/database/oracle10g/index.html.

Launch the OUI on only the first node. During installation, the installer automatically copies the software to the second node.

1. Welcome: Click "Next"
2. Select Installation Type:
a. Select "Enterprise Edition"
3. Specify Home Details:
a. Name: OraDB10g_home1
b. Path: /u01/app/oracle/product/10.2.0/db_1
4. Specify Hardware Cluster Installation Mode:
a. Select "Cluster Installation"
b. Click on "Select All"
5. Product-Specific Prerequisite Checks:
a. Verify that all checks are successful before proceeding.
6. Select Configuration Option:
a. Select "Install database Software only"
7. Summary: Click on "Install"
8. Execute Configuration scripts: Execute /u01/app/oracle/product/10.2.0/db_1/root.sh script
from another window as the root user on each cluster node.
9. Return to the Execute Configuration scripts screen and click on "OK."
10. End of Installation: Click on "Exit."

3d. Configure Oracle Listener
The Network Configuration Assistant (NETCA) should be launched and configured on only one node. At the end of the configuration process, NETCA starts the Oracle listener on both nodes.

1. Real Application Clusters, Configuration:
a. Select "Cluster configuration"
2. Real Application Clusters, Active Nodes:
a. Click on "Select all nodes"
3. Welcome:
a. Select "Listener configuration"
4. Listener Configuration, Listener:
a. Select "Add"
5. Listener Configuration, Listener Name:
a. Listener Name: LISTENER
6. Listener Configuration, Select Protocols:
a. Selected Protocols: TCP
7. Listener Configuration, TCP/IP Protocol:
a. Select "Use the standard port number of 1521"
8. Listener Configuration, More Listeners?:
a. Select "No"
9. Listener Configuration Done:
a. Click on "Finish"

3e. Verify status of services
On node 1:

[oracle@salmon1]$ srvctl status nodeapps -n salmon1
VIP is running on node: salmon1
GSD is running on node: salmon1
Listener is running on node: salmon1
ONS daemon is running on node: salmon1
On node 2:

[oracle@salmon2]$ srvctl status nodeapps -n salmon2
VIP is running on node: salmon2
GSD is running on node: salmon2
Listener is running on node: salmon2
ONS daemon is running on node: salmon2
3f. listener.ora file
On node 1:

LISTENER_SALMON1 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = salmon1-vip)(PORT = 1521)(IP = FIRST))
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.184)(PORT = 1521)(IP = FIRST))
)
)
SID_LIST_LISTENER_SALMON1 =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
(PROGRAM = extproc)
)
)
On node 2:

LISTENER_SALMON2 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = salmon2-vip)(PORT = 1521)(IP = FIRST))
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.185)(PORT = 1521)(IP = FIRST))
)
)
SID_LIST_LISTENER_SALMON2 =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
(PROGRAM = extproc)
)
)

Step 4: Upgrade the Database
The database upgrade process is simpler than in earlier releases. Oracle provides a pre-upgrade information tool that analyzes the database and recommends proper database settings prior to the upgrade. There are also fewer steps in a manual database upgrade.

A RAC database can be upgraded using the Database Upgrade Assistant (DBUA) or by performing a manual upgrade. The method demonstrated here is a manual database upgrade. Ensure that a full cold backup is taken before attempting the upgrade; see the sketch below. For detailed information on performing an upgrade, please refer to http://download-east.oracle.com/docs/...upgrade.htm#i1011372.
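A minimal cold backup sketch, assuming all database files live under /ocfs/prod1 as in this environment and that a hypothetical /backup/prod1 directory has sufficient space. Shut down both instances first:

SQL> shutdown immediate;
[oracle@salmon1]$ mkdir -p /backup/prod1
[oracle@salmon1]$ cp /ocfs/prod1/*.dbf /ocfs/prod1/*.log /ocfs/prod1/*.ctl /backup/prod1/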

4a. Create password files for RAC instances
Using the orapwd utility, create the orapwprod1a and orapwprod1b files in $ORACLE_HOME/dbs on the first and second node respectively, as sketched below.
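A sketch, following the same pattern used for the ASM password files in Step 5f (the password shown is a placeholder):

[oracle@salmon1]$ orapwd file=$ORACLE_HOME/dbs/orapwprod1a password=sys entries=5
[oracle@salmon2]$ orapwd file=$ORACLE_HOME/dbs/orapwprod1b password=sys entries=5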

4b. Create init.ora for 10g RAC instances
On node 1:

[oracle@salmon1]$ cp /u01/app/oracle/product/9.2.0/dbs/initprod1a.ora $ORACLE_HOME/dbs
[oracle@salmon1]$ more $ORACLE_HOME/dbs/initprod1a.ora
spfile=/ocfs/prod1/spfileprod1.ora
On node 2:

[oracle@salmon2]$ cp /u01/app/oracle/product/9.2.0/dbs/initprod1b.ora $ORACLE_HOME/dbs
[oracle@salmon2]$ more $ORACLE_HOME/dbs/initprod1b.ora
spfile=/ocfs/prod1/spfileprod1.ora
4c. Create tnsnames.ora on RAC nodes
[oracle@salmon1]$ more $ORACLE_HOME/network/admin/tnsnames.ora
LISTENERS_PROD1 =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = salmon1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = salmon2-vip)(PORT = 1521))
)
PROD1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = salmon1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = salmon2-vip)(PORT = 1521))
(LOAD_BALANCE = yes)
(CONNECT_DATA =
(SERVICE_NAME = PROD1)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 200)
(DELAY = 5)
)
)
)
PROD1A =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = salmon1-vip)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = PROD1)
(INSTANCE_NAME = PROD1A)
)
)
PROD1B =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = salmon2-vip)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = PROD1)
(INSTANCE_NAME = PROD1B)
)
)
4d. Edit pfile
Modify the file pfile_singleinst saved earlier in Step 1j. Comment out the following two parameters:

#*.cluster_database_instances=2
#*.cluster_database=true
4e. Start up 9i database in single-instance mode
[root@salmon1]# export ORACLE_HOME=/u01/app/oracle/product/9.2.0
[root@salmon1]# $ORACLE_HOME/oracm/bin/ocmstart.sh
oracm </dev/null 2>&1 >/u01/app/oracle/product/9.2.0/oracm/log/cm.out &
[root@salmon1]# su - oracle
[oracle@salmon1]$ export ORACLE_HOME=/u01/app/oracle/product/9.2.0
[oracle@salmon1]$ echo $ORACLE_HOME
/u01/app/oracle/product/9.2.0
[oracle@salmon1]$ sqlplus "/ as sysdba"
SQL*Plus: Release 9.2.0.4.0 - Production on Sat Jul 30 06:13:06 2005
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to an idle instance.
SQL> startup pfile=/tmp/pfile_singleinst
4f. Perform pre-database upgrade checks
Run the pre-upgrade information tool and make the adjustments it reports before proceeding with the upgrade. The SYSAUX tablespace is created after starting the database under the new 10g release (Step 4n).

SQL> @/u01/app/oracle/product/10.2.0/db_1/rdbms/admin/utlu102i.sql

4g. Gather optimizer statistics
Optimizer statistics are automatically gathered during the upgrade for dictionary objects that have stale or missing statistics. To shorten the upgrade time, consider collecting the statistics before the upgrade.

exec dbms_stats.gather_schema_stats('CTXSYS', options=>'GATHER', estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt=>'FOR ALL COLUMNS SIZE AUTO', cascade=>TRUE);
exec dbms_stats.gather_schema_stats('SYS', options=>'GATHER', estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt=>'FOR ALL COLUMNS SIZE AUTO', cascade=>TRUE);
4h. Connect role privilege
Prior to Oracle 10gR2, the privileges granted to the CONNECT role are:

SQL> select PRIVILEGE from DBA_SYS_PRIVS where grantee='CONNECT';
PRIVILEGE
--------------------
CREATE VIEW
CREATE TABLE
ALTER SESSION
CREATE CLUSTER
CREATE SESSION
CREATE SYNONYM
CREATE SEQUENCE
CREATE DATABASE LINK
In Oracle 10gR2, the only privilege granted to the CONNECT role is CREATE SESSION. Grant the necessary privileges to users or roles before the database upgrade; a sketch follows. The upgrade scripts automatically make the required adjustments to the Oracle-supplied users.
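For example, for an application account that previously relied on the CONNECT role (app_user is a hypothetical schema; grant only what each account actually needs):

SQL> grant create table, create view, create synonym, create sequence, alter session to app_user;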

4i. Verify sys and system default tablespace
SQL> select USERNAME, DEFAULT_TABLESPACE from DBA_USERS where username in ('SYS','SYSTEM');
USERNAME DEFAULT_TABLESPACE
----------- --------------------
SYSTEM SYSTEM
SYS SYSTEM
4j. Check for invalid objects
Recompile any invalid objects and obtain a list of objects that could not be recompiled; see the sketch below.
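A sketch using the standard utlrp.sql recompilation script from the 9i home (the database is still 9i at this point) and a follow-up query:

SQL> @/u01/app/oracle/product/9.2.0/rdbms/admin/utlrp.sql
SQL> select owner, object_name, object_type from dba_objects where status = 'INVALID';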

4k. Update oratab file
Update the /etc/oratab file to reflect the new 10g ORACLE_HOME; an example entry is shown below.
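An example /etc/oratab entry for this environment on node 1 (use prod1b on node 2). The trailing N prevents dbstart from starting the instance, since Oracle Clusterware will manage it:

prod1a:/u01/app/oracle/product/10.2.0/db_1:N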

4l. Disable jobs
Disable any cron or snapshot jobs; a sketch follows.
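A sketch, assuming the jobs run from the oracle user's crontab and the database job queue; save a copy of the crontab before removing it:

[oracle@salmon1]$ crontab -l > /tmp/crontab.oracle.bak
[oracle@salmon1]$ crontab -r
SQL> alter system set job_queue_processes=0;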

4m. Shut down 9i database
SQL> shutdown immediate;

4n. Upgrade database
In the new Oracle 10gR2 environment on node 1, run the following statements in order:

1. SQL> startup pfile=/tmp/pfile_singleinst upgrade

2. SQL> create tablespace sysaux datafile '/ocfs/prod1/sysaux01.dbf' size 500M
        extent management local segment space management auto;

3. SQL> spool /tmp/catupgrd.log

4. SQL> @$ORACLE_HOME/rdbms/admin/catupgrd

Review the log file for errors.

4o. Check for invalid objects
Recompile any invalid objects and obtain a list of objects that could not be recompiled. Compare this list with the list of invalid objects before the upgrade.

4p. Optimizer statistics gathering job
After the database upgrade, an optimizer statistics collection job called GATHER_STATS_JOB was created and is scheduled to run when the MAINTENANCE_WINDOW_GROUP window group is opened.

The MAINTENANCE_WINDOW_GROUP consists of two windows, the WEEKNIGHT_WINDOW and the WEEKEND_WINDOW. The WEEKNIGHT_WINDOW opens Monday through Friday at 10 p.m. for 8 hours, and the WEEKEND_WINDOW opens on Saturday at 12 a.m. for 48 hours.

SQL> select JOB_NAME, WINDOW_GROUP_NAME, SW.WINDOW_NAME, DURATION, SW.REPEAT_INTERVAL
2 from DBA_SCHEDULER_JOBS j, DBA_SCHEDULER_WINGROUP_MEMBERS wm, DBA_SCHEDULER_WINDOWS sw
3 where j.schedule_name=wm.window_group_name
4 and sw.window_name=wm.window_name
5 and job_name='GATHER_STATS_JOB';
JOB_NAME WINDOW_GROUP_NAME WINDOW_NAME DURATION
----------------- ------------------------- ----------------- ---------------
REPEAT_INTERVAL
--------------------------------------------------------------------------------
GATHER_STATS_JOB MAINTENANCE_WINDOW_GROUP WEEKEND_WINDOW +002 00:00:00
freq=daily;byday=SAT;byhour=0;byminute=0;bysecond=0
GATHER_STATS_JOB MAINTENANCE_WINDOW_GROUP WEEKNIGHT_WINDOW +000 08:00:00
freq=daily;byday=MON,TUE,WED,THU,FRI;byhour=22;byminute=0;bysecond=0
You may want to modify the window properties, or disable the job (see the sketch below) if you have other methods of keeping the statistics current.
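A sketch for disabling the job with the 10g Scheduler, should you choose to manage statistics yourself:

SQL> exec dbms_scheduler.disable('GATHER_STATS_JOB');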

4q. Start up 10g database in cluster mode
Modify the following entries in pfile_singleinst and restart the database using the modified pfile.

*.cluster_database_instances=2
*.cluster_database=true
*.compatible='10.2.0'
*.log_archive_format='prod1_%t_%s_%r.arc'
Once the compatible parameter is set to 10.2.0, the change is irreversible; you cannot downgrade the database back to 9.2.0.

The log_archive_format requires a "%r" format specification when compatible is set to 10.2.0. The "%r" refers to the logical incarnation of the database and changes each time the database is opened with the RESETLOGS option. The default value of log_archive_format is %t_%s_%r.dbf.

4r. Create spfile
Create the spfile and restart the instance using the new spfile.

SQL> create spfile='/ocfs/prod1/spfileprod1.ora' from pfile='/tmp/pfile_singleinst';

4s. Start up the second instance
[oracle@salmon2]$ export ORACLE_SID=prod1b
[oracle@salmon2]$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Sat Jul 30 08:51:13 2005
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup
4t. Register the RAC instances with Oracle Clusterware
[oracle@salmon1]$ srvctl add database -d prod1 -o $ORACLE_HOME
[oracle@salmon1]$ srvctl add instance -d prod1 -i prod1a -n salmon1
[oracle@salmon1]$ srvctl add instance -d prod1 -i prod1b -n salmon2
[oracle@salmon1]$ srvctl config database -d prod1
salmon1 prod1a /u01/app/oracle/product/10.2.0/db_1
salmon2 prod1b /u01/app/oracle/product/10.2.0/db_1
4u. Remove Oracle 9i RAC start up/shut down scripts
Remove any scripts that are responsible for starting and stopping Oracle 9i RAC processes, such as the Oracle Cluster Manager and Global Services Daemon; a sketch for locating them follows. Reboot the cluster nodes and verify that the Oracle 10g Clusterware, nodeapps, and instances are started automatically.
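A sketch for locating 9i startup entries, assuming they were added to rc.local as in Step 1i (gsdctl is the 9i Global Services Daemon control utility; leave the hangcheck-timer entry in place, as it is still required):

[root@salmon1]# grep -n "ocmstart\|gsdctl" /etc/rc.d/rc.local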

[oracle@salmon1]$ crs_stat -t
Name Type Target State Host
-----------------------------------------------------------------
ora.prod1.db application ONLINE ONLINE salmon1
ora....1a.inst application ONLINE ONLINE salmon1
ora....1b.inst application ONLINE ONLINE salmon2
ora....N1.lsnr application ONLINE ONLINE salmon1
ora....on1.gsd application ONLINE ONLINE salmon1
ora....on1.ons application ONLINE ONLINE salmon1
ora....on1.vip application ONLINE ONLINE salmon1
ora....N2.lsnr application ONLINE ONLINE salmon2
ora....on2.gsd application ONLINE ONLINE salmon2
ora....on2.ons application ONLINE ONLINE salmon2
ora....on2.vip application ONLINE ONLINE salmon2
[oracle@salmon1]$ srvctl status database -d prod1
Instance prod1a is running on node salmon1
Instance prod1b is running on node salmon2
Use the following commands to manually start and stop the instances:
srvctl start database -d prod1
srvctl start instance -d prod1 -i prod1a
srvctl start instance -d prod1 -i prod1b
srvctl stop database -d prod1
srvctl stop instance -d prod1 -i prod1a
srvctl stop instance -d prod1 -i prod1b
4v. Re-enable jobs
Re-enable any jobs that were disabled before the upgrade.

Congratulations, you have upgraded your Oracle 9i RAC to Oracle 10g RAC!

Step 5: Migrate Database to ASM (Optional Step)
You can continue running your RAC database on OCFS or migrate it to Automatic Storage Management (ASM). ASM is the recommended shared storage solution for Oracle RAC databases. It eliminates the complexity of managing database storage, and its striping and mirroring capabilities provide storage performance and protection against data loss.

In this section, we will provide a step-by-step procedure for migrating your database to ASM.

5a. Download ASM RPMs
Download the ASM RPMs for your kernel from http://www.oracle.com/technology/tech/linux/asmlib/index.html

oracleasm-support-2.0.0-1.i386.rpm
oracleasm-2.4.21-27.EL-1.0.4-2.i686.rpm (driver for UP kernel)
oracleasmlib-2.0.0-1.i386.rpm
5b. Install ASM RPMs
Install the ASM RPMs as the root user.

[root@salmon1]# rpm -Uvh oracleasm-2.4.21-27.EL-1.0.4-2.i686.rpm \
oracleasmlib-2.0.0-1.i386.rpm \
oracleasm-support-2.0.0-1.i386.rpm
5c. Configure ASM
Configure the ASM Library driver as the root user.

[root@salmon1]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: [ OK ]
Loading module "oracleasm": [ OK ]
Mounting ASMlib driver filesystem: [ OK ]
Scanning system for ASM disks: [ OK ]
5d. Create ASM disks
As the root user on any node, execute the following commands to create the ASM disks.

/etc/init.d/oracleasm createdisk VOL1 /dev/sdg5
/etc/init.d/oracleasm createdisk VOL2 /dev/sdg6
/etc/init.d/oracleasm createdisk VOL3 /dev/sdg7
/etc/init.d/oracleasm createdisk VOL4 /dev/sdg8
/etc/init.d/oracleasm createdisk VOL5 /dev/sdg9
/etc/init.d/oracleasm createdisk VOL6 /dev/sdg10
/etc/init.d/oracleasm createdisk VOL7 /dev/sdg11
/etc/init.d/oracleasm createdisk VOL8 /dev/sdg12
/etc/init.d/oracleasm createdisk VOL9 /dev/sdg13
/etc/init.d/oracleasm createdisk VOL10 /dev/sdg14
Verify that the ASM disks are visible from every node.

[root@salmon1]# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3
VOL4
VOL5
VOL6
VOL7
VOL8
VOL9
VOL10
[root@salmon1]# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks [ OK ]
5e. Create ASM init.ora
On the first node, create an init+ASM1A.ora file in $ORACLE_HOME/dbs with the following parameters:

#asm_diskgroups='DG1', 'DG2', 'RECOVERYDEST'
asm_diskstring='ORCL:VOL*'
background_dump_dest=/u01/app/oracle/admin/+ASM/bdump
core_dump_dest=/u01/app/oracle/admin/+ASM/cdump
user_dump_dest=/u01/app/oracle/admin/+ASM/udump
instance_type=asm
large_pool_size=16M
remote_login_passwordfile=exclusive
+ASM1A.instance_number=1
+ASM1B.instance_number=2
5f. Create ASM password file
Create the password files, orapw+ASM1A and orapw+ASM1B in $ORACLE_HOME/dbs on the first and second node respectively.

[oracle@salmon1]$ cd $ORACLE_HOME/dbs
[oracle@salmon1]$ orapwd file=orapw+ASM1A password=sys entries=5
[oracle@salmon2]$ cd $ORACLE_HOME/dbs
[oracle@salmon2]$ orapwd file=orapw+ASM1B password=sys entries=5
5g. Create the first ASM instance
[oracle@salmon1]$ mkdir -p /u01/app/oracle/admin/+ASM/bdump
[oracle@salmon1]$ mkdir -p /u01/app/oracle/admin/+ASM/cdump
[oracle@salmon1]$ mkdir -p /u01/app/oracle/admin/+ASM/udump
[oracle@salmon1]$ export ORACLE_SID=+ASM1A
[oracle@salmon1]$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Mon Aug 1 04:07:17 2005
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup
ASM instance started
Total System Global Area 96468992 bytes
Fixed Size 1217908 bytes
Variable Size 70085260 bytes
ASM Cache 25165824 bytes
ORA-15110: no diskgroups mounted
5h. Create the ASM instance spfile
Create an spfile and restart the ASM instance. Any newly created disk groups will automatically be added to the spfile.

SQL> create spfile from pfile;
File created.
5i. Create ASM disk groups
Create three disk groups: DG1, DG2 and RECOVERYDEST. DG1 and DG2 will be used to store Oracle data files and redo logs. RECOVERYDEST will be used as the flash recovery area.

SQL> create diskgroup dg1 normal redundancy
2 failgroup fg1a disk
3 'ORCL:VOL1','ORCL:VOL2'
4 failgroup fg1b disk
5 'ORCL:VOL3','ORCL:VOL4';
Diskgroup created.
SQL> create diskgroup dg2 normal redundancy
2 failgroup fg2a disk
3 'ORCL:VOL5','ORCL:VOL6'
4 failgroup fg2b disk
5 'ORCL:VOL7','ORCL:VOL8';
Diskgroup created.
SQL> create diskgroup recoverydest normal redundancy
2 failgroup fgrd1 disk
3 'ORCL:VOL9'
4 failgroup fgrd2 disk
5 'ORCL:VOL10';

Diskgroup created.
SQL> show parameter asm_diskgroups

NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------------------------
asm_diskgroups                       string      DG1, DG2, RECOVERYDEST
SQL> select name,total_mb from v$asm_diskgroup;

NAME TOTAL_MB
-------------------- -------------
DG1 36864
DG2 36864
RECOVERYDEST 73728

SQL> select name,path,failgroup from v$asm_disk;

NAME PATH FAILGROUP
---------- -------------- ---------------
VOL1 ORCL:VOL1 FG1A
VOL10 ORCL:VOL10 FGRD2
VOL2 ORCL:VOL2 FG1A
VOL3 ORCL:VOL3 FG1B
VOL4 ORCL:VOL4 FG1B
VOL5 ORCL:VOL5 FG2A
VOL6 ORCL:VOL6 FG2A
VOL7 ORCL:VOL7 FG2B
VOL8 ORCL:VOL8 FG2B
VOL9 ORCL:VOL9 FGRD1
5j. Configure flash recovery area
SQL> connect sys/sys@prod1a as sysdba
Connected.
SQL> alter database disable block change tracking;
alter database disable block change tracking
*
ERROR at line 1:
ORA-19759: block change tracking is not enabled
The ORA-19759 error is expected and can be ignored; block change tracking was never enabled in this database.
SQL> alter system set db_recovery_file_dest_size=72G;
System altered.
SQL> alter system set db_recovery_file_dest='+RECOVERYDEST';
System altered.
5k. Migrate data files to ASM
You must use RMAN to migrate the data files to ASM disk groups. All data files will be migrated to the newly created disk group DG1. The redo logs and control files are created in DG1 and DG2.

SQL> connect sys/sys@prod1a as sysdba
Connected.
SQL> alter system set db_create_file_dest='+DG1';
System altered.
SQL> alter system set control_files='+DG1/cf1.dbf' scope=spfile;
System altered.
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options
[oracle@salmon1 oracle]$ srvctl stop database -d prod1
[oracle@salmon1 oracle]$ rman target /
Recovery Manager: Release 10.2.0.1.0 - Production on Mon Aug 1 04:47:55 2005
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database (not started)
RMAN> startup nomount
Oracle instance started
Total System Global Area 528482304 bytes
Fixed Size 1220360 bytes
Variable Size 310378744 bytes
Database Buffers 209715200 bytes
Redo Buffers 7168000 bytes
RMAN> restore controlfile from '/ocfs/prod1/control01.ctl';
Starting restore at 01-AUG-05
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=204 instance=prod1a devtype=DISK
channel ORA_DISK_1: copied control file copy
output filename=+DG1/cf1.dbf
Finished restore at 01-AUG-05
RMAN> alter database mount;
database mounted
released channel: ORA_DISK_1
RMAN> backup as copy database format '+DG1';
Starting backup at 01-AUG-05
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=204 instance=prod1a devtype=DISK
channel ORA_DISK_1: starting datafile copy
input datafile fno=00001 name=/ocfs/prod1/system01.dbf
output filename=+DG1/prod1/datafile/system.257.565159841 tag=TAG20050801T045038 recid=1 stamp=565159936
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:01:46
channel ORA_DISK_1: starting datafile copy
input datafile fno=00002 name=/ocfs/prod1/undotbs1.dbf
output filename=+DG1/prod1/datafile/undotbs1.258.565159949 tag=TAG20050801T045038 recid=2 stamp=565160001
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:56
channel ORA_DISK_1: starting datafile copy
input datafile fno=00005 name=/ocfs/prod1/undotbs2.dbf
output filename=+DG1/prod1/datafile/undotbs2.259.565160005 tag=TAG20050801T045038 recid=3 stamp=565160052
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:55
channel ORA_DISK_1: starting datafile copy
input datafile fno=00007 name=/ocfs/prod1/sysaux01.dbf
output filename=+DG1/prod1/datafile/sysaux.260.565160061 tag=TAG20050801T045038 recid=4 stamp=565160084
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:35
channel ORA_DISK_1: starting datafile copy
copying current control file
output filename=+DG1/prod1/controlfile/backup.261.565160095 tag=TAG20050801T045038 recid=5 stamp=565160101
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
input datafile fno=00003 name=/ocfs/prod1/users01.dbf
output filename=+DG1/prod1/datafile/users.262.565160103 tag=TAG20050801T045038 recid=6 stamp=565160108
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:08
channel ORA_DISK_1: starting datafile copy
input datafile fno=00004 name=/ocfs/prod1/drsys01.dbf
output filename=+DG1/prod1/datafile/drsys.263.565160111 tag=TAG20050801T045038 recid=7 stamp=565160115
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
input datafile fno=00006 name=/ocfs/prod1/apps_d01.dbf
output filename=+DG1/prod1/datafile/apps_d.264.565160121 tag=TAG20050801T045038 recid=8 stamp=565160122
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
including current SPFILE in backupset
channel ORA_DISK_1: starting piece 1 at 01-AUG-05
channel ORA_DISK_1: finished piece 1 at 01-AUG-05
piece handle=+DG1/prod1/backupset/2005_08_01/nnsnf0_tag20050801t045038_0.265.565160131 tag=TAG20050801T045038 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:17
Finished backup at 01-AUG-05
RMAN> switch database to copy;
datafile 1 switched to datafile copy "+DG1/prod1/datafile/system.257.565159841"
datafile 2 switched to datafile copy "+DG1/prod1/datafile/undotbs1.258.565159949"
datafile 3 switched to datafile copy "+DG1/prod1/datafile/users.262.565160103"
datafile 4 switched to datafile copy "+DG1/prod1/datafile/drsys.263.565160111"
datafile 5 switched to datafile copy "+DG1/prod1/datafile/undotbs2.259.565160005"
datafile 6 switched to datafile copy "+DG1/prod1/datafile/apps_d.264.565160121"
datafile 7 switched to datafile copy "+DG1/prod1/datafile/sysaux.260.565160061"
RMAN> alter database open;
database opened
RMAN> exit
SQL> connect sys/sys@prod1a as sysdba
Connected.
SQL> select tablespace_name, file_name from dba_data_files;
TABLESPACE_NAME FILE_NAME
--------------- ---------------------------------------------
SYSTEM +DG1/prod1/datafile/system.257.565159841
UNDOTBS1 +DG1/prod1/datafile/undotbs1.258.565159949
USERS +DG1/prod1/datafile/users.262.565160103
DRSYS +DG1/prod1/datafile/drsys.263.565160111
UNDOTBS2 +DG1/prod1/datafile/undotbs2.259.565160005
APPS_D +DG1/prod1/datafile/apps_d.264.565160121
SYSAUX +DG1/prod1/datafile/sysaux.260.565160061
7 rows selected.
5l. Migrate temp tablespace to ASM
SQL> select file_name from dba_temp_files;

FILE_NAME
---------------------------------------------
/ocfs/prod1/temp01.dbf
SQL> alter database tempfile '/ocfs/prod1/temp01.dbf' drop including datafiles;
Database altered.
SQL> alter tablespace temp add tempfile size 100M;
Tablespace altered.
SQL> select file_name from dba_temp_files;
FILE_NAME
---------------------------------------------
+DG1/prod1/tempfile/temp.266.565165327

5m. Migrate redo logs of the first instance to ASM
SQL> alter system set db_create_online_log_dest_1='+DG1';
System altered.
SQL> alter system set db_create_online_log_dest_2='+DG2';
System altered.
SQL> select l.group#, thread#, member
2 from v$log l, v$logfile lf
3 where l.group#=lf.group#;
GROUP# THREAD# MEMBER
---------- ---------- --------------------------------
1 1 /ocfs/prod1/redo01.log
2 1 /ocfs/prod1/redo02.log
3 2 /ocfs/prod1/redo03.log
4 2 /ocfs/prod1/redo04.log
SQL> alter database add logfile group 5 size 10M;
Database altered.
SQL> alter system switch logfile;
System altered.
SQL> alter database drop logfile group 1;
Database altered.
SQL> alter database add logfile group 1 size 100M;
Database altered.
SQL> alter database drop logfile group 2;
Database altered.
SQL> alter database add logfile group 2 size 100M;
Database altered.
SQL> alter system switch logfile;
System altered.
SQL> alter database drop logfile group 5;
Database altered.
SQL> select l.group#, thread#, member
2 from v$log l, v$logfile lf
3 where l.group#=lf.group#;
GROUP# THREAD# MEMBER
---------- ---------- ----------------------------------------------
1 1 +DG1/prod1/onlinelog/group_1.268.565168737
2 1 +DG1/prod1/onlinelog/group_2.269.565168889
3 2 /ocfs/prod1/redo03.log
4 2 /ocfs/prod1/redo04.log
1 1 +DG2/prod1/onlinelog/group_1.257.565168763
2 1 +DG2/prod1/onlinelog/group_2.258.565168913
6 rows selected.
5n. Modify init.ora of both RAC instances
On node 1:

[oracle@salmon1]$ more $ORACLE_HOME/dbs/initprod1a.ora
spfile='+DG1/spfileprod1.ora'
On node 2:

[oracle@salmon2]$ more $ORACLE_HOME/dbs/initprod1b.ora
spfile='+DG1/spfileprod1.ora'
5o. Migrate database spfile to ASM
On prod1a:

SQL> create pfile='/tmp/pfilemig' from spfile;
File created.
SQL> create spfile='+DG1/spfileprod1.ora' from pfile='/tmp/pfilemig';
File created.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 528482304 bytes
Fixed Size 1220360 bytes
Variable Size 310378744 bytes
Database Buffers 209715200 bytes
Redo Buffers 7168000 bytes
Database mounted.
Database opened.
SQL> show parameter spfile
NAME TYPE VALUE
------------------------------------ ----------- --------------------------
spfile string +DG1/spfileprod1.ora
5p. Create the second ASM instance
[oracle@salmon1]$ cd /u01/app/oracle/product/10.2.0/db_1/dbs
[oracle@salmon1]$ scp spfile+ASM1A.ora salmon2:/u01/app/oracle/product/10.2.0/db_1/dbs/spfile+ASM1B.ora
spfile+ASM1A.ora 100% 1536 1.4MB/s 00:00
Create the second ASM instance on the second node.
[oracle@salmon2]$ mkdir -p /u01/app/oracle/admin/+ASM/bdump
[oracle@salmon2]$ mkdir -p /u01/app/oracle/admin/+ASM/cdump
[oracle@salmon2]$ mkdir -p /u01/app/oracle/admin/+ASM/udump
[oracle@salmon2]$ export ORACLE_SID=+ASM1B
[oracle@salmon2 dbs]$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Mon Aug 1 18:27:28 2005
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup
ASM instance started
Total System Global Area 96468992 bytes
Fixed Size 1217908 bytes
Variable Size 70085260 bytes
ASM Cache 25165824 bytes
ASM diskgroups mounted
5q. Migrate redo logs of the second instance to ASM
[oracle@salmon2 dbs]$ export ORACLE_SID=prod1b
[oracle@salmon2 dbs]$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Mon Aug 1 18:45:26 2005
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup
ORACLE instance started.
Total System Global Area 528482304 bytes
Fixed Size 1220360 bytes
Variable Size 310378744 bytes
Database Buffers 209715200 bytes
Redo Buffers 7168000 bytes
Database mounted.
Database opened.
SQL> select l.group#, thread#, member
2 from v$log l, v$logfile lf
3 where l.group#=lf.group#;
GROUP# THREAD# MEMBER
---------- ---------- -------------------------------------------------
1 1 +DG1/prod1/onlinelog/group_1.268.565168737
2 1 +DG1/prod1/onlinelog/group_2.269.565168889
3 2 /ocfs/prod1/redo03.log
4 2 /ocfs/prod1/redo04.log
1 1 +DG2/prod1/onlinelog/group_1.257.565168763
2 1 +DG2/prod1/onlinelog/group_2.258.565168913
6 rows selected.
SQL> alter database add logfile group 5 size 10M;
Database altered.
SQL> alter system switch logfile;
System altered.
SQL> alter database drop logfile group 3;
Database altered.
SQL> alter database add logfile group 3 size 100M;
Database altered.
SQL> alter database drop logfile group 4;
Database altered.
SQL> alter database add logfile group 4 size 100M;
Database altered.
SQL> alter system switch logfile;
System altered.
SQL> alter database drop logfile group 5;
Database altered.
SQL> select l.group#, thread#, member
2 from v$log l, v$logfile lf
3 where l.group#=lf.group#;
GROUP# THREAD# MEMBER
---------- ---------- --------------------------------------------------
1 1 +DG1/prod1/onlinelog/group_1.268.565168737
2 1 +DG1/prod1/onlinelog/group_2.269.565168889
3 2 +DG1/prod1/onlinelog/group_3.271.565179129
4 2 +DG1/prod1/onlinelog/group_4.272.565180943
1 1 +DG2/prod1/onlinelog/group_1.257.565168763
2 1 +DG2/prod1/onlinelog/group_2.258.565168913
3 2 +DG2/prod1/onlinelog/group_3.259.565179149
4 2 +DG2/prod1/onlinelog/group_4.260.565180961
8 rows selected.
5r. Using ASM Command-Line Utility
The ASM command-line utility (asmcmd) was introduced in Oracle 10gR2 to provide a Unix-like command-line interface for viewing and administering the files and directories stored in ASM. Refer to http://www.oracle.com/pls/db102/db102.show_toc?which=main&partno=b14215&maxlevel=2&section=&expand=55928 for details on asmcmd.

[oracle@salmon1]$ export ORACLE_SID=+ASM1A
[oracle@salmon1]$ asmcmd ls
DG1/
DG2/
RECOVERYDEST/
[oracle@salmon1]$ asmcmd ls -l DG1/PROD1/DATAFILE
Type Redund Striped Time Sys Name
DATAFILE MIRROR COARSE AUG 01 19:00:00 Y APPS_D.264.565160121
DATAFILE MIRROR COARSE AUG 01 19:00:00 Y DRSYS.263.565160111
DATAFILE MIRROR COARSE AUG 01 19:00:00 Y SYSAUX.260.565160061
DATAFILE MIRROR COARSE AUG 01 19:00:00 Y SYSTEM.257.565159841
DATAFILE MIRROR COARSE AUG 01 19:00:00 Y UNDOTBS1.258.565159949
DATAFILE MIRROR COARSE AUG 01 19:00:00 Y UNDOTBS2.259.565160005
DATAFILE MIRROR COARSE AUG 01 19:00:00 Y USERS.262.565160103
[oracle@salmon1]$ asmcmd -p
ASMCMD [+] > ls
DG1/
DG2/
RECOVERYDEST/
ASMCMD [+] > ls DG1
PROD1/
cf1.dbf
spfileprod1.ora
ASMCMD [+] > ls -l DG1
Type Redund Striped Time Sys Name
Y PROD1/
N cf1.dbf =>
+DG1/PROD1/CONTROLFILE/backup.256.565159777
N spfileprod1.ora =>
+DG1/PROD1/PARAMETERFILE/spfile.267.565170213
ASMCMD [+] > exit
5s. Register the ASM instances with Oracle Clusterware
For higher availability, register the ASM instances with the Oracle Clusterware framework. Once registered, Oracle Clusterware detects failed instances and automatically attempts to restart them, and it starts the instances automatically when the servers are rebooted.

On node 1:

[oracle@salmon1]$ srvctl add asm -n salmon1 -i +ASM1A -o $ORACLE_HOME
[oracle@salmon1]$ srvctl start asm -n salmon1
[oracle@salmon1]$ srvctl status asm -n salmon1
ASM instance +ASM1A is running on node salmon1.
On node 2:

[oracle@salmon2]$ srvctl add asm -n salmon2 -i +ASM1B -o $ORACLE_HOME
[oracle@salmon2]$ srvctl start asm -n salmon2
[oracle@salmon2]$ srvctl status asm -n salmon2
ASM instance +ASM1B is running on node salmon2.
To verify the restart capability, kill an ASM background process and observe the result:
[oracle@salmon1]$ ps -ef | grep dbw | grep -v grep
oracle 21231 1 0 19:41 ? 00:00:00 asm_dbw0_+ASM1A
[oracle@salmon1]$ kill -9 21231
[oracle@salmon1]$ ps -ef | grep dbw | grep -v grep
[oracle@salmon1]$ srvctl status asm -n salmon1
ASM instance +ASM1A is not running on node salmon1.
After a few moments, Oracle Clusterware detects the failure and restarts the ASM instance:
[oracle@salmon1]$ ps -ef | grep dbw | grep -v grep
oracle 26123 1 1 11:51 ? 00:00:00 asm_dbw0_+ASM1A
[oracle@salmon1]$ srvctl status asm -n salmon1
ASM instance +ASM1A is running on node salmon1.
After successfully migrating all the data files over to ASM, the old data files are no longer needed and can be removed. Your RAC database is now running on ASM!

Conclusion
Compared to its predecessors, Oracle 10g RAC is much easier to install, use, and manage, thereby reducing the total cost of ownership. Now that you have upgraded your RAC environment to Oracle 10g, you can explore and start using the new features. I hope this guide has provided a clear and concise method of performing the upgrade.

Author Bio

Vincent Chan (vkchan99@yahoo.com) is a Senior Consultant at MSD Inc. He is an Oracle Certified Master DBA with more than nine years of experience architecting and implementing Oracle solutions for various clients.

Source: http://www.databasejournal.com/features/oracle/article.php/10893_3581481_1/Upgrade-Oracle-9i-RAC-to-Oracle-10g-RAC.htm
