Introduction:
Clusterware de-installation is useful when you want to rebuild your entire cluster or simply convert your RAC database to a single-instance database.
The procedure is straightforward and can be accomplished in a few minutes.
In the following example, we have a 7-node cluster running. Each node has its own Grid Infrastructure home and Oracle Database home.
Procedure:
1- First, stop all databases and resources running on all nodes of the cluster.
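Before stopping the stack itself, any running databases should be stopped cleanly first. A minimal sketch, assuming a hypothetical database name MYDB and using the standard srvctl and crsctl utilities (crsctl stop crs must be run as root on each node):

# Stop the database across all nodes (as the oracle user, from any node)
$ORACLE_HOME/bin/srvctl stop database -d MYDB
# Then stop the clusterware stack (as root, node by node)
$GRID_HOME/bin/crsctl stop crs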
[root@srvdddb01 backup_ocr]# $GRID_HOME/bin/crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'srvdddb01'
CRS-2673: Attempting to stop 'ora.crsd' on 'srvdddb01'
CRS-2677: Stop of 'ora.crsd' on 'srvdddb01' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'srvdddb01'
CRS-2673: Attempting to stop 'ora.evmd' on 'srvdddb01'
CRS-2673: Attempting to stop 'ora.asm' on 'srvdddb01'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'srvdddb01'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'srvdddb01'
CRS-2677: Stop of 'ora.asm' on 'srvdddb01' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'srvdddb01'
CRS-2677: Stop of 'ora.drivers.acfs' on 'srvdddb01' succeeded
CRS-2677: Stop of 'ora.evmd' on 'srvdddb01' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'srvdddb01' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'srvdddb01' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'srvdddb01' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'srvdddb01'
CRS-2677: Stop of 'ora.cssd' on 'srvdddb01' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'srvdddb01'
CRS-2673: Attempting to stop 'ora.crf' on 'srvdddb01'
CRS-2677: Stop of 'ora.crf' on 'srvdddb01' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'srvdddb01'
CRS-2677: Stop of 'ora.diskmon' on 'srvdddb01' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'srvdddb01' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'srvdddb01'
CRS-2677: Stop of 'ora.gpnpd' on 'srvdddb01' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'srvdddb01' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@srvdddb01 backup_ocr]#
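The output above covers srvdddb01 only; the stack must be stopped on the remaining six nodes as well. A sketch, assuming passwordless root ssh and the same grid home path on every node:

# Stop the clusterware stack on the remaining nodes (as root)
for node in srvdddb02 srvdddb03 srvdddb04 srvdddb05 srvdddb06 srvdddb07; do
    ssh root@$node /opt/11.2.0/grid/bin/crsctl stop crs
done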
2- Run the deinstall utility from the first node of the cluster as the grid user (the Grid Infrastructure home owner):
[root@srvdddb01 backup_ocr]# su - grid
-bash-3.2$ /opt/11.2.0/grid/deinstall/deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /opt/app/oraInventory/logs/
############ ORACLE DEINSTALL & DECONFIG TOOL START ############
######################### CHECK OPERATION START #########################
Install check configuration START
The tool has detected that node(s) 'srvdddb01,srvdddb02,srvdddb03,srvdddb04,srvdddb05,srvdddb06,srvdddb07' is(are) attached to the central inventory for the home '/opt/11.2.0/grid'. Do you want to continue? [y|n]:y
Checking for existence of the Oracle home location /opt/11.2.0/grid
Oracle Home type selected for de-install is: CRS
Oracle Base selected for de-install is: /opt/app/oracle
Checking for existence of central inventory location /opt/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /opt/11.2.0/grid
The following nodes are part of this cluster: srvdddb01,srvdddb02,srvdddb03,srvdddb04,srvdddb05,srvdddb06,srvdddb07
Install check configuration END
Skipping Windows and .NET products configuration check
Checking Windows and .NET products configuration END
Traces log file: /opt/app/oraInventory/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "srvdddb01"[srvdddb01-vip]
>
The following information can be collected by running "/sbin/ifconfig -a" on node "srvdddb01"
Enter the IP netmask of Virtual IP "10.119.133.137" on node "srvdddb01"[255.255.255.0]
>
255.255.255.0
Enter the network interface name on which the virtual IP address "10.119.133.137" is active
>
bond1
Enter an address or the name of the virtual IP used on node "srvdddb02"[srvdddb02-vip]
>
The following information can be collected by running "/sbin/ifconfig -a" on node "srvdddb02"
Enter the IP netmask of Virtual IP "10.119.133.139" on node "srvdddb02"[255.255.255.0]
>
255.255.255.0
Enter the network interface name on which the virtual IP address "10.119.133.139" is active[bond1]
>
bond1
Enter an address or the name of the virtual IP used on node "srvdddb03"[srvdddb03-vip]
>
The following information can be collected by running "/sbin/ifconfig -a" on node "srvdddb03"
Enter the IP netmask of Virtual IP "10.119.133.141" on node "srvdddb03"[255.255.255.0]
>
Enter the network interface name on which the virtual IP address "10.119.133.141" is active[bond1]
>
Enter an address or the name of the virtual IP used on node "srvdddb04"[srvdddb04-vip]
>
The following information can be collected by running "/sbin/ifconfig -a" on node "srvdddb04"
Enter the IP netmask of Virtual IP "10.119.133.143" on node "srvdddb04"[255.255.255.0]
>
Enter the network interface name on which the virtual IP address "10.119.133.143" is active[bond1]
>
Enter an address or the name of the virtual IP used on node "srvdddb05"[srvdddb05-vip]
>
The following information can be collected by running "/sbin/ifconfig -a" on node "srvdddb05"
Enter the IP netmask of Virtual IP "10.119.133.145" on node "srvdddb05"[255.255.255.0]
>
Enter the network interface name on which the virtual IP address "10.119.133.145" is active[bond1]
>
Enter an address or the name of the virtual IP used on node "srvdddb06"[srvdddb06-vip]
>
The following information can be collected by running "/sbin/ifconfig -a" on node "srvdddb06"
Enter the IP netmask of Virtual IP "10.119.133.147" on node "srvdddb06"[255.255.255.0]
>
Enter the network interface name on which the virtual IP address "10.119.133.147" is active[bond1]
>
Enter an address or the name of the virtual IP used on node "srvdddb07"[10.119.133.147]
>
10.119.133.149
The following information can be collected by running "/sbin/ifconfig -a" on node "srvdddb07"
Enter the IP netmask of Virtual IP "10.119.133.149" on node "srvdddb07"[255.255.255.0]
>
Enter the network interface name on which the virtual IP address "10.119.133.149" is active[bond1]
>
Enter an address or the name of the virtual IP[]
>
Network Configuration check config START
Network de-configuration trace file location: /opt/app/oraInventory/logs/netdc_check2011-10-06_11-56-05-AM.log
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /opt/app/oraInventory/logs/asmcadc_check2011-10-06_11-56-07-AM.log
ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]: y
Specify the ASM Diagnostic Destination [ ]:
Specify the diskstring []: /dev/mapper/asm*part1p1
Specify the diskgroups that are managed by this ASM instance []:
######################### CHECK OPERATION END #########################
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /opt/11.2.0/grid
The cluster node(s) on which the Oracle home de-installation will be performed are:srvdddb01,srvdddb02,srvdddb03,srvdddb04,srvdddb05,srvdddb06,srvdddb07
Oracle Home selected for de-install is: /opt/11.2.0/grid
Inventory Location where the Oracle home registered is: /opt/app/oraInventory
Skipping Windows and .NET products configuration check
ASM instance will be de-configured from this Oracle home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/opt/app/oraInventory/logs/deinstall_deconfig2011-10-06_11-34-16-AM.out'
Any error messages from this session will be written to: '/opt/app/oraInventory/logs/deinstall_deconfig2011-10-06_11-34-16-AM.err'
######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /opt/app/oraInventory/logs/asmcadc_clean2011-10-06_12-05-51-PM.log
ASM Clean Configuration START
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /opt/app/oraInventory/logs/netdc_clean2011-10-06_12-05-59-PM.log
De-configuring Naming Methods configuration file on all nodes...
Naming Methods configuration file de-configured successfully.
De-configuring Local Net Service Names configuration file on all nodes...
Local Net Service Names configuration file de-configured successfully.
De-configuring Directory Usage configuration file on all nodes...
Directory Usage configuration file de-configured successfully.
De-configuring backup files on all nodes...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
---------------------------------------->
The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.
Run the following command as the root user or the administrator on node "srvdddb02".
/tmp/deinstall2011-10-06_11-34-02AM/perl/bin/perl -I/tmp/deinstall2011-10-06_11-34-02AM/perl/lib -I/tmp/deinstall2011-10-06_11-34-02AM/crs/install /tmp/deinstall2011-10-06_11-34-02AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2011-10-06_11-34-02AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Run the following command as the root user or the administrator on node "srvdddb07".
/tmp/deinstall2011-10-06_11-34-02AM/perl/bin/perl -I/tmp/deinstall2011-10-06_11-34-02AM/perl/lib -I/tmp/deinstall2011-10-06_11-34-02AM/crs/install /tmp/deinstall2011-10-06_11-34-02AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2011-10-06_11-34-02AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Run the following command as the root user or the administrator on node "srvdddb06".
/tmp/deinstall2011-10-06_11-34-02AM/perl/bin/perl -I/tmp/deinstall2011-10-06_11-34-02AM/perl/lib -I/tmp/deinstall2011-10-06_11-34-02AM/crs/install /tmp/deinstall2011-10-06_11-34-02AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2011-10-06_11-34-02AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Run the following command as the root user or the administrator on node "srvdddb05".
/tmp/deinstall2011-10-06_11-34-02AM/perl/bin/perl -I/tmp/deinstall2011-10-06_11-34-02AM/perl/lib -I/tmp/deinstall2011-10-06_11-34-02AM/crs/install /tmp/deinstall2011-10-06_11-34-02AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2011-10-06_11-34-02AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Run the following command as the root user or the administrator on node "srvdddb04".
/tmp/deinstall2011-10-06_11-34-02AM/perl/bin/perl -I/tmp/deinstall2011-10-06_11-34-02AM/perl/lib -I/tmp/deinstall2011-10-06_11-34-02AM/crs/install /tmp/deinstall2011-10-06_11-34-02AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2011-10-06_11-34-02AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Run the following command as the root user or the administrator on node "srvdddb03".
/tmp/deinstall2011-10-06_11-34-02AM/perl/bin/perl -I/tmp/deinstall2011-10-06_11-34-02AM/perl/lib -I/tmp/deinstall2011-10-06_11-34-02AM/crs/install /tmp/deinstall2011-10-06_11-34-02AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2011-10-06_11-34-02AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Run the following command as the root user or the administrator on node "srvdddb01".
/tmp/deinstall2011-10-06_11-34-02AM/perl/bin/perl -I/tmp/deinstall2011-10-06_11-34-02AM/perl/lib -I/tmp/deinstall2011-10-06_11-34-02AM/crs/install /tmp/deinstall2011-10-06_11-34-02AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2011-10-06_11-34-02AM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode
Press Enter after you finish running the above commands
############# ORACLE DEINSTALL & DECONFIG TOOL END #############
-bash-3.2$
You have to run the above commands as root on each node of the cluster: the remote nodes first (these can run in parallel), then node 1 last. Note the use of the "-lastnode" option in the command for node 1.
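A sketch of driving these runs from node 1, assuming passwordless root ssh and that the tool has staged the same /tmp/deinstall directory on every node:

# Run the deconfig on the remote nodes (as root); these can run in parallel
DEINST=/tmp/deinstall2011-10-06_11-34-02AM
for node in srvdddb02 srvdddb03 srvdddb04 srvdddb05 srvdddb06 srvdddb07; do
  ssh root@$node "$DEINST/perl/bin/perl -I$DEINST/perl/lib -I$DEINST/crs/install \
    $DEINST/crs/install/rootcrs.pl -force -deconfig \
    -paramfile $DEINST/response/deinstall_Ora11g_gridinfrahome1.rsp" &
done
wait
# Finally, on the local node (node 1), with -lastnode
$DEINST/perl/bin/perl -I$DEINST/perl/lib -I$DEINST/crs/install \
  $DEINST/crs/install/rootcrs.pl -force -deconfig \
  -paramfile "$DEINST/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode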
3- Check the de-installation procedure:
- Verify the log files listed above on each node.
- Verify that the /etc/inittab file on each node no longer contains the ohasd entry; it should have been removed by the deinstall commands (a combined check sketch follows the listing below).
- Verify that no ora or d.bin processes are still running (ps -edf | grep ora and ps -edf | grep d.bin). If any remain, you can kill them manually (kill -9 <pid>).
- Verify that, under the /etc/oracle/ directory, the olr.loc and ocr.loc files have been renamed to .orig by the deinstall commands:
-bash-3.2$ cd /etc/oracle/
-bash-3.2$ ls -rtl
total 0
-rw-r--r-- 1 root root 0 Oct 4 18:41 olr.loc.orig
-rw-r--r-- 1 root root 0 Oct 4 18:41 ocr.loc.orig
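The checks above can be scripted. A minimal sketch, to be run as root on each node (the grep commands should return nothing once the deconfiguration has completed):

# No ohasd entry should remain in /etc/inittab
grep ohasd /etc/inittab
# No leftover Oracle or clusterware daemons should be running
ps -edf | grep -v grep | grep ora
ps -edf | grep -v grep | grep d.bin
# The registry pointer files should have been renamed to *.orig
ls -rtl /etc/oracle/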
If all the above checks are OK, you can manually remove the remaining files under the $GRID_HOME directory (rm -rf *).
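For example, a sketch of this final cleanup across the cluster, assuming the same grid home path on every node and passwordless root ssh; double-check the path before running, as the removal is irreversible:

# Remove the leftover grid home contents on every node (as root)
for node in srvdddb01 srvdddb02 srvdddb03 srvdddb04 srvdddb05 srvdddb06 srvdddb07; do
    ssh root@$node "rm -rf /opt/11.2.0/grid/*"
done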