How to Relink 12c Oracle GI / RAC Binaries after OS Upgrade


Here are the practical steps for relinking Oracle Grid Infrastructure (GI) and RAC binaries after an OS upgrade or patch.

  1. Shut down all the Oracle databases, stop the EM agent, etc.
$ srvctl stop database -d DBNAME

$ emctl stop agent
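
To confirm everything is down before handing the server to the system administrator, the status can be checked the same way (DBNAME is a placeholder for your database name):

$ srvctl status database -d DBNAME
$ emctl status agent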

  2. Stop and disable CRS so that it does not restart while the system administrator reboots the server for the upgrade/patching. Stop and disable the Oracle ASMLIB driver as well.

crsctl stop crs

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'racnode1'
CRS-2673: Attempting to stop 'ora.crsd' on 'racnode1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'racnode1'
…..

CRS-2677: Stop of 'ora.gipcd' on 'racnode1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'racnode1' has completed
CRS-4133: Oracle High Availability Services has been stopped.

/u01/app/12.1.0.2/grid/bin/crsctl disable crs

CRS-4621: Oracle High Availability Services autostart is disabled.

/etc/init.d/oracleasm stop

Dropping Oracle ASMLib disks: [ OK ]
Shutting down the Oracle ASMLib driver: [ OK ]

/etc/init.d/oracleasm disable

Writing Oracle ASM library driver configuration: done
Dropping Oracle ASMLib disks: [ OK ]
Shutting down the Oracle ASMLib driver: [ OK ]
#
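
Before the OS work starts, it may also be worth a quick check that no clusterware or ASM background processes are still running (an extra sanity check, not part of the original procedure):

ps -ef | grep -E 'ocssd|crsd|evmd|ohasd|asm_' | grep -v grep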

  3. Back up the binaries of GI_HOME and the RDBMS ORACLE_HOME.

cd /u01/app/12.1.0.2/grid

tar -cvf grid_home.tar ./

-- Back up the RDBMS ORACLE_HOME

cd /u01/app/oracle/product

ls

11.2.0 agent12g

tar -cvf oracle_home.tar ./11.2.0
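
To be sure the backups are usable before the OS work begins, the archives can be listed back (a quick verification, assuming the tar files were written to the locations above):

tar -tvf /u01/app/12.1.0.2/grid/grid_home.tar > /dev/null && echo "GI_HOME backup OK"
tar -tvf /u01/app/oracle/product/oracle_home.tar > /dev/null && echo "ORACLE_HOME backup OK"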

  4. The system administrator performs the OS upgrade and/or patching.
  5. Relink the RDBMS ORACLE_HOME binaries as the oracle user.

$ which relink
/u01/app/oracle/product/11.2.0/dbhome_1/bin/relink
$ relink all

-- Check the log under $ORACLE_HOME/install
$ORACLE_HOME/install/relink.log
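
Before restarting anything, it is worth scanning the relink log for failures (a simple check, not part of the original write-up):

$ grep -iE 'error|fatal' $ORACLE_HOME/install/relink.log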

  6. Enable CRS, enable the Oracle ASMLIB driver, and check the ASM disks.

cd /u01/app/12.1.0.2/grid/bin

crsctl enable crs

CRS-4622: Oracle High Availability Services autostart is enabled.
#

/etc/init.d/oracleasm enable

Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]

/etc/init.d/oracleasm listdisks

ASM_DISK01
ASM_DISK02
…
ASM_DISK23
ASM_FRA01
OCR_VOTE01
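
If any disks are missing from the list, a rescan usually brings them back, and the driver status can be checked as well (standard oracleasm subcommands, shown here as an extra verification):

/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm status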

  7. Relink the GI_HOME binaries.

a) As the root user, unlock GI_HOME.

cd /u01/app/12.1.0.2/grid/crs/install

./rootcrs.sh -unlock

Using configuration parameter file: /u01/app/12.1.0.2/grid/crs/install/crsconfig_params
2016/08/30 16:48:34 CLSRSC-4012: Shutting down Oracle Trace File Analyzer (TFA) Collector.
2016/08/30 16:48:43 CLSRSC-4013: Successfully shut down Oracle Trace File Analyzer (TFA) Collector.
2016/08/30 16:48:44 CLSRSC-347: Successfully unlock /u01/app/12.1.0.2/grid

b) As the Oracle GI user (grid), relink GI_HOME.

su - grid

$ . oraenv +ASM1
$ env|grep ORA
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/grid
ORACLE_HOME=/u01/app/12.1.0.2/grid
$ /u01/app/12.1.0.2/grid/bin/relink all
writing relink log to: /u01/app/12.1.0.2/grid/install/relink.log
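
As with the database home, a quick scan of this log for errors is worthwhile before locking the home again (an optional check):

$ grep -iE 'error|fatal' /u01/app/12.1.0.2/grid/install/relink.log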

c) As the root user again, run the commands below. The cluster and all of its resources are restarted automatically by the second command.

cd /u01/app/12.1.0.2/grid/rdbms/install

./rootadd_rdbms.sh

#

cd /u01/app/12.1.0.2/grid/crs/install

./rootcrs.sh -patch

Using configuration parameter file: /u01/app/12.1.0.2/grid/crs/install/crsconfig_params
2016/08/30 16:54:35 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2016/08/30 16:54:48 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'racnode1'
…

CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [3696455212].
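
At this point the health of the stack can be confirmed on all nodes before handing the databases back (crsctl check cluster is a standard command, shown here as an additional verification):

crsctl check cluster -all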

  8. Check the resource status as the GI owner (grid).

$ crsctl stat res -t
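
Once all resources show ONLINE, start the databases and the EM agent that were stopped in step 1 (DBNAME is a placeholder, as before):

$ srvctl start database -d DBNAME
$ emctl start agent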