RAC Health Check Commands

Oracle RAC

# ps -ef | grep smon

# cat /etc/oratab

# cd /u01/app/oracle/product/11.2.0/dbhome_1/

# cd bin

# export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1

# ./srvctl status database -d perfo12
Instance perfo121 is running on node perf01-ora01
Instance perfo122 is running on node perf01-ora02

# ./sqlplus sys/<password>@perfo12 as sysdba

SQL> SELECT inst_name FROM v$active_instances;

SQL> !

# cd /u01/app/11.2.0/grid/bin/

# ./srvctl config database -d perfo12

# ./srvctl status database -d perfo12

# cd /u01/app/oracle/product/11.2.0/dbhome_1/

# cd bin
# ls

# ./lsnrctl status

# cd /u01/app/11.2.0/grid/bin/

# ./crsctl check cluster

# ./crsctl check cluster -all

# ./crs_stat -t -v

# hostname

$ srvctl status instance -d perfo12 -i perfo121,perfo122

SQL> select name from v$datafile;

SQL> select name from v$controlfile;

SQL> select member from v$logfile;

SQL> show parameter spfile;

SQL> create pfile from spfile;

SQL> show parameter back

SQL> select instance_name, host_name, archiver, thread#, status from gv$instance;

SQL> col host_name for a20;
SQL> /

SQL> select file_name, bytes/1024/1024 from dba_data_files;

SQL> col file_name for a20;
SQL> /
SQL> set linesize 150;
SQL> /

SQL> set wrap off;
SQL> /

SQL> select name from v$datafile;

SQL> select group_number, name, allocation_unit_size alloc_unit_size, state, type, total_mb, usable_file_mb from v$asm_diskgroup;

SQL> select name from gv$tablespace;

SQL> desc gv$tablespace;

SQL> select name, bigfile from gv$tablespace;

SQL> !

$ srvctl status nodeapps -n perf01-ora01

$ srvctl status asm -n perf01-ora01

$ srvctl status asm -n perf01-ora02

================================================================

CRSCTL CheatSheet

Below are various commands that can be used to administer Oracle Clusterware with crsctl, collected here for easy reference.

Start Oracle Clusterware

crsctl start crs

Stop Oracle Clusterware

crsctl stop crs

Enable Oracle Clusterware

crsctl enable crs

This enables automatic startup of the Clusterware daemons at boot.

Disable Oracle Clusterware

crsctl disable crs

This disables automatic startup of the Clusterware daemons, which is useful when you are performing operations such as OS patching and do not want Clusterware to start the daemons automatically on reboot.
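As a hedged sketch, a per-node OS-patching window built from the commands above might look like this (run as root; the patching step itself is site-specific):

```shell
# Stop the local Clusterware stack cleanly before patching.
crsctl stop crs

# Prevent the daemons from starting on the reboots that patching requires.
crsctl disable crs

# ... apply OS patches and reboot as needed ...

# Re-enable automatic startup and bring the stack back up.
crsctl enable crs
crsctl start crs
```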

Checking Voting disk Location

$crsctl query css votedisk

Note: Any command that only queries information can be run as the oracle user, but anything that alters the Oracle Clusterware configuration requires root privileges.
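For example (a sketch; the disk path is illustrative, not from this cluster): the query below works as the oracle user, while the voting-disk change must be run as root:

```shell
# Read-only query: fine as the oracle user.
crsctl query css votedisk

# Configuration change: must be run as root (or via sudo).
# /dev/raw/raw5 is a hypothetical voting disk path.
sudo crsctl add css votedisk /dev/raw/raw5
```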

Add Voting Disk

crsctl add css votedisk path

Remove Voting Disk

crsctl delete css votedisk path

Check CRS Status

$crsctl check crs

To check particular daemon status

$crsctl check cssd

$crsctl check crsd

$crsctl check evmd
Event Manager appears healthy

You can also check Clusterware status on both the nodes using

$crsctl check cluster

Checking Oracle Clusterware Version

To determine the software version (the binary version of the software on a particular cluster node), use

$crsctl query crs softwareversion

Oracle Clusterware version on node [prod01] is [11.1.0.6.0]

To check the active version of the cluster, use

$ crsctl query crs activeversion

Oracle Clusterware active version on the cluster is [11.1.0.6.0]

As per the documentation, the software version and active version can differ while a rolling upgrade is in progress; otherwise they should match.
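A small sketch comparing the two (assuming crsctl is on the PATH and prints versions in brackets as shown above): if the node's software version differs from the cluster's active version, a rolling upgrade is likely still in progress:

```shell
# Extract the bracketed version from each command's output.
sw=$(crsctl query crs softwareversion | grep -o '\[[0-9.]*\]')
act=$(crsctl query crs activeversion | grep -o '\[[0-9.]*\]')

if [ "$sw" != "$act" ]; then
    echo "Software $sw differs from active $act: rolling upgrade may be in progress"
else
    echo "Versions match: $act"
fi
```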

There are other options for CRSCTL too, which can be seen using

$crsctl

Or

$crsctl help

11.2 Reference

11.2 introduced a few changes to crsctl usage. The most important is the set of clusterized commands, which allow you to perform operations on remote cluster nodes. They are:

crsctl check cluster
crsctl start cluster
crsctl stop cluster

All these commands allow the following usage:

(default)        Operate on the local server
-all             Operate on all servers
-n server [...]  Operate on the named servers (one or more space-separated server names)
-f               Force option

Let’s see usage

$ crsctl check cluster -all
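A few more illustrative invocations of the clusterized commands (the node names are hypothetical):

```shell
# Stop Clusterware on every node in the cluster.
crsctl stop cluster -all

# Start Clusterware only on the named nodes.
crsctl start cluster -n node1 node2

# Force-stop the local stack if a normal stop hangs.
crsctl stop cluster -f
```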

crs_unregister is replaced by crsctl delete resource

crs_stat has been deprecated in 11.2 (though it still works); use instead

$crsctl stat res -t