2 Node HA RAC Production Environment Hardware Requirements:

Oracle RAC

In addition, each Linux node will be configured with only two network interfaces: one for the public network (eth0) and one for the private network (eth1), which will be used for both the Oracle RAC private interconnect and the network storage (iSCSI) traffic.

For a production RAC implementation, the private interconnect should be at least Gigabit Ethernet (or faster) with redundant paths, and it should be used only by Oracle to transfer Cluster Manager and Cache Fusion related data.
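As a minimal sketch of the two-interface layout, assuming eth1 carries the private interconnect on a hypothetical 192.168.2.0/24 network (the device name, address, and netmask are illustrative assumptions, not values mandated by this guide), the interface file on the first node might look like:

    # /etc/sysconfig/network-scripts/ifcfg-eth1 (private interconnect, first node)
    DEVICE=eth1
    BOOTPROTO=static
    IPADDR=192.168.2.151
    NETMASK=255.255.255.0
    ONBOOT=yes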

Three internal SCSI/SATA hard disks (two 1TB and one 25GB) will be configured as three volume groups used for all shared disk storage, for backup and recovery, and for the Oracle Clusterware binary files. (Note that a virtual SCSI device typically has a larger queue depth than a virtual SATA one, so, at least in theory, sticking with virtual SCSI disks should yield better performance.)
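On the storage server, the three disks could be initialized and grouped using standard LVM commands. This is a hedged sketch only; the device names (/dev/sdb through /dev/sdd) and volume group names are hypothetical placeholders:

    # Initialize each disk as an LVM physical volume (device names are assumptions)
    pvcreate /dev/sdb /dev/sdc /dev/sdd

    # One volume group per disk: shared database storage, backup and recovery,
    # and the Oracle Clusterware binaries
    vgcreate racdbvg  /dev/sdb
    vgcreate backupvg /dev/sdc
    vgcreate crsvg    /dev/sdd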

Note that SCAN addresses, virtual IP addresses, and public IP addresses must all be on the same subnet.
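For example, the following hypothetical /etc/hosts entries keep the public addresses, virtual IPs, and SCAN address together on a single 192.168.1.0/24 subnet (all host names and addresses here are illustrative assumptions; in production the SCAN should resolve through DNS, typically to three addresses):

    # Public network (eth0)
    192.168.1.151   racnode1
    192.168.1.152   racnode2

    # Virtual IPs (same subnet as the public network)
    192.168.1.251   racnode1-vip
    192.168.1.252   racnode2-vip

    # SCAN (same subnet as the public network)
    192.168.1.187   racnode-cluster-scan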

Oracle only allows one OCR per disk group in order to protect against physical disk failures. When configuring Oracle Clusterware files on a production system, Oracle recommends using either normal or high redundancy ASM disk groups. If disk mirroring is already occurring at either the OS or hardware level, you can use external redundancy.
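To illustrate the normal redundancy option: a normal redundancy disk group holding the voting disks requires at least three disks (one voting disk copy per failure group). The disk group name and ASMLib disk labels below are assumptions, not part of this guide's build:

    SQL> CREATE DISKGROUP crs_nr NORMAL REDUNDANCY
      2    FAILGROUP fg1 DISK 'ORCL:CRSVOL1'
      3    FAILGROUP fg2 DISK 'ORCL:CRSVOL2'
      4    FAILGROUP fg3 DISK 'ORCL:CRSVOL3';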

If you decide against using ASM for the OCR and voting disk files, Oracle Clusterware still allows these files to be stored on a cluster file system like Oracle Cluster File System Release 2 (OCFS2) or on an NFS file system. Please note that installing Oracle Clusterware files on raw or block devices is no longer supported unless an existing system is being upgraded.
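If you do opt for NFS, Oracle requires specific mount options for Clusterware files. The following /etc/fstab entry is a hypothetical sketch (the server name, export path, and mount point are assumptions; confirm the exact options against the Oracle documentation for your platform and release):

    nas1:/export/crs  /u02/oradata/crs  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0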

Previous versions of this guide used OCFS2 for storing the OCR and voting disk files. This guide will store the OCR and voting disk files on ASM in an ASM disk group named +CRS using external redundancy, which provides one OCR location and one voting disk location. The ASM disk group should be created on shared storage and be at least 2GB in size.
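Once Oracle Clusterware is up with its files in +CRS, the OCR and voting disk locations can be verified with the standard Clusterware utilities (run from the Grid Infrastructure home):

    # Check OCR integrity and location
    ocrcheck

    # List the voting disk location(s)
    crsctl query css votedisk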

The Oracle physical database files (data, online redo logs, control files, archived redo logs) will be installed on ASM in an ASM disk group named +RACDB_DATA while the Fast Recovery Area will be created in a separate ASM disk group named +FRA.
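With Oracle Managed Files, directing the database at these two disk groups comes down to a few initialization parameters. A minimal sketch follows (the FRA size is an arbitrary assumption; note that db_recovery_file_dest_size must be set before db_recovery_file_dest):

    SQL> ALTER SYSTEM SET db_create_file_dest = '+RACDB_DATA' SCOPE=BOTH;
    SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 200G SCOPE=BOTH;
    SQL> ALTER SYSTEM SET db_recovery_file_dest = '+FRA' SCOPE=BOTH;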

The two Oracle RAC nodes and the network storage server will be configured as described in the sections that follow.

This article is designed to work only as documented, with absolutely no substitutions. The only exception is the choice of vendor hardware (i.e., machines, networking equipment, and internal/external hard drives). Ensure that the hardware you purchase from the vendor is supported on Red Hat Enterprise Linux 5 and Openfiler 2.3 (Final Release).

Like OPS, Oracle RAC allows multiple instances to access the same database (storage) simultaneously. RAC provides fault tolerance, load balancing, and performance benefits by allowing the system to scale out; at the same time, because all instances access the same database, the failure of one node will not cause the loss of access to the database.

At the heart of Oracle RAC is a shared disk subsystem. Each instance in the cluster must be able to access all of the data, redo log files, control files, and the parameter file for every other instance in the cluster. The data disks must be globally available so that all instances can access the database. Each instance has its own redo log files and UNDO tablespace that are locally read/writable. The other instances in the cluster must be able to access them (read-only) in order to recover that instance in the event of a system failure. The redo log files for an instance are writable only by that instance and will be read by another instance only during system failure. UNDO, on the other hand, is read all the time during normal database operation (e.g., for consistent read (CR) fabrication).
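The per-instance ownership of redo and undo shows up directly in how a second instance is provisioned. The following is a hedged sketch (the thread and group numbers, file sizes, tablespace name, and the SID 'racdb2' are assumptions, and Oracle Managed Files is assumed to be configured):

    SQL> -- Each instance writes to its own redo thread
    SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 4 SIZE 50M, GROUP 5 SIZE 50M;
    SQL> ALTER DATABASE ENABLE PUBLIC THREAD 2;

    SQL> -- Each instance uses its own UNDO tablespace, mapped per instance
    SQL> CREATE UNDO TABLESPACE undotbs2 DATAFILE SIZE 500M;
    SQL> ALTER SYSTEM SET undo_tablespace = 'UNDOTBS2' SID = 'racdb2' SCOPE=SPFILE;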

A big difference between Oracle RAC and OPS is the addition of Cache Fusion. With OPS, a request for data from one instance to another required the data to be written to disk first; only then could the requesting instance read that data (after acquiring the required locks). This process was called disk pinging. With Cache Fusion, data is passed along a high-speed interconnect using a sophisticated locking algorithm.
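Cache Fusion activity can be observed from the global dynamic performance views. For example, this query against GV$SYSSTAT reports how many blocks each instance has received over the interconnect:

    SQL> SELECT inst_id, name, value
      2    FROM gv$sysstat
      3   WHERE name IN ('gc cr blocks received', 'gc current blocks received')
      4   ORDER BY inst_id, name;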
