
Extended Distance Oracle RAC Test Cluster Storage Layer

The storage for my test stretched RAC cluster is provided by two iSCSI filers running on ancient Pentium 4 vintage Intel hardware. Each filer runs the Openfiler (v2.3) open-source storage array server software, which provides volume management and iSCSI target services, allowing disk storage from the filers to be accessed by the two database server nodes in the cluster. Even though both servers are over six years old and have only 256MB of memory, they still seem to perform well enough as iSCSI servers; that might not be the case if they had to serve more I/O than a single 7200rpm ATA disk each can provide.

Setting up the volumes on the filer disks, to be made available as iSCSI targets, is done through the Openfiler web management interface. Openfiler is a great piece of software, but documentation for it is a bit thin on the ground; even the paid-for Administrator's Guide I viewed was something of a disappointment. Fortunately my ex-colleague's blog has some excellent pointers on setting up iSCSI targets on Openfiler: see http://www.techhead.co.uk/how-to-configure-openfiler-v23-iscsi-storage-for-use-with-vmware-esx .
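On the database server side, targets exported by the filers can be attached with the standard open-iscsi tools. A minimal sketch follows; the storage-network addresses 192.168.2.101 and 192.168.2.102 are my placeholder assumptions, not values from this setup, so substitute your own:

```shell
# Discover the iSCSI targets each filer exports
# (IP addresses below are illustrative placeholders)
iscsiadm -m discovery -t sendtargets -p 192.168.2.101   # filer01
iscsiadm -m discovery -t sendtargets -p 192.168.2.102   # filer02

# Log in to all discovered targets, and make the logins
# persist across reboots
iscsiadm -m node -L all
iscsiadm -m node -o update -n node.startup -v automatic

# The volumes appear as local /dev/sd* block devices; confirm with:
iscsiadm -m session -P 3 | grep "Attached scsi disk"
```

Running the discovery against both filers from both RAC nodes is what gives each node a path to all 14 volumes over the storage network.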

On each filer I set up seven volumes of about 10GB each. The table below shows the filers, volume names and sizes.

filer01 volume   Size    filer02 volume   Size
AVL1             10GB    F2AVL1           10GB
AVL2             10GB    F2AVL2           10GB
AVL3             10GB    F2AVL3           10GB
AVL4             10GB    F2AVL4           10GB
AVL5             10GB    F2AVL5           10GB
FRA1             10GB    F2FRA1           10GB
FRA2             10GB    F2FRA2           10GB

All 14 iSCSI volumes were made available to both database servers over the lower storage network. This network consists of two Netgear gigabit desktop switches: RAC1 and Filer01 are connected to switch stnsw1, while Filer02 and RAC2 are connected to switch stnsw2. The two switches are linked together to complete the network. In this way both nodes, RAC1 and RAC2, can see both Filer01 and Filer02; but if the link between the two switches is removed, RAC1 will only have connectivity to Filer01, not Filer02, and RAC2 will only see Filer02, not Filer01. Breaking the connection between the switches will also not cause link detection on the database servers' network interfaces to fail, so all that is lost is connectivity across the inter-switch link, not all network access.

When the connection between the two switches is severed, each database server will still have access to one half of the mirrored copy of its ASM volumes; however, each database node will be accessing a separate mirror copy.
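The mirroring behind this relies on each filer's volumes being placed in a separate ASM failure group, so that normal redundancy keeps one copy of every extent on each filer. A hedged sketch of how such a disk group might be created (the disk group name, failure group names and ASMLib disk strings below are my assumptions, not details from this cluster):

```shell
# Create a normal-redundancy disk group mirrored across the two filers.
# Disk names assume the volumes were stamped with ASMLib using the
# names from the table above -- adjust to your own device naming.
sqlplus / as sysasm <<'EOF'
CREATE DISKGROUP DATA NORMAL REDUNDANCY
  FAILGROUP filer01 DISK
    'ORCL:AVL1', 'ORCL:AVL2', 'ORCL:AVL3', 'ORCL:AVL4', 'ORCL:AVL5'
  FAILGROUP filer02 DISK
    'ORCL:F2AVL1', 'ORCL:F2AVL2', 'ORCL:F2AVL3', 'ORCL:F2AVL4', 'ORCL:F2AVL5';
EOF
```

With failure groups defined this way, ASM never mirrors an extent within a single filer, which is exactly why each node retains a complete (but separate) copy of the data when the inter-switch link is cut.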
In the next article I'll describe the experiments I intend to perform and start documenting how the cluster fails when the storage is split.
