
Oracle RAC on Extended Cluster

I have recently worked with Oracle RAC on Extended Cluster, and I would like to share some of the knowledge I acquired about this technology.

This is an alternative to Oracle Data Guard that provides near-immediate replication between two sites. However, it requires a fast, low-latency network between the two sites for the interconnect.

Let’s imagine the following scenario: we have a production database hosted in our main building, and our enterprise also has a data center in another building, located 10 km away. How can we set up a solution to replicate the data from our primary site to our standby site?

Here we have different options: we can install and configure Data Guard, creating a standby database in our second building. We can even use Active Data Guard to keep the standby database open for queries and reports.

But we can also use RAC on Extended Cluster. In this case, the database runs as a RAC, with node1 at our main site and node2 at our secondary site.

The application can then access only the primary site, as it would normally. The secondary site has its own storage, to which the data is mirrored synchronously, while cache coherency traffic travels over the interconnect. This requires sufficient interconnect bandwidth, but recovery is almost immediate in case of failure, as committed data is always available at both sites. This is not the case with Data Guard in its usual configuration, where data reaches the standby only as redo and archived logs are shipped.
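In practice, the cross-site storage mirroring in an extended cluster is usually done with ASM normal redundancy, using one failure group per site, plus a preferred-read setting so each instance reads from its local disks. A minimal sketch of that setup (the disk paths, disk group name, and ASM SIDs below are illustrative, not from this article; adapt them to your environment):

```shell
# Run on one node as the Grid Infrastructure owner.
# Disk paths, disk group name and SIDs are hypothetical examples.
sqlplus / as sysasm <<'EOF'
-- One failure group per site: ASM then writes every extent
-- to both sites, giving synchronous storage mirroring.
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP site1 DISK '/dev/mapper/site1_disk1'
  FAILGROUP site2 DISK '/dev/mapper/site2_disk1';

-- Let each ASM instance read from its local failure group,
-- avoiding the inter-site latency on reads.
ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA.SITE1' SID = '+ASM1';
ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA.SITE2' SID = '+ASM2';
EOF
```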

But then we face a new problem: how do we handle split-brain? If we have two nodes and the interconnect fails while both nodes are still up, how does each node know whether it is the one that should stay up or the one that must shut down?

To solve this problem, the voting disk plays a major role in this scenario. We need a third server hosting a voting disk, visible from both Oracle servers. This voting disk acts as a third vote to decide which server remains up after a failure and which server gets evicted.

This third server can be an Oracle server, in which case we would place the voting disk in an ASM disk group. But it is also possible to use an inexpensive NFS device for this purpose.
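As a rough sketch of the NFS option (hostnames, export path, and the exact mount options are hypothetical; Oracle documents specific hard-mount options for voting files on NFS per platform, so check those for your release before using this):

```shell
# On the third (quorum) server: export a directory for the voting file.
# Hostnames and paths are illustrative examples.
echo '/votedisk nodo1(rw,sync,no_root_squash) nodo2(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra

# On each cluster node: hard-mount the export before adding the voting file.
# Mount options shown are typical for Linux; verify against Oracle's
# documented NFS options for voting files on your platform.
mkdir -p /nfs/cluster3
mount -t nfs -o rw,hard,intr,noac,rsize=32768,wsize=32768 quorumsrv:/votedisk /nfs/cluster3
```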

To check the current voting disk configuration, you can use the following command:

[root@nodo1]# crsctl query css votedisk
##  STATE    File Universal Id    File Name              Disk group
--  -----    -----------------    ---------              ----------
 1. ONLINE   xxxxxxxxxxxx         (/nfs/cluster1/vote1)  []
 2. ONLINE   xxxxxxxxxxxxx        (/nfs/cluster2/vote2)  []

Then, add the third voting disk, hosted on the third server (the mount point must exist beforehand):

[root@nodo1 /]# crsctl add css votedisk /nfs/cluster3/vote3
Now formatting voting disk: /nfs/cluster3/vote3.
CRS-4603: Successful addition of voting disk /nfs/cluster3/vote3.
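Querying again should now list three voting disks, which is what lets the cluster keep a majority (two of three votes) if either site or the quorum server is lost:

```shell
# Re-run the query after the addition; with three voting files the
# cluster survives the loss of any single one (majority = 2 of 3).
crsctl query css votedisk
```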

Using this method, we can provide service from the second site immediately if the primary site fails, as the database is always open on both nodes. After all, it’s a RAC.

Additionally, we can use the secondary node as the primary node for another database, with the primary site acting as the secondary site for that second database. This makes more effective use of the servers’ resources.

