
Oracle RAC on Extended Cluster

January 4, 2019

I have recently worked with Oracle RAC on an Extended Cluster, and I would like to share here some of the knowledge I acquired about this technology.

This is an alternative to Oracle Data Guard that provides an almost immediate replication method between two sites. However, it requires a fast network between the two sites for the interconnect.

Let’s imagine the following scenario: we have a production database hosted in our main building, and our enterprise also has a data center in another building, located 10 km away. How can we implement a solution that replicates the data from our primary site to a standby site?

Here we have different options: we can install and configure Data Guard, creating a standby database in our second building. We can even use Active Data Guard to keep the standby database open for queries and reports.
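As a rough sketch of that option (assuming an 11g-style physical standby that is already in place; the exact steps may differ in other versions or when the Data Guard broker is used), real-time query is enabled by opening the standby read-only and then restarting redo apply:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE OPEN READ ONLY;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;

With redo apply running again, the standby keeps receiving changes while it remains open for queries and reports.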

But we can also use RAC on Extended Cluster. In this case, the database will be part of a RAC in which node1 is located at our main site and node2 at our secondary site.

The application will then access only the primary site, as it normally would. The secondary site will have its own storage, and the data will be replicated through the interconnect. This requires enough bandwidth on the interconnect, but recovery is almost immediate in case of failure, because committed data is always available at both sites. This is not the case with Data Guard, where the data is replicated only after the generation of archivelogs.
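A common way to make the application talk only to the primary site is a database service with a preferred and an available instance. The following is only an illustrative sketch; the database name PROD, the service prod_app and the instance names prod1 and prod2 are made up:

[oracle@nodo1]$ srvctl add service -d PROD -s prod_app -r prod1 -a prod2
[oracle@nodo1]$ srvctl start service -d PROD -s prod_app

Clients connecting through the prod_app service land on node1 while it is healthy; if node1 fails, the service is relocated to node2 at the secondary site.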

But then we face a new problem: how do we handle split-brain? If the interconnect fails while both nodes are still up, how does each node know whether it is the one that should stay up or the one that must be shut down?

To solve this problem, the voting disk plays a major role in this scenario. We need a third server that hosts a voting disk and is visible from both Oracle servers. This voting disk acts as a third vote to decide which server remains up in case of failure and which server is evicted.

This third server can be another Oracle server, in which case we would allocate the voting disk in an ASM instance. But it is also possible to use an inexpensive NFS device for this purpose.
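As an illustrative sketch (the NFS server name and export path are made up, and the exact mount options recommended for voting files vary by platform and version, so check Oracle's note on NFS-based voting files), the export would be mounted on both cluster nodes along these lines:

[root@nodo1]# mkdir -p /nfs/cluster3
[root@nodo1]# mount -t nfs -o rw,bg,hard,intr,rsize=32768,wsize=32768,tcp,noac,vers=3,timeo=600 nfsserver:/votedisk /nfs/cluster3

The same mount must exist on the second node, so that both servers see the third voting file at the same path.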

Next, list the voting disks that are currently configured:

[root@nodo1]# crsctl query css votedisk
##  STATE    File Universal Id    File Name               Disk group
--  -----    -----------------    ---------               ----------
 1. ONLINE   xxxxxxxxxxxx         (/nfs/cluster1/vote1)   []
 2. ONLINE   xxxxxxxxxxxxx        (/nfs/cluster2/vote2)   []

Then, add the third voting disk, stored on the NFS mount provided by the third server (the mount point must have been created previously):

[root@nodo1 /]# crsctl add css votedisk /nfs/cluster3/vote3
Now formatting voting disk: /nfs/cluster3/vote3.
CRS-4603: Successful addition of voting disk /nfs/cluster3/vote3.
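It is worth querying the voting disks again to confirm the change:

[root@nodo1 /]# crsctl query css votedisk

The output should now show three ONLINE voting files, the new one being /nfs/cluster3/vote3.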

Using this method, we can provide service from the secondary node immediately if the primary node fails, since the secondary node always has the committed data and is always available. After all, it’s a RAC.

Additionally, we can use the secondary node as the primary node for another database, with the primary site acting as the secondary site for this second database. By doing this, we make more effective use of the servers’ resources.
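Continuing the earlier sketch with made-up names, the second database would get a service whose preferred instance runs on node2 and whose available instance is on node1, the mirror image of the first one:

[oracle@nodo1]$ srvctl add service -d REPORT -s report_app -r report2 -a report1

In normal operation each node runs its own workload, and either node can take over both workloads if the other site fails.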

Changing sys password in RAC databases

May 24, 2011

If you have ever worked with a RAC database and changed a normal user’s password, you will have noticed that there is no difference from a single-instance database. You just run “ALTER USER xxx IDENTIFIED BY yyyy” and the password is changed.

However, today I had a problem with a RAC database. I had to change the SYS password, and I did it the same way I would in a single-instance database. But when I tried to connect as the SYS user, the error “ORA-01017: invalid username/password; logon denied” appeared. What was I doing wrong?

After giving it some thought, I found the solution. The SYS password is instance-specific in RAC databases, typically because each instance keeps its own local password file, so you have to change it on every single instance. That’s all.
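In practice that means connecting to each instance and running the same command there. A minimal sketch, assuming the instances are called orcl1 and orcl2:

[oracle@nodo1]$ export ORACLE_SID=orcl1
[oracle@nodo1]$ sqlplus / as sysdba
SQL> ALTER USER sys IDENTIFIED BY "NewPassword";

[oracle@nodo2]$ export ORACLE_SID=orcl2
[oracle@nodo2]$ sqlplus / as sysdba
SQL> ALTER USER sys IDENTIFIED BY "NewPassword";

After this, connecting as SYS works again from either node.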