
An Alternative Method for Oracle Physical Standby

Using a physical standby server is one of the best methods for recovering data after a disaster, but its installation and management can be somewhat difficult. There is another way.


First, I should specify the properties of the primary and recovery databases.
Primary server name: primsrv
Instance name: orcl
Primary server ORACLE_BASE: /u01/app/oracle
Primary server ORACLE_HOME: /u01/app/oracle/product/10.2.0/db_1
Primary server datafile location: /database/oradata/orcl
Primary server redo log file location: /database/oradata/orcl
Primary server control file location: /database/oradata/orcl
Primary server backup location (will be mounted by the recovery server via NFS under the same path): /backup/rman_bck
Primary server archivelog destination (will be mounted by the recovery server via NFS under the same path): /fra/arch/orcl
Recovery server name: recovsrv
All other properties of the recovery server database will be the same as the primary server's.

Now I can install the recovery server.
First, I should install the same operating system as on the primary server and prepare the folder structure where the datafiles, redo logs, and control files will live; in this example it is the /database/oradata/orcl folder. Then I should install the same version of the Oracle software (software only) that the primary server has. After installing the Oracle software, I should create a database instance exactly like the primary server's instance using the DBCA tool. I should be careful to specify the correct locations for datafiles, redo logs, and control files, and the correct database parameters such as the character set, log_archive_dest_1, and log_archive_format.
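To double-check that the two instances really match, I could compare the critical parameters on both sides. A minimal sketch, run in SQL*Plus on each server (these are the parameters the post calls out; NLS_CHARACTERSET is where the character set is recorded):

```sql
-- Run on both primsrv and recovsrv and compare the output.
select name, value
  from v$parameter
 where name in ('log_archive_dest_1', 'log_archive_format');

-- The character set lives in database properties, not v$parameter.
select property_name, property_value
  from database_properties
 where property_name = 'NLS_CHARACTERSET';
```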

Now I have two Oracle databases with the same structure. But how can I recover the primary server's database onto the recovery server? First, the recovery server needs to be able to see the primary server's /backup/rman_bck and /fra/arch/orcl folders. I can do this with NFS: with NFS sharing, the recovery server can see the primary server's folders.


The recovery server's /etc/fstab configuration should look like this:
primsrv:/backup              /backup               nfs …..
primsrv:/fra/arch/orcl      /fra/arch/orcl    nfs ……
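Before deleting anything on the recovery server, it is worth verifying that both NFS mounts are actually reachable. A small sketch (the check_mounts helper is just an illustration, not from the original setup):

```shell
#!/bin/sh
# Verify that required directories (e.g. the NFS mounts) are reachable.
# Usage: check_mounts /backup/rman_bck /fra/arch/orcl
check_mounts() {
    rc=0
    for dir in "$@"; do
        if [ -d "$dir" ] && [ -r "$dir" ]; then
            echo "OK: $dir"
        else
            echo "MISSING: $dir" >&2
            rc=1
        fi
    done
    return $rc
}
```

Calling `check_mounts /backup/rman_bck /fra/arch/orcl` before starting the restore catches a forgotten or failed mount early, instead of mid-recovery.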


At the recovery server, shut down the database and remove the datafiles and control files.
recovsrv SQL> shutdown immediate
recovsrv #> cd /database/oradata/orcl
recovsrv #> rm -f *.dbf
recovsrv #> rm -f *.ctl
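The rm commands above are destructive, so a guard is prudent: only delete when the directory actually looks like this instance's datafile directory. A hypothetical sketch (clean_db_files is my own helper, not part of the original procedure):

```shell
#!/bin/sh
# Remove datafiles and control files, but only if the target directory
# exists and actually contains at least one .dbf file.
clean_db_files() {
    dir="$1"
    if [ ! -d "$dir" ]; then
        echo "refusing: $dir does not exist" >&2
        return 1
    fi
    # set -- expands the glob; if nothing matches, $1 stays the literal pattern
    set -- "$dir"/*.dbf
    if [ ! -e "$1" ]; then
        echo "refusing: no .dbf files in $dir" >&2
        return 1
    fi
    rm -f "$dir"/*.dbf "$dir"/*.ctl
}
```

Usage: `clean_db_files /database/oradata/orcl` — running it against the wrong directory, or twice, simply refuses instead of deleting blindly.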


At the primary server, take a backup of the control file and the initialization file, then send them to the recovery server.
primsrv SQL> alter database backup controlfile to '/home/oracle/primorcl.ctl';
primsrv SQL> create pfile='/home/oracle/initprim.ora' from spfile;
primsrv #> scp /home/oracle/primorcl.ctl oracle@recovsrv:/database/oradata/orcl
primsrv #> scp /home/oracle/initprim.ora oracle@recovsrv:/home/oracle


At the recovery server, make copies of the backup control file and create an spfile from the pfile. Then start up the database in mount state, and restore and recover the database.
recovsrv #> cd /database/oradata/orcl
recovsrv #> mv primorcl.ctl control01.ctl
recovsrv #> cp control01.ctl control02.ctl
recovsrv #> cp control01.ctl control03.ctl
recovsrv SQL> create spfile from pfile='/home/oracle/initprim.ora';
recovsrv SQL> startup mount
recovsrv RMAN> restore database;
recovsrv SQL> recover database using backup controlfile until cancel
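For completeness, the restore step runs inside an RMAN session connected to the local instance; a minimal sketch of the session (no extra configuration is needed for the backup location, because /backup/rman_bck is mounted under the same path the restored control file already records):

```
recovsrv #> rman target /
RMAN> restore database;
RMAN> exit
```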


There is a parameter named archive_lag_target that forces a log switch periodically. I can set this parameter so that the most recent changes on the primary server get applied to the recovery server. At the primary server, set the archive_lag_target parameter to 900 seconds.
primsrv SQL> alter system set archive_lag_target=900 scope=both;
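With archive_lag_target set, the primary switches logs at least every 900 seconds. When testing, I can verify the setting and force an immediate switch instead of waiting up to 15 minutes (standard SQL*Plus commands):

```sql
-- Verify the new value.
show parameter archive_lag_target

-- Force an archived log to be produced right away.
alter system switch logfile;
```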


So I can set up a cron job that executes the recovery process every 15 minutes. At the recovery server:
recovsrv #> crontab -l
———————————
*/15 * * * * /home/oracle/scripts/recov.sh


recovsrv #> more /home/oracle/scripts/recov.sh
————————————————————————
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
ORACLE_SID=orcl
LD_LIBRARY_PATH=$ORACLE_HOME/lib
export ORACLE_BASE
export ORACLE_HOME
export ORACLE_SID
export LD_LIBRARY_PATH
PATH=$ORACLE_HOME/bin:$PATH:$HOME/bin:$ORACLE_HOME/OPatch
export PATH
sqlplus "/ as sysdba" @/home/oracle/scripts/recov.sql


recovsrv #> more /home/oracle/scripts/recov.sql
————————————————————————
set autorecovery on
spool /home/oracle/scripts/recov.lst
recover database using backup controlfile until cancel;
exit


At the recovery server, when the cron job runs, I can tail the output:
recovsrv #> tail -f /home/oracle/scripts/recov.lst
————————————————————————
tail: recov.lst: file truncated

ORA-00279: change 137823348 generated at 12/23/2010 18:02:56 needed for thread 1
ORA-00289: suggestion : /fra/arch/orcl/arch_728088008_1_8238.arc
ORA-00280: change 137823348 for thread 1 is in sequence #8238
ORA-00278: log file '/fra/arch/orcl/arch_728088008_1_8237.arc' no longer needed for this recovery

ORA-00279: change 137828288 generated at 12/23/2010 18:17:57 needed for thread 1
ORA-00289: suggestion : /fra/arch/orcl/arch_728088008_1_8239.arc
ORA-00280: change 137828288 for thread 1 is in sequence #8239
ORA-00278: log file '/fra/arch/orcl/arch_728088008_1_8238.arc' no longer needed for this recovery

ORA-00308: cannot open archived log '/fra/arch/orcl/arch_728088008_1_8239.arc'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3


As you can see from the output, archived logs are applied every 15 minutes on the recovery server. The final ORA-00308 error is expected: recovery stops when it asks for an archived log that has not been generated yet, and the next cron run continues from that point.


Important note: If I want to open the recovery server for transactions, I have to execute the alter database open resetlogs command. In case of a failure, I have a ready database for business continuity.
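In a real failover, the activation on the recovery server would look roughly like this (a sketch; note that after opening with resetlogs this copy diverges from the primary and can no longer apply its archived logs):

```sql
-- Stop applying logs: answer CANCEL at the media recovery prompt.
recover database using backup controlfile until cancel;
-- Then open the database for transactions.
alter database open resetlogs;
```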
