Database migration using SAN migration technique


We are planning to migrate databases from one RAC cluster to another. For this purpose we plan to use the SAN migration technique.

This is a proof of concept (POC); I will update this blog post as we make progress.

1. For the POC, we will create a new disk group [san_poc] on the existing RAC.

old01 $ kfod op=disks disks=raw

--------------------------------------------------------------------------------

 Disk          Size Path                                     User     Group

================================================================================

   1:      20473 Mb /dev/oracleasm/disks/ASM_0001            oracle   oinstall

   2:      20473 Mb /dev/oracleasm/disks/ASM_0002            oracle   oinstall

   3:      20473 Mb ORCL:ASM_0001                            <unknown> <unknown>

   4:      20473 Mb ORCL:ASM_0002                            <unknown> <unknown>

old02 $ asmcmd lsdg

State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name

MOUNTED  EXTERN  N         512   4096  1048576     61440    18734                0           18734              0             Y  DATA1/

MOUNTED  EXTERN  N         512   4096  1048576     40960    40455                0           40455              0             N  FRA1/

old01 $ asmcmd -p

ASMCMD [+] > mkdg '<dg name="san_poc" redundancy="external"><dsk string="/dev/oracleasm/disks/ASM_0001"/><dsk string="/dev/oracleasm/disks/ASM_0002"/></dg>'

ASMCMD [+] > lsdg

State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name

MOUNTED  EXTERN  N         512   4096  1048576     61440    18734                0           18734              0             Y  DATA1/

MOUNTED  EXTERN  N         512   4096  1048576     40960    40455                0           40455              0             N  FRA1/

MOUNTED  EXTERN  N         512   4096  1048576     40946    40894                0           40894              0             N  SAN_POC/

ASMCMD [+] >


Before we create a new database on this new disk group, we need to mount it on the other node as well:

old02 $ asmcmd -p

ASMCMD [+] > lsdg

State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name

MOUNTED  EXTERN  N         512   4096  1048576     61440    18734                0           18734              0             Y  DATA1/

MOUNTED  EXTERN  N         512   4096  1048576     40960    40455                0           40455              0             N  FRA1/

ASMCMD [+] > mount san_poc

ASMCMD [+] > lsdg

State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name

MOUNTED  EXTERN  N         512   4096  1048576     61440    18734                0           18734              0             Y  DATA1/

MOUNTED  EXTERN  N         512   4096  1048576     40960    40455                0           40455              0             N  FRA1/

MOUNTED  EXTERN  N         512   4096  1048576     40946    40851                0           40851              0             N  SAN_POC/

ASMCMD [+] >

2. Create a database: Here I used dbca to quickly create a database with all database files (redo logs, control files, data files, temp files, etc.) on the SAN_POC disk group.
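For reference, the same kind of database can be created non-interactively with dbca in silent mode. The following is only a minimal sketch, not the exact command I ran; the template name, passwords, and node names are placeholders:

# Point data files and the recovery area at SAN_POC so every file lands on the disk group being migrated
old01 $ dbca -silent -createDatabase -templateName General_Purpose.dbc \
          -gdbName poc -sid poc \
          -sysPassword change_me -systemPassword change_me \
          -nodelist old01,old02 \
          -storageType ASM -diskGroupName SAN_POC -recoveryGroupName SAN_POC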

I also created a new user and a table with a couple of rows in it. This will help validate the state of end-user objects after the migration. To keep this post concise, I am only sketching these trivial tasks below.
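The validation objects were nothing more elaborate than something along these lines (user name, password, and tablespace are placeholders):

SQL> create user poc_test identified by change_me quota unlimited on users;
SQL> grant create session, create table to poc_test;
SQL> create table poc_test.migration_check (id number, note varchar2(50));
SQL> insert into poc_test.migration_check values (1, 'row created before SAN migration');
SQL> insert into poc_test.migration_check values (2, 'second row');
SQL> commit;

After the migration, a simple select against poc_test.migration_check confirms the end-user data came across intact.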

For the new database, I created a new service using the following command:

old01 $ srvctl add service -d poc -s poc_svc -r poc1 -a poc2 -P PRECONNECT -y AUTOMATIC -q TRUE -e select -m basic -z 10 -w 5

old01 $ srvctl status service -d poc

Service poc_svc is not running.

old01 $ srvctl start service -d poc  -s poc_svc

old01 $ srvctl status service -d poc

Service poc_svc is running on instance(s) poc1

3. As prep work, we can copy the database initialization file and password file from the ${ORACLE_HOME}/dbs directory to the new cluster nodes. We also need to create the audit dump destination, archive destination, and any other OS directory structure used by the database.
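A sketch of that prep work, assuming the default ${ORACLE_HOME}/dbs file names and an audit destination under ORACLE_BASE (adjust paths to your environment):

# Copy init and password files for each instance to the matching target node
old01 $ scp $ORACLE_HOME/dbs/initpoc1.ora $ORACLE_HOME/dbs/orapwpoc1 new01:$ORACLE_HOME/dbs/
old02 $ scp $ORACLE_HOME/dbs/initpoc2.ora $ORACLE_HOME/dbs/orapwpoc2 new02:$ORACLE_HOME/dbs/

# Pre-create the audit dump destination (and any archive/diag directories the database uses) on both target nodes
new01 $ mkdir -p /oracle/orabase/admin/poc/adump
new02 $ mkdir -p /oracle/orabase/admin/poc/adump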

4. The next step is to shut down the database and unmount the disk group from all nodes.

srvctl stop database -d poc

ASMCMD [+] > umount san_poc
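The dismount has to be repeated on every node that has the disk group mounted; lsdg should then no longer list SAN_POC on either node. A quick sketch:

old02 $ asmcmd umount san_poc
old01 $ asmcmd lsdg    # SAN_POC should no longer appear
old02 $ asmcmd lsdg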

5. We worked with the system admin to detach the LUNs from the existing cluster and present them to the new cluster. In this process we found that identical ASM LUN names existed on both clusters. To resolve this, we renamed the LUN using asmlib on one of the existing cluster nodes; on the other node we ran the "oracleasm scandisks" command. DO NOT use the "oracleasm deletedisk" command. The system admin then released the LUNs at the OS level as well as through the storage admin console.

The LUNs released from the existing cluster were then presented to the new cluster. On both new cluster nodes, execute "oracleasm scandisks":
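This is done as root; oracleasm listdisks is a quick way to confirm that the migrated disk labels are now visible on the new nodes:

# Run the following as root on both new cluster nodes (shown for new01; repeat on new02)
new01 # oracleasm scandisks
# listdisks should now show the migrated ASM disk labels
new01 # oracleasm listdisks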

Now came the exciting part…

6. I logged in to a new cluster node. To validate that the disk header was intact, I ran the kfed command:

$ kfed dev=/dev/oracleasm/disks/ASM_0005 op=read cnt=1 blknum=0

kfbh.endian:                          1 ; 0x000: 0x01

kfbh.hard:                          130 ; 0x001: 0x82

kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD

kfbh.datfmt:                          1 ; 0x003: 0x01

kfbh.block.blk:                       0 ; 0x004: blk=0

kfbh.block.obj:              2147483648 ; 0x008: disk=0

kfbh.check:                  2930213453 ; 0x00c: 0xaea7824d

kfbh.fcn.base:                        0 ; 0x010: 0x00000000

kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000

kfbh.spare1:                          0 ; 0x018: 0x00000000

kfbh.spare2:                          0 ; 0x01c: 0x00000000

kfdhdb.driver.provstr: ORCLDISKASM_0005 ; 0x000: length=16

kfdhdb.driver.reserved[0]:   1598903105 ; 0x008: 0x5f4d5341

kfdhdb.driver.reserved[1]:    892350512 ; 0x00c: 0x35303030

kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000

kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000

kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000

kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000

kfdhdb.compat:                168820736 ; 0x020: 0x0a100000

kfdhdb.dsknum:                        0 ; 0x024: 0x0000

kfdhdb.grptyp:                        1 ; 0x026: KFDGTP_EXTERNAL

kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER

kfdhdb.dskname:            SAN_POC_0000 ; 0x028: length=12

kfdhdb.grpname:                 SAN_POC ; 0x048: length=7

kfdhdb.fgname:             SAN_POC_0000 ; 0x068: length=12

Holding my breath, I attempted to mount the disk group:

ASMCMD [+] > mount san_poc

And it mounted successfully. I repeated the same step on the other node.

7. Now it was time to make sure we could start the database [POC] residing on this disk group. Before starting the database, I added it to the OCR by issuing the following commands:

new01 $ srvctl add database -d poc -o /oracle/orabase/product/11.2.0.3 -c RAC -y AUTOMATIC -p '+SAN_POC/poc/spfilepoc.ora' -a "SAN_POC"

new01 $ srvctl add instance -d poc -i poc1 -n new01

new01 $ srvctl add instance -d poc -i poc2 -n new02

new01 $ srvctl add service -d poc -s poc_svc -r poc1 -a poc2 -P PRECONNECT -y AUTOMATIC -q TRUE -e select -m basic -z 10 -w 5

new01 $ srvctl start database -d poc

I was overjoyed as the database started without any issues! [Though I did have to create the audit dump directory first…]

8. The listener parameters (remote_listener and local_listener) need to be modified in the database server parameter file.

SQL> alter system set remote_listener='new-scan1:1521' sid='*' scope=spfile;

System altered.

SQL> alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=new02-vip)(PORT=1521))))' sid='poc2' scope=spfile;

System altered.

SQL> alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=new01-vip)(PORT=1521))))' sid='poc1' scope=spfile;

I restarted the database and was able to connect to the POC database from a remote client.
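A quick connectivity check from a client machine through the new SCAN and the poc_svc service (EZConnect sketch; poc_test is the placeholder validation user created earlier):

client $ sqlplus poc_test/change_me@//new-scan1:1521/poc_svc

SQL> select instance_name, host_name from v$instance;
SQL> select count(*) from poc_test.migration_check;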

Summary:

Assumptions:

  • OS and Oracle software binaries are identical on the existing and target cluster nodes.
  • The ASM disk groups being migrated do not contain voting/OCR files.
  • The ASM disk groups being migrated contain only database files.
  • The databases being migrated are fully self-contained within the disk groups being migrated (see the query sketch after this list).
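The last assumption can be checked with a query like the one below, run in each database being migrated: every disk group referenced by its data files, temp files, redo logs, and control files should be one of the disk groups that will move with the SAN.

SQL> select distinct regexp_substr(name, '^\+[^/]+') as disk_group
       from (select name   from v$datafile
             union all
             select name   from v$tempfile
             union all
             select member from v$logfile
             union all
             select name   from v$controlfile);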

Migration Steps:

  1. Identify the databases being migrated.
  2. Collect the database and related database service configuration.
  3. Prepare commands to re-create the services and to add the databases to the OCR on the target cluster.
  4. Copy password files from the existing cluster nodes to the target cluster nodes.
  5. Copy init files from the existing cluster nodes to the target cluster nodes.
  6. Create entries in /etc/oratab for the databases to be migrated.
  7. Shut down the database(s) having files on the disk groups being migrated.
  8. Unmount the disk group on all nodes of the existing cluster (umount <DiskGroupName>).
  9. SysAdmin detaches the SAN LUNs from the existing cluster.
  10. SysAdmin presents the same LUNs to the new target cluster nodes.
  11. The ASM admin/DBA mounts the disk group using the asmcmd utility (mount <DiskGroupName>).
  12. Add the database to the OCR using the commands prepared in Step 3.
  13. Start the database using srvctl.
  14. Modify the local_listener and remote_listener parameters [node VIP and SCAN names need to be updated].
  15. Add the database service(s), if any, identified in Step 2 and Step 3.
  16. Modify the asm_diskgroups parameter in the spfile of all ASM instances so that the disk groups migrated through SAN migration are mounted again after a node reboot (see the sketch after this list).
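A couple of these steps translate into simple commands. Steps 2 and 3 are mostly a matter of capturing the existing configuration with srvctl, and Step 16 is a single alter system in each ASM instance on the target cluster. A sketch (the disk group names in the asm_diskgroups list are examples; include every disk group the new cluster's ASM instances should mount at startup):

# Steps 2/3: capture the current configuration so it can be re-created on the target cluster
old01 $ srvctl config database -d poc -a
old01 $ srvctl config service -d poc

# Step 16: run against the ASM instance(s) on the new cluster
SQL> alter system set asm_diskgroups='DATA','FRA','SAN_POC' scope=spfile sid='*';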
