List tables with stale statistics

This morning we were digging into a query performance issue: a query that had been running fine until we created an index. As a quick fix we created a baseline from a good execution plan we found among the historical stored plans.

Later on, when we reviewed the last-analyzed dates for the tables in question, we expected that the scheduler job should have collected statistics on the table.
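For reference, the last-analyzed dates can be checked with a simple dictionary query (the owner is a placeholder):

SELECT table_name, last_analyzed, num_rows
FROM dba_tables
WHERE owner = 'APP_XXXXX'
ORDER BY last_analyzed NULLS FIRST;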

We used the following PL/SQL block to list any tables with stale statistics:

set serveroutput on

DECLARE
  stale_lst dbms_stats.ObjectTab;
BEGIN
  dbms_stats.gather_schema_stats(
    ownname => 'APP_XXXXX',
    options => 'LIST STALE',
    objlist => stale_lst);
  IF (stale_lst.count > 0) THEN
    dbms_output.put_line('Number of objects with stale statistics: ' || stale_lst.count);
    FOR i IN stale_lst.first .. stale_lst.last
    LOOP
      dbms_output.put_line(rpad(stale_lst(i).ownname, 30, ' ') || stale_lst(i).objname);
    END LOOP;
  ELSE
    dbms_output.put_line('There are no tables with stale statistics.');
  END IF;
END;
/

Surprisingly, none of the tables showed up in the result set. We reviewed DBA_TAB_MODIFICATIONS and found that the table had been truncated since the last statistics collection, so why had statistics not been gathered on it?
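A query along these lines against DBA_TAB_MODIFICATIONS (a standard dictionary view; the owner is a placeholder) shows the DML tracked since the last gather. You may need to run dbms_stats.flush_database_monitoring_info first so the view reflects current activity:

SELECT table_name, inserts, updates, deletes, truncated, timestamp
FROM dba_tab_modifications
WHERE table_owner = 'APP_XXXXX';

We then used the following query to find out whether statistics were locked on the table: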

SELECT table_name, stattype_locked
FROM dba_tab_statistics
WHERE owner = 'APP_XXXXX'
AND table_name = 'APP_TAB_NAME'
ORDER BY stattype_locked DESC NULLS LAST;

Guess what!!! The application administrator, who had the password for the application schema, had locked the statistics on the table. We unlocked the stats. The lesson: tables with locked statistics are not reported as stale.
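For reference, unlocking table statistics is done through DBMS_STATS (the names below are the placeholders used above):

SQL> exec dbms_stats.unlock_table_stats('APP_XXXXX', 'APP_TAB_NAME');

The corresponding dbms_stats.lock_table_stats call is what the administrator had run in the first place.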


Database migration using SAN migration technique

Actual Database migration

We are planning to migrate databases from one RAC cluster to another. For this purpose we plan to use a SAN migration technique.

This is a proof of concept (POC); I will update this blog as we make progress.

1. For the POC, we will create a new disk group [san_poc] on the existing RAC.

old01 $ kfod op=disks disks=raw

--------------------------------------------------------------------------------

 Disk          Size Path                                     User     Group

================================================================================

   1:      20473 Mb /dev/oracleasm/disks/ASM_0001            oracle   oinstall

   2:      20473 Mb /dev/oracleasm/disks/ASM_0002            oracle   oinstall

   3:      20473 Mb ORCL:ASM_0001                            <unknown> <unknown>

   4:      20473 Mb ORCL:ASM_0002                            <unknown> <unknown>

old02 $ asmcmd lsdg

State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name

MOUNTED  EXTERN  N         512   4096  1048576     61440    18734                0           18734              0             Y  DATA1/

MOUNTED  EXTERN  N         512   4096  1048576     40960    40455                0           40455              0             N  FRA1/

old01 $ asmcmd -p

ASMCMD [+] > mkdg '<dg name="san_poc" redundancy="external"><dsk string="/dev/oracleasm/disks/ASM_0001"/><dsk string="/dev/oracleasm/disks/ASM_0002"/></dg>'

ASMCMD [+] > lsdg

State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name

MOUNTED  EXTERN  N         512   4096  1048576     61440    18734                0           18734              0             Y  DATA1/

MOUNTED  EXTERN  N         512   4096  1048576     40960    40455                0           40455              0             N  FRA1/

MOUNTED  EXTERN  N         512   4096  1048576     40946    40894                0           40894              0             N  SAN_POC/

ASMCMD [+] >


Before we create a new database on this new disk group, we need to mount it on the other node as well:

old02 $ asmcmd -p

ASMCMD [+] > lsdg

State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name

MOUNTED  EXTERN  N         512   4096  1048576     61440    18734                0           18734              0             Y  DATA1/

MOUNTED  EXTERN  N         512   4096  1048576     40960    40455                0           40455              0             N  FRA1/

ASMCMD [+] > mount san_poc

ASMCMD [+] > lsdg

State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name

MOUNTED  EXTERN  N         512   4096  1048576     61440    18734                0           18734              0             Y  DATA1/

MOUNTED  EXTERN  N         512   4096  1048576     40960    40455                0           40455              0             N  FRA1/

MOUNTED  EXTERN  N         512   4096  1048576     40946    40851                0           40851              0             N  SAN_POC/

ASMCMD [+] >

2. Create a database: here I used dbca to create the database quickly, with all database files (redo logs, control files, data files, temp files, etc.) on the SAN_POC disk group.
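For repeatability, dbca can also be driven in silent mode. A minimal sketch, assuming 11.2-style silent-mode flags; the template, passwords, and node list are illustrative:

old01 $ dbca -silent -createDatabase -templateName General_Purpose.dbc \
  -gdbname poc -sid poc -sysPassword XXXX -systemPassword XXXX \
  -storageType ASM -diskGroupName SAN_POC -nodelist old01,old02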

Also, I created a new user and a table with a couple of rows in it. This will help validate the status of end-user objects after the migration. To keep this post concise, I am omitting the details of these trivial tasks; a sketch is shown below.
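A minimal sketch of the kind of validation objects I mean (the user name, password, and table are illustrative):

SQL> create user pocuser identified by XXXX;
SQL> grant create session, create table, unlimited tablespace to pocuser;
SQL> create table pocuser.migration_check (id number, note varchar2(50));
SQL> insert into pocuser.migration_check values (1, 'before SAN migration');
SQL> commit;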

For the new database, I created a new service using the following command:

old01 $ srvctl add service -d poc -s poc_svc -r poc1 -a poc2 -P PRECONNECT -y AUTOMATIC -q TRUE -e select -m basic -z 10 -w 5

old01 $ srvctl status service -d poc

Service poc_svc is not running.

old01 $ srvctl start service -d poc  -s poc_svc

old01 $ srvctl status service -d poc

Service poc_svc is running on instance(s) poc1

3. As prep work, we can copy the database initialization file and password file from the ${ORACLE_HOME}/dbs directory to the new cluster. We also need to create the audit dump destination, archive destination, and any other OS directory structure used by the database.
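A minimal sketch of that prep, assuming the same ${ORACLE_HOME} on both clusters (the file and directory names are illustrative):

old01 $ scp ${ORACLE_HOME}/dbs/initpoc1.ora ${ORACLE_HOME}/dbs/orapwpoc1 oracle@new01:${ORACLE_HOME}/dbs/
old01 $ ssh oracle@new01 'mkdir -p /oracle/orabase/admin/poc/adump'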

4. The next step is to shut down the database and unmount the disk group on all nodes.

srvctl stop database -d poc

ASMCMD [+] > umount san_poc

5. We worked with the system admin to detach the LUNs from the existing cluster and present them to the new cluster. In the process we found identical ASM LUN names on both clusters. To resolve this, we renamed the LUN using ASMLib on one of the existing cluster nodes; on the other node we ran the oracleasm scandisks command. DO NOT use the "oracleasm deletedisk" command. The system admin then released the LUNs from the OS as well as through the storage admin console.

The LUNs released from the existing cluster were presented to the new cluster. On both new cluster nodes, execute "oracleasm scandisks".
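For reference, the scan-and-verify pair looks like this (run as root on each new node):

new01 # oracleasm scandisks
new01 # oracleasm listdisks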

Now came the exciting phase…

6. I logged in to a new cluster node. To validate that the disk header was intact, I ran the kfed command:

$ kfed dev=/dev/oracleasm/disks/ASM_0005 op=read cnt=1 blknum=0

kfbh.endian:                          1 ; 0x000: 0x01

kfbh.hard:                          130 ; 0x001: 0x82

kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD

kfbh.datfmt:                          1 ; 0x003: 0x01

kfbh.block.blk:                       0 ; 0x004: blk=0

kfbh.block.obj:              2147483648 ; 0x008: disk=0

kfbh.check:                  2930213453 ; 0x00c: 0xaea7824d

kfbh.fcn.base:                        0 ; 0x010: 0x00000000

kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000

kfbh.spare1:                          0 ; 0x018: 0x00000000

kfbh.spare2:                          0 ; 0x01c: 0x00000000

kfdhdb.driver.provstr: ORCLDISKASM_0005 ; 0x000: length=16

kfdhdb.driver.reserved[0]:   1598903105 ; 0x008: 0x5f4d5341

kfdhdb.driver.reserved[1]:    892350512 ; 0x00c: 0x35303030

kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000

kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000

kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000

kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000

kfdhdb.compat:                168820736 ; 0x020: 0x0a100000

kfdhdb.dsknum:                        0 ; 0x024: 0x0000

kfdhdb.grptyp:                        1 ; 0x026: KFDGTP_EXTERNAL

kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER

kfdhdb.dskname:            SAN_POC_0000 ; 0x028: length=12

kfdhdb.grpname:                 SAN_POC ; 0x048: length=7

kfdhdb.fgname:             SAN_POC_0000 ; 0x068: length=12

Holding my breath, I attempted to mount the disk group:

 mount san_poc

And it succeeded. I repeated the same step on the other node.

7. Now it was time to make sure we could start the database [POC] residing on this disk group. Before starting it, I added the database to the OCR by issuing the following commands:

new01 $ srvctl add database -d poc -o /oracle/orabase/product/11.2.0.3 -c RAC -y AUTOMATIC -p '+SAN_POC/poc/spfilepoc.ora' -a "SAN_POC"

new01 $ srvctl add instance -d poc -i poc1 -n new01

new01 $ srvctl add instance -d poc -i poc2 -n new02

new01 $ srvctl add service -d poc -s poc_svc -r poc1 -a poc2 -P PRECONNECT -y AUTOMATIC -q TRUE -e select -m basic -z 10 -w 5

new01 $ srvctl start database -d poc

I was overjoyed when the database started without any issues!!! [Though I did have to create the audit dump directory first…]

8. The listener parameters (remote_listener and local_listener) need to be modified in the database server parameter file.

SQL> alter system set remote_listener='new-scan1:1521' sid='*' scope=spfile;

System altered.

SQL> alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=new02-vip)(PORT=1521))))' sid='poc2' scope=spfile;

System altered.

SQL> alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=new01-vip)(PORT=1521))))' sid='poc1' scope=spfile;

I restarted the database and was able to connect to the POC database from a remote client.
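A quick client-side check, assuming EZConnect and the service created in step 2 (the user is the illustrative one from the sketch above):

$ sqlplus pocuser@//new-scan1:1521/poc_svc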

Summary:

Assumptions:

  • OS and Oracle software binaries are identical on the existing and target cluster nodes.
  • The ASM disk groups do not contain voting/OCR files.
  • The ASM disk groups planned for migration contain only database files.
  • The databases being migrated are self-contained within the disk groups being migrated.

Migration Steps:

  1. Identify the databases being migrated.
  2. Collect the database and related database service configuration.
  3. Prepare commands to recreate the services and add the databases to the OCR on the target cluster.
  4. Copy the password files from the existing cluster nodes to the target cluster nodes.
  5. Copy the init files from the existing cluster nodes to the target cluster nodes.
  6. Create entries in /etc/oratab for the databases to be migrated.
  7. Shut down the database(s) having files on the disk groups being migrated.
  8. Unmount the disk group on all nodes of the existing cluster (umount <DiskGroupName>).
  9. SysAdmin detaches the SAN LUNs from the existing cluster.
  10. SysAdmin presents the same LUNs to the target cluster nodes.
  11. ASM admin/DBA mounts the disk group using the asmcmd utility (mount <DiskGroupName>).
  12. Add the database to the OCR using the commands prepared in Step#3.
  13. Start the database using srvctl.
  14. Modify the local_listener and remote_listener parameters [node VIP and SCAN names need to be updated].
  15. Add the database service(s), if any, identified in Step#2 and Step#3.
  16. Modify the asm_diskgroups parameter in the spfile of all ASM instances so that disk groups migrated via SAN migration are mounted after a node reboot (see the sketch below).
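A minimal sketch of step 16, run from an ASM instance on the target cluster (the list must also include whatever disk groups already exist there):

SQL> alter system set asm_diskgroups='<ExistingGroups>','SAN_POC' scope=spfile;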

Passwordless SSH setup

Passwordless SSH Setup for End Users

Purpose

This document describes passwordless SSH setup for an individual user ID, for the following scenarios:

Workstation to UNIX

UNIX to UNIX

CAUTION: Please do NOT use this procedure for generic accounts like "oracle", "infa", etc., as following these steps may overwrite an existing SSH setup.

Pre-requisite:

For Workstation-to-UNIX passwordless SSH setup, download the full PuTTY.zip, or the individual executables used in this document: putty.exe, pscp.exe, puttygen.exe, and pageant.exe.

Make sure that the folder where you copy these files is in the "PATH" environment variable.

Key Generation

  1. Start puttygen.exe [PuTTY Key Generator]

  2. In the text box at the bottom right ("Number of bits in a generated key:"), change 1024 to 2048.

  3. Click the "Generate" button and move the mouse randomly within the box; this generates the random key.

  4. Enter a meaningful value in the "Key comment" field.

  5. Select the content of the box titled "Public key for pasting into OpenSSH authorized_keys file:". Right-click the highlighted text, copy the content, paste it into a new document, and save it as "authorized_keys.txt". [The path doesn't matter as long as you can get to the file later; here we store it as "<PuTTYPath>\Config\authorized_keys.txt".]

  6. Enter a passphrase: anything you can remember, such as a sentence or slogan you like. [You will need to remember this for as long as you use the key, and you must enter it every time you restart, log off/log in, or reload the PuTTY agent.]

    For example: "I love The God!"

This setup works without a passphrase too, but that is very insecure: if someone gets hold of your private key, they can log on as "you".

  7. Re-enter the same passphrase.

  8. Click the "Save private key" button. Select an appropriate directory and enter a file name with the extension ".ppk". Here I entered "dpatel_rsa_2048_20130611.ppk" and clicked "Save".

  9. Click the "Save public key" button. Select the same directory where the private key was stored. For consistency, save the public key under the same name with the extension ".pub". Here I entered "dpatel_rsa_2048_20130611.pub" and clicked "Save".

  10. Close the "PuTTY Key Generator" window.

Load Keys

  1. Start "Pageant.exe". Unlike other applications, it does not open a GUI; it sits in the system tray.

  2. Right-click its tray icon and click "Add Key". Select the private key we stored in Key Generation Step#8.

  3. Pageant will ask for the passphrase we entered in Step#6. Type the passphrase and click "OK".

  4. If there is a typo, the "Enter passphrase" prompt will pop up again. Upon a successful passphrase, the window disappears.

  5. This step is optional. To validate that the key is loaded, double-click the "Pageant.exe" icon in the system tray; it opens a window that displays all loaded keys.

  6. The key needs to be loaded every time the workstation is rebooted or "Pageant.exe" is restarted. To automate this, add the batch file shown at the end of this post to the Windows Startup folder, or execute it manually. You will still be prompted for the passphrase.

SSH Login without further setup

At this point login behaves as usual, because the authenticating server does not yet know about this setup. We will demonstrate this using the pscp.exe [PuTTY Secure Copy] utility.

  1. Open a command window and run a pscp copy to a server; you will be prompted for your password.

  2. Upon entering the correct password, the file is copied. A sketch of the invocation follows.
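A pscp invocation along these lines (the local file and remote path are illustrative; the host is from this walkthrough):

C:\> pscp C:\temp\test.txt dpatel@wdhoel05:/home/dpatel/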

SSH Setup on Unix/Linux server(s)

The setup described in this section needs to be repeated on each Linux/Unix server where passwordless SSH [from the workstation] is required.

SSH setup on first UNIX server

  1. Open an SSH session to the Unix host [here wdhoel05].

  2. Validate that the ".ssh" directory is present in the user's home directory.

  3. If the ".ssh" directory exists, rename it for safety, then recreate it:
    1. mkdir -p ~/.ssh
    2. chmod 700 ~/.ssh
  4. Generate SSH keys for UNIX. This key pair is created without a passphrase so that we can use it in shell scripts without user intervention. Enter the following command and accept all default values [hit Enter at each prompt]:
    1. ssh-keygen -b 2048

  5. Validate the files generated (id_rsa and id_rsa.pub under ~/.ssh).

  6. Let's validate the SSH setup on this server for passwordless login. Seed authorized_keys with the workstation public key saved earlier (copied to the server as ~/authorized_keys.txt):
    1. cat ~/authorized_keys.txt > ~/.ssh/authorized_keys

  7. Create a new session to wdhoel05 to confirm you are no longer prompted for a password.

  8. Execute the following command so the server's own key is authorized as well:
    1. cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

  9. Once passwordless SSH is validated, use the following script to generate a master shell script that can be run on all subsequent UNIX servers to establish passwordless SSH, both "UNIX to UNIX" and "Workstation to UNIX".
    1. Copy the content of the listing below into a script named "genMasterKeys.sh".
    2. For convenience, I have copied genMasterKeys.sh to /tmp/genMasterKeys.sh on wdhoel05, so you can simply copy it to your home directory:
      1. cp -p /tmp/genMasterKeys.sh ~/
    3. Change the permissions on this file to "700".

wdhoel05:dpatel(/home/dpatel)> cat genMasterKeys.sh

#!/bin/ksh
# Dharmesh Patel
# Date : 02/11/2013
# This script reads current SSH keys and authorized key on first server to generate master shell script
# This master script can be executed on all subsequent UNIX servers to establish password less connectivity.
export SCRIPT_NAME=`id -un`_ssh_master.sh
export ID_RSA=`cat ~/.ssh/id_rsa`
export ID_RSA_PUB=`cat ~/.ssh/id_rsa.pub`
export WORKSTATION_RSA_PUB=`cat ~/authorized_keys.txt`

(
cat << EOF_SCRIPT
#!/bin/ksh

# File name: ${SCRIPT_NAME}
# Author : Dharmesh Patel
# Date : `date '+%m/%d/%Y'`
#
# Note : This script has been generated using the script – genMasterKeys.sh
# If you encounter any issues with this script, please contact Dharmesh Patel
# Description: Update authorized keys on this host with the master id_rsa.pub
#
#

function bkp_if_no_bkp_exist {
export ORG_FILE=\$1
export BKP_EXIST=0
for f in \`find . -name "\${ORG_FILE}.bkp.*"\`
do
diff \${ORG_FILE} \${f} 1>/dev/null 2>/dev/null
if [ \$? -eq 0 ]; then
BKP_EXIST=1
break;
fi
done
if [ \${BKP_EXIST} -eq 0 ]; then
mv \${ORG_FILE} \${ORG_FILE}.bkp.\$\$
fi
}

echo "\`uname -n\`: Updating ssh key configuration on \`date +%Y%m%d_%H%M%S\`" | tee -a updt_auth_keys.log

if [ "e"\${HOME} = "e" ]; then
export HOME=/home/dpatel
fi

if [ ! -d \${HOME}/.ssh ]; then
mkdir -p \${HOME}/.ssh/
fi

cd \${HOME}/.ssh

if [ -f id_rsa ]; then
echo "Backup existing id_rsa file to id_rsa.bkp.\$\$" | tee -a updt_auth_keys.log
bkp_if_no_bkp_exist id_rsa
fi

if [ -f id_rsa.pub ]; then
echo "Backup existing id_rsa.pub file to id_rsa.pub.bkp.\$\$" | tee -a updt_auth_keys.log
bkp_if_no_bkp_exist id_rsa.pub
fi

if [ -f authorized_keys ]; then
echo "Backup existing authorized_keys file to authorized_keys.bkp.\$\$" | tee -a updt_auth_keys.log
bkp_if_no_bkp_exist authorized_keys
fi

(
cat << EOF
${ID_RSA}
EOF
) > id_rsa

(
cat << EOFP
${ID_RSA_PUB}
EOFP
) > id_rsa.pub

(
cat << EOFA
${WORKSTATION_RSA_PUB}
${ID_RSA_PUB}
EOFA
) >>authorized_keys

chmod 700 \${HOME}/.ssh
chmod 600 id_rsa
chmod 644 id_rsa.pub
chmod 644 authorized_keys

EOF_SCRIPT
) > ${SCRIPT_NAME}

chmod 700 ${SCRIPT_NAME}

echo "Script: ${SCRIPT_NAME} is ready for use."

wdhoel05:dpatel(/home/dpatel)> chmod 700 genMasterKeys.sh

  10. Execute this script [genMasterKeys.sh] on wdhoel05; this will generate a script named <LoginID>_ssh_master.sh (example run below).

  11. Now we are ready to perform a similar setup on all other UNIX servers.
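For reference, running the generator looks like this (the generated script name comes from "id -un" inside the script):

wdhoel05:dpatel(/home/dpatel)> ./genMasterKeys.sh
Script: dpatel_ssh_master.sh is ready for use.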

SSH setup on subsequent servers

  1. Copy the script file generated in step 10 of the previous section to the desired UNIX server(s).

  2. Log on to the destination server, here wdhoel06, and execute the script "dpatel_ssh_master.sh".

  3. Any existing key files will be backed up.

  4. Now let's validate connectivity from the workstation to wdhoel06.

  5. Now let's test connectivity between the two UNIX servers:
    1. wdhoel05 to wdhoel06

    2. wdhoel06 to wdhoel05

  6. Steps 1 through 5 need to be repeated on all subsequent UNIX servers; a scripted check is sketched below.
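For reference, the connectivity checks can be run as one-liners (hostnames from this walkthrough); each should print the remote hostname without prompting for a password:

wdhoel05 $ ssh wdhoel06 uname -n
wdhoel06 $ ssh wdhoel05 uname -n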

Loading private key on Workstation

Daily/Restart

As described earlier in this document, the private key needs to be loaded with the Pageant.exe utility to use passwordless connectivity from the workstation. Copy the following commands into a batch file and run it at startup and/or invoke it manually. Please update the paths and file names to match your local workstation setup:

REM Author: Dharmesh Patel

REM Date: 18-December-2012

@echo Loading Pageant.exe - private keys for Dharmesh Patel . . .

start "" "C:\AAA\PuTTY\PAGEANT.EXE" "C:\AAA\PuTTY\Config\dpatel_rsa_2048_20130611.ppk"

pause

@echo "Loading one of the sessions . . ."

cd /d "C:\AAA\PuTTY"

REM Following commented line shows how to load preset session, if any.

REM start PUTTY.EXE -load "oem1" -ssh -P 22 wdhoel05 -l dpatel

start PUTTY.EXE -ssh -P 22 wdhoel05 -l dpatel

pause