Exadata Cloud – Post Provisioning View of the system

Review of Exadata Deployment

Once the Exadata provisioning process completes (which takes around 4-5 hours for a half rack), we can explore what gets deployed:

$ cat /etc/oratab

OCITEST:/u02/app/oracle/product/12.2.0/dbhome_2:Y

+ASM1:/u01/app/12.2.0.1/grid:N       # line added by Agent

 

[grid@phxdbm-o3eja1 ~]$ olsnodes -n

phxdbm-o3eja1 1

phxdbm-o3eja2 2

phxdbm-o3eja3 3

phxdbm-o3eja4 4

 

[grid@phxdbm-o3eja1 ~]$ cat /var/opt/oracle/creg/OCITEST.ini | grep nodelist

nodelist=phxdbm-o3eja1 phxdbm-o3eja2 phxdbm-o3eja3 phxdbm-o3eja4

 

[grid@phxdbm-o3eja1 ~]$ crsctl stat res -t

--------------------------------------------------------------------------------

Name           Target  State        Server                   State details

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.ACFSC1_DG1.C1_DG11V.advm

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.ACFSC1_DG1.C1_DG12V.advm

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.ACFSC1_DG1.dg

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.ACFSC1_DG2.C1_DG2V.advm

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.ACFSC1_DG2.dg

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.ASMNET1LSNR_ASM.lsnr

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.DATAC1.dg

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.DBFS_DG.dg

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.LISTENER.lsnr

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.RECOC1.dg

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.acfsc1_dg1.c1_dg11v.acfs

ONLINE  ONLINE       phxdbm-o3eja1            mounted on /scratch/acfsc1_dg1,STABLE

ONLINE  ONLINE       phxdbm-o3eja2            mounted on /scratch/acfsc1_dg1,STABLE

ONLINE  ONLINE       phxdbm-o3eja3            mounted on /scratch/acfsc1_dg1,STABLE

ONLINE  ONLINE       phxdbm-o3eja4            mounted on /scratch/acfsc1_dg1,STABLE

ora.acfsc1_dg1.c1_dg12v.acfs

ONLINE  ONLINE       phxdbm-o3eja1            mounted on /u02/app_acfs,STABLE

ONLINE  ONLINE       phxdbm-o3eja2            mounted on /u02/app_acfs,STABLE

ONLINE  ONLINE       phxdbm-o3eja3            mounted on /u02/app_acfs,STABLE

ONLINE  ONLINE       phxdbm-o3eja4            mounted on /u02/app_acfs,STABLE

ora.acfsc1_dg2.c1_dg2v.acfs

ONLINE  ONLINE       phxdbm-o3eja1            mounted on /var/opt/oracle/dbaas_acfs,STABLE

ONLINE  ONLINE       phxdbm-o3eja2            mounted on /var/opt/oracle/dbaas_acfs,STABLE

ONLINE  ONLINE       phxdbm-o3eja3            mounted on /var/opt/oracle/dbaas_acfs,STABLE

ONLINE  ONLINE       phxdbm-o3eja4            mounted on /var/opt/oracle/dbaas_acfs,STABLE

ora.net1.network

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.ons

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.proxy_advm

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

1        ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ora.LISTENER_SCAN2.lsnr

1        ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ora.LISTENER_SCAN3.lsnr

1        ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ora.asm

1        ONLINE  ONLINE       phxdbm-o3eja1            Started,STABLE

2        ONLINE  ONLINE       phxdbm-o3eja2            Started,STABLE

3        ONLINE  ONLINE       phxdbm-o3eja3            Started,STABLE

4        ONLINE  ONLINE       phxdbm-o3eja4            Started,STABLE

ora.cvu

1        ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ora.ocitest.db

1        ONLINE  ONLINE       phxdbm-o3eja1            Open,HOME=/u02/app/oracle/product/12.2.0/dbhome_2,STABLE

2        ONLINE  ONLINE       phxdbm-o3eja2            Open,HOME=/u02/app/oracle/product/12.2.0/dbhome_2,STABLE

3        ONLINE  ONLINE       phxdbm-o3eja3            Open,HOME=/u02/app/oracle/product/12.2.0/dbhome_2,STABLE

4        ONLINE  ONLINE       phxdbm-o3eja4            Open,HOME=/u02/app/oracle/product/12.2.0/dbhome_2,STABLE

ora.phxdbm-o3eja1.vip

1        ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ora.phxdbm-o3eja2.vip

1        ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ora.phxdbm-o3eja3.vip

1        ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ora.phxdbm-o3eja4.vip

1        ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.qosmserver

1        OFFLINE OFFLINE                               STABLE

ora.scan1.vip

1        ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ora.scan2.vip

1        ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ora.scan3.vip

1        ONLINE  ONLINE       phxdbm-o3eja1            STABLE

--------------------------------------------------------------------------------

[grid@phxdbm-o3eja1 ~]$ asmcmd lsct

DB_Name  Status     Software_Version  Compatible_version  Instance_Name   Disk_Group

+APX     CONNECTED        12.2.0.1.0          12.2.0.1.0  +APX1   ACFSC1_DG1

+APX     CONNECTED        12.2.0.1.0          12.2.0.1.0  +APX1   ACFSC1_DG2

+ASM     CONNECTED        12.2.0.1.0          12.2.0.1.0  +ASM1   DATAC1

+ASM     CONNECTED        12.2.0.1.0          12.2.0.1.0  +ASM1    DBFS_DG

OCITEST  CONNECTED        12.2.0.1.0          12.2.0.0.0  OCITEST1 DATAC1

OCITEST  CONNECTED        12.2.0.1.0          12.2.0.0.0  OCITEST1  RECOC1

_OCR     CONNECTED         –                  phxdbm-o3eja1.client.phxexadata.oraclevcn.com  DBFS_DG

yoda     CONNECTED        12.2.0.1.0          12.2.0.0.0  yoda1    DATAC1

yoda     CONNECTED        12.2.0.1.0          12.2.0.0.0  yoda1    RECOC1

 

[root@phxdbm-o3eja1 ~]# df -k

Filesystem           1K-blocks     Used Available Use% Mounted on

/dev/mapper/VGExaDb-LVDbSys1

24639868  3878788  19486408  17% /

tmpfs                742619136  2465792 740153344   1% /dev/shm

/dev/xvda1              499656    26360    447084   6% /boot

/dev/mapper/VGExaDb-LVDbOra1

20511356   719324  18727072   4% /u01

/dev/xvdb             51475068  9757380  39079864  20% /u01/app/12.2.0.1/grid

/dev/xvdc             51475068  9302820  39534424  20% /u01/app/oracle/product/12.1.0.2/dbhome_1

/dev/xvdd             51475068  8173956  40663288  17% /u01/app/oracle/product/12.2.0.1/dbhome_1

/dev/xvde             51475068  6002756  42834488  13% /u01/app/oracle/product/11.2.0.4/dbhome_1

/dev/xvdg            206293688 19751360 176040184  11% /u02

/dev/asm/c1_dg12v-186

459276288  1067008 458209280   1% /u02/app_acfs

/dev/asm/c1_dg11v-186

229638144   611488 229026656   1% /scratch/acfsc1_dg1

/dev/asm/c1_dg2v-341 228589568 26597644 201991924  12% /var/opt/oracle/dbaas_acfs

 

Oracle Homes are created and mounted, though for IQN we will be using only 12.2.0.1, 12.1.0.2, and 11.2.0.4 [interim].
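To see which databases are registered with the cluster and which home each runs from, srvctl can be queried from any node. A quick sketch; the output below is abbreviated and simply mirrors the registry entries shown above:

[oracle@phxdbm-o3eja1 ~]$ srvctl config database
OCITEST
yoda

[oracle@phxdbm-o3eja1 ~]$ srvctl config database -db OCITEST | grep -i "Oracle home"
Oracle home: /u02/app/oracle/product/12.2.0/dbhome_2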

The following are Exadata-specific filesystems and their use cases:

/scratch/acfsc1_dg1          – staging area for Exadata

/u02/app_acfs                – user filesystem for applications (currently empty)

/var/opt/oracle/dbaas_acfs   – binary and image repository for all Exadata patching and enablement
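These are ACFS filesystems carved out of the ACFSC1_DG1/ACFSC1_DG2 disk groups seen in the crsctl output above. Their registrations can be confirmed with acfsutil (a sketch, run as the grid user; output omitted):

[grid@phxdbm-o3eja1 ~]$ /sbin/acfsutil registry
[grid@phxdbm-o3eja1 ~]$ /sbin/acfsutil info fs /var/opt/oracle/dbaas_acfs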

Oracle Private Cloud Appliance (PCA) – How to get an Inventory

We recently had to move our PCA. Before the move, we needed to make sure everything was documented, including a detailed inventory of the compute nodes, the attached storage, and the management node configuration. This blog post illustrates how to do this inventory collection. Note that you'll need root access to the [active] Management node.

Here’s some basic info on our PCA:

Component: Server
Software:  PCA 2.2.2; OVM 3.2.10; Oracle VM Manager, Oracle Fabric Manager, and PCA controller software installed on the management servers
Hardware:  Oracle Server X5-2, 20 nodes, each with (2) 18-core processors and 256 GB of memory

Component: Fabric Interconnects
Software:  Oracle Fabric Manager and SDN software
Hardware:  (2) Oracle Fabric Interconnect F1-15 switches, used specifically to provide 10 GbE (SFP+) and 8 Gb FC (LC) ports to connect to the VMAX 10K (external storage is needed for any guest application)

Component: Internal Network
Hardware:  (2) 36-port QDR InfiniBand switches, used for high-speed internal communication between the compute servers, Fabric Interconnects, and OVMM servers

Component: Management Network
Hardware:  (2) 24-port 10-Gigabit Ethernet switches, providing the management interface/access for the compute servers, Fabric Interconnects, and OVMM servers

Component: ZFS
Hardware:  ZS3-ES storage appliance, 18 TB total

Component: Application
Software:  E-Business Suite R12 (12.1.3)

Component: Oracle Database
Software:  11.2.0.4, non-RAC/filesystem, 2 TB

Component: External Storage
Hardware:  EMC VMAX 10K

First run yum install expect on the OVM Manager node, then modify the inventory expect script to contain the correct admin password. You will probably want to run it as ./inventory > /tmp/inventory-pca.txt, since the output is quite voluminous.
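A minimal run, assuming the script lives in the expect-scripts directory referenced below:

# on the active management node, as root
yum install -y expect
cd /u01/app/oracle/ovm-manager-3/ovm_cli/expectscripts
vi inventory                            # set the correct admin password
./inventory > /tmp/inventory-pca.txt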

As an alternative to the inventory script, we can leverage the eovmcli script in that same directory. Create a new script (e.g. doit.sh) in the /u01/app/oracle/ovm-manager-3/ovm_cli/expectscripts/ directory with the following content, replacing the password references with the correct admin password, then run it and capture the output.

#!/bin/sh
# Walk every VM known to OVM Manager, then dump each of its disk mappings.
for i in `./eovmcli 'admin' 'password' 'list vm' | grep id: | awk '{print $NF}' | cut -d: -f2-`
do
    echo ---------- PROCESSING VM=$i
    ./eovmcli 'admin' 'password' "show vm name=$i"
    echo

    # Each VmDiskMapping id in the VM detail ties the VM to a virtual disk.
    for j in `./eovmcli 'admin' 'password' "show vm name=$i" | egrep VmDiskMapping | awk '{print $NF}'`
    do
        echo vDisk=$j
        ./eovmcli 'admin' 'password' "show vmdiskmapping id=$j"
        echo
    done
    echo
done
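To run it (the output file name here is just illustrative):

cd /u01/app/oracle/ovm-manager-3/ovm_cli/expectscripts
chmod +x doit.sh
./doit.sh > /tmp/ovm-vm-inventory.txt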

 

To understand what the inventory script does, these are the OVM CLI commands it actually runs underneath:
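Consolidated, the calls amount to the following (each command's output appears later in this post):

OVM> list ServerPool
OVM> list server
OVM> list PhysicalDisk
OVM> list Repository
OVM> list SanServer
OVM> list StorageInitiator
OVM> list VirtualDisk
OVM> list VM
OVM> list VmDiskMapping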

OVM> list ServerPool

Command: list ServerPool

Status: Success

Time: 2017-11-14 20:56:12,341 UTC

Data: 

  id:0004fb00000200004d46b98dcfc43ff3  name:Rack1_ServerPool

 

PCA Storage Cloud Layout

The Oracle Private Cloud Appliance (PCA) supports storage expansion using either Fibre Channel or InfiniBand storage devices connected to the Fabric Interconnects. We chose to leverage our existing Fibre Channel-based EMC VMAX for this expansion. This section describes the connectivity to the EMC array.

 

Storage Cloud Overview

 

Note, there is an OVM server pool named Rack1_ServerPool in the PCA. The PCA consists of 20 compute nodes, denoted ovcacn<compute node number>, all assigned to this server pool; e.g., ovcacn[07-14]r1 (8 servers) and ovcacn[26-37]r1 (12 servers).

A vHBA is created on each compute node for each storage cloud. A total of four storage clouds are defined when PCA is installed, thus (4) vHBAs on each of the compute and management nodes.

Storage clouds allow you to cable and configure your external storage in such a way as to improve overall throughput or to build a fully HA enabled infrastructure.  Storage clouds are created and configured automatically on PCA installation.

We have a fully HA-enabled environment, where all four Storage Clouds are cross-cabled between the PCA Fabric Interconnects and two  FC switches.

For each PCA compute server, WWPNs are registered and created for the vHBAs, with assigned aliases. From the alias, a vHBA can be identified as belonging to a particular server and storage cloud.

Once the PCA Fabric Interconnect WWPNs are presented to the VMAX array, the storage is visible to the PCA and can be seen using the pca-admin list wwpn-info command. That command's output is used below to illustrate and identify matching WWPNs.
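To pull the entries for a single compute node, the listing can simply be filtered (a sketch, run as root on the active management node):

[root@ovcamn05r1 ~]# pca-admin list wwpn-info | grep ovcacn07r1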

Fibre Channel with the Oracle PCA requires an NPIV-capable FC switch or switches. Because the Fabric Interconnects use NPIV to map the port nodes to the World Wide Node Names (WWNNs) of the vHBAs created on each server, it is not possible to simply patch FC-capable storage directly into the FC ports on the Fabric Interconnects. The software required to translate WWPNs to WWNNs does not exist on the storage heads of most FC storage devices, so directly attaching the storage device would prevent registration of the WWPNs for the vHBAs available on each server.

Storage Cloud Connectivity

There are (4) Storage Clouds (external Fibre Channel connections) attached to the PCA X5-2; they are listed below (using the show storage-network command):

Network_Name                        Description         

------------                        -----------

Cloud_D                             Default Storage Cloud ru15 port2

Cloud_A                             Default Storage Cloud ru22 port1

Cloud_C                             Default Storage Cloud ru15 port1

Cloud_B                             Default Storage Cloud ru22 port2

Each Storage Cloud is connected into the two PCA internal switches: ovcasw22r1 and ovcasw15r1

Each compute node (CN) has four vHBAs (vHBA01 to vHBA04) connected into the Storage Clouds. The following describes this connectivity:

  • vHBA01 is connected to Cloud_A
  • vHBA02 is connected to Cloud_B
  • vHBA03 is connected to Cloud_C
  • vHBA04 is connected to Cloud_D

 

This CN-to-Cloud connectivity is illustrated below for each Storage Cloud (see the command sketch that follows):
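Each block matches the per-cloud detail from the pca-admin CLI; assuming the show storage-network subcommand accepts a cloud name (an assumption based on the output shape), a single cloud can be pulled on its own:

[root@ovcamn05r1 ~]# pca-admin show storage-network Cloud_A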

----------------------------------------

Network_Name         Cloud_A             

Description          Default Storage Cloud ru22 port1

Ports                ovcasw22r1:3:1, ovcasw22r1:12:1

vHBAs                ovcacn32r1-vhba01, ovcacn13r1-vhba01, ovcacn37r1-vhba01, ovcacn26r1-vhba01, ovcacn31r1-vhba01, ovcacn10r1-vhba01, ovcacn27r1-vhba01, ovcacn09r1-vhba01, ovcacn08r1-vhba01, ovcacn29r1-vhba01, ovcacn28r1-vhba01, ovcacn12r1-vhba01, ovcamn06r1-vhba01, ovcacn07r1-vhba01, ovcacn11r1-vhba01, ovcacn36r1-vhba01, ovcacn30r1-vhba01, ovcacn35r1-vhba01, ovcacn14r1-vhba01, ovcacn34r1-vhba01, ovcacn33r1-vhba01, ovcamn05r1-vhba01

----------------------------------------

Network_Name         Cloud_B             

Description          Default Storage Cloud ru22 port2

Ports                ovcasw22r1:3:2, ovcasw22r1:12:2

vHBAs                ovcacn32r1-vhba02, ovcacn13r1-vhba02, ovcacn37r1-vhba02, ovcacn26r1-vhba02, ovcacn31r1-vhba02, ovcacn10r1-vhba02, ovcacn27r1-vhba02, ovcacn09r1-vhba02, ovcacn08r1-vhba02, ovcacn29r1-vhba02, ovcacn28r1-vhba02, ovcacn12r1-vhba02, ovcamn06r1-vhba02, ovcacn07r1-vhba02, ovcacn11r1-vhba02, ovcacn36r1-vhba02, ovcacn30r1-vhba02, ovcacn35r1-vhba02, ovcacn14r1-vhba02, ovcacn34r1-vhba02, ovcacn33r1-vhba02, ovcamn05r1-vhba02

----------------------------------------

Network_Name         Cloud_C             

Description          Default Storage Cloud ru15 port1

Ports                ovcasw15r1:12:1, ovcasw15r1:3:1

vHBAs                ovcacn32r1-vhba03, ovcacn13r1-vhba03, ovcacn37r1-vhba03, ovcacn26r1-vhba03, ovcacn31r1-vhba03, ovcacn10r1-vhba03, ovcacn27r1-vhba03, ovcacn09r1-vhba03, ovcacn08r1-vhba03, ovcacn29r1-vhba03, ovcacn28r1-vhba03, ovcacn12r1-vhba03, ovcamn06r1-vhba03, ovcacn07r1-vhba03, ovcacn11r1-vhba03, ovcacn36r1-vhba03, ovcacn30r1-vhba03, ovcacn35r1-vhba03, ovcacn14r1-vhba03, ovcacn34r1-vhba03, ovcacn33r1-vhba03, ovcamn05r1-vhba03

----------------------------------------

Network_Name         Cloud_D             

Description          Default Storage Cloud ru15 port2

Ports                ovcasw15r1:12:2, ovcasw15r1:3:2

vHBAs                ovcacn32r1-vhba04, ovcacn13r1-vhba04, ovcacn37r1-vhba04, ovcacn26r1-vhba04, ovcacn31r1-vhba04, ovcacn10r1-vhba04, ovcacn27r1-vhba04, ovcacn09r1-vhba04, ovcacn08r1-vhba04, ovcacn29r1-vhba04, ovcacn28r1-vhba04, ovcacn12r1-vhba04, ovcamn06r1-vhba04, ovcacn07r1-vhba04, ovcacn11r1-vhba04, ovcacn36r1-vhba04, ovcacn30r1-vhba04, ovcacn35r1-vhba04, ovcacn14r1-vhba04, ovcacn34r1-vhba04, ovcacn33r1-vhba04, ovcamn05r1-vhba04

 

Storage Cloud with WWPN

Each server in the Oracle PCA is connected to the Fabric Interconnects via an InfiniBand (IB) connection. The Fabric Interconnects are capable of translating connections on their Fibre Channel ports to reroute them over these IB connections. To facilitate this, vHBAs are defined on each server, each mapping to a storage cloud defined on the Fabric Interconnects. The storage cloud that a vHBA maps to determines which FC ports it relates to on the Fabric Interconnects.

A similar view of the connectivity can be seen from the WWPN listing perspective. As above, every CN is reflected in this listing; i.e., every CN has connectivity to every Storage Cloud.

Cloud_Name           Cloud_A             

WWPN_List            50:01:39:70:00:7D:41:20, 50:01:39:70:00:7D:41:12, 50:01:39:70:00:7D:41:1C, 50:01:39:70:00:7D:41:06, 50:01:39:70:00:7D:41:04, 50:01:39:70:00:7D:41:0A, 50:01:39:70:00:7D:41:1E, 50:01:39:70:00:7D:41:2A, 50:01:39:70:00:7D:41:16, 50:01:39:70:00:7D:41:26, 50:01:39:70:00:7D:41:18, 50:01:39:70:00:7D:41:10, 50:01:39:70:00:7D:41:02, 50:01:39:70:00:7D:41:08, 50:01:39:70:00:7D:41:0C, 50:01:39:70:00:7D:41:0E, 50:01:39:70:00:7D:41:1A, 50:01:39:70:00:7D:41:24, 50:01:39:70:00:7D:41:28, 50:01:39:70:00:7D:41:14, 50:01:39:70:00:7D:41:22, 50:01:39:70:00:7D:41:00

----------------------------------------

Cloud_Name           Cloud_B             

WWPN_List            50:01:39:70:00:7D:41:21, 50:01:39:70:00:7D:41:13, 50:01:39:70:00:7D:41:1D, 50:01:39:70:00:7D:41:07, 50:01:39:70:00:7D:41:05, 50:01:39:70:00:7D:41:0B, 50:01:39:70:00:7D:41:1F, 50:01:39:70:00:7D:41:2B, 50:01:39:70:00:7D:41:17, 50:01:39:70:00:7D:41:27, 50:01:39:70:00:7D:41:19, 50:01:39:70:00:7D:41:11, 50:01:39:70:00:7D:41:03, 50:01:39:70:00:7D:41:09, 50:01:39:70:00:7D:41:0D, 50:01:39:70:00:7D:41:0F, 50:01:39:70:00:7D:41:1B, 50:01:39:70:00:7D:41:25, 50:01:39:70:00:7D:41:29, 50:01:39:70:00:7D:41:15, 50:01:39:70:00:7D:41:23, 50:01:39:70:00:7D:41:01

----------------------------------------

Cloud_Name           Cloud_C             

WWPN_List            50:01:39:70:00:7D:51:20, 50:01:39:70:00:7D:51:12, 50:01:39:70:00:7D:51:1C, 50:01:39:70:00:7D:51:06, 50:01:39:70:00:7D:51:04, 50:01:39:70:00:7D:51:0A, 50:01:39:70:00:7D:51:1E, 50:01:39:70:00:7D:51:2A, 50:01:39:70:00:7D:51:16, 50:01:39:70:00:7D:51:26, 50:01:39:70:00:7D:51:18, 50:01:39:70:00:7D:51:10, 50:01:39:70:00:7D:51:02, 50:01:39:70:00:7D:51:08, 50:01:39:70:00:7D:51:0C, 50:01:39:70:00:7D:51:0E, 50:01:39:70:00:7D:51:1A, 50:01:39:70:00:7D:51:24, 50:01:39:70:00:7D:51:28, 50:01:39:70:00:7D:51:14, 50:01:39:70:00:7D:51:22, 50:01:39:70:00:7D:51:00

----------------------------------------

Cloud_Name           Cloud_D             

WWPN_List            50:01:39:70:00:7D:51:21, 50:01:39:70:00:7D:51:13, 50:01:39:70:00:7D:51:1D, 50:01:39:70:00:7D:51:07, 50:01:39:70:00:7D:51:05, 50:01:39:70:00:7D:51:0B, 50:01:39:70:00:7D:51:1F, 50:01:39:70:00:7D:51:2B, 50:01:39:70:00:7D:51:17, 50:01:39:70:00:7D:51:27, 50:01:39:70:00:7D:51:19, 50:01:39:70:00:7D:51:11, 50:01:39:70:00:7D:51:03, 50:01:39:70:00:7D:51:09, 50:01:39:70:00:7D:51:0D, 50:01:39:70:00:7D:51:0F, 50:01:39:70:00:7D:51:1B, 50:01:39:70:00:7D:51:25, 50:01:39:70:00:7D:51:29, 50:01:39:70:00:7D:51:15, 50:01:39:70:00:7D:51:23, 50:01:39:70:00:7D:51:01

An associated grouping by vHBA and Cloud is listed here:

 

WWPN             vHBA           Cloud_Name     Server       Type     Alias                 

--------------   ----           ----------     --------     ----     ----------

50:01:39:70:00:7D:41:28   vhba01     Cloud_A   ovcacn14r1      CN    ovcacn14r1-Cloud_A                      

50:01:39:70:00:7D:41:20   vhba01     Cloud_A   ovcacn32r1      CN    ovcacn32r1-Cloud_A                      

50:01:39:70:00:7D:41:22   vhba01     Cloud_A   ovcacn33r1      CN    ovcacn33r1-Cloud_A                      

50:01:39:70:00:7D:41:24   vhba01     Cloud_A   ovcacn35r1      CN    ovcacn35r1-Cloud_A                      

50:01:39:70:00:7D:41:26   vhba01     Cloud_A   ovcacn29r1      CN    ovcacn29r1-Cloud_A                      

50:01:39:70:00:7D:41:06   vhba01     Cloud_A   ovcacn26r1      CN    ovcacn26r1-Cloud_A                      

50:01:39:70:00:7D:41:04   vhba01     Cloud_A   ovcacn31r1      CN    ovcacn31r1-Cloud_A                       

50:01:39:70:00:7D:41:08   vhba01     Cloud_A   ovcacn07r1      CN    ovcacn07r1-Cloud_A                      

50:01:39:70:00:7D:41:0C   vhba01     Cloud_A   ovcacn11r1      CN    ovcacn11r1-Cloud_A                      

50:01:39:70:00:7D:41:1E   vhba01     Cloud_A   ovcacn27r1      CN    ovcacn27r1-Cloud_A                      

50:01:39:70:00:7D:41:14   vhba01     Cloud_A   ovcacn34r1      CN    ovcacn34r1-Cloud_A                      

50:01:39:70:00:7D:41:12   vhba01     Cloud_A   ovcacn13r1      CN    ovcacn13r1-Cloud_A                      

50:01:39:70:00:7D:41:1A   vhba01     Cloud_A   ovcacn30r1      CN    ovcacn30r1-Cloud_A                      

50:01:39:70:00:7D:41:18   vhba01     Cloud_A   ovcacn28r1      CN    ovcacn28r1-Cloud_A                       

50:01:39:70:00:7D:41:0A   vhba01     Cloud_A   ovcacn10r1      CN    ovcacn10r1-Cloud_A                      

50:01:39:70:00:7D:41:1C   vhba01     Cloud_A   ovcacn37r1      CN    ovcacn37r1-Cloud_A                      

50:01:39:70:00:7D:41:0E   vhba01     Cloud_A   ovcacn36r1      CN    ovcacn36r1-Cloud_A                      

50:01:39:70:00:7D:41:16   vhba01     Cloud_A   ovcacn08r1      CN    ovcacn08r1-Cloud_A                      

50:01:39:70:00:7D:41:2A   vhba01     Cloud_A   ovcacn09r1      CN    ovcacn09r1-Cloud_A                      

50:01:39:70:00:7D:41:10   vhba01     Cloud_A   ovcacn12r1      CN    ovcacn12r1-Cloud_A                      

50:01:39:70:00:7D:41:29   vhba02     Cloud_B   ovcacn14r1      CN    ovcacn14r1-Cloud_B                      

50:01:39:70:00:7D:41:21   vhba02     Cloud_B   ovcacn32r1      CN    ovcacn32r1-Cloud_B                      

50:01:39:70:00:7D:41:23   vhba02     Cloud_B   ovcacn33r1      CN    ovcacn33r1-Cloud_B                      

50:01:39:70:00:7D:41:25   vhba02     Cloud_B   ovcacn35r1      CN    ovcacn35r1-Cloud_B                      

50:01:39:70:00:7D:41:27   vhba02     Cloud_B   ovcacn29r1      CN    ovcacn29r1-Cloud_B                      

50:01:39:70:00:7D:41:07   vhba02     Cloud_B   ovcacn26r1      CN    ovcacn26r1-Cloud_B                      

50:01:39:70:00:7D:41:05   vhba02     Cloud_B   ovcacn31r1      CN    ovcacn31r1-Cloud_B                      

50:01:39:70:00:7D:41:09   vhba02     Cloud_B   ovcacn07r1      CN    ovcacn07r1-Cloud_B                      

50:01:39:70:00:7D:41:1D   vhba02     Cloud_B   ovcacn37r1      CN    ovcacn37r1-Cloud_B                      

50:01:39:70:00:7D:41:17   vhba02     Cloud_B   ovcacn08r1      CN    ovcacn08r1-Cloud_B                      

50:01:39:70:00:7D:41:11   vhba02     Cloud_B   ovcacn12r1      CN    ovcacn12r1-Cloud_B                      

50:01:39:70:00:7D:41:1F   vhba02     Cloud_B   ovcacn27r1      CN    ovcacn27r1-Cloud_B                      

50:01:39:70:00:7D:41:13   vhba02     Cloud_B   ovcacn13r1      CN    ovcacn13r1-Cloud_B                      

50:01:39:70:00:7D:41:19   vhba02     Cloud_B   ovcacn28r1      CN    ovcacn28r1-Cloud_B                      

50:01:39:70:00:7D:41:0B   vhba02     Cloud_B   ovcacn10r1      CN    ovcacn10r1-Cloud_B                      

50:01:39:70:00:7D:41:15   vhba02     Cloud_B   ovcacn34r1      CN    ovcacn34r1-Cloud_B                      

50:01:39:70:00:7D:41:0F   vhba02     Cloud_B   ovcacn36r1      CN    ovcacn36r1-Cloud_B                      

50:01:39:70:00:7D:41:0D   vhba02     Cloud_B   ovcacn11r1      CN    ovcacn11r1-Cloud_B                      

50:01:39:70:00:7D:41:1B   vhba02     Cloud_B   ovcacn30r1      CN    ovcacn30r1-Cloud_B                      

50:01:39:70:00:7D:41:2B   vhba02     Cloud_B   ovcacn09r1      CN    ovcacn09r1-Cloud_B                      

50:01:39:70:00:7D:51:12   vhba03     Cloud_C   ovcacn13r1      CN    ovcacn13r1-Cloud_C                       

50:01:39:70:00:7D:51:1E   vhba03     Cloud_C   ovcacn27r1      CN    ovcacn27r1-Cloud_C                      

50:01:39:70:00:7D:51:08   vhba03     Cloud_C   ovcacn07r1      CN    ovcacn07r1-Cloud_C                      

50:01:39:70:00:7D:51:10   vhba03     Cloud_C   ovcacn12r1      CN    ovcacn12r1-Cloud_C                      

50:01:39:70:00:7D:51:20   vhba03     Cloud_C   ovcacn32r1      CN    ovcacn32r1-Cloud_C                      

50:01:39:70:00:7D:51:22   vhba03     Cloud_C   ovcacn33r1      CN    ovcacn33r1-Cloud_C                      

50:01:39:70:00:7D:51:24   vhba03     Cloud_C   ovcacn35r1      CN    ovcacn35r1-Cloud_C                      

50:01:39:70:00:7D:51:26   vhba03     Cloud_C   ovcacn29r1      CN    ovcacn29r1-Cloud_C                       

50:01:39:70:00:7D:51:28   vhba03     Cloud_C   ovcacn14r1      CN    ovcacn14r1-Cloud_C                      

50:01:39:70:00:7D:51:1C   vhba03     Cloud_C   ovcacn37r1      CN    ovcacn37r1-Cloud_C                      

50:01:39:70:00:7D:51:0C   vhba03     Cloud_C   ovcacn11r1      CN    ovcacn11r1-Cloud_C                      

50:01:39:70:00:7D:51:06   vhba03     Cloud_C   ovcacn26r1      CN    ovcacn26r1-Cloud_C                      

50:01:39:70:00:7D:51:14   vhba03     Cloud_C   ovcacn34r1      CN    ovcacn34r1-Cloud_C                      

50:01:39:70:00:7D:51:2A   vhba03     Cloud_C   ovcacn09r1      CN    ovcacn09r1-Cloud_C                      

50:01:39:70:00:7D:51:1A   vhba03     Cloud_C   ovcacn30r1      CN    ovcacn30r1-Cloud_C                       

50:01:39:70:00:7D:51:16   vhba03     Cloud_C   ovcacn08r1      CN    ovcacn08r1-Cloud_C                      

50:01:39:70:00:7D:51:0A   vhba03     Cloud_C   ovcacn10r1      CN    ovcacn10r1-Cloud_C                      

50:01:39:70:00:7D:51:18   vhba03     Cloud_C   ovcacn28r1      CN    ovcacn28r1-Cloud_C                      

50:01:39:70:00:7D:51:04   vhba03     Cloud_C   ovcacn31r1      CN    ovcacn31r1-Cloud_C                      

50:01:39:70:00:7D:51:0E   vhba03     Cloud_C   ovcacn36r1      CN    ovcacn36r1-Cloud_C                      

50:01:39:70:00:7D:51:1B   vhba04     Cloud_D   ovcacn30r1      CN    ovcacn30r1-Cloud_D                      

50:01:39:70:00:7D:51:1D   vhba04     Cloud_D   ovcacn37r1      CN    ovcacn37r1-Cloud_D                      

50:01:39:70:00:7D:51:1F   vhba04     Cloud_D   ovcacn27r1      CN    ovcacn27r1-Cloud_D                      

50:01:39:70:00:7D:51:07   vhba04     Cloud_D   ovcacn26r1      CN    ovcacn26r1-Cloud_D                       

50:01:39:70:00:7D:51:19   vhba04     Cloud_D   ovcacn28r1      CN    ovcacn28r1-Cloud_D                      

50:01:39:70:00:7D:51:21   vhba04     Cloud_D   ovcacn32r1      CN    ovcacn32r1-Cloud_D                      

50:01:39:70:00:7D:51:23   vhba04     Cloud_D   ovcacn33r1      CN    ovcacn33r1-Cloud_D                      

50:01:39:70:00:7D:51:25   vhba04     Cloud_D   ovcacn35r1      CN    ovcacn35r1-Cloud_D                      

50:01:39:70:00:7D:51:27   vhba04     Cloud_D   ovcacn29r1      CN    ovcacn29r1-Cloud_D                      

50:01:39:70:00:7D:51:29   vhba04     Cloud_D   ovcacn14r1      CN    ovcacn14r1-Cloud_D                      

50:01:39:70:00:7D:51:09   vhba04     Cloud_D   ovcacn07r1      CN    ovcacn07r1-Cloud_D                      

50:01:39:70:00:7D:51:0D   vhba04     Cloud_D   ovcacn11r1      CN    ovcacn11r1-Cloud_D                      

50:01:39:70:00:7D:51:15   vhba04     Cloud_D   ovcacn34r1      CN    ovcacn34r1-Cloud_D                      

50:01:39:70:00:7D:51:0B   vhba04     Cloud_D   ovcacn10r1      CN    ovcacn10r1-Cloud_D                      

50:01:39:70:00:7D:51:05   vhba04     Cloud_D   ovcacn31r1      CN    ovcacn31r1-Cloud_D                      

50:01:39:70:00:7D:51:2B   vhba04     Cloud_D   ovcacn09r1      CN    ovcacn09r1-Cloud_D                      

50:01:39:70:00:7D:51:11   vhba04     Cloud_D   ovcacn12r1      CN    ovcacn12r1-Cloud_D                      

50:01:39:70:00:7D:51:17   vhba04     Cloud_D   ovcacn08r1      CN    ovcacn08r1-Cloud_D                      

50:01:39:70:00:7D:51:13   vhba04     Cloud_D   ovcacn13r1      CN    ovcacn13r1-Cloud_D                      

50:01:39:70:00:7D:51:0F   vhba04     Cloud_D   ovcacn36r1      CN    ovcacn36r1-Cloud_D                      

-----------------

80 rows displayed

 

It is important to distinguish between WWNNs and WWPNs. A WWNN is used to identify a device or node such as an HBA, while a WWPN is used to identify a port that is accessible for that same device. Since some devices can have multiple ports, a device may have a single WWNN and multiple WWPNs.

For CN vHBAs, there is a single WWNN and a single WWPN for each vHBA. Note that only the fourth hexadecimal octet of the WWN differs between the two; e.g., for vhba03 below, WWNN 50:01:39:71:00:7D:51:08 vs. WWPN 50:01:39:70:00:7D:51:08.

pca-admin show vhba-info ovcacn07r1

vHBA_Name       Cloud     WWNN                      WWPN                     

---------       -----     ----                      ----

vhba03          Cloud_C  50:01:39:71:00:7D:51:08   50:01:39:70:00:7D:51:08  

vhba02          Cloud_B  50:01:39:71:00:7D:41:09   50:01:39:70:00:7D:41:09  

vhba01          Cloud_A  50:01:39:71:00:7D:41:08   50:01:39:70:00:7D:41:08  

vhba04          Cloud_D  50:01:39:71:00:7D:51:09   50:01:39:70:00:7D:51:09  
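To capture this for every node in the rack, a small loop over the node names works (a sketch; the node ranges come from the server pool description above, and the zero-padded brace expansion requires bash):

# run as root on the active management node
for n in ovcacn{07..14}r1 ovcacn{26..37}r1 ovcamn{05..06}r1; do
    pca-admin show vhba-info $n
done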

 

OVM> list PhysicalDisk

Command: list PhysicalDisk

Status: Success

Time: 2017-11-14 20:45:20,156 UTC

Data: 

  id:0004fb000018000089acb680613acbb7  name:3600605b00a76d8601e6b20a309121c29

  id:0004fb000018000045e53c34341ddba2  name:3600605b00a7663001e6b1f8c093ed7d1

  id:0004fb000018000071649d0873535c10  name:3600605b00a7690301e6b23120945f79f

  id:0004fb00001800007285822483e7faf9  name:SUN (1)

  id:0004fb000018000075656b0d46cd0f92  name:SUN (2)

  id:0004fb0000180000652ed33b97ce0813  name:3600605b00a76d7d01e6b1fc40920aaa1

  id:0004fb0000180000003ca296f4d63d47  name:3600605b00a76d8401e6b1e5e08eba0e6

  id:0004fb0000180000c7a2053f6a33ab6c  name:3600605b00a7648701e6b1f7209061413

  id:0004fb0000180000c2ce5bf457cb8c3e  name:3600605b00a7644901e6b2092092a2995

  id:0004fb000018000004a154445ea57a30  name:3600605b00a7635001e6b1fed0925240f

  id:0004fb00001800002316100cabe79348  name:EMC VMAX FC LUN07

  id:0004fb00001800005ad30a34ba849e31  name:EMC VMAX FC LUN06

  id:0004fb0000180000ea41971236b070bb  name:EMC VMAX FC LUN05

  id:0004fb0000180000293acf9735f6d443  name:EMC VMAX SATA LUN03

  id:0004fb0000180000a65b1bc3c16c0210  name:EMC VMAX FC LUN02

  id:0004fb0000180000683cff7d90036fe7  name:EMC VMAX FC LUN01

  id:0004fb0000180000a8254d24e27180aa  name:EMC VMAX FAST(Prod) LUN04

  id:0004fb00001800002e04766575ed1315  name:EMC ebsprod fra 01

  id:0004fb0000180000f1e48b8c1465c245  name:EMC ebsprod fra 02

  id:0004fb000018000004d1ab0deb5e4926  name:EMC ebsprod fra 03

  id:0004fb00001800008f2efe35e2c708e5  name:EMC ebsprod fra 04

  id:0004fb0000180000a1c7cfe90681651b  name:EMC ebsprod fra 05

  id:0004fb0000180000ce63d8cc9231123f  name:EMC ebsprod fra 06

  id:0004fb00001800000d857c98406d3dfb  name:EMC ebsprod fra 07

  id:0004fb00001800000946d5a01856ae35  name:EMC ebsprod fra 08

  id:0004fb00001800002f1f8d3690e4a119  name:EMC ebsprod orion 01

  id:0004fb0000180000c68a7d0fa8dab371  name:EMC ebsprod orion 02

  id:0004fb00001800008f9886d751d9ed1a  name:EMC ebsprod redo 01

  id:0004fb00001800000e870c6e72191753  name:EMC ebsprod redo 02

  id:0004fb00001800004321068d7cdb0369  name:EMC ebsprod redo 04

  id:0004fb00001800005bb7782ff2960efb  name:EMC ebsprod redo 03

  id:0004fb00001800003e2e40a8d1376096  name:EMC ebsprod ocrvd 01

  id:0004fb000018000084ccd381a4fa1a24  name:EMC ebsprod ocrvd 02

  id:0004fb00001800002a17b022e6dd05c5  name:EMC ebsprod ocrvd 03

  id:0004fb0000180000c3b24ad7cb408520  name:EMC ebsprod data 01

  id:0004fb000018000076c38f606617f660  name:EMC ebsprod data 02

  id:0004fb000018000032359a9b4a30c1d2  name:EMC ebsprod data 03

  id:0004fb000018000025b0c2eb9914a10b  name:EMC ebsprod data 04

  id:0004fb0000180000679464b498ac424b  name:EMC ebsprod data 05

  id:0004fb0000180000b6f66219e0edb83f  name:EMC ebsprod data 06

  id:0004fb0000180000d290f7cfaf6c2187  name:EMC ebsprod data 07

  id:0004fb0000180000fc5181433a564f81  name:3600605b00a7680f01e6b1f760903237c

  id:0004fb0000180000029437418b4d907b  name:3600605b00a762f001e6b1f7108cfaf4b

  id:0004fb0000180000096c38cbd7a41395  name:3600605b00a7637301e6b20f709059454

  id:0004fb0000180000a836c8251c98965d  name:3600605b00a768ec01e6b211d13b4a617

  id:0004fb0000180000004234fb6dbcdfcd  name:3600605b00a766da01e6b1f9709164a14

  id:0004fb0000180000ed2ab0b0df14f17d  name:3600605b00a768d001e6b1e7208c66eb0

  id:0004fb00001800005f310f01833b9144  name:3600605b00a763b801e6b208508ec64a8

  id:0004fb000018000039fb1f3383585596  name:3600605b00a7663301e6b1e1d0902a0c3

  id:0004fb0000180000a291ab8b56714ce1  name:3600605b00a768ee01e6b212c09b1ce22

  id:0004fb00001800007526bc66e0a68bbe  name:3600605b00a76dc401e6b1fc208d1fab0

  id:0004fb00001800007fb35c0b1749db85  name:3600605b00a7662801e6b22e008cbf7bd

  id:0004fb0000180000e29c37f16074690f  name:3600605b00a7636c01e6b1efa08b958ab

Command: list server

Status: Success

Time: 2017-11-14 20:54:47,324 UTC

Data: 

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:a0:78  name:ovcacn12r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:79:16:0c  name:ovcacn07r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:79:36:d0  name:ovcacn11r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:7a:0d:da  name:ovcacn36r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:7a:59:88  name:ovcacn28r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:a4:5c  name:ovcacn37r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6c:a6  name:ovcacn14r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:79:67:54  name:ovcacn13r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:84:a3:0e  name:ovcacn29r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6f:be  name:ovcacn31r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:79:4a:9e  name:ovcacn10r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:87:15:50  name:ovcacn27r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6a:8a  name:ovcacn32r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:80:10:4e  name:ovcacn34r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:a0:b4  name:ovcacn26r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:a1:44  name:ovcacn33r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:79:14:3e  name:ovcacn35r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:7a:4b:0c  name:ovcacn08r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6f:d6  name:ovcacn30r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6e:b6  name:ovcacn09r1

 

 

Command: list Repository

Status: Success

Time: 2017-11-14 20:55:52,268 UTC

Data: 

  id:0004fb0000030000f6d09c5125f8f99b  name:Rack1-Repository

  id:0004fb0000030000d1969e0ffefed9e3  name:EMC-VMAX-FC-Repo5

  id:0004fb0000030000d55498ec6e5a4470  name:ovcacn27r1-localfsrepo

  id:0004fb0000030000465dbff337acb2b7  name:ovcacn08r1-localfsrepo

  id:0004fb000003000094e850fa4e5b5dc1  name:ovcacn37r1-localfsrepo

  id:0004fb00000300009633c963daed6fbe  name:ovcacn07r1-localfsrepo

  id:0004fb0000030000bdb6e6d7b3c63a39  name:ovcacn26r1-localfsrepo

  id:0004fb0000030000c9f71d6a43cf8ddc  name:EMC-VMAX-FC-Repo1

  id:0004fb00000300009bdea16ab8bfdbe7  name:ovcacn30r1-localfsrepo

  id:0004fb00000300003f9f5da1e76442e6  name:ovcacn11r1-localfsrepo

  id:0004fb0000030000a7d6ca273d18e846  name:EMC-VMAX-FC-Repo6

  id:0004fb0000030000bd9b7812ae267d47  name:EMC-VMAX-SATA-Repo3

  id:0004fb00000300000b710d9f8e03502d  name:ovcacn09r1-localfsrepo

  id:0004fb00000300006f28ce4acad4b952  name:EMC-VMAX-FC-Repo7

  id:0004fb0000030000a0e7f2e6213c04f3  name:ovcacn36r1-localfsrepo

  id:0004fb0000030000c8cc073e70c5c41f  name:EMC-VMAX-FC-Repo2

  id:0004fb00000300001cb34b718c486dd6  name:ovcacn29r1-localfsrepo

  id:0004fb0000030000ffde50c6ec8f06e4  name:ovcacn31r1-localfsrepo

  id:0004fb0000030000903dae0fc220ac45  name:ovcacn10r1-localfsrepo

  id:0004fb000003000031031f7a8b957aa0  name:ovcacn13r1-localfsrepo

  id:0004fb00000300000aa95fabb2b85dc7  name:ovcacn14r1-localfsrepo

  id:0004fb0000030000847307a8e689dda3  name:ovcacn34r1-localfsrepo

  id:0004fb00000300004df4b5d72bb7e3c1  name:ovcacn35r1-localfsrepo

  id:0004fb00000300000d17779831c520ab  name:ovcacn32r1-localfsrepo

  id:0004fb0000030000276054535f2cf66f  name:ovcacn12r1-localfsrepo

  id:0004fb00000300006f1bc814a1dba812  name:EMC-VMAX-FAST(Prod)-Repo4

  id:0004fb0000030000adad162696c02503  name:ovcacn33r1-localfsrepo

  id:0004fb00000300006a283e29a8546139  name:ovcacn28r1-localfsrepo

OVM> list SanServer

Command: list SanServer

Status: Success

Time: 2017-11-14 20:56:02,774 UTC

Data: 

  id:0004fb0000090000c0070fc37e9fe47a  name:OVCA_ZFSSA_Rack1

  id:Unmanaged iSCSI Storage Array  name:Unmanaged iSCSI Storage Array

  id:Unmanaged FibreChannel Storage Array  name:Unmanaged FibreChannel Storage Array

 

OVM> list StorageInitiator

Command: list StorageInitiator

Status: Success

Time: 2017-11-14 20:56:20,541 UTC

Data: 

  id:0x50013970007d4110  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d4111  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5110  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5111  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:4be6a1e5f39e  name:iqn.1988-12.com.oracle:4be6a1e5f39e

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:a0:78  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:a0:78

  id:0x50013970007d4108  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d4109  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5108  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5109  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:b6db9886524  name:iqn.1988-12.com.oracle:b6db9886524

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:79:16:0c  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:79:16:0c

  id:0x50013970007d410c  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d410d  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d510c  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d510d  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:2cfd9cdfab1  name:iqn.1988-12.com.oracle:2cfd9cdfab1

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:79:36:d0  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:79:36:d0

  id:0x50013970007d410e  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d410f  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d510e  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d510f  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:72f8d85d1efc  name:iqn.1988-12.com.oracle:72f8d85d1efc

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:7a:0d:da  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:7a:0d:da

  id:0x50013970007d4118  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d4119  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5118  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5119  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:a5903e2a89f  name:iqn.1988-12.com.oracle:a5903e2a89f

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:7a:59:88  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:7a:59:88

  id:0x50013970007d411c  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d411d  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d511c  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d511d  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:c13f7ca17ee4  name:iqn.1988-12.com.oracle:c13f7ca17ee4

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:a4:5c  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:a4:5c

  id:0x50013970007d4128  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d4129  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5128  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5129  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:c79e2161d338  name:iqn.1988-12.com.oracle:c79e2161d338

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6c:a6  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6c:a6

  id:0x50013970007d4112  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d4113  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5112  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5113  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:e819eb62c9ac  name:iqn.1988-12.com.oracle:e819eb62c9ac

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:79:67:54  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:79:67:54

  id:0x50013970007d4126  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d4127  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5126  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5127  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:bb6edac83fcd  name:iqn.1988-12.com.oracle:bb6edac83fcd

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:84:a3:0e  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:84:a3:0e

  id:0x50013970007d4104  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d4105  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5104  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5105  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:59da9467ef15  name:iqn.1988-12.com.oracle:59da9467ef15

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6f:be  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6f:be

  id:0x50013970007d410a  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d410b  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d510a  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d510b  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:82f2dc9afc61  name:iqn.1988-12.com.oracle:82f2dc9afc61

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:79:4a:9e  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:79:4a:9e

  id:0x50013970007d411e  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d411f  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d511e  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d511f  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:78d33e6c874  name:iqn.1988-12.com.oracle:78d33e6c874

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:87:15:50  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:87:15:50

  id:0x50013970007d4120  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d4121  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5120  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5121  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:d940444ea668  name:iqn.1988-12.com.oracle:d940444ea668

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6a:8a  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6a:8a

  id:0x50013970007d4114  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d4115  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5114  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5115  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:5e907b7089a2  name:iqn.1988-12.com.oracle:5e907b7089a2

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:80:10:4e  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:80:10:4e

  id:0x50013970007d4106  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d4107  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5106  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5107  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:59b9c2229679  name:iqn.1988-12.com.oracle:59b9c2229679

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:a0:b4  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:a0:b4

  id:0x50013970007d4122  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d4123  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5122  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5123  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:9191559ef7c0  name:iqn.1988-12.com.oracle:9191559ef7c0

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:a1:44  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:a1:44

  id:0x50013970007d4124  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d4125  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5124  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5125  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:84a7b614eeb5  name:iqn.1988-12.com.oracle:84a7b614eeb5

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:79:14:3e  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:79:14:3e

  id:0x50013970007d4116  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d4117  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5116  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5117  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:5cd7ad97b52c  name:iqn.1988-12.com.oracle:5cd7ad97b52c

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:7a:4b:0c  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:7a:4b:0c

  id:0x50013970007d411a  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d411b  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d511a  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d511b  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:7ba542ea5198  name:iqn.1988-12.com.oracle:7ba542ea5198

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6f:d6  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6f:d6

  id:0x50013970007d412a  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d412b  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d512a  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d512b  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:a263989acb86  name:iqn.1988-12.com.oracle:a263989acb86

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6e:b6  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6e:b6

 

OVM> list VirtualDisk

Command: list VirtualDisk

Status: Success

Time: 2017-11-14 20:56:37,915 UTC

Data: 

  id:0004fb0000120000f5c7429df92b318d.img  name:admebsd501_boot

  id:0004fb000012000027b273600ca28486.img  name:admebsd501_LUN01

  id:0004fb0000120000f4a6aa71f9ac687d.img  name:admebsd501_LUN02

  id:0004fb0000120000b8d11e181cfa4d22.img  name:admebsd503_LUN01 (2)

  id:0004fb000012000020ba4039152aa40e.img  name:admebsd503_boot (2)

  id:0004fb00001200005d5538775464765a.img  name:admebsd503_LUN02

  id:0004fb0000120000b15748fc16cb38d6.img  name:admavp501_boot

  id:0004fb0000120000692fc2fcc8c9d829.img  name:PCASRV-Java7_LUN01

  id:786df5556a5144609142c95da9cb2496.img  name:system

  id:0004fb0000120000932da51c889aa150.img  name:admracsb201_RACSB4_DATA_01

  id:0004fb0000120000cc66b647fbd2957d.img  name:admracsb201_RACSB4_DATA_02

  id:0004fb000012000055a11b1c8dd5305c.img  name:admracsb201_RACSB4_DATA_03

  id:0004fb00001200007403b46e06fcdc72.img  name:PCASRV-Java6_LUN01

  id:0004fb0000120000252fc76e79018f07.img  name:admracsb201_RACSB4_DATA_04

  id:0004fb000012000074ab4ad505255980.img  name:admnfst601_LUN03

  id:0004fb00001200003c218c9678ba82bd.img  name:AdmOracleLinux6.7_BaseMT_1.1_boot

  id:0004fb00001200005622b5ee67451ce0.img  name:AdmOracleLinux6.7_BaseMT_1.1_LUN01

  id:0004fb0000120000343ee39c7febc12b.img  name:admebst501_LUN01

  id:0004fb00001200004bfe02732e54a59c.img  name:AdmEbsLxAppPoc03_boot

  id:0004fb0000120000e3ea48d4e1f615c5.img  name:admracsb201_RACSB4_REDO_01

  id:0004fb000012000091afdac18d05e52f.img  name:bootdisk

  id:0004fb000012000057bc58afcfad7f7c.img  name:admracsb201_RACSB4_REDO_02

  id:0004fb00001200006630c64c4bcf67a2.img  name:admracsb201_RACSB4_OCRVD_01

  id:0004fb00001200003394f367ef8084d0.img  name:AdmEbsLxAppPoc01_boot

 

Command: list VM

Status: Success

Time: 2017-11-14 21:05:47,987 UTC

Data: 

  id:0004fb00000600003593a5716c5b22bd  name:admavp501

  id:0004fb00000600006ac8200d95a0b83f  name:admebst202

  id:0004fb000006000086a50c53eee394f7  name:admebsd503

  id:0004fb0000060000ebad2ed25c3ff95e  name:admebst502

  id:0004fb00000600008252dfd8bc640872  name:AdmOracleLinux6.7_BaseMT_1.1

  id:0004fb0000060000f587f0449f7b75c9  name:AdmOracleLinux6.7_BaseDB_1.1

  id:0004fb00000600002bb591e7f660ae40  name:AdmOracleLinux6.7_BaseMT

  id:0004fb00000600002c437d74c2761d4d  name:Template_Adm_DB_OL6u7_x86_64_1.1

  id:0004fb0000060000ed58fbd4d2a58094  name:Template_Adm_MT_OL6u7_x86_64_2.0

  id:0004fb0000060000ff12b3a2b5c589dd  name:Template_Adm_RAC_DB_OL6u7_x86_64_2.0

  id:0004fb00000600000b5c4a71002da8cb  name:Template_AdmEbsLxAppPoc02_01

  id:0004fb0000060000d9b57cd388b2b8ca  name:Template_Adm_DB_OL6u7_x86_64_1.0

  id:0004fb000006000099ed04e2800c952e  name:Template_Adm_RAC_DB_OL6u7_x86_64_1.0

  id:0004fb0000060000febd1b0344a74c7d  name:Template_Adm_MT_OL6u7_x86_64_1.0

Command: list VmDiskMapping

Status: Success

Time: 2017-11-14 20:57:47,424 UTC

Data: 

  id:0004fb00001300001c5dbefed4a7a42b  name:0004fb00001300001c5dbefed4a7a42b

  id:0004fb0000130000dcd307ad89c7a0cc  name:0004fb0000130000dcd307ad89c7a0cc

  id:0004fb0000130000815171a045300831  name:0004fb0000130000815171a045300831

  id:0004fb0000130000efc16abda39a8bb1  name:0004fb0000130000efc16abda39a8bb1

  id:0004fb00001300007cfeaab5aa624453  name:0004fb00001300007cfeaab5aa624453

  id:0004fb0000130000846beb032f2c87fc  name:0004fb0000130000846beb032f2c87fc

  id:0004fb0000130000d655da92ab6a9b0f  name:0004fb0000130000d655da92ab6a9b0f

  id:0004fb0000130000f0dcba1f3758cc2c  name:0004fb0000130000f0dcba1f3758cc2c

 …..

./doit.eovmcli2: Generating OVM VM Inventory Report

———- PROCESSING VM=admapxp201

 

Command: show vm name=admapxp201

 

Status: Success

 

Time: 2017-11-17 23:00:36,448 UTC

 

Data:

 

  Name = admapxp201

 

  Id = 0004fb000006000032d9e101be35a66b

 

  Status = Stopped

 

  Memory (MB) = 32768

 

  Max. Memory (MB) = 32768

 

  Max. Processors = 4

 

  Processors = 4

 

  Priority = 50

 

  Processor Cap = 100

 

  High Availability = Yes

 

  Operating System = Oracle Linux 6

 

  Mouse Type = Default

 

  Domain Type = Xen HVM, PV Drivers

 

  Keymap = en-us

 

  Boot Order 1 = Disk

 

  Server = 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6f:d6  [ovcacn30r1]

 

  Repository = 0004fb0000030000a7d6ca273d18e846  [EMC-VMAX-FC-Repo6]

 

  Vnic 1 = 0004fb0000070000aea4da88d7a72625  [00:21:f6:00:00:43]

 

  Vnic 2 = 0004fb00000700001f66c01d8774537a  [00:21:f6:00:00:44]

 

  VmDiskMapping 1 = 0004fb00001300005993debac1c7311c

 

  VmDiskMapping 2 = 0004fb0000130000cbf203ebf15e4fd1

 

  VmDiskMapping 3 = 0004fb00001300001c3b76e0a93313f9

 

  VmDiskMapping 4 = 0004fb00001300003606f24c6cb2315e

 

  tag 1 = 0004fb0000260000633a36e2d8e304be  [Production]

 

vDisk=0004fb00001300005993debac1c7311c

 

Command: show vmdiskmapping id=0004fb00001300005993debac1c7311c

Are You Ready to apply the 12.2.0.1 July RU ???

Here are the steps I went through to apply the Grid Infrastructure Jul 2017 Release Update 12.2.0.1.170718, Patch 26133434.

Configuration: 2-node RAC cluster on a Kaminario K2 AFA

The Grid Infrastructure Jul 2017 Release Update (RU) 12.2.0.1.170718 includes updates for both the Clusterware home and the Database home, and it can be applied in a rolling fashion.
In this blog post we update the GI and DB stacks on both nodes.
The details and execution shown for Node 1 apply to Node 2 as well.
Big thanks to Mike Dietrich for some insight!

Step 1) Upgrade OPatch to at least version 12.2.0.1.7. OPatch must be upgraded in both the GI and DB homes on all nodes.
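OPatch is upgraded by unzipping patch 6880880 over the existing OPatch directory in each home (a sketch; the zip file name varies by platform and version):

# as the home owner (grid or oracle), per home, per node
cd $ORACLE_HOME
mv OPatch OPatch.$(date +%Y%m%d)        # keep the old copy
unzip -q /tmp/p6880880_122010_Linux-x86-64.zip
OPatch/opatch version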

[root@vna02 grid]# cd OPatch

[root@vna02 OPatch]# ./opatch version

OPatch Version: 12.2.0.1.9   <-- Grid Home

OPatch succeeded.

[oracle@vna01 dbhome_1]$ opatch version

OPatch Version: 12.2.0.1.9   <-- Database Home

Step 2) Patch conflict check:

Node 1 : 

[oracle@vna01 GI]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/oracle/software/patches/DB-GI-RU/GI/26133434/26123830

Oracle Interim Patch Installer version 12.2.0.1.9

Copyright (c) 2017, Oracle Corporation.  All rights reserved.
PREREQ session
Oracle Home       : /u01/app/oracle/product/12.2.0/dbhome_1
Central Inventory : /u01/app/oraInventory
from           : /u01/app/oracle/product/12.2.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.9
OUI version       : 12.2.0.1.4
Log file location : /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/opatch2017-09-20_18-43-33PM_1.log
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.

[oracle@vna01 GI]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/oracle/software/patches/DB-GI-RU/GI/26133434/26002778

Oracle Interim Patch Installer version 12.2.0.1.9
Copyright (c) 2017, Oracle Corporation.  All rights reserved.
PREREQ session
Oracle Home       : /u01/app/oracle/product/12.2.0/dbhome_1
Central Inventory : /u01/app/oraInventory
from           : /u01/app/oracle/product/12.2.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.9
OUI version       : 12.2.0.1.4
Log file location : /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/opatch2017-09-20_19-01-04PM_1.log
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.

From the Database Home :

[oracle@vna01 GI]$ . oraenv
ORACLE_SID = [VNADB1] ? VNADB1
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@vna01 GI]$ cd $ORACLE_HOME/OPatch
[oracle@vna01 OPatch]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/oracle/software/patches/DB-GI-RU/GI/26133434/26123830
Oracle Interim Patch Installer version 12.2.0.1.9
Copyright (c) 2017, Oracle Corporation.  All rights reserved.
PREREQ session
Oracle Home       : /u01/app/oracle/product/12.2.0/dbhome_1
Central Inventory : /u01/app/oraInventory
from           : /u01/app/oracle/product/12.2.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.9
OUI version       : 12.2.0.1.4
Log file location : /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/opatch2017-09-20_19-03-12PM_1.log
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.

[oracle@vna01 OPatch]$
[oracle@vna01 OPatch]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/oracle/software/patches/DB-GI-RU/GI/26133434/26002778
Oracle Interim Patch Installer version 12.2.0.1.9
Copyright (c) 2017, Oracle Corporation.  All rights reserved.
PREREQ session
Oracle Home       : /u01/app/oracle/product/12.2.0/dbhome_1
Central Inventory : /u01/app/oraInventory
from           : /u01/app/oracle/product/12.2.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.9
OUI version       : 12.2.0.1.4
Log file location : /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/opatch2017-09-20_19-03-25PM_1.log
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.

One-off Patch Conflict Detection and Resolution

[root@vna01 OPatch]# $ORACLE_HOME/OPatch/opatchauto apply /home/oracle/software/patches/DB-GI-RU/GI/26133434 -analyze

OPatchauto session is initiated at Wed Sep 20 19:53:25 2017
System initialization log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchautodb/systemconfig2017-09-20_07-53-27PM.log.
Session log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/opatchauto2017-09-20_07-53-48PM.log
The id for this session is QWPL
Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/12.2.0/dbhome_1
Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.2.0/grid
Patch applicability verified successfully on home /u01/app/oracle/product/12.2.0/dbhome_1
Patch applicability verified successfully on home /u01/app/12.2.0/grid
Verifying SQL patch applicability on home /u01/app/oracle/product/12.2.0/dbhome_1

Following step failed during analysis:
/bin/sh -c 'ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1 ORACLE_SID=VNADB1 /u01/app/oracle/product/12.2.0/dbhome_1/OPatch/datapatch -prereq'
SQL patch applicability verified successfully on home /u01/app/oracle/product/12.2.0/dbhome_1
OPatchAuto successful.

--------------------------------Summary--------------------------------
Analysis for applying patches has completed successfully:
Host:vna01
RAC Home:/u01/app/oracle/product/12.2.0/dbhome_1

==Following patches were SKIPPED:
Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/25586399
Reason: This patch is not applicable to this specified target type - "rac_database"

==Following patches were SUCCESSFULLY analyzed to be applied:
Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/26002778
Log: /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-20_19-53-51PM_1.log
Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/26123830
Log: /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-20_19-53-51PM_1.log

Host:vna01
CRS Home:/u01/app/12.2.0/grid
==Following patches were SUCCESSFULLY analyzed to be applied:
Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/26002778
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-20_19-53-51PM_1.log
Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/25586399
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-20_19-53-51PM_1.log
Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/26123830
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-20_19-53-51PM_1.log
OPatchauto session completed at Wed Sep 20 19:57:09 2017
Time taken to complete the session 3 minutes, 44 seconds


The analyze phase flagged a failed datapatch -prereq step; in this run it did not block patching, and as the log itself suggests, any SQL changes can be re-checked by manually re-running that command. Now on to the actual OPatchauto apply:

[root@vna01 OPatch]# $ORACLE_HOME/OPatch/opatchauto apply /home/oracle/software/patches/DB-GI-RU/GI/26133434

OPatchauto session is initiated at Wed Sep 20 20:18:27 2017

System initialization log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchautodb/systemconfig2017-09-20_08-18-28PM.log.

Session log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/opatchauto2017-09-20_08-18-50PM.log

The id for this session is CNCU

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.2.0/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/12.2.0/dbhome_1

Patch applicability verified successfully on home /u01/app/oracle/product/12.2.0/dbhome_1

Patch applicability verified successfully on home /u01/app/12.2.0/grid

Verifying SQL patch applicability on home /u01/app/oracle/product/12.2.0/dbhome_1

"/bin/sh -c 'ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1 ORACLE_SID=VNADB1 /u01/app/oracle/product/12.2.0/dbhome_1/OPatch/datapatch -prereq'" command failed with errors. Please refer to logs for more details. SQL changes, if any, can be analyzed by manually retrying the same command.

SQL patch applicability verified successfully on home /u01/app/oracle/product/12.2.0/dbhome_1

Preparing to bring down database service on home /u01/app/oracle/product/12.2.0/dbhome_1

Successfully prepared home /u01/app/oracle/product/12.2.0/dbhome_1 to bring down database service

Bringing down CRS service on home /u01/app/12.2.0/grid

Prepatch operation log file location: /u01/app/oracle/crsdata/vna01/crsconfig/crspatch_vna01_2017-09-20_08-22-15PM.log

CRS service brought down successfully on home /u01/app/12.2.0/grid

Performing prepatch operation on home /u01/app/oracle/product/12.2.0/dbhome_1

Prepatch operation completed successfully on home /u01/app/oracle/product/12.2.0/dbhome_1

Start applying binary patch on home /u01/app/oracle/product/12.2.0/dbhome_1

Binary patch applied successfully on home /u01/app/oracle/product/12.2.0/dbhome_1

Performing postpatch operation on home /u01/app/oracle/product/12.2.0/dbhome_1

Postpatch operation completed successfully on home /u01/app/oracle/product/12.2.0/dbhome_1

Start applying binary patch on home /u01/app/12.2.0/grid

Binary patch applied successfully on home /u01/app/12.2.0/grid

Starting CRS service on home /u01/app/12.2.0/grid

Postpatch operation log file location: /u01/app/oracle/crsdata/vna01/crsconfig/crspatch_vna01_2017-09-20_08-27-01PM.log

CRS service started successfully on home /u01/app/12.2.0/grid

Preparing home /u01/app/oracle/product/12.2.0/dbhome_1 after database service restarted

No step execution required.........

Prepared home /u01/app/oracle/product/12.2.0/dbhome_1 successfully after database service restarted

Trying to apply SQL patch on home /u01/app/oracle/product/12.2.0/dbhome_1

"/bin/sh -c 'ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1 ORACLE_SID=VNADB1 /u01/app/oracle/product/12.2.0/dbhome_1/OPatch/datapatch'" command failed with errors. Please refer to logs for more details. SQL changes, if any, can be applied by manually retrying the same command.

SQL patch applied successfully on home /u01/app/oracle/product/12.2.0/dbhome_1

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:vna01

RAC Home:/u01/app/oracle/product/12.2.0/dbhome_1

Summary:

==Following patches were SKIPPED:

Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/25586399

Reason: This patch is not applicable to this specified target type - "rac_database"



==Following patches were SUCCESSFULLY applied:

Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/26002778

Log: /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-20_20-23-57PM_1.log

Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/26123830

Log: /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-20_20-23-57PM_1.log


Host:vna01

CRS Home:/u01/app/12.2.0/grid

Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/26002778

Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-20_20-24-44PM_1.log

Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/25586399

Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-20_20-24-44PM_1.log

Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/26123830

Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-20_20-24-44PM_1.log

OPatchauto session completed at Wed Sep 20 20:34:23 2017

Time taken to complete the session 15 minutes, 56 seconds
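
Since opatchauto reported a datapatch error during the SQL patch step, it is worth re-running datapatch by hand and confirming the RU is recorded in the SQL patch registry. A quick sketch (run as oracle on node 1):

export ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1
export ORACLE_SID=VNADB1
$ORACLE_HOME/OPatch/datapatch -verbose         # re-run the SQL patch step manually
$ORACLE_HOME/bin/sqlplus -s / as sysdba <<'EOF'
set linesize 200
column description format a60
select patch_id, status, description
from   dba_registry_sqlpatch
order  by action_time;
EOF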


lsInventory Output:

[oracle@vna01 OPatch]$ opatch lsinventory

Oracle Interim Patch Installer version 12.2.0.1.9

Copyright (c) 2017, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/12.2.0/grid

Central Inventory : /u01/app/oraInventory

from           : /u01/app/12.2.0/grid/oraInst.loc

OPatch version    : 12.2.0.1.9

OUI version       : 12.2.0.1.4

Log file location : /u01/app/12.2.0/grid/cfgtoollogs/opatch/opatch2017-09-20_20-38-46PM_1.log



lsinventory Output file location : /u01/app/12.2.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2017-09-20_20-38-46PM.txt

--------------------------------------------------------------------------------

Local Machine Information::

Hostname: vna01

ARU platform id: 226

ARU platform description:: Linux x86-64

Installed Top-level Products (1):

Oracle Grid Infrastructure 12c                                       12.2.0.1.0

There are 1 products installed in this Oracle Home.

Interim patches (3) :

Patch  26123830     : applied on Wed Sep 20 20:26:39 BST 2017

Unique Patch ID:  21405588

Patch description:  "DATABASE RELEASE UPDATE: 12.2.0.1.170718 (26123830)"

Created on 7 Jul 2017, 00:33:59 hrs PST8PDT

Bugs fixed:

23026585, 24336249, 24929210, 24942749, 25036474, 25110233, 25410877

25417050, 25427662, 25459958, 25547901, 25569149, 25600342, 25600421

25606091, 25655390, 25662088, 24385983, 24923215, 25099758, 25429959

25662101, 25728085, 25823754, 22594071, 23665623, 23749454, 24326846

24334708, 24560906, 24573817, 24578797, 24609996, 24624166, 24668398

24674955, 24744686, 24811725, 24827228, 24831514, 24908321, 24976007

25184555, 25210499, 25211628, 25223839, 25262869, 25316758, 25337332

25455795, 25457409, 25539063, 25546608, 25612095, 25643931, 25410017

22729345, 24485174, 24509056, 24714096, 25329664, 25410180, 25607726

25957038, 25973152, 26024732, 24376878, 24589590, 24676172, 23548817

24796092, 24907917, 25044977, 25736747, 25766822, 25856821, 25051628

24534401, 24835919, 25050160, 25395696, 25430120, 25616359, 25715167

25967985

Patch  25586399     : applied on Wed Sep 20 20:26:17 BST 2017

Unique Patch ID:  21306685

Patch description:  "ACFS Patch Set Update : 12.2.0.1.170718 (25586399)"

Created on 16 Jun 2017, 00:35:19 hrs PST8PDT

Bugs fixed:

24679041, 24964969, 25098392, 25078431, 25491831


Patch  26002778     : applied on Wed Sep 20 20:25:26 BST 2017

Unique Patch ID:  21306682

Patch description:  "OCW Patch Set Update : 12.2.0.1.170718 (26002778)"

Created on 3 Jul 2017, 03:26:30 hrs PST8PDT

Bugs fixed:

26144044, 25541343, 25715179, 25493588, 24932026, 24801915, 25832375

25728787, 25825732, 24578464, 25832312, 25742471, 25790699, 25655495

25307145, 25485737, 25505841, 25697364, 24663993, 25026470, 25591658

25537905, 24451580, 25409838, 25371632, 25569634, 25245759, 24665035

25646592, 25025157, 24732650, 24664849, 24584419, 24423011, 24831158

25037836, 25556203, 24464953, 24657753, 25197670, 24796183, 20559126

25197395, 24808260

--------------------------------------------------------------------------------

OPatch succeeded.

[oracle@vna01 OPatch]$

From the Database Home :

[oracle@vna01 OPatch]$ . oraenv

ORACLE_SID = [+ASM1] ? VNADB1

The Oracle base remains unchanged with value /u01/app/oracle

[oracle@vna01 OPatch]$  export PATH=$ORACLE_HOME/OPatch:$PATH

[oracle@vna01 OPatch]$ which opatch

/u01/app/oracle/product/12.2.0/dbhome_1/OPatch/opatch

[oracle@vna01 OPatch]$ opatch lsinventory

Oracle Interim Patch Installer version 12.2.0.1.9

Copyright (c) 2017, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/12.2.0/dbhome_1

Central Inventory : /u01/app/oraInventory

from           : /u01/app/oracle/product/12.2.0/dbhome_1/oraInst.loc

OPatch version    : 12.2.0.1.9

OUI version       : 12.2.0.1.4

Log file location : /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/opatch2017-09-20_20-40-03PM_1.log

lsinventory Output file location : /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2017-09-20_20-40-03PM.txt

--------------------------------------------------------------------------------

Local Machine Information::

Hostname: vna01

ARU platform id: 226

ARU platform description:: Linux x86-64

Installed Top-level Products (1):

Oracle Database 12c                                                  12.2.0.1.0

There are 1 products installed in this Oracle Home.

Interim patches (2) :

Patch  26123830     : applied on Wed Sep 20 20:24:26 BST 2017

Unique Patch ID:  21405588

Patch description:  "DATABASE RELEASE UPDATE: 12.2.0.1.170718 (26123830)"

Created on 7 Jul 2017, 00:33:59 hrs PST8PDT

Bugs fixed:

23026585, 24336249, 24929210, 24942749, 25036474, 25110233, 25410877

25417050, 25427662, 25459958, 25547901, 25569149, 25600342, 25600421

25606091, 25655390, 25662088, 24385983, 24923215, 25099758, 25429959

25662101, 25728085, 25823754, 22594071, 23665623, 23749454, 24326846

24334708, 24560906, 24573817, 24578797, 24609996, 24624166, 24668398

24674955, 24744686, 24811725, 24827228, 24831514, 24908321, 24976007

25184555, 25210499, 25211628, 25223839, 25262869, 25316758, 25337332

25455795, 25457409, 25539063, 25546608, 25612095, 25643931, 25410017

22729345, 24485174, 24509056, 24714096, 25329664, 25410180, 25607726

25957038, 25973152, 26024732, 24376878, 24589590, 24676172, 23548817

24796092, 24907917, 25044977, 25736747, 25766822, 25856821, 25051628

24534401, 24835919, 25050160, 25395696, 25430120, 25616359, 25715167

25967985



Patch  26002778     : applied on Wed Sep 20 20:24:11 BST 2017

Unique Patch ID:  21306682

Patch description:  "OCW Patch Set Update : 12.2.0.1.170718 (26002778)"

Created on 3 Jul 2017, 03:26:30 hrs PST8PDT

Bugs fixed:

26144044, 25541343, 25715179, 25493588, 24932026, 24801915, 25832375

25728787, 25825732, 24578464, 25832312, 25742471, 25790699, 25655495

25307145, 25485737, 25505841, 25697364, 24663993, 25026470, 25591658

25537905, 24451580, 25409838, 25371632, 25569634, 25245759, 24665035

25646592, 25025157, 24732650, 24664849, 24584419, 24423011, 24831158

25037836, 25556203, 24464953, 24657753, 25197670, 24796183, 20559126

25197395, 24808260

--------------------------------------------------------------------------------

OPatch succeeded.

[oracle@vna01 OPatch]$



Node 2:

Run OPatch Conflict Checks

From GI Home:

[oracle@vna02 patches]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/oracle/patches/26133434/26123830

Oracle Interim Patch Installer version 12.2.0.1.9

Copyright (c) 2017, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/app/12.2.0/grid

Central Inventory : /u01/app/oraInventory

from           : /u01/app/12.2.0/grid/oraInst.loc

OPatch version    : 12.2.0.1.9

OUI version       : 12.2.0.1.4

Log file location : /u01/app/12.2.0/grid/cfgtoollogs/opatch/opatch2017-09-20_20-48-20PM_1.log
Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.

[oracle@vna02 patches]$

[oracle@vna02 patches]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/oracle/patches/26133434/26002778

Oracle Interim Patch Installer version 12.2.0.1.9

Copyright (c) 2017, Oracle Corporation.  All rights reserved.
PREREQ session

Oracle Home       : /u01/app/12.2.0/grid

Central Inventory : /u01/app/oraInventory

from           : /u01/app/12.2.0/grid/oraInst.loc

OPatch version    : 12.2.0.1.9

OUI version       : 12.2.0.1.4

Log file location : /u01/app/12.2.0/grid/cfgtoollogs/opatch/opatch2017-09-20_20-48-32PM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.

For the DB Home:

[oracle@vna02 patches]$ export PATH=$ORACLE_HOME/OPatch:$PATH

[oracle@vna02 patches]$ which opatch

/u01/app/oracle/product/12.2.0/dbhome_1/OPatch/opatch

[oracle@vna02 patches]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/oracle/patches/26133434/26123830

Oracle Interim Patch Installer version 12.2.0.1.9

Copyright (c) 2017, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/app/oracle/product/12.2.0/dbhome_1

Central Inventory : /u01/app/oraInventory

from           : /u01/app/oracle/product/12.2.0/dbhome_1/oraInst.loc

OPatch version    : 12.2.0.1.9

OUI version       : 12.2.0.1.4

Log file location : /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/opatch2017-09-20_20-52-24PM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.

[oracle@vna02 patches]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/oracle/patches/26133434/26002778

Oracle Interim Patch Installer version 12.2.0.1.9

Copyright (c) 2017, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/app/oracle/product/12.2.0/dbhome_1

Central Inventory : /u01/app/oraInventory

from           : /u01/app/oracle/product/12.2.0/dbhome_1/oraInst.loc

OPatch version    : 12.2.0.1.9

OUI version       : 12.2.0.1.4

Log file location : /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/opatch2017-09-20_20-52-38PM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.

[oracle@vna02 patches]$

OPatchauto Conflict Analysis (analyze):

[root@vna02 12.2.0]# $ORACLE_HOME/OPatch/opatchauto apply /home/oracle/patches/26133434 -analyze

OPatchauto session is initiated at Thu Sep 21 02:18:32 2017

System initialization log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchautodb/systemconfig2017-09-21_02-18-33AM.log.

Session log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/opatchauto2017-09-21_02-18-53AM.log

The id for this session is NWN8

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.2.0/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/12.2.0/dbhome_1

Patch applicability verified successfully on home /u01/app/oracle/product/12.2.0/dbhome_1

Patch applicability verified successfully on home /u01/app/12.2.0/grid

Verifying SQL patch applicability on home /u01/app/oracle/product/12.2.0/dbhome_1

SQL patch applicability verified successfully on home /u01/app/oracle/product/12.2.0/dbhome_1

OPatchAuto successful.

--------------------------------Summary--------------------------------

Analysis for applying patches has completed successfully:

Host:vna02

RAC Home:/u01/app/oracle/product/12.2.0/dbhome_1

==Following patches were SKIPPED:

Patch: /home/oracle/patches/26133434/25586399

Reason: This patch is not applicable to this specified target type - "rac_database"

==Following patches were SUCCESSFULLY analyzed to be applied:

Patch: /home/oracle/patches/26133434/26002778

Log: /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-21_02-18-56AM_1.log

Patch: /home/oracle/patches/26133434/26123830

Log: /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-21_02-18-56AM_1.log

Host:vna02

CRS Home:/u01/app/12.2.0/grid

==Following patches were SUCCESSFULLY analyzed to be applied:

Patch: /home/oracle/patches/26133434/26002778

Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-21_02-18-56AM_1.log

Patch: /home/oracle/patches/26133434/25586399

Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-21_02-18-56AM_1.log
Patch: /home/oracle/patches/26133434/26123830

Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-21_02-18-56AM_1.log

OPatchauto session completed at Thu Sep 21 02:22:48 2017

Time taken to complete the session 4 minutes, 16 seconds


OPatchauto apply:

[root@vna02 12.2.0]# $ORACLE_HOME/OPatch/opatchauto apply /home/oracle/patches/26133434

OPatchauto session is initiated at Thu Sep 21 02:25:35 2017

System initialization log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchautodb/systemconfig2017-09-21_02-25-36AM.log.

Session log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/opatchauto2017-09-21_02-25-57AM.log

The id for this session is PM1S

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/12.2.0/dbhome_1

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.2.0/grid

Patch applicability verified successfully on home /u01/app/12.2.0/grid

Patch applicability verified successfully on home /u01/app/oracle/product/12.2.0/dbhome_1

Verifying SQL patch applicability on home /u01/app/oracle/product/12.2.0/dbhome_1

SQL patch applicability verified successfully on home /u01/app/oracle/product/12.2.0/dbhome_1

Preparing to bring down database service on home /u01/app/oracle/product/12.2.0/dbhome_1

Successfully prepared home /u01/app/oracle/product/12.2.0/dbhome_1 to bring down database service

Bringing down CRS service on home /u01/app/12.2.0/grid

Prepatch operation log file location: /u01/app/oracle/crsdata/vna02/crsconfig/crspatch_vna02_2017-09-21_02-30-11AM.log

CRS service brought down successfully on home /u01/app/12.2.0/grid

Performing prepatch operation on home /u01/app/oracle/product/12.2.0/dbhome_1

Prepatch operation completed successfully on home /u01/app/oracle/product/12.2.0/dbhome_1

Start applying binary patch on home /u01/app/oracle/product/12.2.0/dbhome_1

Binary patch applied successfully on home /u01/app/oracle/product/12.2.0/dbhome_1

Performing postpatch operation on home /u01/app/oracle/product/12.2.0/dbhome_1

Postpatch operation completed successfully on home /u01/app/oracle/product/12.2.0/dbhome_1

Start applying binary patch on home /u01/app/12.2.0/grid

Binary patch applied successfully on home /u01/app/12.2.0/grid

Starting CRS service on home /u01/app/12.2.0/grid

Postpatch operation log file location: /u01/app/oracle/crsdata/vna02/crsconfig/crspatch_vna02_2017-09-21_02-34-30AM.log

CRS service started successfully on home /u01/app/12.2.0/grid

Preparing home /u01/app/oracle/product/12.2.0/dbhome_1 after database service restarted

No step execution required.........

Prepared home /u01/app/oracle/product/12.2.0/dbhome_1 successfully after database service restarted

Trying to apply SQL patch on home /u01/app/oracle/product/12.2.0/dbhome_1

SQL patch applied successfully on home /u01/app/oracle/product/12.2.0/dbhome_1

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:vna02

RAC Home:/u01/app/oracle/product/12.2.0/dbhome_1

Summary:

==Following patches were SKIPPED:

Patch: /home/oracle/patches/26133434/25586399

Reason: This patch is not applicable to this specified target type - "rac_database"

==Following patches were SUCCESSFULLY applied:

Patch: /home/oracle/patches/26133434/26002778

Log: /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-21_02-31-39AM_1.log

Patch: /home/oracle/patches/26133434/26123830

Log: /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-21_02-31-39AM_1.log

Host:vna02

CRS Home:/u01/app/12.2.0/grid

Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /home/oracle/patches/26133434/26002778

Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-21_02-32-21AM_1.log

Patch: /home/oracle/patches/26133434/25586399

Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-21_02-32-21AM_1.log

Patch: /home/oracle/patches/26133434/26123830

Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-21_02-32-21AM_1.log

OPatchauto session completed at Thu Sep 21 02:41:44 2017

Time taken to complete the session 16 minutes, 9 seconds

[root@vna02 12.2.0]#

lsInventory Checks:

Grid Home Inventory:

[oracle@vna02 ~]$ . oraenv

ORACLE_SID = [oracle] ? +ASM2
The Oracle base has been set to /u01/app/oracle

[oracle@vna02 ~]$ export PATH=$ORACLE_HOME/OPatch:$PATH
[oracle@vna02 ~]$ which opatch
/u01/app/12.2.0/grid/OPatch/opatch

[oracle@vna02 ~]$ opatch lsinventory

Oracle Interim Patch Installer version 12.2.0.1.9
Copyright (c) 2017, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/12.2.0/grid
Central Inventory : /u01/app/oraInventory
from           : /u01/app/12.2.0/grid/oraInst.loc
OPatch version    : 12.2.0.1.9
OUI version       : 12.2.0.1.4
Log file location : /u01/app/12.2.0/grid/cfgtoollogs/opatch/opatch2017-09-21_02-44-21AM_1.log
Lsinventory Output file location : /u01/app/12.2.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2017-09-21_02-44-21AM.txt

--------------------------------------------------------------------------------
Local Machine Information::
Hostname: vna02
ARU platform id: 226
ARU platform description:: Linux x86-64
Installed Top-level Products (1):
Oracle Grid Infrastructure 12c                                       12.2.0.1.0
There are 1 products installed in this Oracle Home.

Interim patches (3) :
Patch  26123830     : applied on Thu Sep 21 02:34:08 BST 2017
Unique Patch ID:  21405588
Patch description:  "DATABASE RELEASE UPDATE: 12.2.0.1.170718 (26123830)"
Created on 7 Jul 2017, 00:33:59 hrs PST8PDT

Bugs fixed:
23026585, 24336249, 24929210, 24942749, 25036474, 25110233, 25410877

25417050, 25427662, 25459958, 25547901, 25569149, 25600342, 25600421

25606091, 25655390, 25662088, 24385983, 24923215, 25099758, 25429959

25662101, 25728085, 25823754, 22594071, 23665623, 23749454, 24326846

24334708, 24560906, 24573817, 24578797, 24609996, 24624166, 24668398

24674955, 24744686, 24811725, 24827228, 24831514, 24908321, 24976007

25184555, 25210499, 25211628, 25223839, 25262869, 25316758, 25337332

25455795, 25457409, 25539063, 25546608, 25612095, 25643931, 25410017

22729345, 24485174, 24509056, 24714096, 25329664, 25410180, 25607726

25957038, 25973152, 26024732, 24376878, 24589590, 24676172, 23548817

24796092, 24907917, 25044977, 25736747, 25766822, 25856821, 25051628

24534401, 24835919, 25050160, 25395696, 25430120, 25616359, 25715167

25967985



Patch  25586399     : applied on Thu Sep 21 02:33:51 BST 2017

Unique Patch ID:  21306685

Patch description:  "ACFS Patch Set Update : 12.2.0.1.170718 (25586399)"

Created on 16 Jun 2017, 00:35:19 hrs PST8PDT

Bugs fixed:

24679041, 24964969, 25098392, 25078431, 25491831



Patch  26002778     : applied on Thu Sep 21 02:33:01 BST 2017

Unique Patch ID:  21306682

Patch description:  "OCW Patch Set Update : 12.2.0.1.170718 (26002778)"

Created on 3 Jul 2017, 03:26:30 hrs PST8PDT

Bugs fixed:

26144044, 25541343, 25715179, 25493588, 24932026, 24801915, 25832375

25728787, 25825732, 24578464, 25832312, 25742471, 25790699, 25655495

25307145, 25485737, 25505841, 25697364, 24663993, 25026470, 25591658

25537905, 24451580, 25409838, 25371632, 25569634, 25245759, 24665035

25646592, 25025157, 24732650, 24664849, 24584419, 24423011, 24831158

25037836, 25556203, 24464953, 24657753, 25197670, 24796183, 20559126

25197395, 24808260

--------------------------------------------------------------------------------

OPatch succeeded.

[oracle@vna02 ~]$

DBHome Inventory:

[oracle@vna02 ~]$ export PATH=$ORACLE_HOME/OPatch:$PATH

[oracle@vna02 ~]$ which opatch

/u01/app/oracle/product/12.2.0/dbhome_1/OPatch/opatch

[oracle@vna02 ~]$ opatch lsinventory

Oracle Interim Patch Installer version 12.2.0.1.9

Copyright (c) 2017, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/12.2.0/dbhome_1

Central Inventory : /u01/app/oraInventory

from           : /u01/app/oracle/product/12.2.0/dbhome_1/oraInst.loc

OPatch version    : 12.2.0.1.9

OUI version       : 12.2.0.1.4

Log file location : /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/opatch2017-09-21_02-45-58AM_1.log

Lsinventory Output file location : /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2017-09-21_02-45-58AM.txt

--------------------------------------------------------------------------------

Local Machine Information::

Hostname: vna02

ARU platform id: 226

ARU platform description:: Linux x86-64

Installed Top-level Products (1):

Oracle Database 12c                                                  12.2.0.1.0

There are 1 products installed in this Oracle Home.

Interim patches (2) :

Patch  26123830     : applied on Thu Sep 21 02:32:03 BST 2017

Unique Patch ID:  21405588

Patch description:  "DATABASE RELEASE UPDATE: 12.2.0.1.170718 (26123830)"

Created on 7 Jul 2017, 00:33:59 hrs PST8PDT

Bugs fixed:

23026585, 24336249, 24929210, 24942749, 25036474, 25110233, 25410877

25417050, 25427662, 25459958, 25547901, 25569149, 25600342, 25600421

25606091, 25655390, 25662088, 24385983, 24923215, 25099758, 25429959

25662101, 25728085, 25823754, 22594071, 23665623, 23749454, 24326846

24334708, 24560906, 24573817, 24578797, 24609996, 24624166, 24668398

24674955, 24744686, 24811725, 24827228, 24831514, 24908321, 24976007

25184555, 25210499, 25211628, 25223839, 25262869, 25316758, 25337332

25455795, 25457409, 25539063, 25546608, 25612095, 25643931, 25410017

22729345, 24485174, 24509056, 24714096, 25329664, 25410180, 25607726

25957038, 25973152, 26024732, 24376878, 24589590, 24676172, 23548817

24796092, 24907917, 25044977, 25736747, 25766822, 25856821, 25051628

24534401, 24835919, 25050160, 25395696, 25430120, 25616359, 25715167

25967985
Patch  26002778     : applied on Thu Sep 21 02:31:51 BST 2017

Unique Patch ID:  21306682

Patch description:  "OCW Patch Set Update : 12.2.0.1.170718 (26002778)"

Created on 3 Jul 2017, 03:26:30 hrs PST8PDT

Bugs fixed:

26144044, 25541343, 25715179, 25493588, 24932026, 24801915, 25832375

25728787, 25825732, 24578464, 25832312, 25742471, 25790699, 25655495

25307145, 25485737, 25505841, 25697364, 24663993, 25026470, 25591658

25537905, 24451580, 25409838, 25371632, 25569634, 25245759, 24665035

25646592, 25025157, 24732650, 24664849, 24584419, 24423011, 24831158

25037836, 25556203, 24464953, 24657753, 25197670, 24796183, 20559126

25197395, 24808260

--------------------------------------------------------------------------------

OPatch succeeded.

[oracle@vna02 ~]$
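
With both nodes patched, a quick way to confirm the RU is active cluster-wide is to check the Clusterware patch level; a small sketch (run from the GI home, /u01/app/12.2.0/grid in this cluster):

/u01/app/12.2.0/grid/bin/crsctl query crs activeversion -f   # cluster active version and patch level
/u01/app/12.2.0/grid/bin/crsctl query crs softwarepatch      # patch level of the local node's GI software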

 

ACFS Snapshot – A Walk Through

This blog explores some of the new 12.2 ACFS features.  We will walk through the ACFS snapshot process flow:

 
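The walkthrough below starts with a snapshot that already exists. For completeness, the initial read-only snapshot would have been created with something like this (a sketch based on the snap create syntax shown at the end of this section):

acfsutil snap create -r just_before_load /acfsmounts/acfsdata   # -r = read-only snapshot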

[oracle@oracle122 log]$ acfsutil snap info /acfsmounts/acfsdata/

snapshot name:               just_before_load

snapshot location:           /acfsmounts/acfsdata/.ACFS/snaps/just_before_load

RO snapshot or RW snapshot:  RO

parent name:                 /acfsmounts/acfsdata/

snapshot creation time:      Wed Mar 22 20:36:09 2017

storage added to snapshot:   8650752   (   8.25 MB )

number of snapshots:  1

snapshot space usage: 8704000  (   8.30 MB )

[oracle@oracle122 log]$ du -sk .

18292  .


[oracle@oracle122 log]$ acfsutil snap create -w -p just_before_load just_about_batch_upload /acfsmounts/acfsdata/

acfsutil snap create: Snapshot operation is complete.

[oracle@oracle122 log]$ acfsutil snap info /acfsmounts/acfsdata

snapshot name:               just_before_load

snapshot location:           /acfsmounts/acfsdata/.ACFS/snaps/just_before_load

RO snapshot or RW snapshot:  RO

parent name:                 /acfsmounts/acfsdata

snapshot creation time:      Wed Mar 22 20:36:09 2017

storage added to snapshot:   8650752   (   8.25 MB )

snapshot name:               just_about_batch_upload

snapshot location:           /acfsmounts/acfsdata/.ACFS/snaps/just_about_batch_upload

RO snapshot or RW snapshot:  RW

parent name:                 just_before_load

snapshot creation time:      Wed Mar 22 20:42:56 2017

storage added to snapshot:   8650752   (   8.25 MB )

[root@oracle122 ~]# acfsutil compress on /acfsmounts/acfsdata/log/wtf

acfsutil compress on: ACFS-05518: /acfsmounts/acfsdata/log/wtf is not an ACFS mount point
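
As the error indicates, acfsutil compress on operates on an ACFS mount point rather than on an individual file or directory, so the corrected invocation would be:

acfsutil compress on /acfsmounts/acfsdata    # enable compression for the whole file system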

[root@oracle122 ~]# acfsutil compress info /acfsmounts/acfsdata/log/wtf

The file /acfsmounts/acfsdata/log/wtf is not compressed.

[root@oracle122 ~]# acfsutil compress info /acfsmounts/acfsdata/log/nitin

nitin             nitin_compressed 

[root@oracle122 ~]# acfsutil compress info /acfsmounts/acfsdata/log/nitin_compressed

Compression Unit size: 32768

Disk storage used:   (  60.00 KB )

Disk storage saved:  (   7.75 MB )

Storage used is 1% of what the uncompressed file would use.

File is not scheduled for asynchronous compression.

[oracle@oracle122 log]$ ls -l lastlog*

-rw-r--r--. 1 oracle oracle 145708 Mar 22 12:07 lastlog

-rw-r--r--. 1 oracle oracle 145708 Mar 23 05:49 lastlog_compressed

[oracle@oracle122 log]$

[root@oracle122 ~]# acfsutil compress info /acfsmounts/acfsdata/log/lastlog_compressed

Compression Unit size: 32768

Disk storage used:   (  32.00 KB )

Disk storage saved:  ( 110.29 KB )

Storage used is 22% of what the uncompressed file would use.

File is not scheduled for asynchronous compression.

If you are curious about the other snapshot options... then look below!

[oracle@oracle122 log]$ acfsutil snap -h

 Command Subcmd    Arguments

--------------- --------- ------------------------------------------

snap create    [-w|-r|-c] [-p parent_snap_name] <snap_name> <mountpoint>

snap create    [-w]                      - create a writeable snapshot

snap create    [-r]                      - create a read-only snapshot

snap create                                This is the default behavior

snap create    [-c]                      - create a writable snapshot of a

snap create                                snap duplicate target

snap create    [-p parent_snap_name]     - create a snapshot from a snapshot

snap delete    <snap_name> <mountpoint> - delete a file system snapshot

snap rename    <old_snap_name> <new_snap_name> <mountpoint>

snap rename                             - rename a file system snapshot

snap convert   -w|-r <snap_name> <mountpoint>

snap convert   -w                       - convert to a writeable snapshot

snap convert   -r                       - convert to a read-only snapshot

snap info      [-t] [<snap_name>] <mountpoint>

snap info                    - get information about snapshots

snap info      [-t]          - display family tree starting at next name given

snap info      [<snap_name>] - snapshot name

snap info      <mountpoint>  - mount point

snap remaster  {<snap_name> | -c} <volume_path>

snap remaster                           - make the specified snapshot

snap remaster                             the master file system.  The

snap remaster                             current master and all other

snap remaster                             snapshots will be deleted.

snap remaster                             WARNING: This operation cannot

snap remaster                             be reversed.  Admin privileges

snap remaster                             are required.  The file system

snap remaster                             must be unmounted on all nodes.

snap remaster                             The file system must not have

snap remaster                             Replication running.

snap remaster  [-c]                     - Continue an interrupted snapshot

snap remaster                             remastering.  Use the -c option,

snap remaster                             instead of the <snap_name>, to

 snap remaster                             complete an interrupted

snap remaster                             snapshot remastering.

snap remaster  [-f]                     - Force the snapshot remastering.

 snap duplicate apply     [-b] [-d {0..6}] [<snap_name>] <mountpoint>

 snap duplicate apply     -b                       - maintain backup snapshot

 snap duplicate apply     [-d {0..6}]              - set trace level for debugging

 snap duplicate apply     [<snap_name>]            - target snapshot

 snap duplicate apply     <mountpoint>             - mount point for target site

 snap duplicate create    [-r] [-i oldsnapname] [-d {0..6}] <newsnapname> <mountpoint>

 snap duplicate create    [-r]              - restart of data stream

 snap duplicate create    [-p parentsnap]   - parent snap for base site

 snap duplicate create    [-i oldsnapname]  - old snapshot name

 snap duplicate create    [-d {0..6}]       - set trace level for debugging

 snap duplicate create    <newsnapname>     - new snapshot name

 snap duplicate create    <mountpoint>      - mount point for base site

 snap quota     [[-|+]nnn[K|M|G|T|P]]<snap_name> <mountpoint>

 snap quota                              - set quota for snapshot

 

Grid Infrastructure and RAC 12.2 New Features – a Recap

The following list covers the new 12.2 Oracle RAC and Grid Infrastructure features that I believe to be the most interesting. It is a personal list; I apologize to the RAC Dev team if I left out any features.

Streamlined Grid Infrastructure Installation

12.2 Grid Infrastructure software is available as an image file for download and installation. The key objective of this feature was to enable a simpler and quicker installation of Grid Infrastructure. Administrators simply prep the system by creating a new Grid home directory and the appropriate users, permissions and kernel settings. Once that is done, admins extract the image file into the newly created Grid home and execute the gridSetup.sh script, which invokes the setup wizard to register the Oracle Grid Infrastructure stack with the Oracle inventory. This installation approach can be used for Oracle Grid Infrastructure for Cluster and Standalone Server configurations. The new software installation improves large-scale deployment automation as well as deployment of customized images, Patch Set Updates (PSUs) and patches.
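
As a rough sketch, an image-based 12.2 GI installation boils down to the following (paths, ownership and zip name are illustrative):

# as root: create and hand over the new Grid home
mkdir -p /u01/app/12.2.0/grid
chown grid:oinstall /u01/app/12.2.0/grid

# as the grid user: extract the image and launch the setup wizard
cd /u01/app/12.2.0/grid
unzip -q /stage/linuxx64_12201_grid_home.zip
./gridSetup.sh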

Real Application Clusters Reader Nodes

In 12.2, Oracle extended the capability of Flex Clusters by introducing Reader nodes. Reader nodes are Leaf nodes (in a Flex Cluster) that run read-only RAC database instances. The Reader nodes are not affected by RAC reconfigurations caused by node evictions or other cluster membership changes, as long as the Hub Node to which they are connected is part of the cluster. Reader nodes allow users to create huge reader farms (up to 64 reader nodes per Hub Node), thus enabling massive parallel processing. In this architecture, updates made on the read/write instances (running on Hub Nodes) are immediately propagated to the read-only instances on the Leaf Nodes, where they can be used for online reporting or instantaneous queries. Users can create services to direct queries to the read-only instances running on reader nodes.

Service-Oriented Buffer Cache Access

RAC services, which are used to allocate and distribute workloads across RAC instances, are the cornerstone of RAC workload management. There is a strong relationship between a RAC service, a specific workload, and the database objects it accesses. With 12.2 RAC, a service-oriented buffer cache access feature was introduced to improve scale and performance by optimizing instance and node buffer cache affinity. This is done by caching or pre-warming instances with data blocks for objects accessed where a service is expected to run.

Server Weight-Based Node Eviction

When there is a split-brain, or whenever a node eviction decision must be made, the decision was traditionally based on the age, or duration, of the nodes in the cluster; i.e., nodes with a larger uptime in the cluster would survive. In 12.2 RAC, server weight-based node eviction uses a more intelligent tie-breaker mechanism to evict a particular node or group of nodes from the cluster. The server weight-based node eviction feature introspects the current load on the servers as part of the decision. Two principal mechanisms, a system-inherent automatic mechanism and a user-input-based mechanism, are used to provide guidance.
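
As a sketch of the user-input side, survivors can be weighted via the CSS_CRITICAL attribute at the server or database level (the testdb name is reused from the example below; the clusterware stack must be restarted for the server-level setting to take effect):

crsctl set server css_critical yes                   # favor this node in a tie-break
srvctl modify database -db testdb -css_critical yes  # favor the nodes hosting this database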

Load-Aware Resource Placement

Load-aware resource placement prevents overloading a server with more database instances than it is capable of running. The metrics used to determine whether an application can be started on a given server are based on the expected resource consumption of the application and on the capacity of the server in terms of CPU and memory. Administrators can declare database resource requirements such as CPU (cpu_count) and memory (memory_target) to Clusterware. Clusterware uses this information to place database instances only on servers that have a sufficient number of CPUs, a sufficient amount of memory, or both.

srvctl modify database -db testdb -cpucount 8 -memorytarget 64g

Hang Manager

The Hang Manager feature first became available in 11gR1. In that initial version, Hang Manager evaluated and identified system hangs, then dumped the relevant information (the "wait-for graph") into a trace file. In 12.2, Hang Manager takes action and attempts to resolve the system hang. An ORA-32701 error message is logged in the alert log to reflect the hang resolution. Hang Manager runs in both single-instance and Oracle RAC database instances. It is also constantly aware of processes running in reader node instances and checks whether any of them are blocking progress on Hub Nodes, taking action if possible.

Separation of Duty for Administering RAC Clusters

12.2 RAC introduces a new administrative privilege called SYSRAC. This privilege is used by the Clusterware agent and removes the need to use the SYSDBA privilege for RAC administrative tasks, thus reducing the reliance on SYSDBA on production systems. Note that SYSRAC is the default mode in which the Clusterware agent connects to the database, e.g., when executing RAC utilities such as SRVCTL.

Rapid Home Provisioning of Oracle Software

Rapid Home Provisioning enables you to create clusters and to provision, patch, and upgrade Oracle Grid Infrastructure and Oracle Database homes. 11.2 clusters, applications, and middleware can also be provisioned with Rapid Home Provisioning.

Extended Clusters

In 12.2 GI, administrators can create an extended RAC cluster across two or more geographically separate sites. Each site includes a set of servers with its own storage. If one site fails, the other site acts as an active standby. 12.2 extended clusters can be built at initial installation or converted from an existing (non-Flex ASM) cluster using the ConvertToExtended script.

De-support of OCR and Voting Files on Shared Filesystem

In Grid Infrastructure 12.2, placing the Oracle Clusterware files (the Oracle Cluster Registry (OCR) and the voting files) directly on a shared file system is desupported; only ASM or NFS is supported. If you need to use a supported shared file system, either a Network File System or a shared cluster file system, instead of native disk devices, then you must create Oracle ASM disks on the supported network file systems that you plan to use for hosting Oracle Clusterware files before installing Oracle Grid Infrastructure. You can then use the Oracle ASM disks in an Oracle ASM disk group to manage the Oracle Clusterware files. If your Oracle Database files are stored on a shared file system, you can continue to use shared file system storage for the database files, instead of moving them to Oracle ASM storage.

Clonewars – Next Gen Cloning with Oracle 12.2 Multitenancy (Part Deux)… With a Sprinkle of PDB Refresh

 

This is Part 2 on the remote [PDB] cloning capabilities of Oracle 12.2 Multitenant.

Cloning Example 2: Remote clone copy from an existing CDB/PDB into a local PDB (PDB->PDB). In this example, "darkside" is the remote CDB, "darthmaul" is the source/remote PDB, and "yoda" is the local target PDB.

 

SQL> select database_name from v$database;

DATABASE_NAME
--------------------------------------------------------
DARKSIDE

darkside$SQL> alter pluggable database darthmaul open;
Pluggable database altered.

SQL> select name, open_mode from v$pdbs;

NAME       OPEN_MODE
---------- ----------
PDB$SEED   READ ONLY
DARTHMAUL  READ WRITE

darkside$SQL> archive log list ;
Database log mode            Archive Mode
Automatic archival           Enabled
Archive destination          USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     1
Next log sequence to archive   3
Current log sequence         3

darkside$SQL> select name, open_mode from v$database;
NAME     OPEN_MODE
--------- --------------------
DARKSIDE  READ WRITE

darkside$SQL> COLUMN property_name FORMAT A30
COLUMN property_value FORMAT A30
SELECT property_name, property_value
FROM   database_properties
WHERE  property_name = 'LOCAL_UNDO_ENABLED'; 
PROPERTY_NAME                PROPERTY_VALUE
------------------------------ ------------------------------
LOCAL_UNDO_ENABLED           TRUE


$ cat darkside_create_remote_clone_user.sql
create user c##darksidecloneuser identified by cloneuser123 container=ALL;
grant create session, create pluggable database to c##darksidecloneuser  container=ALL;

$ cat darkside_db_link.sql
create database link darksideclone_link
CONNECT TO c##darksidecloneuser IDENTIFIED BY cloneuser123 USING 'darkside';

Nishan$SQL> select db_link, host from dba_db_links;

DB_LINK             HOST
------------------  ---------------------------
SYS_HUB             SEEDDATA
REMOTECLONELINK     hansolo
DARKSIDECLONE_LINK  darkside

darkside$SQL> select name from v$datafile;
NAME
-------------------------------------------------------------------------------

+ACFSDATA/DARKSIDE/4E63D836FC5AF80DE053B214A8C07E55/DATAFILE/system.276.942656929

+ACFSDATA/DARKSIDE/4E63D836FC5AF80DE053B214A8C07E55/DATAFILE/sysaux.277.942656929

+ACFSDATA/DARKSIDE/4E63D836FC5AF80DE053B214A8C07E55/DATAFILE/undotbs1.275.942656929

+ACFSDATA/DARKSIDE/4E63D836FC5AF80DE053B214A8C07E55/DATAFILE/users.279.942657041

+ACFSDATA/DARKSIDE/4E63D836FC5AF80DE053B214A8C07E55/DATAFILE/rey.291.942877803

+ACFSDATA/DARKSIDE/4E63D836FC5AF80DE053B214A8C07E55/DATAFILE/luke.292.942877825

darkside$SQL> show con_name
CON_NAME
-----------------------------
DARTHMAUL


darkside$SQL> create table foofighters tablespace rey as select * from obj$;
Table created.

Nishan$SQL> create pluggable database yoda from darthmaul@DARKSIDECLONE_LINK;

Pluggable database created.

Nishan$SQL> alter session set container = yoda;
Session altered.

yoda$SQL> select name, open_mode from v$pdbs;
NAME                    OPEN_MODE
----------------------------------------
YODA                   MOUNTED

yoda$SQL> select name from v$datafile;
NAME
--------------------------------------------------------------------------------
+DATA/NISHAN/4E82D233272C7273E0538514A8C00DF3/DATAFILE/system.310.942878321
+DATA/NISHAN/4E82D233272C7273E0538514A8C00DF3/DATAFILE/sysaux.311.942878321
+DATA/NISHAN/4E82D233272C7273E0538514A8C00DF3/DATAFILE/undotbs1.309.942878321
+DATA/NISHAN/4E82D233272C7273E0538514A8C00DF3/DATAFILE/users.306.942878319
+DATA/NISHAN/4E82D233272C7273E0538514A8C00DF3/DATAFILE/rey.307.942878319
+DATA/NISHAN/4E82D233272C7273E0538514A8C00DF3/DATAFILE/luke.308.942878319


Now on to refreshing the PDB. (Note: ALTER PLUGGABLE DATABASE REFRESH only works against a PDB that was created as a refreshable clone, i.e. with a REFRESH MODE MANUAL or REFRESH MODE EVERY n MINUTES clause, and the PDB must be closed, or opened read-only, at refresh time.)
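
A minimal sketch of declaring the clone refreshable at creation time (assumption: REFRESH MODE MANUAL; names reused from the example above):

$ sqlplus / as sysdba <<'EOF'
-- assumption: yoda is created as a refreshable clone so that
-- ALTER PLUGGABLE DATABASE REFRESH (used below) is permitted
create pluggable database yoda from darthmaul@DARKSIDECLONE_LINK
  refresh mode manual;
EOF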

SQL> create table foofighters tablespace rey as select * from obj$;

Table created.

SQL> select segment_name from dba_segments where tablespace_name = 'REY';

SEGMENT_NAME
----------------------------------------------------------------
FOOFIGHTERS

SQL> select name, open_mode from v$pdbs;

NAME            OPEN_MODE
--------------- ----------
PDB$SEED        READ ONLY
OBIWAN          READ WRITE
FORCEAWAKENS    MOUNTED
YODA            MOUNTED

SQL> alter pluggable database yoda open read only;

Pluggable database altered.

SQL> select segment_name from dba_segments where tablespace_name = 'REY';

no rows selected

SQL> alter session set container = yoda;

Session altered.

SQL> ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;

Pluggable database altered.

SQL> ALTER PLUGGABLE DATABASE refresh;

Pluggable database altered.

SQL> select segment_name from dba_segments where tablespace_name = 'REY';
select segment_name from dba_segments where tablespace_name = 'REY'
ERROR at line 1:
ORA-01219: database or pluggable database not open: queries allowed on fixed
tables or views only

SQL> ALTER PLUGGABLE DATABASE open read only;

Pluggable database altered.

SQL> select segment_name from dba_segments where tablespace_name = 'REY';

SEGMENT_NAME
-----------------------------------------------------
FOOFIGHTERS


Clonewars – Next Gen Cloning with Oracle 12.2 Multitenancy (Part Un)

In this blog, we will walk through the Oracle 12.2 remote PDB cloning feature. Remote cloning was also available in Oracle 12.1; however, it required placing the production database (which is usually the source) in read-only mode, which made the feature very inefficient to leverage. In 12.2, it is now possible to keep the production database in read-write mode and take an online copy of the database; this is referred to as a "hot clone". The distinction between a hot clone and a cold clone is only relevant for customers running 12.1 Multitenancy; as of 12.2, all clones are hot clones unless the source database is explicitly closed.

We will illustrate this with two real-world examples; just the names have been changed to protect the extremely innocent. And sorry about the Star Wars references... just couldn't help myself!

Note, for clarity: the remote DB is the source database that will be cloned, and the local DB is the CDB that the PDB will be cloned into.

Cloning Example 1: Remote clone copy from an existing non-CDB into a local PDB (non-CDB->PDB). In this example, "hansolo" is the remote non-CDB (the source).

Cloning Example 2: Remote clone copy from an existing CDB/PDB into a local PDB (PDB->PDB). In this example, "darkside" is the CDB, "obiwan" is the source PDB, and the local target PDB lives in "nishan".

Cloning Example 1

Prep work and validation
 Hansolo$SQL> startup
 ORACLE instance started.
 Total System Global Area 2483027968 bytes
 Fixed Size 8795808 bytes
 Variable Size 637536608 bytes
 Database Buffers 1610612736 bytes
 Redo Buffers 7979008 bytes
 In-Memory Area 218103808 bytes
 Database mounted.
 Database opened.
Hansolo$SQL> select database_name from v$database;
 DATABASE_NAME
 ------------------------------------------------------
 HANSOLO

Nishan$SQL> select name from v$pdbs;
 NAME
 ------------------------------------------------------------------------------
 PDB$SEED
 OBIWAN

In 12.2, each PDB has its own undo tablespace.
This new undo management configuration is called local undo mode, and it is the underlying
design for many of the portability features in 12.2. Local undo is the default for greenfield/fresh 12.2 installs;
for upgrades to 12.2, shared undo will need to be converted to local (we won't cover that here).

Hansolo$SQL> SELECT property_name, property_value FROM database_properties WHERE property_name = 'LOCAL_UNDO_ENABLED';
PROPERTY_NAME PROPERTY_VALUE
------------------------------ ------------------------------
LOCAL_UNDO_ENABLED TRUE

Hansolo$SQL> archive log list
 Database log mode Archive Mode
 Automatic archival Enabled
 Archive destination USE_DB_RECOVERY_FILE_DEST
 Oldest online log sequence 118
 Next log sequence to archive 120
 Current log sequence 120

Hansolo$SQL> select name, open_mode from v$database
NAME OPEN_MODE
--------- --------------------
HANSOLO READ WRITE

Hansolo$SQL> create tablespace kyloren datafile size 20M;

Tablespace created.

Hansolo$SQL> create tablespace MazKanata datafile size 20M;

Tablespace created.

Hansolo$SQL> select tablespace_name from dba_tablespaces;

TABLESPACE_NAME
------------------------------
 SYSTEM
 SYSAUX
 UNDOTBS1
 TEMP
 USERS
 KYLOREN
 MAZKANATA

Hansolo$SQL> select current_scn from v$database;

CURRENT_SCN
 -----------
 27506427

$ cat hansolo_create_remoteclone.sql
 CREATE USER cloneuser IDENTIFIED BY cloneuser123;
 GRANT CREATE SESSION, CREATE PLUGGABLE DATABASE TO cloneuser;

Hansolo$SQL>@hansolo_create_remoteclone.sql

Verify user connection

Hansolo$SQL> connect cloneuser/cloneuser123;
 Connected.

Now, prep the source environment

Nishan$SQL> select database_name from v$database;

DATABASE_NAME
-------------------------------------------------
NISHAN

Create DBLink to hansolo from nishan

$ cat pdbclone_dblink.sql
CREATE DATABASE LINK remoteclonelink CONNECT TO cloneuser IDENTIFIED BY
cloneuser123 USING 'hansolo';

Nishan$SQL> @pdbclone_dblink.sql

Nishan$SQL> select db_link, host from dba_db_links;
DB_LINK            HOST
----------------  -----------------
SYS_HUB           SEEDDATA 
REMOTECLONELINK   hansolo 

Verify the cloneuser connection to hansolo

$ sqlplus cloneuser/cloneuser123@hansolo

Nishan$SQL> create pluggable database forceawakens from non$cdb@REMOTECLONELINK;

Pluggable database created.

Nishan$SQL> alter session set container = FORCEAWAKENS;
Session altered.

forceawakens$SQL> select name, open_mode from v$pdbs;
 NAME           OPEN_MODE
---------       ----------------------
FORCEAWAKENS    MOUNTED

forceawakens$SQL> select name from v$datafile;
NAME
-------------------------------------------------------------------------------
 +DATA/NISHAN/4E6DBABFDE2EBBECE0538514A8C0247B/DATAFILE/system.302.942700581
 +DATA/NISHAN/4E6DBABFDE2EBBECE0538514A8C0247B/DATAFILE/sysaux.301.942700581
 +DATA/NISHAN/4E6DBABFDE2EBBECE0538514A8C0247B/DATAFILE/undotbs1.300.942700581
 +DATA/NISHAN/4E6DBABFDE2EBBECE0538514A8C0247B/DATAFILE/users.297.942700581
 +DATA/NISHAN/4E6DBABFDE2EBBECE0538514A8C0247B/DATAFILE/kyloren.298.942700581
 +DATA/NISHAN/4E6DBABFDE2EBBECE0538514A8C0247B/DATAFILE/mazkanata.299.942700581

forceawakens$SQL> select current_scn from v$database;

CURRENT_SCN
-----------
 0

Since the source database was a non-CDB, it must be converted into a proper PDB by running @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql. This is a requirement before you can open the PDB.

forceawakens$SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql

forceawakens$SQL> alter pluggable database open;
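
Optionally, the new PDB can be made to reopen automatically after a CDB restart; a small sketch (run from the CDB root):

$ sqlplus / as sysdba <<'EOF'
-- optional: preserve the PDB's open state across CDB restarts
alter pluggable database forceawakens save state;
EOF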

ACFS 12.2 New Features – a Recap

Oracle Automatic Storage Management Cluster File System (ACFS) made its debut with Oracle 11.2. Many DBAs are not aware of the vast feature set that is available with ACFS. With each release and update of Oracle, significant enhancements have been made, and Oracle Database 12c Release 2 brings new functionality to ACFS as well.

Snapshot Enhancements

In Oracle 12.2, Oracle extends ACFS snapshot functionality and further simplifies file system snapshot operations. The following are a few of the key new features with snapshots:

Admins can now, if needed, impose quotas on snapshots to limit the amount of write activity a snapshot can absorb; quotas are set at the snapshot level. Oracle also provides the capability to rename an existing ACFS snapshot, allowing more user-friendly names.

When deleting a snapshot with the "acfsutil snap delete <snapshot> <mount_point>" command, we can force the delete even if there are open files.

There are several new capabilities around snapshot remastering and duplication. The new ACFS snapshot remaster capability allows a snapshot in the snapshot registry to become the primary file system. ACFS snapshot duplication is also introduced: the "acfsutil snap duplicate create" command can be used to duplicate a snapshot from an existing snapshot to a standby target file system.

The "apply" option of the "acfsutil snap duplicate" command allows us to apply deltas to the target ACFS file system or snapshot. For the initial apply, the target file system must be empty. If the target has been applied to before, the apply becomes an incremental update. Before the incremental update occurs, the contents of the target file system must match the contents of the older snapshot from the last incremental update, and the contents of the target snapshot cannot be modified while the apply is running.

Additionally, ACFS snapshot-based replication now uses the SSH protocol to transmit the data streams.
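
Putting the pieces together, a duplication cycle might look like the following sketch (assumptions: the create command writes its data stream to standard output and apply reads standard input; hostnames, snapshot names, and the /standby mount point are illustrative; the target file system must be empty for the initial apply):

# initial full transfer of snapshot just_before_load to the standby file system
acfsutil snap duplicate create just_before_load /acfsmounts/acfsdata | \
    ssh standbyhost 'acfsutil snap duplicate apply /standby'

# later: incremental update, shipping only the deltas since just_before_load
acfsutil snap duplicate create -i just_before_load nightly_snap /acfsmounts/acfsdata | \
    ssh standbyhost 'acfsutil snap duplicate apply /standby'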

4k Sectors and Metadata

When Admins create an ACFS file system, they have the option to create the file system with the 4096-byte metadata structure. When issuing the mkfs command, you can specify the metadata block size with the –i option; two valid options are 512 bytes or 4096 bytes. The 4096-byte metadata structure is made up of multiple 512-byte logical sectors.

If the COMPATIBLE.ADVM ASM disk group attribute is set to 12.2 or greater, the metadata block size defaults to 4096 bytes; if it is set below 12.2, the block size is 512 bytes. When the ADVM volume of the ACFS file system uses a 4K logical disk sector size, Direct I/O requests should be aligned on 4K offsets and sized in multiples of 4K for optimal performance.
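As an illustration (the ADVM volume name is made up), creating a file system explicitly with 4K metadata would look something like:

$ mkfs -t acfs -i 4096 /dev/asm/datavol-123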

Defragger

You will rarely need the defragmentation tool, because the ACFS allocation algorithm coalesces free space automatically. However, for those rare cases where a file system becomes fragmented under heavy workloads, or with compressed files, Oracle provides the defrag option to the acfsutil command. We can now issue the “acfsutil defrag dir” or “acfsutil defrag file” commands for on-demand defragmentation.

ACFS performs all defrag operations in the background. With the -r option of the “acfsutil defrag dir” command, you can recursively defrag subdirectories.
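For example (paths are illustrative):

$ acfsutil defrag dir -r /acfsmounts/acfs1/app_data    # recursive, runs in the background
$ acfsutil defrag file /acfsmounts/acfs1/big_file.dat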

Compression Enhancements

ACFS compression can significantly reduce disk storage requirements for customers running databases on ACFS. Databases running on ACFS must be version 11.2.0.4 or higher. ACFS compression can be enabled on specific ACFS file systems for database files, RMAN backup files, archive logs, Data Pump export files, and general-purpose files. Oracle does not support compression of redo logs, flashback logs, or control files.

When you enable ACFS compression on a file system, only new incoming files are compressed; all existing files on the file system remain uncompressed. Likewise, if you decide to uncompress a file system, Oracle will not decompress existing files; it simply disables compression for newly created files.

To compress and uncompress ACFS file systems, execute the “acfsutil compress on” or “acfsutil compress off” commands. To view the compression state and space consumption, execute the “acfsutil compress info” command. The “acfsutil info fs” and “acfsutil info file” commands now report ACFS compression status as well.
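A quick sketch, with an illustrative mount point:

$ acfsutil compress on /acfsmounts/acfs1     # newly created files are compressed from here on
$ acfsutil compress info /acfsmounts/acfs1   # show compression state and space usage
$ acfsutil info fs /acfsmounts/acfs1         # now reports compression status too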

At this time, databases with 2K or 4K block sizes are not supported with ACFS compression. ACFS compression is supported on Linux and AIX, and it also works with ACFS snapshot-based replication.

Loopback Devices

ACFS now supports loopback devices on the Linux operating system. With ACFS loopback device support, we can take OVM images, templates, and virtual disks stored on ACFS and present them as block devices. Files can be sparse or non-sparse, and ACFS supports Direct I/O on sparse images.

Metadata Collector

The metadata collector copies metadata structures from an Oracle ACFS file system to a separate output file that can be ingested for analysis and diagnostics. It reads the contents of the file system and writes all metadata out to a specified output file. The collector can read the ACFS file system online, without requiring an outage. Note that this tool is not a replacement for the file system checker (fsck), but a supplement for additional diagnosis and support. Even though the metadata collector can read the file system while it is online, for best results unmount the file system prior to collection. The size of the output file is directly correlated to the size of the file system being collected. To collect metadata for a file system, invoke the “acfsutil meta” command.
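A hypothetical invocation (the output file and volume names are made up; check the exact argument order for your release):

$ acfsutil meta -f /tmp/acfs1_meta.dump /dev/asm/datavol-123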

Auto-Resize

The auto-resize feature allows us to “autoextend” a file system when it is about to run out of space. Just like an Oracle datafile with the autoextend option enabled, the ACFS file system can now grow automatically by a specified increment. With the -a option of the “acfsutil size” command, we specify that increment size.

We can also specify a maximum size (a quota) that the file system may “autoextend” to, guarding against runaway space consumption. To set the maximum size for an ACFS file system, execute the “acfsutil size” command with the -x option.
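For example (sizes and mount point are illustrative):

$ acfsutil size -a 1G /acfsmounts/acfs1     # autoextend in 1 GB increments
$ acfsutil size -x 500G /acfsmounts/acfs1   # but never grow past 500 GB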

12.2 SQLPlus History command – features and fumbles

Yes, there’s been a lot of hoopla about the HISTORY capability in 12.2 SQLPlus, and I know my friend Gokhan Atil has written about it too. So I’m just going to share this bit for my team, along with my feedback on it.

SQLPlus has always lacked a history capability like the Unix/Linux shell history. Now, in 12.2, SQLPlus has it. Let me describe a little bit of the feature and its quirks.
Here’s what SQLPlus help has to say about the functionality:

SQL> help hist

 HISTORY
 -------

 Stores, lists, executes, edits of the commands
 entered during the current SQL*Plus session.

 HIST[ORY] [N {RUN | EDIT | DEL[ETE]}] | [CLEAR]

 N is the entry number listed in the history list.
 Use this number to recall, edit or delete the command.

 Example:
 HIST 3 RUN - will run the 3rd entry from the list.

 HIST[ORY] without any option will list all entries in the list.

So, let’s walk thru this thing:

[oracle@oracle122 admin]$ sqlplus "/ as sysdba"
SQL*Plus: Release 12.2.0.1.0 Production on Sat Mar 4 14:43:55 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

To enable SQLPlus history, you issue "set history on" at the SQLPlus prompt.
However, it has to be set each time you connect to SQLPlus. I know my feeble little brain
will forget when I'm in a hurry and log in to SQLPlus, so
I added "set history on" to $ORACLE_HOME/sqlplus/admin/glogin.sql.

SQL> hist
SP2-1651: History list is empty.


Since I haven't done anything yet, the history is obviously empty. Now let's do stuff!

SQL> select name from v$datafile
  2  ;

NAME
--------------------------------------------------------------------------------
+DATA/NISHAN/DATAFILE/system.257.937616243
+DATA/NISHAN/DATAFILE/sysaux.258.937616333
+DATA/NISHAN/DATAFILE/undotbs1.259.937616357
+DATA/NISHAN/4700A987085B3DFAE05387E5E50A8C7B/DATAFILE/system.267.937616485
+DATA/NISHAN/4700A987085B3DFAE05387E5E50A8C7B/DATAFILE/sysaux.266.937616485
+DATA/NISHAN/DATAFILE/users.260.937616359
+DATA/NISHAN/4700A987085B3DFAE05387E5E50A8C7B/DATAFILE/undotbs1.268.937616485
+DATA/NISHAN/49CF3DA922C680E1E0539C14A8C0E4E3/DATAFILE/system.272.937617037
+DATA/NISHAN/49CF3DA922C680E1E0539C14A8C0E4E3/DATAFILE/sysaux.273.937617039
+DATA/NISHAN/49CF3DA922C680E1E0539C14A8C0E4E3/DATAFILE/undotbs1.271.937617037
+DATA/NISHAN/49CF3DA922C680E1E0539C14A8C0E4E3/DATAFILE/users.275.937617207

11 rows selected.

SQL> hist
  1  select name from v$datafile
     ;

However, all these silly little mistakes get recorded into history too... and thus my sloppiness is on display in broad daylight :-(
SQL> hist
  1  select name from v$datafile
     ;
  2  1
  3  2
  4  del

Here's the delete command to remove my sloppiness:

SQL> hist 4 del
SQL> hist
  1  select name from v$datafile
     ;
  2  1
  3  2

Let's run some stuff to populate the history with real commands. Note that all desc, show, set, and select commands get recorded into history.
This may or may not be a good thing... think bloat.

SQL> show parameter mem

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
hi_shared_memory_address	     integer	 0
inmemory_adg_enabled		     boolean	 TRUE
inmemory_clause_default 	     string
inmemory_expressions_usage	     string	 ENABLE
inmemory_force			     string	 DEFAULT
inmemory_max_populate_servers	     integer	 2
inmemory_query			     string	 ENABLE
inmemory_size			     big integer 208M
inmemory_trickle_repopulate_servers_ integer	 1
percent
inmemory_virtual_columns	     string	 MANUAL
memory_max_target		     big integer 0
memory_target			     big integer 0
optimizer_inmemory_aware	     boolean	 TRUE
shared_memory_address		     integer	 0

SQL> show parameter inmemory

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
inmemory_adg_enabled		     boolean	 TRUE
inmemory_clause_default 	     string
inmemory_expressions_usage	     string	 ENABLE
inmemory_force			     string	 DEFAULT
inmemory_max_populate_servers	     integer	 2
inmemory_query			     string	 ENABLE
inmemory_size			     big integer 208M
inmemory_trickle_repopulate_servers_ integer	 1
percent
inmemory_virtual_columns	     string	 MANUAL
optimizer_inmemory_aware	     boolean	 TRUE

SQL> hist
  1  select name from v$datafile
     ;
  2  1
  3  2
  4  show parameter mem
  5  show parameter sga
  6  show parameter inmemory

SQL> desc v$inmemory_area
 Name					   Null?    Type
 ----------------------------------------- -------- ----------------------------
 POOL						    VARCHAR2(26)
 ALLOC_BYTES					    NUMBER
 USED_BYTES					    NUMBER
 POPULATE_STATUS				    VARCHAR2(26)
 CON_ID 					    NUMBER

Let's say I screw up this query (and I really did)!!

SQL> select * form v$inmemory_area;
select * form v$inmemory_area
         *
ERROR at line 1:
ORA-00923: FROM keyword not found where expected

I could do it the old-school way, using the change command as follows:

SQL> c/form/from/
  1* select * from v$inmemory_area
SQL> /

POOL			   ALLOC_BYTES USED_BYTES POPULATE_STATUS
-------------------------- ----------- ---------- --------------------------
    CON_ID
----------
1MB POOL		     166723584		0 DONE
	 1

64KB POOL		      33554432		0 DONE
	 1

1MB POOL		     166723584		0 DONE
	 2


POOL			   ALLOC_BYTES USED_BYTES POPULATE_STATUS
-------------------------- ----------- ---------- --------------------------
    CON_ID
----------
64KB POOL		      33554432		0 DONE
	 2

1MB POOL		     166723584		0 DONE
	 3

64KB POOL		      33554432		0 DONE
	 3


6 rows selected.

SQL> hist
  1  select name from v$datafile
     ;
  2  1
  3  2
  4  show parameter mem
  5  show parameter sga
  6  show parameter inmemory
  ..
  ..
 11  select * form v$inmemory_area;
 12  c/for/from/
 15  /

But I sure wish I could delete multiple history entries at once; alas, I cannot:


SQL> hist 11,12,13 del
SP2-1655: History command syntax error.

Anyway, here's the new way of editing a command in the history:

SQL> hist 
  1  select name from v$datafile
     ;
  2  show parameter mem
  3  show parameter sga
  4  show parameter inmemory
  5  desc v$inmemory_pool
  6  desc v$inmemory_size
  7  esc v$inmemory_area
  8  desc v$inmemory_area
  9  select * form v$inmemory_area;
 10  c/form/from/
 11  /

SQL> hist 9 edit
This pops up the OS editor (vi, of course); you edit as you normally would, save and quit,
and you're back at the SQLPlus prompt.

But look what happens after "hist 9 edit": entry 9 is still the same; it's an immutable entry.
Which I suppose is kind of expected, as you shouldn't be able to change history :-) !!

Thus, by editing, you're effectively storing a new entry in the SQLPlus buffer, and
you'll have to execute this buffer, just like the old-school way.
You can use the SQLPlus "l" command to list the current buffer.

 
This execution adds a new entry for the new command rather than replacing the history "9" entry!

SQL> hist 
  1  select name from v$datafile
     ;
  2  show parameter mem
  3  show parameter sga
  4  show parameter inmemory
  5  desc v$inmemory_pool
  6  desc v$inmemory_size
  7  esc v$inmemory_area
  8  desc v$inmemory_area
  9  select * form v$inmemory_area;
 10  c/for/from/
 11  /
 12  select * from  v$inmemory_area;

This feature is a good start, but I hope they add more capabilities to it, as it's still rudimentary.

Uh oh, I didn't set my Exadata core count correctly, now what?

Changing Capacity On-Demand Core Count in Exadata

We recently implemented an Exadata X6 at one of our client sites (yes, we don't use Oracle ACS, we do it ourselves). However, the client failed to tell us that they had licensed only a subset of the cores per compute node, after we had already implemented the system and *migrated* production databases onto the X6. So how do we set the core count correctly after implementation (post-OEDA run)? We had heard horror stories about other folks needing to re-image just to set the core count. To be specific, it's easy to increase cores, but decreasing them is nasty business.

The steps below are ones we used to decrease the core count:

1. Gracefully stop all databases running on all compute nodes.

2. Login to the compute nodes as root and run the “dbmcli” utility.

3. Display the current core count using the following command:

LIST DBSERVER attributes coreCount

4. Change the core count to the desired value using this command (it needs to be done on all compute nodes):

ALTER DBSERVER pendingCoreCount = 14

NOTE: Since we are decreasing the number of cores after installation of the system, the FORCE option is required:

ALTER DBSERVER pendingCoreCount = 14 FORCE

5. Reboot

6. Verify the change was correct by using the “LIST” command in step 3.
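Rather than logging in to each node one at a time, you can sanity-check the setting across the whole rack with dcli (a sketch, assuming the usual /root/dbs_group file listing all compute nodes):

# dcli -g /root/dbs_group -l root "dbmcli -e LIST DBSERVER attributes coreCount"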

 

Just FYI… Troubleshooting

If there is an issue with the MS service starting up, it could be because of the Java version in use on the system.

For Exadata release 12.1.2.3.1.160411, the Java version was 1.8.0.66; it was flagged by a security audit as a vulnerability and was removed from the system. When the system rebooted, the MS service couldn't start back up because Java was gone. Follow these steps to reinstall Java and get the MS service restarted on the compute nodes.

1. Download the latest JDK from the Oracle site. NOTE: The RPM download was used.

2. Install the JDK package on the system:

rpm -ivh jdk-8u102-linux-x64.rpm

3. Redeploy the MS service application:

/opt/oracle/dbserver/dbms/deploy/scripts/unix/setup_dynamicDeploy DB -D

4. Restart the MS service:

ALTER DBSERVER RESTART SERVICES MS
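As with the core count commands, this is issued from within dbmcli as root; non-interactively it would look something like this sketch:

# dbmcli -e ALTER DBSERVER RESTART SERVICES MS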

 

iptables save

When I was working with our sysadmin, he freaked out that when we stopped/restarted the iptables service, the iptables rules were gone. By default, the rules are lost when the iptables service is stopped or the system reboots.
 
What I usually do is save off my iptables rules into a dump file. The iptables-save command essentially prints the current rules to stdout, so redirect it to a file:

[root@server ~]# iptables-save > iptables.dump
[root@server ~]# less iptables.dump

# Generated by iptables-save v1.4.7 on Fri Jul 22 20:24:22 2016
*filter
:INPUT ACCEPT [348542:44953290]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [9496643:512690291]
-A INPUT -i ib1 -p tcp -m tcp --dport 5042 -j ACCEPT
-A INPUT -i ib1 -p tcp -m tcp --dport 5042 --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT
-A INPUT -i ib1 -p tcp -m tcp --dport 3260 -j ACCEPT
-A INPUT -i ib1 -p tcp -m tcp --dport 3260 --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT
-A INPUT -i ib1 -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -i ib1 -p tcp -m tcp --dport 443 --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT
-A INPUT -i ib1 -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -i ib1 -p tcp -m tcp --dport 22 --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT
-A INPUT -s 204.70.128.1/32 -i ib1 -p udp -m udp --sport 123 -j ACCEPT
-A INPUT -s 10.13.34.32/32 -i ib1 -p tcp -m tcp --sport 53 -j ACCEPT
-A INPUT -s 10.13.34.32/32 -i ib1 -p udp -m udp --sport 53 -j ACCEPT
-A INPUT -s 10.13.34.31/32 -i ib1 -p tcp -m tcp --sport 53 -j ACCEPT
-A INPUT -s 10.13.34.31/32 -i ib1 -p udp -m udp --sport 53 -j ACCEPT
-A INPUT -i ib1 -p tcp -m tcp --dport 1024:65535 --tcp-flags FIN,SYN,RST,ACK SYN -j REJECT --reject-with icmp-port-unreachable
-A INPUT -i ib1 -p tcp -m tcp --dport 1024:65535 -j ACCEPT
-A INPUT -i ib1 -p tcp -j REJECT --reject-with icmp-port-unreachable
-A INPUT -i ib1 -p udp -j REJECT --reject-with icmp-port-unreachable
-A INPUT -i ib0 -p tcp -m tcp --dport 5042 -j ACCEPT
-A INPUT -i ib0 -p tcp -m tcp --dport 5042 --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT
-A INPUT -i ib0 -p tcp -m tcp --dport 3260 -j ACCEPT
-A INPUT -i ib0 -p tcp -m tcp --dport 3260 --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT
-A INPUT -i ib0 -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -i ib0 -p tcp -m tcp --dport 443 --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT
-A INPUT -i ib0 -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -i ib0 -p tcp -m tcp --dport 22 --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT
-A INPUT -s 204.70.128.1/32 -i ib0 -p udp -m udp --sport 123 -j ACCEPT
-A INPUT -s 10.13.34.32/32 -i ib0 -p tcp -m tcp --sport 53 -j ACCEPT
-A INPUT -s 10.13.34.32/32 -i ib0 -p udp -m udp --sport 53 -j ACCEPT
-A INPUT -s 10.13.34.31/32 -i ib0 -p tcp -m tcp --sport 53 -j ACCEPT
-A INPUT -s 10.13.34.31/32 -i ib0 -p udp -m udp --sport 53 -j ACCEPT
-A INPUT -i ib0 -p tcp -m tcp --dport 1024:65535 --tcp-flags FIN,SYN,RST,ACK SYN -j REJECT --reject-with icmp-port-unreachable
-A INPUT -i ib0 -p tcp -m tcp --dport 1024:65535 -j ACCEPT
-A INPUT -i ib0 -p tcp -j REJECT --reject-with icmp-port-unreachable
-A INPUT -i ib0 -p udp -j REJECT --reject-with icmp-port-unreachable
-A INPUT -s 10.43.48.107/32 -i eth0 -p udp -m udp --dport 162 -j ACCEPT
….
COMMIT
# Completed on Fri Jul 22 20:24:22 2016
We can then execute iptables-restore to load a dump of rules previously made by iptables-save:
[root@server ~]# iptables-restore < iptables.dump
[root@server ~]# iptables -L
(output trimmed: the listing confirms the same rule set shown in the iptables-save dump above)
Once imported back in, simply run service iptables reload.
 
 
As stated above, the RHEL/OEL default configuration discards the running rules when the iptables service is stopped or restarted. Setting IPTABLES_SAVE_ON_STOP="yes" or IPTABLES_SAVE_ON_RESTART="yes" in /etc/sysconfig/iptables-config prevents that discard.
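The relevant lines in /etc/sysconfig/iptables-config look like this:

# /etc/sysconfig/iptables-config
IPTABLES_SAVE_ON_STOP="yes"
IPTABLES_SAVE_ON_RESTART="yes"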
 
 
You can also run service iptables save to write the running rules into /etc/sysconfig/iptables.

Setting Round-Robin Multipathing Policy in VMware ESXi 6.0

Storage Array Type Plugins (SATP) and Path Selection Plugins (PSP) are part of the VMware APIs for Pluggable Storage Architecture (PSA). The SATP has all the knowledge of the storage array to aggregate I/Os across multiple channels and has the intelligence to send failover commands when a path has failed. The Path Selection Policy can be either “Fixed”, “Most Recently Used” or “Round Robin”.

If a VMware VM is using RDM with all-flash arrays, then the Round Robin policy should be used. Furthermore, inside the Linux kernel (VM), the noop I/O scheduler should be used. Both need to be configured for proper throughput.
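For the in-guest half, a sketch of switching a device to noop at runtime (the device name is illustrative; to persist it, use a udev rule or the elevator=noop kernel parameter):

# cat /sys/block/sdb/queue/scheduler     # show available schedulers, current one in brackets
# echo noop > /sys/block/sdb/queue/scheduler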

As a best practice, the preferred method to set the Round Robin policy is to create a rule that makes any newly added FlashArray device automatically pick up the Round Robin PSP and an IO Operation Limit value of 1. In this blog I’ll refer to the Pure Storage array for setting the Round Robin policy as well as the IO limit.

The following command creates a rule that achieves both of these for only Pure Storage FlashArray devices:

esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "PURE" -M "FlashArray" -P "VMW_PSP_RR" -O "iops=1"

This must be repeated for each ESXi host.
This can also be accomplished through PowerCLI. Once connected to a vCenter Server, this script will iterate through all of the hosts in that particular vCenter and create a default rule to set Round Robin for all Pure Storage FlashArray devices, with an I/O Operation Limit set to 1.

$hosts = Get-VMHost
foreach ($esx in $hosts)
{
    $esxcli = Get-EsxCli -VMHost $esx
    $esxcli.storage.nmp.satp.rule.add($null, $null, `
        "PURE FlashArray RR IO Operation Limit Rule", $null, $null, $null, `
        "FlashArray", $null, "VMW_PSP_RR", "iops=1", "VMW_SATP_ALUA", `
        $null, $null, "PURE")
}

It is important to note that existing, previously presented devices will not pick up the new rule automatically. They need to be either manually set to Round Robin with an I/O Operation Limit of 1, or unclaimed and reclaimed (through a host reboot or a manual device reclaim process) so that they inherit the configuration set forth by the new rule. To set a new I/O Operation Limit on an existing device, use the following procedure.

The first step is to change the particular device to use the Round Robin PSP. This must be done on every ESXi host and can be done through the vSphere Web Client, the Pure Storage Plugin for the vSphere Web Client, or via command-line utilities.

Via esxcli:
esxcli storage nmp device set -d naa. --psp=VMW_PSP_RR

Note that changing the PSP using the Web Client Plugin is the preferred option, as it will automatically configure Round Robin across all of the hosts. This does not, however, set the IO Operation Limit to 1; that is a command-line-only setting and must be done separately.

Round Robin can also be set on a per-device, per-host basis using the standard vSphere Web Client actions on a Pure Storage volume. Again, this does not set the IO Operation Limit to 1, which is a command-line-only setting and must be done separately.

The IO Operations Limit cannot be checked from the vSphere Web Client—it can only be verified or altered via command line utilities. The following command can check a particular device for the PSP and IO Operations Limit:

esxcli storage nmp device list -d naa.

To set a pre-existing device to an IO Operation Limit of 1, run the following command:

esxcli storage nmp psp roundrobin deviceconfig set -d naa. -I 1 -t iops
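To fix up every pre-existing Pure device on a host in one pass, a loop along these lines works (a sketch, assuming FlashArray LUNs show up with Pure's usual naa.624a9370 vendor prefix; verify against your own device list first):

# apply the RR PSP and an IO Operation Limit of 1 to every Pure FlashArray device
for dev in $(esxcli storage core device list | awk '/^naa.624a9370/ {print $1}'); do
   esxcli storage nmp device set -d $dev --psp=VMW_PSP_RR
   esxcli storage nmp psp roundrobin deviceconfig set -d $dev -I 1 -t iops
done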