Exadata Cloud – Post Provisioning Exadata Configuration – Part 1

Post-Provisioning Exadata Configuration – Part 1

After an Exadata DB System is provisioned, there are several post-provisioning steps that need to be executed in order to enable system automation such as patching, backups, and infrastructure updates. This document describes these steps.

All the traffic in an Exadata DB System is, by default, routed through the client network. To route backup traffic to the backup interface (BONDETH1), a static route needs to be created on each of the compute nodes in the cluster.

First identify the gateway configured for the BONDETH1 interface.

grep GATEWAY /etc/sysconfig/network-scripts/ifcfg-bondeth1 | awk -F"=" '{print $2}'

10.232.35.1

Review the current /etc/sysconfig/network-scripts/route-bondeth1:

cat /etc/sysconfig/network-scripts/route-bondeth1

10.232.35.0/24 dev bondeth1 table 211

default via 10.232.35.1 dev bondeth1 table 211

Create a new static route for BONDETH1 by updating route-bondeth1 with the following entries (per Cloud region):

Phoenix (PHX) region:

ADDRESS0=129.146.0.0

NETMASK0=255.255.0.0

GATEWAY0=10.232.35.1

Ashburn (IAD) region:

ADDRESS0=129.213.0.0

NETMASK0=255.255.0.0

GATEWAY0=10.232.35.1
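For reference, the ADDRESS0/NETMASK0/GATEWAY0 entries translate into a single static route to the Object Storage CIDR via the backup gateway (129.146.0.0/16 in Phoenix, 129.213.0.0/16 in Ashburn). If you want to verify the effect before restarting the interface, the equivalent route can be added manually; this is a sketch using the Phoenix values shown above, and it does not persist (the route-bondeth1 entries handle persistence):

[root@dbsys ~]# ip route add 129.146.0.0/16 via 10.232.35.1 dev bondeth1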

Restart the interface.

[root@dbsys ~]# ifdown bondeth1; ifup bondeth1; 


Once this change is done, you should see a new entry in the route table:

[root@~ network-scripts]# netstat -rn

Kernel IP routing table

Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface

0.0.0.0         10.232.34.1     0.0.0.0         UG        0 0          0 bondeth0

10.232.34.0     0.0.0.0         255.255.255.0   U         0 0          0 bondeth0

10.232.35.0     0.0.0.0         255.255.255.0   U         0 0          0 bondeth1

129.146.0.0     10.232.35.1     255.255.0.0     UG        0 0          0 bondeth1

169.254.200.0   0.0.0.0         255.255.255.252 U         0 0          0 eth0

192.168.132.0   0.0.0.0         255.255.252.0   U         0 0          0 clib1

192.168.132.0   0.0.0.0         255.255.252.0   U         0 0          0 clib0

192.168.136.0   0.0.0.0         255.255.248.0   U         0 0          0 stib0

192.168.136.0   0.0.0.0         255.255.248.0   U         0 0          0 stib1

 

Exadata Cloud – Post Provisioning View of the system

Review of Exadata Deployment

Once the Exadata provisioning process completes (which takes around 4-5 hours for a half rack), we can explore what gets deployed:

$ cat /etc/oratab

OCITEST:/u02/app/oracle/product/12.2.0/dbhome_2:Y

+ASM1:/u01/app/12.2.0.1/grid:N       # line added by Agent

 

[grid@phxdbm-o3eja1 ~]$ olsnodes -n

phxdbm-o3eja1 1

phxdbm-o3eja2 2

phxdbm-o3eja3 3

phxdbm-o3eja4 4

 

[grid@phxdbm-o3eja1 ~]$ cat /var/opt/oracle/creg/OCITEST.ini | grep nodelist

nodelist=phxdbm-o3eja1 phxdbm-o3eja2 phxdbm-o3eja3 phxdbm-o3eja4
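Another quick sanity check is to confirm that the newly created database is registered with Clusterware and running on all four nodes; a minimal sketch, run from the database home (output omitted):

$ srvctl status database -d OCITEST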

 

[grid@phxdbm-o3eja1 ~]$ crsctl stat res -t

--------------------------------------------------------------------------------

Name           Target  State        Server                   State details

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.ACFSC1_DG1.C1_DG11V.advm

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.ACFSC1_DG1.C1_DG12V.advm

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.ACFSC1_DG1.dg

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.ACFSC1_DG2.C1_DG2V.advm

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.ACFSC1_DG2.dg

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.ASMNET1LSNR_ASM.lsnr

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.DATAC1.dg

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.DBFS_DG.dg

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.LISTENER.lsnr

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.RECOC1.dg

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.acfsc1_dg1.c1_dg11v.acfs

ONLINE  ONLINE       phxdbm-o3eja1            mounted on /scratch/acfsc1_dg1,STABLE

ONLINE  ONLINE       phxdbm-o3eja2            mounted on /scratch/acfsc1_dg1,STABLE

ONLINE  ONLINE       phxdbm-o3eja3            mounted on /scratch/acfsc1_dg1,STABLE

ONLINE  ONLINE       phxdbm-o3eja4            mounted on /scratch/acfsc1_dg1,STABLE

ora.acfsc1_dg1.c1_dg12v.acfs

ONLINE  ONLINE       phxdbm-o3eja1            mounted on /u02/app_acfs,STABLE

ONLINE  ONLINE       phxdbm-o3eja2            mounted on /u02/app_acfs,STABLE

ONLINE  ONLINE       phxdbm-o3eja3            mounted on /u02/app_acfs,STABLE

ONLINE  ONLINE       phxdbm-o3eja4            mounted on /u02/app_acfs,STABLE

ora.acfsc1_dg2.c1_dg2v.acfs

ONLINE  ONLINE       phxdbm-o3eja1            mounted on /var/opt/oracle/dbaas_acfs,STABLE

ONLINE  ONLINE       phxdbm-o3eja2            mounted on /var/opt/oracle/dbaas_acfs,STABLE

ONLINE  ONLINE       phxdbm-o3eja3            mounted on /var/opt/oracle/dbaas_acfs,STABLE

ONLINE  ONLINE       phxdbm-o3eja4            mounted on /var/opt/oracle/dbaas_acfs,STABLE

ora.net1.network

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.ons

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.proxy_advm

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

1        ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ora.LISTENER_SCAN2.lsnr

1        ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ora.LISTENER_SCAN3.lsnr

1        ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ora.asm

1        ONLINE  ONLINE       phxdbm-o3eja1            Started,STABLE

2        ONLINE  ONLINE       phxdbm-o3eja2            Started,STABLE

3        ONLINE  ONLINE       phxdbm-o3eja3            Started,STABLE

4        ONLINE  ONLINE       phxdbm-o3eja4            Started,STABLE

ora.cvu

1        ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ora.ocitest.db

1        ONLINE  ONLINE       phxdbm-o3eja1            Open,HOME=/u02/app/oracle/product/12.2.0/dbhome_2,STABLE

2        ONLINE  ONLINE       phxdbm-o3eja2            Open,HOME=/u02/app/oracle/product/12.2.0/dbhome_2,STABLE

3        ONLINE  ONLINE       phxdbm-o3eja3            Open,HOME=/u02/app/oracle/product/12.2.0/dbhome_2,STABLE

4        ONLINE  ONLINE       phxdbm-o3eja4            Open,HOME=/u02/app/oracle/product/12.2.0/dbhome_2,STABLE

ora.phxdbm-o3eja1.vip

1        ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ora.phxdbm-o3eja2.vip

1        ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ora.phxdbm-o3eja3.vip

1        ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ora.phxdbm-o3eja4.vip

1        ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.qosmserver

1        OFFLINE OFFLINE                               STABLE

ora.scan1.vip

1        ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ora.scan2.vip

1        ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ora.scan3.vip

1        ONLINE  ONLINE       phxdbm-o3eja1            STABLE

--------------------------------------------------------------------------------

[grid@phxdbm-o3eja1 ~]$ asmcmd lsct

DB_Name  Status     Software_Version  Compatible_version  Instance_Name   Disk_Group

+APX     CONNECTED        12.2.0.1.0          12.2.0.1.0  +APX1   ACFSC1_DG1

+APX     CONNECTED        12.2.0.1.0          12.2.0.1.0  +APX1   ACFSC1_DG2

+ASM     CONNECTED        12.2.0.1.0          12.2.0.1.0  +ASM1   DATAC1

+ASM     CONNECTED        12.2.0.1.0          12.2.0.1.0  +ASM1    DBFS_DG

OCITEST  CONNECTED        12.2.0.1.0          12.2.0.0.0  OCITEST1 DATAC1

OCITEST  CONNECTED        12.2.0.1.0          12.2.0.0.0  OCITEST1  RECOC1

_OCR     CONNECTED         –                  phxdbm-o3eja1.client.phxexadata.oraclevcn.com  DBFS_DG

yoda     CONNECTED        12.2.0.1.0          12.2.0.0.0  yoda1    DATAC1

yoda     CONNECTED        12.2.0.1.0          12.2.0.0.0  yoda1    RECOC1

 

[root@phxdbm-o3eja1 ~]# df -k

Filesystem           1K-blocks     Used Available Use% Mounted on

/dev/mapper/VGExaDb-LVDbSys1

24639868  3878788  19486408  17% /

tmpfs                742619136  2465792 740153344   1% /dev/shm

/dev/xvda1              499656    26360    447084   6% /boot

/dev/mapper/VGExaDb-LVDbOra1

20511356   719324  18727072   4% /u01

/dev/xvdb             51475068  9757380  39079864  20% /u01/app/12.2.0.1/grid

/dev/xvdc             51475068  9302820  39534424  20% /u01/app/oracle/product/12.1.0.2/dbhome_1

/dev/xvdd             51475068  8173956  40663288  17% /u01/app/oracle/product/12.2.0.1/dbhome_1

/dev/xvde             51475068  6002756  42834488  13% /u01/app/oracle/product/11.2.0.4/dbhome_1

/dev/xvdg            206293688 19751360 176040184  11% /u02

/dev/asm/c1_dg12v-186

459276288  1067008 458209280   1% /u02/app_acfs

/dev/asm/c1_dg11v-186

229638144   611488 229026656   1% /scratch/acfsc1_dg1

/dev/asm/c1_dg2v-341 228589568 26597644 201991924  12% /var/opt/oracle/dbaas_acfs

 

Oracle Homes are created and mounted, though for IQN only the 12.2, 12.1.0.2, and 11.2.0.4 [interim] homes will be used.

The following are Exadata-specific filesystems and their use cases:

/scratch/acfsc1_dg1 – staging area for Exadata

/u02/app_acfs – user filesystem for applications (currently empty)

/var/opt/oracle/dbaas_acfs – binary and image repository for all Exadata patching and enablement

Exadata Cloud Deployment and Considerations

I recently did a presentation and whiteboard session on Exadata Cloud deployment. As part of that engagement, I did a small write-up on the topic. This series of blog posts reflects that presentation:

Cloud Exadata Network and Platform Configuration

Exadata DB Systems are offered in quarter rack, half rack, or full rack configurations, and each configuration consists of compute nodes and storage servers. Each compute node is configured as a virtual machine (VM).

Key Operational characteristics of Exadata Cloud

  • Admins have root privileges for the compute node VMs, so third-party software can be installed; however, only supported Oracle Database versions and RPMs should be used.

 

  • Admins do not have administrative access to the Exadata infrastructure components, including the physical compute node hardware, network switches, power distribution units (PDUs), integrated lights-out management (ILOM) interfaces, and the Exadata Storage Servers; these are all administered by Oracle.

 

  • Admins have full administrative privileges for their databases. However, application users should connect to the databases via Oracle Net Services.

 

  • Admins are responsible for database administration tasks such as creating tablespaces and managing database users.

 

  • Admins should define how SSH keys will be managed for users that need compute node access.

 

 

 

 

 

 

 

 

 

 

 

Provisioning Exadata Pre-reqs

The following are the network pre-reqs for provisioning Cloud Exadata DB Systems:

Subnets

  • Require two separate VCN subnets: a client subnet for user data traffic and a backup subnet for backup traffic.
  • Define both the client subnet and the backup subnet as public subnets. Exadata requires a public subnet to support backup of the database to the Object Store.
  • Do not use a subnet that overlaps with 192.168.128.0/20. This restriction applies to both the client subnet and backup subnet.
  • Oracle requires that you use a VCN Resolver for DNS name resolution for the client subnet. It automatically resolves the Swift endpoints required for backing up databases, patching, and updating the cloud tooling on an Exadata DB System.

At the completion of the provisioning, you should have the following configured:

 

 

 

 

 

 

Security Lists and Routing

  • Each VCN subnet has a default security list that contains a rule to allow TCP traffic on destination port 22 (SSH) from source 0.0.0.0/0 and any source port. Properly configure the security list ingress and egress rules.
  • The OneCommand configuration enables TCP and ICMP traffic between all nodes and all ports in the respective subnets for the client and backup networks.
  • The Exadata DB System's cloud network (VCN) must be configured with an internet gateway. Add a route table rule to open access to the Object Storage Service Swift endpoint on CIDR 0.0.0.0/0.
  • Update the backup subnet's security list to disallow any access from outside the subnet and to allow egress traffic for TCP port 443 (HTTPS) on CIDR ranges 129.146.0.0/16 (Phoenix region) and 129.213.0.0/16 (Ashburn region), as sketched below.
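For the last item, the resulting backup-subnet egress rule would look roughly like the following; this is a hedged sketch of the console fields rather than exact OCI syntax, using the Phoenix region as the example:

Stateless: No
Destination CIDR: 129.146.0.0/16
IP Protocol: TCP
Destination Port Range: 443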

Enable a route table with an entry that includes an Internet Gateway. This will enable remote SSH access to the Exadata nodes.

 

 

 

 

 

 

 

Provisioning Exadata

Service Console – Provision Exadata

Below are screenshot views that illustrate the provisioning of Exadata

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

Cloud Exadata Storage Configuration

Exadata Storage Servers use the following ASM disk groups:

DATA disk group – used for the storage of Oracle Database datafiles.

RECO disk group – primarily used for storing files related to backup and recovery, such as RMAN backups and archived redo log files.

How the space is split between the two depends on how admins choose to provision for backups. If backups are kept on Exadata storage, approximately 40% of the available storage space is allocated to the DATA disk group and approximately 60% is allocated to the RECO disk group.

If backups are not kept on Exadata storage, approximately 80% of the available storage space is allocated to the DATA disk group and approximately 20% is allocated to the RECO disk group.

DBFS and ACFS disk groups are system disk groups that support various operational purposes. The DBFS disk group is primarily used to store the shared Clusterware files (Oracle Cluster Registry and voting disks), while the ACFS disk groups are primarily used to store Oracle Database binaries, staging directories, and metadata.
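To review these disk groups and their current space usage on a deployed system, the standard ASM command-line utility can be used from the grid user; a minimal sketch (output omitted):

[grid@phxdbm-o3eja1 ~]$ asmcmd lsdg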

 

Are You Ready to Apply the 12.2.0.1 July RU?

Here are the steps I went through to apply the Grid Infrastructure Jul 2017 Release Update (RU) 12.2.0.1.170718, Patch 26133434.

Configuration:  2 Node RAC cluster on Kaminario K2 AFA

The Grid Infrastructure Jul2017 Release Update (RU) 12.2.0.1.170718 includes updates for both the Clusterware home and Database home that can be applied in a rolling fashion.
In this blog post we update the GI and DB stacks on both nodes.
The details and execution shown for Node 1 are repeated here for Node 2 as well.
Big thanks to Mike Dietrich for some insight !

Step 1) Upgrade the OPatch version to at least 12.2.0.1.7. The OPatch version must be upgraded in the GI and DB homes on all nodes.

[root@vna02 grid]# cd OPatch

[root@vna02 OPatch]# ./opatch version

OPatch Version: 12.2.0.1.9   <- Grid Home

OPatch succeeded.

[oracle@vna01 dbhome_1]$ opatch version

OPatch Version: 12.2.0.1.9  <- Database Home
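For reference, updating OPatch is just a matter of extracting the latest OPatch distribution (patch 6880880) into each home; a minimal sketch, assuming the zip has been staged under /tmp (the exact file name varies by platform and release):

[oracle@vna01 ~]$ mv $ORACLE_HOME/OPatch $ORACLE_HOME/OPatch.bak
[oracle@vna01 ~]$ unzip -q /tmp/p6880880_122010_Linux-x86-64.zip -d $ORACLE_HOME
[oracle@vna01 ~]$ $ORACLE_HOME/OPatch/opatch version

The same extraction is repeated in the Grid home (as the Grid home owner) on every node.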

Step 2) Patch conflict check:

Node 1 : 

[oracle@vna01 GI]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/oracle/software/patches/DB-GI-RU/GI/26133434/26123830

Oracle Interim Patch Installer version 12.2.0.1.9

Copyright (c) 2017, Oracle Corporation.  All rights reserved.
PREREQ session
Oracle Home       : /u01/app/oracle/product/12.2.0/dbhome_1
Central Inventory : /u01/app/oraInventory
from           : /u01/app/oracle/product/12.2.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.9
OUI version       : 12.2.0.1.4
Log file location : /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/opatch2017-09-20_18-43-33PM_1.log
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.

[oracle@vna01 GI]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/oracle/software/patches/DB-GI-RU/GI/26133434/26002778

Oracle Interim Patch Installer version 12.2.0.1.9
Copyright (c) 2017, Oracle Corporation.  All rights reserved.
PREREQ session
Oracle Home       : /u01/app/oracle/product/12.2.0/dbhome_1
Central Inventory : /u01/app/oraInventory
from           : /u01/app/oracle/product/12.2.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.9
OUI version       : 12.2.0.1.4
Log file location : /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/opatch2017-09-20_19-01-04PM_1.log
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.

From the Database Home:

[oracle@vna01 GI]$ . oraenv
ORACLE_SID = [VNADB1] ? VNADB1
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@vna01 GI]$ cd $ORACLE_HOME/OPatch
[oracle@vna01 OPatch]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/oracle/software/patches/DB-GI-RU/GI/26133434/26123830
Oracle Interim Patch Installer version 12.2.0.1.9
Copyright (c) 2017, Oracle Corporation.  All rights reserved.
PREREQ session
Oracle Home       : /u01/app/oracle/product/12.2.0/dbhome_1
Central Inventory : /u01/app/oraInventory
from           : /u01/app/oracle/product/12.2.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.9
OUI version       : 12.2.0.1.4
Log file location : /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/opatch2017-09-20_19-03-12PM_1.log
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.

[oracle@vna01 OPatch]$
[oracle@vna01 OPatch]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/oracle/software/patches/DB-GI-RU/GI/26133434/26002778
Oracle Interim Patch Installer version 12.2.0.1.9
Copyright (c) 2017, Oracle Corporation.  All rights reserved.
PREREQ session
Oracle Home       : /u01/app/oracle/product/12.2.0/dbhome_1
Central Inventory : /u01/app/oraInventory
from           : /u01/app/oracle/product/12.2.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.9
OUI version       : 12.2.0.1.4
Log file location : /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/opatch2017-09-20_19-03-25PM_1.log
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.

One-off Patch Conflict Detection and Resolution

[root@vna01 OPatch]# $ORACLE_HOME/OPatch/opatchauto apply /home/oracle/software/patches/DB-GI-RU/GI/26133434 -analyze

OPatchauto session is initiated at Wed Sep 20 19:53:25 2017
System initialization log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchautodb/systemconfig2017-09-20_07-53-27PM.log.
Session log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/opatchauto2017-09-20_07-53-48PM.log
The id for this session is QWPL
Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/12.2.0/dbhome_1
Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.2.0/grid
Patch applicability verified successfully on home /u01/app/oracle/product/12.2.0/dbhome_1
Patch applicability verified successfully on home /u01/app/12.2.0/grid
Verifying SQL patch applicability on home /u01/app/oracle/product/12.2.0/dbhome_1

Following step failed during analysis:
/bin/sh -c 'ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1 ORACLE_SID=VNADB1 /u01/app/oracle/product/12.2.0/dbhome_1/OPatch/datapatch -prereq'
SQL patch applicability verified successfully on home /u01/app/oracle/product/12.2.0/dbhome_1
OPatchAuto successful.

--------------------------------Summary--------------------------------
Analysis for applying patches has completed successfully:
Host:vna01
RAC Home:/u01/app/oracle/product/12.2.0/dbhome_1

==Following patches were SKIPPED:
Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/25586399
Reason: This patch is not applicable to this specified target type - "rac_database"

==Following patches were SUCCESSFULLY analyzed to be applied:
Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/26002778
Log: /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-20_19-53-51PM_1.log
Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/26123830
Log: /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-20_19-53-51PM_1.log

Host:vna01
CRS Home:/u01/app/12.2.0/grid
==Following patches were SUCCESSFULLY analyzed to be applied:
Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/26002778
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-20_19-53-51PM_1.log
Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/25586399
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-20_19-53-51PM_1.log
Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/26123830
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-20_19-53-51PM_1.log
OPatchauto session completed at Wed Sep 20 19:57:09 2017
Time taken to complete the session 3 minutes, 44 seconds


Now the OPatchauto apply process:

[root@vna01 OPatch]# $ORACLE_HOME/OPatch/opatchauto apply /home/oracle/software/patches/DB-GI-RU/GI/26133434

OPatchauto session is initiated at Wed Sep 20 20:18:27 2017

System initialization log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchautodb/systemconfig2017-09-20_08-18-28PM.log.

Session log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/opatchauto2017-09-20_08-18-50PM.log

The id for this session is CNCU

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.2.0/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/12.2.0/dbhome_1

Patch applicability verified successfully on home /u01/app/oracle/product/12.2.0/dbhome_1

Patch applicability verified successfully on home /u01/app/12.2.0/grid

Verifying SQL patch applicability on home /u01/app/oracle/product/12.2.0/dbhome_1

"/bin/sh -c 'ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1 ORACLE_SID=VNADB1 /u01/app/oracle/product/12.2.0/dbhome_1/OPatch/datapatch -prereq'" command failed with errors. Please refer to logs for more details. SQL changes, if any, can be analyzed by manually retrying the same command.

SQL patch applicability verified successfully on home /u01/app/oracle/product/12.2.0/dbhome_1

Preparing to bring down database service on home /u01/app/oracle/product/12.2.0/dbhome_1

Successfully prepared home /u01/app/oracle/product/12.2.0/dbhome_1 to bring down database service

Bringing down CRS service on home /u01/app/12.2.0/grid

Prepatch operation log file location: /u01/app/oracle/crsdata/vna01/crsconfig/crspatch_vna01_2017-09-20_08-22-15PM.log

CRS service brought down successfully on home /u01/app/12.2.0/grid

Performing prepatch operation on home /u01/app/oracle/product/12.2.0/dbhome_1

Perpatch operation completed successfully on home /u01/app/oracle/product/12.2.0/dbhome_1

Start applying binary patch on home /u01/app/oracle/product/12.2.0/dbhome_1

Binary patch applied successfully on home /u01/app/oracle/product/12.2.0/dbhome_1

Performing postpatch operation on home /u01/app/oracle/product/12.2.0/dbhome_1

Postpatch operation completed successfully on home /u01/app/oracle/product/12.2.0/dbhome_1

Start applying binary patch on home /u01/app/12.2.0/grid

Binary patch applied successfully on home /u01/app/12.2.0/grid

Starting CRS service on home /u01/app/12.2.0/grid

Postpatch operation log file location: /u01/app/oracle/crsdata/vna01/crsconfig/crspatch_vna01_2017-09-20_08-27-01PM.log

CRS service started successfully on home /u01/app/12.2.0/grid

Preparing home /u01/app/oracle/product/12.2.0/dbhome_1 after database service restarted

No step execution required.........

Prepared home /u01/app/oracle/product/12.2.0/dbhome_1 successfully after database service restarted

Trying to apply SQL patch on home /u01/app/oracle/product/12.2.0/dbhome_1

"/bin/sh -c 'ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1 ORACLE_SID=VNADB1 /u01/app/oracle/product/12.2.0/dbhome_1/OPatch/datapatch'" command failed with errors. Please refer to logs for more details. SQL changes, if any, can be applied by manually retrying the same command.

SQL patch applied successfully on home /u01/app/oracle/product/12.2.0/dbhome_1

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:vna01

RAC Home:/u01/app/oracle/product/12.2.0/dbhome_1

Summary:

==Following patches were SKIPPED:

Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/25586399

Reason: This patch is not applicable to this specified target type - "rac_database"



==Following patches were SUCCESSFULLY applied:

Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/26002778

Log: /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-20_20-23-57PM_1.log

Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/26123830

Log: /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-20_20-23-57PM_1.log


Host:vna01

CRS Home:/u01/app/12.2.0/grid

Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/26002778

Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-20_20-24-44PM_1.log

Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/25586399

Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-20_20-24-44PM_1.log

Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/26123830

Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-20_20-24-44PM_1.log

OPatchauto session completed at Wed Sep 20 20:34:23 2017

Time taken to complete the session 15 minutes, 56 seconds


lsInventory Output:

[oracle@vna01 OPatch]$ opatch lsinventory

Oracle Interim Patch Installer version 12.2.0.1.9

Copyright (c) 2017, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/12.2.0/grid

Central Inventory : /u01/app/oraInventory

from           : /u01/app/12.2.0/grid/oraInst.loc

OPatch version    : 12.2.0.1.9

OUI version       : 12.2.0.1.4

Log file location : /u01/app/12.2.0/grid/cfgtoollogs/opatch/opatch2017-09-20_20-38-46PM_1.log



lsinventory Output file location : /u01/app/12.2.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2017-09-20_20-38-46PM.txt

--------------------------------------------------------------------------------

Local Machine Information::

Hostname: vna01

ARU platform id: 226

ARU platform description:: Linux x86-64

Installed Top-level Products (1):

Oracle Grid Infrastructure 12c                                       12.2.0.1.0

There are 1 products installed in this Oracle Home.

Interim patches (3) :

Patch  26123830     : applied on Wed Sep 20 20:26:39 BST 2017

Unique Patch ID:  21405588

Patch description:  "DATABASE RELEASE UPDATE: 12.2.0.1.170718 (26123830)"

Created on 7 Jul 2017, 00:33:59 hrs PST8PDT

Bugs fixed:

23026585, 24336249, 24929210, 24942749, 25036474, 25110233, 25410877

25417050, 25427662, 25459958, 25547901, 25569149, 25600342, 25600421

25606091, 25655390, 25662088, 24385983, 24923215, 25099758, 25429959

25662101, 25728085, 25823754, 22594071, 23665623, 23749454, 24326846

24334708, 24560906, 24573817, 24578797, 24609996, 24624166, 24668398

24674955, 24744686, 24811725, 24827228, 24831514, 24908321, 24976007

25184555, 25210499, 25211628, 25223839, 25262869, 25316758, 25337332

25455795, 25457409, 25539063, 25546608, 25612095, 25643931, 25410017

22729345, 24485174, 24509056, 24714096, 25329664, 25410180, 25607726

25957038, 25973152, 26024732, 24376878, 24589590, 24676172, 23548817

24796092, 24907917, 25044977, 25736747, 25766822, 25856821, 25051628

24534401, 24835919, 25050160, 25395696, 25430120, 25616359, 25715167

25967985

Patch  25586399     : applied on Wed Sep 20 20:26:17 BST 2017

Unique Patch ID:  21306685

Patch description:  "ACFS Patch Set Update : 12.2.0.1.170718 (25586399)"

Created on 16 Jun 2017, 00:35:19 hrs PST8PDT

Bugs fixed:

24679041, 24964969, 25098392, 25078431, 25491831


Patch  26002778     : applied on Wed Sep 20 20:25:26 BST 2017

Unique Patch ID:  21306682

Patch description:  "OCW Patch Set Update : 12.2.0.1.170718 (26002778)"

Created on 3 Jul 2017, 03:26:30 hrs PST8PDT

Bugs fixed:

26144044, 25541343, 25715179, 25493588, 24932026, 24801915, 25832375

25728787, 25825732, 24578464, 25832312, 25742471, 25790699, 25655495

25307145, 25485737, 25505841, 25697364, 24663993, 25026470, 25591658

25537905, 24451580, 25409838, 25371632, 25569634, 25245759, 24665035

25646592, 25025157, 24732650, 24664849, 24584419, 24423011, 24831158

25037836, 25556203, 24464953, 24657753, 25197670, 24796183, 20559126

25197395, 24808260

--------------------------------------------------------------------------------

OPatch succeeded.

[oracle@vna01 OPatch]$

From the Database Home:

[oracle@vna01 OPatch]$ . oraenv

ORACLE_SID = [+ASM1] ? VNADB1

The Oracle base remains unchanged with value /u01/app/oracle

[oracle@vna01 OPatch]$  export PATH=$ORACLE_HOME/OPatch:$PATH

[oracle@vna01 OPatch]$ which opatch

/u01/app/oracle/product/12.2.0/dbhome_1/OPatch/opatch

[oracle@vna01 OPatch]$ opatch lsinventory

Oracle Interim Patch Installer version 12.2.0.1.9

Copyright (c) 2017, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/12.2.0/dbhome_1

Central Inventory : /u01/app/oraInventory

from           : /u01/app/oracle/product/12.2.0/dbhome_1/oraInst.loc

OPatch version    : 12.2.0.1.9

OUI version       : 12.2.0.1.4

Log file location : /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/opatch2017-09-20_20-40-03PM_1.log

lsinventory Output file location : /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2017-09-20_20-40-03PM.txt

--------------------------------------------------------------------------------

Local Machine Information::

Hostname: vna01

ARU platform id: 226

ARU platform description:: Linux x86-64

Installed Top-level Products (1):

Oracle Database 12c                                                  12.2.0.1.0

There are 1 products installed in this Oracle Home.

Interim patches (2) :

Patch  26123830     : applied on Wed Sep 20 20:24:26 BST 2017

Unique Patch ID:  21405588

Patch description:  "DATABASE RELEASE UPDATE: 12.2.0.1.170718 (26123830)"

Created on 7 Jul 2017, 00:33:59 hrs PST8PDT

Bugs fixed:

23026585, 24336249, 24929210, 24942749, 25036474, 25110233, 25410877

25417050, 25427662, 25459958, 25547901, 25569149, 25600342, 25600421

25606091, 25655390, 25662088, 24385983, 24923215, 25099758, 25429959

25662101, 25728085, 25823754, 22594071, 23665623, 23749454, 24326846

24334708, 24560906, 24573817, 24578797, 24609996, 24624166, 24668398

24674955, 24744686, 24811725, 24827228, 24831514, 24908321, 24976007

25184555, 25210499, 25211628, 25223839, 25262869, 25316758, 25337332

25455795, 25457409, 25539063, 25546608, 25612095, 25643931, 25410017

22729345, 24485174, 24509056, 24714096, 25329664, 25410180, 25607726

25957038, 25973152, 26024732, 24376878, 24589590, 24676172, 23548817

24796092, 24907917, 25044977, 25736747, 25766822, 25856821, 25051628

24534401, 24835919, 25050160, 25395696, 25430120, 25616359, 25715167

25967985



Patch  26002778     : applied on Wed Sep 20 20:24:11 BST 2017

Unique Patch ID:  21306682

Patch description:  "OCW Patch Set Update : 12.2.0.1.170718 (26002778)"

Created on 3 Jul 2017, 03:26:30 hrs PST8PDT

Bugs fixed:

26144044, 25541343, 25715179, 25493588, 24932026, 24801915, 25832375

25728787, 25825732, 24578464, 25832312, 25742471, 25790699, 25655495

25307145, 25485737, 25505841, 25697364, 24663993, 25026470, 25591658

25537905, 24451580, 25409838, 25371632, 25569634, 25245759, 24665035

25646592, 25025157, 24732650, 24664849, 24584419, 24423011, 24831158

25037836, 25556203, 24464953, 24657753, 25197670, 24796183, 20559126

25197395, 24808260

--------------------------------------------------------------------------------

OPatch succeeded.

[oracle@vna01 OPatch]$



Node 2 : 

Run OPatch Conflict Check

From GI Home:

[oracle@vna02 patches]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/oracle/patches/26133434/26123830

Oracle Interim Patch Installer version 12.2.0.1.9

Copyright (c) 2017, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/app/12.2.0/grid

Central Inventory : /u01/app/oraInventory

from           : /u01/app/12.2.0/grid/oraInst.loc

OPatch version    : 12.2.0.1.9

OUI version       : 12.2.0.1.4

Log file location : /u01/app/12.2.0/grid/cfgtoollogs/opatch/opatch2017-09-20_20-48-20PM_1.log



Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.

[oracle@vna02 patches]$

[oracle@vna02 patches]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/oracle/patches/26133434/26002778

Oracle Interim Patch Installer version 12.2.0.1.9

Copyright (c) 2017, Oracle Corporation.  All rights reserved.



PREREQ session

Oracle Home       : /u01/app/12.2.0/grid

Central Inventory : /u01/app/oraInventory

from           : /u01/app/12.2.0/grid/oraInst.loc

OPatch version    : 12.2.0.1.9

OUI version       : 12.2.0.1.4

Log file location : /u01/app/12.2.0/grid/cfgtoollogs/opatch/opatch2017-09-20_20-48-32PM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.

For the DB Home:

[oracle@vna02 patches]$ export PATH=$ORACLE_HOME/OPatch:$PATH

[oracle@vna02 patches]$ which opatch

/u01/app/oracle/product/12.2.0/dbhome_1/OPatch/opatch

[oracle@vna02 patches]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/oracle/patches/26133434/26123830

Oracle Interim Patch Installer version 12.2.0.1.9

Copyright (c) 2017, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/app/oracle/product/12.2.0/dbhome_1

Central Inventory : /u01/app/oraInventory

from           : /u01/app/oracle/product/12.2.0/dbhome_1/oraInst.loc

OPatch version    : 12.2.0.1.9

OUI version       : 12.2.0.1.4

Log file location : /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/opatch2017-09-20_20-52-24PM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.

[oracle@vna02 patches]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/oracle/patches/26133434/26002778

Oracle Interim Patch Installer version 12.2.0.1.9

Copyright (c) 2017, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/app/oracle/product/12.2.0/dbhome_1

Central Inventory : /u01/app/oraInventory

from           : /u01/app/oracle/product/12.2.0/dbhome_1/oraInst.loc

OPatch version    : 12.2.0.1.9

OUI version       : 12.2.0.1.4

Log file location : /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/opatch2017-09-20_20-52-38PM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.

[oracle@vna02 patches]$



OPatch Conflict Checks:

[root@vna02 12.2.0]# $ORACLE_HOME/OPatch/opatchauto apply /home/oracle/patches/26133434 -analyze

OPatchauto session is initiated at Thu Sep 21 02:18:32 2017

System initialization log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchautodb/systemconfig2017-09-21_02-18-33AM.log.

Session log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/opatchauto2017-09-21_02-18-53AM.log

The id for this session is NWN8

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.2.0/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/12.2.0/dbhome_1

Patch applicability verified successfully on home /u01/app/oracle/product/12.2.0/dbhome_1

Patch applicability verified successfully on home /u01/app/12.2.0/grid

Verifying SQL patch applicability on home /u01/app/oracle/product/12.2.0/dbhome_1

SQL patch applicability verified successfully on home /u01/app/oracle/product/12.2.0/dbhome_1

OPatchAuto successful.

--------------------------------Summary--------------------------------

Analysis for applying patches has completed successfully:

Host:vna02

RAC Home:/u01/app/oracle/product/12.2.0/dbhome_1

==Following patches were SKIPPED:

Patch: /home/oracle/patches/26133434/25586399

Reason: This patch is not applicable to this specified target type - "rac_database"

==Following patches were SUCCESSFULLY analyzed to be applied:

Patch: /home/oracle/patches/26133434/26002778

Log: /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-21_02-18-56AM_1.log

Patch: /home/oracle/patches/26133434/26123830

Log: /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-21_02-18-56AM_1.log

Host:vna02

CRS Home:/u01/app/12.2.0/grid

==Following patches were SUCCESSFULLY analyzed to be applied:

Patch: /home/oracle/patches/26133434/26002778

Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-21_02-18-56AM_1.log

Patch: /home/oracle/patches/26133434/25586399

Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-21_02-18-56AM_1.log



Patch: /home/oracle/patches/26133434/26123830

Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-21_02-18-56AM_1.log

OPatchauto session completed at Thu Sep 21 02:22:48 2017

Time taken to complete the session 4 minutes, 16 seconds


OPatchauto apply:



[root@vna02 12.2.0]# $ORACLE_HOME/OPatch/opatchauto apply /home/oracle/patches/26133434



OPatchauto session is initiated at Thu Sep 21 02:25:35 2017



System initialization log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchautodb/systemconfig2017-09-21_02-25-36AM.log.



Session log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/opatchauto2017-09-21_02-25-57AM.log

The id for this session is PM1S



Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/12.2.0/dbhome_1



Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.2.0/grid

Patch applicability verified successfully on home /u01/app/12.2.0/grid



Patch applicability verified successfully on home /u01/app/oracle/product/12.2.0/dbhome_1





Verifying SQL patch applicability on home /u01/app/oracle/product/12.2.0/dbhome_1

SQL patch applicability verified successfully on home /u01/app/oracle/product/12.2.0/dbhome_1





Preparing to bring down database service on home /u01/app/oracle/product/12.2.0/dbhome_1

Successfully prepared home /u01/app/oracle/product/12.2.0/dbhome_1 to bring down database service





Bringing down CRS service on home /u01/app/12.2.0/grid

Prepatch operation log file location: /u01/app/oracle/crsdata/vna02/crsconfig/crspatch_vna02_2017-09-21_02-30-11AM.log

CRS service brought down successfully on home /u01/app/12.2.0/grid





Performing prepatch operation on home /u01/app/oracle/product/12.2.0/dbhome_1

Perpatch operation completed successfully on home /u01/app/oracle/product/12.2.0/dbhome_1





Start applying binary patch on home /u01/app/oracle/product/12.2.0/dbhome_1

Binary patch applied successfully on home /u01/app/oracle/product/12.2.0/dbhome_1





Performing postpatch operation on home /u01/app/oracle/product/12.2.0/dbhome_1

Postpatch operation completed successfully on home /u01/app/oracle/product/12.2.0/dbhome_1





Start applying binary patch on home /u01/app/12.2.0/grid

Binary patch applied successfully on home /u01/app/12.2.0/grid





Starting CRS service on home /u01/app/12.2.0/grid

Postpatch operation log file location: /u01/app/oracle/crsdata/vna02/crsconfig/crspatch_vna02_2017-09-21_02-34-30AM.log

CRS service started successfully on home /u01/app/12.2.0/grid





Preparing home /u01/app/oracle/product/12.2.0/dbhome_1 after database service restarted

No step execution required.........

Prepared home /u01/app/oracle/product/12.2.0/dbhome_1 successfully after database service restarted





Trying to apply SQL patch on home /u01/app/oracle/product/12.2.0/dbhome_1

SQL patch applied successfully on home /u01/app/oracle/product/12.2.0/dbhome_1



OPatchAuto successful.



--------------------------------Summary--------------------------------



Patching is completed successfully. Please find the summary as follows:



Host:vna02

RAC Home:/u01/app/oracle/product/12.2.0/dbhome_1

Summary:



==Following patches were SKIPPED:



Patch: /home/oracle/patches/26133434/25586399

Reason: This patch is not applicable to this specified target type - "rac_database"





==Following patches were SUCCESSFULLY applied:



Patch: /home/oracle/patches/26133434/26002778

Log: /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-21_02-31-39AM_1.log



Patch: /home/oracle/patches/26133434/26123830

Log: /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-21_02-31-39AM_1.log





Host:vna02

CRS Home:/u01/app/12.2.0/grid

Summary:



==Following patches were SUCCESSFULLY applied:



Patch: /home/oracle/patches/26133434/26002778

Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-21_02-32-21AM_1.log



Patch: /home/oracle/patches/26133434/25586399

Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-21_02-32-21AM_1.log



Patch: /home/oracle/patches/26133434/26123830

Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-21_02-32-21AM_1.log







OPatchauto session completed at Thu Sep 21 02:41:44 2017

Time taken to complete the session 16 minutes, 9 seconds

[root@vna02 12.2.0]#

lsInventory Checks:

Grid Home Inventory

[oracle@vna02 ~]$ . oraenv

ORACLE_SID = [oracle] ? +ASM2
The Oracle base has been set to /u01/app/oracle

[oracle@vna02 ~]$ export PATH=$ORACLE_HOME/OPatch:$PATH
[oracle@vna02 ~]$ which opatch
/u01/app/12.2.0/grid/OPatch/opatch

[oracle@vna02 ~]$ opatch lsinventory

Oracle Interim Patch Installer version 12.2.0.1.9
Copyright (c) 2017, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/12.2.0/grid
Central Inventory : /u01/app/oraInventory
from           : /u01/app/12.2.0/grid/oraInst.loc
OPatch version    : 12.2.0.1.9
OUI version       : 12.2.0.1.4
Log file location : /u01/app/12.2.0/grid/cfgtoollogs/opatch/opatch2017-09-21_02-44-21AM_1.log
Lsinventory Output file location : /u01/app/12.2.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2017-09-21_02-44-21AM.txt

--------------------------------------------------------------------------------
Local Machine Information::
Hostname: vna02
ARU platform id: 226
ARU platform description:: Linux x86-64
Installed Top-level Products (1):
Oracle Grid Infrastructure 12c                                       12.2.0.1.0
There are 1 products installed in this Oracle Home.

Interim patches (3) :
Patch  26123830     : applied on Thu Sep 21 02:34:08 BST 2017
Unique Patch ID:  21405588
Patch description:  "DATABASE RELEASE UPDATE: 12.2.0.1.170718 (26123830)"
Created on 7 Jul 2017, 00:33:59 hrs PST8PDT

Bugs fixed:
23026585, 24336249, 24929210, 24942749, 25036474, 25110233, 25410877

25417050, 25427662, 25459958, 25547901, 25569149, 25600342, 25600421

25606091, 25655390, 25662088, 24385983, 24923215, 25099758, 25429959

25662101, 25728085, 25823754, 22594071, 23665623, 23749454, 24326846

24334708, 24560906, 24573817, 24578797, 24609996, 24624166, 24668398

24674955, 24744686, 24811725, 24827228, 24831514, 24908321, 24976007

25184555, 25210499, 25211628, 25223839, 25262869, 25316758, 25337332

25455795, 25457409, 25539063, 25546608, 25612095, 25643931, 25410017

22729345, 24485174, 24509056, 24714096, 25329664, 25410180, 25607726

25957038, 25973152, 26024732, 24376878, 24589590, 24676172, 23548817

24796092, 24907917, 25044977, 25736747, 25766822, 25856821, 25051628

24534401, 24835919, 25050160, 25395696, 25430120, 25616359, 25715167

25967985



Patch  25586399     : applied on Thu Sep 21 02:33:51 BST 2017

Unique Patch ID:  21306685

Patch description:  "ACFS Patch Set Update : 12.2.0.1.170718 (25586399)"

Created on 16 Jun 2017, 00:35:19 hrs PST8PDT

Bugs fixed:

24679041, 24964969, 25098392, 25078431, 25491831



Patch  26002778     : applied on Thu Sep 21 02:33:01 BST 2017

Unique Patch ID:  21306682

Patch description:  "OCW Patch Set Update : 12.2.0.1.170718 (26002778)"

Created on 3 Jul 2017, 03:26:30 hrs PST8PDT

Bugs fixed:

26144044, 25541343, 25715179, 25493588, 24932026, 24801915, 25832375

25728787, 25825732, 24578464, 25832312, 25742471, 25790699, 25655495

25307145, 25485737, 25505841, 25697364, 24663993, 25026470, 25591658

25537905, 24451580, 25409838, 25371632, 25569634, 25245759, 24665035

25646592, 25025157, 24732650, 24664849, 24584419, 24423011, 24831158

25037836, 25556203, 24464953, 24657753, 25197670, 24796183, 20559126

25197395, 24808260







--------------------------------------------------------------------------------



OPatch succeeded.

[oracle@vna02 ~]$









DB Home Inventory:







[oracle@vna02 ~]$ export PATH=$ORACLE_HOME/OPatch:$PATH

[oracle@vna02 ~]$ which opatch

/u01/app/oracle/product/12.2.0/dbhome_1/OPatch/opatch

[oracle@vna02 ~]$

[oracle@vna02 ~]$

[oracle@vna02 ~]$ opatch lsinventory

Oracle Interim Patch Installer version 12.2.0.1.9

Copyright (c) 2017, Oracle Corporation.  All rights reserved.





Oracle Home       : /u01/app/oracle/product/12.2.0/dbhome_1

Central Inventory : /u01/app/oraInventory

from           : /u01/app/oracle/product/12.2.0/dbhome_1/oraInst.loc

OPatch version    : 12.2.0.1.9

OUI version       : 12.2.0.1.4

Log file location : /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/opatch2017-09-21_02-45-58AM_1.log



Lsinventory Output file location : /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2017-09-21_02-45-58AM.txt



--------------------------------------------------------------------------------

Local Machine Information::

Hostname: vna02

ARU platform id: 226

ARU platform description:: Linux x86-64



Installed Top-level Products (1):



Oracle Database 12c                                                  12.2.0.1.0

There are 1 products installed in this Oracle Home.





Interim patches (2) :



Patch  26123830     : applied on Thu Sep 21 02:32:03 BST 2017

Unique Patch ID:  21405588

Patch description:  "DATABASE RELEASE UPDATE: 12.2.0.1.170718 (26123830)"

Created on 7 Jul 2017, 00:33:59 hrs PST8PDT

Bugs fixed:

23026585, 24336249, 24929210, 24942749, 25036474, 25110233, 25410877

25417050, 25427662, 25459958, 25547901, 25569149, 25600342, 25600421

25606091, 25655390, 25662088, 24385983, 24923215, 25099758, 25429959

25662101, 25728085, 25823754, 22594071, 23665623, 23749454, 24326846

24334708, 24560906, 24573817, 24578797, 24609996, 24624166, 24668398

24674955, 24744686, 24811725, 24827228, 24831514, 24908321, 24976007

25184555, 25210499, 25211628, 25223839, 25262869, 25316758, 25337332

25455795, 25457409, 25539063, 25546608, 25612095, 25643931, 25410017

22729345, 24485174, 24509056, 24714096, 25329664, 25410180, 25607726

25957038, 25973152, 26024732, 24376878, 24589590, 24676172, 23548817

24796092, 24907917, 25044977, 25736747, 25766822, 25856821, 25051628

24534401, 24835919, 25050160, 25395696, 25430120, 25616359, 25715167

25967985



Patch  26002778     : applied on Thu Sep 21 02:31:51 BST 2017

Unique Patch ID:  21306682

Patch description:  "OCW Patch Set Update : 12.2.0.1.170718 (26002778)"

Created on 3 Jul 2017, 03:26:30 hrs PST8PDT

Bugs fixed:

26144044, 25541343, 25715179, 25493588, 24932026, 24801915, 25832375

25728787, 25825732, 24578464, 25832312, 25742471, 25790699, 25655495

25307145, 25485737, 25505841, 25697364, 24663993, 25026470, 25591658

25537905, 24451580, 25409838, 25371632, 25569634, 25245759, 24665035

25646592, 25025157, 24732650, 24664849, 24584419, 24423011, 24831158

25037836, 25556203, 24464953, 24657753, 25197670, 24796183, 20559126

25197395, 24808260







--------------------------------------------------------------------------------



OPatch succeeded.

[oracle@vna02 ~]$

 

ACFS Snapshot – A Walk Through

This blog explores some of the new 12.2 ACFS features.  We will walk through the ACFS snapshot process flow:
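The read-only snapshot inspected below was created beforehand with a command along these lines (a minimal sketch; per the acfsutil help output at the end of this section, -r requests a read-only snapshot):

[oracle@oracle122 log]$ acfsutil snap create -r just_before_load /acfsmounts/acfsdata/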

 

[oracle@oracle122 log]$ acfsutil snap info /acfsmounts/acfsdata/

snapshot name:               just_before_load

snapshot location:           /acfsmounts/acfsdata/.ACFS/snaps/just_before_load

RO snapshot or RW snapshot:  RO

parent name:                 /acfsmounts/acfsdata/

snapshot creation time:      Wed Mar 22 20:36:09 2017

storage added to snapshot:   8650752   (   8.25 MB )

number of snapshots:  1

snapshot space usage: 8704000  (   8.30 MB )

[oracle@oracle122 log]$ du -sk .

18292  .


[oracle@oracle122 log]$ acfsutil snap create -w -p just_before_load just_about_batch_upload /acfsmounts/acfsdata/

acfsutil snap create: Snapshot operation is complete.

[oracle@oracle122 log]$ acfsutil snap info /acfsmounts/acfsdata

snapshot name:               just_before_load

snapshot location:           /acfsmounts/acfsdata/.ACFS/snaps/just_before_load

RO snapshot or RW snapshot:  RO

parent name:                 /acfsmounts/acfsdata

snapshot creation time:      Wed Mar 22 20:36:09 2017

storage added to snapshot:   8650752   (   8.25 MB )

snapshot name:               just_about_batch_upload

snapshot location:           /acfsmounts/acfsdata/.ACFS/snaps/just_about_batch_upload

RO snapshot or RW snapshot:  RW

parent name:                 just_before_load

snapshot creation time:      Wed Mar 22 20:42:56 2017

storage added to snapshot:   8650752   (   8.25 MB )

[root@oracle122 ~]# acfsutil compress on /acfsmounts/acfsdata/log/wtf

acfsutil compress on: ACFS-05518: /acfsmounts/acfsdata/log/wtf is not an ACFS mount point
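The error above is expected: acfsutil compress on operates at the file system level, so it must be pointed at the ACFS mount point rather than at an individual file. A corrected invocation would look like this (a sketch using the mount point from this walkthrough):

[root@oracle122 ~]# acfsutil compress on /acfsmounts/acfsdata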

[root@oracle122 ~]# acfsutil compress info /acfsmounts/acfsdata/log/wtf

The file /acfsmounts/acfsdata/log/wtf is not compressed.

[root@oracle122 ~]# acfsutil compress info /acfsmounts/acfsdata/log/nitin

nitin             nitin_compressed 

[root@oracle122 ~]# acfsutil compress info /acfsmounts/acfsdata/log/nitin_compressed

Compression Unit size: 32768

Disk storage used:   (  60.00 KB )

Disk storage saved:  (   7.75 MB )

Storage used is 1% of what the uncompressed file would use.

File is not scheduled for asynchronous compression.

[oracle@oracle122 log]$ ls -l lastlog*

-rw-r--r--. 1 oracle oracle 145708 Mar 22 12:07 lastlog

-rw-r--r--. 1 oracle oracle 145708 Mar 23 05:49 lastlog_compressed

[oracle@oracle122 log]$

[root@oracle122 ~]# acfsutil compress info /acfsmounts/acfsdata/log/lastlog_compressed

Compression Unit size: 32768

Disk storage used:   (  32.00 KB )

Disk storage saved:  ( 110.29 KB )

Storage used is 22% of what the uncompressed file would use.

File is not scheduled for asynchronous compression.

If you are curious about the other snapshot options, then look below!

[oracle@oracle122 log]$ acfsutil snap -h

 Command Subcmd    Arguments

--------------- --------- ------------------------------------------

snap create    [-w|-r|-c] [-p parent_snap_name] <snap_name> <mountpoint>

snap create    [-w]                      - create a writeable snapshot

snap create    [-r]                      - create a read-only snapshot

snap create                                This is the default behavior

snap create    [-c]                      - create a writable snapshot of a

snap create                                snap duplicate target

snap create    [-p parent_snap_name]     - create a snapshot from a snapshot

snap delete    <snap_name> <mountpoint> - delete a file system snapshot

snap rename    <old_snap_name> <new_snap_name> <mountpoint>

snap rename                             - rename a file system snapshot

snap convert   -w|-r <snap_name> <mountpoint>

snap convert   -w                       - convert to a writeable snapshot

snap convert   -r                       - convert to a read-only snapshot

snap info      [-t] [<snap_name>] <mountpoint>

snap info                    - get information about snapshots

snap info      [-t]          - display family tree starting at next name given

snap info      [<snap_name>] - snapshot name

snap info      <mountpoint>  - mount point

snap remaster  {<snap_name> | -c} <volume_path>

snap remaster                           - make the specified snapshot

snap remaster                             the master file system.  The

snap remaster                             current master and all other

snap remaster                             snapshots will be deleted.

snap remaster                             WARNING: This operation cannot

snap remaster                             be reversed.  Admin privileges

snap remaster                             are required.  The file system

snap remaster                             must be unmounted on all nodes.

snap remaster                             The file system must not have

snap remaster                             Replication running.

snap remaster  [-c]                     - Continue an interrupted snapshot

snap remaster                             remastering.  Use the -c option,

snap remaster                             instead of the <snap_name>, to

 snap remaster                             complete an interrupted

snap remaster                             snapshot remastering.

snap remaster  [-f]                     - Force the snapshot remastering.

 snap duplicate apply     [-b] [-d {0..6}] [<snap_name>] <mountpoint>

 snap duplicate apply     -b                       - maintain backup snapshot

 snap duplicate apply     [-d {0..6}]              - set trace level for debugging

 snap duplicate apply     [<snap_name>]            - target snapshot

 snap duplicate apply     <mountpoint>             - mount point for target site

 snap duplicate create    [-r] [-i oldsnapname] [-d {0..6}] <newsnapname> <mountpoint>

 snap duplicate create    [-r]              - restart of data stream

 snap duplicate create    [-p parentsnap]   - parent snap for base site

 snap duplicate create    [-i oldsnapname]  - old snapshot name

 snap duplicate create    [-d {0..6}]       - set trace level for debugging

 snap duplicate create    <newsnapname>     - new snapshot name

 snap duplicate create    <mountpoint>      - mount point for base site

 snap quota     [[-|+]nnn[K|M|G|T|P]]<snap_name> <mountpoint>

 snap quota                              - set quota for snapshot

 

Grid Infrastructure and RAC 12.2 New Features – a Recap

The following list covers the new 12.2 Oracle RAC and Grid Infrastructure features that I believe to be the most interesting. It is a personal list, and I apologize to the RAC Dev team if I left out any features.

Streamlined Grid Infrastructure Installation

12.2 Grid Infrastructure software is available as an image file for download and installation. The key objective of this feature was to enable a simpler and quicker installation of Grid Infrastructure. Administrators simply prep the system by creating a new Grid home directory, the appropriate users, permissions, and kernel settings. Once that is completed, Admins extract the image file into the newly created Grid home and execute the gridSetup.sh script, which invokes the setup wizard and registers the Oracle Grid Infrastructure stack with the Oracle inventory. This installation approach can be used for Oracle Grid Infrastructure for Cluster and Standalone Server configurations. This new software installation method will improve large-scale deployment automation as well as deployment of customized images, Patch Set Updates (PSUs), and patches.
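As a rough sketch (the Grid home path, staging location, and zip file name below are assumptions for illustration), the flow looks something like this:

# Prep the new Grid home as the grid software owner (assumed path and zip name)
$ mkdir -p /u01/app/12.2.0.1/grid
$ cd /u01/app/12.2.0.1/grid
$ unzip -q /stage/linuxx64_12201_grid_home.zip

# Launch the setup wizard to configure and register the stack
$ ./gridSetup.sh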

Real Application Clusters Reader Nodes

In 12.2, Oracle extended the capability of Flex Clusters by introducing Reader nodes. Reader nodes are Leaf nodes (in a Flex Cluster) that run read-only RAC database instances. Reader nodes are not affected by RAC reconfigurations caused by node evictions or other cluster node membership changes, as long as the Hub Node to which they are connected remains part of the cluster. Reader nodes allow users to create huge reader farms (up to 64 reader nodes per Hub Node), enabling massive parallel processing. In this architecture, updates made on the read/write instances (running on Hub nodes) are immediately propagated to the read-only instances on the Leaf nodes, where they can be used for online reporting or instantaneous queries. Users can create services to direct queries to the read-only instances running on reader nodes, as sketched below.
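A hypothetical sketch (the database, service, and instance names are made up, and the exact options for a reader-node service in your configuration may differ):

$ srvctl add service -db orcl -service rpt_ro -preferred orcl5,orcl6
$ srvctl start service -db orcl -service rpt_ro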

Service-Oriented Buffer Cache Access

RAC services, which are used to allocate and distribute workloads across RAC instances, are the cornerstone of RAC workload management. There is a strong relationship between a RAC service, a specific workload, and the database objects it accesses. In 12.2 RAC, a service-oriented buffer cache access feature was introduced to improve scale and performance by optimizing instance and node buffer cache affinity. This is done by caching, or pre-warming, instances with data blocks for objects accessed where a service is expected to run.


Server Weight-Based Node Eviction

When there is a split-brain, or when a node eviction decision must be made, traditionally the decision was based on the age, or duration, of the nodes in the cluster; i.e., nodes with a larger uptime in the cluster would survive. In 12.2 RAC, server weight-based node eviction uses a more intelligent tie-breaker mechanism to evict a particular node or a group of nodes from a cluster. The server weight-based node eviction feature introspects the current load on those servers as part of the decision. Two principal mechanisms, a system-inherent automatic mechanism and a user-input-based mechanism, are used to offer and provide guidance.

Load-Aware Resource Placement

Load-aware resource placement prevents overloading a server with more database instances than the server is capable of running. The metrics used to determine whether an application can be started on a given server are based on the expected resource consumption of the application as well as the capacity of the server in terms of CPU and memory. Administrators can define database resource requirements such as CPU (cpu_count) and memory (memory_target) to Clusterware. Clusterware uses this information to place database instances only on servers that have a sufficient number of CPUs, a sufficient amount of memory, or both.

srvctl modify database -db testdb -cpucount 8 -memorytarget 64g

Hang Manager

The Hang Manager feature first became available in 11gR1. In that initial version, Hang Manager evaluated and identified system hangs, then dumped the relevant information, the "wait-for graph," into a trace file. In 12.2, Hang Manager takes action and attempts to resolve the system hang. An ORA-32701 error message is logged in the alert log to reflect the hang resolution. Hang Manager runs in both single-instance and Oracle RAC database instances. It is also constantly aware of processes running in reader node instances, and checks whether any of these processes are blocking progress on Hub Nodes, so it can take action if possible.

Separation of Duty for Administering RAC Clusters

12.2 RAC introduces a new administrative privilege called SYSRAC. This privilege is used by the Clusterware agent, and removes the need to use the SYSDBA privilege for RAC administrative tasks, thus reducing the reliance on SYSDBA on production systems. Note, the SYSRAC privilege is the default mode for the Clusterware agent connecting to the database; e.g., when executing RAC utilities such as SRVCTL.

Rapid Home Provisioning of Oracle Software

Rapid Home Provisioning enables you to create clusters and to provision, patch, and upgrade Oracle Grid Infrastructure and Oracle Database homes. It can also provision 11.2 clusters, as well as applications and middleware.

Extended Clusters

In 12.2, GI administrators can create an extended RAC cluster across two or more geographically separate sites. Note, each site will include a set of servers with its own storage. If a site fails, the other site acts as an active standby. 12.2 extended clusters can be built on initial installation or converted from an existing (non-Flex ASM) cluster using the ConvertToExtended script.

De-support of OCR and Voting Files on Shared Filesystem

In Grid Infrastructure 12.2, the placement of Oracle Clusterware files: the Oracle Cluster Registry (OCR), and the Voting Files, directly on a shared file system is desupported. Only ASM or NFS is supported. If you need to use a supported shared file system, either a Network File System, or a shared cluster file system instead of native disk devices, then you must create Oracle ASM disks on supported network file systems that you plan to use for hosting Oracle Clusterware files before installing Oracle Grid Infrastructure. You can then use the Oracle ASM disks in an Oracle ASM disk group to manage Oracle Clusterware files. If your Oracle Database files are stored on a shared file system, then you can continue to use shared file system storage for database files, instead of moving them to Oracle ASM storage.

Clonewars – Next Gen Cloning with Oracle 12.2 Multitenancy (Part Deux)… With a Sprinkle of PDB Refresh

 

This is Part 2 of the Remote [PDB] Cloning capabilities of Oracle 12.2 Multitenant.

Cloning Example 2: Remote clone copy from an existing CDB/PDB into a local PDB (PDB->PDB). In this example, "darkside" is the CDB, with darthmaul being the source/remote PDB, and yoda (PDB) is the local target.

 

SQL> select database_name from v$database;

DATABASE_NAME
--------------------------------------------------------
DARKSIDE

darkside$SQL> alter pluggable database darthmaul open;
Pluggable database altered.

SQL> select name, open_mode from v$pdbs;
NAME      OPEN_MODE
--------- ----------
PDB$SEED  READ ONLY
DARTHMAUL READ WRITE

darkside$SQL> archive log list ;
Database log mode            Archive Mode
Automatic archival           Enabled
Archive destination          USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     1
Next log sequence to archive   3
Current log sequence         3

darkside$SQL> select name, open_mode from v$database;
NAME     OPEN_MODE
--------- --------------------
DARKSIDE  READ WRITE

darkside$SQL> COLUMN property_name FORMAT A30
COLUMN property_value FORMAT A30
SELECT property_name, property_value
FROM   database_properties
WHERE  property_name = 'LOCAL_UNDO_ENABLED'; 
PROPERTY_NAME                PROPERTY_VALUE
------------------------------ ------------------------------
LOCAL_UNDO_ENABLED           TRUE


$ cat darkside_create_remote_clone_user.sql
create user c##darksidecloneuser identified by cloneuser123 container=ALL;
grant create session, create pluggable database to c##darksidecloneuser  container=ALL;

$cat darkside_db_link.sql
create database link darksideclone_link
CONNECT TO c##darksidecloneuser IDENTIFIED BY cloneuser123 USING 'darkside'

Nishan$SQL> select DB_LINK,HOST from dba_db_links;
DB_LINK        HOST
------------  ---------------------------
SYS_HUB          SEEDDATA
REMOTECLONELINK  hansolo
DARKSIDECLONE_LINK darkside

darkside$SQL> select name from v$datafile;
NAME
-------------------------------------------------------------------------------

+ACFSDATA/DARKSIDE/4E63D836FC5AF80DE053B214A8C07E55/DATAFILE/system.276.942656929

+ACFSDATA/DARKSIDE/4E63D836FC5AF80DE053B214A8C07E55/DATAFILE/sysaux.277.942656929

+ACFSDATA/DARKSIDE/4E63D836FC5AF80DE053B214A8C07E55/DATAFILE/undotbs1.275.942656929

+ACFSDATA/DARKSIDE/4E63D836FC5AF80DE053B214A8C07E55/DATAFILE/users.279.942657041

+ACFSDATA/DARKSIDE/4E63D836FC5AF80DE053B214A8C07E55/DATAFILE/rey.291.942877803

+ACFSDATA/DARKSIDE/4E63D836FC5AF80DE053B214A8C07E55/DATAFILE/luke.292.942877825

darkside$SQL> show con_name
CON_NAME
-----------------------------
DARTHMAUL


darkside$SQL> create table foofighters tablespace rey as select * from obj$;
Table created.

Nishan$SQL> create pluggable database yoda from darthmaul@DARKSIDECLONE_LINK;

Pluggable database created.

Nishan$SQL> alter session set container = yoda;
Session altered.

yoda$SQL> select name, open_mode from v$pdbs;
NAME                    OPEN_MODE
----------------------------------------
YODA                   MOUNTED

yoda$SQL> select name from v$datafile;
NAME
--------------------------------------------------------------------------------
+DATA/NISHAN/4E82D233272C7273E0538514A8C00DF3/DATAFILE/system.310.942878321
+DATA/NISHAN/4E82D233272C7273E0538514A8C00DF3/DATAFILE/sysaux.311.942878321
+DATA/NISHAN/4E82D233272C7273E0538514A8C00DF3/DATAFILE/undotbs1.309.942878321
+DATA/NISHAN/4E82D233272C7273E0538514A8C00DF3/DATAFILE/users.306.942878319
+DATA/NISHAN/4E82D233272C7273E0538514A8C00DF3/DATAFILE/rey.307.942878319
+DATA/NISHAN/4E82D233272C7273E0538514A8C00DF3/DATAFILE/luke.308.942878319


Now on to Refresh the PDB

SQL>create table foofighters tablespace rey as select * from obj$

Table created.




SQL> select segment_name from dba_segments where tablespace_name = 'REY'

SEGMENT_NAME

----------------------------------------------------------------

FOOFIGHTERS




SQL> select name, open_mode from v$pdbs;

NAME            OPEN_MODE

------------------------------

PDB$SEED        READ ONLY

OBIWAN          READ WRITE

FORCEAWAKENS    MOUNTED

YODA            MOUNTED




SQL> alter pluggable database yoda open read only;

Pluggable database altered.




SQL> select segment_name from dba_segments where tablespace_name = 'REY';

no rows selected




SQL> alter session set container = yoda;

Session altered.




SQL> ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;

Pluggable database altered.




SQL> ALTER PLUGGABLE DATABASE refresh;

Pluggable database altered.




SQL> select segment_name from dba_segments where tablespace_name = 'REY';

select segment_name from dba_segments where tablespace_name = 'REY'  

ERROR at line 1:

ORA-01219: database or pluggable database not open: queries allowed on fixed

tables or views only




SQL> ALTER PLUGGABLE DATABASE open read only;

Pluggable database altered.




SQL> select segment_name from dba_segments where tablespace_name = 'REY';

SEGMENT_NAME

-----------------------------------------------------

FOOFIGHTERS

 

 

 

 

 

Clonewars – Next Gen Cloning with Oracle 12.2 Multitenancy (Part Un)

In this blog, we will walk through the Oracle 12.2 Remote Cloning of PDB feature. Remote cloning was also available in Oracle 12.1; however, it required placing the production database (which is usually the source) in read-only mode, which made the feature very inefficient to leverage. In 12.2, it is now possible to keep the production database in read-write mode and allow for an online copy of the database; this is referred to as a "hot clone". The distinction between a hot clone and a cold clone is only relevant for customers running 12.1 Multitenant. As of 12.2, all clones are hot clones unless the source database is explicitly closed.

We will illustrate this with two real-world examples; just the names have been changed to protect the extremely innocent. And sorry about the Star Wars references... just couldn't help myself!!

Note, for clarity: the remote DB is the source database that will be cloned, and the local DB is the CDB that the PDB will be cloned into.

Cloning Example 1: Remote clone copy from an existing non-CDB into a local PDB (non-CDB->PDB). In this example, "hansolo" is the remote non-CDB (the source).

Cloning Example 2: Remote clone copy from an existing CDB/PDB into a local PDB (PDB->PDB). In this example, "darkside" is the CDB, with obiwan being the source PDB, and nishan-obiwan (PDB) is the local target.

Cloning Example 1

Prep work and validation
 Hansolo$SQL> startup
 ORACLE instance started.
 Total System Global Area 2483027968 bytes
 Fixed Size 8795808 bytes
 Variable Size 637536608 bytes
 Database Buffers 1610612736 bytes
 Redo Buffers 7979008 bytes
 In-Memory Area 218103808 bytes
 Database mounted.
 Database opened.
Hansolo$SQL> select database_name from v$database;
 DATABASE_NAME
 ------------------------------------------------------
 HANSOLO

Nishan$SQL> select name from v$pdbs;
 NAME
 ------------------------------------------------------------------------------
 PDB$SEED
 OBIWAN

In 12.2, each PDB has its own undo tablespace.
This new undo management configuration is called local undo mode, and it is the underlying
design for many of the portability features in 12.2. Local undo is the default for greenfield/fresh 12.2 installs;
for upgrades to 12.2, shared undo will need to be converted to local undo (we won't cover that here).
 
Hansolo$SQL> SELECT property_name, property_value FROM database_properties WHERE property_name = 'LOCAL_UNDO_ENABLED';
PROPERTY_NAME PROPERTY_VALUE
------------------------------ ------------------------------
LOCAL_UNDO_ENABLED TRUE

Hansolo$SQL> archive log list
 Database log mode Archive Mode
 Automatic archival Enabled
 Archive destination USE_DB_RECOVERY_FILE_DEST
 Oldest online log sequence 118
 Next log sequence to archive 120
 Current log sequence 120

Hansolo$SQL> select name, open_mode from v$database
NAME OPEN_MODE
--------- --------------------
HANSOLO READ WRITE

Hansolo$SQL> create tablespace kyloren datafile size 20M;

Tablespace created.

Hansolo$SQL> create tablespace MazKanata datafile size 20M

Tablespace created.

Hansolo$SQL> select tablespace_name from dba_tablespaces;

TABLESPACE_NAME
------------------------------
 SYSTEM
 SYSAUX
 UNDOTBS1
 TEMP
 USERS
 KYLOREN
 MAZKANATA

Hansolo$SQL> select current_scn from v$database;

CURRENT_SCN
 -----------
 27506427

$cat hansolo_create_remoteclone.sql
 CREATE USER cloneuser IDENTIFIED BY cloneuser123;
 GRANT CREATE SESSION, CREATE PLUGGABLE DATABASE TO cloneuser;

Hansolo$SQL>@hansolo_create_remoteclone.sql

Verify user connection

Hansolo$SQL> connect cloneuser/cloneuser123;
 Connected.

Now, prep the source environment

Nishan$SQL> select database_name from v$database;

DATABASE_NAME
-------------------------------------------------
NISHAN

Create DBLink to hansolo from nishan

$cat pdbclone_dblink.sql
CREATE DATABASE LINK remoteclonelink CONNECT TO cloneuser IDENTIFIED BY 
cloneuser123 USING 'hansolo'

Nishan$SQL> @pdbclone_dblink.sql

Nishan$SQL> select db_link, host from dba_db_links;
DB_LINK            HOST
----------------  -----------------
SYS_HUB           SEEDDATA 
REMOTECLONELINK   hansolo 

Verify connection to hansolo from forceawakens PDB

$ sqlplus cloneuser/cloneuser123@hansolo

Nishan$SQL> create pluggable database forceawakens from non$cdb@REMOTECLONELINK;

Pluggable database created.

Nishan$SQL> alter session set container = FORCEAWAKENS;
Session altered.

forceawakens$SQL> select name, open_mode from v$pdbs;
 NAME           OPEN_MODE
---------       ----------------------
FORCEAWAKENS    MOUNTED

forceawakens$SQL> select name from v$datafile;
NAME
-------------------------------------------------------------------------------
 +DATA/NISHAN/4E6DBABFDE2EBBECE0538514A8C0247B/DATAFILE/system.302.942700581
 +DATA/NISHAN/4E6DBABFDE2EBBECE0538514A8C0247B/DATAFILE/sysaux.301.942700581
 +DATA/NISHAN/4E6DBABFDE2EBBECE0538514A8C0247B/DATAFILE/undotbs1.300.942700581
 +DATA/NISHAN/4E6DBABFDE2EBBECE0538514A8C0247B/DATAFILE/users.297.942700581
 +DATA/NISHAN/4E6DBABFDE2EBBECE0538514A8C0247B/DATAFILE/kyloren.298.942700581
 +DATA/NISHAN/4E6DBABFDE2EBBECE0538514A8C0247B/DATAFILE/mazkanata.299.942700581

forceawakens$SQL> select current_scn from v$database;

CURRENT_SCN
-----------
 0

Since the source database was a non-CDB, it needs to be converted to be PDB-capable using the $ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql script. This is required before you can open and bring the PDB online.

forceawakens$SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql

forceawakens$SQL> alter pluggable database open;

In-Memory of.. Sorry I mean 12.2 In-Memory New Features

As part of our continuing 12.2 New Feature series, we explore different areas of Oracle 12.2.

In this blog we discuss the new In-Memory 12.2 features.

In-Memory Expressions

An In-memory expression, or “hot” expression, enables frequently evaluated query expressions to be materialized in the In-Memory Column Store, for subsequent reuse. By default, the procedure DBMS_INMEMORY_ADMIN.IME_CAPTURE_EXPRESSIONS identifies and populates IM expressions.

Populating the materialized values of frequently used query expressions, into the In-Memory Column Store, greatly reduces the system resources required to execute queries, allowing for better scalability. The procedure, IME_CAPTURE_EXPRESSIONS, will capture and populate the 20 “hottest” expressions in the database for a specified time range.
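A minimal sketch of invoking the capture procedure (the snapshot argument shown is an assumption; check the DBMS_INMEMORY_ADMIN documentation for the valid values in your release):

SQL> exec DBMS_INMEMORY_ADMIN.IME_CAPTURE_EXPRESSIONS('CURRENT')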

In-Memory Virtual Columns

An IM virtual column is a column whose value is derived by evaluating an expression. IM virtual columns improve query performance by avoiding repeated calculations. Also, the database can scan and filter IM virtual columns using techniques such as SIMD vector processing.
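A minimal sketch (the table and column names are hypothetical, and the parameter change is shown with SCOPE=SPFILE to be conservative); the INMEMORY_VIRTUAL_COLUMNS parameter also appears, with its default of MANUAL, in the parameter listings later in this document:

SQL> alter system set inmemory_virtual_columns = ENABLE scope=spfile;

SQL> create table sales_im
       ( amount    number,
         tax_rate  number,
         total_due as (amount * (1 + tax_rate)) )  -- virtual column
       inmemory;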

In-Memory FastStart

Before 12.2, the columnar format was only available in memory, meaning that after a database restart the In-Memory Column Store had to be repopulated. This multi-step process converted traditional row-formatted data into the compressed columnar format and placed it in memory.

Now, In-Memory FastStart optimizes the population of database objects (tables, partitions, and subpartitions) in the In-Memory Column Store by keeping a copy of the compressed columnar data on disk. This significantly reduces the time required to repopulate In-Memory objects.

Use the DBMS_INMEMORY_ADMIN.FASTSTART_ENABLE procedure to designate a specific tablespace for FastStart.
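A minimal sketch, assuming a tablespace named FS_TBS has already been created to hold the FastStart data:

SQL> exec DBMS_INMEMORY_ADMIN.FASTSTART_ENABLE('FS_TBS')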

Automatic Data Optimization (ADO) Support for In-Memory Column Store

In 12.2, ADO also manages the IM column store as a new data tier. When enabled, the Heat Map feature automatically tracks data access patterns; ADO uses this Heat Map data to implement user-defined policies at the database level. ADO manages the In-Memory Column Store by moving objects (tables, partitions, or subpartitions) in and out of memory based on Heat Map statistics, as sketched below.
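As a rough sketch (the table name is hypothetical, and the exact ILM clause syntax should be verified against the 12.2 documentation), an ADO policy could evict a cold segment from the column store like this:

SQL> alter table sales ilm add policy
       no inmemory segment after 30 days of no access;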


In-Memory Join Groups

IM column stores can use join groups to optimize joins of populated IM tables. Join groups eliminate the performance overhead of decompressing and hashing column values. Create join groups using the CREATE INMEMORY JOIN GROUP statement:

CREATE INMEMORY JOIN GROUP prodid_jg (mine.items(product_id),mine.product_line(product_id));

In-Memory Support on Oracle Active Data Guard

12.2 allows the IM column store to be enabled in Oracle Active Data Guard environments by setting the init.ora parameter INMEMORY_ADG_ENABLED to TRUE. Using the In-Memory Column Store on an Active Data Guard standby database enables users to offload larger and heavier reporting workloads onto standby databases. Moreover, 12.2 permits the standby database to populate a completely different set of data in the In-Memory Column Store than the primary database, providing greater data access flexibility.
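On the standby, this amounts to setting the parameter (it also shows up in the parameter listings later in this document) together with a non-zero INMEMORY_SIZE; a sketch, with sizes that are assumptions:

SQL> alter system set inmemory_adg_enabled = TRUE scope=spfile;
SQL> alter system set inmemory_size = 2G scope=spfile;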

In-Memory Column Store Dynamic Resizing

You can now dynamically increase the size of the in-memory area, while the database is open, assuming that enough memory is available within the SGA. Thus, the in-memory column store can be resized without restarting the database, providing greater application availability.
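For example (the target size is arbitrary; the new value must be larger than the current INMEMORY_SIZE and must fit within the SGA):

SQL> alter system set inmemory_size = 400M scope=both;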

ACFS 12.2 New Features – a Recap

Oracle Automatic Storage Management Cluster File System (ACFS) made its debut with Oracle 11.2. Many DBAs are not aware of the vast set of features available with ACFS. With each release and update of Oracle, significant enhancements have been made. With Oracle Database 12c Release 2, significant new functionality was added to ACFS.

Snapshot Enhancements

In Oracle 12.2, Oracle extends ACFS snapshot functionality and further simplifies file system snapshot operations. The following are a few of the key new features with snapshots:

Admins can now, if needed, impose quotas on snapshots to limit the amount of data that can be written to a snapshot. Quotas are set at the snapshot level. Oracle also provides the capability to rename an existing ACFS snapshot, to allow more user-friendly names.
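Going by the acfsutil snap usage shown earlier in this document, the quota and rename operations look roughly like this (the snapshot names and mount point are hypothetical):

# Set a 2 GB quota on an existing snapshot
acfsutil snap quota 2G drsnap /acfsmounts/acfs1

# Give the snapshot a friendlier name
acfsutil snap rename drsnap nightly_snap /acfsmounts/acfs1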

When we delete a snapshot with the “acfsutil snap delete snapshot mount_point” command, we can force a delete, even if there are open files.

There are several new capabilities around snapshot remastering and duplication. The new ACFS snapshot remaster capability allows a snapshot in the snapshot registry to become the primary file system. ACFS snapshot duplication features are also introduced: the "acfsutil snap duplicate create" command can be used to duplicate a snapshot from an existing snapshot to a standby target file system.

The “apply” option to the “acfsutil snap duplicate” command, allows us to apply deltas to the target ACFS file system or snapshot. If this is the initial apply, the target file system must be empty. If the target had been applied before, then the apply process becomes an incremental update. Before the incremental update occurs, the contents of the target file system must match the content of the older snapshot, since the last incremental update. Also, the contents of the target snapshot cannot be modified while the apply is happening.
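Based on the usage text earlier in this document, a first-time duplication from a source file system to an (empty) target might be sketched as follows; the mount points and host are hypothetical, and one pattern (verify against your version's documentation) is to pipe the create data stream, for example over ssh, into the apply command on the target:

acfsutil snap duplicate create dupsnap1 /acfsmounts/source_fs | \
  ssh oracle@standbyhost "acfsutil snap duplicate apply /acfsmounts/target_fs"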

Additionally, ACFS snapshot-based replication now uses SSH protocols to transmit data streams.

4k Sectors and Metadata

When Admins create an ACFS file system, they have the option to create the file system with a 4096-byte metadata structure. When issuing the mkfs command, you can specify the metadata block size with the -i option; the two valid values are 512 bytes and 4096 bytes. The 4096-byte metadata structure is made up of multiple 512-byte logical sectors.

If the COMPATIBLE.ADVM ASM Diskgroup attribute is set to 12.2 or greater, then the metadata block is 4096 bytes by default. If COMPATIBLE.ADVM attribute is set to less than 12.2, then the block size is set to 512 bytes. When the ADVM volume of the ACFS file system is set with 4K logical disk sector size, Direct I/O requests should be aligned on the 4K offset and be a multiple of 4k size for optimal performance.
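For example (the ADVM volume device name is hypothetical):

# Create an ACFS file system with 4096-byte metadata blocks
mkfs -t acfs -i 4096 /dev/asm/datavol1-123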

Defragger

You will very rarely need the defragmentation tool, because the ACFS allocation algorithm coalesces free space. However, for the rare situations where fragmentation does occur, under heavy workloads or with compressed files, Oracle provides the defrag option to the acfsutil command. We can now issue the "acfsutil defrag dir" or "acfsutil defrag file" commands for on-demand defragmentation.

ACFS performs all defrag operations in the background. With the -r option of the "acfsutil defrag dir" command, you can recursively defrag subdirectories, as shown below.
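For example (the mount point, directory, and file names are hypothetical):

# Recursively defragment a directory tree in the background
acfsutil defrag dir -r /acfsmounts/acfs1/reports

# Defragment a single (for example, compressed) file
acfsutil defrag file /acfsmounts/acfs1/exports/big_extract.dmp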

Compression Enhancements

ACFS compression can significantly reduce disk storage requirements for customers running databases on ACFS. Databases running on ACFS must be version 11.2.0.4 or higher. ACFS compression can be enabled on specific ACFS file systems for database files, RMAN backup files, archivelogs, Data Pump extract files, and general purpose files. Oracle does not support compression of redo logs, flashback logs, or control files.

When enabling ACFS compression for a file system, only new incoming files are compressed. All existing files on the file system remain uncompressed. Likewise, if you decide to uncompress a file system, Oracle will not decompress existing files; it simply disables compression for newly created files.

To compress and uncompress ACFS file systems, execute the acfsutil compress on or acfsutil compress off commands. To view compression state and space consumption information, you can execute the “acfsutil compress info” command. The commands “acfsutil info fs” and “acfsutil info file” now support ACFS compression status.
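A quick sketch against a hypothetical mount point:

# Enable compression for newly created files on the file system
acfsutil compress on /acfsmounts/acfs1

# Check compression state and space consumption
acfsutil compress info /acfsmounts/acfs1
acfsutil info fs /acfsmounts/acfs1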

At this time, databases with 2K or 4K block sizes are not supported for ACFS compression. ACFS compression is supported on Linux and AIX. ACFS compression also works with ACFS snapshot-based replication.

Loopback Devices

ACFS now supports loopback devices on the Linux operating system. With ACFS loopback device support, we can now take OVM images, templates, and virtual disks and present them as a block device. Files can be sparse or non-sparse. ACFS also supports Direct I/O on sparse images.

Metadata Collector

The metadata collector copies metadata structures from an Oracle ACFS file system to a separate output file that can be ingested for analysis and diagnostics. The metadata collector reads the contents of the file system and writes all metadata out to a specified output file. It can read the ACFS file system online, without requiring an outage. Note, this tool is not a replacement for the file system checker command (fsck), but a supplement for additional diagnosis and support. Even though the metadata collector can read the file system while it is online, for best results unmount the file system prior to metadata collection. The size of the output file is directly correlated to the size of the file system the collection is run against. To collect metadata for a file system, invoke the "acfsutil meta" command.

Auto-Resize

The auto-resize feature allows us to "autoextend" a file system when it is about to run out of space. Just like an Oracle datafile with the autoextend option enabled, we can now "autoextend" the ACFS file system by a specified increment. With the -a option of the "acfsutil size" command, we specify that increment.

We can also specify a maximum size, or quota, that the ACFS file system is allowed to "autoextend" to, to guard against runaway space consumption. To set the maximum size for an ACFS file system, execute the "acfsutil size" command with the -x option.
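A sketch with hypothetical sizes and mount point (exact option combinations may vary; check the acfsutil size usage on your system):

acfsutil size -a 1G /acfsmounts/acfs1      # auto-extend in 1 GB increments
acfsutil size -x 200G /acfsmounts/acfs1    # cap automatic growth at 200 GB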

12.2 SQLPlus History command – features and fumbles

Yes, there's been a lot of hoopla about the HISTORY capability in 12.2 SQLPlus, and I know my friend Gokhan Atil has written about it too. So I'm just gonna share this bit for my team, along with my feedback on it.

SQLPlus always lacked a history capability like the Unix/Linux shell history. Now, in 12.2 SQLPlus, it's here. Let me describe the feature and its quirks a bit.
Here's what SQLPlus help has to say about the functionality:

SQL> help hist

 HISTORY
 -------

 Stores, lists, executes, edits of the commands
 entered during the current SQL*Plus session.

 HIST[ORY] [N {RUN | EDIT | DEL[ETE]}] | [CLEAR]

 N is the entry number listed in the history list.
 Use this number to recall, edit or delete the command.

 Example:
 HIST 3 RUN - will run the 3rd entry from the list.

 HIST[ORY] without any option will list all entries in the list.

So, let’s walk thru this thing:

[oracle@oracle122 admin]$ sqlplus "/ as sysdba"
SQL*Plus: Release 12.2.0.1.0 Production on Sat Mar 4 14:43:55 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

To enable SQLPlus history, you issue "set history on" at the SQLPlus prompt.
However, it has to be set each time you connect to SQLPlus. I know my feeble little brain
will forget when I'm in a hurry and log in to SQLPlus, thus
I added "set history on" into $ORACLE_HOME/sqlplus/admin/glogin.sql
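One way to do that (assuming a writable glogin.sql in the default location):

$ echo "set history on" >> $ORACLE_HOME/sqlplus/admin/glogin.sql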

SQL> hist
SP2-1651: History list is empty.


Since I haven't done anything yet, the history is obviously empty. Now let's do stuff!!

SQL> select name from v$datafile
  2  ;

NAME
--------------------------------------------------------------------------------
+DATA/NISHAN/DATAFILE/system.257.937616243
+DATA/NISHAN/DATAFILE/sysaux.258.937616333
+DATA/NISHAN/DATAFILE/undotbs1.259.937616357
+DATA/NISHAN/4700A987085B3DFAE05387E5E50A8C7B/DATAFILE/system.267.937616485
+DATA/NISHAN/4700A987085B3DFAE05387E5E50A8C7B/DATAFILE/sysaux.266.937616485
+DATA/NISHAN/DATAFILE/users.260.937616359
+DATA/NISHAN/4700A987085B3DFAE05387E5E50A8C7B/DATAFILE/undotbs1.268.937616485
+DATA/NISHAN/49CF3DA922C680E1E0539C14A8C0E4E3/DATAFILE/system.272.937617037
+DATA/NISHAN/49CF3DA922C680E1E0539C14A8C0E4E3/DATAFILE/sysaux.273.937617039
+DATA/NISHAN/49CF3DA922C680E1E0539C14A8C0E4E3/DATAFILE/undotbs1.271.937617037
+DATA/NISHAN/49CF3DA922C680E1E0539C14A8C0E4E3/DATAFILE/users.275.937617207

11 rows selected.

SQL> hist
  1  select name from v$datafile
     ;

However, all these silly little mistakes get recorded into history too.... and thus my sloppiness gets shown in broad daylight :-(
SQL> hist
  1  select name from v$datafile
     ;
  2  1
  3  2
  4  del

Here's the delete command to remove my sloppiness:

SQL> hist 4 del
SQL> hist
  1  select name from v$datafile
     ;
  2  1
  3  2

Let's run some stuff to populate the history with real commands. Note that all desc, show, set, and select commands get recorded into history.
This may or may not be a good thing... think bloat.

SQL> show parameter mem

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
hi_shared_memory_address	     integer	 0
inmemory_adg_enabled		     boolean	 TRUE
inmemory_clause_default 	     string
inmemory_expressions_usage	     string	 ENABLE
inmemory_force			     string	 DEFAULT
inmemory_max_populate_servers	     integer	 2
inmemory_query			     string	 ENABLE
inmemory_size			     big integer 208M
inmemory_trickle_repopulate_servers_ integer	 1
percent
inmemory_virtual_columns	     string	 MANUAL
memory_max_target		     big integer 0
memory_target			     big integer 0
optimizer_inmemory_aware	     boolean	 TRUE
shared_memory_address		     integer	 0

SQL> show parameter inmemory

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
inmemory_adg_enabled		     boolean	 TRUE
inmemory_clause_default 	     string
inmemory_expressions_usage	     string	 ENABLE
inmemory_force			     string	 DEFAULT
inmemory_max_populate_servers	     integer	 2
inmemory_query			     string	 ENABLE
inmemory_size			     big integer 208M
inmemory_trickle_repopulate_servers_ integer	 1
percent
inmemory_virtual_columns	     string	 MANUAL
optimizer_inmemory_aware	     boolean	 TRUE

SQL> hist
  1  select name from v$datafile
     ;
  2  1
  3  2
  4  show parameter mem
  5  show parameter sga
  6  show parameter inmemory

SQL> desc v$inmemory_area
 Name					   Null?    Type
 ----------------------------------------- -------- ----------------------------
 POOL						    VARCHAR2(26)
 ALLOC_BYTES					    NUMBER
 USED_BYTES					    NUMBER
 POPULATE_STATUS				    VARCHAR2(26)
 CON_ID 					    NUMBER

Let's say I screw up this query (and I really did)!!

SQL> select * form v$inmemory_area;
select * form v$inmemory_area
         *
ERROR at line 1:
ORA-00923: FROM keyword not found where expected

I could do the old-school way, using the change command as follows:

SQL> c/form/from/
  1* select * from v$inmemory_area
SQL> /

POOL			   ALLOC_BYTES USED_BYTES POPULATE_STATUS
-------------------------- ----------- ---------- --------------------------
    CON_ID
----------
1MB POOL		     166723584		0 DONE
	 1

64KB POOL		      33554432		0 DONE
	 1

1MB POOL		     166723584		0 DONE
	 2


POOL			   ALLOC_BYTES USED_BYTES POPULATE_STATUS
-------------------------- ----------- ---------- --------------------------
    CON_ID
----------
64KB POOL		      33554432		0 DONE
	 2

1MB POOL		     166723584		0 DONE
	 3

64KB POOL		      33554432		0 DONE
	 3


6 rows selected.

SQL> hist
  1  select name from v$datafile
     ;
  2  1
  3  2
  4  show parameter mem
  5  show parameter sga
  6  show parameter inmemory
  ..
  ..
 11  select * form v$inmemory_area;
 12  c/for/from/
 15  /

But I sure wish I could delete multiple lines like this, but alas, I cannot


SQL> hist 11,12,13 del
SP2-1655: History command syntax error.

Anyway, here's the new way of editing a command in the history:

SQL> hist 
  1  select name from v$datafile
     ;
  2  show parameter mem
  3  show parameter sga
  4  show parameter inmemory
  5  desc v$inmemory_pool
  6  desc v$inmemory_size
  7  esc v$inmemory_area
  8  desc v$inmemory_area
  9  select * form v$inmemory_area;
 10  c/form/from/
 11  /

SQL> hist 9 edit
This pops up the OS editor (vi of course); you edit as you normally would, save/quit,
and you're back at the SQLPlus prompt.

But look what happens after "hist 9 edit": entry 9 is still the same; it's an immutable entry.
Which I suppose is kinda expected, as you shouldn't be able to change history :-) !!

Thus, by editing you're effectively storing a new entry in the SQLPlus buffer, and
you'll have to execute this buffer, just like the old-school way.
You can use the SQLPlus "l" command to list the current buffer.

 
This execution adds a new entry for the new command rather than replacing the history "9" entry !!!

SQL> hist 
  1  select name from v$datafile
     ;
  2  show parameter mem
  3  show parameter sga
  4  show parameter inmemory
  5  desc v$inmemory_pool
  6  desc v$inmemory_size
  7  esc v$inmemory_area
  8  desc v$inmemory_area
  9  select * form v$inmemory_area;
 10  c/for/from/
 11  /
 12  select * from  v$inmemory_area;

This feature is a good start, but I hope they add more capabilities to it, as it's still rudimentary.

Uh oh, I didn't set my Exadata core count correctly, now what?

Changing Capacity On-Demand Core Count in Exadata

We recently implemented an Exadata X6 at one of our client sites (yes, we don't use Oracle ACS, we do it ourselves). However, the client failed to tell us that they had licensed only a subset of the cores per compute node, and they told us only after we had already implemented and *migrated* production databases onto the X6. So how do we set the core count correctly after implementation (post-OEDA run)? We had heard horror stories about other folks needing to re-image to set the core count. To be specific, it's easy to increase cores, but decreasing them is nasty business.

The steps below are ones we used to decrease the core count:

1. Gracefully stop all databases running on all compute nodes.

2. Login to the compute nodes as root and run the “dbmcli” utility

3. Display the current core count using the following command:

LIST DBSERVER attributes coreCount

4. Change the core count to the desired count using the following command (this needs to be done on all compute nodes):

ALTER DBSERVER pendingCoreCount = 14

NOTE: Since we are decreasing the number of cores after installation of the system, the FORCE option is required.

ALTER DBSERVER pendingCoreCount = 14 FORCE

5. Reboot

6. Verify the change was correct by using the “LIST” command in step 3.

 

Just FYI… Troubleshooting

If there is an issue with the MS service starting up, it could be because of the Java being used on the system.

For Exadata release 12.1.2.3.1.160411, the version of Java was 1.8.0.66 and was flagged by a security audit as a vulnerability and was removed from the system.  When the system rebooted, the MS service couldn’t start back up because Java was removed. Follow these steps to reinstall Java and get the MS service restarted on the compute nodes.

1. Download the latest JDK from the Oracle site. NOTE: The RPM download was used.

2. Install the JDK package on the system:

rpm -ivh jdk-8u102-linux-x64.rpm

3. Redeploy the MS service application:

/opt/oracle/dbserver/dbms/deploy/scripts/unix/setup_dynamicDeploy DB -D

4. Restart the MS service:

ALTER DBSERVER RESTART SERVICES MS

 

What's with MGMTDB anyways

Those who have either upgraded to or fresh-installed the 12.1 (12c) Grid Infrastructure stack will notice a new database instance (-MGMTDB) that was provisioned automagically. So what is this MGMTDB, and why do I need this overhead?

So let's recap what this DB is and what it does.
The Management Database (the Grid Infrastructure Management Repository) is the central repository that stores Cluster Health Monitor data.

The MGMT database is a container database (CDB) with one pluggable database (PDB) running. However, this database runs out of the Grid Infrastructure home.
MGMTDB is a RAC One Node database; i.e., it runs on one node at a time, but because it is a clustered resource, it can be started on, or failed over to, any node in the cluster. MGMTDB is a non-critical component of the GI stack (with no "real" hard dependencies). This means that if MGMTDB fails or becomes unavailable, Grid Infrastructure continues running.

MGMTDB is configured (subject to change) with a 750 MB SGA, a 325 MB PGA, and a 5 GB database size. Note that, due to its small footprint, MGMTDB's SGA is not configured for hugepages. Since this database is dynamically created on install, the OUI installer has no pre-knowledge of the databases that are configured on, or will be migrated to, this cluster; thus, in order to avoid any database name conflicts, the name "-MGMTDB" was chosen (notice the "-"). Note, bypassing the MGMTDB installation is only allowed for upgrades to 12.1.0.2. New 12.1.0.2 installations or upgrades to future releases will require MGMTDB to be installed. If MGMTDB is not selected during upgrade, all features that depend on it (Cluster Health Monitor (CHM/OS), etc.) will be disabled.

So, if you are wondering where the datafiles and other structures for this database are stored: they are placed in the same disk group as the OCR and voting files. However, these database files can be migrated to a different ASM disk group post-install.

MGMTDB stores a subset of Operating System (OS) performance data for the longer term to provide diagnostic information and support intelligent workload management. Performance data (OS metrics similar to, but a subset of, Exawatcher) collected by Cluster Health Monitor (CHM) is also stored on local disk, so when not using MGMTDB, CHM data can still be obtained from local disk, but intelligent workload management (QoS) will be disabled. Longer term, MGMTDB will become a key component of the Grid Infrastructure and provide services for important components; because of this, MGMTDB will eventually become a mandatory component in future upgrades to releases on Exadata.
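A couple of quick checks for where MGMTDB is running and where CHM keeps its repository (assuming the 12.1 GI home tools are in the PATH; verify the commands against your release):

$ srvctl status mgmtdb            # which node is MGMTDB currently running on?
$ oclumon manage -get reppath     # where is the CHM repository stored?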

See document 1568402.1 for more details.

Queryable Opatch and Datapatch

One of my Oracle Support buddies mentioned to me a cool feature called Queryable Opatch. This new 12c Oracle Database feature provides the capability to store the patch inventory in the database and query it. Note this feature is specific to the Database home; it does not apply to the Grid Infrastructure home or other Oracle Homes.

I wasn't quite sure what problem this feature was trying to solve or what big value it was attempting to bring. Regardless, I thought I'd investigate and see what this feature was about. Mind you, I didn't do any in-depth analysis, but enough to shed light on the topic; I'll follow up later with a detailed analysis. We will also touch on the new Datapatch feature as well.

Query-able Opatch
In versions prior to 12c, the typical stack flow for getting Oracle patch inventory information was:
opatch lsinventory -> oraInventory_loc -> Central Inventory (OBase) -> local inventory (OHome)

Now, in 12c, the stack flow is as follows, if you implement and configure the Queryable Opatch feature:
opatch lsinventory (XML) -> queryable patch interface (QPI) -{XML}-> inventory data (in database)
  
The key ingredient here is the queryable patch interface (QPI).
QPI consists of:
	•	An external table (opatch_xml_inv) created by catqitab.sql.
	•	The external table's preprocessor (opatch_script_dir -> qop) used to load the data.
	•	A SQL interface, DBMS_QOPATCH (dbmsqopi.sql), used as the PL/SQL interface to query the inventory. The package contains the following procedures/functions: get_patch, get_patch_lsinventory, get_sqlpatch_status.


Once the external table is created using the catqitab.sql script, you can then execute the load and instantiation of the Opatch Registry data.  

Process of instantiation of Opatch Registry data: 




1. Select against the opatch_xml_inv external table.
2. Execution of opatch lsinventory -xml (the pre-processor program).
3. Load inventory data into the table(s).

SQL> SELECT directory_name, directory_path FROM dba_directories WHERE directory_name like 'OPATCH%';

DIRECTORY_NAME                 DIRECTORY_PATH
------------------------------ -----------------------------------------------
OPATCH_LOG_DIR                 /u01/app/oracle/product/12.1/db_home1/QOpatch
OPATCH_SCRIPT_DIR              /u01/app/oracle/product/12.1/db_home1/QOpatch




Note that DBMS_QOPATCH returns XML, thus you'll need to transform the XML into something more readable, e.g., using a stylesheet (XSLT). Luckily, Oracle provides a default XSLT via GET_OPATCH_XSLT, a function of DBMS_QOPATCH. You can use this function or you can build your own XSLT sheet.
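A commonly shown way to combine the two (a sketch; the output is verbose, so you may need to bump up the SQLPlus LONG setting to see all of it):

SQL> select xmltransform(dbms_qopatch.get_opatch_lsinventory,
                         dbms_qopatch.get_opatch_xslt)
     from dual;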







DataPatch is also a new sub-feature of Queryable Patch Inventory. DataPatch is a driver script that automates the post-patch SQL actions for database patches.
It is applicable only to the database home (not the GI home) and to patches that have SQL changes.
When the binary patch is successfully applied, Datapatch updates the SQL patch registry in the database (dba_registry_history/dba_registry_sqlpatch).
Note, DataPatch has to be executed per database. Also, YOU STILL INSTALL PATCHES using Opatch first !!!
Without DataPatch you could never tell whether the database had the SQL part of the patch applied.
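A typical flow after the binary patch is applied (the paths assume a standard OPatch location):

$ cd $ORACLE_HOME/OPatch
$ ./datapatch -verbose

SQL> select patch_id, action, status, action_time
     from   dba_registry_sqlpatch;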


Exadata Monitoring and Agents – EM Plugin

To those who attended our Exadata Monitoring and Agents session: here are some answers and follow-up from the chat room.

The primary goal of the Exadata plugin is to digest the schematic file and validate the database.xml and catalog.xml files. If the pre-check runs without failure, then Discovery can be executed.

The agent only runs on the compute nodes and monitors all components remotely; i.e., no additional scripts/code is installed on the peripheral components. Agents pull component metrics and vitals using either ssh commands (using user-equivalence-based commands) or by subscribing to SNMP traps.

Note that there are always two agents deployed: the master does the majority of the work, and a slave agent "kicks in" if the master fails. Agents should be installed on all compute nodes.

Initially, the guided discovery wizard runs ASM kfod to get disk names and reads cellip.ora.

The components monitored via the Exadata-EM plugin include the following:
• Storage Cells

• Infiniband Switches (IB switches)
The EM agent runs remote ssh calls to collect switch metrics, and the IB switch sends SNMP traps (push) for all alerts. This collection does require ssh equivalence for the nm2user account, and it includes various sensor data (fan, voltage, temperature) as well as port metrics.
The plugin does the following:
ssh nm2user@ ibnetdiscover

It reads the component names connected to the IB switch and matches up the compute node hostnames to the hostnames used to install the agent.

• Cisco Switch
The EM agent runs remote SNMP get calls to gather metric data; this includes port status and switch vitals, e.g., CPU, memory, power, and temperature. In addition, performance metrics are also collected, e.g., ingress and egress throughput rates.

• PDU and KVM
For the PDU, both active and passive PDUs are monitored. The agent runs SNMP get calls against each PDU; metric collection includes power, temperature, and fan status. The same steps and metrics apply to the KVM.

• ILOM targets
EM Agent executes remote ipmitool calls to each compute node’s ILOM target. This execution requires oemuser credentials to run ipmitool. Agent collects sensor data as well as configuration data (firmware version and serial number)

In EM 12.1.0.4, the key enhancements introduced include IB performance gathering, on-demand schematic refresh, cell performance monitoring, guided resolution for cell alerts, and automated SNMP notification setup for Exadata Storage Servers and InfiniBand switches.

The agent discovers IB switches and compute nodes using the output of ibnetdiscover. The KVM, PDU, Cisco, and ILOM discovery is performed via the schematic file on the compute node, and finally the agent subscribes to SNMP for cells and IB switches; note, SNMP has to be manually set up and enabled on the peripheral components for SNMP push of cell alerts. The EM agent runs cellcli via ssh to obtain storage metrics, which does require ssh equivalence with the agent user.

In the latest version (as of this writing, 12.1.0.6), there were a number of key visualization and metric enhancements. For example:

• CDB-level I/O Workload Summary with PDB-level details breakdown.
• I/O Resource Management for Oracle Database 12c.
• Exadata Database Machine-level physical visualization of I/O Utilization for CDB and PDB on each Exadata Storage Server. There is also a critical integration link to Database Resource Management UI.
• Additional InfiniBand Switch Sensor fault detection, including power supply unit sensors and fan presence sensors.
• Automatically push Exadata plug-in to agent during discovery.

Use fully qualified names with the agent; using shortened names will cause issues. If there are any issues with metrics gathering or the agent, the EMDiag kit should be used to triage them. The EMDiag kit includes scripts that can be used to diagnose EM issues; specifically, the kit includes repvfy, agtvfy, and omsvfy. These tools can be used to diagnose issues with the OEM repository, EM agents, and the management services.
To obtain the EMDiag kit, download the zip file for the version that you need, per Oracle Support Note: MOS ID# 421053.1

export EMDIAG_HOME=/u01/app/oracle/product/emdiag
$EMDIAG_HOME/bin/repvfy install
$EMDIAG_HOME/bin/repvfy verify Exadata -level 9 -details

Rarely discussed 12c New Features Part 3 – Oracle Net Listener Registration

In Oracle Database 12c, there were some minor Oracle Net Services changes. This blog post covers some of them. In the next part, I'll review some of the Dead Connection Detection changes as well as some of the other smaller new features.

This change is neither sexy nor fun, but as a devoted RAC dev guy, I find these little changes (evolutions) amusing 🙂

In prior releases, service registration was performed by PMON; it is now performed by a dedicated process called LREG (listener registration). The LREG process (ora_lreg_<SID>) is a critical database background process; if it dies, the Oracle instance will crash.
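A quick way to confirm the process on a running instance:

$ ps -ef | grep ora_lreg     # one LREG process per instance, named ora_lreg_<SID>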

LREG now assumes all of PMON’s instance/service registration responsibilities; e.g., instance registration, such as: service_update, service_register, LBA payload, etc.

As with PMON in pre-12c versions, LREG (during registration) process provides the listener with information about the following:
* Names of the database services provided by the database
* Name of the database instance associated with the services and its current and maximum load
* Service handlers (dispatchers and dedicated servers) available for the instance, including their type, protocol addresses, and current and maximum load (for LBA)

If the listener is not running when an instance starts, the LREG process cannot register the service information. LREG attempts to connect to the listener periodically on default port TCP/IP 1521 if no local_listener is set and it may take up to 60 seconds before LREG registers with the listener after it has been started. To initiate service registration immediately after the listener is started, use the SQL statement ALTER SYSTEM REGISTER.

LREG can be traced using the same methods as with PMON:

Enabling Oracle Net server-side sqlnet tracing will invoke a trace for LREG on instance startup. The old PMON trace command now traces LREG:
alter system set events = '10257 trace name context forever, level 5';

Listener registration information can also be dumped into the ora_lreg trace file:
alter system set events = 'immediate trace name listener_registration level 3';

FlexASM Deep Dive – Show Me the Output!!!

If you saw the first FlexASM blog, you know we installed and configured FlexASM and a CDB plus a couple of PDBs. Also, this was policy managed with a cardinality of 2. Now let's see what the configuration looks like; we can break it down using the wonderful crsctl and srvctl tools.

First let’s ensure we are really running in FlexASM mode:

[oracle@rac02 ~]$ asmcmd showclustermode
ASM cluster : Flex mode enabled

[oracle@rac02 ~]$ srvctl status serverpool -serverpool naboo
Server pool name: naboo
Active servers count: 2

[oracle@rac01 trace]$ crsctl get node role status -all
Node 'rac01' active role is 'hub'
Node 'rac03' active role is 'hub'
Node 'rac02' active role is 'hub'
Node 'rac04' active role is 'hub'

[oracle@rac01 ~]$ crsctl stat res -t
——————————————————————————–
Name Target State Server State details
——————————————————————————–
Local Resources
——————————————————————————–
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE rac01 STABLE
ONLINE ONLINE rac02 STABLE
ONLINE ONLINE rac03 STABLE
ONLINE ONLINE rac04 STABLE

You'll notice that we have 4 ASM listeners, one on each node in the cluster. You'll see a process like the following on each node:

[oracle@rac01 ~]$ ps -ef |grep -i asmnet

oracle 6646 1 0 12:19 ? 00:00:00 /u01/app/12.1.0/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit

ora.CRSDATA.DATAVOL1.advm
ONLINE ONLINE rac01 Volume device /dev/a
sm/datavol1-194 is o
nline,STABLE
ONLINE ONLINE rac02 Volume device /dev/a
sm/datavol1-194 is o
nline,STABLE
ONLINE OFFLINE rac03 Unable to connect to
ASM,STABLE
ONLINE ONLINE rac04 Volume device /dev/a
sm/datavol1-194 is o
nline,STABLE
The datavol1 ADVM resource runs on all the nodes where it is supposed to run. In this case we are seeing that rac03 is having some issues.
Let's look into that a little later. But I like the fact that crsctl tells us something is amiss here on node 3.

ora.CRSDATA.dg
ONLINE ONLINE rac01 STABLE
ONLINE ONLINE rac02 STABLE
ONLINE ONLINE rac03 STABLE
OFFLINE OFFLINE rac04 STABLE

ora.FRA.dg
ONLINE ONLINE rac01 STABLE
ONLINE ONLINE rac02 STABLE
ONLINE ONLINE rac03 STABLE
OFFLINE OFFLINE rac04 STABLE

The crsdata and fra disk group resources are started on all nodes except node 4.

ora.LISTENER.lsnr
ONLINE ONLINE rac01 STABLE
ONLINE ONLINE rac02 STABLE
ONLINE ONLINE rac03 STABLE
ONLINE ONLINE rac04 STABLE

We all know, as in 11gR2, that this is the Node listener.

ora.PDBDATA.dg
ONLINE ONLINE rac01 STABLE
ONLINE ONLINE rac02 STABLE
ONLINE ONLINE rac03 STABLE
OFFLINE OFFLINE rac04 STABLE

The pdbdata disk group resource is started on all nodes except node 4.

ora.crsdata.datavol1.acfs
ONLINE ONLINE rac01 mounted on /u02/app/
oracle/acfsmounts,ST
ABLE
ONLINE ONLINE rac02 mounted on /u02/app/
oracle/acfsmounts,ST
ABLE
ONLINE OFFLINE rac03 (2) volume /u02/app/
oracle/acfsmounts of
fline,STABLE
ONLINE ONLINE rac04 mounted on /u02/app/
oracle/acfsmounts,ST
ABLE

The ACFS file system resource for datavol1 is started on all nodes except node 3.
But I think the following has something to do with it :-). I need to debug this a bit later. I even tried:
[oracle@rac03 ~]$ asmcmd volenable --all
ASMCMD-9470: ASM proxy instance unavailable
ASMCMD-9471: cannot enable or disable volumes

ora.net1.network
ONLINE ONLINE rac01 STABLE
ONLINE ONLINE rac02 STABLE
ONLINE ONLINE rac03 STABLE
ONLINE ONLINE rac04 STABLE
ora.ons
ONLINE ONLINE rac01 STABLE
ONLINE ONLINE rac02 STABLE
ONLINE ONLINE rac03 STABLE
ONLINE ONLINE rac04 STABLE

The network (in my case I have only net1) and ONS resources are the same as in previous versions.

ora.proxy_advm
ONLINE ONLINE rac01 STABLE
ONLINE ONLINE rac02 STABLE
ONLINE OFFLINE rac03 STABLE
ONLINE ONLINE rac04 STABLE

Yep, since proxy_advm is not started on node 3, the file systems won't come online... but again, I'll look at that later.
——————————————————————————–
Cluster Resources
——————————————————————————–
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac02 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE rac03 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE rac04 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE rac01 169.254.90.36 172.16
.11.10,STABLE
ora.asm
1 ONLINE ONLINE rac03 STABLE
2 ONLINE ONLINE rac01 STABLE
3 ONLINE ONLINE rac02 STABLE

Since we have an ASM cardinality of 3, we have 3 ASM instance resources active.

ora.cvu
1 ONLINE ONLINE rac01 STABLE
ora.mgmtdb
1 ONLINE ONLINE rac01 Open,STABLE
ora.oc4j
1 ONLINE ONLINE rac01 STABLE
ora.rac01.vip
1 ONLINE ONLINE rac01 STABLE
ora.rac02.vip
1 ONLINE ONLINE rac02 STABLE
ora.rac03.vip
1 ONLINE ONLINE rac03 STABLE
ora.rac04.vip
1 ONLINE ONLINE rac04 STABLE
ora.scan1.vip
1 ONLINE ONLINE rac02 STABLE
ora.scan2.vip
1 ONLINE ONLINE rac03 STABLE
ora.scan3.vip
1 ONLINE ONLINE rac04 STABLE
ora.tatooine.db
1 ONLINE ONLINE rac01 Open,STABLE
2 ONLINE ONLINE rac02 Open,STABLE

As we stated above, I specified a Policy Managed database with cardinality of 2, so I have 2 database instances running
——————————————————————————–

Here's some other important supporting info on FlexASM:

[oracle@rac02 ~]$ srvctl config asm -detail
ASM home: /u01/app/12.1.0/grid
Password file: +CRSDATA/orapwASM
ASM listener: LISTENER
ASM is enabled.
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM

[oracle@rac02 ~]$ srvctl status filesystem
ACFS file system /u02/app/oracle/acfsmounts is mounted on nodes rac01,rac02,rac04

And here's what the database has to say about FlexASM:

NOTE: ASMB registering with ASM instance as client 0x10001 (reg:1377584805)
NOTE: ASMB connected to ASM instance +ASM1 (Flex mode; client id 0x10001)
NOTE: ASMB rebuilding ASM server state
NOTE: ASMB rebuilt 2 (of 2) groups
SUCCESS: ASMB reconnected & completed ASM server state

Now for the interesting part.
Notice that ASM is not running on node 4:
[oracle@rac02 ~]$ srvctl status asm -v

ASM is running on rac01,rac02,rac03
[oracle@rac02 ~]$ srvctl status asm -detail
ASM is running on rac01,rac02,rac03

So, how does a client (ocrdump, RMAN, asmcmd, etc.) connect to ASM if there is no ASM instance on that node? Well, let's test this using asmcmd on node 4. You'll notice that a pipe is created, and a connect string is generated and passed to ASMCMD so it can connect remotely to +ASM2 on node 2!!!!

22-Sep-13 12:54 ASMCMD Foreground (PID = 14106): Pipe /tmp/pipe_14106 has been found.
22-Sep-13 12:54 ASMCMD Background (PID = 14117): Successfully opened the pipe /tmp/pipe_14106
22-Sep-13 12:54 ASMCMD Foreground (PID = 14106): Successfully opened the pipe /tmp/pipe_14106 in read mode
NOTE: Executing kfod /u01/app/12.1.0/grid/bin/kfod op=getclstype..
22-Sep-13 12:54 Printing the connection string
contype =
driver =
instanceName = <>
usr =
ServiceName = <+ASM>
23-Sep-13 16:23 Successfully connected to ASM instance +ASM2
23-Sep-13 16:23 NOTE: Querying ASM instance to get list of disks
22-Sep-13 12:54 Registered Daemon process.
22-Sep-13 12:54 ASMCMD Foreground (PID = 14106): Closed pipe /tmp/pipe_14106.

Creating PDBs

Consolidate where possible …Isolate where necessary

In the last blog I mentioned the benefits of schema consolidation and how it dovetails directly into a 12c Oracle Database PDB implementation.
In this part 2 of the PDB blog, we will get a little more detailed and do a basic walk-through, from "cradle to grave," of a PDB. We'll use SQLPlus as the tool of choice; next time I'll show it with DBCA.

First verify that we are truly on 12c Oracle database

SQL> select instance_name, version, status, con_id from v$instance;

INSTANCE_NAME VERSION STATUS CON_ID
—————- —————– ———— ———-
yoda 12.1.0.1.0 OPEN 0

The v$database view tells us that we are dealing with a CDB based database

CDB$ROOT@YODA> select cdb, con_id from v$database;

CDB CON_ID
— ———-
YES 0

or a more elegant way:

CDB$ROOT@YODA> select NAME, DECODE(CDB, 'YES', 'Multitenant Option enabled', 'Regular 12c Database: ') "Multitenant Option ?", OPEN_MODE, CON_ID from V$DATABASE;

NAME Multitenant Option ? OPEN_MODE CON_ID
——— ————————– ——————– ———-
YODA Multitenant Option enabled READ ONLY 0

There are a lot of new views and tables to support PDBs/CDBs, but we’ll focus on the V$PDBS and CDB_PDBS views:

CDB$ROOT@YODA> desc v$pdbs
Name
——–
CON_ID
DBID
CON_UID
GUID
NAME
OPEN_MODE
RESTRICTED
OPEN_TIME
CREATE_SCN
TOTAL_SIZE

CDB$ROOT@YODA> desc cdb_pdbs
Name
——–
PDB_ID
PDB_NAME
DBID
CON_UID
GUID
STATUS
CREATION_SCN
CON_ID
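
Since both views carry CON_ID, a simple join shows the open mode and status side by side; a quick sketch (column formatting omitted):

CDB$ROOT@YODA> select p.con_id, p.name, p.open_mode, c.status
               from v$pdbs p join cdb_pdbs c on p.con_id = c.con_id;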

The SQL*Plus commands show con_name (container name) and show con_id display the container we are connected to and its container ID:

CDB$ROOT@YODA> show con_name

CON_NAME
——————————
CDB$ROOT

CDB$ROOT@YODA> show con_id

CON_ID
——————————
1

Let’s see which PDBs exist in this CDB and their current state:

CDB$ROOT@YODA> select CON_ID,DBID,NAME,TOTAL_SIZE from v$pdbs;

CON_ID DBID NAME TOTAL_SIZE
———- ———- —————————— ———-
2 4066465523 PDB$SEED 283115520
3 483260478 PDBOBI 0

CDB$ROOT@YODA> select con_id, name, open_mode from v$pdbs;

CON_ID NAME OPEN_MODE
———- ——————– ———-
2 PDB$SEED READ ONLY
3 PDBOBI MOUNTED

Recall from part 1 of the blog series that we created a PDB (PDBOBI) when we specified the Pluggable Database feature during the install, and that PDB$SEED got created as part of that install process.

Now let’s connect to the two different PDBs and see what they contain. You really shouldn’t ever connect to PDB$SEED, since it’s just used as a template, but we’re curious 🙂

CDB$ROOT@YODA> alter session set container=PDB$SEED;
Session altered.

CDB$ROOT@YODA> select name from v$datafile;

NAME
——————————————————————————–
+PDBDATA/YODA/DATAFILE/undotbs1.260.823892155
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/system.271.823892297
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/sysaux.270.823892297

As you can see, PDB$SEED houses the template tablespaces: SYSTEM, SYSAUX, and UNDO.

If we connect back to the root-CDB, we see that it houses essentially the traditional database tablespaces (like in pre-12c days).

CDB$ROOT@YODA> alter session set container=cdb$root;
Session altered.

CDB$ROOT@YODA> select name from v$datafile;

NAME
——————————————————————————–
+PDBDATA/YODA/DATAFILE/system.258.823892109
+PDBDATA/YODA/DATAFILE/sysaux.257.823892063
+PDBDATA/YODA/DATAFILE/undotbs1.260.823892155
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/system.271.823892297
+PDBDATA/YODA/DATAFILE/users.259.823892155
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/sysaux.270.823892297
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/system.276.823892813
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/sysaux.274.823892813
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/users.277.823892813
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/example.275.823892813

BTW, the datafiles listed in V$DATAFILE differ from those in CDB_DATA_FILES: CDB_DATA_FILES only shows datafiles of open PDBs, so be careful if you’re relying on it for a complete list of datafiles.
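
A quick way to see the difference is to count the datafiles per container in each view; while a PDB is mounted but not open, its files appear in V$DATAFILE but not in CDB_DATA_FILES. A sketch:

CDB$ROOT@YODA> select con_id, count(*) from v$datafile group by con_id order by con_id;
CDB$ROOT@YODA> select con_id, count(*) from cdb_data_files group by con_id order by con_id;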

Let’s connect to our user PDB (pdbobi) and see what we can see 🙂

CDB$ROOT@YODA> alter session set container=pdbobi;
Session altered.

CDB$ROOT@YODA> select con_id, name, open_mode from v$pdbs;

CON_ID NAME OPEN_MODE
———- —————– ———–
3 PDBOBI MOUNTED

Let’s place PDBOBI in read-write mode. Note that when you create a PDB, it is initially in MOUNTED mode with a status of NEW.
You can view the open mode of a PDB by querying the OPEN_MODE column of the V$PDBS view, or view its status by querying the STATUS column of the CDB_PDBS or DBA_PDBS view.
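
For example (run from cdb$root to see every container; from inside a PDB you will only see that PDB):

CDB$ROOT@YODA> select name, open_mode from v$pdbs;
CDB$ROOT@YODA> select pdb_name, status from dba_pdbs;

Now, let’s open PDBOBI: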

CDB$ROOT@YODA> alter pluggable database pdbobi open;

Pluggable database altered.

or:

CDB$ROOT@YODA> alter pluggable database all open;

And let’s create a new tablespace in this PDB

CDB$ROOT@YODA> create tablespace obiwan datafile size 500M;

Tablespace created.

CDB$ROOT@YODA> select file_name from cdb_data_files;

FILE_NAME
——————————————————————————–
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/example.275.823892813
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/users.277.823892813
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/sysaux.274.823892813
+PDBDATA/YODA/E456D87DF75E6553E043EDFE10AC71EA/DATAFILE/obiwan.284.824683339
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/system.276.823892813

PDBOBI only has scope over its own files; we will illustrate this further below.

Let’s create a new clone from an existing PDB, but with a new path

CDB$ROOT@YODA> create pluggable database PDBvader from PDBOBI FILE_NAME_CONVERT=('+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE','+PDBDATA');
create pluggable database PDBvader from PDBOBI FILE_NAME_CONVERT=('+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE','+PDBDATA')
*
ERROR at line 1:
ORA-65040: operation not allowed from within a pluggable database

CDB$ROOT@YODA> show con_name

CON_NAME
——————————
PDBOBI

Hmm… remember, we were still connected to PDBOBI. You can only create PDBs from the root (not even from PDB$SEED), so connect back to CDB$ROOT:

CDB$ROOT@YODA> create pluggable database PDBvader from PDBOBI FILE_NAME_CONVERT=('+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE','+PDBDATA');

Pluggable database created.

CDB$ROOT@YODA> select pdb_name, status from cdb_pdbs;

PDB_NAME STATUS
———- ————-
PDBOBI NORMAL
PDB$SEED NORMAL
PDBVADER NORMAL

And

CDB$ROOT@YODA> select CON_ID,DBID,NAME,TOTAL_SIZE from v$pdbs;

CON_ID DBID NAME TOTAL_SIZE
———- ———- ————- ————-
2 4066465523 PDB$SEED 283115520
3 483260478 PDBOBI 917504000
4 994649056 PDBVADER 0

Hmm… the TOTAL_SIZE column shows 0 bytes. Recall that all new PDBs are created and placed in MOUNTED state:

CDB$ROOT@YODA> alter session set container=pdbvader;

Session altered.

CDB$ROOT@YODA> alter pluggable database open;

Pluggable database altered.

CDB$ROOT@YODA> select file_name from cdb_data_files;

FILE_NAME
——————————————————————————–
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/system.280.823980769
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/sysaux.279.823980769
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/users.281.823980769
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/example.282.823980769

Voilà… the size is now reflected!

CDB$ROOT@YODA> select CON_ID,DBID,NAME,TOTAL_SIZE from v$pdbs;

CON_ID DBID NAME TOTAL_SIZE
———- ———- —————————— ———-
4 994649056 PDBVADER 393216000

Again, the scope of PDBVADER is limited to its own container files; it can’t see PDBOBI’s files at all. If we connect back to cdb$root and look at v$datafile, we see that cdb$root has scope over all the datafiles in the CDB:

Incidentally, that long identifier, “E46B24386A131109E043EDFE10AC6E89”, in the OMF name is the GUID, or global identifier, for that PDB. This is not the same as the container unique identifier (CON_UID). The CON_UID is a local identifier, whereas the GUID is universal. Keep in mind that we can unplug a PDB from one CDB and plug it into another, so the GUID provides this uniqueness and streamlines portability.
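
You can see all three identifiers next to each other straight out of V$PDBS; a quick sketch:

CDB$ROOT@YODA> select name, con_id, con_uid, guid from v$pdbs;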

CDB$ROOT@YODA> select name, con_id from v$datafile order by con_id;

NAME CON_ID
———————————————————————————– ———-
+PDBDATA/YODA/DATAFILE/undotbs1.260.823892155 1
+PDBDATA/YODA/DATAFILE/sysaux.257.823892063 1
+PDBDATA/YODA/DATAFILE/system.258.823892109 1
+PDBDATA/YODA/DATAFILE/users.259.823892155 1
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/system.271.823892297 2
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/sysaux.270.823892297 2
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/example.275.823892813 3
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/users.277.823892813 3
+PDBDATA/YODA/E456D87DF75E6553E043EDFE10AC71EA/DATAFILE/obiwan.284.824683339 3
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/system.276.823892813 3
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/sysaux.274.823892813 3
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/sysaux.279.823980769 4
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/users.281.823980769 4
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/example.282.823980769 4
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/system.280.823980769 4

Now that we are done testing with the PDBVADER PDB, we can shut it down and drop it:

CDB$ROOT@YODA> alter session set container=cdb$root;

Session altered.

CDB$ROOT@YODA> drop pluggable database pdbvader including datafiles;
drop pluggable database pdbvader including datafiles
*
ERROR at line 1:
ORA-65025: Pluggable database PDBVADER is not closed on all instances.

CDB$ROOT@YODA> alter pluggable database pdbvader close;

Pluggable database altered.

CDB$ROOT@YODA> drop pluggable database pdbvader including datafiles;

Pluggable database dropped.

Just for completeness, I’ll illustrate a couple of different ways to create a PDB.

The beauty of PDBs is not just mobility (plug and unplug), which we’ll show later, but that we can create/clone a new PDB from a “gold-image” PDB. That’s real agility and a Database as a Service (DBaaS) play.
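
For reference, that mobility piece boils down to an unplug on the source CDB and a plug-in on the target; a rough sketch we’ll flesh out later, where the PDB name, XML path, and target prompt are only placeholders:

CDB$ROOT@YODA> alter pluggable database pdbprod close;
CDB$ROOT@YODA> alter pluggable database pdbprod unplug into '/tmp/pdbprod.xml';
-- then, connected to the root of the target CDB:
CDB$ROOT@TARGET> create pluggable database pdbprod using '/tmp/pdbprod.xml' nocopy;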

So let’s create a new PDB in a couple of different ways.

Method #1: Create a PDB from SEED
CDB$ROOT@YODA> alter session set container=cdb$root;

Session altered.

CDB$ROOT@YODA> CREATE PLUGGABLE DATABASE pdbhansolo admin user hansolo identified by hansolo roles=(dba);

Pluggable database created.

CDB$ROOT@YODA> alter pluggable database pdbhansolo open;

Pluggable database altered.

CDB$ROOT@YODA> select file_name from cdb_data_files;

FILE_NAME
——————————————————————————–
+PDBDATA/YODA/E51109E2AF22127AE043EDFE10AC1DD9/DATAFILE/system.280.824693889
+PDBDATA/YODA/E51109E2AF22127AE043EDFE10AC1DD9/DATAFILE/sysaux.279.824693893

Notice that it contains just the basic files needed to enable a PDB. The CDB copies the SYSTEM and SYSAUX tablespaces from PDB$SEED and instantiates them in the new PDB.

Method #2: Clone from an existing PDB (PDBOBI in our case)

CDB$ROOT@YODA> alter session set container=cdb$root;

Session altered.

CDB$ROOT@YODA> alter pluggable database pdbobi close;

Pluggable database altered.

CDB$ROOT@YODA> alter pluggable database pdbobi open read only;

Pluggable database altered.

CDB$ROOT@YODA> CREATE PLUGGABLE DATABASE pdbleia from pdbobi;

Pluggable database created.

CDB$ROOT@YODA> alter pluggable database pdbleia open;

Pluggable database altered.

CDB$ROOT@YODA> select file_name from cdb_data_files;

FILE_NAME
——————————————————————————–
+PDBDATA/YODA/E51109E2AF23127AE043EDFE10AC1DD9/DATAFILE/system.281.824694649
+PDBDATA/YODA/E51109E2AF23127AE043EDFE10AC1DD9/DATAFILE/sysaux.282.824694651
+PDBDATA/YODA/E51109E2AF23127AE043EDFE10AC1DD9/DATAFILE/users.285.824694661
+PDBDATA/YODA/E51109E2AF23127AE043EDFE10AC1DD9/DATAFILE/example.286.824694661
+PDBDATA/YODA/E51109E2AF23127AE043EDFE10AC1DD9/DATAFILE/obiwan.287.824694669

Notice that the OBIWAN tablespace we created in PDBOBI came over as part of this clone process!

You can also create a PDB as a snapshot (copy-on-write) clone of another PDB. I’ll post that test in the next blog entry, but essentially you’ll need a NAS appliance, or any technology that provides copy-on-write snapshots.
I plan on using ACFS as the storage container and an ACFS read-write snapshot for the snapshot PDB.
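
For reference, the clause that drives that copy-on-write clone is SNAPSHOT COPY; a rough sketch of what the next post will walk through, assuming the source PDB’s files sit on storage that supports snapshots (ACFS in my case) and that the source is open read only (“pdbsnap” is just a placeholder name):

CDB$ROOT@YODA> alter pluggable database pdbobi close;
CDB$ROOT@YODA> alter pluggable database pdbobi open read only;
CDB$ROOT@YODA> create pluggable database pdbsnap from pdbobi snapshot copy;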