Adding Roles to Windows Server 2012

There are two methods to add roles/features to a Windows Server 2012 server: one is the Add Roles Wizard, and the other is the Add-WindowsFeature cmdlet in PowerShell.

The Add Roles Wizard simplifies how you install roles on the server. This is quite different from the way it was done in earlier Windows Server releases, where admins had to run Add or Remove Windows Components multiple times to install all the roles, role services, and features needed on a server.

The Add Roles Wizard lets you install multiple roles at one time. Server Manager replaces Add or Remove Windows Components, and a single session in the Add Roles Wizard can complete the configuration of the server. In addition, the wizard verifies that all the software components required by a role are installed; if necessary, it prompts you to approve the installation of other roles, role services, or software components that the selected roles require.
Most roles and role services that are available for installation require you to make decisions during the installation process that determine how the role operates in your environment.

To install roles and features in Windows Server 2012, you can also use the ServerManagerCmd.exe tool or the Add-WindowsFeature cmdlet in Windows PowerShell.
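As a quick, hedged sketch in PowerShell (Web-Server is just an illustrative feature name; run Get-WindowsFeature first to see the names valid on your server):

# list available roles/features and their install state
Get-WindowsFeature

# install a role, pulling in its sub-features
Add-WindowsFeature Web-Server -IncludeAllSubFeature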
The following video walks through the process flow: AddPhyDisk_2_Pool_WinSrv2012.mp4

Oracle Inventory and What It Means to You

Whilst presenting on Exadata patching, an interesting question came up at the break: "What is the Inventory?" The individual was new to Oracle, so I spent some time going over what the Oracle Inventory is. Here is the brain dump, captured 🙂

The Oracle [Software] Inventory is the mechanism that manages the software library on the server, a.k.a. the Oracle system. An Oracle system is a server or node where Oracle software is installed. Key parts of this inventory mechanism are the OUI (Oracle Universal Installer), opatch, and the inventory files. We will touch on each of these topics.

The Oracle Inventory is the XML file that is created and managed when the OUI installer (runInstaller from the media) is used to install an Oracle product set.

On each Oracle system there is a Central Inventory and a Local Inventory. This structure is built when oraInstRoot.sh is executed.
The Central Inventory's location is defined by the /etc/oraInst.loc file (specifically the inventory_loc variable). It typically lives at $ORACLE_BASE/oraInventory, and it defines, at a high level, the locations of the registered Oracle Homes (OHs) on the system. The actual inventory file is named:
$ORACLE_BASE/oraInventory/ContentsXML/inventory.xml

The invPtrLoc variable can be used during the runInstaller invocation to point to a specific location for the inventory pointer file. If you do this, however, be aware that you must pass the same -invPtrLoc to opatch so that it inspects the same inventory.
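As an illustration (the pointer file path below is a made-up example), the same pointer file would be passed to both tools:

./runInstaller -invPtrLoc /u01/app/oracle/custom_oraInst.loc ...
$ORACLE_HOME/OPatch/opatch lsinventory -invPtrLoc /u01/app/oracle/custom_oraInst.loc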


[oracle@pdb12cgg app]$ cd oraInventory
[oracle@pdb12cgg oraInventory]$ ls
backup  ContentsXML  logs  oui


[oracle@pdb12cgg ContentsXML]$ cat inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2011, Oracle. All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>11.2.0.3.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGI12Home1" LOC="/u01/app/oracle/product/12.1.0/grid" TYPE="O" IDX="1"/>
<HOME NAME="OraDB12Home1" LOC="/u02/app/oracle/product/12.1.0/dbhome_1" TYPE="O" IDX="2"/>
<HOME NAME="OraHome1" LOC="/u02/app/oracle/product/12.1.2/oggcore_1" TYPE="O" IDX="3"/>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>


The inventory records the following key things for each registered OH: the HOME NAME (a tag name for the OH), the LOC (the location of the OH), and the IDX key value (the index into the file for the registered OH).


To attach (register the OH into the Central Inventory) a cloned or copied OH, we do the following:
./runInstaller -silent -attachHome ORACLE_HOME="/u02/app/oracle/product/12.1.0/dbhome_1" ORACLE_HOME_NAME="DBHome12c"

Similarly, we can use -detachHome to detach the OH.
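For example, detaching the same home registered above (a sketch):

./runInstaller -silent -detachHome ORACLE_HOME="/u02/app/oracle/product/12.1.0/dbhome_1"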


The Local Inventory's location is defined by the inventory.xml file. It typically lives at $ORACLE_HOME/inventory/ContentsXML/comps.xml and outlines the details of the software components and patch levels installed in that Oracle Home:
cat /u02/app/oracle/product/12.1.0/dbhome_1/inventory/ContentsXML/comps.xml

... an excerpt....

<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2013, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<PRD_LIST>
<TL_LIST>
<COMP NAME="oracle.server" VER="12.1.0.1.0" BUILD_NUMBER="0" REP_VER="0.0.0.0.0" RELEASE="Production" INV_LOC="Components/oracle.server/12.1.0.1.0/1/" LANGS="en" XML_INV_LOC="Components21/oracle.server/12.1.0.1.0/" ACT_INST_VER="12.1.0.1.0" DEINST_VER="11.2.0.0.0" INSTALL_TIME="2013.Aug.19 18:01:58 CDT" INST_LOC="/u02/app/oracle/product/12.1.0/dbhome_1/oracle.server">
   <EXT_NAME>Oracle Database 12c</EXT_NAME>
   <DESC>Installs an optional preconfigured starter database, product options, management tools, networking services, utilities, and basic client software for an Oracle Database server. This option also supports Automatic Storage Management database configuration.</DESC>
   <DESCID>COMPONENT_DESC</DESCID>
   <STG_INFO OSP_VER="10.2.0.0.0"/>
   <CMP_JAR_INFO>
      <INFO NAME="filemapObj" VAL="Components/oracle/server/v12_1_0_1_0/filemap.xml"/>
      <INFO NAME="helpDir" VAL="Components/oracle/server/v12_1_0_1_0/help/"/>
      <INFO NAME="actionsClass" VAL="Components.oracle.server.v12_1_0_1_0.CompActions"/>
      <INFO NAME="resourceClass" VAL="Components.oracle.server.v12_1_0_1_0.resources.CompRes"/>

Here's the confusing part: there is also an oraInst.loc file at $ORACLE_HOME/oraInst.loc, inside the OH itself! The opatch utility uses this oraInst.loc to locate and inspect the inventory.

Here's how the Central and Local Inventories relate to each other:

From Central Inv:
<HOME NAME="OraDB12Home1" LOC="/u02/app/oracle/product/12.1.0/dbhome_1" TYPE="O" <strong>IDX="2”</strong>/>


Local Inv:
<DEP_LIST>
      <DEP NAME="oracle.rdbms" VER="12.1.0.1.0" DEP_GRP_NAME="Optional" <strong>HOME_IDX="2"</strong>/>
      <DEP NAME="oracle.options" VER="12.1.0.1.0" DEP_GRP_NAME="Optional" HOME_IDX="2"/>
      <DEP NAME="oracle.network" VER="12.1.0.1.0" DEP_GRP_NAME="Optional" HOME_IDX="2"/>
      <DEP NAME="oracle.rdbms.oci" VER="12.1.0.1.0" DEP_GRP_NAME="Optional" HOME_IDX="2"/>
      <DEP NAME="oracle.precomp" VER="12.1.0.1.0" DEP_GRP_NAME="Optional" HOME_IDX="2"/>
      <DEP NAME="oracle.xdk" VER="12.1.0.1.0" DEP_GRP_NAME="Optional" HOME_IDX="2"/>
      <DEP NAME="oracle.odbc" VER="12.1.0.1.0" DEP_GRP_NAME="Optional" HOME_IDX="2"/>
      <DEP NAME="oracle.sysman.ccr" VER="10.3.7.0.3" DEP_GRP_NAME="Optional" HOME_IDX="2"/>
      <DEP NAME="oracle.sysman.ccr.client" VER="10.3.2.1.0" DEP_GRP_NAME="Optional" HOME_IDX="2"/>\

The hierarchy and layout of the Oracle Inventory:

[figure: OH inventory hierarchy]

The oraclehomeproperties.xml file defines the OS architecture and the ARU ID. For those who may not know, ARU (Automated Release Update) defines how the opatch utility will treat the patchset.

Clients of the Inventory:
oraenv, opatch, backups, etc.

Here's an strace of oraenv; it illustrates that the inventory files are accessed:


strace -aefl oraenv

stat("/u02/app/oracle/product/12.1.0/dbhome_1/inventory/ContentsXML/oraclehomeproperties.xml", {st_mode=S_IFREG|0640, st_size=549, ...}) = 0
geteuid() = 500
getegid() = 501
getuid() = 500
getgid() = 501
access("/u02/app/oracle/product/12.1.0/dbhome_1/inventory/ContentsXML/oraclehomeproperties.xml", W_OK) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0
stat("/u02/app/oracle/product/12.1.0/dbhome_1/bin/orabase", {st_mode=S_IFREG|0755, st_size=4941164, ...}) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0
stat("/u02/app/oracle/product/12.1.0/dbhome_1/bin/orabase", {st_mode=S_IFREG|0755, st_size=4941164, ...}) = 0
geteuid() = 500
getegid() = 501
getuid() = 500
getgid() = 501
access("/u02/app/oracle/product/12.1.0/dbhome_1/bin/orabase", X_OK) = 0


I have had issues when cloning and attaching an OH. Sometimes I do a health check on the inventory just to make sure everything is cool; here's what we do:

[oracle@pdb12cgg ContentsXML]$ $ORACLE_HOME/OPatch/opatch util LoadXML -xmlInput /u02/app/oracle/product/12.1.0/dbhome_1/inventory/ContentsXML/comps.xml
Oracle Interim Patch Installer version 12.1.0.1.0
Copyright (c) 2012, Oracle Corporation.  All rights reserved.

Oracle Home       : /u02/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u02/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version    : 12.1.0.1.0
OUI version       : 12.1.0.1.0
Log file location : /u02/app/oracle/product/12.1.0/dbhome_1/cfgtoollogs/opatch/opatch2014-09-01_21-44-23PM_1.log

Invoking utility "loadxml"
UtilSession: XML file is OK.
OPatch succeeded.


[oracle@pdb12cgg ContentsXML]$ $ORACLE_HOME/OPatch/opatch util LoadXML -xmlInput /u01/app/oraInventory/ContentsXML/inventory.xml
Oracle Interim Patch Installer version 12.1.0.1.0
Copyright (c) 2012, Oracle Corporation.  All rights reserved.

Oracle Home       : /u02/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u02/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version    : 12.1.0.1.0
OUI version       : 12.1.0.1.0
Log file location : /u02/app/oracle/product/12.1.0/dbhome_1/cfgtoollogs/opatch/opatch2014-09-01_21-45-31PM_1.log

Invoking utility "loadxml"
UtilSession: XML file is OK.

OPatch succeeded.



Backups

For key inventory operations (install, deinstall, clone, add node, attach home) the Installer takes an automated backup of the Central Inventory and the Local Inventory (the entire directory for both) into oraInventory/backup/<time stamp>. Note that you cannot [easily] recover from a corrupted or destroyed OH; for this reason, a restore from these backups may be necessary.


[oracle@pdb12cgg oraInventory]$ ls
backup  ContentsXML  logs  oui
[oracle@pdb12cgg oraInventory]$ ls backup
2013-08-16_08-30-42AM  2013-08-19_05-20-26PM  2014-08-05_04-54-40PM
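Restoring is then a matter of copying a timestamped backup back into place. A minimal sketch, assuming the backup directory mirrors the oraInventory layout (per the note above, the entire directory is backed up):

cd /u01/app/oraInventory
cp -r backup/2014-08-05_04-54-40PM/ContentsXML/* ContentsXML/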

Exadata Monitoring and Agents – EM Plugin

To those who attended our Exadata Monitoring and Agents session: here are some answers and follow-up from the chat room.

The primary goal of the Exadata plugin is to digest the schematic file and validate the database.xml and catalog.xml files. If the pre-check runs without failure, then Discovery can be executed.

The agent runs only on the compute nodes and monitors all components remotely; i.e., no additional scripts/code are installed on the peripheral components. Agents pull component metrics and vitals using either ssh commands (user-equivalence-based commands) or by subscribing to SNMP traps.

Note that there are always two agents deployed: the master does the majority of the work, and a slave agent "kicks in" if the master fails. Agents should be installed on all compute nodes.

Initially, the guided discovery wizard runs ASM kfod to get disk names and reads cellip.ora.
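Roughly what that first step looks like by hand (the cellip.ora path shown is its usual Exadata location):

$ORACLE_HOME/bin/kfod disks=all
cat /etc/oracle/cell/network-config/cellip.ora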

The components monitored via the Exadata-EM plugin include the following:
• Storage Cells

• InfiniBand Switches (IB switches)
The EM agent runs remote ssh calls to collect switch metrics, and the IB switch sends SNMP traps (push) for all alerts. This collection does require ssh equivalence for nm2user, and it includes various sensor data (fan, voltage, temperature) as well as port metrics.
The plugin does the following:
ssh nm2user@ ibnetdiscover

It reads the names of the components connected to the IB switch and matches the compute node hostnames to the hostnames used to install the agent.

• Cisco Switch
The EM agent runs remote SNMP get calls to gather metric data; this includes port status and switch vitals, e.g., CPU, memory, power, and temperature. In addition, performance metrics are also collected, e.g., ingress and egress throughput rates.

• PDU and KVM
For the PDU, both active and passive PDUs are monitored. The agent runs SNMP get calls against each PDU; metric collection includes power, temperature, and fan status. The same steps and metrics apply to the KVM.

• ILOM targets
The EM agent executes remote ipmitool calls to each compute node's ILOM target. This execution requires oemuser credentials to run ipmitool. The agent collects sensor data as well as configuration data (firmware version and serial number).
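An equivalent manual call might look like this (host and password are placeholders):

ipmitool -I lanplus -H <ilom-host> -U oemuser -P <password> sensor list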

In EM 12.1.0.4, the key enhancements introduced include IB performance gathering, on-demand schematic refresh, and cell performance monitoring, as well as guided resolution for cell alerts and automated SNMP notification setup for the Exadata Storage Servers and InfiniBand switches.

The agent discovers IB switches and compute nodes from the ibnetdiscover output. The KVM, PDU, Cisco, and ILOM discovery is performed via the schematic file on a compute node; finally, the agent subscribes to SNMP traps for the cells and IB switches. Note that SNMP has to be manually set up and enabled on the peripheral components for SNMP push of cell alerts. The EM agent runs cellcli via ssh to obtain storage metrics; this does require ssh equivalence for the agent user.

In the latest version (as of this writing, 12.1.0.6), there were a number of key visualization and metrics enhancements. For example:

• CDB-level I/O Workload Summary with PDB-level details breakdown.
• I/O Resource Management for Oracle Database 12c.
• Exadata Database Machine-level physical visualization of I/O Utilization for CDB and PDB on each Exadata Storage Server. There is also a critical integration link to Database Resource Management UI.
• Additional InfiniBand Switch Sensor fault detection, including power supply unit sensors and fan presence sensors.
• Automatically push Exadata plug-in to agent during discovery.

Use fully qualified names with the agent; using shortened names will cause issues. If there are any issues with metric gathering or the agent, the EMDiag Kit should be used for triage. The EMDiag Kit includes scripts that can be used to diagnose EM issues; specifically, it includes repvfy, agtvfy, and omsvfy. These tools can be used to diagnose issues with the OEM repository, the EM agents, and the Oracle Management Service.
To obtain the EMDiag Kit, download the zip file for the version that you need, per Oracle Support Note MOS ID 421053.1.

export EMDIAG_HOME=/u01/app/oracle/product/emdiag
$EMDIAG_HOME/bin/repvfy install
$EMDIAG_HOME/bin/repvfy verify exadata -level 9 -details

ASM Check script

Here's a little script from @racdba that does an ASM check when we go onsite:

#!/bin/ksh
HOST=`hostname`
ASM_OS_DEV_NM=/tmp/asmdevicenames.log
ASMVOTEDSK=/tmp/asm_votingdisks.log
GRID_HOME=`cat /etc/oratab | grep "+ASM" | awk -F ":" '{print $2}'`
ORACLE_HOME=$GRID_HOME
PATH=$ORACLE_HOME/bin:$PATH
export GAWK=/bin/gawk

#
# Open a ksh coprocess to SQL*Plus and pull basic ASM instance vitals
do_pipe ()
{
SQLP="$GRID_HOME/bin/sqlplus -s / as sysdba";
$SQLP |& # Open a pipe to SQL*Plus
print -p -- 'set feed off pause off pages 0 head off veri off line 500';
print -p -- 'set term off time off';
print -p -- "set sqlprompt ''";

print -p -- 'select sysdate from dual;';
read -p SYSDATE;

print -p -- "select version from v\$instance;";
read -p ASM_VERSION;

print -p -- "select value from v\$parameter where name='processes';";
read -p ASM_PROCESS;

print -p -- "select value/1024/1024 from v\$parameter where name='memory_target';";
read -p ASM_MEMORY;

print -p -- "quit;";
sleep 5;
}
#
# Map each ASMLib disk to its ASM disk name and underlying OS device
function get_asminfo {
for LUNS in `ls /dev/oracleasm/disks/*`
do
echo "ASMLIB disk: $LUNS"
asmdisk=`kfed read $LUNS | grep dskname | tr -s ' ' | cut -f2 -d' '`
echo "ASM disk: $asmdisk"
majorminor=`ls -l $LUNS | tr -s ' ' | cut -f5,6 -d' '`
dev=`ls -l /dev | tr -s ' ' | grep "$majorminor" | cut -f10 -d' '`
echo "Device path: /dev/$dev"
echo "----"
done

echo ""
echo "# -------------------------------------------------------------------- #";
/usr/sbin/oracleasm-discover;
}

# Report physical memory (GB), swap (GB), and HugePages count
function get_mem_info {
MEM=`free | $GAWK '/^Mem:/{ print int( ($2 / 1024 / 1024 + 4) / 4 ) * 4 }'`
SWAP=`free | $GAWK '/^Swap:/{ print int ( $2 / 1024 / 1024 + 0.5 ) }'`
HUGEPAGES=`grep HugePages_Total /proc/meminfo | $GAWK '{print $2}'`

echo "Physical Memory: $MEM GB | Swap: $SWAP GB"
echo "HugePages: $HUGEPAGES"
}

# Find the ASM SID in oratab and verify its pmon process is running
export ORACLE_SID=`cat /etc/oratab | grep "+ASM" | awk -F ":" '{print $1}'`
CHKPMON=`ps -ef | grep -v grep | grep pmon_$ORACLE_SID | awk '{print $8}'`
if [ -n "$CHKPMON" ]; then
do_pipe $ORACLE_SID
echo "# -------------------------------------------------------------------- #";
echo "HOSTNAME: ${HOST}"
echo "GRID HOME: ${GRID_HOME}"
echo "ASM VERSION: ${ASM_VERSION}"
echo "ASM PROCESSES: ${ASM_PROCESS}"
echo "ASM MEMORY: ${ASM_MEMORY} MB"
echo "# -------------------------------------------------------------------- #";
get_mem_info
echo "# -------------------------------------------------------------------- #";
else
echo "${ORACLE_SID} is not running."
fi

echo "# -------------------------------------------------------------------- #";
echo "LINUX VERSION INFORMATION:"
echo " "
[ -f "/etc/redhat-release" ] && cat /etc/redhat-release
[ -f "/etc/oracle-release" ] && cat /etc/oracle-release
uname -a
echo "# -------------------------------------------------------------------- #";

##SQLP="sqlplus -s / as sysdba";
##$SQLP <<! > $ASM_OS_DEV_NM
##set feed off pause off head on veri off line 500;
##set term off time off numwidth 15;
##set sqlprompt '';
##col label for a25
##col path for a55
##--select label,path,os_mb from v\$asm_disk;
##select label,os_mb from v\$asm_disk;
##exit;
##!

echo "ASM OS DEVICE INFORMATION:"
##cat $ASM_OS_DEV_NM
## Check for ASMLib
ASMLIBCHK=`rpm -qa | grep oracleasmlib`
if [[ -n $ASMLIBCHK ]]
then
echo "# -------------------------------------------------------------------- #";
echo "ASMLIB RPM: ${ASMLIBCHK}"
echo " "
##echo "ASM OS DEVICE INFORMATION:"
##echo " "
get_asminfo
else
echo "ASMLIB is NOT installed."
fi

echo "# -------------------------------------------------------------------- #";

## Check OCR/Voting disks
OCR=`$GRID_HOME/bin/ocrcheck | grep "Device/File Name" | awk '{print $4}'`
##echo " "
##echo "GRID HOME is located at ${GRID_HOME}."
echo "OCR LOCATION: ${OCR}"
echo "# -------------------------------------------------------------------- #";
echo " "

## Voting disk
$GRID_HOME/bin/crsctl query css votedisk > $ASMVOTEDSK

echo "VOTING DISK INFORMATION:"
echo " "
cat $ASMVOTEDSK
echo "# -------------------------------------------------------------------- #";

## Cleanup
if [[ -f $ASM_OS_DEV_NM ]]
then
rm $ASM_OS_DEV_NM
fi

if [[ -f $ASMVOTEDSK ]]
then
rm $ASMVOTEDSK
fi

iSCSI and enable disks for ASM

Helpful tips for my iSCSI storage buds that want to enable/provision disks for ASM

# get initiator name of host or hosts
cat /etc/iscsi/initiatorname.iscsi

# create volume on san and present to host
log in to the SAN, create the volume, and add the initiator(s) for access

# get wwid for each disk and update alias in multipath.conf
multipath -ll
or
scsi_id -g -u -s /block/sdd
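# optionally, give the LUN a friendly alias in /etc/multipath.conf
# (sketch only; the wwid below is a made-up example)
multipaths {
    multipath {
        wwid  360a98000486e2f66426f583133326b79
        alias DATA18
    }
}
# then reload the multipath maps
multipath -r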

# create partition on dev
fdisk /dev/mapper/DATA18

# add device map from partition table and verify it can be seen
kpartx -a /dev/mapper/DATA18
kpartx -l /dev/mapper/DATA18

# set partition alignment
echo "2048,," | sfdisk -uS /dev/mapper/DATA18

# create the asm disk
oracleasm createdisk DATA18 /dev/mapper/DATA18p1

#verify you can see disk
oracleasm scandisks
oracleasm listdisks

Want to modify ports on SCAN and Node Listeners? Think again

Considerations for setting parameters for SCAN and node listeners on RAC: QUEUESIZE, SDU, ports, etc.

TNS listener information held in a listener.ora should not be modified on RAC for releases >= 11.2; that is, the IPC endpoint information for a node listener should not be changed. Global listener parameters can still be set in the file, i.e., it is supported to add tracing parameters and ASO parameters such as the wallet location.

Example:
SDU cannot be set in the TCP endpoint for SCAN/node listeners, but SDU can be changed via the global parameter DEFAULT_SDU_SIZE in the sqlnet.ora file.
(Set this in the RDBMS Oracle Home's sqlnet.ora file.)
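For example, a minimal sqlnet.ora entry (32767 is just an illustrative value):

# sqlnet.ora in the RDBMS home
DEFAULT_SDU_SIZE=32767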

SCAN listeners are not configured by the standard methods or via *.ora files; rather, they are built during install and manipulated via srvctl, as sketched below.
Information on their setup is held within Grid Infrastructure (GI). Currently there is no option for adding parameters such as SDU and QUEUESIZE to a listener.
SCAN supports only one address in the TNS connect descriptor and allows only one port assigned to it. The default port is 1521, but it can be changed if required.
ER 11782958 has been raised to address the current restrictions that are in place around listener parameters on RAC and release 11.2.
Where there is a global parameter, it can be used in place.
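As a sketch of the srvctl route (port 1523 is just an example):

srvctl config scan_listener
srvctl modify scan_listener -p 1523
srvctl stop scan_listener
srvctl start scan_listener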

As a side note, the TCP.QUEUESIZE parameter is now available in 11.2.0.3+ (the patch for TCP.QUEUESIZE is tracked on My Oracle Support under patch 13777308); it enables tuning the listener's TCP listen queue depth. The default is the system-defined maximum value; the defined maximum for Linux is 128.
Allowable values are any integer up to the system-defined maximum; e.g., TCP.QUEUESIZE=100

Rarely discussed 12c New Features Part 3 – Oracle Net Listener Registration

In Oracle Database 12c there were some minor Oracle Net Services changes. This blog post covers some of them. In the next part I'll review some of the Dead Connection Detection changes as well as some of the smaller new features.

This change is neither sexy nor fun, but as a devoted RAC dev guy, I find these little changes (evolutions) amusing 🙂

In prior releases, service registration was performed by PMON; it is now performed by a dedicated process called LREG (listener registration). The LREG process (ora_lreg_<SID>) is a critical database background process. Since it is a critical background process, if it dies, the Oracle instance crashes.

LREG now assumes all of PMON's instance/service registration responsibilities, e.g., service_update, service_register, and the LBA payload.

As with PMON in pre-12c versions, the LREG process (during registration) provides the listener with information about the following:
* Names of the database services provided by the database
* Name of the database instance associated with the services and its current and maximum load
* Service handlers (dispatchers and dedicated servers) available for the instance, including their type, protocol addresses, and current and maximum load (for LBA)

If the listener is not running when an instance starts, the LREG process cannot register the service information. LREG attempts to connect to the listener periodically (on the default TCP port 1521 if no local_listener is set), and it may take up to 60 seconds before LREG registers with the listener after the listener has been started. To initiate service registration immediately after the listener is started, use the SQL statement ALTER SYSTEM REGISTER.

LREG can be traced using the same methods as with PMON:

Enabling Oracle Net server-side sqlnet tracing will invoke a trace for LREG on instance startup. The old PMON trace command now traces LREG:
alter system set events = '10257 trace name context forever, level 5';

Listener registration information can also be dumped into the ora_lreg trace file:
alter system set events = 'immediate trace name listener_registration level 3';

Creating PDBs

Consolidate where possible ... isolate where necessary.

In the last blog I mentioned the benefits of schema consolidation and how it dovetails directly into a 12c Oracle Database PDB implementation.
In this part 2 of the PDB blog, we will get a little more detailed and do a basic walk-through, from "cradle to grave," of a PDB. We'll use SQL*Plus as the tool of choice; next time I'll show it with DBCA.

First, verify that we are truly on a 12c Oracle database:

SQL> select instance_name, version, status, con_id from v$instance;

INSTANCE_NAME    VERSION           STATUS       CON_ID
---------------- ----------------- ------------ ----------
yoda             12.1.0.1.0        OPEN                  0

The v$database view tells us whether we are dealing with a CDB-based database:

CDB$ROOT@YODA> select cdb, con_id from v$database;

CDB     CON_ID
--- ----------
YES          0

or a more elegant way:

CDB$ROOT@YODA> select NAME, DECODE(CDB, 'YES', 'Multitenant Option enabled', 'Regular 12c Database: ') "Multitenant Option ?", OPEN_MODE, CON_ID from V$DATABASE;

NAME      Multitenant Option ?       OPEN_MODE                CON_ID
--------- -------------------------- -------------------- ----------
YODA      Multitenant Option enabled READ ONLY                     0

There are a lot of new views and tables to support PDBs/CDBs, but we'll focus on the V$PDBS and CDB_PDBS views:

CDB$ROOT@YODA> desc v$pdbs
Name
--------
CON_ID
DBID
CON_UID
GUID
NAME
OPEN_MODE
RESTRICTED
OPEN_TIME
CREATE_SCN
TOTAL_SIZE

CDB$ROOT@YODA> desc cdb_pdbs
Name
--------
PDB_ID
PDB_NAME
DBID
CON_UID
GUID
STATUS
CREATION_SCN
CON_ID

The SQL*Plus commands show con_name (container name) and show con_id display the container we are connected to:

CDB$ROOT@YODA> show con_name

CON_NAME
------------------------------
CDB$ROOT

CDB$ROOT@YODA> show con_id

CON_ID
------------------------------
1

Let's see what PDBs are created in this CDB and their current state:

CDB$ROOT@YODA> select CON_ID,DBID,NAME,TOTAL_SIZE from v$pdbs;

    CON_ID       DBID NAME                           TOTAL_SIZE
---------- ---------- ------------------------------ ----------
         2 4066465523 PDB$SEED                        283115520
         3  483260478 PDBOBI                                   0

CDB$ROOT@YODA> select con_id, name, open_mode from v$pdbs;

    CON_ID NAME                 OPEN_MODE
---------- -------------------- ----------
         2 PDB$SEED             READ ONLY
         3 PDBOBI               MOUNTED

Recall from part 1 of the blog series that we created a PDB (pdbobi) when we specified the Pluggable Database feature during the install, and that a PDB$SEED got created as part of that install process.

Now let's connect to the two different PDBs and see what they've got!! You really shouldn't ever connect to PDB$SEED, since it's just used as a template, but we're just curious 🙂

CDB$ROOT@YODA> alter session set container=PDB$SEED;
Session altered.

CDB$ROOT@YODA> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+PDBDATA/YODA/DATAFILE/undotbs1.260.823892155
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/system.271.823892297
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/sysaux.270.823892297

As you can see, PDB$SEED houses the template tablespaces: System, Sysaux, and Undo.

If we connect back to the root-CDB, we see that it houses essentially the traditional database tablespaces (like in pre-12c days).

CDB$ROOT@YODA> alter session set container=cdb$root;
Session altered.

CDB$ROOT@YODA> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+PDBDATA/YODA/DATAFILE/system.258.823892109
+PDBDATA/YODA/DATAFILE/sysaux.257.823892063
+PDBDATA/YODA/DATAFILE/undotbs1.260.823892155
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/system.271.823892297
+PDBDATA/YODA/DATAFILE/users.259.823892155
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/sysaux.270.823892297
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/system.276.823892813
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/sysaux.274.823892813
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/users.277.823892813
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/example.275.823892813

BTW, the datafiles listed in v$datafile differ from cdb_data_files; cdb_data_files only shows datafiles from "open" PDBs, so just be careful if you're looking for the correct datafile list.
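A quick way to see the difference from the root (a sketch; the counts will differ while a PDB is mounted but not open):

CDB$ROOT@YODA> select count(*) from v$datafile;
CDB$ROOT@YODA> select con_id, count(*) from cdb_data_files group by con_id;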

Let’s connect to our user PDB (pdbobi) and see what we can see 🙂

CDB$ROOT@YODA> alter session set container=pdbobi;
Session altered.

CDB$ROOT@YODA> select con_id, name, open_mode from v$pdbs;

    CON_ID NAME              OPEN_MODE
---------- ----------------- -----------
         3 PDBOBI            MOUNTED

Place PDBOBI in read-write mode. Note that when you create a PDB, it is initially in mounted mode with a status of NEW.
View the open mode of a PDB by querying the OPEN_MODE column of the V$PDBS view, or view the status of a PDB by querying the STATUS column of the CDB_PDBS or DBA_PDBS view, as shown below.
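For example:

CDB$ROOT@YODA> select name, open_mode from v$pdbs;
CDB$ROOT@YODA> select pdb_name, status from cdb_pdbs;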

CDB$ROOT@YODA> alter pluggable database pdbobi open;

Pluggable database altered.

or:

CDB$ROOT@YODA> alter pluggable database all open;

And let’s create a new tablespace in this PDB

CDB$ROOT@YODA> create tablespace obiwan datafile size 500M;

Tablespace created.

CDB$ROOT@YODA> select file_name from cdb_data_files;

FILE_NAME
--------------------------------------------------------------------------------
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/example.275.823892813
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/users.277.823892813
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/sysaux.274.823892813
+PDBDATA/YODA/E456D87DF75E6553E043EDFE10AC71EA/DATAFILE/obiwan.284.824683339
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/system.276.823892813

PDBOBI only has scope for its own PDB files. We will illustrate this further below.

Let’s create a new clone from an existing PDB, but with a new path

CDB$ROOT@YODA> create pluggable database PDBvader from PDBOBI FILE_NAME_CONVERT=('+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE','+PDBDATA');
create pluggable database PDBvader from PDBOBI FILE_NAME_CONVERT=('+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE','+PDBDATA')
*
ERROR at line 1:
ORA-65040: operation not allowed from within a pluggable database

CDB$ROOT@YODA> show con_name

CON_NAME
------------------------------
PDBOBI

Hmm ... remember, we were still connected to PDBOBI. You can only create PDBs from the root (and not even from PDB$SEED), so connect to CDB$ROOT:

CDB$ROOT@YODA> create pluggable database PDBvader from PDBOBI FILE_NAME_CONVERT=('+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE','+PDBDATA');

Pluggable database created.

CDB$ROOT@YODA> select pdb_name, status from cdb_pdbs;

PDB_NAME   STATUS
---------- -------------
PDBOBI     NORMAL
PDB$SEED   NORMAL
PDBVADER   NORMAL

And

CDB$ROOT@YODA> select CON_ID,DBID,NAME,TOTAL_SIZE from v$pdbs;

    CON_ID       DBID NAME          TOTAL_SIZE
---------- ---------- ------------- -------------
         2 4066465523 PDB$SEED          283115520
         3  483260478 PDBOBI            917504000
         4  994649056 PDBVADER                  0

Hmm ... the TOTAL_SIZE column shows 0 bytes. Recall that all new PDBs are created and placed in MOUNTED state:

CDB$ROOT@YODA> alter session set container=pdbvader;

Session altered.

CDB$ROOT@YODA> alter pluggable database open;

Pluggable database altered.

CDB$ROOT@YODA> select file_name from cdb_data_files;

FILE_NAME
--------------------------------------------------------------------------------
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/system.280.823980769
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/sysaux.279.823980769
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/users.281.823980769
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/example.282.823980769

Voila ... the size is now reflected!!

CDB$ROOT@YODA> select CON_ID,DBID,NAME,TOTAL_SIZE from v$pdbs;

    CON_ID       DBID NAME                           TOTAL_SIZE
---------- ---------- ------------------------------ ----------
         4  994649056 PDBVADER                        393216000

Again, the scope of PDBVADER is its own container files; it can't see PDBOBI's files at all. If we connect back to cdb$root and look at v$datafile, we see that cdb$root has scope for all the datafiles in the CDB database.

Incidentally, that long identifier, "E46B24386A131109E043EDFE10AC6E89", in the OMF name is the GUID, or global identifier, for that PDB. This is not the same as the container unique identifier (CON_UID). The con_uid is a local identifier, whereas the GUID is universal. Keep in mind that we can unplug a PDB from one CDB and plug it into another CDB, so the GUID provides this uniqueness and streamlines portability.
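Both identifiers are exposed in cdb_pdbs, so they are easy to compare side by side:

CDB$ROOT@YODA> select pdb_name, con_uid, guid from cdb_pdbs;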

CDB$ROOT@YODA> select name, con_id from v$datafile order by con_id;

NAME                                                                              CON_ID
------------------------------------------------------------------------------ ----------
+PDBDATA/YODA/DATAFILE/undotbs1.260.823892155 1
+PDBDATA/YODA/DATAFILE/sysaux.257.823892063 1
+PDBDATA/YODA/DATAFILE/system.258.823892109 1
+PDBDATA/YODA/DATAFILE/users.259.823892155 1
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/system.271.823892297 2
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/sysaux.270.823892297 2
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/example.275.823892813 3
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/users.277.823892813 3
+PDBDATA/YODA/E456D87DF75E6553E043EDFE10AC71EA/DATAFILE/obiwan.284.824683339 3
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/system.276.823892813 3
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/sysaux.274.823892813 3
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/sysaux.279.823980769 4
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/users.281.823980769 4
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/example.282.823980769 4
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/system.280.823980769 4

Now that we are done testing with the PDBVADER PDB, we can shut down and drop this PDB:

CDB$ROOT@YODA> alter session set container=cdb$root;

Session altered.

CDB$ROOT@YODA> drop pluggable database pdbvader including datafiles;
drop pluggable database pdbvader including datafiles
*
ERROR at line 1:
ORA-65025: Pluggable database PDBVADER is not closed on all instances.

CDB$ROOT@YODA> alter pluggable database pdbvader close;

Pluggable database altered.

CDB$ROOT@YODA> drop pluggable database pdbvader including datafiles;

Pluggable database dropped.

Just for completeness, I'll illustrate a couple of different ways to create a PDB.

The beauty of the PDB is not just mobility (plug and unplug), which we'll show later, but that we can create/clone a new PDB from a "gold-image PDB." That's real agility and a Database as a Service (DBaaS) play.

So let’s create a new PDB in a couple of different ways.

Method #1: Create a PDB from SEED
CDB$ROOT@YODA> alter session set container=cdb$root;

Session altered.

CDB$ROOT@YODA> CREATE PLUGGABLE DATABASE pdbhansolo admin user hansolo identified by hansolo roles=(dba);

Pluggable database created.

CDB$ROOT@YODA> alter pluggable database pdbhansolo open;

Pluggable database altered.

CDB$ROOT@YODA> select file_name from cdb_data_files;

FILE_NAME
--------------------------------------------------------------------------------
+PDBDATA/YODA/E51109E2AF22127AE043EDFE10AC1DD9/DATAFILE/system.280.824693889
+PDBDATA/YODA/E51109E2AF22127AE043EDFE10AC1DD9/DATAFILE/sysaux.279.824693893

Notice that it just contains the basic files to enable a PDB. The CDB will copy the System and Sysaux tablespaces from PDB$SEED and instantiate them in the new PDB.

Method #2: Clone from an existing PDB (PDBOBI in our case)

CDB$ROOT@YODA> alter session set container=cdb$root;

Session altered.

CDB$ROOT@YODA> alter pluggable database pdbobi close;

Pluggable database altered.

CDB$ROOT@YODA> alter pluggable database pdbobi open read only;

Pluggable database altered.

CDB$ROOT@YODA> CREATE PLUGGABLE DATABASE pdbleia from pdbobi;

Pluggable database created.

CDB$ROOT@YODA> alter pluggable database pdbleia open;

Pluggable database altered.

CDB$ROOT@YODA> select file_name from cdb_data_files;

FILE_NAME
--------------------------------------------------------------------------------
+PDBDATA/YODA/E51109E2AF23127AE043EDFE10AC1DD9/DATAFILE/system.281.824694649
+PDBDATA/YODA/E51109E2AF23127AE043EDFE10AC1DD9/DATAFILE/sysaux.282.824694651
+PDBDATA/YODA/E51109E2AF23127AE043EDFE10AC1DD9/DATAFILE/users.285.824694661
+PDBDATA/YODA/E51109E2AF23127AE043EDFE10AC1DD9/DATAFILE/example.286.824694661
+PDBDATA/YODA/E51109E2AF23127AE043EDFE10AC1DD9/DATAFILE/obiwan.287.824694669

Notice that the OBIWAN tablespace we created in PDBOBI came over as part of this clone process!!

You can also create a PDB as a snapshot (copy-on-write) from another PDB. I'll post this test in the next blog report, but essentially you'll need a NAS appliance or any technology that provides COW snapshots.
I plan on using ACFS as the storage container and an ACFS read-write snapshot for the snapshot PDB.
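As a preview, the clone syntax just gains a snapshot clause; a sketch, assuming the source PDB is open read-only on snapshot-capable storage such as ACFS:

CDB$ROOT@YODA> create pluggable database pdbsnap from pdbobi snapshot copy;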