Setting Jumbo Frames – Portrait of a Large MTU size

There are cases where we need to ensure that large-packet addressability exists end to end, e.g., to verify the configuration for non-standard packet sizes such as an MTU of 9000 (jumbo frames). A typical example is deploying a NAS or backup server across the network.

Setting the MTU can be done by editing the configuration script for the relevant interface in /etc/sysconfig/network-scripts/. In our example, we will use the eth1 interface, thus the file to edit would be ifcfg-eth1.

Add a line to specify the MTU, for example:
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.20.2
NETMASK=255.255.255.0
MTU=9000

Once the MTU is set in the configuration file, bounce the interface with ifdown eth1 followed by ifup eth1; ifconfig eth1 will then show whether it took effect.
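For example, as root (note that bouncing the interface briefly interrupts traffic on eth1):

ifdown eth1 && ifup eth1
ifconfig eth1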

eth1 Link encap:Ethernet HWaddr 00:0F:EA:94:xx:xx
inet addr:192.168.20.2 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::20f:eaff:fe91:407/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1
RX packets:141567 errors:0 dropped:0 overruns:0 frame:0
TX packets:141306 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:101087512 (96.4 MiB) TX bytes:32695783 (31.1 MiB)
Interrupt:18 Base address:0xc000

To validate end-to-end handling of MTU 9000 packets:

Execute the following on Linux systems:

ping -M do -s 8972 [destinationIP]
For example: ping -M do -s 8972 datadomain.viscosityna.com

The reason for 8972 on Linux/Unix systems is that the size given to ping excludes the 28 bytes of headers: an 8-byte ICMP header plus a 20-byte IP header. So take the MTU of 9000 and subtract 28, which gives 8972.

[root@racnode01]# ping -s 8972 -M do datadomain.viscosityna.com
PING datadomain.viscosityna.com. (192.168.20.32) 8972(9000) bytes of data.
8980 bytes from racnode1.viscosityna.com. (192.168.20.2): icmp_seq=0 ttl=64 time=0.914 ms

To illustrate what happens when proper MTU addressability is not in place, I can set a larger packet size in the ping (8993). Because the packet would have to be fragmented, you will see "Frag needed and DF set". In this example, the ping command uses -s to set the packet size, and -M do sets the Do Not Fragment bit:

[root@racnode01]# ping -s 8993 -M do datadomain.viscosityna.com
PING datadomain.viscosityna.com. (192.168.20.32) 8993(9001) bytes of data.
From racnode1.viscosityna.com. (192.168.20.2) icmp_seq=0 Frag needed and DF set (mtu = 9000)
5 packets transmitted, 5 received, 0% packet loss, time 4003ms
rtt min/avg/max/mdev = 0.859/0.955/1.167/0.109 ms, pipe 2

By adjusting the packet size, you can figure out what the MTU for the path is. This represents the lowest MTU allowed by any device along the path: the source node, switches, the target node, or anything else in between.
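If you want to automate that probe, a small loop like the one below (a sketch; the target host is the example from above, and the 28-byte header overhead assumes IPv4 ICMP) binary-searches the largest payload that passes with DF set:

#!/bin/bash
# Probe the path MTU by binary-searching the largest ICMP payload
# that survives with the Do Not Fragment bit set.
host=datadomain.viscosityna.com
lo=1472    # payload that fits a standard 1500-byte MTU (assumed to pass)
hi=8973    # one byte above the jumbo payload (assumed to fail)
while [ $((hi - lo)) -gt 1 ]; do
  mid=$(( (lo + hi) / 2 ))
  if ping -c 1 -W 1 -M do -s "$mid" "$host" >/dev/null 2>&1; then
    lo=$mid    # passed unfragmented, search higher
  else
    hi=$mid    # too large for some hop, search lower
  fi
done
echo "largest payload: $lo bytes -> path MTU = $((lo + 28))"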

Finally, another way to verify the MTU size is the command netstat -a -i -n (the MTU column should show 9000 when you are testing jumbo frames).
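For example, to check a single interface (the exact column layout varies slightly between netstat versions):

netstat -a -i -n | grep eth1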

OVCA Network specs – discussion with Oracle

Notes from meeting w/ Oracle on OVCA:

The most current version of Oracle Virtual Compute Appliance (VCA or OVCA) is X4-2
VCA consists of the following:
InfiniBand (IB) QDR links operating at a 40Gb/s raw fabric bit rate; the effective maximum network throughput for data is on the order of 32Gb/s, due in part to the 8b/10b encoding (40Gb/s x 8/10 = 32Gb/s).
PCIe card limits (20-25Gb/s per server depending on server model)

VCA supports VLANs and can accept VLAN tags.

Virtual machine (VM) network throughput is limited by various factors. The expected network throughput between VMs is as follows (from Oracle specs):

Between two VMs running on the same Compute Node: approx. 15.5Gb/s
Between two pairs of VMs communicating on the same Compute Node: approx. 25.5Gb/s aggregate (12Gb/s-13Gb/s per client/server pair)

Between two VMs running on different Compute Nodes: approx. 7.8Gb/s-8Gb/s
Between two pairs of VMs communicating across different Compute Nodes: no reduction in per-pair speed.

Performance is also affected by software layers and hypervisor/virtualization overhead, such as:
Xsigo drivers, EoIB and IPoIB add path length and latency.
Virtual machine access adds further latency through the virtual front-end and back-end I/O devices.
Private Virtual Interconnect (PVI) peer-to-peer communication vs. external traffic via the Xsigo backplane.

Want to modify ports on SCAN and Node Listeners? Think again

Considerations for setting parameters for SCAN and node listeners on RAC: QUEUESIZE, SDU, ports, etc.

TNS listener information held in listener.ora for releases >= 11.2 on RAC should not be modified; that is, the IPC endpoint information for a node listener should not be changed. Global listener parameters can still be set in the file, i.e., it is supported to add tracing parameters and ASO parameters such as wallet location.

Example:
SDU cannot be set in the TCP endpoint for the SCAN and node listeners, but it can be changed via the global parameter DEFAULT_SDU_SIZE in the sqlnet.ora file (set this in the RDBMS Oracle home's sqlnet.ora).
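For example, a minimal sqlnet.ora entry (the value shown is just an illustration; size it for your workload):

DEFAULT_SDU_SIZE=32767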

SCAN listeners are not configured by the standard methods or via *.ora files; they are built during install and manipulated via srvctl.
Information on their setup is held within Grid Infrastructure (GI). Currently there is no option for adding parameters such as SDU and QUEUESIZE to a listener.
SCAN supports only one address in the TNS connect descriptor and allows only one port assigned to it. The default port is 1521, but it can be changed if required (see the srvctl sketch below).
ER 11782958 has been raised to address the current restrictions around listener parameters on RAC in release 11.2.
Where a global parameter exists, it can be used instead.
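If you do need a non-default SCAN port, the supported route is srvctl rather than editing any *.ora file. A sketch (port 1551 is an arbitrary illustration; the SCAN listeners must be restarted to pick up the change):

srvctl modify scan_listener -p 1551
srvctl stop scan_listener
srvctl start scan_listener
srvctl config scan_listener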

As a side note, the TCP.QUEUESIZE parameter is available in 11.2.0.3+ (the enabling patch is 13777308; see the associated My Oracle Support note). It controls the TCP listen queue (backlog) size. The default is the system-defined maximum value; the defined maximum for Linux is 128. Allowable values are any integer up to the system-defined maximum, e.g., TCP.QUEUESIZE=100 in sqlnet.ora.

FlexASM Deep Dive – Show Me the Output!!!

If you saw the first FlexASM blog, you know we installed and configured FlexASM and a CDB plus a couple of PDBs; the database was policy-managed with a cardinality of 2. Now let's see what the configuration looks like, breaking it down using the wonderful crsctl and srvctl tools.

First let’s ensure we are really running in FlexASM mode:

[oracle@rac02 ~]$ asmcmd showclustermode
ASM cluster : Flex mode enabled

[oracle@rac02 ~]$ srvctl status serverpool -serverpool naboo
Server pool name: naboo
Active servers count: 2

[oracle@rac01 trace]$ crsctl get node role status -all
Node ‘rac01’ active role is ‘hub’
Node ‘rac03’ active role is ‘hub’
Node ‘rac02’ active role is ‘hub’
Node ‘rac04’ active role is ‘hub’

[oracle@rac01 ~]$ crsctl stat res -t
——————————————————————————–
Name Target State Server State details
——————————————————————————–
Local Resources
——————————————————————————–
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE rac01 STABLE
ONLINE ONLINE rac02 STABLE
ONLINE ONLINE rac03 STABLE
ONLINE ONLINE rac04 STABLE

Notice that we have four ASM listeners, one on each node in the cluster. On each node you'll see a process like the following:

[oracle@rac01 ~]$ ps -ef |grep -i asmnet

oracle 6646 1 0 12:19 ? 00:00:00 /u01/app/12.1.0/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit

ora.CRSDATA.DATAVOL1.advm
ONLINE ONLINE rac01 Volume device /dev/a
sm/datavol1-194 is o
nline,STABLE
ONLINE ONLINE rac02 Volume device /dev/a
sm/datavol1-194 is o
nline,STABLE
ONLINE OFFLINE rac03 Unable to connect to
ASM,STABLE
ONLINE ONLINE rac04 Volume device /dev/a
sm/datavol1-194 is o
nline,STABLE
The datavol1 ADVM volume resource runs on all the nodes where it is supposed to run. In this case we can see that rac03 is having some issues; let's look into that a little later. But I like the fact that crsctl tells us something is amiss on node 3.

ora.CRSDATA.dg
ONLINE ONLINE rac01 STABLE
ONLINE ONLINE rac02 STABLE
ONLINE ONLINE rac03 STABLE
OFFLINE OFFLINE rac04 STABLE

ora.FRA.dg
ONLINE ONLINE rac01 STABLE
ONLINE ONLINE rac02 STABLE
ONLINE ONLINE rac03 STABLE
OFFLINE OFFLINE rac04 STABLE

The CRSDATA and FRA disk group resources are started on all nodes except node 4.

ora.LISTENER.lsnr
ONLINE ONLINE rac01 STABLE
ONLINE ONLINE rac02 STABLE
ONLINE ONLINE rac03 STABLE
ONLINE ONLINE rac04 STABLE

As we all know from 11gR2, this is the node listener.

ora.PDBDATA.dg
ONLINE ONLINE rac01 STABLE
ONLINE ONLINE rac02 STABLE
ONLINE ONLINE rac03 STABLE
OFFLINE OFFLINE rac04 STABLE

The PDBDATA disk group resource is started on all nodes except node 4.

ora.crsdata.datavol1.acfs
ONLINE ONLINE rac01 mounted on /u02/app/
oracle/acfsmounts,ST
ABLE
ONLINE ONLINE rac02 mounted on /u02/app/
oracle/acfsmounts,ST
ABLE
ONLINE OFFLINE rac03 (2) volume /u02/app/
oracle/acfsmounts of
fline,STABLE
ONLINE ONLINE rac04 mounted on /u02/app/
oracle/acfsmounts,ST
ABLE

The ACFS filesystem resource for datavol1 is started on all nodes except node 3. But I think the following has something to do with it :-). I'll need to debug this a bit later. I even tried:
[oracle@rac03 ~]$ asmcmd volenable --all
ASMCMD-9470: ASM proxy instance unavailable
ASMCMD-9471: cannot enable or disable volumes

ora.net1.network
ONLINE ONLINE rac01 STABLE
ONLINE ONLINE rac02 STABLE
ONLINE ONLINE rac03 STABLE
ONLINE ONLINE rac04 STABLE
ora.ons
ONLINE ONLINE rac01 STABLE
ONLINE ONLINE rac02 STABLE
ONLINE ONLINE rac03 STABLE
ONLINE ONLINE rac04 STABLE

The network resource (in my case there is only net1) and ONS are the same as in previous versions.

ora.proxy_advm
ONLINE ONLINE rac01 STABLE
ONLINE ONLINE rac02 STABLE
ONLINE OFFLINE rac03 STABLE
ONLINE ONLINE rac04 STABLE

Yep, since proxy_advm is not started on node 3, the filesystems won't come online... but again, I'll look at that later.
——————————————————————————–
Cluster Resources
——————————————————————————–
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac02 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE rac03 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE rac04 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE rac01 169.254.90.36 172.16
.11.10,STABLE
ora.asm
1 ONLINE ONLINE rac03 STABLE
2 ONLINE ONLINE rac01 STABLE
3 ONLINE ONLINE rac02 STABLE

Since we have an ASM cardinality of 3, three ASM instance resources are active.
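The cardinality itself is adjustable. As a sketch (12c srvctl syntax; ALL raises the count to every hub node):

srvctl modify asm -count ALL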

ora.cvu
1 ONLINE ONLINE rac01 STABLE
ora.mgmtdb
1 ONLINE ONLINE rac01 Open,STABLE
ora.oc4j
1 ONLINE ONLINE rac01 STABLE
ora.rac01.vip
1 ONLINE ONLINE rac01 STABLE
ora.rac02.vip
1 ONLINE ONLINE rac02 STABLE
ora.rac03.vip
1 ONLINE ONLINE rac03 STABLE
ora.rac04.vip
1 ONLINE ONLINE rac04 STABLE
ora.scan1.vip
1 ONLINE ONLINE rac02 STABLE
ora.scan2.vip
1 ONLINE ONLINE rac03 STABLE
ora.scan3.vip
1 ONLINE ONLINE rac04 STABLE
ora.tatooine.db
1 ONLINE ONLINE rac01 Open,STABLE
2 ONLINE ONLINE rac02 Open,STABLE

As stated above, I specified a policy-managed database with a cardinality of 2, so I have two database instances running.
——————————————————————————–

Here's some other important supporting info on FlexASM:

[oracle@rac02 ~]$ srvctl config asm -detail
ASM home: /u01/app/12.1.0/grid
Password file: +CRSDATA/orapwASM
ASM listener: LISTENER
ASM is enabled.
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM

[oracle@rac02 ~]$ srvctl status filesystem
ACFS file system /u02/app/oracle/acfsmounts is mounted on nodes rac01,rac02,rac04

And here's what the database (via the ASMB messages in its alert log) has to say about FlexASM:

NOTE: ASMB registering with ASM instance as client 0x10001 (reg:1377584805)
NOTE: ASMB connected to ASM instance +ASM1 (Flex mode; client id 0x10001)
NOTE: ASMB rebuilding ASM server state
NOTE: ASMB rebuilt 2 (of 2) groups
SUCCESS: ASMB reconnected & completed ASM server state

Now for the interesting part. Notice that ASM is not running on node 4:
[oracle@rac02 ~]$ srvctl status asm -v

ASM is running on rac01,rac02,rac03
[oracle@rac02 ~]$ srvctl status asm -detail
ASM is running on rac01,rac02,rac03

So how does a client (ocrdump, RMAN, asmcmd, etc.) connect to ASM if there is no ASM instance on that node? Let's test this using asmcmd on node 4. Notice that a pipe is created, and a connect string is generated and passed to ASMCMD, which connects remotely to +ASM2 on node 2!

22-Sep-13 12:54 ASMCMD Foreground (PID = 14106): Pipe /tmp/pipe_14106 has been found.
22-Sep-13 12:54 ASMCMD Background (PID = 14117): Successfully opened the pipe /tmp/pipe_14106
22-Sep-13 12:54 ASMCMD Foreground (PID = 14106): Successfully opened the pipe /tmp/pipe_14106 in read mode
NOTE: Executing kfod /u01/app/12.1.0/grid/bin/kfod op=getclstype..
22-Sep-13 12:54 Printing the connection string
contype =
driver =
instanceName = <>
usr =
ServiceName = <+ASM>
23-Sep-13 16:23 Successfully connected to ASM instance +ASM2
23-Sep-13 16:23 NOTE: Querying ASM instance to get list of disks
22-Sep-13 12:54 Registered Daemon process.
22-Sep-13 12:54 ASMCMD Foreground (PID = 14106): Closed pipe /tmp/pipe_14106.

Creating PDBs

Consolidate where possible... isolate where necessary.

In the last blog I mentioned the benefits of schema consolidation and how it dovetails directly into a 12c Oracle Database PDB implementation. In this part 2 of the PDB blog, we will get a little more detailed and do a basic walk-through of a PDB, from cradle to grave. We'll use SQL*Plus as the tool of choice; next time I'll show the same with DBCA.

First, verify that we are truly on a 12c Oracle database:

SQL> select instance_name, version, status, con_id from v$instance;

INSTANCE_NAME VERSION STATUS CON_ID
—————- —————– ———— ———-
yoda 12.1.0.1.0 OPEN 0

The v$database view tells us whether we are dealing with a CDB-based database:

CDB$ROOT@YODA> select cdb, con_id from v$database;

CDB CON_ID
— ———-
YES 0

or a more elegant way:

CDB$ROOT@YODA> select NAME, DECODE(CDB, 'YES', 'Multitenant Option enabled', 'Regular 12c Database: ') "Multitenant Option ?", OPEN_MODE, CON_ID from V$DATABASE;

NAME Multitenant Option ? OPEN_MODE CON_ID
——— ————————– ——————– ———-
YODA Multitenant Option enabled READ ONLY 0

There are a lot of new views and tables to support PDB/CDB, but we'll focus on the V$PDBS and CDB_PDBS views:

CDB$ROOT@YODA> desc v$pdbs
Name
——–
CON_ID
DBID
CON_UID
GUID
NAME
OPEN_MODE
RESTRICTED
OPEN_TIME
CREATE_SCN
TOTAL_SIZE

CDB$ROOT@YODA> desc cdb_pdbs
Name
——–
PDB_ID
PDB_NAME
DBID
CON_UID
GUID
STATUS
CREATION_SCN
CON_ID

The SQL*Plus commands show con_name and show con_id display the container name and container ID we are connected to:

CDB$ROOT@YODA> show con_name

CON_NAME
——————————
CDB$ROOT

CDB$ROOT@YODA> show con_id

CON_ID
——————————
1

Let's see which PDBs are created in this CDB and their current state:

CDB$ROOT@YODA> select CON_ID,DBID,NAME,TOTAL_SIZE from v$pdbs;

CON_ID DBID NAME TOTAL_SIZE
———- ———- —————————— ———-
2 4066465523 PDB$SEED 283115520
3 483260478 PDBOBI 0

CDB$ROOT@YODA> select con_id, name, open_mode from v$pdbs;

CON_ID NAME OPEN_MODE
———- ——————– ———-
2 PDB$SEED READ ONLY
3 PDBOBI MOUNTED

Recall from part 1 of the blog series that we created a PDB (pdbobi) when we selected the Pluggable Database feature during the install, and that PDB$SEED was created as part of that install process.

Now let's connect to the two different PDBs and see what they've got! You really shouldn't ever connect to PDB$SEED, since it's just used as a template, but we're curious 🙂

CDB$ROOT@YODA> alter session set container=PDB$SEED;
Session altered.

CDB$ROOT@YODA> select name from v$datafile;

NAME
——————————————————————————–
+PDBDATA/YODA/DATAFILE/undotbs1.260.823892155
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/system.271.823892297
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/sysaux.270.823892297

As you can see, PDB$SEED houses the template tablespaces: SYSTEM, SYSAUX, and UNDO.

If we connect back to the root CDB, we see that it houses essentially the traditional database tablespaces (as in pre-12c days).

CDB$ROOT@YODA> alter session set container=cdb$root;
Session altered.

CDB$ROOT@YODA> select name from v$datafile;

NAME
——————————————————————————–
+PDBDATA/YODA/DATAFILE/system.258.823892109
+PDBDATA/YODA/DATAFILE/sysaux.257.823892063
+PDBDATA/YODA/DATAFILE/undotbs1.260.823892155
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/system.271.823892297
+PDBDATA/YODA/DATAFILE/users.259.823892155
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/sysaux.270.823892297
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/system.276.823892813
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/sysaux.274.823892813
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/users.277.823892813
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/example.275.823892813

BTW, the datafiles listed in V$DATAFILE differ from those in CDB_DATA_FILES: CDB_DATA_FILES only shows datafiles of open PDBs, so be careful when you're looking for a particular datafile.
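A quick way to see the difference while some PDBs are closed (output omitted here, since the counts depend on the PDB states):

CDB$ROOT@YODA> select count(*) from v$datafile;
CDB$ROOT@YODA> select count(*) from cdb_data_files;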

Let’s connect to our user PDB (pdbobi) and see what we can see 🙂

CDB$ROOT@YODA> alter session set container=pdbobi;
Session altered.

CDB$ROOT@YODA> select con_id, name, open_mode from v$pdbs;

CON_ID NAME OPEN_MODE
———- —————– ———–
3 PDBOBI MOUNTED

Let's place PDBOBI in read/write mode. Note that when you create a PDB, it is initially in mounted mode with a status of NEW. You can view the open mode of a PDB by querying the OPEN_MODE column of V$PDBS, and its status by querying the STATUS column of CDB_PDBS or DBA_PDBS.

CDB$ROOT@YODA> alter pluggable database pdbobi open;

Pluggable database altered.

or, to open all PDBs at once: CDB$ROOT@YODA> alter pluggable database all open;

And let’s create a new tablespace in this PDB

CDB$ROOT@YODA> create tablespace obiwan datafile size 500M;

Tablespace created.

CDB$ROOT@YODA> select file_name from cdb_data_files;

FILE_NAME
——————————————————————————–
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/example.275.823892813
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/users.277.823892813
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/sysaux.274.823892813
+PDBDATA/YODA/E456D87DF75E6553E043EDFE10AC71EA/DATAFILE/obiwan.284.824683339
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/system.276.823892813

PDBOBI only has scope over its own PDB files. We will illustrate this further below.

Let’s create a new clone from an existing PDB, but with a new path

CDB$ROOT@YODA> create pluggable database PDBvader from PDBOBI FILE_NAME_CONVERT=('+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE','+PDBDATA');
create pluggable database PDBvader from PDBOBI FILE_NAME_CONVERT=('+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE','+PDBDATA')
*
ERROR at line 1:
ORA-65040: operation not allowed from within a pluggable database

CDB$ROOT@YODA> show con_name

CON_NAME
——————————
PDBOBI

Hmm... remember, we were still connected to PDBOBI. You can only create PDBs from the root (not even from PDB$SEED). So connect to CDB$ROOT and retry:

CDB$ROOT@YODA> create pluggable database PDBvader from PDBOBI FILE_NAME_CONVERT=('+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE','+PDBDATA');

Pluggable database created.

CDB$ROOT@YODA> select pdb_name, status from cdb_pdbs;

PDB_NAME STATUS
———- ————-
PDBOBI NORMAL
PDB$SEED NORMAL
PDBVADER NORMAL

And

CDB$ROOT@YODA> select CON_ID,DBID,NAME,TOTAL_SIZE from v$pdbs;

CON_ID DBID NAME TOTAL_SIZE
———- ———- ————- ————-
2 4066465523 PDB$SEED 283115520
3 483260478 PDBOBI 917504000
4 994649056 PDBVADER 0

Hmm... the TOTAL_SIZE column shows 0 bytes. Recall that all new PDBs are created and placed in MOUNTED state.

CDB$ROOT@YODA> alter session set container=pdbvader;

Session altered.

CDB$ROOT@YODA> alter pluggable database open;

Pluggable database altered.

CDB$ROOT@YODA> select file_name from cdb_data_files;

FILE_NAME
——————————————————————————–
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/system.280.823980769
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/sysaux.279.823980769
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/users.281.823980769
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/example.282.823980769

Voilà... the size is now reflected!

CDB$ROOT@YODA> select CON_ID,DBID,NAME,TOTAL_SIZE from v$pdbs;

CON_ID DBID NAME TOTAL_SIZE
———- ———- —————————— ———-
4 994649056 PDBVADER 393216000

Again, the scope of PDBVADER is limited to its own container's files; it can't see PDBOBI's files at all. If we connect back to cdb$root and look at v$datafile, we see that cdb$root has scope over all the datafiles in the CDB:

Incidentally, that long identifier in the OMF path, E46B24386A131109E043EDFE10AC6E89, is the GUID, or global identifier, for that PDB. This is not the same as the container unique identifier (CON_UID): the CON_UID is a local identifier, whereas the GUID is universal. Keep in mind that we can unplug a PDB from one CDB and plug it into another, so the GUID provides this uniqueness and streamlines portability.
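Both identifiers are exposed in V$PDBS (see the desc output earlier), so you can view them side by side:

CDB$ROOT@YODA> select name, con_uid, guid from v$pdbs;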

CDB$ROOT@YODA> select name, con_id from v$datafile order by con_id;

NAME CON_ID
———————————————————————————– ———-
+PDBDATA/YODA/DATAFILE/undotbs1.260.823892155 1
+PDBDATA/YODA/DATAFILE/sysaux.257.823892063 1
+PDBDATA/YODA/DATAFILE/system.258.823892109 1
+PDBDATA/YODA/DATAFILE/users.259.823892155 1
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/system.271.823892297 2
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/sysaux.270.823892297 2
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/example.275.823892813 3
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/users.277.823892813 3
+PDBDATA/YODA/E456D87DF75E6553E043EDFE10AC71EA/DATAFILE/obiwan.284.824683339 3
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/system.276.823892813 3
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/sysaux.274.823892813 3
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/sysaux.279.823980769 4
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/users.281.823980769 4
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/example.282.823980769 4
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/system.280.823980769 4

Now that we are done testing with the PDBVADER PDB, we can shut it down and drop it:

CDB$ROOT@YODA> alter session set container=cdb$root;

Session altered.

CDB$ROOT@YODA> drop pluggable database pdbvader including datafiles;
drop pluggable database pdbvader including datafiles
*
ERROR at line 1:
ORA-65025: Pluggable database PDBVADER is not closed on all instances.

CDB$ROOT@YODA> alter pluggable database pdbvader close;

Pluggable database altered.

CDB$ROOT@YODA> drop pluggable database pdbvader including datafiles;

Pluggable database dropped.

Just for completeness, I'll illustrate a couple of different ways to create a PDB.

The beauty of PDBs is not just mobility (plug and unplug), which we'll show later, but the ability to create/clone a new PDB from a "gold-image" PDB. That's real agility and a Database as a Service (DBaaS) play.

So let’s create a new PDB in a couple of different ways.

Method #1: Create a PDB from SEED
CDB$ROOT@YODA> alter session set container=cdb$root;

Session altered.

CDB$ROOT@YODA> CREATE PLUGGABLE DATABASE pdbhansolo admin user hansolo identified by hansolo roles=(dba);

Pluggable database created.

CDB$ROOT@YODA> alter pluggable database pdbhansolo open;

Pluggable database altered.

CDB$ROOT@YODA> select file_name from cdb_data_files;

FILE_NAME
——————————————————————————–
+PDBDATA/YODA/E51109E2AF22127AE043EDFE10AC1DD9/DATAFILE/system.280.824693889
+PDBDATA/YODA/E51109E2AF22127AE043EDFE10AC1DD9/DATAFILE/sysaux.279.824693893

Notice that it contains just the basic files needed for a PDB: the CDB copies the SYSTEM and SYSAUX tablespaces from PDB$SEED and instantiates them in the new PDB.

Method #2: Clone from an existing PDB (PDBOBI in our case)

CDB$ROOT@YODA> alter session set container=cdb$root;

Session altered.

CDB$ROOT@YODA> alter pluggable database pdbobi close;

Pluggable database altered.

CDB$ROOT@YODA> alter pluggable database pdbobi open read only;

Pluggable database altered.

CDB$ROOT@YODA> CREATE PLUGGABLE DATABASE pdbleia from pdbobi;

Pluggable database created.

CDB$ROOT@YODA> alter pluggable database pdbleia open;

Pluggable database altered.

CDB$ROOT@YODA> select file_name from cdb_data_files;

FILE_NAME
——————————————————————————–
+PDBDATA/YODA/E51109E2AF23127AE043EDFE10AC1DD9/DATAFILE/system.281.824694649
+PDBDATA/YODA/E51109E2AF23127AE043EDFE10AC1DD9/DATAFILE/sysaux.282.824694651
+PDBDATA/YODA/E51109E2AF23127AE043EDFE10AC1DD9/DATAFILE/users.285.824694661
+PDBDATA/YODA/E51109E2AF23127AE043EDFE10AC1DD9/DATAFILE/example.286.824694661
+PDBDATA/YODA/E51109E2AF23127AE043EDFE10AC1DD9/DATAFILE/obiwan.287.824694669

Notice that the OBIWAN tablespace we created in PDBOBI came over as part of the clone process!

You can also create a PDB as a snapshot (copy-on-write) of another PDB. I'll post that test in the next blog entry, but essentially you'll need a NAS appliance or any technology that provides copy-on-write snapshots. I plan on using ACFS as the storage container and an ACFS read/write snapshot for the snapshot PDB.
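As a preview, the clone itself is a one-liner. A sketch (pdbsnap is a hypothetical name; the source PDB must be open read-only and its files must live on snapshot-capable storage such as ACFS):

CDB$ROOT@YODA> create pluggable database pdbsnap from pdbobi snapshot copy;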