How to handle dangling database entries from a failed ODA odacli operation

odacli delete-database issues

On an Oracle Database Appliance X7-HA (12.2.1.2), I created a database called TEST using the odacli create-database command, then threw away the freshly created datafiles and used RMAN DUPLICATE to restore datafiles from another source. That worked without any issues. Last week, however, I decided to delete this TEST database, and the job hung for a long time on the "Deleting Datafiles" step. My suspicion is that odacli did not recognize the "restored" database files.

The only reason I knew it was on the "Deleting Datafiles" step is that I ran odacli describe-job.

I wish odacli offered a force-delete dbhome command, or an unregister-database command. The database I was trying to delete is the one I cloned from our TEST EBS database.

[root@vnanode0 ~]# odacli list-dbstorages 

DCS-10032:Resource db storage is not found.

Unable to delete the database (cloned from a non-ODA platform); odacli delete-database hangs:

[root@vnanode0 ~]# odacli delete-database -i d488be55-d2af-4615-a766-5f230fc48aff -fd 

“jobId” : “bb7bc2e3-94f2-4523-b901-32257f493e68”, 

“status” : “Running”, 

“message” : null, 

“reports” : [ { 

“taskId” : “TaskZJsonRpcExt_2126”, 

“taskName” : “Validate db d488be55-d2af-4615-a766-5f230fc48aff for deletion”, 

“taskResult” : “Success”, 

“startTime” : “June 05, 2018 15:33:41 PM EDT”, 

“endTime” : “June 05, 2018 15:33:41 PM EDT”, 

“status” : “Success”, 

“taskDescription” : null, 

“parentTaskId” : “TaskSequential_2124”, 

“jobId” : “bb7bc2e3-94f2-4523-b901-32257f493e68”, 

“tags” : [ ], 

“reportLevel” : “Info”, 

“updatedTime” : “June 05, 2018 15:33:41 PM EDT” 

} ], 

“createTimestamp” : “June 05, 2018 15:33:41 PM EDT”, 

“resourceList” : [ ], 

“description” : “Database service deletion with db name: TEST with id : d488be55-d2af-4615-a766-5f230fc48aff”, 

“updatedTime” : “June 05, 2018 15:33:41 PM EDT” 

[root@vnanode0 ~]# odacli list-databases 

ID DB Name DB Type DB Version CDB Class Shape Storage Status DbHomeID 

—————————————- ———- ——– ——————– ———- ——– ——– ———- ———— —————————————- 

d488be55-d2af-4615-a766-5f230fc48aff TEST RacOne 12.1.0.2 false IMDB odb16 ACFS Deleting da48243e-2d6f-47e5-ae8a-7f958cf5bbbf

[root@vnanode0 ~]# odacli describe-job -i 8bb544f6-4f24-4e18-900a-71bf5ef7f383 

Job details 

—————————————————————- 

ID: 8bb544f6-4f24-4e18-900a-71bf5ef7f383 

Description: Database service deletion with db name: TEST with id : d488be55-d2af-4615-a766-5f230fc48aff 

Status: Failure 

Created: June 4, 2018 7:56:47 AM EDT 

Message: DCS-10011:Input parameter ‘ACFS Device for delete’ cannot be NULL. 

Task Name Start Time End Time Status 

————————————————– ———————————– ———————————– ———- 

database Service deletion for d488be55-d2af-4615-a766-5f230fc48aff June 4, 2018 7:56:47 AM EDT June 4, 2018 7:56:52 AM EDT Failure 

database Service deletion for d488be55-d2af-4615-a766-5f230fc48aff June 4, 2018 7:56:47 AM EDT June 4, 2018 7:56:52 AM EDT Failure 

Validate db d488be55-d2af-4615-a766-5f230fc48aff for deletion June 4, 2018 7:56:47 AM EDT June 4, 2018 7:56:47 AM EDT Success 

Database Deletion June 4, 2018 7:56:47 AM EDT June 4, 2018 7:56:48 AM EDT Success 

Unregister Db From Cluster June 4, 2018 7:56:48 AM EDT June 4, 2018 7:56:48 AM EDT Success 

Kill Pmon Process June 4, 2018 7:56:48 AM EDT June 4, 2018 7:56:48 AM EDT Success 

Database Files Deletion June 4, 2018 7:56:48 AM EDT June 4, 2018 7:56:48 AM EDT Success 

Deleting FileSystem June 4, 2018 7:56:50 AM EDT June 4, 2018 7:56:52 AM EDT Failure <<<<<<<<<<<<<<<<<<<<<<<<<<< Shows Failure in Deleting FileSystem 

Even odacli delete-database -i <dbid> -fd also failed.

Bug 27048646 was filed to support deleting and cleaning up these dangling database entries.

[root@vnanode0 ~]# odacli list-databases 

ID DB Name DB Type DB Version CDB Class Shape Storage Status DbHomeID 

—————————————- ———- ——– ——————– ———- ——– ——– ———- ———— —————————————- 

d488be55-d2af-4615-a766-5f230fc48aff TEST RacOne 12.1.0.2 false IMDB odb16 ACFS Deleting da48243e-2d6f-47e5-ae8a-7f958cf5bbbf 

From dcs-agent.log: 

————————- 

2018-06-04 07:56:49,405 DEBUG [Stopping FileSyestem] [] c.o.d.c.u.CommonsUtils: 

run: cmd= ‘[su, 

-, 

oracle, 

-c, 

export ORACLE_SID=+ASM1; 

export ORACLE_HOME=/u01/app/12.2.0.1/oracle; 

/u01/app/12.2.0.1/oracle/bin/asmcmd –nocp volinfo -G Data datTEST]’ 

2018-06-04 07:56:50,225 DEBUG [Stopping FileSyestem] [] c.o.d.c.u.c.CommandExecutor: Return code: 0 

2018-06-04 07:56:50,225 DEBUG [Stopping FileSyestem] [] c.o.d.c.u.CommonsUtils: Output : 

volume datTEST not found in diskgroup Data 

2018-06-04 07:56:50,225 ERROR [Stopping FileSyestem] [] c.o.d.a.r.s.s.VolumeUtils: Failed to get the acfs device for the volume datTEST in the disk group Data

… 

2018-06-06 10:01:00,902 INFO [dw-2755 – GET /jobs/bb7bc2e3-94f2-4523-b901-32257f493e68] [] c.o.d.a.r.JobsApi: Received GET request to getJobDetail on JobsApi with jobid = bb7bc2e3-94f2-4523-b901-32257f493e68 

2018-06-06 10:01:00,908 DEBUG [dw-2755 – GET /jobs/bb7bc2e3-94f2-4523-b901-32257f493e68] [] c.o.d.c.t.r.ReportApi: Job Report: 

“updatedTime” : 1528227225929, 

“jobId” : “bb7bc2e3-94f2-4523-b901-32257f493e68”, 

“status” : “Failure”, 

“message” : “DCS-10011:Input parameter ‘ACFS Device for delete’ cannot be NULL.”, 

“reports” : [ { 

“updatedTime” : 1528227225922, 

“startTime” : 1528227221448, 

“endTime” : 1528227225922, 

“taskId” : “TaskZLockWrapper_2123”, 

“status” : “Failure”, 

“taskResult” : “DCS-10011:Input parameter ‘ACFS Device for delete’ cannot be NULL.”, 

“taskName” : “database Service deletion for d488be55-d2af-4615-a766-5f230fc48aff”, 

“taskDescription” : null, 

“parentTaskId” : “TaskServiceRequest_2122”, 

“jobId” : “bb7bc2e3-94f2-4523-b901-32257f493e68”, 

“tags” : [ ], 

“reportLevel” : “Error” 

}, { 

“updatedTime” : 1528227225917, 

“startTime” : 1528227221467, 

“endTime” : 1528227225917, 

“taskId” : “TaskSequential_2124”, 

“status” : “Failure”, 

“taskResult” : “DCS-10011:Input parameter ‘ACFS Device for delete’ cannot be NULL.”, 

“taskName” : “database Service deletion for d488be55-d2af-4615-a766-5f230fc48aff”, 

“taskDescription” : null, 

“parentTaskId” : “TaskZLockWrapper_2123”, 

“jobId” : “bb7bc2e3-94f2-4523-b901-32257f493e68”, 

“tags” : [ ], 

“reportLevel” : “Error” 

}, { 

“updatedTime” : 1528227221491, 

“startTime” : 1528227221484, 

“endTime” : 1528227221491, 

“taskId” : “TaskZJsonRpcExt_2126”, 

“status” : “Success”, 

“taskResult” : “Success”, 

“taskName” : “Validate db d488be55-d2af-4615-a766-5f230fc48aff for deletion”, 

“taskDescription” : null, 

“parentTaskId” : “TaskSequential_2124”, 

“jobId” : “bb7bc2e3-94f2-4523-b901-32257f493e68”, 

“tags” : [ ], 

“reportLevel” : “Info” 

}, { 

“updatedTime” : 1528227222042, 

“startTime” : 1528227221581, 

“endTime” : 1528227222042, 

“taskId” : “TaskZJsonRpcExt_2138”, 

“status” : “Success”, 

“taskResult” : “database deleted successfully”, 

“taskName” : “Database Deletion”, 

“taskDescription” : null, 

“parentTaskId” : “TaskSequential_2124”, 

“jobId” : “bb7bc2e3-94f2-4523-b901-32257f493e68”, 

“tags” : [ ], 

“reportLevel” : “Info” 

}, { 

“updatedTime” : 1528227222224, 

“startTime” : 1528227222050, 

“endTime” : 1528227222224, 

“taskId” : “TaskZJsonRpcExt_2140”, 

“status” : “Success”, 

“taskResult” : “database unregistered from cluster successfully”, 

“taskName” : “Unregister Db From Cluster”, 

“taskDescription” : null, 

“parentTaskId” : “TaskSequential_2124”, 

“jobId” : “bb7bc2e3-94f2-4523-b901-32257f493e68”, 

“tags” : [ ], 

“reportLevel” : “Info” 

}, { 

“updatedTime” : 1528227222371, 

“startTime” : 1528227222316, 

“endTime” : 1528227222371, 

“taskId” : “TaskZJsonRpcExt_2144”, 

“status” : “Success”, 

“taskResult” : “pmon process of database killed successfully”, 

“taskName” : “Kill Pmon Process”, 

“taskDescription” : null, 

“parentTaskId” : “TaskSequential_2124”, 

“jobId” : “bb7bc2e3-94f2-4523-b901-32257f493e68”, 

“tags” : [ ], 

“reportLevel” : “Info” 

}, { 

“updatedTime” : 1528227222739, 

“startTime” : 1528227222561, 

“endTime” : 1528227222739, 

“taskId” : “TaskZJsonRpcExt_2148”, 

“status” : “Success”, 

“taskResult” : “database files deleted successfully”, 

“taskName” : “Database Files Deletion”, 

“taskDescription” : null, 

“parentTaskId” : “TaskSequential_2124”, 

“jobId” : “bb7bc2e3-94f2-4523-b901-32257f493e68”, 

“tags” : [ ], 

“reportLevel” : “Info” 

}, { 

“updatedTime” : 1528227225908, 

“startTime” : 1528227224546, 

“endTime” : 1528227225908, 

“taskId” : “TaskZJsonRpcExt_2152”, 

“status” : “Failure”, 

“taskResult” : “DCS-10011:Input parameter ‘ACFS Device for delete’ cannot be NULL.”, 

“taskName” : “Deleting FileSystem”, 

“taskDescription” : null, 

“parentTaskId” : “TaskSequential_2124”, 

“jobId” : “bb7bc2e3-94f2-4523-b901-32257f493e68”, 

“tags” : [ ], 

“reportLevel” : “Error” 

} ], 

“createTimestamp” : 1528227221408, 

“resourceList” : [ ], 

“description” : “Database service deletion with db name: TEST with id : d488be55-d2af-4615-a766-5f230fc48aff” 

————————- 
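The root cause is visible in the trace: the delete job asks ASM for the ACFS volume backing the database (datTEST in disk group Data), gets "volume not found" back, and then fails with the NULL "ACFS Device for delete" error. You can run the same check by hand before deleting a restored database; this is a sketch assuming the grid home path and the dat<DBNAME> volume-naming convention shown in the log, so adjust for your environment:

```shell
# Ask ASM for the ACFS volume that odacli expects to delete.
# ORACLE_HOME/SID and the volume name are taken from dcs-agent.log above;
# change them to match your appliance.
su - oracle -c '
export ORACLE_SID=+ASM1
export ORACLE_HOME=/u01/app/12.2.0.1/oracle
$ORACLE_HOME/bin/asmcmd --nocp volinfo -G Data datTEST
'
# "volume datTEST not found in diskgroup Data" here means
# odacli delete-database will fail exactly as it did above.
```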

Resolution. NOTE: only do this with the advice and clearance of Oracle Support (I did this with Oracle Support on a WebEx session).

This required us to manually clean up the orphaned job entries.

1. Download db-derby-10.11.1.1-lib.zip from http://db.apache.org/derby/releases/release-10.11.1.1.cgi and unzip it.

2. Shut down dcsagent:

# initctl stop initdcsagent

3. cd /opt/oracle/dcs/repo/ (there must be a node_0 directory under it):

# ls
node_0

4. Connect to the repository database and run SQL commands to query/remove the rows in the metadata tables:

# java -cp /root/db-derby-10.11.1.1-bin/lib/derbytools.jar:/root/db-derby-10.11.1.1-bin/lib/derbyclient.jar:/root/db-derby-10.11.1.1-bin/lib/derby.jar org.apache.derby.tools.ij

ij> connect 'jdbc:derby:node_0';

5. ij> show tables;

6. ij> select id,name,dbname,dbid,status,DBSTORAGE from db where dbname='&DELETEDDATABASENAME';

>> This should return one row, for the deleted database.

7. ij> delete from db where dbname='&DELETEDDATABASENAME';

8. ij> commit;

9. Start dcsagent:

# initctl start initdcsagent

Note: we iterated through steps 6 and 7 to remove everything that shouldn't be there, possibly including storage details and locations as well.

Basically remove anything in the table relating to the old database. 
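Put together, the whole cleanup looks roughly like the script below. The paths assume the Derby zip was extracted under /root and the repository lives in /opt/oracle/dcs/repo, and TEST stands in for the deleted database's name; again, only run this under Oracle Support's direction:

```shell
# 1) Stop the DCS agent so the Derby repository is not locked.
initctl stop initdcsagent

# 2) Open the DCS metadata repository with Derby's ij tool and
#    remove the dangling row for the deleted database.
cd /opt/oracle/dcs/repo
java -cp "/root/db-derby-10.11.1.1-bin/lib/derbytools.jar:/root/db-derby-10.11.1.1-bin/lib/derbyclient.jar:/root/db-derby-10.11.1.1-bin/lib/derby.jar" \
  org.apache.derby.tools.ij <<'EOF'
connect 'jdbc:derby:node_0';
-- Confirm exactly one row matches before deleting anything.
select id,name,dbname,dbid,status,DBSTORAGE from db where dbname='TEST';
delete from db where dbname='TEST';
commit;
EOF

# 3) Restart the DCS agent and verify with odacli list-databases.
initctl start initdcsagent
```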

[root@vnanode0 ~]# odacli list-databases 

DCS-10032:Resource database is not found. 

[root@vnanode0 ~]# odacli list-dbhomes 

ID Name DB Version Home Location Status 

—————————————- ——————– —————————————- ——————————————— ———- 

5afc2dd7-35c2-4ffa-a0cc-2c81ef190e9f OraDB12102_home2 12.1.0.2.171017 (26914423, 26717470) /u01/app/oracle/product/12.1.0.2/dbhome_2 Configured 

How to create a single-instance database on ODA on a specific node

Yes, this operation deserves its own blog post, if only because it is so confusing. As usual, I show the dcs-agent.log.

odacli create-database for a single-instance database on a specific node

This procedure illustrates the creation of a single-instance database on an ODA X7-HA running 12.2.1.3.
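Here is how I read the create-database flags used in the session below (my interpretation on 12.2.1.3; verify against odacli create-database -h on your own appliance):

```shell
# odacli create-database flags, annotated
# (check "odacli create-database -h" for your release):
#   -n  vnasi   database name
#   -cl oltp    database class
#   -dh <id>    ID of an existing database home (see odacli list-dbhomes)
#   -s  odb1    shape, i.e. the CPU/memory template
#   -r  ACFS    storage type
#   -y  SI      database type: single instance
#   -g  1       node number on which to place the instance
#   -m          prompt for the SYS/SYSTEM/PDB admin password
odacli create-database -n vnasi -cl oltp \
  -dh 75a428db-d173-4ad8-bae3-021a45eeeebf \
  -s odb1 -r ACFS -y SI -g 1 -m
```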

[root@vnaoda1-1 ~]# odacli create-database -n vnasi -cl oltp -dh 75a428db-d173-4ad8-bae3-021a45eeeebf -s odb1 -r ACFS -y SI -g 1 -m

Password for SYS,SYSTEM and PDB Admin:

Job details                                                      

—————————————————————-

                     ID:  9a4f5364-a634-4e0e-b1db-9e81e96123cc

            Description:  Database service creation with db name: vnasi

                 Status:  Created

                Created:  August 20, 2018 10:53:55 PM EDT

                Message: 

Task Name                                Start Time                          End Time                            Status    

—————————————- ———————————– ———————————– ———-

[root@vnaoda1-1 ~]# odacli describe-job -i 9a4f5364-a634-4e0e-b1db-9e81e96123cc

Job details                                                      

—————————————————————-

                     ID:  9a4f5364-a634-4e0e-b1db-9e81e96123cc

            Description:  Database service creation with db name: vnasi

                 Status:  Running

                Created:  August 20, 2018 10:53:55 PM EDT

                Message: 

Task Name                                Start Time                          End Time                            Status    

—————————————- ———————————– ———————————– ———-

Setting up ssh equivalance               August 20, 2018 10:53:55 PM EDT     August 20, 2018 10:53:55 PM EDT     Success   

Creating volume datvnasi                 August 20, 2018 10:53:55 PM EDT     August 20, 2018 10:54:11 PM EDT     Success   

Creating ACFS filesystem for DATA        August 20, 2018 10:54:11 PM EDT     August 20, 2018 10:54:19 PM EDT     Success   

Database Service creation                August 20, 2018 10:54:19 PM EDT     August 20, 2018 10:54:19 PM EDT     Running   

Database Creation                        August 20, 2018 10:54:19 PM EDT     August 20, 2018 10:54:19 PM EDT     Running   

[root@vnaoda1-1 log]# tail  -100 dcs-agent.log

    “endTime” : 1534820059579,

    “taskId” : “TaskZJsonRpcExt_3387”,

    “status” : “Running”,

    “taskResult” : “”,

    “taskName” : “Database Creation”,

    “taskDescription” : null,

    “parentTaskId” : “TaskSequential_3386”,

    “jobId” : “9a4f5364-a634-4e0e-b1db-9e81e96123cc”,

    “tags” : [ ],

    “reportLevel” : “Info”

  } ],

  “createTimestamp” : 1534820035095,

  “resourceList” : [ ],

  “description” : “Database service creation with db name: vnasi”

}

2018-08-20 22:55:28,836 INFO [dw-31414 – GET /jobs/9a4f5364-a634-4e0e-b1db-9e81e96123cc] [] c.o.d.a.u.d.DcsuserUtil: Authenticate for user: oda-cliadmin

2018-08-20 22:55:28,853 DEBUG [dw-31414 – GET /jobs/9a4f5364-a634-4e0e-b1db-9e81e96123cc] [] c.o.d.a.a.AgentAuthenticator: auth success

2018-08-20 22:55:28,853 INFO [dw-31414 – GET /jobs/9a4f5364-a634-4e0e-b1db-9e81e96123cc] [] c.o.d.a.r.JobsApi: Received GET request to getJobDetail on JobsApi with jobid = 9a4f5364-a634-4e0e-b1db-9e81e96123cc

2018-08-20 22:55:28,856 DEBUG [dw-31414 – GET /jobs/9a4f5364-a634-4e0e-b1db-9e81e96123cc] [] c.o.d.c.t.r.ServiceJobReport: Add TaskReport r.id=9a4f5364-a634-4e0e-b1db-9e81e96123cc jid=9a4f5364-a634-4e0e-b1db-9e81e96123cc status=Running

2018-08-20 22:55:28,856 DEBUG [dw-31414 – GET /jobs/9a4f5364-a634-4e0e-b1db-9e81e96123cc] [] c.o.d.c.t.r.ServiceJobReport: Add TaskReport r.id=TaskSequential_3335 jid=9a4f5364-a634-4e0e-b1db-9e81e96123cc status=Success

2018-08-20 22:55:28,856 DEBUG [dw-31414 – GET /jobs/9a4f5364-a634-4e0e-b1db-9e81e96123cc] [] c.o.d.c.t.r.ServiceJobReport: Add TaskReport r.id=TaskSequential_3353 jid=9a4f5364-a634-4e0e-b1db-9e81e96123cc status=Success

2018-08-20 22:55:28,856 DEBUG [dw-31414 – GET /jobs/9a4f5364-a634-4e0e-b1db-9e81e96123cc] [] c.o.d.c.t.r.ServiceJobReport: Add TaskReport r.id=TaskSequential_3359 jid=9a4f5364-a634-4e0e-b1db-9e81e96123cc status=Success

2018-08-20 22:55:28,856 DEBUG [dw-31414 – GET /jobs/9a4f5364-a634-4e0e-b1db-9e81e96123cc] [] c.o.d.c.t.r.ServiceJobReport: Add TaskReport r.id=TaskSequential_3386 jid=9a4f5364-a634-4e0e-b1db-9e81e96123cc status=Running

2018-08-20 22:55:28,856 DEBUG [dw-31414 – GET /jobs/9a4f5364-a634-4e0e-b1db-9e81e96123cc] [] c.o.d.c.t.r.ServiceJobReport: Add TaskReport r.id=TaskZJsonRpcExt_3387 jid=9a4f5364-a634-4e0e-b1db-9e81e96123cc status=Running

2018-08-20 22:55:28,856 DEBUG [dw-31414 – GET /jobs/9a4f5364-a634-4e0e-b1db-9e81e96123cc] [] c.o.d.c.t.r.ReportApi: Job Report:

{

  “updatedTime” : 1534820035111,

  “jobId” : “9a4f5364-a634-4e0e-b1db-9e81e96123cc”,

  “status” : “Running”,

  “message” : null,

  “reports” : [ {

    “updatedTime” : 1534820035750,

    “startTime” : 1534820035720,

    “endTime” : 1534820035750,

    “taskId” : “TaskSequential_3335”,

    “status” : “Success”,

    “taskResult” : “”,

    “taskName” : “Setting up ssh equivalance”,

    “taskDescription” : null,

    “parentTaskId” : “TaskSequential_3324”,

    “jobId” : “9a4f5364-a634-4e0e-b1db-9e81e96123cc”,

    “tags” : [ ],

    “reportLevel” : “Info”

  }, {

    “updatedTime” : 1534820051337,

    “startTime” : 1534820035939,

    “endTime” : 1534820051337,

    “taskId” : “TaskSequential_3353”,

    “status” : “Success”,

    “taskResult” : “”,

    “taskName” : “Creating volume datvnasi”,

    “taskDescription” : null,

    “parentTaskId” : “TaskSequential_3324”,

    “jobId” : “9a4f5364-a634-4e0e-b1db-9e81e96123cc”,

    “tags” : [ ],

    “reportLevel” : “Info”

  }, {

    “updatedTime” : 1534820059397,

    “startTime” : 1534820051341,

    “endTime” : 1534820059396,

    “taskId” : “TaskSequential_3359”,

    “status” : “Success”,

    “taskResult” : “”,

    “taskName” : “Creating ACFS filesystem for DATA”,

    “taskDescription” : null,

    “parentTaskId” : “TaskSequential_3324”,

    “jobId” : “9a4f5364-a634-4e0e-b1db-9e81e96123cc”,

    “tags” : [ ],

    “reportLevel” : “Info”

  }, {

    “updatedTime” : 1534820059576,

    “startTime” : 1534820059575,

    “endTime” : 1534820059575,

    “taskId” : “TaskSequential_3386”,

    “status” : “Running”,

    “taskResult” : “”,

    “taskName” : “Database Service creation”,

    “taskDescription” : null,

    “parentTaskId” : “TaskSequential_3324”,

    “jobId” : “9a4f5364-a634-4e0e-b1db-9e81e96123cc”,

    “tags” : [ ],

    “reportLevel” : “Info”

  }, {

    “updatedTime” : 1534820059580,

    “startTime” : 1534820059579,

    “endTime” : 1534820059579,

    “taskId” : “TaskZJsonRpcExt_3387”,

    “status” : “Running”,

    “taskResult” : “”,

    “taskName” : “Database Creation”,

    “taskDescription” : null,

    “parentTaskId” : “TaskSequential_3386”,

    “jobId” : “9a4f5364-a634-4e0e-b1db-9e81e96123cc”,

    “tags” : [ ],

    “reportLevel” : “Info”

  } ],

  “createTimestamp” : 1534820035095,

  “resourceList” : [ ],

  “description” : “Database service creation with db name: vnasi”

}


[root@vnaoda1-1 product]# odacli describe-job -i 9a4f5364-a634-4e0e-b1db-9e81e96123cc

Job details                                                      

—————————————————————-

                     ID:  9a4f5364-a634-4e0e-b1db-9e81e96123cc

            Description:  Database service creation with db name: vnasi

                 Status:  Success

                Created:  August 20, 2018 10:53:55 PM EDT

                Message: 

Task Name                                Start Time                          End Time                            Status    

—————————————- ———————————– ———————————– ———-

Setting up ssh equivalance               August 20, 2018 10:53:55 PM EDT     August 20, 2018 10:53:55 PM EDT     Success   

Creating volume datvnasi                 August 20, 2018 10:53:55 PM EDT     August 20, 2018 10:54:11 PM EDT     Success   

Creating ACFS filesystem for DATA        August 20, 2018 10:54:11 PM EDT     August 20, 2018 10:54:19 PM EDT     Success   

Database Service creation                August 20, 2018 10:54:19 PM EDT     August 20, 2018 11:00:23 PM EDT     Success   

Database Creation                        August 20, 2018 10:54:19 PM EDT     August 20, 2018 10:59:08 PM EDT     Success   

Change permission for xdb wallet files   August 20, 2018 10:59:09 PM EDT     August 20, 2018 10:59:09 PM EDT     Success   

Place SnapshotCtrlFile in sharedLoc      August 20, 2018 10:59:09 PM EDT     August 20, 2018 10:59:10 PM EDT     Success   

Running DataPatch                        August 20, 2018 11:00:09 PM EDT     August 20, 2018 11:00:20 PM EDT     Success   

updating the Database version            August 20, 2018 11:00:20 PM EDT     August 20, 2018 11:00:23 PM EDT     Success   

create Users tablespace                  August 20, 2018 11:00:23 PM EDT     August 20, 2018 11:00:25 PM EDT     Success   

[root@vnaoda1-1 product]# odacli list-databases

ID                                       DB Name    DB Type  DB Version           CDB        Class    Shape    Storage    Status        DbHomeID                                

—————————————- ———- ——– ——————– ———- ——– ——– ———- ———— —————————————-

95dbbc70-3da4-467a-ad17-2740dd5bec2c     PROD       Si       12.1.0.2             false      OLTP     Odb16    ACFS       Configured   75a428db-d173-4ad8-bae3-021a45eeeebf    

c3ffa93a-1050-4257-aa08-4f515f51f8ae     vnasi      Si       12.1.0.2             false      OLTP     Odb1     ACFS       Configured   75a428db-d173-4ad8-bae3-021a45eeeebf    

[root@vnaoda1-1 product]# odacli describe-database -i c3ffa93a-1050-4257-aa08-4f515f51f8ae

Database details                                                  

—————————————————————-

                     ID: c3ffa93a-1050-4257-aa08-4f515f51f8ae

            Description: vnasi

                DB Name: vnasi

             DB Version: 12.1.0.2

                DB Type: Si

             DB Edition: EE

                   DBID: 3134823637

Instance Only Database: false

                    CDB: false

               PDB Name:

    PDB Admin User Name:

                  Class: OLTP

                  Shape: Odb1

                Storage: ACFS

           CharacterSet: AL32UTF8

  National CharacterSet: AL16UTF16

               Language: AMERICAN

              Territory: AMERICA

                Home ID: 75a428db-d173-4ad8-bae3-021a45eeeebf

        Console Enabled: false

     Level 0 Backup Day: Sunday

    AutoBackup Disabled: false

                Created: August 20, 2018 10:53:52 PM EDT

         DB Domain Name: vnainc.com


[oracle@vnaoda1-1 ~]$ srvctl config database -d vnasi

Database unique name: vnasi

Database name: vnasi

Oracle home: /u01/app/oracle/product/12.1.0.2/dbhome_11

Oracle user: oracle

Spfile: /u02/app/oracle/oradata/vnasi/dbs/spfilevnasi.ora

Password file:

Domain: vnainc.com

Start options: open

Stop options: immediate

Database role: PRIMARY

Management policy: AUTOMATIC

Server pools:

Disk Groups: DATA

Mount point paths: /u02/app/oracle/oradata/vnasi,/u03/app/oracle/

Services:

Type: SINGLE

OSDBA group: dba

OSOPER group: dba

Database instance: vnasi

Configured nodes: vnaoda1-1

Database is administrator managed

[oracle@vnaoda1-1 ~]$ ps -ef|grep smon

root     16230     1  0 Jul30 ?        00:45:31 /u01/app/12.2.0.1/oracle/bin/osysmond.bin

oracle   20897     1  0 Jul30 ?        00:00:27 asm_smon_+ASM2

oracle   71928     1  0 23:00 ?        00:00:00 ora_smon_vnasi

oracle   83495 82037  0 23:06 pts/1    00:00:00 grep smon

DCS Agent log:

2018-08-20 23:01:41,701 DEBUG [dw-31439 – GET /jobs/9a4f5364-a634-4e0e-b1db-9e81e96123cc] [] c.o.d.c.t.r.ReportApi: Job Report:

{

  “updatedTime” : 1534820427093,

  “jobId” : “9a4f5364-a634-4e0e-b1db-9e81e96123cc”,

  “status” : “Success”,

  “message” : null,

  “reports” : [ {

    “updatedTime” : 1534820035750,

    “startTime” : 1534820035720,

    “endTime” : 1534820035750,

    “taskId” : “TaskSequential_3335”,

    “status” : “Success”,

    “taskResult” : “”,

    “taskName” : “Setting up ssh equivalance”,

    “taskDescription” : null,

    “parentTaskId” : “TaskSequential_3324”,

    “jobId” : “9a4f5364-a634-4e0e-b1db-9e81e96123cc”,

    “tags” : [ ],

    “reportLevel” : “Info”

  }, {

    “updatedTime” : 1534820051337,

    “startTime” : 1534820035939,

    “endTime” : 1534820051337,

    “taskId” : “TaskSequential_3353”,

    “status” : “Success”,

    “taskResult” : “”,

    “taskName” : “Creating volume datvnasi”,

    “taskDescription” : null,

    “parentTaskId” : “TaskSequential_3324”,

    “jobId” : “9a4f5364-a634-4e0e-b1db-9e81e96123cc”,

    “tags” : [ ],

    “reportLevel” : “Info”

  }, {

    “updatedTime” : 1534820059397,

    “startTime” : 1534820051341,

    “endTime” : 1534820059396,

    “taskId” : “TaskSequential_3359”,

    “status” : “Success”,

    “taskResult” : “”,

    “taskName” : “Creating ACFS filesystem for DATA”,

    “taskDescription” : null,

    “parentTaskId” : “TaskSequential_3324”,

    “jobId” : “9a4f5364-a634-4e0e-b1db-9e81e96123cc”,

    “tags” : [ ],

    “reportLevel” : “Info”

  }, {

    “updatedTime” : 1534820423774,

    “startTime” : 1534820059575,

    “endTime” : 1534820423773,

    “taskId” : “TaskSequential_3386”,

    “status” : “Success”,

    “taskResult” : “”,

    “taskName” : “Database Service creation”,

    “taskDescription” : null,

    “parentTaskId” : “TaskSequential_3324”,

    “jobId” : “9a4f5364-a634-4e0e-b1db-9e81e96123cc”,

    “tags” : [ ],

    “reportLevel” : “Info”

  }, {

    “updatedTime” : 1534820348996,

    “startTime” : 1534820059579,

    “endTime” : 1534820348978,

    “taskId” : “TaskZJsonRpcExt_3387”,

    “status” : “Success”,

    “taskResult” : “”,

    “taskName” : “Database Creation”,

    “taskDescription” : null,

    “parentTaskId” : “TaskSequential_3386”,

    “jobId” : “9a4f5364-a634-4e0e-b1db-9e81e96123cc”,

    “tags” : [ ],

    “reportLevel” : “Info”

  }, {

    “updatedTime” : 1534820349023,

    “startTime” : 1534820349014,

    “endTime” : 1534820349022,

    “taskId” : “TaskZJsonRpcExt_3391”,

    “status” : “Success”,

    “taskResult” : “Sucessfully changes the permissions for xdb wallet files”,

    “taskName” : “Change permission for xdb wallet files”,

    “taskDescription” : null,

    “parentTaskId” : “TaskSequential_3386”,

    “jobId” : “9a4f5364-a634-4e0e-b1db-9e81e96123cc”,

    “tags” : [ ],

    “reportLevel” : “Info”

  }, {

    “updatedTime” : 1534820350913,

    “startTime” : 1534820349026,

    “endTime” : 1534820350912,

    “taskId” : “TaskZJsonRpcExt_3393”,

    “status” : “Success”,

    “taskResult” : “Successfully placed snapshot control file in shared location.”,

    “taskName” : “Place SnapshotCtrlFile in sharedLoc”,

    “taskDescription” : null,

    “parentTaskId” : “TaskSequential_3386”,

    “jobId” : “9a4f5364-a634-4e0e-b1db-9e81e96123cc”,

    “tags” : [ ],

    “reportLevel” : “Info”

  }, {

    “updatedTime” : 1534820420561,

    “startTime” : 1534820409047,

    “endTime” : 1534820420559,

    “taskId” : “TaskZJsonRpcExt_3397”,

    “status” : “Success”,

    “taskResult” : “Successfully ran datapatch:vnasi”,

    “taskName” : “Running DataPatch”,

    “taskDescription” : null,

    “parentTaskId” : “TaskSequential_3386”,

    “jobId” : “9a4f5364-a634-4e0e-b1db-9e81e96123cc”,

    “tags” : [ ],

    “reportLevel” : “Info”

  }, {

    “updatedTime” : 1534820423765,

    “startTime” : 1534820420585,

    “endTime” : 1534820423764,

    “taskId” : “TaskZJsonRpcExt_3402”,

    “status” : “Success”,

    “taskResult” : “successfully updated Database version”,

    “taskName” : “updating the Database version”,

    “taskDescription” : null,

    “parentTaskId” : “TaskParallel_3399”,

    “jobId” : “9a4f5364-a634-4e0e-b1db-9e81e96123cc”,

    “tags” : [ ],

    “reportLevel” : “Info”

  }, {

    “updatedTime” : 1534820425534,

    “startTime” : 1534820423777,

    “endTime” : 1534820425534,

    “taskId” : “TaskSequential_3404”,

    “status” : “Success”,

    “taskResult” : “”,

    “taskName” : “create Users tablespace”,

    “taskDescription” : null,

    “parentTaskId” : “TaskSequential_3324”,

    “jobId” : “9a4f5364-a634-4e0e-b1db-9e81e96123cc”,

    “tags” : [ ],

    “reportLevel” : “Info”

  } ],

  “createTimestamp” : 1534820035095,

  “resourceList” : [ {

    “updatedTime” : 1534820348980,

    “resourceId” : “c3ffa93a-1050-4257-aa08-4f515f51f8ae”,

    “jobId” : “9a4f5364-a634-4e0e-b1db-9e81e96123cc”,

    “resourceType” : “DB”

  } ],

  “description” : “Database service creation with db name: vnasi”

}

2018-08-20 23:02:09,678 INFO [dw-31439 – GET /databases] [] c.o.d.a.u.d.DcsuserUtil: Authenticate for user: oda-cliadmin

2018-08-20 23:02:09,701 DEBUG [dw-31439 – GET /databases] [] c.o.d.a.a.AgentAuthenticator: auth success

2018-08-20 23:02:09,702 INFO [dw-31439 – GET /databases] [] c.o.d.a.r.DatabasesApi: Received GET request to listDatabases on DatabasesApi

2018-08-20 23:02:52,097 INFO [dw-31441 – GET /databases/c3ffa93a-1050-4257-aa08-4f515f51f8ae] [] c.o.d.a.u.d.DcsuserUtil: Authenticate for user: oda-cliadmin

2018-08-20 23:02:52,114 DEBUG [dw-31441 – GET /databases/c3ffa93a-1050-4257-aa08-4f515f51f8ae] [] c.o.d.a.a.AgentAuthenticator: auth success

2018-08-20 23:02:52,114 INFO [dw-31441 – GET /databases/c3ffa93a-1050-4257-aa08-4f515f51f8ae] [] c.o.d.a.r.DatabasesApi: Received GET request to getDatabase on DatabasesApi with databaseId :c3ffa93a-1050-4257-aa08-4f515f51f8ae

2018-08-20 23:02:52,114 DEBUG [dw-31441 – GET /databases/c3ffa93a-1050-4257-aa08-4f515f51f8ae] [] c.o.d.a.r.s.d.DbUtils: db :

DB {

    dbName: vnasi

    databaseUniqueName: vnasi

    dbVersion: 12.1.0.2

    dbHomeId: 75a428db-d173-4ad8-bae3-021a45eeeebf

    dbId: 3134823637

    isCdb: false

    pdBName: null

    pdbAdminUserName: null

    enableTDE: false

    dbType: Si

    dbTargetNodeNumber: 1

    dbClass: Oltp

    dbShape: Odb1

    dbStorage: Acfs

    instanceOnly: false

    registerOnly: false

    dbCharacterSet:

DbCharacterSet {

    characterSet: AL32UTF8

    nlsCharacterset: AL16UTF16

    dbTerritory: AMERICA

    dbLanguage: AMERICAN

}

    dbConsoleEnable: false

    backupDestination: None

    level0BackupDay: sunday

    cloudStorageContainer: null

    backupConfigId: null

    dbDomainName: vnainc.com

    dbEdition: EE

}

ODA Patching/Upgrade Triage

This blog defines the appropriate logs, files, and command output to collect when triaging ODA-related patching issues.

Recently we attempted to upgrade from 12.2.1.2 to 12.2.1.3 on ODA X7.  We followed the usual upgrade process:

unzip p27648057_122130_Linux-x86-64_1of3.zip

ls -l  oda-sm-12.2.1.3.0-180504-server*

/opt/oracle/dcs/bin/odacli update-repository -f oda-sm-12.2.1.3.0-180504-server1of3.zip,p27648057_122130_Linux-x86-64_2of3.zip,p27648057_122130_Linux-x86-64_3of3.zip

/opt/oracle/dcs/bin/odacli update-dcsagent -v 12.2.1.3.0

/opt/oracle/dcs/bin/odacli describe-job -i jobid

rpm -qa |grep dcs-agent

dcs-agent-18.1.3.0.0_LINUX.X64_180504-86.x86_64

/opt/oracle/dcs/bin/odacli update-server -v 12.2.1.3.0

The update-server failed according to this log:

{

    “updatedTime” : 1530126568790,

    “startTime” : 1530126562856,

    “endTime” : 1530126568788,

    “taskId” : “TaskZJsonRpcExt_225”,

    “status” : “Success”,

    “taskResult” : “Successfully created the yum repos for patchingos”,

    “taskName” : “Creating repositories using yum”,

    “taskDescription” : null,

    “parentTaskId” : “TaskSequential_224”,

    “jobId” : “5dd32b38-4d4b-4a3a-bf45-bb71cc8bf801”,

    “tags” : [ ],

    “reportLevel” : “Info”

  }, {

    “updatedTime” : 1530126894836,

    “startTime” : 1530126568799,

    “endTime” : 1530126894834,

    “taskId” : “TaskZJsonRpcExt_228”,

    “status” : “Success”,

    “taskResult” : “Successfully updated the OS”,

    “taskName” : “Applying OS Patches”,

    “taskDescription” : null,

    “parentTaskId” : “TaskParallel_227”,

    “jobId” : “5dd32b38-4d4b-4a3a-bf45-bb71cc8bf801”,

    “tags” : [ ],

    “reportLevel” : “Info”

  },

{

    “updatedTime” : 1530126899457,

    “startTime” : 1530126894980,

    “endTime” : 1530126899455,

    “taskId” : “TaskZJsonRpcExt_236”,

    “status” : “Success”,

    “taskResult” : “Successfully updated the Firmware Disk”,

    “taskName” : “Applying Firmware Disk Patches”,

    “taskDescription” : null,

    “parentTaskId” : “TaskSequential_235”,

    “jobId” : “5dd32b38-4d4b-4a3a-bf45-bb71cc8bf801”,

    “tags” : [ ],

    “reportLevel” : “Info”

  },

{

    “updatedTime” : 1530127898145,

    “startTime” : 1530127248252,

    “endTime” : 1530127898144,

    “taskId” : “TaskSequential_121”,

    “status” : “Failure”,

    “taskResult” : “DCS-10001:Internal error encountered:  apply patch using OpatchAuto on node odanode1.”,

    “taskName” : “task:TaskSequential_121”,

    “taskDescription” : null,

    “parentTaskId” : “TaskSequential_256”,

    “jobId” : “5dd32b38-4d4b-4a3a-bf45-bb71cc8bf801”,

    “tags” : [ ],

    “reportLevel” : “Error”

  },

….

{

    “updatedTime” : 1530127898142,

    “startTime” : 1530127463976,

    “endTime” : 1530127898141,

    “taskId” : “TaskSequential_162”,

    “status” : “Failure”,

    “taskResult” : “DCS-10001:Internal error encountered:  apply patch using OpatchAuto on nodeodanode1.”,

    “taskName” : “task:TaskSequential_162”,

    “taskDescription” : null,

    “parentTaskId” : “TaskSequential_121”,

    “jobId” : “5dd32b38-4d4b-4a3a-bf45-bb71cc8bf801”,

    “tags” : [ ],

    “reportLevel” : “Error”

  }, {

    “updatedTime” : 1530127898139,

    “startTime” : 1530127463979,

    “endTime” : 1530127898137,

    “taskId” : “TaskZJsonRpcExt_163”,

    “status” : “Failure”,

    “taskResult” : “DCS-10001:Internal error encountered:  apply patch using OpatchAuto on node odanode1″,

    “taskName” : “clusterware upgrade”,

    “taskDescription” : null,

    “parentTaskId” : “TaskSequential_162”,

    “jobId” : “5dd32b38-4d4b-4a3a-bf45-bb71cc8bf801”,

    “tags” : [ ],

    “reportLevel” : “Error”

  } ],

  “createTimestamp” : 1530126459722,

  “resourceList” : [ ],

  “description” : “Server Patching”

}

When opening an SR, make sure you have collected the following information for support.  This will minimize the back-and-forth and reduce the mean time to resolution.

Please provide the output from the commands below:
1.

odacli describe-component

System Version 

—————

12.2.1.3.0

Component                                Installed Version    Available Version   

—————————————- ——————– ——————–

OAK                                       12.2.1.3.0            up-to-date          

GI                                        12.2.0.1.171017       12.2.0.1.180116     

DB                                        12.2.0.1.171017       12.2.0.1.180116     

DCSAGENT                                  18.1.3.0.0            up-to-date          

ILOM                                      4.0.0.28.r121827      4.0.0.22.r120818    

BIOS                                      41017600              41017100            

OS                                        6.9                   up-to-date          

FIRMWARECONTROLLER                        QDV1RE14              up-to-date          

ASR                                       5.7.7                 up-to-date          

# odacli describe-latestpatch

componentType   availableVersion    

————— ——————–

gi              12.2.0.1.180116     

gi              12.2.0.1.180116     

db              12.2.0.1.180116     

db              12.1.0.2.180116     

db              11.2.0.4.180116     

oak             12.2.1.3.0          

oak             12.2.1.3.0          

asr             5.7.7               

ilom            4.0.0.22.r120818    

os              6.9                 

hmp             2.4.1.0.9           

bios            41017100            

firmwarecontroller 13.00.00.00         

firmwarecontroller 4.650.00-7176       

firmwarecontroller kpyagr3q            

firmwarecontroller qdv1re14            

firmwaredisk    0r3q                

firmwaredisk    a122                

firmwaredisk    a122                

firmwaredisk    a374                

firmwaredisk    c122                

firmwaredisk    c122                

firmwaredisk    c376                

firmwaredisk    0112                

dcsagent        18.1.3.0.0          
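As a quick sanity check before (or after) patching, the Installed and Available columns from `odacli describe-component` can be compared programmatically. A minimal sketch, with the component names and versions copied from the output above (parsing of the live odacli output is left out):

```python
# Flag components where Installed != Available.
# "up-to-date" in the Available column means the component is current.
components = {
    "OAK": ("12.2.1.3.0", "up-to-date"),
    "GI":  ("12.2.0.1.171017", "12.2.0.1.180116"),
    "DB":  ("12.2.0.1.171017", "12.2.0.1.180116"),
    "OS":  ("6.9", "up-to-date"),
}

needs_patch = [name for name, (installed, available) in components.items()
               if available != "up-to-date" and installed != available]

print(sorted(needs_patch))  # components still pending an update
```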

2.  Provide/upload the most recent copies of the following configuration and diagnostic log files: 

/u01/app/12.2.0.1/oracle/cfgtoollogs/opatchautodb/systemconfig* 

/tmp/opatchAutoAnalyzePatch.log 

/u01/app/12.2.0.1/oracle/cfgtoollogs/opatchauto/core/opatch/opatch<date>.log 

/u01/app/12.2.0.1/oracle/cfgtoollogs/opatchautodb/systemconfig<date>.log 

/u01/app/12.2.0.1/oracle/cfgtoollogs/opatchauto/opatchauto<date>.log

/u01/app/oracle/crsdata/odanode1/crsconfig/crspatch_<hostname><date>.log 
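To cut down on upload round-trips, the files above can be bundled into a single archive before attaching them to the SR. A minimal sketch, assuming the default 12.2.0.1 paths shown above; globs that match nothing on a given node are simply skipped:

```shell
# Bundle the opatchauto/crsconfig logs listed above into one tarball for the SR.
OUT=/tmp/oda_patch_triage_$(date +%Y%m%d%H%M%S).tar.gz
FILES=""
for f in /u01/app/12.2.0.1/oracle/cfgtoollogs/opatchautodb/systemconfig* \
         /tmp/opatchAutoAnalyzePatch.log \
         /u01/app/12.2.0.1/oracle/cfgtoollogs/opatchauto/core/opatch/opatch*.log \
         /u01/app/12.2.0.1/oracle/cfgtoollogs/opatchauto/opatchauto*.log \
         /u01/app/oracle/crsdata/*/crsconfig/crspatch_*.log; do
  # Unmatched globs stay literal, so test each candidate before including it.
  [ -e "$f" ] && FILES="$FILES $f"
done
if [ -n "$FILES" ]; then
  tar -czf "$OUT" $FILES && echo "Created $OUT"
else
  echo "No matching log files found on this node"
fi
```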

Hybrid DNS Configuration

The following defines and illustrates an on-premises-to-OCI (hybrid) DNS configuration.

Oracle Cloud Infrastructure (OCI) customers can configure DNS names for their instances in the Virtual Cloud Network (VCN) as described in DNS in Your Virtual Cloud Network. The DNS names are resolvable only within the VCN using the VCN DNS resolver available at 169.254.169.254. This IP address is only reachable from instances in the VCN.

This document describes the process to enable resolution of DNS names of instances in the VCN from on-premises clients and vice-versa, when the on-premises datacenter is connected with the VCN (through VPN or FastConnect).

Setup Overview

Case1 – DNS resolution from on-premises to VCN

On-premises to VCN

When an on-premises client is trying to connect to cloud VCN resources:

  1. Client machine initiates a DNS query (for db1.exaclient.custvcn.oraclevcn.com) to on-prem DNS server (172.16.0.5)
  2. On-prem DNS server forwards the request to DNS VM in the VCN (10.0.10.15) over private connectivity (VPN or FastConnect)
  3. DNS query forwarded to VCN DNS resolver (169.254.169.254)
  4. DNS VM gets the IP address of the FQDN and sends it back to the on-prem DNS server
  5. On-prem DNS server gets the IP address and responds to the client machine

Case2 – DNS resolution from VCN to on-premises

VCN to on-premises

When an instance in the VCN is trying to connect to an on-premises instance:

  1. Instance in the VCN initiates a DNS query (say app1.customer.net)
  2. The DNS server configured in the DHCP options used by the instance’s subnet will receive the DNS request. In this case, the request will be received by DNS VM in the VCN
  3. DNS query forwarded to on-premises DNS server (172.16.0.5) over private connectivity (VPN or FastConnect)
  4. DNS VM gets the response and sends it back to the client

Configuration Steps

Below are the steps to achieve this configuration:

  1. Create a DNS VM in the VCN
    1. Create a security list with following rules:
      • allow udp 53 (for DNS queries) from clients (VCN address space + On-prem address space)
      • allow tcp 22 (for ssh access) from Internet or on-prem address space
      • allow ICMP type 3 from the same sources as the rules above (needed for path MTU discovery)
    2. Create a DHCP options set:
      • Set DNS type as “Internet and VCN resolver”
  2. Create a subnet, which uses the security list and DHCP options set created above.
  3. Launch a VM with latest ‘Oracle Linux 7.4’ image into this subnet
  4. Install & Configure named
   $ sudo yum install bind
   $ sudo firewall-cmd --permanent --add-port=53/udp
   $ sudo firewall-cmd --permanent --add-port=53/tcp
   $ sudo /bin/systemctl restart firewalld
   $ cat > /etc/named.conf
options {
        listen-on port 53 { any; };
        allow-query    { localhost; 10.0.0.0/16; 172.16.0.0/16; };
        forward        only;
        forwarders     { 169.254.169.254; };
        recursion yes;
};

zone "customer.net" {
        type       forward;
        forward    only;
        forwarders { 172.16.0.5; 172.16.31.5; };
};

<hit ctrl-D>

   $ sudo service named restart
  5. Configure forwarding on the on-prem DNS servers so that the VCN domain (custvcn.oraclevcn.com) is forwarded to the DNS VM in the VCN. (The original post included a screenshot of the conditional-forwarding setup in an AD/DNS server.)

Note that I borrowed this material from Oracle and made slight modifications for the configurations we are currently supporting.
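The forwarding behavior that the named.conf above encodes amounts to a longest-suffix zone match: queries for the on-prem domain go to the on-prem servers, and everything else falls through to the VCN resolver. A sketch of that decision; the zone names and IPs mirror the example configuration and are illustrative only:

```python
# Forwarder selection mirroring the named.conf zones above:
# "customer.net" queries -> on-prem DNS, everything else -> VCN resolver.
ZONE_FORWARDERS = {
    "customer.net": ["172.16.0.5", "172.16.31.5"],  # on-prem DNS servers
}
DEFAULT_FORWARDERS = ["169.254.169.254"]  # VCN DNS resolver

def forwarders_for(qname: str) -> list:
    """Pick forwarders by longest matching zone suffix, as BIND does."""
    labels = qname.rstrip(".").split(".")
    for i in range(len(labels)):
        zone = ".".join(labels[i:])
        if zone in ZONE_FORWARDERS:
            return ZONE_FORWARDERS[zone]
    return DEFAULT_FORWARDERS

print(forwarders_for("app1.customer.net"))                    # on-prem servers
print(forwarders_for("db1.exaclient.custvcn.oraclevcn.com"))  # VCN resolver
```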

Exadata Cloud – Database Deployment

This is Part 3 of the Post Provisioning of Exadata Cloud

Here we will create our first 12.2 database using the dbaasapi utility.

Once the Exadata tooling has been updated, you can leverage the dbaasapi command to create or delete Exadata Oracle databases.  When creating new databases, you should use only dbaasapi for this purpose.

The dbaasapi utility, a command-line front end to the DBaaS REST API, reads a JSON request body and generates a JSON response body in an output file.

The utility is located in the /var/opt/oracle/dbaasapi/ directory on the compute nodes and must be run as the root user.

[root@ashdbm01~]# mkdir -p /home/oracle/dbassinput

[root@ashdbm01~]# cd /home/oracle/dbassinput

Create a JSON file for DB creation, using the template

[root@ashdbm01~] # cat /home/oracle/dbassinput/dbaas.json

{
  "object": "db",
  "action": "start",
  "operation": "createdb",
  "params": {
    "nodelist": "",
    "dbname": "exadb",
    "edition": "EE_EP",
    "version": "12.1.0.2",
    "adminPassword": "WElcome#123_",
    "sid": "exadb",
    "pdbName": "PDB1",
    "charset": "AL32UTF8",
    "ncharset": "AL16UTF16",
    "backupDestination": "OSS",
    "cloudStorageContainer": "https://swiftobjectstorage.<region>.oraclecloud.com/v1/mycompany/DBBackups",
    "cloudStorageUser": "jsmith@mycompany.com",
    "cloudStoragePwd": "XXXXXXXXXXXX"
  },
  "outputfile": "/home/oracle/createdb.out",
  "FLAGS": ""
}
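Since dbaasapi fails with a parse or missing-parameter error when the request file is incomplete, it can be worth sanity-checking the file first. A hypothetical pre-flight check; the required-key lists below are assumptions based on the template above, not an official schema:

```python
import json

# Assumed required keys, derived from the createdb template above.
REQUIRED_TOP = {"object", "action", "operation", "params", "outputfile"}
REQUIRED_PARAMS = {"dbname", "edition", "version", "adminPassword",
                   "charset", "ncharset"}

def validate_request(text: str) -> list:
    """Return a sorted list of missing keys; an empty list means OK."""
    req = json.loads(text)  # raises ValueError on malformed JSON
    missing = sorted(REQUIRED_TOP - req.keys())
    missing += sorted(REQUIRED_PARAMS - req.get("params", {}).keys())
    return missing

sample_request = (
    '{"object":"db","action":"start","operation":"createdb",'
    '"params":{"dbname":"exadb","edition":"EE_EP","version":"12.1.0.2",'
    '"adminPassword":"x","charset":"AL32UTF8","ncharset":"AL16UTF16"},'
    '"outputfile":"/tmp/createdb.out"}'
)
print(validate_request(sample_request))  # []
```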

The following example illustrates the usage of dbaasapi to build a 12.2 database.

dbaasapi uses the JSON-format file as input for database creation.

[root@phxdbm-o3eja1 oracle]# cat $HOME/createdb_start.json

{

  “object”: “db”,

  “action”: “start”,

  “operation”: “createdb”,

  “params”: {

    “dbname”:                “yoda”,

    “edition”:               “EE_EP”,

    “version”:               “12.2.0.1”,

    “adminPassword”:         “”,

    “pdbName”:               “jedi1”,

    “backupDestination”:     “NONE”,

    “cloudStorageContainer”: “<bkup_oss_url>”,

    “cloudStorageUser”:      “<bkup_oss_user>”,

    “cloudStoragePwd”:       “<bkup_oss_passwd>”,

    “charset”:               “AL32UTF8”,

    “ncharset”:              “AL16UTF16”,

    “nodelist”:              “”

  },

  “outputfile”: “/home/oracle/created_yoda.out”,

  “FLAGS”: “”

}

[root@phxdbm-o3eja1 oracle]# /var/opt/oracle/dbaasapi/dbaasapi -i createdb_start.json

Once invoked, you can re-run dbaasapi with the status request file to check progress:

[root@phxdbm-o3eja1 oracle]# /var/opt/oracle/dbaasapi/dbaasapi -I status_createdb_yoda.json

[root@phxdbm-o3eja1 oracle]# cat createdb.out

{

“msg” : “Sync sqlnet file…[done]\\n

#Invoking assistant bkup\\nUsing cmd : /var/opt/oracle/ocde/assistants/bkup/bkup -out /var/opt/oracle/ocde/res/bkup.out -sid=\”yoda1\” -reco_grp=\”RECOC1\” -hostname=\”phxdbm-o3eja1.client.phxexadata.oraclevcn.com\” -oracle_home=\”/u02/app/oracle/product/12.2.0/dbhome_3\” -dbname=\”yoda\” -dbtype=\”exarac\” -exabm=\”yes\” -edition=\”enterprise\” -bkup_cfg_files=\”no\” -acfs_vol_dir=\”/var/opt/oracle/dbaas_acfs\” -bkup_oss_url=\”<bkup_oss_url>\” -bkup_oss_user=\”<bkup_oss_user>\” -version=\”12201\” -archlog=\”yes\” -oracle_base=\”/u02/app/oracle\” -firstrun=\”no\” -action=\”config\” -bkup_oss=\”no\” -bkup_disk=\”no\” -data_grp=\”DATAC1\” -action=config \\n\\n#

#Done executing bkup\\

Removed all entries from creg file : /var/opt/oracle/creg/yoda.ini matching passwd or decrypt_key\\n\\n#### Completed OCDE

Successfully ####\\nWARN: Could not register elogger_parameters: elogger.pm::_init: /var/opt/oracle/dbaas_acfs/events does not exist”,

   “object” : “db”,

   “status” : “Success”,

   “errmsg” : “”,

   “outputfile” : “/home/oracle/created_yoda.out”,

   “action” : “start”,

   “id” : “4”,

   “operation” : “createdb”,

   “logfile” : “/var/opt/oracle/log/yoda/dbaasapi/db/createdb/4.log”

}
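The `status` and `errmsg` fields in the output file are what tell you whether the create succeeded. A small sketch of checking them, assuming the response shape shown above:

```python
import json

def job_succeeded(response_text: str) -> bool:
    """True if a dbaasapi output file reports Success with no error message."""
    resp = json.loads(response_text)
    return resp.get("status") == "Success" and not resp.get("errmsg")

# Trimmed-down version of the createdb response shown above.
sample = ('{"object":"db","status":"Success","errmsg":"",'
          '"action":"start","id":"4","operation":"createdb"}')
print(job_succeeded(sample))  # True
```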

Backups

When the dbaasapi utility is used to create a database, it automatically enables automated RMAN backups, which are scheduled via crontab.  The following lists the crontab entries for backups.

[root@ network-scripts]# cat /etc/crontab

SHELL=/bin/bash

PATH=/sbin:/bin:/usr/sbin:/usr/bin

MAILTO=root

HOME=/

# For details see man 4 crontabs

# Example of job definition:

# .—————- minute (0 – 59)

# |  .————- hour (0 – 23)

# |  |  .———- day of month (1 – 31)

# |  |  |  .——- month (1 – 12) OR jan,feb,mar,apr …

# |  |  |  |  .—- day of week (0 – 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat

# |  |  |  |  |

# *  *  *  *  * user-name command to be executed

0,30 * * * * root /var/opt/oracle/bkup_api/bkup_api bkup_archlogs --dbname=OCITEST

0,30 * * * * root /var/opt/oracle/bkup_api/bkup_api bkup_archlogs --dbname=yoda
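The `0,30` minute field in the entries above means the archive-log backup fires at minutes 0 and 30 of every hour. A minimal sketch expanding such a comma-separated minute field:

```python
def expand_minute_field(field: str) -> list:
    """Expand a comma-separated cron minute field into sorted ints."""
    return sorted(int(m) for m in field.split(","))

print(expand_minute_field("0,30"))  # [0, 30]
```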

Exadata Cloud – Post Provisioning Exadata Configuration – Part 2

This is Part 2 of the Post Provisioning of Exadata Cloud

Here we will update the Tooling software necessary to support Exadata Patching.

Exadata Tooling Update

[root@phxdbm-o3eja1 exadbcpatch]# /var/opt/oracle/exapatch/exadbcpatchsm -list_tools
INFO: non async case
INFO: cmd is: /var/opt/oracle/exapatch/exadbcpatch -list_tools

Starting EXADBCPATCH
Logfile is /var/opt/oracle/log/exadbcpatch/exadbcpatch_2018-02-28_22:37:23.log
Config file is /var/opt/oracle/exapatch/exadbcpatch.cfg

INFO: oss_container_url is not given, using the default
INFO: tools images available for patching

$VAR1 = {
          'last_async_precheck_txn_id' => ' ',
          'last_async_apply_txn_id' => ' ',
          'errmsg' => '',
          'err' => '',
          'current_version' => '180104',
          'last_async_precheck_patch_id' => ' ',
          'current_patch' => '180104',
          'last_async_apply_patch_id' => ' ',
          'patches' => [
                         {
                           'patchid' => '17.4.1.2.0BM_180223',
                           'last_precheck_txnid' => '',
                           'description' => 'DBaaSTools for ECS OCI'
                         }
                       ]
        };

<json begin>{"last_async_precheck_txn_id":" ","last_async_apply_txn_id":" ","err":"","errmsg":"","current_version":"180104","last_async_precheck_patch_id":" ","current_patch":"180104","last_async_apply_patch_id":" ","patches":[{"patchid":"17.4.1.2.0BM_180223","last_precheck_txnid":"","description":"DBaaSTools for ECS OCI"}]}<json end>

[root@phxdbm-o3eja1 exadbcpatch]# /var/opt/oracle/exapatch/exadbcpatchsm -toolsinst_async '17.4.1.2.0BM_180223'
INFO: async case
INFO: patch number given is : 17.4.1.2.0BM_180223

INFO: check for this action toolsinst_async
<start txn>69<end txn>
INFO: command to be run is: /var/opt/oracle/exapatch/exadbcpatch -toolsinst_async -rpmversion=17.4.1.2.0BM_180223
INFO: system cmd is: "nohup /var/opt/oracle/exapatch/exadbcpatch -toolsinst_async -rpmversion=17.4.1.2.0BM_180223 "
[root@phxdbm-o3eja1 exadbcpatch]# /var/opt/oracle/exapatch/exadbcpatchsm -list_tools
INFO: non async case
INFO: cmd is: /var/opt/oracle/exapatch/exadbcpatch -list_tools
Starting EXADBCPATCH
Logfile is /var/opt/oracle/log/exadbcpatch/exadbcpatch_2018-02-28_22:42:27.log
Config file is /var/opt/oracle/exapatch/exadbcpatch.cfg

INFO: oss_container_url is not given, using the default
INFO: tools images available for patching

$VAR1 = {
          'last_async_precheck_txn_id' => ' ',
          'last_async_apply_txn_id' => ' ',
          'errmsg' => 'no applicable dbaastools rpms found: check exadbcpatch.log in /var/opt/oracle/log/exadbcpatch',
          'err' => '-1',
          'current_version' => '180223',
          'last_async_precheck_patch_id' => ' ',
          'current_patch' => '180223',
          'last_async_apply_patch_id' => ' ',
          'patches' => []
        };

<json begin>{"last_async_precheck_txn_id":" ","last_async_apply_txn_id":" ","err":"-1","errmsg":"no applicable dbaastools rpms found: check exadbcpatch.log in /var/opt/oracle/log/exadbcpatch","current_version":"180223","last_async_precheck_patch_id":" ","current_patch":"180223","last_async_apply_patch_id":" ","patches":[]}<json end>

[root@phxdbm-o3eja1 ~]# /var/opt/oracle/exapatch/exadbcpatchsm -get_status 8

# /var/opt/oracle/exapatch/exadbcpatchmulti -precheck_async 26737266-GI -sshkey=/home/opc/.ssh/id_rsa -instance1=phxdbm-o3eja1:/u01/app/12.2.0.1/grid -instance2=phxdbm-o3eja2:/u01/app/12.2.0.1/grid -instance3=phxdbm-o3eja3:/u01/app/12.2.0.1/grid -instance4=phxdbm-o3eja4:/u01/app/12.2.0.1/grid

INFO: number of nodes - 4
INFO: Master transaction id is : 70
<json begin>70<json end>
$VAR1 = {
          'cur_patchid' => '',
          'hostname' => 'phxdbm-o3eja1',
          'ohomes' => '/u01/app/12.2.0.1/grid',
          'new_version' => '',
          'patch_completed' => '',
          'new_patchid' => '',
          'cur_version' => '',
          'apply_passed' => ''
        };
INFO: hostname : phxdbm-o3eja1

INFO: ohomes : /u01/app/12.2.0.1/grid
INFO: sshkey being used is /home/opc/.ssh/id_rsa for phxdbm-o3eja1
INFO: cmd being run is: sudo /var/opt/oracle/exapatch/exadbcpatchsm -precheck_async 26737266-GI -patch_homes=/u01/app/12.2.0.1/grid -txn_fl=/home/opc/txnid_fl.248762
INFO: running on the node phxdbm-o3eja1
INFO: host and txn id given : phxdbm-o3eja1 and 71
INFO: status seen is: Running: precheck in progress
INFO: status of slave txn###: Running: precheck in progress on phxdbm-o3eja1
INFO: host and txn id given : phxdbm-o3eja1 and 71
INFO: status seen is: Running: precheck in progress
INFO: status of slave txn###: Running: precheck in progress on phxdbm-o3eja1
INFO: host and txn id given : phxdbm-o3eja1 and 71
INFO: status seen is: Running: precheck in progress
INFO: status of slave txn###: Running: precheck in progress on phxdbm-o3eja1
INFO: host and txn id given : phxdbm-o3eja1 and 71
INFO: status seen is: Precheck completed
INFO: status of slave txn###: Precheck completed on phxdbm-o3eja1
INFO: -precheck_async completed on phxdbm-o3eja1:/u01/app/12.2.0.1/grid

$VAR1 = {
          'cur_patchid' => '',
          'hostname' => 'phxdbm-o3eja2',
          'ohomes' => '/u01/app/12.2.0.1/grid',
          'new_version' => '',
          'patch_completed' => '',
          'new_patchid' => '',
          'cur_version' => '',
          'apply_passed' => ''
        };
INFO: hostname : phxdbm-o3eja2

INFO: ohomes : /u01/app/12.2.0.1/grid
INFO: sshkey being used is /home/opc/.ssh/id_rsa for phxdbm-o3eja2
INFO: cmd being run is: ssh -q -t -i /home/opc/.ssh/id_rsa -o StrictHostKeyChecking=no opc@phxdbm-o3eja2 sudo /var/opt/oracle/exapatch/exadbcpatchsm -precheck_async 26737266-GI -patch_homes=/u01/app/12.2.0.1/grid -txn_fl=/home/opc/txnid_fl.248762
INFO: running on the node phxdbm-o3eja2
INFO: host and txn id given : phxdbm-o3eja2 and 17
INFO: status seen is: Running: precheck in progress
INFO: status of slave txn###: Running: precheck in progress on phxdbm-o3eja2
....
INFO: status seen is: Precheck completed
INFO: status of slave txn###: Precheck completed on phxdbm-o3eja2
INFO: -precheck_async completed on phxdbm-o3eja2:/u01/app/12.2.0.1/grid
INFO: number of nodes - 4
INFO: Master transaction id is : 72
<json begin>72<json end>
$VAR1 = {
          'cur_patchid' => '',
          'hostname' => 'phxdbm-o3eja1',
          'ohomes' => '/u01/app/12.2.0.1/grid',
          'new_version' => '',
          'patch_completed' => '',
          'new_patchid' => '',
          'cur_version' => '',
          'apply_passed' => ''
        };

# /var/opt/oracle/exapatch/exadbcpatchmulti -apply_sync 26737266-GI -sshkey=/home/opc/.ssh/id_rsa -instance1=phxdbm-o3eja1:/u01/app/12.2.0.1/grid -instance2=phxdbm-o3eja2:/u01/app/12.2.0.1/grid -instance3=phxdbm-o3eja3:/u01/app/12.2.0.1/grid -instance4=phxdbm-o3eja4:/u01/app/12.2.0.1/grid -run_datasql=1

INFO: number of nodes - 4
INFO: Master transaction id is : 72
<json begin>72<json end>
$VAR1 = {
          'cur_patchid' => '',
          'hostname' => 'phxdbm-o3eja1',
          'ohomes' => '/u01/app/12.2.0.1/grid',
          'new_version' => '',
          'patch_completed' => '',
          'new_patchid' => '',
          'cur_version' => '',
          'apply_passed' => ''
        };
INFO: hostname : phxdbm-o3eja1

INFO: ohomes : /u01/app/12.2.0.1/grid
INFO: sshkey being used is /home/opc/.ssh/id_rsa for phxdbm-o3eja1
INFO: cmd being run is: sudo /var/opt/oracle/exapatch/exadbcpatchsm -apply_async 26737266-GI -patch_homes=/u01/app/12.2.0.1/grid -txn_fl=/home/opc/txnid_fl.290963
INFO: running on the node phxdbm-o3eja1
INFO: host and txn id given : phxdbm-o3eja1 and 73
INFO: status seen is: Running: precheck in progress
INFO: status of slave txn###: Running: precheck in progress on phxdbm-o3eja1
INFO: host and txn id given : phxdbm-o3eja1 and 73
INFO: status seen is: Running: precheck in progress
INFO: status of slave txn###: Running: precheck in progress on phxdbm-o3eja1
INFO: number of nodes - 4
INFO: Master transaction id is : 72
<json begin>72<json end>

$VAR1 = {
          'cur_patchid' => '',
          'hostname' => 'phxdbm-o3eja1',
          'ohomes' => '/u01/app/12.2.0.1/grid',
          'new_version' => '',
          'patch_completed' => '',
          'new_patchid' => '',
          'cur_version' => '',
          'apply_passed' => ''
        };
INFO: hostname : phxdbm-o3eja1

INFO: ohomes : /u01/app/12.2.0.1/grid
INFO: sshkey being used is /home/opc/.ssh/id_rsa for phxdbm-o3eja1
INFO: cmd being run is: sudo /var/opt/oracle/exapatch/exadbcpatchsm -apply_async 26737266-GI -patch_homes=/u01/app/12.2.0.1/grid -txn_fl=/home/opc/txnid_fl.290963
INFO: running on the node phxdbm-o3eja1
INFO: host and txn id given : phxdbm-o3eja1 and 73
INFO: status seen is: Running: precheck in progress
INFO: status of slave txn###: Running: precheck in progress on phxdbm-o3eja1
INFO: host and txn id given : phxdbm-o3eja1 and 73
INFO: status seen is: Running: precheck in progress
INFO: status of slave txn###: Running: config in progress on phxdbm-o3eja1
INFO: -apply_async completed on phxdbm-o3eja1:/u01/app/12.2.0.1/grid

$VAR1 = {
          'cur_patchid' => '',
          'hostname' => 'phxdbm-o3eja1',
          'ohomes' => '/u01/app/12.2.0.1/grid',
          'new_version' => '',
          'patch_completed' => '',
          'new_patchid' => '',
          'cur_version' => '',
          'apply_passed' => ''
        };
INFO: hostname : phxdbm-o3eja1

[grid@phxdbm-o3eja1 OPatch]$ /u01/app/12.2.0.1/grid/OPatch/opatch lspatches
26928563;TOMCAT RELEASE UPDATE 12.2.0.1.0(ID:170711) (26928563)
26925644;OCW RELEASE UPDATE 12.2.0.1.0(ID:171003) (26925644)
26839277;DBWLM RELEASE UPDATE 12.2.0.1.0(ID:170913) (26839277)
26737232;ACFS RELEASE UPDATE 12.2.0.1.0(ID:170823) (26737232)
26710464;Database Release Update : 12.2.0.1.171017 (26710464)

OPatch succeeded.
[grid@phxdbm-o3eja1 OPatch]$ pwd
/u01/app/12.2.0.1/grid/OPatch
[grid@phxdbm-o3eja1 OPatch]$ ssh phxdbm-o3eja2 /u01/app/12.2.0.1/grid/OPatch/opatch lspatches
26928563;TOMCAT RELEASE UPDATE 12.2.0.1.0(ID:170711) (26928563)
26925644;OCW RELEASE UPDATE 12.2.0.1.0(ID:171003) (26925644)
26839277;DBWLM RELEASE UPDATE 12.2.0.1.0(ID:170913) (26839277)
26737232;ACFS RELEASE UPDATE 12.2.0.1.0(ID:170823) (26737232)
26710464;Database Release Update : 12.2.0.1.171017 (26710464)

OPatch succeeded.
[grid@phxdbm-o3eja1 OPatch]$ ssh phxdbm-o3eja3 /u01/app/12.2.0.1/grid/OPatch/opatch lspatches
26928563;TOMCAT RELEASE UPDATE 12.2.0.1.0(ID:170711) (26928563)
26925644;OCW RELEASE UPDATE 12.2.0.1.0(ID:171003) (26925644)
26839277;DBWLM RELEASE UPDATE 12.2.0.1.0(ID:170913) (26839277)
26737232;ACFS RELEASE UPDATE 12.2.0.1.0(ID:170823) (26737232)
26710464;Database Release Update : 12.2.0.1.171017 (26710464)

OPatch succeeded.
[grid@phxdbm-o3eja1 OPatch]$ ssh phxdbm-o3eja4 /u01/app/12.2.0.1/grid/OPatch/opatch lspatches
26928563;TOMCAT RELEASE UPDATE 12.2.0.1.0(ID:170711) (26928563)
26925644;OCW RELEASE UPDATE 12.2.0.1.0(ID:171003) (26925644)
26839277;DBWLM RELEASE UPDATE 12.2.0.1.0(ID:170913) (26839277)
26737232;ACFS RELEASE UPDATE 12.2.0.1.0(ID:170823) (26737232)
26710464;Database Release Update : 12.2.0.1.171017 (26710464)

OPatch succeeded.
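After the apply completes, every node should report the identical patch list, as the four outputs above do. A sketch of automating that comparison, with the patch IDs copied from the output above:

```python
# Compare the set of patch IDs reported by `opatch lspatches` on each node.
# In practice you would parse the command output; here the sets are inlined.
node_patches = {
    "phxdbm-o3eja1": {"26928563", "26925644", "26839277", "26737232", "26710464"},
    "phxdbm-o3eja2": {"26928563", "26925644", "26839277", "26737232", "26710464"},
    "phxdbm-o3eja3": {"26928563", "26925644", "26839277", "26737232", "26710464"},
    "phxdbm-o3eja4": {"26928563", "26925644", "26839277", "26737232", "26710464"},
}

baseline = next(iter(node_patches.values()))
mismatched = [node for node, patches in node_patches.items() if patches != baseline]

print("all nodes consistent" if not mismatched else f"mismatch on: {mismatched}")
```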

Exadata Cloud – Post Provisioning View of the system

Review of Exadata Deployment

Once the Exadata provisioning process completes (which takes around 4-5 hours for a ½ rack), we can explore what gets deployed:

$ cat /etc/oratab

OCITEST:/u02/app/oracle/product/12.2.0/dbhome_2:Y

+ASM1:/u01/app/12.2.0.1/grid:N       # line added by Agent

 

[grid@phxdbm-o3eja1 ~]$ olsnodes -n

phxdbm-o3eja1 1

phxdbm-o3eja2 2

phxdbm-o3eja3 3

phxdbm-o3eja4 4

 

[grid@phxdbm-o3eja1 ~]$ cat /var/opt/oracle/creg/OCITEST.ini | grep nodelist

nodelist=phxdbm-o3eja1 phxdbm-o3eja2 phxdbm-o3eja3 phxdbm-o3eja4

 

[grid@phxdbm-o3eja1 ~]$ crsctl stat res -t

—————————————————————————–

Name           Target  State        Server                   State details

—————————————————————————–

Local Resources

—————————————————————————–

ora.ACFSC1_DG1.C1_DG11V.advm

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.ACFSC1_DG1.C1_DG12V.advm

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.ACFSC1_DG1.dg

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.ACFSC1_DG2.C1_DG2V.advm

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.ACFSC1_DG2.dg

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.ASMNET1LSNR_ASM.lsnr

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.DATAC1.dg

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.DBFS_DG.dg

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.LISTENER.lsnr

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.RECOC1.dg

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.acfsc1_dg1.c1_dg11v.acfs

ONLINE  ONLINE       phxdbm-o3eja1            mounted on /scratch/acfsc1_dg1,STABLE

ONLINE  ONLINE       phxdbm-o3eja2            mounted on /scratch/acfsc1_dg1,STABLE

ONLINE  ONLINE       phxdbm-o3eja3            mounted on /scratch/acfsc1_dg1,STABLE

ONLINE  ONLINE       phxdbm-o3eja4            mounted on /scratch/acfsc1_dg1,STABLE

ora.acfsc1_dg1.c1_dg12v.acfs

ONLINE  ONLINE       phxdbm-o3eja1            mounted on /u02/app_acfs,STABLE

ONLINE  ONLINE       phxdbm-o3eja2            mounted on /u02/app_acfs,STABLE

ONLINE  ONLINE       phxdbm-o3eja3            mounted on /u02/app_acfs,STABLE

ONLINE  ONLINE       phxdbm-o3eja4            mounted on /u02/app_acfs,STABLE

ora.acfsc1_dg2.c1_dg2v.acfs

ONLINE  ONLINE       phxdbm-o3eja1            mounted on /var/opt/oracle/dbaas_acfs,STABLE

ONLINE  ONLINE       phxdbm-o3eja2            mounted on /var/opt/oracle/dbaas_acfs,STABLE

ONLINE  ONLINE       phxdbm-o3eja3            mounted on /var/opt/oracle/dbaas_acfs,STABLE

ONLINE  ONLINE       phxdbm-o3eja4            mounted on /var/opt/oracle/dbaas_acfs,STABLE

ora.net1.network

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.ons

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.proxy_advm

ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ONLINE  ONLINE       phxdbm-o3eja4            STABLE

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

1        ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ora.LISTENER_SCAN2.lsnr

1        ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ora.LISTENER_SCAN3.lsnr

1        ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ora.asm

1        ONLINE  ONLINE       phxdbm-o3eja1            Started,STABLE

2        ONLINE  ONLINE       phxdbm-o3eja2            Started,STABLE

3        ONLINE  ONLINE       phxdbm-o3eja3            Started,STABLE

4        ONLINE  ONLINE       phxdbm-o3eja4            Started,STABLE

ora.cvu

1        ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ora.ocitest.db

1        ONLINE  ONLINE       phxdbm-o3eja1            Open,HOME=/u02/app/oracle/product/12.2.0/dbhome_2,STABLE

2        ONLINE  ONLINE       phxdbm-o3eja2            Open,HOME=/u02/app/oracle/product/12.2.0/dbhome_2,STABLE

3        ONLINE  ONLINE       phxdbm-o3eja3            Open,HOME=/u02/app/oracle/product/12.2.0/dbhome_2,STABLE

4        ONLINE  ONLINE       phxdbm-o3eja4            Open,HOME=/u02/app/oracle/product/12.2.0/dbhome_2,STABLE

ora.phxdbm-o3eja1.vip

1        ONLINE  ONLINE       phxdbm-o3eja1            STABLE

ora.phxdbm-o3eja2.vip

1        ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ora.phxdbm-o3eja3.vip

1        ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ora.phxdbm-o3eja4.vip

1        ONLINE  ONLINE       phxdbm-o3eja4            STABLE

ora.qosmserver

1        OFFLINE OFFLINE                               STABLE

ora.scan1.vip

1        ONLINE  ONLINE       phxdbm-o3eja2            STABLE

ora.scan2.vip

1        ONLINE  ONLINE       phxdbm-o3eja3            STABLE

ora.scan3.vip

1        ONLINE  ONLINE       phxdbm-o3eja1            STABLE

--------------------------------------------------------------------------------

[grid@phxdbm-o3eja1 ~]$ asmcmd lsct

DB_Name  Status     Software_Version  Compatible_version  Instance_Name   Disk_Group

+APX     CONNECTED        12.2.0.1.0          12.2.0.1.0  +APX1   ACFSC1_DG1

+APX     CONNECTED        12.2.0.1.0          12.2.0.1.0  +APX1   ACFSC1_DG2

+ASM     CONNECTED        12.2.0.1.0          12.2.0.1.0  +ASM1   DATAC1

+ASM     CONNECTED        12.2.0.1.0          12.2.0.1.0  +ASM1    DBFS_DG

OCITEST  CONNECTED        12.2.0.1.0          12.2.0.0.0  OCITEST1 DATAC1

OCITEST  CONNECTED        12.2.0.1.0          12.2.0.0.0  OCITEST1  RECOC1

_OCR     CONNECTED         –                  phxdbm-o3eja1.client.phxexadata.oraclevcn.com  DBFS_DG

yoda     CONNECTED        12.2.0.1.0          12.2.0.0.0  yoda1    DATAC1

yoda     CONNECTED        12.2.0.1.0          12.2.0.0.0  yoda1    RECOC1

 

[root@phxdbm-o3eja1 ~]# df -k

Filesystem           1K-blocks     Used Available Use% Mounted on

/dev/mapper/VGExaDb-LVDbSys1

24639868  3878788  19486408  17% /

tmpfs                742619136  2465792 740153344   1% /dev/shm

/dev/xvda1              499656    26360    447084   6% /boot

/dev/mapper/VGExaDb-LVDbOra1

20511356   719324  18727072   4% /u01

/dev/xvdb             51475068  9757380  39079864  20% /u01/app/12.2.0.1/grid

/dev/xvdc             51475068  9302820  39534424  20% /u01/app/oracle/product/12.1.0.2/dbhome_1

/dev/xvdd             51475068  8173956  40663288  17% /u01/app/oracle/product/12.2.0.1/dbhome_1

/dev/xvde             51475068  6002756  42834488  13% /u01/app/oracle/product/11.2.0.4/dbhome_1

/dev/xvdg            206293688 19751360 176040184  11% /u02

/dev/asm/c1_dg12v-186

459276288  1067008 458209280   1% /u02/app_acfs

/dev/asm/c1_dg11v-186

229638144   611488 229026656   1% /scratch/acfsc1_dg1

/dev/asm/c1_dg2v-341 228589568 26597644 201991924  12% /var/opt/oracle/dbaas_acfs

 

Oracle Homes are created and mounted, though for IQN we will only be using 12.2, 12.1.0.2, and 11.2.0.4 [interim].

The following are Exadata-specific filesystems and their use cases:

/scratch/acfsc1_dg1          – staging area for Exadata

/u02/app_acfs                – user filesystem for applications (currently empty)

/var/opt/oracle/dbaas_acfs   – binary and image repository for all Exadata patching and enablement

Exadata Cloud Deployment and Considerations

I recently did a presentation and whiteboard session on Exadata Cloud deployment.  As part of that engagement, I did a small write-up on the topic.  This series of posts reflects that presentation:

Cloud Exadata Network and Platform Configuration

Exadata DB Systems are offered in quarter-rack, half-rack, or full-rack configurations, and each configuration consists of compute nodes and storage servers. The compute nodes are each configured as a virtual machine (VM).

Key Operational characteristics of Exadata Cloud

  • Admins have root privileges for the compute node VMs, so third-party software can be installed; however, only supported Oracle Database versions and RPMs should be implemented.

 

  • Admins do not have administrative access to the Exadata infrastructure components, including the physical compute node hardware, network switches, power distribution units (PDUs), integrated lights-out management (ILOM) interfaces, or the Exadata Storage Servers, which are all administered by Oracle.

 

  • Admins have full administrative privileges for their databases. However, application users should connect to databases via Oracle Net Services.

 

  • Admins are responsible for database administration tasks such as creating tablespaces and managing database users.

 

  • Admins should define how SSH keys will be managed for users that need compute node access.

Provisioning Exadata Pre-reqs

The following are network pre-reqs for provisioning Cloud Exadata DB Systems:

Subnets

  • Two separate VCN subnets are required: a client subnet for user data traffic and a backup subnet for backup traffic.
  • Define both the client subnet and the backup subnet as public subnets. Exadata requires a public subnet to support backup of the database to the Object Store.
  • Do not use a subnet that overlaps with 192.168.128.0/20. This restriction applies to both the client subnet and the backup subnet.
  • Oracle requires that you use a VCN Resolver for DNS name resolution for the client subnet. It automatically resolves the Swift endpoints required for backing up databases, patching, and updating the cloud tooling on an Exadata DB System.
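The overlap restriction above is easy to check mechanically. The following is a minimal sketch (not Oracle tooling) that tests whether a candidate CIDR overlaps the reserved 192.168.128.0/20 range; the example CIDRs are hypothetical.

```shell
#!/bin/sh
# Sketch: reject VCN subnets that overlap the reserved 192.168.128.0/20 range.
# Example CIDRs below are hypothetical.

ip_to_int() {
  oldIFS=$IFS; IFS=.
  set -- $1
  IFS=$oldIFS
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Returns success (0) when the given CIDR overlaps 192.168.128.0/20.
overlaps_reserved() {
  net=${1%/*}; bits=${1#*/}
  start=$(ip_to_int "$net")
  end=$(( start + (1 << (32 - bits)) - 1 ))
  r_start=$(ip_to_int 192.168.128.0)
  r_end=$(( r_start + (1 << 12) - 1 ))   # a /20 spans 4096 addresses
  [ "$start" -le "$r_end" ] && [ "$end" -ge "$r_start" ]
}

overlaps_reserved 192.168.130.0/24 && echo "192.168.130.0/24 overlaps: pick another subnet"
overlaps_reserved 10.0.1.0/24 || echo "10.0.1.0/24 is safe"
```

Run it against both the client and the backup subnet candidates before provisioning.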

At the completion of the provisioning, you should have the above subnet and DNS configuration in place.

Security Lists and Routing

  • Each VCN subnet has a default security list that contains a rule to allow TCP traffic on destination port 22 (SSH) from source 0.0.0.0/0 and any source port. Properly configure the security list ingress and egress rules.
  • The OneCommand configuration enables TCP and ICMP traffic between all nodes and all ports in the respective subnet for the client and backup subnets.
  • The Exadata DB System’s cloud network (VCN) must be configured with an internet gateway. Add a route table rule to open access to the Object Storage Service Swift endpoint on CIDR 0.0.0.0/0.
  • Update the backup subnet’s security list to disallow any access from outside the subnet and to allow egress traffic for TCP port 443 (HTTPS) on CIDR ranges 129.146.0.0/16 (Phoenix region) and 129.213.0.0/16 (Ashburn region).

Enable a route table with an entry that includes an Internet Gateway; this will enable remote SSH access to the Exadata nodes.

Provisioning Exadata

Service Console – Provision Exadata

Below are screenshot views that illustrate the provisioning of Exadata.

Cloud Exadata Storage Configuration

Exadata Storage Servers use the following ASM disk groups:

DATA diskgroup – used for the storage of Oracle Database datafiles.

RECO diskgroup – primarily used for storing files related to backup and recovery, such as RMAN backups and archived redo log files.

If you choose to provision for backups on Exadata storage, approximately 40% of the available storage space is allocated to the DATA disk group and approximately 60% to the RECO disk group. If you do not provision for backups on Exadata storage, approximately 80% is allocated to the DATA disk group and approximately 20% to the RECO disk group.
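The two allocation schemes can be sanity-checked with simple arithmetic. This is a back-of-the-envelope sketch; the usable_tb figure is a hypothetical usable-capacity number, not an Oracle-published one.

```shell
#!/bin/sh
# Back-of-the-envelope DATA/RECO split for the two backup options described above.
# usable_tb is a hypothetical usable-capacity figure.
usable_tb=84

awk -v t="$usable_tb" 'BEGIN {
  printf "Backups on Exadata storage:  DATA=%.1f TB  RECO=%.1f TB\n", t * 0.40, t * 0.60
  printf "Backups outside Exadata:     DATA=%.1f TB  RECO=%.1f TB\n", t * 0.80, t * 0.20
}'
```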

DBFS and ACFS diskgroups are system diskgroups that support various operational purposes. The DBFS disk group is primarily used to store the shared Clusterware files (Oracle Cluster Registry and voting disks), while the ACFS disk groups are primarily used to store Oracle Database binaries, staging directories and metadata.

 

Oracle Private Cloud Appliance (PCA) – How to get an Inventory

We recently had to move our PCA.  Before the move we needed to make sure everything was documented, including a detailed inventory of the compute nodes, attached storage, and management node configuration.  This blog post illustrates how to do that inventory collection.  Note that you’ll need root access to the [active] management node.

Here’s some basic info on our PCA:

Component: Server
Software:  PCA 2.2.2; OVM 3.2.10; Oracle VM Manager, Oracle Fabric Manager, and PCA controller software installed on the management servers
Hardware:  Oracle Server X5-2, 20 nodes, each with (2) 18-core processors and 256 GB of memory

Component: Fabric Interconnect
Hardware:  (2) Oracle Fabric Interconnect F1-15 switches running Oracle Fabric Manager and SDN software; used specifically to provide 10 GbE (SFP+) and 8 Gb FC (LC) ports to connect to the VMAX 10K (external storage needed for any guest application)

Component: Internal Network
Hardware:  (2) 36-port QDR InfiniBand switches, used for high-speed internal communication between the compute servers, fabric interconnects, and OVMM servers

Component: Management Network
Hardware:  (2) 24-port 10-Gigabit Ethernet switches, providing the management interface/access for the compute servers, fabric interconnects, and OVMM servers

Component: ZFS
Hardware:  ZS3-ES storage appliance, 18 TB total

Component: Application
Software:  E-Business Suite R12 (12.1.3)

Component: Oracle Database
Software:  11.2.0.4, non-RAC/filesystem, 2 TB

Component: External Storage
Hardware:  EMC VMAX 10K

First run yum install expect on the OVM Manager node, then modify the inventory expect script to contain the correct admin password. You will probably want to run it as ./inventory > /tmp/inventory-pca.txt, as the output is quite voluminous.

As an alternative to the inventory script, we can leverage the eovmcli script in that same directory. Create a new script (e.g., doit.sh) in the /u01/app/oracle/ovm-manager-3/ovm_cli/expectscripts/ directory with the following content, replacing the password references with the correct admin password, then run it and capture the output.

for i in `./eovmcli 'admin' 'password' 'list vm' | grep id: | awk '{print $NF}' | cut -d: -f2-`
do
    echo ---------- PROCESSING VM=$i
    ./eovmcli 'admin' 'password' "show vm name=$i"
    echo

    for j in `./eovmcli 'admin' 'password' "show vm name=$i" | egrep VmDiskMapping | awk '{print $NF}'`
    do
        echo vDisk=$j
        ./eovmcli 'admin' 'password' "show vmdiskmapping id=$j"
        echo
    done
    echo
    unset j
done

 

To understand what the inventory script does, the following are the commands it actually runs underneath:

OVM> list ServerPool

Command: list ServerPool

Status: Success

Time: 2017-11-14 20:56:12,341 UTC

Data: 

  id:0004fb00000200004d46b98dcfc43ff3  name:Rack1_ServerPool

 

PCA Storage Cloud Layout

The Oracle Private Cloud Appliance (PCA) supports storage expansion using either Fibre Channel or InfiniBand storage devices connected to the Fabric Interconnects.  We have chosen to leverage the existing Fibre Channel-based EMC VMAX for this expansion.  This section describes the connectivity to the EMC array.

 

Storage Cloud Overview

 

Note that there is an OVM server pool, named Rack1_ServerPool, in the PCA. The PCA consists of 20 compute nodes, identified as ovcacn<compute node number>, all assigned to this server pool; e.g., ovcacn[07-14]r1 (8 servers) and ovcacn[26-37]r1 (12 servers).

A vHBA is created on each compute node for each storage cloud. A total of four storage clouds are defined when the PCA is installed, thus (4) vHBAs on each of the compute and management nodes.

Storage clouds allow you to cable and configure your external storage in such a way as to improve overall throughput or to build a fully HA enabled infrastructure.  Storage clouds are created and configured automatically on PCA installation.

We have a fully HA-enabled environment, where all four Storage Clouds are cross-cabled between the PCA Fabric Interconnects and two  FC switches.

For each PCA compute server, WWPNs are registered and created for the vHBAs with assigned aliases, so each WWPN can be identified as belonging to a particular server and storage cloud.

Once the PCA Fabric Interconnect WWPNs are presented to the VMAX array, the storage is visible to the PCA and can be seen using the pca-admin list wwpn-info command.  This command output is used below to illustrate and identify matching WWPNs.

Fibre Channel with the Oracle PCA requires an NPIV-capable FC switch or switches. Note that because the Fabric Interconnects use NPIV to map the port nodes to the World Wide Node Names (WWNNs) of the vHBAs that are created on each server, it is not possible to simply patch FC-capable storage directly into the FC ports on the Fabric Interconnects.  Software required to translate WWPNs to WWNNs does not exist on the storage heads of most FC storage devices, so directly attaching the storage device would prevent registration of the WWPNs for the vHBAs available on each server.

Storage Cloud Connectivity

There are (4) Storage Clouds (external Fibre Channel connections) attached to the PCA X5-2; these are listed below (using the show storage-network command):

Network_Name                        Description         

————                        ———–         

Cloud_D                             Default Storage Cloud ru15 port2

Cloud_A                             Default Storage Cloud ru22 port1

Cloud_C                             Default Storage Cloud ru15 port1

Cloud_B                             Default Storage Cloud ru22 port2

Each Storage Cloud is connected into the two PCA internal switches: ovcasw22r1 and ovcasw15r1

Each compute node (CN) has four vHBAs connected into the Storage Clouds, vHBA01 to vHBA04.  The following describes this connectivity:

  • vHBA01 is connected to Cloud_A
  • vHBA02 is connected to Cloud_B
  • vHBA03 is connected to Cloud_C
  • vHBA04 is connected to Cloud_D
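The fixed mapping above can be expressed as a small lookup helper, handy when scripting against pca-admin output. This is an illustrative sketch, not part of the PCA tooling.

```shell
#!/bin/sh
# Illustrative helper: map a vHBA name to its Storage Cloud, per the fixed
# vHBA01..vHBA04 -> Cloud_A..Cloud_D assignment above.
cloud_for_vhba() {
  case $1 in
    vhba01) echo Cloud_A ;;
    vhba02) echo Cloud_B ;;
    vhba03) echo Cloud_C ;;
    vhba04) echo Cloud_D ;;
    *)      echo unknown; return 1 ;;
  esac
}

cloud_for_vhba vhba03   # prints Cloud_C
```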

 

This CN to Cloud connectivity is illustrated below for each Storage Cloud:

—————————————-

Network_Name         Cloud_A             

Description          Default Storage Cloud ru22 port1

Ports                ovcasw22r1:3:1, ovcasw22r1:12:1

vHBAs                ovcacn32r1-vhba01, ovcacn13r1-vhba01, ovcacn37r1-vhba01, ovcacn26r1-vhba01, ovcacn31r1-vhba01, ovcacn10r1-vhba01, ovcacn27r1-vhba01, ovcacn09r1-vhba01, ovcacn08r1-vhba01, ovcacn29r1-vhba01, ovcacn28r1-vhba01, ovcacn12r1-vhba01, ovcamn06r1-vhba01, ovcacn07r1-vhba01, ovcacn11r1-vhba01, ovcacn36r1-vhba01, ovcacn30r1-vhba01, ovcacn35r1-vhba01, ovcacn14r1-vhba01, ovcacn34r1-vhba01, ovcacn33r1-vhba01, ovcamn05r1-vhba01

—————————————-

Network_Name         Cloud_B             

Description          Default Storage Cloud ru22 port2

Ports                ovcasw22r1:3:2, ovcasw22r1:12:2

vHBAs                ovcacn32r1-vhba02, ovcacn13r1-vhba02, ovcacn37r1-vhba02, ovcacn26r1-vhba02, ovcacn31r1-vhba02, ovcacn10r1-vhba02, ovcacn27r1-vhba02, ovcacn09r1-vhba02, ovcacn08r1-vhba02, ovcacn29r1-vhba02, ovcacn28r1-vhba02, ovcacn12r1-vhba02, ovcamn06r1-vhba02, ovcacn07r1-vhba02, ovcacn11r1-vhba02, ovcacn36r1-vhba02, ovcacn30r1-vhba02, ovcacn35r1-vhba02, ovcacn14r1-vhba02, ovcacn34r1-vhba02, ovcacn33r1-vhba02, ovcamn05r1-vhba02

—————————————-

Network_Name         Cloud_C             

Description          Default Storage Cloud ru15 port1

Ports                ovcasw15r1:12:1, ovcasw15r1:3:1

vHBAs                ovcacn32r1-vhba03, ovcacn13r1-vhba03, ovcacn37r1-vhba03, ovcacn26r1-vhba03, ovcacn31r1-vhba03, ovcacn10r1-vhba03, ovcacn27r1-vhba03, ovcacn09r1-vhba03, ovcacn08r1-vhba03, ovcacn29r1-vhba03, ovcacn28r1-vhba03, ovcacn12r1-vhba03, ovcamn06r1-vhba03, ovcacn07r1-vhba03, ovcacn11r1-vhba03, ovcacn36r1-vhba03, ovcacn30r1-vhba03, ovcacn35r1-vhba03, ovcacn14r1-vhba03, ovcacn34r1-vhba03, ovcacn33r1-vhba03, ovcamn05r1-vhba03

—————————————-

Network_Name         Cloud_D             

Description          Default Storage Cloud ru15 port2

Ports                ovcasw15r1:12:2, ovcasw15r1:3:2

vHBAs                ovcacn32r1-vhba04, ovcacn13r1-vhba04, ovcacn37r1-vhba04, ovcacn26r1-vhba04, ovcacn31r1-vhba04, ovcacn10r1-vhba04, ovcacn27r1-vhba04, ovcacn09r1-vhba04, ovcacn08r1-vhba04, ovcacn29r1-vhba04, ovcacn28r1-vhba04, ovcacn12r1-vhba04, ovcamn06r1-vhba04, ovcacn07r1-vhba04, ovcacn11r1-vhba04, ovcacn36r1-vhba04, ovcacn30r1-vhba04, ovcacn35r1-vhba04, ovcacn14r1-vhba04, ovcacn34r1-vhba04, ovcacn33r1-vhba04, ovcamn05r1-vhba04

 

Storage Cloud with WWPN

Each server in the Oracle PCA is connected to the Fabric Interconnects via an InfiniBand (IB) connection. The Fabric Interconnects are capable of translating connections on their Fibre Channel ports to reroute them over these IB connections. To facilitate this, vHBAs are defined on each server to map to a Storage cloud defined on the Fabric Interconnects. The storage cloud that these vHBAs map to, determine which FC ports they relate to on the Fabric Interconnects.

A similar view of the connectivity can be seen from a WWPN listing perspective. As above, every CN is reflected in this listing; i.e., every CN has connectivity to every Storage Cloud.

Cloud_Name           Cloud_A             

WWPN_List            50:01:39:70:00:7D:41:20, 50:01:39:70:00:7D:41:12, 50:01:39:70:00:7D:41:1C, 50:01:39:70:00:7D:41:06, 50:01:39:70:00:7D:41:04, 50:01:39:70:00:7D:41:0A, 50:01:39:70:00:7D:41:1E, 50:01:39:70:00:7D:41:2A, 50:01:39:70:00:7D:41:16, 50:01:39:70:00:7D:41:26, 50:01:39:70:00:7D:41:18, 50:01:39:70:00:7D:41:10, 50:01:39:70:00:7D:41:02, 50:01:39:70:00:7D:41:08, 50:01:39:70:00:7D:41:0C, 50:01:39:70:00:7D:41:0E, 50:01:39:70:00:7D:41:1A, 50:01:39:70:00:7D:41:24, 50:01:39:70:00:7D:41:28, 50:01:39:70:00:7D:41:14, 50:01:39:70:00:7D:41:22, 50:01:39:70:00:7D:41:00

—————————————-

Cloud_Name           Cloud_B             

WWPN_List            50:01:39:70:00:7D:41:21, 50:01:39:70:00:7D:41:13, 50:01:39:70:00:7D:41:1D, 50:01:39:70:00:7D:41:07, 50:01:39:70:00:7D:41:05, 50:01:39:70:00:7D:41:0B, 50:01:39:70:00:7D:41:1F, 50:01:39:70:00:7D:41:2B, 50:01:39:70:00:7D:41:17, 50:01:39:70:00:7D:41:27, 50:01:39:70:00:7D:41:19, 50:01:39:70:00:7D:41:11, 50:01:39:70:00:7D:41:03, 50:01:39:70:00:7D:41:09, 50:01:39:70:00:7D:41:0D, 50:01:39:70:00:7D:41:0F, 50:01:39:70:00:7D:41:1B, 50:01:39:70:00:7D:41:25, 50:01:39:70:00:7D:41:29, 50:01:39:70:00:7D:41:15, 50:01:39:70:00:7D:41:23, 50:01:39:70:00:7D:41:01

—————————————-

Cloud_Name           Cloud_C             

WWPN_List            50:01:39:70:00:7D:51:20, 50:01:39:70:00:7D:51:12, 50:01:39:70:00:7D:51:1C, 50:01:39:70:00:7D:51:06, 50:01:39:70:00:7D:51:04, 50:01:39:70:00:7D:51:0A, 50:01:39:70:00:7D:51:1E, 50:01:39:70:00:7D:51:2A, 50:01:39:70:00:7D:51:16, 50:01:39:70:00:7D:51:26, 50:01:39:70:00:7D:51:18, 50:01:39:70:00:7D:51:10, 50:01:39:70:00:7D:51:02, 50:01:39:70:00:7D:51:08, 50:01:39:70:00:7D:51:0C, 50:01:39:70:00:7D:51:0E, 50:01:39:70:00:7D:51:1A, 50:01:39:70:00:7D:51:24, 50:01:39:70:00:7D:51:28, 50:01:39:70:00:7D:51:14, 50:01:39:70:00:7D:51:22, 50:01:39:70:00:7D:51:00

—————————————-

Cloud_Name           Cloud_D             

WWPN_List            50:01:39:70:00:7D:51:21, 50:01:39:70:00:7D:51:13, 50:01:39:70:00:7D:51:1D, 50:01:39:70:00:7D:51:07, 50:01:39:70:00:7D:51:05, 50:01:39:70:00:7D:51:0B, 50:01:39:70:00:7D:51:1F, 50:01:39:70:00:7D:51:2B, 50:01:39:70:00:7D:51:17, 50:01:39:70:00:7D:51:27, 50:01:39:70:00:7D:51:19, 50:01:39:70:00:7D:51:11, 50:01:39:70:00:7D:51:03, 50:01:39:70:00:7D:51:09, 50:01:39:70:00:7D:51:0D, 50:01:39:70:00:7D:51:0F, 50:01:39:70:00:7D:51:1B, 50:01:39:70:00:7D:51:25, 50:01:39:70:00:7D:51:29, 50:01:39:70:00:7D:51:15, 50:01:39:70:00:7D:51:23, 50:01:39:70:00:7D:51:01

An associated grouping by vHBA and Cloud is listed here:

 

WWPN             vHBA           Cloud_Name     Server       Type     Alias                 

————–   ——          —-          ——–     ——–  ———-                                   

50:01:39:70:00:7D:41:28   vhba01     Cloud_A   ovcacn14r1      CN    ovcacn14r1-Cloud_A                      

50:01:39:70:00:7D:41:20   vhba01     Cloud_A   ovcacn32r1      CN    ovcacn32r1-Cloud_A                      

50:01:39:70:00:7D:41:22   vhba01     Cloud_A   ovcacn33r1      CN    ovcacn33r1-Cloud_A                      

50:01:39:70:00:7D:41:24   vhba01     Cloud_A   ovcacn35r1      CN    ovcacn35r1-Cloud_A                      

50:01:39:70:00:7D:41:26   vhba01     Cloud_A   ovcacn29r1      CN    ovcacn29r1-Cloud_A                      

50:01:39:70:00:7D:41:06   vhba01     Cloud_A   ovcacn26r1      CN    ovcacn26r1-Cloud_A                      

50:01:39:70:00:7D:41:04   vhba01     Cloud_A   ovcacn31r1      CN    ovcacn31r1-Cloud_A                       

50:01:39:70:00:7D:41:08   vhba01     Cloud_A   ovcacn07r1      CN    ovcacn07r1-Cloud_A                      

50:01:39:70:00:7D:41:0C   vhba01     Cloud_A   ovcacn11r1      CN    ovcacn11r1-Cloud_A                      

50:01:39:70:00:7D:41:1E   vhba01     Cloud_A   ovcacn27r1      CN    ovcacn27r1-Cloud_A                      

50:01:39:70:00:7D:41:14   vhba01     Cloud_A   ovcacn34r1      CN    ovcacn34r1-Cloud_A                      

50:01:39:70:00:7D:41:12   vhba01     Cloud_A   ovcacn13r1      CN    ovcacn13r1-Cloud_A                      

50:01:39:70:00:7D:41:1A   vhba01     Cloud_A   ovcacn30r1      CN    ovcacn30r1-Cloud_A                      

50:01:39:70:00:7D:41:18   vhba01     Cloud_A   ovcacn28r1      CN    ovcacn28r1-Cloud_A                       

50:01:39:70:00:7D:41:0A   vhba01     Cloud_A   ovcacn10r1      CN    ovcacn10r1-Cloud_A                      

50:01:39:70:00:7D:41:1C   vhba01     Cloud_A   ovcacn37r1      CN    ovcacn37r1-Cloud_A                      

50:01:39:70:00:7D:41:0E   vhba01     Cloud_A   ovcacn36r1      CN    ovcacn36r1-Cloud_A                      

50:01:39:70:00:7D:41:16   vhba01     Cloud_A   ovcacn08r1      CN    ovcacn08r1-Cloud_A                      

50:01:39:70:00:7D:41:2A   vhba01     Cloud_A   ovcacn09r1      CN    ovcacn09r1-Cloud_A                      

50:01:39:70:00:7D:41:10   vhba01     Cloud_A   ovcacn12r1      CN    ovcacn12r1-Cloud_A                      

50:01:39:70:00:7D:41:29   vhba02     Cloud_B   ovcacn14r1      CN    ovcacn14r1-Cloud_B                      

50:01:39:70:00:7D:41:21   vhba02     Cloud_B   ovcacn32r1      CN    ovcacn32r1-Cloud_B                      

50:01:39:70:00:7D:41:23   vhba02     Cloud_B   ovcacn33r1      CN    ovcacn33r1-Cloud_B                      

50:01:39:70:00:7D:41:25   vhba02     Cloud_B   ovcacn35r1      CN    ovcacn35r1-Cloud_B                      

50:01:39:70:00:7D:41:27   vhba02     Cloud_B   ovcacn29r1      CN    ovcacn29r1-Cloud_B                      

50:01:39:70:00:7D:41:07   vhba02     Cloud_B   ovcacn26r1      CN    ovcacn26r1-Cloud_B                      

50:01:39:70:00:7D:41:05   vhba02     Cloud_B   ovcacn31r1      CN    ovcacn31r1-Cloud_B                      

50:01:39:70:00:7D:41:09   vhba02     Cloud_B   ovcacn07r1      CN    ovcacn07r1-Cloud_B                      

50:01:39:70:00:7D:41:1D   vhba02     Cloud_B   ovcacn37r1      CN    ovcacn37r1-Cloud_B                      

50:01:39:70:00:7D:41:17   vhba02     Cloud_B   ovcacn08r1      CN    ovcacn08r1-Cloud_B                      

50:01:39:70:00:7D:41:11   vhba02     Cloud_B   ovcacn12r1      CN    ovcacn12r1-Cloud_B                      

50:01:39:70:00:7D:41:1F   vhba02     Cloud_B   ovcacn27r1      CN    ovcacn27r1-Cloud_B                      

50:01:39:70:00:7D:41:13   vhba02     Cloud_B   ovcacn13r1      CN    ovcacn13r1-Cloud_B                      

50:01:39:70:00:7D:41:19   vhba02     Cloud_B   ovcacn28r1      CN    ovcacn28r1-Cloud_B                      

50:01:39:70:00:7D:41:0B   vhba02     Cloud_B   ovcacn10r1      CN    ovcacn10r1-Cloud_B                      

50:01:39:70:00:7D:41:15   vhba02     Cloud_B   ovcacn34r1      CN    ovcacn34r1-Cloud_B                      

50:01:39:70:00:7D:41:0F   vhba02     Cloud_B   ovcacn36r1      CN    ovcacn36r1-Cloud_B                      

50:01:39:70:00:7D:41:0D   vhba02     Cloud_B   ovcacn11r1      CN    ovcacn11r1-Cloud_B                      

50:01:39:70:00:7D:41:1B   vhba02     Cloud_B   ovcacn30r1      CN    ovcacn30r1-Cloud_B                      

50:01:39:70:00:7D:41:2B   vhba02     Cloud_B   ovcacn09r1      CN    ovcacn09r1-Cloud_B                      

50:01:39:70:00:7D:51:12   vhba03     Cloud_C   ovcacn13r1      CN    ovcacn13r1-Cloud_C                       

50:01:39:70:00:7D:51:1E   vhba03     Cloud_C   ovcacn27r1      CN    ovcacn27r1-Cloud_C                      

50:01:39:70:00:7D:51:08   vhba03     Cloud_C   ovcacn07r1      CN    ovcacn07r1-Cloud_C                      

50:01:39:70:00:7D:51:10   vhba03     Cloud_C   ovcacn12r1      CN    ovcacn12r1-Cloud_C                      

50:01:39:70:00:7D:51:20   vhba03     Cloud_C   ovcacn32r1      CN    ovcacn32r1-Cloud_C                      

50:01:39:70:00:7D:51:22   vhba03     Cloud_C   ovcacn33r1      CN    ovcacn33r1-Cloud_C                      

50:01:39:70:00:7D:51:24   vhba03     Cloud_C   ovcacn35r1      CN    ovcacn35r1-Cloud_C                      

50:01:39:70:00:7D:51:26   vhba03     Cloud_C   ovcacn29r1      CN    ovcacn29r1-Cloud_C                       

50:01:39:70:00:7D:51:28   vhba03     Cloud_C   ovcacn14r1      CN    ovcacn14r1-Cloud_C                      

50:01:39:70:00:7D:51:1C   vhba03     Cloud_C   ovcacn37r1      CN    ovcacn37r1-Cloud_C                      

50:01:39:70:00:7D:51:0C   vhba03     Cloud_C   ovcacn11r1      CN    ovcacn11r1-Cloud_C                      

50:01:39:70:00:7D:51:06   vhba03     Cloud_C   ovcacn26r1      CN    ovcacn26r1-Cloud_C                      

50:01:39:70:00:7D:51:14   vhba03     Cloud_C   ovcacn34r1      CN    ovcacn34r1-Cloud_C                      

50:01:39:70:00:7D:51:2A   vhba03     Cloud_C   ovcacn09r1      CN    ovcacn09r1-Cloud_C                      

50:01:39:70:00:7D:51:1A   vhba03     Cloud_C   ovcacn30r1      CN    ovcacn30r1-Cloud_C                       

50:01:39:70:00:7D:51:16   vhba03     Cloud_C   ovcacn08r1      CN    ovcacn08r1-Cloud_C                      

50:01:39:70:00:7D:51:0A   vhba03     Cloud_C   ovcacn10r1      CN    ovcacn10r1-Cloud_C                      

50:01:39:70:00:7D:51:18   vhba03     Cloud_C   ovcacn28r1      CN    ovcacn28r1-Cloud_C                      

50:01:39:70:00:7D:51:04   vhba03     Cloud_C   ovcacn31r1      CN    ovcacn31r1-Cloud_C                      

50:01:39:70:00:7D:51:0E   vhba03     Cloud_C   ovcacn36r1      CN    ovcacn36r1-Cloud_C                      

50:01:39:70:00:7D:51:1B   vhba04     Cloud_D   ovcacn30r1      CN    ovcacn30r1-Cloud_D                      

50:01:39:70:00:7D:51:1D   vhba04     Cloud_D   ovcacn37r1      CN    ovcacn37r1-Cloud_D                      

50:01:39:70:00:7D:51:1F   vhba04     Cloud_D   ovcacn27r1      CN    ovcacn27r1-Cloud_D                      

50:01:39:70:00:7D:51:07   vhba04     Cloud_D   ovcacn26r1      CN    ovcacn26r1-Cloud_D                       

50:01:39:70:00:7D:51:19   vhba04     Cloud_D   ovcacn28r1      CN    ovcacn28r1-Cloud_D                      

50:01:39:70:00:7D:51:21   vhba04     Cloud_D   ovcacn32r1      CN    ovcacn32r1-Cloud_D                      

50:01:39:70:00:7D:51:23   vhba04     Cloud_D   ovcacn33r1      CN    ovcacn33r1-Cloud_D                      

50:01:39:70:00:7D:51:25   vhba04     Cloud_D   ovcacn35r1      CN    ovcacn35r1-Cloud_D                      

50:01:39:70:00:7D:51:27   vhba04     Cloud_D   ovcacn29r1      CN    ovcacn29r1-Cloud_D                      

50:01:39:70:00:7D:51:29   vhba04     Cloud_D   ovcacn14r1      CN    ovcacn14r1-Cloud_D                      

50:01:39:70:00:7D:51:09   vhba04     Cloud_D   ovcacn07r1      CN    ovcacn07r1-Cloud_D                      

50:01:39:70:00:7D:51:0D   vhba04     Cloud_D   ovcacn11r1      CN    ovcacn11r1-Cloud_D                      

50:01:39:70:00:7D:51:15   vhba04     Cloud_D   ovcacn34r1      CN    ovcacn34r1-Cloud_D                      

50:01:39:70:00:7D:51:0B   vhba04     Cloud_D   ovcacn10r1      CN    ovcacn10r1-Cloud_D                      

50:01:39:70:00:7D:51:05   vhba04     Cloud_D   ovcacn31r1      CN    ovcacn31r1-Cloud_D                      

50:01:39:70:00:7D:51:2B   vhba04     Cloud_D   ovcacn09r1      CN    ovcacn09r1-Cloud_D                      

50:01:39:70:00:7D:51:11   vhba04     Cloud_D   ovcacn12r1      CN    ovcacn12r1-Cloud_D                      

50:01:39:70:00:7D:51:17   vhba04     Cloud_D   ovcacn08r1      CN    ovcacn08r1-Cloud_D                      

50:01:39:70:00:7D:51:13   vhba04     Cloud_D   ovcacn13r1      CN    ovcacn13r1-Cloud_D                      

50:01:39:70:00:7D:51:0F   vhba04     Cloud_D   ovcacn36r1      CN    ovcacn36r1-Cloud_D                      

—————–

80 rows displayed

 

It is important to distinguish between WWNNs and WWPNs. A WWNN is used to identify a device or node such as an HBA, while a WWPN is used to identify a port that is accessible for that same device. Since some devices can have multiple ports, a device may have a single WWNN and multiple WWPNs.

For CN vHBAs, there is a single WWNN and a single WWPN for each vHBA. Note that the WWNN and WWPN differ only in the fourth hexadecimal octet of the WWN.

pca-admin show vhba-info ovcacn07r1

vHBA_Name       Cloud     WWNN                      WWPN                     

———       —–     —-                      —-                     

vhba03          Cloud_C  50:01:39:71:00:7D:51:08   50:01:39:70:00:7D:51:08  

vhba02          Cloud_B  50:01:39:71:00:7D:41:09   50:01:39:70:00:7D:41:09  

vhba01          Cloud_A  50:01:39:71:00:7D:41:08   50:01:39:70:00:7D:41:08  

vhba04          Cloud_D  50:01:39:71:00:7D:51:09   50:01:39:70:00:7D:51:09  
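For the vHBAs shown above, the relationship between the two names is mechanical: flipping the fourth octet of the WWNN from 71 to 70 yields the WWPN. A small illustrative sketch:

```shell
#!/bin/sh
# Illustrative only: derive a vHBA's WWPN from its WWNN by replacing the
# fourth octet (71 -> 70), matching the vhba-info output above.
wwnn="50:01:39:71:00:7D:51:08"
wwpn=$(echo "$wwnn" | awk -F: 'BEGIN { OFS=":" } { $4 = "70"; print }')
echo "$wwpn"   # 50:01:39:70:00:7D:51:08
```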

 

OVM> list PhysicalDisk

Command: list PhysicalDisk

Status: Success

Time: 2017-11-14 20:45:20,156 UTC

Data: 

  id:0004fb000018000089acb680613acbb7  name:3600605b00a76d8601e6b20a309121c29

  id:0004fb000018000045e53c34341ddba2  name:3600605b00a7663001e6b1f8c093ed7d1

  id:0004fb000018000071649d0873535c10  name:3600605b00a7690301e6b23120945f79f

  id:0004fb00001800007285822483e7faf9  name:SUN (1)

  id:0004fb000018000075656b0d46cd0f92  name:SUN (2)

  id:0004fb0000180000652ed33b97ce0813  name:3600605b00a76d7d01e6b1fc40920aaa1

  id:0004fb0000180000003ca296f4d63d47  name:3600605b00a76d8401e6b1e5e08eba0e6

  id:0004fb0000180000c7a2053f6a33ab6c  name:3600605b00a7648701e6b1f7209061413

  id:0004fb0000180000c2ce5bf457cb8c3e  name:3600605b00a7644901e6b2092092a2995

  id:0004fb000018000004a154445ea57a30  name:3600605b00a7635001e6b1fed0925240f

  id:0004fb00001800002316100cabe79348  name:EMC VMAX FC LUN07

  id:0004fb00001800005ad30a34ba849e31  name:EMC VMAX FC LUN06

  id:0004fb0000180000ea41971236b070bb  name:EMC VMAX FC LUN05

  id:0004fb0000180000293acf9735f6d443  name:EMC VMAX SATA LUN03

  id:0004fb0000180000a65b1bc3c16c0210  name:EMC VMAX FC LUN02

  id:0004fb0000180000683cff7d90036fe7  name:EMC VMAX FC LUN01

  id:0004fb0000180000a8254d24e27180aa  name:EMC VMAX FAST(Prod) LUN04

  id:0004fb00001800002e04766575ed1315  name:EMC ebsprod fra 01

  id:0004fb0000180000f1e48b8c1465c245  name:EMC ebsprod fra 02

  id:0004fb000018000004d1ab0deb5e4926  name:EMC ebsprod fra 03

  id:0004fb00001800008f2efe35e2c708e5  name:EMC ebsprod fra 04

  id:0004fb0000180000a1c7cfe90681651b  name:EMC ebsprod fra 05

  id:0004fb0000180000ce63d8cc9231123f  name:EMC ebsprod fra 06

  id:0004fb00001800000d857c98406d3dfb  name:EMC ebsprod fra 07

  id:0004fb00001800000946d5a01856ae35  name:EMC ebsprod fra 08

  id:0004fb00001800002f1f8d3690e4a119  name:EMC ebsprod orion 01

  id:0004fb0000180000c68a7d0fa8dab371  name:EMC ebsprod orion 02

  id:0004fb00001800008f9886d751d9ed1a  name:EMC ebsprod redo 01

  id:0004fb00001800000e870c6e72191753  name:EMC ebsprod redo 02

  id:0004fb00001800004321068d7cdb0369  name:EMC ebsprod redo 04

  id:0004fb00001800005bb7782ff2960efb  name:EMC ebsprod redo 03

  id:0004fb00001800003e2e40a8d1376096  name:EMC ebsprod ocrvd 01

  id:0004fb000018000084ccd381a4fa1a24  name:EMC ebsprod ocrvd 02

  id:0004fb00001800002a17b022e6dd05c5  name:EMC ebsprod ocrvd 03

  id:0004fb0000180000c3b24ad7cb408520  name:EMC ebsprod data 01

  id:0004fb000018000076c38f606617f660  name:EMC ebsprod data 02

  id:0004fb000018000032359a9b4a30c1d2  name:EMC ebsprod data 03

  id:0004fb000018000025b0c2eb9914a10b  name:EMC ebsprod data 04

  id:0004fb0000180000679464b498ac424b  name:EMC ebsprod data 05

  id:0004fb0000180000b6f66219e0edb83f  name:EMC ebsprod data 06

  id:0004fb0000180000d290f7cfaf6c2187  name:EMC ebsprod data 07

  id:0004fb0000180000fc5181433a564f81  name:3600605b00a7680f01e6b1f760903237c

  id:0004fb0000180000029437418b4d907b  name:3600605b00a762f001e6b1f7108cfaf4b

  id:0004fb0000180000096c38cbd7a41395  name:3600605b00a7637301e6b20f709059454

  id:0004fb0000180000a836c8251c98965d  name:3600605b00a768ec01e6b211d13b4a617

  id:0004fb0000180000004234fb6dbcdfcd  name:3600605b00a766da01e6b1f9709164a14

  id:0004fb0000180000ed2ab0b0df14f17d  name:3600605b00a768d001e6b1e7208c66eb0

  id:0004fb00001800005f310f01833b9144  name:3600605b00a763b801e6b208508ec64a8

  id:0004fb000018000039fb1f3383585596  name:3600605b00a7663301e6b1e1d0902a0c3

  id:0004fb0000180000a291ab8b56714ce1  name:3600605b00a768ee01e6b212c09b1ce22

  id:0004fb00001800007526bc66e0a68bbe  name:3600605b00a76dc401e6b1fc208d1fab0

  id:0004fb00001800007fb35c0b1749db85  name:3600605b00a7662801e6b22e008cbf7bd

  id:0004fb0000180000e29c37f16074690f  name:3600605b00a7636c01e6b1efa08b958ab

Command: list server

Status: Success

Time: 2017-11-14 20:54:47,324 UTC

Data: 

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:a0:78  name:ovcacn12r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:79:16:0c  name:ovcacn07r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:79:36:d0  name:ovcacn11r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:7a:0d:da  name:ovcacn36r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:7a:59:88  name:ovcacn28r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:a4:5c  name:ovcacn37r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6c:a6  name:ovcacn14r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:79:67:54  name:ovcacn13r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:84:a3:0e  name:ovcacn29r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6f:be  name:ovcacn31r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:79:4a:9e  name:ovcacn10r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:87:15:50  name:ovcacn27r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6a:8a  name:ovcacn32r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:80:10:4e  name:ovcacn34r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:a0:b4  name:ovcacn26r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:a1:44  name:ovcacn33r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:79:14:3e  name:ovcacn35r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:7a:4b:0c  name:ovcacn08r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6f:d6  name:ovcacn30r1

  id:08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6e:b6  name:ovcacn09r1

 

 

Command: list Repository

Status: Success

Time: 2017-11-14 20:55:52,268 UTC

Data: 

  id:0004fb0000030000f6d09c5125f8f99b  name:Rack1-Repository

  id:0004fb0000030000d1969e0ffefed9e3  name:EMC-VMAX-FC-Repo5

  id:0004fb0000030000d55498ec6e5a4470  name:ovcacn27r1-localfsrepo

  id:0004fb0000030000465dbff337acb2b7  name:ovcacn08r1-localfsrepo

  id:0004fb000003000094e850fa4e5b5dc1  name:ovcacn37r1-localfsrepo

  id:0004fb00000300009633c963daed6fbe  name:ovcacn07r1-localfsrepo

  id:0004fb0000030000bdb6e6d7b3c63a39  name:ovcacn26r1-localfsrepo

  id:0004fb0000030000c9f71d6a43cf8ddc  name:EMC-VMAX–FC-Repo1

  id:0004fb00000300009bdea16ab8bfdbe7  name:ovcacn30r1-localfsrepo

  id:0004fb00000300003f9f5da1e76442e6  name:ovcacn11r1-localfsrepo

  id:0004fb0000030000a7d6ca273d18e846  name:EMC-VMAX-FC-Repo6

  id:0004fb0000030000bd9b7812ae267d47  name:EMC-VMAX-SATA-Repo3

  id:0004fb00000300000b710d9f8e03502d  name:ovcacn09r1-localfsrepo

  id:0004fb00000300006f28ce4acad4b952  name:EMC-VMAX-FC-Repo7

  id:0004fb0000030000a0e7f2e6213c04f3  name:ovcacn36r1-localfsrepo

  id:0004fb0000030000c8cc073e70c5c41f  name:EMC-VMAX-FC-Repo2

  id:0004fb00000300001cb34b718c486dd6  name:ovcacn29r1-localfsrepo

  id:0004fb0000030000ffde50c6ec8f06e4  name:ovcacn31r1-localfsrepo

  id:0004fb0000030000903dae0fc220ac45  name:ovcacn10r1-localfsrepo

  id:0004fb000003000031031f7a8b957aa0  name:ovcacn13r1-localfsrepo

  id:0004fb00000300000aa95fabb2b85dc7  name:ovcacn14r1-localfsrepo

  id:0004fb0000030000847307a8e689dda3  name:ovcacn34r1-localfsrepo

  id:0004fb00000300004df4b5d72bb7e3c1  name:ovcacn35r1-localfsrepo

  id:0004fb00000300000d17779831c520ab  name:ovcacn32r1-localfsrepo

  id:0004fb0000030000276054535f2cf66f  name:ovcacn12r1-localfsrepo

  id:0004fb00000300006f1bc814a1dba812  name:EMC-VMAX-FAST(Prod)-Repo4

  id:0004fb0000030000adad162696c02503  name:ovcacn33r1-localfsrepo

  id:0004fb00000300006a283e29a8546139  name:ovcacn28r1-localfsrepo

OVM> list SanServer

Command: list SanServer

Status: Success

Time: 2017-11-14 20:56:02,774 UTC

Data: 

  id:0004fb0000090000c0070fc37e9fe47a  name:OVCA_ZFSSA_Rack1

  id:Unmanaged iSCSI Storage Array  name:Unmanaged iSCSI Storage Array

  id:Unmanaged FibreChannel Storage Array  name:Unmanaged FibreChannel Storage Array

 

OVM> list StorageInitiator

Command: list StorageInitiator

Status: Success

Time: 2017-11-14 20:56:20,541 UTC

Data: 

  id:0x50013970007d4110  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d4111  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5110  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5111  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:4be6a1e5f39e  name:iqn.1988-12.com.oracle:4be6a1e5f39e

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:a0:78  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:a0:78

  id:0x50013970007d4108  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d4109  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5108  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5109  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:b6db9886524  name:iqn.1988-12.com.oracle:b6db9886524

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:79:16:0c  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:79:16:0c

  id:0x50013970007d410c  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d410d  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d510c  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d510d  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:2cfd9cdfab1  name:iqn.1988-12.com.oracle:2cfd9cdfab1

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:79:36:d0  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:79:36:d0

  id:0x50013970007d410e  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d410f  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d510e  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d510f  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:72f8d85d1efc  name:iqn.1988-12.com.oracle:72f8d85d1efc

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:7a:0d:da  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:7a:0d:da

  id:0x50013970007d4118  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d4119  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5118  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5119  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:a5903e2a89f  name:iqn.1988-12.com.oracle:a5903e2a89f

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:7a:59:88  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:7a:59:88

  id:0x50013970007d411c  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d411d  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d511c  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d511d  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:c13f7ca17ee4  name:iqn.1988-12.com.oracle:c13f7ca17ee4

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:a4:5c  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:a4:5c

  id:0x50013970007d4128  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d4129  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5128  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5129  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:c79e2161d338  name:iqn.1988-12.com.oracle:c79e2161d338

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6c:a6  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6c:a6

  id:0x50013970007d4112  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d4113  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5112  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5113  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:e819eb62c9ac  name:iqn.1988-12.com.oracle:e819eb62c9ac

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:79:67:54  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:79:67:54

  id:0x50013970007d4126  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d4127  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5126  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5127  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:bb6edac83fcd  name:iqn.1988-12.com.oracle:bb6edac83fcd

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:84:a3:0e  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:84:a3:0e

  id:0x50013970007d4104  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d4105  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5104  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5105  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:59da9467ef15  name:iqn.1988-12.com.oracle:59da9467ef15

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6f:be  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6f:be

  id:0x50013970007d410a  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d410b  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d510a  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d510b  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:82f2dc9afc61  name:iqn.1988-12.com.oracle:82f2dc9afc61

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:79:4a:9e  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:79:4a:9e

  id:0x50013970007d411e  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d411f  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d511e  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d511f  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:78d33e6c874  name:iqn.1988-12.com.oracle:78d33e6c874

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:87:15:50  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:87:15:50

  id:0x50013970007d4120  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d4121  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5120  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5121  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:d940444ea668  name:iqn.1988-12.com.oracle:d940444ea668

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6a:8a  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6a:8a

  id:0x50013970007d4114  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d4115  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5114  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5115  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:5e907b7089a2  name:iqn.1988-12.com.oracle:5e907b7089a2

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:80:10:4e  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:80:10:4e

  id:0x50013970007d4106  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d4107  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5106  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5107  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:59b9c2229679  name:iqn.1988-12.com.oracle:59b9c2229679

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:a0:b4  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:a0:b4

  id:0x50013970007d4122  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d4123  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5122  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5123  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:9191559ef7c0  name:iqn.1988-12.com.oracle:9191559ef7c0

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:a1:44  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:a1:44

  id:0x50013970007d4124  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d4125  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5124  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5125  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:84a7b614eeb5  name:iqn.1988-12.com.oracle:84a7b614eeb5

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:79:14:3e  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:79:14:3e

  id:0x50013970007d4116  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d4117  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5116  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d5117  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:5cd7ad97b52c  name:iqn.1988-12.com.oracle:5cd7ad97b52c

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:7a:4b:0c  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:7a:4b:0c

  id:0x50013970007d411a  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d411b  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d511a  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d511b  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:7ba542ea5198  name:iqn.1988-12.com.oracle:7ba542ea5198

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6f:d6  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6f:d6

  id:0x50013970007d412a  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d412b  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d512a  name:FC Initiator @ Port 0xffffffff

  id:0x50013970007d512b  name:FC Initiator @ Port 0xffffffff

  id:iqn.1988-12.com.oracle:a263989acb86  name:iqn.1988-12.com.oracle:a263989acb86

  id:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6e:b6  name:storage.LocalStorageInitiator in 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6e:b6

 

OVM> list VirtualDisk

Command: list VirtualDisk

Status: Success

Time: 2017-11-14 20:56:37,915 UTC

Data: 

  id:0004fb0000120000f5c7429df92b318d.img  name:admebsd501_boot

  id:0004fb000012000027b273600ca28486.img  name:admebsd501_LUN01

  id:0004fb0000120000f4a6aa71f9ac687d.img  name:admebsd501_LUN02

  id:0004fb0000120000b8d11e181cfa4d22.img  name:admebsd503_LUN01 (2)

  id:0004fb000012000020ba4039152aa40e.img  name:admebsd503_boot (2)

  id:0004fb00001200005d5538775464765a.img  name:admebsd503_LUN02

  id:0004fb0000120000b15748fc16cb38d6.img  name:admavp501_boot

  id:0004fb0000120000692fc2fcc8c9d829.img  name:PCASRV-Java7_LUN01

  id:786df5556a5144609142c95da9cb2496.img  name:system

  id:0004fb0000120000932da51c889aa150.img  name:admracsb201_RACSB4_DATA_01

  id:0004fb0000120000cc66b647fbd2957d.img  name:admracsb201_RACSB4_DATA_02

  id:0004fb000012000055a11b1c8dd5305c.img  name:admracsb201_RACSB4_DATA_03

  id:0004fb00001200007403b46e06fcdc72.img  name:PCASRV-Java6_LUN01

  id:0004fb0000120000252fc76e79018f07.img  name:admracsb201_RACSB4_DATA_04

  id:0004fb000012000074ab4ad505255980.img  name:admnfst601_LUN03

  id:0004fb00001200003c218c9678ba82bd.img  name:AdmOracleLinux6.7_BaseMT_1.1_boot

  id:0004fb00001200005622b5ee67451ce0.img  name:AdmOracleLinux6.7_BaseMT_1.1_LUN01

  id:0004fb0000120000343ee39c7febc12b.img  name:admebst501_LUN01

  id:0004fb00001200004bfe02732e54a59c.img  name:AdmEbsLxAppPoc03_boot

  id:0004fb0000120000e3ea48d4e1f615c5.img  name:admracsb201_RACSB4_REDO_01

  id:0004fb000012000091afdac18d05e52f.img  name:bootdisk

  id:0004fb000012000057bc58afcfad7f7c.img  name:admracsb201_RACSB4_REDO_02

  id:0004fb00001200006630c64c4bcf67a2.img  name:admracsb201_RACSB4_OCRVD_01

  id:0004fb00001200003394f367ef8084d0.img  name:AdmEbsLxAppPoc01_boot

 

Command: list VM

Status: Success

Time: 2017-11-14 21:05:47,987 UTC

Data: 

  id:0004fb00000600003593a5716c5b22bd  name:admavp501

  id:0004fb00000600006ac8200d95a0b83f  name:admebst202

  id:0004fb000006000086a50c53eee394f7  name:admebsd503

  id:0004fb0000060000ebad2ed25c3ff95e  name:admebst502

  id:0004fb00000600008252dfd8bc640872  name:AdmOracleLinux6.7_BaseMT_1.1

  id:0004fb0000060000f587f0449f7b75c9  name:AdmOracleLinux6.7_BaseDB_1.1

  id:0004fb00000600002bb591e7f660ae40  name:AdmOracleLinux6.7_BaseMT

  id:0004fb00000600002c437d74c2761d4d  name:Template_Adm_DB_OL6u7_x86_64_1.1

  id:0004fb0000060000ed58fbd4d2a58094  name:Template_Adm_MT_OL6u7_x86_64_2.0

  id:0004fb0000060000ff12b3a2b5c589dd  name:Template_Adm_RAC_DB_OL6u7_x86_64_2.0

  id:0004fb00000600000b5c4a71002da8cb  name:Template_AdmEbsLxAppPoc02_01

  id:0004fb0000060000d9b57cd388b2b8ca  name:Template_Adm_DB_OL6u7_x86_64_1.0

  id:0004fb000006000099ed04e2800c952e  name:Template_Adm_RAC_DB_OL6u7_x86_64_1.0

  id:0004fb0000060000febd1b0344a74c7d  name:Template_Adm_MT_OL6u7_x86_64_1.0

Command: list VmDiskMapping

Status: Success

Time: 2017-11-14 20:57:47,424 UTC

Data: 

  id:0004fb00001300001c5dbefed4a7a42b  name:0004fb00001300001c5dbefed4a7a42b

  id:0004fb0000130000dcd307ad89c7a0cc  name:0004fb0000130000dcd307ad89c7a0cc

  id:0004fb0000130000815171a045300831  name:0004fb0000130000815171a045300831

  id:0004fb0000130000efc16abda39a8bb1  name:0004fb0000130000efc16abda39a8bb1

  id:0004fb00001300007cfeaab5aa624453  name:0004fb00001300007cfeaab5aa624453

  id:0004fb0000130000846beb032f2c87fc  name:0004fb0000130000846beb032f2c87fc

  id:0004fb0000130000d655da92ab6a9b0f  name:0004fb0000130000d655da92ab6a9b0f

  id:0004fb0000130000f0dcba1f3758cc2c  name:0004fb0000130000f0dcba1f3758cc2c

 …..

./doit.eovmcli2: Generating OVM VM Inventory Report

———- PROCESSING VM=admapxp201

 

Command: show vm name=admapxp201

 

Status: Success

 

Time: 2017-11-17 23:00:36,448 UTC

 

Data:

 

  Name = admapxp201

 

  Id = 0004fb000006000032d9e101be35a66b

 

  Status = Stopped

 

  Memory (MB) = 32768

 

  Max. Memory (MB) = 32768

 

  Max. Processors = 4

 

  Processors = 4

 

  Priority = 50

 

  Processor Cap = 100

 

  High Availability = Yes

 

  Operating System = Oracle Linux 6

 

  Mouse Type = Default

 

  Domain Type = Xen HVM, PV Drivers

 

  Keymap = en-us

 

  Boot Order 1 = Disk

 

  Server = 08:00:20:ff:ff:ff:ff:ff:ff:ff:00:10:e0:8d:6f:d6  [ovcacn30r1]

 

  Repository = 0004fb0000030000a7d6ca273d18e846  [EMC-VMAX-FC-Repo6]

 

  Vnic 1 = 0004fb0000070000aea4da88d7a72625  [00:21:f6:00:00:43]

 

  Vnic 2 = 0004fb00000700001f66c01d8774537a  [00:21:f6:00:00:44]

 

  VmDiskMapping 1 = 0004fb00001300005993debac1c7311c

 

  VmDiskMapping 2 = 0004fb0000130000cbf203ebf15e4fd1

 

  VmDiskMapping 3 = 0004fb00001300001c3b76e0a93313f9

 

  VmDiskMapping 4 = 0004fb00001300003606f24c6cb2315e

 

  tag 1 = 0004fb0000260000633a36e2d8e304be  [Production]

 

vDisk=0004fb00001300005993debac1c7311c

 

Command: show vmdiskmapping id=0004fb00001300005993debac1c7311c

Are You Ready to apply the 12.2.0.1 July RU ???

Here are the steps I went through to apply the Grid Infrastructure Jul 2017 Release Update (RU) 12.2.0.1.170718, Patch 26133434.

Configuration: 2-node RAC cluster on Kaminario K2 AFA

The Grid Infrastructure Jul 2017 Release Update (RU) 12.2.0.1.170718 includes updates for both the Clusterware home and the Database home, and it can be applied in a rolling fashion.
In this blog post we update the GI and DB stacks on both nodes.
The details and execution shown here for Node 1 are repeated on Node 2 as well.
Big thanks to Mike Dietrich for some insight!

 Step 1) Upgrade OPatch to at least version 12.2.0.1.7. The OPatch version must be upgraded in both the GI and DB homes on all nodes.

[root@vna02 grid]# cd OPatch

[root@vna02 OPatch]# ./opatch version

OPatch Version: 12.2.0.1.9   <-- Grid Home

OPatch succeeded.

[oracle@vna01 dbhome_1]$ opatch version

OPatch Version: 12.2.0.1.9  <-- Database Home
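
Upgrading OPatch itself is just a matter of unzipping the latest OPatch bundle (patch 6880880 from My Oracle Support) into each home. A minimal sketch; the zip file name below is the generic 12.2 Linux x86-64 bundle and is an assumption, so substitute the file you actually downloaded:

```shell
# Sketch: refresh OPatch in the Grid and Database homes (run on each node,
# as the owner of each home). The zip name is an assumption; use the
# OPatch bundle (patch 6880880) you downloaded.
GRID_HOME=/u01/app/12.2.0/grid
DB_HOME=/u01/app/oracle/product/12.2.0/dbhome_1
OPATCH_ZIP=p6880880_122010_Linux-x86-64.zip

for home in "$GRID_HOME" "$DB_HOME"; do
  mv "$home/OPatch" "$home/OPatch.$(date +%Y%m%d)"   # keep the old copy
  unzip -q -d "$home" "$OPATCH_ZIP"
  "$home/OPatch/opatch" version                      # expect >= 12.2.0.1.7
done
```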

Step 2) Patch conflict check:

Node 1 : 

[oracle@vna01 GI]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/oracle/software/patches/DB-GI-RU/GI/26133434/26123830

Oracle Interim Patch Installer version 12.2.0.1.9

Copyright (c) 2017, Oracle Corporation.  All rights reserved.
PREREQ session
Oracle Home       : /u01/app/oracle/product/12.2.0/dbhome_1
Central Inventory : /u01/app/oraInventory
from           : /u01/app/oracle/product/12.2.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.9
OUI version       : 12.2.0.1.4
Log file location : /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/opatch2017-09-20_18-43-33PM_1.log
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.

[oracle@vna01 GI]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/oracle/software/patches/DB-GI-RU/GI/26133434/26002778

Oracle Interim Patch Installer version 12.2.0.1.9
Copyright (c) 2017, Oracle Corporation.  All rights reserved.
PREREQ session
Oracle Home       : /u01/app/oracle/product/12.2.0/dbhome_1
Central Inventory : /u01/app/oraInventory
from           : /u01/app/oracle/product/12.2.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.9
OUI version       : 12.2.0.1.4
Log file location : /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/opatch2017-09-20_19-01-04PM_1.log
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.

From the Database Home:

[oracle@vna01 GI]$ . oraenv
ORACLE_SID = [VNADB1] ? VNADB1
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@vna01 GI]$ cd $ORACLE_HOME/OPatch
[oracle@vna01 OPatch]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/oracle/software/patches/DB-GI-RU/GI/26133434/26123830
Oracle Interim Patch Installer version 12.2.0.1.9
Copyright (c) 2017, Oracle Corporation.  All rights reserved.
PREREQ session
Oracle Home       : /u01/app/oracle/product/12.2.0/dbhome_1
Central Inventory : /u01/app/oraInventory
from           : /u01/app/oracle/product/12.2.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.9
OUI version       : 12.2.0.1.4
Log file location : /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/opatch2017-09-20_19-03-12PM_1.log
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.

[oracle@vna01 OPatch]$
[oracle@vna01 OPatch]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/oracle/software/patches/DB-GI-RU/GI/26133434/26002778
Oracle Interim Patch Installer version 12.2.0.1.9
Copyright (c) 2017, Oracle Corporation.  All rights reserved.
PREREQ session
Oracle Home       : /u01/app/oracle/product/12.2.0/dbhome_1
Central Inventory : /u01/app/oraInventory
from           : /u01/app/oracle/product/12.2.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.9
OUI version       : 12.2.0.1.4
Log file location : /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/opatch2017-09-20_19-03-25PM_1.log
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.

One-off Patch Conflict Detection and Resolution

[root@vna01 OPatch]# $ORACLE_HOME/OPatch/opatchauto apply /home/oracle/software/patches/DB-GI-RU/GI/26133434 -analyze

OPatchauto session is initiated at Wed Sep 20 19:53:25 2017
System initialization log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchautodb/systemconfig2017-09-20_07-53-27PM.log.
Session log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/opatchauto2017-09-20_07-53-48PM.log
The id for this session is QWPL
Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/12.2.0/dbhome_1
Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.2.0/grid
Patch applicability verified successfully on home /u01/app/oracle/product/12.2.0/dbhome_1
Patch applicability verified successfully on home /u01/app/12.2.0/grid
Verifying SQL patch applicability on home /u01/app/oracle/product/12.2.0/dbhome_1

Following step failed during analysis:
/bin/sh -c 'ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1 ORACLE_SID=VNADB1 /u01/app/oracle/product/12.2.0/dbhome_1/OPatch/datapatch -prereq'
SQL patch applicability verified successfully on home /u01/app/oracle/product/12.2.0/dbhome_1
OPatchAuto successful.

--------------------------------Summary--------------------------------
Analysis for applying patches has completed successfully:
Host:vna01
RAC Home:/u01/app/oracle/product/12.2.0/dbhome_1

==Following patches were SKIPPED:
Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/25586399
Reason: This patch is not applicable to this specified target type - "rac_database"

==Following patches were SUCCESSFULLY analyzed to be applied:
Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/26002778
Log: /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-20_19-53-51PM_1.log
Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/26123830
Log: /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-20_19-53-51PM_1.log

Host:vna01
CRS Home:/u01/app/12.2.0/grid
==Following patches were SUCCESSFULLY analyzed to be applied:
Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/26002778
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-20_19-53-51PM_1.log
Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/25586399
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-20_19-53-51PM_1.log
Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/26123830
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-20_19-53-51PM_1.log
OPatchauto session completed at Wed Sep 20 19:57:09 2017
Time taken to complete the session 3 minutes, 44 seconds


Now the OPatchauto apply process:

[root@vna01 OPatch]# $ORACLE_HOME/OPatch/opatchauto apply /home/oracle/software/patches/DB-GI-RU/GI/26133434

OPatchauto session is initiated at Wed Sep 20 20:18:27 2017

System initialization log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchautodb/systemconfig2017-09-20_08-18-28PM.log.

Session log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/opatchauto2017-09-20_08-18-50PM.log

The id for this session is CNCU

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.2.0/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/12.2.0/dbhome_1

Patch applicability verified successfully on home /u01/app/oracle/product/12.2.0/dbhome_1

Patch applicability verified successfully on home /u01/app/12.2.0/grid

Verifying SQL patch applicability on home /u01/app/oracle/product/12.2.0/dbhome_1

"/bin/sh -c 'ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1 ORACLE_SID=VNADB1 /u01/app/oracle/product/12.2.0/dbhome_1/OPatch/datapatch -prereq'" command failed with errors. Please refer to logs for more details. SQL changes, if any, can be analyzed by manually retrying the same command.

SQL patch applicability verified successfully on home /u01/app/oracle/product/12.2.0/dbhome_1

Preparing to bring down database service on home /u01/app/oracle/product/12.2.0/dbhome_1

Successfully prepared home /u01/app/oracle/product/12.2.0/dbhome_1 to bring down database service

Bringing down CRS service on home /u01/app/12.2.0/grid

Prepatch operation log file location: /u01/app/oracle/crsdata/vna01/crsconfig/crspatch_vna01_2017-09-20_08-22-15PM.log

CRS service brought down successfully on home /u01/app/12.2.0/grid

Performing prepatch operation on home /u01/app/oracle/product/12.2.0/dbhome_1

Prepatch operation completed successfully on home /u01/app/oracle/product/12.2.0/dbhome_1

Start applying binary patch on home /u01/app/oracle/product/12.2.0/dbhome_1

Binary patch applied successfully on home /u01/app/oracle/product/12.2.0/dbhome_1

Performing postpatch operation on home /u01/app/oracle/product/12.2.0/dbhome_1

Postpatch operation completed successfully on home /u01/app/oracle/product/12.2.0/dbhome_1

Start applying binary patch on home /u01/app/12.2.0/grid

Binary patch applied successfully on home /u01/app/12.2.0/grid

Starting CRS service on home /u01/app/12.2.0/grid

Postpatch operation log file location: /u01/app/oracle/crsdata/vna01/crsconfig/crspatch_vna01_2017-09-20_08-27-01PM.log

CRS service started successfully on home /u01/app/12.2.0/grid

Preparing home /u01/app/oracle/product/12.2.0/dbhome_1 after database service restarted

No step execution required.........

Prepared home /u01/app/oracle/product/12.2.0/dbhome_1 successfully after database service restarted

Trying to apply SQL patch on home /u01/app/oracle/product/12.2.0/dbhome_1

"/bin/sh -c 'ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1 ORACLE_SID=VNADB1 /u01/app/oracle/product/12.2.0/dbhome_1/OPatch/datapatch'" command failed with errors. Please refer to logs for more details. SQL changes, if any, can be applied by manually retrying the same command.

SQL patch applied successfully on home /u01/app/oracle/product/12.2.0/dbhome_1

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:vna01

RAC Home:/u01/app/oracle/product/12.2.0/dbhome_1

Summary:

==Following patches were SKIPPED:

Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/25586399

Reason: This patch is not applicable to this specified target type - "rac_database"



==Following patches were SUCCESSFULLY applied:

Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/26002778

Log: /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-20_20-23-57PM_1.log

Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/26123830

Log: /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-20_20-23-57PM_1.log


Host:vna01

CRS Home:/u01/app/12.2.0/grid

Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/26002778

Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-20_20-24-44PM_1.log

Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/25586399

Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-20_20-24-44PM_1.log

Patch: /home/oracle/software/patches/DB-GI-RU/GI/26133434/26123830

Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-20_20-24-44PM_1.log

OPatchauto session completed at Wed Sep 20 20:34:23 2017

Time taken to complete the session 15 minutes, 56 seconds


lsInventory Output:

[oracle@vna01 OPatch]$ opatch lsinventory

Oracle Interim Patch Installer version 12.2.0.1.9

Copyright (c) 2017, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/12.2.0/grid

Central Inventory : /u01/app/oraInventory

from           : /u01/app/12.2.0/grid/oraInst.loc

OPatch version    : 12.2.0.1.9

OUI version       : 12.2.0.1.4

Log file location : /u01/app/12.2.0/grid/cfgtoollogs/opatch/opatch2017-09-20_20-38-46PM_1.log



lsinventory Output file location : /u01/app/12.2.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2017-09-20_20-38-46PM.txt

--------------------------------------------------------------------------------

Local Machine Information::

Hostname: vna01

ARU platform id: 226

ARU platform description:: Linux x86-64

Installed Top-level Products (1):

Oracle Grid Infrastructure 12c                                       12.2.0.1.0

There are 1 products installed in this Oracle Home.

Interim patches (3) :

Patch  26123830     : applied on Wed Sep 20 20:26:39 BST 2017

Unique Patch ID:  21405588

Patch description:  "DATABASE RELEASE UPDATE: 12.2.0.1.170718 (26123830)"

Created on 7 Jul 2017, 00:33:59 hrs PST8PDT

Bugs fixed:

23026585, 24336249, 24929210, 24942749, 25036474, 25110233, 25410877

25417050, 25427662, 25459958, 25547901, 25569149, 25600342, 25600421

25606091, 25655390, 25662088, 24385983, 24923215, 25099758, 25429959

25662101, 25728085, 25823754, 22594071, 23665623, 23749454, 24326846

24334708, 24560906, 24573817, 24578797, 24609996, 24624166, 24668398

24674955, 24744686, 24811725, 24827228, 24831514, 24908321, 24976007

25184555, 25210499, 25211628, 25223839, 25262869, 25316758, 25337332

25455795, 25457409, 25539063, 25546608, 25612095, 25643931, 25410017

22729345, 24485174, 24509056, 24714096, 25329664, 25410180, 25607726

25957038, 25973152, 26024732, 24376878, 24589590, 24676172, 23548817

24796092, 24907917, 25044977, 25736747, 25766822, 25856821, 25051628

24534401, 24835919, 25050160, 25395696, 25430120, 25616359, 25715167

25967985

Patch  25586399     : applied on Wed Sep 20 20:26:17 BST 2017

Unique Patch ID:  21306685

Patch description:  "ACFS Patch Set Update : 12.2.0.1.170718 (25586399)"

Created on 16 Jun 2017, 00:35:19 hrs PST8PDT

Bugs fixed:

24679041, 24964969, 25098392, 25078431, 25491831


Patch  26002778     : applied on Wed Sep 20 20:25:26 BST 2017

Unique Patch ID:  21306682

Patch description:  "OCW Patch Set Update : 12.2.0.1.170718 (26002778)"

Created on 3 Jul 2017, 03:26:30 hrs PST8PDT

Bugs fixed:

26144044, 25541343, 25715179, 25493588, 24932026, 24801915, 25832375

25728787, 25825732, 24578464, 25832312, 25742471, 25790699, 25655495

25307145, 25485737, 25505841, 25697364, 24663993, 25026470, 25591658

25537905, 24451580, 25409838, 25371632, 25569634, 25245759, 24665035

25646592, 25025157, 24732650, 24664849, 24584419, 24423011, 24831158

25037836, 25556203, 24464953, 24657753, 25197670, 24796183, 20559126

25197395, 24808260

--------------------------------------------------------------------------------

OPatch succeeded.

[oracle@vna01 OPatch]

From the Database Home :

[oracle@vna01 OPatch]$ . oraenv

ORACLE_SID = [+ASM1] ? VNADB1

The Oracle base remains unchanged with value /u01/app/oracle

[oracle@vna01 OPatch]$  export PATH=$ORACLE_HOME/OPatch:$PATH

[oracle@vna01 OPatch]$ which opatch

/u01/app/oracle/product/12.2.0/dbhome_1/OPatch/opatch

[oracle@vna01 OPatch]$ opatch lsinventory

Oracle Interim Patch Installer version 12.2.0.1.9

Copyright (c) 2017, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/12.2.0/dbhome_1

Central Inventory : /u01/app/oraInventory

from           : /u01/app/oracle/product/12.2.0/dbhome_1/oraInst.loc

OPatch version    : 12.2.0.1.9

OUI version       : 12.2.0.1.4

Log file location : /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/opatch2017-09-20_20-40-03PM_1.log

lsinventory Output file location : /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2017-09-20_20-40-03PM.txt

--------------------------------------------------------------------------------

Local Machine Information::

Hostname: vna01

ARU platform id: 226

ARU platform description:: Linux x86-64

Installed Top-level Products (1):

Oracle Database 12c                                                  12.2.0.1.0

There are 1 products installed in this Oracle Home.

Interim patches (2) :

Patch  26123830     : applied on Wed Sep 20 20:24:26 BST 2017

Unique Patch ID:  21405588

Patch description:  "DATABASE RELEASE UPDATE: 12.2.0.1.170718 (26123830)"

Created on 7 Jul 2017, 00:33:59 hrs PST8PDT

Bugs fixed:

23026585, 24336249, 24929210, 24942749, 25036474, 25110233, 25410877

25417050, 25427662, 25459958, 25547901, 25569149, 25600342, 25600421

25606091, 25655390, 25662088, 24385983, 24923215, 25099758, 25429959

25662101, 25728085, 25823754, 22594071, 23665623, 23749454, 24326846

24334708, 24560906, 24573817, 24578797, 24609996, 24624166, 24668398

24674955, 24744686, 24811725, 24827228, 24831514, 24908321, 24976007

25184555, 25210499, 25211628, 25223839, 25262869, 25316758, 25337332

25455795, 25457409, 25539063, 25546608, 25612095, 25643931, 25410017

22729345, 24485174, 24509056, 24714096, 25329664, 25410180, 25607726

25957038, 25973152, 26024732, 24376878, 24589590, 24676172, 23548817

24796092, 24907917, 25044977, 25736747, 25766822, 25856821, 25051628

24534401, 24835919, 25050160, 25395696, 25430120, 25616359, 25715167

25967985



Patch  26002778     : applied on Wed Sep 20 20:24:11 BST 2017

Unique Patch ID:  21306682

Patch description:  "OCW Patch Set Update : 12.2.0.1.170718 (26002778)"

Created on 3 Jul 2017, 03:26:30 hrs PST8PDT

Bugs fixed:

26144044, 25541343, 25715179, 25493588, 24932026, 24801915, 25832375

25728787, 25825732, 24578464, 25832312, 25742471, 25790699, 25655495

25307145, 25485737, 25505841, 25697364, 24663993, 25026470, 25591658

25537905, 24451580, 25409838, 25371632, 25569634, 25245759, 24665035

25646592, 25025157, 24732650, 24664849, 24584419, 24423011, 24831158

25037836, 25556203, 24464953, 24657753, 25197670, 24796183, 20559126

25197395, 24808260

--------------------------------------------------------------------------------

OPatch succeeded.

[oracle@vna01 OPatch]$



Node 2 : 

Run OPatch Conflict Check

From GI Home:

[oracle@vna02 patches]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/oracle/patches/26133434/26123830

Oracle Interim Patch Installer version 12.2.0.1.9

Copyright (c) 2017, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/app/12.2.0/grid

Central Inventory : /u01/app/oraInventory

from           : /u01/app/12.2.0/grid/oraInst.loc

OPatch version    : 12.2.0.1.9

OUI version       : 12.2.0.1.4

Log file location : /u01/app/12.2.0/grid/cfgtoollogs/opatch/opatch2017-09-20_20-48-20PM_1.log



Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.

[oracle@vna02 patches]$

[oracle@vna02 patches]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/oracle/patches/26133434/26002778

Oracle Interim Patch Installer version 12.2.0.1.9

Copyright (c) 2017, Oracle Corporation.  All rights reserved.



PREREQ session

Oracle Home       : /u01/app/12.2.0/grid

Central Inventory : /u01/app/oraInventory

from           : /u01/app/12.2.0/grid/oraInst.loc

OPatch version    : 12.2.0.1.9

OUI version       : 12.2.0.1.4

Log file location : /u01/app/12.2.0/grid/cfgtoollogs/opatch/opatch2017-09-20_20-48-32PM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.

For the DB Home:

[oracle@vna02 patches]$ export PATH=$ORACLE_HOME/OPatch:$PATH

[oracle@vna02 patches]$ which opatch

/u01/app/oracle/product/12.2.0/dbhome_1/OPatch/opatch

[oracle@vna02 patches]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/oracle/patches/26133434/26123830

Oracle Interim Patch Installer version 12.2.0.1.9

Copyright (c) 2017, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/app/oracle/product/12.2.0/dbhome_1

Central Inventory : /u01/app/oraInventory

from           : /u01/app/oracle/product/12.2.0/dbhome_1/oraInst.loc

OPatch version    : 12.2.0.1.9

OUI version       : 12.2.0.1.4

Log file location : /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/opatch2017-09-20_20-52-24PM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.

[oracle@vna02 patches]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/oracle/patches/26133434/26002778

Oracle Interim Patch Installer version 12.2.0.1.9

Copyright (c) 2017, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/app/oracle/product/12.2.0/dbhome_1

Central Inventory : /u01/app/oraInventory

from           : /u01/app/oracle/product/12.2.0/dbhome_1/oraInst.loc

OPatch version    : 12.2.0.1.9

OUI version       : 12.2.0.1.4

Log file location : /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/opatch2017-09-20_20-52-38PM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.

[oracle@vna02 patches]$



OPATCH Conflict Checks:

[root@vna02 12.2.0]# $ORACLE_HOME/OPatch/opatchauto apply /home/oracle/patches/26133434 -analyze

OPatchauto session is initiated at Thu Sep 21 02:18:32 2017

System initialization log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchautodb/systemconfig2017-09-21_02-18-33AM.log.

Session log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/opatchauto2017-09-21_02-18-53AM.log

The id for this session is NWN8

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.2.0/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/12.2.0/dbhome_1

Patch applicability verified successfully on home /u01/app/oracle/product/12.2.0/dbhome_1

Patch applicability verified successfully on home /u01/app/12.2.0/grid

Verifying SQL patch applicability on home /u01/app/oracle/product/12.2.0/dbhome_1

SQL patch applicability verified successfully on home /u01/app/oracle/product/12.2.0/dbhome_1

OPatchAuto successful.

--------------------------------Summary--------------------------------

Analysis for applying patches has completed successfully:

Host:vna02

RAC Home:/u01/app/oracle/product/12.2.0/dbhome_1

==Following patches were SKIPPED:

Patch: /home/oracle/patches/26133434/25586399

Reason: This patch is not applicable to this specified target type - "rac_database"

==Following patches were SUCCESSFULLY analyzed to be applied:

Patch: /home/oracle/patches/26133434/26002778

Log: /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-21_02-18-56AM_1.log

Patch: /home/oracle/patches/26133434/26123830

Log: /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-21_02-18-56AM_1.log

Host:vna02

CRS Home:/u01/app/12.2.0/grid

==Following patches were SUCCESSFULLY analyzed to be applied:

Patch: /home/oracle/patches/26133434/26002778

Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-21_02-18-56AM_1.log

Patch: /home/oracle/patches/26133434/25586399

Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-21_02-18-56AM_1.log



Patch: /home/oracle/patches/26133434/26123830

Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-21_02-18-56AM_1.log

OPatchauto session completed at Thu Sep 21 02:22:48 2017

Time taken to complete the session 4 minutes, 16 seconds


OPatchauto apply:



[root@vna02 12.2.0]# $ORACLE_HOME/OPatch/opatchauto apply /home/oracle/patches/26133434



OPatchauto session is initiated at Thu Sep 21 02:25:35 2017



System initialization log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchautodb/systemconfig2017-09-21_02-25-36AM.log.



Session log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/opatchauto2017-09-21_02-25-57AM.log

The id for this session is PM1S



Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/12.2.0/dbhome_1



Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.2.0/grid

Patch applicability verified successfully on home /u01/app/12.2.0/grid



Patch applicability verified successfully on home /u01/app/oracle/product/12.2.0/dbhome_1





Verifying SQL patch applicability on home /u01/app/oracle/product/12.2.0/dbhome_1

SQL patch applicability verified successfully on home /u01/app/oracle/product/12.2.0/dbhome_1





Preparing to bring down database service on home /u01/app/oracle/product/12.2.0/dbhome_1

Successfully prepared home /u01/app/oracle/product/12.2.0/dbhome_1 to bring down database service





Bringing down CRS service on home /u01/app/12.2.0/grid

Prepatch operation log file location: /u01/app/oracle/crsdata/vna02/crsconfig/crspatch_vna02_2017-09-21_02-30-11AM.log

CRS service brought down successfully on home /u01/app/12.2.0/grid





Performing prepatch operation on home /u01/app/oracle/product/12.2.0/dbhome_1

Prepatch operation completed successfully on home /u01/app/oracle/product/12.2.0/dbhome_1





Start applying binary patch on home /u01/app/oracle/product/12.2.0/dbhome_1

Binary patch applied successfully on home /u01/app/oracle/product/12.2.0/dbhome_1





Performing postpatch operation on home /u01/app/oracle/product/12.2.0/dbhome_1

Postpatch operation completed successfully on home /u01/app/oracle/product/12.2.0/dbhome_1





Start applying binary patch on home /u01/app/12.2.0/grid

Binary patch applied successfully on home /u01/app/12.2.0/grid





Starting CRS service on home /u01/app/12.2.0/grid

Postpatch operation log file location: /u01/app/oracle/crsdata/vna02/crsconfig/crspatch_vna02_2017-09-21_02-34-30AM.log

CRS service started successfully on home /u01/app/12.2.0/grid





Preparing home /u01/app/oracle/product/12.2.0/dbhome_1 after database service restarted

No step execution required.........

Prepared home /u01/app/oracle/product/12.2.0/dbhome_1 successfully after database service restarted





Trying to apply SQL patch on home /u01/app/oracle/product/12.2.0/dbhome_1

SQL patch applied successfully on home /u01/app/oracle/product/12.2.0/dbhome_1



OPatchAuto successful.



--------------------------------Summary--------------------------------



Patching is completed successfully. Please find the summary as follows:



Host:vna02

RAC Home:/u01/app/oracle/product/12.2.0/dbhome_1

Summary:



==Following patches were SKIPPED:



Patch: /home/oracle/patches/26133434/25586399

Reason: This patch is not applicable to this specified target type - "rac_database"





==Following patches were SUCCESSFULLY applied:



Patch: /home/oracle/patches/26133434/26002778

Log: /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-21_02-31-39AM_1.log



Patch: /home/oracle/patches/26133434/26123830

Log: /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-21_02-31-39AM_1.log





Host:vna02

CRS Home:/u01/app/12.2.0/grid

Summary:



==Following patches were SUCCESSFULLY applied:



Patch: /home/oracle/patches/26133434/26002778

Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-21_02-32-21AM_1.log



Patch: /home/oracle/patches/26133434/25586399

Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-21_02-32-21AM_1.log



Patch: /home/oracle/patches/26133434/26123830

Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-09-21_02-32-21AM_1.log







OPatchauto session completed at Thu Sep 21 02:41:44 2017

Time taken to complete the session 16 minutes, 9 seconds

[root@vna02 12.2.0]#

LsInventory Checks:

GRIDHome Inventory

[oracle@vna02 ~]$ . oraenv

ORACLE_SID = [oracle] ? +ASM2
The Oracle base has been set to /u01/app/oracle

[oracle@vna02 ~]$ export PATH=$ORACLE_HOME/OPatch:$PATH
[oracle@vna02 ~]$ which opatch
/u01/app/12.2.0/grid/OPatch/opatch

[oracle@vna02 ~]$ opatch lsinventory

Oracle Interim Patch Installer version 12.2.0.1.9
Copyright (c) 2017, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/12.2.0/grid
Central Inventory : /u01/app/oraInventory
from           : /u01/app/12.2.0/grid/oraInst.loc
OPatch version    : 12.2.0.1.9
OUI version       : 12.2.0.1.4
Log file location : /u01/app/12.2.0/grid/cfgtoollogs/opatch/opatch2017-09-21_02-44-21AM_1.log
Lsinventory Output file location : /u01/app/12.2.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2017-09-21_02-44-21AM.txt

--------------------------------------------------------------------------------
Local Machine Information::
Hostname: vna02
ARU platform id: 226
ARU platform description:: Linux x86-64
Installed Top-level Products (1):
Oracle Grid Infrastructure 12c                                       12.2.0.1.0
There are 1 products installed in this Oracle Home.

Interim patches (3) :
Patch  26123830     : applied on Thu Sep 21 02:34:08 BST 2017
Unique Patch ID:  21405588
Patch description:  "DATABASE RELEASE UPDATE: 12.2.0.1.170718 (26123830)"
Created on 7 Jul 2017, 00:33:59 hrs PST8PDT

Bugs fixed:
23026585, 24336249, 24929210, 24942749, 25036474, 25110233, 25410877

25417050, 25427662, 25459958, 25547901, 25569149, 25600342, 25600421

25606091, 25655390, 25662088, 24385983, 24923215, 25099758, 25429959

25662101, 25728085, 25823754, 22594071, 23665623, 23749454, 24326846

24334708, 24560906, 24573817, 24578797, 24609996, 24624166, 24668398

24674955, 24744686, 24811725, 24827228, 24831514, 24908321, 24976007

25184555, 25210499, 25211628, 25223839, 25262869, 25316758, 25337332

25455795, 25457409, 25539063, 25546608, 25612095, 25643931, 25410017

22729345, 24485174, 24509056, 24714096, 25329664, 25410180, 25607726

25957038, 25973152, 26024732, 24376878, 24589590, 24676172, 23548817

24796092, 24907917, 25044977, 25736747, 25766822, 25856821, 25051628

24534401, 24835919, 25050160, 25395696, 25430120, 25616359, 25715167

25967985



Patch  25586399     : applied on Thu Sep 21 02:33:51 BST 2017

Unique Patch ID:  21306685

Patch description:  "ACFS Patch Set Update : 12.2.0.1.170718 (25586399)"

Created on 16 Jun 2017, 00:35:19 hrs PST8PDT

Bugs fixed:

24679041, 24964969, 25098392, 25078431, 25491831



Patch  26002778     : applied on Thu Sep 21 02:33:01 BST 2017

Unique Patch ID:  21306682

Patch description:  "OCW Patch Set Update : 12.2.0.1.170718 (26002778)"

Created on 3 Jul 2017, 03:26:30 hrs PST8PDT

Bugs fixed:

26144044, 25541343, 25715179, 25493588, 24932026, 24801915, 25832375

25728787, 25825732, 24578464, 25832312, 25742471, 25790699, 25655495

25307145, 25485737, 25505841, 25697364, 24663993, 25026470, 25591658

25537905, 24451580, 25409838, 25371632, 25569634, 25245759, 24665035

25646592, 25025157, 24732650, 24664849, 24584419, 24423011, 24831158

25037836, 25556203, 24464953, 24657753, 25197670, 24796183, 20559126

25197395, 24808260







--------------------------------------------------------------------------------



OPatch succeeded.

[oracle@vna02 ~]$









DBHome Inventory:

[oracle@vna02 ~]$ export PATH=$ORACLE_HOME/OPatch:$PATH

[oracle@vna02 ~]$ which opatch

/u01/app/oracle/product/12.2.0/dbhome_1/OPatch/opatch

[oracle@vna02 ~]$

[oracle@vna02 ~]$

[oracle@vna02 ~]$ opatch lsinventory

Oracle Interim Patch Installer version 12.2.0.1.9

Copyright (c) 2017, Oracle Corporation.  All rights reserved.





Oracle Home       : /u01/app/oracle/product/12.2.0/dbhome_1

Central Inventory : /u01/app/oraInventory

from           : /u01/app/oracle/product/12.2.0/dbhome_1/oraInst.loc

OPatch version    : 12.2.0.1.9

OUI version       : 12.2.0.1.4

Log file location : /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/opatch2017-09-21_02-45-58AM_1.log



Lsinventory Output file location : /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2017-09-21_02-45-58AM.txt



--------------------------------------------------------------------------------

Local Machine Information::

Hostname: vna02

ARU platform id: 226

ARU platform description:: Linux x86-64



Installed Top-level Products (1):



Oracle Database 12c                                                  12.2.0.1.0

There are 1 products installed in this Oracle Home.





Interim patches (2) :



Patch  26123830     : applied on Thu Sep 21 02:32:03 BST 2017

Unique Patch ID:  21405588

Patch description:  "DATABASE RELEASE UPDATE: 12.2.0.1.170718 (26123830)"

Created on 7 Jul 2017, 00:33:59 hrs PST8PDT

Bugs fixed:

23026585, 24336249, 24929210, 24942749, 25036474, 25110233, 25410877

25417050, 25427662, 25459958, 25547901, 25569149, 25600342, 25600421

25606091, 25655390, 25662088, 24385983, 24923215, 25099758, 25429959

25662101, 25728085, 25823754, 22594071, 23665623, 23749454, 24326846

24334708, 24560906, 24573817, 24578797, 24609996, 24624166, 24668398

24674955, 24744686, 24811725, 24827228, 24831514, 24908321, 24976007

25184555, 25210499, 25211628, 25223839, 25262869, 25316758, 25337332

25455795, 25457409, 25539063, 25546608, 25612095, 25643931, 25410017

22729345, 24485174, 24509056, 24714096, 25329664, 25410180, 25607726

25957038, 25973152, 26024732, 24376878, 24589590, 24676172, 23548817

24796092, 24907917, 25044977, 25736747, 25766822, 25856821, 25051628

24534401, 24835919, 25050160, 25395696, 25430120, 25616359, 25715167

25967985



Patch  26002778     : applied on Thu Sep 21 02:31:51 BST 2017

Unique Patch ID:  21306682

Patch description:  "OCW Patch Set Update : 12.2.0.1.170718 (26002778)"

Created on 3 Jul 2017, 03:26:30 hrs PST8PDT

Bugs fixed:

26144044, 25541343, 25715179, 25493588, 24932026, 24801915, 25832375

25728787, 25825732, 24578464, 25832312, 25742471, 25790699, 25655495

25307145, 25485737, 25505841, 25697364, 24663993, 25026470, 25591658

25537905, 24451580, 25409838, 25371632, 25569634, 25245759, 24665035

25646592, 25025157, 24732650, 24664849, 24584419, 24423011, 24831158

25037836, 25556203, 24464953, 24657753, 25197670, 24796183, 20559126

25197395, 24808260







--------------------------------------------------------------------------------



OPatch succeeded.

[oracle@vna02 ~]$

 

Clonewars – Next Gen Cloning with Oracle 12.2 Multitenancy (Part Deux)… With a Sprinkle of PDB Refresh

 

This is Part 2 on the remote [PDB] cloning capabilities of Oracle 12.2 Multitenant.

Cloning Example 2:  Remote clone from an existing CDB/PDB into a local PDB (PDB->PDB).  In this example, “darkside” is the CDB, darthmaul is the source (remote) PDB, and yoda is the local target PDB.

 

SQL> select database_name from v$database;

DATABASE_NAME
--------------------------------------------------------
DARKSIDE

darkside$SQL> alter pluggable database darthmaul open;
Pluggable database altered.

SQL> select name, open_mode from v$pdbs;

NAME      OPEN_MODE
--------- ----------
PDB$SEED  READ ONLY
DARTHMAUL READ WRITE

darkside$SQL> archive log list ;
Database log mode            Archive Mode
Automatic archival           Enabled
Archive destination          USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     1
Next log sequence to archive   3
Current log sequence         3

darkside$SQL> select name, open_mode from v$database;
NAME     OPEN_MODE
--------- --------------------
DARKSIDE  READ WRITE

darkside$SQL> COLUMN property_name FORMAT A30
COLUMN property_value FORMAT A30
SELECT property_name, property_value
FROM   database_properties
WHERE  property_name = 'LOCAL_UNDO_ENABLED'; 
PROPERTY_NAME                PROPERTY_VALUE
------------------------------ ------------------------------
LOCAL_UNDO_ENABLED           TRUE
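Had LOCAL_UNDO_ENABLED come back FALSE, hot remote cloning would not be possible; local undo can be enabled with a bounce through UPGRADE mode. A minimal sketch (run from the CDB root; this requires downtime):

```sql
-- Sketch only: switching a 12.2 CDB to local undo mode.
-- Requires a restart in UPGRADE mode, so plan for an outage.
SHUTDOWN IMMEDIATE;
STARTUP UPGRADE;
ALTER DATABASE LOCAL UNDO ON;
SHUTDOWN IMMEDIATE;
STARTUP;
```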


$ cat darkside_create_remote_clone_user.sql
create user c##darksidecloneuser identified by cloneuser123 container=ALL;
grant create session, create pluggable database to c##darksidecloneuser  container=ALL;

$ cat darkside_db_link.sql
create database link darksideclone_link
CONNECT TO c##darksidecloneuser IDENTIFIED BY cloneuser123 USING 'darkside';
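Before attempting the clone, it can be worth a quick sanity check that the link resolves and the common user can connect; otherwise a bad link only surfaces mid-clone. A minimal probe:

```sql
-- Illustrative connectivity check over the clone link.
select * from dual@darksideclone_link;
```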

Nishan$SQL> select DB_LINK,HOST from dba_db_links;
DB_LINK        HOST
------------  ---------------------------
SYS_HUB          SEEDDATA
REMOTECLONELINK  hansolo
DARKSIDECLONE_LINK darkside

darkside$SQL> select name from v$datafile;
NAME
-------------------------------------------------------------------------------

+ACFSDATA/DARKSIDE/4E63D836FC5AF80DE053B214A8C07E55/DATAFILE/system.276.942656929

+ACFSDATA/DARKSIDE/4E63D836FC5AF80DE053B214A8C07E55/DATAFILE/sysaux.277.942656929

+ACFSDATA/DARKSIDE/4E63D836FC5AF80DE053B214A8C07E55/DATAFILE/undotbs1.275.942656929

+ACFSDATA/DARKSIDE/4E63D836FC5AF80DE053B214A8C07E55/DATAFILE/users.279.942657041

+ACFSDATA/DARKSIDE/4E63D836FC5AF80DE053B214A8C07E55/DATAFILE/rey.291.942877803

+ACFSDATA/DARKSIDE/4E63D836FC5AF80DE053B214A8C07E55/DATAFILE/luke.292.942877825

darkside$SQL> show con_name
CON_NAME
-----------------------------
DARTHMAUL


darkside$SQL> create table foofighters tablespace rey as select * from obj$;
Table created.

Nishan$SQL> create pluggable database yoda from darthmaul@DARKSIDECLONE_LINK;

Pluggable database created.

Nishan$SQL> alter session set container = yoda;
Session altered.

yoda$SQL> select name, open_mode from v$pdbs;
NAME                    OPEN_MODE
----------------------------------------
YODA                   MOUNTED

yoda$SQL> select name from v$datafile;
NAME
--------------------------------------------------------------------------------
+DATA/NISHAN/4E82D233272C7273E0538514A8C00DF3/DATAFILE/system.310.942878321
+DATA/NISHAN/4E82D233272C7273E0538514A8C00DF3/DATAFILE/sysaux.311.942878321
+DATA/NISHAN/4E82D233272C7273E0538514A8C00DF3/DATAFILE/undotbs1.309.942878321
+DATA/NISHAN/4E82D233272C7273E0538514A8C00DF3/DATAFILE/users.306.942878319
+DATA/NISHAN/4E82D233272C7273E0538514A8C00DF3/DATAFILE/rey.307.942878319
+DATA/NISHAN/4E82D233272C7273E0538514A8C00DF3/DATAFILE/luke.308.942878319
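A quick way to confirm the clone registered with the local CDB is to query v$pdbs from the root container; a minimal check (output will vary by environment):

```sql
-- Back in the root container, list all PDBs and their states.
alter session set container = CDB$ROOT;
select con_id, name, open_mode from v$pdbs order by con_id;
```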


Now on to Refresh the PDB
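One caveat: ALTER PLUGGABLE DATABASE ... REFRESH only succeeds against a PDB created as a refreshable clone, so the yoda clone would typically need a REFRESH MODE clause at creation time. A sketch, reusing the link from above:

```sql
-- A refreshable clone must be created with a refresh mode
-- (MANUAL, or EVERY n MINUTES for automatic refresh).
create pluggable database yoda from darthmaul@DARKSIDECLONE_LINK
  refresh mode manual;
```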

SQL> create table foofighters tablespace rey as select * from obj$;

Table created.




SQL> select segment_name from dba_segments where tablespace_name = 'REY'

SEGMENT_NAME

----------------------------------------------------------------

FOOFIGHTERS




SQL> select name, open_mode from v$pdbs;

NAME            OPEN_MODE

------------------------------

PDB$SEED        READ ONLY

OBIWAN          READ WRITE

FORCEAWAKENS    MOUNTED

YODA            MOUNTED




SQL> alter pluggable database yoda open read only;

Pluggable database altered.




SQL> select segment_name from dba_segments where tablespace_name = 'REY';

no rows selected




SQL> alter session set container = yoda;

Session altered.




SQL> ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;

Pluggable database altered.




SQL> ALTER PLUGGABLE DATABASE refresh;

Pluggable database altered.




SQL> select segment_name from dba_segments where tablespace_name = 'REY';

select segment_name from dba_segments where tablespace_name = 'REY'  

ERROR at line 1:

ORA-01219: database or pluggable database not open: queries allowed on fixed

tables or views only




SQL> ALTER PLUGGABLE DATABASE open read only;

Pluggable database altered.




SQL> select segment_name from dba_segments where tablespace_name = 'REY';

SEGMENT_NAME

-----------------------------------------------------

FOOFIGHTERS

Flex Your Inner ASM – ASM 12.2 New Features

This article describes, at a high level, the key ASM improvements in Oracle 12.2.  Note that, due to the number of changes and improvements in ACFS, we cover ACFS in a separate article.

Oracle ASM Flex Disk Groups and File Groups

In previous releases, disk group storage attributes were defined only at the disk group level, which was generally quite coarse from a storage management perspective. In 12.2, the concept of an ASM flex disk group is introduced to let users manage storage at the database level, allowing much finer-grained control. Note that this capability is complementary to existing disk group manageability.  Flex disk groups are built on the concept of file groups.  A file group is a set of files that share the same properties and characteristics, and is used to describe the files of a database.   A significant benefit of file groups is the ability to specify a different availability specification for each database; a key example is the ability to create point-in-time database clones.

The following example illustrates how a flex disk group is created:

SQL> create diskgroup vna_data flex redundancy disk '/dev/mapper/mpath*';

Alternatively, convert an existing standard disk group to a flex disk group:

SQL> alter diskgroup vna_data convert redundancy to flex;
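Once databases have file groups in the flex disk group, properties can be set per file group, and quota groups can cap the space a database consumes. The following is a sketch; the file group name vnadb1 and the quota value are illustrative:

```sql
-- Per-file-group redundancy (file group names normally match the database/PDB).
alter diskgroup vna_data modify filegroup vnadb1 set 'redundancy' = 'high';

-- Create a quota group and move the file group into it to cap its space.
alter diskgroup vna_data add quotagroup qg_vnadb1 set 'quota' = '500g';
alter diskgroup vna_data modify filegroup vnadb1 set 'quota_group' = 'qg_vnadb1';
```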

Each database built on ASM (with COMPATIBLE.ASM and COMPATIBLE.RDBMS set to 12.2.0.0 or later) will consist of a file group, which allows granular storage management capabilities such as redundancy, rebalance power limits and priority, striping, and quota groups, all at the file group level.  Since redundancy is now defined at the file group level rather than the disk group level, you can change the redundancy of an individual database, for example from normal to high.  Note that you currently cannot change from external redundancy to normal or high.

A disk group can contain multiple file groups, each with its own independent redundancy.  A file group can belong to only one disk group; however, a database can span multiple disk groups, with multiple file groups in different disk groups.

 

ASM Support for Preferred Read on Extended Clusters

In previous releases, the ASM_PREFERRED_READ_FAILURE_GROUPS parameter defined the read preference toward a specific failure group in extended clusters.

In ASM 12.2, preferred read failure groups are automatically detected and set in the ASM instance when extended clusters are deployed. The ASM instance evaluates which disks are local to that instance and sets the preference accordingly.  Thus, the ASM_PREFERRED_READ_FAILURE_GROUPS parameter is no longer necessary.

 

 

 

Oracle IOServer

In 12.2, Oracle extends the capability of Flex ASM by enabling the ASM instance to be disjoint and separate from the physical servers hosting databases.  This feature is referred to as the ASM IO Server (IOS). This “far cluster” capability enables the deployment of larger clusters of ASM instances that can support more database clients while reducing the ASM instance footprint; storage consolidation occurs by placing a larger number of databases into a single set of disk groups.

With the introduction of IOS, ASM and database storage access can be configured in the following configurations:

  • Direct access to ASM disks with local clients (same as pre-12.2)
  • Flex ASM clients with direct access to ASM disks
  • ACFS access through the ASM proxy instance
  • Remote network-based connectivity to ASM disk groups with Oracle IOServer (IOS)

Updates for Oracle ASM Filter Driver Installation and Configuration

Oracle 12.1 introduced the ASM Filter Driver (ASMFD) for improved device management and disk group protection; however, it required enablement up front, before Grid Infrastructure installation. In 12.2, the installation and configuration of ASMFD is streamlined and performed as part of the Oracle Grid Infrastructure installation.

ASM Extended Support for 4K Sector Size

Full support for 4K sector sizes has been anticipated since 11.2.  Now in 12.2 ASM, a new disk group attribute, LOGICAL_SECTOR_SIZE, defines the logical sector size (in bytes) of the disk group and specifies the smallest I/O that can be issued to the underlying ASM disks.
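A sketch of creating a disk group with a 4K logical sector size (disk path and disk group name follow the earlier example; the values shown are illustrative):

```sql
-- Assumes compatible.asm 12.2; the attribute constrains the smallest
-- I/O that ASM will issue to the underlying disks.
CREATE DISKGROUP vna_4k EXTERNAL REDUNDANCY
  DISK '/dev/mapper/mpath*'
  ATTRIBUTE 'compatible.asm' = '12.2.0.0',
            'logical_sector_size' = '4096';
```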

Deprecated Features

Deprecation of Oracle ASM Intelligent Data Placement

Deprecation of ASM_PREFERRED_READ_FAILURE_GROUPS Initialization Parameter

In Memory of… sorry, I mean 12.2 In-Memory New Features

As part of our continuing 12.2 New Features Series, we explore different areas of Oracle 12.2.

In this blog we discuss the new 12.2 In-Memory features.

In-Memory Expressions

An In-Memory expression, or “hot” expression, enables frequently evaluated query expressions to be materialized in the In-Memory Column Store for subsequent reuse. By default, the procedure DBMS_INMEMORY_ADMIN.IME_CAPTURE_EXPRESSIONS identifies and populates IM expressions.

Populating the materialized values of frequently used query expressions into the In-Memory Column Store greatly reduces the system resources required to execute queries, allowing for better scalability. The IME_CAPTURE_EXPRESSIONS procedure captures and populates the 20 “hottest” expressions in the database for a specified time range.
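A minimal sketch of capturing the hottest expressions for the current capture window and then reviewing what was captured (the snapshot argument 'CURRENT' is one of the documented values; 'CUMULATIVE' and 'WINDOW' are the others):

```sql
-- Capture the hottest expressions from the current window
EXEC DBMS_INMEMORY_ADMIN.IME_CAPTURE_EXPRESSIONS('CURRENT');

-- Review the captured IM expressions
SELECT table_name, column_name, sql_expression
  FROM dba_im_expressions;
```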

In-Memory Virtual Columns

An IM virtual column is a column whose value is derived by evaluating an expression. IM virtual columns improve query performance by avoiding repeated calculations. Also, the database can scan and filter IM virtual columns using techniques such as SIMD vector processing.
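A sketch of an IM virtual column, assuming the INMEMORY_VIRTUAL_COLUMNS parameter is set to ENABLE (the table and column names here are hypothetical):

```sql
-- Allow virtual columns to be populated in the IM column store
ALTER SYSTEM SET inmemory_virtual_columns = ENABLE;

-- total_value is a virtual column; once populated in-memory, its
-- precomputed values avoid re-evaluating price * quantity per query
CREATE TABLE items_im (
  price       NUMBER,
  quantity    NUMBER,
  total_value AS (price * quantity)
) INMEMORY;
```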

In-Memory FastStart

Before 12.2, the columnar format was available only in memory, meaning that after a database restart the In-Memory Column Store had to be repopulated. This multiple-step process converted traditional, row-formatted data into the compressed columnar format and placed it in memory.

Now, In-Memory FastStart optimizes the compressed columnar population of database objects (tables, partitions, and subpartitions) in the In-Memory Column Store. This significantly reduces the time required to repopulate In-Memory objects.

Use the DBMS_INMEMORY_ADMIN.FASTSTART_ENABLE procedure to designate a specific tablespace for FastStart.
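For example (FS_TBS is a hypothetical tablespace created beforehand to hold the FastStart data):

```sql
-- Designate a tablespace to store the FastStart columnar data
EXEC DBMS_INMEMORY_ADMIN.FASTSTART_ENABLE('FS_TBS');
```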

Automatic Data Optimization (ADO) Support for In-Memory Column Store

In 12.2, ADO now also manages the IM column store as a new data tier. When enabled, the Heat Map feature automatically tracks data access patterns, and ADO uses this Heat Map data to implement user-defined policies at the database level. ADO manages the In-Memory Column Store by moving objects (tables, partitions, or subpartitions) in and out of memory based on Heat Map statistics.

Twelve Days of 12.2

Copyright © 2016 Viscosity North America, Inc. All rights reserved.

In-Memory Join Groups

IM column stores can use join groups to optimize joins of populated IM tables. Join groups eliminate the performance overhead of decompressing and hashing column values. Create join groups using the CREATE INMEMORY JOIN GROUP statement:

CREATE INMEMORY JOIN GROUP prodid_jg (mine.items(product_id), mine.product_line(product_id));

In-Memory Support on Oracle Active Data Guard

12.2 allows the IM column store to be enabled in Oracle Active Data Guard environments by setting the initialization parameter INMEMORY_ADG_ENABLED to TRUE. Using the In-Memory Column Store on an Active Data Guard standby database enables users to offload larger and heavier reporting workloads onto standby databases. Moreover, 12.2 permits the standby database to populate a completely different set of data in the In-Memory Column Store than the primary database, providing greater data access flexibility.

In-Memory Column Store Dynamic Resizing

You can now dynamically increase the size of the In-Memory area while the database is open, assuming that enough memory is available within the SGA. Thus, the In-Memory Column Store can be resized without restarting the database, providing greater application availability.
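A sketch of growing the In-Memory area online (the new value must be larger than the current one and must fit within the SGA; 20G is purely illustrative):

```sql
ALTER SYSTEM SET inmemory_size = 20G SCOPE=BOTH;
```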

ASR and IP Addresses on PCA/OVCA

How to determine OVCA or PCA Management Node, Compute Node, and ILOM service processor IP addresses

The asr command has various switches that give information on the assets; however, the assets are generally listed by ILOM address.  So how do you get the asset listing, and how do you map it back to the primary IP of the compute node?

 

[root@ovcamn06r1 bin]# ./asr list_asset

        Usage: list_asset [-i ip] [-h host] [-s subnet] [-c Display assets list in csv format] [-?]

-s subnet

--subnet=subnet

List activation and ASR status by subnet. Subnet value can be a comma delimited list.

[root@ovcamn06r1 bin]# ./asr list_asset -s 192.168.4

IP_ADDRESS    HOST_NAME           SERIAL_NUMBER PARENT_SERIAL ASR PROTOCOL SOURCE LAST_HEARTBEAT PRODUCT_NAME                  

----------    ---------           ------------- ------------- --- -------- ------ -------------- ------------                  

192.168.4.114 ORACLESP-1527NM10F6 1527NM10F6                  Y   SNMP     ILOM   NA             ORACLE SERVER X5-2            

192.168.4.105 ORACLESP-1534NM1011 1534NM1011                  Y   SNMP     ILOM   NA             ORACLE SERVER X5-2            

192.168.4.103 ilom-ovcamn05r1     1534NM1024                  Y   SNMP     ILOM   NA             ORACLE SERVER X5-2            

192.168.4.116 ORACLESP-1534NM102A 1534NM102A                  Y   SNMP     ILOM   NA             ORACLE SERVER X5-2            

192.168.4.120 ORACLESP-1534NM103A 1534NM103A                  Y   SNMP     ILOM   NA             ORACLE SERVER X5-2            

192.168.4.106 ORACLESP-1534NM103C 1534NM103C                  Y   SNMP     ILOM   NA             ORACLE SERVER X5-2            

192.168.4.109 ORACLESP-1534NM103X 1534NM103X                  Y   SNMP     ILOM   NA             ORACLE SERVER X5-2             

192.168.4.125 ORACLESP-1534NM1043 1534NM1043                  Y   SNMP     ILOM   NA             ORACLE SERVER X5-2            

192.168.4.108 ORACLESP-1534NM1047 1534NM1047                  Y   SNMP     ILOM   NA             ORACLE SERVER X5-2             

192.168.4.110 ORACLESP-1534NM104A 1534NM104A                  Y   SNMP     ILOM   NA             ORACLE SERVER X5-2            

192.168.4.115 ORACLESP-1534NM104D 1534NM104D                  Y   SNMP     ILOM   NA             ORACLE SERVER X5-2             

192.168.4.122 ORACLESP-1546NM10JR 1546NM10JR                  Y   SNMP     ILOM   NA             ORACLE SERVER X5-2            

192.168.4.117 ORACLESP-1546NM10JV 1546NM10JV                  Y   SNMP     ILOM   NA             ORACLE SERVER X5-2             

192.168.4.118 ORACLESP-1546NM10K0 1546NM10K0                  Y   SNMP     ILOM   NA             ORACLE SERVER X5-2            

192.168.4.119 ORACLESP-1546NM10K3 1546NM10K3                  Y   SNMP     ILOM   NA             ORACLE SERVER X5-2             

192.168.4.121 ORACLESP-1546NM10K7 1546NM10K7                  Y   SNMP     ILOM   NA             ORACLE SERVER X5-2            

192.168.4.124 ORACLESP-1546NM10K8 1546NM10K8                  Y   SNMP     ILOM   NA             ORACLE SERVER X5-2             

192.168.4.112 ORACLESP-1546NM10KA 1546NM10KA                  Y   SNMP     ILOM   NA             ORACLE SERVER X5-2            

192.168.4.113 ORACLESP-1546NM10KB 1546NM10KB                  Y   SNMP     ILOM   NA             ORACLE SERVER X5-2            

192.168.4.111 ORACLESP-1546NM10KD 1546NM10KD                  Y   SNMP     ILOM   NA             ORACLE SERVER X5-2            

192.168.4.107 ORACLESP-1546NM10KE 1546NM10KE                  Y   SNMP     ILOM   NA             ORACLE SERVER X5-2            

192.168.4.104 ilom-ovcamn06r1     1546NM10KL                  Y   SNMP     ILOM   NA             ORACLE SERVER X5-2            

192.168.4.200 ovcasw21ar1         AK00335296                  Y   SNMP     ILOM   NA             Oracle Switch ES1-24          

192.168.4.201 ovcasw21br1         AK00335297                  Y   SNMP     ILOM   NA             Oracle Switch ES1-24          

192.168.4.203 ovcasw20r1          AK00335855                  Y   SNMP     ILOM   NA             Sun Datacenter InfiniBand Switch 36

192.168.4.202 ovcasw19r1          AK00335857                  Y   SNMP     ILOM   NA             Sun Datacenter InfiniBand Switch 36

 

Once you have the asset list, you can get the individual asset entries per host (using the ILOM hostname with -h) or per IP (using -i).

[root@ovcamn06r1 bin]# ./asr list_asset -h ORACLESP-1546NM10K8

IP_ADDRESS    HOST_NAME           SERIAL_NUMBER PARENT_SERIAL ASR PROTOCOL SOURCE LAST_HEARTBEAT PRODUCT_NAME

----------    ---------           ------------- ------------- --- -------- ------ -------------- ------------

192.168.4.124 ORACLESP-1546NM10K8 1546NM10K8                  Y   SNMP     ILOM   NA             ORACLE SERVER X5-2


-i ip

--ipaddress=ip

List activation and ASR status by ipaddress. IP value can be a comma delimited list.

[root@ovcamn06r1 bin]# ./asr list_asset -i 192.168.4.108

IP_ADDRESS    HOST_NAME           SERIAL_NUMBER PARENT_SERIAL ASR PROTOCOL SOURCE LAST_HEARTBEAT PRODUCT_NAME

----------    ---------           ------------- ------------- --- -------- ------ -------------- ------------

192.168.4.108 ORACLESP-1534NM1047 1534NM1047                  Y   SNMP     ILOM   NA             ORACLE SERVER X5-2


If you want to map the server hostname back to the ILOM hostname, you can execute the following.  The ilom- prefix is prepended to the actual server hostname.

 

[root@ovcamn06r1 bin]# cat /etc/dhcp/dhcpd_dynamic.conf |grep ilom

host ilom-ovcacn10r1 { hardware ethernet 00:xx:xx:x0:17:37; fixed-address 192.168.4.105;default-lease-time 600;}

host ilom-ovcacn26r1 { hardware ethernet 00:xx:xx:xd:a0:cb; fixed-address 192.168.4.106;default-lease-time 600;}

host ilom-ovcacn33r1 { hardware ethernet 00:xx:xx:xe:2f:cd; fixed-address 192.168.4.107;default-lease-time 600;}

host ilom-ovcacn14r1 { hardware ethernet 00:xx:xx:xd:5d:4b; fixed-address 192.168.4.108;default-lease-time 600;}

host ilom-ovcacn11r1 { hardware ethernet 00:xx:xx:xd:67:a1; fixed-address 192.168.4.109;default-lease-time 600;}

host ilom-ovcacn29r1 { hardware ethernet 00:xx:xx:xd:6f:7b; fixed-address 192.168.4.110;default-lease-time 600;}

host ilom-ovcacn07r1 { hardware ethernet 00:xx:xx:xe:2f:4f; fixed-address 192.168.4.111;default-lease-time 600;}

host ilom-ovcacn36r1 { hardware ethernet 00:xx:xx:xe:30:69; fixed-address 192.168.4.112;default-lease-time 600;}

host ilom-ovcacn35r1 { hardware ethernet 00:xx:xx:xe:68:13; fixed-address 192.168.4.113;default-lease-time 600;}

host ilom-ovcacn30r1 { hardware ethernet 00:xx:xx:xa:e1:89; fixed-address 192.168.4.114;default-lease-time 600;}

host ilom-ovcacn12r1 { hardware ethernet 00:xx:xx:xd:4d:a3; fixed-address 192.168.4.115;default-lease-time 600;}

host ilom-ovcacn28r1 { hardware ethernet 00:xx:xx:xd:a0:47; fixed-address 192.168.4.116;default-lease-time 600;}

host ilom-ovcacn37r1 { hardware ethernet 00:xx:xx:xe:5f:4f; fixed-address 192.168.4.117;default-lease-time 600;}

host ilom-ovcacn34r1 { hardware ethernet 00:xx:xx:xe:5f:a9; fixed-address 192.168.4.118;default-lease-time 600;}

host ilom-ovcacn31r1 { hardware ethernet 00:xx:xx:xe:61:53; fixed-address 192.168.4.119;default-lease-time 600;}

host ilom-ovcacn13r1 { hardware ethernet 00:xx:xx:xd:4e:39; fixed-address 192.168.4.120;default-lease-time 600;}

host ilom-ovcacn32r1 { hardware ethernet 00:xx:xx:xe:2f:55; fixed-address 192.168.4.121;default-lease-time 600;}

host ilom-ovcacn08r1 { hardware ethernet 00:xx:xx:xe:60:db; fixed-address 192.168.4.122;default-lease-time 600;}

host ilom-ovcacn09r1 { hardware ethernet 00:xx:xx:xd:8c:fd; fixed-address 192.168.4.124;default-lease-time 600;}

host ilom-ovcacn27r1 { hardware ethernet 00:xx:xx:xd:66:6f; fixed-address 192.168.4.125;default-lease-time 600;}

host ilom-ovcamn06r1 { hardware ethernet 00:xx:xx:xe:5f:b5; fixed-address 192.168.4.104; default-lease-time 600;}

host ilom-ovcamn05r1 { hardware ethernet 00:xx:xx:xd:a1:55; fixed-address 192.168.4.103; default-lease-time 600;}


Note, the IP addresses listed are ILOM IPs. To get the server IP, subtract 100 from the last octet; e.g., 105 - 100 = 5, so 192.168.4.5 is the server IP.  To validate, execute:

 

PCA> list compute-node

 

Compute_Node  IP_Address    Provisioning_Status  ILOM_MAC            Provisioning_State

————  ———-    ——————-  ——–            ——————

ovcacn34r1    192.168.4.18  RUNNING              00:xx:xx:xe:5f:a9   running

ovcacn37r1    192.168.4.17  RUNNING              00:xx:xx:xe:5f:4f   running

ovcacn29r1    192.168.4.10  RUNNING              00:xx:xx:xd:6f:7b   running

ovcacn36r1    192.168.4.12  RUNNING              00:xx:xx:xe:30:69   running

ovcacn11r1    192.168.4.9   RUNNING              00:xx:xx:xd:67:a1   running

ovcacn26r1    192.168.4.6   RUNNING              00:xx:xx:xd:a0:cb   running

ovcacn07r1    192.168.4.11  RUNNING              00:xx:xx:xe:2f:4f   running

ovcacn30r1    192.168.4.14  RUNNING              00:xx:xx:xa:e1:89   running

ovcacn35r1    192.168.4.13  RUNNING              00:xx:xx:xe:68:13   running

ovcacn33r1    192.168.4.7   RUNNING              00:xx:xx:xe:2f:cd   running

ovcacn10r1    192.168.4.5   RUNNING              00:xx:xx:x0:17:37   running

ovcacn28r1    192.168.4.16  RUNNING              00:xx:xx:xd:a0:47   running

ovcacn12r1    192.168.4.15  RUNNING              00:xx:xx:xd:4d:a3   running

ovcacn08r1    192.168.4.22  RUNNING              00:xx:xx:xe:60:db   running

ovcacn14r1    192.168.4.8   RUNNING              00:xx:xx:xd:5d:4b   running

ovcacn32r1    192.168.4.21  RUNNING              00:xx:xx:xe:2f:55   running

ovcacn31r1    192.168.4.19  RUNNING              00:xx:xx:xe:61:53   running

ovcacn09r1    192.168.4.24  RUNNING              00:xx:xx:xd:8c:fd   running

ovcacn27r1    192.168.4.25  RUNNING              00:xx:xx:xd:66:6f   running

ovcacn13r1    192.168.4.20  RUNNING              00:xx:xx:xd:4e:39   running
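The last-octet arithmetic above can be wrapped in a small helper; a sketch (ilom_to_server_ip is a hypothetical function name, not part of the asr or PCA tooling):

```shell
#!/bin/sh
# Map a PCA/OVCA ILOM IP to its compute-node server IP by
# subtracting 100 from the last octet, per the note above.
ilom_to_server_ip() {
  prefix=${1%.*}     # everything up to the last dot, e.g. 192.168.4
  last=${1##*.}      # the last octet, e.g. 105
  echo "${prefix}.$((last - 100))"
}

ilom_to_server_ip 192.168.4.105   # prints 192.168.4.5
```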

 


 

I generally generate the asset list as a CSV file. The -c switch displays the asset list in CSV format, and the output can be redirected to a file:

./asr list_asset -c > $HOME/asr_assets_all.csv

 

 

 

 

Dallas Oracle User Group Performance & 12.2 New Features Technical Day

Hey DFW Oracle Folks … heads up I’m speaking at the DOUG Performance & Tuning and 12.2 New Features Technical Day!

Charles Kim and I will be speaking on the following topics:

  • “What to Expect .. When You are Expecting to Virtualize Your Database” –  (OVM or VMware)
  • Linux Best Practices for Oracle Database Systems

Time:

  • Thursday 20 October 2016 9:30am-5:30pm

Location: 

  • Courtyard & TownePlace Suites DFW Airport North/Grapevine, TX
    2200 Bass Pro Court|Grapevine, TX 76051 [map]

Speakers (Seven Oracle ACE Directors!):

  • Jim Czuprynski

  • Charles Kim

  • Cary Millsap

  • Dan Morgan

  • Kerry Osborne

  • Tanel Poder

  • Nitin Vengurlekar

Sign up & more details:

Data Guard Setup … it’s a recap

The intent of this section is to illustrate the build of a physical standby (disaster recovery) database from a production database.  For this scenario, both databases will reside on a single physical server.

This section assumes that the Oracle Grid Infrastructure, in either Standalone or Cluster configuration, is installed on both the primary host and the standby host.  Additionally, the primary and standby databases will reside in an ASM disk group.   The primary database, VNA, is currently in open mode.  The db_unique_name of the physical standby database is VNADR.

The detailed doc is downloadable here:    dataguardbook-11204setup

DATAGUARD CONFIGURATION

Building the Physical standby Database

  1. Add entries to tnsnames.ora for the VNA database and the DR database VNADR to both the primary database host and the DR database host.

 

VNA=

  (DESCRIPTION=

     (ADDRESS=(PROTOCOL=tcp)(HOST=dallas.viscosityna.com)(PORT=1532))

     (CONNECT_DATA=

       (SID=VNA)

     )

   )

 

VNADR=

  (DESCRIPTION=

     (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.9.168)(PORT=1521))

     (CONNECT_DATA=

       (SERVER=DEDICATED)

       (SERVICE_NAME=VNADR)

     )

   )

 

  2. Add listener entry and reload the listener.

LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))            # line added by Agent

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON              # line added by Agent

 

SID_LIST_LISTENER =

  (SID_LIST =

    (SID_DESC =

      (GLOBAL_DBNAME = VNADR)

      (SID_NAME = VNADR)

      (ORACLE_HOME = /u001/app/oracle/product/11.2.0.4/db_2)

    )

  )

 

[oracle@dallas admin]$ lsnrctl reload listener

 

LSNRCTL for Linux: Version 11.2.0.4.0 – Production on 11-JUL-2014 15:39:16

 

Copyright (c) 1991, 2011, Oracle.  All rights reserved.

 

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))

The command completed successfully

  3. An entry for VNADR will need to be added to /etc/oratab on the DR database host.

# This file is used by ORACLE utilities.  It is created by root.sh

# and updated by either Database Configuration Assistant while creating

# a database or ASM Configuration Assistant while creating ASM instance.

 

# A colon, ‘:’, is used as the field terminator.  A new line terminates

# the entry.  Lines beginning with a pound sign, ‘#’, are comments.

#

# Entries are of the form:

#   $ORACLE_SID:$ORACLE_HOME:<N|Y>:

#

# The first and second fields are the system identifier and home

# directory of the database respectively.  The third filed indicates

# to the dbstart utility that the database should , “Y”, or should not,

# “N”, be brought up at system boot time.

#

# Multiple entries with the same $ORACLE_SID are not allowed.

#

+ASM:/u001/app/11.2.0.4/grid:N

VNADR:/u001/app/oracle/product/11.2.0.4/db_2:N

 

  4. The Oracle password file should be copied from the primary database to the DR database server.

 

[oracle@dallas admin]$ scp dallas.viscosityna.com:/u01/app/oracle/product/11.2.0/VNA/dbs/orapwVNA /u001/app/oracle/product/11.2.0/db_2/dbs/orapwVNADR

Uh oh, I didn’t set my Exadata core count correctly … now what?

Changing Capacity On-Demand Core Count in Exadata

We recently implemented an Exadata X6 at one of our client sites (yes, we don’t use Oracle ACS; we do it ourselves).   However, the client failed to tell us that they had licensed only a subset of the cores per compute node, after we had already implemented and *migrated* production databases onto the X6.  So how do we set the core count correctly after implementation (post-OEDA run)?  We heard horror stories about other folks saying they needed to re-image to set the core count.  To be specific, it’s easy to increase cores, but decreasing is nasty business.

The steps below are ones we used to decrease the core count:

1. Gracefully stop all databases running on all compute nodes.

2. Log in to the compute nodes as root and run the “dbmcli” utility.

3. Display the current core count using the following command:

LIST DBSERVER attributes coreCount

4. Change the core count to the desired count using this command (this needs to be done on all compute nodes):

ALTER DBSERVER pendingCoreCount = 14

NOTE:  Since we are decreasing the number of cores after installation of the system, the FORCE option must be used:

ALTER DBSERVER pendingCoreCount = 14 FORCE

5. Reboot

6. Verify the change was correct by using the “LIST” command in step 3.

 

Just FYI… Troubleshooting

If there is an issue with the MS service starting up, it could be because of the Java being used on the system.

For Exadata release 12.1.2.3.1.160411, the version of Java was 1.8.0.66, which was flagged by a security audit as a vulnerability and was removed from the system.  When the system rebooted, the MS service couldn’t start back up because Java had been removed. Follow these steps to reinstall Java and get the MS service restarted on the compute nodes.

1. Download the latest JDK from the Oracle site. NOTE: The RPM download was used.

2. Install the JDK package on the system:

rpm -ivh jdk-8u102-linux-x64.rpm

3. Redeploy the MS service application:

/opt/oracle/dbserver/dbms/deploy/scripts/unix/setup_dynamicDeploy DB -D

4. Restart the MS service:

ALTER DBSERVER RESTART SERVICES MS

 

Convert VMware Fusion VM to ESXi based VM

There are occasions when we need to create custom-built Linux VMs on behalf of our clients. For example, we may build a Linux VM that has all the best practices for a 12c Oracle database or a 12c WebLogic Server. We sometimes do this in VMware Fusion or in a vSphere/ESXi configuration.
In this example we will showcase how we migrate VMs built in Fusion to an ESXi-based environment.
It is assumed that the Linux VM has been pre-created in VMware Fusion.
My VMware Fusion runs on Mac OS X 10.9.5.

The key tool in this migration/conversion is called vmware-vdiskmanager, and is located in the following directory:
/Applications/VMware\ Fusion.app/Contents/Library

vmware-vdiskmanager has the following capabilities (as per Help)

NitinV$ ./vmware-vdiskmanager -h

VMware Virtual Disk Manager - build 1945692.
Usage: vmware-vdiskmanager OPTIONS |
Offline disk manipulation utility
Operations, only one may be specified at a time:
-c : create disk. Additional creation options must
be specified. Only local virtual disks can be
created.
-d : defragment the specified virtual disk. Only
local virtual disks may be defragmented.
-k : shrink the specified virtual disk. Only local
virtual disks may be shrunk.
-n : rename the specified virtual disk; need to
specify destination disk-name. Only local virtual
disks may be renamed.
-p : prepare the mounted virtual disk specified by
the volume path for shrinking.
-r : convert the specified disk; need to specify
destination disk-type. For local destination disks
the disk type must be specified.
-x : expand the disk to the specified capacity. Only
local virtual disks may be expanded.
-R : check a sparse virtual disk for consistency and attempt
to repair any errors.
-e : check for disk chain consistency.
-D : make disk deletable. This should only be used on disks
that have been copied from another product.

Other Options:
-q : do not log messages

Additional options for create and convert:
-a : (for use with -c only) adapter type
(ide, buslogic, lsilogic). Pass lsilogic for other adapter types.
-s : capacity of the virtual disk
-t : disk type id

Disk types:
0 : single growable virtual disk
1 : growable virtual disk split in 2GB files
2 : preallocated virtual disk
3 : preallocated virtual disk split in 2GB files
4 : preallocated ESX-type virtual disk
5 : compressed disk optimized for streaming
6 : thin provisioned virtual disk - ESX 3.x and above

Below is the command I ran to convert my Linux VM (Openfiler) to ESXi vmdk:
/Applications/VMware\ Fusion.app/Contents/Library/vmware-vdiskmanager -r OpenFiler1.vmwarevm/Virtual\ Disk.vmdk -t 4 /Volumes/Oracle-images\ 1/LinuxStones/linuxStones.vmdk
Creating disk '/Volumes/Oracle-images 1/LinuxStones/linuxStones.vmdk'
Convert: 100% done.

Virtual disk conversion successful.

ls -l ~/LinuxStones

linuxStones-flat.vmdk
linuxStones.vmdk
Linux66_StonesVT.ova

This conversion produces two files.
Once the vmdk and flat.vmdk files are generated, the next step is to import these into ESXi. I used vSphere client to execute this workflow:
1. Create a new VM, using the usual method; e.g., File->New->Virtual Machine->Custom-> Choose Datastore location
2. Choose Virtual Machine Version -> Guest CPU/Memory/Network/SCSI controller settings -> Select “Do Not Create Disk” -> Finish
3. Go back to VM Configuration-> Datastore -> Browse DataStore -> Upload
4. Upload .vmdk and -flat.vmdk
5. Go back to VM configuration (Virtual Machine Properties) -> Add -> Device Type (Hard Disk) -> “Use an existing virtual disk”
6. Locate the datastore and select the existing disk -> Finish -> OK
7. Startup VM

YUM for Exadata, or is it Yummy Exadata

For method 1 there are two options:

  1. Connect a non-Exadata DB server to sync with ULN (or public Yum) and setup a Yum repository – OR –
  2. Use the general Oracle Linux iso as Yum repository on the local DB server.

For option 1, when creating a repository by syncing with ULN, do not place the repository on an Exadata database server. See this link for more information on connecting to ULN and setting up a repository.

The quickest way to set up a local repo is to use an iso image, as described in option 2. Perform the following steps:

  • Download the full Oracle Linux 6 iso from https://edelivery.oracle.com/linux.
    Select “Oracle Linux” into “Select a Product Pack” drop box and “X86 64 bit” into “Platform” drop box and press on the button GO.
    Choose the link “Oracle Linux 6 Update 6 Media Pack for x86_64 (64 bit)” and, from the next page, download the iso
    Oracle Linux Release 6 Update 6 for x86_64 (64 Bit)  – V52218-01.iso
  • Create a mountpoint:
    # mkdir /mnt/ol6
  • Mount the repository:
    # mount -o loop <file.iso> /mnt/ol6
  • Edit /etc/yum.repos.d/Exadata-computenode.repo to make it look as follows:
    [ol6_iso]
    name=Oracle Exadata DB server
    baseurl=file:///mnt/ol6
    gpgcheck=0
    enabled=1
  • Validate the repository:
yum list --disablerepo=* --enablerepo=ol6_iso

This should list all the Oracle Linux 6 packages in the ‘ol6_iso’ repository.
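The repo definition from the steps above can be generated with a short script; a sketch that writes to /tmp so it is safe to try anywhere (on a real database server the target file is /etc/yum.repos.d/Exadata-computenode.repo, as described above):

```shell
#!/bin/sh
# Write the ol6_iso repository definition described above.
REPO=/tmp/Exadata-computenode.repo
cat > "$REPO" <<'EOF'
[ol6_iso]
name=Oracle Exadata DB server
baseurl=file:///mnt/ol6
gpgcheck=0
enabled=1
EOF
echo "wrote $REPO"
```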

When either a synchronized ULN repository is used or the Oracle Linux 6 iso is mounted as described above, run the following steps on Exadata 12.1.2.1.0 database servers running Oracle Linux 6 to install the appropriate packages:

# yum --disablerepo=* --enablerepo=ol6_iso install xorg-x11-xauth (note: starting 12.1.2.2.0 this package is not mandatory for X applications)
# yum --disablerepo=* --enablerepo=ol6_iso install xorg-x11-utils


NOTE: errors regarding “pre-existing rpmdb problem(s), ‘yum check’ output” can be ignored
 
Output will be similar to:

# yum --disablerepo=* --enablerepo=ol6_iso install xorg-x11-xauth -y
Loaded plugins: downloadonly
Setting up Install Process
Resolving Dependencies
–> Running transaction check
—> Package xorg-x11-xauth.x86_64 1:1.0.2-7.1.el6 will be installed
–> Processing Dependency: libXmuu.so.1()(64bit) for package: 1:xorg-x11-xauth-1.0.2-7.1.el6.x86_64
–> Running transaction check
—> Package libXmu.x86_64 0:1.1.1-2.el6 will be installed
–> Processing Dependency: libXt.so.6()(64bit) for package: libXmu-1.1.1-2.el6.x86_64
–> Running transaction check
—> Package libXt.x86_64 0:1.1.4-6.1.el6 will be installed
–> Finished Dependency Resolution
Dependencies Resolved
========================================================================================================================
Package Arch Version Repository Size
========================================================================================================================
Installing:
xorg-x11-xauth x86_64 1:1.0.2-7.1.el6 ol6_latest 34 k
Installing for dependencies:
libXmu x86_64 1.1.1-2.el6 ol6_latest 65 k
libXt x86_64 1.1.4-6.1.el6 ol6_latest 164 k
Transaction Summary
========================================================================================================================
Install 3 Package(s)
Total download size: 264 k
Installed size: 622 k
Is this ok [y/N]: y
Downloading Packages:
(1/3): libXmu-1.1.1-2.el6.x86_64.rpm | 65 kB 00:00
(2/3): libXt-1.1.4-6.1.el6.x86_64.rpm | 164 kB 00:00
(3/3): xorg-x11-xauth-1.0.2-7.1.el6.x86_64.rpm | 34 kB 00:00
————————————————————————————————————————
Total 231 kB/s | 264 kB 00:01
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Warning: RPMDB altered outside of yum.
** Found 3 pre-existing rpmdb problem(s), ‘yum check’ output follows:
fuse-2.8.3-4.0.2.el6.x86_64 has missing requires of kernel >= (‘0’, ‘2.6.14’, None)
2:irqbalance-1.0.4-10.0.2.el6.x86_64 has missing requires of kernel >= (‘0’, ‘2.6.32’, ‘358.2.1’)
perl-BSD-Resource-1.28-1.fc6.1.x86_64 has missing requires of perl(:MODULE_COMPAT_5.8.8)
Installing : libXt-1.1.4-6.1.el6.x86_64 1/3
Installing : libXmu-1.1.1-2.el6.x86_64 2/3
Installing : 1:xorg-x11-xauth-1.0.2-7.1.el6.x86_64 3/3
Verifying : libXmu-1.1.1-2.el6.x86_64 1/3
Verifying : 1:xorg-x11-xauth-1.0.2-7.1.el6.x86_64 2/3
Verifying : libXt-1.1.4-6.1.el6.x86_64 3/3
Installed:
xorg-x11-xauth.x86_64 1:1.0.2-7.1.el6
Dependency Installed:
libXmu.x86_64 0:1.1.1-2.el6 libXt.x86_64 0:1.1.4-6.1.el6
Complete!
# yum install xorg-x11-utils
Loaded plugins: downloadonly
Setting up Install Process
Resolving Dependencies
–> Running transaction check
—> Package xorg-x11-utils.x86_64 0:7.5-6.el6 will be installed
–> Processing Dependency: libXxf86misc.so.1()(64bit) for package: xorg-x11-utils-7.5-6.el6.x86_64
–> Processing Dependency: libdmx.so.1()(64bit) for package: xorg-x11-utils-7.5-6.el6.x86_64
–> Processing Dependency: libXxf86dga.so.1()(64bit) for package: xorg-x11-utils-7.5-6.el6.x86_64
–> Processing Dependency: libXxf86vm.so.1()(64bit) for package: xorg-x11-utils-7.5-6.el6.x86_64
–> Processing Dependency: libXv.so.1()(64bit) for package: xorg-x11-utils-7.5-6.el6.x86_64
–> Running transaction check
—> Package libXv.x86_64 0:1.0.9-2.1.el6 will be installed
—> Package libXxf86dga.x86_64 0:1.1.4-2.1.el6 will be installed
—> Package libXxf86misc.x86_64 0:1.0.3-4.el6 will be installed
—> Package libXxf86vm.x86_64 0:1.1.3-2.1.el6 will be installed
—> Package libdmx.x86_64 0:1.1.3-3.el6 will be installed
–> Finished Dependency Resolution
Dependencies Resolved
========================================================================================================================
Package Arch Version Repository Size
========================================================================================================================
Installing:
xorg-x11-utils x86_64 7.5-6.el6 ol6_latest 94 k
Installing for dependencies:
libXv x86_64 1.0.9-2.1.el6 ol6_latest 16 k
libXxf86dga x86_64 1.1.4-2.1.el6 ol6_latest 17 k
libXxf86misc x86_64 1.0.3-4.el6 ol6_latest 17 k
libXxf86vm x86_64 1.1.3-2.1.el6 ol6_latest 16 k
libdmx x86_64 1.1.3-3.el6 ol6_latest 14 k
Transaction Summary
========================================================================================================================
Install 6 Package(s)
Total download size: 174 k
Installed size: 324 k
Is this ok [y/N]: y
Downloading Packages:
(1/6): libXv-1.0.9-2.1.el6.x86_64.rpm | 16 kB 00:00
(2/6): libXxf86dga-1.1.4-2.1.el6.x86_64.rpm | 17 kB 00:00
(3/6): libXxf86misc-1.0.3-4.el6.x86_64.rpm | 17 kB 00:00
(4/6): libXxf86vm-1.1.3-2.1.el6.x86_64.rpm | 16 kB 00:00
(5/6): libdmx-1.1.3-3.el6.x86_64.rpm | 14 kB 00:00
(6/6): xorg-x11-utils-7.5-6.el6.x86_64.rpm | 94 kB 00:00
————————————————————————————————————————
Total 103 kB/s | 174 kB 00:01
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : libXv-1.0.9-2.1.el6.x86_64 1/6
Installing : libdmx-1.1.3-3.el6.x86_64 2/6
Installing : libXxf86dga-1.1.4-2.1.el6.x86_64 3/6
Installing : libXxf86vm-1.1.3-2.1.el6.x86_64 4/6
Installing : libXxf86misc-1.0.3-4.el6.x86_64 5/6
Installing : xorg-x11-utils-7.5-6.el6.x86_64 6/6
Verifying : xorg-x11-utils-7.5-6.el6.x86_64 1/6
Verifying : libXxf86misc-1.0.3-4.el6.x86_64 2/6
Verifying : libXxf86vm-1.1.3-2.1.el6.x86_64 3/6
Verifying : libXxf86dga-1.1.4-2.1.el6.x86_64 4/6
Verifying : libdmx-1.1.3-3.el6.x86_64 5/6
Verifying : libXv-1.0.9-2.1.el6.x86_64 6/6
Installed:
xorg-x11-utils.x86_64 0:7.5-6.el6
Dependency Installed:
libXv.x86_64 0:1.0.9-2.1.el6 libXxf86dga.x86_64 0:1.1.4-2.1.el6 libXxf86misc.x86_64 0:1.0.3-4.el6
libXxf86vm.x86_64 0:1.1.3-2.1.el6 libdmx.x86_64 0:1.1.3-3.el6
Complete!

 

Nodes freshly imaged to 12.1.2.x need two additional packages: libdmx and libXxf86vm. Install as follows:

# yum install libdmx libXxf86vm

Method #2: Manual downloading and installing rpms
Download and install individual packages manually.

NOTE: Manual downloading and installing is not recommended, because required dependencies are not pulled in automatically (as yum does).

Setting Round-Robin Multipathing Policy in VMware ESXi 6.0

Storage Array Type Plugins (SATP) and Path Selection Plugins (PSP) are part of the VMware APIs for Pluggable Storage Architecture (PSA). The SATP has all the knowledge of the storage array to aggregate I/Os across multiple channels and has the intelligence to send failover commands when a path has failed. The Path Selection Policy can be either “Fixed”, “Most Recently Used” or “Round Robin”.

If a VMware VM is using RDM with all-flash arrays, the Round Robin policy should be used. Furthermore, inside the Linux kernel (VM), the noop I/O scheduler should be used. Both need to be configured for proper throughput.
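Inside the guest, the noop scheduler can be made persistent with a udev rule rather than per-boot echoes. A minimal sketch, assuming the rule filename and the sd* disk matching (adjust both for your devices):

```
# /etc/udev/rules.d/99-oracle-noop.rules (hypothetical filename)
# Set the noop elevator for all SCSI disks presented to the VM
ACTION=="add|change", KERNEL=="sd[a-z]", SUBSYSTEM=="block", ATTR{queue/scheduler}="noop"
```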

As a best practice, the preferred method to set the Round Robin policy is to create a rule that automatically assigns the Round Robin PSP and an IO Operation Limit value of 1 to any newly added FlashArray device. In this blog I'll use the Pure Storage array to illustrate setting the Round Robin policy as well as the IO limit.

The following command creates a rule that achieves both of these for only Pure Storage FlashArray devices:

esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "PURE" -M "FlashArray" -P "VMW_PSP_RR" -O "iops=1"

This must be repeated for each ESXi host.
This can also be accomplished through PowerCLI. Once connected to a vCenter Server this script will iterate through all of the hosts in that particular vCenter and create a default rule to set Round Robin for all Pure Storage FlashArray devices with an I/O Operation Limit set to 1.

$hosts = get-vmhost
foreach ($esx in $hosts)
{
$esxcli = get-esxcli -VMHost $esx
$esxcli.storage.nmp.satp.rule.add($null, $null, "PURE FlashArray RR IO Operation Limit Rule", $null, $null, $null, "FlashArray", $null, "VMW_PSP_RR", "iops=1", "VMW_SATP_ALUA", $null, $null, "PURE")
}

It is important to note that existing, previously presented devices will need to be either manually set to Round Robin and an I/O Operation Limit of 1 or unclaimed and reclaimed through either a reboot of the host or through a manual device reclaim process so that it can inherit the configuration set forth by the new rule. For setting a new I/O Operation Limit on an existing device, use the following procedure:

The first step is to change the particular device to use the Round Robin PSP. This must be done on every ESXi host and can be done with through the vSphere Web Client, the Pure Storage Plugin for the vSphere Web Client or via command line utilities.

Via esxcli:
esxcli storage nmp device set -d naa. --psp=VMW_PSP_RR

Note that changing the PSP using the Web Client Plugin is the preferred option, as it will automatically configure Round Robin across all of the hosts. Note that this does not set the IO Operation Limit to 1; that is a command-line option only and must be done separately.

Round Robin can also be set on a per-device, per-host basis using the standard vSphere Web Client actions. Again, this does not set the IO Operation Limit to 1, which is a command-line option only and must be done separately.

The IO Operations Limit cannot be checked from the vSphere Web Client—it can only be verified or altered via command line utilities. The following command can check a particular device for the PSP and IO Operations Limit:

esxcli storage nmp device list -d naa.

To set a device that is pre-existing to have an IO Operation limit of one, run the following command:

esxcli storage nmp psp roundrobin deviceconfig set -d naa. -I 1 -t iops
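To sweep all existing Pure devices in one pass, the two per-device commands above can be wrapped in a loop in the ESXi shell. A hedged sketch; the awk parsing of the device-list output format is an assumption, so verify it against your host before relying on it:

```shell
# Run in the ESXi shell: apply VMW_PSP_RR and iops=1 to every PURE-backed naa device.
# awk remembers each "naa." device header, then prints it when the matching
# "Device Display Name: PURE" property line is seen.
for dev in $(esxcli storage nmp device list | awk '/^naa\./{d=$1} /Device Display Name: PURE/{print d}'); do
  esxcli storage nmp device set -d "$dev" --psp=VMW_PSP_RR
  esxcli storage nmp psp roundrobin deviceconfig set -d "$dev" --type=iops --iops=1
done
```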

High Level Overview of 11204 ASM Rebalance in Async ARB0

High Level look at 11204 Rebalance with Plan Optimization and Async ARB0

 

Drop disk

 

SQL> alter diskgroup reco drop disk 'ASM_NORM_DATA4' rebalance power 12;

here we issue the rebalance

NOTE: requesting all-instance membership refresh for group=2

GMON querying group 2 at 120 for pid 19, osid 19030

GMON updating for reconfiguration, group 2 at 121 for pid 19, osid 19030

NOTE: group 2 PST updated.

NOTE: membership refresh pending for group 2/0x89b87754 (RECO)

GMON querying group 2 at 122 for pid 13, osid 4000

SUCCESS: refreshed membership for 2/0x89b87754 (RECO)

NOTE: starting rebalance of group 2/0x89b87754 (RECO) at power 12   rebalance internally started

Starting background process ARB0    ARB0 gets started for this rebalance

SUCCESS: alter diskgroup reco drop disk ‘ASM_NORM_DATA4’ rebalance power 12

Wed Sep 19 23:54:10 2012

ARB0 started with pid=21, OS id=19526

NOTE: assigning ARB0 to group 2/0x89b87754 (RECO) with 12 parallel I/Os   ARB0 assigned to this diskgroup rebalance; note that it states 12 parallel I/Os

NOTE: Attempting voting file refresh on diskgroup RECO

Wed Sep 19 23:54:38 2012

NOTE: requesting all-instance membership refresh for group=2   first indications that rebalance is completing

GMON updating for reconfiguration, group 2 at 123 for pid 22, osid 19609

NOTE: group 2 PST updated.

SUCCESS: grp 2 disk ASM_NORM_DATA4 emptied    Once the rebalance relocation phase is complete, the disk is emptied

NOTE: erasing header on grp 2 disk ASM_NORM_DATA4   The emptied disk’s header is erased and set to FORMER

NOTE: process _x000_+asm (19609) initiating offline of disk 3.3915941808 (ASM_NORM_DATA4) with mask 0x7e in group 2   The dropped disk is offlined

NOTE: initiating PST update: grp = 2, dsk = 3/0xe96887b0, mask = 0x6a, op = clear

GMON updating disk modes for group 2 at 124 for pid 22, osid 19609

NOTE: PST update grp = 2 completed successfully

NOTE: initiating PST update: grp = 2, dsk = 3/0xe96887b0, mask = 0x7e, op = clear

GMON updating disk modes for group 2 at 125 for pid 22, osid 19609

NOTE: cache closing disk 3 of grp 2: ASM_NORM_DATA4

NOTE: PST update grp = 2 completed successfully

GMON updating for reconfiguration, group 2 at 126 for pid 22, osid 19609

NOTE: cache closing disk 3 of grp 2: (not open) ASM_NORM_DATA4

NOTE: group 2 PST updated.

Wed Sep 19 23:54:42 2012

NOTE: membership refresh pending for group 2/0x89b87754 (RECO)

GMON querying group 2 at 127 for pid 13, osid 4000

GMON querying group 2 at 128 for pid 13, osid 4000

NOTE: Disk in mode 0x8 marked for de-assignment

SUCCESS: refreshed membership for 2/0x89b87754 (RECO)

NOTE: Attempting voting file refresh on diskgroup RECO

Wed Sep 19 23:56:45 2012

NOTE: stopping process ARB0    All phases of rebalance are completed and ARB0 is shutdown

SUCCESS: rebalance completed for group 2/0x89b87754 (RECO)   Rebalance marked as complete

 

 

Add disk

Starting background process ARB0

SUCCESS: alter diskgroup reco add disk ‘ORCL:ASM_NORM_DATA4’ rebalance power 16

Thu Sep 20 23:08:22 2012

ARB0 started with pid=22, OS id=19415

NOTE: assigning ARB0 to group 2/0x89b87754 (RECO) with 16 parallel I/Os

Thu Sep 20 23:08:31 2012

NOTE: Attempting voting file refresh on diskgroup RECO

Thu Sep 20 23:08:46 2012

NOTE: requesting all-instance membership refresh for group=2

Thu Sep 20 23:08:49 2012

NOTE: F1X0 copy 1 relocating from 0:2 to 0:459 for diskgroup 2 (RECO)

Thu Sep 20 23:08:50 2012

GMON updating for reconfiguration, group 2 at 134 for pid 27, osid 19492

NOTE: group 2 PST updated.

Thu Sep 20 23:08:50 2012

NOTE: membership refresh pending for group 2/0x89b87754 (RECO)

NOTE: F1X0 copy 2 relocating from 1:2 to 1:500 for diskgroup 2 (RECO)

NOTE: F1X0 copy 3 relocating from 2:2 to 2:548 for diskgroup 2 (RECO)

GMON querying group 2 at 135 for pid 13, osid 4000

SUCCESS: refreshed membership for 2/0x89b87754 (RECO)

Thu Sep 20 23:09:06 2012

NOTE: Attempting voting file refresh on diskgroup RECO

Thu Sep 20 23:09:57 2012

NOTE: stopping process ARB0

SUCCESS: rebalance completed for group 2/0x89b87754 (RECO)
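While ARB0 is running, progress can also be tracked through the documented V$ASM_OPERATION view (run from the ASM instance); the undocumented X$KFGMG snapshots below show the same state transitions at a lower level:

```sql
-- One row per active rebalance; EST_MINUTES gives the remaining-time estimate
SQL> select group_number, operation, state, power, actual, sofar, est_work, est_minutes
       from v$asm_operation;
```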

SQL> select NUMBER_KFGMG, OP_KFGMG, ACTUAL_KFGMG, REBALST_KFGMG from X$KFGMG;

NUMBER_KFGMG   OP_KFGMG ACTUAL_KFGMG REBALST_KFGMG
------------ ---------- ------------ -------------
           2          1            0             2
           2         32            0             2

NUMBER_KFGMG   OP_KFGMG ACTUAL_KFGMG REBALST_KFGMG
------------ ---------- ------------ -------------
           2          1           16             1

NUMBER_KFGMG   OP_KFGMG ACTUAL_KFGMG REBALST_KFGMG
------------ ---------- ------------ -------------
           2          1           16             2
           2         32           16             2

NUMBER_KFGMG   OP_KFGMG ACTUAL_KFGMG REBALST_KFGMG
------------ ---------- ------------ -------------
           2          1           16             2


Queryable Opatch and Datapatch

One of my Oracle Support buddies mentioned to me a cool feature called Queryable Opatch.  This new feature of Oracle Database 12c provides the capability to store the patch inventory in the database and query it with SQL.  Note that this feature is specific to the Database Home; it does not apply to Grid Infrastructure or other Oracle Homes.

I wasn't quite sure what problem this feature was trying to solve or what value it was attempting to bring.  Regardless, I thought I'd investigate and see what this feature was about.  Mind you, I didn't do any in-depth analysis, but enough to shed light on the topic.  I'll follow up later with a detailed analysis.  We will also touch on the new Datapatch feature.

Query-able Opatch
In versions prior to 12c, the typical stack flow for getting Oracle patch inventory information was:
opatch lsinventory —> oraInventory_loc —> Central Inventory (OBase) —> local inventory (OHome)

Now, in 12c, the stack flow is as follows, if you implement and configure the Queryable Opatch feature:
opatch lsinventory (XML) —> queryable patch interface (QPI) —{XML}—> inventory data (in database)
  
The key ingredient here is the queryable patch interface (QPI).
QPI consists of:
	•	      An external table (OPATCH_XML_INV) created by catqitab.sql.
	•	      The ORACLE_LOADER access driver with a preprocessor (OPATCH_SCRIPT_DIR —> qop).
	•	      A SQL interface, the DBMS_QOPATCH package (dbmsqopi.sql), used as the PL/SQL interface to query. It contains procedures/functions such as GET_OPATCH_LSINVENTORY, GET_OPATCH_XSLT, and GET_SQLPATCH_STATUS.


Once the external table is created using the catqitab.sql script, you can then execute the load and instantiation of the Opatch Registry data.  

Process of instantiation of Opatch Registry data: 




1. Select against the OPATCH_XML_INV external table
2. Execution of opatch lsinventory -xml (the preprocessor program)
3. Load inventory data into table(s)

SQL> SELECT directory_name, directory_path FROM dba_directories WHERE directory_name like 'OPATCH%';

DIRECTORY_NAME                 DIRECTORY_PATH
------------------------------ -----------------------------------------------
OPATCH_LOG_DIR                 /u01/app/oracle/product/12.1/db_home1/QOpatch
OPATCH_SCRIPT_DIR              /u01/app/oracle/product/12.1/db_home1/QOpatch




Note that DBMS_QOPATCH returns XML, so you'll need to transform the XML into something more readable, e.g., using a stylesheet (XSLT). Luckily, Oracle provides a default XSLT via GET_OPATCH_XSLT, a function of DBMS_QOPATCH. You can use this function or build your own XSLT sheet.
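For example, the full inventory can be rendered as readable text by feeding GET_OPATCH_LSINVENTORY through the default stylesheet. A sketch, run as a privileged user in the 12c database:

```sql
-- Widen SQL*Plus output, then transform the XML inventory with the default XSLT
SQL> set long 200000 pagesize 0
SQL> select xmltransform(dbms_qopatch.get_opatch_lsinventory, dbms_qopatch.get_opatch_xslt) from dual;
```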







Datapatch is also a new sub-feature of Queryable Patch Inventory.  Datapatch is a driver script that automates the post-patch SQL actions for database patches.
It is applicable only to the database home (not the GI home) and to patches that include SQL changes. 
When the binary patch is successfully applied, Datapatch updates the SQL patch registry in the database —> dba_registry_history/dba_registry_sqlpatch 
Note, Datapatch has to be executed per database.  Also, YOU STILL INSTALL PATCHES using Opatch first !!! 
Without Datapatch you could never tell whether the database had the SQL part of the patch applied.
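To see what Datapatch has recorded, the SQL patch registry can be queried directly. A sketch; the column list assumes the 12.1 dba_registry_sqlpatch layout:

```sql
-- Each row records a SQL patch action applied (or rolled back) in this database
SQL> select patch_id, action, status, action_time, description
       from dba_registry_sqlpatch
      order by action_time;
```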


Oracle 12c Multitenant – My Top Questions Part II

CDB / PDB Operations

Q. How can I install and setup Pluggable Database ?

A. Use runInstaller to install the Oracle Database software.
Use DBCA to create databases; you can create many pluggable databases in a single operation.
DBCA enables you to specify the number of PDBs in the CDB when it is created. After a CDB is created, you can use DBCA to plug PDBs into it and unplug PDBs from it.

Q. What operations act on PDBs as entities?

A. These operations act on PDBs as entities:
• create PDB (brand-new, as a clone of an existing PDB, or by plugging in an unplugged PDB)
• unplug PDB
• drop PDB
• set the open mode for a PDB
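As a sketch of the SQL behind these operations (the PDB names and manifest path follow the examples used later in this post; the admin user and password are illustrative):

```sql
-- Brand-new PDB from the seed
SQL> create pluggable database nisha admin user pdbadmin identified by Welcome1;
-- Clone of an existing PDB (source must be open read-only)
SQL> create pluggable database ishan from nisha;
-- Plug in a previously unplugged PDB from its XML manifest
SQL> create pluggable database nisha using '/acfsdata/oradata/manifestNisha.xml' nocopy;
```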

Q. How do I start up a Pluggable database ?

A. When connected to the current PDB:
SQL> alter pluggable database open;

When connected to the root:
SQL> alter pluggable database nisha open;

Q. How do I shutdown / close a Pluggable database ?

A. When connected to the current PDB:
SQL> alter pluggable database close;

When connected to the root:
SQL> alter pluggable database nisha close;

Q. How do I drop a PDB?

A. drop pluggable database nisha including datafiles;

 

Q.  How do I recover/restore a PDB in RAC?

A.

alter pluggable database pdb close immediate instances=all;  <– can do this using srvctl too

restore pluggable database pdb until scn <scn number>;

recover pluggable database pdb until scn <scn number>;

alter pluggable database pdb open resetlogs;

alter pluggable database pdb open instances=all;  <– can do this using srvctl too

Q. How to clone a PDB from an existing PDB ?

A. Note that the source must be open in read only mode.
— Using Oracle-Managed Files
create pluggable database ishan from nisha;

Q. How do I unplug a PDB ?

A. alter pluggable database Nisha unplug into '/acfsdata/oradata/manifestNisha.xml';
The manifest file is an XML file that contains all the information on a PDB and is required to plug it back in.
It can be re-created using the DBMS_PDB.RECOVER procedure.

Q. How can I tell which parameters are modifiable at PDB level ?

A. select NAME, ISPDB_MODIFIABLE from V$PARAMETER;

Q. What common users do I have in my cdb ?

A. SQL> select distinct USERNAME from CDB_USERS where common = 'YES';

Q. How do I create a common user ?

A. SQL> create user c##db_dba1 identified by manager1 container=all;

Q. How do I create a local user ?

A. SQL> create user nisha_dba1 identified by manager1 container=current;

Q. How can I view which service is attached to my Pluggable database ? 

A. The following query illustrates:

SQL> column NAME format a30
SQL> select PDB, INST_ID, NAME from gv$services order by 1;

PDB                               INST_ID NAME
-------------------------------- -------- --------------------------------
CDB$ROOT                                1 cdb1XDB
CDB$ROOT                                1 SYS$BACKGROUND
CDB$ROOT                                1 SYS$USERS
CDB$ROOT                                1 cdb1
NISHA                                   1 nisha
ISHAN                                   1 ishan

Q. Where can I find Alert log and traces for my pluggable Database ?

A. A single copy of the alert log is generated, which contains warnings and alert information for all PDBs. Find details by selecting from the v$diag_info dynamic view.

Q. Is the multitenant option available in Standard Edition?

A. Yes, but you may only create one PDB per CDB.

Q. How do I monitor the undo usage of each container/database in a CDB/PDB ?

A.
select NAME,MAX(TUNED_UNDORETENTION), MAX(MAXQUERYLEN), MAX(NOSPACEERRCNT), MAX(EXPSTEALCNT)
from V$CONTAINERS c , V$UNDOSTAT u
where c.CON_ID=u.CON_ID
group by NAME;

select NAME,SNAP_ID,UNDOTSN,UNDOBLKS,TXNCOUNT,MAXQUERYLEN,MAXQUERYSQLID
from V$CONTAINERS c , DBA_HIST_UNDOSTAT u
where c.CON_ID=u.CON_ID
and u.CON_DBID=c.DBID
order by NAME;

Q. Are there any background processes ex, PMON, SMON etc associated with PDBs ?

A. No. There is one set of background processes shared by the root and all PDBs.

Q. Are there separate control files required for each PDB ?

A. No. There is a single redo log and a single control file for an entire CDB.

Q. Are there separate Redo log files required for each PDB ?

A. No. There is a single redo log and a single control file for an entire CDB.

Q. Can I monitor SGA usage on a PDB by PDB basis?

A. There is a single SGA shared by all pluggable databases. However, you can determine SGA consumption by container, i.e., root and each PDB.

SQL> alter session set container=CDB$ROOT;
SQL> select POOL, NAME, BYTES from V$SGASTAT where CON_ID = '&con_id';
SQL> select CON_ID, POOL, sum(bytes) from  v$sgastat
group by CON_ID, POOL order by  CON_ID, POOL;

Q. Can I monitor PGA usage on a PDB by PDB basis?

A. select CON_ID, sum(PGA_USED_MEM), sum(PGA_ALLOC_MEM), sum(PGA_MAX_MEM)
from  v$process group by CON_ID order by  CON_ID;

alter session set container = CDB$ROOT;
select NAME, value from v$sysstat where NAME like 'workarea%';

alter session set container = ;
select NAME, value from v$sysstat where NAME like 'workarea%';

Oracle 12c Multitenant – My Top Ten PDB Questions Part I

Q. What are my options for connecting to a Pluggable Database ?

A. Connect to root, then
SQL> alter session set container = nisha;

Database connection using easy connect

Ex: CONNECT username/password@host[:port][/service_name][:server][/instance_name]

$ sqlplus test1/test123@//localhost/nisha

Database connection using a net service name:

Example TNSNAMES.ora:
=======
LISTENER_CDB1 =
  (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))

CDB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = cdb1)
    )
  )

nisha =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = nisha)
    )
  )

Q. How do I switch to the main (root) container database ?

A. SQL> ALTER SESSION SET CONTAINER = CDB$ROOT;

Q. How do I determine which PDB or CDB I am currently connected to ?

A.
SQL> show con_name

CON_NAME
------------------------------
nisha

OR

SQL> select sys_context('Userenv', 'Con_Name') "Container DB" from dual;

Container DB
--------------------------------------------------------------------------------
nisha

Q. What are my options to migrate an existing pre 12.1 database to 12c Multi-tenant database ?

A.
Option 1.
• Upgrade an existing pre 12.1 database to 12.1
• Plug-in the database post upgrade into a CDB
Option 2.
• Create a staged PDB based on PDB$SEED
• Use Data Pump or GoldenGate replication to migrate a database into a PDB
 

Q. How do I know if my database is Multitenant or not ?

A. From the root container execute:

SQL> select NAME, DECODE(CDB, 'YES', 'Multitenant Option enabled', 'Regular 12c Database') "Multitenant Option ?", OPEN_MODE, CON_ID from V$DATABASE;

NAME       Multitenant Option ?           OPEN_MODE            CON_ID
---------  -----------------------------  -------------------  ------
CDBJIVE    Multitenant Option enabled     MOUNTED                   0
 

Q. How do I know what pluggable databases I have in this container database ?

SQL> select CON_ID, NAME, OPEN_MODE from V$PDBS;

    CON_ID NAME                           OPEN_MODE
---------- ------------------------------ ----------
         2 PDB$SEED                       READ ONLY
         3 PDB1                           MOUNTED
         4 PDB2                           MOUNTED
         5 PDB3                           MOUNTED
         6 PDB4                           MOUNTED
         7 PDB5                           MOUNTED
         8 nisha                          MOUNTED
         9 nitin                          MOUNTED
 ...
 

Q. What do the different container IDs signify?

A. CON_ID "0" means the data does not pertain to any particular container but to the CDB as a whole. For example, a row returned by fetching from V$DATABASE pertains to the CDB and not to any particular container, so CON_ID is set to "0". A CONTAINER_DATA object can conceivably return data pertaining to various containers (including the root, which has CON_ID = 1) as well as to the CDB as a whole, and CON_ID in the row for the CDB will be set to 0.

The following table describes the various values of the CON_ID column in container data objects:
0 = The data pertains to the entire CDB
1 = The data pertains to the root
2 = The data pertains to the seed
3 – 254 = The data pertains to a PDB; each PDB has its own container ID.
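A quick way to see these container IDs in your own CDB is the V$CONTAINERS view (available from 12.1):

```sql
-- One row per container: root, seed, and each PDB
SQL> select con_id, name, open_mode from v$containers order by con_id;
```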
Q. Do I need separate SYSTEM, SYSAUX, temporary tablespaces, and undo for each of my PDBs ?

A. There is a separate SYSTEM and SYSAUX tablespace for the root and for each PDB. However, there is one default temporary tablespace for the entire CDB. But, you can create additional temporary tablespaces in individual PDBs. One active temporary tablespace is needed for a single-instance CDB, or one active temporary tablespace is needed for each instance of an Oracle RAC CDB. There is one active undo tablespace for a single-instance CDB. As with previous versions, for RAC [CDB], there is one active undo tablespace for each instance. Only a common user who has the appropriate privileges and whose current container is the root can create an undo tablespace.

Q. Can I specify a separate default tablespace for the CDB and for each PDB ?

A. Yes. You can specify a separate default tablespace for the root and for each PDB

Q. Does the CDB contain any user data ?

A. No. All user data is in the PDBs. The root contains no user data or minimal user data. This makes unplugging very streamlined.

Q. Does a Pluggable database support a separate database character set ? 

A. A CDB uses a single character set. All of the PDBs in the CDB use this character set.
 
Q. Are there specific Net files in a Pluggable database environment ? 

A. There is a single listener.ora, tnsnames.ora, and sqlnet.ora file for an entire CDB. All of the PDBs in the CDB use these files.
 

Q. Can I change init.ora parameters for my PDB ?

A. For Multitenant there are two groups of parameters: those that can be modified within a PDB and those that can only be set at the CDB level.
The following query helps to determine which:

SELECT NAME FROM V$SYSTEM_PARAMETER WHERE ISPDB_MODIFIABLE='TRUE' ORDER BY NAME;

 

[Re-] Imaging Exadata with 12.1.2.1.1


Introduction
This document, based on Doc ID 1991376.1, specifically addresses re-imaging Exadata systems to 12.1.2.1.1, using standard (non-OVM) deployment via USB boot.  In this scenario, a new Exadata system was delivered with the 12.1.2.1.0 image; however, the client decided to implement the latest image (12.1.2.1.1) before OEDA was executed.

Scenario
The purpose of this doc is to build on the base doc and include resolutions to issues we ran into.  This doc applies to Oracle Exadata Storage Server Software version 12.1.2.1.0 and later.  In our deployment, we re-imaged a new X5-2 that shipped with 12.1.2.1.0, using version 12.1.2.1.1 released in April.

This paper describes the steps necessary to image or re-image Exadata DB and cell nodes using the USB method.  Doc ID 1991376.1 covers the other available options. This document assumes that the Exadata has been powered on and a hardware validation has been performed by an Oracle Field Engineer. Prior to imaging, the administrator should obtain and execute the latest OEDA to generate the configuration files (preconf.csv).

Procedure
The following are preparatory steps before imaging the Exadata:

1.     The generated preconf.csv will need to be copied to the DB node where the image file will be generated.  This should be node1 (the first DB node)

2.     Connect to the first DB node by connecting a laptop to the ILOM Serial Management port.  It is important to connect to serial RS232 port (115200,8,N,1) as well as the VGA port.  We encountered issues where log output was not always displayed to tty1.

3.     In ILOM, obtain the eth0 MAC address for each DB node.

Connect to /SYS/MB/NET0 and execute "show fru_macaddress":

> cd /SYS/MB/NET0
/SYS/MB/NET0

-> show fru_macaddress

/SYS/MB/NET0
Properties:
fru_macaddress = 00:10:e0:6f:b2:aa

node1 compute node macaddress – 00:10:e0:6f:8e:e6
node2 compute node macaddress – 00:10:e0:6f:b2:aa
node1 cell node macaddress – 00:10:e0:62:b8:0a
node2 cell node macaddress – 00:10:e0:71:e8:ac
node3 cell node macaddress – 00:10:e0:72:06:b8

4.     In the preconf.csv file, add the MAC addresses in CAPS in the appropriate place for each node (this is the 7th field, the first set of empty fields in the OEDA preconf.csv ",,")

5.     Download the ImageMaker files.  This kit is publicly available on the edelivery.oracle.com site under "Oracle Database Products > Linux x86-64".  Note, there are separate ImageMaker files for DB nodes and cells. See MOS note 1306961.1 for more information on downloading the files.

6.     After download, unzip the first file "V75080-01.zip", then unzip and untar the contents:
# unzip V75080-01.zip
# unzip cellImageMaker_12.1.2.1.1_LINUX.X64_150316.21.x86_64.zip
# tar pxvf cellImageMaker_12.1.2.1.1_LINUX.X64_150316.21.x86_64.tar

7.     Insert a blank USB device in the slot of the DB node where the ImageMaker was unpacked.

8.     Use makeImageMedia.sh to build the kernel, initrd and image files for the USB device.  The makeImageMedia.sh script provided with the Storage Cell and Linux Database Host images is available on the Oracle Software Delivery Cloud (http://edelivery.oracle.com).  The following are the recommended makeImageMedia.sh options for building USB bootable media:

./makeImageMedia.sh -factory -stit reboot-on-success -nodisktests \
  -dhcp -preconf '/tmp/preconf.csv'

9.     Repeat steps 6-8 to run makeImageMedia.sh for the cell nodes, using a different USB device

10.   Once the USB device has been formatted as bootable, place it into the slot of the server to be imaged (ensure the image is for the correct node type, DB node or cell).  Note, do not issue a Linux 'mount' against the device; ensure that the device is not mounted in Linux

11.  Reboot the DB node or cell to start the imaging process from USB.  Note that regardless of cell or DB node, the boot image message states CELLUSB_INSTALL_OS.  At this point the internal disk of the DB or cell node will be loaded with the new image.   This process took 20 minutes on an X5 cell node and 15 minutes on a DB node.  Note, because we specified reboot-on-success, it will reboot twice

12.   To validate that the nodes are imaged correctly, run imageinfo -all

13.  Run ibhosts to verify that all nodes are visible on the InfiniBand fabric

14.  New Exadata systems with Exadata release 12.1.2.1.0 or higher will not have a hot spare available. The reclaimdisks.sh script will be used to reclaim disk space from the RAID5 volume group to produce a hot spare.  This script should only be executed after imaging all the DB and cell nodes

Do not skip this step. Skipping this step will result in unused space that can no longer be reclaimed by reclaimdisks.sh.

The following steps must be completed on ALL database servers before running the OEDA installation tool:

# /opt/oracle.SupportTools/reclaimdisks.sh check
# /opt/oracle.SupportTools/reclaimdisks.sh free reclaim

15.  On successful re-imaging, the next step is to execute OEDA deployment. This is not covered in this document.

Installing Oracle Linux 6.6, Prepping Linux OS for Oracle Database 12c Install, and Installing Oracle Database 12c


I know it seems simple to install Linux and Oracle Database. But I felt a need to standardize on how we do it internally at Viscosity, so I provided this little note to our DBA team to build out a new Oracle Database 12c on an Oracle Linux 6.6 server.  There are also options for configuring for VMware.  PS, thanks to the EMC folks for helping out with the hardware config.

In this example, we will be implementing 12c Grid Infrastructure/Automatic Storage Management and creating two disk groups (DATA and RECO) using four disks each.

Installing Oracle Linux 6.6
1. Insert the Oracle Linux 6.6 DVD into the server, and boot to it.
2. Select Install or upgrade an existing system
3. Skip
4. In the opening splash screen, select Next.
5. Choose the language you wish to use, and click Next.
6. Select the keyboard layout, and click Next.
7. Select Basic Storage Devices, and click Next.
8. Select Fresh Installation, and click Next.
9. Insert the hostname, and select Configure Network.
10. In the Network Connections menu, configure network connections.
11. After configuring the network connections, click Close.
12. Click Next.
13. Select the nearest city in your time zone, and click Next.
14. Enter the root password, and click Next.
15. Select Use All Space, and click Next.
16. When the installation prompts you to confirm that you are writing changes to the disk, select Write changes to disk.
17. Select Software Basic Server, and click Next. Oracle Linux installation begins.
18. When the installation completes, select Reboot to restart the server.

Initial configuration tasks
Complete the following steps to provide the functionality that Oracle Database requires. We performed all of these tasks as root.
Disable firewall services. In the command line (as root), type:
 # service iptables stop
# chkconfig iptables off
# service ip6tables stop
# chkconfig ip6tables off

Set SELinux to permissive mode:
 # vi /etc/selinux/config

SELINUX=permissive

Modify /etc/hosts to include the IP address of the internal IP and the hostname.
Edit 90-nproc.conf:

# vi /etc/security/limits.d/90-nproc.conf
Change this:
* soft nproc 1024

To this:
* - nproc 16384
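The edit above is easy to script; a minimal sketch that rewrites the default nproc line (it operates on $NPROC_CONF, defaulting to a local copy, so you can dry-run it first; on a real server set NPROC_CONF=/etc/security/limits.d/90-nproc.conf):

```shell
# Rewrite the default "* soft nproc 1024" entry to a combined soft+hard
# ("-") limit of 16384. NPROC_CONF defaults to a local file for a dry run;
# on a real OL6 server point it at /etc/security/limits.d/90-nproc.conf.
NPROC_CONF=${NPROC_CONF:-./90-nproc.conf}

# Seed the stock default if the file does not exist (illustration only).
[ -f "$NPROC_CONF" ] || printf '* soft nproc 1024\n' > "$NPROC_CONF"

# Keep a .bak copy, then replace the soft limit line in place.
sed -i.bak 's/^\* *soft *nproc *1024$/* - nproc 16384/' "$NPROC_CONF"

grep nproc "$NPROC_CONF"
```

Check the grep output before rebooting or re-logging in; the backup copy lands next to the file with a .bak suffix.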

Install 12c RPM packages, resolve package dependencies, and modify kernel parameters:                                        

# yum install -y oracle-rdbms-server-12cR1-preinstall

Install automatic system tuning for database storage through yum:
 # yum install tuned
# chkconfig tuned on
# tuned-adm profile enterprise-storage

Using yum, install the following prerequisite packages for Oracle Database:
 # yum install elfutils-libelf-devel
# yum install xorg-x11-server-utils   (provides xhost)
# yum install unixODBC
# yum install unixODBC-devel
# yum install oracleasm-support oracleasmlib oracleasm

Add the supplemental groups for the oracle user (the oracle user and the oinstall and dba groups were already created by the preinstall RPM above) and set the password:

# groupadd -g 6003 oper
# groupadd -g 6004 asmadmin
# groupadd -g 6005 asmdba
# groupadd -g 6006 asmoper
# usermod -G dba,oper,asmadmin,asmdba,asmoper oracle
# passwd oracle

Create the /u01 directory for the Oracle inventory and software, and give ownership of it to the oracle user:

# mkdir -p /u01/app/oracle/product/12.1.0/grid
# mkdir -p /u01/app/oracle/product/12.1.0/dbs
# chown -R oracle:oinstall /u01
# chmod -R 775 /u01

Optionally, edit bash profiles to set up user environments:
# vim /home/oracle/.bash_profile

Adding the four data and four log drives to the VM and modifying the vmx file
1. Power off the VM.
2. Right-click the VM in the vSphere Web Client, and choose Edit Settings…
3. Click on the VM Options tab, and expand the Advanced menu option.
4. Choose Edit Configuration…
5. Click Add Row, and enter disk.EnableUUID in the parameter field and TRUE in the value field.
6. Go back to the Virtual Hardware tab.
7. Click the drop-down menu for New device, and choose New Hard Disk.
8. Name the Hard Disk and choose the size that you want it to be.
9. Repeat steps 7 and 8 for all remaining drives.
10. Click OK.
11. Power the VM back on.

Configuring disks for ASM
For each of the eight shared disks (sdc through sdj in this example, matching the ASM disk commands below), create a GPT label and one partition. For example, with the following shell script:
for disk in sdc sdd sde sdf sdg sdh sdi sdj; do
  parted -s /dev/$disk mklabel gpt
  parted -s /dev/$disk mkpart primary 1 100%
done

If desired, label the disk’s partition with its Oracle function. For example:
# parted /dev/sdc name 1 DATA1
# parted /dev/sdd name 1 DATA2
# parted /dev/sde name 1 DATA3
# parted /dev/sdf name 1 DATA4
# parted /dev/sdg name 1 LOG1
# parted /dev/sdh name 1 LOG2
# parted /dev/sdi name 1 LOG3
# parted /dev/sdj name 1 LOG4

Initialize Oracle ASM on each server by executing the following commands as root on each node.
# oracleasm init
# oracleasm configure -e -u grid -g oinstall -s y -x sda

Label each shared disk-partition with an appropriate ASM name. For example, following the OS partition names
created above, execute the following commands on one system:
# oracleasm createdisk DATA1 /dev/sdc1
# oracleasm createdisk DATA2 /dev/sdd1
# oracleasm createdisk DATA3 /dev/sde1
# oracleasm createdisk DATA4 /dev/sdf1
# oracleasm createdisk LOG1 /dev/sdg1
# oracleasm createdisk LOG2 /dev/sdh1
# oracleasm createdisk LOG3 /dev/sdi1
# oracleasm createdisk LOG4 /dev/sdj1
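The label-to-partition pairing above is easy to fat-finger when typed by hand; a small sketch (device names are this example's, adjust to yours) that generates the createdisk commands from one list so the mapping lives in exactly one place:

```shell
# Emit the oracleasm createdisk commands for a fixed label->partition map.
# Review the output, then pipe it to sh as root on one node.
gen_createdisk_cmds() {
  i=1
  for part in sdc1 sdd1 sde1 sdf1; do
    echo "oracleasm createdisk DATA$i /dev/$part"
    i=$((i + 1))
  done
  i=1
  for part in sdg1 sdh1 sdi1 sdj1; do
    echo "oracleasm createdisk LOG$i /dev/$part"
    i=$((i + 1))
  done
}

gen_createdisk_cmds
```

For example, `gen_createdisk_cmds | sh` as root would execute the eight commands after you have eyeballed the output.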

On each server, scan the disks to make the disks immediately available to Oracle ASM.
 # oracleasm scandisks
# oracleasm listdisks

Installing Oracle Grid Infrastructure 12c
1. Log in as the oracle user.
2. Unzip linuxamd64_12c_grid_1of2.zip and linuxamd64_12c_grid_2of2.zip
3. Open a terminal to the unzipped database directory.
4. Type grid_env to set the Oracle grid environment.
5. To start the installer, type ./runInstaller
6. At the Updates screen, select Skip updates.
7. In the Select Installation Option screen, select Install and Configure Grid Infrastructure for a Standalone Server, and click Next.
8. Choose the language, and click Next.
9. In the Create ASM Disk Group screen, choose the Disk Group Name, and change redundancy to External.
10. Select the four disks that you are planning to use for the database, and click Next.
11. In the Specify ASM Password screen, choose Use same password for these accounts, write the passwords for the ASM users, and click Next.
12. Leave the default Operating System Groups, and click Next.
13. Leave the default installation, and click Next.
14. Leave the default inventory location, and click Next.
15. Under Root script execution, select Automatically run configuration scripts and enter root credentials.
16. In the Prerequisite Checks screen, make sure that there are no errors.
17. In the Summary screen, verify that everything is correct, and click Finish to install Oracle Grid Infrastructure.
18. At one point during the installation, the installation prompts you to execute two configuration scripts as root.

Follow the instructions to run the scripts.
19. At the Finish screen, click Close.
20. To run the ASM Configuration Assistant, type asmca.
21. In the ASM Configuration Assistant, click Create.
22. In the Create Disk Group window, name the new disk group log, choose redundancy External (None), and select the four disks for redo logs.
23. Click Advanced Options, and type 12.1.0.0.0 in ASM Compatibility and Database Compatibility. Click OK.
24. Right-click the DATA drive, and choose Edit Attributes. Make sure both ASM and Database Compatibility fields list 12.1.0.0.0, and click OK.
25. Exit the ASM Configuration Assistant.

Installing Oracle Database 12c
1. Unzip linuxamd64_12c_database_1_of_2.zip and linuxamd64_12c_database_2_of_2.zip.
2. Open a terminal to the unzipped database directory.
3. Type db_env to set the Oracle database environment.
4. Run ./runInstaller.
5. Wait for the GUI installer to load.
6. On the Configure Security Updates screen, enter the credentials for My Oracle Support. If you do not have an account, uncheck the box I wish to receive security updates via My Oracle Support, and click Next.
7. At the warning, click Yes.
8. On the Download Software Updates screen, enter the desired update option, and click Next.
9. On the Select Installation Option screen, select Install database software only, and click Next.
10. On the Grid Installation Options screen, select Single instance database installation, and click Next.
11. On the Select Product Languages screen, leave the default setting of English, and click Next.
12. On the Select Database Edition screen, select Enterprise Edition, and click Next.
13. On the Specify Installation Location, leave the defaults, and click Next.
14. On the Create Inventory screen, leave the default settings, and click Next.
15. On the Privileged Operating System groups screen, keep the defaults, and click Next.
16. Allow the prerequisite checker to complete.
17. On the Summary screen, click Install.
18. Once the Execute Configuration scripts prompt appears, ssh into the server as root, and run the following command:

# /home/oracle/app/oracle/product/12.1.0/dbs/root.sh
19. Return to the prompt, and click OK.
20. Once the installer completes, click Close.

Data Domain discussion and Notes

The following are notes from our discussions w/ EMC on deploying Data Domain w/ RMAN at client site:

The process flow is as follows:
Clients -> eth -> Media/Master Server (includes OST) -> SAN -> DD

The OST plugin first sends a request to the DD server/appliance to validate the hash of the data packet. DD uses variable segment size to check this, it is not fixed block. If DD has already seen it, the full data packet is discarded and the block’s expiration is updated on the Data Domain. This is where Boost comes in handy, as it does it inline/in-band, reducing roundtrip acks. However, when DD does not have a hit; i.e., doesn’t find a duplicate block already stored, the media server sends a second packet with all of the data. This could mean twice the number of data packets if your data does not dedupe well, which leads to poor performance and high CPU utilization all around.

If you don’t have a lot of duplicate data, the media server can take longer to process the same amount of data, and it still has to send the data anyway, negating the use of Boost. Keep the following in mind: each backup stream is processed in one thread (typically there are 4 to 8 threads per core), so stream speed is limited by processor core speed; we typically see CPU usage at 13%.

The following were key points/considerations:

Use an RMAN catalog
Use RMAN duplicate or clone when cloning databases. If using backup/restore for cloning, make sure to use 'nid' to get a new db incarnation
Don't use RMAN compression
RMAN: 2 channels per core -> define how many cores you want to consume
RMAN Maxopenfiles = 4
RMAN Filesperset = 1 <- a must for dedupe to work correctly
Jumbo Frames
dNFS - 4 x 1GbE links or 1 x 10GbE

Example RMAN command:

RUN
{
CONFIGURE DEFAULT DEVICE TYPE TO SBT_TAPE; # default
CONFIGURE DEVICE TYPE SBT_TAPE BACKUP TYPE TO BACKUPSET;
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE'
PARMS 'SBT_LIBRARY=/u01/app/oracle/product/11.2.0.4/dbhome/lib/libddobk.so, ENV=(STORAGE_UNIT=storageunitname,BACKUP_HOST=ddomain406.viscosityn.com,ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome)';
ALLOCATE CHANNEL c1 TYPE SBT_TAPE
PARMS 'SBT_LIBRARY=/u01/app/oracle/product/11.2.0.4/dbhome/lib/libddobk.so, ENV=(STORAGE_UNIT=storageunitname,BACKUP_HOST=ddomain406.viscosityn.com,ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome)';
send 'set username ddboostusername password XXXX servername ddomain406.viscosityn.com';
RELEASE CHANNEL c1;
}


RUN {
CONFIGURE DEFAULT DEVICE TYPE TO SBT_TAPE; # default
CONFIGURE DEVICE TYPE SBT_TAPE BACKUP TYPE TO BACKUPSET;
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE'
PARMS 'SBT_LIBRARY=${ORACLE_HOME}/lib/libddobk.so, ENV=(STORAGE_UNIT=storageunitname,BACKUP_HOST=ddomain406.viscosityna.com,ORACLE_HOME=${ORACLE_HOME})';
ALLOCATE CHANNEL c1 TYPE SBT_TAPE
PARMS 'SBT_LIBRARY=${ORACLE_HOME}/lib/libddobk.so, ENV=(STORAGE_UNIT=storageunitname,BACKUP_HOST=ddomain406.viscosityna.com,ORACLE_HOME=${ORACLE_HOME})';
ALLOCATE CHANNEL c2 TYPE SBT_TAPE
PARMS 'SBT_LIBRARY=${ORACLE_HOME}/lib/libddobk.so, ENV=(STORAGE_UNIT=storageunitname,BACKUP_HOST=ddomain406.viscosityna.com,ORACLE_HOME=${ORACLE_HOME})';
ALLOCATE CHANNEL c3 TYPE SBT_TAPE
PARMS 'SBT_LIBRARY=${ORACLE_HOME}/lib/libddobk.so, ENV=(STORAGE_UNIT=storageunitname,BACKUP_HOST=ddomain406.viscosityna.com,ORACLE_HOME=${ORACLE_HOME})';
ALLOCATE CHANNEL c4 TYPE SBT_TAPE
PARMS 'SBT_LIBRARY=${ORACLE_HOME}/lib/libddobk.so, ENV=(STORAGE_UNIT=storageunitname,BACKUP_HOST=ddomain406.viscosityna.com,ORACLE_HOME=${ORACLE_HOME})';
ALLOCATE CHANNEL c5 TYPE SBT_TAPE
PARMS 'SBT_LIBRARY=${ORACLE_HOME}/lib/libddobk.so, ENV=(STORAGE_UNIT=storageunitname,BACKUP_HOST=ddomain406.viscosityna.com,ORACLE_HOME=${ORACLE_HOME})';
ALLOCATE CHANNEL c6 TYPE SBT_TAPE
PARMS 'SBT_LIBRARY=${ORACLE_HOME}/lib/libddobk.so, ENV=(STORAGE_UNIT=storageunitname,BACKUP_HOST=ddomain406.viscosityna.com,ORACLE_HOME=${ORACLE_HOME})';
backup filesperset 1 all database format '%d_DATABASE_%T_%t_s%s_p%p' tag '${ORACLE_DB} database backup';
backup current controlfile format '%d_controlfile_%T_%t_s%s_p%p' tag '${ORACLE_DB} Controlfile backup';
backup spfile format '%d_spfile_%T_%t_s%s_p%p' tag '${ORACLE_DB} Spfile backup';
release channel c1;
release channel c2;
release channel c3;
release channel c4;
release channel c5;
release channel c6;
}

While this backup is running, you can monitor performance from the Data Domain side using the following command:
# ddboost show stats interval 5 count 100

To inspect storage statistics, e.g., compression/dedupe rates:
# ddboost storage-unit show compression

Final thoughts

We were told that the latency should not exceed 15 ms, even for routed traffic (enterprise DD); also make sure the path is not heavily routed.

We used DD Management Center; it proved quite useful for monitoring and trending.

Customer had a DD2200 – observations & considerations:
We were able to get 20 TB/hr
One channel == 1 thread

Watch out for Solaris with multi-threaded backups; there are very weird scheduling issues with CMT systems (16 threads per core)
Best to test without Boost, then with Boost, to determine the overall benefit of DD Boost, especially for databases smaller than 300 GB
Use a hard mount for non-Boost
Don't back up archive logs if you can avoid it
Set up ifgroups for layer-3 aggregation

What should you ask your All Flash Array Vendor

I was just traveling back from a client, where the customer had just bought into the concept of an all-flash array for their database and VDI workloads. They had asked me to help out where I can. So I started pondering the things this customer (or any customer/buyer) should think through…

At first I was going to do a comparative analysis (table) of the all-flash arrays on the market. However, since the AFA market is constantly changing anyway, why bother with a comparison?

Thus, I changed my approach to aiding the buyer/architect in positioning the appropriate questions to the vendor; the approach became more of a “what to consider when considering” guide for purchasing an AFA.

Now note, I’m not stating some earth-shattering thought leadership here or a new dimension of looking at this issue; I’m merely sharing what I was going to present to the customer.

Anyway, as with most storage decisions, it’s very hard to bucket considerations into the performance, cost, and manageability categories, because they are so intertwined. Also, I specifically did not address cost separately, since cost traverses every layer and topic, whether cost-performance, cost-supportability, or feature-cost usage.

1. Performance is king! We know AFA performance is awesome, but think through and ask the following:

a. How does the AFA fare with differing workloads; i.e., varying degrees of sequential vs. random I/O, and read/write ratios of 80/20, 70/30, and 50/50? And especially, how does it fare when the array is near capacity (70% or 80% full)?

b. How is garbage collection handled? Is it ASIC/SSD-based or controller-based? Regardless, the buyer shouldn’t have to understand the bowels of garbage collection, so the question to the vendor should simply be about performance consistency, or better stated, “consistency of performance,” specifically during steady-state/peak workloads and during flash maintenance operations (garbage collection, flash overwrites, wear leveling, etc.).

c. I wasn’t sure if I should even add this entry, but for completeness I will. AFAs on the market today use a type or combination of flash media: SLC, MLC (cMLC), eMLC, etc. As with the above, buyers should not concern themselves with this level of detail, but they should ascertain the performance to expect. This item really belongs in the cost category: cost per IOP, cost per GB, etc.

2. Manageability

a. How does the array handle non-disruptive upgrades (NDU)? AFAs occasionally need patches, updates, and even field-replaceable-unit changes, so you need to determine the impact of making these changes; i.e., is it an online transparent change, an online change with a reboot (outage), or a disruptive change? For example, how is an AFA OS patch handled, and how are SSD firmware changes handled?

b. Scalability – What I mean here is really AFA expansion without disruption. Ask whether you can add another array, another set of controllers, etc., without having to export the array’s data contents, add in the new array, and load the data back. It should be mainframe-class scale.

c. Storage array simplicity – How usable are the GUI tools for managing operational array tasks, e.g., creating volumes, measuring performance, gauging the effectiveness of data services, and alert notification on failing components?

3. Features (Data Services) – By now most AFAs incorporate snapshots, replication, compression, and of course deduplication. But the real questions are: what is the impact of using these services concurrently, can features be used selectively (by LUN/volume), and what is the overall performance impact of these services?

This should just get you started on the things to think through!

blktrace basics

Life of an I/O

Once a user issues an I/O request, the I/O enters the block layer… then the magic begins:

1. The I/O request is remapped atop the underlying logical/aggregated device (MD, DM). Depending on alignment, size, etc., the request may be split into two separate I/Os
2. The request is added to the request queue
3. Or it is merged with a previous entry on the queue (all I/Os end up on a request queue at some point)
4. The I/O is issued to a device driver and submitted to the device
5. Later, the I/O is completed by the device and posted by its driver

btt is a Linux utility that provides an analysis of the amount of time the I/O spent in the different areas of the I/O stack.

btt requires that you run blktrace first. Invoke blktrace specifying whatever devices and other parameters you want. You must save the traces to disk in this step; in its current state, btt does not work in live mode.

After tracing completes, run blkrawverify, specifying all devices that were traced (or at least all devices that you will use with btt).

If blkrawverify finds errors in the saved trace streams, it is best to recapture the data.

Run blkparse with the -d option specifying a file to store the combined binary stream. (e.g.: blkparse -d bp.bin …).

blktrace produces a series of binary files containing parallel trace streams – one file per CPU per device. blkparse provides the ability to combine all the files into one time-ordered stream of traces for all devices.
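The capture, verify, combine, and analyze steps above can be strung together; a sketch wrapped in a function (device name, output basenames, and the 30-second capture window are illustrative; blktrace must run as root):

```shell
# Capture -> verify -> combine -> analyze, per the btt workflow described
# in the text. Guarded so it no-ops cleanly where the tools are missing.
run_btt_workflow() {
  DEV=${1:-/dev/sda}
  if [ "$(id -u)" -ne 0 ] || ! command -v blktrace >/dev/null 2>&1; then
    echo "run_btt_workflow: needs root and blktrace; skipping"
    return 0
  fi
  blktrace -d "$DEV" -o trace -w 30 &&  # 30 seconds of traces, saved to disk
  blkrawverify trace &&                 # check the per-CPU trace files for errors
  blkparse -i trace -d bp.bin &&        # combine into one time-ordered binary stream
  btt -i bp.bin                         # report the Q2I/I2D/D2C/Q2C breakdown
}
```

For example, `run_btt_workflow /dev/sdb` as root captures half a minute of activity on sdb and prints the btt timing summary.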

Here are some guidelines on the key indicators in the btt output:

Q — A block I/O is Queued
G — Get Request

A newly queued block I/O was not a candidate for merging with any existing request, so a new block layer request is allocated.

M — A block I/O is Merged with an existing request.
I — A request is Inserted into the device’s queue.
D — A request is issued to the Device.
C — A request is Completed by the driver.
P — The block device queue is Plugged, to allow the aggregation of requests.
U — The device queue is Unplugged, allowing the aggregated requests to be issued to the device

Metrics of an I/O
Q2I – the time it takes to process an I/O prior to its being inserted or merged onto a request queue (includes split and remap time)

I2D – the time the I/O is “idle” on the request queue

D2C – the time the I/O is “active” in the driver and on the device

Q2I + I2D + D2C = Q2C
Q2C: the total processing time of the I/O

The latency data files which can be optionally produced by btt provide per-IO latency information, one for total IO time (Q2C) and one for latencies induced by lower layer drivers and devices (D2C).

In both cases, the first column (X values) represent runtime (seconds), while the second column (Y values) shows the actual latency for a command at that time (either Q2C or D2C).
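Given that two-column layout, summarizing a latency file is a one-liner; a sketch (the sample filename and the `btt -l` prefix are illustrative):

```shell
# Summarize the latency column (column 2) of a btt per-IO latency file,
# e.g. one produced with "btt -i bp.bin -l d2c". Prints count, average,
# and maximum latency in seconds.
summarize_latency() {
  awk '{ sum += $2; n++; if ($2 > max) max = $2 }
       END { if (n) printf "ios=%d avg=%.6f max=%.6f\n", n, sum / n, max }' "$1"
}
```

For example, `summarize_latency sys_iop0_d2c.dat` gives a quick feel for driver/device latency before digging into the full btt report.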

Adding Roles to a Windows 2012 Server

There are two methods to add roles/features to a Windows 2012 Server: the Add Roles Wizard, and the Add-WindowsFeature PowerShell cmdlet.

The Add Roles Wizard simplifies how you install roles on the server. This is quite different from the way it was done in Windows Server 2008, where admins had to run Add or Remove Windows Components multiple times to install all of the roles, role services, and features needed on a server.

The Add Roles Wizard lets you install multiple roles at one time. Server Manager replaces Add or Remove Windows Components, and a single session in the Add Roles Wizard can complete the configuration of the server; in addition, it verifies that all the software components required by a role are installed. If necessary, the wizard prompts you to approve the installation of other roles, role services, or software components that are required by the roles you select.
Most roles and role services that are available for installation require you to make decisions during the installation process that determine how the role operates in your environment.

To install roles and features in Windows Server 2012, you can also use the ServerManagerCmd.exe tool or the Add-WindowsFeature cmdlet in Windows PowerShell.
The following video walks through the process flow
AddPhyDisk_2_Pool_WinSrv2012.mp4

Oracle Inventory and what it means to you

Whilst presenting on Exadata patching, an interesting question, “What is the Inventory?”, came up at the break. This individual was new to Oracle, so I spent some time going over what the Oracle Inventory is. Here is the brain dump captured 🙂

The Oracle [Software] Inventory is the mechanism that manages the software library on a server, a.k.a. an Oracle system. An Oracle system is a server or node where Oracle software is installed. Key parts of this inventory machinery are the OUI, opatch, and the inventory files. We will touch on some of these topics.

The Oracle Inventory is the XML file that is stored and managed when the OUI installer (runInstaller from the media) is used to install an Oracle product set.

On each Oracle system there is a Central Inventory and a Local Inventory. This structure is built when oraInstRoot.sh is executed.
The Central Inventory is at the location defined by the /etc/oraInst.loc file (specifically the inventory_loc variable); it is typically installed at $ORACLE_BASE/oraInventory, and it defines, at a high level, the locations of the registered Oracle Homes (OH) on the system. The actual inventory file is named:
$ORACLE_BASE/oraInventory/ContentsXML/inventory.xml.

The invPtrLoc variable can be used during the runInstaller invocation to point to a specific location for the inventory file. However, if you do this, you must be aware that the same -invPtrLoc must also be passed to opatch.


[oracle@pdb12cgg app]$ cd oraInventory
[oracle@pdb12cgg oraInventory]$ ls
backup  ContentsXML  logs  oui


[oracle@pdb12cgg ContentsXML]$ cat inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2011, Oracle. All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>11.2.0.3.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGI12Home1" LOC="/u01/app/oracle/product/12.1.0/grid" TYPE="O" IDX="1"/>
<HOME NAME="OraDB12Home1" LOC="/u02/app/oracle/product/12.1.0/dbhome_1" TYPE="O" IDX="2"/>
<HOME NAME="OraHome1" LOC="/u02/app/oracle/product/12.1.2/oggcore_1" TYPE="O" IDX="3"/>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
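A quick way to list the registered homes is to pull the HOME elements out of inventory.xml; a grep/sed sketch (the function name is mine, and the path is this example's — an XML-aware tool like xmllint would be more robust, grep/sed just keeps it portable):

```shell
# Print "NAME LOC" for every registered Oracle Home in a Central
# Inventory file (ContentsXML/inventory.xml).
list_oracle_homes() {
  grep '<HOME NAME=' "$1" |
    sed 's/.*NAME="\([^"]*\)".*LOC="\([^"]*\)".*/\1 \2/'
}
```

For example, `list_oracle_homes /u01/app/oraInventory/ContentsXML/inventory.xml` would print the three homes shown in the excerpt above, one per line.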


The inventory includes the following key things: the HOME IDX key value (the index into the file for the registered OH), the location of the OH, and the tag name for the OH.


To attach (register into the Central Inventory) a cloned or copied OH, we do the following:
./runInstaller -silent -attachHome ORACLE_HOME="/u02/app/oracle/product/12.1.0/dbhome_1" ORACLE_HOME_NAME="DBHome12c"

Similarly we can use -detachHome to detach the OH


The Local Inventory lives inside each registered home; it is typically found at $ORACLE_HOME/inventory/ContentsXML/comps.xml and outlines the details of the software components and patch levels installed in that Oracle Home:
cat /u02/app/oracle/product/12.1.0/dbhome_1/inventory/ContentsXML/comps.xml

... an excerpt....

<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2013, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<PRD_LIST>
<TL_LIST>
<COMP NAME="oracle.server" VER="12.1.0.1.0" BUILD_NUMBER="0" REP_VER="0.0.0.0.0"
 RELEASE="Production" INV_LOC="Components/oracle.server/12.1.0.1.0/1/" LANGS="en
" XML_INV_LOC="Components21/oracle.server/12.1.0.1.0/" ACT_INST_VER="12.1.0.1.0"
 DEINST_VER="11.2.0.0.0" INSTALL_TIME="2013.Aug.19 18:01:58 CDT" INST_LOC="/u02/
app/oracle/product/12.1.0/dbhome_1/oracle.server">
   <EXT_NAME>Oracle Database 12c</EXT_NAME>
   <DESC>Installs an optional preconfigured starter database, product options, m
anagement tools, networking services, utilities, and basic client software for a
n Oracle Database server. This option also supports Automatic Storage Management
 database configuration.</DESC>
   <DESCID>COMPONENT_DESC</DESCID>
   <STG_INFO OSP_VER="10.2.0.0.0"/>
   <CMP_JAR_INFO>
      <INFO NAME="filemapObj" VAL="Components/oracle/server/v12_1_0_1_0/filemap.
xml"/>
      <INFO NAME="helpDir" VAL="Components/oracle/server/v12_1_0_1_0/help/"/>
      <INFO NAME="actionsClass" VAL="Components.oracle.server.v12_1_0_1_0.CompAc
tions"/>
      <INFO NAME="resourceClass" VAL="Components.oracle.server.v12_1_0_1_0.resou
rces.CompRes"/>

Here’s the confusing part: there is also an oraInst.loc file at $ORACLE_HOME/oraInst.loc in the OH too! The opatch utility uses this oraInst.loc to locate the inventory.

Here's how the Central and Local Inventories relate to each other:

From Central Inv:
<HOME NAME="OraDB12Home1" LOC="/u02/app/oracle/product/12.1.0/dbhome_1" TYPE="O" IDX="2"/>


Local Inv:
<DEP_LIST>
      <DEP NAME="oracle.rdbms" VER="12.1.0.1.0" DEP_GRP_NAME="Optional" <strong>HOME_IDX="2"</strong>/>
      <DEP NAME="oracle.options" VER="12.1.0.1.0" DEP_GRP_NAME="Optional" HOME_IDX="2"/>
      <DEP NAME="oracle.network" VER="12.1.0.1.0" DEP_GRP_NAME="Optional" HOME_IDX="2"/>
      <DEP NAME="oracle.rdbms.oci" VER="12.1.0.1.0" DEP_GRP_NAME="Optional" HOME_IDX="2"/>
      <DEP NAME="oracle.precomp" VER="12.1.0.1.0" DEP_GRP_NAME="Optional" HOME_IDX="2"/>
      <DEP NAME="oracle.xdk" VER="12.1.0.1.0" DEP_GRP_NAME="Optional" HOME_IDX="2"/>
      <DEP NAME="oracle.odbc" VER="12.1.0.1.0" DEP_GRP_NAME="Optional" HOME_IDX="2"/>
      <DEP NAME="oracle.sysman.ccr" VER="10.3.7.0.3" DEP_GRP_NAME="Optional" HOME_IDX="2"/>
      <DEP NAME="oracle.sysman.ccr.client" VER="10.3.2.1.0" DEP_GRP_NAME="Optional" HOME_IDX="2"/>\

The hierarchy and layout of the Oracle Inventory:

OH inventory

The oraclehomeproperties.xml file defines the OS architecture and the ARU ID. For those who may not know, the Automated Release Update (ARU) ID defines how the opatch utility will treat the patchset.

Clients of the Inventory include oraenv, opatch, backups, etc.

Here's an strace of oraenv; it illustrates that the inventory files are accessed:


strace -aefl oraenv

stat("/u02/app/oracle/product/12.1.0/dbhome_1/inventory/ContentsXML/oraclehomeproperties.xml", {st_mode=S_IFREG|0640, st_size=549, ...}) = 0
geteuid() = 500
getegid() = 501
getuid() = 500
getgid() = 501
access("/u02/app/oracle/product/12.1.0/dbhome_1/inventory/ContentsXML/oraclehomeproperties.xml", W_OK) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0
stat("/u02/app/oracle/product/12.1.0/dbhome_1/bin/orabase", {st_mode=S_IFREG|0755, st_size=4941164, ...}) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0
stat("/u02/app/oracle/product/12.1.0/dbhome_1/bin/orabase", {st_mode=S_IFREG|0755, st_size=4941164, ...}) = 0
geteuid() = 500
getegid() = 501
getuid() = 500
getgid() = 501
access("/u02/app/oracle/product/12.1.0/dbhome_1/bin/orabase", X_OK) = 0


I have had issues when we cloned and attached an OH.  Sometimes I do a health check on the inventory just to make sure everything is cool, here's what we do:

[oracle@pdb12cgg ContentsXML]$ $ORACLE_HOME/OPatch/opatch util LoadXML -xmlInput /u02/app/oracle/product/12.1.0/dbhome_1/inventory/ContentsXML/comps.xml
Oracle Interim Patch Installer version 12.1.0.1.0
Copyright (c) 2012, Oracle Corporation.  All rights reserved.

Oracle Home       : /u02/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u02/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version    : 12.1.0.1.0
OUI version       : 12.1.0.1.0
Log file location : /u02/app/oracle/product/12.1.0/dbhome_1/cfgtoollogs/opatch/opatch2014-09-01_21-44-23PM_1.log

Invoking utility "loadxml"
UtilSession: XML file is OK.
OPatch succeeded.


[oracle@pdb12cgg ContentsXML]$ $ORACLE_HOME/OPatch/opatch util LoadXML -xmlInput /u01/app/oraInventory/ContentsXML/inventory.xml
Oracle Interim Patch Installer version 12.1.0.1.0
Copyright (c) 2012, Oracle Corporation.  All rights reserved.

Oracle Home       : /u02/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u02/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version    : 12.1.0.1.0
OUI version       : 12.1.0.1.0
Log file location : /u02/app/oracle/product/12.1.0/dbhome_1/cfgtoollogs/opatch/opatch2014-09-01_21-45-31PM_1.log

Invoking utility "loadxml"
UtilSession: XML file is OK.

OPatch succeeded.



Backups

For key inventory operations (install, deinstall, clone, add node, attach home), the installer does an automated backup of the Central Inventory and Local Inventory (the entire directory for both) into oraInventory/backup/<time stamp>. Note that you cannot [easily] recover from a corrupted or destroyed inventory; for this reason, a restore from the backup is necessary.


[oracle@pdb12cgg oraInventory]$ ls
backup  ContentsXML  logs  oui
[oracle@pdb12cgg oraInventory]$ ls backup
2013-08-16_08-30-42AM  2013-08-19_05-20-26PM  2014-08-05_04-54-40PM

Exadata Monitoring and Agents – EM Plugin

To those who attended our Exadata Monitoring and Agents session: here are some answers and follow-up from the chat room.

The primary goal of the Exadata plugin is to digest the schematic file and validate the database.xml and catalog.xml files. If the pre-check runs without failure, then Discovery can be executed.

The agent runs only on the compute nodes and monitors all components remotely; i.e., no additional scripts/code are installed on the peripheral components. Agents pull component metrics and vitals using either ssh commands (user-equivalence-based commands) or subscribe to SNMP traps.

Note that there are always two agents deployed: the master does the majority of the work, and a slave agent “kicks in” if the master fails. Agents should be installed on all compute nodes.

Initially, the guided discovery wizard runs ASM kfod to get disk names and reads cellip.ora.

The components monitored via the Exadata-EM plugin include the following:
• Storage Cells

• InfiniBand switches (IB switches)
The EM agent runs remote ssh calls to collect switch metrics, and the IB switch sends SNMP traps (push) for all alerts. This collection requires ssh equivalence for nm2user, and includes various sensor data (fan, voltage, temperature) as well as port metrics.
The plugin does the following:
ssh nm2user@ ibnetdiscover

It reads the names of the components connected to the IB switch and matches the compute node hostnames to the hostnames used to install the agent.

• Cisco Switch
The EM agent runs remote SNMP get calls to gather metric data, including port status and switch vitals (e.g., CPU, memory, power, and temperature). In addition, performance metrics are also collected, e.g., ingress and egress throughput rates.

• PDU and KVM
For the PDUs, both active and passive PDUs are monitored. The agent runs SNMP get calls against each PDU; metric collection includes power, temperature, and fan status. The same steps and metrics apply to the KVM.

• ILOM targets
The EM agent executes remote ipmitool calls to each compute node’s ILOM target. This execution requires oemuser credentials to run ipmitool. The agent collects sensor data as well as configuration data (firmware version and serial number).

In EM 12.1.0.4, the key enhancements introduced include IB performance gathering, on-demand schematic refresh, and cell performance monitoring, as well as guided resolution for cell alerts and automated SNMP notification setup for Exadata Storage Servers and InfiniBand switches.

The agent discovers IB switches and compute nodes from ibnetdiscover output. KVM, PDU, Cisco, and ILOM discovery is performed via the schematic file on the compute node. Finally, the agent subscribes to SNMP for cells and IB switches; note that SNMP has to be manually set up and enabled on the peripheral components for SNMP push of cell alerts. The EM agent runs cellcli via ssh to obtain storage metrics, which requires ssh equivalence with the agent user.

In the latest version (as of this writing, 12.1.0.6), there were a number of key visualization and metrics enhancements. For example:

• CDB-level I/O Workload Summary with PDB-level details breakdown.
• I/O Resource Management for Oracle Database 12c.
• Exadata Database Machine-level physical visualization of I/O Utilization for CDB and PDB on each Exadata Storage Server. There is also a critical integration link to Database Resource Management UI.
• Additional InfiniBand Switch Sensor fault detection, including power supply unit sensors and fan presence sensors.
• Automatically push Exadata plug-in to agent during discovery.

Use fully qualified names with the agent; using shortened names will cause issues. If there are any issues with metrics gathering or the agent, the EMDiag Kit should be used to triage them. The kit includes scripts for diagnosing EM issues, specifically repvfy, agtvfy, and omsvfy. These tools diagnose issues with the OEM repository, the EM agents, and the Oracle Management Service (OMS), respectively.
To obtain the EMDiag Kit, download the zip file for the version that you need, per Oracle Support Note MOS ID# 421053.1:

export EMDIAG_HOME=/u01/app/oracle/product/emdiag
$EMDIAG_HOME/bin/repvfy install
$EMDIAG_HOME/bin/repvfy verify Exadata -level 9 -details

ASM Check script

Here's a little script from @racdba that does an ASM check when we go onsite:

#!/bin/ksh
HOST=`hostname`
ASM_OS_DEV_NM=/tmp/asmdevicenames.log
ASMVOTEDSK=/tmp/asm_votingdisks.log
GRID_HOME=`cat /etc/oratab | grep "+ASM" | awk -F ":" '{print $2}'`
ORACLE_HOME=$GRID_HOME
PATH=$ORACLE_HOME/bin:$PATH:
export GAWK=/bin/gawk

#
#
do_pipe ()
{
SQLP="$GRID_HOME/bin/sqlplus -s / as sysdba";
$SQLP |& # Open a coprocess pipe to SQL*Plus
print -p -- 'set feed off pause off pages 0 head off veri off line 500';
print -p -- 'set term off time off';
print -p -- 'set sqlprompt ""';

print -p -- 'select sysdate from dual;';
read -p SYSDATE;

print -p -- "select version from v\$instance;";
read -p ASM_VERSION;

print -p -- "select value from v\$parameter where name='processes';";
read -p ASM_PROCESS;

print -p -- "select value/1024/1024 from v\$parameter where name='memory_target';";
read -p ASM_MEMORY;

print -p -- "quit;";
sleep 5;
}
#
function get_asminfo {
for LUNS in `ls /dev/oracleasm/disks/*`
do
echo "ASMLIB disk: $LUNS"
asmdisk=`kfed read $LUNS | grep dskname | tr -s ' ' | cut -f2 -d' '`
echo "ASM disk: $asmdisk"
majorminor=`ls -l $LUNS | tr -s ' ' | cut -f5,6 -d' '`
dev=`ls -l /dev | tr -s ' ' | grep "$majorminor" | cut -f10 -d' '`
echo "Device path: /dev/$dev"
echo "----"
done

echo ""
echo "# ------------------------------------------------------------------ #";
/usr/sbin/oracleasm-discover;
}

function get_mem_info {
MEM=`free | $GAWK '/^Mem:/{ print int( ($2 / 1024 / 1024 + 4) / 4 ) * 4 }'`
SWAP=`free | $GAWK '/^Swap:/{ print int ( $2 / 1024 / 1024 + 0.5 ) }'`
HUGEPAGES=`grep HugePages_Total /proc/meminfo | $GAWK '{print $2}'`

echo "Physical Memory: $MEM GB | Swap: $SWAP GB"
echo "HugePages: $HUGEPAGES"
}

export ORACLE_SID=`cat /etc/oratab | grep "+ASM" | awk -F ":" '{print $1}'`
CHKPMON=`ps -ef | grep -v grep | grep pmon_$ORACLE_SID | awk '{print $8}'`
if [ -n "$CHKPMON" ]; then
do_pipe $ORACLE_SID
echo "# ------------------------------------------------------------------ #";
echo "HOSTNAME: ${HOST}"
echo "GRID HOME: ${GRID_HOME}"
echo "ASM VERSION: ${ASM_VERSION}"
echo "ASM PROCESSES: ${ASM_PROCESS}"
echo "ASM MEMORY: ${ASM_MEMORY} MB"
echo "# ------------------------------------------------------------------ #";
get_mem_info
echo "# ------------------------------------------------------------------ #";
else
echo "${ORACLE_SID} is not running."
fi

echo "# ------------------------------------------------------------------ #";
echo "LINUX VERSION INFORMATION:"
echo " "
[ -f "/etc/redhat-release" ] && cat /etc/redhat-release
[ -f "/etc/oracle-release" ] && cat /etc/oracle-release
uname -a
echo "# ------------------------------------------------------------------ #";

##SQLP="sqlplus -s / as sysdba";
##$SQLP <<! > $ASM_OS_DEV_NM
##set feed off pause off head on veri off line 500;
##set term off time off numwidth 15;
##set sqlprompt '';
##col label for a25
##col path for a55
##--select label,path,os_mb from v\$asm_disk;
##select label,os_mb from v\$asm_disk;
##exit;
##!

echo "ASM OS DEVICE INFORMATION:"
##cat $ASM_OS_DEV_NM
## Check for ASMLib
ASMLIBCHK=`rpm -qa | grep oracleasmlib`
if [[ -n $ASMLIBCHK ]]
then
echo "# ------------------------------------------------------------------ #";
echo "ASMLIB RPM: ${ASMLIBCHK}"
echo " "
##echo "ASM OS DEVICE INFORMATION:"
##echo " "
get_asminfo
else
echo "ASMLIB is NOT installed."
fi

echo "# ------------------------------------------------------------------ #";

## Check OCR/Voting disks
OCR=`$GRID_HOME/bin/ocrcheck | grep "Device/File Name" | awk '{print $4}'`
##echo " "
##echo "GRID HOME is located at ${GRID_HOME}."
echo "OCR LOCATION: ${OCR}"
echo "# ------------------------------------------------------------------ #";
echo " "

## Voting disk
$GRID_HOME/bin/crsctl query css votedisk > $ASMVOTEDSK

echo "VOTING DISK INFORMATION:"
echo " "
cat $ASMVOTEDSK
echo "# ------------------------------------------------------------------ #";

## Cleanup
if [[ -f $ASM_OS_DEV_NM ]]
then
rm $ASM_OS_DEV_NM
fi

if [[ -f $ASMVOTEDSK ]]
then
rm $ASMVOTEDSK
fi
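The get_mem_info function above reports HugePages_Total from /proc/meminfo. A related back-of-the-envelope check (not part of the original script, and assuming the common 2048 kB hugepage size) is how many hugepages a given SGA would need:

```shell
# Sketch: hugepages needed for a hypothetical 8 GB SGA, assuming a
# 2048 kB hugepage size (verify with: grep Hugepagesize /proc/meminfo).
sga_mb=8192
hugepage_kb=2048

# Round up: (SGA in kB) / (hugepage size in kB)
pages=$(( (sga_mb * 1024 + hugepage_kb - 1) / hugepage_kb ))
echo "vm.nr_hugepages = $pages"   # -> vm.nr_hugepages = 4096
```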

iSCSI and enable disks for ASM

Helpful tips for my iSCSI storage buds that want to enable/provision disks for ASM

# get initiator name of host or hosts
cat /etc/iscsi/initiatorname.iscsi

# create volume on san and present to host
log in to the san, create the volume, add the initiator(s) for access

# get wwid for each disk and update alias in multipath.conf
multipath -ll
or
scsi_id -g -u -s /block/sdd
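With the WWID in hand, the alias entry in /etc/multipath.conf looks roughly like the fragment below. The WWID shown is a made-up placeholder; substitute the value returned by multipath -ll or scsi_id.

```
multipaths {
    multipath {
        wwid  36000d31000000000000000000000dead   # placeholder WWID
        alias DATA18
    }
}
```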

# create partition on dev
fdisk /dev/mapper/DATA18

# add device map from partition table and verify it can be seen
kpartx -a /dev/mapper/DATA18
kpartx -l /dev/mapper/DATA18

# set partition alignment
echo "2048,," | sfdisk -uS /dev/mapper/DATA18

# create the asm disk
oracleasm createdisk DATA18 /dev/mapper/DATA18p1

#verify you can see disk
oracleasm scandisks
oracleasm listdisks

Want to modify ports on SCAN and node listeners? Think again

Considerations for setting parameters for SCAN and node listeners on RAC: QUEUESIZE, SDU, ports, etc.

TNS listener information held in listener.ora for releases >= 11.2 on RAC should not be modified. That is, the IPC endpoint information for a node listener should not be changed. Global listener parameters can still be set in the file; i.e., it is supported to add tracing parameters and ASO parameters such as wallet location.

Example:
SDU cannot be set in the TCP endpoint for SCAN/node listeners, but SDU can be changed via the global parameter DEFAULT_SDU_SIZE in the sqlnet.ora file
(set this in the RDBMS Oracle home's sqlnet.ora file).

SCAN listeners are not configured by the standard methods or by editing *.ora files; rather, they are built during install and manipulated via srvctl.
Information on the setup is held within Grid Infrastructure (GI). Currently there is no option for adding parameters such as SDU and QUEUESIZE to a listener.
SCAN supports only one address in the TNS connect descriptor and allows only one port assigned to it. The default port is 1521, but it can be changed if required.
ER 11782958 has been raised to address the current restrictions around listener parameters on RAC in release 11.2.
Where a global parameter exists, it can be used instead.

As a side note, the TCP.QUEUESIZE parameter is now available in 11.2.0.3+ (the patch for TCP.QUEUESIZE can be located on My Oracle Support under patch 13777308), which enables setting the TCP listen queue size. The default is the system-defined maximum value; the defined maximum for Linux is 128.
Allowable values are any integer up to the system-defined maximum; e.g., TCP.QUEUESIZE=100.
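Taken together, the two global parameters discussed above would land in the RDBMS home's sqlnet.ora, roughly as below (the values are illustrative, not recommendations):

```
# $ORACLE_HOME/network/admin/sqlnet.ora
DEFAULT_SDU_SIZE = 32767
TCP.QUEUESIZE = 100
```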

Rarely discussed 12c New Features Part 3 – Oracle Net Listener Registration

In Oracle Database 12c there were some minor Oracle Net Services features. This blog post covers some of the changes. In the next part I’ll review some of Dead Connection Detection changes as well as some of the smaller new features.

This change is neither sexy nor fun, but as a devoted RAC dev guy, I find these little changes (evolutions) amusing 🙂

In prior releases, service registration was performed by PMON; it is now performed by a dedicated process called LREG (listener registration). The LREG process (ora_lreg_<SID>) is a critical database background process. Since it is a critical background process, if it dies, the Oracle instance will crash.

LREG now assumes all of PMON's instance/service registration responsibilities; e.g., instance registration tasks such as service_update, service_register, and the LBA payload.

As with PMON in pre-12c versions, the LREG process (during registration) provides the listener with information about the following:
* Names of the database services provided by the database
* Name of the database instance associated with the services and its current and maximum load
* Service handlers (dispatchers and dedicated servers) available for the instance, including their type, protocol addresses, and current and maximum load (for LBA)

If the listener is not running when an instance starts, the LREG process cannot register the service information. LREG attempts to connect to the listener periodically on the default TCP/IP port 1521 if no local_listener is set, and it may take up to 60 seconds before LREG registers with the listener after the listener has been started. To initiate service registration immediately after the listener is started, use the SQL statement ALTER SYSTEM REGISTER.
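A quick sanity check that LREG is doing the registering is simply to look for the ora_lreg_<SID> background process. The ps output below is an invented sample standing in for ps -ef on a live 12c instance:

```shell
# Sketch: confirm the LREG background process exists for an instance.
# On a live system: ps -ef | grep "[o]ra_lreg_$ORACLE_SID"
sample='oracle   4211     1  0 10:02 ?  00:00:01 ora_pmon_yoda
oracle   4233     1  0 10:02 ?  00:00:00 ora_lreg_yoda'

# The last field of each ps row is the process name.
echo "$sample" | awk '$NF ~ /^ora_lreg_/ {print $NF}'
```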

LREG can be traced using the same methods as with PMON:

Enabling Oracle Net server-side sqlnet tracing will invoke a trace for LREG on instance startup. The old PMON trace command now traces LREG:
alter system set events = '10257 trace name context forever, level 5';

Listener registration information can also be dumped into the ora_lreg trace file:
alter system set events = 'immediate trace name listener_registration level 3';

Creating PDBs

Consolidate where possible …Isolate where necessary

In the last blog I mentioned the benefits of schema consolidation and how it dovetails directly into a 12c Oracle Database PDB implementation.
In this part 2 of the PDB blog, we will get a little more detailed and do a basic walk-through, from "cradle to grave", of a PDB. We'll use SQL*Plus as the tool of choice; next time I'll show it with DBCA.

First, verify that we are truly on a 12c Oracle database:

SQL> select instance_name, version, status, con_id from v$instance;

INSTANCE_NAME VERSION STATUS CON_ID
—————- —————– ———— ———-
yoda 12.1.0.1.0 OPEN 0

The v$database view tells us that we are dealing with a CDB based database

CDB$ROOT@YODA> select cdb, con_id from v$database;

CDB CON_ID
— ———-
YES 0

or a more elegant way:

CDB$ROOT@YODA> select NAME, DECODE(CDB, 'YES', 'Multitenant Option enabled', 'Regular 12c Database: ') "Multitenant Option ?", OPEN_MODE, CON_ID from V$DATABASE;

NAME Multitenant Option ? OPEN_MODE CON_ID
——— ————————– ——————– ———-
YODA Multitenant Option enabled READ ONLY 0

There are a lot of new views and tables to support PDB/CDB, but we'll focus on the V$PDBS and CDB_PDBS views:

CDB$ROOT@YODA> desc v$pdbs
Name
——–
CON_ID
DBID
CON_UID
GUID
NAME
OPEN_MODE
RESTRICTED
OPEN_TIME
CREATE_SCN
TOTAL_SIZE

CDB$ROOT@YODA> desc cdb_pdbs
Name
——–
PDB_ID
PDB_NAME
DBID
CON_UID
GUID
STATUS
CREATION_SCN
CON_ID

The SQL*Plus commands show con_name (container name) and show con_id display the container name and ID we are connected to:

CDB$ROOT@YODA> show con_name

CON_NAME
——————————
CDB$ROOT

CDB$ROOT@YODA> show con_id

CON_ID
——————————
1

Let's see what PDBs are created in this CDB and their current state:

CDB$ROOT@YODA> select CON_ID,DBID,NAME,TOTAL_SIZE from v$pdbs;

CON_ID DBID NAME TOTAL_SIZE
———- ———- —————————— ———-
2 4066465523 PDB$SEED 283115520
3 483260478 PDBOBI 0

CDB$ROOT@YODA> select con_id, name, open_mode from v$pdbs;

CON_ID NAME OPEN_MODE
———- ——————– ———-
2 PDB$SEED READ ONLY
3 PDBOBI MOUNTED

Recall from part 1 of the blog series that we created a PDB (pdbobi) when we specified the Pluggable Database feature on install, and that a PDB$SEED got created as part of that install process.

Now let's connect to the two different PDBs and see what they've got!! You really shouldn't ever connect to PDB$SEED, since it's just used as a template, but we're just curious 🙂

CDB$ROOT@YODA> alter session set container=PDB$SEED;
Session altered.

CDB$ROOT@YODA> select name from v$datafile;

NAME
——————————————————————————–
+PDBDATA/YODA/DATAFILE/undotbs1.260.823892155
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/system.271.823892297
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/sysaux.270.823892297

As you can see, PDB$SEED houses the template tablespaces: System, Sysaux, and Undo.

If we connect back to the root-CDB, we see that it houses essentially the traditional database tablespaces (like in pre-12c days).

CDB$ROOT@YODA> alter session set container=cdb$root;
Session altered.

CDB$ROOT@YODA> select name from v$datafile;

NAME
——————————————————————————–
+PDBDATA/YODA/DATAFILE/system.258.823892109
+PDBDATA/YODA/DATAFILE/sysaux.257.823892063
+PDBDATA/YODA/DATAFILE/undotbs1.260.823892155
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/system.271.823892297
+PDBDATA/YODA/DATAFILE/users.259.823892155
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/sysaux.270.823892297
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/system.276.823892813
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/sysaux.274.823892813
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/users.277.823892813
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/example.275.823892813

BTW, the datafiles listed in V$DATAFILE differ from CDB_DATA_FILES. CDB_DATA_FILES only shows datafiles from "open" PDBs, so just be careful if you're looking for the correct datafile list.

Let’s connect to our user PDB (pdbobi) and see what we can see 🙂

CDB$ROOT@YODA> alter session set container=pdbobi;
Session altered.

CDB$ROOT@YODA> select con_id, name, open_mode from v$pdbs;

CON_ID NAME OPEN_MODE
———- —————– ———–
3 PDBOBI MOUNTED

Place PDBOBI in read-write mode. Note that when you create a PDB, it is initially in mounted mode with a status of NEW.
View the open mode of a PDB by querying the OPEN_MODE column of the V$PDBS view, or view the status of a PDB by querying the STATUS column of the CDB_PDBS or DBA_PDBS view.

CDB$ROOT@YODA> alter pluggable database pdbobi open;

Pluggable database altered.

or CDB$ROOT@YODA> alter pluggable database all open;

And let’s create a new tablespace in this PDB

CDB$ROOT@YODA> create tablespace obiwan datafile size 500M;

Tablespace created.

CDB$ROOT@YODA> select file_name from cdb_data_files;

FILE_NAME
——————————————————————————–
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/example.275.823892813
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/users.277.823892813
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/sysaux.274.823892813
+PDBDATA/YODA/E456D87DF75E6553E043EDFE10AC71EA/DATAFILE/obiwan.284.824683339
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/system.276.823892813

PDBOBI only has scope for its own PDB files. We will illustrate this further down below.

Let’s create a new clone from an existing PDB, but with a new path

CDB$ROOT@YODA> create pluggable database PDBvader from PDBOBI FILE_NAME_CONVERT=('+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE','+PDBDATA');
create pluggable database PDBvader from PDBOBI FILE_NAME_CONVERT=('+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE','+PDBDATA')
*
ERROR at line 1:
ORA-65040: operation not allowed from within a pluggable database

CDB$ROOT@YODA> show con_name

CON_NAME
——————————
PDBOBI

Hmm... remember we were still connected to PDBOBI. You can only create PDBs from the root (and not even from PDB$SEED). So connect to CDB$ROOT:

CDB$ROOT@YODA> create pluggable database PDBvader from PDBOBI FILE_NAME_CONVERT=('+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE','+PDBDATA');

Pluggable database created.

CDB$ROOT@YODA> select pdb_name, status from cdb_pdbs;

PDB_NAME STATUS
———- ————-
PDBOBI NORMAL
PDB$SEED NORMAL
PDBVADER NORMAL

And

CDB$ROOT@YODA> select CON_ID,DBID,NAME,TOTAL_SIZE from v$pdbs;

CON_ID DBID NAME TOTAL_SIZE
———- ———- ————- ————-
2 4066465523 PDB$SEED 283115520
3 483260478 PDBOBI 917504000
4 994649056 PDBVADER 0

Hmm... the TOTAL_SIZE column shows 0 bytes. Recall that all new PDBs are created and placed in MOUNTED state.

CDB$ROOT@YODA> alter session set container=pdbvader;

Session altered.

CDB$ROOT@YODA> alter pluggable database open;

Pluggable database altered.

CDB$ROOT@YODA> select file_name from cdb_data_files;

FILE_NAME
——————————————————————————–
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/system.280.823980769
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/sysaux.279.823980769
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/users.281.823980769
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/example.282.823980769

Voilà... the size is now reflected!!

CDB$ROOT@YODA> select CON_ID,DBID,NAME,TOTAL_SIZE from v$pdbs;

CON_ID DBID NAME TOTAL_SIZE
———- ———- —————————— ———-
4 994649056 PDBVADER 393216000

Again, the scope of PDBVADER is to its own container files; it can’t see PDBOBI files at all. If we connect back to cdb$root and look at v$datafile, we see that cdb$root has scope for all the datafiles in the CDB database

Incidentally, that long identifier, "E46B24386A131109E043EDFE10AC6E89", in the OMF name is the GUID, or global identifier, for that PDB. This is not the same as the container unique identifier (CON_UID). The con_uid is a local identifier, whereas the GUID is universal. Keep in mind that we can unplug a PDB from one CDB and plug it into another CDB, so the GUID provides this uniqueness and streamlines portability.

CDB$ROOT@YODA> select name, con_id from v$datafile order by con_id

NAME CON_ID
———————————————————————————– ———-
+PDBDATA/YODA/DATAFILE/undotbs1.260.823892155 1
+PDBDATA/YODA/DATAFILE/sysaux.257.823892063 1
+PDBDATA/YODA/DATAFILE/system.258.823892109 1
+PDBDATA/YODA/DATAFILE/users.259.823892155 1
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/system.271.823892297 2
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/sysaux.270.823892297 2
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/example.275.823892813 3
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/users.277.823892813 3
+PDBDATA/YODA/E456D87DF75E6553E043EDFE10AC71EA/DATAFILE/obiwan.284.824683339 3
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/system.276.823892813 3
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/sysaux.274.823892813 3
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/sysaux.279.823980769 4
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/users.281.823980769 4
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/example.282.823980769 4
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/system.280.823980769 4

Now that we are done testing with PDBVADER PDB, we can shutdown and drop this PDB

CDB$ROOT@YODA> alter session set container=cdb$root;

Session altered.

CDB$ROOT@YODA> drop pluggable database pdbvader including datafiles;
drop pluggable database pdbvader including datafiles
*
ERROR at line 1:
ORA-65025: Pluggable database PDBVADER is not closed on all instances.

CDB$ROOT@YODA> alter pluggable database pdbvader close;

Pluggable database altered.

CDB$ROOT@YODA> drop pluggable database pdbvader including datafiles;

Pluggable database dropped.

Just for completeness, I’ll illustrate couple different ways to create a PDB

The beauty of PDBs is not just mobility (plug and unplug), which we'll show later, but that we can create/clone a new PDB from a "gold-image" PDB. That's real agility and a Database as a Service (DBaaS) play.

So let’s create a new PDB in a couple of different ways.

Method #1: Create a PDB from SEED
CDB$ROOT@YODA> alter session set container=cdb$root;

Session altered.

CDB$ROOT@YODA> CREATE PLUGGABLE DATABASE pdbhansolo admin user hansolo identified by hansolo roles=(dba);

Pluggable database created.

CDB$ROOT@YODA> alter pluggable database pdbhansolo open;

Pluggable database altered.

CDB$ROOT@YODA> select file_name from cdb_data_files;

FILE_NAME
——————————————————————————–
+PDBDATA/YODA/E51109E2AF22127AE043EDFE10AC1DD9/DATAFILE/system.280.824693889
+PDBDATA/YODA/E51109E2AF22127AE043EDFE10AC1DD9/DATAFILE/sysaux.279.824693893

Notice that it contains just the basic files to enable a PDB. The CDB copies the System and Sysaux tablespaces from PDB$SEED and instantiates them in the new PDB.

Method #2: Clone from an existing PDB (PDBOBI in our case)

CDB$ROOT@YODA> alter session set container=cdb$root;

Session altered.

CDB$ROOT@YODA> alter pluggable database pdbobi close;

Pluggable database altered.

CDB$ROOT@YODA> alter pluggable database pdbobi open read only;

Pluggable database altered.

CDB$ROOT@YODA> CREATE PLUGGABLE DATABASE pdbleia from pdbobi;

Pluggable database created.

CDB$ROOT@YODA> alter pluggable database pdbleia open;

Pluggable database altered.

CDB$ROOT@YODA> select file_name from cdb_data_files;

FILE_NAME
——————————————————————————–
+PDBDATA/YODA/E51109E2AF23127AE043EDFE10AC1DD9/DATAFILE/system.281.824694649
+PDBDATA/YODA/E51109E2AF23127AE043EDFE10AC1DD9/DATAFILE/sysaux.282.824694651
+PDBDATA/YODA/E51109E2AF23127AE043EDFE10AC1DD9/DATAFILE/users.285.824694661
+PDBDATA/YODA/E51109E2AF23127AE043EDFE10AC1DD9/DATAFILE/example.286.824694661
+PDBDATA/YODA/E51109E2AF23127AE043EDFE10AC1DD9/DATAFILE/obiwan.287.824694669

Notice that the OBIWAN tablespace we created in PDBOBI came over as part of this clone process!!

You can also create a PDB as a snapshot (copy-on-write) from another PDB. I'll post this test in the next blog report. But essentially you'll need a NAS appliance, or any technology that provides COW snapshots.
I plan on using ACFS as the storage container and an ACFS RW snapshot for the snapshot PDB.