
Tuesday, June 20, 2017

On DB Time, DB CPU and Utilization

DB Time/DB CPU


The "Load Section" of an AWR Report has two interesting lines
  • DB Time(s)
  • DB CPU(s)

Load Profile

This "Load Profile" is an AWR snapshot of 1h (60m/3600s). Look at the definitions of those metrics.

  • DB Time is a time model statistic that is the sum of Oracle process CPU consumption and non-idle wait time
  • DB CPU is Oracle server/foreground/shadow process CPU consumption  

(The above definitions are taken from an Orapub blog post.) Both of them are divided by the "Elapsed Time", so they are an average over the snapshot interval.

Because of the definition of "DB Time", the "DB Time(s)" value represents the Average Active Sessions over one second.

"DB CPU(s)" tell us how many CPU are used in one second. You can compare this value with the number of CPU on your system, in order to understand if you are CPU bound.

In my example (from a production system), the database is doing... nothing: 1 active session every two and a half seconds (0.4 in 1s + 0.4 in 1s + 0.2 in 0.5s = 1 active session).

Also, in each second only 0.3 CPUs are used out of the 120 CPUs of the system (see the next picture).
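
Just as a minimal sketch (not part of the original post), these cumulative counters can be read from V$SYS_TIME_MODEL, where both statistics are expressed in microseconds; the AWR "per second" values are simply the deltas of these counters between two snapshots divided by the snapshot elapsed time.

SELECT stat_name
     , ROUND(value / 1000000, 1) AS seconds   -- the view reports microseconds
FROM   v$sys_time_model
WHERE  stat_name IN ('DB time', 'DB CPU');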


Utilization

As you can see from this Orapub post, utilization is calculated as
  • Utilization = Requirement / Capacity (a)
(a) capacity: the maximum amount that can be contained or accommodated

From our point of view

  • The requirement is the time used 
  • The capacity is the time available

so the previous formula becomes

  • Utilization = time used / time available

You can calculate the utilization using two different methods. These methods are taken from Craig Shallahamer's Video Lessons.
  1. Core method
  2. Busy:Idle method

In order to explain these methods, I post another picture from the AWR report.

Operating System Statistics

Core method

With this method, requirement and capacity are:

Requirement =>  (v$osstat.)BUSY_TIME
Capacity => #CPU * Elapsed (Time)

In AWR report, the "BUSY_TIME" came from V$OSSTAT, while "Elapsed" is the "Elapsed Time" of snapshot and #CPU is the number of CPU on the system (the NUM_CPU column of V$OSSTAT)

Remember to normalize the numbers

Utilization = BUSY_TIME / Elapsed = (3336707/100) / (120*60*60)
Utilization = 33367.07 / 432000
Utilization = 0.077
Utilization = 7.7%

Because BUSY_TIME is in hundredths of a second, I divided it by 100. Also, since the "Elapsed Time" in the AWR report is in minutes, I multiplied it by 60.
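
As a sketch (not in the original post), the same Core-method calculation can be run directly against the AWR repository. The query below assumes a single database in the repository and uses &b_snapid / &e_snapid as placeholders for the two snapshots delimiting the interval; verify the statistic and column names on your release.

SELECT ROUND( busy_sec / (num_cpus * elapsed_sec) * 100, 2 ) AS utilization_pct
FROM (
  SELECT ( e.value - b.value ) / 100 AS busy_sec    -- BUSY_TIME is in hundredths of a second
       , c.value                     AS num_cpus    -- NUM_CPUS statistic
       , ( CAST(se.end_interval_time AS DATE)
         - CAST(sb.end_interval_time AS DATE) ) * 86400 AS elapsed_sec
  FROM   dba_hist_osstat   b
       , dba_hist_osstat   e
       , dba_hist_osstat   c
       , dba_hist_snapshot sb
       , dba_hist_snapshot se
  WHERE  b.stat_name = 'BUSY_TIME' AND b.snap_id = &b_snapid
  AND    e.stat_name = 'BUSY_TIME' AND e.snap_id = &e_snapid
  AND    c.stat_name = 'NUM_CPUS'  AND c.snap_id = &e_snapid
  AND    sb.snap_id  = &b_snapid
  AND    se.snap_id  = &e_snapid
  AND    b.instance_number = e.instance_number
  AND    c.instance_number = e.instance_number
  AND    sb.instance_number = e.instance_number
  AND    se.instance_number = e.instance_number
);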


Busy:Idle method

With this method, requirement and capacity are:

Requirement =>  (v$osstat.)BUSY_TIME
Capacity => (v$osstat.)BUSY_TIME + (v$osstat.)IDLE_TIME

In this case, all information comes from the "Operating System Statistics" section of AWR Report.

Utilization = BUSY_TIME / (BUSY_TIME + IDLE_TIME)
Utilization = 3336707 / (3336707 + 39244501)
Utilization = 3336707 / 42581208
Utilization = 0.078
Utilization = 7.8%

In this case, all measurements are in hundredths of a second, so no further conversion is needed.
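
Again as a sketch (not from the original post), the Busy:Idle ratio can also be computed straight from V$OSSTAT. Note that these counters are cumulative since instance startup, so this gives the average utilization since startup rather than for a single AWR interval.

SELECT ROUND( busy / (busy + idle) * 100, 1 ) AS utilization_pct
FROM (
  SELECT MAX(CASE WHEN stat_name = 'BUSY_TIME' THEN value END) AS busy   -- hundredths of a second
       , MAX(CASE WHEN stat_name = 'IDLE_TIME' THEN value END) AS idle   -- hundredths of a second
  FROM   v$osstat
);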

The low utilization confirms the very low Average Active Sessions shown in the first part of this post.

Reference

* http://blog.orapub.com/20130228/how-many-cpu-cores-do-i-really-have-in-my-oracle-database-server.html
* http://www.toadworld.com/platforms/oracle/b/weblog/archive/2014/05/26/understanding-cpu-utilization-amp-cpu-capacity
* http://blog.orapub.com/20140805/what-is-oracle-db-time-db-cpu-wall-time-and-non-idle-wait-time.html
* https://docs.oracle.com/database/122/REFRN/V-OSSTAT.htm#REFRN30321

Monday, June 12, 2017

SQL Profile: Case 3 - Part 1: Create/Execute/Report/Accept

Case 3: The statement is in AWR repository

In this scenario there are the following steps:

(a) Find out the SQL_ID from DBA_HIST_SQLTEXT
(b) Check the execution plan
(c) Find out the snapshot interval from DBA_HIST_SQLSTAT
(d) Create a task using the SQL_ID found in the previous step
(e) Run the task
(f) Show the recommendations
(g) Accept the recommendations

These steps are similar to the previous two cases (Case 1 and Case 2). The difference is that you have to look for the sql_id in the DBA_HIST_* views.


Step a) Find out the SQL_ID from the DBA_HIST_SQLTEXT view

set long 999999
set pages 99
SELECT 
      sql_id 
    , sql_text 
FROM 
      dba_hist_sqltext 
WHERE 
      regexp_like (SQL_TEXT, 'af_t_obj_spec','i') 
AND   regexp_like (SQL_TEXT, 'AF_T_OBJ t3','i');


SQL_ID
-------------
SQL_TEXT
--------------------------------------------------------------------------------
57fbbndr8f1r0
SELECT  AF_T_OBJ_SPEC.OBJ_SPEC_ID, AF_T_OBJ_SPEC.SHORT_NAME, AF_T_OBJ_SPEC.DESCR
IPTION, AF_T_OBJ_SPEC.DELETED, AF_T_OBJ_SPEC.LAST_VERSION, AF_T_OBJ_SPEC.OBJ_TYP
E_NAME, AF_T_OBJ_SPEC.LOCK_USER, AF_T_OBJ_SPEC.LOCK_LEASE_DATE, AF_T_OBJ_SPEC.LO
CK_SESSION, AF_T_OBJ_SPEC.LOCK_INSTANCE, AF_T_OBJ_SPEC.LIFECYCLE_UUID, AF_T_OBJ_
SPEC.INHERIT_PARENT_PRIVS, AF_T_OBJ_SPEC.OBJ_SPEC_UUID, AF_T_OBJ_SPEC.CREATED_BY
, AF_T_OBJ_SPEC.CREATED_DATE, AF_T_OBJ_SPEC.MODIFIED_BY, AF_T_OBJ_SPEC.MODIFIED_
DATE, AF_T_OBJ_SPEC.OWNER, AF_T_OBJ_SPEC.FEATURE_NAME, AF_T_OBJ_SPEC.CHECKSUM, A
F_T_OBJ_SPEC.INACTIVE, AF_T_OBJ_SPEC.SYSTEM_MANAGED, AF_T_OBJ_SPEC.FINAL_VERSION
, AF_T_OBJ_SPEC.SOURCE_ID, AF_T_OBJ.OBJ_ID, AF_T_OBJ.CHANGENUMBER, AF_T_OBJ.VERS
ION, AF_T_OBJ.INITIAL_COMMENT, AF_T_OBJ.STATUS_UUID, AF_T_OBJ.OBJ_UUID, AF_T_OBJ
.OBJ_SPEC_UUID AF_T_OBJ_OBJ_SPEC_UUID, AF_T_OBJ.CREATED_BY AF_T_OBJ_CREATED_BY,
AF_T_OBJ.CREATED_DATE AF_T_OBJ_CREATED_DATE, AF_T_OBJ.MODIFIED_BY AF_T_OBJ_MODIF
IED_BY, AF_T_OBJ.MODIFIED_DATE AF_T_OBJ_MODIFIED_DATE, AF_T_OBJ.OBJ_DEFINITION ,
 rownum FROM AF_T_OBJ_SPEC, AF_T_OBJ WHERE AF_T_OBJ_SPEC.OBJ_SPEC_UUID =AF_T_OBJ
.OBJ_SPEC_UUID  AND   AF_T_OBJ_SPEC.DELETED = 0   AND  AF_T_OBJ.VERSION = (SELEC
T MAX(VERSION) FROM AF_T_OBJ t3 where AF_T_OBJ.OBJ_SPEC_UUID = T3.OBJ_SPEC_UUID)
  AND  EXISTS (SELECT 0 FROM AF_L_LIFECYCLE_STATUS WHERE AF_L_LIFECYCLE_STATUS.S
TATUS_UUID = AF_T_OBJ.STATUS_UUID AND AF_L_LIFECYCLE_STATUS.LIFECYCLE_UUID = AF_
T_OBJ_SPEC.LIFECYCLE_UUID AND AF_L_LIFECYCLE_STATUS.STATUS_NAME  in (:1 ,:2 ))
AND  AF_T_OBJ.VERSION = (SELECT MAX(VERSION) FROM AF_T_OBJ t3 where AF_T_OBJ.OBJ
_SPEC_UUID = T3.OBJ_SPEC_UUID)  AND  AF_T_OBJ_SPEC.OBJ_TYPE_NAME in (:3 ) ORDER
BY  AF_T_OBJ_SPEC.DESCRIPTION



Step b) Check the execution plan

SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY_AWR('&sqlid'));

Enter value for sqlid: 57fbbndr8f1r0
old   1: SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY_AWR('&sqlid'))
new   1: SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY_AWR('57fbbndr8f1r0'))


You can find the output here (the database I am using is a two-node RAC and I ran the commands on the first instance (*)). For this sql_id there are 4 execution plans (*):

Plan hash value: 333756168

Plan hash value: 934906073
Plan hash value: 2303425470
Plan hash value: 2773806998


At this point, if you want, you could choose one of the previous plan_hash_values and set it for the SQL Profile.

But what I want is for the SQL Tuning Advisor to analyze the statement for me, so I move on to Step c.


Step c) Find out a snapshot interval from DBA_HIST_SQLSTAT

set lines 210 
set pages 99 
SELECT min(snap_id), max(snap_id) FROM DBA_HIST_SQLSTAT WHERE sql_id ='&sqlid';

Enter value for sqlid: 57fbbndr8f1r0
old   1: SELECT min(snap_id), max(snap_id) FROM DBA_HIST_SQLSTAT WHERE sql_id ='&sqlid'
new   1: SELECT min(snap_id), max(snap_id) FROM DBA_HIST_SQLSTAT WHERE sql_id ='57fbbndr8f1r0'
MIN(SNAP_ID) MAX(SNAP_ID)
------------ ------------
        4944         5118

The details about the sql_id in DBA_HIST_SQLSTAT are:

select 
      snap_id
    , sql_id
    , plan_hash_value 
from 
      dba_hist_sqlstat 
where 
      sql_id='57fbbndr8f1r0' 
order by 
      snap_id;

   SNAP_ID SQL_ID        PLAN_HASH_VALUE
---------- ------------- ---------------
      4944 57fbbndr8f1r0      2303425470
      4944 57fbbndr8f1r0       333756168
      4944 57fbbndr8f1r0      2773806998
      5117 57fbbndr8f1r0      2303425470
      5117 57fbbndr8f1r0       333756168
      5117 57fbbndr8f1r0      2773806998
      5118 57fbbndr8f1r0       333756168
      5118 57fbbndr8f1r0       934906073  <<< see (*) <<<
      5118 57fbbndr8f1r0      2773806998



Step d) Create the task, using the SQL_ID 


With these snap_ids, I can create a task and get recommendations:

DECLARE
   task_name VARCHAR2(30);
BEGIN
    task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(
      begin_snap=> &s_snapid
    , end_snap=> &e_snapid
    , sql_id => '&sqlid'
    , scope => 'COMPREHENSIVE'
    , time_limit => 90 -- nr of seconds of analysis 
    , task_name => 'task_&sqlid'
    , description => 'Task to tune a query on a specified table');
END;
/

Enter value for s_snapid: 4944
old   5:       begin_snap=> &s_snapid
new   5:       begin_snap=> 4944
Enter value for e_snapid: 5118
old   6:     , end_snap=> &e_snapid
new   6:     , end_snap=> 5118
Enter value for sqlid: 57fbbndr8f1r0
old   7:     , sql_id => '&sqlid'
new   7:     , sql_id => '57fbbndr8f1r0'
old  10:     , task_name => 'task_&sqlid'
new  10:     , task_name => 'task_57fbbndr8f1r0'


In this case, I chose to analyze the whole range, but I could choose just a part of it.


Step e) Run the task in order to analyze the statement

BEGIN
    DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name =>'task_57fbbndr8f1r0');
END;
/
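
The task runs for at most the time_limit set at creation. A quick check I find useful (not shown in the original post) is to query DBA_ADVISOR_LOG to confirm the task has completed before asking for the report:

SELECT task_name
     , status
     , execution_start
     , execution_end
FROM   dba_advisor_log
WHERE  task_name = 'task_57fbbndr8f1r0';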


Step f) Show the recommendations

set long 40000
set longchunksize 1000
set linesize 210
set heading off
set pages 0
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('task_57fbbndr8f1r0') from DUAL;
set heading on
set pages 99


The output is here. The Advisor produced 3 findings:

1- SQL Profile Finding
2- Index Finding
3- Alternative Plan Finding (*)

Also, you could query the data dictionary to get information:

SELECT
    finding_id
  , message
FROM
    dba_advisor_findings 
WHERE
    task_name='task_57fbbndr8f1r0';

FINDING_ID MESSAGE
---------- -----------------------------------------------------------------------------------------------
         1 A potentially better execution plan was found for this statement.
         2 The execution plan of this statement can be improved by creating one or more indices.
         3 Some alternative execution plans for this statement were found by searching the system's 
           real-time and historical performance data.


For the 3rd point, the "Alternative Plan Finding", see the notes below.

Step g) Accept the recommendations 

If you want to accept the profile, then you have to run (see the log):

execute dbms_sqltune.accept_sql_profile( - 
    task_name =>'task_57fbbndr8f1r0' -
  , task_owner => 'SYS' -
  , replace => TRUE);
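
As a quick verification (not part of the original post), you can check that the profile was created and is enabled by querying DBA_SQL_PROFILES; the NAME column is also the value you would pass to DROP_SQL_PROFILE in Step i.

SELECT name
     , category
     , status
     , created
FROM   dba_sql_profiles
ORDER BY created DESC;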


Step h) Optionally, you can drop the TASK

exec DBMS_SQLTUNE.DROP_TUNING_TASK ('&task_name')


Step i) Optionally, you can drop the SQL Profile


exec DBMS_SQLTUNE.DROP_SQL_PROFILE ('&sqlprofile_name')


(*) Notes

From the AWR repository (DBA_HIST_SQLSTAT), you can see that there are 4 plan hash values for the 57fbbndr8f1r0 statement. They are

333756168
934906073
2303425470
2773806998

However, the SQL Tuning Advisor, in its analysis, does not consider plan hash value 934906073. In fact, in the "Alternative Plan Finding" section of the log, you see:

3- Alternative Plan Finding
---------------------------
  Some alternative execution plans for this statement were found by searching the system's real-time and historical performance data.

  The following table lists these plans ranked by their average elapsed time.
  See section "ALTERNATIVE PLANS SECTION" for detailed information on each plan.

  id plan hash  last seen            elapsed (s)  origin          note
  -- ---------- -------------------- ------------ --------------- ----------------
   1 2303425470  2017-05-09/16:41:30        0.096 Cursor Cache
   2  333756168  2017-05-09/16:41:30        0.122 Cursor Cache
   3 2773806998  2017-05-09/16:41:29        0.132 Cursor Cache

All the plans are taken from memory (Cursor Cache). This is why plan 934906073 is not found: it is cached on the second instance:

select 
      inst_id
    , plan_hash_value
from 
      gv$sql 
where 
      sql_id='57fbbndr8f1r0' 
order by inst_id;

   INST_ID PLAN_HASH_VALUE
---------- ---------------
         1       333756168
         1      2303425470
         1      2773806998
         2       333756168
         2       934906073 <<< instance nr 2
         2      2773806998

If you want to find suggestions about plan hash value 934906073, you have to run DBMS_SQLTUNE.EXECUTE_TUNING_TASK (Step e) on the second instance. Here is the log. This is the section about the "Alternative Plan Finding":

  id plan hash  last seen            elapsed (s)  origin          note
  -- ---------- -------------------- ------------ --------------- ----------------
   1 2773806998  2017-05-09/17:06:12        0.064 Cursor Cache
   2  934906073  2017-05-09/17:06:12        0.120 Cursor Cache <<< here
   3  333756168  2017-05-09/17:01:38        0.123 Cursor Cache

Well, this is how the SQL Tuning Advisor works. I opened an SR for this and Support told me:

If plans are cached in memory the STA Tuning Task will take the info from cursor cache for performance.

So, the behavior of the SQL Tuning Advisor, when used against the AWR repository, is:

* Providing a SQL Profile suggestion: "SQL Profile Finding" (1st finding)
* Providing a suggestion with indexes: "Index Finding" (2nd finding)
* Providing alternative solutions: "Alternative Plan Finding" (3rd finding)

About the last point (the 3rd), the alternative solutions are taken from memory (Cursor Cache).

Monday, May 08, 2017

Recovering Table, Part 4 - Note

There are two additional pieces of information I want to give you:
  1. The directories created in order to restore the auxiliary database
  2. The channel configuration of the RMAN script

The script I used to recover the table is:

run {
2> recover table PCMS_ADM.PIANIFICAZIONE_MANUTENZIONE
3> until time "to_date('02/05/2017 17:00:00','dd/mm/yyyy hh24:mi:ss')"
4> auxiliary destination '/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp'
5> remap table PCMS_ADM.PIANIFICAZIONE_MANUTENZIONE:PIANIFICAZIONE_MAN_BCK;
6> }


  • The directories that rman use for the "core" files
Into directory specified by "auxiliary destination" parameter, rman will create a directory named as the same name of the production database. In my case it was PCMSP. Here will be restored the "core files"

Under this directory, there will be 3 directories:



  • one for the controlfile (/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/controlfile/)
  • one for the datafiles (/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/datafile/)
  • one for the online redo logs (/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/onlinelog/)
The archived logs will be restored under the PCMSP directory (at the same level as the controlfile/datafile/onlinelog directories).

The list of "core" datafile. The archived logs are not shown, here.

  • The directories that rman use for the "application" files
At the same level of the directory where rman will restore the core files (auxiliary destination=/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp) will be created a new directory (DSEK_PITR_PCMSP in my case) where the "application" datafiles will go.

core files => /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/
application files => /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DSEK_PITR_PCMSP/

where

  • DSEK is a prefix generated by RMAN
  • PITR stands for Point In Time Recovery
  • PCMSP is the name of the database

The "core" directory (PCMS) and "application" directory (DESK_PITR_PCMSP), are created at the same level. Also the archived log will be restored to the same level. Here are shown the archived log needed for the recovery of "application" tablespace
Summarizing what I said, the tree of directy is:

<auxiliary destination>
     |
     -> archived log for "core" part
     -> archived log for "application" part
     -> <db_name>
            |
            -> controlfile dir
                  |
                  -> controlfile
            -> onlinelog dir
                  |
                  -> redolog file online
            -> datafile dir
                  |
                  -> SYSTEM tbs
                  -> SYSAUX tbs
                  -> TEMP tbs
                  -> all UNDO tbs
     -> <pitr_db_name>
            |
            -> datafile for "application" part


  • Channel configuration
Another important thing to show is the script I used. As you can see, I am not allocating a channel. In my case, the backup is on tape. If you try to allocate a channel like this (I use Legato Networker, which is why all the environment parameters start with "NSR_"):

run {
2> ALLOCATE CHANNEL c1 TYPE 'SBT_TAPE'
PARMS 'ENV=(NSR_SERVER=bck_server, NSR_CLIENT=bck_client, NSR_GROUP=db_group, NSR_DATA_VOLUME_POOL=db_pool_1M)'; 
3>
4> recover table PCMS_ADM.PIANIFICAZIONE_MANUTENZIONE
5> until time "to_date('02/05/2017 17:00:00','dd/mm/yyyy hh24:mi:ss')"
6> auxiliary destination '/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp'
7> remap table PCMS_ADM.PIANIFICAZIONE_MANUTENZIONE:PIANIFICAZIONE_MAN_BCK;
8> }


then you get the following error:

RMAN-06034: at least 1 channel must be allocated to execute this command

The cause and solution are explained in this Metalink note: "Recover Table Fails with RMAN-06034: at least 1 channel must be allocated to execute this command (Doc ID 2105325.1)":


You cannot manually allocate a channel when RMAN implicitly creates the auxiliary instance; the channel specification must be configured in the target database. This is done with:

configure channel device type 'SBT_TAPE' FORMAT '%U' PARMS='ENV=(NB_ORA_CLIENT=......, NB_ORA_POLICY=............., NB_ORA_SCHED=.....)';


So, I have to connect to the target and to the catalog/controlfile and run the "configure channel" command:

$ rman
RMAN> connect target /
RMAN> configure channel device type 'SBT_TAPE' FORMAT '%U' PARMS='ENV=(NSR_CLIENT=......, NSR_SERVER=.....)';

Using Legato Networker, the only two parameters that you have to set for the restore are NSR_CLIENT and NSR_SERVER. The tape number is stored in the catalog/controlfile.
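
If you want to double-check the persistent configuration before launching the recovery (a small check, not part of the original post), you can display it from the RMAN prompt:

RMAN> SHOW CHANNEL;
RMAN> SHOW DEVICE TYPE;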


  • Other recovery scenarios

As mentioned in Part 1, there are several scenarios for recovering a table. I report two examples from the manual pages.

The first one: the recovery generates just a dump file; the import is not performed.

RECOVER TABLE SCOTT.EMP, SCOTT.DEPT
    UNTIL TIME 'SYSDATE-1'
    AUXILIARY DESTINATION '/tmp/oracle/recover'
    DATAPUMP DESTINATION '/tmp/recover/dumpfiles'
    DUMP FILE 'emp_dept_exp_dump.dat'
    NOTABLEIMPORT;

The second one: for the PITR, the recovery uses UNTIL SEQUENCE and also remaps the tablespace.

RECOVER TABLE SH.SALES:SALES_1998, SH.SALES:SALES_1999
    UNTIL SEQUENCE 354
    AUXILIARY DESTINATION '/tmp/oracle/recover'
    REMAP TABLE 'SH'.'SALES':'SALES_1998':'HISTORIC_SALES_1998',
              'SH'.'SALES':'SALES_1999':'HISTORIC_SALES_1999' 
    REMAP TABLESPACE 'SALES_TS':'SALES_PRE_2000_TS';



Recovering Table, Part 1 - Introduction
Recovering Table, Part 2 - The "Core" part
Recovering Table, Part 3 - The "Application" part

Recovering Table, Part 3 - The "Application" part

The script I used to recover the table is:

run {
2> recover table PCMS_ADM.PIANIFICAZIONE_MANUTENZIONE
3> until time "to_date('02/05/2017 17:00:00','dd/mm/yyyy hh24:mi:ss')"
4> auxiliary destination '/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp'
5> remap table PCMS_ADM.PIANIFICAZIONE_MANUTENZIONE:PIANIFICAZIONE_MAN_BCK;
6> }

The table recovery phase is split into 2 logical steps:

  1. Restore the core part
  2. Restore the application part


  • Restore the "Application" part

During this step, RMAN will restore:

  • The tablespaces where the table is located
  • Archived logs
Transparently, RMAN will run the following scripts:

##########################
Restore the "application" tablespace
##########################

In my case, there is just one tablespace:

{
# set requested point in time
set until  time "to_date('02/05/2017 17:00:00','dd/mm/yyyy hh24:mi:ss')";
# set destinations for recovery set and auxiliary set datafiles
set newname for datafile  13 to new;
set newname for datafile  14 to new;
set newname for datafile  17 to new;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  13, 14, 17;

switch clone datafile all;
}

where

00013 to /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/datafile/o1_mf_pcms_dat_%u_.dbf
00014 to /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/datafile/o1_mf_pcms_dat_%u_.dbf
00017 to /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/datafile/o1_mf_pcms_dat_%u_.dbf


###########################
Recover the "application" tablespace
###########################

{
# set requested point in time
set until  time "to_date('02/05/2017 17:00:00','dd/mm/yyyy hh24:mi:ss')";
# online the datafiles restored or switched
sql clone "alter database datafile  13 online";
sql clone "alter database datafile  14 online";
sql clone "alter database datafile  17 online";
# recover and open resetlogs
recover clone database tablespace  "PCMS_DATA", "SYSTEM", "UNDOTBS1", "UNDOTBS2", "SYSAUX" delete archivelog;
alter clone database open resetlogs;
}

As you can see in the log file, also in this case it is necessary to restore the archived logs.

##########################
Creation of expdp/impdp directory
##########################

{
# create directory for datapump import
sql "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp''";
# create directory for datapump export
sql clone "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp''";
}

#############
expdp of the table
#############

After the database recovery, the table export starts:

Performing export of tables...
   EXPDP> Starting "SYS"."TSPITR_EXP_DgAf_BngC":  
   EXPDP> Estimate in progress using BLOCKS method...
   EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
   EXPDP> Total estimation using BLOCKS method: 7 MB
   EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
   EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
   EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
   EXPDP> . . exported "PCMS_ADM"."PIANIFICAZIONE_MANUTENZIONE"    5.110 MB   34394 rows
   EXPDP> Master table "SYS"."TSPITR_EXP_DgAf_BngC" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TSPITR_EXP_DgAf_BngC is:
   EXPDP>   /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/tspitr_DgAf_81953.dmp
   EXPDP> Job "SYS"."TSPITR_EXP_DgAf_BngC" successfully completed at Thu May 4 15:51:03 2017 elapsed 0 00:00:29
Export completed


{
# shutdown clone before import
shutdown clone abort
}

###########
impdp of table
###########

...and the table import:

Performing import of tables...
   IMPDP> Master table "SYS"."TSPITR_IMP_DgAf_gvec" successfully loaded/unloaded
   IMPDP> Starting "SYS"."TSPITR_IMP_DgAf_gvec":  
   IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
   IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
   IMPDP> . . imported "PCMS_ADM"."PIANIFICAZIONE_MAN_BCK"         5.110 MB   34394 rows
   IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
   IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
   IMPDP> Job "SYS"."TSPITR_IMP_DgAf_gvec" successfully completed at Thu May 4 15:51:14 2017 elapsed 0 00:00:04
Import completed

As you can see, once the expdp command has completed, the auxiliary database is no longer needed, so it is shut down.


#########
The last step
#########

After importing the table, all restored files ("core" files, "application" files and archived logs) will be deleted.

auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_temp_djpckr57_.tmp deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_temp_djpckr70_.tmp deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/onlinelog/o1_mf_8_djpddy16_.log deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/onlinelog/o1_mf_6_djpddo9z_.log deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/onlinelog/o1_mf_4_djpdddof_.log deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/onlinelog/o1_mf_2_djpdd443_.log deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/onlinelog/o1_mf_7_djpddsr2_.log deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/onlinelog/o1_mf_5_djpddk23_.log deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/onlinelog/o1_mf_3_djpdd893_.log deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/onlinelog/o1_mf_1_djpdczsp_.log deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/datafile/o1_mf_pcms_dat_djpcndlt_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/datafile/o1_mf_pcms_dat_djpd013l_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/DGAF_PITR_PCMSP/datafile/o1_mf_pcms_dat_djpcop6k_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_sysaux_djpcc7lb_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_sysaux_djpc4pyw_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_undotbs2_djpcbo4t_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_undotbs2_djpc9bnh_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_undotbs1_djpc8t8l_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_undotbs1_djpc7rts_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_system_djpc79bp_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_system_djpc65fy_.dbf deleted
auxiliary instance file /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/controlfile/o1_mf_djpc35l1_.ctl deleted
auxiliary instance file tspitr_DgAf_81953.dmp deleted

Recovering Table, Part 1 - The "Core" part
Recovering Table, Part 2 - The "Application" part
Recovering Table, Part 4 - Note

Recovering Table, Part 2 - The "Core" part

The script I used to recover the table is:

run {
2> recover table PCMS_ADM.PIANIFICAZIONE_MANUTENZIONE
3> until time "to_date('02/05/2017 17:00:00','dd/mm/yyyy hh24:mi:ss')"
4> auxiliary destination '/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp'
5> remap table PCMS_ADM.PIANIFICAZIONE_MANUTENZIONE:PIANIFICAZIONE_MAN_BCK;
6> }

The table recovery phase is split into 2 logical steps:
  1. Restore the "core" part
  2. Restore the "application" part
The log of the recovery is here.
  • Restore the Core part

During this step, RMAN will restore an auxiliary database with the following components:
  • SYSTEM tablespace
  • SYSAUX tablespace
  • UNDO tablespaces
  • TEMP tablespace
  • Archived log

Regardless of whether the database is RAC or not, the auxiliary database will be a single instance. In my case, the database (RAC) name is "PCMSp" and the 2 instances are "PCMSp1" and "PCMSp2". The auxiliary database will have these parameters:

SID='DgAf'
db_name=PCMSP
db_unique_name=DgAf_pitr_PCMSP

(these parameters are chosen automatically by RMAN). Also, the UNDO tablespaces of all the instances will be restored (in my case, 2 instances mean 2 UNDO tablespaces). Transparently, RMAN will run the following scripts:

################
Restore the controlfile
################

{
# set requested point in time
set until  time "to_date('02/05/2017 17:00:00','dd/mm/yyyy hh24:mi:ss')";
# restore the controlfile
restore clone controlfile;

# mount the controlfile
sql clone 'alter database mount clone database';

# archive current online log 
sql 'alter system archive log current';
}

###################
Restore the "core" datafile
###################

{
# set requested point in time
set until  time "to_date('02/05/2017 17:00:00','dd/mm/yyyy hh24:mi:ss')";
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile  1 to new;
set newname for clone datafile  2 to new;
set newname for clone datafile  5 to new;
set newname for clone datafile  6 to new;
set newname for clone datafile  7 to new;
set newname for clone datafile  8 to new;
set newname for clone datafile  3 to new;
set newname for clone datafile  4 to new;
set newname for clone tempfile  1 to new;
set newname for clone tempfile  2 to new;
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  1, 2, 5, 6, 7, 8, 3, 4;

switch clone datafile all;
}

Just to be clear, these are the datafile mappings:

00001 to /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_system_%u_.dbf
00002 to /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_system_%u_.dbf
00003 to /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_sysaux_%u_.dbf
00004 to /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_sysaux_%u_.dbf
00005 to /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_undotbs1_%u_.dbf
00006 to /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_undotbs1_%u_.dbf
00007 to /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_undotbs2_%u_.dbf
00008 to /FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/datafile/o1_mf_undotbs2_%u_.dbf


#############
Recover database
#############

{
# set requested point in time
set until  time "to_date('02/05/2017 17:00:00','dd/mm/yyyy hh24:mi:ss')";
# online the datafiles restored or switched
sql clone "alter database datafile  1 online";
sql clone "alter database datafile  2 online";
sql clone "alter database datafile  5 online";
sql clone "alter database datafile  6 online";
sql clone "alter database datafile  7 online";
sql clone "alter database datafile  8 online";
sql clone "alter database datafile  3 online";
sql clone "alter database datafile  4 online";
# recover and open database read only
recover clone database tablespace  "SYSTEM", "UNDOTBS1", "UNDOTBS2", "SYSAUX";
sql clone 'alter database open read only';
}

During this step, the archived logs will be restored. After the archived logs are applied, the “DgAf” (the auxiliary) database can be opened and the restore of the "application" part can  start.

####################
Start of Auxiliary database
####################

sql statement: alter database open read only
{
   sql clone "create spfile from memory";
   shutdown clone immediate;
   startup clone nomount;
   sql clone "alter system set  control_files = 
  ''/FS_DATAPUMP_X_MIGRAZIONE/RECOVER_PCMSp/PCMSP/controlfile/o1_mf_djpc35l1_.ctl'' comment=
''RMAN set'' scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
# mount database
sql clone 'alter database mount clone database';
}