Saturday, December 20, 2014

Does V$SESSION_LONGOPS really give me the session's remaining time?

V$SESSION_LONGOPS will NOT (in general) tell you how much longer a particular session -- or even statement -- is going to run.

In fact, it can't. Stated this way, the question (applied to sessions, anyway) is equivalent to a classic problem in computer science, known as the HALTING PROBLEM -- one which is (fairly easily) proven to be unsolvable.

[Sticking to the "theory" side for just a moment more -- it should be provable that any read-consistent single SQL statement *will* execute in a finite amount of time, provided, at least, that it makes no calls to PL/SQL. But "finite" can still be very, very large, as anybody who has written a cartesian join across 3 or 4 very large tables has quickly learned. In any case, once we start talking about sessions (or even PL/SQL calls), all bets are off -- there is no way to say with certainty that an arbitrary job will ever complete.]

Forgetting esoteric "theory", and just sticking to the nuts and bolts, we have the following problems and limitations around LONGOPS:

(*) Only a few selected operations (for example, full scans) are considered "eligible" for consideration as "LONGOPS".

(*) Even when your plan has an eligible operation, LONGOPS are only tracked in this view once certain criteria (e.g., elapsed time, object size, etc.) have been met.

(*) A session can execute many statements. Once the LONGOP you are watching has completed and (presumably) that statement finishes, the session can execute MORE statements -- possibly with MORE LONGOPS. Or not.

(*) A single statement can have many LONGOPS. Once the LONGOP you are watching has completed, the same statement might -- or might not -- begin executing ANOTHER longop. How many will there be? It's really hard to know.


The best we can say about V$SESSION_LONGOPS is that it can give you a lower-bound estimate for runtime.

That is, when you are monitoring a LONGOP that is projected to take another 90 minutes, then it is a pretty good bet that the current statement will PROBABLY take AT LEAST 90 more minutes to complete. (That said, this is not a lot more reliable than watching the progress meter for file-copies on Windows. Sometimes the projected times are pretty accurate, and other times, they are nowhere near.)
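With those caveats in mind, the classic monitoring query is still worth having at hand. A minimal sketch (the filters simply hide operations that have already finished):

select sid, serial#, opname, target, sofar, totalwork,
       round(sofar/totalwork*100,1) pct_done,
       time_remaining
  from v$session_longops
 where totalwork > 0
   and sofar < totalwork
 order by time_remaining desc;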

dbms_application_info.set_session_longops
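This call is the supported way to publish your own progress rows into V$SESSION_LONGOPS from batch code. A minimal sketch -- the operation name, units and loop are illustrative only:

DECLARE
  l_rindex BINARY_INTEGER := dbms_application_info.set_session_longops_nohint;
  l_slno   BINARY_INTEGER;
  l_total  CONSTANT PLS_INTEGER := 100;  -- hypothetical total units of work
BEGIN
  FOR i IN 1 .. l_total LOOP
    -- ... one unit of real work would go here ...
    dbms_application_info.set_session_longops(
      rindex    => l_rindex,   -- keeps pointing at "our" row after the first call
      slno      => l_slno,
      op_name   => 'my batch job',
      sofar     => i,
      totalwork => l_total,
      units     => 'steps');
  END LOOP;
END;
/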

In Oracle 11g and beyond, v$sql_monitor serves as a replacement for the cumbersome v$session_longops view for monitoring long-running SQL. The best news is that statistics collection is automatic in 11g with v$sql_plan_monitor / v$sql_monitor.
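A sketch of what that looks like in practice (real-time SQL monitoring requires the Tuning Pack license; the column list below is just a typical subset):

select sql_id,
       status,
       username,
       sql_exec_start,
       round(elapsed_time/1e6,1) elapsed_sec,
       round(cpu_time/1e6,1)     cpu_sec,
       buffer_gets
  from v$sql_monitor
 where status = 'EXECUTING'
 order by sql_exec_start;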
----------------------------------------------------------------------------------------------------------------------
Also:
_sqlmon_max_plan: Default = 20 per CPU. This hidden parameter establishes a maximum number of plan entries that can be monitored.
_sqlmon_max_planlines: Default = 300. This hidden parameter establishes the number of plan lines beyond which a plan cannot be monitored.
----------------------------------------------------------------------------------------------------------------------
Also (for long-running operations):

Hidden parameter _SQLEXEC_PROGRESSION_COST

This parameter controls the population of the V$SESSION_LONGOPS view by long-running queries. The view is used to monitor the progress of queries that run for a long duration. Queries that cost more than the value that has been set are identified for monitoring. Progression monitoring involves overhead and may affect performance. The default value is 1000, which may prevent SQL statements from being shared! Setting it to 0 will turn off the monitoring.
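For example, turning the monitoring off entirely would look like this (underscore parameters should normally only be changed on the advice of Oracle Support):

SQL> alter system set "_sqlexec_progression_cost"=0 scope=spfile;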

Wednesday, December 3, 2014

Script to find the time between redo log switches

In general, you want the redo logs to be large enough that you are not constantly swapping logs (generally not more than every 15 or 20 minutes) and small enough that you aren't risking too much data should there be a catastrophic server failure (assuming that archived redo logs are written to a different server and that you aren't using something like DataGuard to replicate the redo to a backup server in real time).


select b.recid,
       to_char(b.first_time,'DD/MM/YY HH24:MI:SS') start_time,
       a.recid,
       to_char(a.first_time,'DD/MM/YY HH24:MI:SS') end_time,
       round(((a.first_time - b.first_time) * 24) * 60, 2) minutes
from v$log_history a, v$log_history b
where a.recid = b.recid + 1
order by a.first_time asc;


Who is generating redo logs now?
One of the first questions a firefighting DBA faces is: "Who is generating redo logs at this moment?" The idea is to determine the action on an ad hoc basis. For that I use the following script, which I call top_redo.sql:
/* -----------------------------------------------------------------------------
 Filename: top_redo.sql
  
 CR/TR#  :

 Purpose : Shows current redo logs generation info (RAC-non RAC environment)
            
 Date    : 12.08.2008.
 Author  : Damir Vadas, damir.vadas@gmail.com
  
 Remarks : run as privileged user

 Changes (DD.MM.YYYY, Name, CR/TR#):
--------------------------------------------------------------------------- */
col machine for a15
col username for a10
col redo_MB for 999G990 heading "Redo |Size MB"
column sid_serial for a13;

select b.inst_id,
       lpad((b.SID || ',' || lpad(b.serial#,5)),11) sid_serial,
       b.username,
       machine,
       b.osuser,
       b.status,
       a.redo_mb 
from (select n.inst_id, sid,
             round(value/1024/1024) redo_mb
        from gv$statname n, gv$sesstat s
        where n.inst_id=s.inst_id
              and n.name = 'redo size'
              and s.statistic# = n.statistic#
        order by value desc
     ) a,
     gv$session b
where b.inst_id=a.inst_id
  and a.sid = b.sid
and   rownum <= 30
;

PROMPT Top 30 from gv$sesstat view according to generated redo logs
Result is something like:
SQL> @top_redo
                                                                                               Redo
   INST_ID SID_SERIAL    USERNAME   MACHINE         OSUSER                         STATUS    Size MB
---------- ------------- ---------- --------------- ------------------------------ -------- --------
         1   788,    1              iis1            oracle                         ACTIVE      2,073
         4   788,    1              iis4            oracle                         ACTIVE      1,928
         1   792,    1              iis1            oracle                         ACTIVE      1,168
    



Top 30 from gv$sesstat view according to generated redo logs
SQL>
If you want to concentrate on real Oracle users (and exclude core Oracle processes from the result), add the next condition to the outer where clause:
and b.username is not null
When and how much redo log generation occurred?
Besides the current activity, a wider analysis/comparison over time is often even more interesting. So questions like:
When do we have most of the redo log generation?
Where was the peak of log generation?
Did we have any "strange" redo log generation?
need a slightly different approach: query the v$log_history view. It holds historic data whose retention period is initially controlled by MAXLOGHISTORY, defined while creating the database (fixed, not changeable without recreating the control file), and by CONTROL_FILE_RECORD_KEEP_TIME, which is changeable. In my case it was set to 31 days (the exact number of days of the longest month):
SQL> show parameter CONTROL_FILE_RECORD_KEEP_TIME

NAME_COL_PLUS_SHOW_PARAM       TYPE        VALUE_COL_PLUS_SHOW_PARAM
------------------------------ ----------- ---------------------------------------------
control_file_record_keep_time  integer     31
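If that retention is too short for your analysis, the parameter can be raised online (the value below is just an example, in days):

SQL> alter system set control_file_record_keep_time=45 scope=both;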
The script to gather data through the mentioned period looks like this:
/* -----------------------------------------------------------------------------
 Filename: rl.sql
  
 CR/TR#  :

 Purpose : redo logs distribution per hours on each day ...

 Date    : 07.08.2007.
 Author  : Damir Vadas, damir.vadas@hypo-alpe-adria.com
  
 Remarks : run as privileged user

 Changes (DD.MM.YYYY, Name, CR/TR#):
--------------------------------------------------------------------------- */
set pagesize 120;
set linesize 200;
col day for a8;
spool rl.txt
PROMPT Archive log distribution per hours on each day ...

select
  to_char(first_time,'YY-MM-DD') day,
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'00',1,0)),'999') "00",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'01',1,0)),'999') "01",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'02',1,0)),'999') "02",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'03',1,0)),'999') "03",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'04',1,0)),'999') "04",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'05',1,0)),'999') "05",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'06',1,0)),'999') "06",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'07',1,0)),'999') "07",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'08',1,0)),'999') "08",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'09',1,0)),'999') "09",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'10',1,0)),'999') "10",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'11',1,0)),'999') "11",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'12',1,0)),'999') "12",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'13',1,0)),'999') "13",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'14',1,0)),'999') "14",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'15',1,0)),'999') "15",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'16',1,0)),'999') "16",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'17',1,0)),'999') "17",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'18',1,0)),'999') "18",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'19',1,0)),'999') "19",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'20',1,0)),'999') "20",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'21',1,0)),'999') "21",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'22',1,0)),'999') "22",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'23',1,0)),'999') "23",
  COUNT(*) TOT
from v$log_history
group by to_char(first_time,'YY-MM-DD')
order by day
;
Result looks like:
SQL>@rl
Archive log distribution per hours on each day ...

DAY      00   01   02   03   04   05   06   07   08   09   10   11   12   13   14   15   16   17   18   19   20   21   22   23     TOT
-------- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- -----
11-01-14    0    0   23   16   17   16   16   16   22   39   23   18   22   18   18   18   22   18   19   16   19   16   16   17   425
11-01-15   24   23   17   18   21   16   16   16   22   18   20   19   19   18   18   21   20   18   20   16   18   16   16   17   447
11-01-16   40   39   43   24   17   16   16   16   22   18   18   21   21   18   19   19   22   18   19   17   18   16   16   16   509



SQL>
Redo log generation is grouped by hours, where the last column (TOT) is the sum of all redo log switches in one day. According to this it is more than obvious where redo log generation was highest, so our interest may be focused on the presented point in time.
How much is that in MB?
Total redo log size (and, accordingly, archived log size) cannot be computed from the previous query, because not all redo log switches occur when a redo log is full. For that you might want to use this very easy query:
SQL> select sum(value)/1048576 redo_MB from sys.gv_$sysstat where name = 'redo size';

   REDO_MB
----------
1074623.75

SQL>
If you want to calculate on instance grouping, then use this:
SQL> select inst_id, sum(value)/1048576 redo_MB from sys.gv_$sysstat where name = 'redo size'
  2  group by inst_id;

   INST_ID    REDO_MB
---------- ----------
         1 370325.298
         2   4712.567
         4 405129.283
         3 294457.100

SQL>
Both queries work on single instances as well.
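If archiving is enabled, you can also get a day-by-day breakdown in MB from v$archived_log -- a sketch that counts only redo which was actually archived:

select trunc(completion_time) day,
       round(sum(blocks * block_size)/1048576) archived_mb
  from v$archived_log
 group by trunc(completion_time)
 order by day;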
Which segments are generating redo logs?
After we have found our point of interest (in my case the period with the most redo log generation), it is very useful to find out which segments (not only tables) are causing the redo log generation. For that we need the "dba_hist" based tables, part of the Oracle AWR (Automated Workload Repository), whose usage I have described in the topic Automated AWR reports in Oracle 10g/11g. For this example I'll focus on data from the time period 11-01-28 13:00 to 11-01-28 14:00. The query for such a task would be:
SELECT to_char(begin_interval_time,'YY-MM-DD HH24') snap_time,
        dhso.object_name,
        sum(db_block_changes_delta) BLOCK_CHANGED
  FROM dba_hist_seg_stat dhss,
       dba_hist_seg_stat_obj dhso,
       dba_hist_snapshot dhs
  WHERE dhs.snap_id = dhss.snap_id
    AND dhs.instance_number = dhss.instance_number
    AND dhss.obj# = dhso.obj#
    AND dhss.dataobj# = dhso.dataobj#
    AND begin_interval_time BETWEEN to_date('11-01-28 13:00','YY-MM-DD HH24:MI')
                                AND to_date('11-01-28 14:00','YY-MM-DD HH24:MI')
  GROUP BY to_char(begin_interval_time,'YY-MM-DD HH24'),
           dhso.object_name
  HAVING sum(db_block_changes_delta) > 0
ORDER BY sum(db_block_changes_delta) desc ;
A reduced result from the previously shown query would be:
SNAP_TIME   OBJECT_NAME                    BLOCK_CHANGED
----------- ------------------------------ -------------
11-01-28 13 USR_RACUNI_MV                        1410112
11-01-28 13 TROK_TAB_RESEAU_I                     734592
11-01-28 13 TROK_VOIE_I                           638496
11-01-28 13 TROK_DATUM_ULAZA_I                    434688
11-01-28 13 TROK_PAIEMENT_I                       428544
11-01-28 13 D_DPX_VP_RAD                          351760
11-01-28 13 TROK_SVE_OK_I                         161472
11-01-28 13 I_DATPBZ_S002                         135296
11-01-28 13 IDS2_DATUM_I                          129904
11-01-28 13 IDS2_PZNBR                            129632
11-01-28 13 IDS2_IDS1_FK_I                        128848
11-01-28 13 IDS2_DATTRAN_I                        127440
11-01-28 13 IDS2_DATSOC_I                         127152
11-01-28 13 IDS2_VRSTA_PROD_I                     122816
...
Let us focus on the first segment, "USR_RACUNI_MV", the segment with the highest number of changed blocks (which directly means the highest redo log generation). Just for information, this is a MATERIALIZED VIEW.
What SQL was causing the redo log generation?
Now that we know when, how much and what, it is time to find out how the redo logs were generated. In the next query "USR_RACUNI_MV" and the mentioned period are hard coded, because we are focused on them. Note that SQL statements starting with "SELECT" are not of interest, because they do not make any changes.
SELECT to_char(begin_interval_time,'YYYY_MM_DD HH24') WHEN,
       dbms_lob.substr(sql_text,4000,1) SQL,
       dhss.instance_number INST_ID,
       dhss.sql_id,
       executions_delta exec_delta,
       rows_processed_delta rows_proc_delta
  FROM dba_hist_sqlstat dhss,
       dba_hist_snapshot dhs,
       dba_hist_sqltext dhst
  WHERE upper(dhst.sql_text) LIKE '%USR_RACUNI_MV%'
    AND ltrim(upper(dhst.sql_text)) NOT LIKE 'SELECT%'
    AND dhss.snap_id=dhs.snap_id
    AND dhss.instance_number=dhs.instance_number
    AND dhss.sql_id=dhst.sql_id
    AND begin_interval_time BETWEEN to_date('11-01-28 13:00','YY-MM-DD HH24:MI')
                                AND to_date('11-01-28 14:00','YY-MM-DD HH24:MI')
;
Result is like:
WHEN          SQL                                               inst_id       sql_id  exec_delta rows_proc_delta
------------- ------------------------------------------------- ------- ------------- ---------- ---------------
2011_01_28 13 DECLARE                                                 1 duwxbg5d1dw0q          0                0
                job BINARY_INTEGER := :job;
                next_date DATE := :mydate; 
                broken BOOLEAN := FALSE;
              BEGIN
                dbms_refresh.refresh('"TAB"."USR_RACUNI_MV"');
                :mydate := next_date;
                IF broken THEN :b := 1;
                ELSE :b := 0;
                END IF;
              END;                                                   
2011_01_28 13 delete from "TAB"."USR_RACUNI_MV"                       1 5n375fxu0uv89          0                0
For both of these examples it was impossible to find out the number of rows changed by the operation that was performed. Let us see the output of another example (the NC_TRANSACTION_OK_T table), where we meet DDL that generates redo logs!
WHEN          SQL                                               inst_id       sql_id  exec_delta rows_proc_delta
------------- ------------------------------------------------- ------- ------------- ---------- ---------------
2011_01_28 13 alter table TAB.NC_TRANSACTION_OK_T                     4 g5gvacc8ngnb8          0               0
              shrink space cascade                               
If you are focused on the pure number of changes, then you might want to run a query where inst_id and sql_id are irrelevant (excluded from the query). Here is a slightly modified version of the previous example, for the "Z_PLACENO" segment (a plain Oracle table):
SELECT when, sql, SUM(exec_delta) executions, SUM(rows_proc_delta) rows_processed
FROM (
      SELECT to_char(begin_interval_time,'YYYY_MM_DD HH24') when,
             dbms_lob.substr(sql_text,4000,1) sql,
             dhss.instance_number inst_id,
             dhss.sql_id,
             sum(executions_delta) exec_delta,
             sum(rows_processed_delta) rows_proc_delta
        FROM dba_hist_sqlstat dhss,
             dba_hist_snapshot dhs,
             dba_hist_sqltext dhst
        WHERE upper(dhst.sql_text) LIKE '%Z_PLACENO%'
          AND ltrim(upper(dhst.sql_text)) NOT LIKE 'SELECT%'
          AND dhss.snap_id=dhs.snap_id
          AND dhss.instance_Number=dhs.instance_number
          AND dhss.sql_id = dhst.sql_id
          AND begin_interval_time BETWEEN to_date('11-01-25 14:00','YY-MM-DD HH24:MI')
                                      AND to_date('11-01-25 15:00','YY-MM-DD HH24:MI')
        GROUP BY to_char(begin_interval_time,'YYYY_MM_DD HH24'),
             dbms_lob.substr(sql_text,4000,1),
             dhss.instance_number,
             dhss.sql_id
)
group by when, sql;
Result is like:
WHEN          SQL                                                                    EXECUTIONS ROWS_PROCESSED
------------- ---------------------------------------------------------------------- ---------- ---------------
2011_01_25 14 DELETE FROM Z_PLACENO                                                           4         7250031
2011_01_25 14 INSERT INTO Z_PLACENO(OBP_ID,MT_SIFRA,A_TOT)                                    4         7250830
              SELECT P.OBP_ID,P.MT_SIFRA,SUM(P.OSNOVICA)  
                FROM (SELECT OPI.OBP_ID,
                              OPO.MT_SIFRA,
                              SUM(OPO.IZNKN) OSNOVICA
                        WHERE OPI.OBP_ID = OPO.OPI_OBP_ID 
                          AND OPI.RBR = OPO.OPI_RBR 
                          AND NVL(OPI.S_PRETPOREZA,'O') IN ( 'O','N','A','Z','S')
                        GROUP BY OPI.OBP_ID,OPO.MT_SIFRA
                      )
Here you can see directly the number of executions and the number of rows involved.
Query based on the segment directly
Sometimes you do not want to focus on a period, so your investigation may start with the segment as the starting point. For such tasks I use the next query. It is a small variation of the previous example, where the "USR_RACUNI_MV" segment is hard coded.
SELECT to_char(begin_interval_time,'YY-MM-DD HH24') snap_time,
       sum(db_block_changes_delta)
  FROM dba_hist_seg_stat dhss,
       dba_hist_seg_stat_obj dhso,
       dba_hist_snapshot dhs
  WHERE dhs.snap_id = dhss.snap_id
    AND dhs.instance_number = dhss.instance_number
    AND dhss.obj# = dhso.obj#
    AND dhss.dataobj# = dhso.dataobj#
    AND dhso.object_name = 'USR_RACUNI_MV'
  GROUP BY to_char(begin_interval_time,'YY-MM-DD HH24')
  ORDER BY to_char(begin_interval_time,'YY-MM-DD HH24');
Reduced result is:
   SNAP_TIME   SUM(DB_BLOCK_CHANGES_DELTA)
   ----------- ---------------------------
   ...
   11-01-28 11                     1224240
   11-01-28 12                      702880
   11-01-28 13                     1410112
   11-01-28 14                      806416
   11-01-28 15                     2008912
   11-01-28 16                     1103648
   ...

Monday, December 1, 2014

Let the Data Guard Broker control LOG_ARCHIVE_* parameters!

When using the Data Guard Broker, you don't need to set any LOG_ARCHIVE_* parameter for the databases that are part of your Data Guard configuration. The broker is doing that for you. Forget about what you may have heard about VALID_FOR – you don't need that with the broker. Actually, setting any of the LOG_ARCHIVE_* parameters with an enabled broker configuration might even confuse the broker and lead to warning or error messages. Let's look at a typical example with the redo log transport mode. There is a broker configuration enabled with one primary database prima and one physical standby physt. The broker config files are mirrored on each site, and spfiles are in use so that the broker (the DMON background process, to be precise) can access them:
[Figure: Data Guard Broker overview]

When connecting to the broker, you should always connect to a DMON running on the primary site. The only exception to this rule is when you want to do a failover: that must be done connected to the standby site. I will now change the redo log transport mode to sync for the standby database. It helps to think of the log transport mode as an attribute (respectively a property) of a certain database in your configuration, because that is how the broker sees it, too.

[oracle@uhesse1 ~]$ dgmgrl sys/oracle@prima
DGMGRL for Linux: Version 11.2.0.3.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> edit database physt set property logxptmode=sync;
Property "logxptmode" updated
In this case, physt is a standby database that is receiving redo from primary database prima, which is why the LOG_ARCHIVE_DEST_2 parameter of that primary was changed accordingly:
[oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Sep 30 17:21:41 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> show parameter log_archive_dest_2

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2                   string      service="physt", LGWR SYNC AFFIRM
                                                 delay=0 optional compression=disable
                                                 max_failure=0 max_connections=1
                                                 reopen=300 db_unique_name="physt"
                                                 net_timeout=30,
                                                 valid_for=(all_logfiles,primary_role)

[Figure: Configuration for physt]
The mirrored broker configuration files on all involved database servers now contain that logxptmode property. No new entry in the spfile of physt is required. The present configuration now allows raising the protection mode:
DGMGRL> edit configuration set protection mode as maxavailability;
Succeeded.
The next broker command is done to support a switchover later on while keeping the higher protection mode:
DGMGRL> edit database prima set property logxptmode=sync;
Property "logxptmode" updated
Notice that this doesn’t lead to any spfile entry; only the broker config files store that new property. In case of a switchover, prima will then receive redo with sync.
[Figure: Configuration for prima]

Now let's do that switchover and see how the broker automatically ensures that the new primary physt will ship redo to prima:

DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxAvailability
  Databases:
    prima - Primary database
    physt - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> switchover to physt;
Performing switchover NOW, please wait...
New primary database "physt" is opening...
Operation requires shutdown of instance "prima" on database "prima"
Shutting down instance "prima"...
ORACLE instance shut down.
Operation requires startup of instance "prima" on database "prima"
Starting instance "prima"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "physt"
All I did was the switchover command, and without me specifying any LOG_ARCHIVE* parameter, the broker did it all, as this picture shows:

[Figure: Configuration after switchover]

In particular, the spfile of the physt database now got the new entry:

[oracle@uhesse2 ~]$ sqlplus sys/oracle@physt as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Oct 14 15:43:41 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> show parameter log_archive_dest_2

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2                   string      service="prima", LGWR SYNC AFFIRM
                                                 delay=0 optional compression=disable
                                                 max_failure=0 max_connections=1
                                                 reopen=300 db_unique_name="prima"
                                                 net_timeout=30,
                                                 valid_for=(all_logfiles,primary_role)
Not only is it unnecessary to specify any of the LOG_ARCHIVE* parameters, it is actually a bad idea to do so. The guideline here is: let the broker control them! Otherwise it will at least complain about it with warning messages. So, as an example of what you should not do:
[oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Oct 14 15:57:11 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> alter system set log_archive_trace=4096;

System altered.
Although that is the correct syntax, the broker now gets confused, because that parameter setting is not in line with what is in the broker config files. Accordingly that triggers a warning:
DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxAvailability
  Databases:
    physt - Primary database
    prima - Physical standby database
      Warning: ORA-16792: configurable property value is inconsistent with database setting

Fast-Start Failover: DISABLED

Configuration Status:
WARNING

DGMGRL> show database prima statusreport;
STATUS REPORT
       INSTANCE_NAME   SEVERITY ERROR_TEXT
               prima    WARNING ORA-16714: the value of property LogArchiveTrace is inconsistent with the database setting
In order to resolve that inconsistency, I will do it also with a broker command – which is what I should have done instead of the alter system command in the first place:
DGMGRL> edit database prima set property LogArchiveTrace=4096;
Property "logarchivetrace" updated
DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxAvailability
  Databases:
    physt - Primary database
    prima - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS
Thanks to a question from Noons (I really appreciate comments!), let me add the complete list of initialization parameters that the broker is supposed to control. Most, but not all, are LOG_ARCHIVE* parameters; a quick way to inspect them follows the list.
LOG_ARCHIVE_DEST_n
LOG_ARCHIVE_DEST_STATE_n
ARCHIVE_LAG_TARGET
DB_FILE_NAME_CONVERT
LOG_ARCHIVE_FORMAT
LOG_ARCHIVE_MAX_PROCESSES
LOG_ARCHIVE_MIN_SUCCEED_DEST
LOG_ARCHIVE_TRACE
LOG_FILE_NAME_CONVERT
STANDBY_FILE_MANAGEMENT
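To inspect the broker-side values of all these properties for one database, you can use (output omitted here):

DGMGRL> show database verbose prima;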

Oracle database Upgrade from 11.2.0.1 to 11.2.0.3 with Dataguard Broker in Place

INTRODUCTION
The purpose of this document is to guide the reader through the steps of upgrading an Oracle Database to a higher release when a Physical Standby database is in place. In this example we use an Oracle RDBMS 11.2.0.1 release that is upgraded to 11.2.0.3.
Starting point is a configured Oracle Data Guard environment, managed by Data Guard broker release 11.2.0.1.
STEP 1: DATABASE Software
We install the new Oracle software release into its own $ORACLE_HOME for this upgrade; we use the out-of-place upgrade principle. This step can be done in advance, before we start the upgrade. Follow the Oracle installation manual to install the software and select the software-only installation.
STEP 2: DISABLE THE BROKER CONFIGURATION
We start with turning off the Data Guard broker configuration. During the migration, we do not use the Data Guard broker.
DGMGRL> connect /
Connected.
DGMGRL> disable configuration
Disabled.
DGMGRL> exit
In addition, we disable the broker background process DMON on both the primary and the standby instance.
On the primary:
SQL> alter system set dg_broker_start=false scope=Both;
System altered.
On standby:
SQL> alter system set dg_broker_start=false scope=Both;
System altered.
STEP 3: COPY FILES TO THE NEW $ORACLE_HOME
When the broker is disabled, we can copy the required files from the old to the new $ORACLE_HOME: think of the spfile, the password file, tnsnames.ora and listener.ora.
Of course, perform this operation on both nodes, first the files from $ORACLE_HOME/dbs and then those from $ORACLE_HOME/network/admin/. Besides copying the files, also modify the path mentioned in the SID_LIST_LISTENER entry in the listener.ora file.
Primary host:
$ cp -p $ORACLE_HOME/dbs/orapwproda /u01/app/oracle/product/11.2.0/dbhome_2/dbs/
$ cp -p $ORACLE_HOME/dbs/dr*.dat /u01/app/oracle/product/11.2.0/dbhome_2/dbs/
$ cp -p $ORACLE_HOME/dbs/spfileproda.ora /u01/app/oracle/product/11.2.0/dbhome_2/dbs/
$ cp -p $ORACLE_HOME/network/admin/tnsnames.ora /u01/app/oracle/product/11.2.0/dbhome_2/network/admin/
$ cp -p $ORACLE_HOME/network/admin/listener.ora /u01/app/oracle/product/11.2.0/dbhome_2/network/admin/
$ vi listener.ora
Standby host:
$ cp -p $ORACLE_HOME/dbs/orapwproda /u01/app/oracle/product/11.2.0/dbhome_2/dbs/
$ cp -p $ORACLE_HOME/dbs/dr*.dat /u01/app/oracle/product/11.2.0/dbhome_2/dbs/
$ cp -p $ORACLE_HOME/dbs/spfileproda.ora /u01/app/oracle/product/11.2.0/dbhome_2/dbs/
$ cp -p $ORACLE_HOME/network/admin/tnsnames.ora /u01/app/oracle/product/11.2.0/dbhome_2/network/admin/
$ cp -p $ORACLE_HOME/network/admin/listener.ora /u01/app/oracle/product/11.2.0/dbhome_2/network/admin/
$ vi listener.ora
STEP 4: START THE LISTENER FROM THE NEW $ORACLE_HOME
We want to start the listener from the new $ORACLE_HOME. Stop the current listener and restart it from the new $ORACLE_HOME. When you perform this action, a small interruption is possible for new sessions connecting to the primary database.
Perform this action for both environments.
On the primary host:
$ lsnrctl stop
$ /u01/app/oracle/product/11.2.0/dbhome_2/bin/lsnrctl start
On the standby host:
$ lsnrctl stop
$ /u01/app/oracle/product/11.2.0/dbhome_2/bin/lsnrctl start
If everything is correct, the listener is now running version 11.2.0.3.
STEP 5: START PHYSICAL STANDBY FROM THE NEW $ORACLE_HOME
Now that the files are copied and the listener is running from the 11.2.0.3 version, stop the physical standby database and start it from the new ORACLE_HOME.
Warning: The physical standby database must not be running with the Active Data Guard option; the only allowed mode is mounted, with Redo Apply active.
Stop the physical standby:
$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Tue Dec 20 11:06:09 2011
Copyright (c) 1982, 2009, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 – Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Modify the oratab to point the SID at the new $ORACLE_HOME:
$ vi /etc/oratab
proda:/u01/app/oracle/product/11.2.0/dbhome_2:N
agent:/u01/app/oracle/agent11g:N
Use oraenv to switch to the new ORACLE_HOME, then start the physical standby database in mount mode and activate the Redo Apply service. Once started, the database instance is running the 11.2.0.3 software.
$ . oraenv
ORACLE_SID = [proda] ? proda
The Oracle base remains unchanged with value /u01/app/oracle
$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.3.0 Production on Tue Dec 20 11:11:20 2011
Copyright (c) 1982, 2011, Oracle.  All rights reserved.
Connected to an idle instance.
SQL> startup mount
ORACLE instance started.
Total System Global Area  414298112 bytes
Fixed Size                  1345324 bytes
Variable Size             352323796 bytes
Database Buffers           54525952 bytes
Redo Buffers                6103040 bytes
Database mounted.
SQL> alter database recover managed standby database using current logfile disconnect from session;
Database altered.
SQL> exit
STEP 6: UPGRADE THE PRIMARY DATABASE
Now that the physical standby database is running in the new ORACLE_HOME, we can start to upgrade the primary database. The modifications on the primary database are automatically sent to and applied on the standby database.
To upgrade the primary database you must follow the Oracle Upgrade Guide. Upgrading a database in a Data Guard environment does not differ from upgrading any other database.
Start the dbua from the new $ORACLE_HOME and follow the wizard to perform the upgrade.
$ /u01/app/oracle/product/11.2.0/dbhome_2/bin/dbua
Remark: Do not turn off archive log mode during the upgrade.
After the upgrade, modify the compatible parameter if required.
Validate that the parameters are correct; it may happen that they are not. I noticed during the upgrade that log_archive_dest_# was invalid after disabling the broker configuration. The usage of the Database Upgrade Assistant (DBUA) is not part of this blog.
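Raising the compatibility setting afterwards would look like this (a sketch; note that raising COMPATIBLE is irreversible and only takes effect after a restart):

SQL> alter system set compatible='11.2.0.3.0' scope=spfile;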
STEP 7: ENABLE BROKER CONFIGURATION.
When the primary is upgraded, the physical standby is automatically upgraded as well. We can now re-enable the broker configuration. We start the broker process DMON on both the primary and the physical standby database.
On the primary host:
SQL> alter system set dg_broker_start=true scope=both;
System altered.
On the standby host:
SQL>  alter system set dg_broker_start=true scope=both;
System altered.
Now we can enable the broker configuration again.
DGMGRL> connect /
Connected.
DGMGRL> enable configuration
Enabled.
DGMGRL> exit
Note: If you copy the broker files after you have started DMON, you need to restart both the primary and the standby database to pick up the broker files. Recreating a new configuration is also possible.
Validate the configuration, including the properties, with special attention to the static connect properties, and execute a switchover to validate that everything is still working as designed.
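A minimal validation sequence could look like this (the standby name is a placeholder for your own configuration):

DGMGRL> show configuration verbose;
DGMGRL> switchover to <standby_db_name>;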
STEP 8: Remove the old ORACLE_HOME.
Remove the old software by running the deinstall script.
$ cd /u01/app/oracle/product/11.2.0/dbhome_1/deinstall
$ ./deinstall
Checking for required files and bootstrapping …
Please wait …