1 Oracle Database 11g Release 2 (11.2.0.4) New Features
This chapter contains descriptions of all of the features that are new to Oracle Database 11g Release 2 (11.2.0.4).

1.1 Oracle Data Redaction
This new database security feature is part of Oracle Advanced Security and prevents data columns (such as credit card numbers, U.S. Social Security numbers, and other sensitive or regulated data) from being displayed by applications. It is driven by declarative policies that can take into account database session factors and information passed by applications. Sensitive display data can be redacted at runtime on live production systems with minimal disruption to running applications and without altering the actual stored data. Different types of redaction are supported, including full, partial, random, and regular expression redaction. You can conceal entire data values or redact only part of the value. The functionality is implemented inside the database, so no separate installation is required.
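As an illustration, a redaction policy is declared with the DBMS_REDACT package. The sketch below shows a partial redaction of a 16-digit card number; the schema, table, and column names are hypothetical, and the format constant should be checked against the package documentation:

```sql
-- Hypothetical example: redact all but the last four digits of a
-- credit card column for every session (expression '1=1' always matches).
BEGIN
  DBMS_REDACT.ADD_POLICY(
    object_schema       => 'APP',          -- hypothetical schema
    object_name         => 'CUSTOMERS',    -- hypothetical table
    policy_name         => 'redact_card_no',
    column_name         => 'CARD_NO',      -- hypothetical column
    function_type       => DBMS_REDACT.PARTIAL,
    function_parameters => DBMS_REDACT.REDACT_CCN16_F12,
    expression          => '1=1');
END;
/
```

Queries against APP.CUSTOMERS would then return the CARD_NO column with the first twelve digits masked, while the stored value is unchanged.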
See Also:
Oracle Database Advanced Security Administrator's Guide for details

1.2 Trace File Analyzer and Collector
The Trace File Analyzer (TFA) and Collector, also known as TFA Collector, is a diagnostic collection utility to simplify diagnostic data collection on Oracle Clusterware, Oracle Grid Infrastructure and Oracle RAC systems.

Unlike similar solutions, the TFA Collector optimizes data gathering by providing a single source of management as well as various focus levels. Data for a whole cluster can be gathered from one node using one command and can be stored on a central server for further analysis and processing. The TFA Collector also allows for trimming data collection by focusing on certain components or relevant time frames only.
See Also:
Oracle Clusterware Administration and Deployment Guide for details

1.3 RACcheck - The Oracle RAC Configuration Audit Tool
RACcheck is designed to audit vital configuration settings for Oracle single-instance databases as well as Oracle Real Application Clusters (Oracle RAC) databases. It also includes checks for Oracle Clusterware, Oracle Automatic Storage Management (Oracle ASM) and Oracle Grid Infrastructure.

RACcheck provides best practices recommendations considering the whole stack, including Maximum Availability Architecture (MAA) configurations, and is therefore the ideal tool for regular health checks as well as pre- and post-upgrade best practices assessments.
See Also:
Oracle Real Application Clusters Administration and Deployment Guide for details

1.4 Database Replay Support for Database Consolidation
Database Replay now supports simultaneous execution of multiple database captures on a single consolidated database. Consolidated database replay supports scheduling of the individual replays, enabling investigation of various scenarios (for example, what happens if all individual workloads hit their peak utilization at the same time).

Consolidated replay provides the ability to test database performance for database consolidation projects, whether consolidating onto an Oracle database machine or other consolidated infrastructure.
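The scheduling described above is driven through the DBMS_WORKLOAD_REPLAY package. The sketch below reflects my understanding of the consolidated-replay calls and should be verified against the package reference; the directory object and capture subdirectory names are made up:

```sql
-- Hypothetical sketch: build a replay schedule combining two captures
-- so that their workloads run against one consolidated database.
DECLARE
  cap1 NUMBER;
  cap2 NUMBER;
BEGIN
  -- directory object holding the preprocessed captures (hypothetical name)
  DBMS_WORKLOAD_REPLAY.SET_REPLAY_DIRECTORY('CONS_REPLAY_DIR');
  DBMS_WORKLOAD_REPLAY.BEGIN_REPLAY_SCHEDULE('peak_overlap');
  cap1 := DBMS_WORKLOAD_REPLAY.ADD_CAPTURE('cap_oltp');   -- hypothetical subdirectory
  cap2 := DBMS_WORKLOAD_REPLAY.ADD_CAPTURE('cap_batch');  -- hypothetical subdirectory
  DBMS_WORKLOAD_REPLAY.END_REPLAY_SCHEDULE;
END;
/
```

The saved schedule can then be named when initializing the consolidated replay, which is how the "all workloads peak at once" scenario would be arranged.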
See Also:
Oracle Database Real Application Testing User's Guide for details

1.5 Optimization for Flashback Data Archive History Tables
When using flashback data archive to track changes on tables, you can now enable optimization of the corresponding history tables using the OPTIMIZE DATA clause when creating or altering a flashback data archive.

Optimization of flashback data archive history tables provides better storage efficiency and better performance for flashback queries on the change history, without additional intervention needed by the DBA.
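The clause appears directly in the CREATE/ALTER FLASHBACK ARCHIVE statements; the archive and tablespace names below are examples only:

```sql
-- Create an archive with history-table optimization enabled from the start.
CREATE FLASHBACK ARCHIVE fda1
  TABLESPACE fda_ts
  OPTIMIZE DATA
  RETENTION 1 YEAR;

-- Or switch optimization on for an existing archive.
ALTER FLASHBACK ARCHIVE fda1 OPTIMIZE DATA;
```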
See Also:
Oracle Database Advanced Application Developer's Guide for details

1.6 Desupported Features
The following features are desupported in Oracle Database 11g Release 2 (11.2):

- The -cleanupOBase flag of the deinstallation tool is desupported. There is no replacement for this flag.

- The DES, RC4, and MD5 algorithms are desupported.
See Also:
Oracle Database Upgrade Guide for details

1.7 New sqlnet.ora Parameter SSL_EXTENDED_KEY_USAGE
Starting with this release, you can use the SQLNET.SSL_EXTENDED_KEY_USAGE parameter in the sqlnet.ora file to select a Secure Sockets Layer certificate to be used automatically to authenticate clients. For example, suppose you have multiple certificates for a smart card but only one of the certificates has an extended key usage field of client authentication. In the application, a certificate chooser dialog box would appear, prompting the user to select the type of authentication. Because the type of authentication would always be for clients, the SQLNET.SSL_EXTENDED_KEY_USAGE parameter can enable the application to bypass this dialog box and automatically choose client authentication, leaving the user with fewer steps to perform.
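For the scenario described, the sqlnet.ora entry would take roughly the following form; the exact value string should be confirmed against the Advanced Security guide:

```
SSL_EXTENDED_KEY_USAGE = "client authentication"
```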
See Also:
Oracle Database Advanced Security Administrator's Guide for details

1.8 New PrimaryLostWriteAction Property
The new PrimaryLostWriteAction Data Guard broker configuration property determines what action is taken if a standby database detects that a lost write has occurred at the primary database.
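A broker configuration property like this is set from DGMGRL. The value shown below (FORCEFAILOVER, with CONTINUE as the other setting I am aware of) reflects my understanding and should be checked against the broker documentation:

```
DGMGRL> EDIT CONFIGURATION SET PROPERTY PrimaryLostWriteAction = 'FORCEFAILOVER';
```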
See Also:
Oracle Data Guard Broker for details

1.9 ENABLE_GOLDENGATE_REPLICATION for Oracle GoldenGate
The ENABLE_GOLDENGATE_REPLICATION initialization parameter controls services provided by the RDBMS for Oracle GoldenGate (both capture and apply services). Set this parameter to TRUE to enable the RDBMS services used by Oracle GoldenGate.

OPTIMIZER_DYNAMIC_SAMPLING also gains a new setting in this release. Dynamic sampling comes into play when a table does not have any statistics and the parameter is enabled. The default setting of 2 uses dynamic statistics if at least one table in the statement has no statistics, basing them on a sample of 64 blocks; the value can range between 2 and 10, with each increment doubling the number of blocks sampled. The new value of 11 means that the optimizer gathers dynamic statistics automatically whenever it deems them necessary, using a number of blocks it considers appropriate.
My testing has shown a couple of anomalies between the two versions the new setting exists on (11.2.0.4 and 12.1.0.1).

First, I set up a test table and copied it between environments so that it was consistent. I then gathered statistics, noted the block counts, and deleted the statistics.
Create table fred.ds as select * from dba_objects;
Create index fred.DS_IND1 on fred.ds(object_name);

select num_rows, avg_row_len, blocks
from dba_tables
where owner = 'FRED' and table_name = 'DS';

  NUM_ROWS AVG_ROW_LEN     BLOCKS
---------- ----------- ----------
     87970          98       1261

execute dbms_stats.delete_table_stats('FRED','DS');
alter system flush shared_pool;
alter session set optimizer_dynamic_sampling=11;
set lines 240 pages 0
alter session set tracefile_identifier = 'JOHN';
alter session set events '10053 trace name context forever, level 1';
set autotrace traceonly explain

select count(*) from fred.ds
where object_type='INDEX'
and object_id > 5000;
In an 11.2.0.3 database, using the default value of 2, I would see the following entries in the trace file:
*** 2013-12-08 10:05:25.364
** Executed dynamic sampling query:
    level : 2
    sample pct. : 5.000000
    actual sample size : 3842
    filtered sample card. : 228
    orig. card. : 102917
    block cnt. table stat. : 1260
    block cnt. for sampling: 1260
    max. sample block cnt. : 64
    sample block cnt. : 63
    min. sel. est. : 0.00050000
In both 11.2.0.4 and 12.1.0.1 that reporting remains the same for values other than the new one of 11.
In 11.2.0.4 using the default value of 2 we get an explain plan as follows:
Execution Plan
----------------------------------------------------------
Plan hash value: 161315987

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |    24 |   344   (1)| 00:00:05 |
|   1 |  SORT AGGREGATE    |      |     1 |    24 |            |          |
|*  2 |   TABLE ACCESS FULL| DS   |  3883 | 93192 |   344   (1)| 00:00:05 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("OBJECT_TYPE"='INDEX' AND "OBJECT_ID">5000)

Note
-----
   - dynamic sampling used for this statement (level=2)
In 11.2.0.4 using a value of 11 we get:
Plan hash value: 161315987

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |    24 |   344   (1)| 00:00:05 |
|   1 |  SORT AGGREGATE    |      |     1 |    24 |            |          |
|*  2 |   TABLE ACCESS FULL| DS   |  2878 | 69072 |   344   (1)| 00:00:05 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("OBJECT_TYPE"='INDEX' AND "OBJECT_ID">5000)

Note
-----
   - dynamic sampling used for this statement (level=11)
In 12.1.0.1 using a value of 11 we get:

Execution Plan
----------------------------------------------------------
Plan hash value: 161315987

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |    24 |   344   (1)| 00:00:01 |
|   1 |  SORT AGGREGATE    |      |     1 |    24 |            |          |
|*  2 |   TABLE ACCESS FULL| DS   |  4467 |  104K |   344   (1)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("OBJECT_TYPE"='INDEX' AND "OBJECT_ID">5000)

Note
-----
   - dynamic statistics used: dynamic sampling (level=AUTO)
It can also be seen that the Note line in 11.2.0.4 reads

- dynamic sampling used for this statement (level=11)

whereas in 12.1.0.1 it changes to

- dynamic statistics used: dynamic sampling (level=AUTO)
My small test case is not sufficient to make any sort of educated guess as to how efficient the AUTO level is or how much overhead it carries. The actual value of the count was 4295:

11.2.0.3 (level 2):  3883
11.2.0.4 (level 11): 2878 - particularly inaccurate
12.1.0.1 (level 11): 4467 - almost bang on

Hopefully this will pique someone's interest in experimenting with setting optimizer_dynamic_sampling=11. But remember: it is much better to gather accurate statistics in the first place than to rely on not having them and making the optimizer work them out on the fly, both in terms of performance and accuracy.
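For completeness, regathering statistics on the test table, so that the optimizer does not need to sample at all, is a one-liner:

```sql
-- Regather table and index statistics so dynamic sampling is no longer needed.
execute dbms_stats.gather_table_stats('FRED', 'DS', cascade => TRUE);
```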