From: https://sites.google.com/site/oraclerac009/b-rac-install-upgrade/11gr2-rac/11g-new-features-over-10g
11g RAC new features over 10g RAC
1) SCAN
The single client access name (SCAN) is the address used by all clients connecting to the cluster. The SCAN
name is a domain name registered to three IP addresses, either in the Domain Name Service (DNS) or the Grid
Naming Service (GNS). The SCAN name eliminates the need to update client configuration when nodes are added to or
removed from the cluster. Clients can also connect through the SCAN using the EZCONNECT syntax.
A) The Single Client Access Name (SCAN) is a domain name that resolves to all the addresses allocated
for the SCAN. Three IP addresses should be provided in DNS for the SCAN name to map to,
as this ensures high availability. During Oracle Grid Infrastructure installation, a SCAN listener is created
for each of the SCAN addresses, and Oracle Grid Infrastructure controls which server responds to a
SCAN address request.
B) The SCAN addresses need to be on the same subnet as the VIP addresses for nodes in the cluster.
C) The SCAN domain name must be unique within your corporate network.
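As an illustration of point A, a client can connect through the SCAN with no tnsnames.ora entry using the EZCONNECT syntax. All names, the port, and the service name below are hypothetical, not values from this article:

```shell
# All values below are hypothetical -- substitute your own SCAN, port, and service name
SCAN=sales-scan.example.com    # a SCAN that resolves to three IP addresses in DNS
PORT=1521                      # default listener port
SERVICE=sales.example.com      # database service name

# EZCONNECT needs no tnsnames.ora entry: the connect string is just //host:port/service
CONNECT_STRING="//${SCAN}:${PORT}/${SERVICE}"
echo "sqlplus system@${CONNECT_STRING}"
```

Because the SCAN resolves to three addresses, the client's connection attempts are spread across the SCAN listeners without any per-node configuration on the client side.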
2) GNS
In the past, the host and VIP names and addresses were defined in the DNS or locally in a hosts file. GNS can
simplify this setup by using DHCP. To use GNS, DHCP must be available on the public network, and the DNS
administrator must delegate the subdomain in which the cluster resides to the cluster's GNS.
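A sketch of the DNS side of this setup, assuming a hypothetical domain example.com, a cluster subdomain cluster01.example.com, and a GNS VIP of 192.0.2.10 (all three are assumptions for illustration): the DNS administrator delegates the subdomain to the GNS address, and GNS then answers name requests within that subdomain using the DHCP-assigned addresses.

```
; Hypothetical BIND zone fragment for example.com (assumed names and addresses)
cluster01-gns.example.com.  IN  A   192.0.2.10                  ; GNS VIP
cluster01.example.com.      IN  NS  cluster01-gns.example.com.  ; delegate subdomain to GNS
```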
3) OCR and Voting on ASM storage
The ability to use ASM (Automatic Storage Management) diskgroups for Clusterware OCR and Voting disks
is a new feature in the Oracle Database 11g Release 2 Grid Infrastructure. If you choose this option and ASM
is not yet configured, OUI launches the ASM Configuration Assistant (ASMCA) to configure ASM and create a disk group.
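After installation you can confirm where the OCR and voting files live. The commands below are illustrative administrative commands to run on a cluster node, not a runnable script, and the +DATA disk group name is an assumption:

```shell
# Illustrative only -- run on a cluster node; +DATA is an assumed disk group name
ocrcheck                          # reports the OCR location, e.g. an ASM disk group
crsctl query css votedisk         # lists the voting files and their locations
crsctl replace votedisk +DATA     # moves the voting files into the +DATA disk group
```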
4) Passwordless Automatic SSH Connectivity
If SSH has not been configured prior to the installation, you can have the installer configure it for you. The
configuration can be tested from within the installer as well.
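If you prefer to set up and verify user equivalence manually, the grid installation media ships a helper script (in its sshsetup/ directory), and the Cluster Verification Utility can check the result. The node names below are assumptions:

```shell
# Illustrative commands; racnode1 and racnode2 are hypothetical node names
# Helper script shipped with the grid installation media:
./sshUserSetup.sh -user grid -hosts "racnode1 racnode2" -noPromptPassphrase

# Verify passwordless SSH user equivalence with the Cluster Verification Utility:
cluvfy comp admprv -n racnode1,racnode2 -o user_equiv -verbose
```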
5) Intelligent Platform Management interface (IPMI)
Intelligent Platform Management Interface (IPMI) provides a set of common interfaces to computer hardware
and firmware that administrators can use to monitor system health and manage the system.
With Oracle Database 11g Release 2, Oracle Clusterware can integrate IPMI to provide failure isolation
support and to ensure cluster integrity. You must have the following hardware and software configured to
enable cluster nodes to be managed with IPMI:
A) Each cluster member node requires a Baseboard Management Controller (BMC) running firmware
compatible with IPMI version 1.5, which supports IPMI over LANs and is configured for remote control.
B) Each cluster member node requires an IPMI driver installed.
C) The cluster requires a management network for IPMI. This can be a shared network, but Oracle
recommends that you configure a dedicated network.
D) The Ethernet port used by the BMC on each cluster node must be connected to the IPMI management network.
If you intend to use IPMI, then you must provide an administration account username and password when
prompted during installation.
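The IPMI credentials can also be stored with Clusterware after installation using crsctl. The account name and BMC address below are assumptions; crsctl prompts for the IPMI password:

```shell
# Illustrative only -- run on each cluster node; values are assumptions
crsctl set css ipmiadmin bmcadmin     # IPMI administrator account name
crsctl set css ipmiaddr 192.0.2.50    # this node's BMC address on the management network
```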
6) Time Sync
Oracle Clusterware 11g release 2 (11.2) requires time synchronization across all nodes within a cluster when
Oracle RAC is deployed. To achieve this, your operating system should be configured with the Network Time
Protocol (NTP). The new Oracle Cluster Time Synchronization Service (CTSS) is designed for organizations whose
Oracle RAC databases are unable to access NTP services.
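Two illustrative checks, to be run on a cluster node, can confirm how time synchronization is being handled:

```shell
# Illustrative only -- run on a cluster node
crsctl check ctss                 # reports whether CTSS is active on this node
cluvfy comp clocksync -n all      # verifies clock synchronization across all nodes
```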
7) Clusterware and ASM share the same Oracle Home
The Clusterware and ASM now share the same Oracle home, which is therefore known as the Grid Infrastructure home (prior to
11gR2, ASM and RDBMS could be installed either in the same Oracle home or in separate Oracle homes).
8) Hangcheck-timer and oprocd are replaced
Oracle Clusterware 11g release 2 (11.2) replaces the oprocd and hangcheck-timer processes with the Cluster
Synchronization Service daemon Agent and Monitor (cssdagent and cssdmonitor) to provide more accurate
recognition of hangs and to avoid false terminations.
9) Rebootless Restart
The fencing mechanism has changed in 11gR2: Oracle Clusterware now attempts to evict a node without
rebooting it. CSSD starts a graceful shutdown of the Clusterware stack after detecting a failure, and OHASD
then tries to restart the stack. Only if the cleanup of a failed subcomponent fails is the node rebooted in
order to perform a forced cleanup.
10) HAIP
In 11.2.0.2 the new HAIP (Redundant Interconnect) facility is active, and selecting multiple interfaces
supports load balancing and failover. You can select up to four interfaces for the private interconnect at install
time, or add them dynamically later using oifcfg.
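The interface name and subnet below are assumptions, used only to illustrate the oifcfg commands for listing and adding private interconnect interfaces:

```shell
# Illustrative only -- eth2 and 192.168.10.0 are assumed values
oifcfg getif                                                  # list current interface assignments
oifcfg setif -global eth2/192.168.10.0:cluster_interconnect   # add another private interconnect
```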