Achieving Availability and Scalability with Oracle 12c Flex Clusters and Flex ASM

DOAG 2013
Kai Yu, Senior Principal Architect, Oracle Solutions Engineering, Dell Inc.

Agenda
- Oracle 12c Grid Infrastructure and RAC overview
- Oracle Flex Clusters
- Flex ASM architecture
- Oracle RAC support for Pluggable Databases
- Oracle RAC troubleshooting and health check

About the Author
- Kai Yu, Senior Principal Architect, Dell Database Engineering
- 18 years as an Oracle DBA/Apps DBA and in solutions engineering
- Specializes in Oracle RAC, Oracle VM and Oracle EBS
- Oracle ACE Director; author and presenter of Oracle papers
- 2011 OAUG Innovator of the Year; 2012 Oracle Excellence Award, Technologist of the Year: Cloud Architect, by Oracle Magazine
- My Oracle Blog:
- Co-author of the Apress book Expert Oracle RAC 12c

What I Do
- Provide solutions on the whole stack, from the ground up
- Solutions Deliverable List, validated integrations, best practices
- Virtualization, Oracle EM12c, Oracle Applications, performance studies

Oracle 12c Grid Infrastructure and RAC
- Oracle Real Application Clusters (RAC): an active-active cluster database
- Protects database availability against up to N-1 server failures
- Reduces planned downtime for hardware, OS and software upgrades
- Nodes can be added or removed to match capacity demand
- Provides application load balancing
- Delivers high availability, scalability and flexibility

[Diagram: user connections to RAC Instances 1-3 on Nodes 1-3, joined by the cluster interconnect and sharing one RAC database]

Client Failover in Oracle 12c RAC
- Database clients connect to the RAC database through a Virtual IP (VIP) rather than a RAC node's host name/IP
- The VIP is failed over automatically by Oracle Clusterware, without waiting for a TCP/IP timeout
- Application connections fail over to the surviving nodes; in-flight DML is rolled back and restarted after reconnecting
- Transparent Application Failover (TAF): client-side failover that specifies how queries fail over
- Oracle Notification Service (ONS) notifies clients of down events
- Fast Connection Failover (FCF): database clients registered with Fast Application Notification (FAN) are notified of up and down events and react accordingly; this works for most database clients: JDBC, OCI, UCP, etc.
- Application Continuity (AC) in Oracle 12c: during an instance outage, the transaction is automatically replayed on another instance, without end users or applications having to resubmit it
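As an illustration of the TAF bullet above, here is a minimal sketch of a client-side tnsnames.ora entry; the SCAN name rac-scan.example.com and the service name oltp.example.com are hypothetical placeholders, not names from the slides:

    # Hypothetical TAF-enabled alias: clients reconnect through the SCAN
    # if the instance they are attached to goes down.
    OLTP_TAF =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.example.com)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = oltp.example.com)
          (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 30)(DELAY = 5))
        )
      )

With TYPE=SELECT, in-flight queries resume on a surviving instance after the reconnect; METHOD=BASIC connects to the failover target only at failover time. As noted above, in-flight DML is still rolled back and must be reissued.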
The Oracle RAC Stack
- Oracle Grid Infrastructure = Oracle Clusterware + ASM
- Oracle RAC coordinates and synchronizes multiple database instances through Cache Fusion technology

Oracle 12c Clusterware
- Enables communication between the cluster servers
- Manages resources: ASM instances, database instances, Virtual IPs, SCAN, etc.
- Provides the foundation for the RAC database and its HA features
- Manages failover of Virtual IPs to other nodes
- Restarts failed Oracle processes
- Manages node membership and prevents split-brain syndrome
- Is installed into the same Grid Infrastructure home as ASM

Oracle 12c Clusterware Components
- Require shared storage and private interconnects between the cluster nodes
- The technology stack splits into the Cluster Ready Services (CRS) stack and the High Availability Services stack

[Diagram: CRS technology stack (CRS, CSS, CTSS, EVM, ASM, ONS, oraagent, orarootagent, scriptagent, appagent) and High Availability Services technology stack (GPNPD, GIPC, mDNS, cssdagent, osysmond, ologgerd, oraagent, orarootagent)]

Clusterware/ASM Startup Sequence
- Oracle Clusterware is started automatically with the OS; on Linux, the OS init process spawns init.ohasd, which starts OHASD
- Startup proceeds in levels: OHASD first spawns its agents (oraagent, orarootagent, cssdagent, cssdmonitor); the OHASD oraagent starts mdnsd, GPNPD, GIPCD, EVMD and the ASM instance, the cssdagent starts CSSD, and the OHASD orarootagent starts CTSSD, Diskmon and CRSD
- CRSD then runs its own oraagent and orarootagent, which bring up the resources managed by the CRS stack: the network resource, node VIPs, SCAN VIPs, SCAN listeners, listeners, the GNS VIP, the ACFS registry, ASM disk groups, database resources, services, ONS, eONS, GNS and GSD

OCR and Voting Disks
- The voting disk stores the cluster membership information used by CSS
- The OCR stores information about Clusterware resources
- The OCR is multiplexed; use an odd number of voting disks
- Both are preferably stored in ASM

Oracle 12c RAC New Features Overview
New features and enhancements focus on business continuity, high availability, scalability, agility and cost-effective workload management:
- Oracle Flex Clusters for cluster scalability
- Oracle Flex ASM for high availability and scalability
- Oracle RAC support for Oracle 12c Pluggable Databases
- What-If command evaluation
- IPv6 support for RAC public networks
- Application Continuity
- Cluster Health Monitor (CHM) enhancements
- Shared ASM password file stored in an ASM disk group

Oracle 12cR1 Flex Clusters: Architecture
- Scalability limitations of the standard cluster: all nodes are tightly connected, giving N*(N-1)/2 interconnect paths, and all nodes connect directly to storage, giving N storage paths; this prevents the cluster from growing beyond about 100 nodes
- A Flex Cluster uses a two-layered hub-and-spoke topology:
  - Hub nodes are interconnected with each other and directly connected to the shared storage
  - Leaf nodes are connected to a Hub node; no storage connection is required
- Scalability of Oracle 12cR1 RAC: up to 64 Hub nodes, and up to 2,000 nodes in total (Hub nodes plus Leaf nodes)
- Each Leaf node has a dedicated Hub node through which it connects

Oracle 12cR1 Flex Clusters: Cluster Modes
- A Leaf node with access to the shared storage can be changed into a Hub node
- A standard cluster can be changed into a Flex Cluster, but cannot be changed back without reconfiguring the cluster
- All Hub nodes function in the same way as standard cluster nodes, using Flex ASM
- When you design the cluster initially you may choose either mode; if you are not sure, choose the standard cluster, since it can be converted to a Flex Cluster later
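To see which mode a given cluster runs in and which role each node plays, crsctl provides query commands. A minimal sketch, assuming 12cR1 syntax (verify the exact flags with crsctl -h on your release):

    # Report whether the cluster is running in standard or flex mode
    crsctl get cluster mode status

    # Report the configured role (hub or leaf) of every node
    crsctl get node role config -all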
Configuring Flex Clusters
- Flex Cluster is selected during the Grid Infrastructure OUI run: choose the Advanced Installation option
- A Flex Cluster requires a fixed GNS VIP address

[Screenshots: Hub node and Leaf node status after the Grid Infrastructure installation]

Changing a Standard Cluster to a Flex Cluster
- Prerequisite: add a GNS service with a fixed virtual IP:

    # srvctl add gns -vip VIP_address -domain domain_name

- Prerequisite: enable Flex ASM (see below)
- Convert the cluster to Flex mode:

    # crsctl set cluster mode flex
    # crsctl stop crs
    # crsctl start crs -wait

- Add a Leaf listener:

    $ srvctl add listener -leaflistener -skip

Oracle 12cR1 Flex ASM: Architecture
- Limitations of standard ASM: every node runs an ASM instance, which costs CPU and memory, and a local ASM instance failure causes the local database instances to fail
- Flex ASM is an option in Oracle 12c: it can be enabled or disabled
- Only a small number of ASM instances run (default 3, as specified by the administrator)
- Database instances connect to any ASM instance, local or remote

Two Kinds of Oracle ASM Configurations
- Local ASM: clients connect to the local ASM instance
- Flex ASM: clients may connect to a remote ASM instance
- An ASM network is added with Flex ASM for communication between ASM clients and ASM; in Oracle 12cR1 it can share the network with the cluster private interconnect
- For database instances to access ASM servers on different nodes, Flex ASM uses password file authentication; the ASM password file is shared and stored in an ASM disk group

Flex ASM and Flex Clusters
- Flex ASM is enabled automatically if you choose a Flex Cluster; ASM instances run on Hub nodes only, since only Hub nodes have access to the shared storage
- Flex ASM can also be enabled for a standard cluster, where only a subset of the nodes run an ASM instance

Configuring Flex ASM
- In the Grid Infrastructure OUI, select the Flex ASM option or select Flex Cluster; the ASM network must be specified
- To convert standard ASM to Flex ASM, set up the ASM network, then convert with the asmca tool, for example:

    $ asmca -silent -converttoflexasm -asmnetworks eth1/ -asmlistenerport 1521

- Then run converttoflexasm.sh as root on all nodes, one at a time

Managing Flex ASM
- There are no Flex ASM-specific management tasks; use the usual asmcmd and srvctl commands
- Flex ASM provides better HA for database instances: a database instance can connect to a remote ASM instance in case the local ASM instance fails
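After a conversion it is worth verifying that Flex ASM is actually active and seeing which database instances each ASM instance serves. A minimal sketch using standard 12c tools; the output is site-specific:

    # Reports whether ASM is running in Flex mode
    asmcmd showclustermode

    # Show the configured ASM instance count and where the instances run
    srvctl config asm
    srvctl status asm -detail

From an ASM instance, V$ASM_CLIENT lists the connected database clients, which makes remote clients visible once Flex ASM is in place:

    -- Run while connected to an ASM instance
    SELECT instance_name, db_name, status FROM v$asm_client;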
Deploying Oracle Flex ASM: Architecture Options
- For a new Oracle 12c database deployment: use Oracle Flex ASM directly
- For mixing pre-12c and 12c databases:
  - Run an ASM instance on every node; 12c databases can still fail over to another ASM instance
  - Or run the pre-12c databases on the ASM nodes only; 12c databases can fail over to another ASM instance

Oracle RAC Support for Pluggable Databases: Architecture Overview
- A pluggable database (PDB) is a self-contained collection of schemas
- A container database (CDB) is a superset of the pluggable databases: the root CDB$ROOT, the seed PDB$SEED, and zero or more PDBs

    SQL> SELECT name, con_id, dbid, con_uid FROM v$containers ORDER BY con_id;

[Output: rows for CDB$ROOT, PDB$SEED and three PDBs; the CON_ID/DBID/CON_UID values were not preserved in this transcript]

How 12c Pluggable Databases Work on Oracle 12c RAC
- A PDB has different open modes: MOUNTED, READ ONLY, READ WRITE
- To check the open modes of all the PDBs on a RAC instance, connect to the CDB root:

    SQL> SELECT name, open_mode, restricted FROM v$pdbs;

    NAME       OPEN_MODE   RESTRICTED
    PDB$SEED   READ ONLY   NO
    PDB1       READ WRITE  NO
    PDB2       READ WRITE  NO
    PDB3       MOUNTED     NO

- Starting and stopping a PDB in SQL*Plus:

    STARTUP OPEN
    STARTUP OPEN READ ONLY
    STARTUP RESTRICT OPEN READ ONLY
    SHUTDOWN IMMEDIATE
    ALTER PLUGGABLE DATABASE OPEN READ ONLY

Consolidating Databases Using PDBs
- Create a PDB
- Create a dynamic database service for the PDB:

    $ srvctl add service -db cdb -service hr1 -pdb pdb1 -preferred host1 -available host2

- Connect to the PDB through the service:

    HR_PDB1 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = knewracscan.kcloud.dblab.com)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = hr1.kcloud.dblab.com)
        )
      )
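To put the service into use, start it and check which instance serves it. A minimal sketch reusing the names from the example above; on 12c, starting a PDB's service typically also opens that PDB on the serving instance:

    # Start the hr1 service for pdb1 and see where it is running
    srvctl start service -db cdb -service hr1
    srvctl status service -db cdb -service hr1

    # Clients then connect through the HR_PDB1 alias defined above:
    #   sqlplus hr/<password>@HR_PDB1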
Oracle RAC Troubleshooting and Health Check
- Clusterware health check and troubleshooting with the crsctl utility, to check CRS status and to start/stop the stack:

    crsctl check cluster -all
    crsctl stat res -t

- Log files: $GRID_HOME/log/<host>/alert<host>.log and $GRID_HOME/log/<host>/<process>/<process>.log

Clusterware Health Verification Utility: CLUVFY
- Verifies Clusterware, RAC best practices and mandatory requirements:

    $ ./cluvfy comp healthcheck -collect cluster -bestpractice -html

Oracle RACcheck: a RAC Configuration Audit Tool
- Audits the configuration settings of RAC, Clusterware and ASM
- Download the tool from MOS note ID
- To invoke it, run ./raccheck; it produces an audit report

Cluster Health Monitor (CHM)
- Detects and analyzes OS- and cluster-resource-related degradations and failures
- A set of tools that trace OS resource consumption; enhanced in Oracle 12cR1, it consists of three components:
  - osysmond: the System Monitor Service process on each node; it monitors and collects real-time OS metric data and sends it to ologgerd
  - ologgerd: the cluster logger service, one for every 32 nodes
  - the Grid Infrastructure Management Repository (the CHM repository): a central repository that stores the metric data

Grid Infrastructure Management Repository (GIMR)
- A single-instance Oracle database run by the grid user
- Installed on one of the cluster nodes; requires the Advanced Installation option
- Runs on the same node as the ologgerd service, to reduce traffic
- By default the database is stored in the same location as the OCR/voting disks:

    $ oclumon manage -get repsize reppath alllogger -details
    CHM Repository Path = +DATA1/_MGMTDB/DATAFILE/sysmgmtdata
    CHM Repository Size =
    Logger = knewracn1
    Nodes = knewracn1,knewracn2,knewracn4,knewracn7,knewracn5,knewracn8,knewracn6

- Use OCLUMON to manage the size and retention of the repository
- Use diagcollection.pl to collect the CHM data:
  - Get the master node: $ oclumon manage -get master
  - Log in to the master node as root and run: diagcollection.pl -collect -crshome CRS_HOME
  - This produces four .gz files containing the various log files for diagnosis
- Use OCLUMON to query the CHM repository for node-specific data:

    $ oclumon dumpnodeview -allnodes -v -s begin_timestamp -e end_timestamp
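Putting the CHM steps above together, a sketch of the collection workflow; the grid home path is a placeholder, and the -last form of dumpnodeview is an alternative to explicit -s/-e timestamps (verify the flags with oclumon -h on your release):

    # 1. Find the node running the master cluster logger service
    oclumon manage -get master

    # 2. As root on that node, collect the CHM data and Clusterware logs
    #    (/u01/app/12.1.0/grid is a placeholder for the actual CRS home)
    diagcollection.pl -collect -crshome /u01/app/12.1.0/grid

    # 3. Alternatively, dump recent metrics directly, e.g. the last 5 minutes
    oclumon dumpnodeview -allnodes -v -last "00:05:00"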