
MySQL Cluster Manager: Reducing Complexity & Risk while Improving Management Efficiency

    A MySQL Technical White Paper by Oracle

July 2011

    Copyright 2010, 2011 Oracle and/or its affiliates. All rights reserved.


Table of Contents

1. Executive Summary
2. Introducing MySQL Cluster Manager
2.1. Automated Management with MySQL Cluster Manager
2.2. Automated Monitoring and Self-Healing Recovery with MySQL Cluster Manager
2.3. High Availability Operation with MySQL Cluster Manager
3. MySQL Cluster Manager Architecture and Use
3.1. MySQL Cluster Manager Architecture
3.2. MySQL Cluster Manager Model & Terms
3.3. Using MySQL Cluster Manager: a worked example
4. Conclusion
5. About The MySQL Cluster Database
6. Additional Resources


    1. Executive Summary

According to industry research¹, staffing and downtime account for just under 50% of a database's Total Cost of Ownership. At the same time, IT budget and staffing levels are declining or flat while demand for highly available web and converged communications services continues to increase, placing significant pressure on IT groups.

Organizations are demanding that IT focuses more on strategic initiatives, and less on the day-to-day management of an IT environment. IT groups need to be agile and responsive to market demands by quickly deploying new services or re-provisioning existing applications.

At the same time, SLAs (Service Level Agreements) are becoming more demanding. Service interruptions directly impact revenue streams and customer satisfaction, with some industry watchers estimating that each minute of downtime costs $30,000, or just under $2m per hour².

As a result of these pressures, IT groups must do more with less. They must increase productivity and efficiency, without compromising availability. Intelligent database management tools are key to enabling this type of transformation.

MySQL Cluster Manager simplifies the creation and management of the MySQL Cluster database by automating common management tasks. As a result, Database Administrators (DBAs) and Systems Administrators are more productive, enabling them to focus on strategic IT initiatives and respond more quickly to changing user requirements.

At the same time, risks of database downtime that previously resulted from manual configuration errors are significantly reduced.

    2. Introducing MySQL Cluster Manager

As more users adopt clustered databases to increase service availability, scalability and performance, there is increasing recognition that management and administration overheads can increase as a result of the greater complexity inherent in any clustered architecture.

To address these issues, MySQL has developed MySQL Cluster Manager, which automates and simplifies the creation and management of the MySQL Cluster database.

As an example, management operations requiring rolling restarts of a MySQL Cluster database, which previously demanded 46 manual commands³ and consumed 2.5 hours of DBA time⁴, can now be performed with a single command and are fully automated with MySQL Cluster Manager, serving to reduce:

• Management complexity and overhead;
• Risk of downtime through the automation of configuration and change management processes;
• Custom scripting of management commands or developing and maintaining in-house management tools.

MySQL Cluster Manager achieves these benefits through three core capabilities:

• Automated management;
• Monitoring and self-healing recovery;
• High availability operation.

¹ IDC, Maximizing the Business Value of Enterprise Database Applications
² http://blog.dssdatacenter.com/2009/07/17/a-pragmatic-view-of-downtime-cost/
³ Based on a MySQL Cluster configuration comprising 4 x MySQL Cluster Data Nodes, 2 x MySQL Server SQL Nodes and 2 x MySQL Cluster Management Nodes implemented across individual servers (8 x total). The total operation comprised the following commands: 1 x preliminary check of cluster state; 8 x ssh commands (one per server); 8 x per-process stop commands; 4 x scp of configuration files (2 x mgmd & 2 x mysqld); 8 x per-process start commands; 8 x checks for started and re-joined processes; 8 x process completion verifications; 1 x verification of completion of the whole cluster (1 + 8 + 8 + 4 + 8 + 8 + 8 + 1 = 46). The total command count does not include manual editing of each configuration file, and excludes the preparation steps of copying the new software package to each host and defining where it's located: mcm> add package --basedir=/usr/local/mysql_7_0_7 7.0;
⁴ Based on a DBA restarting 4 x MySQL Cluster Data Nodes, each with 6GB of data, and performing 10,000 operations per second: http://www.clusterdb.com/mysql-cluster/mysql-cluster-data-node-restart-times/


    The following figure shows how MySQL Cluster Manager is integrated into the MySQL Cluster architecture.

    Figure 1: MySQL Cluster Manager Implemented via Discrete Agents on each Data, SQL and Management Node

2.1. Automated Management with MySQL Cluster Manager

MySQL Cluster Manager provides the ability to control the entire cluster as a single entity, while also supporting very granular control down to individual processes within the cluster itself. Administrators are able to create and delete entire clusters, and to start, stop and restart the cluster with a single command. As a result, administrators no longer need to manually restart each data node in turn, in the correct sequence, or to create custom scripts to automate the process.
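For illustration, a minimal sketch of this single-command control, reusing the site and cluster names from the worked example in section 3.3 (start cluster and stop cluster are demonstrated later in this paper; restart cluster, a rolling restart of the whole cluster, appears in the MySQL Cluster Manager documentation):

    mcm> stop cluster mycluster;
    mcm> start cluster mycluster;
    mcm> restart cluster mycluster;

Each command acts on every process in the cluster in the correct sequence, with no per-node intervention.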

MySQL Cluster Manager automates on-line management operations, including the upgrade, downgrade and reconfiguration of running clusters, as well as adding nodes on-line for dynamic, on-demand scalability, without interrupting applications or clients accessing the database. Administrators no longer need to manually edit configuration files and distribute them to other cluster nodes, or to determine if rolling restarts are required. MySQL Cluster Manager handles all of these tasks, thereby enforcing best practices and making on-line operations significantly simpler, faster and less error-prone.

2.2. Automated Monitoring and Self-Healing Recovery with MySQL Cluster Manager

MySQL Cluster Manager is able to monitor cluster health at both an Operating System and per-process level by automatically polling each node in the cluster. It can detect if a process or server host is alive, dead or has hung, allowing for faster problem detection, resolution and recovery.

To deliver 99.999% availability, MySQL Cluster has the capability to self-heal from failures by automatically restarting failed Data Nodes, without manual intervention. MySQL Cluster Manager extends this functionality by also monitoring and automatically recovering SQL and Management Nodes. This supports a more seamless and complete self-healing of the Cluster to fully restore operations and capacity to applications.


Figure 2: MySQL Cluster Manager monitors & automatically self-heals failed Data, SQL and Management Nodes

2.3. High Availability Operation with MySQL Cluster Manager

The MySQL Cluster database powers mission-critical applications in web, telecommunications, government and enterprise environments. It is therefore critical that MySQL Cluster Manager does not impact the availability of the underlying cluster in any way.

To ensure high availability operation, MySQL Cluster Manager is decoupled from the actual database processes, so if a management agent stops or is upgraded, it does not impact the running database in any way.

Ensuring consistency of configurations across the cluster can create significant administrative overhead in existing environments. All MySQL Cluster configuration information and process identifiers are persisted to disk, enabling them to survive system failures or re-starts of the MySQL Cluster Manager. As management agents restart, they are automatically re-synchronized with the other running management agents to ensure configuration consistency across the entire cluster, without administrator intervention.

MySQL Cluster Manager coordinates communication between each management agent in order to reliably propagate reconfiguration requests. As a result, configurations remain consistent across all nodes in the cluster. Any changes are only committed when all nodes confirm they have received the re-configuration request. If one or more nodes fail to receive the request, then an error is reported back to the client. By automating the communication and synchronization of re-configuration requests, opportunities for errors resulting from manually distributing configuration files are eliminated.


Figure 3: MySQL Cluster Manager Reliably Distributes Reconfiguration Requests Around the Cluster

With the capabilities described above, MySQL Cluster Manager preserves and enhances the high availability features of the MySQL Cluster database.

    3. MySQL Cluster Manager Architecture and Use

    3.1. MySQL Cluster Manager Architecture

MySQL Cluster Manager is implemented as a set of agents, one running on each physical host that will contain MySQL Cluster nodes (processes) to be managed. The administrator connects the regular mysql client to any one of these agents, and the agents then communicate with each other. By default, port 1862 is used to connect to the agents. An example consisting of 4 host machines together with the mysql client running on a laptop is shown in Figure 4.

Figure 4: MySQL Cluster Manager Agents running on each host


The agents work together to perform operations across the nodes making up the Cluster, each agent dealing with its local nodes (those processes running on the same host). For example, if a data node, management node or MySQL Server on host X fails, then the process will be restarted by the agent running on host X. As another example, in the event of an upgrade operation the agents co-operate to ensure that each node is upgraded in the correct sequence, but it is always the local agent that actually performs the upgrade of a specific node.

To upgrade MySQL Cluster Manager itself, all agents can be stopped and new ones started (using the new software version) with no impact on the continuing operation of the MySQL Cluster database.
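A minimal sketch of that procedure, assuming the stop agents command from the MySQL Cluster Manager client and a hypothetical install directory ~/mcm_new holding the new agent version:

    mcm> stop agents mysite;

Then, on each host, launch the new agent binary:

    $ ~/mcm_new/bin/mcmd &

The database processes continue to run throughout; only the management layer is briefly unavailable.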

    3.1.1 Changes from Previous Approaches to Managing MySQL Cluster

When using MySQL Cluster Manager to manage your MySQL Cluster deployment, the administrator no longer edits the configuration files (for example config.ini and my.cnf); instead, these files are created and maintained by the agents. In fact, if those files are manually edited, the changes will be overwritten by the configuration information which is held within the agents. Each agent stores all of the cluster configuration data, but it only creates the configuration files that are required for the nodes that are configured to run on that host. An example of this is shown in Figure 5 below.

Figure 5: Configuration files created on each host

Similarly, when using MySQL Cluster Manager, management actions must not be performed by the administrator using the ndb_mgm command (which connects directly to the management node, meaning that the agents themselves would not have visibility of any operations performed with it).

The introduction of MySQL Cluster Manager does not remove the need for management nodes; in particular, they continue to perform a number of critical roles:

When data nodes start up (or are restarted) they connect to the management node(s) to retrieve their configuration data (the management node in turn fetches that data from the configuration files created by the agents);


When stopping or restarting a data node through MySQL Cluster Manager, the state change is actually performed by the management node;

The management node(s) can continue to act as arbitrators (avoiding a split-brain scenario). For this reason, it is still important to run those processes on separate hosts from the data nodes;

Some reporting information (for example, memory usage) is not yet available in the Cluster Manager and can still be retrieved using the ndb_mgm tool.

When using MySQL Cluster Manager, the 'angel' processes are no longer needed (or created) for the data nodes, as it becomes the responsibility of the agents to detect the failure of the data nodes and recreate them as required. Additionally, the agents extend this functionality to include the management nodes and MySQL Server nodes.

It should be noted that there is no angel process for the agents themselves, and so for the highest levels of availability the administrator may choose to use a process monitor to detect the failure of an agent and automatically restart it. When an agent is unavailable, the MySQL Cluster database continues to operate in a fault-tolerant manner, but management changes cannot be made until it is restarted. When the agent process is restarted, it re-establishes communication with the other agents and has access to all current configuration data, ensuring that the management function is restored.

    3.2. MySQL Cluster Manager Model & Terms

Before looking at how to install, configure and use MySQL Cluster Manager, it helps to understand a number of components and the relationships between them (as viewed by MySQL Cluster Manager). Figure 6 illustrates these entities, and a brief description of each follows.

    Figure 6: MySQL Cluster Manager model

Site: the set of physical hosts which are used to run database cluster processes that will be managed by MySQL Cluster Manager. A site can include 1 or more clusters⁵.

    Cluster: represents a MySQL Cluster deployment. A Cluster contains 1 or more processes running on 1 or more hosts.

    Host: a physical machine (i.e. a server), running the MySQL Cluster Manager agent.

Agent: the MySQL Cluster Manager process running on each host. Responsible for all management actions performed on the processes.

⁵ In MySQL Cluster Manager 1.0, each site may only include a single Cluster.


Process: an individual MySQL Cluster node; one of: ndb_mgmd, ndbd, ndbmtd, mysqld & ndbapi⁶.

    Package: a copy of a MySQL Cluster installation directory as downloaded from mysql.com, stored on each host.

3.3. Using MySQL Cluster Manager: a worked example

This section walks through the steps required to install, configure and run MySQL Cluster Manager. It goes on to demonstrate how it is used to create and manage a MySQL Cluster deployment.

    3.3.1 Example configuration

    Throughout the example, the MySQL Cluster configuration shown in Figure 7 is used.

    Figure 7: MySQL Cluster configuration used in example

    3.3.2 Installing, configuring, running & accessing MySQL Cluster Manager

The agent must be installed and run on each host in the Cluster (download from edelivery.oracle.com):

1. Expand the downloaded tar-ball into a known directory (in this example we use ~/mcm)

    2. Launch the agent process:

mcm$ ./bin/mcmd &

3. Connect to any of the agents from any machine (that has the mysql client installed):

$ mysql -h 192.168.0.10 -P 1862 -u admin -psuper --prompt='mcm> '

⁶ ndbapi is a special case, representing a slot for an external application process to connect to the cluster using the NDB API.


Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 1.1.1-agent-manager MySQL Cluster Manager

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mcm>

After repeating steps 1 and 2 on each of the physical hosts that will contain MySQL Cluster nodes to be managed, the system looks like Figure 8 below.

Figure 8: MySQL Cluster Manager Agents running on each host

    3.3.3 Creating & Starting a Cluster

Once the agents have been installed and started, the next step is to set up the entities that will make up the managed Cluster(s). As a prerequisite, the installation files for the version(s) of MySQL Cluster that are to be managed should be stored in a known location; in this example, /usr/local/mysql_X_Y_Z.

As in step 3 of section 3.3.2, connect to the agent process running on any of the physical hosts; from there, the whole process of creating all entities needed for the Cluster (site, hosts, packages and Cluster) and then starting the Cluster can be performed.

1. Define the site, that is, the set of hosts that will run MySQL Cluster processes that need to be managed:

mcm> create site --hosts=192.168.0.10,192.168.0.11,192.168.0.12,192.168.0.13 mysite;

2. Define the package(s) that will be used for the Cluster; further packages can be added later (for example, when upgrading to the latest MySQL Cluster release). In this example, the MySQL Cluster installation directories are in the same location on every host; if that isn't the case, then the --hosts option can be used to qualify which hosts the directory name applies to:


mcm> add package --basedir=/usr/local/mysql_6_3_27a 6.3.27a;
mcm> add package --basedir=/usr/local/mysql_7_0_9 7.0.9;

3. Create the Cluster, specifying the package (version of MySQL Cluster) and the set of nodes/processes and which host each should run on:

mcm> create cluster --package=6.3.27a
     --processhosts=ndb_mgmd@192.168.0.10,ndb_mgmd@192.168.0.11,
     ndbd@192.168.0.12,ndbd@192.168.0.13,ndbd@192.168.0.12,
     ndbd@192.168.0.13,mysqld@192.168.0.10,mysqld@192.168.0.11
     mycluster;

4. The Cluster has now been defined; all that remains is to start the processes for the database to become available:

mcm> start cluster mycluster;

Note that the node-ids are allocated automatically, starting with 1 for the first in the list of processes provided, and then incrementing by 1 for each subsequent process.

    The MySQL Cluster is now up and running as shown in Figure 7 above.

    3.3.4 Check the status of the Cluster

    Using MySQL Cluster Manager, you can check the status at a number of levels:

• The overall state of the Cluster

• The state of each of the hosts (specifically whether the agent on that host is running and accessible)

• The state of each of the nodes (processes) making up the Cluster

mcm> show status --cluster mycluster;

+-----------+-------------------+
| Cluster   | Status            |
+-----------+-------------------+
| mycluster | fully-operational |
+-----------+-------------------+

mcm> list hosts mysite;

+--------------+-----------+---------+
| Host         | Status    | Version |
+--------------+-----------+---------+
| 192.168.0.10 | Available | 1.0.1   |
| 192.168.0.11 | Available | 1.0.1   |
| 192.168.0.12 | Available | 1.0.1   |
| 192.168.0.13 | Available | 1.0.1   |
+--------------+-----------+---------+

mcm> show status --process mycluster;

+------+----------+--------------+---------+-----------+
| Id   | Process  | Host         | Status  | Nodegroup |
+------+----------+--------------+---------+-----------+
| 1    | ndb_mgmd | 192.168.0.10 | running |           |
| 2    | ndb_mgmd | 192.168.0.11 | running |           |
| 3    | ndbd     | 192.168.0.12 | running | 0         |
| 4    | ndbd     | 192.168.0.13 | running | 0         |
| 5    | ndbd     | 192.168.0.12 | running | 1         |
| 6    | ndbd     | 192.168.0.13 | running | 1         |
| 7    | mysqld   | 192.168.0.10 | running |           |
| 8    | mysqld   | 192.168.0.11 | running |           |
+------+----------+--------------+---------+-----------+

    3.3.5 Checking and setting MySQL Cluster parameters


When using MySQL Cluster Manager, the administrator reviews and changes all configuration parameters using the get and set commands rather than editing the configuration files directly. For both get and set you can control the scope of the attributes being read or updated. For example, you can specify that the attribute applies to all nodes, all nodes of a particular class (such as all data nodes) or to a specific node. Additionally, when reading attributes you can provide a specific attribute name or ask for all applicable attributes, as well as indicating whether you wish to see the attributes still with their default values or just those that have been overwritten.

    There follow some examples of getting and setting attributes.

    1. Fetch all parameters that apply to all data nodes, including defaults

mcm> get -d :ndbd mycluster;

+-----------------------+-------+----------+------+----------+------+---------+---------+
| Name                  | Value | Process1 | Id1  | Process2 | Id2  | Level   | Comment |
+-----------------------+-------+----------+------+----------+------+---------+---------+
| __ndbmt_lqh_threads   | NULL  | ndbd     | 3    |          |      | Default |         |
| __ndbmt_lqh_workers   | NULL  | ndbd     | 3    |          |      | Default |         |
| Arbitration           | NULL  | ndbd     | 3    |          |      | Default |         |
| ...                   | ...   | ...      | ...  | ...      | ...  | ...     | ...     |
| __ndbmt_lqh_threads   | NULL  | ndbd     | 4    |          |      | Default |         |
| __ndbmt_lqh_workers   | NULL  | ndbd     | 4    |          |      | Default |         |
| Arbitration           | NULL  | ndbd     | 4    |          |      | Default |         |
| ArbitrationTimeout    | 3000  | ndbd     | 4    |          |      | Default |         |
| ...                   | ...   | ...      | ...  | ...      | ...  | ...     | ...     |
| __ndbmt_lqh_threads   | NULL  | ndbd     | 5    |          |      | Default |         |
| ...                   | ...   | ...      | ...  | ...      | ...  | ...     | ...     |
| __ndbmt_lqh_threads   | NULL  | ndbd     | 6    |          |      | Default |         |
| ...                   | ...   | ...      | ...  | ...      | ...  | ...     | ...     |
+-----------------------+-------+----------+------+----------+------+---------+---------+

2. Fetch the values of parameters (excluding defaults) for the mysqld with ID=7:

mcm> get :mysqld:7 mycluster;

+------------+--------------------------------------------------+----------+------+-...
| Name       | Value                                            | Process1 | Id1  | ...
+------------+--------------------------------------------------+----------+------+-...
| datadir    | /usr/local/mcm/manager/clusters/mycluster/7/data | mysqld   | 7    | ...
| HostName   | ws1                                              | mysqld   | 7    | ...
| ndb-nodeid | 7                                                | mysqld   | 7    | ...
| ndbcluster |                                                  | mysqld   | 7    | ...
| NodeId     | 7                                                | mysqld   | 7    | ...
+------------+--------------------------------------------------+----------+------+-...

3. Fetch the port parameter used to connect to the mysqld with ID=7:

mcm> get -d port:mysqld:7 mycluster;

+------+-------+----------+------+----------+------+---------+---------+
| Name | Value | Process1 | Id1  | Process2 | Id2  | Level   | Comment |
+------+-------+----------+------+----------+------+---------+---------+
| port | 3306  | mysqld   | 7    |          |      | Default |         |
+------+-------+----------+------+----------+------+---------+---------+

4. Turn off privilege checking for all MySQL Servers, change the port for connecting to the mysqld with ID=8 to 3307, and allow data nodes to be automatically restarted after they fail:

mcm> set skip-grant-tables:mysqld=true,port:mysqld:8=3307,StopOnError:ndbd=false mycluster;


MySQL Cluster Manager automatically determines which nodes (processes) need to be restarted, and in which order, to make the change take effect while avoiding loss of service. In this example, this results in each management, data and MySQL Server node being restarted sequentially, as shown in Figure 9 below.

    Figure 9: MySQL Cluster nodes automatically restarted after configuration change
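While such a rolling restart is in progress, its status can be checked from the mcm client. A minimal sketch, assuming the show status --operation option (which, per the MySQL Cluster Manager documentation, reports the progress of the most recent command):

    mcm> show status --operation mycluster;

This returns the name of the latest command and whether it is still executing or has completed.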

    3.3.6 Upgrading MySQL Cluster

Upgrading MySQL Cluster when using MySQL Cluster Manager is extremely simple. The amount of administrator time needed and the opportunities for human error are massively reduced.

    There are three prerequisites before performing the upgrade:

1. Ensure that the upgrade path that you are attempting to perform is supported, by referring to http://dev.mysql.com/doc/mysql-cluster-excerpt/5.1/en/mysql-cluster-upgrade-downgrade-compatibility.html. If it is not, then you will need to perform an upgrade to an intermediate version of MySQL Cluster first;

2. Ensure that the version of MySQL Cluster Manager you are using supports both the original and new versions of MySQL Cluster, by referring to the documentation at http://dev.mysql.com/doc/index-cluster.html. If it does not, then you must first upgrade to an appropriate version of MySQL Cluster Manager as described in section 3.3.8;

3. Ensure that the install tar-ball for the new version of MySQL Cluster has been copied to each host and the associated packages defined.

The upgrade itself is performed using a single command, which upgrades each node in sequence such that database service to connected applications and clients is not interrupted:

mcm> upgrade cluster --package=7.0.9 mycluster;


+-------------------------------+
| Command result                |
+-------------------------------+
| Cluster upgraded successfully |
+-------------------------------+
1 row in set (1 min 41.54 sec)

    3.3.7 Adding Nodes to a running Cluster

MySQL Cluster Manager automates the process of adding nodes to a running Cluster, helping to ensure that there is no loss of service during the process. In this example, a second MySQL Server process is added to two of the existing hosts, and 2 new data nodes are added to each of 2 new hosts, to produce the configuration shown in Figure 10 below.

Figure 10: MySQL Cluster configuration after adding new nodes

If any of the new nodes are going to run on a new server, then the first action is to install and run the MySQL Cluster Manager agent on those hosts, as described in section 3.3.2.

Once the agent is running on all of the hosts, the following steps should then be run from the MySQL Cluster Manager command line:

    1. Add the new hosts to the site being managed:

mcm> add hosts --hosts=192.168.0.14,192.168.0.15 mysite;

    2. Register the MySQL Cluster load (package) against these new hosts:

mcm> add package --basedir=/usr/local/mysql_7_0_9 --hosts=192.168.0.14,192.168.0.15 7.0.9;
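A plausible sketch of the remaining steps, assuming the add process and start process --added commands that MySQL Cluster Manager 1.1 introduced for on-line add-node (the process list below simply mirrors the configuration described above and is illustrative):

3. Declare the new processes and the hosts they should run on:

mcm> add process --processhosts=mysqld@192.168.0.10,mysqld@192.168.0.11,
     ndbd@192.168.0.14,ndbd@192.168.0.15,ndbd@192.168.0.14,ndbd@192.168.0.15
     mycluster;

4. Start the newly added processes without disturbing the running ones:

mcm> start process --added mycluster;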


    3.3.10 HA Provided by MySQL Cluster Manager

If MySQL Cluster Manager detects that a MySQL Cluster process has failed, then it will automatically restart it (for data nodes, this only happens if the StopOnError attribute has been set to FALSE). For data nodes, the agents take over this role from the existing MySQL Cluster data node angel processes, but for Management and MySQL Server nodes this is new functionality.

It is the responsibility of the administrator or application to restart the MySQL Cluster Manager agents in the event that they fail, for example by creating a script in /etc/init.d.
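As an illustration, a minimal watchdog sketch that could be invoked periodically (for example from cron); this is not part of the product, and the mcmd path is an assumption matching the install directory used in section 3.3.2:

    #!/bin/sh
    # restart-mcmd.sh: restart the MySQL Cluster Manager agent if it is not running.
    # Assumes the agent was installed under ~/mcm, as in section 3.3.2 (an assumption).
    if ! pgrep -x mcmd > /dev/null; then
        ~/mcm/bin/mcmd &
    fi

When the agent comes back, it re-establishes communication with the other agents and re-synchronizes its configuration automatically, as described in section 3.1.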

    3.3.11 Introducing MySQL Cluster Manager to an existing Cluster

In MySQL Cluster Manager 1.1, there is no automated method to bring an existing MySQL Cluster deployment under the management of MySQL Cluster Manager; this capability will be added in a future release. In the meantime, a documented procedure is available from http://dev.mysql.com/doc/mysql-cluster-manager/1.1.1/en/mcm-install-cluster-migrating.html

    4. Conclusion

In this paper we have explored how MySQL Cluster Manager simplifies the creation and management of the MySQL Cluster database by automating common management tasks.

Management operations that previously required an administrator to manually enter multiple commands, while ensuring each action was executed in the correct sequence, can now be performed with a single command, and are fully automated.

MySQL Cluster Manager delivers three core capabilities:

• Automated management;
• Monitoring and self-healing recovery;
• High availability operation.

With these capabilities, MySQL Cluster Manager benefits customers by reducing:

• Management complexity and overhead;
• Risk of downtime through the automation of configuration and change management processes;
• Custom scripting of management commands or developing and maintaining in-house management tools.

As a result, administrators are more productive, enabling them to focus on strategic IT initiatives and respond more quickly to changing user requirements. At the same time, risks of database downtime which previously resulted from manual configuration errors are significantly reduced.

    5. About The MySQL Cluster Database

MySQL Cluster is a highly available, real-time relational database featuring a shared-nothing distributed architecture with no single point of failure to deliver 99.999% uptime, allowing you to meet your most demanding mission-critical application requirements.

MySQL Cluster's real-time design delivers predictable, millisecond response times with the ability to service tens of thousands of transactions per second. Support for in-memory and disk-based data, automatic data partitioning with load balancing, and the ability to add nodes to a running cluster with zero downtime allow linear database scalability to handle the most unpredictable workloads.

The benefits of MySQL Cluster have been realized in some of the most demanding data management environments within the telecommunications, web, finance and government sectors, for the likes of Alcatel-Lucent, Cisco, Ericsson, Juniper, Shopatron, Telenor, UTStarcom and the US Navy.


To learn more about MySQL Cluster, refer to the MySQL Cluster Architecture and New Features whitepaper posted in the Technical Whitepapers section of the MySQL Cluster resources page: http://www.mysql.com/products/database/cluster/resources.html

    6. Additional Resources

    MySQL Cluster Manager on the web: http://www.mysql.com/products/database/cluster/mcm/

Introducing MySQL Cluster Manager 1.1 to a running Cluster: http://dev.mysql.com/doc/mysql-cluster-manager/1.1.1/en/mcm-install-cluster-migrating.html

    MySQL Cluster Manager Documentation: http://dev.mysql.com/doc/index-cluster.html

Demonstration video for MySQL Cluster Manager: http://www.mysql.com/products/database/cluster/mcm/cluster_install_demo.html

Copyright 2010, 2011, Oracle Corporation. MySQL is a registered trademark of Oracle Corporation in the U.S. and in other countries. Other products mentioned may be trademarks of their respective companies.