Chapter 2. Configuring Red Hat High Availability clusters on Microsoft Azure

Red Hat supports High Availability (HA) on Red Hat Enterprise Linux (RHEL) 7.4 and later versions. This chapter includes information and procedures for configuring a Red Hat HA cluster on Microsoft Azure using virtual machine (VM) instances as cluster nodes. The procedures in this chapter assume you are creating a custom image for Azure. You have a number of options for obtaining the RHEL 7 images to use for your cluster. For more information on image options for Azure, see Red Hat Enterprise Linux Image Options on Azure.

This chapter includes prerequisite procedures for setting up your environment for Azure. Once you have set up your environment, you can create and configure Azure VM instances.

This chapter also includes procedures specific to the creation of HA clusters, which transform individual VM nodes into a cluster of HA nodes on Azure. These include procedures for installing the High Availability packages and agents on each cluster node, configuring fencing, and installing Azure network resource agents.

This chapter refers to the Microsoft Azure documentation in a number of places. For many procedures, see the referenced Azure documentation for more information.

Prerequisites

  • You need to install the Azure command line interface (CLI). For more information, see Installing the Azure CLI. A quick verification command follows this list.
  • Enable your subscriptions in the Red Hat Cloud Access program. The Red Hat Cloud Access program allows you to move your Red Hat subscriptions from physical or on-premises systems onto Azure with full support from Red Hat.
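
To confirm that the CLI is installed and available in your PATH, you can run:

    $ az --version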

2.1. Creating resources in Azure

Complete the following procedure to create an availability set. You need this resource to complete subsequent tasks in this chapter.

Procedure

  • Create an availability set. All cluster nodes must be in the same availability set.

    $ az vm availability-set create --name _MyAvailabilitySet_ --resource-group _MyResourceGroup_

    Example:

    [clouduser@localhost]$ az vm availability-set create --name rhelha-avset1 --resource-group azrhelclirsgrp
    {
      "additionalProperties": {},
        "id": "/subscriptions/.../resourceGroups/azrhelclirsgrp/providers/Microsoft.Compute/availabilitySets/rhelha-avset1",
        "location": "southcentralus",
        "name": “rhelha-avset1",
        "platformFaultDomainCount": 2,
        "platformUpdateDomainCount": 5,
    
    ...omitted

2.2. Creating an Azure Active Directory Application

Complete the following procedures to create an Azure Active Directory (AD) Application. The Azure AD Application authorizes and automates access for HA operations for all nodes in the cluster.

Prerequisites

You need to install the Azure Command Line Interface (CLI).

Procedure

  1. Ensure you are an Administrator or Owner for the Microsoft Azure subscription. You need this authorization to create an Azure AD application.
  2. Log in to your Azure account.

    $ az login
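
    If you need the subscription ID for the --scopes option in the next step, you can retrieve it with the following command (a minimal sketch that uses the Azure CLI built-in --query filter and tsv output):

    $ az account show --query id --output tsv
    2586c64b-xxxxxx-xxxxxxx-xxxxxxx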
  3. Enter the following command to create the Azure AD Application. To use your own password, add the --password option to the command. Ensure that you create a strong password.

    $ az ad sp create-for-rbac --name _FencingApplicationName_ --role owner --scopes "/subscriptions/_SubscriptionID_/resourceGroups/_MyResourceGroup_"

    Example:

    [clouduser@localhost ~] $ az ad sp create-for-rbac --name FencingApp --role owner --scopes "/subscriptions/2586c64b-xxxxxx-xxxxxxx-xxxxxxx/resourceGroups/azrhelclirsgrp"
    Retrying role assignment creation: 1/36
    Retrying role assignment creation: 2/36
    Retrying role assignment creation: 3/36
    {
      "appId": "1a3dfe06-df55-42ad-937b-326d1c211739",
      "displayName": "FencingApp",
      "name": "http://FencingApp",
      "password": "43a603f0-64bb-482e-800d-402efe5f3d47",
      "tenant": "77ecefb6-xxxxxxxxxx-xxxxxxx-757a69cb9485"
    }
  4. Save the following information before proceeding. You need this information to set up the fencing agent.

    • Azure AD Application ID
    • Azure AD Application Password
    • Tenant ID
    • Microsoft Azure Subscription ID

2.3. Installing the Red Hat HA packages and agents

Complete the following steps on all nodes.

Procedure

  1. Register the VM with Red Hat.

    $ sudo -i
    # subscription-manager register --auto-attach
  2. Disable all repositories.

    # subscription-manager repos --disable=*
  3. Enable the RHEL 7 Server and RHEL 7 Server HA repositories.

    # subscription-manager repos --enable=rhel-7-server-rpms
    # subscription-manager repos --enable=rhel-ha-for-rhel-7-server-rpms
  4. Update all packages.

    # yum update -y
  5. Reboot if the kernel is updated.

    # reboot
  6. Install the pcs, pacemaker, Azure fence agent, resource agent, and nmap-ncat packages.

    # yum install -y pcs pacemaker fence-agents-azure-arm resource-agents nmap-ncat
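
    Optionally, confirm that the packages are installed by querying the RPM database:

    # rpm -q pcs pacemaker fence-agents-azure-arm resource-agents nmap-ncat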

2.4. Configuring HA services

Complete the following steps on all nodes.

Procedure

  1. The user hacluster was created during the pcs and pacemaker installation in the previous section. Create a password for hacluster on all cluster nodes. Use the same password for all nodes.

    # passwd hacluster
  2. Add the high availability service to the RHEL firewall if firewalld.service is enabled.

    # firewall-cmd --permanent --add-service=high-availability
    # firewall-cmd --reload
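
    Optionally, verify that the service was added. The high-availability service should appear in the output of the following command:

    # firewall-cmd --list-services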
  3. Start the pcsd service and enable it to start on boot.

    # systemctl enable pcsd.service --now

Verification step

  • Ensure the pcsd service is running.

    # systemctl is-active pcsd.service
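
    If the service is running, the command prints the following:

    [root@node01 ~]# systemctl is-active pcsd.service
    active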

2.5. Creating a cluster

Complete the following steps to create the cluster of nodes.

Procedure

  1. On one of the nodes, enter the following command to authenticate the pcs user hacluster. Specify the name of each node in the cluster.

    # pcs cluster auth _hostname1_ _hostname2_ _hostname3_

    Example:

    [root@node01 clouduser]# pcs cluster auth node01 node02 node03
    Username: hacluster
    Password:
    node01: Authorized
    node02: Authorized
    node03: Authorized
  2. Create the cluster.

    # pcs cluster setup --name _clustername_ _hostname1_ _hostname2_ _hostname3_

    Example:

    [root@node01 clouduser]# pcs cluster setup --name newcluster node01 node02 node03
    
    ...omitted
    
    Synchronizing pcsd certificates on nodes node01, node02, node03...
    node02: Success
    node03: Success
    node01: Success
    Restarting pcsd on the nodes in order to reload the certificates...
    node02: Success
    node03: Success
    node01: Success

Verification steps

  1. Enable the cluster.

    # pcs cluster enable --all
  2. Start the cluster.

    # pcs cluster start --all

    Example:

    [root@node01 clouduser]# pcs cluster enable --all
    node02: Cluster Enabled
    node03: Cluster Enabled
    node01: Cluster Enabled
    
    [root@node01 clouduser]# pcs cluster start --all
    node02: Starting Cluster...
    node03: Starting Cluster...
    node01: Starting Cluster...

2.6. Creating a fence device

Complete the following steps to configure fencing from any node in the cluster.

Procedure

  1. Identify the available instances that can be fenced.

    # fence_azure_arm -l [appid] -p [authkey] --resourceGroup=[name] --subscriptionId=[name] --tenantId=[name] -o list

    Example:

    [root@node1 ~]# fence_azure_arm -l XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX -p XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX --resourceGroup=hacluster-rg --subscriptionId=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX --tenantId=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX -o list
    node01-vm,
    node02-vm,
    node03-vm,
  2. Create a fence device. Use the pcmk_host_map option to map the RHEL host name to the Azure VM name.

    # pcs stonith create _clusterfence_ fence_azure_arm login=_AD-Application-ID_ passwd=_AD-passwd_ pcmk_host_map="_pcmk-host-map_" resourceGroup=_myresourcegroup_ tenantId=_tenantid_ subscriptionId=_subscriptionid_
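
    Example (a hedged sketch with placeholder node names, resource group, and masked IDs; the attribute names follow the verification output below):

    [root@node01 ~]# pcs stonith create clusterfence fence_azure_arm login=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX passwd=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX pcmk_host_map="node01:node01-vm;node02:node02-vm;node03:node03-vm" resourceGroup=hacluster-rg tenantId=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX subscriptionId=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX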

Verification steps

  1. Test the fencing agent for one of the other nodes.

    # pcs stonith fence _azurenodename_

    Example:

    [root@node01 ~]# pcs stonith fence fenceazure
     Resource: fenceazure (class=stonith type=fence_azure_arm)
      Attributes: login=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX passwd=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX pcmk_host_map=nodea:nodea-vm;nodeb:nodeb-vm;nodec:nodec-vm pcmk_reboot_retries=4 pcmk_reboot_timeout=480 power_timeout=240 resourceGroup=rg subscriptionId=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX tenantId=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
      Operations: monitor interval=60s (fenceazure-monitor-interval-60s)
    [root@node01 ~]# pcs stonith
     fenceazure     (stonith:fence_azure_arm):      Started nodea
  2. Check the status to verify the node started.

    # watch pcs status

    Example:

    [root@node01 ~]# watch pcs status
     fenceazure     (stonith:fence_azure_arm):      Started nodea

2.7. Creating an Azure internal load balancer

The Azure internal load balancer removes cluster nodes that do not answer health probe requests.

Perform the following procedure to create an Azure internal load balancer. Each step references a specific Microsoft procedure and includes the settings for customizing the load balancer for HA.

Prerequisites

Access to the Microsoft Azure portal

Procedure

  1. Create a basic load balancer. Select Internal load balancer, the Basic SKU, and Dynamic for the type of IP address assignment.
  2. Create a backend address pool. Associate the backend pool with the availability set that you created in Creating resources in Azure. Do not set any target network IP configurations.
  3. Create a health probe. For the health probe, select TCP and enter port 61000. You can use a TCP port number that does not interfere with another service. For certain HA product applications, for example, SAP HANA and SQL Server, you may need to work with Microsoft to identify the correct port to use.
  4. Create a load balancer rule. Use the default values that are prepopulated for the rule, and ensure that Floating IP (direct server return) is set to Enabled. An Azure CLI sketch of these steps follows this procedure.
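
The preceding steps use the Azure portal. The following is a rough equivalent using the Azure CLI. The load balancer, frontend, backend pool, probe, and rule names, as well as the port 80 service in the rule, are hypothetical; verify the options against the current az network lb documentation before using them.

    $ az network lb create -g azrhelclirsgrp -n rhelha-lb --sku Basic --vnet-name rhelha-vnet --subnet rhelha-subnet --frontend-ip-name rhelha-frontend --backend-pool-name rhelha-backend
    $ az network lb probe create -g azrhelclirsgrp --lb-name rhelha-lb -n rhelha-probe --protocol tcp --port 61000
    $ az network lb rule create -g azrhelclirsgrp --lb-name rhelha-lb -n rhelha-rule --protocol tcp --frontend-port 80 --backend-port 80 --frontend-ip-name rhelha-frontend --backend-pool-name rhelha-backend --probe-name rhelha-probe --floating-ip true

Associating the cluster node NICs with the backend address pool is a separate step, for example with az network nic ip-config address-pool add.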

2.8. Configuring the Azure load balancer resource agent

After you have created the health probe, you must configure the load balancer resource agent. This resource agent runs a service that answers health probe requests from the Azure load balancer and removes cluster nodes that do not answer requests.

Procedure

  1. Enter the following command to view the description of the Azure load balancer resource agent, azure-lb. This shows the options and default operations for this agent.

    # pcs resource describe azure-lb
  2. Create an IPaddr2 resource for managing the IP on the node.

    # pcs resource create _resource-id_ IPaddr2 ip=_virtual/floating-ip_ cidr_netmask=_virtual/floating-mask_ --group _group-id_ nic=_network-interface_ op monitor interval=30s

    Example:

    [root@node01 ~]# pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=172.16.66.99 cidr_netmask=24 --group CloudIP nic=eth0 op monitor interval=30s
  3. Configure the load balancer resource agent.

    # pcs resource create _resource-loadbalancer-name_ azure-lb port=_port-number_ --group _cluster-resources-group_
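
    Example (a minimal sketch; the resource name AzureLB is hypothetical, the port matches the 61000 health probe created earlier, and CloudIP is the group from the previous step):

    [root@node01 ~]# pcs resource create AzureLB azure-lb port=61000 --group CloudIP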

Verification step

  • Run the pcs status command to see the results.

    # pcs status

    Example:

    [root@node01 ~]# pcs status
    Cluster name: hacluster
    
    WARNINGS:
    No stonith devices and stonith-enabled is not false
    
    Stack: corosync
    Current DC: nodeb (version 1.1.22-1.el7-63d2d79005) - partition with quorum
    Last updated: Wed Sep  9 16:47:07 2020
    Last change: Wed Sep  9 16:44:32 2020 by hacluster via crmd on nodeb
    
    3 nodes configured
    0 resource instances configured
    
    Online: [ node01 node02 node03 ]
    
    No resources
    
    
    Daemon Status:
      corosync: active/enabled
      pacemaker: active/enabled
      pcsd: active/enabled

2.9. Configuring shared block storage

This section provides an optional procedure for configuring shared block storage for a Red Hat High Availability cluster with Microsoft Azure Shared Disks. The procedure assumes three Azure VMs (a three-node cluster) with a 1TB shared disk.

Note

This is a stand-alone sample procedure for configuring block storage. The procedure assumes that you have not yet created your cluster.

Prerequisites

You need to install the Azure command line interface (CLI).

Procedure

  1. Create a shared block volume using the Azure command az disk create.

    $ az disk create -g resource_group -n shared_block_volume_name --size-gb disk_size --max-shares number_vms -l location

    For example, the following command creates a shared block volume named shared-block-volume.vhd in the resource group sharedblock-rg in the westcentralus location.

    $ az disk create -g sharedblock-rg -n shared-block-volume.vhd --size-gb 1024 --max-shares 3 -l westcentralus
    
    {
      "creationData": {
        "createOption": "Empty",
        "galleryImageReference": null,
        "imageReference": null,
        "sourceResourceId": null,
        "sourceUniqueId": null,
        "sourceUri": null,
        "storageAccountId": null,
        "uploadSizeBytes": null
      },
      "diskAccessId": null,
      "diskIopsReadOnly": null,
      "diskIopsReadWrite": 5000,
      "diskMbpsReadOnly": null,
      "diskMbpsReadWrite": 200,
      "diskSizeBytes": 1099511627776,
      "diskSizeGb": 1024,
      "diskState": "Unattached",
      "encryption": {
        "diskEncryptionSetId": null,
        "type": "EncryptionAtRestWithPlatformKey"
      },
      "encryptionSettingsCollection": null,
      "hyperVgeneration": "V1",
      "id": "/subscriptions/12345678910-12345678910/resourceGroups/sharedblock-rg/providers/Microsoft.Compute/disks/shared-block-volume.vhd",
      "location": "westcentralus",
      "managedBy": null,
      "managedByExtended": null,
      "maxShares": 3,
      "name": "shared-block-volume.vhd",
      "networkAccessPolicy": "AllowAll",
      "osType": null,
      "provisioningState": "Succeeded",
      "resourceGroup": "sharedblock-rg",
      "shareInfo": null,
      "sku": {
        "name": "Premium_LRS",
        "tier": "Premium"
      },
      "tags": {},
      "timeCreated": "2020-08-27T15:36:56.263382+00:00",
      "type": "Microsoft.Compute/disks",
      "uniqueId": "cd8b0a25-6fbe-4779-9312-8d9cbb89b6f2",
      "zones": null
    }
  2. Verify that you have created the shared block volume using the Azure command az disk show.

    $ az disk show -g resource_group -n shared_block_volume_name

    For example, the following command shows details for the shared block volume shared-block-volume.vhd within the resource group sharedblock-rg.

    $ az disk show -g sharedblock-rg -n shared-block-volume.vhd
    
    {
      "creationData": {
        "createOption": "Empty",
        "galleryImageReference": null,
        "imageReference": null,
        "sourceResourceId": null,
        "sourceUniqueId": null,
        "sourceUri": null,
        "storageAccountId": null,
        "uploadSizeBytes": null
      },
      "diskAccessId": null,
      "diskIopsReadOnly": null,
      "diskIopsReadWrite": 5000,
      "diskMbpsReadOnly": null,
      "diskMbpsReadWrite": 200,
      "diskSizeBytes": 1099511627776,
      "diskSizeGb": 1024,
      "diskState": "Unattached",
      "encryption": {
        "diskEncryptionSetId": null,
        "type": "EncryptionAtRestWithPlatformKey"
      },
      "encryptionSettingsCollection": null,
      "hyperVgeneration": "V1",
      "id": "/subscriptions/12345678910-12345678910/resourceGroups/sharedblock-rg/providers/Microsoft.Compute/disks/shared-block-volume.vhd",
      "location": "westcentralus",
      "managedBy": null,
      "managedByExtended": null,
      "maxShares": 3,
      "name": "shared-block-volume.vhd",
      "networkAccessPolicy": "AllowAll",
      "osType": null,
      "provisioningState": "Succeeded",
      "resourceGroup": "sharedblock-rg",
      "shareInfo": null,
      "sku": {
        "name": "Premium_LRS",
        "tier": "Premium"
      },
      "tags": {},
      "timeCreated": "2020-08-27T15:36:56.263382+00:00",
      "type": "Microsoft.Compute/disks",
      "uniqueId": "cd8b0a25-6fbe-4779-9312-8d9cbb89b6f2",
      "zones": null
    }
  3. Create three network interfaces using the Azure command az network nic create. Run the following command three times using a different <nic_name> for each.

    $ az network nic create -g resource_group -n nic_name --subnet subnet_name --vnet-name virtual_network --location location --network-security-group network_security_group --private-ip-address-version IPv4

    For example, the following command creates a network interface with the name sharedblock-nodea-vm-nic-protected.

    $ az network nic create -g sharedblock-rg -n sharedblock-nodea-vm-nic-protected --subnet sharedblock-subnet-protected --vnet-name sharedblock-vn --location westcentralus --network-security-group sharedblock-nsg --private-ip-address-version IPv4
  4. Create three virtual machines and attach the shared block volume using the Azure command az vm create. Option values are the same for each VM except that each VM has its own <vm_name>, <new_vm_disk_name>, and <nic_name>.

    $ az vm create -n vm_name -g resource_group --attach-data-disks shared_block_volume_name --data-disk-caching None --os-disk-caching ReadWrite --os-disk-name new_vm_disk_name --os-disk-size-gb disk_size --location location --size virtual_machine_size --image image_name --admin-username vm_username --authentication-type ssh --ssh-key-values ssh_key --nics nic_name --availability-set availability_set --ppg proximity_placement_group

    For example, the following command creates a virtual machine named sharedblock-nodea-vm.

    $ az vm create -n sharedblock-nodea-vm -g sharedblock-rg --attach-data-disks shared-block-volume.vhd --data-disk-caching None --os-disk-caching ReadWrite --os-disk-name sharedblock-nodea-vm.vhd --os-disk-size-gb 64 --location westcentralus --size Standard_D2s_v3 --image /subscriptions/12345678910-12345678910/resourceGroups/sample-azureimagesgroupwestcentralus/providers/Microsoft.Compute/images/sample-azure-rhel-7.0-20200713.n.0.x86_64 --admin-username sharedblock-user --authentication-type ssh --ssh-key-values @sharedblock-key.pub --nics sharedblock-nodea-vm-nic-protected --availability-set sharedblock-as --ppg sharedblock-ppg
    
    {
      "fqdns": "",
      "id": "/subscriptions/12345678910-12345678910/resourceGroups/sharedblock-rg/providers/Microsoft.Compute/virtualMachines/sharedblock-nodea-vm",
      "location": "westcentralus",
      "macAddress": "00-22-48-5D-EE-FB",
      "powerState": "VM running",
      "privateIpAddress": "198.51.100.3",
      "publicIpAddress": "",
      "resourceGroup": "sharedblock-rg",
      "zones": ""
    }

Verification steps

  1. For each VM in your cluster, verify that the block device is available by using the SSH command with your VM <ip_address>.

    # ssh ip_address "hostname ; lsblk -d | grep ' 1T '"

    For example, the following command lists details including the host name and block device for the VM IP 198.51.100.3.

    # ssh 198.51.100.3 "hostname ; lsblk -d | grep ' 1T '"
    
    nodea
    sdb    8:16   0    1T  0 disk
  2. Use the SSH command to verify that each VM in your cluster uses the same shared disk.

    # ssh ip_address "hostname ; lsblk -d | grep ' 1T ' | awk '{print \$1}' | xargs -i udevadm info --query=all --name=/dev/{} | grep '^E: ID_SERIAL='"

    For example, the following command lists details including the host name and shared disk volume ID for the instance IP address 198.51.100.3.

    # ssh 198.51.100.3 "hostname ; lsblk -d | grep ' 1T ' | awk '{print \$1}' | xargs -i udevadm info --query=all --name=/dev/{} | grep '^E: ID_SERIAL='"
    
    nodea
    E: ID_SERIAL=3600224808dd8eb102f6ffc5822c41d89

After you have verified that the shared disk is attached to each VM, you can configure resilient storage for the cluster. For information on configuring resilient storage for a Red Hat High Availability cluster, see Configuring a GFS2 File System in a Cluster. For general information on GFS2 file systems, see Configuring and managing GFS2 file systems.
