Red Hat Virtualization (RHV) Definitions, Requirements, and Installation

Mindwatering Incorporated

Author: Tripp W Black

Created: 10/07 at 04:33 PM

 

Category:
Linux
RHV

Definitions:
Host: Hardware that runs the virtualization, either as a Type 2 hypervisor (add-on service on a general-purpose OS) or a Type 1 (integrated hypervisor OS).

Guest: VM or Container (Pod of Containers actually) running on the Host

Hypervisor: Software that partitions the host hardware into multiple VMs (CPU, Memory, Storage/Disk, and Networking) and runs them

HA: High Availability - If one host goes offline (isolation or hardware failure), the VMs can be re-started on the remaining "up" hosts.


Red Hat Virtualization Concepts/Definitions:
RHV - Open source virtualization platform that provides centralized management of hosts, VMs, and VDs (virtual desktops) across an enterprise datacenter. It consists of three major components:
- RHV-M - Manager Administrative Portal
- - "Hosted Engine" VM
- - Administrative Portal provides controls for management of the physical and virtual resources in a RHV environment, RHV-M also exposes the REST APIs and SDKs for various programming languages.
- Physical hosts - RHV-H (RHV self-hosted engine hosts/hypervisors - type 1)
- - kernel-based KVM hypervisor, requiring hardware virtualization extensions (Intel VT-x or AMD-V) and also the No eXecute (NX) flag. IBM POWER8 is also supported.
- - Anaconda for installation, LVM for image management, and the RHV-H's web console for local administration and monitoring
- Storage domains
- - Data domain is a centrally accessed repository for VM disks and images, ISO files, and other data accessible to all hosts in a RHV data-center. NFS, iSCSI, and others are used for storage domains.

Other components:
- Remote Viewer
- - client administrative workstation viewer to access consoles of RHV virtual machines. On RHEL client systems, it is installed with the spice-xpi package which installs the Remote Viewer and its required plugins.

Minimum Host Requirements:
- 2 GB RAM minimum; up to 4 TB supported
- CPU with Intel VT-x or AMD-V and the NX flag
- Minimum storage is 55 GB
- /var/tmp at least 5 GB
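Quick host capability check (a sketch, not from the RHV docs; standard /proc/cpuinfo flag names assumed):
[root@hostname ~]# grep -cwE 'vmx|svm' /proc/cpuinfo
<view output - a count greater than 0 means Intel VT-x (vmx) or AMD-V (svm) is exposed>
[root@hostname ~]# grep -cw nx /proc/cpuinfo
<view output - a count greater than 0 means the No eXecute (NX) flag is present>
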
Minimum Host Network Requirements:
- 1 NIC @ 1 Gbps
- - However min 3 recommended: 1 for mgmt traffic, 1 for VM guest traffic, and 1 for data domain storage traffic
- DNS and NTP servers must not run as VMs within the RHV environment, since hosts come up before their VMs and must have working forward and reverse DNS entries.
- RHV-H firewall is auto-configured for required network services

Storage Domain:
- Also called a RHEV Storage Domain. On block-based storage it is represented as a Volume Group (VG). Each VM within the VG has its own LV (Logical Volume), which becomes the VM's disk. Performance degrades with high numbers of LVs in a VG (300+), so the soft limit is 300. For more scalability, create additional Storage Domains (VGs). See RH Technote 441203 for performance limits. When a snapshot is created, a new LV is created for the VM within the Storage Domain's VG. (See the LVM example after this list.)
- Types:
- - Data Domain: stores the hard disk images of the VMs (the LVs) and VM templates. Data Domains can utilize NFS, iSCSI, FCP, GlusterFS (deprecated), and POSIX storage. A data domain cannot be shared between data centers.
- - Export Domain: (deprecated) stores VM disk images (LVs) and VM templates for transfer between data centers, and where backups of VMs are copied. Export Domains are NFS. Multiple data centers can access a single export domain, but it can be used by only one at a time.
- - ISO Domain: (deprecated) stores disk images for installations
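Illustration of the VG/LV layout (a sketch; assumes a block-based iSCSI or FC data domain and shell access on a RHV-H host - NFS domains store images as files instead):
[root@rhvhosta ~]# vgs
<view output - each block-based storage domain appears as a VG named by the storage domain UUID>
[root@rhvhosta ~]# lvs <storage-domain-VG-UUID>
<view output - each VM disk and snapshot appears as an LV named by its image UUID>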

Data Domain: hosted_storage
Management Logical Network: ovirtmgmt
DataCenter: default
Cluster: default

RHV-H = Host - Host Types:
Engine Host: Host w/Manager VM. Two hosts running manager (engines) = HA capable
Guest Host: Host running VMs

Two ways to get to a working RHV-H host/hypervisor:
- RHV-H ISO (or other methods)
- RHEL linux host with the Virtualization repository packages and modules added.

Hosts talk to the RHV-M VM (or separate server) via the Virtual Desktop and Server Manager (VDSM)
- Monitors memory, storage, and networks
- Creates, migrates, and destroys VMs

RHV-H - Red Hat Virtualization Host:
- Is a standalone minimal operating system based on RHEL
- Includes a graphical web admin management interface
- Installs via ISO file, USB storage, PXE network distribution, or by cloning

RHHI-V - Red Hat Hyper-converged Infrastructure for Virtualization
- Uses "self-hosted" engine
- Gluster Storage


RHV Installation Methods:
- Standalone = Manager separate (not hosted on the hosts = not self hosted)
or
- Self-Hosted Engine = Manager runs as a VM on the first host, deployed after that host is installed.

Host Graphical UI:
https://rhvhosta.mindwatering.net:9090
- accept the self-cert

When RHV-H hosts are installed (e.g. ISO) manually, they have to be subscription registered and enabled:
[root@hostname ~]# subscription-manager repos --enable=rhel-8-server-rhvh-4-rpms
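A fuller registration sketch (the pool ID is a placeholder; the repo ID shown is from this environment and may differ by RHV version):
[root@hostname ~]# subscription-manager register --username=<rhn_user>
[root@hostname ~]# subscription-manager list --available
<view output - note the pool ID carrying the RHV entitlement>
[root@hostname ~]# subscription-manager attach --pool=<pool_id>
[root@hostname ~]# subscription-manager repos --enable=rhel-8-server-rhvh-4-rpms
[root@hostname ~]# yum repolist enabled
<view output - confirm the RHV-H repo is listed>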


RHV-M - Red Hat Virtualization Manager:
- Integrates w/various Directory services (JBOSS --> LDAP) for user management
- Manages physical and virtual resources in a RHV environment
- Uses local PostgreSQL db for config engine (engine) and the data warehouse (ovirt-engine-history) databases

Manager Standalone Installation Order:
1. Manager installed on separate server
2. Install hosts (min. 2 for HA)
3. Connect hosts to Manager
4. Attach storage accessible to all hosts

Self-Hosted Engine Installation Order:
1. Install first self-hosted engine host
- subscribe host to entitlements for RHV and enable software repos
- confirm DNS forward and reverse working
2. Create Manager VM on host
- create via the host graphical web console
or
- create on the host via the hosted-engine --deploy command. The GUI is the recommended method (see the command sketch after this list).
3. Attach storage accessible to all hosts
- Back-end storage is typically NFS or iSCSI. The storage attached becomes the "Default" data center and the "default" cluster. This default storage contains the LV/storage of the RHV-M VM created.
- This also illustrates that the RHV-M VM is actually created after the storage is attached
4. Install additional self-hosted engine hosts (min. 2 for HA)
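Command-line sketch of step 2 above (the GUI remains the recommended path; the interactive prompts vary by version and are omitted):
[root@rhvhosta ~]# hosted-engine --deploy
<answer the prompts for engine FQDN, storage, network, and the Manager admin password>
[root@rhvhosta ~]# hosted-engine --vm-status
<view output - confirm the engine VM is up and engine health is "good" before adding more hosts>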

RHV-M Minimum Requirements:
- CPU: 2 core, quad core recommended
- Memory: 16 GB recommended, but can run with 4 GB if data warehouse is not installed, and memory not being consumed by existing processes
- Storage: 25 GB locally accessible/writable, but 50 GB or more recommended
- Network: 1 NIC, 1 Gbps min.

RHV-M Administration Portal:
https://rhvmgr.mindwatering.net
or
https://rhvmgr.mindwatering.net/ovirt-engine/webadmin/
User: admin
Password: <pwd set before clicking start button>
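Since RHV-M also exposes REST APIs (see components above), a quick hedged check of the API with the same admin credentials (admin@internal profile assumed):
$ curl -k -u admin@internal:<pwd> https://rhvmgr.mindwatering.net/ovirt-engine/api
<view XML output - confirm the product name and version are returned>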

Verification of RHV-M appliance services:
$ ssh root@rhvmgr.mindwatering.net
<enter root pwd>
[root@rhvmgr ~]# host rhvmgr.mindwatering.net
<view output - confirm name and IP, note IP for next validation>
[root@rhvmgr ~]# host -t PTR <IP shown above>
<view output - confirm IP resolves to name>
[root@rhvmgr ~]# ip add show eth0
<view output of NIC config - confirm IP, broadcast, mtu, and up>
[root@rhvmgr ~]# free
<view output - confirm memory and swap sufficient to minimums>
[root@rhvmgr ~]# lscpu | grep 'CPU(s)'
<view output - confirm number of cores>
[root@rhvmgr ~]# grep -A1 localhost /usr/share/ovirt-engine-dwh/services/ovirt-engine-dwhd/ovirt-engine-dwhd.conf | grep -v '^#'
<view output - confirm PostgreSQL db listening and note ports>
[root@rhvmgr ~]# systemctl is-active ovirt-engine.service
<view output - confirm says "active">
[root@rhvmgr ~]# systemctl is-enabled ovirt-engine.service
<view output - confirm says "enabled">

Download of RHV-M CA root certificate for browser trust/install:
http://rhvmgr.mindwatering.net/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA

If uploads (ISOs, etc.) fail, check whether the browser has the Manager CA certificate installed.
e.g. "Connection to ovirt-imageio-proxy service has failed. Make sure the service is installed, configured, and ovirt-engine-certificate is registered as a valid CA in the browser."

NFS Export Configuration for Storage Domains
- the storage server must not be one of the hosted VMs, since it must be up before the hosts are started
- read/write mode
- edit config file as either:
- - /etc/exports
or
- - /etc/exports.d/rhv.exports
- ownership:
- - top-level directory owned by user vdsm (UID 36) and group kvm (GID 36), with 755 access:
- - - user vdsm has rwx access (7)
- - - group kvm has rx access (5)
- - - other has rx access only (5)
- ensure NFS server service running (e.g. nfs-server.service enabled and active/running)
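Minimal sketch of the export and ownership setup above (export path and client scope are examples; tighten the client list for production):
[root@rvhstor1 ~]# mkdir -p /exports/hosted_engine
[root@rvhstor1 ~]# chown 36:36 /exports/hosted_engine
[root@rvhstor1 ~]# chmod 755 /exports/hosted_engine
[root@rvhstor1 ~]# echo '/exports/hosted_engine *(rw)' >> /etc/exports.d/rhv.exports
[root@rvhstor1 ~]# exportfs -ra
[root@rvhstor1 ~]# exportfs
<view output - confirm the new export is listed>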

Verification of Export:
$ ssh root@rvhstor1.mindwatering.net
<enter root pwd>
[root@rvhstor1 ~]# systemctl is-active nfs-server.service
<view output - confirm active>
[root@rvhstor1 ~]# systemctl is-enabled nfs-server.service
<view output - confirm enabled>
[root@rvhstor1 ~]# firewall-cmd --get-active-zones
<view output - e.g. public>
[root@rvhstor1 ~]# firewall-cmd --list-all --zone=public
<view output - verify ports and services for nfs are included>
[root@rvhstor1 ~]# exportfs
<view output - verify that the /exports/<datanames> are listed and match what will be used in RHV e.g. hosted_engine>
[root@rvhstor1 ~]# ls -ld /exports/hosted_engine/
<view output - check permissions and ownership - e.g. drwxr-xr-x. 3 vdsm kvm 76 Mon 12 12:34 /exports/hosted_engine/>

Console View Apps for Administrative workstations using GUI consoles:
- Linux:
- - virt-viewer
- - $ sudo yum -y install virt-viewer
Note: documentation also says installed via spice-xpi package.

- MS Windows:
- - Download both the viewer and the USB device add-on from the RHV-M appliance:
- - - https://rhvmgr.mindwatering.net/ovirt-engine/services/files/spice/virt-viewer-x64.msi (64-bit)
- - - https://rhvmgr.mindwatering.net/ovirt-engine/services/files/spice/usbdk-x64.msi (64-bit)
- The viewer supports a headless mode and VNC as an option
- Viewer SPICE supports file transfer and clipboard copy-and-paste
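Usage sketch (the console.vv file name/path is an example; it is what the Administration or VM Portal downloads when Console is clicked):
$ remote-viewer ~/Downloads/console.vv
or, for a direct connection when the host and display port are known:
$ remote-viewer spice://rhvhosta.mindwatering.net:5900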

Creating VM Notes:
- Disk Interface Types:
- - IDE: oldest and slowest, use for older OS and disk compatibility, not recommended
- - VirtIO: /dev/vdX drives, much faster than IDE, but more limited than SCSI. Use when advanced features are not needed.
- - VirtIO-SCSI: /dev/sdX drives, improves scalability, replaces the virtio-blk driver, can connect directly to SCSI LUNs and handles 100s of devices on single controller
- Allocation Policy - both thin and thick options
- - Same as vSphere: thick means the whole disk is formatted and allocated up front; thin/sparse allocates only the initial space needed, and the rest is allocated as used.
- - Thin may be a little slower as disk usage requires new allocations to be zeroed before new data blocks are written.

- Boot Notes:
- Run Once parameters intentionally persist THROUGH reboots, because software installation often requires it. You must shut down the VM before Run Once reverts to the normal boot configuration.


Data Centers and Clusters:
Data center:
- Top level organizational object in RHV
- Contains all the physical and logical resources in a single, managed, virtual environment; includes clusters, hosts, logical networks, and storage domains
- All hosts and clusters in the data center share the same storage and networks
- First auto-created data center is named "default"
- Future data centers created in the Administration Portal:
- - Administrative Portal --> Compute (left menu) --> Data Centers (menu tab) --> New (button)
- - - Name - Enter a unique sensible name
- - - Storage Type - If you change Type to Local, you will create a single host datacenter that can only use local storage
- - - Compatibility - Choose current (default) if all the clusters and hosts are the current version; otherwise, choose the current version of the hosts that are not yet upgraded
- - - Click OK

Data Center - Guide Me
- Wizard that guides creation of new clusters, hosts, and storage, attaching existing storage, configuring existing storage, and configuring ISO libraries or reconfiguring an ISO library
- A new datacenter is "Uninitialized" until first host and storage are added and the RHV-M can confirm they are usable; then the status is changed to "Up"

Cluster:
- Group of hosts in a single data center with the same architecture and CPU model. A cluster is a "migration domain", such that VMs can live migrate only to other hosts in "this" cluster
- Clusters have a CPU Type family feature set that is shared by all hosts in the cluster - not necessarily the same exact CPU, just the same generation and family
- All clusters must be configured with the same resources, including logical networks and storage domains.
- Stopped VMs can be cold migrated w/in the data center clusters w/o matching architecture or CPU.
- Networking can be standard "Linux Bridge", or the newer SDN Open vSwitch, "OVS (Experimental)" option, which is not yet supported for production, but popular for use anyway
- First auto-created cluster is named "default"
- Future clusters created in the Administration Portal:
Note: Create any MAC pool before creating cluster
- - Administrative Portal --> Administration (left menu) --> Configure (menu tab) --> MAC Address Pools (left menu list)
- - - Click Add (button)
- - - - Name: Enter a unique sensible name
- - - - MAC Address Ranges: Enter the starting and ending ranges in the two fields
- - - - Click OK (closes the pool window)
- - - Click Close (closes the Configure window)
- - Administrative Portal --> Compute (left menu) --> Clusters (menu tab) --> New (button)
- - - General (tab):
- - - - Datacenter - Select the data center for the new cluster
- - - - Name - Enter a unique sensible name
- - - - Management Network - If more than one management network select the correct one for the cluster/data center. The default auto-created management network to select is typically, "ovirtmgmt"
- - - - CPU Architecture - x86_64 typically
- - - - CPU Type - The cluster base version. Same as vSphere e.g. "Intel Westmere Family"
- - - - API Compatibility Version - Take default unless need to support third-party tools using an older version
- - - - Switch Type - Linux Bridge
- - - - Firewall Type - firewalld
- - - - Default Network Provider: No Default Provider
- - - - Maximum Log Memory Threshold: 95%
- - - - Enable Virt Service: Checked typically
- - - Optimization (tab)
- - - - Memory page sharing threshold, CPU thread handling, and memory ballooning settings
- - - Migration Policy (tab)
- - - - Rules for determining when existing VMs migrate automatically between hosts (specifically, Load balancing)
- - - Scheduling Policy (tab)
- - - - Rules for selecting which host new machines are placed/started
- - - Console (tab)
- - - - Console connection protocols and proxy configuration (e.g. default SPICE)
- - - Fencing (tab)
- - - - Actions for an isolated or failed/crashed host, typically to handle attached storage to limit corruption on failure
- - - MAC Address Pool
- - - - Custom MACs for use in this cluster instead of using the Data Center pool
- - - Click OK

General (tab) Other Info:
- If using an external random number generator for "entropy", you can set the cluster to use /dev/hwrng instead of /dev/urandom. If selected, all the hosts must have that hardware-based source selected and configured.

Hosts with HA and Migration:
- Pinning restricts a VM to run only on a specific host. Pinned VMs cannot be live migrated and will not auto-migrate when their host goes into maintenance mode -- they get shut down.
- - To find Pinned VMs on Host: Administration Portal --> Compute (left menu) --> Hosts (left menu tab) --> open Host --> Details (screen) --> VMs (tab) --> Pinned to Host
- If the host is the current Storage Pool Manager (SPM), the role is migrated to another host in the cluster
- See Scaling RHV --> Maintenance mode further below


User Accounts and Roles:
Summary:
- RHV authenticates users based on information from an external LDAP server
- The initial local domain is called "internal" which contains the local user accounts (e.g. admin@internal)
- Additional local users for AAP or other utility can be created with the ovirt-aaa-jdbc-tool or the Administrative Portal
- Directory support: OpenLDAP, RHEL Identity Manager (iDM), or MS AD, and others
- Directory addition adds ability for LDAP users to authenticate
- RHV uses roles that authorize them for access
- - Directory users have no authorization roles, they must be added

Under the covers:
- ovirt-engine-extension-aaa-ldap package provides LDAP support for OpenLDAP, iDM, and the others
- ovirt-engine-extension-aaa-ldap-setup package provides LDAP integration w/in RHV-M
- These must be installed in the RHV-M VM

Add External LDAP RHEL iDM:
- iDM is based on the upstream FreeIPA project
- Gather LDAP source info:
- - FQDN of the LDAP server or its VIP
- - Public CA certificate in PEM format
- - Existing LDAP account (service account id and pwd) to use for authentication to the LDAP iDM
- - Base DN, User DN, and any other filter required
- Run ovirt-engine-extension-aaa-ldap-setup
- - Available LDAP Implementation: choose #6 - IPA
- - Use DNS: click <enter>, for [Yes] option
- - Available Policy method: #1 - Single Server
- - Host: mwid.mindwatering.net
- - Protocol to use: click <enter>, for [startTLS]
- - Method to obtain PEM encoded CA certificate (File, URL, Inline, System, Insecure): URL
- - URL: http://mwid.mindwatering.net/ipa/config/ca.crt
- - Search user DN: uid=rhvadmin,cn=users,cn=accounts,dc=mindwatering,dc=net
- - Enter search user password: ********
- - Enter base DN: <enter> (Typically is correct as parsed from the User DN e.g. dc=mindwatering,dc=net)
- - Profile name visible to users: mindwatering.net
- - Prompt for LDAP user name: rhvadmin ..., Prompt for LDAP user pwd: ***********
- After finishing set-up, restart the service w/:
# systemctl restart ovirt-engine
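Hedged verification after the restart (these are the locations the aaa-ldap setup tool typically writes its profile files to):
[root@rhvmgr ~]# ls /etc/ovirt-engine/aaa/ /etc/ovirt-engine/extensions.d/
<view output - confirm the new profile's .properties files exist>
[root@rhvmgr ~]# systemctl is-active ovirt-engine.service
<view output - confirm says "active">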

Add External OpenLDAP:
The set-up steps for AD, OpenLDAP, etc are very similar to steps above.

Managing User Access Summary:
- Access/authorization model: users, actions, and objects
- Actions are tasks on objects made by users (e.g. user stopping a VM)
- Actions have a permission.
- To simplify, permissions have been grouped into related user-type or object-based roles. (e.g. SuperUser (admin) or PowerUserRole or HostAdmin)
- - For example, HostAdmin on a cluster gives administrative access to all hosts w/in that specific cluster only
- Users can be assigned roles over the entire data center, or just one object w/in the datacenter (e.g. a VM)
- Default (system) roles cannot be changed or removed
- Object Inheritance:
System --> Data Center --> User, Storage Domain, and Cluster
- Data Center --> User
- Data Center --> Storage Domain --> Template
- Data Center --> Cluster --> Host and VM

User Role Types:
- Administrative Role: Users w/role can access the Administrative Portal
- - Use these roles to better manage user access and to delegate administrative authority. For example, assign SystemAdmin to specific users without giving them access to the admin@internal account. Users with roles can be properly tracked and managed for compliance.
- - Assign less comprehensive roles to appropriate users in order to offload administrative tasks. The DataCenterAdmin, ClusterAdmin, and PowerUserRole roles are useful for this purpose.
- User Role: Users w/role can access the VM Portal
- - users w/ UserRole can only see the "basic mode" of the VM Portal

Predefined User Roles:
Administrative Roles (Basic)
- SuperUser:
Role gives user full permissions across all objects and levels in your RHV environment. The admin@internal user has this role, and should be given only to the architects and engineers who create and manage the RHV

- DataCenterAdmin:
Role gives user administrative permissions across all objects in a data center, except for storage which is managed by StorageAdmin. Users with this role can manage objects in their assigned data center, but cannot create another data center or manage one not assigned

- ClusterAdmin:
Role gives user administrative permissions for all resources in a specific cluster. Users with role can administer the assigned clusters but cannot create another or manage one not assigned

Administrative Roles (Advanced)
- TemplateAdmin:
Role gives users ability to create, delete, and configure templates w/in storage domains.

- StorageAdmin:
Role gives users ability to create, delete, and manage assigned storage domains.

- HostAdmin:
Role gives users ability to create, remove, configure, and manage a host.

- NetworkAdmin:
Role gives users ability to create, remove, and edit networks of an assigned data center or cluster.

- GlusterAdmin
Role represents the permissions required for a Red Hat Gluster Storage administrator. Users with this role can create, remove, and manage Gluster storage volumes.

- VmImporterExporter
Role gives users ability to import and export virtual machines.

User Roles (Basic)
- UserRole
This role allows users to log in to the VM Portal. This role allows the user to access and use assigned virtual machines, including checking their state, and viewing virtual machine details. This role does not allow the user to administer their assigned virtual machines.

- PowerUserRole
This role gives the user permission to manage and create virtual machines and templates at their assigned level. Users with this role assigned at a data center level can create virtual machines and templates in the data center. This role allows users to self-service their own virtual machines. The PowerUserRole includes/inherits the UserVmManager role, which is added to the user when they create a VM. However, an admin can remove the "lower" role to remove this access if desired.

- UserVmManager
This role allows users to manage virtual machines, and to create and use snapshots for the VMs they are assigned, to edit the VMs' configuration, and delete the VMs. If a user creates a virtual machine using the VM Portal, that user is automatically assigned this role on the new virtual machine.

User Roles (Advanced)
- UserTemplateBasedVm
This role gives the user limited privileges to use only the virtual machine templates. Users with this role assigned can create virtual machines based on templates.

- DiskOperator
This role gives the user privileges to manage virtual disks. Users with this role assigned can use, view, and edit virtual disks.

- VmCreator
This role gives the user permission to create virtual machines using the User Portal. Users with this role assigned can create virtual machines using VM Portal.

- TemplateCreator
This role gives the user privileges to create, edit, manage, and remove templates. Users with this role assigned can create, remove, and edit templates.

- DiskCreator
This role gives the user permission to create, edit, manage, and remove virtual disks. Users with this role can create, remove, manage, and edit virtual disks within the assigned part of the environment.

- TemplateOwner
This role gives the user privileges to edit and remove templates, as well as assign user permissions for templates. It is automatically assigned to the user who creates a template.

- VnicProfileUser
This role gives the user permission to attach or detach network interfaces. Users with this role can attach or detach network interfaces from logical networks.


Role Assignment to Users:
- Assign system-wide roles using the Administration Portal:
--> Administration (menu) --> Configure (menu tab) --> System Permissions --> Add (button)
- Assign object-scoped roles at "that" object using the Administration Portal:
- - Example for data center access:
--> Administration (menu) --> Compute (menu tab) --> Data Centers --> Default (opened) --> Permissions (tab) --> Add (button) --> Complete the Add Permission to User dialog
- To remove a role, a user with the SuperUser role can return back to the same location and clear the box for the user in the dialog, and click OK.

Reset the Internal Admin account:
[root@rvhmgr ~]# ovirt-aaa-jdbc-tool user password-reset admin --password-valid-to="2026-01-01 12:00:00Z"
<enter new pwd>
<reenter new pwd>

Unlock the Internal Admin account (from too many failures):
[root@rvhmgr ~]# ovirt-aaa-jdbc-tool user unlock admin
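Sketch for creating the local utility account mentioned in the summary above (user name and attributes are examples):
[root@rvhmgr ~]# ovirt-aaa-jdbc-tool user add svc.aap --attribute=firstName=AAP --attribute=lastName=Service
[root@rvhmgr ~]# ovirt-aaa-jdbc-tool user password-reset svc.aap --password-valid-to="2026-01-01 12:00:00Z"
<enter new pwd>
<reenter new pwd>
Note: the new account still needs a role assigned (Administration Portal or API) before it can log in.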


Scaling RHV Infrastructure:
Summary:
- Add to increase capacity, or remove to reduce unneeded cluster capacity
- Place hosts into Maintenance Mode for any event that might cause VDSM to stop working on the RHV-H, e.g. host reboot, network issue repair, or storage issue repair

Host Maintenance Mode:
- Host status changes to: Preparing for Maintenance once selected, and Maintenance once achieved
- VDSM provides communication between RHV-M and the hosts, and maintenance mode disables engine health checking. VDSM continues to run on the RHV-H while in maintenance mode.
- Causes VMs to be migrated to another host in the cluster (assuming resources available on the other hosts)
- Pinned VMs are shutdown.
- If the current host is the Storage Pool Manager (SPM), its SPM role is given to another host in the cluster.
- If the host was the last host active in the data center, the data center will also be placed into maintenance mode.

Data Center Maintenance Mode:
- Placing a data center in maintenance mode will place all of its storage domains into maintenance mode.
- The SPM has no tasks to manage.
- The data center outputs no logs. Logs restart when the master storage domain becomes active again.

Moving a Host between Clusters:
- The host does NOT have to be removed and re-added from the RHV-M Administration Portal.
- Place the host in Maintenance:
- Administration Portal --> Compute (left menu) --> Hosts (left menu tab) --> View table of hosts --> select host --> On the Management button dropdown, select Maintenance.
- - Administration Portal --> Compute (left menu) --> Clusters (left menu tab) --> select cluster --> View table of hosts --> select host --> On the Management button dropdown, select Maintenance.
- Switch the cluster:
- - Administration Portal --> Compute (left menu) --> Clusters (left menu tab) --> select cluster --> View table of hosts --> select host --> Edit (button).
- - Host Cluster drop down: Select new cluster in same or different data center. Click OK to exit, click OK to confirm move.
- Activate the host from Maintenance:
- - Administration Portal --> Compute (left menu) --> Clusters (left menu tab) --> select cluster --> View table of hosts --> select host --> On the Management button dropdown, select Activate.

Removing a Host:
- Administration Portal --> Compute (left menu) --> Hosts (left menu tab) --> View table of hosts --> select host --> On the Management button dropdown, select Maintenance.
- Administration Portal --> Compute (left menu) --> Clusters (left menu tab) --> select cluster --> View table of hosts --> select host --> On the Management button dropdown, select Maintenance.
- Wait until the Remove button becomes active. Verify the host is still selected, click Remove, and OK to confirm.
- The removal only removes its cluster and data center associations in the RHV engine database of the RHV-M. The host is not wiped.
- If desired, the host can be added to any existing cluster where it meets the CPU family and version criteria.

Adding a Host:
- Local storage domains and networks are added to the host when added to a cluster.
- The host firewalld rules are auto updated by the RHV-M.
- Administration Portal --> Compute (left menu) --> Clusters (left menu tab) --> select cluster --> View table of hosts --> click New (button)
- - Name: Specify a unique meaningful name
- - Credentials: Either SSH Login Username and Password or public key access (preferred)
- - Hostname: Enter host FQDN

Automated Host Provision - Kickstart:
- Requires a network installation server (Kickstart server) with PXE (Pre-boot eXecution Environment), TFTP, and a shared Kickstart file to start an installation by booting from the network
- Requires host motherboard to be set-up/allow PXE boot
- Requires host NIC that supports PXE boot
- The Kickstart PXE server must have:
- - DHCP server to handle the initial communication, provide the network configuration (DHCP), and point to the TFTP server location for the usable boot image
- - TFTP server to provide boot images with command line options to start the installer
- - HTTP, FTP, and/or NFS server to provide the installation media and the Kickstart file for installation
- UEFI-based boot firmware requires additional files from the "shim" and "grub2-efi" packages, and a different configuration file
- - Instructions are in the RHEL 8 Installation Guide: Configuring a TFTP Server for UEFI-based AMD64 and Intel 64 Clients

PXE Boot Communication Summary:
- At boot, the client's network interface card broadcasts a DHCPDISCOVER packet extended with PXE-specific options. A DHCP server on the network replies with a DHCPOFFER, giving the client information about the PXE server and offering an IP address. When the client responds with a DHCPREQUEST, the server sends a DHCPACK with the Trivial FTP (TFTP) server URL of a file that can boot the client into an installer program.
- The client downloads the file from the TFTP server (frequently the same system as the DHCP server), verifies the file using a checksum, and loads the file. Typically, the file is a network boot loader called pxelinux.0. This boot loader accesses a configuration file on the TFTP server that tells it how to download and start the RHV-H installer, and how to locate the Kickstart file on an HTTP, FTP, or NFS server. After verification, the files are used to boot the client.
- See MW Support article: RHEL-based PXE Kickstart Network Server Setup
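Illustrative pxelinux.cfg/default entry (a sketch only; the image paths, stage2 location, and Kickstart URL are placeholders for this environment):
default rhvh
label rhvh
  kernel rhvh/vmlinuz
  append initrd=rhvh/initrd.img inst.stage2=http://ks.mindwatering.net/rhvh/ inst.ks=http://ks.mindwatering.net/ks/rhvh.ks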


Managing RHV Networks:
RHV Networks Summary:
- RHV configures logical networks that segregate different types of network traffic onto separate VLANs on physical networks for improved security and performance.
- - Example: Separate VLANs for management, storage, and VM guest traffic
- Logical networks general types:
- - VM Network (for cluster VMs)
- - Infrastructure Network (communication between RHV-M and the RHV hosts; not connected to VMs, and no Linux bridge is created for it on the RHV hosts)
- Logical networks are defined in the data center and assigned to one or more clusters. Multi-cluster networks are typical, to provide communication between VMs in different clusters.
- Logical networks have a unique name and a unique VLAN tag (VLAN ID number), are mapped to virtual vNICs, and can be configured with Quality of Service (QoS) and bandwidth limiting settings. Multiple logical networks attached to the same NIC on the hosts may share the same label (e.g. internal, client/department id, etc.).
- Logical network labels may consist only of upper and lowercase letters, underscores, and hyphens. Adding a label causes the logical network to be attached automatically to the labeled NICs on all hosts in the cluster(s) that have that logical network. Removing a label from a logical network removes the logical network(s) from all hosts with that label.
- Software defined Linux bridge is created, per logical network, on the RHV host that maps/provides connectivity between the vNICs and physical host NICs.
- VMs have vNICs assigned to a VM network; the host uses a Linux bridge to connect the VM Network to one of its NICs.
- Infrastructure networks are assigned at the cluster level and indicate what type of traffic is carried. Each host in the cluster will have a physical NIC configured for each infrastructure network.
- 1 GbE is typically sufficient for management network and the display network.
- 10 or 40 GbE is recommended for migration and storage networks, aggregated through NIC bonding or teaming

Logical Network Types:
- Management
This network role facilitates VDSM communication between the RHV-M and the RHV hosts. By default, it is created during the RHV-M engine deployment and named ovirtmgmt. It is the only logical network created automatically; all others are created according to environment requirements.

- Display
This network role is assigned to a network to carry the virtual machine display (SPICE or VNC) traffic from the Administration or VM Portal to the host running the VM. The RHV host then accesses the VM console using internal services. Display networks are not connected to virtual machine vNICs.

- VM network
Any logical network designated as a VM network carries network traffic relevant to the virtual machine network. This network is used for traffic created by VM applications and connects to VM vNICs. If applications require public access, this network must be configured to access appropriate routing and the public gateway.

- Storage
A storage network provides private access for storage traffic from RHV hosts to storage servers. Multiple storage networks can be created to further segregate file system based (NFS or POSIX) from block based (iSCSI or FCoE) traffic, to allow different performance tuning for each type. Jumbo Frames are commonly configured on storage networks. Storage networks are not a network role, but are configured to isolate storage traffic onto separate VLANs or physical NICs for performance tuning and QoS. Storage networks are not connected to virtual machine vNICs.

- Migration
This network role is assigned to handle virtual machines migration traffic between RHV hosts. Assigning a dedicated non-routed migration network ensures that the management network does not lose connection to hypervisors during network-saturating VM migrations.

- Gluster
This network role is assigned to provide traffic from Red Hat Gluster Servers to GlusterFS storage clusters.

- Fencing
Although not a network role, creating a network for isolating fencing requests ensures that these critical requests are not missed. RHV-M does not perform host fencing itself but sends fence requests to the appropriate host to execute the fencing command.

Required vs. Optional Networks:
- When created, logical networks may be designated as Required at the cluster level. By default, new logical networks are added to clusters as required networks. Required networks must be connected to every host in the cluster, and are expected to always be operational.
- When a required network becomes nonoperational for a host, that host's virtual machines are migrated to another cluster host, as specified by the current cluster migration policy. Mission-critical workloads should be configured to use required networks.
- Logical networks that are not designated as required are regarded as optional. Optional networks may be implemented only on the hosts that will use them. The presence or absence of optional networks does not affect the host's operational status.
- When an optional network becomes nonoperational for a host, that host's virtual machines that were using that network are not migrated to another host. This prevents unnecessary overhead caused by multiple, simultaneous migrations for noncritical network outages. However, a virtual machine with a vNIC configured for an optional VM network will not start on a host that does not have that network available.

RHV Logical Layers Logic Network Configuration:
- Data Center Layer
Logical networks are defined at the data center level. Each data center has the ovirtmgmt management network by default. Additional logical networks are optional but recommended. VM network designation and a custom MTU are set at the data center level. A logical network defined for a data center must be added to the clusters that use the logical network.

- Cluster Layer
Logical networks are available from the data center, and added to clusters that will use them. Each cluster is connected to the management network by default. You can add any logical networks to a cluster if they are defined for the parent data center. When a required logical network is added to a cluster, it must be implemented on each cluster host. Optional logical networks can be added to hosts as needed.

- Host Layer
Virtual machine logical networks are connected to each host in a cluster and implemented as a Linux bridge device associated with a physical network interface. Infrastructure networks do not implement Linux bridges but are directly associated with host physical NICs. When first added to a cluster, each host has a management network automatically implemented as a bridge on one of its NICs. All required networks in a cluster must be associated with a NIC on each cluster host to become operational for the cluster.

- Virtual Machine Layer
Logical networks that are available for a host are available to attach to the host's VM NICs.

Logical Network Creation:
- Documentation says:
- - Administration Portal --> Compute (left menu) Networks. But there is nothing there.
- Administration Portal --> Network (left menu) --> Network (left menu tab) --> click New (button)
- In New Logical Network window:
- - General (tab):
- - - Data center: <select>
- - - Name: <unique name based on your organization naming convention>
- - - Description: <enter useful description as desired>
- - - Network Label: <unique label>
- - - VLAN Tagging: check checkbox unless network is flat (no VLANs)
- - - VM Network: uncheck if not used as VM Network
- - - MTU: 1500 unless storage network and 9000 (jumbo frames) is required
- - Cluster (tab):
- - - Clusters: check/uncheck checkboxes of clusters whose hosts will attach/use this network. Uncheck the Required checkbox if the network is not required and is optional.
- - Click OK (button)

To segment the type of network traffic (VM, Management, Storage, Display, Migration, Gluster, and Default route), perform at the cluster level:
- Administration Portal --> Compute (left menu) --> Clusters (left menu tab) --> click cluster on table to select and open --> Logical Networks (tab) --> highlight/select network --> Manage Networks (button)

Adding Logical Networks to RHV-H Hosts:
- After addition to the cluster(s), new logical networks are attached to all hosts in those cluster(s) in a Non Operational state.
- To become Operational, the logical network must be attached to a physical NIC on every host in a cluster.

To attach Logical Network to RHV-H:
- Administration Portal --> Compute (left menu) --> Hosts (left menu tab) --> select and open host --> Network Interfaces (tab) --> Setup Host Networks (button)
- In the Setup Host <hostname> Networks window, under Networks (depressed button), click and drag the logical network listed under the Unassigned Logical Networks to the grey dotted "no network assigned" box under "Assigned Logical Networks".
- Still in the Setup Host <hostname> Networks window, click the pencil icon to edit the network configuration mapping in the Edit Management Network window and set its network parameters:
- - IPv4 (tab)
- - - Boot: Static
- - - IP: <enter ip>
- - - Netmask/Routing Prefix: 255.255.255.0
- - - Gateway: <leave empty unless this interface is routing>
- - DNS (tab)
- - - Update as needed.
- - Click OK
- Still in the Setup Host <hostname> Networks window, under Labels (depressed button), click and drag label from right to left, as was done for the Logical Network itself.
- Verify connectivity between Host and Engine: <selected>
- Save network configuration: <selected>
- Click OK
Note: If the network config is not honored, force a sync from the RHV-M and the RHV-H, click Sync All Networks (button).

External Network Providers:
- Extended/utilized through RHV-M
- External provider must use the OpenStack Neutron REST API / Red Hat OpenStack Platform (RHOSP) Networking service
- Provides an API for SDN capabilities including dynamic creation and management of switches, routers, firewalls, and external connections to physical networks
- Neutron plugins include: Cisco virtual and physical switches, NEC OpenFlow, Open vSwitch/Open Virtual Networking, Linux bridging, VMware NSX, MidoNet, OpenContrail, Open Daylight, Brocade, Juniper, etc.
- RHV 4.3 supports RHOSP versions: 10, 13, 14 with original Open vSwitch driver
- RHV 4.3 supports RHOSP versions: 13 + with Open Daylight drivers

SDN Overview:
- Software-defined networking is more than deploying virtual networking components in a virtualization or cloud environment.
- The SDN controller is the control plane component that manages network devices in the data (forwarding) plane. These network devices, such as switches, routers, and firewalls, are programmatically configured for network routes, security, subnets and bandwidth in cooperation with the cloud-native application requiring dynamic services and allocation. An SDN controller centralizes the network global view, and presents the perception of a massively scalable, logical network switch to those applications.
- Open vSwitch (OVS) can plug and unplug ports, create networks or subnets, and provide IP addressing. An Open vSwitch bridge allocates virtual ports to instances, and can span across to the physical network for incoming and outgoing traffic. Implementation is provided by OpenFlow (OF), which defines the communication protocol that enables the SDN Controller to act as the middle manager with both physical and virtual networking components, passing information to switches and routers below, and the applications and business logic above.

Enhanced Open Virtual Networking (OVN) Features:
- Enhances the OVS significantly to add native support for virtual network abstractions, such as virtual L2 and L3 overlays and security groups.
- Provides a native overlay networking solution, by shifting networking away from being handled only by Linux host networking.
- Some high level features of OVN include:
- - Provides virtual networking abstraction for OVS, implemented using L2 and L3 overlays, but can also manage connectivity to physical networks.
- - Supports flexible security policies implemented using flows that use OVS connection tracking.
- - Native support for distributed L3 routing using OVS flows, with support for both IPv4 and IPv6.
- - Native support for NAT and load balancing using OVS connection tracking.
- - Native fully distributed support for DHCP.
- - Works with any OVS datapath (such as the default Linux kernel datapath, DPDK, or Hyper-V) that support Geneve tunnels and OVS connection tracking.
- - Supports L3 gateways from logical to physical networks.
- - Supports software-based L2 gateways.
- - Can provide networking for both VMs and containers running inside of those VMs, without a second layer of overlay networking.
- Neutron Network Limitations with RHV:
- - For use as VM networks, cannot be used as Display networks
- - Always non-Required network type
- - Cannot be edited in RHV-M
- - Port mirroring is not available for vNICs connected to external provider logical networks
- - Skydive cannot support reviewing applications and reports on running processes

Integration Steps:
- Choose integration method:
- - RHOSP: Use the Red Hat OpenStack Platform director Networker role on a node, which is then added by RHV-M as a RHV-H (recommended by RH).
- - Manual installation of Neutron agents:
- - - Register each host to the following repos: rhel-7-server-rpms, rhel-7-server-rhv-4-mgmt-agent-rpms, and rhel-7-server-ansible-2-rpms
- - - Install the OpenStack Networking "hook", which VDSM invokes to pass the provider's networks (e.g. OVS) to libvirt, and allow ICMP traffic into the hosts:
- - - - # yum update && yum install vdsm-hook-openstacknet
- - - - # iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited
- Register Neutron network with RHV-M
- RHV-M automatically discovers networks and presents as Logical Networks for RHV-H assignments, and VM assignments.


Managing RHV Storage:
Storage Domain Overview:
- A collection of images with a common storage interface. A storage domain contains images of templates, virtual machines, snapshots, and ISO files.
- Is made of either block devices (iSCSI, FC) or a file system (NFS, GlusterFS).
- - On file system backed storage, all virtual disks, templates, and snapshots are files.
- - On block device backed storage, each virtual disk, template, or snapshot is a logical volume. Block devices are aggregated into a logical entity called a volume group, and then divided by LVM (Logical Volume Manager) into logical volumes for use as virtual hard disks.
- Virtual disks use either the QCOW2 or raw format. Storage can be either sparse or preallocated. Snapshots are always sparse, but can be taken for disks of either format.
- Virtual machines that share the same storage domain can migrate between hosts in the same cluster.
- One host in the data center is the Storage Pool Manager (SPM), regardless of cluster; the rest of the hosts can only read the storage domain structural metadata, while the SPM host can read and write it. All hosts can read/write to the stored images in the storage domain.

Storage Types Back Ends:
- NFS
Red Hat Virtualization (RHV) supports NFS exports to create a storage domain. NFS exports are easy to manage, and work seamlessly with RHV. RHV recognizes NFS export resizing immediately, without requiring any additional manual intervention. NFS is supported for data, ISO, and export storage domains. When enterprise NFS is deployed over 10GbE, segregated with VLANs, and individual services are configured to use specific ports, it is both fast and secure.

- iSCSI
An iSCSI-based storage domain enables RHV to use existing Ethernet networks to connect to the block devices presented as LUNs in an iSCSI target. iSCSI-based storage is only supported for data domains. In a production environment, also consider booting hosts from the enterprise grade iSCSI server. Enterprise iSCSI servers have native cloning features for easy deployment of new hosts using host templates. For optimum performance, hosts should use hardware-based iSCSI initiators and deploy over 10 GbE or faster networks.

- GlusterFS
RHV supports the native GlusterFS driver for creating a Red Hat Gluster Storage backed data storage domain. Three or more servers are configured as a Red Hat Gluster Storage server cluster, instead of using a SAN array. Red Hat Storage should be used over 10GbE and segregated with VLANs. Red Hat Gluster Storage is only supported for data domains.

- FC SAN
RHV also supports fast and secure Fibre Channel based SANs to create data storage domains. If you already have FC SAN in your environment, then you should take advantage of it for RHV. However, FC SANs require specialized network devices and skills to operate. Like iSCSI, a FC SAN also supports booting hosts directly from storage. FC SAN has the native cloning features to support easy deployment of new hosts using host templates.

- Local storage
Local storage should only be considered for small lab environments. Do not use local storage for production workloads. Local storage precludes the use of live migration, snapshots, and the flexibility that virtualization supports.

Storage Pool Manager Overview:
- The host that can make changes to the structure of the data domain is known as the Storage Pool Manager (SPM).
- The SPM coordinates all metadata changes in the data center, such as creating and deleting disk images, creating and merging snapshots, copying images between storage domains, creating templates, and storage allocation for block devices.
- There is one SPM for every data center. All other hosts can only read storage domain structural metadata.
- RHV-M identifies the SPM based on the SPM assignment history for the host. If the host was the last SPM used, RHV-M selects the host as the SPM. If the selected SPM is unresponsive, RHV-M randomly selects another potential SPM. This selection process requires the host to assume and retain a storage-centric lease, which allows the host to modify storage metadata. Storage-centric leases are saved in the storage domain, rather than in the RHV-M or hosts.
- The SPM must be running on a data center host to add and configure storage domains. An administrator must register a host (hypervisor) before setting up a new data center. Once a host is part of the data center, it is possible to configure the data center storage domains.
- In an NFS data domain, the SPM creates a virtual machine disk as a file in the file system, either as a QCOW2 file for thin provisioning (sparse), or as a normal file for preallocated storage space (RAW).
- In an iSCSI or FC data domain, the SPM creates a volume group (VG) on the storage domain's LUN, and creates a virtual machine disk as a logical volume (LV) in that volume group. For a preallocated virtual disk, the SPM creates a logical volume of the specified size (in GB). For a thin provisioned virtual disk, the SPM initially creates a 512 MB logical volume. The host on which the virtual machine is running continuously monitors the logical volume. If the host determines that more space is needed, then the host notifies the SPM, and the SPM extends the logical volume by another 512 MB.
- From a performance standpoint, a preallocated (RAW) virtual disk is significantly faster than a thin provisioned (QCOW2) virtual disk. The recommended practice is to use thin provisioning for non-I/O intensive virtual desktops, and preallocation for virtual servers.

Storage Domain Types:
- Data Storage Domain, ISO Storage Domain, and Export Storage Domain
- - The latter two are both deprecated. Use the Data Storage Domain for ISOs and to transfer VMs from a cluster to another cluster.

NFS-backed Storage Domain Configuration:
- Create the NFS export, note the server URL/export path, the username and password, and confirm permissions on the export target are correct for the username
- Administration Portal --> Storage (left menu) --> Storage Domains (left menu tab) --> click New Domain (button)
- - Data Center: <default>
- - Name: nfs-storagedomain
- - Domain Function: Data
- - Description: Default datacenter storage domain
- - Storage Type: NFS
- - Host to Use: <select host>
- - Export Path: nfs-server1.mindwatering.net:/exports/rhvstorage1
- - Custom Connection Parameters:
- - - complete as needed
- - Click OK.

Note: The selected host does not necessarily mean it will be the first SPM, but the first host that can access the new storage.
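Hedged pre-check from the selected host before clicking OK (confirms the export is visible to that host):
[root@rhvhosta ~]# showmount -e nfs-server1.mindwatering.net
<view output - confirm /exports/rhvstorage1 is in the export list>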

iSCSI-backed Storage Domain Configuration:
- Create the iSCSI target and configure its LUNs for usage. Only one storage domain at a time can utilize each iSCSI LUN.
- Administration Portal --> Storage (left menu) --> Storage Domains (left menu tab) --> click New Domain (button)
- - Fill out like NFS above, except choose iSCSI, enter the IP and port for the target, click Discover
- - Afterwards, under Discover Targets, select the desired LUN by clicking the Add button.

Note: Previously used LUNs are hidden from re-use so they are not accidentally selected again. If a LUN needs to be reused, clear its LUN ID, stop the multipathd.service on all hosts, and then start them all again.
(Use multipath -l to get the LUN IDs, then clear with dd if=/dev/zero of=/dev/mapper/<LUNID> bs=1M count=200 oflag=direct - see the expanded sketch below.)
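Expanded sketch of the LUN reuse steps (run the dd only against a LUN you are certain is safe to wipe; device names are examples):
[root@rhvhosta ~]# multipath -l
<view output - note the LUN ID/WWID of the device to reuse>
[root@rhvhosta ~]# dd if=/dev/zero of=/dev/mapper/<LUNID> bs=1M count=200 oflag=direct
[root@rhvhosta ~]# systemctl stop multipathd.service
[root@rhvhosta ~]# systemctl start multipathd.service
<repeat the stop/start on all hosts>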

GlusterFS-backed Storage Domain Configuration:
- Install glusterfs-fuse and glusterfs packages on all RHV-H hosts.
- Administration Portal --> Storage (left menu) --> Storage Domains (left menu tab) --> click New Domain (button)
- - Type: GlusterFS

External Storage Providers:
Overview of OpenStack as External Provider:
- RHV can consume the OpenStack Glance REST API and re-use storage from OpenStack's Glance (image) and Cinder (block storage) services / RHOSP services
- OpenStack instance = VM
- OpenStack image = RHV template
- OpenStack Glance for Image Management:
- - The OpenStack image service provides a catalog of virtual machine images. In RHV, these images can be imported and used as floating disks, or attached to virtual machines and converted into templates. The Glance service is used as a storage domain that is not attached to any data center. Virtual disks in RHV can also be exported to Glance as virtual disks. Imported OpenStack images must be converted to templates to be used for deploying new virtual machines in RHV.
- - The authentication credentials set for the OpenStack image service external provider enables Red Hat Virtualization Manager to authenticate to the OpenStack identity service on behalf of the OpenStack image service.
- OpenStack Cinder for Storage Management:
- - The OpenStack block storage service provides persistent block storage management for virtual hard disks. Cinder volumes are provisioned by Ceph Storage. In RHV, you can create disks on Cinder storage that can be used as floating disks or attached to virtual machines. After you add Cinder to the RHV Manager, you can create a disk on the storage provided by Cinder.

Configure External Image (Glance) Provider in RHV-M:
- Get the URL, and the username and password from the OpenStack environment
- Administration Portal --> Administration (left menu) --> Providers (left menu tab) --> On Providers page, click Add (button)
- - Type: OpenStack Image
- - Provider URL: http://openstackserver.mindwatering.net:9292
- - Click Requires Authentication, enter the username and password.
- - Click OK

Configure External Block Storage (Cinder) Provider in RHV-M:
- Get the URL, and the username and password from the OpenStack environment
- Administration Portal --> Administration (left menu) --> Providers (left menu tab) --> On Providers page, click Add (button)
- - Type: OpenStack Block Storage
- - Provider URL: http://openstackserver.mindwatering.net:8776
- - Click Requires Authentication, enter the username and password.
- - Protocol: HTTP/HTTPS
- - Host Name (Keystone ID server): openstackidentity.mindwatering.net
- - API Port (Keystone ID server): 35357
- - Tenant Name: <services_tenant_name>
- - Click OK


Deploying VMs:
Overview:
- VMs are also called "guests"
- vSphere VM Tools = Guest Agents
- The hypervisor cannot run guest OSes that are not compatible with the host architecture. For example, a VM migrated from an Apple M2 (ARM) machine cannot run because the hosts are Intel (or AMD) x86_64 architecture.
- Not supported:
- - Linux: SSO in gnome environments, agents other than the qemu-guest-agent, virt-sysprep (template sealing), virt-sparsify, v2v of RHEL 8 is not supported as of RHV 4.3.
- - Windows: Windows 11 is not supported in the initial RHV 4.3 release, but is supported now (2025/09) if the host OS is RHEL 8.6 or higher. Windows Server releases newer than 2019 were not supported at the RHV 4.3 release, but Server 2022 is supported now if the host OS is RHEL 8.6 or higher. The support matrix is maintained in article: 973163

Instance Types - VM Sizing Presets:
- Tiny: 1 vCPU, 512 MB RAM
- Small: 1 vCPU, 2048 MB RAM
- Medium: 2 vCPU, 4096 MB RAM
- Large: 2 vCPU, 8192 MB RAM
- XLarge: 4 vCPU, 16384 MB RAM

Templates:
- Blank - empty/none, list will include any template images added to the Data Storage Domain for the Data center

Operating System:
- Presets by OS, so that OS selection will select virtualized devices (motherboard/BIOS, disk interfaces, etc.) most compatible with that OS release.

Optimize For:
- Presets for advanced settings for persistence and configuration. Select Server for most VMs.

Instance Images:
- Used to create/configure the VM storage. (This is NOT template images. That was Template above.)
- Click Create to create a new disk
- - Interface specifies the hardware interface:
- - - VirtIO-SCSI and VirtIO are faster, but require the guest OS (the VM OS selection above) to have the paravirtualized VirtIO drivers installed. RHEL includes them.
- - - IDE emulates basic IDE long supported by most OS selections
- - Allocation Policy: Preallocated (Thick - whole disk is created and consumed on storage), or Thin Provision (sparse)
- - - Preallocation is faster from a performance standpoint, but takes up more space
- - - If the storage includes deduplication, choose Preallocated on the VM/RHV-side, and Thin on the back-end storage-side.
- - - If desired, the app-data disk can be separate from the VM "local" disk; this allows the VM template to contain the current version of the VM, and the data to be mounted and backed up separately. Managing these separately can be harder to keep track of as machines are destroyed, and may necessitate automation with logic for whether the app-data disk should be deleted when the VM is deleted.
- - Instantiate VM network Interface selects vNIC (profile), click the dropdown to select the desired Logical Network. Click the + to add another vNIC.
- - Show Advanced Options
- - - Predefined Boot Sequence: Allows changing Boot order (move CD/DVDROM above disk, and move PXE Network Boot below Disk) and other advanced options.
- - - Attach CD: Also allows connecting an ISO for the OS installation to the new VM

During creation, or after creation:
- Before first boot, you generally want to attach an ISO to the virtual CD/DVDROM of the VM. Use Run Once and select the desired ISO image. The CDROM will stay connected across reboots, but will disconnect automatically after Shutdown.

Guest Agents:
- Provides RHV-M information such as IP(s), and allows host memory management, resource usage data, and other performance and compatibility features, and allows RHV-M to gracefully shutdown instead of Powering Off a VM.
- The qemu-guest-agent service, provided by the qemu-guest-agent package, is installed by default on RHEL guests and provides the communication w/the RHV-M.
- The table below describes the different Linux guest drivers available for Red Hat Virtualization guests. Not all drivers are available for all supported operating systems. Some guest OS installations can detect the hypervisor and install these drivers automatically (e.g. Ubuntu Linux).
Driver - Description
- virtio-net: Paravirtualized network driver for enhancing performance of network interfaces.
- virtio-block: Paravirtualized HDD driver for increased I/O performance. Optimizes communication and coordination between the guest and hypervisor.
- virtio-scsi: Paravirtualized SCSI HDD driver that provides support for adding hundreds of devices, and uses the standard SCSI device naming scheme.
- virtio-serial: Provides support for multiple serial ports to improve performance, for faster communication between the guest and host.
- virtio-balloon: Controls the amount of memory a guest can actually access. Optimizes memory overcommitment.
- qxl: Paravirtualized display driver that reduces CPU usage on the host and provides better performance.

- On Windows, install the RHV Agent as part of the RHEV-Tools installation. Below are the available guest agents and tools:
Name - Description
- ovirt-guest-agent-common: Allows Red Hat Virtualization Manager to execute specific commands, and to receive guest internal events or information.
- spice-agent: Supports multiple monitors, and reduces bandwidth usage over wide area networks. It also enables cut and paste operations for text and images between the guest and client.
- rhev-sso: Desktop agent that enables users to automatically log in to their virtual machines.

Installing the Guest Agents on Red Hat Enterprise Linux
- On Red Hat Enterprise Linux virtual machines, the Red Hat Virtualization guest agents are installed using the qemu-guest-agent package. This package is installed by default, even in a minimal installation of Red Hat Enterprise Linux 7 or Red Hat Enterprise Linux 8.
- Virtualization drivers are also installed by default in RHEL.
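To confirm the agent and paravirtualized drivers on a RHEL guest, a quick hedged check/install sequence might look like the following (the guest host name is a placeholder):
$ ssh myadminid@vmname.mindwatering.net
$ sudo yum -y install qemu-guest-agent    (normally already present)
$ sudo systemctl enable --now qemu-guest-agent
$ sudo systemctl status qemu-guest-agent
$ lsmod | grep virtio    (lists loaded VirtIO drivers, e.g. virtio_net, virtio_blk, virtio_scsi)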

Viewing VM Usage Data:
- Administration Portal --> Compute (left menu) --> Virtual Machines (left menu tab) --> select and open VM
- - The virtual machines page displays the IP addresses and FQDN for the virtual machine.
- - Additional information collected by the agent can be seen by clicking on the name of the virtual machine, and then exploring tabs such as General, Network Interfaces, and Guest Info.

Cloning a VM:
- Administration Portal --> Compute (left menu) --> Virtual Machines (left menu tab) --> select/highlight VM
- - Click Shutdown (button at top of view)
- - Click Create Snapshot (button at top of view)
- The clone contains all of the original's configuration information; do not run it at the same time as the original machine.
- Alternately, create a "Sealed Template" to create a copy that is cleared of unique data, and create machines from the template.

Editing a VM:
- Administration Portal --> Compute (left menu) --> Virtual Machines (left menu tab) --> select/highlight VM
- - Click Edit (button at top of view)
- Changes that can only be performed while the VM is shut down:
- - Reducing Memory Size, Reducing or increasing Maximum Memory, and reducing Physical Memory Guaranteed
- - Unplugging vCPUs can only be done if they were hot-plugged and if the OS supports adding or unplugging them.
- - Unplugging vNICs or disk images should only be done after the OS is no longer using them and their configuration has been removed from the OS. Removing the boot/system disk will leave the VM non-bootable.

Creating a Template:
- Create a clone of a shut-down VM, or create a new VM, install the OS, and then shut down the VM
- Boot the VM and run the seal/clean utility on it. The last part of the seal is the shutdown of the VM. Do not boot the VM again or the sealing is undone and has to be repeated. Perform the Make Template next.
- Administration Portal --> Compute (left menu) --> Virtual Machines (left menu tab) --> select/highlight VM previously cloned
- - Click 3 dots (right of buttons at top of view) --> choose Make Template, if Linux, click the "Seal Template (Linux only)" option to seal w/out manually performing the seal.

Sealing the cloned VM Detailed Instructions:
- Once the virtual machine has been configured, remove all information unique to the virtual machine. This is referred to as sealing the image.
- - Unique information includes hardware information specific to the virtual machine, such as MAC addresses; unique system configurations, such as the host name and static IP addresses; and possibly logs and other data.
- - Depending on the operating system of the virtual machine, you may need to perform these steps manually, or there may be tools available to seal the image for you.
- - - Linux virtual machines typically use the virt-sysprep utility externally on a stopped virtual machine.
- - - Running Windows virtual machines typically use sysprep.exe to initiate a System Out-of-Box-Experience (OOBE). The process for sealing a Windows virtual machine concludes when you shut down the virtual machine.
- - - The RHV-M Administration Portal also provides an option where it will seal a Linux virtual machine image prior to creating a template in the Make Template dialog/page. Be aware that this option may not work for all variants of Linux.
- - Items that were stripped out of the virtual machine during the sealing process are recreated when the virtual machine boots up for the first time. As such, once a virtual machine has been sealed, do not start it until after you have made a template. If you accidentally start the virtual machine before you create the template, you will have to go through the process of sealing the image again.
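As a hedged illustration of the manual Linux seal mentioned above, virt-sysprep can be run from a machine with the libguestfs tools against the stopped guest or its disk image; the names and paths below are placeholders, and the exact operations kept or skipped depend on the template's needs:
$ sudo virt-sysprep -d vmname    (seal a stopped libvirt-managed guest by name)
or
$ sudo virt-sysprep -a /path/to/vm-disk-image.qcow2    (seal a stopped guest's disk image directly)
By default virt-sysprep removes items such as SSH host keys, log files, persistent network (MAC-based) udev rules, and the machine ID.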

Creating New VMs from Templates with cloud-init:
- Used to automate the initial setup of virtual machines, such as configuring the host name, network interfaces, and authorized keys.
- Used to avoid conflicts on the network when provisioning virtual machines that have been deployed based on a template.
- Must be installed on the VM before it is used to create new VMs.
- - The cloud-init package must first be installed on the virtual machine. Once installed, the cloud-init service starts during the boot process to search for configuration instructions.
- - - On a Red Hat Enterprise Linux 8 system, the cloud-init package is available in the rhel-8-for-x86_64-appstream-rpms repository.
- - - On a Red Hat Enterprise Linux 7 system, the cloud-init package is available in the rhel-7-server-rpms repository.
- Use the options in the Run Once window to provide instructions for the immediate boot process. You can persistently configure cloud-init to run every time the virtual machine boots by editing the virtual machine, and making changes to the Initial Run tab in the advanced options view. The same changes can be made to a template, so that any virtual machine created from the template will always run cloud-init at boot.
- Use case scenarios:
- - Customizing virtual machines using a standard template: Use the Use Cloud-Init/Sysprep options in the Initial Run tab of the New Template and Edit Template windows to specify options for customizing virtual machines that are based on this template.
- - Customizing virtual machines using "Initial Run": Administrators can use the cloud-init options in the Initial Run section of the Run Once window to initialize a virtual machine. This could be used to override settings set by a template.

Preparing the Template:
- As soon as the cloud-init package is installed on your Linux virtual machine, you can use that machine to create a template with cloud-init enabled.
- Configure the template so that the advanced option Initial Run has the setting Use Cloud-Init/Sysprep selected. This enables additional configuration options for setting host name, time zone, authentication and network properties, and for running custom cloud-init scripts. This is roughly equivalent to vSphere's VM Customization Specifications (under Policies and Profiles).
- If the cloud-init settings are in the template, then when you create a new virtual machine, those Initial Run settings are applied to the virtual machine by default. You also have the option of overriding those settings from the New Virtual Machine window when you create the VM from the template.
- There are two easy ways to apply Initial Run settings to the template:
- - The template inherits any settings from the Initial Run configuration of the original virtual machine, just like it inherits other characteristics of the virtual machine. However, this means you have to change the base virtual machine's settings and then create the template.
- - You can create the template normally, and then use Edit Template to change the Initial Run settings for cloud-init. The original virtual machine will not have these settings applied, but machines created from the template will.

Custom script partial example (from RH notes) to add a user and create a file (.vimrc) for that user during VM provisioning from the template image:
users:
  - name: developer2
    passwd: $6$l.uq5YSZ/aebb.SN$S/KjOZQFn.3bZcmlgBRGF7fIEefBPCHD.k46IW0dKx/XK.I0DmZQBKGgCIxg7mykIIzzmW02JyZwXgORfHWBE.
    lock_passwd: false

write_files:
  - path: /home/developer2/.vimrc
    content: |
      set ai et ts=2 sts=2 sw=2
    owner: developer2:developer2
    mode: '0664'

Full reference example: cloudinit.readthedocs.io/en/latest/reference/examples.html
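Optionally, the user-data syntax can be checked before use. Assuming the snippet above is saved locally as user-data.yaml, a rough validation would be:
$ cloud-init schema --config-file user-data.yaml
(on older cloud-init releases the equivalent subcommand is: cloud-init devel schema --config-file user-data.yaml)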

Making a template via cloud-init:
1. Install cloud-init w/in VM:
$ ssh myadminid@vmname.mindwatering.net
$ sudo su -
# yum -y install cloud-init
<wait>
# shutdown -h now
or
# systemctl poweroff

2. Make Template:
- Administration Portal --> Compute (left menu) --> Virtual Machines (left menu tab) --> select/highlight VM just shutdown --> click 3-dots (right of buttons) --> click Make Template (button option)
- In the New Template window, complete the fields:
- - Name, Description, and Alias. Name and alias need to be unique for sanity.
- - Seal Template (Linux only): checked
- - Click OK (button)

3. Update the template for cloud-init/sysprep:
- Administration Portal --> Compute (left menu) --> Templates (left menu tab)
- Select/highlight the template with the name/alias just created. Click Edit (button)
- In the Edit Template window:
- - Click Show Advanced Options (button), click Initial Run (tab)
- - Use Cloud-Init/Sysprep: <ensure selected>
- - VM Hostname: <ensure empty>
- Click OK (button)

4. Create the VM from the template:
a. Create the New VM (do not auto-run/boot):
- Administration Portal --> Compute (left menu) --> Virtual Machines (left menu tab) --> New (button)
- In the New Virtual Machine window, complete the fields:
- - Cluster, Name, Description
- - Template: <select the new template just created/prepped>
- - Click OK (button)

b. Set cloud-init via Run Once:
- With the new VM highlighted, click Run (dropdown) and select Run Once
- In the Run Virtual Machine(s) window:
- - Initial Run: click "+"
- - Use Cloud-init: <ensure checked>
- - VM Hostname: <ensure new VM's name is populated w/step 4a above, so hostname is set automatically>
- - Click Authentication to toggle section
- - User Name: <enter new user to create>
- - Password: <enter new user password to set>
- - Verify Password: <enter password again>
- - Custom Script:
(add)
write_files:
  - path: /etc/motd
    content: |
      This machine for official Mindwatering Use Only
- - Click OK (button)


Live Migration of VMs Between Hosts within a Cluster
Live Migration Overview:
- Process of moving a virtual machine from one physical host to another while VM is running.
- VM memory, network configuration, and access to storage are moved from the original virtual machine on one physical host to a new virtual machine on a different physical host, while OS and applications are still running.
- Used to support maintenance tasks on hosts without disrupting running virtual machines.
- Is transparent to the end-user; users communicating with the virtual machine should notice no more than a network pause of a few milliseconds as the transfer completes.

Limitations:
- The new host must have a CPU with the same architecture and features as the original host. Cluster hosts have the same architecture and CPU family to help ensure this is not an issue.
- Live migration requires a disabled virtual machine cache to ensure a coherent virtual machine migration.

Prerequisites:
- The virtual machine must be migrated to a host in the same cluster as the host where the virtual machine is running.
- The status of both hosts must display Up.
- Hosts must have access to the same virtual networks, VLANs, and data storage domains.
- Destination host must have CPU and Memory free to support the additional VM's requirements/usage.
- The virtual machine must not have the cache!=none custom property set. The cache parameter configures the different cache modes for a virtual machine.
- Live migration is performed using the Migration Network.
- - The default configuration uses the ovirtmgmt network as both the management network and the migration network; concurrent live migrations can saturate a network shared by management and migration traffic.
- - For best performance, the storage, migration, and management networks should be separated to avoid network saturation.

Live Migration Steps:
- Administration Portal --> Compute (left menu) --> Virtual Machines (left menu tab) --> select/highlight VM to migrate to a new host --> click Migrate (button)
- In the Migrate VM(s) window:
- - Destination Host: <change Automatically Choose Host option to the desired host>
- - Virtual Machines: <review the VM(s) displayed to confirm you have the right one(s)>
- - Click Migrate (button)

Note:
The Status field of the VM transitions through Migrating From (with a percentage) and Migrating To, before finally showing Up.


Configuring Automated Migration / Scheduling Policies
Automated Migration Overview:
- Automated Migration is the automatic migration of VMs from one host to another based on scheduling or resource compliance thresholds.
- RHV-M checks the resources of the clustered hosts either when a virtual machine starts, or when migrating a virtual machine, to determine an appropriate host for the virtual machine in the cluster.
- RHV-M also checks the resources periodically to detect any noncompliance between the current load on the individual host and the cluster policies.
- If the current load on the individual host is not compliant with the cluster policies, RHV-M migrates virtual machines from one host to another host in the cluster without requiring manual administrative intervention.
- RHV-H hosts need to be placed in Maintenance mode for upgrades and maintenance. When you mark a host for maintenance, RHV-M automatically migrates virtual machines from the host in maintenance to another available host. The migration policy states the criteria for the live migration of the virtual machines.

Load Balancing and Scheduling Overview:
- Load balancing refers to the distribution of the virtual machines among the hosts in the cluster to ensure efficient utilization of the clustered resources.
- Each cluster uses a specific policy for load balancing that has tunable properties. RHV-M uses these properties to decide when to move the virtual machines from one host to another.
- Load balancing (audit) runs once every minute for each cluster to ensure that the load on the cluster is balanced.
- RHV-M uses the process called scheduling to determine the host on which a virtual machine starts.

Scheduling Policies Overview:
- Scheduling is performed based on the scheduling policies of the cluster.
- A scheduling policy is a combination of filters, weights, and load balancing logic. The filters apply the hard constraints that a host must satisfy to run a virtual machine, such as the minimum RAM, or CPU. The weights apply the soft constraints that control the relative preference for a host to run a virtual machine. Lower weights are considered better for the scheduler preference.
- The load balancing logic determines whether a specific host is underutilized or over-utilized. After identifying the underutilized and over-utilized hosts, the load balancing logic calls the scheduler to migrate the virtual machine from the over-utilized host to the underutilized host.

Live Migration Convergence:
- RHV-M copies the virtual machine state to the new host in real time.
- As the migration completes, memory that changed while the migration was running may need to be retransmitted. Eventually, the migration converges and allows RHV-M to pause the virtual machine for a fraction of a second to transmit the last few changes to the new host.
- Finally, the virtual machine is resumed on the new host.
- Very busy RHV environments will take longer to converge. Migration policies also determine how RHV handles the migration time limit.

Migration Policies:
- Minimal downtime policy is the default migration policy. This migration policy optimizes for the shortest pause of the virtual machine during migration, but may abort the migration if it is taking an excessive time to converge.
- Post-copy migration policy also optimizes for the shortest pause, if possible. If the migration fails to converge after an extended time, then this policy is applied. Post-copy starts the virtual machine in the destination host as soon as possible. To achieve this, a subset of the virtual machine memory moves to the destination hosts. If the virtual machine tries to access a memory page that is not in the destination host, then it issues a page fault, and the source host transfers that page.
- Suspend workload if needed migration policy supports migration under most load conditions, but a longer pause of the virtual machine may occur if it has a heavy load.

Resilience Policies:
- Used in the event a host fails or enters Maintenance mode
- Options:
- - Migrate Virtual Machines (All)
- - Migrate Only Highly Available Virtual Machines (HA only)
- - Do Not Migrate Virtual Machines (Disabled)

Steps to Update Cluster Migration Policy and Resilience Policy:
- Administration Portal --> Compute (left menu) --> Clusters (left menu tab) --> select/highlight Cluster to edit --> click Edit (button)
- In the Edit Cluster Window, click Migration Policy (tab):
- - Migration Policy: <toggle options Minimum downtime (default), Post-copy migration, or Suspend workload if needed>
- - Resilience Policy: <toggle options Migrate Virtual Machines (default), Migrate Only Highly Available Virtual Machines, or Do Not Migrate Virtual Machines>
- - Click OK (button)

Scheduling Policy:
- The Cluster Scheduling Policy governs on which hosts VMs can be started and whether additional VMs can be scheduled within the cluster
- The default configuration does not allow deployment of a virtual machine on an over-utilized host. A host is over-utilized when its CPU load is higher than 80% for more than 2 minutes.
- Options:
- - power_saving: improves the efficiency of electrical power consumption on the hosts. The hosts that remain under-utilized, in terms of CPU load, for longer than the defined time interval are marked to be powered down. Before powering down the under-utilized host, all of its running virtual machines are migrated to other appropriate hosts.
- - none: disables load balancing in the cluster; there is no automatic load or power sharing between the hosts once virtual machines are placed.
- - cluster_maintenance: disables starting new virtual machines during maintenance tasks. Only the highly available virtual machines are scheduled to start on appropriate hosts. In the event of a host failure, any of the highly available or regular virtual machines can migrate to a healthy host.
- - evenly_distributed: spreads the compute (memory and CPU) load evenly across all the hosts in the cluster. Hosts that reach the defined values of any of the policy properties, such as CpuOverCommitDurationMinutes, HighUtilization, or MaxFreeMemoryForOverUtilized, do not run additional attached virtual machines.
- - vm_evenly_distributed: schedules virtual machines evenly between the hosts in the cluster, based on a count of the virtual machines. To keep the cluster balanced, all the hosts should have a virtual machine count below the defined HighVmCount, and no host in the cluster should have a virtual machine count beyond the defined MigrationThreshold.

Scheduling Policy Data Properties:
- HighVmCount: represents the minimum number of running virtual machines per host required to initiate load balancing. The default value is 10. An overutilized host runs more virtual machines than this number.
- MigrationThreshold: configures a buffer before virtual machines migrate from the host. This value is the maximum inclusive difference in virtual machine count between the highly utilized hosts and least utilized hosts. The default value is 5. To keep the cluster balanced, the virtual machine count of every host should be lower than the value of this property.
- SpmVmGrace: represents the number of slots reserved for virtual machines on SPM hosts. In a cluster, the SPM hosts have relatively lower loads. This property defines how many virtual machines run on the SPM host, in comparison to other hosts. The default value is 5.
- CpuOverCommitDurationMinutes: represents the time in minutes that the host can run a CPU load beyond the utilization values that are defined. After the specified time elapses, the scheduling policy takes action to implement the necessary virtual machine migration. The default value is 2. This value is limited to a maximum of two characters.
- HighUtilization: the percentage of CPU usage on hosts that causes the virtual machine migration, if the CPU usage continues at this level for the defined time interval. The default value is 80.

Steps to Update Cluster Scheduling Policy:
- Administration Portal --> Compute (left menu) --> Clusters (left menu tab) --> select/highlight Cluster to edit --> click Edit (button)
- In the Edit Cluster Window, click Scheduling Policy (tab):
- - Select Policy: <toggle options: power_saving, none, cluster_maintenance, evenly_distributed, vm_evenly_distributed>
- - Properties (heading): <set properties and thresholds>
- - Scheduling Optimization: <toggle between Optimize for Utilization and Optimize for Speed>
- - Click OK (button)


Managing Virtual Machines Images:
Snapshots Overview:
- A Snapshot is a view of a virtual machine that includes the operating system and applications on any or all available disks at a given point in time.
- Snapshots can be taken with stopped or running VMs.
- Only one snapshot can be in preview (run as preview) at a time.
- An administrator may take a snapshot of a virtual machine before making modifications; if the updates go badly, he/she can revert the state of the virtual machine to the one recorded by the snapshot.
- To roll back to a previous state/snapshot, shut down the machine and roll back to a previous image; all later snapshots are discarded. Before committing to a rollback, you can preview-boot the VM at that point to confirm this is the state you wish to keep.

Steps to create a VM Snapshot:
- Administration Portal --> Compute (left menu) --> Virtual Machines (left menu tab) --> select/highlight VM --> Click Snapshot (button)
- In the Create Snapshot window, confirm the following:
- - Disks to include: All disks are initially included (checked), uncheck if any are not desired for the snapshot
- - Description: <enter meaningful description of why>
- - Save Memory: <select if VM is running to save current memory state of VM>
- - Click OK (button)

Steps to manage VM Snapshots:
- Administration Portal --> Compute (left menu) --> Virtual Machines (left menu tab) --> select/highlight VM
- If VM is being rolled back, click Shutdown (button)
- Open VM, click Snapshots (tab)
- Select the snapshot desired, next Preview (button), Run (button as preview to revert to), or Delete (button)

Managing Virtual Machine Images:
- RHV-M stores virtual machine disk images in data domains.
- A data domain can only be attached to one data center at a time.
- However, a single data center can have multiple data domains attached simultaneously. (e.g. create a Data Domain called VMTransfer, and connect it to whichever data center needs it.)
- Disk images are stored in a single data center, and can be relocated or migrated.

Relocation methods:
- Moving a disk image for a virtual machine from one data domain to another data domain.
- Exporting virtual machines from one data center and importing them into another data center.
- Importing an existing QCOW2 image from outside RHV into a data domain, and then attaching it to a virtual machine.

Notes:
- The current RHV version can import images directly into data domains, and move data domains from one data center to another.
- Previous versions of RHV used an export domain to export and import images between data domains. Export domains are deprecated in RHV 4.1, but are still quite useful because they export and import OVF packages, which include the whole VM rather than just the disks; this is commonly used to migrate a single VM into or out of RHV from a competing product.

Steps to Import a VM Disk/Image:
- Administration Portal --> Storage (left menu) --> Disks (left menu tab) --> click Upload (button) --> click Start.
- In the Upload Image window:
- - Choose File, navigate/select the image to upload from the local workstation
- - Size: <select size same or larger than imported image>
- - Alias: <enter name>
- - Data Center: <select data center>
- - Storage Domain: <select storage domain>
- - Click OK (button)
- Wait for the image to appear in the Disks (tab) list; a progress bar displays underneath it as it uploads. When RHV-M finishes the image upload, its status changes to OK.

Note:
The image is not yet attached to any VM.

Steps to Attach a VM Disk/Image to Existing VM:
- Administration Portal --> Compute (left menu) --> Virtual Machines (left menu tab) --> select and open desired VM
- Change to the Disks (tab), click Attach (button)
- - In the attach disk window:
- - - Click checkbox next to disk to attach. Adjust its interface configuration as needed. If disk is Boot disk, click checkbox under OS (Operating System).
- - - Click OK (button).

Steps to Create a Legacy Export Domain:
- Administration Portal --> Storage (left menu) --> Domains (left menu tab) --> click New
- In the New Domain window:
- - Data Center: <select data center>
- - Domain function: Export
- - Storage Type: NFS
- - Host to Use: <take default>
- - Name: <enter a meaningful name - e.g. VMExport>
- - Export Path: <enter the host and export path - e.g. nfs-server1.mindwatering.net:/exports/rhvexport1>
- - Click OK (button)

Steps for Importing VM Images via Export Domains - Open Virtualization Format (OVF):
- Administration Portal --> Storage (left menu) --> Domains (left menu tab) --> select/highlight the export domain (e.g. VMExport) --> click VM Import (button)
- In the Import Virtual Machine(s) window
- - Name: <enter name>
- - Click OK (button)

Exporting VM Images using Export Domains Overview:
- When RHV-M exports a virtual machine into an export domain, it puts the OVF Package for the virtual machine in a directory structure in that export domain.
- An OVA file is a TAR archive of an OVF package with this directory structure.
- The directory structure includes two subdirectories: images and master.
- - The directories which comprise the OVF Package include an "OVF file" which is named with the .ovf file extension.
- - This is a descriptor file that specifies the virtual hardware configuration for the virtual machine. The directories also include virtual disk image files for that virtual machine.

Steps to Export OVA
- Directly accessing the back-end NFS storage of the export domain provides an unsupported way to extract virtual machines from RHV.
or
- Use the REST API as the supported method; see the "Red Hat Virtualization REST API Guide". A basic API call is sketched below.
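As a minimal sketch of calling the RHV-M REST API with curl (the hostname and credentials are placeholders, -k skips TLS verification for a lab setup, and the specific export/OVA calls are documented in the REST API Guide):
$ curl -k -u 'admin@internal:password' -H 'Accept: application/xml' \
  https://rhvmgr.mindwatering.net/ovirt-engine/api/vms
(lists the VMs and their IDs; export actions are then POSTed against the individual /api/vms/<id> resource per the guide)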

VM Conversion for Virtual-2-Virtual (V2V) Migration:
- Linux includes the virt-v2v tool. On Ubuntu, the package to install is libguestfs-tools. On Rocky Linux/RHEL, the package is virt-p2v-maker.
- The tool converts between RHV, RHOSP, OS, vSphere, and KVM; an example invocation is sketched below.
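A hedged example of a virt-v2v conversion from vSphere into a RHV export (NFS) domain follows; the vCenter path, ESXi host, VM name, and NFS path are placeholders, and newer releases may prefer the rhv-upload output mode with additional options:
$ virt-v2v -ic 'vpx://administrator%40vsphere.local@mwvcsa.mindwatering.net/Datacenter/cluster/esxihost1?no_verify=1' \
  myvm01 -o rhv -os nfs-server1.mindwatering.net:/exports/rhvexport1 -of qcow2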

Moving VM Disks to a New Data Domain Overview:
- Manually move virtual machine disks to another data domain in the data center. (There is no "storage migration" like in vSphere.)
or
- Export virtual machines to a new data center by moving them into a new data domain, and then moving the data domain to another data center. (just disk - don't use)

Steps to Migrate VMs from one Data Domain to Another Data Domain in same Data Center:
- Administration Portal --> Compute (left menu) --> Virtual Machines (left menu tab) --> highlight/select VM --> click Shutdown (button)
- In Virtual Machines (view) --> verify VM status is Down
- Administration Portal --> Compute (left menu) --> Data Centers (left menu tab) --> highlight/open datacenter --> Storage (tab) --> highlight/select Export Domain (e.g. VMExport) --> confirm Active, not Maintenance, switch to Active as needed. It may take a while to go from Locked to Active.
- Administration Portal --> Storage (left menu) --> Disks (left menu tab) --> highlight/select Disk(s) associated with the VM --> Click Move (button)
- In the Move Disk(s) window:
- - Target and Disk Profile fields: <Select destination Data Domain>
- - Click OK (button) and wait
- In the Disks (view) again, select/open each of the VM disk(s), verify the target destination Data Domain is listed

Steps to Exporting Virtual Machines to a Different Data Center:
a. Shutdown VM:
- Administration Portal --> Compute (left menu) --> Virtual Machines (left menu tab) --> highlight/select VM --> click Shutdown (button)
- In Virtual Machines (view) --> verify VM status is Down

b. Export to Export Domain (already Active on "this" datacenter)
- In Virtual Machines (view) --> highlight/select VM --> click 3-dots --> click Export to Export Domain (menu option)
- In the Export Virtual Machine window:
- - Ignore the two check boxes; if the Export Domain is not selected, select it (e.g. VMExport), click OK (button).
- Watch the VM events for the successful export.

c. Disable the Export Domain on "this" datacenter:
- Administration Portal --> Compute (left menu) --> Data Centers (left menu tab) --> highlight/open origin datacenter --> Storage (tab) --> highlight/select Export Domain (e.g. VMExport) --> click Maintenance (button)
- In the Storage Domain maintenance window, click OK (button) to confirm. Wait.
- Administration Portal --> Compute (left menu) --> Data Centers (left menu tab) --> highlight/open origin datacenter --> Storage (tab) --> highlight/select Export Domain (e.g. VMExport) --> click Detach (button)
- In the Detach Storage window, click OK (button) to confirm.

d. Enable the Export Domain on "that other" datacenter:
- Administration Portal --> Compute (left menu) --> Data Centers (left menu tab) --> highlight/open destination datacenter --> Storage (tab) --> click Attach Export (button)
- In the Attach to Data Center window:
- - Select the radio button next to the Export Domain (e.g. VMExport) --> Click OK (button) to confirm. Wait for the status to transition from Locked to Active.

e. Import VM into "that other" datacenter:
- Administration Portal --> Storage (left menu) --> Domains (left menu tab) --> highlight/select Export Domain (e.g. VMExport) --> click VM Import (button)
- In the view of VMs, click name of the VM exported to import --> click Import (button)
- In the Import Virtual Machine(s) window, click OK (button).
- - Note: Unless the name was changed on the NFS side, the VM name will conflict with the original VM and a warning will appear next to the VM name; a new VM name must be used. Change the name, click OK (button) again. Click Close.
- The VM is now imported; its status will be Down.

f. Update the invalid vNIC info for the new VM:
- Administration Portal --> Compute (left menu) --> Virtual Machines (left menu tab) --> highlight/select VM --> click Edit (button)
- In the Edit VM window, click Network Interfaces (tab) --> Click Edit (button).
- In the Edit Network Interface window:
- - Profile: <select the correct Network Profile>
- - Click OK
- Back in the VM list, highlight/select VM --> Click Run (button)
- Verify the VM status --> Up


The Red Hat Infrastructure Migration Solution (IMS)
- Migrates VMware ESX-based enterprise workloads to:
- - Red Hat Virtualization
- - Red Hat OpenStack Platform
- - Red Hat Hyper-converged Infrastructure for Virtualization
- - Red Hat Virtualization for Cloud.
- Based on Red Hat management technologies, including Red Hat Ansible Automation and the Red Hat CloudForms management platform.
- Existing workloads are analyzed and migrated per workload-specific business requirements.
- Provides the pathway to cloud-native application development via Linux containers, Kubernetes, automation, and other open-source technologies.
- - Red Hat OpenShift Container Platform (RHOCP) for Container Native Virtualization (CNV)

The Phases of Migration
- Discovery Session:
- - A scheduled Discovery Session to better understand and document the scope of the migration.
- - Recommendations for open-source virtualization destination platform(s):
- - - Red Hat Virtualization (RHV)
- - - Red Hat OpenStack Platform (RHOSP)
- - - Red Hat Hyper-converged Infrastructure (RHHI)
- - - - RHHI-V using RHV and Red Hat Gluster Storage
- - - - RHHI-C for cloud using RHOSP and Red Hat Ceph Storage

- Migration pilot:
- - Open source platform(s) are deployed and made operational using Red Hat Hybrid Cloud Management infrastructure tooling.
- - Pilot and practice migrations demonstrate typical approaches, establish initial migration capability, and define the resource requirements for a larger scale migration.

- Migration at scale:
- - Design and implementation assistance to build and optimize production infrastructure
- - Unify and streamline operations across virtualization pools
- - Navigate complex migration cases.

IMS Product Requirements:
- CloudForms 4.7.0+
- Red Hat Virtualization 4.2.7+
- Red Hat Enterprise Linux (Hypervisor) 7.6+
- Red Hat OpenStack Platform 13+
- VMware vSphere 5.5+

IMS Software Requirements to Install on RHV:
- rhel-7-server-rpms (RHEL 7.6 virt-v2v updates needed)
- jb-eap-7-for-rhel-7-server-rpms (JBoss EAP 7 rpm packages for RHV and for JBoss VMs)
- rhel-7-server-optional-rpms (RHEL 7.6 optional packages)
- rhel-7-server-extras-rpms (RHEL 7.6 extras packages)
- rhel-7-server-supplementary-rpms (RHEL 7.6 supplementary packages)
- rhel-7-server-rhv-4.2-manager-rpms (RHV Manager 4.2 packages)
- rhel-7-server-rhv-4-manager-tools-rpms (RHV Manager 4.2 tools packages)
- rhel-7-server-rhv-4-mgmt-agent-rpms (RHV 4 agents for RHEL)
- rhel-7-server-ansible-2-rpms (Ansible 2.x packages)
- rhel-7-server-rh-common-rpms (RH common packages agents)
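These repositories are enabled on the RHV-M system with subscription-manager; a representative (partial) sketch using the repo names listed above would be:
[root@rhvmgr ~]# subscription-manager repos \
  --enable=rhel-7-server-rpms \
  --enable=jb-eap-7-for-rhel-7-server-rpms \
  --enable=rhel-7-server-rhv-4.2-manager-rpms \
  --enable=rhel-7-server-ansible-2-rpms
(repeat --enable for the remaining repositories in the list above)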

RHV-H Requirements to Become a Conversion Host:
- RHEL Hypervisors
- - VDDK SDK: VMware-vix-disklib-6.5.2-6195444.x86_64.tar.gz (Virtual Disk Development Kit)
- - nbdkit SRPMS: rhel-7-server-rhv-4-mgmt-agent-source-rpms (nbdkit Source RPMS)

ESX/vSphere to RHV Migration Process:
- User creates an infrastructure mapping and a VM migration plan in CloudForms.
- Migration plan run
- Based on the mapping, CloudForms locates the VM(s) to be migrated.
- If VDDK transport was configured, the ESXi host fingerprint(s) is(are) captured for authentication.
- If SSH transport was configured, the SSH key is used to connect to the ESXi host(s) on which the VM(s) reside(s).
- CloudForms contacts the RHV-H conversion host for each VM to be migrated:
- - The RHV-H conversion host connects to the ESXi data store(s) where the VM resides, using virt-v2v-wrapper.py, and streams the disk(s) to be converted to the target RHV Data Domain using virt-v2v.
- - After all disks are converted, the target VM is created in RHV using the source ESXi-based VM's metadata (name, tags, power state, vNIC MAC, CPU core count, memory, disk(s)).
- - After the VM is created, the migrated disk(s) are attached to the new target RHV VM. Migration of that VM is complete.
- The status of each VM's migration is displayed in CloudForms throughout the process.

Process Steps Review:
- Create an infrastructure mapping and a virtual machine migration plan in CloudForms.
- CloudForms locates the virtual machines to be migrated.
- The ESXi host is authenticated during the conversion process.
- CloudForms initiates communication with the RHV conversion host.
- The RHV conversion host connects to the source data store, and streams the disk to be converted to the target data domain.
- The target virtual machine is created in RHV.
- The disk is attached to the target virtual machine.


Steps for CloudForms Migration:
1. CloudForms - Add a vSphere Virtualization Provider:
- Locate the Infrastructure providers window to add a new provider:
- - Name: <enter unique identifying name for vCenter>
- - Type: VMware vCenter
- - Under Endpoints (heading):
- - - (vCenter) host name: <mwvcsa.mindwatering.net>
- - - Username: <admin ID>
- - - Password: <password>
- - - Click Validate (button)
- - - - Once validated, the "Credential validation was successful message" displays.
- - Click Add (button)

2. CloudForms - Add a RHV Virtualization Provider:
- Locate the Infrastructure providers window to add a new provider:
- - Name: <enter unique identifying name for RHV-M>
- - Type: Red Hat Virtualization
- - Under Endpoints (heading):
- - - (RHV-M) Host name: <rhvmgr.mindwatering.net>
- - - RHV Username: <admin ID>
- - - RHV Password: <password>
- - - Verify TLS Certificates: No
- - - Click Validate (button)
- - - - Once validated, the "Credential validation was successful message" displays.
- - Click Add (button)

3. CloudForms - Add Credentials to a Conversion Host in RHV:
- Locate the Hosts window and update the credentials for both the ESXi host(s) and the RHV-H hosts
- - Open each host, under the Endpoints (heading) --> Default (tab) enter the administration username and password for each host, click Validate (button), click SAVE (button)

4. Install Tools (Packages) onto the RHV-H Conversion Host via the RHV-M VM:
$ ssh myadminid@rhvmgr.mindwatering.net
[root@rhvmgr ~]# cd /usr/share/ovirt-ansible-v2v-conversion-host/playbooks
[root@rhvmgr ~]# cat conversion_hosts_inventory.yml
all:
  vars:
    ansible_ssh_private_key_file: /etc/pki/ovirt-engine/keys/engine_id_rsa
    v2v_repo_rpms_name: "rhel-7-server-rhv-4-mgmt-agent-rpms"
    v2v_repo_rpms_url: "http://storage.example.com/repos/rhel-7-server-rhv-4-mgmt-agent-rpms"
    v2v_repo_srpms_name: "rhel-7-server-rhv-4-mgmt-agent-source-rpms"
    v2v_repo_srpms_url: "http://storage.example.com/repos/rhel-7-server-rhv-4-mgmt-agent-source-rpms"
    v2v_vddk_package_name: "VMware-vix-disklib-6.5.2-6195444.x86_64.tar.gz"
    v2v_vddk_package_url: "http://storage.example.com/repos/VMware-vix-disklib-6.5.2-6195444.x86_64.tar.gz"
    manageiq_url: "https://cf.example.com"
    manageiq_username: "admin"
    manageiq_password: "redhat"
    manageiq_zone_id: "1"
    manageiq_providers:
      - name: "RHV"
        connection_configurations:
          - endpoint:
              role: "default"
              verify_ssl: false
  hosts:
    rhvhost1.example.com:
    rhvhost2.example.com:
      v2v_host_type: rhv
      v2v_transport_methods:
        - vddk
      manageiq_provider_name: "RHV"
A conversion_host_check.yml playbook is provided to verify proper installation. Use the conversion_host_enable.yml playbook to install the necessary tools, and then use the host check playbook to verify the installation.

[root@rhvmgr ~]# cd /usr/share/ovirt-ansible-v2v-conversion-host/playbooks
[root@rhvmgr ~]# ansible-playbook --inventory-file=conversion_hosts_inventory.yml conversion_host_enable.yml
[root@rhvmgr ~]# ansible-playbook --inventory-file=conversion_hosts_inventory.yml conversion_host_check.yml

5. Confirm the CloudForms host vmdb shows the RHV-H hosts added:
When the conversion host software is properly installed, the conversion host is found in the vmdb on the CloudForms host.
a. SSH into the CloudForms host and query the database.
[root@cf ~]# vmdb
[root@cf vmdb]# rails c
** CFME 5.10.0.29, codename: Hammer
Loading production environment (Rails 5.0.7.1)
irb(main):002:0> pp ConversionHost.all
[#<ConversionHost:0x0000000000a12b89
id: 1,
name: "rhvhost1.mindwatering.net",
address: nil,
...
vddk_transport_supported: true,
ssh_transport_supported: false,
...
cpu_limit: nil,
memory_limit: nil,
network_limit: nil,
blockio_limit: nil>]
=> #<ActiveRecord::Relation [#<ConversionHost id: 1, name: "rhvhost2.mindwatering.net", ...vddk_transport_supported: true, ssh_transport_supported: false, ... cpu_limit: nil, memory_limit: nil, network_limit: nil, blockio_limit: nil>]>

6. Install the IMS Migration and Conversion Ansible Playbooks onto CloudForms and Create the Catalog Item:
- The CloudForms system must be configured to enable the Embedded Ansible role on the EVM Configuration screen.
- - Allow about 10 minutes to enable this role, configure the feature, and start the Embedded Ansible Worker service.
- Ansible Playbooks perform the migration tasks by copying virtual machine disks for processing.
- - Configure the credential username and password used by these playbooks.
- - Configure CloudForms with the source repository for the playbooks.
- - - Set the repository to update playbooks to the latest stored version each time one is launched.
- - - Once configured, the Ansible Playbooks screen will display the playbooks available.
- - Add the catalog item for the playbooks:
- - - Select the repository just added, the playbook, and the credentials of the vSphere environment to be migrated.

7. Perform an Infrastructure Migration:
a. Create an Infrastructure Mapping
- CloudForms provides a wizard to walk through each resource to map.
- - Map Compute: Select the compute host resources, and map source and destination clusters to each other.
- - Map Storage: Select the storage resources, and map source vSphere data stores to RHV data domains.
- - Map Networks: Map the vSphere network resources, including management networks, VM networks, storage networks to their corresponding RHV logical networks.
- - Results: Review the results of the mapping.

b. Create a Migration Plan:
- VMs: CloudForms provides a selector wizard in the Migration Plans screen, with manual selection and filters available for selecting the correct VMs.
- - Larger lists can be imported by uploading a CSV file.
- Advanced Options: Assign selected VMs to Ansible Playbooks for pre-processing and post-processing tasks.
- Schedule: Schedule the plan for a future date, if desired.
- Save the plan.
- Run the plan:
- - CloudForms gathers data and performs pre-migration checks
- - Approve the plan, kick-off the migration
- - Progress monitored on the Migration Plan screen: Each VM to migrate will be powered off in vSphere, migrated with progress bar, and then powered on in RHV
- - VMs arrive as created on the RHV-M Administrator Portal --> Compute --> Virtual Machines view
- - Orchestration Logs at: /var/www/miq/vmdb/log/automation.log (CloudForms server)
- - Conversion Logs for each host are at: /var/log/vdsm/import/v2v-import-* (* = each conversion host)


Backing-Up and Restoring RHV-M:
Database and Configuration Files Backup:
- engine-backup utility
- Backs up key configuration files, the engine database, and the Data Warehouse database of your RHV-M installation.
- Does not back up the operating system or installed software.
- The backup creates a .tgz (gzip-compressed TAR archive)
- After the backup, copy the archive off the RHV-M to a secure location in case a restore is required.
- Backups and restores must be used with the same release (e.g. 4.3)

Restoration using engine-setup:
- The restore process requires that the RHV-M server has been reinstalled with an operating system and the RHV-M software packages, but that engine-setup has not yet been run.

Backup options for engine-backup:
--mode=mode: Specifies the operating mode of the command. Two modes are available: backup, which creates a backup, and restore, which restores a backup. (required)
--file=backup-file: Specifies the location of the archive file containing the backup. (required)
--log=log-file: Specifies the location of a file used to record log messages from the backup or restore operation. (required)
--scope=scope: Specifies the scope of the backup or restore operation.
There are four scope options:
--scope=all: backup or restore the engine database, Data Warehouse, and RHV-M configuration files (default)
--scope=db: backup or restore only the engine database
--scope=files: backup or restore only RHV-M configuration files
--scope=dwhdb: backup or restore only the Data Warehouse database

Full backup example:
- On RHV-H running RHV-M
$ ssh root@rhvhost1.mindwatering.net
[root@rhvhost1 ~]# hosted-engine --set-maintenance --mode=global
[root@rhvhost1 ~]# exit
- On RHV-M, perform backup:
$ ssh root@rhvmgr.mindwatering.net
[root@rhvmgr ~]# engine-backup --scope=all --mode=backup --file=rhvm-backup.tgz \
--log=backup.log
[root@rhvmgr ~]# exit
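If a scheduled backup is desired, a simple cron entry on the RHV-M could run the same command nightly (the backup path and retention handling are placeholders; remember to copy the archive off the RHV-M afterwards):
[root@rhvmgr ~]# crontab -e
0 2 * * * engine-backup --scope=all --mode=backup --file=/var/backup/rhvm-$(date +\%F).tgz --log=/var/backup/rhvm-$(date +\%F).log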

Restore options for engine-backup:
--provision-db: Creates a PostgreSQL database for the RHV-M engine on the server being restored. Used when restoring to a fresh installation that has not been setup.
--provision-dwh-db: Creates a database for the Data Warehouse on the server being restored. Used when restoring to a fresh installation that has not been setup.
--restore-permissions: Restores database permissions stored in the backup. Used when restoring to a fresh installation, or when overwriting an installation that was previously set up.

Full restore example, after VM recreated and software installed, but engine-setup not yet run:
$ ssh root@rhvmgr.mindwatering.net
[root@rhvmgr ~]# engine-cleanup
[root@rhvmgr ~]# engine-backup --mode=restore --file=backup-file.tgz --log=log-file --provision-db --provision-dwh-db --restore-permissions
[root@rhvmgr ~]# engine-setup --accept-defaults --offline
[root@rhvmgr ~]# exit

Overwriting the RHV-M Installation to Revert to Earlier Backup:
- Previous backup must be available
- Discard environment changes by running engine-cleanup
- - Prompts for removal of components
- - Stops the engine service
- - Removes all installed ovirt data. If you do not confirm to remove the data, then engine-cleanup will abort.
- After cleanup, run engine-backup with restore mode:
- - Run the engine-backup command to restore a full backup or a database-only backup. The tables and credentials already exist, so you do not need to create them again.
- After restoring the database, you must run the engine-setup command to reconfigure the RHV-M.


Updating and Upgrading RHV
- Minor release updates: minor updates within the same version of RHV, e.g. RHV 4.3.1 to RHV 4.3.7
- Major release upgrades: major upgrades between point releases of RHV, e.g. RHV 4.3 to RHV 4.4.
- - Consult Upgrade Guide for changes in features and special considerations to work before, during, and/or after the upgrade.

Yum Version Locking:
- RH products commonly use version locking to protect their updates
- RH products commonly have conflicting package repos. e.g. AAP can be corrupted if the AppStream repo is used instead of the AAP one or the offline bundle.
- RHV-M: yum update does not update RHV-M, because the RHV installation locked the RHV-M packages from updates using the yum-plugin-versionlock package.
- - Locked packages: /etc/yum/pluginconf.d/versionlock.list
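The locked packages can be reviewed with the versionlock plugin, for example:
[root@rhvmgr ~]# yum versionlock list
or
[root@rhvmgr ~]# cat /etc/yum/pluginconf.d/versionlock.list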

Steps for RHV-M Updates:
a. Place RHV-M in Maintenance Before Updates:
$ ssh root@rhvhost1.mindwatering.net
[root@rhvhost1 ~]# hosted-engine --set-maintenance --mode=global

b. Check for updates:
$ ssh root@rhvmgr.mindwatering.net
[root@rhvmgr ~]# engine-upgrade-check

Note: If the message "No upgrade is available for the setup package" is returned, then nothing needs to be done; the hosted-engine can be taken out of maintenance.

c. Upgrade the setup packages (on the RHV-M):
$ ssh root@rhvmgr.mindwatering.net
[root@rhvmgr ~]# yum update ovirt\*setup\*
[root@rhvmgr ~]# engine-setup
<wait>
[root@rhvmgr ~]# yum update

Note: If the kernel was updated, reboot the RHV-M before taking it out of maintenance.
$ ssh root@rhvmgr.mindwatering.net
[root@rhvmgr ~]# reboot

d. Put RHV-M back into normal mode:
$ ssh root@rhvhost1.mindwatering.net
[root@rhvhost1 ~]# hosted-engine --set-maintenance --mode=none

Prerequisites for RHV-H Updates:
- Hosts updated from the Administration Portal:
- - One host: Administration Portal --> Compute --> Hosts --> select/open host --> Installation (button) --> Check for Upgrade (button) --> in Upgrade Host window, click OK (button) to confirm and start
- Updates only preserve the /etc and /var directories; all other data is replaced.
- Cluster updates occur automatically, one RHV-H at a time; Maintenance mode is set automatically, and VMs auto-migrate to other hosts in the cluster. If resources are lacking, the VM migrations fail and the update is aborted.
- Hosts go through the stages of: Preparing for Maintenance --> Maintenance --> Installing --> Reboot --> Unresponsive --> Up
- Minor updates take minutes to install. Major upgrades take longer.
- All RHV-H hosts must be registered and attached to software entitlements from the RH CDN or the RH SS:
- - Red Hat Enterprise Linux
- - Red Hat Virtualization
- Once enabled/attached, RHV-M update manager checks for updates every 24 hours.
- - Update the check time with the engine-config command, but changes are not applied until ovirt-engine service is restarted.
e.g.
[root@rhvhost1 ~]# engine-config -s HostPackagesUpdateTimeInHours=48
[root@rhvhost1 ~]# systemctl restart ovirt-engine.service
- If host is not yet registered for RHEL, enable with:
[root@rhvhost1 ~]# subscription-manager repos --enable=rhel-x-server-rhvh-y-rpms


Implement High-Availability (HA)
HA Summary:
- High availability refers to multiple devices operating as a single entity. In the event of a failure, the alternate device takes over and continues standard operation, with little or no impact to the end user.
- Hardware choices for both RHV-M and RHV hosts, and storage and networking hardware and configuration, must account for fault tolerance and high availability to limit possible single points of failure.

Standard Host Architecture in a Cluster:
- Clusters must support a consistent CPU family since they are migration domains.
- Use the same vendor and model of server for all hosts in a cluster, and configure them identically. Homogeneous hardware at the cluster level also provides consistent live migration performance in the environment.
- - Make sure that hardware, such as CPUs (family and number), network interfaces, host bus interfaces (HBA), and RAID cards, are the same across all hosts in the cluster.
- - Firmware and BIOS versions should be up-to-date and at the same version across the hosts.

External Support Infrastructure:
- DNS is critical for RHV to operate correctly. Ensure that forward and reverse name resolution is functioning correctly for the RHV-H hosts, and the RHV-M, and that fully-qualified domain names are used.
- Ensure NTP synchronization is used to avoid issues with authentication and TLS/SSL certificates sensitive to time skew issues.
- Ensure LDAP ID provider is HA. Keep the admin@internal user for emergency use.
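Quick checks of forward/reverse DNS and time synchronization from a host might look like the following (the host name and IP are placeholders; forward and reverse lookups should match, and chronyc should report a small clock offset):
$ dig +short rhvhost1.mindwatering.net
$ dig +short -x 192.168.10.21
$ chronyc tracking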

External Storage Requirements:
- Ensure storage back-end providers are HA:
- - Redundant Ethernet or Fibre channel (FC) switches for your storage networks
- - Multiple NICs used and bonded for iSCSI or NFS. 10/40GbE NICs are recommended to improve performance.
- - For SAN, multiple HBAs (FC) or initiators (iSCSI) must be set up to provide multiple paths to the SAN.
- - - Use the same make, model, firmware version, and driver versions in the same systems and clusters, to ensure consistent performance and ease troubleshooting.
- - - Consider using SAN-based boot if there is already a SAN available to store VMs. This configuration avoids issues related to storage on the host, and improves performance on tasks like hypervisor image cloning, thus speeding up virtual machine deployment times.
- GlusterFS is a scalable network based file system that relies heavily on network performance, requiring high throughput NICs/network devices.

External Networking Requirements:
- Use redundant network switches
- Use bonded NICs in LACP mode
- - RHEL supports bonding modes 0-6. Logical networks carrying VM traffic support modes 1, 2, 3, and 4; modes 0, 5, and 6 do not support the Linux bridge needed for VM networks.
- Use 10GbE links for VM traffic, and use 10GbE or 40GbE for storage traffic. Grant priority to the VLANs used for live migration, user-VM communication, and the RHV-M to RHV-H management network.

Configuring Network Bonds on RHV-H:
- Configure two NICs as a bonded interface. Configure this in the Administration Portal after the hosts are added to RHV-M.
- Use the new bonded interface just like any other interface, adding and removing logical networks as desired.

Steps to bond Interfaces:
- Administration Portal --> Compute (left menu) --> Hosts (left menu tab) --> select/open host --> Network Interfaces (tab) -->click Setup Host Networks (button), just as you would to configure logical networks.
- In the window, drag the icon for one physical interface to another interface that you want to bond.
- In the Create New Bond window, select a Bond Name and Bonding Mode, click OK (button).

Important:
- Configure your networking hardware as needed to support your bonding mode. For example, the default mode used by RHV, IEEE 802.3ad/LACP (mode 4), requires bonding in that mode to be enabled for the switch ports connected to the participating NICs.
- Configure your switch ports to permit the correct VLANs to be passed to the interfaces on your hosts.
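After the bond is created, its mode and member link status can be verified on the host, for example:
[root@rhvhost1 ~]# cat /proc/net/bonding/bond0
(for mode 4 the output shows "Bonding Mode: IEEE 802.3ad Dynamic link aggregation", plus the MII status of each slave NIC)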

Host Requirements:
- RHV supports hosts based on Red Hat Enterprise Linux (RHEL), as well as Red Hat Virtualization Host (RHV-H).
- Red Hat Enterprise Linux-based hosts can be useful for environments requiring customization at the OS level, because of hardware support, for example.
- - Because of the manual configuration and updates performed on those hosts, Red Hat Enterprise Linux based hosts can cause unexpected issues in an RHV HA environment.
- RHV-H is the preferred operating system for hosts:
- - Only the required packages and services supporting VMs and the hypervisor are installed.
- - Overhead is reduced
- - Overall security "attack surface" is reduced
- - RHV-H allows installation of additional RPM packages, if needed, reducing the need for "thick" RHEL-based hosts
- - Installed w/recommended configuration for a RHV host - no manual configuration needed
- - Includes its Cockpit web administration tool for local host troubleshooting of issues
- Hardware Out-of-Band (OOB) management for remote console and power control.
- - Up-to-date firmware and BIOS.
- - Memory installed/scaled to limit memory swapping, which significantly degrades VM performance
- - RAID configuration of the local boot disks
- - Redundant power

Large installation RHV-M Scaling:
- All-in-one (default) RHV-M installation is the preferred deployment approach
- Scaling allows some RHV-M components on separate hosts for higher performance:
- - PostgreSQL database
- - Data warehouse
- - WebSocket proxy
- Scaling/breakup complicates RHV-M deployment and requires careful thought about redundancy, availability, and backup scenarios.


Configuring Highly Available Virtual Machines
VM HA Overview:
- HA is needed for VMs running critical workloads. RHV-M can also automatically restart high priority virtual machines first. Multiple levels of priority exist for this purpose.
- VMs are automatically restarted within seconds when they crash. No admin or user actions are needed.
- RHV-M continuously monitors the hosts and storage to detect hardware failures or loss of connectivity to a RHV-H.
- RHV-M automatically restarts the high availability virtual machine, either on its original host or on another host in the cluster.
- For a non-responsive host (crashed, partly crashed, or isolated), a VM must not be restarted on another host while it may still be running on the original host; if two instances of the VM run at once, the disk will likely corrupt with both VM OSs performing writes. To accomplish host HA safely, RHV has "fencing".
- - Cluster hosts must support an out-of-band power management system, such as iLO, DRAC, RSA, or a network-attached remote power switch that is configured to act as a fencing device.
- - RHV 4.x and later support a special storage volume as a VM lease, which basically contains heartbeats (timestamps) to determine whether the VMs are still up and running. vSphere uses a similar methodology.
- An alternate HA option is Pacemaker; if it is installed/configured, disable the native RHV-M HA config.

Distinguish Non-Operational vs. Non-Responsive Host:
- A non-operational host has encountered a problem, but RHV-M can still communicate with it. Likewise, a host that is moved to Maintenance mode is basically non-operational. In either case, RHV-M works with the host to automatically migrate all its virtual machines to other operational hosts in the cluster.
- A non-responsive host is one that is not communicating with RHV-M. After about 30 seconds, RHV-M fences that host and restarts any highly available virtual machines on operational hosts in the cluster.

Power Management Configuration Options:
- Power Management: check/uncheck to enable/disable
- Kdump Integration: check/uncheck to enable/disable
- Disable policy control of power management: check to prevent the cluster scheduling policy from controlling the host's power management; uncheck to allow it
- Fence agents:
- - Click + to add one, click - next to one to remove
- Advanced Parameters: specifies the order in which RHV-M searches the cluster and data center for a fence proxy host.

Steps to Add a Hardware Host Fence:
- Administration Portal --> Compute (left menu) --> Hosts (left menu tab) --> select/open host -->
- In the Edit Host window, select Power Management (tab) --> Under Agents by Sequential Order heading, click +.
- - In the Edit Fence agent window, enter the Address, User Name, Password, select the Type (e.g. apc, rac, etc), Port, Slot, select if Secure, and enter any Options to pass, in key=value,key2=value2 format, and click Test. If successful, click OK (button) to save.
- Click OK (button) again.
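
The fence device can also be tested from a host shell before relying on it. For an IPMI-based fence device (e.g. iLO or DRAC in IPMI mode), a sketch using fence_ipmilan (address and credentials are placeholders):
[root@rhvhost1 ~]# fence_ipmilan --ip=<bmc-address> --username=<user> --password=<password> --lanplus --action=status
<verify the agent reports the power status of the target host>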

Steps to Configure HA for VMs (a REST API equivalent follows the options list below):
- Administration Portal --> Compute (left menu) --> Virtual Machines (left menu tab) --> select/open VM --> Edit (button)
- In the Edit Virtual Machine window, select High Availability (tab)
- Options:
- - Cluster: <can actually change host cluster here>
- - Optimized for: Server
- - Highly Available: check/uncheck to enable (turn on)/disable
- - Target Storage Domain for VM Lease: <select storage domain e.g. VMLease>
- - Resume Behavior: AUTO_RESUME
- - Priority (Priority for Run/Migration queue): Low
- - Watchdog Model: No-Watchdog
- - Watchdog Action: <set option if Model is chosen above>
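
The same HA settings can also be applied through the RHV-M REST API mentioned earlier. A minimal curl sketch, assuming the engine FQDN rhvmgr.mindwatering.net, a placeholder VM id, and priority 1 (Low):
[root@rhvhost1 ~]# curl -k -u 'admin@internal:<password>' -X PUT \
  -H 'Content-Type: application/xml' \
  -d '<vm><high_availability><enabled>true</enabled><priority>1</priority></high_availability></vm>' \
  https://rhvmgr.mindwatering.net/ovirt-engine/api/vms/<vm-id>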

Criteria for Successful Restart of VMs on Another Host:
- Power management is available for the hosts running the highly available virtual machines.
- The host running the highly available virtual machine is part of a cluster that has other available hosts.
- The destination host is running.
- The source and destination hosts have access to the data domain on which the virtual machine resides.
- The source and destination hosts have access to the same virtual networks and VLANs.
- There are enough CPUs on the destination host that are not in use to support the virtual machine requirements.
- There is enough RAM on the destination host that is not in use to support the virtual machine requirements.


Red Hat Hyper-converged Infrastructure for Virtualization
RHHI-V Overview:
- RHHI-V is a tuned installation of RHV-M, using RH Gluster Storage, on self-hosted RHV-H hosts sharing compute and storage, with an Open Virtual Network (OVN) software-defined networking stack, and Red Hat Ansible Automation for provisioning.
- With OVN, RHHI-V integrates with Red Hat OpenStack Platform and Red Hat OpenShift Container Platform infrastructures for a single hybrid cloud platform.
- Hyper-converged combines both compute and storage resources, and makes them simultaneously available among all hosts in a single, scalable, virtualization installation known as a pod.
- Storage is pooled across all hosts in a pod, eliminating the need for a storage area network, and is managed by the same Red Hat Virtualization management software used for standard RHV deployments.
- Add hosts to increase/scale both the compute capacity and the storage space.
- Scale resources to match application requirements:
- - Increase only disk space by expanding the storage pools on each host w/o adding another host.
- - Add hosts to increase compute capacity, but add only the storage required to maintain pool redundancy per cluster.

RHHI-V Virtual Data Optimizer (VDO) Features:
- Zero-block elimination: records zero-filled blocks as metadata only
- Deduplication: redundant blocks are eliminated by using a pointer to the original block
- LZ4 compression: applied dynamically to individual data blocks

RHHI-V VDO Limitations:
- Available only for new installations at deployment time; cannot be enabled on deployments already running
- Thin provisioning is not compatible with VDO - not needed, as VDO already provides deduplication and compression on the storage side
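
On a deployment where Dedupe & Compression was enabled at install time, the space savings on a host's VDO-backed brick devices can be reviewed with vdostats (a read-only check; device names will vary):
[root@rhvhost1 ~]# vdostats --human-readable
<verify the Space saving% column shows the combined deduplication/compression savings per VDO volume>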

Definitions:
RAID: Structure of physical devices made into logical devices
Brick: Prepared physical device (disk) used to build logical storage
Volume: Logical structure used to create storage domains
Pool: Group of storage devices across all hosts in a discrete installation
VDO: Storage management driver for increasing storage efficiency
Engine: RHV-M / RHV Infrastructure Manager
Cluster: Failover group of hypervisor RHV-H hosts
Pod: Discrete installation in multiples of 3 RHV-H hosts. In the case of RHHI-V, the pod is a 3-host/node cluster.

Installation Summary:
- Install Red Hat Virtualization hosts: Install the physical machines as hyperconverged hosts
- Configure SSH access: Configure passwordless key-based, SSH authentication between hosts
- Configure Red Hat Gluster Storage: Setup Gluster storage on the physical hosts using the Web Console
- Deploy the hosted engine: Deploy the RHV-M virtual machine using the Web Console
- Configure the Gluster storage domain: Setup the Gluster storage domain using the Administration Portal
- - One RAID set (mirror/0) should be used for the system/boot disk
- - Rest of disks will become the Storage Domains

RHV-H Hosts Setup:
a. Download the Hypervisor for RHV 4.3 ISO, and prepare bootable media (USB stick, etc)

b. Boot the host and perform Normal installation, taking default values except to adjust parameters as required for host hardware.
- Use Automatically configure partitioning
- Size /var/log to at least 15GB for Gluster Storage logging
- Configure networks and map to physical NICs, select Automatically connect to this network when it is available option
- Run check script:
[root@rhvhost1 ~]# nodectl check
<view output and verify all entries say OK>

c. Enable the software repository on each host:
[root@rhvhost1 ~]# subscription-manager repos --enable=rhel-x-server-rhvh-y-rpms

d. Generate the public/private SSH key pair:
[root@rhvhost1 ~]# ssh-keygen -t rsa
...output omitted...
Enter passphrase (empty for no passphrase): <enter>
Enter same passphrase again: <enter>
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
...output omitted...

e. Configure SSH access between the hosts so each host's public key is stored in the authorized_keys of the others and host keys are recorded in known_hosts on the deployment host.
- confirm hosts have access to each other with password:
[root@rhvhost1 ~]# ssh root@rhvhost1.mindwatering.net
[root@rhvhost1 ~]# exit
[root@rhvhost1 ~]# ssh root@rhvhost2.mindwatering.net
...output omitted...
[root@rhvhost2 ~]# exit
[root@rhvhost1 ~]# ssh root@rhvhost3.mindwatering.net
...output omitted...
[root@rhvhost3 ~]# exit

- copy the SSH keys to the other hosts:
[root@rhvhost1 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@rhvhost1.mindwatering.net
...output omitted...
[root@rhvhost1 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@rhvhost2.mindwatering.net
...output omitted...
[root@rhvhost1 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@rhvhost3.mindwatering.net
...output omitted...
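
To confirm key-based login now works without a password prompt, BatchMode can be used (it fails instead of prompting):
[root@rhvhost1 ~]# ssh -o BatchMode=yes root@rhvhost2.mindwatering.net hostname
<verify the remote hostname prints with no password prompt>
[root@rhvhost1 ~]# ssh -o BatchMode=yes root@rhvhost3.mindwatering.net hostname
<verify the remote hostname prints with no password prompt>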

f. Set up the Gluster hyper-converged disks:
- rhvhost1 Web Console --> Virtualization (left menu) --> Hosted Engine (left menu tab) --> under Hyperconverged, click Start (button)
- In Gluster Deployment window, Hosts (tab/screen):
- - Host1: rhvhost1.mindwatering.net
- - Host2: rhvhost2.mindwatering.net
- - Host3: rhvhost3.mindwatering.net
- - Click Next (button)

- In Gluster Deployment window, FQDNs (tab/screen):
- - Use same hostnames as in previous step: <checked>
or
- - Host2: <enter FQDN or IP>
- - Host3: <enter FQDN or IP>
Note: If using IPs, the SSH known_hosts file has to be kept updated
- - Click Next (button)

- In Gluster Deployment window, Volumes (tab/screen):
- - Name: engine
- - Volume Type: Replicate
- - Arbiter: <unchecked>
- - Brick Dirs: /gluster_bricks/engine/engine
- - click + Add Volume (if needed to add another volume row)
- - Name: data
- - Volume Type: Replicate
- - Arbiter: <checked>
- - Brick Dirs: /gluster_bricks/data/data
- - click + Add Volume (if needed to add another volume row)
- - Name: vmstore
- - Volume Type: Replicate
- - Arbiter: <checked>
- - Brick Dirs: /gluster_bricks/vmstore/vmstore
- - click + Add Volume (if needed to add another volume row)
...
- - Click Next (button)

Notes: Non-system disks are now assigned to Gluster.
- Specify a unique name for each volume to create; there must be at least 3 volumes, and the first volume is used for the engine store:
- - engine: Used by RHV-M to track RHV objects and activities
- - vmstore: Holds the systems disks for all deployed virtual machines
- - data: Stores all non-system data disks for deployed virtual machines
- Keep VM volumes separate for backup and DR purposes
- Storage disks become bricks

- In Gluster Deployment window, Bricks (tab/screen):
- - Under Raid Information (heading):
- - - Raid Type: RAID 6
- - - Stripe Size(KB): 256
- - - Data Disk Count: <enter number to assign>
- - Under Brick Configuration (heading):
- - - Select Host: rhvhost1.mindwatering.net
- - - Device Name (engine) sdb
- - - Size(GB) (engine) <enter size e.g. 100>
- - - Enable Dedupe & Compression (engine): <checked>
- - - Logical Size(GB) (engine): <enter size e.g. 1000>
- - - Size(GB) (data) <enter size e.g. 500>
- - - Enable Dedupe & Compression (data): <checked>
- - - Logical Size(GB) (data): <enter size e.g. 5000>
- - - Size(GB) (vmstore) <enter size e.g. 1500>
- - - Enable Dedupe & Compression (vmstore): <checked>
- - - Logical Size(GB) (vmstore): <enter size e.g. 15000>
- - - Configure LV Cache <checked>
- - - - SSD: sdk
- - - - LV Size(GB): 220
- - - - Cache Mode: writethrough
- - Click Next (button)

- In Gluster Deployment window, Review (tab/screen):
- - Review settings selected
- - Click Deploy (button)
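
Once the deployment completes, the Gluster layer can be sanity-checked from any host shell before moving on to the hosted engine (volume names assume engine/data/vmstore as above):
[root@rhvhost1 ~]# gluster volume info
<verify the engine, data, and vmstore volumes exist with the expected replica counts>
[root@rhvhost1 ~]# gluster volume status
<verify all bricks and self-heal daemons show Online: Y>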

Hosted Engine Deployment Overview:
- RHV-M is installed from a self-hosted engine appliance image as a virtual machine on the first hypervisor host.
- The embedded RHV-M setup creates a Default data center, and a Default cluster with your three physical hosts as members, and then enables Red Hat Gluster Storage functionality on each.
- All cluster hosts are configured to use the virtual-host tuned profile.
- The RHV-M hosted engine is deployed using a wizard.

Hosted Engine Steps:
a. VM (tab)
- Under VM Settings (heading):
- - Engine VM FQDN: rhvmgr.mindwatering.net
- - MAC Address: <00:12:34:12:34:12>
- - Network Configuration: DHCP (vs Static)
- - Bridge Interface: ensf0
- - Root: <password>
- - Root SSH Access: Yes
- - Number of Virtual CPUs: 4
- - Memory Size (MiB): 16384
- Click Next (button)

b. Engine (tab):
- Under Engine Credentials (heading):
- - Admin Portal Password: <password>
- Under Notification Settings (heading):
- - Server Name: localhost
- - Server Port Number: 25
- - Sender E-Mail Address: root@localhost
- - Recipient E-Mail Addresses: root@localhost
- Click Next (button)

c. Prepare VM (tab):
- Review the setup information
- Approve the information
- Click Prepare VM (button)

d. Storage (tab):
- Under Storage Settings (heading):
- - Storage Type: Gluster
- - Storage Connection: <enter FQDN or IP of Gluster first host - e.g. rhvhost1.mindwatering.net>
- - Mount Options: <enter the backup rhvhost2 and rhvhost3 - e.g. backup-volfile-servers=rhvhost2.mindwatering.net;rhvhost3.mindwatering.net>
- Click Finish (button)

e. Afterwards, verify the RHV-M set-up is correct:
- Administration Portal --> Compute (left menu) --> Hosts (left menu tab): review hosts
- Administration Portal --> Compute (left menu) --> Virtual Machines (left menu tab): review the RHV-M created
- Administration Portal --> Storage (left menu) --> Domains (left menu tab): review storage configuration
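
The hosted engine state can also be confirmed from the host shell (the same command is used later for pod startup):
[root@rhvhost1 ~]# hosted-engine --vm-status
<verify Engine status reports "health": "good" and the engine VM is up>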

Gluster Storage Domain Configuration Overview:
- Add the previously configured Gluster storage as a RHV storage domain
- Each storage domain requires a dedicated storage logical network attached to each cluster host

Gluster Storage Domain Steps:
a. Create Logical Network for the Storage Domain:
- Administration Portal --> Networks (left menu) --> Networks (left menu tab)
- Click New (button)
- - General (tab):
- - - Data center: <select>
- - - Name: <unique name based on your organization naming convention>
- - - Description: <enter useful description as desired>
- - - Network Label: <unique label>
- - - VLAN Tagging: check the checkbox unless the network is flat (no VLANs)
- - - VM Network: <uncheck>
- - - MTU: 1500, unless this is a storage network and 9000 (jumbo frames) is required
- - Clusters (tab):
- - - Migration Network: <checked>
- - - Gluster Network: <checked>
- - Click OK (button)

b. Attach the Gluster network on each RHV-H:
- Administration Portal --> Compute (left menu) --> Hosts (left menu tab) --> highlight/select host --> Click Edit (button)
- - Network Interfaces (tab):
- - Open the Setup Host Networks window:
- - - Drag and drop the newly created network to the correct Gluster Storage Domain physical NIC.
- - - Verify connectivity: <checked>
- - - Save network configuration: <checked>
- - - Click OK (button)
- - Click OK (button)
- Repeat for rest of RHV-H hosts

c. Verify the network health:
- Administration Portal --> Compute (left menu) --> Hosts (left menu tab) --> select/open host
- - Network Interfaces (tab):
- - - Inspect network state. If any network has "Out of Sync" status or is missing its IP Address, click Refresh Capabilities (button) to sync
- - - Verify again the network state.
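
As a cross-check from each host's shell, the same command used during pod startup shows whether the Gluster logical network received its IP on the expected NIC:
[root@rhvhost1 ~]# ip -br addr show
<verify the NIC/bridge carrying the Gluster network is UP and has the expected storage IP>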

Adding Additional Hosts:
a. Prerequisites:
- Install host with the RHV-H OS
- Configure for passwordless SSH access
- Configure Gluster storage identically to the original three hosts.

b. Add via Administration Portal:
- Administration Portal --> Compute (left menu) --> Hosts (left menu tab) --> New Host (tab)


Maintain a Red Hat Hyperconverged Infrastructure for Virtualization
Maintaining a RHHI-V Pod:
- Most tasks are the same as for a normal RHV cluster
- Some tasks are unique to RHHI-V, such as the Red Hat Gluster Storage

Tasks on a RHHI-V Pod same as RHV:
- Configure high availability using fencing policies.
- Configure backup and recovery options, including geo-replication, failover, and failback.
- Configure encryption with Transport Layer Security (TLS/SSL) and certificates.
- Monitor the cluster and manage notification events.
- Perform upgrades of RHV-H hosts and the RHV-M management engine.
- Add and remove hypervisor hosts.

RHHI-V Pod Shutdown:
- Pod shutdown must be done in a specific order.
- Install the RHHI-V Ansible shutdown package on the host with the engine (RHV-M) - note that any of the hosts might be running the engine at a given time.
[root@rhvhost1 ~]# yum install ovirt-ansible-shutdown-env -y
- Write an Ansible playbook that calls the API; be sure to update the engine URL and password:
[root@rhvhost1 ~]# pwd
<view output - e.g. /root/>
[root@rhvhost1 ~]# vi shutdown_rhhi-v.yml
---
- name: oVirt shutdown environment
  hosts: localhost
  connection: local
  gather_facts: false

  vars:
    engine_url: https://ovirt-engine.example.com/ovirt-engine/api
    engine_user: admin@internal
    engine_password: redhat
    engine_cafile: /etc/pki/ovirt-engine/ca.pem

  roles:
    - ovirt.shutdown_env

<esc>:wq (to save)
- Run the shutdown playbook:
[root@rhvhost1 ~]# ansible-playbook -i localhost ~/shutdown_rhhi-v.yml

RHHI-V Pod Startup:
a. Power on all three (or more) hosts, and wait.

b. SSH into each RHV-H and confirm: Gluster service is running, networks are available and have IP addresses assigned to required NICs, and that Gluster peers are connected to each other:
$ ssh root@rhvhost1.mindwatering.net
<enter pwd>
[root@rhvhost1 ~]# systemctl status glusterd
< verify running, if not running and no error, start it>
root@rhvhost1 ~]# ip -br addr show
< verify networks available and IPs set>
root@rhvhost1 ~]# gluster peer status
<verify 2 peers, and each say (Connected)>

$ ssh root@rhvhost2.mindwatering.net
<enter pwd>
[root@rhvhost2 ~]# systemctl status glusterd
< verify running, if not running and no error, start it>
root@rhvhost2 ~]# ip -br addr show
< verify networks available and IPs set>
root@rhvhost2 ~]# gluster peer status
<verify 2 peers, and each say (Connected)>

$ ssh root@rhvhost3.mindwatering.net
<enter pwd>
[root@rhvhost3 ~]# systemctl status glusterd
< verify running, if not running and no error, start it>
root@rhvhost3 ~]# ip -br addr show
< verify networks available and IPs set>
root@rhvhost3 ~]# gluster peer status
<verify 2 peers, and each say (Connected)>

c. Verify all bricks are online:
[root@rhvhost1 ~]# gluster volume status engine
<verify the Online column shows all bricks with Y>

d. If all is well with Gluster, start the engine on the RHV-H that you want to run the RHV-M -- this does not have to be the last host where it was running.
[root@rhvhost1 ~]# hosted-engine --vm-start
<wait>
[root@rhvhost1 ~]# hosted-engine --vm-status
<confirm healthy>

e. The pod is currently in "global maintenance mode". Set normal status:
- Administration Portal --> Compute (left menu) --> Hosts (left menu tab) --> select/highlight the hosted engine node --> Click Disable Global HA Maintenance (button)
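- Alternatively, global maintenance can be cleared from any host's shell; this is the CLI equivalent of the portal button (use --mode=global to re-enter global maintenance before the next shutdown):
[root@rhvhost1 ~]# hosted-engine --set-maintenance --mode=none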

f. Start all the other VMs either via the Administration Portal or each local host Web Console.
- Administration Portal --> Compute (left menu) --> Virtual Machines (left menu tab) --> select/highlight VM --> Click Run (button)

Managing and Scaling RHHI-V Pod Storage:
- Managing Gluster Storage requires creating and maintaining bricks, building or restructuring RAID volumes which are used for the pod Storage Domains.
- Each brick is an export directory on a RHV-H pod host.
- Volumes are a logical set of bricks (physical disks configured as bricks), which are evenly distributed across the RHV-H pod hosts. The volumes span across the pod's hosts.
- Bricks (disks that make up a Volume) must not have previous partitions or labels.
- - To clear, reset the brick to empty it; it keeps its UUID and its configured host name and brick path.
- Bricks (disks within a volume set) can be replaced when one fails:
- - Select the volume, click Replace Brick (button), select the spare replacement brick by selecting the host (typically same one) and directory of the new brick.
- - Migrate VMs off the volume (or shut them down)
- - Stop the volume
- - Click Remove Brick and select the failed brick.
- - Start the Volume
- - Start back up the VMs if not migrated.
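
If the Administration Portal is unavailable, the same brick replacement can be done with the Gluster CLI. A sketch with placeholder volume and brick paths (replace-brick only supports commit force):
[root@rhvhost1 ~]# gluster volume replace-brick vmstore \
  rhvhost2.mindwatering.net:/gluster_bricks/vmstore/vmstore \
  rhvhost2.mindwatering.net:/gluster_bricks/vmstore_new/vmstore \
  commit force
[root@rhvhost1 ~]# gluster volume heal vmstore info
<verify the replaced brick is healing and the entry counts drop to 0>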

Create a Brick:
- Administration Portal --> Storage (left menu) --> Storage Devices (left menu tab) --> Create a Brick (button)
- In the Create Brick window:
- - Under Create Brick (heading):
- - - Brick Name: vmstore2-brick
- - - Mount Point: /rhgs/vmstore2-brick
- - Under RAID Parameters (heading):
- - - RAID Type: RAID 6
- - - No of Physical Disks in RAID Volume: <empty>
- - - Stripe Size (KB): 128
- - - Storage Devices: <add disks available, same on each host - e.g. sde1 / SCSI / 1200 GiB>
- - - Cache Device: <leave empty, or select the SSD cache device for the pod - e.g. sdb>
- - Click OK (button)

Create a Volume:
- Administration Portal --> Storage (left menu) --> Volumes (left menu tab) --> New Volume (button)
- In the New Volume window:
- - Data Center: Default
- - Volume Cluster: Default
- - Name: <enter unique name based on function and org standard>
- - Type: <Replicate or Distributed>
- - Transport Type: TCP <checked>, RDMA <unchecked>
- - Click Add Bricks (button) to add at least 3 bricks (typically one from each host, or 6 with 2/each host)
- - - Select bricks
- - - Click OK (button)
- - Click OK (button)
- Administration Portal --> Storage (left menu) --> Volumes (left menu tab) --> highlight/select volume --> Start (button)
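
For reference, the equivalent Gluster CLI for a 3-way replicated volume looks like the sketch below (hostnames and brick paths are assumptions; in RHHI-V the Administration Portal steps above are the supported path):
[root@rhvhost1 ~]# gluster volume create vmstore2 replica 3 \
  rhvhost1.mindwatering.net:/gluster_bricks/vmstore2/vmstore2 \
  rhvhost2.mindwatering.net:/gluster_bricks/vmstore2/vmstore2 \
  rhvhost3.mindwatering.net:/gluster_bricks/vmstore2/vmstore2
[root@rhvhost1 ~]# gluster volume start vmstore2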

Delete a Volume:
- Migrate VMs off Volume
- Stop Volume
- Remove Volume
- IMPORTANT: Volume data remains on the underlying bricks until they are wiped and reused



