diff --git a/CNAME b/CNAME new file mode 100644 index 00000000000..4993baf651d --- /dev/null +++ b/CNAME @@ -0,0 +1 @@ +forklift-docs.konveyor.io diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md new file mode 100644 index 00000000000..ddee4673182 --- /dev/null +++ b/CODE_OF_CONDUCT.md @@ -0,0 +1,128 @@ +# Contributor Covenant Code of Conduct + +## Our Pledge + +We as members, contributors, and leaders pledge to make participation in our +community a harassment-free experience for everyone, regardless of age, body +size, visible or invisible disability, ethnicity, sex characteristics, gender +identity and expression, level of experience, education, socio-economic status, +nationality, personal appearance, race, religion, or sexual identity +and orientation. + +We pledge to act and interact in ways that contribute to an open, welcoming, +diverse, inclusive, and healthy community. + +## Our Standards + +Examples of behavior that contributes to a positive environment for our +community include: + +- Demonstrating empathy and kindness toward other people +- Being respectful of differing opinions, viewpoints, and experiences +- Giving and gracefully accepting constructive feedback +- Accepting responsibility and apologizing to those affected by our mistakes, + and learning from the experience +- Focusing on what is best not just for us as individuals, but for the + overall community + +Examples of unacceptable behavior include: + +- The use of sexualized language or imagery, and sexual attention or + advances of any kind +- Trolling, insulting or derogatory comments, and personal or political attacks +- Public or private harassment +- Publishing others' private information, such as a physical or email + address, without their explicit permission +- Other conduct which could reasonably be considered inappropriate in a + professional setting + +## Enforcement Responsibilities + +Community leaders are responsible for clarifying and enforcing our standards of +acceptable behavior and will take appropriate and fair corrective action in +response to any behavior that they deem inappropriate, threatening, offensive, +or harmful. + +Community leaders have the right and responsibility to remove, edit, or reject +comments, commits, code, wiki edits, issues, and other contributions that are +not aligned to this Code of Conduct, and will communicate reasons for moderation +decisions when appropriate. + +## Scope + +This Code of Conduct applies within all community spaces, and also applies when +an individual is officially representing the community in public spaces. +Examples of representing our community include using an official e-mail address, +posting via an official social media account, or acting as an appointed +representative at an online or offline event. + +## Enforcement + +Instances of abusive, harassing, or otherwise unacceptable behavior may be +reported to the community leaders responsible for enforcement at +konveyor.io. +All complaints will be reviewed and investigated promptly and fairly. + +All community leaders are obligated to respect the privacy and security of the +reporter of any incident. + +## Enforcement Guidelines + +Community leaders will follow these Community Impact Guidelines in determining +the consequences for any action they deem in violation of this Code of Conduct: + +### 1. Correction + +**Community Impact**: Use of inappropriate language or other behavior deemed +unprofessional or unwelcome in the community. 
+ +**Consequence**: A private, written warning from community leaders, providing +clarity around the nature of the violation and an explanation of why the +behavior was inappropriate. A public apology may be requested. + +### 2. Warning + +**Community Impact**: A violation through a single incident or series +of actions. + +**Consequence**: A warning with consequences for continued behavior. No +interaction with the people involved, including unsolicited interaction with +those enforcing the Code of Conduct, for a specified period of time. This +includes avoiding interactions in community spaces as well as external channels +like social media. Violating these terms may lead to a temporary or +permanent ban. + +### 3. Temporary Ban + +**Community Impact**: A serious violation of community standards, including +sustained inappropriate behavior. + +**Consequence**: A temporary ban from any sort of interaction or public +communication with the community for a specified period of time. No public or +private interaction with the people involved, including unsolicited interaction +with those enforcing the Code of Conduct, is allowed during this period. +Violating these terms may lead to a permanent ban. + +### 4. Permanent Ban + +**Community Impact**: Demonstrating a pattern of violation of community +standards, including sustained inappropriate behavior, harassment of an +individual, or aggression toward or disparagement of classes of individuals. + +**Consequence**: A permanent ban from any sort of public interaction within +the community. + +## Attribution + +This Code of Conduct is adapted from the [Contributor Covenant][homepage], +version 2.0, available at +https://www.contributor-covenant.org/version/2/0/code_of_conduct.html. + +Community Impact Guidelines were inspired by [Mozilla's code of conduct +enforcement ladder](https://github.com/mozilla/diversity). + +[homepage]: https://www.contributor-covenant.org + +For answers to common questions about this code of conduct, see the FAQ at +https://www.contributor-covenant.org/faq. Translations are available at +https://www.contributor-covenant.org/translations. diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md new file mode 100644 index 00000000000..7f375065b01 --- /dev/null +++ b/CONTRIBUTING.md @@ -0,0 +1,33 @@ +# Contributing to Forklift documentation + +This project is [Apache 2.0 licensed](LICENSE) and accepts contributions via +GitHub pull requests. + +Read the [Guidelines for Red Hat Documentation](https://redhat-documentation.github.io/) before opening a pull request. + +### Upstream and downstream variables + +This document uses the following variables to ensure that upstream and downstream product names and versions are rendered correctly. + +| Variable | Upstream value | Downstream value | +| -------- | -------------- | ---------------- | +| project-full | Forklift | Migration Toolkit for Virtualization | +| project-short | Forklift | MTV | +| project-version | 2.0 | 2.0 | +| virt | KubeVirt | OpenShift Virtualization | +| ocp | OKD | Red Hat OpenShift Container Platform | +| ocp-version | 4.7 | 4.7 | +| ocp-short | OKD | OCP | + +Variables cannot be used in CLI commands or code blocks unless you include the "attributes" keyword: + + [options="nowrap" subs="+quotes,+attributes"] + ---- + # ls {VariableName} + ---- + +You can hide or show specific blocks, paragraphs, warnings or chapters with the `build` variable. 
Its value can be set to "downstream" or "upstream": + + ifeval::["build" == "upstream"] + This content is only relevant for Forklift. + endif::[] diff --git a/Gemfile b/Gemfile new file mode 100644 index 00000000000..c7b0183bfd4 --- /dev/null +++ b/Gemfile @@ -0,0 +1,31 @@ +# frozen_string_literal: true +# Encoding.default_external = Encoding::UTF_8 +# Encoding.default_internal = Encoding::UTF_8 + +source "https://rubygems.org" + +# gem "asciidoctor-pdf" +gem "asciidoctor" +# gem "bundle" +# gem "html-proofer" +# gem "jekyll-theme-minimal" +# gem "jekyll-feed" +gem "jekyll-paginate" +# gem "jekyll-redirect-from" +# gem "jekyll-sitemap" +# gem "jekyll-tagging" +# gem 'jekyll-seo-tag' +# gem "jekyll", ">= 3.5" +# gem "premonition", ">= 4.0.0" +# gem "pygments.rb" +# gem "rake" +# +# +gem "github-pages", group: :jekyll_plugins + +# ensures that jekyll-asciidoc is loaded first +group :jekyll_plugins do + gem 'jekyll-asciidoc' +end + +gemspec diff --git a/Gemfile.lock b/Gemfile.lock new file mode 100644 index 00000000000..c7aec156081 --- /dev/null +++ b/Gemfile.lock @@ -0,0 +1,325 @@ +PATH + remote: . + specs: + jekyll-theme-cayman (0.1.1) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + +GEM + remote: https://rubygems.org/ + specs: + activesupport (7.2.2) + base64 + benchmark (>= 0.3) + bigdecimal + concurrent-ruby (~> 1.0, >= 1.3.1) + connection_pool (>= 2.2.5) + drb + i18n (>= 1.6, < 2) + logger (>= 1.4.2) + minitest (>= 5.1) + securerandom (>= 0.3) + tzinfo (~> 2.0, >= 2.0.5) + addressable (2.8.7) + public_suffix (>= 2.0.2, < 7.0) + asciidoctor (2.0.23) + ast (2.4.2) + base64 (0.2.0) + benchmark (0.4.0) + bigdecimal (3.1.8) + coffee-script (2.4.1) + coffee-script-source + execjs + coffee-script-source (1.12.2) + colorator (1.1.0) + commonmarker (0.23.10) + concurrent-ruby (1.3.4) + connection_pool (2.4.1) + csv (3.3.0) + dnsruby (1.72.2) + simpleidn (~> 0.2.1) + drb (2.2.1) + em-websocket (0.5.3) + eventmachine (>= 0.12.9) + http_parser.rb (~> 0) + ethon (0.16.0) + ffi (>= 1.15.0) + eventmachine (1.2.7) + execjs (2.10.0) + faraday (2.12.0) + faraday-net_http (>= 2.0, < 3.4) + json + logger + faraday-net_http (3.3.0) + net-http + ffi (1.17.0-x86_64-linux-musl) + forwardable-extended (2.6.0) + gemoji (4.1.0) + github-pages (232) + github-pages-health-check (= 1.18.2) + jekyll (= 3.10.0) + jekyll-avatar (= 0.8.0) + jekyll-coffeescript (= 1.2.2) + jekyll-commonmark-ghpages (= 0.5.1) + jekyll-default-layout (= 0.1.5) + jekyll-feed (= 0.17.0) + jekyll-gist (= 1.5.0) + jekyll-github-metadata (= 2.16.1) + jekyll-include-cache (= 0.2.1) + jekyll-mentions (= 1.6.0) + jekyll-optional-front-matter (= 0.3.2) + jekyll-paginate (= 1.1.0) + jekyll-readme-index (= 0.3.0) + jekyll-redirect-from (= 0.16.0) + jekyll-relative-links (= 0.6.1) + jekyll-remote-theme (= 0.4.3) + jekyll-sass-converter (= 1.5.2) + jekyll-seo-tag (= 2.8.0) + jekyll-sitemap (= 1.4.0) + jekyll-swiss (= 1.0.0) + jekyll-theme-architect (= 0.2.0) + jekyll-theme-cayman (= 0.2.0) + jekyll-theme-dinky (= 0.2.0) + jekyll-theme-hacker (= 0.2.0) + jekyll-theme-leap-day (= 0.2.0) + jekyll-theme-merlot (= 0.2.0) + jekyll-theme-midnight (= 0.2.0) + jekyll-theme-minimal (= 0.2.0) + jekyll-theme-modernist (= 0.2.0) + jekyll-theme-primer (= 0.6.0) + jekyll-theme-slate (= 0.2.0) + jekyll-theme-tactile (= 0.2.0) + jekyll-theme-time-machine (= 0.2.0) + jekyll-titles-from-headings (= 0.5.3) + jemoji (= 0.13.0) + kramdown (= 2.4.0) + kramdown-parser-gfm (= 1.1.0) + liquid (= 4.0.4) + mercenary (~> 0.3) + minima (= 2.5.1) + nokogiri (>= 1.16.2, < 2.0) + 
rouge (= 3.30.0) + terminal-table (~> 1.4) + webrick (~> 1.8) + github-pages-health-check (1.18.2) + addressable (~> 2.3) + dnsruby (~> 1.60) + octokit (>= 4, < 8) + public_suffix (>= 3.0, < 6.0) + typhoeus (~> 1.3) + html-pipeline (2.14.3) + activesupport (>= 2) + nokogiri (>= 1.4) + html-proofer (3.19.4) + addressable (~> 2.3) + mercenary (~> 0.3) + nokogiri (~> 1.13) + parallel (~> 1.10) + rainbow (~> 3.0) + typhoeus (~> 1.3) + yell (~> 2.0) + http_parser.rb (0.8.0) + i18n (1.14.6) + concurrent-ruby (~> 1.0) + jekyll (3.10.0) + addressable (~> 2.4) + colorator (~> 1.0) + csv (~> 3.0) + em-websocket (~> 0.5) + i18n (>= 0.7, < 2) + jekyll-sass-converter (~> 1.0) + jekyll-watch (~> 2.0) + kramdown (>= 1.17, < 3) + liquid (~> 4.0) + mercenary (~> 0.3.3) + pathutil (~> 0.9) + rouge (>= 1.7, < 4) + safe_yaml (~> 1.0) + webrick (>= 1.0) + jekyll-asciidoc (3.0.1) + asciidoctor (>= 1.5.0, < 3.0.0) + jekyll (>= 3.0.0) + jekyll-avatar (0.8.0) + jekyll (>= 3.0, < 5.0) + jekyll-coffeescript (1.2.2) + coffee-script (~> 2.2) + coffee-script-source (~> 1.12) + jekyll-commonmark (1.4.0) + commonmarker (~> 0.22) + jekyll-commonmark-ghpages (0.5.1) + commonmarker (>= 0.23.7, < 1.1.0) + jekyll (>= 3.9, < 4.0) + jekyll-commonmark (~> 1.4.0) + rouge (>= 2.0, < 5.0) + jekyll-default-layout (0.1.5) + jekyll (>= 3.0, < 5.0) + jekyll-feed (0.17.0) + jekyll (>= 3.7, < 5.0) + jekyll-gist (1.5.0) + octokit (~> 4.2) + jekyll-github-metadata (2.16.1) + jekyll (>= 3.4, < 5.0) + octokit (>= 4, < 7, != 4.4.0) + jekyll-include-cache (0.2.1) + jekyll (>= 3.7, < 5.0) + jekyll-mentions (1.6.0) + html-pipeline (~> 2.3) + jekyll (>= 3.7, < 5.0) + jekyll-optional-front-matter (0.3.2) + jekyll (>= 3.0, < 5.0) + jekyll-paginate (1.1.0) + jekyll-readme-index (0.3.0) + jekyll (>= 3.0, < 5.0) + jekyll-redirect-from (0.16.0) + jekyll (>= 3.3, < 5.0) + jekyll-relative-links (0.6.1) + jekyll (>= 3.3, < 5.0) + jekyll-remote-theme (0.4.3) + addressable (~> 2.0) + jekyll (>= 3.5, < 5.0) + jekyll-sass-converter (>= 1.0, <= 3.0.0, != 2.0.0) + rubyzip (>= 1.3.0, < 3.0) + jekyll-sass-converter (1.5.2) + sass (~> 3.4) + jekyll-seo-tag (2.8.0) + jekyll (>= 3.8, < 5.0) + jekyll-sitemap (1.4.0) + jekyll (>= 3.7, < 5.0) + jekyll-swiss (1.0.0) + jekyll-theme-architect (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-dinky (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-hacker (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-leap-day (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-merlot (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-midnight (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-minimal (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-modernist (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-primer (0.6.0) + jekyll (> 3.5, < 5.0) + jekyll-github-metadata (~> 2.9) + jekyll-seo-tag (~> 2.0) + jekyll-theme-slate (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-tactile (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-time-machine (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-titles-from-headings (0.5.3) + jekyll (>= 3.3, < 5.0) + jekyll-watch (2.2.1) + listen (~> 3.0) + jemoji (0.13.0) + gemoji (>= 3, < 5) + html-pipeline (~> 2.2) + jekyll (>= 3.0, < 5.0) + json (2.8.1) + kramdown (2.4.0) + rexml + kramdown-parser-gfm (1.1.0) + kramdown (~> 2.0) + liquid (4.0.4) + listen (3.9.0) 
+ rb-fsevent (~> 0.10, >= 0.10.3) + rb-inotify (~> 0.9, >= 0.9.10) + logger (1.6.1) + mercenary (0.3.6) + minima (2.5.1) + jekyll (>= 3.5, < 5.0) + jekyll-feed (~> 0.9) + jekyll-seo-tag (~> 2.1) + minitest (5.25.1) + net-http (0.5.0) + uri + nokogiri (1.16.7-x86_64-linux) + racc (~> 1.4) + octokit (4.25.1) + faraday (>= 1, < 3) + sawyer (~> 0.9) + parallel (1.26.3) + parser (3.3.6.0) + ast (~> 2.4.1) + racc + pathutil (0.16.2) + forwardable-extended (~> 2.6) + public_suffix (5.1.1) + racc (1.8.1) + rainbow (3.1.1) + rb-fsevent (0.11.2) + rb-inotify (0.11.1) + ffi (~> 1.0) + regexp_parser (2.9.2) + rexml (3.3.9) + rouge (3.30.0) + rubocop (0.93.1) + parallel (~> 1.10) + parser (>= 2.7.1.5) + rainbow (>= 2.2.2, < 4.0) + regexp_parser (>= 1.8) + rexml + rubocop-ast (>= 0.6.0) + ruby-progressbar (~> 1.7) + unicode-display_width (>= 1.4.0, < 2.0) + rubocop-ast (1.35.0) + parser (>= 3.3.1.0) + ruby-progressbar (1.13.0) + rubyzip (2.3.2) + safe_yaml (1.0.5) + sass (3.7.4) + sass-listen (~> 4.0.0) + sass-listen (4.0.0) + rb-fsevent (~> 0.9, >= 0.9.4) + rb-inotify (~> 0.9, >= 0.9.7) + sawyer (0.9.2) + addressable (>= 2.3.5) + faraday (>= 0.17.3, < 3) + securerandom (0.3.2) + simpleidn (0.2.3) + terminal-table (1.8.0) + unicode-display_width (~> 1.1, >= 1.1.1) + typhoeus (1.4.1) + ethon (>= 0.9.0) + tzinfo (2.0.6) + concurrent-ruby (~> 1.0) + unicode-display_width (1.8.0) + uri (1.0.1) + w3c_validators (1.3.7) + json (>= 1.8) + nokogiri (~> 1.6) + rexml (~> 3.2) + webrick (1.9.0) + yell (2.2.2) + +PLATFORMS + x86_64-linux-musl + +DEPENDENCIES + asciidoctor + github-pages + html-proofer (~> 3.0) + jekyll-asciidoc + jekyll-paginate + jekyll-theme-cayman! + rubocop (~> 0.50) + w3c_validators (~> 1.3) + +BUNDLED WITH + 2.3.25 diff --git a/LICENSE b/LICENSE new file mode 100644 index 00000000000..d6456956733 --- /dev/null +++ b/LICENSE @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). 
+ + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/assets/css/style.css b/assets/css/style.css new file mode 100644 index 00000000000..266420bdba9 --- /dev/null +++ b/assets/css/style.css @@ -0,0 +1,352 @@ +/*! normalize.css v3.0.2 | MIT License | git.io/normalize */ +/** 1. Set default font family to sans-serif. 2. Prevent iOS text size adjust after orientation change, without disabling user zoom. */ +@import url("https://fonts.googleapis.com/css?family=Open+Sans:400,700"); +html { font-family: sans-serif; /* 1 */ -ms-text-size-adjust: 100%; /* 2 */ -webkit-text-size-adjust: 100%; /* 2 */ } + +/** Remove default margin. */ +body { margin: 0; } + +/* HTML5 display definitions ========================================================================== */ +/** Correct `block` display not defined for any HTML5 element in IE 8/9. Correct `block` display not defined for `details` or `summary` in IE 10/11 and Firefox. Correct `block` display not defined for `main` in IE 11. 
*/ +article, aside, details, figcaption, figure, footer, header, hgroup, main, menu, nav, section, summary { display: block; } + +/** 1. Correct `inline-block` display not defined in IE 8/9. 2. Normalize vertical alignment of `progress` in Chrome, Firefox, and Opera. */ +audio, canvas, progress, video { display: inline-block; /* 1 */ vertical-align: baseline; /* 2 */ } + +/** Prevent modern browsers from displaying `audio` without controls. Remove excess height in iOS 5 devices. */ +audio:not([controls]) { display: none; height: 0; } + +/** Address `[hidden]` styling not present in IE 8/9/10. Hide the `template` element in IE 8/9/11, Safari, and Firefox < 22. */ +[hidden], template { display: none; } + +/* Links ========================================================================== */ +/** Remove the gray background color from active links in IE 10. */ +a { background-color: transparent; } + +/** Improve readability when focused and also mouse hovered in all browsers. */ +a:active, a:hover { outline: 0; } + +/* Text-level semantics ========================================================================== */ +/** Address styling not present in IE 8/9/10/11, Safari, and Chrome. */ +abbr[title] { border-bottom: 1px dotted; } + +/** Address style set to `bolder` in Firefox 4+, Safari, and Chrome. */ +b, strong { font-weight: bold; } + +/** Address styling not present in Safari and Chrome. */ +dfn { font-style: italic; } + +/** Address variable `h1` font-size and margin within `section` and `article` contexts in Firefox 4+, Safari, and Chrome. */ +h1 { font-size: 2em; margin: 0.67em 0; } + +/** Address styling not present in IE 8/9. */ +mark { background: #ff0; color: #000; } + +/** Address inconsistent and variable font size in all browsers. */ +small { font-size: 80%; } + +/** Prevent `sub` and `sup` affecting `line-height` in all browsers. */ +sub, sup { font-size: 75%; line-height: 0; position: relative; vertical-align: baseline; } + +sup { top: -0.5em; } + +sub { bottom: -0.25em; } + +/* Embedded content ========================================================================== */ +/** Remove border when inside `a` element in IE 8/9/10. */ +img { border: 0; } + +/** Correct overflow not hidden in IE 9/10/11. */ +svg:not(:root) { overflow: hidden; } + +/* Grouping content ========================================================================== */ +/** Address margin not present in IE 8/9 and Safari. */ +figure { margin: 1em 40px; } + +/** Address differences between Firefox and other browsers. */ +hr { box-sizing: content-box; height: 0; } + +/** Contain overflow in all browsers. */ +pre { overflow: auto; } + +/** Address odd `em`-unit font size rendering in all browsers. */ +code, kbd, pre, samp { font-family: monospace, monospace; font-size: 1em; } + +/* Forms ========================================================================== */ +/** Known limitation: by default, Chrome and Safari on OS X allow very limited styling of `select`, unless a `border` property is set. */ +/** 1. Correct color not being inherited. Known issue: affects color of disabled elements. 2. Correct font properties not being inherited. 3. Address margins set differently in Firefox 4+, Safari, and Chrome. */ +button, input, optgroup, select, textarea { color: inherit; /* 1 */ font: inherit; /* 2 */ margin: 0; /* 3 */ } + +/** Address `overflow` set to `hidden` in IE 8/9/10/11. */ +button { overflow: visible; } + +/** Address inconsistent `text-transform` inheritance for `button` and `select`. 
All other form control elements do not inherit `text-transform` values. Correct `button` style inheritance in Firefox, IE 8/9/10/11, and Opera. Correct `select` style inheritance in Firefox. */ +button, select { text-transform: none; } + +/** 1. Avoid the WebKit bug in Android 4.0.* where (2) destroys native `audio` and `video` controls. 2. Correct inability to style clickable `input` types in iOS. 3. Improve usability and consistency of cursor style between image-type `input` and others. */ +button, html input[type="button"], input[type="reset"], input[type="submit"] { -webkit-appearance: button; /* 2 */ cursor: pointer; /* 3 */ } + +/** Re-set default cursor for disabled elements. */ +button[disabled], html input[disabled] { cursor: default; } + +/** Remove inner padding and border in Firefox 4+. */ +button::-moz-focus-inner, input::-moz-focus-inner { border: 0; padding: 0; } + +/** Address Firefox 4+ setting `line-height` on `input` using `!important` in the UA stylesheet. */ +input { line-height: normal; } + +/** It's recommended that you don't attempt to style these elements. Firefox's implementation doesn't respect box-sizing, padding, or width. 1. Address box sizing set to `content-box` in IE 8/9/10. 2. Remove excess padding in IE 8/9/10. */ +input[type="checkbox"], input[type="radio"] { box-sizing: border-box; /* 1 */ padding: 0; /* 2 */ } + +/** Fix the cursor style for Chrome's increment/decrement buttons. For certain `font-size` values of the `input`, it causes the cursor style of the decrement button to change from `default` to `text`. */ +input[type="number"]::-webkit-inner-spin-button, input[type="number"]::-webkit-outer-spin-button { height: auto; } + +/** 1. Address `appearance` set to `searchfield` in Safari and Chrome. 2. Address `box-sizing` set to `border-box` in Safari and Chrome (include `-moz` to future-proof). */ +input[type="search"] { -webkit-appearance: textfield; /* 1 */ /* 2 */ box-sizing: content-box; } + +/** Remove inner padding and search cancel button in Safari and Chrome on OS X. Safari (but not Chrome) clips the cancel button when the search input has padding (and `textfield` appearance). */ +input[type="search"]::-webkit-search-cancel-button, input[type="search"]::-webkit-search-decoration { -webkit-appearance: none; } + +/** Define consistent border, margin, and padding. */ +fieldset { border: 1px solid #c0c0c0; margin: 0 2px; padding: 0.35em 0.625em 0.75em; } + +/** 1. Correct `color` not being inherited in IE 8/9/10/11. 2. Remove padding so people aren't caught out if they zero out fieldsets. */ +legend { border: 0; /* 1 */ padding: 0; /* 2 */ } + +/** Remove default vertical scrollbar in IE 8/9/10/11. */ +textarea { overflow: auto; } + +/** Don't inherit the `font-weight` (applied by a rule above). NOTE: the default cannot safely be changed in Chrome and Safari on OS X. */ +optgroup { font-weight: bold; } + +/* Tables ========================================================================== */ +/** Remove most spacing between table cells. 
*/ +table { border-collapse: collapse; border-spacing: 0; } + +td, th { padding: 0; } + +.highlight table td { padding: 5px; } + +.highlight table pre { margin: 0; } + +.highlight .cm { color: #999988; font-style: italic; } + +.highlight .cp { color: #999999; font-weight: bold; } + +.highlight .c1 { color: #999988; font-style: italic; } + +.highlight .cs { color: #999999; font-weight: bold; font-style: italic; } + +.highlight .c, .highlight .cd { color: #999988; font-style: italic; } + +.highlight .err { color: #a61717; background-color: #e3d2d2; } + +.highlight .gd { color: #000000; background-color: #ffdddd; } + +.highlight .ge { color: #000000; font-style: italic; } + +.highlight .gr { color: #aa0000; } + +.highlight .gh { color: #999999; } + +.highlight .gi { color: #000000; background-color: #ddffdd; } + +.highlight .go { color: #888888; } + +.highlight .gp { color: #555555; } + +.highlight .gs { font-weight: bold; } + +.highlight .gu { color: #aaaaaa; } + +.highlight .gt { color: #aa0000; } + +.highlight .kc { color: #000000; font-weight: bold; } + +.highlight .kd { color: #000000; font-weight: bold; } + +.highlight .kn { color: #000000; font-weight: bold; } + +.highlight .kp { color: #000000; font-weight: bold; } + +.highlight .kr { color: #000000; font-weight: bold; } + +.highlight .kt { color: #445588; font-weight: bold; } + +.highlight .k, .highlight .kv { color: #000000; font-weight: bold; } + +.highlight .mf { color: #009999; } + +.highlight .mh { color: #009999; } + +.highlight .il { color: #009999; } + +.highlight .mi { color: #009999; } + +.highlight .mo { color: #009999; } + +.highlight .m, .highlight .mb, .highlight .mx { color: #009999; } + +.highlight .sb { color: #d14; } + +.highlight .sc { color: #d14; } + +.highlight .sd { color: #d14; } + +.highlight .s2 { color: #d14; } + +.highlight .se { color: #d14; } + +.highlight .sh { color: #d14; } + +.highlight .si { color: #d14; } + +.highlight .sx { color: #d14; } + +.highlight .sr { color: #009926; } + +.highlight .s1 { color: #d14; } + +.highlight .ss { color: #990073; } + +.highlight .s { color: #d14; } + +.highlight .na { color: #008080; } + +.highlight .bp { color: #999999; } + +.highlight .nb { color: #0086B3; } + +.highlight .nc { color: #445588; font-weight: bold; } + +.highlight .no { color: #008080; } + +.highlight .nd { color: #3c5d5d; font-weight: bold; } + +.highlight .ni { color: #800080; } + +.highlight .ne { color: #990000; font-weight: bold; } + +.highlight .nf { color: #990000; font-weight: bold; } + +.highlight .nl { color: #990000; font-weight: bold; } + +.highlight .nn { color: #555555; } + +.highlight .nt { color: #000080; } + +.highlight .vc { color: #008080; } + +.highlight .vg { color: #008080; } + +.highlight .vi { color: #008080; } + +.highlight .nv { color: #008080; } + +.highlight .ow { color: #000000; font-weight: bold; } + +.highlight .o { color: #000000; font-weight: bold; } + +.highlight .w { color: #bbbbbb; } + +.highlight { background-color: #f8f8f8; } + +* { box-sizing: border-box; } + +body { padding: 0; margin: 0; font-family: "Open Sans", "Helvetica Neue", Helvetica, Arial, sans-serif; font-size: 16px; line-height: 1.5; color: #606c71; } + +#skip-to-content { height: 1px; width: 1px; position: absolute; overflow: hidden; top: -10px; } +#skip-to-content:focus { position: fixed; top: 10px; left: 10px; height: auto; width: auto; background: #e19447; outline: thick solid #e19447; } + +a { color: #1e6bb8; text-decoration: none; } +a:hover { text-decoration: underline; } + +.btn { display: 
inline-block; margin-bottom: 1rem; color: rgba(255, 255, 255, 0.7); background-color: rgba(255, 255, 255, 0.08); border-color: rgba(255, 255, 255, 0.2); border-style: solid; border-width: 1px; border-radius: 0.3rem; transition: color 0.2s, background-color 0.2s, border-color 0.2s; } +.btn:hover { color: rgba(255, 255, 255, 0.8); text-decoration: none; background-color: rgba(255, 255, 255, 0.2); border-color: rgba(255, 255, 255, 0.3); } +.btn + .btn { margin-left: 1rem; } +@media screen and (min-width: 64em) { .btn { padding: 0.75rem 1rem; } } +@media screen and (min-width: 42em) and (max-width: 64em) { .btn { padding: 0.6rem 0.9rem; font-size: 0.9rem; } } +@media screen and (max-width: 42em) { .btn { display: block; width: 100%; padding: 0.75rem; font-size: 0.9rem; } + .btn + .btn { margin-top: 1rem; margin-left: 0; } } + +.page-header { color: #fff; text-align: center; background-color: #1f2067; background-image: linear-gradient(90deg, #3b3c93, #1f2067); } +@media screen and (min-width: 64em) { .page-header { padding: 5rem 6rem; } } +@media screen and (min-width: 42em) and (max-width: 64em) { .page-header { padding: 3rem 4rem; } } +@media screen and (max-width: 42em) { .page-header { padding: 2rem 1rem; } } + +.project-name { margin-top: 0; margin-bottom: 0.1rem; } +@media screen and (min-width: 64em) { .project-name { font-size: 3.25rem; } } +@media screen and (min-width: 42em) and (max-width: 64em) { .project-name { font-size: 2.25rem; } } +@media screen and (max-width: 42em) { .project-name { font-size: 1.75rem; } } + +.project-tagline { margin-bottom: 2rem; font-weight: normal; opacity: 0.7; } +@media screen and (min-width: 64em) { .project-tagline { font-size: 1.25rem; } } +@media screen and (min-width: 42em) and (max-width: 64em) { .project-tagline { font-size: 1.15rem; } } +@media screen and (max-width: 42em) { .project-tagline { font-size: 1rem; } } + +.main-content { word-wrap: break-word; } +.main-content :first-child { margin-top: 0; } +@media screen and (min-width: 64em) { .main-content { max-width: 64rem; padding: 2rem 6rem; margin: 0 auto; font-size: 1.1rem; } } +@media screen and (min-width: 42em) and (max-width: 64em) { .main-content { padding: 2rem 4rem; font-size: 1.1rem; } } +@media screen and (max-width: 42em) { .main-content { padding: 2rem 1rem; font-size: 1rem; } } +.main-content kbd { background-color: #fafbfc; border: 1px solid #c6cbd1; border-bottom-color: #959da5; border-radius: 3px; box-shadow: inset 0 -1px 0 #959da5; color: #444d56; display: inline-block; font-size: 11px; line-height: 10px; padding: 3px 5px; vertical-align: middle; } +.main-content img { max-width: 100%; } +.main-content h1, .main-content h2, .main-content h3, .main-content h4, .main-content h5, .main-content h6 { margin-top: 2rem; margin-bottom: 1rem; font-weight: normal; color: #3d3c93; } +.main-content p { margin-bottom: 1em; } +.main-content code { padding: 2px 4px; font-family: Consolas, "Liberation Mono", Menlo, Courier, monospace; font-size: 0.9rem; color: #567482; background-color: #f3f6fa; border-radius: 0.3rem; } +.main-content pre { padding: 0.8rem; margin-top: 0; margin-bottom: 1rem; font: 1rem Consolas, "Liberation Mono", Menlo, Courier, monospace; color: #567482; word-wrap: normal; background-color: #f3f6fa; border: solid 1px #dce6f0; border-radius: 0.3rem; } +.main-content pre > code { padding: 0; margin: 0; font-size: 0.9rem; color: #567482; word-break: normal; white-space: pre; background: transparent; border: 0; } +.main-content .highlight { margin-bottom: 1rem; } 
+.main-content .highlight pre { margin-bottom: 0; word-break: normal; } +.main-content .highlight pre, .main-content pre { padding: 0.8rem; overflow: auto; font-size: 0.9rem; line-height: 1.45; border-radius: 0.3rem; -webkit-overflow-scrolling: touch; } +.main-content pre code, .main-content pre tt { display: inline; max-width: initial; padding: 0; margin: 0; overflow: initial; line-height: inherit; word-wrap: normal; background-color: transparent; border: 0; } +.main-content pre code:before, .main-content pre code:after, .main-content pre tt:before, .main-content pre tt:after { content: normal; } +.main-content ul, .main-content ol { margin-top: 0; } +.main-content blockquote { padding: 0 1rem; margin-left: 0; color: #819198; border-left: 0.3rem solid #dce6f0; } +.main-content blockquote > :first-child { margin-top: 0; } +.main-content blockquote > :last-child { margin-bottom: 0; } +.main-content table { display: block; width: 100%; overflow: auto; word-break: normal; word-break: keep-all; -webkit-overflow-scrolling: touch; } +.main-content table th { font-weight: bold; } +.main-content table th, .main-content table td { padding: 0.5rem 1rem; border: 1px solid #e9ebec; } +.main-content dl { padding: 0; } +.main-content dl dt { padding: 0; margin-top: 1rem; font-size: 1rem; font-weight: bold; } +.main-content dl dd { padding: 0; margin-bottom: 1rem; } +.main-content hr { height: 2px; padding: 0; margin: 1rem 0; background-color: #eff0f1; border: 0; } + +.site-footer { padding-top: 2rem; margin-top: 2rem; border-top: solid 1px #eff0f1; } +@media screen and (min-width: 64em) { .site-footer { font-size: 1rem; } } +@media screen and (min-width: 42em) and (max-width: 64em) { .site-footer { font-size: 1rem; } } +@media screen and (max-width: 42em) { .site-footer { font-size: 0.9rem; } } + +.site-footer-owner { display: block; font-weight: bold; } + +.site-footer-credits { color: #819198; } + +h1#logo img { max-width: 100%; } + +h1#logo { margin-bottom: 0; } + +.main-logo { position: relative; z-index: 9; max-width: 70%; display: block; margin: 0 auto; margin-bottom: -5em; } + +.belt { width: 100%; color: #fff; } + +@keyframes beltmove { 100% { stroke-dashoffset: 600; } } +.belt path { transform: skew(-45deg); stroke-width: 35; stroke-dasharray: 2 10 2 10 2 10 2 10 2 10; animation: beltmove 20s linear infinite; } + +.main-logo use { fill: #a73; opacity: 0; animation: convey 3s linear forwards; } + +use:nth-child(1) { animation-delay: 5s; } + +use:nth-child(2) { animation-delay: 3s; } + +use:nth-child(3) { animation-delay: 1s; } + +@keyframes convey { 0% { transform: translate(40%, 40%); opacity: 0; } + 20% { opacity: 1; } + 80% { transform: translate(0%, 40%); } + 100% { opacity: 1; } } +@keyframes convey2 { 0% { transform: translate(50%, 60%); opacity: 0; } + 20% { opacity: 1; } + 80% { transform: translate(0%, 60%); } + 100% { opacity: 1; } } +use:nth-child(1) { animation: convey2 3s linear forwards 5s; } diff --git a/assets/fonts/Noto-Sans-700/Noto-Sans-700.eot b/assets/fonts/Noto-Sans-700/Noto-Sans-700.eot new file mode 100755 index 00000000000..03bf93fec2a Binary files /dev/null and b/assets/fonts/Noto-Sans-700/Noto-Sans-700.eot differ diff --git a/assets/fonts/Noto-Sans-700/Noto-Sans-700.svg b/assets/fonts/Noto-Sans-700/Noto-Sans-700.svg new file mode 100644 index 00000000000..925fe47475a --- /dev/null +++ b/assets/fonts/Noto-Sans-700/Noto-Sans-700.svg @@ -0,0 +1,336 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/assets/fonts/Noto-Sans-700/Noto-Sans-700.ttf b/assets/fonts/Noto-Sans-700/Noto-Sans-700.ttf new file mode 100755 index 00000000000..4599e3ca9af Binary files /dev/null and b/assets/fonts/Noto-Sans-700/Noto-Sans-700.ttf differ diff --git a/assets/fonts/Noto-Sans-700/Noto-Sans-700.woff b/assets/fonts/Noto-Sans-700/Noto-Sans-700.woff new file mode 100755 index 00000000000..9d0b78df811 Binary files /dev/null and b/assets/fonts/Noto-Sans-700/Noto-Sans-700.woff differ diff --git a/assets/fonts/Noto-Sans-700/Noto-Sans-700.woff2 b/assets/fonts/Noto-Sans-700/Noto-Sans-700.woff2 new file mode 100755 index 00000000000..55fc44bcd12 Binary files /dev/null and b/assets/fonts/Noto-Sans-700/Noto-Sans-700.woff2 differ diff --git a/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.eot b/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.eot new file mode 100755 index 00000000000..cb97b2b4dd5 Binary files /dev/null and b/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.eot differ diff --git a/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.svg b/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.svg new file mode 100644 index 00000000000..abdafc0f53b --- /dev/null +++ b/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.svg @@ -0,0 +1,334 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.ttf b/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.ttf new file mode 100755 index 00000000000..6640dbeb333 Binary files /dev/null and b/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.ttf differ diff --git a/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.woff b/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.woff new file mode 100755 index 00000000000..209739eeb09 Binary files /dev/null and b/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.woff differ diff --git a/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.woff2 b/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.woff2 new file mode 100755 index 00000000000..f5525aa28be Binary files /dev/null and b/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.woff2 differ diff --git a/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.eot b/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.eot new file mode 100755 index 00000000000..a9973499352 Binary files /dev/null and b/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.eot differ diff --git a/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.svg b/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.svg new file mode 100644 index 00000000000..dcd8fc89dc9 --- /dev/null +++ b/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.svg @@ -0,0 +1,337 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.ttf b/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.ttf new file mode 100755 index 00000000000..7f75a2d9096 Binary files /dev/null and b/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.ttf differ diff --git a/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.woff b/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.woff new file mode 100755 index 00000000000..6dce67cede1 Binary files /dev/null and b/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.woff differ diff --git a/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.woff2 b/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.woff2 new file mode 100755 index 00000000000..a9c14c49206 Binary files /dev/null and b/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.woff2 differ diff --git a/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.eot b/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.eot new file mode 100755 index 00000000000..15fc8bfc91a Binary files /dev/null and b/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.eot differ diff --git a/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.svg b/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.svg new file mode 100644 index 00000000000..bd2894d6a27 --- /dev/null +++ b/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.svg @@ -0,0 +1,335 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.ttf b/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.ttf new file mode 100755 index 00000000000..a83bbf9fc89 Binary files /dev/null and b/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.ttf differ diff --git a/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.woff b/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.woff new file mode 100755 index 00000000000..17c85006d0d Binary files /dev/null and b/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.woff differ diff --git a/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.woff2 b/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.woff2 new file mode 100755 index 00000000000..a87d9cd7c61 Binary files /dev/null and b/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.woff2 differ diff --git a/assets/img/forklift-logo-darkbg.svg b/assets/img/forklift-logo-darkbg.svg new file mode 100644 index 00000000000..8a846e6361a --- /dev/null +++ b/assets/img/forklift-logo-darkbg.svg @@ -0,0 +1,164 @@ + + + + + + + + + + image/svg+xml + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/assets/img/forklift-logo-lightbg.svg b/assets/img/forklift-logo-lightbg.svg new file mode 100644 index 00000000000..a8038cdf923 --- /dev/null +++ b/assets/img/forklift-logo-lightbg.svg @@ -0,0 +1,159 @@ + + + + + + + + + + image/svg+xml + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/assets/img/konveyor-logo-forklift.jpg b/assets/img/konveyor-logo-forklift.jpg new file mode 100644 index 00000000000..185460764ef Binary files /dev/null and 
b/assets/img/konveyor-logo-forklift.jpg differ diff --git a/assets/img/logo_location.txt b/assets/img/logo_location.txt new file mode 100644 index 00000000000..2d6d6c6b515 --- /dev/null +++ b/assets/img/logo_location.txt @@ -0,0 +1 @@ +https://github.com/konveyor/community/tree/main/brand/logo diff --git a/assets/js/scale.fix.js b/assets/js/scale.fix.js new file mode 100644 index 00000000000..2f4f8fd4d31 --- /dev/null +++ b/assets/js/scale.fix.js @@ -0,0 +1,30 @@ +(function (document) { + var metas = document.getElementsByTagName("meta"), + changeViewportContent = function (content) { + for (var i = 0; i < metas.length; i++) { + if (metas[i].name == "viewport") { + metas[i].content = content; + } + } + }, + initialize = function () { + changeViewportContent( + "width=device-width, minimum-scale=1.0, maximum-scale=1.0" + ); + }, + gestureStart = function () { + changeViewportContent( + "width=device-width, minimum-scale=0.25, maximum-scale=1.6" + ); + }, + gestureEnd = function () { + initialize(); + }; + + if (navigator.userAgent.match(/iPhone/i)) { + initialize(); + + document.addEventListener("touchstart", gestureStart, false); + document.addEventListener("touchend", gestureEnd, false); + } +})(document); diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/docinfo.xml b/documentation/doc-Migration_Toolkit_for_Virtualization/docinfo.xml new file mode 100644 index 00000000000..bb612757d2b --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/docinfo.xml @@ -0,0 +1,15 @@ +{user-guide-title} +{project-full} +{project-version} +{subtitle} + + {abstract} + + + + Red Hat Modernization and Migration + Documentation Team + ccs-mms-docs@redhat.com + + + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/master/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/master/index.html new file mode 100644 index 00000000000..6044896fd8f --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/master/index.html @@ -0,0 +1,8582 @@ + + + + + + + + Installing and using Forklift 2.3 | Forklift Documentation + + + + + + + + + + + + + +Installing and using Forklift 2.3 | Forklift Documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+

Installing and using Forklift 2.3

+
+
+ +
+
+

About Forklift

+
+
+

You can use Forklift to migrate virtual machines from the following source providers to KubeVirt destination providers:

+
+
+
    +
  • +

    VMware vSphere

    +
  • +
  • +

    oVirt

    +
  • +
  • +

    OpenStack

    +
  • +
  • +

    Open Virtual Appliances (OVAs) that were created by VMware vSphere

    +
  • +
  • +

    Remote KubeVirt clusters

    +
  • +
+
+ +
+

About cold and warm migration

+
+

Forklift supports cold migration from:

+
+
+
    +
  • +

    VMware vSphere

    +
  • +
  • +

    oVirt

    +
  • +
  • +

    OpenStack

    +
  • +
  • +

    Remote KubeVirt clusters

    +
  • +
+
+
+

Forklift supports warm migration from VMware vSphere and from oVirt.

+
+
+

Cold migration

+
+

Cold migration is the default migration type. The source virtual machines are shut down while the data is copied.

+
+
+ + + + + +
+ + +
+

VMware only: In cold migrations, if a package manager cannot be used during the migration, Forklift does not install the qemu-guest-agent daemon on the migrated VMs. This has some impact on the functionality of the migrated VMs, but overall, they are still expected to function.

+
+
+

To enable Forklift to automatically install qemu-guest-agent on the migrated VMs, ensure that your package manager can install the daemon during the first boot of the VM after migration.

+
+
+

If that is not possible, install qemu-guest-agent by using your preferred automated or manual procedure, for example, a first-boot mechanism like the sketch that follows.

+
+
+
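For example, a minimal first-boot sketch for a Linux guest that uses cloud-init; the package name qemu-guest-agent is typical for RHEL and Fedora guests but can differ by distribution, so adapt it to your images:

----
#cloud-config
# Install the guest agent package on the first boot after migration.
packages:
  - qemu-guest-agent
# Start the agent now and enable it for subsequent boots.
runcmd:
  - [systemctl, enable, --now, qemu-guest-agent]
----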
+
+
+

Warm migration

+
+

Most of the data is copied during the precopy stage while the source virtual machines (VMs) are running.

+
+
+

Then the VMs are shut down and the remaining data is copied during the cutover stage.

+
+
+
Precopy stage
+

The VMs are not shut down during the precopy stage.

+
+
+

The VM disks are copied incrementally by using changed block tracking (CBT) snapshots. The snapshots are created at one-hour intervals by default. You can change the snapshot interval by updating the forklift-controller deployment.

+
+
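For example, rather than editing the deployment directly, the interval can be set through the ForkliftController custom resource, which reconciles the deployment. This is a minimal sketch: the field name controller_precopy_interval (a value in minutes) and the konveyor-forklift namespace are assumptions here, so verify both against your installation before using it.

----
# Set the precopy snapshot interval to 30 minutes (the default is 60).
$ oc patch forkliftcontroller/forklift-controller \
    -n konveyor-forklift \
    --type merge \
    -p '{"spec": {"controller_precopy_interval": 30}}'
----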
+ + + + + +
+ + +
+

You must enable CBT for each source VM and each VM disk. A sketch of one way to do this appears after this note.

+
+
+

A VM can support up to 28 CBT snapshots. If the source VM has too many CBT snapshots and the Migration Controller service is not able to create a new snapshot, warm migration might fail. The Migration Controller service deletes each snapshot when the snapshot is no longer required.

+
+
+
+
+
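As a sketch of enabling CBT without the vSphere UI, the relevant advanced configuration parameters can be set with the govc CLI while the VM is powered off. The VM name my-vm is a placeholder, and scsi0:0 refers to the first disk; verify the parameter keys against your vSphere version.

----
# Enable CBT at the VM level (the VM must be powered off).
$ govc vm.change -vm my-vm -e ctkEnabled=TRUE
# Enable CBT for the first VM disk; repeat for scsi0:1, scsi1:0, and so on.
$ govc vm.change -vm my-vm -e scsi0:0.ctkEnabled=TRUE
----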

The precopy stage runs until the cutover stage is started manually or is scheduled to start.

+
+
+
Cutover stage
+

The VMs are shut down during the cutover stage and the remaining data is migrated. Data stored in RAM is not migrated.

+
+
+

You can start the cutover stage manually by using the Forklift console or you can schedule a cutover time in the Migration manifest.

+
+
+
+

Advantages and disadvantages of cold and warm migrations

+
+
Overview
+
+

Both cold migration and warm migration have advantages and disadvantages, as described in the table that follows:

+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Advantages and disadvantages of cold and warm migrations
Cold migrationWarm migration

Duration

Correlates to the amount of data on the disks

Correlates to the amount of data on the disks and VM utilization

Data transferred

Approximate sum of all disks

Approximate sum of all disks, plus changes driven by VM utilization

VM downtime

High

Low

+
+
+
Detailed description
+
+

The table that follows offers a more detailed description of the advantages and disadvantages of each type of migration. It assumes that you have installed Red Hat Enterprise Linux (RHEL) 9 on the OKD platform on which you installed Forklift.

+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + +
Table 2. Detailed description of advantages and disadvantages
Cold migrationWarm migration

Fail fast

Each VM is converted to be compatible with OKD and, if the conversion is successful, the VM is transferred. If a VM cannot be converted, the migration fails immediately.

For each VM, Forklift creates a snapshot and transfers it to OKD. When you start the cutover, Forklift creates the last snapshot, transfers it, and then converts the VM.

Tools

Forklift only.

Forklift and CDI from KubeVirt.

Parallelism

Disks must be transferred sequentially.

Disks can be transferred in parallel using different pods.

+
+ + + + + +
+ + +
+

The preceding table describes the situation for running VMs, because the main benefit of warm migration is the reduced downtime, and there is no reason to initiate warm migration for VMs that are down. However, performing warm migration for VMs that are down is not the same as cold migration, even though Forklift uses virt-v2v and RHEL 9 in both cases. For VMs that are down, Forklift transfers the disks by using CDI, unlike in cold migration.

+
+
+
+
+ + + + + +
+ + +
+

When importing from VMware, additional factors impact the migration speed, such as limits related to ESXi, vSphere, or VDDK.

+
+
+
+
+
+
Conclusions
+
+

Based on the preceding information, we can draw the following conclusions about cold migration vs. warm migration:

+
+
+
    +
  • +

    The shortest downtime of VMs can be achieved by using warm migration.

    +
  • +
  • +

    The shortest duration for VMs with a large amount of data on a single disk can be achieved by using cold migration.

    +
  • +
  • +

    The shortest duration for VMs with a large amount of data that is spread evenly across multiple disks can be achieved by using warm migration.

    +
  • +
+
+
+
+
+
+
+
+

Prerequisites

+
+
+

Review the following prerequisites to ensure that your environment is prepared for migration.

+
+
+

Software requirements

+
+

You must install compatible versions of OKD and KubeVirt.

+
+
+
+

Storage support and default modes

+
+

Forklift uses the following default volume and access modes for supported storage.

+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 3. Default volume and access modes
ProvisionerVolume modeAccess mode

kubernetes.io/aws-ebs

Block

ReadWriteOnce

kubernetes.io/azure-disk

Block

ReadWriteOnce

kubernetes.io/azure-file

Filesystem

ReadWriteMany

kubernetes.io/cinder

Block

ReadWriteOnce

kubernetes.io/gce-pd

Block

ReadWriteOnce

kubernetes.io/hostpath-provisioner

Filesystem

ReadWriteOnce

manila.csi.openstack.org

Filesystem

ReadWriteMany

openshift-storage.cephfs.csi.ceph.com

Filesystem

ReadWriteMany

openshift-storage.rbd.csi.ceph.com

Block

ReadWriteOnce

kubernetes.io/rbd

Block

ReadWriteOnce

kubernetes.io/vsphere-volume

Block

ReadWriteOnce

+
+ + + + + +
+ + +
+

If the KubeVirt storage does not support dynamic provisioning, you must apply the following settings:

+
+
+
    +
  • +

    Filesystem volume mode

    +
    +

    Filesystem volume mode is slower than Block volume mode.

    +
    +
  • +
  • +

    ReadWriteOnce access mode

    +
    +

    ReadWriteOnce access mode does not support live virtual machine migration.

    +
    +
  • +
+
+
+

See Enabling a statically-provisioned storage class for details on editing the storage profile.

+
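For example, a minimal sketch that applies both settings to a storage profile by patching the CDI StorageProfile resource, which is named after the storage class (the storage class name is a placeholder):

$ kubectl patch storageprofile <storage_class_name> --type merge \
    -p '{"spec": {"claimPropertySets": [{"accessModes": ["ReadWriteOnce"], "volumeMode": "Filesystem"}]}}'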
+
+
+
+ + + + + +
+ + +
+

If your migration uses block storage and persistent volumes created with an EXT4 file system, increase the file system overhead in CDI to more than 10%. The default overhead that CDI assumes does not completely account for the space reserved for the root partition. If you do not increase the file system overhead in CDI by this amount, your migration might fail.

+
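A minimal sketch of raising the global file system overhead to 15% through the CDI custom resource (assuming the cluster-scoped CDI resource is named cdi, the default for a KubeVirt/CDI deployment; the value is a string representing a fraction):

$ kubectl patch cdi cdi --type merge \
    -p '{"spec": {"config": {"filesystemOverhead": {"global": "0.15"}}}}'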
+
+
+
+ + + + + +
+ + +
+

When migrating from OpenStack, or running a cold migration from oVirt to the OKD cluster that Forklift is deployed on, the migration allocates persistent volumes without CDI. In these cases, you might need to adjust the file system overhead.

+
+
+

If the configured file system overhead, which has a default value of 10%, is too low, the disk transfer will fail due to lack of space. In that case, increase the file system overhead.

+
+
+

In some cases, however, you might want to decrease the file system overhead to reduce storage consumption.

+
+
+

You can change the file system overhead by changing the value of controller_filesystem_overhead in the spec section of the ForkliftController CR, as described in Configuring the Forklift Operator.

+
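For example, a sketch of the relevant excerpt of the ForkliftController CR, raising the overhead to 15% (the value is illustrative):

spec:
  controller_filesystem_overhead: 15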
+
+
+
+
+

Network prerequisites

+
+

The following prerequisites apply to all migrations:

+
+
+
    +
  • +

    IP addresses, VLANs, and other network configuration settings must not be changed before or during migration. The MAC addresses of the virtual machines are preserved during migration.

    +
  • +
  • +

    The network connections between the source environment, the KubeVirt cluster, and the replication repository must be reliable and uninterrupted.

    +
  • +
  • +

    If you are mapping more than one source and destination network, you must create a network attachment definition for each additional destination network, as shown in the sketch after this list.

    +
  • +
+
+
+
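For example, a minimal sketch of a network attachment definition for an additional destination network (the name, namespace, bridge, and IPAM settings are assumptions to adapt to your environment):

$ cat << EOF | kubectl apply -f -
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: migration-net
  namespace: <target_namespace>
spec:
  config: '{ "cniVersion": "0.3.1", "name": "migration-net", "type": "bridge", "bridge": "br1", "ipam": { "type": "dhcp" } }'
EOF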

Ports

+
+

The firewalls must enable traffic over the following ports:

+
+ + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 4. Network ports required for migrating from VMware vSphere
PortProtocolSourceDestinationPurpose

443

TCP

OpenShift nodes

VMware vCenter

+

VMware provider inventory

+
+
+

Disk transfer authentication

+

443

TCP

OpenShift nodes

VMware ESXi hosts

+

Disk transfer authentication

+

902

TCP

OpenShift nodes

VMware ESXi hosts

+

Disk transfer data copy

+
+ + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 5. Network ports required for migrating from oVirt
PortProtocolSourceDestinationPurpose

443

TCP

OpenShift nodes

oVirt Engine

+

oVirt provider inventory

+
+
+

Disk transfer authentication

+

443

TCP

OpenShift nodes

oVirt hosts

+

Disk transfer authentication

+

54322

TCP

OpenShift nodes

oVirt hosts

+

Disk transfer data copy

+
+
+
+
+

Source virtual machine prerequisites

+
+

The following prerequisites apply to all migrations:

+
+
+
    +
  • +

    ISO/CDROM disks must be unmounted.

    +
  • +
  • +

    Each NIC must contain one IPv4 and/or one IPv6 address.

    +
  • +
  • +

    The operating system of a VM must be certified and supported as a guest operating system with KubeVirt.

    +
  • +
  • +

    The name of a VM must not contain a period (.). Forklift changes any period in a VM name to a dash (-).

    +
  • +
  • +

    The name of a VM must not be the same as any other VM in the KubeVirt environment.

    +
    + + + + + +
    + + +
    +

    Forklift automatically assigns a new name to a VM that does not comply with the rules.

    +
    +
    +

    Forklift makes the following changes when it automatically generates a new VM name:

    +
    +
    +
      +
    • +

      Excluded characters are removed.

      +
    • +
    • +

      Uppercase letters are switched to lowercase letters.

      +
    • +
    • +

      Any underscore (_) is changed to a dash (-).

      +
    • +
    +
    +
    +

    This feature allows a migration to proceed smoothly even if someone enters a VM name that does not follow the rules. For example, a VM named My_VM.1 would be renamed my-vm-1.

    +
    +
    +
    +
  • +
+
+
+
VMs with Secure Boot enabled might not be migrated automatically
+

Virtual machines (VMs) with Secure Boot enabled currently might not be migrated automatically. This is because Secure Boot, a security standard developed by members of the PC industry to ensure that a device boots using only software that is trusted by the Original Equipment Manufacturer (OEM), would prevent the VMs from booting on the destination provider. 

+
+
+

Workaround: The current workaround is to disable Secure Boot on the destination. For more details, see Disabling Secure Boot. (MTV-1548)

+
+
+
Windows VMs which are using Measured Boot cannot be migrated
+

Microsoft Windows virtual machines (VMs) that use the Measured Boot feature cannot be migrated, because Measured Boot is a mechanism to prevent any kind of device change by checking each start-up component, from the firmware all the way to the boot driver.

+
+
+

The alternative to migration is to re-create the Windows VM directly on KubeVirt.

+
+
+
+

oVirt prerequisites

+
+

The following prerequisites apply to oVirt migrations:

+
+
+
    +
  • +

    To create a source provider, you must have at least the UserRole and ReadOnlyAdmin roles assigned to you. These are the minimum required permissions; however, any other administrator or superuser permissions also work.

    +
  • +
+
+
+ + + + + +
+ + +
+

You must keep the UserRole and ReadOnlyAdmin roles until the virtual machines of the source provider have been migrated. Otherwise, the migration will fail.

+
+
+
+
+
    +
  • +

    To migrate virtual machines:

    +
    +
      +
    • +

      You must have one of the following:

      +
      +
        +
      • +

        oVirt admin permissions. These permissions allow you to migrate any virtual machine in the system.

        +
      • +
      • +

        DiskCreator and UserVmManager permissions on every virtual machine you want to migrate.

        +
      • +
      +
      +
    • +
    • +

      You must use a compatible version of oVirt.

      +
    • +
    • +

      You must have the Engine CA certificate, unless it was replaced by a third-party certificate, in which case, specify the Engine Apache CA certificate.

      +
      +

      You can obtain the Engine CA certificate by navigating to https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA in a browser, or from the command line, as shown in the sketch after this list.

      +
      +
    • +
    • +

      If you are migrating a virtual machine with a direct LUN disk, ensure that the nodes in the KubeVirt destination cluster that the VM is expected to run on can access the backend storage.

      +
    • +
    +
    +
  • +
+
+
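A sketch of fetching the certificate from the command line instead (assuming curl is available; -k skips validation because the CA is not yet trusted):

$ curl -k -o ca.pem 'https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'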
+ + + + + +
+ + +
+
    +
  • +

    Unlike disk images that are copied from a source provider to a target provider, LUNs are detached, but not removed, from virtual machines in the source provider and then attached to the virtual machines (VMs) that are created in the target provider.

    +
  • +
  • +

    LUNs are not removed from the source provider during the migration in case fallback to the source provider is required. However, before re-attaching the LUNs to VMs in the source provider, ensure that the LUNs are not being used by VMs in the target environment at the same time, because concurrent use might lead to data corruption.

    +
  • +
+
+
+
+
+
+

OpenStack prerequisites

+
+

The following prerequisites apply to OpenStack migrations:

+
+
+ +
+
+

Additional authentication methods for migrations with OpenStack source providers

+
+

Forklift versions 2.6 and later support the following authentication methods for migrations with OpenStack source providers in addition to the standard username and password credential set:

+
+
+
    +
  • +

    Token authentication

    +
  • +
  • +

    Application credential authentication

    +
  • +
+
+
+

You can use these methods to migrate virtual machines with OpenStack source providers using the CLI the same way you migrate other virtual machines, except for how you prepare the Secret manifest.

+
+
+
Using token authentication with an OpenStack source provider
+
+

You can use token authentication, instead of username and password authentication, when you create an OpenStack source provider.

+
+
+

Forklift supports both of the following types of token authentication:

+
+
+
    +
  • +

    Token with user ID

    +
  • +
  • +

    Token with user name

    +
  • +
+
+
+

For each type of token authentication, you need to use data from OpenStack to create a Secret manifest.

+
+
+
Prerequisites
+

You have an OpenStack account.

+
+
+
Procedure
+
    +
  1. +

    In the dashboard of the OpenStack web console, click Project > API Access.

    +
  2. +
  3. +

    Expand Download OpenStack RC file and click OpenStack RC file.

    +
    +

    The file that is downloaded, referred to here as <openstack_rc_file>, includes the following fields used for token authentication:

    +
    +
    +
    +
    OS_AUTH_URL
    +OS_PROJECT_ID
    +OS_PROJECT_NAME
    +OS_DOMAIN_NAME
    +OS_USERNAME
    +
    +
    +
  4. +
  5. +

    To get the data needed for token authentication, run the following command:

    +
    +
    +
    $ openstack token issue
    +
    +
    +
    +

    The output, referred to here as <openstack_token_output>, includes the token, userID, and projectID that you need for authentication using a token with user ID.

    +
    +
  6. +
  7. +

    Create a Secret manifest similar to the following:

    +
    +
      +
    • +

      For authentication using a token with user ID:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-tokenid
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: token
      +  token: <token_from_openstack_token_output>
      +  projectID: <projectID_from_openstack_token_output>
      +  userID: <userID_from_openstack_token_output>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    • +

      For authentication using a token with user name:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-tokenname
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: token
      +  token: <token_from_openstack_token_output>
      +  domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
      +  projectName: <OS_PROJECT_NAME_from_openstack_rc_file>
      +  username: <OS_USERNAME_from_openstack_rc_file>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    +
    +
  8. +
  9. +

    Continue migrating your virtual machine according to the procedure in Migrating virtual machines, starting with step 2, "Create a Provider manifest for the source provider."

    +
  10. +
+
+
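For reference, a minimal sketch of the Provider manifest created in step 2 of that procedure, assuming the token Secret defined above (the provider name is a placeholder):

$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <openstack_provider_name>
  namespace: openshift-mtv
spec:
  type: openstack
  url: <OS_AUTH_URL_from_openstack_rc_file>
  secret:
    name: openstack-secret-tokenid
    namespace: openshift-mtv
EOF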
+
+
Using application credential authentication with an OpenStack source provider
+
+

You can use application credential authentication, instead of username and password authentication, when you create an OpenStack source provider.

+
+
+

Forklift supports both of the following types of application credential authentication:

+
+
+
    +
  • +

    Application credential ID

    +
  • +
  • +

    Application credential name

    +
  • +
+
+
+

For each type of application credential authentication, you need to use data from OpenStack to create a Secret manifest.

+
+
+
Prerequisites
+

You have an OpenStack account.

+
+
+
Procedure
+
    +
  1. +

    In the dashboard of the OpenStack web console, click Project > API Access.

    +
  2. +
  3. +

    Expand Download OpenStack RC file and click OpenStack RC file.

    +
    +

    The file that is downloaded, referred to here as <openstack_rc_file>, includes the following fields used for application credential authentication:

    +
    +
    +
    +
    OS_AUTH_URL
    +OS_PROJECT_ID
    +OS_PROJECT_NAME
    +OS_DOMAIN_NAME
    +OS_USERNAME
    +
    +
    +
  4. +
  5. +

    To get the data needed for application credential authentication, run the following command:

    +
    +
    +
    $ openstack application credential create --role member --role reader --secret redhat forklift
    +
    +
    +
    +

    The output, referred to here as <openstack_credential_output>, includes:

    +
    +
    +
      +
    • +

      The id and secret that you need for authentication using an application credential ID

      +
    • +
    • +

      The name and secret that you need for authentication using an application credential name

      +
    • +
    +
    +
  6. +
  7. +

    Create a Secret manifest similar to the following:

    +
    +
      +
    • +

      For authentication using the application credential ID:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-appid
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: applicationcredential
      +  applicationCredentialID: <id_from_openstack_credential_output>
      +  applicationCredentialSecret: <secret_from_openstack_credential_output>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    • +

      For authentication using the application credential name:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-appname
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: applicationcredential
      +  applicationCredentialName: <name_from_openstack_credential_output>
      +  applicationCredentialSecret: <secret_from_openstack_credential_output>
      +  domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
      +  username: <OS_USERNAME_from_openstack_rc_file>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    +
    +
  8. +
  9. +

    Continue migrating your virtual machine according to the procedure in Migrating virtual machines, starting with step 2, "Create a Provider manifest for the source provider."

    +
  10. +
+
+
+
+
+
+

VMware prerequisites

+
+

It is strongly recommended to create a VDDK image to accelerate migrations. For more information, see Creating a VDDK image.

+
+
+

The following prerequisites apply to VMware migrations:

+
+
+
    +
  • +

    You must use a compatible version of VMware vSphere.

    +
  • +
  • +

    You must be logged in as a user with at least the minimal set of VMware privileges.

    +
  • +
  • +

    To access the virtual machine using a pre-migration hook, VMware Tools must be installed on the source virtual machine.

    +
  • +
  • +

    The VM operating system must be certified and supported for use as a guest operating system with KubeVirt and for conversion to KVM with virt-v2v.

    +
  • +
  • +

    If you are running a warm migration, you must enable changed block tracking (CBT) on the VMs and on the VM disks.

    +
  • +
  • +

    If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host.

    +
  • +
  • +

    It is strongly recommended to disable hibernation because Forklift does not support migrating hibernated VMs.

    +
  • +
+
+
+ + + + + +
+ + +
+

In the event of a power outage, data might be lost for a VM with disabled hibernation. However, if hibernation is not disabled, the migration will fail.

+
+
+
+
+ + + + + +
+ + +
+

Neither Forklift nor OpenShift Virtualization supports conversion of Btrfs file systems when migrating VMs from VMware.

+
+
+
+

VMware privileges

+
+

The following minimal set of VMware privileges is required to migrate virtual machines to KubeVirt with Forklift.

+
+ + ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 6. VMware privileges
PrivilegeDescription

Virtual machine.Interaction privileges:

Virtual machine.Interaction.Power Off

Allows powering off a powered-on virtual machine. This operation powers down the guest operating system.

Virtual machine.Interaction.Power On

Allows powering on a powered-off virtual machine and resuming a suspended virtual machine.

Virtual machine.Guest operating system management by VIX API

Allows managing a virtual machine by the VMware VIX API.

+

Virtual machine.Provisioning privileges:

+
+
+ + + + + +
+ + +
+

All Virtual machine.Provisioning privileges are required.

+
+
+

Virtual machine.Provisioning.Allow disk access

Allows opening a disk on a virtual machine for random read and write access. Used mostly for remote disk mounting.

Virtual machine.Provisioning.Allow file access

Allows operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM.

Virtual machine.Provisioning.Allow read-only disk access

Allows opening a disk on a virtual machine for random read access. Used mostly for remote disk mounting.

Virtual machine.Provisioning.Allow virtual machine download

Allows read operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM.

Virtual machine.Provisioning.Allow virtual machine files upload

Allows write operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM.

Virtual machine.Provisioning.Clone template

Allows cloning of a template.

Virtual machine.Provisioning.Clone virtual machine

Allows cloning of an existing virtual machine and allocation of resources.

Virtual machine.Provisioning.Create template from virtual machine

Allows creation of a new template from a virtual machine.

Virtual machine.Provisioning.Customize guest

Allows customization of a virtual machine’s guest operating system without moving the virtual machine.

Virtual machine.Provisioning.Deploy template

Allows deployment of a virtual machine from a template.

Virtual machine.Provisioning.Mark as template

Allows marking an existing powered-off virtual machine as a template.

Virtual machine.Provisioning.Mark as virtual machine

Allows marking an existing template as a virtual machine.

Virtual machine.Provisioning.Modify customization specification

Allows creation, modification, or deletion of customization specifications.

Virtual machine.Provisioning.Promote disks

Allows promote operations on a virtual machine’s disks.

Virtual machine.Provisioning.Read customization specifications

Allows reading a customization specification.

Virtual machine.Snapshot management privileges:

Virtual machine.Snapshot management.Create snapshot

Allows creation of a snapshot from the virtual machine’s current state.

Virtual machine.Snapshot management.Remove Snapshot

Allows removal of a snapshot from the snapshot history.

Datastore privileges:

Datastore.Browse datastore

Allows exploring the contents of a datastore.

Datastore.Low level file operations

Allows performing low-level file operations - read, write, delete, and rename - in a datastore.

Sessions privileges:

Sessions.Validate session

Allows verification of the validity of a session.

Cryptographic privileges:

Cryptographic.Decrypt

Allows decryption of an encrypted virtual machine.

Cryptographic.Direct access

Allows access to encrypted resources.

+
+

Creating a VDDK image

+
+

Forklift can use the VMware Virtual Disk Development Kit (VDDK) SDK to accelerate transferring virtual disks from VMware vSphere.

+
+
+ + + + + +
+ + +
+

Creating a VDDK image, although optional, is highly recommended.

+
+
+
+
+

To make use of this feature, you download the VMware Virtual Disk Development Kit (VDDK), build a VDDK image, and push the VDDK image to your image registry.

+
+
+

The VDDK package contains symbolic links. Therefore, the procedure for creating a VDDK image must be performed on a file system that preserves symbolic links (symlinks).

+
+
+ + + + + +
+ + +
+

Storing the VDDK image in a public registry might violate the VMware license terms.

+
+
+
+
+
Prerequisites
+
    +
  • +

    OKD image registry.

    +
  • +
  • +

    podman installed.

    +
  • +
  • +

    You are working on a file system that preserves symbolic links (symlinks).

    +
  • +
  • +

    If you are using an external registry, KubeVirt must be able to access it.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Create and navigate to a temporary directory:

    +
    +
    +
    $ mkdir /tmp/<dir_name> && cd /tmp/<dir_name>
    +
    +
    +
  2. +
  3. +

    In a browser, navigate to the VMware VDDK version 8 download page.

    +
  4. +
  5. +

    Select version 8.0.1 and click Download.

    +
  6. +
+
+
+ + + + + +
+ + +
+

To migrate to KubeVirt 4.12, download VDDK version 7.0.3.2 from the VMware VDDK version 7 download page.

+
+
+
+
+
    +
  1. +

    Save the VDDK archive file in the temporary directory.

    +
  2. +
  3. +

    Extract the VDDK archive:

    +
    +
    +
    $ tar -xzf VMware-vix-disklib-<version>.x86_64.tar.gz
    +
    +
    +
  4. +
  5. +

    Create a Dockerfile:

    +
    +
    +
    $ cat > Dockerfile <<EOF
    +FROM registry.access.redhat.com/ubi8/ubi-minimal
    +USER 1001
    +COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
    +RUN mkdir -p /opt
    +ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]
    +EOF
    +
    +
    +
  6. +
  7. +

    Build the VDDK image:

    +
    +
    +
    $ podman build . -t <registry_route_or_server_path>/vddk:<tag>
    +
    +
    +
  8. +
  9. +

    Push the VDDK image to the registry:

    +
    +
    +
    $ podman push <registry_route_or_server_path>/vddk:<tag>
    +
    +
    +
  10. +
  11. +

    Ensure that the image is accessible to your KubeVirt environment.

    +
  12. +
+
+
+
+

Increasing the NFC service memory of an ESXi host

+
+

If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host. Otherwise, the migration will fail because the NFC service memory is limited to 10 parallel connections.

+
+
+
Procedure
+
    +
  1. +

    Log in to the ESXi host as root.

    +
  2. +
  3. +

    Change the value of maxMemory to 1000000000 in /etc/vmware/hostd/config.xml:

    +
    +
    +
    ...
    +      <nfcsvc>
    +         <path>libnfcsvc.so</path>
    +         <enabled>true</enabled>
    +         <maxMemory>1000000000</maxMemory>
    +         <maxStreamMemory>10485760</maxStreamMemory>
    +      </nfcsvc>
    +...
    +
    +
    +
  4. +
  5. +

    Restart hostd:

    +
    +
    +
    # /etc/init.d/hostd restart
    +
    +
    +
    +

    You do not need to reboot the host.

    +
    +
  6. +
+
+
+
+
+

Open Virtual Appliance (OVA) prerequisites

+
+

The following prerequisites apply to Open Virtual Appliance (OVA) file migrations:

+
+
+
    +
  • +

    All OVA files are created by VMware vSphere.

    +
  • +
+
+
+ + + + + +
+ + +
+

Migration of OVA files that were not created by VMware vSphere, but are compatible with vSphere, might succeed. However, Forklift does not support the migration of such files; it supports only OVA files created by VMware vSphere.

+
+
+
+
+
    +
  • +

    The OVA files are in one or more folders under an NFS shared directory in one of the following structures:

    +
    +
      +
    • +

      In one or more compressed Open Virtualization Format (OVF) packages that hold all the VM information.

      +
      +

      The filename of each compressed package must have the .ova extension. Several compressed packages can be stored in the same folder.

      +
      +
      +

      When this structure is used, Forklift scans the root folder and the first-level subfolders for compressed packages.

      +
      +
      +

      For example, if the NFS share is /nfs, then:
      +The folder /nfs is scanned.
      +The folder /nfs/subfolder1 is scanned.
      +But, /nfs/subfolder1/subfolder2 is not scanned.

      +
      +
    • +
    • +

      In extracted OVF packages.

      +
      +

      When this structure is used, Forklift scans the root folder, first-level subfolders, and second-level subfolders for extracted OVF packages. +However, there can be only one .ovf file in a folder. Otherwise, the migration will fail.

      +
      +
      +

      For example, if the NFS share is /nfs, then:
      +The OVF file /nfs/vm.ovf is scanned.
      +The OVF file /nfs/subfolder1/vm.ovf is scanned.
      +The OVF file /nfs/subfolder1/subfolder2/vm.ovf is scanned.
      +But, the OVF file /nfs/subfolder1/subfolder2/subfolder3/vm.ovf is not scanned.

      +
      +
    • +
    +
    +
  • +
+
+
+
+

Software compatibility guidelines

+
+

You must install compatible software versions.

+
+ + ++++++++ + + + + + + + + + + + + + + + + + + + + +
Table 7. Compatible software versions
ForkliftOKDKubeVirtVMware vSphereoVirtOpenStack

2.3.0

4.10 or later

4.10 or later

6.5 or later

4.4 SP1 or later

16.1 or later

+
+ + + + + +
+ + +
Migration from oVirt 4.3
+
+

Forklift was tested only with oVirt (RHV) 4.4 SP1. Migration from oVirt 4.3 has not been fully tested with Forklift 2.3; however, basic migrations from oVirt 4.3.11 were tested and are expected to work in many environments, even though such migrations are not supported.

+
+
+

It is therefore recommended to upgrade oVirt Manager (RHVM) to the supported version mentioned above before migrating to KubeVirt.

+
+
+


+
+
+


+
+
+
+
+

OpenShift Operator Life Cycles

+
+

For more information about the software maintenance Life Cycle classifications for Operators shipped by Red Hat for use with OpenShift Container Platform, see OpenShift Operator Life Cycles.

+
+
+
+
+
+
+

Installing and configuring the Forklift Operator

+
+
+

You can install the Forklift Operator by using the OKD web console or the command line interface (CLI).

+
+
+

In Forklift version 2.4 and later, the Forklift Operator includes the Forklift plugin for the OKD web console.

+
+
+

After you install the Forklift Operator by using either the OKD web console or the CLI, you can configure the Operator.

+
+
+

Installing the Forklift Operator by using the OKD web console

+
+

You can install the Forklift Operator by using the OKD web console.

+
+
+
Prerequisites
+
    +
  • +

    OKD 4.10 or later installed.

    +
  • +
  • +

    KubeVirt Operator installed on an OpenShift migration target cluster.

    +
  • +
  • +

    You must be logged in as a user with cluster-admin permissions.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Operators > OperatorHub.

    +
  2. +
  3. +

    Use the Filter by keyword field to search for forklift-operator.

    +
    + + + + + +
    + + +
    +

    The Forklift Operator is a Community Operator. Red Hat does not support Community Operators.

    +
    +
    +
    +
  4. +
  5. +

    Click Migration Toolkit for Virtualization Operator and then click Install.

    +
  6. +
  7. +

    Click Create ForkliftController when the button becomes active.

    +
  8. +
  9. +

    Click Create.

    +
    +

    Your ForkliftController appears in the list that is displayed.

    +
    +
  10. +
  11. +

    Click Workloads > Pods to verify that the Forklift pods are running.

    +
  12. +
  13. +

    Click Operators > Installed Operators to verify that Migration Toolkit for Virtualization Operator appears in the konveyor-forklift project with the status Succeeded.

    +
    +

    When the plugin is ready, you are prompted to reload the page. The Migration menu item is automatically added to the navigation bar, displayed on the left of the OKD web console.

    +
    +
  14. +
+
+
+
+

Installing the Forklift Operator from the command line interface

+
+

You can install the Forklift Operator from the command line interface (CLI).

+
+
+
Prerequisites
+
    +
  • +

    OKD 4.10 or later installed.

    +
  • +
  • +

    KubeVirt Operator installed on an OpenShift migration target cluster.

    +
  • +
  • +

    You must be logged in as a user with cluster-admin permissions.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Create the konveyor-forklift project:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: project.openshift.io/v1
    +kind: Project
    +metadata:
    +  name: konveyor-forklift
    +EOF
    +
    +
    +
  2. +
  3. +

    Create an OperatorGroup CR called migration:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: operators.coreos.com/v1
    +kind: OperatorGroup
    +metadata:
    +  name: migration
    +  namespace: konveyor-forklift
    +spec:
    +  targetNamespaces:
    +    - konveyor-forklift
    +EOF
    +
    +
    +
  4. +
  5. +

    Create a Subscription CR for the Operator:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: operators.coreos.com/v1alpha1
    +kind: Subscription
    +metadata:
    +  name: forklift-operator
    +  namespace: konveyor-forklift
    +spec:
    +  channel: development
    +  installPlanApproval: Automatic
    +  name: forklift-operator
    +  source: community-operators
    +  sourceNamespace: openshift-marketplace
    +  startingCSV: "konveyor-forklift-operator.2.3.0"
    +EOF
    +
    +
    +
  6. +
  7. +

    Create a ForkliftController CR:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: ForkliftController
    +metadata:
    +  name: forklift-controller
    +  namespace: konveyor-forklift
    +spec:
    +  olm_managed: true
    +EOF
    +
    +
    +
  8. +
  9. +

    Verify that the Forklift pods are running:

    +
    +
    +
    $ kubectl get pods -n konveyor-forklift
    +
    +
    +
    +
    Example output
    +
    +
    NAME                                                    READY   STATUS    RESTARTS   AGE
    +forklift-api-bb45b8db4-cpzlg                            1/1     Running   0          6m34s
    +forklift-controller-7649db6845-zd25p                    2/2     Running   0          6m38s
    +forklift-must-gather-api-78fb4bcdf6-h2r4m               1/1     Running   0          6m28s
    +forklift-operator-59c87cfbdc-pmkfc                      1/1     Running   0          28m
    +forklift-ui-plugin-5c5564f6d6-zpd85                     1/1     Running   0          6m24s
    +forklift-validation-7d84c74c6f-fj9xg                    1/1     Running   0          6m30s
    +forklift-volume-populator-controller-85d5cb64b6-mrlmc   1/1     Running   0          6m36s
    +
    +
    +
  10. +
+
+
+
+

Configuring the Forklift Operator

+
+

You can configure all of the following settings of the Forklift Operator by modifying the ForkliftController CR, or in the Settings section of the Overview page, unless otherwise indicated.

+
+
+
    +
  • +

    Maximum number of virtual machines (VMs) per plan that can be migrated simultaneously.

    +
  • +
  • +

    How long must-gather reports are retained before they are automatically deleted.

    +
  • +
  • +

    CPU limit allocated to the main controller container.

    +
  • +
  • +

    Memory limit allocated to the main controller container.

    +
  • +
  • +

    Interval at which a new snapshot is requested before initiating a warm migration.

    +
  • +
  • +

    Frequency with which the system checks the status of snapshot creation or removal during a warm migration.

    +
  • +
  • +

    Percentage of space in persistent volumes allocated as file system overhead when the storageclass is filesystem (ForkliftController CR only).

    +
  • +
  • +

    Fixed amount of additional space allocated in persistent block volumes. This setting is applicable for any storageclass that is block-based (ForkliftController CR only).

    +
  • +
  • +

    Configuration map of operating systems to preferences for vSphere source providers (ForkliftController CR only).

    +
  • +
  • +

    Configuration map of operating systems to preferences for oVirt source providers (ForkliftController CR only).

    +
  • +
+
+
+

The procedure for configuring these settings by using the user interface is presented in Configuring MTV settings. The procedure for configuring them by modifying the ForkliftController CR follows.

+
+
+
Procedure
+
    +
  • +

    Change a parameter’s value in the spec portion of the ForkliftController CR by adding the label and value as follows:

    +
  • +
+
+
+
+
spec:
+  label: value (1)
+
+
+
+ + + + + +
1Labels you can configure using the CLI are shown in the table that follows, along with a description of each label and its default value.
+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 8. Forklift Operator labels
LabelDescriptionDefault value

controller_max_vm_inflight

The maximum number of VMs per plan that can be migrated simultaneously.

20

must_gather_api_cleanup_max_age

The duration in hours for retaining must-gather reports before they are automatically deleted.

-1 (disabled)

controller_container_limits_cpu

The CPU limit allocated to the main controller container.

500m

controller_container_limits_memory

The memory limit allocated to the main controller container.

800Mi

controller_precopy_interval

The interval in minutes at which a new snapshot is requested before initiating a warm migration.

60

controller_snapshot_status_check_rate_seconds

The frequency in seconds with which the system checks the status of snapshot creation or removal during a warm migration.

10

controller_filesystem_overhead

Percentage of space in persistent volumes allocated as file system overhead when the storageclass is filesystem.

+

ForkliftController CR only.

10

controller_block_overhead

Fixed amount of additional space allocated in persistent block volumes. This setting is applicable for any storageclass that is block-based. It can be used when data, such as encryption headers, is written to the persistent volumes in addition to the content of the virtual disk.

+

ForkliftController CR only.

0

vsphere_osmap_configmap_name

Configuration map for vSphere source providers. This configuration map maps the operating system of the incoming VM to a KubeVirt preference name. This configuration map needs to be in the namespace where the Forklift Operator is deployed.

+

To see the list of preferences in your KubeVirt environment, open the OpenShift web console and click Virtualization > Preferences.

+

You can add values to the configuration map when this label has the default value, forklift-vsphere-osmap. To override or delete values, specify a configuration map that is different from forklift-vsphere-osmap.

+

ForkliftController CR only.

forklift-vsphere-osmap

ovirt_osmap_configmap_name

Configuration map for oVirt source providers. This configuration map maps the operating system of the incoming VM to a KubeVirt preference name. This configuration map needs to be in the namespace where the Forklift Operator is deployed.

+

To see the list of preferences in your KubeVirt environment, open the OpenShift web console and click Virtualization > Preferences.

+

You can add values to the configuration map when this label has the default value, forklift-ovirt-osmap. To override or delete values, specify a configuration map that is different from forklift-ovirt-osmap.

+

ForkliftController CR only.

forklift-ovirt-osmap

+
+
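For example, a sketch of a ForkliftController CR spec that lowers the number of simultaneous VM migrations per plan and retains must-gather reports for 24 hours (the values are illustrative):

spec:
  controller_max_vm_inflight: 10
  must_gather_api_cleanup_max_age: 24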
+
+
+

Migrating virtual machines by using the OKD web console

+
+
+

You can migrate virtual machines (VMs) by using the OKD web console to:

+
+ +
+ + + + + +
+ + +
+

You must ensure that all prerequisites are met.

+
+
+

VMware only: You must have the minimal set of VMware privileges.

+
+
+

VMware only: Creating a VMware Virtual Disk Development Kit (VDDK) image will increase migration speed.

+
+
+
+
+

The MTV user interface

+
+

The Forklift user interface is integrated into the OKD web console.

+
+
+

In the left-hand panel, you can choose a page related to a component of the migration process, for example, Providers for virtualization, or, if you are an administrator, you can choose Overview, which contains information about migrations and lets you configure Forklift settings.

+
+
+
+Forklift user interface +
+
Figure 1. Forklift extension interface
+
+
+

In pages related to components, you can click on the Projects list, which is in the upper-left portion of the page, and see which projects (namespaces) you are allowed to work with.

+
+
+
    +
  • +

    If you are an administrator, you can see all projects.

    +
  • +
  • +

    If you are a non-administrator, you can see only the projects that you have permissions to work with.

    +
  • +
+
+
+
+

The MTV Overview page

+
+

The Forklift Overview page displays system-wide information about migrations and a list of Settings you can change.

+
+
+

If you have Administrator privileges, you can access the Overview page by clicking Migration > Overview in the OKD web console.

+
+
+

The Overview page has three tabs:

+
+
+
    +
  • +

    Overview

    +
  • +
  • +

    YAML

    +
  • +
  • +

    Metrics

    +
  • +
+
+
+

Overview tab

+
+

The Overview tab lets you see:

+
+
+
    +
  • +

    Operator: The namespace in which the Forklift Operator is deployed and the status of the Operator

    +
  • +
  • +

    Pods: The name, status, and creation time of each pod that was deployed by the Forklift Operator

    +
  • +
  • +

    Conditions: Status of the Forklift Operator:

    +
    +
      +
    • +

      Failure: Last failure. False indicates no failure since deployment.

      +
    • +
    • +

      Running: Whether the Operator is currently running and waiting for the next reconciliation.

      +
    • +
    • +

      Successful: Last successful reconciliation.

      +
    • +
    +
    +
  • +
+
+
+
+

YAML tab

+
+

The YAML tab displays the ForkliftController custom resource that defines the operation of the Forklift Operator. You can modify the custom resource from this tab.

+
+
+
+

Metrics tab

+
+

The Metrics tab lets you see:

+
+
+
    +
  • +

    Migrations: The number of migrations performed using Forklift:

    +
    +
      +
    • +

      Total

      +
    • +
    • +

      Running

      +
    • +
    • +

      Failed

      +
    • +
    • +

      Succeeded

      +
    • +
    • +

      Canceled

      +
    • +
    +
    +
  • +
  • +

    Virtual Machine Migrations: The number of VMs migrated using Forklift:

    +
    +
      +
    • +

      Total

      +
    • +
    • +

      Running

      +
    • +
    • +

      Failed

      +
    • +
    • +

      Succeeded

      +
    • +
    • +

      Canceled

      +
    • +
    +
    +
  • +
+
+
+ + + + + +
+ + +
+

Because a single migration might involve many virtual machines, the number of migrations performed using Forklift might differ significantly from the number of virtual machines that have been migrated using Forklift.

+
+
+
+
+
    +
  • +

    Chart showing the number of running, failed, and succeeded migrations performed using Forklift for each of the last 7 days

    +
  • +
  • +

    Chart showing the number of running, failed, and succeeded virtual machine migrations performed using Forklift for each of the last 7 days

    +
  • +
+
+
+
+
+

Configuring MTV settings

+
+

If you have Administrator privileges, you can access the Overview page and change the following settings in it:

+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 9. Forklift settings
SettingDescriptionDefault value

Max concurrent virtual machine migrations

The maximum number of VMs per plan that can be migrated simultaneously

20

Must gather cleanup after (hours)

The duration for retaining must gather reports before they are automatically deleted

Disabled

Controller main container CPU limit

The CPU limit allocated to the main controller container

500 m

Controller main container Memory limit

The memory limit allocated to the main controller container

800 Mi

Precopy interval (minutes)

The interval at which a new snapshot is requested before initiating a warm migration

60

Snapshot polling interval (seconds)

The frequency with which the system checks the status of snapshot creation or removal during a warm migration

10

+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration > Overview. The Settings list is on the right-hand side of the page.

    +
  2. +
  3. +

    In the Settings list, click the Edit icon of the setting you want to change.

    +
  4. +
  5. +

    Choose a setting from the list.

    +
  6. +
  7. +

    Click Save.

    +
  8. +
+
+
+
+

Adding providers

+
+

You can add source providers and destination providers for a virtual machine migration by using the OKD web console.

+
+
+

Adding source providers

+
+

You can use Forklift to migrate VMs from the following source providers:

+
+
+
    +
  • +

    VMware vSphere

    +
  • +
  • +

    oVirt

    +
  • +
  • +

    OpenStack

    +
  • +
  • +

    Open Virtual Appliances (OVAs) that were created by VMware vSphere

    +
  • +
  • +

    KubeVirt

    +
  • +
+
+
+

You can add a source provider by using the OKD web console.

+
+
+
Adding a VMware vSphere source provider
+
+

You can migrate VMware vSphere VMs from VMware vCenter or from a VMware ESX/ESXi server. In Forklift versions 2.6 and later, you can migrate directly from an ESX/ESXi server, without going through vCenter, by setting the SDK endpoint to that of the ESX/ESXi server.

+
+
+ + + + + +
+ + +
+

EMS enforcement is disabled for migrations with VMware vSphere source providers in order to enable migrations from versions of vSphere that are supported by Forklift but do not comply with the 2023 FIPS requirements. Therefore, users should consider whether migrations from vSphere source providers risk their compliance with FIPS. Supported versions of vSphere are specified in Software compatibility guidelines.

+
+
+
+
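If you prefer to work from the CLI, a Provider manifest along the following lines can be applied instead of the web console procedure that follows (a sketch; the resource names, namespace, and VDDK image path are assumptions, and the referenced Secret must hold the vCenter or ESXi credentials):

$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <vsphere_provider_name>
  namespace: konveyor-forklift
spec:
  type: vsphere
  url: https://vCenter-host-example.com/sdk
  settings:
    vddkInitImage: <registry_route_or_server_path>/vddk:<tag>
  secret:
    name: <vsphere_secret_name>
    namespace: konveyor-forklift
EOF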
+
Prerequisites
+
    +
  • +

    It is strongly recommended to create a VMware Virtual Disk Development Kit (VDDK) image in a secure registry that is accessible to all clusters. A VDDK image accelerates migration and reduces the risk of a plan failing. If you are not using VDDK and a plan fails, retry with VDDK installed. For more information, see Creating a VDDK image.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration > Providers for virtualization.

    +
  2. +
  3. +

    Click Create Provider.

    +
  4. +
  5. +

    Click vSphere.

    +
  6. +
  7. +

    Specify the following fields:

    +
  8. +
+
+
+

Provider Details

+
+
+
    +
  • +

    Provider resource name: Name of the source provider.

    +
  • +
  • +

    Endpoint type: Select the vSphere provider endpoint type. Options: vCenter or ESXi. You can migrate virtual machines from vCenter, from an ESX/ESXi server that is not managed by vCenter, or from an ESX/ESXi server that is managed by vCenter without going through vCenter.

    +
  • +
  • +

    URL: URL of the SDK endpoint of the vCenter on which the source VM is mounted. Ensure that the URL includes the sdk path, usually /sdk. For example, https://vCenter-host-example.com/sdk. If a certificate with an FQDN is specified, the value of this field must match the FQDN in the certificate.

    +
  • +
  • +

    VDDK init image: VDDKInitImage path. It is strongly recommended to create a VDDK init image to accelerate migrations. For more information, see Creating a VDDK image.

    +
  • +
+
+
+

Provider details

+
+
+
    +
  • +

    Username: vCenter user or ESXi user. For example, user@vsphere.local.

    +
  • +
  • +

    Password: vCenter user password or ESXi user password.

    +
    +
      +
    1. +

      Choose one of the following options for validating CA certificates:

      +
      +
        +
      • +

        Use a custom CA certificate: Migrate after validating a custom CA certificate.

        +
      • +
      • +

        Use the system CA certificate: Migrate after validating the system CA certificate.

        +
      • +
      • +

        Skip certificate validation: Migrate without validating a CA certificate.

        +
        +
          +
        1. +

          To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.

          +
        2. +
        3. +

          To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.

          +
        4. +
        5. +

          To skip certificate validation, toggle the Skip certificate validation switch to the right.

          +
        6. +
        +
        +
      • +
      +
      +
    2. +
    3. +

      Optional: Ask Forklift to fetch a custom CA certificate from the provider’s API endpoint URL.

      +
      +
        +
      1. +

        Click Fetch certificate from URL. The Verify certificate window opens.

        +
      2. +
      3. +

        If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.

        +
        +

        Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint.

        +
        +
      4. +
      +
      +
    4. +
    5. +

      Click Create provider to add and save the provider.

      +
      +

      The provider appears in the list of providers.

      +
      +
    6. +
    +
    +
    + + + + + +
    + + +
    +

    It might take a few minutes for the provider to have the status Ready.

    +
    +
    +
    +
  • +
+
+
+
Selecting a migration network for a VMware source provider
+
+

You can select a migration network in the OKD web console for a source provider to reduce risk to the source environment and to improve performance.

+
+
+

Using the default network for migration can result in poor performance because the network might not have sufficient bandwidth. This situation can have a negative effect on the source platform because the disk transfer operation might saturate the network.

+
+
+ + + + + +
+ + +
+

You can also control the network from which disks are transferred from a host by using the Network File Copy (NFC) service in vSphere.

+
+
+
+
+
Prerequisites
+
    +
  • +

    The migration network must have sufficient throughput for disk transfer, with a minimum speed of 10 Gbps.

    +
  • +
  • +

    The migration network must be accessible to the KubeVirt nodes through the default gateway.

    +
    + + + + + +
    + + +
    +

    The source virtual disks are copied by a pod that is connected to the pod network of the target namespace.

    +
    +
    +
    +
  • +
  • +

    The migration network should have jumbo frames enabled.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration > Providers for virtualization.

    +
  2. +
  3. +

    Click the host number in the Hosts column beside a provider to view a list of hosts.

    +
  4. +
  5. +

    Select one or more hosts and click Select migration network.

    +
  6. +
  7. +

    Specify the following fields:

    +
    +
      +
    • +

      Network: Network name

      +
    • +
    • +

      ESXi host admin username: For example, root

      +
    • +
    • +

      ESXi host admin password: Password

      +
    • +
    +
    +
  8. +
  9. +

    Click Save.

    +
  10. +
  11. +

    Verify that the status of each host is Ready.

    +
    +

    If a host status is not Ready, the host might be unreachable on the migration network or the credentials might be incorrect. You can modify the host configuration and save the changes.

    +
    +
  12. +
+
+
+
+
+
Adding an oVirt source provider
+
+

You can add an oVirt source provider by using the OKD web console.

+
+
+
Prerequisites
+
    +
  • +

    Engine CA certificate, unless it was replaced by a third-party certificate, in which case, specify the Engine Apache CA certificate

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration > Providers for virtualization.

    +
  2. +
  3. +

    Click Create Provider.

    +
  4. +
  5. +

    Click Red Hat Virtualization.

    +
  6. +
  7. +

    Specify the following fields:

    +
    +
      +
    • +

      Provider resource name: Name of the source provider.

      +
    • +
    • +

      URL: URL of the API endpoint of the oVirt Manager (RHVM) on which the source VM is mounted. Ensure that the URL includes the path leading to the RHVM API server, usually /ovirt-engine/api. For example, https://rhv-host-example.com/ovirt-engine/api.

      +
    • +
    • +

      Username: Username.

      +
    • +
    • +

      Password: Password.

      +
    • +
    +
    +
  8. +
  9. +

    Choose one of the following options for validating CA certificates:

    +
    +
      +
    • +

      Use a custom CA certificate: Migrate after validating a custom CA certificate.

      +
    • +
    • +

      Use the system CA certificate: Migrate after validating the system CA certificate.

      +
    • +
    • +

      Skip certificate validation: Migrate without validating a CA certificate.

      +
      +
        +
      1. +

        To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.

        +
      2. +
      3. +

        To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.

        +
      4. +
      5. +

        To skip certificate validation, toggle the Skip certificate validation switch to the right.

        +
      6. +
      +
      +
    • +
    +
    +
  10. +
  11. +

    Optional: Ask Forklift to fetch a custom CA certificate from the provider’s API endpoint URL.

    +
    +
      +
    1. +

      Click Fetch certificate from URL. The Verify certificate window opens.

      +
    2. +
    3. +

      If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.

      +
      +

      Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint.

      +
      +
    4. +
    +
    +
  12. +
  13. +

    Click Create provider to add and save the provider.

    +
    +

    The provider appears in the list of providers.

    +
    +
  14. +
+
+
+
+
Adding an OpenStack source provider
+
+

You can add an OpenStack source provider by using the OKD web console.

+
+
+ + + + + +
+ + +
+

When you migrate an image-based VM from an OpenStack provider, a snapshot is created for the image that is attached to the source VM and the data from the snapshot is copied over to the target VM. This means that the target VM will have the same state as that of the source VM at the time the snapshot was created.

+
+
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration > Providers for virtualization.

    +
  2. +
  3. +

    Click Create Provider.

    +
  4. +
  5. +

    Click OpenStack.

    +
  6. +
  7. +

    Specify the following fields:

    +
    +
      +
    • +

      Provider resource name: Name of the source provider.

      +
    • +
    • +

      URL: URL of the OpenStack Identity (Keystone) endpoint. For example, http://controller:5000/v3.

      +
    • +
    • +

      Authentication type: Choose one of the following methods of authentication and supply the information related to your choice. For example, if you choose Application credential ID as the authentication type, the Application credential ID and the Application credential secret fields become active, and you need to supply the ID and the secret.

      +
      +
        +
      • +

        Application credential ID

        +
        + +
        +
      • +
      • +

        Application credential name

        +
        +
          +
        • +

          Application credential name: OpenStack application credential name

          +
        • +
        • +

          Application credential secret: : OpenStack application credential Secret

          +
        • +
        • +

          Username: OpenStack username

          +
        • +
        • +

          Domain: OpenStack domain name

          +
        • +
        +
        +
      • +
      • +

        Token with user ID

        +
        +
          +
        • +

          Token: OpenStack token

          +
        • +
        • +

          User ID: OpenStack user ID

          +
        • +
        • +

          Project ID: OpenStack project ID

          +
        • +
        +
        +
      • +
      • +

        Token with user Name

        +
        +
          +
        • +

          Token: OpenStack token

          +
        • +
        • +

          Username: OpenStack username

          +
        • +
        • +

          Project: OpenStack project

          +
        • +
        • +

          Domain name: OpenStack domain name

          +
        • +
        +
        +
      • +
      • +

        Password

        +
        +
          +
        • +

          Username: OpenStack username

          +
        • +
        • +

          Password: OpenStack password

          +
        • +
        • +

          Project: OpenStack project

          +
        • +
        • +

          Domain: OpenStack domain name

          +
        • +
        +
        +
      • +
      +
      +
    • +
    +
    +
  8. +
  9. +

    Choose one of the following options for validating CA certificates:

    +
    +
      +
    • +

      Use a custom CA certificate: Migrate after validating a custom CA certificate.

      +
    • +
    • +

      Use the system CA certificate: Migrate after validating the system CA certificate.

      +
    • +
    • +

      Skip certificate validation : Migrate without validating a CA certificate.

      +
      +
        +
      1. +

        To use a custom CA certificate, leave the Skip certificate validation switch toggled to left, and either drag the CA certificate to the text box or browse for it and click Select.

        +
      2. +
      3. +

        To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.

        +
      4. +
      5. +

        To skip certificate validation, toggle the Skip certificate validation switch to the right.

        +
      6. +
      +
      +
    • +
    +
    +
  10. +
  11. +

    Optional: Ask Forklift to fetch a custom CA certificate from the provider’s API endpoint URL.

    +
    +
      +
    1. +

      Click Fetch certificate from URL. The Verify certificate window opens.

      +
    2. +
    3. +

      If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.

      +
      +

      Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint.

      +
      +
    4. +
    +
    +
  12. +
  13. +

    Click Create provider to add and save the provider.

    +
    +

    The provider appears in the list of providers.

    +
    +
  14. +
+
+
+
+
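If you do not already have an application credential, one can typically be created with the OpenStack client; a minimal sketch, assuming the python-openstackclient CLI is installed and authenticated, and using a hypothetical credential name:

$ openstack application credential create forklift-migration

The command output includes id and secret fields, which map to the Application credential ID and Application credential secret fields above.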
Adding an Open Virtual Appliance (OVA) source provider

You can add Open Virtual Appliance (OVA) files that were created by VMware vSphere as a source provider by using the OKD web console.

Procedure

1. In the OKD web console, click Migration → Providers for virtualization.
2. Click Create Provider.
3. Click Open Virtual Appliance (OVA).
4. Specify the following fields:
   • Provider resource name: Name of the source provider
   • URL: URL of the NFS file share that serves the OVA
5. Click Create provider to add and save the provider.
   The provider appears in the list of providers.

   Note: An error message might appear that states that an error has occurred. You can ignore this message.
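Before adding the provider, it can help to confirm that the NFS share is reachable and actually serves OVA files; a minimal sketch from a Linux host, assuming a hypothetical export 10.2.0.4:/ova:

$ sudo mkdir -p /mnt/ova-check
$ sudo mount -t nfs 10.2.0.4:/ova /mnt/ova-check
$ ls /mnt/ova-check/*.ova
$ sudo umount /mnt/ova-check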
Adding a Red Hat KubeVirt source provider

You can use a Red Hat KubeVirt provider as both a source provider and destination provider.

Specifically, the host cluster that is automatically added as a KubeVirt provider can be used as both a source provider and a destination provider.

You can migrate VMs from the cluster that Forklift is deployed on to another cluster, or from a remote cluster to the cluster that Forklift is deployed on.

Note: The OKD cluster version of the source provider must be 4.13 or later.

Procedure

1. In the OKD web console, click Migration → Providers for virtualization.
2. Click Create Provider.
3. Click KubeVirt.
4. Specify the following fields:
   • Provider resource name: Name of the source provider
   • URL: URL of the endpoint of the API server
   • Service account bearer token: Token for a service account with cluster-admin privileges (see the sketch after this procedure for one way to create such a token)

   If both URL and Service account bearer token are left blank, the local OKD cluster is used.
5. Choose one of the following options for validating CA certificates:
   • Use a custom CA certificate: Migrate after validating a custom CA certificate.
   • Use the system CA certificate: Migrate after validating the system CA certificate.
   • Skip certificate validation: Migrate without validating a CA certificate.
   a. To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
   b. To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
   c. To skip certificate validation, toggle the Skip certificate validation switch to the right.
6. Optional: Ask Forklift to fetch a custom CA certificate from the provider’s API endpoint URL.
   a. Click Fetch certificate from URL. The Verify certificate window opens.
   b. If the details are correct, select the I trust the authenticity of this certificate checkbox, and then click Confirm. If not, click Cancel, and then enter the correct certificate information manually.
   Once confirmed, the CA certificate is used to validate subsequent communication with the API endpoint.
7. Click Create provider to add and save the provider.
   The provider appears in the list of providers.
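A service account token with cluster-admin privileges can be created on the source cluster; a minimal sketch, assuming oc 4.11 or later (where oc create token is available) and hypothetical service account and namespace names:

$ oc create serviceaccount forklift-source -n konveyor-forklift
$ oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:konveyor-forklift:forklift-source
$ oc create token forklift-source -n konveyor-forklift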

Adding destination providers

You can add a KubeVirt destination provider by using the OKD web console.

Adding a KubeVirt destination provider

You can use a Red Hat KubeVirt provider as both a source provider and destination provider.

Specifically, the host cluster that is automatically added as a KubeVirt provider can be used as both a source provider and a destination provider.

You can also add another KubeVirt destination provider to the OKD web console in addition to the default KubeVirt destination provider, which is the cluster where you installed Forklift.

You can migrate VMs from the cluster that Forklift is deployed on to another cluster, or from a remote cluster to the cluster that Forklift is deployed on.

Procedure
1. In the OKD web console, click Migration → Providers for virtualization.
2. Click Create Provider.
3. Click KubeVirt.
4. Specify the following fields:
   • Provider resource name: Name of the destination provider
   • URL: URL of the endpoint of the API server
   • Service account bearer token: Token for a service account with cluster-admin privileges

   If both URL and Service account bearer token are left blank, the local OKD cluster is used.
5. Choose one of the following options for validating CA certificates:
   • Use a custom CA certificate: Migrate after validating a custom CA certificate.
   • Use the system CA certificate: Migrate after validating the system CA certificate.
   • Skip certificate validation: Migrate without validating a CA certificate.
   a. To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
   b. To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
   c. To skip certificate validation, toggle the Skip certificate validation switch to the right.
6. Optional: Ask Forklift to fetch a custom CA certificate from the provider’s API endpoint URL.
   a. Click Fetch certificate from URL. The Verify certificate window opens.
   b. If the details are correct, select the I trust the authenticity of this certificate checkbox, and then click Confirm. If not, click Cancel, and then enter the correct certificate information manually.
   Once confirmed, the CA certificate is used to validate subsequent communication with the API endpoint.
7. Click Create provider to add and save the provider.
   The provider appears in the list of providers.
Selecting a migration network for a KubeVirt provider

You can select a default migration network for a KubeVirt provider in the OKD web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.

If you do not select a migration network, the default migration network is the pod network, which might not be optimal for disk transfer.

Note: You can override the default migration network of the provider by selecting a different network when you create a migration plan.

Procedure

1. In the OKD web console, click Migration → Providers for virtualization.
2. On the right side of the provider, select Select migration network from the Options menu (kebab icon).
3. Select a network from the list of available networks and click Select.

Creating migration plans


You can create a migration plan by using the OKD web console to specify a source provider, the virtual machines (VMs) you want to migrate, and other plan details.

For your convenience, there are two procedures to create migration plans, starting with either a source provider or with specific VMs:

• Creating a migration plan starting with a source provider
• Creating a migration plan starting with specific VMs

Note: Virtual machines with guest-initiated storage connections, such as Internet Small Computer Systems Interface (iSCSI) connections or Network File System (NFS) mounts, are not handled by Forklift and could require additional planning before, or reconfiguration after, the migration. This ensures that no issues arise from the newly migrated VM accessing this storage.

Note: A plan cannot contain more than 500 VMs or 500 disks.
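Because of the 500-VM limit, it can be useful to count how many VMs a plan already references; a minimal sketch, using kubectl JSONPath against the Plan CR fields shown later in this document, under the assumption that the VMs are listed with the name parameter:

$ kubectl get plan <plan> -n <namespace> -o jsonpath='{range .spec.vms[*]}{.name}{"\n"}{end}' | wc -l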

Creating a migration plan starting with a source provider


You can create a migration plan based on a source provider, starting on the Plans for virtualization page. Note the specific options for migrations from VMware or oVirt providers.

Procedure

1. In the OKD web console, click Plans for virtualization and then click Create Plan.
   The Create migration plan wizard opens to the Select source provider interface.
2. Select the source provider of the VMs you want to migrate.
   The Select virtual machines interface opens.
3. Select the VMs you want to migrate and click Next.
   The Create migration plan pane opens. It displays the source provider’s name and suggestions for a target provider and namespace, a network map, and a storage map.
4. Enter the Plan name.
5. Make any needed changes to the editable items.
6. Click Add mapping to edit a suggested network mapping or a storage mapping, or to add one or more additional mappings.
7. Click Create migration plan.
   Forklift validates the migration plan and the Plan details page opens, indicating whether the plan is ready for use or contains an error. The details of the plan are listed, and you can edit the items you filled in on the previous page. If you make any changes, Forklift validates the plan again.
8. VMware source providers only (all optional):
   • Preserving static IPs of VMs: By default, virtual network interface controllers (vNICs) change during the migration process. As a result, vNICs that are configured with a static IP linked to the interface name in the guest VM lose their IP. To avoid this, click the Edit icon next to Preserve static IPs and toggle the Whether to preserve the static IPs switch in the window that opens. Then click Save.
     Forklift then issues a warning message about any VMs for which vNIC properties are missing. To retrieve any missing vNIC properties, run those VMs in vSphere in order for the vNIC properties to be reported to Forklift.
   • Entering a list of decryption passphrases for disks encrypted using Linux Unified Key Setup (LUKS): To enter a list of decryption passphrases for LUKS-encrypted devices, in the Settings section, click the Edit icon next to Disk decryption passphrases, enter the passphrases, and then click Save. You do not need to enter the passphrases in a specific order; for each LUKS-encrypted device, Forklift tries each passphrase until one unlocks the device.
   • Specifying a root device: Applies to multi-boot VM migrations only. By default, Forklift uses the first bootable device detected as the root device.
     To specify a different root device, in the Settings section, click the Edit icon next to Root device and choose a device from the list of commonly used options, or enter a device in the text box.
     Forklift uses the following format for the disk location: /dev/sd<disk_identifier><disk_partition>. For example, if the second disk is the root device and the operating system is on the disk’s second partition, the format would be /dev/sdb2. After you enter the boot device, click Save.
     If the conversion fails because the boot device provided is incorrect, you can find the correct information in the conversion pod logs (see the sketch after this procedure).
9. oVirt source providers only (optional):
   • Preserving the CPU model of VMs that are migrated from oVirt: Generally, the CPU model (type) for oVirt VMs is set at the cluster level, but it can be set at the VM level, which is called a custom CPU model. By default, Forklift sets the CPU model on the destination cluster as follows: Forklift preserves custom CPU settings for VMs that have them, but, for VMs without custom CPU settings, Forklift does not set the CPU model. Instead, the CPU model is later set by KubeVirt.
     To preserve the cluster-level CPU model of your oVirt VMs, in the Settings section, click the Edit icon next to Preserve CPU model. Toggle the Whether to preserve the CPU model switch, and then click Save.
10. If the plan is valid:
    a. You can run the plan now by clicking Start migration.
    b. You can run the plan later by selecting it on the Plans for virtualization page and following the procedure in Running a migration plan.

    Note: When you migrate a VMware 7 VM to an OKD 4.13+ platform that uses CentOS 7.9, the name of the network interfaces changes and the static IP configuration for the VM no longer works.
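The conversion pod logs mentioned in the root-device option can be inspected with standard commands; a minimal sketch, assuming the conversion pod runs in the migration plan's target namespace and that you identify its name from the pod list first:

$ oc get pods -n <target_namespace>
$ oc logs <conversion_pod_name> -n <target_namespace>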

Creating a migration plan starting with specific VMs


You can create a migration plan based on specific VMs, starting on the Providers for virtualization page. Note the specific options for migrations from VMware or oVirt providers.

Procedure

1. In the OKD web console, click Providers for virtualization.
2. In the row of the appropriate source provider, click VMs.
   The Virtual Machines tab opens.
3. Select the VMs you want to migrate and click Create migration plan.
   The Create migration plan pane opens. It displays the source provider’s name and suggestions for a target provider and namespace, a network map, and a storage map.
4. Enter the Plan name.
5. Make any needed changes to the editable items.
6. Click Add mapping to edit a suggested network mapping or a storage mapping, or to add one or more additional mappings.
7. Click Create migration plan.
   Forklift validates the migration plan and the Plan details page opens, indicating whether the plan is ready for use or contains an error. The details of the plan are listed, and you can edit the items you filled in on the previous page. If you make any changes, Forklift validates the plan again.
8. VMware source providers only (all optional):
   • Preserving static IPs of VMs: By default, virtual network interface controllers (vNICs) change during the migration process. As a result, vNICs that are configured with a static IP linked to the interface name in the guest VM lose their IP. To avoid this, click the Edit icon next to Preserve static IPs and toggle the Whether to preserve the static IPs switch in the window that opens. Then click Save.
     Forklift then issues a warning message about any VMs for which vNIC properties are missing. To retrieve any missing vNIC properties, run those VMs in vSphere in order for the vNIC properties to be reported to Forklift.
   • Entering a list of decryption passphrases for disks encrypted using Linux Unified Key Setup (LUKS): To enter a list of decryption passphrases for LUKS-encrypted devices, in the Settings section, click the Edit icon next to Disk decryption passphrases, enter the passphrases, and then click Save. You do not need to enter the passphrases in a specific order; for each LUKS-encrypted device, Forklift tries each passphrase until one unlocks the device.
   • Specifying a root device: Applies to multi-boot VM migrations only. By default, Forklift uses the first bootable device detected as the root device.
     To specify a different root device, in the Settings section, click the Edit icon next to Root device and choose a device from the list of commonly used options, or enter a device in the text box.
     Forklift uses the following format for the disk location: /dev/sd<disk_identifier><disk_partition>. For example, if the second disk is the root device and the operating system is on the disk’s second partition, the format would be /dev/sdb2. After you enter the boot device, click Save.
     If the conversion fails because the boot device provided is incorrect, you can find the correct information in the conversion pod logs (see the sketch after the previous procedure).
9. oVirt source providers only (optional):
   • Preserving the CPU model of VMs that are migrated from oVirt: Generally, the CPU model (type) for oVirt VMs is set at the cluster level, but it can be set at the VM level, which is called a custom CPU model. By default, Forklift sets the CPU model on the destination cluster as follows: Forklift preserves custom CPU settings for VMs that have them, but, for VMs without custom CPU settings, Forklift does not set the CPU model. Instead, the CPU model is later set by KubeVirt.
     To preserve the cluster-level CPU model of your oVirt VMs, in the Settings section, click the Edit icon next to Preserve CPU model. Toggle the Whether to preserve the CPU model switch, and then click Save.
10. If the plan is valid:
    a. You can run the plan now by clicking Start migration.
    b. You can run the plan later by selecting it on the Plans for virtualization page and following the procedure in Running a migration plan.

    Note: When you migrate a VMware 7 VM to an OKD 4.13+ platform that uses CentOS 7.9, the name of the network interfaces changes and the static IP configuration for the VM no longer works.

Running a migration plan


You can run a migration plan and view its progress in the OKD web console.

Prerequisites

• Valid migration plan.

Procedure

1. In the OKD web console, click Migration → Plans for virtualization.
   The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, and the description of each plan.
2. Click Start beside a migration plan to start the migration.
3. Click Start in the confirmation window that opens.
   The Migration details by VM screen opens, displaying the migration’s progress.

   Warm migration only:
   • The precopy stage starts.
   • Click Cutover to complete the migration.
4. If the migration fails:
   a. Click Get logs to retrieve the migration logs.
   b. Click Get logs in the confirmation window that opens.
   c. Wait until Get logs changes to Download logs and then click the button to download the logs.
5. Click a migration’s Status, whether it failed or succeeded or is still ongoing, to view the details of the migration.
   The Migration details by VM screen opens, displaying the start and end times of the migration, the amount of data copied, and a progress pipeline for each VM being migrated.
6. Expand an individual VM to view its steps and the elapsed time and state of each step.
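You can also follow a plan's progress from the CLI by watching the corresponding custom resources; a minimal sketch, assuming Forklift runs in the konveyor-forklift namespace:

$ kubectl get plans.forklift.konveyor.io,migrations.forklift.konveyor.io -n konveyor-forklift -w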

Migration plan options


On the Plans for virtualization page of the OKD web console, you can click the Options menu (kebab icon) beside a migration plan to access the following options:

• Get logs: Retrieves the logs of a migration. When you click Get logs, a confirmation window opens. After you click Get logs in the window, wait until Get logs changes to Download logs and then click the button to download the logs.
• Edit: Edit the details of a migration plan. You cannot edit a migration plan while it is running or after it has completed successfully.
• Duplicate: Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks:
  • Migrate VMs to a different namespace.
  • Edit an archived migration plan.
  • Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready.
• Archive: Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed.

  Note: The Archive option is irreversible. However, you can duplicate an archived plan.

• Delete: Permanently remove a migration plan. You cannot delete a running migration plan.

  Note: The Delete option is irreversible.

  Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. (BZ#2018974) You must archive a migration plan before deleting it in order to clean up the temporary resources.

• View details: Display the details of a migration plan.
• Restart: Restart a failed or canceled migration plan.
• Cancel scheduled cutover: Cancel a scheduled cutover migration for a warm migration plan.

Canceling a migration


You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the OKD web console.

Procedure

1. In the OKD web console, click Plans for virtualization.
2. Click the name of a running migration plan to view the migration details.
3. Select one or more VMs and click Cancel.
4. Click Yes, cancel to confirm the cancellation.
   In the Migration details by VM list, the status of the canceled VMs is Canceled. The unmigrated and the migrated virtual machines are not affected.

You can restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.

Migrating virtual machines from the command line


You can migrate virtual machines to KubeVirt from the command line.

Note: You must ensure that all prerequisites are met.

Note: A plan cannot contain more than 500 VMs or 500 disks.

Permissions needed by non-administrators to work with migration plan components


If you are an administrator, you can work with all components of migration plans (for example, providers, network mappings, and migration plans).

By default, non-administrators have limited ability to work with migration plans and their components. As an administrator, you can modify their roles to allow them full access to all components, or you can give them limited permissions.

For example, administrators can assign non-administrators one or more of the following cluster roles for migration plans:

Table 10. Example migration plan roles and their privileges

Role                                     | Description
plans.forklift.konveyor.io-v1beta1-view  | Can view migration plans but cannot create, delete, or modify them
plans.forklift.konveyor.io-v1beta1-edit  | Can create, delete, or modify (all parts of edit permissions) individual migration plans
plans.forklift.konveyor.io-v1beta1-admin | All edit privileges and the ability to delete the entire collection of migration plans

Note that pre-defined cluster roles include a resource (for example, plans), an API group (for example, forklift.konveyor.io-v1beta1), and an action (for example, view, edit).

As a more comprehensive example, you can grant non-administrators the following set of permissions per namespace:

• Create and modify storage maps, network maps, and migration plans for the namespaces they have access to
• Attach providers created by administrators to storage maps, network maps, and migration plans
• Not be able to create providers or to change system settings
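A cluster administrator can bind one of the pre-defined cluster roles from Table 10 to a user; a minimal sketch, assuming a hypothetical user named alice:

$ oc adm policy add-cluster-role-to-user plans.forklift.konveyor.io-v1beta1-view alice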
Table 11. Example permissions required for non-administrators to work with migration plan components but not create providers

Actions                                         | API group            | Resource
get, list, watch, create, update, patch, delete | forklift.konveyor.io | plans
get, list, watch, create, update, patch, delete | forklift.konveyor.io | migrations
get, list, watch, create, update, patch, delete | forklift.konveyor.io | hooks
get, list, watch                                | forklift.konveyor.io | providers
get, list, watch, create, update, patch, delete | forklift.konveyor.io | networkmaps
get, list, watch, create, update, patch, delete | forklift.konveyor.io | storagemaps
get, list, watch                                | forklift.konveyor.io | forkliftcontrollers
create, patch, delete                           | Empty string (core)  | secrets

Note: Non-administrators need to have the create permissions that are part of edit roles for network maps and for storage maps to create migration plans, even when using a template for a network map or a storage map.
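Expressed as Kubernetes RBAC, the permissions in Table 11 correspond roughly to the following rules; a minimal sketch of a namespaced Role, assuming a hypothetical name forklift-plan-user:

$ cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: forklift-plan-user
  namespace: <namespace>
rules:
  - apiGroups: ["forklift.konveyor.io"]
    resources: ["plans", "migrations", "hooks", "networkmaps", "storagemaps"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["forklift.konveyor.io"]
    resources: ["providers", "forkliftcontrollers"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]  # core API group; the "Empty string" row in Table 11
    resources: ["secrets"]
    verbs: ["create", "patch", "delete"]
EOF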
Note: forklift-controller consistently failing to reconcile a plan and returning an HTTP 500 error

There is an issue with the forklift-controller consistently failing to reconcile a migration plan and subsequently returning an HTTP 500 error. This issue occurs when you specify user permissions only on the virtual machine (VM).

In Forklift, you need to add permissions at the datacenter level, including the storage, networks, switches, and so on, that are used by the VM. You must then propagate the permissions to the child elements.

If you do not want to add this level of permissions, you must manually add the permissions to each required object on the VM host.

Retrieving a VMware vSphere moRef


When you migrate VMs with a VMware vSphere source provider using Forklift from the CLI, you need to know the managed object reference (moRef) of certain entities in vSphere, such as datastores, networks, and VMs.

You can retrieve the moRef of one or more vSphere entities from the Inventory service. You can then use each moRef as a reference for retrieving the moRef of another entity.

Procedure

1. Retrieve the routes for the project:

   $ oc get route -n konveyor-forklift

2. Retrieve the Inventory service route:

   $ kubectl get route <inventory_service> -n konveyor-forklift

3. Retrieve the access token:

   $ TOKEN=$(oc whoami -t)

4. Retrieve the moRef of a VMware vSphere provider:

   $ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/vsphere -k

5. Retrieve the datastores of a VMware vSphere source provider:

   $ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/vsphere/<provider_id>/datastores/ -k

   Example output

   [
     {
       "id": "datastore-11",
       "parent": {
         "kind": "Folder",
         "id": "group-s5"
       },
       "path": "/Datacenter/datastore/v2v_general_porpuse_ISCSI_DC",
       "revision": 46,
       "name": "v2v_general_porpuse_ISCSI_DC",
       "selfLink": "providers/vsphere/01278af6-e1e4-4799-b01b-d5ccc8dd0201/datastores/datastore-11"
     },
     {
       "id": "datastore-730",
       "parent": {
         "kind": "Folder",
         "id": "group-s5"
       },
       "path": "/Datacenter/datastore/f01-h27-640-SSD_2",
       "revision": 46,
       "name": "f01-h27-640-SSD_2",
       "selfLink": "providers/vsphere/01278af6-e1e4-4799-b01b-d5ccc8dd0201/datastores/datastore-730"
     },
     ...

In this example, the moRef of the datastore v2v_general_porpuse_ISCSI_DC is datastore-11 and the moRef of the datastore f01-h27-640-SSD_2 is datastore-730.
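Following the same pattern, other inventory collections can typically be listed by substituting the collection name; a sketch under the assumption that the Inventory service exposes a vms collection the same way it exposes datastores:

$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/vsphere/<provider_id>/vms -k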

Migrating virtual machines


You migrate virtual machines (VMs) from the command line (CLI) by creating Forklift custom resources (CRs). The CRs and the migration procedure vary by source provider.

Note: You must specify a name for cluster-scoped CRs.

You must specify both a name and a namespace for namespace-scoped CRs.

To migrate to or from an OKD cluster that is different from the one the migration plan is defined on, you must have a KubeVirt service account token with cluster-admin privileges.

Migrating from a VMware vSphere source provider


You can migrate from a VMware vSphere source provider by using the CLI.

Procedure

1. Create a Secret manifest for the source provider credentials:

   $ cat << EOF | kubectl apply -f -
   apiVersion: v1
   kind: Secret
   metadata:
     name: <secret>
     namespace: <namespace>
     ownerReferences: (1)
       - apiVersion: forklift.konveyor.io/v1beta1
         kind: Provider
         name: <provider_name>
         uid: <provider_uid>
     labels:
       createdForProviderType: vsphere
       createdForResourceType: providers
   type: Opaque
   stringData:
     user: <user> (2)
     password: <password> (3)
     insecureSkipVerify: <"true"/"false"> (4)
     cacert: | (5)
       <ca_certificate>
     url: <api_end_point> (6)
   EOF

   (1) The ownerReferences section is optional.
   (2) Specify the vCenter user or the ESX/ESXi user.
   (3) Specify the password of the vCenter user or the ESX/ESXi user.
   (4) Specify "true" to skip certificate verification, or "false" to verify the certificate. Defaults to "false" if not specified. Skipping certificate verification proceeds with an insecure migration, so the certificate is not required. An insecure migration means that the transferred data is sent over an insecure connection, and potentially sensitive data could be exposed.
   (5) When this field is not set and skipping certificate verification is disabled, Forklift attempts to use the system CA.
   (6) Specify the API endpoint URL of the vCenter or the ESX/ESXi, for example, https://<vCenter_host>/sdk.

2. Create a Provider manifest for the source provider:

   $ cat << EOF | kubectl apply -f -
   apiVersion: forklift.konveyor.io/v1beta1
   kind: Provider
   metadata:
     name: <source_provider>
     namespace: <namespace>
   spec:
     type: vsphere
     url: <api_end_point> (1)
     settings:
       vddkInitImage: <VDDK_image> (2)
       sdkEndpoint: vcenter (3)
     secret:
       name: <secret> (4)
       namespace: <namespace>
   EOF

   (1) Specify the URL of the API endpoint, for example, https://<vCenter_host>/sdk.
   (2) Optional, but it is strongly recommended to create a VDDK image to accelerate migrations. Follow the OpenShift documentation to specify the VDDK image you created.
   (3) Options: vcenter or esxi.
   (4) Specify the name of the provider Secret CR.

3. Create a Host manifest:

   $ cat << EOF | kubectl apply -f -
   apiVersion: forklift.konveyor.io/v1beta1
   kind: Host
   metadata:
     name: <vmware_host>
     namespace: <namespace>
   spec:
     provider:
       namespace: <namespace>
       name: <source_provider> (1)
     id: <source_host_mor> (2)
     ipAddress: <source_network_ip> (3)
   EOF

   (1) Specify the name of the VMware vSphere Provider CR.
   (2) Specify the Managed Object Reference (moRef) of the VMware vSphere host. To retrieve the moRef, see Retrieving a VMware vSphere moRef.
   (3) Specify the IP address of the VMware vSphere migration network.

4. Create a NetworkMap manifest to map the source and destination networks:

   $ cat << EOF | kubectl apply -f -
   apiVersion: forklift.konveyor.io/v1beta1
   kind: NetworkMap
   metadata:
     name: <network_map>
     namespace: <namespace>
   spec:
     map:
       - destination:
           name: <network_name>
           type: pod (1)
         source: (2)
           id: <source_network_id>
           name: <source_network_name>
       - destination:
           name: <network_attachment_definition> (3)
           namespace: <network_attachment_definition_namespace> (4)
           type: multus
         source:
           id: <source_network_id>
           name: <source_network_name>
     provider:
       source:
         name: <source_provider>
         namespace: <namespace>
       destination:
         name: <destination_provider>
         namespace: <namespace>
   EOF

   (1) Allowed values are pod and multus.
   (2) You can use either the id or the name parameter to specify the source network. For id, specify the VMware vSphere network Managed Object Reference (moRef). To retrieve the moRef, see Retrieving a VMware vSphere moRef.
   (3) Specify a network attachment definition for each additional KubeVirt network.
   (4) Required only when type is multus. Specify the namespace of the KubeVirt network attachment definition.

5. Create a StorageMap manifest to map source and destination storage:

   $ cat << EOF | kubectl apply -f -
   apiVersion: forklift.konveyor.io/v1beta1
   kind: StorageMap
   metadata:
     name: <storage_map>
     namespace: <namespace>
   spec:
     map:
       - destination:
           storageClass: <storage_class>
           accessMode: <access_mode> (1)
         source:
           id: <source_datastore> (2)
     provider:
       source:
         name: <source_provider>
         namespace: <namespace>
       destination:
         name: <destination_provider>
         namespace: <namespace>
   EOF

   (1) Allowed values are ReadWriteOnce and ReadWriteMany.
   (2) Specify the VMware vSphere datastore moRef, for example, datastore-11. To retrieve the moRef, see Retrieving a VMware vSphere moRef.

6. Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

   $ cat << EOF | kubectl apply -f -
   apiVersion: forklift.konveyor.io/v1beta1
   kind: Hook
   metadata:
     name: <hook>
     namespace: <namespace>
   spec:
     image: quay.io/konveyor/hook-runner
     playbook: |
       LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv
       YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s
       IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh
       cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr
       bG9hZAoK
   EOF

   where playbook refers to an optional Base64-encoded Ansible Playbook. If you specify a playbook, the image must be hook-runner.

   Note: You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.

7. Create a Plan manifest for the migration:

   $ cat << EOF | kubectl apply -f -
   apiVersion: forklift.konveyor.io/v1beta1
   kind: Plan
   metadata:
     name: <plan> (1)
     namespace: <namespace>
   spec:
     warm: false (2)
     provider:
       source:
         name: <source_provider>
         namespace: <namespace>
       destination:
         name: <destination_provider>
         namespace: <namespace>
     map: (3)
       network: (4)
         name: <network_map> (5)
         namespace: <namespace>
       storage: (6)
         name: <storage_map> (7)
         namespace: <namespace>
     preserveStaticIPs: true (8)
     targetNamespace: <target_namespace>
     vms: (9)
       - id: <source_vm> (10)
       - name: <source_vm>
         hooks: (11)
           - hook:
               namespace: <namespace>
               name: <hook> (12)
             step: <step> (13)
   EOF

   (1) Specify the name of the Plan CR.
   (2) Specify whether the migration is warm (true) or cold (false). If you specify a warm migration without specifying a value for the cutover parameter in the Migration manifest, only the precopy stage runs.
   (3) Specify only one network map and one storage map per plan.
   (4) Specify a network mapping even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
   (5) Specify the name of the NetworkMap CR.
   (6) Specify a storage mapping even if the VMs to be migrated are not assigned with disk images. The mapping can be empty in this case.
   (7) Specify the name of the StorageMap CR.
   (8) By default, virtual network interface controllers (vNICs) change during the migration process. As a result, vNICs that are configured with a static IP linked to the interface name in the guest VM lose their IP. To avoid this, set preserveStaticIPs to true. Forklift issues a warning message about any VMs for which vNIC properties are missing. To retrieve any missing vNIC properties, run those VMs in vSphere in order for the vNIC properties to be reported to Forklift.
   (9) You can use either the id or the name parameter to specify the source VMs.
   (10) Specify the VMware vSphere VM moRef. To retrieve the moRef, see Retrieving a VMware vSphere moRef.
   (11) Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.
   (12) Specify the name of the Hook CR.
   (13) Allowed values are PreHook, before the migration plan starts, or PostHook, after the migration is complete.

   Note: When you migrate a VMware 7 VM to an OKD 4.13+ platform that uses CentOS 7.9, the name of the network interfaces changes and the static IP configuration for the VM no longer works.

8. Create a Migration manifest to run the Plan CR:

   $ cat << EOF | kubectl apply -f -
   apiVersion: forklift.konveyor.io/v1beta1
   kind: Migration
   metadata:
     name: <name_of_migration_cr>
     namespace: <namespace>
   spec:
     plan:
       name: <name_of_plan_cr>
       namespace: <namespace>
     cutover: <optional_cutover_time>
   EOF

   Note: If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00.
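After creating the Migration CR, you can check its progress and conditions; a minimal sketch:

$ kubectl get migration <name_of_migration_cr> -n <namespace> -o yaml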
Note: forklift-controller consistently failing to reconcile a plan and returning an HTTP 500 error

There is an issue with the forklift-controller consistently failing to reconcile a migration plan and subsequently returning an HTTP 500 error. This issue occurs when you specify user permissions only on the virtual machine (VM).

In Forklift, you need to add permissions at the datacenter level, including the storage, networks, switches, and so on, that are used by the VM. You must then propagate the permissions to the child elements.

If you do not want to add this level of permissions, you must manually add the permissions to each required object on the VM host.

Migrating from an oVirt source provider

+
+

You can migrate from a oVirt (oVirt) source provider by using the CLI.

+
+
+
Prerequisites
+

If you are migrating a virtual machine with a direct LUN disk, ensure that the nodes in the KubeVirt destination cluster that the VM is expected to run on can access the backend storage.

+
+
+ + + + + +
+ + +
+
    +
  • +

    Unlike disk images that are copied from a source provider to a target provider, LUNs are detached, but not removed, from virtual machines in the source provider and then attached to the virtual machines (VMs) that are created in the target provider.

    +
  • +
  • +

    LUNs are not removed from the source provider during the migration in case fallback to the source provider is required. However, before re-attaching the LUNs to VMs in the source provider, ensure that the LUNs are not used by VMs on the target environment at the same time, which might lead to data corruption.

    +
  • +
+
+
+
+
+
Procedure
+
    +
  1. +

    Create a Secret manifest for the source provider credentials:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: v1
    +kind: Secret
    +metadata:
    +  name: <secret>
    +  namespace: <namespace>
    +  ownerReferences: (1)
    +    - apiVersion: forklift.konveyor.io/v1beta1
    +      kind: Provider
    +      name: <provider_name>
    +      uid: <provider_uid>
    +  labels:
    +    createdForProviderType: ovirt
    +    createdForResourceType: providers
    +type: Opaque
    +stringData:
    +  user: <user> (2)
    +  password: <password> (3)
    +  insecureSkipVerify: <"true"/"false"> (4)
    +  cacert: | (5)
    +    <ca_certificate>
    +  url: <api_end_point> (6)
    +EOF
    +
    +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + +
    1The ownerReferences section is optional.
    2Specify the oVirt Engine user.
    3Specify the user password.
    4Specify "true" to skip certificate verification, specify "false" to verify the certificate. Defaults to "false" if not specified. Skipping certificate verification proceeds with an insecure migration and then the certificate is not required. Insecure migration means that the transferred data is sent over an insecure connection and potentially sensitive data could be exposed.
    5Enter the Engine CA certificate, unless it was replaced by a third-party certificate, in which case, enter the Engine Apache CA certificate. You can retrieve the Engine CA certificate at https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA.
    6Specify the API endpoint URL, for example, https://<engine_host>/ovirt-engine/api.
    +
    +
  2. +
+
+
+
    +
  1. +

    Create a Provider manifest for the source provider:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Provider
    +metadata:
    +  name: <source_provider>
    +  namespace: <namespace>
    +spec:
    +  type: ovirt
    +  url: <api_end_point> (1)
    +  secret:
    +    name: <secret> (2)
    +    namespace: <namespace>
    +EOF
    +
    +
    +
    + + + + + + + + + +
    1Specify the URL of the API endpoint, for example, https://<engine_host>/ovirt-engine/api.
    2Specify the name of provider Secret CR.
    +
    +
  2. +
+
+
+
    +
  1. +

    Create a NetworkMap manifest to map the source and destination networks:

    +
    +
    +
    $  cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: NetworkMap
    +metadata:
    +  name: <network_map>
    +  namespace: <namespace>
    +spec:
    +  map:
    +    - destination:
    +        name: <network_name>
    +        type: pod (1)
    +      source: (2)
    +        id: <source_network_id>
    +        name: <source_network_name>
    +    - destination:
    +        name: <network_attachment_definition> (3)
    +        namespace: <network_attachment_definition_namespace> (4)
    +        type: multus
    +      source:
    +        id: <source_network_id>
    +        name: <source_network_name>
    +  provider:
    +    source:
    +      name: <source_provider>
    +      namespace: <namespace>
    +    destination:
    +      name: <destination_provider>
    +      namespace: <namespace>
    +EOF
    +
    +
    +
    + + + + + + + + + + + + + + + + + +
    1Allowed values are pod and multus.
    2You can use either the id or the name parameter to specify the source network. For id, specify the oVirt network Universal Unique ID (UUID).
    3Specify a network attachment definition for each additional KubeVirt network.
    4Required only when type is multus. Specify the namespace of the KubeVirt network attachment definition.
    +
    +
  2. +
+
+
+
    +
  1. +

    Create a StorageMap manifest to map source and destination storage:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: StorageMap
    +metadata:
    +  name: <storage_map>
    +  namespace: <namespace>
    +spec:
    +  map:
    +    - destination:
    +        storageClass: <storage_class>
    +        accessMode: <access_mode> (1)
    +      source:
    +        id: <source_storage_domain> (2)
    +  provider:
    +    source:
    +      name: <source_provider>
    +      namespace: <namespace>
    +    destination:
    +      name: <destination_provider>
    +      namespace: <namespace>
    +EOF
    +
    +
    +
    + + + + + + + + + +
    1Allowed values are ReadWriteOnce and ReadWriteMany.
    2Specify the oVirt storage domain UUID. For example, f2737930-b567-451a-9ceb-2887f6207009.
    +
    +
  2. +
  3. +

    Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

    +
    +
    +
    $  cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Hook
    +metadata:
    +  name: <hook>
    +  namespace: <namespace>
    +spec:
    +  image: quay.io/konveyor/hook-runner
    +  playbook: |
    +    LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv
    +    YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s
    +    IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh
    +    cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr
    +    bG9hZAoK
    +EOF
    +
    +
    +
    +

    where:

    +
    +
    +

    playbook refers to an optional Base64-encoded Ansible Playbook. If you specify a playbook, the image must be hook-runner.

    +
    +
    + + + + + +
    + + +
    +

    You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.

    +
    +
    +
    +
  4. +
+
+
+
    +
  1. +

    Create a Plan manifest for the migration:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Plan
    +metadata:
    +  name: <plan> (1)
    +  namespace: <namespace>
    +  preserveClusterCpuModel: true (2)
    +spec:
    +  warm: false (3)
    +  provider:
    +    source:
    +      name: <source_provider>
    +      namespace: <namespace>
    +    destination:
    +      name: <destination_provider>
    +      namespace: <namespace>
    +  map: (4)
    +    network: (5)
    +      name: <network_map> (6)
    +      namespace: <namespace>
    +    storage: (7)
    +      name: <storage_map> (8)
    +      namespace: <namespace>
    +  targetNamespace: <target_namespace>
    +  vms: (9)
    +    - id: <source_vm> (10)
    +    - name: <source_vm>
    +      hooks: (11)
    +        - hook:
    +            namespace: <namespace>
    +            name: <hook> (12)
    +          step: <step> (13)
    +EOF
    +
    +
    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    1Specify the name of the Plan CR.
    2See note below.
    3Specify whether the migration is warm or cold. If you specify a warm migration without specifying a value for the cutover parameter in the Migration manifest, only the precopy stage will run.
    4Specify only one network map and one storage map per plan.
    5Specify a network mapping even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
    6Specify the name of the NetworkMap CR.
    7Specify a storage mapping even if the VMs to be migrated are not assigned with disk images. The mapping can be empty in this case.
    8Specify the name of the StorageMap CR.
    9You can use either the id or the name parameter to specify the source VMs.
    10Specify the oVirt VM UUID.
    11Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.
    12Specify the name of the Hook CR.
    13Allowed values are PreHook, before the migration plan starts, or PostHook, after the migration is complete. +
    + + + + + +
    + + +
    +
      +
    • +

      If the migrated machines is set with a custom CPU model, it will be set with that CPU model in the destination cluster, regardless of the setting of preserveClusterCpuModel.

      +
    • +
    • +

      If the migrated machine is not set with a custom CPU model:

      +
      +
        +
      • +

        If preserveClusterCpuModel is set to 'true`, Forklift checks the CPU model of the VM when it runs in oVirt, based on the cluster’s configuration, and then sets the migrated VM with that CPU model.

        +
      • +
      • +

        If preserveClusterCpuModel is set to 'false`, Forklift does not set a CPU type and the VM is set with the default CPU model of the destination cluster.

        +
      • +
      +
      +
    • +
    +
    +
    +
    +
    +
  2. +
  3. +

    Create a Migration manifest to run the Plan CR:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Migration
    +metadata:
    +  name: <name_of_migration_cr>
    +  namespace: <namespace>
    +spec:
    +  plan:
    +    name: <name_of_plan_cr>
    +    namespace: <namespace>
    +  cutover: <optional_cutover_time>
    +EOF
    +
    +
    +
    Note:
    +

    If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00.

    +
    +
    +
    +
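
    For example, to schedule the cutover two hours from now, you can generate a compliant timestamp with GNU date (a sketch; adjust the offset as needed):

    $ date --iso-8601=seconds -d '+2 hours'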
  4. +
+
+
+
+

Migrating from an OpenStack source provider

+
+

You can migrate from an OpenStack source provider by using the CLI.

+
+
+
Procedure
+
    +
  1. +

    Create a Secret manifest for the source provider credentials:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: v1
    +kind: Secret
    +metadata:
    +  name: <secret>
    +  namespace: <namespace>
    +  ownerReferences: (1)
    +    - apiVersion: forklift.konveyor.io/v1beta1
    +      kind: Provider
    +      name: <provider_name>
    +      uid: <provider_uid>
    +  labels:
    +    createdForProviderType: openstack
    +    createdForResourceType: providers
    +type: Opaque
    +stringData:
    +  user: <user> (2)
    +  password: <password> (3)
    +  insecureSkipVerify: <"true"/"false"> (4)
    +  domainName: <domain_name>
    +  projectName: <project_name>
    +  regionName: <region_name>
    +  cacert: | (5)
    +    <ca_certificate>
    +  url: <api_end_point> (6)
    +EOF
    +
    +
    +
    (1) The ownerReferences section is optional.
    (2) Specify the OpenStack user.
    (3) Specify the OpenStack user's password.
    (4) Specify "true" to skip certificate verification or "false" to verify the certificate. Defaults to "false" if not specified. Skipping certificate verification makes the migration insecure: the certificate is not required, and the transferred data is sent over an insecure connection, so potentially sensitive data could be exposed.
    (5) When this field is not set and skipping certificate verification is disabled, Forklift attempts to use the system CA.
    (6) Specify the API endpoint URL, for example, https://<identity_service>/v3.
    +
    +
  2. +
+
+
+
    +
  1. +

    Create a Provider manifest for the source provider:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Provider
    +metadata:
    +  name: <source_provider>
    +  namespace: <namespace>
    +spec:
    +  type: openstack
    +  url: <api_end_point> (1)
    +  secret:
    +    name: <secret> (2)
    +    namespace: <namespace>
    +EOF
    +
    +
    +
    (1) Specify the URL of the API endpoint.
    (2) Specify the name of the provider Secret CR.
    +
    +
  2. +
+
+
+
    +
  1. +

    Create a NetworkMap manifest to map the source and destination networks:

    +
    +
    +
    $  cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: NetworkMap
    +metadata:
    +  name: <network_map>
    +  namespace: <namespace>
    +spec:
    +  map:
    +    - destination:
    +        name: <network_name>
    +        type: pod (1)
    +      source: (2)
    +        id: <source_network_id>
    +        name: <source_network_name>
    +    - destination:
    +        name: <network_attachment_definition> (3)
    +        namespace: <network_attachment_definition_namespace> (4)
    +        type: multus
    +      source:
    +        id: <source_network_id>
    +        name: <source_network_name>
    +  provider:
    +    source:
    +      name: <source_provider>
    +      namespace: <namespace>
    +    destination:
    +      name: <destination_provider>
    +      namespace: <namespace>
    +EOF
    +
    +
    +
    (1) Allowed values are pod and multus.
    (2) You can use either the id or the name parameter to specify the source network. For id, specify the OpenStack network UUID.
    (3) Specify a network attachment definition for each additional KubeVirt network.
    (4) Required only when type is multus. Specify the namespace of the KubeVirt network attachment definition.
    +
    +
  2. +
+
+
+
    +
  1. +

    Create a StorageMap manifest to map source and destination storage:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: StorageMap
    +metadata:
    +  name: <storage_map>
    +  namespace: <namespace>
    +spec:
    +  map:
    +    - destination:
    +        storageClass: <storage_class>
    +        accessMode: <access_mode> (1)
    +      source:
    +        id: <source_volume_type> (2)
    +  provider:
    +    source:
    +      name: <source_provider>
    +      namespace: <namespace>
    +    destination:
    +      name: <destination_provider>
    +      namespace: <namespace>
    +EOF
    +
    +
    +
    (1) Allowed values are ReadWriteOnce and ReadWriteMany.
    (2) Specify the OpenStack volume_type UUID. For example, f2737930-b567-451a-9ceb-2887f6207009.
    +
    +
  2. +
  3. +

    Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

    +
    +
    +
    $  cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Hook
    +metadata:
    +  name: <hook>
    +  namespace: <namespace>
    +spec:
    +  image: quay.io/konveyor/hook-runner
    +  playbook: |
    +    LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv
    +    YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s
    +    IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh
    +    cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr
    +    bG9hZAoK
    +EOF
    +
    +
    +
    +

    where:

    +
    +
    +

    playbook refers to an optional Base64-encoded Ansible Playbook. If you specify a playbook, the image must be hook-runner.

    +
    +
    Note:
    +

    You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.

    +
    +
    +
    +
  4. +
+
+
+
    +
  1. +

    Create a Plan manifest for the migration:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Plan
    +metadata:
    +  name: <plan> (1)
    +  namespace: <namespace>
    +spec:
    +  provider:
    +    source:
    +      name: <source_provider>
    +      namespace: <namespace>
    +    destination:
    +      name: <destination_provider>
    +      namespace: <namespace>
    +  map: (2)
    +    network: (3)
    +      name: <network_map> (4)
    +      namespace: <namespace>
    +    storage: (5)
    +      name: <storage_map> (6)
    +      namespace: <namespace>
    +  targetNamespace: <target_namespace>
    +  vms: (7)
    +    - id: <source_vm> (8)
    +    - name: <source_vm>
    +      hooks: (9)
    +        - hook:
    +            namespace: <namespace>
    +            name: <hook> (10)
    +          step: <step> (11)
    +EOF
    +
    +
    +
    (1) Specify the name of the Plan CR.
    (2) Specify only one network map and one storage map per plan.
    (3) Specify a network mapping, even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
    (4) Specify the name of the NetworkMap CR.
    (5) Specify a storage mapping, even if the VMs to be migrated are not assigned disk images. The mapping can be empty in this case.
    (6) Specify the name of the StorageMap CR.
    (7) You can use either the id or the name parameter to specify the source VMs.
    (8) Specify the OpenStack VM UUID.
    (9) Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.
    (10) Specify the name of the Hook CR.
    (11) Allowed values are PreHook, before the migration plan starts, or PostHook, after the migration is complete.
    +
    +
  2. +
  3. +

    Create a Migration manifest to run the Plan CR:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Migration
    +metadata:
    +  name: <name_of_migration_cr>
    +  namespace: <namespace>
    +spec:
    +  plan:
    +    name: <name_of_plan_cr>
    +    namespace: <namespace>
    +  cutover: <optional_cutover_time>
    +EOF
    +
    +
    +
    Note:
    +

    If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00.

    +
    +
    +
    +
  4. +
+
+
+
+

Migrating from an Open Virtual Appliance (OVA) source provider

+
+

You can migrate from Open Virtual Appliance (OVA) files that were created by VMware vSphere as a source provider by using the CLI.

+
+
+
Procedure
+
    +
  1. +

    Create a Secret manifest for the source provider credentials:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: v1
    +kind: Secret
    +metadata:
    +  name: <secret>
    +  namespace: <namespace>
    +  ownerReferences: (1)
    +    - apiVersion: forklift.konveyor.io/v1beta1
    +      kind: Provider
    +      name: <provider_name>
    +      uid: <provider_uid>
    +  labels:
    +    createdForProviderType: ova
    +    createdForResourceType: providers
    +type: Opaque
    +stringData:
    +  url: <nfs_server:/nfs_path> (2)
    +EOF
    +
    +
    +
    (1) The ownerReferences section is optional.
    (2) nfs_server is the IP address or host name of the server where the share was created, and nfs_path is the path on the server where the OVA files are stored.
    +
    +
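
    For example, a share created on a hypothetical NFS server 10.10.0.10 and exported at /ova would be written as url: 10.10.0.10:/ova.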
  2. +
+
+
+
    +
  1. +

    Create a Provider manifest for the source provider:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Provider
    +metadata:
    +  name: <source_provider>
    +  namespace: <namespace>
    +spec:
    +  type: ova
    +  url:  <nfs_server:/nfs_path> (1)
    +  secret:
    +    name: <secret> (2)
    +    namespace: <namespace>
    +EOF
    +
    +
    +
    (1) nfs_server is the IP address or host name of the server where the share was created, and nfs_path is the path on the server where the OVA files are stored.
    (2) Specify the name of the provider Secret CR.
    +
    +
  2. +
+
+
+
    +
  1. +

    Create a NetworkMap manifest to map the source and destination networks:

    +
    +
    +
    $  cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: NetworkMap
    +metadata:
    +  name: <network_map>
    +  namespace: <namespace>
    +spec:
    +  map:
    +    - destination:
    +        name: <network_name>
    +        type: pod (1)
    +      source:
    +        id: <source_network_id> (2)
    +    - destination:
    +        name: <network_attachment_definition> (3)
    +        namespace: <network_attachment_definition_namespace> (4)
    +        type: multus
    +      source:
    +        id: <source_network_id>
    +  provider:
    +    source:
    +      name: <source_provider>
    +      namespace: <namespace>
    +    destination:
    +      name: <destination_provider>
    +      namespace: <namespace>
    +EOF
    +
    +
    +
    (1) Allowed values are pod and multus.
    (2) Specify the OVA network Universally Unique Identifier (UUID).
    (3) Specify a network attachment definition for each additional KubeVirt network.
    (4) Required only when type is multus. Specify the namespace of the KubeVirt network attachment definition.
    +
    +
  2. +
+
+
+
    +
  1. +

    Create a StorageMap manifest to map source and destination storage:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: StorageMap
    +metadata:
    +  name: <storage_map>
    +  namespace: <namespace>
    +spec:
    +  map:
    +    - destination:
    +        storageClass: <storage_class>
    +        accessMode: <access_mode> (1)
    +      source:
    +        name:  Dummy storage for source provider <provider_name> (2)
    +  provider:
    +    source:
    +      name: <source_provider>
    +      namespace: <namespace>
    +    destination:
    +      name: <destination_provider>
    +      namespace: <namespace>
    +EOF
    +
    +
    +
    (1) Allowed values are ReadWriteOnce and ReadWriteMany.
    (2) For OVA, the StorageMap can map only a single storage, which all the disks from the OVA are associated with, to a storage class at the destination. For this reason, the storage is referred to in the UI as "Dummy storage for source provider <provider_name>". In the YAML, write the phrase as it appears above, without the quotation marks, replacing <provider_name> with the actual name of the provider.
    +
    +
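
    For example, for a hypothetical provider named ova-provider, the source entry would read name: Dummy storage for source provider ova-provider.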
  2. +
  3. +

    Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

    +
    +
    +
    $  cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Hook
    +metadata:
    +  name: <hook>
    +  namespace: <namespace>
    +spec:
    +  image: quay.io/konveyor/hook-runner
    +  playbook: |
    +    LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv
    +    YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s
    +    IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh
    +    cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr
    +    bG9hZAoK
    +EOF
    +
    +
    +
    +

    where:

    +
    +
    +

    playbook refers to an optional Base64-encoded Ansible Playbook. If you specify a playbook, the image must be hook-runner.

    +
    +
    Note:
    +

    You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.

    +
    +
    +
    +
  4. +
+
+
+
    +
  1. +

    Create a Plan manifest for the migration:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Plan
    +metadata:
    +  name: <plan> (1)
    +  namespace: <namespace>
    +spec:
    +  provider:
    +    source:
    +      name: <source_provider>
    +      namespace: <namespace>
    +    destination:
    +      name: <destination_provider>
    +      namespace: <namespace>
    +  map: (2)
    +    network: (3)
    +      name: <network_map> (4)
    +      namespace: <namespace>
    +    storage: (5)
    +      name: <storage_map> (6)
    +      namespace: <namespace>
    +  targetNamespace: <target_namespace>
    +  vms: (7)
    +    - id: <source_vm> (8)
    +    - name: <source_vm>
    +      hooks: (9)
    +        - hook:
    +            namespace: <namespace>
    +            name: <hook> (10)
    +          step: <step> (11)
    +EOF
    +
    +
    +
    (1) Specify the name of the Plan CR.
    (2) Specify only one network map and one storage map per plan.
    (3) Specify a network mapping, even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
    (4) Specify the name of the NetworkMap CR.
    (5) Specify a storage mapping, even if the VMs to be migrated are not assigned disk images. The mapping can be empty in this case.
    (6) Specify the name of the StorageMap CR.
    (7) You can use either the id or the name parameter to specify the source VMs.
    (8) Specify the OVA VM UUID.
    (9) Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.
    (10) Specify the name of the Hook CR.
    (11) Allowed values are PreHook, before the migration plan starts, or PostHook, after the migration is complete.
    +
    +
  2. +
  3. +

    Create a Migration manifest to run the Plan CR:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Migration
    +metadata:
    +  name: <name_of_migration_cr>
    +  namespace: <namespace>
    +spec:
    +  plan:
    +    name: <name_of_plan_cr>
    +    namespace: <namespace>
    +  cutover: <optional_cutover_time>
    +EOF
    +
    +
    +
    Note:
    +

    If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00.

    +
    +
    +
    +
  4. +
+
+
+
+

Migrating from a Red Hat KubeVirt source provider

+
+

You can use a Red Hat KubeVirt provider as either a source provider or as a destination provider.

+
+
+
Procedure
+
    +
  1. +

    Create a Secret manifest for the source provider credentials:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: v1
    +kind: Secret
    +metadata:
    +  name: <secret>
    +  namespace: <namespace>
    +  ownerReferences: (1)
    +    - apiVersion: forklift.konveyor.io/v1beta1
    +      kind: Provider
    +      name: <provider_name>
    +      uid: <provider_uid>
    +  labels:
    +    createdForProviderType: openshift
    +    createdForResourceType: providers
    +type: Opaque
    +stringData:
    +  token: <token> (2)
    +  password: <password> (3)
    +  insecureSkipVerify: <"true"/"false"> (4)
    +  cacert: | (5)
    +    <ca_certificate>
    +  url: <api_end_point> (6)
    +EOF
    +
    +
    +
    (1) The ownerReferences section is optional.
    (2) Specify a token for a service account with cluster-admin privileges. If both token and url are left blank, the local OKD cluster is used.
    (3) Specify the user password.
    (4) Specify "true" to skip certificate verification or "false" to verify the certificate. Defaults to "false" if not specified. Skipping certificate verification makes the migration insecure: the certificate is not required, and the transferred data is sent over an insecure connection, so potentially sensitive data could be exposed.
    (5) When this field is not set and skipping certificate verification is disabled, Forklift attempts to use the system CA.
    (6) Specify the URL of the endpoint of the API server.
    +
    +
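
    One way to obtain such a token on a recent cluster is with kubectl create token; the service account name shown here is hypothetical:

    $ kubectl create token migration-sa -n konveyor-forklift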
  2. +
+
+
+
    +
  1. +

    Create a Provider manifest for the source provider:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Provider
    +metadata:
    +  name: <source_provider>
    +  namespace: <namespace>
    +spec:
    +  type: openshift
    +  url: <api_end_point> (1)
    +  secret:
    +    name: <secret> (2)
    +    namespace: <namespace>
    +EOF
    +
    +
    +
    (1) Specify the URL of the endpoint of the API server.
    (2) Specify the name of the provider Secret CR.
    +
    +
  2. +
+
+
+
    +
  1. +

    Create a NetworkMap manifest to map the source and destination networks:

    +
    +
    +
    $  cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: NetworkMap
    +metadata:
    +  name: <network_map>
    +  namespace: <namespace>
    +spec:
    +  map:
    +    - destination:
    +        name: <network_name>
    +        type: pod (1)
    +      source:
    +        name: <network_name>
    +        type: pod
    +    - destination:
    +        name: <network_attachment_definition> (2)
    +        namespace: <network_attachment_definition_namespace> (3)
    +        type: multus
    +      source:
    +        name: <network_attachment_definition>
    +        namespace: <network_attachment_definition_namespace>
    +        type: multus
    +  provider:
    +    source:
    +      name: <source_provider>
    +      namespace: <namespace>
    +    destination:
    +      name: <destination_provider>
    +      namespace: <namespace>
    +EOF
    +
    +
    +
    (1) Allowed values are pod and multus.
    (2) Specify a network attachment definition for each additional KubeVirt network. Specify the namespace either by using the namespace property or with a name built as follows: <network_namespace>/<network_name>.
    (3) Required only when type is multus. Specify the namespace of the KubeVirt network attachment definition.
    +
    +
  2. +
+
+
+
    +
  1. +

    Create a StorageMap manifest to map source and destination storage:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: StorageMap
    +metadata:
    +  name: <storage_map>
    +  namespace: <namespace>
    +spec:
    +  map:
    +    - destination:
    +        storageClass: <storage_class>
    +        accessMode: <access_mode> (1)
    +      source:
    +        name: <storage_class>
    +  provider:
    +    source:
    +      name: <source_provider>
    +      namespace: <namespace>
    +    destination:
    +      name: <destination_provider>
    +      namespace: <namespace>
    +EOF
    +
    +
    +
    (1) Allowed values are ReadWriteOnce and ReadWriteMany.
    +
    +
  2. +
  3. +

    Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

    +
    +
    +
    $  cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Hook
    +metadata:
    +  name: <hook>
    +  namespace: <namespace>
    +spec:
    +  image: quay.io/konveyor/hook-runner
    +  playbook: |
    +    LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv
    +    YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s
    +    IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh
    +    cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr
    +    bG9hZAoK
    +EOF
    +
    +
    +
    +

    where:

    +
    +
    +

    playbook refers to an optional Base64-encoded Ansible Playbook. If you specify a playbook, the image must be hook-runner.

    +
    +
    Note:
    +

    You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.

    +
    +
    +
    +
  4. +
+
+
+
    +
  1. +

    Create a Plan manifest for the migration:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Plan
    +metadata:
    +  name: <plan> (1)
    +  namespace: <namespace>
    +spec:
    +  provider:
    +    source:
    +      name: <source_provider>
    +      namespace: <namespace>
    +    destination:
    +      name: <destination_provider>
    +      namespace: <namespace>
    +  map: (2)
    +    network: (3)
    +      name: <network_map> (4)
    +      namespace: <namespace>
    +    storage: (5)
    +      name: <storage_map> (6)
    +      namespace: <namespace>
    +  targetNamespace: <target_namespace>
    +  vms:
    +    - name: <source_vm>
    +      namespace: <namespace>
    +      hooks: (7)
    +        - hook:
    +            namespace: <namespace>
    +            name: <hook> (8)
    +          step: <step> (9)
    +EOF
    +
    +
    +
    (1) Specify the name of the Plan CR.
    (2) Specify only one network map and one storage map per plan.
    (3) Specify a network mapping, even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
    (4) Specify the name of the NetworkMap CR.
    (5) Specify a storage mapping, even if the VMs to be migrated are not assigned disk images. The mapping can be empty in this case.
    (6) Specify the name of the StorageMap CR.
    (7) Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.
    (8) Specify the name of the Hook CR.
    (9) Allowed values are PreHook, before the migration plan starts, or PostHook, after the migration is complete.
    +
    +
  2. +
  3. +

    Create a Migration manifest to run the Plan CR:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Migration
    +metadata:
    +  name: <name_of_migration_cr>
    +  namespace: <namespace>
    +spec:
    +  plan:
    +    name: <name_of_plan_cr>
    +    namespace: <namespace>
    +  cutover: <optional_cutover_time>
    +EOF
    +
    +
    +
    Note:
    +

    If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00.

    +
    +
    +
    +
  4. +
+
+
+
+
+

Canceling a migration

+
+

You can cancel an entire migration or individual virtual machines (VMs) while a migration is in progress from the command line interface (CLI).

+
+
+
Canceling an entire migration
+
    +
  • +

    Delete the Migration CR:

    +
    +
    +
    $ kubectl delete migration <migration> -n <namespace> (1)
    +
    +
    +
    (1) Specify the name of the Migration CR.
    +
    +
  • +
+
+
+
Canceling the migration of individual VMs
+
    +
  1. +

    Add the individual VMs to the spec.cancel block of the Migration manifest:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Migration
    +metadata:
    +  name: <migration>
    +  namespace: <namespace>
    +...
    +spec:
    +  cancel:
    +  - id: vm-102 (1)
    +  - id: vm-203
    +  - name: rhel8-vm
    +EOF
    +
    +
    +
    (1) You can specify a VM by using the id key or the name key.
    +

    The value of the id key is the managed object reference for a VMware VM, or the VM UUID for an oVirt VM.

    +
    +
    +
  2. +
  3. +

    Retrieve the Migration CR to monitor the progress of the remaining VMs:

    +
    +
    +
    $ kubectl get migration/<migration> -n <namespace> -o yaml
    +
    +
    +
  4. +
+
+
+
+
+
+

Advanced migration options

+
+
+

Changing precopy intervals for warm migration

+
+

You can change the snapshot interval by patching the ForkliftController custom resource (CR).

+
+
+
Procedure
+
    +
  • +

    Patch the ForkliftController CR:

    +
    +
    +
    $ kubectl patch forkliftcontroller/<forklift-controller> -n konveyor-forklift -p '{"spec": {"controller_precopy_interval": <60>}}' --type=merge (1)
    +
    +
    +
    (1) Specify the precopy interval in minutes. The default value is 60.
    +

    You do not need to restart the forklift-controller pod.

    +
    +
    +
  • +
+
+
+
+

Creating custom rules for the Validation service

+
+

The Validation service uses Open Policy Agent (OPA) policy rules to check the suitability of each virtual machine (VM) for migration. The Validation service generates a list of concerns for each VM, which are stored in the Provider Inventory service as VM attributes. The web console displays the concerns for each VM in the provider inventory.

+
+
+

You can create custom rules to extend the default ruleset of the Validation service. For example, you can create a rule that checks whether a VM has multiple disks.

+
+
+

About Rego files

+
+

Validation rules are written in Rego, the Open Policy Agent (OPA) native query language. The rules are stored as .rego files in the /usr/share/opa/policies/io/konveyor/forklift/<provider> directory of the Validation pod.

+
+
+

Each validation rule is defined in a separate .rego file and tests for a specific condition. If the condition evaluates as true, the rule adds a {"category", "label", "assessment"} hash to the concerns. The concerns content is added to the concerns key in the inventory record of the VM. The web console displays the content of the concerns key for each VM in the provider inventory.

+
+
+

The following .rego file example checks for distributed resource scheduling enabled in the cluster of a VMware VM:

+
+
+
drs_enabled.rego example
+
+
package io.konveyor.forklift.vmware (1)
+
+has_drs_enabled {
+    input.host.cluster.drsEnabled (2)
+}
+
+concerns[flag] {
+    has_drs_enabled
+    flag := {
+        "category": "Information",
+        "label": "VM running in a DRS-enabled cluster",
+        "assessment": "Distributed resource scheduling is not currently supported by OpenShift Virtualization. The VM can be migrated but it will not have this feature in the target environment."
+    }
+}
+
+
+
(1) Each validation rule is defined within a package. The package namespaces are io.konveyor.forklift.vmware for VMware and io.konveyor.forklift.ovirt for oVirt.
(2) Query parameters are based on the input key of the Validation service JSON.
+
+
+
+
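
If this rule evaluates as true for a VM, the concerns key of that VM's inventory record contains an entry built from the flag fields, along these lines:

"concerns": [
+    {
+        "category": "Information",
+        "label": "VM running in a DRS-enabled cluster",
+        "assessment": "Distributed resource scheduling is not currently supported by OpenShift Virtualization. The VM can be migrated but it will not have this feature in the target environment."
+    }
+]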

Checking the default validation rules

+
+

Before you create a custom rule, you must check the default rules of the Validation service to ensure that you do not create a rule that redefines an existing default value.

+
+
+

Example: If a default rule contains the line default valid_input = false and you create a custom rule that contains the line default valid_input = true, the Validation service will not start.

+
+
+
Procedure
+
    +
  1. +

    Connect to the terminal of the Validation pod:

    +
    +
    +
    $ kubectl exec -it <validation_pod> -- /bin/bash
    +
    +
    +
  2. +
  3. +

    Go to the OPA policies directory for your provider:

    +
    +
    +
    $ cd /usr/share/opa/policies/io/konveyor/forklift/<provider> (1)
    +
    +
    +
    (1) Specify vmware or ovirt.
    +
    +
  4. +
  5. +

    Search for the default policies:

    +
    +
    +
    $ grep -R "default" *
    +
    +
    +
  6. +
+
+
+
+

Creating a validation rule

+
+

You create a validation rule by applying a config map custom resource (CR) containing the rule to the Validation service.

+
+
Note:

  • If you create a rule with the same name as an existing rule, the Validation service performs an OR operation with the rules.

  • If you create a rule that contradicts a default rule, the Validation service will not start.
+
+
+
+
+
Validation rule example
+

Validation rules are based on virtual machine (VM) attributes collected by the Provider Inventory service.

+
+
+

For example, the VMware API uses this path to check whether a VMware VM has NUMA node affinity configured: MOR:VirtualMachine.config.extraConfig["numa.nodeAffinity"].

+
+
+

The Provider Inventory service simplifies this configuration and returns a testable attribute with a list value:

+
+
+
+
"numaNodeAffinity": [
+    "0",
+    "1"
+],
+
+
+
+

You create a Rego query, based on this attribute, and add it to the forklift-validation-config config map:

+
+
+
+
count(input.numaNodeAffinity) != 0
+
+
+
+
Procedure
+
    +
  1. +

    Create a config map CR according to the following example:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: v1
    +kind: ConfigMap
    +metadata:
    +  name: <forklift-validation-config>
    +  namespace: konveyor-forklift
    +data:
    +  vmware_multiple_disks.rego: |-
    +    package <provider_package> (1)
    +
    +    has_multiple_disks { (2)
    +      count(input.disks) > 1
    +    }
    +
    +    concerns[flag] {
    +      has_multiple_disks (3)
    +        flag := {
    +          "category": "<Information>", (4)
    +          "label": "Multiple disks detected",
    +          "assessment": "Multiple disks detected on this VM."
    +        }
    +    }
    +EOF
    +
    +
    +
    (1) Specify the provider package name. Allowed values are io.konveyor.forklift.vmware for VMware and io.konveyor.forklift.ovirt for oVirt.
    (2) Specify the concerns name and Rego query.
    (3) Specify the concerns name and flag parameter values.
    (4) Allowed values are Critical, Warning, and Information.
    +
    +
  2. +
  3. +

    Stop the Validation pod by scaling the forklift-controller deployment to 0:

    +
    +
    +
    $ kubectl scale -n konveyor-forklift --replicas=0 deployment/forklift-controller
    +
    +
    +
  4. +
  5. +

    Start the Validation pod by scaling the forklift-controller deployment to 1:

    +
    +
    +
    $ kubectl scale -n konveyor-forklift --replicas=1 deployment/forklift-controller
    +
    +
    +
  6. +
  7. +

    Check the Validation pod log to verify that the pod started:

    +
    +
    +
    $ kubectl logs -f <validation_pod>
    +
    +
    +
    +

    If the custom rule conflicts with a default rule, the Validation pod will not start.

    +
    +
  8. +
  9. +

    Remove the source provider:

    +
    +
    +
    $ kubectl delete provider <provider> -n konveyor-forklift
    +
    +
    +
  10. +
  11. +

    Add the source provider to apply the new rule:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Provider
    +metadata:
    +  name: <provider>
    +  namespace: konveyor-forklift
    +spec:
    +  type: <provider_type> (1)
    +  url: <api_end_point> (2)
    +  secret:
    +    name: <secret> (3)
    +    namespace: konveyor-forklift
    +EOF
    +
    +
    +
    (1) Allowed values are ovirt, vsphere, and openstack.
    (2) Specify the API endpoint URL, for example, https://<vCenter_host>/sdk for vSphere, https://<engine_host>/ovirt-engine/api for oVirt, or https://<identity_service>/v3 for OpenStack.
    (3) Specify the name of the provider Secret CR.
    +
    +
  12. +
+
+
+

You must update the rules version after creating a custom rule so that the Inventory service detects the changes and validates the VMs.

+
+
+
+

Updating the inventory rules version

+
+

You must update the inventory rules version each time you update the rules so that the Provider Inventory service detects the changes and triggers the Validation service.

+
+
+

The rules version is recorded in a rules_version.rego file for each provider.

+
+
+
Procedure
+
    +
  1. +

    Retrieve the current rules version:

    +
    +
    +
    $ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version
    +
    +
    +
    +
    Example output
    +
    +
    {
    +   "result": {
    +       "rules_version": 5
    +   }
    +}
    +
    +
    +
  2. +
  3. +

    Connect to the terminal of the Validation pod:

    +
    +
    +
    $ kubectl exec -it <validation_pod> -- /bin/bash
    +
    +
    +
  4. +
  5. +

    Update the rules version in the /usr/share/opa/policies/io/konveyor/forklift/<provider>/rules_version.rego file.

    +
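
    The file declares the version value that the rules_version endpoint returns; incrementing it might look like this (a sketch, assuming the package layout shown earlier):

    package io.konveyor.forklift.vmware
    +
    +rules_version = 6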
  6. +
  7. +

    Log out of the Validation pod terminal.

    +
  8. +
  9. +

    Verify the updated rules version:

    +
    +
    +
    $ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version
    +
    +
    +
    +
    Example output
    +
    +
    {
    +   "result": {
    +       "rules_version": 6
    +   }
    +}
    +
    +
    +
  10. +
+
+
+
+
+

Retrieving the Inventory service JSON

+
+

You retrieve the Inventory service JSON by sending an Inventory service query to a virtual machine (VM). The output contains an "input" key, which contains the inventory attributes that are queried by the Validation service rules.

+
+
+

You can create a validation rule based on any attribute in the "input" key, for example, input.snapshot.kind.

+
+
+
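
For example, a hypothetical rule that flags VMs with a snapshot could be built on that attribute, following the package conventions described earlier:

package io.konveyor.forklift.vmware
+
+has_snapshot {
+    input.snapshot.kind == "VirtualMachineSnapshot"
+}
+
+concerns[flag] {
+    has_snapshot
+    flag := {
+        "category": "Warning",
+        "label": "VM snapshot detected",
+        "assessment": "Snapshots are not migrated with the VM."
+    }
+}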
Procedure
+
    +
  1. +

    Retrieve the routes for the project:

    +
    +
    +
    $ kubectl get route -n konveyor-forklift
    +
    +
    +
  2. +
  3. +

    Retrieve the Inventory service route:

    +
    +
    +
    $ kubectl get route <inventory_service> -n konveyor-forklift
    +
    +
    +
  4. +
  5. +

    Retrieve the access token:

    +
    +
    +
    $ TOKEN=$(oc whoami -t)
    +
    +
    +
  6. +
  7. +

    Trigger an HTTP GET request (for example, using Curl):

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers -k
    +
    +
    +
  8. +
  9. +

    Retrieve the UUID of a provider:

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/<provider> -k (1)
    +
    +
    +
    (1) Allowed values for the provider are vsphere, ovirt, and openstack.
    +
    +
  10. +
  11. +

    Retrieve the VMs of a provider:

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/<provider>/<UUID>/vms -k
    +
    +
    +
  12. +
  13. +

    Retrieve the details of a VM:

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/<provider>/<UUID>/workloads/<vm> -k
    +
    +
    +
    +
    Example output
    +
    +
    {
    +    "input": {
    +        "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/workloads/vm-431",
    +        "id": "vm-431",
    +        "parent": {
    +            "kind": "Folder",
    +            "id": "group-v22"
    +        },
    +        "revision": 1,
    +        "name": "iscsi-target",
    +        "revisionValidated": 1,
    +        "isTemplate": false,
    +        "networks": [
    +            {
    +                "kind": "Network",
    +                "id": "network-31"
    +            },
    +            {
    +                "kind": "Network",
    +                "id": "network-33"
    +            }
    +        ],
    +        "disks": [
    +            {
    +                "key": 2000,
    +                "file": "[iSCSI_Datastore] iscsi-target/iscsi-target-000001.vmdk",
    +                "datastore": {
    +                    "kind": "Datastore",
    +                    "id": "datastore-63"
    +                },
    +                "capacity": 17179869184,
    +                "shared": false,
    +                "rdm": false
    +            },
    +            {
    +                "key": 2001,
    +                "file": "[iSCSI_Datastore] iscsi-target/iscsi-target_1-000001.vmdk",
    +                "datastore": {
    +                    "kind": "Datastore",
    +                    "id": "datastore-63"
    +                },
    +                "capacity": 10737418240,
    +                "shared": false,
    +                "rdm": false
    +            }
    +        ],
    +        "concerns": [],
    +        "policyVersion": 5,
    +        "uuid": "42256329-8c3a-2a82-54fd-01d845a8bf49",
    +        "firmware": "bios",
    +        "powerState": "poweredOn",
    +        "connectionState": "connected",
    +        "snapshot": {
    +            "kind": "VirtualMachineSnapshot",
    +            "id": "snapshot-3034"
    +        },
    +        "changeTrackingEnabled": false,
    +        "cpuAffinity": [
    +            0,
    +            2
    +        ],
    +        "cpuHotAddEnabled": true,
    +        "cpuHotRemoveEnabled": false,
    +        "memoryHotAddEnabled": false,
    +        "faultToleranceEnabled": false,
    +        "cpuCount": 2,
    +        "coresPerSocket": 1,
    +        "memoryMB": 2048,
    +        "guestName": "Red Hat Enterprise Linux 7 (64-bit)",
    +        "balloonedMemory": 0,
    +        "ipAddress": "10.19.2.96",
    +        "storageUsed": 30436770129,
    +        "numaNodeAffinity": [
    +            "0",
    +            "1"
    +        ],
    +        "devices": [
    +            {
    +                "kind": "RealUSBController"
    +            }
    +        ],
    +        "host": {
    +            "id": "host-29",
    +            "parent": {
    +                "kind": "Cluster",
    +                "id": "domain-c26"
    +            },
    +            "revision": 1,
    +            "name": "IP address or host name of the vCenter host or oVirt Engine host",
    +            "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/hosts/host-29",
    +            "status": "green",
    +            "inMaintenance": false,
    +            "managementServerIp": "10.19.2.96",
    +            "thumbprint": <thumbprint>,
    +            "timezone": "UTC",
    +            "cpuSockets": 2,
    +            "cpuCores": 16,
    +            "productName": "VMware ESXi",
    +            "productVersion": "6.5.0",
    +            "networking": {
    +                "pNICs": [
    +                    {
    +                        "key": "key-vim.host.PhysicalNic-vmnic0",
    +                        "linkSpeed": 10000
    +                    },
    +                    {
    +                        "key": "key-vim.host.PhysicalNic-vmnic1",
    +                        "linkSpeed": 10000
    +                    },
    +                    {
    +                        "key": "key-vim.host.PhysicalNic-vmnic2",
    +                        "linkSpeed": 10000
    +                    },
    +                    {
    +                        "key": "key-vim.host.PhysicalNic-vmnic3",
    +                        "linkSpeed": 10000
    +                    }
    +                ],
    +                "vNICs": [
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk2",
    +                        "portGroup": "VM_Migration",
    +                        "dPortGroup": "",
    +                        "ipAddress": "192.168.79.13",
    +                        "subnetMask": "255.255.255.0",
    +                        "mtu": 9000
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk0",
    +                        "portGroup": "Management Network",
    +                        "dPortGroup": "",
    +                        "ipAddress": "10.19.2.13",
    +                        "subnetMask": "255.255.255.128",
    +                        "mtu": 1500
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk1",
    +                        "portGroup": "Storage Network",
    +                        "dPortGroup": "",
    +                        "ipAddress": "172.31.2.13",
    +                        "subnetMask": "255.255.0.0",
    +                        "mtu": 1500
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk3",
    +                        "portGroup": "",
    +                        "dPortGroup": "dvportgroup-48",
    +                        "ipAddress": "192.168.61.13",
    +                        "subnetMask": "255.255.255.0",
    +                        "mtu": 1500
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk4",
    +                        "portGroup": "VM_DHCP_Network",
    +                        "dPortGroup": "",
    +                        "ipAddress": "10.19.2.231",
    +                        "subnetMask": "255.255.255.128",
    +                        "mtu": 1500
    +                    }
    +                ],
    +                "portGroups": [
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM Network",
    +                        "name": "VM Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch0"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-Management Network",
    +                        "name": "Management Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch0"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_10G_Network",
    +                        "name": "VM_10G_Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_Storage",
    +                        "name": "VM_Storage",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_DHCP_Network",
    +                        "name": "VM_DHCP_Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-Storage Network",
    +                        "name": "Storage Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_Isolated_67",
    +                        "name": "VM_Isolated_67",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch2"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_Migration",
    +                        "name": "VM_Migration",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch2"
    +                    }
    +                ],
    +                "switches": [
    +                    {
    +                        "key": "key-vim.host.VirtualSwitch-vSwitch0",
    +                        "name": "vSwitch0",
    +                        "portGroups": [
    +                            "key-vim.host.PortGroup-VM Network",
    +                            "key-vim.host.PortGroup-Management Network"
    +                        ],
    +                        "pNICs": [
    +                            "key-vim.host.PhysicalNic-vmnic4"
    +                        ]
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualSwitch-vSwitch1",
    +                        "name": "vSwitch1",
    +                        "portGroups": [
    +                            "key-vim.host.PortGroup-VM_10G_Network",
    +                            "key-vim.host.PortGroup-VM_Storage",
    +                            "key-vim.host.PortGroup-VM_DHCP_Network",
    +                            "key-vim.host.PortGroup-Storage Network"
    +                        ],
    +                        "pNICs": [
    +                            "key-vim.host.PhysicalNic-vmnic2",
    +                            "key-vim.host.PhysicalNic-vmnic0"
    +                        ]
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualSwitch-vSwitch2",
    +                        "name": "vSwitch2",
    +                        "portGroups": [
    +                            "key-vim.host.PortGroup-VM_Isolated_67",
    +                            "key-vim.host.PortGroup-VM_Migration"
    +                        ],
    +                        "pNICs": [
    +                            "key-vim.host.PhysicalNic-vmnic3",
    +                            "key-vim.host.PhysicalNic-vmnic1"
    +                        ]
    +                    }
    +                ]
    +            },
    +            "networks": [
    +                {
    +                    "kind": "Network",
    +                    "id": "network-31"
    +                },
    +                {
    +                    "kind": "Network",
    +                    "id": "network-34"
    +                },
    +                {
    +                    "kind": "Network",
    +                    "id": "network-57"
    +                },
    +                {
    +                    "kind": "Network",
    +                    "id": "network-33"
    +                },
    +                {
    +                    "kind": "Network",
    +                    "id": "dvportgroup-47"
    +                }
    +            ],
    +            "datastores": [
    +                {
    +                    "kind": "Datastore",
    +                    "id": "datastore-35"
    +                },
    +                {
    +                    "kind": "Datastore",
    +                    "id": "datastore-63"
    +                }
    +            ],
    +            "vms": null,
    +            "networkAdapters": [],
    +            "cluster": {
    +                "id": "domain-c26",
    +                "parent": {
    +                    "kind": "Folder",
    +                    "id": "group-h23"
    +                },
    +                "revision": 1,
    +                "name": "mycluster",
    +                "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/clusters/domain-c26",
    +                "folder": "group-h23",
    +                "networks": [
    +                    {
    +                        "kind": "Network",
    +                        "id": "network-31"
    +                    },
    +                    {
    +                        "kind": "Network",
    +                        "id": "network-34"
    +                    },
    +                    {
    +                        "kind": "Network",
    +                        "id": "network-57"
    +                    },
    +                    {
    +                        "kind": "Network",
    +                        "id": "network-33"
    +                    },
    +                    {
    +                        "kind": "Network",
    +                        "id": "dvportgroup-47"
    +                    }
    +                ],
    +                "datastores": [
    +                    {
    +                        "kind": "Datastore",
    +                        "id": "datastore-35"
    +                    },
    +                    {
    +                        "kind": "Datastore",
    +                        "id": "datastore-63"
    +                    }
    +                ],
    +                "hosts": [
    +                    {
    +                        "kind": "Host",
    +                        "id": "host-44"
    +                    },
    +                    {
    +                        "kind": "Host",
    +                        "id": "host-29"
    +                    }
    +                ],
    +                "dasEnabled": false,
    +                "dasVms": [],
    +                "drsEnabled": true,
    +                "drsBehavior": "fullyAutomated",
    +                "drsVms": [],
    +                "datacenter": null
    +            }
    +        }
    +    }
    +}
    +
    +
    +
  14. +
+
+
+
+

Adding hooks to a migration plan

+
+

You can add hooks to a migration plan from the command line by using the Forklift API.

+
+
+

API-based hooks for Forklift migration plans

+
+

You can add hooks to a migration plan from the command line by using the Forklift API.

+
+
Default hook image
+
+

The default hook image for a Forklift hook is registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel8:v1.8.2-2. The image is based on the Ansible Runner image with the addition of python-openshift to provide Ansible Kubernetes resources and a recent oc binary.

+
+
Hook execution
+
+

An Ansible playbook that is provided as part of a migration hook is mounted into the hook container as a ConfigMap. The hook container is run as a job on the desired cluster, using the default ServiceAccount in the konveyor-forklift namespace.

+
+
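
Because the hook runs as a job, you can inspect it with standard tooling, for example (the job name here is hypothetical):

$ kubectl get jobs -n konveyor-forklift
+$ kubectl logs -n konveyor-forklift job/<hook_job>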
PreHooks and PostHooks
+
+

You specify hooks per VM and you can run each as a PreHook or a PostHook. In this context, a PreHook is a hook that is run before a migration and a PostHook is a hook that is run after a migration.

+
+
+

When you add a hook, you must specify the namespace where the hook CR is located, the name of the hook, and specify whether the hook is a PreHook or PostHook.

+
+
Note:
+

For a PreHook to run on a VM, the VM must be started and accessible via SSH.

+
+
+
+
+
Example PreHook:
+
+
kind: Plan
+apiVersion: forklift.konveyor.io/v1beta1
+metadata:
+  name: test
+  namespace: konveyor-forklift
+spec:
+  vms:
+    - id: vm-2861
+      hooks:
+        - hook:
+            namespace: konveyor-forklift
+            name: playbook
+          step: PreHook
+
+
+
+
+

Adding Hook CRs to a VM migration by using the Forklift API

+
+

You can add a PreHook or a PostHook Hook CR when you migrate a virtual machine from the command line by using the Forklift API. A PreHook runs before a migration; a PostHook runs after it.

+
+
Note:
+

You can retrieve additional information stored in a secret or in a ConfigMap by using an Ansible k8s module, such as k8s_info.

+
+
+
+
+

For example, you can create a hook CR to install cloud-init on a VM and write a file before migration.

+
+
+
Procedure
+
    +
  1. +

    If needed, create a secret with an SSH private key for the VM. You can either use an existing key or generate a key pair, install the public key on the VM, and base64 encode the private key in the secret.

    +
    +
    +
    apiVersion: v1
    +data:
    +  key: VGhpcyB3YXMgZ2VuZXJhdGVkIHdpdGggc3NoLWtleWdlbiBwdXJlbHkgZm9yIHRoaXMgZXhhbXBsZS4KSXQgaXMgbm90IHVzZWQgYW55d2hlcmUuCi0tLS0tQkVHSU4gT1BFTlNTSCBQUklWQVRFIEtFWS0tLS0tCmIzQmxibk56YUMxclpYa3RkakVBQUFBQUJHNXZibVVBQUFBRWJtOXVaUUFBQUFBQUFBQUJBQUFCbHdBQUFBZHpjMmd0Y24KTmhBQUFBQXdFQUFRQUFBWUVBMzVTTFRReDBFVjdPTWJQR0FqcEsxK2JhQURTTVFuK1NBU2pyTGZLNWM5NGpHdzhDbnA4LwovRHErZHFBR1pxQkg2ZnAxYmVJM1BZZzVWVDk0RVdWQ2RrTjgwY3dEcEo0Z1R0NHFUQ1gzZUYvY2x5VXQyUC9zaTNjcnQ0CjBQdi9wVnZXU1U2TlhHaDJIZC93V0MwcGh5Z0RQOVc5SHRQSUF0OFpnZmV2ZnUwZHpraVl6OHNVaElWU2ZsRGpaNUFqcUcKUjV2TVVUaGlrczEvZVlCeTdiMkFFSEdzYU8xN3NFbWNiYUlHUHZuUFVwWmQrdjkyYU1JdWZoYjhLZkFSbzZ3Ty9ISW1VbQovdDdHWFBJUmxBMUhSV0p1U05odTQzZS9DY3ZYd3Z6RnZrdE9kYXlEQzBMTklHMkpVaURlNWd0UUQ1WHZXc1p3MHQvbEs1CklacjFrZXZRNUJsYWNISmViV1ZNYUQvdllpdFdhSFo4OEF1Y0czaGh2bjkrOGNSTGhNVExiVlFSMWh2UVpBL1JtQXN3eE0KT3VJSmRaUmtxTThLZlF4Z28zQThRNGJhQW1VbnpvM3Zwa0FWdC9uaGtIOTRaRE5rV2U2RlRhdThONStyYTJCZkdjZVA4VApvbjFEeTBLRlpaUlpCREVVRVc0eHdTYUVOYXQ3c2RDNnhpL1d5OURaQUFBRm1NRFBXeDdBejFzZUFBQUFCM056YUMxeWMyCkVBQUFHQkFOK1VpMDBNZEJGZXpqR3p4Z0k2U3RmbTJnQTBqRUova2dFbzZ5M3l1WFBlSXhzUEFwNmZQL3c2dm5hZ0JtYWcKUituNmRXM2lOejJJT1ZVL2VCRmxRblpEZk5ITUE2U2VJRTdlS2t3bDkzaGYzSmNsTGRqLzdJdDNLN2VORDcvNlZiMWtsTwpqVnhvZGgzZjhGZ3RLWWNvQXovVnZSN1R5QUxmR1lIM3IzN3RIYzVJbU0vTEZJU0ZVbjVRNDJlUUk2aGtlYnpGRTRZcExOCmYzbUFjdTI5Z0JCeHJHanRlN0JKbkcyaUJqNzV6MUtXWGZyL2RtakNMbjRXL0Nud0VhT3NEdnh5SmxKdjdleGx6eUVaUU4KUjBWaWJrallidU4zdnduTDE4TDh4YjVMVG5Xc2d3dEN6U0J0aVZJZzN1WUxVQStWNzFyR2NOTGY1U3VTR2E5WkhyME9RWgpXbkJ5WG0xbFRHZy83MklyVm1oMmZQQUxuQnQ0WWI1L2Z2SEVTNFRFeTIxVUVkWWIwR1FQMFpnTE1NVERyaUNYV1VaS2pQCkNuME1ZS053UEVPRzJnSmxKODZONzZaQUZiZjU0WkIvZUdRelpGbnVoVTJydkRlZnEydGdYeG5Iai9FNko5UTh0Q2hXV1UKV1FReEZCRnVNY0VtaERXcmU3SFF1c1l2MXN2UTJRQUFBQU1CQUFFQUFBR0JBSlZtZklNNjdDQmpXcU9KdnFua2EvakRrUwo4TDdpSE5mekg1TnRZWVdPWmRMTlk2L0lRa1pDeFcwTWtSKzlUK0M3QUZKZzBNV2Q5ck5PeUxJZDkxNjZoOVJsNG0xdFJjCnViZ1o2dWZCZ3hGVDlXS21mSEdCNm4zelh5b2pQOEFJTnR6ODVpaUVHVXFFRWtVRVdMd0RGSmdvcFllQ3l1VmZ2ZE92MUgKRm1WWmEwNVo0b3NQNkNENXVmc2djQ1RYQTR6VnZ5ZHVCYkxqdHN5RjdYZjNUdjZUQ1QxU0swZHErQk1OOXRvb0RZaXpwagpzbDh6NzlybXp3eUFyWFlVcnFUUkpsNmpwRkNrWHJLcy9LeG96MHhhbXlMY2RORk9hWE51LzlnTkpjRERsV2hPcFRqNHk4CkpkNXBuV1Jueis1RHJLRFdhY0loUW1CMUxVd2ZLWmQwbVFxaUpzMUMxcXZVUmlKOGExaThKUTI4bHFuWTFRRk9wbk13emcKWEpla2FndThpT1ExRFJlQkhaM0NkcVJUYnY3bVJZSGxramx0dXJmZGc4M3hvM0ErZ1JSR001eUVOcW5xSkplQjhJQVB5UwptMFp0dGdqbHNqNTJ2K1B1NmExMHoxZndKK1VML2N6dTRKeEpOYlp6WTFIMnpLODJBaVI1T3JYNmx2aUEvSWFSRVcwUUFBCkFNQndVeUJpcUc5bEZCUnltL2UvU1VORVMzdHpicUZNdTdIcy84WTV5SnAxKzR6OXUxNGtJR2ttV0Y5eE5HT3hrY3V0cWwKeHVUcndMbjFUaFNQTHQrTjUwTGhVdzR4ZjBhNUxqemdPbklPU0FRbm5HY1Nxa0dTRDlMR21obGE2WmpydFBHY29lQ3JHdAo5M1Vvcmx5YkxNRzFFRFAxWmpKS1RaZzl6OUMwdDlTTGd3ei9DbFhydW9UNXNQVUdKWnUrbHlIZXpSTDRtcHl6OEZMcnlOCkdNci9leVM5bWdISjNVVkZEYjNIZ3BaK1E1SUdBRU5rZVZEcHIwMGhCZXZndGd6YWtBQUFEQkFQVXQ1RitoMnBVby94V1YKenRkcVQvMzA4dFB5MXVMMU1lWFoydEJPQmRwSDJyd0JzdWt0aTIySGtWZUZXQjJFdUlFUXppMzY3MGc1UGdxR1p4Vng4dQpobEE0Rkg4ZXN1NTNQckZqVW9EeFJhb3d3WXBFcFh5Y2pnNUE1MStwR1VQcWljWjB0YjliaWlhc3BWWXZhWW5sdGlnVG5iClN0UExMY29nemNiL0dGcVYyaXlzc3lwTlMwKzBNRTUxcEtxWGNaS2swbi8vVHpZWWs4TW8vZzRsQ3pmUEZQUlZrVVM5blIKWU1pQzRlcEk0TERmbVdnM0xLQ2N1Zk85all3aWgwYlFBQUFNRUE2WEtldDhEMHNvc0puZVh5WFZGd0dyVyszNlhBVGRQTwpMWDdjaStjYzFoOGV1eHdYQWx3aTJJNFhxSmJBVjBsVEhuVGEycXN3Uy9RQlpJUUJWSkZlVjVyS1daZTc4R2F3d1pWTFZNCldETmNwdFFyRTFaM2pGNS9TdUVzdlVxSDE0Tkc5RUFXWG1iUkNzelE0Vlk3NzQrSi9sTFkvMnlDT1diNzlLYTJ5OGxvYUoKVXczWWVtSld3blp2R3hKNldsL3BmQ2xYN3lEVXlXUktLdGl0cWNjbmpCWVkyRE1tZURwdURDYy9ZdDZDc3dLRmRkMkJ1UwpGZGt5cDlZY3VMaDlLZEFBQUFIR3BoYzI5dVFFRlVMVGd3TWxVdWJXOXVkR3hsYjI0dWF
XNTBjbUVCQWdNRUJRWT0KLS0tLS1FTkQgT1BFTlNTSCBQUklWQVRFIEtFWS0tLS0tCgo=
    +kind: Secret
    +metadata:
    +  name: ssh-credentials
    +  namespace: konveyor-forklift
    +type: Opaque
    +
    +
    +
  2. +
  3. +

    Encode your playbook by concatenating the file and piping it to base64, for example:

    +
    +
    +
    $ cat playbook.yml | base64 -w0
    +
    +
    +
    +

    You can also use a here document to encode a playbook:

    +
    +
    +
    +
    $ cat << EOF | base64 -w0
    +- hosts: localhost
    +  tasks:
    +  - debug:
    +      msg: test
    +EOF
    +
    +
    +
    +
    +
  4. +
  5. +

    Create a Hook CR:

    +
    +
    +
    apiVersion: forklift.konveyor.io/v1beta1
    +kind: Hook
    +metadata:
    +  name: playbook
    +  namespace: konveyor-forklift
    +spec:
    +  image: registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel8:v1.8.2-2
    +  playbook: LSBuYW1lOiBNYWluCiAgaG9zdHM6IGxvY2FsaG9zdAogIHRhc2tzOgogIC0gbmFtZTogTG9hZCBQbGFuCiAgICBpbmNsdWRlX3ZhcnM6CiAgICAgIGZpbGU6IHBsYW4ueW1sCiAgICAgIG5hbWU6IHBsYW4KCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3ZhcnM6CiAgICAgIGZpbGU6IHdvcmtsb2FkLnltbAogICAgICBuYW1lOiB3b3JrbG9hZAoKICAtIG5hbWU6IAogICAgZ2V0ZW50OgogICAgICBkYXRhYmFzZTogcGFzc3dkCiAgICAgIGtleTogInt7IGFuc2libGVfdXNlcl9pZCB9fSIKICAgICAgc3BsaXQ6ICc6JwoKICAtIG5hbWU6IEVuc3VyZSBTU0ggZGlyZWN0b3J5IGV4aXN0cwogICAgZmlsZToKICAgICAgcGF0aDogfi8uc3NoCiAgICAgIHN0YXRlOiBkaXJlY3RvcnkKICAgICAgbW9kZTogMDc1MAogICAgZW52aXJvbm1lbnQ6CiAgICAgIEhPTUU6ICJ7eyBhbnNpYmxlX2ZhY3RzLmdldGVudF9wYXNzd2RbYW5zaWJsZV91c2VyX2lkXVs0XSB9fSIKCiAgLSBrOHNfaW5mbzoKICAgICAgYXBpX3ZlcnNpb246IHYxCiAgICAgIGtpbmQ6IFNlY3JldAogICAgICBuYW1lOiBzc2gtY3JlZGVudGlhbHMKICAgICAgbmFtZXNwYWNlOiBrb252ZXlvci1mb3JrbGlmdAogICAgcmVnaXN0ZXI6IHNzaF9jcmVkZW50aWFscwoKICAtIG5hbWU6IENyZWF0ZSBTU0gga2V5CiAgICBjb3B5OgogICAgICBkZXN0OiB+Ly5zc2gvaWRfcnNhCiAgICAgIGNvbnRlbnQ6ICJ7eyBzc2hfY3JlZGVudGlhbHMucmVzb3VyY2VzWzBdLmRhdGEua2V5IHwgYjY0ZGVjb2RlIH19IgogICAgICBtb2RlOiAwNjAwCgogIC0gYWRkX2hvc3Q6CiAgICAgIG5hbWU6ICJ7eyB3b3JrbG9hZC52bS5pcGFkZHJlc3MgfX0iCiAgICAgIGFuc2libGVfdXNlcjogcm9vdAogICAgICBncm91cHM6IHZtcwoKLSBob3N0czogdm1zCiAgdGFza3M6CiAgLSBuYW1lOiBJbnN0YWxsIGNsb3VkLWluaXQKICAgIGRuZjoKICAgICAgbmFtZToKICAgICAgLSBjbG91ZC1pbml0CiAgICAgIHN0YXRlOiBsYXRlc3QKCiAgLSBuYW1lOiBDcmVhdGUgVGVzdCBGaWxlCiAgICBjb3B5OgogICAgICBkZXN0OiAvdGVzdC50eHQKICAgICAgY29udGVudDogIkhlbGxvIFdvcmxkIgogICAgICBtb2RlOiAwNjQ0Cg==
    +  serviceAccount: forklift-controller (1)
    +
    +
    +
    1. Specify a serviceAccount with which to run the hook, in order to control access to resources on the cluster.
    +

    To decode an attached playbook, retrieve the resource with custom output and pipe it to base64 for decoding. For example:

    +
    +
    +
    +
    $ oc get -n konveyor-forklift hook playbook -o \
    +   go-template='{{ .spec.playbook }}' | base64 -d
    +
    +
    +
    +
    +
    +

    The playbook encoded here runs the following:

    +
    +
    +
    +
    - name: Main
    +  hosts: localhost
    +  tasks:
    +  - name: Load Plan
    +    include_vars:
    +      file: plan.yml
    +      name: plan
    +
    +  - name: Load Workload
    +    include_vars:
    +      file: workload.yml
    +      name: workload
    +
    +  - name:
    +    getent:
    +      database: passwd
    +      key: "{{ ansible_user_id }}"
    +      split: ':'
    +
    +  - name: Ensure SSH directory exists
    +    file:
    +      path: ~/.ssh
    +      state: directory
    +      mode: 0750
    +    environment:
    +      HOME: "{{ ansible_facts.getent_passwd[ansible_user_id][4] }}"
    +
    +  - k8s_info:
    +      api_version: v1
    +      kind: Secret
    +      name: ssh-credentials
    +      namespace: konveyor-forklift
    +    register: ssh_credentials
    +
    +  - name: Create SSH key
    +    copy:
    +      dest: ~/.ssh/id_rsa
    +      content: "{{ ssh_credentials.resources[0].data.key | b64decode }}"
    +      mode: 0600
    +
    +  - add_host:
    +      name: "{{ workload.vm.ipaddress }}"
    +      ansible_user: root
    +      groups: vms
    +
    +- hosts: vms
    +  tasks:
    +  - name: Install cloud-init
    +    dnf:
    +      name:
    +      - cloud-init
    +      state: latest
    +
    +  - name: Create Test File
    +    copy:
    +      dest: /test.txt
    +      content: "Hello World"
    +      mode: 0644
    +
    +
    +
    +
  6. +
  7. +

    Create a Plan CR using the hook:

    +
    +
    +
    kind: Plan
    +apiVersion: forklift.konveyor.io/v1beta1
    +metadata:
    +  name: test
    +  namespace: konveyor-forklift
    +spec:
    +  map:
    +    network:
    +      namespace: "konveyor-forklift"
    +      name: "network"
    +    storage:
    +      namespace: "konveyor-forklift"
    +      name: "storage"
    +  provider:
    +    source:
    +      namespace: "konveyor-forklift"
    +      name: "boston"
    +    destination:
    +      namespace: "konveyor-forklift"
    +      name: host
    +  targetNamespace: "konveyor-forklift"
    +  vms:
    +    - id: vm-2861
    +      hooks:
    +        - hook:
    +            namespace: konveyor-forklift
    +            name: playbook
    +          step: PreHook (1)
    +
    +
    +
    1. Options are PreHook, to run the hook before the migration, and PostHook, to run the hook after the migration.
    +
    +
  8. +
+
+
+

In order for a PreHook to run on a VM, the VM must be started and available via SSH.

+
+
+
+
+
+
+
+
+

Upgrading Forklift

+
+
+

You can upgrade the Forklift Operator by using the OKD web console to install the new version.

+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Operators > Installed Operators > Migration Toolkit for Virtualization Operator > Subscription.

    +
  2. +
  3. +

    Change the update channel to the correct release.

    +
    +

    See Changing update channel in the OKD documentation.

    +
    +
  4. +
  5. +

    Confirm that Upgrade status changes from Up to date to Upgrade available. If it does not, restart the CatalogSource pod:

    +
    +
      +
    1. +

      Note the catalog source, for example, redhat-operators.

      +
    2. +
    3. +

      From the command line, retrieve the catalog source pod:

      +
      +
      +
      $ kubectl get pod -n openshift-marketplace | grep <catalog_source>
      +
      +
      +
    4. +
    5. +

      Delete the pod:

      +
      +
      +
      $ kubectl delete pod -n openshift-marketplace <catalog_source_pod>
      +
      +
      +
      +

      Upgrade status changes from Up to date to Upgrade available.

      +
      +
      +

      If you set Update approval on the Subscriptions tab to Automatic, the upgrade starts automatically.

      +
      +
    6. +
    +
    +
  6. +
  7. +

    If you set Update approval on the Subscriptions tab to Manual, approve the upgrade.

    +
    +

    See Manually approving a pending upgrade in the OKD documentation.

    +
    +
  8. +
  9. +

    If you are upgrading from Forklift 2.2 and have defined VMware source providers, edit the VMware provider by adding a VDDK init image. Otherwise, the update will change the state of any VMware providers to Critical. For more information, see Adding a vSphere source provider.

    +
  10. +
  11. +

    If you mapped to NFS on the OKD destination provider in Forklift 2.2, edit the AccessModes and VolumeMode parameters in the NFS storage profile. Otherwise, the upgrade will invalidate the NFS mapping. For more information, see Customizing the storage profile.

    +
  12. +
+
+
+
+
+

Uninstalling Forklift

+
+
+

You can uninstall Forklift by using the OKD web console or the command line interface (CLI).

+
+
+

Uninstalling Forklift by using the OKD web console

+
+

You can uninstall Forklift by using the OKD web console.

+
+
+
Prerequisites
+
    +
  • +

    You must be logged in as a user with cluster-admin privileges.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Operators > Installed Operators.

    +
  2. +
  3. +

    Click Forklift Operator.

    +
    +

    The Operator Details page opens in the Details tab.

    +
    +
  4. +
  5. +

    Click the ForkliftController tab.

    +
  6. +
  7. +

    Click Actions and select Delete ForkliftController.

    +
    +

    A confirmation window opens.

    +
    +
  8. +
  9. +

    Click Delete.

    +
    +

    The controller is removed.

    +
    +
  10. +
  11. +

    Open the Details tab.

    +
    +

    The Create ForkliftController button appears instead of the controller you deleted. There is no need to click it.

    +
    +
  12. +
  13. +

    On the upper-right side of the page, click Actions and select Uninstall Operator.

    +
    +

    A confirmation window opens, displaying any operand instances.

    +
    +
  14. +
  15. +

    To delete all instances, select the Delete all operand instances for this operator checkbox. By default, the checkbox is cleared.

    +
    +

    If your Operator configured off-cluster resources, these will continue to run and will require manual cleanup.

    +
    +
    +
    +
  16. +
  17. +

    Click Uninstall.

    +
    +

    The Installed Operators page opens, and the Forklift Operator is removed from the list of installed Operators.

    +
    +
  18. +
  19. +

    Click Home > Overview.

    +
  20. +
  21. +

    In the Status section of the page, click Dynamic Plugins.

    +
    +

    The Dynamic Plugins popup opens, listing forklift-console-plugin as a failed plugin. If the forklift-console-plugin does not appear as a failed plugin, refresh the web console.

    +
    +
  22. +
  23. +

    Click forklift-console-plugin.

    +
    +

    The ConsolePlugin details page opens in the Details tab.

    +
    +
  24. +
  25. +

    On the upper right-hand side of the page, click Actions and select Delete ConsolePlugin from the list.

    +
    +

    A confirmation window opens.

    +
    +
  26. +
  27. +

    Click Delete.

    +
    +

    The plugin is removed from the list of Dynamic plugins on the Overview page. If the plugin still appears, refresh the Overview page.

    +
    +
  28. +
+
+
+
+

Uninstalling Forklift from the command line interface

+
+

You can uninstall Forklift from the command line interface (CLI).

+
+
+

This action does not remove resources managed by the Forklift Operator, including custom resource definitions (CRDs) and custom resources (CRs). To remove these after uninstalling the Forklift Operator, you might need to manually delete the Forklift Operator CRDs.

+
+
+
+
+
Prerequisites
+
    +
  • +

    You must be logged in as a user with cluster-admin privileges.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Delete the forklift controller by running the following command:

    +
    +
    +
    $ oc delete ForkliftController --all -n openshift-mtv
    +
    +
    +
  2. +
  3. +

    Delete the subscription to the Forklift Operator by running the following command:

    +
    +
    +
    $ oc get subscription -o name|grep 'mtv-operator'| xargs oc delete
    +
    +
    +
  4. +
  5. +

    Delete the clusterserviceversion for the Forklift Operator by running the following command:

    +
    +
    +
    $ oc get clusterserviceversion -o name|grep 'mtv-operator'| xargs oc delete
    +
    +
    +
  6. +
  7. +

    Delete the plugin console CR by running the following command:

    +
    +
    +
    $ oc delete ConsolePlugin forklift-console-plugin
    +
    +
    +
  8. +
  9. +

    Optional: Delete the custom resource definitions (CRDs) by running the following command:

    +
    +
    +
    $ kubectl get crd -o name | grep 'forklift.konveyor.io' | xargs kubectl delete
    +
    +
    +
  10. +
  11. +

    Optional: Perform cleanup by deleting the Forklift project by running the following command:

    +
    +
    +
    $ oc delete project openshift-mtv
    +
    +
    +
  12. +
+
+
+
+
+
+

Forklift performance recommendations

+
+
+

The purpose of this section is to share recommendations for efficient and effective migration of virtual machines (VMs) using Forklift, based on findings observed through testing.

+
+
+

The data provided here was collected from testing in Red Hat Labs and is provided for reference only. 

+
+
+

Overall, these numbers should be considered to show the best-case scenarios.

+
+
+

The observed performance of migration can differ from these results and depends on several factors.

+
+
+

Ensure fast storage and network speeds

+
+

Ensure fast storage and network speeds, both for VMware and OKD (OCP) environments.

+
+
+
    +
  • +

    To perform fast migrations, VMware must have fast read access to datastores. Networking between VMware ESXi hosts should be fast; ensure a 10 GbE network connection and avoid network bottlenecks.

    +
    +
      +
    • +

      Extend the VMware network to the OCP Workers Interface network environment.

      +
    • +
    • +

      Ensure that the VMware network offers high throughput (10 Gigabit Ethernet) so that reception rates align with the read rate of the ESXi datastore.

      +
    • +
    • +

      Be aware that the migration process consumes significant bandwidth on the migration network. If other services share that network, the migration may impact those services, and those services may reduce migration rates.

      +
    • +
    • +

      For example, 200 to 325 MiB/s was the average network transfer rate from the vmnic for each ESXi host associated with transferring data to the OCP interface.

      +
    • +
    +
    +
  • +
+
+
+
+

Ensure fast datastore read speeds for efficient, performant migrations

+
+

Datastore read rates impact total transfer times, so it is essential that fast reads are possible from the ESXi datastore to the ESXi host.

+
+
+

Example in numbers: 200 to 300 MiB/s was the average read rate for both vSphere and ESXi endpoints for a single ESXi server. When multiple ESXi servers are used, higher datastore read rates are possible.

+
+
+
+

Endpoint types 

+
+

Forklift 2.6 allows for the following vSphere provider options:

+
+
+
    +
  • +

    ESXi endpoint (inventory and disk transfers from ESXi), introduced in Forklift 2.6

    +
  • +
  • +

    vCenter Server endpoint; no networks for the ESXi host (inventory and disk transfers from vCenter)

    +
  • +
  • +

    vCenter endpoint and ESXi networks are available (inventory from vCenter, disk transfers from ESXi).

    +
  • +
+
+
+

When transferring many VMs that are registered to multiple ESXi hosts, using the vCenter endpoint and ESXi network is suggested.

+
+
+ + + + + +
+ + +
+

As of vSphere 7.0, ESXi hosts can label which network to use for NBD transport by tagging the desired virtual network interface card (NIC) with the vSphereBackupNFC label. When this is done, Forklift can use the tagged ESXi interface for network transfer to OpenShift, as long as the worker and ESXi host interfaces are reachable. This is especially useful when migration users do not have access to the ESXi credentials but want to control which ESXi interface is used for migration.

+
+
+

For more details, see (Forklift-1230).

+
+
+
+
+

You can use the following ESXi command, which designates interface vmk2 for NBD backup:

+
+
+
+
esxcli network ip interface tag add -t vSphereBackupNFC -i vmk2
+
+
+
+
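To confirm the tag afterward, the matching get subcommand can be used. This is a sketch, assuming vmk2 is the interface you tagged:

esxcli network ip interface tag get -i vmk2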
+

Set ESXi hosts BIOS profile and ESXi Host Power Management for High Performance

+
+

Where possible, ensure that hosts used to perform migrations are set with BIOS profiles related to maximum performance. Hosts whose power management is controlled within vSphere should have the High Performance policy set.

+
+
+

Testing showed that when transferring more than 10 VMs with both BIOS and host power management set accordingly, migrations saw an increase of 15 MiB/s in the average datastore read rate.

+
+
+
+

Avoid additional network load on VMware networks

+
+

You can reduce the network load on VMware networks by selecting the migration network when using the ESXi endpoint.

+
+
+

By incorporating a virtualization provider, Forklift enables the selection of a specific network, accessible on the ESXi hosts, for migrating virtual machines to OCP. Selecting this migration network from the ESXi host in the Forklift UI ensures that the transfer is performed over the selected network as an ESXi endpoint.

+
+
+

It is imperative to ensure that the network selected has connectivity to the OCP interface, has adequate bandwidth for migrations, and that the network interface is not saturated.

+
+
+

In environments with fast networks, such as 10GbE networks, migration network impacts can be expected to match the rate of ESXi datastore reads.

+
+
+
+

Control maximum concurrent disk migrations per ESXi host

+
+

Set the MAX_VM_INFLIGHT MTV variable to control the maximum number of concurrent VM transfers allowed per ESXi host.

+
+
+

Forklift allows for concurrency to be controlled using this variable; by default, it is set to 20.

+
+
+
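The following is a minimal sketch of how this variable can be changed. It assumes that the Operator namespace is konveyor-forklift, that the controller CR is named forklift-controller, and that the variable is exposed on the CR spec as controller_max_vm_inflight; verify these names in your environment.

$ kubectl patch forkliftcontroller/forklift-controller \
    -n konveyor-forklift --type merge \
    -p '{"spec": {"controller_max_vm_inflight": 20}}'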

When setting MAX_VM_INFLIGHT, consider how many concurrent VM transfers are required per ESXi host. It is also important to consider the type of migration to be transferred concurrently: a warm migration migrates a running VM over a scheduled period of time.

+
+
+

Warm migrations use snapshots to compare and migrate only the differences between previous snapshots of the disk.  The migration of the differences between snapshots happens over specific intervals before a final cut-over of the running VM to OKD occurs. 

+
+
+

In Forklift 2.6, MAX_VM_INFLIGHT reserves one transfer slot per VM, regardless of current migration activity for a specific snapshot or the number of disks that belong to a single VM. The total set by MAX_VM_INFLIGHT indicates how many concurrent VM transfers are allowed per ESXi host.

+
+
+
Examples
+
    +
  • +

    MAX_VM_INFLIGHT = 20 and 2 ESXi hosts defined in the provider mean each host can transfer 20 VMs.

    +
  • +
+
+
+
+

Migrations are completed faster when migrating multiple VMs concurrently

+
+

When multiple VMs from a specific ESXi host are to be migrated, starting concurrent migrations for multiple VMs leads to faster migration times. 

+
+
+

Testing demonstrated that migrating 10 VMs (each containing 35 GiB of data, with a total size of 50 GiB) from a single host is significantly faster than migrating the same number of VMs sequentially, one after another. 

+
+
+

It is possible to increase concurrent migration to more than 10 virtual machines from a single host, but doing so does not show a significant improvement.

+
+
+
Examples
+
    +
  • +

    1 single-disk VM took 6 minutes, with a migration rate of 100 MiB/s

    +
  • +
  • +

    10 single-disk VMs took 22 minutes, with a migration rate of 272 MiB/s

    +
  • +
  • +

    20 single-disk VMs took 42 minutes, with a migration rate of 284 MiB/s

    +
  • +
+
+
+

These examples show that migrating 10 virtual machines simultaneously is three times faster than migrating the same virtual machines sequentially.

+
+
+

The migration rate was almost the same when moving 10 or 20 virtual machines simultaneously.

+
+
+
+
+
+

Migrations complete faster using multiple hosts

+
+

Using multiple hosts with registered VMs equally distributed among the ESXi hosts used for migrations leads to faster migration times.

+
+
+

Testing showed that when transferring more than 10 single-disk VMs, each containing 35 GiB of data out of a 50 GiB total, using an additional host can reduce migration time.

+
+
+
Examples
+
    +
  • +

    80 single disk VMs, containing 35 GiB of data each, using a single host took 2 hours and 43 minutes, with a migration rate of 294 MiB/s.

    +
  • +
  • +

    80 single disk VMs, containing 35 GiB of data each, using 8 ESXi hosts took 41 minutes, with a migration rate of 1,173 MiB/s.

    +
  • +
+
+
+

These examples show that migrating 80 VMs from 8 ESXi hosts (10 from each host) concurrently is four times faster than migrating the same VMs from a single ESXi host.

+
+
+

Migrating a larger number of VMs from more than 8 ESXi hosts concurrently could potentially show increased performance. However, this was not tested and is therefore not recommended.

+
+
+
+
+
+

Multiple migration plans compared to a single large migration plan

+
+

The maximum number of disks that can be referenced by a single migration plan is 500. For more details, see (MTV-1203).

+
+
+

When attempting to migrate many VMs in a single migration plan, it can take some time for all migrations to start.  By breaking up one migration plan into several migration plans, it is possible to start them at the same time.

+
+
+

Comparing migrations of:

+
+
+
    +
  • +

    500 VMs using 8 ESXi hosts in 1 plan, max_vm_inflight=100, took 5 hours and 10 minutes.

    +
  • +
  • +

    800 VMs using 8 ESXi hosts with 8 plans, max_vm_inflight=100, took 57 minutes.

    +
  • +
+
+
+

Testing showed that by breaking one single large plan into multiple moderately sized plans, for example, 100 VMs per plan, the total migration time can be reduced.

+
+
+
+

Maximum values tested

+
+
    +
  • +

    Maximum number of ESXi hosts tested: 8

    +
  • +
  • +

    Maximum number of VMs in a single migration plan: 500

    +
  • +
  • +

    Maximum number of VMs migrated in a single test: 5000

    +
  • +
  • +

    Maximum number of migration plans performed concurrently: 40

    +
  • +
  • +

    Maximum single disk size migrated: 6 TiB disks, which contained 3 TiB of data

    +
  • +
  • +

    Maximum number of disks on a single VM migrated: 50

    +
  • +
  • +

    Highest observed single datastore read rate from a single ESXi server:  312 MiB/second

    +
  • +
  • +

    Highest observed multi-datastore read rate using eight ESXi servers and two datastores: 1,242 MiB/second

    +
  • +
  • +

    Highest observed virtual NIC transfer rate to an OpenShift worker: 327 MiB/second

    +
  • +
  • +

    Maximum migration transfer rate of a single disk: 162 MiB/second (rate observed when transferring a nonconcurrent migration of 1.5 TiB of utilized data)

    +
  • +
  • +

    Maximum cold migration transfer rate of multiple VMs (single disk) from a single ESXi host: 294 MiB/s (concurrent migration of 30 VMs, 35/50 GiB used, from a single ESXi host)

    +
  • +
  • +

    Maximum cold migration transfer rate of multiple VMs (single disk) from multiple ESXi hosts: 1,173 MiB/s (concurrent migration of 80 VMs, 35/50 GiB used, from 8 ESXi servers, 10 VMs from each ESXi host)

    +
  • +
+
+
+

For additional details on performance, see Forklift performance addendum.

+
+
+
+
+
+

Troubleshooting

+
+
+

This section provides information for troubleshooting common migration issues.

+
+
+

Error messages

+
+

This section describes error messages and how to resolve them.

+
+
+
warm import retry limit reached
+

The warm import retry limit reached error message is displayed during a warm migration if a VMware virtual machine (VM) has reached the maximum number (28) of changed block tracking (CBT) snapshots during the precopy stage.

+
+
+

To resolve this problem, delete some of the CBT snapshots from the VM and restart the migration plan.

+
+
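If you manage vSphere from the command line, one possible approach is the open source govc CLI. The following is a sketch, assuming govc is installed and configured (GOVC_URL and credentials), and that <vm_name> and <snapshot_name> are placeholders you replace:

$ govc snapshot.tree -vm <vm_name>

$ govc snapshot.remove -vm <vm_name> <snapshot_name>

List the snapshot tree to identify the migration-related CBT snapshots, remove enough of them by name, and then restart the migration plan.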
+
Unable to resize disk image to required size
+

The Unable to resize disk image to required size error message is displayed when migration fails because a virtual machine on the target provider uses persistent volumes with an EXT4 file system on block storage. The problem occurs because the default overhead that is assumed by CDI does not completely include the reserved space for the root partition.

+
+
+

To resolve this problem, increase the file system overhead in CDI to more than 10%.

+
+
+
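The following is a minimal sketch of such a change, assuming the cluster-scoped CDI CR is named cdi (the usual default) and that a global overhead of 10% is acceptable for all storage classes:

$ kubectl patch cdi/cdi --type merge \
    -p '{"spec": {"config": {"filesystemOverhead": {"global": "0.1"}}}}'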
+

Using the must-gather tool

+
+

You can collect logs and information about Forklift custom resources (CRs) by using the must-gather tool. You must attach a must-gather data file to all customer cases.

+
+
+

You can gather data for a specific namespace, migration plan, or virtual machine (VM) by using the filtering options.

+
+
+

If you specify a non-existent resource in the filtered must-gather command, no archive file is created.

+
+
+
+
+
Prerequisites
+
    +
  • +

    You must be logged in to the KubeVirt cluster as a user with the cluster-admin role.

    +
  • +
  • +

    You must have the OKD CLI (oc) installed.

    +
  • +
+
+
+
Collecting logs and CR information
+
    +
  1. +

    Navigate to the directory where you want to store the must-gather data.

    +
  2. +
  3. +

    Run the oc adm must-gather command:

    +
    +
    +
    $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest
    +
    +
    +
    +

    The data is saved as /must-gather/must-gather.tar.gz. You can upload this file to a support case on the Red Hat Customer Portal.

    +
    +
  4. +
  5. +

    Optional: Run the oc adm must-gather command with the following options to gather filtered data:

    +
    +
      +
    • +

      Namespace:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- NS=<namespace> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Migration plan:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- PLAN=<migration_plan> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Virtual machine:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- VM=<vm_id> NS=<namespace> /usr/bin/targeted (1)
      +
      +
      +
      1. Specify the VM ID as it appears in the Plan CR.
      +
      +
    • +
    +
    +
  6. +
+
+
+
+

Architecture

+
+

This section describes Forklift custom resources, services, and workflows.

+
+
+

Forklift custom resources and services

+
+

Forklift is provided as an OKD Operator. It creates and manages the following custom resources (CRs) and services.

+
+
+
Forklift custom resources
+
    +
  • +

    Provider CR stores attributes that enable Forklift to connect to and interact with the source and target providers.

    +
  • +
  • +

    NetworkMapping CR maps the networks of the source and target providers.

    +
  • +
  • +

    StorageMapping CR maps the storage of the source and target providers.

    +
  • +
  • +

    Plan CR contains a list of VMs with the same migration parameters and associated network and storage mappings.

    +
  • +
  • +

    Migration CR runs a migration plan.

    +
    +

    Only one Migration CR per migration plan can run at a given time. You can create multiple Migration CRs for a single Plan CR.

    +
    +
  • +
+
+
+
Forklift services
+
    +
  • +

    The Inventory service performs the following actions:

    +
    +
      +
    • +

      Connects to the source and target providers.

      +
    • +
    • +

      Maintains a local inventory for mappings and plans.

      +
    • +
    • +

      Stores VM configurations.

      +
    • +
    • +

      Runs the Validation service if a VM configuration change is detected.

      +
    • +
    +
    +
  • +
  • +

    The Validation service checks the suitability of a VM for migration by applying rules.

    +
  • +
  • +

    The Migration Controller service orchestrates migrations.

    +
    +

    When you create a migration plan, the Migration Controller service validates the plan and adds a status label. If the plan fails validation, the plan status is Not ready and the plan cannot be used to perform a migration. If the plan passes validation, the plan status is Ready and it can be used to perform a migration. After a successful migration, the Migration Controller service changes the plan status to Completed. A sketch of checking these status conditions from the CLI follows this list.

    +
    +
  • +
  • +

    The Populator Controller service orchestrates disk transfers using Volume Populators.

    +
  • +
  • +

    The Kubevirt Controller and Containerized Data Import (CDI) Controller services handle most technical operations.

    +
  • +
+
+
+
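As a sketch, assuming a plan named test in the konveyor-forklift namespace, the status conditions described above can be inspected from the CLI; the Ready condition type is an assumption to verify against your Plan CR:

$ kubectl get plan test -n konveyor-forklift \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'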
+

High-level migration workflow

+
+

The high-level workflow shows the migration process from the point of view of the user:

+
+
+
    +
  1. +

    You create a source provider, a target provider, a network mapping, and a storage mapping.

    +
  2. +
  3. +

    You create a Plan custom resource (CR) that includes the following resources:

    +
    +
      +
    • +

      Source provider

      +
    • +
    • +

      Target provider, if Forklift is not installed on the target cluster

      +
    • +
    • +

      Network mapping

      +
    • +
    • +

      Storage mapping

      +
    • +
    • +

      One or more virtual machines (VMs)

      +
    • +
    +
    +
  4. +
  5. +

    You run a migration plan by creating a Migration CR that references the Plan CR.

    +
    +

    If you cannot migrate all the VMs for any reason, you can create multiple Migration CRs for the same Plan CR until all VMs are migrated. A minimal Migration CR is sketched after this list.

    +
    +
  6. +
  7. +

    For each VM in the Plan CR, the Migration Controller service records the VM migration progress in the Migration CR.

    +
  8. +
  9. +

    Once the data transfer for each VM in the Plan CR completes, the Migration Controller service creates a VirtualMachine CR.

    +
    +

    When all VMs have been migrated, the Migration Controller service updates the status of the Plan CR to Completed. The power state of each source VM is maintained after migration.

    +
    +
  10. +
+
+
+
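The following is a minimal sketch of such a Migration CR, assuming a Plan named test in the konveyor-forklift namespace:

apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: test-migration
  namespace: konveyor-forklift
spec:
  plan:
    name: test
    namespace: konveyor-forklift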
+

Detailed migration workflow

+
+

You can use the detailed migration workflow to troubleshoot a failed migration.

+
+
+

The workflow describes the following steps:

+
+
+

Warm Migration or migration to a remote OpenShift cluster:

+
+
+
    +
  1. +

    When you create the Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a DataVolume CR for each source VM disk.

    +
    +

    For each VM disk:

    +
    +
  2. +
  3. +

    The Containerized Data Importer (CDI) Controller service creates a persistent volume claim (PVC) based on the parameters specified in the DataVolume CR.



    +
  4. +
  5. +

    If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

    +
  6. +
  7. +

    The CDI Controller service creates an importer pod.

    +
  8. +
  9. +

    The importer pod streams the VM disk to the PV.

    +
    +

    After the VM disks are transferred:

    +
    +
  10. +
  11. +

    The Migration Controller service creates a conversion pod with the PVCs attached to it when importing from VMware.

    +
    +

    The conversion pod runs virt-v2v, which installs and configures device drivers on the PVCs of the target VM.

    +
    +
  12. +
  13. +

    The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.

    +
  14. +
  15. +

    If the VM was running on the source environment, the Migration Controller service powers on the VM, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    +
    +

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

    +
    +
  16. +
+
+
+

Cold migration from oVirt or OpenStack to the local OpenShift cluster:

+
+
+
    +
  1. +

    When you create a Migration custom resource (CR) to run a migration plan, the Migration Controller service creates, for each source VM disk, a PersistentVolumeClaim CR and either an OvirtVolumePopulator CR when the source is oVirt or an OpenstackVolumePopulator CR when the source is OpenStack.

    +
    +

    For each VM disk:

    +
    +
  2. +
  3. +

    The Populator Controller service creates a temporary persistent volume claim (PVC).

    +
  4. +
  5. +

    If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

    +
    +
      +
    • +

      The Migration Controller service creates a dummy pod to bind all PVCs. The name of the pod contains pvcinit.

      +
    • +
    +
    +
  6. +
  7. +

    The Populator Controller service creates a populator pod.

    +
  8. +
  9. +

    The populator pod transfers the disk data to the PV.

    +
    +

    After the VM disks are transferred:

    +
    +
  10. +
  11. +

    The temporary PVC is deleted, and the initial PVC points to the PV with the data.

    +
  12. +
  13. +

    The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.

    +
  14. +
  15. +

    If the VM was running on the source environment, the Migration Controller service powers on the VM, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    +
    +

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

    +
    +
  16. +
+
+
+

Cold migration from VMware to the local OpenShift cluster:

+
+
+
    +
  1. +

    When you create a Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a DataVolume CR for each source VM disk.

    +
    +

    For each VM disk:

    +
    +
  2. +
  3. +

    The Containerized Data Importer (CDI) Controller service creates a blank persistent volume claim (PVC) based on the parameters specified in the DataVolume CR.



    +
  4. +
  5. +

    If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

    +
  6. +
+
+
+

For all VM disks:

+
+
+
    +
  1. +

    The Migration Controller service creates a dummy pod to bind all PVCs. The name of the pod contains pvcinit.

    +
  2. +
  3. +

    The Migration Controller service creates a conversion pod for all PVCs.

    +
  4. +
  5. +

    The conversion pod runs virt-v2v, which converts the VM to the KVM hypervisor and transfers the disks' data to their corresponding PVs.

    +
    +

    After the VM disks are transferred:

    +
    +
  6. +
  7. +

    The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.

    +
  8. +
  9. +

    If the VM was running on the source environment, the Migration Controller service powers on the VM, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    +
    +

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

    +
    +
  10. +
+
+
+
+
+

Logs and custom resources

+
+

You can download logs and custom resource (CR) information for troubleshooting. For more information, see the detailed migration workflow.

+
+
+

Collected logs and custom resource information

+
+

You can download logs and custom resource (CR) yaml files for the following targets by using the OKD web console or the command line interface (CLI):

+
+
+
    +
  • +

    Migration plan: Web console or CLI.

    +
  • +
  • +

    Virtual machine: Web console or CLI.

    +
  • +
  • +

    Namespace: CLI only.

    +
  • +
+
+
+

The must-gather tool collects the following logs and CR files in an archive file:

+
+
+
    +
  • +

    CRs:

    +
    +
      +
    • +

      DataVolume CR: Represents a disk mounted on a migrated VM.

      +
    • +
    • +

      VirtualMachine CR: Represents a migrated VM.

      +
    • +
    • +

      Plan CR: Defines the VMs and storage and network mapping.

      +
    • +
    • +

      Job CR: Optional: Represents a pre-migration hook, a post-migration hook, or both.

      +
    • +
    +
    +
  • +
  • +

    Logs:

    +
    +
      +
    • +

      importer pod: Disk-to-data-volume conversion log. The importer pod naming convention is importer-<migration_plan>-<vm_id><5_char_id>, for example, importer-mig-plan-ed90dfc6-9a17-4a8btnfh, where ed90dfc6-9a17-4a8 is a truncated oVirt VM ID and btnfh is the generated 5-character ID.

      +
    • +
    • +

      conversion pod: VM conversion log. The conversion pod runs virt-v2v, which installs and configures device drivers on the PVCs of the VM. The conversion pod naming convention is <migration_plan>-<vm_id><5_char_id>.

      +
    • +
    • +

      virt-launcher pod: VM launcher log. When a migrated VM is powered on, the virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

      +
    • +
    • +

      forklift-controller pod: The log is filtered for the migration plan, virtual machine, or namespace specified by the must-gather command.

      +
    • +
    • +

      forklift-must-gather-api pod: The log is filtered for the migration plan, virtual machine, or namespace specified by the must-gather command.

      +
    • +
    • +

      hook-job pod: The log is filtered for hook jobs. The hook-job naming convention is <migration_plan>-<vm_id><5_char_id>, for example, plan2j-vm-3696-posthook-4mx85 or plan2j-vm-3696-prehook-mwqnl.

      +
      +

      Empty or excluded log files are not included in the must-gather archive file.

      +
      +
      +
      +
    • +
    +
    +
  • +
+
+
+
Example must-gather archive structure for a VMware migration plan
+
+
must-gather
+└── namespaces
+    ├── target-vm-ns
+    │   ├── crs
+    │   │   ├── datavolume
+    │   │   │   ├── mig-plan-vm-7595-tkhdz.yaml
+    │   │   │   ├── mig-plan-vm-7595-5qvqp.yaml
+    │   │   │   └── mig-plan-vm-8325-xccfw.yaml
+    │   │   └── virtualmachine
+    │   │       ├── test-test-rhel8-2disks2nics.yaml
+    │   │       └── test-x2019.yaml
+    │   └── logs
+    │       ├── importer-mig-plan-vm-7595-tkhdz
+    │       │   └── current.log
+    │       ├── importer-mig-plan-vm-7595-5qvqp
+    │       │   └── current.log
+    │       ├── importer-mig-plan-vm-8325-xccfw
+    │       │   └── current.log
+    │       ├── mig-plan-vm-7595-4glzd
+    │       │   └── current.log
+    │       └── mig-plan-vm-8325-4zw49
+    │           └── current.log
+    └── openshift-mtv
+        ├── crs
+        │   └── plan
+        │       └── mig-plan-cold.yaml
+        └── logs
+            ├── forklift-controller-67656d574-w74md
+            │   └── current.log
+            └── forklift-must-gather-api-89fc7f4b6-hlwb6
+                └── current.log
+
+
+
+
+

Downloading logs and custom resource information from the web console

+
+

You can download logs and information about custom resources (CRs) for a completed, failed, or canceled migration plan or for migrated virtual machines (VMs) by using the OKD web console.

+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration > Plans for virtualization.

    +
  2. +
  3. +

    Click Get logs beside a migration plan name.

    +
  4. +
  5. +

    In the Get logs window, click Get logs.

    +
    +

    The logs are collected. A Log collection complete message is displayed.

    +
    +
  6. +
  7. +

    Click Download logs to download the archive file.

    +
  8. +
  9. +

    To download logs for a migrated VM, click a migration plan name and then click Get logs beside the VM.

    +
  10. +
+
+
+
+

Accessing logs and custom resource information from the command line interface

+
+

You can access logs and information about custom resources (CRs) from the command line interface by using the must-gather tool. You must attach a must-gather data file to all customer cases.

+
+
+

You can gather data for a specific namespace, a completed, failed, or canceled migration plan, or a migrated virtual machine (VM) by using the filtering options.

+
+
+

If you specify a non-existent resource in the filtered must-gather command, no archive file is created.

+
+
+
+
+
Prerequisites
+
    +
  • +

    You must be logged in to the KubeVirt cluster as a user with the cluster-admin role.

    +
  • +
  • +

    You must have the OKD CLI (oc) installed.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Navigate to the directory where you want to store the must-gather data.

    +
  2. +
  3. +

    Run the oc adm must-gather command:

    +
    +
    +
    $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest
    +
    +
    +
    +

    The data is saved as /must-gather/must-gather.tar.gz. You can upload this file to a support case on the Red Hat Customer Portal.

    +
    +
  4. +
  5. +

    Optional: Run the oc adm must-gather command with the following options to gather filtered data:

    +
    +
      +
    • +

      Namespace:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- NS=<namespace> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Migration plan:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- PLAN=<migration_plan> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Virtual machine:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- VM=<vm_name> NS=<namespace> /usr/bin/targeted (1)
      +
      +
      +
      1. You must specify the VM name, not the VM ID, as it appears in the Plan CR.
      +
      +
    • +
    +
    +
  6. +
+
+
+
+
+
+
+

Additional information

+
+
+

Forklift performance addendum

+
+

The data provided here was collected from testing in Red Hat Labs and is provided for reference only. 

+
+
+

Overall, these numbers should be considered to show the best-case scenarios.

+
+
+

The observed performance of migration can differ from these results and depends on several factors.

+
+
+

ESXi performance

+
+
Single ESXi performance
+

These tests migrated VMs using the same ESXi host.

+
+
+

In each iteration, the total number of VMs was increased, to show the impact of concurrent migration on duration.

+
+
+

The results show that migration time scales linearly with the total number of VMs (50 GiB disk, 70% utilization).

+
+
+

The optimal number of VMs per ESXi is 10.

+
Table 12. Single ESXi tests

Test Case Description | MTV | VDDK | max_vm_inflight | Migration Type | Total Duration
cold migration, 10 VMs, Single ESXi, Private Network [1] | 2.6 | 7.0.3 | 100 | cold | 0:21:39
cold migration, 20 VMs, Single ESXi, Private Network | 2.6 | 7.0.3 | 100 | cold | 0:41:16
cold migration, 30 VMs, Single ESXi, Private Network | 2.6 | 7.0.3 | 100 | cold | 1:00:59
cold migration, 40 VMs, Single ESXi, Private Network | 2.6 | 7.0.3 | 100 | cold | 1:23:02
cold migration, 50 VMs, Single ESXi, Private Network | 2.6 | 7.0.3 | 100 | cold | 1:46:24
cold migration, 80 VMs, Single ESXi, Private Network | 2.6 | 7.0.3 | 100 | cold | 2:42:49
cold migration, 100 VMs, Single ESXi, Private Network | 2.6 | 7.0.3 | 100 | cold | 3:25:15

+
+
Multi ESXi hosts and single data store
+

In each iteration, the number of ESXi hosts was increased, to show that increasing the number of ESXi hosts improves the migration time (50 GiB disk, 70% utilization).

+
Table 13. Multi ESXi hosts and single data store

Test Case Description | MTV | VDDK | max_vm_inflight | Migration Type | Total Duration
cold migration, 100 VMs, Single ESXi, Private Network [2] | 2.6 | 7.0.3 | 100 | cold | 3:25:15
cold migration, 100 VMs, 4 ESXs (25 VMs per ESX), Private Network | 2.6 | 7.0.3 | 100 | cold | 1:22:27
cold migration, 100 VMs, 5 ESXs (20 VMs per ESX), Private Network, 1 DataStore | 2.6 | 7.0.3 | 100 | cold | 1:04:57

+
+
+

Different migration network performance

+
+

In each iteration, the migration network was changed by using the provider, to find the fastest network for migration.

+
+
+

The results show that there is no degradation using management networks compared to non-management networks when all interfaces and network speeds are the same.

+
Table 14. Different migration network tests

Test Case Description | MTV | VDDK | max_vm_inflight | Migration Type | Total Duration
cold migration, 10 VMs, Single ESXi, MGMT Network | 2.6 | 7.0.3 | 100 | cold | 0:21:30
cold migration, 10 VMs, Single ESXi, Private Network [3] | 2.6 | 7.0.3 | 20 | cold | 0:21:20
cold migration, 10 VMs, Single ESXi, Default Network | 2.6.2 | 7.0.3 | 20 | cold | 0:21:30

+
+
+
+
+
+
+
+1. Private Network refers to a non-Management network +
+
+2. Private Network refers to a non-Management network +
+
+3. Private Network refers to a non-Management network +
+
+ + +
diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/about-cold-warm-migration/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/about-cold-warm-migration/index.html new file mode 100644 index 00000000000..2f9ce3de4d9 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/about-cold-warm-migration/index.html @@ -0,0 +1,255 @@
+

About cold and warm migration

+
+
+
+

Forklift supports cold migration from:

+
+
+
    +
  • +

    VMware vSphere

    +
  • +
  • +

    oVirt

    +
  • +
  • +

    OpenStack

    +
  • +
  • +

    Remote KubeVirt clusters

    +
  • +
+
+
+

Forklift supports warm migration from VMware vSphere and from oVirt.

+
+
+
+
+

Cold migration

+
+
+

Cold migration is the default migration type. The source virtual machines are shut down while the data is copied.

+
+
+ + + + + +
+
Note
+
+
+

Unresolved directive in about-cold-warm-migration.adoc - include::snip_qemu-guest-agent.adoc[]

+
+
+
+
+
+
+

Warm migration

+
+
+

Most of the data is copied during the precopy stage while the source virtual machines (VMs) are running.

+
+
+

Then the VMs are shut down and the remaining data is copied during the cutover stage.

+
+
+
Precopy stage
+

The VMs are not shut down during the precopy stage.

+
+
+

The VM disks are copied incrementally by using changed block tracking (CBT) snapshots. The snapshots are created at one-hour intervals by default. You can change the snapshot interval by updating the forklift-controller deployment.

+
+
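As a sketch, assuming the interval is exposed on the ForkliftController CR as controller_precopy_interval (in minutes) and that the CR is named forklift-controller in the konveyor-forklift namespace, the interval could be changed as follows; verify these names in your deployment:

$ kubectl patch forkliftcontroller/forklift-controller \
    -n konveyor-forklift --type merge \
    -p '{"spec": {"controller_precopy_interval": 30}}'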
+ + + + + +
+
Important
+
+
+

You must enable CBT for each source VM and each VM disk; a sketch of the relevant configuration parameters follows this note.

+
+
+

A VM can support up to 28 CBT snapshots. If the source VM has too many CBT snapshots and the Migration Controller service is not able to create a new snapshot, warm migration might fail. The Migration Controller service deletes each snapshot when the snapshot is no longer required.

+
+
+
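In vSphere, CBT is enabled through VM advanced configuration parameters. The following sketch shows the commonly documented parameters, assuming the VM is powered off and scsi0:0 is the disk to track; repeat the per-disk key for each additional disk:

ctkEnabled = "TRUE"
scsi0:0.ctkEnabled = "TRUE"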
+
+

The precopy stage runs until the cutover stage is started manually or is scheduled to start.

+
+
+
Cutover stage
+

The VMs are shut down during the cutover stage and the remaining data is migrated. Data stored in RAM is not migrated.

+
+
+

You can start the cutover stage manually by using the Forklift console or you can schedule a cutover time in the Migration manifest.

+
+
+
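The following is a minimal sketch of a Migration manifest with a scheduled cutover, assuming a plan named test in the konveyor-forklift namespace; the timestamp is an illustrative placeholder:

apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: warm-migration
  namespace: konveyor-forklift
spec:
  plan:
    name: test
    namespace: konveyor-forklift
  cutover: "2024-04-01T02:00:00Z"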
+
+

Advantages and disadvantages of cold and warm migrations

+
+
+

Overview

+
+

Unresolved directive in about-cold-warm-migration.adoc - include::snip_cold-warm-comparison-table.adoc[]

+
+
+
+

Detailed description

+
+

The table that follows offers a more detailed description of the advantages and disadvantages of each type of migration. It assumes that you have installed Red Hat Enterprise Linux (RHEL) 9 on the OKD platform on which you installed Forklift.

+
Table 1. Detailed description of advantages and disadvantages

 | Cold migration | Warm migration
Fail fast | Each VM is converted to be compatible with OKD and, if the conversion is successful, the VM is transferred. If a VM cannot be converted, the migration fails immediately. | For each VM, Forklift creates a snapshot and transfers it to OKD. When you start the cutover, Forklift creates the last snapshot, transfers it, and then converts the VM.
Tools | Forklift only. | Forklift and CDI from KubeVirt.
Parallelism | Disks must be transferred sequentially. | Disks can be transferred in parallel using different pods.
+
+ + + + + +
+
Note
+
+
+

The preceding table describes the situation for VMs that are running because the main benefit of warm migration is the reduced downtime, and there is no reason to initiate warm migration for VMs that are down. However, performing warm migration for VMs that are down is not the same as cold migration, even when Forklift uses virt-v2v and RHEL 9. For VMs that are down, Forklift transfers the disks using CDI, unlike in cold migration.

+
+
+
+
+ + + + + +
+
Note
+
+
+

When importing from VMware, there are additional factors that impact the migration speed, such as limits related to ESXi, vSphere, or VDDK.

+
+
+
+
+
+

Conclusions

+
+

Based on the preceding information, we can draw the following conclusions about cold migration vs. warm migration:

+
+
+
    +
  • +

    The shortest downtime of VMs can be achieved by using warm migration.

    +
  • +
  • +

    The shortest duration for VMs with a large amount of data on a single disk can be achieved by using cold migration.

    +
  • +
  • +

    The shortest duration for VMs with a large amount of data that is spread evenly across multiple disks can be achieved by using warm migration.

    +
  • +
+
+
+
+
+ + +
diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/about-hook-crs-for-migration-plans-api/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/about-hook-crs-for-migration-plans-api/index.html new file mode 100644 index 00000000000..f73614f35c0 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/about-hook-crs-for-migration-plans-api/index.html @@ -0,0 +1,116 @@
+

API-based hooks for Forklift migration plans

+
+

You can add hooks to a migration plan from the command line by using the Forklift API.

+
+

Default hook image

+
+

The default hook image for a Forklift hook is registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel8:v1.8.2-2. The image is based on the Ansible Runner image with the addition of python-openshift, to provide Ansible Kubernetes resources, and a recent oc binary.

+
+

Hook execution

+
+

An Ansible playbook that is provided as part of a migration hook is mounted into the hook container as a ConfigMap. The hook container is run as a job on the desired cluster, using the default ServiceAccount in the konveyor-forklift namespace.

+
+

PreHooks and PostHooks

+
+

You specify hooks per VM and you can run each as a PreHook or a PostHook. In this context, a PreHook is a hook that is run before a migration and a PostHook is a hook that is run after a migration.

+
+
+

When you add a hook, you must specify the namespace where the hook CR is located, the name of the hook, and specify whether the hook is a PreHook or PostHook.

+
+
+ + + + + +
+
Important
+
+
+

In order for a PreHook to run on a VM, the VM must be started and available via SSH.

+
+
+
+
+
Example PreHook:
+
+
kind: Plan
+apiVersion: forklift.konveyor.io/v1beta1
+metadata:
+  name: test
+  namespace: konveyor-forklift
+spec:
+  vms:
+    - id: vm-2861
+      hooks:
+        - hook:
+            namespace: konveyor-forklift
+            name: playbook
+          step: PreHook
+
+
+ + +
diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/about-rego-files/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/about-rego-files/index.html new file mode 100644 index 00000000000..8d382c1a370 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/about-rego-files/index.html @@ -0,0 +1,104 @@
+

About Rego files

+
+

Validation rules are written in Rego, the Open Policy Agent (OPA) native query language. The rules are stored as .rego files in the /usr/share/opa/policies/io/konveyor/forklift/<provider> directory of the Validation pod.

+
+
+

Each validation rule is defined in a separate .rego file and tests for a specific condition. If the condition evaluates as true, the rule adds a {"category", "label", "assessment"} hash to the concerns. The concerns content is added to the concerns key in the inventory record of the VM. The web console displays the content of the concerns key for each VM in the provider inventory.

+
+
+

The following .rego file example checks for distributed resource scheduling enabled in the cluster of a VMware VM:

+
+
+
drs_enabled.rego example
+
+
package io.konveyor.forklift.vmware (1)
+
+has_drs_enabled {
+    input.host.cluster.drsEnabled (2)
+}
+
+concerns[flag] {
+    has_drs_enabled
+    flag := {
+        "category": "Information",
+        "label": "VM running in a DRS-enabled cluster",
+        "assessment": "Distributed resource scheduling is not currently supported by OpenShift Virtualization. The VM can be migrated but it will not have this feature in the target environment."
+    }
+}
+
+
+
+
    +
  1. +

    Each validation rule is defined within a package. The package namespaces are io.konveyor.forklift.vmware for VMware and io.konveyor.forklift.ovirt for oVirt.

    +
  2. +
  3. +

    Query parameters are based on the input key of the Validation service JSON.

    +
  4. +
+
+ + +
diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/accessing-default-validation-rules/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/accessing-default-validation-rules/index.html new file mode 100644 index 00000000000..015598b9240 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/accessing-default-validation-rules/index.html @@ -0,0 +1,108 @@
+

Checking the default validation rules

+
+

Before you create a custom rule, you must check the default rules of the Validation service to ensure that you do not create a rule that redefines an existing default value.

+
+
+

Example: If a default rule contains the line default valid_input = false and you create a custom rule that contains the line default valid_input = true, the Validation service will not start.

+
+
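Written out as Rego, the conflict from the example looks like this; both rules compile into the same package, and OPA rejects the conflicting defaults, so the Validation service will not start:

# Default rule shipped with the Validation service:
default valid_input = false

# A custom rule that re-declares the same default conflicts with it:
default valid_input = true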
+
Procedure
+
    +
  1. +

    Connect to the terminal of the Validation pod:

    +
    +
    +
    $ oc rsh <validation_pod>
    +
    +
    +
  2. +
  3. +

    Go to the OPA policies directory for your provider:

    +
    +
    +
    $ cd /usr/share/opa/policies/io/konveyor/forklift/<provider> (1)
    +
    +
    +
    +
      +
    1. +

      Specify vmware or ovirt.

      +
    2. +
    +
    +
  4. +
  5. +

    Search for the default policies:

    +
    +
    +
    $ grep -R "default" *
    +
    +
    +
  6. +
+
+ + +
diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/accessing-logs-cli/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/accessing-logs-cli/index.html new file mode 100644 index 00000000000..7891fb4bce1 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/accessing-logs-cli/index.html @@ -0,0 +1,157 @@
+

Accessing logs and custom resource information from the command line interface

+
+

You can access logs and information about custom resources (CRs) from the command line interface by using the must-gather tool. You must attach a must-gather data file to all customer cases.

+
+
+

You can gather data for a specific namespace, a completed, failed, or canceled migration plan, or a migrated virtual machine (VM) by using the filtering options.

+
+
+ + + + + +
+
Note
+
+
+

If you specify a non-existent resource in the filtered must-gather command, no archive file is created.

+
+
+
+
+
Prerequisites
+
    +
  • +

    You must be logged in to the KubeVirt cluster as a user with the cluster-admin role.

    +
  • +
  • +

    You must have the OKD CLI (oc) installed.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Navigate to the directory where you want to store the must-gather data.

    +
  2. +
  3. +

    Run the oc adm must-gather command:

    +
    +
    +
$ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest
    +
    +
    +
    +

    The data is saved as /must-gather/must-gather.tar.gz. You can upload this file to a support case on the Red Hat Customer Portal.

    +
    +
  4. +
  5. +

    Optional: Run the oc adm must-gather command with the following options to gather filtered data:

    +
    +
      +
    • +

      Namespace:

      +
      +
      +
$ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- NS=<namespace> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Migration plan:

      +
      +
      +
$ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- PLAN=<migration_plan> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Virtual machine:

      +
      +
      +
$ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- VM=<vm_name> NS=<namespace> /usr/bin/targeted (1)
      +
      +
      +
      +
        +
      1. +

Specify the VM name as it appears in the Plan CR, not the VM ID.

        +
      2. +
      +
      +
    • +
    +
    +
  6. +
+
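
After the command completes, you can unpack the archive locally to inspect the collected logs and CR files (a sketch, assuming the default archive path shown above):

+
+
+
$ tar -xzf must-gather/must-gather.tar.gz -C must-gather/
+
+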
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/accessing-logs-ui/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/accessing-logs-ui/index.html new file mode 100644 index 00000000000..bbdce8ba627 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/accessing-logs-ui/index.html @@ -0,0 +1,92 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Downloading logs and custom resource information from the web console

+
+

You can download logs and information about custom resources (CRs) for a completed, failed, or canceled migration plan or for migrated virtual machines (VMs) by using the OKD web console.

+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click MigrationPlans for virtualization.

    +
  2. +
  3. +

    Click Get logs beside a migration plan name.

    +
  4. +
  5. +

    In the Get logs window, click Get logs.

    +
    +

    The logs are collected. A Log collection complete message is displayed.

    +
    +
  6. +
  7. +

    Click Download logs to download the archive file.

    +
  8. +
  9. +

    To download logs for a migrated VM, click a migration plan name and then click Get logs beside the VM.

    +
  10. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/adding-hook-crs-to-migration-plans-api/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/adding-hook-crs-to-migration-plans-api/index.html new file mode 100644 index 00000000000..e78d1905ee2 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/adding-hook-crs-to-migration-plans-api/index.html @@ -0,0 +1,302 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Adding Hook CRs to a VM migration by using the Forklift API

+
+

You can add a PreHook or a PostHook Hook CR when you migrate a virtual machine from the command line by using the Forklift API. A PreHook runs before a migration; a PostHook runs after it.

+
+
+ + + + + +
+
Note
+
+
+

You can retrieve additional information stored in a secret or in a configMap by using a k8s module.

+
+
+
+
+
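
For example, a task such as the following reads a secret from the cluster (a minimal sketch; the same k8s_info pattern appears in the full playbook later in this procedure):

+
+
+
- k8s_info:
+    api_version: v1
+    kind: Secret
+    name: ssh-credentials
+    namespace: konveyor-forklift
+  register: ssh_credentials
+
+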

For example, you can create a hook CR to install cloud-init on a VM and write a file before migration.

+
+
+
Procedure
+
    +
  1. +

    If needed, create a secret with an SSH private key for the VM. You can either use an existing key or generate a key pair, install the public key on the VM, and base64 encode the private key in the secret.

    +
    +
    +
    apiVersion: v1
    +data:
    +  key: VGhpcyB3YXMgZ2VuZXJhdGVkIHdpdGggc3NoLWtleWdlbiBwdXJlbHkgZm9yIHRoaXMgZXhhbXBsZS4KSXQgaXMgbm90IHVzZWQgYW55d2hlcmUuCi0tLS0tQkVHSU4gT1BFTlNTSCBQUklWQVRFIEtFWS0tLS0tCmIzQmxibk56YUMxclpYa3RkakVBQUFBQUJHNXZibVVBQUFBRWJtOXVaUUFBQUFBQUFBQUJBQUFCbHdBQUFBZHpjMmd0Y24KTmhBQUFBQXdFQUFRQUFBWUVBMzVTTFRReDBFVjdPTWJQR0FqcEsxK2JhQURTTVFuK1NBU2pyTGZLNWM5NGpHdzhDbnA4LwovRHErZHFBR1pxQkg2ZnAxYmVJM1BZZzVWVDk0RVdWQ2RrTjgwY3dEcEo0Z1R0NHFUQ1gzZUYvY2x5VXQyUC9zaTNjcnQ0CjBQdi9wVnZXU1U2TlhHaDJIZC93V0MwcGh5Z0RQOVc5SHRQSUF0OFpnZmV2ZnUwZHpraVl6OHNVaElWU2ZsRGpaNUFqcUcKUjV2TVVUaGlrczEvZVlCeTdiMkFFSEdzYU8xN3NFbWNiYUlHUHZuUFVwWmQrdjkyYU1JdWZoYjhLZkFSbzZ3Ty9ISW1VbQovdDdHWFBJUmxBMUhSV0p1U05odTQzZS9DY3ZYd3Z6RnZrdE9kYXlEQzBMTklHMkpVaURlNWd0UUQ1WHZXc1p3MHQvbEs1CklacjFrZXZRNUJsYWNISmViV1ZNYUQvdllpdFdhSFo4OEF1Y0czaGh2bjkrOGNSTGhNVExiVlFSMWh2UVpBL1JtQXN3eE0KT3VJSmRaUmtxTThLZlF4Z28zQThRNGJhQW1VbnpvM3Zwa0FWdC9uaGtIOTRaRE5rV2U2RlRhdThONStyYTJCZkdjZVA4VApvbjFEeTBLRlpaUlpCREVVRVc0eHdTYUVOYXQ3c2RDNnhpL1d5OURaQUFBRm1NRFBXeDdBejFzZUFBQUFCM056YUMxeWMyCkVBQUFHQkFOK1VpMDBNZEJGZXpqR3p4Z0k2U3RmbTJnQTBqRUova2dFbzZ5M3l1WFBlSXhzUEFwNmZQL3c2dm5hZ0JtYWcKUituNmRXM2lOejJJT1ZVL2VCRmxRblpEZk5ITUE2U2VJRTdlS2t3bDkzaGYzSmNsTGRqLzdJdDNLN2VORDcvNlZiMWtsTwpqVnhvZGgzZjhGZ3RLWWNvQXovVnZSN1R5QUxmR1lIM3IzN3RIYzVJbU0vTEZJU0ZVbjVRNDJlUUk2aGtlYnpGRTRZcExOCmYzbUFjdTI5Z0JCeHJHanRlN0JKbkcyaUJqNzV6MUtXWGZyL2RtakNMbjRXL0Nud0VhT3NEdnh5SmxKdjdleGx6eUVaUU4KUjBWaWJrallidU4zdnduTDE4TDh4YjVMVG5Xc2d3dEN6U0J0aVZJZzN1WUxVQStWNzFyR2NOTGY1U3VTR2E5WkhyME9RWgpXbkJ5WG0xbFRHZy83MklyVm1oMmZQQUxuQnQ0WWI1L2Z2SEVTNFRFeTIxVUVkWWIwR1FQMFpnTE1NVERyaUNYV1VaS2pQCkNuME1ZS053UEVPRzJnSmxKODZONzZaQUZiZjU0WkIvZUdRelpGbnVoVTJydkRlZnEydGdYeG5Iai9FNko5UTh0Q2hXV1UKV1FReEZCRnVNY0VtaERXcmU3SFF1c1l2MXN2UTJRQUFBQU1CQUFFQUFBR0JBSlZtZklNNjdDQmpXcU9KdnFua2EvakRrUwo4TDdpSE5mekg1TnRZWVdPWmRMTlk2L0lRa1pDeFcwTWtSKzlUK0M3QUZKZzBNV2Q5ck5PeUxJZDkxNjZoOVJsNG0xdFJjCnViZ1o2dWZCZ3hGVDlXS21mSEdCNm4zelh5b2pQOEFJTnR6ODVpaUVHVXFFRWtVRVdMd0RGSmdvcFllQ3l1VmZ2ZE92MUgKRm1WWmEwNVo0b3NQNkNENXVmc2djQ1RYQTR6VnZ5ZHVCYkxqdHN5RjdYZjNUdjZUQ1QxU0swZHErQk1OOXRvb0RZaXpwagpzbDh6NzlybXp3eUFyWFlVcnFUUkpsNmpwRkNrWHJLcy9LeG96MHhhbXlMY2RORk9hWE51LzlnTkpjRERsV2hPcFRqNHk4CkpkNXBuV1Jueis1RHJLRFdhY0loUW1CMUxVd2ZLWmQwbVFxaUpzMUMxcXZVUmlKOGExaThKUTI4bHFuWTFRRk9wbk13emcKWEpla2FndThpT1ExRFJlQkhaM0NkcVJUYnY3bVJZSGxramx0dXJmZGc4M3hvM0ErZ1JSR001eUVOcW5xSkplQjhJQVB5UwptMFp0dGdqbHNqNTJ2K1B1NmExMHoxZndKK1VML2N6dTRKeEpOYlp6WTFIMnpLODJBaVI1T3JYNmx2aUEvSWFSRVcwUUFBCkFNQndVeUJpcUc5bEZCUnltL2UvU1VORVMzdHpicUZNdTdIcy84WTV5SnAxKzR6OXUxNGtJR2ttV0Y5eE5HT3hrY3V0cWwKeHVUcndMbjFUaFNQTHQrTjUwTGhVdzR4ZjBhNUxqemdPbklPU0FRbm5HY1Nxa0dTRDlMR21obGE2WmpydFBHY29lQ3JHdAo5M1Vvcmx5YkxNRzFFRFAxWmpKS1RaZzl6OUMwdDlTTGd3ei9DbFhydW9UNXNQVUdKWnUrbHlIZXpSTDRtcHl6OEZMcnlOCkdNci9leVM5bWdISjNVVkZEYjNIZ3BaK1E1SUdBRU5rZVZEcHIwMGhCZXZndGd6YWtBQUFEQkFQVXQ1RitoMnBVby94V1YKenRkcVQvMzA4dFB5MXVMMU1lWFoydEJPQmRwSDJyd0JzdWt0aTIySGtWZUZXQjJFdUlFUXppMzY3MGc1UGdxR1p4Vng4dQpobEE0Rkg4ZXN1NTNQckZqVW9EeFJhb3d3WXBFcFh5Y2pnNUE1MStwR1VQcWljWjB0YjliaWlhc3BWWXZhWW5sdGlnVG5iClN0UExMY29nemNiL0dGcVYyaXlzc3lwTlMwKzBNRTUxcEtxWGNaS2swbi8vVHpZWWs4TW8vZzRsQ3pmUEZQUlZrVVM5blIKWU1pQzRlcEk0TERmbVdnM0xLQ2N1Zk85all3aWgwYlFBQUFNRUE2WEtldDhEMHNvc0puZVh5WFZGd0dyVyszNlhBVGRQTwpMWDdjaStjYzFoOGV1eHdYQWx3aTJJNFhxSmJBVjBsVEhuVGEycXN3Uy9RQlpJUUJWSkZlVjVyS1daZTc4R2F3d1pWTFZNCldETmNwdFFyRTFaM2pGNS9TdUVzdlVxSDE0Tkc5RUFXWG1iUkNzelE0Vlk3NzQrSi9sTFkvMnlDT1diNzlLYTJ5OGxvYUoKVXczWWVtSld3blp2R3hKNldsL3BmQ2xYN3lEVXlXUktLdGl0cWNjbmpCWVkyRE1tZURwdURDYy9ZdDZDc3dLRmRkMkJ1UwpGZGt5cDlZY3VMaDlLZEFBQUFIR3BoYzI5dVFFRlVMVGd3TWxVdWJXOXVkR3hsYjI0dWF
XNTBjbUVCQWdNRUJRWT0KLS0tLS1FTkQgT1BFTlNTSCBQUklWQVRFIEtFWS0tLS0tCgo=
    +kind: Secret
    +metadata:
    +  name: ssh-credentials
    +  namespace: konveyor-forklift
    +type: Opaque
    +
    +
    +
  2. +
  3. +

Encode your playbook by concatenating the file and piping it to base64, for example:

    +
    +
    +
    $ cat playbook.yml | base64 -w0
    +
    +
    +
    + + + + + +
    +
    Note
    +
    +
    +

    You can also use a here document to encode a playbook:

    +
    +
    +
    +
    $ cat << EOF | base64 -w0
    +- hosts: localhost
    +  tasks:
    +  - debug:
    +      msg: test
    +EOF
    +
    +
    +
    +
    +
  4. +
  5. +

    Create a Hook CR:

    +
    +
    +
    apiVersion: forklift.konveyor.io/v1beta1
    +kind: Hook
    +metadata:
    +  name: playbook
    +  namespace: konveyor-forklift
    +spec:
    +  image: registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel8:v1.8.2-2
    +  playbook: LSBuYW1lOiBNYWluCiAgaG9zdHM6IGxvY2FsaG9zdAogIHRhc2tzOgogIC0gbmFtZTogTG9hZCBQbGFuCiAgICBpbmNsdWRlX3ZhcnM6CiAgICAgIGZpbGU6IHBsYW4ueW1sCiAgICAgIG5hbWU6IHBsYW4KCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3ZhcnM6CiAgICAgIGZpbGU6IHdvcmtsb2FkLnltbAogICAgICBuYW1lOiB3b3JrbG9hZAoKICAtIG5hbWU6IAogICAgZ2V0ZW50OgogICAgICBkYXRhYmFzZTogcGFzc3dkCiAgICAgIGtleTogInt7IGFuc2libGVfdXNlcl9pZCB9fSIKICAgICAgc3BsaXQ6ICc6JwoKICAtIG5hbWU6IEVuc3VyZSBTU0ggZGlyZWN0b3J5IGV4aXN0cwogICAgZmlsZToKICAgICAgcGF0aDogfi8uc3NoCiAgICAgIHN0YXRlOiBkaXJlY3RvcnkKICAgICAgbW9kZTogMDc1MAogICAgZW52aXJvbm1lbnQ6CiAgICAgIEhPTUU6ICJ7eyBhbnNpYmxlX2ZhY3RzLmdldGVudF9wYXNzd2RbYW5zaWJsZV91c2VyX2lkXVs0XSB9fSIKCiAgLSBrOHNfaW5mbzoKICAgICAgYXBpX3ZlcnNpb246IHYxCiAgICAgIGtpbmQ6IFNlY3JldAogICAgICBuYW1lOiBzc2gtY3JlZGVudGlhbHMKICAgICAgbmFtZXNwYWNlOiBrb252ZXlvci1mb3JrbGlmdAogICAgcmVnaXN0ZXI6IHNzaF9jcmVkZW50aWFscwoKICAtIG5hbWU6IENyZWF0ZSBTU0gga2V5CiAgICBjb3B5OgogICAgICBkZXN0OiB+Ly5zc2gvaWRfcnNhCiAgICAgIGNvbnRlbnQ6ICJ7eyBzc2hfY3JlZGVudGlhbHMucmVzb3VyY2VzWzBdLmRhdGEua2V5IHwgYjY0ZGVjb2RlIH19IgogICAgICBtb2RlOiAwNjAwCgogIC0gYWRkX2hvc3Q6CiAgICAgIG5hbWU6ICJ7eyB3b3JrbG9hZC52bS5pcGFkZHJlc3MgfX0iCiAgICAgIGFuc2libGVfdXNlcjogcm9vdAogICAgICBncm91cHM6IHZtcwoKLSBob3N0czogdm1zCiAgdGFza3M6CiAgLSBuYW1lOiBJbnN0YWxsIGNsb3VkLWluaXQKICAgIGRuZjoKICAgICAgbmFtZToKICAgICAgLSBjbG91ZC1pbml0CiAgICAgIHN0YXRlOiBsYXRlc3QKCiAgLSBuYW1lOiBDcmVhdGUgVGVzdCBGaWxlCiAgICBjb3B5OgogICAgICBkZXN0OiAvdGVzdC50eHQKICAgICAgY29udGVudDogIkhlbGxvIFdvcmxkIgogICAgICBtb2RlOiAwNjQ0Cg==
    +  serviceAccount: forklift-controller (1)
    +
    +
    +
    +
      +
    1. +

Specify a serviceAccount with which to run the hook, in order to control access to resources on the cluster.

      +
      + + + + + +
      +
      Note
      +
      +
      +

To decode an attached playbook, retrieve the resource with custom output and pipe it to base64 -d. For example:

      +
      +
      +
      +
       oc get -n konveyor-forklift hook playbook -o \
      +   go-template='{{ .spec.playbook }}' | base64 -d
      +
      +
      +
      +
      +
      +

      The playbook encoded here runs the following:

      +
      +
      +
      +
      - name: Main
      +  hosts: localhost
      +  tasks:
      +  - name: Load Plan
      +    include_vars:
      +      file: plan.yml
      +      name: plan
      +
      +  - name: Load Workload
      +    include_vars:
      +      file: workload.yml
      +      name: workload
      +
      +  - name:
      +    getent:
      +      database: passwd
      +      key: "{{ ansible_user_id }}"
      +      split: ':'
      +
      +  - name: Ensure SSH directory exists
      +    file:
      +      path: ~/.ssh
      +      state: directory
      +      mode: 0750
      +    environment:
      +      HOME: "{{ ansible_facts.getent_passwd[ansible_user_id][4] }}"
      +
      +  - k8s_info:
      +      api_version: v1
      +      kind: Secret
      +      name: ssh-credentials
      +      namespace: konveyor-forklift
      +    register: ssh_credentials
      +
      +  - name: Create SSH key
      +    copy:
      +      dest: ~/.ssh/id_rsa
      +      content: "{{ ssh_credentials.resources[0].data.key | b64decode }}"
      +      mode: 0600
      +
      +  - add_host:
      +      name: "{{ workload.vm.ipaddress }}"
      +      ansible_user: root
      +      groups: vms
      +
      +- hosts: vms
      +  tasks:
      +  - name: Install cloud-init
      +    dnf:
      +      name:
      +      - cloud-init
      +      state: latest
      +
      +  - name: Create Test File
      +    copy:
      +      dest: /test.txt
      +      content: "Hello World"
      +      mode: 0644
      +
      +
      +
    2. +
    +
    +
  6. +
  7. +

    Create a Plan CR using the hook:

    +
    +
    +
    kind: Plan
    +apiVersion: forklift.konveyor.io/v1beta1
    +metadata:
    +  name: test
    +  namespace: konveyor-forklift
    +spec:
    +  map:
    +    network:
    +      namespace: "konveyor-forklift"
    +      name: "network"
    +    storage:
    +      namespace: "konveyor-forklift"
    +      name: "storage"
    +  provider:
    +    source:
    +      namespace: "konveyor-forklift"
    +      name: "boston"
    +    destination:
    +      namespace: "konveyor-forklift"
    +      name: host
    +  targetNamespace: "konveyor-forklift"
    +  vms:
    +    - id: vm-2861
    +      hooks:
    +        - hook:
    +            namespace: konveyor-forklift
    +            name: playbook
    +          step: PreHook (1)
    +
    +
    +
    +
      +
    1. +

      Options are PreHook, to run the hook before the migration, and PostHook, to run the hook after the migration.

      +
    2. +
    +
    +
  8. +
+
+
+ + + + + +
+
Important
+
+
+

For a PreHook to run on a VM, the VM must be running and accessible via SSH.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/adding-source-provider/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/adding-source-provider/index.html new file mode 100644 index 00000000000..ff885be082c --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/adding-source-provider/index.html @@ -0,0 +1,82 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click MigrationProviders for virtualization.

    +
  2. +
  3. +

    Click Create Provider.

    +
  4. +
  5. +

    Click Create provider to add and save the provider.

    +
    +

    The provider appears in the list of providers.

    +
    +
  6. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/adding-virt-provider/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/adding-virt-provider/index.html new file mode 100644 index 00000000000..09569d2008e --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/adding-virt-provider/index.html @@ -0,0 +1,116 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Adding a KubeVirt destination provider

+
+

You can add a KubeVirt destination provider to the OKD web console in addition to the default KubeVirt destination provider, which is the provider where you installed Forklift.

+
+
+
Prerequisites
+ +
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click MigrationProviders for virtualization.

    +
  2. +
  3. +

    Click Create Provider.

    +
  4. +
  5. +

    Select KubeVirt from the Provider type list.

    +
  6. +
  7. +

    Specify the following fields:

    +
    +
      +
    • +

      Provider name: Specify the provider name to display in the list of target providers.

      +
    • +
    • +

      Kubernetes API server URL: Specify the OKD cluster API endpoint.

      +
    • +
    • +

      Service account token: Specify the cluster-admin service account token.

      +
      +

      If both URL and Service account token are left blank, the local OKD cluster is used.

      +
      +
    • +
    +
    +
  8. +
  9. +

    Click Create.

    +
    +

    The provider appears in the list of providers.

    +
    +
  10. +
+
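
If you need to obtain a token for a service account, one option on Kubernetes 1.24 or later is the kubectl create token command (a sketch; the service account must have sufficient privileges, such as the cluster-admin role mentioned above):

+
+
+
$ kubectl create token <service_account> -n <namespace>
+
+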
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/canceling-migration-cli/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/canceling-migration-cli/index.html new file mode 100644 index 00000000000..63e88682b81 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/canceling-migration-cli/index.html @@ -0,0 +1,132 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Canceling a migration

+
+

You can cancel an entire migration or individual virtual machines (VMs) while a migration is in progress from the command line interface (CLI).

+
+
+
Canceling an entire migration
+
    +
  • +

    Delete the Migration CR:

    +
    +
    +
    $ kubectl delete migration <migration> -n <namespace> (1)
    +
    +
    +
    +
      +
    1. +

      Specify the name of the Migration CR.

      +
    2. +
    +
    +
  • +
+
+
+
Canceling the migration of individual VMs
+
    +
  1. +

    Add the individual VMs to the spec.cancel block of the Migration manifest:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Migration
    +metadata:
    +  name: <migration>
    +  namespace: <namespace>
    +...
    +spec:
    +  cancel:
    +  - id: vm-102 (1)
    +  - id: vm-203
    +  - name: rhel8-vm
    +EOF
    +
    +
    +
    +
      +
    1. +

      You can specify a VM by using the id key or the name key.

      +
      +

The value of the id key is the managed object reference for a VMware VM or the VM UUID for an oVirt VM.

      +
      +
    2. +
    +
    +
  2. +
  3. +

    Retrieve the Migration CR to monitor the progress of the remaining VMs:

    +
    +
    +
    $ kubectl get migration/<migration> -n <namespace> -o yaml
    +
    +
    +
  4. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/canceling-migration-ui/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/canceling-migration-ui/index.html new file mode 100644 index 00000000000..cedb188de5b --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/canceling-migration-ui/index.html @@ -0,0 +1,92 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Canceling a migration

+
+

You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the OKD web console.

+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Plans for virtualization.

    +
  2. +
  3. +

    Click the name of a running migration plan to view the migration details.

    +
  4. +
  5. +

    Select one or more VMs and click Cancel.

    +
  6. +
  7. +

    Click Yes, cancel to confirm the cancellation.

    +
    +

In the Migration details by VM list, the status of the canceled VMs is Canceled. The migrated and unmigrated virtual machines are not affected.

    +
    +
  8. +
+
+
+

You can restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.

+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/changing-precopy-intervals/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/changing-precopy-intervals/index.html new file mode 100644 index 00000000000..41ade68818b --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/changing-precopy-intervals/index.html @@ -0,0 +1,92 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Changing precopy intervals for warm migration

+
+

You can change the snapshot interval by patching the ForkliftController custom resource (CR).

+
+
+
Procedure
+
    +
  • +

    Patch the ForkliftController CR:

    +
    +
    +
    $ kubectl patch forkliftcontroller/<forklift-controller> -n konveyor-forklift -p '{"spec": {"controller_precopy_interval": <60>}}' --type=merge (1)
    +
    +
    +
    +
      +
    1. +

      Specify the precopy interval in minutes. The default value is 60.

      +
      +

      You do not need to restart the forklift-controller pod.

      +
      +
    2. +
    +
    +
  • +
+
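
To confirm the new interval, you can read the value back from the CR (a sketch, using the same CR name and namespace as the patch command above):

+
+
+
$ kubectl get forkliftcontroller/<forklift-controller> -n konveyor-forklift \
+    -o jsonpath='{.spec.controller_precopy_interval}'
+
+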
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/collected-logs-cr-info/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/collected-logs-cr-info/index.html new file mode 100644 index 00000000000..2358db5fb13 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/collected-logs-cr-info/index.html @@ -0,0 +1,183 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Collected logs and custom resource information

+
+

You can download logs and custom resource (CR) YAML files for the following targets by using the OKD web console or the command line interface (CLI):

+
+
+
    +
  • +

    Migration plan: Web console or CLI.

    +
  • +
  • +

    Virtual machine: Web console or CLI.

    +
  • +
  • +

    Namespace: CLI only.

    +
  • +
+
+
+

The must-gather tool collects the following logs and CR files in an archive file:

+
+
+
    +
  • +

    CRs:

    +
    +
      +
    • +

      DataVolume CR: Represents a disk mounted on a migrated VM.

      +
    • +
    • +

      VirtualMachine CR: Represents a migrated VM.

      +
    • +
    • +

      Plan CR: Defines the VMs and storage and network mapping.

      +
    • +
    • +

      Job CR: Optional: Represents a pre-migration hook, a post-migration hook, or both.

      +
    • +
    +
    +
  • +
  • +

    Logs:

    +
    +
      +
    • +

      importer pod: Disk-to-data-volume conversion log. The importer pod naming convention is importer-<migration_plan>-<vm_id><5_char_id>, for example, importer-mig-plan-ed90dfc6-9a17-4a8btnfh, where ed90dfc6-9a17-4a8 is a truncated oVirt VM ID and btnfh is the generated 5-character ID.

      +
    • +
    • +

      conversion pod: VM conversion log. The conversion pod runs virt-v2v, which installs and configures device drivers on the PVCs of the VM. The conversion pod naming convention is <migration_plan>-<vm_id><5_char_id>.

      +
    • +
    • +

      virt-launcher pod: VM launcher log. When a migrated VM is powered on, the virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

      +
    • +
    • +

      forklift-controller pod: The log is filtered for the migration plan, virtual machine, or namespace specified by the must-gather command.

      +
    • +
    • +

      forklift-must-gather-api pod: The log is filtered for the migration plan, virtual machine, or namespace specified by the must-gather command.

      +
    • +
    • +

      hook-job pod: The log is filtered for hook jobs. The hook-job naming convention is <migration_plan>-<vm_id><5_char_id>, for example, plan2j-vm-3696-posthook-4mx85 or plan2j-vm-3696-prehook-mwqnl.

      +
      + + + + + +
      +
      Note
      +
      +
      +

      Empty or excluded log files are not included in the must-gather archive file.

      +
      +
      +
      +
    • +
    +
    +
  • +
+
+
+
Example must-gather archive structure for a VMware migration plan
+
+
must-gather
+└── namespaces
+    ├── target-vm-ns
+    │   ├── crs
+    │   │   ├── datavolume
+    │   │   │   ├── mig-plan-vm-7595-tkhdz.yaml
+    │   │   │   ├── mig-plan-vm-7595-5qvqp.yaml
+    │   │   │   └── mig-plan-vm-8325-xccfw.yaml
+    │   │   └── virtualmachine
+    │   │       ├── test-test-rhel8-2disks2nics.yaml
+    │   │       └── test-x2019.yaml
+    │   └── logs
+    │       ├── importer-mig-plan-vm-7595-tkhdz
+    │       │   └── current.log
+    │       ├── importer-mig-plan-vm-7595-5qvqp
+    │       │   └── current.log
+    │       ├── importer-mig-plan-vm-8325-xccfw
+    │       │   └── current.log
+    │       ├── mig-plan-vm-7595-4glzd
+    │       │   └── current.log
+    │       └── mig-plan-vm-8325-4zw49
+    │           └── current.log
+    └── openshift-mtv
+        ├── crs
+        │   └── plan
+        │       └── mig-plan-cold.yaml
+        └── logs
+            ├── forklift-controller-67656d574-w74md
+            │   └── current.log
+            └── forklift-must-gather-api-89fc7f4b6-hlwb6
+                └── current.log
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/common-attributes/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/common-attributes/index.html new file mode 100644 index 00000000000..1c7d2835188 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/common-attributes/index.html @@ -0,0 +1,66 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/compatibility-guidelines/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/compatibility-guidelines/index.html new file mode 100644 index 00000000000..0a4cbe3a5eb --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/compatibility-guidelines/index.html @@ -0,0 +1,137 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Software compatibility guidelines

+
+
+
+

You must install compatible software versions.

+
+ + ++++++++ + + + + + + + + + + + + + + + + + + + + +
Table 1. Compatible software versions
ForkliftOKDKubeVirtVMware vSphereoVirtOpenStack

2.3.0

4.10 or later

4.10 or later

6.5 or later

4.4 SP1 or later

16.1 or later

+
+ + + + + +
+
Note
+
+
Migration from oVirt 4.3
+
+

Forklift was tested only with oVirt (RHV) 4.4 SP1. Migration from oVirt 4.3 is not supported; however, migrations from oVirt 4.3.11 were tested with Forklift 2.3 and may work in practice in many environments.

+
+
+

It is therefore recommended to upgrade oVirt Manager (RHVM) to the supported version listed above before migrating to KubeVirt.

+
+
+
+
+
+
+

OpenShift Operator Life Cycles

+
+
+

For more information about the software maintenance Life Cycle classifications for Operators shipped by Red Hat for use with OpenShift Container Platform, see OpenShift Operator Life Cycles.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/configuring-mtv-operator/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/configuring-mtv-operator/index.html new file mode 100644 index 00000000000..20b4893bcb4 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/configuring-mtv-operator/index.html @@ -0,0 +1,202 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Configuring the Forklift Operator

+
+

You can configure all of the following settings of the Forklift Operator by modifying the ForkliftController CR, or in the Settings section of the Overview page, unless otherwise indicated.

+
+
+
    +
  • +

    Maximum number of virtual machines (VMs) per plan that can be migrated simultaneously.

    +
  • +
  • +

How long must-gather reports are retained before being automatically deleted.

    +
  • +
  • +

    CPU limit allocated to the main controller container.

    +
  • +
  • +

    Memory limit allocated to the main controller container.

    +
  • +
  • +

    Interval at which a new snapshot is requested before initiating a warm migration.

    +
  • +
  • +

    Frequency with which the system checks the status of snapshot creation or removal during a warm migration.

    +
  • +
  • +

    Percentage of space in persistent volumes allocated as file system overhead when the storageclass is filesystem (ForkliftController CR only).

    +
  • +
  • +

    Fixed amount of additional space allocated in persistent block volumes. This setting is applicable for any storageclass that is block-based (ForkliftController CR only).

    +
  • +
  • +

    Configuration map of operating systems to preferences for vSphere source providers (ForkliftController CR only).

    +
  • +
  • +

Configuration map of operating systems to preferences for oVirt source providers (ForkliftController CR only).

    +
  • +
+
+
+

The procedure for configuring these settings by using the user interface is presented in Configuring MTV settings. The procedure for configuring them by modifying the ForkliftController CR follows.

+
+
+
Procedure
+
    +
  • +

    Change a parameter’s value in the spec portion of the ForkliftController CR by adding the label and value as follows:

    +
  • +
+
+
+
+
spec:
+  label: value (1)
+
+
+
+
    +
  1. +

    Labels you can configure using the CLI are shown in the table that follows, along with a description of each label and its default value.

    +
  2. +
+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Forklift Operator labels
LabelDescriptionDefault value

controller_max_vm_inflight

The maximum number of VMs per plan that can be migrated simultaneously.

20

must_gather_api_cleanup_max_age

The duration in hours for retaining must-gather reports before they are automatically deleted.

-1 (disabled)

controller_container_limits_cpu

The CPU limit allocated to the main controller container.

500m

controller_container_limits_memory

The memory limit allocated to the main controller container.

800Mi

controller_precopy_interval

The interval in minutes at which a new snapshot is requested before initiating a warm migration.

60

controller_snapshot_status_check_rate_seconds

The frequency in seconds with which the system checks the status of snapshot creation or removal during a warm migration.

10

controller_filesystem_overhead

Percentage of space in persistent volumes allocated as file system overhead when the storageclass is filesystem.

+

ForkliftController CR only.

10

controller_block_overhead

Fixed amount of additional space allocated in persistent block volumes. This setting is applicable for any storageclass that is block-based. It can be used when data, such as encryption headers, is written to the persistent volumes in addition to the content of the virtual disk.

+

ForkliftController CR only.

0

vsphere_osmap_configmap_name

Configuration map for vSphere source providers. This configuration map maps the operating system of the incoming VM to a KubeVirt preference name. This configuration map needs to be in the namespace where the Forklift Operator is deployed.

+

To see the list of preferences in your KubeVirt environment, open the OKD web console and click VirtualizationPreferences.

+

You can add values to the configuration map when this label has the default value, forklift-vsphere-osmap. In order to override or delete values, specify a configuration map that is different from forklift-vsphere-osmap.

+

ForkliftController CR only.

forklift-vsphere-osmap

ovirt_osmap_configmap_name

Configuration map for oVirt source providers. This configuration map maps the operating system of the incoming VM to a KubeVirt preference name. This configuration map needs to be in the namespace where the Forklift Operator is deployed.

+

To see the list of preferences in your KubeVirt environment, open the OKD web console and click VirtualizationPreferences.

+

You can add values to the configuration map when this label has the default value, forklift-ovirt-osmap. In order to override or delete values, specify a configuration map that is different from forklift-ovirt-osmap.

+

ForkliftController CR only.

forklift-ovirt-osmap

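
For example, to allow 30 VMs per plan to be migrated simultaneously, you might set the controller_max_vm_inflight label from the table above (a sketch, following the same patch pattern used elsewhere in this document):

+
+
+
$ kubectl patch forkliftcontroller/<forklift-controller> -n konveyor-forklift \
+    --type=merge -p '{"spec": {"controller_max_vm_inflight": 30}}'
+
+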
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-migration-plan-2-6-3/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-migration-plan-2-6-3/index.html new file mode 100644 index 00000000000..59998ab4acf --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-migration-plan-2-6-3/index.html @@ -0,0 +1,139 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

 +The Create migration plan pane opens. It displays the source provider’s name and suggestions for a target provider and namespace, a network map, and a storage map.
+  1. Enter the Plan name.
+  2. Make any needed changes to the editable items.
+  3. Click Add mapping to edit a suggested network mapping or a storage mapping, or to add one or more additional mappings.
+  4. Click Create migration plan.

+
+
+

+ +Forklift validates the migration plan and the Plan details page opens, indicating whether the plan is ready for use or contains an error. The details of the plan are listed, and you can edit the items you filled in on the previous page. If you make any changes, Forklift validates the plan again.

+
+
+
    +
  1. +

    VMware source providers only (All optional):

    +
    +
      +
    • +

      Preserving static IPs of VMs: By default, virtual network interface controllers (vNICs) change during the migration process. As a result, vNICs that are configured with a static IP linked to the interface name in the guest VM lose their IP. To avoid this, click the Edit icon next to Preserve static IPs and toggle the Whether to preserve the static IPs switch in the window that opens. Then click Save.

      +
      +

      Forklift then issues a warning message about any VMs for which vNIC properties are missing. To retrieve any missing vNIC properties, run those VMs in vSphere in order for the vNIC properties to be reported to Forklift.

      +
      +
    • +
    • +

Entering a list of decryption passphrases for disks encrypted using Linux Unified Key Setup (LUKS): To enter a list of decryption passphrases for LUKS-encrypted devices, in the Settings section, click the Edit icon next to Disk decryption passphrases, enter the passphrases, and then click Save. You do not need to enter the passphrases in a specific order; for each LUKS-encrypted device, Forklift tries each passphrase until one unlocks the device.

      +
    • +
    • +

      Specifying a root device: Applies to multi-boot VM migrations only. By default, Forklift uses the first bootable device detected as the root device.

      +
      +

      To specify a different root device, in the Settings section, click the Edit icon next to Root device and choose a device from the list of commonly-used options, or enter a device in the text box.

      +
      +
      +

      Forklift uses the following format for disk location: /dev/sd<disk_identifier><disk_partition>. For example, if the second disk is the root device and the operating system is on the disk’s second partition, the format would be: /dev/sdb2. After you enter the boot device, click Save.

      +
      +
      +

      If the conversion fails because the boot device provided is incorrect, it is possible to get the correct information by looking at the conversion pod logs.

      +
      +
    • +
    +
    +
  2. +
  3. +

    oVirt source providers only (Optional):

    +
    +
      +
    • +

Preserving the CPU model of VMs that are migrated from oVirt: Generally, the CPU model (type) for oVirt VMs is set at the cluster level, but it can be set at the VM level, which is called a custom CPU model. By default, Forklift sets the CPU model on the destination cluster as follows: Forklift preserves custom CPU settings for VMs that have them, but, for VMs without custom CPU settings, Forklift does not set the CPU model. Instead, the CPU model is later set by KubeVirt.

      +
      +

      To preserve the cluster-level CPU model of your oVirt VMs, in the Settings section, click the Edit icon next to Preserve CPU model. Toggle the Whether to preserve the CPU model switch, and then click Save.

      +
      +
    • +
    +
    +
  4. +
  5. +

    If the plan is valid,

    +
    +
      +
    1. +

      You can run the plan now by clicking Start migration.

      +
    2. +
    3. +

      You can run the plan later by selecting it on the Plans for virtualization page and following the procedure in Running a migration plan.

      +
      +


      +
      +
    4. +
    +
    +
  6. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-migration-plan/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-migration-plan/index.html new file mode 100644 index 00000000000..036b89a58e4 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-migration-plan/index.html @@ -0,0 +1,270 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a migration plan

+
+

You can create a migration plan by using the OKD web console.

+
+
+

A migration plan allows you to group virtual machines to be migrated together or with the same migration parameters, for example, a percentage of the members of a cluster or a complete application.

+
+
+

You can configure a hook to run an Ansible playbook or custom container image during a specified stage of the migration plan.

+
+
+
Prerequisites
+
    +
  • +

    If Forklift is not installed on the target cluster, you must add a target provider on the Providers page of the web console.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click MigrationPlans for virtualization.

    +
  2. +
  3. +

    Click Create plan.

    +
  4. +
  5. +

    Specify the following fields:

    +
    +
      +
    • +

      Plan name: Enter a migration plan name to display in the migration plan list.

      +
    • +
    • +

      Plan description: Optional: Brief description of the migration plan.

      +
    • +
    • +

      Source provider: Select a source provider.

      +
    • +
    • +

      Target provider: Select a target provider.

      +
    • +
    • +

      Target namespace: Do one of the following:

      +
      +
        +
      • +

        Select a target namespace from the list

        +
      • +
      • +

        Create a target namespace by typing its name in the text box, and then clicking create "<the_name_you_entered>"

        +
      • +
      +
      +
    • +
    • +

      You can change the migration transfer network for this plan by clicking Select a different network, selecting a network from the list, and then clicking Select.

      +
      +

      If you defined a migration transfer network for the KubeVirt provider and if the network is in the target namespace, the network that you defined is the default network for all migration plans. Otherwise, the pod network is used.

      +
      +
    • +
    +
    +
  6. +
  7. +

    Click Next.

    +
  8. +
  9. +

    Select options to filter the list of source VMs and click Next.

    +
  10. +
  11. +

    Select the VMs to migrate and then click Next.

    +
  12. +
  13. +

    Select an existing network mapping or create a new network mapping.

    +
  14. +
  15. +

Optional: Click Add to add an additional network mapping.

    +
    +

    To create a new network mapping:

    +
    +
    +
      +
    • +

      Select a target network for each source network.

      +
    • +
    • +

      Optional: Select Save current mapping as a template and enter a name for the network mapping.

      +
    • +
    +
    +
  16. +
  17. +

    Click Next.

    +
  18. +
  19. +

    Select an existing storage mapping, which you can modify, or create a new storage mapping.

    +
    +

    To create a new storage mapping:

    +
    +
    +
      +
    1. +

      If your source provider is VMware, select a Source datastore and a Target storage class.

      +
    2. +
    3. +

      If your source provider is oVirt, select a Source storage domain and a Target storage class.

      +
    4. +
    5. +

If your source provider is OpenStack, select a Source volume type and a Target storage class.

      +
    6. +
    +
    +
  20. +
  21. +

    Optional: Select Save current mapping as a template and enter a name for the storage mapping.

    +
  22. +
  23. +

    Click Next.

    +
  24. +
  25. +

    Select a migration type and click Next.

    +
    +
      +
    • +

      Cold migration: The source VMs are stopped while the data is copied.

      +
    • +
    • +

      Warm migration: The source VMs run while the data is copied incrementally. Later, you will run the cutover, which stops the VMs and copies the remaining VM data and metadata.

      +
      + + + + + +
      +
      Note
      +
      +
      +

      Warm migration is supported only from vSphere and oVirt.

      +
      +
      +
      +
    • +
    +
    +
  26. +
  27. +

    Click Next.

    +
  28. +
  29. +

    Optional: You can create a migration hook to run an Ansible playbook before or after migration:

    +
    +
      +
    1. +

      Click Add hook.

      +
    2. +
    3. +

      Select the Step when the hook will be run: pre-migration or post-migration.

      +
    4. +
    5. +

      Select a Hook definition:

      +
      +
        +
      • +

        Ansible playbook: Browse to the Ansible playbook or paste it into the field.

        +
      • +
      • +

        Custom container image: If you do not want to use the default hook-runner image, enter the image path: <registry_path>/<image_name>:<tag>.

        +
        + + + + + +
        +
        Note
        +
        +
        +

        The registry must be accessible to your OKD cluster.

        +
        +
        +
        +
      • +
      +
      +
    6. +
    +
    +
  30. +
  31. +

    Click Next.

    +
  32. +
  33. +

    Review your migration plan and click Finish.

    +
    +

    The migration plan is saved on the Plans page.

    +
    +
    +

You can click the Options menu of the migration plan and select View details to verify the migration plan details.

    +
    +
  34. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-network-mapping/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-network-mapping/index.html new file mode 100644 index 00000000000..3296969c677 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-network-mapping/index.html @@ -0,0 +1,122 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a network mapping

+
+

You can create one or more network mappings by using the OKD web console to map source networks to KubeVirt networks.

+
+
+
Prerequisites
+
    +
  • +

    Source and target providers added to the OKD web console.

    +
  • +
  • +

    If you map more than one source and target network, each additional KubeVirt network requires its own network attachment definition.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click MigrationNetworkMaps for virtualization.

    +
  2. +
  3. +

    Click Create NetworkMap.

    +
  4. +
  5. +

    Specify the following fields:

    +
    +
      +
    • +

      Name: Enter a name to display in the network mappings list.

      +
    • +
    • +

      Source provider: Select a source provider.

      +
    • +
    • +

      Target provider: Select a target provider.

      +
    • +
    +
    +
  6. +
  7. +

    Select a Source network and a Target namespace/network.

    +
  8. +
  9. +

    Optional: Click Add to create additional network mappings or to map multiple source networks to a single target network.

    +
  10. +
  11. +

    If you create an additional network mapping, select the network attachment definition as the target network.

    +
  12. +
  13. +

    Click Create.

    +
    +

    The network mapping is displayed on the NetworkMaps screen.

    +
    +
  14. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-storage-mapping/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-storage-mapping/index.html new file mode 100644 index 00000000000..3c7b6803215 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-storage-mapping/index.html @@ -0,0 +1,138 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a storage mapping

+
+

You can create a storage mapping by using the OKD web console to map source disk storages to KubeVirt storage classes.

+
+
+
Prerequisites
+
    +
  • +

    Source and target providers added to the OKD web console.

    +
  • +
  • +

    Local and shared persistent storage that support VM migration.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click MigrationStorageMaps for virtualization.

    +
  2. +
  3. +

    Click Create StorageMap.

    +
  4. +
  5. +

    Specify the following fields:

    +
    +
      +
    • +

      Name: Enter a name to display in the storage mappings list.

      +
    • +
    • +

      Source provider: Select a source provider.

      +
    • +
    • +

      Target provider: Select a target provider.

      +
    • +
    +
    +
  6. +
  7. +

    To create a storage mapping, click Add and map storage sources to target storage classes as follows:

    +
    +
      +
    1. +

      If your source provider is VMware vSphere, select a Source datastore and a Target storage class.

      +
    2. +
    3. +

      If your source provider is oVirt, select a Source storage domain and a Target storage class.

      +
    4. +
    5. +

If your source provider is OpenStack, select a Source volume type and a Target storage class.

      +
    6. +
    7. +

      If your source provider is a set of one or more OVA files, select a Source and a Target storage class for the dummy storage that applies to all virtual disks within the OVA files.

      +
    8. +
    9. +

If your source provider is KubeVirt, select a Source storage class and a Target storage class.

      +
    10. +
    11. +

      Optional: Click Add to create additional storage mappings, including mapping multiple storage sources to a single target storage class.

      +
    12. +
    +
    +
  8. +
  9. +

    Click Create.

    +
    +

    The mapping is displayed on the StorageMaps page.

    +
    +
  10. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-validation-rule/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-validation-rule/index.html new file mode 100644 index 00000000000..d025862940e --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-validation-rule/index.html @@ -0,0 +1,238 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a validation rule

+
+

You create a validation rule by applying a config map custom resource (CR) containing the rule to the Validation service.

+
+
+ + + + + +
+
Important
+
+
+
    +
  • +

    If you create a rule with the same name as an existing rule, the Validation service performs an OR operation with the rules.

    +
  • +
  • +

    If you create a rule that contradicts a default rule, the Validation service will not start.

    +
  • +
+
+
+
+
+
Validation rule example
+

Validation rules are based on virtual machine (VM) attributes collected by the Provider Inventory service.

+
+
+

For example, the VMware API uses this path to check whether a VMware VM has NUMA node affinity configured: MOR:VirtualMachine.config.extraConfig["numa.nodeAffinity"].

+
+
+

The Provider Inventory service simplifies this configuration and returns a testable attribute with a list value:

+
+
+
+
"numaNodeAffinity": [
+    "0",
+    "1"
+],
+
+
+
+

You create a Rego query, based on this attribute, and add it to the forklift-validation-config config map:

+
+
+
+
`count(input.numaNodeAffinity) != 0`
+
+
+
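
Wrapped into a complete rule, the query might look like the following (a sketch that follows the same pattern as the multiple-disks example in the procedure below; the rule name and flag values are illustrative):

+
+
+
package io.konveyor.forklift.vmware
+
+has_numa_node_affinity {
+    count(input.numaNodeAffinity) != 0
+}
+
+concerns[flag] {
+    has_numa_node_affinity
+    flag := {
+        "category": "Warning",
+        "label": "NUMA node affinity detected",
+        "assessment": "NUMA node affinity cannot be preserved during migration."
+    }
+}
+
+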
+
Procedure
+
    +
  1. +

    Create a config map CR according to the following example:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: v1
    +kind: ConfigMap
    +metadata:
    +  name: <forklift-validation-config>
    +  namespace: konveyor-forklift
    +data:
    +  vmware_multiple_disks.rego: |-
    +    package <provider_package> (1)
    +
    +    has_multiple_disks { (2)
    +      count(input.disks) > 1
    +    }
    +
    +    concerns[flag] {
    +      has_multiple_disks (3)
    +        flag := {
    +          "category": "<Information>", (4)
    +          "label": "Multiple disks detected",
    +          "assessment": "Multiple disks detected on this VM."
    +        }
    +    }
    +EOF
    +
    +
    +
    +
      +
    1. +

      Specify the provider package name. Allowed values are io.konveyor.forklift.vmware for VMware and io.konveyor.forklift.ovirt for oVirt.

      +
    2. +
    3. +

      Specify the concerns name and Rego query.

      +
    4. +
    5. +

      Specify the concerns name and flag parameter values.

      +
    6. +
    7. +

      Allowed values are Critical, Warning, and Information.

      +
    8. +
    +
    +
  2. +
  3. +

    Stop the Validation pod by scaling the forklift-controller deployment to 0:

    +
    +
    +
    $ kubectl scale -n konveyor-forklift --replicas=0 deployment/forklift-controller
    +
    +
    +
  4. +
  5. +

    Start the Validation pod by scaling the forklift-controller deployment to 1:

    +
    +
    +
    $ kubectl scale -n konveyor-forklift --replicas=1 deployment/forklift-controller
    +
    +
    +
  6. +
  7. +

    Check the Validation pod log to verify that the pod started:

    +
    +
    +
    $ kubectl logs -f <validation_pod>
    +
    +
    +
    +

    If the custom rule conflicts with a default rule, the Validation pod will not start.

    +
    +
  8. +
  9. +

    Remove the source provider:

    +
    +
    +
    $ kubectl delete provider <provider> -n konveyor-forklift
    +
    +
    +
  10. +
  11. +

    Add the source provider to apply the new rule:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Provider
    +metadata:
    +  name: <provider>
    +  namespace: konveyor-forklift
    +spec:
    +  type: <provider_type> (1)
    +  url: <api_end_point> (2)
    +  secret:
    +    name: <secret> (3)
    +    namespace: konveyor-forklift
    +EOF
    +
    +
    +
    +
      +
    1. +

      Allowed values are ovirt, vsphere, and openstack.

      +
    2. +
    3. +

Specify the API end point URL, for example, https://<vCenter_host>/sdk for vSphere, https://<engine_host>/ovirt-engine/api for oVirt, or https://<identity_service>/v3 for OpenStack.

      +
    4. +
    5. +

      Specify the name of the provider Secret CR.

      +
    6. +
    +
    +
  12. +
+
+
+

You must update the rules version after creating a custom rule so that the Inventory service detects the changes and validates the VMs.

+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-vddk-image/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-vddk-image/index.html new file mode 100644 index 00000000000..ea6c6cc92c4 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-vddk-image/index.html @@ -0,0 +1,201 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a VDDK image

+
+

Forklift can use the VMware Virtual Disk Development Kit (VDDK) SDK to accelerate transferring virtual disks from VMware vSphere.

+
+
+ + + + + +
+
Note
+
+
+

Creating a VDDK image, although optional, is highly recommended.

+
+
+
+
+

To make use of this feature, you download the VMware Virtual Disk Development Kit (VDDK), build a VDDK image, and push the VDDK image to your image registry.

+
+
+

The VDDK package contains symbolic links; therefore, the procedure of creating a VDDK image must be performed on a file system that preserves symbolic links (symlinks).

+
+
+ + + + + +
+
Note
+
+
+

Storing the VDDK image in a public registry might violate the VMware license terms.

+
+
+
+
+
Prerequisites
+
    +
  • +

    OKD image registry.

    +
  • +
  • +

    podman installed.

    +
  • +
  • +

    You are working on a file system that preserves symbolic links (symlinks).

    +
  • +
  • +

    If you are using an external registry, KubeVirt must be able to access it.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Create and navigate to a temporary directory:

    +
    +
    +
    $ mkdir /tmp/<dir_name> && cd /tmp/<dir_name>
    +
    +
    +
  2. +
  3. +

    In a browser, navigate to the VMware VDDK version 8 download page.

    +
  4. +
  5. +

    Select version 8.0.1 and click Download.

    +
  6. +
+
+
+ + + + + +
+
Note
+
+
+

In order to migrate to KubeVirt 4.12, download VDDK version 7.0.3.2 from the VMware VDDK version 7 download page.

+
+
+
+
+
    +
  1. +

    Save the VDDK archive file in the temporary directory.

    +
  2. +
  3. +

    Extract the VDDK archive:

    +
    +
    +
    $ tar -xzf VMware-vix-disklib-<version>.x86_64.tar.gz
    +
    +
    +
  4. +
  5. +

    Create a Dockerfile:

    +
    +
    +
    $ cat > Dockerfile <<EOF
    +FROM registry.access.redhat.com/ubi8/ubi-minimal
    +USER 1001
    +COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
    +RUN mkdir -p /opt
    +ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]
    +EOF
    +
    +
    +
  6. +
  7. +

    Build the VDDK image:

    +
    +
    +
    $ podman build . -t <registry_route_or_server_path>/vddk:<tag>
    +
    +
    +
  8. +
  9. +

    Push the VDDK image to the registry:

    +
    +
    +
    $ podman push <registry_route_or_server_path>/vddk:<tag>
    +
    +
    +
  10. +
  11. +

    Ensure that the image is accessible to your KubeVirt environment.

    +
  12. +
+
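
To have Forklift use the image, reference it from the vSphere Provider CR. In the v1beta1 API this typically goes under spec.settings (a sketch; verify the field name against your Forklift version):

+
+
+
spec:
+  settings:
+    vddkInitImage: <registry_route_or_server_path>/vddk:<tag>
+
+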
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/error-messages/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/error-messages/index.html new file mode 100644 index 00000000000..9d2e86ffc93 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/error-messages/index.html @@ -0,0 +1,83 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Error messages

+
+

This section describes error messages and how to resolve them.

+
+
+
warm import retry limit reached
+

The warm import retry limit reached error message is displayed during a warm migration if a VMware virtual machine (VM) has reached the maximum number (28) of changed block tracking (CBT) snapshots during the precopy stage.

+
+
+

To resolve this problem, delete some of the CBT snapshots from the VM and restart the migration plan.

+
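You can delete the snapshots in the vSphere UI or from the command line. As a minimal sketch, assuming you have the govc CLI configured against your vCenter (GOVC_URL and credentials exported), and substituting your own VM and snapshot names:

$ govc snapshot.tree -vm <vm_name>                      # list the snapshot tree to identify CBT snapshots
$ govc snapshot.remove -vm <vm_name> <snapshot_name>    # remove one snapshot; repeat as needed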
+
+
Unable to resize disk image to required size
+

The Unable to resize disk image to required size error message is displayed when migration fails because a virtual machine on the target provider uses persistent volumes with an EXT4 file system on block storage. The problem occurs because the default overhead that is assumed by CDI does not completely account for the space reserved for the root partition.

+
+
+

To resolve this problem, increase the file system overhead in CDI to more than 10%.

+
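As a sketch of one way to apply this change, assuming the default CDI custom resource is named cdi and that your CDI version exposes the setting under spec.config.filesystemOverhead (the value is a string between "0" and "1"):

$ kubectl patch cdi cdi --type merge \
    -p '{"spec": {"config": {"filesystemOverhead": {"global": "0.11"}}}}'   # 11%, that is, more than 10%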
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/136_OpenShift_Migration_Toolkit_0121_mtv-workflow.svg b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/136_OpenShift_Migration_Toolkit_0121_mtv-workflow.svg new file mode 100644 index 00000000000..999c62adec4 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/136_OpenShift_Migration_Toolkit_0121_mtv-workflow.svg @@ -0,0 +1 @@ +NetworkmappingTargetproviderVirtualmachines1UserVirtual-Machine-Import4MigrationControllerPlan2Migration3StoragemappingSourceprovider136_OpenShift_0121 diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/136_OpenShift_Migration_Toolkit_0121_virt-workflow.svg b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/136_OpenShift_Migration_Toolkit_0121_virt-workflow.svg new file mode 100644 index 00000000000..473e21ba4e2 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/136_OpenShift_Migration_Toolkit_0121_virt-workflow.svg @@ -0,0 +1 @@ +Virtual-Machine-ImportProviderAPIVirtualmachineCDIControllerKubeVirtController<VM_name>podDataVolumeSourceProviderConversionpodPersistentVolumeDynamicallyprovisionedstoragePersistentVolume Claim163438710ProviderCredentialsUserVMdisk29VirtualMachineImportControllerVirtual-Machine-InstanceVirtual-Machine57Importerpod136_OpenShift_0121 diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/136_Upstream_Migration_Toolkit_0121_mtv-workflow.svg b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/136_Upstream_Migration_Toolkit_0121_mtv-workflow.svg new file mode 100644 index 00000000000..33a031a0909 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/136_Upstream_Migration_Toolkit_0121_mtv-workflow.svg @@ -0,0 +1 @@ +NetworkmappingTargetproviderVirtualmachines1UserVirtual-Machine-Import4MigrationControllerPlan2Migration3StoragemappingSourceprovider136_0121 diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/136_Upstream_Migration_Toolkit_0121_virt-workflow.svg b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/136_Upstream_Migration_Toolkit_0121_virt-workflow.svg new file mode 100644 index 00000000000..e73192c0102 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/136_Upstream_Migration_Toolkit_0121_virt-workflow.svg @@ -0,0 +1 @@ +Virtual-Machine-ImportProviderAPIVirtualmachineCDIControllerKubeVirtController<VM_name>podDataVolumeSourceProviderConversionpodPersistentVolumeDynamicallyprovisionedstoragePersistentVolume Claim163438710ProviderCredentialsUserVMdisk29VirtualMachineImportControllerVirtual-Machine-InstanceVirtual-Machine57Importerpod136_0121 diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/forklift-logo-darkbg.png b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/forklift-logo-darkbg.png new file mode 100644 index 00000000000..06e9d1b2494 Binary files /dev/null and b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/forklift-logo-darkbg.png differ diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/forklift-logo-darkbg.svg b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/forklift-logo-darkbg.svg new file mode 100644 index 00000000000..8a846e6361a --- /dev/null +++ 
b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/forklift-logo-darkbg.svg @@ -0,0 +1,164 @@ + + + + + + + + + + image/svg+xml + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/forklift-logo-lightbg.png b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/forklift-logo-lightbg.png new file mode 100644 index 00000000000..8dba83d97f8 Binary files /dev/null and b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/forklift-logo-lightbg.png differ diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/forklift-logo-lightbg.svg b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/forklift-logo-lightbg.svg new file mode 100644 index 00000000000..a8038cdf923 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/forklift-logo-lightbg.svg @@ -0,0 +1,159 @@ + + + + + + + + + + image/svg+xml + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/kebab.png b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/kebab.png new file mode 100644 index 00000000000..81893bd4ad1 Binary files /dev/null and b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/kebab.png differ diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/mtv-ui.png b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/mtv-ui.png new file mode 100644 index 00000000000..009c9b46386 Binary files /dev/null and b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/mtv-ui.png differ diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/increasing-nfc-memory-vmware-host/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/increasing-nfc-memory-vmware-host/index.html new file mode 100644 index 00000000000..3c4a9520f9c --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/increasing-nfc-memory-vmware-host/index.html @@ -0,0 +1,103 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Increasing the NFC service memory of an ESXi host

+
+

If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host. Otherwise, the migration will fail because the NFC service memory is limited to 10 parallel connections.

+
+
+
Procedure
+
  1. Log in to the ESXi host as root.

  2. Change the value of maxMemory to 1000000000 in /etc/vmware/hostd/config.xml:

     ...
           <nfcsvc>
              <path>libnfcsvc.so</path>
              <enabled>true</enabled>
              <maxMemory>1000000000</maxMemory>
              <maxStreamMemory>10485760</maxStreamMemory>
           </nfcsvc>
     ...

  3. Restart hostd:

     # /etc/init.d/hostd restart

     You do not need to reboot the host.
+
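If you prefer a non-interactive edit, the following sketch backs up the file and rewrites the value with sed. It assumes that the <maxMemory> element already exists inside the <nfcsvc> section and that the BusyBox sed on your ESXi build supports -i; if the element is missing, add it manually as shown above:

# cp /etc/vmware/hostd/config.xml /etc/vmware/hostd/config.xml.bak    # keep a backup
# sed -i 's|<maxMemory>[0-9]*</maxMemory>|<maxMemory>1000000000</maxMemory>|' /etc/vmware/hostd/config.xml
# /etc/init.d/hostd restart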
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/installing-mtv-operator/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/installing-mtv-operator/index.html new file mode 100644 index 00000000000..6723c35ac88 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/installing-mtv-operator/index.html @@ -0,0 +1,79 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
Prerequisites
+
  • OKD 4.10 or later installed.
  • KubeVirt Operator installed on an OpenShift migration target cluster.
  • You must be logged in as a user with cluster-admin permissions.
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/issue_templates/issue.md b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/issue_templates/issue.md new file mode 100644 index 00000000000..30d52ab9cba --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/issue_templates/issue.md @@ -0,0 +1,15 @@ +## Summary + +(Describe the problem. Don't worry if the problem occurs in more than one checklist. You only need to mention the checklist where you see a problem. We will fix the module.) + +## What is the problem? + +(Paste the text or a screenshot here. Remember to include the **task number** so that we know which module is affected.) + +## What is the solution? + +(Correct text, link, or task.) + +## Notes + +(Do we need to fix something else?) diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/issue_templates/issue/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/issue_templates/issue/index.html new file mode 100644 index 00000000000..52cdcab5357 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/issue_templates/issue/index.html @@ -0,0 +1,79 @@ + + + + + + + + Summary | Forklift Documentation + + + + + + + + + + + + + +Summary | Forklift Documentation + + + + + + + + + + + + + + + + + + + + + + +
+

Summary

+ +

(Describe the problem. Don’t worry if the problem occurs in more than one checklist. You only need to mention the checklist where you see a problem. We will fix the module.)

+ +

What is the problem?

+ +

(Paste the text or a screenshot here. Remember to include the task number so that we know which module is affected.)

+ +

What is the solution?

+ +

(Correct text, link, or task.)

+ +

Notes

+ +

(Do we need to fix something else?)

+ + + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/known-issues-2-7/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/known-issues-2-7/index.html new file mode 100644 index 00000000000..ae43f112ac8 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/known-issues-2-7/index.html @@ -0,0 +1,87 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Known issues

+
+

Forklift 2.7 has the following known issues:

+
+
+
Select Migration Network displays multiple incorrect networks for the ESXi endpoint type
+

When you choose Select Migration Network for a provider with the ESXi endpoint type, multiple incorrect networks are displayed. (MTV-1291)

+
+
+


+
+
+


+
+
+
Network and Storage maps in the UI are not correct when created from the command line
+

When you create network and storage maps from the command line, the correct names are not shown in the UI. (MTV-1421)

+
+
+
Migration fails with module network-legacy configured in RHEL guests
+

Migration fails if the network-legacy dracut module is configured in the guest and the dhcp-client package is not installed. The migration returns the following error: dracut module 'network-legacy' will not be installed, because command 'dhclient' could not be found. (MTV-1615)

+
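Based on the error text, one possible mitigation, offered here as an assumption rather than a verified fix, is to install the dhcp-client package in the RHEL guest before starting the migration so that dracut can find the dhclient command:

# dnf install -y dhcp-client    # run inside the guest before migration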
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/making-open-source-more-inclusive/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/making-open-source-more-inclusive/index.html new file mode 100644 index 00000000000..fd0eece1bda --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/making-open-source-more-inclusive/index.html @@ -0,0 +1,69 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Making open source more inclusive

+
+

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/migration-plan-options-ui/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/migration-plan-options-ui/index.html new file mode 100644 index 00000000000..2d5234ade02 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/migration-plan-options-ui/index.html @@ -0,0 +1,141 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Migration plan options

+
+

On the Plans for virtualization page of the OKD web console, you can click the Options menu (the kebab icon) beside a migration plan to access the following options:

+
+
+
  • Get logs: Retrieves the logs of a migration. When you click Get logs, a confirmation window opens. After you click Get logs in the window, wait until Get logs changes to Download logs and then click the button to download the logs.

  • Edit: Edit the details of a migration plan. You cannot edit a migration plan while it is running or after it has completed successfully.

  • Duplicate: Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks:

      ◦ Migrate VMs to a different namespace.
      ◦ Edit an archived migration plan.
      ◦ Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready.

  • Archive: Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed. You can also archive a plan from the command line, as shown in the sketch after this list.

    Note: The Archive option is irreversible. However, you can duplicate an archived plan.

  • Delete: Permanently remove a migration plan. You cannot delete a running migration plan.

    Note: The Delete option is irreversible.

    Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. (BZ#2018974) You must archive a migration plan before deleting it to clean up the temporary resources.

  • View details: Display the details of a migration plan.

  • Restart: Restart a failed or canceled migration plan.

  • Cancel scheduled cutover: Cancel a scheduled cutover migration for a warm migration plan.
+
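The following sketch shows the archive-then-delete flow from the command line. It assumes that the Plan custom resource in your Forklift version exposes a spec.archived boolean; the plan name and namespace are placeholders:

$ kubectl patch plan <plan_name> -n <namespace> --type merge \
    -p '{"spec": {"archived": true}}'             # archive first so temporary resources are cleaned up
$ kubectl delete plan <plan_name> -n <namespace>  # then delete the archived plan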
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-changelog-2-7/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-changelog-2-7/index.html new file mode 100644 index 00000000000..c467dab9c9b --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-changelog-2-7/index.html @@ -0,0 +1,2330 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift changelog

+
+
+
+

This changelog lists all of the packages used in the Forklift 2.7 releases.

+
+
+
+
+

Forklift 2.7 packages

+
Table 1. Forklift packages
Forklift 2.7.0 | Forklift 2.7.2 | Forklift 2.7.3

abattis-cantarell-fonts-0.301-4.el9.noarch

abattis-cantarell-fonts-0.301-4.el9.noarch

abattis-cantarell-fonts-0.301-4.el9.noarch

acl-2.3.1-4.el9.x86_64

acl-2.3.1-4.el9.x86_64

acl-2.3.1-4.el9.x86_64

adobe-source-code-pro-fonts-2.030.1.050-12.el9.1.noarch

adobe-source-code-pro-fonts-2.030.1.050-12.el9.1.noarch

adobe-source-code-pro-fonts-2.030.1.050-12.el9.1.noarch

alternatives-1.24-1.el9.x86_64

alternatives-1.24-1.el9.x86_64

alternatives-1.24-1.el9.x86_64

attr-2.5.1-3.el9.x86_64

attr-2.5.1-3.el9.x86_64

attr-2.5.1-3.el9.x86_64

audit-libs-3.1.2-2.el9.x86_64

audit-libs-3.1.2-2.el9.x86_64

audit-libs-3.1.2-2.el9.x86_64

augeas-libs-1.13.0-6.el9_4.x86_64

augeas-libs-1.13.0-6.el9_4.x86_64

augeas-libs-1.13.0-6.el9_4.x86_64

basesystem-11-13.el9.noarch

basesystem-11-13.el9.noarch

basesystem-11-13.el9.noarch

bash-5.1.8-9.el9.x86_64

bash-5.1.8-9.el9.x86_64

bash-5.1.8-9.el9.x86_64

binutils-2.35.2-43.el9.x86_64

binutils-2.35.2-43.el9.x86_64

binutils-2.35.2-43.el9.x86_64

binutils-gold-2.35.2-43.el9.x86_64

binutils-gold-2.35.2-43.el9.x86_64

binutils-gold-2.35.2-43.el9.x86_64

bzip2-1.0.8-8.el9.x86_64

bzip2-1.0.8-8.el9.x86_64

bzip2-1.0.8-8.el9.x86_64

bzip2-libs-1.0.8-8.el9.x86_64

bzip2-libs-1.0.8-8.el9.x86_64

bzip2-libs-1.0.8-8.el9.x86_64

ca-certificates-2024.2.69_v8.0.303-91.4.el9_4.noarch

ca-certificates-2024.2.69_v8.0.303-91.4.el9_4.noarch

ca-certificates-2024.2.69_v8.0.303-91.4.el9_4.noarch

capstone-4.0.2-10.el9.x86_64

capstone-4.0.2-10.el9.x86_64

capstone-4.0.2-10.el9.x86_64

checkpolicy-3.6-1.el9.x86_64

checkpolicy-3.6-1.el9.x86_64

checkpolicy-3.6-1.el9.x86_64

clevis-18-112.el9.x86_64

clevis-18-112.el9.x86_64

clevis-18-112.el9.x86_64

clevis-luks-18-112.el9.x86_64

clevis-luks-18-112.el9.x86_64

clevis-luks-18-112.el9.x86_64

cmake-rpm-macros-3.26.5-2.el9.noarch

cmake-rpm-macros-3.26.5-2.el9.noarch

cmake-rpm-macros-3.26.5-2.el9.noarch

coreutils-single-8.32-35.el9.x86_64

coreutils-single-8.32-35.el9.x86_64

coreutils-single-8.32-35.el9.x86_64

cpio-2.13-16.el9.x86_64

cpio-2.13-16.el9.x86_64

cpio-2.13-16.el9.x86_64

cracklib-2.9.6-27.el9.x86_64

cracklib-2.9.6-27.el9.x86_64

cracklib-2.9.6-27.el9.x86_64

cracklib-dicts-2.9.6-27.el9.x86_64

cracklib-dicts-2.9.6-27.el9.x86_64

cracklib-dicts-2.9.6-27.el9.x86_64

crypto-policies-20240202-1.git283706d.el9.noarch

crypto-policies-20240202-1.git283706d.el9.noarch

crypto-policies-20240202-1.git283706d.el9.noarch

cryptsetup-2.6.0-3.el9.x86_64

cryptsetup-2.6.0-3.el9.x86_64

cryptsetup-2.6.0-3.el9.x86_64

cryptsetup-libs-2.6.0-3.el9.x86_64

cryptsetup-libs-2.6.0-3.el9.x86_64

cryptsetup-libs-2.6.0-3.el9.x86_64

curl-minimal-7.76.1-29.el9_4.1.x86_64

curl-minimal-7.76.1-29.el9_4.1.x86_64

curl-minimal-7.76.1-29.el9_4.1.x86_64

cyrus-sasl-2.1.27-21.el9.x86_64

cyrus-sasl-2.1.27-21.el9.x86_64

cyrus-sasl-2.1.27-21.el9.x86_64

cyrus-sasl-gssapi-2.1.27-21.el9.x86_64

cyrus-sasl-gssapi-2.1.27-21.el9.x86_64

cyrus-sasl-gssapi-2.1.27-21.el9.x86_64

cyrus-sasl-lib-2.1.27-21.el9.x86_64

cyrus-sasl-lib-2.1.27-21.el9.x86_64

cyrus-sasl-lib-2.1.27-21.el9.x86_64

daxctl-libs-71.1-8.el9.x86_64

daxctl-libs-71.1-8.el9.x86_64

daxctl-libs-71.1-8.el9.x86_64

dbus-1.12.20-8.el9.x86_64

dbus-1.12.20-8.el9.x86_64

dbus-1.12.20-8.el9.x86_64

dbus-broker-28-7.el9.x86_64

dbus-broker-28-7.el9.x86_64

dbus-broker-28-7.el9.x86_64

dbus-common-1.12.20-8.el9.noarch

dbus-common-1.12.20-8.el9.noarch

dbus-common-1.12.20-8.el9.noarch

dbus-libs-1.12.20-8.el9.x86_64

dbus-libs-1.12.20-8.el9.x86_64

dbus-libs-1.12.20-8.el9.x86_64

dejavu-sans-fonts-2.37-18.el9.noarch

dejavu-sans-fonts-2.37-18.el9.noarch

dejavu-sans-fonts-2.37-18.el9.noarch

device-mapper-1.02.197-2.el9.x86_64

device-mapper-1.02.197-2.el9.x86_64

device-mapper-1.02.197-2.el9.x86_64

device-mapper-event-1.02.197-2.el9.x86_64

device-mapper-event-1.02.197-2.el9.x86_64

device-mapper-event-1.02.197-2.el9.x86_64

device-mapper-event-libs-1.02.197-2.el9.x86_64

device-mapper-event-libs-1.02.197-2.el9.x86_64

device-mapper-event-libs-1.02.197-2.el9.x86_64

device-mapper-libs-1.02.197-2.el9.x86_64

device-mapper-libs-1.02.197-2.el9.x86_64

device-mapper-libs-1.02.197-2.el9.x86_64

device-mapper-persistent-data-1.0.9-3.el9_4.x86_64

device-mapper-persistent-data-1.0.9-3.el9_4.x86_64

device-mapper-persistent-data-1.0.9-3.el9_4.x86_64

dhcp-client-4.4.2-19.b1.el9.x86_64

dhcp-client-4.4.2-19.b1.el9.x86_64

dhcp-client-4.4.2-19.b1.el9.x86_64

dhcp-common-4.4.2-19.b1.el9.noarch

dhcp-common-4.4.2-19.b1.el9.noarch

dhcp-common-4.4.2-19.b1.el9.noarch

diffutils-3.7-12.el9.x86_64

diffutils-3.7-12.el9.x86_64

diffutils-3.7-12.el9.x86_64

dmidecode-3.5-3.el9.x86_64

dmidecode-3.5-3.el9.x86_64

dmidecode-3.5-3.el9.x86_64

dnf-data-4.14.0-9.el9.noarch

dnf-data-4.14.0-9.el9.noarch

dnf-data-4.14.0-9.el9.noarch

dnsmasq-2.85-16.el9_4.x86_64

dnsmasq-2.85-16.el9_4.x86_64

dnsmasq-2.85-16.el9_4.x86_64

dosfstools-4.2-3.el9.x86_64

dosfstools-4.2-3.el9.x86_64

dosfstools-4.2-3.el9.x86_64

dracut-057-53.git20240104.el9.x86_64

dracut-057-53.git20240104.el9.x86_64

dracut-057-53.git20240104.el9.x86_64

dwz-0.14-3.el9.x86_64

dwz-0.14-3.el9.x86_64

dwz-0.14-3.el9.x86_64

e2fsprogs-1.46.5-5.el9.x86_64

e2fsprogs-1.46.5-5.el9.x86_64

e2fsprogs-1.46.5-5.el9.x86_64

e2fsprogs-libs-1.46.5-5.el9.x86_64

e2fsprogs-libs-1.46.5-5.el9.x86_64

e2fsprogs-libs-1.46.5-5.el9.x86_64

edk2-ovmf-20231122-6.el9_4.3.noarch

edk2-ovmf-20231122-6.el9_4.3.noarch

edk2-ovmf-20231122-6.el9_4.3.noarch

efi-srpm-macros-6-2.el9_0.noarch

efi-srpm-macros-6-2.el9_0.noarch

efi-srpm-macros-6-2.el9_0.noarch

elfutils-debuginfod-client-0.190-2.el9.x86_64

elfutils-debuginfod-client-0.190-2.el9.x86_64

elfutils-debuginfod-client-0.190-2.el9.x86_64

elfutils-default-yama-scope-0.190-2.el9.noarch

elfutils-default-yama-scope-0.190-2.el9.noarch

elfutils-default-yama-scope-0.190-2.el9.noarch

elfutils-libelf-0.190-2.el9.x86_64

elfutils-libelf-0.190-2.el9.x86_64

elfutils-libelf-0.190-2.el9.x86_64

elfutils-libs-0.190-2.el9.x86_64

elfutils-libs-0.190-2.el9.x86_64

elfutils-libs-0.190-2.el9.x86_64

expat-2.5.0-2.el9_4.1.x86_64

expat-2.5.0-2.el9_4.1.x86_64

expat-2.5.0-2.el9_4.1.x86_64

file-5.39-16.el9.x86_64

file-5.39-16.el9.x86_64

file-5.39-16.el9.x86_64

file-libs-5.39-16.el9.x86_64

file-libs-5.39-16.el9.x86_64

file-libs-5.39-16.el9.x86_64

filesystem-3.16-2.el9.x86_64

filesystem-3.16-2.el9.x86_64

filesystem-3.16-2.el9.x86_64

findutils-4.8.0-6.el9.x86_64

findutils-4.8.0-6.el9.x86_64

findutils-4.8.0-6.el9.x86_64

fonts-filesystem-2.0.5-7.el9.1.noarch

fonts-filesystem-2.0.5-7.el9.1.noarch

fonts-filesystem-2.0.5-7.el9.1.noarch

fonts-srpm-macros-2.0.5-7.el9.1.noarch

fonts-srpm-macros-2.0.5-7.el9.1.noarch

fonts-srpm-macros-2.0.5-7.el9.1.noarch

fuse-2.9.9-15.el9.x86_64

fuse-2.9.9-15.el9.x86_64

fuse-2.9.9-15.el9.x86_64

fuse-common-3.10.2-8.el9.x86_64

fuse-common-3.10.2-8.el9.x86_64

fuse-common-3.10.2-8.el9.x86_64

fuse-libs-2.9.9-15.el9.x86_64

fuse-libs-2.9.9-15.el9.x86_64

fuse-libs-2.9.9-15.el9.x86_64

gawk-5.1.0-6.el9.x86_64

gawk-5.1.0-6.el9.x86_64

gawk-5.1.0-6.el9.x86_64

gdbm-libs-1.19-4.el9.x86_64

gdbm-libs-1.19-4.el9.x86_64

gdbm-libs-1.19-4.el9.x86_64

gdisk-1.0.7-5.el9.x86_64

gdisk-1.0.7-5.el9.x86_64

gdisk-1.0.7-5.el9.x86_64

geolite2-city-20191217-6.el9.noarch

geolite2-city-20191217-6.el9.noarch

geolite2-city-20191217-6.el9.noarch

geolite2-country-20191217-6.el9.noarch

geolite2-country-20191217-6.el9.noarch

geolite2-country-20191217-6.el9.noarch

gettext-0.21-8.el9.x86_64

gettext-0.21-8.el9.x86_64

gettext-0.21-8.el9.x86_64

gettext-libs-0.21-8.el9.x86_64

gettext-libs-0.21-8.el9.x86_64

gettext-libs-0.21-8.el9.x86_64

ghc-srpm-macros-1.5.0-6.el9.noarch

ghc-srpm-macros-1.5.0-6.el9.noarch

ghc-srpm-macros-1.5.0-6.el9.noarch

glib-networking-2.68.3-3.el9.x86_64

glib-networking-2.68.3-3.el9.x86_64

glib-networking-2.68.3-3.el9.x86_64

glib2-2.68.4-14.el9_4.1.x86_64

glib2-2.68.4-14.el9_4.1.x86_64

glib2-2.68.4-14.el9_4.1.x86_64

glibc-2.34-100.el9_4.3.x86_64

glibc-2.34-100.el9_4.4.x86_64

glibc-2.34-100.el9_4.4.x86_64

glibc-common-2.34-100.el9_4.3.x86_64

glibc-common-2.34-100.el9_4.4.x86_64

glibc-common-2.34-100.el9_4.4.x86_64

glibc-gconv-extra-2.34-100.el9_4.3.x86_64

glibc-gconv-extra-2.34-100.el9_4.4.x86_64

glibc-gconv-extra-2.34-100.el9_4.4.x86_64

glibc-langpack-en-2.34-100.el9_4.4.x86_64

glibc-langpack-en-2.34-100.el9_4.4.x86_64

glibc-minimal-langpack-2.34-100.el9_4.3.x86_64

glibc-minimal-langpack-2.34-100.el9_4.4.x86_64

glibc-minimal-langpack-2.34-100.el9_4.4.x86_64

gmp-6.2.0-13.el9.x86_64

gmp-6.2.0-13.el9.x86_64

gmp-6.2.0-13.el9.x86_64

gnupg2-2.3.3-4.el9.x86_64

gnupg2-2.3.3-4.el9.x86_64

gnupg2-2.3.3-4.el9.x86_64

gnutls-3.8.3-4.el9_4.x86_64

gnutls-3.8.3-4.el9_4.x86_64

gnutls-3.8.3-4.el9_4.x86_64

gnutls-dane-3.8.3-4.el9_4.x86_64

gnutls-dane-3.8.3-4.el9_4.x86_64

gnutls-dane-3.8.3-4.el9_4.x86_64

gnutls-utils-3.8.3-4.el9_4.x86_64

gnutls-utils-3.8.3-4.el9_4.x86_64

gnutls-utils-3.8.3-4.el9_4.x86_64

go-srpm-macros-3.2.0-3.el9.noarch

go-srpm-macros-3.2.0-3.el9.noarch

go-srpm-macros-3.2.0-3.el9.noarch

gobject-introspection-1.68.0-11.el9.x86_64

gobject-introspection-1.68.0-11.el9.x86_64

gobject-introspection-1.68.0-11.el9.x86_64

gpg-pubkey-5a6340b3-6229229e

gpg-pubkey-5a6340b3-6229229e

gpg-pubkey-5a6340b3-6229229e

gpg-pubkey-fd431d51-4ae0493b

gpg-pubkey-fd431d51-4ae0493b

gpg-pubkey-fd431d51-4ae0493b

gpgme-1.15.1-6.el9.x86_64

gpgme-1.15.1-6.el9.x86_64

gpgme-1.15.1-6.el9.x86_64

grep-3.6-5.el9.x86_64

grep-3.6-5.el9.x86_64

grep-3.6-5.el9.x86_64

groff-base-1.22.4-10.el9.x86_64

groff-base-1.22.4-10.el9.x86_64

groff-base-1.22.4-10.el9.x86_64

gsettings-desktop-schemas-40.0-6.el9.x86_64

gsettings-desktop-schemas-40.0-6.el9.x86_64

gsettings-desktop-schemas-40.0-6.el9.x86_64

gssproxy-0.8.4-6.el9.x86_64

gssproxy-0.8.4-6.el9.x86_64

gssproxy-0.8.4-6.el9.x86_64

guestfs-tools-1.51.6-3.el9_4.x86_64

guestfs-tools-1.51.6-3.el9_4.x86_64

guestfs-tools-1.51.6-3.el9_4.x86_64

gzip-1.12-1.el9.x86_64

gzip-1.12-1.el9.x86_64

gzip-1.12-1.el9.x86_64

hexedit-1.6-1.el9.x86_64

hexedit-1.6-1.el9.x86_64

hexedit-1.6-1.el9.x86_64

hivex-libs-1.3.21-3.el9.x86_64

hivex-libs-1.3.21-3.el9.x86_64

hivex-libs-1.3.21-3.el9.x86_64

hwdata-0.348-9.13.el9.noarch

hwdata-0.348-9.13.el9.noarch

hwdata-0.348-9.13.el9.noarch

inih-49-6.el9.x86_64

inih-49-6.el9.x86_64

inih-49-6.el9.x86_64

ipcalc-1.0.0-5.el9.x86_64

ipcalc-1.0.0-5.el9.x86_64

ipcalc-1.0.0-5.el9.x86_64

iproute-6.2.0-6.el9_4.x86_64

iproute-6.2.0-6.el9_4.x86_64

iproute-6.2.0-6.el9_4.x86_64

iproute-tc-6.2.0-6.el9_4.x86_64

iproute-tc-6.2.0-6.el9_4.x86_64

iproute-tc-6.2.0-6.el9_4.x86_64

iptables-libs-1.8.10-4.el9_4.x86_64

iptables-libs-1.8.10-4.el9_4.x86_64

iptables-libs-1.8.10-4.el9_4.x86_64

iptables-nft-1.8.10-4.el9_4.x86_64

iptables-nft-1.8.10-4.el9_4.x86_64

iptables-nft-1.8.10-4.el9_4.x86_64

iputils-20210202-9.el9.x86_64

iputils-20210202-9.el9.x86_64

iputils-20210202-9.el9.x86_64

ipxe-roms-qemu-20200823-9.git4bd064de.el9.noarch

ipxe-roms-qemu-20200823-9.git4bd064de.el9.noarch

ipxe-roms-qemu-20200823-9.git4bd064de.el9.noarch

jansson-2.14-1.el9.x86_64

jansson-2.14-1.el9.x86_64

jansson-2.14-1.el9.x86_64

jose-11-3.el9.x86_64

jose-11-3.el9.x86_64

jose-11-3.el9.x86_64

jq-1.6-16.el9.x86_64

jq-1.6-16.el9.x86_64

jq-1.6-16.el9.x86_64

json-c-0.14-11.el9.x86_64

json-c-0.14-11.el9.x86_64

json-c-0.14-11.el9.x86_64

json-glib-1.6.6-1.el9.x86_64

json-glib-1.6.6-1.el9.x86_64

json-glib-1.6.6-1.el9.x86_64

kbd-2.4.0-9.el9.x86_64

kbd-2.4.0-9.el9.x86_64

kbd-2.4.0-9.el9.x86_64

kbd-legacy-2.4.0-9.el9.noarch

kbd-legacy-2.4.0-9.el9.noarch

kbd-legacy-2.4.0-9.el9.noarch

kbd-misc-2.4.0-9.el9.noarch

kbd-misc-2.4.0-9.el9.noarch

kbd-misc-2.4.0-9.el9.noarch

kernel-core-5.14.0-427.35.1.el9_4.x86_64

kernel-core-5.14.0-427.37.1.el9_4.x86_64

kernel-core-5.14.0-427.40.1.el9_4.x86_64

kernel-modules-core-5.14.0-427.35.1.el9_4.x86_64

kernel-modules-core-5.14.0-427.37.1.el9_4.x86_64

kernel-modules-core-5.14.0-427.40.1.el9_4.x86_64

kernel-srpm-macros-1.0-13.el9.noarch

kernel-srpm-macros-1.0-13.el9.noarch

kernel-srpm-macros-1.0-13.el9.noarch

keyutils-1.6.3-1.el9.x86_64

keyutils-1.6.3-1.el9.x86_64

keyutils-1.6.3-1.el9.x86_64

keyutils-libs-1.6.3-1.el9.x86_64

keyutils-libs-1.6.3-1.el9.x86_64

keyutils-libs-1.6.3-1.el9.x86_64

kmod-28-9.el9.x86_64

kmod-28-9.el9.x86_64

kmod-28-9.el9.x86_64

kmod-libs-28-9.el9.x86_64

kmod-libs-28-9.el9.x86_64

kmod-libs-28-9.el9.x86_64

kpartx-0.8.7-27.el9.x86_64

kpartx-0.8.7-27.el9.x86_64

kpartx-0.8.7-27.el9.x86_64

krb5-libs-1.21.1-2.el9_4.x86_64

krb5-libs-1.21.1-2.el9_4.x86_64

krb5-libs-1.21.1-2.el9_4.x86_64

langpacks-core-en-3.0-16.el9.noarch

langpacks-core-en-3.0-16.el9.noarch

langpacks-core-en-3.0-16.el9.noarch

langpacks-core-font-en-3.0-16.el9.noarch

langpacks-core-font-en-3.0-16.el9.noarch

langpacks-core-font-en-3.0-16.el9.noarch

langpacks-en-3.0-16.el9.noarch

langpacks-en-3.0-16.el9.noarch

langpacks-en-3.0-16.el9.noarch

less-590-4.el9_4.x86_64

less-590-4.el9_4.x86_64

less-590-4.el9_4.x86_64

libacl-2.3.1-4.el9.x86_64

libacl-2.3.1-4.el9.x86_64

libacl-2.3.1-4.el9.x86_64

libaio-0.3.111-13.el9.x86_64

libaio-0.3.111-13.el9.x86_64

libaio-0.3.111-13.el9.x86_64

libarchive-3.5.3-4.el9.x86_64

libarchive-3.5.3-4.el9.x86_64

libarchive-3.5.3-4.el9.x86_64

libassuan-2.5.5-3.el9.x86_64

libassuan-2.5.5-3.el9.x86_64

libassuan-2.5.5-3.el9.x86_64

libatomic-11.4.1-3.el9.x86_64

libatomic-11.4.1-3.el9.x86_64

libatomic-11.4.1-3.el9.x86_64

libattr-2.5.1-3.el9.x86_64

libattr-2.5.1-3.el9.x86_64

libattr-2.5.1-3.el9.x86_64

libbasicobjects-0.1.1-53.el9.x86_64

libbasicobjects-0.1.1-53.el9.x86_64

libbasicobjects-0.1.1-53.el9.x86_64

libblkid-2.37.4-18.el9.x86_64

libblkid-2.37.4-18.el9.x86_64

libblkid-2.37.4-18.el9.x86_64

libbpf-1.3.0-2.el9.x86_64

libbpf-1.3.0-2.el9.x86_64

libbpf-1.3.0-2.el9.x86_64

libbrotli-1.0.9-6.el9.x86_64

libbrotli-1.0.9-6.el9.x86_64

libbrotli-1.0.9-6.el9.x86_64

libcap-2.48-9.el9_2.x86_64

libcap-2.48-9.el9_2.x86_64

libcap-2.48-9.el9_2.x86_64

libcap-ng-0.8.2-7.el9.x86_64

libcap-ng-0.8.2-7.el9.x86_64

libcap-ng-0.8.2-7.el9.x86_64

libcbor-0.7.0-5.el9.x86_64

libcbor-0.7.0-5.el9.x86_64

libcbor-0.7.0-5.el9.x86_64

libcollection-0.7.0-53.el9.x86_64

libcollection-0.7.0-53.el9.x86_64

libcollection-0.7.0-53.el9.x86_64

libcom_err-1.46.5-5.el9.x86_64

libcom_err-1.46.5-5.el9.x86_64

libcom_err-1.46.5-5.el9.x86_64

libconfig-1.7.2-9.el9.x86_64

libconfig-1.7.2-9.el9.x86_64

libconfig-1.7.2-9.el9.x86_64

libcurl-minimal-7.76.1-29.el9_4.1.x86_64

libcurl-minimal-7.76.1-29.el9_4.1.x86_64

libcurl-minimal-7.76.1-29.el9_4.1.x86_64

libdb-5.3.28-53.el9.x86_64

libdb-5.3.28-53.el9.x86_64

libdb-5.3.28-53.el9.x86_64

libdnf-0.69.0-8.el9_4.1.x86_64

libdnf-0.69.0-8.el9_4.1.x86_64

libdnf-0.69.0-8.el9_4.1.x86_64

libeconf-0.4.1-3.el9_2.x86_64

libeconf-0.4.1-3.el9_2.x86_64

libeconf-0.4.1-3.el9_2.x86_64

libedit-3.1-38.20210216cvs.el9.x86_64

libedit-3.1-38.20210216cvs.el9.x86_64

libedit-3.1-38.20210216cvs.el9.x86_64

libev-4.33-5.el9.x86_64

libev-4.33-5.el9.x86_64

libev-4.33-5.el9.x86_64

libevent-2.1.12-8.el9_4.x86_64

libevent-2.1.12-8.el9_4.x86_64

libevent-2.1.12-8.el9_4.x86_64

libfdisk-2.37.4-18.el9.x86_64

libfdisk-2.37.4-18.el9.x86_64

libfdisk-2.37.4-18.el9.x86_64

libfdt-1.6.0-7.el9.x86_64

libfdt-1.6.0-7.el9.x86_64

libfdt-1.6.0-7.el9.x86_64

libffi-3.4.2-8.el9.x86_64

libffi-3.4.2-8.el9.x86_64

libffi-3.4.2-8.el9.x86_64

libfido2-1.13.0-2.el9.x86_64

libfido2-1.13.0-2.el9.x86_64

libfido2-1.13.0-2.el9.x86_64

libgcc-11.4.1-3.el9.x86_64

libgcc-11.4.1-3.el9.x86_64

libgcc-11.4.1-3.el9.x86_64

libgcrypt-1.10.0-10.el9_2.x86_64

libgcrypt-1.10.0-10.el9_2.x86_64

libgcrypt-1.10.0-10.el9_2.x86_64

libgomp-11.4.1-3.el9.x86_64

libgomp-11.4.1-3.el9.x86_64

libgomp-11.4.1-3.el9.x86_64

libgpg-error-1.42-5.el9.x86_64

libgpg-error-1.42-5.el9.x86_64

libgpg-error-1.42-5.el9.x86_64

libguestfs-1.50.1-8.el9_4.x86_64

libguestfs-1.50.1-8.el9_4.x86_64

libguestfs-1.50.1-8.el9_4.x86_64

libguestfs-appliance-1.50.1-8.el9_4.x86_64

libguestfs-appliance-1.50.1-8.el9_4.x86_64

libguestfs-appliance-1.50.1-8.el9_4.x86_64

libguestfs-winsupport-9.3-1.el9_3.x86_64

libguestfs-winsupport-9.3-1.el9_3.x86_64

libguestfs-winsupport-9.3-1.el9_3.x86_64

libguestfs-xfs-1.50.1-8.el9_4.x86_64

libguestfs-xfs-1.50.1-8.el9_4.x86_64

libguestfs-xfs-1.50.1-8.el9_4.x86_64

libibverbs-48.0-1.el9.x86_64

libibverbs-48.0-1.el9.x86_64

libibverbs-48.0-1.el9.x86_64

libicu-67.1-9.el9.x86_64

libicu-67.1-9.el9.x86_64

libicu-67.1-9.el9.x86_64

libidn2-2.3.0-7.el9.x86_64

libidn2-2.3.0-7.el9.x86_64

libidn2-2.3.0-7.el9.x86_64

libini_config-1.3.1-53.el9.x86_64

libini_config-1.3.1-53.el9.x86_64

libini_config-1.3.1-53.el9.x86_64

libjose-11-3.el9.x86_64

libjose-11-3.el9.x86_64

libjose-11-3.el9.x86_64

libkcapi-1.4.0-2.el9.x86_64

libkcapi-1.4.0-2.el9.x86_64

libkcapi-1.4.0-2.el9.x86_64

libkcapi-hmaccalc-1.4.0-2.el9.x86_64

libkcapi-hmaccalc-1.4.0-2.el9.x86_64

libkcapi-hmaccalc-1.4.0-2.el9.x86_64

libksba-1.5.1-6.el9_1.x86_64

libksba-1.5.1-6.el9_1.x86_64

libksba-1.5.1-6.el9_1.x86_64

libluksmeta-9-12.el9.x86_64

libluksmeta-9-12.el9.x86_64

libluksmeta-9-12.el9.x86_64

libmaxminddb-1.5.2-3.el9.x86_64

libmaxminddb-1.5.2-3.el9.x86_64

libmaxminddb-1.5.2-3.el9.x86_64

libmnl-1.0.4-16.el9_4.x86_64

libmnl-1.0.4-16.el9_4.x86_64

libmnl-1.0.4-16.el9_4.x86_64

libmodulemd-2.13.0-2.el9.x86_64

libmodulemd-2.13.0-2.el9.x86_64

libmodulemd-2.13.0-2.el9.x86_64

libmount-2.37.4-18.el9.x86_64

libmount-2.37.4-18.el9.x86_64

libmount-2.37.4-18.el9.x86_64

libnbd-1.18.1-4.el9_4.x86_64

libnbd-1.18.1-4.el9_4.x86_64

libnbd-1.18.1-4.el9_4.x86_64

libnetfilter_conntrack-1.0.9-1.el9.x86_64

libnetfilter_conntrack-1.0.9-1.el9.x86_64

libnetfilter_conntrack-1.0.9-1.el9.x86_64

libnfnetlink-1.0.1-21.el9.x86_64

libnfnetlink-1.0.1-21.el9.x86_64

libnfnetlink-1.0.1-21.el9.x86_64

libnfsidmap-2.5.4-26.el9_4.x86_64

libnfsidmap-2.5.4-26.el9_4.x86_64

libnfsidmap-2.5.4-26.el9_4.x86_64

libnftnl-1.2.6-4.el9_4.x86_64

libnftnl-1.2.6-4.el9_4.x86_64

libnftnl-1.2.6-4.el9_4.x86_64

libnghttp2-1.43.0-5.el9_4.3.x86_64

libnghttp2-1.43.0-5.el9_4.3.x86_64

libnghttp2-1.43.0-5.el9_4.3.x86_64

libnl3-3.9.0-1.el9.x86_64

libnl3-3.9.0-1.el9.x86_64

libnl3-3.9.0-1.el9.x86_64

libosinfo-1.10.0-1.el9.x86_64

libosinfo-1.10.0-1.el9.x86_64

libosinfo-1.10.0-1.el9.x86_64

libpath_utils-0.2.1-53.el9.x86_64

libpath_utils-0.2.1-53.el9.x86_64

libpath_utils-0.2.1-53.el9.x86_64

libpeas-1.30.0-4.el9.x86_64

libpeas-1.30.0-4.el9.x86_64

libpeas-1.30.0-4.el9.x86_64

libpipeline-1.5.3-4.el9.x86_64

libpipeline-1.5.3-4.el9.x86_64

libpipeline-1.5.3-4.el9.x86_64

libpkgconf-1.7.3-10.el9.x86_64

libpkgconf-1.7.3-10.el9.x86_64

libpkgconf-1.7.3-10.el9.x86_64

libpmem-1.12.1-1.el9.x86_64

libpmem-1.12.1-1.el9.x86_64

libpmem-1.12.1-1.el9.x86_64

libpng-1.6.37-12.el9.x86_64

libpng-1.6.37-12.el9.x86_64

libpng-1.6.37-12.el9.x86_64

libproxy-0.4.15-35.el9.x86_64

libproxy-0.4.15-35.el9.x86_64

libproxy-0.4.15-35.el9.x86_64

libproxy-webkitgtk4-0.4.15-35.el9.x86_64

libproxy-webkitgtk4-0.4.15-35.el9.x86_64

libproxy-webkitgtk4-0.4.15-35.el9.x86_64

libpsl-0.21.1-5.el9.x86_64

libpsl-0.21.1-5.el9.x86_64

libpsl-0.21.1-5.el9.x86_64

libpwquality-1.4.4-8.el9.x86_64

libpwquality-1.4.4-8.el9.x86_64

libpwquality-1.4.4-8.el9.x86_64

librdmacm-48.0-1.el9.x86_64

librdmacm-48.0-1.el9.x86_64

librdmacm-48.0-1.el9.x86_64

libref_array-0.1.5-53.el9.x86_64

libref_array-0.1.5-53.el9.x86_64

libref_array-0.1.5-53.el9.x86_64

librepo-1.14.5-2.el9.x86_64

librepo-1.14.5-2.el9.x86_64

librepo-1.14.5-2.el9.x86_64

libreport-filesystem-2.15.2-6.el9.noarch

libreport-filesystem-2.15.2-6.el9.noarch

libreport-filesystem-2.15.2-6.el9.noarch

librhsm-0.0.3-7.el9_3.1.x86_64

librhsm-0.0.3-7.el9_3.1.x86_64

librhsm-0.0.3-7.el9_3.1.x86_64

libseccomp-2.5.2-2.el9.x86_64

libseccomp-2.5.2-2.el9.x86_64

libseccomp-2.5.2-2.el9.x86_64

libselinux-3.6-1.el9.x86_64

libselinux-3.6-1.el9.x86_64

libselinux-3.6-1.el9.x86_64

libselinux-utils-3.6-1.el9.x86_64

libselinux-utils-3.6-1.el9.x86_64

libselinux-utils-3.6-1.el9.x86_64

libsemanage-3.6-1.el9.x86_64

libsemanage-3.6-1.el9.x86_64

libsemanage-3.6-1.el9.x86_64

libsepol-3.6-1.el9.x86_64

libsepol-3.6-1.el9.x86_64

libsepol-3.6-1.el9.x86_64

libsigsegv-2.13-4.el9.x86_64

libsigsegv-2.13-4.el9.x86_64

libsigsegv-2.13-4.el9.x86_64

libslirp-4.4.0-7.el9.x86_64

libslirp-4.4.0-7.el9.x86_64

libslirp-4.4.0-7.el9.x86_64

libsmartcols-2.37.4-18.el9.x86_64

libsmartcols-2.37.4-18.el9.x86_64

libsmartcols-2.37.4-18.el9.x86_64

libsolv-0.7.24-2.el9.x86_64

libsolv-0.7.24-2.el9.x86_64

libsolv-0.7.24-2.el9.x86_64

libsoup-2.72.0-8.el9.x86_64

libsoup-2.72.0-8.el9.x86_64

libsoup-2.72.0-8.el9.x86_64

libss-1.46.5-5.el9.x86_64

libss-1.46.5-5.el9.x86_64

libss-1.46.5-5.el9.x86_64

libssh-0.10.4-13.el9.x86_64

libssh-0.10.4-13.el9.x86_64

libssh-0.10.4-13.el9.x86_64

libssh-config-0.10.4-13.el9.noarch

libssh-config-0.10.4-13.el9.noarch

libssh-config-0.10.4-13.el9.noarch

libstdc++-11.4.1-3.el9.x86_64

libstdc++-11.4.1-3.el9.x86_64

libstdc++-11.4.1-3.el9.x86_64

libtasn1-4.16.0-8.el9_1.x86_64

libtasn1-4.16.0-8.el9_1.x86_64

libtasn1-4.16.0-8.el9_1.x86_64

libtirpc-1.3.3-8.el9_4.x86_64

libtirpc-1.3.3-8.el9_4.x86_64

libtirpc-1.3.3-8.el9_4.x86_64

libtpms-0.9.1-3.20211126git1ff6fe1f43.el9_2.x86_64

libtpms-0.9.1-4.20211126git1ff6fe1f43.el9_2.x86_64

libtpms-0.9.1-4.20211126git1ff6fe1f43.el9_2.x86_64

libunistring-0.9.10-15.el9.x86_64

libunistring-0.9.10-15.el9.x86_64

libunistring-0.9.10-15.el9.x86_64

liburing-2.5-1.el9.x86_64

liburing-2.5-1.el9.x86_64

liburing-2.5-1.el9.x86_64

libusbx-1.0.26-1.el9.x86_64

libusbx-1.0.26-1.el9.x86_64

libusbx-1.0.26-1.el9.x86_64

libutempter-1.2.1-6.el9.x86_64

libutempter-1.2.1-6.el9.x86_64

libutempter-1.2.1-6.el9.x86_64

libuuid-2.37.4-18.el9.x86_64

libuuid-2.37.4-18.el9.x86_64

libuuid-2.37.4-18.el9.x86_64

libverto-0.3.2-3.el9.x86_64

libverto-0.3.2-3.el9.x86_64

libverto-0.3.2-3.el9.x86_64

libverto-libev-0.3.2-3.el9.x86_64

libverto-libev-0.3.2-3.el9.x86_64

libverto-libev-0.3.2-3.el9.x86_64

libvirt-client-10.0.0-6.7.el9_4.x86_64

libvirt-client-10.0.0-6.7.el9_4.x86_64

libvirt-client-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-common-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-common-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-common-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-config-network-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-config-network-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-config-network-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-network-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-network-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-network-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-qemu-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-qemu-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-qemu-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-secret-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-secret-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-secret-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-storage-core-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-storage-core-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-storage-core-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-log-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-log-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-log-10.0.0-6.7.el9_4.x86_64

libvirt-libs-10.0.0-6.7.el9_4.x86_64

libvirt-libs-10.0.0-6.7.el9_4.x86_64

libvirt-libs-10.0.0-6.7.el9_4.x86_64

libxcrypt-4.4.18-3.el9.x86_64

libxcrypt-4.4.18-3.el9.x86_64

libxcrypt-4.4.18-3.el9.x86_64

libxcrypt-compat-4.4.18-3.el9.x86_64

libxcrypt-compat-4.4.18-3.el9.x86_64

libxcrypt-compat-4.4.18-3.el9.x86_64

libxml2-2.9.13-6.el9_4.x86_64

libxml2-2.9.13-6.el9_4.x86_64

libxml2-2.9.13-6.el9_4.x86_64

libxslt-1.1.34-9.el9.x86_64

libxslt-1.1.34-9.el9.x86_64

libxslt-1.1.34-9.el9.x86_64

libyaml-0.2.5-7.el9.x86_64

libyaml-0.2.5-7.el9.x86_64

libyaml-0.2.5-7.el9.x86_64

libzstd-1.5.1-2.el9.x86_64

libzstd-1.5.1-2.el9.x86_64

libzstd-1.5.1-2.el9.x86_64

linux-firmware-20240716-143.2.el9_4.noarch

linux-firmware-20240905-143.3.el9_4.noarch

linux-firmware-20240905-143.3.el9_4.noarch

linux-firmware-whence-20240716-143.2.el9_4.noarch

linux-firmware-whence-20240905-143.3.el9_4.noarch

linux-firmware-whence-20240905-143.3.el9_4.noarch

lsscsi-0.32-6.el9.x86_64

lsscsi-0.32-6.el9.x86_64

lsscsi-0.32-6.el9.x86_64

lua-libs-5.4.4-4.el9.x86_64

lua-libs-5.4.4-4.el9.x86_64

lua-libs-5.4.4-4.el9.x86_64

lua-srpm-macros-1-6.el9.noarch

lua-srpm-macros-1-6.el9.noarch

lua-srpm-macros-1-6.el9.noarch

luksmeta-9-12.el9.x86_64

luksmeta-9-12.el9.x86_64

luksmeta-9-12.el9.x86_64

lvm2-2.03.23-2.el9.x86_64

lvm2-2.03.23-2.el9.x86_64

lvm2-2.03.23-2.el9.x86_64

lvm2-libs-2.03.23-2.el9.x86_64

lvm2-libs-2.03.23-2.el9.x86_64

lvm2-libs-2.03.23-2.el9.x86_64

lz4-libs-1.9.3-5.el9.x86_64

lz4-libs-1.9.3-5.el9.x86_64

lz4-libs-1.9.3-5.el9.x86_64

lzo-2.10-7.el9.x86_64

lzo-2.10-7.el9.x86_64

lzo-2.10-7.el9.x86_64

lzop-1.04-8.el9.x86_64

lzop-1.04-8.el9.x86_64

lzop-1.04-8.el9.x86_64

man-db-2.9.3-7.el9.x86_64

man-db-2.9.3-7.el9.x86_64

man-db-2.9.3-7.el9.x86_64

mdadm-4.2-14.el9_4.x86_64

mdadm-4.2-14.el9_4.x86_64

mdadm-4.2-14.el9_4.x86_64

microdnf-3.9.1-3.el9.x86_64

microdnf-3.9.1-3.el9.x86_64

microdnf-3.9.1-3.el9.x86_64

mingw-binutils-generic-2.41-3.el9.x86_64

mingw-binutils-generic-2.41-3.el9.x86_64

mingw-binutils-generic-2.41-3.el9.x86_64

mingw-filesystem-base-148-3.el9.noarch

mingw-filesystem-base-148-3.el9.noarch

mingw-filesystem-base-148-3.el9.noarch

mingw32-crt-11.0.1-3.el9.noarch

mingw32-crt-11.0.1-3.el9.noarch

mingw32-crt-11.0.1-3.el9.noarch

mingw32-filesystem-148-3.el9.noarch

mingw32-filesystem-148-3.el9.noarch

mingw32-filesystem-148-3.el9.noarch

mingw32-srvany-1.1-3.el9.noarch

mingw32-srvany-1.1-3.el9.noarch

mingw32-srvany-1.1-3.el9.noarch

mpfr-4.1.0-7.el9.x86_64

mpfr-4.1.0-7.el9.x86_64

mpfr-4.1.0-7.el9.x86_64

mtools-4.0.26-4.el9_0.x86_64

mtools-4.0.26-4.el9_0.x86_64

mtools-4.0.26-4.el9_0.x86_64

nbdkit-1.36.2-1.el9.x86_64

nbdkit-1.36.2-1.el9.x86_64

nbdkit-1.36.2-1.el9.x86_64

nbdkit-basic-filters-1.36.2-1.el9.x86_64

nbdkit-basic-filters-1.36.2-1.el9.x86_64

nbdkit-basic-filters-1.36.2-1.el9.x86_64

nbdkit-basic-plugins-1.36.2-1.el9.x86_64

nbdkit-basic-plugins-1.36.2-1.el9.x86_64

nbdkit-basic-plugins-1.36.2-1.el9.x86_64

nbdkit-curl-plugin-1.36.2-1.el9.x86_64

nbdkit-curl-plugin-1.36.2-1.el9.x86_64

nbdkit-curl-plugin-1.36.2-1.el9.x86_64

nbdkit-nbd-plugin-1.36.2-1.el9.x86_64

nbdkit-nbd-plugin-1.36.2-1.el9.x86_64

nbdkit-nbd-plugin-1.36.2-1.el9.x86_64

nbdkit-python-plugin-1.36.2-1.el9.x86_64

nbdkit-python-plugin-1.36.2-1.el9.x86_64

nbdkit-python-plugin-1.36.2-1.el9.x86_64

nbdkit-server-1.36.2-1.el9.x86_64

nbdkit-server-1.36.2-1.el9.x86_64

nbdkit-server-1.36.2-1.el9.x86_64

nbdkit-ssh-plugin-1.36.2-1.el9.x86_64

nbdkit-ssh-plugin-1.36.2-1.el9.x86_64

nbdkit-ssh-plugin-1.36.2-1.el9.x86_64

nbdkit-vddk-plugin-1.36.2-1.el9.x86_64

nbdkit-vddk-plugin-1.36.2-1.el9.x86_64

nbdkit-vddk-plugin-1.36.2-1.el9.x86_64

ncurses-6.2-10.20210508.el9.x86_64

ncurses-6.2-10.20210508.el9.x86_64

ncurses-6.2-10.20210508.el9.x86_64

ncurses-base-6.2-10.20210508.el9.noarch

ncurses-base-6.2-10.20210508.el9.noarch

ncurses-base-6.2-10.20210508.el9.noarch

ncurses-libs-6.2-10.20210508.el9.x86_64

ncurses-libs-6.2-10.20210508.el9.x86_64

ncurses-libs-6.2-10.20210508.el9.x86_64

ndctl-libs-71.1-8.el9.x86_64

ndctl-libs-71.1-8.el9.x86_64

ndctl-libs-71.1-8.el9.x86_64

nettle-3.9.1-1.el9.x86_64

nettle-3.9.1-1.el9.x86_64

nettle-3.9.1-1.el9.x86_64

nfs-utils-2.5.4-26.el9_4.x86_64

nfs-utils-2.5.4-26.el9_4.x86_64

nfs-utils-2.5.4-26.el9_4.x86_64

npth-1.6-8.el9.x86_64

npth-1.6-8.el9.x86_64

npth-1.6-8.el9.x86_64

numactl-libs-2.0.16-3.el9.x86_64

numactl-libs-2.0.16-3.el9.x86_64

numactl-libs-2.0.16-3.el9.x86_64

numad-0.5-37.20150602git.el9.x86_64

numad-0.5-37.20150602git.el9.x86_64

numad-0.5-37.20150602git.el9.x86_64

ocaml-srpm-macros-6-6.el9.noarch

ocaml-srpm-macros-6-6.el9.noarch

ocaml-srpm-macros-6-6.el9.noarch

oniguruma-6.9.6-1.el9.5.x86_64

oniguruma-6.9.6-1.el9.5.x86_64

oniguruma-6.9.6-1.el9.5.x86_64

openblas-srpm-macros-2-11.el9.noarch

openblas-srpm-macros-2-11.el9.noarch

openblas-srpm-macros-2-11.el9.noarch

openldap-2.6.6-3.el9.x86_64

openldap-2.6.6-3.el9.x86_64

openldap-2.6.6-3.el9.x86_64

openssh-8.7p1-38.el9_4.4.x86_64

openssh-8.7p1-38.el9_4.4.x86_64

openssh-8.7p1-38.el9_4.4.x86_64

openssh-clients-8.7p1-38.el9_4.4.x86_64

openssh-clients-8.7p1-38.el9_4.4.x86_64

openssh-clients-8.7p1-38.el9_4.4.x86_64

openssl-3.0.7-28.el9_4.x86_64

openssl-3.0.7-28.el9_4.x86_64

openssl-3.0.7-28.el9_4.x86_64

openssl-fips-provider-3.0.7-2.el9.x86_64

openssl-fips-provider-3.0.7-2.el9.x86_64

openssl-fips-provider-3.0.7-2.el9.x86_64

openssl-libs-3.0.7-28.el9_4.x86_64

openssl-libs-3.0.7-28.el9_4.x86_64

openssl-libs-3.0.7-28.el9_4.x86_64

osinfo-db-20231215-1.el9.noarch

osinfo-db-20231215-1.el9.noarch

osinfo-db-20231215-1.el9.noarch

osinfo-db-tools-1.10.0-1.el9.x86_64

osinfo-db-tools-1.10.0-1.el9.x86_64

osinfo-db-tools-1.10.0-1.el9.x86_64

p11-kit-0.25.3-2.el9.x86_64

p11-kit-0.25.3-2.el9.x86_64

p11-kit-0.25.3-2.el9.x86_64

p11-kit-trust-0.25.3-2.el9.x86_64

p11-kit-trust-0.25.3-2.el9.x86_64

p11-kit-trust-0.25.3-2.el9.x86_64

pam-1.5.1-19.el9.x86_64

pam-1.5.1-19.el9.x86_64

pam-1.5.1-19.el9.x86_64

parted-3.5-2.el9.x86_64

parted-3.5-2.el9.x86_64

parted-3.5-2.el9.x86_64

passt-0^20231204.gb86afe3-1.el9.x86_64

passt-0^20231204.gb86afe3-1.el9.x86_64

passt-0^20231204.gb86afe3-1.el9.x86_64

passt-selinux-0^20231204.gb86afe3-1.el9.noarch

passt-selinux-0^20231204.gb86afe3-1.el9.noarch

passt-selinux-0^20231204.gb86afe3-1.el9.noarch

pcre-8.44-3.el9.3.x86_64

pcre-8.44-3.el9.3.x86_64

pcre-8.44-3.el9.3.x86_64

pcre2-10.40-5.el9.x86_64

pcre2-10.40-5.el9.x86_64

pcre2-10.40-5.el9.x86_64

pcre2-syntax-10.40-5.el9.noarch

pcre2-syntax-10.40-5.el9.noarch

pcre2-syntax-10.40-5.el9.noarch

perl-AutoLoader-5.74-481.el9.noarch

perl-AutoLoader-5.74-481.el9.noarch

perl-AutoLoader-5.74-481.el9.noarch

perl-B-1.80-481.el9.x86_64

perl-B-1.80-481.el9.x86_64

perl-B-1.80-481.el9.x86_64

perl-base-2.27-481.el9.noarch

perl-base-2.27-481.el9.noarch

perl-base-2.27-481.el9.noarch

perl-Carp-1.50-460.el9.noarch

perl-Carp-1.50-460.el9.noarch

perl-Carp-1.50-460.el9.noarch

perl-Class-Struct-0.66-481.el9.noarch

perl-Class-Struct-0.66-481.el9.noarch

perl-Class-Struct-0.66-481.el9.noarch

perl-constant-1.33-461.el9.noarch

perl-constant-1.33-461.el9.noarch

perl-constant-1.33-461.el9.noarch

perl-Data-Dumper-2.174-462.el9.x86_64

perl-Data-Dumper-2.174-462.el9.x86_64

perl-Data-Dumper-2.174-462.el9.x86_64

perl-Digest-1.19-4.el9.noarch

perl-Digest-1.19-4.el9.noarch

perl-Digest-1.19-4.el9.noarch

perl-Digest-MD5-2.58-4.el9.x86_64

perl-Digest-MD5-2.58-4.el9.x86_64

perl-Digest-MD5-2.58-4.el9.x86_64

perl-Encode-3.08-462.el9.x86_64

perl-Encode-3.08-462.el9.x86_64

perl-Encode-3.08-462.el9.x86_64

perl-Errno-1.30-481.el9.x86_64

perl-Errno-1.30-481.el9.x86_64

perl-Errno-1.30-481.el9.x86_64

perl-Exporter-5.74-461.el9.noarch

perl-Exporter-5.74-461.el9.noarch

perl-Exporter-5.74-461.el9.noarch

perl-Fcntl-1.13-481.el9.x86_64

perl-Fcntl-1.13-481.el9.x86_64

perl-Fcntl-1.13-481.el9.x86_64

perl-File-Basename-2.85-481.el9.noarch

perl-File-Basename-2.85-481.el9.noarch

perl-File-Basename-2.85-481.el9.noarch

perl-File-Path-2.18-4.el9.noarch

perl-File-Path-2.18-4.el9.noarch

perl-File-Path-2.18-4.el9.noarch

perl-File-stat-1.09-481.el9.noarch

perl-File-stat-1.09-481.el9.noarch

perl-File-stat-1.09-481.el9.noarch

perl-File-Temp-0.231.100-4.el9.noarch

perl-File-Temp-0.231.100-4.el9.noarch

perl-File-Temp-0.231.100-4.el9.noarch

perl-FileHandle-2.03-481.el9.noarch

perl-FileHandle-2.03-481.el9.noarch

perl-FileHandle-2.03-481.el9.noarch

perl-Getopt-Long-2.52-4.el9.noarch

perl-Getopt-Long-2.52-4.el9.noarch

perl-Getopt-Long-2.52-4.el9.noarch

perl-Getopt-Std-1.12-481.el9.noarch

perl-Getopt-Std-1.12-481.el9.noarch

perl-Getopt-Std-1.12-481.el9.noarch
perl-HTTP-Tiny-0.076-462.el9.noarch
perl-if-0.60.800-481.el9.noarch
perl-interpreter-5.32.1-481.el9.x86_64
perl-IO-1.43-481.el9.x86_64
perl-IO-Socket-IP-0.41-5.el9.noarch
perl-IO-Socket-SSL-2.073-1.el9.noarch
perl-IPC-Open3-1.21-481.el9.noarch
perl-libnet-3.13-4.el9.noarch
perl-libs-5.32.1-481.el9.x86_64
perl-MIME-Base64-3.16-4.el9.x86_64
perl-Mozilla-CA-20200520-6.el9.noarch
perl-mro-1.23-481.el9.x86_64
perl-NDBM_File-1.15-481.el9.x86_64
perl-Net-SSLeay-1.92-2.el9.x86_64
perl-overload-1.31-481.el9.noarch
perl-overloading-0.02-481.el9.noarch
perl-parent-0.238-460.el9.noarch
perl-PathTools-3.78-461.el9.x86_64
perl-Pod-Escapes-1.07-460.el9.noarch
perl-Pod-Perldoc-3.28.01-461.el9.noarch
perl-Pod-Simple-3.42-4.el9.noarch
perl-Pod-Usage-2.01-4.el9.noarch
perl-podlators-4.14-460.el9.noarch
perl-POSIX-1.94-481.el9.x86_64
perl-Scalar-List-Utils-1.56-461.el9.x86_64
perl-SelectSaver-1.02-481.el9.noarch
perl-Socket-2.031-4.el9.x86_64
perl-srpm-macros-1-41.el9.noarch
perl-Storable-3.21-460.el9.x86_64
perl-subs-1.03-481.el9.noarch
perl-Symbol-1.08-481.el9.noarch
perl-Term-ANSIColor-5.01-461.el9.noarch
perl-Term-Cap-1.17-460.el9.noarch
perl-Text-ParseWords-3.30-460.el9.noarch
perl-Text-Tabs+Wrap-2013.0523-460.el9.noarch
perl-Time-Local-1.300-7.el9.noarch
perl-URI-5.09-3.el9.noarch
perl-vars-1.05-481.el9.noarch
pigz-2.5-4.el9.x86_64
pixman-0.40.0-6.el9.x86_64
pkgconf-1.7.3-10.el9.x86_64
policycoreutils-3.6-2.1.el9.x86_64
policycoreutils-python-utils-3.6-2.1.el9.noarch
polkit-0.117-11.el9.x86_64
polkit-libs-0.117-11.el9.x86_64
polkit-pkla-compat-0.1-21.el9.x86_64
popt-1.18-8.el9.x86_64
procps-ng-3.3.17-14.el9.x86_64
protobuf-c-1.3.3-13.el9.x86_64
psmisc-23.4-3.el9.x86_64
publicsuffix-list-dafsa-20210518-3.el9.noarch
pyproject-srpm-macros-1.12.0-1.el9.noarch
python-srpm-macros-3.9-53.el9.noarch
python-unversioned-command-3.9.18-3.el9_4.5.noarch
python3-3.9.18-3.el9_4.5.x86_64
python3-audit-3.1.2-2.el9.x86_64
python3-distro-1.5.0-7.el9.noarch
python3-libs-3.9.18-3.el9_4.5.x86_64
python3-libselinux-3.6-1.el9.x86_64
python3-libsemanage-3.6-1.el9.x86_64
python3-pip-wheel-21.2.3-8.el9.noarch
python3-policycoreutils-3.6-2.1.el9.noarch
python3-pyyaml-5.4.1-6.el9.x86_64
python3-setools-4.4.4-1.el9.x86_64
python3-setuptools-53.0.0-12.el9_4.1.noarch
python3-setuptools-wheel-53.0.0-12.el9_4.1.noarch
qemu-img-8.2.0-11.el9_4.6.x86_64
qemu-kvm-common-8.2.0-11.el9_4.6.x86_64
qemu-kvm-core-8.2.0-11.el9_4.6.x86_64
qt5-srpm-macros-5.15.9-1.el9.noarch
quota-4.06-6.el9.x86_64
quota-nls-4.06-6.el9.noarch
readline-8.1-4.el9.x86_64
redhat-release-9.4-0.5.el9.x86_64
redhat-rpm-config-207-1.el9.noarch
rootfiles-8.1-31.el9.noarch
rpcbind-1.2.6-7.el9.x86_64
rpm-4.16.1.3-29.el9.x86_64
rpm-libs-4.16.1.3-29.el9.x86_64
rpm-plugin-selinux-4.16.1.3-29.el9.x86_64
rust-srpm-macros-17-4.el9.noarch
scrub-2.6.1-4.el9.x86_64
seabios-bin-1.16.3-2.el9.noarch
seavgabios-bin-1.16.3-2.el9.noarch
sed-4.8-9.el9.x86_64
selinux-policy-38.1.35-2.el9_4.2.noarch
selinux-policy-targeted-38.1.35-2.el9_4.2.noarch
setup-2.13.7-10.el9.noarch
shadow-utils-4.9-8.el9.x86_64
snappy-1.1.8-8.el9.x86_64
sqlite-libs-3.34.1-7.el9_3.x86_64
squashfs-tools-4.4-10.git1.el9.x86_64
supermin-5.3.3-1.el9.x86_64
swtpm-0.8.0-2.el9_4.x86_64
swtpm-libs-0.8.0-2.el9_4.x86_64
swtpm-tools-0.8.0-2.el9_4.x86_64
syslinux-6.04-0.20.el9.x86_64
syslinux-extlinux-6.04-0.20.el9.x86_64
syslinux-extlinux-nonlinux-6.04-0.20.el9.noarch
syslinux-nonlinux-6.04-0.20.el9.noarch
systemd-252-32.el9_4.7.x86_64
systemd-container-252-32.el9_4.7.x86_64
systemd-libs-252-32.el9_4.7.x86_64
systemd-pam-252-32.el9_4.7.x86_64
systemd-rpm-macros-252-32.el9_4.7.noarch
systemd-udev-252-32.el9_4.7.x86_64
tar-1.34-6.el9_4.1.x86_64
tpm2-tools-5.2-3.el9.x86_64
tpm2-tss-3.2.2-2.el9.x86_64
tzdata-2024a-1.el9.noarch
unbound-libs-1.16.2-3.el9_3.5.x86_64
unzip-6.0-56.el9.x86_64
userspace-rcu-0.12.1-6.el9.x86_64
util-linux-2.37.4-18.el9.x86_64
util-linux-core-2.37.4-18.el9.x86_64
vim-minimal-8.2.2637-20.el9_1.x86_64
virt-v2v-2.4.0-4.el9_4.x86_64
virtio-win-1.9.40-0.el9_4.noarch
webkit2gtk3-jsc-2.42.5-1.el9.x86_64
webkit2gtk3-jsc-2.46.1-2.el9_4.x86_64
which-2.21-29.el9.x86_64
xfsprogs-6.3.0-1.el9.x86_64
xz-5.2.5-8.el9_0.x86_64
xz-libs-5.2.5-8.el9_0.x86_64
yajl-2.1.0-22.el9.x86_64
zip-3.0-35.el9.x86_64
zlib-1.2.11-40.el9.x86_64
zstd-1.5.1-2.el9.x86_64

+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-overview-page/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-overview-page/index.html new file mode 100644 index 00000000000..99c663f2770 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-overview-page/index.html @@ -0,0 +1,214 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

The MTV Overview page

+
+
+
+

The Forklift Overview page displays system-wide information about migrations and a list of Settings you can change.

+
+
+

If you have Administrator privileges, you can access the Overview page by clicking Migration → Overview in the OKD web console.

+
+
+

The Overview page has three tabs:

+
+
+
    +
  • +

    Overview

    +
  • +
  • +

    YAML

    +
  • +
  • +

    Metrics

    +
  • +
+
+
+
+
+

Overview tab

+
+
+

The Overview tab lets you see:

+
+
+
    +
  • +

Operator: The namespace in which the Forklift Operator is deployed and the status of the Operator

    +
  • +
  • +

    Pods: The name, status, and creation time of each pod that was deployed by the Forklift Operator

    +
  • +
  • +

    Conditions: Status of the Forklift Operator:

    +
    +
      +
    • +

      Failure: Last failure. False indicates no failure since deployment.

      +
    • +
    • +

      Running: Whether the Operator is currently running and waiting for the next reconciliation.

      +
    • +
    • +

      Successful: Last successful reconciliation.

      +
    • +
    +
    +
  • +
+
+
+
+
+

YAML tab

+
+
+

The YAML tab displays the ForkliftController custom resource that defines the operation of the Forklift Operator. You can modify the custom resource from this tab.
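If you prefer the command line, you can view and modify the same resource with oc. A minimal sketch, assuming the ForkliftController CR is deployed in the openshift-mtv namespace (adjust the namespace and name to your installation):

$ # Inspect the ForkliftController CR
$ oc get forkliftcontroller -n openshift-mtv -o yaml
$ # Open the CR in an editor to modify it
$ oc edit forkliftcontroller <name> -n openshift-mtv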

+
+
+
+
+

Metrics tab

+
+
+

The Metrics tab lets you see:

+
+
+
    +
  • +

    Migrations: The number of migrations performed using Forklift:

    +
    +
      +
    • +

      Total

      +
    • +
    • +

      Running

      +
    • +
    • +

      Failed

      +
    • +
    • +

      Succeeded

      +
    • +
    • +

      Canceled

      +
    • +
    +
    +
  • +
  • +

    Virtual Machine Migrations: The number of VMs migrated using Forklift:

    +
    +
      +
    • +

      Total

      +
    • +
    • +

      Running

      +
    • +
    • +

      Failed

      +
    • +
    • +

      Succeeded

      +
    • +
    • +

      Canceled

      +
    • +
    +
    +
  • +
+
+
+ + + + + +
+
Note
+
+
+

Since a single migration might involve many virtual machines, the number of migrations performed using Forklift might vary significantly from the number of virtual machines that have been migrated using Forklift.

+
+
+
+
+
    +
  • +

    Chart showing the number of running, failed, and succeeded migrations performed using Forklift for each of the last 7 days

    +
  • +
  • +

    Chart showing the number of running, failed, and succeeded virtual machine migrations performed using Forklift for each of the last 7 days

    +
  • +
+
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-performance-addendum/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-performance-addendum/index.html new file mode 100644 index 00000000000..8fce82d5b2a --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-performance-addendum/index.html @@ -0,0 +1,291 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift performance addendum

+
+
+
+


+
+
+
+
+

ESXi performance

+
+
+
Single ESXi performance
+

Test migrations were performed using a single ESXi host.

+
+
+

In each iteration, the total number of VMs is increased to show the impact of concurrent migration on the duration.

+
+
+

The results show that migration time scales linearly with the total number of VMs (50 GiB disks, 70% utilization).

+
+
+

The optimal number of VMs per ESXi host is 10.

+
+ + ++++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Single ESXi tests
Test Case Description | MTV | VDDK | max_vm_inflight | Migration Type | Total Duration

cold migration, 10 VMs, Single ESXi, Private Network [1]

2.6

7.0.3

100

cold

0:21:39

cold migration, 20 VMs, Single ESXi, Private Network

2.6

7.0.3

100

cold

0:41:16

cold migration, 30 VMs, Single ESXi, Private Network

2.6

7.0.3

100

cold

1:00:59

cold migration, 40 VMs, Single ESXi, Private Network

2.6

7.0.3

100

cold

1:23:02

cold migration, 50 VMs, Single ESXi, Private Network

2.6

7.0.3

100

cold

1:46:24

cold migration, 80 VMs, Single ESXi, Private Network

2.6

7.0.3

100

cold

2:42:49

cold migration, 100 VMs, Single ESXi, Private Network

2.6

7.0.3

100

cold

3:25:15

+
+
Multi ESXi hosts and single data store
+

In each iteration, the number of ESXi hosts was increased, to show that increasing the number of ESXi hosts improves the migration time (50 GiB disks, 70% utilization).

+
+ + ++++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 2. Multi ESXi hosts and single data store
Test Case Description | MTV | VDDK | max_vm_inflight | Migration Type | Total Duration

cold migration, 100 VMs, Single ESXi, Private Network [2]

2.6

7.0.3

100

cold

3:25:15

cold migration, 100 VMs, 4 ESXs (25 VMs per ESX), Private Network

2.6

7.0.3

100

cold

1:22:27

cold migration, 100 VMs, 5 ESXs (20 VMs per ESX), Private Network, 1 DataStore

2.6

7.0.3

100

cold

1:04:57

+
+
+
+

Different migration network performance

+
+
+

In each iteration, the migration network was changed, using the provider, to find the fastest network for migration.

+
+
+

The results show that there is no degradation when using the management network compared with non-management networks, provided that all interfaces and network speeds are the same.

+
+ + ++++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 3. Different migration network tests
Test Case Description | MTV | VDDK | max_vm_inflight | Migration Type | Total Duration

cold migration, 10 VMs, Single ESXi, MGMT Network

2.6

7.0.3

100

cold

0:21:30

cold migration, 10 VMs, Single ESXi, Private Network [3]

2.6

7.0.3

20

cold

0:21:20

cold migration, 10 VMs, Single ESXi, Default Network

2.6.2

7.0.3

20

cold

0:21:30

+
+
+
+
+
+1. Private Network refers to a non-Management network +
+
+2. Private Network refers to a non-Management network +
+
+3. Private Network refers to a non-Management network +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-performance-recommendation/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-performance-recommendation/index.html new file mode 100644 index 00000000000..8b989bce88d --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-performance-recommendation/index.html @@ -0,0 +1,382 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift performance recommendations

+
+
+
+

The purpose of this section is to share recommendations for efficient and effective migration of virtual machines (VMs) using Forklift, based on findings observed through testing.

+
+
+


+
+
+
+
+

Ensure fast storage and network speeds

+
+
+

Ensure fast storage and network speeds, both for VMware and OKD (OCP) environments.

+
+
+
    +
  • +

To perform fast migrations, VMware must have fast read access to datastores. Networking between VMware ESXi hosts should be fast; ensure a 10 GbE network connection and avoid network bottlenecks.

    +
    +
      +
    • +

Extend the VMware network to the network environment of the OCP worker interfaces.

      +
    • +
    • +

Ensure that the VMware network offers high throughput (10 Gigabit Ethernet) so that reception rates align with the read rate of the ESXi datastore.

      +
    • +
    • +

Be aware that the migration process consumes significant bandwidth on the migration network. If other services utilize that network, the migration may have an impact on those services, and contention may reduce migration rates.

      +
    • +
    • +

For example, 200 to 325 MiB/s was the average network transfer rate from the vmnic of each ESXi host associated with transferring data to the OCP interface.

      +
    • +
    +
    +
  • +
+
+
+
+
+

Ensure fast datastore read speeds for efficient and performant migrations.

+
+
+

Datastore read rates impact the total transfer times, so it is essential that fast reads are possible from the ESXi datastore to the ESXi host.

+
+
+

For example, 200 to 300 MiB/s was the average read rate for both vSphere and ESXi endpoints for a single ESXi server. When multiple ESXi servers are used, higher datastore read rates are possible.

+
+
+
+
+

Endpoint types 

+
+
+

Forklift 2.6 allows for the following vSphere provider options:

+
+
+
    +
  • +

    ESXi endpoint (inventory and disk transfers from ESXi), introduced in Forklift 2.6

    +
  • +
  • +

    vCenter Server endpoint; no networks for the ESXi host (inventory and disk transfers from vCenter)

    +
  • +
  • +

    vCenter endpoint and ESXi networks are available (inventory from vCenter, disk transfers from ESXi).

    +
  • +
+
+
+

When transferring many VMs that are registered to multiple ESXi hosts, using the vCenter endpoint and ESXi network is suggested.

+
+
+ + + + + +
+
Note
+
+
+

As of vSphere 7.0, ESXi hosts can label which network to use for NBD transport. This is accomplished by tagging the desired virtual network interface card (NIC) with the vSphereBackupNFC label. When this is done, Forklift can utilize the ESXi interface for network transfer to OpenShift, as long as the worker and ESXi host interfaces are reachable. This is especially useful when migration users do not have access to the ESXi credentials but want to control which ESXi interface is used for migration.

+
+
+

For more details, see (Forklift-1230).

+
+
+
+
+

You can use the following ESXi command, which designates interface vmk2 for NBD backup:

+
+
+
+
esxcli network ip interface tag add -t vSphereBackupNFC -i vmk2
+
+
+
+
+
+

Set the ESXi host BIOS profile and ESXi host power management for high performance

+
+
+

Where possible, ensure that hosts used to perform migrations are set with BIOS profiles related to maximum performance. Hosts whose power management is controlled within vSphere should have High Performance set.

+
+
+

Testing showed that when transferring more than 10 VMs with both the BIOS profile and host power management set accordingly, migrations had an increase of 15 MiB/s in the average datastore read rate.

+
+
+
+
+

Avoid additional network load on VMware networks

+
+
+

You can reduce the network load on VMware networks by selecting the migration network when using the ESXi endpoint.

+
+
+

By incorporating a virtualization provider, Forklift enables the selection of a specific network, accessible on the ESXi hosts, for migrating virtual machines to OCP. Selecting this migration network from the ESXi host in the Forklift UI ensures that the transfer is performed using the selected network as an ESXi endpoint.

+
+
+

Ensure that the selected network has connectivity to the OCP interface, has adequate bandwidth for migrations, and that the network interface is not saturated.

+
+
+

In environments with fast networks, such as 10 GbE networks, migration rates can be expected to match the rate of ESXi datastore reads.

+
+
+
+
+

Control maximum concurrent disk migrations per ESXi host

+
+
+

Set the MAX_VM_INFLIGHT Forklift variable to control the maximum number of concurrent VM transfers allowed per ESXi host.

+
+
+

Forklift allows for concurrency to be controlled using this variable; by default, it is set to 20.

+
+
+

When setting MAX_VM_INFLIGHT, consider the maximum number of concurrent VM transfers required per ESXi host, as well as the type of migration to be performed concurrently. Warm migrations are migrations of a running VM that are performed over a scheduled period of time.

+
+
+

Warm migrations use snapshots to compare and migrate only the differences between previous snapshots of the disk. The migration of the differences between snapshots happens at specific intervals before a final cutover of the running VM to OKD occurs.

+
+
+

In Forklift 2.6, MAX_VM_INFLIGHT reserves one transfer slot per VM, regardless of current migration activity for a specific snapshot or the number of disks that belong to a single VM. The total set by MAX_VM_INFLIGHT indicates how many concurrent VM transfers per ESXi host are allowed.

+
+
+
Examples
+
    +
  • +

    MAX_VM_INFLIGHT = 20 and 2 ESXi hosts defined in the provider mean each host can transfer 20 VMs.

    +
  • +
+
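A minimal sketch of setting this variable from the CLI. It assumes the ForkliftController CR is named forklift-controller in the openshift-mtv namespace and that the variable is exposed as the controller_max_vm_inflight spec field; verify both against your installation:

$ # Raise the maximum number of concurrent VM transfers per ESXi host to 40
$ oc patch forkliftcontroller/forklift-controller -n openshift-mtv \
    --type merge -p '{"spec": {"controller_max_vm_inflight": 40}}'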
+
+
+
+

Migrations complete faster when multiple VMs are migrated concurrently

+
+
+

When multiple VMs from a specific ESXi host are to be migrated, starting their migrations concurrently leads to faster migration times.

+
+
+

Testing demonstrated that migrating 10 VMs concurrently (each with a 50 GiB disk containing 35 GiB of data) from a single host is significantly faster than migrating the same number of VMs sequentially, one after another.

+
+
+

It is possible to increase concurrent migration to more than 10 virtual machines from a single host, but it does not show a significant improvement. 

+
+
+
Examples
+
    +
  • +

1 single-disk VM took 6 minutes, with a migration rate of 100 MiB/s

    +
  • +
  • +

10 single-disk VMs took 22 minutes, with a migration rate of 272 MiB/s

    +
  • +
  • +

20 single-disk VMs took 42 minutes, with a migration rate of 284 MiB/s

    +
  • +
+
+
+ + + + + +
+
Note
+
+
+

From the aforementioned examples, it is evident that the migration of 10 virtual machines simultaneously is three times faster than the migration of identical virtual machines in a sequential manner.

+
+
+

The migration rate was almost the same when moving 10 or 20 virtual machines simultaneously.

+
+
+
+
+
+
+

Migrations complete faster using multiple hosts

+
+
+

Using multiple hosts with registered VMs equally distributed among the ESXi hosts used for migrations leads to faster migration times.

+
+
+

Testing showed that when transferring more than 10 single-disk VMs, each containing 35 GiB of data on a 50 GiB disk, using an additional host can reduce migration time.

+
+
+
Examples
+
    +
  • +

80 single-disk VMs, containing 35 GiB of data each, using a single host took 2 hours and 43 minutes, with a migration rate of 294 MiB/s.

    +
  • +
  • +

80 single-disk VMs, containing 35 GiB of data each, using 8 ESXi hosts took 41 minutes, with a migration rate of 1,173 MiB/s.

    +
  • +
+
+
+ + + + + +
+
Note
+
+
+

From the aforementioned examples, it is evident that migrating 80 VMs from 8 ESXi hosts concurrently, 10 from each host, is four times faster than migrating the same VMs from a single ESXi host.

+
+
+

Migrating a larger number of VMs from more than 8 ESXi hosts concurrently could potentially show increased performance. However, this was not tested and is therefore not recommended.

+
+
+
+
+
+
+

Multiple migration plans compared to a single large migration plan

+
+
+

The maximum number of disks that can be referenced by a single migration plan is 500. For more details, see (MTV-1203).

+
+
+

When attempting to migrate many VMs in a single migration plan, it can take some time for all migrations to start.  By breaking up one migration plan into several migration plans, it is possible to start them at the same time.

+
+
+

Comparing migrations of:

+
+
+
    +
  • +

    500 VMs using 8 ESXi hosts in 1 plan, max_vm_inflight=100, took 5 hours and 10 minutes.

    +
  • +
  • +

    800 VMs using 8 ESXi hosts with 8 plans, max_vm_inflight=100, took 57 minutes.

    +
  • +
+
+
+

Testing showed that by breaking one single large plan into multiple moderately sized plans, for example, 100 VMs per plan, the total migration time can be reduced.

+
+
+
+
+

Maximum values tested

+
+
+
    +
  • +

    Maximum number of ESXi hosts tested: 8

    +
  • +
  • +

    Maximum number of VMs in a single migration plan: 500

    +
  • +
  • +

    Maximum number of VMs migrated in a single test: 5000

    +
  • +
  • +

    Maximum number of migration plans performed concurrently: 40

    +
  • +
  • +

Maximum single disk size migrated: 6 TiB disks, which contained 3 TiB of data

    +
  • +
  • +

    Maximum number of disks on a single VM migrated: 50

    +
  • +
  • +

    Highest observed single datastore read rate from a single ESXi server:  312 MiB/second

    +
  • +
  • +

    Highest observed multi-datastore read rate using eight ESXi servers and two datastores: 1,242 MiB/second

    +
  • +
  • +

Highest observed virtual NIC transfer rate to an OKD worker: 327 MiB/second

    +
  • +
  • +

Maximum migration transfer rate of a single disk: 162 MiB/second (rate observed during a nonconcurrent migration of 1.5 TiB of utilized data)

    +
  • +
  • +

Maximum cold migration transfer rate of multiple VMs (single disk) from a single ESXi host: 294 MiB/s (concurrent migration of 30 VMs, 35 GiB used of 50 GiB disks, from a single ESXi host)

    +
  • +
  • +

Maximum cold migration transfer rate of multiple VMs (single disk) from multiple ESXi hosts: 1,173 MiB/s (concurrent migration of 80 VMs, 35 GiB used of 50 GiB disks, from 8 ESXi servers, 10 VMs from each)

    +
  • +
+
+
+

For additional details on performance, see Forklift performance addendum.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-resources-and-services/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-resources-and-services/index.html new file mode 100644 index 00000000000..ee0eb836f52 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-resources-and-services/index.html @@ -0,0 +1,131 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift custom resources and services

+
+

Forklift is provided as an OKD Operator. It creates and manages the following custom resources (CRs) and services.

+
+
+
Forklift custom resources
+
    +
  • +

    Provider CR stores attributes that enable Forklift to connect to and interact with the source and target providers.

    +
  • +
  • +

    NetworkMapping CR maps the networks of the source and target providers.

    +
  • +
  • +

    StorageMapping CR maps the storage of the source and target providers.

    +
  • +
  • +

Plan CR contains a list of VMs with the same migration parameters and associated network and storage mappings. A minimal sketch of a Plan CR follows this list.

    +
  • +
  • +

    Migration CR runs a migration plan.

    +
    +

    Only one Migration CR per migration plan can run at a given time. You can create multiple Migration CRs for a single Plan CR.

    +
    +
  • +
+
+
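To illustrate how these CRs reference one another, the following is a minimal, hedged sketch of a Plan CR; the field layout follows the forklift.konveyor.io/v1beta1 API, and every bracketed value is a placeholder:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: <plan>
  namespace: <namespace>
spec:
  # Source and target providers (Provider CRs)
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
  # Network and storage mappings (NetworkMap and StorageMap CRs)
  map:
    network:
      name: <network_map>
      namespace: <namespace>
    storage:
      name: <storage_map>
      namespace: <namespace>
  targetNamespace: <target_namespace>
  # VMs that share these migration parameters
  vms:
    - name: <vm_name>
EOF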
+
Forklift services
+
    +
  • +

    The Inventory service performs the following actions:

    +
    +
      +
    • +

      Connects to the source and target providers.

      +
    • +
    • +

      Maintains a local inventory for mappings and plans.

      +
    • +
    • +

      Stores VM configurations.

      +
    • +
    • +

      Runs the Validation service if a VM configuration change is detected.

      +
    • +
    +
    +
  • +
  • +

    The Validation service checks the suitability of a VM for migration by applying rules.

    +
  • +
  • +

    The Migration Controller service orchestrates migrations.

    +
    +

    When you create a migration plan, the Migration Controller service validates the plan and adds a status label. If the plan fails validation, the plan status is Not ready and the plan cannot be used to perform a migration. If the plan passes validation, the plan status is Ready and it can be used to perform a migration. After a successful migration, the Migration Controller service changes the plan status to Completed.

    +
    +
  • +
  • +

    The Populator Controller service orchestrates disk transfers using Volume Populators.

    +
  • +
  • +

The KubeVirt Controller and Containerized Data Importer (CDI) Controller services handle most technical operations.

    +
  • +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-selected-packages-2-7/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-selected-packages-2-7/index.html new file mode 100644 index 00000000000..8c1a84cd8ff --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-selected-packages-2-7/index.html @@ -0,0 +1,207 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift selected packages

+ + ++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Selected Forklift packages
Package summary | Forklift 2.7.0 | Forklift 2.7.2 | Forklift 2.7.3

The skeleton package which defines a simple Red Hat Enterprise Linux system

basesystem-11-13.el9.noarch

basesystem-11-13.el9.noarch

basesystem-11-13.el9.noarch

Core kernel modules to match the core kernel

kernel-modules-core-5.14.0-427.35.1.el9_4.x86_64

kernel-modules-core-5.14.0-427.37.1.el9_4.x86_64

kernel-modules-core-5.14.0-427.40.1.el9_4.x86_64

The Linux kernel

kernel-core-5.14.0-427.35.1.el9_4.x86_64

kernel-core-5.14.0-427.37.1.el9_4.x86_64

kernel-core-5.14.0-427.40.1.el9_4.x86_64

Access and modify virtual machine disk images

libguestfs-1.50.1-8.el9_4.x86_64

libguestfs-1.50.1-8.el9_4.x86_64

libguestfs-1.50.1-8.el9_4.x86_64

Client side utilities of the libvirt library

libvirt-client-10.0.0-6.7.el9_4.x86_64

libvirt-client-10.0.0-6.7.el9_4.x86_64

libvirt-client-10.0.0-6.7.el9_4.x86_64

Libvirt libraries

libvirt-libs-10.0.0-6.7.el9_4.x86_64

libvirt-libs-10.0.0-6.7.el9_4.x86_64

libvirt-libs-10.0.0-6.7.el9_4.x86_64

QEMU driver plugin for the libvirtd daemon

libvirt-daemon-driver-qemu-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-qemu-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-qemu-10.0.0-6.7.el9_4.x86_64

NBD server

nbdkit-1.36.2-1.el9.x86_64

nbdkit-1.36.2-1.el9.x86_64

nbdkit-1.36.2-1.el9.x86_64

Basic filters for nbdkit

nbdkit-basic-filters-1.36.2-1.el9.x86_64

nbdkit-basic-filters-1.36.2-1.el9.x86_64

nbdkit-basic-filters-1.36.2-1.el9.x86_64

Basic plugins for nbdkit

nbdkit-basic-plugins-1.36.2-1.el9.x86_64

nbdkit-basic-plugins-1.36.2-1.el9.x86_64

nbdkit-basic-plugins-1.36.2-1.el9.x86_64

HTTP/FTP (cURL) plugin for nbdkit

nbdkit-curl-plugin-1.36.2-1.el9.x86_64

nbdkit-curl-plugin-1.36.2-1.el9.x86_64

nbdkit-curl-plugin-1.36.2-1.el9.x86_64

NBD proxy / forward plugin for nbdkit

nbdkit-nbd-plugin-1.36.2-1.el9.x86_64

nbdkit-nbd-plugin-1.36.2-1.el9.x86_64

nbdkit-nbd-plugin-1.36.2-1.el9.x86_64

Python 3 plugin for nbdkit

nbdkit-python-plugin-1.36.2-1.el9.x86_64

nbdkit-python-plugin-1.36.2-1.el9.x86_64

nbdkit-python-plugin-1.36.2-1.el9.x86_64

The nbdkit server

nbdkit-server-1.36.2-1.el9.x86_64

nbdkit-server-1.36.2-1.el9.x86_64

nbdkit-server-1.36.2-1.el9.x86_64

SSH plugin for nbdkit

nbdkit-ssh-plugin-1.36.2-1.el9.x86_64

nbdkit-ssh-plugin-1.36.2-1.el9.x86_64

nbdkit-ssh-plugin-1.36.2-1.el9.x86_64

VMware VDDK plugin for nbdkit

nbdkit-vddk-plugin-1.36.2-1.el9.x86_64

nbdkit-vddk-plugin-1.36.2-1.el9.x86_64

nbdkit-vddk-plugin-1.36.2-1.el9.x86_64

QEMU command line tool for manipulating disk images

qemu-img-8.2.0-11.el9_4.6.x86_64

qemu-img-8.2.0-11.el9_4.6.x86_64

qemu-img-8.2.0-11.el9_4.6.x86_64

QEMU common files needed by all QEMU targets

qemu-kvm-common-8.2.0-11.el9_4.6.x86_64

qemu-kvm-common-8.2.0-11.el9_4.6.x86_64

qemu-kvm-common-8.2.0-11.el9_4.6.x86_64

+

qemu-kvm core components

+

qemu-kvm-core-8.2.0-11.el9_4.6.x86_64

qemu-kvm-core-8.2.0-11.el9_4.6.x86_64

qemu-kvm-core-8.2.0-11.el9_4.6.x86_64

Convert a virtual machine to run on KVM

virt-v2v-2.4.0-4.el9_4.x86_64

virt-v2v-2.4.0-4.el9_4.x86_64

virt-v2v-2.4.0-4.el9_4.x86_64

+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-settings/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-settings/index.html new file mode 100644 index 00000000000..cf80ce08d29 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-settings/index.html @@ -0,0 +1,133 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Configuring MTV settings

+
+

If you have Administrator privileges, you can access the Overview page and change the following settings in it:

+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Forklift settings
Setting | Description | Default value

Max concurrent virtual machine migrations

The maximum number of VMs per plan that can be migrated simultaneously

20

Must gather cleanup after (hours)

The duration for retaining must gather reports before they are automatically deleted

Disabled

Controller main container CPU limit

The CPU limit allocated to the main controller container

500m

Controller main container Memory limit

The memory limit allocated to the main controller container

800Mi

Precopy interval (minutes)

The interval at which a new snapshot is requested before initiating a warm migration

60

Snapshot polling interval (seconds)

The frequency with which the system checks the status of snapshot creation or removal during a warm migration

10

+
+
Procedure
+
    +
  1. +

In the OKD web console, click Migration → Overview. The Settings list is on the right-hand side of the page.

    +
  2. +
  3. +

    In the Settings list, click the Edit icon of the setting you want to change.

    +
  4. +
  5. +

    Choose a setting from the list.

    +
  6. +
  7. +

    Click Save.

    +
  8. +
+
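The settings on this page correspond to fields of the ForkliftController custom resource, so you can also change them from the CLI. A minimal sketch, assuming the CR is named forklift-controller in the openshift-mtv namespace and that the precopy interval is exposed as the controller_precopy_interval spec field (verify both against your installation):

$ # Request a new snapshot every 30 minutes during warm migration precopy
$ oc patch forkliftcontroller/forklift-controller -n openshift-mtv \
    --type merge -p '{"spec": {"controller_precopy_interval": 30}}'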
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-ui/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-ui/index.html new file mode 100644 index 00000000000..fe46270f79d --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-ui/index.html @@ -0,0 +1,91 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

The MTV user interface

+
+

The Forklift user interface is integrated into the OKD web console.

+
+
+

In the left-hand panel, you can choose a page related to a component of the migration process, for example, Providers for Migration, or, if you are an administrator, you can choose Overview, which contains information about migrations and lets you configure Forklift settings.

+
+
+
+Forklift user interface +
+
Figure 1. Forklift extension interface
+
+
+

In pages related to components, you can click on the Projects list, which is in the upper-left portion of the page, and see which projects (namespaces) you are allowed to work with.

+
+
+
    +
  • +

    If you are an administrator, you can see all projects.

    +
  • +
  • +

    If you are a non-administrator, you can see only the projects that you have permissions to work with.

    +
  • +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-workflow/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-workflow/index.html new file mode 100644 index 00000000000..1e64183ed1f --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-workflow/index.html @@ -0,0 +1,113 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

High-level migration workflow

+
+

The high-level workflow shows the migration process from the point of view of the user:

+
+
+
    +
  1. +

    You create a source provider, a target provider, a network mapping, and a storage mapping.

    +
  2. +
  3. +

    You create a Plan custom resource (CR) that includes the following resources:

    +
    +
      +
    • +

      Source provider

      +
    • +
    • +

      Target provider, if Forklift is not installed on the target cluster

      +
    • +
    • +

      Network mapping

      +
    • +
    • +

      Storage mapping

      +
    • +
    • +

      One or more virtual machines (VMs)

      +
    • +
    +
    +
  4. +
  5. +

    You run a migration plan by creating a Migration CR that references the Plan CR.

    +
    +

    If you cannot migrate all the VMs for any reason, you can create multiple Migration CRs for the same Plan CR until all VMs are migrated.

    +
    +
  6. +
  7. +

    For each VM in the Plan CR, the Migration Controller service records the VM migration progress in the Migration CR.

    +
  8. +
  9. +

    Once the data transfer for each VM in the Plan CR completes, the Migration Controller service creates a VirtualMachine CR.

    +
    +

    When all VMs have been migrated, the Migration Controller service updates the status of the Plan CR to Completed. The power state of each source VM is maintained after migration.

    +
    +
  10. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/network-prerequisites/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/network-prerequisites/index.html new file mode 100644 index 00000000000..41abaef39ad --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/network-prerequisites/index.html @@ -0,0 +1,196 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Network prerequisites

+
+
+
+

The following prerequisites apply to all migrations:

+
+
+
    +
  • +

    IP addresses, VLANs, and other network configuration settings must not be changed before or during migration. The MAC addresses of the virtual machines are preserved during migration.

    +
  • +
  • +

    The network connections between the source environment, the KubeVirt cluster, and the replication repository must be reliable and uninterrupted.

    +
  • +
  • +

If you are mapping more than one source and destination network, you must create a network attachment definition for each additional destination network, as shown in the sketch after this list.

    +
  • +
+
+
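A minimal sketch of a network attachment definition for an additional destination network. It assumes a Linux bridge named br1 is configured on the worker nodes and uses the cnv-bridge CNI type commonly used with KubeVirt; all names are placeholders:

cat << EOF | oc apply -f -
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: <network_attachment_definition>
  namespace: <namespace>
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "<network_attachment_definition>",
      "type": "cnv-bridge",
      "bridge": "br1"
    }
EOF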
+
+
+

Ports

+
+
+

The firewalls must enable traffic over the following ports:

+
+ + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Network ports required for migrating from VMware vSphere
Port | Protocol | Source | Destination | Purpose

443

TCP

OpenShift nodes

VMware vCenter

+

VMware provider inventory

+
+
+

Disk transfer authentication

+

443

TCP

OpenShift nodes

VMware ESXi hosts

+

Disk transfer authentication

+

902

TCP

OpenShift nodes

VMware ESXi hosts

+

Disk transfer data copy

+
+ + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 2. Network ports required for migrating from oVirt
Port | Protocol | Source | Destination | Purpose

443

TCP

OpenShift nodes

oVirt Engine

+

oVirt provider inventory

+
+
+

Disk transfer authentication

+

443

TCP

OpenShift nodes

oVirt hosts

+

Disk transfer authentication

+

54322

TCP

OpenShift nodes

oVirt hosts

+

Disk transfer data copy

+
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/new-features-and-enhancements-2-7/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/new-features-and-enhancements-2-7/index.html new file mode 100644 index 00000000000..972834ffb21 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/new-features-and-enhancements-2-7/index.html @@ -0,0 +1,85 @@ + + + + + + + + New features and enhancements | Forklift Documentation + + + + + + + + + + + + + +New features and enhancements | Forklift Documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+

New features and enhancements

+
+
+
+

Forklift 2.7 introduces the following features and enhancements:

+
+
+
+
+

New features and enhancements 2.7.0

+
+
+
    +
  • +

In Forklift 2.7.0, warm migration is now based on RHEL 9, inheriting its features and bug fixes.

    +
  • +
+
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/new-migrating-virtual-machines-cli/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/new-migrating-virtual-machines-cli/index.html new file mode 100644 index 00000000000..7d53d0228bb --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/new-migrating-virtual-machines-cli/index.html @@ -0,0 +1,155 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
Procedure
+
    +
  1. +

    Create a Secret manifest for the source provider credentials:
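The example below is a hedged sketch for a VMware vSphere source; the stringData keys shown (user, password, insecureSkipVerify, url) are the commonly used vSphere credential fields, and other provider types require different keys:

$ cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: <secret>
  namespace: <namespace>
  labels:
    createdForProviderType: vsphere
type: Opaque
stringData:
  user: <vcenter_user>
  password: <vcenter_password>
  insecureSkipVerify: "true"
  url: <vcenter_api_endpoint>
EOF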

    +
  2. +
+
+
+
    +
  1. +

    Create a Provider manifest for the source provider:
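The example below is a hedged sketch of a vSphere Provider that references the Secret created in the previous step; the spec fields shown (type, url, secret) follow the forklift.konveyor.io/v1beta1 API, and all bracketed values are placeholders:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <source_provider>
  namespace: <namespace>
spec:
  type: vsphere
  url: <vcenter_api_endpoint>
  secret:
    name: <secret>
    namespace: <namespace>
EOF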

    +
  2. +
  3. +

    Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

    +
    +
    +
    $  cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Hook
    +metadata:
    +  name: <hook>
    +  namespace: <namespace>
    +spec:
    +  image: quay.io/konveyor/hook-runner
    +  playbook: |
    +    LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv
    +    YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s
    +    IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh
    +    cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr
    +    bG9hZAoK
    +EOF
    +
    +
    +
    +

    where:

    +
    +
    +

    playbook refers to an optional Base64-encoded Ansible Playbook. If you specify a playbook, the image must be hook-runner.

    +
    +
    + + + + + +
    +
    Note
    +
    +
    +

    You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.

    +
    +
    +
    +
  4. +
  5. +

    Create a Migration manifest to run the Plan CR:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Migration
    +metadata:
    +  name: <name_of_migration_cr>
    +  namespace: <namespace>
    +spec:
    +  plan:
    +    name: <name_of_plan_cr>
    +    namespace: <namespace>
    +  cutover: <optional_cutover_time>
    +EOF
    +
    +
    +
    + + + + + +
    +
    Note
    +
    +
    +

    If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00.

    +
    +
    +
    +
  6. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/non-admin-permissions-for-ui/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/non-admin-permissions-for-ui/index.html new file mode 100644 index 00000000000..1f5a612da3e --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/non-admin-permissions-for-ui/index.html @@ -0,0 +1,192 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Permissions needed by non-administrators to work with migration plan components

+
+

If you are an administrator, you can work with all components of migration plans (for example, providers, network mappings, and migration plans).

+
+
+

By default, non-administrators have limited ability to work with migration plans and their components. As an administrator, you can modify their roles to allow them full access to all components, or you can give them limited permissions.

+
+
+

For example, administrators can assign non-administrators one or more of the following cluster roles for migration plans:

+
+ + ++++ + + + + + + + + + + + + + + + + + + + + +
Table 1. Example migration plan roles and their privileges
Role | Description

plans.forklift.konveyor.io-v1beta1-view

Can view migration plans but not create, delete, or modify them

plans.forklift.konveyor.io-v1beta1-edit

Can create, delete, or modify (all parts of edit permissions) individual migration plans

plans.forklift.konveyor.io-v1beta1-admin

All edit privileges and the ability to delete the entire collection of migration plans

+
+

Note that pre-defined cluster roles include a resource (for example, plans), an API group (for example, forklift.konveyor.io-v1beta1) and an action (for example, view, edit).

+
+
+

As a more comprehensive example, you can grant non-administrators the following set of permissions per namespace:

+
+
+
    +
  • +

    Create and modify storage maps, network maps, and migration plans for the namespaces they have access to

    +
  • +
  • +

    Attach providers created by administrators to storage maps, network maps, and migration plans

    +
  • +
  • +

    Not be able to create providers or to change system settings

    +
  • +
+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 2. Example permissions required for non-administrators to work with migration plan components but not create providers
Actions | API group | Resource

get, list, watch, create, update, patch, delete

forklift.konveyor.io

plans

get, list, watch, create, update, patch, delete

forklift.konveyor.io

migrations

get, list, watch, create, update, patch, delete

forklift.konveyor.io

hooks

get, list, watch

forklift.konveyor.io

providers

get, list, watch, create, update, patch, delete

forklift.konveyor.io

networkmaps

get, list, watch, create, update, patch, delete

forklift.konveyor.io

storagemaps

get, list, watch

forklift.konveyor.io

forkliftcontrollers

create, patch, delete

Empty string

secrets

+
+ + + + + +
+
Note
+
+
+

To create migration plans, non-administrators need the create permissions that are part of the edit roles for network maps and storage maps, even when using a template for a network map or a storage map.

+
+
+
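As a hedged sketch, the permissions in Table 2 could be expressed as a namespaced Role similar to the following; the role name is illustrative, and a RoleBinding is still needed to assign the role to a user or group:

cat << EOF | oc apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: migration-plan-editor
  namespace: <namespace>
rules:
# Full access to migration plan components
- apiGroups: ["forklift.konveyor.io"]
  resources: ["plans", "migrations", "hooks", "networkmaps", "storagemaps"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
# Read-only access to providers and controller settings
- apiGroups: ["forklift.konveyor.io"]
  resources: ["providers", "forkliftcontrollers"]
  verbs: ["get", "list", "watch"]
# Secrets live in the core ("") API group
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create", "patch", "delete"]
EOF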
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/obtaining-console-url/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/obtaining-console-url/index.html new file mode 100644 index 00000000000..a7484bc6176 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/obtaining-console-url/index.html @@ -0,0 +1,107 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Getting the Forklift web console URL

+
+

You can get the Forklift web console URL at any time by using either the OKD web console, or the command line.

+
+
+
Prerequisites
+
    +
  • +

    KubeVirt Operator installed.

    +
  • +
  • +

    Forklift Operator installed.

    +
  • +
  • +

    You must be logged in as a user with cluster-admin privileges.

    +
  • +
+
+
+
Procedure
+
    +
  • +

If you are using the OKD web console, you can find the URL by browsing to Networking → Routes in the project in which the Forklift Operator is installed.

    +
  • +
+
+
+


+
+
+
    +
  • +

    If you are using the command line, get the Forklift web console URL with the following command:

    +
  • +
+
+
+

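The original snippet is not available here; as a minimal sketch, assuming the console is exposed as a route in the openshift-mtv namespace, list the routes and read the host name:

$ oc get route -n openshift-mtv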

+
+
+

You can now launch a browser and navigate to the Forklift web console.

+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/openstack-prerequisites/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/openstack-prerequisites/index.html new file mode 100644 index 00000000000..1d763c12759 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/openstack-prerequisites/index.html @@ -0,0 +1,76 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

OpenStack prerequisites

+
+

The following prerequisites apply to OpenStack migrations:

+
+
+ +
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/ostack-app-cred-auth/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/ostack-app-cred-auth/index.html new file mode 100644 index 00000000000..057e57c61db --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/ostack-app-cred-auth/index.html @@ -0,0 +1,189 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Using application credential authentication with an OpenStack source provider

+
+

You can use application credential authentication, instead of username and password authentication, when you create an OpenStack source provider.

+
+
+

Forklift supports both of the following types of application credential authentication:

+
+
+
    +
  • +

    Application credential ID

    +
  • +
  • +

    Application credential name

    +
  • +
+
+
+

For each type of application credential authentication, you need to use data from OpenStack to create a Secret manifest.

+
+
+
Prerequisites
+

You have an OpenStack account.

+
+
+
Procedure
+
    +
  1. +

In the dashboard of the OpenStack web console, click Project > API Access.

    +
  2. +
  3. +

    Expand Download OpenStack RC file and click OpenStack RC file.

    +
    +

    The file that is downloaded, referred to here as <openstack_rc_file>, includes the following fields used for application credential authentication:

    +
    +
    +
    +
    OS_AUTH_URL
    +OS_PROJECT_ID
    +OS_PROJECT_NAME
    +OS_DOMAIN_NAME
    +OS_USERNAME
    +
    +
    +
  4. +
  5. +

    To get the data needed for application credential authentication, run the following command:

    +
    +
    +
    $ openstack application credential create --role member --role reader --secret redhat forklift
    +
    +
    +
    +

    The output, referred to here as <openstack_credential_output>, includes:

    +
    +
    +
      +
    • +

      The id and secret that you need for authentication using an application credential ID

      +
    • +
    • +

      The name and secret that you need for authentication using an application credential name

      +
    • +
    +
    +
  6. +
  7. +

    Create a Secret manifest similar to the following:

    +
    +
      +
    • +

      For authentication using the application credential ID:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-appid
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: applicationcredential
      +  applicationCredentialID: <id_from_openstack_credential_output>
      +  applicationCredentialSecret: <secret_from_openstack_credential_output>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    • +

      For authentication using the application credential name:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-appname
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: applicationcredential
      +  applicationCredentialName: <name_from_openstack_credential_output>
      +  applicationCredentialSecret: <secret_from_openstack_credential_output>
      +  domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
      +  username: <OS_USERNAME_from_openstack_rc_file>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    +
    +
  8. +
  9. +

    Continue migrating your virtual machine according to the procedure in Migrating virtual machines, starting with step 2, "Create a Provider manifest for the source provider."

    +
  10. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/ostack-token-auth/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/ostack-token-auth/index.html new file mode 100644 index 00000000000..2682f17e9c2 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/ostack-token-auth/index.html @@ -0,0 +1,180 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Using token authentication with an OpenStack source provider

+
+

You can use token authentication, instead of username and password authentication, when you create an OpenStack source provider.

+
+
+

Forklift supports both of the following types of token authentication:

+
+
+
    +
  • +

    Token with user ID

    +
  • +
  • +

    Token with user name

    +
  • +
+
+
+

For each type of token authentication, you need to use data from OpenStack to create a Secret manifest.

+
+
+
Prerequisites
+

Have an {osp} account.

+
+
+
Procedure
+
    +
  1. +

    In the dashboard of the {osp} web console, click Project > API Access.

    +
  2. +
  3. +

    Expand Download OpenStack RC file and click OpenStack RC file.

    +
    +

    The file that is downloaded, referred to here as <openstack_rc_file>, includes the following fields used for token authentication:

    +
    +
    +
    +
    OS_AUTH_URL
    +OS_PROJECT_ID
    +OS_PROJECT_NAME
    +OS_DOMAIN_NAME
    +OS_USERNAME
    +
    +
    +
  3. To get the data needed for token authentication, run the following command:

     $ openstack token issue

     The output, referred to here as <openstack_token_output>, includes the token, userID, and projectID that you need for authentication using a token with user ID.
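     For reference only, the command prints a table similar to the following sketch; all values are placeholders:

     +------------+----------------------------------+
     | Field      | Value                            |
     +------------+----------------------------------+
     | expires    | 2024-01-01T00:00:00+0000         |
     | id         | gAAAAABk...                      |
     | project_id | 3f4aa4ee8f4b4dd0a08c9556e5175576 |
     | user_id    | 1bb0e5795f294d6fa5f3f84f3b7b4f5e |
     +------------+----------------------------------+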
  4. Create a Secret manifest similar to the following:

     • For authentication using a token with user ID:

       cat << EOF | oc apply -f -
       apiVersion: v1
       kind: Secret
       metadata:
         name: openstack-secret-tokenid
         namespace: openshift-mtv
         labels:
           createdForProviderType: openstack
       type: Opaque
       stringData:
         authType: token
         token: <token_from_openstack_token_output>
         projectID: <projectID_from_openstack_token_output>
         userID: <userID_from_openstack_token_output>
         url: <OS_AUTH_URL_from_openstack_rc_file>
       EOF

     • For authentication using a token with user name:

       cat << EOF | oc apply -f -
       apiVersion: v1
       kind: Secret
       metadata:
         name: openstack-secret-tokenname
         namespace: openshift-mtv
         labels:
           createdForProviderType: openstack
       type: Opaque
       stringData:
         authType: token
         token: <token_from_openstack_token_output>
         domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
         projectName: <OS_PROJECT_NAME_from_openstack_rc_file>
         username: <OS_USERNAME_from_openstack_rc_file>
         url: <OS_AUTH_URL_from_openstack_rc_file>
       EOF

  5. Continue migrating your virtual machine according to the procedure in Migrating virtual machines, starting with step 2, "Create a Provider manifest for the source provider."
diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/ova-prerequisites/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/ova-prerequisites/index.html
new file mode 100644
index 00000000000..8a80106602f
--- /dev/null
+++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/ova-prerequisites/index.html
@@ -0,0 +1,130 @@
Open Virtual Appliance (OVA) prerequisites

The following prerequisites apply to Open Virtual Appliance (OVA) file migrations:

• All OVA files are created by VMware vSphere.

  Note

  Migration of OVA files that were not created by VMware vSphere but are compatible with vSphere might succeed. However, migration of such files is not supported by Forklift; Forklift supports only OVA files created by VMware vSphere.

• The OVA files are in one or more folders under an NFS shared directory in one of the following structures:

  • In one or more compressed Open Virtualization Format (OVF) packages that hold all the VM information.

    The filename of each compressed package must have the .ova extension. Several compressed packages can be stored in the same folder.

    When this structure is used, Forklift scans the root folder and the first-level subfolders for compressed packages.

    For example, if the NFS share is /nfs, then:
    The folder /nfs is scanned.
    The folder /nfs/subfolder1 is scanned.
    But /nfs/subfolder1/subfolder2 is not scanned.

  • In extracted OVF packages.

    When this structure is used, Forklift scans the root folder, first-level subfolders, and second-level subfolders for extracted OVF packages. However, there can be only one .ovf file in a folder; otherwise, the migration will fail.

    For example, if the NFS share is /nfs, then:
    The OVF file /nfs/vm.ovf is scanned.
    The OVF file /nfs/subfolder1/vm.ovf is scanned.
    The OVF file /nfs/subfolder1/subfolder2/vm.ovf is scanned.
    But the OVF file /nfs/subfolder1/subfolder2/subfolder3/vm.ovf is not scanned.
diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/retrieving-validation-service-json/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/retrieving-validation-service-json/index.html
new file mode 100644
index 00000000000..e4357f734b1
--- /dev/null
+++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/retrieving-validation-service-json/index.html
@@ -0,0 +1,483 @@
Retrieving the Inventory service JSON

You retrieve the Inventory service JSON by sending an Inventory service query to a virtual machine (VM). The output contains an "input" key, which contains the inventory attributes that are queried by the Validation service rules.

You can create a validation rule based on any attribute in the "input" key, for example, input.snapshot.kind.

Procedure

  1. Retrieve the routes for the project:

     $ oc get route -n openshift-mtv

  2. Retrieve the Inventory service route:

     $ kubectl get route <inventory_service> -n konveyor-forklift

  3. Retrieve the access token:

     $ TOKEN=$(oc whoami -t)

  4. Trigger an HTTP GET request (for example, using curl):

     $ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers -k

  5. Retrieve the UUID of a provider:

     $ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/<provider> -k (1)

     (1) Allowed values for the provider are vsphere, ovirt, and openstack.

  6. Retrieve the VMs of a provider:

     $ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/<provider>/<UUID>/vms -k

  7. Retrieve the details of a VM:

     $ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/<provider>/<UUID>/workloads/<vm> -k

     Example output
     {
         "input": {
             "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/workloads/vm-431",
             "id": "vm-431",
             "parent": {
                 "kind": "Folder",
                 "id": "group-v22"
             },
             "revision": 1,
             "name": "iscsi-target",
             "revisionValidated": 1,
             "isTemplate": false,
             "networks": [
                 {
                     "kind": "Network",
                     "id": "network-31"
                 },
                 {
                     "kind": "Network",
                     "id": "network-33"
                 }
             ],
             "disks": [
                 {
                     "key": 2000,
                     "file": "[iSCSI_Datastore] iscsi-target/iscsi-target-000001.vmdk",
                     "datastore": {
                         "kind": "Datastore",
                         "id": "datastore-63"
                     },
                     "capacity": 17179869184,
                     "shared": false,
                     "rdm": false
                 },
                 {
                     "key": 2001,
                     "file": "[iSCSI_Datastore] iscsi-target/iscsi-target_1-000001.vmdk",
                     "datastore": {
                         "kind": "Datastore",
                         "id": "datastore-63"
                     },
                     "capacity": 10737418240,
                     "shared": false,
                     "rdm": false
                 }
             ],
             "concerns": [],
             "policyVersion": 5,
             "uuid": "42256329-8c3a-2a82-54fd-01d845a8bf49",
             "firmware": "bios",
             "powerState": "poweredOn",
             "connectionState": "connected",
             "snapshot": {
                 "kind": "VirtualMachineSnapshot",
                 "id": "snapshot-3034"
             },
             "changeTrackingEnabled": false,
             "cpuAffinity": [
                 0,
                 2
             ],
             "cpuHotAddEnabled": true,
             "cpuHotRemoveEnabled": false,
             "memoryHotAddEnabled": false,
             "faultToleranceEnabled": false,
             "cpuCount": 2,
             "coresPerSocket": 1,
             "memoryMB": 2048,
             "guestName": "Red Hat Enterprise Linux 7 (64-bit)",
             "balloonedMemory": 0,
             "ipAddress": "10.19.2.96",
             "storageUsed": 30436770129,
             "numaNodeAffinity": [
                 "0",
                 "1"
             ],
             "devices": [
                 {
                     "kind": "RealUSBController"
                 }
             ],
             "host": {
                 "id": "host-29",
                 "parent": {
                     "kind": "Cluster",
                     "id": "domain-c26"
                 },
                 "revision": 1,
                 "name": "IP address or host name of the vCenter host or oVirt Engine host",
                 "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/hosts/host-29",
                 "status": "green",
                 "inMaintenance": false,
                 "managementServerIp": "10.19.2.96",
                 "thumbprint": <thumbprint>,
                 "timezone": "UTC",
                 "cpuSockets": 2,
                 "cpuCores": 16,
                 "productName": "VMware ESXi",
                 "productVersion": "6.5.0",
                 "networking": {
                     "pNICs": [
                         {
                             "key": "key-vim.host.PhysicalNic-vmnic0",
                             "linkSpeed": 10000
                         },
                         {
                             "key": "key-vim.host.PhysicalNic-vmnic1",
                             "linkSpeed": 10000
                         },
                         {
                             "key": "key-vim.host.PhysicalNic-vmnic2",
                             "linkSpeed": 10000
                         },
                         {
                             "key": "key-vim.host.PhysicalNic-vmnic3",
                             "linkSpeed": 10000
                         }
                     ],
                     "vNICs": [
                         {
                             "key": "key-vim.host.VirtualNic-vmk2",
                             "portGroup": "VM_Migration",
                             "dPortGroup": "",
                             "ipAddress": "192.168.79.13",
                             "subnetMask": "255.255.255.0",
                             "mtu": 9000
                         },
                         {
                             "key": "key-vim.host.VirtualNic-vmk0",
                             "portGroup": "Management Network",
                             "dPortGroup": "",
                             "ipAddress": "10.19.2.13",
                             "subnetMask": "255.255.255.128",
                             "mtu": 1500
                         },
                         {
                             "key": "key-vim.host.VirtualNic-vmk1",
                             "portGroup": "Storage Network",
                             "dPortGroup": "",
                             "ipAddress": "172.31.2.13",
                             "subnetMask": "255.255.0.0",
                             "mtu": 1500
                         },
                         {
                             "key": "key-vim.host.VirtualNic-vmk3",
                             "portGroup": "",
                             "dPortGroup": "dvportgroup-48",
                             "ipAddress": "192.168.61.13",
                             "subnetMask": "255.255.255.0",
                             "mtu": 1500
                         },
                         {
                             "key": "key-vim.host.VirtualNic-vmk4",
                             "portGroup": "VM_DHCP_Network",
                             "dPortGroup": "",
                             "ipAddress": "10.19.2.231",
                             "subnetMask": "255.255.255.128",
                             "mtu": 1500
                         }
                     ],
                     "portGroups": [
                         {
                             "key": "key-vim.host.PortGroup-VM Network",
                             "name": "VM Network",
                             "vSwitch": "key-vim.host.VirtualSwitch-vSwitch0"
                         },
                         {
                             "key": "key-vim.host.PortGroup-Management Network",
                             "name": "Management Network",
                             "vSwitch": "key-vim.host.VirtualSwitch-vSwitch0"
                         },
                         {
                             "key": "key-vim.host.PortGroup-VM_10G_Network",
                             "name": "VM_10G_Network",
                             "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
                         },
                         {
                             "key": "key-vim.host.PortGroup-VM_Storage",
                             "name": "VM_Storage",
                             "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
                         },
                         {
                             "key": "key-vim.host.PortGroup-VM_DHCP_Network",
                             "name": "VM_DHCP_Network",
                             "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
                         },
                         {
                             "key": "key-vim.host.PortGroup-Storage Network",
                             "name": "Storage Network",
                             "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
                         },
                         {
                             "key": "key-vim.host.PortGroup-VM_Isolated_67",
                             "name": "VM_Isolated_67",
                             "vSwitch": "key-vim.host.VirtualSwitch-vSwitch2"
                         },
                         {
                             "key": "key-vim.host.PortGroup-VM_Migration",
                             "name": "VM_Migration",
                             "vSwitch": "key-vim.host.VirtualSwitch-vSwitch2"
                         }
                     ],
                     "switches": [
                         {
                             "key": "key-vim.host.VirtualSwitch-vSwitch0",
                             "name": "vSwitch0",
                             "portGroups": [
                                 "key-vim.host.PortGroup-VM Network",
                                 "key-vim.host.PortGroup-Management Network"
                             ],
                             "pNICs": [
                                 "key-vim.host.PhysicalNic-vmnic4"
                             ]
                         },
                         {
                             "key": "key-vim.host.VirtualSwitch-vSwitch1",
                             "name": "vSwitch1",
                             "portGroups": [
                                 "key-vim.host.PortGroup-VM_10G_Network",
                                 "key-vim.host.PortGroup-VM_Storage",
                                 "key-vim.host.PortGroup-VM_DHCP_Network",
                                 "key-vim.host.PortGroup-Storage Network"
                             ],
                             "pNICs": [
                                 "key-vim.host.PhysicalNic-vmnic2",
                                 "key-vim.host.PhysicalNic-vmnic0"
                             ]
                         },
                         {
                             "key": "key-vim.host.VirtualSwitch-vSwitch2",
                             "name": "vSwitch2",
                             "portGroups": [
                                 "key-vim.host.PortGroup-VM_Isolated_67",
                                 "key-vim.host.PortGroup-VM_Migration"
                             ],
                             "pNICs": [
                                 "key-vim.host.PhysicalNic-vmnic3",
                                 "key-vim.host.PhysicalNic-vmnic1"
                             ]
                         }
                     ]
                 },
                 "networks": [
                     {
                         "kind": "Network",
                         "id": "network-31"
                     },
                     {
                         "kind": "Network",
                         "id": "network-34"
                     },
                     {
                         "kind": "Network",
                         "id": "network-57"
                     },
                     {
                         "kind": "Network",
                         "id": "network-33"
                     },
                     {
                         "kind": "Network",
                         "id": "dvportgroup-47"
                     }
                 ],
                 "datastores": [
                     {
                         "kind": "Datastore",
                         "id": "datastore-35"
                     },
                     {
                         "kind": "Datastore",
                         "id": "datastore-63"
                     }
                 ],
                 "vms": null,
                 "networkAdapters": [],
                 "cluster": {
                     "id": "domain-c26",
                     "parent": {
                         "kind": "Folder",
                         "id": "group-h23"
                     },
                     "revision": 1,
                     "name": "mycluster",
                     "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/clusters/domain-c26",
                     "folder": "group-h23",
                     "networks": [
                         {
                             "kind": "Network",
                             "id": "network-31"
                         },
                         {
                             "kind": "Network",
                             "id": "network-34"
                         },
                         {
                             "kind": "Network",
                             "id": "network-57"
                         },
                         {
                             "kind": "Network",
                             "id": "network-33"
                         },
                         {
                             "kind": "Network",
                             "id": "dvportgroup-47"
                         }
                     ],
                     "datastores": [
                         {
                             "kind": "Datastore",
                             "id": "datastore-35"
                         },
                         {
                             "kind": "Datastore",
                             "id": "datastore-63"
                         }
                     ],
                     "hosts": [
                         {
                             "kind": "Host",
                             "id": "host-44"
                         },
                         {
                             "kind": "Host",
                             "id": "host-29"
                         }
                     ],
                     "dasEnabled": false,
                     "dasVms": [],
                     "drsEnabled": true,
                     "drsBehavior": "fullyAutomated",
                     "drsVms": [],
                     "datacenter": null
                 }
             }
         }
     }
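     To pull a single attribute out of this response for use in a validation rule, you can filter with jq, assuming jq is installed; the path mirrors the "input" key shown above:

     $ curl -sk -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/<provider>/<UUID>/workloads/<vm> | jq '.input.snapshot.kind'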
diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/retrieving-vmware-moref/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/retrieving-vmware-moref/index.html
new file mode 100644
index 00000000000..84f320538da
--- /dev/null
+++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/retrieving-vmware-moref/index.html
@@ -0,0 +1,149 @@
Retrieving a VMware vSphere moRef

When you migrate VMs with a VMware vSphere source provider using Forklift from the CLI, you need to know the managed object reference (moRef) of certain entities in vSphere, such as datastores, networks, and VMs.

You can retrieve the moRef of one or more vSphere entities from the Inventory service. You can then use each moRef as a reference for retrieving the moRef of another entity.

Procedure

  1. Retrieve the routes for the project:

     $ oc get route -n openshift-mtv

  2. Retrieve the Inventory service route:

     $ kubectl get route <inventory_service> -n konveyor-forklift

  3. Retrieve the access token:

     $ TOKEN=$(oc whoami -t)

  4. Retrieve the moRef of a VMware vSphere provider:

     $ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/vsphere -k

  5. Retrieve the datastores of a VMware vSphere source provider:

     $ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/vsphere/<provider_id>/datastores/ -k

     Example output
     [
       {
         "id": "datastore-11",
         "parent": {
           "kind": "Folder",
           "id": "group-s5"
         },
         "path": "/Datacenter/datastore/v2v_general_porpuse_ISCSI_DC",
         "revision": 46,
         "name": "v2v_general_porpuse_ISCSI_DC",
         "selfLink": "providers/vsphere/01278af6-e1e4-4799-b01b-d5ccc8dd0201/datastores/datastore-11"
       },
       {
         "id": "datastore-730",
         "parent": {
           "kind": "Folder",
           "id": "group-s5"
         },
         "path": "/Datacenter/datastore/f01-h27-640-SSD_2",
         "revision": 46,
         "name": "f01-h27-640-SSD_2",
         "selfLink": "providers/vsphere/01278af6-e1e4-4799-b01b-d5ccc8dd0201/datastores/datastore-730"
       },
      ...

In this example, the moRef of the datastore v2v_general_porpuse_ISCSI_DC is datastore-11 and the moRef of the datastore f01-h27-640-SSD_2 is datastore-730.
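To list only the name and moRef of each datastore, you can filter the same response with jq, assuming jq is installed; the object-construction shorthand below is standard jq:

    $ curl -sk -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/vsphere/<provider_id>/datastores/ | jq '.[] | {name, id}'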

diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rhv-prerequisites/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rhv-prerequisites/index.html
new file mode 100644
index 00000000000..f116bc8fa86
--- /dev/null
+++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rhv-prerequisites/index.html
@@ -0,0 +1,129 @@
oVirt prerequisites

The following prerequisites apply to oVirt migrations:

• To create a source provider, you must have at least the UserRole and ReadOnlyAdmin roles assigned to you. These are the minimum required permissions; however, any other administrator or superuser permissions will also work.

  Important

  You must keep the UserRole and ReadOnlyAdmin roles until the virtual machines of the source provider have been migrated. Otherwise, the migration will fail.

• To migrate virtual machines:

  • You must have one of the following:

    • oVirt admin permissions. These permissions allow you to migrate any virtual machine in the system.
    • DiskCreator and UserVmManager permissions on every virtual machine you want to migrate.

  • You must use a compatible version of oVirt.

  • You must have the Engine CA certificate, unless it was replaced by a third-party certificate, in which case you specify the Engine Apache CA certificate.

    You can obtain the Engine CA certificate by navigating to https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA in a browser, or from the command line, as shown in the sketch after this list.

  • If you are migrating a virtual machine with a direct LUN disk, ensure that the nodes in the KubeVirt destination cluster that the VM is expected to run on can access the backend storage.
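A minimal command-line sketch for downloading the Engine CA certificate, using the same URL as above; the URL is quoted so that the shell does not interpret the ampersand, and -k is used because the certificate is not yet trusted:

    $ curl -k -o ca.pem 'https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'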


diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.0/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.0/index.html
new file mode 100644
index 00000000000..569b98c9f91
--- /dev/null
+++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.0/index.html
@@ -0,0 +1,163 @@
Forklift 2.0

You can migrate virtual machines (VMs) from VMware vSphere with Forklift.

The release notes describe new features and enhancements, known issues, and technical changes.

New features and enhancements

This release adds the following features and improvements.

Warm migration

Warm migration reduces downtime by copying most of the VM data during a precopy stage while the VMs are running. During the cutover stage, the VMs are stopped and the rest of the data is copied.

Cancel migration

You can cancel an entire migration plan or individual VMs while a migration is in progress. A canceled migration plan can be restarted in order to migrate the remaining VMs.

Migration network

You can select a migration network for the source and target providers for improved performance. By default, data is copied using the VMware administration network and the OKD pod network.

Validation service

The validation service checks source VMs for issues that might affect migration and flags the VMs with concerns in the migration plan.

Important

The validation service is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

Known issues

This section describes known issues and mitigations.

QEMU guest agent is not installed on migrated VMs

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

Network map displays a "Destination network not found" error

If the network map remains in a NotReady state and the NetworkMap manifest displays a Destination network not found error, the cause is a missing network attachment definition. You must create a network attachment definition for each additional destination network before you create the network map. (BZ#1971259)

Warm migration gets stuck during third precopy

Warm migration uses changed block tracking snapshots to copy data during the precopy stage. The snapshots are created at one-hour intervals by default. When a snapshot is created, its contents are copied to the destination cluster. However, when the third snapshot is created, the first snapshot is deleted and the block tracking is lost. (BZ#1969894)

You can do one of the following to mitigate this issue:

• Start the cutover stage no more than one hour after the precopy stage begins so that only one internal snapshot is created.

• Increase the snapshot interval in the vm-import-controller-config config map to 720 minutes:

  $ kubectl patch configmap/vm-import-controller-config \
      -n openshift-cnv \
      -p '{"data": {"warmImport.intervalMinutes": "720"}}'
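  To confirm the new interval, you can read the value back; a sketch, with the jsonpath expression escaping the dot in the data key name:

  $ kubectl get configmap/vm-import-controller-config -n openshift-cnv \
      -o jsonpath='{.data.warmImport\.intervalMinutes}'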
diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.1/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.1/index.html
new file mode 100644
index 00000000000..69fce95165c
--- /dev/null
+++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.1/index.html
@@ -0,0 +1,191 @@
Forklift 2.1

You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.

The release notes describe new features and enhancements, known issues, and technical changes.

Technical changes

VDDK image added to HyperConverged custom resource

The VMware Virtual Disk Development Kit (VDDK) image must be added to the HyperConverged custom resource. Before this release, it was referenced in the v2v-vmware config map.

New features and enhancements

This release adds the following features and improvements.

Cold migration from oVirt

You can perform a cold migration of VMs from oVirt.

Migration hooks

You can create migration hooks to run Ansible playbooks or custom code before or after migration.

Filtered must-gather data collection

You can specify options for the must-gather tool that enable you to filter the data by namespace, migration plan, or VMs, as in the sketch below.
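A hypothetical invocation is sketched below; the image path, the PLAN variable name, and the /usr/bin/targeted entry point are assumptions about the must-gather tooling, so check the troubleshooting documentation for the exact form:

    $ oc adm must-gather \
        --image=quay.io/konveyor/forklift-must-gather:latest \
        -- PLAN=<migration_plan> /usr/bin/targeted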
SR-IOV network support

You can migrate VMs with a single root I/O virtualization (SR-IOV) network interface if the KubeVirt environment has an SR-IOV network.

Known issues

QEMU guest agent is not installed on migrated VMs

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

Disk copy stage does not progress

The disk copy stage of an oVirt VM does not progress and the Forklift web console does not display an error message. (BZ#1990596)

The cause of this problem might be one of the following conditions:

• The storage class does not exist on the target cluster.
• The VDDK image has not been added to the HyperConverged custom resource.
• The VM does not have a disk.
• The VM disk is locked.
• The VM time zone is not set to UTC.
• The VM is configured for a USB device.

To disable USB devices, see Configuring USB Devices in the Red Hat Virtualization documentation.

To determine the cause:

  1. Click Workloads → Virtualization in the OKD web console.
  2. Click the Virtual Machines tab.
  3. Select a virtual machine to open the Virtual Machine Overview screen.
  4. Click Status to view the status of the virtual machine.

VM time zone must be UTC with no offset

The time zone of the source VMs must be UTC with no offset. You can set the time zone to GMT Standard Time after first assessing the potential impact on the workload. (BZ#1993259)

oVirt resource UUID causes a "Provider not found" error

If an oVirt resource UUID is used in a Host, NetworkMap, StorageMap, or Plan custom resource (CR), a "Provider not found" error is displayed.

You must use the resource name. (BZ#1994037)

Same oVirt resource name in different data centers causes ambiguous reference

If an oVirt resource name is used in a NetworkMap, StorageMap, or Plan custom resource (CR) and the same resource name exists in another data center, the Plan CR displays a critical "Ambiguous reference" condition. You must rename the resource or use the resource UUID in the CR.

In the web console, the resource name appears twice in the same list without a data center reference to distinguish them. You must rename the resource. (BZ#1993089)

Snapshots are not deleted after warm migration

Snapshots are not deleted automatically after a successful warm migration of a VMware VM. You must delete the snapshots manually in VMware vSphere. (BZ#2001270)
diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.2/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.2/index.html
new file mode 100644
index 00000000000..21f7dba5da7
--- /dev/null
+++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.2/index.html
@@ -0,0 +1,219 @@
Forklift 2.2

You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.

The release notes describe technical changes, new features and enhancements, and known issues.

Technical changes

This release has the following technical changes:

Setting the precopy time interval for warm migration

You can set the time interval between snapshots taken during the precopy stage of warm migration, as in the sketch below.
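One way this might be done is by patching the ForkliftController custom resource; the controller_precopy_interval field name and the resource name here are assumptions, so verify them against the warm-migration documentation:

    $ oc patch forkliftcontroller/forklift-controller -n openshift-mtv \
        --type=merge -p '{"spec": {"controller_precopy_interval": 60}}'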

New features and enhancements

This release has the following features and improvements:

Creating validation rules

You can create custom validation rules to check the suitability of VMs for migration. Validation rules are based on the VM attributes collected by the Provider Inventory service and written in Rego, the Open Policy Agent native query language.

Downloading logs by using the web console

You can download logs for a migration plan or a migrated VM by using the Forklift web console.

Duplicating a migration plan by using the web console

You can duplicate a migration plan by using the web console, including its VMs, mappings, and hooks, in order to edit the copy and run it as a new migration plan.

Archiving a migration plan by using the web console

You can archive a migration plan by using the Forklift web console. Archived plans can be viewed or duplicated. They cannot be run, edited, or unarchived.

Known issues

This release has the following known issues:

Certain Validation service issues do not block migration

Certain Validation service issues, which are marked as Critical and display the assessment text, The VM will not be migrated, do not block migration. (BZ#2025977)

The following Validation service assessments do not block migration:
Table 1. Issues that do not block migration

Assessment: The disk interface type is not supported by OpenShift Virtualization (only sata, virtio_scsi and virtio interface types are currently supported).
Result: The migrated VM will have a virtio disk if the source interface is not recognized.

Assessment: The NIC interface type is not supported by OpenShift Virtualization (only e1000, rtl8139 and virtio interface types are currently supported).
Result: The migrated VM will have a virtio NIC if the source interface is not recognized.

Assessment: The VM is using a vNIC profile configured for host device passthrough, which is not currently supported by OpenShift Virtualization.
Result: The migrated VM will have an SR-IOV NIC. The destination network must be set up correctly.

Assessment: One or more of the VM’s disks has an illegal or locked status condition.
Result: The migration will proceed but the disk transfer is likely to fail.

Assessment: The VM has a disk with a storage type other than image, and this is not currently supported by OpenShift Virtualization.
Result: The migration will proceed but the disk transfer is likely to fail.

Assessment: The VM has one or more snapshots with disks in ILLEGAL state. This is not currently supported by OpenShift Virtualization.
Result: The migration will proceed but the disk transfer is likely to fail.

Assessment: The VM has USB support enabled, but USB devices are not currently supported by OpenShift Virtualization.
Result: The migrated VM will not have USB devices.

Assessment: The VM is configured with a watchdog device, which is not currently supported by OpenShift Virtualization.
Result: The migrated VM will not have a watchdog device.

Assessment: The VM’s status is not up or down.
Result: The migration will proceed but it might hang if the VM cannot be powered off.
QEMU guest agent is not installed on migrated VMs

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

Missing resource causes error message in current.log file

If a resource does not exist, for example, if the virt-launcher pod does not exist because the migrated VM is powered off, its log is unavailable.

The following error appears in the missing resource’s current.log file when it is downloaded from the web console or created with the must-gather tool: error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'. (BZ#2023260)

Importer pod log is unavailable after warm migration

Retaining the importer pod for debug purposes causes warm migration to hang during the precopy stage. (BZ#2016290)

As a temporary workaround, the importer pod is removed at the end of the precopy stage so that the precopy succeeds. However, this means that the importer pod log is not retained after warm migration is complete. You can only view the importer pod log by using the oc logs -f <cdi-importer_pod> command during the precopy stage.

This issue only affects the importer pod log and warm migration. Cold migration and the virt-v2v logs are not affected.

Deleting migration plan does not remove temporary resources

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. (BZ#2018974) You must archive a migration plan before deleting it in order to clean up the temporary resources.

Unclear error status message for VM with no operating system

The error status message for a VM with no operating system on the Migration plan details page of the web console does not describe the reason for the failure. (BZ#2008846)

Network, storage, and VM referenced by name in the Plan CR are not displayed in the web console

If a Plan CR references storage, network, or VMs by name instead of by ID, the resources do not appear in the Forklift web console. The migration plan cannot be edited or duplicated. (BZ#1986020)

Log archive file includes logs of a deleted migration plan or VM

If you delete a migration plan and then run a new migration plan with the same name, or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

If a target VM is deleted during migration, its migration status is Succeeded in the Plan CR

If you delete a target VirtualMachine CR during the 'Convert image to kubevirt' step of the migration, the Migration details page of the web console displays the state of the step as VirtualMachine CR not found. However, the status of the VM migration is Succeeded in the Plan CR file and in the web console. (BZ#2031529)
diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.3/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.3/index.html
new file mode 100644
index 00000000000..dd92f4c652c
--- /dev/null
+++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.3/index.html
@@ -0,0 +1,156 @@
Forklift 2.3

You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.

The release notes describe technical changes, new features and enhancements, and known issues.

Technical changes

This release has the following technical changes:

Setting the VddkInitImage path is part of the procedure of adding a VMware provider

In the web console, you enter the VddkInitImage path when adding a VMware provider. Alternatively, from the CLI, you add the VddkInitImage path to the Provider CR for VMware migrations.

The StorageProfile resource needs to be updated for a non-provisioner storage class

You must update the StorageProfile resource with accessModes and volumeMode for non-provisioner storage classes such as NFS. The documentation includes a link to the relevant procedure; a sketch follows below.
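As an illustrative sketch only, based on the claimPropertySets structure of the CDI StorageProfile resource (verify the field names against the linked procedure), such an update might look like:

    $ oc patch storageprofile <storage_class_name> --type=merge \
        -p '{"spec": {"claimPropertySets": [{"accessModes": ["ReadWriteOnce"], "volumeMode": "Filesystem"}]}}'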

New features and enhancements

This release has the following features and improvements:

Forklift 2.3 supports warm migration from oVirt

You can use warm migration to migrate VMs from both VMware and oVirt.

The minimal sufficient set of privileges for VMware users is established

VMware users do not have to have full cluster-admin privileges to perform a VM migration. The minimal sufficient set of user privileges is established and documented.

Forklift documentation is updated with instructions on using hooks

Forklift documentation includes instructions on adding hooks to migration plans and running hooks on VMs.

Known issues

This release has the following known issues:

Some warm migrations from oVirt might fail

When you run a migration plan for warm migration of multiple VMs from oVirt, the migrations of some VMs might fail during the cutover stage. In that case, restart the migration plan and set the cutover time for the VM migrations that failed in the first run. (BZ#2063531)

Snapshots are not deleted after warm migration

The Migration Controller service does not delete snapshots automatically after a successful warm migration of an oVirt VM. You can delete the snapshots manually. (BZ#2053183)

Warm migration from oVirt fails if a snapshot operation is performed on the source VM

If the user performs a snapshot operation on the source VM at the time when a migration snapshot is scheduled, the migration fails instead of waiting for the user’s snapshot operation to finish. (BZ#2057459)

QEMU guest agent is not installed on migrated VMs

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

Deleting migration plan does not remove temporary resources

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. (BZ#2018974) You must archive a migration plan before deleting it in order to clean up the temporary resources.

Unclear error status message for VM with no operating system

The error status message for a VM with no operating system on the Migration plan details page of the web console does not describe the reason for the failure. (BZ#2008846)

Log archive file includes logs of a deleted migration plan or VM

If you delete a migration plan and then run a new migration plan with the same name, or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

Migration of virtual machines with encrypted partitions fails during conversion

The problem occurs for both vSphere and oVirt migrations.

Forklift 2.3.4 only: When the source provider is oVirt, duplicating a migration plan fails in either the network mapping stage or the storage mapping stage

Possible workaround: Delete the cache in the browser or restart the browser. (BZ#2143191)
diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.4/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.4/index.html
new file mode 100644
index 00000000000..798ac9a723c
--- /dev/null
+++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.4/index.html
@@ -0,0 +1,260 @@
Forklift 2.4

You can migrate virtual machines (VMs) from VMware vSphere, oVirt, or {osp} to KubeVirt with Forklift.

The release notes describe technical changes, new features and enhancements, and known issues.

Technical changes

This release has the following technical changes:

Faster disk image migration from oVirt

Disk images are no longer converted using virt-v2v when migrating from oVirt. This change speeds up migrations and also allows migration for guest operating systems that are not supported by virt-v2v. (forklift-controller#403)

Faster disk transfers by ovirt-imageio client (ovirt-img)

Disk transfers use the ovirt-imageio client (ovirt-img) instead of Containerized Data Importer (CDI) when migrating from oVirt to the local OpenShift Container Platform cluster, accelerating the migration.

Faster migration using conversion pod disk transfer

When migrating from vSphere to the local OpenShift Container Platform cluster, the conversion pod transfers the disk data instead of Containerized Data Importer (CDI), accelerating the migration.

Migrated virtual machines are not scheduled on the target OCP cluster

The migrated virtual machines are no longer scheduled on the target OpenShift Container Platform cluster. This enables migrating VMs that cannot start due to limit constraints on the target at migration time.

StorageProfile resource needs to be updated for a non-provisioner storage class

You must update the StorageProfile resource with accessModes and volumeMode for non-provisioner storage classes such as NFS.

VDDK 8 can be used in the VDDK image

Previous versions of Forklift supported only VDDK version 7 for the VDDK image. Forklift supports both versions 7 and 8, as follows:

• If you are migrating to OCP 4.12 or earlier, use VDDK version 7.
• If you are migrating to OCP 4.13 or later, use VDDK version 8.

New features and enhancements

This release has the following features and improvements:

OpenStack migration

Forklift now supports migrations with {osp} as a source provider. This feature is provided as a Technology Preview and supports only cold migrations.

OCP console plugin

The Forklift Operator now integrates the Forklift web console into the OKD web console. The new UI operates as an OCP Console plugin that adds the sub-menu Migration to the navigation bar. It is implemented in version 2.4, disabling the old UI. You can enable the old UI by setting feature_ui: true in ForkliftController. (MTV-427)

Skip certificate validation option

A 'Skip certificate validation' option was added to VMware and oVirt providers. If selected, the provider’s certificate will not be validated and the UI will not ask you to specify a CA certificate.

Only third-party certificate required

Only the third-party certificate needs to be specified when defining an oVirt provider whose Manager CA certificate was replaced by a third-party certificate.

Conversion of VMs with RHEL9 guest operating system

Cold migrations from vSphere to a local Red Hat OpenShift cluster use virt-v2v on RHEL 9. (MTV-332)

Known issues

This release has the following known issues:

Deleting migration plan does not remove temporary resources

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)

Unclear error status message for VM with no operating system

The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#2008846)

Log archive file includes logs of a deleted migration plan or VM

If you delete a migration plan and then run a new migration plan with the same name, or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

Migration of virtual machines with encrypted partitions fails during conversion

This issue affects vSphere only: migrations from oVirt and OpenStack do not fail, but the encryption key might be missing on the target OCP cluster.

Snapshots that are created during the migration in OpenStack are not deleted

The Migration Controller service does not automatically delete snapshots that are created during the migration for source virtual machines in OpenStack. Workaround: Remove the snapshots manually in OpenStack.

oVirt snapshots are not deleted after a successful migration

The Migration Controller service does not delete snapshots automatically after a successful warm migration of an oVirt VM. Workaround: Remove the snapshots manually in oVirt. (MTV-349)

Migration fails during precopy/cutover while a snapshot operation is executed on the source VM

Some warm migrations from oVirt might fail. When running a migration plan for warm migration of multiple VMs from oVirt, the migrations of some VMs might fail during the cutover stage. In that case, restart the migration plan and set the cutover time for the VM migrations that failed in the first run.

Warm migration from oVirt fails if a snapshot operation is performed on the source VM. If the user performs a snapshot operation on the source VM at the time when a migration snapshot is scheduled, the migration fails instead of waiting for the user’s snapshot operation to finish. (MTV-456)

Cannot schedule migrated VM with multiple disks to more than one storage class of type hostPath

When migrating a VM with multiple disks to more than one storage class of type hostPath, the VM might not be schedulable. Workaround: Use shared storage on the target OCP cluster.

Deleting migrated VM does not remove PVC and PV

When removing a VM that was migrated, its persistent volume claims (PVCs) and persistent volumes (PVs) are not deleted. Workaround: Remove the CDI importer pods and then remove the remaining PVCs and PVs. (MTV-492)

PVC deletion hangs after archiving and deleting migration plan

When a migration fails, its PVCs and PVs are not deleted as expected when its migration plan is archived and deleted. Workaround: Remove the CDI importer pods and then remove the remaining PVCs and PVs, as in the sketch below. (MTV-493)
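A minimal cleanup sketch for these two issues; the namespace, pod, and resource names are placeholders to be read from your own cluster:

    $ oc get pods -n <namespace> | grep importer    # identify leftover CDI importer pods
    $ oc delete pod <importer_pod> -n <namespace>
    $ oc delete pvc <pvc_name> -n <namespace>
    $ oc delete pv <pv_name>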
VM with multiple disks may boot from non-bootable disk after migration

A VM with multiple disks that was migrated might not be able to boot on the target OCP cluster. Workaround: Set the boot order appropriately to boot from the bootable disk. (MTV-433)

Non-supported guest operating systems in warm migrations

Warm migrations and migrations to remote OCP clusters from vSphere do not support all types of guest operating systems that are supported in cold migrations to the local OCP cluster. This is a consequence of using RHEL 8 in the former case and RHEL 9 in the latter case.

See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 for the list of supported guest operating systems.

VMs from vSphere with RHEL 9 guest operating system may start with network interfaces that are down

When migrating VMs that are installed with RHEL 9 as the guest operating system from vSphere, their network interfaces might be disabled when they start in OpenShift Virtualization. (MTV-491)

Upgrade from 2.4.0 fails

When upgrading from MTV 2.4.0 to a later version, the operation fails with an error that says the field 'spec.selector' of deployment forklift-controller is immutable. Workaround: Remove the custom resource forklift-controller of type ForkliftController from the installed namespace, and recreate it, as in the sketch below. You need to refresh the OCP console once the forklift-console-plugin pod runs to load the upgraded Forklift web console. (MTV-518)
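A sketch of the workaround, assuming the default resource name forklift-controller and the openshift-mtv namespace, and assuming you still have the manifest that the custom resource was originally created from:

    $ oc delete forkliftcontroller/forklift-controller -n openshift-mtv
    $ oc create -f <forkliftcontroller_manifest>.yaml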

Resolved issues

This release has the following resolved issues:

Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack)

A flaw was found in handling multiplexed streams in the HTTP/2 protocol. In previous releases of MTV, the HTTP/2 protocol allowed a denial of service (server resource consumption) because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection, which resulted in a denial of service due to server resource consumption.

This issue has been resolved in MTV 2.4.3 and 2.5.2. It is advised to update to one of these versions of MTV or later.

Improved invalid/conflicting VM name handling

The automatic renaming of VMs during migration to fit RFC 1123 has been improved. This feature, introduced in 2.3.4, is enhanced to cover more special cases. (MTV-212)

Prevent locking user accounts due to incorrect credentials

If a user specifies an incorrect password for an oVirt provider, the user account is no longer locked in oVirt. If the oVirt Manager is accessible, an error is returned when the provider is added. If the oVirt Manager is inaccessible, the provider is added, but there is no further connection attempt after the failure due to incorrect credentials. (MTV-324)

Users without cluster-admin role can create new providers

Previously, the cluster-admin role was required to browse and create providers. In this release, users with sufficient permissions on MTV resources (providers, plans, migrations, NetworkMaps, StorageMaps, hooks) can operate MTV without cluster-admin permissions. (MTV-334)

Convert i440fx to q35

Migration of virtual machines with the i440fx chipset is now supported. The chipset is converted to q35 during the migration. (MTV-430)

Preserve the UUID setting in SMBIOS for a VM that is migrated from oVirt

The Universal Unique ID (UUID) number within the System Management BIOS (SMBIOS) no longer changes for VMs that are migrated from oVirt. This enhancement enables applications that operate within the guest operating system and rely on this setting, such as for licensing purposes, to operate on the target OCP cluster in a manner similar to that of oVirt. (MTV-597)

Do not expose password for oVirt in error messages

Previously, the password that was specified for oVirt Manager appeared in error messages that were displayed in the web console and logs when failing to connect to oVirt. In this release, error messages that are generated when failing to connect to oVirt do not reveal the password for oVirt Manager.

QEMU guest agent is now installed on migrated VMs

The QEMU guest agent is installed on VMs during cold migration from vSphere. (BZ#2018062)
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.5/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.5/index.html new file mode 100644 index 00000000000..12c6a393b48 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.5/index.html @@ -0,0 +1,464 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.5

+
+
+
+

You can use Forklift to migrate virtual machines from the following source providers to KubeVirt destination providers:

+
+
+
    +
  • +

    VMware vSphere

    +
  • +
  • +

    oVirt

    +
  • +
  • +

    {osp}

    +
  • +
  • +

    Open Virtual Appliances (OVAs) that were created by VMware vSphere

    +
  • +
  • +

    Remote KubeVirt clusters

    +
  • +
+
+
+

The release notes describe technical changes, new features and enhancements, and known issues for Forklift.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Migration from OpenStack moves to being a fully supported feature
+

In this version of Forklift, migration using OpenStack source providers graduated from a Technology Preview feature to a fully supported feature.

+
+
+
Disabling FIPS
+

Forklift enables migrations from vSphere source providers by not enforcing the Extended Master Secret (EMS) TLS extension. This enables migrating from all vSphere versions that Forklift supports, including migrations that do not meet 2023 FIPS requirements.

+
+
+
Integration of the create and update provider user interface
+

The user interface of the create and update providers now aligns with the look and feel of the OKD web console and displays up-to-date data.

+
+
+
Standalone UI
+

The old UI of Forklift 2.3 can no longer be enabled by setting feature_ui: true in the ForkliftController CR.

+
+
+
Support deployment on {ocp-name} 4.15
+

Forklift 2.5.6 can be deployed on {ocp-name} 4.15 clusters.

+
+
+
+
+

New features and enhancements

+
+
+

This release has the following features and improvements:

+
+
+
Migration of OVA files from VMware vSphere
+

In Forklift 2.5, you can migrate using Open Virtual Appliance (OVA) files that were created by VMware vSphere as source providers. (MTV-336)

+
+
+ + + + + +
+
Note
+
+
+

Migration of OVA files that were not created by VMware vSphere but are compatible with vSphere might succeed. However, migration of such files is not supported by Forklift. Forklift supports only OVA files created by VMware vSphere.

+
+
+
+
+


+
+
+
Migrating VMs between OKD clusters
+

In Forklift 2.5, you can now use a KubeVirt provider as both a source provider and a destination provider. You can migrate VMs from the cluster that Forklift is deployed on to another cluster, or from a remote cluster to the cluster that Forklift is deployed on. (MTV-571)

+
+
+
Migration of VMs with direct LUNs from RHV
+

During the migration from oVirt, direct logical units (LUNs) are detached from the source virtual machines and attached to the target virtual machines. Note that this mechanism does not yet work for Fibre Channel. (MTV-329)

+
+
+
Additional authentication methods for OpenStack
+

In addition to standard password authentication, Forklift supports the following authentication methods: token authentication and application credential authentication. (MTV-539)
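A minimal sketch of a provider secret that uses application credential authentication; the field names follow the pattern used for OpenStack provider secrets, and all names and values are placeholders:

    apiVersion: v1
    kind: Secret
    metadata:
      name: openstack-provider-secret
      namespace: openshift-mtv
    type: Opaque
    stringData:
      authType: applicationcredential
      applicationCredentialID: <application-credential-id>
      applicationCredentialSecret: <application-credential-secret>
      projectName: <project>
      domainName: <domain>
      url: https://<identity-service>:5000/v3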

+
+
+
Validation rules for OpenStack
+

The validation service includes default validation rules for virtual machines from OpenStack. (MTV-508)

+
+
+
VDDK is now optional for VMware vSphere providers
+

You can now create a VMware vSphere source provider without specifying a VMware Virtual Disk Development Kit (VDDK) init image. However, it is strongly recommended that you create a VDDK init image to accelerate migrations.
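For reference, a sketch of how a VDDK init image is referenced in the settings section of a vSphere Provider CR; the image path is a placeholder:

    spec:
      type: vsphere
      settings:
        vddkInitImage: <registry>/vddk:<tag>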

+
+
+
Deployment on OKE enabled
+

In Forklift 2.5.3, deployment on {ocp-name} Kubernetes Engine (OKE) has been enabled. For more information, see About {ocp-name} Kubernetes Engine. (MTV-803)

+
+
+
Migration of VMs to destination storage classes with encrypted RBD now supported
+

In Forklift 2.5.4, migration of VMs to destination storage classes that have encrypted RADOS Block Devices (RBD) volumes is now supported.

+
+
+

To make use of this new feature, set the value of the parameter controller_block_overhead to 1Gi, following the procedure in Configuring the MTV Operator. (MTV-851)
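A minimal sketch of that setting, assuming the operator's default CR name and the openshift-mtv namespace:

    apiVersion: forklift.konveyor.io/v1beta1
    kind: ForkliftController
    metadata:
      name: forklift-controller
      namespace: openshift-mtv
    spec:
      controller_block_overhead: 1Gi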

+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+
Deleting migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)
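A sketch of archiving and then deleting a plan from the CLI, assuming the openshift-mtv namespace and that the Plan CR exposes the archived field that the web console's Archive action sets; the plan name is a placeholder:

    # Archive the plan so its temporary resources are cleaned up, then delete it
    oc patch plan <plan-name> -n openshift-mtv --type merge -p '{"spec":{"archived":true}}'
    oc delete plan <plan-name> -n openshift-mtv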

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#22008846)

+
+
+
Migration of virtual machines with encrypted partitions fails during conversion
+

This issue affects migrations from vSphere only: migrations from oVirt and OpenStack do not fail, but the encryption key might be missing on the target OKD cluster.

+
+
+
Migration fails during precopy/cutover while performing a snapshot operation on the source VM
+

Warm migration from oVirt fails if a snapshot operation is triggered and running on the source VM at the same time as the migration is scheduled. The migration does not wait for the snapshot operation to finish. (MTV-456)

+
+
+
Unable to schedule migrated VM with multiple disks to more than one storage class of type hostPath
+

When migrating a VM with multiple disks to more than one storage class of type hostPath, the VM might not be schedulable. Workaround: Use shared storage on the target OKD cluster.

+
+
+
Non-supported guest operating systems in warm migrations
+

Warm migrations and migrations to remote OKD clusters from vSphere do not support all types of guest operating systems that are supported in cold migrations to the local OKD cluster. This is a consequence of using RHEL 8 in the former case and RHEL 9 in the latter case.
+See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 for the list of supported guest operating systems.

+
+
+
VMs from vSphere with RHEL 9 guest operating system can start with network interfaces that are down
+

When migrating VMs that are installed with RHEL 9 as guest operating system from vSphere, the network interfaces of the VMs could be disabled when they start in {ocp-name} Virtualization. (MTV-491)

+
+
+
Import OVA: ConnectionTestFailed message appears when adding OVA provider
+

When adding an OVA provider, the error message ConnectionTestFailed can appear, although the provider is created successfully. If the message does not disappear after a few minutes and the provider status does not move to Ready, the OVA server pod creation has failed. (MTV-671)

+
+
+
Left over ovirtvolumepopulator from failed migration causes plan to stay indefinitely in CopyDisks phase
+

An outdated ovirtvolumepopulator in the namespace, left over from an earlier failed migration, stops a new plan of the same VM when it transitions to CopyDisks phase. The plan remains in that phase indefinitely. (MTV-929)

+
+
+
Unclear error message when Forklift fails to build a PVC
+

The migration fails to build the Persistent Volume Claim (PVC) if the destination storage class does not have a configured storage profile. The forklift-controller raises an error message without a clear reason for failing to create a PVC. (MTV-928)

+
+
+

For a complete list of all known issues in this release, see the list of Known Issues in Jira.

+
+
+
+
+

Resolved issues

+
+
+

This release has the following resolved issues:

+
+
+
Flaw was found in jsrsasign package which is vulnerable to Observable Discrepancy
+

Versions of the package jsrsasign before 11.0.0, used in earlier releases of Forklift, are vulnerable to Observable Discrepancy in the RSA PKCS1.5 or RSA-OAEP decryption process. This discrepancy means an attacker could decrypt ciphertexts by exploiting this vulnerability. However, exploiting this vulnerability requires the attacker to have access to a large number of ciphertexts encrypted with the same key. This issue has been resolved in Forklift 2.5.5 by upgrading the package jsrsasign to version 11.0.0.

+
+
+

For more information, see CVE-2024-21484.

+
+
+
Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack)
+

A flaw was found in handling multiplexed streams in the HTTP/2 protocol. In previous releases of Forklift, the HTTP/2 protocol allowed a denial of service (server resource consumption) because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection, which resulted in a denial of service due to server resource consumption.

+
+
+

This issue has been resolved in Forklift 2.5.2. It is advised to update to this version of Forklift or later.

+
+ +
+
Gin Web Framework does not properly sanitize filename parameter of Context.FileAttachment function
+

A flaw was found in the Gin-Gonic Gin Web Framework, used by Forklift. The filename parameter of the Context.FileAttachment function was not properly sanitized. This flaw in the package could allow a remote attacker to bypass security restrictions caused by improper input validation by the filename parameter of the Context.FileAttachment function. A maliciously created filename could cause the Content-Disposition header to be sent with an unexpected filename value, or otherwise modify the Content-Disposition header.

+
+
+

This issue has been resolved in Forklift 2.5.2. It is advised to update to this version of Forklift or later.

+
+ +
+
CVE-2023-26144: mtv-console-plugin-container: graphql: Insufficient checks in the OverlappingFieldsCanBeMergedRule.ts
+

A flaw was found in the package GraphQL from 16.3.0 and before 16.8.1. This flaw means Forklift versions before Forklift 2.5.2 are vulnerable to Denial of Service (DoS) due to insufficient checks in the OverlappingFieldsCanBeMergedRule.ts file when parsing large queries. This issue may allow an attacker to degrade system performance. (MTV-712)

+
+
+

This issue has been resolved in Forklift 2.5.2. It is advised to update to this version of Forklift or later.

+
+
+

For more information, see CVE-2023-26144.

+
+
+
CVE-2023-45142: Memory leak found in the otelhttp handler of open-telemetry
+

A flaw was found in otelhttp handler of OpenTelemetry-Go. This flaw means Forklift versions before Forklift 2.5.3 are vulnerable to a memory leak caused by http.user_agent and http.method having unbound cardinality, which could allow a remote, unauthenticated attacker to exhaust the server’s memory by sending many malicious requests, affecting the availability. (MTV-795)

+
+
+

This issue has been resolved in Forklift 2.5.3. It is advised to update to this version of Forklift or later.

+
+
+

For more information, see CVE-2023-45142.

+
+
+
CVE-2023-39322: QUIC connections do not set an upper bound on the amount of data buffered when reading post-handshake messages
+

A flaw was found in Golang. This flaw means Forklift versions before Forklift 2.5.3 are vulnerable to QUIC connections not setting an upper bound on the amount of data buffered when reading post-handshake messages, allowing a malicious QUIC connection to cause unbounded memory growth. With the fix, connections now consistently reject messages larger than 65KiB in size. (MTV-708)

+
+
+

This issue has been resolved in Forklift 2.5.3. It is advised to update to this version of Forklift or later.

+
+
+

For more information, see CVE-2023-39322.

+
+
+
CVE-2023-39321: Processing an incomplete post-handshake message for a QUIC connection can cause a panic
+

A flaw was found in Golang. This flaw means Forklift versions before Forklift 2.5.3 are vulnerable to processing an incomplete post-handshake message for a QUIC connection, which causes a panic. (MTV-693)

+
+
+

This issue has been resolved in Forklift 2.5.3. It is advised to update to this version of Forklift or later.

+
+
+

For more information, see CVE-2023-39321.

+
+
+
CVE-2023-39319: Flaw in html/template package
+

A flaw was found in the Golang html/template package used in Forklift. This flaw means Forklift versions before Forklift 2.5.3 are vulnerable, as the html/template package did not properly handle occurrences of <script, <!--, and </script within JavaScript literals in <script> contexts. This flaw could cause the template parser to improperly consider script contexts to be terminated early, causing actions to be improperly escaped, which could be leveraged to perform an XSS attack. (MTV-693)

+
+
+

This issue has been resolved in Forklift 2.5.3. It is advised to update to this version of Forklift or later.

+
+
+

For more information, see CVE-2023-39319.

+
+
+
CVE-2023-39318: Flaw in html/template package
+

A flaw was found in the Golang html/template package used in Forklift. This flaw means Forklift versions before Forklift 2.5.3 are vulnerable, as the html/template package did not properly handle HTML-like "<!--" comment tokens, nor hashbang "#!" comment tokens. This flaw could cause the template parser to improperly interpret the contents of <script> contexts, causing actions to be improperly escaped, which could be leveraged to perform an XSS attack. (MTV-693)

+
+
+

This issue has been resolved in Forklift 2.5.3. It is advised to update to this version of Forklift or later.

+
+
+

For more information, see CVE-2023-39318.

+
+
+
Logs archive file downloaded from UI includes logs related to deleted migration plan/VM
+

In earlier releases of Forklift, the log files downloaded from the UI could contain logs that are related to an earlier migration plan. (MTV-783)

+
+
+

This issue has been resolved in Forklift 2.5.3.

+
+
+
Extending a VM disk in RHV is not reflected in the MTV inventory
+

In earlier releases of Forklift, the size of disks that were extended in RHV was not adequately monitored. This resulted in the inability to migrate virtual machines with extended disks from a RHV provider. (MTV-830)

+
+
+

This issue has been resolved in Forklift 2.5.3.

+
+
+
Filesystem overhead configurable
+

In earlier releases of Forklift, the filesystem overhead for new persistent volumes was hard-coded to 10%. The overhead was insufficient for certain filesystem types, resulting in failures during cold migrations from oVirt and OSP to the cluster where Forklift is deployed. In other filesystem types, the hard-coded overhead was too high, resulting in excessive storage consumption.

+
+
+

In Forklift 2.5.3, the filesystem overhead can be configured, as it is no longer hard-coded. If your migration allocates persistent volumes without CDI, you can adjust the filesystem overhead by adding the following parameter and value to the spec section of the forklift-controller CR:

+
+
+
+
spec:
  controller_filesystem_overhead: <percentage> (1)
+
+
+
+
    +
  1. +

    The percentage of overhead. If this parameter is not added, the default value of 10% is used. This setting is valid only if the storage class is of type Filesystem. (MTV-699)

    +
  2. +
+
+
+
Ensure up-to-date data is displayed in the create and update provider forms
+

In earlier releases of Forklift, the create and update provider forms could have presented stale data.

+
+
+

This issue is resolved in Forklift 2.5: the new create and update provider forms display up-to-date properties of the provider. (MTV-603)

+
+
+
Snapshots that are created during a migration in OpenStack are not deleted
+

In earlier releases of Forklift, the Migration Controller service did not delete snapshots that were created during a migration of source virtual machines in OpenStack automatically.

+
+
+

This issue is resolved in Forklift 2.5: all the snapshots created during the migration are removed after the migration is completed. (MTV-620)

+
+
+
oVirt snapshots are not deleted after a successful migration
+

In earlier releases of Forklift, the Migration Controller service did not delete snapshots automatically after a successful warm migration of a VM from oVirt.

+
+
+

This issue is resolved in Forklift 2.5: the snapshots generated during the migration are removed after a successful migration, while the original snapshots are not removed. (MTV-349)

+
+
+
Warm migration fails when cutover conflicts with precopy
+

In earlier releases of Forklift, the cutover operation failed when it was triggered while precopy was being performed. The VM was locked in oVirt and therefore the ovirt-engine rejected the snapshot creation, or disk transfer, operation.

+
+
+

This issue is resolved in Forklift 2.5: the cutover operation is triggered, but it is not performed at that time because the VM is locked. Once the precopy operation completes, the cutover operation is performed. (MTV-686)

+
+
+
Warm migration fails when VM is locked
+

In earlier releases of Forklift, triggering a warm migration while there was an ongoing operation in oVirt that locked the VM caused the migration to fail because it could not trigger the snapshot creation.

+
+
+

This issue is resolved in Forklift 2.5: warm migration no longer fails when an operation that locks the VM is performed in oVirt. The migration starts when the VM is unlocked. (MTV-687)

+
+
+
Deleting migrated VM does not remove PVC and PV
+

In earlier releases of Forklift, when removing a VM that was migrated, its persistent volume claims (PVCs) and persistent volumes (PVs) were not deleted.

+
+
+

This issue is resolved in Forklift 2.5: PVCs and PVs are deleted when a migrated VM is deleted. (MTV-492)

+
+
+
PVC deletion hangs after archiving and deleting migration plan
+

In earlier releases of Forklift, when a migration failed, its PVCs and PVs were not deleted as expected when its migration plan was archived and deleted.

+
+
+

This issue is resolved in Forklift 2.5: PVCs are deleted when the migration plan is archived and deleted. (MTV-493)

+
+
+
VM with multiple disks can boot from a non-bootable disk after migration
+

In earlier releases of Forklift, a VM with multiple disks that was migrated might not have been able to boot on the target OKD cluster.

+
+
+

This issue is resolved in Forklift 2.5: a VM with multiple disks that is migrated can boot on the target OKD cluster. (MTV-433)

+
+
+
Transfer network not taken into account for cold migrations from vSphere
+

In Forklift releases 2.4.0-2.5.3, cold migrations from vSphere to the local cluster on which Forklift was deployed did not take a specified transfer network into account. This issue is resolved in Forklift 2.5.4. (MTV-846)

+
+
+
Fix migration of VMs with multi-boot guest operating system from vSphere
+

In Forklift 2.5.6, the virt-v2v arguments include --root first, which mitigates an issue with multi-boot VMs where the pod fails. This fixes a regression that was introduced in Forklift 2.4, in which the --root argument was dropped. (MTV-987)

+
+
+
Errors logged in populator pods are improved
+

In earlier releases of Forklift, populator pods were always restarted on failure. This made it difficult to gather the logs from the failed pods. In Forklift 2.5.3, the number of restarts of populator pods is limited to three. On the third and final restart, the populator pod remains in the failed status, and its logs can then be gathered by must-gather and by forklift-controller to determine that this step has failed. (MTV-818)

+
+
+
npm IP package vulnerability
+

A vulnerability found in the Node.js Package Manager (npm) IP Package can allow an attacker to obtain sensitive information and gain access to normally inaccessible resources. (MTV-941)

+
+
+

This issue has been resolved in Forklift 2.5.6.

+
+
+

For more information, see CVE-2023-42282.

+
+
+
Flaw was found in the Golang net/http/internal package
+

A flaw was found in the versions of the Golang net/http/internal package that were used in earlier releases of Forklift. This flaw could allow a malicious user to send an HTTP request and cause the receiver to read more bytes from the network than are in the body (up to 1GiB), causing the receiver to fail reading the response, possibly leading to a Denial of Service (DoS). This issue has been resolved in Forklift 2.5.6.

+
+
+

For more information, see CVE-2023-39326.

+
+
+

For a complete list of all resolved issues in this release, see the list of Resolved Issues in Jira.

+
+
+
+
+

Upgrade notes

+
+
+

It is recommended to upgrade from Forklift 2.4.2 to Forklift 2.5.

+
+
+
Upgrade from 2.4.0 fails
+

When upgrading from MTV 2.4.0 to a later version, the operation fails with an error that says the field 'spec.selector' of deployment forklift-controller is immutable. Workaround: Remove the custom resource forklift-controller of type ForkliftController from the installed namespace, and recreate it. Refresh the OKD console once the forklift-console-plugin pod runs to load the upgraded Forklift web console. (MTV-518)

+
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.6/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.6/index.html new file mode 100644 index 00000000000..d2cf202a0f7 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.6/index.html @@ -0,0 +1,511 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.6

+
+
+
+

You can use Forklift to migrate virtual machines from the following source providers to KubeVirt destination providers:

+
+
+
    +
  • +

    VMware vSphere

    +
  • +
  • +

    oVirt

    +
  • +
  • +

    {osp}

    +
  • +
  • +

    Open Virtual Appliances (OVAs) that were created by VMware vSphere

    +
  • +
  • +

    Remote KubeVirt clusters

    +
  • +
+
+
+

The release notes describe technical changes, new features and enhancements, known issues, and resolved issues.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Simplified the creation of vSphere providers
+

In earlier releases of Forklift, users had to specify a fingerprint when creating a vSphere provider. This required users to retrieve the fingerprint from the server that vCenter runs on. Forklift no longer requires this fingerprint as an input, but rather computes it from the specified certificate in the case of a secure connection or automatically retrieves it from the server that runs vCenter/ESXi in the case of an insecure connection.

+
+
+
Redesigned the migration plan creation dialog
+

The user interface console has improved the process of creating a migration plan. The new migration plan dialog enables faster creation of migration plans.

+
+
+

It includes only the minimal settings that are required, while you can configure advanced settings separately. The new dialog also provides defaults for network and storage mappings, where applicable. The new dialog can also be invoked from the Provider > Virtual Machines tab, after selecting the virtual machines to migrate. It also aligns better with the user experience in the OCP console.

+
+
+
Virtual machine preferences have replaced {ocp-name} templates
+

The virtual machine preferences have replaced {ocp-name} templates. Forklift currently falls back to using {ocp-name} templates when a relevant preference is not available.

+
+
+

Custom mappings of guest operating system type to virtual machine preference can be configured by using config maps, either to use custom virtual machine preferences or to support more guest operating system types.

+
+
+
Full support for migration from OVA
+

Migration from OVA has graduated from a Technology Preview and is now a fully supported feature.

+
+
+
The VM is posted with its desired Running state
+

Forklift creates the VM with its desired Running state on the target provider, instead of creating the VM and then running it as an additional operation. (MTV-794)

+
+
+
The must-gather logs can now be loaded only by using the CLI
+

The Forklift web console can no longer download logs. With this update, you must download must-gather logs by using CLI commands. For more information, see Must Gather Operator.

+
+
+
Forklift no longer runs pvc-init pods when migrating from vSphere
+

Forklift no longer runs pvc-init pods during cold migration from a vSphere provider to the {ocp-name} cluster that Forklift is deployed on. However, in other flows where data volumes are used, they are set with the cdi.kubevirt.io/storage.bind.immediate.requested annotation, and CDI runs first-consume pods for storage classes with volume binding mode WaitForFirstConsumer.
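For illustration, a minimal DataVolume carrying that annotation; the blank source and the requested size are placeholders:

    apiVersion: cdi.kubevirt.io/v1beta1
    kind: DataVolume
    metadata:
      name: example-dv
      annotations:
        # Ask CDI to bind the PVC immediately, even on WaitForFirstConsumer storage
        cdi.kubevirt.io/storage.bind.immediate.requested: "true"
    spec:
      source:
        blank: {}
      storage:
        resources:
          requests:
            storage: 10Gi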

+
+
+
+
+

New features and enhancements

+
+
+

This section provides features and enhancements introduced in Forklift 2.6.

+
+
+

New features and enhancements 2.6.3

+
+
Support for migrating LUKS-encrypted devices in migrations from vSphere
+

You can now perform cold migrations from a vSphere provider of VMs whose virtual disks are encrypted by Linux Unified Key Setup (LUKS). (MTV-831)

+
+
+
Specifying the primary disk when migrating from vSphere
+

You can now specify the primary disk when you migrate VMs from vSphere with more than one bootable disk. This avoids Forklift automatically attempting to convert the first bootable disk that it detects while it examines all the disks of a virtual machine. This feature is needed because the first bootable disk is not necessarily the disk that the VM is expected to boot from in KubeVirt. (MTV-1079)
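A sketch of how the primary disk can be specified per VM in a Plan CR, assuming a rootDisk field on the plan's VM entries; the identifier and device name are placeholders:

    spec:
      vms:
        - id: <source-vm-id>
          # The disk that the migrated VM is expected to boot from
          rootDisk: /dev/sda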

+
+
+
Links to remote provider UIs
+

You can now remotely access the UI of a remote cluster when you create a source provider. For example, if the provider is a remote oVirt cluster, Forklift adds a link to the remote oVirt web console when you define the provider. This feature makes it easier for you to manage and debug a migration from remote clusters. (MTV-1054)

+
+
+
+

New features and enhancements 2.6.0

+
+
Migration from vSphere over a secure connection
+

You can now specify a CA certificate that can be used to authenticate the server that runs vCenter or ESXi, depending on the specified SDK endpoint of the vSphere provider. (MTV-530)

+
+
+
Migration to or from a remote {ocp-name} over a secure connection
+

You can now specify a CA certificate that can be used to authenticate the API server of a remote {ocp-name} cluster. (MTV-728)

+
+
+
Migration from an ESXi server without going through vCenter
+

Forklift enables the configuration of vSphere providers with the SDK of ESXi. You need to select ESXi as the Endpoint type of the vSphere provider and specify the URL of the SDK of the ESXi server. (MTV-514)
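A sketch of a vSphere provider that points directly at an ESXi host, assuming the sdkEndpoint setting; all names are placeholders:

    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: esxi-provider
      namespace: openshift-mtv
    spec:
      type: vsphere
      # The SDK endpoint of the ESXi server, not of vCenter
      url: https://<esxi-host>/sdk
      settings:
        sdkEndpoint: esxi
      secret:
        name: esxi-provider-secret
        namespace: openshift-mtv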

+
+
+
Migration of image-based VMs from {osp}
+

Forklift supports the migration of VMs that were created from images in {osp}. (MTV-644)

+
+
+
Migration of VMs with Fibre Channel LUNs from oVirt
+

Forklift supports migrations of VMs that are set with Fibre Channel (FC) LUNs from oVirt. As with other LUN disks, you need to ensure the {ocp-name} nodes have access to the FC LUNs. During the migrations, the FC LUNs are detached from the source VMs in oVirt and attached to the migrated VMs in {ocp-name}. (MTV-659)

+
+
+
Preserve CPU types of VMs that are migrated from oVirt
+

Forklift sets the CPU type of migrated VMs in {ocp-name} with their custom CPU type in oVirt. In addition, a new option was added to migration plans that are set with oVirt as a source provider to preserve the original CPU types of source VMs. When this option is selected, Forklift identifies the CPU type based on the cluster configuration and sets this CPU type for the migrated VMs, for which the source VMs are not set with a custom CPU. (MTV-547)

+
+
+
Validation for RHEL 6 guest operating system is now available when migrating VMs with RHEL 6 guest operating system
+

Red Hat Enterprise Linux (RHEL) 9 does not support RHEL 6 as a guest operating system. Therefore, RHEL 6 is not supported in {ocp-name} Virtualization. With this update, a validation for the RHEL 6 guest operating system was added for migrations to {ocp-name} Virtualization. (MTV-413)

+
+
+
Automatic retrieval of CA certificates for the provider’s URL in the console
+

The ability to retrieve CA certificates, which was available in previous versions, has been restored. The vSphere Verify certificate option is in the add-provider dialog. This option was removed in the transition to the OKD console and has now been added to the console. This functionality is also available for oVirt, {osp}, and {ocp-name} providers now. (MTV-737)

+
+
+
Validation of a specified VDDK image
+

Forklift validates the availability of a VDDK image that is specified for a vSphere provider on the target {ocp-name} cluster as part of the validation of a migration plan. Forklift also checks whether the libvixDiskLib.so symbolic link (symlink) exists within the image. If the validation fails, the migration plan cannot be started. (MTV-618)

+
+
+
Add a warning and partial support for TPM
+

Forklift presents a warning when attempting to migrate a VM that is set with a TPM device from oVirt or vSphere. The migrated VM in {ocp-name} is set with a TPM device, but without the content of the TPM device in the source environment. (MTV-378)

+
+
+
Plans that failed to migrate VMs can now be edited
+

With this update, you can edit plans that have failed to migrate any VMs. Some plans fail or are canceled because of incorrect network and storage mappings. You can now edit these plans until they succeed. (MTV-779)

+
+
+
Validation rules are now available for OVA
+

The validation service includes default validation rules for virtual machines from the Open Virtual Appliance (OVA). (MTV-669)

+
+
+
+
+
+

Resolved issues

+
+
+

This release has the following resolved issues:

+
+
+

Resolved issues 2.6.7

+
+
Incorrect handling of quotes in ifcfg files
+

In earlier releases of Forklift, single and double quotes in interface configuration (ifcfg) files, which control the software interfaces for individual network devices, were handled incorrectly. This issue has been resolved in Forklift 2.6.7, in order to cover additional IP configurations on Red Hat Enterprise Linux, CentOS, Rocky Linux, and similar distributions. (MTV-1439)

+
+
+
Failure to preserve netplan based network configuration
+

In earlier releases of Forklift, netplan-based network configurations were not preserved. This issue has been resolved in Forklift 2.6.7: if netplan (netplan.io) is used, static IP configurations are preserved by using the netplan configuration files to generate udev rules for known MAC address and interface name tuples. (MTV-1440)

+
+
+
Error messages are written into udev .rules files
+

In earlier releases of Forklift, error messages accidentally leaked into udev .rules files. This issue has been resolved in Forklift 2.6.7, with a static IP persistence script added to the udev rules file. (MTV-1441)

+
+
+
+

Resolved issues 2.6.6

+
+
Runtime error: invalid memory address or nil pointer dereference
+

In earlier releases of Forklift, a runtime error of invalid memory address or nil pointer dereference could occur, caused by an attempt to access the value of a pointer that was nil. This issue has been resolved in Forklift 2.6.6. (MTV-1353)

+
+
+
All Plan and Migration pods scheduled to same node causing the node to crash
+

In earlier releases of Forklift, the scheduler could place all migration pods on a single node. When this happened, the node ran out of resources. This issue has been resolved in Forklift 2.6.6. (MTV-1354)

+
+
+
Empty bearer token is sufficient for authentication
+

In earlier releases of Forklift, a vulnerability was found in the Forklift Controller: there was no verification of the authorization header beyond ensuring that it used bearer authentication. Without an authorization header and a bearer token, a 401 error occurred, but the presence of any token value yielded a 200 response with the requested information. This issue has been resolved in Forklift 2.6.6.

+
+
+

For more details, see (CVE-2024-8509).

+
+
+
+

Resolved issues 2.6.5

+
+
VMware Linux interface name changes during migration
+

In earlier releases of Forklift, during the migration of Rocky Linux 8, CentOS 7.2 and later, and Ubuntu 22 virtual machines (VMs) from VMware to OKD (OCP), the names of the network interfaces were modified, and the static IP configuration for the VM was no longer functional. This issue has been resolved for static IPs in Rocky Linux 8, CentOS 7.2 and later, and Ubuntu 22 in Forklift 2.6.5. (MTV-595)

+
+
+
+

Resolved issues 2.6.4

+
+
Disks and drives are offline after migrating Windows virtual machines from RHV or VMware to OCP
+

Windows Server 2022 VMs configured with multiple disks, which were Online before the migration, were Offline after a successful migration from oVirt or VMware to OKD using Forklift. Only the C:\ primary disk was Online. This issue has been resolved for basic disks in Forklift 2.6.4. (MTV-1299)

+
+
+

For details of the known issue of dynamic disks being Offline in Windows Server 2022 after cold and warm migrations from vSphere to container-native virtualization (CNV) with Ceph RADOS Block Devices (RBD), using the storage class ocs-storagecluster-ceph-rbd, see (MTV-1344).

+
+
+
Preserve IP option for Windows does not preserve all settings
+

In earlier releases of Forklift, when migrating a Windows Server 2022 VM with a static IP address assigned and the Preserve static IPs option selected, the VM started after a successful migration and the IP address was preserved, but the subnet mask, gateway, and DNS servers were not. This resulted in an incomplete migration, and the user had to log in locally from the console to fully configure the network. This issue has been resolved in Forklift 2.6.4. (MTV-1286)

+
+
+
qemu-guest-agent not being installed at first boot in Windows Server 2022
+

In earlier releases of Forklift, after a successful Windows Server 2022 guest migration, the qemu-guest-agent was not completely installed: the Windows scheduled task was created, but it was set to run 4 hours in the future instead of the intended 2 minutes. This issue has been resolved in Forklift 2.6.4. (MTV-1325)

+
+
+
+

Resolved issues 2.6.3

+
+
CVE-2024-24788: golang: net malformed DNS message can cause infinite loop
+

A flaw was discovered in the stdlib package of the Go programming language, which impacts previous versions of Forklift. This vulnerability primarily threatens web-facing applications and services that rely on Go for DNS queries. This issue has been resolved in Forklift 2.6.3.

+
+
+

For more details, see (CVE-2024-24788).

+
+
+
Migration scheduling does not take into account that virt-v2v copies disks sequentially (vSphere only)
+

In earlier releases of Forklift, there was a problem with the way Forklift interpreted the controller_max_vm_inflight setting for vSphere to schedule migrations. This issue has been resolved in Forklift 2.6.3. (MTV-1191)
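For reference, a sketch of where this setting is configured, assuming the documented default of 20 and the operator's default CR name and namespace:

    apiVersion: forklift.konveyor.io/v1beta1
    kind: ForkliftController
    metadata:
      name: forklift-controller
      namespace: openshift-mtv
    spec:
      # Maximum number of VMs or disks migrated concurrently
      controller_max_vm_inflight: 20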

+
+
+
Cold migrations fail after changing the ESXi network (vSphere only)
+

In earlier versions of Forklift, cold migrations from a vSphere provider with an ESXi SDK endpoint failed if any network was used except for the default network for disk transfers. This issue has been resolved in Forklift 2.6.3. (MTV-1180)

+
+
+
Warm migrations over an ESXi network are stuck in DiskTransfer state (vSphere only)
+

In earlier versions of Forklift, warm migrations over an ESXi network from a vSphere provider with a vCenter SDK endpoint were stuck in DiskTransfer state because Forklift was unable to locate image snapshots. This issue has been resolved in Forklift 2.6.3. (MTV-1161)

+
+
+
Leftover PVCs are in Lost state after cold migrations
+

In earlier versions of Forklift, after cold migrations, there were leftover PVCs that had a status of Lost instead of being deleted, even after the migration plan that created them was archived and deleted. Investigation showed that this was because importer pods were retained after copying, by default, rather than in only specific cases. This issue has been resolved in Forklift 2.6.3. (MTV-1095)

+
+
+
Guest operating system from vSphere might be missing (vSphere only)
+

In earlier versions of Forklift, some VMs that were imported from vSphere were not mapped to a template in OKD while other VMs, with the same guest operating system, were mapped to the corresponding template. Investigations indicated that this was because vSphere stopped reporting the operating system after not receiving updates from VMware tools for some time. This issue has been resolved in Forklift 2.6.3 by taking the value of the operating system from the output of the investigation that virt-v2v performs on the disks. (MTV-1046)

+
+
+
+

Resolved issues 2.6.2

+
+
CVE-2023-45288: Golang net/http, x/net/http2: unlimited number of CONTINUATION frames can cause a denial-of-service (DoS) attack
+

A flaw was discovered with the implementation of the HTTP/2 protocol in the Go programming language, which impacts previous versions of Forklift. There were insufficient limitations on the number of CONTINUATION frames sent within a single stream. An attacker could potentially exploit this to cause a denial-of-service (DoS) attack. This flaw has been resolved in Forklift 2.6.2.

+
+
+

For more details, see (CVE-2023-45288).

+
+
+
CVE-2024-24785: mtv-api-container: Golang html/template: errors returned from MarshalJSON methods may break template escaping
+

A flaw was found in the html/template Golang standard library package, which impacts previous versions of Forklift. If errors returned from MarshalJSON methods contain user-controlled data, they may be used to break the contextual auto-escaping behavior of the HTML/template package, allowing subsequent actions to inject unexpected content into the templates. This flaw has been resolved in Forklift 2.6.2.

+
+
+

For more details, see (CVE-2024-24785).

+
+
+
CVE-2024-24784: mtv-validation-container: Golang net/mail: comments in display names are incorrectly handled
+

A flaw was found in the net/mail Golang standard library package, which impacts previous versions of Forklift. The ParseAddressList function incorrectly handles comments, text in parentheses, and display names. As this is a misalignment with conforming address parsers, it can result in different trust decisions being made by programs using different parsers. This flaw has been resolved in Forklift 2.6.2.

+
+
+

For more details, see (CVE-2024-24784).

+
+
+
CVE-2024-24783: mtv-api-container: Golang crypto/x509: Verify panics on certificates with an unknown public key algorithm
+

A flaw was found in the crypto/x509 Golang standard library package, which impacts previous versions of Forklift. Verifying a certificate chain that contains a certificate with an unknown public key algorithm causes Certificate.Verify to panic. This affects all crypto/tls clients and servers that set Config.ClientAuth to VerifyClientCertIfGiven or RequireAndVerifyClientCert. The default behavior is for TLS servers to not verify client certificates. This flaw has been resolved in Forklift 2.6.2.

+
+
+

For more details, see (CVE-2024-24783).

+
+
+
CVE-2023-45290: mtv-api-container: Golang net/http memory exhaustion in Request.ParseMultipartForm
+

A flaw was found in the net/http Golang standard library package, which impacts previous versions of Forklift. When parsing a multipart form, either explicitly with Request.ParseMultipartForm or implicitly with Request.FormValue, Request.PostFormValue, or Request.FormFile, limits on the total size of the parsed form are not applied to the memory consumed while reading a single form line. This permits a maliciously crafted input containing long lines to cause the allocation of arbitrarily large amounts of memory, potentially leading to memory exhaustion. This flaw has been resolved in Forklift 2.6.2.

+
+
+

For more details, see (CVE-2023-45290).

+
+
+
ImageConversion does not run when target storage is set with WaitForFirstConsumer (WFFC)
+

In earlier releases of Forklift, migration of VMs failed because the migration was stuck in the AllocateDisks phase. As a result of being stuck, the migration did not progress, and PVCs were not bound. The root cause of the issue was that ImageConversion did not run when target storage was set for wait-for-first-consumer. The problem was resolved in Forklift 2.6.2. (MTV-1126)

+
+
+
forklift-controller panics when importing VMs with direct LUNs
+

In earlier releases of Forklift, forklift-controller panicked when a user attempted to import VMs that had direct LUNs. The problem was resolved in Forklift 2.6.2. (MTV-1134)

+
+
+
+

Resolved issues 2.6.1

+
+
VMs with multiple disks that are migrated from vSphere and OVA files are not being fully copied
+

In Forklift 2.6.0, there was a problem in copying VMs with multiple disks from VMware vSphere and from OVA files. The migrations appeared to succeed, but all the disks were transferred to the same PV in the target environment, while the other PVs remained empty. In some cases, bootable disks were overridden, so the VM could not boot. In other cases, data from the other disks was missing. The problem was resolved in Forklift 2.6.1. (MTV-1067)

+
+
+
Migrating VMs from one OKD cluster to another fails due to a timeout
+

In Forklift 2.6.0, migrations from one OKD cluster to another failed when the time to transfer the disks of a VM exceeded the time to live (TTL) of the Export API in {ocp-name}, which was set to 2 hours by default. The problem was resolved in Forklift 2.6.1 by setting the default TTL of the Export API to 12 hours, which greatly reduces the possibility of an expiration of the Export API. Additionally, you can increase or decrease the TTL setting as needed. (MTV-1052)
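The TTL surfaces on KubeVirt's VirtualMachineExport resource; a minimal sketch, assuming the export.kubevirt.io/v1beta1 API, with all names as placeholders:

    apiVersion: export.kubevirt.io/v1beta1
    kind: VirtualMachineExport
    metadata:
      name: example-export
      namespace: example-ns
    spec:
      # How long the export endpoint stays alive before expiring
      ttlDuration: 12h
      source:
        apiGroup: kubevirt.io
        kind: VirtualMachine
        name: example-vm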

+
+
+
Forklift forklift-controller pod crashes when receiving a disk without a datastore
+

In earlier releases of Forklift, if a VM was configured with a disk that was on a datastore that was no longer available in vSphere at the time a migration was attempted, the forklift-controller crashed, rendering Forklift unusable. In Forklift 2.6.1, Forklift presents a critical validation for VMs with such disks, informing users of the problem, and the forklift-controller no longer crashes, although it cannot transfer the disk. (MTV-1029)

+
+
+
+

Resolved issues 2.6.0

+
+
Deleting an OVA provider automatically also deletes the PV
+

In earlier releases of Forklift, the PV was not removed when the OVA provider was deleted. This has been resolved in Forklift 2.6.0, and the PV is automatically deleted when the OVA provider is deleted. (MTV-848)

+
+
+
Fix for data being lost when migrating VMware VMs with snapshots
+

In earlier releases of Forklift, when migrating a VM that has a snapshot from VMware, the VM that was created in {ocp-name} Virtualization contained the data in the snapshot but not the latest data of the VM. This has been resolved in Forklift 2.6.0. (MTV-447)

+
+
+
Canceling and deleting a failed migration plan does not clean up the populate pods and PVC
+

In earlier releases of Forklift, when you canceled and deleted a failed migration plan, and after creating a PVC and spawning the populate pods, the populate pods and PVC were not deleted. You had to delete the pods and PVC manually. This issue has been resolved in Forklift 2.6.0. (MTV-678)

+
+
+
OKD to OKD migrations require the cluster version to be 4.13 or later
+

In earlier releases of Forklift, when migrating from OKD to OKD, the version of the source provider cluster had to be OKD version 4.13 or later. This issue has been resolved in Forklift 2.6.0, with validation being shown when migrating from versions of {ocp-name} before 4.13. (MTV-734)

+
+
+
Multiple storage domains from RHV were always mapped to a single storage class
+

In earlier releases of Forklift, multiple disks from different storage domains were always mapped to a single storage class, regardless of the storage mapping that was configured. This issue has been resolved in Forklift 2.6.0. (MTV-1008)

+
+
+
Firmware detection by virt-v2v
+

In earlier releases of Forklift, a VM that was migrated from an OVA that did not include the firmware type in its OVF configuration was set with UEFI. This was incorrect for VMs that were configured with BIOS. This issue has been resolved in Forklift 2.6.0, as Forklift now consumes the firmware that is detected by virt-v2v during the conversion of the disks. (MTV-759)

+
+
+
Creating a host secret requires validation of the secret before creation of the host
+

In earlier releases of Forklift, when configuring a transfer network for vSphere hosts, the console plugin created the Host CR before creating its secret. The secret should be specified first in order to validate it before the Host CR is posted. This issue has been resolved in Forklift 2.6.0. (MTV-868)

+
+
+
When adding OVA provider a ConnectionTestFailed message appears
+

In earlier releases of Forklift, when adding an OVA provider, the error message ConnectionTestFailed instantly appeared, although the provider had been created successfully. This issue has been resolved in Forklift 2.6.0. (MTV-671)

+
+
+
RHV provider ConnectionTestSucceeded True response from the wrong URL
+

In earlier releases of Forklift, the ConnectionTestSucceeded condition was set to True even when the URL was different than the API endpoint for the RHV Manager. This issue has been resolved in Forklift 2.6.0. (MTV-740)

+
+
+
Migration does not fail when a vSphere Data Center is nested inside a folder
+

In earlier releases of Forklift, migrating a VM that is placed in a Data Center that is stored directly under the /vcenter in vSphere succeeded. However, it failed when the Data Center was stored inside a folder. This issue was resolved in Forklift 2.6.0. (MTV-796)

+
+
+
The OVA inventory watcher detects deleted files
+

The OVA inventory watcher detects file changes, including deleted files. Updates from the ova-provider-server pod are now sent every five minutes to the forklift-controller pod, which updates the inventory. (MTV-733)

+
+
+
Unclear error message when Forklift fails to build or create a PVC
+

In earlier releases of Forklift, the error logs lacked clear information to identify the reason for a failure to create a PV on a destination storage class that does not have a configured storage profile. This issue was resolved in Forklift 2.6.0. (MTV-928)

+
+
+
Plans stay indefinitely in the CopyDisks phase when there is an outdated ovirtvolumepopulator
+

In earlier releases of Forklift, an earlier failed migration could have left an outdated ovirtvolumepopulator. When starting a new plan for the same VM to the same project, the CreateDataVolumes phase did not create populator PVCs when transitioning to CopyDisks, causing the CopyDisks phase to stay indefinitely. This issue was resolved in Forklift 2.6.0. (MTV-929)

+
+
+

For a complete list of all resolved issues in this release, see the list of Resolved Issues in Jira.

+
+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+ + + + + +
+
Warning
+
+
Warm migration and remote migration flows are impacted by multiple bugs
+
+

Warm migration and remote migration flows are impacted by multiple bugs. It is strongly recommended to fall back to cold migration until this issue is resolved. (MTV-1366)

+
+
+
+
+
Migrating older Linux distributions from VMware to OKD, the name of the network interfaces changes
+

When migrating virtual machines (VMs) with older Linux distributions, such as CentOS 7.0 and 7.1, from VMware to OKD, the names of the network interfaces change, and the static IP configuration for the VM no longer functions. This issue is caused by RHEL 7.0 and 7.1 still requiring virtio-transitional. Workaround: Manually update the guest to RHEL 7.2 or update the VM specification post-migration to use transitional. (MTV-1382)

+
+
+
Dynamic disks are offline in Windows Server 2022 after migration from vSphere to CNV with ceph-rbd
+

The dynamic disks are Offline in Windows Server 2022 after cold and warm migrations from vSphere to container-native virtualization (CNV) with Ceph RADOS Block Devices (RBD), using the storage class ocs-storagecluster-ceph-rbd. (MTV-1344)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#22008846)

+
+
+
Migration of virtual machines with encrypted partitions fails during a conversion (vSphere only)
+

vSphere only: Migrations from oVirt and {osp} do not fail, but the encryption key might be missing on the target OKD cluster.

+
+
+
Migration fails during precopy/cutover while performing a snapshot operation on the source VM
+

Warm migration from oVirt fails if a snapshot operation is triggered and running on the source VM at the same time as the migration is scheduled. The migration does not wait for the snapshot operation to finish. (MTV-456)

+
+
+
Unable to schedule migrated VM with multiple disks to more than one storage class of type hostPath
+

When migrating a VM with multiple disks to more than one storage class of type hostPath, it might happen that a VM cannot be scheduled. Workaround: Use shared storage on the target OKD cluster.

+
+
+
Non-supported guest operating systems in warm migrations
+

Warm migrations and migrations to remote OKD clusters from vSphere do not support all the guest operating systems that are supported in cold migrations to the local OKD cluster. This is a consequence of using RHEL 8 in the former case and RHEL 9 in the latter case.

+
+ +
+
VMs from vSphere with RHEL 9 guest operating system can start with network interfaces that are down
+

When migrating VMs that are installed with RHEL 9 as a guest operating system from vSphere, the network interfaces of the VMs could be disabled when they start in {ocp-name} Virtualization. (MTV-491)

+
+
+
Migration of a VM with NVME disks from vSphere fails
+

When migrating a virtual machine (VM) with NVME disks from vSphere, the migration process fails, and the Web Console shows that the Convert image to kubevirt stage is running but did not finish successfully. (MTV-963)

+
+
+
Importing image-based VMs can fail
+

Migrating an image-based VM without the virtual_size field can fail on a block mode storage class. (MTV-946)

+
+
+
Deleting a migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)

+
+
+
Migrating VMs with independent persistent disks from VMware to OCP-V fails
+

Migrating VMs with independent persistent disks from VMware to OCP-V fails. (MTV-993)

+
+
+
Guest operating system from vSphere might be missing
+

When vSphere does not receive updates about the guest operating system from the VMware tools, it considers the information about the guest operating system to be outdated and ceases to report it. When this occurs, Forklift is unaware of the guest operating system of the VM and is unable to associate it with the appropriate virtual machine preference or {ocp-name} template. (MTV-1046)

+
+
+
Failure to migrate an image-based VM from {osp} to the default project
+

The migration process fails when migrating an image-based VM from {osp} to the default project. (MTV-964)

+
+
+

For a complete list of all known issues in this release, see the list of Known Issues in Jira.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.7/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.7/index.html new file mode 100644 index 00000000000..42955c29c65 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.7/index.html @@ -0,0 +1,91 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.7

+
+

You can use Forklift to migrate virtual machines from the following source providers to KubeVirt destination providers:

+
+
+
    +
  • +

    VMware vSphere versions 6, 7, and 8

    +
  • +
  • +

    oVirt

    +
  • +
  • +

    {osp}

    +
  • +
  • +

    Open Virtual Appliances (OVAs) that were created by VMware vSphere

    +
  • +
  • +

    Remote KubeVirt clusters

    +
  • +
+
+
+

The release notes describe technical changes, new features and enhancements, known issues, and resolved issues.

+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-27-resolved-issues/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-27-resolved-issues/index.html new file mode 100644 index 00000000000..c021b01cfd0 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-27-resolved-issues/index.html @@ -0,0 +1,168 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Resolved issues

+
+
+
+

Forklift 2.7 has the following resolved issues:

+
+
+
+
+

Resolved issues 2.7.3

+
+
+
Migration plan does not fail when conversion pod fails
+

In earlier releases of Forklift, when running the virt-v2v guest conversion, the migration plan did not fail as expected when the conversion pod failed. This issue has been resolved in Forklift 2.7.3. (MTV-1569)

+
+
+
Large number of VMs in the inventory can cause the inventory controller to panic
+

In earlier releases of Forklift, having a large number of virtual machines (VMs) in the inventory could cause the inventory controller to panic and return a concurrent write to websocket connection warning. This issue, caused by concurrent writes to the WebSocket connection, has been addressed by adding a lock so that the goroutine waits before sending the response from the server. This issue has been resolved in Forklift 2.7.3. (MTV-1220)

+
+
+
VM selection disappears when selecting multiple VMs in the Migration Plan
+

In earlier releases of Forklift, the VM selection checkbox disappeared after selecting multiple VMs in the Migration Plan. This issue has been resolved in Forklift 2.7.3. (MTV-1546)

+
+
+
forklift-controller crashing during OVA plan migration
+

In earlier releases of Forklift, the forklift-controller would crash during an OVA plan migration, returning a runtime error: invalid memory address or nil pointer dereference panic. This issue has been resolved in Forklift 2.7.3. (MTV-1577)

+
+
+
+
+

Resolved issues 2.7.2

+
+
+
VMNetworksNotMapped error occurs after creating a plan from the UI with the source provider set to KubeVirt
+

In earlier releases of Forklift, after creating a plan with a KubeVirt source provider, the Migration Plan failed with the error The plan is not ready - VMNetworksNotMapped. This issue has been resolved in Forklift 2.7.2. (MTV-1201)

+
+
+
Migration Plan for KubeVirt to KubeVirt missing the source namespace causing VMNetworkNotMapped error
+

In earlier releases of Forklift, when creating a Migration Plan for a KubeVirt to KubeVirt migration using the Plan Creation Form, the network map generated was missing the source namespace, which caused a VMNetworkNotMapped error on the plan. This issue has been resolved in Forklift 2.7.2. (MTV-1297)

+
+
+
DV, PVC, and PV are not cleaned up and removed if the migration plan is Archived and Deleted
+

In earlier releases of Forklift, the DataVolume (DV), PersistentVolumeClaim (PVC), and PersistentVolume (PV) continued to exist after the migration plan was archived and deleted. This issue has been resolved in Forklift 2.7.2. (MTV-1477)

+
+
+
Other migrations are blocked from starting while the scheduler waits for the complete VM to be transferred
+

In earlier releases of Forklift, when warm migrating a virtual machine (VM) with several disks, the scheduler waited until all of the VM's disks had finished transferring before any other migration could start. This issue has been resolved in Forklift 2.7.2. (MTV-1537)

+
+
+
Warm migration is not functioning as expected
+

In earlier releases of Forklift, warm migration did not function as expected: when running a warm migration with more VMs than the MaxInFlight disk limit allowed, the VMs over this number did not start the migration until the cutover. This issue has been resolved in Forklift 2.7.2. (MTV-1543)

+
+
+
Migration hanging due to error: virt-v2v: error: -i libvirt: expecting a libvirt guest name
+

In earlier releases of Forklift, when attempting to migrate a VMware VM with a non-compliant Kubernetes name, the OpenShift console returned a warning that the VM would be renamed. However, after the Migration Plan started, it hung because the migration pod was in an Error state. This issue has been resolved in Forklift 2.7.2. (MTV-1555)

+
+
+
VMs are not migrated if they have more disks than MAX_VM_INFLIGHT
+

In earlier releases of Forklift, when migrating a VM by using warm migration, if the VM had more disks than MAX_VM_INFLIGHT, the VM was not scheduled and the migration did not start. This issue has been resolved in Forklift 2.7.2. (MTV-1573)

+
+
+
Migration Plan returns an error even when Changed Block Tracking (CBT) is enabled
+

In earlier releases of Forklift, when running a VM in VMware, if the CBT flag was enabled while the VM was running by adding both the ctkEnabled=TRUE and scsi0:0.ctkEnabled=TRUE parameters, an error message, Danger alert: The plan is not ready - VMMissingChangedBlockTracking, was returned, and the migration plan was prevented from working. This issue has been resolved in Forklift 2.7.2. (MTV-1576)

+
+
+
+
+

Resolved issues 2.7.0

+
+
+
Change . to - in the names of VMs that are migrated
+

In earlier releases of Forklift, if the name of a virtual machine (VM) contained a period (.), it was changed to a dash (-) during migration. This issue has been resolved in Forklift 2.7.0. (MTV-1292)

+
+
+
Status condition indicating a failed mapping resource in a plan is not added to the plan
+

In earlier releases of Forklift, a status condition indicating a failed mapping resource of a plan was not added to the plan. This issue has been resolved in Forklift 2.7.0, with a status condition indicating the failed mapping being added. (MTV-1461)

+
+
+
ifcfg files with HWaddr cause the NIC name to change
+

In earlier releases of Forklift, interface configuration (ifcfg) files with a hardware address (HWaddr) of the Ethernet interface caused the name of the network interface controller (NIC) to change. This issue has been resolved in Forklift 2.7.0. (MTV-1463)

+
+
+
Import fails with special characters in VMX file
+

In earlier releases of Forklift, imports failed when there were special characters in the parameters of the VMX file. This issue has been resolved in Forklift 2.7.0. (MTV-1472)

+
+
+
Observed invalid memory address or nil pointer dereference panic
+

In earlier releases of Forklift, an invalid memory address or nil pointer dereference panic was observed, which was caused by a refactor and could be triggered when there was a problem with the inventory pod. This issue has been resolved in Forklift 2.7.0. (MTV-1482)

+
+
+
Static IPv4 changed after warm migrating win2022/2019 VMs
+

In earlier releases of Forklift, the static Internet Protocol version 4 (IPv4) address was changed after a warm migration of Windows Server 2022 and Windows Server 2019 VMs. This issue has been resolved in Forklift 2.7.0. (MTV-1491)

+
+
+
Warm migration is missing arguments
+

In earlier releases of Forklift, virt-v2v-in-place for the warm migration was missing arguments that were available in virt-v2v for the cold migration. This issue has been resolved in Forklift 2.7.0. (MTV-1495)

+
+
+
Default gateway settings changed after migrating Windows Server 2022 VMs with preserve static IPs
+

In earlier releases of Forklift, the default gateway settings were changed after migrating Windows Server 2022 VMs with the preserve static IPs setting. This issue has been resolved in Forklift 2.7.0. (MTV-1497)

+
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/running-migration-plan/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/running-migration-plan/index.html new file mode 100644 index 00000000000..2e334f30bf3 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/running-migration-plan/index.html @@ -0,0 +1,135 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Running a migration plan

+
+

You can run a migration plan and view its progress in the OKD web console.

+
+
+
Prerequisites
+
    +
  • +

    Valid migration plan.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration > Plans for virtualization.

    +
    +

    The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, and the description of each plan.

    +
    +
  2. +
  3. +

    Click Start beside a migration plan to start the migration.

    +
  4. +
  5. +

    Click Start in the confirmation window that opens.

    +
    +

    The Migration details by VM screen opens, displaying the migration’s progress.

    +
    +
    +

    Warm migration only:

    +
    +
    +
      +
    • +

      The precopy stage starts.

      +
    • +
    • +

      Click Cutover to complete the migration.

      +
    • +
    +
    +
  6. +
  7. +

    If the migration fails:

    +
    +
      +
    1. +

      Click Get logs to retrieve the migration logs.

      +
    2. +
    3. +

      Click Get logs in the confirmation window that opens.

      +
    4. +
    5. +

      Wait until Get logs changes to Download logs and then click the button to download the logs.

      +
    6. +
    +
    +
  8. +
  9. +

    Click a migration’s Status, whether the migration failed, succeeded, or is still ongoing, to view the details of the migration.

    +
    +

    The Migration details by VM screen opens, displaying the start and end times of the migration, the amount of data copied, and a progress pipeline for each VM being migrated.

    +
    +
  10. +
  11. +

    Expand an individual VM to view its steps and the elapsed time and state of each step.

    +
  12. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/selecting-migration-network-for-virt-provider/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/selecting-migration-network-for-virt-provider/index.html new file mode 100644 index 00000000000..98f3e25d5dd --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/selecting-migration-network-for-virt-provider/index.html @@ -0,0 +1,100 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Selecting a migration network for a KubeVirt provider

+
+

You can select a default migration network for a KubeVirt provider in the OKD web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.

+
+
+

If you do not select a migration network, the default migration network is the pod network, which might not be optimal for disk transfer.

+
+
+ + + + + +
+
Note
+
+
+

You can override the default migration network of the provider by selecting a different network when you create a migration plan.

+
+
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration > Providers for virtualization.

    +
  2. +
  3. +

    On the right side of the provider, select Select migration network from the {kebab}.

    +
  4. +
  5. +

    Select a network from the list of available networks and click Select.

    +
  6. +
+
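If you prefer the CLI, the same default can be set by annotating the Provider CR. This is a sketch only; the annotation name forklift.konveyor.io/defaultTransferNetwork and the value format are assumptions that you should verify against your Forklift version:

$ oc annotate provider <provider_name> -n <namespace> \
  forklift.konveyor.io/defaultTransferNetwork=<network_attachment_definition>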
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/selecting-migration-network-for-vmware-source-provider/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/selecting-migration-network-for-vmware-source-provider/index.html new file mode 100644 index 00000000000..9f8218d08a9 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/selecting-migration-network-for-vmware-source-provider/index.html @@ -0,0 +1,142 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Selecting a migration network for a VMware source provider

+
+

You can select a migration network in the OKD web console for a source provider to reduce risk to the source environment and to improve performance.

+
+
+

Using the default network for migration can result in poor performance because the network might not have sufficient bandwidth. This situation can have a negative effect on the source platform because the disk transfer operation might saturate the network.

+
+
+

You can also control the network from which disks are transferred from a host by using the Network File Copy (NFC) service in vSphere.

+
+
+
Prerequisites
+
    +
  • +

    The migration network must have sufficient throughput for disk transfer, with a minimum speed of 10 Gbps.

    +
  • +
  • +

    The migration network must be accessible to the KubeVirt nodes through the default gateway.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    The source virtual disks are copied by a pod that is connected to the pod network of the target namespace.

    +
    +
    +
    +
  • +
  • +

    The migration network should have jumbo frames enabled. A quick end-to-end check is sketched after this list.

    +
  • +
+
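One quick way to verify that jumbo frames work end to end from a Linux host, assuming a 9000-byte MTU (8972 bytes of ICMP payload plus 28 bytes of IP and ICMP headers; the -M do flag forbids fragmentation):

$ ping -M do -s 8972 -c 3 <destination_host>

If the replies come back without fragmentation errors, the path supports jumbo frames.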
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration > Providers for virtualization.

    +
  2. +
  3. +

    Click the host number in the Hosts column beside a provider to view a list of hosts.

    +
  4. +
  5. +

    Select one or more hosts and click Select migration network.

    +
  6. +
  7. +

    Specify the following fields:

    +
    +
      +
    • +

      Network: Network name

      +
    • +
    • +

      ESXi host admin username: For example, root

      +
    • +
    • +

      ESXi host admin password: Password

      +
    • +
    +
    +
  8. +
  9. +

    Click Save.

    +
  10. +
  11. +

    Verify that the status of each host is Ready.

    +
    +

    If a host status is not Ready, the host might be unreachable on the migration network or the credentials might be incorrect. You can modify the host configuration and save the changes.

    +
    +
  12. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/selecting-migration-network/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/selecting-migration-network/index.html new file mode 100644 index 00000000000..a6e11c3a7eb --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/selecting-migration-network/index.html @@ -0,0 +1,118 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Selecting a migration network for a source provider

+
+

You can select a migration network for a source provider in the Forklift web console for improved performance.

+
+
+

If a source network is not optimal for migration, a Warning icon is displayed beside the host number in the Hosts column of the provider list.

+
+
+
Prerequisites
+

The migration network has the following prerequisites:

+
+
+
    +
  • +

    Minimum speed of 10 Gbps.

    +
  • +
  • +

    Accessible to the OpenShift nodes through the default gateway. The source disks are copied by a pod that is connected to the pod network of the target namespace.

    +
  • +
  • +

    Jumbo frames enabled.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Click Providers.

    +
  2. +
  3. +

    Click the host number of a provider to view the host list and network details.

    +
  4. +
  5. +

    Select the host to be updated and click Select migration network.

    +
  6. +
  7. +

    Select a Network from the list of available networks.

    +
    +

    The network list displays only the networks that are accessible to all the selected hosts.

    +
    +
  8. +
  9. +

    Click Check connection to verify the credentials.

    +
  10. +
  11. +

    Click Select to select the migration network.

    +
    +

    The migration network appears in the network details of the updated hosts.

    +
    +
  12. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip-certificate-options/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip-certificate-options/index.html new file mode 100644 index 00000000000..27f7d0a5cb2 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip-certificate-options/index.html @@ -0,0 +1,114 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
    +
  1. +

    Choose one of the following options for validating CA certificates:

    +
    +
      +
    • +

      Use a custom CA certificate: Migrate after validating a custom CA certificate.

      +
    • +
    • +

      Use the system CA certificate: Migrate after validating the system CA certificate.

      +
    • +
    • +

      Skip certificate validation: Migrate without validating a CA certificate.

      +
      +
        +
      1. +

        To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.

        +
      2. +
      3. +

        To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.

        +
      4. +
      5. +

        To skip certificate validation, toggle the Skip certificate validation switch to the right.

        +
      6. +
      +
      +
    • +
    +
    +
  2. +
  3. +

    Optional: Ask Forklift to fetch a custom CA certificate from the provider’s API endpoint URL.

    +
    +
      +
    1. +

      Click Fetch certificate from URL. The Verify certificate window opens.

      +
    2. +
    3. +

      If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.

      +
      +

      Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint.

      +
      +
    4. +
    +
    +
  4. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip-migrating-luns/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip-migrating-luns/index.html new file mode 100644 index 00000000000..89ea20fc7cb --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip-migrating-luns/index.html @@ -0,0 +1,86 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + +
+
Note
+
+
+
    +
  • +

    Unlike disk images that are copied from a source provider to a target provider, LUNs are detached, but not removed, from virtual machines in the source provider and then attached to the virtual machines (VMs) that are created in the target provider.

    +
  • +
  • +

    LUNs are not removed from the source provider during the migration in case fallback to the source provider is required. However, before re-attaching the LUNs to VMs in the source provider, ensure that the LUNs are not used by VMs on the target environment at the same time, because concurrent use might lead to data corruption.

    +
  • +
+
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_cold-warm-comparison-table/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_cold-warm-comparison-table/index.html new file mode 100644 index 00000000000..3de7ebdc31b --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_cold-warm-comparison-table/index.html @@ -0,0 +1,100 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

Both cold migration and warm migration have advantages and disadvantages, as described in the table that follows:

+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Advantages and disadvantages of cold and warm migrations
Cold migrationWarm migration

Duration

Correlates to the amount of data on the disks

Correlates to the amount of data on the disks and VM utilization

Data transferred

Approximate sum of all disks

Approximate sum of all disks and VM utilization

VM downtime

High

Low

+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_measured_boot_windows_vm/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_measured_boot_windows_vm/index.html new file mode 100644 index 00000000000..cb7ba0bcc04 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_measured_boot_windows_vm/index.html @@ -0,0 +1,72 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
Windows VMs which are using Measured Boot cannot be migrated
+

Microsoft Windows virtual machines (VMs) that use the Measured Boot feature cannot be migrated because Measured Boot is a mechanism that prevents any kind of device change by checking each start-up component, from the firmware all the way to the boot driver.

+
+
+

The alternative to migration is to re-create the Windows VM directly on KubeVirt.

+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_performance/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_performance/index.html new file mode 100644 index 00000000000..5cb6e8b3fe8 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_performance/index.html @@ -0,0 +1,74 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

The data provided here was collected from testing in Red Hat Labs and is provided for reference only. 

+
+
+

Overall, these numbers represent best-case scenarios.

+
+
+

The observed performance of migration can differ from these results and depends on several factors.

+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_permissions-info/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_permissions-info/index.html new file mode 100644 index 00000000000..6d00955ab3c --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_permissions-info/index.html @@ -0,0 +1,85 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

If you are an administrator, you can see and work with components (providers, plans, etc.) for all projects.

+
+
+

If you are a non-administrator, you can see and work only with the components of projects for which you have permissions.

+
+
+ + + + + +
+
Tip
+
+
+

You can see which projects you have permissions for by clicking the Project list, which is in the upper-left of every page in the Migrations section except for the Overview.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_plan-limits/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_plan-limits/index.html new file mode 100644 index 00000000000..23bf00cb45b --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_plan-limits/index.html @@ -0,0 +1,79 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + +
+
Important
+
+
+

A plan cannot contain more than 500 VMs or 500 disks.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_qemu-guest-agent/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_qemu-guest-agent/index.html new file mode 100644 index 00000000000..56abc669f59 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_qemu-guest-agent/index.html @@ -0,0 +1,74 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

VMware only: In cold migrations, in situations in which a package manager cannot be used during the migration, Forklift does not install the qemu-guest-agent daemon on the migrated VMs. This has some impact on the functionality of the migrated VMs, but overall, they are still expected to function.

+
+
+

To enable Forklift to automatically install qemu-guest-agent on the migrated VMs, ensure that your package manager can install the daemon during the first boot of the VM after migration.

+
+
+

If that is not possible, use your preferred automated or manual procedure to install qemu-guest-agent manually.

+
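For example, on a RHEL-family guest, a minimal manual installation might look like the following; the package and service names follow common distribution defaults:

$ sudo dnf install -y qemu-guest-agent
$ sudo systemctl enable --now qemu-guest-agent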
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_secure_boot_issue/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_secure_boot_issue/index.html new file mode 100644 index 00000000000..7b9913db1f7 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_secure_boot_issue/index.html @@ -0,0 +1,72 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
VMs with Secure Boot enabled might not be migrated automatically
+

Virtual machines (VMs) with Secure Boot enabled currently might not be migrated automatically. This is because Secure Boot, a security standard developed by members of the PC industry to ensure that a device boots using only software that is trusted by the Original Equipment Manufacturer (OEM), would prevent the VMs from booting on the destination provider. 

+
+
+

Workaround: The current workaround is to disable Secure Boot on the destination. For more details, see Disabling Secure Boot. (MTV-1548)

+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_vmware-name-change/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_vmware-name-change/index.html new file mode 100644 index 00000000000..99a71e99bc3 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_vmware-name-change/index.html @@ -0,0 +1,79 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + +
+
Important
+
+
+

When you migrate a VMware 7 VM to an OKD 4.13+ platform that uses CentOS 7.9, the names of the network interfaces change, and the static IP configuration for the VM no longer works.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_vmware-permissions/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_vmware-permissions/index.html new file mode 100644 index 00000000000..8c70e95d1a1 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_vmware-permissions/index.html @@ -0,0 +1,86 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + +
+
Important
+
+
forklift-controller consistently failing to reconcile a plan, and returning an HTTP 500 error
+
+

There is an issue with the forklift-controller consistently failing to reconcile a Migration Plan and subsequently returning an HTTP 500 error. This issue occurs when user permissions are specified only on the virtual machine (VM).

+
+
+

In Forklift, you need to add permissions at the data center level, covering the storage, networks, switches, and so on that are used by the VM. You must then propagate the permissions to the child elements.

+
+
+

If you do not want to add this level of permissions, you must manually add the permissions to each required object on the VM host.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_vmware_esxi_nfc/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_vmware_esxi_nfc/index.html new file mode 100644 index 00000000000..56c44d6d197 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_vmware_esxi_nfc/index.html @@ -0,0 +1,79 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + +
+
Note
+
+
+

You can also control the network from which disks are transferred from a host by using the Network File Copy (NFC) service in vSphere.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snippet_getting_web_console_url_cli/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snippet_getting_web_console_url_cli/index.html new file mode 100644 index 00000000000..c2e318d94bd --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snippet_getting_web_console_url_cli/index.html @@ -0,0 +1,87 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

+

+
+
+
+
$ kubectl get route virt -n konveyor-forklift \
+  -o custom-columns=:.spec.host
+
+
+
+

The URL for the forklift-ui service that opens the login page for the Forklift web console is displayed.

+
+
+

Example output

+
+
+
+
https://virt-konveyor-forklift.apps.cluster.openshift.com
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snippet_getting_web_console_url_web/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snippet_getting_web_console_url_web/index.html new file mode 100644 index 00000000000..f289a4342a0 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snippet_getting_web_console_url_web/index.html @@ -0,0 +1,84 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
    +
  1. +

    Log in to the OKD web console.

    +
  2. +
  3. +

    Click NetworkingRoutes.

    +
  4. +
  5. +

    Select the {namespace} project in the Project: list.

    +
    +

    The URL for the forklift-ui service that opens the login page for the Forklift web console is displayed.

    +
    +
    +

    Click the URL to navigate to the Forklift web console.

    +
    +
  6. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snippet_ova_tech_preview/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snippet_ova_tech_preview/index.html new file mode 100644 index 00000000000..7d120299050 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snippet_ova_tech_preview/index.html @@ -0,0 +1,87 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview.

+
+
+ + + + + +
+
Important
+
+
+

Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

+
+
+

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/source-vm-prerequisites/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/source-vm-prerequisites/index.html new file mode 100644 index 00000000000..8a970accab2 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/source-vm-prerequisites/index.html @@ -0,0 +1,127 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Source virtual machine prerequisites

+
+

The following prerequisites apply to all migrations:

+
+
+
    +
  • +

    ISO/CDROM disks must be unmounted.

    +
  • +
  • +

    Each NIC must contain one IPv4 and/or one IPv6 address.

    +
  • +
  • +

    The operating system of a VM must be certified and supported as a guest operating system with KubeVirt.

    +
  • +
  • +

    The name of a VM must not contain a period (.). Forklift changes any period in a VM name to a dash (-).

    +
  • +
  • +

    The name of a VM must not be the same as any other VM in the KubeVirt environment.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    Forklift automatically assigns a new name to a VM that does not comply with the rules.

    +
    +
    +

    Forklift makes the following changes when it automatically generates a new VM name:

    +
    +
    +
      +
    • +

      Excluded characters are removed.

      +
    • +
    • +

      Uppercase letters are switched to lowercase letters.

      +
    • +
    • +

      Any underscore (_) is changed to a dash (-).

      +
    • +
    +
    +
    +

    This feature allows a migration to proceed smoothly even if someone enters a VM name that does not follow the rules. A small illustration of these transformations follows this list.

    +
    +
    +
    +
  • +
+
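As an illustration only, and not a Forklift command, the following POSIX shell pipeline applies two of the documented transformations, lowercasing and replacing underscores and periods with dashes, to a hypothetical VM name:

$ echo 'My_Web.Server' | tr '[:upper:]' '[:lower:]' | tr '_.' '--'
my-web-server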
+
+

VMs with Secure Boot enabled might not be migrated automatically: Secure Boot, a security standard that ensures a device boots using only software trusted by the Original Equipment Manufacturer (OEM), would prevent the VMs from booting on the destination provider. The current workaround is to disable Secure Boot on the destination. For more details, see Disabling Secure Boot. (MTV-1548)

+
+
+

Windows VMs that use the Measured Boot feature cannot be migrated, because Measured Boot prevents any kind of device change by checking each start-up component, from the firmware all the way to the boot driver. The alternative to migration is to re-create the Windows VM directly on KubeVirt.

+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/storage-support/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/storage-support/index.html new file mode 100644 index 00000000000..bedbf978eda --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/storage-support/index.html @@ -0,0 +1,211 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Storage support and default modes

+
+

Forklift uses the following default volume and access modes for supported storage.

+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Default volume and access modes
ProvisionerVolume modeAccess mode

kubernetes.io/aws-ebs

Block

ReadWriteOnce

kubernetes.io/azure-disk

Block

ReadWriteOnce

kubernetes.io/azure-file

Filesystem

ReadWriteMany

kubernetes.io/cinder

Block

ReadWriteOnce

kubernetes.io/gce-pd

Block

ReadWriteOnce

kubernetes.io/hostpath-provisioner

Filesystem

ReadWriteOnce

manila.csi.openstack.org

Filesystem

ReadWriteMany

openshift-storage.cephfs.csi.ceph.com

Filesystem

ReadWriteMany

openshift-storage.rbd.csi.ceph.com

Block

ReadWriteOnce

kubernetes.io/rbd

Block

ReadWriteOnce

kubernetes.io/vsphere-volume

Block

ReadWriteOnce

+
+ + + + + +
+
Note
+
+
+

If the KubeVirt storage does not support dynamic provisioning, you must apply the following settings:

+
+
+
    +
  • +

    Filesystem volume mode

    +
    +

    Filesystem volume mode is slower than Block volume mode.

    +
    +
  • +
  • +

    ReadWriteOnce access mode

    +
    +

    ReadWriteOnce access mode does not support live virtual machine migration.

    +
    +
  • +
+
+
+

See Enabling a statically-provisioned storage class for details on editing the storage profile. A minimal sketch of such an edit follows.

+
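For example, a minimal sketch of applying both settings to a CDI StorageProfile named local-storage (a hypothetical name; see the linked procedure for the authoritative steps):

$ oc patch storageprofile local-storage --type merge \
  -p '{"spec": {"claimPropertySets": [{"accessModes": ["ReadWriteOnce"], "volumeMode": "Filesystem"}]}}'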
+
+
+
+ + + + + +
+
Note
+
+
+

If your migration uses block storage and persistent volumes created with an EXT4 file system, increase the file system overhead in CDI to more than 10%. The default overhead that is assumed by CDI does not completely account for the space reserved for the root partition. If you do not increase the file system overhead in CDI by this amount, your migration might fail.

+
+
+
+
+ + + + + +
+
Note
+
+
+

When migrating from OpenStack, or when running a cold migration from RHV to the OCP cluster on which MTV is deployed, the migration allocates persistent volumes without CDI. In these cases, you might need to adjust the file system overhead.

+
+
+

If the configured file system overhead, which has a default value of 10%, is too low, the disk transfer will fail due to lack of space. In such a case, you would want to increase the file system overhead.

+
+
+

In some cases, however, you might want to decrease the file system overhead to reduce storage consumption.

+
+
+

You can change the file system overhead by changing the value of controller_filesystem_overhead in the spec portion of the forklift-controller CR, as described in Configuring the MTV Operator. A minimal sketch follows.

+
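A minimal sketch of such a change, assuming the ForkliftController CR is named forklift-controller in the openshift-mtv namespace and that the value is a percentage:

$ oc patch forkliftcontroller forklift-controller -n openshift-mtv \
  --type merge -p '{"spec": {"controller_filesystem_overhead": 15}}'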
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/technical-changes-2-7/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/technical-changes-2-7/index.html new file mode 100644 index 00000000000..2db2a342c87 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/technical-changes-2-7/index.html @@ -0,0 +1,73 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Technical changes

+
+

Forklift 2.7 has the following technical changes:

+
+
+
Upgraded virt-v2v to RHEL9 for warm migrations
+

Forklift previously used virt-v2v from Red Hat Enterprise Linux (RHEL) 8, which does not include bug fixes and features that are available in virt-v2v in RHEL9. In Forklift 2.7.0, components are updated to RHEL 9 in order to improve the functionality of warm migration. (MTV-1152)

+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/technology-preview/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/technology-preview/index.html new file mode 100644 index 00000000000..1ec520ebece --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/technology-preview/index.html @@ -0,0 +1,88 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + +
+
Important
+
+
+

{FeatureName} is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

+
+
+

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/uninstalling-mtv-cli/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/uninstalling-mtv-cli/index.html new file mode 100644 index 00000000000..b7238048181 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/uninstalling-mtv-cli/index.html @@ -0,0 +1,144 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Uninstalling Forklift from the command line interface

+
+

You can uninstall Forklift from the command line interface (CLI).

+
+
+ + + + + +
+
Note
+
+
+

This action does not remove resources managed by the Forklift Operator, including custom resource definitions (CRDs) and custom resources (CRs). To remove these after uninstalling the Forklift Operator, you might need to manually delete the Forklift Operator CRDs.

+
+
+
+
+
Prerequisites
+
    +
  • +

    You must be logged in as a user with cluster-admin privileges.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Delete the ForkliftController custom resources by running the following command:

    +
    +
    +
    $ oc delete ForkliftController --all -n openshift-mtv
    +
    +
    +
  2. +
  3. +

    Delete the subscription to the Forklift Operator by running the following command:

    +
    +
    +
    $ oc get subscription -o name | grep 'mtv-operator' | xargs oc delete
    +
    +
    +
  4. +
  5. +

    Delete the clusterserviceversion for the Forklift Operator by running the following command:

    +
    +
    +
    $ oc get clusterserviceversion -o name | grep 'mtv-operator' | xargs oc delete
    +
    +
    +
  6. +
  7. +

    Delete the plugin console CR by running the following command:

    +
    +
    +
    $ oc delete ConsolePlugin forklift-console-plugin
    +
    +
    +
  8. +
  9. +

    Optional: Delete the custom resource definitions (CRDs) by running the following command:

    +
    +
    +
    $ kubectl get crd -o name | grep 'forklift.konveyor.io' | xargs kubectl delete
    +
    +
    +
  10. +
  11. +

    Optional: Perform cleanup by deleting the Forklift project by running the following command:

    +
    +
    +
    $ oc delete project openshift-mtv
    +
    +
    +
  12. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/uninstalling-mtv-ui/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/uninstalling-mtv-ui/index.html new file mode 100644 index 00000000000..99653afed57 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/uninstalling-mtv-ui/index.html @@ -0,0 +1,168 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Uninstalling Forklift by using the OKD web console

+
+

You can uninstall Forklift by using the OKD web console.

+
+
+
Prerequisites
+
    +
  • +

    You must be logged in as a user with cluster-admin privileges.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Operators > Installed Operators.

    +
  2. +
  3. +

    Click Forklift Operator.

    +
    +

    The Operator Details page opens in the Details tab.

    +
    +
  4. +
  5. +

    Click the ForkliftController tab.

    +
  6. +
  7. +

    Click Actions and select Delete ForkliftController.

    +
    +

    A confirmation window opens.

    +
    +
  8. +
  9. +

    Click Delete.

    +
    +

    The controller is removed.

    +
    +
  10. +
  11. +

    Open the Details tab.

    +
    +

    The Create ForkliftController button appears instead of the controller you deleted. There is no need to click it.

    +
    +
  12. +
  13. +

    On the upper-right side of the page, click Actions and select Uninstall Operator.

    +
    +

    A confirmation window opens, displaying any operand instances.

    +
    +
  14. +
  15. +

    To delete all instances, select the Delete all operand instances for this operator checkbox. By default, the checkbox is cleared.

    +
    + + + + + +
    +
    Important
    +
    +
    +

    If your Operator configured off-cluster resources, these will continue to run and will require manual cleanup.

    +
    +
    +
    +
  16. +
  17. +

    Click Uninstall.

    +
    +

    The Installed Operators page opens, and the Forklift Operator is removed from the list of installed Operators.

    +
    +
  18. +
  19. +

    Click Home > Overview.

    +
  20. +
  21. +

    In the Status section of the page, click Dynamic Plugins.

    +
    +

    The Dynamic Plugins popup opens, listing forklift-console-plugin as a failed plugin. If the forklift-console-plugin does not appear as a failed plugin, refresh the web console.

    +
    +
  22. +
  23. +

    Click forklift-console-plugin.

    +
    +

    The ConsolePlugin details page opens in the Details tab.

    +
    +
  24. +
  25. +

    On the upper right-hand side of the page, click Actions and select Delete ConsolePlugin from the list.

    +
    +

    A confirmation window opens.

    +
    +
  26. +
  27. +

    Click Delete.

    +
    +

    The plugin is removed from the list of Dynamic plugins on the Overview page. If the plugin still appears, reload the Overview page.

    +
    +
  28. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/updating-validation-rules-version/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/updating-validation-rules-version/index.html new file mode 100644 index 00000000000..8efcea4ade9 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/updating-validation-rules-version/index.html @@ -0,0 +1,127 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Updating the inventory rules version

+
+

You must update the inventory rules version each time you update the rules so that the Provider Inventory service detects the changes and triggers the Validation service.

+
+
+

The rules version is recorded in a rules_version.rego file for each provider.

+
+
+
Procedure
+
    +
  1. +

    Retrieve the current rules version:

    +
    +
    +
    $ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version
    +
    +
    +
    +
    Example output
    +
    +
    {
    +   "result": {
    +       "rules_version": 5
    +   }
    +}
    +
    +
    +
  2. +
  3. +

    Connect to the terminal of the Validation pod:

    +
    +
    +
    $ kubectl rsh <validation_pod>
    +
    +
    +
  4. +
  5. +

    Update the rules version in the /usr/share/opa/policies/io/konveyor/forklift/<provider>/rules_version.rego file. A sketch of the file contents follows this procedure.

    +
  6. +
  7. +

    Log out of the Validation pod terminal.

    +
  8. +
  9. +

    Verify the updated rules version:

    +
    +
    +
    $ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version
    +
    +
    +
    +
    Example output
    +
    +
    {
    +   "result": {
    +       "rules_version": 6
    +   }
    +}
    +
    +
    +
  10. +
+
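A sketch of what the updated rules_version.rego file might contain for the vmware provider, assuming the Rego package path mirrors the URL path shown in the examples above:

package io.konveyor.forklift.vmware

rules_version = 6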
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/upgrading-mtv-ui/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/upgrading-mtv-ui/index.html new file mode 100644 index 00000000000..38e0b7cd6f7 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/upgrading-mtv-ui/index.html @@ -0,0 +1,127 @@ + + + + + + + + Upgrading Forklift | Forklift Documentation + + + + + + + + + + + + + +Upgrading Forklift | Forklift Documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+

Upgrading Forklift

+
+

You can upgrade the Forklift Operator by using the OKD web console to install the new version.

+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Operators > Installed Operators > {operator-name-ui} > Subscription.

    +
  2. +
  3. +

    Change the update channel to the correct release.

    +
    +

    See Changing update channel in the OKD documentation.

    +
    +
  4. +
  5. +

    Confirm that Upgrade status changes from Up to date to Upgrade available. If it does not, restart the CatalogSource pod:

    +
    +
      +
    1. +

      Note the catalog source, for example, redhat-operators.

      +
    2. +
    3. +

      From the command line, retrieve the catalog source pod:

      +
      +
      +
      $ kubectl get pod -n openshift-marketplace | grep <catalog_source>
      +
      +
      +
    4. +
    5. +

      Delete the pod:

      +
      +
      +
      $ kubectl delete pod -n openshift-marketplace <catalog_source_pod>
      +
      +
      +
      +

      Upgrade status changes from Up to date to Upgrade available.

      +
      +
      +

      If you set Update approval on the Subscriptions tab to Automatic, the upgrade starts automatically.

      +
      +
    6. +
    +
    +
  6. +
  7. +

    If you set Update approval on the Subscriptions tab to Manual, approve the upgrade.

    +
    +

    See Manually approving a pending upgrade in the OKD documentation.

    +
    +
  8. +
  9. +

    If you are upgrading from Forklift 2.2 and have defined VMware source providers, edit the VMware provider by adding a VDDK init image. Otherwise, the update will change the state of any VMware providers to Critical. For more information, see Adding a vSphere source provider.

    +
  10. +
  11. +

    If you mapped to NFS on the OKD destination provider in Forklift 2.2, edit the AccessModes and VolumeMode parameters in the NFS storage profile. Otherwise, the upgrade will invalidate the NFS mapping. For more information, see Customizing the storage profile.

    +
  12. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/using-must-gather/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/using-must-gather/index.html new file mode 100644 index 00000000000..857998b1851 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/using-must-gather/index.html @@ -0,0 +1,157 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Using the must-gather tool

+
+

You can collect logs and information about Forklift custom resources (CRs) by using the must-gather tool. You must attach a must-gather data file to all customer cases.

+
+
+

You can gather data for a specific namespace, migration plan, or virtual machine (VM) by using the filtering options.

+
+
+ + + + + +
+
Note
+
+
+

If you specify a non-existent resource in the filtered must-gather command, no archive file is created.

+
+
+
+
+
Prerequisites
+
    +
  • +

    You must be logged in to the KubeVirt cluster as a user with the cluster-admin role.

    +
  • +
  • +

    You must have the OKD CLI (oc) installed.

    +
  • +
+
+
+
Collecting logs and CR information
+
    +
  1. +

    Navigate to the directory where you want to store the must-gather data.

    +
  2. +
  3. +

    Run the oc adm must-gather command:

    +
    +
    +
    $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest
    +
    +
    +
    +

    The data is saved as /must-gather/must-gather.tar.gz. You can upload this file to a support case on the Red Hat Customer Portal.

    +
    +
  4. +
  5. +

    Optional: Run the oc adm must-gather command with the following options to gather filtered data:

    +
    +
      +
    • +

      Namespace:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- NS=<namespace> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Migration plan:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- PLAN=<migration_plan> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Virtual machine:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- VM=<vm_id> NS=<namespace> /usr/bin/targeted (1)
      +
      +
      +
      +
        +
      1. +

        Specify the VM ID as it appears in the Plan CR.

        +
      2. +
      +
      +
    • +
    +
    +
  6. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/virt-migration-workflow/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/virt-migration-workflow/index.html new file mode 100644 index 00000000000..1297608f51e --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/virt-migration-workflow/index.html @@ -0,0 +1,209 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Detailed migration workflow

+
+

You can use the detailed migration workflow to troubleshoot a failed migration.

+
+
+

The workflow describes the following steps:

+
+
+

Warm migration or migration to a remote {ocp-name} cluster:

+
+
+
    +
  1. +

    When you create the Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a DataVolume CR for each source VM disk.

    +
    +

    For each VM disk:

    +
    +
  2. +
  3. +

    The Containerized Data Importer (CDI) Controller service creates a persistent volume claim (PVC) based on the parameters specified in the DataVolume CR.



    +
  4. +
  5. +

    If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

    +
  6. +
  7. +

    The CDI Controller service creates an importer pod.

    +
  8. +
  9. +

    The importer pod streams the VM disk to the PV.

    +
    +

    After the VM disks are transferred:

    +
    +
  10. +
  11. +

    The Migration Controller service creates a conversion pod with the PVCs attached to it when importing from VMware.

    +
    +

    The conversion pod runs virt-v2v, which installs and configures device drivers on the PVCs of the target VM.

    +
    +
  12. +
  13. +

    The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.

    +
  14. +
  15. +

    If the VM was running on the source environment, the Migration Controller service powers on the VM, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    +
    +

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

    +
    +
  16. +
+
+
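While these steps run, you can watch the objects they create, assuming the oc CLI and the target namespace of the migration plan:

$ oc get datavolumes,pvc,pods -n <target_namespace>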
+

Cold migration from oVirt or {osp} to the local {ocp-name} cluster:

+
+
+
    +
  1. +

    When you create a Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a PersistentVolumeClaim CR for each source VM disk, along with an OvirtVolumePopulator CR when the source is oVirt, or an OpenstackVolumePopulator CR when the source is {osp}.

    +
    +

    For each VM disk:

    +
    +
  2. +
  3. +

    The Populator Controller service creates a temporary persistent volume claim (PVC).

    +
  4. +
  5. +

    If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

    +
    +
      +
    • +

      The Migration Controller service creates a dummy pod to bind all PVCs. The name of the pod contains pvcinit.

      +
    • +
    +
    +
  6. +
  7. +

    The Populator Controller service creates a populator pod.

    +
  8. +
  9. +

    The populator pod transfers the disk data to the PV.

    +
    +

    After the VM disks are transferred:

    +
    +
  10. +
  11. +

    The temporary PVC is deleted, and the initial PVC points to the PV with the data.

    +
  12. +
  13. +

    The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.

    +
  14. +
  15. +

    If the VM was running on the source environment, the Migration Controller service powers on the VM, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    +
    +

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

    +
    +
  16. +
+
+
+

Cold migration from VMware to the local {ocp-name} cluster:

+
+
+
    +
  1. +

    When you create a Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a DataVolume CR for each source VM disk.

    +
    +

    For each VM disk:

    +
    +
  2. +
  3. +

    The Containerized Data Importer (CDI) Controller service creates a blank persistent volume claim (PVC) based on the parameters specified in the DataVolume CR.



    +
  4. +
  5. +

    If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

    +
  6. +
+
+
+

For all VM disks:

+
+
+
    +
  1. +

    The Migration Controller service creates a dummy pod to bind all PVCs. The name of the pod contains pvcinit.

    +
  2. +
  3. +

    The Migration Controller service creates a conversion pod for all PVCs.

    +
  4. +
  5. +

    The conversion pod runs virt-v2v, which converts the VM to the KVM hypervisor and transfers the disks' data to their corresponding PVs.

    +
    +

    After the VM disks are transferred:

    +
    +
  6. +
  7. +

    The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.

    +
  8. +
  9. +

    If the VM was running on the source environment, the Migration Controller service powers on the VM, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    +
    +

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

    +
    +
  10. +
+
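For a cold VMware migration, the per-disk DataVolume CRs and the conversion pod are the natural places to look when a transfer stalls. A minimal sketch, assuming CDI is installed and that <namespace> and the pod name are placeholders:

----
$ oc get datavolumes -n <namespace>
$ oc logs -f <virt-v2v-conversion-pod> -n <namespace>
----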
diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/vmware-prerequisites/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/vmware-prerequisites/index.html new file mode 100644 index 00000000000..8a935d57d25 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/vmware-prerequisites/index.html @@ -0,0 +1,278 @@

VMware prerequisites


It is strongly recommended to create a VDDK image to accelerate migrations. For more information, see Creating a VDDK image.
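As a rough illustration only (the Dockerfile layout, image tag, and registry path below are assumptions; follow Creating a VDDK image for the authoritative procedure), the build typically amounts to unpacking the VDDK archive and wrapping it in a container image:

----
$ tar -xzf VMware-vix-disklib-<version>.x86_64.tar.gz
$ cat Dockerfile
FROM registry.access.redhat.com/ubi8/ubi-minimal
COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
$ podman build . -t <registry_route_or_server>/vddk:latest
$ podman push <registry_route_or_server>/vddk:latest
----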


The following prerequisites apply to VMware migrations:

• You must use a compatible version of VMware vSphere.

• You must be logged in as a user with at least the minimal set of VMware privileges.

• To access the virtual machine using a pre-migration hook, VMware Tools must be installed on the source virtual machine.

• The VM operating system must be certified and supported for use as a guest operating system with KubeVirt and for conversion to KVM with virt-v2v.

• If you are running a warm migration, you must enable changed block tracking (CBT) on the VMs and on the VM disks, as shown in the sketch after this list.

• If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host.

• It is strongly recommended to disable hibernation because Forklift does not support migrating hibernated VMs.
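As an illustration of the CBT prerequisite above, the flags can be set from the command line with govc; this is a hedged sketch that assumes govc is configured for your vCenter and that the VM is powered off while the keys are changed:

----
$ govc vm.change -vm <vm-name> -e ctkEnabled=TRUE -e scsi0:0.ctkEnabled=TRUE
----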
Important

In the event of a power outage, data might be lost for a VM with hibernation disabled. However, if hibernation is not disabled, the migration fails.
Note

Neither Forklift nor OpenShift Virtualization supports conversion of Btrfs when migrating VMs from VMware.

VMware privileges


The following minimal set of VMware privileges is required to migrate virtual machines to KubeVirt with Forklift.

Table 1. VMware privileges

| Privilege | Description |
|---|---|
| Virtual machine.Interaction privileges: | |
| Virtual machine.Interaction.Power Off | Allows powering off a powered-on virtual machine. This operation powers down the guest operating system. |
| Virtual machine.Interaction.Power On | Allows powering on a powered-off virtual machine and resuming a suspended virtual machine. |
| Virtual machine.Guest operating system management by VIX API | Allows managing a virtual machine by the VMware VIX API. |
| Virtual machine.Provisioning privileges (Note: all Virtual machine.Provisioning privileges are required): | |
| Virtual machine.Provisioning.Allow disk access | Allows opening a disk on a virtual machine for random read and write access. Used mostly for remote disk mounting. |
| Virtual machine.Provisioning.Allow file access | Allows operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM. |
| Virtual machine.Provisioning.Allow read-only disk access | Allows opening a disk on a virtual machine for random read access. Used mostly for remote disk mounting. |
| Virtual machine.Provisioning.Allow virtual machine download | Allows read operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM. |
| Virtual machine.Provisioning.Allow virtual machine files upload | Allows write operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM. |
| Virtual machine.Provisioning.Clone template | Allows cloning of a template. |
| Virtual machine.Provisioning.Clone virtual machine | Allows cloning of an existing virtual machine and allocation of resources. |
| Virtual machine.Provisioning.Create template from virtual machine | Allows creation of a new template from a virtual machine. |
| Virtual machine.Provisioning.Customize guest | Allows customization of a virtual machine’s guest operating system without moving the virtual machine. |
| Virtual machine.Provisioning.Deploy template | Allows deployment of a virtual machine from a template. |
| Virtual machine.Provisioning.Mark as template | Allows marking an existing powered-off virtual machine as a template. |
| Virtual machine.Provisioning.Mark as virtual machine | Allows marking an existing template as a virtual machine. |
| Virtual machine.Provisioning.Modify customization specification | Allows creation, modification, or deletion of customization specifications. |
| Virtual machine.Provisioning.Promote disks | Allows promote operations on a virtual machine’s disks. |
| Virtual machine.Provisioning.Read customization specifications | Allows reading a customization specification. |
| Virtual machine.Snapshot management privileges: | |
| Virtual machine.Snapshot management.Create snapshot | Allows creation of a snapshot from the virtual machine’s current state. |
| Virtual machine.Snapshot management.Remove Snapshot | Allows removal of a snapshot from the snapshot history. |
| Datastore privileges: | |
| Datastore.Browse datastore | Allows exploring the contents of a datastore. |
| Datastore.Low level file operations | Allows performing low-level file operations - read, write, delete, and rename - in a datastore. |
| Sessions privileges: | |
| Sessions.Validate session | Allows verification of the validity of a session. |
| Cryptographic privileges: | |
| Cryptographic.Decrypt | Allows decryption of an encrypted virtual machine. |
| Cryptographic.Direct access | Allows access to encrypted resources. |
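One possible way to bundle these privileges into a dedicated role is govc role.create. This is a sketch only: the role name is arbitrary, and the privilege IDs shown are an assumed partial sample (vSphere roles use internal IDs such as VirtualMachine.Interact.PowerOff rather than the UI labels listed above):

----
$ govc role.create Forklift-Migration \
    VirtualMachine.Interact.PowerOff \
    VirtualMachine.Interact.PowerOn \
    Datastore.Browse \
    Sessions.ValidateSession
----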

diff --git a/documentation/doc-Release_notes/docinfo.xml b/documentation/doc-Release_notes/docinfo.xml new file mode 100644 index 00000000000..b35cd5a2260 --- /dev/null +++ b/documentation/doc-Release_notes/docinfo.xml @@ -0,0 +1,15 @@
+{rn-title}
+{project-full}
+{project-version}
+Version {project-version}
+This document describes new features, known issues, and resolved issues for {the-lc} {project-full} {project-version}.
+Red Hat Modernization and Migration Documentation Team
+ccs-mms-docs@redhat.com
diff --git a/documentation/doc-Release_notes/master/index.html b/documentation/doc-Release_notes/master/index.html new file mode 100644 index 00000000000..73c8972bd3a --- /dev/null +++ b/documentation/doc-Release_notes/master/index.html @@ -0,0 +1,2674 @@

Release notes


Forklift 2.7


You can use Forklift to migrate virtual machines from the following source providers to KubeVirt destination providers:

• VMware vSphere versions 6, 7, and 8

• oVirt

• OpenStack

• Open Virtual Appliances (OVAs) that were created by VMware vSphere

• Remote KubeVirt clusters

The release notes describe technical changes, new features and enhancements, known issues, and resolved issues.


Technical changes


Forklift 2.7 has the following technical changes:

Upgraded virt-v2v to RHEL 9 for warm migrations

Forklift previously used virt-v2v from Red Hat Enterprise Linux (RHEL) 8, which does not include bug fixes and features that are available in virt-v2v in RHEL 9. In Forklift 2.7.0, components are updated to RHEL 9 to improve the functionality of warm migration. (MTV-1152)

Forklift selected packages


The following packages are from the virt-v2v guest conversion pod:

Table 1. Selected Forklift packages

| Package summary | Forklift 2.7.0 | Forklift 2.7.2 | Forklift 2.7.3 |
|---|---|---|---|
| The skeleton package which defines a simple Red Hat Enterprise Linux system | basesystem-11-13.el9.noarch | basesystem-11-13.el9.noarch | basesystem-11-13.el9.noarch |
| Core kernel modules to match the core kernel | kernel-modules-core-5.14.0-427.35.1.el9_4.x86_64 | kernel-modules-core-5.14.0-427.37.1.el9_4.x86_64 | kernel-modules-core-5.14.0-427.40.1.el9_4.x86_64 |
| The Linux kernel | kernel-core-5.14.0-427.35.1.el9_4.x86_64 | kernel-core-5.14.0-427.37.1.el9_4.x86_64 | kernel-core-5.14.0-427.40.1.el9_4.x86_64 |
| Access and modify virtual machine disk images | libguestfs-1.50.1-8.el9_4.x86_64 | libguestfs-1.50.1-8.el9_4.x86_64 | libguestfs-1.50.1-8.el9_4.x86_64 |
| Client side utilities of the libvirt library | libvirt-client-10.0.0-6.7.el9_4.x86_64 | libvirt-client-10.0.0-6.7.el9_4.x86_64 | libvirt-client-10.0.0-6.7.el9_4.x86_64 |
| Libvirt libraries | libvirt-libs-10.0.0-6.7.el9_4.x86_64 | libvirt-libs-10.0.0-6.7.el9_4.x86_64 | libvirt-libs-10.0.0-6.7.el9_4.x86_64 |
| QEMU driver plugin for the libvirtd daemon | libvirt-daemon-driver-qemu-10.0.0-6.7.el9_4.x86_64 | libvirt-daemon-driver-qemu-10.0.0-6.7.el9_4.x86_64 | libvirt-daemon-driver-qemu-10.0.0-6.7.el9_4.x86_64 |
| NBD server | nbdkit-1.36.2-1.el9.x86_64 | nbdkit-1.36.2-1.el9.x86_64 | nbdkit-1.36.2-1.el9.x86_64 |
| Basic filters for nbdkit | nbdkit-basic-filters-1.36.2-1.el9.x86_64 | nbdkit-basic-filters-1.36.2-1.el9.x86_64 | nbdkit-basic-filters-1.36.2-1.el9.x86_64 |
| Basic plugins for nbdkit | nbdkit-basic-plugins-1.36.2-1.el9.x86_64 | nbdkit-basic-plugins-1.36.2-1.el9.x86_64 | nbdkit-basic-plugins-1.36.2-1.el9.x86_64 |
| HTTP/FTP (cURL) plugin for nbdkit | nbdkit-curl-plugin-1.36.2-1.el9.x86_64 | nbdkit-curl-plugin-1.36.2-1.el9.x86_64 | nbdkit-curl-plugin-1.36.2-1.el9.x86_64 |
| NBD proxy / forward plugin for nbdkit | nbdkit-nbd-plugin-1.36.2-1.el9.x86_64 | nbdkit-nbd-plugin-1.36.2-1.el9.x86_64 | nbdkit-nbd-plugin-1.36.2-1.el9.x86_64 |
| Python 3 plugin for nbdkit | nbdkit-python-plugin-1.36.2-1.el9.x86_64 | nbdkit-python-plugin-1.36.2-1.el9.x86_64 | nbdkit-python-plugin-1.36.2-1.el9.x86_64 |
| The nbdkit server | nbdkit-server-1.36.2-1.el9.x86_64 | nbdkit-server-1.36.2-1.el9.x86_64 | nbdkit-server-1.36.2-1.el9.x86_64 |
| SSH plugin for nbdkit | nbdkit-ssh-plugin-1.36.2-1.el9.x86_64 | nbdkit-ssh-plugin-1.36.2-1.el9.x86_64 | nbdkit-ssh-plugin-1.36.2-1.el9.x86_64 |
| VMware VDDK plugin for nbdkit | nbdkit-vddk-plugin-1.36.2-1.el9.x86_64 | nbdkit-vddk-plugin-1.36.2-1.el9.x86_64 | nbdkit-vddk-plugin-1.36.2-1.el9.x86_64 |
| QEMU command line tool for manipulating disk images | qemu-img-8.2.0-11.el9_4.6.x86_64 | qemu-img-8.2.0-11.el9_4.6.x86_64 | qemu-img-8.2.0-11.el9_4.6.x86_64 |
| QEMU common files needed by all QEMU targets | qemu-kvm-common-8.2.0-11.el9_4.6.x86_64 | qemu-kvm-common-8.2.0-11.el9_4.6.x86_64 | qemu-kvm-common-8.2.0-11.el9_4.6.x86_64 |
| qemu-kvm core components | qemu-kvm-core-8.2.0-11.el9_4.6.x86_64 | qemu-kvm-core-8.2.0-11.el9_4.6.x86_64 | qemu-kvm-core-8.2.0-11.el9_4.6.x86_64 |
| Convert a virtual machine to run on KVM | virt-v2v-2.4.0-4.el9_4.x86_64 | virt-v2v-2.4.0-4.el9_4.x86_64 | virt-v2v-2.4.0-4.el9_4.x86_64 |

For a full list of packages in Forklift, see Forklift changelog.
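You can also verify the package set of a running conversion pod directly; the pod name and namespace below are placeholders:

----
$ oc exec <virt-v2v-conversion-pod> -n <namespace> -- rpm -qa | sort
----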


New features and enhancements


Forklift 2.7 introduces the following features and enhancements:


New features and enhancements 2.7.0

• In Forklift 2.7.0, warm migration is based on RHEL 9, inheriting its features and bug fixes.

Resolved issues


Forklift 2.7 has the following resolved issues:


Resolved issues 2.7.3

Migration plan does not fail when conversion pod fails

In earlier releases of Forklift, when running the virt-v2v guest conversion, the migration plan did not fail when the conversion pod failed, as it was expected to. This issue has been resolved in Forklift 2.7.3. (MTV-1569)

Large number of VMs in the inventory can cause the inventory controller to panic

In earlier releases of Forklift, having a large number of virtual machines (VMs) in the inventory could cause the inventory controller to panic and return a concurrent write to websocket connection warning. The panic was caused by concurrent writes to the WebSocket connection and has been addressed by adding a lock, so the Go routine waits before sending the response from the server. This issue has been resolved in Forklift 2.7.3. (MTV-1220)

VM selection disappears when selecting multiple VMs in the Migration Plan

In earlier releases of Forklift, the VM selection checkbox disappeared after selecting multiple VMs in the Migration Plan. This issue has been resolved in Forklift 2.7.3. (MTV-1546)

forklift-controller crashing during OVA plan migration

In earlier releases of Forklift, the forklift-controller would crash during an OVA plan migration, returning a runtime error: invalid memory address or nil pointer dereference panic. This issue has been resolved in Forklift 2.7.3. (MTV-1577)

Resolved issues 2.7.2

VMNetworksNotMapped error occurs after creating a plan from the UI with the source provider set to KubeVirt

In earlier releases of Forklift, after creating a plan with a KubeVirt source provider, the Migration Plan failed with the error The plan is not ready - VMNetworksNotMapped. This issue has been resolved in Forklift 2.7.2. (MTV-1201)

Migration Plan for KubeVirt to KubeVirt missing the source namespace causing VMNetworkNotMapped error

In earlier releases of Forklift, when creating a Migration Plan for a KubeVirt to KubeVirt migration using the Plan Creation Form, the generated network map was missing the source namespace, which caused a VMNetworkNotMapped error on the plan. This issue has been resolved in Forklift 2.7.2. (MTV-1297)

DV, PVC, and PV are not cleaned up and removed if the migration plan is Archived and Deleted

In earlier releases of Forklift, the DataVolume (DV), PersistentVolumeClaim (PVC), and PersistentVolume (PV) continued to exist after the migration plan was archived and deleted. This issue has been resolved in Forklift 2.7.2. (MTV-1477)

Other migrations are halted from starting as the scheduler is waiting for the complete VM to get transferred

In earlier releases of Forklift, when warm migrating a virtual machine (VM) that has several disks, you had to wait for the complete VM to be migrated, and the scheduler was halted until all the disks finished before the next migration could start. This issue has been resolved in Forklift 2.7.2. (MTV-1537)

Warm migration is not functioning as expected

In earlier releases of Forklift, warm migration did not function as expected. When running a warm migration with more disks than MaxInFlight allows, the VMs over this number did not start the migration until the cutover. This issue has been resolved in Forklift 2.7.2. (MTV-1543)

Migration hanging due to error: virt-v2v: error: -i libvirt: expecting a libvirt guest name

In earlier releases of Forklift, when attempting to migrate a VMware VM with a non-compliant Kubernetes name, the OpenShift console returned a warning that the VM would be renamed. However, after the Migration Plan started, it hung because the migration pod was in an Error state. This issue has been resolved in Forklift 2.7.2. (MTV-1555)

VMs are not migrated if they have more disks than MAX_VM_INFLIGHT

In earlier releases of Forklift, when migrating a VM by using warm migration, if there were more disks than MAX_VM_INFLIGHT, the VM was not scheduled and the migration did not start. This issue has been resolved in Forklift 2.7.2. (MTV-1573)
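For reference, the number of disks transferred concurrently is governed by the controller_max_vm_inflight setting; the following is a hedged sketch of raising it, assuming the default ForkliftController name and operator namespace:

----
$ oc patch forkliftcontroller/forklift-controller -n konveyor-forklift \
    --type merge -p '{"spec": {"controller_max_vm_inflight": 40}}'
----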

Migration Plan returns an error even when Changed Block Tracking (CBT) is enabled

In earlier releases of Forklift, when running a VM in VMware, if the CBT flag was enabled while the VM was running by adding both the ctkEnabled=TRUE and scsi0:0.ctkEnabled=TRUE parameters, an error message Danger alert: The plan is not ready - VMMissingChangedBlockTracking was returned, and the migration plan was prevented from working. This issue has been resolved in Forklift 2.7.2. (MTV-1576)

Resolved issues 2.7.0

Change . to - in the names of VMs that are migrated

In earlier releases of Forklift, if the name of a virtual machine (VM) contained ., it was changed to - during migration. This issue has been resolved in Forklift 2.7.0. (MTV-1292)

Status condition indicating a failed mapping resource in a plan is not added to the plan

In earlier releases of Forklift, a status condition indicating a failed mapping resource of a plan was not added to the plan. This issue has been resolved in Forklift 2.7.0, and a status condition indicating the failed mapping is now added. (MTV-1461)

ifcfg files with HWaddr cause the NIC name to change

In earlier releases of Forklift, interface configuration (ifcfg) files with a hardware address (HWaddr) of the Ethernet interface caused the name of the network interface controller (NIC) to change. This issue has been resolved in Forklift 2.7.0. (MTV-1463)

Import fails with special characters in VMX file

In earlier releases of Forklift, imports failed when there were special characters in the parameters of the VMX file. This issue has been resolved in Forklift 2.7.0. (MTV-1472)

Observed invalid memory address or nil pointer dereference panic

In earlier releases of Forklift, an invalid memory address or nil pointer dereference panic was observed. It was caused by a refactor and could be triggered when there was a problem with the inventory pod. This issue has been resolved in Forklift 2.7.0. (MTV-1482)

Static IPv4 changed after warm migrating win2022/2019 VMs

In earlier releases of Forklift, the static Internet Protocol version 4 (IPv4) address was changed after a warm migration of Windows Server 2022 and Windows Server 2019 VMs. This issue has been resolved in Forklift 2.7.0. (MTV-1491)

Warm migration is missing arguments

In earlier releases of Forklift, virt-v2v-in-place for warm migration was missing arguments that were available in virt-v2v for cold migration. This issue has been resolved in Forklift 2.7.0. (MTV-1495)

Default gateway settings changed after migrating Windows Server 2022 VMs with preserve static IPs

In earlier releases of Forklift, the default gateway settings were changed after migrating Windows Server 2022 VMs with the preserve static IPs setting. This issue has been resolved in Forklift 2.7.0. (MTV-1497)

Known issues


Forklift 2.7 has the following known issues:

Select Migration Network from the endpoint type ESXi displays multiple incorrect networks

When you choose Select Migration Network from the endpoint type ESXi, multiple incorrect networks are displayed. (MTV-1291)

VMs with Secure Boot enabled might not be migrated automatically

Virtual machines (VMs) with Secure Boot enabled currently might not be migrated automatically. This is because Secure Boot, a security standard developed by members of the PC industry to ensure that a device boots using only software that is trusted by the Original Equipment Manufacturer (OEM), would prevent the VMs from booting on the destination provider.

Workaround: The current workaround is to disable Secure Boot on the destination. For more details, see Disabling Secure Boot. (MTV-1548)
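A hedged sketch of that workaround, disabling Secure Boot by patching the destination KubeVirt VirtualMachine CR (the VM name and namespace are placeholders; the field path follows the KubeVirt API):

----
$ oc patch vm <vm-name> -n <namespace> --type merge \
    -p '{"spec":{"template":{"spec":{"domain":{"firmware":{"bootloader":{"efi":{"secureBoot":false}}}}}}}}'
----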

Windows VMs which are using Measured Boot cannot be migrated

Microsoft Windows virtual machines (VMs) that use the Measured Boot feature cannot be migrated, because Measured Boot is a mechanism to prevent any kind of device change by checking each start-up component, including the firmware, all the way to the boot driver.

The alternative to migration is to re-create the Windows VM directly on KubeVirt.

Network and Storage maps in the UI are not correct when created from the command line

When Network and Storage maps are created from the command line, the correct names are not shown in the UI. (MTV-1421)

Migration fails with module network-legacy configured in RHEL guests

Migration fails if the dracut module configuration file is present in the guest and the dhcp-client package is not installed, returning a dracut module 'network-legacy' will not be installed, because command 'dhclient' could not be found error. (MTV-1615)
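Until this is resolved, a plausible guest-side mitigation, inferred from the error text rather than taken from the release notes, is to install the missing dhclient dependency inside the RHEL guest before migrating:

----
# yum install -y dhcp-client
----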


Forklift changelog


The following changelog for Forklift includes a full list of packages used in the Forklift 2.7 releases.


Forklift 2.7 packages

Table 2. Forklift packages

| Forklift 2.7.0 | Forklift 2.7.2 | Forklift 2.7.3 |
|---|---|---|
| abattis-cantarell-fonts-0.301-4.el9.noarch | abattis-cantarell-fonts-0.301-4.el9.noarch | abattis-cantarell-fonts-0.301-4.el9.noarch |
| acl-2.3.1-4.el9.x86_64 | acl-2.3.1-4.el9.x86_64 | acl-2.3.1-4.el9.x86_64 |
| adobe-source-code-pro-fonts-2.030.1.050-12.el9.1.noarch | adobe-source-code-pro-fonts-2.030.1.050-12.el9.1.noarch | adobe-source-code-pro-fonts-2.030.1.050-12.el9.1.noarch |
| alternatives-1.24-1.el9.x86_64 | alternatives-1.24-1.el9.x86_64 | alternatives-1.24-1.el9.x86_64 |
| attr-2.5.1-3.el9.x86_64 | attr-2.5.1-3.el9.x86_64 | attr-2.5.1-3.el9.x86_64 |
| audit-libs-3.1.2-2.el9.x86_64 | audit-libs-3.1.2-2.el9.x86_64 | audit-libs-3.1.2-2.el9.x86_64 |
| augeas-libs-1.13.0-6.el9_4.x86_64 | augeas-libs-1.13.0-6.el9_4.x86_64 | augeas-libs-1.13.0-6.el9_4.x86_64 |
| basesystem-11-13.el9.noarch | basesystem-11-13.el9.noarch | basesystem-11-13.el9.noarch |
| bash-5.1.8-9.el9.x86_64 | bash-5.1.8-9.el9.x86_64 | bash-5.1.8-9.el9.x86_64 |
| binutils-2.35.2-43.el9.x86_64 | binutils-2.35.2-43.el9.x86_64 | binutils-2.35.2-43.el9.x86_64 |
| binutils-gold-2.35.2-43.el9.x86_64 | binutils-gold-2.35.2-43.el9.x86_64 | binutils-gold-2.35.2-43.el9.x86_64 |
| bzip2-1.0.8-8.el9.x86_64 | bzip2-1.0.8-8.el9.x86_64 | bzip2-1.0.8-8.el9.x86_64 |
| bzip2-libs-1.0.8-8.el9.x86_64 | bzip2-libs-1.0.8-8.el9.x86_64 | bzip2-libs-1.0.8-8.el9.x86_64 |
| ca-certificates-2024.2.69_v8.0.303-91.4.el9_4.noarch | ca-certificates-2024.2.69_v8.0.303-91.4.el9_4.noarch | ca-certificates-2024.2.69_v8.0.303-91.4.el9_4.noarch |
| capstone-4.0.2-10.el9.x86_64 | capstone-4.0.2-10.el9.x86_64 | capstone-4.0.2-10.el9.x86_64 |
| checkpolicy-3.6-1.el9.x86_64 | checkpolicy-3.6-1.el9.x86_64 | checkpolicy-3.6-1.el9.x86_64 |
| clevis-18-112.el9.x86_64 | clevis-18-112.el9.x86_64 | clevis-18-112.el9.x86_64 |
| clevis-luks-18-112.el9.x86_64 | clevis-luks-18-112.el9.x86_64 | clevis-luks-18-112.el9.x86_64 |
| cmake-rpm-macros-3.26.5-2.el9.noarch | cmake-rpm-macros-3.26.5-2.el9.noarch | cmake-rpm-macros-3.26.5-2.el9.noarch |
| coreutils-single-8.32-35.el9.x86_64 | coreutils-single-8.32-35.el9.x86_64 | coreutils-single-8.32-35.el9.x86_64 |
| cpio-2.13-16.el9.x86_64 | cpio-2.13-16.el9.x86_64 | cpio-2.13-16.el9.x86_64 |
| cracklib-2.9.6-27.el9.x86_64 | cracklib-2.9.6-27.el9.x86_64 | cracklib-2.9.6-27.el9.x86_64 |
| cracklib-dicts-2.9.6-27.el9.x86_64 | cracklib-dicts-2.9.6-27.el9.x86_64 | cracklib-dicts-2.9.6-27.el9.x86_64 |
| crypto-policies-20240202-1.git283706d.el9.noarch | crypto-policies-20240202-1.git283706d.el9.noarch | crypto-policies-20240202-1.git283706d.el9.noarch |
| cryptsetup-2.6.0-3.el9.x86_64 | cryptsetup-2.6.0-3.el9.x86_64 | cryptsetup-2.6.0-3.el9.x86_64 |
| cryptsetup-libs-2.6.0-3.el9.x86_64 | cryptsetup-libs-2.6.0-3.el9.x86_64 | cryptsetup-libs-2.6.0-3.el9.x86_64 |
| curl-minimal-7.76.1-29.el9_4.1.x86_64 | curl-minimal-7.76.1-29.el9_4.1.x86_64 | curl-minimal-7.76.1-29.el9_4.1.x86_64 |
| cyrus-sasl-2.1.27-21.el9.x86_64 | cyrus-sasl-2.1.27-21.el9.x86_64 | cyrus-sasl-2.1.27-21.el9.x86_64 |
| cyrus-sasl-gssapi-2.1.27-21.el9.x86_64 | cyrus-sasl-gssapi-2.1.27-21.el9.x86_64 | cyrus-sasl-gssapi-2.1.27-21.el9.x86_64 |
| cyrus-sasl-lib-2.1.27-21.el9.x86_64 | cyrus-sasl-lib-2.1.27-21.el9.x86_64 | cyrus-sasl-lib-2.1.27-21.el9.x86_64 |
| daxctl-libs-71.1-8.el9.x86_64 | daxctl-libs-71.1-8.el9.x86_64 | daxctl-libs-71.1-8.el9.x86_64 |
| dbus-1.12.20-8.el9.x86_64 | dbus-1.12.20-8.el9.x86_64 | dbus-1.12.20-8.el9.x86_64 |
| dbus-broker-28-7.el9.x86_64 | dbus-broker-28-7.el9.x86_64 | dbus-broker-28-7.el9.x86_64 |
| dbus-common-1.12.20-8.el9.noarch | dbus-common-1.12.20-8.el9.noarch | dbus-common-1.12.20-8.el9.noarch |
| dbus-libs-1.12.20-8.el9.x86_64 | dbus-libs-1.12.20-8.el9.x86_64 | dbus-libs-1.12.20-8.el9.x86_64 |
| dejavu-sans-fonts-2.37-18.el9.noarch | dejavu-sans-fonts-2.37-18.el9.noarch | dejavu-sans-fonts-2.37-18.el9.noarch |
| device-mapper-1.02.197-2.el9.x86_64 | device-mapper-1.02.197-2.el9.x86_64 | device-mapper-1.02.197-2.el9.x86_64 |
| device-mapper-event-1.02.197-2.el9.x86_64 | device-mapper-event-1.02.197-2.el9.x86_64 | device-mapper-event-1.02.197-2.el9.x86_64 |
| device-mapper-event-libs-1.02.197-2.el9.x86_64 | device-mapper-event-libs-1.02.197-2.el9.x86_64 | device-mapper-event-libs-1.02.197-2.el9.x86_64 |
| device-mapper-libs-1.02.197-2.el9.x86_64 | device-mapper-libs-1.02.197-2.el9.x86_64 | device-mapper-libs-1.02.197-2.el9.x86_64 |
| device-mapper-persistent-data-1.0.9-3.el9_4.x86_64 | device-mapper-persistent-data-1.0.9-3.el9_4.x86_64 | device-mapper-persistent-data-1.0.9-3.el9_4.x86_64 |
| dhcp-client-4.4.2-19.b1.el9.x86_64 | dhcp-client-4.4.2-19.b1.el9.x86_64 | dhcp-client-4.4.2-19.b1.el9.x86_64 |
| dhcp-common-4.4.2-19.b1.el9.noarch | dhcp-common-4.4.2-19.b1.el9.noarch | dhcp-common-4.4.2-19.b1.el9.noarch |
| diffutils-3.7-12.el9.x86_64 | diffutils-3.7-12.el9.x86_64 | diffutils-3.7-12.el9.x86_64 |
| dmidecode-3.5-3.el9.x86_64 | dmidecode-3.5-3.el9.x86_64 | dmidecode-3.5-3.el9.x86_64 |
| dnf-data-4.14.0-9.el9.noarch | dnf-data-4.14.0-9.el9.noarch | dnf-data-4.14.0-9.el9.noarch |
| dnsmasq-2.85-16.el9_4.x86_64 | dnsmasq-2.85-16.el9_4.x86_64 | dnsmasq-2.85-16.el9_4.x86_64 |
| dosfstools-4.2-3.el9.x86_64 | dosfstools-4.2-3.el9.x86_64 | dosfstools-4.2-3.el9.x86_64 |
| dracut-057-53.git20240104.el9.x86_64 | dracut-057-53.git20240104.el9.x86_64 | dracut-057-53.git20240104.el9.x86_64 |
| dwz-0.14-3.el9.x86_64 | dwz-0.14-3.el9.x86_64 | dwz-0.14-3.el9.x86_64 |
| e2fsprogs-1.46.5-5.el9.x86_64 | e2fsprogs-1.46.5-5.el9.x86_64 | e2fsprogs-1.46.5-5.el9.x86_64 |
| e2fsprogs-libs-1.46.5-5.el9.x86_64 | e2fsprogs-libs-1.46.5-5.el9.x86_64 | e2fsprogs-libs-1.46.5-5.el9.x86_64 |
| edk2-ovmf-20231122-6.el9_4.3.noarch | edk2-ovmf-20231122-6.el9_4.3.noarch | edk2-ovmf-20231122-6.el9_4.3.noarch |
| efi-srpm-macros-6-2.el9_0.noarch | efi-srpm-macros-6-2.el9_0.noarch | efi-srpm-macros-6-2.el9_0.noarch |
| elfutils-debuginfod-client-0.190-2.el9.x86_64 | elfutils-debuginfod-client-0.190-2.el9.x86_64 | elfutils-debuginfod-client-0.190-2.el9.x86_64 |
| elfutils-default-yama-scope-0.190-2.el9.noarch | elfutils-default-yama-scope-0.190-2.el9.noarch | elfutils-default-yama-scope-0.190-2.el9.noarch |
| elfutils-libelf-0.190-2.el9.x86_64 | elfutils-libelf-0.190-2.el9.x86_64 | elfutils-libelf-0.190-2.el9.x86_64 |
| elfutils-libs-0.190-2.el9.x86_64 | elfutils-libs-0.190-2.el9.x86_64 | elfutils-libs-0.190-2.el9.x86_64 |
| expat-2.5.0-2.el9_4.1.x86_64 | expat-2.5.0-2.el9_4.1.x86_64 | expat-2.5.0-2.el9_4.1.x86_64 |
| file-5.39-16.el9.x86_64 | file-5.39-16.el9.x86_64 | file-5.39-16.el9.x86_64 |
| file-libs-5.39-16.el9.x86_64 | file-libs-5.39-16.el9.x86_64 | file-libs-5.39-16.el9.x86_64 |
| filesystem-3.16-2.el9.x86_64 | filesystem-3.16-2.el9.x86_64 | filesystem-3.16-2.el9.x86_64 |
| findutils-4.8.0-6.el9.x86_64 | findutils-4.8.0-6.el9.x86_64 | findutils-4.8.0-6.el9.x86_64 |
| fonts-filesystem-2.0.5-7.el9.1.noarch | fonts-filesystem-2.0.5-7.el9.1.noarch | fonts-filesystem-2.0.5-7.el9.1.noarch |
| fonts-srpm-macros-2.0.5-7.el9.1.noarch | fonts-srpm-macros-2.0.5-7.el9.1.noarch | fonts-srpm-macros-2.0.5-7.el9.1.noarch |
| fuse-2.9.9-15.el9.x86_64 | fuse-2.9.9-15.el9.x86_64 | fuse-2.9.9-15.el9.x86_64 |
| fuse-common-3.10.2-8.el9.x86_64 | fuse-common-3.10.2-8.el9.x86_64 | fuse-common-3.10.2-8.el9.x86_64 |
| fuse-libs-2.9.9-15.el9.x86_64 | fuse-libs-2.9.9-15.el9.x86_64 | fuse-libs-2.9.9-15.el9.x86_64 |
| gawk-5.1.0-6.el9.x86_64 | gawk-5.1.0-6.el9.x86_64 | gawk-5.1.0-6.el9.x86_64 |
| gdbm-libs-1.19-4.el9.x86_64 | gdbm-libs-1.19-4.el9.x86_64 | gdbm-libs-1.19-4.el9.x86_64 |
| gdisk-1.0.7-5.el9.x86_64 | gdisk-1.0.7-5.el9.x86_64 | gdisk-1.0.7-5.el9.x86_64 |
| geolite2-city-20191217-6.el9.noarch | geolite2-city-20191217-6.el9.noarch | geolite2-city-20191217-6.el9.noarch |
| geolite2-country-20191217-6.el9.noarch | geolite2-country-20191217-6.el9.noarch | geolite2-country-20191217-6.el9.noarch |
| gettext-0.21-8.el9.x86_64 | gettext-0.21-8.el9.x86_64 | gettext-0.21-8.el9.x86_64 |
| gettext-libs-0.21-8.el9.x86_64 | gettext-libs-0.21-8.el9.x86_64 | gettext-libs-0.21-8.el9.x86_64 |
| ghc-srpm-macros-1.5.0-6.el9.noarch | ghc-srpm-macros-1.5.0-6.el9.noarch | ghc-srpm-macros-1.5.0-6.el9.noarch |
| glib-networking-2.68.3-3.el9.x86_64 | glib-networking-2.68.3-3.el9.x86_64 | glib-networking-2.68.3-3.el9.x86_64 |
| glib2-2.68.4-14.el9_4.1.x86_64 | glib2-2.68.4-14.el9_4.1.x86_64 | glib2-2.68.4-14.el9_4.1.x86_64 |
| glibc-2.34-100.el9_4.3.x86_64 | glibc-2.34-100.el9_4.4.x86_64 | glibc-2.34-100.el9_4.4.x86_64 |
| glibc-common-2.34-100.el9_4.3.x86_64 | glibc-common-2.34-100.el9_4.4.x86_64 | glibc-common-2.34-100.el9_4.4.x86_64 |
| glibc-gconv-extra-2.34-100.el9_4.3.x86_64 | glibc-gconv-extra-2.34-100.el9_4.4.x86_64 | glibc-gconv-extra-2.34-100.el9_4.4.x86_64 |
| | glibc-langpack-en-2.34-100.el9_4.4.x86_64 | glibc-langpack-en-2.34-100.el9_4.4.x86_64 |
| glibc-minimal-langpack-2.34-100.el9_4.3.x86_64 | glibc-minimal-langpack-2.34-100.el9_4.4.x86_64 | glibc-minimal-langpack-2.34-100.el9_4.4.x86_64 |
| gmp-6.2.0-13.el9.x86_64 | gmp-6.2.0-13.el9.x86_64 | gmp-6.2.0-13.el9.x86_64 |
| gnupg2-2.3.3-4.el9.x86_64 | gnupg2-2.3.3-4.el9.x86_64 | gnupg2-2.3.3-4.el9.x86_64 |
| gnutls-3.8.3-4.el9_4.x86_64 | gnutls-3.8.3-4.el9_4.x86_64 | gnutls-3.8.3-4.el9_4.x86_64 |
| gnutls-dane-3.8.3-4.el9_4.x86_64 | gnutls-dane-3.8.3-4.el9_4.x86_64 | gnutls-dane-3.8.3-4.el9_4.x86_64 |
| gnutls-utils-3.8.3-4.el9_4.x86_64 | gnutls-utils-3.8.3-4.el9_4.x86_64 | gnutls-utils-3.8.3-4.el9_4.x86_64 |
| go-srpm-macros-3.2.0-3.el9.noarch | go-srpm-macros-3.2.0-3.el9.noarch | go-srpm-macros-3.2.0-3.el9.noarch |
| gobject-introspection-1.68.0-11.el9.x86_64 | gobject-introspection-1.68.0-11.el9.x86_64 | gobject-introspection-1.68.0-11.el9.x86_64 |
| gpg-pubkey-5a6340b3-6229229e | gpg-pubkey-5a6340b3-6229229e | gpg-pubkey-5a6340b3-6229229e |
| gpg-pubkey-fd431d51-4ae0493b | gpg-pubkey-fd431d51-4ae0493b | gpg-pubkey-fd431d51-4ae0493b |
| gpgme-1.15.1-6.el9.x86_64 | gpgme-1.15.1-6.el9.x86_64 | gpgme-1.15.1-6.el9.x86_64 |
| grep-3.6-5.el9.x86_64 | grep-3.6-5.el9.x86_64 | grep-3.6-5.el9.x86_64 |
| groff-base-1.22.4-10.el9.x86_64 | groff-base-1.22.4-10.el9.x86_64 | groff-base-1.22.4-10.el9.x86_64 |
| gsettings-desktop-schemas-40.0-6.el9.x86_64 | gsettings-desktop-schemas-40.0-6.el9.x86_64 | gsettings-desktop-schemas-40.0-6.el9.x86_64 |
| gssproxy-0.8.4-6.el9.x86_64 | gssproxy-0.8.4-6.el9.x86_64 | gssproxy-0.8.4-6.el9.x86_64 |
| guestfs-tools-1.51.6-3.el9_4.x86_64 | guestfs-tools-1.51.6-3.el9_4.x86_64 | guestfs-tools-1.51.6-3.el9_4.x86_64 |
| gzip-1.12-1.el9.x86_64 | gzip-1.12-1.el9.x86_64 | gzip-1.12-1.el9.x86_64 |
| hexedit-1.6-1.el9.x86_64 | hexedit-1.6-1.el9.x86_64 | hexedit-1.6-1.el9.x86_64 |
| hivex-libs-1.3.21-3.el9.x86_64 | hivex-libs-1.3.21-3.el9.x86_64 | hivex-libs-1.3.21-3.el9.x86_64 |
| hwdata-0.348-9.13.el9.noarch | hwdata-0.348-9.13.el9.noarch | hwdata-0.348-9.13.el9.noarch |
| inih-49-6.el9.x86_64 | inih-49-6.el9.x86_64 | inih-49-6.el9.x86_64 |
| ipcalc-1.0.0-5.el9.x86_64 | ipcalc-1.0.0-5.el9.x86_64 | ipcalc-1.0.0-5.el9.x86_64 |
| iproute-6.2.0-6.el9_4.x86_64 | iproute-6.2.0-6.el9_4.x86_64 | iproute-6.2.0-6.el9_4.x86_64 |
| iproute-tc-6.2.0-6.el9_4.x86_64 | iproute-tc-6.2.0-6.el9_4.x86_64 | iproute-tc-6.2.0-6.el9_4.x86_64 |
| iptables-libs-1.8.10-4.el9_4.x86_64 | iptables-libs-1.8.10-4.el9_4.x86_64 | iptables-libs-1.8.10-4.el9_4.x86_64 |
| iptables-nft-1.8.10-4.el9_4.x86_64 | iptables-nft-1.8.10-4.el9_4.x86_64 | iptables-nft-1.8.10-4.el9_4.x86_64 |
| iputils-20210202-9.el9.x86_64 | iputils-20210202-9.el9.x86_64 | iputils-20210202-9.el9.x86_64 |
| ipxe-roms-qemu-20200823-9.git4bd064de.el9.noarch | ipxe-roms-qemu-20200823-9.git4bd064de.el9.noarch | ipxe-roms-qemu-20200823-9.git4bd064de.el9.noarch |
| jansson-2.14-1.el9.x86_64 | jansson-2.14-1.el9.x86_64 | jansson-2.14-1.el9.x86_64 |
| jose-11-3.el9.x86_64 | jose-11-3.el9.x86_64 | jose-11-3.el9.x86_64 |
| jq-1.6-16.el9.x86_64 | jq-1.6-16.el9.x86_64 | jq-1.6-16.el9.x86_64 |
| json-c-0.14-11.el9.x86_64 | json-c-0.14-11.el9.x86_64 | json-c-0.14-11.el9.x86_64 |
| json-glib-1.6.6-1.el9.x86_64 | json-glib-1.6.6-1.el9.x86_64 | json-glib-1.6.6-1.el9.x86_64 |
| kbd-2.4.0-9.el9.x86_64 | kbd-2.4.0-9.el9.x86_64 | kbd-2.4.0-9.el9.x86_64 |
| kbd-legacy-2.4.0-9.el9.noarch | kbd-legacy-2.4.0-9.el9.noarch | kbd-legacy-2.4.0-9.el9.noarch |
| kbd-misc-2.4.0-9.el9.noarch | kbd-misc-2.4.0-9.el9.noarch | kbd-misc-2.4.0-9.el9.noarch |
| kernel-core-5.14.0-427.35.1.el9_4.x86_64 | kernel-core-5.14.0-427.37.1.el9_4.x86_64 | kernel-core-5.14.0-427.40.1.el9_4.x86_64 |
| kernel-modules-core-5.14.0-427.35.1.el9_4.x86_64 | kernel-modules-core-5.14.0-427.37.1.el9_4.x86_64 | kernel-modules-core-5.14.0-427.40.1.el9_4.x86_64 |
| kernel-srpm-macros-1.0-13.el9.noarch | kernel-srpm-macros-1.0-13.el9.noarch | kernel-srpm-macros-1.0-13.el9.noarch |
| keyutils-1.6.3-1.el9.x86_64 | keyutils-1.6.3-1.el9.x86_64 | keyutils-1.6.3-1.el9.x86_64 |
| keyutils-libs-1.6.3-1.el9.x86_64 | keyutils-libs-1.6.3-1.el9.x86_64 | keyutils-libs-1.6.3-1.el9.x86_64 |
| kmod-28-9.el9.x86_64 | kmod-28-9.el9.x86_64 | kmod-28-9.el9.x86_64 |
| kmod-libs-28-9.el9.x86_64 | kmod-libs-28-9.el9.x86_64 | kmod-libs-28-9.el9.x86_64 |
| kpartx-0.8.7-27.el9.x86_64 | kpartx-0.8.7-27.el9.x86_64 | kpartx-0.8.7-27.el9.x86_64 |
| krb5-libs-1.21.1-2.el9_4.x86_64 | krb5-libs-1.21.1-2.el9_4.x86_64 | krb5-libs-1.21.1-2.el9_4.x86_64 |
| langpacks-core-en-3.0-16.el9.noarch | langpacks-core-en-3.0-16.el9.noarch | langpacks-core-en-3.0-16.el9.noarch |
| langpacks-core-font-en-3.0-16.el9.noarch | langpacks-core-font-en-3.0-16.el9.noarch | langpacks-core-font-en-3.0-16.el9.noarch |
| langpacks-en-3.0-16.el9.noarch | langpacks-en-3.0-16.el9.noarch | langpacks-en-3.0-16.el9.noarch |
| less-590-4.el9_4.x86_64 | less-590-4.el9_4.x86_64 | less-590-4.el9_4.x86_64 |
| libacl-2.3.1-4.el9.x86_64 | libacl-2.3.1-4.el9.x86_64 | libacl-2.3.1-4.el9.x86_64 |
| libaio-0.3.111-13.el9.x86_64 | libaio-0.3.111-13.el9.x86_64 | libaio-0.3.111-13.el9.x86_64 |
| libarchive-3.5.3-4.el9.x86_64 | libarchive-3.5.3-4.el9.x86_64 | libarchive-3.5.3-4.el9.x86_64 |
| libassuan-2.5.5-3.el9.x86_64 | libassuan-2.5.5-3.el9.x86_64 | libassuan-2.5.5-3.el9.x86_64 |
| libatomic-11.4.1-3.el9.x86_64 | libatomic-11.4.1-3.el9.x86_64 | libatomic-11.4.1-3.el9.x86_64 |
| libattr-2.5.1-3.el9.x86_64 | libattr-2.5.1-3.el9.x86_64 | libattr-2.5.1-3.el9.x86_64 |
| libbasicobjects-0.1.1-53.el9.x86_64 | libbasicobjects-0.1.1-53.el9.x86_64 | libbasicobjects-0.1.1-53.el9.x86_64 |
| libblkid-2.37.4-18.el9.x86_64 | libblkid-2.37.4-18.el9.x86_64 | libblkid-2.37.4-18.el9.x86_64 |
| libbpf-1.3.0-2.el9.x86_64 | libbpf-1.3.0-2.el9.x86_64 | libbpf-1.3.0-2.el9.x86_64 |
| libbrotli-1.0.9-6.el9.x86_64 | libbrotli-1.0.9-6.el9.x86_64 | libbrotli-1.0.9-6.el9.x86_64 |
| libcap-2.48-9.el9_2.x86_64 | libcap-2.48-9.el9_2.x86_64 | libcap-2.48-9.el9_2.x86_64 |
| libcap-ng-0.8.2-7.el9.x86_64 | libcap-ng-0.8.2-7.el9.x86_64 | libcap-ng-0.8.2-7.el9.x86_64 |
| libcbor-0.7.0-5.el9.x86_64 | libcbor-0.7.0-5.el9.x86_64 | libcbor-0.7.0-5.el9.x86_64 |
| libcollection-0.7.0-53.el9.x86_64 | libcollection-0.7.0-53.el9.x86_64 | libcollection-0.7.0-53.el9.x86_64 |
| libcom_err-1.46.5-5.el9.x86_64 | libcom_err-1.46.5-5.el9.x86_64 | libcom_err-1.46.5-5.el9.x86_64 |
| libconfig-1.7.2-9.el9.x86_64 | libconfig-1.7.2-9.el9.x86_64 | libconfig-1.7.2-9.el9.x86_64 |
| libcurl-minimal-7.76.1-29.el9_4.1.x86_64 | libcurl-minimal-7.76.1-29.el9_4.1.x86_64 | libcurl-minimal-7.76.1-29.el9_4.1.x86_64 |
| libdb-5.3.28-53.el9.x86_64 | libdb-5.3.28-53.el9.x86_64 | libdb-5.3.28-53.el9.x86_64 |
| libdnf-0.69.0-8.el9_4.1.x86_64 | libdnf-0.69.0-8.el9_4.1.x86_64 | libdnf-0.69.0-8.el9_4.1.x86_64 |
| libeconf-0.4.1-3.el9_2.x86_64 | libeconf-0.4.1-3.el9_2.x86_64 | libeconf-0.4.1-3.el9_2.x86_64 |
| libedit-3.1-38.20210216cvs.el9.x86_64 | libedit-3.1-38.20210216cvs.el9.x86_64 | libedit-3.1-38.20210216cvs.el9.x86_64 |
| libev-4.33-5.el9.x86_64 | libev-4.33-5.el9.x86_64 | libev-4.33-5.el9.x86_64 |
| libevent-2.1.12-8.el9_4.x86_64 | libevent-2.1.12-8.el9_4.x86_64 | libevent-2.1.12-8.el9_4.x86_64 |
| libfdisk-2.37.4-18.el9.x86_64 | libfdisk-2.37.4-18.el9.x86_64 | libfdisk-2.37.4-18.el9.x86_64 |
| libfdt-1.6.0-7.el9.x86_64 | libfdt-1.6.0-7.el9.x86_64 | libfdt-1.6.0-7.el9.x86_64 |
| libffi-3.4.2-8.el9.x86_64 | libffi-3.4.2-8.el9.x86_64 | libffi-3.4.2-8.el9.x86_64 |
| libfido2-1.13.0-2.el9.x86_64 | libfido2-1.13.0-2.el9.x86_64 | libfido2-1.13.0-2.el9.x86_64 |
| libgcc-11.4.1-3.el9.x86_64 | libgcc-11.4.1-3.el9.x86_64 | libgcc-11.4.1-3.el9.x86_64 |
| libgcrypt-1.10.0-10.el9_2.x86_64 | libgcrypt-1.10.0-10.el9_2.x86_64 | libgcrypt-1.10.0-10.el9_2.x86_64 |
| libgomp-11.4.1-3.el9.x86_64 | libgomp-11.4.1-3.el9.x86_64 | libgomp-11.4.1-3.el9.x86_64 |
| libgpg-error-1.42-5.el9.x86_64 | libgpg-error-1.42-5.el9.x86_64 | libgpg-error-1.42-5.el9.x86_64 |
| libguestfs-1.50.1-8.el9_4.x86_64 | libguestfs-1.50.1-8.el9_4.x86_64 | libguestfs-1.50.1-8.el9_4.x86_64 |
| libguestfs-appliance-1.50.1-8.el9_4.x86_64 | libguestfs-appliance-1.50.1-8.el9_4.x86_64 | libguestfs-appliance-1.50.1-8.el9_4.x86_64 |
| libguestfs-winsupport-9.3-1.el9_3.x86_64 | libguestfs-winsupport-9.3-1.el9_3.x86_64 | libguestfs-winsupport-9.3-1.el9_3.x86_64 |
| libguestfs-xfs-1.50.1-8.el9_4.x86_64 | libguestfs-xfs-1.50.1-8.el9_4.x86_64 | libguestfs-xfs-1.50.1-8.el9_4.x86_64 |
| libibverbs-48.0-1.el9.x86_64 | libibverbs-48.0-1.el9.x86_64 | libibverbs-48.0-1.el9.x86_64 |
| libicu-67.1-9.el9.x86_64 | libicu-67.1-9.el9.x86_64 | libicu-67.1-9.el9.x86_64 |
| libidn2-2.3.0-7.el9.x86_64 | libidn2-2.3.0-7.el9.x86_64 | libidn2-2.3.0-7.el9.x86_64 |
| libini_config-1.3.1-53.el9.x86_64 | libini_config-1.3.1-53.el9.x86_64 | libini_config-1.3.1-53.el9.x86_64 |
| libjose-11-3.el9.x86_64 | libjose-11-3.el9.x86_64 | libjose-11-3.el9.x86_64 |
| libkcapi-1.4.0-2.el9.x86_64 | libkcapi-1.4.0-2.el9.x86_64 | libkcapi-1.4.0-2.el9.x86_64 |
| libkcapi-hmaccalc-1.4.0-2.el9.x86_64 | libkcapi-hmaccalc-1.4.0-2.el9.x86_64 | libkcapi-hmaccalc-1.4.0-2.el9.x86_64 |
| libksba-1.5.1-6.el9_1.x86_64 | libksba-1.5.1-6.el9_1.x86_64 | libksba-1.5.1-6.el9_1.x86_64 |
| libluksmeta-9-12.el9.x86_64 | libluksmeta-9-12.el9.x86_64 | libluksmeta-9-12.el9.x86_64 |
| libmaxminddb-1.5.2-3.el9.x86_64 | libmaxminddb-1.5.2-3.el9.x86_64 | libmaxminddb-1.5.2-3.el9.x86_64 |
| libmnl-1.0.4-16.el9_4.x86_64 | libmnl-1.0.4-16.el9_4.x86_64 | libmnl-1.0.4-16.el9_4.x86_64 |
| libmodulemd-2.13.0-2.el9.x86_64 | libmodulemd-2.13.0-2.el9.x86_64 | libmodulemd-2.13.0-2.el9.x86_64 |
| libmount-2.37.4-18.el9.x86_64 | libmount-2.37.4-18.el9.x86_64 | libmount-2.37.4-18.el9.x86_64 |
| libnbd-1.18.1-4.el9_4.x86_64 | libnbd-1.18.1-4.el9_4.x86_64 | libnbd-1.18.1-4.el9_4.x86_64 |
| libnetfilter_conntrack-1.0.9-1.el9.x86_64 | libnetfilter_conntrack-1.0.9-1.el9.x86_64 | libnetfilter_conntrack-1.0.9-1.el9.x86_64 |
| libnfnetlink-1.0.1-21.el9.x86_64 | libnfnetlink-1.0.1-21.el9.x86_64 | libnfnetlink-1.0.1-21.el9.x86_64 |
| libnfsidmap-2.5.4-26.el9_4.x86_64 | libnfsidmap-2.5.4-26.el9_4.x86_64 | libnfsidmap-2.5.4-26.el9_4.x86_64 |
| libnftnl-1.2.6-4.el9_4.x86_64 | libnftnl-1.2.6-4.el9_4.x86_64 | libnftnl-1.2.6-4.el9_4.x86_64 |
| libnghttp2-1.43.0-5.el9_4.3.x86_64 | libnghttp2-1.43.0-5.el9_4.3.x86_64 | libnghttp2-1.43.0-5.el9_4.3.x86_64 |
| libnl3-3.9.0-1.el9.x86_64 | libnl3-3.9.0-1.el9.x86_64 | libnl3-3.9.0-1.el9.x86_64 |
| libosinfo-1.10.0-1.el9.x86_64 | libosinfo-1.10.0-1.el9.x86_64 | libosinfo-1.10.0-1.el9.x86_64 |
| libpath_utils-0.2.1-53.el9.x86_64 | libpath_utils-0.2.1-53.el9.x86_64 | libpath_utils-0.2.1-53.el9.x86_64 |
| libpeas-1.30.0-4.el9.x86_64 | libpeas-1.30.0-4.el9.x86_64 | libpeas-1.30.0-4.el9.x86_64 |
| libpipeline-1.5.3-4.el9.x86_64 | libpipeline-1.5.3-4.el9.x86_64 | libpipeline-1.5.3-4.el9.x86_64 |
| libpkgconf-1.7.3-10.el9.x86_64 | libpkgconf-1.7.3-10.el9.x86_64 | libpkgconf-1.7.3-10.el9.x86_64 |
| libpmem-1.12.1-1.el9.x86_64 | libpmem-1.12.1-1.el9.x86_64 | libpmem-1.12.1-1.el9.x86_64 |
| libpng-1.6.37-12.el9.x86_64 | libpng-1.6.37-12.el9.x86_64 | libpng-1.6.37-12.el9.x86_64 |
| libproxy-0.4.15-35.el9.x86_64 | libproxy-0.4.15-35.el9.x86_64 | libproxy-0.4.15-35.el9.x86_64 |
| libproxy-webkitgtk4-0.4.15-35.el9.x86_64 | libproxy-webkitgtk4-0.4.15-35.el9.x86_64 | libproxy-webkitgtk4-0.4.15-35.el9.x86_64 |
| libpsl-0.21.1-5.el9.x86_64 | libpsl-0.21.1-5.el9.x86_64 | libpsl-0.21.1-5.el9.x86_64 |
| libpwquality-1.4.4-8.el9.x86_64 | libpwquality-1.4.4-8.el9.x86_64 | libpwquality-1.4.4-8.el9.x86_64 |
| librdmacm-48.0-1.el9.x86_64 | librdmacm-48.0-1.el9.x86_64 | librdmacm-48.0-1.el9.x86_64 |
| libref_array-0.1.5-53.el9.x86_64 | libref_array-0.1.5-53.el9.x86_64 | libref_array-0.1.5-53.el9.x86_64 |
| librepo-1.14.5-2.el9.x86_64 | librepo-1.14.5-2.el9.x86_64 | librepo-1.14.5-2.el9.x86_64 |
| libreport-filesystem-2.15.2-6.el9.noarch | libreport-filesystem-2.15.2-6.el9.noarch | libreport-filesystem-2.15.2-6.el9.noarch |
| librhsm-0.0.3-7.el9_3.1.x86_64 | librhsm-0.0.3-7.el9_3.1.x86_64 | librhsm-0.0.3-7.el9_3.1.x86_64 |
| libseccomp-2.5.2-2.el9.x86_64 | libseccomp-2.5.2-2.el9.x86_64 | libseccomp-2.5.2-2.el9.x86_64 |
| libselinux-3.6-1.el9.x86_64 | libselinux-3.6-1.el9.x86_64 | libselinux-3.6-1.el9.x86_64 |
| libselinux-utils-3.6-1.el9.x86_64 | libselinux-utils-3.6-1.el9.x86_64 | libselinux-utils-3.6-1.el9.x86_64 |
| libsemanage-3.6-1.el9.x86_64 | libsemanage-3.6-1.el9.x86_64 | libsemanage-3.6-1.el9.x86_64 |
| libsepol-3.6-1.el9.x86_64 | libsepol-3.6-1.el9.x86_64 | libsepol-3.6-1.el9.x86_64 |
| libsigsegv-2.13-4.el9.x86_64 | libsigsegv-2.13-4.el9.x86_64 | libsigsegv-2.13-4.el9.x86_64 |
| libslirp-4.4.0-7.el9.x86_64 | libslirp-4.4.0-7.el9.x86_64 | libslirp-4.4.0-7.el9.x86_64 |
| libsmartcols-2.37.4-18.el9.x86_64 | libsmartcols-2.37.4-18.el9.x86_64 | libsmartcols-2.37.4-18.el9.x86_64 |
| libsolv-0.7.24-2.el9.x86_64 | libsolv-0.7.24-2.el9.x86_64 | libsolv-0.7.24-2.el9.x86_64 |
| libsoup-2.72.0-8.el9.x86_64 | libsoup-2.72.0-8.el9.x86_64 | libsoup-2.72.0-8.el9.x86_64 |
| libss-1.46.5-5.el9.x86_64 | libss-1.46.5-5.el9.x86_64 | libss-1.46.5-5.el9.x86_64 |
| libssh-0.10.4-13.el9.x86_64 | libssh-0.10.4-13.el9.x86_64 | libssh-0.10.4-13.el9.x86_64 |
| libssh-config-0.10.4-13.el9.noarch | libssh-config-0.10.4-13.el9.noarch | libssh-config-0.10.4-13.el9.noarch |
| libstdc++-11.4.1-3.el9.x86_64 | libstdc++-11.4.1-3.el9.x86_64 | libstdc++-11.4.1-3.el9.x86_64 |
| libtasn1-4.16.0-8.el9_1.x86_64 | libtasn1-4.16.0-8.el9_1.x86_64 | libtasn1-4.16.0-8.el9_1.x86_64 |
| libtirpc-1.3.3-8.el9_4.x86_64 | libtirpc-1.3.3-8.el9_4.x86_64 | libtirpc-1.3.3-8.el9_4.x86_64 |
| libtpms-0.9.1-3.20211126git1ff6fe1f43.el9_2.x86_64 | libtpms-0.9.1-4.20211126git1ff6fe1f43.el9_2.x86_64 | libtpms-0.9.1-4.20211126git1ff6fe1f43.el9_2.x86_64 |
| libunistring-0.9.10-15.el9.x86_64 | libunistring-0.9.10-15.el9.x86_64 | libunistring-0.9.10-15.el9.x86_64 |
| liburing-2.5-1.el9.x86_64 | liburing-2.5-1.el9.x86_64 | liburing-2.5-1.el9.x86_64 |
| libusbx-1.0.26-1.el9.x86_64 | libusbx-1.0.26-1.el9.x86_64 | libusbx-1.0.26-1.el9.x86_64 |
| libutempter-1.2.1-6.el9.x86_64 | libutempter-1.2.1-6.el9.x86_64 | libutempter-1.2.1-6.el9.x86_64 |
| libuuid-2.37.4-18.el9.x86_64 | libuuid-2.37.4-18.el9.x86_64 | libuuid-2.37.4-18.el9.x86_64 |
| libverto-0.3.2-3.el9.x86_64 | libverto-0.3.2-3.el9.x86_64 | libverto-0.3.2-3.el9.x86_64 |
| libverto-libev-0.3.2-3.el9.x86_64 | libverto-libev-0.3.2-3.el9.x86_64 | libverto-libev-0.3.2-3.el9.x86_64 |
| libvirt-client-10.0.0-6.7.el9_4.x86_64 | libvirt-client-10.0.0-6.7.el9_4.x86_64 | libvirt-client-10.0.0-6.7.el9_4.x86_64 |
| libvirt-daemon-common-10.0.0-6.7.el9_4.x86_64 | libvirt-daemon-common-10.0.0-6.7.el9_4.x86_64 | libvirt-daemon-common-10.0.0-6.7.el9_4.x86_64 |
| libvirt-daemon-config-network-10.0.0-6.7.el9_4.x86_64 | libvirt-daemon-config-network-10.0.0-6.7.el9_4.x86_64 | libvirt-daemon-config-network-10.0.0-6.7.el9_4.x86_64 |
| libvirt-daemon-driver-network-10.0.0-6.7.el9_4.x86_64 | libvirt-daemon-driver-network-10.0.0-6.7.el9_4.x86_64 | libvirt-daemon-driver-network-10.0.0-6.7.el9_4.x86_64 |
| libvirt-daemon-driver-qemu-10.0.0-6.7.el9_4.x86_64 | libvirt-daemon-driver-qemu-10.0.0-6.7.el9_4.x86_64 | libvirt-daemon-driver-qemu-10.0.0-6.7.el9_4.x86_64 |
| libvirt-daemon-driver-secret-10.0.0-6.7.el9_4.x86_64 | libvirt-daemon-driver-secret-10.0.0-6.7.el9_4.x86_64 | libvirt-daemon-driver-secret-10.0.0-6.7.el9_4.x86_64 |
| libvirt-daemon-driver-storage-core-10.0.0-6.7.el9_4.x86_64 | libvirt-daemon-driver-storage-core-10.0.0-6.7.el9_4.x86_64 | libvirt-daemon-driver-storage-core-10.0.0-6.7.el9_4.x86_64 |
| libvirt-daemon-log-10.0.0-6.7.el9_4.x86_64 | libvirt-daemon-log-10.0.0-6.7.el9_4.x86_64 | libvirt-daemon-log-10.0.0-6.7.el9_4.x86_64 |
| libvirt-libs-10.0.0-6.7.el9_4.x86_64 | libvirt-libs-10.0.0-6.7.el9_4.x86_64 | libvirt-libs-10.0.0-6.7.el9_4.x86_64 |
| libxcrypt-4.4.18-3.el9.x86_64 | libxcrypt-4.4.18-3.el9.x86_64 | libxcrypt-4.4.18-3.el9.x86_64 |
| libxcrypt-compat-4.4.18-3.el9.x86_64 | libxcrypt-compat-4.4.18-3.el9.x86_64 | libxcrypt-compat-4.4.18-3.el9.x86_64 |
| libxml2-2.9.13-6.el9_4.x86_64 | libxml2-2.9.13-6.el9_4.x86_64 | libxml2-2.9.13-6.el9_4.x86_64 |
| libxslt-1.1.34-9.el9.x86_64 | libxslt-1.1.34-9.el9.x86_64 | libxslt-1.1.34-9.el9.x86_64 |
| libyaml-0.2.5-7.el9.x86_64 | libyaml-0.2.5-7.el9.x86_64 | libyaml-0.2.5-7.el9.x86_64 |
| libzstd-1.5.1-2.el9.x86_64 | libzstd-1.5.1-2.el9.x86_64 | libzstd-1.5.1-2.el9.x86_64 |
| linux-firmware-20240716-143.2.el9_4.noarch | linux-firmware-20240905-143.3.el9_4.noarch | linux-firmware-20240905-143.3.el9_4.noarch |
| linux-firmware-whence-20240716-143.2.el9_4.noarch | linux-firmware-whence-20240905-143.3.el9_4.noarch | linux-firmware-whence-20240905-143.3.el9_4.noarch |
| lsscsi-0.32-6.el9.x86_64 | lsscsi-0.32-6.el9.x86_64 | lsscsi-0.32-6.el9.x86_64 |
| lua-libs-5.4.4-4.el9.x86_64 | lua-libs-5.4.4-4.el9.x86_64 | lua-libs-5.4.4-4.el9.x86_64 |
| lua-srpm-macros-1-6.el9.noarch | lua-srpm-macros-1-6.el9.noarch | lua-srpm-macros-1-6.el9.noarch |
| luksmeta-9-12.el9.x86_64 | luksmeta-9-12.el9.x86_64 | luksmeta-9-12.el9.x86_64 |
| lvm2-2.03.23-2.el9.x86_64 | lvm2-2.03.23-2.el9.x86_64 | lvm2-2.03.23-2.el9.x86_64 |
| lvm2-libs-2.03.23-2.el9.x86_64 | lvm2-libs-2.03.23-2.el9.x86_64 | lvm2-libs-2.03.23-2.el9.x86_64 |
| lz4-libs-1.9.3-5.el9.x86_64 | lz4-libs-1.9.3-5.el9.x86_64 | lz4-libs-1.9.3-5.el9.x86_64 |
| lzo-2.10-7.el9.x86_64 | lzo-2.10-7.el9.x86_64 | lzo-2.10-7.el9.x86_64 |
| lzop-1.04-8.el9.x86_64 | lzop-1.04-8.el9.x86_64 | lzop-1.04-8.el9.x86_64 |
| man-db-2.9.3-7.el9.x86_64 | man-db-2.9.3-7.el9.x86_64 | man-db-2.9.3-7.el9.x86_64 |
| mdadm-4.2-14.el9_4.x86_64 | mdadm-4.2-14.el9_4.x86_64 | mdadm-4.2-14.el9_4.x86_64 |
| microdnf-3.9.1-3.el9.x86_64 | microdnf-3.9.1-3.el9.x86_64 | microdnf-3.9.1-3.el9.x86_64 |
| mingw-binutils-generic-2.41-3.el9.x86_64 | mingw-binutils-generic-2.41-3.el9.x86_64 | mingw-binutils-generic-2.41-3.el9.x86_64 |
| mingw-filesystem-base-148-3.el9.noarch | mingw-filesystem-base-148-3.el9.noarch | mingw-filesystem-base-148-3.el9.noarch |
| mingw32-crt-11.0.1-3.el9.noarch | mingw32-crt-11.0.1-3.el9.noarch | mingw32-crt-11.0.1-3.el9.noarch |
| mingw32-filesystem-148-3.el9.noarch | mingw32-filesystem-148-3.el9.noarch | mingw32-filesystem-148-3.el9.noarch |
| mingw32-srvany-1.1-3.el9.noarch | mingw32-srvany-1.1-3.el9.noarch | mingw32-srvany-1.1-3.el9.noarch |
| mpfr-4.1.0-7.el9.x86_64 | mpfr-4.1.0-7.el9.x86_64 | mpfr-4.1.0-7.el9.x86_64 |
| mtools-4.0.26-4.el9_0.x86_64 | mtools-4.0.26-4.el9_0.x86_64 | mtools-4.0.26-4.el9_0.x86_64 |
| nbdkit-1.36.2-1.el9.x86_64 | nbdkit-1.36.2-1.el9.x86_64 | nbdkit-1.36.2-1.el9.x86_64 |
| nbdkit-basic-filters-1.36.2-1.el9.x86_64 | nbdkit-basic-filters-1.36.2-1.el9.x86_64 | nbdkit-basic-filters-1.36.2-1.el9.x86_64 |
| nbdkit-basic-plugins-1.36.2-1.el9.x86_64 | nbdkit-basic-plugins-1.36.2-1.el9.x86_64 | nbdkit-basic-plugins-1.36.2-1.el9.x86_64 |
| nbdkit-curl-plugin-1.36.2-1.el9.x86_64 | nbdkit-curl-plugin-1.36.2-1.el9.x86_64 | nbdkit-curl-plugin-1.36.2-1.el9.x86_64 |
| nbdkit-nbd-plugin-1.36.2-1.el9.x86_64 | nbdkit-nbd-plugin-1.36.2-1.el9.x86_64 | nbdkit-nbd-plugin-1.36.2-1.el9.x86_64 |
| nbdkit-python-plugin-1.36.2-1.el9.x86_64 | nbdkit-python-plugin-1.36.2-1.el9.x86_64 | nbdkit-python-plugin-1.36.2-1.el9.x86_64 |
| nbdkit-server-1.36.2-1.el9.x86_64 | nbdkit-server-1.36.2-1.el9.x86_64 | nbdkit-server-1.36.2-1.el9.x86_64 |
| nbdkit-ssh-plugin-1.36.2-1.el9.x86_64 | nbdkit-ssh-plugin-1.36.2-1.el9.x86_64 | nbdkit-ssh-plugin-1.36.2-1.el9.x86_64 |
| nbdkit-vddk-plugin-1.36.2-1.el9.x86_64 | nbdkit-vddk-plugin-1.36.2-1.el9.x86_64 | nbdkit-vddk-plugin-1.36.2-1.el9.x86_64 |
| ncurses-6.2-10.20210508.el9.x86_64 | ncurses-6.2-10.20210508.el9.x86_64 | ncurses-6.2-10.20210508.el9.x86_64 |
| ncurses-base-6.2-10.20210508.el9.noarch | ncurses-base-6.2-10.20210508.el9.noarch | ncurses-base-6.2-10.20210508.el9.noarch |
| ncurses-libs-6.2-10.20210508.el9.x86_64 | ncurses-libs-6.2-10.20210508.el9.x86_64 | ncurses-libs-6.2-10.20210508.el9.x86_64 |
| ndctl-libs-71.1-8.el9.x86_64 | ndctl-libs-71.1-8.el9.x86_64 | ndctl-libs-71.1-8.el9.x86_64 |
| nettle-3.9.1-1.el9.x86_64 | nettle-3.9.1-1.el9.x86_64 | nettle-3.9.1-1.el9.x86_64 |
| nfs-utils-2.5.4-26.el9_4.x86_64 | nfs-utils-2.5.4-26.el9_4.x86_64 | nfs-utils-2.5.4-26.el9_4.x86_64 |
| npth-1.6-8.el9.x86_64 | npth-1.6-8.el9.x86_64 | npth-1.6-8.el9.x86_64 |
| numactl-libs-2.0.16-3.el9.x86_64 | numactl-libs-2.0.16-3.el9.x86_64 | numactl-libs-2.0.16-3.el9.x86_64 |
| numad-0.5-37.20150602git.el9.x86_64 | numad-0.5-37.20150602git.el9.x86_64 | numad-0.5-37.20150602git.el9.x86_64 |
| ocaml-srpm-macros-6-6.el9.noarch | ocaml-srpm-macros-6-6.el9.noarch | ocaml-srpm-macros-6-6.el9.noarch |
| oniguruma-6.9.6-1.el9.5.x86_64 | oniguruma-6.9.6-1.el9.5.x86_64 | oniguruma-6.9.6-1.el9.5.x86_64 |
| openblas-srpm-macros-2-11.el9.noarch | openblas-srpm-macros-2-11.el9.noarch | openblas-srpm-macros-2-11.el9.noarch |
| openldap-2.6.6-3.el9.x86_64 | openldap-2.6.6-3.el9.x86_64 | openldap-2.6.6-3.el9.x86_64 |
| openssh-8.7p1-38.el9_4.4.x86_64 | openssh-8.7p1-38.el9_4.4.x86_64 | openssh-8.7p1-38.el9_4.4.x86_64 |
| openssh-clients-8.7p1-38.el9_4.4.x86_64 | openssh-clients-8.7p1-38.el9_4.4.x86_64 | openssh-clients-8.7p1-38.el9_4.4.x86_64 |
| openssl-3.0.7-28.el9_4.x86_64 | openssl-3.0.7-28.el9_4.x86_64 | openssl-3.0.7-28.el9_4.x86_64 |
| openssl-fips-provider-3.0.7-2.el9.x86_64 | openssl-fips-provider-3.0.7-2.el9.x86_64 | openssl-fips-provider-3.0.7-2.el9.x86_64 |
| openssl-libs-3.0.7-28.el9_4.x86_64 | openssl-libs-3.0.7-28.el9_4.x86_64 | openssl-libs-3.0.7-28.el9_4.x86_64 |
| osinfo-db-20231215-1.el9.noarch | osinfo-db-20231215-1.el9.noarch | osinfo-db-20231215-1.el9.noarch |
| osinfo-db-tools-1.10.0-1.el9.x86_64 | osinfo-db-tools-1.10.0-1.el9.x86_64 | osinfo-db-tools-1.10.0-1.el9.x86_64 |
| p11-kit-0.25.3-2.el9.x86_64 | p11-kit-0.25.3-2.el9.x86_64 | p11-kit-0.25.3-2.el9.x86_64 |
| p11-kit-trust-0.25.3-2.el9.x86_64 | p11-kit-trust-0.25.3-2.el9.x86_64 | p11-kit-trust-0.25.3-2.el9.x86_64 |
| pam-1.5.1-19.el9.x86_64 | pam-1.5.1-19.el9.x86_64 | pam-1.5.1-19.el9.x86_64 |
| parted-3.5-2.el9.x86_64 | parted-3.5-2.el9.x86_64 | parted-3.5-2.el9.x86_64 |
| passt-0^20231204.gb86afe3-1.el9.x86_64 | passt-0^20231204.gb86afe3-1.el9.x86_64 | passt-0^20231204.gb86afe3-1.el9.x86_64 |
| passt-selinux-0^20231204.gb86afe3-1.el9.noarch | passt-selinux-0^20231204.gb86afe3-1.el9.noarch | passt-selinux-0^20231204.gb86afe3-1.el9.noarch |
| pcre-8.44-3.el9.3.x86_64 | pcre-8.44-3.el9.3.x86_64 | pcre-8.44-3.el9.3.x86_64 |
| pcre2-10.40-5.el9.x86_64 | pcre2-10.40-5.el9.x86_64 | pcre2-10.40-5.el9.x86_64 |
| pcre2-syntax-10.40-5.el9.noarch | pcre2-syntax-10.40-5.el9.noarch | pcre2-syntax-10.40-5.el9.noarch |
| perl-AutoLoader-5.74-481.el9.noarch | perl-AutoLoader-5.74-481.el9.noarch | perl-AutoLoader-5.74-481.el9.noarch |
| perl-B-1.80-481.el9.x86_64 | perl-B-1.80-481.el9.x86_64 | perl-B-1.80-481.el9.x86_64 |
| perl-base-2.27-481.el9.noarch | perl-base-2.27-481.el9.noarch | perl-base-2.27-481.el9.noarch |
| perl-Carp-1.50-460.el9.noarch | perl-Carp-1.50-460.el9.noarch | perl-Carp-1.50-460.el9.noarch |
| perl-Class-Struct-0.66-481.el9.noarch | perl-Class-Struct-0.66-481.el9.noarch | perl-Class-Struct-0.66-481.el9.noarch |
| perl-constant-1.33-461.el9.noarch | perl-constant-1.33-461.el9.noarch | perl-constant-1.33-461.el9.noarch |
| perl-Data-Dumper-2.174-462.el9.x86_64 | perl-Data-Dumper-2.174-462.el9.x86_64 | perl-Data-Dumper-2.174-462.el9.x86_64 |
| perl-Digest-1.19-4.el9.noarch | perl-Digest-1.19-4.el9.noarch | perl-Digest-1.19-4.el9.noarch |
| perl-Digest-MD5-2.58-4.el9.x86_64 | perl-Digest-MD5-2.58-4.el9.x86_64 | perl-Digest-MD5-2.58-4.el9.x86_64 |
| perl-Encode-3.08-462.el9.x86_64 | perl-Encode-3.08-462.el9.x86_64 | perl-Encode-3.08-462.el9.x86_64 |
| perl-Errno-1.30-481.el9.x86_64 | perl-Errno-1.30-481.el9.x86_64 | perl-Errno-1.30-481.el9.x86_64 |
| perl-Exporter-5.74-461.el9.noarch | perl-Exporter-5.74-461.el9.noarch | perl-Exporter-5.74-461.el9.noarch |
| perl-Fcntl-1.13-481.el9.x86_64 | perl-Fcntl-1.13-481.el9.x86_64 | perl-Fcntl-1.13-481.el9.x86_64 |
| perl-File-Basename-2.85-481.el9.noarch | perl-File-Basename-2.85-481.el9.noarch | perl-File-Basename-2.85-481.el9.noarch |
| perl-File-Path-2.18-4.el9.noarch | perl-File-Path-2.18-4.el9.noarch | perl-File-Path-2.18-4.el9.noarch |
| perl-File-stat-1.09-481.el9.noarch | perl-File-stat-1.09-481.el9.noarch | perl-File-stat-1.09-481.el9.noarch |
| perl-File-Temp-0.231.100-4.el9.noarch | perl-File-Temp-0.231.100-4.el9.noarch | perl-File-Temp-0.231.100-4.el9.noarch |
| perl-FileHandle-2.03-481.el9.noarch | perl-FileHandle-2.03-481.el9.noarch | perl-FileHandle-2.03-481.el9.noarch |
| perl-Getopt-Long-2.52-4.el9.noarch | perl-Getopt-Long-2.52-4.el9.noarch | perl-Getopt-Long-2.52-4.el9.noarch |
| perl-Getopt-Std-1.12-481.el9.noarch | perl-Getopt-Std-1.12-481.el9.noarch | perl-Getopt-Std-1.12-481.el9.noarch |
| perl-HTTP-Tiny-0.076-462.el9.noarch | perl-HTTP-Tiny-0.076-462.el9.noarch | perl-HTTP-Tiny-0.076-462.el9.noarch |
| perl-if-0.60.800-481.el9.noarch | perl-if-0.60.800-481.el9.noarch | perl-if-0.60.800-481.el9.noarch |
| perl-interpreter-5.32.1-481.el9.x86_64 | perl-interpreter-5.32.1-481.el9.x86_64 | perl-interpreter-5.32.1-481.el9.x86_64 |
| perl-IO-1.43-481.el9.x86_64 | perl-IO-1.43-481.el9.x86_64 | perl-IO-1.43-481.el9.x86_64 |
| perl-IO-Socket-IP-0.41-5.el9.noarch | perl-IO-Socket-IP-0.41-5.el9.noarch | perl-IO-Socket-IP-0.41-5.el9.noarch |
| perl-IO-Socket-SSL-2.073-1.el9.noarch | perl-IO-Socket-SSL-2.073-1.el9.noarch | perl-IO-Socket-SSL-2.073-1.el9.noarch |
| perl-IPC-Open3-1.21-481.el9.noarch | perl-IPC-Open3-1.21-481.el9.noarch | perl-IPC-Open3-1.21-481.el9.noarch |
| perl-libnet-3.13-4.el9.noarch | perl-libnet-3.13-4.el9.noarch | perl-libnet-3.13-4.el9.noarch |
| perl-libs-5.32.1-481.el9.x86_64 | perl-libs-5.32.1-481.el9.x86_64 | perl-libs-5.32.1-481.el9.x86_64 |
| perl-MIME-Base64-3.16-4.el9.x86_64 | perl-MIME-Base64-3.16-4.el9.x86_64 | perl-MIME-Base64-3.16-4.el9.x86_64 |
| perl-Mozilla-CA-20200520-6.el9.noarch | perl-Mozilla-CA-20200520-6.el9.noarch | perl-Mozilla-CA-20200520-6.el9.noarch |
| perl-mro-1.23-481.el9.x86_64 | perl-mro-1.23-481.el9.x86_64 | perl-mro-1.23-481.el9.x86_64 |
| perl-NDBM_File-1.15-481.el9.x86_64 | perl-NDBM_File-1.15-481.el9.x86_64 | perl-NDBM_File-1.15-481.el9.x86_64 |
| perl-Net-SSLeay-1.92-2.el9.x86_64 | perl-Net-SSLeay-1.92-2.el9.x86_64 | perl-Net-SSLeay-1.92-2.el9.x86_64 |
| perl-overload-1.31-481.el9.noarch | perl-overload-1.31-481.el9.noarch | perl-overload-1.31-481.el9.noarch |
| perl-overloading-0.02-481.el9.noarch | perl-overloading-0.02-481.el9.noarch | perl-overloading-0.02-481.el9.noarch |
| perl-parent-0.238-460.el9.noarch | perl-parent-0.238-460.el9.noarch | perl-parent-0.238-460.el9.noarch |
| perl-PathTools-3.78-461.el9.x86_64 | perl-PathTools-3.78-461.el9.x86_64 | perl-PathTools-3.78-461.el9.x86_64 |
| perl-Pod-Escapes-1.07-460.el9.noarch | perl-Pod-Escapes-1.07-460.el9.noarch | perl-Pod-Escapes-1.07-460.el9.noarch |
| perl-Pod-Perldoc-3.28.01-461.el9.noarch | perl-Pod-Perldoc-3.28.01-461.el9.noarch | perl-Pod-Perldoc-3.28.01-461.el9.noarch |
| perl-Pod-Simple-3.42-4.el9.noarch | perl-Pod-Simple-3.42-4.el9.noarch | perl-Pod-Simple-3.42-4.el9.noarch |
| perl-Pod-Usage-2.01-4.el9.noarch | perl-Pod-Usage-2.01-4.el9.noarch | perl-Pod-Usage-2.01-4.el9.noarch |
| perl-podlators-4.14-460.el9.noarch | perl-podlators-4.14-460.el9.noarch | perl-podlators-4.14-460.el9.noarch |
| perl-POSIX-1.94-481.el9.x86_64 | perl-POSIX-1.94-481.el9.x86_64 | perl-POSIX-1.94-481.el9.x86_64 |
| perl-Scalar-List-Utils-1.56-461.el9.x86_64 | perl-Scalar-List-Utils-1.56-461.el9.x86_64 | perl-Scalar-List-Utils-1.56-461.el9.x86_64 |
| perl-SelectSaver-1.02-481.el9.noarch | perl-SelectSaver-1.02-481.el9.noarch | perl-SelectSaver-1.02-481.el9.noarch |

perl-Socket-2.031-4.el9.x86_64

perl-Socket-2.031-4.el9.x86_64

perl-Socket-2.031-4.el9.x86_64

perl-srpm-macros-1-41.el9.noarch

perl-srpm-macros-1-41.el9.noarch

perl-srpm-macros-1-41.el9.noarch

perl-Storable-3.21-460.el9.x86_64

perl-Storable-3.21-460.el9.x86_64

perl-Storable-3.21-460.el9.x86_64

perl-subs-1.03-481.el9.noarch

perl-subs-1.03-481.el9.noarch

perl-subs-1.03-481.el9.noarch

perl-Symbol-1.08-481.el9.noarch

perl-Symbol-1.08-481.el9.noarch

perl-Symbol-1.08-481.el9.noarch

perl-Term-ANSIColor-5.01-461.el9.noarch

perl-Term-ANSIColor-5.01-461.el9.noarch

perl-Term-ANSIColor-5.01-461.el9.noarch

perl-Term-Cap-1.17-460.el9.noarch

perl-Term-Cap-1.17-460.el9.noarch

perl-Term-Cap-1.17-460.el9.noarch

perl-Text-ParseWords-3.30-460.el9.noarch

perl-Text-ParseWords-3.30-460.el9.noarch

perl-Text-ParseWords-3.30-460.el9.noarch

perl-Text-Tabs+Wrap-2013.0523-460.el9.noarch

perl-Text-Tabs+Wrap-2013.0523-460.el9.noarch

perl-Text-Tabs+Wrap-2013.0523-460.el9.noarch

perl-Time-Local-1.300-7.el9.noarch

perl-Time-Local-1.300-7.el9.noarch

perl-Time-Local-1.300-7.el9.noarch

perl-URI-5.09-3.el9.noarch

perl-URI-5.09-3.el9.noarch

perl-URI-5.09-3.el9.noarch

perl-vars-1.05-481.el9.noarch

perl-vars-1.05-481.el9.noarch

perl-vars-1.05-481.el9.noarch

pigz-2.5-4.el9.x86_64

pigz-2.5-4.el9.x86_64

pigz-2.5-4.el9.x86_64

pixman-0.40.0-6.el9.x86_64

pixman-0.40.0-6.el9.x86_64

pixman-0.40.0-6.el9.x86_64

pkgconf-1.7.3-10.el9.x86_64

pkgconf-1.7.3-10.el9.x86_64

pkgconf-1.7.3-10.el9.x86_64

policycoreutils-3.6-2.1.el9.x86_64

policycoreutils-3.6-2.1.el9.x86_64

policycoreutils-3.6-2.1.el9.x86_64

policycoreutils-python-utils-3.6-2.1.el9.noarch

policycoreutils-python-utils-3.6-2.1.el9.noarch

policycoreutils-python-utils-3.6-2.1.el9.noarch

polkit-0.117-11.el9.x86_64

polkit-0.117-11.el9.x86_64

polkit-0.117-11.el9.x86_64

polkit-libs-0.117-11.el9.x86_64

polkit-libs-0.117-11.el9.x86_64

polkit-libs-0.117-11.el9.x86_64

polkit-pkla-compat-0.1-21.el9.x86_64

polkit-pkla-compat-0.1-21.el9.x86_64

polkit-pkla-compat-0.1-21.el9.x86_64

popt-1.18-8.el9.x86_64

popt-1.18-8.el9.x86_64

popt-1.18-8.el9.x86_64

procps-ng-3.3.17-14.el9.x86_64

procps-ng-3.3.17-14.el9.x86_64

procps-ng-3.3.17-14.el9.x86_64

protobuf-c-1.3.3-13.el9.x86_64

protobuf-c-1.3.3-13.el9.x86_64

protobuf-c-1.3.3-13.el9.x86_64

psmisc-23.4-3.el9.x86_64

psmisc-23.4-3.el9.x86_64

psmisc-23.4-3.el9.x86_64

publicsuffix-list-dafsa-20210518-3.el9.noarch

publicsuffix-list-dafsa-20210518-3.el9.noarch

publicsuffix-list-dafsa-20210518-3.el9.noarch

pyproject-srpm-macros-1.12.0-1.el9.noarch

pyproject-srpm-macros-1.12.0-1.el9.noarch

pyproject-srpm-macros-1.12.0-1.el9.noarch

python-srpm-macros-3.9-53.el9.noarch

python-srpm-macros-3.9-53.el9.noarch

python-srpm-macros-3.9-53.el9.noarch

python-unversioned-command-3.9.18-3.el9_4.5.noarch

python-unversioned-command-3.9.18-3.el9_4.5.noarch

python-unversioned-command-3.9.18-3.el9_4.5.noarch

python3-3.9.18-3.el9_4.5.x86_64

python3-3.9.18-3.el9_4.5.x86_64

python3-3.9.18-3.el9_4.5.x86_64

python3-audit-3.1.2-2.el9.x86_64

python3-audit-3.1.2-2.el9.x86_64

python3-audit-3.1.2-2.el9.x86_64

python3-distro-1.5.0-7.el9.noarch

python3-distro-1.5.0-7.el9.noarch

python3-distro-1.5.0-7.el9.noarch

python3-libs-3.9.18-3.el9_4.5.x86_64

python3-libs-3.9.18-3.el9_4.5.x86_64

python3-libs-3.9.18-3.el9_4.5.x86_64

python3-libselinux-3.6-1.el9.x86_64

python3-libselinux-3.6-1.el9.x86_64

python3-libselinux-3.6-1.el9.x86_64

python3-libsemanage-3.6-1.el9.x86_64

python3-libsemanage-3.6-1.el9.x86_64

python3-libsemanage-3.6-1.el9.x86_64

python3-pip-wheel-21.2.3-8.el9.noarch

python3-pip-wheel-21.2.3-8.el9.noarch

python3-pip-wheel-21.2.3-8.el9.noarch

python3-policycoreutils-3.6-2.1.el9.noarch

python3-policycoreutils-3.6-2.1.el9.noarch

python3-policycoreutils-3.6-2.1.el9.noarch

python3-pyyaml-5.4.1-6.el9.x86_64

python3-pyyaml-5.4.1-6.el9.x86_64

python3-pyyaml-5.4.1-6.el9.x86_64

python3-setools-4.4.4-1.el9.x86_64

python3-setools-4.4.4-1.el9.x86_64

python3-setools-4.4.4-1.el9.x86_64

python3-setuptools-53.0.0-12.el9_4.1.noarch

python3-setuptools-53.0.0-12.el9_4.1.noarch

python3-setuptools-53.0.0-12.el9_4.1.noarch

python3-setuptools-wheel-53.0.0-12.el9_4.1.noarch

python3-setuptools-wheel-53.0.0-12.el9_4.1.noarch

python3-setuptools-wheel-53.0.0-12.el9_4.1.noarch

qemu-img-8.2.0-11.el9_4.6.x86_64

qemu-img-8.2.0-11.el9_4.6.x86_64

qemu-img-8.2.0-11.el9_4.6.x86_64

qemu-kvm-common-8.2.0-11.el9_4.6.x86_64

qemu-kvm-common-8.2.0-11.el9_4.6.x86_64

qemu-kvm-common-8.2.0-11.el9_4.6.x86_64

qemu-kvm-core-8.2.0-11.el9_4.6.x86_64

qemu-kvm-core-8.2.0-11.el9_4.6.x86_64

qemu-kvm-core-8.2.0-11.el9_4.6.x86_64

qt5-srpm-macros-5.15.9-1.el9.noarch

qt5-srpm-macros-5.15.9-1.el9.noarch

qt5-srpm-macros-5.15.9-1.el9.noarch

quota-4.06-6.el9.x86_64

quota-4.06-6.el9.x86_64

quota-4.06-6.el9.x86_64

quota-nls-4.06-6.el9.noarch

quota-nls-4.06-6.el9.noarch

quota-nls-4.06-6.el9.noarch

readline-8.1-4.el9.x86_64

readline-8.1-4.el9.x86_64

readline-8.1-4.el9.x86_64

redhat-release-9.4-0.5.el9.x86_64

redhat-release-9.4-0.5.el9.x86_64

redhat-release-9.4-0.5.el9.x86_64

redhat-rpm-config-207-1.el9.noarch

redhat-rpm-config-207-1.el9.noarch

redhat-rpm-config-207-1.el9.noarch

rootfiles-8.1-31.el9.noarch

rootfiles-8.1-31.el9.noarch

rootfiles-8.1-31.el9.noarch

rpcbind-1.2.6-7.el9.x86_64

rpcbind-1.2.6-7.el9.x86_64

rpcbind-1.2.6-7.el9.x86_64

rpm-4.16.1.3-29.el9.x86_64

rpm-4.16.1.3-29.el9.x86_64

rpm-4.16.1.3-29.el9.x86_64

rpm-libs-4.16.1.3-29.el9.x86_64

rpm-libs-4.16.1.3-29.el9.x86_64

rpm-libs-4.16.1.3-29.el9.x86_64

rpm-plugin-selinux-4.16.1.3-29.el9.x86_64

rpm-plugin-selinux-4.16.1.3-29.el9.x86_64

rpm-plugin-selinux-4.16.1.3-29.el9.x86_64

rust-srpm-macros-17-4.el9.noarch

rust-srpm-macros-17-4.el9.noarch

rust-srpm-macros-17-4.el9.noarch

scrub-2.6.1-4.el9.x86_64

scrub-2.6.1-4.el9.x86_64

scrub-2.6.1-4.el9.x86_64

seabios-bin-1.16.3-2.el9.noarch

seabios-bin-1.16.3-2.el9.noarch

seabios-bin-1.16.3-2.el9.noarch

seavgabios-bin-1.16.3-2.el9.noarch

seavgabios-bin-1.16.3-2.el9.noarch

seavgabios-bin-1.16.3-2.el9.noarch

sed-4.8-9.el9.x86_64

sed-4.8-9.el9.x86_64

sed-4.8-9.el9.x86_64

selinux-policy-38.1.35-2.el9_4.2.noarch

selinux-policy-38.1.35-2.el9_4.2.noarch

selinux-policy-38.1.35-2.el9_4.2.noarch

selinux-policy-targeted-38.1.35-2.el9_4.2.noarch

selinux-policy-targeted-38.1.35-2.el9_4.2.noarch

selinux-policy-targeted-38.1.35-2.el9_4.2.noarch

setup-2.13.7-10.el9.noarch

setup-2.13.7-10.el9.noarch

setup-2.13.7-10.el9.noarch

shadow-utils-4.9-8.el9.x86_64

shadow-utils-4.9-8.el9.x86_64

shadow-utils-4.9-8.el9.x86_64

snappy-1.1.8-8.el9.x86_64

snappy-1.1.8-8.el9.x86_64

snappy-1.1.8-8.el9.x86_64

sqlite-libs-3.34.1-7.el9_3.x86_64

sqlite-libs-3.34.1-7.el9_3.x86_64

sqlite-libs-3.34.1-7.el9_3.x86_64

squashfs-tools-4.4-10.git1.el9.x86_64

squashfs-tools-4.4-10.git1.el9.x86_64

squashfs-tools-4.4-10.git1.el9.x86_64

supermin-5.3.3-1.el9.x86_64

supermin-5.3.3-1.el9.x86_64

supermin-5.3.3-1.el9.x86_64

swtpm-0.8.0-2.el9_4.x86_64

swtpm-0.8.0-2.el9_4.x86_64

swtpm-0.8.0-2.el9_4.x86_64

swtpm-libs-0.8.0-2.el9_4.x86_64

swtpm-libs-0.8.0-2.el9_4.x86_64

swtpm-libs-0.8.0-2.el9_4.x86_64

swtpm-tools-0.8.0-2.el9_4.x86_64

swtpm-tools-0.8.0-2.el9_4.x86_64

swtpm-tools-0.8.0-2.el9_4.x86_64

syslinux-6.04-0.20.el9.x86_64

syslinux-6.04-0.20.el9.x86_64

syslinux-6.04-0.20.el9.x86_64

syslinux-extlinux-6.04-0.20.el9.x86_64

syslinux-extlinux-6.04-0.20.el9.x86_64

syslinux-extlinux-6.04-0.20.el9.x86_64

syslinux-extlinux-nonlinux-6.04-0.20.el9.noarch

syslinux-extlinux-nonlinux-6.04-0.20.el9.noarch

syslinux-extlinux-nonlinux-6.04-0.20.el9.noarch

syslinux-nonlinux-6.04-0.20.el9.noarch

syslinux-nonlinux-6.04-0.20.el9.noarch

syslinux-nonlinux-6.04-0.20.el9.noarch

systemd-252-32.el9_4.7.x86_64

systemd-252-32.el9_4.7.x86_64

systemd-252-32.el9_4.7.x86_64

systemd-container-252-32.el9_4.7.x86_64

systemd-container-252-32.el9_4.7.x86_64

systemd-container-252-32.el9_4.7.x86_64

systemd-libs-252-32.el9_4.7.x86_64

systemd-libs-252-32.el9_4.7.x86_64

systemd-libs-252-32.el9_4.7.x86_64

systemd-pam-252-32.el9_4.7.x86_64

systemd-pam-252-32.el9_4.7.x86_64

systemd-pam-252-32.el9_4.7.x86_64

systemd-rpm-macros-252-32.el9_4.7.noarch

systemd-rpm-macros-252-32.el9_4.7.noarch

systemd-rpm-macros-252-32.el9_4.7.noarch

systemd-udev-252-32.el9_4.7.x86_64

systemd-udev-252-32.el9_4.7.x86_64

systemd-udev-252-32.el9_4.7.x86_64

tar-1.34-6.el9_4.1.x86_64

tar-1.34-6.el9_4.1.x86_64

tar-1.34-6.el9_4.1.x86_64

tpm2-tools-5.2-3.el9.x86_64

tpm2-tools-5.2-3.el9.x86_64

tpm2-tools-5.2-3.el9.x86_64

tpm2-tss-3.2.2-2.el9.x86_64

tpm2-tss-3.2.2-2.el9.x86_64

tpm2-tss-3.2.2-2.el9.x86_64

tzdata-2024a-1.el9.noarch

tzdata-2024a-1.el9.noarch

tzdata-2024a-1.el9.noarch

unbound-libs-1.16.2-3.el9_3.5.x86_64

unbound-libs-1.16.2-3.el9_3.5.x86_64

unbound-libs-1.16.2-3.el9_3.5.x86_64

unzip-6.0-56.el9.x86_64

unzip-6.0-56.el9.x86_64

unzip-6.0-56.el9.x86_64

userspace-rcu-0.12.1-6.el9.x86_64

userspace-rcu-0.12.1-6.el9.x86_64

userspace-rcu-0.12.1-6.el9.x86_64

util-linux-2.37.4-18.el9.x86_64

util-linux-2.37.4-18.el9.x86_64

util-linux-2.37.4-18.el9.x86_64

util-linux-core-2.37.4-18.el9.x86_64

util-linux-core-2.37.4-18.el9.x86_64

util-linux-core-2.37.4-18.el9.x86_64

vim-minimal-8.2.2637-20.el9_1.x86_64

vim-minimal-8.2.2637-20.el9_1.x86_64

vim-minimal-8.2.2637-20.el9_1.x86_64

virt-v2v-2.4.0-4.el9_4.x86_64

virt-v2v-2.4.0-4.el9_4.x86_64

virt-v2v-2.4.0-4.el9_4.x86_64

virtio-win-1.9.40-0.el9_4.noarch

virtio-win-1.9.40-0.el9_4.noarch

virtio-win-1.9.40-0.el9_4.noarch

webkit2gtk3-jsc-2.42.5-1.el9.x86_64

webkit2gtk3-jsc-2.42.5-1.el9.x86_64

webkit2gtk3-jsc-2.46.1-2.el9_4.x86_64

which-2.21-29.el9.x86_64

which-2.21-29.el9.x86_64

which-2.21-29.el9.x86_64

xfsprogs-6.3.0-1.el9.x86_64

xfsprogs-6.3.0-1.el9.x86_64

xfsprogs-6.3.0-1.el9.x86_64

xz-5.2.5-8.el9_0.x86_64

xz-5.2.5-8.el9_0.x86_64

xz-5.2.5-8.el9_0.x86_64

xz-libs-5.2.5-8.el9_0.x86_64

xz-libs-5.2.5-8.el9_0.x86_64

xz-libs-5.2.5-8.el9_0.x86_64

yajl-2.1.0-22.el9.x86_64

yajl-2.1.0-22.el9.x86_64

yajl-2.1.0-22.el9.x86_64

zip-3.0-35.el9.x86_64

zip-3.0-35.el9.x86_64

zip-3.0-35.el9.x86_64

zlib-1.2.11-40.el9.x86_64

zlib-1.2.11-40.el9.x86_64

zlib-1.2.11-40.el9.x86_64

zstd-1.5.1-2.el9.x86_64

zstd-1.5.1-2.el9.x86_64

zstd-1.5.1-2.el9.x86_64

diff --git a/documentation/doc-Release_notes/modules/about-cold-warm-migration/index.html b/documentation/doc-Release_notes/modules/about-cold-warm-migration/index.html
new file mode 100644
index 00000000000..b9c877a8c6e
--- /dev/null
+++ b/documentation/doc-Release_notes/modules/about-cold-warm-migration/index.html
+

About cold and warm migration

+
+
+
+

Forklift supports cold migration from:

+
+
+
    +
  • +

    VMware vSphere

    +
  • +
  • +

    oVirt

    +
  • +
  • +

    OpenStack

    +
  • +
  • +

    Remote KubeVirt clusters

    +
  • +
+
+
+

Forklift supports warm migration from VMware vSphere and from oVirt.

+
+
+
+
+

Cold migration

+
+
+

Cold migration is the default migration type. The source virtual machines are shut down while the data is copied.

+
+
+ + + + + +
+
Note
+
+
+

Unresolved directive in about-cold-warm-migration.adoc - include::snip_qemu-guest-agent.adoc[]

+
+
+
+
+
+
+

Warm migration

+
+
+

Most of the data is copied during the precopy stage while the source virtual machines (VMs) are running.

+
+
+

Then the VMs are shut down and the remaining data is copied during the cutover stage.

+
+
+
Precopy stage
+

The VMs are not shut down during the precopy stage.

+
+
+

The VM disks are copied incrementally by using changed block tracking (CBT) snapshots. The snapshots are created at one-hour intervals by default. You can change the snapshot interval by patching the ForkliftController custom resource (CR), as shown in the sketch that follows.

+
+
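For example, the interval can be shortened to 30 minutes with a single patch. This is a minimal sketch that assumes the Forklift Operator is deployed in the konveyor-forklift namespace and that the CR is named forklift-controller:

$ kubectl patch forkliftcontroller/forklift-controller \
    -n konveyor-forklift \
    -p '{"spec": {"controller_precopy_interval": 30}}' \
    --type=merge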
+ + + + + +
+
Important
+
+
+

You must enable CBT for each source VM and each VM disk. A sketch of enabling CBT follows this note.

+
+
+

A VM can support up to 28 CBT snapshots. If the source VM has too many CBT snapshots and the Migration Controller service is not able to create a new snapshot, warm migration might fail. The Migration Controller service deletes each snapshot when the snapshot is no longer required.

+
+
+
+
+
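As an illustration, CBT is enabled through the VM's advanced settings in vSphere. The following is a hedged sketch using the govc CLI; the ctkEnabled keys follow VMware's convention, and the disk key (scsi0:0) is an assumption that you must adjust for each disk:

# Enable CBT for the VM itself and for the disk attached at scsi0:0;
# the change takes effect after the next power cycle of the VM.
$ govc vm.change -vm <vm_name> -e ctkEnabled=true -e scsi0:0.ctkEnabled=true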

The precopy stage runs until the cutover stage is started manually or is scheduled to start.

+
+
+
Cutover stage
+

The VMs are shut down during the cutover stage and the remaining data is migrated. Data stored in RAM is not migrated.

+
+
+

You can start the cutover stage manually by using the Forklift console or you can schedule a cutover time in the Migration manifest.

+
+
+
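For instance, a scheduled cutover can be expressed in the Migration manifest. The following is a sketch only; the cutover timestamp is illustrative and must be an ISO 8601 time in the future:

apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <migration>
  namespace: konveyor-forklift
spec:
  plan:
    name: <plan>
    namespace: konveyor-forklift
  cutover: "2024-04-01T01:00:00Z"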
+
+

Advantages and disadvantages of cold and warm migrations

+
+
+

Overview

+
+

Unresolved directive in about-cold-warm-migration.adoc - include::snip_cold-warm-comparison-table.adoc[]

+
+
+
+

Detailed description

+
+

The table that follows offers a more detailed description of the advantages and disadvantages of each type of migration. It assumes that you have installed Red Hat Enterprise Linux (RHEL) 9 on the OKD platform on which you installed Forklift.

+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Detailed description of advantages and disadvantages
Cold migration | Warm migration

Fail fast

Each VM is converted to be compatible with OKD and, if the conversion is successful, the VM is transferred. If a VM cannot be converted, the migration fails immediately.

For each VM, Forklift creates a snapshot and transfers it to OKD. When you start the cutover, Forklift creates the last snapshot, transfers it, and then converts the VM.

Tools

Forklift only.

Forklift and CDI from KubeVirt.

Parallelism

Disks must be transferred sequentially.

Disks can be transferred in parallel using different pods.

+
+ + + + + +
+
Note
+
+
+

The preceding table applies to VMs that are running, because the main benefit of warm migration is reduced downtime and there is no reason to initiate a warm migration for VMs that are down. However, warm migration of VMs that are down is not the same as cold migration, even though Forklift uses virt-v2v and RHEL 9 in both cases: for VMs that are down, Forklift transfers the disks by using CDI, unlike in cold migration.

+
+
+
+
+ + + + + +
+
Note
+
+
+

When importing from VMware, additional factors, such as limits related to ESXi, vSphere, or VDDK, affect the migration speed.

+
+
+
+
+
+

Conclusions

+
+

Based on the preceding information, we can draw the following conclusions about cold migration vs. warm migration:

+
+
+
    +
  • +

    The shortest downtime of VMs can be achieved by using warm migration.

    +
  • +
  • +

    The shortest duration for VMs with a large amount of data on a single disk can be achieved by using cold migration.

    +
  • +
  • +

    The shortest duration for VMs with a large amount of data that is spread evenly across multiple disks can be achieved by using warm migration.

    +
  • +
+
+
+
+
+ + +
diff --git a/documentation/doc-Release_notes/modules/about-hook-crs-for-migration-plans-api/index.html b/documentation/doc-Release_notes/modules/about-hook-crs-for-migration-plans-api/index.html
new file mode 100644
index 00000000000..c2d56327d44
--- /dev/null
+++ b/documentation/doc-Release_notes/modules/about-hook-crs-for-migration-plans-api/index.html
+

API-based hooks for Forklift migration plans

+
+

You can add hooks to a migration plan from the command line by using the Forklift API.

+
+

Default hook image

+
+

The default hook image for a Forklift hook is registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel8:v1.8.2-2. The image is based on the Ansible Runner image, with the addition of python-openshift, which provides Ansible Kubernetes resources, and a recent oc binary.

+
+

Hook execution

+
+

An Ansible playbook that is provided as part of a migration hook is mounted into the hook container as a ConfigMap. The hook container is run as a job on the desired cluster, using the default ServiceAccount in the konveyor-forklift namespace.

+
+

PreHooks and PostHooks

+
+

You specify hooks per VM and you can run each as a PreHook or a PostHook. In this context, a PreHook is a hook that is run before a migration and a PostHook is a hook that is run after a migration.

+
+
+

When you add a hook, you must specify the namespace where the hook CR is located, the name of the hook, and specify whether the hook is a PreHook or PostHook.

+
+
+ + + + + +
+
Important
+
+
+

In order for a PreHook to run on a VM, the VM must be started and available via SSH.

+
+
+
+
+
Example PreHook:
+
+
kind: Plan
+apiVersion: forklift.konveyor.io/v1beta1
+metadata:
+  name: test
+  namespace: konveyor-forklift
+spec:
+  vms:
+    - id: vm-2861
+      hooks:
+        - hook:
+            namespace: konveyor-forklift
+            name: playbook
+          step: PreHook
+
+
+ + +
diff --git a/documentation/doc-Release_notes/modules/about-rego-files/index.html b/documentation/doc-Release_notes/modules/about-rego-files/index.html
new file mode 100644
index 00000000000..d8d236537bc
--- /dev/null
+++ b/documentation/doc-Release_notes/modules/about-rego-files/index.html
+

About Rego files

+
+

Validation rules are written in Rego, the Open Policy Agent (OPA) native query language. The rules are stored as .rego files in the /usr/share/opa/policies/io/konveyor/forklift/<provider> directory of the Validation pod.

+
+
+

Each validation rule is defined in a separate .rego file and tests for a specific condition. If the condition evaluates as true, the rule adds a {"category", "label", "assessment"} hash to the concerns. The concerns content is added to the concerns key in the inventory record of the VM. The web console displays the content of the concerns key for each VM in the provider inventory.

+
+
+

The following .rego file example checks for distributed resource scheduling enabled in the cluster of a VMware VM:

+
+
+
drs_enabled.rego example
+
+
package io.konveyor.forklift.vmware (1)
+
+has_drs_enabled {
+    input.host.cluster.drsEnabled (2)
+}
+
+concerns[flag] {
+    has_drs_enabled
+    flag := {
+        "category": "Information",
+        "label": "VM running in a DRS-enabled cluster",
+        "assessment": "Distributed resource scheduling is not currently supported by OpenShift Virtualization. The VM can be migrated but it will not have this feature in the target environment."
+    }
+}
+
+
+
+
    +
  1. +

    Each validation rule is defined within a package. The package namespaces are io.konveyor.forklift.vmware for VMware and io.konveyor.forklift.ovirt for oVirt.

    +
  2. +
  3. +

    Query parameters are based on the input key of the Validation service JSON. See the example after this list.

    +
  4. +
+
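To make the input key concrete, the drs_enabled.rego rule shown earlier would add its concern for an inventory record along these lines. This is a trimmed, hypothetical excerpt of the JSON sent to the Validation service:

{
  "input": {
    "host": {
      "cluster": {
        "drsEnabled": true
      }
    }
  }
}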
+ + +
diff --git a/documentation/doc-Release_notes/modules/accessing-default-validation-rules/index.html b/documentation/doc-Release_notes/modules/accessing-default-validation-rules/index.html
new file mode 100644
index 00000000000..975c3931256
--- /dev/null
+++ b/documentation/doc-Release_notes/modules/accessing-default-validation-rules/index.html
+

Checking the default validation rules

+
+

Before you create a custom rule, you must check the default rules of the Validation service to ensure that you do not create a rule that redefines an existing default value.

+
+
+

Example: If a default rule contains the line default valid_input = false and you create a custom rule that contains the line default valid_input = true, the Validation service will not start.

+
+
+
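For example, if the default vmware policies already define default valid_input = false, a custom rule file such as the following sketch would redefine that default and prevent the Validation service from starting:

package io.konveyor.forklift.vmware

default valid_input = true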
Procedure
+
    +
  1. +

    Connect to the terminal of the Validation pod:

    +
    +
    +
    $ kubectl exec -it <validation_pod> -- /bin/bash
    +
    +
    +
  2. +
  3. +

    Go to the OPA policies directory for your provider:

    +
    +
    +
    $ cd /usr/share/opa/policies/io/konveyor/forklift/<provider> (1)
    +
    +
    +
    +
      +
    1. +

      Specify vmware or ovirt.

      +
    2. +
    +
    +
  4. +
  5. +

    Search for the default policies:

    +
    +
    +
    $ grep -R "default" *
    +
    +
    +
  6. +
+
+ + +
diff --git a/documentation/doc-Release_notes/modules/accessing-logs-cli/index.html b/documentation/doc-Release_notes/modules/accessing-logs-cli/index.html
new file mode 100644
index 00000000000..96350457664
--- /dev/null
+++ b/documentation/doc-Release_notes/modules/accessing-logs-cli/index.html
+

Accessing logs and custom resource information from the command line interface

+
+

You can access logs and information about custom resources (CRs) from the command line interface by using the must-gather tool. You must attach a must-gather data file to all customer cases.

+
+
+

You can gather data for a specific namespace, a completed, failed, or canceled migration plan, or a migrated virtual machine (VM) by using the filtering options.

+
+
+ + + + + +
+
Note
+
+
+

If you specify a non-existent resource in the filtered must-gather command, no archive file is created.

+
+
+
+
+
Prerequisites
+
    +
  • +

    You must be logged in to the KubeVirt cluster as a user with the cluster-admin role.

    +
  • +
  • +

    You must have the OKD CLI (oc) installed.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Navigate to the directory where you want to store the must-gather data.

    +
  2. +
  3. +

    Run the oc adm must-gather command:

    +
    +
    +
    $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest
    +
    +
    +
    +

    The data is saved as /must-gather/must-gather.tar.gz. You can upload this file to a support case on the Red Hat Customer Portal.

    +
    +
  4. +
  5. +

    Optional: Run the oc adm must-gather command with the following options to gather filtered data:

    +
    +
      +
    • +

      Namespace:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- NS=<namespace> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Migration plan:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- PLAN=<migration_plan> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Virtual machine:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- VM=<vm_name> NS=<namespace> /usr/bin/targeted (1)
      +
      +
      +
      +
        +
      1. +

        You must specify the VM name, not the VM ID, as it appears in the Plan CR.

        +
      2. +
      +
      +
    • +
    +
    +
  6. +
+
+ + +
diff --git a/documentation/doc-Release_notes/modules/accessing-logs-ui/index.html b/documentation/doc-Release_notes/modules/accessing-logs-ui/index.html
new file mode 100644
index 00000000000..213192773d2
--- /dev/null
+++ b/documentation/doc-Release_notes/modules/accessing-logs-ui/index.html
+

Downloading logs and custom resource information from the web console

+
+

You can download logs and information about custom resources (CRs) for a completed, failed, or canceled migration plan or for migrated virtual machines (VMs) by using the OKD web console.

+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → Plans for virtualization.

    +
  2. +
  3. +

    Click Get logs beside a migration plan name.

    +
  4. +
  5. +

    In the Get logs window, click Get logs.

    +
    +

    The logs are collected. A Log collection complete message is displayed.

    +
    +
  6. +
  7. +

    Click Download logs to download the archive file.

    +
  8. +
  9. +

    To download logs for a migrated VM, click a migration plan name and then click Get logs beside the VM.

    +
  10. +
+
+ + +
diff --git a/documentation/doc-Release_notes/modules/adding-hook-crs-to-migration-plans-api/index.html b/documentation/doc-Release_notes/modules/adding-hook-crs-to-migration-plans-api/index.html
new file mode 100644
index 00000000000..359426ae22b
--- /dev/null
+++ b/documentation/doc-Release_notes/modules/adding-hook-crs-to-migration-plans-api/index.html
+

Adding Hook CRs to a VM migration by using the Forklift API

+
+

You can add a PreHook or a PostHook Hook CR when you migrate a virtual machine from the command line by using the Forklift API. A PreHook runs before a migration, a PostHook, after.

+
+
+ + + + + +
+
Note
+
+
+

You can retrieve additional information stored in a secret or in a ConfigMap by using a k8s Ansible module, such as k8s_info.

+
+
+
+
+

For example, you can create a hook CR to install cloud-init on a VM and write a file before migration.

+
+
+
Procedure
+
    +
  1. +

    If needed, create a secret with an SSH private key for the VM. You can either use an existing key or generate a key pair, install the public key on the VM, and base64 encode the private key in the secret.

    +
    +
    +
    apiVersion: v1
    +data:
    +  key: VGhpcyB3YXMgZ2VuZXJhdGVkIHdpdGggc3NoLWtleWdlbiBwdXJlbHkgZm9yIHRoaXMgZXhhbXBsZS4KSXQgaXMgbm90IHVzZWQgYW55d2hlcmUuCi0tLS0tQkVHSU4gT1BFTlNTSCBQUklWQVRFIEtFWS0tLS0tCmIzQmxibk56YUMxclpYa3RkakVBQUFBQUJHNXZibVVBQUFBRWJtOXVaUUFBQUFBQUFBQUJBQUFCbHdBQUFBZHpjMmd0Y24KTmhBQUFBQXdFQUFRQUFBWUVBMzVTTFRReDBFVjdPTWJQR0FqcEsxK2JhQURTTVFuK1NBU2pyTGZLNWM5NGpHdzhDbnA4LwovRHErZHFBR1pxQkg2ZnAxYmVJM1BZZzVWVDk0RVdWQ2RrTjgwY3dEcEo0Z1R0NHFUQ1gzZUYvY2x5VXQyUC9zaTNjcnQ0CjBQdi9wVnZXU1U2TlhHaDJIZC93V0MwcGh5Z0RQOVc5SHRQSUF0OFpnZmV2ZnUwZHpraVl6OHNVaElWU2ZsRGpaNUFqcUcKUjV2TVVUaGlrczEvZVlCeTdiMkFFSEdzYU8xN3NFbWNiYUlHUHZuUFVwWmQrdjkyYU1JdWZoYjhLZkFSbzZ3Ty9ISW1VbQovdDdHWFBJUmxBMUhSV0p1U05odTQzZS9DY3ZYd3Z6RnZrdE9kYXlEQzBMTklHMkpVaURlNWd0UUQ1WHZXc1p3MHQvbEs1CklacjFrZXZRNUJsYWNISmViV1ZNYUQvdllpdFdhSFo4OEF1Y0czaGh2bjkrOGNSTGhNVExiVlFSMWh2UVpBL1JtQXN3eE0KT3VJSmRaUmtxTThLZlF4Z28zQThRNGJhQW1VbnpvM3Zwa0FWdC9uaGtIOTRaRE5rV2U2RlRhdThONStyYTJCZkdjZVA4VApvbjFEeTBLRlpaUlpCREVVRVc0eHdTYUVOYXQ3c2RDNnhpL1d5OURaQUFBRm1NRFBXeDdBejFzZUFBQUFCM056YUMxeWMyCkVBQUFHQkFOK1VpMDBNZEJGZXpqR3p4Z0k2U3RmbTJnQTBqRUova2dFbzZ5M3l1WFBlSXhzUEFwNmZQL3c2dm5hZ0JtYWcKUituNmRXM2lOejJJT1ZVL2VCRmxRblpEZk5ITUE2U2VJRTdlS2t3bDkzaGYzSmNsTGRqLzdJdDNLN2VORDcvNlZiMWtsTwpqVnhvZGgzZjhGZ3RLWWNvQXovVnZSN1R5QUxmR1lIM3IzN3RIYzVJbU0vTEZJU0ZVbjVRNDJlUUk2aGtlYnpGRTRZcExOCmYzbUFjdTI5Z0JCeHJHanRlN0JKbkcyaUJqNzV6MUtXWGZyL2RtakNMbjRXL0Nud0VhT3NEdnh5SmxKdjdleGx6eUVaUU4KUjBWaWJrallidU4zdnduTDE4TDh4YjVMVG5Xc2d3dEN6U0J0aVZJZzN1WUxVQStWNzFyR2NOTGY1U3VTR2E5WkhyME9RWgpXbkJ5WG0xbFRHZy83MklyVm1oMmZQQUxuQnQ0WWI1L2Z2SEVTNFRFeTIxVUVkWWIwR1FQMFpnTE1NVERyaUNYV1VaS2pQCkNuME1ZS053UEVPRzJnSmxKODZONzZaQUZiZjU0WkIvZUdRelpGbnVoVTJydkRlZnEydGdYeG5Iai9FNko5UTh0Q2hXV1UKV1FReEZCRnVNY0VtaERXcmU3SFF1c1l2MXN2UTJRQUFBQU1CQUFFQUFBR0JBSlZtZklNNjdDQmpXcU9KdnFua2EvakRrUwo4TDdpSE5mekg1TnRZWVdPWmRMTlk2L0lRa1pDeFcwTWtSKzlUK0M3QUZKZzBNV2Q5ck5PeUxJZDkxNjZoOVJsNG0xdFJjCnViZ1o2dWZCZ3hGVDlXS21mSEdCNm4zelh5b2pQOEFJTnR6ODVpaUVHVXFFRWtVRVdMd0RGSmdvcFllQ3l1VmZ2ZE92MUgKRm1WWmEwNVo0b3NQNkNENXVmc2djQ1RYQTR6VnZ5ZHVCYkxqdHN5RjdYZjNUdjZUQ1QxU0swZHErQk1OOXRvb0RZaXpwagpzbDh6NzlybXp3eUFyWFlVcnFUUkpsNmpwRkNrWHJLcy9LeG96MHhhbXlMY2RORk9hWE51LzlnTkpjRERsV2hPcFRqNHk4CkpkNXBuV1Jueis1RHJLRFdhY0loUW1CMUxVd2ZLWmQwbVFxaUpzMUMxcXZVUmlKOGExaThKUTI4bHFuWTFRRk9wbk13emcKWEpla2FndThpT1ExRFJlQkhaM0NkcVJUYnY3bVJZSGxramx0dXJmZGc4M3hvM0ErZ1JSR001eUVOcW5xSkplQjhJQVB5UwptMFp0dGdqbHNqNTJ2K1B1NmExMHoxZndKK1VML2N6dTRKeEpOYlp6WTFIMnpLODJBaVI1T3JYNmx2aUEvSWFSRVcwUUFBCkFNQndVeUJpcUc5bEZCUnltL2UvU1VORVMzdHpicUZNdTdIcy84WTV5SnAxKzR6OXUxNGtJR2ttV0Y5eE5HT3hrY3V0cWwKeHVUcndMbjFUaFNQTHQrTjUwTGhVdzR4ZjBhNUxqemdPbklPU0FRbm5HY1Nxa0dTRDlMR21obGE2WmpydFBHY29lQ3JHdAo5M1Vvcmx5YkxNRzFFRFAxWmpKS1RaZzl6OUMwdDlTTGd3ei9DbFhydW9UNXNQVUdKWnUrbHlIZXpSTDRtcHl6OEZMcnlOCkdNci9leVM5bWdISjNVVkZEYjNIZ3BaK1E1SUdBRU5rZVZEcHIwMGhCZXZndGd6YWtBQUFEQkFQVXQ1RitoMnBVby94V1YKenRkcVQvMzA4dFB5MXVMMU1lWFoydEJPQmRwSDJyd0JzdWt0aTIySGtWZUZXQjJFdUlFUXppMzY3MGc1UGdxR1p4Vng4dQpobEE0Rkg4ZXN1NTNQckZqVW9EeFJhb3d3WXBFcFh5Y2pnNUE1MStwR1VQcWljWjB0YjliaWlhc3BWWXZhWW5sdGlnVG5iClN0UExMY29nemNiL0dGcVYyaXlzc3lwTlMwKzBNRTUxcEtxWGNaS2swbi8vVHpZWWs4TW8vZzRsQ3pmUEZQUlZrVVM5blIKWU1pQzRlcEk0TERmbVdnM0xLQ2N1Zk85all3aWgwYlFBQUFNRUE2WEtldDhEMHNvc0puZVh5WFZGd0dyVyszNlhBVGRQTwpMWDdjaStjYzFoOGV1eHdYQWx3aTJJNFhxSmJBVjBsVEhuVGEycXN3Uy9RQlpJUUJWSkZlVjVyS1daZTc4R2F3d1pWTFZNCldETmNwdFFyRTFaM2pGNS9TdUVzdlVxSDE0Tkc5RUFXWG1iUkNzelE0Vlk3NzQrSi9sTFkvMnlDT1diNzlLYTJ5OGxvYUoKVXczWWVtSld3blp2R3hKNldsL3BmQ2xYN3lEVXlXUktLdGl0cWNjbmpCWVkyRE1tZURwdURDYy9ZdDZDc3dLRmRkMkJ1UwpGZGt5cDlZY3VMaDlLZEFBQUFIR3BoYzI5dVFFRlVMVGd3TWxVdWJXOXVkR3hsYjI0dWF
XNTBjbUVCQWdNRUJRWT0KLS0tLS1FTkQgT1BFTlNTSCBQUklWQVRFIEtFWS0tLS0tCgo=
    +kind: Secret
    +metadata:
    +  name: ssh-credentials
    +  namespace: konveyor-forklift
    +type: Opaque
    +
    +
    +
  2. +
  3. +

    Encode your playbook by concatenating its file and piping it to base64, for example:

    +
    +
    +
    $ cat playbook.yml | base64 -w0
    +
    +
    +
    + + + + + +
    +
    Note
    +
    +
    +

    You can also use a here document to encode a playbook:

    +
    +
    +
    +
    $ cat << EOF | base64 -w0
    +- hosts: localhost
    +  tasks:
    +  - debug:
    +      msg: test
    +EOF
    +
    +
    +
    +
    +
  4. +
  5. +

    Create a Hook CR:

    +
    +
    +
    apiVersion: forklift.konveyor.io/v1beta1
    +kind: Hook
    +metadata:
    +  name: playbook
    +  namespace: konveyor-forklift
    +spec:
    +  image: registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel8:v1.8.2-2
    +  playbook: LSBuYW1lOiBNYWluCiAgaG9zdHM6IGxvY2FsaG9zdAogIHRhc2tzOgogIC0gbmFtZTogTG9hZCBQbGFuCiAgICBpbmNsdWRlX3ZhcnM6CiAgICAgIGZpbGU6IHBsYW4ueW1sCiAgICAgIG5hbWU6IHBsYW4KCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3ZhcnM6CiAgICAgIGZpbGU6IHdvcmtsb2FkLnltbAogICAgICBuYW1lOiB3b3JrbG9hZAoKICAtIG5hbWU6IAogICAgZ2V0ZW50OgogICAgICBkYXRhYmFzZTogcGFzc3dkCiAgICAgIGtleTogInt7IGFuc2libGVfdXNlcl9pZCB9fSIKICAgICAgc3BsaXQ6ICc6JwoKICAtIG5hbWU6IEVuc3VyZSBTU0ggZGlyZWN0b3J5IGV4aXN0cwogICAgZmlsZToKICAgICAgcGF0aDogfi8uc3NoCiAgICAgIHN0YXRlOiBkaXJlY3RvcnkKICAgICAgbW9kZTogMDc1MAogICAgZW52aXJvbm1lbnQ6CiAgICAgIEhPTUU6ICJ7eyBhbnNpYmxlX2ZhY3RzLmdldGVudF9wYXNzd2RbYW5zaWJsZV91c2VyX2lkXVs0XSB9fSIKCiAgLSBrOHNfaW5mbzoKICAgICAgYXBpX3ZlcnNpb246IHYxCiAgICAgIGtpbmQ6IFNlY3JldAogICAgICBuYW1lOiBzc2gtY3JlZGVudGlhbHMKICAgICAgbmFtZXNwYWNlOiBrb252ZXlvci1mb3JrbGlmdAogICAgcmVnaXN0ZXI6IHNzaF9jcmVkZW50aWFscwoKICAtIG5hbWU6IENyZWF0ZSBTU0gga2V5CiAgICBjb3B5OgogICAgICBkZXN0OiB+Ly5zc2gvaWRfcnNhCiAgICAgIGNvbnRlbnQ6ICJ7eyBzc2hfY3JlZGVudGlhbHMucmVzb3VyY2VzWzBdLmRhdGEua2V5IHwgYjY0ZGVjb2RlIH19IgogICAgICBtb2RlOiAwNjAwCgogIC0gYWRkX2hvc3Q6CiAgICAgIG5hbWU6ICJ7eyB3b3JrbG9hZC52bS5pcGFkZHJlc3MgfX0iCiAgICAgIGFuc2libGVfdXNlcjogcm9vdAogICAgICBncm91cHM6IHZtcwoKLSBob3N0czogdm1zCiAgdGFza3M6CiAgLSBuYW1lOiBJbnN0YWxsIGNsb3VkLWluaXQKICAgIGRuZjoKICAgICAgbmFtZToKICAgICAgLSBjbG91ZC1pbml0CiAgICAgIHN0YXRlOiBsYXRlc3QKCiAgLSBuYW1lOiBDcmVhdGUgVGVzdCBGaWxlCiAgICBjb3B5OgogICAgICBkZXN0OiAvdGVzdC50eHQKICAgICAgY29udGVudDogIkhlbGxvIFdvcmxkIgogICAgICBtb2RlOiAwNjQ0Cg==
    +  serviceAccount: forklift-controller (1)
    +
    +
    +
    +
      +
    1. +

      Specify a serviceAccount to run the hook with in order to control access to resources on the cluster.

      +
      + + + + + +
      +
      Note
      +
      +
      +

      To decode an attached playbook, retrieve the resource with custom output and decode it with base64 -d. For example:

      +
      +
      +
      +
       oc get -n konveyor-forklift hook playbook -o \
      +   go-template='{{ .spec.playbook }}' | base64 -d
      +
      +
      +
      +
      +
      +

      The playbook encoded here runs the following:

      +
      +
      +
      +
      - name: Main
      +  hosts: localhost
      +  tasks:
      +  - name: Load Plan
      +    include_vars:
      +      file: plan.yml
      +      name: plan
      +
      +  - name: Load Workload
      +    include_vars:
      +      file: workload.yml
      +      name: workload
      +
      +  - name:
      +    getent:
      +      database: passwd
      +      key: "{{ ansible_user_id }}"
      +      split: ':'
      +
      +  - name: Ensure SSH directory exists
      +    file:
      +      path: ~/.ssh
      +      state: directory
      +      mode: 0750
      +    environment:
      +      HOME: "{{ ansible_facts.getent_passwd[ansible_user_id][4] }}"
      +
      +  - k8s_info:
      +      api_version: v1
      +      kind: Secret
      +      name: ssh-credentials
      +      namespace: konveyor-forklift
      +    register: ssh_credentials
      +
      +  - name: Create SSH key
      +    copy:
      +      dest: ~/.ssh/id_rsa
      +      content: "{{ ssh_credentials.resources[0].data.key | b64decode }}"
      +      mode: 0600
      +
      +  - add_host:
      +      name: "{{ workload.vm.ipaddress }}"
      +      ansible_user: root
      +      groups: vms
      +
      +- hosts: vms
      +  tasks:
      +  - name: Install cloud-init
      +    dnf:
      +      name:
      +      - cloud-init
      +      state: latest
      +
      +  - name: Create Test File
      +    copy:
      +      dest: /test.txt
      +      content: "Hello World"
      +      mode: 0644
      +
      +
      +
    2. +
    +
    +
  6. +
  7. +

    Create a Plan CR using the hook:

    +
    +
    +
    kind: Plan
    +apiVersion: forklift.konveyor.io/v1beta1
    +metadata:
    +  name: test
    +  namespace: konveyor-forklift
    +spec:
    +  map:
    +    network:
    +      namespace: "konveyor-forklift"
    +      name: "network"
    +    storage:
    +      namespace: "konveyor-forklift"
    +      name: "storage"
    +  provider:
    +    source:
    +      namespace: "konveyor-forklift"
    +      name: "boston"
    +    destination:
    +      namespace: "konveyor-forklift"
    +      name: host
    +  targetNamespace: "konveyor-forklift"
    +  vms:
    +    - id: vm-2861
    +      hooks:
    +        - hook:
    +            namespace: konveyor-forklift
    +            name: playbook
    +          step: PreHook (1)
    +
    +
    +
    +
      +
    1. +

      Options are PreHook, to run the hook before the migration, and PostHook, to run the hook after the migration.

      +
    2. +
    +
    +
  8. +
+
+
+ + + + + +
+
Important
+
+
+

In order for a PreHook to run on a VM, the VM must be started and available via SSH.

+
+
+
+ + +
diff --git a/documentation/doc-Release_notes/modules/adding-source-provider/index.html b/documentation/doc-Release_notes/modules/adding-source-provider/index.html
new file mode 100644
index 00000000000..76106511701
--- /dev/null
+++ b/documentation/doc-Release_notes/modules/adding-source-provider/index.html
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → Providers for virtualization.

    +
  2. +
  3. +

    Click Create Provider.

    +
  4. +
  5. +

    Click Create provider to add and save the provider.

    +
    +

    The provider appears in the list of providers.

    +
    +
  6. +
+
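For reference, a source provider can also be created declaratively. The following is a minimal, hypothetical sketch of a vSphere Provider CR based on the forklift.konveyor.io/v1beta1 API used elsewhere in this documentation; treat the field names as assumptions:

apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <provider_name>
  namespace: konveyor-forklift
spec:
  type: vsphere
  url: https://<vcenter_host>/sdk    # vCenter SDK endpoint
  secret:
    name: <credentials_secret>      # Secret holding the vCenter credentials
    namespace: konveyor-forklift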
+ + +
diff --git a/documentation/doc-Release_notes/modules/adding-virt-provider/index.html b/documentation/doc-Release_notes/modules/adding-virt-provider/index.html
new file mode 100644
index 00000000000..c8e68008cdd
--- /dev/null
+++ b/documentation/doc-Release_notes/modules/adding-virt-provider/index.html
+

Adding a KubeVirt destination provider

+
+

You can add a KubeVirt destination provider to the OKD web console in addition to the default KubeVirt destination provider, which is the provider where you installed Forklift.

+
+
+
Prerequisites
+ +
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → Providers for virtualization.

    +
  2. +
  3. +

    Click Create Provider.

    +
  4. +
  5. +

    Select KubeVirt from the Provider type list.

    +
  6. +
  7. +

    Specify the following fields:

    +
    +
      +
    • +

      Provider name: Specify the provider name to display in the list of target providers.

      +
    • +
    • +

      Kubernetes API server URL: Specify the OKD cluster API endpoint.

      +
    • +
    • +

      Service account token: Specify the cluster-admin service account token.

      +
      +

      If both URL and Service account token are left blank, the local OKD cluster is used.

      +
      +
    • +
    +
    +
  8. +
  9. +

    Click Create.

    +
    +

    The provider appears in the list of providers.

    +
    +
  10. +
+
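The same destination provider can be declared from the command line. The following is a hedged sketch of the equivalent Provider CR; the field names are assumptions based on the forklift.konveyor.io/v1beta1 API:

apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <provider_name>
  namespace: konveyor-forklift
spec:
  type: openshift
  url: <api_server_url>       # leave url and secret unset to use the local cluster
  secret:
    name: <token_secret>      # Secret holding the service account token
    namespace: konveyor-forklift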
+ + +
diff --git a/documentation/doc-Release_notes/modules/canceling-migration-cli/index.html b/documentation/doc-Release_notes/modules/canceling-migration-cli/index.html
new file mode 100644
index 00000000000..e5508c9c505
--- /dev/null
+++ b/documentation/doc-Release_notes/modules/canceling-migration-cli/index.html
+

Canceling a migration

+
+

You can cancel an entire migration or individual virtual machines (VMs) while a migration is in progress from the command line interface (CLI).

+
+
+
Canceling an entire migration
+
    +
  • +

    Delete the Migration CR:

    +
    +
    +
    $ kubectl delete migration <migration> -n <namespace> (1)
    +
    +
    +
    +
      +
    1. +

      Specify the name of the Migration CR.

      +
    2. +
    +
    +
  • +
+
+
+
Canceling the migration of individual VMs
+
    +
  1. +

    Add the individual VMs to the spec.cancel block of the Migration manifest:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Migration
    +metadata:
    +  name: <migration>
    +  namespace: <namespace>
    +...
    +spec:
    +  cancel:
    +  - id: vm-102 (1)
    +  - id: vm-203
    +  - name: rhel8-vm
    +EOF
    +
    +
    +
    +
      +
    1. +

      You can specify a VM by using the id key or the name key.

      +
      +

      The value of the id key is the managed object reference for a VMware VM, or the VM UUID for an oVirt VM.

      +
      +
    2. +
    +
    +
  2. +
  3. +

    Retrieve the Migration CR to monitor the progress of the remaining VMs:

    +
    +
    +
    $ kubectl get migration/<migration> -n <namespace> -o yaml
    +
    +
    +
  4. +
+
+ + +
diff --git a/documentation/doc-Release_notes/modules/canceling-migration-ui/index.html b/documentation/doc-Release_notes/modules/canceling-migration-ui/index.html
new file mode 100644
index 00000000000..7f415c27acb
--- /dev/null
+++ b/documentation/doc-Release_notes/modules/canceling-migration-ui/index.html
+

Canceling a migration

+
+

You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the OKD web console.

+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Plans for virtualization.

    +
  2. +
  3. +

    Click the name of a running migration plan to view the migration details.

    +
  4. +
  5. +

    Select one or more VMs and click Cancel.

    +
  6. +
  7. +

    Click Yes, cancel to confirm the cancellation.

    +
    +

    In the Migration details by VM list, the status of the canceled VMs is Canceled. The unmigrated and the migrated virtual machines are not affected.

    +
    +
  8. +
+
+
+

You can restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.

+
+ + +
diff --git a/documentation/doc-Release_notes/modules/changing-precopy-intervals/index.html b/documentation/doc-Release_notes/modules/changing-precopy-intervals/index.html
new file mode 100644
index 00000000000..20c4b305c89
--- /dev/null
+++ b/documentation/doc-Release_notes/modules/changing-precopy-intervals/index.html
+

Changing precopy intervals for warm migration

+
+

You can change the snapshot interval by patching the ForkliftController custom resource (CR).

+
+
+
Procedure
+
    +
  • +

    Patch the ForkliftController CR:

    +
    +
    +
    $ kubectl patch forkliftcontroller/<forklift-controller> -n konveyor-forklift -p '{"spec": {"controller_precopy_interval": <60>}}' --type=merge (1)
    +
    +
    +
    +
      +
    1. +

      Specify the precopy interval in minutes. The default value is 60.

      +
      +

      You do not need to restart the forklift-controller pod.

      +
      +
    2. +
    +
    +
  • +
+
+ + +
diff --git a/documentation/doc-Release_notes/modules/collected-logs-cr-info/index.html b/documentation/doc-Release_notes/modules/collected-logs-cr-info/index.html
new file mode 100644
index 00000000000..8e707ef52b9
--- /dev/null
+++ b/documentation/doc-Release_notes/modules/collected-logs-cr-info/index.html
+

Collected logs and custom resource information

+
+

You can download logs and custom resource (CR) yaml files for the following targets by using the OKD web console or the command line interface (CLI):

+
+
+
    +
  • +

    Migration plan: Web console or CLI.

    +
  • +
  • +

    Virtual machine: Web console or CLI.

    +
  • +
  • +

    Namespace: CLI only.

    +
  • +
+
+
+

The must-gather tool collects the following logs and CR files in an archive file:

+
+
+
    +
  • +

    CRs:

    +
    +
      +
    • +

      DataVolume CR: Represents a disk mounted on a migrated VM.

      +
    • +
    • +

      VirtualMachine CR: Represents a migrated VM.

      +
    • +
    • +

      Plan CR: Defines the VMs and storage and network mapping.

      +
    • +
    • +

      Job CR: Optional: Represents a pre-migration hook, a post-migration hook, or both.

      +
    • +
    +
    +
  • +
  • +

    Logs:

    +
    +
      +
    • +

      importer pod: Disk-to-data-volume conversion log. The importer pod naming convention is importer-<migration_plan>-<vm_id><5_char_id>, for example, importer-mig-plan-ed90dfc6-9a17-4a8btnfh, where ed90dfc6-9a17-4a8 is a truncated oVirt VM ID and btnfh is the generated 5-character ID.

      +
    • +
    • +

      conversion pod: VM conversion log. The conversion pod runs virt-v2v, which installs and configures device drivers on the PVCs of the VM. The conversion pod naming convention is <migration_plan>-<vm_id><5_char_id>.

      +
    • +
    • +

      virt-launcher pod: VM launcher log. When a migrated VM is powered on, the virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

      +
    • +
    • +

      forklift-controller pod: The log is filtered for the migration plan, virtual machine, or namespace specified by the must-gather command.

      +
    • +
    • +

      forklift-must-gather-api pod: The log is filtered for the migration plan, virtual machine, or namespace specified by the must-gather command.

      +
    • +
    • +

      hook-job pod: The log is filtered for hook jobs. The hook-job naming convention is <migration_plan>-<vm_id><5_char_id>, for example, plan2j-vm-3696-posthook-4mx85 or plan2j-vm-3696-prehook-mwqnl.

      +
      + + + + + +
      +
      Note
      +
      +
      +

      Empty or excluded log files are not included in the must-gather archive file.

      +
      +
      +
      +
    • +
    +
    +
  • +
+
+
+
Example must-gather archive structure for a VMware migration plan
+
+
must-gather
+└── namespaces
+    ├── target-vm-ns
+    │   ├── crs
+    │   │   ├── datavolume
+    │   │   │   ├── mig-plan-vm-7595-tkhdz.yaml
+    │   │   │   ├── mig-plan-vm-7595-5qvqp.yaml
+    │   │   │   └── mig-plan-vm-8325-xccfw.yaml
+    │   │   └── virtualmachine
+    │   │       ├── test-test-rhel8-2disks2nics.yaml
+    │   │       └── test-x2019.yaml
+    │   └── logs
+    │       ├── importer-mig-plan-vm-7595-tkhdz
+    │       │   └── current.log
+    │       ├── importer-mig-plan-vm-7595-5qvqp
+    │       │   └── current.log
+    │       ├── importer-mig-plan-vm-8325-xccfw
+    │       │   └── current.log
+    │       ├── mig-plan-vm-7595-4glzd
+    │       │   └── current.log
+    │       └── mig-plan-vm-8325-4zw49
+    │           └── current.log
+    └── openshift-mtv
+        ├── crs
+        │   └── plan
+        │       └── mig-plan-cold.yaml
+        └── logs
+            ├── forklift-controller-67656d574-w74md
+            │   └── current.log
+            └── forklift-must-gather-api-89fc7f4b6-hlwb6
+                └── current.log
+
+
+ + +
diff --git a/documentation/doc-Release_notes/modules/common-attributes/index.html b/documentation/doc-Release_notes/modules/common-attributes/index.html
new file mode 100644
index 00000000000..1887404da45
--- /dev/null
+++ b/documentation/doc-Release_notes/modules/common-attributes/index.html
diff --git a/documentation/doc-Release_notes/modules/compatibility-guidelines/index.html b/documentation/doc-Release_notes/modules/compatibility-guidelines/index.html
new file mode 100644
index 00000000000..49ac5c53ec7
--- /dev/null
+++ b/documentation/doc-Release_notes/modules/compatibility-guidelines/index.html
+

Software compatibility guidelines

+
+
+
+

You must install compatible software versions.

+
+ + ++++++++ + + + + + + + + + + + + + + + + + + + + +
Table 1. Compatible software versions
Forklift | OKD | KubeVirt | VMware vSphere | oVirt | OpenStack
2.3.0 | 4.10 or later | 4.10 or later | 6.5 or later | 4.4 SP1 or later | 16.1 or later

+
+ + + + + +
+
Note
+
+
Migration from oVirt 4.3
+
+

Forklift was tested only with oVirt (RHV) 4.4 SP1. Migration from oVirt 4.3 is not supported. However, migrations from oVirt 4.3.11 were tested with Forklift 2.3, and basic migrations from oVirt 4.3 are expected to work in many environments.

+
+
+

It is therefore recommended that you upgrade oVirt Manager (RHVM) to the supported version noted above before migrating to KubeVirt.

+
+
+
+
+
+
+

OpenShift Operator Life Cycles

+
+
+

For more information about the software maintenance Life Cycle classifications for Operators shipped by Red Hat for use with OpenShift Container Platform, see OpenShift Operator Life Cycles.

+
+
+
+ + +
diff --git a/documentation/doc-Release_notes/modules/configuring-mtv-operator/index.html b/documentation/doc-Release_notes/modules/configuring-mtv-operator/index.html
new file mode 100644
index 00000000000..51fd24c7245
--- /dev/null
+++ b/documentation/doc-Release_notes/modules/configuring-mtv-operator/index.html
+

Configuring the Forklift Operator

+
+

You can configure all of the following settings of the Forklift Operator by modifying the ForkliftController CR, or in the Settings section of the Overview page, unless otherwise indicated.

+
+
+
    +
  • +

    Maximum number of virtual machines (VMs) per plan that can be migrated simultaneously.

    +
  • +
  • +

    How long must-gather reports are retained before they are automatically deleted.

    +
  • +
  • +

    CPU limit allocated to the main controller container.

    +
  • +
  • +

    Memory limit allocated to the main controller container.

    +
  • +
  • +

    Interval at which a new snapshot is requested before initiating a warm migration.

    +
  • +
  • +

    Frequency with which the system checks the status of snapshot creation or removal during a warm migration.

    +
  • +
  • +

    Percentage of space in persistent volumes allocated as file system overhead when the storageclass is filesystem (ForkliftController CR only).

    +
  • +
  • +

    Fixed amount of additional space allocated in persistent block volumes. This setting is applicable for any storageclass that is block-based (ForkliftController CR only).

    +
  • +
  • +

    Configuration map of operating systems to preferences for vSphere source providers (ForkliftController CR only).

    +
  • +
  • +

    Configuration map of operating systems to preferences for oVirt source providers (ForkliftController CR only).

    +
  • +
+
+
+

The procedure for configuring these settings by using the user interface is presented in Configuring MTV settings. The procedure for configuring them by modifying the ForkliftController CR follows.

+
+
+
Procedure
+
    +
  • +

    Change a parameter’s value in the spec portion of the ForkliftController CR by adding the label and value as follows:

    +
  • +
+
+
+
+
spec:
+  label: value (1)
+
+
+
+
    +
  1. +

    Labels you can configure using the CLI are shown in the table that follows, along with a description of each label and its default value.

    +
  2. +
+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Forklift Operator labels
Label | Description | Default value

controller_max_vm_inflight

The maximum number of VMs per plan that can be migrated simultaneously.

20

must_gather_api_cleanup_max_age

The duration in hours for retaining must gather reports before they are automatically deleted.

-1 (disabled)

controller_container_limits_cpu

The CPU limit allocated to the main controller container.

500m

controller_container_limits_memory

The memory limit allocated to the main controller container.

800Mi

controller_precopy_interval

The interval in minutes at which a new snapshot is requested before initiating a warm migration.

60

controller_snapshot_status_check_rate_seconds

The frequency in seconds with which the system checks the status of snapshot creation or removal during a warm migration.

10

controller_filesystem_overhead

Percentage of space in persistent volumes allocated as file system overhead when the storageclass is filesystem.

+

ForkliftController CR only.

10

controller_block_overhead

Fixed amount of additional space allocated in persistent block volumes. This setting is applicable for any storageclass that is block-based. It can be used when data, such as encryption headers, is written to the persistent volumes in addition to the content of the virtual disk.

+

ForkliftController CR only.

0

vsphere_osmap_configmap_name

Configuration map for vSphere source providers. This configuration map maps the operating system of the incoming VM to a KubeVirt preference name. This configuration map needs to be in the namespace where the Forklift Operator is deployed.

+

To see the list of preferences in your KubeVirt environment, open the {ocp-name} web console and click VirtualizationPreferences.

+

You can add values to the configuration map when this label has the default value, forklift-vsphere-osmap. In order to override or delete values, specify a configuration map that is different from forklift-vsphere-osmap.

+

ForkliftController CR only.

forklift-vsphere-osmap

ovirt_osmap_configmap_name

Configuration map for oVirt source providers. This configuration map maps the operating system of the incoming VM to a KubeVirt preference name. This configuration map needs to be in the namespace where the Forklift Operator is deployed.

+

To see the list of preferences in your KubeVirt environment, open the {ocp-name} web console and click VirtualizationPreferences.

+

You can add values to the configuration map when this label has the default value, forklift-ovirt-osmap. In order to override or delete values, specify a configuration map that is different from forklift-ovirt-osmap.

+

ForkliftController CR only.

forklift-ovirt-osmap

+ + +
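For example, a minimal sketch of the spec portion of a ForkliftController CR that raises the parallel-migration limit and shortens the precopy interval. The labels come from the table above; the values are illustrative, not recommendations:

spec:
  controller_max_vm_inflight: 40    # allow up to 40 VMs per plan to migrate in parallel
  controller_precopy_interval: 30   # request a new warm-migration snapshot every 30 minutes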
diff --git a/documentation/doc-Release_notes/modules/creating-migration-plan-2-6-3/index.html b/documentation/doc-Release_notes/modules/creating-migration-plan-2-6-3/index.html
new file mode 100644
The Create migration plan pane opens. It displays the source provider's name and suggestions for a target provider and namespace, a network map, and a storage map.

1. Enter the Plan name.
2. Make any needed changes to the editable items.
3. Click Add mapping to edit a suggested network mapping or a storage mapping, or to add one or more additional mappings.
4. Click Create migration plan.

   Forklift validates the migration plan, and the Plan details page opens, indicating whether the plan is ready for use or contains an error. The details of the plan are listed, and you can edit the items you filled in on the previous page. If you make any changes, Forklift validates the plan again.

5. VMware source providers only (all optional):

   • Preserving static IPs of VMs: By default, virtual network interface controllers (vNICs) change during the migration process. As a result, vNICs that are configured with a static IP linked to the interface name in the guest VM lose their IP. To avoid this, click the Edit icon next to Preserve static IPs and toggle the Whether to preserve the static IPs switch in the window that opens. Then click Save.

     Forklift then issues a warning message about any VMs for which vNIC properties are missing. To retrieve any missing vNIC properties, run those VMs in vSphere so that the vNIC properties are reported to Forklift.

   • Entering a list of decryption passphrases for disks encrypted using Linux Unified Key Setup (LUKS): To enter a list of decryption passphrases for LUKS-encrypted devices, in the Settings section, click the Edit icon next to Disk decryption passphrases, enter the passphrases, and then click Save. You do not need to enter the passphrases in a specific order. For each LUKS-encrypted device, Forklift tries each passphrase until one unlocks the device.

   • Specifying a root device: Applies to multi-boot VM migrations only. By default, Forklift uses the first bootable device detected as the root device.

     To specify a different root device, in the Settings section, click the Edit icon next to Root device and choose a device from the list of commonly used options, or enter a device in the text box.

     Forklift uses the following format for the disk location: /dev/sd<disk_identifier><disk_partition>. For example, if the second disk is the root device and the operating system is on the disk's second partition, the format is /dev/sdb2. After you enter the boot device, click Save.

     If the conversion fails because the boot device provided is incorrect, you can find the correct device by examining the conversion pod logs.

6. oVirt source providers only (optional):

   • Preserving the CPU model of VMs that are migrated from oVirt: Generally, the CPU model (type) for oVirt VMs is set at the cluster level, but it can be set at the VM level, which is called a custom CPU model. By default, Forklift sets the CPU model on the destination cluster as follows: Forklift preserves custom CPU settings for VMs that have them, but, for VMs without custom CPU settings, Forklift does not set the CPU model. Instead, the CPU model is later set by KubeVirt.

     To preserve the cluster-level CPU model of your oVirt VMs, in the Settings section, click the Edit icon next to Preserve CPU model. Toggle the Whether to preserve the CPU model switch, and then click Save.

7. If the plan is valid:

   a. You can run the plan now by clicking Start migration.
   b. You can run the plan later by selecting it on the Plans for virtualization page and following the procedure in Running a migration plan.
diff --git a/documentation/doc-Release_notes/modules/creating-migration-plan/index.html b/documentation/doc-Release_notes/modules/creating-migration-plan/index.html
new file mode 100644
Creating a migration plan

You can create a migration plan by using the OKD web console.

A migration plan allows you to group virtual machines to be migrated together or with the same migration parameters, for example, a percentage of the members of a cluster or a complete application.

You can configure a hook to run an Ansible playbook or custom container image during a specified stage of the migration plan.
Prerequisites

• If Forklift is not installed on the target cluster, you must add a target provider on the Providers page of the web console.
Procedure

1. In the OKD web console, click Migration → Plans for virtualization.
2. Click Create plan.
3. Specify the following fields:

   • Plan name: Enter a migration plan name to display in the migration plan list.
   • Plan description: Optional: Brief description of the migration plan.
   • Source provider: Select a source provider.
   • Target provider: Select a target provider.
   • Target namespace: Do one of the following:
     • Select a target namespace from the list.
     • Create a target namespace by typing its name in the text box, and then clicking create "<the_name_you_entered>".
   • You can change the migration transfer network for this plan by clicking Select a different network, selecting a network from the list, and then clicking Select.

     If you defined a migration transfer network for the KubeVirt provider and if the network is in the target namespace, the network that you defined is the default network for all migration plans. Otherwise, the pod network is used.

4. Click Next.
5. Select options to filter the list of source VMs and click Next.
6. Select the VMs to migrate and then click Next.
7. Select an existing network mapping or create a new network mapping. Optional: Click Add to add an additional network mapping.

   To create a new network mapping:

   • Select a target network for each source network.
   • Optional: Select Save current mapping as a template and enter a name for the network mapping.

8. Click Next.
9. Select an existing storage mapping, which you can modify, or create a new storage mapping.

   To create a new storage mapping:

   a. If your source provider is VMware, select a Source datastore and a Target storage class.
   b. If your source provider is oVirt, select a Source storage domain and a Target storage class.
   c. If your source provider is {osp}, select a Source volume type and a Target storage class.

10. Optional: Select Save current mapping as a template and enter a name for the storage mapping.
11. Click Next.
12. Select a migration type and click Next.

    • Cold migration: The source VMs are stopped while the data is copied.
    • Warm migration: The source VMs run while the data is copied incrementally. Later, you will run the cutover, which stops the VMs and copies the remaining VM data and metadata.

      Note: Warm migration is supported only from vSphere and oVirt.

13. Click Next.
14. Optional: You can create a migration hook to run an Ansible playbook before or after migration (a playbook sketch follows this procedure):

    a. Click Add hook.
    b. Select the Step when the hook will be run: pre-migration or post-migration.
    c. Select a Hook definition:

       • Ansible playbook: Browse to the Ansible playbook or paste it into the field.
       • Custom container image: If you do not want to use the default hook-runner image, enter the image path: <registry_path>/<image_name>:<tag>.

         Note: The registry must be accessible to your OKD cluster.

15. Click Next.
16. Review your migration plan and click Finish.

    The migration plan is saved on the Plans page.

    You can click the {kebab} of the migration plan and select View details to verify the migration plan details.
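The hook playbook referenced in step 14 is ordinary Ansible. A minimal sketch of a pre-migration playbook; the task is illustrative and assumes nothing about your environment beyond the default hook-runner image being able to run it:

- name: Pre-migration hook (illustrative)
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Record that the migration is starting
      ansible.builtin.debug:
        msg: "Migration plan starting"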

diff --git a/documentation/doc-Release_notes/modules/creating-network-mapping/index.html b/documentation/doc-Release_notes/modules/creating-network-mapping/index.html
new file mode 100644
Creating a network mapping

You can create one or more network mappings by using the OKD web console to map source networks to KubeVirt networks.

Prerequisites

• Source and target providers added to the OKD web console.
• If you map more than one source and target network, each additional KubeVirt network requires its own network attachment definition.

Procedure

1. In the OKD web console, click Migration → NetworkMaps for virtualization.
2. Click Create NetworkMap.
3. Specify the following fields:

   • Name: Enter a name to display in the network mappings list.
   • Source provider: Select a source provider.
   • Target provider: Select a target provider.

4. Select a Source network and a Target namespace/network.
5. Optional: Click Add to create additional network mappings or to map multiple source networks to a single target network.
6. If you create an additional network mapping, select the network attachment definition as the target network.
7. Click Create.

   The network mapping is displayed on the NetworkMaps screen.
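The console form corresponds to a NetworkMap custom resource, which you can also apply from the CLI. A minimal sketch, assuming Forklift runs in the konveyor-forklift namespace; all names and IDs are placeholders, and the source network ID comes from the provider inventory:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: NetworkMap
metadata:
  name: <network_map>
  namespace: konveyor-forklift
spec:
  provider:
    source:
      name: <source_provider>
      namespace: konveyor-forklift
    destination:
      name: <destination_provider>
      namespace: konveyor-forklift
  map:
    - source:
        id: <source_network_id>
      destination:          # map this source network to the pod network
        type: pod
    - source:
        id: <source_network_id>
      destination:          # or to a network attachment definition
        type: multus
        name: <network_attachment_definition>
        namespace: <namespace>
EOF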
diff --git a/documentation/doc-Release_notes/modules/creating-storage-mapping/index.html b/documentation/doc-Release_notes/modules/creating-storage-mapping/index.html
new file mode 100644
Creating a storage mapping

You can create a storage mapping by using the OKD web console to map source disk storages to KubeVirt storage classes.

Prerequisites

• Source and target providers added to the OKD web console.
• Local and shared persistent storage that support VM migration.

Procedure

1. In the OKD web console, click Migration → StorageMaps for virtualization.
2. Click Create StorageMap.
3. Specify the following fields:

   • Name: Enter a name to display in the storage mappings list.
   • Source provider: Select a source provider.
   • Target provider: Select a target provider.

4. To create a storage mapping, click Add and map storage sources to target storage classes as follows:

   a. If your source provider is VMware vSphere, select a Source datastore and a Target storage class.
   b. If your source provider is oVirt, select a Source storage domain and a Target storage class.
   c. If your source provider is {osp}, select a Source volume type and a Target storage class.
   d. If your source provider is a set of one or more OVA files, select a Source and a Target storage class for the dummy storage that applies to all virtual disks within the OVA files.
   e. If your source provider is KubeVirt, select a Source storage class and a Target storage class.
   f. Optional: Click Add to create additional storage mappings, including mapping multiple storage sources to a single target storage class.

5. Click Create.

   The mapping is displayed on the StorageMaps page.
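As with network mappings, the form corresponds to a StorageMap custom resource. A minimal sketch, assuming Forklift runs in the konveyor-forklift namespace; names and IDs are placeholders, and the source ID identifies the datastore, storage domain, or volume type in the provider inventory:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
  name: <storage_map>
  namespace: konveyor-forklift
spec:
  provider:
    source:
      name: <source_provider>
      namespace: konveyor-forklift
    destination:
      name: <destination_provider>
      namespace: konveyor-forklift
  map:
    - source:
        id: <source_storage_id>
      destination:
        storageClass: <storage_class>
EOF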
diff --git a/documentation/doc-Release_notes/modules/creating-validation-rule/index.html b/documentation/doc-Release_notes/modules/creating-validation-rule/index.html
new file mode 100644
Creating a validation rule

You create a validation rule by applying a config map custom resource (CR) containing the rule to the Validation service.

Important:

• If you create a rule with the same name as an existing rule, the Validation service performs an OR operation with the rules.
• If you create a rule that contradicts a default rule, the Validation service will not start.
Validation rule example

Validation rules are based on virtual machine (VM) attributes collected by the Provider Inventory service.

For example, the VMware API uses this path to check whether a VMware VM has NUMA node affinity configured: MOR:VirtualMachine.config.extraConfig["numa.nodeAffinity"].

The Provider Inventory service simplifies this configuration and returns a testable attribute with a list value:

"numaNodeAffinity": [
    "0",
    "1"
],

You create a Rego query, based on this attribute, and add it to the forklift-validation-config config map:

count(input.numaNodeAffinity) != 0
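A complete rule built on this query might look like the following sketch, modeled on the multiple-disks example in the procedure below. The rule name, label, and assessment text are illustrative:

vmware_numa_node_affinity.rego: |-
  package io.konveyor.forklift.vmware

  has_numa_affinity {                      # true when the attribute list is non-empty
      count(input.numaNodeAffinity) != 0
  }

  concerns[flag] {
      has_numa_affinity
      flag := {
          "category": "Warning",
          "label": "NUMA node affinity detected",
          "assessment": "NUMA node affinity is configured on this VM."
      }
  }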
Procedure

1. Create a config map CR according to the following example:

   $ cat << EOF | kubectl apply -f -
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: <forklift-validation-config>
     namespace: konveyor-forklift
   data:
     vmware_multiple_disks.rego: |-
       package <provider_package> (1)

       has_multiple_disks { (2)
         count(input.disks) > 1
       }

       concerns[flag] {
         has_multiple_disks (3)
           flag := {
             "category": "<Information>", (4)
             "label": "Multiple disks detected",
             "assessment": "Multiple disks detected on this VM."
           }
       }
   EOF

   (1) Specify the provider package name. Allowed values are io.konveyor.forklift.vmware for VMware and io.konveyor.forklift.ovirt for oVirt.
   (2) Specify the concerns name and Rego query.
   (3) Specify the concerns name and flag parameter values.
   (4) Allowed values are Critical, Warning, and Information.

2. Stop the Validation pod by scaling the forklift-controller deployment to 0:

   $ kubectl scale -n konveyor-forklift --replicas=0 deployment/forklift-controller

3. Start the Validation pod by scaling the forklift-controller deployment to 1:

   $ kubectl scale -n konveyor-forklift --replicas=1 deployment/forklift-controller

4. Check the Validation pod log to verify that the pod started:

   $ kubectl logs -f <validation_pod>

   If the custom rule conflicts with a default rule, the Validation pod will not start.

5. Remove the source provider:

   $ kubectl delete provider <provider> -n konveyor-forklift

6. Add the source provider to apply the new rule:

   $ cat << EOF | kubectl apply -f -
   apiVersion: forklift.konveyor.io/v1beta1
   kind: Provider
   metadata:
     name: <provider>
     namespace: konveyor-forklift
   spec:
     type: <provider_type> (1)
     url: <api_end_point> (2)
     secret:
       name: <secret> (3)
       namespace: konveyor-forklift
   EOF

   (1) Allowed values are ovirt, vsphere, and openstack.
   (2) Specify the API endpoint URL, for example, https://<vCenter_host>/sdk for vSphere, https://<engine_host>/ovirt-engine/api for oVirt, or https://<identity_service>/v3 for {osp}.
   (3) Specify the name of the provider Secret CR.

You must update the rules version after creating a custom rule so that the Inventory service detects the changes and validates the VMs.

diff --git a/documentation/doc-Release_notes/modules/creating-vddk-image/index.html b/documentation/doc-Release_notes/modules/creating-vddk-image/index.html
new file mode 100644
Creating a VDDK image

Forklift can use the VMware Virtual Disk Development Kit (VDDK) SDK to accelerate transferring virtual disks from VMware vSphere.

Note: Creating a VDDK image, although optional, is highly recommended.

To use this feature, you download the VMware Virtual Disk Development Kit (VDDK), build a VDDK image, and push the VDDK image to your image registry.

The VDDK package contains symbolic links. Therefore, you must create the VDDK image on a file system that preserves symbolic links (symlinks).

Note: Storing the VDDK image in a public registry might violate the VMware license terms.
Prerequisites

• OKD image registry.
• podman installed.
• You are working on a file system that preserves symbolic links (symlinks).
• If you are using an external registry, KubeVirt must be able to access it.
Procedure

1. Create and navigate to a temporary directory:

   $ mkdir /tmp/<dir_name> && cd /tmp/<dir_name>

2. In a browser, navigate to the VMware VDDK version 8 download page.
3. Select version 8.0.1 and click Download.

   Note: In order to migrate to KubeVirt 4.12, download VDDK version 7.0.3.2 from the VMware VDDK version 7 download page.

4. Save the VDDK archive file in the temporary directory.
5. Extract the VDDK archive:

   $ tar -xzf VMware-vix-disklib-<version>.x86_64.tar.gz

6. Create a Dockerfile:

   $ cat > Dockerfile <<EOF
   FROM registry.access.redhat.com/ubi8/ubi-minimal
   USER 1001
   COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
   RUN mkdir -p /opt
   ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]
   EOF

7. Build the VDDK image:

   $ podman build . -t <registry_route_or_server_path>/vddk:<tag>

8. Push the VDDK image to the registry:

   $ podman push <registry_route_or_server_path>/vddk:<tag>

9. Ensure that the image is accessible to your KubeVirt environment (a provider-settings sketch follows this procedure).
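Forklift must then be told where the image lives. In recent Forklift releases this is typically referenced from the vSphere Provider CR; a minimal sketch, assuming your release exposes the vddkInitImage setting (verify the exact key for your version):

spec:
  settings:
    vddkInitImage: <registry_route_or_server_path>/vddk:<tag>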
diff --git a/documentation/doc-Release_notes/modules/error-messages/index.html b/documentation/doc-Release_notes/modules/error-messages/index.html
new file mode 100644
Error messages

This section describes error messages and how to resolve them.

warm import retry limit reached

The warm import retry limit reached error message is displayed during a warm migration if a VMware virtual machine (VM) has reached the maximum number (28) of changed block tracking (CBT) snapshots during the precopy stage.

To resolve this problem, delete some of the CBT snapshots from the VM and restart the migration plan.

Unable to resize disk image to required size

The Unable to resize disk image to required size error message is displayed when migration fails because a virtual machine on the target provider uses persistent volumes with an EXT4 file system on block storage. The problem occurs because the default overhead that is assumed by CDI does not completely include the reserved space for the root partition.

To resolve this problem, increase the file system overhead in CDI to more than 10%.
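Within Forklift, the file system overhead is exposed as the controller_filesystem_overhead label of the ForkliftController CR, described in the Forklift Operator settings table. A sketch that raises it from the default 10 to an illustrative 15 percent:

spec:
  controller_filesystem_overhead: 15   # percentage of PV space reserved as file system overhead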
diff --git a/documentation/doc-Release_notes/modules/images/136_OpenShift_Migration_Toolkit_0121_mtv-workflow.svg b/documentation/doc-Release_notes/modules/images/136_OpenShift_Migration_Toolkit_0121_mtv-workflow.svg
new file mode 100644
diff --git a/documentation/doc-Release_notes/modules/images/136_OpenShift_Migration_Toolkit_0121_virt-workflow.svg b/documentation/doc-Release_notes/modules/images/136_OpenShift_Migration_Toolkit_0121_virt-workflow.svg
new file mode 100644
diff --git a/documentation/doc-Release_notes/modules/images/136_Upstream_Migration_Toolkit_0121_mtv-workflow.svg b/documentation/doc-Release_notes/modules/images/136_Upstream_Migration_Toolkit_0121_mtv-workflow.svg
new file mode 100644
diff --git a/documentation/doc-Release_notes/modules/images/136_Upstream_Migration_Toolkit_0121_virt-workflow.svg b/documentation/doc-Release_notes/modules/images/136_Upstream_Migration_Toolkit_0121_virt-workflow.svg
new file mode 100644
diff --git a/documentation/doc-Release_notes/modules/images/forklift-logo-darkbg.png b/documentation/doc-Release_notes/modules/images/forklift-logo-darkbg.png
new file mode 100644
Binary files /dev/null and b/documentation/doc-Release_notes/modules/images/forklift-logo-darkbg.png differ
diff --git a/documentation/doc-Release_notes/modules/images/forklift-logo-darkbg.svg b/documentation/doc-Release_notes/modules/images/forklift-logo-darkbg.svg
new file mode 100644
diff --git a/documentation/doc-Release_notes/modules/images/forklift-logo-lightbg.png b/documentation/doc-Release_notes/modules/images/forklift-logo-lightbg.png
new file mode 100644
Binary files /dev/null and b/documentation/doc-Release_notes/modules/images/forklift-logo-lightbg.png differ
diff --git a/documentation/doc-Release_notes/modules/images/forklift-logo-lightbg.svg b/documentation/doc-Release_notes/modules/images/forklift-logo-lightbg.svg
new file mode 100644
diff --git a/documentation/doc-Release_notes/modules/images/kebab.png b/documentation/doc-Release_notes/modules/images/kebab.png
new file mode 100644
Binary files /dev/null and b/documentation/doc-Release_notes/modules/images/kebab.png differ
diff --git a/documentation/doc-Release_notes/modules/images/mtv-ui.png b/documentation/doc-Release_notes/modules/images/mtv-ui.png
new file mode 100644
Binary files /dev/null and b/documentation/doc-Release_notes/modules/images/mtv-ui.png differ
diff --git a/documentation/doc-Release_notes/modules/increasing-nfc-memory-vmware-host/index.html b/documentation/doc-Release_notes/modules/increasing-nfc-memory-vmware-host/index.html
new file mode 100644
Increasing the NFC service memory of an ESXi host

If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host. Otherwise, the migration will fail because the NFC service memory is limited to 10 parallel connections.

Procedure

1. Log in to the ESXi host as root.
2. Change the value of maxMemory to 1000000000 in /etc/vmware/hostd/config.xml:

   ...
         <nfcsvc>
            <path>libnfcsvc.so</path>
            <enabled>true</enabled>
            <maxMemory>1000000000</maxMemory>
            <maxStreamMemory>10485760</maxStreamMemory>
         </nfcsvc>
   ...

3. Restart hostd:

   # /etc/init.d/hostd restart

   You do not need to reboot the host.
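If you prefer to script step 2, a one-line sketch; it assumes the nfcsvc block already contains a maxMemory element and that the host's sed supports in-place editing:

# sed -i '/<nfcsvc>/,/<\/nfcsvc>/ s|<maxMemory>[0-9]*</maxMemory>|<maxMemory>1000000000</maxMemory>|' /etc/vmware/hostd/config.xml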
diff --git a/documentation/doc-Release_notes/modules/installing-mtv-operator/index.html b/documentation/doc-Release_notes/modules/installing-mtv-operator/index.html
new file mode 100644
Prerequisites

• OKD 4.10 or later installed.
• KubeVirt Operator installed on an OpenShift migration target cluster.
• You must be logged in as a user with cluster-admin permissions.
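A quick way to confirm the last prerequisite, assuming the oc CLI is logged in to the target cluster; a cluster-admin returns yes:

$ oc auth can-i '*' '*' --all-namespaces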
diff --git a/documentation/doc-Release_notes/modules/issue_templates/issue.md b/documentation/doc-Release_notes/modules/issue_templates/issue.md
new file mode 100644
+## Summary
+
+(Describe the problem. Don't worry if the problem occurs in more than one checklist. You only need to mention the checklist where you see a problem. We will fix the module.)
+
+## What is the problem?
+
+(Paste the text or a screenshot here. Remember to include the **task number** so that we know which module is affected.)
+
+## What is the solution?
+
+(Correct text, link, or task.)
+
+## Notes
+
+(Do we need to fix something else?)
diff --git a/documentation/doc-Release_notes/modules/issue_templates/issue/index.html b/documentation/doc-Release_notes/modules/issue_templates/issue/index.html
new file mode 100644
diff --git a/documentation/doc-Release_notes/modules/known-issues-2-7/index.html b/documentation/doc-Release_notes/modules/known-issues-2-7/index.html
new file mode 100644
Known issues

Forklift 2.7 has the following known issues:

Select Migration Network from the endpoint type ESXi displays multiple incorrect networks

When you choose Select Migration Network from an endpoint of type ESXi, multiple incorrect networks are displayed. (MTV-1291)

Network and storage maps in the UI are not correct when created from the command line

When you create network and storage maps from the command line, the correct names are not shown in the UI. (MTV-1421)

Migration fails with module network-legacy configured in RHEL guests

Migration fails if the module configuration file is available in the guest and the dhcp-client package is not installed, returning a dracut module 'network-legacy' will not be installed, because command 'dhclient' could not be found error. (MTV-1615)
diff --git a/documentation/doc-Release_notes/modules/making-open-source-more-inclusive/index.html b/documentation/doc-Release_notes/modules/making-open-source-more-inclusive/index.html
new file mode 100644
Making open source more inclusive

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message.
diff --git a/documentation/doc-Release_notes/modules/migration-plan-options-ui/index.html b/documentation/doc-Release_notes/modules/migration-plan-options-ui/index.html
new file mode 100644
Migration plan options

On the Plans for virtualization page of the OKD web console, you can click the {kebab} beside a migration plan to access the following options:

• Get logs: Retrieves the logs of a migration. When you click Get logs, a confirmation window opens. After you click Get logs in the window, wait until Get logs changes to Download logs and then click the button to download the logs.
• Edit: Edit the details of a migration plan. You cannot edit a migration plan while it is running or after it has completed successfully.
• Duplicate: Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks:
  • Migrate VMs to a different namespace.
  • Edit an archived migration plan.
  • Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready.
• Archive: Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed (a CLI sketch for archiving follows this list).

  Note: The Archive option is irreversible. However, you can duplicate an archived plan.

• Delete: Permanently remove a migration plan. You cannot delete a running migration plan.

  Note: The Delete option is irreversible.

  Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. (BZ#2018974) You must archive a migration plan before deleting it in order to clean up the temporary resources.

• View details: Display the details of a migration plan.
• Restart: Restart a failed or canceled migration plan.
• Cancel scheduled cutover: Cancel a scheduled cutover migration for a warm migration plan.
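Archiving can also be done from the CLI by setting the plan's archived field; a sketch, assuming the plan lives in the konveyor-forklift namespace:

$ kubectl patch plan <plan> -n konveyor-forklift --type merge -p '{"spec":{"archived":true}}'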
diff --git a/documentation/doc-Release_notes/modules/mtv-changelog-2-7/index.html b/documentation/doc-Release_notes/modules/mtv-changelog-2-7/index.html
new file mode 100644
Forklift changelog

The following changelog for Forklift includes a full list of packages used in the Forklift 2.7 releases.

Forklift 2.7 packages
Table 1. Forklift packages

| Forklift 2.7.0 | Forklift 2.7.2 | Forklift 2.7.3 |
| -------------- | -------------- | -------------- |

| abattis-cantarell-fonts-0.301-4.el9.noarch | abattis-cantarell-fonts-0.301-4.el9.noarch | abattis-cantarell-fonts-0.301-4.el9.noarch |
| acl-2.3.1-4.el9.x86_64 | acl-2.3.1-4.el9.x86_64 | acl-2.3.1-4.el9.x86_64 |
| adobe-source-code-pro-fonts-2.030.1.050-12.el9.1.noarch | adobe-source-code-pro-fonts-2.030.1.050-12.el9.1.noarch | adobe-source-code-pro-fonts-2.030.1.050-12.el9.1.noarch |
| alternatives-1.24-1.el9.x86_64 | alternatives-1.24-1.el9.x86_64 | alternatives-1.24-1.el9.x86_64 |
| attr-2.5.1-3.el9.x86_64 | attr-2.5.1-3.el9.x86_64 | attr-2.5.1-3.el9.x86_64 |
| audit-libs-3.1.2-2.el9.x86_64 | audit-libs-3.1.2-2.el9.x86_64 | audit-libs-3.1.2-2.el9.x86_64 |
| augeas-libs-1.13.0-6.el9_4.x86_64 | augeas-libs-1.13.0-6.el9_4.x86_64 | augeas-libs-1.13.0-6.el9_4.x86_64 |
| basesystem-11-13.el9.noarch | basesystem-11-13.el9.noarch | basesystem-11-13.el9.noarch |
| bash-5.1.8-9.el9.x86_64 | bash-5.1.8-9.el9.x86_64 | bash-5.1.8-9.el9.x86_64 |
| binutils-2.35.2-43.el9.x86_64 | binutils-2.35.2-43.el9.x86_64 | binutils-2.35.2-43.el9.x86_64 |
| binutils-gold-2.35.2-43.el9.x86_64 | binutils-gold-2.35.2-43.el9.x86_64 | binutils-gold-2.35.2-43.el9.x86_64 |
| bzip2-1.0.8-8.el9.x86_64 | bzip2-1.0.8-8.el9.x86_64 | bzip2-1.0.8-8.el9.x86_64 |
| bzip2-libs-1.0.8-8.el9.x86_64 | bzip2-libs-1.0.8-8.el9.x86_64 | bzip2-libs-1.0.8-8.el9.x86_64 |
| ca-certificates-2024.2.69_v8.0.303-91.4.el9_4.noarch | ca-certificates-2024.2.69_v8.0.303-91.4.el9_4.noarch | ca-certificates-2024.2.69_v8.0.303-91.4.el9_4.noarch |
| capstone-4.0.2-10.el9.x86_64 | capstone-4.0.2-10.el9.x86_64 | capstone-4.0.2-10.el9.x86_64 |
| checkpolicy-3.6-1.el9.x86_64 | checkpolicy-3.6-1.el9.x86_64 | checkpolicy-3.6-1.el9.x86_64 |
| clevis-18-112.el9.x86_64 | clevis-18-112.el9.x86_64 | clevis-18-112.el9.x86_64 |
| clevis-luks-18-112.el9.x86_64 | clevis-luks-18-112.el9.x86_64 | clevis-luks-18-112.el9.x86_64 |
| cmake-rpm-macros-3.26.5-2.el9.noarch | cmake-rpm-macros-3.26.5-2.el9.noarch | cmake-rpm-macros-3.26.5-2.el9.noarch |
| coreutils-single-8.32-35.el9.x86_64 | coreutils-single-8.32-35.el9.x86_64 | coreutils-single-8.32-35.el9.x86_64 |
| cpio-2.13-16.el9.x86_64 | cpio-2.13-16.el9.x86_64 | cpio-2.13-16.el9.x86_64 |
| cracklib-2.9.6-27.el9.x86_64 | cracklib-2.9.6-27.el9.x86_64 | cracklib-2.9.6-27.el9.x86_64 |
| cracklib-dicts-2.9.6-27.el9.x86_64 | cracklib-dicts-2.9.6-27.el9.x86_64 | cracklib-dicts-2.9.6-27.el9.x86_64 |
| crypto-policies-20240202-1.git283706d.el9.noarch | crypto-policies-20240202-1.git283706d.el9.noarch | crypto-policies-20240202-1.git283706d.el9.noarch |
| cryptsetup-2.6.0-3.el9.x86_64 | cryptsetup-2.6.0-3.el9.x86_64 | cryptsetup-2.6.0-3.el9.x86_64 |
| cryptsetup-libs-2.6.0-3.el9.x86_64 | cryptsetup-libs-2.6.0-3.el9.x86_64 | cryptsetup-libs-2.6.0-3.el9.x86_64 |
| curl-minimal-7.76.1-29.el9_4.1.x86_64 | curl-minimal-7.76.1-29.el9_4.1.x86_64 | curl-minimal-7.76.1-29.el9_4.1.x86_64 |
| cyrus-sasl-2.1.27-21.el9.x86_64 | cyrus-sasl-2.1.27-21.el9.x86_64 | cyrus-sasl-2.1.27-21.el9.x86_64 |
| cyrus-sasl-gssapi-2.1.27-21.el9.x86_64 | cyrus-sasl-gssapi-2.1.27-21.el9.x86_64 | cyrus-sasl-gssapi-2.1.27-21.el9.x86_64 |
| cyrus-sasl-lib-2.1.27-21.el9.x86_64 | cyrus-sasl-lib-2.1.27-21.el9.x86_64 | cyrus-sasl-lib-2.1.27-21.el9.x86_64 |
| daxctl-libs-71.1-8.el9.x86_64 | daxctl-libs-71.1-8.el9.x86_64 | daxctl-libs-71.1-8.el9.x86_64 |
| dbus-1.12.20-8.el9.x86_64 | dbus-1.12.20-8.el9.x86_64 | dbus-1.12.20-8.el9.x86_64 |
| dbus-broker-28-7.el9.x86_64 | dbus-broker-28-7.el9.x86_64 | dbus-broker-28-7.el9.x86_64 |
| dbus-common-1.12.20-8.el9.noarch | dbus-common-1.12.20-8.el9.noarch | dbus-common-1.12.20-8.el9.noarch |
| dbus-libs-1.12.20-8.el9.x86_64 | dbus-libs-1.12.20-8.el9.x86_64 | dbus-libs-1.12.20-8.el9.x86_64 |
| dejavu-sans-fonts-2.37-18.el9.noarch | dejavu-sans-fonts-2.37-18.el9.noarch | dejavu-sans-fonts-2.37-18.el9.noarch |
| device-mapper-1.02.197-2.el9.x86_64 | device-mapper-1.02.197-2.el9.x86_64 | device-mapper-1.02.197-2.el9.x86_64 |
| device-mapper-event-1.02.197-2.el9.x86_64 | device-mapper-event-1.02.197-2.el9.x86_64 | device-mapper-event-1.02.197-2.el9.x86_64 |
| device-mapper-event-libs-1.02.197-2.el9.x86_64 | device-mapper-event-libs-1.02.197-2.el9.x86_64 | device-mapper-event-libs-1.02.197-2.el9.x86_64 |
| device-mapper-libs-1.02.197-2.el9.x86_64 | device-mapper-libs-1.02.197-2.el9.x86_64 | device-mapper-libs-1.02.197-2.el9.x86_64 |
| device-mapper-persistent-data-1.0.9-3.el9_4.x86_64 | device-mapper-persistent-data-1.0.9-3.el9_4.x86_64 | device-mapper-persistent-data-1.0.9-3.el9_4.x86_64 |
| dhcp-client-4.4.2-19.b1.el9.x86_64 | dhcp-client-4.4.2-19.b1.el9.x86_64 | dhcp-client-4.4.2-19.b1.el9.x86_64 |
| dhcp-common-4.4.2-19.b1.el9.noarch | dhcp-common-4.4.2-19.b1.el9.noarch | dhcp-common-4.4.2-19.b1.el9.noarch |
| diffutils-3.7-12.el9.x86_64 | diffutils-3.7-12.el9.x86_64 | diffutils-3.7-12.el9.x86_64 |
| dmidecode-3.5-3.el9.x86_64 | dmidecode-3.5-3.el9.x86_64 | dmidecode-3.5-3.el9.x86_64 |
| dnf-data-4.14.0-9.el9.noarch | dnf-data-4.14.0-9.el9.noarch | dnf-data-4.14.0-9.el9.noarch |
| dnsmasq-2.85-16.el9_4.x86_64 | dnsmasq-2.85-16.el9_4.x86_64 | dnsmasq-2.85-16.el9_4.x86_64 |
| dosfstools-4.2-3.el9.x86_64 | dosfstools-4.2-3.el9.x86_64 | dosfstools-4.2-3.el9.x86_64 |
| dracut-057-53.git20240104.el9.x86_64 | dracut-057-53.git20240104.el9.x86_64 | dracut-057-53.git20240104.el9.x86_64 |
| dwz-0.14-3.el9.x86_64 | dwz-0.14-3.el9.x86_64 | dwz-0.14-3.el9.x86_64 |
| e2fsprogs-1.46.5-5.el9.x86_64 | e2fsprogs-1.46.5-5.el9.x86_64 | e2fsprogs-1.46.5-5.el9.x86_64 |
| e2fsprogs-libs-1.46.5-5.el9.x86_64 | e2fsprogs-libs-1.46.5-5.el9.x86_64 | e2fsprogs-libs-1.46.5-5.el9.x86_64 |
| edk2-ovmf-20231122-6.el9_4.3.noarch | edk2-ovmf-20231122-6.el9_4.3.noarch | edk2-ovmf-20231122-6.el9_4.3.noarch |
| efi-srpm-macros-6-2.el9_0.noarch | efi-srpm-macros-6-2.el9_0.noarch | efi-srpm-macros-6-2.el9_0.noarch |
| elfutils-debuginfod-client-0.190-2.el9.x86_64 | elfutils-debuginfod-client-0.190-2.el9.x86_64 | elfutils-debuginfod-client-0.190-2.el9.x86_64 |
| elfutils-default-yama-scope-0.190-2.el9.noarch | elfutils-default-yama-scope-0.190-2.el9.noarch | elfutils-default-yama-scope-0.190-2.el9.noarch |
| elfutils-libelf-0.190-2.el9.x86_64 | elfutils-libelf-0.190-2.el9.x86_64 | elfutils-libelf-0.190-2.el9.x86_64 |
| elfutils-libs-0.190-2.el9.x86_64 | elfutils-libs-0.190-2.el9.x86_64 | elfutils-libs-0.190-2.el9.x86_64 |
| expat-2.5.0-2.el9_4.1.x86_64 | expat-2.5.0-2.el9_4.1.x86_64 | expat-2.5.0-2.el9_4.1.x86_64 |
| file-5.39-16.el9.x86_64 | file-5.39-16.el9.x86_64 | file-5.39-16.el9.x86_64 |
| file-libs-5.39-16.el9.x86_64 | file-libs-5.39-16.el9.x86_64 | file-libs-5.39-16.el9.x86_64 |
| filesystem-3.16-2.el9.x86_64 | filesystem-3.16-2.el9.x86_64 | filesystem-3.16-2.el9.x86_64 |
| findutils-4.8.0-6.el9.x86_64 | findutils-4.8.0-6.el9.x86_64 | findutils-4.8.0-6.el9.x86_64 |
| fonts-filesystem-2.0.5-7.el9.1.noarch | fonts-filesystem-2.0.5-7.el9.1.noarch | fonts-filesystem-2.0.5-7.el9.1.noarch |
| fonts-srpm-macros-2.0.5-7.el9.1.noarch | fonts-srpm-macros-2.0.5-7.el9.1.noarch | fonts-srpm-macros-2.0.5-7.el9.1.noarch |
| fuse-2.9.9-15.el9.x86_64 | fuse-2.9.9-15.el9.x86_64 | fuse-2.9.9-15.el9.x86_64 |
| fuse-common-3.10.2-8.el9.x86_64 | fuse-common-3.10.2-8.el9.x86_64 | fuse-common-3.10.2-8.el9.x86_64 |
| fuse-libs-2.9.9-15.el9.x86_64 | fuse-libs-2.9.9-15.el9.x86_64 | fuse-libs-2.9.9-15.el9.x86_64 |
| gawk-5.1.0-6.el9.x86_64 | gawk-5.1.0-6.el9.x86_64 | gawk-5.1.0-6.el9.x86_64 |
| gdbm-libs-1.19-4.el9.x86_64 | gdbm-libs-1.19-4.el9.x86_64 | gdbm-libs-1.19-4.el9.x86_64 |
| gdisk-1.0.7-5.el9.x86_64 | gdisk-1.0.7-5.el9.x86_64 | gdisk-1.0.7-5.el9.x86_64 |
| geolite2-city-20191217-6.el9.noarch | geolite2-city-20191217-6.el9.noarch | geolite2-city-20191217-6.el9.noarch |
| geolite2-country-20191217-6.el9.noarch | geolite2-country-20191217-6.el9.noarch | geolite2-country-20191217-6.el9.noarch |
| gettext-0.21-8.el9.x86_64 | gettext-0.21-8.el9.x86_64 | gettext-0.21-8.el9.x86_64 |
| gettext-libs-0.21-8.el9.x86_64 | gettext-libs-0.21-8.el9.x86_64 | gettext-libs-0.21-8.el9.x86_64 |
| ghc-srpm-macros-1.5.0-6.el9.noarch | ghc-srpm-macros-1.5.0-6.el9.noarch | ghc-srpm-macros-1.5.0-6.el9.noarch |
| glib-networking-2.68.3-3.el9.x86_64 | glib-networking-2.68.3-3.el9.x86_64 | glib-networking-2.68.3-3.el9.x86_64 |
| glib2-2.68.4-14.el9_4.1.x86_64 | glib2-2.68.4-14.el9_4.1.x86_64 | glib2-2.68.4-14.el9_4.1.x86_64 |
| glibc-2.34-100.el9_4.3.x86_64 | glibc-2.34-100.el9_4.4.x86_64 | glibc-2.34-100.el9_4.4.x86_64 |
| glibc-common-2.34-100.el9_4.3.x86_64 | glibc-common-2.34-100.el9_4.4.x86_64 | glibc-common-2.34-100.el9_4.4.x86_64 |
| glibc-gconv-extra-2.34-100.el9_4.3.x86_64 | glibc-gconv-extra-2.34-100.el9_4.4.x86_64 | glibc-gconv-extra-2.34-100.el9_4.4.x86_64 |
|  | glibc-langpack-en-2.34-100.el9_4.4.x86_64 | glibc-langpack-en-2.34-100.el9_4.4.x86_64 |
| glibc-minimal-langpack-2.34-100.el9_4.3.x86_64 | glibc-minimal-langpack-2.34-100.el9_4.4.x86_64 | glibc-minimal-langpack-2.34-100.el9_4.4.x86_64 |
| gmp-6.2.0-13.el9.x86_64 | gmp-6.2.0-13.el9.x86_64 | gmp-6.2.0-13.el9.x86_64 |
| gnupg2-2.3.3-4.el9.x86_64 | gnupg2-2.3.3-4.el9.x86_64 | gnupg2-2.3.3-4.el9.x86_64 |
| gnutls-3.8.3-4.el9_4.x86_64 | gnutls-3.8.3-4.el9_4.x86_64 | gnutls-3.8.3-4.el9_4.x86_64 |
| gnutls-dane-3.8.3-4.el9_4.x86_64 | gnutls-dane-3.8.3-4.el9_4.x86_64 | gnutls-dane-3.8.3-4.el9_4.x86_64 |
| gnutls-utils-3.8.3-4.el9_4.x86_64 | gnutls-utils-3.8.3-4.el9_4.x86_64 | gnutls-utils-3.8.3-4.el9_4.x86_64 |
| go-srpm-macros-3.2.0-3.el9.noarch | go-srpm-macros-3.2.0-3.el9.noarch | go-srpm-macros-3.2.0-3.el9.noarch |
| gobject-introspection-1.68.0-11.el9.x86_64 | gobject-introspection-1.68.0-11.el9.x86_64 | gobject-introspection-1.68.0-11.el9.x86_64 |
| gpg-pubkey-5a6340b3-6229229e | gpg-pubkey-5a6340b3-6229229e | gpg-pubkey-5a6340b3-6229229e |
| gpg-pubkey-fd431d51-4ae0493b | gpg-pubkey-fd431d51-4ae0493b | gpg-pubkey-fd431d51-4ae0493b |
| gpgme-1.15.1-6.el9.x86_64 | gpgme-1.15.1-6.el9.x86_64 | gpgme-1.15.1-6.el9.x86_64 |
| grep-3.6-5.el9.x86_64 | grep-3.6-5.el9.x86_64 | grep-3.6-5.el9.x86_64 |
| groff-base-1.22.4-10.el9.x86_64 | groff-base-1.22.4-10.el9.x86_64 | groff-base-1.22.4-10.el9.x86_64 |
| gsettings-desktop-schemas-40.0-6.el9.x86_64 | gsettings-desktop-schemas-40.0-6.el9.x86_64 | gsettings-desktop-schemas-40.0-6.el9.x86_64 |
| gssproxy-0.8.4-6.el9.x86_64 | gssproxy-0.8.4-6.el9.x86_64 | gssproxy-0.8.4-6.el9.x86_64 |
| guestfs-tools-1.51.6-3.el9_4.x86_64 | guestfs-tools-1.51.6-3.el9_4.x86_64 | guestfs-tools-1.51.6-3.el9_4.x86_64 |
| gzip-1.12-1.el9.x86_64 | gzip-1.12-1.el9.x86_64 | gzip-1.12-1.el9.x86_64 |
| hexedit-1.6-1.el9.x86_64 | hexedit-1.6-1.el9.x86_64 | hexedit-1.6-1.el9.x86_64 |
| hivex-libs-1.3.21-3.el9.x86_64 | hivex-libs-1.3.21-3.el9.x86_64 | hivex-libs-1.3.21-3.el9.x86_64 |
| hwdata-0.348-9.13.el9.noarch | hwdata-0.348-9.13.el9.noarch | hwdata-0.348-9.13.el9.noarch |
| inih-49-6.el9.x86_64 | inih-49-6.el9.x86_64 | inih-49-6.el9.x86_64 |
| ipcalc-1.0.0-5.el9.x86_64 | ipcalc-1.0.0-5.el9.x86_64 | ipcalc-1.0.0-5.el9.x86_64 |
| iproute-6.2.0-6.el9_4.x86_64 | iproute-6.2.0-6.el9_4.x86_64 | iproute-6.2.0-6.el9_4.x86_64 |
| iproute-tc-6.2.0-6.el9_4.x86_64 | iproute-tc-6.2.0-6.el9_4.x86_64 | iproute-tc-6.2.0-6.el9_4.x86_64 |
| iptables-libs-1.8.10-4.el9_4.x86_64 | iptables-libs-1.8.10-4.el9_4.x86_64 | iptables-libs-1.8.10-4.el9_4.x86_64 |
| iptables-nft-1.8.10-4.el9_4.x86_64 | iptables-nft-1.8.10-4.el9_4.x86_64 | iptables-nft-1.8.10-4.el9_4.x86_64 |
| iputils-20210202-9.el9.x86_64 | iputils-20210202-9.el9.x86_64 | iputils-20210202-9.el9.x86_64 |
| ipxe-roms-qemu-20200823-9.git4bd064de.el9.noarch | ipxe-roms-qemu-20200823-9.git4bd064de.el9.noarch | ipxe-roms-qemu-20200823-9.git4bd064de.el9.noarch |
| jansson-2.14-1.el9.x86_64 | jansson-2.14-1.el9.x86_64 | jansson-2.14-1.el9.x86_64 |
| jose-11-3.el9.x86_64 | jose-11-3.el9.x86_64 | jose-11-3.el9.x86_64 |
| jq-1.6-16.el9.x86_64 | jq-1.6-16.el9.x86_64 | jq-1.6-16.el9.x86_64 |
| json-c-0.14-11.el9.x86_64 | json-c-0.14-11.el9.x86_64 | json-c-0.14-11.el9.x86_64 |
| json-glib-1.6.6-1.el9.x86_64 | json-glib-1.6.6-1.el9.x86_64 | json-glib-1.6.6-1.el9.x86_64 |
| kbd-2.4.0-9.el9.x86_64 | kbd-2.4.0-9.el9.x86_64 | kbd-2.4.0-9.el9.x86_64 |
| kbd-legacy-2.4.0-9.el9.noarch | kbd-legacy-2.4.0-9.el9.noarch | kbd-legacy-2.4.0-9.el9.noarch |
| kbd-misc-2.4.0-9.el9.noarch | kbd-misc-2.4.0-9.el9.noarch | kbd-misc-2.4.0-9.el9.noarch |
| kernel-core-5.14.0-427.35.1.el9_4.x86_64 | kernel-core-5.14.0-427.37.1.el9_4.x86_64 | kernel-core-5.14.0-427.40.1.el9_4.x86_64 |
| kernel-modules-core-5.14.0-427.35.1.el9_4.x86_64 | kernel-modules-core-5.14.0-427.37.1.el9_4.x86_64 | kernel-modules-core-5.14.0-427.40.1.el9_4.x86_64 |
| kernel-srpm-macros-1.0-13.el9.noarch | kernel-srpm-macros-1.0-13.el9.noarch | kernel-srpm-macros-1.0-13.el9.noarch |
| keyutils-1.6.3-1.el9.x86_64 | keyutils-1.6.3-1.el9.x86_64 | keyutils-1.6.3-1.el9.x86_64 |
| keyutils-libs-1.6.3-1.el9.x86_64 | keyutils-libs-1.6.3-1.el9.x86_64 | keyutils-libs-1.6.3-1.el9.x86_64 |
| kmod-28-9.el9.x86_64 | kmod-28-9.el9.x86_64 | kmod-28-9.el9.x86_64 |
| kmod-libs-28-9.el9.x86_64 | kmod-libs-28-9.el9.x86_64 | kmod-libs-28-9.el9.x86_64 |
| kpartx-0.8.7-27.el9.x86_64 | kpartx-0.8.7-27.el9.x86_64 | kpartx-0.8.7-27.el9.x86_64 |
| krb5-libs-1.21.1-2.el9_4.x86_64 | krb5-libs-1.21.1-2.el9_4.x86_64 | krb5-libs-1.21.1-2.el9_4.x86_64 |
| langpacks-core-en-3.0-16.el9.noarch | langpacks-core-en-3.0-16.el9.noarch | langpacks-core-en-3.0-16.el9.noarch |
| langpacks-core-font-en-3.0-16.el9.noarch | langpacks-core-font-en-3.0-16.el9.noarch | langpacks-core-font-en-3.0-16.el9.noarch |
| langpacks-en-3.0-16.el9.noarch | langpacks-en-3.0-16.el9.noarch | langpacks-en-3.0-16.el9.noarch |
| less-590-4.el9_4.x86_64 | less-590-4.el9_4.x86_64 | less-590-4.el9_4.x86_64 |
| libacl-2.3.1-4.el9.x86_64 | libacl-2.3.1-4.el9.x86_64 | libacl-2.3.1-4.el9.x86_64 |
| libaio-0.3.111-13.el9.x86_64 | libaio-0.3.111-13.el9.x86_64 | libaio-0.3.111-13.el9.x86_64 |
| libarchive-3.5.3-4.el9.x86_64 | libarchive-3.5.3-4.el9.x86_64 | libarchive-3.5.3-4.el9.x86_64 |
| libassuan-2.5.5-3.el9.x86_64 | libassuan-2.5.5-3.el9.x86_64 | libassuan-2.5.5-3.el9.x86_64 |
| libatomic-11.4.1-3.el9.x86_64 | libatomic-11.4.1-3.el9.x86_64 | libatomic-11.4.1-3.el9.x86_64 |
| libattr-2.5.1-3.el9.x86_64 | libattr-2.5.1-3.el9.x86_64 | libattr-2.5.1-3.el9.x86_64 |
| libbasicobjects-0.1.1-53.el9.x86_64 | libbasicobjects-0.1.1-53.el9.x86_64 | libbasicobjects-0.1.1-53.el9.x86_64 |
| libblkid-2.37.4-18.el9.x86_64 | libblkid-2.37.4-18.el9.x86_64 | libblkid-2.37.4-18.el9.x86_64 |
| libbpf-1.3.0-2.el9.x86_64 | libbpf-1.3.0-2.el9.x86_64 | libbpf-1.3.0-2.el9.x86_64 |
| libbrotli-1.0.9-6.el9.x86_64 | libbrotli-1.0.9-6.el9.x86_64 | libbrotli-1.0.9-6.el9.x86_64 |
| libcap-2.48-9.el9_2.x86_64 | libcap-2.48-9.el9_2.x86_64 | libcap-2.48-9.el9_2.x86_64 |
| libcap-ng-0.8.2-7.el9.x86_64 | libcap-ng-0.8.2-7.el9.x86_64 | libcap-ng-0.8.2-7.el9.x86_64 |
| libcbor-0.7.0-5.el9.x86_64 | libcbor-0.7.0-5.el9.x86_64 | libcbor-0.7.0-5.el9.x86_64 |
| libcollection-0.7.0-53.el9.x86_64 | libcollection-0.7.0-53.el9.x86_64 | libcollection-0.7.0-53.el9.x86_64 |
| libcom_err-1.46.5-5.el9.x86_64 | libcom_err-1.46.5-5.el9.x86_64 | libcom_err-1.46.5-5.el9.x86_64 |
| libconfig-1.7.2-9.el9.x86_64 | libconfig-1.7.2-9.el9.x86_64 | libconfig-1.7.2-9.el9.x86_64 |
| libcurl-minimal-7.76.1-29.el9_4.1.x86_64 | libcurl-minimal-7.76.1-29.el9_4.1.x86_64 | libcurl-minimal-7.76.1-29.el9_4.1.x86_64 |
| libdb-5.3.28-53.el9.x86_64 | libdb-5.3.28-53.el9.x86_64 | libdb-5.3.28-53.el9.x86_64 |
| libdnf-0.69.0-8.el9_4.1.x86_64 | libdnf-0.69.0-8.el9_4.1.x86_64 | libdnf-0.69.0-8.el9_4.1.x86_64 |
| libeconf-0.4.1-3.el9_2.x86_64 | libeconf-0.4.1-3.el9_2.x86_64 | libeconf-0.4.1-3.el9_2.x86_64 |
| libedit-3.1-38.20210216cvs.el9.x86_64 | libedit-3.1-38.20210216cvs.el9.x86_64 | libedit-3.1-38.20210216cvs.el9.x86_64 |
| libev-4.33-5.el9.x86_64 | libev-4.33-5.el9.x86_64 | libev-4.33-5.el9.x86_64 |
| libevent-2.1.12-8.el9_4.x86_64 | libevent-2.1.12-8.el9_4.x86_64 | libevent-2.1.12-8.el9_4.x86_64 |
| libfdisk-2.37.4-18.el9.x86_64 | libfdisk-2.37.4-18.el9.x86_64 | libfdisk-2.37.4-18.el9.x86_64 |
| libfdt-1.6.0-7.el9.x86_64 | libfdt-1.6.0-7.el9.x86_64 | libfdt-1.6.0-7.el9.x86_64 |
| libffi-3.4.2-8.el9.x86_64 | libffi-3.4.2-8.el9.x86_64 | libffi-3.4.2-8.el9.x86_64 |
| libfido2-1.13.0-2.el9.x86_64 | libfido2-1.13.0-2.el9.x86_64 | libfido2-1.13.0-2.el9.x86_64 |
| libgcc-11.4.1-3.el9.x86_64 | libgcc-11.4.1-3.el9.x86_64 | libgcc-11.4.1-3.el9.x86_64 |
| libgcrypt-1.10.0-10.el9_2.x86_64 | libgcrypt-1.10.0-10.el9_2.x86_64 | libgcrypt-1.10.0-10.el9_2.x86_64 |
| libgomp-11.4.1-3.el9.x86_64 | libgomp-11.4.1-3.el9.x86_64 | libgomp-11.4.1-3.el9.x86_64 |
| libgpg-error-1.42-5.el9.x86_64 | libgpg-error-1.42-5.el9.x86_64 | libgpg-error-1.42-5.el9.x86_64 |
| libguestfs-1.50.1-8.el9_4.x86_64 | libguestfs-1.50.1-8.el9_4.x86_64 | libguestfs-1.50.1-8.el9_4.x86_64 |
libguestfs-appliance-1.50.1-8.el9_4.x86_64

libguestfs-appliance-1.50.1-8.el9_4.x86_64

libguestfs-appliance-1.50.1-8.el9_4.x86_64

libguestfs-winsupport-9.3-1.el9_3.x86_64

libguestfs-winsupport-9.3-1.el9_3.x86_64

libguestfs-winsupport-9.3-1.el9_3.x86_64

libguestfs-xfs-1.50.1-8.el9_4.x86_64

libguestfs-xfs-1.50.1-8.el9_4.x86_64

libguestfs-xfs-1.50.1-8.el9_4.x86_64

libibverbs-48.0-1.el9.x86_64

libibverbs-48.0-1.el9.x86_64

libibverbs-48.0-1.el9.x86_64

libicu-67.1-9.el9.x86_64

libicu-67.1-9.el9.x86_64

libicu-67.1-9.el9.x86_64

libidn2-2.3.0-7.el9.x86_64

libidn2-2.3.0-7.el9.x86_64

libidn2-2.3.0-7.el9.x86_64

libini_config-1.3.1-53.el9.x86_64

libini_config-1.3.1-53.el9.x86_64

libini_config-1.3.1-53.el9.x86_64

libjose-11-3.el9.x86_64

libjose-11-3.el9.x86_64

libjose-11-3.el9.x86_64

libkcapi-1.4.0-2.el9.x86_64

libkcapi-1.4.0-2.el9.x86_64

libkcapi-1.4.0-2.el9.x86_64

libkcapi-hmaccalc-1.4.0-2.el9.x86_64

libkcapi-hmaccalc-1.4.0-2.el9.x86_64

libkcapi-hmaccalc-1.4.0-2.el9.x86_64

libksba-1.5.1-6.el9_1.x86_64

libksba-1.5.1-6.el9_1.x86_64

libksba-1.5.1-6.el9_1.x86_64

libluksmeta-9-12.el9.x86_64

libluksmeta-9-12.el9.x86_64

libluksmeta-9-12.el9.x86_64

libmaxminddb-1.5.2-3.el9.x86_64

libmaxminddb-1.5.2-3.el9.x86_64

libmaxminddb-1.5.2-3.el9.x86_64

libmnl-1.0.4-16.el9_4.x86_64

libmnl-1.0.4-16.el9_4.x86_64

libmnl-1.0.4-16.el9_4.x86_64

libmodulemd-2.13.0-2.el9.x86_64

libmodulemd-2.13.0-2.el9.x86_64

libmodulemd-2.13.0-2.el9.x86_64

libmount-2.37.4-18.el9.x86_64

libmount-2.37.4-18.el9.x86_64

libmount-2.37.4-18.el9.x86_64

libnbd-1.18.1-4.el9_4.x86_64

libnbd-1.18.1-4.el9_4.x86_64

libnbd-1.18.1-4.el9_4.x86_64

libnetfilter_conntrack-1.0.9-1.el9.x86_64

libnetfilter_conntrack-1.0.9-1.el9.x86_64

libnetfilter_conntrack-1.0.9-1.el9.x86_64

libnfnetlink-1.0.1-21.el9.x86_64

libnfnetlink-1.0.1-21.el9.x86_64

libnfnetlink-1.0.1-21.el9.x86_64

libnfsidmap-2.5.4-26.el9_4.x86_64

libnfsidmap-2.5.4-26.el9_4.x86_64

libnfsidmap-2.5.4-26.el9_4.x86_64

libnftnl-1.2.6-4.el9_4.x86_64

libnftnl-1.2.6-4.el9_4.x86_64

libnftnl-1.2.6-4.el9_4.x86_64

libnghttp2-1.43.0-5.el9_4.3.x86_64

libnghttp2-1.43.0-5.el9_4.3.x86_64

libnghttp2-1.43.0-5.el9_4.3.x86_64

libnl3-3.9.0-1.el9.x86_64

libnl3-3.9.0-1.el9.x86_64

libnl3-3.9.0-1.el9.x86_64

libosinfo-1.10.0-1.el9.x86_64

libosinfo-1.10.0-1.el9.x86_64

libosinfo-1.10.0-1.el9.x86_64

libpath_utils-0.2.1-53.el9.x86_64

libpath_utils-0.2.1-53.el9.x86_64

libpath_utils-0.2.1-53.el9.x86_64

libpeas-1.30.0-4.el9.x86_64

libpeas-1.30.0-4.el9.x86_64

libpeas-1.30.0-4.el9.x86_64

libpipeline-1.5.3-4.el9.x86_64

libpipeline-1.5.3-4.el9.x86_64

libpipeline-1.5.3-4.el9.x86_64

libpkgconf-1.7.3-10.el9.x86_64

libpkgconf-1.7.3-10.el9.x86_64

libpkgconf-1.7.3-10.el9.x86_64

libpmem-1.12.1-1.el9.x86_64

libpmem-1.12.1-1.el9.x86_64

libpmem-1.12.1-1.el9.x86_64

libpng-1.6.37-12.el9.x86_64

libpng-1.6.37-12.el9.x86_64

libpng-1.6.37-12.el9.x86_64

libproxy-0.4.15-35.el9.x86_64

libproxy-0.4.15-35.el9.x86_64

libproxy-0.4.15-35.el9.x86_64

libproxy-webkitgtk4-0.4.15-35.el9.x86_64

libproxy-webkitgtk4-0.4.15-35.el9.x86_64

libproxy-webkitgtk4-0.4.15-35.el9.x86_64

libpsl-0.21.1-5.el9.x86_64

libpsl-0.21.1-5.el9.x86_64

libpsl-0.21.1-5.el9.x86_64

libpwquality-1.4.4-8.el9.x86_64

libpwquality-1.4.4-8.el9.x86_64

libpwquality-1.4.4-8.el9.x86_64

librdmacm-48.0-1.el9.x86_64

librdmacm-48.0-1.el9.x86_64

librdmacm-48.0-1.el9.x86_64

libref_array-0.1.5-53.el9.x86_64

libref_array-0.1.5-53.el9.x86_64

libref_array-0.1.5-53.el9.x86_64

librepo-1.14.5-2.el9.x86_64

librepo-1.14.5-2.el9.x86_64

librepo-1.14.5-2.el9.x86_64

libreport-filesystem-2.15.2-6.el9.noarch

libreport-filesystem-2.15.2-6.el9.noarch

libreport-filesystem-2.15.2-6.el9.noarch

librhsm-0.0.3-7.el9_3.1.x86_64

librhsm-0.0.3-7.el9_3.1.x86_64

librhsm-0.0.3-7.el9_3.1.x86_64

libseccomp-2.5.2-2.el9.x86_64

libseccomp-2.5.2-2.el9.x86_64

libseccomp-2.5.2-2.el9.x86_64

libselinux-3.6-1.el9.x86_64

libselinux-3.6-1.el9.x86_64

libselinux-3.6-1.el9.x86_64

libselinux-utils-3.6-1.el9.x86_64

libselinux-utils-3.6-1.el9.x86_64

libselinux-utils-3.6-1.el9.x86_64

libsemanage-3.6-1.el9.x86_64

libsemanage-3.6-1.el9.x86_64

libsemanage-3.6-1.el9.x86_64

libsepol-3.6-1.el9.x86_64

libsepol-3.6-1.el9.x86_64

libsepol-3.6-1.el9.x86_64

libsigsegv-2.13-4.el9.x86_64

libsigsegv-2.13-4.el9.x86_64

libsigsegv-2.13-4.el9.x86_64

libslirp-4.4.0-7.el9.x86_64

libslirp-4.4.0-7.el9.x86_64

libslirp-4.4.0-7.el9.x86_64

libsmartcols-2.37.4-18.el9.x86_64

libsmartcols-2.37.4-18.el9.x86_64

libsmartcols-2.37.4-18.el9.x86_64

libsolv-0.7.24-2.el9.x86_64

libsolv-0.7.24-2.el9.x86_64

libsolv-0.7.24-2.el9.x86_64

libsoup-2.72.0-8.el9.x86_64

libsoup-2.72.0-8.el9.x86_64

libsoup-2.72.0-8.el9.x86_64

libss-1.46.5-5.el9.x86_64

libss-1.46.5-5.el9.x86_64

libss-1.46.5-5.el9.x86_64

libssh-0.10.4-13.el9.x86_64

libssh-0.10.4-13.el9.x86_64

libssh-0.10.4-13.el9.x86_64

libssh-config-0.10.4-13.el9.noarch

libssh-config-0.10.4-13.el9.noarch

libssh-config-0.10.4-13.el9.noarch

libstdc++-11.4.1-3.el9.x86_64

libstdc++-11.4.1-3.el9.x86_64

libstdc++-11.4.1-3.el9.x86_64

libtasn1-4.16.0-8.el9_1.x86_64

libtasn1-4.16.0-8.el9_1.x86_64

libtasn1-4.16.0-8.el9_1.x86_64

libtirpc-1.3.3-8.el9_4.x86_64

libtirpc-1.3.3-8.el9_4.x86_64

libtirpc-1.3.3-8.el9_4.x86_64

libtpms-0.9.1-3.20211126git1ff6fe1f43.el9_2.x86_64

libtpms-0.9.1-4.20211126git1ff6fe1f43.el9_2.x86_64

libtpms-0.9.1-4.20211126git1ff6fe1f43.el9_2.x86_64

libunistring-0.9.10-15.el9.x86_64

libunistring-0.9.10-15.el9.x86_64

libunistring-0.9.10-15.el9.x86_64

liburing-2.5-1.el9.x86_64

liburing-2.5-1.el9.x86_64

liburing-2.5-1.el9.x86_64

libusbx-1.0.26-1.el9.x86_64

libusbx-1.0.26-1.el9.x86_64

libusbx-1.0.26-1.el9.x86_64

libutempter-1.2.1-6.el9.x86_64

libutempter-1.2.1-6.el9.x86_64

libutempter-1.2.1-6.el9.x86_64

libuuid-2.37.4-18.el9.x86_64

libuuid-2.37.4-18.el9.x86_64

libuuid-2.37.4-18.el9.x86_64

libverto-0.3.2-3.el9.x86_64

libverto-0.3.2-3.el9.x86_64

libverto-0.3.2-3.el9.x86_64

libverto-libev-0.3.2-3.el9.x86_64

libverto-libev-0.3.2-3.el9.x86_64

libverto-libev-0.3.2-3.el9.x86_64

libvirt-client-10.0.0-6.7.el9_4.x86_64

libvirt-client-10.0.0-6.7.el9_4.x86_64

libvirt-client-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-common-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-common-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-common-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-config-network-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-config-network-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-config-network-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-network-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-network-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-network-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-qemu-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-qemu-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-qemu-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-secret-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-secret-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-secret-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-storage-core-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-storage-core-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-storage-core-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-log-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-log-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-log-10.0.0-6.7.el9_4.x86_64

libvirt-libs-10.0.0-6.7.el9_4.x86_64

libvirt-libs-10.0.0-6.7.el9_4.x86_64

libvirt-libs-10.0.0-6.7.el9_4.x86_64

libxcrypt-4.4.18-3.el9.x86_64

libxcrypt-4.4.18-3.el9.x86_64

libxcrypt-4.4.18-3.el9.x86_64

libxcrypt-compat-4.4.18-3.el9.x86_64

libxcrypt-compat-4.4.18-3.el9.x86_64

libxcrypt-compat-4.4.18-3.el9.x86_64

libxml2-2.9.13-6.el9_4.x86_64

libxml2-2.9.13-6.el9_4.x86_64

libxml2-2.9.13-6.el9_4.x86_64

libxslt-1.1.34-9.el9.x86_64

libxslt-1.1.34-9.el9.x86_64

libxslt-1.1.34-9.el9.x86_64

libyaml-0.2.5-7.el9.x86_64

libyaml-0.2.5-7.el9.x86_64

libyaml-0.2.5-7.el9.x86_64

libzstd-1.5.1-2.el9.x86_64

libzstd-1.5.1-2.el9.x86_64

libzstd-1.5.1-2.el9.x86_64

linux-firmware-20240716-143.2.el9_4.noarch

linux-firmware-20240905-143.3.el9_4.noarch

linux-firmware-20240905-143.3.el9_4.noarch

linux-firmware-whence-20240716-143.2.el9_4.noarch

linux-firmware-whence-20240905-143.3.el9_4.noarch

linux-firmware-whence-20240905-143.3.el9_4.noarch

lsscsi-0.32-6.el9.x86_64

lsscsi-0.32-6.el9.x86_64

lsscsi-0.32-6.el9.x86_64

lua-libs-5.4.4-4.el9.x86_64

lua-libs-5.4.4-4.el9.x86_64

lua-libs-5.4.4-4.el9.x86_64

lua-srpm-macros-1-6.el9.noarch

lua-srpm-macros-1-6.el9.noarch

lua-srpm-macros-1-6.el9.noarch

luksmeta-9-12.el9.x86_64

luksmeta-9-12.el9.x86_64

luksmeta-9-12.el9.x86_64

lvm2-2.03.23-2.el9.x86_64

lvm2-2.03.23-2.el9.x86_64

lvm2-2.03.23-2.el9.x86_64

lvm2-libs-2.03.23-2.el9.x86_64

lvm2-libs-2.03.23-2.el9.x86_64

lvm2-libs-2.03.23-2.el9.x86_64

lz4-libs-1.9.3-5.el9.x86_64

lz4-libs-1.9.3-5.el9.x86_64

lz4-libs-1.9.3-5.el9.x86_64

lzo-2.10-7.el9.x86_64

lzo-2.10-7.el9.x86_64

lzo-2.10-7.el9.x86_64

lzop-1.04-8.el9.x86_64

lzop-1.04-8.el9.x86_64

lzop-1.04-8.el9.x86_64

man-db-2.9.3-7.el9.x86_64

man-db-2.9.3-7.el9.x86_64

man-db-2.9.3-7.el9.x86_64

mdadm-4.2-14.el9_4.x86_64

mdadm-4.2-14.el9_4.x86_64

mdadm-4.2-14.el9_4.x86_64

microdnf-3.9.1-3.el9.x86_64

microdnf-3.9.1-3.el9.x86_64

microdnf-3.9.1-3.el9.x86_64

mingw-binutils-generic-2.41-3.el9.x86_64

mingw-binutils-generic-2.41-3.el9.x86_64

mingw-binutils-generic-2.41-3.el9.x86_64

mingw-filesystem-base-148-3.el9.noarch

mingw-filesystem-base-148-3.el9.noarch

mingw-filesystem-base-148-3.el9.noarch

mingw32-crt-11.0.1-3.el9.noarch

mingw32-crt-11.0.1-3.el9.noarch

mingw32-crt-11.0.1-3.el9.noarch

mingw32-filesystem-148-3.el9.noarch

mingw32-filesystem-148-3.el9.noarch

mingw32-filesystem-148-3.el9.noarch

mingw32-srvany-1.1-3.el9.noarch

mingw32-srvany-1.1-3.el9.noarch

mingw32-srvany-1.1-3.el9.noarch

mpfr-4.1.0-7.el9.x86_64

mpfr-4.1.0-7.el9.x86_64

mpfr-4.1.0-7.el9.x86_64

mtools-4.0.26-4.el9_0.x86_64

mtools-4.0.26-4.el9_0.x86_64

mtools-4.0.26-4.el9_0.x86_64

nbdkit-1.36.2-1.el9.x86_64

nbdkit-1.36.2-1.el9.x86_64

nbdkit-1.36.2-1.el9.x86_64

nbdkit-basic-filters-1.36.2-1.el9.x86_64

nbdkit-basic-filters-1.36.2-1.el9.x86_64

nbdkit-basic-filters-1.36.2-1.el9.x86_64

nbdkit-basic-plugins-1.36.2-1.el9.x86_64

nbdkit-basic-plugins-1.36.2-1.el9.x86_64

nbdkit-basic-plugins-1.36.2-1.el9.x86_64

nbdkit-curl-plugin-1.36.2-1.el9.x86_64

nbdkit-curl-plugin-1.36.2-1.el9.x86_64

nbdkit-curl-plugin-1.36.2-1.el9.x86_64

nbdkit-nbd-plugin-1.36.2-1.el9.x86_64

nbdkit-nbd-plugin-1.36.2-1.el9.x86_64

nbdkit-nbd-plugin-1.36.2-1.el9.x86_64

nbdkit-python-plugin-1.36.2-1.el9.x86_64

nbdkit-python-plugin-1.36.2-1.el9.x86_64

nbdkit-python-plugin-1.36.2-1.el9.x86_64

nbdkit-server-1.36.2-1.el9.x86_64

nbdkit-server-1.36.2-1.el9.x86_64

nbdkit-server-1.36.2-1.el9.x86_64

nbdkit-ssh-plugin-1.36.2-1.el9.x86_64

nbdkit-ssh-plugin-1.36.2-1.el9.x86_64

nbdkit-ssh-plugin-1.36.2-1.el9.x86_64

nbdkit-vddk-plugin-1.36.2-1.el9.x86_64

nbdkit-vddk-plugin-1.36.2-1.el9.x86_64

nbdkit-vddk-plugin-1.36.2-1.el9.x86_64

ncurses-6.2-10.20210508.el9.x86_64

ncurses-6.2-10.20210508.el9.x86_64

ncurses-6.2-10.20210508.el9.x86_64

ncurses-base-6.2-10.20210508.el9.noarch

ncurses-base-6.2-10.20210508.el9.noarch

ncurses-base-6.2-10.20210508.el9.noarch

ncurses-libs-6.2-10.20210508.el9.x86_64

ncurses-libs-6.2-10.20210508.el9.x86_64

ncurses-libs-6.2-10.20210508.el9.x86_64

ndctl-libs-71.1-8.el9.x86_64

ndctl-libs-71.1-8.el9.x86_64

ndctl-libs-71.1-8.el9.x86_64

nettle-3.9.1-1.el9.x86_64

nettle-3.9.1-1.el9.x86_64

nettle-3.9.1-1.el9.x86_64

nfs-utils-2.5.4-26.el9_4.x86_64

nfs-utils-2.5.4-26.el9_4.x86_64

nfs-utils-2.5.4-26.el9_4.x86_64

npth-1.6-8.el9.x86_64

npth-1.6-8.el9.x86_64

npth-1.6-8.el9.x86_64

numactl-libs-2.0.16-3.el9.x86_64

numactl-libs-2.0.16-3.el9.x86_64

numactl-libs-2.0.16-3.el9.x86_64

numad-0.5-37.20150602git.el9.x86_64

numad-0.5-37.20150602git.el9.x86_64

numad-0.5-37.20150602git.el9.x86_64

ocaml-srpm-macros-6-6.el9.noarch

ocaml-srpm-macros-6-6.el9.noarch

ocaml-srpm-macros-6-6.el9.noarch

oniguruma-6.9.6-1.el9.5.x86_64

oniguruma-6.9.6-1.el9.5.x86_64

oniguruma-6.9.6-1.el9.5.x86_64

openblas-srpm-macros-2-11.el9.noarch

openblas-srpm-macros-2-11.el9.noarch

openblas-srpm-macros-2-11.el9.noarch

openldap-2.6.6-3.el9.x86_64

openldap-2.6.6-3.el9.x86_64

openldap-2.6.6-3.el9.x86_64

openssh-8.7p1-38.el9_4.4.x86_64

openssh-8.7p1-38.el9_4.4.x86_64

openssh-8.7p1-38.el9_4.4.x86_64

openssh-clients-8.7p1-38.el9_4.4.x86_64

openssh-clients-8.7p1-38.el9_4.4.x86_64

openssh-clients-8.7p1-38.el9_4.4.x86_64

openssl-3.0.7-28.el9_4.x86_64

openssl-3.0.7-28.el9_4.x86_64

openssl-3.0.7-28.el9_4.x86_64

openssl-fips-provider-3.0.7-2.el9.x86_64

openssl-fips-provider-3.0.7-2.el9.x86_64

openssl-fips-provider-3.0.7-2.el9.x86_64

openssl-libs-3.0.7-28.el9_4.x86_64

openssl-libs-3.0.7-28.el9_4.x86_64

openssl-libs-3.0.7-28.el9_4.x86_64

osinfo-db-20231215-1.el9.noarch

osinfo-db-20231215-1.el9.noarch

osinfo-db-20231215-1.el9.noarch

osinfo-db-tools-1.10.0-1.el9.x86_64

osinfo-db-tools-1.10.0-1.el9.x86_64

osinfo-db-tools-1.10.0-1.el9.x86_64

p11-kit-0.25.3-2.el9.x86_64

p11-kit-0.25.3-2.el9.x86_64

p11-kit-0.25.3-2.el9.x86_64

p11-kit-trust-0.25.3-2.el9.x86_64

p11-kit-trust-0.25.3-2.el9.x86_64

p11-kit-trust-0.25.3-2.el9.x86_64

pam-1.5.1-19.el9.x86_64

pam-1.5.1-19.el9.x86_64

pam-1.5.1-19.el9.x86_64

parted-3.5-2.el9.x86_64

parted-3.5-2.el9.x86_64

parted-3.5-2.el9.x86_64

passt-0^20231204.gb86afe3-1.el9.x86_64

passt-0^20231204.gb86afe3-1.el9.x86_64

passt-0^20231204.gb86afe3-1.el9.x86_64

passt-selinux-0^20231204.gb86afe3-1.el9.noarch

passt-selinux-0^20231204.gb86afe3-1.el9.noarch

passt-selinux-0^20231204.gb86afe3-1.el9.noarch

pcre-8.44-3.el9.3.x86_64

pcre-8.44-3.el9.3.x86_64

pcre-8.44-3.el9.3.x86_64

pcre2-10.40-5.el9.x86_64

pcre2-10.40-5.el9.x86_64

pcre2-10.40-5.el9.x86_64

pcre2-syntax-10.40-5.el9.noarch

pcre2-syntax-10.40-5.el9.noarch

pcre2-syntax-10.40-5.el9.noarch

perl-AutoLoader-5.74-481.el9.noarch

perl-AutoLoader-5.74-481.el9.noarch

perl-AutoLoader-5.74-481.el9.noarch

perl-B-1.80-481.el9.x86_64

perl-B-1.80-481.el9.x86_64

perl-B-1.80-481.el9.x86_64

perl-base-2.27-481.el9.noarch

perl-base-2.27-481.el9.noarch

perl-base-2.27-481.el9.noarch

perl-Carp-1.50-460.el9.noarch

perl-Carp-1.50-460.el9.noarch

perl-Carp-1.50-460.el9.noarch

perl-Class-Struct-0.66-481.el9.noarch

perl-Class-Struct-0.66-481.el9.noarch

perl-Class-Struct-0.66-481.el9.noarch

perl-constant-1.33-461.el9.noarch

perl-constant-1.33-461.el9.noarch

perl-constant-1.33-461.el9.noarch

perl-Data-Dumper-2.174-462.el9.x86_64

perl-Data-Dumper-2.174-462.el9.x86_64

perl-Data-Dumper-2.174-462.el9.x86_64

perl-Digest-1.19-4.el9.noarch

perl-Digest-1.19-4.el9.noarch

perl-Digest-1.19-4.el9.noarch

perl-Digest-MD5-2.58-4.el9.x86_64

perl-Digest-MD5-2.58-4.el9.x86_64

perl-Digest-MD5-2.58-4.el9.x86_64

perl-Encode-3.08-462.el9.x86_64

perl-Encode-3.08-462.el9.x86_64

perl-Encode-3.08-462.el9.x86_64

perl-Errno-1.30-481.el9.x86_64

perl-Errno-1.30-481.el9.x86_64

perl-Errno-1.30-481.el9.x86_64

perl-Exporter-5.74-461.el9.noarch

perl-Exporter-5.74-461.el9.noarch

perl-Exporter-5.74-461.el9.noarch

perl-Fcntl-1.13-481.el9.x86_64

perl-Fcntl-1.13-481.el9.x86_64

perl-Fcntl-1.13-481.el9.x86_64

perl-File-Basename-2.85-481.el9.noarch

perl-File-Basename-2.85-481.el9.noarch

perl-File-Basename-2.85-481.el9.noarch

perl-File-Path-2.18-4.el9.noarch

perl-File-Path-2.18-4.el9.noarch

perl-File-Path-2.18-4.el9.noarch

perl-File-stat-1.09-481.el9.noarch

perl-File-stat-1.09-481.el9.noarch

perl-File-stat-1.09-481.el9.noarch

perl-File-Temp-0.231.100-4.el9.noarch

perl-File-Temp-0.231.100-4.el9.noarch

perl-File-Temp-0.231.100-4.el9.noarch

perl-FileHandle-2.03-481.el9.noarch

perl-FileHandle-2.03-481.el9.noarch

perl-FileHandle-2.03-481.el9.noarch

perl-Getopt-Long-2.52-4.el9.noarch

perl-Getopt-Long-2.52-4.el9.noarch

perl-Getopt-Long-2.52-4.el9.noarch

perl-Getopt-Std-1.12-481.el9.noarch

perl-Getopt-Std-1.12-481.el9.noarch

perl-Getopt-Std-1.12-481.el9.noarch

perl-HTTP-Tiny-0.076-462.el9.noarch

perl-HTTP-Tiny-0.076-462.el9.noarch

perl-HTTP-Tiny-0.076-462.el9.noarch

perl-if-0.60.800-481.el9.noarch

perl-if-0.60.800-481.el9.noarch

perl-if-0.60.800-481.el9.noarch

perl-interpreter-5.32.1-481.el9.x86_64

perl-interpreter-5.32.1-481.el9.x86_64

perl-interpreter-5.32.1-481.el9.x86_64

perl-IO-1.43-481.el9.x86_64

perl-IO-1.43-481.el9.x86_64

perl-IO-1.43-481.el9.x86_64

perl-IO-Socket-IP-0.41-5.el9.noarch

perl-IO-Socket-IP-0.41-5.el9.noarch

perl-IO-Socket-IP-0.41-5.el9.noarch

perl-IO-Socket-SSL-2.073-1.el9.noarch

perl-IO-Socket-SSL-2.073-1.el9.noarch

perl-IO-Socket-SSL-2.073-1.el9.noarch

perl-IPC-Open3-1.21-481.el9.noarch

perl-IPC-Open3-1.21-481.el9.noarch

perl-IPC-Open3-1.21-481.el9.noarch

perl-libnet-3.13-4.el9.noarch

perl-libnet-3.13-4.el9.noarch

perl-libnet-3.13-4.el9.noarch

perl-libs-5.32.1-481.el9.x86_64

perl-libs-5.32.1-481.el9.x86_64

perl-libs-5.32.1-481.el9.x86_64

perl-MIME-Base64-3.16-4.el9.x86_64

perl-MIME-Base64-3.16-4.el9.x86_64

perl-MIME-Base64-3.16-4.el9.x86_64

perl-Mozilla-CA-20200520-6.el9.noarch

perl-Mozilla-CA-20200520-6.el9.noarch

perl-Mozilla-CA-20200520-6.el9.noarch

perl-mro-1.23-481.el9.x86_64

perl-mro-1.23-481.el9.x86_64

perl-mro-1.23-481.el9.x86_64

perl-NDBM_File-1.15-481.el9.x86_64

perl-NDBM_File-1.15-481.el9.x86_64

perl-NDBM_File-1.15-481.el9.x86_64

perl-Net-SSLeay-1.92-2.el9.x86_64

perl-Net-SSLeay-1.92-2.el9.x86_64

perl-Net-SSLeay-1.92-2.el9.x86_64

perl-overload-1.31-481.el9.noarch

perl-overload-1.31-481.el9.noarch

perl-overload-1.31-481.el9.noarch

perl-overloading-0.02-481.el9.noarch

perl-overloading-0.02-481.el9.noarch

perl-overloading-0.02-481.el9.noarch

perl-parent-0.238-460.el9.noarch

perl-parent-0.238-460.el9.noarch

perl-parent-0.238-460.el9.noarch

perl-PathTools-3.78-461.el9.x86_64

perl-PathTools-3.78-461.el9.x86_64

perl-PathTools-3.78-461.el9.x86_64

perl-Pod-Escapes-1.07-460.el9.noarch

perl-Pod-Escapes-1.07-460.el9.noarch

perl-Pod-Escapes-1.07-460.el9.noarch

perl-Pod-Perldoc-3.28.01-461.el9.noarch

perl-Pod-Perldoc-3.28.01-461.el9.noarch

perl-Pod-Perldoc-3.28.01-461.el9.noarch

perl-Pod-Simple-3.42-4.el9.noarch

perl-Pod-Simple-3.42-4.el9.noarch

perl-Pod-Simple-3.42-4.el9.noarch

perl-Pod-Usage-2.01-4.el9.noarch

perl-Pod-Usage-2.01-4.el9.noarch

perl-Pod-Usage-2.01-4.el9.noarch

perl-podlators-4.14-460.el9.noarch

perl-podlators-4.14-460.el9.noarch

perl-podlators-4.14-460.el9.noarch

perl-POSIX-1.94-481.el9.x86_64

perl-POSIX-1.94-481.el9.x86_64

perl-POSIX-1.94-481.el9.x86_64

perl-Scalar-List-Utils-1.56-461.el9.x86_64

perl-Scalar-List-Utils-1.56-461.el9.x86_64

perl-Scalar-List-Utils-1.56-461.el9.x86_64

perl-SelectSaver-1.02-481.el9.noarch

perl-SelectSaver-1.02-481.el9.noarch

perl-SelectSaver-1.02-481.el9.noarch

perl-Socket-2.031-4.el9.x86_64

perl-Socket-2.031-4.el9.x86_64

perl-Socket-2.031-4.el9.x86_64

perl-srpm-macros-1-41.el9.noarch

perl-srpm-macros-1-41.el9.noarch

perl-srpm-macros-1-41.el9.noarch

perl-Storable-3.21-460.el9.x86_64

perl-Storable-3.21-460.el9.x86_64

perl-Storable-3.21-460.el9.x86_64

perl-subs-1.03-481.el9.noarch

perl-subs-1.03-481.el9.noarch

perl-subs-1.03-481.el9.noarch

perl-Symbol-1.08-481.el9.noarch

perl-Symbol-1.08-481.el9.noarch

perl-Symbol-1.08-481.el9.noarch

perl-Term-ANSIColor-5.01-461.el9.noarch

perl-Term-ANSIColor-5.01-461.el9.noarch

perl-Term-ANSIColor-5.01-461.el9.noarch

perl-Term-Cap-1.17-460.el9.noarch

perl-Term-Cap-1.17-460.el9.noarch

perl-Term-Cap-1.17-460.el9.noarch

perl-Text-ParseWords-3.30-460.el9.noarch

perl-Text-ParseWords-3.30-460.el9.noarch

perl-Text-ParseWords-3.30-460.el9.noarch

perl-Text-Tabs+Wrap-2013.0523-460.el9.noarch

perl-Text-Tabs+Wrap-2013.0523-460.el9.noarch

perl-Text-Tabs+Wrap-2013.0523-460.el9.noarch

perl-Time-Local-1.300-7.el9.noarch

perl-Time-Local-1.300-7.el9.noarch

perl-Time-Local-1.300-7.el9.noarch

perl-URI-5.09-3.el9.noarch

perl-URI-5.09-3.el9.noarch

perl-URI-5.09-3.el9.noarch

perl-vars-1.05-481.el9.noarch

perl-vars-1.05-481.el9.noarch

perl-vars-1.05-481.el9.noarch

pigz-2.5-4.el9.x86_64

pigz-2.5-4.el9.x86_64

pigz-2.5-4.el9.x86_64

pixman-0.40.0-6.el9.x86_64

pixman-0.40.0-6.el9.x86_64

pixman-0.40.0-6.el9.x86_64

pkgconf-1.7.3-10.el9.x86_64

pkgconf-1.7.3-10.el9.x86_64

pkgconf-1.7.3-10.el9.x86_64

policycoreutils-3.6-2.1.el9.x86_64

policycoreutils-3.6-2.1.el9.x86_64

policycoreutils-3.6-2.1.el9.x86_64

policycoreutils-python-utils-3.6-2.1.el9.noarch

policycoreutils-python-utils-3.6-2.1.el9.noarch

policycoreutils-python-utils-3.6-2.1.el9.noarch

polkit-0.117-11.el9.x86_64

polkit-0.117-11.el9.x86_64

polkit-0.117-11.el9.x86_64

polkit-libs-0.117-11.el9.x86_64

polkit-libs-0.117-11.el9.x86_64

polkit-libs-0.117-11.el9.x86_64

polkit-pkla-compat-0.1-21.el9.x86_64

polkit-pkla-compat-0.1-21.el9.x86_64

polkit-pkla-compat-0.1-21.el9.x86_64

popt-1.18-8.el9.x86_64

popt-1.18-8.el9.x86_64

popt-1.18-8.el9.x86_64

procps-ng-3.3.17-14.el9.x86_64

procps-ng-3.3.17-14.el9.x86_64

procps-ng-3.3.17-14.el9.x86_64

protobuf-c-1.3.3-13.el9.x86_64

protobuf-c-1.3.3-13.el9.x86_64

protobuf-c-1.3.3-13.el9.x86_64

psmisc-23.4-3.el9.x86_64

psmisc-23.4-3.el9.x86_64

psmisc-23.4-3.el9.x86_64

publicsuffix-list-dafsa-20210518-3.el9.noarch

publicsuffix-list-dafsa-20210518-3.el9.noarch

publicsuffix-list-dafsa-20210518-3.el9.noarch

pyproject-srpm-macros-1.12.0-1.el9.noarch

pyproject-srpm-macros-1.12.0-1.el9.noarch

pyproject-srpm-macros-1.12.0-1.el9.noarch

python-srpm-macros-3.9-53.el9.noarch

python-srpm-macros-3.9-53.el9.noarch

python-srpm-macros-3.9-53.el9.noarch

python-unversioned-command-3.9.18-3.el9_4.5.noarch

python-unversioned-command-3.9.18-3.el9_4.5.noarch

python-unversioned-command-3.9.18-3.el9_4.5.noarch

python3-3.9.18-3.el9_4.5.x86_64

python3-3.9.18-3.el9_4.5.x86_64

python3-3.9.18-3.el9_4.5.x86_64

python3-audit-3.1.2-2.el9.x86_64

python3-audit-3.1.2-2.el9.x86_64

python3-audit-3.1.2-2.el9.x86_64

python3-distro-1.5.0-7.el9.noarch

python3-distro-1.5.0-7.el9.noarch

python3-distro-1.5.0-7.el9.noarch

python3-libs-3.9.18-3.el9_4.5.x86_64

python3-libs-3.9.18-3.el9_4.5.x86_64

python3-libs-3.9.18-3.el9_4.5.x86_64

python3-libselinux-3.6-1.el9.x86_64

python3-libselinux-3.6-1.el9.x86_64

python3-libselinux-3.6-1.el9.x86_64

python3-libsemanage-3.6-1.el9.x86_64

python3-libsemanage-3.6-1.el9.x86_64

python3-libsemanage-3.6-1.el9.x86_64

python3-pip-wheel-21.2.3-8.el9.noarch

python3-pip-wheel-21.2.3-8.el9.noarch

python3-pip-wheel-21.2.3-8.el9.noarch

python3-policycoreutils-3.6-2.1.el9.noarch

python3-policycoreutils-3.6-2.1.el9.noarch

python3-policycoreutils-3.6-2.1.el9.noarch

python3-pyyaml-5.4.1-6.el9.x86_64

python3-pyyaml-5.4.1-6.el9.x86_64

python3-pyyaml-5.4.1-6.el9.x86_64

python3-setools-4.4.4-1.el9.x86_64

python3-setools-4.4.4-1.el9.x86_64

python3-setools-4.4.4-1.el9.x86_64

python3-setuptools-53.0.0-12.el9_4.1.noarch

python3-setuptools-53.0.0-12.el9_4.1.noarch

python3-setuptools-53.0.0-12.el9_4.1.noarch

python3-setuptools-wheel-53.0.0-12.el9_4.1.noarch

python3-setuptools-wheel-53.0.0-12.el9_4.1.noarch

python3-setuptools-wheel-53.0.0-12.el9_4.1.noarch

qemu-img-8.2.0-11.el9_4.6.x86_64

qemu-img-8.2.0-11.el9_4.6.x86_64

qemu-img-8.2.0-11.el9_4.6.x86_64

qemu-kvm-common-8.2.0-11.el9_4.6.x86_64

qemu-kvm-common-8.2.0-11.el9_4.6.x86_64

qemu-kvm-common-8.2.0-11.el9_4.6.x86_64

qemu-kvm-core-8.2.0-11.el9_4.6.x86_64

qemu-kvm-core-8.2.0-11.el9_4.6.x86_64

qemu-kvm-core-8.2.0-11.el9_4.6.x86_64

qt5-srpm-macros-5.15.9-1.el9.noarch

qt5-srpm-macros-5.15.9-1.el9.noarch

qt5-srpm-macros-5.15.9-1.el9.noarch

quota-4.06-6.el9.x86_64

quota-4.06-6.el9.x86_64

quota-4.06-6.el9.x86_64

quota-nls-4.06-6.el9.noarch

quota-nls-4.06-6.el9.noarch

quota-nls-4.06-6.el9.noarch

readline-8.1-4.el9.x86_64

readline-8.1-4.el9.x86_64

readline-8.1-4.el9.x86_64

redhat-release-9.4-0.5.el9.x86_64

redhat-release-9.4-0.5.el9.x86_64

redhat-release-9.4-0.5.el9.x86_64

redhat-rpm-config-207-1.el9.noarch

redhat-rpm-config-207-1.el9.noarch

redhat-rpm-config-207-1.el9.noarch

rootfiles-8.1-31.el9.noarch

rootfiles-8.1-31.el9.noarch

rootfiles-8.1-31.el9.noarch

rpcbind-1.2.6-7.el9.x86_64

rpcbind-1.2.6-7.el9.x86_64

rpcbind-1.2.6-7.el9.x86_64

rpm-4.16.1.3-29.el9.x86_64

rpm-4.16.1.3-29.el9.x86_64

rpm-4.16.1.3-29.el9.x86_64

rpm-libs-4.16.1.3-29.el9.x86_64

rpm-libs-4.16.1.3-29.el9.x86_64

rpm-libs-4.16.1.3-29.el9.x86_64

rpm-plugin-selinux-4.16.1.3-29.el9.x86_64

rpm-plugin-selinux-4.16.1.3-29.el9.x86_64

rpm-plugin-selinux-4.16.1.3-29.el9.x86_64

rust-srpm-macros-17-4.el9.noarch

rust-srpm-macros-17-4.el9.noarch

rust-srpm-macros-17-4.el9.noarch

scrub-2.6.1-4.el9.x86_64

scrub-2.6.1-4.el9.x86_64

scrub-2.6.1-4.el9.x86_64

seabios-bin-1.16.3-2.el9.noarch

seabios-bin-1.16.3-2.el9.noarch

seabios-bin-1.16.3-2.el9.noarch

seavgabios-bin-1.16.3-2.el9.noarch

seavgabios-bin-1.16.3-2.el9.noarch

seavgabios-bin-1.16.3-2.el9.noarch

sed-4.8-9.el9.x86_64

sed-4.8-9.el9.x86_64

sed-4.8-9.el9.x86_64

selinux-policy-38.1.35-2.el9_4.2.noarch

selinux-policy-38.1.35-2.el9_4.2.noarch

selinux-policy-38.1.35-2.el9_4.2.noarch

selinux-policy-targeted-38.1.35-2.el9_4.2.noarch

selinux-policy-targeted-38.1.35-2.el9_4.2.noarch

selinux-policy-targeted-38.1.35-2.el9_4.2.noarch

setup-2.13.7-10.el9.noarch

setup-2.13.7-10.el9.noarch

setup-2.13.7-10.el9.noarch

shadow-utils-4.9-8.el9.x86_64

shadow-utils-4.9-8.el9.x86_64

shadow-utils-4.9-8.el9.x86_64

snappy-1.1.8-8.el9.x86_64

snappy-1.1.8-8.el9.x86_64

snappy-1.1.8-8.el9.x86_64

sqlite-libs-3.34.1-7.el9_3.x86_64

sqlite-libs-3.34.1-7.el9_3.x86_64

sqlite-libs-3.34.1-7.el9_3.x86_64

squashfs-tools-4.4-10.git1.el9.x86_64

squashfs-tools-4.4-10.git1.el9.x86_64

squashfs-tools-4.4-10.git1.el9.x86_64

supermin-5.3.3-1.el9.x86_64

supermin-5.3.3-1.el9.x86_64

supermin-5.3.3-1.el9.x86_64

swtpm-0.8.0-2.el9_4.x86_64

swtpm-0.8.0-2.el9_4.x86_64

swtpm-0.8.0-2.el9_4.x86_64

swtpm-libs-0.8.0-2.el9_4.x86_64

swtpm-libs-0.8.0-2.el9_4.x86_64

swtpm-libs-0.8.0-2.el9_4.x86_64

swtpm-tools-0.8.0-2.el9_4.x86_64

swtpm-tools-0.8.0-2.el9_4.x86_64

swtpm-tools-0.8.0-2.el9_4.x86_64

syslinux-6.04-0.20.el9.x86_64

syslinux-6.04-0.20.el9.x86_64

syslinux-6.04-0.20.el9.x86_64

syslinux-extlinux-6.04-0.20.el9.x86_64

syslinux-extlinux-6.04-0.20.el9.x86_64

syslinux-extlinux-6.04-0.20.el9.x86_64

syslinux-extlinux-nonlinux-6.04-0.20.el9.noarch

syslinux-extlinux-nonlinux-6.04-0.20.el9.noarch

syslinux-extlinux-nonlinux-6.04-0.20.el9.noarch

syslinux-nonlinux-6.04-0.20.el9.noarch

syslinux-nonlinux-6.04-0.20.el9.noarch

syslinux-nonlinux-6.04-0.20.el9.noarch

systemd-252-32.el9_4.7.x86_64

systemd-252-32.el9_4.7.x86_64

systemd-252-32.el9_4.7.x86_64

systemd-container-252-32.el9_4.7.x86_64

systemd-container-252-32.el9_4.7.x86_64

systemd-container-252-32.el9_4.7.x86_64

systemd-libs-252-32.el9_4.7.x86_64

systemd-libs-252-32.el9_4.7.x86_64

systemd-libs-252-32.el9_4.7.x86_64

systemd-pam-252-32.el9_4.7.x86_64

systemd-pam-252-32.el9_4.7.x86_64

systemd-pam-252-32.el9_4.7.x86_64

systemd-rpm-macros-252-32.el9_4.7.noarch

systemd-rpm-macros-252-32.el9_4.7.noarch

systemd-rpm-macros-252-32.el9_4.7.noarch

systemd-udev-252-32.el9_4.7.x86_64

systemd-udev-252-32.el9_4.7.x86_64

systemd-udev-252-32.el9_4.7.x86_64

tar-1.34-6.el9_4.1.x86_64

tar-1.34-6.el9_4.1.x86_64

tar-1.34-6.el9_4.1.x86_64

tpm2-tools-5.2-3.el9.x86_64

tpm2-tools-5.2-3.el9.x86_64

tpm2-tools-5.2-3.el9.x86_64

tpm2-tss-3.2.2-2.el9.x86_64

tpm2-tss-3.2.2-2.el9.x86_64

tpm2-tss-3.2.2-2.el9.x86_64

tzdata-2024a-1.el9.noarch

tzdata-2024a-1.el9.noarch

tzdata-2024a-1.el9.noarch

unbound-libs-1.16.2-3.el9_3.5.x86_64

unbound-libs-1.16.2-3.el9_3.5.x86_64

unbound-libs-1.16.2-3.el9_3.5.x86_64

unzip-6.0-56.el9.x86_64

unzip-6.0-56.el9.x86_64

unzip-6.0-56.el9.x86_64

userspace-rcu-0.12.1-6.el9.x86_64

userspace-rcu-0.12.1-6.el9.x86_64

userspace-rcu-0.12.1-6.el9.x86_64

util-linux-2.37.4-18.el9.x86_64

util-linux-2.37.4-18.el9.x86_64

util-linux-2.37.4-18.el9.x86_64

util-linux-core-2.37.4-18.el9.x86_64

util-linux-core-2.37.4-18.el9.x86_64

util-linux-core-2.37.4-18.el9.x86_64

vim-minimal-8.2.2637-20.el9_1.x86_64

vim-minimal-8.2.2637-20.el9_1.x86_64

vim-minimal-8.2.2637-20.el9_1.x86_64

virt-v2v-2.4.0-4.el9_4.x86_64

virt-v2v-2.4.0-4.el9_4.x86_64

virt-v2v-2.4.0-4.el9_4.x86_64

virtio-win-1.9.40-0.el9_4.noarch

virtio-win-1.9.40-0.el9_4.noarch

virtio-win-1.9.40-0.el9_4.noarch

webkit2gtk3-jsc-2.42.5-1.el9.x86_64

webkit2gtk3-jsc-2.42.5-1.el9.x86_64

webkit2gtk3-jsc-2.46.1-2.el9_4.x86_64

which-2.21-29.el9.x86_64

which-2.21-29.el9.x86_64

which-2.21-29.el9.x86_64

xfsprogs-6.3.0-1.el9.x86_64

xfsprogs-6.3.0-1.el9.x86_64

xfsprogs-6.3.0-1.el9.x86_64

xz-5.2.5-8.el9_0.x86_64

xz-5.2.5-8.el9_0.x86_64

xz-5.2.5-8.el9_0.x86_64

xz-libs-5.2.5-8.el9_0.x86_64

xz-libs-5.2.5-8.el9_0.x86_64

xz-libs-5.2.5-8.el9_0.x86_64

yajl-2.1.0-22.el9.x86_64

yajl-2.1.0-22.el9.x86_64

yajl-2.1.0-22.el9.x86_64

zip-3.0-35.el9.x86_64

zip-3.0-35.el9.x86_64

zip-3.0-35.el9.x86_64

zlib-1.2.11-40.el9.x86_64

zlib-1.2.11-40.el9.x86_64

zlib-1.2.11-40.el9.x86_64

zstd-1.5.1-2.el9.x86_64

zstd-1.5.1-2.el9.x86_64

zstd-1.5.1-2.el9.x86_64

+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/mtv-overview-page/index.html b/documentation/doc-Release_notes/modules/mtv-overview-page/index.html new file mode 100644 index 00000000000..cb799a58ac8 --- /dev/null +++ b/documentation/doc-Release_notes/modules/mtv-overview-page/index.html @@ -0,0 +1,214 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

The MTV Overview page

+
+
+
+

The Forklift Overview page displays system-wide information about migrations and a list of Settings you can change.

+
+
+

If you have Administrator privileges, you can access the Overview page by clicking Migration → Overview in the OKD web console.

+
+
+

The Overview page has three tabs:

+
+
+
    +
  • +

    Overview

    +
  • +
  • +

    YAML

    +
  • +
  • +

    Metrics

    +
  • +
+
+
+
+
+

Overview tab

+
+
+

The Overview tab lets you see:

+
+
+
    +
  • +

    Operator: The namespace in which the Forklift Operator is deployed and the status of the Operator

    +
  • +
  • +

    Pods: The name, status, and creation time of each pod that was deployed by the Forklift Operator

    +
  • +
  • +

    Conditions: Status of the Forklift Operator:

    +
    +
      +
    • +

      Failure: Last failure. False indicates no failure since deployment.

      +
    • +
    • +

      Running: Whether the Operator is currently running and waiting for the next reconciliation.

      +
    • +
    • +

      Successful: Last successful reconciliation.

      +
    • +
    +
    +
  • +
+
+
+
+
+

YAML tab

+
+
+

The YAML tab displays the ForkliftController custom resource that defines the operation of the Forklift Operator. You can modify the custom resource from this tab.

+
+
+
+
+

Metrics tab

+
+
+

The Metrics tab lets you see:

+
+
+
    +
  • +

    Migrations: The number of migrations performed using Forklift:

    +
    +
      +
    • +

      Total

      +
    • +
    • +

      Running

      +
    • +
    • +

      Failed

      +
    • +
    • +

      Succeeded

      +
    • +
    • +

      Canceled

      +
    • +
    +
    +
  • +
  • +

    Virtual Machine Migrations: The number of VMs migrated using Forklift:

    +
    +
      +
    • +

      Total

      +
    • +
    • +

      Running

      +
    • +
    • +

      Failed

      +
    • +
    • +

      Succeeded

      +
    • +
    • +

      Canceled

      +
    • +
    +
    +
  • +
+
+
+ + + + + +
+
Note
+
+
+

Because a single migration might involve many virtual machines, the number of migrations performed using Forklift might differ significantly from the number of virtual machines that have been migrated using Forklift.

+
+
+
+
+
    +
  • +

    Chart showing the number of running, failed, and succeeded migrations performed using Forklift for each of the last 7 days

    +
  • +
  • +

    Chart showing the number of running, failed, and succeeded virtual machine migrations performed using Forklift for each of the last 7 days

    +
  • +
+
+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/mtv-performance-addendum/index.html b/documentation/doc-Release_notes/modules/mtv-performance-addendum/index.html new file mode 100644 index 00000000000..1e26b32346b --- /dev/null +++ b/documentation/doc-Release_notes/modules/mtv-performance-addendum/index.html @@ -0,0 +1,291 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift performance addendum

+
+
+
+


+
+
+
+
+

ESXi performance

+
+
+
Single ESXi performance
+

Migration tests were performed using a single ESXi host.

+
+
+

In each iteration, the total number of VMs was increased to show the impact of concurrent migration on total duration.

+
+
+

The results show that migration time scales linearly as the total number of VMs increases (50 GiB disk, 70% utilization).

+
+
+

The optimal number of VMs per ESXi host is 10.

+
+ + ++++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Single ESXi tests
Test Case DescriptionMTVVDDKmax_vm inflightMigration TypeTotal Duration

cold migration, 10 VMs, Single ESXi, Private Network [1]

2.6

7.0.3

100

cold

0:21:39

cold migration, 20 VMs, Single ESXi, Private Network

2.6

7.0.3

100

cold

0:41:16

cold migration, 30 VMs, Single ESXi, Private Network

2.6

7.0.3

100

cold

1:00:59

cold migration, 40 VMs, Single ESXi, Private Network

2.6

7.0.3

100

cold

1:23:02

cold migration, 50 VMs, Single ESXi, Private Network

2.6

7.0.3

100

cold

1:46:24

cold migration, 80 VMs, Single ESXi, Private Network

2.6

7.0.3

100

cold

2:42:49

cold migration, 100 VMs, Single ESXi, Private Network

2.6

7.0.3

100

cold

3:25:15

+
+
Multi ESXi hosts and single data store
+

In each iteration, the number of ESXi hosts was increased, to show that increasing the number of ESXi hosts improves the migration time (50 GiB disk, 70% utilization).

+
+ + ++++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 2. Multi ESXi hosts and single data store
Test Case DescriptionMTVVDDKMax_vm inflightMigration TypeTotal Duration

cold migration, 100 VMs, Single ESXi, Private Network [2]

2.6

7.0.3

100

cold

3:25:15

cold migration, 100 VMs, 4 ESXs (25 VMs per ESX), Private Network

2.6

7.0.3

100

cold

1:22:27

cold migration, 100 VMs, 5 ESXs (20 VMs per ESX), Private Network, 1 DataStore

2.6

7.0.3

100

cold

1:04:57

+
+
+
+

Different migration network performance

+
+
+

In each iteration, the migration network was changed on the provider to find the fastest network for migration.

+
+
+

The results show that there is no degradation when using the management network compared to non-management networks when all interfaces and network speeds are the same.

+
+ + ++++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 3. Different migration network tests
Test Case DescriptionMTVVDDKmax_vm inflightMigration TypeTotal Duration

cold migration, 10 VMs, Single ESXi, MGMT Network

2.6

7.0.3

100

cold

0:21:30

cold migration, 10 VMs, Single ESXi, Private Network [3]

2.6

7.0.3

20

cold

0:21:20

cold migration, 10 VMs, Single ESXi, Default Network

2.6.2

7.0.3

20

cold

0:21:30

+
+
+
+
+
+1. Private Network refers to a non-Management network +
+
+2. Private Network refers to a non-Management network +
+
+3. Private Network refers to a non-Management network +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/mtv-performance-recommendation/index.html b/documentation/doc-Release_notes/modules/mtv-performance-recommendation/index.html new file mode 100644 index 00000000000..b1241241580 --- /dev/null +++ b/documentation/doc-Release_notes/modules/mtv-performance-recommendation/index.html @@ -0,0 +1,382 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift performance recommendations

+
+
+
+

The purpose of this section is to share recommendations for efficient and effective migration of virtual machines (VMs) using Forklift, based on findings observed through testing.

+
+
+


+
+
+
+
+

Ensure fast storage and network speeds

+
+
+

Ensure fast storage and network speeds, both for VMware and OKD (OCP) environments.

+
+
+
    +
  • +

    To perform fast migrations, VMware must have fast read access to datastores. Networking between VMware ESXi hosts should be fast; ensure a 10 GbE network connection and avoid network bottlenecks.

    +
    +
      +
    • +

      Extend the VMware network to the OCP Workers Interface network environment.

      +
    • +
    • +

      Ensure that the VMware network offers high throughput (10 Gigabit Ethernet) so that reception rates align with the read rate of the ESXi datastore.

      +
    • +
    • +

      Be aware that the migration process consumes significant bandwidth on the migration network. If other services use that network, migration can affect those services, and those services can reduce migration rates.

      +
    • +
    • +

      For example, 200 to 325 MiB/s was the average network transfer rate from the vmnic for each ESXi host associated with transferring data to the OCP interface.

      +
    • +
    +
    +
  • +
+
+
+
+
+

Ensure fast datastore read speeds for efficient and performant migrations.

+
+
+

Datastore read rates impact total transfer times, so it is essential that fast reads are possible from the ESXi datastore to the ESXi host.

+
+
+

For example: 200 to 300 MiB/s was the average read rate for both vSphere and ESXi endpoints with a single ESXi server. When multiple ESXi servers are used, higher datastore read rates are possible.

+
+
+
+
+

Endpoint types 

+
+
+

Forklift 2.6 allows for the following vSphere provider options:

+
+
+
    +
  • +

    ESXi endpoint (inventory and disk transfers from ESXi), introduced in Forklift 2.6

    +
  • +
  • +

    vCenter Server endpoint; no networks for the ESXi host (inventory and disk transfers from vCenter)

    +
  • +
  • +

    vCenter endpoint and ESXi networks are available (inventory from vCenter, disk transfers from ESXi).

    +
  • +
+
+
+

When transferring many VMs that are registered to multiple ESXi hosts, using the vCenter endpoint and ESXi network is suggested.

+
+
+ + + + + +
+
Note
+
+
+

As of vSphere 7.0, ESXi hosts can label which network to use for NBD transport. To do so, tag the desired virtual network interface card (NIC) with the vSphereBackupNFC label. When this is done, Forklift can use the ESXi interface for network transfer to OKD, as long as the worker and ESXi host interfaces are reachable. This is especially useful for users who do not have access to the ESXi credentials but want to control which ESXi interface is used for migration.

+
+
+

For more details, see: (Forklift-1230)

+
+
+
+
+

You can use the following ESXi command, which designates interface vmk2 for NBD backup:

+
+
+
+
esxcli network ip interface tag add -t vSphereBackupNFC -i vmk2
+
+
+
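
To confirm that the tag was applied, you can list the tags on the interface. This is a sketch; verify the command against your ESXi version:

esxcli network ip interface tag get -i vmk2
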
+
+
+

Set ESXi hosts BIOS profile and ESXi Host Power Management for High Performance

+
+
+

Where possible, ensure that hosts used to perform migrations are set with BIOS profiles tuned for maximum performance. For hosts whose power management is controlled within vSphere, check that High Performance is set.

+
+
+

Testing showed that when transferring more than 10 VMs with both BIOS and host power management set accordingly, migrations showed an increase of 15 MiB/s in the average datastore read rate.

+
+
+
+
+

Avoid additional network load on VMware networks

+
+
+

You can reduce the network load on VMware networks by selecting the migration network when using the ESXi endpoint.

+
+
+

When you add a virtualization provider, Forklift lets you select a specific network that is accessible on the ESXi hosts for migrating virtual machines to OCP. Selecting this migration network for the ESXi host in the Forklift UI ensures that the transfer uses the selected network as an ESXi endpoint.

+
+
+

It is imperative to ensure that the network selected has connectivity to the OCP interface, has adequate bandwidth for migrations, and that the network interface is not saturated.

+
+
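
One way to sanity-check that connectivity, assuming you can debug a node, is to ping the chosen ESXi interface from an OKD worker. A hedged sketch; the node name and address here are hypothetical:

# Reachability check from an OKD worker to the ESXi migration interface
oc debug node/worker-0 -- chroot /host ping -c 3 192.0.2.10
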
+

In environments with fast networks, such as 10 GbE, migration throughput can be expected to match the rate of ESXi datastore reads.

+
+
+
+
+

Control maximum concurrent disk migrations per ESXi host

+
+
+

Set the MAX_VM_INFLIGHT MTV variable to control the maximum number of concurrent VM transfers allowed per ESXi host.

+
+
+

Forklift allows concurrency to be controlled using this variable; by default, it is set to 20. A sketch of changing it follows the examples below.

+
+
+

When setting MAX_VM_INFLIGHT, consider the maximum number of concurrent VM transfers required per ESXi host, and consider the type of migration to be performed concurrently. Warm migrations migrate a running VM over a scheduled period of time.

+
+
+

Warm migrations use snapshots to compare and migrate only the differences between previous snapshots of the disk.  The migration of the differences between snapshots happens over specific intervals before a final cut-over of the running VM to OKD occurs. 

+
+
+

In Forklift 2.6, MAX_VM_INFLIGHT reserves one transfer slot per VM, regardless of current migration activity for a specific snapshot or the number of disks that belong to a single VM. The total set by MAX_VM_INFLIGHT indicates how many concurrent VM transfers are allowed per ESXi host.

+
+
+
Examples
+
    +
  • +

    MAX_VM_INFLIGHT = 20 and 2 ESXi hosts defined in the provider means that each host can transfer 20 VMs concurrently.

    +
  • +
+
+
+
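
As an illustration of changing this limit, MAX_VM_INFLIGHT maps to a field in the spec of the ForkliftController custom resource. A minimal sketch using the oc CLI, assuming the controller is named forklift-controller in the konveyor-forklift namespace and the field is controller_max_vm_inflight; verify both against your installation:

# Raise the concurrent VM transfer limit per ESXi host to 100 (hypothetical values)
oc patch forkliftcontroller/forklift-controller \
  -n konveyor-forklift \
  --type merge \
  -p '{"spec": {"controller_max_vm_inflight": 100}}'
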
+
+

Migrations are completed faster when migrating multiple VMs concurrently

+
+
+

When multiple VMs from a specific ESXi host are to be migrated, starting their migrations concurrently leads to faster migration times.

+
+
+

Testing demonstrated that migrating 10 VMs concurrently (each containing 35 GiB of data on a 50 GiB disk) from a single host is significantly faster than migrating the same number of VMs sequentially, one after another.

+
+
+

It is possible to increase concurrent migration to more than 10 virtual machines from a single host, but it does not show a significant improvement. 

+
+
+
Examples
+
    +
  • +

    1 single-disk VM took 6 minutes, with a migration rate of 100 MiB/s

    +
  • +
  • +

    10 single-disk VMs took 22 minutes, with a migration rate of 272 MiB/s

    +
  • +
  • +

    20 single-disk VMs took 42 minutes, with a migration rate of 284 MiB/s

    +
  • +
+
+
+ + + + + +
+
Note
+
+
+

From the examples above, it is evident that migrating 10 virtual machines simultaneously is three times faster than migrating the identical virtual machines sequentially.

+
+
+

The migration rate was almost the same when moving 10 or 20 virtual machines simultaneously.

+
+
+
+
+
+
+

Migrations complete faster using multiple hosts

+
+
+

Using multiple hosts with registered VMs equally distributed among the ESXi hosts used for migrations leads to faster migration times.

+
+
+

Testing showed that when transferring more than 10 single-disk VMs, each containing 35 GiB of data on a 50 GiB disk, using an additional host can reduce migration time.

+
+
+
Examples
+
    +
  • +

    80 single disk VMs, containing 35 GiB of data each, using a single host took 2 hours and 43 minutes, with a migration rate of 294 MiB/s.

    +
  • +
  • +

    80 single disk VMs, containing 35 GiB of data each, using 8 ESXi hosts took 41 minutes, with a migration rate of 1,173 MiB/s.

    +
  • +
+
+
+ + + + + +
+
Note
+
+
+

From the examples above, it is evident that migrating 80 VMs from 8 ESXi hosts concurrently, 10 from each host, is four times faster than migrating the same VMs from a single ESXi host.

+
+
+

Migrating a larger number of VMs from more than 8 ESXi hosts concurrently could potentially show increased performance; however, this was not tested and therefore is not recommended.

+
+
+
+
+
+
+

Multiple migration plans compared to a single large migration plan

+
+
+

The maximum number of disks that can be referenced by a single migration plan is 500. For more details, see (MTV-1203)

+
+
+

When attempting to migrate many VMs in a single migration plan, it can take some time for all migrations to start. By breaking one migration plan into several migration plans, you can start them at the same time.

+
+
+

Comparing migrations of:

+
+
+
    +
  • +

    500 VMs using 8 ESXi hosts in 1 plan, max_vm_inflight=100, took 5 hours and 10 minutes.

    +
  • +
  • +

    800 VMs using 8 ESXi hosts with 8 plans, max_vm_inflight=100, took 57 minutes.

    +
  • +
+
+
+

Testing showed that by breaking one single large plan into multiple moderately sized plans, for example, 100 VMs per plan, the total migration time can be reduced.

+
+
+
+
+

Maximum values tested

+
+
+
    +
  • +

    Maximum number of ESXi hosts tested: 8

    +
  • +
  • +

    Maximum number of VMs in a single migration plan: 500

    +
  • +
  • +

    Maximum number of VMs migrated in a single test: 5000

    +
  • +
  • +

    Maximum number of migration plans performed concurrently: 40

    +
  • +
  • +

    Maximum single disk size migrated: 6 TB disks, which contained 3 TB of data

    +
  • +
  • +

    Maximum number of disks on a single VM migrated: 50

    +
  • +
  • +

    Highest observed single datastore read rate from a single ESXi server: 312 MiB/second

    +
  • +
  • +

    Highest observed multi-datastore read rate using eight ESXi servers and two datastores: 1,242 MiB/second

    +
  • +
  • +

    Highest observed virtual NIC transfer rate to an OKD worker: 327 MiB/second

    +
  • +
  • +

    Maximum migration transfer rate of a single disk: 162 MiB/second (rate observed during a nonconcurrent migration of 1.5 TB of utilized data)

    +
  • +
  • +

    Maximum cold migration transfer rate of multiple VMs (single disk) from a single ESXi host: 294 MiB/s (concurrent migration of 30 VMs, 35/50 GiB used, from a single ESXi host)

    +
  • +
  • +

    Maximum cold migration transfer rate of multiple VMs (single disk) from multiple ESXi hosts: 1,173 MiB/s (concurrent migration of 80 VMs, 35/50 GiB used, from 8 ESXi servers, 10 VMs from each ESXi host)

    +
  • +
+
+
+

For additional details on performance, see the Forklift performance addendum.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/mtv-resources-and-services/index.html b/documentation/doc-Release_notes/modules/mtv-resources-and-services/index.html new file mode 100644 index 00000000000..27e819d0c9a --- /dev/null +++ b/documentation/doc-Release_notes/modules/mtv-resources-and-services/index.html @@ -0,0 +1,131 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift custom resources and services

+
+

Forklift is provided as an OKD Operator. It creates and manages the following custom resources (CRs) and services.

+
+
+
Forklift custom resources
+
    +
  • +

    Provider CR stores attributes that enable Forklift to connect to and interact with the source and target providers.

    +
  • +
  • +

    NetworkMapping CR maps the networks of the source and target providers.

    +
  • +
  • +

    StorageMapping CR maps the storage of the source and target providers.

    +
  • +
  • +

    Plan CR contains a list of VMs with the same migration parameters and associated network and storage mappings.

    +
  • +
  • +

    Migration CR runs a migration plan.

    +
    +

    Only one Migration CR per migration plan can run at a given time. You can create multiple Migration CRs for a single Plan CR. A minimal sketch of a Plan CR and its Migration CR follows this list.

    +
    +
  • +
+
+
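
As a minimal sketch, a Plan CR and the Migration CR that runs it might look like the following. The forklift.konveyor.io/v1beta1 API group is assumed, and all names, namespaces, and mapping references are hypothetical; check the CRDs installed in your cluster for the authoritative schema:

# Hypothetical Plan and Migration CRs; verify the fields against your installed CRDs
cat <<'EOF' | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: demo-plan                 # hypothetical plan name
  namespace: konveyor-forklift
spec:
  provider:
    source:
      name: vsphere-provider      # Provider CR for the source environment
      namespace: konveyor-forklift
    destination:
      name: host                  # Provider CR for the target cluster
      namespace: konveyor-forklift
  map:
    network:
      name: demo-network-map      # NetworkMapping CR
      namespace: konveyor-forklift
    storage:
      name: demo-storage-map      # StorageMapping CR
      namespace: konveyor-forklift
  targetNamespace: demo-vms       # namespace that receives the migrated VMs
  vms:
    - name: vm-1                  # VMs that share these migration parameters
---
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: demo-plan-run-1           # only one Migration CR per Plan runs at a time
  namespace: konveyor-forklift
spec:
  plan:
    name: demo-plan
    namespace: konveyor-forklift
EOF
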
+
Forklift services
+
    +
  • +

    The Inventory service performs the following actions:

    +
    +
      +
    • +

      Connects to the source and target providers.

      +
    • +
    • +

      Maintains a local inventory for mappings and plans.

      +
    • +
    • +

      Stores VM configurations.

      +
    • +
    • +

      Runs the Validation service if a VM configuration change is detected.

      +
    • +
    +
    +
  • +
  • +

    The Validation service checks the suitability of a VM for migration by applying rules.

    +
  • +
  • +

    The Migration Controller service orchestrates migrations.

    +
    +

    When you create a migration plan, the Migration Controller service validates the plan and adds a status label. If the plan fails validation, the plan status is Not ready and the plan cannot be used to perform a migration. If the plan passes validation, the plan status is Ready and it can be used to perform a migration. After a successful migration, the Migration Controller service changes the plan status to Completed.

    +
    +
  • +
  • +

    The Populator Controller service orchestrates disk transfers using Volume Populators.

    +
  • +
  • +

    The KubeVirt Controller and Containerized Data Importer (CDI) Controller services handle most technical operations.

    +
  • +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/mtv-selected-packages-2-7/index.html b/documentation/doc-Release_notes/modules/mtv-selected-packages-2-7/index.html new file mode 100644 index 00000000000..03e0ec1b3b8 --- /dev/null +++ b/documentation/doc-Release_notes/modules/mtv-selected-packages-2-7/index.html @@ -0,0 +1,207 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift selected packages

Table 1. Selected Forklift packages

| Package summary | Forklift 2.7.0 | Forklift 2.7.2 | Forklift 2.7.3 |
| --- | --- | --- | --- |
| The skeleton package which defines a simple Red Hat Enterprise Linux system | basesystem-11-13.el9.noarch | basesystem-11-13.el9.noarch | basesystem-11-13.el9.noarch |
| Core kernel modules to match the core kernel | kernel-modules-core-5.14.0-427.35.1.el9_4.x86_64 | kernel-modules-core-5.14.0-427.37.1.el9_4.x86_64 | kernel-modules-core-5.14.0-427.40.1.el9_4.x86_64 |
| The Linux kernel | kernel-core-5.14.0-427.35.1.el9_4.x86_64 | kernel-core-5.14.0-427.37.1.el9_4.x86_64 | kernel-core-5.14.0-427.40.1.el9_4.x86_64 |
| Access and modify virtual machine disk images | libguestfs-1.50.1-8.el9_4.x86_64 | libguestfs-1.50.1-8.el9_4.x86_64 | libguestfs-1.50.1-8.el9_4.x86_64 |
| Client side utilities of the libvirt library | libvirt-client-10.0.0-6.7.el9_4.x86_64 | libvirt-client-10.0.0-6.7.el9_4.x86_64 | libvirt-client-10.0.0-6.7.el9_4.x86_64 |
| Libvirt libraries | libvirt-libs-10.0.0-6.7.el9_4.x86_64 | libvirt-libs-10.0.0-6.7.el9_4.x86_64 | libvirt-libs-10.0.0-6.7.el9_4.x86_64 |
| QEMU driver plugin for the libvirtd daemon | libvirt-daemon-driver-qemu-10.0.0-6.7.el9_4.x86_64 | libvirt-daemon-driver-qemu-10.0.0-6.7.el9_4.x86_64 | libvirt-daemon-driver-qemu-10.0.0-6.7.el9_4.x86_64 |
| NBD server | nbdkit-1.36.2-1.el9.x86_64 | nbdkit-1.36.2-1.el9.x86_64 | nbdkit-1.36.2-1.el9.x86_64 |
| Basic filters for nbdkit | nbdkit-basic-filters-1.36.2-1.el9.x86_64 | nbdkit-basic-filters-1.36.2-1.el9.x86_64 | nbdkit-basic-filters-1.36.2-1.el9.x86_64 |
| Basic plugins for nbdkit | nbdkit-basic-plugins-1.36.2-1.el9.x86_64 | nbdkit-basic-plugins-1.36.2-1.el9.x86_64 | nbdkit-basic-plugins-1.36.2-1.el9.x86_64 |
| HTTP/FTP (cURL) plugin for nbdkit | nbdkit-curl-plugin-1.36.2-1.el9.x86_64 | nbdkit-curl-plugin-1.36.2-1.el9.x86_64 | nbdkit-curl-plugin-1.36.2-1.el9.x86_64 |
| NBD proxy / forward plugin for nbdkit | nbdkit-nbd-plugin-1.36.2-1.el9.x86_64 | nbdkit-nbd-plugin-1.36.2-1.el9.x86_64 | nbdkit-nbd-plugin-1.36.2-1.el9.x86_64 |
| Python 3 plugin for nbdkit | nbdkit-python-plugin-1.36.2-1.el9.x86_64 | nbdkit-python-plugin-1.36.2-1.el9.x86_64 | nbdkit-python-plugin-1.36.2-1.el9.x86_64 |
| The nbdkit server | nbdkit-server-1.36.2-1.el9.x86_64 | nbdkit-server-1.36.2-1.el9.x86_64 | nbdkit-server-1.36.2-1.el9.x86_64 |
| SSH plugin for nbdkit | nbdkit-ssh-plugin-1.36.2-1.el9.x86_64 | nbdkit-ssh-plugin-1.36.2-1.el9.x86_64 | nbdkit-ssh-plugin-1.36.2-1.el9.x86_64 |
| VMware VDDK plugin for nbdkit | nbdkit-vddk-plugin-1.36.2-1.el9.x86_64 | nbdkit-vddk-plugin-1.36.2-1.el9.x86_64 | nbdkit-vddk-plugin-1.36.2-1.el9.x86_64 |
| QEMU command line tool for manipulating disk images | qemu-img-8.2.0-11.el9_4.6.x86_64 | qemu-img-8.2.0-11.el9_4.6.x86_64 | qemu-img-8.2.0-11.el9_4.6.x86_64 |
| QEMU common files needed by all QEMU targets | qemu-kvm-common-8.2.0-11.el9_4.6.x86_64 | qemu-kvm-common-8.2.0-11.el9_4.6.x86_64 | qemu-kvm-common-8.2.0-11.el9_4.6.x86_64 |
| qemu-kvm core components | qemu-kvm-core-8.2.0-11.el9_4.6.x86_64 | qemu-kvm-core-8.2.0-11.el9_4.6.x86_64 | qemu-kvm-core-8.2.0-11.el9_4.6.x86_64 |
| Convert a virtual machine to run on KVM | virt-v2v-2.4.0-4.el9_4.x86_64 | virt-v2v-2.4.0-4.el9_4.x86_64 | virt-v2v-2.4.0-4.el9_4.x86_64 |

+ + +
diff --git a/documentation/doc-Release_notes/modules/mtv-settings/index.html b/documentation/doc-Release_notes/modules/mtv-settings/index.html
+

Configuring MTV settings

+
+

If you have Administrator privileges, you can access the Overview page and change the following settings in it:

+
Table 1. Forklift settings

| Setting | Description | Default value |
| --- | --- | --- |
| Max concurrent virtual machine migrations | The maximum number of VMs per plan that can be migrated simultaneously | 20 |
| Must gather cleanup after (hours) | The duration for retaining must gather reports before they are automatically deleted | Disabled |
| Controller main container CPU limit | The CPU limit allocated to the main controller container | 500 m |
| Controller main container Memory limit | The memory limit allocated to the main controller container | 800 Mi |
| Precopy interval (minutes) | The interval at which a new snapshot is requested before initiating a warm migration | 60 |
| Snapshot polling interval (seconds) | The frequency with which the system checks the status of snapshot creation or removal during a warm migration | 10 |

+
+
Procedure

1. In the OKD web console, click Migration > Overview. The Settings list is on the right-hand side of the page.
2. In the Settings list, click the Edit icon of the setting you want to change.
3. Choose a setting from the list.
4. Click Save.
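The same settings can also be changed from the command line by patching the ForkliftController custom resource (the forkliftcontrollers resource that appears in the permissions tables later in this document). The following is a minimal sketch, assuming the default CR name forklift-controller, the openshift-mtv namespace, and the controller_max_vm_inflight field for the maximum number of concurrent migrations; check the CR in your installation for the exact field names.

----
# Sketch: raise the maximum number of concurrent VM migrations to 30.
# Assumes the ForkliftController CR is named forklift-controller and
# lives in the openshift-mtv namespace.
$ kubectl patch forkliftcontroller/forklift-controller \
  -n openshift-mtv \
  --type merge \
  -p '{"spec": {"controller_max_vm_inflight": 30}}'
----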
+ + +
diff --git a/documentation/doc-Release_notes/modules/mtv-ui/index.html b/documentation/doc-Release_notes/modules/mtv-ui/index.html
+

The MTV user interface

+
+

The Forklift user interface is integrated into the OKD web console.

+
+
+

In the left-hand panel, you can choose a page related to a component of the migration process, for example, Providers for Migration, or, if you are an administrator, you can choose Overview, which contains information about migrations and lets you configure Forklift settings.

+
+
+
Figure 1. Forklift extension interface
+
+
+

In pages related to components, you can click on the Projects list, which is in the upper-left portion of the page, and see which projects (namespaces) you are allowed to work with.

+
+
+
- If you are an administrator, you can see all projects.
- If you are a non-administrator, you can see only the projects that you have permissions to work with.
+ + +
diff --git a/documentation/doc-Release_notes/modules/mtv-workflow/index.html b/documentation/doc-Release_notes/modules/mtv-workflow/index.html
+

High-level migration workflow

+
+

The high-level workflow shows the migration process from the point of view of the user:

+
+
+
1. You create a source provider, a target provider, a network mapping, and a storage mapping.
2. You create a Plan custom resource (CR) that includes the following resources:

   - Source provider
   - Target provider, if Forklift is not installed on the target cluster
   - Network mapping
   - Storage mapping
   - One or more virtual machines (VMs)

3. You run a migration plan by creating a Migration CR that references the Plan CR.

   If you cannot migrate all the VMs for any reason, you can create multiple Migration CRs for the same Plan CR until all VMs are migrated.

4. For each VM in the Plan CR, the Migration Controller service records the VM migration progress in the Migration CR.
5. Once the data transfer for each VM in the Plan CR completes, the Migration Controller service creates a VirtualMachine CR.

   When all VMs have been migrated, the Migration Controller service updates the status of the Plan CR to Completed. The power state of each source VM is maintained after migration.
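For orientation, the following is a minimal sketch of a Plan CR that ties these resources together. The provider, mapping, and namespace names are placeholders, not values from your environment; a Migration CR referencing this Plan CR by name, shown later in this document, starts the actual transfer.

----
# Sketch of a Plan CR referencing previously created providers and mappings.
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: plan-example            # placeholder name
  namespace: openshift-mtv      # assumed namespace
spec:
  warm: false                   # cold migration; set true for warm migration
  provider:
    source:
      name: vsphere-provider    # placeholder source provider
      namespace: openshift-mtv
    destination:
      name: host                # placeholder destination provider
      namespace: openshift-mtv
  map:
    network:
      name: network-map-example
      namespace: openshift-mtv
    storage:
      name: storage-map-example
      namespace: openshift-mtv
  targetNamespace: my-vms       # namespace where the migrated VMs are created
  vms:
    - id: vm-431                # one or more VMs, by inventory ID or name
----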
+ + +
diff --git a/documentation/doc-Release_notes/modules/network-prerequisites/index.html b/documentation/doc-Release_notes/modules/network-prerequisites/index.html
+

Network prerequisites

+
+
+
+

The following prerequisites apply to all migrations:

+
+
+
- IP addresses, VLANs, and other network configuration settings must not be changed before or during migration. The MAC addresses of the virtual machines are preserved during migration.
- The network connections between the source environment, the KubeVirt cluster, and the replication repository must be reliable and uninterrupted.
- If you are mapping more than one source and destination network, you must create a network attachment definition for each additional destination network, as in the sketch after this list.
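As a sketch of such a network attachment definition, assuming a Linux bridge named br1 on the worker nodes and a target namespace my-vms (both assumptions, to be replaced with your own values):

----
# Sketch: one NetworkAttachmentDefinition per additional destination network.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan10          # placeholder name, referenced by the network map
  namespace: my-vms     # assumed target namespace
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "vlan10",
      "type": "bridge",
      "bridge": "br1",
      "ipam": {}
    }
----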
+
+
+
+

Ports

+
+
+

The firewalls must enable traffic over the following ports:

+
Table 1. Network ports required for migrating from VMware vSphere

| Port | Protocol | Source | Destination | Purpose |
| --- | --- | --- | --- | --- |
| 443 | TCP | OpenShift nodes | VMware vCenter | VMware provider inventory; disk transfer authentication |
| 443 | TCP | OpenShift nodes | VMware ESXi hosts | Disk transfer authentication |
| 902 | TCP | OpenShift nodes | VMware ESXi hosts | Disk transfer data copy |

Table 2. Network ports required for migrating from oVirt

| Port | Protocol | Source | Destination | Purpose |
| --- | --- | --- | --- | --- |
| 443 | TCP | OpenShift nodes | oVirt Engine | oVirt provider inventory; disk transfer authentication |
| 443 | TCP | OpenShift nodes | oVirt hosts | Disk transfer authentication |
| 54322 | TCP | OpenShift nodes | oVirt hosts | Disk transfer data copy |
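Before starting a migration, it can be worth confirming that these ports are reachable from the network the OpenShift nodes use. A minimal sketch, run from a host on the same network as the nodes, with placeholder host names:

----
# Check TCP reachability of vCenter and an ESXi host (placeholder host names).
$ nc -zv vcenter.example.com 443
$ nc -zv esxi01.example.com 902
----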
+ + +
diff --git a/documentation/doc-Release_notes/modules/new-features-and-enhancements-2-7/index.html b/documentation/doc-Release_notes/modules/new-features-and-enhancements-2-7/index.html
+

New features and enhancements

+
+
+
+

Forklift 2.7 introduces the following features and enhancements:

+
+
+
+
+

New features and enhancements 2.7.0

+
+
+
- In Forklift 2.7.0, warm migration is now based on RHEL 9, inheriting its features and bug fixes.
+
+
+ + +
diff --git a/documentation/doc-Release_notes/modules/new-migrating-virtual-machines-cli/index.html b/documentation/doc-Release_notes/modules/new-migrating-virtual-machines-cli/index.html
+
+
Procedure

1. Create a Secret manifest for the source provider credentials:
2. Create a Provider manifest for the source provider:
3. Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

----
$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Hook
metadata:
  name: <hook>
  namespace: <namespace>
spec:
  image: quay.io/konveyor/hook-runner
  playbook: |
    LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv
    YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s
    IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh
    cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr
    bG9hZAoK
EOF
----

   where:

   playbook refers to an optional Base64-encoded Ansible Playbook. If you specify a playbook, the image must be hook-runner. To produce the playbook value, see the encoding sketch after this procedure.

   Note: You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.

4. Create a Migration manifest to run the Plan CR:

----
$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <name_of_migration_cr>
  namespace: <namespace>
spec:
  plan:
    name: <name_of_plan_cr>
    namespace: <namespace>
  cutover: <optional_cutover_time>
EOF
----

   Note: If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00.
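As referenced in the Hook step above, the playbook field carries a Base64-encoded Ansible playbook. A minimal sketch for producing and inspecting that value, assuming a local file named playbook.yml and a GNU coreutils base64:

----
# Encode a local playbook for use in the Hook manifest (-w0 disables line wrapping).
$ base64 -w0 playbook.yml

# Decode the value from an existing Hook manifest to inspect it.
$ echo "<base64_playbook_value>" | base64 -d
----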
+ + +
diff --git a/documentation/doc-Release_notes/modules/non-admin-permissions-for-ui/index.html b/documentation/doc-Release_notes/modules/non-admin-permissions-for-ui/index.html
+

Permissions needed by non-administrators to work with migration plan components

+
+

If you are an administrator, you can work with all components of migration plans (for example, providers, network mappings, and migration plans).

+
+
+

By default, non-administrators have limited ability to work with migration plans and their components. As an administrator, you can modify their roles to allow them full access to all components, or you can give them limited permissions.

+
+
+

For example, administrators can assign non-administrators one or more of the following cluster roles for migration plans:

+
Table 1. Example migration plan roles and their privileges

| Role | Description |
| --- | --- |
| plans.forklift.konveyor.io-v1beta1-view | Can view migration plans but cannot create, delete, or modify them |
| plans.forklift.konveyor.io-v1beta1-edit | Can create, delete, or modify (all parts of edit permissions) individual migration plans |
| plans.forklift.konveyor.io-v1beta1-admin | All edit privileges and the ability to delete the entire collection of migration plans |

+
+

Note that pre-defined cluster roles include a resource (for example, plans), an API group (for example, forklift.konveyor.io-v1beta1) and an action (for example, view, edit).

+
+
+

As a more comprehensive example, you can grant non-administrators the following set of permissions per namespace:

+
+
+
- Create and modify storage maps, network maps, and migration plans for the namespaces they have access to
- Attach providers created by administrators to storage maps, network maps, and migration plans
- Not be able to create providers or change system settings
Table 2. Example permissions required for non-administrators to work with migration plan components but not create providers

| Actions | API group | Resource |
| --- | --- | --- |
| get, list, watch, create, update, patch, delete | forklift.konveyor.io | plans |
| get, list, watch, create, update, patch, delete | forklift.konveyor.io | migrations |
| get, list, watch, create, update, patch, delete | forklift.konveyor.io | hooks |
| get, list, watch | forklift.konveyor.io | providers |
| get, list, watch, create, update, patch, delete | forklift.konveyor.io | networkmaps |
| get, list, watch, create, update, patch, delete | forklift.konveyor.io | storagemaps |
| get, list, watch | forklift.konveyor.io | forkliftcontrollers |
| create, patch, delete | Empty string | secrets |

+
Note: Non-administrators need to have the create permissions that are part of edit roles for network maps and for storage maps to create migration plans, even when using a template for a network map or a storage map.

+
+
+
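As a sketch of granting one of the cluster roles above to a non-administrator in a single namespace, assuming a user named alice and a namespace my-project (both placeholders):

----
# Sketch: let user "alice" edit migration plans in the my-project namespace only.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alice-plan-editor
  namespace: my-project          # assumed namespace
subjects:
  - kind: User
    name: alice                  # placeholder user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: plans.forklift.konveyor.io-v1beta1-edit
  apiGroup: rbac.authorization.k8s.io
----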
+ + +
diff --git a/documentation/doc-Release_notes/modules/obtaining-console-url/index.html b/documentation/doc-Release_notes/modules/obtaining-console-url/index.html
+

Getting the Forklift web console URL

+
+

You can get the Forklift web console URL at any time by using either the OKD web console or the command line.

+
+
+
Prerequisites
+
- KubeVirt Operator installed.
- Forklift Operator installed.
- You must be logged in as a user with cluster-admin privileges.
+
+
Procedure
+
- If you are using the OKD web console, follow these steps:
+
+

Unresolved directive in obtaining-console-url.adoc - include::snippet_getting_web_console_url_web.adoc[]

+
+
+
- If you are using the command line, get the Forklift web console URL with the following command:
+
+
+

Unresolved directive in obtaining-console-url.adoc - include::snippet_getting_web_console_url_cli.adoc[]

+
+
+

You can now launch a browser and navigate to the Forklift web console.
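As a sketch of the command-line approach, assuming Forklift is installed in the openshift-mtv namespace and exposes its web console through a route; the route name forklift-ui is an assumption and may differ in your installation:

----
# List the routes in the Forklift namespace, then read the console host field.
$ oc get routes -n openshift-mtv
$ oc get route forklift-ui -n openshift-mtv -o jsonpath='{.spec.host}'
----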

+
+ + +
diff --git a/documentation/doc-Release_notes/modules/openstack-prerequisites/index.html b/documentation/doc-Release_notes/modules/openstack-prerequisites/index.html
+

OpenStack prerequisites

+
+

The following prerequisites apply to {osp} migrations:

+
+
+ +
+ + +
diff --git a/documentation/doc-Release_notes/modules/ostack-app-cred-auth/index.html b/documentation/doc-Release_notes/modules/ostack-app-cred-auth/index.html
+

Using application credential authentication with an {osp} source provider

+
+

You can use application credential authentication, instead of username and password authentication, when you create an {osp} source provider.

+
+
+

Forklift supports both of the following types of application credential authentication:

+
+
+
- Application credential ID
- Application credential name
+
+

For each type of application credential authentication, you need to use data from OpenStack to create a Secret manifest.

+
+
+
Prerequisites
+

You have an {osp} account.

+
+
+
Procedure

1. In the dashboard of the {osp} web console, click Project > API Access.
2. Expand Download OpenStack RC file and click OpenStack RC file.

   The file that is downloaded, referred to here as <openstack_rc_file>, includes the following fields used for application credential authentication:

----
OS_AUTH_URL
OS_PROJECT_ID
OS_PROJECT_NAME
OS_DOMAIN_NAME
OS_USERNAME
----

3. To get the data needed for application credential authentication, run the following command:

----
$ openstack application credential create --role member --role reader --secret redhat forklift
----

   The output, referred to here as <openstack_credential_output>, includes:

   - The id and secret that you need for authentication using an application credential ID
   - The name and secret that you need for authentication using an application credential name

4. Create a Secret manifest similar to the following:

   - For authentication using the application credential ID:

----
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: openstack-secret-appid
  namespace: openshift-mtv
  labels:
    createdForProviderType: openstack
type: Opaque
stringData:
  authType: applicationcredential
  applicationCredentialID: <id_from_openstack_credential_output>
  applicationCredentialSecret: <secret_from_openstack_credential_output>
  url: <OS_AUTH_URL_from_openstack_rc_file>
EOF
----

   - For authentication using the application credential name:

----
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: openstack-secret-appname
  namespace: openshift-mtv
  labels:
    createdForProviderType: openstack
type: Opaque
stringData:
  authType: applicationcredential
  applicationCredentialName: <name_from_openstack_credential_output>
  applicationCredentialSecret: <secret_from_openstack_credential_output>
  domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
  username: <OS_USERNAME_from_openstack_rc_file>
  url: <OS_AUTH_URL_from_openstack_rc_file>
EOF
----

5. Continue migrating your virtual machine according to the procedure in Migrating virtual machines, starting with step 2, "Create a Provider manifest for the source provider."
+ + +
diff --git a/documentation/doc-Release_notes/modules/ostack-token-auth/index.html b/documentation/doc-Release_notes/modules/ostack-token-auth/index.html
+

Using token authentication with an {osp} source provider

+
+

You can use token authentication, instead of username and password authentication, when you create an {osp} source provider.

+
+
+

Forklift supports both of the following types of token authentication:

+
+
+
- Token with user ID
- Token with user name
+
+

For each type of token authentication, you need to use data from OpenStack to create a Secret manifest.

+
+
+
Prerequisites
+

You have an {osp} account.

+
+
+
Procedure

1. In the dashboard of the {osp} web console, click Project > API Access.
2. Expand Download OpenStack RC file and click OpenStack RC file.

   The file that is downloaded, referred to here as <openstack_rc_file>, includes the following fields used for token authentication:

----
OS_AUTH_URL
OS_PROJECT_ID
OS_PROJECT_NAME
OS_DOMAIN_NAME
OS_USERNAME
----

3. To get the data needed for token authentication, run the following command:

----
$ openstack token issue
----

   The output, referred to here as <openstack_token_output>, includes the token, userID, and projectID that you need for authentication using a token with user ID.

4. Create a Secret manifest similar to the following:

   - For authentication using a token with user ID:

----
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: openstack-secret-tokenid
  namespace: openshift-mtv
  labels:
    createdForProviderType: openstack
type: Opaque
stringData:
  authType: token
  token: <token_from_openstack_token_output>
  projectID: <projectID_from_openstack_token_output>
  userID: <userID_from_openstack_token_output>
  url: <OS_AUTH_URL_from_openstack_rc_file>
EOF
----

   - For authentication using a token with user name:

----
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: openstack-secret-tokenname
  namespace: openshift-mtv
  labels:
    createdForProviderType: openstack
type: Opaque
stringData:
  authType: token
  token: <token_from_openstack_token_output>
  domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
  projectName: <OS_PROJECT_NAME_from_openstack_rc_file>
  username: <OS_USERNAME_from_openstack_rc_file>
  url: <OS_AUTH_URL_from_openstack_rc_file>
EOF
----

5. Continue migrating your virtual machine according to the procedure in Migrating virtual machines, starting with step 2, "Create a Provider manifest for the source provider."
+
+ + +
diff --git a/documentation/doc-Release_notes/modules/ova-prerequisites/index.html b/documentation/doc-Release_notes/modules/ova-prerequisites/index.html
+

Open Virtual Appliance (OVA) prerequisites

+
+

The following prerequisites apply to Open Virtual Appliance (OVA) file migrations:

+
+
+
- All OVA files are created by VMware vSphere.
+
+
Note: Migration of OVA files that were not created by VMware vSphere but are compatible with vSphere might succeed. However, migration of such files is not supported by Forklift. Forklift supports only OVA files created by VMware vSphere.

+
+
+
+
+
- The OVA files are in one or more folders under an NFS shared directory, in one of the following structures (see the example layout after this list):

  - In one or more compressed Open Virtualization Format (OVF) packages that hold all the VM information.

    The filename of each compressed package must have the .ova extension. Several compressed packages can be stored in the same folder.

    When this structure is used, Forklift scans the root folder and the first-level subfolders for compressed packages.

    For example, if the NFS share is /nfs, then:

    - The folder /nfs is scanned.
    - The folder /nfs/subfolder1 is scanned.
    - But /nfs/subfolder1/subfolder2 is not scanned.

  - In extracted OVF packages.

    When this structure is used, Forklift scans the root folder, first-level subfolders, and second-level subfolders for extracted OVF packages. However, there can be only one .ovf file in a folder; otherwise, the migration will fail.

    For example, if the NFS share is /nfs, then:

    - The OVF file /nfs/vm.ovf is scanned.
    - The OVF file /nfs/subfolder1/vm.ovf is scanned.
    - The OVF file /nfs/subfolder1/subfolder2/vm.ovf is scanned.
    - But the OVF file /nfs/subfolder1/subfolder2/subfolder3/vm.ovf is not scanned.
+ + +
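An example NFS share layout illustrating the scanning rules above; all paths are illustrative only:

----
/nfs
├── vm1.ova                  # scanned: .ova in the root folder
└── subfolder1/
    ├── vm2.ova              # scanned: .ova in a first-level subfolder
    └── subfolder2/
        ├── vm3.ova          # NOT scanned: .ova below the first level
        ├── vm4.ovf          # scanned: .ovf up to the second level
        └── subfolder3/
            └── vm5.ovf      # NOT scanned: .ovf below the second level
----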
diff --git a/documentation/doc-Release_notes/modules/retrieving-validation-service-json/index.html b/documentation/doc-Release_notes/modules/retrieving-validation-service-json/index.html
+

Retrieving the Inventory service JSON

+
+

You retrieve the Inventory service JSON by sending an Inventory service query to a virtual machine (VM). The output contains an "input" key, which contains the inventory attributes that are queried by the Validation service rules.

+
+
+

You can create a validation rule based on any attribute in the "input" key, for example, input.snapshot.kind.
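For illustration, a rule over that attribute might look like the following Rego sketch. The package name and the shape of the concerns rule follow the Validation service's conventions, but treat the details as an assumption and model new rules on the rules shipped with your version:

----
package io.konveyor.forklift.vmware   # assumed package for vSphere rules

# Flag VMs that carry a snapshot, based on the "input" key shown in the
# example output of the procedure below.
has_snapshot {
    input.snapshot.kind == "VirtualMachineSnapshot"
}

concerns[flag] {
    has_snapshot
    flag := {
        "category": "Warning",
        "label": "VM has a snapshot",
        "assessment": "Snapshots are not migrated with the VM."
    }
}
----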

+
+
+
Procedure

1. Retrieve the routes for the project:

----
$ oc get route -n openshift-mtv
----

2. Retrieve the Inventory service route:

----
$ kubectl get route <inventory_service> -n konveyor-forklift
----

3. Retrieve the access token:

----
$ TOKEN=$(oc whoami -t)
----

4. Trigger an HTTP GET request (for example, using Curl):

----
$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers -k
----

5. Retrieve the UUID of a provider:

----
$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/<provider> -k (1)
----

   (1) Allowed values for the provider are vsphere, ovirt, and openstack.

6. Retrieve the VMs of a provider:

----
$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/<provider>/<UUID>/vms -k
----

7. Retrieve the details of a VM:

----
$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/<provider>/<UUID>/workloads/<vm> -k
----

   Example output

----
{
    "input": {
        "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/workloads/vm-431",
        "id": "vm-431",
        "parent": {
            "kind": "Folder",
            "id": "group-v22"
        },
        "revision": 1,
        "name": "iscsi-target",
        "revisionValidated": 1,
        "isTemplate": false,
        "networks": [
            {
                "kind": "Network",
                "id": "network-31"
            },
            {
                "kind": "Network",
                "id": "network-33"
            }
        ],
        "disks": [
            {
                "key": 2000,
                "file": "[iSCSI_Datastore] iscsi-target/iscsi-target-000001.vmdk",
                "datastore": {
                    "kind": "Datastore",
                    "id": "datastore-63"
                },
                "capacity": 17179869184,
                "shared": false,
                "rdm": false
            },
            {
                "key": 2001,
                "file": "[iSCSI_Datastore] iscsi-target/iscsi-target_1-000001.vmdk",
                "datastore": {
                    "kind": "Datastore",
                    "id": "datastore-63"
                },
                "capacity": 10737418240,
                "shared": false,
                "rdm": false
            }
        ],
        "concerns": [],
        "policyVersion": 5,
        "uuid": "42256329-8c3a-2a82-54fd-01d845a8bf49",
        "firmware": "bios",
        "powerState": "poweredOn",
        "connectionState": "connected",
        "snapshot": {
            "kind": "VirtualMachineSnapshot",
            "id": "snapshot-3034"
        },
        "changeTrackingEnabled": false,
        "cpuAffinity": [
            0,
            2
        ],
        "cpuHotAddEnabled": true,
        "cpuHotRemoveEnabled": false,
        "memoryHotAddEnabled": false,
        "faultToleranceEnabled": false,
        "cpuCount": 2,
        "coresPerSocket": 1,
        "memoryMB": 2048,
        "guestName": "Red Hat Enterprise Linux 7 (64-bit)",
        "balloonedMemory": 0,
        "ipAddress": "10.19.2.96",
        "storageUsed": 30436770129,
        "numaNodeAffinity": [
            "0",
            "1"
        ],
        "devices": [
            {
                "kind": "RealUSBController"
            }
        ],
        "host": {
            "id": "host-29",
            "parent": {
                "kind": "Cluster",
                "id": "domain-c26"
            },
            "revision": 1,
            "name": "IP address or host name of the vCenter host or oVirt Engine host",
            "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/hosts/host-29",
            "status": "green",
            "inMaintenance": false,
            "managementServerIp": "10.19.2.96",
            "thumbprint": <thumbprint>,
            "timezone": "UTC",
            "cpuSockets": 2,
            "cpuCores": 16,
            "productName": "VMware ESXi",
            "productVersion": "6.5.0",
            "networking": {
                "pNICs": [
                    {
                        "key": "key-vim.host.PhysicalNic-vmnic0",
                        "linkSpeed": 10000
                    },
                    {
                        "key": "key-vim.host.PhysicalNic-vmnic1",
                        "linkSpeed": 10000
                    },
                    {
                        "key": "key-vim.host.PhysicalNic-vmnic2",
                        "linkSpeed": 10000
                    },
                    {
                        "key": "key-vim.host.PhysicalNic-vmnic3",
                        "linkSpeed": 10000
                    }
                ],
                "vNICs": [
                    {
                        "key": "key-vim.host.VirtualNic-vmk2",
                        "portGroup": "VM_Migration",
                        "dPortGroup": "",
                        "ipAddress": "192.168.79.13",
                        "subnetMask": "255.255.255.0",
                        "mtu": 9000
                    },
                    {
                        "key": "key-vim.host.VirtualNic-vmk0",
                        "portGroup": "Management Network",
                        "dPortGroup": "",
                        "ipAddress": "10.19.2.13",
                        "subnetMask": "255.255.255.128",
                        "mtu": 1500
                    },
                    {
                        "key": "key-vim.host.VirtualNic-vmk1",
                        "portGroup": "Storage Network",
                        "dPortGroup": "",
                        "ipAddress": "172.31.2.13",
                        "subnetMask": "255.255.0.0",
                        "mtu": 1500
                    },
                    {
                        "key": "key-vim.host.VirtualNic-vmk3",
                        "portGroup": "",
                        "dPortGroup": "dvportgroup-48",
                        "ipAddress": "192.168.61.13",
                        "subnetMask": "255.255.255.0",
                        "mtu": 1500
                    },
                    {
                        "key": "key-vim.host.VirtualNic-vmk4",
                        "portGroup": "VM_DHCP_Network",
                        "dPortGroup": "",
                        "ipAddress": "10.19.2.231",
                        "subnetMask": "255.255.255.128",
                        "mtu": 1500
                    }
                ],
                "portGroups": [
                    {
                        "key": "key-vim.host.PortGroup-VM Network",
                        "name": "VM Network",
                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch0"
                    },
                    {
                        "key": "key-vim.host.PortGroup-Management Network",
                        "name": "Management Network",
                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch0"
                    },
                    {
                        "key": "key-vim.host.PortGroup-VM_10G_Network",
                        "name": "VM_10G_Network",
                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
                    },
                    {
                        "key": "key-vim.host.PortGroup-VM_Storage",
                        "name": "VM_Storage",
                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
                    },
                    {
                        "key": "key-vim.host.PortGroup-VM_DHCP_Network",
                        "name": "VM_DHCP_Network",
                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
                    },
                    {
                        "key": "key-vim.host.PortGroup-Storage Network",
                        "name": "Storage Network",
                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
                    },
                    {
                        "key": "key-vim.host.PortGroup-VM_Isolated_67",
                        "name": "VM_Isolated_67",
                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch2"
                    },
                    {
                        "key": "key-vim.host.PortGroup-VM_Migration",
                        "name": "VM_Migration",
                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch2"
                    }
                ],
                "switches": [
                    {
                        "key": "key-vim.host.VirtualSwitch-vSwitch0",
                        "name": "vSwitch0",
                        "portGroups": [
                            "key-vim.host.PortGroup-VM Network",
                            "key-vim.host.PortGroup-Management Network"
                        ],
                        "pNICs": [
                            "key-vim.host.PhysicalNic-vmnic4"
                        ]
                    },
                    {
                        "key": "key-vim.host.VirtualSwitch-vSwitch1",
                        "name": "vSwitch1",
                        "portGroups": [
                            "key-vim.host.PortGroup-VM_10G_Network",
                            "key-vim.host.PortGroup-VM_Storage",
                            "key-vim.host.PortGroup-VM_DHCP_Network",
                            "key-vim.host.PortGroup-Storage Network"
                        ],
                        "pNICs": [
                            "key-vim.host.PhysicalNic-vmnic2",
                            "key-vim.host.PhysicalNic-vmnic0"
                        ]
                    },
                    {
                        "key": "key-vim.host.VirtualSwitch-vSwitch2",
                        "name": "vSwitch2",
                        "portGroups": [
                            "key-vim.host.PortGroup-VM_Isolated_67",
                            "key-vim.host.PortGroup-VM_Migration"
                        ],
                        "pNICs": [
                            "key-vim.host.PhysicalNic-vmnic3",
                            "key-vim.host.PhysicalNic-vmnic1"
                        ]
                    }
                ]
            },
            "networks": [
                {
                    "kind": "Network",
                    "id": "network-31"
                },
                {
                    "kind": "Network",
                    "id": "network-34"
                },
                {
                    "kind": "Network",
                    "id": "network-57"
                },
                {
                    "kind": "Network",
                    "id": "network-33"
                },
                {
                    "kind": "Network",
                    "id": "dvportgroup-47"
                }
            ],
            "datastores": [
                {
                    "kind": "Datastore",
                    "id": "datastore-35"
                },
                {
                    "kind": "Datastore",
                    "id": "datastore-63"
                }
            ],
            "vms": null,
            "networkAdapters": [],
            "cluster": {
                "id": "domain-c26",
                "parent": {
                    "kind": "Folder",
                    "id": "group-h23"
                },
                "revision": 1,
                "name": "mycluster",
                "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/clusters/domain-c26",
                "folder": "group-h23",
                "networks": [
                    {
                        "kind": "Network",
                        "id": "network-31"
                    },
                    {
                        "kind": "Network",
                        "id": "network-34"
                    },
                    {
                        "kind": "Network",
                        "id": "network-57"
                    },
                    {
                        "kind": "Network",
                        "id": "network-33"
                    },
                    {
                        "kind": "Network",
                        "id": "dvportgroup-47"
                    }
                ],
                "datastores": [
                    {
                        "kind": "Datastore",
                        "id": "datastore-35"
                    },
                    {
                        "kind": "Datastore",
                        "id": "datastore-63"
                    }
                ],
                "hosts": [
                    {
                        "kind": "Host",
                        "id": "host-44"
                    },
                    {
                        "kind": "Host",
                        "id": "host-29"
                    }
                ],
                "dasEnabled": false,
                "dasVms": [],
                "drsEnabled": true,
                "drsBehavior": "fullyAutomated",
                "drsVms": [],
                "datacenter": null
            }
        }
    }
}
----
+ + +
diff --git a/documentation/doc-Release_notes/modules/retrieving-vmware-moref/index.html b/documentation/doc-Release_notes/modules/retrieving-vmware-moref/index.html
+

Retrieving a VMware vSphere moRef

+
+

When you migrate VMs with a VMware vSphere source provider using Forklift from the CLI, you need to know the managed object reference (moRef) of certain entities in vSphere, such as datastores, networks, and VMs.

+
+
+

You can retrieve the moRef of one or more vSphere entities from the Inventory service. You can then use each moRef as a reference for retrieving the moRef of another entity.

+
+
+
Procedure

1. Retrieve the routes for the project:

----
$ oc get route -n openshift-mtv
----

2. Retrieve the Inventory service route:

----
$ kubectl get route <inventory_service> -n konveyor-forklift
----

3. Retrieve the access token:

----
$ TOKEN=$(oc whoami -t)
----

4. Retrieve the moRef of a VMware vSphere provider:

----
$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/vsphere -k
----

5. Retrieve the datastores of a VMware vSphere source provider:

----
$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/vsphere/<provider id>/datastores/ -k
----

   Example output

----
[
  {
    "id": "datastore-11",
    "parent": {
      "kind": "Folder",
      "id": "group-s5"
    },
    "path": "/Datacenter/datastore/v2v_general_porpuse_ISCSI_DC",
    "revision": 46,
    "name": "v2v_general_porpuse_ISCSI_DC",
    "selfLink": "providers/vsphere/01278af6-e1e4-4799-b01b-d5ccc8dd0201/datastores/datastore-11"
  },
  {
    "id": "datastore-730",
    "parent": {
      "kind": "Folder",
      "id": "group-s5"
    },
    "path": "/Datacenter/datastore/f01-h27-640-SSD_2",
    "revision": 46,
    "name": "f01-h27-640-SSD_2",
    "selfLink": "providers/vsphere/01278af6-e1e4-4799-b01b-d5ccc8dd0201/datastores/datastore-730"
  },
 ...
----
+
+

In this example, the moRef of the datastore v2v_general_porpuse_ISCSI_DC is datastore-11 and the moRef of the datastore f01-h27-640-SSD_2 is datastore-730.
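A retrieved moRef is typically pasted into a mapping or plan manifest. The following sketch of a StorageMap maps datastore-11 to a storage class; the provider names and the storage class standard-csi are placeholders, not values from your environment:

----
# Sketch: use the datastore moRef retrieved above as the source of a storage map.
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
  name: vsphere-storage-map        # placeholder name
  namespace: openshift-mtv
spec:
  provider:
    source:
      name: vsphere-provider       # placeholder source provider
      namespace: openshift-mtv
    destination:
      name: host                   # placeholder destination provider
      namespace: openshift-mtv
  map:
    - source:
        id: datastore-11           # moRef from the Inventory service
      destination:
        storageClass: standard-csi # assumed storage class
----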

+
+ + +
diff --git a/documentation/doc-Release_notes/modules/rhv-prerequisites/index.html b/documentation/doc-Release_notes/modules/rhv-prerequisites/index.html
+

oVirt prerequisites

+
+

The following prerequisites apply to oVirt migrations:

+
+
+
- To create a source provider, you must have at least the UserRole and ReadOnlyAdmin roles assigned to you. These are the minimum required permissions; however, any other administrator or superuser permissions will also work.
+
Important: You must keep the UserRole and ReadOnlyAdmin roles until the virtual machines of the source provider have been migrated. Otherwise, the migration will fail.

+
+
+
+
+
- To migrate virtual machines:

  - You must have one of the following:

    - oVirt admin permissions. These permissions allow you to migrate any virtual machine in the system.
    - DiskCreator and UserVmManager permissions on every virtual machine you want to migrate.

  - You must use a compatible version of oVirt.
  - You must have the Engine CA certificate, unless it was replaced by a third-party certificate, in which case, specify the Engine Apache CA certificate.

    You can obtain the Engine CA certificate by navigating to https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA in a browser.

  - If you are migrating a virtual machine with a direct LUN disk, ensure that the nodes in the KubeVirt destination cluster that the VM is expected to run on can access the backend storage.
+
+
+

Unresolved directive in rhv-prerequisites.adoc - include::snip-migrating-luns.adoc[]

+
+ + +
diff --git a/documentation/doc-Release_notes/modules/rn-2.0/index.html b/documentation/doc-Release_notes/modules/rn-2.0/index.html
+

Forklift 2.0

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere with Forklift.

+
+
+

The release notes describe new features and enhancements, known issues, and technical changes.

+
+
+
+
+

New features and enhancements

+
+
+

This release adds the following features and improvements.

+
+
+
Warm migration
+

Warm migration reduces downtime by copying most of the VM data during a precopy stage while the VMs are running. During the cutover stage, the VMs are stopped and the rest of the data is copied.

+
+
+
Cancel migration
+

You can cancel an entire migration plan or individual VMs while a migration is in progress. A canceled migration plan can be restarted in order to migrate the remaining VMs.

+
+
+
Migration network
+

You can select a migration network for the source and target providers for improved performance. By default, data is copied using the VMware administration network and the OKD pod network.

+
+
+
Validation service
+

The validation service checks source VMs for issues that might affect migration and flags the VMs with concerns in the migration plan.

+
+
+ + + + + +
+
Important
+
+
+

The validation service is a Technology Preview feature only. Technology Preview features +are not supported with Red Hat production service level agreements (SLAs) and +might not be functionally complete. Red Hat does not recommend using them +in production. These features provide early access to upcoming product +features, enabling customers to test functionality and provide feedback during +the development process.

+
+
+

For more information about the support scope of Red Hat Technology Preview +features, see https://access.redhat.com/support/offerings/techpreview/.

+
+
+
+
+
+
+

Known issues

+
+
+

This section describes known issues and mitigations.

+
+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Network map displays a "Destination network not found" error
+

If the network map remains in a NotReady state and the NetworkMap manifest displays a Destination network not found error, the cause is a missing network attachment definition. You must create a network attachment definition for each additional destination network before you create the network map. (BZ#1971259)

+
+
+
Warm migration gets stuck during third precopy
+

Warm migration uses changed block tracking snapshots to copy data during the precopy stage. The snapshots are created at one-hour intervals by default. When a snapshot is created, its contents are copied to the destination cluster. However, when the third snapshot is created, the first snapshot is deleted and the block tracking is lost. (BZ#1969894)

+
+
+

You can do one of the following to mitigate this issue:

+
+
+
- Start the cutover stage no more than one hour after the precopy stage begins so that only one internal snapshot is created.
- Increase the snapshot interval in the vm-import-controller-config config map to 720 minutes:

----
$ kubectl patch configmap/vm-import-controller-config \
  -n openshift-cnv \
  -p '{"data": {"warmImport.intervalMinutes": "720"}}'
----
+
+
+
+ + +
diff --git a/documentation/doc-Release_notes/modules/rn-2.1/index.html b/documentation/doc-Release_notes/modules/rn-2.1/index.html
+

Forklift 2.1

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.

+
+
+

The release notes describe new features and enhancements, known issues, and technical changes.

+
+
+
+
+

Technical changes

+
+
+
VDDK image added to HyperConverged custom resource
+

The VMware Virtual Disk Development Kit (VDDK) SDK image must be added to the HyperConverged custom resource. Before this release, it was referenced in the v2v-vmware config map.

+
+
+
+
+

New features and enhancements

+
+
+

This release adds the following features and improvements.

+
+
+
Cold migration from oVirt
+

You can perform a cold migration of VMs from oVirt.

+
+
+
Migration hooks
+

You can create migration hooks to run Ansible playbooks or custom code before or after migration.

+
+
+
Filtered must-gather data collection
+

You can specify options for the must-gather tool that enable you to filter the data by namespace, migration plan, or VMs.

+
+
+
SR-IOV network support
+

You can migrate VMs with a single root I/O virtualization (SR-IOV) network interface if the KubeVirt environment has an SR-IOV network.

+
+
+
+
+

Known issues

+
+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Disk copy stage does not progress
+

The disk copy stage of an oVirt VM does not progress and the Forklift web console does not display an error message. (BZ#1990596)

+
+
+

The cause of this problem might be one of the following conditions:

+
+
+
- The storage class does not exist on the target cluster.
- The VDDK image has not been added to the HyperConverged custom resource.
- The VM does not have a disk.
- The VM disk is locked.
- The VM time zone is not set to UTC.
- The VM is configured for a USB device.
+
+

To disable USB devices, see Configuring USB Devices in the Red Hat Virtualization documentation.

+
+
+

To determine the cause:

+
+
+
1. Click Workloads > Virtualization in the OKD web console.
2. Click the Virtual Machines tab.
3. Select a virtual machine to open the Virtual Machine Overview screen.
4. Click Status to view the status of the virtual machine.
+
+
VM time zone must be UTC with no offset
+

The time zone of the source VMs must be UTC with no offset. You can set the time zone to GMT Standard Time after first assessing the potential impact on the workload. (BZ#1993259)

+
+
+
oVirt resource UUID causes a "Provider not found" error
+

If an oVirt resource UUID is used in a Host, NetworkMap, StorageMap, or Plan custom resource (CR), a "Provider not found" error is displayed.

+
+
+

You must use the resource name. (BZ#1994037)

+
+
+
Same oVirt resource name in different data centers causes ambiguous reference
+

If an oVirt resource name is used in a NetworkMap, StorageMap, or Plan custom resource (CR) and if the same resource name exists in another data center, the Plan CR displays a critical "Ambiguous reference" condition. You must rename the resource or use the resource UUID in the CR.

+
+
+

In the web console, the resource name appears twice in the same list without a data center reference to distinguish them. You must rename the resource. (BZ#1993089)

+
+
+
Snapshots are not deleted after warm migration
+

Snapshots are not deleted automatically after a successful warm migration of a VMware VM. You must delete the snapshots manually in VMware vSphere. (BZ#2001270)

+
+
+
+ + +
diff --git a/documentation/doc-Release_notes/modules/rn-2.2/index.html b/documentation/doc-Release_notes/modules/rn-2.2/index.html
+

Forklift 2.2

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.

+
+
+

The release notes describe technical changes, new features and enhancements, and known issues.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Setting the precopy time interval for warm migration
+

You can set the time interval between snapshots taken during the precopy stage of warm migration.
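For example, a hedged sketch of the corresponding ForkliftController spec fragment, assuming the parameter name controller_precopy_interval with a value in minutes:

    spec:
      controller_precopy_interval: 60   # assumed parameter name; minutes between precopy snapshots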

+
+
+
+
+

New features and enhancements

+
+
+

This release has the following features and improvements:

+
+
+
Creating validation rules
+

You can create custom validation rules to check the suitability of VMs for migration. Validation rules are based on the VM attributes collected by the Provider Inventory service and written in Rego, the Open Policy Agent native query language.

+
+
+
Downloading logs by using the web console
+

You can download logs for a migration plan or a migrated VM by using the Forklift web console.

+
+
+
Duplicating a migration plan by using the web console
+

You can duplicate a migration plan by using the web console, including its VMs, mappings, and hooks, in order to edit the copy and run as a new migration plan.

+
+
+
Archiving a migration plan by using the web console
+

You can archive a migration plan by using the MTV web console. Archived plans can be viewed or duplicated. They cannot be run, edited, or unarchived.
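From the CLI, the same operation can be sketched as a patch of the Plan CR, assuming the spec.archived field; note that archived plans cannot be unarchived:

    # Field name and namespace are assumptions; adjust to your installation.
    oc patch plan <plan_name> -n konveyor-forklift --type merge -p '{"spec": {"archived": true}}'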

+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+
Certain Validation service issues do not block migration
+

Certain Validation service issues, which are marked as Critical and display the assessment text "The VM will not be migrated", do not block migration. (BZ#2025977)

+
+
+

The following Validation service assessments do not block migration:

+
Table 1. Issues that do not block migration

Assessment: The disk interface type is not supported by OpenShift Virtualization (only sata, virtio_scsi and virtio interface types are currently supported).
Result: The migrated VM will have a virtio disk if the source interface is not recognized.

Assessment: The NIC interface type is not supported by OpenShift Virtualization (only e1000, rtl8139 and virtio interface types are currently supported).
Result: The migrated VM will have a virtio NIC if the source interface is not recognized.

Assessment: The VM is using a vNIC profile configured for host device passthrough, which is not currently supported by OpenShift Virtualization.
Result: The migrated VM will have an SR-IOV NIC. The destination network must be set up correctly.

Assessment: One or more of the VM’s disks has an illegal or locked status condition.
Result: The migration will proceed but the disk transfer is likely to fail.

Assessment: The VM has a disk with a storage type other than image, and this is not currently supported by OpenShift Virtualization.
Result: The migration will proceed but the disk transfer is likely to fail.

Assessment: The VM has one or more snapshots with disks in ILLEGAL state. This is not currently supported by OpenShift Virtualization.
Result: The migration will proceed but the disk transfer is likely to fail.

Assessment: The VM has USB support enabled, but USB devices are not currently supported by OpenShift Virtualization.
Result: The migrated VM will not have USB devices.

Assessment: The VM is configured with a watchdog device, which is not currently supported by OpenShift Virtualization.
Result: The migrated VM will not have a watchdog device.

Assessment: The VM’s status is not up or down.
Result: The migration will proceed but it might hang if the VM cannot be powered off.
+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Missing resource causes error message in current.log file
+

If a resource does not exist, for example, if the virt-launcher pod does not exist because the migrated VM is powered off, its log is unavailable.

+
+
+

The following error appears in the missing resource’s current.log file when it is downloaded from the web console or created with the must-gather tool: error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'. (BZ#2023260)

+
+
+
Importer pod log is unavailable after warm migration
+

Retaining the importer pod for debug purposes causes warm migration to hang during the precopy stage. (BZ#2016290)

+
+
+

As a temporary workaround, the importer pod is removed at the end of the precopy stage so that the precopy succeeds. However, this means that the importer pod log is not retained after warm migration is complete. You can only view the importer pod log by using the oc logs -f <cdi-importer_pod> command during the precopy stage.

+
+
+

This issue only affects the importer pod log and warm migration. Cold migration and the virt-v2v logs are not affected.

+
+
+
Deleting a migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. You must archive a migration plan before deleting it in order to clean up the temporary resources. (BZ#2018974)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Migration plan details page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Network, storage, and VM referenced by name in the Plan CR are not displayed in the web console
+

If a Plan CR references storage, network, or VMs by name instead of by ID, the resources do not appear in the Forklift web console. The migration plan cannot be edited or duplicated. (BZ#1986020)

+
+
+
Log archive file includes logs of a deleted migration plan or VM
+

If you delete a migration plan and then run a new migration plan with the same name or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

+
+
+
If a target VM is deleted during migration, its migration status is Succeeded in the Plan CR
+

If you delete a target VirtualMachine CR during the 'Convert image to kubevirt' step of the migration, the Migration details page of the web console displays the state of the step as VirtualMachine CR not found. However, the status of the VM migration is Succeeded in the Plan CR file and in the web console. (BZ#2031529)

+
+
+
diff --git a/documentation/doc-Release_notes/modules/rn-2.3/index.html b/documentation/doc-Release_notes/modules/rn-2.3/index.html
new file mode 100644
index 00000000000..467981f7016
--- /dev/null
+++ b/documentation/doc-Release_notes/modules/rn-2.3/index.html
@@ -0,0 +1,156 @@
+

Forklift 2.3

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.

+
+
+

The release notes describe technical changes, new features and enhancements, and known issues.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Setting the VddkInitImage path is part of the procedure for adding a VMware provider
+

In the web console, you enter the VddkInitImage path when adding a VMware provider. Alternatively, from the CLI, you add the VddkInitImage path to the Provider CR for VMware migrations.
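A hedged sketch of such a Provider CR, assuming the settings.vddkInitImage key and placeholder values:

    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: vsphere-provider            # hypothetical name
      namespace: konveyor-forklift      # assumed namespace
    spec:
      type: vsphere
      url: https://vcenter.example.com/sdk        # placeholder vCenter SDK endpoint
      secret:
        name: vsphere-credentials                 # hypothetical credentials secret
        namespace: konveyor-forklift
      settings:
        vddkInitImage: <registry>/vddk:<tag>      # assumed key for the VDDK init image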

+
+
+
The StorageProfile resource needs to be updated for a non-provisioner storage class
+

You must update the StorageProfile resource with accessModes and volumeMode for non-provisioner storage classes such as NFS. The documentation includes a link to the relevant procedure.
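As a sketch, the StorageProfile for an NFS storage class (created automatically by CDI and named after the storage class) might be patched to declare the claim properties; the values shown are assumptions:

    apiVersion: cdi.kubevirt.io/v1beta1
    kind: StorageProfile
    metadata:
      name: nfs                    # matches the storage class name
    spec:
      claimPropertySets:
        - accessModes:
            - ReadWriteOnce        # assumed access mode for the NFS class
          volumeMode: Filesystem   # non-provisioner classes typically use Filesystem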

+
+
+
+
+

New features and enhancements

+
+
+

This release has the following features and improvements:

+
+
+
Forklift 2.3 supports warm migration from oVirt
+

You can use warm migration to migrate VMs from both VMware and oVirt.

+
+
+
The minimal sufficient set of privileges for VMware users is established
+

VMware users do not need full cluster-admin privileges to perform a VM migration. The minimal sufficient set of user privileges is established and documented.

+
+
+
Forklift documentation is updated with instructions on using hooks
+

Forklift documentation includes instructions on adding hooks to migration plans and running hooks on VMs.

+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+
Some warm migrations from oVirt might fail
+

When you run a migration plan for warm migration of multiple VMs from oVirt, the migrations of some VMs might fail during the cutover stage. In that case, restart the migration plan and set the cutover time for the VM migrations that failed in the first run. (BZ#2063531)
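When restarting from the CLI, the cutover time is expressed in the Migration CR; a minimal sketch, assuming the spec.cutover field and an ISO 8601 timestamp:

    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: warm-migration-retry       # hypothetical name
      namespace: konveyor-forklift     # assumed namespace
    spec:
      plan:
        name: <plan_name>
        namespace: konveyor-forklift
      cutover: "2024-01-01T12:00:00Z"  # time at which the cutover stage starts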

+
+
+
Snapshots are not deleted after warm migration
+

The Migration Controller service does not delete snapshots automatically after a successful warm migration of an oVirt VM. You can delete the snapshots manually. (BZ#2053183)

+
+
+
Warm migration from oVirt fails if a snapshot operation is performed on the source VM
+

If the user performs a snapshot operation on the source VM at the time when a migration snapshot is scheduled, the migration fails instead of waiting for the user’s snapshot operation to finish. (BZ#2057459)

+
+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Deleting a migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. You must archive a migration plan before deleting it in order to clean up the temporary resources. (BZ#2018974)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Migration plan details page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Log archive file includes logs of a deleted migration plan or VM
+

If you delete a migration plan and then run a new migration plan with the same name or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

+
+
+
Migration of virtual machines with encrypted partitions fails during conversion
+

The problem occurs for both vSphere and oVirt migrations.

+
+
+
Forklift 2.3.4 only: When the source provider is oVirt, duplicating a migration plan fails in either the network mapping stage or the storage mapping stage.
+

Possible workaround: Delete the cache in the browser or restart the browser. (BZ#2143191)

+
+
+
diff --git a/documentation/doc-Release_notes/modules/rn-2.4/index.html b/documentation/doc-Release_notes/modules/rn-2.4/index.html
new file mode 100644
index 00000000000..dbbfaae7b4c
--- /dev/null
+++ b/documentation/doc-Release_notes/modules/rn-2.4/index.html
@@ -0,0 +1,260 @@
+

Forklift 2.4

+
+
+
+

Migrate virtual machines (VMs) from VMware vSphere, oVirt, or {osp} to KubeVirt with Forklift.

+
+
+

The release notes describe technical changes, new features and enhancements, and known issues.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Faster disk image migration from oVirt
+

Disk images are no longer converted by using virt-v2v when migrating from oVirt. This change speeds up migrations and also allows migration for guest operating systems that are not supported by virt-v2v. (forklift-controller#403)

+
+
+
Faster disk transfers by ovirt-imageio client (ovirt-img)
+

Disk transfers use the ovirt-imageio client (ovirt-img) instead of Containerized Data Importer (CDI) when migrating from RHV to the local OpenShift Container Platform cluster, accelerating the migration.

+
+
+
Faster migration using conversion pod disk transfer
+

When migrating from vSphere to the local OpenShift Container Platform cluster, the conversion pod transfers the disk data instead of Containerized Data Importer (CDI), accelerating the migration.

+
+
+
Migrated virtual machines are not scheduled on the target OCP cluster
+

The migrated virtual machines are no longer scheduled on the target OpenShift Container Platform cluster. This enables migrating VMs that cannot start due to limit constraints on the target at migration time.

+
+
+
StorageProfile resource needs to be updated for a non-provisioner storage class
+

You must update the StorageProfile resource with accessModes and volumeMode for non-provisioner storage classes such as NFS.

+
+
+
VDDK 8 can be used in the VDDK image
+

Previous versions of Forklift supported only using VDDK version 7 for the VDDK image. Forklift supports both versions 7 and 8, as follows:

+
+
+
  • If you are migrating to OCP 4.12 or earlier, use VDDK version 7.
  • If you are migrating to OCP 4.13 or later, use VDDK version 8.
+
+
+
+
+

New features and enhancements

+
+
+

This release has the following features and improvements:

+
+
+
OpenStack migration
+

Forklift now supports migrations with {osp} as a source provider. This feature is provided as a Technology Preview and only supports cold migrations.

+
+
+
OCP console plugin
+

The Forklift Operator now integrates the Forklift web console into the OKD web console. The new UI operates as an OCP Console plugin that adds the sub-menu Migration to the navigation bar. The plugin is introduced in version 2.4 and disables the old UI. You can enable the old UI by setting feature_ui: true in ForkliftController. (MTV-427)

+
+
+
Skip certificate validation option
+

A 'Skip certificate validation' option was added to the VMware and oVirt providers. If selected, the provider’s certificate is not validated and the UI does not ask you to specify a CA certificate.

+
+
+
Only third-party certificate required
+

Only the third-party certificate needs to be specified when defining an oVirt provider that is set with the Manager CA certificate.

+
+
+
Conversion of VMs with RHEL9 guest operating system
+

Cold migrations from vSphere to a local Red Hat OpenShift cluster use virt-v2v on RHEL 9. (MTV-332)

+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+
Deleting a migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Log archive file includes logs of a deleted migration plan or VM
+

If you delete a migration plan and then run a new migration plan with the same name, or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

+
+
+
Migration of virtual machines with encrypted partitions fails during conversion
+

vSphere only: Migrations from oVirt and OpenStack do not fail, but the encryption key might be missing on the target OCP cluster.

+
+
+
Snapshots that are created during the migration in OpenStack are not deleted
+

The Migration Controller service does not automatically delete snapshots that are created during the migration for source virtual machines in OpenStack. Workaround: The snapshots can be removed manually on OpenStack.

+
+
+
oVirt snapshots are not deleted after a successful migration
+

The Migration Controller service does not delete snapshots automatically after a successful warm migration of an oVirt VM. Workaround: Snapshots can be removed manually from oVirt. (MTV-349)

+
+
+
Migration fails during precopy/cutover while a snapshot operation is executed on the source VM
+

Some warm migrations from oVirt might fail. When running a migration plan for warm migration of multiple VMs from oVirt, the migrations of some VMs might fail during the cutover stage. In that case, restart the migration plan and set the cutover time for the VM migrations that failed in the first run.

+
+
+

Warm migration from oVirt fails if a snapshot operation is performed on the source VM. If the user performs a snapshot operation on the source VM at the time when a migration snapshot is scheduled, the migration fails instead of waiting for the user’s snapshot operation to finish. (MTV-456)

+
+
+
Cannot schedule migrated VM with multiple disks to more than one storage class of type hostPath
+

When migrating a VM with multiple disks to more than one storage class of type hostPath, the VM might fail to be scheduled. Workaround: Use shared storage on the target OCP cluster.

+
+
+
Deleting migrated VM does not remove PVC and PV
+

When removing a VM that was migrated, its persistent volume claims (PVCs) and persistent volumes (PVs) are not deleted. Workaround: Remove the CDI importer pods and then remove the remaining PVCs and PVs. (MTV-492)

+
+
+
PVC deletion hangs after archiving and deleting migration plan
+

When a migration fails, its PVCs and PVs are not deleted as expected when its migration plan is archived and deleted. Workaround: Remove the CDI importer pods and then remove the remaining PVCs and PVs. (MTV-493)

+
+
+
VM with multiple disks may boot from non-bootable disk after migration
+

A VM with multiple disks that was migrated might not be able to boot on the target OCP cluster. Workaround: Set the boot order appropriately to boot from the bootable disk. (MTV-433)

+
+
+
Non-supported guest operating systems in warm migrations
+

Warm migrations and migrations to remote OCP clusters from vSphere do not support all types of guest operating systems that are supported in cold migrations to the local OCP cluster. This is a consequence of using RHEL 8 in the former case and RHEL 9 in the latter case.
+See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 for the list of supported guest operating systems.

+
+
+
VMs from vSphere with RHEL 9 guest operating system may start with network interfaces that are down
+

When migrating VMs that are installed with RHEL 9 as guest operating system from vSphere, their network interfaces could be disabled when they start in OpenShift Virtualization. (MTV-491)

+
+
+
Upgrade from 2.4.0 fails
+

When upgrading from MTV 2.4.0 to a later version, the operation fails with an error that says the field 'spec.selector' of deployment forklift-controller is immutable. Workaround: remove the custom resource forklift-controller of type ForkliftController from the installed namespace, and recreate it. The user needs to refresh the OCP Console once the forklift-console-plugin pod runs to load the upgraded Forklift web console. (MTV-518)
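A sketch of the workaround, assuming the default CR name forklift-controller and an installation namespace of openshift-mtv:

    # Remove the ForkliftController CR (name and namespace are assumptions)
    oc delete forkliftcontroller forklift-controller -n openshift-mtv
    # Recreate it from a saved manifest
    oc apply -f forklift-controller.yaml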

+
+
+
+
+

Resolved issues

+
+
+

This release has the following resolved issues:

+
+
+
Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack)
+

A flaw was found in handling multiplexed streams in the HTTP/2 protocol. In previous releases of MTV, the HTTP/2 protocol allowed a denial of service (server resource consumption) because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection, which resulted in a denial of service due to server resource consumption.

+
+
+

This issue has been resolved in MTV 2.4.3 and 2.5.2. It is advised to update to one of these versions of MTV or later.

+
+ +
+
Improve invalid/conflicting VM name handling
+

The automatic renaming of VMs during migration to conform to RFC 1123 has been improved. This feature, which was introduced in 2.3.4, is enhanced to cover more special cases. (MTV-212)

+
+
+
Prevent locking user accounts due to incorrect credentials
+

If a user specifies an incorrect password for an oVirt provider, the user account is no longer locked in oVirt. An error is returned when the oVirt Manager is accessible while the provider is being added. If the oVirt Manager is inaccessible, the provider is added, but no further connection attempt is made after failing due to incorrect credentials. (MTV-324)

+
+
+
Users without cluster-admin role can create new providers
+

Previously, the cluster-admin role was required to browse and create providers. In this release, users with sufficient permissions on MTV resources (providers, plans, migrations, NetworkMaps, StorageMaps, hooks) can operate MTV without cluster-admin permissions. (MTV-334)

+
+
+
Convert i440fx to q35
+

Migration of virtual machines with i440fx chipset is now supported. The chipset is converted to q35 during the migration. (MTV-430)

+
+
+
Preserve the UUID setting in SMBIOS for a VM that is migrated from oVirt
+

The Universal Unique ID (UUID) number within the System Management BIOS (SMBIOS) no longer changes for VMs that are migrated from oVirt. This enhancement enables applications that operate within the guest operating system and rely on this setting, such as for licensing purposes, to operate on the target OCP cluster in a manner similar to that of oVirt. (MTV-597)

+
+
+
Do not expose password for oVirt in error messages
+

Previously, the password that was specified for oVirt manager appeared in error messages that were displayed in the web console and logs when failing to connect to oVirt. In this release, error messages that are generated when failing to connect to oVirt do not reveal the password for oVirt manager.

+
+
+
QEMU guest agent is now installed on migrated VMs
+

The QEMU guest agent is installed on VMs during cold migration from vSphere. (BZ#2018062)

+
+
+
diff --git a/documentation/doc-Release_notes/modules/rn-2.5/index.html b/documentation/doc-Release_notes/modules/rn-2.5/index.html
new file mode 100644
index 00000000000..89dd1136125
--- /dev/null
+++ b/documentation/doc-Release_notes/modules/rn-2.5/index.html
@@ -0,0 +1,464 @@
+

Forklift 2.5

+
+
+
+

You can use Forklift to migrate virtual machines from the following source providers to KubeVirt destination providers:

+
+
+
  • VMware vSphere
  • oVirt
  • {osp}
  • Open Virtual Appliances (OVAs) that were created by VMware vSphere
  • Remote KubeVirt clusters
+
+
+

The release notes describe technical changes, new features and enhancements, and known issues for Forklift.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Migration from OpenStack moves to being a fully supported feature
+

In this version of Forklift, migration using OpenStack source providers graduated from a Technology Preview feature to a fully supported feature.

+
+
+
Disabling FIPS
+

Forklift enables migrations from vSphere source providers by not enforcing the Extended Master Secret (EMS) extension. This enables migrating from all vSphere versions that Forklift supports, including migrations that do not meet 2023 FIPS requirements.

+
+
+
Integration of the create and update provider user interface
+

The user interface of the create and update providers now aligns with the look and feel of the OKD web console and displays up-to-date data.

+
+
+
Standalone UI
+

The old UI of Forklift 2.3 can no longer be enabled by setting feature_ui: true in ForkliftController.

+
+
+
Support deployment on {ocp-name} 4.15
+

Forklift 2.5.6 can be deployed on {ocp-name} 4.15 clusters.

+
+
+
+
+

New features and enhancements

+
+
+

This release has the following features and improvements:

+
+
+
Migration of OVA files from VMware vSphere
+

In Forklift 2.5, you can migrate using Open Virtual Appliance (OVA) files that were created by VMware vSphere as source providers. (MTV-336)

+
+
+ + + + + +
+
Note
+
+
+

Migration of OVA files that were not created by VMware vSphere but are compatible with vSphere might succeed. However, migration of such files is not supported by Forklift. Forklift supports only OVA files created by VMware vSphere.

+
+
+
+
+


+
+
+
Migrating VMs between OKD clusters
+

In Forklift 2.5, you can now use a Red Hat KubeVirt provider as both a source provider and a destination provider. You can migrate VMs from the cluster that Forklift is deployed on to another cluster, or from a remote cluster to the cluster that Forklift is deployed on. (MTV-571)

+
+
+
Migration of VMs with direct LUNs from RHV
+

During the migration from oVirt, direct logical unit (LUN) disks are detached from the source virtual machines and attached to the target virtual machines. Note that this mechanism does not yet work for Fibre Channel. (MTV-329)

+
+
+
Additional authentication methods for OpenStack
+

In addition to standard password authentication, Forklift supports the following authentication methods: token authentication and application credential authentication. (MTV-539)

+
+
+
Validation rules for OpenStack
+

The validation service includes default validation rules for virtual machines from OpenStack. (MTV-508)

+
+
+
VDDK is now optional for VMware vSphere providers
+

You can now create the VMware vSphere source provider without specifying a VMware Virtual Disk Development Kit (VDDK) init image. However, it is strongly recommended that you create a VDDK init image to accelerate migrations.

+
+
+
Deployment on OKE enabled
+

In Forklift 2.5.3, deployment on {ocp-name} Kubernetes Engine (OKE) has been enabled. For more information, see About {ocp-name} Kubernetes Engine. (MTV-803)

+
+
+
Migration of VMs to destination storage classes with encrypted RBD now supported
+

In Forklift 2.5.4, migration of VMs to destination storage classes that have encrypted RADOS Block Devices (RBD) volumes is now supported.

+
+
+

To make use of this new feature, set the value of the parameter controller_block_overhead to 1Gi, following the procedure in Configuring the MTV Operator. (MTV-851)
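A minimal sketch of the corresponding ForkliftController spec fragment, using the parameter named above:

    spec:
      controller_block_overhead: 1Gi   # block volume overhead, per the procedure referenced above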

+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+
Deleting a migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Migration of virtual machines with encrypted partitions fails during conversion
+

vSphere only: Migrations from oVirt and OpenStack do not fail, but the encryption key may be missing on the target OKD cluster.

+
+
+
Migration fails during precopy/cutover while performing a snapshot operation on the source VM
+

Warm migration from oVirt fails if a snapshot operation is triggered and running on the source VM at the same time as the migration is scheduled. The migration does not wait for the snapshot operation to finish. (MTV-456)

+
+
+
Unable to schedule migrated VM with multiple disks to more than one storage class of type hostPath
+

When migrating a VM with multiple disks to more than one storage class of type hostPath, the VM might fail to be scheduled. Workaround: Use shared storage on the target OKD cluster.

+
+
+
Non-supported guest operating systems in warm migrations
+

Warm migrations and migrations to remote OKD clusters from vSphere do not support all types of guest operating systems that are supported in cold migrations to the local OKD cluster. This is a consequence of using RHEL 8 in the former case and RHEL 9 in the latter case.
+See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 for the list of supported guest operating systems.

+
+
+
VMs from vSphere with RHEL 9 guest operating system can start with network interfaces that are down
+

When migrating VMs that are installed with RHEL 9 as guest operating system from vSphere, the network interfaces of the VMs could be disabled when they start in {ocp-name} Virtualization. (MTV-491)

+
+
+
Import OVA: ConnectionTestFailed message appears when adding OVA provider
+

When adding an OVA provider, the error message ConnectionTestFailed can appear, although the provider is created successfully. If the message does not disappear after a few minutes and the provider status does not move to Ready, this means that the ova server pod creation has failed. (MTV-671)

+
+
+
Left over ovirtvolumepopulator from failed migration causes plan to stay indefinitely in CopyDisks phase
+

An outdated ovirtvolumepopulator in the namespace, left over from an earlier failed migration, stops a new plan of the same VM when it transitions to the CopyDisks phase. The plan remains in that phase indefinitely. (MTV-929)

+
+
+
Unclear error message when Forklift fails to build a PVC
+

The migration fails to build the Persistent Volume Claim (PVC) if the destination storage class does not have a configured storage profile. The forklift-controller raises an error message without a clear reason for failing to create a PVC. (MTV-928)

+
+
+

For a complete list of all known issues in this release, see the list of Known Issues in Jira.

+
+
+
+
+

Resolved issues

+
+
+

This release has the following resolved issues:

+
+
+
Flaw was found in jsrsasign package which is vulnerable to Observable Discrepancy
+

Versions of the package jsrsasign before 11.0.0, used in earlier releases of Forklift, are vulnerable to Observable Discrepancy in the RSA PKCS1.5 or RSA-OAEP decryption process. This discrepancy means an attacker could decrypt ciphertexts by exploiting this vulnerability. However, exploiting this vulnerability requires the attacker to have access to a large number of ciphertexts encrypted with the same key. This issue has been resolved in Forklift 2.5.5 by upgrading the package jsrsasign to version 11.0.0.

+
+
+

For more information, see CVE-2024-21484.

+
+
+
Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack)
+

A flaw was found in handling multiplexed streams in the HTTP/2 protocol. In previous releases of Forklift, the HTTP/2 protocol allowed a denial of service (server resource consumption) because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection, which resulted in a denial of service due to server resource consumption.

+
+
+

This issue has been resolved in Forklift 2.5.2. It is advised to update to this version of MTV or later.

+
+ +
+
Gin Web Framework does not properly sanitize filename parameter of Context.FileAttachment function
+

A flaw was found in the Gin-Gonic Gin Web Framework, used by Forklift. The filename parameter of the Context.FileAttachment function was not properly sanitized. This flaw in the package could allow a remote attacker to bypass security restrictions caused by improper input validation by the filename parameter of the Context.FileAttachment function. A maliciously created filename could cause the Content-Disposition header to be sent with an unexpected filename value, or otherwise modify the Content-Disposition header.

+
+
+

This issue has been resolved in Forklift 2.5.2. It is advised to update to this version of Forklift or later.

+
+ +
+
CVE-2023-26144: mtv-console-plugin-container: graphql: Insufficient checks in the OverlappingFieldsCanBeMergedRule.ts
+

A flaw was found in the package GraphQL from 16.3.0 and before 16.8.1. This flaw means Forklift versions before Forklift 2.5.2 are vulnerable to Denial of Service (DoS) due to insufficient checks in the OverlappingFieldsCanBeMergedRule.ts file when parsing large queries. This issue may allow an attacker to degrade system performance. (MTV-712)

+
+
+

This issue has been resolved in Forklift 2.5.2. It is advised to update to this version of Forklift or later.

+
+
+

For more information, see CVE-2023-26144.

+
+
+
CVE-2023-45142: Memory leak found in the otelhttp handler of open-telemetry
+

A flaw was found in otelhttp handler of OpenTelemetry-Go. This flaw means Forklift versions before Forklift 2.5.3 are vulnerable to a memory leak caused by http.user_agent and http.method having unbound cardinality, which could allow a remote, unauthenticated attacker to exhaust the server’s memory by sending many malicious requests, affecting the availability. (MTV-795)

+
+
+

This issue has been resolved in Forklift 2.5.3. It is advised to update to this version of Forklift or later.

+
+
+

For more information, see CVE-2023-45142.

+
+
+
CVE-2023-39322: QUIC connections do not set an upper bound on the amount of data buffered when reading post-handshake messages
+

A flaw was found in Golang. This flaw means Forklift versions before Forklift 2.5.3 are vulnerable to QUIC connections not setting an upper bound on the amount of data buffered when reading post-handshake messages, allowing a malicious QUIC connection to cause unbounded memory growth. With the fix, connections now consistently reject messages larger than 65KiB in size. (MTV-708)

+
+
+

This issue has been resolved in Forklift 2.5.3. It is advised to update to this version of Forklift or later.

+
+
+

For more information, see CVE-2023-39322.

+
+
+
CVE-2023-39321: Processing an incomplete post-handshake message for a QUIC connection can cause a panic
+

A flaw was found in Golang. This flaw means Forklift versions before Forklift 2.5.3 are vulnerable to processing an incomplete post-handshake message for a QUIC connection, which causes a panic. (MTV-693)

+
+
+

This issue has been resolved in Forklift 2.5.3. It is advised to update to this version of Forklift or later.

+
+
+

For more information, see CVE-2023-39321.

+
+
+
CVE-2023-39319: Flaw in html/template package
+

A flaw was found in the Golang html/template package used in Forklift. This flaw means Forklift versions before Forklift 2.5.3 are vulnerable, as the html/template package did not properly handle occurrences of <script, <!--, and </script within JavaScript literals in <script> contexts. This flaw could cause the template parser to improperly consider script contexts to be terminated early, causing actions to be improperly escaped, which could be leveraged to perform an XSS attack. (MTV-693)

+
+
+

This issue has been resolved in Forklift 2.5.3. It is advised to update to this version of Forklift or later.

+
+
+

For more information, see CVE-2023-39319.

+
+
+
CVE-2023-39318: Flaw in html/template package
+

A flaw was found in the Golang html/template package used in Forklift. This flaw means Forklift versions before Forklift 2.5.3 are vulnerable, as the html/template package did not properly handle HTML-like "<!--" and "-->" comment tokens, nor hashbang "#!" comment tokens. This flaw could cause the template parser to improperly interpret the contents of <script> contexts, causing actions to be improperly escaped, which could be leveraged to perform an XSS attack. (MTV-693)

+
+
+

This issue has been resolved in Forklift 2.5.3. It is advised to update to this version of Forklift or later.

+
+
+

For more information, see CVE-2023-39318.

+
+
+
Logs archive file downloaded from UI includes logs related to deleted migration plan/VM
+

In earlier releases of Forklift, the log files downloaded from the UI could contain logs that are related to an earlier migration plan. (MTV-783)

+
+
+

This issue has been resolved in Forklift 2.5.3.

+
+
+
Extending a VM disk in RHV is not reflected in the MTV inventory
+

In earlier releases of Forklift, the size of disks that were extended in RHV was not adequately monitored. This resulted in the inability to migrate virtual machines with extended disks from a RHV provider. (MTV-830)

+
+
+

This issue has been resolved in Forklift 2.5.3.

+
+
+
Filesystem overhead configurable
+

In earlier releases of Forklift, the filesystem overhead for new persistent volumes was hard-coded to 10%. The overhead was insufficient for certain filesystem types, resulting in failures during cold migrations from oVirt and OSP to the cluster where Forklift is deployed. In other filesystem types, the hard-coded overhead was too high, resulting in excessive storage consumption.

+
+
+

In Forklift 2.5.3, the filesystem overhead can be configured, as it is no longer hard-coded. If your migration allocates persistent volumes without CDI, you can adjust the file system overhead. You adjust the file system overhead by adding the following label and value to the spec portion of the forklift-controller CR:

+
+
+
+
spec:
  controller_filesystem_overhead: <percentage> (1)

  1. The percentage of overhead. If this label is not added, the default value of 10% is used. This setting is valid only if the storageclass is filesystem. (MTV-699)
  2. +
+
+
+
Ensure up-to-date data is displayed in the create and update provider forms
+

In earlier releases of Forklift, the create and update provider forms could have presented stale data.

+
+
+

This issue is resolved in Forklift 2.5: the new create and update provider forms display up-to-date properties of the provider. (MTV-603)

+
+
+
Snapshots that are created during a migration in OpenStack are not deleted
+

In earlier releases of Forklift, the Migration Controller service did not delete snapshots that were created during a migration of source virtual machines in OpenStack automatically.

+
+
+

This issue is resolved in Forklift 2.5: all the snapshots created during the migration are removed after the migration has been completed. (MTV-620)

+
+
+
oVirt snapshots are not deleted after a successful migration
+

In earlier releases of Forklift, the Migration Controller service did not delete snapshots automatically after a successful warm migration of a VM from oVirt.

+
+
+

This issue is resolved in Forklift 2.5: the snapshots generated during the migration are removed after a successful migration, and the original snapshots are not removed after a successful migration. (MTV-349)

+
+
+
Warm migration fails when cutover conflicts with precopy
+

In earlier releases of Forklift, the cutover operation failed when it was triggered while precopy was being performed. The VM was locked in oVirt and therefore the ovirt-engine rejected the snapshot creation, or disk transfer, operation.

+
+
+

This issue is resolved in Forklift 2.5: the cutover operation is triggered, but it is not performed at that time because the VM is locked. Once the precopy operation completes, the cutover operation is triggered. (MTV-686)

+
+
+
Warm migration fails when VM is locked
+

In earlier releases of Forklift, triggering a warm migration while there was an ongoing operation in oVirt that locked the VM caused the migration to fail because it could not trigger the snapshot creation.

+
+
+

This issue is resolved in Forklift 2.5: warm migration does not fail when an operation that locks the VM is performed in oVirt. The migration does not fail, but starts when the VM is unlocked. (MTV-687)

+
+
+
Deleting migrated VM does not remove PVC and PV
+

In earlier releases of Forklift, when removing a VM that was migrated, its persistent volume claims (PVCs) and persistent volumes (PVs) were not deleted.

+
+
+

This issue is resolved in Forklift 2.5: PVCs and PVs are deleted when deleting a migrated VM. (MTV-492)

+
+
+
PVC deletion hangs after archiving and deleting migration plan
+

In earlier releases of Forklift, when a migration failed, its PVCs and PVs were not deleted as expected when its migration plan was archived and deleted.

+
+
+

This issue is resolved in Forklift 2.5: PVCs are deleted when archiving and deleting a migration plan. (MTV-493)

+
+
+
VM with multiple disks can boot from a non-bootable disk after migration
+

In earlier releases of Forklift, a VM with multiple disks that was migrated might not have been able to boot on the target OKD cluster.

+
+
+

This issue is resolved in Forklift 2.5: a VM with multiple disks that is migrated can boot on the target OKD cluster. (MTV-433)

+
+
+
Transfer network not taken into account for cold migrations from vSphere
+

In Forklift releases 2.4.0-2.5.3, cold migrations from vSphere to the local cluster on which Forklift was deployed did not take a specified transfer network into account. This issue is resolved in Forklift 2.5.4. (MTV-846)

+
+
+
Fix migration of VMs with multi-boot guest operating system from vSphere
+

In Forklift 2.5.6, the virt-v2v arguments include --root first, which mitigates an issue with multi-boot VMs where the pod fails. This is a fix for a regression that was introduced in Forklift 2.4, in which the '--root' argument was dropped. (MTV-987)

+
+
+
Errors logged in populator pods are improved
+

In earlier releases of Forklift, populator pods were always restarted on failure. This made it difficult to gather the logs from the failed pods. In Forklift 2.5.3, the number of restarts of populator pods is limited to three times. On the third and final time, the populator pod remains in the fail status and its logs can then be easily gathered by must-gather and by forklift-controller to know this step has failed. (MTV-818)

+
+
+
npm IP package vulnerability
+

A vulnerability found in the Node.js Package Manager (npm) IP package can allow an attacker to obtain sensitive information and gain access to normally inaccessible resources. (MTV-941)

+
+
+

This issue has been resolved in Forklift 2.5.6.

+
+
+

For more information, see CVE-2023-42282.

+
+
+
Flaw was found in the Golang net/http/internal package
+

A flaw was found in the versions of the Golang net/http/internal package, that were used in earlier releases of Forklift. This flaw could allow a malicious user to send an HTTP request and cause the receiver to read more bytes from the network than are in the body (up to 1GiB), causing the receiver to fail reading the response, possibly leading to a Denial of Service (DoS). This issue has been resolved in Forklift 2.5.6.

+
+
+

For more information, see CVE-2023-39326.

+
+
+

For a complete list of all resolved issues in this release, see the list of Resolved Issues in Jira.

+
+
+
+
+

Upgrade notes

+
+
+

It is recommended to upgrade from Forklift 2.4.2 to Forklift 2.5.

+
+
+
Upgrade from 2.4.0 fails
+

When upgrading from MTV 2.4.0 to a later version, the operation fails with an error that says the field 'spec.selector' of deployment forklift-controller is immutable. Workaround: Remove the custom resource forklift-controller of type ForkliftController from the installed namespace, and recreate it. Refresh the OKD console once the forklift-console-plugin pod runs to load the upgraded Forklift web console. (MTV-518)

+
+
+
diff --git a/documentation/doc-Release_notes/modules/rn-2.6/index.html b/documentation/doc-Release_notes/modules/rn-2.6/index.html
new file mode 100644
index 00000000000..dec12ab819f
--- /dev/null
+++ b/documentation/doc-Release_notes/modules/rn-2.6/index.html
@@ -0,0 +1,511 @@
+

Forklift 2.6

+
+
+
+

You can use Forklift to migrate virtual machines from the following source providers to KubeVirt destination providers:

+
+
+
  • VMware vSphere
  • oVirt
  • {osp}
  • Open Virtual Appliances (OVAs) that were created by VMware vSphere
  • Remote KubeVirt clusters
+
+
+

The release notes describe technical changes, new features and enhancements, known issues, and resolved issues.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Simplified the creation of vSphere providers
+

In earlier releases of Forklift, users had to specify a fingerprint when creating a vSphere provider. This required users to retrieve the fingerprint from the server that vCenter runs on. Forklift no longer requires this fingerprint as an input, but rather computes it from the specified certificate in the case of a secure connection or automatically retrieves it from the server that runs vCenter/ESXi in the case of an insecure connection.

+
+
+
Redesigned the migration plan creation dialog
+

The user interface console has improved the process of creating a migration plan. The new migration plan dialog enables faster creation of migration plans.

+
+
+

It includes only the minimal settings that are required, while you can configure advanced settings separately. The new dialog also provides defaults for network and storage mappings, where applicable. The new dialog can also be invoked from the Provider > Virtual Machines tab, after selecting the virtual machines to migrate. It also better aligns with the user experience in the OCP console.

+
+
+
Virtual machine preferences have replaced {ocp-name} templates
+

The virtual machine preferences have replaced {ocp-name} templates. Forklift currently falls back to using {ocp-name} templates when a relevant preference is not available.

+
+
+

Custom mappings of guest operating system type to virtual machine preference can be configured by using config maps, in order to use custom virtual machine preferences or to support more guest operating system types.

+
+
+
Full support for migration from OVA
+

Migration from OVA moves from being a Technology Preview feature to being fully supported.

+
+
+
The VM is posted with its desired Running state
+

Forklift creates the VM with its desired Running state on the target provider, instead of creating the VM and then running it as an additional operation. (MTV-794)

+
+
+
The must-gather logs can now be loaded only by using the CLI
+

The Forklift web console can no longer download logs. With this update, you must download must-gather logs by using CLI commands. For more information, see Must Gather Operator.

+
+
+
Forklift no longer runs pvc-init pods when migrating from vSphere
+

Forklift no longer runs pvc-init pods during cold migration from a vSphere provider to the {ocp-name} cluster that Forklift is deployed on. However, in other flows where data volumes are used, they are set with the cdi.kubevirt.io/storage.bind.immediate.requested annotation, and CDI runs first-consume pods for storage classes with volume binding mode WaitForFirstConsumer.

+
+
+
+
+

New features and enhancements

+
+
+

This section provides features and enhancements introduced in Forklift 2.6.

+
+
+

New features and enhancements 2.6.3

+
+
Support for migrating LUKS-encrypted devices in migrations from vSphere
+

You can now perform cold migrations from a vSphere provider of VMs whose virtual disks are encrypted by Linux Unified Key Setup (LUKS). (MTV-831)

+
+
+
Specifying the primary disk when migrating from vSphere
+

You can now specify the primary disk when you migrate VMs from vSphere with more than one bootable disk. This avoids Forklift automatically attempting to convert the first bootable disk that it detects while it examines all the disks of a virtual machine. This feature is needed because the first bootable disk is not necessarily the disk that the VM is expected to boot from in KubeVirt. (MTV-1079)

+
+
+
Links to remote provider UIs
+

You can now remotely access the UI of a remote cluster when you create a source provider. For example, if the provider is a remote oVirt cluster, Forklift adds a link to the remote oVirt web console when you define the provider. This feature makes it easier for you to manage and debug a migration from remote clusters. (MTV-1054)

+
+
+
+

New features and enhancements 2.6.0

+
+
Migration from vSphere over a secure connection
+

You can now specify a CA certificate that can be used to authenticate the server that runs vCenter or ESXi, depending on the specified SDK endpoint of the vSphere provider. (MTV-530)

+
+
+
Migration to or from a remote {ocp-name} over a secure connection
+

You can now specify a CA certificate that can be used to authenticate the API server of a remote {ocp-name} cluster. (MTV-728)

+
+
+
Migration from an ESXi server without going through vCenter
+

Forklift enables the configuration of vSphere providers with the SDK of ESXi. You need to select ESXi as the Endpoint type of the vSphere provider and specify the URL of the SDK of the ESXi server. (MTV-514)

+
+
+
Migration of image-based VMs from {osp}
+

Forklift supports the migration of VMs that were created from images in {osp}. (MTV-644)

+
+
+
Migration of VMs with Fibre Channel LUNs from oVirt
+

Forklift supports migrations of VMs that are set with Fibre Channel (FC) LUNs from oVirt. As with other LUN disks, you need to ensure the {ocp-name} nodes have access to the FC LUNs. During the migrations, the FC LUNs are detached from the source VMs in oVirt and attached to the migrated VMs in {ocp-name}. (MTV-659)

+
+
+
Preserve CPU types of VMs that are migrated from oVirt
+

Forklift sets the CPU type of migrated VMs in {ocp-name} with their custom CPU type in oVirt. In addition, a new option was added to migration plans that are set with oVirt as a source provider to preserve the original CPU types of source VMs. When this option is selected, Forklift identifies the CPU type based on the cluster configuration and sets this CPU type for the migrated VMs, for which the source VMs are not set with a custom CPU. (MTV-547)

+
+
+
Validation for RHEL 6 guest operating system is now available when migrating VMs with RHEL 6 guest operating system
+

Red Hat Enterprise Linux (RHEL) 9 does not support RHEL 6 as a guest operating system. Therefore, RHEL 6 is not supported in {ocp-name} Virtualization. With this update, a validation for the RHEL 6 guest operating system was added. (MTV-413)

+
+
+
Automatic retrieval of CA certificates for the provider’s URL in the console
+

The ability to retrieve CA certificates, which was available in previous versions, has been restored. The vSphere Verify certificate option is in the add-provider dialog. This option was removed in the transition to the OKD console and has now been added to the console. This functionality is also available for oVirt, {osp}, and {ocp-name} providers now. (MTV-737)

+
+
+
Validation of a specified VDDK image
+

Forklift validates the availability of a VDDK image that is specified for a vSphere provider on the target {ocp-name} cluster as part of the validation of a migration plan. Forklift also checks whether the libvixDiskLib.so symbolic link (symlink) exists within the image. If the validation fails, the migration plan cannot be started. (MTV-618)

+
+
+
Add a warning and partial support for TPM
+

Forklift presents a warning when attempting to migrate a VM that is set with a TPM device from oVirt or vSphere. The migrated VM in {ocp-name} would be set with a TPM device but without the content of the TPM device on the source environment. (MTV-378)

+
+
+
Plans that failed to migrate VMs can now be edited
+

With this update, you can edit plans that have failed to migrate any VMs. Some plans fail or are canceled because of incorrect network and storage mappings. You can now edit these plans until they succeed. (MTV-779)

+
+
+
Validation rules are now available for OVA
+

The validation service includes default validation rules for virtual machines from the Open Virtual Appliance (OVA). (MTV-669)

+
+
+
+
+
+

Resolved issues

+
+
+

This release has the following resolved issues:

+
+
+

Resolved issues 2.6.7

+
+
Incorrect handling of quotes in ifcfg files
+

In earlier releases of Forklift, there was an issue with the incorrect handling of single and double quotes in interface configuration (ifcfg) files, which control the software interfaces for individual network devices. This issue has been resolved in Forklift 2.6.7, in order to cover additional IP configurations on Red Hat Enterprise Linux, CentOS, Rocky Linux and similar distributions. (MTV-1439)

+
+
+
Failure to preserve netplan based network configuration
+

In earlier releases of Forklift, there was an issue with the preservation of netplan-based network configurations. This issue has been resolved in Forklift 2.6.7, so that static IP configurations are preserved if netplan (netplan.io) is used by using the netplan configuration files to generate udev rules for known mac-address and ifname tuples. (MTV-1440)

+
+
+
Error messages are written into udev .rules files
+

In earlier releases of Forklift, there was an issue with the accidental leakage of error messages into udev .rules files. This issue has been resolved in Forklift 2.6.7, with a static IP persistence script added to the udev rule file. (MTV-1441)

+
+
+
+

Resolved issues 2.6.6

+
+
Runtime error: invalid memory address or nil pointer dereference
+

In earlier releases of Forklift, there was a runtime error of invalid memory address or nil pointer dereference, caused by an attempt to access the value of a pointer that was nil. This issue has been resolved in Forklift 2.6.6. (MTV-1353)

+
+
+
All Plan and Migration pods scheduled to same node causing the node to crash
+

In earlier releases of Forklift, the scheduler could place all migration pods on a single node. When this happened, the node ran out of resources. This issue has been resolved in Forklift 2.6.6. (MTV-1354)

+
+
+
Empty bearer token is sufficient for authentication
+

In earlier releases of Forklift, a vulnerability was found in the Forklift Controller: there was no verification of the authorization header, except to ensure that it used bearer authentication. Without an authorization header and a bearer token, a 401 error occurred, but the presence of any token value produced a 200 response with the requested information. This issue has been resolved in Forklift 2.6.6.

+
+
+

For more details, see (CVE-2024-8509).

+
+
+
+

Resolved issues 2.6.5

+
+
VMware Linux interface name changes during migration
+

In earlier releases of Forklift, during the migration of Rocky Linux 8, CentOS 7.2 and later, and Ubuntu 22 virtual machines (VMs) from VMware to OKD (OCP), the names of the network interfaces were modified, and the static IP configuration for the VM was no longer functional. This issue has been resolved for static IPs in Rocky Linux 8, CentOS 7.2 and later, and Ubuntu 22 in Forklift 2.6.5. (MTV-595)

+
+
+
+

Resolved issues 2.6.4

+
+
Disks and drives are offline after migrating Windows virtual machines from RHV or VMware to OCP
+

Windows (Windows 2022) VMs configured with multiple disks, which are Online before the migration, are Offline after a successful migration from oVirt or VMware to OKD, using Forklift. Only the C:\ primary disk is Online. This issue has been resolved for basic disks in Forklift 2.6.4. (MTV-1299)

+
+
+

For details of the known issue of dynamic disks being Offline in Windows Server 2022 after cold and warm migrations from vSphere to container-native virtualization (CNV) with Ceph RADOS Block Devices (RBD), using the storage class ocs-storagecluster-ceph-rbd, see (MTV-1344).

+
+
+
Preserve IP option for Windows does not preserve all settings
+

In earlier releases of Forklift, while migrating a Windows 2022 Server with a static IP address assigned, and selecting the Preserve static IPs option, after a successful Windows migration the VM started and the IP address was preserved, but the subnet mask, gateway, and DNS servers were not. This resulted in an incomplete migration, and the user was forced to log in locally from the console to fully configure the network. This issue has been resolved in Forklift 2.6.4. (MTV-1286)

+
+
+
qemu-guest-agent not being installed at first boot in Windows Server 2022
+

In Forklift 2.6.1, after a successful Windows Server 2022 guest migration, the qemu-guest-agent was not completely installed: the Windows scheduled task was created, but it was set to run 4 hours in the future instead of the intended 2 minutes. This issue has been resolved in Forklift 2.6.4. (MTV-1325)

+
+
+
+

Resolved issues 2.6.3

+
+
CVE-2024-24788: golang: net malformed DNS message can cause infinite loop
+

In earlier releases of Forklift, a flaw was discovered in the stdlib package of the Go programming language, which impacted previous versions of Forklift. This vulnerability primarily threatened web-facing applications and services that rely on Go for DNS queries. This issue has been resolved in Forklift 2.6.3.

+
+
+

For more details, see (CVE-2024-24788).

+
+
+
Migration scheduling does not take into account that virt-v2v copies disks sequentially (vSphere only)
+

In earlier releases of Forklift, there was a problem with the way Forklift interpreted the controller_max_vm_inflight setting for vSphere to schedule migrations. This issue has been resolved in Forklift 2.6.3. (MTV-1191)
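For context, the setting is exposed in the spec of the ForkliftController custom resource. A minimal sketch of changing it, assuming the default instance name forklift-controller in the openshift-mtv namespace (as used in the uninstall commands later in this document):

$ oc patch forkliftcontroller forklift-controller -n openshift-mtv \
  --type merge -p '{"spec": {"controller_max_vm_inflight": 20}}'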

+
+
+
Cold migrations fail after changing the ESXi network (vSphere only)
+

In earlier versions of Forklift, cold migrations from a vSphere provider with an ESXi SDK endpoint failed if any network was used except for the default network for disk transfers. This issue has been resolved in Forklift 2.6.3. (MTV-1180)

+
+
+
Warm migrations over an ESXi network are stuck in DiskTransfer state (vSphere only)
+

In earlier versions of Forklift, warm migrations over an ESXi network from a vSphere provider with a vCenter SDK endpoint were stuck in DiskTransfer state because Forklift was unable to locate image snapshots. This issue has been resolved in Forklift 2.6.3. (MTV-1161)

+
+
+
Leftover PVCs are in Lost state after cold migrations
+

In earlier versions of Forklift, after cold migrations, there were leftover PVCs that had a status of Lost instead of being deleted, even after the migration plan that created them was archived and deleted. Investigation showed that this was because importer pods were retained after copying, by default, rather than in only specific cases. This issue has been resolved in Forklift 2.6.3. (MTV-1095)

+
+
+
Guest operating system from vSphere might be missing (vSphere only)
+

In earlier versions of Forklift, some VMs that were imported from vSphere were not mapped to a template in OKD while other VMs, with the same guest operating system, were mapped to the corresponding template. Investigations indicated that this was because vSphere stopped reporting the operating system after not receiving updates from VMware tools for some time. This issue has been resolved in Forklift 2.6.3 by taking the value of the operating system from the output of the investigation that virt-v2v performs on the disks. (MTV-1046)

+
+
+
+

Resolved issues 2.6.2

+
+
CVE-2023-45288: Golang net/http, x/net/http2: unlimited number of CONTINUATION frames can cause a denial-of-service (DoS) attack
+

A flaw was discovered with the implementation of the HTTP/2 protocol in the Go programming language, which impacts previous versions of Forklift. There were insufficient limitations on the number of CONTINUATION frames sent within a single stream. An attacker could potentially exploit this to cause a denial-of-service (DoS) attack. This flaw has been resolved in Forklift 2.6.2.

+
+
+

For more details, see (CVE-2023-45288).

+
+
+
CVE-2024-24785: mtv-api-container: Golang html/template: errors returned from MarshalJSON methods may break template escaping
+

A flaw was found in the html/template Golang standard library package, which impacts previous versions of Forklift. If errors returned from MarshalJSON methods contain user-controlled data, they may be used to break the contextual auto-escaping behavior of the HTML/template package, allowing subsequent actions to inject unexpected content into the templates. This flaw has been resolved in Forklift 2.6.2.

+
+
+

For more details, see (CVE-2024-24785).

+
+
+
CVE-2024-24784: mtv-validation-container: Golang net/mail: comments in display names are incorrectly handled
+

A flaw was found in the net/mail Golang standard library package, which impacts previous versions of Forklift. The ParseAddressList function incorrectly handles comments, text in parentheses, and display names. As this is a misalignment with conforming address parsers, it can result in different trust decisions being made by programs using different parsers. This flaw has been resolved in Forklift 2.6.2.

+
+
+

For more details, see (CVE-2024-24784).

+
+
+
CVE-2024-24783: mtv-api-container: Golang crypto/x509: Verify panics on certificates with an unknown public key algorithm
+

A flaw was found in the crypto/x509 Golang standard library package, which impacts previous versions of Forklift. Verifying a certificate chain that contains a certificate with an unknown public key algorithm causes Certificate.Verify to panic. This affects all crypto/tls clients and servers that set Config.ClientAuth to VerifyClientCertIfGiven or RequireAndVerifyClientCert. The default behavior is for TLS servers to not verify client certificates. This flaw has been resolved in Forklift 2.6.2.

+
+
+

For more details, see (CVE-2024-24783).

+
+
+
CVE-2023-45290: mtv-api-container: Golang net/http memory exhaustion in Request.ParseMultipartForm
+

A flaw was found in the net/http Golang standard library package, which impacts previous versions of Forklift. When parsing a multipart form, either explicitly with Request.ParseMultipartForm or implicitly with Request.FormValue, Request.PostFormValue, or Request.FormFile, limits on the total size of the parsed form are not applied to the memory consumed while reading a single form line. This permits a maliciously crafted input containing long lines to cause the allocation of arbitrarily large amounts of memory, potentially leading to memory exhaustion. This flaw has been resolved in Forklift 2.6.2.

+
+
+

For more details, see (CVE-2023-45290).

+
+
+
ImageConversion does not run when target storage is set with WaitForFirstConsumer (WFFC)
+

In earlier releases of Forklift, migration of VMs failed because the migration was stuck in the AllocateDisks phase. As a result of being stuck, the migration did not progress, and PVCs were not bound. The root cause of the issue was that ImageConversion did not run when target storage was set for wait-for-first-consumer. The problem was resolved in Forklift 2.6.2. (MTV-1126)
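For context, wait-for-first-consumer refers to the volumeBindingMode of the target StorageClass, which delays binding a PVC until a pod consumes it. A minimal illustrative definition follows; the name and provisioner are placeholders:

$ cat <<'EOF' | oc apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-wffc                 # illustrative name
provisioner: kubernetes.io/aws-ebs   # placeholder provisioner
volumeBindingMode: WaitForFirstConsumer
EOF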

+
+
+
forklift-controller panics when importing VMs with direct LUNs
+

In earlier releases of Forklift, forklift-controller panicked when a user attempted to import VMs that had direct LUNs. The problem was resolved in Forklift 2.6.2. (MTV-1134)

+
+
+
+

Resolved issues 2.6.1

+
+
VMs with multiple disks that are migrated from vSphere and OVA files are not being fully copied
+

In Forklift 2.6.0, there was a problem in copying VMs with multiple disks from VMware vSphere and from OVA files. The migrations appeared to succeed, but all the disks were transferred to the same PV in the target environment, while the other target disks remained empty. In some cases, bootable disks were overwritten, so the VM could not boot. In other cases, data from the other disks was missing. The problem was resolved in Forklift 2.6.1. (MTV-1067)

+
+
+
Migrating VMs from one OKD cluster to another fails due to a timeout
+

In Forklift 2.6.0, migrations from one OKD cluster to another failed when the time to transfer the disks of a VM exceeded the time to live (TTL) of the Export API in {ocp-name}, which was set to 2 hours by default. The problem was resolved in Forklift 2.6.1 by setting the default TTL of the Export API to 12 hours, which greatly reduces the possibility of an expiration of the Export API. Additionally, you can increase or decrease the TTL setting as needed. (MTV-1052)
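The TTL applies to the export objects that back the transfer. As a hedged sketch only, assuming the KubeVirt VirtualMachineExport API and its spec.ttlDuration field, an export with a 12-hour TTL might look like the following; all names are illustrative:

$ cat <<'EOF' | oc apply -f -
apiVersion: export.kubevirt.io/v1alpha1
kind: VirtualMachineExport
metadata:
  name: example-export          # illustrative name
  namespace: example-ns         # illustrative namespace
spec:
  ttlDuration: 12h              # assumed field that controls expiration
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: example-vm            # illustrative VM name
EOF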

+
+
+
Forklift forklift-controller pod crashes when receiving a disk without a datastore
+

In earlier releases of Forklift, if a VM was configured with a disk that was on a datastore that was no longer available in vSphere at the time a migration was attempted, the forklift-controller crashed, rendering Forklift unusable. In Forklift 2.6.1, Forklift presents a critical validation for VMs with such disks, informing users of the problem, and the forklift-controller no longer crashes, although it cannot transfer the disk. (MTV-1029)

+
+
+
+

Resolved issues 2.6.0

+
+
Deleting an OVA provider automatically also deletes the PV
+

In earlier releases of Forklift, the PV was not removed when the OVA provider was deleted. This has been resolved in Forklift 2.6.0, and the PV is automatically deleted when the OVA provider is deleted. (MTV-848)

+
+
+
Fix for data being lost when migrating VMware VMs with snapshots
+

In earlier releases of Forklift, when migrating a VM that has a snapshot from VMware, the VM that was created in {ocp-name} Virtualization contained the data in the snapshot but not the latest data of the VM. This has been resolved in Forklift 2.6.0. (MTV-447)

+
+
+
Canceling and deleting a failed migration plan does not clean up the populate pods and PVC
+

In earlier releases of Forklift, when you canceled and deleted a failed migration plan after a PVC had been created and the populate pods had been spawned, the populate pods and PVC were not deleted. You had to delete the pods and PVC manually. This issue has been resolved in Forklift 2.6.0. (MTV-678)

+
+
+
OKD to OKD migrations require the cluster version to be 4.13 or later
+

In earlier releases of Forklift, when migrating from OKD to OKD, the version of the source provider cluster had to be OKD version 4.13 or later. This issue has been resolved in Forklift 2.6.0, with validation being shown when migrating from versions of {ocp-name} before 4.13. (MTV-734)

+
+
+
Multiple storage domains from RHV were always mapped to a single storage class
+

In earlier releases of Forklift, multiple disks from different storage domains were always mapped to a single storage class, regardless of the storage mapping that was configured. This issue has been resolved in Forklift 2.6.0. (MTV-1008)

+
+
+
Firmware detection by virt-v2v
+

In earlier releases of Forklift, a VM that was migrated from an OVA that did not include the firmware type in its OVF configuration was set with UEFI. This was incorrect for VMs that were configured with BIOS. This issue has been resolved in Forklift 2.6.0, as Forklift now consumes the firmware that is detected by virt-v2v during the conversion of the disks. (MTV-759)

+
+
+
Creating a host secret requires validation of the secret before creation of the host
+

In earlier releases of Forklift, when configuring a transfer network for vSphere hosts, the console plugin created the Host CR before creating its secret. The secret should be specified first in order to validate it before the Host CR is posted. This issue has been resolved in Forklift 2.6.0. (MTV-868)

+
+
+
When adding OVA provider a ConnectionTestFailed message appears
+

In earlier releases of Forklift, when adding an OVA provider, the error message ConnectionTestFailed instantly appeared, although the provider had been created successfully. This issue has been resolved in Forklift 2.6.0. (MTV-671)

+
+
+
RHV provider ConnectionTestSucceeded True response from the wrong URL
+

In earlier releases of Forklift, the ConnectionTestSucceeded condition was set to True even when the URL was different than the API endpoint for the RHV Manager. This issue has been resolved in Forklift 2.6.0. (MTV-740)

+
+
+
Migration does not fail when a vSphere Data Center is nested inside a folder
+

In earlier releases of Forklift, migrating a VM placed in a Data Center stored directly under /vcenter in vSphere succeeded. However, the migration failed when the Data Center was stored inside a folder. This issue has been resolved in Forklift 2.6.0. (MTV-796)

+
+
+
The OVA inventory watcher detects deleted files
+

The OVA inventory watcher detects file changes, including deleted files. Updates from the ova-provider-server pod are now sent every five minutes to the forklift-controller pod, which updates the inventory. (MTV-733)

+
+
+
Unclear error message when Forklift fails to build or create a PVC
+

In earlier releases of Forklift, the error logs lacked clear information to identify the reason for a failure to create a PV on a destination storage class that does not have a configured storage profile. This issue was resolved in Forklift 2.6.0. (MTV-928)

+
+
+
Plans stay indefinitely in the CopyDisks phase when there is an outdated ovirtvolumepopulator
+

In earlier releases of Forklift, an earlier failed migration could have left an outdated ovirtvolumepopulator. When starting a new plan for the same VM to the same project, the CreateDataVolumes phase did not create populator PVCs when transitioning to CopyDisks, causing the CopyDisks phase to stay indefinitely. This issue was resolved in Forklift 2.6.0. (MTV-929)

+
+
+

For a complete list of all resolved issues in this release, see the list of Resolved Issues in Jira.

+
+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+ + + + + +
+
Warning
+
+
Warm migration and remote migration flows are impacted by multiple bugs
+
+

Warm migration and remote migration flows are impacted by multiple bugs. It is strongly recommended to fall back to cold migration until this issue is resolved. (MTV-1366)

+
+
+
+
+
Migrating older Linux distributions from VMware to OKD, the name of the network interfaces changes
+

When migrating virtual machines (VMs) with older Linux distributions, such as CentOS 7.0 and 7.1, from VMware to OKD, the names of the network interfaces change, and the static IP configuration for the VM no longer functions. This issue is caused by RHEL 7.0 and 7.1 still requiring virtio-transitional. Workaround: Manually update the guest to RHEL 7.2 or update the VM specification post-migration to use transitional. (MTV-1382)

+
+
+
Dynamic disks are offline in Windows Server 2022 after migration from vSphere to CNV with ceph-rbd
+

The dynamic disks are Offline in Windows Server 2022 after cold and warm migrations from vSphere to container-native virtualization (CNV) with Ceph RADOS Block Devices (RBD), using the storage class ocs-storagecluster-ceph-rbd. (MTV-1344)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#22008846)

+
+
+
Migration of virtual machines with encrypted partitions fails during a conversion (vSphere only)
+

vSphere only: The migration of VMs with encrypted partitions fails during conversion. Migrations from oVirt and {osp} do not fail, but the encryption key might be missing on the target OKD cluster.

+
+
+
Migration fails during precopy/cutover while performing a snapshot operation on the source VM
+

Warm migration from oVirt fails if a snapshot operation is triggered and running on the source VM at the same time as the migration is scheduled. The migration does not wait for the snapshot operation to finish. (MTV-456)

+
+
+
Unable to schedule migrated VM with multiple disks to more than one storage class of type hostPath
+

When migrating a VM with multiple disks to more than one storage class of type hostPath, the VM might fail to be scheduled. Workaround: Use shared storage on the target OKD cluster.

+
+
+
Non-supported guest operating systems in warm migrations
+

Warm migrations and migrations to remote OKD clusters from vSphere do not support the same guest operating systems that are supported in cold migrations and migrations to the local OKD cluster. RHEL 8 and RHEL 9 might cause this limitation.

+
+ +
+
VMs from vSphere with RHEL 9 guest operating system can start with network interfaces that are down
+

When migrating VMs that are installed with RHEL 9 as a guest operating system from vSphere, the network interfaces of the VMs could be disabled when they start in {ocp-name} Virtualization. (MTV-491)

+
+
+
Migration of a VM with NVME disks from vSphere fails
+

When migrating a virtual machine (VM) with NVME disks from vSphere, the migration process fails, and the Web Console shows that the Convert image to kubevirt stage is running but did not finish successfully. (MTV-963)

+
+
+
Importing image-based VMs can fail
+

Migrating an image-based VM without the virtual_size field can fail on a block mode storage class. (MTV-946)

+
+
+
Deleting a migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)

+
+
+
Migrating VMs with independent persistent disks from VMware to OCP-V fails
+

Migrating VMs with independent persistent disks from VMware to OCP-V fails. (MTV-993)

+
+
+
Guest operating system from vSphere might be missing
+

When vSphere does not receive updates about the guest operating system from the VMware tools, it considers the information about the guest operating system to be outdated and ceases to report it. When this occurs, Forklift is unaware of the guest operating system of the VM and is unable to associate it with the appropriate virtual machine preference or {ocp-name} template. (MTV-1046)

+
+
+
Failure to migrate an image-based VM from {osp} to the default project
+

The migration process fails when migrating an image-based VM from {osp} to the default project. (MTV-964)

+
+
+

For a complete list of all known issues in this release, see the list of Known Issues in Jira.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/rn-2.7/index.html b/documentation/doc-Release_notes/modules/rn-2.7/index.html new file mode 100644 index 00000000000..3f3e6c1af42 --- /dev/null +++ b/documentation/doc-Release_notes/modules/rn-2.7/index.html @@ -0,0 +1,91 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.7

+
+

You can use Forklift to migrate virtual machines from the following source providers to KubeVirt destination providers:

+
+
+
    +
  • +

    VMware vSphere versions 6, 7, and 8

    +
  • +
  • +

    oVirt (oVirt)

    +
  • +
  • +

    {osp}

    +
  • +
  • +

    Open Virtual Appliances (OVAs) that were created by VMware vSphere

    +
  • +
  • +

    Remote KubeVirt clusters

    +
  • +
+
+
+

The release notes describe technical changes, new features and enhancements, known issues, and resolved issues.

+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/rn-27-resolved-issues/index.html b/documentation/doc-Release_notes/modules/rn-27-resolved-issues/index.html new file mode 100644 index 00000000000..bb1bd0bb729 --- /dev/null +++ b/documentation/doc-Release_notes/modules/rn-27-resolved-issues/index.html @@ -0,0 +1,168 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Resolved issues

+
+
+
+

Forklift 2.7 has the following resolved issues:

+
+
+
+
+

Resolved issues 2.7.3

+
+
+
Migration plan does not fail when conversion pod fails
+

In earlier releases of Forklift, when running the virt-v2v guest conversion, the migration plan did not fail as expected when the conversion pod failed. This issue has been resolved in Forklift 2.7.3. (MTV-1569)

+
+
+
Large number of VMs in the inventory can cause the inventory controller to panic
+

In earlier releases of Forklift, having a large number of virtual machines (VMs) in the inventory could cause the inventory controller to panic and return a concurrent write to websocket connection warning. The issue was caused by concurrent writes to the WebSocket connection and has been addressed by adding a lock, so the goroutine waits before sending the response from the server. This issue has been resolved in Forklift 2.7.3. (MTV-1220)

+
+
+
VM selection disappears when selecting multiple VMs in the Migration Plan
+

In earlier releases of Forklift, the VM selection checkbox disappeared after selecting multiple VMs in the Migration Plan. This issue has been resolved in Forklift 2.7.3. (MTV-1546)

+
+
+
forklift-controller crashing during OVA plan migration
+

In earlier releases of Forklift, the forklift-controller crashed during an OVA plan migration, returning a runtime error: invalid memory address or nil pointer dereference panic. This issue has been resolved in Forklift 2.7.3. (MTV-1577)

+
+
+
+
+

Resolved issues 2.7.2

+
+
+
VMNetworksNotMapped error occurs after creating a plan from the UI with the source provider set to KubeVirt
+

In earlier releases of Forklift, after creating a plan with a KubeVirt source provider, the Migration Plan failed with the error The plan is not ready - VMNetworksNotMapped. This issue has been resolved in Forklift 2.7.2. (MTV-1201)

+
+
+
Migration Plan for KubeVirt to KubeVirt missing the source namespace causing VMNetworkNotMapped error
+

In earlier releases of Forklift, when creating a Migration Plan for a KubeVirt to KubeVirt migration using the Plan Creation Form, the generated network map was missing the source namespace, which caused a VMNetworkNotMapped error on the plan. This issue has been resolved in Forklift 2.7.2. (MTV-1297)

+
+
+
DV, PVC, and PV are not cleaned up and removed if the migration plan is Archived and Deleted
+

In earlier releases of Forklift, the DataVolume (DV), PersistentVolumeClaim (PVC), and PersistentVolume (PV) continued to exist after the migration plan was archived and deleted. This issue has been resolved in Forklift 2.7.2. (MTV-1477)

+
+
+
Other migrations are halted from starting as the scheduler is waiting for the complete VM to get transferred
+

In earlier releases of Forklift, when warm migrating a virtual machine (VM) that had several disks, the scheduler was halted until all of the VM's disks finished transferring, so other migrations could not start until the complete VM was migrated. This issue has been resolved in Forklift 2.7.2. (MTV-1537)

+
+
+
Warm migration is not functioning as expected
+

In earlier releases of Forklift, warm migration did not function as expected: when running a warm migration with more VMs than the MaxInFlight disk setting allowed, the VMs over this number did not start migrating until the cutover. This issue has been resolved in Forklift 2.7.2. (MTV-1543)

+
+
+
Migration hanging due to error: virt-v2v: error: -i libvirt: expecting a libvirt guest name
+

In earlier releases of Forklift, when attempting to migrate a VMware VM with a non-compliant Kubernetes name, the OpenShift console returned a warning that the VM would be renamed. However, after the Migration Plan started, it hung because the migration pod was in an Error state. This issue has been resolved in Forklift 2.7.2. (MTV-1555)

+
+
+
VMs are not migrated if they have more disks than MAX_VM_INFLIGHT
+

In earlier releases of Forklift, when migrating a VM using warm migration, if the VM had more disks than MAX_VM_INFLIGHT, the VM was not scheduled and the migration did not start. This issue has been resolved in Forklift 2.7.2. (MTV-1573)

+
+
+
Migration Plan returns an error even when Changed Block Tracking (CBT) is enabled
+

In earlier releases of Forklift, when running a VM in VMware with the CBT flag enabled while the VM was running, by adding both the ctkEnabled=TRUE and scsi0:0.ctkEnabled=TRUE parameters, an error message, Danger alert: The plan is not ready - VMMissingChangedBlockTracking, was returned, and the migration plan was prevented from working. This issue has been resolved in Forklift 2.7.2. (MTV-1576)
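For reference, the two parameters named above appear in the VM's .vmx configuration in the following form; the scsi0:0 disk identifier varies per VM:

ctkEnabled = "TRUE"
scsi0:0.ctkEnabled = "TRUE"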

+
+
+
+
+

Resolved issues 2.7.0

+
+
+
Change . to - in the names of VMs that are migrated
+

In earlier releases of Forklift, if the name of a virtual machine (VM) contained a period (.), it was changed to a dash (-) when the VM was migrated. This issue has been resolved in Forklift 2.7.0. (MTV-1292)

+
+
+
Status condition indicating a failed mapping resource in a plan is not added to the plan
+

In earlier releases of Forklift, a status condition indicating a failed mapping resource of a plan was not added to the plan. This issue has been resolved in Forklift 2.7.0, with a status condition indicating the failed mapping being added. (MTV-1461)

+
+
+
ifcfg files with HWaddr cause the NIC name to change
+

In earlier releases of Forklift, interface configuration (ifcfg) files with a hardware address (HWaddr) of the Ethernet interface caused the name of the network interface controller (NIC) to change. This issue has been resolved in Forklift 2.7.0. (MTV-1463)

+
+
+
Import fails with special characters in VMX file
+

In earlier releases of Forklift, imports failed when there were special characters in the parameters of the VMX file. This issue has been resolved in Forklift 2.7.0. (MTV-1472)

+
+
+
Observed invalid memory address or nil pointer dereference panic
+

In earlier releases of Forklift, an invalid memory address or nil pointer dereference panic was observed, which was caused by a refactor and could be triggered when there was a problem with the inventory pod. This issue has been resolved in Forklift 2.7.0. (MTV-1482)

+
+
+
Static IPv4 changed after warm migrating win2022/2019 VMs
+

In earlier releases of Forklift, the static Internet Protocol version 4 (IPv4) address was changed after a warm migration of Windows Server 2022 and Windows Server 2019 VMs. This issue has been resolved in Forklift 2.7.0. (MTV-1491)

+
+
+
Warm migration is missing arguments
+

In earlier releases of Forklift, virt-v2v-in-place for the warm migration was missing arguments that were available in virt-v2v for the cold migration. This issue has been resolved in Forklift 2.7.0. (MTV-1495)

+
+
+
Default gateway settings changed after migrating Windows Server 2022 VMs with preserve static IPs
+

In earlier releases of Forklift, the default gateway settings were changed after migrating Windows Server 2022 VMs with the preserve static IPs setting. This issue has been resolved in Forklift 2.7.0. (MTV-1497)

+
+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/running-migration-plan/index.html b/documentation/doc-Release_notes/modules/running-migration-plan/index.html new file mode 100644 index 00000000000..b1a854bc519 --- /dev/null +++ b/documentation/doc-Release_notes/modules/running-migration-plan/index.html @@ -0,0 +1,135 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Running a migration plan

+
+

You can run a migration plan and view its progress in the OKD web console.

+
+
+
Prerequisites
+
    +
  • +

    Valid migration plan.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

In the OKD web console, click Migration → Plans for virtualization.

    +
    +

    The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, and the description of each plan.

    +
    +
  2. +
  3. +

    Click Start beside a migration plan to start the migration.

    +
  4. +
  5. +

    Click Start in the confirmation window that opens.

    +
    +

The Migration details by VM screen opens, displaying the migration’s progress.

    +
    +
    +

    Warm migration only:

    +
    +
    +
      +
    • +

      The precopy stage starts.

      +
    • +
    • +

      Click Cutover to complete the migration.

      +
    • +
    +
    +
  6. +
  7. +

    If the migration fails:

    +
    +
      +
    1. +

      Click Get logs to retrieve the migration logs.

      +
    2. +
    3. +

      Click Get logs in the confirmation window that opens.

      +
    4. +
    5. +

      Wait until Get logs changes to Download logs and then click the button to download the logs.

      +
    6. +
    +
    +
  8. +
  9. +

Click a migration’s Status, whether it failed, succeeded, or is still ongoing, to view the details of the migration.

    +
    +

    The Migration details by VM screen opens, displaying the start and end times of the migration, the amount of data copied, and a progress pipeline for each VM being migrated.

    +
    +
  10. +
  11. +

    Expand an individual VM to view its steps and the elapsed time and state of each step.

    +
  12. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/selecting-migration-network-for-virt-provider/index.html b/documentation/doc-Release_notes/modules/selecting-migration-network-for-virt-provider/index.html new file mode 100644 index 00000000000..3dbc80c07a6 --- /dev/null +++ b/documentation/doc-Release_notes/modules/selecting-migration-network-for-virt-provider/index.html @@ -0,0 +1,100 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Selecting a migration network for a KubeVirt provider

+
+

You can select a default migration network for a KubeVirt provider in the OKD web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.

+
+
+

If you do not select a migration network, the default migration network is the pod network, which might not be optimal for disk transfer.

+
+
+ + + + + +
+
Note
+
+
+

You can override the default migration network of the provider by selecting a different network when you create a migration plan.

+
+
+
+
+
Procedure
+
    +
  1. +

In the OKD web console, click Migration → Providers for virtualization.

    +
  2. +
  3. +

    On the right side of the provider, select Select migration network from the {kebab}.

    +
  4. +
  5. +

    Select a network from the list of available networks and click Select.

    +
  6. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/selecting-migration-network-for-vmware-source-provider/index.html b/documentation/doc-Release_notes/modules/selecting-migration-network-for-vmware-source-provider/index.html new file mode 100644 index 00000000000..5e8b8f18a43 --- /dev/null +++ b/documentation/doc-Release_notes/modules/selecting-migration-network-for-vmware-source-provider/index.html @@ -0,0 +1,142 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Selecting a migration network for a VMware source provider

+
+

You can select a migration network in the OKD web console for a source provider to reduce risk to the source environment and to improve performance.

+
+
+

Using the default network for migration can result in poor performance because the network might not have sufficient bandwidth. This situation can have a negative effect on the source platform because the disk transfer operation might saturate the network.

+
+
+

Note: You can also control the network from which disks are transferred from a host by using the Network File Copy (NFC) service in vSphere.

+
+
+
Prerequisites
+
    +
  • +

The migration network must have sufficient throughput for disk transfer, with a minimum speed of 10 Gbps.

    +
  • +
  • +

    The migration network must be accessible to the KubeVirt nodes through the default gateway.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    The source virtual disks are copied by a pod that is connected to the pod network of the target namespace.

    +
    +
    +
    +
  • +
  • +

The migration network should have jumbo frames enabled. A quick verification sketch follows this list.

    +
  • +
+
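One hedged way to verify jumbo frames end to end is to send an unfragmentable payload sized for a 9000-byte MTU (8972 bytes of data plus 28 bytes of ICMP and IP headers) from a host on the migration network; the flags are Linux ping syntax and the target host name is a placeholder:

$ ping -M do -s 8972 -c 3 esxi-host.example.com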
+
+
Procedure
+
    +
  1. +

In the OKD web console, click Migration → Providers for virtualization.

    +
  2. +
  3. +

    Click the host number in the Hosts column beside a provider to view a list of hosts.

    +
  4. +
  5. +

    Select one or more hosts and click Select migration network.

    +
  6. +
  7. +

    Specify the following fields:

    +
    +
      +
    • +

      Network: Network name

      +
    • +
    • +

      ESXi host admin username: For example, root

      +
    • +
    • +

      ESXi host admin password: Password

      +
    • +
    +
    +
  8. +
  9. +

    Click Save.

    +
  10. +
  11. +

    Verify that the status of each host is Ready.

    +
    +

    If a host status is not Ready, the host might be unreachable on the migration network or the credentials might be incorrect. You can modify the host configuration and save the changes.

    +
    +
  12. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/selecting-migration-network/index.html b/documentation/doc-Release_notes/modules/selecting-migration-network/index.html new file mode 100644 index 00000000000..c62b1f69eeb --- /dev/null +++ b/documentation/doc-Release_notes/modules/selecting-migration-network/index.html @@ -0,0 +1,118 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Selecting a migration network for a source provider

+
+

You can select a migration network for a source provider in the Forklift web console for improved performance.

+
+
+

If a source network is not optimal for migration, a Warning icon is displayed beside the host number in the Hosts column of the provider list.

+
+
+
Prerequisites
+

The migration network has the following prerequisites:

+
+
+
    +
  • +

    Minimum speed of 10 Gbps.

    +
  • +
  • +

    Accessible to the OpenShift nodes through the default gateway. The source disks are copied by a pod that is connected to the pod network of the target namespace.

    +
  • +
  • +

    Jumbo frames enabled.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Click Providers.

    +
  2. +
  3. +

    Click the host number of a provider to view the host list and network details.

    +
  4. +
  5. +

    Select the host to be updated and click Select migration network.

    +
  6. +
  7. +

    Select a Network from the list of available networks.

    +
    +

    The network list displays only the networks accessible to all the selected hosts. The hosts must have

    +
    +
  8. +
  9. +

    Click Check connection to verify the credentials.

    +
  10. +
  11. +

    Click Select to select the migration network.

    +
    +

    The migration network appears in the network details of the updated hosts.

    +
    +
  12. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/snip-certificate-options/index.html b/documentation/doc-Release_notes/modules/snip-certificate-options/index.html new file mode 100644 index 00000000000..ded6a1417f6 --- /dev/null +++ b/documentation/doc-Release_notes/modules/snip-certificate-options/index.html @@ -0,0 +1,114 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
    +
  1. +

    Choose one of the following options for validating CA certificates:

    +
    +
      +
    • +

      Use a custom CA certificate: Migrate after validating a custom CA certificate.

      +
    • +
    • +

      Use the system CA certificate: Migrate after validating the system CA certificate.

      +
    • +
    • +

Skip certificate validation: Migrate without validating a CA certificate.

      +
      +
        +
      1. +

To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.

        +
      2. +
      3. +

        To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.

        +
      4. +
      5. +

        To skip certificate validation, toggle the Skip certificate validation switch to the right.

        +
      6. +
      +
      +
    • +
    +
    +
  2. +
  3. +

    Optional: Ask Forklift to fetch a custom CA certificate from the provider’s API endpoint URL.

    +
    +
      +
    1. +

      Click Fetch certificate from URL. The Verify certificate window opens.

      +
    2. +
    3. +

If the details are correct, select the I trust the authenticity of this certificate checkbox, and then click Confirm. If not, click Cancel, and then enter the correct certificate information manually.

      +
      +

      Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint.

      +
      +
    4. +
    +
    +
  4. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/snip-migrating-luns/index.html b/documentation/doc-Release_notes/modules/snip-migrating-luns/index.html new file mode 100644 index 00000000000..c319896f0e4 --- /dev/null +++ b/documentation/doc-Release_notes/modules/snip-migrating-luns/index.html @@ -0,0 +1,86 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + +
+
Note
+
+
+
    +
  • +

    Unlike disk images that are copied from a source provider to a target provider, LUNs are detached, but not removed, from virtual machines in the source provider and then attached to the virtual machines (VMs) that are created in the target provider.

    +
  • +
  • +

    LUNs are not removed from the source provider during the migration in case fallback to the source provider is required. However, before re-attaching the LUNs to VMs in the source provider, ensure that the LUNs are not used by VMs on the target environment at the same time, which might lead to data corruption.

    +
  • +
+
+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/snip_cold-warm-comparison-table/index.html b/documentation/doc-Release_notes/modules/snip_cold-warm-comparison-table/index.html new file mode 100644 index 00000000000..b931c08d002 --- /dev/null +++ b/documentation/doc-Release_notes/modules/snip_cold-warm-comparison-table/index.html @@ -0,0 +1,100 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

Both cold migration and warm migration have advantages and disadvantages, as described in the table that follows:

+
Table 1. Advantages and disadvantages of cold and warm migrations

                   Cold migration                                  Warm migration
Duration           Correlates to the amount of data on the disks   Correlates to the amount of data on the disks and VM utilization
Data transferred   Approximate sum of all disks                    Approximate sum of all disks and VM utilization
VM downtime        High                                            Low

+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/snip_measured_boot_windows_vm/index.html b/documentation/doc-Release_notes/modules/snip_measured_boot_windows_vm/index.html new file mode 100644 index 00000000000..422ba0242e6 --- /dev/null +++ b/documentation/doc-Release_notes/modules/snip_measured_boot_windows_vm/index.html @@ -0,0 +1,72 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
Windows VMs which are using Measured Boot cannot be migrated
+

Microsoft Windows virtual machines (VMs) that use the Measured Boot feature cannot be migrated because Measured Boot is a mechanism to prevent any kind of device change, checking each start-up component, from the firmware all the way to the boot driver.

+
+
+

The alternative to migration is to re-create the Windows VM directly on KubeVirt.

+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/snip_performance/index.html b/documentation/doc-Release_notes/modules/snip_performance/index.html new file mode 100644 index 00000000000..011a47a1573 --- /dev/null +++ b/documentation/doc-Release_notes/modules/snip_performance/index.html @@ -0,0 +1,74 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

The data provided here was collected from testing in Red Hat Labs and is provided for reference only. 

+
+
+

Overall, these numbers should be considered to represent best-case scenarios.

+
+
+

The observed performance of migration can differ from these results and depends on several factors.

+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/snip_permissions-info/index.html b/documentation/doc-Release_notes/modules/snip_permissions-info/index.html new file mode 100644 index 00000000000..ac706c2b377 --- /dev/null +++ b/documentation/doc-Release_notes/modules/snip_permissions-info/index.html @@ -0,0 +1,85 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

If you are an administrator, you can see and work with components (providers, plans, etc.) for all projects.

+
+
+

If you are a non-administrator, you can see and work only with the components of the projects for which you have permissions.

+
+
+ + + + + +
+
Tip
+
+
+

You can see which projects you have permissions for by clicking the Project list, which is in the upper-left of every page in the Migrations section except for the Overview.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/snip_plan-limits/index.html b/documentation/doc-Release_notes/modules/snip_plan-limits/index.html new file mode 100644 index 00000000000..69b4b5c45e1 --- /dev/null +++ b/documentation/doc-Release_notes/modules/snip_plan-limits/index.html @@ -0,0 +1,79 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + +
+
Important
+
+
+

A plan cannot contain more than 500 VMs or 500 disks.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/snip_qemu-guest-agent/index.html b/documentation/doc-Release_notes/modules/snip_qemu-guest-agent/index.html new file mode 100644 index 00000000000..706a2520dee --- /dev/null +++ b/documentation/doc-Release_notes/modules/snip_qemu-guest-agent/index.html @@ -0,0 +1,74 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

VMware only: In cold migrations, in situations in which a package manager cannot be used during the migration, Forklift does not install the qemu-guest-agent daemon on the migrated VMs. This has some impact on the functionality of the migrated VMs, but overall, they are still expected to function.

+
+
+

To enable Forklift to automatically install qemu-guest-agent on the migrated VMs, ensure that your package manager can install the daemon during the first boot of the VM after migration.

+
+
+

If that is not possible, use your preferred automated or manual procedure to install qemu-guest-agent manually.
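For example, on a RHEL-family guest, a minimal manual installation might look like the following, assuming dnf and systemd are available in the guest:

$ sudo dnf install -y qemu-guest-agent
$ sudo systemctl enable --now qemu-guest-agent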

+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/snip_secure_boot_issue/index.html b/documentation/doc-Release_notes/modules/snip_secure_boot_issue/index.html new file mode 100644 index 00000000000..bbd6bab953a --- /dev/null +++ b/documentation/doc-Release_notes/modules/snip_secure_boot_issue/index.html @@ -0,0 +1,72 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
VMs with Secure Boot enabled might not be migrated automatically
+

Virtual machines (VMs) with Secure Boot enabled currently might not be migrated automatically. This is because Secure Boot, a security standard developed by members of the PC industry to ensure that a device boots using only software that is trusted by the Original Equipment Manufacturer (OEM), would prevent the VMs from booting on the destination provider. 

+
+
+

Workaround: The current workaround is to disable Secure Boot on the destination. For more details, see Disabling Secure Boot. (MTV-1548)

+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/snip_vmware-name-change/index.html b/documentation/doc-Release_notes/modules/snip_vmware-name-change/index.html new file mode 100644 index 00000000000..649d713f614 --- /dev/null +++ b/documentation/doc-Release_notes/modules/snip_vmware-name-change/index.html @@ -0,0 +1,79 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + +
+
Important
+
+
+

When you migrate a VMware 7 VM to an OKD 4.13+ platform that uses CentOS 7.9, the name of the network interfaces changes and the static IP configuration for the VM no longer works.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/snip_vmware-permissions/index.html b/documentation/doc-Release_notes/modules/snip_vmware-permissions/index.html new file mode 100644 index 00000000000..ea5a1f3e484 --- /dev/null +++ b/documentation/doc-Release_notes/modules/snip_vmware-permissions/index.html @@ -0,0 +1,86 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + +
+
Important
+
+
forklift-controller consistently failing to reconcile a plan, and returning an HTTP 500 error
+
+

There is an issue with the forklift-controller consistently failing to reconcile a Migration Plan, and subsequently returning an HTTP 500 error. This issue is caused when you specify the user permissions only on the virtual machine (VM).

+
+
+

In Forklift, you need to add permissions at the data center level, covering the storage, networks, switches, and other resources used by the VM. You must then propagate the permissions to the child elements.

+
+
+

If you do not want to add this level of permissions, you must manually add the permissions to each required object on the VM host.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/snip_vmware_esxi_nfc/index.html b/documentation/doc-Release_notes/modules/snip_vmware_esxi_nfc/index.html new file mode 100644 index 00000000000..7f3630ca336 --- /dev/null +++ b/documentation/doc-Release_notes/modules/snip_vmware_esxi_nfc/index.html @@ -0,0 +1,79 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + +
+
Note
+
+
+

You can also control the network from which disks are transferred from a host by using the Network File Copy (NFC) service in vSphere.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/snippet_getting_web_console_url_cli/index.html b/documentation/doc-Release_notes/modules/snippet_getting_web_console_url_cli/index.html new file mode 100644 index 00000000000..0670ac7546e --- /dev/null +++ b/documentation/doc-Release_notes/modules/snippet_getting_web_console_url_cli/index.html @@ -0,0 +1,87 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

+

+
+
+
+
$ kubectl get route virt -n konveyor-forklift \
+  -o custom-columns=:.spec.host
+
+
+
+

The URL for the forklift-ui service that opens the login page for the Forklift web console is displayed.

+
+
+

Example output:

+
+
+
+
https://virt-konveyor-forklift.apps.cluster.openshift.com.
+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/snippet_getting_web_console_url_web/index.html b/documentation/doc-Release_notes/modules/snippet_getting_web_console_url_web/index.html new file mode 100644 index 00000000000..15275919b32 --- /dev/null +++ b/documentation/doc-Release_notes/modules/snippet_getting_web_console_url_web/index.html @@ -0,0 +1,84 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
    +
  1. +

    Log in to the OKD web console.

    +
  2. +
  3. +

Click Networking → Routes.

    +
  4. +
  5. +

    Select the {namespace} project in the Project: list.

    +
    +

    The URL for the forklift-ui service that opens the login page for the Forklift web console is displayed.

    +
    +
    +

    Click the URL to navigate to the Forklift web console.

    +
    +
  6. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/snippet_ova_tech_preview/index.html b/documentation/doc-Release_notes/modules/snippet_ova_tech_preview/index.html new file mode 100644 index 00000000000..27ec75b9bf9 --- /dev/null +++ b/documentation/doc-Release_notes/modules/snippet_ova_tech_preview/index.html @@ -0,0 +1,87 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview.

+
+
+ + + + + +
+
Important
+
+
+

Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

+
+
+

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/source-vm-prerequisites/index.html b/documentation/doc-Release_notes/modules/source-vm-prerequisites/index.html new file mode 100644 index 00000000000..c85add9fd7e --- /dev/null +++ b/documentation/doc-Release_notes/modules/source-vm-prerequisites/index.html @@ -0,0 +1,127 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Source virtual machine prerequisites

+
+

The following prerequisites apply to all migrations:

+
+
+
    +
  • +

    ISO/CDROM disks must be unmounted.

    +
  • +
  • +

    Each NIC must contain one IPv4 and/or one IPv6 address.

    +
  • +
  • +

    The operating system of a VM must be certified and supported as a guest operating system with KubeVirt.

    +
  • +
  • +

    The name of a VM must not contain a period (.). Forklift changes any period in a VM name to a dash (-).

    +
  • +
  • +

    The name of a VM must not be the same as any other VM in the KubeVirt environment.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    Forklift automatically assigns a new name to a VM that does not comply with the rules.

    +
    +
    +

    Forklift makes the following changes when it automatically generates a new VM name:

    +
    +
    +
      +
    • +

      Excluded characters are removed.

      +
    • +
    • +

      Uppercase letters are switched to lowercase letters.

      +
    • +
    • +

      Any underscore (_) is changed to a dash (-).

      +
    • +
    +
    +
    +

    This feature allows a migration to proceed smoothly even if someone enters a VM name that does not follow the rules.

    +
    +
    +
    +
  • +
+
+
+

VMs with Secure Boot enabled might not be migrated automatically, because Secure Boot would prevent them from booting on the destination provider. Workaround: Disable Secure Boot on the destination. For more details, see Disabling Secure Boot. (MTV-1548)

+
+
+

Windows VMs that use the Measured Boot feature cannot be migrated, because Measured Boot prevents any kind of device change. The alternative to migration is to re-create the Windows VM directly on KubeVirt.

+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/storage-support/index.html b/documentation/doc-Release_notes/modules/storage-support/index.html new file mode 100644 index 00000000000..5ca3795889c --- /dev/null +++ b/documentation/doc-Release_notes/modules/storage-support/index.html @@ -0,0 +1,211 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Storage support and default modes

+
+

Forklift uses the following default volume and access modes for supported storage.

+
Table 1. Default volume and access modes

Provisioner                              Volume mode    Access mode
kubernetes.io/aws-ebs                    Block          ReadWriteOnce
kubernetes.io/azure-disk                 Block          ReadWriteOnce
kubernetes.io/azure-file                 Filesystem     ReadWriteMany
kubernetes.io/cinder                     Block          ReadWriteOnce
kubernetes.io/gce-pd                     Block          ReadWriteOnce
kubernetes.io/hostpath-provisioner       Filesystem     ReadWriteOnce
manila.csi.openstack.org                 Filesystem     ReadWriteMany
openshift-storage.cephfs.csi.ceph.com    Filesystem     ReadWriteMany
openshift-storage.rbd.csi.ceph.com       Block          ReadWriteOnce
kubernetes.io/rbd                        Block          ReadWriteOnce
kubernetes.io/vsphere-volume             Block          ReadWriteOnce

+
+ + + + + +
+
Note
+
+
+

If the KubeVirt storage does not support dynamic provisioning, you must apply the following settings:

+
+
+
    +
  • +

    Filesystem volume mode

    +
    +

    Filesystem volume mode is slower than Block volume mode.

    +
    +
  • +
  • +

    ReadWriteOnce access mode

    +
    +

    ReadWriteOnce access mode does not support live virtual machine migration.

    +
    +
  • +
+
+
+

See Enabling a statically-provisioned storage class for details on editing the storage profile.
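As a hedged sketch of such an edit, assuming the CDI StorageProfile API with claimPropertySets and a storage class named example-sc (StorageProfile objects are cluster-scoped and named after their storage class):

$ oc patch storageprofile example-sc --type merge \
  -p '{"spec": {"claimPropertySets": [{"accessModes": ["ReadWriteOnce"], "volumeMode": "Filesystem"}]}}'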

+
+
+
+
+ + + + + +
+
Note
+
+
+

If your migration uses block storage and persistent volumes created with an EXT4 file system, increase the file system overhead in CDI to more than 10%. The default overhead that is assumed by CDI does not completely include the reserved space for the root partition. If you do not increase the file system overhead in CDI by this amount, your migration might fail.

+
+
+
+
+ + + + + +
+
Note
+
+
+

When migrating from OpenStack or running a cold migration from RHV to the OCP cluster that MTV is deployed on, the migration allocates persistent volumes without CDI. In these cases, you might need to adjust the file system overhead.

+
+
+

If the configured file system overhead, which has a default value of 10%, is too low, the disk transfer will fail due to lack of space. In such a case, you would want to increase the file system overhead.

+
+
+

In some cases, however, you might want to decrease the file system overhead to reduce storage consumption.

+
+
+

You can change the file system overhead by changing the value of the controller_filesystem_overhead in the spec portion of the forklift-controller CR, as described in Configuring the MTV Operator.
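A minimal sketch of that change, assuming the default instance name forklift-controller in the openshift-mtv namespace and an integer percentage value:

$ oc patch forkliftcontroller forklift-controller -n openshift-mtv \
  --type merge -p '{"spec": {"controller_filesystem_overhead": 15}}'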

+
+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/technical-changes-2-7/index.html b/documentation/doc-Release_notes/modules/technical-changes-2-7/index.html new file mode 100644 index 00000000000..1dbd520b49d --- /dev/null +++ b/documentation/doc-Release_notes/modules/technical-changes-2-7/index.html @@ -0,0 +1,73 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Technical changes

+
+

Forklift 2.7 has the following technical changes:

+
+
+
Upgraded virt-v2v to RHEL 9 for warm migrations
+

Forklift previously used virt-v2v from Red Hat Enterprise Linux (RHEL) 8, which does not include bug fixes and features that are available in virt-v2v in RHEL 9. In Forklift 2.7.0, components are updated to RHEL 9 in order to improve the functionality of warm migration. (MTV-1152)

+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/technology-preview/index.html b/documentation/doc-Release_notes/modules/technology-preview/index.html new file mode 100644 index 00000000000..7ecdb2fa8cc --- /dev/null +++ b/documentation/doc-Release_notes/modules/technology-preview/index.html @@ -0,0 +1,88 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + +
+
Important
+
+
+

{FeatureName} is a Technology Preview feature only. Technology Preview features +are not supported with Red Hat production service level agreements (SLAs) and +might not be functionally complete. Red Hat does not recommend using them +in production. These features provide early access to upcoming product +features, enabling customers to test functionality and provide feedback during +the development process.

+
+
+

For more information about the support scope of Red Hat Technology Preview +features, see https://access.redhat.com/support/offerings/techpreview/.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/uninstalling-mtv-cli/index.html b/documentation/doc-Release_notes/modules/uninstalling-mtv-cli/index.html new file mode 100644 index 00000000000..e46484dbe39 --- /dev/null +++ b/documentation/doc-Release_notes/modules/uninstalling-mtv-cli/index.html @@ -0,0 +1,144 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Uninstalling Forklift from the command line interface

+
+

You can uninstall Forklift from the command line interface (CLI).

+
+
+ + + + + +
+
Note
+
+
+

Uninstalling the Forklift Operator does not remove the resources that it manages, including custom resource definitions (CRDs) and custom resources (CRs). To remove them, you might need to delete the Forklift Operator CRDs manually after uninstalling the Operator.

+
+
+
+
+
Prerequisites
+
    +
  • +

    You must be logged in as a user with cluster-admin privileges.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Delete the ForkliftController CR by running the following command:

    +
    +
    +
    $ oc delete ForkliftController --all -n openshift-mtv
    +
    +
    +
  2. +
  3. +

    Delete the subscription to the Forklift Operator by running the following command:

    +
    +
    +
    $ oc get subscription -o name|grep 'mtv-operator'| xargs oc delete
    +
    +
    +
  4. +
  5. +

    Delete the clusterserviceversion for the Forklift Operator by running the following command:

    +
    +
    +
    $ oc get clusterserviceversion -o name|grep 'mtv-operator'| xargs oc delete
    +
    +
    +
  6. +
  7. +

    Delete the ConsolePlugin CR by running the following command:

    +
    +
    +
    $ oc delete ConsolePlugin forklift-console-plugin
    +
    +
    +
  8. +
  9. +

    Optional: Delete the custom resource definitions (CRDs) by running the following command:

    +
    +
    +
    $ kubectl get crd -o name | grep 'forklift.konveyor.io' | xargs kubectl delete
    +
    +
    +
  10. +
  11. +

    Optional: Perform cleanup by deleting the Forklift project by running the following command:

    +
    +
    +
    $ oc delete project openshift-mtv
    +
    +
    +
  12. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/uninstalling-mtv-ui/index.html b/documentation/doc-Release_notes/modules/uninstalling-mtv-ui/index.html new file mode 100644 index 00000000000..c436f342bf5 --- /dev/null +++ b/documentation/doc-Release_notes/modules/uninstalling-mtv-ui/index.html @@ -0,0 +1,168 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Uninstalling Forklift by using the OKD web console

+
+

You can uninstall Forklift by using the OKD web console.

+
+
+
Prerequisites
+
    +
  • +

    You must be logged in as a user with cluster-admin privileges.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Operators > Installed Operators.

    +
  2. +
  3. +

    Click Forklift Operator.

    +
    +

    The Operator Details page opens in the Details tab.

    +
    +
  4. +
  5. +

    Click the ForkliftController tab.

    +
  6. +
  7. +

    Click Actions and select Delete ForkliftController.

    +
    +

    A confirmation window opens.

    +
    +
  8. +
  9. +

    Click Delete.

    +
    +

    The controller is removed.

    +
    +
  10. +
  11. +

    Open the Details tab.

    +
    +

    The Create ForkliftController button appears instead of the controller you deleted. There is no need to click it.

    +
    +
  12. +
  13. +

    On the upper-right side of the page, click Actions and select Uninstall Operator.

    +
    +

    A confirmation window opens, displaying any operand instances.

    +
    +
  14. +
  15. +

    To delete all instances, select the Delete all operand instances for this operator checkbox. By default, the checkbox is cleared.

    +
    + + + + + +
    +
    Important
    +
    +
    +

    If your Operator configured off-cluster resources, these will continue to run and will require manual cleanup.

    +
    +
    +
    +
  16. +
  17. +

    Click Uninstall.

    +
    +

    The Installed Operators page opens, and the Forklift Operator is removed from the list of installed Operators.

    +
    +
  18. +
  19. +

    Click Home > Overview.

    +
  20. +
  21. +

    In the Status section of the page, click Dynamic Plugins.

    +
    +

    The Dynamic Plugins popup opens, listing forklift-console-plugin as a failed plugin. If the forklift-console-plugin does not appear as a failed plugin, refresh the web console.

    +
    +
  22. +
  23. +

    Click forklift-console-plugin.

    +
    +

    The ConsolePlugin details page opens in the Details tab.

    +
    +
  24. +
  25. +

    On the upper right-hand side of the page, click Actions and select Delete ConsolePlugin from the list.

    +
    +

    A confirmation window opens.

    +
    +
  26. +
  27. +

    Click Delete.

    +
    +

    The plugin is removed from the list of Dynamic plugins on the Overview page. If the plugin still appears, reload the Overview page.

    +
    +
  28. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/updating-validation-rules-version/index.html b/documentation/doc-Release_notes/modules/updating-validation-rules-version/index.html new file mode 100644 index 00000000000..2a3e075c44e --- /dev/null +++ b/documentation/doc-Release_notes/modules/updating-validation-rules-version/index.html @@ -0,0 +1,127 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Updating the inventory rules version

+
+

You must update the inventory rules version each time you update the rules so that the Provider Inventory service detects the changes and triggers the Validation service.

+
+
+

The rules version is recorded in a rules_version.rego file for each provider.

+
+
+
Procedure
+
    +
  1. +

    Retrieve the current rules version:

    +
    +
    +
    $ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version
    +
    +
    +
    +
    Example output
    +
    +
    {
    +   "result": {
    +       "rules_version": 5
    +   }
    +}
    +
    +
    +
  2. +
  3. +

    Connect to the terminal of the Validation pod:

    +
    +
    +
    $ kubectl rsh <validation_pod>
    +
    +
    +
  4. +
  5. +

    Update the rules version in the /usr/share/opa/policies/io/konveyor/forklift/<provider>/rules_version.rego file (a sketch of this file follows the procedure).

    +
  6. +
  7. +

    Log out of the Validation pod terminal.

    +
  8. +
  9. +

    Verify the updated rules version:

    +
    +
    +
    $ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version
    +
    +
    +
    +
    Example output
    +
    +
    {
    +   "result": {
    +       "rules_version": 6
    +   }
    +}
    +
    +
    +
  10. +
+
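A minimal sketch of what the rules_version.rego file might contain, assuming the file defines a single rules_version value in the provider package; verify the exact contents in your Validation pod before overwriting:

$ cat << 'EOF' > /usr/share/opa/policies/io/konveyor/forklift/<provider>/rules_version.rego
# Assumed minimal content: bump this value every time you change the rules.
package io.konveyor.forklift.<provider>

rules_version = 6
EOF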
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/upgrading-mtv-ui/index.html b/documentation/doc-Release_notes/modules/upgrading-mtv-ui/index.html new file mode 100644 index 00000000000..a78a72a8ce0 --- /dev/null +++ b/documentation/doc-Release_notes/modules/upgrading-mtv-ui/index.html @@ -0,0 +1,127 @@ + + + + + + + + Upgrading Forklift | Forklift Documentation + + + + + + + + + + + + + +Upgrading Forklift | Forklift Documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+

Upgrading Forklift

+
+

You can upgrade the Forklift Operator by using the OKD web console to install the new version.

+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Operators > Installed Operators > Forklift Operator > Subscription.

    +
  2. +
  3. +

    Change the update channel to the correct release.

    +
    +

    See Changing update channel in the OKD documentation.

    +
    +
  4. +
  5. +

    Confirm that Upgrade status changes from Up to date to Upgrade available. If it does not, restart the CatalogSource pod:

    +
    +
      +
    1. +

      Note the catalog source, for example, redhat-operators.

      +
    2. +
    3. +

      From the command line, retrieve the catalog source pod:

      +
      +
      +
      $ kubectl get pod -n openshift-marketplace | grep <catalog_source>
      +
      +
      +
    4. +
    5. +

      Delete the pod:

      +
      +
      +
      $ kubectl delete pod -n openshift-marketplace <catalog_source_pod>
      +
      +
      +
      +

      Upgrade status changes from Up to date to Upgrade available.

      +
      +
      +

      If you set Update approval on the Subscriptions tab to Automatic, the upgrade starts automatically.

      +
      +
    6. +
    +
    +
  6. +
  7. +

    If you set Update approval on the Subscriptions tab to Manual, approve the upgrade.

    +
    +

    See Manually approving a pending upgrade in the OKD documentation.

    +
    +
  8. +
  9. +

    If you are upgrading from Forklift 2.2 and have defined VMware source providers, edit the VMware provider by adding a VDDK init image. Otherwise, the update changes the state of any VMware providers to Critical. For more information, see Adding a vSphere source provider.

    +
  10. +
  11. +

    If you mapped to NFS on the OKD destination provider in Forklift 2.2, edit the AccessModes and VolumeMode parameters in the NFS storage profile (see the example patch after this procedure). Otherwise, the upgrade invalidates the NFS mapping. For more information, see Customizing the storage profile.

    +
  12. +
+
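A hedged sketch of such an edit using the CDI StorageProfile API; the profile name, access mode, and volume mode below are assumptions that you must adapt to your NFS storage class:

$ kubectl patch storageprofile <nfs_storage_class> --type=merge \
  -p '{"spec": {"claimPropertySets": [{"accessModes": ["ReadWriteMany"], "volumeMode": "Filesystem"}]}}'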
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/using-must-gather/index.html b/documentation/doc-Release_notes/modules/using-must-gather/index.html new file mode 100644 index 00000000000..8555eb39ffa --- /dev/null +++ b/documentation/doc-Release_notes/modules/using-must-gather/index.html @@ -0,0 +1,157 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Using the must-gather tool

+
+

You can collect logs and information about Forklift custom resources (CRs) by using the must-gather tool. You must attach a must-gather data file to all customer cases.

+
+
+

You can gather data for a specific namespace, migration plan, or virtual machine (VM) by using the filtering options.

+
+
+ + + + + +
+
Note
+
+
+

If you specify a non-existent resource in the filtered must-gather command, no archive file is created.

+
+
+
+
+
Prerequisites
+
    +
  • +

    You must be logged in to the KubeVirt cluster as a user with the cluster-admin role.

    +
  • +
  • +

    You must have the OKD CLI (oc) installed.

    +
  • +
+
+
+
Collecting logs and CR information
+
    +
  1. +

    Navigate to the directory where you want to store the must-gather data.

    +
  2. +
  3. +

    Run the oc adm must-gather command:

    +
    +
    +
    $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest
    +
    +
    +
    +

    The data is saved as /must-gather/must-gather.tar.gz. You can upload this file to a support case on the Red Hat Customer Portal.

    +
    +
  4. +
  5. +

    Optional: Run the oc adm must-gather command with the following options to gather filtered data:

    +
    +
      +
    • +

      Namespace:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- NS=<namespace> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Migration plan:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- PLAN=<migration_plan> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Virtual machine:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- VM=<vm_id> NS=<namespace> /usr/bin/targeted (1)
      +
      +
      +
      +
        +
      1. +

        Specify the VM ID as it appears in the Plan CR (see the lookup sketch after this procedure).

        +
      2. +
      +
      +
    • +
    +
    +
  6. +
+
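As referenced in the callout above, one way to list the VM IDs recorded in a migration plan is a jsonpath query against the Plan CR; this sketch assumes the IDs live under spec.vms, as in the Plan CR examples elsewhere in this documentation:

$ oc get plan <migration_plan> -n <namespace> -o jsonpath='{.spec.vms[*].id}'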
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/virt-migration-workflow/index.html b/documentation/doc-Release_notes/modules/virt-migration-workflow/index.html new file mode 100644 index 00000000000..72ddf46a01d --- /dev/null +++ b/documentation/doc-Release_notes/modules/virt-migration-workflow/index.html @@ -0,0 +1,209 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Detailed migration workflow

+
+

You can use the detailed migration workflow to troubleshoot a failed migration.

+
+
+

The workflow describes the following steps:

+
+
+

Warm migration or migration to a remote {ocp-name} cluster:

+
+
+
    +
  1. +

    When you create the Migration custom resource (CR) to run a migration plan (a minimal example follows this list), the Migration Controller service creates a DataVolume CR for each source VM disk.

    +
    +

    For each VM disk:

    +
    +
  2. +
  3. +

    The Containerized Data Importer (CDI) Controller service creates a persistent volume claim (PVC) based on the parameters specified in the DataVolume CR.



    +
  4. +
  5. +

    If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

    +
  6. +
  7. +

    The CDI Controller service creates an importer pod.

    +
  8. +
  9. +

    The importer pod streams the VM disk to the PV.

    +
    +

    After the VM disks are transferred:

    +
    +
  10. +
  11. +

    The Migration Controller service creates a conversion pod with the PVCs attached to it when importing from VMware.

    +
    +

    The conversion pod runs virt-v2v, which installs and configures device drivers on the PVCs of the target VM.

    +
    +
  12. +
  13. +

    The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.

    +
  14. +
  15. +

    If the VM was running in the source environment, the Migration Controller service powers it on, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    +
    +

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

    +
    +
  16. +
+
+
+
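As a reference for step 1 above, a minimal Migration CR might look like the following sketch; the spec.plan reference is an assumption based on the Forklift API and should be verified for your version:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <migration>
  namespace: <namespace>
spec:
  plan:                  # assumed field: the Plan CR that this migration runs
    name: <plan>
    namespace: <namespace>
EOF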

Cold migration from oVirt or {osp} to the local {ocp-name} cluster:

+
+
+
    +
  1. +

    When you create a Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a PersistentVolumeClaim CR for each source VM disk, as well as an OvirtVolumePopulator CR when the source is oVirt, or an OpenstackVolumePopulator CR when the source is {osp}.

    +
    +

    For each VM disk:

    +
    +
  2. +
  3. +

    The Populator Controller service creates a temporary persistent volume claim (PVC).

    +
  4. +
  5. +

    If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

    +
    +
      +
    • +

      The Migration Controller service creates a dummy pod to bind all PVCs. The name of the pod contains pvcinit.

      +
    • +
    +
    +
  6. +
  7. +

    The Populator Controller service creates a populator pod.

    +
  8. +
  9. +

    The populator pod transfers the disk data to the PV.

    +
    +

    After the VM disks are transferred:

    +
    +
  10. +
  11. +

    The temporary PVC is deleted, and the initial PVC points to the PV with the data.

    +
  12. +
  13. +

    The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.

    +
  14. +
  15. +

    If the VM was running in the source environment, the Migration Controller service powers it on, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    +
    +

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

    +
    +
  16. +
+
+
+

Cold migration from VMware to the local {ocp-name} cluster:

+
+
+
    +
  1. +

    When you create a Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a DataVolume CR for each source VM disk.

    +
    +

    For each VM disk:

    +
    +
  2. +
  3. +

    The Containerized Data Importer (CDI) Controller service creates a blank persistent volume claim (PVC) based on the parameters specified in the DataVolume CR.



    +
  4. +
  5. +

    If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

    +
  6. +
+
+
+

For all VM disks:

+
+
+
    +
  1. +

    The Migration Controller service creates a dummy pod to bind all PVCs. The name of the pod contains pvcinit.

    +
  2. +
  3. +

    The Migration Controller service creates a conversion pod for all PVCs.

    +
  4. +
  5. +

    The conversion pod runs virt-v2v, which converts the VM to the KVM hypervisor and transfers the disks' data to their corresponding PVs.

    +
    +

    After the VM disks are transferred:

    +
    +
  6. +
  7. +

    The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.

    +
  8. +
  9. +

    If the VM was running in the source environment, the Migration Controller service powers it on, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    +
    +

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

    +
    +
  10. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/vmware-prerequisites/index.html b/documentation/doc-Release_notes/modules/vmware-prerequisites/index.html new file mode 100644 index 00000000000..840f7372ed0 --- /dev/null +++ b/documentation/doc-Release_notes/modules/vmware-prerequisites/index.html @@ -0,0 +1,278 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

VMware prerequisites

+
+

It is strongly recommended to create a VDDK image to accelerate migrations. For more information, see Creating a VDDK image.

+
+
+

The following prerequisites apply to VMware migrations:

+
+
+
    +
  • +

    You must use a compatible version of VMware vSphere.

    +
  • +
  • +

    You must be logged in as a user with at least the minimal set of VMware privileges.

    +
  • +
  • +

    To access the virtual machine using a pre-migration hook, VMware Tools must be installed on the source virtual machine.

    +
  • +
  • +

    The VM operating system must be certified and supported for use as a guest operating system with KubeVirt and for conversion to KVM with virt-v2v.

    +
  • +
  • +

    If you are running a warm migration, you must enable changed block tracking (CBT) on the VMs and on the VM disks.

    +
  • +
  • +

    If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host.

    +
  • +
  • +

    It is strongly recommended to disable hibernation because Forklift does not support migrating hibernated VMs.

    +
  • +
+
+
+ + + + + +
+
Important
+
+
+

In the event of a power outage, data might be lost for a VM with disabled hibernation. However, if hibernation is not disabled, the migration will fail.

+
+
+
+
+ + + + + +
+
Note
+
+
+

Neither Forklift nor OpenShift Virtualization supports converting Btrfs file systems when migrating VMs from VMware.

+
+
+
+

VMware privileges

+
+

The following minimal set of VMware privileges is required to migrate virtual machines to KubeVirt with Forklift.

+
Table 1. VMware privileges

| Privilege | Description |
| --------- | ----------- |
| Virtual machine.Interaction privileges: | |
| Virtual machine.Interaction.Power Off | Allows powering off a powered-on virtual machine. This operation powers down the guest operating system. |
| Virtual machine.Interaction.Power On | Allows powering on a powered-off virtual machine and resuming a suspended virtual machine. |
| Virtual machine.Guest operating system management by VIX API | Allows managing a virtual machine by the VMware VIX API. |
| Virtual machine.Provisioning privileges (all Virtual machine.Provisioning privileges are required): | |
| Virtual machine.Provisioning.Allow disk access | Allows opening a disk on a virtual machine for random read and write access. Used mostly for remote disk mounting. |
| Virtual machine.Provisioning.Allow file access | Allows operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM. |
| Virtual machine.Provisioning.Allow read-only disk access | Allows opening a disk on a virtual machine for random read access. Used mostly for remote disk mounting. |
| Virtual machine.Provisioning.Allow virtual machine download | Allows read operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM. |
| Virtual machine.Provisioning.Allow virtual machine files upload | Allows write operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM. |
| Virtual machine.Provisioning.Clone template | Allows cloning of a template. |
| Virtual machine.Provisioning.Clone virtual machine | Allows cloning of an existing virtual machine and allocation of resources. |
| Virtual machine.Provisioning.Create template from virtual machine | Allows creation of a new template from a virtual machine. |
| Virtual machine.Provisioning.Customize guest | Allows customization of a virtual machine’s guest operating system without moving the virtual machine. |
| Virtual machine.Provisioning.Deploy template | Allows deployment of a virtual machine from a template. |
| Virtual machine.Provisioning.Mark as template | Allows marking an existing powered-off virtual machine as a template. |
| Virtual machine.Provisioning.Mark as virtual machine | Allows marking an existing template as a virtual machine. |
| Virtual machine.Provisioning.Modify customization specification | Allows creation, modification, or deletion of customization specifications. |
| Virtual machine.Provisioning.Promote disks | Allows promote operations on a virtual machine’s disks. |
| Virtual machine.Provisioning.Read customization specifications | Allows reading a customization specification. |
| Virtual machine.Snapshot management privileges: | |
| Virtual machine.Snapshot management.Create snapshot | Allows creation of a snapshot from the virtual machine’s current state. |
| Virtual machine.Snapshot management.Remove Snapshot | Allows removal of a snapshot from the snapshot history. |
| Datastore privileges: | |
| Datastore.Browse datastore | Allows exploring the contents of a datastore. |
| Datastore.Low level file operations | Allows performing low-level file operations - read, write, delete, and rename - in a datastore. |
| Sessions privileges: | |
| Sessions.Validate session | Allows verification of the validity of a session. |
| Cryptographic privileges: | |
| Cryptographic.Decrypt | Allows decryption of an encrypted virtual machine. |
| Cryptographic.Direct access | Allows access to encrypted resources. |

+ + +
+ + diff --git a/documentation/modules/about-cold-warm-migration/index.html b/documentation/modules/about-cold-warm-migration/index.html new file mode 100644 index 00000000000..be7006ff1e6 --- /dev/null +++ b/documentation/modules/about-cold-warm-migration/index.html @@ -0,0 +1,255 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

About cold and warm migration

+
+
+
+

Forklift supports cold migration from:

+
+
+
    +
  • +

    VMware vSphere

    +
  • +
  • +

    oVirt

    +
  • +
  • +

    {osp}

    +
  • +
  • +

    Remote KubeVirt clusters

    +
  • +
+
+
+

Forklift supports warm migration from VMware vSphere and from oVirt.

+
+
+
+
+

Cold migration

+
+
+

Cold migration is the default migration type. The source virtual machines are shut down while the data is copied.

+
+
+
+
+
+

Warm migration

+
+
+

Most of the data is copied during the precopy stage while the source virtual machines (VMs) are running.

+
+
+

Then the VMs are shut down and the remaining data is copied during the cutover stage.

+
+
+
Precopy stage
+

The VMs are not shut down during the precopy stage.

+
+
+

The VM disks are copied incrementally by using changed block tracking (CBT) snapshots. The snapshots are created at one-hour intervals by default. You can change the snapshot interval by patching the ForkliftController custom resource (CR).

+
+
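For example, the following command, which mirrors the precopy-interval procedure later in this documentation, sets the interval to 90 minutes:

$ kubectl patch forkliftcontroller/<forklift-controller> -n konveyor-forklift \
  -p '{"spec": {"controller_precopy_interval": 90}}' --type=merge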
+ + + + + +
+
Important
+
+
+

You must enable CBT for each source VM and each VM disk.

+
+
+

A VM can support up to 28 CBT snapshots. If the source VM has too many CBT snapshots and the Migration Controller service is not able to create a new snapshot, warm migration might fail. The Migration Controller service deletes each snapshot when the snapshot is no longer required.

+
+
+
+
+

The precopy stage runs until the cutover stage is started manually or is scheduled to start.

+
+
+
Cutover stage
+

The VMs are shut down during the cutover stage and the remaining data is migrated. Data stored in RAM is not migrated.

+
+
+

You can start the cutover stage manually by using the Forklift console or you can schedule a cutover time in the Migration manifest.

+
+
+
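A hedged sketch of scheduling a cutover from the CLI; the spec.cutover field name and the timestamp format are assumptions based on the Forklift Migration API and should be verified for your version:

$ kubectl patch migration/<migration> -n <namespace> \
  --type=merge -p '{"spec": {"cutover": "2024-06-01T14:00:00Z"}}'  # assumed field: RFC 3339 cutover time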
+
+

Advantages and disadvantages of cold and warm migrations

+
+
+


+
+
+
+

Detailed description

+
+

The table that follows offers a more detailed description of the advantages and disadvantages of each type of migration. It assumes that you have installed Red Hat Enterprise Linux (RHEL) 9 on the OKD platform on which you installed Forklift.

+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Detailed description of advantages and disadvantages
Cold migrationWarm migration

Fail fast

Each VM is converted to be compatible with OKD and, if the conversion is successful, the VM is transferred. If a VM cannot be converted, the migration fails immediately.

For each VM, Forklift creates a snapshot and transfers it to OKD. When you start the cutover, Forklift creates the last snapshot, transfers it, and then converts the VM.

Tools

Forklift only.

Forklift and CDI from KubeVirt.

Parallelism

Disks must be transferred sequentially.

Disks can be transferred in parallel using different pods.

+
+ + + + + +
+
Note
+
+
+

The preceding table describes the situation for VMs that are running because the main benefit of warm migration is the reduced downtime, and there is no reason to initiate warm migration for VMs that are down. However, performing warm migration for VMs that are down is not the same as cold migration, even when Forklift uses virt-v2v and RHEL 9. For VMs that are down, Forklift transfers the disks using CDI, unlike in cold migration.

+
+
+
+
+ + + + + +
+
Note
+
+
+

When importing from VMware, there are additional factors that impact the migration speed, such as limits related to ESXi, vSphere, or VDDK.

+
+
+
+
+
+

Conclusions

+
+

Based on the preceding information, we can draw the following conclusions about cold migration vs. warm migration:

+
+
+
    +
  • +

    The shortest downtime of VMs can be achieved by using warm migration.

    +
  • +
  • +

    The shortest duration for VMs with a large amount of data on a single disk can be achieved by using cold migration.

    +
  • +
  • +

    The shortest duration for VMs with a large amount of data that is spread evenly across multiple disks can be achieved by using warm migration.

    +
  • +
+
+
+
+
+ + +
+ + diff --git a/documentation/modules/about-hook-crs-for-migration-plans-api/index.html b/documentation/modules/about-hook-crs-for-migration-plans-api/index.html new file mode 100644 index 00000000000..ce78880519f --- /dev/null +++ b/documentation/modules/about-hook-crs-for-migration-plans-api/index.html @@ -0,0 +1,116 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

API-based hooks for Forklift migration plans

+
+

You can add hooks to a migration plan from the command line by using the Forklift API.

+
+

Default hook image

+
+

The default hook image for a Forklift hook is registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel8:v1.8.2-2. The image is based on the Ansible Runner image with the addition of python-openshift to provide Ansible Kubernetes resources and a recent oc binary.

+
+

Hook execution

+
+

An Ansible playbook that is provided as part of a migration hook is mounted into the hook container as a ConfigMap. The hook container is run as a job on the desired cluster, using the default ServiceAccount in the konveyor-forklift namespace.

+
+

PreHooks and PostHooks

+
+

You specify hooks per VM and you can run each as a PreHook or a PostHook. In this context, a PreHook is a hook that is run before a migration and a PostHook is a hook that is run after a migration.

+
+
+

When you add a hook, you must specify the namespace where the hook CR is located, the name of the hook, and specify whether the hook is a PreHook or PostHook.

+
+
+ + + + + +
+
Important
+
+
+

In order for a PreHook to run on a VM, the VM must be started and available via SSH.

+
+
+
+
+
Example PreHook:
+
+
kind: Plan
+apiVersion: forklift.konveyor.io/v1beta1
+metadata:
+  name: test
+  namespace: konveyor-forklift
+spec:
+  vms:
+    - id: vm-2861
+      hooks:
+        - hook:
+            namespace: konveyor-forklift
+            name: playbook
+          step: PreHook
+
+
+ + +
+ + diff --git a/documentation/modules/about-rego-files/index.html b/documentation/modules/about-rego-files/index.html new file mode 100644 index 00000000000..7f149a1c9cb --- /dev/null +++ b/documentation/modules/about-rego-files/index.html @@ -0,0 +1,104 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

About Rego files

+
+

Validation rules are written in Rego, the Open Policy Agent (OPA) native query language. The rules are stored as .rego files in the /usr/share/opa/policies/io/konveyor/forklift/<provider> directory of the Validation pod.

+
+
+

Each validation rule is defined in a separate .rego file and tests for a specific condition. If the condition evaluates as true, the rule adds a {"category", "label", "assessment"} hash to the concerns. The concerns content is added to the concerns key in the inventory record of the VM. The web console displays the content of the concerns key for each VM in the provider inventory.

+
+
+

The following .rego file example checks for distributed resource scheduling enabled in the cluster of a VMware VM:

+
+
+
drs_enabled.rego example
+
+
package io.konveyor.forklift.vmware (1)
+
+has_drs_enabled {
+    input.host.cluster.drsEnabled (2)
+}
+
+concerns[flag] {
+    has_drs_enabled
+    flag := {
+        "category": "Information",
+        "label": "VM running in a DRS-enabled cluster",
+        "assessment": "Distributed resource scheduling is not currently supported by OpenShift Virtualization. The VM can be migrated but it will not have this feature in the target environment."
+    }
+}
+
+
+
+
    +
  1. +

    Each validation rule is defined within a package. The package namespaces are io.konveyor.forklift.vmware for VMware and io.konveyor.forklift.ovirt for oVirt.

    +
  2. +
  3. +

    Query parameters are based on the input key of the Validation service JSON.

    +
  4. +
+
+ + +
+ + diff --git a/documentation/modules/accessing-default-validation-rules/index.html b/documentation/modules/accessing-default-validation-rules/index.html new file mode 100644 index 00000000000..6ccd0c2e3c7 --- /dev/null +++ b/documentation/modules/accessing-default-validation-rules/index.html @@ -0,0 +1,108 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Checking the default validation rules

+
+

Before you create a custom rule, you must check the default rules of the Validation service to ensure that you do not create a rule that redefines an existing default value.

+
+
+

Example: If a default rule contains the line default valid_input = false and you create a custom rule that contains the line default valid_input = true, the Validation service will not start.

+
+
+
Procedure
+
    +
  1. +

    Connect to the terminal of the Validation pod:

    +
    +
    +
    $ kubectl rsh <validation_pod>
    +
    +
    +
  2. +
  3. +

    Go to the OPA policies directory for your provider:

    +
    +
    +
    $ cd /usr/share/opa/policies/io/konveyor/forklift/<provider> (1)
    +
    +
    +
    +
      +
    1. +

      Specify vmware or ovirt.

      +
    2. +
    +
    +
  4. +
  5. +

    Search for the default policies:

    +
    +
    +
    $ grep -R "default" *
    +
    +
    +
  6. +
+
+ + +
+ + diff --git a/documentation/modules/accessing-logs-cli/index.html b/documentation/modules/accessing-logs-cli/index.html new file mode 100644 index 00000000000..00aad8ce527 --- /dev/null +++ b/documentation/modules/accessing-logs-cli/index.html @@ -0,0 +1,157 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Accessing logs and custom resource information from the command line interface

+
+

You can access logs and information about custom resources (CRs) from the command line interface by using the must-gather tool. You must attach a must-gather data file to all customer cases.

+
+
+

You can gather data for a specific namespace, a completed, failed, or canceled migration plan, or a migrated virtual machine (VM) by using the filtering options.

+
+
+ + + + + +
+
Note
+
+
+

If you specify a non-existent resource in the filtered must-gather command, no archive file is created.

+
+
+
+
+
Prerequisites
+
    +
  • +

    You must be logged in to the KubeVirt cluster as a user with the cluster-admin role.

    +
  • +
  • +

    You must have the OKD CLI (oc) installed.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Navigate to the directory where you want to store the must-gather data.

    +
  2. +
  3. +

    Run the oc adm must-gather command:

    +
    +
    +
    $ kubectl adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest
    +
    +
    +
    +

    The data is saved as /must-gather/must-gather.tar.gz. You can upload this file to a support case on the Red Hat Customer Portal.

    +
    +
  4. +
  5. +

    Optional: Run the oc adm must-gather command with the following options to gather filtered data:

    +
    +
      +
    • +

      Namespace:

      +
      +
      +
      $ kubectl adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- NS=<namespace> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Migration plan:

      +
      +
      +
      $ kubectl adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- PLAN=<migration_plan> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Virtual machine:

      +
      +
      +
      $ kubectl adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- VM=<vm_name> NS=<namespace> /usr/bin/targeted (1)
      +
      +
      +
      +
        +
      1. +

        You must specify the VM name, not the VM ID, as it appears in the Plan CR (see the lookup sketch after this procedure).

        +
      2. +
      +
      +
    • +
    +
    +
  6. +
+
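As referenced in the callout above, the VM names recorded in a migration plan can be listed from the Plan CR; this jsonpath sketch assumes the names live under spec.vms, as in the Plan CR examples elsewhere in this documentation:

$ oc get plan <migration_plan> -n <namespace> -o jsonpath='{.spec.vms[*].name}'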
+ + +
+ + diff --git a/documentation/modules/accessing-logs-ui/index.html b/documentation/modules/accessing-logs-ui/index.html new file mode 100644 index 00000000000..957b74fdc9d --- /dev/null +++ b/documentation/modules/accessing-logs-ui/index.html @@ -0,0 +1,92 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Downloading logs and custom resource information from the web console

+
+

You can download logs and information about custom resources (CRs) for a completed, failed, or canceled migration plan or for migrated virtual machines (VMs) by using the OKD web console.

+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration > Plans for virtualization.

    +
  2. +
  3. +

    Click Get logs beside a migration plan name.

    +
  4. +
  5. +

    In the Get logs window, click Get logs.

    +
    +

    The logs are collected. A Log collection complete message is displayed.

    +
    +
  6. +
  7. +

    Click Download logs to download the archive file.

    +
  8. +
  9. +

    To download logs for a migrated VM, click a migration plan name and then click Get logs beside the VM.

    +
  10. +
+
+ + +
+ + diff --git a/documentation/modules/adding-hook-crs-to-migration-plans-api/index.html b/documentation/modules/adding-hook-crs-to-migration-plans-api/index.html new file mode 100644 index 00000000000..d193f0fef1d --- /dev/null +++ b/documentation/modules/adding-hook-crs-to-migration-plans-api/index.html @@ -0,0 +1,302 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Adding Hook CRs to a VM migration by using the Forklift API

+
+

You can add a PreHook or a PostHook Hook CR when you migrate a virtual machine from the command line by using the Forklift API. A PreHook runs before a migration, a PostHook, after.

+
+
+ + + + + +
+
Note
+
+
+

You can retrieve additional information stored in a secret or in a configMap by using a k8s module.

+
+
+
+
+

For example, you can create a hook CR to install cloud-init on a VM and write a file before migration.

+
+
+
Procedure
+
    +
  1. +

    If needed, create a secret with an SSH private key for the VM. You can either use an existing key or generate a key pair, install the public key on the VM, and base64 encode the private key in the secret.

    +
    +
    +
    apiVersion: v1
    +data:
    +  key: VGhpcyB3YXMgZ2VuZXJhdGVkIHdpdGggc3NoLWtleWdlbiBwdXJlbHkgZm9yIHRoaXMgZXhhbXBsZS4KSXQgaXMgbm90IHVzZWQgYW55d2hlcmUuCi0tLS0tQkVHSU4gT1BFTlNTSCBQUklWQVRFIEtFWS0tLS0tCmIzQmxibk56YUMxclpYa3RkakVBQUFBQUJHNXZibVVBQUFBRWJtOXVaUUFBQUFBQUFBQUJBQUFCbHdBQUFBZHpjMmd0Y24KTmhBQUFBQXdFQUFRQUFBWUVBMzVTTFRReDBFVjdPTWJQR0FqcEsxK2JhQURTTVFuK1NBU2pyTGZLNWM5NGpHdzhDbnA4LwovRHErZHFBR1pxQkg2ZnAxYmVJM1BZZzVWVDk0RVdWQ2RrTjgwY3dEcEo0Z1R0NHFUQ1gzZUYvY2x5VXQyUC9zaTNjcnQ0CjBQdi9wVnZXU1U2TlhHaDJIZC93V0MwcGh5Z0RQOVc5SHRQSUF0OFpnZmV2ZnUwZHpraVl6OHNVaElWU2ZsRGpaNUFqcUcKUjV2TVVUaGlrczEvZVlCeTdiMkFFSEdzYU8xN3NFbWNiYUlHUHZuUFVwWmQrdjkyYU1JdWZoYjhLZkFSbzZ3Ty9ISW1VbQovdDdHWFBJUmxBMUhSV0p1U05odTQzZS9DY3ZYd3Z6RnZrdE9kYXlEQzBMTklHMkpVaURlNWd0UUQ1WHZXc1p3MHQvbEs1CklacjFrZXZRNUJsYWNISmViV1ZNYUQvdllpdFdhSFo4OEF1Y0czaGh2bjkrOGNSTGhNVExiVlFSMWh2UVpBL1JtQXN3eE0KT3VJSmRaUmtxTThLZlF4Z28zQThRNGJhQW1VbnpvM3Zwa0FWdC9uaGtIOTRaRE5rV2U2RlRhdThONStyYTJCZkdjZVA4VApvbjFEeTBLRlpaUlpCREVVRVc0eHdTYUVOYXQ3c2RDNnhpL1d5OURaQUFBRm1NRFBXeDdBejFzZUFBQUFCM056YUMxeWMyCkVBQUFHQkFOK1VpMDBNZEJGZXpqR3p4Z0k2U3RmbTJnQTBqRUova2dFbzZ5M3l1WFBlSXhzUEFwNmZQL3c2dm5hZ0JtYWcKUituNmRXM2lOejJJT1ZVL2VCRmxRblpEZk5ITUE2U2VJRTdlS2t3bDkzaGYzSmNsTGRqLzdJdDNLN2VORDcvNlZiMWtsTwpqVnhvZGgzZjhGZ3RLWWNvQXovVnZSN1R5QUxmR1lIM3IzN3RIYzVJbU0vTEZJU0ZVbjVRNDJlUUk2aGtlYnpGRTRZcExOCmYzbUFjdTI5Z0JCeHJHanRlN0JKbkcyaUJqNzV6MUtXWGZyL2RtakNMbjRXL0Nud0VhT3NEdnh5SmxKdjdleGx6eUVaUU4KUjBWaWJrallidU4zdnduTDE4TDh4YjVMVG5Xc2d3dEN6U0J0aVZJZzN1WUxVQStWNzFyR2NOTGY1U3VTR2E5WkhyME9RWgpXbkJ5WG0xbFRHZy83MklyVm1oMmZQQUxuQnQ0WWI1L2Z2SEVTNFRFeTIxVUVkWWIwR1FQMFpnTE1NVERyaUNYV1VaS2pQCkNuME1ZS053UEVPRzJnSmxKODZONzZaQUZiZjU0WkIvZUdRelpGbnVoVTJydkRlZnEydGdYeG5Iai9FNko5UTh0Q2hXV1UKV1FReEZCRnVNY0VtaERXcmU3SFF1c1l2MXN2UTJRQUFBQU1CQUFFQUFBR0JBSlZtZklNNjdDQmpXcU9KdnFua2EvakRrUwo4TDdpSE5mekg1TnRZWVdPWmRMTlk2L0lRa1pDeFcwTWtSKzlUK0M3QUZKZzBNV2Q5ck5PeUxJZDkxNjZoOVJsNG0xdFJjCnViZ1o2dWZCZ3hGVDlXS21mSEdCNm4zelh5b2pQOEFJTnR6ODVpaUVHVXFFRWtVRVdMd0RGSmdvcFllQ3l1VmZ2ZE92MUgKRm1WWmEwNVo0b3NQNkNENXVmc2djQ1RYQTR6VnZ5ZHVCYkxqdHN5RjdYZjNUdjZUQ1QxU0swZHErQk1OOXRvb0RZaXpwagpzbDh6NzlybXp3eUFyWFlVcnFUUkpsNmpwRkNrWHJLcy9LeG96MHhhbXlMY2RORk9hWE51LzlnTkpjRERsV2hPcFRqNHk4CkpkNXBuV1Jueis1RHJLRFdhY0loUW1CMUxVd2ZLWmQwbVFxaUpzMUMxcXZVUmlKOGExaThKUTI4bHFuWTFRRk9wbk13emcKWEpla2FndThpT1ExRFJlQkhaM0NkcVJUYnY3bVJZSGxramx0dXJmZGc4M3hvM0ErZ1JSR001eUVOcW5xSkplQjhJQVB5UwptMFp0dGdqbHNqNTJ2K1B1NmExMHoxZndKK1VML2N6dTRKeEpOYlp6WTFIMnpLODJBaVI1T3JYNmx2aUEvSWFSRVcwUUFBCkFNQndVeUJpcUc5bEZCUnltL2UvU1VORVMzdHpicUZNdTdIcy84WTV5SnAxKzR6OXUxNGtJR2ttV0Y5eE5HT3hrY3V0cWwKeHVUcndMbjFUaFNQTHQrTjUwTGhVdzR4ZjBhNUxqemdPbklPU0FRbm5HY1Nxa0dTRDlMR21obGE2WmpydFBHY29lQ3JHdAo5M1Vvcmx5YkxNRzFFRFAxWmpKS1RaZzl6OUMwdDlTTGd3ei9DbFhydW9UNXNQVUdKWnUrbHlIZXpSTDRtcHl6OEZMcnlOCkdNci9leVM5bWdISjNVVkZEYjNIZ3BaK1E1SUdBRU5rZVZEcHIwMGhCZXZndGd6YWtBQUFEQkFQVXQ1RitoMnBVby94V1YKenRkcVQvMzA4dFB5MXVMMU1lWFoydEJPQmRwSDJyd0JzdWt0aTIySGtWZUZXQjJFdUlFUXppMzY3MGc1UGdxR1p4Vng4dQpobEE0Rkg4ZXN1NTNQckZqVW9EeFJhb3d3WXBFcFh5Y2pnNUE1MStwR1VQcWljWjB0YjliaWlhc3BWWXZhWW5sdGlnVG5iClN0UExMY29nemNiL0dGcVYyaXlzc3lwTlMwKzBNRTUxcEtxWGNaS2swbi8vVHpZWWs4TW8vZzRsQ3pmUEZQUlZrVVM5blIKWU1pQzRlcEk0TERmbVdnM0xLQ2N1Zk85all3aWgwYlFBQUFNRUE2WEtldDhEMHNvc0puZVh5WFZGd0dyVyszNlhBVGRQTwpMWDdjaStjYzFoOGV1eHdYQWx3aTJJNFhxSmJBVjBsVEhuVGEycXN3Uy9RQlpJUUJWSkZlVjVyS1daZTc4R2F3d1pWTFZNCldETmNwdFFyRTFaM2pGNS9TdUVzdlVxSDE0Tkc5RUFXWG1iUkNzelE0Vlk3NzQrSi9sTFkvMnlDT1diNzlLYTJ5OGxvYUoKVXczWWVtSld3blp2R3hKNldsL3BmQ2xYN3lEVXlXUktLdGl0cWNjbmpCWVkyRE1tZURwdURDYy9ZdDZDc3dLRmRkMkJ1UwpGZGt5cDlZY3VMaDlLZEFBQUFIR3BoYzI5dVFFRlVMVGd3TWxVdWJXOXVkR3hsYjI0dWF
XNTBjbUVCQWdNRUJRWT0KLS0tLS1FTkQgT1BFTlNTSCBQUklWQVRFIEtFWS0tLS0tCgo=
    +kind: Secret
    +metadata:
    +  name: ssh-credentials
    +  namespace: konveyor-forklift
    +type: Opaque
    +
    +
    +
  2. +
  3. +

    Encode your playbook by concatenating the playbook file and piping it to base64, for example:

    +
    +
    +
    $ cat playbook.yml | base64 -w0
    +
    +
    +
    + + + + + +
    +
    Note
    +
    +
    +

    You can also use a here document to encode a playbook:

    +
    +
    +
    +
    $ cat << EOF | base64 -w0
    +- hosts: localhost
    +  tasks:
    +  - debug:
    +      msg: test
    +EOF
    +
    +
    +
    +
    +
  4. +
  5. +

    Create a Hook CR:

    +
    +
    +
    apiVersion: forklift.konveyor.io/v1beta1
    +kind: Hook
    +metadata:
    +  name: playbook
    +  namespace: konveyor-forklift
    +spec:
    +  image: registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel8:v1.8.2-2
    +  playbook: LSBuYW1lOiBNYWluCiAgaG9zdHM6IGxvY2FsaG9zdAogIHRhc2tzOgogIC0gbmFtZTogTG9hZCBQbGFuCiAgICBpbmNsdWRlX3ZhcnM6CiAgICAgIGZpbGU6IHBsYW4ueW1sCiAgICAgIG5hbWU6IHBsYW4KCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3ZhcnM6CiAgICAgIGZpbGU6IHdvcmtsb2FkLnltbAogICAgICBuYW1lOiB3b3JrbG9hZAoKICAtIG5hbWU6IAogICAgZ2V0ZW50OgogICAgICBkYXRhYmFzZTogcGFzc3dkCiAgICAgIGtleTogInt7IGFuc2libGVfdXNlcl9pZCB9fSIKICAgICAgc3BsaXQ6ICc6JwoKICAtIG5hbWU6IEVuc3VyZSBTU0ggZGlyZWN0b3J5IGV4aXN0cwogICAgZmlsZToKICAgICAgcGF0aDogfi8uc3NoCiAgICAgIHN0YXRlOiBkaXJlY3RvcnkKICAgICAgbW9kZTogMDc1MAogICAgZW52aXJvbm1lbnQ6CiAgICAgIEhPTUU6ICJ7eyBhbnNpYmxlX2ZhY3RzLmdldGVudF9wYXNzd2RbYW5zaWJsZV91c2VyX2lkXVs0XSB9fSIKCiAgLSBrOHNfaW5mbzoKICAgICAgYXBpX3ZlcnNpb246IHYxCiAgICAgIGtpbmQ6IFNlY3JldAogICAgICBuYW1lOiBzc2gtY3JlZGVudGlhbHMKICAgICAgbmFtZXNwYWNlOiBrb252ZXlvci1mb3JrbGlmdAogICAgcmVnaXN0ZXI6IHNzaF9jcmVkZW50aWFscwoKICAtIG5hbWU6IENyZWF0ZSBTU0gga2V5CiAgICBjb3B5OgogICAgICBkZXN0OiB+Ly5zc2gvaWRfcnNhCiAgICAgIGNvbnRlbnQ6ICJ7eyBzc2hfY3JlZGVudGlhbHMucmVzb3VyY2VzWzBdLmRhdGEua2V5IHwgYjY0ZGVjb2RlIH19IgogICAgICBtb2RlOiAwNjAwCgogIC0gYWRkX2hvc3Q6CiAgICAgIG5hbWU6ICJ7eyB3b3JrbG9hZC52bS5pcGFkZHJlc3MgfX0iCiAgICAgIGFuc2libGVfdXNlcjogcm9vdAogICAgICBncm91cHM6IHZtcwoKLSBob3N0czogdm1zCiAgdGFza3M6CiAgLSBuYW1lOiBJbnN0YWxsIGNsb3VkLWluaXQKICAgIGRuZjoKICAgICAgbmFtZToKICAgICAgLSBjbG91ZC1pbml0CiAgICAgIHN0YXRlOiBsYXRlc3QKCiAgLSBuYW1lOiBDcmVhdGUgVGVzdCBGaWxlCiAgICBjb3B5OgogICAgICBkZXN0OiAvdGVzdC50eHQKICAgICAgY29udGVudDogIkhlbGxvIFdvcmxkIgogICAgICBtb2RlOiAwNjQ0Cg==
    +  serviceAccount: forklift-controller (1)
    +
    +
    +
    +
      +
    1. +

      Specify a serviceAccount to run the hook with in order to control access to resources on the cluster.

      +
      + + + + + +
      +
      Note
      +
      +
      +

      To decode an attached playbook, retrieve the resource with custom output and pipe it to base64 -d. For example:

      +
      +
      +
      +
       oc get -n konveyor-forklift hook playbook -o \
      +   go-template='{{ .spec.playbook }}' | base64 -d
      +
      +
      +
      +
      +
      +

      The playbook encoded here runs the following:

      +
      +
      +
      +
      - name: Main
      +  hosts: localhost
      +  tasks:
      +  - name: Load Plan
      +    include_vars:
      +      file: plan.yml
      +      name: plan
      +
      +  - name: Load Workload
      +    include_vars:
      +      file: workload.yml
      +      name: workload
      +
      +  - name:
      +    getent:
      +      database: passwd
      +      key: "{{ ansible_user_id }}"
      +      split: ':'
      +
      +  - name: Ensure SSH directory exists
      +    file:
      +      path: ~/.ssh
      +      state: directory
      +      mode: 0750
      +    environment:
      +      HOME: "{{ ansible_facts.getent_passwd[ansible_user_id][4] }}"
      +
      +  - k8s_info:
      +      api_version: v1
      +      kind: Secret
      +      name: ssh-credentials
      +      namespace: konveyor-forklift
      +    register: ssh_credentials
      +
      +  - name: Create SSH key
      +    copy:
      +      dest: ~/.ssh/id_rsa
      +      content: "{{ ssh_credentials.resources[0].data.key | b64decode }}"
      +      mode: 0600
      +
      +  - add_host:
      +      name: "{{ workload.vm.ipaddress }}"
      +      ansible_user: root
      +      groups: vms
      +
      +- hosts: vms
      +  tasks:
      +  - name: Install cloud-init
      +    dnf:
      +      name:
      +      - cloud-init
      +      state: latest
      +
      +  - name: Create Test File
      +    copy:
      +      dest: /test.txt
      +      content: "Hello World"
      +      mode: 0644
      +
      +
      +
    2. +
    +
    +
  6. +
  7. +

    Create a Plan CR using the hook:

    +
    +
    +
    kind: Plan
    +apiVersion: forklift.konveyor.io/v1beta1
    +metadata:
    +  name: test
    +  namespace: konveyor-forklift
    +spec:
    +  map:
    +    network:
    +      namespace: "konveyor-forklift"
    +      name: "network"
    +    storage:
    +      namespace: "konveyor-forklift"
    +      name: "storage"
    +  provider:
    +    source:
    +      namespace: "konveyor-forklift"
    +      name: "boston"
    +    destination:
    +      namespace: "konveyor-forklift"
    +      name: host
    +  targetNamespace: "konveyor-forklift"
    +  vms:
    +    - id: vm-2861
    +      hooks:
    +        - hook:
    +            namespace: konveyor-forklift
    +            name: playbook
    +          step: PreHook (1)
    +
    +
    +
    +
      +
    1. +

      Options are PreHook, to run the hook before the migration, and PostHook, to run the hook after the migration.

      +
    2. +
    +
    +
  8. +
+
+
+ + + + + +
+
Important
+
+
+

In order for a PreHook to run on a VM, the VM must be started and available via SSH.

+
+
+
+ + +
+ + diff --git a/documentation/modules/adding-source-provider/index.html b/documentation/modules/adding-source-provider/index.html new file mode 100644 index 00000000000..1b4e5163515 --- /dev/null +++ b/documentation/modules/adding-source-provider/index.html @@ -0,0 +1,82 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration > Providers for virtualization.

    +
  2. +
  3. +

    Click Create Provider.

    +
  4. +
  5. +

    Click Create provider to add and save the provider.

    +
    +

    The provider appears in the list of providers.

    +
    +
  6. +
+
+ + +
+ + diff --git a/documentation/modules/adding-virt-provider/index.html b/documentation/modules/adding-virt-provider/index.html new file mode 100644 index 00000000000..e22f09a168a --- /dev/null +++ b/documentation/modules/adding-virt-provider/index.html @@ -0,0 +1,116 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Adding a KubeVirt destination provider

+
+

You can add a KubeVirt destination provider to the OKD web console in addition to the default KubeVirt destination provider, which is the provider where you installed Forklift.

+
+
+
Prerequisites
+ +
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration > Providers for virtualization.

    +
  2. +
  3. +

    Click Create Provider.

    +
  4. +
  5. +

    Select KubeVirt from the Provider type list.

    +
  6. +
  7. +

    Specify the following fields:

    +
    +
      +
    • +

      Provider name: Specify the provider name to display in the list of target providers.

      +
    • +
    • +

      Kubernetes API server URL: Specify the OKD cluster API endpoint.

      +
    • +
    • +

      Service account token: Specify the cluster-admin service account token.

      +
      +

      If both URL and Service account token are left blank, the local OKD cluster is used. An equivalent Provider CR sketch follows this procedure.

      +
      +
    • +
    +
    +
  8. +
  9. +

    Click Create.

    +
    +

    The provider appears in the list of providers.

    +
    +
  10. +
+
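The console form above maps to a Provider CR. The following CLI sketch is an assumption based on the forklift.konveyor.io/v1beta1 API; the spec field names and the secret layout should be verified for your Forklift version:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <provider_name>
  namespace: konveyor-forklift
spec:
  type: openshift              # assumed type value for a KubeVirt target
  url: <api_server_url>        # leave empty to target the local cluster
  secret:
    name: <token_secret>       # assumed: secret holding the service account token
    namespace: konveyor-forklift
EOF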
+ + +
+ + diff --git a/documentation/modules/canceling-migration-cli/index.html b/documentation/modules/canceling-migration-cli/index.html new file mode 100644 index 00000000000..bef0a073375 --- /dev/null +++ b/documentation/modules/canceling-migration-cli/index.html @@ -0,0 +1,132 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Canceling a migration

+
+

You can cancel an entire migration or individual virtual machines (VMs) while a migration is in progress from the command line interface (CLI).

+
+
+
Canceling an entire migration
+
    +
  • +

    Delete the Migration CR:

    +
    +
    +
    $ kubectl delete migration <migration> -n <namespace> (1)
    +
    +
    +
    +
      +
    1. +

      Specify the name of the Migration CR.

      +
    2. +
    +
    +
  • +
+
+
+
Canceling the migration of individual VMs
+
    +
  1. +

    Add the individual VMs to the spec.cancel block of the Migration manifest:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Migration
    +metadata:
    +  name: <migration>
    +  namespace: <namespace>
    +...
    +spec:
    +  cancel:
    +  - id: vm-102 (1)
    +  - id: vm-203
    +  - name: rhel8-vm
    +EOF
    +
    +
    +
    +
      +
    1. +

      You can specify a VM by using the id key or the name key.

      +
      +

      The value of the id key is the managed object reference, for a VMware VM, or the VM UUID, for an oVirt VM.

      +
      +
    2. +
    +
    +
  2. +
  3. +

    Retrieve the Migration CR to monitor the progress of the remaining VMs:

    +
    +
    +
    $ kubectl get migration/<migration> -n <namespace> -o yaml
    +
    +
    +
  4. +
+
+ + +
+ + diff --git a/documentation/modules/canceling-migration-ui/index.html b/documentation/modules/canceling-migration-ui/index.html new file mode 100644 index 00000000000..6d7efd67467 --- /dev/null +++ b/documentation/modules/canceling-migration-ui/index.html @@ -0,0 +1,92 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Canceling a migration

+
+

You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the OKD web console.

+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Plans for virtualization.

    +
  2. +
  3. +

    Click the name of a running migration plan to view the migration details.

    +
  4. +
  5. +

    Select one or more VMs and click Cancel.

    +
  6. +
  7. +

    Click Yes, cancel to confirm the cancellation.

    +
    +

    In the Migration details by VM list, the status of the canceled VMs is Canceled. Migrated and unmigrated virtual machines are not affected.

    +
    +
  8. +
+
+
+

You can restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.

+
+ + +
+ + diff --git a/documentation/modules/changing-precopy-intervals/index.html b/documentation/modules/changing-precopy-intervals/index.html new file mode 100644 index 00000000000..45c426f1057 --- /dev/null +++ b/documentation/modules/changing-precopy-intervals/index.html @@ -0,0 +1,92 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Changing precopy intervals for warm migration

+
+

You can change the snapshot interval by patching the ForkliftController custom resource (CR).

+
+
+
Procedure
+
    +
  • +

    Patch the ForkliftController CR:

    +
    +
    +
    $ kubectl patch forkliftcontroller/<forklift-controller> -n konveyor-forklift -p '{"spec": {"controller_precopy_interval": <60>}}' --type=merge (1)
    +
    +
    +
    +
      +
    1. +

      Specify the precopy interval in minutes. The default value is 60.

      +
      +

      You do not need to restart the forklift-controller pod.

      +
      +
    2. +
    +
    +
  • +
+
+ + +
+ + diff --git a/documentation/modules/collected-logs-cr-info/index.html b/documentation/modules/collected-logs-cr-info/index.html new file mode 100644 index 00000000000..3b6f414c5b5 --- /dev/null +++ b/documentation/modules/collected-logs-cr-info/index.html @@ -0,0 +1,183 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Collected logs and custom resource information

+
+

You can download logs and custom resource (CR) YAML files for the following targets by using the OKD web console or the command-line interface (CLI):

+
+
+
    +
  • +

    Migration plan: Web console or CLI.

    +
  • +
  • +

    Virtual machine: Web console or CLI.

    +
  • +
  • +

    Namespace: CLI only.

    +
  • +
+
+
+

The must-gather tool collects the following logs and CR files in an archive file:

+
+
+
    +
  • +

    CRs:

    +
    +
      +
    • +

      DataVolume CR: Represents a disk mounted on a migrated VM.

      +
    • +
    • +

      VirtualMachine CR: Represents a migrated VM.

      +
    • +
    • +

      Plan CR: Defines the VMs and storage and network mapping.

      +
    • +
    • +

      Job CR: Optional: Represents a pre-migration hook, a post-migration hook, or both.

      +
    • +
    +
    +
  • +
  • +

    Logs:

    +
    +
      +
    • +

      importer pod: Disk-to-data-volume conversion log. The importer pod naming convention is importer-<migration_plan>-<vm_id><5_char_id>, for example, importer-mig-plan-ed90dfc6-9a17-4a8btnfh, where ed90dfc6-9a17-4a8 is a truncated oVirt VM ID and btnfh is the generated 5-character ID.

      +
    • +
    • +

      conversion pod: VM conversion log. The conversion pod runs virt-v2v, which installs and configures device drivers on the PVCs of the VM. The conversion pod naming convention is <migration_plan>-<vm_id><5_char_id>.

      +
    • +
    • +

      virt-launcher pod: VM launcher log. When a migrated VM is powered on, the virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

      +
    • +
    • +

      forklift-controller pod: The log is filtered for the migration plan, virtual machine, or namespace specified by the must-gather command.

      +
    • +
    • +

      forklift-must-gather-api pod: The log is filtered for the migration plan, virtual machine, or namespace specified by the must-gather command.

      +
    • +
    • +

      hook-job pod: The log is filtered for hook jobs. The hook-job naming convention is <migration_plan>-<vm_id><5_char_id>, for example, plan2j-vm-3696-posthook-4mx85 or plan2j-vm-3696-prehook-mwqnl.

      +
      + + + + + +
      +
      Note
      +
      +
      +

      Empty or excluded log files are not included in the must-gather archive file.

      +
      +
      +
      +
    • +
    +
    +
  • +
+
+
+
Example must-gather archive structure for a VMware migration plan
+
+
must-gather
+└── namespaces
+    ├── target-vm-ns
+    │   ├── crs
+    │   │   ├── datavolume
+    │   │   │   ├── mig-plan-vm-7595-tkhdz.yaml
+    │   │   │   ├── mig-plan-vm-7595-5qvqp.yaml
+    │   │   │   └── mig-plan-vm-8325-xccfw.yaml
+    │   │   └── virtualmachine
+    │   │       ├── test-test-rhel8-2disks2nics.yaml
+    │   │       └── test-x2019.yaml
+    │   └── logs
+    │       ├── importer-mig-plan-vm-7595-tkhdz
+    │       │   └── current.log
+    │       ├── importer-mig-plan-vm-7595-5qvqp
+    │       │   └── current.log
+    │       ├── importer-mig-plan-vm-8325-xccfw
+    │       │   └── current.log
+    │       ├── mig-plan-vm-7595-4glzd
+    │       │   └── current.log
+    │       └── mig-plan-vm-8325-4zw49
+    │           └── current.log
+    └── openshift-mtv
+        ├── crs
+        │   └── plan
+        │       └── mig-plan-cold.yaml
+        └── logs
+            ├── forklift-controller-67656d574-w74md
+            │   └── current.log
+            └── forklift-must-gather-api-89fc7f4b6-hlwb6
+                └── current.log
+
+
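+
After must-gather completes, standard shell tools are enough to locate specific logs in the output. A sketch, assuming the archive was saved as must-gather.tar.gz in the current directory:
+
$ tar -xzf must-gather.tar.gz
$ find must-gather -name current.log   # list every collected log
$ grep -ril error must-gather/namespaces   # find logs that mention errors
+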
+ + +
+ + diff --git a/documentation/modules/common-attributes/index.html b/documentation/modules/common-attributes/index.html new file mode 100644 index 00000000000..8eb5c01fac2 --- /dev/null +++ b/documentation/modules/common-attributes/index.html @@ -0,0 +1,66 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + +
+ + diff --git a/documentation/modules/compatibility-guidelines/index.html b/documentation/modules/compatibility-guidelines/index.html new file mode 100644 index 00000000000..cdbf2e9ab8a --- /dev/null +++ b/documentation/modules/compatibility-guidelines/index.html @@ -0,0 +1,137 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Software compatibility guidelines

+
+
+
+

You must install compatible software versions.

+
+ + ++++++++ + + + + + + + + + + + + + + + + + + + + +
Table 1. Compatible software versions
ForkliftOKDKubeVirtVMware vSphereoVirtOpenStack

2.3.0

4.10 or later

4.10 or later

6.5 or later

4.4 SP1 or later

16.1 or later

+
+ + + + + +
+
Note
+
+
Migration from oVirt 4.3
+
+

Forklift was tested only with oVirt (RHV) 4.4 SP1. Migration from oVirt 4.3 is not supported. However, migrations from oVirt 4.3.11 were tested with Forklift 2.3, and basic migrations from oVirt 4.3 are expected to work in many environments.

+
+
+

It is therefore advised to upgrade oVirt Manager (RHVM) to the supported version listed above before migrating to KubeVirt.

+
+
+
+
+
+
+

OpenShift Operator Life Cycles

+
+
+

For more information about the software maintenance Life Cycle classifications for Operators shipped by Red Hat for use with OpenShift Container Platform, see OpenShift Operator Life Cycles.

+
+
+
+ + +
+ + diff --git a/documentation/modules/configuring-mtv-operator/index.html b/documentation/modules/configuring-mtv-operator/index.html new file mode 100644 index 00000000000..e981102d26e --- /dev/null +++ b/documentation/modules/configuring-mtv-operator/index.html @@ -0,0 +1,202 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Configuring the Forklift Operator

+
+

You can configure the following settings of the Forklift Operator by modifying the ForkliftController CR or, unless otherwise indicated, in the Settings section of the Overview page.

+
+
+
    +
  • +

    Maximum number of virtual machines (VMs) per plan that can be migrated simultaneously.

    +
  • +
  • +

How long must-gather reports are retained before they are automatically deleted.

    +
  • +
  • +

    CPU limit allocated to the main controller container.

    +
  • +
  • +

    Memory limit allocated to the main controller container.

    +
  • +
  • +

    Interval at which a new snapshot is requested before initiating a warm migration.

    +
  • +
  • +

    Frequency with which the system checks the status of snapshot creation or removal during a warm migration.

    +
  • +
  • +

    Percentage of space in persistent volumes allocated as file system overhead when the storageclass is filesystem (ForkliftController CR only).

    +
  • +
  • +

    Fixed amount of additional space allocated in persistent block volumes. This setting is applicable for any storageclass that is block-based (ForkliftController CR only).

    +
  • +
  • +

    Configuration map of operating systems to preferences for vSphere source providers (ForkliftController CR only).

    +
  • +
  • +

    Configuration map of operating systems to preferences for oVirt (oVirt) source providers (ForkliftController CR only).

    +
  • +
+
+
+

The procedure for configuring these settings using the user interface is presented in Configuring MTV settings. The procedure for configuring these settings by modifying the ForkliftController CR is presented below.

+
+
+
Procedure
+
    +
  • +

    Change a parameter’s value in the spec portion of the ForkliftController CR by adding the label and value as follows:

    +
  • +
+
+
+
+
spec:
+  label: value (1)
+
+
+
+
    +
  1. +

Labels you can configure using the CLI are shown in the table that follows, along with a description of each label and its default value. An example spec is sketched after this list.

    +
  2. +
+
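+
For example, a spec that raises the per-plan VM limit and the controller memory limit could look like the following. The values are illustrative only; the labels are described in the table below.
+
spec:
  controller_max_vm_inflight: 50
  controller_container_limits_memory: 1600Mi
+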
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Forklift Operator labels
LabelDescriptionDefault value

controller_max_vm_inflight

The maximum number of VMs per plan that can be migrated simultaneously.

20

must_gather_api_cleanup_max_age

The duration in hours for retaining must gather reports before they are automatically deleted.

-1 (disabled)

controller_container_limits_cpu

The CPU limit allocated to the main controller container.

500m

controller_container_limits_memory

The memory limit allocated to the main controller container.

800Mi

controller_precopy_interval

The interval in minutes at which a new snapshot is requested before initiating a warm migration.

60

controller_snapshot_status_check_rate_seconds

The frequency in seconds with which the system checks the status of snapshot creation or removal during a warm migration.

10

controller_filesystem_overhead

Percentage of space in persistent volumes allocated as file system overhead when the storageclass is filesystem.

+

ForkliftController CR only.

10

controller_block_overhead

Fixed amount of additional space allocated in persistent block volumes. This setting is applicable for any storageclass that is block-based. It can be used when data, such as encryption headers, is written to the persistent volumes in addition to the content of the virtual disk.

+

ForkliftController CR only.

0

vsphere_osmap_configmap_name

Configuration map for vSphere source providers. This configuration map maps the operating system of the incoming VM to a KubeVirt preference name. This configuration map needs to be in the namespace where the Forklift Operator is deployed.

+

To see the list of preferences in your KubeVirt environment, open the OKD web console and click VirtualizationPreferences.

+

You can add values to the configuration map when this label has the default value, forklift-vsphere-osmap. In order to override or delete values, specify a configuration map that is different from forklift-vsphere-osmap.

+

ForkliftController CR only.

forklift-vsphere-osmap

ovirt_osmap_configmap_name

Configuration map for oVirt source providers. This configuration map maps the operating system of the incoming VM to a KubeVirt preference name. This configuration map needs to be in the namespace where the Forklift Operator is deployed.

+

To see the list of preferences in your KubeVirt environment, open the OKD web console and click VirtualizationPreferences.

+

You can add values to the configuration map when this label has the default value, forklift-ovirt-osmap. In order to override or delete values, specify a configuration map that is different from forklift-ovirt-osmap.

+

ForkliftController CR only.

forklift-ovirt-osmap

+ + +
+ + diff --git a/documentation/modules/creating-migration-plan-2-6-3/index.html b/documentation/modules/creating-migration-plan-2-6-3/index.html new file mode 100644 index 00000000000..a3d61954fde --- /dev/null +++ b/documentation/modules/creating-migration-plan-2-6-3/index.html @@ -0,0 +1,139 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

The Create migration plan pane opens. It displays the source provider’s name and suggestions for a target provider and namespace, a network map, and a storage map.
+
1. Enter the Plan name.
2. Make any needed changes to the editable items.
3. Click Add mapping to edit a suggested network mapping or a storage mapping, or to add one or more additional mappings.
4. Click Create migration plan.

+
+
+

Forklift validates the migration plan and the Plan details page opens, indicating whether the plan is ready for use or contains an error. The details of the plan are listed, and you can edit the items you filled in on the previous page. If you make any changes, Forklift validates the plan again.

+
+
+
    +
  1. +

    VMware source providers only (All optional):

    +
    +
      +
    • +

      Preserving static IPs of VMs: By default, virtual network interface controllers (vNICs) change during the migration process. As a result, vNICs that are configured with a static IP linked to the interface name in the guest VM lose their IP. To avoid this, click the Edit icon next to Preserve static IPs and toggle the Whether to preserve the static IPs switch in the window that opens. Then click Save.

      +
      +

      Forklift then issues a warning message about any VMs for which vNIC properties are missing. To retrieve any missing vNIC properties, run those VMs in vSphere in order for the vNIC properties to be reported to Forklift.

      +
      +
    • +
    • +

      Entering a list of decryption passphrases for disks encrypted using Linux Unified Key Setup (LUKS): To enter a list of decryption passphrases for LUKS-encrypted devices, in the Settings section, click the Edit icon next to Disk decryption passphrases, enter the passphrases, and then click Save. You do not need to enter the passphrases in a specific order - for each LUKS-encrypted device, Forklift tries each passphrase until one unlocks the device.

      +
    • +
    • +

      Specifying a root device: Applies to multi-boot VM migrations only. By default, Forklift uses the first bootable device detected as the root device.

      +
      +

      To specify a different root device, in the Settings section, click the Edit icon next to Root device and choose a device from the list of commonly-used options, or enter a device in the text box.

      +
      +
      +

      Forklift uses the following format for disk location: /dev/sd<disk_identifier><disk_partition>. For example, if the second disk is the root device and the operating system is on the disk’s second partition, the format would be: /dev/sdb2. After you enter the boot device, click Save.

      +
      +
      +

If the conversion fails because the boot device provided is incorrect, examine the conversion pod logs to determine the correct device.

      +
      +
    • +
    +
    +
  2. +
  3. +

    oVirt source providers only (Optional):

    +
    +
      +
    • +

Preserving the CPU model of VMs that are migrated from oVirt: Generally, the CPU model (type) for oVirt VMs is set at the cluster level, but it can be set at the VM level, which is called a custom CPU model. By default, Forklift sets the CPU model on the destination cluster as follows: Forklift preserves custom CPU settings for VMs that have them; for VMs without custom CPU settings, Forklift does not set the CPU model. Instead, the CPU model is later set by KubeVirt.

      +
      +

      To preserve the cluster-level CPU model of your oVirt VMs, in the Settings section, click the Edit icon next to Preserve CPU model. Toggle the Whether to preserve the CPU model switch, and then click Save.

      +
      +
    • +
    +
    +
  4. +
  5. +

    If the plan is valid,

    +
    +
      +
    1. +

      You can run the plan now by clicking Start migration.

      +
    2. +
    3. +

      You can run the plan later by selecting it on the Plans for virtualization page and following the procedure in Running a migration plan.

      +
      +


      +
      +
    4. +
    +
    +
  6. +
+
+ + +
+ + diff --git a/documentation/modules/creating-migration-plan/index.html b/documentation/modules/creating-migration-plan/index.html new file mode 100644 index 00000000000..bbc1a942495 --- /dev/null +++ b/documentation/modules/creating-migration-plan/index.html @@ -0,0 +1,270 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a migration plan

+
+

You can create a migration plan by using the OKD web console.

+
+
+

A migration plan allows you to group virtual machines that should be migrated together or that share the same migration parameters, for example, a percentage of the members of a cluster or a complete application.

+
+
+

You can configure a hook to run an Ansible playbook or custom container image during a specified stage of the migration plan.

+
+
+
Prerequisites
+
    +
  • +

    If Forklift is not installed on the target cluster, you must add a target provider on the Providers page of the web console.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click MigrationPlans for virtualization.

    +
  2. +
  3. +

    Click Create plan.

    +
  4. +
  5. +

    Specify the following fields:

    +
    +
      +
    • +

      Plan name: Enter a migration plan name to display in the migration plan list.

      +
    • +
    • +

      Plan description: Optional: Brief description of the migration plan.

      +
    • +
    • +

      Source provider: Select a source provider.

      +
    • +
    • +

      Target provider: Select a target provider.

      +
    • +
    • +

      Target namespace: Do one of the following:

      +
      +
        +
      • +

        Select a target namespace from the list

        +
      • +
      • +

        Create a target namespace by typing its name in the text box, and then clicking create "<the_name_you_entered>"

        +
      • +
      +
      +
    • +
    • +

      You can change the migration transfer network for this plan by clicking Select a different network, selecting a network from the list, and then clicking Select.

      +
      +

      If you defined a migration transfer network for the KubeVirt provider and if the network is in the target namespace, the network that you defined is the default network for all migration plans. Otherwise, the pod network is used.

      +
      +
    • +
    +
    +
  6. +
  7. +

    Click Next.

    +
  8. +
  9. +

    Select options to filter the list of source VMs and click Next.

    +
  10. +
  11. +

    Select the VMs to migrate and then click Next.

    +
  12. +
  13. +

    Select an existing network mapping or create a new network mapping.

    +
  14. +
  15. +

Optional: Click Add to add an additional network mapping.

    +
    +

    To create a new network mapping:

    +
    +
    +
      +
    • +

      Select a target network for each source network.

      +
    • +
    • +

      Optional: Select Save current mapping as a template and enter a name for the network mapping.

      +
    • +
    +
    +
  16. +
  17. +

    Click Next.

    +
  18. +
  19. +

    Select an existing storage mapping, which you can modify, or create a new storage mapping.

    +
    +

    To create a new storage mapping:

    +
    +
    +
      +
    1. +

      If your source provider is VMware, select a Source datastore and a Target storage class.

      +
    2. +
    3. +

      If your source provider is oVirt, select a Source storage domain and a Target storage class.

      +
    4. +
    5. +

If your source provider is OpenStack, select a Source volume type and a Target storage class.

      +
    6. +
    +
    +
  20. +
  21. +

    Optional: Select Save current mapping as a template and enter a name for the storage mapping.

    +
  22. +
  23. +

    Click Next.

    +
  24. +
  25. +

    Select a migration type and click Next.

    +
    +
      +
    • +

      Cold migration: The source VMs are stopped while the data is copied.

      +
    • +
    • +

      Warm migration: The source VMs run while the data is copied incrementally. Later, you will run the cutover, which stops the VMs and copies the remaining VM data and metadata.

      +
      + + + + + +
      +
      Note
      +
      +
      +

      Warm migration is supported only from vSphere and oVirt.

      +
      +
      +
      +
    • +
    +
    +
  26. +
  27. +

    Click Next.

    +
  28. +
  29. +

    Optional: You can create a migration hook to run an Ansible playbook before or after migration:

    +
    +
      +
    1. +

      Click Add hook.

      +
    2. +
    3. +

      Select the Step when the hook will be run: pre-migration or post-migration.

      +
    4. +
    5. +

      Select a Hook definition:

      +
      +
        +
      • +

        Ansible playbook: Browse to the Ansible playbook or paste it into the field.

        +
      • +
      • +

        Custom container image: If you do not want to use the default hook-runner image, enter the image path: <registry_path>/<image_name>:<tag>.

        +
        + + + + + +
        +
        Note
        +
        +
        +

        The registry must be accessible to your OKD cluster.

        +
        +
        +
        +
      • +
      +
      +
    6. +
    +
    +
  30. +
  31. +

    Click Next.

    +
  32. +
  33. +

    Review your migration plan and click Finish.

    +
    +

    The migration plan is saved on the Plans page.

    +
    +
    +

You can click the Options menu of the migration plan and select View details to verify the migration plan details.

    +
    +
  34. +
+
+ + +
+ + diff --git a/documentation/modules/creating-network-mapping/index.html b/documentation/modules/creating-network-mapping/index.html new file mode 100644 index 00000000000..2987a66d0ad --- /dev/null +++ b/documentation/modules/creating-network-mapping/index.html @@ -0,0 +1,122 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a network mapping

+
+

You can create one or more network mappings by using the OKD web console to map source networks to KubeVirt networks.

+
+
+
Prerequisites
+
    +
  • +

    Source and target providers added to the OKD web console.

    +
  • +
  • +

    If you map more than one source and target network, each additional KubeVirt network requires its own network attachment definition.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click MigrationNetworkMaps for virtualization.

    +
  2. +
  3. +

    Click Create NetworkMap.

    +
  4. +
  5. +

    Specify the following fields:

    +
    +
      +
    • +

      Name: Enter a name to display in the network mappings list.

      +
    • +
    • +

      Source provider: Select a source provider.

      +
    • +
    • +

      Target provider: Select a target provider.

      +
    • +
    +
    +
  6. +
  7. +

    Select a Source network and a Target namespace/network.

    +
  8. +
  9. +

    Optional: Click Add to create additional network mappings or to map multiple source networks to a single target network.

    +
  10. +
  11. +

    If you create an additional network mapping, select the network attachment definition as the target network.

    +
  12. +
  13. +

    Click Create.

    +
    +

    The network mapping is displayed on the NetworkMaps screen.

    +
    +
  14. +
+
+ + +
+ + diff --git a/documentation/modules/creating-storage-mapping/index.html b/documentation/modules/creating-storage-mapping/index.html new file mode 100644 index 00000000000..286e09d15b7 --- /dev/null +++ b/documentation/modules/creating-storage-mapping/index.html @@ -0,0 +1,138 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a storage mapping

+
+

You can create a storage mapping by using the OKD web console to map source disk storages to KubeVirt storage classes.

+
+
+
Prerequisites
+
    +
  • +

    Source and target providers added to the OKD web console.

    +
  • +
  • +

Local and shared persistent storage that supports VM migration.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click MigrationStorageMaps for virtualization.

    +
  2. +
  3. +

    Click Create StorageMap.

    +
  4. +
  5. +

    Specify the following fields:

    +
    +
      +
    • +

      Name: Enter a name to display in the storage mappings list.

      +
    • +
    • +

      Source provider: Select a source provider.

      +
    • +
    • +

      Target provider: Select a target provider.

      +
    • +
    +
    +
  6. +
  7. +

    To create a storage mapping, click Add and map storage sources to target storage classes as follows:

    +
    +
      +
    1. +

      If your source provider is VMware vSphere, select a Source datastore and a Target storage class.

      +
    2. +
    3. +

      If your source provider is oVirt, select a Source storage domain and a Target storage class.

      +
    4. +
    5. +

If your source provider is OpenStack, select a Source volume type and a Target storage class.

      +
    6. +
    7. +

      If your source provider is a set of one or more OVA files, select a Source and a Target storage class for the dummy storage that applies to all virtual disks within the OVA files.

      +
    8. +
    9. +

If your source provider is KubeVirt, select a Source storage class and a Target storage class.

      +
    10. +
    11. +

      Optional: Click Add to create additional storage mappings, including mapping multiple storage sources to a single target storage class.

      +
    12. +
    +
    +
  8. +
  9. +

    Click Create.

    +
    +

    The mapping is displayed on the StorageMaps page.

    +
    +
  10. +
+
+ + +
+ + diff --git a/documentation/modules/creating-validation-rule/index.html b/documentation/modules/creating-validation-rule/index.html new file mode 100644 index 00000000000..da87b169805 --- /dev/null +++ b/documentation/modules/creating-validation-rule/index.html @@ -0,0 +1,238 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a validation rule

+
+

You create a validation rule by applying a config map custom resource (CR) containing the rule to the Validation service.

+
+
+ + + + + +
+
Important
+
+
+
    +
  • +

    If you create a rule with the same name as an existing rule, the Validation service performs an OR operation with the rules.

    +
  • +
  • +

    If you create a rule that contradicts a default rule, the Validation service will not start.

    +
  • +
+
+
+
+
+
Validation rule example
+

Validation rules are based on virtual machine (VM) attributes collected by the Provider Inventory service.

+
+
+

For example, the VMware API uses this path to check whether a VMware VM has NUMA node affinity configured: MOR:VirtualMachine.config.extraConfig["numa.nodeAffinity"].

+
+
+

The Provider Inventory service simplifies this configuration and returns a testable attribute with a list value:

+
+
+
+
"numaNodeAffinity": [
+    "0",
+    "1"
+],
+
+
+
+

You create a Rego query, based on this attribute, and add it to the forklift-validation-config config map:

+
+
+
+
+count(input.numaNodeAffinity) != 0
+
+
+
+
Procedure
+
    +
  1. +

    Create a config map CR according to the following example:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: v1
    +kind: ConfigMap
    +metadata:
    +  name: <forklift-validation-config>
    +  namespace: konveyor-forklift
    +data:
    +  vmware_multiple_disks.rego: |-
    +    package <provider_package> (1)
    +
    +    has_multiple_disks { (2)
    +      count(input.disks) > 1
    +    }
    +
    +    concerns[flag] {
    +      has_multiple_disks (3)
    +        flag := {
    +          "category": "<Information>", (4)
    +          "label": "Multiple disks detected",
    +          "assessment": "Multiple disks detected on this VM."
    +        }
    +    }
    +EOF
    +
    +
    +
    +
      +
    1. +

      Specify the provider package name. Allowed values are io.konveyor.forklift.vmware for VMware and io.konveyor.forklift.ovirt for oVirt.

      +
    2. +
    3. +

      Specify the concerns name and Rego query.

      +
    4. +
    5. +

      Specify the concerns name and flag parameter values.

      +
    6. +
    7. +

      Allowed values are Critical, Warning, and Information.

      +
    8. +
    +
    +
  2. +
  3. +

    Stop the Validation pod by scaling the forklift-controller deployment to 0:

    +
    +
    +
    $ kubectl scale -n konveyor-forklift --replicas=0 deployment/forklift-controller
    +
    +
    +
  4. +
  5. +

    Start the Validation pod by scaling the forklift-controller deployment to 1:

    +
    +
    +
    $ kubectl scale -n konveyor-forklift --replicas=1 deployment/forklift-controller
    +
    +
    +
  6. +
  7. +

    Check the Validation pod log to verify that the pod started:

    +
    +
    +
    $ kubectl logs -f <validation_pod>
    +
    +
    +
    +

    If the custom rule conflicts with a default rule, the Validation pod will not start.

    +
    +
  8. +
  9. +

    Remove the source provider:

    +
    +
    +
    $ kubectl delete provider <provider> -n konveyor-forklift
    +
    +
    +
  10. +
  11. +

    Add the source provider to apply the new rule:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Provider
    +metadata:
    +  name: <provider>
    +  namespace: konveyor-forklift
    +spec:
    +  type: <provider_type> (1)
    +  url: <api_end_point> (2)
    +  secret:
    +    name: <secret> (3)
    +    namespace: konveyor-forklift
    +EOF
    +
    +
    +
    +
      +
    1. +

      Allowed values are ovirt, vsphere, and openstack.

      +
    2. +
    3. +

Specify the API endpoint URL, for example, https://<vCenter_host>/sdk for vSphere, https://<engine_host>/ovirt-engine/api for oVirt, or https://<identity_service>/v3 for OpenStack.

      +
    4. +
    5. +

      Specify the name of the provider Secret CR.

      +
    6. +
    +
    +
  12. +
+
+
+

You must update the rules version after creating a custom rule so that the Inventory service detects the changes and validates the VMs.

+
+ + +
+ + diff --git a/documentation/modules/creating-vddk-image/index.html b/documentation/modules/creating-vddk-image/index.html new file mode 100644 index 00000000000..2dd324e2daa --- /dev/null +++ b/documentation/modules/creating-vddk-image/index.html @@ -0,0 +1,201 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a VDDK image

+
+

Forklift can use the VMware Virtual Disk Development Kit (VDDK) SDK to accelerate transferring virtual disks from VMware vSphere.

+
+
+ + + + + +
+
Note
+
+
+

Creating a VDDK image, although optional, is highly recommended.

+
+
+
+
+

To make use of this feature, you download the VMware Virtual Disk Development Kit (VDDK), build a VDDK image, and push the VDDK image to your image registry.

+
+
+

The VDDK package contains symbolic links; therefore, you must create the VDDK image on a file system that preserves symbolic links (symlinks).

+
+
+ + + + + +
+
Note
+
+
+

Storing the VDDK image in a public registry might violate the VMware license terms.

+
+
+
+
+
Prerequisites
+
    +
  • +

    OKD image registry.

    +
  • +
  • +

    podman installed.

    +
  • +
  • +

    You are working on a file system that preserves symbolic links (symlinks).

    +
  • +
  • +

    If you are using an external registry, KubeVirt must be able to access it.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Create and navigate to a temporary directory:

    +
    +
    +
    $ mkdir /tmp/<dir_name> && cd /tmp/<dir_name>
    +
    +
    +
  2. +
  3. +

    In a browser, navigate to the VMware VDDK version 8 download page.

    +
  4. +
  5. +

    Select version 8.0.1 and click Download.

    +
  6. +
+
+
+ + + + + +
+
Note
+
+
+

In order to migrate to KubeVirt 4.12, download VDDK version 7.0.3.2 from the VMware VDDK version 7 download page.

+
+
+
+
+
    +
  1. +

    Save the VDDK archive file in the temporary directory.

    +
  2. +
  3. +

    Extract the VDDK archive:

    +
    +
    +
    $ tar -xzf VMware-vix-disklib-<version>.x86_64.tar.gz
    +
    +
    +
  4. +
  5. +

    Create a Dockerfile:

    +
    +
    +
    $ cat > Dockerfile <<EOF
    +FROM registry.access.redhat.com/ubi8/ubi-minimal
    +USER 1001
    +COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
    +RUN mkdir -p /opt
    +ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]
    +EOF
    +
    +
    +
  6. +
  7. +

    Build the VDDK image:

    +
    +
    +
    $ podman build . -t <registry_route_or_server_path>/vddk:<tag>
    +
    +
    +
  8. +
  9. +

    Push the VDDK image to the registry:

    +
    +
    +
    $ podman push <registry_route_or_server_path>/vddk:<tag>
    +
    +
    +
  10. +
  11. +

Ensure that the image is accessible to your KubeVirt environment. A sketch of referencing the image from a provider follows this procedure.

    +
  12. +
+
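+
One way to make Forklift use the image is to reference it from the vSphere Provider CR. A sketch, assuming your Forklift version exposes the settings.vddkInitImage field in the Provider spec:
+
spec:
  settings:
    vddkInitImage: <registry_route_or_server_path>/vddk:<tag>
+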
+ + +
+ + diff --git a/documentation/modules/error-messages/index.html b/documentation/modules/error-messages/index.html new file mode 100644 index 00000000000..2a85079f234 --- /dev/null +++ b/documentation/modules/error-messages/index.html @@ -0,0 +1,83 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Error messages

+
+

This section describes error messages and how to resolve them.

+
+
+
warm import retry limit reached
+

The warm import retry limit reached error message is displayed during a warm migration if a VMware virtual machine (VM) has reached the maximum number (28) of changed block tracking (CBT) snapshots during the precopy stage.

+
+
+

To resolve this problem, delete some of the CBT snapshots from the VM and restart the migration plan.

+
+
+
Unable to resize disk image to required size
+

The Unable to resize disk image to required size error message is displayed when migration fails because a virtual machine on the target provider uses persistent volumes with an EXT4 file system on block storage. The problem occurs because the default overhead that is assumed by CDI does not completely account for the space reserved for the root partition.

+
+
+

To resolve this problem, increase the file system overhead in CDI to more than 10%.

+
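+
For example, because the file system overhead is controlled by the controller_filesystem_overhead label of the ForkliftController CR, one way to raise it is to patch the CR. The value 15 is illustrative:
+
$ kubectl patch forkliftcontroller/<forklift-controller> -n konveyor-forklift -p '{"spec": {"controller_filesystem_overhead": 15}}' --type=merge
+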
+ + +
+ + diff --git a/documentation/modules/images/136_OpenShift_Migration_Toolkit_0121_mtv-workflow.svg b/documentation/modules/images/136_OpenShift_Migration_Toolkit_0121_mtv-workflow.svg new file mode 100644 index 00000000000..999c62adec4 --- /dev/null +++ b/documentation/modules/images/136_OpenShift_Migration_Toolkit_0121_mtv-workflow.svg @@ -0,0 +1 @@ +NetworkmappingTargetproviderVirtualmachines1UserVirtual-Machine-Import4MigrationControllerPlan2Migration3StoragemappingSourceprovider136_OpenShift_0121 diff --git a/documentation/modules/images/136_OpenShift_Migration_Toolkit_0121_virt-workflow.svg b/documentation/modules/images/136_OpenShift_Migration_Toolkit_0121_virt-workflow.svg new file mode 100644 index 00000000000..473e21ba4e2 --- /dev/null +++ b/documentation/modules/images/136_OpenShift_Migration_Toolkit_0121_virt-workflow.svg @@ -0,0 +1 @@ +Virtual-Machine-ImportProviderAPIVirtualmachineCDIControllerKubeVirtController<VM_name>podDataVolumeSourceProviderConversionpodPersistentVolumeDynamicallyprovisionedstoragePersistentVolume Claim163438710ProviderCredentialsUserVMdisk29VirtualMachineImportControllerVirtual-Machine-InstanceVirtual-Machine57Importerpod136_OpenShift_0121 diff --git a/documentation/modules/images/136_Upstream_Migration_Toolkit_0121_mtv-workflow.svg b/documentation/modules/images/136_Upstream_Migration_Toolkit_0121_mtv-workflow.svg new file mode 100644 index 00000000000..33a031a0909 --- /dev/null +++ b/documentation/modules/images/136_Upstream_Migration_Toolkit_0121_mtv-workflow.svg @@ -0,0 +1 @@ +NetworkmappingTargetproviderVirtualmachines1UserVirtual-Machine-Import4MigrationControllerPlan2Migration3StoragemappingSourceprovider136_0121 diff --git a/documentation/modules/images/136_Upstream_Migration_Toolkit_0121_virt-workflow.svg b/documentation/modules/images/136_Upstream_Migration_Toolkit_0121_virt-workflow.svg new file mode 100644 index 00000000000..e73192c0102 --- /dev/null +++ b/documentation/modules/images/136_Upstream_Migration_Toolkit_0121_virt-workflow.svg @@ -0,0 +1 @@ +Virtual-Machine-ImportProviderAPIVirtualmachineCDIControllerKubeVirtController<VM_name>podDataVolumeSourceProviderConversionpodPersistentVolumeDynamicallyprovisionedstoragePersistentVolume Claim163438710ProviderCredentialsUserVMdisk29VirtualMachineImportControllerVirtual-Machine-InstanceVirtual-Machine57Importerpod136_0121 diff --git a/documentation/modules/images/forklift-logo-darkbg.png b/documentation/modules/images/forklift-logo-darkbg.png new file mode 100644 index 00000000000..06e9d1b2494 Binary files /dev/null and b/documentation/modules/images/forklift-logo-darkbg.png differ diff --git a/documentation/modules/images/forklift-logo-darkbg.svg b/documentation/modules/images/forklift-logo-darkbg.svg new file mode 100644 index 00000000000..8a846e6361a --- /dev/null +++ b/documentation/modules/images/forklift-logo-darkbg.svg @@ -0,0 +1,164 @@ + + + + + + + + + + image/svg+xml + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/documentation/modules/images/forklift-logo-lightbg.png b/documentation/modules/images/forklift-logo-lightbg.png new file mode 100644 index 00000000000..8dba83d97f8 Binary files /dev/null and b/documentation/modules/images/forklift-logo-lightbg.png differ diff --git a/documentation/modules/images/forklift-logo-lightbg.svg b/documentation/modules/images/forklift-logo-lightbg.svg new file mode 100644 index 00000000000..a8038cdf923 --- /dev/null +++ b/documentation/modules/images/forklift-logo-lightbg.svg @@ -0,0 +1,159 @@ + + + + + + + + + + 
image/svg+xml + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/documentation/modules/images/kebab.png b/documentation/modules/images/kebab.png new file mode 100644 index 00000000000..81893bd4ad1 Binary files /dev/null and b/documentation/modules/images/kebab.png differ diff --git a/documentation/modules/images/mtv-ui.png b/documentation/modules/images/mtv-ui.png new file mode 100644 index 00000000000..009c9b46386 Binary files /dev/null and b/documentation/modules/images/mtv-ui.png differ diff --git a/documentation/modules/increasing-nfc-memory-vmware-host/index.html b/documentation/modules/increasing-nfc-memory-vmware-host/index.html new file mode 100644 index 00000000000..59c420501ac --- /dev/null +++ b/documentation/modules/increasing-nfc-memory-vmware-host/index.html @@ -0,0 +1,103 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Increasing the NFC service memory of an ESXi host

+
+

If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host. Otherwise, the migration will fail because the NFC service memory is limited to 10 parallel connections.

+
+
+
Procedure
+
    +
  1. +

    Log in to the ESXi host as root.

    +
  2. +
  3. +

Change the value of maxMemory to 1000000000 in /etc/vmware/hostd/config.xml (a scripted version of this edit is sketched after the procedure):

    +
    +
    +
    ...
    +      <nfcsvc>
    +         <path>libnfcsvc.so</path>
    +         <enabled>true</enabled>
    +         <maxMemory>1000000000</maxMemory>
    +         <maxStreamMemory>10485760</maxStreamMemory>
    +      </nfcsvc>
    +...
    +
    +
    +
  4. +
  5. +

    Restart hostd:

    +
    +
    +
    # /etc/init.d/hostd restart
    +
    +
    +
    +

    You do not need to reboot the host.

    +
    +
  6. +
+
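+
The edit in step 2 can also be scripted. A sketch, assuming the host's sed supports in-place editing and that the nfcsvc block contains the only maxMemory element in the file:
+
# cp /etc/vmware/hostd/config.xml /etc/vmware/hostd/config.xml.bak   # keep a backup
# sed -i 's|<maxMemory>.*</maxMemory>|<maxMemory>1000000000</maxMemory>|' /etc/vmware/hostd/config.xml
# /etc/init.d/hostd restart
+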
+ + +
+ + diff --git a/documentation/modules/installing-mtv-operator/index.html b/documentation/modules/installing-mtv-operator/index.html new file mode 100644 index 00000000000..f873b4a75b0 --- /dev/null +++ b/documentation/modules/installing-mtv-operator/index.html @@ -0,0 +1,79 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
Prerequisites
+
    +
  • +

    OKD 4.10 or later installed.

    +
  • +
  • +

    KubeVirt Operator installed on an OpenShift migration target cluster.

    +
  • +
  • +

    You must be logged in as a user with cluster-admin permissions.

    +
  • +
+
+ + +
+ + diff --git a/documentation/modules/issue_templates/issue.md b/documentation/modules/issue_templates/issue.md new file mode 100644 index 00000000000..30d52ab9cba --- /dev/null +++ b/documentation/modules/issue_templates/issue.md @@ -0,0 +1,15 @@ +## Summary + +(Describe the problem. Don't worry if the problem occurs in more than one checklist. You only need to mention the checklist where you see a problem. We will fix the module.) + +## What is the problem? + +(Paste the text or a screenshot here. Remember to include the **task number** so that we know which module is affected.) + +## What is the solution? + +(Correct text, link, or task.) + +## Notes + +(Do we need to fix something else?) diff --git a/documentation/modules/issue_templates/issue/index.html b/documentation/modules/issue_templates/issue/index.html new file mode 100644 index 00000000000..a97dc874dab --- /dev/null +++ b/documentation/modules/issue_templates/issue/index.html @@ -0,0 +1,79 @@ + + + + + + + + Summary | Forklift Documentation + + + + + + + + + + + + + +Summary | Forklift Documentation + + + + + + + + + + + + + + + + + + + + + + +
+

Summary

+ +

(Describe the problem. Don’t worry if the problem occurs in more than one checklist. You only need to mention the checklist where you see a problem. We will fix the module.)

+ +

What is the problem?

+ +

(Paste the text or a screenshot here. Remember to include the task number so that we know which module is affected.)

+ +

What is the solution?

+ +

(Correct text, link, or task.)

+ +

Notes

+ +

(Do we need to fix something else?)

+ + + +
+ + diff --git a/documentation/modules/known-issues-2-7/index.html b/documentation/modules/known-issues-2-7/index.html new file mode 100644 index 00000000000..dd120499133 --- /dev/null +++ b/documentation/modules/known-issues-2-7/index.html @@ -0,0 +1,87 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Known issues

+
+

Forklift 2.7 has the following known issues:

+
+
+
Select Migration Network from the endpoint type ESXi displays multiple incorrect networks
+

When you choose Select Migration Network for a provider with the endpoint type ESXi, multiple incorrect networks are displayed. (MTV-1291)

+
+
+


+
+
+


+
+
+
Network and Storage maps in the UI are not correct when created from the command line
+

When you create network and storage maps from the command line, the correct names are not shown in the UI. (MTV-1421)

+
+
+
Migration fails with module network-legacy configured in RHEL guests
+

Migration fails if the network-legacy module configuration file is present in the guest and the dhcp-client package is not installed. The migration returns the error: dracut module 'network-legacy' will not be installed, because command 'dhclient' could not be found. (MTV-1615)

+
+ + +
+ + diff --git a/documentation/modules/making-open-source-more-inclusive/index.html b/documentation/modules/making-open-source-more-inclusive/index.html new file mode 100644 index 00000000000..d7d1b9422a1 --- /dev/null +++ b/documentation/modules/making-open-source-more-inclusive/index.html @@ -0,0 +1,69 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Making open source more inclusive

+
+

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

+
+ + +
+ + diff --git a/documentation/modules/migration-plan-options-ui/index.html b/documentation/modules/migration-plan-options-ui/index.html new file mode 100644 index 00000000000..693e30962d6 --- /dev/null +++ b/documentation/modules/migration-plan-options-ui/index.html @@ -0,0 +1,141 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Migration plan options

+
+

On the Plans for virtualization page of the OKD web console, you can click the Options menu beside a migration plan to access the following options:

+
+
+
    +
  • +

    Get logs: Retrieves the logs of a migration. When you click Get logs, a confirmation window opens. After you click Get logs in the window, wait until Get logs changes to Download logs and then click the button to download the logs.

    +
  • +
  • +

    Edit: Edit the details of a migration plan. You cannot edit a migration plan while it is running or after it has completed successfully.

    +
  • +
  • +

    Duplicate: Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks:

    +
    +
      +
    • +

      Migrate VMs to a different namespace.

      +
    • +
    • +

      Edit an archived migration plan.

      +
    • +
    • +

      Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready.

      +
    • +
    +
    +
  • +
  • +

    Archive: Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    The Archive option is irreversible. However, you can duplicate an archived plan.

    +
    +
    +
    +
  • +
  • +

    Delete: Permanently remove a migration plan. You cannot delete a running migration plan.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    The Delete option is irreversible.

    +
    +
    +

    Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. (BZ#2018974) You must archive a migration plan before deleting it in order to clean up the temporary resources.

    +
    +
    +
    +
  • +
  • +

    View details: Display the details of a migration plan.

    +
  • +
  • +

    Restart: Restart a failed or canceled migration plan.

    +
  • +
  • +

    Cancel scheduled cutover: Cancel a scheduled cutover migration for a warm migration plan.

    +
  • +
+
+ + +
+ + diff --git a/documentation/modules/mtv-changelog-2-7/index.html b/documentation/modules/mtv-changelog-2-7/index.html new file mode 100644 index 00000000000..cde94042635 --- /dev/null +++ b/documentation/modules/mtv-changelog-2-7/index.html @@ -0,0 +1,2330 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift changelog

+
+
+
+

The following changelog provides a full list of the packages used in the Forklift 2.7 releases.

+
+
+
+
+

Forklift 2.7 packages

+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Forklift packages
Forklift 2.7.0Forklift 2.7.2Forklift 2.7.3

abattis-cantarell-fonts-0.301-4.el9.noarch

abattis-cantarell-fonts-0.301-4.el9.noarch

abattis-cantarell-fonts-0.301-4.el9.noarch

acl-2.3.1-4.el9.x86_64

acl-2.3.1-4.el9.x86_64

acl-2.3.1-4.el9.x86_64

adobe-source-code-pro-fonts-2.030.1.050-12.el9.1.noarch

adobe-source-code-pro-fonts-2.030.1.050-12.el9.1.noarch

adobe-source-code-pro-fonts-2.030.1.050-12.el9.1.noarch

alternatives-1.24-1.el9.x86_64

alternatives-1.24-1.el9.x86_64

alternatives-1.24-1.el9.x86_64

attr-2.5.1-3.el9.x86_64

attr-2.5.1-3.el9.x86_64

attr-2.5.1-3.el9.x86_64

audit-libs-3.1.2-2.el9.x86_64

audit-libs-3.1.2-2.el9.x86_64

audit-libs-3.1.2-2.el9.x86_64

augeas-libs-1.13.0-6.el9_4.x86_64

augeas-libs-1.13.0-6.el9_4.x86_64

augeas-libs-1.13.0-6.el9_4.x86_64

basesystem-11-13.el9.noarch

basesystem-11-13.el9.noarch

basesystem-11-13.el9.noarch

bash-5.1.8-9.el9.x86_64

bash-5.1.8-9.el9.x86_64

bash-5.1.8-9.el9.x86_64

binutils-2.35.2-43.el9.x86_64

binutils-2.35.2-43.el9.x86_64

binutils-2.35.2-43.el9.x86_64

binutils-gold-2.35.2-43.el9.x86_64

binutils-gold-2.35.2-43.el9.x86_64

binutils-gold-2.35.2-43.el9.x86_64

bzip2-1.0.8-8.el9.x86_64

bzip2-1.0.8-8.el9.x86_64

bzip2-1.0.8-8.el9.x86_64

bzip2-libs-1.0.8-8.el9.x86_64

bzip2-libs-1.0.8-8.el9.x86_64

bzip2-libs-1.0.8-8.el9.x86_64

ca-certificates-2024.2.69_v8.0.303-91.4.el9_4.noarch

ca-certificates-2024.2.69_v8.0.303-91.4.el9_4.noarch

ca-certificates-2024.2.69_v8.0.303-91.4.el9_4.noarch

capstone-4.0.2-10.el9.x86_64

capstone-4.0.2-10.el9.x86_64

capstone-4.0.2-10.el9.x86_64

checkpolicy-3.6-1.el9.x86_64

checkpolicy-3.6-1.el9.x86_64

checkpolicy-3.6-1.el9.x86_64

clevis-18-112.el9.x86_64

clevis-18-112.el9.x86_64

clevis-18-112.el9.x86_64

clevis-luks-18-112.el9.x86_64

clevis-luks-18-112.el9.x86_64

clevis-luks-18-112.el9.x86_64

cmake-rpm-macros-3.26.5-2.el9.noarch

cmake-rpm-macros-3.26.5-2.el9.noarch

cmake-rpm-macros-3.26.5-2.el9.noarch

coreutils-single-8.32-35.el9.x86_64

coreutils-single-8.32-35.el9.x86_64

coreutils-single-8.32-35.el9.x86_64

cpio-2.13-16.el9.x86_64

cpio-2.13-16.el9.x86_64

cpio-2.13-16.el9.x86_64

cracklib-2.9.6-27.el9.x86_64

cracklib-2.9.6-27.el9.x86_64

cracklib-2.9.6-27.el9.x86_64

cracklib-dicts-2.9.6-27.el9.x86_64

cracklib-dicts-2.9.6-27.el9.x86_64

cracklib-dicts-2.9.6-27.el9.x86_64

crypto-policies-20240202-1.git283706d.el9.noarch

crypto-policies-20240202-1.git283706d.el9.noarch

crypto-policies-20240202-1.git283706d.el9.noarch

cryptsetup-2.6.0-3.el9.x86_64

cryptsetup-2.6.0-3.el9.x86_64

cryptsetup-2.6.0-3.el9.x86_64

cryptsetup-libs-2.6.0-3.el9.x86_64

cryptsetup-libs-2.6.0-3.el9.x86_64

cryptsetup-libs-2.6.0-3.el9.x86_64

curl-minimal-7.76.1-29.el9_4.1.x86_64

curl-minimal-7.76.1-29.el9_4.1.x86_64

curl-minimal-7.76.1-29.el9_4.1.x86_64

cyrus-sasl-2.1.27-21.el9.x86_64

cyrus-sasl-2.1.27-21.el9.x86_64

cyrus-sasl-2.1.27-21.el9.x86_64

cyrus-sasl-gssapi-2.1.27-21.el9.x86_64

cyrus-sasl-gssapi-2.1.27-21.el9.x86_64

cyrus-sasl-gssapi-2.1.27-21.el9.x86_64

cyrus-sasl-lib-2.1.27-21.el9.x86_64

cyrus-sasl-lib-2.1.27-21.el9.x86_64

cyrus-sasl-lib-2.1.27-21.el9.x86_64

daxctl-libs-71.1-8.el9.x86_64

daxctl-libs-71.1-8.el9.x86_64

daxctl-libs-71.1-8.el9.x86_64

dbus-1.12.20-8.el9.x86_64

dbus-1.12.20-8.el9.x86_64

dbus-1.12.20-8.el9.x86_64

dbus-broker-28-7.el9.x86_64

dbus-broker-28-7.el9.x86_64

dbus-broker-28-7.el9.x86_64

dbus-common-1.12.20-8.el9.noarch

dbus-common-1.12.20-8.el9.noarch

dbus-common-1.12.20-8.el9.noarch

dbus-libs-1.12.20-8.el9.x86_64

dbus-libs-1.12.20-8.el9.x86_64

dbus-libs-1.12.20-8.el9.x86_64

dejavu-sans-fonts-2.37-18.el9.noarch

dejavu-sans-fonts-2.37-18.el9.noarch

dejavu-sans-fonts-2.37-18.el9.noarch

device-mapper-1.02.197-2.el9.x86_64

device-mapper-1.02.197-2.el9.x86_64

device-mapper-1.02.197-2.el9.x86_64

device-mapper-event-1.02.197-2.el9.x86_64

device-mapper-event-1.02.197-2.el9.x86_64

device-mapper-event-1.02.197-2.el9.x86_64

device-mapper-event-libs-1.02.197-2.el9.x86_64

device-mapper-event-libs-1.02.197-2.el9.x86_64

device-mapper-event-libs-1.02.197-2.el9.x86_64

device-mapper-libs-1.02.197-2.el9.x86_64

device-mapper-libs-1.02.197-2.el9.x86_64

device-mapper-libs-1.02.197-2.el9.x86_64

device-mapper-persistent-data-1.0.9-3.el9_4.x86_64

device-mapper-persistent-data-1.0.9-3.el9_4.x86_64

device-mapper-persistent-data-1.0.9-3.el9_4.x86_64

dhcp-client-4.4.2-19.b1.el9.x86_64

dhcp-client-4.4.2-19.b1.el9.x86_64

dhcp-client-4.4.2-19.b1.el9.x86_64

dhcp-common-4.4.2-19.b1.el9.noarch

dhcp-common-4.4.2-19.b1.el9.noarch

dhcp-common-4.4.2-19.b1.el9.noarch

diffutils-3.7-12.el9.x86_64

diffutils-3.7-12.el9.x86_64

diffutils-3.7-12.el9.x86_64

dmidecode-3.5-3.el9.x86_64

dmidecode-3.5-3.el9.x86_64

dmidecode-3.5-3.el9.x86_64

dnf-data-4.14.0-9.el9.noarch

dnf-data-4.14.0-9.el9.noarch

dnf-data-4.14.0-9.el9.noarch

dnsmasq-2.85-16.el9_4.x86_64

dnsmasq-2.85-16.el9_4.x86_64

dnsmasq-2.85-16.el9_4.x86_64

dosfstools-4.2-3.el9.x86_64

dosfstools-4.2-3.el9.x86_64

dosfstools-4.2-3.el9.x86_64

dracut-057-53.git20240104.el9.x86_64

dracut-057-53.git20240104.el9.x86_64

dracut-057-53.git20240104.el9.x86_64

dwz-0.14-3.el9.x86_64

dwz-0.14-3.el9.x86_64

dwz-0.14-3.el9.x86_64

e2fsprogs-1.46.5-5.el9.x86_64

e2fsprogs-1.46.5-5.el9.x86_64

e2fsprogs-1.46.5-5.el9.x86_64

e2fsprogs-libs-1.46.5-5.el9.x86_64

e2fsprogs-libs-1.46.5-5.el9.x86_64

e2fsprogs-libs-1.46.5-5.el9.x86_64

edk2-ovmf-20231122-6.el9_4.3.noarch

edk2-ovmf-20231122-6.el9_4.3.noarch

edk2-ovmf-20231122-6.el9_4.3.noarch

efi-srpm-macros-6-2.el9_0.noarch

efi-srpm-macros-6-2.el9_0.noarch

efi-srpm-macros-6-2.el9_0.noarch

elfutils-debuginfod-client-0.190-2.el9.x86_64

elfutils-debuginfod-client-0.190-2.el9.x86_64

elfutils-debuginfod-client-0.190-2.el9.x86_64

elfutils-default-yama-scope-0.190-2.el9.noarch

elfutils-default-yama-scope-0.190-2.el9.noarch

elfutils-default-yama-scope-0.190-2.el9.noarch

elfutils-libelf-0.190-2.el9.x86_64

elfutils-libelf-0.190-2.el9.x86_64

elfutils-libelf-0.190-2.el9.x86_64

elfutils-libs-0.190-2.el9.x86_64

elfutils-libs-0.190-2.el9.x86_64

elfutils-libs-0.190-2.el9.x86_64

expat-2.5.0-2.el9_4.1.x86_64

expat-2.5.0-2.el9_4.1.x86_64

expat-2.5.0-2.el9_4.1.x86_64

file-5.39-16.el9.x86_64

file-5.39-16.el9.x86_64

file-5.39-16.el9.x86_64

file-libs-5.39-16.el9.x86_64

file-libs-5.39-16.el9.x86_64

file-libs-5.39-16.el9.x86_64

filesystem-3.16-2.el9.x86_64

filesystem-3.16-2.el9.x86_64

filesystem-3.16-2.el9.x86_64

findutils-4.8.0-6.el9.x86_64

findutils-4.8.0-6.el9.x86_64

findutils-4.8.0-6.el9.x86_64

fonts-filesystem-2.0.5-7.el9.1.noarch

fonts-filesystem-2.0.5-7.el9.1.noarch

fonts-filesystem-2.0.5-7.el9.1.noarch

fonts-srpm-macros-2.0.5-7.el9.1.noarch

fonts-srpm-macros-2.0.5-7.el9.1.noarch

fonts-srpm-macros-2.0.5-7.el9.1.noarch

fuse-2.9.9-15.el9.x86_64

fuse-2.9.9-15.el9.x86_64

fuse-2.9.9-15.el9.x86_64

fuse-common-3.10.2-8.el9.x86_64

fuse-common-3.10.2-8.el9.x86_64

fuse-common-3.10.2-8.el9.x86_64

fuse-libs-2.9.9-15.el9.x86_64

fuse-libs-2.9.9-15.el9.x86_64

fuse-libs-2.9.9-15.el9.x86_64

gawk-5.1.0-6.el9.x86_64

gawk-5.1.0-6.el9.x86_64

gawk-5.1.0-6.el9.x86_64

gdbm-libs-1.19-4.el9.x86_64

gdbm-libs-1.19-4.el9.x86_64

gdbm-libs-1.19-4.el9.x86_64

gdisk-1.0.7-5.el9.x86_64

gdisk-1.0.7-5.el9.x86_64

gdisk-1.0.7-5.el9.x86_64

geolite2-city-20191217-6.el9.noarch

geolite2-city-20191217-6.el9.noarch

geolite2-city-20191217-6.el9.noarch

geolite2-country-20191217-6.el9.noarch

geolite2-country-20191217-6.el9.noarch

geolite2-country-20191217-6.el9.noarch

gettext-0.21-8.el9.x86_64

gettext-0.21-8.el9.x86_64

gettext-0.21-8.el9.x86_64

gettext-libs-0.21-8.el9.x86_64

gettext-libs-0.21-8.el9.x86_64

gettext-libs-0.21-8.el9.x86_64

ghc-srpm-macros-1.5.0-6.el9.noarch

ghc-srpm-macros-1.5.0-6.el9.noarch

ghc-srpm-macros-1.5.0-6.el9.noarch

glib-networking-2.68.3-3.el9.x86_64

glib-networking-2.68.3-3.el9.x86_64

glib-networking-2.68.3-3.el9.x86_64

glib2-2.68.4-14.el9_4.1.x86_64

glib2-2.68.4-14.el9_4.1.x86_64

glib2-2.68.4-14.el9_4.1.x86_64

glibc-2.34-100.el9_4.3.x86_64

glibc-2.34-100.el9_4.4.x86_64

glibc-2.34-100.el9_4.4.x86_64

glibc-common-2.34-100.el9_4.3.x86_64

glibc-common-2.34-100.el9_4.4.x86_64

glibc-common-2.34-100.el9_4.4.x86_64

glibc-gconv-extra-2.34-100.el9_4.3.x86_64

glibc-gconv-extra-2.34-100.el9_4.4.x86_64

glibc-gconv-extra-2.34-100.el9_4.4.x86_64

glibc-langpack-en-2.34-100.el9_4.4.x86_64

glibc-langpack-en-2.34-100.el9_4.4.x86_64

glibc-minimal-langpack-2.34-100.el9_4.3.x86_64

glibc-minimal-langpack-2.34-100.el9_4.4.x86_64

glibc-minimal-langpack-2.34-100.el9_4.4.x86_64

gmp-6.2.0-13.el9.x86_64

gmp-6.2.0-13.el9.x86_64

gmp-6.2.0-13.el9.x86_64

gnupg2-2.3.3-4.el9.x86_64

gnupg2-2.3.3-4.el9.x86_64

gnupg2-2.3.3-4.el9.x86_64

gnutls-3.8.3-4.el9_4.x86_64

gnutls-3.8.3-4.el9_4.x86_64

gnutls-3.8.3-4.el9_4.x86_64

gnutls-dane-3.8.3-4.el9_4.x86_64

gnutls-dane-3.8.3-4.el9_4.x86_64

gnutls-dane-3.8.3-4.el9_4.x86_64

gnutls-utils-3.8.3-4.el9_4.x86_64

gnutls-utils-3.8.3-4.el9_4.x86_64

gnutls-utils-3.8.3-4.el9_4.x86_64

go-srpm-macros-3.2.0-3.el9.noarch

go-srpm-macros-3.2.0-3.el9.noarch

go-srpm-macros-3.2.0-3.el9.noarch

gobject-introspection-1.68.0-11.el9.x86_64

gobject-introspection-1.68.0-11.el9.x86_64

gobject-introspection-1.68.0-11.el9.x86_64

gpg-pubkey-5a6340b3-6229229e

gpg-pubkey-5a6340b3-6229229e

gpg-pubkey-5a6340b3-6229229e

gpg-pubkey-fd431d51-4ae0493b

gpg-pubkey-fd431d51-4ae0493b

gpg-pubkey-fd431d51-4ae0493b

gpgme-1.15.1-6.el9.x86_64

gpgme-1.15.1-6.el9.x86_64

gpgme-1.15.1-6.el9.x86_64

grep-3.6-5.el9.x86_64

grep-3.6-5.el9.x86_64

grep-3.6-5.el9.x86_64

groff-base-1.22.4-10.el9.x86_64

groff-base-1.22.4-10.el9.x86_64

groff-base-1.22.4-10.el9.x86_64

gsettings-desktop-schemas-40.0-6.el9.x86_64

gsettings-desktop-schemas-40.0-6.el9.x86_64

gsettings-desktop-schemas-40.0-6.el9.x86_64

gssproxy-0.8.4-6.el9.x86_64

gssproxy-0.8.4-6.el9.x86_64

gssproxy-0.8.4-6.el9.x86_64

guestfs-tools-1.51.6-3.el9_4.x86_64

guestfs-tools-1.51.6-3.el9_4.x86_64

guestfs-tools-1.51.6-3.el9_4.x86_64

gzip-1.12-1.el9.x86_64

gzip-1.12-1.el9.x86_64

gzip-1.12-1.el9.x86_64

hexedit-1.6-1.el9.x86_64

hexedit-1.6-1.el9.x86_64

hexedit-1.6-1.el9.x86_64

hivex-libs-1.3.21-3.el9.x86_64

hivex-libs-1.3.21-3.el9.x86_64

hivex-libs-1.3.21-3.el9.x86_64

hwdata-0.348-9.13.el9.noarch

hwdata-0.348-9.13.el9.noarch

hwdata-0.348-9.13.el9.noarch

inih-49-6.el9.x86_64

inih-49-6.el9.x86_64

inih-49-6.el9.x86_64

ipcalc-1.0.0-5.el9.x86_64

ipcalc-1.0.0-5.el9.x86_64

ipcalc-1.0.0-5.el9.x86_64

iproute-6.2.0-6.el9_4.x86_64

iproute-6.2.0-6.el9_4.x86_64

iproute-6.2.0-6.el9_4.x86_64

iproute-tc-6.2.0-6.el9_4.x86_64

iproute-tc-6.2.0-6.el9_4.x86_64

iproute-tc-6.2.0-6.el9_4.x86_64

iptables-libs-1.8.10-4.el9_4.x86_64

iptables-libs-1.8.10-4.el9_4.x86_64

iptables-libs-1.8.10-4.el9_4.x86_64

iptables-nft-1.8.10-4.el9_4.x86_64

iptables-nft-1.8.10-4.el9_4.x86_64

iptables-nft-1.8.10-4.el9_4.x86_64

iputils-20210202-9.el9.x86_64

iputils-20210202-9.el9.x86_64

iputils-20210202-9.el9.x86_64

ipxe-roms-qemu-20200823-9.git4bd064de.el9.noarch

ipxe-roms-qemu-20200823-9.git4bd064de.el9.noarch

ipxe-roms-qemu-20200823-9.git4bd064de.el9.noarch

jansson-2.14-1.el9.x86_64

jansson-2.14-1.el9.x86_64

jansson-2.14-1.el9.x86_64

jose-11-3.el9.x86_64

jose-11-3.el9.x86_64

jose-11-3.el9.x86_64

jq-1.6-16.el9.x86_64

jq-1.6-16.el9.x86_64

jq-1.6-16.el9.x86_64

json-c-0.14-11.el9.x86_64

json-c-0.14-11.el9.x86_64

json-c-0.14-11.el9.x86_64

json-glib-1.6.6-1.el9.x86_64

json-glib-1.6.6-1.el9.x86_64

json-glib-1.6.6-1.el9.x86_64

kbd-2.4.0-9.el9.x86_64

kbd-2.4.0-9.el9.x86_64

kbd-2.4.0-9.el9.x86_64

kbd-legacy-2.4.0-9.el9.noarch

kbd-legacy-2.4.0-9.el9.noarch

kbd-legacy-2.4.0-9.el9.noarch

kbd-misc-2.4.0-9.el9.noarch

kbd-misc-2.4.0-9.el9.noarch

kbd-misc-2.4.0-9.el9.noarch

kernel-core-5.14.0-427.35.1.el9_4.x86_64

kernel-core-5.14.0-427.37.1.el9_4.x86_64

kernel-core-5.14.0-427.40.1.el9_4.x86_64

kernel-modules-core-5.14.0-427.35.1.el9_4.x86_64

kernel-modules-core-5.14.0-427.37.1.el9_4.x86_64

kernel-modules-core-5.14.0-427.40.1.el9_4.x86_64

kernel-srpm-macros-1.0-13.el9.noarch

kernel-srpm-macros-1.0-13.el9.noarch

kernel-srpm-macros-1.0-13.el9.noarch

keyutils-1.6.3-1.el9.x86_64

keyutils-1.6.3-1.el9.x86_64

keyutils-1.6.3-1.el9.x86_64

keyutils-libs-1.6.3-1.el9.x86_64

keyutils-libs-1.6.3-1.el9.x86_64

keyutils-libs-1.6.3-1.el9.x86_64

kmod-28-9.el9.x86_64

kmod-28-9.el9.x86_64

kmod-28-9.el9.x86_64

kmod-libs-28-9.el9.x86_64

kmod-libs-28-9.el9.x86_64

kmod-libs-28-9.el9.x86_64

kpartx-0.8.7-27.el9.x86_64

kpartx-0.8.7-27.el9.x86_64

kpartx-0.8.7-27.el9.x86_64

krb5-libs-1.21.1-2.el9_4.x86_64

krb5-libs-1.21.1-2.el9_4.x86_64

krb5-libs-1.21.1-2.el9_4.x86_64

langpacks-core-en-3.0-16.el9.noarch

langpacks-core-en-3.0-16.el9.noarch

langpacks-core-en-3.0-16.el9.noarch

langpacks-core-font-en-3.0-16.el9.noarch

langpacks-core-font-en-3.0-16.el9.noarch

langpacks-core-font-en-3.0-16.el9.noarch

langpacks-en-3.0-16.el9.noarch

langpacks-en-3.0-16.el9.noarch

langpacks-en-3.0-16.el9.noarch

less-590-4.el9_4.x86_64

less-590-4.el9_4.x86_64

less-590-4.el9_4.x86_64

libacl-2.3.1-4.el9.x86_64

libacl-2.3.1-4.el9.x86_64

libacl-2.3.1-4.el9.x86_64

libaio-0.3.111-13.el9.x86_64

libaio-0.3.111-13.el9.x86_64

libaio-0.3.111-13.el9.x86_64

libarchive-3.5.3-4.el9.x86_64

libarchive-3.5.3-4.el9.x86_64

libarchive-3.5.3-4.el9.x86_64

libassuan-2.5.5-3.el9.x86_64

libassuan-2.5.5-3.el9.x86_64

libassuan-2.5.5-3.el9.x86_64

libatomic-11.4.1-3.el9.x86_64

libatomic-11.4.1-3.el9.x86_64

libatomic-11.4.1-3.el9.x86_64

libattr-2.5.1-3.el9.x86_64

libattr-2.5.1-3.el9.x86_64

libattr-2.5.1-3.el9.x86_64

libbasicobjects-0.1.1-53.el9.x86_64

libbasicobjects-0.1.1-53.el9.x86_64

libbasicobjects-0.1.1-53.el9.x86_64

libblkid-2.37.4-18.el9.x86_64

libblkid-2.37.4-18.el9.x86_64

libblkid-2.37.4-18.el9.x86_64

libbpf-1.3.0-2.el9.x86_64

libbpf-1.3.0-2.el9.x86_64

libbpf-1.3.0-2.el9.x86_64

libbrotli-1.0.9-6.el9.x86_64

libbrotli-1.0.9-6.el9.x86_64

libbrotli-1.0.9-6.el9.x86_64

libcap-2.48-9.el9_2.x86_64

libcap-2.48-9.el9_2.x86_64

libcap-2.48-9.el9_2.x86_64

libcap-ng-0.8.2-7.el9.x86_64

libcap-ng-0.8.2-7.el9.x86_64

libcap-ng-0.8.2-7.el9.x86_64

libcbor-0.7.0-5.el9.x86_64

libcbor-0.7.0-5.el9.x86_64

libcbor-0.7.0-5.el9.x86_64

libcollection-0.7.0-53.el9.x86_64

libcollection-0.7.0-53.el9.x86_64

libcollection-0.7.0-53.el9.x86_64

libcom_err-1.46.5-5.el9.x86_64

libcom_err-1.46.5-5.el9.x86_64

libcom_err-1.46.5-5.el9.x86_64

libconfig-1.7.2-9.el9.x86_64

libconfig-1.7.2-9.el9.x86_64

libconfig-1.7.2-9.el9.x86_64

libcurl-minimal-7.76.1-29.el9_4.1.x86_64

libcurl-minimal-7.76.1-29.el9_4.1.x86_64

libcurl-minimal-7.76.1-29.el9_4.1.x86_64

libdb-5.3.28-53.el9.x86_64

libdb-5.3.28-53.el9.x86_64

libdb-5.3.28-53.el9.x86_64

libdnf-0.69.0-8.el9_4.1.x86_64

libdnf-0.69.0-8.el9_4.1.x86_64

libdnf-0.69.0-8.el9_4.1.x86_64

libeconf-0.4.1-3.el9_2.x86_64

libeconf-0.4.1-3.el9_2.x86_64

libeconf-0.4.1-3.el9_2.x86_64

libedit-3.1-38.20210216cvs.el9.x86_64

libedit-3.1-38.20210216cvs.el9.x86_64

libedit-3.1-38.20210216cvs.el9.x86_64

libev-4.33-5.el9.x86_64

libev-4.33-5.el9.x86_64

libev-4.33-5.el9.x86_64

libevent-2.1.12-8.el9_4.x86_64

libevent-2.1.12-8.el9_4.x86_64

libevent-2.1.12-8.el9_4.x86_64

libfdisk-2.37.4-18.el9.x86_64

libfdisk-2.37.4-18.el9.x86_64

libfdisk-2.37.4-18.el9.x86_64

libfdt-1.6.0-7.el9.x86_64

libfdt-1.6.0-7.el9.x86_64

libfdt-1.6.0-7.el9.x86_64

libffi-3.4.2-8.el9.x86_64

libffi-3.4.2-8.el9.x86_64

libffi-3.4.2-8.el9.x86_64

libfido2-1.13.0-2.el9.x86_64

libfido2-1.13.0-2.el9.x86_64

libfido2-1.13.0-2.el9.x86_64

libgcc-11.4.1-3.el9.x86_64

libgcc-11.4.1-3.el9.x86_64

libgcc-11.4.1-3.el9.x86_64

libgcrypt-1.10.0-10.el9_2.x86_64

libgcrypt-1.10.0-10.el9_2.x86_64

libgcrypt-1.10.0-10.el9_2.x86_64

libgomp-11.4.1-3.el9.x86_64

libgomp-11.4.1-3.el9.x86_64

libgomp-11.4.1-3.el9.x86_64

libgpg-error-1.42-5.el9.x86_64

libgpg-error-1.42-5.el9.x86_64

libgpg-error-1.42-5.el9.x86_64

libguestfs-1.50.1-8.el9_4.x86_64

libguestfs-1.50.1-8.el9_4.x86_64

libguestfs-1.50.1-8.el9_4.x86_64

libguestfs-appliance-1.50.1-8.el9_4.x86_64

libguestfs-appliance-1.50.1-8.el9_4.x86_64

libguestfs-appliance-1.50.1-8.el9_4.x86_64

libguestfs-winsupport-9.3-1.el9_3.x86_64

libguestfs-winsupport-9.3-1.el9_3.x86_64

libguestfs-winsupport-9.3-1.el9_3.x86_64

libguestfs-xfs-1.50.1-8.el9_4.x86_64

libguestfs-xfs-1.50.1-8.el9_4.x86_64

libguestfs-xfs-1.50.1-8.el9_4.x86_64

libibverbs-48.0-1.el9.x86_64

libibverbs-48.0-1.el9.x86_64

libibverbs-48.0-1.el9.x86_64

libicu-67.1-9.el9.x86_64

libicu-67.1-9.el9.x86_64

libicu-67.1-9.el9.x86_64

libidn2-2.3.0-7.el9.x86_64

libidn2-2.3.0-7.el9.x86_64

libidn2-2.3.0-7.el9.x86_64

libini_config-1.3.1-53.el9.x86_64

libini_config-1.3.1-53.el9.x86_64

libini_config-1.3.1-53.el9.x86_64

libjose-11-3.el9.x86_64

libjose-11-3.el9.x86_64

libjose-11-3.el9.x86_64

libkcapi-1.4.0-2.el9.x86_64

libkcapi-1.4.0-2.el9.x86_64

libkcapi-1.4.0-2.el9.x86_64

libkcapi-hmaccalc-1.4.0-2.el9.x86_64

libkcapi-hmaccalc-1.4.0-2.el9.x86_64

libkcapi-hmaccalc-1.4.0-2.el9.x86_64

libksba-1.5.1-6.el9_1.x86_64

libksba-1.5.1-6.el9_1.x86_64

libksba-1.5.1-6.el9_1.x86_64

libluksmeta-9-12.el9.x86_64

libluksmeta-9-12.el9.x86_64

libluksmeta-9-12.el9.x86_64

libmaxminddb-1.5.2-3.el9.x86_64

libmaxminddb-1.5.2-3.el9.x86_64

libmaxminddb-1.5.2-3.el9.x86_64

libmnl-1.0.4-16.el9_4.x86_64

libmnl-1.0.4-16.el9_4.x86_64

libmnl-1.0.4-16.el9_4.x86_64

libmodulemd-2.13.0-2.el9.x86_64

libmodulemd-2.13.0-2.el9.x86_64

libmodulemd-2.13.0-2.el9.x86_64

libmount-2.37.4-18.el9.x86_64

libmount-2.37.4-18.el9.x86_64

libmount-2.37.4-18.el9.x86_64

libnbd-1.18.1-4.el9_4.x86_64

libnbd-1.18.1-4.el9_4.x86_64

libnbd-1.18.1-4.el9_4.x86_64

libnetfilter_conntrack-1.0.9-1.el9.x86_64

libnetfilter_conntrack-1.0.9-1.el9.x86_64

libnetfilter_conntrack-1.0.9-1.el9.x86_64

libnfnetlink-1.0.1-21.el9.x86_64

libnfnetlink-1.0.1-21.el9.x86_64

libnfnetlink-1.0.1-21.el9.x86_64

libnfsidmap-2.5.4-26.el9_4.x86_64

libnfsidmap-2.5.4-26.el9_4.x86_64

libnfsidmap-2.5.4-26.el9_4.x86_64

libnftnl-1.2.6-4.el9_4.x86_64

libnftnl-1.2.6-4.el9_4.x86_64

libnftnl-1.2.6-4.el9_4.x86_64

libnghttp2-1.43.0-5.el9_4.3.x86_64

libnghttp2-1.43.0-5.el9_4.3.x86_64

libnghttp2-1.43.0-5.el9_4.3.x86_64

libnl3-3.9.0-1.el9.x86_64

libnl3-3.9.0-1.el9.x86_64

libnl3-3.9.0-1.el9.x86_64

libosinfo-1.10.0-1.el9.x86_64

libosinfo-1.10.0-1.el9.x86_64

libosinfo-1.10.0-1.el9.x86_64

libpath_utils-0.2.1-53.el9.x86_64

libpath_utils-0.2.1-53.el9.x86_64

libpath_utils-0.2.1-53.el9.x86_64

libpeas-1.30.0-4.el9.x86_64

libpeas-1.30.0-4.el9.x86_64

libpeas-1.30.0-4.el9.x86_64

libpipeline-1.5.3-4.el9.x86_64

libpipeline-1.5.3-4.el9.x86_64

libpipeline-1.5.3-4.el9.x86_64

libpkgconf-1.7.3-10.el9.x86_64

libpkgconf-1.7.3-10.el9.x86_64

libpkgconf-1.7.3-10.el9.x86_64

libpmem-1.12.1-1.el9.x86_64

libpmem-1.12.1-1.el9.x86_64

libpmem-1.12.1-1.el9.x86_64

libpng-1.6.37-12.el9.x86_64

libpng-1.6.37-12.el9.x86_64

libpng-1.6.37-12.el9.x86_64

libproxy-0.4.15-35.el9.x86_64

libproxy-0.4.15-35.el9.x86_64

libproxy-0.4.15-35.el9.x86_64

libproxy-webkitgtk4-0.4.15-35.el9.x86_64

libproxy-webkitgtk4-0.4.15-35.el9.x86_64

libproxy-webkitgtk4-0.4.15-35.el9.x86_64

libpsl-0.21.1-5.el9.x86_64

libpsl-0.21.1-5.el9.x86_64

libpsl-0.21.1-5.el9.x86_64

libpwquality-1.4.4-8.el9.x86_64

libpwquality-1.4.4-8.el9.x86_64

libpwquality-1.4.4-8.el9.x86_64

librdmacm-48.0-1.el9.x86_64

librdmacm-48.0-1.el9.x86_64

librdmacm-48.0-1.el9.x86_64

libref_array-0.1.5-53.el9.x86_64

libref_array-0.1.5-53.el9.x86_64

libref_array-0.1.5-53.el9.x86_64

librepo-1.14.5-2.el9.x86_64

librepo-1.14.5-2.el9.x86_64

librepo-1.14.5-2.el9.x86_64

libreport-filesystem-2.15.2-6.el9.noarch

libreport-filesystem-2.15.2-6.el9.noarch

libreport-filesystem-2.15.2-6.el9.noarch

librhsm-0.0.3-7.el9_3.1.x86_64

librhsm-0.0.3-7.el9_3.1.x86_64

librhsm-0.0.3-7.el9_3.1.x86_64

libseccomp-2.5.2-2.el9.x86_64

libseccomp-2.5.2-2.el9.x86_64

libseccomp-2.5.2-2.el9.x86_64

libselinux-3.6-1.el9.x86_64

libselinux-3.6-1.el9.x86_64

libselinux-3.6-1.el9.x86_64

libselinux-utils-3.6-1.el9.x86_64

libselinux-utils-3.6-1.el9.x86_64

libselinux-utils-3.6-1.el9.x86_64

libsemanage-3.6-1.el9.x86_64

libsemanage-3.6-1.el9.x86_64

libsemanage-3.6-1.el9.x86_64

libsepol-3.6-1.el9.x86_64

libsepol-3.6-1.el9.x86_64

libsepol-3.6-1.el9.x86_64

libsigsegv-2.13-4.el9.x86_64

libsigsegv-2.13-4.el9.x86_64

libsigsegv-2.13-4.el9.x86_64

libslirp-4.4.0-7.el9.x86_64

libslirp-4.4.0-7.el9.x86_64

libslirp-4.4.0-7.el9.x86_64

libsmartcols-2.37.4-18.el9.x86_64

libsmartcols-2.37.4-18.el9.x86_64

libsmartcols-2.37.4-18.el9.x86_64

libsolv-0.7.24-2.el9.x86_64

libsolv-0.7.24-2.el9.x86_64

libsolv-0.7.24-2.el9.x86_64

libsoup-2.72.0-8.el9.x86_64

libsoup-2.72.0-8.el9.x86_64

libsoup-2.72.0-8.el9.x86_64

libss-1.46.5-5.el9.x86_64

libss-1.46.5-5.el9.x86_64

libss-1.46.5-5.el9.x86_64

libssh-0.10.4-13.el9.x86_64

libssh-0.10.4-13.el9.x86_64

libssh-0.10.4-13.el9.x86_64

libssh-config-0.10.4-13.el9.noarch

libssh-config-0.10.4-13.el9.noarch

libssh-config-0.10.4-13.el9.noarch

libstdc++-11.4.1-3.el9.x86_64

libstdc++-11.4.1-3.el9.x86_64

libstdc++-11.4.1-3.el9.x86_64

libtasn1-4.16.0-8.el9_1.x86_64

libtasn1-4.16.0-8.el9_1.x86_64

libtasn1-4.16.0-8.el9_1.x86_64

libtirpc-1.3.3-8.el9_4.x86_64

libtirpc-1.3.3-8.el9_4.x86_64

libtirpc-1.3.3-8.el9_4.x86_64

libtpms-0.9.1-3.20211126git1ff6fe1f43.el9_2.x86_64

libtpms-0.9.1-4.20211126git1ff6fe1f43.el9_2.x86_64

libtpms-0.9.1-4.20211126git1ff6fe1f43.el9_2.x86_64

libunistring-0.9.10-15.el9.x86_64

libunistring-0.9.10-15.el9.x86_64

libunistring-0.9.10-15.el9.x86_64

liburing-2.5-1.el9.x86_64

liburing-2.5-1.el9.x86_64

liburing-2.5-1.el9.x86_64

libusbx-1.0.26-1.el9.x86_64

libusbx-1.0.26-1.el9.x86_64

libusbx-1.0.26-1.el9.x86_64

libutempter-1.2.1-6.el9.x86_64

libutempter-1.2.1-6.el9.x86_64

libutempter-1.2.1-6.el9.x86_64

libuuid-2.37.4-18.el9.x86_64

libuuid-2.37.4-18.el9.x86_64

libuuid-2.37.4-18.el9.x86_64

libverto-0.3.2-3.el9.x86_64

libverto-0.3.2-3.el9.x86_64

libverto-0.3.2-3.el9.x86_64

libverto-libev-0.3.2-3.el9.x86_64

libverto-libev-0.3.2-3.el9.x86_64

libverto-libev-0.3.2-3.el9.x86_64

libvirt-client-10.0.0-6.7.el9_4.x86_64

libvirt-client-10.0.0-6.7.el9_4.x86_64

libvirt-client-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-common-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-common-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-common-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-config-network-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-config-network-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-config-network-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-network-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-network-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-network-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-qemu-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-qemu-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-qemu-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-secret-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-secret-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-secret-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-storage-core-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-storage-core-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-storage-core-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-log-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-log-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-log-10.0.0-6.7.el9_4.x86_64

libvirt-libs-10.0.0-6.7.el9_4.x86_64

libvirt-libs-10.0.0-6.7.el9_4.x86_64

libvirt-libs-10.0.0-6.7.el9_4.x86_64

libxcrypt-4.4.18-3.el9.x86_64

libxcrypt-4.4.18-3.el9.x86_64

libxcrypt-4.4.18-3.el9.x86_64

libxcrypt-compat-4.4.18-3.el9.x86_64

libxcrypt-compat-4.4.18-3.el9.x86_64

libxcrypt-compat-4.4.18-3.el9.x86_64

libxml2-2.9.13-6.el9_4.x86_64

libxml2-2.9.13-6.el9_4.x86_64

libxml2-2.9.13-6.el9_4.x86_64

libxslt-1.1.34-9.el9.x86_64

libxslt-1.1.34-9.el9.x86_64

libxslt-1.1.34-9.el9.x86_64

libyaml-0.2.5-7.el9.x86_64

libyaml-0.2.5-7.el9.x86_64

libyaml-0.2.5-7.el9.x86_64

libzstd-1.5.1-2.el9.x86_64

libzstd-1.5.1-2.el9.x86_64

libzstd-1.5.1-2.el9.x86_64

linux-firmware-20240716-143.2.el9_4.noarch

linux-firmware-20240905-143.3.el9_4.noarch

linux-firmware-20240905-143.3.el9_4.noarch

linux-firmware-whence-20240716-143.2.el9_4.noarch

linux-firmware-whence-20240905-143.3.el9_4.noarch

linux-firmware-whence-20240905-143.3.el9_4.noarch

lsscsi-0.32-6.el9.x86_64

lsscsi-0.32-6.el9.x86_64

lsscsi-0.32-6.el9.x86_64

lua-libs-5.4.4-4.el9.x86_64

lua-libs-5.4.4-4.el9.x86_64

lua-libs-5.4.4-4.el9.x86_64

lua-srpm-macros-1-6.el9.noarch

lua-srpm-macros-1-6.el9.noarch

lua-srpm-macros-1-6.el9.noarch

luksmeta-9-12.el9.x86_64

luksmeta-9-12.el9.x86_64

luksmeta-9-12.el9.x86_64

lvm2-2.03.23-2.el9.x86_64

lvm2-2.03.23-2.el9.x86_64

lvm2-2.03.23-2.el9.x86_64

lvm2-libs-2.03.23-2.el9.x86_64

lvm2-libs-2.03.23-2.el9.x86_64

lvm2-libs-2.03.23-2.el9.x86_64

lz4-libs-1.9.3-5.el9.x86_64

lz4-libs-1.9.3-5.el9.x86_64

lz4-libs-1.9.3-5.el9.x86_64

lzo-2.10-7.el9.x86_64

lzo-2.10-7.el9.x86_64

lzo-2.10-7.el9.x86_64

lzop-1.04-8.el9.x86_64

lzop-1.04-8.el9.x86_64

lzop-1.04-8.el9.x86_64

man-db-2.9.3-7.el9.x86_64

man-db-2.9.3-7.el9.x86_64

man-db-2.9.3-7.el9.x86_64

mdadm-4.2-14.el9_4.x86_64

mdadm-4.2-14.el9_4.x86_64

mdadm-4.2-14.el9_4.x86_64

microdnf-3.9.1-3.el9.x86_64

microdnf-3.9.1-3.el9.x86_64

microdnf-3.9.1-3.el9.x86_64

mingw-binutils-generic-2.41-3.el9.x86_64

mingw-binutils-generic-2.41-3.el9.x86_64

mingw-binutils-generic-2.41-3.el9.x86_64

mingw-filesystem-base-148-3.el9.noarch

mingw-filesystem-base-148-3.el9.noarch

mingw-filesystem-base-148-3.el9.noarch

mingw32-crt-11.0.1-3.el9.noarch

mingw32-crt-11.0.1-3.el9.noarch

mingw32-crt-11.0.1-3.el9.noarch

mingw32-filesystem-148-3.el9.noarch

mingw32-filesystem-148-3.el9.noarch

mingw32-filesystem-148-3.el9.noarch

mingw32-srvany-1.1-3.el9.noarch

mingw32-srvany-1.1-3.el9.noarch

mingw32-srvany-1.1-3.el9.noarch

mpfr-4.1.0-7.el9.x86_64

mpfr-4.1.0-7.el9.x86_64

mpfr-4.1.0-7.el9.x86_64

mtools-4.0.26-4.el9_0.x86_64

mtools-4.0.26-4.el9_0.x86_64

mtools-4.0.26-4.el9_0.x86_64

nbdkit-1.36.2-1.el9.x86_64

nbdkit-1.36.2-1.el9.x86_64

nbdkit-1.36.2-1.el9.x86_64

nbdkit-basic-filters-1.36.2-1.el9.x86_64

nbdkit-basic-filters-1.36.2-1.el9.x86_64

nbdkit-basic-filters-1.36.2-1.el9.x86_64

nbdkit-basic-plugins-1.36.2-1.el9.x86_64

nbdkit-basic-plugins-1.36.2-1.el9.x86_64

nbdkit-basic-plugins-1.36.2-1.el9.x86_64

nbdkit-curl-plugin-1.36.2-1.el9.x86_64

nbdkit-curl-plugin-1.36.2-1.el9.x86_64

nbdkit-curl-plugin-1.36.2-1.el9.x86_64

nbdkit-nbd-plugin-1.36.2-1.el9.x86_64

nbdkit-nbd-plugin-1.36.2-1.el9.x86_64

nbdkit-nbd-plugin-1.36.2-1.el9.x86_64

nbdkit-python-plugin-1.36.2-1.el9.x86_64

nbdkit-python-plugin-1.36.2-1.el9.x86_64

nbdkit-python-plugin-1.36.2-1.el9.x86_64

nbdkit-server-1.36.2-1.el9.x86_64

nbdkit-server-1.36.2-1.el9.x86_64

nbdkit-server-1.36.2-1.el9.x86_64

nbdkit-ssh-plugin-1.36.2-1.el9.x86_64

nbdkit-ssh-plugin-1.36.2-1.el9.x86_64

nbdkit-ssh-plugin-1.36.2-1.el9.x86_64

nbdkit-vddk-plugin-1.36.2-1.el9.x86_64

nbdkit-vddk-plugin-1.36.2-1.el9.x86_64

nbdkit-vddk-plugin-1.36.2-1.el9.x86_64

ncurses-6.2-10.20210508.el9.x86_64

ncurses-6.2-10.20210508.el9.x86_64

ncurses-6.2-10.20210508.el9.x86_64

ncurses-base-6.2-10.20210508.el9.noarch

ncurses-base-6.2-10.20210508.el9.noarch

ncurses-base-6.2-10.20210508.el9.noarch

ncurses-libs-6.2-10.20210508.el9.x86_64

ncurses-libs-6.2-10.20210508.el9.x86_64

ncurses-libs-6.2-10.20210508.el9.x86_64

ndctl-libs-71.1-8.el9.x86_64

ndctl-libs-71.1-8.el9.x86_64

ndctl-libs-71.1-8.el9.x86_64

nettle-3.9.1-1.el9.x86_64

nettle-3.9.1-1.el9.x86_64

nettle-3.9.1-1.el9.x86_64

nfs-utils-2.5.4-26.el9_4.x86_64

nfs-utils-2.5.4-26.el9_4.x86_64

nfs-utils-2.5.4-26.el9_4.x86_64

npth-1.6-8.el9.x86_64

npth-1.6-8.el9.x86_64

npth-1.6-8.el9.x86_64

numactl-libs-2.0.16-3.el9.x86_64

numactl-libs-2.0.16-3.el9.x86_64

numactl-libs-2.0.16-3.el9.x86_64

numad-0.5-37.20150602git.el9.x86_64

numad-0.5-37.20150602git.el9.x86_64

numad-0.5-37.20150602git.el9.x86_64

ocaml-srpm-macros-6-6.el9.noarch

ocaml-srpm-macros-6-6.el9.noarch

ocaml-srpm-macros-6-6.el9.noarch

oniguruma-6.9.6-1.el9.5.x86_64

oniguruma-6.9.6-1.el9.5.x86_64

oniguruma-6.9.6-1.el9.5.x86_64

openblas-srpm-macros-2-11.el9.noarch

openblas-srpm-macros-2-11.el9.noarch

openblas-srpm-macros-2-11.el9.noarch

openldap-2.6.6-3.el9.x86_64

openldap-2.6.6-3.el9.x86_64

openldap-2.6.6-3.el9.x86_64

openssh-8.7p1-38.el9_4.4.x86_64

openssh-8.7p1-38.el9_4.4.x86_64

openssh-8.7p1-38.el9_4.4.x86_64

openssh-clients-8.7p1-38.el9_4.4.x86_64

openssh-clients-8.7p1-38.el9_4.4.x86_64

openssh-clients-8.7p1-38.el9_4.4.x86_64

openssl-3.0.7-28.el9_4.x86_64

openssl-3.0.7-28.el9_4.x86_64

openssl-3.0.7-28.el9_4.x86_64

openssl-fips-provider-3.0.7-2.el9.x86_64

openssl-fips-provider-3.0.7-2.el9.x86_64

openssl-fips-provider-3.0.7-2.el9.x86_64

openssl-libs-3.0.7-28.el9_4.x86_64

openssl-libs-3.0.7-28.el9_4.x86_64

openssl-libs-3.0.7-28.el9_4.x86_64

osinfo-db-20231215-1.el9.noarch

osinfo-db-20231215-1.el9.noarch

osinfo-db-20231215-1.el9.noarch

osinfo-db-tools-1.10.0-1.el9.x86_64

osinfo-db-tools-1.10.0-1.el9.x86_64

osinfo-db-tools-1.10.0-1.el9.x86_64

p11-kit-0.25.3-2.el9.x86_64

p11-kit-0.25.3-2.el9.x86_64

p11-kit-0.25.3-2.el9.x86_64

p11-kit-trust-0.25.3-2.el9.x86_64

p11-kit-trust-0.25.3-2.el9.x86_64

p11-kit-trust-0.25.3-2.el9.x86_64

pam-1.5.1-19.el9.x86_64

pam-1.5.1-19.el9.x86_64

pam-1.5.1-19.el9.x86_64

parted-3.5-2.el9.x86_64

parted-3.5-2.el9.x86_64

parted-3.5-2.el9.x86_64

passt-0^20231204.gb86afe3-1.el9.x86_64

passt-0^20231204.gb86afe3-1.el9.x86_64

passt-0^20231204.gb86afe3-1.el9.x86_64

passt-selinux-0^20231204.gb86afe3-1.el9.noarch

passt-selinux-0^20231204.gb86afe3-1.el9.noarch

passt-selinux-0^20231204.gb86afe3-1.el9.noarch

pcre-8.44-3.el9.3.x86_64

pcre-8.44-3.el9.3.x86_64

pcre-8.44-3.el9.3.x86_64

pcre2-10.40-5.el9.x86_64

pcre2-10.40-5.el9.x86_64

pcre2-10.40-5.el9.x86_64

pcre2-syntax-10.40-5.el9.noarch

pcre2-syntax-10.40-5.el9.noarch

pcre2-syntax-10.40-5.el9.noarch

perl-AutoLoader-5.74-481.el9.noarch

perl-AutoLoader-5.74-481.el9.noarch

perl-AutoLoader-5.74-481.el9.noarch

perl-B-1.80-481.el9.x86_64

perl-B-1.80-481.el9.x86_64

perl-B-1.80-481.el9.x86_64

perl-base-2.27-481.el9.noarch

perl-base-2.27-481.el9.noarch

perl-base-2.27-481.el9.noarch

perl-Carp-1.50-460.el9.noarch

perl-Carp-1.50-460.el9.noarch

perl-Carp-1.50-460.el9.noarch

perl-Class-Struct-0.66-481.el9.noarch

perl-Class-Struct-0.66-481.el9.noarch

perl-Class-Struct-0.66-481.el9.noarch

perl-constant-1.33-461.el9.noarch

perl-constant-1.33-461.el9.noarch

perl-constant-1.33-461.el9.noarch

perl-Data-Dumper-2.174-462.el9.x86_64

perl-Data-Dumper-2.174-462.el9.x86_64

perl-Data-Dumper-2.174-462.el9.x86_64

perl-Digest-1.19-4.el9.noarch

perl-Digest-1.19-4.el9.noarch

perl-Digest-1.19-4.el9.noarch

perl-Digest-MD5-2.58-4.el9.x86_64

perl-Digest-MD5-2.58-4.el9.x86_64

perl-Digest-MD5-2.58-4.el9.x86_64

perl-Encode-3.08-462.el9.x86_64

perl-Encode-3.08-462.el9.x86_64

perl-Encode-3.08-462.el9.x86_64

perl-Errno-1.30-481.el9.x86_64

perl-Errno-1.30-481.el9.x86_64

perl-Errno-1.30-481.el9.x86_64

perl-Exporter-5.74-461.el9.noarch

perl-Exporter-5.74-461.el9.noarch

perl-Exporter-5.74-461.el9.noarch

perl-Fcntl-1.13-481.el9.x86_64

perl-Fcntl-1.13-481.el9.x86_64

perl-Fcntl-1.13-481.el9.x86_64

perl-File-Basename-2.85-481.el9.noarch

perl-File-Basename-2.85-481.el9.noarch

perl-File-Basename-2.85-481.el9.noarch

perl-File-Path-2.18-4.el9.noarch

perl-File-Path-2.18-4.el9.noarch

perl-File-Path-2.18-4.el9.noarch

perl-File-stat-1.09-481.el9.noarch

perl-File-stat-1.09-481.el9.noarch

perl-File-stat-1.09-481.el9.noarch

perl-File-Temp-0.231.100-4.el9.noarch

perl-File-Temp-0.231.100-4.el9.noarch

perl-File-Temp-0.231.100-4.el9.noarch

perl-FileHandle-2.03-481.el9.noarch

perl-FileHandle-2.03-481.el9.noarch

perl-FileHandle-2.03-481.el9.noarch

perl-Getopt-Long-2.52-4.el9.noarch

perl-Getopt-Long-2.52-4.el9.noarch

perl-Getopt-Long-2.52-4.el9.noarch

perl-Getopt-Std-1.12-481.el9.noarch

perl-Getopt-Std-1.12-481.el9.noarch

perl-Getopt-Std-1.12-481.el9.noarch

perl-HTTP-Tiny-0.076-462.el9.noarch

perl-HTTP-Tiny-0.076-462.el9.noarch

perl-HTTP-Tiny-0.076-462.el9.noarch

perl-if-0.60.800-481.el9.noarch

perl-if-0.60.800-481.el9.noarch

perl-if-0.60.800-481.el9.noarch

perl-interpreter-5.32.1-481.el9.x86_64

perl-interpreter-5.32.1-481.el9.x86_64

perl-interpreter-5.32.1-481.el9.x86_64

perl-IO-1.43-481.el9.x86_64

perl-IO-1.43-481.el9.x86_64

perl-IO-1.43-481.el9.x86_64

perl-IO-Socket-IP-0.41-5.el9.noarch

perl-IO-Socket-IP-0.41-5.el9.noarch

perl-IO-Socket-IP-0.41-5.el9.noarch

perl-IO-Socket-SSL-2.073-1.el9.noarch

perl-IO-Socket-SSL-2.073-1.el9.noarch

perl-IO-Socket-SSL-2.073-1.el9.noarch

perl-IPC-Open3-1.21-481.el9.noarch

perl-IPC-Open3-1.21-481.el9.noarch

perl-IPC-Open3-1.21-481.el9.noarch

perl-libnet-3.13-4.el9.noarch

perl-libnet-3.13-4.el9.noarch

perl-libnet-3.13-4.el9.noarch

perl-libs-5.32.1-481.el9.x86_64

perl-libs-5.32.1-481.el9.x86_64

perl-libs-5.32.1-481.el9.x86_64

perl-MIME-Base64-3.16-4.el9.x86_64

perl-MIME-Base64-3.16-4.el9.x86_64

perl-MIME-Base64-3.16-4.el9.x86_64

perl-Mozilla-CA-20200520-6.el9.noarch

perl-Mozilla-CA-20200520-6.el9.noarch

perl-Mozilla-CA-20200520-6.el9.noarch

perl-mro-1.23-481.el9.x86_64

perl-mro-1.23-481.el9.x86_64

perl-mro-1.23-481.el9.x86_64

perl-NDBM_File-1.15-481.el9.x86_64

perl-NDBM_File-1.15-481.el9.x86_64

perl-NDBM_File-1.15-481.el9.x86_64

perl-Net-SSLeay-1.92-2.el9.x86_64

perl-Net-SSLeay-1.92-2.el9.x86_64

perl-Net-SSLeay-1.92-2.el9.x86_64

perl-overload-1.31-481.el9.noarch

perl-overload-1.31-481.el9.noarch

perl-overload-1.31-481.el9.noarch

perl-overloading-0.02-481.el9.noarch

perl-overloading-0.02-481.el9.noarch

perl-overloading-0.02-481.el9.noarch

perl-parent-0.238-460.el9.noarch

perl-parent-0.238-460.el9.noarch

perl-parent-0.238-460.el9.noarch

perl-PathTools-3.78-461.el9.x86_64

perl-PathTools-3.78-461.el9.x86_64

perl-PathTools-3.78-461.el9.x86_64

perl-Pod-Escapes-1.07-460.el9.noarch

perl-Pod-Escapes-1.07-460.el9.noarch

perl-Pod-Escapes-1.07-460.el9.noarch

perl-Pod-Perldoc-3.28.01-461.el9.noarch

perl-Pod-Perldoc-3.28.01-461.el9.noarch

perl-Pod-Perldoc-3.28.01-461.el9.noarch

perl-Pod-Simple-3.42-4.el9.noarch

perl-Pod-Simple-3.42-4.el9.noarch

perl-Pod-Simple-3.42-4.el9.noarch

perl-Pod-Usage-2.01-4.el9.noarch

perl-Pod-Usage-2.01-4.el9.noarch

perl-Pod-Usage-2.01-4.el9.noarch

perl-podlators-4.14-460.el9.noarch

perl-podlators-4.14-460.el9.noarch

perl-podlators-4.14-460.el9.noarch

perl-POSIX-1.94-481.el9.x86_64

perl-POSIX-1.94-481.el9.x86_64

perl-POSIX-1.94-481.el9.x86_64

perl-Scalar-List-Utils-1.56-461.el9.x86_64

perl-Scalar-List-Utils-1.56-461.el9.x86_64

perl-Scalar-List-Utils-1.56-461.el9.x86_64

perl-SelectSaver-1.02-481.el9.noarch

perl-SelectSaver-1.02-481.el9.noarch

perl-SelectSaver-1.02-481.el9.noarch

perl-Socket-2.031-4.el9.x86_64

perl-Socket-2.031-4.el9.x86_64

perl-Socket-2.031-4.el9.x86_64

perl-srpm-macros-1-41.el9.noarch

perl-srpm-macros-1-41.el9.noarch

perl-srpm-macros-1-41.el9.noarch

perl-Storable-3.21-460.el9.x86_64

perl-Storable-3.21-460.el9.x86_64

perl-Storable-3.21-460.el9.x86_64

perl-subs-1.03-481.el9.noarch

perl-subs-1.03-481.el9.noarch

perl-subs-1.03-481.el9.noarch

perl-Symbol-1.08-481.el9.noarch

perl-Symbol-1.08-481.el9.noarch

perl-Symbol-1.08-481.el9.noarch

perl-Term-ANSIColor-5.01-461.el9.noarch

perl-Term-ANSIColor-5.01-461.el9.noarch

perl-Term-ANSIColor-5.01-461.el9.noarch

perl-Term-Cap-1.17-460.el9.noarch

perl-Term-Cap-1.17-460.el9.noarch

perl-Term-Cap-1.17-460.el9.noarch

perl-Text-ParseWords-3.30-460.el9.noarch

perl-Text-ParseWords-3.30-460.el9.noarch

perl-Text-ParseWords-3.30-460.el9.noarch

perl-Text-Tabs+Wrap-2013.0523-460.el9.noarch

perl-Text-Tabs+Wrap-2013.0523-460.el9.noarch

perl-Text-Tabs+Wrap-2013.0523-460.el9.noarch

perl-Time-Local-1.300-7.el9.noarch

perl-Time-Local-1.300-7.el9.noarch

perl-Time-Local-1.300-7.el9.noarch

perl-URI-5.09-3.el9.noarch

perl-URI-5.09-3.el9.noarch

perl-URI-5.09-3.el9.noarch

perl-vars-1.05-481.el9.noarch

perl-vars-1.05-481.el9.noarch

perl-vars-1.05-481.el9.noarch

pigz-2.5-4.el9.x86_64

pigz-2.5-4.el9.x86_64

pigz-2.5-4.el9.x86_64

pixman-0.40.0-6.el9.x86_64

pixman-0.40.0-6.el9.x86_64

pixman-0.40.0-6.el9.x86_64

pkgconf-1.7.3-10.el9.x86_64

pkgconf-1.7.3-10.el9.x86_64

pkgconf-1.7.3-10.el9.x86_64

policycoreutils-3.6-2.1.el9.x86_64

policycoreutils-3.6-2.1.el9.x86_64

policycoreutils-3.6-2.1.el9.x86_64

policycoreutils-python-utils-3.6-2.1.el9.noarch

policycoreutils-python-utils-3.6-2.1.el9.noarch

policycoreutils-python-utils-3.6-2.1.el9.noarch

polkit-0.117-11.el9.x86_64

polkit-0.117-11.el9.x86_64

polkit-0.117-11.el9.x86_64

polkit-libs-0.117-11.el9.x86_64

polkit-libs-0.117-11.el9.x86_64

polkit-libs-0.117-11.el9.x86_64

polkit-pkla-compat-0.1-21.el9.x86_64

polkit-pkla-compat-0.1-21.el9.x86_64

polkit-pkla-compat-0.1-21.el9.x86_64

popt-1.18-8.el9.x86_64

popt-1.18-8.el9.x86_64

popt-1.18-8.el9.x86_64

procps-ng-3.3.17-14.el9.x86_64

procps-ng-3.3.17-14.el9.x86_64

procps-ng-3.3.17-14.el9.x86_64

protobuf-c-1.3.3-13.el9.x86_64

protobuf-c-1.3.3-13.el9.x86_64

protobuf-c-1.3.3-13.el9.x86_64

psmisc-23.4-3.el9.x86_64

psmisc-23.4-3.el9.x86_64

psmisc-23.4-3.el9.x86_64

publicsuffix-list-dafsa-20210518-3.el9.noarch

publicsuffix-list-dafsa-20210518-3.el9.noarch

publicsuffix-list-dafsa-20210518-3.el9.noarch

pyproject-srpm-macros-1.12.0-1.el9.noarch

pyproject-srpm-macros-1.12.0-1.el9.noarch

pyproject-srpm-macros-1.12.0-1.el9.noarch

python-srpm-macros-3.9-53.el9.noarch

python-srpm-macros-3.9-53.el9.noarch

python-srpm-macros-3.9-53.el9.noarch

python-unversioned-command-3.9.18-3.el9_4.5.noarch

python-unversioned-command-3.9.18-3.el9_4.5.noarch

python-unversioned-command-3.9.18-3.el9_4.5.noarch

python3-3.9.18-3.el9_4.5.x86_64

python3-3.9.18-3.el9_4.5.x86_64

python3-3.9.18-3.el9_4.5.x86_64

python3-audit-3.1.2-2.el9.x86_64

python3-audit-3.1.2-2.el9.x86_64

python3-audit-3.1.2-2.el9.x86_64

python3-distro-1.5.0-7.el9.noarch

python3-distro-1.5.0-7.el9.noarch

python3-distro-1.5.0-7.el9.noarch

python3-libs-3.9.18-3.el9_4.5.x86_64

python3-libs-3.9.18-3.el9_4.5.x86_64

python3-libs-3.9.18-3.el9_4.5.x86_64

python3-libselinux-3.6-1.el9.x86_64

python3-libselinux-3.6-1.el9.x86_64

python3-libselinux-3.6-1.el9.x86_64

python3-libsemanage-3.6-1.el9.x86_64

python3-libsemanage-3.6-1.el9.x86_64

python3-libsemanage-3.6-1.el9.x86_64

python3-pip-wheel-21.2.3-8.el9.noarch

python3-pip-wheel-21.2.3-8.el9.noarch

python3-pip-wheel-21.2.3-8.el9.noarch

python3-policycoreutils-3.6-2.1.el9.noarch

python3-policycoreutils-3.6-2.1.el9.noarch

python3-policycoreutils-3.6-2.1.el9.noarch

python3-pyyaml-5.4.1-6.el9.x86_64

python3-pyyaml-5.4.1-6.el9.x86_64

python3-pyyaml-5.4.1-6.el9.x86_64

python3-setools-4.4.4-1.el9.x86_64

python3-setools-4.4.4-1.el9.x86_64

python3-setools-4.4.4-1.el9.x86_64

python3-setuptools-53.0.0-12.el9_4.1.noarch

python3-setuptools-53.0.0-12.el9_4.1.noarch

python3-setuptools-53.0.0-12.el9_4.1.noarch

python3-setuptools-wheel-53.0.0-12.el9_4.1.noarch

python3-setuptools-wheel-53.0.0-12.el9_4.1.noarch

python3-setuptools-wheel-53.0.0-12.el9_4.1.noarch

qemu-img-8.2.0-11.el9_4.6.x86_64

qemu-img-8.2.0-11.el9_4.6.x86_64

qemu-img-8.2.0-11.el9_4.6.x86_64

qemu-kvm-common-8.2.0-11.el9_4.6.x86_64

qemu-kvm-common-8.2.0-11.el9_4.6.x86_64

qemu-kvm-common-8.2.0-11.el9_4.6.x86_64

qemu-kvm-core-8.2.0-11.el9_4.6.x86_64

qemu-kvm-core-8.2.0-11.el9_4.6.x86_64

qemu-kvm-core-8.2.0-11.el9_4.6.x86_64

qt5-srpm-macros-5.15.9-1.el9.noarch

qt5-srpm-macros-5.15.9-1.el9.noarch

qt5-srpm-macros-5.15.9-1.el9.noarch

quota-4.06-6.el9.x86_64

quota-4.06-6.el9.x86_64

quota-4.06-6.el9.x86_64

quota-nls-4.06-6.el9.noarch

quota-nls-4.06-6.el9.noarch

quota-nls-4.06-6.el9.noarch

readline-8.1-4.el9.x86_64

readline-8.1-4.el9.x86_64

readline-8.1-4.el9.x86_64

redhat-release-9.4-0.5.el9.x86_64

redhat-release-9.4-0.5.el9.x86_64

redhat-release-9.4-0.5.el9.x86_64

redhat-rpm-config-207-1.el9.noarch

redhat-rpm-config-207-1.el9.noarch

redhat-rpm-config-207-1.el9.noarch

rootfiles-8.1-31.el9.noarch

rootfiles-8.1-31.el9.noarch

rootfiles-8.1-31.el9.noarch

rpcbind-1.2.6-7.el9.x86_64

rpcbind-1.2.6-7.el9.x86_64

rpcbind-1.2.6-7.el9.x86_64

rpm-4.16.1.3-29.el9.x86_64

rpm-4.16.1.3-29.el9.x86_64

rpm-4.16.1.3-29.el9.x86_64

rpm-libs-4.16.1.3-29.el9.x86_64

rpm-libs-4.16.1.3-29.el9.x86_64

rpm-libs-4.16.1.3-29.el9.x86_64

rpm-plugin-selinux-4.16.1.3-29.el9.x86_64

rpm-plugin-selinux-4.16.1.3-29.el9.x86_64

rpm-plugin-selinux-4.16.1.3-29.el9.x86_64

rust-srpm-macros-17-4.el9.noarch

rust-srpm-macros-17-4.el9.noarch

rust-srpm-macros-17-4.el9.noarch

scrub-2.6.1-4.el9.x86_64

scrub-2.6.1-4.el9.x86_64

scrub-2.6.1-4.el9.x86_64

seabios-bin-1.16.3-2.el9.noarch

seabios-bin-1.16.3-2.el9.noarch

seabios-bin-1.16.3-2.el9.noarch

seavgabios-bin-1.16.3-2.el9.noarch

seavgabios-bin-1.16.3-2.el9.noarch

seavgabios-bin-1.16.3-2.el9.noarch

sed-4.8-9.el9.x86_64

sed-4.8-9.el9.x86_64

sed-4.8-9.el9.x86_64

selinux-policy-38.1.35-2.el9_4.2.noarch

selinux-policy-38.1.35-2.el9_4.2.noarch

selinux-policy-38.1.35-2.el9_4.2.noarch

selinux-policy-targeted-38.1.35-2.el9_4.2.noarch

selinux-policy-targeted-38.1.35-2.el9_4.2.noarch

selinux-policy-targeted-38.1.35-2.el9_4.2.noarch

setup-2.13.7-10.el9.noarch

setup-2.13.7-10.el9.noarch

setup-2.13.7-10.el9.noarch

shadow-utils-4.9-8.el9.x86_64

shadow-utils-4.9-8.el9.x86_64

shadow-utils-4.9-8.el9.x86_64

snappy-1.1.8-8.el9.x86_64

snappy-1.1.8-8.el9.x86_64

snappy-1.1.8-8.el9.x86_64

sqlite-libs-3.34.1-7.el9_3.x86_64

sqlite-libs-3.34.1-7.el9_3.x86_64

sqlite-libs-3.34.1-7.el9_3.x86_64

squashfs-tools-4.4-10.git1.el9.x86_64

squashfs-tools-4.4-10.git1.el9.x86_64

squashfs-tools-4.4-10.git1.el9.x86_64

supermin-5.3.3-1.el9.x86_64

supermin-5.3.3-1.el9.x86_64

supermin-5.3.3-1.el9.x86_64

swtpm-0.8.0-2.el9_4.x86_64

swtpm-0.8.0-2.el9_4.x86_64

swtpm-0.8.0-2.el9_4.x86_64

swtpm-libs-0.8.0-2.el9_4.x86_64

swtpm-libs-0.8.0-2.el9_4.x86_64

swtpm-libs-0.8.0-2.el9_4.x86_64

swtpm-tools-0.8.0-2.el9_4.x86_64

swtpm-tools-0.8.0-2.el9_4.x86_64

swtpm-tools-0.8.0-2.el9_4.x86_64

syslinux-6.04-0.20.el9.x86_64

syslinux-6.04-0.20.el9.x86_64

syslinux-6.04-0.20.el9.x86_64

syslinux-extlinux-6.04-0.20.el9.x86_64

syslinux-extlinux-6.04-0.20.el9.x86_64

syslinux-extlinux-6.04-0.20.el9.x86_64

syslinux-extlinux-nonlinux-6.04-0.20.el9.noarch

syslinux-extlinux-nonlinux-6.04-0.20.el9.noarch

syslinux-extlinux-nonlinux-6.04-0.20.el9.noarch

syslinux-nonlinux-6.04-0.20.el9.noarch

syslinux-nonlinux-6.04-0.20.el9.noarch

syslinux-nonlinux-6.04-0.20.el9.noarch

systemd-252-32.el9_4.7.x86_64

systemd-252-32.el9_4.7.x86_64

systemd-252-32.el9_4.7.x86_64

systemd-container-252-32.el9_4.7.x86_64

systemd-container-252-32.el9_4.7.x86_64

systemd-container-252-32.el9_4.7.x86_64

systemd-libs-252-32.el9_4.7.x86_64

systemd-libs-252-32.el9_4.7.x86_64

systemd-libs-252-32.el9_4.7.x86_64

systemd-pam-252-32.el9_4.7.x86_64

systemd-pam-252-32.el9_4.7.x86_64

systemd-pam-252-32.el9_4.7.x86_64

systemd-rpm-macros-252-32.el9_4.7.noarch

systemd-rpm-macros-252-32.el9_4.7.noarch

systemd-rpm-macros-252-32.el9_4.7.noarch

systemd-udev-252-32.el9_4.7.x86_64

systemd-udev-252-32.el9_4.7.x86_64

systemd-udev-252-32.el9_4.7.x86_64

tar-1.34-6.el9_4.1.x86_64

tar-1.34-6.el9_4.1.x86_64

tar-1.34-6.el9_4.1.x86_64

tpm2-tools-5.2-3.el9.x86_64

tpm2-tools-5.2-3.el9.x86_64

tpm2-tools-5.2-3.el9.x86_64

tpm2-tss-3.2.2-2.el9.x86_64

tpm2-tss-3.2.2-2.el9.x86_64

tpm2-tss-3.2.2-2.el9.x86_64

tzdata-2024a-1.el9.noarch

tzdata-2024a-1.el9.noarch

tzdata-2024a-1.el9.noarch

unbound-libs-1.16.2-3.el9_3.5.x86_64

unbound-libs-1.16.2-3.el9_3.5.x86_64

unbound-libs-1.16.2-3.el9_3.5.x86_64

unzip-6.0-56.el9.x86_64

unzip-6.0-56.el9.x86_64

unzip-6.0-56.el9.x86_64

userspace-rcu-0.12.1-6.el9.x86_64

userspace-rcu-0.12.1-6.el9.x86_64

userspace-rcu-0.12.1-6.el9.x86_64

util-linux-2.37.4-18.el9.x86_64

util-linux-2.37.4-18.el9.x86_64

util-linux-2.37.4-18.el9.x86_64

util-linux-core-2.37.4-18.el9.x86_64

util-linux-core-2.37.4-18.el9.x86_64

util-linux-core-2.37.4-18.el9.x86_64

vim-minimal-8.2.2637-20.el9_1.x86_64

vim-minimal-8.2.2637-20.el9_1.x86_64

vim-minimal-8.2.2637-20.el9_1.x86_64

virt-v2v-2.4.0-4.el9_4.x86_64

virt-v2v-2.4.0-4.el9_4.x86_64

virt-v2v-2.4.0-4.el9_4.x86_64

virtio-win-1.9.40-0.el9_4.noarch

virtio-win-1.9.40-0.el9_4.noarch

virtio-win-1.9.40-0.el9_4.noarch

webkit2gtk3-jsc-2.42.5-1.el9.x86_64

webkit2gtk3-jsc-2.42.5-1.el9.x86_64

webkit2gtk3-jsc-2.46.1-2.el9_4.x86_64

which-2.21-29.el9.x86_64

which-2.21-29.el9.x86_64

which-2.21-29.el9.x86_64

xfsprogs-6.3.0-1.el9.x86_64

xfsprogs-6.3.0-1.el9.x86_64

xfsprogs-6.3.0-1.el9.x86_64

xz-5.2.5-8.el9_0.x86_64

xz-5.2.5-8.el9_0.x86_64

xz-5.2.5-8.el9_0.x86_64

xz-libs-5.2.5-8.el9_0.x86_64

xz-libs-5.2.5-8.el9_0.x86_64

xz-libs-5.2.5-8.el9_0.x86_64

yajl-2.1.0-22.el9.x86_64

yajl-2.1.0-22.el9.x86_64

yajl-2.1.0-22.el9.x86_64

zip-3.0-35.el9.x86_64

zip-3.0-35.el9.x86_64

zip-3.0-35.el9.x86_64

zlib-1.2.11-40.el9.x86_64

zlib-1.2.11-40.el9.x86_64

zlib-1.2.11-40.el9.x86_64

zstd-1.5.1-2.el9.x86_64

zstd-1.5.1-2.el9.x86_64

zstd-1.5.1-2.el9.x86_64

+
+
+ + +
+ + diff --git a/documentation/modules/mtv-overview-page/index.html b/documentation/modules/mtv-overview-page/index.html new file mode 100644 index 00000000000..e49bd510186 --- /dev/null +++ b/documentation/modules/mtv-overview-page/index.html @@ -0,0 +1,214 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

The MTV Overview page

+
+
+
+

The Forklift Overview page displays system-wide information about migrations and a list of Settings you can change.

+
+
+

If you have Administrator privileges, you can access the Overview page by clicking Migration → Overview in the OKD web console.

+
+
+

The Overview page has three tabs:

+
+
+
    +
  • +

    Overview

    +
  • +
  • +

    YAML

    +
  • +
  • +

    Metrics

    +
  • +
+
+
+
+
+

Overview tab

+
+
+

The Overview tab lets you see:

+
+
+
    +
  • +

    Operator: The namespace in which the Forklift Operator is deployed and the status of the Operator

    +
  • +
  • +

    Pods: The name, status, and creation time of each pod that was deployed by the Forklift Operator

    +
  • +
  • +

    Conditions: Status of the Forklift Operator (a CLI alternative is shown after this list):

    +
    +
      +
    • +

      Failure: Last failure. False indicates no failure since deployment.

      +
    • +
    • +

      Running: Whether the Operator is currently running and waiting for the next reconciliation.

      +
    • +
    • +

      Successful: Last successful reconciliation.

      +
    • +
    +
    +
  • +
+
+
+
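
The same condition information can also be read from the command line. A minimal sketch, assuming the default openshift-mtv namespace and that only one ForkliftController resource exists:

oc get forkliftcontroller -n openshift-mtv -o jsonpath='{.items[0].status.conditions}'
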
+
+

YAML tab

+
+
+

The YAML tab displays the ForkliftController custom resource that defines the operation of the Forklift Operator. You can modify the custom resource from this tab.
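
For orientation, a minimal ForkliftController resource might look like the following sketch. The namespace and the spec field shown are illustrative assumptions; the resource deployed in your cluster is authoritative.

apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: openshift-mtv
spec:
  # Maximum number of concurrent VM transfers per ESXi host (assumed default: 20)
  controller_max_vm_inflight: 20
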

+
+
+
+
+

Metrics tab

+
+
+

The Metrics tab lets you see:

+
+
+
    +
  • +

    Migrations: The number of migrations performed using Forklift:

    +
    +
      +
    • +

      Total

      +
    • +
    • +

      Running

      +
    • +
    • +

      Failed

      +
    • +
    • +

      Succeeded

      +
    • +
    • +

      Canceled

      +
    • +
    +
    +
  • +
  • +

    Virtual Machine Migrations: The number of VMs migrated using Forklift:

    +
    +
      +
    • +

      Total

      +
    • +
    • +

      Running

      +
    • +
    • +

      Failed

      +
    • +
    • +

      Succeeded

      +
    • +
    • +

      Canceled

      +
    • +
    +
    +
  • +
+
+
+ + + + + +
+
Note
+
+
+

Since a single migration might involve many virtual machines, the number of migrations performed using Forklift might vary significantly from the number of virtual machines that have been migrated using Forklift.

+
+
+
+
+
    +
  • +

    Chart showing the number of running, failed, and succeeded migrations performed using Forklift for each of the last 7 days

    +
  • +
  • +

    Chart showing the number of running, failed, and succeeded virtual machine migrations performed using Forklift for each of the last 7 days

    +
  • +
+
+
+
+ + +
+ + diff --git a/documentation/modules/mtv-performance-addendum/index.html b/documentation/modules/mtv-performance-addendum/index.html new file mode 100644 index 00000000000..55a1818d6c4 --- /dev/null +++ b/documentation/modules/mtv-performance-addendum/index.html @@ -0,0 +1,291 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift performance addendum

+
+
+
+


+
+
+
+
+

ESXi performance

+
+
+
Single ESXi performance
+

Test migrations were performed using the same ESXi host.

+
+
+

In each iteration, the total number of VMs was increased, to show the impact of concurrent migration on duration.

+
+
+

The results show that migration time scales linearly with the total number of VMs (50 GiB disk, 70% utilization).

+
+
+

The optimal number of VMs per ESXi host is 10.

+
+ + ++++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Single ESXi tests
Test Case Description | MTV | VDDK | max_vm_inflight | Migration Type | Total Duration

cold migration, 10 VMs, Single ESXi, Private Network [1]

2.6

7.0.3

100

cold

0:21:39

cold migration, 20 VMs, Single ESXi, Private Network

2.6

7.0.3

100

cold

0:41:16

cold migration, 30 VMs, Single ESXi, Private Network

2.6

7.0.3

100

cold

1:00:59

cold migration, 40 VMs, Single ESXi, Private Network

2.6

7.0.3

100

cold

1:23:02

cold migration, 50 VMs, Single ESXi, Private Network

2.6

7.0.3

100

cold

1:46:24

cold migration, 80 VMs, Single ESXi, Private Network

2.6

7.0.3

100

cold

2:42:49

cold migration, 100 VMs, Single ESXi, Private Network

2.6

7.0.3

100

cold

3:25:15

+
+
Multi ESXi hosts and single data store
+

In each iteration, the number of ESXi hosts was increased, to show that increasing the number of ESXi hosts improves the migration time (50 GiB disk, 70% utilization).

+
+ + ++++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 2. Multi ESXi hosts and single data store
Test Case Description | MTV | VDDK | max_vm_inflight | Migration Type | Total Duration

cold migration, 100 VMs, Single ESXi, Private Network [2]

2.6

7.0.3

100

cold

3:25:15

cold migration, 100 VMs, 4 ESXs (25 VMs per ESX), Private Network

2.6

7.0.3

100

cold

1:22:27

cold migration, 100 VMs, 5 ESXs (20 VMs per ESX), Private Network, 1 DataStore

2.6

7.0.3

100

cold

1:04:57

+
+
+
+

Different migration network performance

+
+
+

In each iteration, the migration network was changed, using the provider, to find the fastest network for migration.

+
+
+

The results show that there is no degradation using management networks compared to non-management networks when all interfaces and network speeds are the same.

+
+ + ++++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 3. Different migration network tests
Test Case Description | MTV | VDDK | max_vm_inflight | Migration Type | Total Duration

cold migration, 10 VMs, Single ESXi, MGMT Network

2.6

7.0.3

100

cold

0:21:30

cold migration, 10 VMs, Single ESXi, Private Network [3]

2.6

7.0.3

20

cold

0:21:20

cold migration, 10 VMs, Single ESXi, Default Network

2.6.2

7.0.3

20

cold

0:21:30

+
+
+
+
+
+1. Private Network refers to a non-Management network +
+
+2. Private Network refers to a non-Management network +
+
+3. Private Network refers to a non-Management network +
+
+ + +
+ + diff --git a/documentation/modules/mtv-performance-recommendation/index.html b/documentation/modules/mtv-performance-recommendation/index.html new file mode 100644 index 00000000000..fc3155d8989 --- /dev/null +++ b/documentation/modules/mtv-performance-recommendation/index.html @@ -0,0 +1,382 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift performance recommendations

+
+
+
+

The purpose of this section is to share recommendations for efficient and effective migration of virtual machines (VMs) using Forklift, based on findings observed through testing.

+
+
+


+
+
+
+
+

Ensure fast storage and network speeds

+
+
+

Ensure fast storage and network speeds, both for VMware and OKD (OCP) environments.

+
+
+
    +
  • +

    To perform fast migrations, VMware must have fast read access to datastores. Networking between VMware ESXi hosts should be fast; ensure a 10 GbE network connection and avoid network bottlenecks.

    +
    +
      +
    • +

      Extend the VMware network to the OCP Workers Interface network environment.

      +
    • +
    • +

      It is important to ensure that the VMware network offers high throughput (10 Gigabit Ethernet) and rapid networking to guarantee that the reception rates align with the read rate of the ESXi datastore.

      +
    • +
    • +

      Be aware that the migration process uses significant bandwidth on the migration network. If other services utilize that network, migration may have an impact on those services and on migration rates; a quick bandwidth check is sketched after this list.

      +
    • +
    • +

      For example, 200 to 325 MiB/s was the average network transfer rate from the vmnic for each ESXi host associated with transferring data to the OCP interface.

      +
    • +
    +
    +
  • +
+
+
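
One way to sanity-check available throughput between the VMware network and an OKD worker is an iperf3 test. This is a sketch, assuming iperf3 is installed on a machine attached to the ESXi network and on the worker; the hostname is a placeholder:

# On the OKD worker (server side)
iperf3 -s

# On a host attached to the VMware/ESXi network (client side): 4 parallel streams, 30 seconds
iperf3 -c worker1.example.com -P 4 -t 30
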
+
+
+

Ensure fast datastore read speeds for efficient and performant migrations.

+
+
+

Datastore read rates impact the total transfer times, so it is essential to ensure fast reads are possible from the ESXi datastore to the ESXi host.

+
+
+

Example in numbers: 200 to 300 MiB/s was the average read rate for both vSphere and ESXi endpoints for a single ESXi server. When multiple ESXi servers are used, higher datastore read rates are possible.

+
+
+
+
+

Endpoint types 

+
+
+

Forklift 2.6 allows for the following vSphere provider options:

+
+
+
    +
  • +

    ESXi endpoint (inventory and disk transfers from ESXi), introduced in Forklift 2.6

    +
  • +
  • +

    vCenter Server endpoint; no networks for the ESXi host (inventory and disk transfers from vCenter)

    +
  • +
  • +

    vCenter endpoint and ESXi networks are available (inventory from vCenter, disk transfers from ESXi).

    +
  • +
+
+
+

When transferring many VMs that are registered to multiple ESXi hosts, using the vCenter endpoint and ESXi network is suggested.
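
As an illustrative sketch, the ESXi endpoint (the first option above) is selected on the vSphere Provider resource. The field names follow the upstream Provider CR, but the URL, secret, and settings key shown here are assumptions to verify against your Forklift version:

apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: vsphere-provider
  namespace: openshift-mtv
spec:
  type: vsphere
  # For the ESXi endpoint, the URL points at the ESXi host SDK rather than vCenter
  url: https://esxi1.example.com/sdk
  secret:
    name: vsphere-credentials
    namespace: openshift-mtv
  settings:
    # 'esxi' transfers inventory and disks from the ESXi host; 'vcenter' uses vCenter
    sdkEndpoint: esxi
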

+
+
+ + + + + +
+
Note
+
+
+

As of vSphere 7.0, ESXi hosts can label which network to use for NBD transport. This is accomplished by tagging the desired virtual network interface card (NIC) with the vSphereBackupNFC label. When this is done, Forklift can utilize the ESXi interface for network transfer to OpenShift, as long as the worker and ESXi host interfaces are reachable. This is especially useful when migration users do not have access to the ESXi credentials yet want to control which ESXi interface is used for migration.

+
+
+

For more details, see: (Forklift-1230)

+
+
+
+
+

You can use the following ESXi command, which designates interface vmk2 for NBD backup:

+
+
+
+
esxcli network ip interface tag add -t vSphereBackupNFC -i vmk2
+
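
To confirm the tag was applied, you can list the tags on the interface; a sketch using the same esxcli namespace:

esxcli network ip interface tag get -i vmk2
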
+
+
+
+
+

Set ESXi hosts BIOS profile and ESXi Host Power Management for High Performance

+
+
+

Where possible, ensure that hosts used to perform migrations are set with BIOS profiles related to maximum performance. For hosts whose power management is controlled within vSphere, check that High Performance is set.

+
+
+

Testing showed that when transferring more than 10 VMs with both BIOS and host power management set accordingly, migrations had an increase of 15 MiB/s in the average datastore read rate.

+
+
+
+
+

Avoid additional network load on VMware networks

+
+
+

You can reduce the network load on VMware networks by selecting the migration network when using the ESXi endpoint.

+
+
+

By incorporating a virtualization provider, Forklift enables the selection of a specific network, accessible on the ESXi hosts, for migrating virtual machines to OCP. Selecting this migration network from the ESXi host in the Forklift UI ensures that the transfer is performed using the selected network as an ESXi endpoint.

+
+
+

It is imperative to ensure that the network selected has connectivity to the OCP interface, has adequate bandwidth for migrations, and that the network interface is not saturated.

+
+
+

In environments with fast networks, such as 10 GbE networks, migration throughput can be expected to match the rate of ESXi datastore reads.
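
When the provider uses a vCenter endpoint, the migration network for a given ESXi host can also be selected declaratively with a Host resource. The following is an illustrative sketch; the field names follow the upstream Host CR, while the host ID, IP address, and secret are placeholder assumptions:

apiVersion: forklift.konveyor.io/v1beta1
kind: Host
metadata:
  name: esxi-host-1
  namespace: openshift-mtv
spec:
  id: host-44              # the ESXi host's managed object ID in vCenter
  ipAddress: 10.0.0.21     # IP on the network to use for disk transfers
  provider:
    name: vsphere-provider
    namespace: openshift-mtv
  secret:
    name: esxi-host-credentials
    namespace: openshift-mtv
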

+
+
+
+
+

Control maximum concurrent disk migrations per ESXi host.

+
+
+

Set the MAX_VM_INFLIGHT MTV variable to control the maximum number of concurrent VM transfers allowed per ESXi host.

+
+
+

Forklift allows for concurrency to be controlled using this variable; by default, it is set to 20.
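
For example, the value can be changed by patching the ForkliftController custom resource; a sketch, assuming the default openshift-mtv namespace and resource name:

oc patch forkliftcontroller/forklift-controller -n openshift-mtv --type merge -p '{"spec": {"controller_max_vm_inflight": 100}}'
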

+
+
+

When setting MAX_VM_INFLIGHT, consider the maximum number of concurrent VM transfers required per ESXi host. It is also important to consider the type of migration being performed concurrently: warm migrations are migrations of a running VM, carried out over a scheduled period of time.

+
+
+

Warm migrations use snapshots to compare and migrate only the differences between previous snapshots of the disk.  The migration of the differences between snapshots happens over specific intervals before a final cut-over of the running VM to OKD occurs. 

+
+
+

In Forklift 2.6, MAX_VM_INFLIGHT reserves one transfer slot per VM, regardless of current migration activity for a specific snapshot or the number of disks that belong to a single VM. The total set by MAX_VM_INFLIGHT indicates how many concurrent VM transfers are allowed per ESXi host.

+
+
+
Examples
+
    +
  • +

    MAX_VM_INFLIGHT = 20 and 2 ESXi hosts defined in the provider mean that each host can transfer 20 VMs concurrently.

    +
  • +
+
+
+
+
+

Migrations are completed faster when migrating multiple VMs concurrently

+
+
+

When multiple VMs from a specific ESXi host are to be migrated, starting concurrent migrations for multiple VMs leads to faster migration times. 

+
+
+

Testing demonstrated that migrating 10 VMs (each with a 50 GiB disk containing 35 GiB of data) concurrently from a single host is significantly faster than migrating the same number of VMs sequentially, one after another.

+
+
+

It is possible to increase concurrent migration to more than 10 virtual machines from a single host, but it does not show a significant improvement. 

+
+
+
Examples
+
    +
  • +

    1 single-disk VM took 6 minutes, with a migration rate of 100 MiB/s

    +
  • +
  • +

    10 single-disk VMs took 22 minutes, with a migration rate of 272 MiB/s

    +
  • +
  • +

    20 single-disk VMs took 42 minutes, with a migration rate of 284 MiB/s

    +
  • +
+
+
+ + + + + +
+
Note
+
+
+

From the aforementioned examples, it is evident that migrating 10 virtual machines simultaneously is three times faster than migrating identical virtual machines sequentially. The figures are self-consistent: 10 VMs × 35 GiB ≈ 350 GiB, which at 272 MiB/s takes roughly 22 minutes.

+
+
+

The migration rate was almost the same when moving 10 or 20 virtual machines simultaneously.

+
+
+
+
+
+
+

Migrations complete faster using multiple hosts.

+
+
+

Distributing the VMs to be migrated equally among multiple ESXi hosts leads to faster migration times.

+
+
+

Testing showed that when transferring more than 10 single-disk VMs, each containing 35 GiB of data out of a 50 GiB total, using additional hosts can reduce migration time.

+
+
+
Examples
+
    +
  • +

    80 single-disk VMs, containing 35 GiB of data each, using a single host took 2 hours and 43 minutes, with a migration rate of 294 MiB/s.

    +
  • +
  • +

    80 single-disk VMs, containing 35 GiB of data each, using 8 ESXi hosts took 41 minutes, with a migration rate of 1,173 MiB/s.

    +
  • +
+
+
+ + + + + +
+
Note
+
+
+

From the aforementioned examples, it is evident that migrating 80 VMs from 8 ESXi hosts, 10 from each host, concurrently is four times faster than running the same VMs from a single ESXi host. 

+
+
+

Migrating a larger number of VMs from more than 8 ESXi hosts concurrently could potentially show increased performance. However, this was not tested and is therefore not recommended.

+
+
+
+
+
+
+

Multiple migration plans compared to a single large migration plan

+
+
+

The maximum number of disks that can be referenced by a single migration plan is 500. For more details, see (MTV-1203).

+
+
+

When attempting to migrate many VMs in a single migration plan, it can take some time for all migrations to start. By breaking one migration plan into several migration plans, it is possible to start them at the same time.

+
+
+

Comparing migrations of:

+
+
+
    +
  • +

    500 VMs using 8 ESXi hosts in 1 plan, max_vm_inflight=100, took 5 hours and 10 minutes.

    +
  • +
  • +

    800 VMs using 8 ESXi hosts with 8 plans, max_vm_inflight=100, took 57 minutes.

    +
  • +
+
+
+

Testing showed that by breaking a single large plan into multiple moderately sized plans, for example, 100 VMs per plan, the total migration time can be reduced.
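For example, if 800 VMs were divided into eight moderately sized Plan CRs named plan-batch-1 through plan-batch-8 (hypothetical names), all eight plans could be started at the same time by creating one Migration CR per plan. This sketch reuses the Migration manifest shape shown later in this documentation:

$ for i in $(seq 1 8); do
cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: migration-batch-$i
  namespace: <namespace>
spec:
  plan:
    name: plan-batch-$i
    namespace: <namespace>
EOF
done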

+
+
+
+
+

Maximum values tested

+
+
+
    +
  • +

    Maximum number of ESXi hosts tested: 8

    +
  • +
  • +

    Maximum number of VMs in a single migration plan: 500

    +
  • +
  • +

    Maximum number of VMs migrated in a single test: 5000

    +
  • +
  • +

    Maximum number of migration plans performed concurrently: 40

    +
  • +
  • +

    Maximum single disk size migrated: 6 TiB disks, which contained 3 TiB of data

    +
  • +
  • +

    Maximum number of disks on a single VM migrated: 50

    +
  • +
  • +

    Highest observed single datastore read rate from a single ESXi server:  312 MiB/second

    +
  • +
  • +

    Highest observed multi-datastore read rate using eight ESXi servers and two datastores: 1,242 MiB/second

    +
  • +
  • +

    Highest observed virtual NIC transfer rate to an {ocp-name} worker: 327 MiB/second

    +
  • +
  • +

    Maximum migration transfer rate of a single disk: 162 MiB/second (rate observed during a nonconcurrent migration of 1.5 TiB of utilized data)

    +
  • +
  • +

    Maximum cold migration transfer rate of multiple VMs (single disk) from a single ESXi host: 294 MiB/s (concurrent migration of 30 VMs, 35 GiB used of 50 GiB each, from a single ESXi host)

    +
  • +
  • +

    Maximum cold migration transfer rate of multiple VMs (single disk) from multiple ESXi hosts: 1,173 MiB/s (concurrent migration of 80 VMs, 35 GiB used of 50 GiB each, from 8 ESXi hosts, 10 VMs from each host)

    +
  • +
+
+
+

For additional details on performance, see the Forklift performance addendum.

+
+
+
+ + +
+ + diff --git a/documentation/modules/mtv-resources-and-services/index.html b/documentation/modules/mtv-resources-and-services/index.html new file mode 100644 index 00000000000..77eae4cac23 --- /dev/null +++ b/documentation/modules/mtv-resources-and-services/index.html @@ -0,0 +1,131 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift custom resources and services

+
+

Forklift is provided as an OKD Operator. It creates and manages the following custom resources (CRs) and services.

+
+
+
Forklift custom resources
+
    +
  • +

    Provider CR stores attributes that enable Forklift to connect to and interact with the source and target providers.

    +
  • +
  • +

    NetworkMapping CR maps the networks of the source and target providers.

    +
  • +
  • +

    StorageMapping CR maps the storage of the source and target providers.

    +
  • +
  • +

    Plan CR contains a list of VMs with the same migration parameters and associated network and storage mappings.

    +
  • +
  • +

    Migration CR runs a migration plan.

    +
    +

    Only one Migration CR per migration plan can run at a given time. You can create multiple Migration CRs for a single Plan CR.

    +
    +
  • +
+
+
+
Forklift services
+
    +
  • +

    The Inventory service performs the following actions:

    +
    +
      +
    • +

      Connects to the source and target providers.

      +
    • +
    • +

      Maintains a local inventory for mappings and plans.

      +
    • +
    • +

      Stores VM configurations.

      +
    • +
    • +

      Runs the Validation service if a VM configuration change is detected.

      +
    • +
    +
    +
  • +
  • +

    The Validation service checks the suitability of a VM for migration by applying rules.

    +
  • +
  • +

    The Migration Controller service orchestrates migrations.

    +
    +

    When you create a migration plan, the Migration Controller service validates the plan and adds a status label. If the plan fails validation, the plan status is Not ready and the plan cannot be used to perform a migration. If the plan passes validation, the plan status is Ready and it can be used to perform a migration. After a successful migration, the Migration Controller service changes the plan status to Completed.

    +
    +
  • +
  • +

    The Populator Controller service orchestrates disk transfers using Volume Populators.

    +
  • +
  • +

    The Kubevirt Controller and Containerized Data Import (CDI) Controller services handle most technical operations.

    +
  • +
+
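To verify these resources on a cluster, you can list them with kubectl. A minimal sketch, assuming the plural resource names (networkmaps, storagemaps, and so on) that also appear in the role definitions used for Forklift RBAC, with the namespace as a placeholder:

$ kubectl get providers,networkmaps,storagemaps,plans,migrations -n <forklift_namespace>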
+ + +
+ + diff --git a/documentation/modules/mtv-selected-packages-2-7/index.html b/documentation/modules/mtv-selected-packages-2-7/index.html new file mode 100644 index 00000000000..8372266dd6a --- /dev/null +++ b/documentation/modules/mtv-selected-packages-2-7/index.html @@ -0,0 +1,207 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift selected packages

+ + ++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Selected Forklift packages
Package summaryForklift 2.7.0Forklift 2.7.2Forklift 2.7.3

The skeleton package which defines a simple Red Hat Enterprise Linux system

basesystem-11-13.el9.noarch

basesystem-11-13.el9.noarch

basesystem-11-13.el9.noarch

Core kernel modules to match the core kernel

kernel-modules-core-5.14.0-427.35.1.el9_4.x86_64

kernel-modules-core-5.14.0-427.37.1.el9_4.x86_64

kernel-modules-core-5.14.0-427.40.1.el9_4.x86_64

The Linux kernel

kernel-core-5.14.0-427.35.1.el9_4.x86_64

kernel-core-5.14.0-427.37.1.el9_4.x86_64

kernel-core-5.14.0-427.40.1.el9_4.x86_64

Access and modify virtual machine disk images

libguestfs-1.50.1-8.el9_4.x86_64

libguestfs-1.50.1-8.el9_4.x86_64

libguestfs-1.50.1-8.el9_4.x86_64

Client side utilities of the libvirt library

libvirt-client-10.0.0-6.7.el9_4.x86_64

libvirt-client-10.0.0-6.7.el9_4.x86_64

libvirt-client-10.0.0-6.7.el9_4.x86_64

Libvirt libraries

libvirt-libs-10.0.0-6.7.el9_4.x86_64

libvirt-libs-10.0.0-6.7.el9_4.x86_64

libvirt-libs-10.0.0-6.7.el9_4.x86_64

QEMU driver plugin for the libvirtd daemon

libvirt-daemon-driver-qemu-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-qemu-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-qemu-10.0.0-6.7.el9_4.x86_64

NBD server

nbdkit-1.36.2-1.el9.x86_64

nbdkit-1.36.2-1.el9.x86_64

nbdkit-1.36.2-1.el9.x86_64

Basic filters for nbdkit

nbdkit-basic-filters-1.36.2-1.el9.x86_64

nbdkit-basic-filters-1.36.2-1.el9.x86_64

nbdkit-basic-filters-1.36.2-1.el9.x86_64

Basic plugins for nbdkit

nbdkit-basic-plugins-1.36.2-1.el9.x86_64

nbdkit-basic-plugins-1.36.2-1.el9.x86_64

nbdkit-basic-plugins-1.36.2-1.el9.x86_64

HTTP/FTP (cURL) plugin for nbdkit

nbdkit-curl-plugin-1.36.2-1.el9.x86_64

nbdkit-curl-plugin-1.36.2-1.el9.x86_64

nbdkit-curl-plugin-1.36.2-1.el9.x86_64

NBD proxy / forward plugin for nbdkit

nbdkit-nbd-plugin-1.36.2-1.el9.x86_64

nbdkit-nbd-plugin-1.36.2-1.el9.x86_64

nbdkit-nbd-plugin-1.36.2-1.el9.x86_64

Python 3 plugin for nbdkit

nbdkit-python-plugin-1.36.2-1.el9.x86_64

nbdkit-python-plugin-1.36.2-1.el9.x86_64

nbdkit-python-plugin-1.36.2-1.el9.x86_64

The nbdkit server

nbdkit-server-1.36.2-1.el9.x86_64

nbdkit-server-1.36.2-1.el9.x86_64

nbdkit-server-1.36.2-1.el9.x86_64

SSH plugin for nbdkit

nbdkit-ssh-plugin-1.36.2-1.el9.x86_64

nbdkit-ssh-plugin-1.36.2-1.el9.x86_64

nbdkit-ssh-plugin-1.36.2-1.el9.x86_64

VMware VDDK plugin for nbdkit

nbdkit-vddk-plugin-1.36.2-1.el9.x86_64

nbdkit-vddk-plugin-1.36.2-1.el9.x86_64

nbdkit-vddk-plugin-1.36.2-1.el9.x86_64

QEMU command line tool for manipulating disk images

qemu-img-8.2.0-11.el9_4.6.x86_64

qemu-img-8.2.0-11.el9_4.6.x86_64

qemu-img-8.2.0-11.el9_4.6.x86_64

QEMU common files needed by all QEMU targets

qemu-kvm-common-8.2.0-11.el9_4.6.x86_64

qemu-kvm-common-8.2.0-11.el9_4.6.x86_64

qemu-kvm-common-8.2.0-11.el9_4.6.x86_64

+

qemu-kvm core components

+

qemu-kvm-core-8.2.0-11.el9_4.6.x86_64

qemu-kvm-core-8.2.0-11.el9_4.6.x86_64

qemu-kvm-core-8.2.0-11.el9_4.6.x86_64

Convert a virtual machine to run on KVM

virt-v2v-2.4.0-4.el9_4.x86_64

virt-v2v-2.4.0-4.el9_4.x86_64

virt-v2v-2.4.0-4.el9_4.x86_64

+ + +
+ + diff --git a/documentation/modules/mtv-settings/index.html b/documentation/modules/mtv-settings/index.html new file mode 100644 index 00000000000..ea0a0a666fe --- /dev/null +++ b/documentation/modules/mtv-settings/index.html @@ -0,0 +1,133 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Configuring MTV settings

+
+

If you have Administrator privileges, you can access the Overview page and change the following settings in it:

+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Forklift settings
SettingDescriptionDefault value

Max concurrent virtual machine migrations

The maximum number of VMs per plan that can be migrated simultaneously

20

Must gather cleanup after (hours)

The duration for retaining must gather reports before they are automatically deleted

Disabled

Controller main container CPU limit

The CPU limit allocated to the main controller container

500m

Controller main container Memory limit

The memory limit allocated to the main controller container

800Mi

Precopy interval (minutes)

The interval at which a new snapshot is requested before initiating a warm migration

60

Snapshot polling interval (seconds)

The frequency with which the system checks the status of snapshot creation or removal during a warm migration

10

+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration > Overview. The Settings list is on the right-hand side of the page.

    +
  2. +
  3. +

    In the Settings list, click the Edit icon of the setting you want to change.

    +
  4. +
  5. +

    Choose a setting from the list.

    +
  6. +
  7. +

    Click Save.

    +
  8. +
+
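These settings can also be changed from the command line by patching the ForkliftController CR. The following is a sketch for the precopy interval; the controller_precopy_interval field name, the CR name, and the namespace are assumptions, so check the parameter names for your release:

# set the warm-migration precopy interval to 60 minutes
$ kubectl patch forkliftcontroller/<forklift_controller_name> -n <forklift_namespace> \
    --type merge -p '{"spec": {"controller_precopy_interval": 60}}'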
+ + +
+ + diff --git a/documentation/modules/mtv-ui/index.html b/documentation/modules/mtv-ui/index.html new file mode 100644 index 00000000000..c233f03cda6 --- /dev/null +++ b/documentation/modules/mtv-ui/index.html @@ -0,0 +1,91 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

The MTV user interface

+
+

The Forklift user interface is integrated into the OKD web console.

+
+
+

In the left-hand panel, you can choose a page related to a component of the migration process, for example, Providers for Migration, or, if you are an administrator, you can choose Overview, which contains information about migrations and lets you configure Forklift settings.

+
+
+
+Forklift user interface +
+
Figure 1. Forklift extension interface
+
+
+

In pages related to components, you can click on the Projects list, which is in the upper-left portion of the page, and see which projects (namespaces) you are allowed to work with.

+
+
+
    +
  • +

    If you are an administrator, you can see all projects.

    +
  • +
  • +

    If you are a non-administrator, you can see only the projects that you have permissions to work with.

    +
  • +
+
+ + +
+ + diff --git a/documentation/modules/mtv-workflow/index.html b/documentation/modules/mtv-workflow/index.html new file mode 100644 index 00000000000..7ac620a124c --- /dev/null +++ b/documentation/modules/mtv-workflow/index.html @@ -0,0 +1,113 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

High-level migration workflow

+
+

The high-level workflow shows the migration process from the point of view of the user:

+
+
+
    +
  1. +

    You create a source provider, a target provider, a network mapping, and a storage mapping.

    +
  2. +
  3. +

    You create a Plan custom resource (CR) that includes the following resources (a minimal Plan sketch follows this procedure):

    +
    +
      +
    • +

      Source provider

      +
    • +
    • +

      Target provider, if Forklift is not installed on the target cluster

      +
    • +
    • +

      Network mapping

      +
    • +
    • +

      Storage mapping

      +
    • +
    • +

      One or more virtual machines (VMs)

      +
    • +
    +
    +
  4. +
  5. +

    You run a migration plan by creating a Migration CR that references the Plan CR.

    +
    +

    If you cannot migrate all the VMs for any reason, you can create multiple Migration CRs for the same Plan CR until all VMs are migrated.

    +
    +
  6. +
  7. +

    For each VM in the Plan CR, the Migration Controller service records the VM migration progress in the Migration CR.

    +
  8. +
  9. +

    Once the data transfer for each VM in the Plan CR completes, the Migration Controller service creates a VirtualMachine CR.

    +
    +

    When all VMs have been migrated, the Migration Controller service updates the status of the Plan CR to Completed. The power state of each source VM is maintained after migration.

    +
    +
  10. +
+
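A minimal Plan CR sketch for this workflow follows. The exact spec layout can vary between Forklift releases, and every angle-bracket value is a placeholder:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: <name_of_plan_cr>
  namespace: <namespace>
spec:
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
  map:
    network:
      name: <network_map>
      namespace: <namespace>
    storage:
      name: <storage_map>
      namespace: <namespace>
  vms:
    - name: <vm_name>
EOF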
+ + +
+ + diff --git a/documentation/modules/network-prerequisites/index.html b/documentation/modules/network-prerequisites/index.html new file mode 100644 index 00000000000..bbdc659bbbe --- /dev/null +++ b/documentation/modules/network-prerequisites/index.html @@ -0,0 +1,196 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Network prerequisites

+
+
+
+

The following prerequisites apply to all migrations:

+
+
+
    +
  • +

    IP addresses, VLANs, and other network configuration settings must not be changed before or during migration. The MAC addresses of the virtual machines are preserved during migration.

    +
  • +
  • +

    The network connections between the source environment, the KubeVirt cluster, and the replication repository must be reliable and uninterrupted.

    +
  • +
  • +

    If you are mapping more than one source and destination network, you must create a network attachment definition for each additional destination network.

    +
  • +
+
+
+
+
+

Ports

+
+
+

The firewalls must enable traffic over the following ports:

+
+ + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Network ports required for migrating from VMware vSphere
PortProtocolSourceDestinationPurpose

443

TCP

OpenShift nodes

VMware vCenter

+

VMware provider inventory

+
+
+

Disk transfer authentication

+

443

TCP

OpenShift nodes

VMware ESXi hosts

+

Disk transfer authentication

+

902

TCP

OpenShift nodes

VMware ESXi hosts

+

Disk transfer data copy

+
+ + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 2. Network ports required for migrating from oVirt
PortProtocolSourceDestinationPurpose

443

TCP

OpenShift nodes

oVirt Engine

+

oVirt provider inventory

+
+
+

Disk transfer authentication

+

443

TCP

OpenShift nodes

oVirt hosts

+

Disk transfer authentication

+

54322

TCP

OpenShift nodes

oVirt hosts

+

Disk transfer data copy

+
+
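Before migrating, you can spot-check this connectivity from an OpenShift node with a debug pod. This is a sketch, not part of the product: it assumes curl is available on the node image, and the node and host names are placeholders. Using the vSphere ports from Table 1 as an example:

# HTTPS reachability of vCenter (port 443)
$ oc debug node/<node_name> -- chroot /host curl -kIs https://<vcenter_host>:443
# raw TCP reachability of an ESXi host (port 902)
$ oc debug node/<node_name> -- chroot /host curl -v --max-time 5 telnet://<esxi_host>:902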
+
+ + +
+ + diff --git a/documentation/modules/new-features-and-enhancements-2-7/index.html b/documentation/modules/new-features-and-enhancements-2-7/index.html new file mode 100644 index 00000000000..b9b3f63b455 --- /dev/null +++ b/documentation/modules/new-features-and-enhancements-2-7/index.html @@ -0,0 +1,85 @@ + + + + + + + + New features and enhancements | Forklift Documentation + + + + + + + + + + + + + +New features and enhancements | Forklift Documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+

New features and enhancements

+
+
+
+

Forklift 2.7 introduces the following features and enhancements:

+
+
+
+
+

New features and enhancements 2.7.0

+
+
+
    +
  • +

    In Forklift 2.7.0, warm migration is now based on RHEL 9, inheriting its features and bug fixes.

    +
  • +
+
+
+
+ + +
+ + diff --git a/documentation/modules/new-migrating-virtual-machines-cli/index.html b/documentation/modules/new-migrating-virtual-machines-cli/index.html new file mode 100644 index 00000000000..54fdd97cbf5 --- /dev/null +++ b/documentation/modules/new-migrating-virtual-machines-cli/index.html @@ -0,0 +1,155 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
Procedure
+
    +
  1. +

    Create a Secret manifest for the source provider credentials:

    +
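    A minimal Secret sketch for a vSphere source provider follows. The stringData keys (user, password, url, thumbprint) are assumptions modeled on the OpenStack Secret examples later in this documentation; check the fields expected by your Forklift release:

$ cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: <secret_name>
  namespace: <namespace>
  labels:
    createdForProviderType: vsphere
type: Opaque
stringData:
  user: <vcenter_user>
  password: <vcenter_password>
  url: <vcenter_api_url>
  thumbprint: <vcenter_ssl_thumbprint>
EOF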
  2. +
+
+
+
    +
  1. +

    Create a Provider manifest for the source provider:

    +
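    A minimal Provider sketch for a vSphere source follows; the spec fields shown (type, url, secret) are assumptions based on the Provider CR description in this documentation, and all angle-bracket values are placeholders:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <provider_name>
  namespace: <namespace>
spec:
  type: vsphere
  url: <vcenter_api_url>
  secret:
    name: <secret_name>
    namespace: <namespace>
EOF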
  2. +
  3. +

    Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

    +
    +
    +
    $  cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Hook
    +metadata:
    +  name: <hook>
    +  namespace: <namespace>
    +spec:
    +  image: quay.io/konveyor/hook-runner
    +  playbook: |
    +    LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv
    +    YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s
    +    IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh
    +    cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr
    +    bG9hZAoK
    +EOF
    +
    +
    +
    +

    where:

    +
    +
    +

    playbook refers to an optional Base64-encoded Ansible Playbook. If you specify a playbook, the image must be hook-runner.

    +
    +
    + + + + + +
    +
    Note
    +
    +
    +

    You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.

    +
    +
    +
    +
  4. +
  5. +

    Create a Migration manifest to run the Plan CR:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Migration
    +metadata:
    +  name: <name_of_migration_cr>
    +  namespace: <namespace>
    +spec:
    +  plan:
    +    name: <name_of_plan_cr>
    +    namespace: <namespace>
    +  cutover: <optional_cutover_time>
    +EOF
    +
    +
    +
    + + + + + +
    +
    Note
    +
    +
    +

    If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00.

    +
    +
    +
    +
  6. +
+
+ + +
+ + diff --git a/documentation/modules/non-admin-permissions-for-ui/index.html b/documentation/modules/non-admin-permissions-for-ui/index.html new file mode 100644 index 00000000000..fccae335436 --- /dev/null +++ b/documentation/modules/non-admin-permissions-for-ui/index.html @@ -0,0 +1,192 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Permissions needed by non-administrators to work with migration plan components

+
+

If you are an administrator, you can work with all components of migration plans (for example, providers, network mappings, and migration plans).

+
+
+

By default, non-administrators have limited ability to work with migration plans and their components. As an administrator, you can modify their roles to allow them full access to all components, or you can give them limited permissions.

+
+
+

For example, administrators can assign non-administrators one or more of the following cluster roles for migration plans:

+
+ + ++++ + + + + + + + + + + + + + + + + + + + + +
Table 1. Example migration plan roles and their privileges
RoleDescription

plans.forklift.konveyor.io-v1beta1-view

Can view migration plans but cannot create, delete, or modify them

plans.forklift.konveyor.io-v1beta1-edit

Can create, delete, or modify (all parts of edit permissions) individual migration plans

plans.forklift.konveyor.io-v1beta1-admin

All edit privileges and the ability to delete the entire collection of migration plans

+
+

Note that pre-defined cluster roles include a resource (for example, plans), an API group (for example, forklift.konveyor.io-v1beta1) and an action (for example, view, edit).

+
+
+

As a more comprehensive example, you can grant non-administrators the following set of permissions per namespace:

+
+
+
    +
  • +

    Create and modify storage maps, network maps, and migration plans for the namespaces they have access to

    +
  • +
  • +

    Attach providers created by administrators to storage maps, network maps, and migration plans

    +
  • +
  • +

    Not be able to create providers or to change system settings

    +
  • +
+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 2. Example permissions required for non-administrators to work with migration plan components but not create providers
ActionsAPI groupResource

get, list, watch, create, update, patch, delete

forklift.konveyor.io

plans

get, list, watch, create, update, patch, delete

forklift.konveyor.io

migrations

get, list, watch, create, update, patch, delete

forklift.konveyor.io

hooks

get, list, watch

forklift.konveyor.io

providers

get, list, watch, create, update, patch, delete

forklift.konveyor.io

networkmaps

get, list, watch, create, update, patch, delete

forklift.konveyor.io

storagemaps

get, list, watch

forklift.konveyor.io

forkliftcontrollers

create, patch, delete

Empty string

secrets

+
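A namespaced Role implementing the permissions in Table 2 might look like the following sketch; the role name and namespace are placeholders, and the role must be bound to a user with a corresponding RoleBinding:

$ cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: <role_name>
  namespace: <namespace>
rules:
- apiGroups: ["forklift.konveyor.io"]
  resources: ["plans", "migrations", "hooks", "networkmaps", "storagemaps"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["forklift.konveyor.io"]
  resources: ["providers", "forkliftcontrollers"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create", "patch", "delete"]
EOF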
+ + + + + +
+
Note
+
+
+

To create migration plans, non-administrators need the create permissions that are part of the edit roles for network maps and for storage maps, even when using a template for a network map or a storage map.

+
+
+
+ + +
+ + diff --git a/documentation/modules/obtaining-console-url/index.html b/documentation/modules/obtaining-console-url/index.html new file mode 100644 index 00000000000..d4ad17da77b --- /dev/null +++ b/documentation/modules/obtaining-console-url/index.html @@ -0,0 +1,107 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Getting the Forklift web console URL

+
+

You can get the Forklift web console URL at any time by using either the OKD web console, or the command line.

+
+
+
Prerequisites
+
    +
  • +

    KubeVirt Operator installed.

    +
  • +
  • +

    Forklift Operator installed.

    +
  • +
  • +

    You must be logged in as a user with cluster-admin privileges.

    +
  • +
+
+
+
Procedure
+
    +
  • +

    If you are using the OKD web console, follow these steps:

    +
  • +
+
+
+

In the OKD web console, navigate to Networking > Routes, select the project in which Forklift is installed, and click the URL of the route that exposes the Forklift web console.

+
+
+
    +
  • +

    If you are using the command line, get the Forklift web console URL with the following command:

    +
  • +
+
+
+


+
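A sketch of one way to do this, assuming the web console is exposed as a route in the Forklift namespace (the route name is a placeholder):

$ kubectl get route <forklift_ui_route> -n konveyor-forklift -o custom-columns=:.spec.host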
+
+

You can now launch a browser and navigate to the Forklift web console.

+
+ + +
+ + diff --git a/documentation/modules/openstack-prerequisites/index.html b/documentation/modules/openstack-prerequisites/index.html new file mode 100644 index 00000000000..59760030cda --- /dev/null +++ b/documentation/modules/openstack-prerequisites/index.html @@ -0,0 +1,76 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

OpenStack prerequisites

+
+

The following prerequisites apply to {osp} migrations:

+
+
+ +
+ + +
+ + diff --git a/documentation/modules/ostack-app-cred-auth/index.html b/documentation/modules/ostack-app-cred-auth/index.html new file mode 100644 index 00000000000..d82761cb275 --- /dev/null +++ b/documentation/modules/ostack-app-cred-auth/index.html @@ -0,0 +1,189 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Using application credential authentication with an {osp} source provider

+
+

You can use application credential authentication, instead of username and password authentication, when you create an {osp} source provider.

+
+
+

Forklift supports both of the following types of application credential authentication:

+
+
+
    +
  • +

    Application credential ID

    +
  • +
  • +

    Application credential name

    +
  • +
+
+
+

For each type of application credential authentication, you need to use data from OpenStack to create a Secret manifest.

+
+
+
Prerequisites
+

You have an {osp} account.

+
+
+
Procedure
+
    +
  1. +

    In the dashboard of the {osp} web console, click Project > API Access.

    +
  2. +
  3. +

    Expand Download OpenStack RC file and click OpenStack RC file.

    +
    +

    The file that is downloaded, referred to here as <openstack_rc_file>, includes the following fields used for application credential authentication:

    +
    +
    +
    +
    OS_AUTH_URL
    +OS_PROJECT_ID
    +OS_PROJECT_NAME
    +OS_DOMAIN_NAME
    +OS_USERNAME
    +
    +
    +
  4. +
  5. +

    To get the data needed for application credential authentication, run the following command:

    +
    +
    +
    $ openstack application credential create --role member --role reader --secret redhat forklift
    +
    +
    +
    +

    The output, referred to here as <openstack_credential_output>, includes:

    +
    +
    +
      +
    • +

      The id and secret that you need for authentication using an application credential ID

      +
    • +
    • +

      The name and secret that you need for authentication using an application credential name

      +
    • +
    +
    +
  6. +
  7. +

    Create a Secret manifest similar to the following:

    +
    +
      +
    • +

      For authentication using the application credential ID:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-appid
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: applicationcredential
      +  applicationCredentialID: <id_from_openstack_credential_output>
      +  applicationCredentialSecret: <secret_from_openstack_credential_output>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    • +

      For authentication using the application credential name:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-appname
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: applicationcredential
      +  applicationCredentialName: <name_from_openstack_credential_output>
      +  applicationCredentialSecret: <secret_from_openstack_credential_output>
      +  domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
      +  username: <OS_USERNAME_from_openstack_rc_file>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    +
    +
  8. +
  9. +

    Continue migrating your virtual machine according to the procedure in Migrating virtual machines, starting with step 2, "Create a Provider manifest for the source provider."

    +
  10. +
+
+ + +
+ + diff --git a/documentation/modules/ostack-token-auth/index.html b/documentation/modules/ostack-token-auth/index.html new file mode 100644 index 00000000000..0c73b8c5566 --- /dev/null +++ b/documentation/modules/ostack-token-auth/index.html @@ -0,0 +1,180 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Using token authentication with an {osp} source provider

+
+

You can use token authentication, instead of username and password authentication, when you create an {osp} source provider.

+
+
+

Forklift supports both of the following types of token authentication:

+
+
+
    +
  • +

    Token with user ID

    +
  • +
  • +

    Token with user name

    +
  • +
+
+
+

For each type of token authentication, you need to use data from OpenStack to create a Secret manifest.

+
+
+
Prerequisites
+

You have an {osp} account.

+
+
+
Procedure
+
    +
  1. +

    In the dashboard of the {osp} web console, click Project > API Access.

    +
  2. +
  3. +

    Expand Download OpenStack RC file and click OpenStack RC file.

    +
    +

    The file that is downloaded, referred to here as <openstack_rc_file>, includes the following fields used for token authentication:

    +
    +
    +
    +
    OS_AUTH_URL
    +OS_PROJECT_ID
    +OS_PROJECT_NAME
    +OS_DOMAIN_NAME
    +OS_USERNAME
    +
    +
    +
  4. +
  5. +

    To get the data needed for token authentication, run the following command:

    +
    +
    +
    $ openstack token issue
    +
    +
    +
    +

    The output, referred to here as <openstack_token_output>, includes the token, userID, and projectID that you need for authentication using a token with user ID.

    +
    +
  6. +
  7. +

    Create a Secret manifest similar to the following:

    +
    +
      +
    • +

      For authentication using a token with user ID:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-tokenid
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: token
      +  token: <token_from_openstack_token_output>
      +  projectID: <projectID_from_openstack_token_output>
      +  userID: <userID_from_openstack_token_output>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    • +

      For authentication using a token with user name:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-tokenname
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: token
      +  token: <token_from_openstack_token_output>
      +  domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
      +  projectName: <OS_PROJECT_NAME_from_openstack_rc_file>
      +  username: <OS_USERNAME_from_openstack_rc_file>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    +
    +
  8. +
  9. +

    Continue migrating your virtual machine according to the procedure in Migrating virtual machines, starting with step 2, "Create a Provider manifest for the source provider."

    +
  10. +
+
+ + +
+ + diff --git a/documentation/modules/ova-prerequisites/index.html b/documentation/modules/ova-prerequisites/index.html new file mode 100644 index 00000000000..d474668b551 --- /dev/null +++ b/documentation/modules/ova-prerequisites/index.html @@ -0,0 +1,130 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Open Virtual Appliance (OVA) prerequisites

+
+

The following prerequisites apply to Open Virtual Appliance (OVA) file migrations:

+
+
+
    +
  • +

    All OVA files are created by VMware vSphere.

    +
  • +
+
+
+ + + + + +
+
Note
+
+
+

Migration of OVA files that were not created by VMware vSphere but are compatible with vSphere might succeed. However, migration of such files is not supported by Forklift. Forklift supports only OVA files created by VMware vSphere.

+
+
+
+
+
    +
  • +

    The OVA files are in one or more folders under an NFS shared directory in one of the following structures:

    +
    +
      +
    • +

      In one or more compressed Open Virtualization Format (OVF) packages that hold all the VM information.

      +
      +

      The filename of each compressed package must have the .ova extension. Several compressed packages can be stored in the same folder.

      +
      +
      +

      When this structure is used, Forklift scans the root folder and the first-level subfolders for compressed packages.

      +
      +
      +

      For example, if the NFS share is /nfs, then:
      +The folder /nfs is scanned.
      +The folder /nfs/subfolder1 is scanned.
      +But, /nfs/subfolder1/subfolder2 is not scanned.

      +
      +
    • +
    • +

      In extracted OVF packages.

      +
      +

      When this structure is used, Forklift scans the root folder, first-level subfolders, and second-level subfolders for extracted OVF packages. However, there can be only one .ovf file in a folder; otherwise, the migration will fail.

      +
      +
      +

      For example, if the NFS share is /nfs, then:
      +The OVF file /nfs/vm.ovf is scanned.
      +The OVF file /nfs/subfolder1/vm.ovf is scanned.
      +The OVF file /nfs/subfolder1/subfolder2/vm.ovf is scanned.
      +But, the OVF file /nfs/subfolder1/subfolder2/subfolder3/vm.ovf is not scanned.

      +
      +
    • +
    +
    +
  • +
+
+ + +
+ + diff --git a/documentation/modules/retrieving-validation-service-json/index.html b/documentation/modules/retrieving-validation-service-json/index.html new file mode 100644 index 00000000000..a7455b8334d --- /dev/null +++ b/documentation/modules/retrieving-validation-service-json/index.html @@ -0,0 +1,483 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Retrieving the Inventory service JSON

+
+

You retrieve the Inventory service JSON by sending an Inventory service query to a virtual machine (VM). The output contains an "input" key, which contains the inventory attributes that are queried by the Validation service rules.

+
+
+

You can create a validation rule based on any attribute in the "input" key, for example, input.snapshot.kind.

+
+
+
Procedure
+
    +
  1. +

    Retrieve the routes for the project:

    +
    +
    +
    oc get route -n openshift-mtv
    +
    +
    +
  2. +
  3. +

    Retrieve the Inventory service route:

    +
    +
    +
    $ kubectl get route <inventory_service> -n konveyor-forklift
    +
    +
    +
  4. +
  5. +

    Retrieve the access token:

    +
    +
    +
    $ TOKEN=$(oc whoami -t)
    +
    +
    +
  6. +
  7. +

    Trigger an HTTP GET request (for example, using Curl):

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers -k
    +
    +
    +
  8. +
  9. +

    Retrieve the UUID of a provider:

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/<provider> -k (1)
    +
    +
    +
    +
      +
    1. +

      Allowed values for the provider are vsphere, ovirt, and openstack.

      +
    2. +
    +
    +
  10. +
  11. +

    Retrieve the VMs of a provider:

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/<provider>/<UUID>/vms -k
    +
    +
    +
  12. +
  13. +

    Retrieve the details of a VM:

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/<provider>/<UUID>/workloads/<vm> -k
    +
    +
    +
    +
    Example output
    +
    +
    {
    +    "input": {
    +        "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/workloads/vm-431",
    +        "id": "vm-431",
    +        "parent": {
    +            "kind": "Folder",
    +            "id": "group-v22"
    +        },
    +        "revision": 1,
    +        "name": "iscsi-target",
    +        "revisionValidated": 1,
    +        "isTemplate": false,
    +        "networks": [
    +            {
    +                "kind": "Network",
    +                "id": "network-31"
    +            },
    +            {
    +                "kind": "Network",
    +                "id": "network-33"
    +            }
    +        ],
    +        "disks": [
    +            {
    +                "key": 2000,
    +                "file": "[iSCSI_Datastore] iscsi-target/iscsi-target-000001.vmdk",
    +                "datastore": {
    +                    "kind": "Datastore",
    +                    "id": "datastore-63"
    +                },
    +                "capacity": 17179869184,
    +                "shared": false,
    +                "rdm": false
    +            },
    +            {
    +                "key": 2001,
    +                "file": "[iSCSI_Datastore] iscsi-target/iscsi-target_1-000001.vmdk",
    +                "datastore": {
    +                    "kind": "Datastore",
    +                    "id": "datastore-63"
    +                },
    +                "capacity": 10737418240,
    +                "shared": false,
    +                "rdm": false
    +            }
    +        ],
    +        "concerns": [],
    +        "policyVersion": 5,
    +        "uuid": "42256329-8c3a-2a82-54fd-01d845a8bf49",
    +        "firmware": "bios",
    +        "powerState": "poweredOn",
    +        "connectionState": "connected",
    +        "snapshot": {
    +            "kind": "VirtualMachineSnapshot",
    +            "id": "snapshot-3034"
    +        },
    +        "changeTrackingEnabled": false,
    +        "cpuAffinity": [
    +            0,
    +            2
    +        ],
    +        "cpuHotAddEnabled": true,
    +        "cpuHotRemoveEnabled": false,
    +        "memoryHotAddEnabled": false,
    +        "faultToleranceEnabled": false,
    +        "cpuCount": 2,
    +        "coresPerSocket": 1,
    +        "memoryMB": 2048,
    +        "guestName": "Red Hat Enterprise Linux 7 (64-bit)",
    +        "balloonedMemory": 0,
    +        "ipAddress": "10.19.2.96",
    +        "storageUsed": 30436770129,
    +        "numaNodeAffinity": [
    +            "0",
    +            "1"
    +        ],
    +        "devices": [
    +            {
    +                "kind": "RealUSBController"
    +            }
    +        ],
    +        "host": {
    +            "id": "host-29",
    +            "parent": {
    +                "kind": "Cluster",
    +                "id": "domain-c26"
    +            },
    +            "revision": 1,
    +            "name": "IP address or host name of the vCenter host or oVirt Engine host",
    +            "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/hosts/host-29",
    +            "status": "green",
    +            "inMaintenance": false,
    +            "managementServerIp": "10.19.2.96",
    +            "thumbprint": <thumbprint>,
    +            "timezone": "UTC",
    +            "cpuSockets": 2,
    +            "cpuCores": 16,
    +            "productName": "VMware ESXi",
    +            "productVersion": "6.5.0",
    +            "networking": {
    +                "pNICs": [
    +                    {
    +                        "key": "key-vim.host.PhysicalNic-vmnic0",
    +                        "linkSpeed": 10000
    +                    },
    +                    {
    +                        "key": "key-vim.host.PhysicalNic-vmnic1",
    +                        "linkSpeed": 10000
    +                    },
    +                    {
    +                        "key": "key-vim.host.PhysicalNic-vmnic2",
    +                        "linkSpeed": 10000
    +                    },
    +                    {
    +                        "key": "key-vim.host.PhysicalNic-vmnic3",
    +                        "linkSpeed": 10000
    +                    }
    +                ],
    +                "vNICs": [
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk2",
    +                        "portGroup": "VM_Migration",
    +                        "dPortGroup": "",
    +                        "ipAddress": "192.168.79.13",
    +                        "subnetMask": "255.255.255.0",
    +                        "mtu": 9000
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk0",
    +                        "portGroup": "Management Network",
    +                        "dPortGroup": "",
    +                        "ipAddress": "10.19.2.13",
    +                        "subnetMask": "255.255.255.128",
    +                        "mtu": 1500
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk1",
    +                        "portGroup": "Storage Network",
    +                        "dPortGroup": "",
    +                        "ipAddress": "172.31.2.13",
    +                        "subnetMask": "255.255.0.0",
    +                        "mtu": 1500
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk3",
    +                        "portGroup": "",
    +                        "dPortGroup": "dvportgroup-48",
    +                        "ipAddress": "192.168.61.13",
    +                        "subnetMask": "255.255.255.0",
    +                        "mtu": 1500
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk4",
    +                        "portGroup": "VM_DHCP_Network",
    +                        "dPortGroup": "",
    +                        "ipAddress": "10.19.2.231",
    +                        "subnetMask": "255.255.255.128",
    +                        "mtu": 1500
    +                    }
    +                ],
    +                "portGroups": [
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM Network",
    +                        "name": "VM Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch0"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-Management Network",
    +                        "name": "Management Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch0"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_10G_Network",
    +                        "name": "VM_10G_Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_Storage",
    +                        "name": "VM_Storage",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_DHCP_Network",
    +                        "name": "VM_DHCP_Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-Storage Network",
    +                        "name": "Storage Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_Isolated_67",
    +                        "name": "VM_Isolated_67",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch2"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_Migration",
    +                        "name": "VM_Migration",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch2"
    +                    }
    +                ],
    +                "switches": [
    +                    {
    +                        "key": "key-vim.host.VirtualSwitch-vSwitch0",
    +                        "name": "vSwitch0",
    +                        "portGroups": [
    +                            "key-vim.host.PortGroup-VM Network",
    +                            "key-vim.host.PortGroup-Management Network"
    +                        ],
    +                        "pNICs": [
    +                            "key-vim.host.PhysicalNic-vmnic4"
    +                        ]
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualSwitch-vSwitch1",
    +                        "name": "vSwitch1",
    +                        "portGroups": [
    +                            "key-vim.host.PortGroup-VM_10G_Network",
    +                            "key-vim.host.PortGroup-VM_Storage",
    +                            "key-vim.host.PortGroup-VM_DHCP_Network",
    +                            "key-vim.host.PortGroup-Storage Network"
    +                        ],
    +                        "pNICs": [
    +                            "key-vim.host.PhysicalNic-vmnic2",
    +                            "key-vim.host.PhysicalNic-vmnic0"
    +                        ]
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualSwitch-vSwitch2",
    +                        "name": "vSwitch2",
    +                        "portGroups": [
    +                            "key-vim.host.PortGroup-VM_Isolated_67",
    +                            "key-vim.host.PortGroup-VM_Migration"
    +                        ],
    +                        "pNICs": [
    +                            "key-vim.host.PhysicalNic-vmnic3",
    +                            "key-vim.host.PhysicalNic-vmnic1"
    +                        ]
    +                    }
    +                ]
    +            },
    +            "networks": [
    +                {
    +                    "kind": "Network",
    +                    "id": "network-31"
    +                },
    +                {
    +                    "kind": "Network",
    +                    "id": "network-34"
    +                },
    +                {
    +                    "kind": "Network",
    +                    "id": "network-57"
    +                },
    +                {
    +                    "kind": "Network",
    +                    "id": "network-33"
    +                },
    +                {
    +                    "kind": "Network",
    +                    "id": "dvportgroup-47"
    +                }
    +            ],
    +            "datastores": [
    +                {
    +                    "kind": "Datastore",
    +                    "id": "datastore-35"
    +                },
    +                {
    +                    "kind": "Datastore",
    +                    "id": "datastore-63"
    +                }
    +            ],
    +            "vms": null,
    +            "networkAdapters": [],
    +            "cluster": {
    +                "id": "domain-c26",
    +                "parent": {
    +                    "kind": "Folder",
    +                    "id": "group-h23"
    +                },
    +                "revision": 1,
    +                "name": "mycluster",
    +                "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/clusters/domain-c26",
    +                "folder": "group-h23",
    +                "networks": [
    +                    {
    +                        "kind": "Network",
    +                        "id": "network-31"
    +                    },
    +                    {
    +                        "kind": "Network",
    +                        "id": "network-34"
    +                    },
    +                    {
    +                        "kind": "Network",
    +                        "id": "network-57"
    +                    },
    +                    {
    +                        "kind": "Network",
    +                        "id": "network-33"
    +                    },
    +                    {
    +                        "kind": "Network",
    +                        "id": "dvportgroup-47"
    +                    }
    +                ],
    +                "datastores": [
    +                    {
    +                        "kind": "Datastore",
    +                        "id": "datastore-35"
    +                    },
    +                    {
    +                        "kind": "Datastore",
    +                        "id": "datastore-63"
    +                    }
    +                ],
    +                "hosts": [
    +                    {
    +                        "kind": "Host",
    +                        "id": "host-44"
    +                    },
    +                    {
    +                        "kind": "Host",
    +                        "id": "host-29"
    +                    }
    +                ],
    +                "dasEnabled": false,
    +                "dasVms": [],
    +                "drsEnabled": true,
    +                "drsBehavior": "fullyAutomated",
    +                "drsVms": [],
    +                "datacenter": null
    +            }
    +        }
    +    }
    +}
    +
    +
    +
  14. +
+
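To extract a single attribute from the response for use in a validation rule, you can filter the JSON with jq, assuming jq is installed on the client. For example, to read input.snapshot.kind from the VM details:

$ curl -sk -H "Authorization: Bearer $TOKEN" \
    https://<inventory_service_route>/providers/<provider>/<UUID>/workloads/<vm> \
    | jq -r '.input.snapshot.kind'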
+ + +
+ + diff --git a/documentation/modules/retrieving-vmware-moref/index.html b/documentation/modules/retrieving-vmware-moref/index.html new file mode 100644 index 00000000000..79fea8a5fbb --- /dev/null +++ b/documentation/modules/retrieving-vmware-moref/index.html @@ -0,0 +1,149 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Retrieving a VMware vSphere moRef

+
+

When you migrate VMs with a VMware vSphere source provider using Forklift from the CLI, you need to know the managed object reference (moRef) of certain entities in vSphere, such as datastores, networks, and VMs.

+
+
+

You can retrieve the moRef of one or more vSphere entities from the Inventory service. You can then use each moRef as a reference for retrieving the moRef of another entity.

+
+
+
Procedure
+
    +
  1. +

    Retrieve the routes for the project:

    +
    +
    +
    oc get route -n openshift-mtv
    +
    +
    +
  2. +
  3. +

    Retrieve the Inventory service route:

    +
    +
    +
    $ kubectl get route <inventory_service> -n konveyor-forklift
    +
    +
    +
  4. +
  5. +

    Retrieve the access token:

    +
    +
    +
    $ TOKEN=$(oc whoami -t)
    +
    +
    +
  6. +
  7. +

    Retrieve the moRef of a VMware vSphere provider:

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/vsphere -k
    +
    +
    +
  8. +
  9. +

    Retrieve the datastores of a VMware vSphere source provider:

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/vsphere/<provider id>/datastores/ -k
    +
    +
    +
    +
    Example output
    +
    +
    [
    +  {
    +    "id": "datastore-11",
    +    "parent": {
    +      "kind": "Folder",
    +      "id": "group-s5"
    +    },
    +    "path": "/Datacenter/datastore/v2v_general_porpuse_ISCSI_DC",
    +    "revision": 46,
    +    "name": "v2v_general_porpuse_ISCSI_DC",
    +    "selfLink": "providers/vsphere/01278af6-e1e4-4799-b01b-d5ccc8dd0201/datastores/datastore-11"
    +  },
    +  {
    +    "id": "datastore-730",
    +    "parent": {
    +      "kind": "Folder",
    +      "id": "group-s5"
    +    },
    +    "path": "/Datacenter/datastore/f01-h27-640-SSD_2",
    +    "revision": 46,
    +    "name": "f01-h27-640-SSD_2",
    +    "selfLink": "providers/vsphere/01278af6-e1e4-4799-b01b-d5ccc8dd0201/datastores/datastore-730"
    +  },
    + ...
    +
    +
    +
  10. +
+
+
+

In this example, the moRef of the datastore v2v_general_porpuse_ISCSI_DC is datastore-11 and the moRef of the datastore f01-h27-640-SSD_2 is datastore-730.

+
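A sketch of how such a moRef might then be referenced in a StorageMap manifest follows. The spec layout is an assumption based on the StorageMapping CR description in this documentation, and the provider names and <storage_class> are placeholders:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
  name: <storage_map_name>
  namespace: <namespace>
spec:
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
  map:
  - source:
      id: datastore-11
    destination:
      storageClass: <storage_class>
EOF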
+ + +
+ + diff --git a/documentation/modules/rhv-prerequisites/index.html b/documentation/modules/rhv-prerequisites/index.html new file mode 100644 index 00000000000..ef4796423c1 --- /dev/null +++ b/documentation/modules/rhv-prerequisites/index.html @@ -0,0 +1,129 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

oVirt prerequisites

+
+

The following prerequisites apply to oVirt migrations:

+
+
+
    +
  • +

    To create a source provider, you must have at least the UserRole and ReadOnlyAdmin roles assigned to you. These are the minimum required permissions; however, any other administrator or superuser permissions will also work.

    +
  • +
+
+
+ + + + + +
+
Important
+
+
+

You must keep the UserRole and ReadOnlyAdmin roles until the virtual machines of the source provider have been migrated. Otherwise, the migration will fail.

+
+
+
+
+
    +
  • +

    To migrate virtual machines:

    +
    +
      +
    • +

      You must have one of the following:

      +
      +
        +
      • +

        oVirt admin permissions. These permissions allow you to migrate any virtual machine in the system.

        +
      • +
      • +

        DiskCreator and UserVmManager permissions on every virtual machine you want to migrate.

        +
      • +
      +
      +
    • +
    • +

      You must use a compatible version of oVirt.

      +
    • +
    • +

      You must have the Engine CA certificate, unless it was replaced by a third-party certificate, in which case, specify the Engine Apache CA certificate.

      +
      +

      You can obtain the Engine CA certificate by navigating to https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA in a browser, or by using the curl sketch after this list.

      +
      +
    • +
    • +

      If you are migrating a virtual machine with a direct LUN disk, ensure that the nodes in the KubeVirt destination cluster that the VM is expected to run on can access the backend storage.

      +
    • +
    +
    +
  • +
+
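As an alternative to a browser, the same certificate can be fetched from the command line; a sketch using the URL above:

$ curl -k 'https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA' -o engine-ca.pem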
+
+


+
+ + +
+ + diff --git a/documentation/modules/rn-2.0/index.html b/documentation/modules/rn-2.0/index.html new file mode 100644 index 00000000000..c8c3d5447a6 --- /dev/null +++ b/documentation/modules/rn-2.0/index.html @@ -0,0 +1,163 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.0

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere with Forklift.

+
+
+

The release notes describe new features and enhancements, known issues, and technical changes.

+
+
+
+
+

New features and enhancements

+
+
+

This release adds the following features and improvements.

+
+
+
Warm migration
+

Warm migration reduces downtime by copying most of the VM data during a precopy stage while the VMs are running. During the cutover stage, the VMs are stopped and the rest of the data is copied.

+
+
+
Cancel migration
+

You can cancel an entire migration plan or individual VMs while a migration is in progress. A canceled migration plan can be restarted in order to migrate the remaining VMs.

+
+
+
Migration network
+

You can select a migration network for the source and target providers for improved performance. By default, data is copied using the VMware administration network and the OKD pod network.

+
+
+
Validation service
+

The validation service checks source VMs for issues that might affect migration and flags the VMs with concerns in the migration plan.

+
+
+ + + + + +
+
Important
+
+
+

The validation service is a Technology Preview feature only. Technology Preview features +are not supported with Red Hat production service level agreements (SLAs) and +might not be functionally complete. Red Hat does not recommend using them +in production. These features provide early access to upcoming product +features, enabling customers to test functionality and provide feedback during +the development process.

+
+
+

For more information about the support scope of Red Hat Technology Preview +features, see https://access.redhat.com/support/offerings/techpreview/.

+
+
+
+
+
+
+

Known issues

+
+
+

This section describes known issues and mitigations.

+
+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Network map displays a "Destination network not found" error
+

If the network map remains in a NotReady state and the NetworkMap manifest displays a Destination network not found error, the cause is a missing network attachment definition. You must create a network attachment definition for each additional destination network before you create the network map. (BZ#1971259)

+
+
+
Warm migration gets stuck during third precopy
+

Warm migration uses changed block tracking snapshots to copy data during the precopy stage. The snapshots are created at one-hour intervals by default. When a snapshot is created, its contents are copied to the destination cluster. However, when the third snapshot is created, the first snapshot is deleted and the block tracking is lost. (BZ#1969894)

+
+
+

You can do one of the following to mitigate this issue:

+
+
+
    +
  • +

    Start the cutover stage no more than one hour after the precopy stage begins so that only one internal snapshot is created.

    +
  • +
  • +

    Increase the snapshot interval in the vm-import-controller-config config map to 720 minutes:

    +
    +
    +
    $ kubectl patch configmap/vm-import-controller-config \
      -n openshift-cnv \
      -p '{"data": {"warmImport.intervalMinutes": "720"}}'
    +
    +
    +
  • +
+
+
+
+ + +
+ + diff --git a/documentation/modules/rn-2.1/index.html b/documentation/modules/rn-2.1/index.html new file mode 100644 index 00000000000..043b2c89f48 --- /dev/null +++ b/documentation/modules/rn-2.1/index.html @@ -0,0 +1,191 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.1

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.

+
+
+

The release notes describe new features and enhancements, known issues, and technical changes.

+
+
+
+
+

Technical changes

+
+
+
VDDK image added to HyperConverged custom resource
+

The VMware Virtual Disk Development Kit (VDDK) image must be added to the HyperConverged custom resource. Before this release, it was referenced in the v2v-vmware config map.
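For example, a hedged sketch of adding the image by patching the HyperConverged CR from the CLI (the field name vddkInitImage, the CR name kubevirt-hyperconverged, the namespace openshift-cnv, and the image path are assumptions based on typical KubeVirt deployments; adjust them to your environment):

$ kubectl patch hyperconverged/kubevirt-hyperconverged -n openshift-cnv \
  --type merge \
  -p '{"spec": {"vddkInitImage": "<registry_route_or_server_path>/vddk:<tag>"}}'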

+
+
+
+
+

New features and enhancements

+
+
+

This release adds the following features and improvements.

+
+
+
Cold migration from oVirt
+

You can perform a cold migration of VMs from oVirt.

+
+
+
Migration hooks
+

You can create migration hooks to run Ansible playbooks or custom code before or after migration.

+
+
+
Filtered must-gather data collection
+

You can specify options for the must-gather tool that enable you to filter the data by namespace, migration plan, or VMs.

+
+
+
SR-IOV network support
+

You can migrate VMs with a single root I/O virtualization (SR-IOV) network interface if the KubeVirt environment has an SR-IOV network.

+
+
+
+
+

Known issues

+
+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Disk copy stage does not progress
+

The disk copy stage of an oVirt VM does not progress and the Forklift web console does not display an error message. (BZ#1990596)

+
+
+

The cause of this problem might be one of the following conditions:

+
+
+
    +
  • +

    The storage class does not exist on the target cluster.

    +
  • +
  • +

    The VDDK image has not been added to the HyperConverged custom resource.

    +
  • +
  • +

    The VM does not have a disk.

    +
  • +
  • +

    The VM disk is locked.

    +
  • +
  • +

    The VM time zone is not set to UTC.

    +
  • +
  • +

    The VM is configured for a USB device.

    +
  • +
+
+
+

To disable USB devices, see Configuring USB Devices in the Red Hat Virtualization documentation.

+
+
+

To determine the cause:

+
+
+
    +
  1. +

    Click Workloads → Virtualization in the OKD web console.

    +
  2. +
  3. +

    Click the Virtual Machines tab.

    +
  4. +
  5. +

    Select a virtual machine to open the Virtual Machine Overview screen.

    +
  6. +
  7. +

    Click Status to view the status of the virtual machine.

    +
  8. +
+
+
+
VM time zone must be UTC with no offset
+

The time zone of the source VMs must be UTC with no offset. You can set the time zone to GMT Standard Time after first assessing the potential impact on the workload. (BZ#1993259)

+
+
+
oVirt resource UUID causes a "Provider not found" error
+

If an oVirt resource UUID is used in a Host, NetworkMap, StorageMap, or Plan custom resource (CR), a "Provider not found" error is displayed.

+
+
+

You must use the resource name. (BZ#1994037)

+
+
+
Same oVirt resource name in different data centers causes ambiguous reference
+

If an oVirt resource name is used in a NetworkMap, StorageMap, or Plan custom resource (CR) and if the same resource name exists in another data center, the Plan CR displays a critical "Ambiguous reference" condition. You must rename the resource or use the resource UUID in the CR.

+
+
+

In the web console, the resource name appears twice in the same list without a data center reference to distinguish them. You must rename the resource. (BZ#1993089)

+
+
+
Snapshots are not deleted after warm migration
+

Snapshots are not deleted automatically after a successful warm migration of a VMware VM. You must delete the snapshots manually in VMware vSphere. (BZ#2001270)

+
+
+
+ + +
+ + diff --git a/documentation/modules/rn-2.2/index.html b/documentation/modules/rn-2.2/index.html new file mode 100644 index 00000000000..64a1c65a4d9 --- /dev/null +++ b/documentation/modules/rn-2.2/index.html @@ -0,0 +1,219 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.2

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.

+
+
+

The release notes describe technical changes, new features and enhancements, and known issues.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Setting the precopy time interval for warm migration
+

You can set the time interval between snapshots taken during the precopy stage of warm migration.
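For example, a minimal sketch of setting the interval by patching the ForkliftController CR (the parameter name controller_precopy_interval, the CR name, and the namespace are assumptions; verify them against the operator documentation for your deployment):

$ kubectl patch forkliftcontroller/forklift-controller \
  -n konveyor-forklift \
  --type merge \
  -p '{"spec": {"controller_precopy_interval": 60}}'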

+
+
+
+
+

New features and enhancements

+
+
+

This release has the following features and improvements:

+
+
+
Creating validation rules
+

You can create custom validation rules to check the suitability of VMs for migration. Validation rules are based on the VM attributes collected by the Provider Inventory service and written in Rego, the Open Policy Agent native query language.

+
+
+
Downloading logs by using the web console
+

You can download logs for a migration plan or a migrated VM by using the Forklift web console.

+
+
+
Duplicating a migration plan by using the web console
+

You can duplicate a migration plan by using the web console, including its VMs, mappings, and hooks, so that you can edit the copy and run it as a new migration plan.

+
+
+
Archiving a migration plan by using the web console
+

You can archive a migration plan by using the Forklift web console. Archived plans can be viewed or duplicated. They cannot be run, edited, or unarchived.

+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+
Certain Validation service issues do not block migration
+

Certain Validation service issues, which are marked as Critical and display the assessment text, The VM will not be migrated, do not block migration. (BZ#2025977)

+
+
+

The following Validation service assessments do not block migration:

+
+ + ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Issues that do not block migration
AssessmentResult

The disk interface type is not supported by OpenShift Virtualization (only sata, virtio_scsi and virtio interface types are currently supported).

The migrated VM will have a virtio disk if the source interface is not recognized.

The NIC interface type is not supported by OpenShift Virtualization (only e1000, rtl8139 and virtio interface types are currently supported).

The migrated VM will have a virtio NIC if the source interface is not recognized.

The VM is using a vNIC profile configured for host device passthrough, which is not currently supported by OpenShift Virtualization.

The migrated VM will have an SR-IOV NIC. The destination network must be set up correctly.

One or more of the VM’s disks has an illegal or locked status condition.

The migration will proceed but the disk transfer is likely to fail.

The VM has a disk with a storage type other than image, and this is not currently supported by OpenShift Virtualization.

The migration will proceed but the disk transfer is likely to fail.

The VM has one or more snapshots with disks in ILLEGAL state. This is not currently supported by OpenShift Virtualization.

The migration will proceed but the disk transfer is likely to fail.

The VM has USB support enabled, but USB devices are not currently supported by OpenShift Virtualization.

The migrated VM will not have USB devices.

The VM is configured with a watchdog device, which is not currently supported by OpenShift Virtualization.

The migrated VM will not have a watchdog device.

The VM’s status is not up or down.

The migration will proceed but it might hang if the VM cannot be powered off.

+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Missing resource causes error message in current.log file
+

If a resource does not exist, for example, if the virt-launcher pod does not exist because the migrated VM is powered off, its log is unavailable.

+
+
+

The following error appears in the missing resource’s current.log file when it is downloaded from the web console or created with the must-gather tool: error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'. (BZ#2023260)

+
+
+
Importer pod log is unavailable after warm migration
+

Retaining the importer pod for debug purposes causes warm migration to hang during the precopy stage. (BZ#2016290)

+
+
+

As a temporary workaround, the importer pod is removed at the end of the precopy stage so that the precopy succeeds. However, this means that the importer pod log is not retained after warm migration is complete. You can only view the importer pod log by using the oc logs -f <cdi-importer_pod> command during the precopy stage.

+
+
+

This issue only affects the importer pod log and warm migration. Cold migration and the virt-v2v logs are not affected.

+
+
+
Deleting a migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. You must archive a migration plan before deleting it in order to clean up the temporary resources. (BZ#2018974)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Migration plan details page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Network, storage, and VM referenced by name in the Plan CR are not displayed in the web console
+

If a Plan CR references storage, network, or VMs by name instead of by ID, the resources do not appear in the Forklift web console. The migration plan cannot be edited or duplicated. (BZ#1986020)

+
+
+
Log archive file includes logs of a deleted migration plan or VM
+

If you delete a migration plan and then run a new migration plan with the same name or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

+
+
+
If a target VM is deleted during migration, its migration status is Succeeded in the Plan CR
+

If you delete a target VirtualMachine CR during the 'Convert image to kubevirt' step of the migration, the Migration details page of the web console displays the state of the step as VirtualMachine CR not found. However, the status of the VM migration is Succeeded in the Plan CR file and in the web console. (BZ#2031529)

+
+
+
+ + +
+ + diff --git a/documentation/modules/rn-2.3/index.html b/documentation/modules/rn-2.3/index.html new file mode 100644 index 00000000000..6fcadf3965c --- /dev/null +++ b/documentation/modules/rn-2.3/index.html @@ -0,0 +1,156 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.3

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.

+
+
+

The release notes describe technical changes, new features and enhancements, and known issues.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Setting the VddkInitImage path is part of the procedure for adding a VMware provider
+

In the web console, you enter the VddkInitImage path when adding a VMware provider. Alternatively, from the CLI, you add the VddkInitImage path to the Provider CR for VMware migrations.
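As a sketch, a Provider CR fragment with the VddkInitImage path set (the apiVersion, the placement of the field under spec.settings, and all names and namespaces here are assumptions; adjust them to your environment):

apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: vsphere-provider          # hypothetical name
  namespace: konveyor-forklift    # assumed namespace
spec:
  type: vsphere
  url: https://<vcenter_host>/sdk
  settings:
    vddkInitImage: <registry_route_or_server_path>/vddk:<tag>   # assumed field placement
  secret:
    name: vsphere-credentials     # hypothetical secret
    namespace: konveyor-forklift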

+
+
+
The StorageProfile resource needs to be updated for a non-provisioner storage class
+

You must update the StorageProfile resource with accessModes and volumeMode for non-provisioner storage classes such as NFS. The documentation includes a link to the relevant procedure.
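For example, a sketch of such an update for an NFS storage class (StorageProfile objects are cluster-scoped CDI resources named after the storage class; the claimPropertySets structure is an assumption, so verify it against your CDI version):

$ kubectl patch storageprofile <storage_class_name> \
  --type merge \
  -p '{"spec": {"claimPropertySets": [{"accessModes": ["ReadWriteOnce"], "volumeMode": "Filesystem"}]}}'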

+
+
+
+
+

New features and enhancements

+
+
+

This release has the following features and improvements:

+
+
+
Forklift 2.3 supports warm migration from oVirt
+

You can use warm migration to migrate VMs from both VMware and oVirt.

+
+
+
The minimal sufficient set of privileges for VMware users is established
+

VMware users no longer need full cluster-admin privileges to perform a VM migration. The minimal sufficient set of user privileges has been established and documented.

+
+
+
Forklift documentation is updated with instructions on using hooks
+

Forklift documentation includes instructions on adding hooks to migration plans and running hooks on VMs.

+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+
Some warm migrations from oVirt might fail
+

When you run a migration plan for warm migration of multiple VMs from oVirt, the migrations of some VMs might fail during the cutover stage. In that case, restart the migration plan and set the cutover time for the VM migrations that failed in the first run. (BZ#2063531)
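A minimal sketch of setting the cutover time from the CLI by patching the Migration CR associated with the plan (the CR name, the namespace, and the timestamp are placeholders; spec.cutover as the field is an assumption to verify against your version):

$ kubectl patch migration/<migration_name> \
  -n konveyor-forklift \
  --type merge \
  -p '{"spec": {"cutover": "2024-01-01T12:00:00Z"}}'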

+
+
+
Snapshots are not deleted after warm migration
+

The Migration Controller service does not delete snapshots automatically after a successful warm migration of an oVirt VM. You can delete the snapshots manually. (BZ#2053183)

+
+
+
Warm migration from oVirt fails if a snapshot operation is performed on the source VM
+

If the user performs a snapshot operation on the source VM at the time when a migration snapshot is scheduled, the migration fails instead of waiting for the user’s snapshot operation to finish. (BZ#2057459)

+
+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Deleting a migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. You must archive a migration plan before deleting it in order to clean up the temporary resources. (BZ#2018974)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Migration plan details page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Log archive file includes logs of a deleted migration plan or VM
+

If you delete a migration plan and then run a new migration plan with the same name or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

+
+
+
Migration of virtual machines with encrypted partitions fails during conversion
+

The problem occurs for both vSphere and oVirt migrations.

+
+
+
Forklift 2.3.4 only: When the source provider is oVirt, duplicating a migration plan fails in either the network mapping stage or the storage mapping stage.
+

Possible workaround: Delete the browser cache or restart the browser. (BZ#2143191)

+
+
+
+ + +
+ + diff --git a/documentation/modules/rn-2.4/index.html b/documentation/modules/rn-2.4/index.html new file mode 100644 index 00000000000..f3255c4d156 --- /dev/null +++ b/documentation/modules/rn-2.4/index.html @@ -0,0 +1,260 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.4

+
+
+
+

Migrate virtual machines (VMs) from VMware vSphere, oVirt, or {osp} to KubeVirt with Forklift.

+
+
+

The release notes describe technical changes, new features and enhancements, and known issues.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Faster disk image migration from oVirt
+

Disk images are no longer converted by using virt-v2v when migrating from oVirt. This change speeds up migrations and also allows migration of guest operating systems that are not supported by virt-v2v. (forklift-controller#403)

+
+
+
Faster disk transfers by ovirt-imageio client (ovirt-img)
+

Disk transfers use the ovirt-imageio client (ovirt-img) instead of Containerized Data Importer (CDI) when migrating from oVirt to the local OpenShift Container Platform cluster, accelerating the migration.

+
+
+
Faster migration using conversion pod disk transfer
+

When migrating from vSphere to the local OpenShift Container Platform cluster, the conversion pod transfers the disk data instead of Containerized Data Importer (CDI), accelerating the migration.

+
+
+
Migrated virtual machines are not scheduled on the target OCP cluster
+

The migrated virtual machines are no longer scheduled on the target OpenShift Container Platform cluster. This enables migrating VMs that cannot start due to limit constraints on the target at migration time.

+
+
+
StorageProfile resource needs to be updated for a non-provisioner storage class
+

You must update the StorageProfile resource with accessModes and volumeMode for non-provisioner storage classes such as NFS.

+
+
+
VDDK 8 can be used in the VDDK image
+

Previous versions of Forklift supported only using VDDK version 7 for the VDDK image. Forklift supports both versions 7 and 8, as follows:

+
+
+
    +
  • +

    If you are migrating to OCP 4.12 or earlier, use VDDK version 7.

    +
  • +
  • +

    If you are migrating to OCP 4.13 or later, use VDDK version 8.

    +
  • +
+
+
+
+
+

New features and enhancements

+
+
+

This release has the following features and improvements:

+
+
+
OpenStack migration
+

Forklift now supports migrations with {osp} as a source provider. This feature is provided as a Technology Preview and supports only cold migrations.

+
+
+
OCP console plugin
+

The Forklift Operator now integrates the Forklift web console into the OKD web console. The new UI operates as an OCP Console plugin that adds the sub-menu Migration to the navigation bar. It is introduced in version 2.4, and the old UI is disabled. You can enable the old UI by setting feature_ui: true in ForkliftController. (MTV-427)
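For reference, a minimal sketch of the corresponding ForkliftController fragment (the parameter name and value are taken from the text; placing it directly under spec is an assumption):

spec:
  feature_ui: true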

+
+
+
Skip certificate validation option
+

A 'Skip certificate validation' option was added to the VMware and oVirt providers. If selected, the provider’s certificate is not validated, and the UI does not ask you to specify a CA certificate.

+
+
+
Only third-party certificate required
+

Only the third-party certificate needs to be specified when defining an oVirt provider whose Manager CA certificate has been replaced by a third-party certificate.

+
+
+
Conversion of VMs with RHEL9 guest operating system
+

Cold migrations from vSphere to a local Red Hat OpenShift cluster use virt-v2v on RHEL 9. (MTV-332)

+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+
Deleting a migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Log archive file includes logs of a deleted migration plan or VM
+

If you delete a migration plan and then run a new migration plan with the same name, or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

+
+
+
Migration of virtual machines with encrypted partitions fails during conversion
+

vSphere only: Migrations from oVirt and OpenStack do not fail, but the encryption key may be missing on the target OCP cluster.

+
+
+
Snapshots that are created during the migration in OpenStack are not deleted
+

The Migration Controller service does not automatically delete snapshots that are created during the migration of source virtual machines in OpenStack. Workaround: The snapshots can be removed manually in OpenStack.

+
+
+
oVirt snapshots are not deleted after a successful migration
+

The Migration Controller service does not delete snapshots automatically after a successful warm migration of an oVirt VM. Workaround: Snapshots can be removed manually in oVirt. (MTV-349)

+
+
+
Migration fails during precopy/cutover while a snapshot operation is executed on the source VM
+

Some warm migrations from oVirt might fail. When running a migration plan for warm migration of multiple VMs from oVirt, the migrations of some VMs might fail during the cutover stage. In that case, restart the migration plan and set the cutover time for the VM migrations that failed in the first run.

+
+
+

Warm migration from oVirt fails if a snapshot operation is performed on the source VM. If the user performs a snapshot operation on the source VM at the time when a migration snapshot is scheduled, the migration fails instead of waiting for the user’s snapshot operation to finish. (MTV-456)

+
+
+
Cannot schedule migrated VM with multiple disks to more than one storage class of type hostPath
+

When migrating a VM with multiple disks to more than one storage class of type hostPath, the VM might not be schedulable. Workaround: Use shared storage on the target OCP cluster.

+
+
+
Deleting migrated VM does not remove PVC and PV
+

When removing a VM that was migrated, its persistent volume claims (PVCs) and persistent volumes (PVs) are not deleted. Workaround: Remove the CDI importer pods and then remove the remaining PVCs and PVs. (MTV-492)

+
+
+
PVC deletion hangs after archiving and deleting migration plan
+

When a migration fails, its PVCs and PVs are not deleted as expected when its migration plan is archived and deleted. Workaround: Remove the CDI importer pods and then remove the remaining PVCs and PVs. (MTV-493)

+
+
+
VM with multiple disks may boot from non-bootable disk after migration
+

A migrated VM with multiple disks might not be able to boot on the target OCP cluster. Workaround: Set the boot order appropriately so that the VM boots from the bootable disk. (MTV-433)

+
+
+
Non-supported guest operating systems in warm migrations
+

Warm migrations and migrations to remote OCP clusters from vSphere do not support all types of guest operating systems that are supported in cold migrations to the local OCP cluster. It is a consequence of using RHEL 8 in the former case and RHEL 9 in the latter case.
+See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 for the list of supported guest operating systems.

+
+
+
VMs from vSphere with RHEL 9 guest operating system may start with network interfaces that are down
+

When migrating VMs that are installed with RHEL 9 as guest operating system from vSphere, their network interfaces could be disabled when they start in OpenShift Virtualization. (MTV-491)

+
+
+
Upgrade from 2.4.0 fails
+

When upgrading from MTV 2.4.0 to a later version, the operation fails with an error that says the field 'spec.selector' of deployment forklift-controller is immutable. Workaround: Remove the custom resource forklift-controller of type ForkliftController from the installed namespace, and recreate it. Refresh the OCP Console once the forklift-console-plugin pod runs to load the upgraded Forklift web console. (MTV-518)
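A sketch of the workaround from the CLI (the CR name forklift-controller is from the text; the namespace konveyor-forklift and the saved filename are assumptions, so substitute the namespace in which the Operator is installed):

# Save the existing CR so it can be recreated (filename is illustrative):
$ kubectl get forkliftcontroller/forklift-controller -n konveyor-forklift -o yaml > forklift-controller.yaml
# Delete the CR, then recreate it from the saved manifest:
$ kubectl delete forkliftcontroller/forklift-controller -n konveyor-forklift
$ kubectl apply -f forklift-controller.yaml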

+
+
+
+
+

Resolved issues

+
+
+

This release has the following resolved issues:

+
+
+
Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack)
+

A flaw was found in handling multiplexed streams in the HTTP/2 protocol. In previous releases of MTV, the HTTP/2 protocol allowed a denial of service (server resource consumption) because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection, which resulted in a denial of service due to server resource consumption.

+
+
+

This issue has been resolved in MTV 2.4.3 and 2.5.2. It is advised to update to one of these versions of MTV or later.

+
+ +
+
Improve invalid/conflicting VM name handling
+

The automatic renaming of VMs during migration to conform to RFC 1123 has been improved. This feature, which was introduced in 2.3.4, is enhanced to cover more special cases. (MTV-212)

+
+
+
Prevent locking user accounts due to incorrect credentials
+

If a user specifies an incorrect password for an oVirt provider, the user’s account is no longer locked in oVirt. An error is returned when the oVirt Manager is accessible while the provider is being added. If the oVirt Manager is inaccessible, the provider is added, but no further connection attempt is made after the initial failure with incorrect credentials. (MTV-324)

+
+
+
Users without cluster-admin role can create new providers
+

Previously, the cluster-admin role was required to browse and create providers. In this release, users with sufficient permissions on MTV resources (providers, plans, migrations, NetworkMaps, StorageMaps, hooks) can operate MTV without cluster-admin permissions. (MTV-334)

+
+
+
Convert i440fx to q35
+

Migration of virtual machines with i440fx chipset is now supported. The chipset is converted to q35 during the migration. (MTV-430)

+
+
+
Preserve the UUID setting in SMBIOS for a VM that is migrated from oVirt
+

The Universal Unique ID (UUID) number within the System Management BIOS (SMBIOS) no longer changes for VMs that are migrated from oVirt. This enhancement enables applications that operate within the guest operating system and rely on this setting, such as for licensing purposes, to operate on the target OCP cluster in a manner similar to that of oVirt. (MTV-597)

+
+
+
Do not expose password for oVirt in error messages
+

Previously, the password that was specified for oVirt manager appeared in error messages that were displayed in the web console and logs when failing to connect to oVirt. In this release, error messages that are generated when failing to connect to oVirt do not reveal the password for oVirt manager.

+
+
+
QEMU guest agent is now installed on migrated VMs
+

The QEMU guest agent is installed on VMs during cold migration from vSphere. (BZ#2018062)

+
+
+
+ + +
+ + diff --git a/documentation/modules/rn-2.5/index.html b/documentation/modules/rn-2.5/index.html new file mode 100644 index 00000000000..1499f6de735 --- /dev/null +++ b/documentation/modules/rn-2.5/index.html @@ -0,0 +1,464 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.5

+
+
+
+

You can use Forklift to migrate virtual machines from the following source providers to KubeVirt destination providers:

+
+
+
    +
  • +

    VMware vSphere

    +
  • +
  • +

oVirt

    +
  • +
  • +

    {osp}

    +
  • +
  • +

    Open Virtual Appliances (OVAs) that were created by VMware vSphere

    +
  • +
  • +

    Remote KubeVirt clusters

    +
  • +
+
+
+

The release notes describe technical changes, new features and enhancements, and known issues for Forklift.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Migration from OpenStack moves to being a fully supported feature
+

In this version of Forklift, migration using OpenStack source providers graduated from a Technology Preview feature to a fully supported feature.

+
+
+
Disabling FIPS
+

Forklift enables migrations from vSphere source providers by not enforcing Extended Master Secret (EMS). This enables migrating from all vSphere versions that Forklift supports, including migrations that do not meet 2023 FIPS requirements.

+
+
+
Integration of the create and update provider user interface
+

The user interface of the create and update providers now aligns with the look and feel of the OKD web console and displays up-to-date data.

+
+
+
Standalone UI
+

The old UI of Forklift 2.3 can no longer be enabled by setting feature_ui: true in ForkliftController.

+
+
+
Support deployment on {ocp-name} 4.15
+

Forklift 2.5.6 can be deployed on {ocp-name} 4.15 clusters.

+
+
+
+
+

New features and enhancements

+
+
+

This release has the following features and improvements:

+
+
+
Migration of OVA files from VMware vSphere
+

In Forklift 2.5, you can migrate by using Open Virtual Appliance (OVA) files that were created by VMware vSphere as source providers. (MTV-336)

+
+
+ + + + + +
+
Note
+
+
+

Migration of OVA files that were not created by VMware vSphere but are compatible with vSphere might succeed. However, migration of such files is not supported by Forklift. Forklift supports only OVA files created by VMware vSphere.

+
+
+
+
+


+
+
+
Migrating VMs between OKD clusters
+

In Forklift 2.5, you can now use a KubeVirt provider as both a source provider and a destination provider. You can migrate VMs from the cluster that Forklift is deployed on to another cluster, or from a remote cluster to the cluster that Forklift is deployed on. (MTV-571)

+
+
+
Migration of VMs with direct LUNs from oVirt
+

During the migration from oVirt, direct logical unit numbers (LUNs) are detached from the source virtual machines and attached to the target virtual machines. Note that this mechanism does not yet work for Fibre Channel. (MTV-329)

+
+
+
Additional authentication methods for OpenStack
+

In addition to standard password authentication, Forklift supports the following authentication methods: Token authentication and Application credential authentication. (MTV-539)
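As an illustration only, a sketch of a provider secret using application credentials (every key name below is an assumption rather than a documented contract; consult the provider documentation for the exact format):

apiVersion: v1
kind: Secret
metadata:
  name: openstack-credentials     # hypothetical name
  namespace: konveyor-forklift    # assumed namespace
type: Opaque
stringData:
  authType: applicationcredential           # assumed key and value
  applicationCredentialID: <id>             # assumed key
  applicationCredentialSecret: <secret>     # assumed key
  url: https://<identity_service>:5000/v3   # Keystone endpoint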

+
+
+
Validation rules for OpenStack
+

The validation service includes default validation rules for virtual machines from OpenStack. (MTV-508)

+
+
+
VDDK is now optional for VMware vSphere providers
+

You can now create the VMware vSphere source provider without specifying a VMware Virtual Disk Development Kit (VDDK) init image. It is strongly recommended that you create a VDDK init image to accelerate migrations.

+
+
+
Deployment on OKE enabled
+

In Forklift 2.5.3, deployment on {ocp-name} Kubernetes Engine (OKE) has been enabled. For more information, see About {ocp-name} Kubernetes Engine. (MTV-803)

+
+
+
Migration of VMs to destination storage classes with encrypted RBD now supported
+

In Forklift 2.5.4, migration of VMs to destination storage classes that have encrypted RADOS Block Devices (RBD) volumes is now supported.

+
+
+

To make use of this new feature, set the value of the parameter controller_block_overhead to 1Gi, following the procedure in Configuring the MTV Operator. (MTV-851)
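A minimal sketch of that setting in the ForkliftController spec, mirroring the filesystem overhead fragment shown under Resolved issues (the parameter name and value are taken from the text; placing it directly under spec is an assumption):

spec:
  controller_block_overhead: 1Gi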

+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+
Deleting a migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Migration of virtual machines with encrypted partitions fails during conversion
+

vSphere only: Migrations from oVirt and OpenStack do not fail, but the encryption key may be missing on the target OKD cluster.

+
+
+
Migration fails during precopy/cutover while performing a snapshot operation on the source VM
+

Warm migration from oVirt fails if a snapshot operation is triggered and running on the source VM at the same time as the migration is scheduled. The migration does not wait for the snapshot operation to finish. (MTV-456)

+
+
+
Unable to schedule migrated VM with multiple disks to more than one storage class of type hostPath
+

When migrating a VM with multiple disks to more than one storage class of type hostPath, the VM might not be schedulable. Workaround: Use shared storage on the target OKD cluster.

+
+
+
Non-supported guest operating systems in warm migrations
+

Warm migrations and migrations to remote OKD clusters from vSphere do not support all types of guest operating systems that are supported in cold migrations to the local OKD cluster. This is a consequence of using RHEL 8 in the former case and RHEL 9 in the latter case.
+See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 for the list of supported guest operating systems.

+
+
+
VMs from vSphere with RHEL 9 guest operating system can start with network interfaces that are down
+

When migrating VMs that are installed with RHEL 9 as guest operating system from vSphere, the network interfaces of the VMs could be disabled when they start in {ocp-name} Virtualization. (MTV-491)

+
+
+
Import OVA: ConnectionTestFailed message appears when adding OVA provider
+

When adding an OVA provider, the error message ConnectionTestFailed can appear, although the provider is created successfully. If the message does not disappear after a few minutes and the provider status does not move to Ready, this means that the OVA server pod creation has failed. (MTV-671)

+
+
+
Left over ovirtvolumepopulator from failed migration causes plan to stay indefinitely in CopyDisks phase
+

An outdated ovirtvolumepopulator in the namespace, left over from an earlier failed migration, blocks a new plan for the same VM when it transitions to the CopyDisks phase. The plan remains in that phase indefinitely. (MTV-929)

+
+
+
Unclear error message when Forklift fails to build a PVC
+

The migration fails to build the Persistent Volume Claim (PVC) if the destination storage class does not have a configured storage profile. The forklift-controller raises an error message without a clear reason for failing to create a PVC. (MTV-928)

+
+
+

For a complete list of all known issues in this release, see the list of Known Issues in Jira.

+
+
+
+
+

Resolved issues

+
+
+

This release has the following resolved issues:

+
+
+
Flaw was found in jsrsasign package which is vulnerable to Observable Discrepancy
+

Versions of the package jsrsasign before 11.0.0, used in earlier releases of Forklift, are vulnerable to Observable Discrepancy in the RSA PKCS1.5 or RSA-OAEP decryption process. This discrepancy means an attacker could decrypt ciphertexts by exploiting this vulnerability. However, exploiting this vulnerability requires the attacker to have access to a large number of ciphertexts encrypted with the same key. This issue has been resolved in Forklift 2.5.5 by upgrading the package jsrsasign to version 11.0.0.

+
+
+

For more information, see CVE-2024-21484.

+
+
+
Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack)
+

A flaw was found in handling multiplexed streams in the HTTP/2 protocol. In previous releases of Forklift, the HTTP/2 protocol allowed a denial of service (server resource consumption) because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection, which resulted in a denial of service due to server resource consumption.

+
+
+

This issue has been resolved in Forklift 2.5.2. It is advised to update to this version of Forklift or later.

+
+ +
+
Gin Web Framework does not properly sanitize filename parameter of Context.FileAttachment function
+

A flaw was found in the Gin-Gonic Gin Web Framework, used by Forklift. The filename parameter of the Context.FileAttachment function was not properly sanitized. This flaw in the package could allow a remote attacker to bypass security restrictions caused by improper input validation by the filename parameter of the Context.FileAttachment function. A maliciously created filename could cause the Content-Disposition header to be sent with an unexpected filename value, or otherwise modify the Content-Disposition header.

+
+
+

This issue has been resolved in Forklift 2.5.2. It is advised to update to this version of Forklift or later.

+
+ +
+
CVE-2023-26144: mtv-console-plugin-container: graphql: Insufficient checks in the OverlappingFieldsCanBeMergedRule.ts
+

A flaw was found in the package GraphQL from 16.3.0 and before 16.8.1. This flaw means Forklift versions before Forklift 2.5.2 are vulnerable to Denial of Service (DoS) due to insufficient checks in the OverlappingFieldsCanBeMergedRule.ts file when parsing large queries. This issue may allow an attacker to degrade system performance. (MTV-712)

+
+
+

This issue has been resolved in Forklift 2.5.2. It is advised to update to this version of Forklift or later.

+
+
+

For more information, see CVE-2023-26144.

+
+
+
CVE-2023-45142: Memory leak found in the otelhttp handler of open-telemetry
+

A flaw was found in otelhttp handler of OpenTelemetry-Go. This flaw means Forklift versions before Forklift 2.5.3 are vulnerable to a memory leak caused by http.user_agent and http.method having unbound cardinality, which could allow a remote, unauthenticated attacker to exhaust the server’s memory by sending many malicious requests, affecting the availability. (MTV-795)

+
+
+

This issue has been resolved in Forklift 2.5.3. It is advised to update to this version of Forklift or later.

+
+
+

For more information, see CVE-2023-45142.

+
+
+
CVE-2023-39322: QUIC connections do not set an upper bound on the amount of data buffered when reading post-handshake messages
+

A flaw was found in Golang. This flaw means Forklift versions before Forklift 2.5.3 are vulnerable to QUIC connections not setting an upper bound on the amount of data buffered when reading post-handshake messages, allowing a malicious QUIC connection to cause unbounded memory growth. With the fix, connections now consistently reject messages larger than 65KiB in size. (MTV-708)

+
+
+

This issue has been resolved in Forklift 2.5.3. It is advised to update to this version of Forklift or later.

+
+
+

For more information, see CVE-2023-39322.

+
+
+
CVE-2023-39321: Processing an incomplete post-handshake message for a QUIC connection can cause a panic
+

A flaw was found in Golang. This flaw means Forklift versions before Forklift 2.5.3 are vulnerable to processing an incomplete post-handshake message for a QUIC connection, which causes a panic. (MTV-693)

+
+
+

This issue has been resolved in Forklift 2.5.3. It is advised to update to this version of Forklift or later.

+
+
+

For more information, see CVE-2023-39321.

+
+
+
CVE-2023-39319: Flaw in html/template package
+

A flaw was found in the Golang html/template package used in Forklift. This flaw means Forklift versions before Forklift 2.5.3 are vulnerable, as the html/template package did not properly handle occurrences of <script, <!--, and </script within JavaScript literals in <script> contexts. This flaw could cause the template parser to improperly consider script contexts to be terminated early, causing actions to be improperly escaped, which could be leveraged to perform an XSS attack. (MTV-693)

+
+
+

This issue has been resolved in Forklift 2.5.3. It is advised to update to this version of Forklift or later.

+
+
+

For more information, see CVE-2023-39319.

+
+
+
CVE-2023-39318: Flaw in html/template package
+

A flaw was found in the Golang html/template package used in Forklift. This flaw means Forklift versions before Forklift 2.5.3 are vulnerable, as the html/template package did not properly handle HTML-like "<!--" and "-->" comment tokens, nor hashbang "#!" comment tokens. This flaw could cause the template parser to improperly interpret the contents of <script> contexts, causing actions to be improperly escaped, which could be leveraged to perform an XSS attack. (MTV-693)

+
+
+

This issue has been resolved in Forklift 2.5.3. It is advised to update to this version of Forklift or later.

+
+
+

For more information, see CVE-2023-39318.

+
+
+
Logs archive file downloaded from UI includes logs related to deleted migration plan/VM
+

In earlier releases of Forklift 2.5, the log files downloaded from the UI could contain logs related to an earlier migration plan. (MTV-783)

+
+
+

This issue has been resolved in Forklift 2.5.3.

+
+
+
Extending a VM disk in oVirt is not reflected in the Forklift inventory
+

In earlier releases of Forklift 2.5, the size of disks that were extended in oVirt was not adequately monitored. This resulted in the inability to migrate virtual machines with extended disks from an oVirt provider. (MTV-830)

+
+
+

This issue has been resolved in Forklift 2.5.3.

+
+
+
Filesystem overhead configurable
+

In earlier releases of Forklift 2.5, the filesystem overhead for new persistent volumes was hard-coded to 10%. The overhead was insufficient for certain filesystem types, resulting in failures during cold migrations from oVirt and OSP to the cluster where Forklift is deployed. In other filesystem types, the hard-coded overhead was too high, resulting in excessive storage consumption.

+
+
+

In Forklift 2.5.3, the filesystem overhead can be configured, as it is no longer hard-coded. If your migration allocates persistent volumes without CDI, you can adjust the file system overhead. You adjust the file system overhead by adding the following label and value to the spec portion of the forklift-controller CR:

+
+
+
+
spec:
  controller_filesystem_overhead: <percentage> (1)
+
+
+
+
    +
  1. +

    The percentage of overhead. If this label is not added, the default value of 10% is used. This setting is valid only if the storageclass is filesystem. (MTV-699)

    +
  2. +
+
+
+
Ensure up-to-date data is displayed in the create and update provider forms
+

In earlier releases of Forklift, the create and update provider forms could have presented stale data.

+
+
+

This issue is resolved in Forklift 2.5; the new create and update provider forms display up-to-date properties of the provider. (MTV-603)

+
+
+
Snapshots that are created during a migration in OpenStack are not deleted
+

In earlier releases of Forklift, the Migration Controller service did not automatically delete snapshots that were created during a migration of source virtual machines in OpenStack.

+
+
+

This issue is resolved in Forklift 2.5; all the snapshots created during the migration are removed after the migration has been completed. (MTV-620)

+
+
+
oVirt snapshots are not deleted after a successful migration
+

In earlier releases of Forklift, the Migration Controller service did not delete snapshots automatically after a successful warm migration of a VM from oVirt.

+
+
+

This issue is resolved in Forklift 2.5; the snapshots generated during the migration are removed after a successful migration, while the original snapshots are not removed. (MTV-349)

+
+
+
Warm migration fails when cutover conflicts with precopy
+

In earlier releases of Forklift, the cutover operation failed when it was triggered while precopy was being performed. The VM was locked in oVirt and therefore the ovirt-engine rejected the snapshot creation, or disk transfer, operation.

+
+
+

This issue is resolved in Forklift 2.5; the cutover operation can be triggered while the VM is locked, but it is not performed at that time. Once the precopy operation completes, the cutover operation is performed. (MTV-686)

+
+
+
Warm migration fails when VM is locked
+

In earlier releases of Forklift, triggering a warm migration while there was an ongoing operation in oVirt that locked the VM caused the migration to fail because it could not trigger the snapshot creation.

+
+
+

This issue is resolved in Forklift 2.5; warm migration no longer fails when an operation that locks the VM is performed in oVirt. Instead, the migration starts when the VM is unlocked. (MTV-687)

+
+
+
Deleting migrated VM does not remove PVC and PV
+

In earlier releases of Forklift, when removing a VM that was migrated, its persistent volume claims (PVCs) and persistent volumes (PVs) were not deleted.

+
+
+

This issue is resolved in Forklift 2.5; PVCs and PVs are deleted when a migrated VM is deleted. (MTV-492)

+
+
+
PVC deletion hangs after archiving and deleting migration plan
+

In earlier releases of Forklift, when a migration failed, its PVCs and PVs were not deleted as expected when its migration plan was archived and deleted.

+
+
+

This issue is resolved in Forklift 2.5; PVCs are deleted when the migration plan is archived and deleted. (MTV-493)

+
+
+
VM with multiple disks can boot from a non-bootable disk after migration
+

In earlier releases of Forklift, VMs with multiple disks that were migrated might not have been able to boot on the target OKD cluster.

+
+
+

This issue is resolved in Forklift 2.5; VMs with multiple disks that are migrated can boot on the target OKD cluster. (MTV-433)

+
+
+
Transfer network not taken into account for cold migrations from vSphere
+

In Forklift releases 2.4.0-2.5.3, cold migrations from vSphere to the local cluster on which Forklift was deployed did not take a specified transfer network into account. This issue is resolved in Forklift 2.5.4. (MTV-846)

+
+
+
Fix migration of VMs with multi-boot guest operating system from vSphere
+

In Forklift 2.5.6, the virt-v2v arguments include --root first, which mitigates an issue with multi-boot VMs where the pod fails. This is a fix for a regression that was introduced in Forklift 2.4, in which the '--root' argument was dropped. (MTV-987)

+
+
+
Errors logged in populator pods are improved
+

In earlier releases of Forklift, populator pods were always restarted on failure. This made it difficult to gather the logs from the failed pods. In Forklift 2.5.3, the number of restarts of populator pods is limited to three. On the third and final restart, the populator pod remains in the failed status, so its logs can be gathered by must-gather, and forklift-controller can determine that this step has failed. (MTV-818)

+
+
+
npm IP package vulnerability
+

A vulnerability found in the Node.js Package Manager (npm) IP Package can allow an attacker to obtain sensitive information and gain access to normally inaccessible resources. (MTV-941)

+
+
+

This issue has been resolved in Forklift 2.5.6.

+
+
+

For more information, see CVE-2023-42282

+
+
+
Flaw was found in the Golang net/http/internal package
+

A flaw was found in the versions of the Golang net/http/internal package, that were used in earlier releases of Forklift. This flaw could allow a malicious user to send an HTTP request and cause the receiver to read more bytes from the network than are in the body (up to 1GiB), causing the receiver to fail reading the response, possibly leading to a Denial of Service (DoS). This issue has been resolved in Forklift 2.5.6.

+
+
+

For more information, see CVE-2023-39326.

+
+
+

For a complete list of all resolved issues in this release, see the list of Resolved Issues in Jira.

+
+
+
+
+

Upgrade notes

+
+
+

It is recommended to upgrade from Forklift 2.4.2 to Forklift 2.5.

+
+
+
Upgrade from 2.4.0 fails
+

When upgrading from MTV 2.4.0 to a later version, the operation fails with an error that says the field 'spec.selector' of deployment forklift-controller is immutable. Workaround: Remove the custom resource forklift-controller of type ForkliftController from the installed namespace, and recreate it. Refresh the OKD console once the forklift-console-plugin pod runs to load the upgraded Forklift web console. (MTV-518)

+
+
+
+ + +
+ + diff --git a/documentation/modules/rn-2.6/index.html b/documentation/modules/rn-2.6/index.html new file mode 100644 index 00000000000..91e6379930e --- /dev/null +++ b/documentation/modules/rn-2.6/index.html @@ -0,0 +1,511 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.6

+
+
+
+

You can use Forklift to migrate virtual machines from the following source providers to KubeVirt destination providers:

+
+
+
    +
  • +

    VMware vSphere

    +
  • +
  • +

oVirt

    +
  • +
  • +

    {osp}

    +
  • +
  • +

    Open Virtual Appliances (OVAs) that were created by VMware vSphere

    +
  • +
  • +

    Remote KubeVirt clusters

    +
  • +
+
+
+

The release notes describe technical changes, new features and enhancements, known issues, and resolved issues.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Simplified the creation of vSphere providers
+

In earlier releases of Forklift, users had to specify a fingerprint when creating a vSphere provider. This required users to retrieve the fingerprint from the server that vCenter runs on. Forklift no longer requires this fingerprint as an input, but rather computes it from the specified certificate in the case of a secure connection or automatically retrieves it from the server that runs vCenter/ESXi in the case of an insecure connection.

+
+
+
Redesigned the migration plan creation dialog
+

The user interface console has improved the process of creating a migration plan. The new migration plan dialog enables faster creation of migration plans.

+
+
+

It includes only the minimal settings that are required, while you can configure advanced settings separately. The new dialog also provides defaults for network and storage mappings, where applicable. The new dialog can also be invoked from the Provider > Virtual Machines tab, after selecting the virtual machines to migrate. It also better aligns with the user experience in the OCP console.

+
+
+
Virtual machine preferences have replaced {ocp-name} templates
+

The virtual machine preferences have replaced {ocp-name} templates. Forklift currently falls back to using {ocp-name} templates when a relevant preference is not available.

+
+
+

Custom mappings of guest operating system type to virtual machine preference can be configured by using config maps, either to use custom virtual machine preferences or to support more guest operating system types.

+
+
+
Full support for migration from OVA
+

Migration from OVA has moved from being a Technology Preview and is now a fully supported feature.

+
+
+
The VM is posted with its desired Running state
+

Forklift creates the VM with its desired Running state on the target provider, instead of creating the VM and then running it as an additional operation. (MTV-794)

+
+
+
The must-gather logs can now be loaded only by using the CLI
+

The Forklift web console can no longer download logs. With this update, you must download must-gather logs by using CLI commands. For more information, see Must Gather Operator.

+
+
+
Forklift no longer runs pvc-init pods when migrating from vSphere
+

Forklift no longer runs pvc-init pods during cold migration from a vSphere provider to the {ocp-name} cluster that Forklift is deployed on. However, in other flows where data volumes are used, they are set with the cdi.kubevirt.io/storage.bind.immediate.requested annotation, and CDI runs first-consume pods for storage classes with volume binding mode WaitForFirstConsumer.
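For reference, a sketch of how that annotation appears on a data volume (the annotation key is quoted from the text above; the value "true" and the surrounding manifest fragment are assumptions):

metadata:
  annotations:
    cdi.kubevirt.io/storage.bind.immediate.requested: "true"   # assumed value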

+
+
+
+
+

New features and enhancements

+
+
+

This section provides features and enhancements introduced in Forklift 2.6.

+
+
+

New features and enhancements 2.6.3

+
+
Support for migrating LUKS-encrypted devices in migrations from vSphere
+

You can now perform cold migrations from a vSphere provider of VMs whose virtual disks are encrypted by Linux Unified Key Setup (LUKS). (MTV-831)

+
+
+
Specifying the primary disk when migrating from vSphere
+

You can now specify the primary disk when you migrate VMs from vSphere with more than one bootable disk. This avoids Forklift automatically attempting to convert the first bootable disk that it detects while it examines all the disks of a virtual machine. This feature is needed because the first bootable disk is not necessarily the disk that the VM is expected to boot from in KubeVirt. (MTV-1079)

+
+
+
Links to remote provider UIs
+

You can now remotely access the UI of a remote cluster when you create a source provider. For example, if the provider is a remote oVirt cluster, Forklift adds a link to the remote oVirt web console when you define the provider. This feature makes it easier for you to manage and debug a migration from remote clusters. (MTV-1054)

+
+
+
+

New features and enhancements 2.6.0

+
+
Migration from vSphere over a secure connection
+

You can now specify a CA certificate that can be used to authenticate the server that runs vCenter or ESXi, depending on the specified SDK endpoint of the vSphere provider. (MTV-530)

+
+
+
Migration to or from a remote {ocp-name} over a secure connection
+

You can now specify a CA certificate that can be used to authenticate the API server of a remote {ocp-name} cluster. (MTV-728)

+
+
+
Migration from an ESXi server without going through vCenter
+

Forklift enables the configuration of vSphere providers with the SDK of ESXi. You need to select ESXi as the Endpoint type of the vSphere provider and specify the URL of the SDK of the ESXi server. (MTV-514)

+
+
+
Migration of image-based VMs from {osp}
+

Forklift supports the migration of VMs that were created from images in {osp}. (MTV-644)

+
+
+
Migration of VMs with Fibre Channel LUNs from oVirt
+

Forklift supports migrations of VMs that are set with Fibre Channel (FC) LUNs from oVirt. As with other LUN disks, you need to ensure the {ocp-name} nodes have access to the FC LUNs. During the migrations, the FC LUNs are detached from the source VMs in oVirt and attached to the migrated VMs in {ocp-name}. (MTV-659)

+
+
+
Preserve CPU types of VMs that are migrated from oVirt
+

Forklift sets the CPU type of migrated VMs in {ocp-name} with their custom CPU type in oVirt. In addition, a new option was added to migration plans that are set with oVirt as a source provider to preserve the original CPU types of source VMs. When this option is selected, Forklift identifies the CPU type based on the cluster configuration and sets this CPU type for migrated VMs whose source VMs are not set with a custom CPU type. (MTV-547)
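
A sketch of how this option can appear in a Plan CR that has oVirt as a source provider, assuming the preserveClusterCpuModel field; verify the field name against your Forklift release:

spec:
  preserveClusterCpuModel: true   # assumed field: apply the cluster CPU type to migrated VMs without a custom CPU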

+
+
+
Validation is now available when migrating VMs with a RHEL 6 guest operating system
+

Red Hat Enterprise Linux (RHEL) 9 does not support RHEL 6 as a guest operating system. Therefore, RHEL 6 is not supported in {ocp-name} Virtualization. With this update, a validation for the RHEL 6 guest operating system was added for migrations to {ocp-name} Virtualization. (MTV-413)

+
+
+
Automatic retrieval of CA certificates for the provider’s URL in the console
+

The ability to retrieve CA certificates, which was available in previous versions, has been restored. The vSphere Verify certificate option is in the add-provider dialog. This option was removed in the transition to the OKD console and has now been added to the console. This functionality is also available for oVirt, {osp}, and {ocp-name} providers now. (MTV-737)

+
+
+
Validation of a specified VDDK image
+

Forklift validates the availability of a VDDK image that is specified for a vSphere provider on the target {ocp-name} cluster as part of the validation of a migration plan. Forklift also checks whether the libvixDiskLib.so symbolic link (symlink) exists within the image. If the validation fails, the migration plan cannot be started. (MTV-618)
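
A sketch of where the VDDK image is specified on a vSphere provider, assuming the vddkInitImage setting; the image reference is illustrative:

spec:
  type: vsphere
  settings:
    vddkInitImage: quay.io/example/vddk:8   # illustrative image that contains the VDDK, including libvixDiskLib.so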

+
+
+
Add a warning and partial support for TPM
+

Forklift presents a warning when attempting to migrate a VM that is set with a TPM device from oVirt or vSphere. The migrated VM in {ocp-name} is set with a TPM device, but without the content of the TPM device from the source environment. (MTV-378)

+
+
+
Plans that failed to migrate VMs can now be edited
+

With this update, you can edit plans that have failed to migrate any VMs. Some plans fail or are canceled because of incorrect network and storage mappings. You can now edit these plans until they succeed. (MTV-779)

+
+
+
Validation rules are now available for OVA
+

The validation service includes default validation rules for virtual machines from the Open Virtual Appliance (OVA). (MTV-669)

+
+
+
+
+
+

Resolved issues

+
+
+

This release has the following resolved issues:

+
+
+

Resolved issues 2.6.7

+
+
Incorrect handling of quotes in ifcfg files
+

In earlier releases of Forklift, there was an issue with the incorrect handling of single and double quotes in interface configuration (ifcfg) files, which control the software interfaces for individual network devices. This issue has been resolved in Forklift 2.6.7, which covers additional IP configurations on Red Hat Enterprise Linux, CentOS, Rocky Linux, and similar distributions. (MTV-1439)

+
+
+
Failure to preserve netplan based network configuration
+

In earlier releases of Forklift, there was an issue with the preservation of netplan-based network configurations. This issue has been resolved in Forklift 2.6.7, so that static IP configurations are preserved when netplan (netplan.io) is used. The netplan configuration files are used to generate udev rules for known MAC address and interface name tuples. (MTV-1440)

+
+
+
Error messages are written into udev .rules files
+

In earlier releases of Forklift, there was an issue with the accidental leakage of error messages into udev .rules files. This issue has been resolved in Forklift 2.6.7, with a static IP persistence script added to the udev rule file. (MTV-1441)

+
+
+
+

Resolved issues 2.6.6

+
+
Runtime error: invalid memory address or nil pointer dereference
+

In earlier releases of Forklift, a runtime error of invalid memory address or nil pointer dereference occurred when a nil pointer was dereferenced. This issue has been resolved in Forklift 2.6.6. (MTV-1353)

+
+
+
All Plan and Migration pods scheduled to same node causing the node to crash
+

In earlier releases of Forklift, the scheduler could place all migration pods on a single node. When this happened, the node ran out of resources. This issue has been resolved in Forklift 2.6.6. (MTV-1354)

+
+
+
Empty bearer token is sufficient for authentication
+

In earlier releases of Forklift, a vulnerability was found in the Forklift Controller: there was no verification of the authorization header beyond ensuring that it used bearer authentication. Without an authorization header and a bearer token, a 401 error occurred, but the presence of any token value produced a 200 response with the requested information. This issue has been resolved in Forklift 2.6.6.

+
+
+

For more details, see (CVE-2024-8509).

+
+
+
+

Resolved issues 2.6.5

+
+
VMware Linux interface name changes during migration
+

In earlier releases of Forklift, during the migration of Rocky Linux 8, CentOS 7.2 and later, and Ubuntu 22 virtual machines (VMs) from VMware to OKD (OCP), the names of the network interfaces were modified, and the static IP configuration for the VMs no longer functioned. This issue has been resolved for static IPs in Rocky Linux 8, CentOS 7.2 and later, and Ubuntu 22 in Forklift 2.6.5. (MTV-595)

+
+
+
+

Resolved issues 2.6.4

+
+
Disks and drives are offline after migrating Windows virtual machines from RHV or VMware to OCP
+

In earlier releases of Forklift, Windows (Windows 2022) VMs that were configured with multiple disks, which were Online before the migration, were Offline after a successful migration from oVirt or VMware to OKD by using Forklift. Only the C:\ primary disk was Online. This issue has been resolved for basic disks in Forklift 2.6.4. (MTV-1299)

+
+
+

For details of the known issue of dynamic disks being Offline in Windows Server 2022 after cold and warm migrations from vSphere to container-native virtualization (CNV) with Ceph RADOS Block Devices (RBD), using the storage class ocs-storagecluster-ceph-rbd, see (MTV-1344).

+
+
+
Preserve IP option for Windows does not preserve all settings
+

In earlier releases of Forklift, when migrating a Windows 2022 Server with a static IP address assigned and the Preserve static IPs option selected, the IP address was preserved after the node started, but the subnet mask, gateway, and DNS servers were not. This resulted in an incomplete migration, and the customer was forced to log in locally from the console to fully configure the network. This issue has been resolved in Forklift 2.6.4. (MTV-1286)

+
+
+
qemu-guest-agent not being installed at first boot in Windows Server 2022
+

In earlier releases of Forklift, after a successful Windows 2022 server guest migration using Forklift 2.6.1, the qemu-guest-agent was not completely installed. The Windows scheduled task was created, but it was set to run 4 hours in the future instead of the intended 2 minutes. This issue has been resolved in Forklift 2.6.4. (MTV-1325)

+
+
+
+

Resolved issues 2.6.3

+
+
CVE-2024-24788: golang: net malformed DNS message can cause infinite loop
+

In earlier releases of Forklift, a flaw was discovered in the stdlib package of the Go programming language, which impacted previous versions of Forklift. This vulnerability primarily threatens web-facing applications and services that rely on Go for DNS queries. This issue has been resolved in Forklift 2.6.3.

+
+
+

For more details, see (CVE-2024-24788).

+
+
+
Migration scheduling does not take into account that virt-v2v copies disks sequentially (vSphere only)
+

In earlier releases of Forklift, there was a problem with the way Forklift interpreted the controller_max_vm_inflight setting for vSphere to schedule migrations. This issue has been resolved in Forklift 2.6.3. (MTV-1191)
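
For reference, this setting is exposed on the ForkliftController CR and can be patched from the CLI; the CR name and namespace below are assumptions that depend on your installation:

$ oc patch forkliftcontroller/forklift-controller -n konveyor-forklift \
    --type merge -p '{"spec": {"controller_max_vm_inflight": 20}}'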

+
+
+
Cold migrations fail after changing the ESXi network (vSphere only)
+

In earlier versions of Forklift, cold migrations from a vSphere provider with an ESXi SDK endpoint failed if any network other than the default network was used for disk transfers. This issue has been resolved in Forklift 2.6.3. (MTV-1180)

+
+
+
Warm migrations over an ESXi network are stuck in DiskTransfer state (vSphere only)
+

In earlier versions of Forklift, warm migrations over an ESXi network from a vSphere provider with a vCenter SDK endpoint were stuck in DiskTransfer state because Forklift was unable to locate image snapshots. This issue has been resolved in Forklift 2.6.3. (MTV-1161)

+
+
+
Leftover PVCs are in Lost state after cold migrations
+

In earlier versions of Forklift, after cold migrations, there were leftover PVCs that had a status of Lost instead of being deleted, even after the migration plan that created them was archived and deleted. Investigation showed that this was because importer pods were retained after copying, by default, rather than in only specific cases. This issue has been resolved in Forklift 2.6.3. (MTV-1095)

+
+
+
Guest operating system from vSphere might be missing (vSphere only)
+

In earlier versions of Forklift, some VMs that were imported from vSphere were not mapped to a template in OKD while other VMs, with the same guest operating system, were mapped to the corresponding template. Investigations indicated that this was because vSphere stopped reporting the operating system after not receiving updates from VMware tools for some time. This issue has been resolved in Forklift 2.6.3 by taking the value of the operating system from the output of the inspection that virt-v2v performs on the disks. (MTV-1046)

+
+
+
+

Resolved issues 2.6.2

+
+
CVE-2023-45288: Golang net/http, x/net/http2: unlimited number of CONTINUATION frames can cause a denial-of-service (DoS) attack
+

A flaw was discovered with the implementation of the HTTP/2 protocol in the Go programming language, which impacts previous versions of Forklift. There were insufficient limitations on the number of CONTINUATION frames sent within a single stream. An attacker could potentially exploit this to cause a denial-of-service (DoS) attack. This flaw has been resolved in Forklift 2.6.2.

+
+
+

For more details, see (CVE-2023-45288).

+
+
+
CVE-2024-24785: mtv-api-container: Golang html/template: errors returned from MarshalJSON methods may break template escaping
+

A flaw was found in the html/template Golang standard library package, which impacts previous versions of Forklift. If errors returned from MarshalJSON methods contain user-controlled data, they may be used to break the contextual auto-escaping behavior of the HTML/template package, allowing subsequent actions to inject unexpected content into the templates. This flaw has been resolved in Forklift 2.6.2.

+
+
+

For more details, see (CVE-2024-24785).

+
+
+
CVE-2024-24784: mtv-validation-container: Golang net/mail: comments in display names are incorrectly handled
+

A flaw was found in the net/mail Golang standard library package, which impacts previous versions of Forklift. The ParseAddressList function incorrectly handles comments, text in parentheses, and display names. As this is a misalignment with conforming address parsers, it can result in different trust decisions being made by programs using different parsers. This flaw has been resolved in Forklift 2.6.2.

+
+
+

For more details, see (CVE-2024-24784).

+
+
+
CVE-2024-24783: mtv-api-container: Golang crypto/x509: Verify panics on certificates with an unknown public key algorithm
+

A flaw was found in the crypto/x509 Golang standard library package, which impacts previous versions of Forklift. Verifying a certificate chain that contains a certificate with an unknown public key algorithm causes Certificate.Verify to panic. This affects all crypto/tls clients and servers that set Config.ClientAuth to VerifyClientCertIfGiven or RequireAndVerifyClientCert. The default behavior is for TLS servers to not verify client certificates. This flaw has been resolved in Forklift 2.6.2.

+
+
+

For more details, see (CVE-2024-24783).

+
+
+
CVE-2023-45290: mtv-api-container: Golang net/http memory exhaustion in Request.ParseMultipartForm
+

A flaw was found in the net/http Golang standard library package, which impacts previous versions of Forklift. When parsing a multipart form, either explicitly with Request.ParseMultipartForm or implicitly with Request.FormValue, Request.PostFormValue, or Request.FormFile, limits on the total size of the parsed form are not applied to the memory consumed while reading a single form line. This permits a maliciously crafted input containing long lines to cause the allocation of arbitrarily large amounts of memory, potentially leading to memory exhaustion. This flaw has been resolved in Forklift 2.6.2.

+
+
+

For more details, see (CVE-2023-45290).

+
+
+
ImageConversion does not run when target storage is set with WaitForFirstConsumer (WFFC)
+

In earlier releases of Forklift, migration of VMs failed because the migration was stuck in the AllocateDisks phase. As a result of being stuck, the migration did not progress, and PVCs were not bound. The root cause of the issue was that ImageConversion did not run when target storage was set for wait-for-first-consumer. The problem was resolved in Forklift 2.6.2. (MTV-1126)

+
+
+
forklift-controller panics when importing VMs with direct LUNs
+

In earlier releases of Forklift, forklift-controller panicked when a user attempted to import VMs that had direct LUNs. The problem was resolved in Forklift 2.6.2. (MTV-1134)

+
+
+
+

Resolved issues 2.6.1

+
+
VMs with multiple disks that are migrated from vSphere and OVA files are not being fully copied
+

In Forklift 2.6.0, there was a problem in copying VMs with multiple disks from VMware vSphere and from OVA files. The migrations appeared to succeed, but all the disks were transferred to the same PV in the target environment, leaving the other PVs empty. In some cases, bootable disks were overwritten, so the VM could not boot. In other cases, data from the other disks was missing. The problem was resolved in Forklift 2.6.1. (MTV-1067)

+
+
+
Migrating VMs from one OKD cluster to another fails due to a timeout
+

In Forklift 2.6.0, migrations from one OKD cluster to another failed when the time to transfer the disks of a VM exceeded the time to live (TTL) of the Export API in {ocp-name}, which was set to 2 hours by default. The problem was resolved in Forklift 2.6.1 by setting the default TTL of the Export API to 12 hours, which greatly reduces the possibility of an expiration of the Export API. Additionally, you can increase or decrease the TTL setting as needed. (MTV-1052)

+
+
+
Forklift forklift-controller pod crashes when receiving a disk without a datastore
+

In earlier releases of Forklift, if a VM was configured with a disk that was on a datastore that was no longer available in vSphere at the time a migration was attempted, the forklift-controller crashed, rendering Forklift unusable. In Forklift 2.6.1, Forklift presents a critical validation for VMs with such disks, informing users of the problem, and the forklift-controller no longer crashes, although it cannot transfer the disk. (MTV-1029)

+
+
+
+

Resolved issues 2.6.0

+
+
Deleting an OVA provider automatically also deletes the PV
+

In earlier releases of Forklift, the PV was not removed when the OVA provider was deleted. This has been resolved in Forklift 2.6.0, and the PV is automatically deleted when the OVA provider is deleted. (MTV-848)

+
+
+
Fix for data being lost when migrating VMware VMs with snapshots
+

In earlier releases of Forklift, when migrating a VM that has a snapshot from VMware, the VM that was created in {ocp-name} Virtualization contained the data in the snapshot but not the latest data of the VM. This has been resolved in Forklift 2.6.0. (MTV-447)

+
+
+
Canceling and deleting a failed migration plan does not clean up the populate pods and PVC
+

In earlier releases of Forklift, when you canceled and deleted a failed migration plan after a PVC had been created and the populate pods had been spawned, the populate pods and the PVC were not deleted. You had to delete the pods and the PVC manually. This issue has been resolved in Forklift 2.6.0. (MTV-678)

+
+
+
OKD to OKD migrations require the cluster version to be 4.13 or later
+

In earlier releases of Forklift, when migrating from OKD to OKD, the version of the source provider cluster had to be OKD version 4.13 or later. This issue has been resolved in Forklift 2.6.0, with validation being shown when migrating from versions of {ocp-name} before 4.13. (MTV-734)

+
+
+
Multiple storage domains from RHV were always mapped to a single storage class
+

In earlier releases of Forklift, multiple disks from different storage domains were always mapped to a single storage class, regardless of the storage mapping that was configured. This issue has been resolved in Forklift 2.6.0. (MTV-1008)

+
+
+
Firmware detection by virt-v2v
+

In earlier releases of Forklift, a VM that was migrated from an OVA that did not include the firmware type in its OVF configuration was set with UEFI. This was incorrect for VMs that were configured with BIOS. This issue has been resolved in Forklift 2.6.0, as Forklift now consumes the firmware that is detected by virt-v2v during the conversion of the disks. (MTV-759)

+
+
+
Creating a host secret requires validation of the secret before creation of the host
+

In earlier releases of Forklift, when configuring a transfer network for vSphere hosts, the console plugin created the Host CR before creating its secret. The secret should be specified first in order to validate it before the Host CR is posted. This issue has been resolved in Forklift 2.6.0. (MTV-868)

+
+
+
When adding OVA provider a ConnectionTestFailed message appears
+

In earlier releases of Forklift, when adding an OVA provider, the error message ConnectionTestFailed instantly appeared, although the provider had been created successfully. This issue has been resolved in Forklift 2.6.0. (MTV-671)

+
+
+
RHV provider ConnectionTestSucceeded True response from the wrong URL
+

In earlier releases of Forklift, the ConnectionTestSucceeded condition was set to True even when the URL was different than the API endpoint for the RHV Manager. This issue has been resolved in Forklift 2.6.0. (MTV-740)

+
+
+
Migration does not fail when a vSphere Data Center is nested inside a folder
+

In earlier releases of Forklift, migrating a VM that was placed in a data center stored directly under /vcenter in vSphere succeeded. However, the migration failed when the data center was stored inside a folder. This issue was resolved in Forklift 2.6.0. (MTV-796)

+
+
+
The OVA inventory watcher detects deleted files
+

The OVA inventory watcher detects file changes, including deleted files. Updates from the ova-provider-server pod are now sent every five minutes to the forklift-controller pod, which updates the inventory. (MTV-733)

+
+
+
Unclear error message when Forklift fails to build or create a PVC
+

In earlier releases of Forklift, the error logs lacked clear information to identify the reason for a failure to create a PV on a destination storage class that does not have a configured storage profile. This issue was resolved in Forklift 2.6.0. (MTV-928)

+
+
+
Plans stay indefinitely in the CopyDisks phase when there is an outdated ovirtvolumepopulator
+

In earlier releases of Forklift, an earlier failed migration could have left an outdated ovirtvolumepopulator. When starting a new plan for the same VM to the same project, the CreateDataVolumes phase did not create populator PVCs when transitioning to CopyDisks, causing the CopyDisks phase to stay indefinitely. This issue was resolved in Forklift 2.6.0. (MTV-929)

+
+
+

For a complete list of all resolved issues in this release, see the list of Resolved Issues in Jira.

+
+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+ + + + + +
+
Warning
+
+
Warm migration and remote migration flows are impacted by multiple bugs
+
+

Warm migration and remote migration flows are impacted by multiple bugs. It is strongly recommended to fall back to cold migration until this issue is resolved. (MTV-1366)

+
+
+
+
+
When migrating older Linux distributions from VMware to OKD, the names of the network interfaces change
+

When migrating virtual machines (VMs) with older Linux distributions, such as CentOS 7.0 and 7.1, from VMware to OKD, the names of the network interfaces change, and the static IP configuration for the VM no longer functions. This issue is caused by RHEL 7.0 and 7.1 still requiring virtio-transitional. Workaround: Manually update the guest to RHEL 7.2, or update the VM specification after migration to use virtio-transitional, as shown in the sketch that follows. (MTV-1382)
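
A sketch of the post-migration VM specification change, assuming the KubeVirt useVirtioTransitional device setting; verify the field against the KubeVirt version in your cluster:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  template:
    spec:
      domain:
        devices:
          useVirtioTransitional: true   # assumed field: use transitional virtio devices for older guests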

+
+
+
Dynamic disks are offline in Windows Server 2022 after migration from vSphere to CNV with ceph-rbd
+

The dynamic disks are Offline in Windows Server 2022 after cold and warm migrations from vSphere to container-native virtualization (CNV) with Ceph RADOS Block Devices (RBD), using the storage class ocs-storagecluster-ceph-rbd. (MTV-1344)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#22008846)

+
+
+
Migration of virtual machines with encrypted partitions fails during a conversion (vSphere only)
+

vSphere only: The migration of VMs with encrypted partitions fails during conversion. Migrations from oVirt and {osp} do not fail, but the encryption key might be missing on the target OKD cluster.

+
+
+
Migration fails during precopy/cutover while performing a snapshot operation on the source VM
+

Warm migration from oVirt fails if a snapshot operation is triggered and running on the source VM at the same time as the migration is scheduled. The migration does not wait for the snapshot operation to finish. (MTV-456)

+
+
+
Unable to schedule migrated VM with multiple disks to more than one storage class of type hostPath
+

When migrating a VM with multiple disks to more than one storage class of type hostPath, the VM might fail to be scheduled. Workaround: Use shared storage on the target OKD cluster.

+
+
+
Non-supported guest operating systems in warm migrations
+

Warm migrations and migrations to remote OKD clusters from vSphere do not support the same guest operating systems that are supported in cold migrations and migrations to the local OKD cluster. RHEL 8 and RHEL 9 might cause this limitation.

+
+ +
+
VMs from vSphere with RHEL 9 guest operating system can start with network interfaces that are down
+

When migrating VMs that are installed with RHEL 9 as a guest operating system from vSphere, the network interfaces of the VMs could be disabled when they start in {ocp-name} Virtualization. (MTV-491)

+
+
+
Migration of a VM with NVME disks from vSphere fails
+

When migrating a virtual machine (VM) with NVME disks from vSphere, the migration process fails, and the web console shows the Convert image to kubevirt stage as running even though it did not finish successfully. (MTV-963)

+
+
+
Importing image-based VMs can fail
+

Migrating an image-based VM without the virtual_size field can fail on a block mode storage class. (MTV-946)

+
+
+
Deleting a migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)

+
+
+
Migrating VMs with independent persistent disks from VMware to OCP-V fails
+

Migrating VMs with independent persistent disks from VMware to OCP-V fails. (MTV-993)

+
+
+
Guest operating system from vSphere might be missing
+

When vSphere does not receive updates about the guest operating system from the VMware tools, it considers the information about the guest operating system to be outdated and ceases to report it. When this occurs, Forklift is unaware of the guest operating system of the VM and is unable to associate it with the appropriate virtual machine preference or {ocp-name} template. (MTV-1046)

+
+
+
Failure to migrate an image-based VM from {osp} to the default project
+

The migration process fails when migrating an image-based VM from {osp} to the default project. (MTV-964)

+
+
+

For a complete list of all known issues in this release, see the list of Known Issues in Jira.

+
+
+
+ + +
+ + diff --git a/documentation/modules/rn-2.7/index.html b/documentation/modules/rn-2.7/index.html new file mode 100644 index 00000000000..3589eba6c94 --- /dev/null +++ b/documentation/modules/rn-2.7/index.html @@ -0,0 +1,91 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.7

+
+

You can use Forklift to migrate virtual machines from the following source providers to KubeVirt destination providers:

+
+
+
    +
  • +

    VMware vSphere versions 6, 7, and 8

    +
  • +
  • +

    oVirt

    +
  • +
  • +

    {osp}

    +
  • +
  • +

    Open Virtual Appliances (OVAs) that were created by VMware vSphere

    +
  • +
  • +

    Remote KubeVirt clusters

    +
  • +
+
+
+

The release notes describe technical changes, new features and enhancements, known issues, and resolved issues.

+
+ + +
+ + diff --git a/documentation/modules/rn-27-resolved-issues/index.html b/documentation/modules/rn-27-resolved-issues/index.html new file mode 100644 index 00000000000..f42f84cab02 --- /dev/null +++ b/documentation/modules/rn-27-resolved-issues/index.html @@ -0,0 +1,168 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Resolved issues

+
+
+
+

Forklift 2.7 has the following resolved issues:

+
+
+
+
+

Resolved issues 2.7.3

+
+
+
Migration plan does not fail when conversion pod fails
+

In earlier releases of Forklift, when running the virt-v2v guest conversion, the migration plan did not fail as expected when the conversion pod failed. This issue has been resolved in Forklift 2.7.3. (MTV-1569)

+
+
+
Large number of VMs in the inventory can cause the inventory controller to panic
+

In earlier releases of Forklift, having a large number of virtual machines (VMs) in the inventory could cause the inventory controller to panic and return a concurrent write to websocket connection warning. This issue was caused by a concurrent write to the WebSocket connection and has been addressed by the addition of a lock, so the goroutine waits before sending the response from the server. This issue has been resolved in Forklift 2.7.3. (MTV-1220)

+
+
+
VM selection disappears when selecting multiple VMs in the Migration Plan
+

In earlier releases of Forklift, the VM selection checkbox disappeared after multiple VMs were selected in the Migration Plan. This issue has been resolved in Forklift 2.7.3. (MTV-1546)

+
+
+
forklift-controller crashing during OVA plan migration
+

In earlier releases of Forklift, the forklift-controller crashed during an OVA plan migration, returning a runtime error: invalid memory address or nil pointer dereference panic. This issue has been resolved in Forklift 2.7.3. (MTV-1577)

+
+
+
+
+

Resolved issues 2.7.2

+
+
+
VMNetworksNotMapped error occurs after creating a plan from the UI with the source provider set to KubeVirt
+

In earlier releases of Forklift, after creating a plan with a KubeVirt source provider, the Migration Plan failed with the error The plan is not ready - VMNetworksNotMapped. This issue has been resolved in Forklift 2.7.2. (MTV-1201)

+
+
+
Migration Plan for KubeVirt to KubeVirt missing the source namespace causing VMNetworkNotMapped error
+

In earlier releases of Forklift, when creating a Migration Plan for a KubeVirt to KubeVirt migration by using the Plan Creation Form, the generated network map was missing the source namespace, which caused a VMNetworkNotMapped error on the plan. This issue has been resolved in Forklift 2.7.2. (MTV-1297)

+
+
+
DV, PVC, and PV are not cleaned up and removed if the migration plan is Archived and Deleted
+

In earlier releases of Forklift, the DataVolume (DV), PersistentVolumeClaim (PVC), and PersistentVolume (PV) continued to exist after the migration plan was archived and deleted. This issue has been resolved in Forklift 2.7.2. (MTV-1477)

+
+
+
Other migrations are blocked from starting while the scheduler waits for the complete VM to be transferred
+

In earlier releases of Forklift, when warm migrating a virtual machine (VM) that has several disks, the scheduler waited for all the disks of the VM to finish transferring before any other migration could start. This issue has been resolved in Forklift 2.7.2. (MTV-1537)

+
+
+
Warm migration is not functioning as expected
+

In earlier releases of Forklift, warm migration did not function as expected. When running a warm migration of VMs with more disks than the MaxInFlight setting, the VMs over this number did not start migrating until the cutover. This issue has been resolved in Forklift 2.7.2. (MTV-1543)

+
+
+
Migration hanging due to error: virt-v2v: error: -i libvirt: expecting a libvirt guest name
+

In earlier releases of Forklift, when attempting to migrate a VMware VM with a non-compliant Kubernetes name, the OpenShift console returned a warning that the VM would be renamed. However, after the Migration Plan started, it hung because the migration pod was in an Error state. This issue has been resolved in Forklift 2.7.2. (MTV-1555)

+
+
+
VMs are not migrated if they have more disks than MAX_VM_INFLIGHT
+

In earlier releases of Forklift, when migrating a VM by using warm migration, if the VM had more disks than MAX_VM_INFLIGHT, the VM was not scheduled and the migration did not start. This issue has been resolved in Forklift 2.7.2. (MTV-1573)

+
+
+
Migration Plan returns an error even when Changed Block Tracking (CBT) is enabled
+

In earlier releases of Forklift, if the CBT flag was enabled on a running VM in VMware by adding both the ctkEnabled=TRUE and scsi0:0.ctkEnabled=TRUE parameters, the error message Danger alert: The plan is not ready - VMMissingChangedBlockTracking was returned, and the migration plan was prevented from working. This issue has been resolved in Forklift 2.7.2. (MTV-1576)

+
+
+
+
+

Resolved issues 2.7.0

+
+
+
Change . to - in the names of VMs that are migrated
+

In earlier releases of Forklift, if the name of a virtual machine (VM) contained a period (.), it was changed to a dash (-) when the VM was migrated. This issue has been resolved in Forklift 2.7.0. (MTV-1292)

+
+
+
Status condition indicating a failed mapping resource in a plan is not added to the plan
+

In earlier releases of Forklift, a status condition indicating a failed mapping resource of a plan was not added to the plan. This issue has been resolved in Forklift 2.7.0, with a status condition indicating the failed mapping being added. (MTV-1461)

+
+
+
ifcfg files with HWaddr cause the NIC name to change
+

In earlier releases of Forklift, interface configuration (ifcfg) files with a hardware address (HWaddr) of the Ethernet interface caused the name of the network interface controller (NIC) to change. This issue has been resolved in Forklift 2.7.0. (MTV-1463)

+
+
+
Import fails with special characters in VMX file
+

In earlier releases of Forklift, imports failed when there were special characters in the parameters of the VMX file. This issue has been resolved in Forklift 2.7.0. (MTV-1472)

+
+
+
Observed invalid memory address or nil pointer dereference panic
+

In earlier releases of Forklift, an invalid memory address or nil pointer dereference panic was observed, which was caused by a refactor and could be triggered when there was a problem with the inventory pod. This issue has been resolved in Forklift 2.7.0. (MTV-1482)

+
+
+
Static IPv4 changed after warm migrating win2022/2019 VMs
+

In earlier releases of Forklift, the static Internet Protocol version 4 (IPv4) address was changed after a warm migration of Windows Server 2022 and Windows Server 2019 VMs. This issue has been resolved in Forklift 2.7.0. (MTV-1491)

+
+
+
Warm migration is missing arguments
+

In earlier releases of Forklift, virt-v2v-in-place for the warm migration was missing arguments that were available in virt-v2v for the cold migration. This issue has been resolved in Forklift 2.7.0. (MTV-1495)

+
+
+
Default gateway settings changed after migrating Windows Server 2022 VMs with preserve static IPs
+

In earlier releases of Forklift, the default gateway settings were changed after migrating Windows Server 2022 VMs with the preserve static IPs setting. This issue has been resolved in Forklift 2.7.0. (MTV-1497)

+
+
+
+ + +
+ + diff --git a/documentation/modules/running-migration-plan/index.html b/documentation/modules/running-migration-plan/index.html new file mode 100644 index 00000000000..f86e5b29844 --- /dev/null +++ b/documentation/modules/running-migration-plan/index.html @@ -0,0 +1,135 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Running a migration plan

+
+

You can run a migration plan and view its progress in the OKD web console.

+
+
+
Prerequisites
+
    +
  • +

    Valid migration plan.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration > Plans for virtualization.

    +
    +

    The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, and the description of each plan.

    +
    +
  2. +
  3. +

    Click Start beside a migration plan to start the migration.

    +
  4. +
  5. +

    Click Start in the confirmation window that opens.

    +
    +

    The Migration details by VM screen opens, displaying the migration’s progress.

    +
    +
    +

    Warm migration only:

    +
    +
    +
      +
    • +

      The precopy stage starts.

      +
    • +
    • +

      Click Cutover to complete the migration.

      +
    • +
    +
    +
  6. +
  7. +

    If the migration fails:

    +
    +
      +
    1. +

      Click Get logs to retrieve the migration logs.

      +
    2. +
    3. +

      Click Get logs in the confirmation window that opens.

      +
    4. +
    5. +

      Wait until Get logs changes to Download logs and then click the button to download the logs.

      +
    6. +
    +
    +
  8. +
  9. +

    Click a migration’s Status, whether it failed or succeeded or is still ongoing, to view the details of the migration.

    +
    +

    The Migration details by VM screen opens, displaying the start and end times of the migration, the amount of data copied, and a progress pipeline for each VM being migrated.

    +
    +
  10. +
  11. +

    Expand an individual VM to view its steps and the elapsed time and state of each step.

    +
  12. +
+
+ + +
+ + diff --git a/documentation/modules/selecting-migration-network-for-virt-provider/index.html b/documentation/modules/selecting-migration-network-for-virt-provider/index.html new file mode 100644 index 00000000000..885bcf1ba2d --- /dev/null +++ b/documentation/modules/selecting-migration-network-for-virt-provider/index.html @@ -0,0 +1,100 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Selecting a migration network for a KubeVirt provider

+
+

You can select a default migration network for a KubeVirt provider in the OKD web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.

+
+
+

If you do not select a migration network, the default migration network is the pod network, which might not be optimal for disk transfer.

+
+
+ + + + + +
+
Note
+
+
+

You can override the default migration network of the provider by selecting a different network when you create a migration plan.

+
+
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration > Providers for virtualization.

    +
  2. +
  3. +

    On the right side of the provider, select Select migration network from the {kebab}.

    +
  4. +
  5. +

    Select a network from the list of available networks and click Select.

    +
  6. +
+
+ + +
+ + diff --git a/documentation/modules/selecting-migration-network-for-vmware-source-provider/index.html b/documentation/modules/selecting-migration-network-for-vmware-source-provider/index.html new file mode 100644 index 00000000000..348da7ece85 --- /dev/null +++ b/documentation/modules/selecting-migration-network-for-vmware-source-provider/index.html @@ -0,0 +1,142 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Selecting a migration network for a VMware source provider

+
+

You can select a migration network in the OKD web console for a source provider to reduce risk to the source environment and to improve performance.

+
+
+

Using the default network for migration can result in poor performance because the network might not have sufficient bandwidth. This situation can have a negative effect on the source platform because the disk transfer operation might saturate the network.

+
+
+

You can also control the network from which disks are transferred from a host by using the Network File Copy (NFC) service in vSphere.

+
+
+
Prerequisites
+
    +
  • +

    The migration network must have sufficient throughput for disk transfer, with a minimum speed of 10 Gbps.

    +
  • +
  • +

    The migration network must be accessible to the KubeVirt nodes through the default gateway.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    The source virtual disks are copied by a pod that is connected to the pod network of the target namespace.

    +
    +
    +
    +
  • +
  • +

    The migration network should have jumbo frames enabled.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration > Providers for virtualization.

    +
  2. +
  3. +

    Click the host number in the Hosts column beside a provider to view a list of hosts.

    +
  4. +
  5. +

    Select one or more hosts and click Select migration network.

    +
  6. +
  7. +

    Specify the following fields:

    +
    +
      +
    • +

      Network: Network name

      +
    • +
    • +

      ESXi host admin username: For example, root

      +
    • +
    • +

      ESXi host admin password: Password

      +
    • +
    +
    +
  8. +
  9. +

    Click Save.

    +
  10. +
  11. +

    Verify that the status of each host is Ready.

    +
    +

    If a host status is not Ready, the host might be unreachable on the migration network or the credentials might be incorrect. You can modify the host configuration and save the changes.

    +
    +
  12. +
+
+ + +
+ + diff --git a/documentation/modules/selecting-migration-network/index.html b/documentation/modules/selecting-migration-network/index.html new file mode 100644 index 00000000000..bc840a983dc --- /dev/null +++ b/documentation/modules/selecting-migration-network/index.html @@ -0,0 +1,118 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Selecting a migration network for a source provider

+
+

You can select a migration network for a source provider in the Forklift web console for improved performance.

+
+
+

If a source network is not optimal for migration, a Warning icon is displayed beside the host number in the Hosts column of the provider list.

+
+
+
Prerequisites
+

The migration network has the following prerequisites:

+
+
+
    +
  • +

    Minimum speed of 10 Gbps.

    +
  • +
  • +

    Accessible to the OpenShift nodes through the default gateway. The source disks are copied by a pod that is connected to the pod network of the target namespace.

    +
  • +
  • +

    Jumbo frames enabled.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Click Providers.

    +
  2. +
  3. +

    Click the host number of a provider to view the host list and network details.

    +
  4. +
  5. +

    Select the host to be updated and click Select migration network.

    +
  6. +
  7. +

    Select a Network from the list of available networks.

    +
    +

    The network list displays only the networks accessible to all the selected hosts. The hosts must have

    +
    +
  8. +
  9. +

    Click Check connection to verify the credentials.

    +
  10. +
  11. +

    Click Select to select the migration network.

    +
    +

    The migration network appears in the network details of the updated hosts.

    +
    +
  12. +
+
+ + +
+ + diff --git a/documentation/modules/snip-certificate-options/index.html b/documentation/modules/snip-certificate-options/index.html new file mode 100644 index 00000000000..c53e67d0b24 --- /dev/null +++ b/documentation/modules/snip-certificate-options/index.html @@ -0,0 +1,114 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
    +
  1. +

    Choose one of the following options for validating CA certificates:

    +
    +
      +
    • +

      Use a custom CA certificate: Migrate after validating a custom CA certificate.

      +
    • +
    • +

      Use the system CA certificate: Migrate after validating the system CA certificate.

      +
    • +
    • +

      Skip certificate validation: Migrate without validating a CA certificate.

      +
      +
        +
      1. +

        To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.

        +
      2. +
      3. +

        To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.

        +
      4. +
      5. +

        To skip certificate validation, toggle the Skip certificate validation switch to the right.

        +
      6. +
      +
      +
    • +
    +
    +
  2. +
  3. +

    Optional: Ask Forklift to fetch a custom CA certificate from the provider’s API endpoint URL.

    +
    +
      +
    1. +

      Click Fetch certificate from URL. The Verify certificate window opens.

      +
    2. +
    3. +

      If the details are correct, select the I trust the authenticity of this certificate checkbox, and then click Confirm. If not, click Cancel, and then enter the correct certificate information manually.

      +
      +

      Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint.

      +
      +
    4. +
    +
    +
  4. +
+
+ + +
+ + diff --git a/documentation/modules/snip-migrating-luns/index.html b/documentation/modules/snip-migrating-luns/index.html new file mode 100644 index 00000000000..906ad2d15db --- /dev/null +++ b/documentation/modules/snip-migrating-luns/index.html @@ -0,0 +1,86 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + +
+
Note
+
+
+
    +
  • +

    Unlike disk images that are copied from a source provider to a target provider, LUNs are detached, but not removed, from virtual machines in the source provider and then attached to the virtual machines (VMs) that are created in the target provider.

    +
  • +
  • +

    LUNs are not removed from the source provider during the migration in case fallback to the source provider is required. However, before re-attaching the LUNs to VMs in the source provider, ensure that the LUNs are not used by VMs on the target environment at the same time, which might lead to data corruption.

    +
  • +
+
+
+
+ + +
+ + diff --git a/documentation/modules/snip_cold-warm-comparison-table/index.html b/documentation/modules/snip_cold-warm-comparison-table/index.html new file mode 100644 index 00000000000..c8c84639848 --- /dev/null +++ b/documentation/modules/snip_cold-warm-comparison-table/index.html @@ -0,0 +1,100 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

Both cold migration and warm migration have advantages and disadvantages, as described in the table that follows:

+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Advantages and disadvantages of cold and warm migrations
                    Cold migration                                   Warm migration
Duration            Correlates to the amount of data on the disks    Correlates to the amount of data on the disks and VM utilization
Data transferred    Approximate sum of all disks                     Approximate sum of all disks and VM utilization
VM downtime         High                                             Low

+ + +
+ + diff --git a/documentation/modules/snip_measured_boot_windows_vm/index.html b/documentation/modules/snip_measured_boot_windows_vm/index.html new file mode 100644 index 00000000000..316a51a5c4c --- /dev/null +++ b/documentation/modules/snip_measured_boot_windows_vm/index.html @@ -0,0 +1,72 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
Windows VMs which are using Measured Boot cannot be migrated
+

Microsoft Windows virtual machines (VMs), which are using the Measured Boot feature, cannot be migrated because Measured Boot is a mechanism to prevent any kind of device changes, by checking each start-up component, including the firmware, all the way to the boot driver.

+
+
+

The alternative to migration is to re-create the Windows VM directly on KubeVirt.

+
+ + +
+ + diff --git a/documentation/modules/snip_performance/index.html b/documentation/modules/snip_performance/index.html new file mode 100644 index 00000000000..b8fe4e8b75d --- /dev/null +++ b/documentation/modules/snip_performance/index.html @@ -0,0 +1,74 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

The data provided here was collected from testing in Red Hat Labs and is provided for reference only. 

+
+
+

Overall, these numbers should be considered to show the best-case scenarios.

+
+
+

The observed performance of migration can differ from these results and depends on several factors.

+
+ + +
+ + diff --git a/documentation/modules/snip_permissions-info/index.html b/documentation/modules/snip_permissions-info/index.html new file mode 100644 index 00000000000..d8cd2591f2b --- /dev/null +++ b/documentation/modules/snip_permissions-info/index.html @@ -0,0 +1,85 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

If you are an administrator, you can see and work with components (providers, plans, etc.) for all projects.

+
+
+

If you are a non-administrator, you can see and work only with the components of projects you have permissions for.

+
+
+ + + + + +
+
Tip
+
+
+

You can see which projects you have permissions for by clicking the Project list, which is in the upper-left of every page in the Migrations section except for the Overview.

+
+
+
+ + +
+ + diff --git a/documentation/modules/snip_plan-limits/index.html b/documentation/modules/snip_plan-limits/index.html new file mode 100644 index 00000000000..049c9ee4014 --- /dev/null +++ b/documentation/modules/snip_plan-limits/index.html @@ -0,0 +1,79 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + +
+
Important
+
+
+

A plan cannot contain more than 500 VMs or 500 disks.

+
+
+
+ + +
+ + diff --git a/documentation/modules/snip_qemu-guest-agent/index.html b/documentation/modules/snip_qemu-guest-agent/index.html new file mode 100644 index 00000000000..846f51b7f39 --- /dev/null +++ b/documentation/modules/snip_qemu-guest-agent/index.html @@ -0,0 +1,74 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

VMware only: In cold migrations, in situations in which a package manager cannot be used during the migration, Forklift does not install the qemu-guest-agent daemon on the migrated VMs. This has some impact on the functionality of the migrated VMs, but overall, they are still expected to function.

+
+
+

To enable Forklift to automatically install qemu-guest-agent on the migrated VMs, ensure that your package manager can install the daemon during the first boot of the VM after migration.

+
+
+

If that is not possible, use your preferred automated or manual procedure to install qemu-guest-agent manually.

+
+ + +
+ + diff --git a/documentation/modules/snip_secure_boot_issue/index.html b/documentation/modules/snip_secure_boot_issue/index.html new file mode 100644 index 00000000000..72863f200da --- /dev/null +++ b/documentation/modules/snip_secure_boot_issue/index.html @@ -0,0 +1,72 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
VMs with Secure Boot enabled might not be migrated automatically
+

Virtual machines (VMs) with Secure Boot enabled currently might not be migrated automatically. This is because Secure Boot, a security standard developed by members of the PC industry to ensure that a device boots using only software that is trusted by the Original Equipment Manufacturer (OEM), would prevent the VMs from booting on the destination provider. 

+
+
+

Workaround: The current workaround is to disable Secure Boot on the destination. For more details, see Disabling Secure Boot. (MTV-1548)

+
+ + +
+ + diff --git a/documentation/modules/snip_vmware-name-change/index.html b/documentation/modules/snip_vmware-name-change/index.html new file mode 100644 index 00000000000..671b0a29949 --- /dev/null +++ b/documentation/modules/snip_vmware-name-change/index.html @@ -0,0 +1,79 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + +
+
Important
+
+
+

When you migrate a VMware 7 VM that uses CentOS 7.9 to an OKD 4.13+ platform, the names of the network interfaces change, and the static IP configuration for the VM no longer works.

+
+
+
+ + +
+ + diff --git a/documentation/modules/snip_vmware-permissions/index.html b/documentation/modules/snip_vmware-permissions/index.html new file mode 100644 index 00000000000..c89a856e8b3 --- /dev/null +++ b/documentation/modules/snip_vmware-permissions/index.html @@ -0,0 +1,86 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + +
+
Important
+
+
forklift-controller consistently failing to reconcile a plan, and returning an HTTP 500 error
+
+

There is an issue with the forklift-controller consistently failing to reconcile a Migration Plan, and subsequently returning an HTTP 500 error. This issue is caused when you specify the user permissions only on the virtual machine (VM).

+
+
+

In Forklift, you need to add permissions at the data center level, including the storage, networks, and switches that are used by the VM. You must then propagate the permissions to the child elements.

+
+
+

If you do not want to add this level of permissions, you must manually add the required permissions to each object on the VM host.

+
+
+
+ + +
+ + diff --git a/documentation/modules/snip_vmware_esxi_nfc/index.html b/documentation/modules/snip_vmware_esxi_nfc/index.html new file mode 100644 index 00000000000..381d8b02cf7 --- /dev/null +++ b/documentation/modules/snip_vmware_esxi_nfc/index.html @@ -0,0 +1,79 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + +
+
Note
+
+
+

You can also control the network from which disks are transferred from a host by using the Network File Copy (NFC) service in vSphere.

+
+
+
+ + +
+ + diff --git a/documentation/modules/snippet_getting_web_console_url_cli/index.html b/documentation/modules/snippet_getting_web_console_url_cli/index.html new file mode 100644 index 00000000000..d31bf5e444d --- /dev/null +++ b/documentation/modules/snippet_getting_web_console_url_cli/index.html @@ -0,0 +1,87 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

+

+
+
+
+
$ kubectl get route virt -n konveyor-forklift \
+  -o custom-columns=:.spec.host
+
+
+
+

The URL for the forklift-ui service that opens the login page for the Forklift web console is displayed.

+
+
+

Example output

+
+
+
+
https://virt-konveyor-forklift.apps.cluster.openshift.com.
+
+
+ + +
+ + diff --git a/documentation/modules/snippet_getting_web_console_url_web/index.html b/documentation/modules/snippet_getting_web_console_url_web/index.html new file mode 100644 index 00000000000..461130466e4 --- /dev/null +++ b/documentation/modules/snippet_getting_web_console_url_web/index.html @@ -0,0 +1,84 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
    +
  1. +

    Log in to the OKD web console.

    +
  2. +
  3. +

    Click Networking > Routes.

    +
  4. +
  5. +

    Select the {namespace} project in the Project: list.

    +
    +

    The URL for the forklift-ui service that opens the login page for the Forklift web console is displayed.

    +
    +
    +

    Click the URL to navigate to the Forklift web console.

    +
    +
  6. +
+
+ + +
+ + diff --git a/documentation/modules/snippet_ova_tech_preview/index.html b/documentation/modules/snippet_ova_tech_preview/index.html new file mode 100644 index 00000000000..54d5718e490 --- /dev/null +++ b/documentation/modules/snippet_ova_tech_preview/index.html @@ -0,0 +1,87 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview.

+
+
+ + + + + +
+
Important
+
+
+

Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

+
+
+

For more information about the support scope of Red Hat Technology Preview +features, see https://access.redhat.com/support/offerings/techpreview/.

+
+
+
+ + +
+ + diff --git a/documentation/modules/source-vm-prerequisites/index.html b/documentation/modules/source-vm-prerequisites/index.html new file mode 100644 index 00000000000..422422f86a2 --- /dev/null +++ b/documentation/modules/source-vm-prerequisites/index.html @@ -0,0 +1,127 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Source virtual machine prerequisites

+
+

The following prerequisites apply to all migrations:

+
+
+
    +
  • +

    ISO/CDROM disks must be unmounted.

    +
  • +
  • +

    Each NIC must contain one IPv4 and/or one IPv6 address.

    +
  • +
  • +

    The operating system of a VM must be certified and supported as a guest operating system with KubeVirt.

    +
  • +
  • +

    The name of a VM must not contain a period (.). Forklift changes any period in a VM name to a dash (-).

    +
  • +
  • +

    The name of a VM must not be the same as any other VM in the KubeVirt environment.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    Forklift automatically assigns a new name to a VM that does not comply with the rules.

    +
    +
    +

    Forklift makes the following changes when it automatically generates a new VM name:

    +
    +
    +
      +
    • +

      Excluded characters are removed.

      +
    • +
    • +

      Uppercase letters are switched to lowercase letters.

      +
    • +
    • +

      Any underscore (_) is changed to a dash (-).

      +
    • +
    +
    +
    +

    This feature allows a migration to proceed smoothly even if someone enters a VM name that does not follow the rules.

    +
    +
    +
    +
  • +
+
+
+

Unresolved directive in source-vm-prerequisites.adoc - include::snip_secure_boot_issue.adoc[]

+
+
+

Unresolved directive in source-vm-prerequisites.adoc - include::snip_measured_boot_windows_vm.adoc[]

+
+ + +
+ + diff --git a/documentation/modules/storage-support/index.html b/documentation/modules/storage-support/index.html new file mode 100644 index 00000000000..eff55773528 --- /dev/null +++ b/documentation/modules/storage-support/index.html @@ -0,0 +1,211 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Storage support and default modes

+
+

Forklift uses the following default volume and access modes for supported storage.

+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Default volume and access modes
ProvisionerVolume modeAccess mode

kubernetes.io/aws-ebs

Block

ReadWriteOnce

kubernetes.io/azure-disk

Block

ReadWriteOnce

kubernetes.io/azure-file

Filesystem

ReadWriteMany

kubernetes.io/cinder

Block

ReadWriteOnce

kubernetes.io/gce-pd

Block

ReadWriteOnce

kubernetes.io/hostpath-provisioner

Filesystem

ReadWriteOnce

manila.csi.openstack.org

Filesystem

ReadWriteMany

openshift-storage.cephfs.csi.ceph.com

Filesystem

ReadWriteMany

openshift-storage.rbd.csi.ceph.com

Block

ReadWriteOnce

kubernetes.io/rbd

Block

ReadWriteOnce

kubernetes.io/vsphere-volume

Block

ReadWriteOnce

+
+ + + + + +
+
Note
+
+
+

If the KubeVirt storage does not support dynamic provisioning, you must apply the following settings:

+
+
+
    +
  • +

    Filesystem volume mode

    +
    +

    Filesystem volume mode is slower than Block volume mode.

    +
    +
  • +
  • +

    ReadWriteOnce access mode

    +
    +

    ReadWriteOnce access mode does not support live virtual machine migration.

    +
    +
  • +
+
+
+
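A minimal sketch of applying both settings through a CDI StorageProfile, assuming a storage profile named nfs (a hypothetical name that matches your storage class); verify the claimPropertySets field against your CDI version:

$ kubectl patch storageprofile nfs --type=merge \
  -p '{"spec": {"claimPropertySets": [{"accessModes": ["ReadWriteOnce"], "volumeMode": "Filesystem"}]}}'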

See Enabling a statically-provisioned storage class for details on editing the storage profile.

+
+
+
+
+ + + + + +
+
Note
+
+
+

If your migration uses block storage and persistent volumes created with an EXT4 file system, increase the file system overhead in CDI to more than 10%. The default overhead that is assumed by CDI does not completely include the space reserved for the root partition. If you do not increase the file system overhead in CDI by this amount, your migration might fail.

+
+
+
+
+ + + + + +
+
Note
+
+
+

When migrating from OpenStack or running a cold migration from RHV to the OCP cluster that MTV is deployed on, the migration allocates persistent volumes without CDI. In these cases, you might need to adjust the file system overhead.

+
+
+

If the configured file system overhead, which has a default value of 10%, is too low, the disk transfer will fail due to lack of space. In that case, increase the file system overhead.

+
+
+

In some cases, however, you might want to decrease the file system overhead to reduce storage consumption.

+
+
+

You can change the file system overhead by changing the value of the controller_filesystem_overhead in the spec portion of the forklift-controller CR, as described in Configuring the MTV Operator.

+
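For example, a sketch of raising the overhead to 15%, assuming the ForkliftController CR is named forklift-controller in the openshift-mtv namespace; verify the exact field name and value format against your installed CRD:

$ oc patch forkliftcontroller/forklift-controller -n openshift-mtv \
  --type=merge -p '{"spec": {"controller_filesystem_overhead": 15}}'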
+
+
+ + +
+ + diff --git a/documentation/modules/technical-changes-2-7/index.html b/documentation/modules/technical-changes-2-7/index.html new file mode 100644 index 00000000000..664efdf01c6 --- /dev/null +++ b/documentation/modules/technical-changes-2-7/index.html @@ -0,0 +1,73 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Technical changes

+
+

Forklift 2.7 has the following technical changes:

+
+
+
Upgraded virt-v2v to RHEL9 for warm migrations
+

Forklift previously used virt-v2v from Red Hat Enterprise Linux (RHEL) 8, which does not include bug fixes and features that are available in virt-v2v in RHEL9. In Forklift 2.7.0, components are updated to RHEL 9 in order to improve the functionality of warm migration. (MTV-1152)

+
+ + +
+ + diff --git a/documentation/modules/technology-preview/index.html b/documentation/modules/technology-preview/index.html new file mode 100644 index 00000000000..b2027f26d72 --- /dev/null +++ b/documentation/modules/technology-preview/index.html @@ -0,0 +1,88 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + +
+
Important
+
+
+

{FeatureName} is a Technology Preview feature only. Technology Preview features +are not supported with Red Hat production service level agreements (SLAs) and +might not be functionally complete. Red Hat does not recommend using them +in production. These features provide early access to upcoming product +features, enabling customers to test functionality and provide feedback during +the development process.

+
+
+

For more information about the support scope of Red Hat Technology Preview +features, see https://access.redhat.com/support/offerings/techpreview/.

+
+
+
+ + +
+ + diff --git a/documentation/modules/uninstalling-mtv-cli/index.html b/documentation/modules/uninstalling-mtv-cli/index.html new file mode 100644 index 00000000000..00ba0011a78 --- /dev/null +++ b/documentation/modules/uninstalling-mtv-cli/index.html @@ -0,0 +1,144 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Uninstalling Forklift from the command line interface

+
+

You can uninstall Forklift from the command line interface (CLI).

+
+
+ + + + + +
+
Note
+
+
+

This action does not remove resources managed by the Forklift Operator, including custom resource definitions (CRDs) and custom resources (CRs). To remove these after uninstalling the Forklift Operator, you might need to manually delete the Forklift Operator CRDs.

+
+
+
+
+
Prerequisites
+
    +
  • +

    You must be logged in as a user with cluster-admin privileges.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Delete the forklift controller by running the following command:

    +
    +
    +
    $ oc delete ForkliftController --all -n openshift-mtv
    +
    +
    +
  2. +
  3. +

    Delete the subscription to the Forklift Operator by running the following command:

    +
    +
    +
    $ oc get subscription -o name|grep 'mtv-operator'| xargs oc delete
    +
    +
    +
  4. +
  5. +

    Delete the clusterserviceversion for the Forklift Operator by running the following command:

    +
    +
    +
    $ oc get clusterserviceversion -o name|grep 'mtv-operator'| xargs oc delete
    +
    +
    +
  6. +
  7. +

    Delete the plugin console CR by running the following command:

    +
    +
    +
    $ oc delete ConsolePlugin forklift-console-plugin
    +
    +
    +
  8. +
  9. +

    Optional: Delete the custom resource definitions (CRDs) by running the following command:

    +
    +
    +
    $ kubectl get crd -o name | grep 'forklift.konveyor.io' | xargs kubectl delete
    +
    +
    +
  10. +
  11. +

    Optional: Perform cleanup by deleting the Forklift project by running the following command:

    +
    +
    +
    $ oc delete project openshift-mtv
    +
    +
    +
  12. +
+
+ + +
+ + diff --git a/documentation/modules/uninstalling-mtv-ui/index.html b/documentation/modules/uninstalling-mtv-ui/index.html new file mode 100644 index 00000000000..bc182ce0850 --- /dev/null +++ b/documentation/modules/uninstalling-mtv-ui/index.html @@ -0,0 +1,168 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Uninstalling Forklift by using the OKD web console

+
+

You can uninstall Forklift by using the OKD web console.

+
+
+
Prerequisites
+
    +
  • +

    You must be logged in as a user with cluster-admin privileges.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Operators > Installed Operators.

    +
  2. +
  3. +

    Click Forklift Operator.

    +
    +

    The Operator Details page opens in the Details tab.

    +
    +
  4. +
  5. +

    Click the ForkliftController tab.

    +
  6. +
  7. +

    Click Actions and select Delete ForkLiftController.

    +
    +

    A confirmation window opens.

    +
    +
  8. +
  9. +

    Click Delete.

    +
    +

    The controller is removed.

    +
    +
  10. +
  11. +

    Open the Details tab.

    +
    +

    The Create ForkliftController button appears in place of the controller that you deleted. You do not need to click it.

    +
    +
  12. +
  13. +

    On the upper-right side of the page, click Actions and select Uninstall Operator.

    +
    +

    A confirmation window opens, displaying any operand instances.

    +
    +
  14. +
  15. +

    To delete all instances, select the Delete all operand instances for this operator checkbox. By default, the checkbox is cleared.

    +
    + + + + + +
    +
    Important
    +
    +
    +

    If your Operator configured off-cluster resources, these will continue to run and will require manual cleanup.

    +
    +
    +
    +
  16. +
  17. +

    Click Uninstall.

    +
    +

    The Installed Operators page opens, and the Forklift Operator is removed from the list of installed Operators.

    +
    +
  18. +
  19. +

    Click Home > Overview.

    +
  20. +
  21. +

    In the Status section of the page, click Dynamic Plugins.

    +
    +

    The Dynamic Plugins popup opens, listing forklift-console-plugin as a failed plugin. If the forklift-console-plugin does not appear as a failed plugin, refresh the web console.

    +
    +
  22. +
  23. +

    Click forklift-console-plugin.

    +
    +

    The ConsolePlugin details page opens in the Details tab.

    +
    +
  24. +
  25. +

    On the upper right-hand side of the page, click Actions and select Delete ConsolePlugin from the list.

    +
    +

    A confirmation window opens.

    +
    +
  26. +
  27. +

    Click Delete.

    +
    +

    The plugin is removed from the list of Dynamic plugins on the Overview page. If the plugin still appears, refresh the Overview page.

    +
    +
  28. +
+
+ + +
+ + diff --git a/documentation/modules/updating-validation-rules-version/index.html b/documentation/modules/updating-validation-rules-version/index.html new file mode 100644 index 00000000000..14c198cbc05 --- /dev/null +++ b/documentation/modules/updating-validation-rules-version/index.html @@ -0,0 +1,127 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Updating the inventory rules version

+
+

You must update the inventory rules version each time you update the rules so that the Provider Inventory service detects the changes and triggers the Validation service.

+
+
+

The rules version is recorded in a rules_version.rego file for each provider.

+
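For example, a rules_version.rego file for the VMware provider might look like the following sketch, using the io.konveyor.forklift.<provider> package convention of the other policy files:

package io.konveyor.forklift.vmware

rules_version = 5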
+
+
Procedure
+
    +
  1. +

    Retrieve the current rules version:

    +
    +
    +
    $ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version
    +
    +
    +
    +
    Example output
    +
    +
    {
    +   "result": {
    +       "rules_version": 5
    +   }
    +}
    +
    +
    +
  2. +
  3. +

    Connect to the terminal of the Validation pod:

    +
    +
    +
    $ kubectl rsh <validation_pod>
    +
    +
    +
  4. +
  5. +

    Update the rules version in the /usr/share/opa/policies/io/konveyor/forklift/<provider>/rules_version.rego file.

    +
  6. +
  7. +

    Log out of the Validation pod terminal.

    +
  8. +
  9. +

    Verify the updated rules version:

    +
    +
    +
    $ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version
    +
    +
    +
    +
    Example output
    +
    +
    {
    +   "result": {
    +       "rules_version": 6
    +   }
    +}
    +
    +
    +
  10. +
+
+ + +
+ + diff --git a/documentation/modules/upgrading-mtv-ui/index.html b/documentation/modules/upgrading-mtv-ui/index.html new file mode 100644 index 00000000000..a15ecb550b5 --- /dev/null +++ b/documentation/modules/upgrading-mtv-ui/index.html @@ -0,0 +1,127 @@ + + + + + + + + Upgrading Forklift | Forklift Documentation + + + + + + + + + + + + + +Upgrading Forklift | Forklift Documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+

Upgrading Forklift

+
+

You can upgrade the Forklift Operator by using the OKD web console to install the new version.

+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Operators > Installed Operators > {operator-name-ui} > Subscription.

    +
  2. +
  3. +

    Change the update channel to the correct release.

    +
    +

    See Changing update channel in the OKD documentation.

    +
    +
  4. +
  5. +

    Confirm that Upgrade status changes from Up to date to Upgrade available. If it does not, restart the CatalogSource pod:

    +
    +
      +
    1. +

      Note the catalog source, for example, redhat-operators.

      +
    2. +
    3. +

      From the command line, retrieve the catalog source pod:

      +
      +
      +
      $ kubectl get pod -n openshift-marketplace | grep <catalog_source>
      +
      +
      +
    4. +
    5. +

      Delete the pod:

      +
      +
      +
      $ kubectl delete pod -n openshift-marketplace <catalog_source_pod>
      +
      +
      +
      +

      Upgrade status changes from Up to date to Upgrade available.

      +
      +
      +

      If you set Update approval on the Subscriptions tab to Automatic, the upgrade starts automatically.

      +
      +
    6. +
    +
    +
  6. +
  7. +

    If you set Update approval on the Subscriptions tab to Manual, approve the upgrade.

    +
    +

    See Manually approving a pending upgrade in the OKD documentation.

    +
    +
  8. +
  9. +

    If you are upgrading from Forklift 2.2 and have defined VMware source providers, edit the VMware provider by adding a VDDK init image. Otherwise, the update will change the state of any VMware providers to Critical. For more information, see Adding a VMware source provider. A command-line sketch follows this procedure.

    +
  10. +
  11. +

    If you mapped to NFS on the OKD destination provider in Forklift 2.2, edit the AccessModes and VolumeMode parameters in the NFS storage profile. Otherwise, the upgrade will invalidate the NFS mapping. For more information, see Customizing the storage profile.

    +
  12. +
+
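As a command-line alternative to editing the VMware provider in the console, the following sketch adds a VDDK init image by patching the Provider CR; the settings key and the image path are assumptions to verify against your Provider CRD:

$ oc patch provider/<vmware_provider> -n openshift-mtv --type=merge \
  -p '{"spec": {"settings": {"vddkInitImage": "<registry>/vddk:<tag>"}}}'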
+ + +
+ + diff --git a/documentation/modules/using-must-gather/index.html b/documentation/modules/using-must-gather/index.html new file mode 100644 index 00000000000..087661ce0ab --- /dev/null +++ b/documentation/modules/using-must-gather/index.html @@ -0,0 +1,157 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Using the must-gather tool

+
+

You can collect logs and information about Forklift custom resources (CRs) by using the must-gather tool. You must attach a must-gather data file to all customer cases.

+
+
+

You can gather data for a specific namespace, migration plan, or virtual machine (VM) by using the filtering options.

+
+
+ + + + + +
+
Note
+
+
+

If you specify a non-existent resource in the filtered must-gather command, no archive file is created.

+
+
+
+
+
Prerequisites
+
    +
  • +

    You must be logged in to the KubeVirt cluster as a user with the cluster-admin role.

    +
  • +
  • +

    You must have the OKD CLI (oc) installed.

    +
  • +
+
+
+
Collecting logs and CR information
+
    +
  1. +

    Navigate to the directory where you want to store the must-gather data.

    +
  2. +
  3. +

    Run the oc adm must-gather command:

    +
    +
    +
    $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest
    +
    +
    +
    +

    The data is saved as /must-gather/must-gather.tar.gz. You can upload this file to a support case on the Red Hat Customer Portal.

    +
    +
  4. +
  5. +

    Optional: Run the oc adm must-gather command with the following options to gather filtered data:

    +
    +
      +
    • +

      Namespace:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- NS=<namespace> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Migration plan:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- PLAN=<migration_plan> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Virtual machine:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- VM=<vm_id> NS=<namespace> /usr/bin/targeted (1)
      +
      +
      +
      +
        +
      1. +

        Specify the VM ID as it appears in the Plan CR.

        +
      2. +
      +
      +
    • +
    +
    +
  6. +
+
+ + +
+ + diff --git a/documentation/modules/virt-migration-workflow/index.html b/documentation/modules/virt-migration-workflow/index.html new file mode 100644 index 00000000000..8996eff96e6 --- /dev/null +++ b/documentation/modules/virt-migration-workflow/index.html @@ -0,0 +1,209 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Detailed migration workflow

+
+

You can use the detailed migration workflow to troubleshoot a failed migration.

+
+
+

The workflow describes the following steps:

+
+
+

Warm migration or migration to a remote {ocp-name} cluster:

+
+
+
    +
  1. +

    When you create the Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a DataVolume CR for each source VM disk.

    +
    +

    For each VM disk:

    +
    +
  2. +
  3. +

    The Containerized Data Importer (CDI) Controller service creates a persistent volume claim (PVC) based on the parameters specified in the DataVolume CR.



    +
  4. +
  5. +

    If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

    +
  6. +
  7. +

    The CDI Controller service creates an importer pod.

    +
  8. +
  9. +

    The importer pod streams the VM disk to the PV.

    +
    +

    After the VM disks are transferred:

    +
    +
  10. +
  11. +

    When importing from VMware, the Migration Controller service creates a conversion pod with the PVCs attached to it.

    +
    +

    The conversion pod runs virt-v2v, which installs and configures device drivers on the PVCs of the target VM.

    +
    +
  12. +
  13. +

    The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.

    +
  14. +
  15. +

    If the VM ran on the source environment, the Migration Controller service powers on the VM, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    +
    +

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

    +
    +
  16. +
+
+
+

Cold migration from oVirt or {osp} to the local {ocp-name} cluster:

+
+
+
    +
  1. +

    When you create a Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a PersistentVolumeClaim CR for each source VM disk, as well as an OvirtVolumePopulator CR when the source is oVirt, or an OpenstackVolumePopulator CR when the source is {osp}.

    +
    +

    For each VM disk:

    +
    +
  2. +
  3. +

    The Populator Controller service creates a temporary persistent volume claim (PVC).

    +
  4. +
  5. +

    If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

    +
    +
      +
    • +

      The Migration Controller service creates a dummy pod to bind all PVCs. The name of the pod contains pvcinit.

      +
    • +
    +
    +
  6. +
  7. +

    The Populator Controller service creates a populator pod.

    +
  8. +
  9. +

    The populator pod transfers the disk data to the PV.

    +
    +

    After the VM disks are transferred:

    +
    +
  10. +
  11. +

    The temporary PVC is deleted, and the initial PVC points to the PV with the data.

    +
  12. +
  13. +

    The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.

    +
  14. +
  15. +

    If the VM ran on the source environment, the Migration Controller service powers on the VM, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    +
    +

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

    +
    +
  16. +
+
+
+

Cold migration from VMware to the local {ocp-name} cluster:

+
+
+
    +
  1. +

    When you create a Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a DataVolume CR for each source VM disk.

    +
    +

    For each VM disk:

    +
    +
  2. +
  3. +

    The Containerized Data Importer (CDI) Controller service creates a blank persistent volume claim (PVC) based on the parameters specified in the DataVolume CR.



    +
  4. +
  5. +

    If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

    +
  6. +
+
+
+

For all VM disks:

+
+
+
    +
  1. +

    The Migration Controller service creates a dummy pod to bind all PVCs. The name of the pod contains pvcinit.

    +
  2. +
  3. +

    The Migration Controller service creates a conversion pod for all PVCs.

    +
  4. +
  5. +

    The conversion pod runs virt-v2v, which converts the VM to the KVM hypervisor and transfers the disks' data to their corresponding PVs.

    +
    +

    After the VM disks are transferred:

    +
    +
  6. +
  7. +

    The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.

    +
  8. +
  9. +

    If the VM ran on the source environment, the Migration Controller service powers on the VM, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    +
    +

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

    +
    +
  10. +
+
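When troubleshooting these workflows, it can help to watch the transient pods described above as they appear in the plan's target namespace. A sketch, assuming kubectl access to that namespace:

$ kubectl get pods -n <target_namespace> -w | grep -E 'importer|conversion|pvcinit|virt-launcher'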
+ + +
+ + diff --git a/documentation/modules/vmware-prerequisites/index.html b/documentation/modules/vmware-prerequisites/index.html new file mode 100644 index 00000000000..328b58ea566 --- /dev/null +++ b/documentation/modules/vmware-prerequisites/index.html @@ -0,0 +1,278 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

VMware prerequisites

+
+

It is strongly recommended to create a VDDK image to accelerate migrations. For more information, see Creating a VDDK image.

+
+
+

The following prerequisites apply to VMware migrations:

+
+
+
    +
  • +

    You must use a compatible version of VMware vSphere.

    +
  • +
  • +

    You must be logged in as a user with at least the minimal set of VMware privileges.

    +
  • +
  • +

    To access the virtual machine using a pre-migration hook, VMware Tools must be installed on the source virtual machine.

    +
  • +
  • +

    The VM operating system must be certified and supported for use as a guest operating system with KubeVirt and for conversion to KVM with virt-v2v.

    +
  • +
  • +

    If you are running a warm migration, you must enable changed block tracking (CBT) on the VMs and on the VM disks. A command-line sketch follows the notes below.

    +
  • +
  • +

    If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host.

    +
  • +
  • +

    It is strongly recommended to disable hibernation because Forklift does not support migrating hibernated VMs.

    +
  • +
+
+
+ + + + + +
+
Important
+
+
+

In the event of a power outage, data might be lost for a VM with disabled hibernation. However, if hibernation is not disabled, migration will fail.

+
+
+
+
+ + + + + +
+
Note
+
+
+

Neither Forklift nor OpenShift Virtualization supports conversion of Btrfs for migrating VMs from VMware.

+
+
+
+
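As referenced in the warm migration prerequisite above, changed block tracking can be enabled per VM while it is powered off. The following govc sketch relies on the standard vSphere advanced settings ctkEnabled and scsi0:0.ctkEnabled, which are assumptions to confirm against VMware documentation; it is not a Forklift command:

$ govc vm.change -vm <vm_name> -e "ctkEnabled=TRUE" -e "scsi0:0.ctkEnabled=TRUE"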

VMware privileges

+
+

The following minimal set of VMware privileges is required to migrate virtual machines to KubeVirt with Forklift.

+
+ + ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. VMware privileges
PrivilegeDescription

Virtual machine.Interaction privileges:

Virtual machine.Interaction.Power Off

Allows powering off a powered-on virtual machine. This operation powers down the guest operating system.

Virtual machine.Interaction.Power On

Allows powering on a powered-off virtual machine and resuming a suspended virtual machine.

Virtual machine.Guest operating system management by VIX API

Allows managing a virtual machine by the VMware VIX API.

+

Virtual machine.Provisioning privileges:

+
+
+ + + + + +
+
Note
+
+
+

All Virtual machine.Provisioning privileges are required.

+
+
+

Virtual machine.Provisioning.Allow disk access

Allows opening a disk on a virtual machine for random read and write access. Used mostly for remote disk mounting.

Virtual machine.Provisioning.Allow file access

Allows operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM.

Virtual machine.Provisioning.Allow read-only disk access

Allows opening a disk on a virtual machine for random read access. Used mostly for remote disk mounting.

Virtual machine.Provisioning.Allow virtual machine download

Allows read operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM.

Virtual machine.Provisioning.Allow virtual machine files upload

Allows write operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM.

Virtual machine.Provisioning.Clone template

Allows cloning of a template.

Virtual machine.Provisioning.Clone virtual machine

Allows cloning of an existing virtual machine and allocation of resources.

Virtual machine.Provisioning.Create template from virtual machine

Allows creation of a new template from a virtual machine.

Virtual machine.Provisioning.Customize guest

Allows customization of a virtual machine’s guest operating system without moving the virtual machine.

Virtual machine.Provisioning.Deploy template

Allows deployment of a virtual machine from a template.

Virtual machine.Provisioning.Mark as template

Allows marking an existing powered-off virtual machine as a template.

Virtual machine.Provisioning.Mark as virtual machine

Allows marking an existing template as a virtual machine.

Virtual machine.Provisioning.Modify customization specification

Allows creation, modification, or deletion of customization specifications.

Virtual machine.Provisioning.Promote disks

Allows promote operations on a virtual machine’s disks.

Virtual machine.Provisioning.Read customization specifications

Allows reading a customization specification.

Virtual machine.Snapshot management privileges:

Virtual machine.Snapshot management.Create snapshot

Allows creation of a snapshot from the virtual machine’s current state.

Virtual machine.Snapshot management.Remove Snapshot

Allows removal of a snapshot from the snapshot history.

Datastore privileges:

Datastore.Browse datastore

Allows exploring the contents of a datastore.

Datastore.Low level file operations

Allows performing low-level file operations - read, write, delete, and rename - in a datastore.

Sessions privileges:

Sessions.Validate session

Allows verification of the validity of a session.

Cryptographic privileges:

Cryptographic.Decrypt

Allows decryption of an encrypted virtual machine.

Cryptographic.Direct access

Allows access to encrypted resources.

+ + +
+ + diff --git a/feed.xml b/feed.xml new file mode 100644 index 00000000000..3f8bbe47963 --- /dev/null +++ b/feed.xml @@ -0,0 +1 @@ +Jekyll2024-11-11T18:43:42-06:00/feed.xmlForklift DocumentationMigrating VMware virtual machines to KubeVirt \ No newline at end of file diff --git a/index.html b/index.html new file mode 100644 index 00000000000..8199f3bbf27 --- /dev/null +++ b/index.html @@ -0,0 +1,89 @@ + + + + + + + + Forklift Documentation | Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift Documentation

+
+

What is Forklift?

+
+
+

Forklift is a tool in the Konveyor community for migrating virtual machines from VMware or oVirt to KubeVirt.

+
+
+
+
+

Documentation

+ +
+ + +
+ + diff --git a/jekyll-theme-cayman.gemspec b/jekyll-theme-cayman.gemspec new file mode 100644 index 00000000000..4a1c2d28f03 --- /dev/null +++ b/jekyll-theme-cayman.gemspec @@ -0,0 +1,22 @@ +# frozen_string_literal: true + +Gem::Specification.new do |s| + s.name = 'jekyll-theme-cayman' + s.version = '0.1.1' + s.license = 'CC0-1.0' + s.authors = ['Jason Long', 'GitHub, Inc.'] + s.email = ['opensource+jekyll-theme-cayman@github.com'] + s.homepage = 'https://github.com/pages-themes/cayman' + s.summary = 'Cayman is a Jekyll theme for GitHub Pages' + + s.files = `git ls-files -z`.split("\x0").select do |f| + f.match(%r{^((_includes|_layouts|_sass|assets)/|(LICENSE|README)((\.(txt|md|markdown)|$)))}i) + end + + s.platform = Gem::Platform::RUBY + s.add_runtime_dependency 'jekyll', '> 3.5', '< 5.0' + s.add_runtime_dependency 'jekyll-seo-tag', '~> 2.0' + s.add_development_dependency 'html-proofer', '~> 3.0' + s.add_development_dependency 'rubocop', '~> 0.50' + s.add_development_dependency 'w3c_validators', '~> 1.3' +end diff --git a/modules/about-cold-warm-migration/index.html b/modules/about-cold-warm-migration/index.html new file mode 100644 index 00000000000..bd619b918d1 --- /dev/null +++ b/modules/about-cold-warm-migration/index.html @@ -0,0 +1,255 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

About cold and warm migration

+
+
+
+

Forklift supports cold migration from:

+
+
+
    +
  • +

    VMware vSphere

    +
  • +
  • +

    oVirt

    +
  • +
  • +

    {osp}

    +
  • +
  • +

    Remote KubeVirt clusters

    +
  • +
+
+
+

Forklift supports warm migration from VMware vSphere and from oVirt.

+
+
+
+
+

Cold migration

+
+
+

Cold migration is the default migration type. The source virtual machines are shut down while the data is copied.

+
+
+ + + + + +
+
Note
+
+
+

Unresolved directive in about-cold-warm-migration.adoc - include::snip_qemu-guest-agent.adoc[]

+
+
+
+
+
+
+

Warm migration

+
+
+

Most of the data is copied during the precopy stage while the source virtual machines (VMs) are running.

+
+
+

Then the VMs are shut down and the remaining data is copied during the cutover stage.

+
+
+
Precopy stage
+

The VMs are not shut down during the precopy stage.

+
+
+

The VM disks are copied incrementally by using changed block tracking (CBT) snapshots. The snapshots are created at one-hour intervals by default. You can change the snapshot interval by updating the forklift-controller deployment.

+
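For example, a sketch of shortening the snapshot interval to 30 minutes, assuming the interval is exposed through a controller_precopy_interval field (in minutes) in the spec of the ForkliftController CR; verify the field name against your installed CRD:

$ oc patch forkliftcontroller/forklift-controller -n konveyor-forklift \
  --type=merge -p '{"spec": {"controller_precopy_interval": 30}}'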
+
+ + + + + +
+
Important
+
+
+

You must enable CBT for each source VM and each VM disk.

+
+
+

A VM can support up to 28 CBT snapshots. If the source VM has too many CBT snapshots and the Migration Controller service is not able to create a new snapshot, warm migration might fail. The Migration Controller service deletes each snapshot when the snapshot is no longer required.

+
+
+
+
+

The precopy stage runs until the cutover stage is started manually or is scheduled to start.

+
+
+
Cutover stage
+

The VMs are shut down during the cutover stage and the remaining data is migrated. Data stored in RAM is not migrated.

+
+
+

You can start the cutover stage manually by using the Forklift console or you can schedule a cutover time in the Migration manifest.

+
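A sketch of scheduling the cutover in the Migration manifest, assuming spec.cutover accepts an ISO 8601 timestamp and spec.plan references the migration plan by name and namespace:

apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: warm-migration
  namespace: konveyor-forklift
spec:
  plan:
    name: test
    namespace: konveyor-forklift
  cutover: "2024-04-01T12:00:00Z" # remaining data is copied at this time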
+
+
+
+

Advantages and disadvantages of cold and warm migrations

+
+
+

Overview

+
+

Unresolved directive in about-cold-warm-migration.adoc - include::snip_cold-warm-comparison-table.adoc[]

+
+
+
+

Detailed description

+
+

The table that follows offers a more detailed description of the advantages and disadvantages of each type of migration. It assumes that you have installed Red Hat Enterprise Linux (RHEL) 9 on the OKD platform on which you installed Forklift.

+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Detailed description of advantages and disadvantages
Cold migrationWarm migration

Fail fast

Each VM is converted to be compatible with OKD and, if the conversion is successful, the VM is transferred. If a VM cannot be converted, the migration fails immediately.

For each VM, Forklift creates a snapshot and transfers it to OKD. When you start the cutover, Forklift creates the last snapshot, transfers it, and then converts the VM.

Tools

Forklift only.

Forklift and CDI from KubeVirt.

Parallelism

Disks must be transferred sequentially.

Disks can be transferred in parallel using different pods.

+
+ + + + + +
+
Note
+
+
+

The preceding table describes the situation for VMs that are running because the main benefit of warm migration is the reduced downtime, and there is no reason to initiate warm migration for VMs that are down. However, performing warm migration for VMs that are down is not the same as cold migration, even when Forklift uses virt-v2v and RHEL 9. For VMs that are down, Forklift transfers the disks using CDI, unlike in cold migration.

+
+
+
+
+ + + + + +
+
Note
+
+
+

When importing from VMware, there are additional factors that impact the migration speed, such as limits related to ESXi, vSphere, or VDDK.

+
+
+
+
+
+

Conclusions

+
+

Based on the preceding information, we can draw the following conclusions about cold migration vs. warm migration:

+
+
+
    +
  • +

    The shortest downtime of VMs can be achieved by using warm migration.

    +
  • +
  • +

    The shortest duration for VMs with a large amount of data on a single disk can be achieved by using cold migration.

    +
  • +
  • +

    The shortest duration for VMs with a large amount of data that is spread evenly across multiple disks can be achieved by using warm migration.

    +
  • +
+
+
+
+
+ + +
+ + diff --git a/modules/about-hook-crs-for-migration-plans-api/index.html b/modules/about-hook-crs-for-migration-plans-api/index.html new file mode 100644 index 00000000000..1813e29d382 --- /dev/null +++ b/modules/about-hook-crs-for-migration-plans-api/index.html @@ -0,0 +1,116 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

API-based hooks for Forklift migration plans

+
+

You can add hooks to a migration plan from the command line by using the Forklift API.

+
+

Default hook image

+
+

The default hook image for a Forklift hook is registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel8:v1.8.2-2. The image is based on the Ansible Runner image with the addition of python-openshift to provide Ansible Kubernetes resources and a recent oc binary.

+
+

Hook execution

+
+

An Ansible playbook that is provided as part of a migration hook is mounted into the hook container as a ConfigMap. The hook container is run as a job on the desired cluster, using the default ServiceAccount in the konveyor-forklift namespace.

+
+

PreHooks and PostHooks

+
+

You specify hooks per VM and you can run each as a PreHook or a PostHook. In this context, a PreHook is a hook that is run before a migration and a PostHook is a hook that is run after a migration.

+
+
+

When you add a hook, you must specify the namespace where the hook CR is located, the name of the hook, and specify whether the hook is a PreHook or PostHook.

+
+
+ + + + + +
+
Important
+
+
+

In order for a PreHook to run on a VM, the VM must be started and available via SSH.

+
+
+
+
+
Example PreHook:
+
+
kind: Plan
+apiVersion: forklift.konveyor.io/v1beta1
+metadata:
+  name: test
+  namespace: konveyor-forklift
+spec:
+  vms:
+    - id: vm-2861
+      hooks:
+        - hook:
+            namespace: konveyor-forklift
+            name: playbook
+          step: PreHook
+
+
+ + +
+ + diff --git a/modules/about-rego-files/index.html b/modules/about-rego-files/index.html new file mode 100644 index 00000000000..9f6ea040837 --- /dev/null +++ b/modules/about-rego-files/index.html @@ -0,0 +1,104 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

About Rego files

+
+

Validation rules are written in Rego, the Open Policy Agent (OPA) native query language. The rules are stored as .rego files in the /usr/share/opa/policies/io/konveyor/forklift/<provider> directory of the Validation pod.

+
+
+

Each validation rule is defined in a separate .rego file and tests for a specific condition. If the condition evaluates as true, the rule adds a {“category”, “label”, “assessment”} hash to the concerns. The concerns content is added to the concerns key in the inventory record of the VM. The web console displays the content of the concerns key for each VM in the provider inventory.

+
+
+

The following .rego file example checks for distributed resource scheduling enabled in the cluster of a VMware VM:

+
+
+
drs_enabled.rego example
+
+
package io.konveyor.forklift.vmware (1)
+
+has_drs_enabled {
+    input.host.cluster.drsEnabled (2)
+}
+
+concerns[flag] {
+    has_drs_enabled
+    flag := {
+        "category": "Information",
+        "label": "VM running in a DRS-enabled cluster",
+        "assessment": "Distributed resource scheduling is not currently supported by OpenShift Virtualization. The VM can be migrated but it will not have this feature in the target environment."
+    }
+}
+
+
+
+
    +
  1. +

    Each validation rule is defined within a package. The package namespaces are io.konveyor.forklift.vmware for VMware and io.konveyor.forklift.ovirt for oVirt.

    +
  2. +
  3. +

    Query parameters are based on the input key of the Validation service JSON.

    +
  4. +
+
+ + +
+ + diff --git a/modules/accessing-default-validation-rules/index.html b/modules/accessing-default-validation-rules/index.html new file mode 100644 index 00000000000..56916501df3 --- /dev/null +++ b/modules/accessing-default-validation-rules/index.html @@ -0,0 +1,108 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Checking the default validation rules

+
+

Before you create a custom rule, you must check the default rules of the Validation service to ensure that you do not create a rule that redefines an existing default value.

+
+
+

Example: If a default rule contains the line default valid_input = false and you create a custom rule that contains the line default valid_input = true, the Validation service will not start.

+
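For example, the following pair of Rego declarations, one in a default rule file and one in a custom rule file, redefine the same default value and prevent the Validation service from starting:

# default rule file shipped with the Validation service
default valid_input = false

# custom rule file: conflicts with the default above
default valid_input = true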
+
+
Procedure
+
    +
  1. +

    Connect to the terminal of the Validation pod:

    +
    +
    +
    $ kubectl rsh <validation_pod>
    +
    +
    +
  2. +
  3. +

    Go to the OPA policies directory for your provider:

    +
    +
    +
    $ cd /usr/share/opa/policies/io/konveyor/forklift/<provider> (1)
    +
    +
    +
    +
      +
    1. +

      Specify vmware or ovirt.

      +
    2. +
    +
    +
  4. +
  5. +

    Search for the default policies:

    +
    +
    +
    $ grep -R "default" *
    +
    +
    +
  6. +
+
+ + +
+ + diff --git a/modules/accessing-logs-cli/index.html b/modules/accessing-logs-cli/index.html new file mode 100644 index 00000000000..d85be3b34bd --- /dev/null +++ b/modules/accessing-logs-cli/index.html @@ -0,0 +1,157 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Accessing logs and custom resource information from the command line interface

+
+

You can access logs and information about custom resources (CRs) from the command line interface by using the must-gather tool. You must attach a must-gather data file to all customer cases.

+
+
+

You can gather data for a specific namespace, a completed, failed, or canceled migration plan, or a migrated virtual machine (VM) by using the filtering options.

+
+
+ + + + + +
+
Note
+
+
+

If you specify a non-existent resource in the filtered must-gather command, no archive file is created.

+
+
+
+
+
Prerequisites
+
    +
  • +

    You must be logged in to the KubeVirt cluster as a user with the cluster-admin role.

    +
  • +
  • +

    You must have the OKD CLI (oc) installed.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Navigate to the directory where you want to store the must-gather data.

    +
  2. +
  3. +

    Run the oc adm must-gather command:

    +
    +
    +
    $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest
    +
    +
    +
    +

    The data is saved as /must-gather/must-gather.tar.gz. You can upload this file to a support case on the Red Hat Customer Portal.

    +
    +
  4. +
  5. +

    Optional: Run the oc adm must-gather command with the following options to gather filtered data:

    +
    +
      +
    • +

      Namespace:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- NS=<namespace> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Migration plan:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- PLAN=<migration_plan> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Virtual machine:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- VM=<vm_name> NS=<namespace> /usr/bin/targeted (1)
      +
      +
      +
      +
        +
      1. +

        You must specify the VM name, not the VM ID, as it appears in the Plan CR.

        +
      2. +
      +
      +
    • +
    +
    +
  6. +
+
+ + +
+ + diff --git a/modules/accessing-logs-ui/index.html b/modules/accessing-logs-ui/index.html new file mode 100644 index 00000000000..6befe678c69 --- /dev/null +++ b/modules/accessing-logs-ui/index.html @@ -0,0 +1,92 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Downloading logs and custom resource information from the web console

+
+

You can download logs and information about custom resources (CRs) for a completed, failed, or canceled migration plan or for migrated virtual machines (VMs) by using the OKD web console.

+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration > Plans for virtualization.

    +
  2. +
  3. +

    Click Get logs beside a migration plan name.

    +
  4. +
  5. +

    In the Get logs window, click Get logs.

    +
    +

    The logs are collected. A Log collection complete message is displayed.

    +
    +
  6. +
  7. +

    Click Download logs to download the archive file.

    +
  8. +
  9. +

    To download logs for a migrated VM, click a migration plan name and then click Get logs beside the VM.

    +
  10. +
+
+ + +
+ + diff --git a/modules/adding-hook-crs-to-migration-plans-api/index.html b/modules/adding-hook-crs-to-migration-plans-api/index.html new file mode 100644 index 00000000000..9ddf9331e8e --- /dev/null +++ b/modules/adding-hook-crs-to-migration-plans-api/index.html @@ -0,0 +1,302 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Adding Hook CRs to a VM migration by using the Forklift API

+
+

You can add a PreHook or a PostHook Hook CR when you migrate a virtual machine from the command line by using the Forklift API. A PreHook runs before a migration; a PostHook runs after.

+
+
+ + + + + +
+
Note
+
+
+

You can retrieve additional information stored in a secret or in a configMap by using a k8s module.

+
+
+
+
+

For example, you can create a hook CR to install cloud-init on a VM and write a file before migration.

+
+
+
Procedure
+
    +
  1. +

    If needed, create a secret with an SSH private key for the VM. You can either use an existing key or generate a key pair, install the public key on the VM, and base64 encode the private key in the secret.

    +
    +
    +
    apiVersion: v1
    +data:
    +  key: VGhpcyB3YXMgZ2VuZXJhdGVkIHdpdGggc3NoLWtleWdlbiBwdXJlbHkgZm9yIHRoaXMgZXhhbXBsZS4KSXQgaXMgbm90IHVzZWQgYW55d2hlcmUuCi0tLS0tQkVHSU4gT1BFTlNTSCBQUklWQVRFIEtFWS0tLS0tCmIzQmxibk56YUMxclpYa3RkakVBQUFBQUJHNXZibVVBQUFBRWJtOXVaUUFBQUFBQUFBQUJBQUFCbHdBQUFBZHpjMmd0Y24KTmhBQUFBQXdFQUFRQUFBWUVBMzVTTFRReDBFVjdPTWJQR0FqcEsxK2JhQURTTVFuK1NBU2pyTGZLNWM5NGpHdzhDbnA4LwovRHErZHFBR1pxQkg2ZnAxYmVJM1BZZzVWVDk0RVdWQ2RrTjgwY3dEcEo0Z1R0NHFUQ1gzZUYvY2x5VXQyUC9zaTNjcnQ0CjBQdi9wVnZXU1U2TlhHaDJIZC93V0MwcGh5Z0RQOVc5SHRQSUF0OFpnZmV2ZnUwZHpraVl6OHNVaElWU2ZsRGpaNUFqcUcKUjV2TVVUaGlrczEvZVlCeTdiMkFFSEdzYU8xN3NFbWNiYUlHUHZuUFVwWmQrdjkyYU1JdWZoYjhLZkFSbzZ3Ty9ISW1VbQovdDdHWFBJUmxBMUhSV0p1U05odTQzZS9DY3ZYd3Z6RnZrdE9kYXlEQzBMTklHMkpVaURlNWd0UUQ1WHZXc1p3MHQvbEs1CklacjFrZXZRNUJsYWNISmViV1ZNYUQvdllpdFdhSFo4OEF1Y0czaGh2bjkrOGNSTGhNVExiVlFSMWh2UVpBL1JtQXN3eE0KT3VJSmRaUmtxTThLZlF4Z28zQThRNGJhQW1VbnpvM3Zwa0FWdC9uaGtIOTRaRE5rV2U2RlRhdThONStyYTJCZkdjZVA4VApvbjFEeTBLRlpaUlpCREVVRVc0eHdTYUVOYXQ3c2RDNnhpL1d5OURaQUFBRm1NRFBXeDdBejFzZUFBQUFCM056YUMxeWMyCkVBQUFHQkFOK1VpMDBNZEJGZXpqR3p4Z0k2U3RmbTJnQTBqRUova2dFbzZ5M3l1WFBlSXhzUEFwNmZQL3c2dm5hZ0JtYWcKUituNmRXM2lOejJJT1ZVL2VCRmxRblpEZk5ITUE2U2VJRTdlS2t3bDkzaGYzSmNsTGRqLzdJdDNLN2VORDcvNlZiMWtsTwpqVnhvZGgzZjhGZ3RLWWNvQXovVnZSN1R5QUxmR1lIM3IzN3RIYzVJbU0vTEZJU0ZVbjVRNDJlUUk2aGtlYnpGRTRZcExOCmYzbUFjdTI5Z0JCeHJHanRlN0JKbkcyaUJqNzV6MUtXWGZyL2RtakNMbjRXL0Nud0VhT3NEdnh5SmxKdjdleGx6eUVaUU4KUjBWaWJrallidU4zdnduTDE4TDh4YjVMVG5Xc2d3dEN6U0J0aVZJZzN1WUxVQStWNzFyR2NOTGY1U3VTR2E5WkhyME9RWgpXbkJ5WG0xbFRHZy83MklyVm1oMmZQQUxuQnQ0WWI1L2Z2SEVTNFRFeTIxVUVkWWIwR1FQMFpnTE1NVERyaUNYV1VaS2pQCkNuME1ZS053UEVPRzJnSmxKODZONzZaQUZiZjU0WkIvZUdRelpGbnVoVTJydkRlZnEydGdYeG5Iai9FNko5UTh0Q2hXV1UKV1FReEZCRnVNY0VtaERXcmU3SFF1c1l2MXN2UTJRQUFBQU1CQUFFQUFBR0JBSlZtZklNNjdDQmpXcU9KdnFua2EvakRrUwo4TDdpSE5mekg1TnRZWVdPWmRMTlk2L0lRa1pDeFcwTWtSKzlUK0M3QUZKZzBNV2Q5ck5PeUxJZDkxNjZoOVJsNG0xdFJjCnViZ1o2dWZCZ3hGVDlXS21mSEdCNm4zelh5b2pQOEFJTnR6ODVpaUVHVXFFRWtVRVdMd0RGSmdvcFllQ3l1VmZ2ZE92MUgKRm1WWmEwNVo0b3NQNkNENXVmc2djQ1RYQTR6VnZ5ZHVCYkxqdHN5RjdYZjNUdjZUQ1QxU0swZHErQk1OOXRvb0RZaXpwagpzbDh6NzlybXp3eUFyWFlVcnFUUkpsNmpwRkNrWHJLcy9LeG96MHhhbXlMY2RORk9hWE51LzlnTkpjRERsV2hPcFRqNHk4CkpkNXBuV1Jueis1RHJLRFdhY0loUW1CMUxVd2ZLWmQwbVFxaUpzMUMxcXZVUmlKOGExaThKUTI4bHFuWTFRRk9wbk13emcKWEpla2FndThpT1ExRFJlQkhaM0NkcVJUYnY3bVJZSGxramx0dXJmZGc4M3hvM0ErZ1JSR001eUVOcW5xSkplQjhJQVB5UwptMFp0dGdqbHNqNTJ2K1B1NmExMHoxZndKK1VML2N6dTRKeEpOYlp6WTFIMnpLODJBaVI1T3JYNmx2aUEvSWFSRVcwUUFBCkFNQndVeUJpcUc5bEZCUnltL2UvU1VORVMzdHpicUZNdTdIcy84WTV5SnAxKzR6OXUxNGtJR2ttV0Y5eE5HT3hrY3V0cWwKeHVUcndMbjFUaFNQTHQrTjUwTGhVdzR4ZjBhNUxqemdPbklPU0FRbm5HY1Nxa0dTRDlMR21obGE2WmpydFBHY29lQ3JHdAo5M1Vvcmx5YkxNRzFFRFAxWmpKS1RaZzl6OUMwdDlTTGd3ei9DbFhydW9UNXNQVUdKWnUrbHlIZXpSTDRtcHl6OEZMcnlOCkdNci9leVM5bWdISjNVVkZEYjNIZ3BaK1E1SUdBRU5rZVZEcHIwMGhCZXZndGd6YWtBQUFEQkFQVXQ1RitoMnBVby94V1YKenRkcVQvMzA4dFB5MXVMMU1lWFoydEJPQmRwSDJyd0JzdWt0aTIySGtWZUZXQjJFdUlFUXppMzY3MGc1UGdxR1p4Vng4dQpobEE0Rkg4ZXN1NTNQckZqVW9EeFJhb3d3WXBFcFh5Y2pnNUE1MStwR1VQcWljWjB0YjliaWlhc3BWWXZhWW5sdGlnVG5iClN0UExMY29nemNiL0dGcVYyaXlzc3lwTlMwKzBNRTUxcEtxWGNaS2swbi8vVHpZWWs4TW8vZzRsQ3pmUEZQUlZrVVM5blIKWU1pQzRlcEk0TERmbVdnM0xLQ2N1Zk85all3aWgwYlFBQUFNRUE2WEtldDhEMHNvc0puZVh5WFZGd0dyVyszNlhBVGRQTwpMWDdjaStjYzFoOGV1eHdYQWx3aTJJNFhxSmJBVjBsVEhuVGEycXN3Uy9RQlpJUUJWSkZlVjVyS1daZTc4R2F3d1pWTFZNCldETmNwdFFyRTFaM2pGNS9TdUVzdlVxSDE0Tkc5RUFXWG1iUkNzelE0Vlk3NzQrSi9sTFkvMnlDT1diNzlLYTJ5OGxvYUoKVXczWWVtSld3blp2R3hKNldsL3BmQ2xYN3lEVXlXUktLdGl0cWNjbmpCWVkyRE1tZURwdURDYy9ZdDZDc3dLRmRkMkJ1UwpGZGt5cDlZY3VMaDlLZEFBQUFIR3BoYzI5dVFFRlVMVGd3TWxVdWJXOXVkR3hsYjI0dWF
XNTBjbUVCQWdNRUJRWT0KLS0tLS1FTkQgT1BFTlNTSCBQUklWQVRFIEtFWS0tLS0tCgo=
    +kind: Secret
    +metadata:
    +  name: ssh-credentials
    +  namespace: konveyor-forklift
    +type: Opaque
    +
    +
    +
  2. +
  3. +

    Encode your playbook by concatenating a file and piping it to base64, for example:

    +
    +
    +
    $ cat playbook.yml | base64 -w0
    +
    +
    +
    + + + + + +
    +
    Note
    +
    +
    +

    You can also use a here document to encode a playbook:

    +
    +
    +
    +
    $ cat << EOF | base64 -w0
    +- hosts: localhost
    +  tasks:
    +  - debug:
    +      msg: test
    +EOF
    +
    +
    +
    +
    +
  4. +
  5. +

    Create a Hook CR:

    +
    +
    +
    apiVersion: forklift.konveyor.io/v1beta1
    +kind: Hook
    +metadata:
    +  name: playbook
    +  namespace: konveyor-forklift
    +spec:
    +  image: registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel8:v1.8.2-2
    +  playbook: LSBuYW1lOiBNYWluCiAgaG9zdHM6IGxvY2FsaG9zdAogIHRhc2tzOgogIC0gbmFtZTogTG9hZCBQbGFuCiAgICBpbmNsdWRlX3ZhcnM6CiAgICAgIGZpbGU6IHBsYW4ueW1sCiAgICAgIG5hbWU6IHBsYW4KCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3ZhcnM6CiAgICAgIGZpbGU6IHdvcmtsb2FkLnltbAogICAgICBuYW1lOiB3b3JrbG9hZAoKICAtIG5hbWU6IAogICAgZ2V0ZW50OgogICAgICBkYXRhYmFzZTogcGFzc3dkCiAgICAgIGtleTogInt7IGFuc2libGVfdXNlcl9pZCB9fSIKICAgICAgc3BsaXQ6ICc6JwoKICAtIG5hbWU6IEVuc3VyZSBTU0ggZGlyZWN0b3J5IGV4aXN0cwogICAgZmlsZToKICAgICAgcGF0aDogfi8uc3NoCiAgICAgIHN0YXRlOiBkaXJlY3RvcnkKICAgICAgbW9kZTogMDc1MAogICAgZW52aXJvbm1lbnQ6CiAgICAgIEhPTUU6ICJ7eyBhbnNpYmxlX2ZhY3RzLmdldGVudF9wYXNzd2RbYW5zaWJsZV91c2VyX2lkXVs0XSB9fSIKCiAgLSBrOHNfaW5mbzoKICAgICAgYXBpX3ZlcnNpb246IHYxCiAgICAgIGtpbmQ6IFNlY3JldAogICAgICBuYW1lOiBzc2gtY3JlZGVudGlhbHMKICAgICAgbmFtZXNwYWNlOiBrb252ZXlvci1mb3JrbGlmdAogICAgcmVnaXN0ZXI6IHNzaF9jcmVkZW50aWFscwoKICAtIG5hbWU6IENyZWF0ZSBTU0gga2V5CiAgICBjb3B5OgogICAgICBkZXN0OiB+Ly5zc2gvaWRfcnNhCiAgICAgIGNvbnRlbnQ6ICJ7eyBzc2hfY3JlZGVudGlhbHMucmVzb3VyY2VzWzBdLmRhdGEua2V5IHwgYjY0ZGVjb2RlIH19IgogICAgICBtb2RlOiAwNjAwCgogIC0gYWRkX2hvc3Q6CiAgICAgIG5hbWU6ICJ7eyB3b3JrbG9hZC52bS5pcGFkZHJlc3MgfX0iCiAgICAgIGFuc2libGVfdXNlcjogcm9vdAogICAgICBncm91cHM6IHZtcwoKLSBob3N0czogdm1zCiAgdGFza3M6CiAgLSBuYW1lOiBJbnN0YWxsIGNsb3VkLWluaXQKICAgIGRuZjoKICAgICAgbmFtZToKICAgICAgLSBjbG91ZC1pbml0CiAgICAgIHN0YXRlOiBsYXRlc3QKCiAgLSBuYW1lOiBDcmVhdGUgVGVzdCBGaWxlCiAgICBjb3B5OgogICAgICBkZXN0OiAvdGVzdC50eHQKICAgICAgY29udGVudDogIkhlbGxvIFdvcmxkIgogICAgICBtb2RlOiAwNjQ0Cg==
    +  serviceAccount: forklift-controller (1)
    +
    +
    +
    +
      +
    1. +

      Specify a serviceAccount to run the hook with in order to control access to resources on the cluster.

      +
      + + + + + +
      +
      Note
      +
      +
      +

      To decode an attached playbook, retrieve the resource with custom output and pipe it through base64 -d. For example:

      +
      +
      +
      +
       oc get -n konveyor-forklift hook playbook -o \
      +   go-template='{{ .spec.playbook }}' | base64 -d
      +
      +
      +
      +
      +
      +

      The playbook encoded here runs the following:

      +
      +
      +
      +
      - name: Main
      +  hosts: localhost
      +  tasks:
      +  - name: Load Plan
      +    include_vars:
      +      file: plan.yml
      +      name: plan
      +
      +  - name: Load Workload
      +    include_vars:
      +      file: workload.yml
      +      name: workload
      +
      +  - name:
      +    getent:
      +      database: passwd
      +      key: "{{ ansible_user_id }}"
      +      split: ':'
      +
      +  - name: Ensure SSH directory exists
      +    file:
      +      path: ~/.ssh
      +      state: directory
      +      mode: 0750
      +    environment:
      +      HOME: "{{ ansible_facts.getent_passwd[ansible_user_id][4] }}"
      +
      +  - k8s_info:
      +      api_version: v1
      +      kind: Secret
      +      name: ssh-credentials
      +      namespace: konveyor-forklift
      +    register: ssh_credentials
      +
      +  - name: Create SSH key
      +    copy:
      +      dest: ~/.ssh/id_rsa
      +      content: "{{ ssh_credentials.resources[0].data.key | b64decode }}"
      +      mode: 0600
      +
      +  - add_host:
      +      name: "{{ workload.vm.ipaddress }}"
      +      ansible_user: root
      +      groups: vms
      +
      +- hosts: vms
      +  tasks:
      +  - name: Install cloud-init
      +    dnf:
      +      name:
      +      - cloud-init
      +      state: latest
      +
      +  - name: Create Test File
      +    copy:
      +      dest: /test.txt
      +      content: "Hello World"
      +      mode: 0644
      +
      +
      +
    2. +
    +
    +
  6. +
  7. +

    Create a Plan CR using the hook:

    +
    +
    +
    kind: Plan
    +apiVersion: forklift.konveyor.io/v1beta1
    +metadata:
    +  name: test
    +  namespace: konveyor-forklift
    +spec:
    +  map:
    +    network:
    +      namespace: "konveyor-forklift"
    +      name: "network"
    +    storage:
    +      namespace: "konveyor-forklift"
    +      name: "storage"
    +  provider:
    +    source:
    +      namespace: "konveyor-forklift"
    +      name: "boston"
    +    destination:
    +      namespace: "konveyor-forklift"
    +      name: host
    +  targetNamespace: "konveyor-forklift"
    +  vms:
    +    - id: vm-2861
    +      hooks:
    +        - hook:
    +            namespace: konveyor-forklift
    +            name: playbook
    +          step: PreHook (1)
    +
    +
    +
    +
      +
    1. +

      Options are PreHook, to run the hook before the migration, and PostHook, to run the hook after the migration.

      +
    2. +
    +
    +
  8. +
+
+
+ + + + + +
+
Important
+
+
+

In order for a PreHook to run on a VM, the VM must be started and available via SSH.

+
+
+
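The example playbook above reads the SSH private key from the key entry of a Secret named ssh-credentials in the konveyor-forklift namespace. One way to create such a Secret, assuming your private key file is at /path/to/private_key (a hypothetical path):

$ kubectl create secret generic ssh-credentials -n konveyor-forklift --from-file=key=/path/to/private_key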
+ + +
+ + diff --git a/modules/adding-source-provider/index.html b/modules/adding-source-provider/index.html new file mode 100644 index 00000000000..1912c1c05fd --- /dev/null +++ b/modules/adding-source-provider/index.html @@ -0,0 +1,82 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → Providers for virtualization.

    +
  2. +
  3. +

    Click Create Provider.

    +
  4. +
  5. +

    Click Create provider to add and save the provider.

    +
    +

    The provider appears in the list of providers.

    +
    +
  6. +
+
+ + +
+ + diff --git a/modules/adding-virt-provider/index.html b/modules/adding-virt-provider/index.html new file mode 100644 index 00000000000..d5a27b3623e --- /dev/null +++ b/modules/adding-virt-provider/index.html @@ -0,0 +1,116 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Adding a KubeVirt destination provider

+
+

You can add a KubeVirt destination provider to the OKD web console in addition to the default KubeVirt destination provider, which is the provider where you installed Forklift.

+
+
+
Prerequisites
+ +
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → Providers for virtualization.

    +
  2. +
  3. +

    Click Create Provider.

    +
  4. +
  5. +

    Select KubeVirt from the Provider type list.

    +
  6. +
  7. +

    Specify the following fields:

    +
    +
      +
    • +

      Provider name: Specify the provider name to display in the list of target providers.

      +
    • +
    • +

      Kubernetes API server URL: Specify the OKD cluster API endpoint.

      +
    • +
    • +

      Service account token: Specify the cluster-admin service account token. (See the token example after this procedure.)

      +
      +

      If both URL and Service account token are left blank, the local OKD cluster is used.

      +
      +
    • +
    +
    +
  8. +
  9. +

    Click Create.

    +
    +

    The provider appears in the list of providers.

    +
    +
  10. +
+
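For example, on clusters that support the TokenRequest API (Kubernetes 1.24 and later), you can generate a short-lived token for an existing service account as follows; the service account and namespace are placeholders:

$ kubectl create token <service_account> -n <namespace>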
+ + +
+ + diff --git a/modules/canceling-migration-cli/index.html b/modules/canceling-migration-cli/index.html new file mode 100644 index 00000000000..220b1dc6618 --- /dev/null +++ b/modules/canceling-migration-cli/index.html @@ -0,0 +1,132 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Canceling a migration

+
+

You can use the command line interface (CLI) to cancel an entire migration or the migration of individual virtual machines (VMs) while a migration is in progress.

+
+
+
Canceling an entire migration
+
    +
  • +

    Delete the Migration CR:

    +
    +
    +
    $ kubectl delete migration <migration> -n <namespace> (1)
    +
    +
    +
    +
      +
    1. +

      Specify the name of the Migration CR.

      +
    2. +
    +
    +
  • +
+
+
+
Canceling the migration of individual VMs
+
    +
  1. +

    Add the individual VMs to the spec.cancel block of the Migration manifest:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Migration
    +metadata:
    +  name: <migration>
    +  namespace: <namespace>
    +...
    +spec:
    +  cancel:
    +  - id: vm-102 (1)
    +  - id: vm-203
    +  - name: rhel8-vm
    +EOF
    +
    +
    +
    +
      +
    1. +

      You can specify a VM by using the id key or the name key.

      +
      +

      The value of the id key is the managed object reference for a VMware VM, or the VM UUID for an oVirt VM.

      +
      +
    2. +
    +
    +
  2. +
  3. +

    Retrieve the Migration CR to monitor the progress of the remaining VMs:

    +
    +
    +
    $ kubectl get migration/<migration> -n <namespace> -o yaml
    +
    +
    +
  4. +
+
+ + +
+ + diff --git a/modules/canceling-migration-ui/index.html b/modules/canceling-migration-ui/index.html new file mode 100644 index 00000000000..6db2be7eb14 --- /dev/null +++ b/modules/canceling-migration-ui/index.html @@ -0,0 +1,92 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Canceling a migration

+
+

You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the OKD web console.

+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Plans for virtualization.

    +
  2. +
  3. +

    Click the name of a running migration plan to view the migration details.

    +
  4. +
  5. +

    Select one or more VMs and click Cancel.

    +
  6. +
  7. +

    Click Yes, cancel to confirm the cancellation.

    +
    +

    In the Migration details by VM list, the status of the canceled VMs is Canceled. Migrated and unmigrated virtual machines are not affected.

    +
    +
  8. +
+
+
+

You can restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.

+
+ + +
+ + diff --git a/modules/changing-precopy-intervals/index.html b/modules/changing-precopy-intervals/index.html new file mode 100644 index 00000000000..14dc1d83dc2 --- /dev/null +++ b/modules/changing-precopy-intervals/index.html @@ -0,0 +1,92 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Changing precopy intervals for warm migration

+
+

You can change the snapshot interval by patching the ForkliftController custom resource (CR).

+
+
+
Procedure
+
    +
  • +

    Patch the ForkliftController CR:

    +
    +
    +
    $ kubectl patch forkliftcontroller/<forklift-controller> -n konveyor-forklift -p '{"spec": {"controller_precopy_interval": <60>}}' --type=merge (1)
    +
    +
    +
    +
      +
    1. +

      Specify the precopy interval in minutes. The default value is 60.

      +
      +

      You do not need to restart the forklift-controller pod.

      +
      +
    2. +
    +
    +
  • +
+
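You can read the value back to verify the change. This is a generic kubectl query, shown here as an example:

$ kubectl get forkliftcontroller/<forklift-controller> -n konveyor-forklift -o jsonpath='{.spec.controller_precopy_interval}'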
+ + +
+ + diff --git a/modules/collected-logs-cr-info/index.html b/modules/collected-logs-cr-info/index.html new file mode 100644 index 00000000000..ee1b7dd8df2 --- /dev/null +++ b/modules/collected-logs-cr-info/index.html @@ -0,0 +1,183 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Collected logs and custom resource information

+
+

You can download logs and custom resource (CR) yaml files for the following targets by using the OKD web console or the command line interface (CLI):

+
+
+
    +
  • +

    Migration plan: Web console or CLI.

    +
  • +
  • +

    Virtual machine: Web console or CLI.

    +
  • +
  • +

    Namespace: CLI only.

    +
  • +
+
+
+

The must-gather tool collects the following logs and CR files in an archive file:

+
+
+
    +
  • +

    CRs:

    +
    +
      +
    • +

      DataVolume CR: Represents a disk mounted on a migrated VM.

      +
    • +
    • +

      VirtualMachine CR: Represents a migrated VM.

      +
    • +
    • +

      Plan CR: Defines the VMs and storage and network mapping.

      +
    • +
    • +

      Job CR: Optional: Represents a pre-migration hook, a post-migration hook, or both.

      +
    • +
    +
    +
  • +
  • +

    Logs:

    +
    +
      +
    • +

      importer pod: Disk-to-data-volume conversion log. The importer pod naming convention is importer-<migration_plan>-<vm_id><5_char_id>, for example, importer-mig-plan-ed90dfc6-9a17-4a8btnfh, where ed90dfc6-9a17-4a8 is a truncated oVirt VM ID and btnfh is the generated 5-character ID.

      +
    • +
    • +

      conversion pod: VM conversion log. The conversion pod runs virt-v2v, which installs and configures device drivers on the PVCs of the VM. The conversion pod naming convention is <migration_plan>-<vm_id><5_char_id>.

      +
    • +
    • +

      virt-launcher pod: VM launcher log. When a migrated VM is powered on, the virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

      +
    • +
    • +

      forklift-controller pod: The log is filtered for the migration plan, virtual machine, or namespace specified by the must-gather command.

      +
    • +
    • +

      forklift-must-gather-api pod: The log is filtered for the migration plan, virtual machine, or namespace specified by the must-gather command.

      +
    • +
    • +

      hook-job pod: The log is filtered for hook jobs. The hook-job naming convention is <migration_plan>-<vm_id><5_char_id>, for example, plan2j-vm-3696-posthook-4mx85 or plan2j-vm-3696-prehook-mwqnl.

      +
      + + + + + +
      +
      Note
      +
      +
      +

      Empty or excluded log files are not included in the must-gather archive file.

      +
      +
      +
      +
    • +
    +
    +
  • +
+
+
+
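For reference, a targeted must-gather invocation filtered by migration plan might look like the following. This is a sketch that assumes the upstream forklift-must-gather image and its targeted entrypoint; adjust the image path for your environment:

$ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather -- PLAN=<migration_plan> /usr/bin/targeted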
Example must-gather archive structure for a VMware migration plan
+
+
must-gather
+└── namespaces
+    ├── target-vm-ns
+    │   ├── crs
+    │   │   ├── datavolume
+    │   │   │   ├── mig-plan-vm-7595-tkhdz.yaml
+    │   │   │   ├── mig-plan-vm-7595-5qvqp.yaml
+    │   │   │   └── mig-plan-vm-8325-xccfw.yaml
+    │   │   └── virtualmachine
+    │   │       ├── test-test-rhel8-2disks2nics.yaml
+    │   │       └── test-x2019.yaml
+    │   └── logs
+    │       ├── importer-mig-plan-vm-7595-tkhdz
+    │       │   └── current.log
+    │       ├── importer-mig-plan-vm-7595-5qvqp
+    │       │   └── current.log
+    │       ├── importer-mig-plan-vm-8325-xccfw
+    │       │   └── current.log
+    │       ├── mig-plan-vm-7595-4glzd
+    │       │   └── current.log
+    │       └── mig-plan-vm-8325-4zw49
+    │           └── current.log
+    └── openshift-mtv
+        ├── crs
+        │   └── plan
+        │       └── mig-plan-cold.yaml
+        └── logs
+            ├── forklift-controller-67656d574-w74md
+            │   └── current.log
+            └── forklift-must-gather-api-89fc7f4b6-hlwb6
+                └── current.log
+
+
+ + +
+ + diff --git a/modules/common-attributes/index.html b/modules/common-attributes/index.html new file mode 100644 index 00000000000..af293092aab --- /dev/null +++ b/modules/common-attributes/index.html @@ -0,0 +1,66 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + +
+ + diff --git a/modules/compatibility-guidelines/index.html b/modules/compatibility-guidelines/index.html new file mode 100644 index 00000000000..d130390978b --- /dev/null +++ b/modules/compatibility-guidelines/index.html @@ -0,0 +1,137 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Software compatibility guidelines

+
+
+
+

You must install compatible software versions.

+
+ + ++++++++ + + + + + + + + + + + + + + + + + + + + +
Table 1. Compatible software versions
| Forklift | OKD | KubeVirt | VMware vSphere | oVirt | OpenStack |
| -------- | --- | -------- | -------------- | ----- | --------- |
| 2.3.0 | 4.10 or later | 4.10 or later | 6.5 or later | 4.4 SP1 or later | 16.1 or later |

+
+ + + + + +
+
Note
+
+
Migration from oVirt 4.3
+
+

Forklift was tested only with oVirt (RHV) 4.4 SP1. Migration from oVirt 4.3 has not been tested with Forklift 2.3. While not supported, basic migrations from oVirt 4.3 are expected to work.

+
+
+

Generally, it is advised to upgrade oVirt Manager (RHVM) to the supported version above before the migration to KubeVirt.

+
+
+

However, migrations from oVirt 4.3.11 were tested with Forklift 2.3 and may work in practice in many environments. In this case, too, we advise upgrading oVirt Manager (RHVM) to the supported version before the migration.

+
+
+
+
+
+
+

OpenShift Operator Life Cycles

+
+
+

For more information about the software maintenance Life Cycle classifications for Operators shipped by Red Hat for use with OpenShift Container Platform, see OpenShift Operator Life Cycles.

+
+
+
+ + +
+ + diff --git a/modules/configuring-mtv-operator/index.html b/modules/configuring-mtv-operator/index.html new file mode 100644 index 00000000000..203778e3b0b --- /dev/null +++ b/modules/configuring-mtv-operator/index.html @@ -0,0 +1,202 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Configuring the Forklift Operator

+
+

You can configure all of the following settings of the Forklift Operator by modifying the ForkliftController CR, or in the Settings section of the Overview page, unless otherwise indicated.

+
+
+
    +
  • +

    Maximum number of virtual machines (VMs) per plan that can be migrated simultaneously.

    +
  • +
  • +

    How long must-gather reports are retained before being automatically deleted.

    +
  • +
  • +

    CPU limit allocated to the main controller container.

    +
  • +
  • +

    Memory limit allocated to the main controller container.

    +
  • +
  • +

    Interval at which a new snapshot is requested before initiating a warm migration.

    +
  • +
  • +

    Frequency with which the system checks the status of snapshot creation or removal during a warm migration.

    +
  • +
  • +

    Percentage of space in persistent volumes allocated as file system overhead when the storageclass is filesystem (ForkliftController CR only).

    +
  • +
  • +

    Fixed amount of additional space allocated in persistent block volumes. This setting is applicable for any storageclass that is block-based (ForkliftController CR only).

    +
  • +
  • +

    Configuration map of operating systems to preferences for vSphere source providers (ForkliftController CR only).

    +
  • +
  • +

    Configuration map of operating systems to preferences for oVirt (oVirt) source providers (ForkliftController CR only).

    +
  • +
+
+
+

The procedure for configuring these settings using the user interface is presented in Configuring MTV settings. The procedure for configuring these settings by modifying the ForkliftController CR is presented below.

+
+
+
Procedure
+
    +
  • +

    Change a parameter’s value in the spec portion of the ForkliftController CR by adding the label and value as follows:

    +
  • +
+
+
+
+
spec:
+  label: value (1)
+
+
+
+
    +
  1. +

    Labels you can configure using the CLI are shown in the table that follows, along with a description of each label and its default value.

    +
  2. +
+
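For example, to allow 30 VMs per plan to be migrated simultaneously, you could patch the CR with the controller_max_vm_inflight label described in the table below, using the same patch syntax shown elsewhere in this document:

$ kubectl patch forkliftcontroller/<forklift-controller> -n konveyor-forklift -p '{"spec": {"controller_max_vm_inflight": 30}}' --type=merge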
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Forklift Operator labels
| Label | Description | Default value |
| ----- | ----------- | ------------- |
| controller_max_vm_inflight | The maximum number of VMs per plan that can be migrated simultaneously. | 20 |
| must_gather_api_cleanup_max_age | The duration in hours for retaining must-gather reports before they are automatically deleted. | -1 (disabled) |
| controller_container_limits_cpu | The CPU limit allocated to the main controller container. | 500m |
| controller_container_limits_memory | The memory limit allocated to the main controller container. | 800Mi |
| controller_precopy_interval | The interval in minutes at which a new snapshot is requested before initiating a warm migration. | 60 |
| controller_snapshot_status_check_rate_seconds | The frequency in seconds with which the system checks the status of snapshot creation or removal during a warm migration. | 10 |
| controller_filesystem_overhead | Percentage of space in persistent volumes allocated as file system overhead when the storageclass is filesystem. ForkliftController CR only. | 10 |
| controller_block_overhead | Fixed amount of additional space allocated in persistent block volumes. This setting is applicable for any storageclass that is block-based. It can be used when data, such as encryption headers, is written to the persistent volumes in addition to the content of the virtual disk. ForkliftController CR only. | 0 |
| vsphere_osmap_configmap_name | Configuration map for vSphere source providers, mapping the operating system of the incoming VM to a KubeVirt preference name. This configuration map needs to be in the namespace where the Forklift Operator is deployed. To see the list of preferences in your KubeVirt environment, open the {ocp-name} web console and click Virtualization → Preferences. You can add values to the configuration map when this label has the default value, forklift-vsphere-osmap. In order to override or delete values, specify a configuration map that is different from forklift-vsphere-osmap. ForkliftController CR only. | forklift-vsphere-osmap |
| ovirt_osmap_configmap_name | Configuration map for oVirt source providers, mapping the operating system of the incoming VM to a KubeVirt preference name. This configuration map needs to be in the namespace where the Forklift Operator is deployed. To see the list of preferences in your KubeVirt environment, open the {ocp-name} web console and click Virtualization → Preferences. You can add values to the configuration map when this label has the default value, forklift-ovirt-osmap. In order to override or delete values, specify a configuration map that is different from forklift-ovirt-osmap. ForkliftController CR only. | forklift-ovirt-osmap |

+ + +
+ + diff --git a/modules/creating-migration-plan-2-6-3/index.html b/modules/creating-migration-plan-2-6-3/index.html new file mode 100644 index 00000000000..97716d76c26 --- /dev/null +++ b/modules/creating-migration-plan-2-6-3/index.html @@ -0,0 +1,139 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

The Create migration plan pane opens. It displays the source provider’s name and suggestions for a target provider and namespace, a network map, and a storage map.

1. Enter the Plan name.
2. Make any needed changes to the editable items.
3. Click Add mapping to edit a suggested network mapping or a storage mapping, or to add one or more additional mappings.
4. Click Create migration plan.

+
+
+

Forklift validates the migration plan and the Plan details page opens, indicating whether the plan is ready for use or contains an error. The details of the plan are listed, and you can edit the items you filled in on the previous page. If you make any changes, Forklift validates the plan again.

+
+
+
    +
  1. +

    VMware source providers only (All optional):

    +
    +
      +
    • +

      Preserving static IPs of VMs: By default, virtual network interface controllers (vNICs) change during the migration process. As a result, vNICs that are configured with a static IP linked to the interface name in the guest VM lose their IP. To avoid this, click the Edit icon next to Preserve static IPs and toggle the Whether to preserve the static IPs switch in the window that opens. Then click Save.

      +
      +

      Forklift then issues a warning message about any VMs for which vNIC properties are missing. To retrieve any missing vNIC properties, run those VMs in vSphere in order for the vNIC properties to be reported to Forklift.

      +
      +
    • +
    • +

      Entering a list of decryption passphrases for disks encrypted using Linux Unified Key Setup (LUKS): To enter a list of decryption passphrases for LUKS-encrypted devices, in the Settings section, click the Edit icon next to Disk decryption passphrases, enter the passphrases, and then click Save. You do not need to enter the passphrases in a specific order - for each LUKS-encrypted device, Forklift tries each passphrase until one unlocks the device.

      +
    • +
    • +

      Specifying a root device: Applies to multi-boot VM migrations only. By default, Forklift uses the first bootable device detected as the root device.

      +
      +

      To specify a different root device, in the Settings section, click the Edit icon next to Root device and choose a device from the list of commonly-used options, or enter a device in the text box.

      +
      +
      +

      Forklift uses the following format for disk location: /dev/sd<disk_identifier><disk_partition>. For example, if the second disk is the root device and the operating system is on the disk’s second partition, the format would be: /dev/sdb2. After you enter the boot device, click Save.

      +
      +
      +

      If the conversion fails because the boot device provided is incorrect, it is possible to get the correct information by looking at the conversion pod logs.

      +
      +
    • +
    +
    +
  2. +
  3. +

    oVirt source providers only (Optional):

    +
    +
      +
    • +

      Preserving the CPU model of VMs that are migrated from oVirt: Generally, the CPU model (type) for oVirt VMs is set at the cluster level, but it can be set at the VM level, which is called a custom CPU model. By default, Forklift sets the CPU model on the destination cluster as follows: Forklift preserves custom CPU settings for VMs that have them, but, for VMs without custom CPU settings, Forklift does not set the CPU model. Instead, the CPU model is later set by KubeVirt.

      +
      +

      To preserve the cluster-level CPU model of your oVirt VMs, in the Settings section, click the Edit icon next to Preserve CPU model. Toggle the Whether to preserve the CPU model switch, and then click Save.

      +
      +
    • +
    +
    +
  4. +
  5. +

    If the plan is valid,

    +
    +
      +
    1. +

      You can run the plan now by clicking Start migration.

      +
    2. +
    3. +

      You can run the plan later by selecting it on the Plans for virtualization page and following the procedure in Running a migration plan.

      +
      +


      +
      +
    4. +
    +
    +
  6. +
+
+ + +
+ + diff --git a/modules/creating-migration-plan/index.html b/modules/creating-migration-plan/index.html new file mode 100644 index 00000000000..f221b27e303 --- /dev/null +++ b/modules/creating-migration-plan/index.html @@ -0,0 +1,270 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a migration plan

+
+

You can create a migration plan by using the OKD web console.

+
+
+

A migration plan allows you to group virtual machines to be migrated together or with the same migration parameters, for example, a percentage of the members of a cluster or a complete application.

+
+
+

You can configure a hook to run an Ansible playbook or custom container image during a specified stage of the migration plan.

+
+
+
Prerequisites
+
    +
  • +

    If Forklift is not installed on the target cluster, you must add a target provider on the Providers page of the web console.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → Plans for virtualization.

    +
  2. +
  3. +

    Click Create plan.

    +
  4. +
  5. +

    Specify the following fields:

    +
    +
      +
    • +

      Plan name: Enter a migration plan name to display in the migration plan list.

      +
    • +
    • +

      Plan description: Optional: Brief description of the migration plan.

      +
    • +
    • +

      Source provider: Select a source provider.

      +
    • +
    • +

      Target provider: Select a target provider.

      +
    • +
    • +

      Target namespace: Do one of the following:

      +
      +
        +
      • +

        Select a target namespace from the list

        +
      • +
      • +

        Create a target namespace by typing its name in the text box, and then clicking create "<the_name_you_entered>"

        +
      • +
      +
      +
    • +
    • +

      You can change the migration transfer network for this plan by clicking Select a different network, selecting a network from the list, and then clicking Select.

      +
      +

      If you defined a migration transfer network for the KubeVirt provider and if the network is in the target namespace, the network that you defined is the default network for all migration plans. Otherwise, the pod network is used.

      +
      +
    • +
    +
    +
  6. +
  7. +

    Click Next.

    +
  8. +
  9. +

    Select options to filter the list of source VMs and click Next.

    +
  10. +
  11. +

    Select the VMs to migrate and then click Next.

    +
  12. +
  13. +

    Select an existing network mapping or create a new network mapping.

    +
  14. +
  15. +

    Optional: Click Add to add an additional network mapping.

    +
    +

    To create a new network mapping:

    +
    +
    +
      +
    • +

      Select a target network for each source network.

      +
    • +
    • +

      Optional: Select Save current mapping as a template and enter a name for the network mapping.

      +
    • +
    +
    +
  16. +
  17. +

    Click Next.

    +
  18. +
  19. +

    Select an existing storage mapping, which you can modify, or create a new storage mapping.

    +
    +

    To create a new storage mapping:

    +
    +
    +
      +
    1. +

      If your source provider is VMware, select a Source datastore and a Target storage class.

      +
    2. +
    3. +

      If your source provider is oVirt, select a Source storage domain and a Target storage class.

      +
    4. +
    5. +

      If your source provider is {osp}, select a Source volume type and a Target storage class.

      +
    6. +
    +
    +
  20. +
  21. +

    Optional: Select Save current mapping as a template and enter a name for the storage mapping.

    +
  22. +
  23. +

    Click Next.

    +
  24. +
  25. +

    Select a migration type and click Next.

    +
    +
      +
    • +

      Cold migration: The source VMs are stopped while the data is copied.

      +
    • +
    • +

      Warm migration: The source VMs run while the data is copied incrementally. Later, you will run the cutover, which stops the VMs and copies the remaining VM data and metadata.

      +
      + + + + + +
      +
      Note
      +
      +
      +

      Warm migration is supported only from vSphere and oVirt.

      +
      +
      +
      +
    • +
    +
    +
  26. +
  27. +

    Click Next.

    +
  28. +
  29. +

    Optional: You can create a migration hook to run an Ansible playbook before or after migration:

    +
    +
      +
    1. +

      Click Add hook.

      +
    2. +
    3. +

      Select the Step when the hook will be run: pre-migration or post-migration.

      +
    4. +
    5. +

      Select a Hook definition:

      +
      +
        +
      • +

        Ansible playbook: Browse to the Ansible playbook or paste it into the field.

        +
      • +
      • +

        Custom container image: If you do not want to use the default hook-runner image, enter the image path: <registry_path>/<image_name>:<tag>.

        +
        + + + + + +
        +
        Note
        +
        +
        +

        The registry must be accessible to your OKD cluster.

        +
        +
        +
        +
      • +
      +
      +
    6. +
    +
    +
  30. +
  31. +

    Click Next.

    +
  32. +
  33. +

    Review your migration plan and click Finish.

    +
    +

    The migration plan is saved on the Plans page.

    +
    +
    +

    You can click the {kebab} of the migration plan and select View details to verify the migration plan details.

    +
    +
  34. +
+
+ + +
+ + diff --git a/modules/creating-network-mapping/index.html b/modules/creating-network-mapping/index.html new file mode 100644 index 00000000000..4f972597667 --- /dev/null +++ b/modules/creating-network-mapping/index.html @@ -0,0 +1,122 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a network mapping

+
+

You can create one or more network mappings by using the OKD web console to map source networks to KubeVirt networks.

+
+
+
Prerequisites
+
    +
  • +

    Source and target providers added to the OKD web console.

    +
  • +
  • +

    If you map more than one source and target network, each additional KubeVirt network requires its own network attachment definition.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → NetworkMaps for virtualization.

    +
  2. +
  3. +

    Click Create NetworkMap.

    +
  4. +
  5. +

    Specify the following fields:

    +
    +
      +
    • +

      Name: Enter a name to display in the network mappings list.

      +
    • +
    • +

      Source provider: Select a source provider.

      +
    • +
    • +

      Target provider: Select a target provider.

      +
    • +
    +
    +
  6. +
  7. +

    Select a Source network and a Target namespace/network.

    +
  8. +
  9. +

    Optional: Click Add to create additional network mappings or to map multiple source networks to a single target network.

    +
  10. +
  11. +

    If you create an additional network mapping, select the network attachment definition as the target network.

    +
  12. +
  13. +

    Click Create.

    +
    +

    The network mapping is displayed on the NetworkMaps screen.

    +
    +
  14. +
+
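The web console creates a NetworkMap CR behind the scenes. The following is a minimal sketch of an equivalent CR, assuming a Multus network attachment definition as the target; field names follow the forklift.konveyor.io/v1beta1 API and all bracketed values are placeholders:

apiVersion: forklift.konveyor.io/v1beta1
kind: NetworkMap
metadata:
  name: <network_map>
  namespace: konveyor-forklift
spec:
  provider:
    source:
      name: <source_provider>
      namespace: konveyor-forklift
    destination:
      name: <destination_provider>
      namespace: konveyor-forklift
  map:
    - source:
        id: <source_network_id>
      destination:
        type: multus
        name: <network_attachment_definition>
        namespace: <namespace>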
+ + +
+ + diff --git a/modules/creating-storage-mapping/index.html b/modules/creating-storage-mapping/index.html new file mode 100644 index 00000000000..2dbc3b3e042 --- /dev/null +++ b/modules/creating-storage-mapping/index.html @@ -0,0 +1,138 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a storage mapping

+
+

You can create a storage mapping by using the OKD web console to map source disk storages to KubeVirt storage classes.

+
+
+
Prerequisites
+
    +
  • +

    Source and target providers added to the OKD web console.

    +
  • +
  • +

    Local and shared persistent storage that support VM migration.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → StorageMaps for virtualization.

    +
  2. +
  3. +

    Click Create StorageMap.

    +
  4. +
  5. +

    Specify the following fields:

    +
    +
      +
    • +

      Name: Enter a name to display in the storage mappings list.

      +
    • +
    • +

      Source provider: Select a source provider.

      +
    • +
    • +

      Target provider: Select a target provider.

      +
    • +
    +
    +
  6. +
  7. +

    To create a storage mapping, click Add and map storage sources to target storage classes as follows:

    +
    +
      +
    1. +

      If your source provider is VMware vSphere, select a Source datastore and a Target storage class.

      +
    2. +
    3. +

      If your source provider is oVirt, select a Source storage domain and a Target storage class.

      +
    4. +
    5. +

      If your source provider is {osp}, select a Source volume type and a Target storage class.

      +
    6. +
    7. +

      If your source provider is a set of one or more OVA files, select a Source and a Target storage class for the dummy storage that applies to all virtual disks within the OVA files.

      +
    8. +
    9. +

      If your source provider is KubeVirt, select a Source storage class and a Target storage class.

      +
    10. +
    11. +

      Optional: Click Add to create additional storage mappings, including mapping multiple storage sources to a single target storage class.

      +
    12. +
    +
    +
  8. +
  9. +

    Click Create.

    +
    +

    The mapping is displayed on the StorageMaps page.

    +
    +
  10. +
+
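As with network mappings, the console creates a StorageMap CR. A minimal sketch of an equivalent CR for a vSphere source, with placeholders for your environment (field names follow the forklift.konveyor.io/v1beta1 API):

apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
  name: <storage_map>
  namespace: konveyor-forklift
spec:
  provider:
    source:
      name: <source_provider>
      namespace: konveyor-forklift
    destination:
      name: <destination_provider>
      namespace: konveyor-forklift
  map:
    - source:
        id: <source_datastore_id>
      destination:
        storageClass: <storage_class>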
+ + +
+ + diff --git a/modules/creating-validation-rule/index.html b/modules/creating-validation-rule/index.html new file mode 100644 index 00000000000..60ff273645e --- /dev/null +++ b/modules/creating-validation-rule/index.html @@ -0,0 +1,238 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a validation rule

+
+

You create a validation rule by applying a config map custom resource (CR) containing the rule to the Validation service.

+
+
+ + + + + +
+
Important
+
+
+
    +
  • +

    If you create a rule with the same name as an existing rule, the Validation service performs an OR operation with the rules.

    +
  • +
  • +

    If you create a rule that contradicts a default rule, the Validation service will not start.

    +
  • +
+
+
+
+
+
Validation rule example
+

Validation rules are based on virtual machine (VM) attributes collected by the Provider Inventory service.

+
+
+

For example, the VMware API uses this path to check whether a VMware VM has NUMA node affinity configured: MOR:VirtualMachine.config.extraConfig["numa.nodeAffinity"].

+
+
+

The Provider Inventory service simplifies this configuration and returns a testable attribute with a list value:

+
+
+
+
"numaNodeAffinity": [
+    "0",
+    "1"
+],
+
+
+
+

You create a Rego query, based on this attribute, and add it to the forklift-validation-config config map:

+
+
+
+
`count(input.numaNodeAffinity) != 0`
+
+
+
+
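For example, a complete rule built around this query could look like the following. This is a sketch that follows the package and concerns conventions of the procedure below; the label and assessment strings are illustrative:

package io.konveyor.forklift.vmware

has_numa_affinity {
    count(input.numaNodeAffinity) != 0
}

concerns[flag] {
    has_numa_affinity
    flag := {
        "category": "Warning",
        "label": "NUMA node affinity detected",
        "assessment": "NUMA node affinity detected on this VM."
    }
}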
Procedure
+
    +
  1. +

    Create a config map CR according to the following example:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: v1
    +kind: ConfigMap
    +metadata:
    +  name: <forklift-validation-config>
    +  namespace: konveyor-forklift
    +data:
    +  vmware_multiple_disks.rego: |-
    +    package <provider_package> (1)
    +
    +    has_multiple_disks { (2)
    +      count(input.disks) > 1
    +    }
    +
    +    concerns[flag] {
    +      has_multiple_disks (3)
    +        flag := {
    +          "category": "<Information>", (4)
    +          "label": "Multiple disks detected",
    +          "assessment": "Multiple disks detected on this VM."
    +        }
    +    }
    +EOF
    +
    +
    +
    +
      +
    1. +

      Specify the provider package name. Allowed values are io.konveyor.forklift.vmware for VMware and io.konveyor.forklift.ovirt for oVirt.

      +
    2. +
    3. +

      Specify the concerns name and Rego query.

      +
    4. +
    5. +

      Specify the concerns name and flag parameter values.

      +
    6. +
    7. +

      Allowed values are Critical, Warning, and Information.

      +
    8. +
    +
    +
  2. +
  3. +

    Stop the Validation pod by scaling the forklift-controller deployment to 0:

    +
    +
    +
    $ kubectl scale -n konveyor-forklift --replicas=0 deployment/forklift-controller
    +
    +
    +
  4. +
  5. +

    Start the Validation pod by scaling the forklift-controller deployment to 1:

    +
    +
    +
    $ kubectl scale -n konveyor-forklift --replicas=1 deployment/forklift-controller
    +
    +
    +
  6. +
  7. +

    Check the Validation pod log to verify that the pod started:

    +
    +
    +
    $ kubectl logs -f <validation_pod>
    +
    +
    +
    +

    If the custom rule conflicts with a default rule, the Validation pod will not start.

    +
    +
  8. +
  9. +

    Remove the source provider:

    +
    +
    +
    $ kubectl delete provider <provider> -n konveyor-forklift
    +
    +
    +
  10. +
  11. +

    Add the source provider to apply the new rule:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Provider
    +metadata:
    +  name: <provider>
    +  namespace: konveyor-forklift
    +spec:
    +  type: <provider_type> (1)
    +  url: <api_end_point> (2)
    +  secret:
    +    name: <secret> (3)
    +    namespace: konveyor-forklift
    +EOF
    +
    +
    +
    +
      +
    1. +

      Allowed values are ovirt, vsphere, and openstack.

      +
    2. +
    3. +

      Specify the API end point URL, for example, https://<vCenter_host>/sdk for vSphere, https://<engine_host>/ovirt-engine/api for oVirt, or https://<identity_service>/v3 for {osp}.

      +
    4. +
    5. +

      Specify the name of the provider Secret CR.

      +
    6. +
    +
    +
  12. +
+
+
+

You must update the rules version after creating a custom rule so that the Inventory service detects the changes and validates the VMs.

+
+ + +
+ + diff --git a/modules/creating-vddk-image/index.html b/modules/creating-vddk-image/index.html new file mode 100644 index 00000000000..805a4ae9d0d --- /dev/null +++ b/modules/creating-vddk-image/index.html @@ -0,0 +1,201 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a VDDK image

+
+

Forklift can use the VMware Virtual Disk Development Kit (VDDK) SDK to accelerate transferring virtual disks from VMware vSphere.

+
+
+ + + + + +
+
Note
+
+
+

Creating a VDDK image, although optional, is highly recommended.

+
+
+
+
+

To make use of this feature, you download the VMware Virtual Disk Development Kit (VDDK), build a VDDK image, and push the VDDK image to your image registry.

+
+
+

The VDDK package contains symbolic links; therefore, the procedure of creating a VDDK image must be performed on a file system that preserves symbolic links (symlinks).

+
+
+ + + + + +
+
Note
+
+
+

Storing the VDDK image in a public registry might violate the VMware license terms.

+
+
+
+
+
Prerequisites
+
    +
  • +

    OKD image registry.

    +
  • +
  • +

    podman installed.

    +
  • +
  • +

    You are working on a file system that preserves symbolic links (symlinks).

    +
  • +
  • +

    If you are using an external registry, KubeVirt must be able to access it.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Create and navigate to a temporary directory:

    +
    +
    +
    $ mkdir /tmp/<dir_name> && cd /tmp/<dir_name>
    +
    +
    +
  2. +
  3. +

    In a browser, navigate to the VMware VDDK version 8 download page.

    +
  4. +
  5. +

    Select version 8.0.1 and click Download.

    +
  6. +
+
+
+ + + + + +
+
Note
+
+
+

In order to migrate to KubeVirt 4.12, download VDDK version 7.0.3.2 from the VMware VDDK version 7 download page.

+
+
+
+
+
    +
  1. +

    Save the VDDK archive file in the temporary directory.

    +
  2. +
  3. +

    Extract the VDDK archive:

    +
    +
    +
    $ tar -xzf VMware-vix-disklib-<version>.x86_64.tar.gz
    +
    +
    +
  4. +
  5. +

    Create a Dockerfile:

    +
    +
    +
    $ cat > Dockerfile <<EOF
    +FROM registry.access.redhat.com/ubi8/ubi-minimal
    +USER 1001
    +COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
    +RUN mkdir -p /opt
    +ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]
    +EOF
    +
    +
    +
  6. +
  7. +

    Build the VDDK image:

    +
    +
    +
    $ podman build . -t <registry_route_or_server_path>/vddk:<tag>
    +
    +
    +
  8. +
  9. +

    Push the VDDK image to the registry:

    +
    +
    +
    $ podman push <registry_route_or_server_path>/vddk:<tag>
    +
    +
    +
  10. +
  11. +

    Ensure that the image is accessible to your KubeVirt environment.

    +
  12. +
+
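After the image is pushed, it is typically referenced from the vSphere Provider CR. A minimal sketch, assuming the upstream spec.settings.vddkInitImage field of the forklift.konveyor.io/v1beta1 Provider API:

spec:
  settings:
    vddkInitImage: <registry_route_or_server_path>/vddk:<tag>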
+ + +
+ + diff --git a/modules/error-messages/index.html b/modules/error-messages/index.html new file mode 100644 index 00000000000..04b064ecb67 --- /dev/null +++ b/modules/error-messages/index.html @@ -0,0 +1,83 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Error messages

+
+

This section describes error messages and how to resolve them.

+
+
+
warm import retry limit reached
+

The warm import retry limit reached error message is displayed during a warm migration if a VMware virtual machine (VM) has reached the maximum number (28) of changed block tracking (CBT) snapshots during the precopy stage.

+
+
+

To resolve this problem, delete some of the CBT snapshots from the VM and restart the migration plan.

+
+
+
Unable to resize disk image to required size
+

The Unable to resize disk image to required size error message is displayed when migration fails because a virtual machine on the target provider uses persistent volumes with an EXT4 file system on block storage. The problem occurs because the default overhead that is assumed by CDI does not completely include the space reserved for the root partition.

+
+
+

To resolve this problem, increase the file system overhead in CDI to more than 10%.

+
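For example, you could raise the overhead to 15% by setting the controller_filesystem_overhead label on the ForkliftController CR, using the patch syntax shown earlier in this document:

$ kubectl patch forkliftcontroller/<forklift-controller> -n konveyor-forklift -p '{"spec": {"controller_filesystem_overhead": 15}}' --type=merge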
+ + +
+ + diff --git a/modules/images/136_OpenShift_Migration_Toolkit_0121_mtv-workflow.svg b/modules/images/136_OpenShift_Migration_Toolkit_0121_mtv-workflow.svg new file mode 100644 index 00000000000..999c62adec4 --- /dev/null +++ b/modules/images/136_OpenShift_Migration_Toolkit_0121_mtv-workflow.svg @@ -0,0 +1 @@ +NetworkmappingTargetproviderVirtualmachines1UserVirtual-Machine-Import4MigrationControllerPlan2Migration3StoragemappingSourceprovider136_OpenShift_0121 diff --git a/modules/images/136_OpenShift_Migration_Toolkit_0121_virt-workflow.svg b/modules/images/136_OpenShift_Migration_Toolkit_0121_virt-workflow.svg new file mode 100644 index 00000000000..473e21ba4e2 --- /dev/null +++ b/modules/images/136_OpenShift_Migration_Toolkit_0121_virt-workflow.svg @@ -0,0 +1 @@ +Virtual-Machine-ImportProviderAPIVirtualmachineCDIControllerKubeVirtController<VM_name>podDataVolumeSourceProviderConversionpodPersistentVolumeDynamicallyprovisionedstoragePersistentVolume Claim163438710ProviderCredentialsUserVMdisk29VirtualMachineImportControllerVirtual-Machine-InstanceVirtual-Machine57Importerpod136_OpenShift_0121 diff --git a/modules/images/136_Upstream_Migration_Toolkit_0121_mtv-workflow.svg b/modules/images/136_Upstream_Migration_Toolkit_0121_mtv-workflow.svg new file mode 100644 index 00000000000..33a031a0909 --- /dev/null +++ b/modules/images/136_Upstream_Migration_Toolkit_0121_mtv-workflow.svg @@ -0,0 +1 @@ +NetworkmappingTargetproviderVirtualmachines1UserVirtual-Machine-Import4MigrationControllerPlan2Migration3StoragemappingSourceprovider136_0121 diff --git a/modules/images/136_Upstream_Migration_Toolkit_0121_virt-workflow.svg b/modules/images/136_Upstream_Migration_Toolkit_0121_virt-workflow.svg new file mode 100644 index 00000000000..e73192c0102 --- /dev/null +++ b/modules/images/136_Upstream_Migration_Toolkit_0121_virt-workflow.svg @@ -0,0 +1 @@ +Virtual-Machine-ImportProviderAPIVirtualmachineCDIControllerKubeVirtController<VM_name>podDataVolumeSourceProviderConversionpodPersistentVolumeDynamicallyprovisionedstoragePersistentVolume Claim163438710ProviderCredentialsUserVMdisk29VirtualMachineImportControllerVirtual-Machine-InstanceVirtual-Machine57Importerpod136_0121 diff --git a/modules/images/forklift-logo-darkbg.png b/modules/images/forklift-logo-darkbg.png new file mode 100644 index 00000000000..06e9d1b2494 Binary files /dev/null and b/modules/images/forklift-logo-darkbg.png differ diff --git a/modules/images/forklift-logo-darkbg.svg b/modules/images/forklift-logo-darkbg.svg new file mode 100644 index 00000000000..8a846e6361a --- /dev/null +++ b/modules/images/forklift-logo-darkbg.svg @@ -0,0 +1,164 @@ + + + + + + + + + + image/svg+xml + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/modules/images/forklift-logo-lightbg.png b/modules/images/forklift-logo-lightbg.png new file mode 100644 index 00000000000..8dba83d97f8 Binary files /dev/null and b/modules/images/forklift-logo-lightbg.png differ diff --git a/modules/images/forklift-logo-lightbg.svg b/modules/images/forklift-logo-lightbg.svg new file mode 100644 index 00000000000..a8038cdf923 --- /dev/null +++ b/modules/images/forklift-logo-lightbg.svg @@ -0,0 +1,159 @@ + + + + + + + + + + image/svg+xml + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/modules/images/kebab.png b/modules/images/kebab.png new file mode 100644 index 00000000000..81893bd4ad1 Binary files /dev/null and b/modules/images/kebab.png differ diff --git a/modules/images/mtv-ui.png b/modules/images/mtv-ui.png new file 
mode 100644 index 00000000000..009c9b46386 Binary files /dev/null and b/modules/images/mtv-ui.png differ diff --git a/modules/increasing-nfc-memory-vmware-host/index.html b/modules/increasing-nfc-memory-vmware-host/index.html new file mode 100644 index 00000000000..3e1a3ab015b --- /dev/null +++ b/modules/increasing-nfc-memory-vmware-host/index.html @@ -0,0 +1,103 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Increasing the NFC service memory of an ESXi host

+
+

If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host. Otherwise, the migration will fail because the NFC service memory is limited to 10 parallel connections.

+
+
+
Procedure
+
    +
  1. +

    Log in to the ESXi host as root.

    +
  2. +
  3. +

    Change the value of maxMemory to 1000000000 in /etc/vmware/hostd/config.xml:

    +
    +
    +
    ...
    +      <nfcsvc>
    +         <path>libnfcsvc.so</path>
    +         <enabled>true</enabled>
    +         <maxMemory>1000000000</maxMemory>
    +         <maxStreamMemory>10485760</maxStreamMemory>
    +      </nfcsvc>
    +...
    +
    +
    +
  4. +
  5. +

    Restart hostd:

    +
    +
    +
    # /etc/init.d/hostd restart
    +
    +
    +
    +

    You do not need to reboot the host.

    +
    +
  6. +
+
+ + +
+ + diff --git a/modules/installing-mtv-operator/index.html b/modules/installing-mtv-operator/index.html new file mode 100644 index 00000000000..603ab937d30 --- /dev/null +++ b/modules/installing-mtv-operator/index.html @@ -0,0 +1,79 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
Prerequisites
+
    +
  • +

    OKD 4.10 or later installed.

    +
  • +
  • +

    KubeVirt Operator installed on an OpenShift migration target cluster.

    +
  • +
  • +

    You must be logged in as a user with cluster-admin permissions.

    +
  • +
+
+ + +
+ + diff --git a/modules/issue_templates/issue.md b/modules/issue_templates/issue.md new file mode 100644 index 00000000000..30d52ab9cba --- /dev/null +++ b/modules/issue_templates/issue.md @@ -0,0 +1,15 @@ +## Summary + +(Describe the problem. Don't worry if the problem occurs in more than one checklist. You only need to mention the checklist where you see a problem. We will fix the module.) + +## What is the problem? + +(Paste the text or a screenshot here. Remember to include the **task number** so that we know which module is affected.) + +## What is the solution? + +(Correct text, link, or task.) + +## Notes + +(Do we need to fix something else?) diff --git a/modules/issue_templates/issue/index.html b/modules/issue_templates/issue/index.html new file mode 100644 index 00000000000..808faea9bc4 --- /dev/null +++ b/modules/issue_templates/issue/index.html @@ -0,0 +1,79 @@ + + + + + + + + Summary | Forklift Documentation + + + + + + + + + + + + + +Summary | Forklift Documentation + + + + + + + + + + + + + + + + + + + + + + +
+

Summary

+ +

(Describe the problem. Don’t worry if the problem occurs in more than one checklist. You only need to mention the checklist where you see a problem. We will fix the module.)

+ +

What is the problem?

+ +

(Paste the text or a screenshot here. Remember to include the task number so that we know which module is affected.)

+ +

What is the solution?

+ +

(Correct text, link, or task.)

+ +

Notes

+ +

(Do we need to fix something else?)

+ + + +
+ + diff --git a/modules/known-issues-2-7/index.html b/modules/known-issues-2-7/index.html new file mode 100644 index 00000000000..08437dc43c7 --- /dev/null +++ b/modules/known-issues-2-7/index.html @@ -0,0 +1,87 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Known issues

+
+

Forklift 2.7 has the following known issues:

+
+
+
Select Migration Network from the endpoint type ESXi displays multiple incorrect networks
+

When you choose Select Migration Network for a provider with the ESXi endpoint type, multiple incorrect networks are displayed. (MTV-1291)

+
+
+


+
+
+
Network and Storage maps in the UI are not correct when created from the command line
+

When network and storage maps are created from the command line, the correct names are not shown in the UI. (MTV-1421)

+
+
+
Migration fails with module network-legacy configured in RHEL guests
+

Migration fails if the module configuration file is available in the guest and the dhcp-client package is not installed, returning a "dracut module 'network-legacy' will not be installed, because command 'dhclient' could not be found" error. (MTV-1615)

+
+ + +
+ + diff --git a/modules/making-open-source-more-inclusive/index.html b/modules/making-open-source-more-inclusive/index.html new file mode 100644 index 00000000000..131c7bf5bc9 --- /dev/null +++ b/modules/making-open-source-more-inclusive/index.html @@ -0,0 +1,69 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Making open source more inclusive

+
+

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

+
+ + +
+ + diff --git a/modules/migration-plan-options-ui/index.html b/modules/migration-plan-options-ui/index.html new file mode 100644 index 00000000000..682a9c01d9c --- /dev/null +++ b/modules/migration-plan-options-ui/index.html @@ -0,0 +1,141 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Migration plan options

+
+

On the Plans for virtualization page of the OKD web console, you can click the {kebab} beside a migration plan to access the following options:

+
+
+
    +
  • +

    Get logs: Retrieves the logs of a migration. When you click Get logs, a confirmation window opens. After you click Get logs in the window, wait until Get logs changes to Download logs and then click the button to download the logs.

    +
  • +
  • +

    Edit: Edit the details of a migration plan. You cannot edit a migration plan while it is running or after it has completed successfully.

    +
  • +
  • +

    Duplicate: Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks:

    +
    +
      +
    • +

      Migrate VMs to a different namespace.

      +
    • +
    • +

      Edit an archived migration plan.

      +
    • +
    • +

      Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready.

      +
    • +
    +
    +
  • +
  • +

    Archive: Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    The Archive option is irreversible. However, you can duplicate an archived plan.

    +
    +
    +
    +
  • +
  • +

    Delete: Permanently remove a migration plan. You cannot delete a running migration plan.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    The Delete option is irreversible.

    +
    +
    +

    Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. (BZ#2018974) You must archive a migration plan before deleting it in order to clean up the temporary resources.

    +
    +
    +
    +
  • +
  • +

    View details: Display the details of a migration plan.

    +
  • +
  • +

    Restart: Restart a failed or canceled migration plan.

    +
  • +
  • +

    Cancel scheduled cutover: Cancel a scheduled cutover migration for a warm migration plan.

    +
  • +
+
+ + +
+ + diff --git a/modules/mtv-changelog-2-7/index.html b/modules/mtv-changelog-2-7/index.html new file mode 100644 index 00000000000..2b873803fd2 --- /dev/null +++ b/modules/mtv-changelog-2-7/index.html @@ -0,0 +1,2330 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift changelog

+
+
+
+

The following changelog for Forklift includes a full list of packages used in the Forklift 2.7 releases.

+
+
+
+
+

Forklift 2.7 packages

+
Table 1. Forklift packages
Forklift 2.7.0 | Forklift 2.7.2 | Forklift 2.7.3

abattis-cantarell-fonts-0.301-4.el9.noarch

abattis-cantarell-fonts-0.301-4.el9.noarch

abattis-cantarell-fonts-0.301-4.el9.noarch

acl-2.3.1-4.el9.x86_64

acl-2.3.1-4.el9.x86_64

acl-2.3.1-4.el9.x86_64

adobe-source-code-pro-fonts-2.030.1.050-12.el9.1.noarch

adobe-source-code-pro-fonts-2.030.1.050-12.el9.1.noarch

adobe-source-code-pro-fonts-2.030.1.050-12.el9.1.noarch

alternatives-1.24-1.el9.x86_64

alternatives-1.24-1.el9.x86_64

alternatives-1.24-1.el9.x86_64

attr-2.5.1-3.el9.x86_64

attr-2.5.1-3.el9.x86_64

attr-2.5.1-3.el9.x86_64

audit-libs-3.1.2-2.el9.x86_64

audit-libs-3.1.2-2.el9.x86_64

audit-libs-3.1.2-2.el9.x86_64

augeas-libs-1.13.0-6.el9_4.x86_64

augeas-libs-1.13.0-6.el9_4.x86_64

augeas-libs-1.13.0-6.el9_4.x86_64

basesystem-11-13.el9.noarch

basesystem-11-13.el9.noarch

basesystem-11-13.el9.noarch

bash-5.1.8-9.el9.x86_64

bash-5.1.8-9.el9.x86_64

bash-5.1.8-9.el9.x86_64

binutils-2.35.2-43.el9.x86_64

binutils-2.35.2-43.el9.x86_64

binutils-2.35.2-43.el9.x86_64

binutils-gold-2.35.2-43.el9.x86_64

binutils-gold-2.35.2-43.el9.x86_64

binutils-gold-2.35.2-43.el9.x86_64

bzip2-1.0.8-8.el9.x86_64

bzip2-1.0.8-8.el9.x86_64

bzip2-1.0.8-8.el9.x86_64

bzip2-libs-1.0.8-8.el9.x86_64

bzip2-libs-1.0.8-8.el9.x86_64

bzip2-libs-1.0.8-8.el9.x86_64

ca-certificates-2024.2.69_v8.0.303-91.4.el9_4.noarch

ca-certificates-2024.2.69_v8.0.303-91.4.el9_4.noarch

ca-certificates-2024.2.69_v8.0.303-91.4.el9_4.noarch

capstone-4.0.2-10.el9.x86_64

capstone-4.0.2-10.el9.x86_64

capstone-4.0.2-10.el9.x86_64

checkpolicy-3.6-1.el9.x86_64

checkpolicy-3.6-1.el9.x86_64

checkpolicy-3.6-1.el9.x86_64

clevis-18-112.el9.x86_64

clevis-18-112.el9.x86_64

clevis-18-112.el9.x86_64

clevis-luks-18-112.el9.x86_64

clevis-luks-18-112.el9.x86_64

clevis-luks-18-112.el9.x86_64

cmake-rpm-macros-3.26.5-2.el9.noarch

cmake-rpm-macros-3.26.5-2.el9.noarch

cmake-rpm-macros-3.26.5-2.el9.noarch

coreutils-single-8.32-35.el9.x86_64

coreutils-single-8.32-35.el9.x86_64

coreutils-single-8.32-35.el9.x86_64

cpio-2.13-16.el9.x86_64

cpio-2.13-16.el9.x86_64

cpio-2.13-16.el9.x86_64

cracklib-2.9.6-27.el9.x86_64

cracklib-2.9.6-27.el9.x86_64

cracklib-2.9.6-27.el9.x86_64

cracklib-dicts-2.9.6-27.el9.x86_64

cracklib-dicts-2.9.6-27.el9.x86_64

cracklib-dicts-2.9.6-27.el9.x86_64

crypto-policies-20240202-1.git283706d.el9.noarch

crypto-policies-20240202-1.git283706d.el9.noarch

crypto-policies-20240202-1.git283706d.el9.noarch

cryptsetup-2.6.0-3.el9.x86_64

cryptsetup-2.6.0-3.el9.x86_64

cryptsetup-2.6.0-3.el9.x86_64

cryptsetup-libs-2.6.0-3.el9.x86_64

cryptsetup-libs-2.6.0-3.el9.x86_64

cryptsetup-libs-2.6.0-3.el9.x86_64

curl-minimal-7.76.1-29.el9_4.1.x86_64

curl-minimal-7.76.1-29.el9_4.1.x86_64

curl-minimal-7.76.1-29.el9_4.1.x86_64

cyrus-sasl-2.1.27-21.el9.x86_64

cyrus-sasl-2.1.27-21.el9.x86_64

cyrus-sasl-2.1.27-21.el9.x86_64

cyrus-sasl-gssapi-2.1.27-21.el9.x86_64

cyrus-sasl-gssapi-2.1.27-21.el9.x86_64

cyrus-sasl-gssapi-2.1.27-21.el9.x86_64

cyrus-sasl-lib-2.1.27-21.el9.x86_64

cyrus-sasl-lib-2.1.27-21.el9.x86_64

cyrus-sasl-lib-2.1.27-21.el9.x86_64

daxctl-libs-71.1-8.el9.x86_64

daxctl-libs-71.1-8.el9.x86_64

daxctl-libs-71.1-8.el9.x86_64

dbus-1.12.20-8.el9.x86_64

dbus-1.12.20-8.el9.x86_64

dbus-1.12.20-8.el9.x86_64

dbus-broker-28-7.el9.x86_64

dbus-broker-28-7.el9.x86_64

dbus-broker-28-7.el9.x86_64

dbus-common-1.12.20-8.el9.noarch

dbus-common-1.12.20-8.el9.noarch

dbus-common-1.12.20-8.el9.noarch

dbus-libs-1.12.20-8.el9.x86_64

dbus-libs-1.12.20-8.el9.x86_64

dbus-libs-1.12.20-8.el9.x86_64

dejavu-sans-fonts-2.37-18.el9.noarch

dejavu-sans-fonts-2.37-18.el9.noarch

dejavu-sans-fonts-2.37-18.el9.noarch

device-mapper-1.02.197-2.el9.x86_64

device-mapper-1.02.197-2.el9.x86_64

device-mapper-1.02.197-2.el9.x86_64

device-mapper-event-1.02.197-2.el9.x86_64

device-mapper-event-1.02.197-2.el9.x86_64

device-mapper-event-1.02.197-2.el9.x86_64

device-mapper-event-libs-1.02.197-2.el9.x86_64

device-mapper-event-libs-1.02.197-2.el9.x86_64

device-mapper-event-libs-1.02.197-2.el9.x86_64

device-mapper-libs-1.02.197-2.el9.x86_64

device-mapper-libs-1.02.197-2.el9.x86_64

device-mapper-libs-1.02.197-2.el9.x86_64

device-mapper-persistent-data-1.0.9-3.el9_4.x86_64

device-mapper-persistent-data-1.0.9-3.el9_4.x86_64

device-mapper-persistent-data-1.0.9-3.el9_4.x86_64

dhcp-client-4.4.2-19.b1.el9.x86_64

dhcp-client-4.4.2-19.b1.el9.x86_64

dhcp-client-4.4.2-19.b1.el9.x86_64

dhcp-common-4.4.2-19.b1.el9.noarch

dhcp-common-4.4.2-19.b1.el9.noarch

dhcp-common-4.4.2-19.b1.el9.noarch

diffutils-3.7-12.el9.x86_64

diffutils-3.7-12.el9.x86_64

diffutils-3.7-12.el9.x86_64

dmidecode-3.5-3.el9.x86_64

dmidecode-3.5-3.el9.x86_64

dmidecode-3.5-3.el9.x86_64

dnf-data-4.14.0-9.el9.noarch

dnf-data-4.14.0-9.el9.noarch

dnf-data-4.14.0-9.el9.noarch

dnsmasq-2.85-16.el9_4.x86_64

dnsmasq-2.85-16.el9_4.x86_64

dnsmasq-2.85-16.el9_4.x86_64

dosfstools-4.2-3.el9.x86_64

dosfstools-4.2-3.el9.x86_64

dosfstools-4.2-3.el9.x86_64

dracut-057-53.git20240104.el9.x86_64

dracut-057-53.git20240104.el9.x86_64

dracut-057-53.git20240104.el9.x86_64

dwz-0.14-3.el9.x86_64

dwz-0.14-3.el9.x86_64

dwz-0.14-3.el9.x86_64

e2fsprogs-1.46.5-5.el9.x86_64

e2fsprogs-1.46.5-5.el9.x86_64

e2fsprogs-1.46.5-5.el9.x86_64

e2fsprogs-libs-1.46.5-5.el9.x86_64

e2fsprogs-libs-1.46.5-5.el9.x86_64

e2fsprogs-libs-1.46.5-5.el9.x86_64

edk2-ovmf-20231122-6.el9_4.3.noarch

edk2-ovmf-20231122-6.el9_4.3.noarch

edk2-ovmf-20231122-6.el9_4.3.noarch

efi-srpm-macros-6-2.el9_0.noarch

efi-srpm-macros-6-2.el9_0.noarch

efi-srpm-macros-6-2.el9_0.noarch

elfutils-debuginfod-client-0.190-2.el9.x86_64

elfutils-debuginfod-client-0.190-2.el9.x86_64

elfutils-debuginfod-client-0.190-2.el9.x86_64

elfutils-default-yama-scope-0.190-2.el9.noarch

elfutils-default-yama-scope-0.190-2.el9.noarch

elfutils-default-yama-scope-0.190-2.el9.noarch

elfutils-libelf-0.190-2.el9.x86_64

elfutils-libelf-0.190-2.el9.x86_64

elfutils-libelf-0.190-2.el9.x86_64

elfutils-libs-0.190-2.el9.x86_64

elfutils-libs-0.190-2.el9.x86_64

elfutils-libs-0.190-2.el9.x86_64

expat-2.5.0-2.el9_4.1.x86_64

expat-2.5.0-2.el9_4.1.x86_64

expat-2.5.0-2.el9_4.1.x86_64

file-5.39-16.el9.x86_64

file-5.39-16.el9.x86_64

file-5.39-16.el9.x86_64

file-libs-5.39-16.el9.x86_64

file-libs-5.39-16.el9.x86_64

file-libs-5.39-16.el9.x86_64

filesystem-3.16-2.el9.x86_64

filesystem-3.16-2.el9.x86_64

filesystem-3.16-2.el9.x86_64

findutils-4.8.0-6.el9.x86_64

findutils-4.8.0-6.el9.x86_64

findutils-4.8.0-6.el9.x86_64

fonts-filesystem-2.0.5-7.el9.1.noarch

fonts-filesystem-2.0.5-7.el9.1.noarch

fonts-filesystem-2.0.5-7.el9.1.noarch

fonts-srpm-macros-2.0.5-7.el9.1.noarch

fonts-srpm-macros-2.0.5-7.el9.1.noarch

fonts-srpm-macros-2.0.5-7.el9.1.noarch

fuse-2.9.9-15.el9.x86_64

fuse-2.9.9-15.el9.x86_64

fuse-2.9.9-15.el9.x86_64

fuse-common-3.10.2-8.el9.x86_64

fuse-common-3.10.2-8.el9.x86_64

fuse-common-3.10.2-8.el9.x86_64

fuse-libs-2.9.9-15.el9.x86_64

fuse-libs-2.9.9-15.el9.x86_64

fuse-libs-2.9.9-15.el9.x86_64

gawk-5.1.0-6.el9.x86_64

gawk-5.1.0-6.el9.x86_64

gawk-5.1.0-6.el9.x86_64

gdbm-libs-1.19-4.el9.x86_64

gdbm-libs-1.19-4.el9.x86_64

gdbm-libs-1.19-4.el9.x86_64

gdisk-1.0.7-5.el9.x86_64

gdisk-1.0.7-5.el9.x86_64

gdisk-1.0.7-5.el9.x86_64

geolite2-city-20191217-6.el9.noarch

geolite2-city-20191217-6.el9.noarch

geolite2-city-20191217-6.el9.noarch

geolite2-country-20191217-6.el9.noarch

geolite2-country-20191217-6.el9.noarch

geolite2-country-20191217-6.el9.noarch

gettext-0.21-8.el9.x86_64

gettext-0.21-8.el9.x86_64

gettext-0.21-8.el9.x86_64

gettext-libs-0.21-8.el9.x86_64

gettext-libs-0.21-8.el9.x86_64

gettext-libs-0.21-8.el9.x86_64

ghc-srpm-macros-1.5.0-6.el9.noarch

ghc-srpm-macros-1.5.0-6.el9.noarch

ghc-srpm-macros-1.5.0-6.el9.noarch

glib-networking-2.68.3-3.el9.x86_64

glib-networking-2.68.3-3.el9.x86_64

glib-networking-2.68.3-3.el9.x86_64

glib2-2.68.4-14.el9_4.1.x86_64

glib2-2.68.4-14.el9_4.1.x86_64

glib2-2.68.4-14.el9_4.1.x86_64

glibc-2.34-100.el9_4.3.x86_64

glibc-2.34-100.el9_4.4.x86_64

glibc-2.34-100.el9_4.4.x86_64

glibc-common-2.34-100.el9_4.3.x86_64

glibc-common-2.34-100.el9_4.4.x86_64

glibc-common-2.34-100.el9_4.4.x86_64

glibc-gconv-extra-2.34-100.el9_4.3.x86_64

glibc-gconv-extra-2.34-100.el9_4.4.x86_64

glibc-gconv-extra-2.34-100.el9_4.4.x86_64

glibc-langpack-en-2.34-100.el9_4.4.x86_64

glibc-langpack-en-2.34-100.el9_4.4.x86_64

glibc-minimal-langpack-2.34-100.el9_4.3.x86_64

glibc-minimal-langpack-2.34-100.el9_4.4.x86_64

glibc-minimal-langpack-2.34-100.el9_4.4.x86_64

gmp-6.2.0-13.el9.x86_64

gmp-6.2.0-13.el9.x86_64

gmp-6.2.0-13.el9.x86_64

gnupg2-2.3.3-4.el9.x86_64

gnupg2-2.3.3-4.el9.x86_64

gnupg2-2.3.3-4.el9.x86_64

gnutls-3.8.3-4.el9_4.x86_64

gnutls-3.8.3-4.el9_4.x86_64

gnutls-3.8.3-4.el9_4.x86_64

gnutls-dane-3.8.3-4.el9_4.x86_64

gnutls-dane-3.8.3-4.el9_4.x86_64

gnutls-dane-3.8.3-4.el9_4.x86_64

gnutls-utils-3.8.3-4.el9_4.x86_64

gnutls-utils-3.8.3-4.el9_4.x86_64

gnutls-utils-3.8.3-4.el9_4.x86_64

go-srpm-macros-3.2.0-3.el9.noarch

go-srpm-macros-3.2.0-3.el9.noarch

go-srpm-macros-3.2.0-3.el9.noarch

gobject-introspection-1.68.0-11.el9.x86_64

gobject-introspection-1.68.0-11.el9.x86_64

gobject-introspection-1.68.0-11.el9.x86_64

gpg-pubkey-5a6340b3-6229229e

gpg-pubkey-5a6340b3-6229229e

gpg-pubkey-5a6340b3-6229229e

gpg-pubkey-fd431d51-4ae0493b

gpg-pubkey-fd431d51-4ae0493b

gpg-pubkey-fd431d51-4ae0493b

gpgme-1.15.1-6.el9.x86_64

gpgme-1.15.1-6.el9.x86_64

gpgme-1.15.1-6.el9.x86_64

grep-3.6-5.el9.x86_64

grep-3.6-5.el9.x86_64

grep-3.6-5.el9.x86_64

groff-base-1.22.4-10.el9.x86_64

groff-base-1.22.4-10.el9.x86_64

groff-base-1.22.4-10.el9.x86_64

gsettings-desktop-schemas-40.0-6.el9.x86_64

gsettings-desktop-schemas-40.0-6.el9.x86_64

gsettings-desktop-schemas-40.0-6.el9.x86_64

gssproxy-0.8.4-6.el9.x86_64

gssproxy-0.8.4-6.el9.x86_64

gssproxy-0.8.4-6.el9.x86_64

guestfs-tools-1.51.6-3.el9_4.x86_64

guestfs-tools-1.51.6-3.el9_4.x86_64

guestfs-tools-1.51.6-3.el9_4.x86_64

gzip-1.12-1.el9.x86_64

gzip-1.12-1.el9.x86_64

gzip-1.12-1.el9.x86_64

hexedit-1.6-1.el9.x86_64

hexedit-1.6-1.el9.x86_64

hexedit-1.6-1.el9.x86_64

hivex-libs-1.3.21-3.el9.x86_64

hivex-libs-1.3.21-3.el9.x86_64

hivex-libs-1.3.21-3.el9.x86_64

hwdata-0.348-9.13.el9.noarch

hwdata-0.348-9.13.el9.noarch

hwdata-0.348-9.13.el9.noarch

inih-49-6.el9.x86_64

inih-49-6.el9.x86_64

inih-49-6.el9.x86_64

ipcalc-1.0.0-5.el9.x86_64

ipcalc-1.0.0-5.el9.x86_64

ipcalc-1.0.0-5.el9.x86_64

iproute-6.2.0-6.el9_4.x86_64

iproute-6.2.0-6.el9_4.x86_64

iproute-6.2.0-6.el9_4.x86_64

iproute-tc-6.2.0-6.el9_4.x86_64

iproute-tc-6.2.0-6.el9_4.x86_64

iproute-tc-6.2.0-6.el9_4.x86_64

iptables-libs-1.8.10-4.el9_4.x86_64

iptables-libs-1.8.10-4.el9_4.x86_64

iptables-libs-1.8.10-4.el9_4.x86_64

iptables-nft-1.8.10-4.el9_4.x86_64

iptables-nft-1.8.10-4.el9_4.x86_64

iptables-nft-1.8.10-4.el9_4.x86_64

iputils-20210202-9.el9.x86_64

iputils-20210202-9.el9.x86_64

iputils-20210202-9.el9.x86_64

ipxe-roms-qemu-20200823-9.git4bd064de.el9.noarch

ipxe-roms-qemu-20200823-9.git4bd064de.el9.noarch

ipxe-roms-qemu-20200823-9.git4bd064de.el9.noarch

jansson-2.14-1.el9.x86_64

jansson-2.14-1.el9.x86_64

jansson-2.14-1.el9.x86_64

jose-11-3.el9.x86_64

jose-11-3.el9.x86_64

jose-11-3.el9.x86_64

jq-1.6-16.el9.x86_64

jq-1.6-16.el9.x86_64

jq-1.6-16.el9.x86_64

json-c-0.14-11.el9.x86_64

json-c-0.14-11.el9.x86_64

json-c-0.14-11.el9.x86_64

json-glib-1.6.6-1.el9.x86_64

json-glib-1.6.6-1.el9.x86_64

json-glib-1.6.6-1.el9.x86_64

kbd-2.4.0-9.el9.x86_64

kbd-2.4.0-9.el9.x86_64

kbd-2.4.0-9.el9.x86_64

kbd-legacy-2.4.0-9.el9.noarch

kbd-legacy-2.4.0-9.el9.noarch

kbd-legacy-2.4.0-9.el9.noarch

kbd-misc-2.4.0-9.el9.noarch

kbd-misc-2.4.0-9.el9.noarch

kbd-misc-2.4.0-9.el9.noarch

kernel-core-5.14.0-427.35.1.el9_4.x86_64

kernel-core-5.14.0-427.37.1.el9_4.x86_64

kernel-core-5.14.0-427.40.1.el9_4.x86_64

kernel-modules-core-5.14.0-427.35.1.el9_4.x86_64

kernel-modules-core-5.14.0-427.37.1.el9_4.x86_64

kernel-modules-core-5.14.0-427.40.1.el9_4.x86_64

kernel-srpm-macros-1.0-13.el9.noarch

kernel-srpm-macros-1.0-13.el9.noarch

kernel-srpm-macros-1.0-13.el9.noarch

keyutils-1.6.3-1.el9.x86_64

keyutils-1.6.3-1.el9.x86_64

keyutils-1.6.3-1.el9.x86_64

keyutils-libs-1.6.3-1.el9.x86_64

keyutils-libs-1.6.3-1.el9.x86_64

keyutils-libs-1.6.3-1.el9.x86_64

kmod-28-9.el9.x86_64

kmod-28-9.el9.x86_64

kmod-28-9.el9.x86_64

kmod-libs-28-9.el9.x86_64

kmod-libs-28-9.el9.x86_64

kmod-libs-28-9.el9.x86_64

kpartx-0.8.7-27.el9.x86_64

kpartx-0.8.7-27.el9.x86_64

kpartx-0.8.7-27.el9.x86_64

krb5-libs-1.21.1-2.el9_4.x86_64

krb5-libs-1.21.1-2.el9_4.x86_64

krb5-libs-1.21.1-2.el9_4.x86_64

langpacks-core-en-3.0-16.el9.noarch

langpacks-core-en-3.0-16.el9.noarch

langpacks-core-en-3.0-16.el9.noarch

langpacks-core-font-en-3.0-16.el9.noarch

langpacks-core-font-en-3.0-16.el9.noarch

langpacks-core-font-en-3.0-16.el9.noarch

langpacks-en-3.0-16.el9.noarch

langpacks-en-3.0-16.el9.noarch

langpacks-en-3.0-16.el9.noarch

less-590-4.el9_4.x86_64

less-590-4.el9_4.x86_64

less-590-4.el9_4.x86_64

libacl-2.3.1-4.el9.x86_64

libacl-2.3.1-4.el9.x86_64

libacl-2.3.1-4.el9.x86_64

libaio-0.3.111-13.el9.x86_64

libaio-0.3.111-13.el9.x86_64

libaio-0.3.111-13.el9.x86_64

libarchive-3.5.3-4.el9.x86_64

libarchive-3.5.3-4.el9.x86_64

libarchive-3.5.3-4.el9.x86_64

libassuan-2.5.5-3.el9.x86_64

libassuan-2.5.5-3.el9.x86_64

libassuan-2.5.5-3.el9.x86_64

libatomic-11.4.1-3.el9.x86_64

libatomic-11.4.1-3.el9.x86_64

libatomic-11.4.1-3.el9.x86_64

libattr-2.5.1-3.el9.x86_64

libattr-2.5.1-3.el9.x86_64

libattr-2.5.1-3.el9.x86_64

libbasicobjects-0.1.1-53.el9.x86_64

libbasicobjects-0.1.1-53.el9.x86_64

libbasicobjects-0.1.1-53.el9.x86_64

libblkid-2.37.4-18.el9.x86_64

libblkid-2.37.4-18.el9.x86_64

libblkid-2.37.4-18.el9.x86_64

libbpf-1.3.0-2.el9.x86_64

libbpf-1.3.0-2.el9.x86_64

libbpf-1.3.0-2.el9.x86_64

libbrotli-1.0.9-6.el9.x86_64

libbrotli-1.0.9-6.el9.x86_64

libbrotli-1.0.9-6.el9.x86_64

libcap-2.48-9.el9_2.x86_64

libcap-2.48-9.el9_2.x86_64

libcap-2.48-9.el9_2.x86_64

libcap-ng-0.8.2-7.el9.x86_64

libcap-ng-0.8.2-7.el9.x86_64

libcap-ng-0.8.2-7.el9.x86_64

libcbor-0.7.0-5.el9.x86_64

libcbor-0.7.0-5.el9.x86_64

libcbor-0.7.0-5.el9.x86_64

libcollection-0.7.0-53.el9.x86_64

libcollection-0.7.0-53.el9.x86_64

libcollection-0.7.0-53.el9.x86_64

libcom_err-1.46.5-5.el9.x86_64

libcom_err-1.46.5-5.el9.x86_64

libcom_err-1.46.5-5.el9.x86_64

libconfig-1.7.2-9.el9.x86_64

libconfig-1.7.2-9.el9.x86_64

libconfig-1.7.2-9.el9.x86_64

libcurl-minimal-7.76.1-29.el9_4.1.x86_64

libcurl-minimal-7.76.1-29.el9_4.1.x86_64

libcurl-minimal-7.76.1-29.el9_4.1.x86_64

libdb-5.3.28-53.el9.x86_64

libdb-5.3.28-53.el9.x86_64

libdb-5.3.28-53.el9.x86_64

libdnf-0.69.0-8.el9_4.1.x86_64

libdnf-0.69.0-8.el9_4.1.x86_64

libdnf-0.69.0-8.el9_4.1.x86_64

libeconf-0.4.1-3.el9_2.x86_64

libeconf-0.4.1-3.el9_2.x86_64

libeconf-0.4.1-3.el9_2.x86_64

libedit-3.1-38.20210216cvs.el9.x86_64

libedit-3.1-38.20210216cvs.el9.x86_64

libedit-3.1-38.20210216cvs.el9.x86_64

libev-4.33-5.el9.x86_64

libev-4.33-5.el9.x86_64

libev-4.33-5.el9.x86_64

libevent-2.1.12-8.el9_4.x86_64

libevent-2.1.12-8.el9_4.x86_64

libevent-2.1.12-8.el9_4.x86_64

libfdisk-2.37.4-18.el9.x86_64

libfdisk-2.37.4-18.el9.x86_64

libfdisk-2.37.4-18.el9.x86_64

libfdt-1.6.0-7.el9.x86_64

libfdt-1.6.0-7.el9.x86_64

libfdt-1.6.0-7.el9.x86_64

libffi-3.4.2-8.el9.x86_64

libffi-3.4.2-8.el9.x86_64

libffi-3.4.2-8.el9.x86_64

libfido2-1.13.0-2.el9.x86_64

libfido2-1.13.0-2.el9.x86_64

libfido2-1.13.0-2.el9.x86_64

libgcc-11.4.1-3.el9.x86_64

libgcc-11.4.1-3.el9.x86_64

libgcc-11.4.1-3.el9.x86_64

libgcrypt-1.10.0-10.el9_2.x86_64

libgcrypt-1.10.0-10.el9_2.x86_64

libgcrypt-1.10.0-10.el9_2.x86_64

libgomp-11.4.1-3.el9.x86_64

libgomp-11.4.1-3.el9.x86_64

libgomp-11.4.1-3.el9.x86_64

libgpg-error-1.42-5.el9.x86_64

libgpg-error-1.42-5.el9.x86_64

libgpg-error-1.42-5.el9.x86_64

libguestfs-1.50.1-8.el9_4.x86_64

libguestfs-1.50.1-8.el9_4.x86_64

libguestfs-1.50.1-8.el9_4.x86_64

libguestfs-appliance-1.50.1-8.el9_4.x86_64

libguestfs-appliance-1.50.1-8.el9_4.x86_64

libguestfs-appliance-1.50.1-8.el9_4.x86_64

libguestfs-winsupport-9.3-1.el9_3.x86_64

libguestfs-winsupport-9.3-1.el9_3.x86_64

libguestfs-winsupport-9.3-1.el9_3.x86_64

libguestfs-xfs-1.50.1-8.el9_4.x86_64

libguestfs-xfs-1.50.1-8.el9_4.x86_64

libguestfs-xfs-1.50.1-8.el9_4.x86_64

libibverbs-48.0-1.el9.x86_64

libibverbs-48.0-1.el9.x86_64

libibverbs-48.0-1.el9.x86_64

libicu-67.1-9.el9.x86_64

libicu-67.1-9.el9.x86_64

libicu-67.1-9.el9.x86_64

libidn2-2.3.0-7.el9.x86_64

libidn2-2.3.0-7.el9.x86_64

libidn2-2.3.0-7.el9.x86_64

libini_config-1.3.1-53.el9.x86_64

libini_config-1.3.1-53.el9.x86_64

libini_config-1.3.1-53.el9.x86_64

libjose-11-3.el9.x86_64

libjose-11-3.el9.x86_64

libjose-11-3.el9.x86_64

libkcapi-1.4.0-2.el9.x86_64

libkcapi-1.4.0-2.el9.x86_64

libkcapi-1.4.0-2.el9.x86_64

libkcapi-hmaccalc-1.4.0-2.el9.x86_64

libkcapi-hmaccalc-1.4.0-2.el9.x86_64

libkcapi-hmaccalc-1.4.0-2.el9.x86_64

libksba-1.5.1-6.el9_1.x86_64

libksba-1.5.1-6.el9_1.x86_64

libksba-1.5.1-6.el9_1.x86_64

libluksmeta-9-12.el9.x86_64

libluksmeta-9-12.el9.x86_64

libluksmeta-9-12.el9.x86_64

libmaxminddb-1.5.2-3.el9.x86_64

libmaxminddb-1.5.2-3.el9.x86_64

libmaxminddb-1.5.2-3.el9.x86_64

libmnl-1.0.4-16.el9_4.x86_64

libmnl-1.0.4-16.el9_4.x86_64

libmnl-1.0.4-16.el9_4.x86_64

libmodulemd-2.13.0-2.el9.x86_64

libmodulemd-2.13.0-2.el9.x86_64

libmodulemd-2.13.0-2.el9.x86_64

libmount-2.37.4-18.el9.x86_64

libmount-2.37.4-18.el9.x86_64

libmount-2.37.4-18.el9.x86_64

libnbd-1.18.1-4.el9_4.x86_64

libnbd-1.18.1-4.el9_4.x86_64

libnbd-1.18.1-4.el9_4.x86_64

libnetfilter_conntrack-1.0.9-1.el9.x86_64

libnetfilter_conntrack-1.0.9-1.el9.x86_64

libnetfilter_conntrack-1.0.9-1.el9.x86_64

libnfnetlink-1.0.1-21.el9.x86_64

libnfnetlink-1.0.1-21.el9.x86_64

libnfnetlink-1.0.1-21.el9.x86_64

libnfsidmap-2.5.4-26.el9_4.x86_64

libnfsidmap-2.5.4-26.el9_4.x86_64

libnfsidmap-2.5.4-26.el9_4.x86_64

libnftnl-1.2.6-4.el9_4.x86_64

libnftnl-1.2.6-4.el9_4.x86_64

libnftnl-1.2.6-4.el9_4.x86_64

libnghttp2-1.43.0-5.el9_4.3.x86_64

libnghttp2-1.43.0-5.el9_4.3.x86_64

libnghttp2-1.43.0-5.el9_4.3.x86_64

libnl3-3.9.0-1.el9.x86_64

libnl3-3.9.0-1.el9.x86_64

libnl3-3.9.0-1.el9.x86_64

libosinfo-1.10.0-1.el9.x86_64

libosinfo-1.10.0-1.el9.x86_64

libosinfo-1.10.0-1.el9.x86_64

libpath_utils-0.2.1-53.el9.x86_64

libpath_utils-0.2.1-53.el9.x86_64

libpath_utils-0.2.1-53.el9.x86_64

libpeas-1.30.0-4.el9.x86_64

libpeas-1.30.0-4.el9.x86_64

libpeas-1.30.0-4.el9.x86_64

libpipeline-1.5.3-4.el9.x86_64

libpipeline-1.5.3-4.el9.x86_64

libpipeline-1.5.3-4.el9.x86_64

libpkgconf-1.7.3-10.el9.x86_64

libpkgconf-1.7.3-10.el9.x86_64

libpkgconf-1.7.3-10.el9.x86_64

libpmem-1.12.1-1.el9.x86_64

libpmem-1.12.1-1.el9.x86_64

libpmem-1.12.1-1.el9.x86_64

libpng-1.6.37-12.el9.x86_64

libpng-1.6.37-12.el9.x86_64

libpng-1.6.37-12.el9.x86_64

libproxy-0.4.15-35.el9.x86_64

libproxy-0.4.15-35.el9.x86_64

libproxy-0.4.15-35.el9.x86_64

libproxy-webkitgtk4-0.4.15-35.el9.x86_64

libproxy-webkitgtk4-0.4.15-35.el9.x86_64

libproxy-webkitgtk4-0.4.15-35.el9.x86_64

libpsl-0.21.1-5.el9.x86_64

libpsl-0.21.1-5.el9.x86_64

libpsl-0.21.1-5.el9.x86_64

libpwquality-1.4.4-8.el9.x86_64

libpwquality-1.4.4-8.el9.x86_64

libpwquality-1.4.4-8.el9.x86_64

librdmacm-48.0-1.el9.x86_64

librdmacm-48.0-1.el9.x86_64

librdmacm-48.0-1.el9.x86_64

libref_array-0.1.5-53.el9.x86_64

libref_array-0.1.5-53.el9.x86_64

libref_array-0.1.5-53.el9.x86_64

librepo-1.14.5-2.el9.x86_64

librepo-1.14.5-2.el9.x86_64

librepo-1.14.5-2.el9.x86_64

libreport-filesystem-2.15.2-6.el9.noarch

libreport-filesystem-2.15.2-6.el9.noarch

libreport-filesystem-2.15.2-6.el9.noarch

librhsm-0.0.3-7.el9_3.1.x86_64

librhsm-0.0.3-7.el9_3.1.x86_64

librhsm-0.0.3-7.el9_3.1.x86_64

libseccomp-2.5.2-2.el9.x86_64

libseccomp-2.5.2-2.el9.x86_64

libseccomp-2.5.2-2.el9.x86_64

libselinux-3.6-1.el9.x86_64

libselinux-3.6-1.el9.x86_64

libselinux-3.6-1.el9.x86_64

libselinux-utils-3.6-1.el9.x86_64

libselinux-utils-3.6-1.el9.x86_64

libselinux-utils-3.6-1.el9.x86_64

libsemanage-3.6-1.el9.x86_64

libsemanage-3.6-1.el9.x86_64

libsemanage-3.6-1.el9.x86_64

libsepol-3.6-1.el9.x86_64

libsepol-3.6-1.el9.x86_64

libsepol-3.6-1.el9.x86_64

libsigsegv-2.13-4.el9.x86_64

libsigsegv-2.13-4.el9.x86_64

libsigsegv-2.13-4.el9.x86_64

libslirp-4.4.0-7.el9.x86_64

libslirp-4.4.0-7.el9.x86_64

libslirp-4.4.0-7.el9.x86_64

libsmartcols-2.37.4-18.el9.x86_64

libsmartcols-2.37.4-18.el9.x86_64

libsmartcols-2.37.4-18.el9.x86_64

libsolv-0.7.24-2.el9.x86_64

libsolv-0.7.24-2.el9.x86_64

libsolv-0.7.24-2.el9.x86_64

libsoup-2.72.0-8.el9.x86_64

libsoup-2.72.0-8.el9.x86_64

libsoup-2.72.0-8.el9.x86_64

libss-1.46.5-5.el9.x86_64

libss-1.46.5-5.el9.x86_64

libss-1.46.5-5.el9.x86_64

libssh-0.10.4-13.el9.x86_64

libssh-0.10.4-13.el9.x86_64

libssh-0.10.4-13.el9.x86_64

libssh-config-0.10.4-13.el9.noarch

libssh-config-0.10.4-13.el9.noarch

libssh-config-0.10.4-13.el9.noarch

libstdc++-11.4.1-3.el9.x86_64

libstdc++-11.4.1-3.el9.x86_64

libstdc++-11.4.1-3.el9.x86_64

libtasn1-4.16.0-8.el9_1.x86_64

libtasn1-4.16.0-8.el9_1.x86_64

libtasn1-4.16.0-8.el9_1.x86_64

libtirpc-1.3.3-8.el9_4.x86_64

libtirpc-1.3.3-8.el9_4.x86_64

libtirpc-1.3.3-8.el9_4.x86_64

libtpms-0.9.1-3.20211126git1ff6fe1f43.el9_2.x86_64

libtpms-0.9.1-4.20211126git1ff6fe1f43.el9_2.x86_64

libtpms-0.9.1-4.20211126git1ff6fe1f43.el9_2.x86_64

libunistring-0.9.10-15.el9.x86_64

libunistring-0.9.10-15.el9.x86_64

libunistring-0.9.10-15.el9.x86_64

liburing-2.5-1.el9.x86_64

liburing-2.5-1.el9.x86_64

liburing-2.5-1.el9.x86_64

libusbx-1.0.26-1.el9.x86_64

libusbx-1.0.26-1.el9.x86_64

libusbx-1.0.26-1.el9.x86_64

libutempter-1.2.1-6.el9.x86_64

libutempter-1.2.1-6.el9.x86_64

libutempter-1.2.1-6.el9.x86_64

libuuid-2.37.4-18.el9.x86_64

libuuid-2.37.4-18.el9.x86_64

libuuid-2.37.4-18.el9.x86_64

libverto-0.3.2-3.el9.x86_64

libverto-0.3.2-3.el9.x86_64

libverto-0.3.2-3.el9.x86_64

libverto-libev-0.3.2-3.el9.x86_64

libverto-libev-0.3.2-3.el9.x86_64

libverto-libev-0.3.2-3.el9.x86_64

libvirt-client-10.0.0-6.7.el9_4.x86_64

libvirt-client-10.0.0-6.7.el9_4.x86_64

libvirt-client-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-common-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-common-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-common-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-config-network-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-config-network-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-config-network-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-network-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-network-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-network-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-qemu-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-qemu-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-qemu-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-secret-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-secret-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-secret-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-storage-core-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-storage-core-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-storage-core-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-log-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-log-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-log-10.0.0-6.7.el9_4.x86_64

libvirt-libs-10.0.0-6.7.el9_4.x86_64

libvirt-libs-10.0.0-6.7.el9_4.x86_64

libvirt-libs-10.0.0-6.7.el9_4.x86_64

libxcrypt-4.4.18-3.el9.x86_64

libxcrypt-4.4.18-3.el9.x86_64

libxcrypt-4.4.18-3.el9.x86_64

libxcrypt-compat-4.4.18-3.el9.x86_64

libxcrypt-compat-4.4.18-3.el9.x86_64

libxcrypt-compat-4.4.18-3.el9.x86_64

libxml2-2.9.13-6.el9_4.x86_64

libxml2-2.9.13-6.el9_4.x86_64

libxml2-2.9.13-6.el9_4.x86_64

libxslt-1.1.34-9.el9.x86_64

libxslt-1.1.34-9.el9.x86_64

libxslt-1.1.34-9.el9.x86_64

libyaml-0.2.5-7.el9.x86_64

libyaml-0.2.5-7.el9.x86_64

libyaml-0.2.5-7.el9.x86_64

libzstd-1.5.1-2.el9.x86_64

libzstd-1.5.1-2.el9.x86_64

libzstd-1.5.1-2.el9.x86_64

linux-firmware-20240716-143.2.el9_4.noarch

linux-firmware-20240905-143.3.el9_4.noarch

linux-firmware-20240905-143.3.el9_4.noarch

linux-firmware-whence-20240716-143.2.el9_4.noarch

linux-firmware-whence-20240905-143.3.el9_4.noarch

linux-firmware-whence-20240905-143.3.el9_4.noarch

lsscsi-0.32-6.el9.x86_64

lsscsi-0.32-6.el9.x86_64

lsscsi-0.32-6.el9.x86_64

lua-libs-5.4.4-4.el9.x86_64

lua-libs-5.4.4-4.el9.x86_64

lua-libs-5.4.4-4.el9.x86_64

lua-srpm-macros-1-6.el9.noarch

lua-srpm-macros-1-6.el9.noarch

lua-srpm-macros-1-6.el9.noarch

luksmeta-9-12.el9.x86_64

luksmeta-9-12.el9.x86_64

luksmeta-9-12.el9.x86_64

lvm2-2.03.23-2.el9.x86_64

lvm2-2.03.23-2.el9.x86_64

lvm2-2.03.23-2.el9.x86_64

lvm2-libs-2.03.23-2.el9.x86_64

lvm2-libs-2.03.23-2.el9.x86_64

lvm2-libs-2.03.23-2.el9.x86_64

lz4-libs-1.9.3-5.el9.x86_64

lz4-libs-1.9.3-5.el9.x86_64

lz4-libs-1.9.3-5.el9.x86_64

lzo-2.10-7.el9.x86_64

lzo-2.10-7.el9.x86_64

lzo-2.10-7.el9.x86_64

lzop-1.04-8.el9.x86_64

lzop-1.04-8.el9.x86_64

lzop-1.04-8.el9.x86_64

man-db-2.9.3-7.el9.x86_64

man-db-2.9.3-7.el9.x86_64

man-db-2.9.3-7.el9.x86_64

mdadm-4.2-14.el9_4.x86_64

mdadm-4.2-14.el9_4.x86_64

mdadm-4.2-14.el9_4.x86_64

microdnf-3.9.1-3.el9.x86_64

microdnf-3.9.1-3.el9.x86_64

microdnf-3.9.1-3.el9.x86_64

mingw-binutils-generic-2.41-3.el9.x86_64

mingw-binutils-generic-2.41-3.el9.x86_64

mingw-binutils-generic-2.41-3.el9.x86_64

mingw-filesystem-base-148-3.el9.noarch

mingw-filesystem-base-148-3.el9.noarch

mingw-filesystem-base-148-3.el9.noarch

mingw32-crt-11.0.1-3.el9.noarch

mingw32-crt-11.0.1-3.el9.noarch

mingw32-crt-11.0.1-3.el9.noarch

mingw32-filesystem-148-3.el9.noarch

mingw32-filesystem-148-3.el9.noarch

mingw32-filesystem-148-3.el9.noarch

mingw32-srvany-1.1-3.el9.noarch

mingw32-srvany-1.1-3.el9.noarch

mingw32-srvany-1.1-3.el9.noarch

mpfr-4.1.0-7.el9.x86_64

mpfr-4.1.0-7.el9.x86_64

mpfr-4.1.0-7.el9.x86_64

mtools-4.0.26-4.el9_0.x86_64

mtools-4.0.26-4.el9_0.x86_64

mtools-4.0.26-4.el9_0.x86_64

nbdkit-1.36.2-1.el9.x86_64

nbdkit-1.36.2-1.el9.x86_64

nbdkit-1.36.2-1.el9.x86_64

nbdkit-basic-filters-1.36.2-1.el9.x86_64

nbdkit-basic-filters-1.36.2-1.el9.x86_64

nbdkit-basic-filters-1.36.2-1.el9.x86_64

nbdkit-basic-plugins-1.36.2-1.el9.x86_64

nbdkit-basic-plugins-1.36.2-1.el9.x86_64

nbdkit-basic-plugins-1.36.2-1.el9.x86_64

nbdkit-curl-plugin-1.36.2-1.el9.x86_64

nbdkit-curl-plugin-1.36.2-1.el9.x86_64

nbdkit-curl-plugin-1.36.2-1.el9.x86_64

nbdkit-nbd-plugin-1.36.2-1.el9.x86_64

nbdkit-nbd-plugin-1.36.2-1.el9.x86_64

nbdkit-nbd-plugin-1.36.2-1.el9.x86_64

nbdkit-python-plugin-1.36.2-1.el9.x86_64

nbdkit-python-plugin-1.36.2-1.el9.x86_64

nbdkit-python-plugin-1.36.2-1.el9.x86_64

nbdkit-server-1.36.2-1.el9.x86_64

nbdkit-server-1.36.2-1.el9.x86_64

nbdkit-server-1.36.2-1.el9.x86_64

nbdkit-ssh-plugin-1.36.2-1.el9.x86_64

nbdkit-ssh-plugin-1.36.2-1.el9.x86_64

nbdkit-ssh-plugin-1.36.2-1.el9.x86_64

nbdkit-vddk-plugin-1.36.2-1.el9.x86_64

nbdkit-vddk-plugin-1.36.2-1.el9.x86_64

nbdkit-vddk-plugin-1.36.2-1.el9.x86_64

ncurses-6.2-10.20210508.el9.x86_64

ncurses-6.2-10.20210508.el9.x86_64

ncurses-6.2-10.20210508.el9.x86_64

ncurses-base-6.2-10.20210508.el9.noarch

ncurses-base-6.2-10.20210508.el9.noarch

ncurses-base-6.2-10.20210508.el9.noarch

ncurses-libs-6.2-10.20210508.el9.x86_64

ncurses-libs-6.2-10.20210508.el9.x86_64

ncurses-libs-6.2-10.20210508.el9.x86_64

ndctl-libs-71.1-8.el9.x86_64

ndctl-libs-71.1-8.el9.x86_64

ndctl-libs-71.1-8.el9.x86_64

nettle-3.9.1-1.el9.x86_64

nettle-3.9.1-1.el9.x86_64

nettle-3.9.1-1.el9.x86_64

nfs-utils-2.5.4-26.el9_4.x86_64

nfs-utils-2.5.4-26.el9_4.x86_64

nfs-utils-2.5.4-26.el9_4.x86_64

npth-1.6-8.el9.x86_64

npth-1.6-8.el9.x86_64

npth-1.6-8.el9.x86_64

numactl-libs-2.0.16-3.el9.x86_64

numactl-libs-2.0.16-3.el9.x86_64

numactl-libs-2.0.16-3.el9.x86_64

numad-0.5-37.20150602git.el9.x86_64

numad-0.5-37.20150602git.el9.x86_64

numad-0.5-37.20150602git.el9.x86_64

ocaml-srpm-macros-6-6.el9.noarch

ocaml-srpm-macros-6-6.el9.noarch

ocaml-srpm-macros-6-6.el9.noarch

oniguruma-6.9.6-1.el9.5.x86_64

oniguruma-6.9.6-1.el9.5.x86_64

oniguruma-6.9.6-1.el9.5.x86_64

openblas-srpm-macros-2-11.el9.noarch

openblas-srpm-macros-2-11.el9.noarch

openblas-srpm-macros-2-11.el9.noarch

openldap-2.6.6-3.el9.x86_64

openldap-2.6.6-3.el9.x86_64

openldap-2.6.6-3.el9.x86_64

openssh-8.7p1-38.el9_4.4.x86_64

openssh-8.7p1-38.el9_4.4.x86_64

openssh-8.7p1-38.el9_4.4.x86_64

openssh-clients-8.7p1-38.el9_4.4.x86_64

openssh-clients-8.7p1-38.el9_4.4.x86_64

openssh-clients-8.7p1-38.el9_4.4.x86_64

openssl-3.0.7-28.el9_4.x86_64

openssl-3.0.7-28.el9_4.x86_64

openssl-3.0.7-28.el9_4.x86_64

openssl-fips-provider-3.0.7-2.el9.x86_64

openssl-fips-provider-3.0.7-2.el9.x86_64

openssl-fips-provider-3.0.7-2.el9.x86_64

openssl-libs-3.0.7-28.el9_4.x86_64

openssl-libs-3.0.7-28.el9_4.x86_64

openssl-libs-3.0.7-28.el9_4.x86_64

osinfo-db-20231215-1.el9.noarch

osinfo-db-20231215-1.el9.noarch

osinfo-db-20231215-1.el9.noarch

osinfo-db-tools-1.10.0-1.el9.x86_64

osinfo-db-tools-1.10.0-1.el9.x86_64

osinfo-db-tools-1.10.0-1.el9.x86_64

p11-kit-0.25.3-2.el9.x86_64

p11-kit-0.25.3-2.el9.x86_64

p11-kit-0.25.3-2.el9.x86_64

p11-kit-trust-0.25.3-2.el9.x86_64

p11-kit-trust-0.25.3-2.el9.x86_64

p11-kit-trust-0.25.3-2.el9.x86_64

pam-1.5.1-19.el9.x86_64

pam-1.5.1-19.el9.x86_64

pam-1.5.1-19.el9.x86_64

parted-3.5-2.el9.x86_64

parted-3.5-2.el9.x86_64

parted-3.5-2.el9.x86_64

passt-0^20231204.gb86afe3-1.el9.x86_64

passt-0^20231204.gb86afe3-1.el9.x86_64

passt-0^20231204.gb86afe3-1.el9.x86_64

passt-selinux-0^20231204.gb86afe3-1.el9.noarch

passt-selinux-0^20231204.gb86afe3-1.el9.noarch

passt-selinux-0^20231204.gb86afe3-1.el9.noarch

pcre-8.44-3.el9.3.x86_64

pcre-8.44-3.el9.3.x86_64

pcre-8.44-3.el9.3.x86_64

pcre2-10.40-5.el9.x86_64

pcre2-10.40-5.el9.x86_64

pcre2-10.40-5.el9.x86_64

pcre2-syntax-10.40-5.el9.noarch

pcre2-syntax-10.40-5.el9.noarch

pcre2-syntax-10.40-5.el9.noarch

perl-AutoLoader-5.74-481.el9.noarch

perl-AutoLoader-5.74-481.el9.noarch

perl-AutoLoader-5.74-481.el9.noarch

perl-B-1.80-481.el9.x86_64

perl-B-1.80-481.el9.x86_64

perl-B-1.80-481.el9.x86_64

perl-base-2.27-481.el9.noarch

perl-base-2.27-481.el9.noarch

perl-base-2.27-481.el9.noarch

perl-Carp-1.50-460.el9.noarch

perl-Carp-1.50-460.el9.noarch

perl-Carp-1.50-460.el9.noarch

perl-Class-Struct-0.66-481.el9.noarch

perl-Class-Struct-0.66-481.el9.noarch

perl-Class-Struct-0.66-481.el9.noarch

perl-constant-1.33-461.el9.noarch

perl-constant-1.33-461.el9.noarch

perl-constant-1.33-461.el9.noarch

perl-Data-Dumper-2.174-462.el9.x86_64

perl-Data-Dumper-2.174-462.el9.x86_64

perl-Data-Dumper-2.174-462.el9.x86_64

perl-Digest-1.19-4.el9.noarch

perl-Digest-1.19-4.el9.noarch

perl-Digest-1.19-4.el9.noarch

perl-Digest-MD5-2.58-4.el9.x86_64

perl-Digest-MD5-2.58-4.el9.x86_64

perl-Digest-MD5-2.58-4.el9.x86_64

perl-Encode-3.08-462.el9.x86_64

perl-Encode-3.08-462.el9.x86_64

perl-Encode-3.08-462.el9.x86_64

perl-Errno-1.30-481.el9.x86_64

perl-Errno-1.30-481.el9.x86_64

perl-Errno-1.30-481.el9.x86_64

perl-Exporter-5.74-461.el9.noarch

perl-Exporter-5.74-461.el9.noarch

perl-Exporter-5.74-461.el9.noarch

perl-Fcntl-1.13-481.el9.x86_64

perl-Fcntl-1.13-481.el9.x86_64

perl-Fcntl-1.13-481.el9.x86_64

perl-File-Basename-2.85-481.el9.noarch

perl-File-Basename-2.85-481.el9.noarch

perl-File-Basename-2.85-481.el9.noarch

perl-File-Path-2.18-4.el9.noarch

perl-File-Path-2.18-4.el9.noarch

perl-File-Path-2.18-4.el9.noarch

perl-File-stat-1.09-481.el9.noarch

perl-File-stat-1.09-481.el9.noarch

perl-File-stat-1.09-481.el9.noarch

perl-File-Temp-0.231.100-4.el9.noarch

perl-File-Temp-0.231.100-4.el9.noarch

perl-File-Temp-0.231.100-4.el9.noarch

perl-FileHandle-2.03-481.el9.noarch

perl-FileHandle-2.03-481.el9.noarch

perl-FileHandle-2.03-481.el9.noarch

perl-Getopt-Long-2.52-4.el9.noarch

perl-Getopt-Long-2.52-4.el9.noarch

perl-Getopt-Long-2.52-4.el9.noarch

perl-Getopt-Std-1.12-481.el9.noarch

perl-Getopt-Std-1.12-481.el9.noarch

perl-Getopt-Std-1.12-481.el9.noarch

perl-HTTP-Tiny-0.076-462.el9.noarch

perl-HTTP-Tiny-0.076-462.el9.noarch

perl-HTTP-Tiny-0.076-462.el9.noarch

perl-if-0.60.800-481.el9.noarch

perl-if-0.60.800-481.el9.noarch

perl-if-0.60.800-481.el9.noarch

perl-interpreter-5.32.1-481.el9.x86_64

perl-interpreter-5.32.1-481.el9.x86_64

perl-interpreter-5.32.1-481.el9.x86_64

perl-IO-1.43-481.el9.x86_64

perl-IO-1.43-481.el9.x86_64

perl-IO-1.43-481.el9.x86_64

perl-IO-Socket-IP-0.41-5.el9.noarch

perl-IO-Socket-IP-0.41-5.el9.noarch

perl-IO-Socket-IP-0.41-5.el9.noarch

perl-IO-Socket-SSL-2.073-1.el9.noarch

perl-IO-Socket-SSL-2.073-1.el9.noarch

perl-IO-Socket-SSL-2.073-1.el9.noarch

perl-IPC-Open3-1.21-481.el9.noarch

perl-IPC-Open3-1.21-481.el9.noarch

perl-IPC-Open3-1.21-481.el9.noarch

perl-libnet-3.13-4.el9.noarch

perl-libnet-3.13-4.el9.noarch

perl-libnet-3.13-4.el9.noarch

perl-libs-5.32.1-481.el9.x86_64

perl-libs-5.32.1-481.el9.x86_64

perl-libs-5.32.1-481.el9.x86_64

perl-MIME-Base64-3.16-4.el9.x86_64

perl-MIME-Base64-3.16-4.el9.x86_64

perl-MIME-Base64-3.16-4.el9.x86_64

perl-Mozilla-CA-20200520-6.el9.noarch

perl-Mozilla-CA-20200520-6.el9.noarch

perl-Mozilla-CA-20200520-6.el9.noarch

perl-mro-1.23-481.el9.x86_64

perl-mro-1.23-481.el9.x86_64

perl-mro-1.23-481.el9.x86_64

perl-NDBM_File-1.15-481.el9.x86_64

perl-NDBM_File-1.15-481.el9.x86_64

perl-NDBM_File-1.15-481.el9.x86_64

perl-Net-SSLeay-1.92-2.el9.x86_64

perl-Net-SSLeay-1.92-2.el9.x86_64

perl-Net-SSLeay-1.92-2.el9.x86_64

perl-overload-1.31-481.el9.noarch

perl-overload-1.31-481.el9.noarch

perl-overload-1.31-481.el9.noarch

perl-overloading-0.02-481.el9.noarch

perl-overloading-0.02-481.el9.noarch

perl-overloading-0.02-481.el9.noarch

perl-parent-0.238-460.el9.noarch

perl-parent-0.238-460.el9.noarch

perl-parent-0.238-460.el9.noarch

perl-PathTools-3.78-461.el9.x86_64

perl-PathTools-3.78-461.el9.x86_64

perl-PathTools-3.78-461.el9.x86_64

perl-Pod-Escapes-1.07-460.el9.noarch

perl-Pod-Escapes-1.07-460.el9.noarch

perl-Pod-Escapes-1.07-460.el9.noarch

perl-Pod-Perldoc-3.28.01-461.el9.noarch

perl-Pod-Perldoc-3.28.01-461.el9.noarch

perl-Pod-Perldoc-3.28.01-461.el9.noarch

perl-Pod-Simple-3.42-4.el9.noarch

perl-Pod-Simple-3.42-4.el9.noarch

perl-Pod-Simple-3.42-4.el9.noarch

perl-Pod-Usage-2.01-4.el9.noarch

perl-Pod-Usage-2.01-4.el9.noarch

perl-Pod-Usage-2.01-4.el9.noarch

perl-podlators-4.14-460.el9.noarch

perl-podlators-4.14-460.el9.noarch

perl-podlators-4.14-460.el9.noarch

perl-POSIX-1.94-481.el9.x86_64

perl-POSIX-1.94-481.el9.x86_64

perl-POSIX-1.94-481.el9.x86_64

perl-Scalar-List-Utils-1.56-461.el9.x86_64

perl-Scalar-List-Utils-1.56-461.el9.x86_64

perl-Scalar-List-Utils-1.56-461.el9.x86_64

perl-SelectSaver-1.02-481.el9.noarch

perl-SelectSaver-1.02-481.el9.noarch

perl-SelectSaver-1.02-481.el9.noarch

perl-Socket-2.031-4.el9.x86_64

perl-Socket-2.031-4.el9.x86_64

perl-Socket-2.031-4.el9.x86_64

perl-srpm-macros-1-41.el9.noarch

perl-srpm-macros-1-41.el9.noarch

perl-srpm-macros-1-41.el9.noarch

perl-Storable-3.21-460.el9.x86_64

perl-Storable-3.21-460.el9.x86_64

perl-Storable-3.21-460.el9.x86_64

perl-subs-1.03-481.el9.noarch

perl-subs-1.03-481.el9.noarch

perl-subs-1.03-481.el9.noarch

perl-Symbol-1.08-481.el9.noarch

perl-Symbol-1.08-481.el9.noarch

perl-Symbol-1.08-481.el9.noarch

perl-Term-ANSIColor-5.01-461.el9.noarch

perl-Term-ANSIColor-5.01-461.el9.noarch

perl-Term-ANSIColor-5.01-461.el9.noarch

perl-Term-Cap-1.17-460.el9.noarch

perl-Term-Cap-1.17-460.el9.noarch

perl-Term-Cap-1.17-460.el9.noarch

perl-Text-ParseWords-3.30-460.el9.noarch

perl-Text-ParseWords-3.30-460.el9.noarch

perl-Text-ParseWords-3.30-460.el9.noarch

perl-Text-Tabs+Wrap-2013.0523-460.el9.noarch

perl-Text-Tabs+Wrap-2013.0523-460.el9.noarch

perl-Text-Tabs+Wrap-2013.0523-460.el9.noarch

perl-Time-Local-1.300-7.el9.noarch

perl-Time-Local-1.300-7.el9.noarch

perl-Time-Local-1.300-7.el9.noarch

perl-URI-5.09-3.el9.noarch

perl-URI-5.09-3.el9.noarch

perl-URI-5.09-3.el9.noarch

perl-vars-1.05-481.el9.noarch

perl-vars-1.05-481.el9.noarch

perl-vars-1.05-481.el9.noarch

pigz-2.5-4.el9.x86_64

pigz-2.5-4.el9.x86_64

pigz-2.5-4.el9.x86_64

pixman-0.40.0-6.el9.x86_64

pixman-0.40.0-6.el9.x86_64

pixman-0.40.0-6.el9.x86_64

pkgconf-1.7.3-10.el9.x86_64

pkgconf-1.7.3-10.el9.x86_64

pkgconf-1.7.3-10.el9.x86_64

policycoreutils-3.6-2.1.el9.x86_64

policycoreutils-3.6-2.1.el9.x86_64

policycoreutils-3.6-2.1.el9.x86_64

policycoreutils-python-utils-3.6-2.1.el9.noarch

policycoreutils-python-utils-3.6-2.1.el9.noarch

policycoreutils-python-utils-3.6-2.1.el9.noarch

polkit-0.117-11.el9.x86_64

polkit-0.117-11.el9.x86_64

polkit-0.117-11.el9.x86_64

polkit-libs-0.117-11.el9.x86_64

polkit-libs-0.117-11.el9.x86_64

polkit-libs-0.117-11.el9.x86_64

polkit-pkla-compat-0.1-21.el9.x86_64

polkit-pkla-compat-0.1-21.el9.x86_64

polkit-pkla-compat-0.1-21.el9.x86_64

popt-1.18-8.el9.x86_64

popt-1.18-8.el9.x86_64

popt-1.18-8.el9.x86_64

procps-ng-3.3.17-14.el9.x86_64

procps-ng-3.3.17-14.el9.x86_64

procps-ng-3.3.17-14.el9.x86_64

protobuf-c-1.3.3-13.el9.x86_64

protobuf-c-1.3.3-13.el9.x86_64

protobuf-c-1.3.3-13.el9.x86_64

psmisc-23.4-3.el9.x86_64

psmisc-23.4-3.el9.x86_64

psmisc-23.4-3.el9.x86_64

publicsuffix-list-dafsa-20210518-3.el9.noarch

publicsuffix-list-dafsa-20210518-3.el9.noarch

publicsuffix-list-dafsa-20210518-3.el9.noarch

pyproject-srpm-macros-1.12.0-1.el9.noarch

pyproject-srpm-macros-1.12.0-1.el9.noarch

pyproject-srpm-macros-1.12.0-1.el9.noarch

python-srpm-macros-3.9-53.el9.noarch

python-srpm-macros-3.9-53.el9.noarch

python-srpm-macros-3.9-53.el9.noarch

python-unversioned-command-3.9.18-3.el9_4.5.noarch

python-unversioned-command-3.9.18-3.el9_4.5.noarch

python-unversioned-command-3.9.18-3.el9_4.5.noarch

python3-3.9.18-3.el9_4.5.x86_64

python3-3.9.18-3.el9_4.5.x86_64

python3-3.9.18-3.el9_4.5.x86_64

python3-audit-3.1.2-2.el9.x86_64

python3-audit-3.1.2-2.el9.x86_64

python3-audit-3.1.2-2.el9.x86_64

python3-distro-1.5.0-7.el9.noarch

python3-distro-1.5.0-7.el9.noarch

python3-distro-1.5.0-7.el9.noarch

python3-libs-3.9.18-3.el9_4.5.x86_64

python3-libs-3.9.18-3.el9_4.5.x86_64

python3-libs-3.9.18-3.el9_4.5.x86_64

python3-libselinux-3.6-1.el9.x86_64

python3-libselinux-3.6-1.el9.x86_64

python3-libselinux-3.6-1.el9.x86_64

python3-libsemanage-3.6-1.el9.x86_64

python3-libsemanage-3.6-1.el9.x86_64

python3-libsemanage-3.6-1.el9.x86_64

python3-pip-wheel-21.2.3-8.el9.noarch

python3-pip-wheel-21.2.3-8.el9.noarch

python3-pip-wheel-21.2.3-8.el9.noarch

python3-policycoreutils-3.6-2.1.el9.noarch

python3-policycoreutils-3.6-2.1.el9.noarch

python3-policycoreutils-3.6-2.1.el9.noarch

python3-pyyaml-5.4.1-6.el9.x86_64

python3-pyyaml-5.4.1-6.el9.x86_64

python3-pyyaml-5.4.1-6.el9.x86_64

python3-setools-4.4.4-1.el9.x86_64

python3-setools-4.4.4-1.el9.x86_64

python3-setools-4.4.4-1.el9.x86_64

python3-setuptools-53.0.0-12.el9_4.1.noarch

python3-setuptools-53.0.0-12.el9_4.1.noarch

python3-setuptools-53.0.0-12.el9_4.1.noarch

python3-setuptools-wheel-53.0.0-12.el9_4.1.noarch

python3-setuptools-wheel-53.0.0-12.el9_4.1.noarch

python3-setuptools-wheel-53.0.0-12.el9_4.1.noarch

qemu-img-8.2.0-11.el9_4.6.x86_64

qemu-img-8.2.0-11.el9_4.6.x86_64

qemu-img-8.2.0-11.el9_4.6.x86_64

qemu-kvm-common-8.2.0-11.el9_4.6.x86_64

qemu-kvm-common-8.2.0-11.el9_4.6.x86_64

qemu-kvm-common-8.2.0-11.el9_4.6.x86_64

qemu-kvm-core-8.2.0-11.el9_4.6.x86_64

qemu-kvm-core-8.2.0-11.el9_4.6.x86_64

qemu-kvm-core-8.2.0-11.el9_4.6.x86_64

qt5-srpm-macros-5.15.9-1.el9.noarch

qt5-srpm-macros-5.15.9-1.el9.noarch

qt5-srpm-macros-5.15.9-1.el9.noarch

quota-4.06-6.el9.x86_64

quota-4.06-6.el9.x86_64

quota-4.06-6.el9.x86_64

quota-nls-4.06-6.el9.noarch

quota-nls-4.06-6.el9.noarch

quota-nls-4.06-6.el9.noarch

readline-8.1-4.el9.x86_64

readline-8.1-4.el9.x86_64

readline-8.1-4.el9.x86_64

redhat-release-9.4-0.5.el9.x86_64

redhat-release-9.4-0.5.el9.x86_64

redhat-release-9.4-0.5.el9.x86_64

redhat-rpm-config-207-1.el9.noarch

redhat-rpm-config-207-1.el9.noarch

redhat-rpm-config-207-1.el9.noarch

rootfiles-8.1-31.el9.noarch

rootfiles-8.1-31.el9.noarch

rootfiles-8.1-31.el9.noarch

rpcbind-1.2.6-7.el9.x86_64

rpcbind-1.2.6-7.el9.x86_64

rpcbind-1.2.6-7.el9.x86_64

rpm-4.16.1.3-29.el9.x86_64

rpm-4.16.1.3-29.el9.x86_64

rpm-4.16.1.3-29.el9.x86_64

rpm-libs-4.16.1.3-29.el9.x86_64

rpm-libs-4.16.1.3-29.el9.x86_64

rpm-libs-4.16.1.3-29.el9.x86_64

rpm-plugin-selinux-4.16.1.3-29.el9.x86_64

rpm-plugin-selinux-4.16.1.3-29.el9.x86_64

rpm-plugin-selinux-4.16.1.3-29.el9.x86_64

rust-srpm-macros-17-4.el9.noarch

rust-srpm-macros-17-4.el9.noarch

rust-srpm-macros-17-4.el9.noarch

scrub-2.6.1-4.el9.x86_64

scrub-2.6.1-4.el9.x86_64

scrub-2.6.1-4.el9.x86_64

seabios-bin-1.16.3-2.el9.noarch

seabios-bin-1.16.3-2.el9.noarch

seabios-bin-1.16.3-2.el9.noarch

seavgabios-bin-1.16.3-2.el9.noarch

seavgabios-bin-1.16.3-2.el9.noarch

seavgabios-bin-1.16.3-2.el9.noarch

sed-4.8-9.el9.x86_64

sed-4.8-9.el9.x86_64

sed-4.8-9.el9.x86_64

selinux-policy-38.1.35-2.el9_4.2.noarch

selinux-policy-38.1.35-2.el9_4.2.noarch

selinux-policy-38.1.35-2.el9_4.2.noarch

selinux-policy-targeted-38.1.35-2.el9_4.2.noarch

selinux-policy-targeted-38.1.35-2.el9_4.2.noarch

selinux-policy-targeted-38.1.35-2.el9_4.2.noarch

setup-2.13.7-10.el9.noarch

setup-2.13.7-10.el9.noarch

setup-2.13.7-10.el9.noarch

shadow-utils-4.9-8.el9.x86_64

shadow-utils-4.9-8.el9.x86_64

shadow-utils-4.9-8.el9.x86_64

snappy-1.1.8-8.el9.x86_64

snappy-1.1.8-8.el9.x86_64

snappy-1.1.8-8.el9.x86_64

sqlite-libs-3.34.1-7.el9_3.x86_64

sqlite-libs-3.34.1-7.el9_3.x86_64

sqlite-libs-3.34.1-7.el9_3.x86_64

squashfs-tools-4.4-10.git1.el9.x86_64

squashfs-tools-4.4-10.git1.el9.x86_64

squashfs-tools-4.4-10.git1.el9.x86_64

supermin-5.3.3-1.el9.x86_64

supermin-5.3.3-1.el9.x86_64

supermin-5.3.3-1.el9.x86_64

swtpm-0.8.0-2.el9_4.x86_64

swtpm-0.8.0-2.el9_4.x86_64

swtpm-0.8.0-2.el9_4.x86_64

swtpm-libs-0.8.0-2.el9_4.x86_64

swtpm-libs-0.8.0-2.el9_4.x86_64

swtpm-libs-0.8.0-2.el9_4.x86_64

swtpm-tools-0.8.0-2.el9_4.x86_64

swtpm-tools-0.8.0-2.el9_4.x86_64

swtpm-tools-0.8.0-2.el9_4.x86_64

syslinux-6.04-0.20.el9.x86_64

syslinux-6.04-0.20.el9.x86_64

syslinux-6.04-0.20.el9.x86_64

syslinux-extlinux-6.04-0.20.el9.x86_64

syslinux-extlinux-6.04-0.20.el9.x86_64

syslinux-extlinux-6.04-0.20.el9.x86_64

syslinux-extlinux-nonlinux-6.04-0.20.el9.noarch

syslinux-extlinux-nonlinux-6.04-0.20.el9.noarch

syslinux-extlinux-nonlinux-6.04-0.20.el9.noarch

syslinux-nonlinux-6.04-0.20.el9.noarch

syslinux-nonlinux-6.04-0.20.el9.noarch

syslinux-nonlinux-6.04-0.20.el9.noarch

systemd-252-32.el9_4.7.x86_64

systemd-252-32.el9_4.7.x86_64

systemd-252-32.el9_4.7.x86_64

systemd-container-252-32.el9_4.7.x86_64

systemd-container-252-32.el9_4.7.x86_64

systemd-container-252-32.el9_4.7.x86_64

systemd-libs-252-32.el9_4.7.x86_64

systemd-libs-252-32.el9_4.7.x86_64

systemd-libs-252-32.el9_4.7.x86_64

systemd-pam-252-32.el9_4.7.x86_64

systemd-pam-252-32.el9_4.7.x86_64

systemd-pam-252-32.el9_4.7.x86_64

systemd-rpm-macros-252-32.el9_4.7.noarch

systemd-rpm-macros-252-32.el9_4.7.noarch

systemd-rpm-macros-252-32.el9_4.7.noarch

systemd-udev-252-32.el9_4.7.x86_64

systemd-udev-252-32.el9_4.7.x86_64

systemd-udev-252-32.el9_4.7.x86_64

tar-1.34-6.el9_4.1.x86_64

tar-1.34-6.el9_4.1.x86_64

tar-1.34-6.el9_4.1.x86_64

tpm2-tools-5.2-3.el9.x86_64

tpm2-tools-5.2-3.el9.x86_64

tpm2-tools-5.2-3.el9.x86_64

tpm2-tss-3.2.2-2.el9.x86_64

tpm2-tss-3.2.2-2.el9.x86_64

tpm2-tss-3.2.2-2.el9.x86_64

tzdata-2024a-1.el9.noarch

tzdata-2024a-1.el9.noarch

tzdata-2024a-1.el9.noarch

unbound-libs-1.16.2-3.el9_3.5.x86_64

unbound-libs-1.16.2-3.el9_3.5.x86_64

unbound-libs-1.16.2-3.el9_3.5.x86_64

unzip-6.0-56.el9.x86_64

unzip-6.0-56.el9.x86_64

unzip-6.0-56.el9.x86_64

userspace-rcu-0.12.1-6.el9.x86_64

userspace-rcu-0.12.1-6.el9.x86_64

userspace-rcu-0.12.1-6.el9.x86_64

util-linux-2.37.4-18.el9.x86_64

util-linux-2.37.4-18.el9.x86_64

util-linux-2.37.4-18.el9.x86_64

util-linux-core-2.37.4-18.el9.x86_64

util-linux-core-2.37.4-18.el9.x86_64

util-linux-core-2.37.4-18.el9.x86_64

vim-minimal-8.2.2637-20.el9_1.x86_64

vim-minimal-8.2.2637-20.el9_1.x86_64

vim-minimal-8.2.2637-20.el9_1.x86_64

virt-v2v-2.4.0-4.el9_4.x86_64

virt-v2v-2.4.0-4.el9_4.x86_64

virt-v2v-2.4.0-4.el9_4.x86_64

virtio-win-1.9.40-0.el9_4.noarch

virtio-win-1.9.40-0.el9_4.noarch

virtio-win-1.9.40-0.el9_4.noarch

webkit2gtk3-jsc-2.42.5-1.el9.x86_64

webkit2gtk3-jsc-2.42.5-1.el9.x86_64

webkit2gtk3-jsc-2.46.1-2.el9_4.x86_64

which-2.21-29.el9.x86_64

which-2.21-29.el9.x86_64

which-2.21-29.el9.x86_64

xfsprogs-6.3.0-1.el9.x86_64

xfsprogs-6.3.0-1.el9.x86_64

xfsprogs-6.3.0-1.el9.x86_64

xz-5.2.5-8.el9_0.x86_64

xz-5.2.5-8.el9_0.x86_64

xz-5.2.5-8.el9_0.x86_64

xz-libs-5.2.5-8.el9_0.x86_64

xz-libs-5.2.5-8.el9_0.x86_64

xz-libs-5.2.5-8.el9_0.x86_64

yajl-2.1.0-22.el9.x86_64

yajl-2.1.0-22.el9.x86_64

yajl-2.1.0-22.el9.x86_64

zip-3.0-35.el9.x86_64

zip-3.0-35.el9.x86_64

zip-3.0-35.el9.x86_64

zlib-1.2.11-40.el9.x86_64

zlib-1.2.11-40.el9.x86_64

zlib-1.2.11-40.el9.x86_64

zstd-1.5.1-2.el9.x86_64

zstd-1.5.1-2.el9.x86_64

zstd-1.5.1-2.el9.x86_64

+
+
+ + +
+ + diff --git a/modules/mtv-overview-page/index.html b/modules/mtv-overview-page/index.html new file mode 100644 index 00000000000..4c137cb883b --- /dev/null +++ b/modules/mtv-overview-page/index.html @@ -0,0 +1,214 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

The MTV Overview page

+
+
+
+

The Forklift Overview page displays system-wide information about migrations and a list of Settings you can change.

+
+
+

If you have Administrator privileges, you can access the Overview page by clicking Migration → Overview in the OKD web console.

+
+
+

The Overview page has 3 tabs:

+
+
+
    +
  • +

    Overview

    +
  • +
  • +

    YAML

    +
  • +
  • +

    Metrics

    +
  • +
+
+
+
+
+

Overview tab

+
+
+

The Overview tab lets you see:

+
+
+
    +
  • +

    Operator: The namespace on which the Forklift Operator is deployed and the status of the Operator

    +
  • +
  • +

    Pods: The name, status, and creation time of each pod that was deployed by the Forklift Operator

    +
  • +
  • +

    Conditions: Status of the Forklift Operator:

    +
    +
      +
    • +

      Failure: Last failure. False indicates no failure since deployment.

      +
    • +
    • +

      Running: Whether the Operator is currently running and waiting for the next reconciliation.

      +
    • +
    • +

      Successful: Last successful reconciliation.

      +
    • +
    +
    +
  • +
+
+
+
+
+

YAML tab

+
+
+

The YAML tab displays the ForkliftController custom resource, which defines the operation of the Forklift Operator. You can modify the custom resource from this tab.
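If you prefer the CLI, you can inspect and edit the same custom resource with oc. A minimal sketch, assuming the Operator is installed in the openshift-mtv namespace (adjust the namespace for your installation):

# Print the ForkliftController custom resource
oc get forkliftcontroller -n openshift-mtv -o yaml

# Open it in an editor, equivalent to changing it in the YAML tab
oc edit forkliftcontroller -n openshift-mtv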

+
+
+
+
+

Metrics tab

+
+
+

The Metrics tab lets you see:

+
+
+
    +
  • +

    Migrations: The number of migrations performed using Forklift:

    +
    +
      +
    • +

      Total

      +
    • +
    • +

      Running

      +
    • +
    • +

      Failed

      +
    • +
    • +

      Succeeded

      +
    • +
    • +

      Canceled

      +
    • +
    +
    +
  • +
  • +

    Virtual Machine Migrations: The number of VMs migrated using Forklift:

    +
    +
      +
    • +

      Total

      +
    • +
    • +

      Running

      +
    • +
    • +

      Failed

      +
    • +
    • +

      Succeeded

      +
    • +
    • +

      Canceled

      +
    • +
    +
    +
  • +
+
+
+ + + + + +
+
Note
+
+
+

Since a single migration might involve many virtual machines, the number of migrations performed using Forklift might vary significantly from the number of virtual machines that have been migrated using Forklift.

+
+
+
+
+
    +
  • +

    Chart showing the number of running, failed, and succeeded migrations performed using Forklift for each of the last 7 days

    +
  • +
  • +

    Chart showing the number of running, failed, and succeeded virtual machine migrations performed using Forklift for each of the last 7 days

    +
  • +
+
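The same counters can be cross-checked from the CLI by listing the migration resources that Forklift creates. A hedged example, assuming the forklift.konveyor.io API group and cluster-wide read access:

# List Migration resources across all namespaces
oc get migrations.forklift.konveyor.io -A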
+
+
+ + +
+ + diff --git a/modules/mtv-performance-addendum/index.html b/modules/mtv-performance-addendum/index.html new file mode 100644 index 00000000000..81e59d6fd54 --- /dev/null +++ b/modules/mtv-performance-addendum/index.html @@ -0,0 +1,291 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift performance addendum

+
+
+
+


+
+
+
+
+

ESXi performance

+
+
+
Single ESXi performance
+

These tests migrated VMs using the same single ESXi host.

+
+
+

In each iteration, the total number of VMs is increased to show the impact of concurrent migration on total duration.

+
+
+

The results show that migration time scales linearly with the total number of VMs (50 GiB disk, 70% utilization).

+
+
+

The optimal number of VMs per ESXi is 10.

+
Table 1. Single ESXi tests
Test Case Description | MTV | VDDK | max_vm inflight | Migration Type | Total Duration

cold migration, 10 VMs, Single ESXi, Private Network [1] | 2.6 | 7.0.3 | 100 | cold | 0:21:39
cold migration, 20 VMs, Single ESXi, Private Network | 2.6 | 7.0.3 | 100 | cold | 0:41:16
cold migration, 30 VMs, Single ESXi, Private Network | 2.6 | 7.0.3 | 100 | cold | 1:00:59
cold migration, 40 VMs, Single ESXi, Private Network | 2.6 | 7.0.3 | 100 | cold | 1:23:02
cold migration, 50 VMs, Single ESXi, Private Network | 2.6 | 7.0.3 | 100 | cold | 1:46:24
cold migration, 80 VMs, Single ESXi, Private Network | 2.6 | 7.0.3 | 100 | cold | 2:42:49
cold migration, 100 VMs, Single ESXi, Private Network | 2.6 | 7.0.3 | 100 | cold | 3:25:15

+
+
Multi ESXi hosts and single data store
+

In each iteration, the number of ESXi hosts was increased to show that adding ESXi hosts improves migration time (50 GiB disk, 70% utilization).

+
Table 2. Multi ESXi hosts and single data store
Test Case Description | MTV | VDDK | max_vm inflight | Migration Type | Total Duration

cold migration, 100 VMs, Single ESXi, Private Network [2] | 2.6 | 7.0.3 | 100 | cold | 3:25:15
cold migration, 100 VMs, 4 ESXs (25 VMs per ESX), Private Network | 2.6 | 7.0.3 | 100 | cold | 1:22:27
cold migration, 100 VMs, 5 ESXs (20 VMs per ESX), Private Network, 1 DataStore | 2.6 | 7.0.3 | 100 | cold | 1:04:57

+
+
+
+

Different migration network performance

+
+
+

In each iteration, the migration network was changed, using the provider, to find the fastest network for migration.

+
+
+

The results show no degradation when using the management network compared with non-management networks, provided all interfaces and network speeds are the same.

+
Table 3. Different migration network tests
Test Case Description | MTV | VDDK | max_vm inflight | Migration Type | Total Duration

cold migration, 10 VMs, Single ESXi, MGMT Network | 2.6 | 7.0.3 | 100 | cold | 0:21:30
cold migration, 10 VMs, Single ESXi, Private Network [3] | 2.6 | 7.0.3 | 20 | cold | 0:21:20
cold migration, 10 VMs, Single ESXi, Default Network | 2.6.2 | 7.0.3 | 20 | cold | 0:21:30

+
+
+
+
+
+1. Private Network refers to a non-Management network +
+
+2. Private Network refers to a non-Management network +
+
+3. Private Network refers to a non-Management network +
+
+ + +
+ + diff --git a/modules/mtv-performance-recommendation/index.html b/modules/mtv-performance-recommendation/index.html new file mode 100644 index 00000000000..2bbfa746ae8 --- /dev/null +++ b/modules/mtv-performance-recommendation/index.html @@ -0,0 +1,382 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift performance recommendations

+
+
+
+

The purpose of this section is to share recommendations for efficient and effective migration of virtual machines (VMs) using Forklift, based on findings observed through testing.

+
+
+


+
+
+
+
+

Ensure fast storage and network speeds

+
+
+

Ensure fast storage and network speeds, both for VMware and OKD (OCP) environments.

+
+
+
    +
  • +

    To perform fast migrations, VMware must have fast read access to datastores. Networking between VMware ESXi hosts should be fast; ensure at least a 10 GbE network connection and avoid network bottlenecks.

    +
    +
      +
    • +

      Extend the VMware network to the OCP Workers Interface network environment.

      +
    • +
    • +

      Ensure that the VMware network offers high throughput (10 Gigabit Ethernet) so that reception rates align with the read rate of the ESXi datastore.

      +
    • +
    • +

      Be aware that the migration process consumes significant bandwidth on the migration network. If other services share that network, migration may affect both those services and the migration rates.

      +
    • +
    • +

      For example, the average network transfer rate from the vmnic of each ESXi host transferring data to the OCP interface was 200 to 325 MiB/s.

      +
    • +
    +
    +
  • +
+
+
+
+
+

Ensure fast datastore read speeds to ensure efficient and performant migrations.

+
+
+

Datastore read rates affect the total transfer times, so it is essential to ensure fast reads from the ESXi datastore to the ESXi host.

+
+
+

Example in numbers: 200 to 300 MiB/s was the average read rate for both vSphere and ESXi endpoints for a single ESXi server. When multiple ESXi servers are used, higher datastore read rates are possible.

+
+
+
+
+

Endpoint types 

+
+
+

Forklift 2.6 allows for the following vSphere provider options:

+
+
+
    +
  • +

    ESXi endpoint (inventory and disk transfers from ESXi), introduced in Forklift 2.6

    +
  • +
  • +

    vCenter Server endpoint; no networks for the ESXi host (inventory and disk transfers from vCenter)

    +
  • +
  • +

    vCenter endpoint and ESXi networks are available (inventory from vCenter, disk transfers from ESXi).

    +
  • +
+
+
+

When transferring many VMs that are registered to multiple ESXi hosts, using the vCenter endpoint and ESXi network is suggested.

+
+
+ + + + + +
+
Note
+
+
+

As of vSphere 7.0, ESXi hosts can designate which network to use for NBD transport by tagging the desired virtual network interface card (NIC) with the vSphereBackupNFC label. When this is done, Forklift can use that ESXi interface for network transfer to OpenShift, as long as the worker and ESXi host interfaces are reachable. This is especially useful when migration users do not have access to the ESXi credentials but want to control which ESXi interface is used for migration.

+
+
+

For more details, see: (Forklift-1230)

+
+
+
+
+

You can use the following ESXi command, which designates interface vmk2 for NBD backup:

+
+
+
+
esxcli network ip interface tag add -t vSphereBackupNFC -i vmk2
+
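To confirm that the tag was applied, you can list the tags on the interface. A quick check, assuming the tag get subcommand is available on your ESXi build:

esxcli network ip interface tag get -i vmk2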
+
+
+
+
+

Set ESXi hosts BIOS profile and ESXi Host Power Management for High Performance

+
+
+

Where possible, ensure that hosts used to perform migrations use BIOS profiles tuned for maximum performance. For hosts whose power management is controlled within vSphere, check that the High Performance policy is set.

+
+
+

Testing showed that when transferring more than 10 VMs with both BIOS and host power management set accordingly, migrations had an increase of 15 MiB/s in the average datastore read rate.

+
+
+
+
+

Avoid additional network load on VMware networks

+
+
+

You can reduce the network load on VMware networks by selecting the migration network when using the ESXi endpoint.

+
+
+

By incorporating a virtualization provider, Forklift enables you to select a specific network, accessible on the ESXi hosts, for migrating virtual machines to OCP. Selecting this migration network from the ESXi host in the Forklift UI ensures that the transfer is performed using the selected network as an ESXi endpoint.

+
+
+

Ensure that the selected network has connectivity to the OCP interface, has adequate bandwidth for migrations, and is not saturated.

+
+
+

In environments with fast networks, such as 10 GbE, migration throughput can be expected to match the rate of ESXi datastore reads.

+
+
+
+
+

Control maximum concurrent disk migrations per ESXi host

+
+
+

Set the MAX_VM_INFLIGHT MTV variable to control the maximum number of concurrent VM transfers allowed per ESXi host.

+
+
+

Forklift allows concurrency to be controlled using this variable; by default, it is set to 20. One way to change it from the command line is sketched after the examples below.

+
+
+

When setting MAX_VM_INFLIGHT, consider the maximum number of concurrent VM transfers required per ESXi host. Also consider the type of migration to be run concurrently: warm migrations are migrations of a running VM, performed over a scheduled period of time.

+
+
+

Warm migrations use snapshots to compare and migrate only the differences between successive snapshots of the disk. The migration of these differences happens at specific intervals before a final cutover of the running VM to OKD occurs.

+
+
+

In Forklift 2.6, MAX_VM_INFLIGHT reserves one transfer slot per VM, regardless of current migration activity for a specific snapshot or the number of disks that belong to a single VM. The total set by MAX_VM_INFLIGHT indicates how many concurrent VM transfers per ESXi host are allowed.

+
+
+
Examples
+
    +
  • +

    MAX_VM_INFLIGHT = 20 and 2 ESXi hosts defined in the provider mean that each host can transfer 20 VMs concurrently.

    +
  • +
+
+
+
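
As a sketch of one way to change this value from the command line: the setting is exposed through the ForkliftController custom resource as controller_max_vm_inflight. The resource name forklift-controller and the namespace openshift-mtv below are assumptions that depend on your installation:

+
$ kubectl patch forkliftcontroller forklift-controller -n openshift-mtv \
+  --type merge -p '{"spec": {"controller_max_vm_inflight": 40}}'
+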
+
+

Migrations complete faster when migrating multiple VMs concurrently

+
+
+

When multiple VMs from a specific ESXi host are to be migrated, starting concurrent migrations for multiple VMs leads to faster migration times. 

+
+
+

Testing demonstrated that migrating 10 VMs concurrently (each with 35 GiB of data used on a 50 GiB disk) from a single host is significantly faster than migrating the same number of VMs sequentially, one after another.

+
+
+

It is possible to increase concurrency beyond 10 virtual machines from a single host, but this does not show a significant improvement.

+
+
+
Examples
+
    +
  • +

    1 single-disk VM took 6 minutes, with a migration rate of 100 MiB/s

    +
  • +
  • +

    10 single-disk VMs took 22 minutes, with a migration rate of 272 MiB/s

    +
  • +
  • +

    20 single-disk VMs took 42 minutes, with a migration rate of 284 MiB/s

    +
  • +
+
+
+ + + + + +
+
Note
+
+
+

These examples show that migrating 10 virtual machines concurrently is almost three times faster than migrating the identical virtual machines sequentially.

+
+
+

The migration rate was almost the same when moving 10 or 20 virtual machines simultaneously.

+
+
+
+
+
+
+

Migrations complete faster using multiple hosts

+
+
+

Distributing the VMs to be migrated equally among the ESXi hosts used for migrations leads to faster migration times.

+
+
+

Testing showed that when transferring more than 10 single-disk VMs, each with 35 GiB of data used on a 50 GiB disk, using additional hosts can reduce migration time.

+
+
+
Examples
+
    +
  • +

    80 single-disk VMs, containing 35 GiB of data each, using a single host took 2 hours and 43 minutes, with a migration rate of 294 MiB/s.

    +
  • +
  • +

    80 single-disk VMs, containing 35 GiB of data each, using 8 ESXi hosts took 41 minutes, with a migration rate of 1,173 MiB/s.

    +
  • +
+
+
+ + + + + +
+
Note
+
+
+

These examples show that migrating 80 VMs from 8 ESXi hosts concurrently, 10 from each host, is four times faster than migrating the same VMs from a single ESXi host.

+
+
+

Migrating a larger number of VMs from more than 8 ESXi hosts concurrently could potentially show increased performance. However, this was not tested and is therefore not recommended.

+
+
+
+
+
+
+

Multiple migration plans compared to a single large migration plan

+
+
+

The maximum number of disks that can be referenced by a single migration plan is 500. For more details, see (MTV-1203).

+
+
+

When attempting to migrate many VMs in a single migration plan, it can take some time for all migrations to start.  By breaking up one migration plan into several migration plans, it is possible to start them at the same time.

+
+
+

Comparing migrations of:

+
+
+
    +
  • +

    500 VMs using 8 ESXi hosts in 1 plan, MAX_VM_INFLIGHT=100, took 5 hours and 10 minutes.

    +
  • +
  • +

    800 VMs using 8 ESXi hosts with 8 plans, MAX_VM_INFLIGHT=100, took 57 minutes.

    +
  • +
+
+
+

Testing showed that by breaking a single large plan into multiple moderately sized plans, for example, 100 VMs per plan, the total migration time can be reduced.

+
+
+
+
+

Maximum values tested

+
+
+
    +
  • +

    Maximum number of ESXi hosts tested: 8

    +
  • +
  • +

    Maximum number of VMs in a single migration plan: 500

    +
  • +
  • +

    Maximum number of VMs migrated in a single test: 5000

    +
  • +
  • +

    Maximum number of migration plans performed concurrently: 40

    +
  • +
  • +

    Maximum single disk size migrated: 6 TiB disks, which contained 3 TiB of data

    +
  • +
  • +

    Maximum number of disks on a single VM migrated: 50

    +
  • +
  • +

    Highest observed single datastore read rate from a single ESXi server:  312 MiB/second

    +
  • +
  • +

    Highest observed multi-datastore read rate using eight ESXi servers and two datastores: 1,242 MiB/second

    +
  • +
  • +

    Highest observed virtual NIC transfer rate to an OKD worker: 327 MiB/second

    +
  • +
  • +

    Maximum migration transfer rate of a single disk: 162 MiB/second (rate observed during a nonconcurrent migration of 1.5 TiB of utilized data)

    +
  • +
  • +

    Maximum cold migration transfer rate of multiple VMs (single disk) from a single ESXi host: 294 MiB/s (concurrent migration of 30 VMs, 35 GiB used of each 50 GiB disk, from a single ESXi host)

    +
  • +
  • +

    Maximum cold migration transfer rate of multiple VMs (single disk) from multiple ESXi hosts: 1,173 MiB/s (concurrent migration of 80 VMs, 35 GiB used of each 50 GiB disk, from 8 ESXi servers, 10 VMs from each host)

    +
  • +
+
+
+

For additional details on performance, see the Forklift performance addendum.

+
+
+
+ + +
+ + diff --git a/modules/mtv-resources-and-services/index.html b/modules/mtv-resources-and-services/index.html new file mode 100644 index 00000000000..9923e0e09b7 --- /dev/null +++ b/modules/mtv-resources-and-services/index.html @@ -0,0 +1,131 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift custom resources and services

+
+

Forklift is provided as an OKD Operator. It creates and manages the following custom resources (CRs) and services.

+
+
+
Forklift custom resources
+
    +
  • +

    Provider CR stores attributes that enable Forklift to connect to and interact with the source and target providers.

    +
  • +
  • +

    NetworkMapping CR maps the networks of the source and target providers.

    +
  • +
  • +

    StorageMapping CR maps the storage of the source and target providers.

    +
  • +
  • +

    Plan CR contains a list of VMs with the same migration parameters and associated network and storage mappings. A minimal Plan manifest sketch follows this list.

    +
  • +
  • +

    Migration CR runs a migration plan.

    +
    +

    Only one Migration CR per migration plan can run at a given time. You can create multiple Migration CRs for a single Plan CR.

    +
    +
  • +
+
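
The following is a minimal Plan manifest sketch. It follows the forklift.konveyor.io/v1beta1 Plan schema; it assumes the named providers and mappings already exist, all bracketed values are placeholders, and warm: false selects a cold migration:

+
$ cat << EOF | kubectl apply -f -
+apiVersion: forklift.konveyor.io/v1beta1
+kind: Plan
+metadata:
+  name: <plan>
+  namespace: <namespace>
+spec:
+  warm: false
+  provider:
+    source:
+      name: <source_provider>
+      namespace: <namespace>
+    destination:
+      name: <destination_provider>
+      namespace: <namespace>
+  map:
+    network:
+      name: <network_map>
+      namespace: <namespace>
+    storage:
+      name: <storage_map>
+      namespace: <namespace>
+  targetNamespace: <target_namespace>
+  vms:
+    - id: <source_vm_moref>
+EOF
+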
+
+
Forklift services
+
    +
  • +

    The Inventory service performs the following actions:

    +
    +
      +
    • +

      Connects to the source and target providers.

      +
    • +
    • +

      Maintains a local inventory for mappings and plans.

      +
    • +
    • +

      Stores VM configurations.

      +
    • +
    • +

      Runs the Validation service if a VM configuration change is detected.

      +
    • +
    +
    +
  • +
  • +

    The Validation service checks the suitability of a VM for migration by applying rules.

    +
  • +
  • +

    The Migration Controller service orchestrates migrations.

    +
    +

    When you create a migration plan, the Migration Controller service validates the plan and adds a status label. If the plan fails validation, the plan status is Not ready and the plan cannot be used to perform a migration. If the plan passes validation, the plan status is Ready and it can be used to perform a migration. After a successful migration, the Migration Controller service changes the plan status to Completed. A sketch for checking a plan's status from the command line follows this list.

    +
    +
  • +
  • +

    The Populator Controller service orchestrates disk transfers using Volume Populators.

    +
  • +
  • +

    The KubeVirt Controller and Containerized Data Import (CDI) Controller services handle most technical operations.

    +
  • +
+
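
As a sketch, you can inspect the status conditions that the Migration Controller service sets on a plan from the command line; the plan name and namespace below are placeholders:

+
$ kubectl get plan <plan> -n <namespace> \
+  -o jsonpath='{.status.conditions[*].type}'
+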
+ + +
+ + diff --git a/modules/mtv-selected-packages-2-7/index.html b/modules/mtv-selected-packages-2-7/index.html new file mode 100644 index 00000000000..dc78b1d7dad --- /dev/null +++ b/modules/mtv-selected-packages-2-7/index.html @@ -0,0 +1,207 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift selected packages

+ + ++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Selected Forklift packages
Package summary | Forklift 2.7.0 | Forklift 2.7.2 | Forklift 2.7.3

The skeleton package which defines a simple Red Hat Enterprise Linux system

basesystem-11-13.el9.noarch

basesystem-11-13.el9.noarch

basesystem-11-13.el9.noarch

Core kernel modules to match the core kernel

kernel-modules-core-5.14.0-427.35.1.el9_4.x86_64

kernel-modules-core-5.14.0-427.37.1.el9_4.x86_64

kernel-modules-core-5.14.0-427.40.1.el9_4.x86_64

The Linux kernel

kernel-core-5.14.0-427.35.1.el9_4.x86_64

kernel-core-5.14.0-427.37.1.el9_4.x86_64

kernel-core-5.14.0-427.40.1.el9_4.x86_64

Access and modify virtual machine disk images

libguestfs-1.50.1-8.el9_4.x86_64

libguestfs-1.50.1-8.el9_4.x86_64

libguestfs-1.50.1-8.el9_4.x86_64

Client side utilities of the libvirt library

libvirt-client-10.0.0-6.7.el9_4.x86_64

libvirt-client-10.0.0-6.7.el9_4.x86_64

libvirt-client-10.0.0-6.7.el9_4.x86_64

Libvirt libraries

libvirt-libs-10.0.0-6.7.el9_4.x86_64

libvirt-libs-10.0.0-6.7.el9_4.x86_64

libvirt-libs-10.0.0-6.7.el9_4.x86_64

QEMU driver plugin for the libvirtd daemon

libvirt-daemon-driver-qemu-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-qemu-10.0.0-6.7.el9_4.x86_64

libvirt-daemon-driver-qemu-10.0.0-6.7.el9_4.x86_64

NBD server

nbdkit-1.36.2-1.el9.x86_64

nbdkit-1.36.2-1.el9.x86_64

nbdkit-1.36.2-1.el9.x86_64

Basic filters for nbdkit

nbdkit-basic-filters-1.36.2-1.el9.x86_64

nbdkit-basic-filters-1.36.2-1.el9.x86_64

nbdkit-basic-filters-1.36.2-1.el9.x86_64

Basic plugins for nbdkit

nbdkit-basic-plugins-1.36.2-1.el9.x86_64

nbdkit-basic-plugins-1.36.2-1.el9.x86_64

nbdkit-basic-plugins-1.36.2-1.el9.x86_64

HTTP/FTP (cURL) plugin for nbdkit

nbdkit-curl-plugin-1.36.2-1.el9.x86_64

nbdkit-curl-plugin-1.36.2-1.el9.x86_64

nbdkit-curl-plugin-1.36.2-1.el9.x86_64

NBD proxy / forward plugin for nbdkit

nbdkit-nbd-plugin-1.36.2-1.el9.x86_64

nbdkit-nbd-plugin-1.36.2-1.el9.x86_64

nbdkit-nbd-plugin-1.36.2-1.el9.x86_64

Python 3 plugin for nbdkit

nbdkit-python-plugin-1.36.2-1.el9.x86_64

nbdkit-python-plugin-1.36.2-1.el9.x86_64

nbdkit-python-plugin-1.36.2-1.el9.x86_64

The nbdkit server

nbdkit-server-1.36.2-1.el9.x86_64

nbdkit-server-1.36.2-1.el9.x86_64

nbdkit-server-1.36.2-1.el9.x86_64

SSH plugin for nbdkit

nbdkit-ssh-plugin-1.36.2-1.el9.x86_64

nbdkit-ssh-plugin-1.36.2-1.el9.x86_64

nbdkit-ssh-plugin-1.36.2-1.el9.x86_64

VMware VDDK plugin for nbdkit

nbdkit-vddk-plugin-1.36.2-1.el9.x86_64

nbdkit-vddk-plugin-1.36.2-1.el9.x86_64

nbdkit-vddk-plugin-1.36.2-1.el9.x86_64

QEMU command line tool for manipulating disk images

qemu-img-8.2.0-11.el9_4.6.x86_64

qemu-img-8.2.0-11.el9_4.6.x86_64

qemu-img-8.2.0-11.el9_4.6.x86_64

QEMU common files needed by all QEMU targets

qemu-kvm-common-8.2.0-11.el9_4.6.x86_64

qemu-kvm-common-8.2.0-11.el9_4.6.x86_64

qemu-kvm-common-8.2.0-11.el9_4.6.x86_64

+

qemu-kvm core components

+

qemu-kvm-core-8.2.0-11.el9_4.6.x86_64

qemu-kvm-core-8.2.0-11.el9_4.6.x86_64

qemu-kvm-core-8.2.0-11.el9_4.6.x86_64

Convert a virtual machine to run on KVM

virt-v2v-2.4.0-4.el9_4.x86_64

virt-v2v-2.4.0-4.el9_4.x86_64

virt-v2v-2.4.0-4.el9_4.x86_64

+ + +
+ + diff --git a/modules/mtv-settings/index.html b/modules/mtv-settings/index.html new file mode 100644 index 00000000000..e1325a044a8 --- /dev/null +++ b/modules/mtv-settings/index.html @@ -0,0 +1,133 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Configuring MTV settings

+
+

If you have Administrator privileges, you can access the Overview page and change the following settings in it:

+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Forklift settings
Setting | Description | Default value

Max concurrent virtual machine migrations

The maximum number of VMs per plan that can be migrated simultaneously

20

Must gather cleanup after (hours)

The duration for retaining must gather reports before they are automatically deleted

Disabled

Controller main container CPU limit

The CPU limit allocated to the main controller container

500m

Controller main container Memory limit

The memory limit allocated to the main controller container

800 Mi

Precopy interval (minutes)

The interval at which a new snapshot is requested before initiating a warm migration

60

Snapshot polling interval (seconds)

The frequency with which the system checks the status of snapshot creation or removal during a warm migration

10

+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → Overview. The Settings list is on the right-hand side of the page.

    +
  2. +
  3. +

    In the Settings list, click the Edit icon of the setting you want to change.

    +
  4. +
  5. +

    Choose a setting from the list.

    +
  6. +
  7. +

    Click Save.

    +
  8. +
+
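
As an alternative to the web console, these settings map to fields on the ForkliftController custom resource. The following sketch changes the precopy interval; the field name controller_precopy_interval, the resource name forklift-controller, and the namespace openshift-mtv are assumptions that depend on your installation and version:

+
$ kubectl patch forkliftcontroller forklift-controller -n openshift-mtv \
+  --type merge -p '{"spec": {"controller_precopy_interval": 30}}'
+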
+ + +
+ + diff --git a/modules/mtv-ui/index.html b/modules/mtv-ui/index.html new file mode 100644 index 00000000000..38ecc342ef4 --- /dev/null +++ b/modules/mtv-ui/index.html @@ -0,0 +1,91 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

The MTV user interface

+
+

The Forklift user interface is integrated into the OKD web console.

+
+
+

In the left-hand panel, you can choose a page related to a component of the migration process, for example, Providers for Migration, or, if you are an administrator, you can choose Overview, which contains information about migrations and lets you configure Forklift settings.

+
+
+
+Forklift user interface +
+
Figure 1. Forklift extension interface
+
+
+

In pages related to components, you can click on the Projects list, which is in the upper-left portion of the page, and see which projects (namespaces) you are allowed to work with.

+
+
+
    +
  • +

    If you are an administrator, you can see all projects.

    +
  • +
  • +

    If you are a non-administrator, you can see only the projects that you have permissions to work with.

    +
  • +
+
+ + +
+ + diff --git a/modules/mtv-workflow/index.html b/modules/mtv-workflow/index.html new file mode 100644 index 00000000000..04dc36088ce --- /dev/null +++ b/modules/mtv-workflow/index.html @@ -0,0 +1,113 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

High-level migration workflow

+
+

The high-level workflow shows the migration process from the point of view of the user:

+
+
+
    +
  1. +

    You create a source provider, a target provider, a network mapping, and a storage mapping.

    +
  2. +
  3. +

    You create a Plan custom resource (CR) that includes the following resources:

    +
    +
      +
    • +

      Source provider

      +
    • +
    • +

      Target provider, if Forklift is not installed on the target cluster

      +
    • +
    • +

      Network mapping

      +
    • +
    • +

      Storage mapping

      +
    • +
    • +

      One or more virtual machines (VMs)

      +
    • +
    +
    +
  4. +
  5. +

    You run a migration plan by creating a Migration CR that references the Plan CR.

    +
    +

    If you cannot migrate all the VMs for any reason, you can create multiple Migration CRs for the same Plan CR until all VMs are migrated.

    +
    +
  6. +
  7. +

    For each VM in the Plan CR, the Migration Controller service records the VM migration progress in the Migration CR.

    +
  8. +
  9. +

    Once the data transfer for each VM in the Plan CR completes, the Migration Controller service creates a VirtualMachine CR.

    +
    +

    When all VMs have been migrated, the Migration Controller service updates the status of the Plan CR to Completed. The power state of each source VM is maintained after migration.

    +
    +
  10. +
+
+ + +
+ + diff --git a/modules/network-prerequisites/index.html b/modules/network-prerequisites/index.html new file mode 100644 index 00000000000..a92a4f97f08 --- /dev/null +++ b/modules/network-prerequisites/index.html @@ -0,0 +1,196 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Network prerequisites

+
+
+
+

The following prerequisites apply to all migrations:

+
+
+
    +
  • +

    IP addresses, VLANs, and other network configuration settings must not be changed before or during migration. The MAC addresses of the virtual machines are preserved during migration.

    +
  • +
  • +

    The network connections between the source environment, the KubeVirt cluster, and the replication repository must be reliable and uninterrupted.

    +
  • +
  • +

    If you are mapping more than one source and destination network, you must create a network attachment definition for each additional destination network.

    +
  • +
+
+
+
+
+

Ports

+
+
+

The firewalls must enable traffic over the following ports:

+
+ + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Network ports required for migrating from VMware vSphere
Port | Protocol | Source | Destination | Purpose

443

TCP

OpenShift nodes

VMware vCenter

+

VMware provider inventory

+
+
+

Disk transfer authentication

+

443

TCP

OpenShift nodes

VMware ESXi hosts

+

Disk transfer authentication

+

902

TCP

OpenShift nodes

VMware ESXi hosts

+

Disk transfer data copy

+
+ + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 2. Network ports required for migrating from oVirt
Port | Protocol | Source | Destination | Purpose

443

TCP

OpenShift nodes

oVirt Engine

+

oVirt provider inventory

+
+
+

Disk transfer authentication

+

443

TCP

OpenShift nodes

oVirt hosts

+

Disk transfer authentication

+

54322

TCP

OpenShift nodes

oVirt hosts

+

Disk transfer data copy

+
+
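
As a quick connectivity sketch, you can test a required port from an OpenShift node by using bash's built-in /dev/tcp redirection, which avoids depending on extra tools being installed on the node; the node and host names are placeholders:

+
$ oc debug node/<node_name> -- chroot /host \
+  bash -c 'timeout 5 bash -c "</dev/tcp/<esxi_host>/902" && echo "port 902 reachable"'
+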
+
+ + +
+ + diff --git a/modules/new-features-and-enhancements-2-7/index.html b/modules/new-features-and-enhancements-2-7/index.html new file mode 100644 index 00000000000..546a3dea8a7 --- /dev/null +++ b/modules/new-features-and-enhancements-2-7/index.html @@ -0,0 +1,85 @@ + + + + + + + + New features and enhancements | Forklift Documentation + + + + + + + + + + + + + +New features and enhancements | Forklift Documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+

New features and enhancements

+
+
+
+

Forklift 2.7 introduces the following features and enhancements:

+
+
+
+
+

New features and enhancements 2.7.0

+
+
+
    +
  • +

    In Forklift 2.7.0, warm migration is now based on RHEL 9, inheriting its features and bug fixes.

    +
  • +
+
+
+
+ + +
+ + diff --git a/modules/new-migrating-virtual-machines-cli/index.html b/modules/new-migrating-virtual-machines-cli/index.html new file mode 100644 index 00000000000..c31cd1990f4 --- /dev/null +++ b/modules/new-migrating-virtual-machines-cli/index.html @@ -0,0 +1,155 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
Procedure
+
    +
  1. +

    Create a Secret manifest for the source provider credentials:

    +
  2. +
+
+
+
    +
  1. +

    Create a Provider manifest for the source provider:

    +
  2. +
  3. +

    Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

    +
    +
    +
    $  cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Hook
    +metadata:
    +  name: <hook>
    +  namespace: <namespace>
    +spec:
    +  image: quay.io/konveyor/hook-runner
    +  playbook: |
    +    LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv
    +    YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s
    +    IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh
    +    cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr
    +    bG9hZAoK
    +EOF
    +
    +
    +
    +

    where:

    +
    +
    +

    playbook refers to an optional Base64-encoded Ansible Playbook. If you specify a playbook, the image must be hook-runner. The decoded contents of this playbook are shown after this procedure.

    +
    +
    + + + + + +
    +
    Note
    +
    +
    +

    You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.

    +
    +
    +
    +
  4. +
  5. +

    Create a Migration manifest to run the Plan CR:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Migration
    +metadata:
    +  name: <name_of_migration_cr>
    +  namespace: <namespace>
    +spec:
    +  plan:
    +    name: <name_of_plan_cr>
    +    namespace: <namespace>
    +  cutover: <optional_cutover_time>
    +EOF
    +
    +
    +
    + + + + + +
    +
    Note
    +
    +
    +

    If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00.

    +
    +
    +
    +
  6. +
+
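
For reference, the Base64 string in the Hook manifest above decodes to the following playbook, which loads the plan and workload data that Forklift mounts into the hook container:

+
---
+- name: Main
+  hosts: localhost
+  tasks:
+  - name: Load Plan
+    include_vars:
+      file: "/tmp/hook/plan.yml"
+      name: plan
+  - name: Load Workload
+    include_vars:
+      file: "/tmp/hook/workload.yml"
+      name: workload
+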
+ + +
+ + diff --git a/modules/non-admin-permissions-for-ui/index.html b/modules/non-admin-permissions-for-ui/index.html new file mode 100644 index 00000000000..fb8ec2bfef7 --- /dev/null +++ b/modules/non-admin-permissions-for-ui/index.html @@ -0,0 +1,192 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Permissions needed by non-administrators to work with migration plan components

+
+

If you are an administrator, you can work with all components of migration plans (for example, providers, network mappings, and migration plans).

+
+
+

By default, non-administrators have limited ability to work with migration plans and their components. As an administrator, you can modify their roles to allow them full access to all components, or you can give them limited permissions.

+
+
+

For example, administrators can assign non-administrators one or more of the following cluster roles for migration plans:

+
+ + ++++ + + + + + + + + + + + + + + + + + + + + +
Table 1. Example migration plan roles and their privileges
Role | Description

plans.forklift.konveyor.io-v1beta1-view

Can view migration plans but not create, delete, or modify them

plans.forklift.konveyor.io-v1beta1-edit

Can create, delete or modify (all parts of edit permissions) individual migration plans

plans.forklift.konveyor.io-v1beta1-admin

All edit privileges and the ability to delete the entire collection of migration plans

+
+

Note that pre-defined cluster roles include a resource (for example, plans), an API group (for example, forklift.konveyor.io-v1beta1) and an action (for example, view, edit).

+
+
+

As a more comprehensive example, you can grant non-administrators the following set of permissions per namespace:

+
+
+
    +
  • +

    Create and modify storage maps, network maps, and migration plans for the namespaces they have access to

    +
  • +
  • +

    Attach providers created by administrators to storage maps, network maps, and migration plans

    +
  • +
  • +

    Not be able to create providers or to change system settings

    +
  • +
+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 2. Example permissions required for non-administrators to work with migration plan components but not create providers
Actions | API group | Resource

get, list, watch, create, update, patch, delete

forklift.konveyor.io

plans

get, list, watch, create, update, patch, delete

forklift.konveyor.io

migrations

get, list, watch, create, update, patch, delete

forklift.konveyor.io

hooks

get, list, watch

forklift.konveyor.io

providers

get, list, watch, create, update, patch, delete

forklift.konveyor.io

networkmaps

get, list, watch, create, update, patch, delete

forklift.konveyor.io

storagemaps

get, list, watch

forklift.konveyor.io

forkliftcontrollers

create, patch, delete

Empty string

secrets

+
+ + + + + +
+
Note
+
+
+

To create migration plans, non-administrators need the create permissions that are part of the edit roles for network maps and storage maps, even when using a template for a network map or a storage map.

+
+
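
The following is a minimal sketch of a namespaced Role that grants the permissions in Table 2; the role name and namespace are placeholders, and you would still need a RoleBinding to assign it to a user or group:

+
$ cat << EOF | oc apply -f -
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: <forklift-plan-editor>
+  namespace: <namespace>
+rules:
+- apiGroups: ["forklift.konveyor.io"]
+  resources: ["plans", "migrations", "hooks", "networkmaps", "storagemaps"]
+  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
+- apiGroups: ["forklift.konveyor.io"]
+  resources: ["providers", "forkliftcontrollers"]
+  verbs: ["get", "list", "watch"]
+- apiGroups: [""]
+  resources: ["secrets"]
+  verbs: ["create", "patch", "delete"]
+EOF
+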
+
+ + +
+ + diff --git a/modules/obtaining-console-url/index.html b/modules/obtaining-console-url/index.html new file mode 100644 index 00000000000..b0341a12ed2 --- /dev/null +++ b/modules/obtaining-console-url/index.html @@ -0,0 +1,107 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Getting the Forklift web console URL

+
+

You can get the Forklift web console URL at any time by using either the OKD web console, or the command line.

+
+
+
Prerequisites
+
    +
  • +

    KubeVirt Operator installed.

    +
  • +
  • +

    Forklift Operator installed.

    +
  • +
  • +

    You must be logged in as a user with cluster-admin privileges.

    +
  • +
+
+
+
Procedure
+
    +
  • +

    If you are using the OKD web console, follow these steps:

    +
  • +
+
+
+

In the OKD web console, click Networking → Routes, select the project in which Forklift is installed, and click the URL of the web console route.

+
+
+
    +
  • +

    If you are using the command line, get the Forklift web console URL with the following command:

    +
  • +
+
+
+


+
+
+

You can now launch a browser and navigate to the Forklift web console.

+
+ + +
+ + diff --git a/modules/openstack-prerequisites/index.html b/modules/openstack-prerequisites/index.html new file mode 100644 index 00000000000..d3d1736b42f --- /dev/null +++ b/modules/openstack-prerequisites/index.html @@ -0,0 +1,76 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

OpenStack prerequisites

+
+

The following prerequisites apply to OpenStack migrations:

+
+
+ +
+ + +
+ + diff --git a/modules/ostack-app-cred-auth/index.html b/modules/ostack-app-cred-auth/index.html new file mode 100644 index 00000000000..2feaed4635e --- /dev/null +++ b/modules/ostack-app-cred-auth/index.html @@ -0,0 +1,189 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Using application credential authentication with an OpenStack source provider

+
+

You can use application credential authentication, instead of username and password authentication, when you create an OpenStack source provider.

+
+
+

Forklift supports both of the following types of application credential authentication:

+
+
+
    +
  • +

    Application credential ID

    +
  • +
  • +

    Application credential name

    +
  • +
+
+
+

For each type of application credential authentication, you need to use data from OpenStack to create a Secret manifest.

+
+
+
Prerequisites
+

You have an OpenStack account.

+
+
+
Procedure
+
    +
  1. +

    In the dashboard of the OpenStack web console, click Project > API Access.

    +
  2. +
  3. +

    Expand Download OpenStack RC file and click OpenStack RC file.

    +
    +

    The file that is downloaded, referred to here as <openstack_rc_file>, includes the following fields used for application credential authentication:

    +
    +
    +
    +
    OS_AUTH_URL
    +OS_PROJECT_ID
    +OS_PROJECT_NAME
    +OS_DOMAIN_NAME
    +OS_USERNAME
    +
    +
    +
  4. +
  5. +

    To get the data needed for application credential authentication, run the following command:

    +
    +
    +
    $ openstack application credential create --role member --role reader --secret redhat forklift
    +
    +
    +
    +

    The output, referred to here as <openstack_credential_output>, includes:

    +
    +
    +
      +
    • +

      The id and secret that you need for authentication using an application credential ID

      +
    • +
    • +

      The name and secret that you need for authentication using an application credential name

      +
    • +
    +
    +
  6. +
  7. +

    Create a Secret manifest similar to the following:

    +
    +
      +
    • +

      For authentication using the application credential ID:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-appid
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: applicationcredential
      +  applicationCredentialID: <id_from_openstack_credential_output>
      +  applicationCredentialSecret: <secret_from_openstack_credential_output>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    • +

      For authentication using the application credential name:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-appname
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: applicationcredential
      +  applicationCredentialName: <name_from_openstack_credential_output>
      +  applicationCredentialSecret: <secret_from_openstack_credential_output>
      +  domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
      +  username: <OS_USERNAME_from_openstack_rc_file>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    +
    +
  8. +
  9. +

    Continue migrating your virtual machine according to the procedure in Migrating virtual machines, starting with step 2, "Create a Provider manifest for the source provider."

    +
  10. +
+
+ + +
+ + diff --git a/modules/ostack-token-auth/index.html b/modules/ostack-token-auth/index.html new file mode 100644 index 00000000000..9ac71bd99d8 --- /dev/null +++ b/modules/ostack-token-auth/index.html @@ -0,0 +1,180 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Using token authentication with an OpenStack source provider

+
+

You can use token authentication, instead of username and password authentication, when you create an OpenStack source provider.

+
+
+

Forklift supports both of the following types of token authentication:

+
+
+
    +
  • +

    Token with user ID

    +
  • +
  • +

    Token with user name

    +
  • +
+
+
+

For each type of token authentication, you need to use data from OpenStack to create a Secret manifest.

+
+
+
Prerequisites
+

You have an OpenStack account.

+
+
+
Procedure
+
    +
  1. +

    In the dashboard of the OpenStack web console, click Project > API Access.

    +
  2. +
  3. +

    Expand Download OpenStack RC file and click OpenStack RC file.

    +
    +

    The file that is downloaded, referred to here as <openstack_rc_file>, includes the following fields used for token authentication:

    +
    +
    +
    +
    OS_AUTH_URL
    +OS_PROJECT_ID
    +OS_PROJECT_NAME
    +OS_DOMAIN_NAME
    +OS_USERNAME
    +
    +
    +
  4. +
  5. +

    To get the data needed for token authentication, run the following command:

    +
    +
    +
    $ openstack token issue
    +
    +
    +
    +

    The output, referred to here as <openstack_token_output>, includes the token, userID, and projectID that you need for authentication using a token with user ID.

    +
    +
  6. +
  7. +

    Create a Secret manifest similar to the following:

    +
    +
      +
    • +

      For authentication using a token with user ID:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-tokenid
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: token
      +  token: <token_from_openstack_token_output>
      +  projectID: <projectID_from_openstack_token_output>
      +  userID: <userID_from_openstack_token_output>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    • +

      For authentication using a token with user name:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-tokenname
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: token
      +  token: <token_from_openstack_token_output>
      +  domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
      +  projectName: <OS_PROJECT_NAME_from_openstack_rc_file>
      +  username: <OS_USERNAME_from_openstack_rc_file>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    +
    +
  8. +
  9. +

    Continue migrating your virtual machine according to the procedure in Migrating virtual machines, starting with step 2, "Create a Provider manifest for the source provider."

    +
  10. +
+
+ + +
+ + diff --git a/modules/ova-prerequisites/index.html b/modules/ova-prerequisites/index.html new file mode 100644 index 00000000000..9f9084a095f --- /dev/null +++ b/modules/ova-prerequisites/index.html @@ -0,0 +1,130 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Open Virtual Appliance (OVA) prerequisites

+
+

The following prerequisites apply to Open Virtual Appliance (OVA) file migrations:

+
+
+
    +
  • +

    All OVA files are created by VMware vSphere.

    +
  • +
+
+
+ + + + + +
+
Note
+
+
+

Migration of OVA files that were not created by VMware vSphere but are compatible with vSphere might succeed. However, migration of such files is not supported by Forklift. Forklift supports only OVA files created by VMware vSphere.

+
+
+
+
+
    +
  • +

    The OVA files are in one or more folders under an NFS shared directory in one of the following structures:

    +
    +
      +
    • +

      In one or more compressed Open Virtualization Format (OVF) packages that hold all the VM information.

      +
      +

      The filename of each compressed package must have the .ova extension. Several compressed packages can be stored in the same folder.

      +
      +
      +

      When this structure is used, Forklift scans the root folder and the first-level subfolders for compressed packages.

      +
      +
      +

      For example, if the NFS share is /nfs, then:
      +The folder /nfs is scanned.
      +The folder /nfs/subfolder1 is scanned.
      +But, /nfs/subfolder1/subfolder2 is not scanned.

      +
      +
    • +
    • +

      In extracted OVF packages.

      +
      +

      When this structure is used, Forklift scans the root folder, first-level subfolders, and second-level subfolders for extracted OVF packages. +However, there can be only one .ovf file in a folder. Otherwise, the migration will fail.

      +
      +
      +

      For example, if the NFS share is /nfs, then:
      +The OVF file /nfs/vm.ovf is scanned.
      +The OVF file /nfs/subfolder1/vm.ovf is scanned.
      +The OVF file /nfs/subfolder1/subfolder2/vm.ovf is scanned.
      +But, the OVF file /nfs/subfolder1/subfolder2/subfolder3/vm.ovf is not scanned.

      +
      +
    • +
    +
    +
  • +
+
+ + +
+ + diff --git a/modules/retrieving-validation-service-json/index.html b/modules/retrieving-validation-service-json/index.html new file mode 100644 index 00000000000..383cfb40546 --- /dev/null +++ b/modules/retrieving-validation-service-json/index.html @@ -0,0 +1,483 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Retrieving the Inventory service JSON

+
+

You retrieve the Inventory service JSON by sending a query about a virtual machine (VM) to the Inventory service. The output contains an "input" key, which contains the inventory attributes that are queried by the Validation service rules.

+
+
+

You can create a validation rule based on any attribute in the "input" key, for example, input.snapshot.kind. An example of extracting this attribute with jq is shown after the following procedure.

+
+
+
Procedure
+
    +
  1. +

    Retrieve the routes for the project:

    +
    +
    +
    oc get route -n openshift-mtv
    +
    +
    +
  2. +
  3. +

    Retrieve the Inventory service route:

    +
    +
    +
    $ kubectl get route <inventory_service> -n konveyor-forklift
    +
    +
    +
  4. +
  5. +

    Retrieve the access token:

    +
    +
    +
    $ TOKEN=$(oc whoami -t)
    +
    +
    +
  6. +
  7. +

    Trigger an HTTP GET request (for example, using Curl):

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers -k
    +
    +
    +
  8. +
  9. +

    Retrieve the UUID of a provider:

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/<provider> -k (1)
    +
    +
    +
    +
      +
    1. +

      Allowed values for the provider are vsphere, ovirt, and openstack.

      +
    2. +
    +
    +
  10. +
  11. +

    Retrieve the VMs of a provider:

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/<provider>/<UUID>/vms -k
    +
    +
    +
  12. +
  13. +

    Retrieve the details of a VM:

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/<provider>/<UUID>/workloads/<vm> -k
    +
    +
    +
    +
    Example output
    +
    +
    {
    +    "input": {
    +        "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/workloads/vm-431",
    +        "id": "vm-431",
    +        "parent": {
    +            "kind": "Folder",
    +            "id": "group-v22"
    +        },
    +        "revision": 1,
    +        "name": "iscsi-target",
    +        "revisionValidated": 1,
    +        "isTemplate": false,
    +        "networks": [
    +            {
    +                "kind": "Network",
    +                "id": "network-31"
    +            },
    +            {
    +                "kind": "Network",
    +                "id": "network-33"
    +            }
    +        ],
    +        "disks": [
    +            {
    +                "key": 2000,
    +                "file": "[iSCSI_Datastore] iscsi-target/iscsi-target-000001.vmdk",
    +                "datastore": {
    +                    "kind": "Datastore",
    +                    "id": "datastore-63"
    +                },
    +                "capacity": 17179869184,
    +                "shared": false,
    +                "rdm": false
    +            },
    +            {
    +                "key": 2001,
    +                "file": "[iSCSI_Datastore] iscsi-target/iscsi-target_1-000001.vmdk",
    +                "datastore": {
    +                    "kind": "Datastore",
    +                    "id": "datastore-63"
    +                },
    +                "capacity": 10737418240,
    +                "shared": false,
    +                "rdm": false
    +            }
    +        ],
    +        "concerns": [],
    +        "policyVersion": 5,
    +        "uuid": "42256329-8c3a-2a82-54fd-01d845a8bf49",
    +        "firmware": "bios",
    +        "powerState": "poweredOn",
    +        "connectionState": "connected",
    +        "snapshot": {
    +            "kind": "VirtualMachineSnapshot",
    +            "id": "snapshot-3034"
    +        },
    +        "changeTrackingEnabled": false,
    +        "cpuAffinity": [
    +            0,
    +            2
    +        ],
    +        "cpuHotAddEnabled": true,
    +        "cpuHotRemoveEnabled": false,
    +        "memoryHotAddEnabled": false,
    +        "faultToleranceEnabled": false,
    +        "cpuCount": 2,
    +        "coresPerSocket": 1,
    +        "memoryMB": 2048,
    +        "guestName": "Red Hat Enterprise Linux 7 (64-bit)",
    +        "balloonedMemory": 0,
    +        "ipAddress": "10.19.2.96",
    +        "storageUsed": 30436770129,
    +        "numaNodeAffinity": [
    +            "0",
    +            "1"
    +        ],
    +        "devices": [
    +            {
    +                "kind": "RealUSBController"
    +            }
    +        ],
    +        "host": {
    +            "id": "host-29",
    +            "parent": {
    +                "kind": "Cluster",
    +                "id": "domain-c26"
    +            },
    +            "revision": 1,
    +            "name": "IP address or host name of the vCenter host or oVirt Engine host",
    +            "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/hosts/host-29",
    +            "status": "green",
    +            "inMaintenance": false,
    +            "managementServerIp": "10.19.2.96",
    +            "thumbprint": <thumbprint>,
    +            "timezone": "UTC",
    +            "cpuSockets": 2,
    +            "cpuCores": 16,
    +            "productName": "VMware ESXi",
    +            "productVersion": "6.5.0",
    +            "networking": {
    +                "pNICs": [
    +                    {
    +                        "key": "key-vim.host.PhysicalNic-vmnic0",
    +                        "linkSpeed": 10000
    +                    },
    +                    {
    +                        "key": "key-vim.host.PhysicalNic-vmnic1",
    +                        "linkSpeed": 10000
    +                    },
    +                    {
    +                        "key": "key-vim.host.PhysicalNic-vmnic2",
    +                        "linkSpeed": 10000
    +                    },
    +                    {
    +                        "key": "key-vim.host.PhysicalNic-vmnic3",
    +                        "linkSpeed": 10000
    +                    }
    +                ],
    +                "vNICs": [
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk2",
    +                        "portGroup": "VM_Migration",
    +                        "dPortGroup": "",
    +                        "ipAddress": "192.168.79.13",
    +                        "subnetMask": "255.255.255.0",
    +                        "mtu": 9000
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk0",
    +                        "portGroup": "Management Network",
    +                        "dPortGroup": "",
    +                        "ipAddress": "10.19.2.13",
    +                        "subnetMask": "255.255.255.128",
    +                        "mtu": 1500
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk1",
    +                        "portGroup": "Storage Network",
    +                        "dPortGroup": "",
    +                        "ipAddress": "172.31.2.13",
    +                        "subnetMask": "255.255.0.0",
    +                        "mtu": 1500
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk3",
    +                        "portGroup": "",
    +                        "dPortGroup": "dvportgroup-48",
    +                        "ipAddress": "192.168.61.13",
    +                        "subnetMask": "255.255.255.0",
    +                        "mtu": 1500
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk4",
    +                        "portGroup": "VM_DHCP_Network",
    +                        "dPortGroup": "",
    +                        "ipAddress": "10.19.2.231",
    +                        "subnetMask": "255.255.255.128",
    +                        "mtu": 1500
    +                    }
    +                ],
    +                "portGroups": [
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM Network",
    +                        "name": "VM Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch0"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-Management Network",
    +                        "name": "Management Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch0"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_10G_Network",
    +                        "name": "VM_10G_Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_Storage",
    +                        "name": "VM_Storage",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_DHCP_Network",
    +                        "name": "VM_DHCP_Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-Storage Network",
    +                        "name": "Storage Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_Isolated_67",
    +                        "name": "VM_Isolated_67",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch2"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_Migration",
    +                        "name": "VM_Migration",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch2"
    +                    }
    +                ],
    +                "switches": [
    +                    {
    +                        "key": "key-vim.host.VirtualSwitch-vSwitch0",
    +                        "name": "vSwitch0",
    +                        "portGroups": [
    +                            "key-vim.host.PortGroup-VM Network",
    +                            "key-vim.host.PortGroup-Management Network"
    +                        ],
    +                        "pNICs": [
    +                            "key-vim.host.PhysicalNic-vmnic4"
    +                        ]
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualSwitch-vSwitch1",
    +                        "name": "vSwitch1",
    +                        "portGroups": [
    +                            "key-vim.host.PortGroup-VM_10G_Network",
    +                            "key-vim.host.PortGroup-VM_Storage",
    +                            "key-vim.host.PortGroup-VM_DHCP_Network",
    +                            "key-vim.host.PortGroup-Storage Network"
    +                        ],
    +                        "pNICs": [
    +                            "key-vim.host.PhysicalNic-vmnic2",
    +                            "key-vim.host.PhysicalNic-vmnic0"
    +                        ]
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualSwitch-vSwitch2",
    +                        "name": "vSwitch2",
    +                        "portGroups": [
    +                            "key-vim.host.PortGroup-VM_Isolated_67",
    +                            "key-vim.host.PortGroup-VM_Migration"
    +                        ],
    +                        "pNICs": [
    +                            "key-vim.host.PhysicalNic-vmnic3",
    +                            "key-vim.host.PhysicalNic-vmnic1"
    +                        ]
    +                    }
    +                ]
    +            },
    +            "networks": [
    +                {
    +                    "kind": "Network",
    +                    "id": "network-31"
    +                },
    +                {
    +                    "kind": "Network",
    +                    "id": "network-34"
    +                },
    +                {
    +                    "kind": "Network",
    +                    "id": "network-57"
    +                },
    +                {
    +                    "kind": "Network",
    +                    "id": "network-33"
    +                },
    +                {
    +                    "kind": "Network",
    +                    "id": "dvportgroup-47"
    +                }
    +            ],
    +            "datastores": [
    +                {
    +                    "kind": "Datastore",
    +                    "id": "datastore-35"
    +                },
    +                {
    +                    "kind": "Datastore",
    +                    "id": "datastore-63"
    +                }
    +            ],
    +            "vms": null,
    +            "networkAdapters": [],
    +            "cluster": {
    +                "id": "domain-c26",
    +                "parent": {
    +                    "kind": "Folder",
    +                    "id": "group-h23"
    +                },
    +                "revision": 1,
    +                "name": "mycluster",
    +                "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/clusters/domain-c26",
    +                "folder": "group-h23",
    +                "networks": [
    +                    {
    +                        "kind": "Network",
    +                        "id": "network-31"
    +                    },
    +                    {
    +                        "kind": "Network",
    +                        "id": "network-34"
    +                    },
    +                    {
    +                        "kind": "Network",
    +                        "id": "network-57"
    +                    },
    +                    {
    +                        "kind": "Network",
    +                        "id": "network-33"
    +                    },
    +                    {
    +                        "kind": "Network",
    +                        "id": "dvportgroup-47"
    +                    }
    +                ],
    +                "datastores": [
    +                    {
    +                        "kind": "Datastore",
    +                        "id": "datastore-35"
    +                    },
    +                    {
    +                        "kind": "Datastore",
    +                        "id": "datastore-63"
    +                    }
    +                ],
    +                "hosts": [
    +                    {
    +                        "kind": "Host",
    +                        "id": "host-44"
    +                    },
    +                    {
    +                        "kind": "Host",
    +                        "id": "host-29"
    +                    }
    +                ],
    +                "dasEnabled": false,
    +                "dasVms": [],
    +                "drsEnabled": true,
    +                "drsBehavior": "fullyAutomated",
    +                "drsVms": [],
    +                "datacenter": null
    +            }
    +        }
    +    }
    +}
    +
    +
    +
  14. +
+
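
As a sketch of pulling a single attribute out of the response, assuming jq is installed, the following extracts input.snapshot.kind from the VM details retrieved in the last step:

+
$ curl -sk -H "Authorization: Bearer $TOKEN" \
+  https://<inventory_service_route>/providers/<provider>/<UUID>/workloads/<vm> \
+  | jq '.input.snapshot.kind'
+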
+ + +
+ + diff --git a/modules/retrieving-vmware-moref/index.html b/modules/retrieving-vmware-moref/index.html new file mode 100644 index 00000000000..c09e1e37969 --- /dev/null +++ b/modules/retrieving-vmware-moref/index.html @@ -0,0 +1,149 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Retrieving a VMware vSphere moRef

+
+

When you migrate VMs with a VMware vSphere source provider using Forklift from the CLI, you need to know the managed object reference (moRef) of certain entities in vSphere, such as datastores, networks, and VMs.

+
+
+

You can retrieve the moRef of one or more vSphere entities from the Inventory service. You can then use each moRef as a reference for retrieving the moRef of another entity.

+
+
+
Procedure
+
    +
  1. +

    Retrieve the routes for the project:

    +
    +
    +
    oc get route -n openshift-mtv
    +
    +
    +
  2. +
  3. +

    Retrieve the Inventory service route:

    +
    +
    +
    $ kubectl get route <inventory_service> -n konveyor-forklift
    +
    +
    +
  4. +
  5. +

    Retrieve the access token:

    +
    +
    +
    $ TOKEN=$(oc whoami -t)
    +
    +
    +
  6. +
  7. +

    Retrieve the moRef of a VMware vSphere provider:

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/vsphere -k
    +
    +
    +
  8. +
  9. +

    Retrieve the datastores of a VMware vSphere source provider:

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/vsphere/<provider id>/datastores/ -k
    +
    +
    +
    +
    Example output
    +
    +
    [
    +  {
    +    "id": "datastore-11",
    +    "parent": {
    +      "kind": "Folder",
    +      "id": "group-s5"
    +    },
    +    "path": "/Datacenter/datastore/v2v_general_porpuse_ISCSI_DC",
    +    "revision": 46,
    +    "name": "v2v_general_porpuse_ISCSI_DC",
    +    "selfLink": "providers/vsphere/01278af6-e1e4-4799-b01b-d5ccc8dd0201/datastores/datastore-11"
    +  },
    +  {
    +    "id": "datastore-730",
    +    "parent": {
    +      "kind": "Folder",
    +      "id": "group-s5"
    +    },
    +    "path": "/Datacenter/datastore/f01-h27-640-SSD_2",
    +    "revision": 46,
    +    "name": "f01-h27-640-SSD_2",
    +    "selfLink": "providers/vsphere/01278af6-e1e4-4799-b01b-d5ccc8dd0201/datastores/datastore-730"
    +  },
    + ...
    +
    +
    +
  10. +
+
+
+

In this example, the moRef of the datastore v2v_general_porpuse_ISCSI_DC is datastore-11 and the moRef of the datastore f01-h27-640-SSD_2 is datastore-730.
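For example, assuming the provider UUID shown in the selfLink of the example output, you can retrieve a single datastore by appending its moRef to the datastores path (a sketch, not part of the original procedure):

$ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/vsphere/<provider id>/datastores/datastore-11 -k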

+
+ + +
+ + diff --git a/modules/rhv-prerequisites/index.html b/modules/rhv-prerequisites/index.html new file mode 100644 index 00000000000..2eedb8a52b2 --- /dev/null +++ b/modules/rhv-prerequisites/index.html @@ -0,0 +1,129 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

oVirt prerequisites

+
+

The following prerequisites apply to oVirt migrations:

+
+
+
    +
  • +

    To create a source provider, you must have at least the UserRole and ReadOnlyAdmin roles assigned to you. These are the minimum required permissions; however, any other administrator or superuser permissions also work.

    +
  • +
+
+
+ + + + + +
+
Important
+
+
+

You must keep the UserRole and ReadOnlyAdmin roles until the virtual machines of the source provider have been migrated. Otherwise, the migration will fail.

+
+
+
+
+
    +
  • +

    To migrate virtual machines:

    +
    +
      +
    • +

      You must have one of the following:

      +
      +
        +
      • +

        oVirt admin permissions. These permissions allow you to migrate any virtual machine in the system.

        +
      • +
      • +

        DiskCreator and UserVmManager permissions on every virtual machine you want to migrate.

        +
      • +
      +
      +
    • +
    • +

      You must use a compatible version of oVirt.

      +
    • +
    • +

      You must have the Engine CA certificate, unless it was replaced by a third-party certificate, in which case you specify the Engine Apache CA certificate.

      +
      +

      You can obtain the Engine CA certificate by navigating to https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA in a browser. A CLI alternative is sketched after this list.

      +
      +
    • +
    • +

      If you are migrating a virtual machine with a direct LUN disk, ensure that the nodes in the KubeVirt destination cluster that the VM is expected to run on can access the backend storage.

      +
    • +
    +
    +
  • +
+
+
+
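The same certificate can also be fetched from the CLI; a sketch, assuming the Engine host is reachable and curl is available:

$ curl -k 'https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA' -o engine-ca.pem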


+
+ + +
+ + diff --git a/modules/rn-2.0/index.html b/modules/rn-2.0/index.html new file mode 100644 index 00000000000..29fcfa8cf03 --- /dev/null +++ b/modules/rn-2.0/index.html @@ -0,0 +1,163 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.0

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere with Forklift.

+
+
+

The release notes describe new features and enhancements, known issues, and technical changes.

+
+
+
+
+

New features and enhancements

+
+
+

This release adds the following features and improvements.

+
+
+
Warm migration
+

Warm migration reduces downtime by copying most of the VM data during a precopy stage while the VMs are running. During the cutover stage, the VMs are stopped and the rest of the data is copied.

+
+
+
Cancel migration
+

You can cancel an entire migration plan or individual VMs while a migration is in progress. A canceled migration plan can be restarted in order to migrate the remaining VMs.

+
+
+
Migration network
+

You can select a migration network for the source and target providers for improved performance. By default, data is copied using the VMware administration network and the OKD pod network.

+
+
+
Validation service
+

The validation service checks source VMs for issues that might affect migration and flags the VMs with concerns in the migration plan.

+
+
+ + + + + +
+
Important
+
+
+

The validation service is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

+
+
+

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

+
+
+
+
+
+
+

Known issues

+
+
+

This section describes known issues and mitigations.

+
+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Network map displays a "Destination network not found" error
+

If the network map remains in a NotReady state and the NetworkMap manifest displays a Destination network not found error, the cause is a missing network attachment definition. You must create a network attachment definition for each additional destination network before you create the network map. (BZ#1971259)

+
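For reference, a minimal NetworkAttachmentDefinition sketch; the network name, namespace, and bridge configuration are placeholders, not values from this document:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: <destination_network>
  namespace: <namespace>
spec:
  # Bridge CNI configuration; adjust the type and bridge to match your cluster.
  config: '{"cniVersion": "0.3.1", "name": "<destination_network>", "type": "bridge", "bridge": "br1"}'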
+
+
Warm migration gets stuck during third precopy
+

Warm migration uses changed block tracking snapshots to copy data during the precopy stage. The snapshots are created at one-hour intervals by default. When a snapshot is created, its contents are copied to the destination cluster. However, when the third snapshot is created, the first snapshot is deleted and the block tracking is lost. (BZ#1969894)

+
+
+

You can do one of the following to mitigate this issue:

+
+
+
    +
  • +

    Start the cutover stage no more than one hour after the precopy stage begins so that only one internal snapshot is created.

    +
  • +
  • +

    Increase the snapshot interval in the vm-import-controller-config config map to 720 minutes:

    +
    +
    +
    $ kubectl patch configmap/vm-import-controller-config \
+  -n openshift-cnv \
+  -p '{"data": {"warmImport.intervalMinutes": "720"}}'
    +
    +
    +
  • +
+
+
+
+ + +
+ + diff --git a/modules/rn-2.1/index.html b/modules/rn-2.1/index.html new file mode 100644 index 00000000000..7f3576bd69a --- /dev/null +++ b/modules/rn-2.1/index.html @@ -0,0 +1,191 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.1

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.

+
+
+

The release notes describe new features and enhancements, known issues, and technical changes.

+
+
+
+
+

Technical changes

+
+
+
VDDK image added to HyperConverged custom resource
+

The VMware Virtual Disk Development Kit (VDDK) image must be added to the HyperConverged custom resource. Before this release, it was referenced in the v2v-vmware config map.

+
+
+
+
+

New features and enhancements

+
+
+

This release adds the following features and improvements.

+
+
+
Cold migration from oVirt
+

You can perform a cold migration of VMs from oVirt.

+
+
+
Migration hooks
+

You can create migration hooks to run Ansible playbooks or custom code before or after migration.

+
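A minimal sketch of a Hook custom resource that a migration plan can reference; the image and field values are illustrative assumptions, not taken from this document:

apiVersion: forklift.konveyor.io/v1beta1
kind: Hook
metadata:
  name: example-hook
  namespace: konveyor-forklift
spec:
  # Image that runs the hook; the playbook is supplied base64-encoded.
  image: quay.io/konveyor/hook-runner
  playbook: <base64-encoded Ansible playbook>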
+
+
Filtered must-gather data collection
+

You can specify options for the must-gather tool that enable you to filter the data by namespace, migration plan, or VMs.

+
+
+
SR-IOV network support
+

You can migrate VMs with a single root I/O virtualization (SR-IOV) network interface if the KubeVirt environment has an SR-IOV network.

+
+
+
+
+

Known issues

+
+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Disk copy stage does not progress
+

The disk copy stage of an oVirt VM does not progress and the Forklift web console does not display an error message. (BZ#1990596)

+
+
+

The cause of this problem might be one of the following conditions:

+
+
+
    +
  • +

    The storage class does not exist on the target cluster.

    +
  • +
  • +

    The VDDK image has not been added to the HyperConverged custom resource.

    +
  • +
  • +

    The VM does not have a disk.

    +
  • +
  • +

    The VM disk is locked.

    +
  • +
  • +

    The VM time zone is not set to UTC.

    +
  • +
  • +

    The VM is configured for a USB device.

    +
  • +
+
+
+

To disable USB devices, see Configuring USB Devices in the Red Hat Virtualization documentation.

+
+
+

To determine the cause:

+
+
+
    +
  1. +

    Click Workloads → Virtualization in the OKD web console.

    +
  2. +
  3. +

    Click the Virtual Machines tab.

    +
  4. +
  5. +

    Select a virtual machine to open the Virtual Machine Overview screen.

    +
  6. +
  7. +

    Click Status to view the status of the virtual machine.

    +
  8. +
+
+
+
VM time zone must be UTC with no offset
+

The time zone of the source VMs must be UTC with no offset. You can set the time zone to GMT Standard Time after first assessing the potential impact on the workload. (BZ#1993259)

+
+
+
oVirt resource UUID causes a "Provider not found" error
+

If an oVirt resource UUID is used in a Host, NetworkMap, StorageMap, or Plan custom resource (CR), a "Provider not found" error is displayed.

+
+
+

You must use the resource name. (BZ#1994037)

+
+
+
Same oVirt resource name in different data centers causes ambiguous reference
+

If an oVirt resource name is used in a NetworkMap, StorageMap, or Plan custom resource (CR) and if the same resource name exists in another data center, the Plan CR displays a critical "Ambiguous reference" condition. You must rename the resource or use the resource UUID in the CR.

+
+
+

In the web console, the resource name appears twice in the same list without a data center reference to distinguish them. You must rename the resource. (BZ#1993089)

+
+
+
Snapshots are not deleted after warm migration
+

Snapshots are not deleted automatically after a successful warm migration of a VMware VM. You must delete the snapshots manually in VMware vSphere. (BZ#2001270)

+
+
+
+ + +
+ + diff --git a/modules/rn-2.2/index.html b/modules/rn-2.2/index.html new file mode 100644 index 00000000000..87d93197939 --- /dev/null +++ b/modules/rn-2.2/index.html @@ -0,0 +1,219 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.2

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.

+
+
+

The release notes describe technical changes, new features and enhancements, and known issues.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Setting the precopy time interval for warm migration
+

You can set the time interval between snapshots taken during the precopy stage of warm migration.

+
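A sketch of setting the interval from the CLI, assuming the controller_precopy_interval parameter of the ForkliftController custom resource in the konveyor-forklift namespace (both are assumptions):

$ kubectl patch forkliftcontroller/forklift-controller -n konveyor-forklift \
  --type merge -p '{"spec": {"controller_precopy_interval": 60}}'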
+
+
+
+

New features and enhancements

+
+
+

This release has the following features and improvements:

+
+
+
Creating validation rules
+

You can create custom validation rules to check the suitability of VMs for migration. Validation rules are based on the VM attributes collected by the Provider Inventory service and written in Rego, the Open Policy Agent native query language.

+
+
+
Downloading logs by using the web console
+

You can download logs for a migration plan or a migrated VM by using the Forklift web console.

+
+
+
Duplicating a migration plan by using the web console
+

You can duplicate a migration plan by using the web console, including its VMs, mappings, and hooks, in order to edit the copy and run as a new migration plan.

+
+
+
Archiving a migration plan by using the web console
+

You can archive a migration plan by using the Forklift web console. Archived plans can be viewed or duplicated. They cannot be run, edited, or unarchived.

+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+
Certain Validation service issues do not block migration
+

Certain Validation service issues, which are marked as Critical and display the assessment text, The VM will not be migrated, do not block migration. (BZ#2025977)

+
+
+

The following Validation service assessments do not block migration:

+
+ + ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Issues that do not block migration
Assessment | Result

The disk interface type is not supported by OpenShift Virtualization (only sata, virtio_scsi and virtio interface types are currently supported).

The migrated VM will have a virtio disk if the source interface is not recognized.

The NIC interface type is not supported by OpenShift Virtualization (only e1000, rtl8139 and virtio interface types are currently supported).

The migrated VM will have a virtio NIC if the source interface is not recognized.

The VM is using a vNIC profile configured for host device passthrough, which is not currently supported by OpenShift Virtualization.

The migrated VM will have an SR-IOV NIC. The destination network must be set up correctly.

One or more of the VM’s disks has an illegal or locked status condition.

The migration will proceed but the disk transfer is likely to fail.

The VM has a disk with a storage type other than image, and this is not currently supported by OpenShift Virtualization.

The migration will proceed but the disk transfer is likely to fail.

The VM has one or more snapshots with disks in ILLEGAL state. This is not currently supported by OpenShift Virtualization.

The migration will proceed but the disk transfer is likely to fail.

The VM has USB support enabled, but USB devices are not currently supported by OpenShift Virtualization.

The migrated VM will not have USB devices.

The VM is configured with a watchdog device, which is not currently supported by OpenShift Virtualization.

The migrated VM will not have a watchdog device.

The VM’s status is not up or down.

The migration will proceed but it might hang if the VM cannot be powered off.

+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Missing resource causes error message in current.log file
+

If a resource does not exist, for example, if the virt-launcher pod does not exist because the migrated VM is powered off, its log is unavailable.

+
+
+

The following error appears in the missing resource’s current.log file when it is downloaded from the web console or created with the must-gather tool: error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'. (BZ#2023260)

+
+
+
Importer pod log is unavailable after warm migration
+

Retaining the importer pod for debug purposes causes warm migration to hang during the precopy stage. (BZ#2016290)

+
+
+

As a temporary workaround, the importer pod is removed at the end of the precopy stage so that the precopy succeeds. However, this means that the importer pod log is not retained after warm migration is complete. You can only view the importer pod log by using the oc logs -f <cdi-importer_pod> command during the precopy stage.

+
+
+

This issue only affects the importer pod log and warm migration. Cold migration and the virt-v2v logs are not affected.

+
+
+
Deleting a migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. You must archive a migration plan before deleting it in order to clean up the temporary resources. (BZ#2018974)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Migration plan details page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Network, storage, and VM referenced by name in the Plan CR are not displayed in the web console
+

If a Plan CR references storage, network, or VMs by name instead of by ID, the resources do not appear in the Forklift web console. The migration plan cannot be edited or duplicated. (BZ#1986020)

+
+
+
Log archive file includes logs of a deleted migration plan or VM
+

If you delete a migration plan and then run a new migration plan with the same name or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

+
+
+
If a target VM is deleted during migration, its migration status is Succeeded in the Plan CR
+

If you delete a target VirtualMachine CR during the 'Convert image to kubevirt' step of the migration, the Migration details page of the web console displays the state of the step as VirtualMachine CR not found. However, the status of the VM migration is Succeeded in the Plan CR file and in the web console. (BZ#2031529)

+
+
+
+ + +
+ + diff --git a/modules/rn-2.3/index.html b/modules/rn-2.3/index.html new file mode 100644 index 00000000000..56e528bf60a --- /dev/null +++ b/modules/rn-2.3/index.html @@ -0,0 +1,156 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.3

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.

+
+
+

The release notes describe technical changes, new features and enhancements, and known issues.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Setting the VddkInitImage path is part of the procedure for adding a VMware provider
+

In the web console, you enter the VddkInitImage path when adding a VMware provider. Alternatively, from the CLI, you add the VddkInitImage path to the Provider CR for VMware migrations.

+
+
+
The StorageProfile resource needs to be updated for a non-provisioner storage class
+

You must update the StorageProfile resource with accessModes and volumeMode for non-provisioner storage classes such as NFS. The documentation includes a link to the relevant procedure.

+
+
+
+
+

New features and enhancements

+
+
+

This release has the following features and improvements:

+
+
+
Forklift 2.3 supports warm migration from oVirt
+

You can use warm migration to migrate VMs from both VMware and oVirt.

+
+
+
The minimal sufficient set of privileges for VMware users is established
+

VMware users do not have to have full cluster-admin privileges to perform a VM migration. The minimal sufficient set of user privileges has been established and documented.

+
+
+
Forklift documentation is updated with instructions on using hooks
+

Forklift documentation includes instructions on adding hooks to migration plans and running hooks on VMs.

+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+
Some warm migrations from oVirt might fail
+

When you run a migration plan for warm migration of multiple VMs from oVirt, the migrations of some VMs might fail during the cutover stage. In that case, restart the migration plan and set the cutover time for the VM migrations that failed in the first run. (BZ#2063531)

+
+
+
Snapshots are not deleted after warm migration
+

The Migration Controller service does not delete snapshots automatically after a successful warm migration of an oVirt VM. You can delete the snapshots manually. (BZ#2053183)

+
+
+
Warm migration from oVirt fails if a snapshot operation is performed on the source VM
+

If the user performs a snapshot operation on the source VM at the time when a migration snapshot is scheduled, the migration fails instead of waiting for the user’s snapshot operation to finish. (BZ#2057459)

+
+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Deleting a migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. You must archive a migration plan before deleting it in order to clean up the temporary resources. (BZ#2018974)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Migration plan details page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Log archive file includes logs of a deleted migration plan or VM
+

If you delete a migration plan and then run a new migration plan with the same name or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

+
+
+
Migration of virtual machines with encrypted partitions fails during conversion
+

The problem occurs for both vSphere and oVirt migrations.

+
+
+
Forklift 2.3.4 only: When the source provider is oVirt, duplicating a migration plan fails in either the network mapping stage or the storage mapping stage.
+

Possible workaround: Delete cache in the browser or restart the browser. (BZ#2143191)

+
+
+
+ + +
+ + diff --git a/modules/rn-2.4/index.html b/modules/rn-2.4/index.html new file mode 100644 index 00000000000..9d59585ff00 --- /dev/null +++ b/modules/rn-2.4/index.html @@ -0,0 +1,260 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.4

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere, oVirt, or OpenStack to KubeVirt with Forklift.

+
+
+

The release notes describe technical changes, new features and enhancements, and known issues.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Faster disk image migration from oVirt
+

Disk images are no longer converted by using virt-v2v when migrating from oVirt. This change speeds up migrations and also allows migration of guest operating systems that are not supported by virt-v2v. (forklift-controller#403)

+
+
+
Faster disk transfers by ovirt-imageio client (ovirt-img)
+

Disk transfers use the ovirt-imageio client (ovirt-img) instead of the Containerized Data Importer (CDI) when migrating from oVirt to the local OpenShift Container Platform cluster, accelerating the migration.

+
+
+
Faster migration using conversion pod disk transfer
+

When migrating from vSphere to the local OpenShift Container Platform cluster, the conversion pod transfers the disk data instead of Containerized Data Importer (CDI), accelerating the migration.

+
+
+
Migrated virtual machines are not scheduled on the target OCP cluster
+

The migrated virtual machines are no longer scheduled on the target OpenShift Container Platform cluster. This enables migrating VMs that cannot start due to limit constraints on the target at migration time.

+
+
+
StorageProfile resource needs to be updated for a non-provisioner storage class
+

You must update the StorageProfile resource with accessModes and volumeMode for non-provisioner storage classes such as NFS.

+
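A sketch of such an update, assuming the CDI StorageProfile API with claimPropertySets; the access mode and volume mode values are illustrative:

$ kubectl patch storageprofile <storage_class_name> --type merge -p \
  '{"spec": {"claimPropertySets": [{"accessModes": ["ReadWriteOnce"], "volumeMode": "Filesystem"}]}}'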
+
+
VDDK 8 can be used in the VDDK image
+

Previous versions of Forklift supported only using VDDK version 7 for the VDDK image. Forklift supports both versions 7 and 8, as follows:

+
+
+
    +
  • +

    If you are migrating to OCP 4.12 or earlier, use VDDK version 7.

    +
  • +
  • +

    If you are migrating to OCP 4.13 or later, use VDDK version 8.

    +
  • +
+
+
+
+
+

New features and enhancements

+
+
+

This release has the following features and improvements:

+
+
+
OpenStack migration
+

Forklift now supports migrations with OpenStack as a source provider. This feature is provided as a Technology Preview and supports only cold migrations.

+
+
+
OCP console plugin
+

The Forklift Operator now integrates the Forklift web console into the OKD web console. The new UI operates as an OCP Console plugin that adds the sub-menu Migration to the navigation bar. The new UI is implemented in version 2.4, and the old UI is disabled. You can enable the old UI by setting feature_ui: true in ForkliftController. (MTV-427)

+
+
+
Skip certificate validation option
+

A Skip certificate validation option was added to the VMware and oVirt providers. If selected, the provider’s certificate is not validated, and the UI does not ask you to specify a CA certificate.

+
+
+
Only third-party certificate required
+

You need to specify only the third-party certificate when defining an oVirt provider whose Manager CA certificate was replaced by a third-party certificate.

+
+
+
Conversion of VMs with RHEL9 guest operating system
+

Cold migrations from vSphere to a local Red Hat OpenShift cluster use virt-v2v on RHEL 9. (MTV-332)

+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+
Deleting migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Log archive file includes logs of a deleted migration plan or VM
+

If you delete a migration plan and then run a new migration plan with the same name, or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

+
+
+
Migration of virtual machines with encrypted partitions fails during conversion
+

vSphere only: Migrations from oVirt and OpenStack do not fail, but the encryption key might be missing on the target OCP cluster.

+
+
+
Snapshots that are created during the migration in OpenStack are not deleted
+

The Migration Controller service does not automatically delete snapshots that are created during the migration of source virtual machines in OpenStack. Workaround: Remove the snapshots manually in OpenStack.

+
+
+
oVirt snapshots are not deleted after a successful migration
+

The Migration Controller service does not delete snapshots automatically after a successful warm migration of an oVirt VM. Workaround: Remove the snapshots manually in oVirt. (MTV-349)

+
+
+
Migration fails during precopy/cutover while a snapshot operation is executed on the source VM
+

Some warm migrations from oVirt might fail. When running a migration plan for warm migration of multiple VMs from oVirt, the migrations of some VMs might fail during the cutover stage. In that case, restart the migration plan and set the cutover time for the VM migrations that failed in the first run.

+
+
+

Warm migration from oVirt fails if a snapshot operation is performed on the source VM. If the user performs a snapshot operation on the source VM at the time when a migration snapshot is scheduled, the migration fails instead of waiting for the user’s snapshot operation to finish. (MTV-456)

+
+
+
Cannot schedule migrated VM with multiple disks to more than one storage class of type hostPath
+

When migrating a VM with multiple disks to more than one storage class of type hostPath, the VM might not be schedulable. Workaround: Use shared storage on the target OCP cluster.

+
+
+
Deleting migrated VM does not remove PVC and PV
+

When removing a VM that was migrated, its persistent volume claims (PVCs) and persistent volumes (PVs) are not deleted. Workaround: Remove the CDI importer pods and then remove the remaining PVCs and PVs. (MTV-492)

+
+
+
PVC deletion hangs after archiving and deleting migration plan
+

When a migration fails, its PVCs and PVs are not deleted as expected when its migration plan is archived and deleted. Workaround: Remove the CDI importer pods and then remove the remaining PVCs and PVs. (MTV-493)

+
+
+
VM with multiple disks may boot from non-bootable disk after migration
+

A migrated VM with multiple disks might not be able to boot on the target OCP cluster. Workaround: Set the boot order appropriately so that the VM boots from the bootable disk. (MTV-433)

+
+
+
Non-supported guest operating systems in warm migrations
+

Warm migrations and migrations to remote OCP clusters from vSphere do not support all types of guest operating systems that are supported in cold migrations to the local OCP cluster. This is a consequence of using RHEL 8 in the former case and RHEL 9 in the latter case. See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 for the list of supported guest operating systems.

+
+
+
VMs from vSphere with RHEL 9 guest operating system may start with network interfaces that are down
+

When migrating VMs that are installed with RHEL 9 as guest operating system from vSphere, their network interfaces could be disabled when they start in OpenShift Virtualization. (MTV-491)

+
+
+
Upgrade from 2.4.0 fails
+

When upgrading from MTV 2.4.0 to a later version, the operation fails with an error that says the field 'spec.selector' of deployment forklift-controller is immutable. Workaround: Remove the custom resource forklift-controller of type ForkliftController from the installed namespace, and recreate it. Refresh the OCP Console once the forklift-console-plugin pod runs to load the upgraded Forklift web console. (MTV-518)

+
+
+
+
+

Resolved issues

+
+
+

This release has the following resolved issues:

+
+
+
Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack)
+

A flaw was found in handling multiplexed streams in the HTTP/2 protocol. In previous releases of MTV, the HTTP/2 protocol allowed a denial of service (server resource consumption) because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection, which resulted in a denial of service due to server resource consumption.

+
+
+

This issue has been resolved in MTV 2.4.3 and 2.5.2. It is advised to update to one of these versions of MTV or later.

+
+ +
+
Improve invalid/conflicting VM name handling
+

The automatic renaming of VMs during migration to fit RFC 1123 has been improved. This feature, which was introduced in 2.3.4, is enhanced to cover more special cases. (MTV-212)

+
+
+
Prevent locking user accounts due to incorrect credentials
+

If a user specifies an incorrect password for an oVirt provider, the user account is no longer locked in oVirt. An error is returned when the oVirt Manager is accessible and the provider is added. If the oVirt Manager is inaccessible, the provider is added, but no further connection attempts are made after the initial failure due to incorrect credentials. (MTV-324)

+
+
+
Users without cluster-admin role can create new providers
+

Previously, the cluster-admin role was required to browse and create providers. In this release, users with sufficient permissions on MTV resources (providers, plans, migrations, NetworkMaps, StorageMaps, hooks) can operate MTV without cluster-admin permissions. (MTV-334)

+
+
+
Convert i440fx to q35
+

Migration of virtual machines with i440fx chipset is now supported. The chipset is converted to q35 during the migration. (MTV-430)

+
+
+
Preserve the UUID setting in SMBIOS for a VM that is migrated from oVirt
+

The Universal Unique ID (UUID) number within the System Management BIOS (SMBIOS) no longer changes for VMs that are migrated from oVirt. This enhancement enables applications that operate within the guest operating system and rely on this setting, such as for licensing purposes, to operate on the target OCP cluster in a manner similar to that of oVirt. (MTV-597)

+
+
+
Do not expose password for oVirt in error messages
+

Previously, the password that was specified for oVirt manager appeared in error messages that were displayed in the web console and logs when failing to connect to oVirt. In this release, error messages that are generated when failing to connect to oVirt do not reveal the password for oVirt manager.

+
+
+
QEMU guest agent is now installed on migrated VMs
+

The QEMU guest agent is installed on VMs during cold migration from vSphere. (BZ#2018062)

+
+
+
+ + +
+ + diff --git a/modules/rn-2.5/index.html b/modules/rn-2.5/index.html new file mode 100644 index 00000000000..61ec3bbc6df --- /dev/null +++ b/modules/rn-2.5/index.html @@ -0,0 +1,464 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.5

+
+
+
+

You can use Forklift to migrate virtual machines from the following source providers to KubeVirt destination providers:

+
+
+
    +
  • +

    VMware vSphere

    +
  • +
  • +

    oVirt

    +
  • +
  • +

    OpenStack

    +
  • +
  • +

    Open Virtual Appliances (OVAs) that were created by VMware vSphere

    +
  • +
  • +

    Remote KubeVirt clusters

    +
  • +
+
+
+

The release notes describe technical changes, new features and enhancements, and known issues for Forklift.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Migration from OpenStack moves to being a fully supported feature
+

In this version of Forklift, migration using OpenStack source providers graduated from a Technology Preview feature to a fully supported feature.

+
+
+
Disabling FIPS
+

Forklift enables migrations from vSphere source providers by not enforcing Extended Master Secret (EMS). This enables migrating from all vSphere versions that Forklift supports, including migrations that do not meet 2023 FIPS requirements.

+
+
+
Integration of the create and update provider user interface
+

The user interface of the create and update providers now aligns with the look and feel of the OKD web console and displays up-to-date data.

+
+
+
Standalone UI
+

The old UI of Forklift 2.3 can no longer be enabled by setting feature_ui: true in ForkliftController.

+
+
+
Support deployment on OKD 4.15
+

Forklift 2.5.6 can be deployed on OKD 4.15 clusters.

+
+
+
+
+

New features and enhancements

+
+
+

This release has the following features and improvements:

+
+
+
Migration of OVA files from VMware vSphere
+

In Forklift 2.5, you can migrate by using Open Virtual Appliance (OVA) files that were created by VMware vSphere as source providers. (MTV-336)

+
+
+ + + + + +
+
Note
+
+
+

Migration of OVA files that were not created by VMware vSphere but are compatible with vSphere might succeed. However, migration of such files is not supported by Forklift. Forklift supports only OVA files created by VMware vSphere.

+
+
+
+
+


+
+
+
Migrating VMs between OKD clusters
+

In Forklift 2.5, you can now use a KubeVirt provider as both a source provider and a destination provider. You can migrate VMs from the cluster that Forklift is deployed on to another cluster, or from a remote cluster to the cluster that Forklift is deployed on. (MTV-571)

+
+
+
Migration of VMs with direct LUNs from RHV
+

During the migration from oVirt (oVirt), direct Logical Units (LUNs) are detached from the source virtual machines and attached to the target virtual machines. Note that this mechanism does not work yet for Fibre Channel. (MTV-329)

+
+
+
Additional authentication methods for OpenStack
+

In addition to standard password authentication, Forklift supports the following authentication methods: Token authentication and Application credential authentication. (MTV-539)

+
+
+
Validation rules for OpenStack
+

The validation service includes default validation rules for virtual machines from OpenStack. (MTV-508)

+
+
+
VDDK is now optional for VMware vSphere providers
+

You can now create the VMware vSphere source provider without specifying a VMware Virtual Disk Development Kit (VDDK) init image. It is strongly recommended that you create a VDDK init image to accelerate migrations.

+
+
+
Deployment on OKE enabled
+

In Forklift 2.5.3, deployment on OpenShift Kubernetes Engine (OKE) has been enabled. For more information, see About OpenShift Kubernetes Engine. (MTV-803)

+
+
+
Migration of VMs to destination storage classes with encrypted RBD now supported
+

In Forklift 2.5.4, migration of VMs to destination storage classes that have encrypted RADOS Block Devices (RBD) volumes is now supported.

+
+
+

To make use of this new feature, set the value of the parameter controller_block_overhead to 1Gi, following the procedure in Configuring the MTV Operator. (MTV-851)

+
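A sketch of setting this parameter from the CLI; the custom resource name and namespace are assumptions:

$ kubectl patch forkliftcontroller/forklift-controller -n konveyor-forklift \
  --type merge -p '{"spec": {"controller_block_overhead": "1Gi"}}'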
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+
Deleting migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Migration of virtual machines with encrypted partitions fails during conversion
+

vSphere only: Migrations from oVirt and OpenStack do not fail, but the encryption key may be missing on the target OKD cluster.

+
+
+
Migration fails during precopy/cutover while performing a snapshot operation on the source VM
+

Warm migration from oVirt fails if a snapshot operation is triggered and running on the source VM at the same time as the migration is scheduled. The migration does not wait for the snapshot operation to finish. (MTV-456)

+
+
+
Unable to schedule migrated VM with multiple disks to more than one storage class of type hostPath
+

When migrating a VM with multiple disks to more than one storage class of type hostPath, the VM might not be schedulable. Workaround: Use shared storage on the target OKD cluster.

+
+
+
Non-supported guest operating systems in warm migrations
+

Warm migrations and migrations to remote OKD clusters from vSphere do not support all types of guest operating systems that are supported in cold migrations to the local OKD cluster. This is a consequence of using RHEL 8 in the former case and RHEL 9 in the latter case. See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 for the list of supported guest operating systems.

+
+
+
VMs from vSphere with RHEL 9 guest operating system can start with network interfaces that are down
+

When migrating VMs that are installed with RHEL 9 as guest operating system from vSphere, the network interfaces of the VMs could be disabled when they start in OpenShift Virtualization. (MTV-491)

+
+
+
Import OVA: ConnectionTestFailed message appears when adding OVA provider
+

When adding an OVA provider, the error message ConnectionTestFailed can appear, although the provider is created successfully. If the message does not disappear after a few minutes and the provider status does not move to Ready, this means that the OVA server pod creation has failed. (MTV-671)

+
+
+
Left over ovirtvolumepopulator from failed migration causes plan to stay indefinitely in CopyDisks phase
+

An outdated ovirtvolumepopulator in the namespace, left over from an earlier failed migration, stops a new plan of the same VM when it transitions to CopyDisks phase. The plan remains in that phase indefinitely. (MTV-929)

+
+
+
Unclear error message when Forklift fails to build a PVC
+

The migration fails to build the Persistent Volume Claim (PVC) if the destination storage class does not have a configured storage profile. The forklift-controller raises an error message without a clear reason for failing to create a PVC. (MTV-928)

+
+
+

For a complete list of all known issues in this release, see the list of Known Issues in Jira.

+
+
+
+
+

Resolved issues

+
+
+

This release has the following resolved issues:

+
+
+
Flaw was found in jsrsasign package which is vulnerable to Observable Discrepancy
+

Versions of the package jsrsasign before 11.0.0, used in earlier releases of Forklift, are vulnerable to Observable Discrepancy in the RSA PKCS1.5 or RSA-OAEP decryption process. This discrepancy means an attacker could decrypt ciphertexts by exploiting this vulnerability. However, exploiting this vulnerability requires the attacker to have access to a large number of ciphertexts encrypted with the same key. This issue has been resolved in Forklift 2.5.5 by upgrading the package jsrsasign to version 11.0.0.

+
+
+

For more information, see CVE-2024-21484.

+
+
+
Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack)
+

A flaw was found in handling multiplexed streams in the HTTP/2 protocol. In previous releases of Forklift, the HTTP/2 protocol allowed a denial of service (server resource consumption) because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection, which resulted in a denial of service due to server resource consumption.

+
+
+

This issue has been resolved in Forklift 2.5.2. It is advised to update to this version of MTV or later.

+
+ +
+
Gin Web Framework does not properly sanitize filename parameter of Context.FileAttachment function
+

A flaw was found in the Gin-Gonic Gin Web Framework, used by Forklift. The filename parameter of the Context.FileAttachment function was not properly sanitized. This flaw in the package could allow a remote attacker to bypass security restrictions caused by improper input validation by the filename parameter of the Context.FileAttachment function. A maliciously created filename could cause the Content-Disposition header to be sent with an unexpected filename value, or otherwise modify the Content-Disposition header.

+
+
+

This issue has been resolved in Forklift 2.5.2. It is advised to update to this version of Forklift or later.

+
+ +
+
CVE-2023-26144: mtv-console-plugin-container: graphql: Insufficient checks in the OverlappingFieldsCanBeMergedRule.ts
+

A flaw was found in the package GraphQL from 16.3.0 and before 16.8.1. This flaw means Forklift versions before Forklift 2.5.2 are vulnerable to Denial of Service (DoS) due to insufficient checks in the OverlappingFieldsCanBeMergedRule.ts file when parsing large queries. This issue may allow an attacker to degrade system performance. (MTV-712)

+
+
+

This issue has been resolved in Forklift 2.5.2. It is advised to update to this version of Forklift or later.

+
+
+

For more information, see CVE-2023-26144.

+
+
+
CVE-2023-45142: Memory leak found in the otelhttp handler of open-telemetry
+

A flaw was found in otelhttp handler of OpenTelemetry-Go. This flaw means Forklift versions before Forklift 2.5.3 are vulnerable to a memory leak caused by http.user_agent and http.method having unbound cardinality, which could allow a remote, unauthenticated attacker to exhaust the server’s memory by sending many malicious requests, affecting the availability. (MTV-795)

+
+
+

This issue has been resolved in Forklift 2.5.3. It is advised to update to this version of Forklift or later.

+
+
+

For more information, see CVE-2023-45142.

+
+
+
CVE-2023-39322: QUIC connections do not set an upper bound on the amount of data buffered when reading post-handshake messages
+

A flaw was found in Golang. This flaw means Forklift versions before Forklift 2.5.3 are vulnerable to QUIC connections not setting an upper bound on the amount of data buffered when reading post-handshake messages, allowing a malicious QUIC connection to cause unbounded memory growth. With the fix, connections now consistently reject messages larger than 65KiB in size. (MTV-708)

+
+
+

This issue has been resolved in Forklift 2.5.3. It is advised to update to this version of Forklift or later.

+
+
+

For more information, see CVE-2023-39322.

+
+
+
CVE-2023-39321: Processing an incomplete post-handshake message for a QUIC connection can cause a panic
+

A flaw was found in Golang. This flaw means Forklift versions before Forklift 2.5.3 are vulnerable to processing an incomplete post-handshake message for a QUIC connection, which causes a panic. (MTV-693)

+
+
+

This issue has been resolved in Forklift 2.5.3. It is advised to update to this version of Forklift or later.

+
+
+

For more information, see CVE-2023-39321.

+
+
+
CVE-2023-39319: Flaw in html/template package
+

A flaw was found in the Golang html/template package used in Forklift. This flaw means Forklift versions before Forklift 2.5.3 are vulnerable, as the html/template package did not properly handle occurrences of <script, <!--, and </script within JavaScript literals in <script> contexts. This flaw could cause the template parser to improperly consider script contexts to be terminated early, causing actions to be improperly escaped, which could be leveraged to perform an XSS attack. (MTV-693)

+
+
+

This issue has been resolved in Forklift 2.5.3. It is advised to update to this version of Forklift or later.

+
+
+

For more information, see CVE-2023-39319.

+
+
+
CVE-2023-39318: Flaw in html/template package
+

A flaw was found in the Golang html/template package used in Forklift. This flaw means Forklift versions before Forklift 2.5.3 are vulnerable, as the html/template package did not properly handle HTML-like "<!--" and "-->" comment tokens, nor hashbang "#!" comment tokens. This flaw could cause the template parser to improperly interpret the contents of <script> contexts, causing actions to be improperly escaped, which could be leveraged to perform an XSS attack. (MTV-693)

+
+
+

This issue has been resolved in Forklift 2.5.3. It is advised to update to this version of Forklift or later.

+
+
+

For more information, see CVE-2023-39318.

+
+
+
Logs archive file downloaded from UI includes logs related to deleted migration plan/VM
+

In earlier releases of Forklift 2.5, the log files downloaded from the UI could contain logs that are related to an earlier migration plan. (MTV-783)

+
+
+

This issue has been resolved in Forklift 2.5.3.

+
+
+
Extending a VM disk in oVirt is not reflected in the Forklift inventory
+

In earlier releases of Forklift 2.5, the size of disks that were extended in oVirt was not adequately monitored. This resulted in the inability to migrate virtual machines with extended disks from an oVirt provider. (MTV-830)

+
+
+

This issue has been resolved in Forklift 2.5.3.

+
+
+
Filesystem overhead configurable
+

In earlier releases of Forklift 2.5, the filesystem overhead for new persistent volumes was hard-coded to 10%. The overhead was insufficient for certain filesystem types, resulting in failures during cold migrations from oVirt and OpenStack to the cluster where Forklift is deployed. For other filesystem types, the hard-coded overhead was too high, resulting in excessive storage consumption.

+
+
+

In Forklift 2.5.3, the filesystem overhead can be configured, as it is no longer hard-coded. If your migration allocates persistent volumes without CDI, you can adjust the file system overhead. You adjust the file system overhead by adding the following label and value to the spec portion of the forklift-controller CR:

+
+
+
+
spec:
+  controller_filesystem_overhead: <percentage> (1)
+
+
+
+
    +
  1. +

    The percentage of overhead. If this parameter is not added, the default value of 10% is used. This setting is valid only if the storage class is a filesystem storage class. (MTV-699)

    +
  2. +
+
+
+
Ensure up-to-date data is displayed in the create and update provider forms
+

In earlier releases of Forklift, the create and update provider forms could have presented stale data.

+
+
+

This issue is resolved in Forklift 2.5: the new create and update provider forms display up-to-date properties of the provider. (MTV-603)

+
+
+
Snapshots that are created during a migration in OpenStack are not deleted
+

In earlier releases of Forklift, the Migration Controller service did not automatically delete snapshots that were created during a migration of source virtual machines in OpenStack.

+
+
+

This issue is resolved in Forklift 2.5: all the snapshots created during the migration are removed after the migration has been completed. (MTV-620)

+
+
+
oVirt snapshots are not deleted after a successful migration
+

In earlier releases of Forklift, the Migration Controller service did not delete snapshots automatically after a successful warm migration of a VM from oVirt.

+
+
+

This issue is resolved in Forklift 2.5: the snapshots generated during migration are removed after a successful migration, while the original snapshots are retained. (MTV-349)

+
+
+
Warm migration fails when cutover conflicts with precopy
+

In earlier releases of Forklift, the cutover operation failed when it was triggered while precopy was being performed. The VM was locked in oVirt and therefore the ovirt-engine rejected the snapshot creation, or disk transfer, operation.

+
+
+

This issue is resolved in Forklift 2.5: the cutover operation is triggered, but it is not performed while the VM is locked. Once the precopy operation completes, the cutover operation is performed. (MTV-686)

+
+
+
Warm migration fails when VM is locked
+

In earlier releases of Forklift, triggering a warm migration while there was an ongoing operation in oVirt that locked the VM caused the migration to fail because it could not trigger the snapshot creation.

+
+
+

This issue is resolved in Forklift 2.5: warm migration no longer fails when an operation that locks the VM is performed in oVirt. The migration does not fail, but starts when the VM is unlocked. (MTV-687)

+
+
+
Deleting migrated VM does not remove PVC and PV
+

In earlier releases of Forklift, when removing a VM that was migrated, its persistent volume claims (PVCs) and persistent volumes (PVs) were not deleted.

+
+
+

This issue is resolved in Forklift 2.5: PVCs and PVs are deleted when a migrated VM is deleted. (MTV-492)

+
+
+
PVC deletion hangs after archiving and deleting migration plan
+

In earlier releases of Forklift, when a migration failed, its PVCs and PVs were not deleted as expected when its migration plan was archived and deleted.

+
+
+

This issue is resolved in Forklift 2.5: PVCs are deleted when the migration plan is archived and deleted. (MTV-493)

+
+
+
VM with multiple disks can boot from a non-bootable disk after migration
+

In earlier releases of Forklift, VMs with multiple disks that were migrated might not have been able to boot on the target OKD cluster.

+
+
+

This issue is resolved in Forklift 2.5: migrated VMs with multiple disks can boot on the target OKD cluster. (MTV-433)

+
+
+
Transfer network not taken into account for cold migrations from vSphere
+

In Forklift releases 2.4.0-2.5.3, cold migrations from vSphere to the local cluster on which Forklift was deployed did not take a specified transfer network into account. This issue is resolved in Forklift 2.5.4. (MTV-846)

+
+
+
Fix migration of VMs with multi-boot guest operating system from vSphere
+

In Forklift 2.5.6, the virt-v2v arguments include --root first, which mitigates an issue with multi-boot VMs where the pod fails. This is a fix for a regression that was introduced in Forklift 2.4, in which the --root argument was dropped. (MTV-987)

+
+
+
Errors logged in populator pods are improved
+

In earlier releases of Forklift 2.5, populator pods were always restarted on failure. This made it difficult to gather the logs from the failed pods. In Forklift 2.5.3, the number of restarts of populator pods is limited to three. On the third and final failure, the populator pod remains in the failed status, and its logs can then be gathered easily by must-gather and by forklift-controller to record that this step has failed. (MTV-818)

+
+
+
npm IP package vulnerability
+

A vulnerability found in the Node.js Package Manager (npm) IP package can allow an attacker to obtain sensitive information and gain access to normally inaccessible resources. (MTV-941)

+
+
+

This issue has been resolved in Forklift 2.5.6.

+
+
+

For more information, see CVE-2023-42282.

+
+
+
Flaw was found in the Golang net/http/internal package
+

A flaw was found in the versions of the Golang net/http/internal package that were used in earlier releases of Forklift. This flaw could allow a malicious user to send an HTTP request and cause the receiver to read more bytes from the network than are in the body (up to 1 GiB), causing the receiver to fail to read the response, possibly leading to a denial of service (DoS). This issue has been resolved in Forklift 2.5.6.

+
+
+

For more information, see CVE-2023-39326.

+
+
+

For a complete list of all resolved issues in this release, see the list of Resolved Issues in Jira.

+
+
+
+
+

Upgrade notes

+
+
+

It is recommended to upgrade from Forklift 2.4.2 to Forklift 2.5.

+
+
+
Upgrade from 2.4.0 fails
+

When upgrading from MTV 2.4.0 to a later version, the operation fails with an error that says the field 'spec.selector' of deployment forklift-controller is immutable. Workaround: Remove the custom resource forklift-controller of type ForkliftController from the installed namespace, and recreate it. Refresh the OKD console once the forklift-console-plugin pod runs to load the upgraded Forklift web console. (MTV-518)

+
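A sketch of the workaround from the CLI, assuming the custom resource is named forklift-controller in the konveyor-forklift namespace:

$ kubectl get forkliftcontroller/forklift-controller -n konveyor-forklift -o yaml > forklift-controller.yaml
$ kubectl delete forkliftcontroller/forklift-controller -n konveyor-forklift
$ kubectl create -f forklift-controller.yaml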
+
+
+ + +
+ + diff --git a/modules/rn-2.6/index.html b/modules/rn-2.6/index.html new file mode 100644 index 00000000000..843d269a73d --- /dev/null +++ b/modules/rn-2.6/index.html @@ -0,0 +1,511 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.6

+
+
+
+

You can use Forklift to migrate virtual machines from the following source providers to KubeVirt destination providers:

+
+
+
    +
  • +

    VMware vSphere

    +
  • +
  • +

    oVirt

    +
  • +
  • +

    OpenStack

    +
  • +
  • +

    Open Virtual Appliances (OVAs) that were created by VMware vSphere

    +
  • +
  • +

    Remote KubeVirt clusters

    +
  • +
+
+
+

The release notes describe technical changes, new features and enhancements, known issues, and resolved issues.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Simplified the creation of vSphere providers
+

In earlier releases of Forklift, users had to specify a fingerprint when creating a vSphere provider. This required users to retrieve the fingerprint from the server that vCenter runs on. Forklift no longer requires this fingerprint as an input, but rather computes it from the specified certificate in the case of a secure connection or automatically retrieves it from the server that runs vCenter/ESXi in the case of an insecure connection.

+
+
+
Redesigned the migration plan creation dialog
+

The process of creating a migration plan in the web console has been improved. The new migration plan dialog enables faster creation of migration plans.

+
+
+

It includes only the minimal settings that are required, while you can configure advanced settings separately. The new dialog also provides defaults for network and storage mappings, where applicable. The new dialog can also be invoked from the Provider > Virtual Machines tab, after selecting the virtual machines to migrate. It also better aligns with the user experience in the OCP console.

+
+
+
Virtual machine preferences have replaced {ocp-name} templates
+

Virtual machine preferences have replaced {ocp-name} templates. Forklift currently falls back to using {ocp-name} templates when a relevant preference is not available.

+
+
+

Custom mappings of guest operating system type to virtual machine preference can be configured by using config maps, either to use custom virtual machine preferences or to support additional guest operating system types.
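As an illustrative sketch only (the config map name, namespace, and key format below are assumptions, not a documented schema), such a mapping could look like:

----
# Hypothetical config map that maps guest OS identifiers to VM preferences.
$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: forklift-osmap            # hypothetical name
  namespace: konveyor-forklift
data:
  rhel8_64Guest: rhel.8           # guest OS type -> preference name
  windows2019srv_64Guest: windows.2k19
EOF
----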

+
+
+
Full support for migration from OVA
+

Migration from OVA is no longer a Technology Preview feature and is now fully supported.

+
+
+
The VM is posted with its desired Running state
+

Forklift creates the VM with its desired Running state on the target provider, instead of creating the VM and then running it as an additional operation. (MTV-794)

+
+
+
The must-gather logs can now be loaded only by using the CLI
+

The Forklift web console can no longer download logs. With this update, you must download must-gather logs by using CLI commands. For more information, see Must Gather Operator.
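For example, a minimal invocation might look like the following; the image reference is an assumption, so use the must-gather image that ships with your release:

----
# Collect Forklift diagnostics into a local must-gather directory.
$ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest
----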

+
+
+
Forklift no longer runs pvc-init pods when migrating from vSphere
+

Forklift no longer runs pvc-init pods during cold migration from a vSphere provider to the {ocp-name} cluster that Forklift is deployed on. However, in other flows where data volumes are used, they are set with the cdi.kubevirt.io/storage.bind.immediate.requested annotation, and CDI runs first-consume pods for storage classes with volume binding mode WaitForFirstConsumer.
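For reference, a data volume carrying that annotation might look like the following minimal sketch (the name, size, and storage class are placeholders):

----
# Blank DataVolume with the immediate-bind annotation set.
$ cat <<EOF | oc apply -f -
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: migrated-vm-disk-0
  annotations:
    cdi.kubevirt.io/storage.bind.immediate.requested: "true"
spec:
  storage:
    resources:
      requests:
        storage: 20Gi
    storageClassName: my-wffc-storage-class
  source:
    blank: {}
EOF
----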

+
+
+
+
+

New features and enhancements

+
+
+

This section provides features and enhancements introduced in Forklift 2.6.

+
+
+

New features and enhancements 2.6.3

+
+
Support for migrating LUKS-encrypted devices in migrations from vSphere
+

You can now perform cold migrations from a vSphere provider of VMs whose virtual disks are encrypted by Linux Unified Key Setup (LUKS). (MTV-831)

+
+
+
Specifying the primary disk when migrating from vSphere
+

You can now specify the primary disk when you migrate VMs from vSphere with more than one bootable disk. This avoids Forklift automatically attempting to convert the first bootable disk that it detects while it examines all the disks of a virtual machine. This feature is needed because the first bootable disk is not necessarily the disk that the VM is expected to boot from in KubeVirt. (MTV-1079)
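In the Plan CR, the primary disk is selected per VM with the rootDisk field; a trimmed sketch with placeholder names:

----
# Plan excerpt that boots the migrated VM from its second disk.
$ cat <<EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: vsphere-plan
  namespace: konveyor-forklift
spec:
  map:
    network:
      name: network-map
      namespace: konveyor-forklift
    storage:
      name: storage-map
      namespace: konveyor-forklift
  provider:
    source:
      name: vsphere-provider
      namespace: konveyor-forklift
    destination:
      name: host
      namespace: konveyor-forklift
  targetNamespace: demo
  vms:
    - name: database-vm
      rootDisk: /dev/sdb    # use the second bootable disk as the root device
EOF
----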

+
+
+
Links to remote provider UIs
+

You can now remotely access the UI of a remote cluster when you create a source provider. For example, if the provider is a remote oVirt cluster, Forklift adds a link to the remote oVirt web console when you define the provider. This feature makes it easier for you to manage and debug a migration from remote clusters. (MTV-1054)

+
+
+
+

New features and enhancements 2.6.0

+
+
Migration from vSphere over a secure connection
+

You can now specify a CA certificate that can be used to authenticate the server that runs vCenter or ESXi, depending on the specified SDK endpoint of the vSphere provider. (MTV-530)

+
+
+
Migration to or from a remote {ocp-name} over a secure connection
+

You can now specify a CA certificate that can be used to authenticate the API server of a remote {ocp-name} cluster. (MTV-728)

+
+
+
Migration from an ESXi server without going through vCenter
+

Forklift enables the configuration of vSphere providers with the SDK of ESXi. You need to select ESXi as the Endpoint type of the vSphere provider and specify the URL of the SDK of the ESXi server. (MTV-514)
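A provider configured this way might look like the following sketch (the URL, secret, and names are placeholders):

----
# vSphere provider that points directly at an ESXi host.
$ cat <<EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: esxi-host
  namespace: konveyor-forklift
spec:
  type: vsphere
  url: https://esxi1.example.com/sdk
  settings:
    sdkEndpoint: esxi
  secret:
    name: esxi-host-credentials
    namespace: konveyor-forklift
EOF
----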

+
+
+
Migration of image-based VMs from {osp}
+

Forklift supports the migration of VMs that were created from images in {osp}. (MTV-644)

+
+
+
Migration of VMs with Fibre Channel LUNs from oVirt
+

Forklift supports migrations of VMs that are set with Fibre Channel (FC) LUNs from oVirt. As with other LUN disks, you need to ensure the {ocp-name} nodes have access to the FC LUNs. During the migrations, the FC LUNs are detached from the source VMs in oVirt and attached to the migrated VMs in {ocp-name}. (MTV-659)

+
+
+
Preserve CPU types of VMs that are migrated from oVirt
+

Forklift sets the CPU type of migrated VMs in {ocp-name} with their custom CPU type in oVirt. In addition, a new option was added to migration plans that are set with oVirt as a source provider to preserve the original CPU types of source VMs. When this option is selected, Forklift identifies the CPU type based on the cluster configuration and sets this CPU type for the migrated VMs, for which the source VMs are not set with a custom CPU. (MTV-547)

+
+
+
Validation for RHEL 6 guest operating system is now available when migrating VMs with RHEL 6 guest operating system
+

Red Hat Enterprise Linux (RHEL) 9 does not support RHEL 6 as a guest operating system. Therefore, RHEL 6 is not supported in {ocp-name} Virtualization. With this update, a validation for the RHEL 6 guest operating system was added. (MTV-413)

+
+
+
Automatic retrieval of CA certificates for the provider’s URL in the console
+

The ability to retrieve CA certificates, which was available in previous versions, has been restored. The vSphere Verify certificate option is in the add-provider dialog. This option was removed in the transition to the OKD console and has now been added back to the console. This functionality is now also available for oVirt, {osp}, and {ocp-name} providers. (MTV-737)

+
+
+
Validation of a specified VDDK image
+

Forklift validates the availability of a VDDK image that is specified for a vSphere provider on the target {ocp-name} cluster as part of the validation of a migration plan. Forklift also checks whether the libvixDiskLib.so symbolic link (symlink) exists within the image. If the validation fails, the migration plan cannot be started. (MTV-618)
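One way to perform a similar check yourself is to inspect the image locally; a sketch that assumes the directory layout of the documented VDDK image build (the image name and path are assumptions):

----
# List the expected symlink inside a locally built VDDK image.
$ podman run --rm --entrypoint ls \
  quay.io/example/vddk:latest \
  -l /vmware-vix-disklib-distrib/lib64/libvixDiskLib.so
----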

+
+
+
Add a warning and partial support for TPM
+

Forklift presents a warning when attempting to migrate a VM that is set with a TPM device from oVirt or vSphere. The migrated VM in {ocp-name} is set with a TPM device but without the content of the TPM device from the source environment. (MTV-378)

+
+
+
Plans that failed to migrate VMs can now be edited
+

With this update, you can edit plans that have failed to migrate any VMs. Some plans fail or are canceled because of incorrect network and storage mappings. You can now edit these plans until they succeed. (MTV-779)

+
+
+
Validation rules are now available for OVA
+

The validation service includes default validation rules for virtual machines from the Open Virtual Appliance (OVA). (MTV-669)

+
+
+
+
+
+

Resolved issues

+
+
+

This release has the following resolved issues:

+
+
+

Resolved issues 2.6.7

+
+
Incorrect handling of quotes in ifcfg files
+

In earlier releases of Forklift, there was an issue with the incorrect handling of single and double quotes in interface configuration (ifcfg) files, which control the software interfaces for individual network devices. This issue has been resolved in Forklift 2.6.7, which covers additional IP configurations on Red Hat Enterprise Linux, CentOS, Rocky Linux, and similar distributions. (MTV-1439)

+
+
+
Failure to preserve netplan based network configuration
+

In earlier releases of Forklift, there was an issue with the preservation of netplan-based network configurations. This issue has been resolved in Forklift 2.6.7: if netplan (netplan.io) is used, static IP configurations are preserved by using the netplan configuration files to generate udev rules for known MAC address and interface name tuples. (MTV-1440)

+
+
+
Error messages are written into udev .rules files
+

In earlier releases of Forklift, there was an issue with the accidental leakage of error messages into udev .rules files. This issue has been resolved in Forklift 2.6.7, with a static IP persistence script added to the udev rule file. (MTV-1441)

+
+
+
+

Resolved issues 2.6.6

+
+
Runtime error: invalid memory address or nil pointer dereference
+

In earlier releases of Forklift, there was a runtime error of invalid memory address or nil pointer dereference caused by a pointer that was nil, and there was an attempt to access the value that it points to. This issue has been resolved in Forklift 2.6.6. (MTV-1353)

+
+
+
All Plan and Migration pods scheduled to same node causing the node to crash
+

In earlier releases of Forklift, the scheduler could place all migration pods on a single node. When this happened, the node ran out of the resources. This issue has been resolved in Forklift 2.6.6. (MTV-1354)

+
+
+
Empty bearer token is sufficient for authentication
+

In earlier releases of Forklift, a vulnerability was found in the Forklift Controller: there was no verification of the authorization header beyond ensuring that it used bearer authentication. A request without an authorization header and a bearer token received a 401 error, but the presence of any token value returned a 200 response with the requested information. This issue has been resolved in Forklift 2.6.6.

+
+
+

For more details, see (CVE-2024-8509).

+
+
+
+

Resolved issues 2.6.5

+
+
VMware Linux interface name changes during migration
+

In earlier releases of Forklift, during the migration of Rocky Linux 8, CentOS 7.2 and later, and Ubuntu 22 virtual machines (VMs) from VMware to OKD (OCP), the names of the network interfaces were modified, and the static IP configuration for the VM no longer functioned. This issue has been resolved for static IPs in Rocky Linux 8, CentOS 7.2 and later, and Ubuntu 22 in Forklift 2.6.5. (MTV-595)

+
+
+
+

Resolved issues 2.6.4

+
+
Disks and drives are offline after migrating Windows virtual machines from RHV or VMware to OCP
+

In earlier releases of Forklift, Windows Server 2022 VMs configured with multiple disks that were Online before the migration were Offline after a successful migration from oVirt or VMware to OKD; only the primary C:\ disk was Online. This issue has been resolved for basic disks in Forklift 2.6.4. (MTV-1299)

+
+
+

For details of the known issue of dynamic disks being Offline in Windows Server 2022 after cold and warm migrations from vSphere to container-native virtualization (CNV) with Ceph RADOS Block Devices (RBD), using the storage class ocs-storagecluster-ceph-rbd, see (MTV-1344).

+
+
+
Preserve IP option for Windows does not preserve all settings
+

In earlier releases of Forklift, when migrating a Windows Server 2022 VM with a static IP address assigned and the Preserve static IPs option selected, the IP address was preserved after a successful migration, but the subnet mask, gateway, and DNS servers were not. This resulted in an incomplete migration, and users had to log in locally from the console to fully configure the network. This issue has been resolved in Forklift 2.6.4. (MTV-1286)

+
+
+
qemu-guest-agent not being installed at first boot in Windows Server 2022
+

In earlier releases of Forklift, after a successful migration of a Windows Server 2022 guest, the qemu-guest-agent was not completely installed: the Windows scheduled task was created, but it was set to run 4 hours in the future instead of the intended 2 minutes. This issue has been resolved in Forklift 2.6.4. (MTV-1325)

+
+
+
+

Resolved issues 2.6.3

+
+
CVE-2024-24788: golang: net malformed DNS message can cause infinite loop
+

In earlier releases of Forklift, a flaw was discovered in the stdlib package of the Go programming language, which impacted earlier versions of Forklift. This vulnerability primarily threatens web-facing applications and services that rely on Go for DNS queries. This issue has been resolved in Forklift 2.6.3.

+
+
+

For more details, see (CVE-2024-24788).

+
+
+
Migration scheduling does not take into account that virt-v2v copies disks sequentially (vSphere only)
+

In earlier releases of Forklift, there was a problem with the way Forklift interpreted the controller_max_vm_inflight setting for vSphere to schedule migrations. This issue has been resolved in Forklift 2.6.3. (MTV-1191)

+
+
+
Cold migrations fail after changing the ESXi network (vSphere only)
+

In earlier versions of Forklift, cold migrations from a vSphere provider with an ESXi SDK endpoint failed if any network other than the default network was used for disk transfers. This issue has been resolved in Forklift 2.6.3. (MTV-1180)

+
+
+
Warm migrations over an ESXi network are stuck in DiskTransfer state (vSphere only)
+

In earlier versions of Forklift, warm migrations over an ESXi network from a vSphere provider with a vCenter SDK endpoint were stuck in DiskTransfer state because Forklift was unable to locate image snapshots. This issue has been resolved in Forklift 2.6.3. (MTV-1161)

+
+
+
Leftover PVCs are in Lost state after cold migrations
+

In earlier versions of Forklift, after cold migrations, there were leftover PVCs that had a status of Lost instead of being deleted, even after the migration plan that created them was archived and deleted. Investigation showed that this was because importer pods were retained after copying, by default, rather than in only specific cases. This issue has been resolved in Forklift 2.6.3. (MTV-1095)

+
+
+
Guest operating system from vSphere might be missing (vSphere only)
+

In earlier versions of Forklift, some VMs that were imported from vSphere were not mapped to a template in OKD while other VMs, with the same guest operating system, were mapped to the corresponding template. Investigations indicated that this was because vSphere stopped reporting the operating system after not receiving updates from VMware tools for some time. This issue has been resolved in Forklift 2.6.3 by taking the value of the operating system from the output of the inspection that virt-v2v performs on the disks. (MTV-1046)

+
+
+
+

Resolved issues 2.6.2

+
+
CVE-2023-45288: Golang net/http, x/net/http2: unlimited number of CONTINUATION frames can cause a denial-of-service (DoS) attack
+

A flaw was discovered with the implementation of the HTTP/2 protocol in the Go programming language, which impacts previous versions of Forklift. There were insufficient limitations on the number of CONTINUATION frames sent within a single stream. An attacker could potentially exploit this to cause a denial-of-service (DoS) attack. This flaw has been resolved in Forklift 2.6.2.

+
+
+

For more details, see (CVE-2023-45288).

+
+
+
CVE-2024-24785: mtv-api-container: Golang html/template: errors returned from MarshalJSON methods may break template escaping
+

A flaw was found in the html/template Golang standard library package, which impacts previous versions of Forklift. If errors returned from MarshalJSON methods contain user-controlled data, they may be used to break the contextual auto-escaping behavior of the HTML/template package, allowing subsequent actions to inject unexpected content into the templates. This flaw has been resolved in Forklift 2.6.2.

+
+
+

For more details, see (CVE-2024-24785).

+
+
+
CVE-2024-24784: mtv-validation-container: Golang net/mail: comments in display names are incorrectly handled
+

A flaw was found in the net/mail Golang standard library package, which impacts previous versions of Forklift. The ParseAddressList function incorrectly handles comments, text in parentheses, and display names. As this is a misalignment with conforming address parsers, it can result in different trust decisions being made by programs using different parsers. This flaw has been resolved in Forklift 2.6.2.

+
+
+

For more details, see (CVE-2024-24784).

+
+
+
CVE-2024-24783: mtv-api-container: Golang crypto/x509: Verify panics on certificates with an unknown public key algorithm
+

A flaw was found in the crypto/x509 Golang standard library package, which impacts previous versions of Forklift. Verifying a certificate chain that contains a certificate with an unknown public key algorithm causes Certificate.Verify to panic. This affects all crypto/tls clients and servers that set Config.ClientAuth to VerifyClientCertIfGiven or RequireAndVerifyClientCert. The default behavior is for TLS servers to not verify client certificates. This flaw has been resolved in Forklift 2.6.2.

+
+
+

For more details, see (CVE-2024-24783).

+
+
+
CVE-2023-45290: mtv-api-container: Golang net/http memory exhaustion in Request.ParseMultipartForm
+

A flaw was found in the net/http Golang standard library package, which impacts previous versions of Forklift. When parsing a multipart form, either explicitly with Request.ParseMultipartForm or implicitly with Request.FormValue, Request.PostFormValue, or Request.FormFile, limits on the total size of the parsed form are not applied to the memory consumed while reading a single form line. This permits a maliciously crafted input containing long lines to cause the allocation of arbitrarily large amounts of memory, potentially leading to memory exhaustion. This flaw has been resolved in Forklift 2.6.2.

+
+
+

For more details, see (CVE-2023-45290).

+
+
+
ImageConversion does not run when target storage is set with WaitForFirstConsumer (WFFC)
+

In earlier releases of Forklift, migration of VMs failed because the migration was stuck in the AllocateDisks phase. As a result of being stuck, the migration did not progress, and PVCs were not bound. The root cause of the issue was that ImageConversion did not run when target storage was set for wait-for-first-consumer. The problem was resolved in Forklift 2.6.2. (MTV-1126)

+
+
+
forklift-controller panics when importing VMs with direct LUNs
+

In earlier releases of Forklift, forklift-controller panicked when a user attempted to import VMs that had direct LUNs. The problem was resolved in Forklift 2.6.2. (MTV-1134)

+
+
+
+

Resolved issues 2.6.1

+
+
VMs with multiple disks that are migrated from vSphere and OVA files are not being fully copied
+

In Forklift 2.6.0, there was a problem in copying VMs with multiple disks from VMware vSphere and from OVA files. The migrations appeared to succeed, but all the disks were transferred to the same PV in the target environment, while the other disks remained empty. In some cases, bootable disks were overwritten, so the VM could not boot. In other cases, data from the other disks was missing. The problem was resolved in Forklift 2.6.1. (MTV-1067)

+
+
+
Migrating VMs from one OKD cluster to another fails due to a timeout
+

In Forklift 2.6.0, migrations from one OKD cluster to another failed when the time to transfer the disks of a VM exceeded the time to live (TTL) of the Export API in {ocp-name}, which was set to 2 hours by default. The problem was resolved in Forklift 2.6.1 by setting the default TTL of the Export API to 12 hours, which greatly reduces the possibility of an expiration of the Export API. Additionally, you can increase or decrease the TTL setting as needed. (MTV-1052)

+
+
+
Forklift forklift-controller pod crashes when receiving a disk without a datastore
+

In earlier releases of Forklift, if a VM was configured with a disk that was on a datastore that was no longer available in vSphere at the time a migration was attempted, the forklift-controller crashed, rendering Forklift unusable. In Forklift 2.6.1, Forklift presents a critical validation for VMs with such disks, informing users of the problem, and the forklift-controller no longer crashes, although it cannot transfer the disk. (MTV-1029)

+
+
+
+

Resolved issues 2.6.0

+
+
Deleting an OVA provider automatically also deletes the PV
+

In earlier releases of Forklift, the PV was not removed when the OVA provider was deleted. This has been resolved in Forklift 2.6.0, and the PV is automatically deleted when the OVA provider is deleted. (MTV-848)

+
+
+
Fix for data being lost when migrating VMware VMs with snapshots
+

In earlier releases of Forklift, when migrating a VM that has a snapshot from VMware, the VM that was created in {ocp-name} Virtualization contained the data in the snapshot but not the latest data of the VM. This has been resolved in Forklift 2.6.0. (MTV-447)

+
+
+
Canceling and deleting a failed migration plan does not clean up the populate pods and PVC
+

In earlier releases of Forklift, when you canceled and deleted a failed migration plan, and after creating a PVC and spawning the populate pods, the populate pods and PVC were not deleted. You had to delete the pods and PVC manually. This issue has been resolved in Forklift 2.6.0. (MTV-678)

+
+
+
OKD to OKD migrations require the cluster version to be 4.13 or later
+

In earlier releases of Forklift, when migrating from OKD to OKD, the version of the source provider cluster had to be OKD version 4.13 or later. This issue has been resolved in Forklift 2.6.0, with validation being shown when migrating from versions of {ocp-name} before 4.13. (MTV-734)

+
+
+
Multiple storage domains from RHV were always mapped to a single storage class
+

In earlier releases of Forklift, multiple disks from different storage domains were always mapped to a single storage class, regardless of the storage mapping that was configured. This issue has been resolved in Forklift 2.6.0. (MTV-1008)

+
+
+
Firmware detection by virt-v2v
+

In earlier releases of Forklift, a VM that was migrated from an OVA that did not include the firmware type in its OVF configuration was set with UEFI. This was incorrect for VMs that were configured with BIOS. This issue has been resolved in Forklift 2.6.0, as Forklift now consumes the firmware that is detected by virt-v2v during the conversion of the disks. (MTV-759)

+
+
+
Creating a host secret requires validation of the secret before creation of the host
+

In earlier releases of Forklift, when configuring a transfer network for vSphere hosts, the console plugin created the Host CR before creating its secret. The secret should be specified first in order to validate it before the Host CR is posted. This issue has been resolved in Forklift 2.6.0. (MTV-868)

+
+
+
When adding OVA provider a ConnectionTestFailed message appears
+

In earlier releases of Forklift, when adding an OVA provider, the error message ConnectionTestFailed instantly appeared, although the provider had been created successfully. This issue has been resolved in Forklift 2.6.0. (MTV-671)

+
+
+
RHV provider ConnectionTestSucceeded True response from the wrong URL
+

In earlier releases of Forklift, the ConnectionTestSucceeded condition was set to True even when the URL was different than the API endpoint for the RHV Manager. This issue has been resolved in Forklift 2.6.0. (MTV-740)

+
+
+
Migration does not fail when a vSphere Data Center is nested inside a folder
+

In earlier releases of Forklift, migrating a VM placed in a data center directly under /vcenter in vSphere succeeded, but failed when the data center was nested inside a folder. This issue was resolved in Forklift 2.6.0. (MTV-796)

+
+
+
The OVA inventory watcher detects deleted files
+

The OVA inventory watcher detects file changes, including deleted files. Updates from the ova-provider-server pod are now sent every five minutes to the forklift-controller pod that updates the inventory. (MTV-733)

+
+
+
Unclear error message when Forklift fails to build or create a PVC
+

In earlier releases of Forklift, the error logs lacked clear information to identify the reason for a failure to create a PV on a destination storage class that does not have a configured storage profile. This issue was resolved in Forklift 2.6.0. (MTV-928)

+
+
+
Plans stay indefinitely in the CopyDisks phase when there is an outdated ovirtvolumepopulator
+

In earlier releases of Forklift, an earlier failed migration could have left an outdated ovirtvolumepopulator. When starting a new plan for the same VM to the same project, the CreateDataVolumes phase did not create populator PVCs when transitioning to CopyDisks, causing the CopyDisks phase to stay indefinitely. This issue was resolved in Forklift 2.6.0. (MTV-929)

+
+
+

For a complete list of all resolved issues in this release, see the list of Resolved Issues in Jira.

+
+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+ + + + + +
+
Warning
+
+
Warm migration and remote migration flows are impacted by multiple bugs
+
+

Warm migration and remote migration flows are impacted by multiple bugs. It is strongly recommended to fall back to cold migration until this issue is resolved. (MTV-1366)

+
+
+
+
+
Migrating older Linux distributions from VMware to OKD, the name of the network interfaces changes
+

When migrating older Linux distributions, such as CentOS 7.0 and 7.1, virtual machines (VMs) from VMware to OKD, the name of the network interfaces changes, and the static IP configuration for the VM no longer functions. This issue is caused by RHEL 7.0 and 7.1 still requiring virtio-transitional. Workaround: Manually update the guest to RHEL 7.2 or update the VM specification post-migration to use transitional. (MTV-1382)
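A sketch of the post-migration part of the workaround, assuming KubeVirt's useVirtioTransitional device setting and placeholder names:

----
# Switch the migrated VM to transitional virtio devices.
$ oc patch vm centos7-vm -n demo --type=merge \
  -p '{"spec":{"template":{"spec":{"domain":{"devices":{"useVirtioTransitional":true}}}}}}'
----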

+
+
+
Dynamic disks are offline in Windows Server 2022 after migration from vSphere to CNV with ceph-rbd
+

The dynamic disks are Offline in Windows Server 2022 after cold and warm migrations from vSphere to container-native virtualization (CNV) with Ceph RADOS Block Devices (RBD), using the storage class ocs-storagecluster-ceph-rbd. (MTV-1344)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#22008846)

+
+
+
Migration of virtual machines with encrypted partitions fails during a conversion (vSphere only)
+

vSphere only: Migrations from oVirt and {osp} do not fail, but the encryption key might be missing on the target OKD cluster.

+
+
+
Migration fails during precopy/cutover while performing a snapshot operation on the source VM
+

Warm migration from oVirt fails if a snapshot operation is triggered and running on the source VM at the same time as the migration is scheduled. The migration does not wait for the snapshot operation to finish. (MTV-456)

+
+
+
Unable to schedule migrated VM with multiple disks to more than one storage class of type hostPath
+

When migrating a VM with multiple disks to more than one storage class of type hostPath, it might happen that a VM cannot be scheduled. Workaround: Use shared storage on the target OKD cluster.

+
+
+
Non-supported guest operating systems in warm migrations
+

Warm migrations and migrations to remote OKD clusters from vSphere do not support the same guest operating systems that are supported in cold migrations and migrations to the local OKD cluster. RHEL 8 and RHEL 9 might cause this limitation.

+
+ +
+
VMs from vSphere with RHEL 9 guest operating system can start with network interfaces that are down
+

When migrating VMs that are installed with RHEL 9 as a guest operating system from vSphere, the network interfaces of the VMs could be disabled when they start in {ocp-name} Virtualization. (MTV-491)

+
+
+
Migration of a VM with NVME disks from vSphere fails
+

When migrating a virtual machine (VM) with NVME disks from vSphere, the migration process fails, and the Web Console shows that the Convert image to kubevirt stage is running but did not finish successfully. (MTV-963)

+
+
+
Importing image-based VMs can fail
+

Migrating an image-based VM without the virtual_size field can fail on a block mode storage class. (MTV-946)

+
+
+
Deleting a migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)

+
+
+
Migrating VMs with independent persistent disks from VMware to OCP-V fails
+

Migrating VMs with independent persistent disks from VMware to OCP-V fails. (MTV-993)

+
+
+
Guest operating system from vSphere might be missing
+

When vSphere does not receive updates about the guest operating system from the VMware tools, it considers the information about the guest operating system to be outdated and ceases to report it. When this occurs, Forklift is unaware of the guest operating system of the VM and is unable to associate it with the appropriate virtual machine preference or {ocp-name} template. (MTV-1046)

+
+
+
Failure to migrate an image-based VM from {osp} to the default project
+

The migration process fails when migrating an image-based VM from {osp} to the default project. (MTV-964)

+
+
+

For a complete list of all known issues in this release, see the list of Known Issues in Jira.

+
+
+
+ + +
diff --git a/modules/rn-2.7/index.html b/modules/rn-2.7/index.html
new file mode 100644
index 00000000000..b3354dc95e4
--- /dev/null
+++ b/modules/rn-2.7/index.html
+

Forklift 2.7

+
+

You can use Forklift to migrate virtual machines from the following source providers to KubeVirt destination providers:

+
+
+
    +
  • +

    VMware vSphere versions 6, 7, and 8

    +
  • +
  • +

    oVirt

    +
  • +
  • +

    {osp}

    +
  • +
  • +

    Open Virtual Appliances (OVAs) that were created by VMware vSphere

    +
  • +
  • +

    Remote KubeVirt clusters

    +
  • +
+
+
+

The release notes describe technical changes, new features and enhancements, known issues, and resolved issues.

+
+ + +
diff --git a/modules/rn-27-resolved-issues/index.html b/modules/rn-27-resolved-issues/index.html
new file mode 100644
index 00000000000..00c1ce92b6d
--- /dev/null
+++ b/modules/rn-27-resolved-issues/index.html
+

Resolved issues

+
+
+
+

Forklift 2.7 has the following resolved issues:

+
+
+
+
+

Resolved issues 2.7.3

+
+
+
Migration plan does not fail when conversion pod fails
+

In earlier releases of Forklift, when running the virt-v2v guest conversion, the migration plan did not fail when the conversion pod failed, as it was expected to. This issue has been resolved in Forklift 2.7.3. (MTV-1569)

+
+
+
Large number of VMs in the inventory can cause the inventory controller to panic
+

In earlier releases of Forklift, having a large number of virtual machines (VMs) in the inventory could cause the inventory controller to panic and return a concurrent write to websocket connection warning. This issue was caused by concurrent writes to the WebSocket connection and has been addressed by adding a lock, so that the goroutine waits before sending the response from the server. This issue has been resolved in Forklift 2.7.3. (MTV-1220)

+
+
+
VM selection disappears when selecting multiple VMs in the Migration Plan
+

In earlier releases of Forklift, the VM selection checkbox disappeared after multiple VMs were selected in the migration plan. This issue has been resolved in Forklift 2.7.3. (MTV-1546)

+
+
+
forklift-controller crashing during OVA plan migration
+

In earlier releases of Forklift, the forklift-controller crashed during an OVA plan migration, returning a runtime error: invalid memory address or nil pointer dereference panic. This issue has been resolved in Forklift 2.7.3. (MTV-1577)

+
+
+
+
+

Resolved issues 2.7.2

+
+
+
VMNetworksNotMapped error occurs after creating a plan from the UI with the source provider set to KubeVirt
+

In earlier releases of Forklift, after creating a plan with a KubeVirt source provider, the Migration Plan failed with the error The plan is not ready - VMNetworksNotMapped. This issue has been resolved in Forklift 2.7.2. (MTV-1201)

+
+
+
Migration Plan for KubeVirt to KubeVirt missing the source namespace causing VMNetworkNotMapped error
+

In earlier releases of Forklift, when creating a Migration Plan for a KubeVirt to KubeVirt migration using the Plan Creation Form, the generated network map was missing the source namespace, which caused a VMNetworkNotMapped error on the plan. This issue has been resolved in Forklift 2.7.2. (MTV-1297)

+
+
+
DV, PVC, and PV are not cleaned up and removed if the migration plan is Archived and Deleted
+

In earlier releases of Forklift, the DataVolume (DV), PersistentVolumeClaim (PVC), and PersistentVolume (PV) continued to exist after the migration plan was archived and deleted. This issue has been resolved in Forklift 2.7.2. (MTV-1477)

+
+
+
Other migrations are prevented from starting while the scheduler waits for the complete VM to be transferred
+

In earlier releases of Forklift, when warm migrating a virtual machine (VM) with several disks, the scheduler was halted until all the disks of that VM finished transferring, preventing other migrations from starting. This issue has been resolved in Forklift 2.7.2. (MTV-1537)

+
+
+
Warm migration is not functioning as expected
+

In earlier releases of Forklift, warm migration did not function as expected: when running a warm migration with more VMs than the MaxInFlight disk limit allowed, the VMs over this number did not start migrating until the cutover. This issue has been resolved in Forklift 2.7.2. (MTV-1543)

+
+
+
Migration hanging due to error: virt-v2v: error: -i libvirt: expecting a libvirt guest name
+

In earlier releases of Forklift, when attempting to migrate a VMware VM with a non-compliant Kubernetes name, the OpenShift console returned a warning that the VM would be renamed. However, after the Migration Plan started, it hung because the migration pod was in an Error state. This issue has been resolved in Forklift 2.7.2. (MTV-1555)

+
+
+
VMs are not migrated if they have more disks than MAX_VM_INFLIGHT
+

In earlier releases of Forklift, when migrating a VM by using warm migration, if there were more disks than the MAX_VM_INFLIGHT setting, the VM was not scheduled and the migration did not start. This issue has been resolved in Forklift 2.7.2. (MTV-1573)

+
+
+
Migration Plan returns an error even when Changed Block Tracking (CBT) is enabled
+

In earlier releases of Forklift, when running a VM in VMware, if the CBT flag was enabled while the VM was running, by adding both the ctkEnabled=TRUE and scsi0:0.ctkEnabled=TRUE parameters, an error message Danger alert: The plan is not ready - VMMissingChangedBlockTracking was returned, and the migration plan was prevented from working. This issue has been resolved in Forklift 2.7.2. (MTV-1576)
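For reference, the two parameters named above are VMX advanced settings; one way to set them from the CLI is with govc, shown here as a sketch with a placeholder VM name:

----
# Enable changed block tracking on the source VM.
$ govc vm.change -vm my-vmware-vm \
  -e ctkEnabled=TRUE \
  -e scsi0:0.ctkEnabled=TRUE
----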

+
+
+
+
+

Resolved issues 2.7.0

+
+
+
Change . to - in the names of VMs that are migrated
+

In earlier releases of Forklift, if the name of the virtual machines (VMs) contained ., this was changed to - when they were migrated. This issue has been resolved in Forklift 2.7.0. (MTV-1292)

+
+
+
Status condition indicating a failed mapping resource in a plan is not added to the plan
+

In earlier releases of Forklift, a status condition indicating a failed mapping resource of a plan was not added to the plan. This issue has been resolved in Forklift 2.7.0, with a status condition indicating the failed mapping being added. (MTV-1461)

+
+
+
ifcfg files with HWaddr cause the NIC name to change
+

In earlier releases of Forklift, interface configuration (ifcfg) files with a hardware address (HWaddr) of the Ethernet interface caused the name of the network interface controller (NIC) to change. This issue has been resolved in Forklift 2.7.0. (MTV-1463)

+
+
+
Import fails with special characters in VMX file
+

In earlier releases of Forklift, imports failed when there were special characters in the parameters of the VMX file. This issue has been resolved in Forklift 2.7.0. (MTV-1472)

+
+
+
Observed invalid memory address or nil pointer dereference panic
+

In earlier releases of Forklift, an invalid memory address or nil pointer dereference panic was observed, which was caused by a refactor and could be triggered when there was a problem with the inventory pod. This issue has been resolved in Forklift 2.7.0. (MTV-1482)

+
+
+
Static IPv4 changed after warm migrating win2022/2019 VMs
+

In earlier releases of Forklift, the static Internet Protocol version 4 (IPv4) address was changed after a warm migration of Windows Server 2022 and Windows Server 2019 VMs. This issue has been resolved in Forklift 2.7.0. (MTV-1491)

+
+
+
Warm migration is missing arguments
+

In earlier releases of Forklift, virt-v2v-in-place for the warm migration was missing arguments that were available in virt-v2v for the cold migration. This issue has been resolved in Forklift 2.7.0. (MTV-1495)

+
+
+
Default gateway settings changed after migrating Windows Server 2022 VMs with preserve static IPs
+

In earlier releases of Forklift, the default gateway settings were changed after migrating Windows Server 2022 VMs with the preserve static IPs setting. This issue has been resolved in Forklift 2.7.0. (MTV-1497)

+
+
+
+ + +
diff --git a/modules/running-migration-plan/index.html b/modules/running-migration-plan/index.html
new file mode 100644
index 00000000000..4584abb17cc
--- /dev/null
+++ b/modules/running-migration-plan/index.html
+

Running a migration plan

+
+

You can run a migration plan and view its progress in the OKD web console.

+
+
+
Prerequisites
+
    +
  • +

    Valid migration plan.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration > Plans for virtualization.

    +
    +

    The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, and the description of each plan.

    +
    +
  2. +
  3. +

    Click Start beside a migration plan to start the migration.

    +
  4. +
  5. +

    Click Start in the confirmation window that opens.

    +
    +

    The Migration details by VM screen opens, displaying the migration’s progress.

    +
    +
    +

    Warm migration only:

    +
    +
    +
      +
    • +

      The precopy stage starts.

      +
    • +
    • +

      Click Cutover to complete the migration.

      +
    • +
    +
    +
  6. +
  7. +

    If the migration fails:

    +
    +
      +
    1. +

      Click Get logs to retrieve the migration logs.

      +
    2. +
    3. +

      Click Get logs in the confirmation window that opens.

      +
    4. +
    5. +

      Wait until Get logs changes to Download logs and then click the button to download the logs.

      +
    6. +
    +
    +
  8. +
  9. +

    Click a migration’s Status, whether it failed or succeeded or is still ongoing, to view the details of the migration.

    +
    +

    The Migration details by VM screen opens, displaying the start and end times of the migration, the amount of data copied, and a progress pipeline for each VM being migrated.

    +
    +
  10. +
  11. +

    Expand an individual VM to view its steps and the elapsed time and state of each step.

    +
  12. +
+
+ + +
diff --git a/modules/selecting-migration-network-for-virt-provider/index.html b/modules/selecting-migration-network-for-virt-provider/index.html
new file mode 100644
index 00000000000..6e42684b5ac
--- /dev/null
+++ b/modules/selecting-migration-network-for-virt-provider/index.html
+

Selecting a migration network for a KubeVirt provider

+
+

You can select a default migration network for a KubeVirt provider in the OKD web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.

+
+
+

If you do not select a migration network, the default migration network is the pod network, which might not be optimal for disk transfer.
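The same default can also be set from the CLI by annotating the Provider CR; a sketch that assumes the forklift.konveyor.io/defaultTransferNetwork annotation and placeholder names:

----
# Point the provider's default transfer network at a
# NetworkAttachmentDefinition in the target namespace.
$ oc annotate provider kubevirt-target -n konveyor-forklift \
  forklift.konveyor.io/defaultTransferNetwork=my-transfer-network
----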

+
+
+ + + + + +
+
Note
+
+
+

You can override the default migration network of the provider by selecting a different network when you create a migration plan.

+
+
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration > Providers for virtualization.

    +
  2. +
  3. +

    On the right side of the provider, select Select migration network from the {kebab}.

    +
  4. +
  5. +

    Select a network from the list of available networks and click Select.

    +
  6. +
+
+ + +
diff --git a/modules/selecting-migration-network-for-vmware-source-provider/index.html b/modules/selecting-migration-network-for-vmware-source-provider/index.html
new file mode 100644
index 00000000000..ea8d6a2ef21
--- /dev/null
+++ b/modules/selecting-migration-network-for-vmware-source-provider/index.html
+

Selecting a migration network for a VMware source provider

+
+

You can select a migration network in the OKD web console for a source provider to reduce risk to the source environment and to improve performance.

+
+
+

Using the default network for migration can result in poor performance because the network might not have sufficient bandwidth. This situation can have a negative effect on the source platform because the disk transfer operation might saturate the network.

+
+
+

Note: You can also control the network from which disks are transferred from a host by using the Network File Copy (NFC) service in vSphere.

+
+
+
Prerequisites
+
    +
  • +

    The migration network must have sufficient throughput for disk transfer: a minimum speed of 10 Gbps.

    +
  • +
  • +

    The migration network must be accessible to the KubeVirt nodes through the default gateway.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    The source virtual disks are copied by a pod that is connected to the pod network of the target namespace.

    +
    +
    +
    +
  • +
  • +

    The migration network should have jumbo frames enabled.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration > Providers for virtualization.

    +
  2. +
  3. +

    Click the host number in the Hosts column beside a provider to view a list of hosts.

    +
  4. +
  5. +

    Select one or more hosts and click Select migration network.

    +
  6. +
  7. +

    Specify the following fields:

    +
    +
      +
    • +

      Network: Network name

      +
    • +
    • +

      ESXi host admin username: For example, root

      +
    • +
    • +

      ESXi host admin password: Password

      +
    • +
    +
    +
  8. +
  9. +

    Click Save.

    +
  10. +
  11. +

    Verify that the status of each host is Ready.

    +
    +

    If a host status is not Ready, the host might be unreachable on the migration network or the credentials might be incorrect. You can modify the host configuration and save the changes.

    +
    +
  12. +
+
+ + +
diff --git a/modules/selecting-migration-network/index.html b/modules/selecting-migration-network/index.html
new file mode 100644
index 00000000000..6e76ea32056
--- /dev/null
+++ b/modules/selecting-migration-network/index.html
+

Selecting a migration network for a source provider

+
+

You can select a migration network for a source provider in the Forklift web console for improved performance.

+
+
+

If a source network is not optimal for migration, a Warning icon is displayed beside the host number in the Hosts column of the provider list.

+
+
+
Prerequisites
+

The migration network has the following prerequisites:

+
+
+
    +
  • +

    Minimum speed of 10 Gbps.

    +
  • +
  • +

    Accessible to the OpenShift nodes through the default gateway. The source disks are copied by a pod that is connected to the pod network of the target namespace.

    +
  • +
  • +

    Jumbo frames enabled.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Click Providers.

    +
  2. +
  3. +

    Click the host number of a provider to view the host list and network details.

    +
  4. +
  5. +

    Select the host to be updated and click Select migration network.

    +
  6. +
  7. +

    Select a Network from the list of available networks.

    +
    +

    The network list displays only the networks accessible to all the selected hosts. The hosts must have access to the selected migration network.

    +
    +
  8. +
  9. +

    Click Check connection to verify the credentials.

    +
  10. +
  11. +

    Click Select to select the migration network.

    +
    +

    The migration network appears in the network details of the updated hosts.

    +
    +
  12. +
+
+ + +
diff --git a/modules/snip-certificate-options/index.html b/modules/snip-certificate-options/index.html
new file mode 100644
index 00000000000..8fef0833e46
--- /dev/null
+++ b/modules/snip-certificate-options/index.html
+
+
    +
  1. +

    Choose one of the following options for validating CA certificates:

    +
    +
      +
    • +

      Use a custom CA certificate: Migrate after validating a custom CA certificate.

      +
    • +
    • +

      Use the system CA certificate: Migrate after validating the system CA certificate.

      +
    • +
    • +

      Skip certificate validation: Migrate without validating a CA certificate.

      +
      +
        +
      1. +

        To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.

        +
      2. +
      3. +

        To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.

        +
      4. +
      5. +

        To skip certificate validation, toggle the Skip certificate validation switch to the right.

        +
      6. +
      +
      +
    • +
    +
    +
  2. +
  3. +

    Optional: Ask Forklift to fetch a custom CA certificate from the provider’s API endpoint URL.

    +
    +
      +
    1. +

      Click Fetch certificate from URL. The Verify certificate window opens.

      +
    2. +
    3. +

      If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.

      +
      +

      Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint.

      +
      +
    4. +
    +
    +
  4. +
+
+ + +
diff --git a/modules/snip-migrating-luns/index.html b/modules/snip-migrating-luns/index.html
new file mode 100644
index 00000000000..744a8f37172
--- /dev/null
+++ b/modules/snip-migrating-luns/index.html
+
+ + + + + +
+
Note
+
+
+
    +
  • +

    Unlike disk images that are copied from a source provider to a target provider, LUNs are detached, but not removed, from virtual machines in the source provider and then attached to the virtual machines (VMs) that are created in the target provider.

    +
  • +
  • +

    LUNs are not removed from the source provider during the migration in case fallback to the source provider is required. However, before re-attaching the LUNs to VMs in the source provider, ensure that the LUNs are not used by VMs on the target environment at the same time, which might lead to data corruption.

    +
  • +
+
+
+
+ + +
diff --git a/modules/snip_cold-warm-comparison-table/index.html b/modules/snip_cold-warm-comparison-table/index.html
new file mode 100644
index 00000000000..812435ed30a
--- /dev/null
+++ b/modules/snip_cold-warm-comparison-table/index.html
+
+

Both cold migration and warm migration have advantages and disadvantages, as described in the table that follows:

+
Table 1. Advantages and disadvantages of cold and warm migrations

| | Cold migration | Warm migration |
| -------- | -------------- | -------------- |
| Duration | Correlates to the amount of data on the disks | Correlates to the amount of data on the disks and VM utilization |
| Data transferred | Approximate sum of all disks | Approximate sum of all disks and VM utilization |
| VM downtime | High | Low |

+ + +
diff --git a/modules/snip_measured_boot_windows_vm/index.html b/modules/snip_measured_boot_windows_vm/index.html
new file mode 100644
index 00000000000..092714a8fd9
--- /dev/null
+++ b/modules/snip_measured_boot_windows_vm/index.html
+
+
Windows VMs which are using Measured Boot cannot be migrated
+

Microsoft Windows virtual machines (VMs), which are using the Measured Boot feature, cannot be migrated because Measured Boot is a mechanism to prevent any kind of device changes, by checking each start-up component, including the firmware, all the way to the boot driver.

+
+
+

The alternative to migration is to re-create the Windows VM directly on KubeVirt.

+
+ + +
diff --git a/modules/snip_performance/index.html b/modules/snip_performance/index.html
new file mode 100644
index 00000000000..475dd208cab
--- /dev/null
+++ b/modules/snip_performance/index.html
+
+

The data provided here was collected from testing in Red Hat Labs and is provided for reference only. 

+
+
+

Overall, these numbers should be considered to show the best-case scenarios.

+
+
+

The observed performance of migration can differ from these results and depends on several factors.

+
+ + +
diff --git a/modules/snip_permissions-info/index.html b/modules/snip_permissions-info/index.html
new file mode 100644
index 00000000000..049761d64c0
--- /dev/null
+++ b/modules/snip_permissions-info/index.html
+
+

If you are an administrator, you can see and work with components (providers, plans, etc.) for all projects.

+
+
+

If you are a non-administrator, you can see and work only with the components of projects for which you have permissions.

+
+
+ + + + + +
+
Tip
+
+
+

You can see which projects you have permissions for by clicking the Project list, which is in the upper-left of every page in the Migrations section except for the Overview.

+
+
+
+ + +
diff --git a/modules/snip_plan-limits/index.html b/modules/snip_plan-limits/index.html
new file mode 100644
index 00000000000..67c8de7072f
--- /dev/null
+++ b/modules/snip_plan-limits/index.html
+
+ + + + + +
+
Important
+
+
+

A plan cannot contain more than 500 VMs or 500 disks.

+
+
+
+ + +
diff --git a/modules/snip_qemu-guest-agent/index.html b/modules/snip_qemu-guest-agent/index.html
new file mode 100644
index 00000000000..0737e81d5a4
--- /dev/null
+++ b/modules/snip_qemu-guest-agent/index.html
+
+

VMware only: In cold migrations, when a package manager cannot be used during the migration, Forklift does not install the qemu-guest-agent daemon on the migrated VMs. This has some impact on the functionality of the migrated VMs, but overall, they are still expected to function.

+
+
+

To enable Forklift to automatically install qemu-guest-agent on the migrated VMs, ensure that your package manager can install the daemon during the first boot of the VM after migration.

+
+
+

If that is not possible, use your preferred automated or manual procedure to install qemu-guest-agent manually.
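For example, on RHEL-family guests a manual installation typically looks like this (use your distribution's package manager):

----
# Install and start the guest agent inside the migrated VM.
$ sudo yum install -y qemu-guest-agent
$ sudo systemctl enable --now qemu-guest-agent
----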

+
+ + +
diff --git a/modules/snip_secure_boot_issue/index.html b/modules/snip_secure_boot_issue/index.html
new file mode 100644
index 00000000000..9e0a6680e63
--- /dev/null
+++ b/modules/snip_secure_boot_issue/index.html
+
+
VMs with Secure Boot enabled might not be migrated automatically
+

Virtual machines (VMs) with Secure Boot enabled currently might not be migrated automatically. This is because Secure Boot, a security standard developed by members of the PC industry to ensure that a device boots using only software that is trusted by the Original Equipment Manufacturer (OEM), would prevent the VMs from booting on the destination provider. 

+
+
+

Workaround: The current workaround is to disable Secure Boot on the destination. For more details, see Disabling Secure Boot. (MTV-1548)

+
+ + +
diff --git a/modules/snip_vmware-name-change/index.html b/modules/snip_vmware-name-change/index.html
new file mode 100644
index 00000000000..540878e2a24
--- /dev/null
+++ b/modules/snip_vmware-name-change/index.html
+
+ + + + + +
+
Important
+
+
+

When you migrate a VMware 7 VM to an OKD 4.13+ platform that uses CentOS 7.9, the name of the network interfaces changes and the static IP configuration for the VM no longer works.

+
+
+
+ + +
diff --git a/modules/snip_vmware-permissions/index.html b/modules/snip_vmware-permissions/index.html
new file mode 100644
index 00000000000..06369f443ae
--- /dev/null
+++ b/modules/snip_vmware-permissions/index.html
+
+ + + + + +
+
Important
+
+
forklift-controller consistently failing to reconcile a plan, and returning an HTTP 500 error
+
+

There is an issue with the forklift-controller consistently failing to reconcile a Migration Plan, and subsequently returning an HTTP 500 error. This issue is caused when you specify the user permissions only on the virtual machine (VM).

+
+
+

In Forklift, you need to add permissions at the data center level, covering the storage, networks, switches, and so on, that are used by the VM, and then propagate the permissions to the child elements.

+
+
+

If you do not want to add this level of permissions, you must manually add the permissions to each required object on the VM host.

+
+
+
+ + +
diff --git a/modules/snip_vmware_esxi_nfc/index.html b/modules/snip_vmware_esxi_nfc/index.html
new file mode 100644
index 00000000000..6748a0baa87
--- /dev/null
+++ b/modules/snip_vmware_esxi_nfc/index.html
+
+ + + + + +
+
Note
+
+
+

You can also control the network from which disks are transferred from a host by using the Network File Copy (NFC) service in vSphere.

+
+
+
+ + +
diff --git a/modules/snippet_getting_web_console_url_cli/index.html b/modules/snippet_getting_web_console_url_cli/index.html
new file mode 100644
index 00000000000..516fd6d9d4f
--- /dev/null
+++ b/modules/snippet_getting_web_console_url_cli/index.html
+
+

+

+
+
+
+
$ kubectl get route virt -n konveyor-forklift \
+  -o custom-columns=:.spec.host
+
+
+
+

+ +The URL for the forklift-ui service that opens the login page for the Forklift web console is displayed.

+
+
+

+ +.Example output

+
+
+
+
https://virt-konveyor-forklift.apps.cluster.openshift.com
+
+
+ + +
+ + diff --git a/modules/snippet_getting_web_console_url_web/index.html b/modules/snippet_getting_web_console_url_web/index.html new file mode 100644 index 00000000000..ac2e683ec01 --- /dev/null +++ b/modules/snippet_getting_web_console_url_web/index.html @@ -0,0 +1,84 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
    +
  1. +

    Log in to the OKD web console.

    +
  2. +
  3. +

    Click Networking > Routes.

    +
  4. +
  5. +

    Select the {namespace} project in the Project: list.

    +
    +

    The URL for the forklift-ui service that opens the login page for the Forklift web console is displayed.

    +
    +
    +

    Click the URL to navigate to the Forklift web console.

    +
    +
  6. +
+
+ + +
+ + diff --git a/modules/snippet_ova_tech_preview/index.html b/modules/snippet_ova_tech_preview/index.html new file mode 100644 index 00000000000..1131299dc56 --- /dev/null +++ b/modules/snippet_ova_tech_preview/index.html @@ -0,0 +1,87 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview.

+
+
+ + + + + +
+
Important
+
+
+

Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product +features, enabling customers to test functionality and provide feedback during the development process.

+
+
+

For more information about the support scope of Red Hat Technology Preview +features, see https://access.redhat.com/support/offerings/techpreview/.

+
+
+
+ + +
+ + diff --git a/modules/source-vm-prerequisites/index.html b/modules/source-vm-prerequisites/index.html new file mode 100644 index 00000000000..99ddefb4ff3 --- /dev/null +++ b/modules/source-vm-prerequisites/index.html @@ -0,0 +1,127 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Source virtual machine prerequisites

+
+

The following prerequisites apply to all migrations:

+
+
+
    +
  • +

    ISO/CDROM disks must be unmounted.

    +
  • +
  • +

    Each NIC must contain one IPv4 and/or one IPv6 address.

    +
  • +
  • +

    The operating system of a VM must be certified and supported as a guest operating system with KubeVirt.

    +
  • +
  • +

    The name of a VM must not contain a period (.). Forklift changes any period in a VM name to a dash (-).

    +
  • +
  • +

    The name of a VM must not be the same as any other VM in the KubeVirt environment.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    Forklift automatically assigns a new name to a VM that does not comply with the rules.

    +
    +
    +

    Forklift makes the following changes when it automatically generates a new VM name:

    +
    +
    +
      +
    • +

      Excluded characters are removed.

      +
    • +
    • +

      Uppercase letters are switched to lowercase letters.

      +
    • +
    • +

      Any underscore (_) is changed to a dash (-).

      +
    • +
    +
    +
    +

    This feature allows a migration to proceed smoothly even if someone enters a VM name that does not follow the rules (a minimal sketch of the renaming rules appears after this list).

    +
    +
    +
    +
  • +
+
+
+
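A minimal sketch of the renaming rules, assuming standard sed and tr are available; Forklift's actual implementation may differ:

# Periods and underscores become dashes; uppercase becomes lowercase.
$ echo 'Web_Server.01' | sed 's/[._]/-/g' | tr '[:upper:]' '[:lower:]'
web-server-01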

VMs with Secure Boot enabled might not be migrated automatically, because Secure Boot would prevent them from booting on the destination provider. Workaround: Disable Secure Boot on the destination. For more details, see Disabling Secure Boot. (MTV-1548)

+
+
+


+
+ + +
+ + diff --git a/modules/storage-support/index.html b/modules/storage-support/index.html new file mode 100644 index 00000000000..bc80ea1a379 --- /dev/null +++ b/modules/storage-support/index.html @@ -0,0 +1,211 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Storage support and default modes

+
+

Forklift uses the following default volume and access modes for supported storage.

+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Default volume and access modes
Provisioner | Volume mode | Access mode

kubernetes.io/aws-ebs

Block

ReadWriteOnce

kubernetes.io/azure-disk

Block

ReadWriteOnce

kubernetes.io/azure-file

Filesystem

ReadWriteMany

kubernetes.io/cinder

Block

ReadWriteOnce

kubernetes.io/gce-pd

Block

ReadWriteOnce

kubernetes.io/hostpath-provisioner

Filesystem

ReadWriteOnce

manila.csi.openstack.org

Filesystem

ReadWriteMany

openshift-storage.cephfs.csi.ceph.com

Filesystem

ReadWriteMany

openshift-storage.rbd.csi.ceph.com

Block

ReadWriteOnce

kubernetes.io/rbd

Block

ReadWriteOnce

kubernetes.io/vsphere-volume

Block

ReadWriteOnce

+
+ + + + + +
+
Note
+
+
+

If the KubeVirt storage does not support dynamic provisioning, you must apply the following settings:

+
+
+
    +
  • +

    Filesystem volume mode

    +
    +

    Filesystem volume mode is slower than Block volume mode.

    +
    +
  • +
  • +

    ReadWriteOnce access mode

    +
    +

    ReadWriteOnce access mode does not support live virtual machine migration.

    +
    +
  • +
+
+
+

See Enabling a statically-provisioned storage class for details on editing the storage profile.

+
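A hedged sketch of applying these settings by patching the storage profile, assuming a CDI StorageProfile that is named after the storage class; see Enabling a statically-provisioned storage class for the documented procedure:

$ oc patch storageprofile <storage_class> --type merge \
  -p '{"spec":{"claimPropertySets":[{"accessModes":["ReadWriteOnce"],"volumeMode":"Filesystem"}]}}'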
+
+
+
+ + + + + +
+
Note
+
+
+

If your migration uses block storage and persistent volumes created with an EXT4 file system, increase the file system overhead in CDI to more than 10%. The default overhead that CDI assumes does not fully account for the space reserved for the root partition. If you do not increase the file system overhead by this amount, your migration might fail.

+
+
+
+
+ + + + + +
+
Note
+
+
+

When migrating from OpenStack, or when running a cold migration from RHV to the OCP cluster that MTV is deployed on, the migration allocates persistent volumes without using CDI. In these cases, you might need to adjust the file system overhead.

+
+
+

If the configured file system overhead, which has a default value of 10%, is too low, the disk transfer fails due to lack of space. In that case, increase the file system overhead.

+
+
+

In some cases, however, you might want to decrease the file system overhead to reduce storage consumption.

+
+
+

You can change the file system overhead by changing the value of controller_filesystem_overhead in the spec section of the forklift-controller CR, as described in Configuring the MTV Operator.

+
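A hedged sketch of such a change, assuming the default ForkliftController CR name, the openshift-mtv namespace, and that the value is a percentage; see Configuring the MTV Operator for the documented procedure:

# Raises the assumed file system overhead from the 10% default to 15%.
$ oc patch forkliftcontroller forklift-controller -n openshift-mtv --type merge \
  -p '{"spec":{"controller_filesystem_overhead":15}}'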
+
+
+ + +
+ + diff --git a/modules/technical-changes-2-7/index.html b/modules/technical-changes-2-7/index.html new file mode 100644 index 00000000000..a6ec6e67263 --- /dev/null +++ b/modules/technical-changes-2-7/index.html @@ -0,0 +1,73 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Technical changes

+
+

Forklift 2.7 has the following technical changes:

+
+
+
Upgraded virt-v2v to RHEL9 for warm migrations
+

Forklift previously used virt-v2v from Red Hat Enterprise Linux (RHEL) 8, which lacks bug fixes and features that are available in virt-v2v in RHEL 9. In Forklift 2.7.0, components are updated to RHEL 9 to improve the functionality of warm migration. (MTV-1152)

+
+ + +
+ + diff --git a/modules/technology-preview/index.html b/modules/technology-preview/index.html new file mode 100644 index 00000000000..b28a788d27e --- /dev/null +++ b/modules/technology-preview/index.html @@ -0,0 +1,88 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + +
+
Important
+
+
+

{FeatureName} is a Technology Preview feature only. Technology Preview features +are not supported with Red Hat production service level agreements (SLAs) and +might not be functionally complete. Red Hat does not recommend using them +in production. These features provide early access to upcoming product +features, enabling customers to test functionality and provide feedback during +the development process.

+
+
+

For more information about the support scope of Red Hat Technology Preview +features, see https://access.redhat.com/support/offerings/techpreview/.

+
+
+
+ + +
+ + diff --git a/modules/uninstalling-mtv-cli/index.html b/modules/uninstalling-mtv-cli/index.html new file mode 100644 index 00000000000..50403e1afdd --- /dev/null +++ b/modules/uninstalling-mtv-cli/index.html @@ -0,0 +1,144 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Uninstalling Forklift from the command line interface

+
+

You can uninstall Forklift from the command line interface (CLI).

+
+
+ + + + + +
+
Note
+
+
+

This action does not remove resources managed by the Forklift Operator, including custom resource definitions (CRDs) and custom resources (CRs). To remove them after uninstalling the Forklift Operator, you might need to delete the Forklift Operator CRDs manually.

+
+
+
+
+
Prerequisites
+
    +
  • +

    You must be logged in as a user with cluster-admin privileges.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Delete the forklift controller by running the following command:

    +
    +
    +
    $ oc delete ForkliftController --all -n openshift-mtv
    +
    +
    +
  2. +
  3. +

    Delete the subscription to the Forklift Operator by running the following command:

    +
    +
    +
    $ oc get subscription -o name | grep 'mtv-operator' | xargs oc delete
    +
    +
    +
  4. +
  5. +

    Delete the clusterserviceversion for the Forklift Operator by running the following command:

    +
    +
    +
    $ oc get clusterserviceversion -o name | grep 'mtv-operator' | xargs oc delete
    +
    +
    +
  6. +
  7. +

    Delete the plugin console CR by running the following command:

    +
    +
    +
    $ oc delete ConsolePlugin forklift-console-plugin
    +
    +
    +
  8. +
  9. +

    Optional: Delete the custom resource definitions (CRDs) by running the following command:

    +
    +
    +
    $ kubectl get crd -o name | grep 'forklift.konveyor.io' | xargs kubectl delete
    +
    +
    +
  10. +
  11. +

    Optional: Perform cleanup by deleting the Forklift project by running the following command:

    +
    +
    +
    $ oc delete project openshift-mtv
    +
    +
    +
  12. +
+
+ + +
+ + diff --git a/modules/uninstalling-mtv-ui/index.html b/modules/uninstalling-mtv-ui/index.html new file mode 100644 index 00000000000..a2f344f18eb --- /dev/null +++ b/modules/uninstalling-mtv-ui/index.html @@ -0,0 +1,168 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Uninstalling Forklift by using the OKD web console

+
+

You can uninstall Forklift by using the OKD web console.

+
+
+
Prerequisites
+
    +
  • +

    You must be logged in as a user with cluster-admin privileges.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Operators > Installed Operators.

    +
  2. +
  3. +

    Click Forklift Operator.

    +
    +

    The Operator Details page opens in the Details tab.

    +
    +
  4. +
  5. +

    Click the ForkliftController tab.

    +
  6. +
  7. +

    Click Actions and select Delete ForkliftController.

    +
    +

    A confirmation window opens.

    +
    +
  8. +
  9. +

    Click Delete.

    +
    +

    The controller is removed.

    +
    +
  10. +
  11. +

    Open the Details tab.

    +
    +

    The Create ForkliftController button appears instead of the controller you deleted. There is no need to click it.

    +
    +
  12. +
  13. +

    On the upper-right side of the page, click Actions and select Uninstall Operator.

    +
    +

    A confirmation window opens, displaying any operand instances.

    +
    +
  14. +
  15. +

    To delete all instances, select the Delete all operand instances for this operator checkbox. By default, the checkbox is cleared.

    +
    + + + + + +
    +
    Important
    +
    +
    +

    If your Operator configured off-cluster resources, these will continue to run and will require manual cleanup.

    +
    +
    +
    +
  16. +
  17. +

    Click Uninstall.

    +
    +

    The Installed Operators page opens, and the Forklift Operator is removed from the list of installed Operators.

    +
    +
  18. +
  19. +

    Click Home > Overview.

    +
  20. +
  21. +

    In the Status section of the page, click Dynamic Plugins.

    +
    +

    The Dynamic Plugins popup opens, listing forklift-console-plugin as a failed plugin. If the forklift-console-plugin does not appear as a failed plugin, refresh the web console.

    +
    +
  22. +
  23. +

    Click forklift-console-plugin.

    +
    +

    The ConsolePlugin details page opens in the Details tab.

    +
    +
  24. +
  25. +

    On the upper right-hand side of the page, click Actions and select Delete ConsolePlugin from the list.

    +
    +

    A confirmation window opens.

    +
    +
  26. +
  27. +

    Click Delete.

    +
    +

    The plugin is removed from the list of Dynamic plugins on the Overview page. If the plugin still appears, refresh the Overview page.

    +
    +
  28. +
+
+ + +
+ + diff --git a/modules/updating-validation-rules-version/index.html b/modules/updating-validation-rules-version/index.html new file mode 100644 index 00000000000..75e3eba65f4 --- /dev/null +++ b/modules/updating-validation-rules-version/index.html @@ -0,0 +1,127 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Updating the inventory rules version

+
+

You must update the inventory rules version each time you update the rules so that the Provider Inventory service detects the changes and triggers the Validation service.

+
+
+

The rules version is recorded in a rules_version.rego file for each provider.

+
+
+
Procedure
+
    +
  1. +

    Retrieve the current rules version:

    +
    +
    +
    $ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version
    +
    +
    +
    +
    Example output
    +
    +
    {
    +   "result": {
    +       "rules_version": 5
    +   }
    +}
    +
    +
    +
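    The GET line above denotes an HTTP request; a hedged equivalent using curl from inside the cluster, assuming the vmware provider and that the service host shown is reachable:

    $ curl -k https://forklift-validation/v1/data/io/konveyor/forklift/vmware/rules_version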
  2. +
  3. +

    Connect to the terminal of the Validation pod:

    +
    +
    +
    $ oc rsh <validation_pod>
    +
    +
    +
  4. +
  5. +

    Update the rules version in the /usr/share/opa/policies/io/konveyor/forklift/<provider>/rules_version.rego file.

    +
  6. +
  7. +

    Log out of the Validation pod terminal.

    +
  8. +
  9. +

    Verify the updated rules version:

    +
    +
    +
    $ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version
    +
    +
    +
    +
    Example output
    +
    +
    {
    +   "result": {
    +       "rules_version": 6
    +   }
    +}
    +
    +
    +
  10. +
+
+ + +
+ + diff --git a/modules/upgrading-mtv-ui/index.html b/modules/upgrading-mtv-ui/index.html new file mode 100644 index 00000000000..d134a2552ee --- /dev/null +++ b/modules/upgrading-mtv-ui/index.html @@ -0,0 +1,127 @@ + + + + + + + + Upgrading Forklift | Forklift Documentation + + + + + + + + + + + + + +Upgrading Forklift | Forklift Documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+

Upgrading Forklift

+
+

You can upgrade the Forklift Operator by using the OKD web console to install the new version.

+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Operators > Installed Operators > {operator-name-ui} > Subscription.

    +
  2. +
  3. +

    Change the update channel to the correct release.

    +
    +

    See Changing update channel in the OKD documentation.

    +
    +
  4. +
  5. +

    Confirm that Upgrade status changes from Up to date to Upgrade available. If it does not, restart the CatalogSource pod:

    +
    +
      +
    1. +

      Note the catalog source, for example, redhat-operators.

      +
    2. +
    3. +

      From the command line, retrieve the catalog source pod:

      +
      +
      +
      $ kubectl get pod -n openshift-marketplace | grep <catalog_source>
      +
      +
      +
    4. +
    5. +

      Delete the pod:

      +
      +
      +
      $ kubectl delete pod -n openshift-marketplace <catalog_source_pod>
      +
      +
      +
      +

      Upgrade status changes from Up to date to Upgrade available.

      +
      +
      +

      If you set Update approval on the Subscriptions tab to Automatic, the upgrade starts automatically.

      +
      +
    6. +
    +
    +
  6. +
  7. +

    If you set Update approval on the Subscriptions tab to Manual, approve the upgrade.

    +
    +

    See Manually approving a pending upgrade in the OKD documentation.

    +
    +
  8. +
  9. +

    If you are upgrading from Forklift 2.2 and have defined VMware source providers, edit the VMware provider by adding a VDDK init image. Otherwise, the update will change the state of any VMware providers to Critical. For more information, see Adding a vSphere source provider.

    +
  10. +
  11. +

    If you mapped to NFS on the OKD destination provider in Forklift 2.2, edit the AccessModes and VolumeMode parameters in the NFS storage profile. Otherwise, the upgrade will invalidate the NFS mapping. For more information, see Customizing the storage profile.

    +
  12. +
+
+ + +
+ + diff --git a/modules/using-must-gather/index.html b/modules/using-must-gather/index.html new file mode 100644 index 00000000000..4186218891d --- /dev/null +++ b/modules/using-must-gather/index.html @@ -0,0 +1,157 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Using the must-gather tool

+
+

You can collect logs and information about Forklift custom resources (CRs) by using the must-gather tool. You must attach a must-gather data file to all customer cases.

+
+
+

You can gather data for a specific namespace, migration plan, or virtual machine (VM) by using the filtering options.

+
+
+ + + + + +
+
Note
+
+
+

If you specify a non-existent resource in the filtered must-gather command, no archive file is created.

+
+
+
+
+
Prerequisites
+
    +
  • +

    You must be logged in to the KubeVirt cluster as a user with the cluster-admin role.

    +
  • +
  • +

    You must have the OKD CLI (oc) installed.

    +
  • +
+
+
+
Collecting logs and CR information
+
    +
  1. +

    Navigate to the directory where you want to store the must-gather data.

    +
  2. +
  3. +

    Run the oc adm must-gather command:

    +
    +
    +
    $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest
    +
    +
    +
    +

    The data is saved as /must-gather/must-gather.tar.gz. You can upload this file to a support case on the Red Hat Customer Portal.

    +
    +
  4. +
  5. +

    Optional: Run the oc adm must-gather command with the following options to gather filtered data:

    +
    +
      +
    • +

      Namespace:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- NS=<namespace> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Migration plan:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- PLAN=<migration_plan> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Virtual machine:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- VM=<vm_id> NS=<namespace> /usr/bin/targeted (1)
      +
      +
      +
      +
        +
      1. +

        Specify the VM ID as it appears in the Plan CR.

        +
      2. +
      +
      +
    • +
    +
    +
  6. +
+
+ + +
+ + diff --git a/modules/virt-migration-workflow/index.html b/modules/virt-migration-workflow/index.html new file mode 100644 index 00000000000..acd77fe2835 --- /dev/null +++ b/modules/virt-migration-workflow/index.html @@ -0,0 +1,209 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Detailed migration workflow

+
+

You can use the detailed migration workflow to troubleshoot a failed migration.

+
+
+

The workflow describes the following steps:

+
+
+

Warm migration or migration to a remote {ocp-name} cluster:

+
+
+
    +
  1. +

    When you create the Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a DataVolume CR for each source VM disk.

    +
    +

    For each VM disk:

    +
    +
  2. +
  3. +

    The Containerized Data Importer (CDI) Controller service creates a persistent volume claim (PVC) based on the parameters specified in the DataVolume CR.



    +
  4. +
  5. +

    If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

    +
  6. +
  7. +

    The CDI Controller service creates an importer pod.

    +
  8. +
  9. +

    The importer pod streams the VM disk to the PV.

    +
    +

    After the VM disks are transferred:

    +
    +
  10. +
  11. +

    When importing from VMware, the Migration Controller service creates a conversion pod with the PVCs attached to it.

    +
    +

    The conversion pod runs virt-v2v, which installs and configures device drivers on the PVCs of the target VM.

    +
    +
  12. +
  13. +

    The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.

    +
  14. +
  15. +

    If the VM ran on the source environment, the Migration Controller service powers on the VM, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    +
    +

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

    +
    +
  16. +
+
+
+
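When troubleshooting, it can help to inspect the resources that this workflow creates; a hedged example, assuming the plan runs in the konveyor-forklift namespace:

$ oc get migrations,datavolumes,pvc,pods -n konveyor-forklift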

Cold migration from oVirt or {osp} to the local {ocp-name} cluster:

+
+
+
    +
  1. +

    When you create a Migration custom resource (CR) to run a migration plan, the Migration Controller service creates, for each source VM disk, a PersistentVolumeClaim CR and either an OvirtVolumePopulator CR when the source is oVirt, or an OpenstackVolumePopulator CR when the source is {osp}.

    +
    +

    For each VM disk:

    +
    +
  2. +
  3. +

    The Populator Controller service creates a temporary persistent volume claim (PVC).

    +
  4. +
  5. +

    If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

    +
    +
      +
    • +

      The Migration Controller service creates a dummy pod to bind all PVCs. The name of the pod contains pvcinit.

      +
    • +
    +
    +
  6. +
  7. +

    The Populator Controller service creates a populator pod.

    +
  8. +
  9. +

    The populator pod transfers the disk data to the PV.

    +
    +

    After the VM disks are transferred:

    +
    +
  10. +
  11. +

    The temporary PVC is deleted, and the initial PVC points to the PV with the data.

    +
  12. +
  13. +

    The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.

    +
  14. +
  15. +

    If the VM ran on the source environment, the Migration Controller service powers on the VM, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    +
    +

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

    +
    +
  16. +
+
+
+

Cold migration from VMware to the local {ocp-name} cluster:

+
+
+
    +
  1. +

    When you create a Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a DataVolume CR for each source VM disk.

    +
    +

    For each VM disk:

    +
    +
  2. +
  3. +

    The Containerized Data Importer (CDI) Controller service creates a blank persistent volume claim (PVC) based on the parameters specified in the DataVolume CR.



    +
  4. +
  5. +

    If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

    +
  6. +
+
+
+

For all VM disks:

+
+
+
    +
  1. +

    The Migration Controller service creates a dummy pod to bind all PVCs. The name of the pod contains pvcinit.

    +
  2. +
  3. +

    The Migration Controller service creates a conversion pod for all PVCs.

    +
  4. +
  5. +

    The conversion pod runs virt-v2v, which converts the VM to the KVM hypervisor and transfers the disks' data to their corresponding PVs.

    +
    +

    After the VM disks are transferred:

    +
    +
  6. +
  7. +

    The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.

    +
  8. +
  9. +

    If the VM ran on the source environment, the Migration Controller service powers on the VM, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    +
    +

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

    +
    +
  10. +
+
+ + +
+ + diff --git a/modules/vmware-prerequisites/index.html b/modules/vmware-prerequisites/index.html new file mode 100644 index 00000000000..aac46167a4d --- /dev/null +++ b/modules/vmware-prerequisites/index.html @@ -0,0 +1,278 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

VMware prerequisites

+
+

It is strongly recommended to create a VDDK image to accelerate migrations. For more information, see Creating a VDDK image.

+
+
+

The following prerequisites apply to VMware migrations:

+
+
+
    +
  • +

    You must use a compatible version of VMware vSphere.

    +
  • +
  • +

    You must be logged in as a user with at least the minimal set of VMware privileges.

    +
  • +
  • +

    To access the virtual machine using a pre-migration hook, VMware Tools must be installed on the source virtual machine.

    +
  • +
  • +

    The VM operating system must be certified and supported for use as a guest operating system with KubeVirt and for conversion to KVM with virt-v2v.

    +
  • +
  • +

    If you are running a warm migration, you must enable changed block tracking (CBT) on the VMs and on the VM disks (a hedged example follows this list).

    +
  • +
  • +

    If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host.

    +
  • +
  • +

    It is strongly recommended to disable hibernation because Forklift does not support migrating hibernated VMs.

    +
  • +
+
+
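A hedged sketch of enabling CBT with the govc CLI, assuming govc is installed and the VM is powered off; the parameter names are standard vSphere advanced settings, and the disk key scsi0:0 is a hypothetical example:

# Enables CBT for the VM and for its first SCSI disk.
$ govc vm.change -vm <vm_name> -e ctkEnabled=TRUE -e scsi0:0.ctkEnabled=TRUE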
+ + + + + +
+
Important
+
+
+

In the event of a power outage, data might be lost for a VM with disabled hibernation. However, if hibernation is not disabled, the migration fails.

+
+
+
+
+ + + + + +
+
Note
+
+
+

Neither Forklift nor OpenShift Virtualization supports conversion of Btrfs for migrating VMs from VMware.

+
+
+
+

VMware privileges

+
+

The following minimal set of VMware privileges is required to migrate virtual machines to KubeVirt with Forklift.

+
+ + ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. VMware privileges
Privilege | Description

Virtual machine.Interaction privileges:

Virtual machine.Interaction.Power Off

Allows powering off a powered-on virtual machine. This operation powers down the guest operating system.

Virtual machine.Interaction.Power On

Allows powering on a powered-off virtual machine and resuming a suspended virtual machine.

Virtual machine.Guest operating system management by VIX API

Allows managing a virtual machine by the VMware VIX API.

+

Virtual machine.Provisioning privileges:

+
+
+ + + + + +
+
Note
+
+
+

All Virtual machine.Provisioning privileges are required.

+
+
+

Virtual machine.Provisioning.Allow disk access

Allows opening a disk on a virtual machine for random read and write access. Used mostly for remote disk mounting.

Virtual machine.Provisioning.Allow file access

Allows operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM.

Virtual machine.Provisioning.Allow read-only disk access

Allows opening a disk on a virtual machine for random read access. Used mostly for remote disk mounting.

Virtual machine.Provisioning.Allow virtual machine download

Allows read operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM.

Virtual machine.Provisioning.Allow virtual machine files upload

Allows write operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM.

Virtual machine.Provisioning.Clone template

Allows cloning of a template.

Virtual machine.Provisioning.Clone virtual machine

Allows cloning of an existing virtual machine and allocation of resources.

Virtual machine.Provisioning.Create template from virtual machine

Allows creation of a new template from a virtual machine.

Virtual machine.Provisioning.Customize guest

Allows customization of a virtual machine’s guest operating system without moving the virtual machine.

Virtual machine.Provisioning.Deploy template

Allows deployment of a virtual machine from a template.

Virtual machine.Provisioning.Mark as template

Allows marking an existing powered-off virtual machine as a template.

Virtual machine.Provisioning.Mark as virtual machine

Allows marking an existing template as a virtual machine.

Virtual machine.Provisioning.Modify customization specification

Allows creation, modification, or deletion of customization specifications.

Virtual machine.Provisioning.Promote disks

Allows promote operations on a virtual machine’s disks.

Virtual machine.Provisioning.Read customization specifications

Allows reading a customization specification.

Virtual machine.Snapshot management privileges:

Virtual machine.Snapshot management.Create snapshot

Allows creation of a snapshot from the virtual machine’s current state.

Virtual machine.Snapshot management.Remove Snapshot

Allows removal of a snapshot from the snapshot history.

Datastore privileges:

Datastore.Browse datastore

Allows exploring the contents of a datastore.

Datastore.Low level file operations

Allows performing low-level file operations - read, write, delete, and rename - in a datastore.

Sessions privileges:

Sessions.Validate session

Allows verification of the validity of a session.

Cryptographic privileges:

Cryptographic.Decrypt

Allows decryption of an encrypted virtual machine.

Cryptographic.Direct access

Allows access to encrypted resources.

+ + +
+ + diff --git a/redirects.json b/redirects.json new file mode 100644 index 00000000000..9e26dfeeb6e --- /dev/null +++ b/redirects.json @@ -0,0 +1 @@ +{} \ No newline at end of file diff --git a/robots.txt b/robots.txt new file mode 100644 index 00000000000..e087884e682 --- /dev/null +++ b/robots.txt @@ -0,0 +1 @@ +Sitemap: /sitemap.xml diff --git a/sitemap.xml b/sitemap.xml new file mode 100644 index 00000000000..099c3c9d467 --- /dev/null +++ b/sitemap.xml @@ -0,0 +1,1080 @@ + + + +/modules/about-cold-warm-migration/ + + +/documentation/doc-Release_notes/modules/about-cold-warm-migration/ + + +/documentation/modules/about-cold-warm-migration/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/about-cold-warm-migration/ + + +/documentation/modules/about-hook-crs-for-migration-plans-api/ + + +/documentation/doc-Release_notes/modules/about-hook-crs-for-migration-plans-api/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/about-hook-crs-for-migration-plans-api/ + + +/modules/about-hook-crs-for-migration-plans-api/ + + +/modules/about-rego-files/ + + +/documentation/modules/about-rego-files/ + + +/documentation/doc-Release_notes/modules/about-rego-files/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/about-rego-files/ + + +/modules/accessing-default-validation-rules/ + + +/documentation/doc-Release_notes/modules/accessing-default-validation-rules/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/accessing-default-validation-rules/ + + +/documentation/modules/accessing-default-validation-rules/ + + +/documentation/doc-Release_notes/modules/accessing-logs-cli/ + + +/modules/accessing-logs-cli/ + + +/documentation/modules/accessing-logs-cli/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/accessing-logs-cli/ + + +/documentation/modules/accessing-logs-ui/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/accessing-logs-ui/ + + +/documentation/doc-Release_notes/modules/accessing-logs-ui/ + + +/modules/accessing-logs-ui/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/adding-hook-crs-to-migration-plans-api/ + + +/modules/adding-hook-crs-to-migration-plans-api/ + + +/documentation/modules/adding-hook-crs-to-migration-plans-api/ + + +/documentation/doc-Release_notes/modules/adding-hook-crs-to-migration-plans-api/ + + +/modules/adding-source-provider/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/adding-source-provider/ + + +/documentation/doc-Release_notes/modules/adding-source-provider/ + + +/documentation/modules/adding-source-provider/ + + +/documentation/doc-Release_notes/modules/adding-virt-provider/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/adding-virt-provider/ + + +/documentation/modules/adding-virt-provider/ + + +/modules/adding-virt-provider/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/canceling-migration-cli/ + + +/modules/canceling-migration-cli/ + + +/documentation/doc-Release_notes/modules/canceling-migration-cli/ + + +/documentation/modules/canceling-migration-cli/ + + +/documentation/modules/canceling-migration-ui/ + + +/modules/canceling-migration-ui/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/canceling-migration-ui/ + + +/documentation/doc-Release_notes/modules/canceling-migration-ui/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/changing-precopy-intervals/ + + 
+/documentation/doc-Release_notes/modules/changing-precopy-intervals/ + + +/modules/changing-precopy-intervals/ + + +/documentation/modules/changing-precopy-intervals/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/collected-logs-cr-info/ + + +/modules/collected-logs-cr-info/ + + +/documentation/doc-Release_notes/modules/collected-logs-cr-info/ + + +/documentation/modules/collected-logs-cr-info/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/common-attributes/ + + +/modules/common-attributes/ + + +/documentation/doc-Release_notes/modules/common-attributes/ + + +/documentation/modules/common-attributes/ + + +/modules/compatibility-guidelines/ + + +/documentation/modules/compatibility-guidelines/ + + +/documentation/doc-Release_notes/modules/compatibility-guidelines/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/compatibility-guidelines/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/configuring-mtv-operator/ + + +/documentation/doc-Release_notes/modules/configuring-mtv-operator/ + + +/documentation/modules/configuring-mtv-operator/ + + +/modules/configuring-mtv-operator/ + + +/documentation/doc-Release_notes/modules/creating-migration-plan-2-6-3/ + + +/modules/creating-migration-plan-2-6-3/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-migration-plan-2-6-3/ + + +/documentation/modules/creating-migration-plan-2-6-3/ + + +/modules/creating-migration-plan/ + + +/documentation/doc-Release_notes/modules/creating-migration-plan/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-migration-plan/ + + +/documentation/modules/creating-migration-plan/ + + +/modules/creating-network-mapping/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-network-mapping/ + + +/documentation/modules/creating-network-mapping/ + + +/documentation/doc-Release_notes/modules/creating-network-mapping/ + + +/documentation/doc-Release_notes/modules/creating-storage-mapping/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-storage-mapping/ + + +/modules/creating-storage-mapping/ + + +/documentation/modules/creating-storage-mapping/ + + +/documentation/modules/creating-validation-rule/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-validation-rule/ + + +/modules/creating-validation-rule/ + + +/documentation/doc-Release_notes/modules/creating-validation-rule/ + + +/modules/creating-vddk-image/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-vddk-image/ + + +/documentation/doc-Release_notes/modules/creating-vddk-image/ + + +/documentation/modules/creating-vddk-image/ + + +/documentation/modules/error-messages/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/error-messages/ + + +/modules/error-messages/ + + +/documentation/doc-Release_notes/modules/error-messages/ + + +/documentation/doc-Release_notes/modules/increasing-nfc-memory-vmware-host/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/increasing-nfc-memory-vmware-host/ + + +/documentation/modules/increasing-nfc-memory-vmware-host/ + + +/modules/increasing-nfc-memory-vmware-host/ + + +/ + + +/modules/installing-mtv-operator/ + + +/documentation/modules/installing-mtv-operator/ + + +/documentation/doc-Release_notes/modules/installing-mtv-operator/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/installing-mtv-operator/ + + 
+/documentation/doc-Release_notes/modules/known-issues-2-7/ + + +/modules/known-issues-2-7/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/known-issues-2-7/ + + +/documentation/modules/known-issues-2-7/ + + +/modules/making-open-source-more-inclusive/ + + +/documentation/modules/making-open-source-more-inclusive/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/making-open-source-more-inclusive/ + + +/documentation/doc-Release_notes/modules/making-open-source-more-inclusive/ + + +/documentation/doc-Release_notes/master/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/master/ + + +/documentation/modules/migration-plan-options-ui/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/migration-plan-options-ui/ + + +/documentation/doc-Release_notes/modules/migration-plan-options-ui/ + + +/modules/migration-plan-options-ui/ + + +/documentation/modules/mtv-changelog-2-7/ + + +/documentation/doc-Release_notes/modules/mtv-changelog-2-7/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-changelog-2-7/ + + +/modules/mtv-changelog-2-7/ + + +/documentation/doc-Release_notes/modules/mtv-overview-page/ + + +/modules/mtv-overview-page/ + + +/documentation/modules/mtv-overview-page/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-overview-page/ + + +/documentation/modules/mtv-performance-addendum/ + + +/documentation/doc-Release_notes/modules/mtv-performance-addendum/ + + +/modules/mtv-performance-addendum/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-performance-addendum/ + + +/modules/mtv-performance-recommendation/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-performance-recommendation/ + + +/documentation/doc-Release_notes/modules/mtv-performance-recommendation/ + + +/documentation/modules/mtv-performance-recommendation/ + + +/documentation/doc-Release_notes/modules/mtv-resources-and-services/ + + +/modules/mtv-resources-and-services/ + + +/documentation/modules/mtv-resources-and-services/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-resources-and-services/ + + +/modules/mtv-selected-packages-2-7/ + + +/documentation/modules/mtv-selected-packages-2-7/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-selected-packages-2-7/ + + +/documentation/doc-Release_notes/modules/mtv-selected-packages-2-7/ + + +/documentation/modules/mtv-settings/ + + +/documentation/doc-Release_notes/modules/mtv-settings/ + + +/modules/mtv-settings/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-settings/ + + +/documentation/doc-Release_notes/modules/mtv-ui/ + + +/documentation/modules/mtv-ui/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-ui/ + + +/modules/mtv-ui/ + + +/modules/mtv-workflow/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-workflow/ + + +/documentation/modules/mtv-workflow/ + + +/documentation/doc-Release_notes/modules/mtv-workflow/ + + +/modules/network-prerequisites/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/network-prerequisites/ + + +/documentation/doc-Release_notes/modules/network-prerequisites/ + + +/documentation/modules/network-prerequisites/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/new-features-and-enhancements-2-7/ + + +/modules/new-features-and-enhancements-2-7/ + + +/documentation/doc-Release_notes/modules/new-features-and-enhancements-2-7/ + + 
+/documentation/modules/new-features-and-enhancements-2-7/ + + +/documentation/modules/new-migrating-virtual-machines-cli/ + + +/modules/new-migrating-virtual-machines-cli/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/new-migrating-virtual-machines-cli/ + + +/documentation/doc-Release_notes/modules/new-migrating-virtual-machines-cli/ + + +/documentation/modules/non-admin-permissions-for-ui/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/non-admin-permissions-for-ui/ + + +/modules/non-admin-permissions-for-ui/ + + +/documentation/doc-Release_notes/modules/non-admin-permissions-for-ui/ + + +/documentation/doc-Release_notes/modules/obtaining-console-url/ + + +/modules/obtaining-console-url/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/obtaining-console-url/ + + +/documentation/modules/obtaining-console-url/ + + +/modules/openstack-prerequisites/ + + +/documentation/modules/openstack-prerequisites/ + + +/documentation/doc-Release_notes/modules/openstack-prerequisites/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/openstack-prerequisites/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/ostack-app-cred-auth/ + + +/documentation/doc-Release_notes/modules/ostack-app-cred-auth/ + + +/documentation/modules/ostack-app-cred-auth/ + + +/modules/ostack-app-cred-auth/ + + +/documentation/doc-Release_notes/modules/ostack-token-auth/ + + +/documentation/modules/ostack-token-auth/ + + +/modules/ostack-token-auth/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/ostack-token-auth/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/ova-prerequisites/ + + +/documentation/modules/ova-prerequisites/ + + +/modules/ova-prerequisites/ + + +/documentation/doc-Release_notes/modules/ova-prerequisites/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/retrieving-validation-service-json/ + + +/documentation/modules/retrieving-validation-service-json/ + + +/modules/retrieving-validation-service-json/ + + +/documentation/doc-Release_notes/modules/retrieving-validation-service-json/ + + +/documentation/modules/retrieving-vmware-moref/ + + +/documentation/doc-Release_notes/modules/retrieving-vmware-moref/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/retrieving-vmware-moref/ + + +/modules/retrieving-vmware-moref/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rhv-prerequisites/ + + +/documentation/modules/rhv-prerequisites/ + + +/modules/rhv-prerequisites/ + + +/documentation/doc-Release_notes/modules/rhv-prerequisites/ + + +/documentation/modules/rn-2.0/ + + +/modules/rn-2.0/ + + +/documentation/doc-Release_notes/modules/rn-2.0/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.0/ + + +/documentation/modules/rn-2.1/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.1/ + + +/modules/rn-2.1/ + + +/documentation/doc-Release_notes/modules/rn-2.1/ + + +/documentation/modules/rn-2.2/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.2/ + + +/modules/rn-2.2/ + + +/documentation/doc-Release_notes/modules/rn-2.2/ + + +/documentation/modules/rn-2.3/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.3/ + + +/modules/rn-2.3/ + + +/documentation/doc-Release_notes/modules/rn-2.3/ + + +/documentation/modules/rn-2.4/ + + +/documentation/doc-Release_notes/modules/rn-2.4/ + + +/modules/rn-2.4/ + + 
+/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.4/ + + +/documentation/modules/rn-2.5/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.5/ + + +/documentation/doc-Release_notes/modules/rn-2.5/ + + +/modules/rn-2.5/ + + +/documentation/doc-Release_notes/modules/rn-2.6/ + + +/documentation/modules/rn-2.6/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.6/ + + +/modules/rn-2.6/ + + +/documentation/modules/rn-2.7/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.7/ + + +/modules/rn-2.7/ + + +/documentation/doc-Release_notes/modules/rn-2.7/ + + +/documentation/modules/rn-27-resolved-issues/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-27-resolved-issues/ + + +/documentation/doc-Release_notes/modules/rn-27-resolved-issues/ + + +/modules/rn-27-resolved-issues/ + + +/documentation/modules/running-migration-plan/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/running-migration-plan/ + + +/modules/running-migration-plan/ + + +/documentation/doc-Release_notes/modules/running-migration-plan/ + + +/documentation/modules/selecting-migration-network-for-virt-provider/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/selecting-migration-network-for-virt-provider/ + + +/modules/selecting-migration-network-for-virt-provider/ + + +/documentation/doc-Release_notes/modules/selecting-migration-network-for-virt-provider/ + + +/documentation/modules/selecting-migration-network-for-vmware-source-provider/ + + +/modules/selecting-migration-network-for-vmware-source-provider/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/selecting-migration-network-for-vmware-source-provider/ + + +/documentation/doc-Release_notes/modules/selecting-migration-network-for-vmware-source-provider/ + + +/documentation/modules/selecting-migration-network/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/selecting-migration-network/ + + +/modules/selecting-migration-network/ + + +/documentation/doc-Release_notes/modules/selecting-migration-network/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip-certificate-options/ + + +/documentation/modules/snip-certificate-options/ + + +/modules/snip-certificate-options/ + + +/documentation/doc-Release_notes/modules/snip-certificate-options/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip-migrating-luns/ + + +/documentation/modules/snip-migrating-luns/ + + +/modules/snip-migrating-luns/ + + +/documentation/doc-Release_notes/modules/snip-migrating-luns/ + + +/documentation/modules/snip_cold-warm-comparison-table/ + + +/modules/snip_cold-warm-comparison-table/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_cold-warm-comparison-table/ + + +/documentation/doc-Release_notes/modules/snip_cold-warm-comparison-table/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_measured_boot_windows_vm/ + + +/documentation/modules/snip_measured_boot_windows_vm/ + + +/modules/snip_measured_boot_windows_vm/ + + +/documentation/doc-Release_notes/modules/snip_measured_boot_windows_vm/ + + +/documentation/modules/snip_performance/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_performance/ + + +/modules/snip_performance/ + + +/documentation/doc-Release_notes/modules/snip_performance/ + + +/documentation/modules/snip_permissions-info/ + + +/modules/snip_permissions-info/ + + 
+/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_permissions-info/ + + +/documentation/doc-Release_notes/modules/snip_permissions-info/ + + +/documentation/modules/snip_plan-limits/ + + +/modules/snip_plan-limits/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_plan-limits/ + + +/documentation/doc-Release_notes/modules/snip_plan-limits/ + + +/documentation/modules/snip_qemu-guest-agent/ + + +/modules/snip_qemu-guest-agent/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_qemu-guest-agent/ + + +/documentation/doc-Release_notes/modules/snip_qemu-guest-agent/ + + +/documentation/modules/snip_secure_boot_issue/ + + +/documentation/doc-Release_notes/modules/snip_secure_boot_issue/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_secure_boot_issue/ + + +/modules/snip_secure_boot_issue/ + + +/documentation/doc-Release_notes/modules/snip_vmware-name-change/ + + +/documentation/modules/snip_vmware-name-change/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_vmware-name-change/ + + +/modules/snip_vmware-name-change/ + + +/documentation/doc-Release_notes/modules/snip_vmware-permissions/ + + +/documentation/modules/snip_vmware-permissions/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_vmware-permissions/ + + +/modules/snip_vmware-permissions/ + + +/documentation/doc-Release_notes/modules/snip_vmware_esxi_nfc/ + + +/documentation/modules/snip_vmware_esxi_nfc/ + + +/modules/snip_vmware_esxi_nfc/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_vmware_esxi_nfc/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snippet_getting_web_console_url_cli/ + + +/documentation/doc-Release_notes/modules/snippet_getting_web_console_url_cli/ + + +/documentation/modules/snippet_getting_web_console_url_cli/ + + +/modules/snippet_getting_web_console_url_cli/ + + +/documentation/doc-Release_notes/modules/snippet_getting_web_console_url_web/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snippet_getting_web_console_url_web/ + + +/documentation/modules/snippet_getting_web_console_url_web/ + + +/modules/snippet_getting_web_console_url_web/ + + +/documentation/doc-Release_notes/modules/snippet_ova_tech_preview/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snippet_ova_tech_preview/ + + +/documentation/modules/snippet_ova_tech_preview/ + + +/modules/snippet_ova_tech_preview/ + + +/documentation/doc-Release_notes/modules/source-vm-prerequisites/ + + +/documentation/modules/source-vm-prerequisites/ + + +/modules/source-vm-prerequisites/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/source-vm-prerequisites/ + + +/documentation/doc-Release_notes/modules/storage-support/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/storage-support/ + + +/documentation/modules/storage-support/ + + +/modules/storage-support/ + + +/documentation/doc-Release_notes/modules/technical-changes-2-7/ + + +/documentation/modules/technical-changes-2-7/ + + +/modules/technical-changes-2-7/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/technical-changes-2-7/ + + +/documentation/doc-Release_notes/modules/technology-preview/ + + +/documentation/modules/technology-preview/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/technology-preview/ + + +/modules/technology-preview/ + + +/documentation/doc-Release_notes/modules/uninstalling-mtv-cli/ 
+ + +/documentation/modules/uninstalling-mtv-cli/ + + +/modules/uninstalling-mtv-cli/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/uninstalling-mtv-cli/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/uninstalling-mtv-ui/ + + +/documentation/doc-Release_notes/modules/uninstalling-mtv-ui/ + + +/documentation/modules/uninstalling-mtv-ui/ + + +/modules/uninstalling-mtv-ui/ + + +/documentation/doc-Release_notes/modules/updating-validation-rules-version/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/updating-validation-rules-version/ + + +/documentation/modules/updating-validation-rules-version/ + + +/modules/updating-validation-rules-version/ + + +/documentation/doc-Release_notes/modules/upgrading-mtv-ui/ + + +/documentation/modules/upgrading-mtv-ui/ + + +/modules/upgrading-mtv-ui/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/upgrading-mtv-ui/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/using-must-gather/ + + +/documentation/doc-Release_notes/modules/using-must-gather/ + + +/documentation/modules/using-must-gather/ + + +/modules/using-must-gather/ + + +/documentation/doc-Release_notes/modules/virt-migration-workflow/ + + +/documentation/modules/virt-migration-workflow/ + + +/modules/virt-migration-workflow/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/virt-migration-workflow/ + + +/documentation/doc-Release_notes/modules/vmware-prerequisites/ + + +/documentation/modules/vmware-prerequisites/ + + +/modules/vmware-prerequisites/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/vmware-prerequisites/ + + +/documentation/doc-Migration_Toolkit_for_Virtualization/modules/issue_templates/issue/ + + +/documentation/doc-Release_notes/modules/issue_templates/issue/ + + +/documentation/modules/issue_templates/issue/ + + +/modules/issue_templates/issue/ + +