CLI Guide
Using the Migration Toolkit for Applications command-line interface to migrate your applications
Abstract
- Making open source more inclusive
- 1. Introduction to the MTA command-line interface
- 2. Supported MTA migration paths
- 3. Installing MTA command-line interface
- 4. Analyzing Java applications with MTA command-line interface
- 5. Analyzing applications written in languages other than Java with MTA command-line interface
- 6. Reviewing an analysis report
- 7. Performing a transformation with the MTA command-line interface
- 8. Generating platform assets for application deployment
- 9. MTA CLI known issues
- A. Reference material
- B. Contributing to the MTA project
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Chapter 1. Introduction to the MTA command-line interface
The Migration Toolkit for Applications (MTA) command-line interface (CLI) provides a comprehensive set of rules to assess the suitability of your applications for containerization and deployment on Red Hat OpenShift. By using the MTA CLI, you can assess and prioritize migration and modernization efforts for applications written in different languages. For example, you can use MTA to analyze applications written in the following languages:
- Java
- Go
- .NET
- Node.js
- Python
Analyzing applications written in the .NET language is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Analyzing applications written in the Python and Node.js languages is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The CLI produces detailed analysis reports without requiring the other MTA tools. You can use the CLI to customize MTA analysis options or to integrate with external automation tools.
Chapter 2. Supported MTA migration paths
You can run the Migration Toolkit for Applications (MTA) analysis to assess your applications' suitability for migration to multiple target platforms. MTA supports the following migration paths:
Table 2.1. Supported Java migration paths

Source platform ⇒ | Migration to JBoss EAP 7 & 8 | OpenShift (cloud readiness) | OpenJDK 11, 17, and 21 | Jakarta EE 9 | Camel 3 & 4 | Spring Boot in Red Hat Runtimes | Quarkus | Open Liberty |
---|---|---|---|---|---|---|---|---|
Oracle WebLogic Server | ✔ | ✔ | ✔ | - | - | - | - | - |
IBM WebSphere Application Server | ✔ | ✔ | ✔ | - | - | - | - | ✔ |
JBoss EAP 4 | ✘ [a] | ✔ | ✔ | - | - | - | - | - |
JBoss EAP 5 | ✔ | ✔ | ✔ | - | - | - | - | - |
JBoss EAP 6 | ✔ | ✔ | ✔ | - | - | - | - | - |
JBoss EAP 7 | ✔ | ✔ | ✔ | - | - | - | ✔ | - |
Thorntail | ✔ [b] | - | - | - | - | - | - | - |
Oracle JDK | - | ✔ | ✔ | - | - | - | - | - |
Camel 2 | - | ✔ | ✔ | - | ✔ | - | - | - |
Spring Boot | - | ✔ | ✔ | ✔ | - | ✔ | ✔ | - |
Any Java application | - | ✔ | ✔ | - | - | - | - | - |
Any Java EE application | - | - | - | ✔ | - | - | - | - |

[a] Although MTA does not currently provide rules for this migration path, Red Hat Consulting can assist with migration from any source platform to JBoss EAP 7.
[b] Requires JBoss Enterprise Application Platform expansion pack 2 (EAP XP 2).
Table 2.2. Supported .NET migration paths

Source platform ⇒ | OpenShift (cloud readiness) | Migration to .NET 8.0 |
---|---|---|
.NET Framework 4.5+ (Windows only) | ✔ | ✔ |
Analyzing applications written in the .NET language is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Additional resources
Chapter 3. Installing MTA command-line interface
You can install the Migration Toolkit for Applications (MTA) command-line interface (CLI) on Linux, Windows, or macOS operating systems.
You can also install the CLI for use with Docker on Windows. Note, however, that this is a Developer Preview feature only.
3.1. Installing the CLI by using a .zip file
You can install the Migration Toolkit for Applications (MTA) command-line interface (CLI) by using the downloadable .zip
file available on the official MTA download page.
Prerequisites

- Red Hat Container Registry Authentication for `registry.redhat.io`. Red Hat distributes container images from `registry.redhat.io`, which requires authentication. For more details, see Red Hat Container Registry Authentication.
  Note: This prerequisite does not apply to containerless mode. For more information, see Analyzing applications in containerless mode.
- You installed Java Development Kit (JDK) version 17 or later.
- You set the `JAVA_HOME` environment variable.
- You installed Maven version 3.9.9 or later with its binary added to the `$PATH` variable.
Procedure

- Navigate to the MTA download page and download one of the following operating system-specific CLI files or the `src` file:
  - mta-7.3.1-cli-linux-amd64.zip
  - mta-7.3.1-cli-linux-arm64.zip
  - mta-7.3.1-cli-darwin-amd64.zip
  - mta-7.3.1-cli-darwin-arm64.zip
  - mta-7.3.1-cli-windows-amd64.zip
  - mta-7.3.1-cli-windows-arm64.zip
  - mta-7.3.1-cli-src.zip
- Extract the `.zip` file to the `.kantra` directory inside your `$HOME` directory. The `.zip` file extracts the `mta-cli` binary, along with other required directories and files.
- Move the `mta-cli` binary to a directory listed in your `$PATH` variable.
  Note: You can place the `mta-cli` binary in any folder that is included in the `$PATH` variable. Alternatively, you can add a folder that contains `mta-cli` to `$PATH`. This way, you do not need to specify a full path when using the CLI.
3.2. Installing the CLI on a disconnected environment
When your system is in a disconnected environment, you can install the Migration Toolkit for Applications (MTA) command-line interface (CLI) by performing the following actions:
- Download the required images by using an external computer.
- Copy the downloaded images to the system on which you want to install the MTA CLI.
The following procedure applies only to container mode.
An analysis run in a disconnected environment usually produces fewer incidents because the dependency analysis cannot run accurately without access to Maven.
Prerequisites
- You downloaded the required MTA CLI binary from the Migration Toolkit for Applications Red Hat Developer page.
- You installed the Podman tool on your system.
For the analysis of Java applications, you enabled container runtime usage by setting the `--run-local` flag to `false`:

--run-local=false

The analysis of non-Java applications runs in container mode by default.
Procedure
On a connected device, perform the following steps:
Authenticate to registry.redhat.io:
$ podman login registry.redhat.io
Run the `mta-cli` binary file. The binary file pulls the required provider images. For example:

$ mta-cli analyze

Important: This command pulls only the required images. For example, if you run a command that requires Java images, a .NET image is not pulled.
Display the image list:
$ podman images
REPOSITORY                                                   TAG     IMAGE ID      CREATED      SIZE
registry.redhat.io/mta/mta-generic-external-provider-rhel9   7.3.1   8b8d7fa14570  13 days ago  692 MB
registry.redhat.io/mta/mta-cli-rhel9                         7.3.1   45422a12d936  13 days ago  1.6 GB
registry.redhat.io/mta/mta-java-external-provider-rhel9      7.3.1   4d6d0912a38b  13 days ago  715 MB
registry.redhat.io/mta/mta-dotnet-external-provider-rhel9    7.3.1   66ec9fc51408  13 days ago  1.27 GB
Save the images:
$ podman save <image_ID> -o <image_name>.image
- Copy the images onto a USB drive or directly to the file system of the disconnected device.
On the disconnected device, enter:
$ podman load --input <image_name>.image
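When several provider images are involved, the save step above can be scripted. The loop below only prints the podman commands it would run (image names are taken from the listing in this procedure); remove the `echo` to execute them on the connected device.

```shell
# Sketch: print a 'podman save' command per required image.
# Image names are examples from the listing above; adjust to your output.
IMAGES="mta-cli-rhel9 mta-java-external-provider-rhel9 mta-generic-external-provider-rhel9"
COUNT=0
for img in $IMAGES; do
  echo podman save "registry.redhat.io/mta/$img:7.3.1" -o "$img.image"
  COUNT=$((COUNT+1))
done
echo "$COUNT save commands generated"
```

The resulting `.image` files are what you copy to the disconnected device and load with `podman load --input`.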
3.3. Installing the CLI for use with Docker on Windows
To migrate applications built with .NET Framework version 4.5 or later on Microsoft Windows to cross-platform .NET 8.0, you must install the CLI for use with Docker on Windows. To do so, you must first configure Docker to use Windows containers.
Prerequisites
- A host with Windows 11+ 64-bit version 21H2 or higher.
- You downloaded the Docker Desktop for Windows installation program. For more details, see Install Docker Desktop on Windows.
Procedure
- Open a PowerShell with Administrator privileges.
Ensure Hyper-V is installed and enabled:
PS C:\Users\<user_name>> Enable-WindowsOptionalFeature -Online `
    -FeatureName Microsoft-Hyper-V-All
PS C:\Users\<user_name>> Enable-WindowsOptionalFeature -Online `
    -FeatureName Containers
NoteYou might need to reboot Windows for the change to take effect.
Install Docker Desktop on Windows. Run the installer by double-clicking the `Docker_Desktop_Installer.exe` file.
By default, Docker Desktop is installed to the `C:\Program Files\Docker\Docker` path.
Ensure that Docker will run Windows containers as the backend instead of Linux containers:
- In the Windows task bar, right-click on the Docker icon.
- Click Switch to Windows containers.
In PowerShell, create a folder for MTA:
PS C:\Users\<user_name>> mkdir C:\Users\<user_name>\MTA
Extract the `mta-7.3.1-cli-windows.zip` file to the `MTA` folder:

PS C:\Users\<user_name>> cd C:\Users\<user_name>\Downloads
PS C:\Users\<user_name>> Expand-Archive `
    -Path "mta-7.3.1-cli-windows.zip" `
    -DestinationPath "C:\Users\<user_name>\MTA"
Ensure that Docker is running Windows containers and that `OS/Arch` is set to `windows/amd64`:

PS C:\Users\<user_name>> docker version
Client:
 Version:           27.0.3
 API version:       1.46
 Go version:        go1.21.11
 Git commit:        7d4bcd8
 Built:             Sat Jun 29 00:03:32 2024
 OS/Arch:           windows/amd64
 Context:           desktop-windows

Server: Docker Desktop 4.32.0 (157355)
 Engine:
  Version:          27.0.3
  API version:      1.46 (minimum version 1.24)
  Go version:       go1.21.11
  Git commit:       662f78c
  Built:            Sat Jun 29 00:02:13 2024
  OS/Arch:          windows/amd64
  Experimental:     false
Set the `CONTAINER_TOOL` environment variable to use Docker:

PS C:\Users\<user_name>> $env:CONTAINER_TOOL="C:\Windows\system32\docker.exe"

Set the `DOTNET_PROVIDER_IMG` environment variable to use the upstream `dotnet-external-provider`:

PS C:\Users\<user_name>> $env:DOTNET_PROVIDER_IMG="quay.io/konveyor/dotnet-external-provider:v0.5.0"

Set the `RUNNER_IMG` environment variable to use the upstream image:

PS C:\Users\<user_name>> $env:RUNNER_IMG="quay.io/konveyor/kantra:v0.5.0"
Chapter 4. Analyzing Java applications with MTA command-line interface
Depending on your scenario, you can use the Migration Toolkit for Applications (MTA) CLI to perform the following actions:
- Run the analysis against a single application.
Run the analysis against multiple applications:

- In MTA versions earlier than 7.1.0, you can enter a series of `analyze` commands, each against one application and each generating a separate report. For more information, see Running the MTA CLI against an application.
- In MTA version 7.1.0 and later, you can use the `--bulk` option to analyze multiple applications at once and generate a single report. Note that this feature is a Developer Preview feature only. For more information, see Analyzing multiple applications.

Starting from MTA version 7.2.0, you can run the application analysis for Java applications in containerless mode. This option is set by default and is used automatically only if all requirements are met. For more information, see Analyzing an application in containerless mode.

However, if you want to analyze applications in languages other than Java or, for example, use transformation commands, you still need to use containers.
The MTA CLI supports source code and binary analysis by using `analyzer-lsp`, a tool that evaluates rules by using language providers.
4.1. Analyzing a single application
You can use the Migration Toolkit for Applications (MTA) CLI to perform an application analysis for a single application.
Extracting the list of dependencies from compiled Java binaries is not always possible during the analysis, especially if the dependencies are not embedded within the binary.
Procedure
Optional: List available target technologies for an analysis:
$ mta-cli analyze --list-targets
Run the analysis:
$ mta-cli analyze --input <path_to_input> --output <path_to_output> --source <source_name> --target <target_name>
Specify the following arguments:

- `--input`: An application to be evaluated.
- `--output`: An output directory for the generated reports. `mta-cli analyze` creates the following analysis reports:

./
├── analysis.log
├── dependencies.yaml
├── output.yaml
├── shim.log
├── static-report
└── static-report.log

- `--source`: A source technology for the application migration, for example, `weblogic`.
- `--target`: A target technology for the application migration, for example, `eap8`.
Access the generated analysis report:

- In the output of the `mta-cli analyze` command, copy the path to the `index.html` analysis report file:

Report created: <output_report_directory>/index.html
Access it at this URL: file:///<output_report_directory>/index.html

- Paste the path into the browser of your choice. Alternatively, press Ctrl and click the path to the report file.
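After a run completes, you can sanity-check the output directory against the report layout listed above. This sketch simulates a finished run with `mkdir` and `touch` so it is self-contained; point `OUTPUT` at your real `--output` directory instead.

```shell
# Sketch: verify that an analysis produced the expected artifacts.
OUTPUT=./mta-output
mkdir -p "$OUTPUT/static-report"                      # simulation only
touch "$OUTPUT/analysis.log" "$OUTPUT/dependencies.yaml" "$OUTPUT/output.yaml"

MISSING=0
for f in analysis.log dependencies.yaml output.yaml static-report; do
  [ -e "$OUTPUT/$f" ] || { echo "missing: $f"; MISSING=1; }
done
[ "$MISSING" -eq 0 ] && echo "all expected artifacts present"
```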
Additional resources
4.2. Analyzing multiple applications
You can use the Migration Toolkit for Applications (MTA) CLI to perform an application analysis for multiple applications at once and generate a combined report.
Analyzing multiple applications is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Procedure
Run the analysis for multiple applications.

Important: You must enter one input per `analyze` command, but make sure to enter the same output directory for all inputs.

For example, to analyze example applications `A`, `B`, and `C`, enter the following commands:

For input `A`, enter:

$ mta-cli analyze --bulk --input <path_to_input_A> --output <path_to_output_ABC> --source <source_A> --target <target_A>

For input `B`, enter:

$ mta-cli analyze --bulk --input <path_to_input_B> --output <path_to_output_ABC> --source <source_B> --target <target_B>

For input `C`, enter:

$ mta-cli analyze --bulk --input <path_to_input_C> --output <path_to_output_ABC> --source <source_C> --target <target_C>
- Access the analysis report. MTA generates a single report, listing all issues that must be resolved before the applications can be migrated.
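Because each `--bulk` invocation takes exactly one input while sharing the output directory, the commands above lend themselves to a loop. The input paths and target below are placeholders, and the `echo` keeps the sketch side-effect free; remove it to run the analyses for real.

```shell
# Sketch: one --bulk analysis per application, all into the same output dir.
OUTPUT=./combined-report
BULK_RUNS=0
for app in ./apps/A ./apps/B ./apps/C; do             # placeholder inputs
  echo mta-cli analyze --bulk --input "$app" --output "$OUTPUT" --target eap8
  BULK_RUNS=$((BULK_RUNS+1))
done
```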
Additional resources
4.3. Analyzing an application in containerless mode
Starting from MTA 7.2.0, you can perform an application analysis for Java applications by using the MTA CLI without installing a container runtime.

In MTA 7.2.0 and later, containerless mode is the default. To enable container runtime usage for the analysis of Java applications, you must set the `--run-local` flag to `false`:

--run-local=false

The analysis of other applications runs in container mode automatically.
Prerequisites
- You installed the MTA CLI. For more information, see Installing the CLI by using a .zip file.
- You installed Java Development Kit (JDK) version 17 or later.
- If you use OpenJDK on Red Hat Enterprise Linux (RHEL) or Fedora, you installed the Java `devel` package.
- You installed Maven version 3.9.9 or later.
  The CLI assumes that the path to the `mvn` binary is correctly registered in the system variable. Therefore, ensure that you added `mvn` to the following variable:
  - `Path` for Windows.
  - `PATH` for Linux and macOS.
- You set the `JAVA_HOME` environment variable.
- You set the `JVM_MAX_MEM` system variable.
  Note: If you do not set `JVM_MAX_MEM`, the analysis might hang because Java might require more memory than the default `JVM_MAX_MEM` value.
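Before the first containerless run, it can help to confirm the prerequisites above in one pass. This sketch only reports what it finds; nothing here is specific to MTA beyond the variable names listed in the prerequisites.

```shell
# Sketch: report the state of the containerless-mode prerequisites.
for tool in java mvn; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: NOT found"
  fi
done
if [ -n "${JAVA_HOME:-}" ]; then echo "JAVA_HOME=$JAVA_HOME"; else echo "JAVA_HOME is not set"; fi
if [ -n "${JVM_MAX_MEM:-}" ]; then echo "JVM_MAX_MEM=$JVM_MAX_MEM"; else echo "JVM_MAX_MEM is not set"; fi
CHECK_DONE=yes
```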
Procedure
- Optional: Display all `mta-cli analyze` command options:

$ mta-cli analyze --help

- Run the application analysis:

$ mta-cli analyze --overwrite --input <path_to_input> --output <path_to_output> --target <target_source>

Note: The `--overwrite` option overwrites the output folder if it exists.
Additional resources
4.4. The analyze command options
The following are the options that you can use together with the `mta-cli analyze` command to adjust the command behavior to your needs.

Table 4.1. The mta-cli analyze command options
Option | Description |
---|---|
|
Analyze open-source libraries. |
|
Set the flag to
When you disable Maven search, MTA at first tries to determine dependencies from the compiled JAR file. If this method does not succeed, MTA goes through the directory structure to determine dependencies. This method may not produce a reliable dependency classification since the package structure can differ from what is expected by MTA. You may see more dependencies in the
By default, |
|
The number of lines of source code to include in the output for each incident. The default is 100. |
|
A directory for dependencies. |
|
Run default rulesets with analysis. The default is |
|
Display the available flags for the |
|
An HTTP proxy string URL. |
|
An HTTPS proxy string URL. |
|
An expression to select incidents based on custom variables, for example: !package=io.demo.config-utils |
|
A path to the application source code or a binary. |
|
A Jaeger endpoint to collect traces. |
|
Create analysis and dependence output as a JSON file. |
|
Run rules based on specified label selector expression. |
|
List all languages in the source application. This flag is not supported for binary applications. |
|
List available supported providers. |
|
List rules for available migration sources. |
|
List rules for available migration targets. |
|
A path to the custom Maven settings file to use. |
|
An analysis mode. Must be set to either of the following values:
|
|
Proxy-excluded URLs (relevant only with proxy). |
|
A path to the directory for analysis output. |
|
Overwrite the output directory. |
|
A filename or directory that contains rule files. |
|
Do not generate the static report. |
|
A source technology to consider for the analysis. To specify multiple sources, repeat the parameter, for example: --source <source_1> --source <source_2> ... |
|
A target technology to consider for the analysis. To specify multiple targets, repeat the parameter, for example: --target <target_1> --target <target_2> ... |
|
A log level. The default is 4. |
|
Do not clean up temporary resources. |
Chapter 5. Analyzing applications written in languages other than Java with MTA command-line interface
Starting from Migration Toolkit for Applications (MTA) version 7.1.0, you can run the application analysis on applications written in languages other than Java. You can perform the analysis in either of the following ways:

- Select a supported language provider to run the analysis for.
- Override the existing supported language provider with your own unsupported language provider, and then run the analysis on it.
Analyzing applications written in languages other than Java is only possible in container mode. You can use the containerless CLI only for Java applications. For more information, see Analyzing an application in containerless mode.
5.1. Analyzing an application for the selected supported language provider
You can explicitly set the supported language provider that matches your application’s language and run the analysis for it.
Prerequisites
- You have the latest version of MTA CLI installed on your system.
Procedure
List language providers supported for the analysis:
$ mta-cli analyze --list-providers
Run the application analysis for the selected language provider:
$ mta-cli analyze --input <path_to_input> --output <path_to_output> --provider <language_provider> --rules <path_to_custom_rules>
Important: If you do not set the `--provider` option, the analysis might fail because it detects unsupported providers. The analysis completes without `--provider` only if all discovered providers are supported.
5.2. Analyzing an application for an unsupported language provider
You can run the analysis for an unsupported language provider. To do so, you must override the existing supported language provider with your own unsupported language provider.

You must create a configuration file for your unsupported language provider before overriding the supported provider.
Prerequisites
You created a configuration file for your unsupported language provider, for example:

[
  {
    "name": "java",
    "address": "localhost:14651",
    "initConfig": [
      {
        "location": "<java-app-path>",
        "providerSpecificConfig": {
          "bundles": "<bundle-path>",
          "jvmMaxMem": "2G"
        },
        "analysisMode": "source-only"
      }
    ]
  }
]
Procedure
Override an existing supported language provider with your unsupported provider and run the analysis:
$ mta-cli analyze --provider-override <path_to_configuration_file> --output <path_to_output> --rules <path_to_custom_rules>
Chapter 6. Reviewing an analysis report
After analyzing an application, you can access an analysis report to check the details of the application migration effort.
6.1. Accessing an analysis report
When you run an application analysis, a report is generated in the output directory that you specify by using the --output
argument in the command line.
Procedure
- Copy the path of the `index.html` file from the analysis output and paste it into a browser of your choice:

Report created: <output_report_directory>/index.html
Access it at this URL: file:///<output_report_directory>/index.html

- Alternatively, press Ctrl and click the path of the `index.html` file.
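If you prefer to build the URL yourself rather than copy it from the command output, the `file://` form follows directly from the `--output` directory, as this small sketch shows; the directory path is a placeholder.

```shell
# Sketch: derive the report URL from the analysis output directory.
OUTPUT_DIR="/tmp/mta-output"                          # placeholder path
REPORT_URL="file://$OUTPUT_DIR/index.html"
echo "$REPORT_URL"
```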
6.2. Analysis report sections
The following are sections of an analysis report that are available after the application analysis is complete. These sections contain additional details about the migration of an application.
You can only review the report applicable to the current application.
Insights is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Table 6.1. Analysis report sections
Section | Description |
---|---|
Dashboard |
An overview of the incidents and total story points, sorted by category. |
Issues |
A concise summary of all issues and their details that require attention. |
Dependencies |
All Java-packaged dependencies found within the application. |
Technologies |
All embedded libraries grouped by functionality. Use this report to display the technologies used in each application. |
Insights |
Information about a violation generated by a rule with zero effort. Issues are generated by general rules, whereas string tags are generated by tagging rules. String tags indicate the presence of a technology but do not show the code location. Insights contain information about the technologies used in the application and their usage in the code. Insights do not impact the migration. For example, an insight can come from a rule that searches for deprecated API usage that does not affect the current migration but can be tracked and fixed when needed in the future. Unlike issues, insights do not need to be fixed for a successful migration. They are generated by any rule that does not have a positive effort value and category assigned. They might have a message and tag. |
6.3. Reviewing the analysis issues and incidents
After an analysis is complete, you can review issues that might appear during an application migration. Each issue contains a list of files where a rule matched one or more times. These files include all the incidents within the issue. Each incident contains a detailed explanation of the issue and how to fix it.
Procedure
- Open the analysis report. For more information, see Accessing an analysis report.
- Click Issues.
- Click on the issue you want to check.
- Under the File tab, click on a file to display an incident or incidents that triggered the issue.
Display the incident message by hovering over the line that triggered the incident, for example:

Use the Quarkus Maven plugin by adding the following sections to the pom.xml file:

<properties>
    <quarkus.platform.group-id>io.quarkus.platform</quarkus.platform.group-id>
    <quarkus.platform.version>3.1.0.Final</quarkus.platform.version>
</properties>
<build>
    <plugins>
        <plugin>
            <groupId>${quarkus.platform.group-id}</groupId>
            <artifactId>quarkus-maven-plugin</artifactId>
            <version>${quarkus.platform.version}</version>
            <extensions>true</extensions>
            <executions>
                <execution>
                    <goals>
                        <goal>build</goal>
                        <goal>generate-code</goal>
                        <goal>generate-code-tests</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
Chapter 7. Performing a transformation with the MTA command-line interface
You can use transformation to perform the following actions:
- Transform Java application source code by using the `transform openrewrite` command.
- Convert XML rules to YAML rules by using the `transform rules` command.
Performing transformation requires the container runtime to be configured.
7.1. Transforming applications source code
To update Java libraries or frameworks, for example, `javax` or Spring Boot, you can transform Java application source code by using the `transform openrewrite` command. The `openrewrite` subcommand allows running OpenRewrite recipes on source code.

You can use only a single target to run the `transform openrewrite` command.
Prerequisites
- You configured the container runtime.
Procedure
- Display the available OpenRewrite recipes:

$ mta-cli transform openrewrite --list-targets

- Transform the application source code:

$ mta-cli transform openrewrite --input=<path_to_source_code> --target=<target_from_the_list>

Verification

- Inspect the target application source code `diff` to see the transformation.
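If the source tree is under git, the verification step can use `git diff`. The snippet below simulates a transformation in a throwaway repository, with a javax-to-jakarta import edit standing in for what an OpenRewrite recipe would change; on a real project you would simply run `git diff` in the transformed checkout.

```shell
# Sketch: inspect a transformation diff in a throwaway git repo.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
printf 'import javax.inject.Inject;\n' > App.java
git add App.java
printf 'import jakarta.inject.Inject;\n' > App.java   # simulated recipe edit
git diff --stat                                        # summary of changed files
CHANGED_FILES=$(git diff --name-only)
```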
Additional resources
7.2. Available OpenRewrite recipes
The following are the OpenRewrite recipes that you can use for transforming application source code.
Table 7.1. Available OpenRewrite recipes
Migration path | Purpose | The rewrite.config file location | Active recipes |
---|---|---|---|
Java EE to Jakarta EE |
Replace import of
Replace |
|
|
Java EE to Jakarta EE |
Rename bootstrapping files. |
|
|
Java EE to Jakarta EE |
Transform the |
|
|
Spring Boot to Quarkus |
Replace |
|
|
7.3. The openrewrite command options
The following are the options that you can use together with the `mta-cli transform openrewrite` command to adjust the command behavior to your needs.
Table 7.2. The mta-cli transform openrewrite command options
Option | Description |
---|---|
|
A target goal. The default is |
|
Display all |
|
A path to the application source code directory. |
|
List all available OpenRewrite recipes. |
|
A path to a custom Maven settings file. |
|
A target OpenRewrite recipe. |
|
A log level. The default is |
|
Do not clean up temporary resources. |
7.4. Converting XML rules to YAML rules
You can convert MTA XML rules to `analyzer-lsp` YAML rules, which are easier to maintain, by using the `mta-cli transform rules` command. To convert the rules, the `rules` subcommand uses the `windup-shim` tool.

The `mta-cli analyze` command also automatically converts XML rules to YAML rules.

`analyzer-lsp` is the tool that evaluates the rules for the language providers and determines rule matches.
Prerequisites
- You have the Podman tool installed and running.
- If your system is in a disconnected environment, you copied Podman images to the file system of the disconnected device and uploaded these images to the local Podman.
Procedure
- Convert the XML rules to the YAML rules:
$ mta-cli transform rules --input=<path_to_xml_rules> --output=<path_to_output_directory>
Additional resources
7.5. The rules command options
The following are the options that you can use together with the `mta-cli transform rules` command to adjust the command behavior to your needs.
Table 7.3. The mta-cli transform rules command options
Option | Description |
---|---|
|
Display all |
|
A path to XML rule files or a directory. |
|
A path to the output directory. |
|
A log level. The default is |
Chapter 8. Generating platform assets for application deployment
Starting from MTA version 7.3.0, you can use the `discover` and `generate` commands in containerless mode to automatically generate the manifests needed to deploy a Cloud Foundry (CF) application on OpenShift Container Platform:

- Use the `discover` command to generate the discovery manifest in YAML format from the CF application manifest. The discovery manifest preserves the specifications found in the CF manifest that define the metadata, runtime, and platform configurations.
- Use the `generate` command to generate the deployment manifest for OpenShift Container Platform (OCP) deployments by using the discovery manifest. The deployment manifest is generated by using a templating engine, such as Helm, that converts the discovery manifest into a Kubernetes-native format. You can also use this command to generate non-Kubernetes manifests, such as a Dockerfile or a configuration file.
Generating platform assets for application deployment is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Benefits of generating deployment assets
Generating deployment assets has the following benefits:
- Generating Kubernetes and non-Kubernetes deployment manifests.
- Generating deployment manifests by using familiar template engines, for example, Helm, that are widely used for Kubernetes deployments.
- Adhering to Kubernetes best practices when preparing the deployment manifest by using Helm templates.
8.1. Generating a discovery manifest
You can generate the discovery manifest for the Cloud Foundry (CF) application by using the discover
command. The discovery manifest preserves configurations, such as application properties, resource allocations, environment variables, and service bindings found in the CF manifest.
Prerequisites
- You have Cloud Foundry (v3) as a source platform.
- You have OpenShift Container Platform as a target platform.
- You installed MTA CLI version 7.3.0.
- You have a CF application manifest as a YAML file.
Procedure
- Open the terminal application and navigate to the <MTA_HOME>/ directory.
- List the supported platforms for the discovery process:
  $ mta-cli discover --list-platforms
- Generate the discovery manifest for a CF application as an output file:
  $ mta-cli discover cloud-foundry \
      --input <path_to_application-manifest> \
      --output <path_to_discovery-manifest>
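For illustration, the kind of field mapping the discovery step performs can be sketched in Python. This is a hypothetical sketch inferred only from the example manifests shown later in this chapter, not the actual MTA implementation; the assumed CF defaults (a timeout of 60 and a single instance) are marked in the comments.

```python
# Illustrative sketch only -- not the MTA implementation.
# Maps a few Cloud Foundry manifest fields to the camel-case keys
# that appear in the generated discovery manifest
# (for example, random-route -> randomRoute).

def discover_cf(cf_manifest: dict) -> dict:
    """Build a discovery-manifest-like dict from a CF application manifest dict."""
    discovery = {
        "name": cf_manifest.get("name"),
        "randomRoute": cf_manifest.get("random-route", False),
        "timeout": cf_manifest.get("timeout", 60),      # assumed CF default: 60
        "instances": cf_manifest.get("instances", 1),   # assumed CF default: 1
    }
    if "buildpacks" in cf_manifest:
        discovery["buildPacks"] = list(cf_manifest["buildpacks"])
    return discovery

cf_app = {
    "name": "cf-nodejs",
    "random-route": True,
    "instances": 1,
    "buildpacks": ["docker://my-registry-a.corp/nodejs"],
}
print(discover_cf(cf_app))
```

Running the sketch against the example manifest shows how the kebab-case CF keys become camel-case discovery keys while the specified values are preserved.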
8.2. Generating a deployment manifest
You can auto-generate the Red Hat OpenShift Container Platform deployment manifest for the Cloud Foundry (CF) application by using the generate command. Based on the Helm template that you provide, the command generates Kubernetes manifests, such as a ConfigMap, and non-Kubernetes manifests, such as a Dockerfile, for application deployment.
Prerequisites
- You have Cloud Foundry (v3) as a source platform.
- You have OpenShift Container Platform as a target platform.
- You installed MTA CLI version 7.3.0.
- You generated a discovery manifest.
- You created a Helm template with the required configuration for the OCP deployment.
Procedure
- Open the terminal application and navigate to the <MTA_HOME>/ directory.
- Generate the deployment manifest as an output file:
  $ mta-cli generate helm --chart-dir helm_sample \
      --input <path_to_discovery-manifest> \
      --output-dir <location_of_deployment_manifest>
- Verify the ConfigMap:
  $ cd <location_of_deployment_manifest>
  $ cat configmap.yaml
- Verify the Dockerfile:
  $ cat Dockerfile
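The prerequisites above require a Helm chart with the configuration for the OCP deployment. As a hypothetical illustration only (the actual chart layout, template files, and value names depend on your environment), a minimal chart that renders discovery manifest values into a ConfigMap might be laid out as follows:

```yaml
# helm_sample/Chart.yaml -- minimal chart metadata
apiVersion: v2
name: helm_sample
version: 0.1.0
---
# helm_sample/templates/configmap.yaml
# Renders values taken from the discovery manifest into a ConfigMap.
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.name }}-config
data:
  INSTANCES: {{ .Values.instances | quote }}
```

With a chart shaped like this, the generate command can substitute the discovery manifest values (name, instances, and so on) into the template to produce the deployment manifest.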
8.3. The discover and generate command options
You can use the following options together with the discover or generate command to adjust the command behavior to your needs.
Table 8.1. Options for discover and generate commands
Command | Option | Description |
---|---|---|
discover | --help | Display details for different command arguments. |
discover | --list-platforms | List the supported platforms for the discovery process. |
discover | cloud-foundry | Discover Cloud Foundry applications. |
discover | --input | Specify the location of the <app-manifest-name>.yaml file to discover the application configurations. |
discover | --output | Specify the location to save the <discovery-manifest-name>.yaml file. |
generate | --help | Display details for different command arguments. |
generate | helm | Generate a deployment manifest by using the Helm template. |
generate | --chart-dir | Specify a directory that contains the Helm chart. |
generate | --input | Specify a location of the <discovery-manifest-name>.yaml file to generate the deployment manifest. |
generate | --non-k8s-only | Generate only non-Kubernetes templates, such as a Dockerfile. |
generate | --output-dir | Specify a location to which the deployment manifests are saved. |
generate | --set | Override values of attributes in the discovery manifest with the key-value pair entered from the CLI. |
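The --set option overrides discovery manifest attributes at generation time. As a rough sketch of that behavior (hypothetical, not the MTA implementation), applying a list of key=value arguments to a manifest amounts to:

```python
# Hypothetical sketch of applying "--set key=value" overrides to a
# discovery manifest before templating. Not the actual MTA code.

def apply_overrides(manifest: dict, set_args: list) -> dict:
    """Return a copy of the manifest with each key=value override applied."""
    updated = dict(manifest)
    for arg in set_args:
        key, sep, value = arg.partition("=")
        if not sep:
            raise ValueError("expected key=value, got: %r" % arg)
        # Keep numeric-looking values as integers, everything else as strings.
        updated[key] = int(value) if value.isdigit() else value
    return updated

manifest = {"name": "cf-nodejs", "instances": 1}
print(apply_overrides(manifest, ["name=nodejs-app", "instances=2"]))
```

This mirrors the override example in the next section, where name and instances are replaced from the command line while the rest of the discovery manifest is left unchanged.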
8.4. Assets generation example
The following is an example of generating discovery and deployment manifests for a Cloud Foundry (CF) Node.js application.
For this example, the following files and directories are used:
- CF Node.js application manifest name: cf-nodejs-app.yaml
- Discovery manifest name: discover.yaml
- Location of the application Helm chart: helm_sample
- Deployment manifests: a ConfigMap and a Dockerfile
- Output location of the deployment manifests: newDir
This example assumes that cf-nodejs-app.yaml is located in the same directory as the MTA CLI binary. If the CF application manifest is located elsewhere, you can enter the path to the manifest as the input instead.
Prerequisites
- You installed MTA CLI 7.3.0.
- You have a CF application manifest as a YAML file.
- You created a Helm template with the required configurations for the OCP deployment.
Procedure
- Open the terminal application and navigate to the <MTA_HOME>/ directory.
- Verify the content of the CF Node.js application manifest:
  $ cat cf-nodejs-app.yaml
  name: cf-nodejs
  lifecycle: cnb
  buildpacks:
    - docker://my-registry-a.corp/nodejs
    - docker://my-registry-b.corp/dynatrace
  memory: 512M
  instances: 1
  random-route: true
- Generate the discovery manifest:
  $ mta-cli discover cloud-foundry \
      --input cf-nodejs-app.yaml \
      --output discover.yaml
- Verify the content of the discovery manifest:
  $ cat discover.yaml
  name: cf-nodejs
  randomRoute: true
  timeout: 60
  buildPacks:
    - docker://my-registry-a.corp/nodejs
    - docker://my-registry-b.corp/dynatrace
  instances: 1
- Generate the deployment manifest in the newDir directory by using the discover.yaml file:
  $ mta-cli generate helm \
      --chart-dir helm_sample \
      --input discover.yaml \
      --output-dir newDir
- Check the contents of the Dockerfile in the newDir directory:
  $ cat ./newDir/Dockerfile
  FROM busybox:latest
  RUN echo "Hello cf-nodejs!"
- Check the contents of the ConfigMap in the newDir directory:
  $ cat ./newDir/configmap.yaml
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: cf-nodejs-config
  data:
    RANDOM_ROUTE: true
    TIMEOUT: "60"
    BUILD_PACKS: |
      - docker://my-registry-a.corp/nodejs
      - docker://my-registry-b.corp/dynatrace
    INSTANCES: "1"
- In the ConfigMap, override name to nodejs-app and INSTANCES to 2:
  $ mta-cli generate helm \
      --chart-dir helm_sample \
      --input discover.yaml \
      --set name="nodejs-app" \
      --set instances=2 \
      --output-dir newDir
- Check the contents of the ConfigMap again:
  $ cat ./newDir/configmap.yaml
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: nodejs-app
  data:
    RANDOM_ROUTE: true
    TIMEOUT: "60"
    BUILD_PACKS: |
      - docker://my-registry-a.corp/nodejs
      - docker://my-registry-b.corp/dynatrace
    INSTANCES: "2"
Chapter 9. MTA CLI known issues
This section describes known issues in the MTA CLI.
Limitations with Podman on Microsoft Windows
The CLI is built and distributed with support for Microsoft Windows.
However, when running any container image based on Red Hat Enterprise Linux 9 (RHEL9) or Universal Base Image 9 (UBI9), the following error can be returned when starting the container:
Fatal glibc error: CPU does not support x86-64-v2
This error is caused because Red Hat Enterprise Linux 9 or Universal Base Image 9 container images must be run on a CPU architecture that supports x86-64-v2
.
For more details, see Running Red Hat Enterprise Linux 9 (RHEL) or Universal Base Image (UBI) 9 container images fail with "Fatal glibc error: CPU does not support x86-64-v2".
The CLI runs the container runtime correctly; however, different container runtime configurations are not supported.
Although unsupported, you can run the CLI with Docker instead of Podman, which resolves this issue. To do so, replace the Podman path in the CONTAINER_TOOL variable with the path to Docker.
For example, if you experience this issue, instead of issuing:
CONTAINER_TOOL=/usr/local/bin/podman mta-cli analyze
replace the Podman path with the path to Docker:
CONTAINER_TOOL=/usr/local/bin/docker mta-cli analyze
While this is not supported, it allows you to explore the CLI while you work to upgrade your hardware or move to hardware that supports x86-64-v2.
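If you are unsure whether a machine supports x86-64-v2, you can inspect its CPU feature flags. The following is a rough sketch only, assuming the flag names reported in /proc/cpuinfo on Linux (where SSE3 appears as pni); the authoritative list of required features is defined by the x86-64 psABI microarchitecture levels.

```python
# Rough check for x86-64-v2 support based on Linux /proc/cpuinfo flag names.
# Assumption: these flag names match common kernel output; consult the
# x86-64 psABI for the authoritative feature list.

REQUIRED_V2_FLAGS = {"cx16", "lahf_lm", "popcnt", "pni", "sse4_1", "sse4_2", "ssse3"}

def supports_x86_64_v2(cpu_flags: set) -> bool:
    """Return True if all x86-64-v2 feature flags are present."""
    return REQUIRED_V2_FLAGS <= cpu_flags

# Illustrative flag sets for a modern and an older CPU.
modern_cpu = {"fpu", "pni", "ssse3", "cx16", "sse4_1", "sse4_2", "popcnt", "lahf_lm", "avx"}
old_cpu = {"fpu", "sse", "sse2"}
print(supports_x86_64_v2(modern_cpu), supports_x86_64_v2(old_cpu))  # True False
```

On a real system you would populate the flag set from the "flags" line of /proc/cpuinfo before calling the check.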
Appendix A. Reference material
The following is information that you might find useful when using the Migration Toolkit for Applications (MTA) CLI.
A.1. Supported technology tags
The following technology tags are supported in MTA 7.3.1:
- 0MQ Client
- 3scale
- Acegi Security
- AcrIS Security
- ActiveMQ library
- Airframe
- Airlift Log Manager
- AKKA JTA
- Akka Testkit
- Amazon SQS Client
- AMQP Client
- Anakia
- AngularFaces
- ANTLR StringTemplate
- AOP Alliance
- Apache Accumulo Client
- Apache Aries
- Apache Commons JCS
- Apache Commons Validator
- Apache Flume
- Apache Geronimo
- Apache Hadoop
- Apache HBase Client
- Apache Ignite
- Apache Karaf
- Apache Mahout
- Apache Meecrowave JTA
- Apache Sirona JTA
- Apache Synapse
- Apache Tapestry
- Apiman
- Applet
- Arquillian
- AspectJ
- Atomikos JTA
- Avalon Logkit
- Axion Driver
- Axis
- Axis2
- BabbageFaces
- Bean Validation
- BeanInject
- Blaze
- Blitz4j
- BootsFaces
- Bouncy Castle
- ButterFaces
- Cache API
- Cactus
- Camel
- Camel Messaging Client
- Camunda
- Cassandra Client
- CDI
- Cfg Engine
- Chunk Templates
- Cloudera
- Coherence
- Common Annotations
- Composite Logging
- Composite Logging JCL
- Concordion
- CSS
- Cucumber
- Dagger
- DbUnit
- Demoiselle JTA
- Derby Driver
- Drools
- DVSL
- Dynacache
- EAR Deployment
- Easy Rules
- EasyMock
- Eclipse RCP
- EclipseLink
- Ehcache
- EJB
- EJB XML
- Elasticsearch
- Entity Bean
- EtlUnit
- Eureka
- Everit JTA
- Evo JTA
- Feign
- File system Logging
- FormLayoutMaker
- FreeMarker
- Geronimo JTA
- GFC Logging
- GIN
- GlassFish JTA
- Google Guice
- Grails
- Grapht DI
- Guava Testing
- GWT
- H2 Driver
- Hamcrest
- Handlebars
- HavaRunner
- Hazelcast
- Hdiv
- Hibernate
- Hibernate Cfg
- Hibernate Mapping
- Hibernate OGM
- HighFaces
- HornetQ Client
- HSQLDB Driver
- HTTP Client
- HttpUnit
- ICEfaces
- Ickenham
- Ignite JTA
- Ikasan
- iLog
- Infinispan
- Injekt for Kotlin
- Iroh
- Istio
- Jamon
- Jasypt
- Java EE Batch
- Java EE Batch API
- Java EE JACC
- Java EE JAXB
- Java EE JAXR
- Java EE JSON-P
- Java Transaction API
- JavaFX
- JavaScript
- Javax Inject
- JAX-RS
- JAX-WS
- JayWire
- JBehave
- JBoss Cache
- JBoss EJB XML
- JBoss logging
- JBoss Transactions
- JBoss Web XML
- JBossMQ Client
- JBPM
- JCA
- Jcabi Log
- JCache
- JCunit
- JDBC
- JDBC datasources
- JDBC XA datasources
- Jersey
- Jetbrick Template
- Jetty
- JFreeChart
- JFunk
- JGoodies
- JMock
- JMockit
- JMS Connection Factory
- JMS Queue
- JMS Topic
- JMustache
- JNA
- JNI
- JNLP
- JPA entities
- JPA Matchers
- JPA named queries
- JPA XML
- JSecurity
- JSF
- JSF Page
- JSilver
- JSON-B
- JSP Page
- JSTL
- JTA
- Jukito
- JUnit
- Ka DI
- Keyczar
- Kibana
- KLogger
- Kodein
- Kotlin Logging
- KouInject
- KumuluzEE JTA
- LevelDB Client
- Liferay
- LiferayFaces
- Lift JTA
- Log.io
- Log4J
- Log4s
- Logback
- Logging Utils
- Logstash
- Lumberjack
- Macros
- Magicgrouplayout
- Management EJB
- MapR
- MckoiSQLDB Driver
- Memcached
- Message (MDB)
- Micro DI
- Micrometer
- Microsoft SQL Driver
- MiGLayout
- MinLog
- Mixer
- Mockito
- MongoDB Client
- Monolog
- Morphia
- MRules
- Mule
- Mule Functional Test Framework
- MultithreadedTC
- Mycontainer JTA
- MyFaces
- MySQL Driver
- Narayana Arjuna
- Needle
- Neo4j
- NLOG4J
- Nuxeo JTA/JCA
- OACC
- OAUTH
- OCPsoft Logging Utils
- OmniFaces
- OpenFaces
- OpenPojo
- OpenSAML
- OpenWS
- OPS4J Pax Logging Service
- Oracle ADF
- Oracle DB Driver
- Oracle Forms
- Orion EJB XML
- Orion Web XML
- Oscache
- OTR4J
- OW2 JTA
- OW2 Log Util
- OWASP CSRF Guard
- OWASP ESAPI
- Peaberry
- Pega
- Persistence units
- Petals EIP
- PicketBox
- PicketLink
- PicoContainer
- Play
- Play Test
- Plexus Container
- Polyforms DI
- Portlet
- PostgreSQL Driver
- PowerMock
- PrimeFaces
- Properties
- Qpid Client
- RabbitMQ Client
- RandomizedTesting Runner
- Resource Adapter
- REST Assured
- Restito
- RichFaces
- RMI
- RocketMQ Client
- Rythm Template Engine
- SAML
- Santuario
- Scalate
- Scaldi
- Scribe
- Seam
- Security Realm
- ServiceMix
- Servlet
- ShiftOne
- Shiro
- Silk DI
- SLF4J
- Snippetory Template Engine
- SNMP4J
- Socket handler logging
- Spark
- Specsy
- Spock
- Spring
- Spring Batch
- Spring Boot
- Spring Boot Actuator
- Spring Boot Cache
- Spring Boot Flo
- Spring Cloud Config
- Spring Cloud Function
- Spring Data
- Spring Data JPA
- spring DI
- Spring Integration
- Spring JMX
- Spring Messaging Client
- Spring MVC
- Spring Properties
- Spring Scheduled
- Spring Security
- Spring Shell
- Spring Test
- Spring Transactions
- Spring Web
- SQLite Driver
- SSL
- Standard Widget Toolkit (SWT)
- Stateful (SFSB)
- Stateless (SLSB)
- Sticky Configured
- Stripes
- Struts
- SubCut
- Swagger
- SwarmCache
- Swing
- SwitchYard
- Syringe
- Talend ESB
- Teiid
- TensorFlow
- Test Interface
- TestNG
- Thymeleaf
- TieFaces
- tinylog
- Tomcat
- Tornado Inject
- Trimou
- Trunk JGuard
- Twirl
- Twitter Util Logging
- UberFire
- Unirest
- Unitils
- Vaadin
- Velocity
- Vlad
- Water Template Engine
- Web Services Metadata
- Web Session
- Web XML File
- WebLogic Web XML
- Webmacro
- WebSocket
- WebSphere EJB
- WebSphere EJB Ext
- WebSphere Web XML
- WebSphere WS Binding
- WebSphere WS Extension
- Weka
- Weld
- WF Core JTA
- Wicket
- Winter
- WSDL
- WSO2
- WSS4J
- XACML
- XFire
- XMLUnit
- Zbus Client
- Zipkin
A.2. Rule story points
Story points are an abstract metric commonly used in Agile software development to estimate the level of effort required to implement a feature or change.
The Migration Toolkit for Applications uses story points to express the level of effort needed to migrate particular application constructs, and the application as a whole. The values do not necessarily translate to hours of work, but they must be consistent across tasks.
A.2.1. Guidelines for the level of effort estimation
The following are the general guidelines MTA uses when estimating the level of effort required for a rule.
Table A.1. Guidelines for the level of effort estimation
Level of Effort | Story Points | Description |
---|---|---|
Information | 0 | An informational warning with very low or no priority for migration. |
Trivial | 1 | The migration is a trivial change or a simple library swap with no or minimal API changes. |
Complex | 3 | The changes required for the migration task are complex, but have a documented solution. |
Redesign | 5 | The migration task requires a redesign or a complete library change, with significant API changes. |
Rearchitecture | 7 | The migration requires a complete rearchitecture of the component or subsystem. |
Unknown | 13 | The migration solution is not known and may need a complete rewrite. |
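For example, estimating the total effort for a report is a weighted sum over the levels in Table A.1. The issue counts below are purely illustrative:

```python
# Sketch: aggregate story points for a set of detected issues using the
# effort levels from Table A.1. The report counts are illustrative.

STORY_POINTS = {
    "Information": 0,
    "Trivial": 1,
    "Complex": 3,
    "Redesign": 5,
    "Rearchitecture": 7,
    "Unknown": 13,
}

def total_story_points(issue_counts: dict) -> int:
    """Sum story points across {level: number_of_issues} counts."""
    return sum(STORY_POINTS[level] * count for level, count in issue_counts.items())

report = {"Trivial": 4, "Complex": 2, "Redesign": 1}
print(total_story_points(report))  # 4*1 + 2*3 + 1*5 = 15
```

Because the point values are consistent across tasks, totals like this are comparable between applications even though they do not map directly to hours of work.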
A.2.2. Migration task categories
In addition to the level of effort, you can categorize migration tasks to indicate the severity of the task. The following categories are used to group issues to help prioritize the migration effort.
- Mandatory
- The task must be completed for a successful migration. If the changes are not made, the resulting application will not build or run successfully. Examples include replacement of proprietary APIs that are not supported in the target platform.
- Optional
- If the migration task is not completed, the application should work, but the results might not be optimal. If the change is not made at the time of migration, it is recommended to put it on the schedule soon after your migration is completed.
- Potential
- The task should be examined during the migration process, but there is not enough detailed information to determine if the task is mandatory for the migration to succeed. An example of this would be migrating a third-party proprietary type where there is no directly compatible type.
- Information
- The task is included to inform you of the existence of certain files. These might need to be examined or modified as part of the modernization effort, but changes are typically not required.
Appendix B. Contributing to the MTA project
To help the Migration Toolkit for Applications cover most application constructs and server configurations, including yours, you can help with any of the following items:
- Send an email to jboss-migration-feedback@redhat.com and let us know what MTA migration rules must cover.
- Provide example applications to test migration rules.
Identify application components and problem areas that might be difficult to migrate:
- Write a short description of the problem migration areas.
- Write a brief overview describing how to solve the problem in migration areas.
- Try the Migration Toolkit for Applications on your application and report any issues that you encounter. MTA uses Jira as its issue tracking system. If you encounter an issue running MTA, submit a Jira issue.
Contribute to the Migration Toolkit for Applications rules repository:
- Write a Migration Toolkit for Applications rule to identify or automate a migration process.
Create a test for the new rule.
For more information, see Rule Development Guide.
Contribute to the project source code:
- Create a core rule.
- Improve MTA performance or efficiency.
Additional resources
- MTA forums
- Jira issues tracker
Any level of involvement is greatly appreciated!