Migration Toolkit for Applications 7.3

MTA Developer Lightspeed

Using the Migration Toolkit for Applications Developer Lightspeed to modernize your applications

Red Hat Customer Content Services

Abstract

You can use Migration Toolkit for Applications (MTA) Developer Lightspeed to modernize applications in your organization by running Artificial Intelligence (AI)-driven static code analysis for Java applications.

Making open source more inclusive

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

Chapter 1. Using MTA with Developer Lightspeed in IDE

You must configure the following settings in MTA with Developer Lightspeed:

  • Visual Studio Code IDE settings that apply to all analyses.
  • Profile settings that provide context for a particular analysis.

1.1. Configuring the MTA with Developer Lightspeed IDE settings

After you install the MTA extension in Visual Studio (VS) Code, you must provide your large language model (LLM) credentials to activate the MTA with Developer Lightspeed settings.

MTA with Developer Lightspeed settings are applied to all AI-assisted analyses that you perform by using the MTA extension. The extension settings can be broadly categorized into debugging and logging, MTA with Developer Lightspeed settings, analysis-related settings, and Solution Server settings.

Prerequisites

  • You installed the Migration Toolkit for Applications (MTA) extension version 8.0.0 in VS Code.
  • You provided LLM credentials to enable generative AI for the MTA extension in the settings.json file.
  • You installed the MTA distribution version 8.0.0 in your system.
  • You installed the latest version of Language Support for Java™ by Red Hat extension in VS Code.
  • You installed Java 17+ and Maven 3.9.9+ in your system.

Procedure

  1. Go to the MTA with Developer Lightspeed settings in one of the following ways:

    1. Click Extensions > MTA CLI Extension for VSCode > Settings
    2. Press Ctrl + Shift + P to open the Command Palette and enter Preferences: Open Settings (UI). Go to Extensions > MTA to open the settings page.
  2. Configure the settings described in the following table:

Table 1.1. MTA with Developer Lightspeed settings

Setting | Description

Log level

Set the log level for the MTA binary. The default log level is debug. The log level increases or decreases the verbosity of logs.

RPC Server Path

Displays the path to the solution server binary. If you do not modify the path, MTA with Developer Lightspeed uses the bundled binary.

Analyzer path

Specify a custom MTA binary path. If you do not provide a path, MTA with Developer Lightspeed uses the default path to the binary.

Solution Server:URL

Configure the URL of the Solution Server endpoint. This field is populated with the default URL.

Solution Server:enabled

Enable the Solution Server client (MTA extension) to connect with the Solution Server to perform analysis.

Analyze on save

Enable this setting for MTA with Developer Lightspeed to run an analysis on a file that is saved after code modification. This setting is enabled automatically when you enable Agentic AI mode.

Agent mode

Enable the experimental Agentic AI flow for analysis. MTA with Developer Lightspeed runs an automated analysis of a file to identify issues and suggest resolutions. After you accept the solutions, MTA with Developer Lightspeed makes the changes in the code and re-analyzes the file.

Super agent mode

 

Diff editor type

Select from diff or merge view to review the suggested solutions after running an analysis. The diff view shows the old code and a copy of the code with changes side-by-side. The merge view overlays the changes in the code in a single view.

Excluded diagnostic sources

Add diagnostic sources in the settings.json file. The issues generated by such diagnostic sources are excluded from the automated Agentic AI analysis.

Get solution max effort

Select the effort level for generating solutions. This can be adjusted depending on the type of incidents. Higher values increase processing time.

Get solution max LLM queries

Specify the maximum number of LLM queries made per solution request.

Get solution max priority

Specify the maximum priority level of issues to be considered in a solution request.

Cache directory

Specify the path to a directory in your filesystem to store cached responses from the LLM.

Demo mode

Enable to run MTA with Developer Lightspeed in demo mode, which uses the LLM responses saved in the cache directory for analysis.

Trace enabled

Enable to trace MTA communication with the LLM model. Traces are stored in the /.vscode/konveyor-logs/traces path in your IDE project.

Debug:Webview

Enable debug level logging for Webview message handling in VS Code.

Analyze dependencies

Enable MTA with Developer Lightspeed to analyze dependency-related errors detected by the LLM in your project.

Analyze known libraries

Enable MTA with Developer Lightspeed to analyze well-known open-source libraries in your code.

Code snip limit

Set the maximum number of lines of code that are included in incident reports.

Context lines

Configure the number of context lines included in incident reports. A greater number of context lines improves LLM accuracy.

Incident limit

Specify the maximum number of incidents to report. A higher value increases the coverage of incidents in your report.
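The settings in the table above correspond to entries in the VS Code settings.json file. The following sketch shows what a few of them might look like; the key names are assumptions for illustration and may differ from the keys that your installed extension version actually uses, so verify them in the Settings (UI) editor before copying:

```json
{
  // Hypothetical keys, shown with plausible values; verify the exact
  // key names against your installed extension version.
  "konveyor.logLevel": "debug",             // default log level for the MTA binary
  "konveyor.analyzeOnSave": true,           // re-analyze a file when it is saved
  "konveyor.diffEditorType": "diff",        // "diff" (side-by-side) or "merge"
  "konveyor.solutionServer.enabled": false, // connect to the Solution Server
  "konveyor.cacheDir": "/path/to/cache",    // cached LLM responses for demo mode
  "konveyor.excludedDiagnosticSources": ["some-linter"]
}
```

VS Code settings files tolerate comments (JSONC), so the annotations above are valid in settings.json.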

1.2. Configuring the MTA with Developer Lightspeed profile settings

To run an analysis by using MTA with Developer Lightspeed, you must configure a profile that contains all the necessary configurations for an analysis, such as the source and target technologies and the API key to connect with your chosen large language model (LLM).

Prerequisites

  • You installed the MTA extension version 8.0.0 in Visual Studio (VS) Code.
  • You provided LLM credentials to enable generative AI for the MTA extension in the settings.json file.
  • You installed the MTA distribution version 8.0.0 in your system.
  • You installed the latest version of Language Support for Java™ by Red Hat extension in VS Code.
  • You installed Java 17+ and Maven 3.9.9+ in your system.
  • You opened a Java project in your VS Code workspace.

Procedure

  1. Open the Konveyor View Analysis page in either of the following ways:

    1. Click the screen icon on the Konveyor Issues pane of the MTA extension.
    2. Press Ctrl + Shift + P to open the Command Palette and enter Konveyor: Open Konveyor Analysis View.
  2. Click the settings button on the Konveyor View Analysis page to configure a profile for your project. The Get Ready to Analyze pane lists the following basic configurations required for an analysis:

    Verification

    After you complete the profile configuration, close the Get Ready to Analyze pane. You can verify that your configuration works by running an analysis. See run an analysis.

Table 1.2. MTA with Developer Lightspeed profile settings

Profile setting | Description

Select profile

Create a profile that you can reuse for multiple analyses. The profile name is part of the context provided to the LLM for analysis.

Configure label selector

A label selector filters rules for analysis based on the source or target technology.

Specify one or more target or source technologies (for example, cloud-readiness). MTA with Developer Lightspeed uses this configuration to determine the rules that are applied to a project during analysis.

If you defined a new target or source technology in your custom rule, you can type that name to create and add the new item to the list.

Note

You must configure either target or source technologies before running an analysis.

Set rules

Enable the default rules and select the custom rules that you want MTA with Developer Lightspeed to use for an analysis. You can use custom rules in addition to the default rules.

Configure generative AI

This option opens the provider-settings.yaml file, which contains API keys and other parameters for all supported LLMs. By default, MTA with Developer Lightspeed is configured to use the OpenAI LLM. To change the model, move the &active anchor to the block for your chosen provider. Modify this file with the required arguments, such as the model and API key, to complete the setup.
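As an illustration of the &active anchor mechanism, a provider-settings.yaml file might look like the following. The provider blocks, argument names, and model identifiers shown here are assumptions for illustration; treat the file that the extension generates as the authoritative schema:

```yaml
models:
  OpenAI: &active            # the &active anchor marks the provider in use
    environment:
      OPENAI_API_KEY: "<your-api-key>"
    provider: "ChatOpenAI"
    args:
      model: "gpt-4o"
  AmazonBedrock:             # to switch providers, move &active to this block
    provider: "ChatBedrock"
    args:
      model: "meta.llama3-70b-instruct-v1:0"
active: *active              # resolves to whichever block carries &active
```

Moving the anchor rather than rewriting the active block keeps the configuration for every provider in one file, so switching models is a one-line change.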

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.