Migration Toolkit for Applications 7.3

MTA Developer Lightspeed Guide

Using the Migration Toolkit for Applications Developer Lightspeed to modernize your applications

Red Hat Customer Content Services

Abstract

You can use Migration Toolkit for Applications (MTA) Developer Lightspeed for application modernization in your organization by running Artificial Intelligence-driven static code analysis for Java applications.

Making open source more inclusive

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

Chapter 1. Introduction to the MTA Developer Lightspeed

Starting from MTA 7.3, you can use Migration Toolkit for Applications (MTA) Developer Lightspeed for application modernization in your organization by running Artificial Intelligence-driven static code analysis for Java applications. Developer Lightspeed gains context for an analysis from historical changes to source code made in previous analyses (called solved examples) and from the descriptions of issues available in both default and custom rule sets. When you deploy Developer Lightspeed to analyze your entire application portfolio, this context keeps the common fixes you make in the source code of any Java application consistent. You also retain control over the analysis by manually reviewing the suggested AI fixes and accepting or rejecting the changes, while reducing the overall time and effort required to prepare your application for migration.

1.1. How Developer Lightspeed works

The main components of Developer Lightspeed are the large language model (LLM), a Visual Studio Code (VS Code) extension, and the Solution Server.

When you initiate an analysis, Developer Lightspeed creates a context to generate a hint, or prompt, that is shared with your LLM. The context is drawn from the profile configuration, which contains the source and target technologies that you configure for the migration. Based on this configuration, Developer Lightspeed checks the associated rule set, whose rules describe what needs to be fixed; these rules are the first input for the LLM prompt. As the second input, Developer Lightspeed uses the solved examples that are stored in the Solution Server from previous analyses in which the source code changed after the fixes were applied.
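The two inputs described above can be pictured with a short sketch. All names and the prompt wording here are hypothetical illustrations; the actual prompt format is internal to Developer Lightspeed:

```python
# Hypothetical sketch: a migration prompt combines a rule description
# (first input) with a solved example (second input) and the code to fix.
def build_prompt(rule_description: str, solved_example: str, snippet: str) -> str:
    return (
        "You are migrating a Java application.\n"
        f"Issue (from rule set): {rule_description}\n"
        f"How a similar issue was fixed before:\n{solved_example}\n"
        f"Code to fix:\n{snippet}\n"
        "Suggest an updated version of the code."
    )

prompt = build_prompt(
    rule_description="Replace javax.persistence imports with jakarta.persistence",
    solved_example="- import javax.persistence.Entity;\n"
                   "+ import jakarta.persistence.Entity;",
    snippet="import javax.persistence.Entity;",
)
```

The point of the sketch is only that the rule set supplies the "what to fix" and the solved example supplies the "how it was fixed before", giving the LLM a focused context.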

The Solution Server uses the Model Context Protocol (MCP) to act as an institutional memory that stores changes to source code from the analyses of all the applications in your organization. This helps you leverage recurring patterns of solutions for issues that repeat across many applications. The Solution Server reuses these past solutions through fix suggestions in later migrations, leading to faster, more reliable code changes as you migrate applications in different migration waves. At the organizational level, the Solution Server can also help Developer Lightspeed tackle a new issue based on how a similar issue was resolved in a previous analysis.
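One way to picture this institutional memory is a store of accepted fixes keyed by rule ID, consulted when the same rule fires again in a later migration wave. This is an illustrative in-memory sketch, not the actual MCP-based Solution Server implementation:

```python
# Illustrative stand-in for the Solution Server's store of solved
# examples; the real component speaks Model Context Protocol (MCP) and
# persists solutions across all analyzed applications.
class SolvedExampleStore:
    def __init__(self):
        self._examples = {}  # rule_id -> list of accepted diffs

    def record(self, rule_id: str, diff: str) -> None:
        """Store an accepted fix so later analyses can reuse it."""
        self._examples.setdefault(rule_id, []).append(diff)

    def lookup(self, rule_id: str) -> list:
        """Return past fixes for the same rule, if any."""
        return self._examples.get(rule_id, [])

store = SolvedExampleStore()
# A fix accepted while migrating one application...
store.record("javax-to-jakarta-import",
             "- import javax.persistence.Entity;\n"
             "+ import jakarta.persistence.Entity;")
# ...is available when a later analysis of another application hits the same rule:
past_fixes = store.lookup("javax-to-jakarta-import")
```

The rule ID and diff format are assumptions for illustration only.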

The hint, or prompt, generated by Developer Lightspeed is a well-defined context for identifying issues that allows the LLM to "reason" and generate fix suggestions. This mechanism helps overcome the limited context size of LLMs, which prevents them from analyzing the entire source code of an application at once. You can review each suggested change and accept or reject the update to the code per issue or for all the issues.

Developer Lightspeed supports different analysis goals through three modes: agentic AI, the Retrieval Augmented Generation (RAG) solution delivered by the Solution Server, and demo mode.

If you enable the agentic AI mode, Developer Lightspeed runs an automated analysis of the code in a loop until all issues are resolved and applies the updates to the code. In the initial run, the AI agent:

  • Plans the context to define the issues.
  • Chooses a suitable sub-agent for the analysis task.
  • Works with the LLM to generate fix suggestions. The reasoning transcript and the files to be changed are displayed to the user.
  • Applies the changes to the code once the user approves the updates.

If you allow the agentic AI to continue making changes, it compiles the code and runs a partial analysis. In this phase, the agentic AI can detect diagnostic issues, if any, generated by tools that you installed in the VS Code IDE. You can accept the agentic AI’s suggestions to address these diagnostic issues too. After every phase of applying changes to the code, the agentic AI runs another round of automated analysis, depending on your acceptance, until it has run through all the files in your project and resolved the issues in the code. The agentic AI generates a new file in each round when it applies the suggestions to the code. The time taken by the agentic AI to complete several rounds of analysis depends on the size of the application, the number of issues, and the complexity of the code.
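The accept-and-iterate behavior described above can be sketched as a loop over stubbed-out steps. All function names here are hypothetical; the real agent re-runs compilation and partial analysis between rounds:

```python
# Hypothetical sketch of the agentic loop: propose a fix, apply it on
# user approval, then continue until no issues remain or a round limit
# is reached.
def run_agentic_loop(issues, generate_fix, user_approves, max_rounds=10):
    applied = []
    for _ in range(max_rounds):
        if not issues:
            break                          # all issues resolved
        issue = issues[0]
        fix = generate_fix(issue)          # LLM generates a fix suggestion
        if user_approves(issue, fix):      # reasoning transcript shown to the user
            applied.append((issue, fix))   # change applied to the code
        # a partial re-analysis would rebuild the issue list here;
        # the sketch simply moves on to the next known issue
        issues = issues[1:]
    return applied

fixes = run_agentic_loop(
    issues=["javax import", "deprecated lookup"],
    generate_fix=lambda issue: f"fix for {issue}",
    user_approves=lambda issue, fix: True,
)
```

In the real workflow the loop is driven by fresh analysis results after every applied change, not by a fixed list as in this sketch.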

The RAG solution, delivered by the Solution Server, draws on solved examples from past analyses to resolve new or similar issues that are found while analyzing the source code. This type of analysis is not iterative. The Solution Server analysis generates a diff between the updated portions of the code and the original source code for a manual review. In such an analysis, the user has more control over the changes that are applied to the code.
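The diff presented for manual review in this mode can be pictured with Python's standard difflib. The file name and code change below are only an example of a common Java migration fix, not output from Developer Lightspeed:

```python
import difflib

# Example of the kind of diff a RAG-mode analysis might present for
# review: a javax-to-jakarta namespace change in a hypothetical file.
original = [
    "import javax.persistence.Entity;",
    "public class Customer {}",
]
suggested = [
    "import jakarta.persistence.Entity;",
    "public class Customer {}",
]
diff = list(difflib.unified_diff(
    original, suggested,
    fromfile="Customer.java", tofile="Customer.java",
    lineterm="",
))
print("\n".join(diff))
```

Reviewing such a diff and choosing whether to apply it is what gives the user fine-grained control in this mode.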

Consider using the demo mode when you need to run an analysis but have a limited network connection for Developer Lightspeed to sync with the LLM. The demo mode stores a hash of the input data and past LLM calls in a cache. The cache is stored in a location that you choose in your file system for later use. The hash of the inputs determines which cached LLM call is used in the demo mode. After you enable the demo mode and configure the path to your cached LLM calls in the Developer Lightspeed settings, you can rerun an analysis for the same set of files using the responses from a previous LLM call.
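The hash-keyed replay described above can be pictured like this. The hashing scheme and cache layout are assumptions for illustration only, not the actual demo-mode format:

```python
import hashlib

# Illustrative sketch of demo mode's cache: the inputs are hashed, and
# the hash keys a previously recorded LLM response so the analysis can
# be replayed without contacting the LLM.
class LlmReplayCache:
    def __init__(self):
        self._responses = {}  # input hash -> recorded LLM response

    @staticmethod
    def _key(prompt: str) -> str:
        return hashlib.sha256(prompt.encode("utf-8")).hexdigest()

    def record(self, prompt: str, response: str) -> None:
        """Store an LLM response keyed by a hash of its inputs."""
        self._responses[self._key(prompt)] = response

    def replay(self, prompt: str):
        """Return the cached response for identical inputs, or None."""
        return self._responses.get(self._key(prompt))

cache = LlmReplayCache()
cache.record("Fix javax import in Customer.java",
             "import jakarta.persistence.Entity;")
# Re-running the analysis on the same files reproduces the same inputs,
# so the cached response is found without a network call:
replayed = cache.replay("Fix javax import in Customer.java")
```

Because the key is derived from the inputs, an analysis of a different set of files produces a different hash and misses the cache.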

1.2. Benefits of using Developer Lightspeed

  • Model agnostic - Developer Lightspeed follows a "Bring Your Own Model" approach, allowing your organization to use a preferred LLM.
  • Iterative refinement - Developer Lightspeed can include an agent that iterates through the source code, running a series of automated analyses that resolve both code issues and diagnostic issues.
  • Contextual code generation - By leveraging AI for static code analysis, Developer Lightspeed breaks down complex problems into more manageable ones, providing the LLM with focused context to generate meaningful results. This helps overcome the limited context size of LLMs when dealing with large codebases.
  • No fine tuning - You do not need to fine-tune your model with a suitable data set for analysis, which leaves you free to use and switch LLMs to respond to your requirements.
  • Learning and improvement - As more parts of a codebase are migrated with Developer Lightspeed, it can use RAG to learn from the available data and provide better recommendations in subsequent application analyses.

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.