Based on all the collected data, Mono2Micro generates a Natural Seams Partitioning recommendation that aims to group the monolithic classes so that there are minimal class dependencies between partitions (such as containment dependencies, where one class is an instance variable of another, or inheritance dependencies). The “Data Dependency Analysis” above refers to this kind of dependency analysis between the Java classes. In effect, this breaks up the monolithic application along its natural seams with the least amount of disruption.
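To make those two kinds of dependencies concrete, here is a minimal, hypothetical example; the class names are illustrative and not taken from any real application:

```java
// Hypothetical classes illustrating the static dependencies a
// natural-seams analysis tries to keep within a single partition.

class Address {
    String street;
    String city;
}

class Customer {
    // Containment dependency: an Address instance is part of Customer's
    // state, so placing Address in a different partition would turn
    // every access into a remote call.
    private Address shippingAddress;
}

// Inheritance dependency: PremiumCustomer extends Customer, so the two
// classes form a type hierarchy that is costly to split apart.
class PremiumCustomer extends Customer {
    int loyaltyPoints;
}
```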
Based on just the use case data and runtime log traces, Mono2Micro also generates a Business Logic Partitioning recommendation. This scheme may introduce more dependencies between partitions, such as inter-partition runtime calls, but it potentially provides a more useful partitioning of the monolithic application, divided along its functional and business logic capabilities.
The static analysis of the monolith produces a detailed overview of the Java code in the monolithic application, which the Mono2Micro AI analyzer tool then uses to provide recommendations on how to partition the application. From this static code analysis, Mono2Micro is able to infer class dependencies. This information is also used by Mono2Micro's code generation tool to generate the foundation and plumbing code for implementing each partition as a microservice.
The dynamic analysis of the monolith is a crucial phase of the data collection process: both the quantity and quality of the data gathered will affect the quality and usefulness of the partitioning recommendations from the Mono2Micro AI analyzer tool. The key is to run as many user scenarios as possible against the running, instrumented monolithic application, exercising as much of the codebase as possible. These user scenarios (or business use cases) should be typical user threads through the application, covering the various functions the application provides: more akin to functional verification test cases or larger green threads than to unit test cases. Because use case names and start/stop times are recorded during these runs, Mono2Micro can correlate exactly which code was executed for each use case, and that correlation feeds into the AI analysis.
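To illustrate the correlation idea only (this is not Mono2Micro's actual log format or internal logic; the record layouts and names below are hypothetical), a sketch of matching timestamped trace entries against recorded use-case windows might look like this:

```java
import java.util.*;

// Illustrative sketch: correlate timestamped trace entries with
// use-case start/stop windows. All types here are invented for the
// example and do not reflect Mono2Micro internals.
public class UseCaseCorrelator {

    record UseCaseWindow(String name, long startMillis, long stopMillis) {}
    record TraceEntry(long timestampMillis, String className, String methodName) {}

    // Map each use case to the set of classes whose methods ran inside
    // that use case's recorded time window.
    static Map<String, Set<String>> correlate(List<UseCaseWindow> windows,
                                              List<TraceEntry> trace) {
        Map<String, Set<String>> classesPerUseCase = new LinkedHashMap<>();
        for (UseCaseWindow w : windows) {
            Set<String> classes = new TreeSet<>();
            for (TraceEntry e : trace) {
                if (e.timestampMillis() >= w.startMillis()
                        && e.timestampMillis() <= w.stopMillis()) {
                    classes.add(e.className());
                }
            }
            classesPerUseCase.put(w.name(), classes);
        }
        return classesPerUseCase;
    }
}
```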
Once Mono2Micro's AI analyzer tool has run on the collected data, its partitioning recommendations can be viewed in the Mono2Micro UI, where you can see all the classes and the partitions they have been placed in, the observed runtime calls between classes (both within and across partitions), the use cases involved, and so on.
The partitions that Mono2Micro recommends are evaluated and verified against the following metrics:
Business-context purity: The functional cohesiveness of a partition in terms of the business use cases it implements; fewer, closely related business use cases per partition are favored.
Inter-partition calls purity: Mono2Micro attempts to minimize the number of different calls and the volume of each call between partitions, which leads to services with fewer required APIs.
Structural modularity: This quantifies the modular quality of the partitions, helping to identify partitions that are more self-reliant and independent.
Mono2Micro ultimately aims to recommend partitions that minimize coupling (the number of different inter-partition runtime calls and their call volume) and maximize cohesion (similar use cases).
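Mono2Micro's exact formulas are internal to the tool, but a rough sketch of the coupling side of this trade-off, using a purely hypothetical data model, could look like the following:

```java
import java.util.*;

// Hypothetical sketch: counting inter-partition coupling from a list of
// observed runtime calls. This is not Mono2Micro's actual metric, just
// an illustration of the quantities it tries to minimize.
public class CouplingSketch {

    record RuntimeCall(String callerClass, String calleeClass) {}

    // partitionOf maps each class name to its partition name.
    static void report(List<RuntimeCall> calls, Map<String, String> partitionOf) {
        Set<String> distinctInterPartitionEdges = new HashSet<>();
        long interPartitionCallVolume = 0;

        for (RuntimeCall c : calls) {
            String from = partitionOf.get(c.callerClass());
            String to = partitionOf.get(c.calleeClass());
            if (from != null && to != null && !from.equals(to)) {
                // "Number of different calls" ~ distinct caller->callee edges.
                distinctInterPartitionEdges.add(c.callerClass() + "->" + c.calleeClass());
                // "Call volume" ~ total observed inter-partition invocations.
                interPartitionCallVolume++;
            }
        }
        System.out.println("Distinct inter-partition call edges: "
                + distinctInterPartitionEdges.size());
        System.out.println("Total inter-partition call volume: "
                + interPartitionCallVolume);
    }
}
```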
In addition to recommending how a monolith application can be partitioned, Mono2Micro also gives a fair amount of insight into the application itself. The 'Unobserved' partition houses all the monolith Java classes that Mono2Micro identified in the application source code but did not observe in any runtime traces during the business use case runs. This points to one of two possibilities: a) more use cases need to be run to cover more of the application's functionality, or b) these classes are potential dead code. Additionally, when viewing the partitions in the UI, you can spot classes that are heavily trafficked in terms of runtime calls, both within their home partitions and from other partitions. Such classes could potentially be treated as utility classes that do not need to live in just one partition as part of a service, but can instead be copied into every partition for use by that partition's classes.
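As a purely hypothetical illustration of that "heavily trafficked from other partitions" signal (Mono2Micro does not expose such an API; the names below are invented), one could flag utility-class candidates like this:

```java
import java.util.*;

// Hypothetical sketch: flag classes called from several partitions other
// than their own as candidates for duplication as utility classes.
public class UtilityCandidateSketch {

    record RuntimeCall(String callerClass, String calleeClass) {}

    static Set<String> utilityCandidates(List<RuntimeCall> calls,
                                         Map<String, String> partitionOf,
                                         int minForeignPartitions) {
        // For each callee, collect the set of foreign partitions that call it.
        Map<String, Set<String>> foreignCallers = new HashMap<>();
        for (RuntimeCall c : calls) {
            String callerPartition = partitionOf.get(c.callerClass());
            String calleePartition = partitionOf.get(c.calleeClass());
            if (callerPartition != null && calleePartition != null
                    && !callerPartition.equals(calleePartition)) {
                foreignCallers.computeIfAbsent(c.calleeClass(), k -> new HashSet<>())
                              .add(callerPartition);
            }
        }
        // A class called from many foreign partitions is a utility candidate.
        Set<String> candidates = new TreeSet<>();
        foreignCallers.forEach((clazz, partitions) -> {
            if (partitions.size() >= minForeignPartitions) {
                candidates.add(clazz);
            }
        });
        return candidates;
    }
}
```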
As a whole, Mono2Micro provides a bottom-up, evidence-based view of how one might partition a monolith application, showing what is actually happening at the code level, how that code reflects intended business processes and domains, what the temporal relationships between the application's classes and components are, and how they interact with each other in each business use case.
Mono2Micro's UI also allows you to further customize and refine either of the two partitioning schemes to your liking: you can move classes from one partition to another, create or delete partitions, rename partitions, and so on. This lets you adapt the recommendations to whatever application refactoring strategy you want to take, informed and guided by the mechanics of the existing monolith application and the features and constraints of the frameworks involved, such as Java EE. For example, if you want to start with a strangler pattern approach to refactoring, where a single microservice is identified to be "strangled" out of the application while the rest remains a monolith, you can take an existing partition recommended by the AI (or create a new one) and move classes around in the UI to fit this pattern.
Another example is customizing partitions around the application's use of a particular Java framework: for instance, keeping all the Java EE JPA entity beans together in a single partition to preserve the persistence unit, or moving all the UI and front-end code (such as servlet classes that need to run in the same app server instance as the HTML files that call them) into a single partition. For more details on how partition customization is done in Mono2Micro for a typical Java EE application, refer to this tutorial.
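As a minimal, hypothetical illustration of why JPA entities tend to travel together, consider two mapped entities; splitting them across partitions would break the association that the persistence unit resolves:

```java
import javax.persistence.*;
import java.util.List;

// Hypothetical JPA entities: PurchaseOrder and OrderLine reference each
// other through mapped associations, so both need to stay in the
// partition that owns the persistence unit.
@Entity
class PurchaseOrder {
    @Id @GeneratedValue
    Long id;

    // Resolved inside the persistence unit; moving OrderLine to another
    // partition would break this mapping.
    @OneToMany(mappedBy = "order", cascade = CascadeType.ALL)
    List<OrderLine> lines;
}

@Entity
class OrderLine {
    @Id @GeneratedValue
    Long id;

    @ManyToOne
    PurchaseOrder order;
}
```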
After you have finished customizing the partitions to your satisfaction, Mono2Micro's code generation tool can be used to generate much of the initial code that implements the partitions. As the tool runs, the original monolith classes are copied unchanged into a base directory for each partition, following the original monolith's source directory tree structure. Alongside the monolith classes, Mono2Micro generates wrapper "service" classes for all the externally facing monolith classes (those called by classes outside their partitions), "proxy" classes for the client code in the calling partitions, and additional utility and implementation code that facilitates the inter-partition calls. The proxy classes look and behave exactly like the original monolith classes, with the same method signatures, and use JAX-RS web services technology to implement the proxy-to-service communication, as sketched below. The generated code handles the distribution of the monolith classes across the partitions, as well as their object lifecycle, garbage collection, exception handling, and so on. This greatly accelerates the journey of implementing partitions as microservices, all without changing a single line of code in the existing application classes. For more details on how to generate code and further refactor and implement a typical Java EE application as partitions, see the tutorial referenced above.
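To give a feel for the pattern (this is a hand-written sketch under assumed class names and endpoints, not Mono2Micro's actual generated code), a service wrapper might expose a monolith class over JAX-RS while the proxy in the calling partition preserves the original method signature:

```java
import javax.ws.rs.*;
import javax.ws.rs.client.*;
import javax.ws.rs.core.MediaType;

// Hypothetical original monolith class, copied unchanged into partition1.
class InventoryManager {
    public int stockLevel(String sku) {
        return 42; // stands in for the original monolith logic
    }
}

// Sketch of a "service" wrapper in partition1 that exposes the monolith
// class over JAX-RS.
@Path("/InventoryManager")
class InventoryManagerService {
    private final InventoryManager target = new InventoryManager();

    @GET
    @Path("/stockLevel")
    @Produces(MediaType.TEXT_PLAIN)
    public int stockLevel(@QueryParam("sku") String sku) {
        return target.stockLevel(sku);
    }
}

// Sketch of a "proxy" in partition2: same signature as the original
// class, but it delegates over HTTP via a JAX-RS client. The host, port,
// and context path below are invented for the example.
class InventoryManagerProxy {
    public int stockLevel(String sku) {
        Client client = ClientBuilder.newClient();
        try {
            return client.target("http://partition1:9080/api/InventoryManager/stockLevel")
                         .queryParam("sku", sku)
                         .request(MediaType.TEXT_PLAIN)
                         .get(Integer.class);
        } finally {
            client.close();
        }
    }
}
```

Because the proxy's method signature matches the original class, client code in the other partition compiles and runs unchanged, which is what lets the generated code take over the inter-partition plumbing.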
Mono2Micro is a revolutionary new tool for accelerating your journey in modernizing monolithic Java applications to microservices running on WebSphere Liberty, providing AI-backed recommendations on how to refactor your application into partitions and a unique code generation capability that helps you implement those partitions as microservices. Be sure to visit https://ibm.biz/Mono2Micro, try the interactive demo and the 90-day free trial, and get started on refactoring!