In this blog post, I will discuss some best practices for achieving optimal throughput in applications that use a microservices architecture. I will focus on two areas: inter-service requests and JSON processing.
Inter-service Calls
With microservices, it is likely that you will be making RESTful calls between services, and this can be a big performance bottleneck. You want to make sure that the clients making those calls are configured correctly.
Example 1: MicroProfile Rest Clients
With MicroProfile Rest Clients, the best practice is simply to make them @ApplicationScoped. By default, MicroProfile Rest Clients have a scope of @Dependent. When you inject them into something like a Jakarta RESTful endpoint, they inherit the scope of the Jakarta RESTful class, which is @RequestScoped by default*. This causes a new MicroProfile Rest Client to be created on every request, which adds extra CPU cost and a decent amount of class loading overhead that will slow things down. By making the MicroProfile Rest Client @ApplicationScoped, the client is created only once, saving a lot of time.
*Note: The actual default scope of a Jakarta RESTful resource class is somewhat confusing, but for this case it behaves like @RequestScoped. If your application is stateless (as microservices applications are supposed to be), you will also benefit a bit by making your Jakarta RESTful classes @ApplicationScoped.
If that is confusing, an example should make it clearer. In this example, I am driving load from Apache JMeter to an application (containing the client code below) on server1. That application makes a call to another microservice hosted on server2.
Case 1 – Default
Here is the code of the REST endpoint that injects the MicroProfile Rest Client and makes the call from server1 to server2.
@Path("/mp-restclient-test1")
public class MicroProfileRestClientTest1 {

    @Inject
    @RestClient
    private DefaultRestClient defaultClient;

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String ping() throws Exception {
        String returnString = defaultClient.ping();
        defaultClient.close();
        return returnString;
    }
}
Here is the MicroProfile Rest Client interface.
@Path("/")
@RegisterRestClient(configKey = "defaultRestClient")
public interface DefaultRestClient extends AutoCloseable {

    @GET
    @Path("/endpoint")
    @Produces(MediaType.TEXT_PLAIN)
    String ping();
}
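Since the interface registers a configKey, the base URL for the client typically comes from MicroProfile Config rather than code. A minimal sketch, assuming a microprofile-config.properties file and a placeholder host name for server2:

```properties
# microprofile-config.properties
# Base URI for the client registered with configKey "defaultRestClient".
# Host and port here are placeholders for your server2 endpoint.
defaultRestClient/mp-rest/uri=http://server2:9081
```

The `<configKey>/mp-rest/uri` property is standard MicroProfile Rest Client configuration; you could equally use the fully qualified interface name as the prefix.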
Case 2 – ApplicationScoped
In this case, the REST endpoint is similar, but we don't need to close the client after every call since it never goes out of scope.
@Path("/mp-restclient-test2")
public class MicroProfileRestClientTest2 {

    @Inject
    @RestClient
    private AppScopedRestClient appScopedClient;

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String ping() {
        return appScopedClient.ping();
    }
}
Here is the MicroProfile Rest Client (notice the @ApplicationScoped annotation).
@ApplicationScoped
@Path("/")
@RegisterRestClient(configKey = "appScopedRestClient")
public interface AppScopedRestClient {

    @GET
    @Path("/endpoint")
    @Produces(MediaType.TEXT_PLAIN)
    String ping();
}
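As an alternative to annotating the interface, MicroProfile Rest Client also lets you set the scope through configuration, which is handy when you cannot modify the interface itself. A sketch, assuming the same configKey and a placeholder host name:

```properties
# microprofile-config.properties
appScopedRestClient/mp-rest/uri=http://server2:9081
# Fully qualified class name of the desired CDI scope:
appScopedRestClient/mp-rest/scope=jakarta.enterprise.context.ApplicationScoped
```

The `<configKey>/mp-rest/scope` property is part of the MicroProfile Rest Client specification and achieves the same effect as the @ApplicationScoped annotation above.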
Here is the performance difference. This is a simple, best-case scenario, but the @ApplicationScoped client is 373% faster.
Example 2: Jakarta RESTful Clients
If you're using a plain Jakarta RESTful client, the best practice is to create and cache the WebTarget instead of re-creating it for every call. The idea is the same as above: you save a lot of time by avoiding the extra CPU cost and class loading. Here is an example to demonstrate the difference.
Case 1 – Default
@Path("/client-test1")
public class ClientTestDefault {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String ping() {
        Client client = ClientBuilder.newBuilder().build();
        WebTarget webTarget = client.target("http://localhost:9081/endpoint");
        String output = webTarget.request().get(String.class);
        client.close();
        return output;
    }
}
Case 2 – Cached WebTarget
Similar, but I create a static WebTarget to reuse, and I don't need to close the client. You can also cache the Client or the Invocation.Builder, but I've had the best success with the WebTarget.
@Path("/client-test2")
public class ClientTestCached {

    private static final WebTarget cachedWebTarget =
            ClientBuilder.newBuilder().build()
                    .target("http://localhost:9081/endpoint");

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String ping() {
        return cachedWebTarget.request().get(String.class);
    }
}
Again, a large performance difference: 210% better!
JSON Processing
Another area where I have seen bottlenecks in microservices applications is JSON processing, specifically how JsonReader, JsonWriter, and JsonObjectBuilder objects are created. The best practice is to create and cache a factory first, and then create the reader, writer, or object builder from that factory.
Here is an example using a JsonObjectBuilder. If you use the Json.createObjectBuilder method (case 1), it looks up and creates a new factory on every call. Using a cached factory (case 2) skips the search for the JSON factory implementation and allocates the factory only once, which saves a decent amount of time.
Case 1 – Default
@Path("/json-test1")
public class JsonTest1 {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public JsonObject ping() {
        JsonObjectBuilder jsonObjectBuilder = Json.createObjectBuilder();
        return jsonObjectBuilder.add("example", "example").build();
    }
}
Case 2 – Cached Factory
Similar, but here I create a static factory and then create object builders from that factory.
@Path("/json-test2")
public class JsonTest2 {

    private static final JsonBuilderFactory jsonBuilderFactory =
            Json.createBuilderFactory(null);

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public JsonObject ping() {
        JsonObjectBuilder jsonObjectBuilder = jsonBuilderFactory.createObjectBuilder();
        return jsonObjectBuilder.add("example", "example").build();
    }
}
The performance savings are not as dramatic as in the client cases above, but 21% is still a solid improvement.
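The same factory-caching pattern applies to JsonReader and JsonWriter. Here is a minimal sketch of a cached JsonReaderFactory; the class name and input are illustrative, not from the application above:

```java
import java.io.StringReader;

import jakarta.json.Json;
import jakarta.json.JsonObject;
import jakarta.json.JsonReader;
import jakarta.json.JsonReaderFactory;

public class JsonReaderCache {

    // Look up and create the factory once; reuse it for every parse.
    private static final JsonReaderFactory readerFactory =
            Json.createReaderFactory(null);

    public static JsonObject parse(String json) {
        // Readers are cheap to create from an already-cached factory.
        try (JsonReader reader = readerFactory.createReader(new StringReader(json))) {
            return reader.readObject();
        }
    }
}
```

Json.createReaderFactory accepts a configuration map (null for defaults), mirroring the Json.createBuilderFactory call in case 2 above.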
As you can see, some simple changes to your application can yield large throughput improvements. I hope this blog post highlighting inter-service calls and JSON processing is helpful for you.
#websphere-performance
#Liberty
#OpenLiberty
#MicroProfile
#microservices