Hello,
We are running Instana self-hosted Standard Edition on 3 VMs.
A few days ago, after restarting the cluster, the tag-processor pod started crash-looping:
instana-core tag-processor-646b5d56f8-pjt4w 0/1 CrashLoopBackOff 858 (74s ago) 3d3h
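For reference, this is how I am checking the pod and pulling the log of the crashed container (standard kubectl; namespace taken from the listing above):

kubectl -n instana-core get pods | grep tag-processor
# --previous shows the log of the last crashed container instance
kubectl -n instana-core logs tag-processor-646b5d56f8-pjt4w --previous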
From the logs:
2025-11-24 13:23:07,558 ERROR tag-processor c.i.b.c.d.InstanaDropwizardApplication - Unexpected error during application startup
org.elasticsearch.ElasticsearchStatusException: Elasticsearch exception [type=circuit_breaking_exception, reason=[parent] Data too large, data for [<http_request>] would be [10463963688/9.7gb], which is larger than the limit of [10463530188/9.7gb], real usage: [10463963688/9.7gb], new bytes reserved: [0/0b], usages [fielddata=0/0b, eql_sequence=0/0b, model_inference=0/0b, inflight_requests=0/0b, request=0/0b]]
at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:178)
at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:2484)
at org.elasticsearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:2461)
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:218
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:2154)
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:2118)
at org.elasticsearch.client.IndicesClient.get(IndicesClient.java:1067)
at com.instana.tags.writing.TagSetsMappingsManager.getIndex(TagSetsMappingsManager.java:74)
at com.instana.tags.writing.TagSetsMappingsManager.start(TagSetsMappingsManager.java:62)
at io.dropwizard.lifecycle.JettyManaged.doStart(JettyManaged.java:27)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:93)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:171)
at org.eclipse.jetty.server.Server.start(Server.java:470)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:121)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:89)
at org.eclipse.jetty.server.Server.doStart(Server.java:415)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:93)
at io.dropwizard.core.cli.ServerCommand.run(ServerCommand.java:52)
at io.dropwizard.core.cli.EnvironmentCommand.run(EnvironmentCommand.java:67)
at io.dropwizard.core.cli.ConfiguredCommand.run(ConfiguredCommand.java:98)
at io.dropwizard.core.cli.Cli.run(Cli.java:78)
at io.dropwizard.core.Application.run(Application.java:94)
at com.instana.backend.common.dropwizard.InstanaDropwizardApplication.run(InstanaDropwizardApplication.java:114)
at com.instana.tags.TagProcessorApp.main(TagProcessorApp.java:33)
Suppressed: org.elasticsearch.client.ResponseException: method [GET], host [http://elasticsearch-es-default-0.elasticsearch-es-default.instana-elasticsearch.svc:9200], URI [/onprem_tag_sets_2025_48*?master_timeout=30s&ignore_unavailable=true&expand_wildcards=open&allow_no_indices=true], status line [HTTP/1.1 429 Too Many Requests]
{"error":{"root_cause":[{"type":"circuit_breaking_exception","reason":"[parent] Data too large, data for [<http_request>] would be [10463963688/9.7gb], which is larger than the limit of [10463530188/9.7gb], real usage: [10463963688/9.7gb], new bytes reserved: [0/0b], usages [fielddata=0/0b, eql_sequence=0/0b, model_inference=0/0b, inflight_requests=0/0b, request=0/0b]","bytes_wanted":10463963688,"bytes_limit":10463530188,"durability":"TRANSIENT"}],"type":"circuit_breaking_exception","reason":"[parent] Data too large, data for [<http_request>] would be [10463963688/9.7gb], which is larger than the limit of [10463530188/9.7gb], real usage: [10463963688/9.7gb], new bytes reserved: [0/0b], usages [fielddata=0/0b, eql_sequence=0/0b, model_inference=0/0b, inflight_requests=0/0b, request=0/0b]","bytes_wanted":10463963688,"bytes_limit":10463530188,"durability":"TRANSIENT"},"status":429}
Elasticsearch itself seems healthy:
{
  "cluster_name" : "elasticsearch",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 357,
  "active_shards" : 357,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "unassigned_primary_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
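Since a green status only reflects shard allocation, not heap pressure, I assume the node breaker statistics are the right thing to look at next (also a standard Elasticsearch API):

curl -s 'http://elasticsearch-es-default-0.elasticsearch-es-default.instana-elasticsearch.svc:9200/_nodes/stats/breaker?pretty'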
I understand this points at JVM heap tuning, but as I am new to Instana, I would like to understand how to fix it safely.
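For what it is worth, here is what I was considering but have not applied yet. The pod naming suggests Elasticsearch is managed by the ECK operator, so I assume the heap could be raised by setting ES_JAVA_OPTS in the podTemplate of the Elasticsearch custom resource, keeping the container memory limit at roughly twice the heap. A sketch, assuming the CR is named elasticsearch in the instana-elasticsearch namespace:

kubectl -n instana-elasticsearch edit elasticsearch elasticsearch
# then, under spec.nodeSets[0].podTemplate.spec, something like:
#   containers:
#   - name: elasticsearch
#     env:
#     - name: ES_JAVA_OPTS
#       value: "-Xms16g -Xmx16g"   # example size only; must fit the VM's RAM
#     resources:
#       limits:
#         memory: 32Gi             # ~2x heap, leaving room for the OS page cache

Is that the right approach here, or does Instana expect the Elasticsearch heap to be configured somewhere else (for example in the Core/datastore spec)?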
Thank you in advance.
Regards,
Nourreddine