OpenTelemetry Logs Not Transmitting to Instana from Native Linux? Here’s How to Fix It
1. Introduction
Setting up OpenTelemetry logging can be painful. After many days of trial and error and searching Google, GitHub, and other sources, I finally got the transmission working and usable from native Linux. Most of the documentation in the wild is for Docker, Kubernetes, Java, or other platforms. What about Linux itself? You'll also find a lot of conflicting information written for other OTel collectors on those platforms, where the options and settings are simply invalid here.
Below is a step-by-step guide and some of the “gotchas” I experienced while setting up the otel-contrib collector. The setup uses the existing Instana agent that is already running on the system.
The collector is located at
https://github.com/open-telemetry/opentelemetry-collector-contrib
This should give you a head start on the setup and hopefully help you get past some of the easy mistakes involved with the configuration.
2. Understanding How OpenTelemetry Logs Work
OpenTelemetry uses a pipeline that involves several components, in this order (a minimal skeleton config is shown after this list).
receivers – The first stage of the pipeline, responsible for pulling data in from the logs. This is where multiline and other regular-expression work happens on the log file to build the body of the payload.
processors – The second stage of the pipeline, which acts on the data from the receivers to tag it, set severity levels, or add other attributes.
exporters – The third stage, which builds the payload and sends it to the backend.
extensions – These fall outside of the pipeline but can be referenced by pipeline components. This is where storage extensions come into play for things like fault tolerance.
Instana agent – Receives data from the otel-contrib exporter
Instana GUI – Your tenant needs to be enabled to receive logs, and you’ll want Smart Alerts enabled as well. Based on the documentation, this can involve a purchasable add-on license for the Instana tenant that may not already be enabled.
The Logs and Analytics views in Instana are used to review the data.
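To make the wiring concrete, here is a minimal skeleton of a collector yaml. This is for illustration only: the file paths are placeholders, and the component names simply match the full working example later in Step 1. Every component you define must also be listed in the service section, or it is ignored.
receivers:
  filelog:
    include: [ "/path/to/example.log" ]      ## placeholder path
processors:
  batch:
exporters:
  otlp/instanaAgent:
    endpoint: "http://localhost:4317"
    tls:
      insecure: true
extensions:
  file_storage/filelogreceiver:
    directory: /path/to/storage              ## placeholder path
    create_directory: true
service:
  extensions: [file_storage/filelogreceiver]
  pipelines:
    logs:
      receivers: [filelog]
      processors: [batch]
      exporters: [otlp/instanaAgent]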
3. Common Reasons Why OpenTelemetry Logs Aren’t Transmitting
Misconfigured yaml files, agents, exporters or endpoints.
This is the number one problem. That said, the other three issues listed here should be ruled out first before digging into the OpenTelemetry configuration.
Missing required SDKs or incorrect instrumentation. This applies to the instana-agent itself: make sure OpenTelemetry is enabled in the Instana agent yaml and that simple metrics are already being sent in.
Network/firewall restrictions blocking log transmission. If you have a working instana-agent that is sending to the backend, you should be fine. If your Instana agent isn’t sending basic metrics data, OpenTelemetry will not work either.
Collector or backend service not running properly. The Instana host tenant needs to be enabled and configured to support logs. A few quick agent sanity checks are sketched below.
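For example (the service name and agent log path below are from my install; adjust to yours):
# systemctl status instana-host-agent
# tail -n 50 /opt/instana/instana-agent/data/log/agent.log      ## default agent log location on my systems
# netstat -an | grep 4317                                        ## the gRPC listener only appears once OpenTelemetry is enabled in the agent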
4. Step-by-Step Troubleshooting Guide using Instana
Step 1: Check Your OpenTelemetry Collector Configuration
Verify or install the collector binary. There are various ways to get it onto your server; this is the simple download method.
- https://github.com/open-telemetry/opentelemetry-collector-contrib/releases
Navigate to the downloads section and pull down the correct rpm for your Linux distribution. 
Example: otelcol-contrib_0.118.0_linux_amd64.rpm
- Install the rpm ( RH example ):
- # dnf install otelcol-contrib_0.118.0_linux_amd64.rpm
# rpm -qa otelcol-contrib
otelcol-contrib-0.118.0-1.x86_64
- Verify that your Instana agent configuration.yaml file has these set.
# OpenTelemetry Collector
com.instana.plugin.opentelemetry:
  grpc:
    enabled: true
  http:
    enabled: true
- After this, make sure to restart your Instana agent if OpenTelemetry wasn’t already enabled.
# systemctl restart instana-host-agent
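To confirm the agent picked up the change, check the agent log for the gRPC listener (the log path is the default on my install; adjust if yours differs):
# grep -i grpc /opt/instana/instana-agent/data/log/agent.log | tail -n 5
You should see the “Listening on 127.0.0.1, ports 55680,4317” line shown later in Step 3.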
- Verify the otel-collector-config.yaml file.
Below is an example for DB2 db2diag.log
## "##" marks a comment
receivers:
  filelog:
    type: file_input
    poll_interval: 500ms
    ## This is a comma-delimited set of path values that point to a file
    include: [
      "/<db2_diag_path>/<instance_path>/db2diag.*.log",
      "/<db2_diag_path>/<instance_path>/db2diag.*.log",
      "/<db2_diag_path>/<instance_path>/db2diag.*.log",
      "/<db2_diag_path>/<instance_path>/db2diag.*.log"
    ]
    storage: file_storage/filelogreceiver
    ## optional exclude files
    exclude: [ ]
    ## [REQUIRED] Whether to include the file path in the logs
    include_file_path: true
    ## [OPTIONAL] Whether to include the file name in the logs
    include_file_name: true
    preserve_leading_whitespaces: true
    ## Use multiline to split the body of the db2diag.log into logical entries.
    ## The regex searches for the keyword LEVEL:, which is included
    ## on the first line of each db2diag.log entry.
    multiline:
      line_start_pattern: ".*LEVEL:.*"
processors:
  ## This is an example log severity parser that sets the
  ## severity_text field in the log payload; each statement runs
  ## in order such that the highest matching severity is set.
  ## The severity tags in db2diag.log are matched as follows using
  ## the IsMatch regex function.
  ## Instana only has INFO, WARN, ERROR and FATAL.
  transform/severity_parse:
    log_statements:
      - context: log
        statements:
          - set(severity_text, "Info") where IsMatch(body.string, ".*LEVEL:.*Info.*")
          - set(severity_text, "Info") where IsMatch(body.string, ".*LEVEL:.*Event.*")
          - set(severity_text, "Warn") where IsMatch(body.string, ".*LEVEL:.*Warning.*")
          - set(severity_text, "Error") where IsMatch(body.string, ".*LEVEL:.*Error.*")
          - set(severity_text, "Fatal") where IsMatch(body.string, ".*LEVEL:.*Severe.*")
          - set(severity_number, SEVERITY_NUMBER_INFO) where IsString(body) and IsMatch(body, ".*LEVEL:.*Info.*")
          - set(severity_number, SEVERITY_NUMBER_INFO) where IsString(body) and IsMatch(body, ".*LEVEL:.*Event.*")
          - set(severity_number, SEVERITY_NUMBER_WARN) where IsString(body) and IsMatch(body, ".*LEVEL:.*Warning.*")
          - set(severity_number, SEVERITY_NUMBER_ERROR) where IsString(body) and IsMatch(body, ".*LEVEL:.*Error.*")
          - set(severity_number, SEVERITY_NUMBER_FATAL) where IsString(body) and IsMatch(body, ".*LEVEL:.*Severe.*")
          - set(attributes["Optional_Tag_Name"], "Optional_Tag_Value")
          - set(attributes["service"], "db2_diag_log")
          - set(resource.attributes["service.name"], "db2_diag_log")
  ## Set resource attributes of the payload.
  ## Setting both of these areas seems to ensure that Instana can see the
  ## Service Name in the GUI.
  ## It is probably overkill, but it works.
  resource:
    attributes:
      - action: insert
        key: service.name
        value: db2_diag_log
      - action: insert
        key: instana.service.name
        value: db2_diag_log
  ## Set attributes on the body of the payload.
  attributes/insert:
    actions:
      - key: "Optional_Tag_Name"
        value: "Optional_Tag_Value"
        action: insert
      - key: "service"
        value: "db2_diag_log"
        action: insert
      - key: "service.name"
        value: "db2_diag_log"
        action: insert
  memory_limiter:
    check_interval: 2s
    limit_mib: 1800
  ## Logs must be sent in batches for performance reasons.
  batch:
    send_batch_size: 1000
    send_batch_max_size: 1000
    timeout: 0s
exporters:
  debug:
    ## Set the detail level of output for the otel-contrib command
    ## verbosity: detailed
    verbosity: basic
  ## [REQUIRED] The Instana agent supports GRPC payloads
  otlp/instanaAgent:
    ## The GRPC port will be 4317
    endpoint: "http://localhost:4317"
    sending_queue:
      storage: file_storage/otlpoutput
    headers:
      accept: application/x-protobuf
      content-type: application/x-protobuf
      x-instana-key: "<Instana_Agent_Key>"
      x-instana-host: "<FQDN of the machine>"
    ## TLS encryption is disabled in this example.
    tls:
      insecure: true
    compression: none
## Set up the storage extensions to be fault tolerant over reboots and restarts
extensions:
  file_storage/filelogreceiver:
    directory: /opt/instana/instana-agent/otel/otel_storage/db2/receiver
    timeout: 1s
    create_directory: true
    compaction:
      on_start: true
      directory: /tmp/
      max_transaction_size: 65_536
    fsync: false
  file_storage/otlpoutput:
    directory: /opt/instana/instana-agent/otel/otel_storage/db2/output
    timeout: 1s
    create_directory: true
    compaction:
      on_start: true
      directory: /tmp/
      max_transaction_size: 65_536
    fsync: false
## Wire the components together in the service section.
## Each name in the comma-delimited lists here must match a component defined above.
## yaml formatting is essential: the spacing and indentation of these
## tags must be correct.
service:
  extensions: [file_storage/filelogreceiver, file_storage/otlpoutput]
  pipelines:
    ## Sample logs pipeline using the above configurations.
    logs:
      receivers: [filelog]
      processors: [transform/severity_parse, attributes/insert, resource, memory_limiter, batch]
      exporters: [debug, otlp/instanaAgent]
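Before wiring the collector into a service, it is worth validating the yaml without starting anything. Recent otelcol-contrib builds include a validate subcommand (the config path below matches the start script later in this step):
# /usr/bin/otelcol-contrib validate --config=/opt/instana/instana-agent/otel/otel_yaml/otel-config_db2.yaml
It returns 0 quietly when the configuration parses cleanly and prints the same kinds of errors shown in section 5 when it does not.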
- Set up a systemd service to start up the otel collectors.
Disable the default one installed by the rpm:
# systemctl disable otelcol-contrib
- Create your own /etc/systemd/system/otel.service file
Create your own start and stop scripting
[Unit]
Description=OpenTelemetry Collector Contrib
After=network.target
[Service]
ExecStart=/opt/instana/instana-agent/otel/otel_bin/otel_start
ExecStop=/opt/instana/instana-agent/otel/otel_bin/otel_stop
Type=oneshot
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
Start code ( for the otel_start script )
#!/bin/sh
echo "Starting OTEL DB2 Log Sensor"
cd /opt/instana/instana-agent/otel/otel_log/db2
nohup /usr/bin/otelcol-contrib --config=/opt/instana/instana-agent/otel/otel_yaml/otel-config_db2.yaml > db2otel.log 2>&1 &
echo "Starting OTEL DR Code Log Sensor"
cd /opt/instana/instana-agent/otel/otel_log/drcode
nohup /usr/bin/otelcol-contrib --config=/opt/instana/instana-agent/otel/otel_yaml/otel-config_drcode.yaml > drcodeotel.log 2>&1 &
sleep 1
echo "Starting OTEL Pacemaker Log Sensor"
cd /opt/instana/instana-agent/otel/otel_log/pacemaker
nohup /usr/bin/otelcol-contrib --config=/opt/instana/instana-agent/otel/otel_yaml/otel-config_pacemaker.yaml > pacemakerotel.log 2>&1 &
echo "Starting OTEL System Log Sensor"
cd /opt/instana/instana-agent/otel/otel_log/system
nohup /usr/bin/otelcol-contrib --config=/opt/instana/instana-agent/otel/otel_yaml/otel-config_system.yaml > systemotel.log 2>&1 &
exit 0;
Stop code ( for the otel_stop script )
#!/bin/sh
/usr/bin/ps auxww | /usr/bin/grep otelcol-contrib | /usr/bin/grep -v grep | /usr/bin/awk '{print $2}' | /usr/bin/xargs -i kill {}
exit 0;
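If pkill is available on your distribution, a shorter stop script does the same thing:
#!/bin/sh
## Match the full command line (-f) of every running otelcol-contrib process and terminate it.
/usr/bin/pkill -f otelcol-contrib
exit 0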
- Enable the system service
# systemctl enable otel
# pwd
/opt/instana/instana-agent/otel
./otel_bin ## Start and Stop scripting
./otel_log ## log files for each process
./otel_log/db2
./otel_log/drcode
./otel_log/drcode/.cache
./otel_log/drcode/.cache/snowflake
./otel_log/pacemaker
./otel_log/system
./otel_storage ## storage extension directories
./otel_storage/db2
./otel_storage/db2/receiver
./otel_storage/db2/output
./otel_storage/drcode
./otel_storage/drcode/receiver
./otel_storage/drcode/output
./otel_storage/pacemaker
./otel_storage/pacemaker/receiver
./otel_storage/pacemaker/output
./otel_storage/system
./otel_storage/system/receiver
./otel_storage/system/output
./otel_yaml ## yaml files
Step 2: Confirm That Your Application Is Sending Logs
The otel_contrib process tails the log file that the application is writing into.
If the application itself is not writing to the file, the otel_contrib process will not send.
Sounds obvious, but I ran into this myself. My DR code was not logging or was turned off.
The multiline regex handling assumes that log messages are written sequentially. If you have multiple processes writing into the same file, ensure that they do not insert text asynchronously. Each block of multiline text in the log file should be intact and not split up by other data. I use file locking around writing block sections in custom code to achieve this (a sketch follows).
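The sketch below is hypothetical (the log path and message text are made up), but it shows the flock pattern I mean: the writer holds an exclusive lock on the file while an entire multi-line block is appended, so blocks from concurrent writers cannot interleave.
#!/bin/sh
## Open the log for append on fd 9, take an exclusive lock, then write the whole block at once.
(
  flock -x 9
  printf '%s\n' "LEVEL: Error  example entry" "  continuation line 1" "  continuation line 2" >&9
) 9>>/var/log/myapp/dr_code.log   ## hypothetical log path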
Ensure that the log paths and wildcards you have set in the yaml are correct, and that the user running otelcol-contrib has permission to read the files.
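A few quick checks for this (the wildcard path is from the DB2 example above; <otel_user> is whatever account runs the collector):
# ls -l /<db2_diag_path>/<instance_path>/db2diag.*.log                         ## does the wildcard resolve to real files?
# tail -n 5 /<db2_diag_path>/<instance_path>/db2diag.*.log                      ## is the file still being written to?
# sudo -u <otel_user> head -n 1 /<db2_diag_path>/<instance_path>/db2diag.*.log  ## can the collector user read it?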
Step 3: Validate Exporter Connectivity
When the instana agent starts up, port 4317 should be active and in LISTEN
When the otel_contrib service is active, you should have an active connection over port 4317 for each service you have started
# netstat -an | grep 4317
tcp 0 0 127.0.0.1:43892 127.0.0.1:4317 ESTABLISHED
tcp 0 0 127.0.0.1:43916 127.0.0.1:4317 ESTABLISHED
tcp 0 0 127.0.0.1:43928 127.0.0.1:4317 ESTABLISHED
tcp 0 0 127.0.0.1:43904 127.0.0.1:4317 ESTABLISHED
tcp6 0 0 127.0.0.1:4317 :::* LISTEN
tcp6 0 0 127.0.0.1:4317 127.0.0.1:43916 ESTABLISHED
tcp6 0 0 127.0.0.1:4317 127.0.0.1:43892 ESTABLISHED
tcp6 0 0 127.0.0.1:4317 127.0.0.1:43928 ESTABLISHED
tcp6 0 0 127.0.0.1:4317 127.0.0.1:43904 ESTABLISHED
This setup assumes the local loopback address 127.0.0.1 is used.
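If you prefer ss, the same check with the owning process shown (requires root for -p; the 4317 listener should belong to the Instana agent’s java process):
# ss -tlnp | grep 4317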
The instana agent logs should show the following if enabled correctly
2025-02-05T11:51:23.933-05:00 | INFO | instana-grpc-service | tGrpcServiceImpl | com.instana.agent-grpc - 1.0.13 | Listening on 127.0.0.1, ports 55680,4317
2025-02-05T11:51:23.933-05:00 | INFO | features-3-thread-1 | tGrpcServiceImpl | com.instana.agent-grpc - 1.0.13 | Started agent grpc service with 4 threads.
Step 4: Debug Network and Firewall Issues
Loopback on the server should be open. If not, verify local firewall and security group rules; I didn’t run into any issues here.
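On RHEL-family systems, a couple of checks if loopback traffic does appear to be blocked (firewalld may not even be in use on your host):
# firewall-cmd --list-all
# iptables -L OUTPUT -n | grep -i -E "drop|reject"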
Using a local Instana agent that is already sending into a tenant is ideal, as you don’t need to configure a backend connection.
The Instana agent’s backend configuration should have your ingress host, port, and key defined.
You can check the connection over port 443 to your ingress hostname.
# Configures connection to the Instana SaaS. Changes will be hot-reloaded.
host=<ingress_hostname>
port=443
protocol=HTTP/2
# Access Key for your SaaS installation.
key=XXXXXXXX
# openssl s_client -connect <ingress_hostname>:443
CONNECTED(00000003)
depth=2 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert Global Root G2
verify return:1
depth=1 C = US, O = DigiCert Inc, CN = DigiCert Global G2 TLS RSA SHA256 2020 CA1
verify return:1
depth=0 C = US, ST = New York, L = Armonk, O = International Business Machines Corporation, CN = *.instana.io
verify return:1
Step 5: Look at OpenTelemetry Collector Logs
With “verbosity: detailed” set in the yaml, you can see quite a bit more of what it’s doing. Each log file it is monitoring will show a “scanned” resource being sent to the backend collector.
# /usr/bin/otelcol-contrib --config=/opt/instana/instana-agent/otel/otel_yaml/otel-config_pacemaker.yaml
2025-02-06T16:43:18.704-0500 info service@v0.118.0/service.go:164 Setting up own telemetry...
2025-02-06T16:43:18.704-0500 info telemetry/metrics.go:70 Serving metrics {"address": "localhost:8888", "metrics level": "Normal"}
2025-02-06T16:43:18.705-0500 info builders/builders.go:26 Development component. May change in the future. {"kind": "exporter", "data_type": "logs", "name": "debug"}
2025-02-06T16:43:18.705-0500 info memorylimiter@v0.118.0/memorylimiter.go:75 Memory limiter configured {"kind": "processor", "name": "memory_limiter", "pipeline": "logs", "limit_mib": 1800, "spike_limit_mib": 360, "check_interval": 2}
2025-02-06T16:43:18.726-0500 info service@v0.118.0/service.go:230 Starting otelcol-contrib... {"Version": "0.118.0", "NumCPU": 56}
2025-02-06T16:43:18.726-0500 info extensions/extensions.go:39 Starting extensions...
2025-02-06T16:43:18.726-0500 info extensions/extensions.go:42 Extension is starting... {"kind": "extension", "name": "file_storage/otlpoutput"}
2025-02-06T16:43:18.726-0500 info extensions/extensions.go:59 Extension started. {"kind": "extension", "name": "file_storage/otlpoutput"}
2025-02-06T16:43:18.726-0500 info extensions/extensions.go:42 Extension is starting... {"kind": "extension", "name": "file_storage/filelogreceiver"}
2025-02-06T16:43:18.726-0500 info extensions/extensions.go:59 Extension started. {"kind": "extension", "name": "file_storage/filelogreceiver"}
2025-02-06T16:43:18.728-0500 info filestorage@v0.118.0/client.go:235 finished compaction {"kind": "extension", "name": "file_storage/otlpoutput", "directory": "/opt/instana/instana-agent/otel/otel_storage/pacemaker/output/exporter_otlp_instanaAgent_logs", "elapsed": 0.000277216}
2025-02-06T16:43:18.728-0500 info adapter/receiver.go:41 Starting stanza receiver {"kind": "receiver", "name": "filelog", "data_type": "logs"}
2025-02-06T16:43:18.728-0500 info filestorage@v0.118.0/client.go:235 finished compaction {"kind": "extension", "name": "file_storage/filelogreceiver", "directory": "/opt/instana/instana-agent/otel/otel_storage/pacemaker/receiver/receiver_filelog_", "elapsed": 0.000246507}
2025-02-06T16:43:18.729-0500 info fileconsumer/file.go:62 Resuming from previously known offset(s). 'start_at' setting is not applicable. {"kind": "receiver", "name": "filelog", "data_type": "logs", "component": "fileconsumer"}
2025-02-06T16:43:18.729-0500 info service@v0.118.0/service.go:253 Everything is ready. Begin running and processing data.
LogRecord #37
ObservedTimestamp: 2025-02-06 21:45:34.72997403 +0000 UTC
Timestamp: 1970-01-01 00:00:00 +0000 UTC
SeverityText: Warn
SeverityNumber: Warn(13)
Body: Str(Feb 06 16:45:34.317 dswcwdbstb.dswapi.ibm.net pacemaker-controld [5512] (do_state_transition) notice: State transition S_TRANSITION_ENGINE -> S_IDLE | input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd)
Attributes:
-> log.file.name: Str(pacemaker.log)
-> log.file.path: Str(/var/log/pacemaker/pacemaker.log)
-> Optional_Tag_Key: Str(Optional_Tag_value)
-> service: Str(pacemaker_cluster)
-> service.name: Str(pacemaker_cluster)
Trace ID:
Span ID:
Flags: 0
{"kind": "exporter", "data_type": "logs", "name": "debug"}
- You should see systemctl status <service_name> output like the following
# systemctl status otel
● otel.service - OpenTelemetry Collector Contrib
Loaded: loaded (/etc/systemd/system/otel.service; enabled; vendor preset: disabled)
Active: active (exited) since Thu 2025-02-06 16:03:39 EST; 1min 13s ago
Process: 3493137 ExecStop=/opt/instana/instana-agent/otel/otel_bin/otel_stop (code=exited, status=0/SUCCESS)
Process: 3493148 ExecStart=/opt/instana/instana-agent/otel/otel_bin/otel_start (code=exited, status=0/SUCCESS)
Main PID: 3493148 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/otel.service
├─3493149 /usr/bin/otelcol-contrib --config=/opt/instana/instana-agent/otel/otel_yaml/otel-config_db2.yaml
├─3493150 /usr/bin/otelcol-contrib --config=/opt/instana/instana-agent/otel/otel_yaml/otel-config_drcode.yaml
├─3493209 /usr/bin/otelcol-contrib --config=/opt/instana/instana-agent/otel/otel_yaml/otel-config_pacemaker.yaml
└─3493210 /usr/bin/otelcol-contrib --config=/opt/instana/instana-agent/otel/otel_yaml/otel-config_system.yaml
Step 6: Restart Services and Verify Metrics
Once things are working manually, enabling the systemctl services for both the instana-agent and the otel service is the easiest way to manage the environment.
Use commands like ->
systemctl enable otel < make sure this starts on reboot >
systemctl start|stop|restart otel < manage starting and stopping easily >
On your tenant, under Analytics and Logs, you should see the data in the opentelemetry stream.
5. Logging Issues and Other Documentation
Based on what I’ve experimented with and used so far, the storage extension has been the best way to make sure we don’t resend data that has already been transmitted. Without the storage extension, I did see it resend out-of-date information. The parsing regex filters on the data have proved easy to use.
Errors from the otel sensor can be vague. Here are some I encountered, with explanations.
# /usr/bin/otelcol-contrib --config=/opt/instana/instana-agent/otel/otel_yaml/otel-config_system.yaml
Error: failed to get config: cannot resolve the configuration: retrieved value (type=string) cannot be used as a Conf
2025/02/11 11:25:43 collector server run finished with error: failed to get config: cannot resolve the configuration: retrieved value (type=string) cannot be used as a Conf
This type of error directly relates to a misconfigured yaml file, and the message is not very helpful. My finding is that you need to start with a config that works and make small incremental changes, testing each time, so that you know where the problem is. Look for formatting problems, indentation misalignment, typos, missing commas in the list of files to include, missing parentheses, and comments that do not begin with ##.
Make backup copies of ones that do work so that you have a starting point if you mess things up.
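A simple way to keep that starting point around while you iterate (the file name is from the DB2 example; adjust to the yaml you are editing):
# cp otel-config_db2.yaml otel-config_db2.yaml.good
## ...edit otel-config_db2.yaml, then re-check it before restarting anything...
# /usr/bin/otelcol-contrib validate --config=otel-config_db2.yaml
# diff otel-config_db2.yaml.good otel-config_db2.yaml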
# /usr/bin/otelcol-contrib --config=/opt/instana/instana-agent/otel/otel_yaml/otel-config_system.yaml
Error: failed to get config: cannot unmarshal the configuration: decoding failed due to the following error(s):
error decoding 'receivers': error reading configuration for "filelog": decoding failed due to the following error(s):
error decoding 'operators[0]': unsupported type 'missing_operator'
2025/02/11 11:33:51 collector server run finished with error: failed to get config: cannot unmarshal the configuration: decoding failed due to the following error(s):
error decoding 'receivers': error reading configuration for "filelog": decoding failed due to the following error(s):
error decoding 'operators[0]': unsupported type 'missing_operator'
This error is shown when an invalid type of resource is defined. Each operator, receiver, or processor has a predefined set of options and functions that can be used. If you reference something that does not exist for that resource, you will get this error.
operators:
  - type: missing_operator      ## should be recombine
    combine_field: body
    is_first_entry: body matches "^[^\\s]"
    source_identifier: attributes["log.file.path"]
# /usr/bin/otelcol-contrib --config=/opt/instana/instana-agent/otel/otel_yaml/otel-config_system.yaml
Error: invalid configuration: extensions::file_storage/filelogreceiver: directory must exist: stat /opt/instana/instana-agent/otel/otel_storage/system/invalid_dir: no such file or directory. You can enable the create_directory option to automatically create it
2025/02/11 11:37:37 collector server run finished with error: invalid configuration: extensions::file_storage/filelogreceiver: directory must exist: stat /opt/instana/instana-agent/otel/otel_storage/system/invalid_dir: no such file or directory. You can enable the create_directory option to automatically create it
This error directly relates to the storage extensions: the directory defined in the extension does not exist. It is recommended to make sure these directories exist before referencing them. create_directory is enabled in the example, but good practice is to check it yourself.
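For example, creating the directories used by the system sensor’s extensions up front (paths from the directory layout in Step 1):
# mkdir -p /opt/instana/instana-agent/otel/otel_storage/system/receiver
# mkdir -p /opt/instana/instana-agent/otel/otel_storage/system/output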
# /usr/bin/otelcol-contrib --config=/opt/instana/instana-agent/otel/otel_yaml/otel-config_system.yaml
Error: invalid configuration: service::pipelines::logs: references receiver "missing_pipeline" which is not configured
2025/02/11 11:41:39 collector server run finished with error: invalid configuration: service::pipelines::logs: references receiver "missing_pipeline" which is not configured
This error directly relates to the service definitions in the yaml. Every resource listed in the service section needs to exist elsewhere in the yaml file so that the service tag can find it.
In this case, service.pipelines.logs.receivers has “missing_pipeline” as the first element of the array, but “missing_pipeline” does not appear in the receivers section near the beginning of the file (i.e. there is no receivers.missing_pipeline entry with definitions).
service:
  extensions: [file_storage/filelogreceiver, file_storage/otlpoutput]
  pipelines:
    ## Sample logs pipeline using the above configurations.
    logs:
      receivers: [ missing_pipeline, filelog ]
      processors: [transform/severity_parse, attributes/insert, resource, memory_limiter, batch]
      exporters: [debug, otlp/instanaAgent]
https://github.com/google/re2/wiki/Syntax documents the regular expression syntax used, if yours is really complicated.
OpenTelemetry and OpenTelemetry Contrib have somewhat reasonable documentation. I would avoid other repos that are specific to Docker and Kubernetes, as that collector is completely different, with different syntax.
https://github.com/open-telemetry/opentelemetry-collector-contrib is broken out by processors, exporters, receivers, and extensions.
https://github.com/open-telemetry/opentelemetry-collector is the base code stream for the contrib repo. Things like resource attributes and “global” settings are located here.
6. Conclusion
This whole project for me involves moving away from LogDNA to a single-source tool like Instana for logging and metrics. So far, having one place to view application logs alongside backend DB2 and Linux logging has proven its worth.
We recently had a DB2 backup failure that we were alerted on. I was able to view the logs for the backup code alongside the DB2 db2diag.log messages and system metrics for the same timeframe, and see that we were missing a transaction log. I put the transaction log back and re-ran the backup. Instana saved me about 10 to 15 minutes of sorting through data to find the issue.
I’d welcome any input or other ideas on this subject. Give the recombine operator a try as well. The example below treats any line that starts with a non-whitespace character as the beginning of a new entry, so indented continuation lines are folded into the previous one.
## multiline:
##   line_start_pattern: ".*Start.*Tag.*"
## operators:
##   ## [OPTIONAL] Example recombine operator config to handle multi-line log messages for stack traces. Requires `include_file_path: true` above.
##   - type: recombine
##     combine_field: body
##     is_first_entry: body matches ".*Start.*Tag.*"
##     source_identifier: attributes["log.file.path"]
operators:
  - type: recombine
    combine_field: body
    is_first_entry: body matches "^[^\\s]"
    source_identifier: attributes["log.file.path"]
#OpenTelemetry