Original Message:
Sent: Mon May 13, 2024 05:30 AM
From: Dario Sindičić
Subject: Issues with Datastore While deploying Instana core
Great. :-D
------------------------------
Dario Sindičić
Original Message:
Sent: Mon May 13, 2024 04:53 AM
From: Mahantesh Karadigudda
Subject: Issues with Datastore While deploying Instana core
Yes, the service is available in the namespace:
instana-zookeeper-headless ClusterIP None <none> 2181/TCP,2888/TCP,3888/TCP,7000/TCP,8080/TCP 2d14h
I made one small change in the ClickHouse config,
from

zookeeper:
  nodes:
    - host: instana-zookeeper-headless.instana-clickhouse

to

zookeeper:
  nodes:
    - host: instana-zookeeper-headless.instana-clickhouse.svc.cluster.local
Re-deployed ClickHouse and Core. It looks like it is working, and I don't see any errors in the Instana operator regarding ClickHouse.
--
2024.05.13 08:38:22.795510 [ 267 ] {} <Information> ZooKeeperClient: Connected to ZooKeeper at 172.17.5.176:2181 with session_id 72376544485113872
--
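For what it's worth, the only difference between the failing and working host values is the DNS suffix. A minimal Python sketch (the helper function is mine, purely for illustration) shows how the fully qualified in-cluster service name is composed; the short form relies on the pod's DNS search path, while the FQDN does not:

```python
def k8s_service_fqdn(service: str, namespace: str, cluster_domain: str = "cluster.local") -> str:
    """Compose the fully qualified DNS name of a Kubernetes Service."""
    return f"{service}.{namespace}.svc.{cluster_domain}"

# The short form "instana-zookeeper-headless.instana-clickhouse" depends on
# the resolver's search domains; the FQDN below resolves unambiguously.
print(k8s_service_fqdn("instana-zookeeper-headless", "instana-clickhouse"))
```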
------------------------------
Mahantesh Karadigudda
Original Message:
Sent: Mon May 13, 2024 04:05 AM
From: Dario Sindičić
Subject: Issues with Datastore While deploying Instana core
Do you have the service instana-zookeeper-headless in your namespace? ClickHouse is trying to connect to that address.
------------------------------
Dario Sindičić
Original Message:
Sent: Mon May 13, 2024 04:01 AM
From: Mahantesh Karadigudda
Subject: Issues with Datastore While deploying Instana core
Earlier, ZooKeeper was not deployed; I have since deployed it by following the docs. The ZooKeeper pods are running properly.
NAME READY STATUS RESTARTS
chi-instana-local-0-0-0 2/2 Running 0
chi-instana-local-0-1-0 2/2 Running 0
clickhouse-operator-ibm-clickhouse-operator-7fd74fb8c6-k7rqb 1/1 Running 0
instana-zookeeper-0 1/1 Running 0
instana-zookeeper-1 1/1 Running 0
instana-zookeeper-2 1/1 Running 0
------------------------------
Mahantesh Karadigudda
Original Message:
Sent: Mon May 13, 2024 03:31 AM
From: Dario Sindičić
Subject: Issues with Datastore While deploying Instana core
Is your ZooKeeper running in instana-clickhouse? Can you show us the pod status in the instana-clickhouse namespace?
oc get pods -n instana-clickhouse -o wide
------------------------------
Dario Sindičić
Original Message:
Sent: Fri May 10, 2024 10:19 AM
From: Mahantesh Karadigudda
Subject: Issues with Datastore While deploying Instana core
After adding the above entry, the issue still exists.
--
ts=2024-05-10T14:17:34.32751597Z level=info logger=migration msg="checking table: [shared.migrations_create]"
ts=2024-05-10T14:17:34.340493185Z level=info logger=migration msg="clickHouse (application): chi-instana-local-0-0.instana-clickhouse.svc.cluster.local, shard=1, version=1 dirty=1"
ts=2024-05-10T14:17:34.351924511Z level=info logger=migration msg="clickHouse (application): chi-instana-local-0-1.instana-clickhouse.svc.cluster.local, shard=1, version=1 dirty=1"
ts=2024-05-10T14:17:34.351971424Z level=info logger=migration msg="ClickhouseConfig migration is dirty"
ts=2024-05-10T14:17:34.352020488Z level=error msg="Reconciler error" controller=core controllerGroup=instana.io controllerKind=Core Core="{instana-core instana-core}" namespace=instana-core name=instana-core reconcileID=33e4c3f5-bc7a-494b-93c6-acb7ac82949d error="database migration failed (clickhouse/application)" stacktrace="github.com/go-logr/logr.Logger.Error\n\t/tmp/build/80754af9/cache/go/mod/github.com/go-logr/logr@v1.3.0/logr.go:305\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/tmp/build/80754af9/cache/go/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:329\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/tmp/build/80754af9/cache/go/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/tmp/build/80754af9/cache/go/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/int...
--
------------------------------
Mahantesh Karadigudda
Original Message:
Sent: Fri May 10, 2024 09:42 AM
From: Dario Sindičić
Subject: Issues with Datastore While deploying Instana core
You need to specify clickhouseConfigs like this:
clickhouseConfigs:
  - authEnabled: true
    clusterName: local
    hosts:
      - chi-instana-local-0-0.instana-clickhouse.svc.cluster.local
      - chi-instana-local-0-1.instana-clickhouse.svc.cluster.local
------------------------------
Dario Sindičić
Original Message:
Sent: Fri May 10, 2024 08:45 AM
From: Mahantesh Karadigudda
Subject: Issues with Datastore While deploying Instana core
Core Object
clickhouseConfigs:
  - authEnabled: true
    clusterName: local
    hosts:
      - clickhouse-instana.instana-clickhouse.svc.cluster.local
    ports:
      - name: tcp
        port: 9000
      - name: http
        port: 8123
Logs attached.
------------------------------
Mahantesh Karadigudda
Original Message:
Sent: Fri May 10, 2024 08:39 AM
From: Dario Sindičić
Subject: Issues with Datastore While deploying Instana core
OK, we are moving forward. Can you post the ClickHouse section of the Core object, and can you look into the ClickHouse pods (chi-instana-local-***) in the instana-clickhouse namespace? There is something wrong inside the ClickHouse database.
------------------------------
Dario Sindičić
Original Message:
Sent: Fri May 10, 2024 08:29 AM
From: Mahantesh Karadigudda
Subject: Issues with Datastore While deploying Instana core
After removing the ports, it is working and the operator is no longer crashing. I'm now seeing the below issue with ClickHouse.
--
ts=2024-05-10T12:20:38.176183833Z level=info logger=migration msg="Clickhouse (application): clickhouse-instana.instana-clickhouse.svc.cluster.local, shard=1, is healthy, version=23.8.9.54"
ts=2024-05-10T12:20:38.186849882Z level=info logger=migration msg="checking table: [shared.migrations_create]"
ts=2024-05-10T12:20:38.201007736Z level=info logger=migration msg="clickHouse (application): clickhouse-instana.instana-clickhouse.svc.cluster.local, shard=1, version=1 dirty=1"
ts=2024-05-10T12:20:38.201043879Z level=info logger=migration msg="ClickhouseConfig migration is dirty"
ts=2024-05-10T12:20:38.201087451Z level=error msg="Reconciler error" controller=core controllerGroup=instana.io controllerKind=Core Core="{instana-core instana-core}" namespace=instana-core name=instana-core reconcileID=9168c3f3-11da-4046-9674-ff2207499847 error="database migration failed (clickhouse/application)" stacktrace="github.com/go-logr/logr.Logger.Error\n\t/tmp/build/80754af9/cache/go/mod/github.com/go-logr/logr@v1.3.0/logr.go:305\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/tmp/build/80754af9/cache/go/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:329\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/tmp/build/80754af9/cache/go/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/tmp/build/80754af9/cache/go/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:227"
--
------------------------------
Mahantesh Karadigudda
Original Message:
Sent: Thu May 09, 2024 08:36 AM
From: Dario Sindičić
Subject: Issues with Datastore While deploying Instana core
Remove the ports section; the operator is failing because of it. The HTTP port 9200/TCP will still be created.
elasticsearchConfig:
  authEnabled: true
  clusterName: instana
  defaultIndexReplicas: 0
  defaultIndexRoutingPartitionSize: 2
  defaultIndexShards: 5
  hosts:
    - instana-es-internal-http.instana-elastic.svc.cluster.local
------------------------------
Dario Sindičić
Original Message:
Sent: Thu May 09, 2024 08:00 AM
From: Mahantesh Karadigudda
Subject: Issues with Datastore While deploying Instana core
Hi Dario
Here is the full log trace.
--
ts=2024-05-09T11:44:32.060083815Z level=info msg="Starting workers" controller=unit controllerGroup=instana.io controllerKind=Unit workercount=1
ts=2024-05-09T11:44:32.069348578Z level=info msg="Observed a panic in reconciler: runtime error: invalid memory address or nil pointer dereference" controller=core controllerGroup=instana.io controllerKind=Core Core="{instana-core instana-core}" namespace=instana-core name=instana-core reconcileID=b4beefe9-e9af-419c-a7e0-c271fa958cd7
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x19fb6e1]
goroutine 369 [running]:
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile.func1()
panic({0x1c42bc0?, 0x319f2c0?})
/usr/local/go/src/runtime/panic.go:914 +0x21f
github.ibm.com/instana/infrastructure/self-hosted-k8s-operator/operator/pkg/instanactl.ConfigElasticSearch({0xc004f0d800, 0x23}, 0xc000570dd0)
/tmp/build/80754af9/repo/self-hosted-k8s-operator/operator/pkg/instanactl/dsconfig.go:48 +0x281
github.ibm.com/instana/infrastructure/self-hosted-k8s-operator/operator/pkg/migrations.GetMigrations({0x226e2e0, 0xc004f3e120}, {0xc004f0d800, 0x23}, 0xc004f3b080, {0x0, 0x0}, {0xc004f89280, 0x2, 0x2}, ...)
/tmp/build/80754af9/repo/self-hosted-k8s-operator/operator/pkg/migrations/migrations.go:92 +0x4d3
github.ibm.com/instana/infrastructure/self-hosted-k8s-operator/operator/controllers.(*CoreReconciler).Reconcile(0xc000561fb0, {0x226e2e0, 0xc004f3e120}, {{{0xc000ae55c0?, 0x5?}, {0xc000ae55b0?, 0xc000aa1d48?}}})
/tmp/build/80754af9/repo/self-hosted-k8s-operator/operator/controllers/core_controller.go:139 +0x8ca
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0x22728e8?, {0x226e2e0?, 0xc004f3e120?}, {{{0xc000ae55c0?, 0xb?}, {0xc000ae55b0?, 0x0?}}})
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc00034de00, {0x226e318, 0xc000465bd0}, {0x1ce7ce0?, 0xc0004b1ae0?})
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc00034de00, {0x226e318, 0xc000465bd0})
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2 in goroutine 79
--
The issue may be in instanactl.ConfigElasticSearch.
The Elasticsearch config in the Core object is as follows.
elasticsearchConfig:
  authEnabled: true
  clusterName: instana
  defaultIndexReplicas: 0
  defaultIndexRoutingPartitionSize: 2
  defaultIndexShards: 5
  hosts:
    - instana-es-internal-http.instana-elastic.svc.cluster.local
  ports:
    - name: http
      port: 9200
------------------------------
Mahantesh Karadigudda
Original Message:
Sent: Wed May 08, 2024 04:18 AM
From: Dario Sindičić
Subject: Issues with Datastore While deploying Instana core
Maybe you forgot to put agentAcceptorConfig inside the Core object?
spec:
  agentAcceptorConfig:
    host: ingress.<instana.example.com>
    port: 443
------------------------------
Dario Sindičić
Original Message:
Sent: Tue May 07, 2024 01:49 AM
From: Mahantesh Karadigudda
Subject: Issues with Datastore While deploying Instana core
Hi Scott,
Is Instana not yet supported on a ROKS cluster? I have an OpenShift cluster on IBM Cloud and am trying to deploy the Instana backend Core component, but the operator is crashing with the following message.
--
ts=2024-05-07T05:27:01.518276446Z level=info msg="checking operation mode" controller=core controllerGroup=instana.io controllerKind=Core Core="{instana-core instana-core}" namespace=instana-core name=instana-core reconcileID=bb566ddf-1134-4cb8-9874-a30ba763a9f1
ts=2024-05-07T05:27:01.534349991Z level=info msg="Observed a panic in reconciler: runtime error: invalid memory address or nil pointer dereference" controller=core controllerGroup=instana.io controllerKind=Core Core="{instana-core instana-core}" namespace=instana-core name=instana-core reconcileID=bb566ddf-1134-4cb8-9874-a30ba763a9f1
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x19fb6e1]
---
OpenShift Version: 4.14.20
Instana Version: 271
------------------------------
Mahantesh Karadigudda
Original Message:
Sent: Fri July 07, 2023 10:46 AM
From: Scott Penney
Subject: Issues with Datastore While deploying Instana core
ROKS is not a supported platform, only standard K8s and standard OCP
https://www.ibm.com/docs/en/instana-observability/current?topic=installing-configuring-self-hosted-instana-backend-premises#installing-and-configuring-self-hosted-instana-backend-on-premises
------------------------------
Scott Penney
Original Message:
Sent: Wed June 28, 2023 06:56 AM
From: Bhanu Prakash Desakuru
Subject: Issues with Datastore While deploying Instana core
Hi All,
I was trying to deploy the Instana backend on a ROKS cluster and am facing issues with the datastores after deploying Instana Core. Below is the error that I see in the Instana operator pod logs.
........................................
ts=2023-06-28T10:46:54.087228018Z level=error msg="Reconciler error" controller=core controllerGroup=instana.io controllerKind=Core Core="{instana-core instana-core}" namespace=instana-core name=instana-core reconcileID=ebc95abf-5044-4073-a868-0947becac015 error="database migration failed (clickhouse/application)" stacktrace="github.com/go-logr/logr.Logger.Error\n\t/tmp/build/80754af9/cache/go/mod/github.com/go-logr/logr@v1.2.4/logr.go:299\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/tmp/build/80754af9/cache/go/mod/sigs.k8s.io/controller-runtime@v0.14.6/pkg/internal/controller/controller.go:329\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/tmp/build/80754af9/cache/go/mod/sigs.k8s.io/controller-runtime@v0.14.6/pkg/internal/controller/controller.go:274\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/tmp/build/80754af9/cache/go/mod/sigs.k8s.io/controller-runtime@v0.14.6/pkg/internal/controller/controller.go:235"
..............................
Please let me know if any workaround is available for the above error. I have tried deleting the ClickHouse deployment and redeploying, but I still face the same issue.
------------------------------
Bhanu Prakash Desakuru
------------------------------