I installed the Instana backend on a VM with the Docker-based installer, following the IBM documentation: https://www.ibm.com/docs/en/instana-observability/current?topic=installer-installing-instana-backend-docker
Once `instana init` finished the installation, I wanted to log in to Instana with the credentials it displays. However, running the command gave me the following:
Console output:
[root@instana ~]# instana init
Setup host environment ✓
? [Please choose Instana installation type] single
? [What is your tenant name?] tenant01
? [What is your unit name?] unit01
? [Insert your agent key (optional). If none is specified, one is generated which does not allow downloads.] ******************
? [Insert your download key or official agent key (optional).] *************
? [Insert your sales key] **************
? [Insert the FQDN of the host] **************
? [Where should your data be stored?] /mnt/data
? [Where should your trace data be stored?] /mnt/traces
? [Where should your metric data be stored?] /mnt/metrics
? [Where should your logs be stored?] /var/log/instana
? [Path to your signed certificate file?]
? [Path to your private key file?]
Handle certificates ✓
Ensure images ⡿ pull artifact-public.instana.io/backend/eum-health-processor:3.2
Ensure images ✓
Clean docker containers ✓
Check data directories ✓
Run datastores ✓
Create configurations ✓
Migrate data stores ✗
failed init for (kafka-ingress): kafka server: Request exceeded the user-specified time limit in the request
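The failing step is the topic migration for `kafka-ingress`. For reference, a minimal way to inspect the broker's own logs after the failure (a sketch; `instana-kafka` is the container name that appears in the installer log below, and `docker logs` still works on stopped-but-not-removed containers with the default logging driver):

# Show the most recent Kafka broker log lines after the failed init.
docker logs --tail 200 instana-kafka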
Log output:
2023-10-11T15:36:54.361 INFO ▶ Docker version: 24.0.6
2023-10-11T15:36:54.361 INFO ▶ Docker OS: Red Hat Enterprise Linux 8.8 (Ootpa)
2023-10-11T15:36:54.373 INFO ▶ Docker CPU's: 16
2023-10-11T15:36:54.373 INFO ▶ Docker total mem 65 GB
2023-10-11T15:36:54.373 WARN ▶ We recommend changing clocksource to tsc via 'echo tsc >/sys/devices/system/clocksource/clocksource0/current_clocksource'
2023-10-11T15:36:54.374 INFO ▶ increase 'net.core.somaxconn'
2023-10-11T15:36:54.374 INFO ▶ increase 'net.ipv4.tcp_max_syn_backlog'
2023-10-11T15:36:54.374 INFO ▶ adjust 'net.ipv4.ip_local_port_range'
2023-10-11T15:36:54.374 INFO ▶ adjust 'net.ipv4.ip_local_port_range'
2023-10-11T15:36:54.374 INFO ▶ adjust 'swappiness to 0'
2023-10-11T15:36:54.374 INFO ▶ disable '/sys/kernel/mm/transparent_hugepage'
2023-10-11T15:36:54.377 INFO ▶ add user instana:instana to system
2023-10-11T15:36:54.722 INFO ▶ Instana user created with UID:975 GID:972
2023-10-11T15:36:54.723 INFO ▶ check SELinux
2023-10-11T15:40:33.478 WARN ▶ dir.data (/mnt/data) only has 192GB, we recommend having at least 1000GB
2023-10-11T15:40:33.479 WARN ▶ dir.traces (/mnt/traces) only has 192GB, we recommend having at least 1000GB
2023-10-11T15:40:33.479 WARN ▶ dir.metrics (/mnt/metrics) only has 192GB, we recommend having at least 1000GB
2023-10-11T15:40:33.479 INFO ▶ no signed certificate or private key defined, generating them at /root/cert
2023-10-11T15:40:33.522 INFO ▶ pull artifact-public.instana.io/self-hosted-images/postgres:15.2_v0.40.0
2023-10-11T15:42:15.059 INFO ▶ pull artifact-public.instana.io/self-hosted-images/cassandra:4.1.3_v0.96.0
2023-10-11T15:44:20.562 INFO ▶ pull artifact-public.instana.io/self-hosted-images/elasticsearch:7.17.12_v0.92.0
2023-10-11T15:48:03.487 INFO ▶ pull artifact-public.instana.io/self-hosted-images/kafka:3.4.0_v0.94.0
2023-10-11T15:49:13.711 INFO ▶ pull artifact-public.instana.io/self-hosted-images/zookeeper:3.7.1_v0.75.0
2023-10-11T15:49:42.474 INFO ▶ pull artifact-public.instana.io/self-hosted-images/clickhouse:23.3.2.37_v0.90.0
2023-10-11T15:51:31.950 INFO ▶ pull artifact-public.instana.io/self-hosted-images/nginx:1.20.1-14.el9_v0.72.0
2023-10-11T15:52:08.799 INFO ▶ pull artifact-public.instana.io/backend/butler:3.259.288-0
2023-10-11T15:54:05.373 INFO ▶ pull artifact-public.instana.io/backend/groundskeeper:3.259.288-0
2023-10-11T15:55:37.096 INFO ▶ pull artifact-public.instana.io/backend/accountant:3.259.288-0
2023-10-11T15:56:33.909 INFO ▶ pull artifact-public.instana.io/backend/cashier-ingest:3.259.288-0
2023-10-11T15:57:11.643 INFO ▶ pull artifact-public.instana.io/backend/cashier-rollup:3.259.288-0
2023-10-11T15:57:52.316 INFO ▶ pull artifact-public.instana.io/backend/acceptor:3.259.288-0
2023-10-11T15:58:43.122 INFO ▶ pull artifact-public.instana.io/backend/eum-acceptor:3.259.288-0
2023-10-11T15:59:34.278 INFO ▶ pull artifact-public.instana.io/backend/eum-processor:3.259.288-0
2023-10-11T16:01:12.135 INFO ▶ pull artifact-public.instana.io/backend/eum-health-processor:3.259.288-0
2023-10-11T16:02:47.489 INFO ▶ pull artifact-public.instana.io/backend/appdata-health-processor:3.259.288-0
2023-10-11T16:04:48.642 INFO ▶ pull artifact-public.instana.io/backend/sli-evaluator:3.259.288-0
2023-10-11T16:06:34.550 INFO ▶ pull artifact-public.instana.io/backend/js-stack-trace-translator:3.259.288-0
2023-10-11T16:08:08.909 INFO ▶ pull artifact-public.instana.io/backend/appdata-writer:3.259.288-0
2023-10-11T16:09:38.800 INFO ▶ pull artifact-public.instana.io/backend/appdata-reader:3.259.288-0
2023-10-11T16:11:09.074 INFO ▶ pull artifact-public.instana.io/backend/appdata-health-aggregator:3.259.288-0
2023-10-11T16:12:50.799 INFO ▶ pull artifact-public.instana.io/backend/serverless-acceptor:3.259.288-0
2023-10-11T16:13:39.138 INFO ▶ pull artifact-public.instana.io/backend/ui-client:3.259.288-0
2023-10-11T16:14:27.011 INFO ▶ pull artifact-public.instana.io/backend/filler:3.259.288-0
2023-10-11T16:15:31.083 INFO ▶ pull artifact-public.instana.io/backend/processor:3.259.288-0
2023-10-11T16:17:14.576 INFO ▶ pull artifact-public.instana.io/backend/issue-tracker:3.259.288-0
2023-10-11T16:19:02.675 INFO ▶ pull artifact-public.instana.io/backend/appdata-processor:3.259.288-0
2023-10-11T16:20:38.045 INFO ▶ pull artifact-public.instana.io/backend/appdata-legacy-converter:3.259.288-0
2023-10-11T16:22:12.019 INFO ▶ pull artifact-public.instana.io/backend/ui-backend:3.259.288-0
2023-10-11T16:24:30.820 INFO ▶ create dirs if not exist
2023-10-11T16:24:30.826 INFO ▶ Init: single - Version: 259-0
2023-10-11T16:24:31.093 INFO ▶ generate /root/cert/dhparams.pem
2023-10-11T16:24:31.093 INFO ▶ Postgres password changed
2023-10-11T16:24:31.093 INFO ▶ postgres password changed!
2023-10-11T16:24:31.094 INFO ▶ Cassandra password changed
2023-10-11T16:24:31.094 INFO ▶ cassandra password changed!
2023-10-11T16:24:31.094 INFO ▶ kafka password changed!
2023-10-11T16:24:31.094 INFO ▶ clickhouse password changed!
2023-10-11T16:24:32.024 INFO ▶ Define instana-zookeeper with
[HEAP_OPTS=-Xms1658M -Xmx1658M JVM_OPTS=-Dcom.redhat.fips=false]
2023-10-11T16:24:32.024 INFO ▶ running instana-zookeeper
2023-10-11T16:25:04.627 INFO ▶ Define instana-cassandra with
[CASSANDRA_CLUSTER_NAME='onprem' MAX_HEAP_SIZE=4145M HEAP_NEWSIZE=800M CASSANDRA_LISTEN_ADDRESS=127.0.0.1 CASSANDRA_BROADCAST_ADDRESS=127.0.0.1 CASSANDRA_RPC_ADDRESS=0.0.0.0 JVM_OPTS=-Dcom.redhat.fips=false]
2023-10-11T16:25:04.627 INFO ▶ Define instana-kafka with
[KAFKA_CREATE_TOPICS=local_tenant01_unit01_raw_messages:1:1,local_tenant01_unit01_presence:1:1,local_tenant01_unit01_issues:1:1,local_tenant01_unit01_combined_metrics HEAP_OPTS=-Xmx2487M ZOOKEEPER_HOST=instana-zookeeper KAFKA_OPTS=-Dcom.redhat.fips=false -Djava.security.disableSystemPropertiesFile=false KAFKA_LISTENERS=LISTENER_INTERNAL://instana-kafka:29092,LISTENER_HOST://instana-kafka:29094 KAFKA_ADVERTISED_LISTENERS=LISTENER_INTERNAL://instana-kafka:29092,LISTENER_HOST://127.0.0.1:29094 KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=LISTENER_INTERNAL:SASL_PLAINTEXT,LISTENER_HOST:SASL_PLAINTEXT KAFKA_HEALTH_CHECK_SERVER=instana-kafka:29092 KAFKA_INTER_BROKER_LISTENER_NAME=LISTENER_INTERNAL]
2023-10-11T16:25:04.627 INFO ▶ Define instana-elastic with
[HEAP_OPTS=-Xms4145M -Xmx4145M HTTP_PORT=9200 TRANSPORT_PORT=9300 ES_JAVA_OPTS=-Dcom.redhat.fips=false NETWORK_PUBLISH_HOST=instana-elastic]
2023-10-11T16:25:04.627 INFO ▶ Define instana-postgres with
[MAX_CONNECTIONS=1000 LOG_FILE=true]
2023-10-11T16:25:04.627 INFO ▶ Define instana-clickhouse
2023-10-11T16:25:04.627 INFO ▶ running 5 data store containers
2023-10-11T16:25:04.627 INFO ▶ Running instana-clickhouse
2023-10-11T16:25:04.627 INFO ▶ Running instana-kafka
2023-10-11T16:25:04.627 INFO ▶ Running instana-elastic
2023-10-11T16:25:04.627 INFO ▶ Running instana-postgres
2023-10-11T16:25:04.628 INFO ▶ Running instana-cassandra
2023-10-11T16:26:15.039 INFO ▶ create component configs
2023-10-11T16:26:15.152 INFO ▶ Prepare migrations
2023-10-11T16:27:00.153 INFO ▶ init core data stores
2023-10-11T16:27:00.154 INFO ▶ ClickHouse: started flushing distributed tables
2023-10-11T16:27:00.165 INFO ▶ ElasticSearch (metadata_ng): Migrating template onprem_tenant01_unit01-action-instances-template
2023-10-11T16:27:00.180 INFO ▶ ClickHouse (127.0.0.1): flushed successfully
2023-10-11T16:27:00.188 INFO ▶ Postgres: database (butlerdb) not present.
2023-10-11T16:27:00.188 INFO ▶ Postgres: database (tenantdb) not present.
2023-10-11T16:27:00.189 INFO ▶ Postgres: database (sales) not present.
2023-10-11T16:27:00.207 INFO ▶ Postgres (sales): creating database
2023-10-11T16:27:00.209 INFO ▶ Postgres (butlerdb): creating database
2023-10-11T16:27:00.216 INFO ▶ Postgres (tenantdb): creating database
2023-10-11T16:27:00.359 INFO ▶ ElasticSearch (metadata_ng): putting template onprem_tenant01_unit01-action-instances-template
2023-10-11T16:27:00.741 INFO ▶ Cassandra (State): creating keyspace shared
2023-10-11T16:27:00.745 INFO ▶ Cassandra (Spans): creating keyspace shared
2023-10-11T16:27:00.745 INFO ▶ Cassandra (Profiles): creating keyspace shared
2023-10-11T16:27:01.448 INFO ▶ ElasticSearch (metadata_ng): Migrating Index onprem_tenant01_unit01_action_instances_202310
2023-10-11T16:27:01.461 WARN ▶ onprem_tenant01_unit01_action_instances_202310 doesn't exist, it's not supposed to be created by instanactl
2023-10-11T16:27:01.463 INFO ▶ ElasticSearch (metadata_ng): Migrating template onprem_tenant01_unit01-tag-sets-template
2023-10-11T16:27:01.470 INFO ▶ ElasticSearch (metadata_ng): putting template onprem_tenant01_unit01-tag-sets-template
2023-10-11T16:27:01.668 INFO ▶ Postgres (sales): initializing database
2023-10-11T16:27:01.703 INFO ▶ Postgres (tenantdb): initializing database
2023-10-11T16:27:01.703 INFO ▶ Postgres (butlerdb): initializing database
2023-10-11T16:27:02.136 INFO ▶ ElasticSearch (metadata_ng): Migrating template onprem_tenant01_unit01-percolator-template
2023-10-11T16:27:02.141 INFO ▶ ElasticSearch (metadata_ng): putting template onprem_tenant01_unit01-percolator-template
2023-10-11T16:27:02.829 INFO ▶ Cassandra (State): initialized keyspace shared
2023-10-11T16:27:02.830 INFO ▶ Cassandra (State): validated migration files
2023-10-11T16:27:02.830 INFO ▶ Cassandra (Profiles): initialized keyspace shared
2023-10-11T16:27:02.832 INFO ▶ Cassandra (Profiles): validated migration files
2023-10-11T16:27:02.863 INFO ▶ Cassandra (Spans): initialized keyspace shared
2023-10-11T16:27:02.864 INFO ▶ Cassandra (Spans): validated migration files
2023-10-11T16:27:03.677 ERRO ▶ Kafka (Ingress): error creating topic usage_reporting
2023-10-11T16:27:04.890 INFO ▶ Postgres (sales): running migrations
2023-10-11T16:27:04.890 INFO ▶ Postgres (sales): validated migration files
2023-10-11T16:27:05.205 INFO ▶ Clickhouse: system.merges count=0
2023-10-11T16:27:05.239 INFO ▶ Clickhouse (application): 127.0.0.1, shard=1: created database (if non-existing) shared
2023-10-11T16:27:05.268 INFO ▶ ClickHouse: validated migration files
2023-10-11T16:27:05.373 INFO ▶ Clickhouse connection clickhouse://127.0.0.1:9000?database=default&username=clickhouse&password=********&x-migrations-table=shared_shared_create
2023-10-11T16:27:05.500 INFO ▶ Postgres (sales): migrations successful
2023-10-11T16:27:08.220 INFO ▶ Cassandra (State): migrating keyspace shared
2023-10-11T16:27:11.764 INFO ▶ Postgres (butlerdb): running migrations
2023-10-11T16:27:11.764 INFO ▶ Postgres (butlerdb): validated migration files
2023-10-11T16:27:12.706 INFO ▶ Cassandra (Spans): migrating keyspace shared
2023-10-11T16:27:13.610 INFO ▶ Postgres (butlerdb): migrations successful
2023-10-11T16:27:16.781 INFO ▶ Cassandra (Profiles): migrating keyspace shared
2023-10-11T16:27:19.684 INFO ▶ Postgres (tenantdb): running migrations
2023-10-11T16:27:19.684 INFO ▶ Postgres (tenantdb): validated migration files
2023-10-11T16:27:27.624 INFO ▶ Cassandra (Profiles): migrated keyspace shared
2023-10-11T16:27:31.975 INFO ▶ Postgres (tenantdb): migrations successful
2023-10-11T16:27:38.636 INFO ▶ Clickhouse (application): 127.0.0.1, shard=1, created tables
2023-10-11T16:27:38.675 INFO ▶ ClickHouse: validated migration files
2023-10-11T16:27:38.788 INFO ▶ Clickhouse connection clickhouse://127.0.0.1:9000?database=default&username=clickhouse&password=********&x-migrations-table=shared_shared_alter
2023-10-11T16:28:04.865 INFO ▶ Cassandra (Spans): migrated keyspace shared
2023-10-11T16:28:12.602 INFO ▶ Cassandra (State): migrated keyspace shared
2023-10-11T16:28:45.876 INFO ▶ Clickhouse (application): 127.0.0.1, shard=1, local tables migrated
2023-10-11T16:28:45.956 INFO ▶ ClickHouse: validated migration files
2023-10-11T16:28:46.232 INFO ▶ Clickhouse connection clickhouse://127.0.0.1:9000?database=default&username=clickhouse&password=********&x-migrations-table=shared_shared_alterdistributed
2023-10-11T16:29:02.752 INFO ▶ Clickhouse (application): 127.0.0.1, shard=1, distributed tables migrated
2023-10-11T16:29:02.752 ERRO ▶ Fail: Migrate data stores
2023-10-11T16:29:08.963 INFO ▶ stopping container 13be1923f8/instana-postgres
2023-10-11T16:29:08.963 INFO ▶ stopping container b0b5a880b8/instana-cassandra
2023-10-11T16:29:08.963 INFO ▶ stopping container 07b2d81c31/instana-clickhouse
2023-10-11T16:29:08.963 INFO ▶ stopping container 063e959f77/instana-elastic
2023-10-11T16:29:08.963 INFO ▶ stopping container e427bc7ec6/instana-kafka
2023-10-11T16:30:09.955 INFO ▶ stopping container e1725de8b6/instana-zookeeper
2023-10-11T16:30:10.888 ERRO ▶ failed init for (kafka-ingress): kafka server: Request exceeded the user-specified time limit in the request
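Since the topic-creation request times out, I suspect the broker never became fully ready (it depends on the ZooKeeper container started just before it). A sketch of the checks I can run, assuming the container names shown in the log above:

# List all Instana datastore containers together with their exit status.
docker ps -a --filter "name=instana-"

# Kafka registers itself in ZooKeeper; connectivity problems there often
# surface as request timeouts during topic creation.
docker logs --tail 100 instana-zookeeper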
I also ran the `instana repair` command and got the following output:
2023-10-11T16:31:49.615 INFO ▶ Docker version: 24.0.6
2023-10-11T16:31:49.615 INFO ▶ Docker OS: Red Hat Enterprise Linux 8.8 (Ootpa)
2023-10-11T16:31:49.627 INFO ▶ Docker CPU's: 16
2023-10-11T16:31:49.627 INFO ▶ Docker total mem 65 GB
2023-10-11T16:31:49.628 WARN ▶ We recommend changing clocksource to tsc via 'echo tsc >/sys/devices/system/clocksource/clocksource0/current_clocksource'
2023-10-11T16:31:49.999 INFO ▶ starting databases
2023-10-11T16:31:50.001 INFO ▶ Define instana-zookeeper with
[HEAP_OPTS=-Xms1658M -Xmx1658M JVM_OPTS=-Dcom.redhat.fips=false]
2023-10-11T16:31:50.001 INFO ▶ running instana-zookeeper
2023-10-11T16:32:21.960 INFO ▶ Define instana-cassandra with
[CASSANDRA_CLUSTER_NAME='onprem' MAX_HEAP_SIZE=4145M HEAP_NEWSIZE=800M CASSANDRA_LISTEN_ADDRESS=127.0.0.1 CASSANDRA_BROADCAST_ADDRESS=127.0.0.1 CASSANDRA_RPC_ADDRESS=0.0.0.0 JVM_OPTS=-Dcom.redhat.fips=false]
2023-10-11T16:32:21.960 INFO ▶ Define instana-kafka with
[KAFKA_CREATE_TOPICS=local_tenant01_unit01_raw_messages:1:1,local_tenant01_unit01_presence:1:1,local_tenant01_unit01_issues:1:1,local_tenant01_unit01_combined_metrics HEAP_OPTS=-Xmx2487M ZOOKEEPER_HOST=instana-zookeeper KAFKA_OPTS=-Dcom.redhat.fips=false -Djava.security.disableSystemPropertiesFile=false KAFKA_LISTENERS=LISTENER_INTERNAL://instana-kafka:29092,LISTENER_HOST://instana-kafka:29094 KAFKA_ADVERTISED_LISTENERS=LISTENER_INTERNAL://instana-kafka:29092,LISTENER_HOST://127.0.0.1:29094 KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=LISTENER_INTERNAL:SASL_PLAINTEXT,LISTENER_HOST:SASL_PLAINTEXT KAFKA_HEALTH_CHECK_SERVER=instana-kafka:29092 KAFKA_INTER_BROKER_LISTENER_NAME=LISTENER_INTERNAL]
2023-10-11T16:32:21.960 INFO ▶ Define instana-elastic with
[HEAP_OPTS=-Xms4145M -Xmx4145M HTTP_PORT=9200 TRANSPORT_PORT=9300 ES_JAVA_OPTS=-Dcom.redhat.fips=false NETWORK_PUBLISH_HOST=instana-elastic]
2023-10-11T16:32:21.960 INFO ▶ Define instana-postgres with
[MAX_CONNECTIONS=1000 LOG_FILE=true]
2023-10-11T16:32:21.960 INFO ▶ Define instana-clickhouse
2023-10-11T16:32:21.960 INFO ▶ running 5 data store containers
2023-10-11T16:32:21.960 INFO ▶ Running instana-clickhouse
2023-10-11T16:32:21.961 INFO ▶ Running instana-kafka
2023-10-11T16:32:21.961 INFO ▶ Running instana-cassandra
2023-10-11T16:32:21.961 INFO ▶ Running instana-postgres
2023-10-11T16:32:21.961 INFO ▶ Running instana-elastic
2023-10-11T16:32:57.515 INFO ▶ repairing data stores, this can take a very long time.
2023-10-11T16:32:57.515 INFO ▶ ClickHouse: started flushing distributed tables
2023-10-11T16:32:57.524 WARN ▶ ClickHouse (127.0.0.1): flushing table all_aggregated_calls_1h
2023-10-11T16:32:57.525 WARN ▶ ClickHouse (127.0.0.1): flushing table all_aggregated_calls_1m
2023-10-11T16:32:57.526 WARN ▶ ClickHouse (127.0.0.1): flushing table all_applications_1m
2023-10-11T16:32:57.527 WARN ▶ ClickHouse (127.0.0.1): flushing table all_calls
2023-10-11T16:32:57.528 WARN ▶ ClickHouse (127.0.0.1): flushing table all_calls_by_eum_correlation_id
2023-10-11T16:32:57.529 WARN ▶ ClickHouse (127.0.0.1): flushing table all_calls_by_trace
2023-10-11T16:32:57.530 WARN ▶ ClickHouse (127.0.0.1): flushing table all_calls_by_trace_v2
2023-10-11T16:32:57.531 WARN ▶ ClickHouse (127.0.0.1): flushing table all_calls_v2
2023-10-11T16:32:57.533 WARN ▶ ClickHouse (127.0.0.1): flushing table all_calls_v2_fast_row_count
2023-10-11T16:32:57.534 WARN ▶ ClickHouse (127.0.0.1): flushing table all_calls_v3
2023-10-11T16:32:57.535 WARN ▶ ClickHouse (127.0.0.1): flushing table all_calls_v3_aggregated_1m
2023-10-11T16:32:57.536 WARN ▶ ClickHouse (127.0.0.1): flushing table all_calls_v3_aggregated_30m
2023-10-11T16:32:57.537 WARN ▶ ClickHouse (127.0.0.1): flushing table all_calls_v3_aggregated_4h
2023-10-11T16:32:57.538 WARN ▶ ClickHouse (127.0.0.1): flushing table all_calls_v3_aggregated_6h
2023-10-11T16:32:57.539 WARN ▶ ClickHouse (127.0.0.1): flushing table all_calls_v3_fast_row_count_table
2023-10-11T16:32:57.540 WARN ▶ ClickHouse (127.0.0.1): flushing table all_calls_v3_precomputed_filters_1m
2023-10-11T16:32:57.541 WARN ▶ ClickHouse (127.0.0.1): flushing table all_calls_v3_precomputed_filters_v2_1m
2023-10-11T16:32:57.542 WARN ▶ ClickHouse (127.0.0.1): flushing table all_chains
2023-10-11T16:32:57.544 WARN ▶ ClickHouse (127.0.0.1): flushing table all_data_migrations_failed
2023-10-11T16:32:57.545 WARN ▶ ClickHouse (127.0.0.1): flushing table all_data_migrations_finished
2023-10-11T16:32:57.546 WARN ▶ ClickHouse (127.0.0.1): flushing table all_data_migrations_started
2023-10-11T16:32:57.547 WARN ▶ ClickHouse (127.0.0.1): flushing table all_logs
2023-10-11T16:32:57.548 WARN ▶ ClickHouse (127.0.0.1): flushing table all_logs_by_trace
2023-10-11T16:32:57.549 WARN ▶ ClickHouse (127.0.0.1): flushing table all_logs_v2
2023-10-11T16:32:57.550 WARN ▶ ClickHouse (127.0.0.1): flushing table all_longterm_calls
2023-10-11T16:32:57.551 WARN ▶ ClickHouse (127.0.0.1): flushing table all_longterm_logs
2023-10-11T16:32:57.552 WARN ▶ ClickHouse (127.0.0.1): flushing table all_longterm_mobile_app_monitoring_beacons
2023-10-11T16:32:57.553 WARN ▶ ClickHouse (127.0.0.1): flushing table all_longterm_website_monitoring_beacons
2023-10-11T16:32:57.554 WARN ▶ ClickHouse (127.0.0.1): flushing table all_migration_status
2023-10-11T16:32:57.555 WARN ▶ ClickHouse (127.0.0.1): flushing table all_mobile_app_monitoring_beacons
2023-10-11T16:32:57.556 WARN ▶ ClickHouse (127.0.0.1): flushing table all_queryable_tags
2023-10-11T16:32:57.557 WARN ▶ ClickHouse (127.0.0.1): flushing table all_raw_profile_infos
2023-10-11T16:32:57.559 WARN ▶ ClickHouse (127.0.0.1): flushing table all_website_monitoring_beacons
2023-10-11T16:32:57.560 WARN ▶ ClickHouse (127.0.0.1): flushing table all_website_monitoring_beacons_aggregated_1h
2023-10-11T16:32:57.561 WARN ▶ ClickHouse (127.0.0.1): flushing table all_website_monitoring_beacons_aggregated_1m
2023-10-11T16:32:57.562 INFO ▶ ClickHouse (127.0.0.1): flushed successfully
2023-10-11T16:33:02.582 INFO ▶ Clickhouse: system.merges count=0
2023-10-11T16:33:02.583 INFO ▶ Checking table: [default.shared_shared_create]
2023-10-11T16:33:02.592 INFO ▶ ClickHouse (application): 127.0.0.1, shard=1, version=87 dirty=0
2023-10-11T16:33:02.592 INFO ▶ Checking table: [default.shared_shared_alterdistributed]
2023-10-11T16:33:02.599 INFO ▶ ClickHouse (application): 127.0.0.1, shard=1, version=555 dirty=0
2023-10-11T16:33:02.600 INFO ▶ Checking table: [default.shared_shared_alter]
2023-10-11T16:33:02.610 INFO ▶ ClickHouse (application): 127.0.0.1, shard=1, version=555 dirty=0
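While `instana repair` has the datastores up, I could also verify that the host-facing Kafka listener is actually reachable. A quick sketch using bash's built-in /dev/tcp redirection; port 29094 is the LISTENER_HOST port from KAFKA_ADVERTISED_LISTENERS in the log above (I am not sure the containers stay up after repair finishes, so this would need to run while they are):

# Probe the advertised host listener; prints "open" if the TCP connect succeeds.
timeout 5 bash -c '</dev/tcp/127.0.0.1/29094' && echo "kafka listener open" || echo "kafka listener closed"

# Note: the internal listener instana-kafka:29092 only resolves inside the docker network.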
My problem is that I cannot access Instana. How can I resolve this?