Pod using the VerneMQ Helm chart fails to start

I am installing VerneMQ on my Kubernetes cluster using Helm.

The problem is that it fails to start, even though I have accepted the EULA.
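(For reference, with the official VerneMQ Docker image the EULA is accepted by setting the DOCKER_VERNEMQ_ACCEPT_EULA environment variable to "yes". A minimal sketch of passing it through the chart values, assuming this chart version exposes an additionalEnv list for extra container environment variables:)

# values.yaml sketch; the additionalEnv key is an assumption and may differ by chart version
additionalEnv:
  - name: DOCKER_VERNEMQ_ACCEPT_EULA
    value: "yes"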

Here are the logs:

02:31:56.552 [error] CRASH REPORT Process <0.195.0> with 0 neighbours exited with reason: {{{badmatch,{error,{vmq_generic_msg_store,{bad_return,{{vmq_generic_msg_store_app,start,[normal,[]]},{'EXIT',{{badmatch,{error,{{undef,[{eleveldb,validate_options,[open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{data,[{dir,"./data"}]},{data_root,"./data/leveldb"},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{store_dir,"./data..."},...]],...},...]},...}}},...}}}}}}},...},...} in application_master:init/4 line 138
02:31:56.552 [info] Application vmq_server exited with reason: {{{badmatch,{error,{vmq_generic_msg_store,{bad_return,{{vmq_generic_msg_store_app,start,[normal,[]]},{'EXIT',{{badmatch,{error,{{undef,[{eleveldb,validate_options,[open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{data,[{dir,"./data"}]},{data_root,"./data/leveldb"},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{store_dir,"./data..."},...]],...},...]},...}}},...}}}}}}},...},...}
Kernel pid terminated (application_controller) ({application_start_failure,vmq_server,{bad_return,{{vmq_server_app,start,[normal,[]]},{'EXIT',{{{badmatch,{error,{vmq_generic_msg_store,{bad_return,{{vm

{"Kernel pid terminated",application_controller,"{application_start_failure,vmq_server,{bad_return,{{vmq_server_app,start,[normal,[]]},{'EXIT',{{{badmatch,{error,{vmq_generic_msg_store,{bad_return,{{vmq_generic_msg_store_app,start,[normal,[]]},{'EXIT',{{badmatch,{error,{{undef,[{eleveldb,validate_options,[open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{data,[{dir,\"./data\"}]},{data_root,\"./data/leveldb\"},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{store_dir,\"./data/msgstore\"},{sync,false},{tiered_slow_level,0},{total_leveldb_mem_percent,70},{use_bloomfilter,true},{verify_checksums,true},{verify_compaction,true},{write_buffer_size,41777529},{write_buffer_size_max,62914560},{write_buffer_size_min,31457280}]],[]},{vmq_storage_engine_leveldb,init_state,2,[{file,\"/opt/vernemq/apps/vmq_generic_msg_store/src/engines/vmq_storage_engine_leveldb.erl\"},{line,99}]},{vmq_storage_engine_leveldb,open,2,[{file,\"/opt/vernemq/apps/vmq_generic_msg_store/src/engines/vmq_storage_engine_leveldb.erl\"},{line,39}]},{vmq_generic_msg_store,init,1,[{file,\"/opt/vernemq/apps/vmq_generic_msg_store/src/vmq_generic_msg_store.erl\"},{line,181}]},{gen_server,init_it,2,[{file,\"gen_server.erl\"},{line,374}]},{gen_server,init_it,6,[{file,\"gen_server.erl\"},{line,342}]},{proc_lib,init_p_do_apply,3,[{file,\"proc_lib.erl\"},{line,249}]}]},{child,undefined,{vmq_generic_msg_store_bucket,1},{vmq_generic_msg_store,start_link,[1]},permanent,5000,worker,[vmq_generic_msg_store]}}}},[{vmq_generic_msg_store_sup,'-start_link/0-lc$^0/1-0-',2,[{file,\"/opt/vernemq/apps/vmq_generic_msg_store/src/vmq_generic_msg_store_sup.erl\"},{line,40}]},{vmq_generic_msg_store_sup,start_link,0,[{file,\"/opt/vernemq/apps/vmq_generic_msg_store/src/vmq_generic_msg_store_sup.erl\"},{line,42}]},{application_master,start_it_old,4,[{file,\"application_master.erl\"},{line,277}]}]}}}}}}},[{vmq_plugin_mgr,start_plugin,1,[{file,\"/opt/vernemq/apps/vmq_plugin/src/vmq_plugin_mgr.erl\"},{line,524}]},{vmq_plugin_mgr,start_plugins,1,[{file,\"/opt/vernemq/apps/vmq_plugin/src/vmq_plugin_mgr.erl\"},{line,503}]},{vmq_plugin_mgr,check_updated_plugins,2,[{file,\"/opt/vernemq/apps/vmq_plugin/src/vmq_plugin_mgr.erl\"},{line,444}]},{vmq_plugin_mgr,handle_plugin_call,2,[{file,\"/opt/vernemq/apps/vmq_plugin/src/vmq_plugin_mgr.erl\"},{line,246}]},{gen_server,try_handle_call,4,[{file,\"gen_server.erl\"},{line,661}]},{gen_server,handle_msg,6,[{file,\"gen_server.erl\"},{line,690}]},{proc_lib,init_p_do_apply,3,[{file,\"proc_lib.erl\"},{line,249}]}]},{gen_server,call,[vmq_plugin_mgr,{enable_system_plugin,vmq_generic_msg_store,[internal]},infinity]}}}}}}"}
Crash dump is being written to: /erl_crash.dump...

So where is my problem coming from? I simply installed it with helm install vernemq vernemq/vernemq.
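(Logs like the ones above can be pulled from a crash-looping pod with standard kubectl commands; the pod name below assumes the chart's usual <release>-vernemq-0 naming:)

kubectl get pods --namespace default
kubectl logs --namespace default vernemq-vernemq-0
kubectl logs --namespace default vernemq-vernemq-0 --previous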

Answer 1

I reproduced your issue and fixed it by using the latest Docker image. When the chart is installed, it uses 1.10.2-alpine.
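You can confirm which tag the chart defaults to before installing by inspecting its values (Helm 3 syntax; with Helm 2 the equivalent is helm inspect values):

helm show values vernemq/vernemq | grep -A 2 'image:'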

You can change this by fetching the Helm chart:

helm fetch --untar vernemq/vernemq

Then change into the vernemq directory and edit values.yaml:

image:
  repository: vernemq/vernemq
  tag: latest

Save the changes and install the chart, for example with:

helm install vernemq .
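Alternatively, if you prefer not to fetch and edit the chart, the same image override can be passed on the command line:

helm install vernemq vernemq/vernemq --set image.tag=latest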

After the chart is installed, you can check the VerneMQ cluster state with:

kubectl exec --namespace default vernemq-vernemq-0 -- /vernemq/bin/vmq-admin cluster show

The output should look similar to this:

+----------------------------------------------------------------+-------+
|                              Node                              |Running|
+----------------------------------------------------------------+-------+
|VerneMQ@...                                                     | true  |
+----------------------------------------------------------------+-------+
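It is also worth confirming that the pod itself reports Running and Ready:

kubectl get pods --namespace default
kubectl describe pod --namespace default vernemq-vernemq-0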
