This is the output when I try to run logstash. Even with Redis and ElasticSearch disabled, it still reports that the address is already in use. Any suggestions? As far as I can tell, this was fixed in 1.1.8, but I still seem to be hitting it: https://logstash.jira.com/browse/LOGSTASH-831
root@logs:~# java -jar logstash-1.1.13-flatjar.jar web --backend elasticsearch://127.0.0.1/
parse
logfile
thread
remaining
PORT SETTINGS 127.0.0.1:9300
INFO 10:52:13,532 [Styx and Stone] {0.20.6}[26710]: initializing ...
DEBUG 10:52:13,544 [Styx and Stone] using home [/root], config [/root/config], data [[/root/data]], logs [/root/logs], work [/root/work], plugins [/root/plugins]
INFO 10:52:13,557 [Styx and Stone] loaded [], sites []
DEBUG 10:52:13,581 using [UnsafeChunkDecoder] decoder
DEBUG 10:52:18,206 [Styx and Stone] creating thread_pool [generic], type [cached], keep_alive [30s]
DEBUG 10:52:18,226 [Styx and Stone] creating thread_pool [index], type [cached], keep_alive [5m]
DEBUG 10:52:18,226 [Styx and Stone] creating thread_pool [bulk], type [cached], keep_alive [5m]
DEBUG 10:52:18,226 [Styx and Stone] creating thread_pool [get], type [cached], keep_alive [5m]
DEBUG 10:52:18,226 [Styx and Stone] creating thread_pool [search], type [cached], keep_alive [5m]
DEBUG 10:52:18,227 [Styx and Stone] creating thread_pool [percolate], type [cached], keep_alive [5m]
DEBUG 10:52:18,227 [Styx and Stone] creating thread_pool [management], type [scaling], min [1], size [5], keep_alive [5m]
DEBUG 10:52:18,237 [Styx and Stone] creating thread_pool [flush], type [scaling], min [1], size [10], keep_alive [5m]
DEBUG 10:52:18,237 [Styx and Stone] creating thread_pool [merge], type [scaling], min [1], size [20], keep_alive [5m]
DEBUG 10:52:18,237 [Styx and Stone] creating thread_pool [refresh], type [scaling], min [1], size [10], keep_alive [5m]
DEBUG 10:52:18,238 [Styx and Stone] creating thread_pool [cache], type [scaling], min [1], size [4], keep_alive [5m]
DEBUG 10:52:18,238 [Styx and Stone] creating thread_pool [snapshot], type [scaling], min [1], size [5], keep_alive [5m]
DEBUG 10:52:18,258 [Styx and Stone] using worker_count[2], port[9300-9400], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/6/1], receive_predictor[512kb->512kb]
DEBUG 10:52:18,266 [Styx and Stone] using initial hosts [127.0.0.1:9300], with concurrent_connects [10]
DEBUG 10:52:18,290 [Styx and Stone] using ping.timeout [3s], master_election.filter_client [true], master_election.filter_data [false]
DEBUG 10:52:18,290 [Styx and Stone] using minimum_master_nodes [-1]
DEBUG 10:52:18,291 [Styx and Stone] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
DEBUG 10:52:18,294 [Styx and Stone] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
DEBUG 10:52:18,315 [Styx and Stone] enabled [true], last_gc_enabled [false], interval [1s], gc_threshold [{default=GcThreshold{name='default', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}, ParNew=GcThreshold{name='ParNew', warnThreshold=1000, infoThreshold=700, debugThreshold=400}, ConcurrentMarkSweep=GcThreshold{name='ConcurrentMarkSweep', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}}]
DEBUG 10:52:18,317 [Styx and Stone] Using probe [org.elasticsearch.monitor.os.JmxOsProbe@e39275b] with refresh_interval [1s]
DEBUG 10:52:18,317 [Styx and Stone] Using probe [org.elasticsearch.monitor.process.JmxProcessProbe@41afc692] with refresh_interval [1s]
DEBUG 10:52:18,320 [Styx and Stone] Using refresh_interval [1s]
DEBUG 10:52:18,321 [Styx and Stone] Using probe [org.elasticsearch.monitor.network.JmxNetworkProbe@3cef237e] with refresh_interval [5s]
DEBUG 10:52:18,323 [Styx and Stone] net_info
host [logs.lbox.com]
eth0 display_name [eth0]
address [/fe80:0:0:0:20c:29ff:fee5:aa11%2] [/10.0.1.18]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
lo display_name [lo]
address [/0:0:0:0:0:0:0:1%1] [/127.0.0.1]
mtu [16436] multicast [false] ptp [false] loopback [true] up [true] virtual [false]
DEBUG 10:52:18,324 [Styx and Stone] Using probe [org.elasticsearch.monitor.fs.JmxFsProbe@33f0e611] with refresh_interval [1s]
DEBUG 10:52:18,560 [Styx and Stone] using indices.store.throttle.type [none], with index.store.throttle.max_bytes_per_sec [0b]
DEBUG 10:52:18,566 [Styx and Stone] using bytebuffer cache with small_buffer_size [1kb], large_buffer_size [1mb], small_cache_size [10mb], large_cache_size [500mb], direct [true]
DEBUG 10:52:18,579 [Styx and Stone] using script cache with max_size [500], expire [null]
DEBUG 10:52:18,602 [Styx and Stone] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
DEBUG 10:52:18,603 [Styx and Stone] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
DEBUG 10:52:18,603 [Styx and Stone] using [cluster_concurrent_rebalance] with [2]
DEBUG 10:52:18,606 [Styx and Stone] using initial_shards [quorum], list_timeout [30s]
DEBUG 10:52:18,689 [Styx and Stone] using max_size_per_sec[0b], concurrent_streams [3], file_chunk_size [512kb], translog_size [512kb], translog_ops [1000], and compress [true]
DEBUG 10:52:18,757 [Styx and Stone] using index_buffer_size [48.5mb], with min_shard_index_buffer_size [4mb], max_shard_index_buffer_size [512mb], shard_inactive_time [30m]
DEBUG 10:52:18,758 [Styx and Stone] using [node] weighted filter cache with size [20%], actual_size [97mb], expire [null], clean_interval [1m]
DEBUG 10:52:18,775 [Styx and Stone] using gateway.local.auto_import_dangled [YES], with gateway.local.dangling_timeout [2h]
DEBUG 10:52:18,781 [Styx and Stone] using enabled [false], host [null], port [9700-9800], bulk_actions [1000], bulk_size [5mb], flush_interval [5s], concurrent_requests [4]
INFO 10:52:18,782 [Styx and Stone] {0.20.6}[26710]: initialized
INFO 10:52:18,782 [Styx and Stone] {0.20.6}[26710]: starting ...
DEBUG 10:52:18,823 Using select timeout of 500
DEBUG 10:52:18,824 Epoll-bug workaround enabled = false
DEBUG 10:52:19,336 [Styx and Stone] Bound to address [/0:0:0:0:0:0:0:0:9302]
INFO 10:52:19,338 [Styx and Stone] bound_address {inet[/0:0:0:0:0:0:0:0:9302]}, publish_address {inet[/10.0.1.18:9302]}
DEBUG 10:52:19,379 [Styx and Stone] connected to node [[#zen_unicast_1#][inet[/127.0.0.1:9300]]]
DEBUG 10:52:22,363 [Styx and Stone] disconnected from [[#zen_unicast_1#][inet[/127.0.0.1:9300]]]
DEBUG 10:52:22,364 [Styx and Stone] filtered ping responses: (filter_client[true], filter_data[false])
--> target [[Her][V8QRcyhkSRex16_Lq8r5kA][inet[/10.0.1.18:9300]]], master [[Her][V8QRcyhkSRex16_Lq8r5kA][inet[/10.0.1.18:9300]]]
DEBUG 10:52:22,371 [Styx and Stone] connected to node [[Her][V8QRcyhkSRex16_Lq8r5kA][inet[/10.0.1.18:9300]]]
DEBUG 10:52:22,388 [Styx and Stone] [master] starting fault detection against master [[Her][V8QRcyhkSRex16_Lq8r5kA][inet[/10.0.1.18:9300]]], reason [initial_join]
DEBUG 10:52:22,392 [Styx and Stone] processing [zen-disco-receive(from master [[Her][V8QRcyhkSRex16_Lq8r5kA][inet[/10.0.1.18:9300]]])]: execute
DEBUG 10:52:22,393 [Styx and Stone] got first state from fresh master [V8QRcyhkSRex16_Lq8r5kA]
DEBUG 10:52:22,393 [Styx and Stone] cluster state updated, version [7], source [zen-disco-receive(from master [[Her][V8QRcyhkSRex16_Lq8r5kA][inet[/10.0.1.18:9300]]])]
INFO 10:52:22,394 [Styx and Stone] detected_master [Her][V8QRcyhkSRex16_Lq8r5kA][inet[/10.0.1.18:9300]], added {[Her][V8QRcyhkSRex16_Lq8r5kA][inet[/10.0.1.18:9300]],}, reason: zen-disco-receive(from master [[Her][V8QRcyhkSRex16_Lq8r5kA][inet[/10.0.1.18:9300]]])
INFO 10:52:22,397 [Styx and Stone] elasticsearch/25UYvHAGTNKX3AezvVWEzA
INFO 10:52:22,398 [Styx and Stone] {0.20.6}[26710]: started
DEBUG 10:52:22,404 [Styx and Stone] processing [zen-disco-receive(from master [[Her][V8QRcyhkSRex16_Lq8r5kA][inet[/10.0.1.18:9300]]])]: done applying updated cluster_state
Exception in thread "LogStash::Runner" org.jruby.exceptions.RaiseException: (Errno::EADDRINUSE) bind - Address already in use
at org.jruby.ext.socket.RubyTCPServer.initialize(org/jruby/ext/socket/RubyTCPServer.java:118)
at org.jruby.RubyIO.new(org/jruby/RubyIO.java:879)
at RUBY.initialize(jar:file:/root/logstash-1.1.13-flatjar.jar!/ftw/server.rb:50)
at org.jruby.RubyArray.each(org/jruby/RubyArray.java:1613)
at RUBY.initialize(jar:file:/root/logstash-1.1.13-flatjar.jar!/ftw/server.rb:46)
at org.jruby.RubyArray.each(org/jruby/RubyArray.java:1613)
at RUBY.initialize(jar:file:/root/logstash-1.1.13-flatjar.jar!/ftw/server.rb:34)
at RUBY.run(jar:file:/root/logstash-1.1.13-flatjar.jar!/rack/handler/ftw.rb:94)
at RUBY.run(jar:file:/root/logstash-1.1.13-flatjar.jar!/rack/handler/ftw.rb:66)
at RUBY.run(file:/root/logstash-1.1.13-flatjar.jar!/logstash/web/runner.rb:68)
Answer 1
I ran into a similar problem myself this evening. It turned out I had linked extra config files into the conf.d folder while investigating another issue and then forgot about them. When the conf.d/ folder was re-read on restart, Logstash tried to bind the same port twice, which caused the EADDRINUSE.
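A quick way to spot leftover or linked copies is simply to list the config directory and look for duplicate input/output declarations. This is only a sketch: the /etc/logstash/conf.d path is an assumption, adjust it to wherever your configs actually live.
ls -la /etc/logstash/conf.d/          # leftover or symlinked copies show up here
grep -rn "port" /etc/logstash/conf.d/ # any port declared in more than one file will bind twice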
Answer 2
I got the "Address already in use" error the second time I installed Logstash. It appeared because I had somehow started multiple Logstash instances. Manually stopping the Logstash processes and then starting Logstash again solved the problem for me.
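If it isn't obvious whether more than one instance is running, something along these lines should show it. The port 9292 is an assumption (the old Logstash web default); substitute whichever port your setup binds.
ps aux | grep -i [l]ogstash            # list any running Logstash JVMs
sudo netstat -tlnp | grep 9292         # see which PID currently holds the contested port
sudo kill <pid>                        # stop the stray instance, then start Logstash again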
Answer 3
Try stopping the logstash-web service first.
On Ubuntu: sudo service logstash-web stop
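To confirm the port is actually free afterwards (again assuming the default web port 9292), something like this works:
sudo lsof -i :9292    # should print nothing once logstash-web has stopped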
Answer 4
I had the same problem, but for a different reason. I created the logstash conf file with emacs, and when my ssh connection timed out it also left behind a backup file. So I ended up with two identical .conf files:
Original: 10-logs.conf
Emacs backup: #10-logs.conf#
Logstash tried to load both .conf files and bind to the same port twice, which caused the EADDRINUSE error.
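A rough way to find and clean up such editor backup files, assuming the configs live in /etc/logstash/conf.d (adjust the path to your setup):
ls /etc/logstash/conf.d/ | grep -E '^#|~$'     # emacs backups are wrapped in '#' or end in '~'
rm '/etc/logstash/conf.d/#10-logs.conf#'       # quote the name, otherwise '#' starts a shell comment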