I have a problem with Elasticsearch: from time to time it ends up running GC back to back, and the GC cannot free anything because the heap, set to 14 GB (both minimum and maximum), is reported as completely allocated:
(...)
[2014-09-18 13:43:45,984][INFO ][monitor.jvm ] [staging02.onldev] [gc][old][1128185][65590] duration [7.1s], collections
[1]/[7.2s], total [7.1s]/[9.3h], memory [13.9gb]->[13.9gb]/[13.9gb], all_pools {[young] [532.5mb]->[532.5mb]/[532.5mb]}{[survivor] [
49.9mb]->[49.6mb]/[66.5mb]}{[old] [13.3gb]->[13.3gb]/[13.3gb]}
[2014-09-18 13:43:53,307][INFO ][monitor.jvm ] [staging02.onldev] [gc][old][1128186][65591] duration [7.2s], collections
[1]/[7.3s], total [7.2s]/[9.3h], memory [13.9gb]->[13.9gb]/[13.9gb], all_pools {[young] [532.5mb]->[532.5mb]/[532.5mb]}{[survivor] [
49.6mb]->[49.7mb]/[66.5mb]}{[old] [13.3gb]->[13.3gb]/[13.3gb]}
[2014-09-18 13:43:58,647][INFO ][monitor.jvm ] [staging02.onldev] [gc][old][1128187][65592] duration [5.2s], collections
[1]/[5.3s], total [5.2s]/[9.3h], memory [13.9gb]->[13.9gb]/[13.9gb], all_pools {[young] [532.5mb]->[532.5mb]/[532.5mb]}{[survivor] [
49.7mb]->[49.8mb]/[66.5mb]}{[old] [13.3gb]->[13.3gb]/[13.3gb]}
At that point ES no longer responds and we restart it.
When I watch the ES heap while our application workers use ES, heap usage grows, GC runs every few minutes and the heap is almost, but not completely, emptied again. Slowly, over many days, there seems to be no free memory left in the heap. It looks like a memory leak, but how could it come from our Ruby code (which uses the Tire gem) when we are talking about the ES heap? Can some usage pattern of ES cause ES itself to leak memory?
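For reference, this is roughly how the heap can be watched from the outside while the workers run; just a sketch using the nodes stats API, assuming the node listens on localhost:9200 (adjust host/port to your setup):

# Poll the JVM heap figures once a minute (hypothetical host/port).
# heap_used_in_bytes / heap_committed_in_bytes sit under jvm.mem in the response.
while true; do
  date
  curl -s 'http://localhost:9200/_nodes/stats?pretty' | grep -E 'heap_used_in_bytes|heap_committed_in_bytes'
  sleep 60
done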
Basically, ES runs on a dedicated server with 16 GB of RAM, no replicas, 5 indices with 1 shard each. It runs on java-1.7.0-openjdk-1.7.0.65-2.5.1.2.el6_5.x86_64, uses mlockall, and both the minimum and maximum heap are set to 14 GB. Nothing else runs on that server. We are on Elasticsearch 0.90.x because the dev team cannot afford to replace the Tire gem used to connect the Ruby workers.
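For completeness, this is roughly how that heap/mlockall setup is expressed on an RPM-based install; the file paths below are assumptions about this box, not copied from it:

# /etc/elasticsearch/elasticsearch.yml (assumed path) contains: bootstrap.mlockall: true
# /etc/sysconfig/elasticsearch (assumed path for the RPM package):
ES_HEAP_SIZE=14g             # sets both -Xms and -Xmx for the packaged init script
MAX_LOCKED_MEMORY=unlimited  # needed so mlockall can actually lock the 14 GB heap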
products
size: 164Mi (164Mi)
docs: 98,760 (157,138)
product_brands
size: 4.52Mi (4.52Mi)
docs: 5,123 (5,123)
product_categories
size: 358ki (358ki)
docs: 538 (538)
store_company_categories
size: 389ki (389ki)
docs: 4,028 (4,028)
stores
size: 1.44Mi (1.44Mi)
docs: 1,090 (1,090)
The biggest index is products, shown as 164 MB in Bigdesk. How can ES grow to use 14 GB over time?
Is there something wrong with the index metadata?
{
state: open
settings: {
index.analysis.filter.french_stop.stopwords.0: alors
index.analysis.filter.french_stop.stopwords.1: au
index.analysis.filter.french_stop.stopwords.4: autre
index.analysis.filter.french_stop.stopwords.5: avant
index.analysis.filter.french_stop.stopwords.2: aucuns
index.analysis.filter.french_stop.stopwords.3: aussi
index.analysis.filter.french_stop.stopwords.22: dehors
index.analysis.filter.french_stop.stopwords.8: bon
index.analysis.filter.french_stop.stopwords.23: depuis
index.analysis.filter.french_stop.stopwords.9: car
index.analysis.filter.french_stop.stopwords.20: du
index.analysis.filter.french_stop.stopwords.6: avec
index.analysis.filter.french_stop.stopwords.21: dedans
index.analysis.filter.french_stop.stopwords.7: avoir
index.analysis.filter.french_stop.stopwords.29: droite
index.analysis.filter.french_stop.stopwords.28: dos
index.analysis.filter.french_stop.stopwords.27: donc
index.analysis.filter.french_stop.stopwords.26: doit
index.analysis.filter.french_stop.stopwords.25: devrait
index.analysis.filter.french_stop.stopwords.24: deux
index.analysis.analyzer.nGram_analyzer.type: custom
index.analysis.filter.nGram_filter.token_chars.0: letter
index.analysis.analyzer.product_analyzer.type: custom
index.analysis.filter.nGram_filter.token_chars.1: digit
index.analysis.filter.nGram_filter.token_chars.2: punctuation
index.analysis.filter.french_stemmer.type: stemmer
index.analysis.filter.nGram_filter.type: nGram
index.analysis.filter.french_stop.stopwords.10: ce
index.analysis.filter.french_stop.stopwords.11: cela
index.analysis.filter.french_stop.stopwords.12: ces
index.analysis.analyzer.product_analyzer.filter.0: lowercase
index.analysis.filter.french_stop.stopwords.91: sans
index.analysis.filter.french_stop.stopwords.18: dans
index.analysis.analyzer.product_analyzer.filter.1: french_stemmer
index.analysis.filter.french_stop.stopwords.92: ses
index.analysis.filter.french_stop.stopwords.17: comment
index.analysis.analyzer.product_analyzer.filter.2: asciifolding
index.analysis.analyzer.product_analyzer.filter.3: unique
index.analysis.filter.french_stop.stopwords.90: sa
index.analysis.filter.french_stop.stopwords.19: des
index.analysis.filter.french_stop.stopwords.14: chaque
index.analysis.analyzer.product_analyzer.filter.4: french_stop
index.analysis.filter.french_stop.stopwords.13: ceux
index.analysis.filter.nGram_filter.min_gram: 2
index.analysis.filter.french_stop.stopwords.16: comme
index.analysis.analyzer.category_analyzer.type: custom
index.analysis.filter.french_stop.stopwords.15: ci
index.analysis.filter.french_stop.stopwords.99: soyez
index.analysis.filter.french_stop.stopwords.97: sont
index.analysis.filter.french_stop.stopwords.98: sous
index.analysis.filter.french_stop.stopwords.95: sien
index.analysis.filter.french_stop.stopwords.96: son
index.analysis.filter.french_stop.stopwords.93: seulement
index.analysis.filter.french_stop.stopwords.94: si
index.analysis.analyzer.nGram_analyzer.tokenizer: whitespace
index.analysis.filter.french_stop.stopwords.80: plupart
index.analysis.filter.french_stop.stopwords.81: pour
index.number_of_replicas: 0
index.analysis.filter.french_stop.stopwords.82: pourquoi
index.analysis.filter.french_stop.stopwords.83: quand
index.analysis.filter.french_stop.stopwords.84: que
index.analysis.filter.french_stop.stopwords.85: quel
index.analysis.filter.french_stop.stopwords.86: quelle
index.analysis.filter.french_stop.stopwords.87: quelles
index.analysis.filter.french_stop.stopwords.88: quels
index.analysis.filter.french_stop.stopwords.89: qui
index.analysis.analyzer.product_analyzer.tokenizer: standard
index.analysis.filter.french_stop.stopwords.79: pièce
index.analysis.filter.french_stop.stopwords.70: ou
index.analysis.filter.french_stop.stopwords.73: parce
index.analysis.filter.french_stop.stopwords.74: parole
index.uuid: B_JF7UG5R6S_ZC0L0IMFYw
index.analysis.filter.french_stop.stopwords.71: où
index.analysis.filter.french_stop.stopwords.72: par
index.analysis.filter.french_stop.stopwords.77: peut
index.analysis.filter.french_stop.stopwords.78: peu
index.analysis.filter.french_stop.stopwords.75: pas
index.analysis.filter.french_stop.stopwords.76: personnes
index.analysis.filter.french_stop.stopwords.68: nous
index.analysis.filter.french_stop.stopwords.69: nouveaux
index.analysis.filter.french_stop.stopwords.65: ni
index.analysis.analyzer.category_analyzer.filter.0: lowercase
index.analysis.filter.french_stop.stopwords.64: même
index.analysis.filter.french_stop.stopwords.67: notre
index.analysis.filter.french_stop.stopwords.66: nommés
index.analysis.filter.french_stop.stopwords.61: moins
index.analysis.filter.french_stop.stopwords.60: mine
index.analysis.analyzer.category_analyzer.filter.1: french_stemmer
index.analysis.filter.french_stop.stopwords.63: mot
index.analysis.analyzer.category_analyzer.filter.2: french_stop
index.analysis.filter.french_stop.stopwords.62: mon
index.analysis.filter.french_stop.stopwords.120: ça
index.analysis.filter.french_stop.stopwords.121: étaient
index.analysis.filter.french_stop.stopwords.122: état
index.analysis.filter.french_stop.stopwords.123: étions
index.analysis.filter.french_stop.stopwords.124: été
index.analysis.filter.french_stop.stopwords.125: être
index.analysis.filter.nGram_filter.max_gram: 20
index.analysis.filter.french_stop.stopwords.126: rayon
index.analysis.filter.french_stop.stopwords.127: rayons
index.analysis.filter.french_stop.stopwords.128: root
index.number_of_shards: 1
index.analysis.filter.french_stop.stopwords.129: roots
index.analysis.filter.french_stop.stopwords.59: mes
index.analysis.filter.french_stop.stopwords.57: maintenant
index.analysis.filter.french_stop.stopwords.58: mais
index.analysis.filter.french_stop.stopwords.56: ma
index.analysis.filter.french_stop.stopwords.55: là
index.analysis.analyzer.whitespace_analyzer.tokenizer: whitespace
index.analysis.filter.french_stop.stopwords.54: leur
index.analysis.filter.french_stop.stopwords.53: les
index.analysis.filter.french_stop.stopwords.52: le
index.analysis.filter.french_stop.stopwords.51: la
index.analysis.analyzer.whitespace_analyzer.type: custom
index.analysis.filter.french_stop.stopwords.50: juste
index.analysis.analyzer.whitespace_analyzer.filter.1: french_stemmer
index.analysis.analyzer.whitespace_analyzer.filter.0: lowercase
index.analysis.filter.french_stop.type: stop
index.analysis.analyzer.whitespace_analyzer.filter.2: asciifolding
index.analysis.filter.french_stop.stopwords.114: voie
index.analysis.filter.french_stop.stopwords.115: voient
index.analysis.filter.french_stop.stopwords.112: tu
index.analysis.filter.french_stop.stopwords.113: valeur
index.analysis.filter.french_stop.stopwords.110: trop
index.analysis.filter.french_stop.stopwords.111: très
index.version.created: 901399
index.analysis.filter.french_stop.stopwords.46: ici
index.analysis.filter.french_stop.stopwords.47: il
index.analysis.filter.french_stop.stopwords.48: ils
index.analysis.filter.french_stop.stopwords.49: je
index.analysis.filter.french_stop.stopwords.118: vous
index.analysis.filter.french_stop.stopwords.119: vu
index.analysis.filter.french_stop.stopwords.116: vont
index.analysis.filter.french_stop.stopwords.117: votre
index.analysis.filter.french_stop.stopwords.41: fois
index.analysis.filter.nGram_filter.token_chars.3: symbol
index.analysis.filter.french_stop.stopwords.40: faites
index.analysis.analyzer.category_analyzer.tokenizer: standard
index.analysis.filter.french_stop.stopwords.43: force
index.analysis.filter.french_stop.stopwords.42: font
index.analysis.filter.french_stop.stopwords.45: hors
index.analysis.filter.french_stop.stopwords.44: haut
index.analysis.filter.french_stop.stopwords.101: sur
index.analysis.filter.french_stop.stopwords.102: ta
index.analysis.analyzer.nGram_analyzer.filter.3: nGram_filter
index.analysis.filter.french_stop.stopwords.103: tandis
index.analysis.analyzer.nGram_analyzer.filter.2: french_stemmer
index.analysis.filter.french_stop.stopwords.104: tellement
index.analysis.filter.french_stemmer.name: minimal_french
index.analysis.filter.french_stop.stopwords.100: sujet
index.analysis.filter.french_stop.stopwords.37: et
index.analysis.filter.french_stop.stopwords.109: tout
index.analysis.filter.french_stop.stopwords.38: eu
index.analysis.filter.french_stop.stopwords.35: essai
index.analysis.filter.french_stop.stopwords.36: est
index.analysis.analyzer.nGram_analyzer.filter.1: asciifolding
index.analysis.filter.french_stop.stopwords.105: tels
index.analysis.analyzer.nGram_analyzer.filter.0: lowercase
index.analysis.filter.french_stop.stopwords.106: tes
index.analysis.filter.french_stop.stopwords.39: fait
index.analysis.filter.french_stop.stopwords.107: ton
index.analysis.filter.french_stop.stopwords.108: tous
index.analysis.filter.french_stop.stopwords.30: début
index.analysis.filter.french_stop.stopwords.34: encore
index.analysis.filter.french_stop.stopwords.33: en
index.analysis.filter.french_stop.stopwords.32: elles
index.analysis.filter.french_stop.stopwords.31: elle
}
mappings: {
product_category: {
properties: {
tags: {
analyzer: category_analyzer
type: string
}
ancestry_path: {
type: string
}
name: {
analyzer: product_analyzer
type: string
}
leaf?: {
type: boolean
}
category_depth_0: {
properties: {
tags: {
type: string
}
name: {
analyzer: product_analyzer
type: string
}
}
}
name_suggest: {
index_analyzer: nGram_analyzer
search_analyzer: whitespace_analyzer
type: string
}
category_depth_3: {
properties: {
name: {
type: string
}
}
}
self_and_ancestors_ids: {
type: string
}
depth: {
type: integer
}
category_depth_1: {
properties: {
tags: {
type: string
}
name: {
analyzer: product_analyzer
type: string
}
}
}
category_depth_2: {
properties: {
tags: {
type: string
}
name: {
analyzer: product_analyzer
type: string
}
}
}
}
}
}
aliases: [ ]
}
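For what it's worth, the heap holds more than the bare index structures: caches such as fielddata (used for sorting and faceting) and the filter cache also live there, which is one way heap usage can end up far larger than the on-disk index size. A rough way to see which of these grows over time, again assuming localhost:9200:

# Dump per-node statistics (hypothetical host/port) and look at the fielddata
# and filter_cache entries in the indices section; each reports a memory_size
# figure (field names as exposed by 0.90.x/1.x).
curl -s 'http://localhost:9200/_nodes/stats?pretty'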
I tried with a 6 GB min/max heap, but it showed the same behaviour, only becoming unresponsive faster.
Answer 1
Problem solved:
Dev finally allowed me to update Elasticsearch to 1.3.2, switch Java to the latest Oracle release, and replace the Tire Ruby driver with searchkick
(Dev felt its API is closer to Tire's, and the official driver seemed too complex for a quick transition).
With the default configuration (no -Xms and -Xmx), Elasticsearch does not use more than 320 MB of heap and the application runs as well as before. I will try setting -Xms and -Xmx to a static 2 GB and see whether the old memory-usage pattern comes back.
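If it helps, that follow-up test is nothing more than pinning the heap via the packaged init script; a sketch, assuming the RPM layout:

# /etc/sysconfig/elasticsearch (assumed path); ES_HEAP_SIZE sets both -Xms and -Xmx
ES_HEAP_SIZE=2g
# then restart the service:
sudo service elasticsearch restart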