We migrated Varnish from version 3 to version 4.0.1 (via Debian packages). Almost identical configuration and memory settings worked perfectly on Varnish 3. With Varnish 4, however, after some time running (2 to 24 hours), requests for cached objects start seeing large delays (2 to 10 seconds).
Here is the varnishstat output:
MAIN.uptime 167191 1.00 Child process uptime
MAIN.sess_conn 6329434 37.86 Sessions accepted
MAIN.sess_drop 0 0.00 Sessions dropped
MAIN.sess_fail 0 0.00 Session accept failures
MAIN.sess_pipe_overflow 0 0.00 Session pipe overflow
MAIN.client_req_400 0 0.00 Client requests received, subject to 400 errors
MAIN.client_req_411 0 0.00 Client requests received, subject to 411 errors
MAIN.client_req_413 0 0.00 Client requests received, subject to 413 errors
MAIN.client_req_417 0 0.00 Client requests received, subject to 417 errors
MAIN.client_req 6329449 37.86 Good Client requests received
MAIN.cache_hit 3094149 18.51 Cache hits
MAIN.cache_hitpass 6321 0.04 Cache hits for pass
MAIN.cache_miss 3083819 18.44 Cache misses
MAIN.backend_conn 108651 0.65 Backend conn. success
MAIN.backend_unhealthy 0 0.00 Backend conn. not attempted
MAIN.backend_busy 0 0.00 Backend conn. too many
MAIN.backend_fail 0 0.00 Backend conn. failures
MAIN.backend_reuse 3141847 18.79 Backend conn. reuses
MAIN.backend_toolate 4418 0.03 Backend conn. was closed
MAIN.backend_recycle 3146280 18.82 Backend conn. recycles
MAIN.backend_retry 25 0.00 Backend conn. retry
MAIN.fetch_head 1 0.00 Fetch no body (HEAD)
MAIN.fetch_length 2845771 17.02 Fetch with Length
MAIN.fetch_chunked 322232 1.93 Fetch chunked
MAIN.fetch_eof 0 0.00 Fetch EOF
MAIN.fetch_bad 0 0.00 Fetch bad T-E
MAIN.fetch_close 60292 0.36 Fetch wanted close
MAIN.fetch_oldhttp 42 0.00 Fetch pre HTTP/1.1 closed
MAIN.fetch_zero 0 0.00 Fetch zero len body
MAIN.fetch_1xx 0 0.00 Fetch no body (1xx)
MAIN.fetch_204 138 0.00 Fetch no body (204)
MAIN.fetch_304 16221 0.10 Fetch no body (304)
MAIN.fetch_failed 16 0.00 Fetch body failed
MAIN.pools 2 . Number of thread pools
MAIN.threads 200 . Total number of threads
MAIN.threads_limited 0 0.00 Threads hit max
MAIN.threads_created 225 0.00 Threads created
MAIN.threads_destroyed 25 0.00 Threads destoryed
MAIN.threads_failed 0 0.00 Thread creation failed
MAIN.thread_queue_len 0 . Length of session queue
MAIN.busy_sleep 41876 0.25 Number of requests sent to sleep on busy objhdr
MAIN.busy_wakeup 41876 0.25 Number of requests woken after sleep on busy objhdr
MAIN.sess_queued 28 0.00 Sessions queued for thread
MAIN.sess_dropped 0 0.00 Sessions dropped for thread
MAIN.n_object 24161 . N struct object
MAIN.n_vampireobject 0 . N unresurrected objects
MAIN.n_objectcore 24273 . N struct objectcore
MAIN.n_objecthead 34049 . N struct objecthead
MAIN.n_waitinglist 191 . N struct waitinglist
MAIN.n_backend 5 . N backends
MAIN.n_expired 3063120 . N expired objects
MAIN.n_lru_nuked 0 . N LRU nuked objects
MAIN.n_lru_moved 2467465 . N LRU moved objects
MAIN.losthdr 2 0.00 HTTP header overflows
MAIN.s_sess 6329434 37.86 Total Sessions
MAIN.s_req 6329449 37.86 Total Requests
MAIN.s_pipe 5774 0.03 Total pipe
MAIN.s_pass 145365 0.87 Total pass
MAIN.s_fetch 3229184 19.31 Total fetch
MAIN.s_synth 340 0.00 Total synth
MAIN.s_req_hdrbytes 3619855685 21651.02 Request header bytes
MAIN.s_req_bodybytes 38264281 228.87 Request body bytes
MAIN.s_resp_hdrbytes 1994822975 11931.40 Response header bytes
MAIN.s_resp_bodybytes 95329048038 570180.50 Reponse body bytes
MAIN.s_pipe_hdrbytes 4022681 24.06 Pipe request header bytes
MAIN.s_pipe_in 1021617212 6110.48 Piped bytes from client
MAIN.s_pipe_out 108398777 648.35 Piped bytes to client
MAIN.sess_closed 6262741 37.46 Session Closed
MAIN.sess_pipeline 0 0.00 Session Pipeline
MAIN.sess_readahead 0 0.00 Session Read Ahead
MAIN.sess_herd 101 0.00 Session herd
MAIN.shm_records 677928134 4054.81 SHM records
MAIN.shm_writes 60451920 361.57 SHM writes
MAIN.shm_flushes 79 0.00 SHM flushes due to overflow
MAIN.shm_cont 25476 0.15 SHM MTX contention
MAIN.shm_cycles 303 0.00 SHM cycles through buffer
MAIN.sms_nreq 0 0.00 SMS allocator requests
MAIN.sms_nobj 0 . SMS outstanding allocations
MAIN.sms_nbytes 0 . SMS outstanding bytes
MAIN.sms_balloc 0 . SMS bytes allocated
MAIN.sms_bfree 0 . SMS bytes freed
MAIN.backend_req 3244724 19.41 Backend requests made
MAIN.n_vcl 1 0.00 N vcl total
MAIN.n_vcl_avail 1 0.00 N vcl available
MAIN.n_vcl_discard 0 0.00 N vcl discarded
MAIN.bans 1 . Count of bans
MAIN.bans_completed 1 . Number of bans marked 'completed'
MAIN.bans_obj 0 . Number of bans using obj.*
MAIN.bans_req 0 . Number of bans using req.*
MAIN.bans_added 1 0.00 Bans added
MAIN.bans_deleted 0 0.00 Bans deleted
MAIN.bans_tested 0 0.00 Bans tested against objects (lookup)
MAIN.bans_obj_killed 0 0.00 Objects killed by bans (lookup)
MAIN.bans_lurker_tested 0 0.00 Bans tested against objects (lurker)
MAIN.bans_tests_tested 0 0.00 Ban tests tested against objects (lookup)
MAIN.bans_lurker_tests_tested 0 0.00 Ban tests tested against objects (lurker)
MAIN.bans_lurker_obj_killed 0 0.00 Objects killed by bans (lurker)
MAIN.bans_dups 0 0.00 Bans superseded by other bans
MAIN.bans_lurker_contention 0 0.00 Lurker gave way for lookup
MAIN.bans_persisted_bytes 13 . Bytes used by the persisted ban lists
MAIN.bans_persisted_fragmentation 0 . Extra bytes in persisted ban lists due to fragmentation
MAIN.n_purges 0 . Number of purge operations
MAIN.n_obj_purged 0 . number of purged objects
MAIN.exp_mailed 3111388 18.61 Number of objects mailed to expiry thread
MAIN.exp_received 3111388 18.61 Number of objects received by expiry thread
MAIN.hcb_nolock 6184294 36.99 HCB Lookups without lock
MAIN.hcb_lock 2826444 16.91 HCB Lookups with lock
MAIN.hcb_insert 2826444 16.91 HCB Inserts
MAIN.esi_errors 0 0.00 ESI parse errors (unlock)
MAIN.esi_warnings 0 0.00 ESI parse warnings (unlock)
MAIN.vmods 3 . Loaded VMODs
MAIN.n_gzip 0 0.00 Gzip operations
MAIN.n_gunzip 2741291 16.40 Gunzip operations
MAIN.vsm_free 971600 . Free VSM space
MAIN.vsm_used 83963008 . Used VSM space
MAIN.vsm_cooling 0 . Cooling VSM space
MAIN.vsm_overflow 0 . Overflow VSM space
MAIN.vsm_overflowed 0 0.00 Overflowed VSM space
MGT.uptime 167191 1.00 Management process uptime
MGT.child_start 1 0.00 Child process started
MGT.child_exit 0 0.00 Child process normal exit
MGT.child_stop 0 0.00 Child process unexpected exit
MGT.child_died 0 0.00 Child process died (signal)
MGT.child_dump 0 0.00 Child process core dumped
MGT.child_panic 0 0.00 Child process panic
MEMPOOL.vbc.live 15 . In use
MEMPOOL.vbc.pool 10 . In Pool
MEMPOOL.vbc.sz_wanted 88 . Size requested
MEMPOOL.vbc.sz_needed 120 . Size allocated
MEMPOOL.vbc.allocs 108651 0.65 Allocations
MEMPOOL.vbc.frees 108636 0.65 Frees
MEMPOOL.vbc.recycle 108388 0.65 Recycled from pool
MEMPOOL.vbc.timeout 43212 0.26 Timed out from pool
MEMPOOL.vbc.toosmall 0 0.00 Too small to recycle
MEMPOOL.vbc.surplus 0 0.00 Too many for pool
MEMPOOL.vbc.randry 263 0.00 Pool ran dry
MEMPOOL.busyobj.live 0 . In use
MEMPOOL.busyobj.pool 10 . In Pool
MEMPOOL.busyobj.sz_wanted 65536 . Size requested
MEMPOOL.busyobj.sz_needed 65568 . Size allocated
MEMPOOL.busyobj.allocs 3250473 19.44 Allocations
MEMPOOL.busyobj.frees 3250473 19.44 Frees
MEMPOOL.busyobj.recycle 3248273 19.43 Recycled from pool
MEMPOOL.busyobj.timeout 101777 0.61 Timed out from pool
MEMPOOL.busyobj.toosmall 0 0.00 Too small to recycle
MEMPOOL.busyobj.surplus 0 0.00 Too many for pool
MEMPOOL.busyobj.randry 2200 0.01 Pool ran dry
MEMPOOL.req0.live 0 . In use
MEMPOOL.req0.pool 10 . In Pool
MEMPOOL.req0.sz_wanted 65536 . Size requested
MEMPOOL.req0.sz_needed 65568 . Size allocated
MEMPOOL.req0.allocs 3165016 18.93 Allocations
MEMPOOL.req0.frees 3165016 18.93 Frees
MEMPOOL.req0.recycle 3163145 18.92 Recycled from pool
MEMPOOL.req0.timeout 92611 0.55 Timed out from pool
MEMPOOL.req0.toosmall 0 0.00 Too small to recycle
MEMPOOL.req0.surplus 0 0.00 Too many for pool
MEMPOOL.req0.randry 1871 0.01 Pool ran dry
MEMPOOL.sess0.live 0 . In use
MEMPOOL.sess0.pool 10 . In Pool
MEMPOOL.sess0.sz_wanted 384 . Size requested
MEMPOOL.sess0.sz_needed 416 . Size allocated
MEMPOOL.sess0.allocs 3164961 18.93 Allocations
MEMPOOL.sess0.frees 3164961 18.93 Frees
MEMPOOL.sess0.recycle 3162993 18.92 Recycled from pool
MEMPOOL.sess0.timeout 92505 0.55 Timed out from pool
MEMPOOL.sess0.toosmall 0 0.00 Too small to recycle
MEMPOOL.sess0.surplus 0 0.00 Too many for pool
MEMPOOL.sess0.randry 1968 0.01 Pool ran dry
MEMPOOL.req1.live 0 . In use
MEMPOOL.req1.pool 10 . In Pool
MEMPOOL.req1.sz_wanted 65536 . Size requested
MEMPOOL.req1.sz_needed 65568 . Size allocated
MEMPOOL.req1.allocs 3164517 18.93 Allocations
MEMPOOL.req1.frees 3164517 18.93 Frees
MEMPOOL.req1.recycle 3162740 18.92 Recycled from pool
MEMPOOL.req1.timeout 91596 0.55 Timed out from pool
MEMPOOL.req1.toosmall 0 0.00 Too small to recycle
MEMPOOL.req1.surplus 0 0.00 Too many for pool
MEMPOOL.req1.randry 1777 0.01 Pool ran dry
MEMPOOL.sess1.live 0 . In use
MEMPOOL.sess1.pool 10 . In Pool
MEMPOOL.sess1.sz_wanted 384 . Size requested
MEMPOOL.sess1.sz_needed 416 . Size allocated
MEMPOOL.sess1.allocs 3164473 18.93 Allocations
MEMPOOL.sess1.frees 3164473 18.93 Frees
MEMPOOL.sess1.recycle 3162704 18.92 Recycled from pool
MEMPOOL.sess1.timeout 91576 0.55 Timed out from pool
MEMPOOL.sess1.toosmall 0 0.00 Too small to recycle
MEMPOOL.sess1.surplus 0 0.00 Too many for pool
MEMPOOL.sess1.randry 1769 0.01 Pool ran dry
SMA.s0.c_req 5615570 33.59 Allocator requests
SMA.s0.c_fail 0 0.00 Allocator failures
SMA.s0.c_bytes 71277740514 426325.22 Bytes allocated
SMA.s0.c_freed 70585949062 422187.49 Bytes freed
SMA.s0.g_alloc 47820 . Allocations outstanding
SMA.s0.g_bytes 691791452 . Bytes outstanding
SMA.s0.g_space 1430526372 . Bytes available
SMA.Transient.c_req 871408 5.21 Allocator requests
SMA.Transient.c_fail 0 0.00 Allocator failures
SMA.Transient.c_bytes 9784649739 58523.78 Bytes allocated
SMA.Transient.c_freed 9784414099 58522.37 Bytes freed
SMA.Transient.g_alloc 262 . Allocations outstanding
SMA.Transient.g_bytes 235640 . Bytes outstanding
SMA.Transient.g_space 0 . Bytes available
VBE.server1(10.10.21.78,,80).vcls 1 . VCL references
VBE.server1(10.10.21.78,,80).happy 18446744073709551615 . Happy health probes
VBE.server1(10.10.21.78,,80).bereq_hdrbytes 290806044 1739.36 Request header bytes
VBE.server1(10.10.21.78,,80).bereq_bodybytes 1690 0.01 Request body bytes
VBE.server1(10.10.21.78,,80).beresp_hdrbytes 153255326 916.65 Response header bytes
VBE.server1(10.10.21.78,,80).beresp_bodybytes 8626069868 51594.10 Response body bytes
VBE.server1(10.10.21.78,,80).pipe_hdrbytes 0 0.00 Pipe request header bytes
VBE.server1(10.10.21.78,,80).pipe_out 0 0.00 Piped bytes to backend
VBE.server1(10.10.21.78,,80).pipe_in 0 0.00 Piped bytes from backend
VBE.server2(10.10.21.79,,80).vcls 1 . VCL references
VBE.server2(10.10.21.79,,80).happy 18446744073709551615 . Happy health probes
VBE.server2(10.10.21.79,,80).bereq_hdrbytes 290837520 1739.55 Request header bytes
VBE.server2(10.10.21.79,,80).bereq_bodybytes 1140 0.01 Request body bytes
VBE.server2(10.10.21.79,,80).beresp_hdrbytes 153283230 916.82 Response header bytes
VBE.server2(10.10.21.79,,80).beresp_bodybytes 8586809601 51359.28 Response body bytes
VBE.server2(10.10.21.79,,80).pipe_hdrbytes 0 0.00 Pipe request header bytes
VBE.server2(10.10.21.79,,80).pipe_out 0 0.00 Piped bytes to backend
VBE.server2(10.10.21.79,,80).pipe_in 0 0.00 Piped bytes from backend
VBE.server3(10.10.21.80,,80).vcls 1 . VCL references
VBE.server3(10.10.21.80,,80).happy 18446744073709551615 . Happy health probes
VBE.server3(10.10.21.80,,80).bereq_hdrbytes 290783134 1739.23 Request header bytes
VBE.server3(10.10.21.80,,80).bereq_bodybytes 0 0.00 Request body bytes
VBE.server3(10.10.21.80,,80).beresp_hdrbytes 152434626 911.74 Response header bytes
VBE.server3(10.10.21.80,,80).beresp_bodybytes 8627121897 51600.40 Response body bytes
VBE.server3(10.10.21.80,,80).pipe_hdrbytes 0 0.00 Pipe request header bytes
VBE.server3(10.10.21.80,,80).pipe_out 0 0.00 Piped bytes to backend
VBE.server3(10.10.21.80,,80).pipe_in 0 0.00 Piped bytes from backend
LCK.sms.creat 0 0.00 Created locks
LCK.sms.destroy 0 0.00 Destroyed locks
LCK.sms.locks 0 0.00 Lock Operations
LCK.smp.creat 0 0.00 Created locks
LCK.smp.destroy 0 0.00 Destroyed locks
LCK.smp.locks 0 0.00 Lock Operations
LCK.sma.creat 2 0.00 Created locks
LCK.sma.destroy 0 0.00 Destroyed locks
LCK.sma.locks 12925874 77.31 Lock Operations
LCK.smf.creat 0 0.00 Created locks
LCK.smf.destroy 0 0.00 Destroyed locks
LCK.smf.locks 0 0.00 Lock Operations
LCK.hsl.creat 0 0.00 Created locks
LCK.hsl.destroy 0 0.00 Destroyed locks
LCK.hsl.locks 0 0.00 Lock Operations
LCK.hcb.creat 1 0.00 Created locks
LCK.hcb.destroy 0 0.00 Destroyed locks
LCK.hcb.locks 5629656 33.67 Lock Operations
LCK.hcl.creat 0 0.00 Created locks
LCK.hcl.destroy 0 0.00 Destroyed locks
LCK.hcl.locks 0 0.00 Lock Operations
LCK.vcl.creat 1 0.00 Created locks
LCK.vcl.destroy 0 0.00 Destroyed locks
LCK.vcl.locks 6577502 39.34 Lock Operations
LCK.sessmem.creat 0 0.00 Created locks
LCK.sessmem.destroy 0 0.00 Destroyed locks
LCK.sessmem.locks 0 0.00 Lock Operations
LCK.sess.creat 6329434 37.86 Created locks
LCK.sess.destroy 6329434 37.86 Destroyed locks
LCK.sess.locks 1847 0.01 Lock Operations
LCK.wstat.creat 1 0.00 Created locks
LCK.wstat.destroy 0 0.00 Destroyed locks
LCK.wstat.locks 15680311 93.79 Lock Operations
LCK.herder.creat 0 0.00 Created locks
LCK.herder.destroy 0 0.00 Destroyed locks
LCK.herder.locks 0 0.00 Lock Operations
LCK.wq.creat 3 0.00 Created locks
LCK.wq.destroy 0 0.00 Destroyed locks
LCK.wq.locks 28915924 172.95 Lock Operations
LCK.objhdr.creat 2826574 16.91 Created locks
LCK.objhdr.destroy 2792524 16.70 Destroyed locks
LCK.objhdr.locks 51924239 310.57 Lock Operations
LCK.exp.creat 1 0.00 Created locks
LCK.exp.destroy 0 0.00 Destroyed locks
LCK.exp.locks 18371384 109.88 Lock Operations
LCK.lru.creat 2 0.00 Created locks
LCK.lru.destroy 0 0.00 Destroyed locks
LCK.lru.locks 12278364 73.44 Lock Operations
LCK.cli.creat 1 0.00 Created locks
LCK.cli.destroy 0 0.00 Destroyed locks
LCK.cli.locks 55733 0.33 Lock Operations
LCK.ban.creat 1 0.00 Created locks
LCK.ban.destroy 0 0.00 Destroyed locks
LCK.ban.locks 6723420 40.21 Lock Operations
LCK.vbp.creat 1 0.00 Created locks
LCK.vbp.destroy 0 0.00 Destroyed locks
LCK.vbp.locks 100102 0.60 Lock Operations
LCK.backend.creat 5 0.00 Created locks
LCK.backend.destroy 0 0.00 Destroyed locks
LCK.backend.locks 6618483 39.59 Lock Operations
LCK.vcapace.creat 1 0.00 Created locks
LCK.vcapace.destroy 0 0.00 Destroyed locks
LCK.vcapace.locks 0 0.00 Lock Operations
LCK.nbusyobj.creat 0 0.00 Created locks
LCK.nbusyobj.destroy 0 0.00 Destroyed locks
LCK.nbusyobj.locks 0 0.00 Lock Operations
LCK.busyobj.creat 3250473 19.44 Created locks
LCK.busyobj.destroy 3250473 19.44 Destroyed locks
LCK.busyobj.locks 25068258 149.94 Lock Operations
LCK.mempool.creat 6 0.00 Created locks
LCK.mempool.destroy 0 0.00 Destroyed locks
LCK.mempool.locks 33618855 201.08 Lock Operations
LCK.vxid.creat 1 0.00 Created locks
LCK.vxid.destroy 0 0.00 Destroyed locks
LCK.vxid.locks 601 0.00 Lock Operations
LCK.pipestat.creat 1 0.00 Created locks
LCK.pipestat.destroy 0 0.00 Destroyed locks
LCK.pipestat.locks 5774 0.03 Lock Operations
I suspect the cause is an excessive number of lock operations. Can someone explain what "Lock Operations" means and how to avoid them?
Answer 1
Without knowing your setup, the problem seems to lie in
MAIN.backend_reuse 3141847 18.79 Backend conn. reuses
Also, it appears that only about 50% of incoming requests are served from the varnish cache, while the other 50% go through to your backends, where connections are refused most of the time.
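That roughly 50% figure can be read straight off the MAIN.cache_hit and MAIN.cache_miss counters in the dump above. As a minimal sketch (the three-column parsing of `varnishstat -1` style lines is an assumption for illustration; only the counter names come from the output above):

```python
def parse_counters(text: str) -> dict:
    """Map counter name -> absolute value from varnishstat -1 style lines."""
    counters = {}
    for line in text.splitlines():
        parts = line.split()
        # First column is the counter name, second the absolute value.
        if len(parts) >= 2 and parts[1].isdigit():
            counters[parts[0]] = int(parts[1])
    return counters


def hit_ratio(counters: dict) -> float:
    """Fraction of cacheable lookups answered from cache."""
    hits = counters.get("MAIN.cache_hit", 0)
    misses = counters.get("MAIN.cache_miss", 0)
    total = hits + misses
    return hits / total if total else 0.0


# Values copied from the varnishstat dump above.
sample = """\
MAIN.cache_hit 3094149 18.51 Cache hits
MAIN.cache_miss 3083819 18.44 Cache misses
"""
print(f"hit ratio: {hit_ratio(parse_counters(sample)):.1%}")
```

Run against the counters above this prints a hit ratio of about 50%, i.e. roughly every second request goes to a backend.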