I currently have a replication job that looks like this:
{
    "continuous": true,
    "create_target": true,
    "owner": "admin",
    "source": "https://remote/db/",
    "target": "db",
    "user_ctx": {
        "roles": [
            "_admin"
        ]
    }
}
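For reference, a job like this is normally set up by PUTting the document into the _replicator database. A minimal sketch, where my_pull_replication is just a placeholder document ID and owner gets filled in by the server on save:

$ curl -X PUT 'http://localhost:5984/_replicator/my_pull_replication' \
    -H 'Content-Type: application/json' \
    -d '{"continuous": true, "create_target": true, "source": "https://remote/db/", "target": "db", "user_ctx": {"roles": ["_admin"]}}'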
Over http I see no errors in the log. Over https the replication does technically work, but the log also fills up with errors, which I would like to fix.
The errors look like this:
[Fri, 01 Nov 2013 22:11:49 GMT] [info] [<0.2227.0>] Retrying GET request to https://remote/db/doc?atts_since=%5B%2271-315ddf7e3d31004df5cd00846fd1cf38%22%5D&revs=true&open_revs=%5B%2275-a40b4c7d00c17cddcbef5b093bd10392%22%5D in 0.5 seconds due to error req_timedout
However, I can curl these URLs without them timing out:
$ curl -k 'https://remote/db/doc?atts_since=%5B%2273-7a26ae649429b96ed01757b477af40bd%22%5D&revs=true&open_revs=%5B%2276-c9e25fe15497c1c60f65f8da3a68d57d%22%5D'
<returns a bunch of garbage (expected garbage ;)>
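For comparison, curl's built-in timers can show how quickly the TLS handshake and the full response actually complete; the -w variables below are standard curl options and the URL is just the one from above:

$ curl -k -s -o /dev/null \
    -w 'connect: %{time_connect}s  tls: %{time_appconnect}s  total: %{time_total}s\n' \
    'https://remote/db/doc?atts_since=%5B%2273-7a26ae649429b96ed01757b477af40bd%22%5D&revs=true&open_revs=%5B%2276-c9e25fe15497c1c60f65f8da3a68d57d%22%5D'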
I have set a very generous connection_timeout of 120 seconds for the CouchDB replication:
[Fri, 01 Nov 2013 22:13:00 GMT] [info] [<0.3359.0>] Replication `"36d8a613224f3749a73ae4423b5f9733+continuous+create_target"` is using:
4 worker processes
a worker batch size of 500
20 HTTP connections
a connection timeout of 120000 milliseconds
10 retries per request
socket options are: [{keepalive,true},{nodelay,true}]
source start sequence 100321
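For reference, these values correspond to settings in the [replicator] section of local.ini. A minimal sketch of how they would be set, assuming the option names from CouchDB 1.2's default.ini:

[replicator]
worker_processes = 4
worker_batch_size = 500
http_connections = 20
; connection_timeout is in milliseconds
connection_timeout = 120000
retries_per_request = 10
socket_options = [{keepalive, true}, {nodelay, true}]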
I can't think of any difference significant enough that curl gets a response within a couple of seconds while the CouchDB replicator times out at 120 seconds. What am I missing, and what else could I try tweaking?
CouchDB v1.2.0 running on Ubuntu 13.04: Linux ip-10-40-65-137 3.8.0-32-generic #47-Ubuntu SMP Tue Oct 1 22:35:23 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux