Performance problems with a large number of concurrent users on an Apache + mod_jk + GlassFish v3.1 cluster

I am running a Java EE 6 EAR application on GlassFish v3.1 (2 clusters with 2 instances each), load-balanced by Apache v2.2 with mod_jk, all on the same server (Windows Server 2003 R2, Intel Xeon CPU X5670 @ 2.93GHz, 6GB RAM, 2 CPUs).

About 100 users access the web application. When they all try to reach it at roughly the same time, around 8 a.m. each morning, the response when loading the main JSF home page is very slow.

On top of that, I have noticed that the CPU usage of the httpd process frequently spikes to 99% during the day, and I have started seeing errors in the mod_jk.log file:

[Wed Jun 08 08:25:43 2011] [9380:8216] [info] ajp_process_callback::jk_ajp_common.c (1885): Writing to client aborted or client network problems
[Wed Jun 08 08:25:43 2011] [9380:8216] [info] ajp_service::jk_ajp_common.c (2543): (myAppLocalInstance4) sending request to tomcat failed (unrecoverable), because of client write error (attempt=1)

Any suggestions on how to improve this?

The Apache configuration is mostly the defaults, as shown below:

ServerRoot "C:/Program Files/Apache Software Foundation/Apache2.2"

Listen 80

LoadModule actions_module modules/mod_actions.so
LoadModule alias_module modules/mod_alias.so
LoadModule asis_module modules/mod_asis.so
LoadModule auth_basic_module modules/mod_auth_basic.so
LoadModule authn_default_module modules/mod_authn_default.so
LoadModule authn_file_module modules/mod_authn_file.so
LoadModule authz_default_module modules/mod_authz_default.so
LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
LoadModule authz_host_module modules/mod_authz_host.so
LoadModule authz_user_module modules/mod_authz_user.so
LoadModule autoindex_module modules/mod_autoindex.so
LoadModule cgi_module modules/mod_cgi.so
LoadModule dir_module modules/mod_dir.so
LoadModule env_module modules/mod_env.so
LoadModule include_module modules/mod_include.so
LoadModule isapi_module modules/mod_isapi.so
LoadModule log_config_module modules/mod_log_config.so
LoadModule mime_module modules/mod_mime.so
LoadModule negotiation_module modules/mod_negotiation.so
LoadModule setenvif_module modules/mod_setenvif.so

<IfModule !mpm_netware_module>
<IfModule !mpm_winnt_module>
User daemon
Group daemon

</IfModule>
</IfModule>

DocumentRoot "C:/Program Files/Apache Software Foundation/Apache2.2/htdocs"

<Directory />
    Options FollowSymLinks
    AllowOverride None
    Order deny,allow
    Deny from all
</Directory>

<Directory "C:/Program Files/Apache Software Foundation/Apache2.2/htdocs">
    Options Indexes FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all

</Directory>

<IfModule dir_module>
    DirectoryIndex index.html
</IfModule>

<FilesMatch "^\.ht">
    Order allow,deny
    Deny from all
    Satisfy All
</FilesMatch>

ErrorLog "logs/error.log"

LogLevel warn

<IfModule log_config_module>
    LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
    LogFormat "%h %l %u %t \"%r\" %>s %b" common

    <IfModule logio_module>
      LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
    </IfModule>

    CustomLog "logs/access.log" common

</IfModule>

<IfModule alias_module>
    ScriptAlias /cgi-bin/ "C:/Program Files/Apache Software Foundation/Apache2.2/cgi-bin/"
</IfModule>


<Directory "C:/Program Files/Apache Software Foundation/Apache2.2/cgi-bin">
    AllowOverride None
    Options None
    Order allow,deny
    Allow from all
</Directory>

DefaultType text/plain

<IfModule mime_module>
    TypesConfig conf/mime.types
    AddType application/x-compress .Z
    AddType application/x-gzip .gz .tgz
</IfModule>

Include conf/extra/httpd-mpm.conf

<IfModule ssl_module>
SSLRandomSeed startup builtin
SSLRandomSeed connect builtin
</IfModule>


LoadModule jk_module modules/mod_jk.so
JkWorkersFile conf/workers.properties
JkLogFile logs/mod_jk.log
JkLogLevel info
JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
JkRequestLogFormat "%w %V %T"
JkMount /myApp/* loadbalancerLocal
JkMount /myAppRemote/* loadbalancerRemote
JkMount /myApp loadbalancerLocal
JkMount /myAppRemote loadbalancerRemote

The workers.properties configuration file is:

worker.list=loadbalancerLocal,loadbalancerRemote

worker.myAppLocalInstance1.type=ajp13
worker.myAppLocalInstance1.host=localhost
worker.myAppLocalInstance1.port=8109
worker.myAppLocalInstance1.lbfactor=1
worker.myAppLocalInstance1.socket_keepalive=1
worker.myAppLocalInstance1.socket_timeout=1000

worker.myAppLocalInstance2.type=ajp13
worker.myAppLocalInstance2.host=localhost
worker.myAppLocalInstance2.port=8209
worker.myAppLocalInstance2.lbfactor=1
worker.myAppLocalInstance2.socket_keepalive=1
worker.myAppLocalInstance2.socket_timeout=1000

worker.myAppLocalInstance3.type=ajp13
worker.myAppLocalInstance3.host=localhost
worker.myAppLocalInstance3.port=8309
worker.myAppLocalInstance3.lbfactor=1
worker.myAppLocalInstance3.socket_keepalive=1
worker.myAppLocalInstance3.socket_timeout=1000

worker.myAppLocalInstance4.type=ajp13
worker.myAppLocalInstance4.host=localhost
worker.myAppLocalInstance4.port=8409
worker.myAppLocalInstance4.lbfactor=1
worker.myAppLocalInstance4.socket_keepalive=1
worker.myAppLocalInstance4.socket_timeout=1000

worker.myAppRemoteInstance1.type=ajp13
worker.myAppRemoteInstance1.host=localhost
worker.myAppRemoteInstance1.port=8509
worker.myAppRemoteInstance1.lbfactor=1
worker.myAppRemoteInstance1.socket_keepalive=1
worker.myAppRemoteInstance1.socket_timeout=1000

worker.myAppRemoteInstance2.type=ajp13
worker.myAppRemoteInstance2.host=localhost
worker.myAppRemoteInstance2.port=8609
worker.myAppRemoteInstance2.lbfactor=1
worker.myAppRemoteInstance2.socket_keepalive=1
worker.myAppRemoteInstance2.socket_timeout=1000

worker.myAppRemoteInstance3.type=ajp13
worker.myAppRemoteInstance3.host=localhost
worker.myAppRemoteInstance3.port=8709
worker.myAppRemoteInstance3.lbfactor=1
worker.myAppRemoteInstance3.socket_keepalive=1
worker.myAppRemoteInstance3.socket_timeout=1000

worker.myAppRemoteInstance4.type=ajp13
worker.myAppRemoteInstance4.host=localhost
worker.myAppRemoteInstance4.port=8809
worker.myAppRemoteInstance4.lbfactor=1
worker.myAppRemoteInstance4.socket_keepalive=1
worker.myAppRemoteInstance4.socket_timeout=1000

worker.loadbalancerLocal.type=lb
worker.loadbalancerLocal.sticky_session=True
worker.loadbalancerLocal.balance_workers=myAppLocalInstance1,myAppLocalInstance2,myAppLocalInstance3,myAppLocalInstance4
worker.loadbalancerRemote.type=lb
worker.loadbalancerRemote.balance_workers=myAppRemoteInstance1,myAppRemoteInstance2,myAppRemoteInstance3,myAppRemoteInstance4
worker.loadbalancerRemote.sticky_session=True

Answer 1

I do wonder why you are running so many instances on the same machine, but I will assume you have already load-tested this setup and found that it gives the best performance. Also, since the backend of the EAR (a database?) is not mentioned, let's assume there is no problem there.

So I don't think the problem is in the GlassFish / mod_jk setup itself. Check what actually happens when 100 users hit the same page. How many parallel connections do the clients open, and how does that relate to Apache's MaxClients? Do the clients cache static resources at all; are you sending ETag / Last-Modified / Cache-Control headers? Can you reduce the number of requests (inspect what the page actually loads)?
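To make that concrete, here is a rough sketch of the kind of directives involved, assuming the winnt MPM that this Windows build of Apache 2.2 uses (where ThreadsPerChild plays the role of MaxClients) and assuming mod_expires / mod_headers are available in the modules directory; the values are placeholders to tune, not recommendations:

# winnt MPM: a single child process, ThreadsPerChild caps concurrent requests
<IfModule mpm_winnt_module>
    ThreadsPerChild      250
    MaxRequestsPerChild    0
</IfModule>

# Let clients cache static resources instead of re-fetching them every morning
LoadModule expires_module modules/mod_expires.so
LoadModule headers_module modules/mod_headers.so

<IfModule expires_module>
    ExpiresActive On
    # Example: allow CSS/JS/images to be cached client-side for a week
    ExpiresByType text/css               "access plus 7 days"
    ExpiresByType application/javascript "access plus 7 days"
    ExpiresByType image/png              "access plus 7 days"
    ExpiresByType image/gif              "access plus 7 days"
</IfModule>

# Or set Cache-Control explicitly with mod_headers for static file types
<FilesMatch "\.(png|gif|jpg|css|js)$">
    Header set Cache-Control "max-age=604800, public"
</FilesMatch>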

Next, have static resources served by Apache instead of by the GlassFish application servers. This takes load off the application servers and the load balancer and frees up worker slots for the actual dynamic page generation. To do that, extract the CSS/JS files and images from the EAR and put them into a directory under Apache's document root (e.g. /static/). Then make sure the clients request the resources under those URLs, or map the requests accordingly with the RewriteEngine.
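A minimal sketch of that idea, assuming the files are copied into a hypothetical static directory under htdocs; the resource path in the JkUnMount line is only an example, not taken from your application:

# Serve extracted static files directly from Apache
Alias /static/ "C:/Program Files/Apache Software Foundation/Apache2.2/htdocs/static/"

<Directory "C:/Program Files/Apache Software Foundation/Apache2.2/htdocs/static">
    Options None
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>

# Alternatively, keep the existing URLs and just stop forwarding them to GlassFish,
# e.g. if the static resources lived under /myApp/resources/ (hypothetical path):
# JkUnMount /myApp/resources/* loadbalancerLocal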

If generating the pages is expensive but the results can reasonably be cached, consider putting a caching proxy in front of Apache. You will need to get the caching headers under control first, though.
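As one possible variant of that suggestion, you could use Apache's own mod_cache / mod_disk_cache (which ship with 2.2) instead of a separate proxy in front of it; the cache root below is just an example path, and this only pays off once the responses carry cacheable Cache-Control / Expires headers:

LoadModule cache_module modules/mod_cache.so
LoadModule disk_cache_module modules/mod_disk_cache.so

<IfModule mod_cache.c>
    <IfModule mod_disk_cache.c>
        # Example cache location; the Apache service account must be able to write here
        CacheRoot "C:/Program Files/Apache Software Foundation/Apache2.2/cache"
        CacheEnable disk /myApp
        CacheDirLevels 2
        CacheDirLength 1
    </IfModule>
</IfModule>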

Good luck!
