High MySQL connection count

I am struggling to work out why the results are what they are. Perhaps I am missing something obvious. Apologies that this isn't very specific, but if anyone can point out areas worth looking at, it would be a great help. Thanks.

Load testing

The load test generates roughly 5,486 database writes per minute, or about 90 per second (a quick way to cross-check this rate is sketched after the error list below). When the servers become overwhelmed, I can see the following errors in the logs:

  • (11: Resource temporarily unavailable) while connecting to upstream
  • upstream timed out (110: Connection timed out) while reading response header from upstream
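
As a cross-check on the write rate quoted above, one option is to sample MySQL's own statement counters twice and take the difference; this is a generic diagnostic sketch rather than anything from the original post, and the 60-second interval is arbitrary.

-- Sample the write-related counters, wait ~60 seconds, then run the same
-- statement again; the difference divided by the interval gives writes/second.
SHOW GLOBAL STATUS WHERE Variable_name IN
    ('Com_insert', 'Com_update', 'Com_delete', 'Questions');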

Problems

The following problems show up when running the load tests:

  • The pages that perform updates/writes (the pages the load test targets) slow down, taking 10 to 20 seconds to load.
  • Nginx returns seemingly random 404s on any page. The results show that perhaps 10-20% of requests during the peak end in a 404.

I think these are two separate problems that may well be unrelated. I cannot see any flat-lining in the graphs that would indicate a limit being hit.

  • Web server CPU usage sits around 60% and stays stable. RAM looks fine.
  • Database server CPU usage sits around 20% and stays stable. RAM looks fine.
  • Database connections climb to around 1,500 of the 2,000 available. This looks shaky, although the line isn't flat, which suggests the limit isn't being hit (see the query sketch after this list).
  • Network connection limits appear to be fine.
  • Tables are indexed where possible/appropriate.
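
To put numbers behind the connection graph, the queries below read MySQL's own counters and group the open connections by what they are doing; this is a generic diagnostic sketch rather than anything from the original post.

-- How close the server is to its configured ceiling.
SHOW VARIABLES LIKE 'max_connections';
SHOW GLOBAL STATUS LIKE 'Threads_connected';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';

-- Group open connections by state; a pile-up in lock-related states points
-- at contention rather than raw traffic volume.
SELECT command, state, COUNT(*) AS connections
FROM information_schema.processlist
GROUP BY command, state
ORDER BY connections DESC;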

Infrastructure

AWS RDS MySQL: 1 x db.m3.xlarge for write operations, 1 x db.m3.xlarge read replica for read operations.

AWS EC2 web servers: 6 x c3.2xlarge running Linux, Nginx and PHP-FPM.

Configuration

/etc/php-fpm.d/domain.com.conf

[domain.com]

user = nginx
group = nginx

;;;The address on which to accept FastCGI requests
listen = /var/run/php-fpm/domain.com.sock

;;;A value of '-1' means unlimited, although this may be capped by the ulimit hard limit.
;;;May be worth setting explicitly because of the above.
listen.backlog = -1

;;;dynamic - the number of child processes is set dynamically based on the following directives: pm.max_children, pm.start_servers, pm.min_spare_servers, pm.max_spare_servers.
pm = dynamic
;;;maximum number of child processes to be created when pm is set to dynamic
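;;;Note: across the 6 web servers this allows up to 6 x 512 = 3,072 workers; if each
;;;busy worker opens its own MySQL connection, that alone can exceed a 2,000-connection limit.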
pm.max_children = 512
;;;The number of child processes created on startup. Used only when pm is set to dynamic.
pm.start_servers = 8
;;;The desired minimum number of idle server processes. Used only when pm is set to dynamic.
pm.min_spare_servers = 2
;;;The desired maximum number of idle server processes. Used only when pm is set to dynamic.
pm.max_spare_servers = 16
;;;The number of requests each child process should execute before respawning.
pm.max_requests = 500
;;;The URI to view the FPM status page.
pm.status_path = /status/fpm/domain.com
;;;The timeout for serving a single request. This option should be used when the 'max_execution_time' ini option does not stop script execution.
request_terminate_timeout = 30

;;;rlimit_files int
;;;Set open file descriptor rlimit. Default value: system defined value.

;;;rlimit_core int
;;;Set max core size rlimit. Possible Values: 'unlimited' or an integer greater or equal to 0. Default value: system defined value.

php_admin_value[post_max_size] = 8M
php_admin_value[upload_max_filesize] = 8M

php_admin_value[disable_functions] = exec,passthru,system,proc_open,popen,show_source

;;; Site specific custom flags go here

;;; End of site specific flags

slowlog = /var/log/nginx/slow-query-$pool.log

request_slowlog_timeout = 10s

chdir = /

Nginx - /etc/nginx/nginx.conf

events {
    worker_connections 19000;
    # essential for linux, optimized to serve many clients with each thread
    use epoll;
    multi_accept on;
}

worker_rlimit_nofile 20000;

http {
    include         /etc/nginx/mime.types;
    default_type    application/octet-stream;

    log_format  proxy_combined  '$http_x_real_ip - $remote_user [$time_local] "$request" '
                                '$status $body_bytes_sent "$http_referer" "$http_user_agent"';

    access_log      /var/log/nginx/access.log   proxy_combined;

    sendfile        on;

    ## Start: Size Limits & Buffer Overflows ##
    client_body_buffer_size     1K;
    client_header_buffer_size   1k;
    # client_max_body_size        1k;
    large_client_header_buffers 2 1k;
    ## End: Size Limits & Buffer Overflows ##

    ## Start: Caching file descriptors ##
    open_file_cache             max=1000 inactive=20s;
    open_file_cache_valid       30s;
    open_file_cache_min_uses    2;
    open_file_cache_errors      on;
    ## End: Caching file descriptors ##

    ## Start: Timeouts ##
    client_body_timeout   10;
    client_header_timeout 10;
    keepalive_timeout     5 5;
    send_timeout          10;
    ## End: Timeouts ##

    server_tokens       off;
    tcp_nodelay         on;

    gzip                on;
    gzip_http_version   1.1;
    gzip_vary           on;
    gzip_comp_level     6;
    gzip_proxied        any;
    gzip_types          text/plain text/html text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript text/x-js;
    gzip_buffers        16 8k;
    gzip_disable        "MSIE [1-6]\.(?!.*SV1)";

    client_max_body_size 30M;

    proxy_cache_path /var/cache/nginx/c2b levels=1:2 keys_zone=c2b-cache:8m max_size=100m inactive=60m;
    proxy_temp_path /var/cache/tmp;
    proxy_ignore_headers Set-Cookie X-Accel-Expires Expires Cache-Control;

    # allow the server to close the connection after a client stops responding. Frees up socket-associated memory.
    reset_timedout_connection on;

    include /etc/nginx/conf.d/*.conf;
}

Nginx site-specific - /etc/nginx/conf.d/domain.com

# pass the PHP scripts to the FastCGI server listening on the domain.com unix socket
location ~ \.php$ {
    fastcgi_pass unix:/var/run/php-fpm/domain.com.sock;
    fastcgi_index  index.php;

    fastcgi_param  SCRIPT_FILENAME  /var/www/domain.com/public_html/$fastcgi_script_name;
    include        fastcgi_params;
    fastcgi_read_timeout 30;
}

Answer 1

I found the crux of the problem: I changed the MySQL database tables from MyISAM to InnoDB (where possible; I believe converting tables that rely on full-text search will break things).

Here is some material on it -

MyISAM table lock issues

A quick Google search will turn up more information.
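
One way to confirm that MyISAM table locks are the bottleneck, rather than raw load, is to watch MySQL's lock counters while the test runs; these are standard status variables and not something quoted from the original answer.

-- Table_locks_waited counts lock requests that could not be granted
-- immediately; if it climbs quickly relative to Table_locks_immediate
-- during the test, writes are serialising on MyISAM table locks.
SHOW GLOBAL STATUS LIKE 'Table_locks%';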

This has fixed the problem. There are now around 60,000 successful connections per minute.
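
For anyone making the same change, the sketch below shows one way to go about it; the schema and table names are placeholders, and the FULLTEXT caveat reflects the fact that InnoDB only gained full-text indexes in MySQL 5.6.

-- List the tables still on MyISAM (replace the placeholder schema name).
SELECT table_name, engine
FROM information_schema.tables
WHERE table_schema = 'your_database'
  AND engine = 'MyISAM';

-- Convert a table (placeholder name). On MySQL versions before 5.6, tables
-- with FULLTEXT indexes need to stay on MyISAM or have the search reworked.
ALTER TABLE your_table ENGINE = InnoDB;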
