pthread_cond_wait or nanosleep causing high CPU usage

First of all, I have searched stackoverflow.com and googled, but found no useful results.

My question is: why does pthread_cond_wait consume so much CPU? I don't think this is normal.

My program runs into this situation: the %CPU rises intermittently and stays high for more than ten seconds at a time. When %CPU is stable at a low level, it is around 1; when it spikes, it ranges between 50 and 300.

I used top -H -p to find the single thread consuming the most CPU while the process's CPU percentage was high, then used strace -T -r -c -p for more information:

strace -T -r -c -p 1701

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 88.54    0.482646          43     11157      3020 futex
  9.85    0.053682           0    131052           read
  1.50    0.008192          38       213           nanosleep
  0.04    0.000214           1       239           write
  0.03    0.000154           1       213           open
  0.02    0.000111           1       213           munmap
  0.02    0.000085           0       239           stat
  0.01    0.000044           0       213           mmap
  0.00    0.000018           0       213           close
  0.00    0.000000           0       213           fstat
  0.00    0.000000           0       213           lseek
------ ----------- ----------- --------- --------- ----------------
100.00    0.545146                144178      3020 total

The stack of that thread:

Thread 6 (Thread 0x7f1404f41700 (LWP 1701)):

#0  0x0000003d6f60b63c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x0000000000406045 in foo(void*) ()
#2  0x0000003d6f607a51 in start_thread () from /lib64/libpthread.so.0
#3  0x0000003d6eee893d in clone () from /lib64/libc.so.6

And the relevant code snippets:

static std::deque<std::string> conveyor;
static pthread_mutex_t conveyor_mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t conveyor_cond = PTHREAD_COND_INITIALIZER;

#define POP_NUM 4

static void *foo(void *arg)
{
    write_log(LOG_INFO, "thread foo created");

    int ret = pthread_detach(pthread_self());
    if (ret != 0) {
        write_log(LOG_ERR, "pthread_detach[foo] failed with errno[%d]", ret);
        write_log(LOG_ERR, "thread foo exiting");
        return (void *)-1;
    }

    std::string paths[POP_NUM], topic;
    int n;

    do {
        // Take the queue lock; on failure back off for a second and retry.
        if ((ret = pthread_mutex_lock(&conveyor_mtx)) != 0) {
            write_log(LOG_WARNING, "pthread_mutex_lock[conveyor_mtx] failed"
                      " with errno[%d]", ret);
            sleep(1);
            continue;
        }
        // Block until the producer signals that the conveyor is non-empty.
        while (conveyor.empty()) {
            write_log(LOG_INFO, "conveyor empty");
            pthread_cond_wait(&conveyor_cond, &conveyor_mtx);
        }
        // Pop up to POP_NUM paths; n counts how many were actually popped.
        for (n = 0; n < POP_NUM && !conveyor.empty(); n++) {
            paths[n].assign(conveyor.front());
            conveyor.pop_front();
        }
        if ((ret = pthread_mutex_unlock(&conveyor_mtx)) != 0) {
            write_log(LOG_WARNING, "pthread_mutex_unlock[conveyor_mtx] failed"
                      " with errno[%d]", ret);
        }
        // Process the popped paths outside the critical section.
        for (int i = 0; i < n; i++) {
            if (!extract_topic_from_path(paths[i], topic)) continue;
            produce_msgs_and_save_offset(topics[topic],
                                         const_cast<char *>(paths[i].c_str()));
        }
    } while (true);

    write_log(LOG_ERR, "thread foo exiting");

    return (void *)0;
}

static void *bar(void *arg)
{
    write_log(LOG_INFO, "thread bar created");

    // The inotify descriptor is passed in as the thread argument.
    int inot_fd = (int)(intptr_t)arg, n, ret;
    struct pollfd pfd = { inot_fd, POLLIN | POLLPRI, 0 };

    do {
        //n = poll(&pfd, 1, -1);
        //n = poll(&pfd, 1, 300000);
        n = poll(&pfd, 1, 120000);
        if (n == -1) {
            if (errno == EINTR) {
                write_log(LOG_WARNING, "poll interrupted by a signal");
                continue;
            }
            write_log(LOG_ERR, "poll failed with errno[%d]", errno);
            write_log(LOG_ERR, "thread bar exiting");
            return (void *)-1;
        } else if (n == 0) {
            write_log(LOG_WARNING, "poll timed out after 120 seconds");
            sleep(60);
        }

        // Try up to three times to take the queue lock.
        int i;
        for (i = 0; i < 3; i++) {
            if ((ret = pthread_mutex_lock(&conveyor_mtx)) == 0) break;
            write_log(LOG_WARNING, "pthread_mutex_lock[conveyor_mtx] failed"
                      "[%d] with errno[%d]", i, ret);
        }
        if (i == 3) {
            write_log(LOG_ERR, "thread bar exiting");
            return (void *)-1;
        }
        // baz() enqueues newly seen paths onto the conveyor; broadcast so
        // any consumers blocked on the condition variable wake up.
        if ((n = baz(inot_fd)) > 0) {
            pthread_mutex_unlock(&conveyor_mtx);
            pthread_cond_broadcast(&conveyor_cond);
        } else if (n == 0) {
            pthread_mutex_unlock(&conveyor_mtx);
        } else {
            pthread_mutex_unlock(&conveyor_mtx);
            pthread_cond_broadcast(&conveyor_cond);
            write_log(LOG_ERR, "thread bar exiting");
            return (void *)-1;
        }

        if ((n = poll_producer(producer, 1000, 2)) > 0) {
            write_log(LOG_INFO, "rdkafka poll events[%d] of producer"
                      " for possible big outq size", n);
        }
    } while (true);

    write_log(LOG_ERR, "thread bar exiting");

    return (void *)0;
}

Also, if I don't use pthread_cond_wait/pthread_cond_broadcast and instead replace the "pthread_cond_wait" in the snippet above with "sleep", strace shows that the costliest system call becomes nanosleep.
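For reference, a minimal sketch of what that sleep-based variant of the consumer loop might look like (my reconstruction, assuming the same conveyor globals as above; the exact replacement is not shown in the question). Instead of blocking on the condition variable it polls the queue, which is why nanosleep dominates the strace output:

static void *foo_polling(void *arg)  // hypothetical name for the variant
{
    std::string paths[POP_NUM];
    int n;

    do {
        if (pthread_mutex_lock(&conveyor_mtx) != 0) {
            sleep(1);
            continue;
        }
        // Poll instead of waiting: release the lock, sleep (glibc's
        // sleep() is implemented on top of nanosleep, hence the strace
        // counts), then re-acquire the lock and re-check the queue.
        while (conveyor.empty()) {
            pthread_mutex_unlock(&conveyor_mtx);
            sleep(1);
            pthread_mutex_lock(&conveyor_mtx);
        }
        for (n = 0; n < POP_NUM && !conveyor.empty(); n++) {
            paths[n].assign(conveyor.front());
            conveyor.pop_front();
        }
        pthread_mutex_unlock(&conveyor_mtx);
        // ... process paths[0..n-1] as in foo() above ...
    } while (true);

    return (void *)0;
}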

uname -a: Linux d144122 2.6.32-358.el6.x86_64 #1 SMP Fri Feb 22 00:31:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

Answer 1

I ran into this problem with a kernel that did not support the futex system call.

The relevant kernel option is CONFIG_FUTEX. Make sure your kernel is built with it (it usually is).
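If you want to check futex support at runtime rather than by inspecting the kernel config, a small probe like the following should work (my sketch, not part of the original answer): a kernel built without CONFIG_FUTEX fails the raw syscall with ENOSYS, while a supporting kernel returns EAGAIN here because the expected value does not match.

#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <cerrno>
#include <cstdio>

int main()
{
    int word = 0;
    // FUTEX_WAIT compares *uaddr against the expected value (1 here).
    // Since word is 0, a futex-capable kernel returns -1/EAGAIN at once;
    // a kernel built without CONFIG_FUTEX returns -1/ENOSYS instead.
    long rc = syscall(SYS_futex, &word, FUTEX_WAIT, 1, NULL, NULL, 0);
    if (rc == -1 && errno == ENOSYS)
        printf("futex: not supported by this kernel\n");
    else
        printf("futex: supported (rc=%ld, errno=%d)\n", rc, errno);
    return 0;
}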

In fact, the sheer number of futex errors in your strace output makes me strongly suspect this is the problem.

(I know this question is old, but it was a frustrating one, and I wanted to document the solution for other poor lost souls.)

Answer 2

I ran into the same problem. In my case it was caused by the use of #pragma pack, as in this thread: https://stackoverflow.com/questions/22166474/pthread-cond-wait-doesnt-make-cpu-sleep

Removing the pack pragma solved my problem...
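To illustrate the failure mode that answer and the linked thread describe (a sketch under the assumption that a pack pragma was accidentally left in effect across <pthread.h>): packing changes the alignment, and potentially the layout, of the pthread types, so the application's view of a pthread_cond_t no longer matches what libpthread was compiled against, and waits can misbehave.

// Build with: g++ -pthread packdemo.cpp
#pragma pack(push, 1)   // BUG pattern: pack pragma still open ...
#include <pthread.h>    // ... while <pthread.h> defines pthread_cond_t
#pragma pack(pop)

#include <cstdio>

int main()
{
    // On x86_64 glibc these types are normally 48/40 bytes with 8-byte
    // alignment; under the open pack(1) pragma the alignment (and
    // possibly the size) changes, so the application and libpthread
    // disagree about the objects' layout.
    printf("sizeof(pthread_cond_t)  = %zu, alignof = %zu\n",
           sizeof(pthread_cond_t), alignof(pthread_cond_t));
    printf("sizeof(pthread_mutex_t) = %zu, alignof = %zu\n",
           sizeof(pthread_mutex_t), alignof(pthread_mutex_t));
    return 0;
}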
