We have a Hadoop cluster with 487 DataNode machines (each DataNode machine also runs a NodeManager service). All machines are physical DELL servers running RHEL 7.9.
Each DataNode machine has 12 disks, each 12 TB in size.
The Hadoop cluster is installed from HDP packages (formerly under Hortonworks, now under Cloudera).
Users complain that Spark applications running on the DataNode machines are slow.
After investigation, we see the following warnings in the DataNode logs:
2024-03-18 17:41:30,230 WARN datanode.DataNode (BlockReceiver.java:receivePacket(567)) - Slow BlockReceiver write packet to mirror took 401ms (threshold=300ms), downstream DNs=[172.87.171.24:50010, 172.87.171.23:50010]
2024-03-18 17:41:49,795 WARN datanode.DataNode (BlockReceiver.java:receivePacket(567)) - Slow BlockReceiver write packet to mirror took 410ms (threshold=300ms), downstream DNs=[172.87.171.26:50010, 172.87.171.31:50010]
2024-03-18 18:06:29,585 WARN datanode.DataNode (BlockReceiver.java:receivePacket(567)) - Slow BlockReceiver write packet to mirror took 303ms (threshold=300ms), downstream DNs=[172.87.171.34:50010, 172.87.171.22:50010]
2024-03-18 18:18:55,931 WARN datanode.DataNode (BlockReceiver.java:receivePacket(567)) - Slow BlockReceiver write packet to mirror took 729ms (threshold=300ms), downstream DNs=[172.87.11.27:50010]
From the logs above we can see the warning Slow BlockReceiver write packet to mirror took xxms, pointing at DataNode machines such as 172.87.171.24 and 172.87.171.23.
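To see whether a few downstream DataNodes dominate these warnings, a small aggregation over the DataNode log can help. This is only a sketch; the log path is an assumption based on a typical HDP layout and may differ on your machines:

grep 'Slow BlockReceiver write packet to mirror' /var/log/hadoop/hdfs/hadoop-hdfs-datanode-*.log \
  | grep -o 'downstream DNs=\[[^]]*\]' \
  | tr -d '[] ' | sed 's/downstreamDNs=//' | tr ',' '\n' \
  | sort | uniq -c | sort -rn | head

If one or two peers account for most of the warnings, the problem is probably local to those machines; if the warnings are spread evenly across the cluster, a shared cause (network fabric, configuration) is more likely.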
Based on my understanding, the warning Slow BlockReceiver write packet to mirror indicates a possible delay in writing the block to the OS cache or to disk.
So I tried to collect the possible causes of this warning, and here is what I have:
- delay in writing the block to the OS cache or to disk
- the cluster has reached, or is close to, its resource limits (memory, CPU, or disk)
- network problems between the machines
(The 300ms threshold that appears in the log is itself configurable; see the check just below.)
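If I read the DataNode behavior correctly, the threshold=300ms in the log comes from the dfs.datanode.slow.io.warning.threshold.ms property (default 300). Raising it would only hide the symptom, but verifying the effective value rules out a local misconfiguration:

hdfs getconf -confKey dfs.datanode.slow.io.warning.threshold.ms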
From my verification, I did not see any disk, CPU, or memory problem; we checked all machines (an example of the disk-side check is shown below).
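As a hedged example of the disk-side check on RHEL 7, sysstat's iostat shows per-disk write latency and utilization; sustained w_await in the tens of milliseconds or %util pinned near 100 on the data disks would line up with the slow-mirror warnings:

iostat -x 5 3

(-x gives extended statistics, sampled every 5 seconds for 3 reports; the first report is an average since boot and can be ignored.)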
On the network side, I did not see any particular problem related to the machines themselves.
We also used iperf3 to check the bandwidth between one machine and another.
Here is an example between data-node01 and data-node03 (based on my understanding, and please correct me if I am wrong, the bandwidth looks OK):
From data-node01 (server side):
iperf3 -i 10 -s
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-10.00 sec 7.90 GBytes 6.78 Gbits/sec
[ 5] 10.00-20.00 sec 8.21 GBytes 7.05 Gbits/sec
[ 5] 20.00-30.00 sec 7.25 GBytes 6.23 Gbits/sec
[ 5] 30.00-40.00 sec 7.16 GBytes 6.15 Gbits/sec
[ 5] 40.00-50.00 sec 7.08 GBytes 6.08 Gbits/sec
[ 5] 50.00-60.00 sec 6.27 GBytes 5.39 Gbits/sec
[ 5] 60.00-60.04 sec 35.4 MBytes 7.51 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-60.04 sec 0.00 Bytes 0.00 bits/sec sender
[ 5] 0.00-60.04 sec 43.9 GBytes 6.28 Gbits/sec receiver
From data-node03 (client side):
iperf3 -i 1 -t 60 -c 172.87.171.84
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 792 MBytes 6.64 Gbits/sec 0 3.02 MBytes
[ 4] 1.00-2.00 sec 834 MBytes 6.99 Gbits/sec 54 2.26 MBytes
[ 4] 2.00-3.00 sec 960 MBytes 8.05 Gbits/sec 0 2.49 MBytes
[ 4] 3.00-4.00 sec 896 MBytes 7.52 Gbits/sec 0 2.62 MBytes
[ 4] 4.00-5.00 sec 790 MBytes 6.63 Gbits/sec 0 2.70 MBytes
[ 4] 5.00-6.00 sec 838 MBytes 7.03 Gbits/sec 4 1.97 MBytes
[ 4] 6.00-7.00 sec 816 MBytes 6.85 Gbits/sec 0 2.17 MBytes
[ 4] 7.00-8.00 sec 728 MBytes 6.10 Gbits/sec 0 2.37 MBytes
[ 4] 8.00-9.00 sec 692 MBytes 5.81 Gbits/sec 47 1.74 MBytes
[ 4] 9.00-10.00 sec 778 MBytes 6.52 Gbits/sec 0 1.91 MBytes
[ 4] 10.00-11.00 sec 785 MBytes 6.58 Gbits/sec 48 1.57 MBytes
[ 4] 11.00-12.00 sec 861 MBytes 7.23 Gbits/sec 0 1.84 MBytes
[ 4] 12.00-13.00 sec 844 MBytes 7.08 Gbits/sec 0 1.96 MBytes
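One caveat about this test, as far as I know: a single iperf3 stream often cannot saturate a 10G link by itself, so ~6-7 Gbits/sec from one stream does not by itself prove the link is degraded. Running the client with several parallel streams (-P) would show whether the path can reach line rate:

iperf3 -i 10 -t 60 -P 4 -c 172.87.171.84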
Note - the NIC speed is 10G (we checked it with ethtool, as shown below).
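A minimal version of that speed check (p1p1 is the interface name on our machines):

ethtool p1p1 | grep -E 'Speed|Duplex'

On a healthy link this should report Speed: 10000Mb/s and Duplex: Full.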
We also checked the firmware version of the NIC:
ethtool -i p1p1
driver: i40e
version: 2.8.20-k
firmware-version: 8.40 0x8000af82 20.5.13
expansion-rom-version:
bus-info: 0000:3b:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
We also checked the kernel messages (dmesg) but did not see anything special there.
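Beyond dmesg, the NIC's own error and drop counters may be worth a look, since retransmits like the ones in the iperf3 output can come from the adapter or the switch port; this is only a suggestion of where to look:

ethtool -S p1p1 | grep -iE 'err|drop'
ip -s link show p1p1

Non-zero, growing error or drop counters on either side would point back at the network path.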
From dmesg, regarding the CPU:
dmesg | grep CPU
[ 0.000000] smpboot: Allowing 32 CPUs, 0 hotplug CPUs
[ 0.000000] smpboot: Ignoring 160 unusable CPUs in ACPI table
[ 0.000000] setup_percpu: NR_CPUS:5120 nr_cpumask_bits:32 nr_cpu_ids:32 nr_node_ids:2
[ 0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=32, Nodes=2
[ 0.000000] RCU restricting CPUs from NR_CPUS=5120 to nr_cpu_ids=32.
[ 0.184771] CPU0: Thermal monitoring enabled (TM1)
[ 0.184943] TAA: Vulnerable: Clear CPU buffers attempted, no microcode
[ 0.184944] MDS: Vulnerable: Clear CPU buffers attempted, no microcode
[ 0.324340] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz (fam: 06, model: 4f, stepping: 01)
[ 0.327772] smpboot: CPU 1 Converting physical 0 to logical die 1
[ 0.408126] NMI watchdog: enabled on all CPUs, permanently consumes one hw-PMU counter.
[ 0.436824] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
[ 0.436828] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
[ 0.464933] Brought up 32 CPUs
[ 3.223989] acpi LNXCPU:7e: hash matches
[ 49.145592] L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.