Can you help me with my script, and with writing scripts well? I'm really stuck... There are no errors when I run it with set -x. It just... does something I don't know the name for. I'm new to bash shell scripting... so let me show part of my script...
#!/bin/bash
export PATH=$PATH
set -x
Years=$(date +"%Y")
Months=$(date +"%m")
Days=$(date +"%d")
MAINS=/home/usr_engineer/url_prj
CKA=/data/disks1/url_log/
JT2=/data/disks2/url_log/
BKS=/data/disks3/url_log/
SLP=/data/disks4/url_log/
KBB=/data/disks5/url_log/
BOO=/data/disks6/url_log/
GBL=/data/disks7/url_log/
HDFS=/data/landing/mrs/url_log
kinit -kt /home/usr_engineer/usr_engineer.keytab usr_engineer
fCKA1() {
hadoop fs -put $i $HDFS/cka-np1p/$Years/$Months
sleep 1
IFS=' '
while read names rows sizes; do
echo '`[CKA]`' > $MAINS/logs/cka1
echo $names >> $MAINS/logs/cka1
echo $rows | sed ':a;s/\B[0-9]\{3\}\>/.&/;ta' >> $MAINS/logs/cka1
echo $(awk 'BEGIN {printf "%.2f GB\n",'$sizes'/1073741824}') >> $MAINS/logs/cka1
done < $i.ctrl
echo "[Hadoop Metadata]" >> $MAINS/logs/cka1
Sizes_Tele=$(hadoop fs -ls $HDFS/cka-np1p/$Years/$Months/$i | awk '{print $5}')
Sizes_Telec=$(awk 'BEGIN {printf "%.2f GB\n",'$Sizes_Tele'/1073741824}')
echo "$i ($Sizes_Telec)" >> $MAINS/logs/cka1
cat $MAINS/logs/cka1 | telegram-send --stdin --format markdown
rm -rf $i $i.ctrl
}
fCKA2() {
hadoop fs -put $i $HDFS/cka-np2p/$Years/$Months
sleep 1
IFS=' '
while read names rows sizes; do
echo '`[CKA]`' > $MAINS/logs/cka2
echo $names >> $MAINS/logs/cka2
echo $rows | sed ':a;s/\B[0-9]\{3\}\>/.&/;ta' >> $MAINS/logs/cka2
echo $(awk 'BEGIN {printf "%.2f GB\n",'$sizes'/1073741824}') >> $MAINS/logs/cka2
done < $i.ctrl
echo "[Hadoop Metadata]" >> $MAINS/logs/cka2
Sizes_Tele=$(hadoop fs -ls $HDFS/cka-np2p/$Years/$Months/$i | awk '{print $5}')
Sizes_Telec=$(awk 'BEGIN {printf "%.2f GB\n",'$Sizes_Tele'/1073741824}')
echo "$i ($Sizes_Telec)" >> $MAINS/logs/cka2
cat $MAINS/logs/cka2 | telegram-send --stdin --format markdown
rm -rf $i $i.ctrl
}
fJT21() {
hadoop fs -put $i $HDFS/jt2-np1p/$Years/$Months
sleep 1
IFS=' '
while read names rows sizes; do
echo '`[JT2]`' > $MAINS/logs/jt21
echo $names >> $MAINS/logs/jt21
echo $rows | sed ':a;s/\B[0-9]\{3\}\>/.&/;ta' >> $MAINS/logs/jt21
echo $(awk 'BEGIN {printf "%.2f GB\n",'$sizes'/1073741824}') >> $MAINS/logs/jt21
done < $i.ctrl
echo "[Hadoop Metadata]" >> $MAINS/logs/jt21
Sizes_Tele=$(hadoop fs -ls $HDFS/jt2-np1p/$Years/$Months/$i | awk '{print $5}')
Sizes_Telec=$(awk 'BEGIN {printf "%.2f GB\n",'$Sizes_Tele'/1073741824}')
echo "$i ($Sizes_Telec)" >> $MAINS/logs/jt21
cat $MAINS/logs/jt21 | telegram-send --stdin --format markdown
rm -rf $i $i.ctrl
}
fJT22() {
hadoop fs -put $i $HDFS/jt2-np2p/$Years/$Months
sleep 1
IFS=' '
while read names rows sizes; do
echo '`[JT2]`' > $MAINS/logs/jt22
echo $names >> $MAINS/logs/jt22
echo $rows | sed ':a;s/\B[0-9]\{3\}\>/.&/;ta' >> $MAINS/logs/jt22
echo $(awk 'BEGIN {printf "%.2f GB\n",'$sizes'/1073741824}') >> $MAINS/logs/jt22
done < $i.ctrl
echo "[Hadoop Metadata]" >> $MAINS/logs/jt22
Sizes_Tele=$(hadoop fs -ls $HDFS/jt2-np2p/$Years/$Months/$i | awk '{print $5}')
Sizes_Telec=$(awk 'BEGIN {printf "%.2f GB\n",'$Sizes_Tele'/1073741824}')
echo "$i ($Sizes_Telec)" >> $MAINS/logs/jt22
cat $MAINS/logs/jt22 | telegram-send --stdin --format markdown
rm -rf $i $i.ctrl
}
sleep 2
cd $CKA
sleep 2
for i in $(ls -lh $CKA | grep -v .ctrl | grep url | awk '{print $9}');do
echo $i | grep cka-np1p
if [ $? -eq 0 ]; then
fCKA1
else
echo $i | grep cka-np2p
if [ $? -eq 0 ]; then
fCKA2
fi
fi
done
sleep 2
cd $JT2
sleep 2
for i in $(ls -lh $JT2 | grep -v .ctrl | grep url | awk '{print $9}');do
echo $i | grep jt2-np1p
if [ $? -eq 0 ]; then
fJT21
else
echo $i | grep jt2-np2p
if [ $? -eq 0 ]; then
fJT22
fi
fi
done
I run the script with this command so the log is saved:
nohup bash name.sh > name.log 2>&1 &
The result is here: https://pastebin.com/12yhttgG
I have already added sleep calls and changed the for and grep patterns, but... if I split the script into cka.sh and jt2.sh and run them at the same time, the scripts do not break. If you look at the Pastebin link, the error starts at line 181; after that line it should be running the hadoop command :'(
God... I've spent 6 hours on this... please help me... TeamViewer or anything that could solve this is very welcome.
Answer 1
So the problem you are seeing appears to be that you run a for i in $(ls ...) expecting to get one file at a time, but instead get a single entry containing the whole file listing as one multi-line string.
This is caused by the IFS=' ' assignments in your functions. They make the space the only separator, which means the newline is no longer treated as one (and the output of ls needs newlines to be split into lines).
Since you set IFS inside fCKA1 and fCKA2, you only see the problem on the second loop, because it runs after those functions.
You could work around this, for example, by saving the original IFS before setting it and restoring it at the end of the function. For example:
fCKA1() {
hadoop fs -put $i $HDFS/cka-np1p/$Years/$Months
sleep 1
save_IFS=$IFS
IFS=' '
while read names rows sizes; do
...
rm -rf $i $i.ctrl
IFS=$save_IFS
}
Perhaps better, you can set it for the read command only, like this:
IFS=' ' read names rows sizes
In the context of your function, that gives you:
fCKA1() {
hadoop fs -put $i $HDFS/cka-np1p/$Years/$Months
sleep 1
while IFS=' ' read names rows sizes; do
...
rm -rf $i $i.ctrl
}
There is also the question of whether you need to set IFS at all... The default is to split on whitespace, any whitespace... Do you really need to split on spaces only? Try removing the IFS=' ' assignment entirely; that might work for you as well!
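As a minimal sketch of that last point (the sample line is made up, since the real .ctrl format isn't shown in the question): read with the default IFS already splits a line on any run of blanks into names, rows, and sizes, and the sed thousands-separator formatting from the script works on the result unchanged.

```shell
#!/bin/bash
# Hypothetical .ctrl line: name, row count, size in bytes
line='url_cka-np1p_file.csv 1234567 2147483648'

# Default IFS splits on any whitespace; no IFS tweak needed.
read -r names rows sizes <<< "$line"

echo "$names"    # url_cka-np1p_file.csv
echo "$rows"     # 1234567
echo "$sizes"    # 2147483648

# The same thousands-separator formatting the script applies (GNU sed):
echo "$rows" | sed ':a;s/\B[0-9]\{3\}\>/.&/;ta'    # 1.234.567
```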