I need to speed up the script below. It takes an input file of unique lines (close to a million of them); for each line there are corresponding values in three lookup files, which I want to append to the output as comma-separated fields. The script works correctly, but it takes several hours to finish, so I am looking for a much faster approach that is also lighter on the system.
#!/bin/bash
# For every ONT in the input file, look up its status, circuit id and the two
# C2 port entries in the three lookup files, then append one CSV line.
while read -r ONT
do
{
    ONSTATUS=$(grep "$ONT," lookupfile1.csv | cut -d" " -f2)
    CID=$(grep "$ONT." lookupfile3.csv | head -1 | cut -d, -f2)
    line1=$(grep "$ONT.C2.P1," lookupfile2.csv | head -1 | cut -d"," -f2,7 | sed 's/ //')
    line2=$(grep "$ONT.C2.P2," lookupfile2.csv | head -1 | cut -d"," -f2,7 | sed 's/ //')
    echo "$ONT,$ONSTATUS,$CID,$line1,$line2" >> BUwithPO.csv
} &   # each input line is handled in a background subshell
done < inputfile.csv
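Roughly, every iteration spawns four grep/cut pipelines (each rescanning a whole lookup file) plus a background subshell, so a million input lines means several million short-lived processes and repeated full scans. As a rough, hypothetical way to see how much of the runtime is per-line overhead, the loop can be timed on a small slice of the input (numbers will vary by system):

# bare loop over the first 1000 lines, no lookups
time head -n 1000 inputfile.csv | while read -r x; do :; done
# same slice with just one of the four lookups per line
time head -n 1000 inputfile.csv | while read -r x; do y=$(grep "$x," lookupfile1.csv | cut -d" " -f2); done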
inputfile.csv contains lines like these:
343OL5:LT1.PN1.ONT1
343OL5:LT1.PN1.ONT10
225OL0:LT1.PN1.ONT34
225OL0:LT1.PN1.ONT39
343OL5:LT1.PN1.ONT100
225OL0:LT1.PN1.ONT57
lookupfile1.csv contains:
343OL5:LT1.PN1.ONT100, Down,Locked,No
225OL0:LT1.PN1.ONT57, Up,Unlocked,Yes
343OL5:LT1.PN1.ONT1, Down,Unlocked,No
225OL0:LT1.PN1.ONT34, Up,Unlocked,Yes
225OL0:LT1.PN1.ONT39, Up,Unlocked,Yes
lookupfile2.csv contains:
225OL0:LT1.PN1.ONT34.C2.P1, +123125302766,REG,DigitMap,Unlocked,_media_BNT,FD_BSFU.xml,
225OL0:LT1.PN1.ONT57.C2.P1, +123125334019,REG,DigitMap,Unlocked,_media_BNT,FD_BSFU.xml,
225OL0:LT1.PN1.ONT57.C2.P2, +123125334819,REG,DigitMap,Unlocked,_media_BNT,FD_BSFU.xml,
343OL5:LT1.PN1.ONT100.C2.P11, +123128994019,REG,DigitMap,Unlocked,_media_ANT,FD_BSFU.xml,
lookupfile3.csv contains:
343OL5:LT1.PON1.ONT100.SERV1,12-654-0330
343OL5:LT1.PON1.ONT100.C1.P1,12-654-0330
343OL5:LT7.PON8.ONT75.SERV1,12-664-1186
225OL0:LT1.PN1.ONT34.C1.P1.FLOW1,12-530-2766
225OL0:LT1.PN1.ONT57.C1.P1.FLOW1,12-533-4019
The output is:
225OL0:LT1.PN1.ONT57, Up,Unlocked,Yes,12-533-4019,+123125334019,FD_BSFU.xml,+123125334819,FD_BSFU.xml
225OL0:LT1.PN1.ONT34, Up,Unlocked,Yes,12-530-2766,+123125302766,FD_BSFU.xml,
343OL5:LT1.PN1.ONT1, Down,Unlocked,No,,,
343OL5:LT1.PN1.ONT100, Down,Locked,No,,,
343OL5:LT1.PN1.ONT10,,,,
225OL0:LT1.PN1.ONT39, Up,Unlocked,Yes,,,
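One direction that might avoid rescanning the lookup files once per input line is to load all three of them into associative arrays and then make a single pass over the input. Below is a minimal awk sketch of that idea, assuming the exact file names and layouts shown above; it keys lookupfile3.csv on the prefix up to the .ONT<number> token (an assumption about the naming scheme, whereas the original grep "$ONT." is a looser substring match), and whitespace handling may differ slightly from the sample output:

awk -F',' '
    # 1st file (lookupfile1.csv): keep everything after the first ", " as the status block
    FNR == NR {
        v = $0
        sub(/^[^,]*, */, "", v)
        status[$1] = v
        next
    }
    # 2nd file (lookupfile2.csv): keep CSV fields 2 and 7 per port key, spaces stripped
    FILENAME == "lookupfile2.csv" {
        f2 = $2
        gsub(/ /, "", f2)
        port[$1] = f2 "," $7
        next
    }
    # 3rd file (lookupfile3.csv): first circuit id seen per ONT prefix (mimics head -1)
    FILENAME == "lookupfile3.csv" {
        if (match($1, /^.*\.ONT[0-9]+/)) {
            k = substr($1, 1, RLENGTH)
            if (!(k in cid)) cid[k] = $2
        }
        next
    }
    # 4th file (inputfile.csv): one output line per ONT, all joins done in memory
    {
        ont = $1
        print ont "," status[ont] "," cid[ont] "," port[ont ".C2.P1"] "," port[ont ".C2.P2"]
    }
' lookupfile1.csv lookupfile2.csv lookupfile3.csv inputfile.csv > BUwithPO.csv

This reads each file exactly once, so the work grows with the total number of lines rather than input lines times lookup-file sizes; the trade-off is that the three lookup files must fit in memory as arrays. Missing keys simply print as empty fields, matching the blank columns in the output above.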