I have a log file laid out in the following format:
# This file was created Thu Dec 17 16:01:26 2020
# Created by:
# :-) GROMACS - gmx gyrate, 2019.3 (-:
#
# Executable: /usr/local/bin/../Cellar/gromacs/2019.3/bin/gmx
# Data prefix: /usr/local/bin/../Cellar/gromacs/2019.3
# Working dir: /Users/gleb/Desktop/DO/unity_or_separation
# Command line:
# gmx gyrate -f /Users/gleb/Desktop/DO/unity_or_separation/storage/7000_cne_lig177/1AllBoxes_7000_cne_lig177.xtc -s /Users/gleb/Desktop/DO/unity_or_separation/storage/7000_cne_lig177/lig_1AllBoxes_7000_cne_lig177.pdb -o /Users/gleb/Desktop/DO/unity_or_separation/storage/7000_cne_lig177/RG/RG_1AllBoxes_7000_cne_lig177.xvg
# gmx gyrate is part of G R O M A C S:
#
# God Rules Over Mankind, Animals, Cosmos and Such
#
@ title "Radius of gyration (total and around axes)"
@ xaxis label "Time (ps)"
@ yaxis label "Rg (nm)"
@TYPE xy
@ view 0.15, 0.15, 0.75, 0.85
@ legend on
@ legend box on
@ legend loctype view
@ legend 0.78, 0.8
@ legend length 2
@ s0 legend "Rg"
@ s1 legend "Rg\sX\N"
@ s2 legend "Rg\sY\N"
@ s3 legend "Rg\sZ\N"
1 0.535827 0.476343 0.375777 0.453993
2 0.509863 0.450424 0.333084 0.453975
3 0.51779 0.374447 0.44955 0.440349
4 0.535215 0.392331 0.442183 0.472716
5 0.542371 0.468222 0.383178 0.47146
6 0.49479 0.340223 0.42002 0.44437
7 0.495905 0.370873 0.445952 0.394239
8 0.518463 0.424257 0.400878 0.443746
Given this data, I need to drop every comment line (those starting with # or @), keep only the second column of the numeric table at the bottom, and finally multiply the values by 10:
# this is the second column after conversion
5.4
5.1
5.2
5.4
5.4
4.9
5.0
5.2
I can do this with a combination of sed + awk:
sed -i '' -e '/^[#@]/d' "${storage}"/"${experiment}"/RG/RG_${pdb_name}.xvg
awk -F' ' '{ printf("%.1f\n", $2*10) }' "${storage}"/"${experiment}"/RG/RG_${pdb_name}.xvg > "${storage}"/"${experiment}"/RG/RG_${pdb_name}..xvg
Is it possible to do all of this with sed alone (the first command), and so avoid creating the new file that the awk step produces?
Answer 1
Sed is not made for arithmetic. You could try clumsy workarounds, but Awk is better at this. Note that the !/^[#@]/ pattern skips the header lines itself, so a single Awk invocation replaces both of your commands:
awk '!/^[#@]/{printf("%.1f\n",$2*10)}' file
With GNU Awk, add -i inplace to edit the file in place.
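For example, a minimal sketch, assuming GNU Awk is installed as gawk (on many Linux systems plain awk is already GNU Awk):
gawk -i inplace '!/^[#@]/{printf("%.1f\n",$2*10)}' file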
If you don't have GNU Awk, you can pipe the output to sponge (from the moreutils package):
awk '!/^[#@]/{printf("%.1f\n",$2*10)}' file | sponge file
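sponge soaks up all of its input before opening file for writing; a plain > file redirection would truncate the file before awk had finished reading it.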
Or use the good old overwrite (which is what happens behind the scenes anyway...):
awk '!/^[#@]/{printf("%.1f\n",$2*10)}' file > newfile &&
mv newfile file
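For completeness, here is what one of those clumsy sed-only workarounds might look like, written in the same BSD sed -i '' style as the question. Since multiplying by 10 just shifts the decimal point, it can be faked with pure text substitution; this is a sketch, and the last step truncates to one decimal instead of rounding (0.535827 comes out as 5.3, not 5.4), which is exactly the kind of limitation that makes Awk the better tool:
sed -i '' -E \
  -e '/^[#@]/d' \
  -e 's/^ *[^ ]+ +([0-9]*)\.([0-9])([0-9]*).*/\1\2.\3/' \
  -e 's/^0+([0-9])/\1/' \
  -e 's/(\.[0-9])[0-9]*$/\1/' file
The first expression deletes the header lines, the second keeps column 2 while shifting its decimal point one place to the right, the third strips the resulting leading zero, and the fourth truncates to one decimal place.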