Is there a way to re-align tabular data in the terminal after filtering with grep?

A good example of the kind of output I get:

  • Try ss -axl to show the listening UNIX domain sockets. For example, this might give:

      Netid     State      Recv-Q     Send-Q                                                                Local Address:Port              Peer Address:Port     
      u_str     LISTEN     0          0                                                              /run/systemd/private 9683                         * 0        
      u_str     LISTEN     0          0                                                        /run/systemd/fsck.progress 9690                         * 0        
      u_str     LISTEN     0          0                                                       /run/systemd/journal/stdout 9705                         * 0        
      u_str     LISTEN     0          0                                                   /var/run/dbus/system_bus_socket 13830                        * 0        
      u_str     LISTEN     0          0                                                                   /run/thd.socket 13833                        * 0        
      u_str     LISTEN     0          0                                                              /var/run/docker.sock 13835                        * 0        
      u_str     LISTEN     0          0                                                          /run/avahi-daemon/socket 13837                        * 0        
      u_str     LISTEN     0          0                                                    /run/user/1000/systemd/private 16088                        * 0        
      u_str     LISTEN     0          0                                                    /run/user/1000/gnupg/S.dirmngr 16093                        * 0        
      u_str     LISTEN     0          0                                            /run/user/1000/gnupg/S.gpg-agent.extra 16094                        * 0        
      u_str     LISTEN     0          0                                              /run/user/1000/gnupg/S.gpg-agent.ssh 16095                        * 0        
      u_str     LISTEN     0          0                                                  /run/user/1000/gnupg/S.gpg-agent 16096                        * 0        
      u_str     LISTEN     0          0                                                                /run/user/1000/bus 16097                        * 0        
      u_str     LISTEN     0          0                                          /run/user/1000/gnupg/S.gpg-agent.browser 16098                        * 0        
      u_str     LISTEN     0          0                                                              /var/run/dhcpcd.sock 16530                        * 0        
      u_str     LISTEN     0          0                                                       /var/run/dhcpcd.unpriv.sock 16531                        * 0        
      u_str     LISTEN     0          0                                         /run/user/1000/vscode-git-363c8837e0.sock 2244296                      * 0        
      u_str     LISTEN     0          0               /run/user/1000/vscode-ipc-774023f9-edb0-4abd-8647-17c77c55a895.sock 2245117                      * 0        
      u_str     LISTEN     0          0               /run/user/1000/vscode-ipc-8d96b721-cae4-439d-a861-309d7c184d10.sock 2344995                      * 0       
    
  • Now filter the list with grep: ss -axl | grep systemd

      u_str              LISTEN              0                    0                                                                     /run/systemd/private 9683                                                * 0                                  
      u_str              LISTEN              0                    0                                                               /run/systemd/fsck.progress 9690                                                * 0                                  
      u_str              LISTEN              0                    0                                                              /run/systemd/journal/stdout 9705                                                * 0                                  
      u_str              LISTEN              0                    0                                                           /run/user/1000/systemd/private 16088                                               * 0                                  
    

The columns are still very wide and don't display well on an ordinary terminal.

The desired result is to collapse the columns to fit the data that made it through the filter. Maybe ss has a way to filter by path (I haven't found one), but ss is only an example here, so for this exercise assume the application itself can't filter its output and realign the columns.

The desired output might look like this:

    u_str    LISTEN    0        0                  /run/systemd/private 9683         * 0
    u_str    LISTEN    0        0            /run/systemd/fsck.progress 9690         * 0
    u_str    LISTEN    0        0           /run/systemd/journal/stdout 9705         * 0
    u_str    LISTEN    0        0        /run/user/1000/systemd/private 16088        * 0

Another place this shows up is in the df listing on a system using ZFS and Docker:

    Filesystem                                                                         1K-blocks     Used Available Use% Mounted on
    devtmpfs                                                                               10240        0     10240   0% /dev
    shm                                                                                  8131012        0   8131012   0% /dev/shm
    rpool/ROOT                                                                            923596   820504     86708  91% /
    tmpfs                                                                                1626204     1900   1624304   1% /run
    /dev/sdb1                                                                              65390    33432     31958  52% /boot
    dockerpool/backup                                                                   57010048  4346880  52663168   8% /backup
    dockerpool/docker                                                                   52723200    60032  52663168   1% /var/lib/docker
    cgroup_root                                                                            10240        0     10240   0% /sys/fs/cgroup
    dockerpool/docker/fa4cd75b75e1e4eca9cb94124c61529e98313cffa2960bd5adc55e7fe65717a1  52826624   163456  52663168   1% /var/lib/docker/zfs/graph/fa4cd75b75e1e4eca9cb94124c61529e98313cffa2960bd5adc55e7fe65717a1
    dockerpool/docker/bf861c5d067c605bded3fe38794d0d291ec602b779a2eda9a46ba5ea472fc9a1  52826624   163456  52663168   1% /var/lib/docker/zfs/graph/bf861c5d067c605bded3fe38794d0d291ec602b779a2eda9a46ba5ea472fc9a1
    dockerpool/docker/c9fe2509b8e5e335fcf0f2472abea90180cb443391d86db7dd0c6a4806e63180  52830976   167808  52663168   1% /var/lib/docker/zfs/graph/c9fe2509b8e5e335fcf0f2472abea90180cb443391d86db7dd0c6a4806e63180
    dockerpool/docker/1edf9ca3849374d12ba3e9b6e5e72e8e844bc96de7c14e8e0045356b29bb2c4f  52823808   160640  52663168   1% /var/lib/docker/zfs/graph/1edf9ca3849374d12ba3e9b6e5e72e8e844bc96de7c14e8e0045356b29bb2c4f
    dockerpool/docker/8d62982afac1d3840a5987a62e9e369d5ae29d27dcb0007d0d5319ad84b2c57c  52826624   163456  52663168   1% /var/lib/docker/zfs/graph/8d62982afac1d3840a5987a62e9e369d5ae29d27dcb0007d0d5319ad84b2c57c
    dockerpool/docker/0f03f5e4ca7b9ba366643354c37ca1746a3009e97c1b9f1b0bea126a5eab5dc7  52708224    45056  52663168   1% /var/lib/docker/zfs/graph/0f03f5e4ca7b9ba366643354c37ca1746a3009e97c1b9f1b0bea126a5eab5dc7
    shm                                                                                    65536        0     65536   0% /var/lib/docker/containers/d8413602c199ff17c1324d2e166b1bff3c1115406f2f14475dbd54b7c938631a/mounts/shm
    shm                                                                                    65536        0     65536   0% /var/lib/docker/containers/d0f705e3be016b8b10522329c6f3e1c01d83fabc3d9a498bcfadc754a56fd3fe/mounts/shm
    shm                                                                                    65536        0     65536   0% /var/lib/docker/containers/15dbaf4a13d8136b415fe257e6790b4692f3aa2d0b0197ef7d1c0e90dec703da/mounts/shm
    shm                                                                                    65536        0     65536   0% /var/lib/docker/containers/b0f3856ab0ba0021832519448636498228f5b1ffedc57eec203283d514ec19ac/mounts/shm
    shm                                                                                    65536        0     65536   0% /var/lib/docker/containers/68c2691f0eebcf2f765bb4bfe00c4fdba749e36966e5b42afe8f9f63aeba5803/mounts/shm
    shm                                                                                    65536        0     65536   0% /var/lib/docker/containers/21431e15bc044019c517b4d14efbbf3eb3ce1078c4419c0adb924aaebf62e283/mounts/shm
    dockerpool/docker/0dbc7d708e28e3e29b6360db440d3550ee414791e491b2d807b34079152d1487  52732416    69248  52663168   1% /var/lib/docker/zfs/graph/0dbc7d708e28e3e29b6360db440d3550ee414791e491b2d807b34079152d1487
    dockerpool/docker/3df99a7e3ec5486160e1d55479931ea597f53c0ae021f422a5d11ccd7114917b  52674176    11008  52663168   1% /var/lib/docker/zfs/graph/3df99a7e3ec5486160e1d55479931ea597f53c0ae021f422a5d11ccd7114917b
    dockerpool/docker/0f1380fdca3c53835ab637f378d11d0f81e1c8319aa59f6b289429223a48b4ed  53164928   501760  52663168   1% /var/lib/docker/zfs/graph/0f1380fdca3c53835ab637f378d11d0f81e1c8319aa59f6b289429223a48b4ed
    dockerpool/docker/711415dbd6df49794b0a3b4066b274f5d752ed2a8e1ea2dcc5bdbfe57757380e  53001088   337920  52663168   1% /var/lib/docker/zfs/graph/711415dbd6df49794b0a3b4066b274f5d752ed2a8e1ea2dcc5bdbfe57757380e
    dockerpool/docker/359ed871b4929d13789ab1f89bc9a170819ca1b1dfcb660f9ca2f60f0ac59d00  52826368   163200  52663168   1% /var/lib/docker/zfs/graph/359ed871b4929d13789ab1f89bc9a170819ca1b1dfcb660f9ca2f60f0ac59d00
    dockerpool/docker/0009ba1b93a2e49bcf00d64b3c6a1df6b9d2e8d669b61732279100725f867043  53009408   346240  52663168   1% /var/lib/docker/zfs/graph/0009ba1b93a2e49bcf00d64b3c6a1df6b9d2e8d669b61732279100725f867043
    dockerpool/docker/a1f1a7f4879e937bae9b9aeabea13cb5b291b02e133ff0c4d09f531c4e8fd04f  52717184    54016  52663168   1% /var/lib/docker/zfs/graph/a1f1a7f4879e937bae9b9aeabea13cb5b291b02e133ff0c4d09f531c4e8fd04f
    dockerpool/docker/dc632a8bf6b8da1c5c06c7e2504508b8a024af10b58837afa7b6b4a468c2695f  53693824  1030656  52663168   2% /var/lib/docker/zfs/graph/dc632a8bf6b8da1c5c06c7e2504508b8a024af10b58837afa7b6b4a468c2695f
    dockerpool/docker/a5ffc4c6e1076cc8104aae76997fa479e0d8a3cc437fc1c948177d8c2345cb2a  52903936   240768  52663168   1% /var/lib/docker/zfs/graph/a5ffc4c6e1076cc8104aae76997fa479e0d8a3cc437fc1c948177d8c2345cb2a
    dockerpool/docker/8589313f7779178dcf776756e71d8623c4f8646ba766e92ce8e6dde074cc266a  53190912   527744  52663168   1% /var/lib/docker/zfs/graph/8589313f7779178dcf776756e71d8623c4f8646ba766e92ce8e6dde074cc266a
    dockerpool/docker/243285ef997322c83565c47e6ff0d38bde8d35337ab767489cdc6a2d0735cae2  52907264   244096  52663168   1% /var/lib/docker/zfs/graph/243285ef997322c83565c47e6ff0d38bde8d35337ab767489cdc6a2d0735cae2

Filtering with grep (we can't simply exclude zfs filesystems, because the root filesystem is also on ZFS; we only want to exclude the filesystems under dockerpool):

    $ df | grep -v docker
    Filesystem                                                                         1K-blocks     Used Available Use% Mounted on
    devtmpfs                                                                               10240        0     10240   0% /dev
    shm                                                                                  8131012        0   8131012   0% /dev/shm
    rpool/ROOT                                                                            923596   820504     86708  91% /
    tmpfs                                                                                1626204     1900   1624304   1% /run
    /dev/sdb1                                                                              65390    33432     31958  52% /boot
    cgroup_root                                                                            10240        0     10240   0% /sys/fs/cgroup

The desired output would look more like:

    Filesystem     1K-blocks   Used Available Use% Mounted on
    devtmpfs           10240      0     10240   0% /dev
    shm              8131012      0   8131012   0% /dev/shm
    rpool/ROOT        923596 820504     86708  91% /
    tmpfs            1626204   1900   1624304   1% /run
    /dev/sdb1          65390  33432     31958  52% /boot
    cgroup_root        10240      0     10240   0% /sys/fs/cgroup

Answer 1

A simple solution that works is to pipe the data through column -t. This aligns the columns based on the whitespace (tabs and spaces) in the data.

Your first output:

$ column -t file1
u_str  LISTEN  0  0  /run/systemd/private            9683   *  0
u_str  LISTEN  0  0  /run/systemd/fsck.progress      9690   *  0
u_str  LISTEN  0  0  /run/systemd/journal/stdout     9705   *  0
u_str  LISTEN  0  0  /run/user/1000/systemd/private  16088  *  0

I'd say that looks pretty good.
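The filtering and the realignment can also be combined in a single pipeline. A minimal sketch, using sample lines standing in for real ss -axl output (which varies from system to system):

```shell
# Filter, then realign: column -t treats runs of whitespace as one
# separator, so the surviving rows get compact, aligned columns.
printf '%s\n' \
  'u_str LISTEN 0 0 /run/systemd/private 9683 * 0' \
  'u_str LISTEN 0 0 /run/systemd/fsck.progress 9690 * 0' \
  'u_str LISTEN 0 0 /run/thd.socket 13833 * 0' \
  | grep systemd | column -t
```

On a live system this would simply be ss -axl | grep systemd | column -t.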

Your second output:

$ column -t file2
Filesystem   1K-blocks  Used    Available  Use%  Mounted         on
devtmpfs     10240      0       10240      0%    /dev
shm          8131012    0       8131012    0%    /dev/shm
rpool/ROOT   923596     820504  86708      91%   /
tmpfs        1626204    1900    1624304    1%    /run
/dev/sdb1    65390      33432   31958      52%   /boot
cgroup_root  10240      0       10240      0%    /sys/fs/cgroup

Notice how the word on got a column of its own? That's due to the space in the string Mounted on. It may not be a big problem for this particular case, but you should be aware that any space or tab is treated as a column separator.
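One possible workaround (my sketch, not part of the original answer): temporarily glue the header words together before aligning, then restore the space afterwards:

```shell
# Join "Mounted on" so column -t sees it as one field, align,
# then put the space back in the header line only.
printf '%s\n' \
  'Filesystem 1K-blocks Use% Mounted on' \
  '/dev/sdb1 65390 52% /boot' \
  | sed '1s/Mounted on/Mounted_on/' \
  | column -t \
  | sed '1s/Mounted_on/Mounted on/'
```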

If you know what delimiter the data uses, you can tell column about it with its -s option, e.g. -s $'\t' in bash or zsh to use only tabs.
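A minimal demonstration with tab-separated input, where -s keeps the embedded space of "Mounted on" inside one column (the $'\t' quoting is bash/zsh syntax):

```shell
# Only tabs act as separators here, so "Mounted on" stays one field.
printf 'Filesystem\tMounted on\n/dev/sdb1\t/boot\n' | column -t -s $'\t'
```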
