How do I write a bash script that takes multiple files?

I am writing a bash script that takes multiple files as input and displays the top 'n' most frequently occurring words in descending order for each file.

I figured out how to count the word frequencies for one file, but I am unable to figure out how to deal with it when I have multiple files to process.

 sed -e 's/[^[:alpha:]]/ /g' testfile.txt | tr '\n' " " |  tr -s " " | tr " " '\n'| tr 'A-Z' 'a-z' | sort | uniq -c | sort -nr | nl 

This works fine for a single file, but I want to write a bash script that I can run like this:

$ countWord test1.txt test2.txt test3.txt    # countWord here is my bash script that counts word frequencies

I want it to take those files as input and display something like the following for each file:

   ===(1 51 33 test1.txt)====    # where 1: number of lines, 51: number of words, 33: number of characters
38 them
29 these
17 to
12 who

 

Any pointers in the right direction would be much appreciated. :)

Answer 1

Create a loop over the files:

for F in "$@"
do
    echo "=== $F ==="
    sed -e 's/[^[:alpha:]]/ /g' "$F" | tr '\n' ' ' | tr -s ' ' | tr ' ' '\n' | tr 'A-Z' 'a-z' | sort | uniq -c | sort -nr | nl
done
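The loop above prints only the ranked word list. If you also want the `===(lines words chars file)===` header the question asks for, here is a minimal sketch that adds it with `wc`; the `countWord` function name and the leading word-limit argument are assumptions for illustration, not part of the answer above:

```shell
#!/usr/bin/env bash
# Hypothetical countWord: first argument is how many top words to show,
# the remaining arguments are the files to process.
countWord() {
    local n=$1; shift
    local f lines words chars
    for f in "$@"; do
        # wc prints: <lines> <words> <bytes> <filename>; "chars" here is really bytes
        read -r lines words chars _ < <(wc "$f")
        echo "===($lines $words $chars $f)==="
        sed -e 's/[^[:alpha:]]/ /g' "$f" |
            tr '\n' ' ' | tr -s ' ' | tr ' ' '\n' |
            tr 'A-Z' 'a-z' | sort | uniq -c | sort -nr | head -n "$n"
    done
}
```

Called as `countWord 3 test1.txt test2.txt`, it shows the top 3 words of each file under its own header.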

Have fun!

Answer 2

Using this input:

$ head file1 file2
==> file1 <==
I am new here. I am working on writing a bash script which takes multiple files
as input and display the top ‘n’ most frequently occurring words in
descending order for each of file.

==> file2 <==
I figured out how to count the frequency of words but I am unable to figure out
how I will deal when I have multiple files.

and GNU awk in a shell script:

$ cat tst.sh
#!/usr/bin/env bash

awk '
    BEGIN { maxWords = 5 }
    {
        gsub(/[^[:alpha:]]/," ")
        for (i=1; i<=NF; i++) {
            words[$i]++
            split($i,tmp,"")        # null separator: split the word into single characters
            for (j in tmp) {
                chars[tmp[j]]++
            }
        }
    }
    ENDFILE {
        print "  ===(" FNR+0, length(words)+0, length(chars)+0, FILENAME ")==="
        PROCINFO["sorted_in"] = "@val_num_desc"
        numWords = 0
        for (word in words) {
            print words[word], word
            if ( ++numWords == maxWords ) {
                break
            }
        }
        delete words                # reset the per-file counters
        delete chars
    }
' "${@:--}"

we get:

$ ./tst.sh file1 file2
  ===(3 32 23 file1)===
2 am
2 I
1 writing
1 working
1 words
  ===(2 20 23 file2)===
4 I
2 to
2 out
2 how
1 words
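The `"${@:--}"` at the end of the script is bash parameter expansion: with no file arguments it expands to a single `-`, so awk reads standard input; otherwise it expands to the arguments unchanged. A quick illustration of the idiom on its own:

```shell
# The "${@:--}" idiom: fall back to "-" when no arguments are given.
set --                    # no positional parameters
printf '%s\n' "${@:--}"   # prints: -

set -- file1 file2        # two positional parameters
printf '%s\n' "${@:--}"   # prints: file1 and file2, one per line
```

This is what lets you also run `./tst.sh` with no arguments at the end of a pipeline.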
