The point is that I need to transpose a large file, 1600 by 80k, but the entries are not tab-separated, just single characters, U and T, so it looks like:
UUUUTTTUTUTUUUTUTUT
TTUTUTUTUUTUTUTUTUT
Is there a good way to do this in bash, python, or perl?
Answer 1
The following transposes a matrix with an arbitrary number of rows and columns using close to zero memory.
#!/bin/sh
# Column count = characters in the first line, minus 1 for the trailing newline
# (wc -c counts the newline byte too).
cols=$(($(head -n1 "$1" | wc -c) - 1))
for i in $(seq 1 "$cols"); do
    awk -v c="$i" 'BEGIN{FS=""}{printf "%s", $c}' "$1"
    echo
done
Save it under a meaningful name (e.g. transpose.sh), make it executable (chmod +x transpose.sh) and run it like this:
./transpose.sh matrix.txt
Output:
UT
UT
UU
UT
TU
TT
TU
UT
TU
UU
TT
UU
UT
UU
TT
UU
TT
UU
TT
How it works:
$(head -n1 "$1" | wc -c): counts the characters in the first line to get the number of columns (note that wc -c also counts the trailing newline).
for i in $(seq 1 $num_cols); do ... done: runs the loop body once for each column.
awk -v c=$i 'BEGIN{FS=""}{printf $c}' "$1": scans the matrix file. The current column number (from the current iteration of the outer loop) is passed to awk in the variable c; with an empty field separator every character is its own field, so $c prints all values of that column in order, with no newlines (hence printf). The following echo then terminates the output row.
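The question also mentions python; as an in-memory alternative (a hypothetical sketch, not part of this answer), zip can transpose the rows in a single pass instead of re-reading the file once per column:

```python
def transpose_lines(lines):
    """Transpose a list of equal-length strings: row i, char j -> row j, char i."""
    return ["".join(col) for col in zip(*lines)]

# Example with the two rows from the question:
rows = ["UUUUTTTUTUTUUUTUTUT", "TTUTUTUTUUTUTUTUTUT"]
print("\n".join(transpose_lines(rows)))
```

For a file, something like transpose_lines([l.rstrip("\n") for l in open(path)]) works, but note the trade-off: it holds the whole matrix in memory, unlike the near-zero-memory shell loop above.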
Answer 2
Two ideas:
perl
perl -ne '
    chomp;
    $l = length if $. == 1;
    push @rows, [split //];
} END {
    for ($i=0; $i<$l; $i++) {
        for ($j=0; $j<$.; $j++) {
            print $rows[$j][$i];
        }
        print "\n";
    }
' file
ruby has a handy Array.transpose method:
ruby -e '
    puts IO.readlines(ARGV.shift).
        map {|line| line.chomp.split("")}.
        transpose.
        map {|row| row.join("")}.
        join("\n")
' file
Hmm, those use a lot of memory. Alternative implementations:
perl
perl -ne '
    chomp;
    if ($. == 1) {
        @data = split //;
    } else {
        @chars = split //;
        $data[$_] .= $chars[$_] for 0..$#chars;
    }
} END {
    print join("\n", @data), "\n";
' file
ruby
ruby -e '
    file = File.open(ARGV.shift)
    data = file.gets.chomp.split("")
    file.each do |line|
        line.chomp.split("").each_with_index do |char, idx|
            data[idx] << char
        end
    end
    puts data.join("\n")
' file
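Both v2 implementations stream the file row by row and append each incoming character to a growing per-column string, which is why their memory use is proportional to the output rather than to a full character matrix. A Python sketch of the same pattern (for illustration only; the function name is made up):

```python
def transpose_streaming(lines):
    """Build one output string per column while reading rows one at a time,
    so only the per-column result strings are kept in memory."""
    cols = []
    for line in lines:
        line = line.rstrip("\n")
        if not cols:
            cols = list(line)              # first row seeds one string per column
        else:
            for i, ch in enumerate(line):  # grow each column by one character
                cols[i] += ch
    return cols

print("\n".join(transpose_streaming(["UUT\n", "TUT\n"])))
# prints:
# UT
# UU
# TT
```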
I just can't let this go ;)
Benchmarks
1 MB file 10 MB file 100 MB file
-----------------------------------------------------------------
| time memory | time memory | time memory
perl v1 | 0.54s 85800k | 4.59s 815140k | - -
perl v2 | 0.47s 6776k | 4.29s 22204k | 41.88s 180404k
ruby v1 | 1.19s 137960k | 14.63s 961736k | - -
ruby v2 | 1.04s 12296k | 9.75s 27816k | 101.75s 185908k
gawk | 1.15s 233056k | 12.37s 2291404k | - -
choroba | 0.45s 76740k | 3.90s 728888k | - -
Choroba's perl wins the time contest, but my second implementation is far ahead on memory.
Transcript:
$ yes 123456789 | head -n 100000 > big
$ yes 123456789 | head -n 1000000 > bigger
$ yes 123456789 | head -n 10000000 > biggest
$ ls -l big*
-rw-r--r-- 1 jackman jackman 1000000 Jun 24 14:33 big
-rw-r--r-- 1 jackman jackman 10000000 Jun 24 14:33 bigger
-rw-r--r-- 1 jackman jackman 100000000 Jun 24 14:33 biggest
$ time perl -n transpose_v1.pl big >/dev/null
0.54user 0.05system 0:00.98elapsed 61%CPU (0avgtext+0avgdata 85800maxresident)k
3080inputs+0outputs (15major+20501minor)pagefaults 0swaps
$ time perl -n transpose_v1.pl bigger >/dev/null
4.59user 0.39system 0:04.98elapsed 99%CPU (0avgtext+0avgdata 815140maxresident)k
0inputs+0outputs (0major+202823minor)pagefaults 0swaps
$ time perl -n transpose_v2.pl big >/dev/null
0.47user 0.00system 0:00.48elapsed 99%CPU (0avgtext+0avgdata 6776maxresident)k
0inputs+0outputs (0major+819minor)pagefaults 0swaps
$ time perl -n transpose_v2.pl bigger >/dev/null
4.29user 0.01system 0:04.31elapsed 99%CPU (0avgtext+0avgdata 22204maxresident)k
0inputs+0outputs (0major+5042minor)pagefaults 0swaps
$ time perl -n transpose_v2.pl biggest >/dev/null
41.88user 0.11system 0:42.01elapsed 99%CPU (0avgtext+0avgdata 180404maxresident)k
0inputs+0outputs (0major+44590minor)pagefaults 0swaps
$ time ruby transpose_v1.rb big >/dev/null
1.19user 0.10system 0:01.58elapsed 81%CPU (0avgtext+0avgdata 137960maxresident)k
5856inputs+0outputs (23major+33375minor)pagefaults 0swaps
$ time ruby transpose_v1.rb bigger >/dev/null
14.63user 0.48system 0:15.12elapsed 99%CPU (0avgtext+0avgdata 961736maxresident)k
0inputs+0outputs (0major+239378minor)pagefaults 0swaps
$ time ruby transpose_v2.rb big >/dev/null
1.04user 0.02system 0:01.07elapsed 98%CPU (0avgtext+0avgdata 12296maxresident)k
0inputs+0outputs (0major+2020minor)pagefaults 0swaps
$ time ruby transpose_v2.rb bigger >/dev/null
9.75user 0.02system 0:09.79elapsed 99%CPU (0avgtext+0avgdata 27816maxresident)k
0inputs+0outputs (0major+6051minor)pagefaults 0swaps
$ time ruby transpose_v2.rb biggest >/dev/null
101.75user 0.21system 1:41.99elapsed 99%CPU (0avgtext+0avgdata 185908maxresident)k
0inputs+0outputs (0major+45600minor)pagefaults 0swaps
$ time gawk -f transpose.gawk big >/dev/null
1.15user 0.12system 0:01.28elapsed 99%CPU (0avgtext+0avgdata 233056maxresident)k
0inputs+0outputs (0major+58542minor)pagefaults 0swaps
$ time gawk -f transpose.gawk bigger >/dev/null
12.37user 1.03system 0:13.40elapsed 99%CPU (0avgtext+0avgdata 2291404maxresident)k
0inputs+0outputs (0major+580302minor)pagefaults 0swaps
$ time perl transpose_choroba.pl big >/dev/null
0.45user 0.04system 0:00.58elapsed 84%CPU (0avgtext+0avgdata 76740maxresident)k
112inputs+0outputs (1major+18282minor)pagefaults 0swaps
$ time perl transpose_choroba.pl bigger >/dev/null
3.90user 0.37system 0:04.28elapsed 99%CPU (0avgtext+0avgdata 728888maxresident)k
0inputs+0outputs (0major+181291minor)pagefaults 0swaps
Answer 3
A Perl solution:
#!/usr/bin/perl
use strict;
use warnings;
my @arr;
while (my $line = <>) { # Read the input line by line.
chomp $line; # Remove a newline.
# Distribute the characters to subarrays of the array:
push @{ $arr[$_] }, substr $line, $_, 1
for 0 .. length($line) - 1;
}
print @$_, "\n" for @arr;
You do need a lot of memory to transpose a large matrix this way, though.
Answer 4
Using gawk:
sudo apt-get install gawk
Create the awk script transpose:
BEGIN { FS = "" }
{
    # Store every character, indexed by row and column.
    for (i = 1; i <= NF; i++) {
        a[NR, i] = $i
    }
}
NF > p { p = NF }  # track the widest row
END {
    # Emit one output row per input column.
    for (j = 1; j <= p; j++) {
        str = a[1, j]
        for (i = 2; i <= NR; i++) {
            str = str "" a[i, j]
        }
        print str
    }
}
and run:
gawk -f transpose <your_input_file> > <your_output_file>
Example:
% cat foo
UUUUTTTUTUTUUUTUTUT
TTUTUTUTUUTUTUTUTUT
% gawk -f transpose foo
UT
UT
UU
UT
TU
TT
TU
UT
TU
UU
TT
UU
UT
UU
TT
UU
TT
UU
TT