I have a file, and I need to know whether it contains identical entries.
The file contains entries like this:
dn: cn=ccb2fa1a-6efb-4f29-b18b-72e226d76935,ou=Named,ou=Identities,ou=Active,o
rdcPosition: cn=936480,ou=Entities,ou=Active,ou=Vault,o=rdc#3#<position><cn>70
dn: cn=715f55d1-e940-42f9-8ae5-25ff1eff6f55,ou=Named,ou=Identities,ou=Active,o
rdcPosition: cn=7292,ou=Entities,ou=Active,ou=Vault,o=rdc#3#<position><cn>4024
rdcPosition: cn=8910,ou=Entities,ou=Active,ou=Vault,o=rdc#3#<position><cn>5209
rdcPosition: cn=7263,ou=Entities,ou=Active,ou=Vault,o=rdc#3#<position><cn>6725
rdcPosition: cn=936480,ou=Entities,ou=Active,ou=Vault,o=rdc#3#<position><cn>11
dn: cn=f61e2769-a9c8-486a-914b-92333055b5e5,ou=Named,ou=Identities,ou=Active,o
rdcPosition: cn=938936,ou=Entities,ou=Active,ou=Vault,o=rdc#3#<position><cn>74
rdcPosition: cn=942380,ou=Entities,ou=Active,ou=Vault,o=rdc#5#<position><cn>51
dn: cn=7548d048-1288-4b66-97f4-efe15c68fc50,ou=Named,ou=Identities,ou=Active,o
rdcPosition: cn=311432,ou=Entities,ou=Active,ou=Vault,o=rdc#3#<position><cn>43
dn: cn=e51f3d78-b9d8-4bcf-b8c5-321519f19515,ou=Named,ou=Identities,ou=Active,o
rdcPosition: cn=938936,ou=Entities,ou=Active,ou=Vault,o=rdc#3#<position><cn>35
dn: cn=cf6ddfb2-4261-4169-9e6e-0d6963262b49,ou=Named,ou=Identities,ou=Active,o
rdcPosition: cn=938936,ou=Entities,ou=Active,ou=Vault,o=rdc#3#<position><cn>82
I need to know whether the rdcPosition lines under a given dn: contain duplicate entries, for example:
dn: cn=65fb5990-4d2f-492e-83fb-c2cbd72d8988,ou=Named,ou=Identities,ou=Active,o
rdcPosition: cn=7688,ou=Entities,ou=Active,ou=Vault,o=rdc#3#<position><cn>2323
rdcPosition: cn=7688,ou=Entities,ou=Active,ou=Vault,o=rdc#3#<position><cn>2323
Do you know which Unix command I should use?
Answer 1
The kind of quick script I write on a daily basis:
#!/usr/bin/perl
#
use strict;
use warnings;
# data structures we're gonna need
my %positions;        # how many times we have seen a given position
my %registered_lines; # the concatenated lines for a given position
my $dn;               # the dn: line of the section we're currently in
# print the dn line (once) followed by every duplicated rdcPosition line of the section
sub report_duplicates
{
    my $printed = 0;                  # we want to print the dn line only once
    foreach my $key (keys %positions) # look at all positions seen in the section
    {
        if ($positions{$key} > 1)     # has this position been seen more than once?
        {
            print $dn unless $printed;
            $printed = 1;
            #print "position $key is repeated $positions{$key} times\n";
            print $registered_lines{$key}; # print all the lines with that position
        }
    }
}
while (<>)
{
    if (/^dn:/) # beginning of a new dn section (and end of the previous one)
    {
        report_duplicates() if defined $dn;
        # reset the variables for the next section
        $dn = $_;
        %positions = ();
        %registered_lines = ();
    }
    elsif (/^rdcPosition/) # a position line in the current section
    {
        my ($pos) = /(\d+)$/;  # the digits at the end of the line identify the position
        next unless defined $pos;
        if (exists $positions{$pos}) # have we already seen this position?
        {
            $positions{$pos} += 1;          # increment the counter
            $registered_lines{$pos} .= $_;  # record the line
        }
        else
        {
            $positions{$pos} = 1;
            $registered_lines{$pos} = $_;
        }
    }
}
report_duplicates() if defined $dn; # don't forget the last section of the file
Run it as:
perl script.pl < input_data_file
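For the duplicated section shown in the question, the script would print the dn line once, followed by each repeated rdcPosition line, i.e. something like:
dn: cn=65fb5990-4d2f-492e-83fb-c2cbd72d8988,ou=Named,ou=Identities,ou=Active,o
rdcPosition: cn=7688,ou=Entities,ou=Active,ou=Vault,o=rdc#3#<position><cn>2323
rdcPosition: cn=7688,ou=Entities,ou=Active,ou=Vault,o=rdc#3#<position><cn>2323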
Answer 2
If you only want to know "are there any duplicates?", then I suggest comparing the result of
cat <file> | sort | wc -l
with the result of
cat <file> | sort | uniq | wc -l
If there are duplicates, uniq removes them and the second count will be lower. If you want to see what the actual differences are, have a look at the perl script posted by @Igeorget.
Answer 3
awk '/^dn:/ {d=1} {if (d) {print buf | "sort|uniq -d"; close("sort|uniq -d"); d=0; buf=""} else {buf=buf$0"\n"}} END {print buf | "sort|uniq -d"}' | grep -v '^$'
Far less typing than the perl version =). It could probably be simpler, but I couldn't manage to make an awk rule fire "on any pattern or at the end", so it contains a little bit of shell code duplication.