I have a text file like this:
eeeeeeee6fd6e6e7000000800010884f image_0001.png
eeeeeeee6fd6e6e7000000800010884f image_0002.png
e6eee7afef77c6c7000000808860003b image_0003.png
e6eeefa7cfe777170100000008886033 image_0004.png
e6eeefa7cfe777170100000008886033 image_0005.png
eeeecfe7afcfe7770100000030088c27 image_0006.png
efebefe7a7cfc7e70101080000300c03 image_0007.png
ef6befdf674f97c7000000900200301f image_0008.png
ef6befdf674f97c7000000900200301f image_0009.png
6d6d6faff767479700004008810000e1 image_0010.png
ed6d6dada5f767570000400098830401 image_0011.png
ed6d6dada5f767570000400098830401 image_0012.png
efed6d4da595f7a70202004000181303 image_0013.png
ebececcc2f2797f10000008051043c5b image_0014.png
e9edecce4e6e26ba120101808058042a image_0015.png
e9edecce4e6e26ba120101808058042a image_0016.png
ececeeefcf6f67a61000000080585887 image_0017.png
cc6ceeefcf4f67e710000020000149d8 image_0018.png
cc6cefefefcf6fe71000000040000001 image_0019.png
cc6cefefefcf6fe71000000040000001 image_0020.png
8ceceeefefcfcfe700000000c0000009 image_0021.png
I want to use Notepad++ to remove all but the first occurrence of each duplicate string (the hash on the left), leaving that part blank while keeping the file name on the right, like this:
eeeeeeee6fd6e6e7000000800010884f image_0001.png
image_0002.png
e6eee7afef77c6c7000000808860003b image_0003.png
e6eeefa7cfe777170100000008886033 image_0004.png
image_0005.png
eeeecfe7afcfe7770100000030088c27 image_0006.png
efebefe7a7cfc7e70101080000300c03 image_0007.png
ef6befdf674f97c7000000900200301f image_0008.png
image_0009.png
6d6d6faff767479700004008810000e1 image_0010.png
ed6d6dada5f767570000400098830401 image_0011.png
image_0012.png
...etc.
Of course, there are many different strings to replace, so this is not as easy as it might sound (especially with many thousands of lines like these). Is there a regular expression or some other way to achieve this? Thanks.
Answer 1
There are many ways to do this with Python. Here is one of them:
# Note: Your output file must be different to your input file!
# Use absolute filepaths unless the files are in the current working directory.
input_filepath = r"C:\Users\Admin\Desktop\file hashes.txt"
output_filepath = r"C:\Users\Admin\Desktop\file hashes (processed).txt"

hashes = set()  # This set keeps track of known file hashes

with open(input_filepath) as fin:
    with open(output_filepath, "w") as fout:
        # After opening both the input and output files,
        # loop over every line in the input file.
        for line in fin:
            # Get the hash, which is between the start of the line and the first space.
            file_hash = line[:line.find(" ")]
            # Check if it is in the set of known hashes.
            # If it is, write the current line without the hash to the output file.
            # If it isn't, write the current line with the hash to the output file,
            # and add the hash to our set of known hashes.
            if file_hash in hashes:
                hash_len = len(file_hash)
                fout.write(" " * hash_len + line[hash_len:])
            else:
                fout.write(line)
                hashes.add(file_hash)
file hashes (processed).txt will then look like this:
eeeeeeee6fd6e6e7000000800010884f image_0001.png
image_0002.png
e6eee7afef77c6c7000000808860003b image_0003.png
e6eeefa7cfe777170100000008886033 image_0004.png
image_0005.png
eeeecfe7afcfe7770100000030088c27 image_0006.png
efebefe7a7cfc7e70101080000300c03 image_0007.png
ef6befdf674f97c7000000900200301f image_0008.png
image_0009.png
6d6d6faff767479700004008810000e1 image_0010.png
ed6d6dada5f767570000400098830401 image_0011.png
image_0012.png
efed6d4da595f7a70202004000181303 image_0013.png
ebececcc2f2797f10000008051043c5b image_0014.png
e9edecce4e6e26ba120101808058042a image_0015.png
image_0016.png
ececeeefcf6f67a61000000080585887 image_0017.png
cc6ceeefcf4f67e710000020000149d8 image_0018.png
cc6cefefefcf6fe71000000040000001 image_0019.png
image_0020.png
8ceceeefefcfcfe700000000c0000009 image_0021.png
I'm not sure how Python is set up on your system, but you should be able to run the above code by copying it into a file named remove_duplicate_hashes.py and then either double-clicking that file or running python remove_duplicate_hashes.py from a Command Prompt.
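As a minimal sketch of the Command Prompt route (the folder path below is only an example, matching the Desktop paths used in the script; adjust it to wherever you actually saved remove_duplicate_hashes.py and your text file):

:: Change to the folder that contains the script, then run it with Python.
:: This assumes the "python" command is available on your PATH.
cd "C:\Users\Admin\Desktop"
python remove_duplicate_hashes.py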