How to print only the properties lines from a json file
Example json file
{
"href" : "http://master02:8080/api/v1/clusters/HDP/configurations?type=kafka-env&tag=version1527250007610",
"items" : [
{
"href" : "http://master02:8080/api/v1/clusters/HDP/configurations?type=kafka-env&tag=version1527250007610",
"tag" : "version1527250007610",
"type" : "kafka-env",
"version" : 8,
"Config" : {
"cluster_name" : "HDP",
"stack_id" : "HDP-2.6"
},
"properties" : {
"content" : "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi",
"is_supported_kafka_ranger" : "true",
"kafka_log_dir" : "/var/log/kafka",
"kafka_pid_dir" : "/var/run/kafka",
"kafka_user" : "kafka",
"kafka_user_nofile_limit" : "128000",
"kafka_user_nproc_limit" : "65536"
}
}
]
}
Expected output
"content" : "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi",
"is_supported_kafka_ranger" : "true",
"kafka_log_dir" : "/var/log/kafka",
"kafka_pid_dir" : "/var/run/kafka",
"kafka_user" : "kafka",
"kafka_user_nofile_limit" : "128000",
"kafka_user_nproc_limit" : "65536"
Answer 1
jq is the right tool for processing JSON data:
jq '.items[].properties | to_entries[] | "\(.key) : \(.value)"' input.json
Output:
"content : \n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi"
"is_supported_kafka_ranger : true"
"kafka_log_dir : /var/log/kafka"
"kafka_pid_dir : /var/run/kafka"
"kafka_user : kafka"
"kafka_user_nofile_limit : 128000"
"kafka_user_nproc_limit : 65536"
If you really must have every key and value wrapped in double quotes, use this modification (a tojson-based variant is also sketched after the output below):
jq -r '.items[].properties | to_entries[]
| "\"\(.key)\" : \"\(.value | gsub("\n";"\\n"))\","' input.json
Output:
"content" : "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e "/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar" ]; then\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi",
"is_supported_kafka_ranger" : "true",
"kafka_log_dir" : "/var/log/kafka",
"kafka_pid_dir" : "/var/run/kafka",
"kafka_user" : "kafka",
"kafka_user_nofile_limit" : "128000",
"kafka_user_nproc_limit" : "65536",
Answer 2
Please don't get into the habit of parsing structured data with unstructured tools. If you are parsing XML, JSON, YAML etc., use a dedicated parser, at least to convert the structured data into a form more appropriate for awk, sed, grep and the like.
In this case, gron would be a big help:
$ gron yourfile | grep -F .properties.
json.items[0].properties.content = "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=/usr/lib/ccache:/home/steve/bin:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games:/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n export CLASSPATH=:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n export CLASSPATH=:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi";
json.items[0].properties.is_supported_kafka_ranger = "true";
json.items[0].properties.kafka_log_dir = "/var/log/kafka";
json.items[0].properties.kafka_pid_dir = "/var/run/kafka";
json.items[0].properties.kafka_user = "kafka";
json.items[0].properties.kafka_user_nofile_limit = "128000";
json.items[0].properties.kafka_user_nproc_limit = "65536";
(You can post-process that with | cut -d. -f4- | gron --ungron to get output very close to what you want, though still as valid JSON.)
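Putting that post-processing together, the full pipeline would look something like this (a sketch; gron --ungron turns the path = value lines back into JSON):
gron yourfile | grep -F '.properties.' | cut -d. -f4- | gron --ungron
which leaves you with a plain JSON object holding only the properties keys and values.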
Answer 3
sed -n '/properties/,/}$/ {
/properties/n
/}$/ !p
}' FILE.json
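In case the sed commands are opaque, here is the same script with comments (a sketch, assuming GNU sed, which accepts comment lines inside a block):
sed -n '
# work only on the range from the line matching "properties" to the next line ending in }
/properties/,/}$/ {
  # with -n, "n" discards the "properties" line itself and loads the following line
  /properties/n
  # print every remaining line in the range except the closing }
  /}$/ !p
}' FILE.json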
For a more precise match, and to also handle closing-brace lines with extra trailing whitespace, you can use
sed -E -n '/"properties" : {/,/^[[:blank:]]*}[[:blank:]]$/ {
/"properties" : {/n
/^[[:blank:]]*}[[:blank:]]$/ !p
}' FILE.json
Answer 4
This is tagged perl, but I haven't seen a perl answer yet, so I'll add one. Don't use regexes or other "unstructured" parsers for this. perl has the JSON module (JSON::PP has also been part of core since 5.14).
#!/usr/bin/env perl
use strict;
use warnings;
use JSON;
use Data::Dumper;
my $str = do { local $/; <DATA> };
my $json = decode_json ( $str );
my $properties = $json -> {items} -> [0] -> {properties};
#dump the whole lot:
print Dumper $properties;
# or iterate
foreach my $key ( sort keys %$properties ) {
    print "$key => ", $properties -> {$key},"\n";
}
__DATA__
{
"href" : "http://master02:8080/api/v1/clusters/HDP/configurations?type=kafka-env&tag=version1527250007610",
"items" : [
{
"href" : "http://master02:8080/api/v1/clusters/HDP/configurations?type=kafka-env&tag=version1527250007610",
"tag" : "version1527250007610",
"type" : "kafka-env",
"version" : 8,
"Config" : {
"cluster_name" : "HDP",
"stack_id" : "HDP-2.6"
},
"properties" : {
"content" : "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi",
"is_supported_kafka_ranger" : "true",
"kafka_log_dir" : "/var/log/kafka",
"kafka_pid_dir" : "/var/run/kafka",
"kafka_user" : "kafka",
"kafka_user_nofile_limit" : "128000",
"kafka_user_nproc_limit" : "65536"
}
}
]
}
Of course, in a real-world use case you would read from STDIN or a filename rather than from DATA.
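For example, a rough sketch reading a file named on the command line (FILE.json is a placeholder; JSON::PP is used since it is in core from 5.14 onwards):
perl -MJSON::PP -0777 -ne '
    # slurp the whole file, decode it, and pick out the properties hash
    my $p = decode_json($_)->{items}[0]{properties};
    # print in the same "key => value" format as the script above
    print "$_ => $p->{$_}\n" for sort keys %$p;
' FILE.json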