jq + print properties from a JSON file without removing the required backslashes

We have the following JSON file:

more file.json

{
  "href" : "http://master02:8080/api/v1/clusters/HDP/configurations?type=kafka-env&tag=version1527250007610",
  "items" : [
    {
      "href" : "http://master02:8080/api/v1/clusters/HDP/configurations?type=kafka-env&tag=version1527250007610",
      "tag" : "version1527250007610",
      "type" : "kafka-env",
      "version" : 8,
      "Config" : {
        "cluster_name" : "HDP",
        "stack_id" : "HDP-2.6"
      },
      "properties" : {
        "content" : "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n  export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n  export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi",
        "is_supported_kafka_ranger" : "true",
        "kafka_log_dir" : "/var/log/kafka",
        "kafka_pid_dir" : "/var/run/kafka",
        "kafka_user" : "kafka",
        "kafka_user_nofile_limit" : "128000",
        "kafka_user_nproc_limit" : "65536"
      }
    }
  ]
}

We built the following jq syntax to print the properties from file.json:

jq -r '.items[].properties | to_entries[]
       | "\"\(.key)\" : \"\(.value | gsub("\n";"\\n"))\","' file.json


"content" : "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e "/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar" ]; then\n  export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n  export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi",
"is_supported_kafka_ranger" : "true",
"kafka_log_dir" : "/var/log/kafka",
"kafka_pid_dir" : "/var/run/kafka",
"kafka_user" : "kafka",
"kafka_user_nofile_limit" : "128000",
"kafka_user_nproc_limit" : "65536",

But the problem is that the output is missing the backslashes before the double quotes.

Example

Instead of getting the following line in the output:

 [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]

we get this output (without the required backslashes):

 [ -e "/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar" ]
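
The same loss can be reproduced on a tiny string (a minimal sketch, assuming jq 1.5+): string interpolation inserts the decoded value of the string, so the \" escape from the source becomes a plain " in the output:

jq -rn '"say \"hi\"" as $s | "\($s)"'

say "hi"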

Please advise how to fix the jq syntax accordingly.

Expected output:

    "content" : "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n  export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n  export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi",
    "is_supported_kafka_ranger" : "true",
    "kafka_log_dir" : "/var/log/kafka",
    "kafka_pid_dir" : "/var/run/kafka",
    "kafka_user" : "kafka",
    "kafka_user_nofile_limit" : "128000",
    "kafka_user_nproc_limit" : "65536"

Answer 1

jq has several escaping modes that you can use instead of doing your own quoting.

jq -r '.items[].properties | to_entries[] | (.key | @json) + ": " + (.value|@json) + ","' file.json

which I think produces the output you want. The @json formatter adds quotes and backslashes as needed to keep the syntax valid JSON, including for double-quoted strings:

"content": "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n  export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n  export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi",

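To see @json on its own against the same file, encoding just the content value keeps the \" and \n escapes intact (a minimal sketch):

jq -r '.items[].properties.content | @json' file.json
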
The only raw string manipulation needed here is adding the ": " in the middle; otherwise the keys and values are just passed through the filters, which handle all of the quoting. You could also use @json "\(.key): \(.value),", although I find that behaviour more trouble than it's worth.
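
Spelled out as a full command, that alternative form would be (a sketch along the same lines):

jq -r '.items[].properties | to_entries[] | @json "\(.key): \(.value),"' file.json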


If the extra comma on the last line is a problem, collect the values together and use join(str) instead:

jq -r '.items[].properties | [to_entries[] | @json "\(.key): \(.value)"] | join(",\n")' file.json

This puts all of the strings into an array and then joins them with ,\n between each pair.
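
As a tiny illustration of the join step on its own (a sketch with placeholder values):

jq -rn '["a", "b", "c"] | join(",\n")'

a,
b,
c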
