I have the following working jq transformation:
$ echo '{"key": "key1", "value": {"one": 1, "two": 2}}' | jq --compact-output '.key as $key|.value|to_entries|map({key: ($key), member:.key, score:(.value|tostring)})|.[]'
which correctly produces the desired output:
{"key":"key1","member":"one","score":"1"}
{"key":"key1","member":"two","score":"2"}
The input JSON is large: suppose the "value" field in the example above contains tens of thousands of entries. I want to perform this exact transformation in jq's streaming mode, so as to avoid memory pressure.
I have tried using jq's foreach, to no avail. I cannot find a way to store the "key1" value so that it can be referenced while the entries under "value" are processed.
For example, using the same input as the working example:
$ echo '{"key": "key1", "value": {"one": 1, "two": 2}}'| jq -c --stream 'foreach . as $input ({};{in: $input};.)'
{"in":[["key"],"key1"]}
{"in":[["value","one"],1]}
{"in":[["value","two"],2]}
{"in":[["value","two"]]}
{"in":[["value"]]}
When processing lines 2 and 3 of the output above, I need to be able to reference the value "key1".
To reiterate: I want exactly the same output as the non-streaming version.
foreach is unnecessary for this case. You can enable jq's streaming parser by adding the --stream option, and fromstream(inputs) will reconstruct the input for your filter just as it appeared in the non-streaming version. So the following should work as expected. I was unable to benchmark performance on large JSON, but it should do better than the non-streaming version.
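A sketch of the full command, combining --stream and fromstream(inputs) with the original filter (note the -n flag, so the filter itself reads all stream events via inputs; shown here with the same small sample input):

```shell
echo '{"key": "key1", "value": {"one": 1, "two": 2}}' \
  | jq -cn --stream '
      fromstream(inputs)                # reassemble the object from stream events
      | .key as $key                    # capture "key1" for later reference
      | .value
      | to_entries
      | map({key: $key, member: .key, score: (.value | tostring)})
      | .[]'
```

This prints the same two compact objects as the non-streaming pipeline:

```shell
{"key":"key1","member":"one","score":"1"}
{"key":"key1","member":"two","score":"2"}
```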