I want to insert new JSON objects between the existing JSON objects in a file, using bash-generated UUIDs.
Input JSON file, test.json:
{"name":"a","type":1}
{"name":"b","type":2}
{"name":"c","type":3}
Bash command used to generate the UUIDs: uuidgen -r
Target output:
{"id": "7e3ca7b0-48f1-41fe-9a19-092a62cba0dc"}
{"name":"a","type":1}
{"id": "3f793fdd-ec3b-4306-8153-12f3f9faf2c1"}
{"name":"b","type":2}
{"id": "cbcd759a-37e7-4da7-b7fe-7572f474ec31"}
{"name":"c","type":3}
A basic jq program to insert new objects:
jq -c '{"id"}, .' test.json
Output:
{"id":null}
{"name":"a","type":1}
{"id":null}
{"name":"b","type":2}
{"id":null}
{"name":"c","type":3}
My attempt at a jq program to insert a UUID generated from bash (this does not work):
jq -c '{"id" | input}, .' test.json < <(uuidgen)
I am unsure how to handle the two inputs: the bash command used to create a value for each new object, and the input file to be transformed (a new object inserted before each existing object).
I want to process both small and large JSON files, up to a few gigabytes each.
I would greatly appreciate help with a well-designed solution that scales to large files and performs the operations quickly and efficiently.
Thanks in advance.
If the input file is already well-formed JSONL, then a simple bash solution would be:
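For instance, a minimal sketch (assuming every line of test.json is one complete JSON object and uuidgen is on the PATH):

while IFS= read -r line; do
  echo "{\"id\": \"$(uuidgen -r)\"}"
  printf '%s\n' "$line"
done < test.json

The while IFS= read -r idiom preserves each line verbatim; note that one uuidgen process is spawned per input line.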
This might well be the best trivial solution if test.json is very large and known to be valid JSONL.
If the input file is not already JSONL, then you could still use the above approach by piping in the output of:
jq -c . test.json
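That is, something along these lines (a sketch reusing the read loop above):

jq -c . test.json | while IFS= read -r line; do
  echo "{\"id\": \"$(uuidgen -r)\"}"
  printf '%s\n' "$line"
done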
And if read is too slow, you could still use the above text-processing approach with awk, for example:
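A sketch of that variant; it avoids bash's per-line read overhead, though it still spawns one uuidgen process per line:

awk '{
  cmd = "uuidgen -r"
  cmd | getline uuid   # run uuidgen and read its single line of output
  close(cmd)           # close the command so it is re-run for the next line
  printf "{\"id\": \"%s\"}\n", uuid
  print                # the original input line, unchanged
}' test.json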
For the record, a single-call-to-jq solution along the lines you have in mind could be constructed as follows:
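One possible construction, as a sketch: it assumes jq 1.6 or later (for --rawfile) and that test.json is JSONL, so that wc -l counts the objects; the UUID stream is bounded by that count, at the cost of reading the file twice:

jq -cn --rawfile ids <(for _ in $(seq $(wc -l < test.json)); do uuidgen -r; done) '
  ($ids | split("\n")) as $uuid
  | foreach inputs as $obj (-1; . + 1; {id: $uuid[.]}, $obj)
' test.json

Here --rawfile binds $ids to the newline-separated UUIDs as a single string, and foreach maintains a running index so that each input object is preceded by its own {id: ...} object.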
Obviously you cannot "slurp" the unbounded stream of uuidgen values; less obviously perhaps, if you were simply to pipe in the stream, the process will hang.