Let's say I have a query that is going to return a very large response: possibly thousands of records and gigabytes of data.
Normally the UI just shows a single page of this data. Now I need an option to take the entire result set and stream it out to a file, which the user can then download at their leisure.
So how do I select all results from a query using query builder and then stream it out to a file in chunks without running out of memory?
If you want the document descriptors, you can open an object stream as in the following example:
https://github.com/marklogic/node-client-api/blob/develop/examples/query-builder.js#L38
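A minimal sketch of that pattern, assuming a local database client and a placeholder qb.collection('example') query, might look like this:

```js
const marklogic = require('marklogic');

// Connection settings are placeholders; use your own host, port, and credentials.
const db = marklogic.createDatabaseClient({
  host: 'localhost', port: 8000,
  user: 'admin', password: 'admin', authType: 'DIGEST'
});
const qb = marklogic.queryBuilder;

// Each 'data' event delivers one document descriptor (uri, content, and so on).
db.documents.query(qb.where(qb.collection('example')))
  .stream('object')
  .on('data',  function (descriptor) { console.log(descriptor.uri); })
  .on('error', function (error)      { console.error(error); })
  .on('end',   function ()           { console.log('finished reading'); });
```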
If you only want the content of the documents, you can use a chunked stream as shown in the following example (the same approach can be used for a query):
https://github.com/marklogic/node-client-api/blob/develop/examples/read-stream.js#L27
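Along the same lines, a chunked stream can be piped straight into a file write stream (reusing db and qb from the sketch above; the query and output file name are placeholders):

```js
const fs = require('fs');

// Raw document content flows to disk in chunks instead of being buffered in memory.
db.documents.query(qb.where(qb.collection('example')))
  .stream('chunked')
  .pipe(fs.createWriteStream('export.txt'));
```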
The general approach would be as follows (see the sketch after this list):

- create a write stream for the output file:
  https://nodejs.org/api/fs.html#fs_fs_createwritestream_path_options
- pipe each read stream into that write stream:
  https://nodejs.org/api/stream.html#stream_readable_pipe_destination_options
- loop on reading documents, incrementing the start page by the page length until finished reading
- call end() on the write stream to close the file:
  https://nodejs.org/api/stream.html#stream_writable_end_chunk_encoding_callback
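Putting those pieces together, here is a rough sketch of the paging loop. It assumes the chunked stream mode from the example above and the one-based slice(pageStart, pageLength) convention of the query builder; the query, file name, and page length are placeholders to adjust for your data:

```js
const fs = require('fs');
const marklogic = require('marklogic');

// Connection settings are placeholders.
const db = marklogic.createDatabaseClient({
  host: 'localhost', port: 8000,
  user: 'admin', password: 'admin', authType: 'DIGEST'
});
const qb = marklogic.queryBuilder;

const pageLength = 100;                        // tune to your document sizes
const out = fs.createWriteStream('export.txt');

function exportPage(pageStart) {
  const page = db.documents.query(
      // qb.collection('example') stands in for your real query;
      // slice() is assumed here to take a one-based page start and a page length
      qb.where(qb.collection('example')).slice(pageStart, pageLength)
    )
    .stream('chunked');

  let bytes = 0;
  page.on('data', function (chunk) { bytes += chunk.length; });

  // Keep the file open across pages; end() is called explicitly below.
  page.pipe(out, { end: false });

  page.on('end', function () {
    if (bytes > 0) {
      exportPage(pageStart + pageLength);      // read the next page
    } else {
      out.end();                               // nothing came back: close the file
    }
  });

  page.on('error', function (error) {
    console.error(error);
    out.end();
  });
}

exportPage(1);
```

Because pipe() handles backpressure for each page and only one page is requested at a time, the process should never hold more than a page's worth of data in memory.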
Hope that helps.