Ideally I need two jobs. One to gather data from an external URL and a second to process it.
I don't want to gather the data on every processing run, to reduce load on the external system. The output data needs processing regularly, as it's a rule engine focussed on time.
However, I'm not really sure how to pass the output array from the first scheduled job to the second... here's an example of what I'm trying to do:
require 'open-uri'

SCHEDULER.every '10m', :first_in => 0 do |job|
  url = 'http://sample.com'
  output = []
  open(url) do |data|
    data.each_line { |line| output.push(line) }
  end
end

SCHEDULER.every '1m', :first_in => 0 do |job|
  # Code to process output (from first scheduler)
end
Any help would be hugely appreciated!
Thanks
You could use a global variable (here $output) and tell the jobs to use the same mutex (here "pipe0", just a random name I chose). Something like: