I'm using the watchdog Python package to monitor filesystem changes. I use it to detect when files are created, modified, or deleted and to synchronize them to cloud storage.
import time

from watchdog.events import LoggingEventHandler
from watchdog.observers import Observer


class MyHandler(LoggingEventHandler):
    # Counter to see how many events are processed
    events = 0

    def on_any_event(self, event):
        self.events += 1
        # Simulate some processing time
        time.sleep(0.1)
        print(event.src_path, event.event_type, self.events, flush=True)


class MyObserver(Observer):
    def on_thread_stop(self):
        print("Stopping thread")
        print("Remaining events:", self.event_queue.qsize())
        # This hangs forever: the dispatcher loop has already been
        # signalled to stop, so the queued events are never consumed.
        self.event_queue.join()
        return super().on_thread_stop()


if __name__ == "__main__":
    observer = MyObserver()
    observer.schedule(MyHandler(), "test_dir", recursive=True)
    # Tried to make the thread non-daemon, but it still stops processing events
    # observer.daemon = False
    observer.start()
    try:
        print("Writing 100 times to file")
        for i in range(100):
            with open("test_dir/test.txt", "a") as f:
                f.write("test\n")
    finally:
        # At this point, at most a couple of events have been processed.
        print("Stopping observer")
        observer.stop()
        observer.join()
    print("Done")
I've noticed that when the observer is stopped, it stops all activity, both producing and consuming events, which is the desired default behaviour. What I'm looking for is a way to make the observer, when stopped, stop generating new events but keep processing the events already in the queue. I've already tried working with the queue directly by calling join() on it, but that just hangs: the observer thread has already been signalled to stop, so it never dispatches the remaining events. I suspect the way to do this is to extend on_thread_stop in the MyObserver class, but I'm not sure how to keep the event processing going.
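For reference, this is roughly the shutdown sequence I'm after, pieced together from reading watchdog's source. It's only a sketch of my current understanding, not working code: I'm assuming that observer.emitters exposes the producer threads, that stopping them prevents new events from being queued, and that the dispatch loop calls task_done() on the queue so that event_queue.join() returns once the backlog is drained:

    # Hypothetical replacement for the finally block above.
    print("Stopping emitters")
    for emitter in list(observer.emitters):
        emitter.stop()               # stop producing new events...
        emitter.join()               # ...and wait for the producer threads to exit
    observer.event_queue.join()      # block until every queued event is dispatched
    print("Stopping observer")
    observer.stop()                  # only now stop the dispatcher thread
    observer.join()

The idea behind the ordering is that the producers die first while the dispatcher keeps running, so the queue can only shrink, and stop() is deferred until the queue reports empty. I don't know whether emitters is meant to be used this way, or whether there is a supported API for this.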