HTTPWatch IE Automation via Ruby out of memory error


I am using the HTTPWatch Ruby script to automate Internet Explorer and crawl a website looking for broken links (see here for information on the Ruby site spider script). After a while the HTTPWatch plug-in fails with the following error:

Get Cache Object failed # 1. len = 2048 url = http://domainname/dckh1h0mntja0m8xa1qugzm3n_59c9/dbs.gif?&dcsdat=1284571577008&dcssip=domainname&dcsuri=/closet_detail.aspx&dcsqry=%3Fid=34200&WT.co_f=10.10.30.9-90436560.30102765&WT.vt_sid=10.10.30.9-90436560.30102765.1284565529237&WT.tz=-4&WT.bh=13&WT.ul=en-us&WT.cd=16&WT.sr=1680x1050&WT.jo=Yes&WT.ti=Generics%2520%2526%2520Super%2520Man%2520Center%25E2%2580%2594Testing...&WT.vt_f_tlh=1284571573 Error = 8 : Not enough storage is available to process this command.

Line 858 source.cpp hr = 0x80070008

(A MiniDump has already been written by this process to )

SafeTerminate Version: 7.0.26

When I look in Task Manager, iexplore.exe is using about 1.5 GB of memory. Is this a problem with the cache filling up, or with the URL being too long? Does anyone have any suggestions?

Answer by Sephrial (accepted):

OK, it looks like I was able to answer my own question. Since HTTPWatch is an IE plug-in, it looked as though Internet Explorer itself was running out of memory; in fact it is the in-memory HTTPWatch log that grows so large. The workaround is to dump the HTTPWatch log at an interval using Save() and then Clear().
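The save-and-clear pattern can be sketched in plain Ruby. This is a minimal, plugin-agnostic sketch, not the original spider script: the helper name `crawl_with_flush`, the flush interval, and the `crawl_*.hwl` filenames are assumptions, and the HTTPWatch plug-in's Save()/Clear() calls are passed in as callables so the flushing logic is shown without requiring IE or the COM object.

```ruby
# Sketch of the workaround: visit each URL, and after every
# `flush_every` pages persist the accumulated log to disk and
# clear it, so the in-memory log never grows without bound.
# With the real HTTPWatch plug-in object, `save` would wrap its
# log Save() call and `clear` its Clear() call.
def crawl_with_flush(urls, flush_every:, visit:, save:, clear:)
  urls.each_with_index do |url, i|
    visit.call(url)                     # crawl one page
    if (i + 1) % flush_every == 0
      save.call("crawl_#{i + 1}.hwl")   # dump the log so far
      clear.call                        # release the in-memory log
    end
  end
end
```

With a batch size of, say, 100 pages, the log files stay a manageable size and IE's memory use should plateau instead of climbing toward the 1.5 GB seen in Task Manager.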