Originally Posted by iatheia
Hmm, significantly better. At least now one can tell if there was a failure.
However, having had a chance to test it during actual downtime: since, unlike that fake website, it tries and fails to collect metadata, it does not leave an entry in the "Adding jobs for URLs" log. At best it puts "Unsuccessful:Unknown" at the end, and it writes nothing at all if there were no successful stories in the same batch.
FFDL PI does an initial metadata collection fetch for each URL before the background subprocess is even started. (This is necessary for a variety of reasons, including needing to check and maybe collect user/pass.)
If that fails, the URL is skipped in the subprocess. If all URLs fail, the subprocess doesn't even start. The case you're talking about is where there are other downloads that did start.
I can see how that can be confusing if you're expecting it in the job log. The attached version includes the pre-failed URLs as if the subprocess job had run them, plus I added the URLs to the success/fail lists at the end of the job log. How's that? Or would just the URLs in the success/fail lists be better?
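Roughly, the flow described above works like this. This is only an illustrative sketch in Python with invented names; it is not the plugin's actual code:

```python
# Hypothetical sketch of the pre-fetch / subprocess flow described above.
# collect_metadata and start_jobs are made-up names, not FFDL PI's real API.

def collect_metadata(url):
    """Stand-in for the initial metadata fetch; raises on failure."""
    if "fail" in url:
        raise IOError("site down")
    return {"url": url, "title": "Example"}

def start_jobs(urls):
    good, pre_failed = [], []
    for url in urls:
        try:
            # Initial fetch happens here, before any subprocess starts
            # (also where user/pass would be checked and collected).
            good.append(collect_metadata(url))
        except IOError:
            pre_failed.append(url)  # skipped by the subprocess later

    if not good:
        # All URLs failed up front: the background subprocess never starts.
        return [], pre_failed

    # The subprocess would run here for the `good` URLs; the pre-failed
    # URLs are carried along so they still appear in the job log.
    return good, pre_failed

good, failed = start_jobs(["http://ok.example", "http://fail.example"])
print(len(good), len(failed))  # 1 1
```

The point of carrying `pre_failed` through to the return value is that the job log can then list every URL, including the ones that never reached the subprocess.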
UPDATE Jan 14, 2013 - Remove obsolete beta version.