Quote:
Originally Posted by j.p.s
Systems that support FUSE can use httpfs.
Interesting idea.

But as far as I can tell, that only gives you a faster way to "download" individual files. While interesting, it isn't directly applicable to the goal of downloading many files in a batch job.
Quote:
Originally Posted by crane3
If it is not an FTP server, then it should be another type of "server" with the idea the "server" just provides availability of some files. Even using a browser's download option is getting files from a server. Copying files from directory A to directory B may have the directory A be considered as a "server".
Well, I thought you were trying to say "use FTP's ability to transfer a whole directory structure, as communicated by the FTP protocol".
That approach falls entirely flat if you only have an HTTP server available, because you cannot trawl the server's filesystem hierarchy over HTTP.
As I suggested in the first place, the most likely solution is going to be something like wget, which can fetch an index page and recursively follow its links to download the desired PDFs (e.g. wget -r -l1 -nd -A pdf against the index URL).
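For anyone curious what wget is actually doing under the hood, here's a rough Python sketch of the link-harvesting step: parse an index page and collect the PDF links it points to. The index HTML and the example.com URL below are made up for illustration; a real script would fetch the page over the network first.

```python
# Sketch of the link-harvesting step behind "wget -r -A pdf":
# parse an HTML index page and collect absolute URLs of the PDFs it links to.
from html.parser import HTMLParser
from urllib.parse import urljoin

class PdfLinkCollector(HTMLParser):
    """Collects hrefs ending in .pdf, resolved against the page's base URL."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            if href.lower().endswith(".pdf"):
                self.links.append(urljoin(self.base_url, href))

# Hypothetical index page content, stands in for a fetched server response.
index_html = """
<html><body>
<a href="report1.pdf">Report 1</a>
<a href="notes.txt">Notes</a>
<a href="papers/report2.pdf">Report 2</a>
</body></html>
"""

collector = PdfLinkCollector("http://example.com/index.html")
collector.feed(index_html)
print(collector.links)
# Each collected URL could then be fetched, e.g. with urllib.request.urlretrieve.
```

The point is that this only works because the server publishes an index page with links; without that (or FTP-style directory listings), there is nothing to recurse over.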