Arrrgh!
In an ideal world you could copy arbitrary files directly into the database. In a sense, with the more complex Client/Server databases you can, by way of an arbitrary-size binary blob type, but the performance is atrocious and you still have to import/export. SQLite doesn't really handle it either. I tried it and concluded that a "black box" set of files is far better: the database then just holds the filename, and import and export are a simple OS file copy.
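To illustrate that "black box" approach, here is a minimal Python sketch; the table name, directory name and schema are my own inventions for the example, not from any real system. The database stores only the filename, and import/export is nothing more than an OS file copy.

[code]
# Minimal sketch of the "black box" approach: the database stores only a
# filename, the file itself sits untouched in a plain directory, and
# import/export is an ordinary OS file copy. Table and directory names
# here are illustrative only.
import shutil
import sqlite3
from pathlib import Path

STORE = Path("resource_store")          # where the opaque files live
STORE.mkdir(exist_ok=True)

db = sqlite3.connect("library.db")
db.execute("CREATE TABLE IF NOT EXISTS resources "
           "(id INTEGER PRIMARY KEY, filename TEXT)")

def import_file(src: str) -> int:
    """Copy a file into the store and record only its name in the database."""
    dest = STORE / Path(src).name
    shutil.copy2(src, dest)             # plain file copy, file stays a black box
    cur = db.execute("INSERT INTO resources (filename) VALUES (?)", (dest.name,))
    db.commit()
    return cur.lastrowid

def export_file(resource_id: int, target_dir: str) -> Path:
    """Export is just another file copy driven by the stored filename."""
    (filename,) = db.execute("SELECT filename FROM resources WHERE id = ?",
                             (resource_id,)).fetchone()
    dest = Path(target_dir) / filename
    shutil.copy2(STORE / filename, dest)
    return dest
[/code]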
I'd not have used a big GUID like 99b1d595-2f03-40d0-89d6-01e7a5ed20d0/881d3ce3-458e-4145-928f-cd3fbffb76af-0012003, but simply the creation date (ISO, so 20240603 for today) with a serial-ID suffix for the Nth item created. I'd use that for the AuthorID directory, then a subdirectory named with an ISO date + serial ID for the title, and then the resources in that.
But the current scheme works fine.
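For what it's worth, a rough sketch of the date + serial naming I describe above; the in-memory counter and the directory names are purely illustrative, a real system would persist the counter.

[code]
# Sketch of the ISO-date + serial-ID scheme: an ID like 20240603-0001,
# used once for the AuthorID directory and again for the title
# subdirectory that holds the resources. Counter handling is illustrative.
from datetime import date
from itertools import count
from pathlib import Path

_serial = count(1)                      # a real system would persist this

def date_serial_id() -> str:
    """Return an ID like 20240603-0001 (ISO date + Nth-created suffix)."""
    return f"{date.today():%Y%m%d}-{next(_serial):04d}"

# Layout: <root>/<author id>/<title id>/<resource files>
root = Path("archive")
author_dir = root / date_serial_id()        # e.g. archive/20240603-0001
title_dir = author_dir / date_serial_id()   # e.g. .../20240603-0002
title_dir.mkdir(parents=True, exist_ok=True)
[/code]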
EDIT
Also, the library software I wrote for books and video of course used barcodes. You had to add a barcode to the physical object, as you might have multiple copies of the same book or VHS tape, so you can't use the ISBN or EAN etc.
The document management system added a barcode to any document it printed. Any arbitrary document could be scanned and become a multi-page TIFF (as we had those from a fax server anyway), though I might be wrong on that. If the document was from a specific categorised source, the scanning software would decode the barcode from the captured page to automatically index it. It was over 20 years ago, so I don't remember all the details. An archive export for CD generated a static set of catalogues and indexes that worked in a browser with no JavaScript, so archives from it would work on any browser or OS.
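Purely as an illustration of that kind of static, JavaScript-free export (the file layout and names here are guesses for the example, not how the original system worked), something along these lines would do it:

[code]
# Hedged sketch: walk an exported archive directory and write a plain HTML
# catalogue that any browser can open with no JavaScript. File layout and
# names are assumptions for the example only.
import html
from pathlib import Path

def write_index(archive_dir: str) -> None:
    root = Path(archive_dir)
    items = []
    for doc in sorted(root.glob("*/*.tif")):        # e.g. 20031104-0007/doc.tif
        rel = doc.relative_to(root).as_posix()
        items.append(f'<li><a href="{html.escape(rel)}">{html.escape(rel)}</a></li>')
    (root / "index.html").write_text(
        "<html><body><h1>Archive catalogue</h1>\n<ul>\n"
        + "\n".join(items)
        + "\n</ul>\n</body></html>\n",
        encoding="utf-8",
    )
[/code]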
The live system was a multi-user Client/Server setup with MS-SQL and a VB6-based client. The files were not directly accessible to users, and all were named with just an ISO date and a serial-number suffix.