03-23-2023, 04:18 PM   #23
CasualBookworm
Junior Member
Posts: 3
Karma: 168
Join Date: Mar 2023
Device: Kobo Aura Edition 2
Quote:
Originally Posted by qkqw
You might be able to use $(($convertedf + 1)) instead of expr here.
Good call! Shows how often I write shell scripts.
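For anyone following along, a minimal illustration of the difference (reusing the $convertedf variable from the quote above; everything else here is just for the example):

Code:
# Shell built-in arithmetic expansion; no external process is spawned
convertedf=0
convertedf=$((convertedf + 1))

# The slower equivalent, which forks expr on every increment:
convertedf=$(expr "$convertedf" + 1)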

Quote:
Originally Posted by qkqw
Instead of having a separate directory for this, would it be possible to store a hidden file inside the article folder, ie. /mnt/onboard/.adds/pocket/1234567890/.processed? That way Kobo should remove the whole folder once you archive/delete an article and the script could be simplified by ignoring those folders with such a file.
I'd initially shied away from putting the "flag" file in the article's directory since I wasn't sure how the device would react, but I tested it out and the whole folder does get cleaned up after an archive/delete. That definitely solves the issue of storing flags for long-deleted articles.
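A rough sketch of how that per-article flag could work, assuming the article folders sit directly under /mnt/onboard/.adds/pocket and reusing the .processed name from the quote above (the conversion step is just a placeholder):

Code:
POCKET_DIR=/mnt/onboard/.adds/pocket

for article in "$POCKET_DIR"/*/; do
    # Skip folders we've already handled; since the flag lives inside the
    # article folder, Kobo removes it along with the article on archive/delete.
    [ -f "$article/.processed" ] && continue

    # ... resize/convert the article's images here ...

    touch "$article/.processed"
done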

Quote:
Originally Posted by qkqw
I rarely have more than 20-30 articles on my Kobo, so I never ran into a problem here. In any case, the IM identify command should be quite fast, even with many files to check?
I have something ridiculous like 2,000 articles on mine, so re-checking the files can be pretty slow. In some quick testing on my Aura Edition 2, just running identify on 61 articles with a total of 375 files took about 30 seconds, so having a way to skip articles does seem important.
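In case anyone wants a ballpark of their own, something along these lines should reproduce that kind of measurement (the path comes from the earlier quote; the -format string and the flat */* glob are just for illustration, not from the actual script):

Code:
cd /mnt/onboard/.adds/pocket
time for img in */*; do
    # Reading just the dimensions is enough to trigger the slow part
    identify -format '%w %h\n' "$img" > /dev/null 2>&1
done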

Quote:
Originally Posted by qkqw
Regardless it does make sense to add some sort of limit. However I'd prefer a number based approach instead of a day based approach. Getting the latest X articles is quite easy, and then you could still check if those were already processed. What do you think?
The only issue I could see with getting the latest X articles is that processing a directory could update its modified time, which could bump it back to the top of the latest list. If we fetched all articles but then only processed X (i.e., skipped articles wouldn't count against the limit), that definitely seems like it could work!
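A minimal sketch of that idea, assuming the same .processed flag as above and a hypothetical LIMIT value (the Pocket folder names are numeric IDs, so the ls-based loop shouldn't run into word-splitting trouble):

Code:
POCKET_DIR=/mnt/onboard/.adds/pocket
LIMIT=50
count=0

# Newest-first by modification time; already-processed folders are skipped
# and don't count against the limit, so a bumped mtime doesn't matter.
for article in $(ls -1dt "$POCKET_DIR"/*/); do
    [ "$count" -ge "$LIMIT" ] && break
    [ -f "$article/.processed" ] && continue

    # ... process the article's images, then mark the folder ...
    touch "$article/.processed"
    count=$((count + 1))
done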