Old 06-14-2011, 08:57 AM   #15
Mackx
Guru
Posts: 999
Karma: 19985
Join Date: Dec 2008
Location: Netherlands
Device: iRex DR1000S
Quote:
Originally Posted by Viacheslav View Post
That's the whole problem. The point of having a reader for me is to carry all my library and reading stuff everywhere. It means over 150 000 files on a 32Gb card. That's why indexing takes forever.
That is indeed a lot of books. I do not think the iRex developers had this use case in mind when creating the new system.
Quote:
Originally Posted by Viacheslav View Post
Indexing serves no useful purpose either. All my files are arranged by title, author, language and genre. The PDFs are usually scientific papers with tagged names like "submission234" or "MTT10-3454". The descriptive names are in the filenames. The fiction books are usually in the zipped FB2 format, which the firmware does not understand and cannot index anyway. There are many DjVu files which cannot be indexed either.
As rvs already explained, the real content of the SD card is not shown directly. A representation of the SD-card content is kept in a database, which is then used by the UI. During indexing, all folders of the file system are read and compared with the content of the database, and updates (additions/removals) are made where needed.
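The scan-and-compare step described above can be sketched as a simple set difference. This is not the actual iRex implementation, just a minimal illustration of the idea: build a snapshot of the paths on the card, compare it with the paths already in the database, and derive the additions and removals.

```python
import os

def scan_filesystem(root):
    """Collect the relative path of every file under root (e.g. the SD-card mount)."""
    found = set()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            found.add(os.path.relpath(full, root))
    return found

def diff_index(fs_paths, db_paths):
    """Compare the filesystem snapshot with the database snapshot."""
    additions = fs_paths - db_paths   # files on the card but not yet in the DB
    removals = db_paths - fs_paths    # stale DB entries for deleted files
    return additions, removals

# Hypothetical example: files on the card vs. entries already indexed
on_card = {"books/a.epub", "papers/MTT10-3454.pdf"}
in_db = {"books/a.epub", "books/deleted.epub"}
adds, rems = diff_index(on_card, in_db)
# adds == {"papers/MTT10-3454.pdf"}, rems == {"books/deleted.epub"}
```

Note that both snapshots must exist at the same time for the comparison, which is exactly why the number of files drives memory use.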
Quote:
Originally Posted by Viacheslav View Post
Tried it several times with no effect. The file is OK. It's the process of making it that makes the 2.0 firmware and everything around it unusable.
I see what you mean. The database contains the filename, the path and some extra information for every file. During indexing, the filename and path of every file on the SD card and of every file in the DB are held in memory so that differences can be detected. With 150 000 files this is a huge amount of memory; I can imagine there is not even enough memory, so indexing in your case would never finish...
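A rough back-of-the-envelope estimate shows why 150 000 entries hurt on an embedded device. The average path length and per-entry overhead below are assumptions for illustration, not measured values from the iRex firmware:

```python
# Hypothetical estimate of the in-memory footprint of holding both snapshots.
NUM_FILES = 150_000
AVG_PATH_BYTES = 80          # assumed average path + filename length
PER_ENTRY_OVERHEAD = 120     # assumed per-entry container/object overhead

# Both the SD-card snapshot and the DB snapshot are held at once.
total_bytes = 2 * NUM_FILES * (AVG_PATH_BYTES + PER_ENTRY_OVERHEAD)
print(total_bytes // (1024 * 1024), "MiB")  # ~57 MiB under these assumptions
```

Tens of megabytes just for the path lists is easily more RAM than a 2008-era e-reader can spare once the UI and database engine are also running.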