04-28-2019, 02:32 AM | #1 |
Library Breeder (She/Her)
Posts: 1,159
Karma: 1900479
Join Date: Apr 2015
Location: Fullerton, California
Device: Kobo Aura HD (1) PW3 (4) PW3 2019 new edition (1)
|
Special Characters
I know that when importing a .csv (yes, I know this is a plugin) I should save it UTF-8 encoded.
Since the last time I updated Calibre, I can't seem to get those special characters to show up; I am getting the diamonds with question marks inside. I updated Java, and I am sure I am saving the files in UTF-8, but nothing fixes it. Am I missing something? Edit: I am going to bed (it's 11:30pm), so I will check back on this tomorrow and reply to any responses then. Thanks! |
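For anyone hitting this later: a quick way to confirm a .csv really was saved as UTF-8 is to try decoding its raw bytes. This is just a minimal sketch, not part of any plugin mentioned in this thread:

```python
def is_utf8(raw: bytes) -> bool:
    """Return True if the bytes decode cleanly as UTF-8."""
    try:
        raw.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

# "é" saved as UTF-8 passes; the same character saved as
# Windows cp1252 (a common Excel default) does not.
assert is_utf8("é".encode("utf-8"))
assert not is_utf8("é".encode("cp1252"))
```

To check a real file, pass it `open("books.csv", "rb").read()` (file name illustrative).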
04-28-2019, 04:20 AM | #2 |
Addict
Posts: 206
Karma: 547516
Join Date: Mar 2008
Location: Berlin, Germany
Device: Kobo Clara, Kobo Aura, PRS-T1, PB602, CyBook Gen3
|
Encoding problems, in my experience, show up as gibberish letters. If you see the replacement character, that normally means the rendering engine understands which character should be displayed, but the font in use does not contain it.
Maybe try another font with fuller Unicode coverage and see if that helps. |
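The distinction above can be shown on the data side. A U+FFFD already present in the decoded text means bytes were replaced during a lossy decode, while a wrong-encoding decode produces gibberish letters instead. A small sketch of both cases:

```python
raw = "café".encode("utf-8")  # b'caf\xc3\xa9'

# Lossy decode: the UTF-8 bytes for "é" become two U+FFFD marks.
lossy = raw.decode("ascii", errors="replace")

# Wrong encoding: no replacement marks, just gibberish ("mojibake").
mojibake = raw.decode("cp1252")

assert "\ufffd" in lossy
assert mojibake == "cafÃ©"
```

If the text itself contains no U+FFFD, the diamonds are being drawn by the renderer for missing glyphs, which points at the font rather than the file.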
04-28-2019, 01:32 PM | #3 |
Deviser
Posts: 2,265
Karma: 2090983
Join Date: Aug 2013
Location: Texas
Device: none
|
"Import CSV File to Update Metadata"
Since you already use Job Spy, try its new "Import CSV File to Update Metadata" tool instead of the one you have been using; that will tell you whether the problem is your .csv encoding, your display font, or the tool itself.
"Import CSV File to Update Metadata" is fully Unicode (UTF-8) compliant. See the images from when it was last enhanced, which include its tooltip user guide. BTW, I will be upgrading all 61 JS tools for Python 3 and Calibre 3.42+ compatibility in May. DaltonST |
05-01-2019, 06:49 PM | #4 | |
Library Breeder (She/Her)
Posts: 1,159
Karma: 1900479
Join Date: Apr 2015
Location: Fullerton, California
Device: Kobo Aura HD (1) PW3 (4) PW3 2019 new edition (1)
|
Quote:
Most of the .csv files I import are pieces of a single larger file. I use a CSV chunker to break 3,500-to-5,000-line files into 100-line pieces, and only the first piece contains the headers. With the Import List plugin, I set up the columns once and each subsequent import doesn't need to be reconfigured. Also, I am not sure how to import multiple columns at once with your tool; it looks like I can only import one named column at a time. Let me try it again and see if I am doing something wrong. Regardless, unless I can import files without headers, it won't work for my needs.

The GUI for Import List is much easier than the GUI for the Job Spy CSV import. I wish I knew how to use yours better; it seems to offer a lot more than simple list importing. With the Import List plugin, I match up columns in my library with the columns (or column numbers, if there are no headers) in the .csv, then I match up books in my library and select exactly which library columns to update with the data from the .csv. It seems like your import doesn't allow that kind of choice: if I am reading the instructions right, I match up only a single column, and the import feature then matches it to the books in the library and makes the changes automatically. I want more control over which books get changed with what metadata. Most of my .csv files contain titles, series, series indexes, bookshelves, dates (added, read, updated, Goodreads), genres, tags, etc. I need to be able to manually choose which books are matched and manually select which columns to update.
Last edited by Rellwood; 05-01-2019 at 06:58 PM. Reason: more explanations |
|
05-01-2019, 06:51 PM | #5 | |
Library Breeder (She/Her)
Posts: 1,159
Karma: 1900479
Join Date: Apr 2015
Location: Fullerton, California
Device: Kobo Aura HD (1) PW3 (4) PW3 2019 new edition (1)
|
Quote:
|
|
05-01-2019, 07:19 PM | #6 | |
Deviser
Posts: 2,265
Karma: 2090983
Join Date: Aug 2013
Location: Texas
Device: none
|
Quote:
You are mistaken about everything you said, with one exception: each .csv must have a column header for every column, so multiple header sets mean multiple .csv files. That tool is incredibly flexible, offers total and granular control, and uses dropdowns to make it easy. View the tooltips, and view the images in the JS thread. |
|
05-01-2019, 07:21 PM | #7 | |
Library Breeder (She/Her)
Posts: 1,159
Karma: 1900479
Join Date: Apr 2015
Location: Fullerton, California
Device: Kobo Aura HD (1) PW3 (4) PW3 2019 new edition (1)
|
Quote:
However, the requirements that every column have a header and that multiple header sets mean multiple .csv files won't work for me in most cases. Goodreads is an example. Although I use the sync plugin for some of my shelves, I have far too many to set up rules and use the sync/add plugin, and I don't want to ruin my library if a rule or sync mistake updates everything. So I end up using the "export library" .csv file generated from the Goodreads website. (If you don't know what I am talking about, it is listed at the bottom of the bookshelf column on the "My Books" page, under "import/export library".)

I have almost 4,000 books shelved. The exported .csv has about 8 to 10 columns that I import after editing it. Because it isn't a good idea to import such a large file, I chunk it into 100-line pieces, but the CSV chunker program I use doesn't create new column headers for each piece. So to use your program, I would have to either go into each of the roughly 40 pieces and manually create new column headers, or remember which bit of data fits in which column (I can tell what a title is without the header) and reset the program for each column of each piece, 8 to 10 times per piece. I apologize, but that will not do. I will keep it in mind for other situations, though.
Last edited by Rellwood; 05-01-2019 at 07:37 PM. |
|
05-01-2019, 08:15 PM | #8 |
Deviser
Posts: 2,265
Karma: 2090983
Join Date: Aug 2013
Location: Texas
Device: none
|
Tanjamuse uses 14,000-row .csv files with the JS tool. There is no reason to chunk them up.
|
05-01-2019, 09:59 PM | #9 |
Library Breeder (She/Her)
Posts: 1,159
Karma: 1900479
Join Date: Apr 2015
Location: Fullerton, California
Device: Kobo Aura HD (1) PW3 (4) PW3 2019 new edition (1)
|
I chunk them up so that they are more manageable to import into Calibre. That way I am not matching up 1,500 books at once (the ones that won't match without help), and I am not waiting forever while 4,000 books update the database. I have sat and watched the .db journal file take as long as 5 seconds per book to update the database sometimes. I am not doing that with 4,000 books.
:-) I know you are trying to help me here, and I thank you so much for it! :-) |
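On the "5 seconds per book" observation: SQLite journal flushes happen per transaction, so the cost pattern depends on how updates are batched. This is not Calibre's actual code, just a generic `sqlite3` illustration of one transaction covering many updates instead of one commit per book:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (id INTEGER PRIMARY KEY, tags TEXT)")
conn.executemany("INSERT INTO books (tags) VALUES (?)", [("",)] * 4000)
conn.commit()

# One transaction for all 4000 updates: the journal is flushed once
# at commit, rather than once per row.
with conn:
    conn.executemany("UPDATE books SET tags = ? WHERE id = ?",
                     [("fantasy", i) for i in range(1, 4001)])

count = conn.execute(
    "SELECT COUNT(*) FROM books WHERE tags = 'fantasy'"
).fetchone()[0]
assert count == 4000
```

Whether a given import tool batches this way is up to that tool; the sketch only shows why per-row commits can be dramatically slower on the same data.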
05-01-2019, 10:00 PM | #10 |
Library Breeder (She/Her)
Posts: 1,159
Karma: 1900479
Join Date: Apr 2015
Location: Fullerton, California
Device: Kobo Aura HD (1) PW3 (4) PW3 2019 new edition (1)
|
|
05-01-2019, 10:43 PM | #11 |
Bibliophagist
Posts: 35,452
Karma: 145525534
Join Date: Jul 2010
Location: Vancouver
Device: Kobo Sage, Forma, Clara HD, Lenovo M8 FHD, Paperwhite 4, Tolino epos
|
|
|
Similar Threads | ||||
Thread | Thread Starter | Forum | Replies | Last Post |
Special Characters | abbotrichard | ePub | 4 | 07-01-2011 06:03 PM |
Content Special Characters in Collections | bear4hunter | Amazon Kindle | 2 | 08-06-2010 07:11 PM |
special characters in epub? | biltron | Introduce Yourself | 5 | 12-20-2009 03:50 PM |
Epub and special characters again | mtravellerh | Calibre | 3 | 01-04-2009 12:55 PM |
REFERENCE: Special Characters | nrapallo | IMP | 2 | 04-07-2008 01:29 PM |