08-29-2009, 10:17 AM | #1 |
Junior Member
Posts: 2
Karma: 10
Join Date: Aug 2009
Device: Sony PRS-505
ebook-convert recipe "cannot connect to X server"
I'm trying to run ebook-convert to generate an EPUB from a news feed, using a downloaded built-in recipe, on a headless Linux server (Ubuntu 8.04 Server). It fails with "cannot connect to X server".
Version of calibre: 0.6.10

Command:
    # ebook-convert times-online.recipe times-online.epub

After downloading articles (34%), the error is generated:
    34% Article downloaded: u'At leisure: Sailing brings Frank Martin relief from recession'
    : cannot connect to X server

Attempting to convert a single local HTML file works OK, i.e.:
    # ebook-convert index.html index.epub

Any help appreciated. Thanks, Nigel
08-29-2009, 11:10 AM | #2 |
creator of calibre
Posts: 44,334
Karma: 23661992
Join Date: Oct 2006
Location: Mumbai, India
Device: Various
EDIT: IIRC the recipe subsystem generates a default cover when none is available, and this requires an X server.
08-29-2009, 03:41 PM | #3 |
Junior Member
Posts: 2
Karma: 10
Join Date: Aug 2009
Device: Sony PRS-505
Thanks for the pointer; it looks like I can work around it by providing a cover URL.
Nigel
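For reference, the workaround amounts to giving the recipe a cover URL so calibre never has to render a default cover. A minimal sketch follows; the class name and URL are placeholders, and a real recipe subclasses calibre's BasicNewsRecipe (stubbed out here so the snippet runs without calibre installed):

```python
class CoverOnlyRecipe:
    """Stand-in for calibre.web.feeds.recipes.BasicNewsRecipe, so this
    sketch is self-contained; in a real recipe, subclass BasicNewsRecipe."""
    title = 'Times Online'

    def get_cover_url(self):
        # Returning any reachable image URL (or a local file:// path)
        # means the recipe system skips default-cover generation,
        # which is the step that needs an X server.
        return 'http://example.com/cover.jpg'


print(CoverOnlyRecipe().get_cover_url())
```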
09-26-2010, 05:37 PM | #4 |
Junior Member
Posts: 7
Karma: 10
Join Date: Sep 2010
Device: Kindle 3
I have the same problem but at toc generation time.
Calibre version: 0.6.13

Command:
    ebook-convert myrecipe.recipe output.mobi -vv --test

Here is the end of the output:
    Creating MOBI Output...
    67% Creating MOBI Output
    Generating in-line TOC...
    Applying case-transforming CSS...
    Parsing manglecase.css ...
    Parsing tocstyle.css ...
    : cannot connect to X server

The DISPLAY variable is not set, and I'm using a local image for the cover, which is available and which I can see is copied over to the HTML output.

Here is the recipe:

import string, re
from calibre import strftime
from time import strptime
from calibre.web.feeds.recipes import BasicNewsRecipe
from calibre.ebooks.BeautifulSoup import BeautifulSoup

class Standartnewa(BasicNewsRecipe):
    title = 'StandartNews'
    remove_tags_before = dict(name=['h2', 'h3'], attrs={'class':['article_title', 'article']})
    remove_tags_after = dict(name='div', attrs={'id':'article_content'})
    recursions = 0
    no_stylesheets = True
    language = 'bg'
    description = 'News from Bulgaria'
    category = 'news, Bulgaria, BG, world'
    publisher = 'Standartnews'
    extra_css = '.kare {BORDER-BOTTOM: #8d93a5 1px solid; PADDING-BOTTOM: 5px; BACKGROUND-COLOR: #eeeff2; PADDING-LEFT: 5px; PADDING-RIGHT: 5px; MARGIN-BOTTOM: 10px; BORDER-TOP: #8d93a5 1px solid; PADDING-TOP: 5px} .cl { OVERFLOW: hidden } .cl:after { DISPLAY: block; HEIGHT: 0px; VISIBILITY: hidden; CLEAR: both} .img_article { BORDER-BOTTOM: #e3e4e9 1px solid; BORDER-LEFT: #e3e4e9 1px solid; PADDING-BOTTOM: 3px; MARGIN: 0px 10px 10px 0px; PADDING-LEFT: 5px; PADDING-RIGHT: 5px; COLOR: #8d93a5; FONT-SIZE: 11px; BORDER-TOP: #e3e4e9 1px solid; BORDER-RIGHT: #e3e4e9 1px solid; PADDING-TOP: 5px; WIDTH: 300px } .img_article img { MARGIN: 0px 0px 3px } .fl { FLOAT: left } '
    remove_attributes = ['font', 'style']
    remove_tags = [dict(name='hr')]
    preprocess_regexps = [
        (re.compile(r'(<div class="img_article(?<!</div>).*</div>)', re.IGNORECASE),
         lambda match: match.group(0) + '<br>'),
    ]
    conversion_options = {
        'comments'         : description,
        'tags'             : category,
        'language'         : language,
        'publisher'        : publisher,
        'linearize_tables' : True
    }

    def get_cover_url(self):
        return 'file:///home/weasal/news/cover.jpg'

    def parse_index(self):
        soup = self.index_to_soup('http://paper.standartnews.com/bg/index.php')

        def feed_title(div):
            return ''.join(div.findAll(text=True, recursive=False)).strip()

        articles = {}
        key = None
        ans = []
        for div in soup.findAll(True, attrs={'id':'left'}):
            for link in div.find('div').findAll('a', attrs={'href': re.compile('^category.php.*')}):
                if link.has_key('class'):
                    key = '--' + string.capwords(feed_title(link))
                else:
                    key = string.capwords(feed_title(link))
                articles[key] = []
                ans.append(key)
                cat = self.index_to_soup('http://paper.standartnews.com/bg/' + link['href'])
                a = cat.find('a', { "class" : "read" })
                if not a:
                    continue
                catMain = self.index_to_soup('http://paper.standartnews.com/bg/' + a['href'])
                for article in catMain.find('div', { "class" : "right" }).find('ul', { "class" : "addonnews" }).findAll('a', href=True):
                    url = 'http://paper.standartnews.com/bg/' + article['href']
                    title = self.tag_to_string(article, use_alt=True).strip()
                    pubdate = strftime('%a, %d %b', strptime(re.search('article.php\?d=(\d\d\d\d-\d\d-\d\d)\&article=\d+', article['href']).group(1), '%Y-%m-%d'))
                    if not articles.has_key(key):
                        articles[key] = []
                    articles[key].append(dict(title=title, url=url, date=pubdate, description='', content=''))
        ans = [(key, articles[key]) for key in ans if articles.has_key(key)]
        return ans

Last edited by weasal; 09-26-2010 at 05:49 PM.
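Aside: the pubdate extraction in parse_index can be exercised in isolation, without calibre or network access. The href below is illustrative, and time.strftime stands in for calibre's strftime wrapper:

```python
import re
from time import strftime, strptime

# An href in the shape the recipe's regex expects (illustrative values)
href = 'article.php?d=2010-09-26&article=12345'

# Pull out the YYYY-MM-DD date component, then reformat it
m = re.search(r'article\.php\?d=(\d{4}-\d\d-\d\d)&article=\d+', href)
pubdate = strftime('%a, %d %b', strptime(m.group(1), '%Y-%m-%d'))
print(pubdate)  # e.g. 'Sun, 26 Sep' in the C locale
```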
09-26-2010, 06:25 PM | #5 | |
US Navy, Retired
Posts: 9,865
Karma: 13806776
Join Date: Feb 2009
Location: North Carolina
Device: Icarus Illumina XL HD, Nexus 7
Update to the current version; if you still have problems, post again. Since your version is so old, make sure you do a complete uninstall before upgrading to the current version.

Update: Custom recipe questions are addressed in this sub-forum.

Last edited by DoctorOhh; 09-26-2010 at 06:33 PM.
09-27-2010, 11:49 AM | #6 |
creator of calibre
Posts: 44,334
Karma: 23661992
Join Date: Oct 2006
Location: Mumbai, India
Device: Various
You need an X server; use Xvfb.
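A minimal sketch of that suggestion, assuming a Debian/Ubuntu system where the virtual framebuffer X server ships as the xvfb package with an xvfb-run wrapper script (the recipe and output names are from the first post):

```shell
# Install the virtual framebuffer X server (Debian/Ubuntu package name)
sudo apt-get install xvfb

# Run the conversion inside a throwaway virtual display, so calibre's
# cover/TOC rendering finds an X server even on a headless box
xvfb-run ebook-convert times-online.recipe times-online.epub
```

Alternatively, start Xvfb yourself (e.g. `Xvfb :99 &`) and export `DISPLAY=:99` before invoking ebook-convert.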