#16
Junior Member
Posts: 2
Karma: 10
Join Date: Dec 2009
Device: Kindle
Index by sub-sections
Here is a patch to create an index by sub-sections. (Read the diff with the < lines as the new find_sections() body and the > lines as the three-line loop they replace.)
Code:
87,96c87,89
<
<         for ul in idx.findAll('ul', recursive=False):
<             for li in ul.findAll('li', recursive=False):
<                 s = li.find('strong', attrs={'class':'book'})
<                 if s is not None:
<                     a1 = s.find('a', href=True)
<                     section_title = self.tag_to_string(a1)
<                     for a in li.findAll('a', attrs={'class':'book-section'}, href=True):
<                         sub_section_title = section_title.strip() + " : " + self.tag_to_string(a).strip()
<                         yield (sub_section_title, a['href'])
---
>         for s in idx.findAll('strong', attrs={'class':'book'}):
>             a = s.find('a', href=True)
>             yield (self.tag_to_string(a), a['href'])
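If you want to see what the nested loop produces before touching your recipe, here is a minimal standalone sketch (Python 2, using the BeautifulSoup 3 package that calibre bundles; the HTML fragment and its URLs are made up for illustration, the live page is only assumed to have this shape):
Code:
# A throwaway sketch, not part of the recipe. The fragment below is invented --
# the live /theguardian index page is only assumed to look like this:
# div#book-index > ul > li, with the section title in strong.book and the
# sub-sections as a.book-section links inside the same li.
from BeautifulSoup import BeautifulSoup

html = '''
<div id="book-index">
  <ul>
    <li>
      <strong class="book"><a href="/theguardian/mainsection">Main section</a></strong>
      <ul>
        <li><a class="book-section" href="/theguardian/mainsection/topstories">Top stories</a></li>
        <li><a class="book-section" href="/theguardian/mainsection/international">International</a></li>
      </ul>
    </li>
  </ul>
</div>
'''

idx = BeautifulSoup(html).find('div', id='book-index')
for ul in idx.findAll('ul', recursive=False):
    for li in ul.findAll('li', recursive=False):
        s = li.find('strong', attrs={'class': 'book'})
        if s is not None:
            section_title = s.find('a', href=True).string.strip()
            for a in li.findAll('a', attrs={'class': 'book-section'}, href=True):
                # Same "Section : Sub-section" titles the patched recipe yields
                print section_title + " : " + a.string.strip(), a['href']
It should print one "Main section : Top stories"-style line per sub-section link, which is exactly the (title, url) pair the patched find_sections() hands to parse_index().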
#17
Connoisseur
Posts: 65
Karma: 10
Join Date: Apr 2009
Device: Sony PRS505
Woohoo! Do I just copy and paste that into the recipe? Does it matter where I insert it?
#18
Junior Member
Posts: 2
Karma: 10
Join Date: Dec 2009
Device: Kindle
You can replace the whole recipe with the following code; the patch above only changes the body of find_sections(). Remember to back up the existing recipe first, as I haven't tested this thoroughly yet, and you may want to go back if you don't like the way the result comes out. Code:
#!/usr/bin/env python
__license__   = 'GPL v3'
__copyright__ = '2008, Kovid Goyal kovid@kovidgoyal.net'
__docformat__ = 'restructuredtext en'

'''
www.guardian.co.uk
'''

from calibre import strftime
from calibre.web.feeds.news import BasicNewsRecipe

class Guardian(BasicNewsRecipe):

    title = u'The Guardian'
    __author__ = 'Seabound and Sujata Raman'
    language = 'en_GB'

    oldest_article = 7
    max_articles_per_feed = 100
    remove_javascript = True

    timefmt = ' [%a, %d %b %Y]'
    keep_only_tags = [
        dict(name='div', attrs={'id':["content","article_header","main-article-info",]}),
    ]
    remove_tags = [
        dict(name='div', attrs={'class':["video-content","videos-third-column"]}),
        dict(name='div', attrs={'id':["article-toolbox","subscribe-feeds",]}),
        dict(name='ul', attrs={'class':["pagination"]}),
        dict(name='ul', attrs={'id':["content-actions"]}),
    ]
    use_embedded_content = False

    no_stylesheets = True
    extra_css = '''
        .article-attributes{font-size: x-small; font-family:Arial,Helvetica,sans-serif;}
        .h1{font-size: large; font-family:georgia,serif; font-weight:bold;}
        .stand-first-alone{color:#666666; font-size:small; font-family:Arial,Helvetica,sans-serif;}
        .caption{color:#666666; font-size:x-small; font-family:Arial,Helvetica,sans-serif;}
        #article-wrapper{font-size:small; font-family:Arial,Helvetica,sans-serif; font-weight:normal;}
        .main-article-info{font-family:Arial,Helvetica,sans-serif;}
        #full-contents{font-size:small; font-family:Arial,Helvetica,sans-serif; font-weight:normal;}
        #match-stats-summary{font-size:small; font-family:Arial,Helvetica,sans-serif; font-weight:normal;}
    '''

    feeds = [
        ('Front Page', 'http://www.guardian.co.uk/rss'),
        ('Business', 'http://www.guardian.co.uk/business/rss'),
        ('Sport', 'http://www.guardian.co.uk/sport/rss'),
        ('Culture', 'http://www.guardian.co.uk/culture/rss'),
        ('Money', 'http://www.guardian.co.uk/money/rss'),
        ('Life & Style', 'http://www.guardian.co.uk/lifeandstyle/rss'),
        ('Travel', 'http://www.guardian.co.uk/travel/rss'),
        ('Environment', 'http://www.guardian.co.uk/environment/rss'),
        ('Comment', 'http://www.guardian.co.uk/commentisfree/rss'),
    ]

    def get_article_url(self, article):
        # Skip multimedia-only items that make no sense in an e-book
        url = article.get('guid', None)
        if '/video/' in url or '/flyer/' in url or '/quiz/' in url or \
           '/gallery/' in url or 'ivebeenthere' in url or \
           'pickthescore' in url or 'audioslideshow' in url:
            url = None
        return url

    def preprocess_html(self, soup):
        # Strip inline styling so extra_css takes effect
        for item in soup.findAll(style=True):
            del item['style']
        for item in soup.findAll(face=True):
            del item['face']
        for tag in soup.findAll(name=['ul', 'li']):
            tag.name = 'div'
        return soup

    def find_sections(self):
        soup = self.index_to_soup('http://www.guardian.co.uk/theguardian')
        # find cover pic
        img = soup.find('img', attrs={'alt':'Guardian digital edition'})
        if img is not None:
            self.cover_url = img['src']
        # end find cover pic
        idx = soup.find('div', id='book-index')
        for ul in idx.findAll('ul', recursive=False):
            for li in ul.findAll('li', recursive=False):
                s = li.find('strong', attrs={'class':'book'})
                if s is not None:
                    a1 = s.find('a', href=True)
                    section_title = self.tag_to_string(a1)
                    # One index entry per sub-section, prefixed with its section
                    for a in li.findAll('a', attrs={'class':'book-section'}, href=True):
                        sub_section_title = section_title.strip() + " : " + self.tag_to_string(a).strip()
                        yield (sub_section_title, a['href'])

    def find_articles(self, url):
        soup = self.index_to_soup(url)
        div = soup.find('div', attrs={'class':'book-index'})
        for ul in div.findAll('ul', attrs={'class':'trailblock'}):
            for li in ul.findAll('li'):
                a = li.find(href=True)
                if not a:
                    continue
                title = self.tag_to_string(a)
                url = a['href']
                if not title or not url:
                    continue
                desc = ''  # default, so items without trail text still work
                tt = li.find('div', attrs={'class':'trailtext'})
                if tt is not None:
                    for da in tt.findAll('a'):
                        da.extract()
                    desc = self.tag_to_string(tt).strip()
                yield {
                    'title': title, 'url': url, 'description': desc,
                    'date': strftime('%a, %d %b'),
                }

    def parse_index(self):
        try:
            feeds = []
            for title, href in self.find_sections():
                feeds.append((title, list(self.find_articles(href))))
            return feeds
        except:
            # NotImplementedError makes calibre fall back to the RSS feeds above
            raise NotImplementedError
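If you'd rather test it before scheduling a download, you can save the code to a file and build a small sample book from the command line (a sketch only; the file names are arbitrary, and as far as I know --test restricts the fetch to a couple of feeds and articles):
Code:
ebook-convert guardian.recipe guardian_test.epub --test -vv
Otherwise, paste it over the existing code in calibre's custom news source dialog (advanced mode), after backing up the original.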