Thread: Smithsonian Mag
01-13-2013, 11:40 AM   #1
rainrdx
Connoisseur
 
Posts: 55
Karma: 13316
Join Date: Jul 2012
Device: iPad
Smithsonian Mag

I thought I had already posted this, but somehow I can't find the original post. Maybe my memory is failing me again. Anyway, this is a terribly written recipe for Smithsonian Mag, but it works.

Code:
import re
from calibre.web.feeds.recipes import BasicNewsRecipe
from collections import OrderedDict

class Smithsonian(BasicNewsRecipe):

    title       = 'Smithsonian Magazine'
    __author__  = 'Rick Shang'

    description = 'This magazine chronicles the arts, environment, sciences and popular culture of the times. It is edited for modern, well-rounded individuals with diverse, general interests. With your order, you become a National Associate Member of the Smithsonian. Membership benefits include your subscription to Smithsonian magazine, a personalized membership card, discounts from the Smithsonian catalog, and more.'
    language = 'en'
    category = 'news'
    encoding = 'UTF-8'
    keep_only_tags = [dict(attrs={'id':['articleTitle', 'subHead', 'byLine', 'articleImage', 'article-text']})]
    remove_tags = [dict(attrs={'class':['related-articles-inpage', 'viewMorePhotos']})]
    remove_javascript = True
    no_stylesheets = True

    def parse_index(self):

        # Go to the issue archive and follow the link to the current issue
        soup0 = self.index_to_soup('http://www.smithsonianmag.com/issue/archive/')
        div = soup0.find('div', attrs={'id': 'archives'})
        issue = div.find('ul', attrs={'class': 'clear-both'})
        current_issue_url = issue.find('a', href=True)['href']
        soup = self.index_to_soup(current_issue_url)

        # Go to the main body
        div = soup.find('div', attrs={'id': 'content-inset'})

        # Find the issue date
        date = re.sub(r'.*:\W*', '', self.tag_to_string(div.find('h2')).strip())
        self.timefmt = u' [%s]' % date

        # Find the cover
        self.cover_url = div.find('img', src=True)['src']

        feeds = OrderedDict()
        section_title = ''
        subsection_title = ''
        for post in div.findAll('div', attrs={'class': ['plainModule', 'departments plainModule']}):
            articles = []
            prefix = ''
            h3 = post.find('h3')
            if h3 is not None:
                # A module with a plain h3 starts a new section
                section_title = self.tag_to_string(h3)
            else:
                subsection = post.find('p', attrs={'class': 'article-cat'})
                link = post.find('a', href=True)
                url = link['href'] + '?c=y&story=fullstory'
                if subsection is not None:
                    subsection_title = self.tag_to_string(subsection).strip()
                    prefix = subsection_title + ': '
                    description = self.tag_to_string(post('p', limit=2)[1]).strip()
                else:
                    if post.find('img') is not None:
                        # Image modules take their category from the preceding departments module
                        prev = post.findPrevious('div', attrs={'class': 'departments plainModule'})
                        subsection_title = self.tag_to_string(prev.find('p', attrs={'class': 'article-cat'})).strip()
                        prefix = subsection_title + ': '
                    description = self.tag_to_string(post.find('p')).strip()
                # Split "summary By author" into description and author; flags must be
                # passed by keyword, otherwise re.sub treats them as the count argument
                desc = re.sub(r'\sBy\s.*', '', description, flags=re.DOTALL)
                author = re.sub(r'.*By\s', '', description, flags=re.DOTALL)
                title = prefix + self.tag_to_string(link).strip() + u' (%s)' % author
                articles.append({'title': title, 'url': url, 'description': desc, 'date': ''})

            if articles:
                if section_title not in feeds:
                    feeds[section_title] = []
                feeds[section_title] += articles
        ans = [(key, val) for key, val in feeds.items()]
        return ans
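
If you want a quick sanity check before adding it through calibre's "Add a custom news source" dialog, saving the code to a file and running ebook-convert on it should work. The file name below is just an example, and --test only fetches a couple of articles per section, so it finishes fast:

Code:
ebook-convert Smithsonian.recipe Smithsonian.epub --test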