07-29-2011, 02:50 PM   #13
cendalc
Thanks for the script. I had several minor issues with it:

1. Wrong Czech characters - fixed by forcing UTF-8 and embedding my own fonts. Of course the font path is hardcoded for my Nook (a less repetitive way to build the same CSS is sketched after the code below).
Code:
    # force UTF-8 so the Czech diacritics come through intact
    encoding                = 'utf8'
    # embed the Liberation Serif faces; the paths point at my Nook's sdcard
    extra_css               = '''
@font-face {
    font-style: italic;
    font-family: 'LiberationSerif';
    font-weight: normal;
    src: url('res:///system/media/sdcard/my fonts/LiberationSerif-Italic.ttf');
}
@font-face {
    font-style: normal;
    font-family: 'LiberationSerif';
    font-weight: normal;
    src: url('res:///system/media/sdcard/my fonts/LiberationSerif-Regular.ttf');
}
@font-face {
    font-style: italic;
    font-family: 'LiberationSerif';
    font-weight: bold;
    src: url('res:///system/media/sdcard/my fonts/LiberationSerif-BoldItalic.ttf');
}
@font-face {
    font-style: normal;
    font-family: 'LiberationSerif';
    font-weight: bold;
    src: url('res:///system/media/sdcard/my fonts/LiberationSerif-Bold.ttf');
}
body {
    font-family: 'LiberationSerif', serif;
}'''
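Since only the file name differs between the four @font-face rules, the same CSS can also be generated from a single base path, which makes it easier to point at another device. A rough sketch (FONT_DIR and liberation_serif_css are just names made up for this example):
Code:
# hypothetical helper: build the same @font-face CSS from one base path
FONT_DIR = 'res:///system/media/sdcard/my fonts'  # adjust for your device

def liberation_serif_css(font_dir=FONT_DIR):
    faces = [
        ('normal', 'normal', 'LiberationSerif-Regular.ttf'),
        ('italic', 'normal', 'LiberationSerif-Italic.ttf'),
        ('normal', 'bold',   'LiberationSerif-Bold.ttf'),
        ('italic', 'bold',   'LiberationSerif-BoldItalic.ttf'),
    ]
    rules = []
    for style, weight, fname in faces:
        rules.append(
            "@font-face {\n"
            "    font-style: %s;\n"
            "    font-family: 'LiberationSerif';\n"
            "    font-weight: %s;\n"
            "    src: url('%s/%s');\n"
            "}" % (style, weight, font_dir, fname))
    rules.append("body { font-family: 'LiberationSerif', serif; }")
    return '\n'.join(rules)

# then, inside the recipe class:
#     extra_css = liberation_serif_css()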
2. It does not archive downloaded articles. I updated the parse_index method to retrieve the form key and added a cleanup method (a slightly more defensive variant is sketched after the code):

Code:
    def parse_index(self):
        totalfeeds = []
        lfeeds = self.get_feeds()
        for feedobj in lfeeds:
            feedtitle, feedurl = feedobj
            self.report_progress(0, _('Fetching feed')+' %s...'%(feedtitle if feedtitle else feedurl))
            articles = []
            soup = self.index_to_soup(feedurl)
            # remember the hidden form_key so cleanup() can submit the bulk-archive form
            self.myFormKey = soup.find('input', attrs={'name': 'form_key'})['value']
            for item in soup.findAll('div', attrs={'class':'cornerControls'}):
                description = self.tag_to_string(item.div)  # currently unused
                atag = item.a
                if atag and atag.has_key('href'):
                    url = atag['href']
                    articles.append({'url': url})
            totalfeeds.append((feedtitle, articles))
        return totalfeeds

    # needs "import urllib" at the top of the recipe
    def cleanup(self):
        # called after all articles are downloaded; archive everything in one POST
        params = urllib.urlencode(dict(form_key=self.myFormKey, submit="Archive All"))
        self.browser.open("http://www.instapaper.com/bulk-archive", params)
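The archive request assumes parse_index ran and actually captured a form_key. A slightly more defensive sketch (same URL and field names as above; self.log is the recipe's logger) skips the POST when there is no key and does not let a failed request abort the conversion:
Code:
    def cleanup(self):
        # skip the request if parse_index never captured a form_key
        if not getattr(self, 'myFormKey', None):
            return
        try:
            params = urllib.urlencode(dict(form_key=self.myFormKey, submit="Archive All"))
            self.browser.open("http://www.instapaper.com/bulk-archive", params)
        except Exception as e:
            # archiving is a nicety; don't fail the whole download over it
            self.log.warn('Could not archive Instapaper articles: %s' % e)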
Thanks to banjopicker

3. Multi-part articles came out in the wrong order - fixed by setting the option below (a short note on it follows):
Code:
    reverse_article_order = True
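For anyone wondering, reverse_article_order simply reverses the article order within each feed. The same effect can be had by hand on the structure parse_index returns; a throwaway sketch (reverse_feed_articles is only an illustrative name, not part of calibre):
Code:
def reverse_feed_articles(feeds):
    # feeds has the shape parse_index returns: [(feed_title, [article_dict, ...]), ...]
    # give back the same structure with each feed's articles in reverse order
    return [(title, list(reversed(articles))) for title, articles in feeds]

# e.g. at the end of parse_index:
#     return reverse_feed_articles(totalfeeds)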