Old 08-30-2010, 04:36 PM   #2566
kiklop74
Guru
Posts: 800
Karma: 194644
Join Date: Dec 2007
Location: Argentina
Device: Kindle Voyage
New recipe for the Mexican newspaper La Jornada:
Attached Files
File Type: zip lajornada_mx.zip (1.9 KB, 321 views)
Old 08-30-2010, 04:48 PM   #2567
TonytheBookworm
Addict
Posts: 264
Karma: 62
Join Date: May 2010
Device: kindle 2, kindle 3, Kindle fire
Starson17,
I went back to the GoComics recipe and tried to follow what you were doing, using what you said about printing the title, URL, and so forth. My current code gets the soup, as shown in the output.txt file, but then it craps out saying the index is out of range. I thought that was why you put the number of pages to get in a range field. I set mine to 7, as you can see in my code, but again I get index out of range... I feel like the Little Engine That Could, or better yet the ant at the rubber tree plant. I've got high hopes, haha.
Spoiler:

Code:
from calibre.web.feeds.news import BasicNewsRecipe
from calibre.ebooks.BeautifulSoup import BeautifulSoup

class FIELDSTREAM(BasicNewsRecipe):

    title      = 'FIELD AND STREAM BLOGS'
    __author__ = 'Tony Stegall'
    description = 'Hunting and Fishing and Gun Talk'
    INDEX = 'http://www.fieldandstream.com/blogs'
    language = 'en'
    #------------------------------------------------------
    #variables
    num_pages_to_get = 7
    #-------------------------------------------------------
    
    no_stylesheets = True

    def parse_index(self):
        feeds = []
        for title, url in [
                            (u"Wild Chef", u"http://www.fieldandstream.com/blogs/wild-chef"),
                             ]:
            articles = self.make_links(url)
            if articles:
                feeds.append((title, articles))
        return feeds
        
    def make_links(self, url):
        title = 'Temp'
        current_articles = []
        page_soup = self.index_to_soup(url)
        print 'The soup is: ', page_soup
       
        pages = range(1, self.num_pages_to_get+1)   # put this in to start with the first page and then go up to 7 increment by 1
        for page in pages: 
            if page_soup: 
                try:
                  strip_title = page_soup.h2.a.string  # try to strip the string(text) from the h2 tag
                except:
                  strip_title = 'Error - no page_soup.h2.a.string' # throw an error if it can't find it
                try:
                  date_title = page_soup.find('ul', attrs={'class': 'first even'}).li.string #get the date from the li tag text
                except:
                  date_title = 'Error - no page_soup.h2.li.string'
                title = strip_title + ' - ' + date_title #piece the title together here
                try:
                   url = page_soup.h2.a['href'] #try to get the url from the h2 tags <a> 
                   break
                except:
                   continue
                continue
               
                print 'the title is: ', title
                print 'the page_url is: ', page_url
           
                current_articles.append({'title': title, 'url': page_url, 'description':'', 'date':''}) # append all this
        
        
        
        return current_articles


This is like playing Battleship: I'm firing and firing, and I get close, but never score a direct hit.
Old 08-30-2010, 05:56 PM   #2568
bmsleight
Member
Posts: 24
Karma: 540
Join Date: Aug 2010
Device: Kindle 3
Instructables

Tried to put a recipe together for Instructables.

So far so good, but I cannot tidy up the final output nicely (some ads and some bad CSS).

Spoiler:
Code:
#!/usr/bin/env  python
__license__   = 'GPL v3'
__copyright__ = '2010, Brendan Sleight <bms.calibre at barwap.com>'
'''
www.instructables.com
'''
from calibre import strftime
from calibre.web.feeds.news import BasicNewsRecipe
from calibre.ebooks.BeautifulSoup import BeautifulSoup

import string

class Instructables(BasicNewsRecipe):
    title                 = u'Instructables'
    __author__            = 'Bmsleight'
    description           = 'Make, How To, and DIY'
    oldest_article        = 100
    max_articles_per_feed = 5
    no_stylesheets        = True
    language = 'en'
    index                 = 'http://www.instructables.com'
    remove_tags = [
                    dict(name='div', attrs={'class':'remove-ads'})
                  ]
    extra_css      = '''
                        .steplabel{font-size:xx-small;}
                        .txt{font-size:xx-small;}
                        #txt{font-size:xx-small;}
                     '''

    feeds               = [
                         (u'Instructables Featured'        , u'http://www.instructables.com/tag/type-id/featured-true/rss.xml'                                      )
                         ]


    def append_page(self, soup, appendtag, position, pre=None):
        # Multi threading is fun .....
        if pre is None:
            pre = []  # avoid a shared mutable default, which would persist across articles
        for itt in soup.findAll('a',href=True):
            str_itt = str(itt['href'])
            if "/step" in str_itt and "id" in str_itt and "/account/" not in str_itt and str_itt not in pre:
                pre.append(itt['href'])
                nurl = self.index + itt['href']
                soup2 = self.index_to_soup(nurl)
                texttag = soup2.find('body')
                newpos = len(texttag.contents)
                self.append_page(soup2,texttag,newpos, pre)
                texttag.extract()
                appendtag.insert(position,texttag)

    def preprocess_html(self, soup):
        self.append_page(soup, soup.body, 99)
        return soup

    def postprocess_html(self, soup, first_fetch):
#        subtree = soup.findAll('div class="remove-ads"')
#        subtree.extract()
        rawc = soup.findAll('div',attrs={'class':'stepdescription'})
        # Remove bad nested H2s...
        r = str(rawc).replace("<h2>", "").replace("</h2>", "")
        s = BeautifulSoup(r)
        return s
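
As an aside, a minimal sketch of what the commented-out findAll lines above seem to be reaching for, extracting the ad divs tag by tag (assuming the ads really do live in a div with class remove-ads; the sample HTML is made up):

Code:
from calibre.ebooks.BeautifulSoup import BeautifulSoup

html = '<body><div class="remove-ads">ad</div><p>step text</p></body>'
soup = BeautifulSoup(html)
# findAll takes the tag name and the attrs dict as separate arguments,
# not a single 'div class="remove-ads"' string
for ad in soup.findAll('div', attrs={'class': 'remove-ads'}):
    ad.extract()
print soup  # -> <body><p>step text</p></body>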


Any pointers to where I am going wrong?
Old 08-30-2010, 06:06 PM   #2569
bmsleight
Member
Posts: 24
Karma: 540
Join Date: Aug 2010
Device: Kindle 3
Hackaday

First recipe for me.

Spoiler:
Code:
#!/usr/bin/env  python
__license__   = 'GPL v3'
__copyright__ = '2010, Brendan Sleight <bms.calibre at barwap.com>'
'''
hackaday.com
'''

from calibre.web.feeds.news import BasicNewsRecipe

class Hackaday(BasicNewsRecipe):
    title                 = u'Hackaday'
    __author__            = 'bmsleight'
    description           = 'Hack a Day serves up fresh hacks each day, every day from around the web and a special How-To hack each week.'
    oldest_article        = 10
    max_articles_per_feed = 100
    no_stylesheets        = True
    language              = 'en'

    use_embedded_content  = False

    keep_only_tags      = [
                           dict(name='div', attrs={'class':'post'})
                          ,dict(name='div', attrs={'class':'commentlinks'})
                          ]


    feeds               = [
                         (u'Hack A Day'        , u'http://hackaday.com/feed/'                                      )
                         ]

    def get_article_url(self, article):
        url = article.get('guid', None)
        return url
Old 08-30-2010, 07:31 PM   #2570
Starson17
Wizard
Posts: 4,004
Karma: 177841
Join Date: Dec 2009
Device: WinMo: IPAQ; Android: HTC HD2, Archos 7o; Java:Gravity T
Quote:
Originally Posted by TonytheBookworm
Starson17,
I went back to the GoComics recipe and tried to follow what you were doing, using what you said about printing the title, URL, and so forth. My current code gets the soup, as shown in the output.txt file, but then it craps out saying the index is out of range. I thought that was why you put the number of pages to get in a range field. I set mine to 7, as you can see in my code, but again I get index out of range... I feel like the Little Engine That Could, or better yet the ant at the rubber tree plant. I've got high hopes, haha.
This is like playing Battleship: I'm firing and firing, and I get close, but never score a direct hit.
You don't need the range; that was used for my special case, where I could calculate the URLs. You need to scrape them.
Look at this:
Spoiler:

Code:
from calibre.web.feeds.news import BasicNewsRecipe
from calibre.ebooks.BeautifulSoup import BeautifulSoup
class FIELDSTREAM(BasicNewsRecipe):
    title      = 'Field and Stream'
    __author__ = 'Starson17'
    description = 'Hunting and Fishing and Gun Talk'
    language = 'en'
    no_stylesheets      = True
    publisher           = 'Starson17'
    category            = 'food recipes'
    use_embedded_content= False
    oldest_article      = 24
    remove_javascript   = True
    remove_empty_feeds    = True
    #cover_url           = 'http://www.bsb.lib.tx.us/images/comics.com.gif'
    # recursions          = 0
    max_articles_per_feed = 10
    INDEX = 'http://www.fieldandstream.com'
    def parse_index(self):
        feeds = []
        for title, url in [
                            (u"Wild Chef", u"http://www.fieldandstream.com/blogs/wild-chef"),
                             ]:
            articles = self.make_links(url)
            if articles:
                feeds.append((title, articles))
        return feeds
        
    def make_links(self, url):
        title = 'Temp'
        current_articles = []
        soup = self.index_to_soup(url)
        print 'The soup is: ', soup
        for item in soup.findAll('h2'):
            print 'item is: ', item
            link = item.find('a')
            print 'the link is: ', link
            if link:
                url         = self.INDEX + link['href']
                title       = self.tag_to_string(link)
                print 'the title is: ', title
                print 'the url is: ', url
                current_articles.append({'title': title, 'url': url, 'description':'', 'date':''}) # append all this
        return current_articles


It does all the URL scraping for the feed. (You can add more feeds if you want.) It's up to you to remove the junk with keep_only_tags or remove_tags.
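
For instance, a minimal sketch of that kind of cleanup (the tag classes here are hypothetical placeholders, not scraped from the actual site):

Code:
from calibre.web.feeds.news import BasicNewsRecipe

class CleanupExample(BasicNewsRecipe):
    title = 'Cleanup Example'
    # keep only the block that holds the story body...
    keep_only_tags = [dict(name='div', attrs={'class':'article'})]
    # ...then strip unwanted blocks that survive inside it
    remove_tags    = [dict(name='div', attrs={'class':['sidebar','comments']})]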

Last edited by Starson17; 08-31-2010 at 02:17 PM.
Old 08-30-2010, 07:53 PM   #2571
TonytheBookworm
Addict
Posts: 264
Karma: 62
Join Date: May 2010
Device: kindle 2, kindle 3, Kindle fire
Quote:
Originally Posted by Starson17
You don't need the range; that was used for my special case, where I could calculate the URLs. You need to scrape them.
Look at this:


It does all the URL scraping for the feed. (You can add more feeds if you want.) It's up to you to remove the junk with keep_only_tags or remove_tags.

Man, I tell you, you make it look so easy... I hope I can learn from this. What I'm considering doing is, whenever someone has a request, trying to do it myself. Then, if someone like you or someone else posts the recipe, I can see how they went about it and learn from it... I can't thank you enough.
Old 08-30-2010, 09:26 PM   #2572
TonytheBookworm
Addict
TonytheBookworm is on a distinguished road
 
TonytheBookworm's Avatar
 
Posts: 264
Karma: 62
Join Date: May 2010
Device: kindle 2, kindle 3, Kindle fire
Field and Stream Blogs: hunting, fishing, gun, and fly talk

With the much-appreciated help from Starson17, here are the Field and Stream blogs. I just added a few more feeds to the recipe and removed the junk.
Thanks again.
Attached Files
File Type: rar fieldandstreamblogs.rar (1.5 KB, 272 views)

Last edited by TonytheBookworm; 08-30-2010 at 11:02 PM. Reason: Added favicon per dwanthny's suggestion
Old 08-30-2010, 10:19 PM   #2573
DoctorOhh
US Navy, Retired
Posts: 9,890
Karma: 13806776
Join Date: Feb 2009
Location: North Carolina
Device: Icarus Illumina XL HD, Kindle PaperWhite SE 11th Gen
Quote:
Originally Posted by TonytheBookworm
With the much-appreciated help from Starson17, here are the Field and Stream blogs. I just added a few more feeds to the recipe and removed the junk.
Thanks again.
Great job! Now, for the finishing touch, grab the little icon for the site that shows in the URL bar and include it in the rar file, so your recipe will have a professional little touch in calibre's list. This icon is referred to as a favicon.

One site I use to grab favicons is http://www.getfavicon.org/; you enter a URL and it grabs the favicon from that site.
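
Alternatively, a minimal sketch of grabbing an icon directly with Python (assuming the site serves the conventional /favicon.ico path; the helper name and example URL are made up):

Code:
import urllib2

def fetch_favicon(site_url, out_path='favicon.ico'):
    # most sites serve their icon at the conventional /favicon.ico location
    icon = urllib2.urlopen(site_url.rstrip('/') + '/favicon.ico').read()
    with open(out_path, 'wb') as f:
        f.write(icon)

fetch_favicon('http://www.fieldandstream.com')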
Old 08-30-2010, 10:27 PM   #2574
Lukas238
Junior Member
Posts: 3
Karma: 10
Join Date: Aug 2010
Device: Nook
Recipe for Heavens-Above.com

I was trying to create a recipe for http://www.heavens-above.com, an excellent astronomy website where you can see predictions of satellite positions as they will appear from your city.

The problem is that the site offers no feeds.

Nonetheless, I managed to create something pretty close. I can download some content, but it appears as raw HTML code, not as text.

This is the recipe as far as I got. A user login is not required to access any of the pages, but if you log in, all pages display the information as seen from your city.

Code:
__license__   = 'GPL v3'
__copyright__ = '2010, Lucas Dasso <dassolucas@***********>'

'''
Heavens-Above.com
'''

from calibre.web.feeds.news import BasicNewsRecipe

class HeavensAbove(BasicNewsRecipe):
    title = u'Heavens Above'
    description = 'Satellite, ISS, and Space Shuttle orbital pass predictions, maps, and star charts.'
    __author__  = 'Lucas Dasso'
    language = 'en'

    remove_javascript=True

#    needs_subscription = True
 

    feeds          = [(u'ISS', u'http://www.heavens-above.com/PassSummary.aspx?satid=25544&Session=kebgfefchfldhfpnimillnlb')
                      ,(u'X-37B',u'http://www.heavens-above.com/PassSummary.aspx?satid=36514&Session=kebgfefchfldhfpnimillnlb')
                      ,(u'Genesis I',u'http://www.heavens-above.com/PassSummary.aspx?satid=29252&Session=kebgfefchfldhfpnimillnlb')
                      ,(u'GENESIS II',u'http://www.heavens-above.com/PassSummary.aspx?satid=31789&Session=kebgfefchfldhfpnimillnlb')
                      ,(u'Envisat',u'http://www.heavens-above.com/PassSummary.aspx?satid=27386&Session=kebgfefchfldhfpnimillnlb')
                      ,(u'Hubble Space Telescope',u'http://www.heavens-above.com/PassSummary.asp?satid=20580&Session=kebgfefchfldhfpnimillnlb')
                      ,(u'Satellites brighter than magnitude 3.5 (daily)',u'http://www.heavens-above.com/allsats.asp?Mag=3.5&Session=kebgfefchfldhfpnimillnlb')
                      ,(u'Iridium Flares (next 7 days)',u'http://www.heavens-above.com/iridium.asp?Dur=7&Session=kebgfefchfldhfpnimillnlb')
                      ]


#    def get_browser(self):
#        br = BasicNewsRecipe.get_browser()
#        if self.username is not None and self.password is not None:
#            br.open('http://www.heavens-above.com/logon.asp')
#            br.select_form(nr = 0) 
#            br['UserName']   = self.username
#            br['Password'] = self.password
#            br.submit()
#        return br
Old 08-31-2010, 07:59 AM   #2575
Starson17
Wizard
Posts: 4,004
Karma: 177841
Join Date: Dec 2009
Device: WinMo: IPAQ; Android: HTC HD2, Archos 7o; Java:Gravity T
Quote:
Originally Posted by dwanthny
Great job! Now, for the finishing touch, grab the little icon for the site that shows in the URL bar and include it in the rar file, so your recipe will have a professional little touch in calibre's list. This icon is referred to as a favicon.

One site I use to grab favicons is http://www.getfavicon.org/; you enter a URL and it grabs the favicon from that site.
Thanks for this tip. I haven't been using favicons, but I think I will add it to my master recipe.
Old 08-31-2010, 08:03 AM   #2576
Starson17
Wizard
Posts: 4,004
Karma: 177841
Join Date: Dec 2009
Device: WinMo: IPAQ; Android: HTC HD2, Archos 7o; Java:Gravity T
Quote:
Originally Posted by TonytheBookworm
Man, I tell you, you make it look so easy... I hope I can learn from this. What I'm considering doing is, whenever someone has a request, trying to do it myself. Then, if someone like you or someone else posts the recipe, I can see how they went about it and learn from it... I can't thank you enough.
You're welcome. Sometimes my strategy of giving just a bit of direction and waiting for a response is not appreciated - as though I'm torturing my victim. Who, me???? But really, I could tell you were interested in learning, and there's no substitute for getting your hands dirty and just trying to do it.

Note that I left all the print statements in. My recipes are filled with those; then I search for the prints in the text file to make sure the recipe is doing what I want at each step.
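
A minimal sketch of the pattern (the index URL and the 'candidate link' marker are hypothetical; run it with something like ebook-convert myrecipe.recipe out.epub --test -vv > output.txt and search the file for the markers):

Code:
from calibre.web.feeds.news import BasicNewsRecipe

class PrintDebugExample(BasicNewsRecipe):
    title = 'Print Debug Example'

    def parse_index(self):
        soup = self.index_to_soup('http://example.com/blogs')
        articles = []
        for link in soup.findAll('a', href=True):
            print 'candidate link is: ', link['href']  # search the log for 'candidate link'
            articles.append({'title': self.tag_to_string(link),
                             'url': link['href'],
                             'description': '', 'date': ''})
        return [('Example feed', articles)]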
Old 08-31-2010, 12:46 PM   #2577
buyo
Junior Member
Posts: 2
Karma: 10
Join Date: Aug 2010
Device: Kindle 3
Hi, I've been lurking around for a while but figured I should probably participate. First off, I want to say that I think calibre is a wonderful program, and thanks to all the people who created all the great recipes.

I created a small recipe for my hometown newspaper, the Winnipeg Free Press (http://www.winnipegfreepress.com), that seems to work well for me, so I thought I'd share it here. I'm a total noob when it comes to Python, so please forgive the roughness of the code.

Spoiler:
Code:
from calibre.web.feeds.news import BasicNewsRecipe

class WinnipegFreePress(BasicNewsRecipe):
    title          = u'Winnipeg Free Press'
    __author__            = 'buyo'
    description           = 'News from Winnipeg, Manitoba, Canada'
    oldest_article = 1
    max_articles_per_feed = 15
    category              = 'News, Winnipeg, Canada'
    cover_url             = 'http://media.winnipegfreepress.com/designimages/winnipegfreepress_WFP.gif'
    no_stylesheets        = True
    encoding              = 'UTF-8'
    remove_javascript     = True
    use_embedded_content  = False
    language = 'en_CA'

    feeds          = [(u'Breaking News', u'http://www.winnipegfreepress.com/rss?path=/breakingnews'),
                      (u'Local News', u'http://www.winnipegfreepress.com/rss?path=/local'),
                      (u'Breaking Business News', u'http://www.winnipegfreepress.com/rss?path=/business/finance'),
                      (u'Business', u'http://www.winnipegfreepress.com/rss?path=/business'),
                      (u'Editorials', u'http://www.winnipegfreepress.com/rss?path=/opinion/editorials'),
                      (u'Views from the West', u'http://www.winnipegfreepress.com/rss?path=/opinion/westview'),
                      (u'Life & Style', u'http://www.winnipegfreepress.com/rss?path=/life'),
                      (u'Food & Drink', u'http://www.winnipegfreepress.com/rss?path=/life/food')
                     ]

    keep_only_tags = [
                      dict(name='div', attrs={'id':'article_header'}),
                      dict(name='div', attrs={'class':'article'}),
                     ]


I'm also working on a recipe for my favourite gossip site *blush* (http://www2.laineygossip.com/). While I've managed to get the articles just fine, I can't seem to get it to show the thumbnail images at the bottom... I was wondering if someone could give me some advice on this? I'd greatly appreciate it!

Spoiler:
Code:
from calibre.web.feeds.news import BasicNewsRecipe

class AdvancedUserRecipe1283190045(BasicNewsRecipe):
    title          = u'Lainey Gossip2'
    cover_url             = 'http://www2.laineygossip.com/i/logo_header.gif'
    oldest_article = 1
    max_articles_per_feed = 10
    language = 'en'
    no_stylesheets  = True

    feeds          = [(u'Lainey Gossip', u'http://www.laineygossip.com/LaineyGossipRss.ashx')]

    keep_only_tags = [
                      dict(name='div', attrs={'id':['mainContent']}),
                     ]
    remove_tags = [
                   dict(name='div', attrs={'class':['articlefooter','articlerelated','articlenavright']}),
                   dict(name='p', attrs={'class':['footer']}),
                  ]
Old 08-31-2010, 01:42 PM   #2578
TonytheBookworm
Addict
Posts: 264
Karma: 62
Join Date: May 2010
Device: kindle 2, kindle 3, Kindle fire
Is there a better free editor than Geany for Python? I swear these indents are driving me nuts. I was hoping there was some kind of compiler or checker that would bark at me so I could see what the issue actually was.
Old 08-31-2010, 02:01 PM   #2579
TonytheBookworm
Addict
TonytheBookworm is on a distinguished road
 
TonytheBookworm's Avatar
 
Posts: 264
Karma: 62
Join Date: May 2010
Device: kindle 2, kindle 3, Kindle fire
Got it, never mind. Wish there was a delete option on here.

Last edited by TonytheBookworm; 08-31-2010 at 02:05 PM. Reason: strange, there was whitespace in there that wasn't visible :(
Old 08-31-2010, 02:11 PM   #2580
Starson17
Wizard
Posts: 4,004
Karma: 177841
Join Date: Dec 2009
Device: WinMo: IPAQ; Android: HTC HD2, Archos 7o; Java:Gravity T
Quote:
Originally Posted by TonytheBookworm
Got it, never mind. Wish there was a delete option on here.
tabs are a pain
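
For what it's worth, Python's standard library ships a checker for exactly this. A minimal sketch (recipe.py is a hypothetical file name; python -m tabnanny recipe.py does the same from a shell):

Code:
import tabnanny

tabnanny.verbose = 1         # also report files that check out clean
tabnanny.check('recipe.py')  # names the offending line when tabs and spaces are mixed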