Code:
from calibre.web.feeds.news import BasicNewsRecipe, classes


class SwarajyaMag(BasicNewsRecipe):
    title = u'Swarajya Magazine'
    __author__ = 'unkn0wn'
    description = 'Swarajya - a big tent for liberal right of centre discourse that reaches out, engages and caters to the new India.'
    language = 'en_GB'
    no_stylesheets = True
    remove_javascript = True
    use_embedded_content = False
    remove_attributes = ['height', 'width']
    encoding = 'utf-8'

    keep_only_tags = [
        classes('_2PqtR _1sMRD ntw8h author-bio'),
    ]

    remove_tags = [
        classes('_JscD _2r17a'),
    ]

    def preprocess_html(self, soup):
        # Use the full-resolution image from data-src, dropping the query parameters
        for img in soup.findAll('img', attrs={'data-src': True}):
            img['src'] = img['data-src'].split('?')[0]
        return soup

    def parse_index(self):
        soup = self.index_to_soup('https://swarajyamag.com/issue/a-catalyst-for-growth')
        ans = []
        for a in soup.findAll(**classes('_2eOQr')):  # article links on the issue page
            url = a['href']
            if url.startswith('/'):
                url = 'https://swarajyamag.com' + url
            title = self.tag_to_string(a)
            self.log(title, ' at ', url)
            ans.append({'title': title, 'url': url})
        return [('Articles', ans)]
The above recipe works great, but as you can see I've hard-coded the actual issue link instead of automating the recipe to find it.
Code:
<div class="_3BU_3">
<a href="/issue/a-catalyst-for-growth">
<img src="https://gumlet.assettype.com/swarajya%2F2022-03%2F1cd6eed6-3f1f-4eff-b163-076ad9492ae4%2FMarch_2022_Cover.jpg?auto=format%2Ccompress&format=webp&w=360&dpr=1.0" data-src="https://gumlet.assettype.com/swarajya%2F2022-03%2F1cd6eed6-3f1f-4eff-b163-076ad9492ae4%2FMarch_2022_Cover.jpg?auto=format%2Ccompress" alt="A Catalyst For Growth" sizes="( max-width: 500px ) 98vw, ( max-width: 768px ) 48vw, 23vw" class="qt-image gm-added gm-loaded gm-observing gm-observing-cb" loading="lazy" title="" style="">
<noscript></noscript></a></div>
The above is from the home page (https://swarajyamag.com/), which contains the class '_3BU_3'. From that div it should be possible to automate finding both the href (the actual magazine issue page to parse for articles) and the cover_url.
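For the href part, I imagine parse_index could be rewritten along these lines (just an untested sketch; it assumes the '_3BU_3' div on the home page always links to the latest issue):
Code:
def parse_index(self):
    # Untested sketch: follow the '_3BU_3' link on the home page to the
    # latest issue, then collect the articles from that issue page as before.
    home = self.index_to_soup('https://swarajyamag.com/')
    link = home.find(attrs={'class': '_3BU_3'}).a  # assumes the div and its <a> are always present
    issue_url = link['href']
    if issue_url.startswith('/'):
        issue_url = 'https://swarajyamag.com' + issue_url
    self.log('Downloading issue: ', issue_url)
    soup = self.index_to_soup(issue_url)
    ans = []
    for a in soup.findAll(**classes('_2eOQr')):
        url = a['href']
        if url.startswith('/'):
            url = 'https://swarajyamag.com' + url
        title = self.tag_to_string(a)
        self.log(title, ' at ', url)
        ans.append({'title': title, 'url': url})
    return [('Articles', ans)]
And for the cover, this is what I tried: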
Code:
def get_cover_url(self):
    soup = self.index_to_soup('https://swarajyamag.com/')
    tag = soup.find(attrs={'class': '_3BU_3'})
    if tag:
        self.cover_url = tag.find('img')['src']
    return super().get_cover_url()
But I knew it wouldn't work, because I don't know how to apply the same split here that preprocess_html uses: img['src'] = img['data-src'].split('?')[0]
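My best guess, going by the preprocess_html line, is something like the sketch below, but I'm not sure it's right (it assumes the home-page img always carries a data-src attribute like in the snippet above):
Code:
def get_cover_url(self):
    soup = self.index_to_soup('https://swarajyamag.com/')
    div = soup.find(attrs={'class': '_3BU_3'})
    if div:
        img = div.find('img', attrs={'data-src': True})
        if img:
            # same idea as in preprocess_html: keep only the part of
            # data-src before the '?' to get the full-size cover image
            self.cover_url = img['data-src'].split('?')[0]
    return super().get_cover_url()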
Help!
Other than that, everything works great, as long as you replace the magazine link each month.