I'm trying to use parse_index by giving it a manual list of pages to include.
The documentation says:
Quote:
The full article (can be an empty string). Obsolete, do not use, instead save the content to a temporary file and pass a file:///path/to/temp/file.html as the URL.
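If I read that correctly, I would need to write the HTML I already have to a temporary file and put a file:// URL into the article dict. Something like the sketch below is what I imagine (save_to_temp_file and page_html are just names I made up, so I have no idea if this is the intended way):
Code:
import os
import tempfile

def save_to_temp_file(page_html):
    # Write the already-fetched HTML to a temporary file and
    # return a file:///... URL pointing at it.
    fd, path = tempfile.mkstemp(suffix='.html')
    with os.fdopen(fd, 'wb') as f:
        f.write(page_html.encode('utf-8'))
    return 'file://' + path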
Here is my code so far, but I don't know how to actually implement this (my best guess at how it fits together is after the code block below).
Code:
#One article from one page
messe = dict()
messe['title'] = 'Lectures de la Messe'
messe['url'] = 'http://www.aelf.org/office-messe?desktop=1&date_my=%s' % (site_date)
messe['date'] = site_date
messe['description'] = 'LECTURES'
#One article from another page
laudes = dict()
laudes['title'] = 'Laudes'
laudes['url'] = 'http://www.aelf.org/office-laudes?desktop=1&date_my=%s' % (site_date)
laudes['date'] = site_date
laudes['description'] = 'LAUDES'
#Create list of two articles
list_of_articles = [messe, laudes]
#Give the list a feed title
feed_title = str(book_date)
#What to give to parse_index
self = [feed_title, list_of_articles]
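From looking at other recipes, my guess is that parse_index has to be a method of the recipe class and has to return a list of (feed_title, list_of_articles) tuples, roughly like the sketch below, but that is exactly the part I am unsure about (site_date is a placeholder here, my real script computes it):
Code:
def parse_index(self):
    site_date = '10/02/2014'  # placeholder, computed elsewhere in my script
    messe = {
        'title': 'Lectures de la Messe',
        'url': 'http://www.aelf.org/office-messe?desktop=1&date_my=%s' % site_date,
        'date': site_date,
        'description': 'LECTURES',
    }
    laudes = {
        'title': 'Laudes',
        'url': 'http://www.aelf.org/office-laudes?desktop=1&date_my=%s' % site_date,
        'date': site_date,
        'description': 'LAUDES',
    }
    # Is this the shape parse_index expects:
    # a list of (feed title, list of article dicts) pairs?
    return [(site_date, [messe, laudes])]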
Any hint, please?
As an alternative:
My script already downloads the files to disk and creates a series of index files. Can I simply pass the main index file as the index?
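If that is possible, then based on the file:/// hint in the documentation quoted above, I picture it looking roughly like this (the path, the article title and the 'AELF' feed name are all made up):
Code:
def parse_index(self):
    # My script has already written everything to disk,
    # so point a single article at the local main index file.
    index_path = '/tmp/aelf/index.html'  # made-up example path
    article = {
        'title': 'Office du jour',  # made-up title
        'url': 'file://' + index_path,
        'date': '',
        'description': '',
    }
    return [('AELF', [article])]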