Did you ever find a solution for removing duplicates from multiple feeds in the same run? If so, is it re-usable code?
I've tried using code adapted from the NewScientist recipe, shown below.
Code:
...
filterDuplicates = True
url_list = []
...

def print_version(self, url):
    if self.filterDuplicates:
        # skip URLs we've already seen
        if url in self.url_list:
            return
    return url.replace('/article/', '/printarticle/')
but when I use it, I get an EPUB with nothing but empty feeds... it strips out all of the article URLs.
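For comparison, here is my understanding of how the duplicate filtering is laid out in the stock NewScientist-style recipes: the check-then-append happens in get_article_url, and print_version only does the URL rewrite. This is just a rough sketch, not a working recipe; the class name, title, and feed URLs are placeholders, and only the /article/ to /printarticle/ rewrite is carried over from my snippet above.

Code:
from calibre.web.feeds.news import BasicNewsRecipe

class DedupAcrossFeeds(BasicNewsRecipe):
    # Placeholder title and feeds -- substitute your own
    title = 'Dedup example'
    feeds = [
        ('Feed One', 'http://example.com/feed1.rss'),
        ('Feed Two', 'http://example.com/feed2.rss'),
    ]

    filterDuplicates = True
    url_list = []

    def get_article_url(self, article):
        # Resolve the article URL the normal way first
        url = BasicNewsRecipe.get_article_url(self, article)
        if self.filterDuplicates:
            if url in self.url_list:
                # Already seen in an earlier feed: returning None skips it
                return None
            self.url_list.append(url)
        return url

    def print_version(self, url):
        # Only rewrite to the print-friendly page here; no duplicate check,
        # so the URLs recorded above are not rejected a second time
        return url.replace('/article/', '/printarticle/')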