dstampe
Posts: 50
Karma: 17
Join Date: Jan 2007
Location: Canada
Device: Sony PRS-500
I have to agree with earlier posts, a filter is needed to weed out the garbage and to apply some quality control to the text. Otherwise reading will just not be fun at all. Some entity has to serve this role in a consistent way, and some entity has to bring the book to the attention of those who are most likely to enjoy it. Right now this is the publishers, and any distribution method that excludes publishers must have a successful replacement for both these functions. Downloads and DRM are only relevant in that these support the current mechanism that supplies a filtered and consistent stream of good literature (authors as filtered by publishers).
Before my vision deteriorated and I got allergies to old paper, I used to buy and read lots of SF from various used book vendors. Even though this material had been "filtered" already by being published, I'd still only choose 10% of what was on the shelf, and only really enjoy reading about 50% of what I bought. At least 25% was abandoned after an attempt at reading it. So either my standards are high, or my tastes narrow, and I have other things to do than read bad books (such as re-reading good books). Several times in the past decades (mid-80's, for example) I gave up purchasing and reading new SF books and magazines because the genre had gotten so far away from what I enjoyed. There were probably a few good books out there, but I wasn't going to waste time finding those in the ocean of trash. If the ebook market was flooded with a wave of low-quality material, without book reviews or solid genre classification, I might very well steer away from reading new ebooks unless I recognized the authors' names. And so, I suspect, would most readers.
My own introduction to new books and authors has come from several sources, listed here in order of enjoyable reads per eyeball "hit" during the search process:
1) Works by known authors I've read and enjoyed. This has to count for at least 80% of books I've purchased.
2) "Classic" authors of the genre, learned from various reviews and writeups, and to a lesser extent from used-book browsing.
3) "Year's Best" and award anthologies. Typically these contain short stories and novellas first published in SF magazines (see below) and selected by the collection's editor(s). These are usually the "cream of the cream" and the only question is whether they match my tastes.
3a) "Themed" anthologies where lesser-known authors can play. These often contain a few known authors and a lot of unknowns. However, I find a lot of stories are "written to order" by authors based on a description sent by the editor, and the resulting uninspired work makes even good authors look bad. So it's less likely that a new author will be discovered here.
4) SF magazines (although not as much recently). These are probably the biggest source of new talent out there, and in the past were where almost all new writers started. Publishing in one of these is a gateway to wider attention, including anthologies and collections. Of course, I find that 70% of what's in these magazines does not really match my tastes or is not up to the expectations set by reading only the best works out there. (Still, these need to be supported).
5) Random acquisitions such as thumbing through books in a bookstore based on title and cover art. Those get you to pick up the book and read the back blurb, and maybe scan a few pages. This is one area that the Internet has to work to duplicate, but I think Amazon has some good ways to replace this experience of browsing with one that is even more rich and cross-linked.
The above ranking is just to illustrate that filtering counts. Except for buying books by a known author, I buy books based on someone else's filtering, above and beyond the threshold set by simply being selected for publication. The more levels of selection, the more likely I am to purchase multiple books by an author based on a single work, and the greater the probability of enjoying the reading experience. I'd say the chances of buying into a new author (at least when I could still handle pbooks) would be 25% from an award or "Year's Best" collection, 4% from an SF magazine, and less than 1% from a book that catches my eye in a bookstore.
I'm not yet completely comfortable with "community" or "open-source" type models of quality filtering. These are fine if some more "professional" filtering has been done first (publication) but might be less reliable if not.
For example, I'm reasonably comfortable with using the reviews on Amazon, as long as there is more than one and I know something about the author's style. Multiple reviews show a consensus and that people are motivated (positive or negative) to post.
I trust the "professional" reviewer's writeups rather less, as the suspicion of kickbacks and industry self-gratification (i.e. the movie industry) comes to mind. (Also, I have the suspicion many reviewers are more interested in talking about their own weird ideas about the world as reflected in the book than in the book itself, or are judging the book by standards other than those that result in an enjoyable read. I'm afraid many things came off their pedestals for me when I realized, in the middle of a mandatory English course in university, that the point of such activities was to dissect a story into meaningless twitching fragments, then reassemble them into a mosaic that proved some obscure thesis about the author or his times. Sorry, but I read for entertainment, and when the author starts showing through the pages of a book it's time to stop or the fun goes away. I made this mistake in reading most of the works of Clifford Simak in a few weeks, and I probably won't be able to enjoy anything else by him for a few years now).
The major concern I have with "community" reviewers of raw manuscripts is influenced by what I've seen and experienced in the open-source and community software scene. (Note that I'm not bashing the open-source movement, it's a great thing for creativity and learning, but like everything else in life it has its strengths and weaknesses). There are lots of volunteers with varying amounts of time to spend and varying half-lives on their motivation, and these can code something up with different levels of talent and skill, but in the end projects thrive on consistency of quality and direction. You need one core person (or a team of them) who lives and breathes the project, to give direction and hold everyone together and headed in the same direction. Otherwise you end up with multiple incompatible versions (editing criteria) and much time is spent in everyone rewriting each other's code (flame wars) instead of moving forward.
Quality is a huge issue with community projects, and big projects may require partitioning to prevent the result being determined by the worst performer. For example, looking at Linux source code, the kernel code is well-coded, lean, and very educational to examine. Driver code, on the other hand, varies wildly in quality and often contains a multitude of bugs that simply cannot be fixed because the coding is obscure and the author abandoned the project once a minimum of functionality was met. The difference is that the kernel code was written by the core person or group who devoted years to the task, and most of the drivers were a passing interest that lasted at most a few months.
There's also the matter of consistency, and this is where I think open book reviews might be worst off. Who sets the standards? What are the criteria? Unless there are one or a few reviewers who make the final decision, quality could vary a lot. But this dedicated core won't have the time to look at everything.
Say a server was set up so volunteers could be assigned manuscripts to review. It's certain that some books would be returned quickly and well reviewed, some would be poorly reviewed, and many would never get reviewed at all. Personal tastes would vary widely as well, as would literacy. Finally, how a book was judged would change a lot as the reviewer gains more experience and begins to learn review techniques. I don't have experience here, but I would bet it takes time to learn to read a work critically while also predicting how a casual reader would respond to it.
Maybe something could be put together to use volunteers for reviewing, similar to Baen's open reviews. You'd have to farm out each manuscript to multiple people, and part of the review would have to be a rating system on multiple aspects of the manuscript. That way an automated rating could be computed for author feedback and to decide if the manuscript is bumped up to the more dedicated, consistent reviewing core group.
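To make the idea concrete, here's a minimal sketch of how that automated rating might be computed. The aspect names, the 1-to-5 scale, the minimum review count, and the escalation threshold are all my own illustrative assumptions, not anything Baen or anyone else actually uses:

```python
from statistics import mean

# Hypothetical aspects each volunteer scores from 1 (poor) to 5 (excellent).
ASPECTS = ("plot", "prose", "pacing")

def aggregate_reviews(reviews, min_reviews=3, escalate_threshold=3.5):
    """Combine several volunteer reviews of one manuscript.

    `reviews` is a list of dicts mapping aspect name -> score (1-5).
    Returns per-aspect averages, an overall score, and whether the
    manuscript should be bumped up to the core reviewing group.
    """
    if len(reviews) < min_reviews:
        # Not enough independent opinions yet; hold for more reviews.
        return {"aspects": {}, "overall": None, "escalate": False}
    aspect_avgs = {a: mean(r[a] for r in reviews) for a in ASPECTS}
    overall = mean(aspect_avgs.values())
    return {
        "aspects": aspect_avgs,
        "overall": overall,
        "escalate": overall >= escalate_threshold,
    }

# Example: three volunteers rate the same manuscript.
result = aggregate_reviews([
    {"plot": 4, "prose": 3, "pacing": 5},
    {"plot": 4, "prose": 4, "pacing": 4},
    {"plot": 3, "prose": 4, "pacing": 4},
])
```

Requiring several reviews before computing a score is one simple way to blunt the problem of varying tastes and literacy among volunteers; a single enthusiastic (or hostile) reviewer can't promote or bury a manuscript on their own.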