I interviewed The Guardian’s Sam Jordison for my Forbes blog about the challenges of expanding the current Not The Booker literary prize to include self-published books, but didn’t really have room to consider how one might actually run a meaningful award for self-published authors. The Not The Booker currently works by allowing people to nominate traditionally published books in the comments on an opening blog post. These are then winnowed down to a shortlist through public voting, but for a vote to count the voter must include a short review of the book to show that they’ve actually read it.
The problem with a self-publishing award based on the same principles would be the sheer tsunami of shite books nominated in the first round and the horrendous gaming of the voting system in the second. Because, let’s be brutally honest here, there is a lot of dreadful crap put out by self-published authors who have yet to develop the skill to recognise that their work is sub-standard.
And, as Sam put it, “there are some real loudmouths with monstrous egos” out there, and you can guarantee that any system based solely on a popular vote would cut out lesser-known authors with awesome books in favour of the egotists. Given the apparent correlation between being a loudmouthed twat and producing shite work, the results of such a contest would likely be disappointing.
So, how would one do it? First, let’s examine some of the problems we’d have to solve:
1. Scale. There are a lot of self-published authors out there now, over a million by some accounts, and any prize for them would need a nomination system that could scale well. It’s unclear how many self-pubbed authors come from the Not The Booker catchment area of the Commonwealth, the Republic of Ireland and Zimbabwe, but even if it’s only a tenth, that’s still a lot of people. (The Not The Booker has basically the same rules as the Man Booker Prize, which is its foil.)
2. Quality. As mentioned already, a lot of self-published novels are awful, with bad dialogue, characterisation and plotting, dreadful grammar, and typos scattered liberally throughout. Many that avoid those problems still lack the polish of a good novel, reading more like a first draft than a final one.
3. Plagiarism. I’m not sure how big an issue this is, but there’s certainly enough of it about in self-publishing that it’s worth planning for.
4. Gaming. There is absolutely no doubt that some self-published authors would find a way to game the system to ensure as high a ranking as they could, pushing out more modest and lesser-known authors. Any system has to ensure a level playing field for all nominees: popular doesn’t mean good, and in my opinion the last thing self-publishing needs is another popularity contest.
My feeling is that, because of the nature of these problems, much of the process would simply have to be automated or crowdsourced. I’ll outline a first stab at a possible process, but I’d be more than happy for people to point out flaws and better ideas in the comments.
I’d start off with a system where the authors self-nominate by uploading their manuscript, in full, complete with their details and any relevant metadata, to the awards website. The current system that the Not The Booker has, where books are nominated in the comments of a blog post, simply isn’t scalable and would become a massive headache.
The first phase of checking would run each manuscript through plagiarism software to make sure that someone’s not sneakily uploading another, more talented author’s work under their own name. It wouldn’t necessarily be perfect but it would stop the most egregious cases. Any manuscripts flagged by the system would be reviewed by a human being and the flag either lifted or the work disqualified. I doubt there would be many so this stage shouldn’t be a big deal.
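To make that concrete, here’s a minimal sketch of how such a first-pass check might work, using overlapping word “shingles” and Jaccard similarity. The function names, the corpus structure and the 0.2 threshold are all my own placeholders; a real system would lean on a dedicated plagiarism service with a far bigger corpus behind it.

```python
# A minimal sketch of shingle-based similarity flagging. All names and
# thresholds here are placeholders, not a real plagiarism service.
import re

def shingles(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Break a text into overlapping n-word 'shingles' for comparison."""
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity between two texts' shingle sets."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

def flag_for_review(manuscript: str, corpus: dict[str, str],
                    threshold: float = 0.2) -> list[str]:
    """Return titles whose overlap with the manuscript exceeds the threshold."""
    return [title for title, text in corpus.items()
            if similarity(manuscript, text) > threshold]
```

Anything flagged goes to the human reviewer, as above; anything clean passes straight through.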
I would then run the manuscripts through a spellchecker. It’s amazing how many typos some self-published books sport, many of them mistakes that should have been picked up by a simple proofread. Any book with a significant number of typos is likely to be shite in other ways too, so manuscripts over a certain typo threshold would be flagged for review by a human.
For this, the human checkers could easily be crowdsourced through something like Mechanical Turk. Anyone with half a wit can tell the difference between a typo and an exotic noun, and it’d be simple to create a test to make sure that people know the difference, plus a sensible interface to allow them to mark the genuine typos. Manuscripts with too many typos would be tossed.
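Something like the following sketch could drive that triage. The dictionary, the word-matching and the one-typo-per-thousand-words cut-off are illustrative assumptions only, which is exactly why anything over the line goes to a human checker rather than straight in the bin.

```python
# A rough sketch of the typo-threshold filter. The dictionary and the
# 1-per-1000-words threshold are placeholder assumptions.
import re

def typo_rate(text: str, dictionary: set[str]) -> float:
    """Fraction of words not found in the dictionary."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    unknown = sum(1 for w in words if w not in dictionary)
    return unknown / len(words)

def triage(manuscripts: dict[str, str], dictionary: set[str],
           threshold: float = 0.001):
    """Split submissions into those that pass and those sent to human checkers."""
    passed, flagged = [], []
    for title, text in manuscripts.items():
        (flagged if typo_rate(text, dictionary) > threshold
         else passed).append(title)
    return passed, flagged
```

Note that exotic nouns would inflate the rate, which is fine: the automated pass only decides who gets a human look, not who gets tossed.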
For the next step, we’d need a large pool of readers, preferably with some sort of editorial experience but definitely with a clear understanding of what makes a piece of writing good or bad. I think you could probably recruit these volunteers from the public if you put together a short test to make sure that people had the ability to discriminate between competent and shite writing.
Then the first 1000 words of each manuscript would be anonymised and given to an odd number of randomly selected readers, say three, and they’d be asked to mark it out of ten simply on the quality and style of the writing, not on characterisation, dialogue, plot etc. The manuscripts with the worst total scores would be discarded, and those with the best would go on to the next stage.
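In code, the anonymisation and assignment might look something like this. The opaque-ID scheme and the data shapes are my assumptions about how you’d wire it up, not a prescription:

```python
# A sketch of the anonymised assignment step, assuming three readers
# per excerpt as described above. Structure is hypothetical.
import random
import uuid

def first_n_words(text: str, n: int = 1000) -> str:
    return " ".join(text.split()[:n])

def assign_excerpts(manuscripts: dict[str, str], readers: list[str],
                    per_excerpt: int = 3):
    """Map an opaque excerpt ID to its text and randomly chosen readers."""
    assignments = {}
    key = {}  # private mapping from excerpt ID back to the real title
    for title, text in manuscripts.items():
        excerpt_id = uuid.uuid4().hex
        key[excerpt_id] = title
        assignments[excerpt_id] = {
            "text": first_n_words(text),
            "readers": random.sample(readers, per_excerpt),
        }
    return assignments, key
```

The readers only ever see the excerpt ID, so nobody can tip the scales for a mate.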
Because you would have more than one person reading each excerpt, you’d get not only a fair view of how competent the writing was, but also a sense of how certain people were that the writing was competent. I’m nicking this idea off Galaxy Zoo, the citizen science site where people classify galaxies according to type. If everyone who views a galaxy says it’s a spiral, then you have 100% confidence that it is a spiral, but if half the people who look at it say it’s a spiral and half say it’s an elliptical galaxy, then you have less confidence.
So if a manuscript got all 8s, 9s or 10s, then you could be very confident that on a technical level, it was competent. If it got all 1s, 2s or 3s you could safely discard it. And if it got some 8s and some 2s, you would know you had Marmite on your hands.
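One simple way to turn a handful of reader scores into both a verdict and that confidence signal is to look at the average alongside the spread. The cut-offs below are purely illustrative:

```python
# Turning reader scores into a verdict plus a confidence signal, in
# the Galaxy Zoo spirit. The cut-off values are illustrative only.
from statistics import mean, stdev

def verdict(scores: list[int]) -> tuple[str, float]:
    """Classify a manuscript from its reader scores (1-10).

    Returns a label and the score spread: low spread means the readers
    agreed, high spread means a 'Marmite' manuscript.
    """
    avg = mean(scores)
    spread = stdev(scores) if len(scores) > 1 else 0.0
    if spread > 3.0:
        return "marmite", spread
    if avg >= 7.0:
        return "advance", spread
    if avg <= 4.0:
        return "discard", spread
    return "borderline", spread

# e.g. verdict([8, 9, 10]) -> ("advance", 1.0)
#      verdict([8, 2, 9])  -> ("marmite", ~3.8)
```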
I think this is how to deal with the scaling issue. If you got 1,000 manuscripts submitted, you want each excerpt read three times, and you think each person is, on average, going to bother to read five excerpts, then you need 600 volunteers. That might seem like a lot, but I don’t think it is, given how many people volunteer for citizen science projects. This is citizen literature! What could be more fun?
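For what it’s worth, the arithmetic generalises neatly:

```python
# A one-line sanity check on the volunteer numbers above.
from math import ceil

def volunteers_needed(manuscripts: int, reads_each: int,
                      excerpts_per_volunteer: int) -> int:
    return ceil(manuscripts * reads_each / excerpts_per_volunteer)

print(volunteers_needed(1000, 3, 5))  # -> 600
```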
By the end of this stage, you’ve winnowed out the plagiarised manuscripts, those with bad spelling, and those with the worst abuses of grammar, punctuation and style. You’re now left with a selection of works that are, hopefully, competently written.
Here, there’s an option. You either insert another stage where the readers with direct, relevant editorial experience grade anonymous manuscripts based on their literary merits, or you just pick the top 100, say, and there’s your longlist. The best route would depend on how many good judges you’ve got and how many manuscripts you have left.
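The “just pick the top 100” route is barely a one-liner; a sketch, assuming a hypothetical scores mapping from anonymous manuscript ID to its reader marks:

```python
# Picking the longlist from the surviving manuscripts. The `scores`
# mapping (manuscript ID -> list of reader marks) is hypothetical.
def longlist(scores: dict[str, list[int]], size: int = 100) -> list[str]:
    """Return the `size` highest-scoring manuscript IDs."""
    ranked = sorted(scores, key=lambda m: sum(scores[m]), reverse=True)
    return ranked[:size]
```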
The process thus far should avoid any biases on the part of the readers because of the anonymisation, and cannot be gamed because the authors aren’t involved. What’s more, it’s gender blind and genre blind, allowing for plenty of surprises in the longlist.
I’ll note that the Not The Booker longlist this year had 72 entries, about a third of them by women, though none of those made it through to the shortlist. I leave the question of why that should be as an exercise for the reader.
The next stage could then take on the normal Not The Booker format, with a public contest based on 100+ word reviews, rather than simple votes, to create the shortlist. This is when the authors get to rally the troops, the passionate discussions happen in the comments, and everyone gets to dig in and dirty their hands. The final judging of the shortlist happens the same way as usual too, with reviews and discussions and so on and so forth.
Now, it is true that the system I’ve outlined would require some setting up, but it’s more than possible to do. We have, as they say, the technology. And if a self-publishing Not The Booker was established, it would be well worth the trouble of developing a robust system to deal with the submissions, as it would not only get used year after year but could also reveal some interesting trends: the number of women authors, the popularity of certain genres, the rise or fall in overall quality year on year, and so on.
Of course, I’m sure I’ve missed something blindingly obvious, or made it too complicated in some way, so please do say so in the comments!
This one’s easy because I’ve done this sort of thing before and seen it at art shows and other award systems.
You have more than one award: a juror’s award and a people’s choice. Maybe also (if you’re lucky or good at working it) a sponsored award where some company kicks in a prize for the promo benefits.
The Covey Awards (for book covers and videos) was doing that, and you see the corporate prize thing on the People’s Pilot at TVwriter.com.
This enables some credibility/deservedness/feelgood, but also allows you to escape the whole “elitist/subjective” criticism… and the popular votes bring you views.
As does publishing lists of semi-finalists, finalists, etc., which get hooted about on the web, dragging in more eyeballs.