Saturday, September 10, 2005

A snapshot of reviews for a small-press book

Well, today I was excited to receive the 200th anniversary issue of Interzone, a British science fiction magazine of some repute. It usually contains a few good stories and some book reviews. This time round, one of the reviewers looked at my novel, Déjà Vu.

Two words: hatchet job. This guy did not like the book. He's a reviewer who puts ideas above character, and that's fine - there are some great books that do that. My book, however, puts people above ideas, and that might be where I and the reviewer part company. One other thing: Without exception, the many reviewers who have looked at my book have commented on the powerful ending. This guy, by contrast, thinks it's 'unsatisfying'. I'm perplexed, but since this was one of those bite-sized capsule reviews, it isn't easy to dig deeper into his reasoning. He just found it unsatisfying. Shrug.

Am I angry? Well, obviously. Do I feel that the world is unfair on poor Ian? No. I can't expect every person who reads my book to like it, though it's a source of irritation that one of those who doesn't happens to be a reviewer with a wide readership. This is part of the publishing game. Plenty of writers with books better than mine barely get noticed.

This blog - whose readership does not approach even a tiny fraction of the Interzone readership, clearly - provides a useful conduit for making lemonade out of this particular lemon. So I've decided the time is right for a snapshot of all my reviews.

I've given each review a score based on how much the reviewer liked my book (except in the case of SFX, which gave me a point score that I've directly converted). I'm aware that this is a subjective method, but I've tried to be as honest as possible. 0% means the reviewer thought my book was a piece of shit. 100% means they were about as enthusiastic as one could possibly be without seeming rabid. For example, while Ken MacLeod described Déjà Vu wonderfully, he did give me some constructive criticism for parts of the book. I've given his review a score of 80%. You can read excerpts and links to all my reviews here. One online review is - unashamedly - not included in that list: Cheryl Morgan's hatchet job. (As an aside, I've commented before that I think Morgan's review is not a bad one; we disagree, but her points are intelligent and well-argued. I don't include it in my review list because, let's face it, a review list is a piece of advertising and a potential reader might not share my broader perspective. I've given Morgan's review 30%.)

For most of these analyses, I've removed reviews received by email because these came from authors who were kind enough to read my book and email me about it. Some authors did not reply, and it is likely that these authors did not like my book. This means that the ratings I generated from author emails may not be representative; they may skew the results higher than warranted.

What's the overall review score for Déjà Vu?


How do the reviews break down?

View the summary (PDF).

Is there a difference in reviews between the major (wide readership) and minor (narrow readership) publishers?

I've classed 7 of my reviews as major and 4 as minor (a further 4, reviews by email, were not classified). The average review score for major review publications is 58%, whereas the average for minor publications is 92%. However, when the two hatchet jobs are removed from the major reviews, the average for this group rises to 75%. Is the difference between the major and minor reviewers reliable? I don't have enough data to tell, but I would guess it is. It's worth noting that the average score for reviews I've garnered from personal communications with authors (i.e. those writers actually out there, writing) is 95%.
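For anyone who wants to reproduce this with-and-without comparison, here is a minimal Python sketch. The individual scores below are hypothetical placeholders (the post reports only the group averages), chosen so the group means match the figures quoted above:

```python
from statistics import mean

# Hypothetical per-review scores -- the post gives only group averages.
# The two lowest "major" scores stand in for the two hatchet jobs.
major = [1, 30, 70, 75, 75, 75, 80]
minor = [90, 90, 95, 93]

print(f"major: {mean(major):.0f}%")                  # 58%
print(f"minor: {mean(minor):.0f}%")                  # 92%

# Drop the two lowest (hatchet-job) scores and recompute.
trimmed = sorted(major)[2:]
print(f"major, hatchet jobs removed: {mean(trimmed):.0f}%")  # 75%
```

The point of the trimmed mean is just to show how sensitive a small-sample average is to a couple of outliers.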

How does review score vary with type of publication (physical newspaper, physical magazine, blog, or web magazine)?

Here are the review scores by publication type:

Physical newspaper: 90% (from 2 reviews)
Blog: 90% (from 2 reviews)
Web magazine: 73% (from 4 reviews)
Physical magazine: 53% (from 3 reviews)

The physical newspaper score is high, but it should be remembered that this category contains only two instances (The Guardian and Exepose), both of which were very positive. The blogs are high too. The web magazine category is pretty good when one considers that Morgan's hatchet job is included. Lastly, the physical magazine score is the lowest; this is a function of Lewis's hatchet job and a mediocre score for a review in SFX (which was scored 3/5 by the reviewer). Again, there are not enough data to tell if these differences are statistically reliable.

Does it make a difference if the reviewer/publication is specialist (only reviews sci fi) or non-specialist (reviews all genres)?

Of my reviews, 9 would be described as specialist, 6 as non-specialist. Here I will also include the emails from established authors because they are, I would argue, rather more specialist than the specialist reviewers of science fiction publications.

The average review score for non-specialist reviewers is 90%. The average for specialist reviewers is 72%. This second average is brought down by two hatchet jobs, and with these removed, it increases to a more respectable 87%.

What conclusions can be drawn?

Well, the first thing to note is that my sample size is small, so we need to be careful about any generalizations. It might also be true that Déjà Vu is not representative of all science fiction books, or books in general. However, there are one or two things suggested by the data.

If reviewers were doing their job right, we would expect small variability in their review scores (because with increasing agreement comes less variability). An average score of 75% with a standard deviation of 25% suggests that reviewers have very different opinions of my book. This is more evidence for the common-sense notion that a reader should read more than one review of a book if he or she wants an accurate picture of the book's 'quality' (whatever that is) before buying it.
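The mean-and-spread claim is easy to check with Python's statistics module. Again, the scores below are hypothetical placeholders, constructed to have roughly the mean (75%) and standard deviation (25%) quoted above, not the actual review data:

```python
from statistics import mean, stdev

# Hypothetical scores with roughly the mean and spread described above.
scores = [30, 50, 60, 80, 90, 95, 100, 95]

print(f"mean = {mean(scores):.0f}%, sd = {stdev(scores):.0f}%")
```

Note that `stdev` computes the sample standard deviation (dividing by n - 1), which is the appropriate choice when the reviews are treated as a sample of all possible reviewers.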

The review scores were higher for publications with a smaller readership (according to my classification), so a writer might want to send review copies to these publications. However, this is negated somewhat when one considers the impact factor of 'major' publications such as The Guardian.

Specialist reviewers were, in general, harder on my book, so a writer might wish to submit to a publication that is not genre-specific. Of course, the reality is that an author will take a review where he or she can get it, and specialist magazines and websites might be the only outlets willing to consider a first-timer's work.

So, that's a snapshot of the current state of reviews for my book. A mixed bag, generally positive, and about as good as an author can hope for.


Written while listening to Puppen Weinen Nicht from the album "Ndw1/3" by Combo Colossale


Blogger Virginia said...

This is freaking great!
Too often I see authors throwing a fit because a reviewer didn't say all glowy positive things. I love the way you've analyzed your results to see what kind of validity they may hold.

I'd like to see what points were made by reviewers/critiques that you AGREE with and will take the lesson from them. I think that's the best part. Some reviewers miss the boat entirely, but some have valid points, and I think, as writers, we need to step back and accept the free education.

7:36 PM  
Anonymous Danilo Borda said...

Thank you, I just wanted to give a greeting and tell you I like your blog.

4:22 PM  
