A snapshot of reviews for a small-press book

Well, today I was excited to receive the 200th anniversary issue of Interzone, a British science fiction magazine of some repute. It usually contains a few good stories and some book reviews. This time round, one of the reviewers looked at my novel, Déjà Vu.

Two words: hatchet job. This guy did not like the book. He’s a reviewer who puts ideas above character, and that’s fine — there are some great books that do that. My book, however, puts people above ideas, and that might be where I and the reviewer part company. One other thing: without exception, the many reviewers who have looked at my book have commented on the powerful ending. This guy, by contrast, thinks it’s ‘unsatisfying’. I’m perplexed, but since this was one of those bite-sized capsule reviews, it isn’t easy to dig deeper into his reasoning. He just found it unsatisfying. Shrug.

Am I angry? Well, obviously. Do I feel that the world is unfair on poor Ian? No. I can’t expect every person who reads my book to like it, though it’s a source of irritation that one of those who doesn’t like it happens to be a reviewer with a wide readership. This is part of the publishing game. Plenty of writers with books better than mine barely get noticed.

This blog — whose readership does not approach even a tiny fraction of the Interzone readership, clearly — provides a useful conduit for making lemonade out of this particular lemon. So I’ve decided the time is right for a snapshot of all my reviews.

On the basis of how much a reviewer liked my book, I’ve given each review a score (except in the case of SFX, which gave me a point score that I’ve directly converted). I’m aware that this is a subjective method, but I’ve tried to be as honest as possible. 0% means the reviewer thought my book was a piece of shit. 100% means they were about as enthusiastic as one could possibly be without seeming rabid. For example, while Ken MacLeod described Déjà Vu wonderfully, he did give me some constructive criticism for parts of the book. I’ve given his review a score of 80%. You can read excerpts and links to all my reviews here. One online review is — unashamedly — not included in that list: Cheryl Morgan’s hatchet job. (As an aside, I’ve commented before that I think Morgan’s review is not a bad one; we disagree, but her points are intelligent and well argued. I don’t include it in my review list because, let’s face it, a review list is a piece of advertising, and a potential reader might not share my broader perspective. I’ve given Morgan’s review 30%.)

For most of these analyses, I’ve removed reviews received by email because these came from authors who were kind enough to read my book and email me about it. Some authors did not reply, and it is likely that these authors did not like my book. This means that the ratings I generated from author emails may not be representative; they may skew the results higher than warranted.

What’s the overall review score for Déjà Vu?


How do the reviews break down?

View the summary (PDF).

Is there a difference in reviews between the major (wide readership) and minor (narrow readership) publishers?

I’ve classed 7 of my reviews as major and 4 as minor (a further 4, reviews by email, were not classified). The average review score for major review publications is 58%, whereas the average for minor publications is 92%. However, when the two hatchet jobs are removed from the major reviews, the average for this group rises to 75%. Is the difference between the major and minor reviewers reliable? I don’t have enough data to tell, but I would guess it is. It’s worth noting that the average score for reviews I’ve garnered from personal communications with authors (i.e. those writers actually out there, writing) is 95%.
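For the curious, the effect of dropping the hatchet jobs is easy to reproduce. The scores below are invented placeholders (the post doesn’t list the individual ratings), but the calculation is the same:

```python
from statistics import mean

# Hypothetical review scores for illustration only; these are NOT
# the actual ratings behind the averages reported above.
major = [30, 30, 55, 70, 75, 80, 90]   # includes two "hatchet jobs"
minor = [85, 90, 95, 98]

def avg(scores, exclude_below=None):
    """Average the scores, optionally dropping any below a threshold."""
    if exclude_below is not None:
        scores = [s for s in scores if s >= exclude_below]
    return mean(scores)

print(round(avg(major)))                    # all major reviews
print(round(avg(major, exclude_below=50)))  # hatchet jobs removed
print(round(avg(minor)))                    # minor reviews
```

With made-up numbers like these, removing the two lowest outliers lifts the "major" average by more than ten points, which is the same shape of effect the post describes.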

How does review score vary with type of publication (physical newspaper, physical magazine, blog, or web magazine)?

Here are the review scores by publication type:

Physical newspaper: 90% (from 2 reviews)
Blog: 90% (from 2 reviews)
Web magazine: 73% (from 4 reviews)
Physical magazine: 53% (from 3 reviews)

The physical newspaper score is high, but it should be remembered that this category contains only two instances (The Guardian and Exepose), both of which were very positive. The blogs are high too. The web magazine category is pretty good when one considers that Morgan’s hatchet job is included. Lastly, the physical magazine score is the lowest; this is a function of Lewis’s hatchet job and a mediocre score for a review in SFX (which was scored 3/5 by the reviewer). Again, there are not enough data to tell whether these differences are statistically reliable.

Does it make a difference if the reviewer/publication is specialist (only reviews sci fi) or non-specialist (reviews all genres)?

Of my reviews, 9 would be described as specialist, 6 as non-specialist. Here I will also include the emails from established authors because they are, I would argue, rather more specialist than the specialist reviewers of science fiction publications.

The average review score for non-specialist reviewers is 90%. The average for specialist reviewers is 72%. This second average is brought down by two hatchet jobs, and with these removed, the average increases to a more respectable 87%.

What conclusions can be drawn?

Well, the first thing to note is that my sample size is small, so we need to be careful about any generalizations. It might also be true that Déjà Vu is not representative of all science fiction books, or books in general. However, there are one or two things suggested by the data.

If reviewers were doing their job right, we would expect small variability in their review scores (because with increasing agreement comes less variability). An average score of 75% with a variability statistic (standard deviation) of 25% suggests that reviewers have very different opinions of my book. This is more evidence to back up the common-sense notion that a reader should read more than one review of a book if he or she wants an accurate estimate of the book’s ‘quality’ (whatever that is) prior to purchase.
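To make the point concrete, here is a small sketch (again with invented scores, not the post’s actual data) showing how two sets of reviews can share the same mean while the standard deviation reveals very different levels of agreement:

```python
from statistics import mean, stdev

# Invented scores for illustration; both sets average 75%.
agreeing    = [70, 73, 75, 77, 80]   # reviewers broadly agree
disagreeing = [35, 65, 80, 95, 100]  # a hatchet job alongside raves

print(mean(agreeing), round(stdev(agreeing)))        # same mean, small spread
print(mean(disagreeing), round(stdev(disagreeing)))  # same mean, large spread
```

In the second set the standard deviation comes out at roughly 26 points, close to the 25% figure the post reports for the full sample, which is consistent with reviewers disagreeing sharply rather than converging on a shared judgement.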

The review scores were higher for publications with a smaller readership (according to my classification), so a writer might want to send review copies to these publications. However, this is negated somewhat when one considers the impact factor of ‘major’ publications such as The Guardian.

Specialist reviewers were, in general, harder on my book, so a writer might wish to submit to a publication that is not genre-specific. Of course, the reality is that an author will take a review where he or she can get it, and specialist magazines and websites might be the only outlets to consider a first-timer’s work.

So, that’s a snapshot of the current state of reviews for my book. A mixed bag, generally positive, and about as good as an author can hope for.


Written while listening to Puppen Weinen Nicht from the album “Ndw1/3” by Combo Colossale

Author: Ian Hocking

Writer and psychologist.

2 thoughts on “A snapshot of reviews for a small-press book”

  1. This is freaking great!
    Too often I see authors throwing a fit because a reviewer didn’t say all glowy positive things. I love the way you’ve analyzed your results to see what kind of validity they may hold.

    I’d like to see what points were made by reviewers/critiques that you AGREE with and will take the lesson from them. I think that’s the best part. Some reviewers miss the boat entirely, but some have valid points, and I think, as writers, we need to step back and accept the free education.
