Why I will vote No on September 18th


By Mike Taylor

Whatever the outcome of the Scottish independence referendum, I think I’ll be able to say I enjoyed the run up to it. It’s great to see that a political issue has engaged so many.

Occasionally I think it may have gone too far – this morning I was queuing in Costa and half overheard a friend in front of me say, “oh, ok, go on, Yes”, in response to a question from the cashier. Intrigued, I butted in and asked why she was thinking that, and what had made up her mind. “Oh”, she said, a little taken aback, “I just fancied the waffles this morning”.

Yet amid all the campaigning, it seems that it’s the Yes campaign that is the noisiest – whether it’s the ongoing Facebook post bombardment or the shameless harassment of Jim Murphy. Partly, this noise comes from passion. And that’s great. But I also think it comes from the fact that it’s easier to build a hypothetical nation than it is to defend an existing one, and one that has flaws for all to see. So I thought I’d have a bash at redressing the balance and outlining why I think it’s best to vote No.

For me, the main arguments are: i) while politics can divide us temporarily, the people of this island belong together, ii) Scotland has more chance of prosperity as part of the UK than as a separate nation, iii) our institutions work better on a bigger scale, and iv) there are legitimate risks which it is patronising and silly to dismiss as mere ‘scaremongering’ or ‘negativity’.

The first point to make is that the debate shouldn’t be about today’s politics; it should be about deciding what makes a country for generations to come. One of the most commonly stated arguments for independence is that we should be independent so that Scots get what Scots vote for. Currently we have a Tory-led government governing a country with one Conservative MP – surely that can’t be right?

However, this is to argue that you build countries on politics, and I don’t think that’s true. Look around the world. In recent decades, New Yorkers and Californians have consistently voted for Democratic presidential candidates and Texans for Republican; yet half the time they are ruled by the other side. Closer to home, the North of England is more left wing than the South, yet there’s no call for the Independent Republic of Scousers just yet.

Thinking about it a different way, what should happen to the one Conservative constituency in Scotland? Under the ‘get what you vote for’ argument, it should become the Independent Republic of Dumfriesshire, Clydesdale and Tweeddale. Clearly that is silly.

So, the question becomes: what should you base a nation on, if not politics?  Grab a dictionary and you’ll find a nation defined as “A… people united by common descent, history, culture, or language, inhabiting a particular state or territory”. And that’s just it, for me. There are so many more similarities than differences between Scotland and the rest of the UK. Millions of Scots live in England, and hundreds of thousands of English people in Scotland. We share values, and ideals like democracy and fairness. Our pastimes and sense of humour have their flavours, but are broadly similar, and our histories are entwined (World War II, for example). Why would I have more in common with someone from Glasgow than Newcastle, or Bristol than Shetland? Why split us apart? My point is simply that we make sense as a nation, which I view as being a community of similar people. Within that community, we are properly represented both in Westminster and by our own Parliament.

Another reason not to base a nation on current political divisions is that those divisions shift with time. As Thomas Hardy said of marriage: “the error [of marriage]… is that of basing a permanent contract on a temporary feeling”. Political allegiances shift and change over time. Before Thatcher, believe it or not, Conservatives weren’t the pariahs in Scotland that they are today. And can we be sure we’ll still be different from the rest of the UK in the future? What if the English move back toward the left? We are building a country for centuries, not election cycles.

That’s all got a bit airy-fairy, so let’s bring it down to earth. Scotland would have a stronger and more robust economic future as part of the UK. Both sides of the debate can produce economists quoting this or that figure; I wouldn’t trust any of them much. Instead, I prefer to reason from first principles. What would an independent Scottish economy be like?

For one, more dependent on oil – an area I spend part of my day job researching. There’s nothing wrong with oil in itself, of course, but the problem comes when you consider that much of Scotland’s planned spending is based on oil revenue predictions. In my view, those predictions are worthless. Currently, the SNP assumes $100 a barrel, roughly today’s price, but since 1999 oil has traded as low as $11. The price has run up hugely since, but the factors that decide it – the global economy, war, OPEC policy, technology, oil discoveries around the world – are completely out of our control, so there’s no guarantee it won’t fall again. If the price of oil does drop, we’ll get less in tax revenue and the Scottish government will have to cut spending, or borrow more. More than that, oil companies will stop making the increasingly large investments required to get oil out of the ground (the easy stuff is gone; now it’s deeper down, and further out) – so it won’t matter how much oil is in the North Sea, because that’s precisely where it will stay. In short, it’s unwise to base your economy and spending plans on future oil revenue, which is precisely what the SNP is doing.
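To see how sensitive this is, here’s a toy sketch of how a falling oil price feeds through to tax revenue when spending plans assume a fixed price. The $100 assumption is from the text; the production volume, tax rate and function names are invented purely for illustration, and real fiscal forecasting is far more involved:

```python
# Toy sensitivity check (production and tax figures invented for
# illustration): how a fall in the oil price feeds through to tax
# revenue when spending plans assume a fixed price per barrel.

ASSUMED_PRICE = 100.0    # $/barrel assumed in the spending plans
TAX_TAKE = 0.40          # illustrative effective tax rate on oil income
BARRELS = 600_000_000    # illustrative annual production

def oil_tax_revenue(price, barrels=BARRELS, tax_take=TAX_TAKE):
    """Crude estimate: tax revenue scales directly with the price."""
    return price * barrels * tax_take

planned = oil_tax_revenue(ASSUMED_PRICE)
for price in (100, 70, 40):
    shortfall = planned - oil_tax_revenue(price)
    print(f"${price}/bbl -> shortfall vs plan: ${shortfall / 1e9:.1f}bn")
```

Even in this crude model, a 30% price fall blows a multi-billion-dollar hole in the plans, which is the asymmetry the paragraph above is driving at: the spending is fixed in advance, but the revenue is not.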

By contrast, being part of the UK helps to bring stability to the economy. When some areas slacken, others pick up. Britain may have an inflated financial sector and a heated property market, but we got through the financial crisis in relatively decent shape. Ireland had both these problems too, but had less to fall back on when 2008–9 rolled around. An independent Scotland would have a narrower tax base, with fewer individuals and businesses to support spending, which makes us vulnerable. On top of this, the costly and time-consuming re-entry into the EU (which, according to Spain, would have to happen) would be destabilising.

On the economy more broadly, the UK brings a number of advantages to Scottish business. A large, easily accessible market, to start, combined with a consistent legal and regulatory environment that encourages investment. We’ll lose that as the economies drift apart over time. There’s also the fact of a common currency – but let’s not flog that horse much more (for what it’s worth, if Scotland did enter a currency union, we’d be controlled by spending and borrowing rules from Westminster without any representation in that government. We’d be even less independent than now!). I also think it’s valid to say that being part of the UK provides a wider job market for workers. Some complain young people are forced out of Scotland to look for work. I’m not sure I buy that, simply because many of the people I know went down south out of choice.

The last point I want to make is that many of our institutions can work better on a bigger scale. Take the NHS. A larger healthcare system gives more people greater access to specialist care. My sister, for example, was treated by experts from Manchester when diagnosed with a spinal condition. Research spending can be divided up amongst the best labs, regardless of borders, and so our science advances more quickly (hence the objection to independence from senior researchers).

In the world of sport, Scottish athletes gain access to better facilities. Chris Hoy spent much of his time training at English velodromes, and the Olympic gold-medal-winning rower Katherine Grainger, a Scot, trains at GB facilities in England.

Aside from these benefits, an independent Scotland would have to needlessly duplicate many UK government functions at great expense. All told, without a border in the way, money gets more easily to where it needs to go. It seems to me that the world needs fewer countries, and less division, not more.

I don’t mean to be negative, or to denigrate Scotland. But drawing a border across this island doesn’t make sense. As people, we are a community bound by many things, and when united can provide stability and opportunity for all citizens of the UK. I think Great Britain, for all its flaws, has achieved a lot in its history: amongst other things, the NHS and the welfare state, a comprehensive education system, and a free and open society. I expect more to come; and suspect it would be easier under one flag.

On the origin of (bull) faeces


I work in a grey, concrete-clothed office block in North London. On the first floor of that office block are the gents’ toilets, where three urinals stand shoulder to shoulder all day long swallowing the caffeine-soaked piss that drips from first floor dick.

Over the past few months, I have felt increasingly empathetic towards these silent ceramic soldiers, these Armitage ranks.  Because I, too, am showered with piss every day.

I get home and I stink of it. It’s in my hair and in my nose and in my skin and under my skin and under my fingertips and under my eyelids and…everywhere.  It’s just everywhere. 

But (you may be pleased to learn, friends) the piss that I’m showered with doesn’t flow from first floor dick, but from first floor mouths.  And it’s not really piss, it’s words.  Office words.

Single shoulder-slumpers like ‘deliverables’, ‘upskill’ and ‘learnings’.  Bilious conjoined twins of acid hate like ‘drill-down’, ‘value-add’ and ‘catch-up’.  Wanky piss-parcels of email Polyfilla like ‘moving forward’, ‘enablers and barriers’ and ‘quick wins’.

The phrase I reserve my purest, fiercest hatred for is ‘close of play’.  When I read it, when I am force-fed this faecal bisque at the end of a sentence like, ‘Would it be possible for you to turn that around by close of play?’ my eyes deaden.  It’s just horrible.  

Urban Dictionary describes ‘close of play’ thus:  Increasingly common on business bullshit bingo cards the English-speaking world over, ‘close of play’ is the latest way to say ‘5.30’, presumably employed by people because they are cunts. 

Not far wrong.  But the real reason ‘close of play’ rankles with me is that the phrase originates from cricket. 

Cricket is – and there is simply no questioning this – the single best thing that has ever happened to planet Earth, a glorious challenge of wit, skill and stamina (honestly).  It’s my first love and the smell of oiled willow and the sight of the red ball arcing across the blue village sky still makes my heart sing. 

So by trying to use cricket against me, to chivvy me along, to imply we’re mates asking favours of each other and to passive-aggressively poke me into doing my work with patronising cliché, the office drone has done its worst.

So what’s actually going on here?  Where did this bizarre language come from?  Why didn’t we section the first deranged psychopaths who used phrases like ‘boil the ocean’ and ‘soup to nuts’ and ‘stress test the straw man’ under the Mental Health Act?  Why is every (I haven’t checked, but I’m fairly certain) office in the Western world now infested with these basket cases?

I was reading Steven Pinker’s The Language Instinct when I first thought about this; in it he describes the creation of pidgin languages – rough patchwork languages that develop among disparate peoples thrown together by historical circumstance (such as the multi-national slaves of the sugar plantations).  With this fresh in my mind, I thought (and I thought myself fucking clever for doing so) that office-speak was therefore a sort of pidgin that developed organically to fill a language vacuum.

I reasoned thus: as our industrial economy gave way to our brittle knowledge economy and the first bewildered office pioneers trekked in from the factory floor, swapped their Dickies for T.M. Lewin and sat down behind their dreary desks, they needed a means to communicate with each other. 

But with the vocabulary they brought with them, these naïve bushbabies couldn’t negotiate the early morning rush for workspace (hotdesking), nor describe the amount of downtime tedium (capacity) they now had, nor needlessly quantify completely abstract concepts (operationalize).  So they cut and pasted words and phrases from other reference points – from sport (heads up, ballpark figure, touch base), from literature (swallow the frog) and mysticism (blue-sky thinking) – in order to make sense of their confusing new world.

But actually I’m an idiot because this isn’t a satisfactory explanation at all. Language is the means by which the pinball thoughts that we have in our heads are ordered, arranged and deposited into the minds of others.  And by this definition, office-speak is not a language or even a pidgin, it’s essentially anti-language. 

Let’s consider the following paragraph, which is the first from a real email I have received:

I’ve started thinking about our direction of travel under a number of key areas keeping in mind that our long term ambitions could really be articulated around increasing reach, engagement, income and importantly, impact, through creative, compelling, resonant articulation of our work.

Have any thoughts formed in your head after reading that?  I doubt it.  I know the context of this email.  I know who wrote it and why they wrote it and when they wrote it.  But reading it, even now, over and over again, no thoughts arrive in my head.  None.

So office-speak is not language.  It’s not even jargon; it’s more verbal argon – inert strings of sounds or symbols used to confuse underlings, to deliberately bore them and keep them servile. 

Office-speakers also use their anti-language to make themselves look busy and important and techy and numbersy.

When recently quizzed about his quest to acquire league-winning talent, Manchester United’s Chief Executive Ed Woodward replied that he had ‘experienced a number of conversations with agents and players’. 

Experienced a number of conversations?  Stop your mouth doing talking, Ed.  You chatted to footballers and their advisers. 

In using office-speak to describe his job, Woodward was trying to put on a show, to persuade us that what he’s doing is a grand and noble pursuit, that he’s a negotiator, a man whom history will remember as an architect of modern times, the successor to Lloyd George, the descendant of Gorbachev.

And Ed is not alone – any time your manager says they are ‘landscaping the competitive environment’ or praises ‘change agents’ or uses bizarre redundant phrases like ‘I, personally’ or says that they have ‘identified a number of key ways of working moving forward’ they are doing a Woodward.

To be fair, Ed Woodward didn’t create all this bollocks.  He hasn’t been around long enough to have done so, nor does he seem to possess the head for it.  So who did?

Well, I think American management consultants probably did. During the mid-20th century, when today’s superpower companies were looking to expand, nascent management consultancies were recruited by hesitant executives to dole out fairly obvious, albeit effective, advice.  But (and this was their genius) the consultants knew that they needed to dress their advice up in order to paint themselves as superhuman business oracles.  And they did this not with branding, or advertising, but with insidious neology – they created the new-age-techno-babble-pseudo-scientific nonsense of office-speak.

(That’s a very brief explanation, because you’ll find a much better description of the genesis of office-speak here.) 

So we can probably blame management consultants for bringing us the language of today’s offices (most directly and irrefutably, they are responsible for the cowardly sophistry of mass-sackings – ‘rightsizing’, ‘streamlining’ and ‘restructuring’).  But that’s not the end of it (sorry).

Clearly, I’m not the only person who hates office-speak.  Most people I have sat near in offices hate it.  I would hope you hate it. Ed Woodward probably used to loathe it himself until he started drinking the Kool-Aid.  So a separate and more interesting question arises: if most people hate office-speak, how did it spread so quickly and so far?

To explain this, I think it helps to think of office-speak as a meme.  In the true sense of the word (first described by genius biologist/idiot theologian Richard Dawkins in The Selfish Gene), memes are the ‘genes of culture’: powerful concepts passed between our minds and down through generations, broadly staying the same, but subtly changing and evolving in response to the shifting sands of the cultural milieu.

Classic examples of memes are the concept of God or catchy tunes.  Pictures of determined babies may be utterly hilarious, but they’re not really memes. 

As I’ve been at pains to point out, office-speak is a very bad thing.  It opposes productivity and obstructs meaning. 

But that doesn’t stop it being a powerful meme.  Evolution is blind and can actually encourage the development of characteristics which appear intuitively burdensome.  Dawkins describes evolution’s ability to create seemingly bizarre animal characteristics in one of the most captivating sections of The Selfish Gene:

Extravagances such as the tails of male birds of paradise may have evolved by a kind of unstable, runaway process.

In the early days, a slightly longer tail than usual may have been selected by females as a desirable quality in males, perhaps because it betokened a fit and healthy constitution. A short tail on a male might have been an indicator of some vitamin deficiency – evidence of poor food-getting ability. Or perhaps short-tailed males were not very good at running away from predators, and so had had their tails bitten off.

Anyway, for whatever reason, let us suppose that females in the ancestral bird of paradise species preferentially went for males with longer than average tails. Provided there was some genetic contribution to the natural variation in male tail-length, this would in time cause the average tail-length of males in the population to increase.

Females followed a simple rule: look all the males over, and go for the one with the longest tail. Any female who departed from this rule was penalized, even if tails had already become so long that they actually encumbered males possessing them. This was because any female who did not produce long-tailed sons had little chance of one of her sons being regarded as attractive. Like a fashion in women’s clothes, or in American car design, the trend toward longer tails took off and gathered its own momentum.

So I think the principles of evolution can explain office-speak’s rise.  First, management consultants irresponsibly farted the meme into the minds of businessmen.

Then, in uncertain times, before office culture had the chance to bed in properly, people started using office-speak at the behest of the consultants.  And because of their verbosity, the first office-speakers looked busier and looked more important and looked more techy and looked more numbersy.  And they got promoted through the ranks for doing so, because that’s basically what office life is all about. 

So despite annoying everyone and self-evidently being an impediment to effective communication, the office-speaking meme became associated with power and efficiency and money in much the same way that a long tail became associated with attractiveness in birds of paradise. 

And once office-speak became yoked together with power and money, the ratcheting wheels of evolution took over.  Because we’re now penalised for not using the anti-language of office-speak.

Those that do not possess the office-speaker’s loose tongue get ignored, or offend people with their transparent straight-talking, or seem reserved.  I suffer particularly from the last of these three, because often in meetings there is absolutely nothing to say or that needs to be said – and it’s in that vacuum of insight that the office-speaker thrives.

This is all deeply disconcerting for me, because I know I will never be an office-speaker.  I think I missed the golden window or something.  But if my theory holds true, there is light at the end of the tunnel.  Dawkins finishes his section on the bird of paradise with the sentence:

[The trend towards longer tails] was stopped only when tails became so grotesquely long that their manifest disadvantages started to outweigh the advantage of sexual attractiveness.

So maybe, just maybe, there’ll be a backlash against all this rot when office-speak goes too far and reaches a tipping point of counter-productive drivel, when more words in work conversations are nonsensical than sensical*, and the western world’s economies are crippled by linguistic disease.

But unfortunately, I know that evolution takes a very long time.  And I know I will be waiting a very long time for the hard stop.


*I know, but it should be.

Russell Brand and the price of life (part I)


If you live in the UK, your life is worth £30,000 a year.

That’s not the tariff of some Celtic protection racket.  Nor is it the heating bill you pay to stop your blood freezing (that’s much more).  £30,000 is the maximum amount the NHS will pay for a treatment that will give you a year’s good health.  That’s their limit and that’s how much they value your life.

Strict enforcement of the £30,000 limit has meant that if a ‘promising’ new drug costs too much, the NHS will not pay for you to have it – even if it is safe, effective and in use in other countries.
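The arithmetic of such a limit can be sketched in a few lines. The £30,000 threshold is from the text, but the example drug figures, function names and single-ratio decision rule are simplifications I’ve invented; the real appraisal process weighs far more than one number:

```python
# Toy sketch of a cost-per-healthy-year check of the kind described
# above. The £30,000 threshold comes from the text; the example drug
# figures are invented, and the real appraisal process is far more
# involved than a single ratio.

THRESHOLD = 30_000  # max £ the NHS will pay per year of good health

def cost_per_healthy_year(cost: float, healthy_years: float) -> float:
    """Price of the treatment divided by the good health it buys."""
    return cost / healthy_years

def within_limit(cost: float, healthy_years: float) -> bool:
    return cost_per_healthy_year(cost, healthy_years) <= THRESHOLD

# A drug costing £36,000 that keeps the disease under control for six
# months of good health works out at £72,000 per healthy year:
print(cost_per_healthy_year(36_000, 0.5))  # 72000.0
print(within_limit(36_000, 0.5))           # False
```

The point of reducing it to a ratio is that it forces a comparison: a hugely expensive drug that buys a few months competes directly, pound for pound, with cheaper treatments that buy years.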

In 2013, a new breast cancer drug called Afinitor was released with much fanfare by its developers.  Clinical trials showed that while Afinitor wasn’t a cure, it did help to keep the disease under control for several months – long enough to see a grandchild born perhaps, or to have a last summer holiday.

However, after expert consultation and debate, Afinitor was not deemed cost-effective enough for the NHS to provide it.  For some breast cancer patients, this was devastating news.  Having paid tax all their lives, they were denied a drug that they knew could help them, and died sooner as a result.

Drugs have been denied to people on the NHS because of high cost many times before and it will happen again. Every time this happens, the media leap on the issue, lambasting bureaucrats for condemning the sick to death by spreadsheet.

These reports almost always carry emotive quotes from those directly affected (“It’s hard to know there’s something out there that could help but they’re saying you can’t have it because of cost”) followed by a call for legislative reform.

I should say that I am exceptionally fortunate. No one close to me has died, or is dying, from a terminal condition that requires expensive treatment.  I am naive to the frustrating impotence and anguish that must come when facing the problems above – and am glad of it.

However, while I acknowledge that, of course, this is emotionally difficult territory, I believe that these limits are actually essential.

Our government has a certain amount of money, which it gets from taxes and trade agreements.  This money has to be spread across all functions of state, with a fraction set aside for healthcare.

That amount of money (nearly £109bn this year) has to provide birth to death care for everyone in the UK.  All your immunisations as a child, all your appointments for snivels, all the plaster for all the bones you’ve broken.

If £109bn sounds like a lot, it isn’t.  All those bills and pills add up, leaving a highly delicate economic ecosystem with little wiggle room. If one area of healthcare is given a bigger slice of the pie, then it has to be taken from another department’s plate.

When patient groups successfully lobbied for the expensive breast cancer drug Herceptin to be covered on the NHS in 2002, one trust had to close down a diabetes clinic to pay for it. There was simply no excess money in the system, so it had to come from somewhere else.  Agreeing to provide Afinitor on the NHS would have led to similar decisions being made.

Would people campaign so vociferously for an extremely expensive drug that delivered marginal health benefits if they knew it meant fewer social workers to protect vulnerable children?  Or fewer palliative care nurses to give comfort to those at the end of their lives? Or fewer home visits to patients with dementia?  These may seem callous questions, but limits are essential to make sure that the greatest good is done for the greatest number of people.

Of course, it is one thing to agree that there should be a limit to what we spend to extend lives, it is quite another to agree where the limit should be set.  The answer requires us to consider the question, ‘What is life worth?’

A dramatic (and perhaps religious) response would be that life is infinitely valuable and that matters of life and death should be above financial considerations.  But as we have seen, that would be an impractical and irresponsible approach. We should therefore add a caveat to our golden question, ‘What is life worth…and is that practical in the context of a healthcare system?’

If I knew the answer, I would not be sat in my pyjamas rambling on a blog about it, so delegation is needed.  These are clearly vital and complex questions that need expert consideration and specific responses. One rather feels that this is the sort of question that Russell Brand might struggle with were he the Prime Minister.


What happens when we cure cancer?


We’re all getting older. I’m getting older.  You’re getting older. Brad Friedel is definitely getting older.

We can run and row and eat raw fish but we age every hour, every day. Every Facebook click. Every keyboard peck. Every tock. Every turn of the clock.

However much Oil of Olay you knead into your craggy face, those lines will harden and your tears will drain in new and different ways.

But I also mean this – we’re all getting older. Owing to better diet and better medicine we are all living longer into our 70s, 80s and 90s. The British public is now an older collection of people than ever before.

I work for a medical research charity whose stated aim is to rid the world of breast cancer. We’re doing pretty well too, to the extent that we can (admittedly somewhat ambitiously) claim that we will help make the disease a chronic, non-lethal condition within a couple of generations. Our efforts – and the efforts of those like us – will further contribute to an aged population.

We should be mindful of hubris of course. In 1971 an ailing and Vietnam-vexed Richard Nixon declared war on cancer and promised a cure for the disease within 10 years. His intention was to generate public positivity around a highly ambitious project, in much the same way that JFK had used his moonshot announcement in 1961.

Since the 60s, men have flown to space and returned with moon-dusted boots but more than 40 years after Nixon took up arms, millions still die from cancer every year.

A recent Time article neatly compared these presidential bluffs, highlighting Nixon’s naivety:

In 1961, when JFK announced that the U.S. was going to the moon, the idea was no longer science fiction. The physics were understood. What remained was a giant engineering project: apply enough money and aerospace engineers and you eventually get to Neil Armstrong’s giant leap for mankind. When President Richard Nixon announced the war on cancer in 1971, victory wasn’t remotely possible. It was as if someone had announced a moon shot in 1820.

But that was the 60s, and half a century is a long time in science. Billions of dollars of investment and years of hard work have allowed us to understand a huge amount about how the machine of cancer works – in the same Time article’s words, ‘the physics of cancer’.

We know what fuel cancer uses and, for the most part, which genes and proteins drive its engine.  This knowledge has helped us understand where we can wedge our spanners in the cancer machine to slow it down, if not quite break it completely. Such progress has seen cancer death rates steadily decline since the mid-1990s.

Though much remains to be done – lung and pancreatic cancers are two examples where survival rates remain particularly low – the punchy rhetoric used by Cancer Research UK (CRUK) and others has substance; research will beat cancer.

We should absolutely be proud of these achievements as pure human triumphs. But when we finally elude a disease that has plagued us for as long as we have been on Earth, I think it’s prudent to ask… what happens next? What happens to the masses of ageing, cancer-free pre-corpses trundling to bingo in Bognor on free buses? What happens when the average Briton lives to 100?

Well for one thing, rates of Alzheimer’s Disease will skyrocket.

Currently, one in three people over the age of 65 develops dementia, with Alzheimer’s accounting for the majority of cases. As the over-65 bracket of the UK population swells, the number of people living with dementia will grow from 800,000 today to 1.7m by 2050.

Alzheimer’s is an isolating, terrifying and utterly draining condition for carers, relatives and sufferers. The disease stalks around the brain, destroying first what makes us human – memory, judgment, and personality – and then what keeps us alive.*  Available drugs can alleviate the symptoms but do nothing to halt the neuronal decay that underpins the disease.

The gradual mental and physical deterioration takes an average of 8-10 years from diagnosis to death, with attendant healthcare requirements costing the UK £23bn a year.  Clearly, this figure will spiral as the disease grows more prevalent.

Indeed, so bleak is the current and future picture that when I came across the Guardian headline, ‘Alzheimer’s treatment: landmark study gives hope for simple pill’ I rolled my eyes.  Newspapers regularly inflate health claims to sell themselves, ignoring important caveats to sex up scientific studies.

But this felt different. For one thing, the traditionally staid and conservative research community described the recorded results as ‘very dramatic’, ‘highly encouraging’ and a ‘turning point in the search for medicines to control and prevent Alzheimer’s disease’.  It takes something extraordinary for the men and women in white coats to ditch their comfort blanket of vernacular uncertainty.

This was also a completely new way of approaching the disease. Alzheimer’s is characterized by a build-up of misshapen proteins that stick together in clumps in the brain.  As these clumps accumulate, nerve cells respond by shutting down their own protein production lines.  If this shutdown** continues for too long, the brain’s nerve cells waste away and die, resulting in loss of mental faculties.

Enormous sums of money have been wasted in attempting to stop neurodegeneration by limiting the initial accumulation of misshapen protein clumps.  A series of high-profile clinical failures has seen several big pharma companies cut their losses on Alzheimer’s research, viewing the clumps as an insoluble problem***.

In the new approach Professor Giovanna Mallucci and her team ignored the clumps, focusing instead on preventing the shutdown of protein production in nerve cells.  Guided by this theory, Professor Mallucci’s team tested a drug developed by GlaxoSmithKline on mice with prion disease, a neurodegenerative condition that shares many parallels with Alzheimer’s.

Five weeks after treatment, the mice remained free of symptoms such as memory loss, impaired reflexes or limb dragging. They also lived longer than untreated animals with the same disease.  This was big news and I shared the researchers’ excitement.

Of course, huge questions remain, not least how to modify the drug so that it is safe and effective in humans.  False dawns are common in research science and optimism is usually met with suspicion.

But by understanding how to prevent cell death in Alzheimer’s disease, we may just be beginning to understand the physics of the condition.  All that’s left is for us to shoot for the moon.

*Swallowing becomes difficult, meaning that food and drink can enter the lungs and cause infection, leading to pneumonia and death.

**So nearly topical

***Couldn’t resist

Statistical sewage, spin and science.


What connects these men?

Todd Akin: First of all, from what I understand from doctors [pregnancy from rape] is really rare. If it’s a legitimate rape, the female body has ways to try to shut that whole thing down.

Dana Rohrabacher: Is there some thought being given to subsidizing the clearing of rainforests in order for some countries to eliminate that production of greenhouse gases? 

Paul Broun: All that stuff I was taught about evolution, embryology, the Big Bang theory, all that is lies straight from the pit of hell.

A) They are all ignorant of basic scientific concepts to hyper-offensive or hilarious extents.

B) They are all current or former members of the House of Representatives’ Science Committee, the government body that oversees all non-military research in the USA.

Answer? In fact, both A and B are true.  Think they’re hard to reconcile? Consider that Paul Broun – who doesn’t believe a single thing he was taught about embryology – is a trained medical doctor. Let’s be glad that Dr Broun chose politics over obstetrics.

Casting an eye across the Atlantic, it’s easy to feel pleased with ourselves as a relatively secular, scientifically literate people. We have millions tuning into Brian Cox’s and Dara O’Briain’s TV shows. We even have famous atheists in Stephen Fry, Richard Dawkins and the late Christopher Hitchens.

Peek down our own corridors of power, however, and problems emerge. A mere 6% of MPs have degrees in science subjects. There is not a single science degree amongst the members of the Cabinet.  Only one MP has practised scientific research beyond PhD level.

Almost all of the most powerful positions in the country are therefore occupied by graduates of arts and humanities subjects, a gross imbalance.

In the mid-20th century a man called CP Snow addressed this subject by delivering a lecture titled ‘The Two Cultures’ to an audience of academics in Cambridge.  In his talk, Snow – who combined careers as a chemist, a novelist and a politician – condemned the British educational system for over-rewarding the humanities at the expense of scientific and engineering education.  In practice, this deprived British elites (including those in politics) of adequate preparation to manage the modern scientific world.

A good many times I have been present at gatherings of people who, by the standards of the traditional culture, are thought highly educated and who have with considerable gusto been expressing their incredulity at the illiteracy of scientists. Once or twice I have been provoked and have asked the company how many of them could describe the Second Law of Thermodynamics. The response was cold: it was also negative. Yet I was asking something which is the scientific equivalent of: Have you read a work of Shakespeare’s?

I now believe that if I had asked an even simpler question — such as, What do you mean by mass, or acceleration, which is the scientific equivalent of saying, Can you read? — not more than one in ten of the highly educated would have felt that I was speaking the same language. So the great edifice of modern physics goes up, and the majority of the cleverest people in the western world have about as much insight into it as their neolithic ancestors would have had.

Many of Snow’s arguments are as true today as they were in 1959. I’ve no doubt Dave and his pals can merrily mix foreign policy with Foucault and budgets with Brecht. Ask them what Rhizobia could teach them about coalition though, and they’d struggle. In a web-linked world where 3D printers and synthetic biology will help shape the next century, I’d argue that our lack of scientifically minded politicians might come back to bite us in the gluteals.

Another area where I believe politician scientists (I avoid the term ‘political scientists’ deliberately) could make an impact is in changing the culture that surrounds their treatment and usage of data.

Raised on a diet of PPE and history, our politicians are well versed in making convincing arguments from incomplete or non-existent evidence (do correct me if I’m wrong, friends, but remember the number of times you produced well-graded essays without digesting every book on the reading list). The economy’s on the mend. The economy’s knackered. Iraq definitely has WMDs. My wife was driving the car when we were caught speeding. The benefits cap is working.

Spin-doctors have long polished turds, but at least they smelled of shit. Now, if an inconvenient stat smells a bit faecal, they just bury it and make one up to support their agenda. Don’t believe me? IDS recently said that homelessness had barely moved since the change in government in 2010 – it has gone up 27%.

It should be said that scientists are not immune to spin. Recent claims on the promise of ‘Big Data’ projects have undoubtedly been inflated, while the Research Excellence Framework, in forcing academics to demonstrate ‘impact’, has encouraged scientists to over-reach on their hypotheses.

But complete falsification of data in science is rare and efficiently weeded out when it is. As Steven Pinker states in his thoughtful essay, ‘The defining practices of science, including open debate, peer review, and double-blind methods, are explicitly designed to circumvent the errors and sins to which scientists, being human, are vulnerable.’

So I’d hope that scientific training – including statistical analysis, rigorous review of global work in a subject area and careful planning of resources – would make for a moral and effective politician. I’d also like to think that having more scientists sitting on those green seats would stem the flow of statistical sewage.

But then I don’t know for sure. I don’t have the evidence.

If I ruled the (football) world


Every edition of Prospect magazine begins with an article titled ‘If I ruled the world’, written by an assortment of guest writers.  Simon Schama (proposing more history in schools), Grayson Perry (suggesting a blanket ban on suits) and Lord Sacks (advocating observation of the Sabbath by all) have been among the enjoyable and erudite contributors.

I’m not actually completely convinced that wearing a onesie to work and spending every Saturday doing nothing would be the panacea that we so dearly need. But it seems like great fun and I’m keen to exercise the same fantasy.

Lacking the ego and ambition of the above luminaries however, I can’t claim to know how to solve the world’s ills. Fuck knows how to deal with a world ravaged by famine and riven by civil war.

So, while still embracing the spirit of sweeping reform, I’ve decided to narrow my scope and restrict my hypothetical presidential powers. My mission will be to heal the world of football. After all, I know much more about false nines than Number 10, way more about Damarcus than Damascus and far, far more about trequartistas than Trident.

While not promising to deliver world peace, my rule will nonetheless rescue football from the spitters, the biters, the racists, the granny-shaggers, the whorers, the tax evaders, the tapper-uppers, the ‘at-the-end-of-the-day-ers’, the badge-kissers and pundits called Alan.

Below is my four-point manifesto.

1. Keepers may no longer use their hands to touch the ball

Goalkeepers have become way too good. Manuel Neuer scares the shit out of strikers with his bear-like frame, weird three-fingered gloves and impenetrable forcefield of Teutonic arrogance. Mark Schwarzer is tediously consistent. Hugo Lloris’s pace and positioning were apparently behind Spurs’ good run of form last season. Pace? In a fucking goalkeeper? This has to stop before Scott Carson becomes Sports Personality of the Year.

Preventing keepers handling the ball will address this issue, reminding them that their rightful place is Danny Baker’s Own Goals and Gaffs, not shampoo adverts. Also, it would be fucking hilarious. Deprived of their digits, keepers would have to learn how to cartwheel and karate their way to clean sheets.

West Brom would have Jackie Chan between the sticks, round-housing the ball out of the top corner and into the stratosphere. Pep Guardiola would pack his Louis Vuitton bags and travel to Shaolin monasteries to scout for Buddhism’s best projectile paupers. Can-can dancers would move off the stage onto the pitch. It would be amazing.

2. Football pundits are to be binned

Every year, the terrible tight-suited Skymen meet with the gormless BBC bumchins at a conference centre in Kettering. Between limp cheese sandwiches and warm Carlsberg, they discuss how best to homogenise the language of football. The guidelines issuing from the summit are followed rigidly.

Misses must be ‘rued’. Passes must be ‘slide rule’. Successful dribbles by ‘diminutive’ wingers with ‘bags of pace’/‘pace to burn’ may be one of ‘slaloming’, ‘jinking’ or ‘mazy’.

As a result, football coverage is ultra-heat-treated, completely devoid of the lyrical lilting of rugby’s Eddie Butler or TMS’s masterful balance of banter, bat and ball.

So let’s get rid.

In their place I would suggest one or other of:

  • Sir Terry Wogan. Having honed his detached irony with the camp and deluded madmen of Eurovision, Tel would provide delightful accompaniment to the pantomime prunes of the Premier League.
  • Sir David Attenborough.  Dave wouldn’t be expected to discuss any of the tactical complexities involved in the game.  He wouldn’t even mention players by name.  He would be there to provide a zoologist’s eye view of the chaotic Darwinian mess that unfolds every time Premier League teams meet.  “The Uruguayan rat is known to be a particularly aggressive combatant, using his oversized teeth to terrify his adversaries.  He is also a noted racist.”
  • Daft Punk.  Cos they’re boss.

3.  The following are to become straight red card, backpage filling crimes against football

  • Long throws, either down the line or straight into the box.  Awful, just awful.  10 match ban.
  • Socks over knees.  You know who you are.
  • Shielding the ball out for a corner or throw in.  Why the fuck is this always clapped to the rafters?  IT’S SHIT PLAY.

4.  All games are to be refereed by a man-sized (Crouch-sized) version of Sir Killalot from Robot Wars


The truth about vaccinations: Your physician knows more than the University of Google

Violent metaphors

“A cousin of my mom’s survived Polio and lived the rest of his life with its effects. He was not expected to live past his teens but made it to his 40s. I am grateful that modern science can protect us from Polio and other diseases and I choose to take advantage of modern science to give my kid better odds of not dying from a preventable disease. I had heard a lot of noise from people claiming vaccines caused Autism, but never saw any clear evidence. It just seemed to me like people really wanted to point to something as the cause and they latched onto vaccines.”–Jennifer

I have been getting into a lot of discussions about whether vaccines are safe in the last few days. I’m not sure if it’s because of a post going viral about a (terrible) Italian court ruling last year (In contrast, American courts


Atomic bombs, Sarah Palin and fluorescent jellyfish


Osamu Shimomura was 16 when the B-29 bomber that brought nuclear destruction to Nagasaki flew over his house.  He still remembers the deafening drone of the engines.  And the 30 seconds of blindness following the explosion.  And the black rain.

Nagasaki Medical College was completely destroyed by the nuclear blast, forcing the pharmacy school to relocate to a temporary campus near Shimomura’s family home.  Despite no prior interest in the life sciences, the proximity of the new pharmacy school persuaded him to enrol in 1948.

Twelve years later, Shimomura took up a post at Princeton University with full government funding to study the nervous system of jellyfish.  While working there, he discovered why certain species of jellyfish glow green under ultraviolet light.  The startling effect is due to the presence of GFP, or green fluorescent protein, in the soft tissues of some species.

Hang on.  Government-funded scientists studying glowing jellyfish?  Isn’t that classic academic frivolity with no obvious human benefit?  Did Osamu Shimomura leave the economic strictures of post-war Japan so that deep-pocketed American taxpayers could indulge his curious whim?

The quotation machine that was Sarah Palin – how we miss her entertaining babble – famously addressed this issue in 2008 during her ill-fated tilt at the vice presidency.  In a campaign speech she complained that American taxes were being wasted on ‘fruit fly research in Paris, France’.  Quite apart from the deliciously sour redundancy of her quotation’s last word, Palin’s criticism of profligate public spending was rightly ridiculed at the time.

Research on fruit flies has proved particularly useful in studying the nervous system.  Efforts across the globe have delivered extraordinary new insight into autism, Alzheimer’s and Parkinson’s disease.  Thanks to these studies, we are now beginning to understand the root of these disorders in such detail that we may soon be able to design new strategies to combat them.  Fruit fly research in Paris, France, not so bad then.

One can easily imagine a 1960s wag decrying Shimomura’s ‘jellyfish research in Princeton, New Jersey’ as a waste of public money.  Indeed, many of Shimomura’s peers at the time did, viewing his work as a solution in search of a problem.  In 2008, however, Shimomura was awarded the Nobel Prize in Chemistry along with Roger Tsien and Martin Chalfie for their work on GFP.  Tsien and Chalfie had earned their share of the prize by turning GFP into a molecular tag, which can be stuck on to any protein that interests a cell biologist.

Much like the tags that are used to track the migration of wild animals, GFP allows us to visualise the movement and activities of important cellular proteins with powerful microscopes.  Because proteins carry out almost all the functional work that cells undertake – movement, cell division, repairs, and so on – understanding their function and malfunction in disease is of great interest.  Cancer and HIV research would not be where it is today without GFP.

Shimomura didn’t know the bomb would drop on his hometown that day in 1945, changing his life forever.  Nor did he know that GFP would become so important in biological research.  That’s because there’s no roadmap for scientific discovery.  The electron was discovered with no practical objective in mind and now we have a world run by electronics.  The X-ray was useless when it was first described in 1895, but last week it was used to find a crack in my friend’s clavicle*.

Shimomura’s discovery of GFP was fired by a desire to understand the world that he inhabits, a thirst for knowledge that would have pleased Plato and Aristotle.  Such research, without a direct practical output, is called ‘basic research’.  Applied scientists like Tsien and Chalfie seek to use science to intervene in the world; in their specific case, making invisible proteins visible.  The point is that applied science requires a foundation of basic research, using it as a platform to provide practical solutions to present problems.

So why does the British government pump millions of pounds into basic research?  Why does a yearly £34m of British taxpayers’ money fund the Large Hadron Collider (LHC), a 17-mile atom smasher on the Franco-Swiss border?  The answer is… we don’t know.  We can’t be sure what the LHC or other areas of basic research will deliver on a practical level.  What we do know is that they will provide incredible leaps forward in scientific understanding.  These developments will act as an ‘innovation investment’, delivering dividends when a new generation of geniuses invents the next world-changing technology.  So let’s hope that basic research continues to receive funding, whether Sarah Palin likes it or not.

*Thank you and all due credit to Aaron Sorkin and The West Wing.