IF YOU COULD, WOULD YOU?
From Chapter 5 (Limitless), Films from the Future: The Technology and Morality of Science Fiction Movies
On April 1, 2008, a press release was published announcing that the US National Institutes of Health (NIH) was launching a new initiative to fight the use of brain-enhancing drugs by scientists. Spurred on by a perceived need to prevent pill-induced academic advantages, it claimed that:
While “doping” is now accepted as a problem among athletes, it is less widely known that so-called “brain doping” has been affecting the competitive balance in scientific research as well.
The release went on to announce the formation of the new World Anti-Brain Doping Authority, or WABDA.
It should have been apparent from its publication date that the press release was an elaborate April Fool’s joke. It was the brainchild of Jonathan Eisen of the University of California, Davis, and it played into a growing interest in the use of nootropics and other cognitive enhancers in academia and the ethical questions that this raises. (The press release can still be read using the Wayback Machine on the original WABDA website, set up especially for the occasion.)
A few days after the press release hit the internet, the journal Nature published the results of its informal survey of 1,400 people on
their academic smart-drug habits. The survey was an open, global online survey, and so at best provides only a rough indication of what academics were doing at the time. There was no control over who completed it, or how honest they were. Yet it still provided a fascinating insight into what, up to then, had been the stuff of rumor and conjecture.
The survey asked participants whether they had ever used Ritalin, modafinil, or beta-blockers for non-medical purposes. Those who had were then asked a number of additional questions about their usage habits. Around one in five respondents said they had used one or more of these drugs to increase their focus, concentration, or memory. Ritalin was the most frequently used substance, and respondents between eighteen and twenty-five years old were the most prevalent users (with an interesting spike for those between fifty-five and sixty-five, suggesting a fear of late-career performance inadequacy). What was even more interesting to me was that 69 percent of the respondents said they’d risk mild side effects to take these drugs themselves, and 80 percent thought that healthy adults should be free to use them if they wanted to.
In stark contrast to competitive sports, these respondents were remarkably indifferent to their fellow scientists getting a drug-induced leg up. It seems—at least from this informal, self-selected sample—that there’s an ambivalence around using brain enhancements to succeed academically that we don’t see in other areas.
This is an attitude I’ve also come across in talking to colleagues, and it’s one that I must confess surprises me. Academia is deeply competitive, as are most professions that depend on mental skills. And yet, I find it hard to detect much concern over others getting a competitive advantage through what they imbibe. That doesn’t mean we shouldn’t be concerned, though.
In his 2004 commentary on cosmetic neurology, Anjan Chatterjee posed five questions to readers, designed to test their ethical boundaries. These included:
- Would you take a medication with minimal side effects half an hour before Italian lessons if it meant that you would learn the language more quickly?
- Would you give your child a medication with minimal side effects half an hour before piano lessons if it meant that they learned to play more expertly?
- Would you pay more for flights whose pilots were taking a medication that made them react better in emergencies? How much more?
- Would you want residents to take medications after nights on call that would make them less likely to make mistakes in caring for patients because of sleep deprivation?
- Would you take a medicine that selectively dampened memories that are deeply disturbing? Slightly disturbing?
These were designed to get people thinking about their own values when considering cognition-enhancing drugs. To this list, I would add five more questions:
- Would you take a smart drug to help pass a professional exam?
- Would you take a smart drug to shine more than the competition in a job interview?
- Would you take a smart drug to increase your chances of winning a lucrative grant?
- Would you use a smart drug to help win a business contract?
- Would you use a smart drug to help get elected?
On the face of them, Chatterjee’s questions focus on personal gains that either don’t adversely impact others, or that positively impact them. For instance, learning a language or the piano can be seen as personal enrichment and as developing a socially useful skill. And ensuring that pilots and medical professionals are operating to the best of their abilities can only be a good thing, right?
It’s hard to argue against these benefits of taking smart drugs. But there’s a darker side to these questions, and that is what happens if enhancement becomes the norm, and there is mounting social pressure to become a user.
For instance, should you be expected to take medication to keep up with your fellow students? Should you feel you have to dose your child up so they don’t fall behind their piano-playing peers? Should medical staff be required to be on meds, with a threat of legal action if they make an error while not dosed-up?
The potential normalization of nootropic use raises serious ethical questions around autonomy and agency, even where the arguments for their use seem reasonable. And because of this, there should probably be more consideration given to their socially responsible use. This is not to say that they should be banned or discouraged, and academics like Henry Greely and colleagues actively encourage
their responsible use. But we should at least be aware of the dangers of potentially stepping out on a slippery slope of marginalizing anyone who doesn’t feel comfortable self-medicating each day to succeed, or who feels pressured into medicating their kids for fear that they’ll flunk out otherwise. And this is where the issue flips from the “would you be OK” in Chatterjee’s questions, to the “would you do this” in my five follow-up questions.
In each of these additional questions, taking a cognitive enhancer gives the user a professional advantage. In some of these cases,
I can imagine one-off use being enough to get someone over a career hurdle—outperforming the competition in a job interview, for example. In others, there’s a question of whether someone will only be able to do their job if they continue to self-medicate. Is it appropriate, for instance, if someone uses cognitive enhancers to gain a professional qualification, a teaching qualification, say, and then can only deliver on expectations through continued use?
In all of these questions, there’s the implicit assumption that, by using an artificial aid to succeed, someone else is excluded from success. And this is where the ethics get really tricky.
To understand this better, we need to go back to the Nature survey and the general acceptance among academics of using smart drugs. For most academics, their success depends on shining brighter than their peers by winning more grants, making bigger discoveries, writing more widely cited papers, or gaining celebrity status. Despite the collegiality of academia (and by and large we are a highly collegial group), things can get pretty competitive when it comes to raising funds and getting promoted, or even securing a lucrative book deal. As a result, if your competitors are artificially boosting their intellectual performance and you are not, you’re potentially at a disadvantage.
As it is, the pressure to do more and to do it better is intense within academic circles. Many academics regularly work sixty- to seventy-hour weeks, and risk sacrificing their health and personal lives in order to be seen as successful. And believe me, if you’re fraying at the edges to keep up with those around you and you discover that they’ve been using artificial means to look super-smart, it’s not likely
to sit easily with you, especially if you’re then faced with the choice of either joining the smart-drug crowd, or burning out.
In most places, things aren’t this bad, and nootropic use isn’t so overtly prevalent that it presents a clear and present pressure. But this is a path that self-centered usage risks leading us down.
To me, this is an ethically fraught pathway. The idea of being coerced into behaviors that you don’t want to engage in, in order to succeed, doesn’t sit comfortably with me. But beyond my personal concerns, it raises broader questions around equity and autonomy. These concerns don’t necessarily preclude the use of cognitive enhancers. Rather, they mean that, as a society, we need to work out what the rules, norms, and expectations of responsible use should be, because, without a shadow of a doubt, there are going to be occasions where their use is likely to benefit individuals and the communities that they are a part of.
What puts an even finer point on these ethical and social questions is the likely emergence of increasingly effective nootropics. In the US and Europe, there are currently intense efforts to map out and better understand how our brains work. And as this research begins to extend the limits of what we know, there is no reason to think that we won’t find ways to develop more powerful nootropics. We may not get as far as a drug like NZT, but I see no reason why we won’t be able to create increasingly sophisticated drugs and drug combinations that substantially increase a user’s cognitive abilities.
As we proceed down this route, we’re going to need new thinking on how, as a society, we use and regulate these chemical enhancers. And part of this is going to have to include making sure this technology doesn’t end up increasing social disparities between people who can afford the technology and those who cannot.