The Promise and Perils of Brain-Computer Interface
Distinguishing real vs sham ethical concerns
Vox’s Sigal Samuel does not like Elon Musk’s plans for brain-computer interfaces via neural implants. There are good reasons to worry about neural implants: privacy and security concerns, the risk of “brainjacking”, totalitarian states abusing the technology to make their citizens more compliant, etc. The risk of dystopian outcomes from such a powerful technology is worth guarding against. It’s important to explore the possibilities here, and to consider whether regulation could prevent misuse, or whether some avenues of technological development are simply too dangerous to be allowed at all.
(My dispositions lean techno-optimist: I think increasing our capabilities is generally a good thing. But I know I could be wrong about that, either in general or in any specific instance. So it’s an important topic for public discussion, reflection, and policy-making. I certainly don’t think we should just sleep-walk into a future containing dangerous technologies without giving the matter serious thought.)
But the striking thing about the Vox article is how much of it instead focuses on what I think of as sham-ethical objections. It argues that Neuralink’s trials to help paralyzed people via neural implants are unethical because the company uses “an unnecessarily invasive, potentially dangerous approach to the implants that can damage the brain.”
Two competitor companies explain why they prefer a less-invasive (even if less powerful) approach:
Ben Rapoport, a neurosurgeon who left Neuralink to found Precision Neuroscience, emphasized that any time you’ve got electrodes penetrating the brain, you’re doing some damage to brain tissue. And that’s unnecessary if your goal is helping paralyzed patients.
For Tom Oxley, the CEO of Synchron, this raises a big question. “The question is, does a clash emerge between the short-term goal of patient-oriented clinical health outcomes and the long-term goal of AI symbiosis?” he told me. “I think the answer is probably yes.”
“It matters what you’re designing for and if you have a patient problem in mind,” Oxley added… “[W]e chose a point at which we think we have enough signal to solve a problem for a patient.”
Against Narrow Optimization
I’ve previously argued that conventional research ethics is too narrowly focused on patients’ interests. The focus should instead be on informed consent and social benefit: if altruistic (or otherwise compensated) volunteers are willing to accept the risks of participating in a promising vaccine challenge trial, for example, it should not be necessary that participants’ personal health prospects are improved by being in the trial. It’s OK for them to be altruistic! (Or to prioritize financial interests over medical ones!)
But the suggestion implicit in the Vox article is even worse. It seems to imply that it’s ethically problematic to test a technology, even one that (in expectation) benefits its recipients, whenever it falls short of optimally benefiting them (say, because some alternative option would have benefited them even more).
I think this is nuts. If you can either (i) help some people a moderate amount in hopes of subsequently helping the rest of society much more, or (ii) help the first group a bit more, but without such potential for downstream benefits, the first option is not inherently unethical. You have not wronged anyone by offering them a suboptimal benefit that you needn’t have offered them at all, when there’s a perfectly good reason (potential greater benefits for others) for not narrowly optimizing for just their interests.
So that’s why I call this a “sham” ethical objection. Neuralink is (expectably) helping paralyzed patients. If someone’s offering those patients a safer alternative, they’re free to choose that instead. But if no other alternative is currently available for those particular patients, it’s hardly reasonable to criticize Neuralink for (i) pursuing what they regard to be the most promising avenue for BCI development, and (ii) offering this to paralyzed patients who could benefit from it.
Obviously the real objection is to the end-goal of maximalist BCI development. But then that’s what the debate should focus on. Not this sham nonsense about “unnecessarily” endangering current patients. There’s no general obligation for researchers to narrowly optimize for just the interests of the participants in their research trials. If the broader effects really would be overall very beneficial to society, then it’s obviously ethical to develop a socially beneficial technology in a way that also expectably benefits research trial participants (and has their fully informed consent). So the real ethical question here is just whether the antecedent of the conditional is true: would the broader effects of this technology really be positive, or not?
Because at the end of the day, we shouldn’t just want to “solve a problem for a patient.” We should want even better outcomes, if we can get them. If we can safely enhance human capabilities, that could be a wonderful thing. But it’s a big “if”: there are serious risks that first need to be addressed.
Tangential: Regarding “brainjacking”, it seems to me that genetic engineering, or even chemical therapies applied to living people, could strongly affect people’s ethical views (e.g. utilitarian vs. deontological). Will the utilitarian (or deontological) “gene” be banned by the authorities?