Topical Review

Ethical issues in neuroprosthetics

Published 9 February 2016 © 2016 IOP Publishing Ltd
Citation: Walter Glannon 2016 J. Neural Eng. 13 021002. DOI: 10.1088/1741-2560/13/2/021002


Abstract

Objective. Neuroprosthetics are artificial devices or systems designed to generate, restore or modulate a range of neurally mediated functions. These include sensorimotor, visual, auditory, cognitive, affective and volitional functions that have been impaired or lost from congenital anomalies, traumatic brain injury, infection, amputation or neurodevelopmental and neurodegenerative disorders. Cochlear implants, visual prosthetics, deep brain stimulation, brain-computer interfaces, brain-to-brain interfaces and hippocampal prosthetics can bypass, replace or compensate for dysfunctional neural circuits, brain injury and limb loss. They can enable people with these conditions to gain or regain varying degrees of control of thought and behavior. These direct and indirect interventions in the brain raise general ethical questions about weighing the potential benefit of altering neural circuits against the potential harm from neurophysiological and psychological sequelae. Other ethical questions are more specific to the therapeutic goals of particular neuroprosthetics and the conditions for which they are indicated. These include informed consent, agency, autonomy (free will) and identity. Approach. This review is an analysis and discussion of these questions. It also includes consideration of social justice issues such as how to establish and implement fair selection criteria in providing access to neuroprosthetic research and balancing technological innovation with patients' best interests. Main results. Neuroprosthetics can restore or improve motor and mental functions in bypassing areas of injury or modulating dysregulation in neural circuits. As enabling devices that integrate with these circuits, neuroprosthetics can restore varying degrees of autonomous agency for people affected by neurological and psychiatric disorders.
They can also re-establish the connectedness and continuity of the psychological properties they had before injury or disease onset and thereby re-establish their identity. Neuroprosthetics can maximize benefit and minimize harm for people affected by damaged or dysfunctional brains and improve the quality of their lives. Significance. Provided that adequate protections are in place for research subjects and patients, the probable benefit of research into and therapeutic applications of neuroprosthetics outweighs the risk and therefore can be ethically justified. Depending on their neurogenerative potential, there may be an ethical obligation to conduct this research. Advances in neuroscience will generate new ethical and philosophical questions about people and their brains. These questions should shape the evolution and application of novel techniques to better understand and treat brain disorders.


Introduction

Neuroprosthetics are artificial devices or systems designed to generate, restore or modulate a range of neurally mediated functions. These include sensorimotor, visual, auditory, cognitive and affective functions that have been impaired or lost from congenital anomalies, traumatic brain injury, infection, amputation or neurodevelopmental and neurodegenerative diseases. Cochlear implants, visual prosthetics, deep brain stimulation (DBS), brain–computer interfaces (BCIs), brain-to-brain interfaces (BTBIs) and hippocampal prosthetics (HPs) can bypass, replace, or compensate for damaged, absent or dysfunctional neural circuits or limb loss. In these ways, they enable people with impaired neural function, limb loss or paralysis to gain or regain varying degrees of control of thought and behavior.

Although not all of these devices involve direct intervention in the brain, they all can be described as invasive in the broad sense of causing changes in neural circuits. These changes may be positive or negative, depending on how they influence these circuits, the bodily and mental states they mediate and the psychological responses by patients and research subjects to them. Neuroprosthetics raise a number of ethical questions about weighing actual and potential benefit and harm, as well as different senses of benefit and harm, for those fitted with them. Developed in the late 1950s, cochlear implants enable hearing in those who are severely hard-of-hearing by stimulating auditory nerve fibers unaffected by damaged sensory hair cells (Wilson and Dorman 2008, O'Donoghue 2013). A brainstem implant is now available for patients who are profoundly deaf with extensive damage to the cochlea (Shannon 2012).

Most people consider the ability to hear as beneficial and preferable to deafness. Yet some who are deaf consider this condition as essential to their identity. Moreover, the radical transition from deafness to hearing may make it difficult for a child and even more so for an adult to adjust to two very different sensory and experiential worlds. Developed in the late 1960s, visual prosthetics can provide those who are blind with visual information based on electrical stimulation through arrays of electrodes implanted near the optic nerve, retinal neurons or circuits in the visual cortex (Brindley and Lewin 1968, Schiller and Tehovnik 2008). While the enabling effects of the prosthetic in restoring or generating vision outweigh the disabling effects of blindness, the subject's experience in transitioning from a non-visual to a visual world may involve a challenging period of psychological adjustment. It may have unpredictable effects on other sensory capacities previously utilized to compensate for the absence or loss of vision. The resulting behavioral changes may be either beneficial or harmful to both the subject and those around him, depending on whether they meet or defeat their expectations and the extent to which they have to reconstruct their conception of a world shaped by blindness. Psychological, social and cultural factors associated with vision have to be considered together with the neurophysiological ability to see in order to assess the overall value of the prosthetic.

Cochlear implants, visual prosthetics and DBS are all pertinent to the topic of neurodiversity. This is the idea that there is natural variation of neural and mental functions due to the interaction of genetic, neurobiological and environmental factors (Ortega 2009). These functions fall along a spectrum that extends from the normal to the pathological. While functions approximating or within the pathological range of the neuropsychiatric spectrum are considered by many as disabilities, others consider them as abilities and prefer to retain rather than alter them. They conceive of these functions as differences rather than characteristics of a disorder. Although neurodiversity is often raised in the debate on autism spectrum disorders, it applies to other neurological and psychiatric conditions and the use of neuroprosthetics to modulate neural circuits recognized as the source of these conditions. In psychiatric disorders such as unipolar and bipolar depression, some patients may identify with the cognitive and affective states associated with them and resist treatment. In some cases, this identification has influenced legal reasons against forcibly using neuroprosthetics or psychotropic drugs to treat symptoms of these disorders (Starson v. Swayze 2003). The neural and psychological effects of neuroprosthetics and DBS in particular have to be assessed with a view not only to whether they control symptoms but also to how they influence the patient's overall experience (Hart 2014).

Among neuroprosthetics, DBS, BCIs, BTBIs and HPs raise the most timely and significant ethical questions. Accordingly, I focus mainly on these issues in this review. I discuss general ethical questions common to all of these prosthetics, such as weighing the potential benefit of modulating brain activity to restore behavior control against the potential harm from neurophysiological and psychological sequelae. Other ethical questions are more specific to the therapeutic goals of particular prosthetics and the conditions for which they are indicated. DBS can restore motor control in movement disorders (Benabid 2003, 2007, Krack et al 2010, Benabid and Torres 2012, Okun 2012, 2014, Reti 2015) and improve cognition, mood and motivation in psychiatric disorders (Mayberg et al 2005, Lozano et al 2008, Mallet et al 2008, Greenberg et al 2010, Holtzheimer and Mayberg 2011). But neurostimulation can also cause adverse effects, which requires careful weighing of potential benefits and risks of the technique (Frank et al 2007, Mallet et al 2008, Rabins et al 2009, Muller and Christen 2011, Christen et al 2012, Castrioto et al 2014). In DBS for psychiatric disorders, cognitive and affective symptoms may generate doubt about whether patients have the capacity to give informed consent to undergo the procedure (Lipsman et al 2012). Also, the fact that these devices can alter neural and mental functions outside of a person's conscious awareness raises the question of whether the person or the device controls her behavior. They raise questions about whether the person in whom a neurostimulating device is implanted retains a robust sense of autonomy or free will in voluntarily initiating and executing action plans. There are also questions about whether neural and mental changes induced by DBS restore, maintain or substantially alter a person's identity and whether this entails a net gain or loss for the person.

BCIs raise a related worry about behavior control (Wolpaw et al 2002, Lebedev and Nicolelis 2006, Nicolelis and Lebedev 2009, Wolpaw and Wolpaw 2012). A computer algorithm that decodes neural signals in motor and parietal cortices through EEG may allow researchers to predict an action the subject intends to perform, such as moving a computer cursor or robotic arm. This prompts the question of whether the subject can cancel the intention after forming it and refrain from performing the action. While these systems may restore some capacity for agency, the predictability of mental and physical acts with BCIs generates concern about the extent to which a person using this technology has free will. This requires the capacity to both execute and cancel intentions (Mele 2009). If the person using a BCI lacks the second capacity, then it may appear that the BCI is doing most or all of the causal work in the production of a motor task. Another question is how expectations of subjects about what BCIs can achieve and whether they succeed or fail to produce movements aided by these systems can influence whether they benefit from or are harmed by the outcome. Also, by utilizing semantic processing in the brain and auditory feedback, BCIs may enable completely locked-in or minimally conscious patients to communicate their wishes about medical treatment (Birbaumer et al 1999, 2014, Wolpaw et al 2002). But it is not known whether this type of communication indicates that the patient has sufficient cognitive and affective capacity to make an informed decision about whether they want to continue or discontinue life-sustaining care.
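The worry about executing versus cancelling intentions can be made concrete with a toy decoding pipeline. In the sketch below (the feature values, decoder weights and threshold are all invented for illustration, not drawn from any actual BCI system), a linear decoder scores a neural feature vector to detect an intended movement, and the action is issued only if the user has not vetoed it:

```python
def decode_intention(features, weights, threshold=0.5):
    """Toy linear decoder: score neural features against learned weights.
    Returns True when the score suggests an intention to move."""
    score = sum(f * w for f, w in zip(features, weights))
    return score > threshold

def execute_if_not_vetoed(intention_detected, veto_signal):
    """Free will, on Mele's analysis, requires the capacity to cancel
    as well as execute intentions; the veto check models the former."""
    return intention_detected and not veto_signal

weights = [0.4, 0.3, 0.3]   # hypothetical decoder weights
features = [0.9, 0.8, 0.7]  # hypothetical EEG-derived features
intent = decode_intention(features, weights)
print(execute_if_not_vetoed(intent, veto_signal=False))  # True: action issued
print(execute_if_not_vetoed(intent, veto_signal=True))   # False: intention cancelled
```

A system designed this way would preserve the second capacity Mele identifies; the ethical concern arises precisely when the pipeline lacks the veto step and proceeds directly from detected intention to action.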

HPs have been tested only in animal models (Berger et al 2011, Hampson et al 2013). Theoretically they could enable persons with damage to the hippocampal–entorhinal (H–E) circuit to encode and store new episodic memories. However, it is not clear to what extent an artificial device would integrate into this circuit and how it would interact with multiple cell fields and computations that naturally occur in a distributed and complex network of memory systems. Nor could one know how an HP would influence the emotional content and meaning of autobiographical memory as one constructs one's identity from the experience of persisting through space and time.

I then discuss social justice issues surrounding neuroprosthetics. The high cost of implanting and monitoring devices may preclude many from having access to them in experimental and clinical settings. Unequal access to research and therapeutic applications for the same medical need would be unfair to some patients. In addition, device manufacturers may decide to stop a clinical trial testing a neuroprosthetic for financial reasons. This would also be unfair to patients recruited for such a trial when equipment that was available during the trial became unavailable. I conclude by examining how advances in neuroscience may generate new ethical and philosophical questions about people and their brains, as well as how these questions may shape the development and application of new technologies to better understand and treat neurological and psychiatric disorders.

Deep brain stimulation (DBS)

Experimental and clinical neuroscience has advanced to the point where implantable stimulators can alter a range of neural functions. One of the most significant applications of the science is therapeutic DBS to probe and modulate critical nodes in dysfunctional neural circuits implicated in neurological and psychiatric disorders (Lozano and Lipsman 2013, Okun 2014). In this technique, high-frequency electrical stimulation (generally >100 Hz) of subcortical brain networks can restore normal physiological oscillations in them to regulate neural firing patterns and control symptoms. A DBS system consists of one or more electrodes implanted unilaterally or bilaterally in a targeted brain region using MRI-guided stereotactic techniques. The electrodes are connected to leads and stimulated by a pulse generator implanted subcutaneously below the clavicle or abdomen. Activation of the generator and the level of current transmitted to the electrodes are controlled by a manually operated programmable device. DBS is FDA-approved for movement disorders such as Parkinson's disease (PD), essential tremor and dystonia. The technique has also been used to treat seizure disorders. It was granted a Humanitarian Device Exemption for obsessive-compulsive disorder (OCD) in 2009 but is still considered experimental and investigational for treatment-refractory OCD, major depressive disorder (MDD) and other psychiatric disorders such as anorexia nervosa (Lipsman et al 2013).
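The programmable parameter space described above can be sketched schematically. The following fragment (parameter names and example values are illustrative only, not any manufacturer's interface) models a stimulation program and checks the one constraint the text states, namely that therapeutic DBS generally uses frequencies above 100 Hz:

```python
from dataclasses import dataclass

@dataclass
class StimulationProgram:
    """Hypothetical DBS settings; names and values are illustrative only."""
    frequency_hz: float    # high-frequency DBS is generally > 100 Hz
    amplitude_v: float     # current level delivered by the pulse generator
    pulse_width_us: float  # duration of each pulse
    bilateral: bool        # electrodes in one or both hemispheres

    def is_high_frequency(self) -> bool:
        # The review notes therapeutic DBS generally exceeds 100 Hz.
        return self.frequency_hz > 100.0

program = StimulationProgram(frequency_hz=130.0, amplitude_v=2.5,
                             pulse_width_us=60.0, bilateral=True)
print(program.is_high_frequency())  # True
```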

In PD, stimulation of the subthalamic nucleus or globus pallidus interna can down-regulate an overactive motor circuit of the basal ganglia, modulate a cortical-basal ganglia-thalamic loop and improve motor control (Abramowicz et al 2014, Okun 2012, 2014). In the subtype of depression characterized by anhedonia and avolition, stimulation of the nucleus accumbens can up-regulate an underactive reward circuit, modulate a frontal–limbic–striatal loop and improve mood and motivation (Holtzheimer and Mayberg 2011, Schlaepfer et al 2014). A different dysregulated frontal–limbic–striatal loop has been implicated in OCD (Mallet et al 2008, Greenberg et al 2010, Melloni et al 2012, Figee et al 2013). Stimulating circuits in this loop can down-regulate hyperactivity in them and ameliorate the obsessions and compulsions.

Although DBS has improved motor, cognitive, affective and volitional functions for many patients with these disorders, questions remain about its efficacy. This is especially the case in psychiatry. After a number of earlier trials yielded positive results overall, the BROdmann Area 25 DEep brain Neuromodulation (BROADEN) study of DBS for depression involving 75 subjects sponsored by device manufacturer St. Jude Medical was discontinued in 2013 after initial disappointing results (Underwood 2015). St. Jude stated in a letter that it decided to stop the trial before completion because the probability of a successful outcome was only 17.2%. Tractography showing abnormalities in Brodmann Area 25 and its projections to medial frontal areas appeared to identify this pathway as the appropriate target. Yet the actual effects of DBS on depressive symptoms failed to meet expectations. The outcome of this study underscores the difficulty of identifying and modulating the precise dysfunctional nodes of circuits involved in major depression. Advanced imaging may not be able to specify the right targets, which can vary depending on the subtype of depression at issue. An earlier study led by Mayberg, who was also the principal investigator in the BROADEN study, yielded more positive results in the same general region of the brain (Mayberg et al 2005). The difficulty in replicating effects of DBS in the same or adjacent nodes and circuits illustrates that brain-based models for treating depression with electrical stimulation are far from perfect. Variable results from studies may be indicative of the heterogeneous etiology of depression, which has multiple causes and cannot be explained entirely in terms of dysfunctional nodes in neural circuits. Failed studies like this one underscore the challenge of conceptualizing depression as a disease of the brain and mind.

Even when it has been effective in modulating dysfunctional neural circuits, DBS has resulted in a number of adverse events. These include effects associated with intracranial surgery, such as intracerebral hemorrhage, edema and infection, which are within the range of typical neurosurgical complications. More significantly, they also include effects associated with electrical stimulation, such as hypomania, mania and compulsive behaviors such as gambling and hypersexuality (Frank et al 2007, Mallet et al 2008, Rabins et al 2009, Muller and Christen 2011, Christen et al 2012, Castrioto et al 2014). These sequelae may result from not stimulating targeted circuits with the requisite precision, overstimulating them, or from expanding effects on other circuits. Focused stimulation is challenging because many brain functions depend on distributed and interacting circuits that send projections to and receive projections from each other. An area of the brain targeted by DBS may involve nuclei in close proximity to each other that regulate different processes. For example, as noted, the subthalamic nucleus in the basal ganglia is one of the areas stimulated to restore motor control in PD. Yet the basal ganglia consist of a complex network involving not only a motor circuit but also associative and limbic circuits mediating cognitive and emotional processes. The compulsive behavior of some PD patients receiving DBS may be explained by unintended excitatory effects on the limbic circuit. A study involving stimulation of the hypothalamic/fornix circuit to treat obesity improved memory in one subject (Hamani et al 2008). Circuit proximity can contribute to unexpected negative or positive effects of DBS (Benabid and Torres 2012). To prevent adverse events, precise stimulation of targeted circuits at the right frequency is necessary because these circuits may not be completely functionally segregated.

DBS: open-loop devices (OLDs) versus closed-loop devices (CLDs): benefits and risks

Many adverse events from DBS may be attributed to technical features of the OLDs used in most applications for neurological and psychiatric disorders. There is no information feedback from the neural output of the stimulator to stimulator input and no mechanism for the electrical frequency to adjust to changes in the brain. In CLDs, information is fed back from changes in neural activity to the stimulator in real-time, and the stimulator can adjust the frequency accordingly (Rosin et al 2011, Santos et al 2011, Hebb et al 2014, Potter et al 2014). This makes CLDs preferable to OLDs by ensuring that neural circuits are neither constantly overstimulated nor understimulated. They appear to provide a safer and more effective means of neuromodulation by providing a greater degree of precision in regulating neural network oscillations. By overcoming design and operational flaws in open-loop systems, closed-loop systems are more likely to maximize benefit and minimize harm for people suffering from diseases of the brain. More specifically, by responding to neural changes as they occur, rather than by following a predetermined program, CLDs can contribute to 'smart' DBS, where neurostimulation is tailored to activity in each patient's brain and minimizes the risk of neurological and psychological sequelae (Grahn et al 2014).
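The contrast between open- and closed-loop operation can be sketched abstractly. In this toy simulation (the biomarker trace, target level and gain are invented for illustration), an open-loop device delivers a fixed output regardless of measured circuit activity, while a closed-loop device scales its output to the deviation of a recorded biomarker from a target level:

```python
def open_loop(biomarker_readings, fixed_output=1.0):
    """Open-loop: the same stimulation regardless of neural feedback."""
    return [fixed_output for _ in biomarker_readings]

def closed_loop(biomarker_readings, target=0.5, gain=0.8):
    """Closed-loop: stimulation tracks the deviation of a measured
    biomarker (e.g. pathological oscillatory power) from a target,
    and falls to zero when activity is at or below that target."""
    return [max(0.0, gain * (reading - target)) for reading in biomarker_readings]

# Simulated biomarker trace: activity drifts above and below the target.
readings = [0.9, 0.7, 0.5, 0.3, 0.8]
print(open_loop(readings))    # constant output: risks over- and understimulation
print(closed_loop(readings))  # output adapts to the measured neural state
```

The open-loop output illustrates the problem the text describes: the device keeps stimulating even when the measured activity no longer calls for it, whereas the closed-loop output adjusts in real time.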

In contrast to the self-adjusting function of CLDs, Grahn and co-authors point out a major shortcoming of OLDs: 'although many DBS patients require minimal stimulation adjustment following surgery, many more require several months of regular parameter adjustment before optimal therapeutic results can be achieved' (Grahn et al 2014, p 3). The burden on patients and research subjects of having to return to a clinic or research site every few months for parameter adjustment could contribute to non-adherence to device monitoring that could undermine its therapeutic effect. Focusing mainly on movement disorders, these authors point out that 'existing clinical programming and stimulation paradigms are poorly suited to cope with the dynamic and comorbid nature of most neurological disorders. This, in turn, highlights the need for dynamic feedback systems that can continually and automatically adjust stimulation parameters in response to changes within the environment of the brain' (Grahn et al 2014, p 2). The need is more pressing in psychiatric disorders, where the effects of circuit dysregulation on cognition, mood and motivation are often not immediate but delayed. An extended period of dysregulation due to imperfections in OLDs and/or patient non-adherence may allow a return of symptoms and possibly result in permanent deleterious changes in neural circuits. All of these considerations underscore the limitations of OLDs and provide medical and ethical reasons for replacing them with CLDs.

Automatic parameter adjustment in CLDs would not rule out the possibility of expanding effects of stimulation on non-targeted normal circuits. These effects might not be attributable to stimulation as such but to the proximity of circuits and interconnectedness of pathways in the brain. Because of this proximity and difficulty segregating inputs and outputs between circuits, stimulating one circuit may inadvertently stimulate another. This would not occur in normal brain function when no such intervention was necessary. In this regard, CLDs might not be any better than OLDs in preventing these effects. Even if CLDs turn out to be functionally superior to OLDs in restoring physiological oscillations in circuits affected by pathology, damage in these circuits from advanced disease may entail the irreversible loss of highly specialized information processing in them. They may fail to respond to any form of stimulation. In that case, CLDs might not be therapeutically superior to OLDs either.

Yet by allowing feedback from neural outputs to stimulation inputs, a closed-loop neurostimulating system may restore a relatively stable equilibrium in the brain by modulating network oscillations when they have become dysregulated from malfunctioning neural feedback mechanisms. In a healthy and stable neural environment, cortical and subcortical circuits are neither constantly overactive nor underactive. This balance is disrupted in neurological and psychiatric disorders due to a complex combination of genetics, environmental factors, and neurodevelopmental and neurodegenerative processes. Overall, CLDs are more likely to restore normal neural activity and promote equilibrium among circuits and pathways in the neural environment than OLDs. These and other neuroprosthetics that automatically detect disease states and respond with appropriate counter-measures are better tailored to the individual patient and more likely to improve her welfare by producing greater clinical benefit and fewer side effects.

DBS: informed consent

One of the most controversial ethical questions in the psychiatric and bioethics literature is whether patients with treatment-resistant neurological and psychiatric disorders have the mental capacity to give informed consent to receive DBS as therapy or in a research protocol. The question is whether they have the cognitive and affective capacity to understand how the technique might alter their brain and make a voluntary decision to undergo or forego it (Appelbaum et al 1999, Appelbaum 2007, Beauchamp and Childress 2012, ch 4). This seems especially problematic in psychiatric disorders, where affective and cognitive impairments are the primary symptoms. Informed consent to undergo DBS for psychiatric disorders raises ethical questions that are more fraught than those raised by this and other brain implants for other disorders. The cortical–limbic circuits that are the targets of the technique are also the source of the cognitive and affective capacities necessary for consent. Yet these circuits are dysfunctional and these capacities may be impaired in psychiatric disorders (Rabins et al 2009, Glannon 2010). This problem may also arise in movement disorders such as Parkinson's. Yet while cognitive impairment may be a symptom of PD, this is typically secondary to motor dysfunction and does not manifest itself in all patients when DBS is being considered. Psychiatric conditions such as depression and OCD may interfere with the mental capacities necessary for reasoning and decision-making. They may interfere with a patient's ability to process information about the potential benefits and risks of this intervention in the brain and interfere with their decisional capacity. Because of its experimental status for these disorders, this question arises primarily in the research context of a clinical trial.

Any mental impairment caused by these disorders comes in degrees. Being affectively and cognitively impaired does not necessarily undermine the capacity to rationally process information about DBS and make a deliberative decision about whether or not to have it. Cognitive, affective and volitional capacities may be compromised to varying degrees in psychiatric disorders, and these capacities may be dissociable to some extent. A patient with the subtype of depression involving anhedonia and avolition may lack the motivation to actively seek out a DBS clinical trial testing the technique for this condition. But this does not mean that he lacks the cognitive capacity to consider the reasons for participating in a trial if recruited by a researcher. In proven or unproven applications of DBS, the disease itself would not necessarily preclude the capacity to rationally and freely decide whether or not to have a stimulating system implanted in one's brain. In conditions that are refractory to other interventions, this would include a comparative assessment of the probability of adverse neurological and psychological events with DBS against the actual continuation of mental suffering and possible suicide without DBS. The risk of suicide and the actual unremitting harm from a refractory condition can justify first-in-human and proof-of-principle trials, as well as more advanced DBS trials, even when they entail some risk.

Still, the severity, chronicity and treatment-resistant nature of some psychiatric and neurological disorders may interfere with an assessment of risk in DBS by causing desperation for symptom relief. Some patients with severe depression may have a disregard for their well-being and fail to adequately appreciate or even ignore the risks. They may perceive it as a treatment of last resort and have unrealistic expectations about it completely relieving their symptoms. These expectations may be more likely in DBS than in pharmacological or psychological therapies because patients may believe that stimulating neural circuits will get at the 'root' of the problem (Dunn et al 2011, Lipsman et al 2012). In addition to unduly influencing their perception of risk in an experimental procedure, these beliefs could set up patients for psychological harm over and above what they have already experienced from the disorder if the outcome of stimulation fails to meet unreasonable expectations. In these respects, the nature of the disorder could make psychiatric patients vulnerable research subjects. Some form of third-party involvement in the consent process may be required when a patient's decisional capacity and psychological preparedness for the procedure are in question. A clinical psychologist or psychiatrist not directly involved in the patient's care or the research protocol could confirm that the patient had sufficient mental capacity to make a rational and voluntary decision to agree or decline to participate in DBS research or treatment. The third party could use MacCat-T or MacCat-CR assessment tools for this purpose (Grisso et al 1997, Lipsman et al 2012).

DBS: agency, autonomy, identity

At a deeper level, the idea that a device can control a person's behavior completely outside of his conscious awareness suggests that he is not the author or source of his mental states and actions. This issue seems more pertinent to CLDs than OLDs because the former provide a greater degree of mechanistic and automatic control of brain processes than the latter. Who or what is the agent behind one's behavior? Are our actions really our own? Or are they entirely the products of neural circuits and devices operating to ensure that these circuits function properly? Does the ability of DBS to probe and modulate neural circuits and alter the mental states they mediate show that we are nothing more than our brains? It is to these questions about agency, autonomy and identity that I now turn.

Agency consists in the executive ability to translate mental states such as desires, beliefs and intentions into actions. This ability has sensorimotor, cognitive, affective and volitional components. One or more of these components is impaired in different neurological and psychiatric disorders. In PD, dysfunction in the basal ganglia can impair the motor capacity to perform voluntary bodily movements. In generalized anxiety disorder and MDD, dysfunction in cortical–limbic pathways can impair the cognitive, affective and volitional capacity to form and carry out action plans. These and other conditions can limit agency by limiting these capacities. DBS can restore some degree of agency by modulating neural circuits implicated in these disorders, improving the relevant capacities and thereby increasing control of thought and behavior.

Having voluntary control of one's behavior presupposes that conscious mental processes have some causal role in this behavior. The fact that a DBS device operates outside of the subject's awareness and without any apparent conscious contribution from the subject seems to threaten this control. It is not enough for persons to be agents to control their behavior. They must be autonomous, self-determining, agents. Yet the function of these devices appears to undermine autonomous agency, with the person's actions traceable to an artificial source. It seems that the device rather than the person is in control of her behavior (Klaming and Haselager 2013).

Autonomy consists of two general capacities: competency and authenticity (Frankfurt 1988a, 1988b, 1988c, Dworkin 1988, Taylor 1991, Mele 1995). The first involves the cognitive and affective capacity to critically reflect on the mental states that issue in one's actions. The second involves the cognitive and affective capacity to identify with or endorse these mental states following a period of critical reflection. The process of reflecting on and identifying with one's mental states and actions is what makes them one's own. Mental states with which one does not identify or endorse may be considered 'alien' to the agent. Through reflection and identification, an autonomous agent is able to regulate the motor and mental springs of her actions. Autonomy is defined in terms of this self-regulating process and being in control of one's behavior. Some philosophers take agency to be synonymous with having a will and autonomous agency to be synonymous with having free will (Frankfurt 1988a, 1988b, 1988c, Mele 1995, 2009, Schermer 2015). The will consists of motor, cognitive, affective and volitional capacities. The will is free when an agent is able to translate these capacities into desired actions. Other philosophers assume that autonomy includes a person's values and interests and thus has a broader scope than free will, which is typically analyzed in terms of how mental states lead to actions at specific times (Dworkin 1988, Wolf 1990). Yet both autonomy and free will imply self-determination, that one's actions do not result from internal or external compulsion, coercion or constraint and that the agent has a causal role in performing voluntary actions (Mele 2014, Dennett 2015). In these respects, I use 'autonomy' and 'free will' interchangeably.

There are limits to the reflective capacity associated with autonomy and free will, however, which can become pathological beyond a certain level. To illustrate, people with OCD engage in excessive conscious deliberation about how to act. Too much conscious reflection interferes with unconscious processes that enable one to perform a range of cognitive tasks and motor skills. Autonomy requires a certain degree of reflection on the reasons or motivation for action. Yet OCD shows that too much reflection can undermine rather than promote or sustain free will (de Haan et al 2015). Autonomous behavior must be automatic to some extent. It requires a balance between deliberative and automatic processes mediated by interacting frontal–limbic–striatal pathways. Dysregulation in these pathways prevents people with OCD from performing basic actions they would ordinarily perform as a matter of course. The pathological need for control is symptomatic of a loss of control and a form of mental paralysis.

The fact that DBS operates outside of a person's awareness does not undermine but instead supports behavior control and free will by modulating dysregulated neural circuits that generate and sustain thought and action. Electrical stimulation of circuits in the basal ganglia in PD and circuits in frontal–limbic–striatal pathways in MDD and OCD can restore the phenomenology or feeling of being in control of motor and mental functions (Melloni et al 2012, Figee et al 2013). The subject's implicit knowledge that electrodes are implanted and activated in the brain does not figure in the explicit content of her awareness. The device does not interfere with but enables the formation and translation of conscious intentions in promoting effective decision-making. Most normal brain processes are not transparent to us. We have no direct access to our efferent system, for example, and only experience the sensorimotor consequences of our unconscious motor plans. These plans are carried out without having to think about them. It does not matter whether these consequences are produced by a natural or artificial system. Provided that a prosthetic device such as DBS connects in the right way with the neural inputs and outputs that regulate behavior, it allows the subject to initiate and execute action plans and thus promotes free will. Insofar as the device ensures that the subject has the motor and mental capacities necessary to perform actions he wants to perform, he can identify it as his own, as an expanded feature of his brain and mind.

So, having a DBS system implanted in one's brain does not mean that one is not an autonomous agent or that one's will is compromised. It is the brain disease rather than the technique that impairs or undermines autonomy, or free will. All behavior is regulated by a balance of interacting conscious and unconscious mental and neural processes. When operating safely and effectively, DBS regulates neural functions and mental and physical capacities impaired by diseases of the brain. Being a 'passive recipient' of the effects of DBS does not imply that the subject has no control of his behavior. Although it operates outside of conscious awareness, the device does not replace him as the source or author of his actions but enables voluntary and effective agency as long as it restores some degree of functional integrity of the neural circuits that mediate the relevant capacities and does not influence anything else (Lipsman and Glannon 2013). DBS can improve decision-making by reducing the subject's cognitive load in combining its neuromodulatory action with the endogenous action of functionally intact neural circuits. Nor does the subject with a CLD implant have to bear the burden of regular clinical appointments for device monitoring, as with an OLD. DBS can liberate the subject from constraints imposed on mental and physical capacities and restore and maintain autonomous agency. The shared behavior control between the conscious subject and the artificial device is not fundamentally different from the shared behavior control between the conscious subject and naturally occurring unconscious processes in her normally functioning brain.

Another significant factor is that a patient with a DBS implant can turn off the stimulator. Among other reasons, this is necessary to pass through airport security without triggering alarms. This supports the idea of having control over the device and one's behavior and thus having free will. Crucially, this presupposes that one knows when to turn the stimulator back on and that doing this is necessary to restore modulation of motor, affective or volitional functions. Retaining this cognitive capacity while the stimulator was off would be necessary for one to retain control through a period in which there were no active neuromodulating effects from the device. This is another aspect of the shared behavior control between the subject and the device.

Personal identity is defined in terms of the connectedness and continuity of psychological states necessary for a person to persist through time as the same individual (Parfit 1984, Part III, Schechtman 1997). Connectedness and continuity provide the integrity and unity of these states implied by the notion of persistence. As with the concern about autonomous agency, if a mechanical device in the brain restores and sustains the critical links between psychological states that define the person, then some might ask whether this device undermines the psychological sense of identity.

When the stimulating system modulates neural circuits, a subject need not perceive it as something that undermines his identity. Again, there is shared behavior control between the subject and the device. The stimulator is an enabling device that compensates for impaired mental or motor functions while complementing functions that are intact. It does not supplant these functions but supplements them. The device becomes integrated into the subject's brain and mind. It can be perceived as a form of expanded or extended embodiment that becomes part of his identity.

Some patients may feel uneasy about continued dependence on a mechanical device to maintain normal neural and mental processes. Yet there are no rational grounds for this attitude if the device produces a therapeutic response. If neurostimulation ameliorates a patient's symptoms and improves her well-being, then it would be mistaken to describe its therapeutic effects as part of a dependence relation, at least not in the sense of an addiction. DBS aims to resolve rather than create pathology. What matters is that the technique does what it is intended to do, which is to modulate dysfunctional neural circuits. Any feeling of dependence on the device would not be a function of the device itself but of the patient's attitudes about it. Still, as Linden notes, the ability of DBS to provide continuous, self-adjusting long-term modulatory changes may 'cause nerve cells to change their spontaneous firing patterns by making different proteins. In this way, they can form cellular 'memories' of the stimulation. As a result, the chronic brain stimulation may become very much a part of the patient's normal neural network activity' (Linden 2014, p 111). This is not problematic for the psychological sense of identity if the device enables the patient to have the mental states he wants to have and restores the connections between these states that obtained before the disorder affected him. Symptom fluctuation and disruption in continuity of care from having to undergo periodic parameter adjustment or battery and lead replacement may cause the patient to become more aware of an OLD and its operational imperfections as a threat to his identity. Insofar as CLDs avoid these problems, the subject's identification with such a device at the neural level is consistent with his identification with the capacities it regulates at the mental level.

Adverse effects of neurostimulation can disrupt the unity and continuity of the psychological properties on the basis of which one experiences oneself persisting through space and time. But a properly functioning device can resolve pathological states and restore the unity and continuity of the psychological properties that defined one's pre-morbid self. When one has experienced depression for many years, one may gradually come to identify with the symptoms of the disorder. However, in this and other psychiatric disorders, most patients want to rid themselves of these symptoms and reclaim the phenomenology and content of the mental states they had before the onset of the disorder (Witt et al 2013, Glannon 2014a). This can motivate patients to seek treatment and adhere to a treatment regimen. Stimulation sequelae such as hypomania or suicidal ideation could preclude insight into the disorder and disrupt identity as much as the dysregulated circuits themselves. These effects would be equally incongruent with the psychological properties a healthy and rational individual would repeatedly endorse. This underscores the need for careful use of neurostimulation in targeting the right circuits with the right frequency, intensity and pulse duration to restore optimal levels of mental and physical functions.

The sometimes rapid and substantial changes in personality and other forms of behavior from therapeutic DBS can result in a difficult period of adjustment for both patients and caregivers. The changes may not be as radical as they are in auditory and visual prosthetics, which as noted can cause a profound change in one's perception and conception of the world. Yet the alterations caused by DBS can still present psychological challenges. Patients with motor or mood disorders and those around them may have become accustomed to a characteristic pattern of symptoms, which are then altered. It is not improvement in motor or cognitive and affective functions as such that is the issue but the emotional response of patients and caregivers to the changes in the patients' behavior. One's relational identity to others and how it shapes interaction with them may also change in undesired ways (Baylis 2014). But provided that the changes are salutary and improve the patient's quality of life, it clearly would be preferable to adjust to what is essentially a return of one's real self than to continue dealing with the greater challenges associated with living with a mentally and physically disabling disease. When the goal of neuromodulation is to alter one's affective states, it may be difficult to differentiate desired changes in the psyche from undesired ones and predict how these changes might alter identity (Christen et al 2012). But potentially adverse changes in identity have to be weighed against the main goal of therapy, which is to resolve a pathological condition and relieve suffering.

Claims about the value of neurodiversity notwithstanding, most people with neuropsychiatric disorders would prefer not to have them and are clearly better off without them. Addressing the question of whether DBS would alter the personality of one of his patients with OCD, psychiatrist Schormann commented: 'patients do not see their obsessions as part of their personality. They see them as something imposed on them, that they yearn to be rid of' (cited in Abbott 2005, p 19). The personality with which most patients would identify is associated with healthy rather than pathological mental states.

A different patient interviewed by her psychiatrist following successful application of DBS for OCD noted that she would have committed suicide if her symptoms had continued. She also noted that 'after surgery, there was a major change in my life. I can enjoy life again, which was impossible before. My compulsions and obsessions are greatly reduced' (cited in Merkel et al 2007, p 181). In response to the question of whether an artificial stimulating device in her brain caused an ethical problem about it ruling her life, she responded: 'I do not see the ethical problem. It makes me feel better and I like to feel better. If this is an ethical problem, then any medical treatment to improve the condition of a patient is an ethical problem' (Merkel et al 2007, p 181).

Consistent with the idea of neuromodulation, there are limits to the extent to which DBS can improve motivation and mood. A patient with generalized anxiety and OCD who became less anxious and experienced improved mood after DBS asked his psychiatrist to increase the voltage of the stimulator so that he could feel even better. This caused him to feel 'unrealistically good' and 'overwhelmed by a feeling of happiness and ease' (Synofzik et al 2012, p 32). Yet he retained insight into his condition and the possibility of losing control of his behavior, as expressed in his comment about fearing that his euphoria would 'tilt over' and that his anxiety would return. Accordingly, he agreed to have the voltage reduced. In cases where patients can control the voltage and intensity of the electrical current on their own, they would have to retain the capacity to know how to maintain optimal levels of mood and motivation within therapeutic stimulation parameters. These points cast doubt on the idea of neuroenhancement. They suggest that neurostimulation benefits patients as therapy to treat pathologies due to dysregulated neural circuits but may not benefit and may even harm those with optimal levels of neural and mental functions when it is used to raise them above these levels.

There is no definitive evidence that DBS can reverse the pathophysiology and progression of neurodegenerative diseases. This, more so than symptom control, would be the ideal outcome of both open- and closed-loop systems. Results of some studies using DBS for PD indicate that initiating the technique in younger patients at an earlier stage of the disease produces greater benefit than it does for older patients at a later stage (Schuepbach et al 2013, Woopen et al 2013, Abramowicz et al 2014). Replication of these results could in principle generate medical and ethical reasons for administering DBS for PD soon after the onset of symptoms and before extensive neurodegeneration has occurred. Administering DBS before an advanced disease state and in younger brains could promote neuroplasticity and the release of trophic and other factors inducing neurogenesis. If earlier application of DBS had this effect without increasing the risk of neurological and psychological sequelae, then there would be not just an ethical justification but an ethical obligation for practitioners to use the technique at the first indication of pathogenesis. The therapeutic effects for those who have been harmed by these diseases could be substantial (Clausen 2010, Beauchamp and Childress 2012, chs 5 and 6). It is questionable whether the results of the early neurostimulation studies for PD could be replicated for seizure disorders, which have a distinct pathophysiology. In theory, though, earlier stimulation might have neuroplastic and neurogenerative effects in the brains of people with MDD. Chronic depression can cause neuronal degeneration in prefrontal networks (Charney et al 2013, chs 37 and 43). Although it is speculative, electrical stimulation of circuits in the basal ganglia that project to these networks might have neurogenerative effects on them.

Brain–computer interfaces (BCIs)

BCIs, or brain–machine interfaces (BMIs), involve real-time direct connections between the brain and a signal processing algorithm in a computer (Lebedev and Nicolelis 2006, Wolpaw and Wolpaw 2012, Lebedev 2014). Bidirectional feedback between the user and the system produces physical changes in the brain. These changes can restore some degree of motor and possibly communicative control for people who have lost limbs, are extensively paralyzed or are significantly neurologically compromised. In these respects, a BCI can enable a person with a severe brain injury to regain some degree of agency and free will. By providing the subject with the relevant feedback, the system may enable her to translate her wishes and intentions into actions despite the inability to perform voluntary bodily movements. Because of their connections with neural signals in motor areas, computerized prosthetics can overcome the inherent limitations of non-computerized prosthetics in restoring some degree of motor control.

There are two types of feedback with a BCI. The first concerns feedback about the outcome of a self-initiated, system-mediated action, such as moving a computer cursor or robotic arm. It provides only indirect feedback about brain activity. The second type concerns direct feedback about the level of brain activity itself. The first is more pertinent to the potential to restore behavior control in the sense that the subject can perceive the success or failure of her mental acts of forming and executing intentions to perform certain movements. An EEG- or fMRI-based BCI might also enable minimally conscious individuals or those with complete locked-in syndrome (LIS) to communicate their wishes about medical treatment when they are unable to do this verbally or gesturally.

The use of interface technology for motor control and communication raises a number of ethical issues, four of which have been discussed in the literature and which I will address here (Clausen 2008, 2014, Tamburrini 2009, Hochberg and Cochrane 2013, Donatella and Tamburrini 2014, McCullagh et al 2014, Soekadar and Birbaumer 2014). First, the different types of electrodes used to detect and respond to neural signals necessary for forming and executing action plans involve different levels of invasiveness and different benefit-risk ratios for the subject operating the BCI. Second, researchers can use a computer algorithm that decodes neural EEG signals to predict a movement. Yet if the movement is predictable, and autonomy or free will is incompatible with the predictability of actions, then these subjects may not be acting freely. Moreover, if it is the BCI system rather than the subject that produces the movement, then this too generates doubts about his own ability to perform intentional actions and control his behavior. Third, the expectations of some patients and their caregivers about how a BCI enables movement might not be reasonable given the cognitive challenges in operating the system. This could result in psychological harm if a subject fails to produce desired actions through the interface. Fourth, the use of a BCI for communication in neurologically compromised patients prompts the question of whether their responses would be evidence of the capacity to make informed and deliberative decisions about medical care. This is especially significant in cases of life-sustaining treatment.

BCIs: motor control

BCIs utilize wired or wireless systems to record signals from the electrical activity of neuronal ensembles in premotor, motor, supplementary motor and posterior parietal cortices and translate them into actions. These are the main brain regions mediating the formation and execution of action plans. One measure of potential salutary or deleterious effects on patients using BCIs is not so much the type used but the level of invasiveness. The distinction between wired and wireless systems regarding risk is orthogonal to this level. The less-invasive type consists of scalp-based electrodes attached to a cap that are part of the equipment required to record EEG. Because they do not involve intracranial surgery and implantation of a device in the brain, they do not entail a risk of infection, edema or hemorrhage. At the same time, though, they may not readily read signals from motor areas because the cranium may smear or deflect them. Also, less invasive systems may record neural signals from more distributed neural circuits mediating a wider range of functions and may not be sensitive enough to particular cortical signals to always produce the desired movements for motor control or the semantic processing for communication.

In electrocorticography (ECoG), electrodes are implanted epidurally or subdurally (Leuthardt et al 2004). They can decode motor cortical signals more readily than scalp-based electrodes because they are not susceptible to cranial smearing. Still, they entail some risk of infection or hemorrhage. Like the less invasive electrode cap system, both forms of ECoG BCIs impose constraints on the subject's movement from the wires running from the electrodes to the machine. Wireless systems consisting of a microelectrode array implanted in the motor cortex avoid this problem and are less burdensome for subjects. Because they can decode and transmit signals from this region to the system more directly, implanted arrays are more likely to facilitate the execution of the subject's intentions in actions. But this would depend on the specifics of the neurological deficit and the patient's ability to successfully manipulate the BCI. In addition to the risk of infection and hemorrhage, microelectrode arrays raise the issue of biocompatibility between the implanted objects and surrounding neural tissue. Activation of the electrodes may reorganize and induce changes in these circuits. These changes may be salutary, especially if they promote neuroplasticity and the generation of new axonal connections that could bypass or repair the site of brain injury causing loss of motor function. But they could also cause adverse changes in the adjacent tissue and result in neurological and psychological sequelae. A stable and effective array that could function for many years would be one in which the surrounding neuropil grew into the electrode. Such an array might allow myelinated axons to be recorded using implanted amplifiers (Kennedy et al 2011). This is speculative; but it points to some of the therapeutic possibilities of wireless systems.

Wireless systems could be vulnerable to interference from external sources. These could prevent them from functioning or cause them to function in ways that could harm those in whose brains they were implanted. Hackers could disrupt action-potential firing in the transmission of signals from motor and parietal cortices through the interface. There would also be privacy considerations regarding the potential for illicit access to information on the wireless device without the patient's consent. This is one example of possible external interference that could harm those with these systems in their brains by defeating their interest in restoring some degree of autonomous agency through a BCI. In these and other respects, a technology designed to help an individual regain some control of motor function could instead prevent him from regaining or cause him to lose it. Despite these concerns, more invasive BCIs in the brain overall may be functionally superior to and as safe as less invasive BCIs outside the brain. The first type seems to have a more favorable benefit-risk ratio than the second.

One of the most promising applications of BCIs has been the BrainGate 2 neural interface. This system can enable persons with amputated limbs or severe paralysis to control a prosthetic with their thoughts mediated by the computer algorithm. The system uses electrodes implanted in the motor cortex to record neural signals related to limb movement. The algorithm decodes the signals and translates them into moving an external prosthetic such as a robotic arm. In a study involving two volunteers with tetraplegia from brainstem strokes, the system enabled one of the volunteers to grab a foam ball with the arm with a fairly high rate of success (Hochberg et al 2006, 2012). Additional studies involving systems functionally similar to BrainGate have also shown that other individuals with tetraplegia can move prosthetic limbs through the interface (Collinger et al 2013, Gilja et al 2015). These are examples of how a BCI can bypass the site of brain injury to restore some degree of motor control. Researchers can predict the movement the patient wants to perform from the algorithm decoding the neural EEG signals corresponding to the initial urge to perform that movement. But this raises the question of who or what controls the process of forming and translating the urge to move into the actual movement. This is not an issue in normal motor skills, which are performed unconsciously. In contrast, moving an artificial limb requires some degree of conscious deliberation. Yet if the movement is predictable on the basis of neural signals alone, then it seems that the subject and his conscious mental states do not initiate the movement. This suggests that restoration of physical control has everything to do with the BCI and the signals it decodes and nothing to do with the subject. The subject would not be an autonomous agent, and the technique would not restore the loss of free will from the brain injury. 
As Clausen puts it, when behavior is caused by a brain-mind machine, 'who is responsible for involuntary acts?' (Clausen 2009, p 1080).

This skepticism about the role of conscious thought in BCI-mediated movement is similar in some respects to the conclusions drawn from experiments conducted in the 1980s by the neuroscientist Benjamin Libet and colleagues (Libet et al 1983, Libet 1985). Libet used EEG to detect and measure activity in motor, premotor, and prefrontal cortices and supplementary motor areas when subjects were asked to flex their fingers or wrists. These were a further development of similar experiments conducted by Kornhuber and Deecke in the 1960s (Kornhuber and Deecke 1965). Libet's experiments demonstrated that neural activity in the form of readiness potentials in motor regions preceded the subjects' conscious awareness of their intention to act by several hundred milliseconds. He claimed that the subjects could exercise some conscious control and 'veto' the intention to act, though he did not offer a satisfactory explanation for this. The results of Libet's experiments suggested the epiphenomenal view that conscious mental states are the effects of neural mechanisms but have no causal influence on these mechanisms, which presumably provide a complete account of our actions. This appears to rule out any positive sense of free will.

But just because conscious intentions or other mental states do not initiate the process of moving a prosthetic device does not mean that they have no causal role in this process. This does not imply that the action is involuntary or unfree. Even if signals in motor areas recorded through EEG or fMRI can predict the movement the subject wants to perform, they cannot predict whether the subject will actually perform it. Neural indices of an intention to move a prosthetic limb can explain the initiation of this process but not all of the events that determine a successful or failed outcome. Libet's experiments at most show that neural events in motor cortices are necessary, not sufficient, for the execution of intentions in actions (Mele 2009, 2014). They do not explain away free will for these subjects because they have some control of these actions through the exercise of their intact motor and mental capacities. Similarly, in BCI-mediated prosthetics, the subject's conscious intention to move the artificial limb may be preceded by unconscious neural signals associated with the urge to move it. But a complete account of the movement also involves the process of learning how to operate the interface from the trainer and translate this knowledge into action. Whether the subject succeeds or fails in this endeavor is not predictable from or reducible to neural signals alone. The mental act or process of making a conscious effort to move the arm may correlate with neural signals. But correlation is not causation, and thus identifying the neural correlates of the mental acts of intending to move a robotic hand or arm, trying to move and actually moving it with the aid of the brain implant does not imply that the implant and the neural signals it decodes cause the intention or its translation into the movement. This involves more than signals read by an fMRI- or EEG-based BCI. It also involves the mental states of conscious attention, patience, effort and resolve.

Not all events in the sequence of moving the robotic hand are predictable or completely controlled by the neural interface. The unconscious conditioning resulting from learning how to operate the system, and the conscious intention and effort of the subject, are also critical components of the sequence. In these respects, the subject has some degree of control over moving the robotic hand. Despite technological and operational differences between DBS and BCIs, the latter, like the former, is an enabling device. There is shared motor control between the subject and the system, which is meant to support rather than replace the subject's intact cognitive and motivational capacities. This shared control is enough for the subject to be an autonomous agent of at least some of his behavior. Provided that there is proprioceptive and somatosensory feedback from the robotic arm or hand to the brain, the subject might be able to identify with these prosthetic limbs and the neural prosthetic to which they are linked as a form of extended embodiment and part of his self (Gallagher 2005). Lebedev supports this idea in his point that 'studies suggest that BMI-controlled prosthetic limbs may become incorporated in the brain representation of the body' (Lebedev 2014, p 108). The subject can perceive her BCI-mediated action as her own. The question of what produced the movement should not be framed in dualistic terms as either the neural signal or the subject's mental states. Rather, it should be framed in unitary terms as requiring both neural and mental processes. This shows how the feedback provided by a BCI can benefit (or harm) the subject by allowing her to perceive the success (or failure) of her mental acts of intending and trying to move the robotic hand and the physical act of moving it. Failure to move the prosthetic could cause psychological harm by defeating her intention, expectation and effort to move it.
Any potential or actual harm would pertain not so much to the prosthetic arm or interface but more so to the mental states of the subject.

This does not resolve all questions about control, however. It is not clear how a subject who intends to perform a movement that is predictable on the basis of neural EEG signals could change her mind, cancel the intention and refrain from performing it. The BCI would have to override the signal correlating with the intention to act and record and respond to the signal correlating with the intention not to act, or an intention to perform a different action, within milliseconds. A BCI would have to enable the subject to both perform and refrain from performing certain movements by canceling an intention in order to ensure behavior control. This complication is often ignored by journalists and other media sources making the oversimplified claim that a subject can control a robotic arm 'just by thinking about the movement'. Insofar as implanted microelectrode arrays can decode and translate motor signals mediating intentions to act and refrain from acting more directly and rapidly than an electrode cap, the more invasive BCI would be more likely to promote this type of control than the less invasive form. Contrary to what the idea of a predictable movement might suggest, the system must not cause a subject to perform a movement she does not want to perform. A less invasive neural interface may be more feasible for persons who only need such a device to assist in motor tasks for a short period of time. But most of the patients who could benefit from BCIs have chronic, life-long conditions for which more invasive implanted devices would probably be more effective.

Two particularly positive developments in the area of computerized prosthetics have been the myoelectric arm and the related surgical procedure of targeted muscle reinnervation (TMR). The arm is controlled by electrical signals generated by contractions of muscles with implanted microprocessors (Kuiken et al 2009, Wodlinger et al 2014). In TMR, intact nerves from the residual part of an amputated limb are transplanted into pectoral, biceps or triceps muscles. The nerves gradually regenerate, and the patient is then fitted with the myoelectric arm, with the implanted nerves enhancing or amplifying its function. Electrodes on the skin transmit electrical signals from the reinnervated muscle to the arm, and the computer algorithm in the BCI translates the signals into the command to move the arm. The subject can perform a movement such as opening and closing the hand by forming and executing an intention to perform it. Another experiment has used electromyographic signals to move a prosthetic leg (Hargrove et al 2015).
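
The signal chain just described, in which skin electrodes pick up activity from the reinnervated muscle and an algorithm translates it into a movement command, can be illustrated with a deliberately simple sketch. Real myoelectric controllers use far richer features and pattern-recognition classifiers; the feature choice, threshold value and command names below are hypothetical.

```python
# Illustrative sketch of threshold-based myoelectric control.
# The threshold and command labels are assumptions for illustration only.

def mean_absolute_value(emg_window):
    """A common EMG amplitude feature: mean of absolute sample values."""
    return sum(abs(s) for s in emg_window) / len(emg_window)

def hand_command(emg_window, threshold=0.2):
    """Map muscle-contraction amplitude in one window to a hand command."""
    if mean_absolute_value(emg_window) > threshold:
        return "close_hand"   # strong contraction decoded as grasp
    return "open_hand"        # rest or weak activity decoded as release
```

Even this caricature makes the shared-control point visible: the command issued depends jointly on what the subject does with the reinnervated muscle and on how the algorithm is parameterized.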

But the question of how much control the subject has in moving a prosthetic device in BrainGate 2 can be raised in these applications as well. Is it the subject's conscious intention to move the arm, hand or leg that causes the movement? Or do the electrical signals from the reinnervated muscle combined with the computer algorithm cause it? If one offers the second response to the question, then there may be reasons for skepticism about how much control the subject has over the process. Yet here too framing and responding to the question of control in dualistic terms offers an inaccurate account of the process. Mental states such as the intention to move a myoelectric arm are generated and sustained by neural oscillations in motor areas. But the content of the intention and whether it is realized involve more than neural oscillations, electrical signals and the computer algorithm. As with other neuroprosthetics, there is shared control between the subject as a locus of mental states and the BCI-mediated arm or hand. Both the subject and the system play necessary causal roles in moving it. Neither can achieve this goal alone. The subject needs the BCI to decode and translate neural signals into opening or closing the arm or hand. At the same time, the subject's conscious intention, effort and ability to learn how to operate the system are also critical in forming an action plan and determining whether it succeeds or fails.

One important difference between a BCI and DBS is that the success of the first technique in performing a mental or physical act depends on unconscious conditioning from learning how to operate the interface and conscious effort in forming and executing the intention to act through it. All of these mental acts are mediated by neural processes. Yet the content of these mental states and acts, the movements they aim to produce, is external to the brain. Nor can the phenomenology of what it is like for the subject to try and succeed in moving the prosthetic limb be explained entirely in neural terms. DBS requires no such unconscious conditioning or conscious effort. The stimulator modulates the relevant neural circuits and the motor, cognitive, affective and volitional capacities they mediate outside the subject's conscious awareness. Normal thought and action are enabled by the automatic activity of the stimulator with no causal contribution from the subject. Motor impairment in tetraplegia is more severe than it is in a movement disorder such as PD, and because of this the range of motor functions that a BCI allows for patients with the first condition is more limited than what DBS allows for patients with the second. But a BCI can benefit a paralyzed patient to some degree by making him feel that he has regained some control of his behavior. This is due to the feedback from the BCI to the subject. The interface is necessary to execute intentions as desired movements. But through the bidirectional activity between his brain-mind and the computer, the subject can appropriately perceive himself and his mental states as having a causal role in initiating and completing the process resulting in these movements.

Nevertheless, as noted, the expectation of moving a computer cursor or prosthetic arm has the potential to cause psychological harm to the subject when failing to move it defeats that expectation (Glannon 2014b). The harm would be a function of the valence of the expectation and failure influenced by the subject's persistent inability to execute motor plans. While the prospect of translating thoughts into actions may initially provide the subject with the belief that he can regain some control of behavior, it may also put a psychological burden on him. The emotional and cognitive load in having expectations and learning how to manipulate the interface may present challenges that some subjects may not be able to meet. Unlike healthy subjects who can perform many motor tasks unconsciously and automatically, for paralyzed subjects performing these tasks through a BCI requires sustained conscious motivation, deliberation, attention and mental effort. Unconscious functions that ordinarily would do some of the work in a cognitive and sensorimotor division of labor in forming and executing action plans are not available to them. Planning is a critical component in moving a prosthetic limb. The subject must indicate with his brain and mind how the limb should move before executing the intention to move it. Subjects have to be trained to perform these neural and mental acts. Many days of training are necessary to acquire the ability to control an EEG-based BCI (Wolpaw et al 2002), though there is considerable variation among subjects in the time required to do this (Wolpaw and McFarland 2004). Some may be more or less capable than others in this regard. Brain injury may result in cognitive impairment that would preclude these capacities. This impairment may also prevent subjects from meeting criteria of informed consent, excluding them from research on or clinical applications of BCIs.

Successfully operating a BCI is not a simple process. Indeed, BCI illiteracy has been observed in up to 30% of healthy subjects (Birbaumer et al 2014). One explanation for the failure of medical researchers and practitioners to train subjects with extensive brain or spinal cord injuries to use a BCI is that the subject's feeling of a complete loss of control from paralysis or limb loss undermines the motivational basis to be conditioned to manipulate the algorithm or sustain the cognitive effort necessary to do this (Birbaumer et al 2008, 2014, Linden 2014, p 22). They may experience not only physical fatigue but also mental fatigue from repeatedly trying and failing to translate electrical neural signals into the desired movement. These individuals have a severely impaired volitional component of the will. This highlights the critical role of the trainer in conditioning subjects to perform this task. Failure to perform it could cause anxiety and frustration and generate the feeling of losing control. This experience could negatively influence somatosensory and proprioceptive feedback from the prosthesis to the brain-mind and change the subject's attitude toward the prosthetic and the microelectrode array implanted in his brain or electrodes on his scalp as forms of extended embodiment. Instead, he might perceive them as foreign objects that interfere with rather than promote motor control. Ideally, psychological assessment of patients could differentiate those who have or lack the requisite cognitive and volitional capacities to successfully operate a BCI and select candidates for it on this basis. This would be one way of maximizing benefit and minimizing harm in this group of patients. Still, it may be difficult to predict with a high level of certainty which patients would or would not be able to do this.

BCIs: communication

EEG- and fMRI-based BCIs might enable individuals to reliably communicate when they are unable to communicate verbally or gesturally. This technique could be especially beneficial to two distinct patient groups: minimally conscious patients and completely locked-in patients. Many people experiencing traumatic brain injury or anoxia become comatose. Depending on the extent of injury and whether axonal damage is more localized or diffuse, some of these patients can recover physical and cognitive functions, though others die. Some progress to a persistent, then permanent vegetative state (PVS), where they have sleep-wake cycles but no awareness of self or their surroundings. Others progress to a minimally conscious state (MCS), where they retain enough thalamic-cortical connections to sustain some degree of awareness (Giacino et al 2002, Schiff et al 2007, Shah and Schiff 2010). Most of these patients are significantly cognitively and physically disabled. While they fall along a spectrum involving varying degrees of awareness, there is no definitive evidence that any of them are able to clearly express complex thoughts or wishes. The only documented means of expression has been indirect. It has been inferred from observed changes in neural activity in motor and adjacent cortical regions in response to commands about imagining certain movements (Owen et al 2006, Monti et al 2010).

People with locked-in syndrome (LIS) are fully conscious despite being almost completely paralyzed from lesions in the ventral pons of the brainstem resulting from trauma, extensive demyelination or stroke. Patients with amyotrophic lateral sclerosis (ALS) eventually become locked-in as well from degeneration of motor neurons. Some of these patients develop a communication system by blinking an eyelid in response to questions referring to letters of the alphabet. Jean-Dominique Bauby authored his 1997 book The Diving Bell and the Butterfly by this method. Others with this condition are deprived of even this minimal voluntary movement and are unable to express their thoughts in any way. They are completely locked-in. In 1999, Birbaumer and colleagues reported on the first two LIS patients who demonstrated some capacity to respond to messages through an EEG-based BCI (Birbaumer et al 1999, Birbaumer and Cohen 2007). The significance of this study is underscored by the fact that these patients indicated that the ability to communicate was the most important factor in their quality of life.

Conscious perception and expression of intentions in locked-in patients differs from that in minimally conscious patients. This difference may facilitate more effective communication through a BCI for the first group because they are fully conscious and can retain cognitive and affective capacities. A major challenge in using the technique for this purpose is that BCIs typically utilize visual feedback, and patients in the MCS and complete LIS have limited or no capacity to receive feedback from and respond to a visual stimulus in learning how to operate the system. Alternatively, tactile or auditory feedback combined with semantic processing recorded in cortical regions could be used to enable communication. But auditory feedback in a BCI requires more training than visual feedback (Kubler 2009). Even if these modalities could overcome the limitations associated with a lack of visual feedback, questions would remain about the meaning of 'communication.' Specifically, it is not clear whether the responses of linguistically impaired minimally conscious or even fully conscious locked-in patients could be evidence of the cognitive and emotional capacity necessary to deliberate about and give informed consent to continue or discontinue life-sustaining artificial hydration and nutrition as well as mechanical ventilation (in ALS). Brain injuries can impair the requisite cognitive capacity for consent even when a person is fully aware. Moreover, although ALS is a neurodegenerative condition rather than the result of brain injury, in some patients the disease causes cognitive impairment and the loss of decisional capacity (Phukan et al 2007).

Some investigators have claimed that fMRI- and EEG-guided BCIs involving brain implants or scalp-based electrodes could enable minimally conscious patients with a high level of cognitive function to make these decisions. But decisions about life-sustaining treatment are often emotionally laden and reflect a person's interests and values. It is doubtful that these attitudes can be expressed by simple 'Yes' or 'No' responses to questions, which to date is the extent of what these systems allow (Monti et al 2010). Unambiguous expression of a person's attitudes about quality of life would have to be included in any robust sense of communication, especially regarding life and death. This involves more than being aware, even fully aware. A spelling device involving letter selection using cursors and based on slow cortical potentials in an EEG-based BCI has been one application of the technology (Birbaumer et al 1999). More sophisticated interface systems facilitating the expression of complex semantic processing may or may not confirm that a patient has the critical capacity. In contrast to the seemingly promising results from recent studies by Birbaumer et al (2008, 2014), Hochberg and Cudkowicz claim that 'though both scalp-based EEG and corticography signals have been recorded in people with ALS with total LIS, we are aware of no reports of restoring communication using a neural signal-based BCI in this most severely affected population' (Hochberg and Cudkowicz 2014, p 1852).
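
The letter-selection scheme behind such a spelling device can be sketched abstractly: the alphabet is repeatedly halved, and each binary brain response (for example, a self-regulated slow cortical potential shift crossing a threshold) selects one half until a single letter remains. The sketch below is a schematic illustration of that logic only, with the decoded yes/no response stood in for by a callback; it is not the implementation used in the cited studies.

```python
# Schematic sketch of binary letter selection in a speller.
# select_first_half is a hypothetical stand-in for the subject's
# decoded binary brain response at each split.

def select_letter(alphabet, select_first_half):
    """Narrow the alphabet to one letter via successive binary choices."""
    letters = list(alphabet)
    while len(letters) > 1:
        mid = len(letters) // 2
        if select_first_half(letters[:mid]):
            letters = letters[:mid]   # subject affirms the first half
        else:
            letters = letters[mid:]   # otherwise keep the second half
    return letters[0]
```

Selecting one letter from a 26-letter alphabet requires five binary responses on this scheme, which helps explain why communication through such devices is slow and cognitively demanding.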

The potential of researchers to use BCIs to enable and confirm communicative capacity in MCS and LIS patients follows from studies by Owen and colleagues using fMRI to show that patients diagnosed as persistently or permanently vegetative were in fact aware and thus at least minimally conscious (Owen et al 2006, Owen and Coleman 2008). When asked to perform mental imagery tasks such as playing tennis or walking through her home, a 23-year-old patient with a severe brain injury from a road traffic accident who had been misdiagnosed as unresponsive showed activation in cortical regions mediating these tasks that was indistinguishable from that of healthy volunteers. In addition, speech-specific activation was observed bilaterally in the middle and superior temporal gyri equivalent to that of subjects without brain injury. Equally significant, a different study involving 5 MCS patients, 15 healthy controls and 15 patients in a PVS demonstrated that a group of cortical regions known as the pain matrix was activated in response to noxious stimulation. There was greater activation in this matrix among the MCS patients than among the PVS patients (Boly et al 2008, Demertzi et al 2013). The study indicated that MCS patients can perceive pain and need analgesic treatment. As with inferences about awareness, imaging at most shows only correlations between brain activity and pain perception. Imaging studies alone could only allow observers to indirectly infer pain perception from cortical activation patterns. Without an explicit report from the patient, these measures of consciousness and pain are limited. BCI-mediated communication could directly confirm that a patient was in pain. Following such a report, analgesia to manage pain and relieve suffering could influence a patient's attitudes and desires and would be a critical factor in any communicated decision about life-sustaining treatment.

Fernandez-Espejo and Owen acknowledge that with current interface technology, simple affirmative or negative responses by minimally conscious patients to questions about whether they wanted to continue living would not be sufficient to establish that the patient had the 'cognitive and emotional capacity to make such a complex decision'. Yet they also say that 'it is only a matter of time before all of these obstacles are overcome' (Fernandez-Espejo and Owen 2013, p 808). Their second point may be overly optimistic. Advanced BCIs that could detect neural activity correlating with complex semantic processing alone may not be sufficient to show that the subject had the cognitive and emotional capacity to make an informed and autonomous decision about life-sustaining treatment. Some form of behavioral interaction between the subject and the treating team may also be necessary to confirm that he had this capacity (Jox 2013).

Medical professionals and caregivers must be cautious not to read too much into BCI-mediated responses and interpret them as having a meaning they lack. By the same token, they must not be overly paternalistic in ignoring a meaning these responses might have. As in other cases of neurologically compromised patients, medical teams and families need to be open to the possibility of meaningful behavioral responses from patients attempting to communicate through the interface. If BCIs develop to the point where they can assist patients in expressing their wishes about medical treatment, there will be variation among patients in how clearly they express them. Birbaumer and colleagues have reported that a majority of locked-in patients have indicated through simple affirmative responses to questions that they have good quality of life and want to continue living (Birbaumer and Cohen 2007, Birbaumer et al 2008, 2014, Bruno et al 2011, Demertzi et al 2011). Some of this can be attributed to family and caregiver support. In an interview, Kerry Pink, who had been locked-in for 11 years following a brainstem stroke but was able to communicate, said that 'although my life is restricted—I cannot walk more than a few steps, I need help with personal care and am still prone to splitting headaches—I accept its constraints' (Daily Mail 2010, 5 August). However, not all locked-in or minimally conscious patients express the same positive attitude; some may fall into a suicidal depression secondary to the condition. For example, Tony Nicklinson, who was locked-in and paralyzed below the neck for 7 years following a stroke, asked the High Court of England and Wales to allow him to end his life with the help of a physician. The court rejected his request in August 2012, just six days before he died from pneumonia at age 58. Communicating via a computer, which he operated by eyelid movements, Nicklinson stated that he wanted to end his 'dull, miserable, demeaning, undignified and intolerable life' (The Guardian 2012, 22 August).

When they are not completely locked-in, some patients, if able, would state that the quality of their life was so poor that it should end. Others would state that it was good and would want to continue living (Demertzi et al 2011). These differences would obtain among patients with disorders of consciousness as well. Some patients gradually adjust to these conditions; others do not. Most people do not consider severe brain injury and resulting disorders of consciousness when formulating advance directives about end-of-life care. Even if they did, variability in attitudes about living with such a condition and uncertainty about whether one would or would not adjust to it show how limited advance directives are for people with these disorders and their proxy decision-makers. BCIs enabling reliable communication could help to clarify patients' wishes about continuing or ending their lives, wishes that currently are often unclear or ambiguous in this patient population. This could help to resolve conflicts between families and medical teams or courts in determining what was in a patient's best interests. In the 2011 case of Re M, for example, the English Court of Protection ruled that it would be unlawful to withdraw artificial nutrition and hydration from a woman who had been in an MCS for 8 years (W v. M. 2011). The Court rejected her family's claim that she had stated earlier that she would not want to remain in a condition in which she was completely dependent on others. This wish may arise and persist even in cases where analgesia can control any perception of pain. Suffering may be but is not necessarily linked to pain. The patient was considered to be at the higher end of the MCS spectrum, suggesting that she might have had sufficient cognitive capacity to give informed consent through a technologically sophisticated BCI.
If this technique had been available and enabled her to clearly communicate her wishes about continuing or ending her life, then it could have benefited her by helping to realize her interests and ensuring that other parties promoted rather than interfered with them. As Hochberg and Cudkowicz point out, technology that enables patient decision-making about life-sustaining interventions 'by proclamation rather than proxy will be revolutionary' (Hochberg and Cudkowicz 2014, p 1853). It could give these patients some measure of control of their lives (Monti 2013). Still, questions remain about whether the technology will develop to the point where this and other capacities can be confirmed. This is particularly the case in patients with intermittent periods and different levels of awareness.

Brain-to-brain interfaces (BTBIs)

One especially exciting result of innovation in neural engineering is the development of BTBIs. These systems involve two or more brains wired to a computer as part of a large-scale network of electrophysiological signals. An extension of Nicolelis' pioneering research in developing interface technology in nonhuman primates, these more complex interfaces have been used by one research group in developing models using implanted electrodes linking rat and primate brains to study how multiple neural connections in two or more brains can facilitate execution of coordinated motor and cognitive tasks (Nicolelis and Lebedev 2009, Pais-Vieira et al 2013, 2015). A second group used a non-invasive EEG-based BTBI to transfer information from human to rat brains, which was the first instance of interspecies brain interfacing (Yoo et al 2013).

Two additional experiments have used EEG to allow transmission of motor information between two human brains (Grau et al 2014, Rao et al 2014). These are simplistic proof-of-concept experiments of limited empirical value. Further development of the technology is necessary to assess its scientific and therapeutic potential. BTBIs could coordinate neural activity from multiple subjects and enable them to perform mental or physical actions they would not have been able to perform on their own. This could be one way of resolving the problem of mental fatigue from cognitive overload or cognitive impairment in individuals who fail to move robotic arms or other objects with a BCI. These more advanced systems might allow robotic arms to be moved for such complicated and precise tasks as long-distance surgery. They might also promote more effective problem-solving in mathematics by enabling more efficient neural and mental processing of large volumes of information. In these and other applications, subjects who receive feedback from other subjects' brains through the interface may have fewer cognitive lapses and enhanced attention resulting in improved behavioral performance on complex tasks (de Bettencourt et al 2015). Two (or more) brains may be better than one.

BTBIs consist of two functional components: extracting and delivering neural information (Trimper et al 2014). These functions may generate four ethically problematic scenarios: violation of neural privacy, interference with agency, disputed ownership of ideas and coercion. The potential harm from the use or misuse of information shared between and among brains warrants the formulation, implementation and enforcement of guidelines, policies and laws to protect subjects participating in experiments using this technology.

BTBIs could threaten neural and mental privacy by making subjects in experiments vulnerable to third-party access to information about their brains. Many argue that what lies within the skull is inviolable and as such should not be accessible to anyone except the subject and the medical professionals treating or studying them. Neural signatures of subjects could be gleaned by others from fMRI and electrophysiological recording with EEG, especially if the information was transferred over the Internet in a project involving two or more brains wired through the interface (Pais-Vieira 2013, Trimper et al 2014). The ethical concern is not only with violating neural privacy and confidentiality as such but also with the potential of employers and insurers to draw inferences from neural signatures associated with mental health risk to predict future behavior and discriminate against subjects on this basis.

As with BCIs, delivering or transmitting neural information over the Internet could enable hackers not only to extract information from one's brain but also to interfere in the transmission of signals from motor and parietal cortices to the interface and disrupt action plans (Denning et al 2009, Trimper et al 2014). This interference could occur with both implanted microelectrode arrays and non-invasive EEG-mediated devices. The BrainGate 2 device could be hacked and action plans it is meant to enable could be disrupted because it too would be connected to the Internet. Violating neural privacy and disrupting motor functions might not be life-threatening, as it can be in hacking cardiac pacemakers (Halperin et al 2008). But the harm to patients and research subjects from extracting and misusing neural information and interfering with interface-enabled agency could be substantial. Just as this type of interference could disrupt motor functions of individual subjects, it could also disrupt motor functions of multiple subjects linked by the interface and their capacity for coordinated action. Even without any external interference, the neural signals and corresponding mental states of two or more subjects would have to align in just the right way to allow the necessary coordination of neural and mental activity to execute a joint or collective intention. Any variation in neural and mental content between or among subjects could thwart the execution of such an intention and the realization of a mutually desired outcome. So, while BTBIs theoretically could enable the performance of complex collective motor and cognitive tasks, failure to coordinate all neural and mental events in the process of forming and executing action plans could defeat the completion of these plans.

The results of collective neural and mental action enabled by a BTBI presumably would involve shared ownership of these results among all the subjects who participated in the action. Some might play a greater role in producing the results and claim ownership of new information generated by it. This raises a question about intellectual property rights and who owns the information, specifically whether any such right would be held by an individual or by the group (Trimper et al 2014). Criteria establishing rightful ownership of intellectual property would have to be clearly spelled out to adjudicate competing claims among subjects sharing information over BTBIs.

Finally, external interference in neural signal transmission from third parties, or malevolent practitioners using interface technology, raises the possibility of coercion of a subject's thought and behavior. With BCIs, there is already the potential of forcing a subject to have thoughts or perform actions she does not want. This could be more problematic if multiple brains are manipulated to control neural and mental processes against the collective interest of the subjects. It could threaten autonomy and free will on a larger scale. Trimper and co-authors note the possible coercive use of BCI and BTBI technology: 'if thoughts can be planted, or behavior compelled, through interfaces that send stimulation or information directly to the brain, it is theoretically possible at some point that such technology might be used without consent to control the behavior of prisoners, for example' (Trimper et al 2014, p 2). It could violate the cognitive liberty of criminal offenders by altering their thought and behavior against their will. The extent of coercion could be greater with BTBIs than with BCIs given that more information from more people's brains would be publicly accessible. In this and other respects, two or more brains might not be better than one. Just as the US Genetic Information Nondiscrimination Act protects individual genetic information from unlawful use, similar legislation may be necessary to protect data about individual brains. Minimally, guidelines and policies are needed to adequately protect subjects in brain-to-brain experiments. This would include disclosure of information about known and probable consequences of wiring brains together so that subjects can give valid consent to participate in them.

Hippocampal prosthetics

Some people lose the ability to encode and store memories from newly learned information. This can be caused by impaired or lost function of the hippocampus—or more precisely the circuit consisting of the hippocampus and entorhinal cortex (H–E)—from brain injury or infection. While adverse effects on this memory circuit prevent the formation of new episodic memories, they can also impair semantic and working memory, both of which utilize information derived from episodic memory. This can disrupt agency by disrupting the capacity to form action plans. It can also disrupt personal identity by disrupting the first-person experience of mental time travel from the past to the future. A hippocampal prosthetic (HP), or H–E prosthetic, consists of a multi-site electrode array implanted in the area encompassing the H–E circuit (Berger et al 2011, Hampson et al 2013). The array is linked to a very-large-scale integration biomimetic model providing the necessary inputs and outputs for memory encoding. HPs have been used as prototypes in animal models but have not yet been tested in human trials. While they are at a developmental stage and may be ready for implantation in the human brain in the next 5 years, they remain a hypothetical intervention. As with the other neuroprosthetics I have discussed, when clinical trials begin researchers will have a duty to ensure that subjects consenting to participate in them have reasonable expectations about outcomes. While the intended outcome is therapeutic, subjects need to be constantly reminded of the experimental nature of the trials and the primary goal of gaining scientific knowledge about how such a prosthetic would function in the brain.

DBS of the fornix, which projects to the H–E circuit, improved spatial working memory in two subjects in a Phase 1 trial for early-stage Alzheimer's disease (AD) (Hamani et al 2008, Laxton et al 2010). In a different study, stimulation of the entorhinal area improved spatial memory in some participants undergoing subsequent epilepsy surgery (Suthana et al 2012). Although the research is at a preliminary stage, DBS may eventually be used therapeutically to treat memory and other cognitive impairments in early-stage dementias (Laxton and Lozano 2013). For people with AD and other dementias whose hippocampal degeneration is too advanced to respond to neurostimulation, or for those with severe anterograde amnesia from damage to the H–E and adjacent circuits, an HP might be able to restore the brain's ability to retain information through the encoding of new episodic memories. It could have significant therapeutic potential because this circuit is a key component of the episodic memory system and one of the first neural structures to undergo cellular loss and tau pathology in AD. Artificial reconstruction of neuron-to-neuron connections with a biomimetic microchip model replacing a damaged H–E circuit could improve or restore short- and long-term episodic memory and its effects on semantic and working memory. It could improve planning and decision-making and thus improve the subject's capacity for agency. To promote this goal, an HP would have to be compatible with transcription factors such as cyclic AMP response element-binding protein (CREB) and ensure that the episodic memory system allowed an optimal level of information in the brain. Binding is a neural selection mechanism that prevents information overload in the brain. It does this by letting only a fraction of sensory data enter conscious awareness. An HP would have to regulate the volume of this information and not increase a subject's cognitive load.
Otherwise, it would impair agency by making the subject process too much information, resulting in impaired reasoning and decision-making.

Theoretically, it would not matter whether memory functions were maintained through natural or artificial means, provided that an HP maintained the neural inputs and outputs necessary for these functions. But the prosthetic would have to integrate into the brain in such a way as to be sensitive to internal activity in multiple cell fields in other circuits mediating declarative and non-declarative memory systems to which the H–E circuit would be connected. This would require sensitivity to interaction between episodic and emotional memory systems mediated by circuits in limbic and frontal regions while not adversely affecting the relatively independent procedural memory system mediated by circuits in the striatum and cerebellum. In addition to its importance for agency, memory is critical for personal identity. This involves not only the experience of persisting or traveling through time but through space as well. Entorhinal grid cells and hippocampal place cells play a critical role in this experience, and an HP would have to reproduce interaction between these two types of cells in a normal brain's navigational system (O'Keefe and Burgess 1996, Moser and Moser 2008, Hasselmo 2009). Grid cells are a more fundamental feature of this system and drive place cells. They form an internal positioning system, informing the organism of its location independently of external cues. Place cells use this information along with other environmental cues to create a sense of space. They are sensitive to both internal sensory information in the brain and external sensory information from the natural environment in which the organism acts. Like a normally functioning H–E circuit, a prosthetic for episodic memory would have to perform these functions to restore and maintain agency and identity.

The autobiographical sense of episodic memory has a subjective aspect that depends on more than, and thus is not reducible to, the mechanisms of grid and place cells. It also depends on how the subject interacts with other subjects in physical, social and cultural environments and on the meaning the subject constructs from and assigns to this lived experience in space and time. The information processing of an HP could not reproduce this meaning because it cannot be explained mechanistically. While restoring the neural basis of autobiographical memory, an HP would have to function in a way that did not interfere with, but was compatible with, this meaning. This is important not only for the subject's orientation to space and time in the present. It also influences how the subject imagines future situations and forms action plans. The prosthetic would only enable encoding of new episodic memories, after which the subject would assign meaning to them based on her lived experience in different environments. An HP could not assign meaning to the information in newly formed memories but would encode it with equal functional and value-neutral weight. Goal-directed behavior depends on the subject's capacity to select some past events and memories of them as more valuable or meaningful to her than others. For a person with damage to the H–E circuit, an HP would be necessary but not sufficient to compensate for or replace all relevant aspects of autobiographical memory. An HP would have to integrate into a distributed network of memory circuits and provide the neurobiological foundation for, but not interfere with, the psychology of episodic memory as travel through space and time.

One of the most vexed issues in memory research is distinguishing imagined from real (veridical) memories. One purpose of memory is to enable the subject to adapt to the environment. As part of this adaptation, some degree of imagination may be part of our use of information about the past to simulate future possibilities (Schacter et al 2007). This requires updating the information encoded in memories as they are retrieved and reconsolidated so that the subject can respond appropriately to external demands (Nader and Einarsson 2010). Some degree of misremembering may be involved in this updating process. The adaptive sense of imagination is forward- rather than backward-looking. It is more about how the neural representation of a past event is interpreted than about the representation itself. An HP might avoid the problem of imagined events and confirm the occurrence of real events if its internal processing was immune to potentially distorting influences from environmental cues. But the prosthetic would still have to encode data from the external world to form the memory, and there is no way of knowing whether it could do this more accurately than a normally functioning hippocampus.

The questionable reliability of memory in victim and eyewitness testimony about a criminal assault in court proceedings is one example of the distorting effect of memory updating through multiple instances of retrieval and reconsolidation. An account of an assault after the event may not be accurate because detailed episodic memories tend to become more like generalized semantic memories over time (Lacy and Stark 2013, Yassa and Reagh 2013). The organism only needs the gist of information about the past to meet the challenges of present and future circumstances. Because the reliability question is about memory retrieval rather than memory encoding, whether a victim or witness had an HP or natural hippocampus may not be directly relevant to this issue. There are cases, though, in which an HP may influence judgments of criminal responsibility.

Children occasionally die from hyperthermia after being left in a car on hot days by a parent or grandparent. When charged with criminal negligence causing death, the parent or grandparent typically claims that they forgot about leaving the child in the vehicle. They might claim that they had many tasks to attend to that day and that the information associated with leaving the child in the car was pushed into their unconscious. Memory retrieval is to some extent involuntary and beyond our conscious control. Nevertheless, one could argue that, given the known magnitude of harm from hyperthermia, the parent or grandparent should have been more attentive to the information about the child and kept it before the mind's eye.

Suppose that a parent was charged with this offense. Would it make any difference to this charge if he had an HP implanted in his brain and it malfunctioned? If this neuroprosthetic enables one to form new memories rather than retrieve them once they have been stored, then the claim that he was unable to retrieve the memory of leaving his child in the car because of device malfunction would not have much moral or legal weight. The prosecution could argue that he was able to retrieve the memory but failed to because he was not sufficiently attentive to the situation. But a malfunctioning HP designed for memory encoding could mean that he was unable to form a memory of his action, and he could not recall the event if he could not form a memory of it. In that case, he could be excused from criminal negligence causing death because the HP malfunction would imply that he lacked the cognitive content necessary for negligence. This is different from the claim that he failed to access stored information, which suggests that he had the capacity for retrieval but had a cognitive lapse at the critical time. There was no memory for the accused to retrieve because the critical information was not encoded in his brain. Device manufacturers and memory researchers could confirm that the HP malfunction precluded his ability to form a memory of his action, and that this precluded him from knowing that he had left his child in the car. This is a hypothetical scenario. But it is an example of how the function and potential malfunction of such a prosthetic could have ethical and legal implications when used as a means of treating a disorder of memory capacity and its effects on rational, moral and legal agency.

Social justice issues

The production and application of neuroprosthetics are subject to strict regulatory processes at professional and political levels. Regulations are necessary to protect patients and research subjects from the risks entailed by these devices. Device manufacturers such as St. Jude Medical and Medtronic sponsor clinical trials to test neuroprosthetics. As in medical research sponsored by pharmaceutical companies, this may involve a conflict of interest if the company's desire to produce positive results unduly influences trial design and generates bias in the presentation of the results. A more serious conflict of interest arises when a researcher testing a device has a financial stake in the company that manufactures it. These factors can interfere with the scientific integrity of the research and preclude adequate protection of patients and research subjects. Actual and potential conflicts of interest justify the need for regulation of these devices (Fins and Schiff 2010, Fins et al 2011). Yet regulation can be an obstacle to innovation in developing and testing newer and possibly more effective prosthetics (Curfman and Redberg 2011). The neuroprosthetics currently used for the conditions I have mentioned are adequate at best. A lack of competition among companies, due to a lack of financial and other market incentives, may be one reason why technological progress in the field of neurostimulation has lagged.

Current DBS technology, for example, may have realized most of its therapeutic potential in controlling symptoms of some neurological and psychiatric disorders. Very little is known about how DBS and other implantable devices work. This limited knowledge highlights the importance of conducting foundational research to gain a better understanding of the underlying mechanisms and their therapeutic indications. Neurostimulation based on this understanding might alter the pathophysiology and slow, and possibly reverse, the progression of these disorders. More technically sophisticated BCIs might restore a greater degree of control of motor functions and facilitate more effective communication for locked-in and minimally conscious patients. These advances would depend on further innovation, research and development of the technology. This in turn would depend on manufacturers getting a return on their investment and remaining competitive in producing devices (Ineichen et al 2014, Kestle 2014).

The financial interests of device manufacturers may not always be compatible with the health interests of research subjects participating in, and investigators conducting, neuroprosthetic clinical trials. Yet the research could not be conducted without the devices and thus could not be conducted without the manufacturer. As part of their economic calculus, manufacturers can determine which experiments are worthwhile and which should be initiated or terminated. This can be an obstacle for researchers intending to determine the therapeutic potential and safety of DBS, BCIs, HPs or other devices. It can also be unfair to patients with a condition that might respond favorably to these forms of neuromodulation. All persons with neurological and psychiatric conditions have equal medical needs. A just health care system should meet these needs by providing safe and effective therapies and opportunities for research that will lead to them (Daniels 2008). Clinical trials testing certain devices for only some of these conditions can result in unequal access to research with the potential to yield scientific knowledge that could eventually benefit people with these conditions. Unequal access based on decisions by device manufacturers about which trials to fund could lead to unequal and unfair outcomes for people with neurological and psychiatric disorders.

Costs may unfairly exclude some people from participating in these trials. As Underwood points out, 'in the United States, companies and institutions sponsoring research are rarely, if ever, required to pay medical costs that trial subjects incur as a result of their participation' (Underwood 2015, p 1187). In some DBS trials lasting many years, US Medicare and private insurance may cover a portion of the costs related to the device. Some people are financially better or worse off than others as a result of at least some factors beyond their control. If the worse off cannot afford to pay the medical costs of participating in a neuroprosthetic clinical trial, and cannot participate in it for this reason despite a desire to do so, then that would be unfair to them because participation would be based on the ability to pay rather than on medical need for the same or a similar condition. Equal need would be trumped by unequal means of access to the research. It could be unfair to exclude patients from participating in a clinical trial when this was the only way of determining whether a proposed brain intervention was safe and effective and whether it could subsequently be provided to them as therapy.

If a manufacturer of a DBS device goes out of business and the neuroprosthetic is no longer available, then any benefit a subject or patient receives from the technique may be temporary. Replacing batteries or leads or adjusting stimulation parameters in OLDs would not be an option because of a lack of equipment. In that case, symptoms could reappear and the condition could return to an uncontrolled state. This could occur with any neuroprosthetic, and it could transform a beneficial situation into a harmful one for patients and research subjects. One example of this was the 2001 decision by the company NeuroControl to abandon production of its Freehand device, which could reactivate hands paralyzed from nerve damage. Approximately 250 people who had used the device during and after clinical trials were unable to obtain replacements for the frayed wires in the implants (Underwood 2015, p 1187). Some companies may inform patients recruited for a trial that medical devices will be available for only a limited time. This could be interpreted as a coercive offer to vulnerable populations with no other treatment options and, as such, unethical. US President Barack Obama's Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative may lead to additional funding from the National Institutes of Health to overcome obstacles associated with the cost of research into the therapeutic potential of neuroprosthetics. It is still too early to predict what the outcome might be.

Other things being equal, all persons with conditions that could be ameliorated by neuroprosthetics should in principle have access to them and to research testing their efficacy. But other things are not always equal, and some factors may justify unequal treatment as fair treatment in making neuroprosthetics available to some patients and research subjects but not others. As noted in the discussion of BCIs, some persons who fail to learn how to move a computer cursor or robotic arm may be harmed psychologically by their failure to meet their expectations about the therapeutic potential of the technique. To minimize the probability of this harm, and to avoid the time and cost entailed by repeatedly failed training sessions, BCI researchers and practitioners should adopt strict selection criteria and include only patients with largely preserved cognitive functions who could give informed consent to this form of neuromodulation and would be more likely to learn to operate the system. This may seem unfair to those with impaired levels of cognition who lack these capacities. Nevertheless, the idea of providing equal opportunity for all paralyzed or otherwise behaviorally compromised individuals to access BCIs would have to be weighed against the potential for emotional harm if the subject could not meet the cognitive demands of operating the system. It would also have to be weighed against the time and cost involved in training subjects to use a scarce and expensive resource when using it unsuccessfully could deprive others of using it for a positive outcome. Discriminating on the basis of cognitive function in access to the technology may be justified on these grounds.

The basic challenge with neuroprosthetics is for health care institutions to provide appropriate patients with access to research leading to applications that will improve their quality of life, while at the same time providing manufacturers with economic incentives to promote innovation and produce technology that will maximize therapeutic outcomes. Ideally, these should be complementary rather than competing goals. The right balance must be struck between the interests of patients and those of device manufacturers. Regulations must be in place to protect patients and research subjects, but not to the extent that they suppress industry innovation in making safer and more effective devices that may have positive effects for a significant number of patient populations. Ultimately, patient welfare should be the main impetus of any duty or interest in technological innovation in producing more advanced neuromodulating devices and systems. There is a need for interdisciplinary collaboration among representatives of patient and research subject organizations, industry, basic neuroscience, medicine and ethics to shape the future of neuroprosthetics. This collaboration is necessary to realize the goal of using this technology to improve the welfare of people with severely disabling conditions.

Concluding remarks

The neuroprosthetics I have discussed in this review can ameliorate motor, cognitive and affective functions by bypassing or modulating areas of damage or dysregulation in the neural circuits mediating these functions. As enabling devices that integrate into these circuits, neuroprosthetics can restore varying degrees of autonomous agency for people affected by brain injury and neuropsychiatric disorders. They can also re-establish the connectedness and continuity of the psychological properties these people had before injury or disease onset and thereby re-establish their identity. Devices that perform these enabling functions can be perceived by subjects as forms of extended embodiment. Insofar as there is proprioceptive and somatosensory feedback from the robotic arm or hand and the neuromodulating systems linking them to the brain, subjects in whom they are implanted or to whom they are connected can identify these devices as their own.

Provided that adequate protections are in place so that patients and research subjects are not exposed to an unreasonable risk of harm and that the probable overall benefit of the research outweighs the risk, experiments testing neuroprosthetics on research subjects can be ethically justified. One component of these protections is an obligation of investigators and clinicians testing and applying these systems to ensure that subjects have reasonable expectations about their therapeutic potential. In addition to minimizing physiological risk in research and clinical settings, this can prevent psychological harm to them. While in principle all people who might benefit from neuroprosthetics should have access to them, considerations of time, cost and the ability to learn how to operate them could justify some inequality in selecting patients and research subjects. BCIs and BTBIs raise concerns about neural and mental privacy, intellectual property, coercion and other forms of interference from third parties. Adequate protection of patients and research subjects must be in place to prevent or at least minimize the probability of these misuses of information about the brain.

Improvement in motor and mental functions resulting from neuroprosthetics thus far has been modest. A more significant advance would be if these systems could release trophic and other growth factors at cellular and molecular levels of neural circuits and induce neuroplasticity and neurogenesis. These processes might alter the pathophysiology of neuropsychiatric diseases and completely restore motor functions impaired or lost from brain injury. Some DBS studies suggest that earlier application of the technique before neurodegeneration is advanced could have these positive effects. If the benefit of earlier neurostimulation clearly outweighed the risk in neurological and psychiatric disorders, then there would be an ethical obligation for practitioners to administer it to patients earlier rather than later in the disease process. In these respects, neuroprosthetics could revolutionize clinical neuroscience in general and rehabilitative medicine in particular. Investigating these and other therapeutic possibilities requires more innovative research, and this in turn requires more funding from both public and private sources.

A more general philosophical concern is that development of more sophisticated neuroprosthetics could gradually replace natural circuits in the brain and transform it into a completely artificial organ. This could turn us into cyborgs or complete machines in a transhuman world. But the expectation that future neuroprosthetics will seamlessly integrate with the human body and brain does not imply that they will replace them. Neuroprosthetics will likely continue to compensate for neural dysfunction while supplementing rather than supplanting normal functioning neural circuits. Any fear that these devices might cause us to lose our personhood or humanity would be unfounded.

The Human Brain Project is a large collaborative endeavor whose aim is to achieve a multi-level, integrated understanding of brain structure and function through the development and use of information and communication technologies. Its ultimate goal is simulation of the entire brain (Markram et al 2011, Kandel et al 2013). Similarly, the Human Connectome Project aims to create a comprehensive network map of brain circuitry and connectivity through different imaging modalities (van Essen et al 2013, Fornito et al 2015). Theoretically, these models will better inform our understanding of brain disorders and promote more effective treatments for them. But it is questionable whether an artificially constructed brain could replicate the complexity of the central nervous system and how its natural structure and function are shaped by dynamic interaction between and among genetics, epigenetics, bodily systems and the environment. Some might claim that an artificial brain consisting of very large-scale neuroprosthetic networks would function better than a natural brain. But there is no way of empirically determining that it would do this. Among other things, it would be difficult to know how artificial networks would integrate with each other and interact with immune, endocrine, cardiovascular and other systems in the body that can influence brain function. The brain is not a self-contained organ. Persons are constituted by their brains, but are not identical to and are not defined solely in terms of them. They are also constituted by their bodies and mental states, the contents of which are shaped not only by their brains but also by the natural and social environment in which they are embedded. These contents are emergent properties of brain-body-environment interaction, which cannot be captured by a reductionist model based entirely on brain circuitry. 
Ethical questions about how brain diseases harm patients and how neuroprosthetics can benefit them need to be informed by factors inside and outside the brain. These questions in turn should inform the development of technologies aimed at achieving a better understanding of the differences between normal and diseased brains. Ideally, these technologies will restore a greater degree of behavioral control for people with motor and mental limitations, maximizing benefit and minimizing harm in improving the quality of their lives.

Acknowledgments

I am grateful to three reviewers for the Journal of Neural Engineering and to Mikhail Lebedev for very helpful comments.
