
Brain-computer interfacing: science fiction has come true

Andrea Kübler PhD
DOI: http://dx.doi.org/10.1093/brain/awt077. First published online: 9 April 2013

The image of the person condemned to an inner existence of unimaginable torment through paralysis, or through inability to communicate despite sentience, has resonated down the ages in the imagination of writers, philosophers, physicians and scientists free to move and exchange their own ideas; but the tantalizing notion that silent thoughts and intentions might be translated into action by extrinsic mechanical means has long seemed to lie in the realm of pure science fiction. Controlling machines and influencing one’s environment simply by thought was once the stuff of which dreams were made. In the mid-1960s, Captain Pike, one of the protagonists of Star Trek, although completely paralysed, controlled his wheelchair with his brain: a blinking light flashed once for ‘yes’ and twice for ‘no’ (www.trekkiesworld.de). Remarkably, new technology can now, at least to some extent, restore the basics of motor control and communication to people severely disabled by neurological disease or traumatic injury. The recent expansion of published research on brain–computer interfaces has led to real advances in assistive and therapeutic applications.

BRAIN–COMPUTER INTERFACES: PRINCIPLES AND PRACTICE. Edited by Jonathan R. Wolpaw and Elizabeth Winter Wolpaw. 2011. New York and Oxford: Oxford University Press. ISBN: 978-0-19-538885-5. Price: $115.00/£85.00

Five years after the publication of Towards brain-computer interfacing (Dornhege et al., 2007), the more recent volume Brain–computer interfaces: principles and practice demonstrates the maturation of this field. Edited by Jonathan Wolpaw and Elizabeth Winter Wolpaw, the book documents the exponential growth in brain–computer interface research: starting in the early 1990s with around one publication per year, the field expanded after 1995 and reached 470 publications between 2008 and 2010, so that we can expect at least 700 publications on the topic between 2011 and the end of 2013. And yet in this period, only 39 published studies have addressed those who stand to benefit most from the advances, namely individuals with severe motor impairment. Fewer than 10% of papers published on brain–computer interfacing deal with individuals with motor restrictions, although many authors cite such individuals as the purpose of their research. This starkly illustrates a challenge not yet sufficiently acknowledged by the brain–computer interface research community: the translation of results obtained offline and online in the laboratory to potential end-users aiming to function within their own environments. Although the reasons for this lack of directly applicable research are manifold, the editors identify the reliability of signal recording and extraction, and consequently of performance, as the main obstacle to applying brain–computer interfacing to neurological rehabilitation. Access to patients, the time needed to acquire data, costs and the vulnerability of the target group, to name but a few of the challenges, add to the difficulties facing researchers.

In 1973, when almost none of the authors contributing to Brain–computer interfaces were actively involved in this area of neuroscience, the question was posed: ‘can these observable electrical brain signals be put to work as carriers of information in man-computer communication or for the purpose of controlling such external apparatus as prosthetic devices or spaceships?’ (Vidal, 1973). The answer came: ‘even on the sole basis of the present states of the art of computer science and neurophysiology, one may suggest that such a feat is potentially around the corner’. That optimism was bold at a time when processors ran at a clock speed of 108 kHz and comprised 2300 transistors; today they run at gigahertz speeds with around a billion transistors. However, what was speculation and anticipation at that time has become intriguing and solid reality. Jonathan Wolpaw, who was involved in the early days of this type of research, can now state that brain–computer interfaces have come of age, and many studies examine possible applications. The book aims to provide a ‘comprehensive, balanced and coordinated presentation of the field’s key principles, current practice and future prospects’. It certainly provides a broad introduction to the subject, and one of considerable depth.

Brain–computer interfaces starts with a thorough introduction to the anatomy and connections of the motor cortex and presents the fundamentals of brain electric and magnetic signalling, together with an overview of methods used to record the brain’s metabolic activity, including functional transcranial Doppler imaging, PET, functional near-infrared spectroscopy and functional magnetic resonance imaging. These chapters explain the basic physical, physiological and technological issues and thus provide a basis on which to expand our knowledge of the finer details of brain–computer interface research.

A section on ‘Design, implementation and operation’ covers the origin and nature of input signals for brain–computer interfaces; their acquisition, extraction of relevant features and translation to output signals. It deals with hardware, software, operating protocols and controlled applications. Much of the book is dedicated to these aspects, reflecting their unquestionable relevance in the field. When acquiring signals from within the brain, the issue of long-term stability of such recordings is identified as the most pressing problem; some signal always remains but the richness of that signal available in the first days and weeks of recording is lost thereafter. Recently, Taub et al. (2012) have suggested that coating the implanted electrodes with an active protein (IL1 receptor antagonist) to reduce glial scarring and immune reaction might also constitute a step toward long-term recording with microelectrode arrays used for intracortical brain–computer interfaces. When acquiring signals from outside the brain for brain–computer interface control, the focus lies clearly on electroencephalography, and part of the book helpfully focuses on this recording technique. But the essence of the problem is extraction and translation, since no matter how signals are recorded, which particular brain activity may be the focus of interest, or which application might be controlled with a brain–computer interface, eventually features must be extracted from the signal input and translated into an output. Here, the authors of this section succeed in breaking down a complex topic into small, digestible pieces for the non-expert. 
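To make the extract-and-translate step concrete, here is a minimal, purely illustrative sketch, not taken from the book, of how a sensorimotor-rhythm feature might be extracted from a single EEG epoch and translated into a binary output command. The sampling rate, band limits, threshold and command names are all assumptions of this sketch:

```python
import numpy as np

FS = 250  # sampling rate in Hz (an assumption for this sketch)

def mu_band_logpower(epoch):
    """Log mean power of an EEG epoch in the mu band (8-12 Hz), via the FFT."""
    spectrum = np.abs(np.fft.rfft(epoch)) ** 2
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / FS)
    band = (freqs >= 8) & (freqs <= 12)
    return float(np.log(spectrum[band].mean()))

def translate(feature, threshold):
    """Translate the extracted feature into one of two output commands.

    Low mu power (event-related desynchronization) is taken as movement intent.
    """
    return "move_cursor_up" if feature < threshold else "move_cursor_down"

# Simulated 1 s epoch: a strong 10 Hz (mu) oscillation plus a little noise.
t = np.arange(FS) / FS
rng = np.random.default_rng(0)
epoch = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(FS)
command = translate(mu_band_logpower(epoch), threshold=0.0)
```

In a real system the threshold would be replaced by a classifier trained on labelled calibration data, which is exactly the model whose validity, as the authors stress, can only be established online.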
Importantly, they emphasize that the gold standard for testing the acquired model that describes the relation between the extracted features and the user’s intent is online performance, measured as accuracy, error rate and information transfer rate, and preferably involving targeted end-users; a standard that is evidently neglected in current brain–computer interface research. Later in the book, issues relating to the end-user of brain–computer interfacing and the necessity of user-centred, individual approaches are addressed, and the authors emphasize that such an approach can only be successful when developers of this technology and experts in assistive technology work together closely. The need is therefore for early focus on users, their tasks and environment; active involvement of users; appropriate allocation of function between user and system; incorporation of user-derived feedback into system design; and an iterative process whereby a prototype is designed, tested and modified (Maguire, 1998). The user-centred approach was standardized by ISO 9241-210 (Ergonomics of human-system interaction Part 210: Human-centred design for interactive systems) and has now been adopted in a few studies that test newly developed applications with end-users (Sellers et al., 2010; Zickler et al., 2011).
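The information transfer rate mentioned above is typically computed with Wolpaw’s widely used formula, which combines the number of possible targets, the selection accuracy and the selection rate. As a concrete illustration (the function name and example parameters are mine, not the book’s):

```python
import math

def wolpaw_itr(n_targets, accuracy, selections_per_min):
    """Information transfer rate in bits/min, per Wolpaw et al.'s formula.

    n_targets: number of possible selections (e.g. 36 for a 6 x 6 P300 matrix)
    accuracy: probability of a correct selection (0 < accuracy <= 1)
    selections_per_min: how many selections the system makes per minute
    """
    if accuracy >= 1.0:
        bits = math.log2(n_targets)  # perfect accuracy: log2(N) bits/selection
    else:
        bits = (math.log2(n_targets)
                + accuracy * math.log2(accuracy)
                + (1 - accuracy) * math.log2((1 - accuracy) / (n_targets - 1)))
    return bits * selections_per_min

# Example: a 6 x 6 P300 speller (36 targets), 90% accuracy, 4 selections/min.
itr = wolpaw_itr(36, 0.90, 4.0)
```

Note how strongly accuracy dominates this measure: a drop from 100% to 90% correct costs roughly a fifth of the bit rate, which is one reason reliability recurs throughout the book as the central obstacle.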

The book covers all brain-derived signals currently being investigated for their suitability for brain–computer interface control: (steady-state) event-related potentials, sensorimotor rhythms and metabolic activity, which are all measured non-invasively from outside the brain; and electrocorticographic and motor, pre-motor and parietal spike activities, measured on the cortical surface or intracortically, and thus invasively. Owing to its significance for motor-impaired end-users, this section of the book starts with event-related potentials elicited within an oddball paradigm, often referred to as the P300 brain–computer interface. The origin of this input signal, the specifics of its recording and classification, and the state of the art in information transfer and application control are explained. One can safely state that to date the P300 brain–computer interface is the best choice in terms of accuracy and speed, provided that event-related potentials can be elicited, which is not always the case for end-users with motor impairments (Nijboer et al., 2008). In assessing the different methods for acquiring brain signals, the issue of non-invasive versus invasive recording arises, a matter often discussed controversially in the brain–computer interfacing community. The editors of this book have, however, succeeded in combining rather than separating the various approaches. For example, Chapter 16, on brain–computer interfaces that use signals recorded in the motor cortex, states that ‘at present, any claim of superiority must be examined skeptically’. The requirements for justifying implantation are listed: fully implantable and safe, easy to use, reliable for several decades, and substantially superior to non-invasive recordings.
In contrast, the preceding chapter, about signals acquired with electrocorticography, is less cautious, stating that as an input signal for brain–computer interfacing it is ‘greatly superior to EEG’, although end-user testing has been almost entirely restricted to patients with epilepsy. The chapter argues, incorrectly, that even ‘the highest functioning EEG-based brain–computer interfaces require a substantial amount of training’; this is true neither of P300 brain–computer interfaces, which require no training, nor of those using sensorimotor rhythms as input signals (Blankertz et al., 2007). Limitations of the approach, such as the requirement for surgery and the lower spatial resolution compared with intracortical recording, are openly discussed. However, a major limitation not mentioned by the authors is that the few studies that have applied electrocorticography in patients in the (complete) locked-in state, a patient group explicitly referred to throughout the book as a target population for brain–computer interfacing, did not succeed in restoring communication (Hill et al., 2006; Murguialday et al., 2011).

I argue that the challenge for brain–computer interfacing research is not to determine which system is superior, but which system is best suited to the individual patient with specific symptoms, needs, environment, support and personal preferences, thus posing a question of indication rather than superiority. Let us assume that, in future, invasive brain–computer interfaces are shown on some yet-to-be-defined objective measure to be superior to non-invasive approaches. A patient might still opt for a non-invasive brain–computer interface simply because he or she prefers a suboptimal communication speed over brain surgery. Recently, for example, it was reported that a patient in the locked-in state who communicated faster with the P300 brain–computer interface than with eye movement still opted for the latter (Sellers, 2013).

A later section of the book addresses the intended recipients of the technology. Although translational approaches are identified as a major challenge, this section contains only 70 pages, reflecting the lack of detailed knowledge about the end-users of brain–computer interfacing and the bio-psycho-social facets of this human–computer interaction. A chapter about recipients of the technology and their needs focuses on people with para- and tetraplegia due to spinal cord injury and thus provides a specific rather than generic picture of potential end-users. The authors emphasize the importance of conversation with potential users about their most pressing needs. People who have substantial experience with assistive technologies should be asked what a brain–computer interface-driven assistive technology would require if it were to be used in daily life. This provides valuable information about which aspects to consider when developing brain–computer interfaces for assistive technology, including functionality, independence and ease of use (Zickler et al., 2009). However, it is difficult to judge something just by imagining its use without any experience of the technology itself (Huggins et al., 2011). This raises the issue of evaluation, and of the prerequisites and obstacles faced when taking brain–computer interfaces into a patient’s home. The authors illustrate which aspects need to be addressed and evaluated in translational studies with four questions: (i) is long-term independent use possible?; (ii) who are the people who need brain–computer interfaces, and can they use them?; (iii) is home support ensured, and is the brain–computer interface actually used?; and (iv) does brain–computer interfacing improve the lives of its users? The authors point out that without motivated caregivers, home use of brain–computer interfacing is not possible, and the tasks to be faced when setting up the technology at a patient’s home are rigorously defined.
I consider this section of the book to be of utmost importance because it not only pinpoints the necessity of translational studies, but also provides an algorithm for realizing them and measures that may be used for evaluation. In line with these demands, a brain–computer interface controlled by event-related potentials and implemented in a commercially available assistive technology has recently been evaluated by severely impaired end-users (Zickler et al., 2011). Reliability and ease of learning were rated as very good, whereas speed and aesthetic design were considered only moderate. Obstacles to use in daily life included: (i) low speed; (ii) the time needed to set up the system; (iii) complicated software handling; and (iv) the strain that accompanies EEG recording.

Although it is stressed throughout the book that brain–computer interfacing should allow independent use, as also emphasized by potential end-users, one has to admit that fully independent use will hardly be possible (or, it can be argued, necessary), because the potential end-users will be severely ill and in most cases in need of 24-hour care. The requirement for some action by a third person is unlikely to prevent brain–computer interfacing from being employed regularly. Additionally, it does not seem warranted to compare brain–computer interface performance with healthy motor output, as is done in this book; rather, performance must (at least) match the quality of existing assistive technologies. This is inevitably the yardstick by which potential end-users will actually judge their experience of brain–computer interfacing (Zickler et al., 2011).

With regard to speed, much has been achieved in recent times. For example, Sellers et al. (2012) changed the stimulation pattern, while Kaufmann et al. (2011, 2012) included faces in the stimulation process and demonstrated that end-users who had been unable to use the conventional P300 brain–computer interface achieved 100% accuracy. These examples suffice to illustrate the pace of brain–computer interface research, which renders the book in some respects already outdated. However, that is the case for every textbook dealing with a flourishing research topic, and it is a good indicator of the book’s timeliness.

In the final section, dissemination, ethical considerations and other potential targets for brain–computer interfacing are addressed, namely therapeutic applications and uses for the general population. Therapeutic applications other than substituting lost motor function have been pursued since the early 1970s, although the term ‘neurofeedback’ was used in that context rather than brain–computer interfacing. Although neurofeedback-based intervention, and thus brain–computer interfacing, is not widely applied in the routine clinic, some success has been reported. Specifically, neurofeedback of slow cortical potentials and of the theta/beta ratio in children with attention deficit hyperactivity disorder has been shown to be both efficacious and specific (that is, replicated effects on symptomatology have been demonstrated in randomized clinical trials), and can thus be recommended for therapy of this disorder; certainly a brain–computer interface success story.

The editors conclude by summarizing important problems for brain–computer interface research and development, such as signal acquisition hardware, validation and dissemination, and reliability. ‘Robust performance should therefore be a major objective of continuing research’ and it has to be elucidated ‘which designs are best for which applications’, confirming the pressing need to define indication criteria. Provided that brain–computer interface research and development can bridge the translational and reliability gaps, a promising future lies ahead for the technology, of which communication and control may be only one facet. A broader target population for communication and control is envisaged with hybrid brain–computer interfaces, which allow for more than one input signal (Allison et al., 2012). Clinical applications beyond substituting lost motor function are now also a focus of research, specifically stroke rehabilitation (Kaiser et al., 2012). In combination with functional electrical stimulation, the aim is to use a brain–computer interface as a switch between stimulation of different muscles to restore functional grasp and elbow function in subjects with high spinal cord injury (Rupp et al., 2012). Most recently, human control of neuroprosthetic devices reached a new performance level: a woman with tetraplegia due to neurodegenerative disease, implanted with 96-channel intracortical microelectrodes in motor cortex, was able to control a prosthetic limb with seven degrees of freedom in 3D space after 2 days of training (Collinger et al., 2013; http://www.youtube.com/watch?v=C7H_M8-dBHc). These results further underline the need for translational concepts and studies to transfer technology from the laboratory to the people who need it. Non-clinical applications are also on the rise.
However, one has clearly to distinguish serious approaches from thrilling advertisements for technology that realizes not brain control but, rather, muscular control. Still more science fiction than reality, one could also imagine integrating brain–computer interfaces into complex processes such as emergency management during mass events, on the basis of complex event processing. Events sent to an event cloud by the exocortices of subscribed users, for example in a stadium, could be identified as emergency patterns by a smart space monitor. An appropriate action pattern would then be transferred to the stadium visitors by means of smart sensors that automatically trigger a specific behaviour in order to avoid congestion or a brawl. A wearable brain–computer interface would then send events directly to the cloud and provide information about successful resolution of the emergency, or about developing panic, which would in turn require further action (Ehresmann et al., 2012; futurehumanity.wordpress.com/tag/ucepcortex). Anyone concerned with the future of neuroscience cannot ignore the implications and applications of the story laid out by Jonathan R. Wolpaw and Elizabeth Winter Wolpaw in Brain–computer interfaces: principles and practice.

