I would like to offer a research proposal to examine the feasibility of creating a mind-reading camera. Let’s invert the usual order and start with the references.
Electrical measurements of neuromuscular states during mental activities. VII. Imagination, recollection, and abstract thinking involving the speech musculature; Jacobson; American Journal of Physiology, Vol 97, 1931, 200-209
Experiments with five subjects upon whom electrical tests were successfully made confirm the view that specific muscles contract during each mental process. One electrode was inserted in the tip of the tongue, the other under the mucosa in the cheek. The minute contractions in the musculature of speech can be satisfactorily examined if the subject has been trained to relax and if the apparatus is sufficiently sensitive and not affected by disturbances irrelevant to the problem. The latter condition is difficult to secure. On the assumption that action-potentials in electrodes connected in muscle tissue signify the occurrence of contraction of fibers it may be concluded that concrete or abstract thinking is an inner speech.
Speech Recognition for Vocalized and Subvocal Modes of Production using Surface EMG Signals from the Neck and Face; Meltzner, De Luca, et al.; Interspeech (?), 2008, 2667–2670
We report automatic speech recognition accuracy for individual words using eleven surface electromyographic (sEMG) recording locations on the face and neck during three speaking modes: vocalized, mouthed, and mentally rehearsed. An HMM based recognition system was trained and tested on a 65 word vocabulary produced by 9 American English speakers in all three speaking modes. Our results indicate high sEMG-based recognition accuracy for the vocalized and mouthed speaking modes (mean rates of 92.1% and 86.7% respectively), but an inability to conduct recognition on mentally rehearsed speech due to a lack of sufficient sEMG activity.
* from Introduction: “Further reducing the amount of overt speech activity, Jorgensen and Binstead claimed to have developed a means of recognizing sEMG signals measured from the neck surface collected while subjects mentally rehearsed speech (i.e. mentally visualized speaking). They reported a mean accuracy rate of 72% on a 15 word vocabulary recorded from 5 individual speakers.”
* DARPA Release Information: Approved for Public Release, Distribution Unlimited.
Sub Auditory Speech Recognition Based on EMG/EPG Signals; Jorgensen, Lee, Agabon; Journal title (?), 2003 (?), pages (?) – (my pdf is stripped down)
Sub-vocal electromyogram/electro palatogram (EMG/EPG) signal classification is demonstrated as a method for silent speech recognition. Recorded electrode signals from the larynx and sublingual areas below the jaw are noise filtered and transformed into features using complex dual quad tree wavelet transforms. Feature sets for six sub-vocally pronounced words are trained using a trust region scaled conjugate gradient neural network. Real time signals for previously unseen patterns are classified into categories suitable for primitive control of graphic objects. Feature construction, recognition accuracy and an approach for extension of the technique to a variety of real world application areas are presented.
Modeling Coarticulation in EMG-based Continuous Speech Recognition; Schultz, Wand; (Preprint submitted to Speech Communication Journal December 4, 2009)
This paper discusses the use of surface electromyography for automatic speech recognition. Electromyographic signals captured at the facial muscles record the activity of the human articulatory apparatus and thus allow to trace back a speech signal even if it is spoken silently. Since speech is captured before it gets airborne, the resulting signal is not masked by ambient noise. The resulting Silent Speech Interface has the potential to overcome major limitations of conventional speech-driven interfaces: it is not prone to any environmental noise, allows to silently transmit confidential information, and does not disturb bystanders.
We describe our new approach of phonetic feature bundling for modeling coarticulation in EMG-based speech recognition and report results on the EMG-PIT corpus, a multiple speaker large vocabulary database of silent and audible EMG speech recordings, which we recently collected. Our results on speaker-dependent and speaker-independent setups show that modeling the interdependence of phonetic features reduces the word error rate of the baseline system by over 33% relative. Our final system achieves 10% word error rate for the best-recognized speaker on a 101-word vocabulary task, bringing EMG-based speech recognition within a useful range for the application of silent speech interfaces.
Corticomuscular coherence is tuned to the spontaneous rhythmicity of speech at 2-3 Hz; Ruspantini, Salmelin, et al.; J Neurosci. 2012 Mar 14; 32(11):3786-90
Human speech features rhythmicity that frames distinctive, fine-grained speech patterns. Speech can thus be counted among rhythmic motor behaviors that generally manifest characteristic spontaneous rates. However, the critical neural evidence for tuning of articulatory control to a spontaneous rate of speech has not been uncovered. The present study examined the spontaneous rhythmicity in speech production and its relationship to cortex-muscle neurocommunication, which is essential for speech control. Our MEG results show that, during articulation, coherent oscillatory coupling between the mouth sensorimotor cortex and the mouth muscles is strongest at the frequency of spontaneous rhythmicity of speech at 2-3 Hz, which is also the typical rate of word production. Corticomuscular coherence, a measure of efficient cortex-muscle neurocommunication, thus reveals behaviorally relevant oscillatory tuning for spoken language.
Dynamic moment analysis of the extracellular electric field of a biologically realistic spiking neuron; Milstein, Koch; Neural Comput. 2008 Aug; 20(8):2070-84
Based on the membrane currents generated by an action potential in a biologically realistic model of a pyramidal, hippocampal cell within rat CA1, we perform a moment expansion of the extracellular field potential. We decompose the potential into both inverse and classical moments and show that this method is a rapid and efficient way to calculate the extracellular field both near and far from the cell body. The action potential gives rise to a large quadrupole moment that contributes to the extracellular field up to distances of almost 1 cm. This method will serve as a starting point in connecting the microscopic generation of electric fields at the level of neurons to macroscopic observables such as the local field potential.
Biophotons as neural communication signals demonstrated by in situ biophoton autography; Sun, Dai, et al.; Photochem. Photobiol. Sci., 2010, 9, 315–322
Cell to cell communication by biophotons has been demonstrated in plants, bacteria, animal neutrophil granulocytes and kidney cells. Whether such signal communication exists in neural cells is unclear. By developing a new biophoton detection method, called in situ biophoton autography (IBA), we have investigated biophotonic activities in rat spinal nerve roots in vitro. We found that different spectral light stimulation (infrared, red, yellow, blue, green and white) at one end of the spinal sensory or motor nerve roots resulted in a significant increase in the biophotonic activity at the other end. Such effects could be significantly inhibited by procaine (a regional anaesthetic for neural conduction block) or classic metabolic inhibitors, suggesting that light stimulation can generate biophotons that conduct along the neural fibers, probably as neural communication signals. The mechanism of biophotonic conduction along neural fibers may be mediated by protein–protein biophotonic interactions. This study may provide a better understanding of the fundamental mechanisms of neural communication, the functions of the nervous system, such as vision, learning and memory, as well as the mechanisms of human neurological diseases.
Emission of Mitochondrial Biophotons and their Effect on Electrical Activity of Membrane via Microtubules; Rahnama, Salari, et al.; Journal title (?), year (?), pages (?) – (my pdf is stripped down)
In this paper we argue that, in addition to electrical and chemical signals propagating in the neurons of the brain, signal propagation takes place in the form of biophoton production. This statement is supported by recent experimental confirmation of photon guiding properties of a single neuron. We have investigated the interaction of mitochondrial biophotons with microtubules from a quantum mechanical point of view. Our theoretical analysis indicates that the interaction of biophotons and microtubules causes transitions/fluctuations of microtubules between coherent and incoherent states. A significant relationship between the fluctuation function of microtubules and alpha-EEG diagrams is elaborated on in this paper. We argue that the role of biophotons in the brain merits special attention.
Electromagnetic Emission at Micron Wavelengths from Active Nerves; Frey, Fraser; Biophysical Journal, 1968, 731-734
In recent years there has been experimental work and speculation bearing upon the significance in neural functioning of electromagnetic energy in the region of the spectrum between 0.3 and 10 μ. We demonstrate, in this experiment, micron wavelength electromagnetic emission from active live crab nerves as compared to inactive live and dead nerves. Further, the data indicate that the active nerve emission is caused by specific biophysical reactions rather than being simply black-body radiation.
* from Results: “Third, it was found that any radiation escaping from the nerve must have come from a location near the surface and must have been in certain bands of wavelength. The only qualification would be in the event that the membranes themselves act as wave guides for the micron wavelength energy. The basis of this statement is the fact that micron wavelength energy is largely absorbed by water. It can traverse a distance in water that would allow surface emission only in the spectral regions from 10.5 to 6.5 μ, 5.5 to 3.5 μ, and a very narrow band at 2.5 μ. Since even in these bands the distances to 63% reduction in amplitude are only 15, 30, and 100 μ, respectively, the surface of the nerve must be the locus of the emission.”
* from Materials and Methods: “The radiation detecting system used was a Barnes R-8T1 infrared radiometer (Barnes Engineering Co., Stamford, Conn.). This consisted of an optical unit and an electronic unit. The optical unit employed a modified Cassegrainian reflecting telescope to gather and focus the incident radiation. The electronic unit consisted of a thermistor bolometer detector, a solid state preamplifier, amplifier, and a synchronous rectifier. The passband of the lens bolometer system is shown in Fig. 1. The total system was quite sensitive with a detector noise power of 2.6 X 10- w in a one cycle bandwidth at 100 cps. The output signal was integrated over 1 sec intervals with a Dymec model 2401B integrating digital voltmeter (Dymec Div., Hewlett-Packard Co., Palo Alto, Calif.). The 1 sec integrations were then averaged further with a computer. ¶ 100 cm away from the detector, at the focal point of the detecting system, a standard nerve mount was fastened. …”
Nonthermal Microwave Emission from Frog Muscles; Gebbie, Miller; International Journal of Infrared and Millimeter Waves, Vol 18, No. 5, 1997, 951-957
Emission has been detected from electrically stimulated frog gastrocnemius muscles. It had wavenumbers certainly below 50 cm^-1, and arguably less than 5 cm^-1. The radiation has been shown to be non-thermal, thus originating in some subsystem rather than in the muscle as a whole. This is in line with a pumped phonon prediction by Fröhlich.
* note: 50 cm^-1 = 200 μm | 5 cm^-1 = 2000 μm | far infrared, according to Wikipedia: 25–1000 μm | terahertz range, according to Wikipedia: 100–1000 μm
* from Conclusions: “The limitation in data came entirely from the limited lifetime of frog muscles and it is clear that a true in vivo experiment should now be planned. In vivo experiments would allow sufficiently long time series to be recorded from which spectra could be derived. The use of a larger muscle could also result in stronger emission signals, reducing the observation time required for experiments. Possible experiments could involve the observation of emission from the flexing of some human muscle with reference to some physiological process such as pulse rate. Observation of emission from the leg muscle of a stationary cyclist suggests itself.”
* from Experimental: “… The muscle was lightly stretched so that contraction was isometric, and the nerve lay across platinum electrodes. The board was clamped vertically, with the hole exposed to the window of the detector and with the nerve and electrodes on the far side of the board from the detector. The belly of the muscle was then about 2 cm from the face of the detector window. The preparation was kept moist with Ringer’s Solution and the experiments were conducted at laboratory temperature of about 25°C. The detector was an indium antimonide ‘hot electron’ bolometer (Queen Mary Instruments, London) cooled to 4.2 K, having a noise level of 10^-12 W Hz^(-1/2) and a spectral pass band from 0 to 50 cm^-1. Stimulation of the muscle was achieved by applying 2 V pulses of 5 ms duration at a repetition rate of about 1 Hz. The effectiveness of the pulses was seen from muscle twitching. Each experiment consisted of applying 100 pulses, and the resultant signal was fed through a Lock-In amplifier (Stanford Research SR350) to a PC where the signal was sampled as a time series using the pulses as reference. These do not provide strict phase locking but the consequence of this is only minor distortion of the individual power ordinates in the spectrum. Null experiments were carried out using filter paper moistened with Ringer’s solution in place of the muscle.”
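The wavenumber figures quoted above convert to wavelength via λ(μm) = 10^4 / ν(cm^-1); a quick sanity check of the numbers in my note:

```python
# Wavenumber in cm^-1 to vacuum wavelength in micrometers:
# lambda(um) = 10,000 / nu(cm^-1)
def wavenumber_to_um(nu_cm: float) -> float:
    return 1e4 / nu_cm

print(wavenumber_to_um(50.0))  # 200.0 um (the "certainly below" bound)
print(wavenumber_to_um(5.0))   # 2000.0 um (the "arguably less than" bound)
```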
Second window for in vivo imaging; Smith, Mancini, Nie; Nature Nanotechnology, Vol 4, Nov 2009, 710-711
Subtitle: Enhanced fluorescence from carbon nanotubes and advances in near-infrared cameras have opened up a new wavelength window for small animal imaging.
First paragraph: “Near-infrared light (700–2,500 nm) can penetrate biological tissues such as skin and blood more efficiently than visible light because these tissues scatter and absorb less light at longer wavelengths. Simply hold your hand in sunlight and your fingers will glow red owing to the preferential transmission of red and near-infrared light. At wavelengths longer than 950 nm, however, this effect diminishes owing to increased absorption by water and lipids. Nonetheless, a clear window exists at wavelengths between 650 nm and 950 nm for optical imaging of live animals (refs 1, 2; Fig. 1). In practice, however, this window is not optimal because tissue autofluorescence produces substantial background noise and the tissue penetration depth is limited to between 1 and 2 cm (ref. 3).”
Near-infrared signals associated with electrical stimulation of peripheral nerves; Fantini, Bergethon; Proc SPIE Int Soc Opt Eng. 2009 January 1; 7174. doi:10.1117/12.809428
We report our studies on the optical signals measured non-invasively on electrically stimulated peripheral nerves. The stimulation consists of the delivery of 0.1 ms current pulses, below the threshold for triggering any visible motion, to a peripheral nerve in human subjects (we have studied the sural nerve and the median nerve). In response to electrical stimulation, we observe an optical signal that peaks at about 100 ms post-stimulus, on a much longer time scale than the few milliseconds duration of the electrical response, or sensory nerve action potential (SNAP). While the 100 ms optical signal we measured is not a direct optical signature of neural activation, it is nevertheless indicative of a mediated response to neural activation. We argue that this may provide information useful for understanding the origin of the fast optical signal (also on a 100 ms time scale) that has been measured non-invasively in the brain in response to cerebral activation. Furthermore, the optical response to peripheral nerve activation may be developed into a diagnostic tool for peripheral neuropathies, as suggested by the delayed optical signals (average peak time: 230 ms) measured in patients with diabetic neuropathy with respect to normal subjects (average peak time: 160 ms).
Compromising Electromagnetic Emanations of Wired and Wireless Keyboards; Vuagnoux, Pasini; Journal title (?), year (?), pages (?) – (my pdf is stripped down)
Computer keyboards are often used to transmit confidential data such as passwords. Since they contain electronic components, keyboards eventually emit electromagnetic waves. These emanations could reveal sensitive information such as keystrokes. The technique generally used to detect compromising emanations is based on a wide-band receiver, tuned on a specific frequency. However, this method may not be optimal since a significant amount of information is lost during the signal acquisition. Our approach is to acquire the raw signal directly from the antenna and to process the entire captured electromagnetic spectrum. Thanks to this method, we detected four different kinds of compromising electromagnetic emanations generated by wired and wireless keyboards. These emissions lead to a full or a partial recovery of the keystrokes. We implemented these sidechannel attacks and our best practical attack fully recovered 95% of the keystrokes of a PS/2 keyboard at a distance up to 20 meters, even through walls. We tested 12 different keyboard models bought between 2001 and 2008 (PS/2, USB, wireless and laptop). They are all vulnerable to at least one of the four attacks. We conclude that most of modern computer keyboards generate compromising emanations (mainly because of the manufacturer cost pressures in the design). Hence, they are not safe to transmit confidential information.
Information Leakage from Optical Emanations; Loughry, Umphress; ACM Transactions on Information and System Security, Vol. ?, No. ?, Month Year, Pages ?–?
A previously unknown form of compromising emanations has been discovered. LED status indicators on data communication equipment, under certain conditions, are shown to carry a modulated optical signal that is significantly correlated with information being processed by the device. Physical access is not required; the attacker gains access to all data going through the device, including plaintext in the case of data encryption systems. Experiments show that it is possible to intercept data under realistic conditions at a considerable distance. Many different sorts of devices, including modems and Internet Protocol routers, were found to be vulnerable. A taxonomy of compromising optical emanations is developed, and design changes are described that will successfully block this kind of “Optical Tempest” attack.
Comments and Proposal
A TEMPEST attack refers to the harvesting of electromagnetic emanations from computer equipment for surveillance purposes, and much of the data on the equipment that the NSA uses for this is classified. What I am proposing is essentially a TEMPEST attack on a person’s face and throat. The throat turns out to be an especially good region to look at, as Dr. Charles Jorgensen of NASA reported: in one of the studies mentioned above, his group was able to read a vocabulary of fifteen imagined, unvocalized words with electrodes placed on this area alone, after training a computer on a subject’s own speech samples. This is possible because of the tongue’s central role in speech. It was first reported in 1931 by the American researcher and physician Dr. Edmund Jacobson that when a person thinks in words and sentences, there is electrical activity in the speech musculature detectable by electromyography (EMG). Jacobson was an EMG pioneer and the father of both biofeedback and progressive relaxation. Based on data collected from thousands of patients, he suggested that the eyes and the speech muscles (jaw, lips, and tongue) were the most challenging in the whole body to relax deeply, but that in learning to relax these muscles one could quiet one’s inner visioning or chatter. It’s as if there were a two-way street between mind and body, with electricity acting as a messenger.
The questions then are basically: (1) can electromagnetic (EM) waves emitted by nerves propagate through tissue with sufficient signal-to-noise (S/N) against the background radiation we all emit, (2) at what distance, and (3) with what degree of volition by the subject? Neuroscientist and biophysicist Dr. Allan Frey observed a signal two orders of magnitude larger than the expected background thermal infrared (IR) at a distance of 1 meter, but this was from a dissected nerve being electrically stimulated. An initial proof-of-concept experiment might consist of pointing an antenna-coupled microbolometer with a high frame rate at a person’s throat at short range (1–2 cm) and having them read the digits of pi out loud while discrete sections of the spectrum are analyzed for tiny flashes or spikes that correlate with the timing of vocal activity. Arrays of such devices are currently being explored at the nanoscale, with the goal of incorporating them into high-resolution, high-speed infrared cameras. If these flashes are detected, then there will be some set of magic wavelengths at which the flashes are most prominent, likely needing to be tuned for each person (because of individual neuroanatomy) – perhaps where constructive mixing occurs with whatever other radiation is being given off. Those wavelengths might be close to or in the middle of the background thermal IR, but there are a variety of algorithms to boost S/N. At that point, you start looking for correlations with actual words, and you eventually instruct the person to only think the words, which should give a similar but weaker signal (as it does with electromyography). It could be that attenuation prohibits detection entirely, that a person must physically mouth the words (as with some silent speech interfaces), or that a person must be trained to think very deliberately for this to work.
Interference from other nerves and muscles could also be problematic, such that simply moving around could thwart any useful signal capture; but if each nerve bundle radiates at a slightly different characteristic frequency, the interfering sources might be separable in the spectral domain.
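The analysis step of the proof-of-concept experiment might look something like the following sketch, in which averaging epochs of a single detector channel around known vocal-onset times pulls a weak time-locked flash out of detector noise. All the numbers here (sampling rate, flash shape and amplitude, onset schedule) are illustrative assumptions, not measured values:

```python
import numpy as np

rng = np.random.default_rng(0)

# One hypothetical spectral channel of a fast detector, sampled at 1 kHz for
# 60 s, with weak flashes time-locked to vocal onsets buried in the noise.
fs = 1000                                   # samples per second (assumed)
t = np.arange(0, 60.0, 1.0 / fs)            # 60 s recording
trace = rng.normal(0.0, 1.0, t.size)        # detector noise, sigma = 1
flash = 0.8 * np.exp(-np.arange(0, 0.05, 1.0 / fs) / 0.01)  # 50 ms decay
onsets = np.arange(2.0, 58.0, 1.0)          # one vocal onset per second
for onset in onsets:
    i = int(onset * fs)
    trace[i:i + flash.size] += flash        # per-event S/N of 0.8: not
                                            # visible in the raw trace

# Epoch averaging: noise shrinks as 1/sqrt(N_events) while the
# time-locked flash survives.
half = int(0.05 * fs)                       # +/- 50 ms window
epochs = np.stack([trace[int(o * fs) - half:int(o * fs) + half]
                   for o in onsets])
avg = epochs.mean(axis=0)

print(f"flash at onset in average: {avg[half]:.2f} "
      f"(baseline sigma ~ {avg[:half].std():.2f})")
```

With ~56 events the averaged baseline noise drops to roughly 1/√56 of the raw level, so the 0.8-unit flash stands well clear of it; the same logic underlies the lock-in averaging Gebbie used with his 100 stimulation pulses.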
This proposal to read thoughts at a distance does strike me as fairly ambitious, and my intuition leans towards predicting impossibility, but advances in physics continue to bring previously invisible signals within reach. Erwin Schrödinger predicted that spectroscopy on the scale of a single molecule would always be forbidden by Nature, owing to the fundamental limit of the wavelength of light compared to the size of a typical molecule. Yet breakthroughs in physics and chemistry in the 1980s led to single-molecule spectroscopy, for which the Nobel Prize in Chemistry was awarded in 2014. What I am proposing here fits more squarely in the classical domain of physics than in neuroscience proper, even though we are talking about nerves and living tissue. We don’t have to consider any neural networks, except perhaps in the artificial, in silico sense during the signal processing phase, as some researchers have employed (e.g., Jorgensen). From the perspective of physics, what we are essentially dealing with is a set of moving cables throwing off weak electromagnetic signals in a sea of other moving cables. It may still be a very tough problem, but it is categorically different from the problems posed by practical neuroscience and the brain, and the methodologies for approaching it have been investigated far longer, with applications in other areas that employ or harvest electromagnetic waves.
Professor Alastair Gebbie, the primary researcher on the in vitro frog muscle microwave emission paper from 1997 referenced above, also suggested an in vivo EM emission experiment on a human muscle, except he proposed looking at the leg muscle of a stationary cyclist instead of a person’s throat, “with reference to some physiological process such as pulse rate.” Gebbie was a well-respected pioneer of Fourier transform spectroscopy in the U.K. chemistry community before he passed away. Most of his work concerns atmospheric spectroscopy, and based on a cursory literature search, this frog paper seems to be more of a one-off experiment designed to probe a theoretical model in physics: the pumped phonon model put forth by the German-born British physicist Herbert Fröhlich. This model had apparently shown some early success in explaining the behavior of atmospheric aerosols, which is presumably how it drew Gebbie’s attention. It could be that Gebbie’s group followed up with a human experiment and didn’t find any emissions in vivo, or the frog could have been one of his students’ ideas, the student graduated, and Gebbie got distracted with other work. Even if a signal correlated with pulse rate could be found, that would still not guarantee the fine-grained detail that speech signals would require to distinguish syllables and words by way of correlative feature sets from the spectra, nor the degree of volition by the subject required to achieve useful fidelity. It is perhaps noteworthy as well that the emissions Gebbie discovered are around the terahertz frequency range. This range, on the near side of microwave and the far side of infrared, has received tremendous attention from the scientific community in the past decade, with some applications to medical imaging being explored.
Water absorption remains an issue, but tissue penetration is good enough in some cases to prompt ongoing in vivo imaging studies.
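To put rough numbers on the water absorption problem: Beer-Lambert decay with the 1/e (63% reduction) absorption distances quoted in the Frey excerpt above (15, 30, and 100 μm in water, depending on the micron-IR band) implies severe losses through even a millimeter of overlying tissue. A sketch, under the simplifying assumption that tissue absorbs like water:

```python
import math

# Beer-Lambert attenuation: I(d) = I0 * exp(-d / L), where L is the 1/e
# absorption distance. Frey & Fraser quote L = 15, 30, and 100 um in water
# for three micron-IR bands; tissue is crudely modeled here as water.
def transmitted_fraction(depth_um: float, efold_um: float) -> float:
    return math.exp(-depth_um / efold_um)

for L in (15.0, 30.0, 100.0):
    frac = transmitted_fraction(1000.0, L)
    print(f"L = {L:5.0f} um: 1 mm of water passes {frac:.1e} of the light")
```

Even in the most favorable band (L = 100 μm), only e^-10, about 5 parts in 100,000, survives a millimeter; this is why the surface nerves of the face and throat look more attractive than anything deeper.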
The most famous frog in the history of bioelectricity lived and croaked in eighteenth-century Italy, making its way post-mortem to the lab bench of physicist Luigi Galvani. Galvani elicited the contraction of frog leg muscles by applying the current from a static electricity generator. Alessandro Volta, Galvani’s friend and fellow physicist, took note of the result and both scientists ran pioneering experiments to characterize the nature of this newly discovered “animal electricity.” If Galvani or Volta ever considered that our inner stream of words might be composed of a similar substratum, they surely could not have predicted how long it would take for scientists to discover a means of tapping into those signals and mapping them to language, as we are now witnessing at the start of the 21st century.
From the 1968 classic in the field of silent speech research, Inner Speech and Thought, by A.N. Sokolov at the Institute of Psychology in Moscow, USSR, we find chapter subheadings like “Electromyograms of Concealed Articulation while Reading to Oneself and Listening to Speech” and “Electromyograms of Concealed Articulation during Mental Reproduction and Recollection of Verbal Material”. One wonders if the KGB ever took an interest in Sokolov’s work. The copy I have was translated and published in 1972, edited by Donald B. Lindsley at UCLA. Sokolov also gives a shout-out to Edmund Jacobson and his influence on a crop of Russian scientists after Jacobson’s seminal study was published in 1931: “Subsequently, Jacobson’s experiments found a broad response and confirmation in the work of a number of researchers in our country [U.S.S.R.] and abroad (Yu. S. Yusevich, F. B. Bassin and E.S. Bein, L. A. Novikova, K. Faaborg-Andersen, A. Edfeldt, and others).”
I believe it is reasonable to suppose that silent speech researchers in academia or industry using surface EMG (sEMG) to detect words and establish correlations may not immediately happen upon the idea of exploring potential electromagnetic emanations, as those raise a host of scientific issues that don’t have to be dealt with when sEMG is already giving meaningful signals. Also, these scientists are only beginning to chart useful correlations between electrical activity and words, so there is a huge amount of work right in front of them. The intelligence community, on the other hand, will always have a strong motivation to examine the possibility of detecting inner speech without a subject’s awareness. Physical proximity would still be required, of course, but one can imagine many ways this power might be employed if the interception of a person’s casual stream of thoughts were demonstrated. There could also be some combination of research results in the literature that already shows beyond question that attenuation prohibits any useful signal capture from facial EM emissions. I would appreciate being made aware of any research pointing towards or away from the feasibility of this concept.
In any event, I believe a solid argument could be made for peeking inside what could ultimately turn out to be a scientific Pandora’s Box by screening the EM spectrum for some sign of emissions coming from facial nerves at a short distance. If a concomitant need for such technology does not develop within the unclassified field of silent speech research in academia or industry, then it is conceivable that this technology could be developed by one or more intelligence services without the public’s awareness – a prospect that many might find unsettling. Again, the scientific challenges posed by this problem reside in the domain of physics. While it may be unreasonable to suppose that a secret government program would be far ahead of the public research curve in neuroscience, a comparatively new field, the problem set here might be divided, like the TEMPEST attack, into a wireless signal capture section, in which the instrumentation is chosen, and a signal processing section, in which algorithms are chosen – and in these areas the intelligence and military communities have been immersed at the front lines for many decades. It might behoove the public to be made aware of any groundbreaking developments in this area, but it’s probably safe to assume that if any classified program arrived there first, that knowledge and power would be kept from the public for as long as possible by any country possessing this secret. Given how long these speech signals have been known, any initial sweep of the spectrum would likely cover ground that has been explored already; and unless it could be demonstrated unambiguously that signal collection at any appreciable distance is impossible, the need for this technology would run deep enough to support a well-funded program that would persist so long as some hope of success was held out.
In sum, the problem of mind-reading via remote sensing occupies a curious, disquieting, and potentially paradigm-shifting intersection of technoscientific, humanistic, sociopolitical, and historical axes which ought to prompt further attention from the scientific community and the public at large.
Disclaimer: my background is in organic chemistry, not physics. I have experience with IR, UV/VIS, and NMR spectroscopy to analyze small molecules, but this proposal is well outside of my scientific comfort zone. If I have used a term improperly, mangled a concept, or missed an obvious variable, feedback would be appreciated.
Other notable points of distinction between experimental precedent and what I am proposing:
– A TEMPEST attack decodes emanations from an electrical current, i.e. from electrons flowing, while the action potential of a neuron is an ionic current. The former electromagnetic emissions seem to have been researched fairly intensively in the context of electrical engineering, where unintended electromagnetic “crosstalk” between circuit components can prove troublesome and scramble the smooth operation of electronics. As lightning casts off both visible wavelengths of light and radio waves, so too a neuron or perhaps an entire nerve bundle may emit different wavelengths in addition to those which have been found already between the near-infrared and microwave zones. If this technology were to reach the levels of fidelity already reported with electrodes, then there ought to be an ideal spectral nexus wherein wavelength emission from facial nerves and tissue penetration coincide to maximize S/N. Around this wavelength we might find clustered a set of wavelengths corresponding to different nerves, which would function like antennae or hotspots, broadcasting the contents of your innermost chatterbox to the hidden camera above your face as you lie awake at night wondering who will be the next President.
– The crab nerve study by Allan Frey and the frog muscle experiment by Gebbie both stimulated a nerve with electricity and measured the resulting emission. This may not reflect the intensity of emission in vivo.
– Another reason the face and throat should be preferable to the head and brain for remote sensing is the skull, which interposes a more formidable barrier than the tissue around the throat. The brain’s electrical signals are also known to be convoluted, especially compared with the relatively cleaner signals passing along nerves to muscles, and the same could be expected of any EM emanations. There are in fact ongoing efforts to translate the brain’s signals directly into words, for instance in Professor Robert T. Knight’s lab at UC Berkeley, but these will almost surely require an interface that contacts the scalp directly, if not electrodes implanted in the brain.
– The camera method I am proposing is passive remote sensing. Researchers at Kyoto University and Panasonic recently reported (January 2016) an active remote sensing system that uses millimeter-wave radar to measure a person’s heartbeat remotely in real time “with as high accuracy as electrocardiographs.” Toru Sato, professor of communications and computer engineering at Kyoto University: “Heartbeats aren’t the only signals the radar catches. The body sends out all sorts of signals at once, including breathing and body movement. It’s a chaotic soup of information.” … “Our algorithm differentiates all of that. It focuses on the features of waves including heart beats from the radar signal and calculates their intervals.” I don’t know whether capturing a signal from the action potential of a nerve in vivo with active remote sensing of any kind is feasible, but a few papers from the early 1970s show changes in “axon birefringence” during the action potential of a mounted nerve illuminated with visible light. In J Physiol. 1970 Dec; 211(2): 495–515, L.B. Cohen, B. Hille, and R.D. Keynes report that “[d]uring the nerve impulse the intensity of the light passing the analyser decreased temporarily by 1 part in 10³–10⁶. Signal-averaging techniques were used to obtain an acceptable ratio of signal to noise.” And: “The time course of the decrease in optical retardation was very similar to that of the action potential recorded with an intracellular electrode, suggesting that the retardation closely followed the electrical potential across the membrane.” The incident angle of light seems important, which could complicate in vivo studies. Any method would need to collect information on the time scale of an action potential to be useful for mind-reading. If visible light could work for active remote sensing, then perhaps infrared could as well.
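The signal-averaging remark above can be given a back-of-the-envelope form. Assuming the noise is white and uncorrelated between sweeps (my assumption, not a claim from the Cohen et al. paper), averaging N repeated sweeps of the same signal improves S/N by a factor of √N, so the number of sweeps needed to reach a target S/N can be estimated as follows; all numbers are illustrative:

```python
import math

def sweeps_needed(per_sweep_snr, target_snr):
    """Averaging N independent sweeps of a repeating signal improves S/N
    by sqrt(N) when the noise is white and uncorrelated between sweeps,
    so N = (target / per_sweep) ** 2, rounded up."""
    return math.ceil((target_snr / per_sweep_snr) ** 2)

# Illustrative numbers only: a fractional intensity change giving a
# per-sweep S/N of 0.01 needs a million averaged sweeps to reach S/N 10.
n = sweeps_needed(0.01, 10.0)
print(n)  # 1000000
```

The catch for mind-reading is that averaging presupposes a repeatable signal: free-running inner speech offers no such repetition, so a remote sensor would need adequate single-shot S/N.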
The near-IR window between 650 and 950 nanometers mentioned in the reference above might be a good place to begin exploring this concept. The terahertz frequency range noted previously in connection with Gebbie’s frog muscle experiment might also provide a useful window for in vivo studies of active remote sensing, if tissue penetration allows for it and if terahertz frequencies are capable of tracing an action potential in the manner reported by Cohen et al. for visible light.
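The “spectral nexus” idea can be illustrated with a toy calculation: multiply a hypothetical emission spectrum by a Beer–Lambert transmittance through the overlying tissue, and look for the wavelength that maximizes the product. The Gaussian emission peak, the window edges, the attenuation coefficients, and the nerve depth below are all invented for this sketch, since no measured values exist here:

```python
import math

# Hypothetical, illustrative numbers only: neither the emission spectrum
# of a facial nerve nor its depth below the skin is established.
DEPTH_CM = 0.5  # assumed depth of the nerve beneath the skin surface

def emission(wavelength_nm):
    """Toy emission spectrum: a Gaussian peak centered at 900 nm (assumption)."""
    return math.exp(-((wavelength_nm - 900.0) / 150.0) ** 2)

def attenuation_per_cm(wavelength_nm):
    """Toy tissue attenuation: low inside the 650-950 nm near-IR window,
    high outside it. Real curves (water, hemoglobin) are far more structured."""
    if 650 <= wavelength_nm <= 950:
        return 2.0   # cm^-1, weak absorption inside the optical window
    return 20.0      # cm^-1, strong absorption outside it

def surface_signal(wavelength_nm):
    """Beer-Lambert transmittance times emission: the figure of merit."""
    return emission(wavelength_nm) * math.exp(
        -attenuation_per_cm(wavelength_nm) * DEPTH_CM)

# Scan 400-1200 nm for the wavelength maximizing the signal at the surface.
best = max(range(400, 1201), key=surface_signal)
```

With these made-up inputs the optimum falls at the emission peak inside the window, but the point of the exercise is only that emission and penetration must be optimized jointly, not separately.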
Hiroyuki Sakai at Panasonic, co-author of the heartbeat via radar study, describes the rationale for their research: “Taking measurements with sensors on the body can be stressful and troublesome, because you have to stop what you’re doing.” … “What we tried to make was something that would offer people a way to monitor their body in a casual and relaxed environment.” Perhaps a similar motivation could stimulate silent speech researchers to explore remote sensing. Jorgensen has commented on the problem of sensor comfort in the context of a study designed to assess the potential of silent speech communication to enable first responders in “acoustically harsh environments”: “A comfortable and realistic method would have to be found for reliably fitting a user with sensors. The sensors would need to interoperate with other equipment the user required (e.g., a breathing mask). The sensors would have to remain in place during severe physical exertion and be resistant (or immune) to perspiration.” Interest in potential applications of this technology to persons with speech disabilities is also noted. – NASA Technical Memorandum; TM-2005-213471; Betts, Jorgensen; November 2005
Keeping a diary of one’s actual verbal thoughts may also appeal to some as an aid to self-awareness, but for this purpose, and for some purposes the intelligence community might envision, such as establishing bona fides, interrogation, or discrediting, the technology would need to record a person’s internal verbiage passively. Defining accuracy when a person is not focusing their internal voice raises a separate set of issues, but I submit the following test for consideration. Place a detector 10 cm from the face of a subject who casually thinks through the verbal components of a brief story, in a linear fashion, over the course of approximately one minute, allowing the system to be calibrated on the subject’s own speech samples over a period of casual monitoring long enough to maximize the potential fidelity. If the detected electromagnetic signals produce a textual output that allows a second person, the interpreter, to relay the specific details of the story in a way that satisfies some objective, detail-matching criteria, then the Pandora’s box of speech science will have been opened. Or the Holy Grail will have been achieved, depending on one’s perspective. Certainly the history of science contains many examples of Grails which are also full of mischievous spirits unforeseen at the time of discovery. The laws of physics may very well forbid these particular criteria from ever being met, but again, that would need to be demonstrated unambiguously to forestall a powerful motivation for the intelligence community to research this topic. Like any technology it may be put to good or ill, but in the broad context of debates surrounding the surveillance state, it is reasonable to assume that many citizens would not feel entirely comfortable with a government that believed these wireless signals were free for the taking by virtue of being passively broadcast.
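The “objective, detail-matching criteria” could be made concrete in many ways. One minimal, entirely hypothetical sketch: pre-register a list of key story details and score the interpreter’s relayed account by the fraction it recovers.

```python
def detail_recall(key_details, relayed_text, threshold=0.8):
    """Score the interpreter's account by the fraction of pre-registered
    key details (each a short phrase) that appear verbatim in it, and
    declare the test passed if recall meets the threshold."""
    text = relayed_text.lower()
    hits = sum(1 for detail in key_details if detail.lower() in text)
    recall = hits / len(key_details)
    return recall, recall >= threshold

# Hypothetical story details and a relayed account:
details = ["red bicycle", "thunderstorm", "lost key", "green door", "old map"]
recall, passed = detail_recall(
    details,
    "She rode a red bicycle through a thunderstorm to the green door, "
    "clutching an old map.")
# recall = 0.8 (4 of 5 details recovered), so this account just passes.
```

A real protocol would need fuzzy matching, paraphrase handling, and blinded judges; exact substring matching is only a placeholder for whatever criteria the experimenters pre-register.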
The crab nerve emission study was sponsored by the Office of Naval Research, and Dr. Allan Frey also did other work on the microwave auditory effect, which until not long ago I actually believed was some kind of nonsensical meme started by schizophrenics and/or conspiracy theorists. I suspected this because of all the damn “Voice-to-Skull (V2K)” conspiracy websites, which make researching the topic of electromagnetic interactions with biological tissue a pain if one isn’t using a literature-focused search tool. I also uncovered reams of dire warnings about exposure to cell phones. Well, as it turns out, the microwave auditory effect is real, and there have been full-blown scientific symposia on the topic. Getting bogged down in a flood of “V2K” hits prompted me to wonder whether Big Brother had developed some kind of automated conspiracy-website generator to obfuscate research into this general area. I also got a lot of hits for websites extolling the healing potential of crystals and energy medicine, which are suggested to act on tissue through light of some kind; some of them use the term “biophoton.” The simplest explanation for all of this is clearly that Big Brother’s automated website generator has a pull-down menu: V2K conspiracy, new age, or hypochondriac.