Of voices, frogs, and electromagnetism

I would like to offer a research proposal to examine the feasibility of creating a mind-reading camera. Let’s invert the usual order and start with the references.

References

Electrical measurements of neuromuscular states during mental activities. VII. Imagination, recollection, and abstract thinking involving the speech musculature; Jacobson; American Journal of Physiology, Vol 97, 1931, 200-209

Abstract

Experiments with five subjects upon whom electrical tests were successfully made confirm the view that specific muscles contract during each mental process. One electrode was inserted in the tip of the tongue, the other under the mucosa in the cheek. The minute contractions in the musculature of speech can be satisfactorily examined if the subject has been trained to relax and if the apparatus is sufficiently sensitive and not affected by disturbances irrelevant to the problem. The latter condition is difficult to secure. On the assumption that action-potentials in electrodes connected in muscle tissue signify the occurrence of contraction of fibers it may be concluded that concrete or abstract thinking is an inner speech.

——————————

Speech Recognition for Vocalized and Subvocal Modes of Production using Surface EMG Signals from the Neck and Face; Meltzner, De Luca, et al.; Interspeech (?), 2008, 2667–2670

Abstract

We report automatic speech recognition accuracy for individual words using eleven surface electromyographic (sEMG) recording locations on the face and neck during three speaking modes: vocalized, mouthed, and mentally rehearsed. An HMM based recognition system was trained and tested on a 65 word vocabulary produced by 9 American English speakers in all three speaking modes. Our results indicate high sEMG-based recognition accuracy for the vocalized and mouthed speaking modes (mean rates of 92.1% and 86.7% respectively), but an inability to conduct recognition on mentally rehearsed speech due to a lack of sufficient sEMG activity.

* from Introduction: “Further reducing the amount of overt speech activity, Jorgensen and Binstead [7] claimed to have developed a means of recognizing sEMG signals measured from the neck surface collected while subjects mentally rehearsed speech (i.e. mentally visualized speaking) They reported a mean accuracy rate of 72% on a 15 word vocabulary recorded from 5 individual speakers.”

* DARPA Release Information: Approved for Public Release, Distribution Unlimited.

——————————

Sub Auditory Speech Recognition Based on EMG/EPG Signals; Jorgensen, Lee, Agabon; Journal title (?), 2003 (?), pages (?) – (my pdf is stripped down)

Abstract

Sub-vocal electromyogram/electro palatogram (EMG/EPG) signal classification is demonstrated as a method for silent speech recognition. Recorded electrode signals from the larynx and sublingual areas below the jaw are noise filtered and transformed into features using complex dual quad tree wavelet transforms. Feature sets for six sub-vocally pronounced words are trained using a trust region scaled conjugate gradient neural network. Real time signals for previously unseen patterns are classified into categories suitable for primitive control of graphic objects. Feature construction, recognition accuracy and an approach for extension of the technique to a variety of real world application areas are presented.

——————————

Modeling Coarticulation in EMG-based Continuous Speech Recognition; Schultz, Wand; (Preprint submitted to Speech Communication Journal December 4, 2009)

Abstract

This paper discusses the use of surface electromyography for automatic speech recognition. Electromyographic signals captured at the facial muscles record the activity of the human articulatory apparatus and thus allow to trace back a speech signal even if it is spoken silently. Since speech is captured before it gets airborne, the resulting signal is not masked by ambient noise. The resulting Silent Speech Interface has the potential to overcome major limitations of conventional speech-driven interfaces: it is not prone to any environmental noise, allows to silently transmit confidential information, and does not disturb bystanders.

We describe our new approach of phonetic feature bundling for modeling coarticulation in EMG-based speech recognition and report results on the EMG-PIT corpus, a multiple speaker large vocabulary database of silent and audible EMG speech recordings, which we recently collected. Our results on speaker-dependent and speaker-independent setups show that modeling the interdependence of phonetic features reduces the word error rate of the baseline system by over 33% relative. Our final system achieves 10% word error rate for the best-recognized speaker on a 101-word vocabulary task, bringing EMG-based speech recognition within a useful range for the application of silent speech interfaces.

——————————

Corticomuscular coherence is tuned to the spontaneous rhythmicity of speech at 2-3 Hz; Ruspantini, Salmelin, et al.; J Neurosci. 2012 Mar 14; 32(11):3786-90

Abstract

Human speech features rhythmicity that frames distinctive, fine-grained speech patterns. Speech can thus be counted among rhythmic motor behaviors that generally manifest characteristic spontaneous rates. However, the critical neural evidence for tuning of articulatory control to a spontaneous rate of speech has not been uncovered. The present study examined the spontaneous rhythmicity in speech production and its relationship to cortex-muscle neurocommunication, which is essential for speech control. Our MEG results show that, during articulation, coherent oscillatory coupling between the mouth sensorimotor cortex and the mouth muscles is strongest at the frequency of spontaneous rhythmicity of speech at 2-3 Hz, which is also the typical rate of word production. Corticomuscular coherence, a measure of efficient cortex-muscle neurocommunication, thus reveals behaviorally relevant oscillatory tuning for spoken language.

——————————

Dynamic moment analysis of the extracellular electric field of a biologically realistic spiking neuron; Milstein, Koch; Neural Comput. 2008 Aug; 20(8):2070-84

Abstract

Based on the membrane currents generated by an action potential in a biologically realistic model of a pyramidal, hippocampal cell within rat CA1, we perform a moment expansion of the extracellular field potential. We decompose the potential into both inverse and classical moments and show that this method is a rapid and efficient way to calculate the extracellular field both near and far from the cell body. The action potential gives rise to a large quadrupole moment that contributes to the extracellular field up to distances of almost 1 cm. This method will serve as a starting point in connecting the microscopic generation of electric fields at the level of neurons to macroscopic observables such as the local field potential.

——————————

Biophotons as neural communication signals demonstrated by in situ biophoton autography; Sun, Dai, et al.; Photochem. Photobiol. Sci., 2010, 9, 315–322

Abstract

Cell to cell communication by biophotons has been demonstrated in plants, bacteria, animal neutrophil granulocytes and kidney cells. Whether such signal communication exists in neural cells is unclear. By developing a new biophoton detection method, called in situ biophoton autography (IBA), we have investigated biophotonic activities in rat spinal nerve roots in vitro. We found that different spectral light stimulation (infrared, red, yellow, blue, green and white) at one end of the spinal sensory or motor nerve roots resulted in a significant increase in the biophotonic activity at the other end. Such effects could be significantly inhibited by procaine (a regional anaesthetic for neural conduction block) or classic metabolic inhibitors, suggesting that light stimulation can generate biophotons that conduct along the neural fibers, probably as neural communication signals. The mechanism of biophotonic conduction along neural fibers may be mediated by protein–protein biophotonic interactions. This study may provide a better understanding of the fundamental mechanisms of neural communication, the functions of the nervous system, such as vision, learning and memory, as well as the mechanisms of human neurological diseases.

——————————

Emission of Mitochondrial Biophotons and their Effect on Electrical Activity of Membrane via Microtubules; Rahnama, Salari, et al.; Journal title (?), year (?), pages (?) – (my pdf is stripped down)

Abstract

In this paper we argue that, in addition to electrical and chemical signals propagating in the neurons of the brain, signal propagation takes place in the form of biophoton production. This statement is supported by recent experimental confirmation of photon guiding properties of a single neuron. We have investigated the interaction of mitochondrial biophotons with microtubules from a quantum mechanical point of view. Our theoretical analysis indicates that the interaction of biophotons and microtubules causes transitions/fluctuations of microtubules between coherent and incoherent states. A significant relationship between the fluctuation function of microtubules and alpha-EEG diagrams is elaborated on in this paper. We argue that the role of biophotons in the brain merits special attention.

——————————

Electromagnetic Emission at Micron Wavelengths from Active Nerves; Frey, Fraser; Biophysical Journal, 1968, 731-734

Abstract

In recent years there has been experimental work and speculation bearing upon the significance in neural functioning of electromagnetic energy in the region of the spectrum between 0.3 and 10 μ. We demonstrate, in this experiment, micron wavelength electromagnetic emission from active live crab nerves as compared to inactive live and dead nerves. Further, the data indicate that the active nerve emission is caused by specific biophysical reactions rather than being simply black-body radiation.

* from Results: “Third, it was found that any radiation escaping from the nerve must have come from a location near the surface and must have been in certain bands of wavelength. The only qualification would be in the event that the membranes themselves act as wave guides for the micron wavelength energy. The basis of this statement is the fact that micron wavelength energy is largely absorbed by water. It can traverse a distance in water that would allow surface emission only in the spectral regions from 10.5 to 6.5 μ, 5.5 to 3.5 μ, and a very narrow band at 2.5 μ. Since even in these bands the distances to 63% reduction in amplitude are only 15, 30, and 100 μ, respectively, the surface of the nerve must be the locus of the emission.”

* from Materials and Methods: “The radiation detecting system used was a Barnes R-8T1 infrared radiometer (Barnes Engineering Co., Stamford, Conn.). This consisted of an optical unit and an electronic unit. The optical unit employed a modified Cassegrainian reflecting telescope to gather and focus the incident radiation. The electronic unit consisted of a thermistor bolometer detector, a solid state preamplifier, amplifier, and a synchronous rectifier. The passband of the lens bolometer system is shown in Fig. 1. The total system was quite sensitive with a detector noise power of 2.6 × 10⁻ W [exponent lost in my copy] in a one cycle bandwidth at 100 cps. The output signal was integrated over 1 sec intervals with a Dymec model 2401B integrating digital voltmeter (Dymec Div., Hewlett-Packard Co., Palo Alto, Calif.). The 1 sec integrations were then averaged further with a computer. 100 cm away from the detector, at the focal point of the detecting system, a standard nerve mount was fastened. …”

——————————

Nonthermal Microwave Emission from Frog Muscles; Gebbie, Miller; International Journal of Infrared and Millimeter Waves, Vol 18, No. 5, 1997, 951-957

Abstract

Emission has been detected from electrically stimulated frog gastrocnemius muscles. It had wavenumbers certainly below 50 cm⁻¹, and arguably less than 5 cm⁻¹. The radiation has been shown to be nonthermal, thus originating in some subsystem rather than in the muscle as a whole. This is in line with a pumped phonon prediction by Fröhlich.

* note: wavelength is the reciprocal of wavenumber, i.e. λ [μm] = 10,000 / ν̃ [cm⁻¹], so 50 cm⁻¹ = 200 μm and 5 cm⁻¹ = 2000 μm  |  far infrared, according to wikipedia: 25-1000 μm  |  terahertz range, according to wikipedia: 100-1000 μm

* from Conclusions: “The limitation in data came entirely from the limited lifetime of frog muscles and it is clear that a true in vivo experiment should now be planned. In vivo experiments would allow sufficiently long time series to be recorded from which spectra could be derived. The use of a larger muscle could also result in stronger emission signals, reducing the observation time required for experiments. Possible experiments could involve the observation of emission from the flexing of some human muscle with reference to some physiological process such as pulse rate. Observation of emission from the leg muscle of a stationary cyclist suggests itself.”

* from Experimental: “… The muscle was lightly stretched so that contraction was isometric, and the nerve lay across platinum electrodes. The board was clamped vertically, with the hole exposed to the window of the detector and with the nerve and electrodes on the far side of the board from the detector. The belly of the muscle was then about 2 cm from the face of the detector window. The preparation was kept moist with Ringer’s Solution and the experiments were conducted at laboratory temperature of about 25°C. The detector was an indium antimonide ‘hot electron’ bolometer (Queen Mary Instruments, London) cooled to 4.2 K, having a noise level of 10^-12 W Hz^(-1/2) and a spectral pass band from 0 to 50 cm⁻¹. Stimulation of the muscle was achieved by applying 2 V pulses of 5 ms duration at a repetition rate of about 1 Hz. The effectiveness of the pulses was seen from muscle twitching. Each experiment consisted of applying 100 pulses, and the resultant signal was fed through a Lock-In amplifier (Stanford Research SR350) to a PC where the signal was sampled as a time series using the pulses as reference. These do not provide strict phase locking but the consequence of this is only minor distortion of the individual power ordinates in the spectrum. Null experiments were carried out using filter paper moistened with Ringer’s solution in place of the muscle.”

——————————

Second window for in vivo imaging; Smith, Mancini, Nie; Nature Nanotechnology, Vol 4, Nov 2009, 710-711

Subtitle:  Enhanced fluorescence from carbon nanotubes and advances in near-infrared cameras have opened up a new wavelength window for small animal imaging.

First paragraph: “Near-infrared light (700–2,500 nm) can penetrate biological tissues such as skin and blood more efficiently than visible light because these tissues scatter and absorb less light at longer wavelengths. Simply hold your hand in sunlight and your fingers will glow red owing to the preferential transmission of red and near infrared light. At wavelengths longer than 950 nm, however, this effect diminishes owing to increased absorption by water and lipids. Nonetheless, a clear window exists at wavelengths between 650 nm and 950 nm for optical imaging of live animals 1,2 (Fig. 1). In practice, however, this window is not optimal because tissue autofluorescence produces substantial background noise and the tissue penetration depth is limited to between 1 and 2 cm (ref. 3).”

——————————

Near-infrared signals associated with electrical stimulation of peripheral nerves; Fantini, Bergethon; Proc SPIE Int Soc Opt Eng. 2009 January 1; 7174. doi:10.1117/12.809428

Abstract

We report our studies on the optical signals measured non-invasively on electrically stimulated peripheral nerves. The stimulation consists of the delivery of 0.1 ms current pulses, below the threshold for triggering any visible motion, to a peripheral nerve in human subjects (we have studied the sural nerve and the median nerve). In response to electrical stimulation, we observe an optical signal that peaks at about 100 ms post-stimulus, on a much longer time scale than the few milliseconds duration of the electrical response, or sensory nerve action potential (SNAP). While the 100 ms optical signal we measured is not a direct optical signature of neural activation, it is nevertheless indicative of a mediated response to neural activation. We argue that this may provide information useful for understanding the origin of the fast optical signal (also on a 100 ms time scale) that has been measured non-invasively in the brain in response to cerebral activation. Furthermore, the optical response to peripheral nerve activation may be developed into a diagnostic tool for peripheral neuropathies, as suggested by the delayed optical signals (average peak time: 230 ms) measured in patients with diabetic neuropathy with respect to normal subjects (average peak time: 160 ms).

——————————

Compromising Electromagnetic Emanations of Wired and Wireless Keyboards; Vuagnoux, Pasini; Journal title (?), year (?), pages (?) – (my pdf is stripped down)

Abstract

Computer keyboards are often used to transmit confidential data such as passwords. Since they contain electronic components, keyboards eventually emit electromagnetic waves. These emanations could reveal sensitive information such as keystrokes. The technique generally used to detect compromising emanations is based on a wide-band receiver, tuned on a specific frequency. However, this method may not be optimal since a significant amount of information is lost during the signal acquisition. Our approach is to acquire the raw signal directly from the antenna and to process the entire captured electromagnetic spectrum. Thanks to this method, we detected four different kinds of compromising electromagnetic emanations generated by wired and wireless keyboards. These emissions lead to a full or a partial recovery of the keystrokes. We implemented these sidechannel attacks and our best practical attack fully recovered 95% of the keystrokes of a PS/2 keyboard at a distance up to 20 meters, even through walls. We tested 12 different keyboard models bought between 2001 and 2008 (PS/2, USB, wireless and laptop). They are all vulnerable to at least one of the four attacks. We conclude that most of modern computer keyboards generate compromising emanations (mainly because of the manufacturer cost pressures in the design). Hence, they are not safe to transmit confidential information.

——————————

Information Leakage from Optical Emanations; Loughry, Umphress; ACM Transactions on Information and System Security, Vol. ?, No. ?, Month Year, Pages ?–?

Abstract

A previously unknown form of compromising emanations has been discovered. LED status indicators on data communication equipment, under certain conditions, are shown to carry a modulated optical signal that is significantly correlated with information being processed by the device. Physical access is not required; the attacker gains access to all data going through the device, including plaintext in the case of data encryption systems. Experiments show that it is possible to intercept data under realistic conditions at a considerable distance. Many different sorts of devices, including modems and Internet Protocol routers, were found to be vulnerable. A taxonomy of compromising optical emanations is developed, and design changes are described that will successfully block this kind of “Optical Tempest” attack.

——————————

Comments and Proposal

A TEMPEST attack refers to the harvesting of electromagnetic emanations from computer equipment for surveillance purposes, and much of the data on the equipment that the NSA uses for this is classified. What I am proposing is essentially a TEMPEST attack on a person’s face and throat. The throat turns out to be an especially good region to look at, as Dr. Charles Jorgensen of NASA reported: in one study mentioned above, a vocabulary of fifteen imagined, unvocalized words was read with electrodes placed on this area alone, after training a computer with each subject’s own speech samples. This is possible because of the tongue’s central role in speech. It was first reported in 1931 by the American researcher and physician Dr. Edmund Jacobson that when a person thinks in words and sentences, there is electrical activity in the speech musculature detectable by electromyography (EMG). Jacobson was an EMG pioneer and the father of both biofeedback and progressive relaxation. Based on data collected from thousands of patients, he suggested that the eyes and the speech muscles (jaw, lips, and tongue) were the most difficult muscles in the body to relax deeply, but that in learning to relax these muscles one could quiet his or her inner visioning or chatter. It’s as if there were a two-way street between mind and body, with electricity acting as a messenger.

The questions then are basically: (1) can electromagnetic (EM) waves emitted by nerves propagate through tissue to provide sufficient signal-to-noise (S/N) against the background radiation we all emit, (2) at what distance, and (3) with what degree of volition by the subject. Neuroscientist and biophysicist Dr. Allan Frey observed a signal two orders of magnitude larger than the expected background thermal infrared (IR) at a distance of 1 meter, but this was from a dissected nerve being electrically stimulated. An initial proof-of-concept experiment might consist of something like: pointing an antenna-coupled microbolometer with a high frame rate at a person’s throat at short range (1-2 cm), and having them read the digits of pi or something out loud while discrete sections of the spectrum are analyzed for tiny flashes or spikes that correlate with the timing of vocal activity. Arrays of such devices are currently being explored on the nanoscale, with the goal of incorporating them into high-resolution, high-speed infrared cameras. If these flashes are detected, then there will be some set of magic wavelengths, likely needing to be tuned for each person (because of individual neuroanatomy), at which the flashes are most prominent – where, perhaps, constructive mixing occurs with whatever other radiation is being given off. Those wavelengths might be close to or in the middle of the background thermal IR, but there are a variety of algorithms to boost S/N. At that point, you start looking for correlations with actual words, and you eventually instruct the person to only think the words, which should give a similar but weaker signal (as it does with electromyography). It could be that attenuation prohibits detection entirely, that a person must physically mouth the words (as with some silent speech interfaces), or that a person must be trained to think very deliberately for this to work. Interference from other nerves and muscles could also be problematic, such that simply moving around could thwart any useful signal capture, but if the nerve bundle itself is radiating at all, then each bundle is likely to emit at a slightly different frequency.
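
To make the analysis step concrete, here is a minimal sketch in Python of how the correlation hunt might look, assuming a hypothetical recording in which bands[i, t] holds the radiance time series of spectral band i and voiced[t] is a 0/1 voice-activity reference taken from a microphone (all names are illustrative; this is not an actual instrument API):

import numpy as np

def score_bands(bands, voiced, n_shuffles=1000, seed=0):
    # Correlate each spectral band with the timing of vocal activity and
    # compare against a shuffled control that breaks the alignment.
    rng = np.random.default_rng(seed)
    v = (voiced - voiced.mean()) / voiced.std()
    scores = []
    for x in bands:
        x = (x - x.mean()) / x.std()
        r = np.dot(x, v) / len(v)  # correlation with vocal timing
        # Null distribution: random circular shifts of the reference keep
        # its autocorrelation but destroy any true alignment.
        null = [np.dot(x, np.roll(v, rng.integers(len(v)))) / len(v)
                for _ in range(n_shuffles)]
        z = (r - np.mean(null)) / np.std(null)
        scores.append(z)
    return np.array(scores)  # large z marks candidate 'magic wavelength' bands

Whether any band would survive such a test against the thermal background is, of course, the whole question.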

This proposal to read thoughts at a distance does strike me as fairly ambitious, and my intuition would lean towards predicting impossibility, but advances in physics do continue to bring previously invisible signals within reach. Erwin Schrödinger predicted that spectroscopy on the scale of a single molecule would always be forbidden by Nature, owing to the fundamental limit of the wavelength of light as compared to the size of a typical molecule. Yet breakthroughs in physics and chemistry during the 1980s led to single-molecule spectroscopy, foundational work recognized in the 2014 Nobel Prize in Chemistry (awarded for super-resolved fluorescence microscopy). What I am proposing here could be said to fit more squarely in the classical domain of physics than in neuroscience proper, even though we are talking about nerves and living tissue. We don’t have to consider any neural networks, except perhaps in the artificial, in silico sense during the signal processing phase, which some researchers have employed (e.g., Jorgensen). From the perspective of physics, what we are essentially dealing with is a set of moving cables throwing off weak electromagnetic signals in a sea of other moving cables. It may still be a very tough problem, but it is a categorically different problem from the one that practical neuroscience and the brain present, and the methodologies for approaching it have been investigated far longer, with application to other areas that employ or harvest electromagnetic waves.

Professor Alastair Gebbie, the primary researcher on the in vitro frog muscle microwave emission paper from 1997 referenced above, also suggested an in vivo EM emission experiment on a human muscle, except that he proposed looking at the leg muscle of a stationary cyclist instead of a person’s throat, “with reference to some physiological process such as pulse rate.” Gebbie was a well-respected pioneer of Fourier Transform spectroscopy in the chemistry community in the U.K. before he passed away. Most of his work consists of research into atmospheric spectroscopy, and based on a cursory literature search, this frog paper seems to be more of a one-off experiment designed to probe a theoretical model in physics: the pumped phonon model, put forth by the German-born British physicist Herbert Fröhlich. This model had apparently shown some early success in explaining the behavior of atmospheric aerosols, which is presumably how it drew Gebbie’s attention. It could be that Gebbie’s group followed up with a human experiment and didn’t find any emissions in vivo, or the frog could have been a student’s idea; the student graduated, and Gebbie moved on to other work. Even if they had found a signal correlated with pulse rate, that would still not guarantee the fine-grained detail that speech signals would require to distinguish syllables and words by establishing correlative feature sets from the spectra, nor would it say anything about the degree of volition required of the subject to achieve useful fidelity. It is perhaps noteworthy as well that the emissions Gebbie discovered are around the terahertz frequency range. This range, on the near side of microwave and the far side of infrared, has been receiving a tremendous amount of attention from the scientific community in the past decade, with some applications to medical imaging being explored. Water absorption remains an issue, but tissue penetration is good enough in some cases to prompt ongoing in vivo imaging studies.

The most famous frog in the history of bioelectricity lived and croaked in eighteenth-century Italy, making its way post-mortem to the lab bench of the physician and anatomist Luigi Galvani. Galvani elicited the contraction of frog leg muscles by applying the current from a static electricity generator. The physicist Alessandro Volta, Galvani’s friend, took note of the result, and both scientists ran pioneering experiments to characterize the nature of this newly discovered “animal electricity.” If Galvani or Volta ever considered that our inner stream of words might be composed of a similar substratum, they surely could not have predicted how long it would take for scientists to discover a means of tapping into those signals and mapping them to language, as we are now witnessing at the start of the 21st century.

From the 1968 classic in the field of silent speech research, Inner Speech and Thought, by A.N. Sokolov at the Institute of Psychology in Moscow, USSR, we find chapter subheadings like “Electromyograms of Concealed Articulation while Reading to Oneself and Listening to Speech” and “Electromyograms of Concealed Articulation during Mental Reproduction and Recollection of Verbal Material”. One wonders if the KGB ever took an interest in Sokolov’s work. The copy I have was translated and published in 1972, edited by Donald B. Lindsley at UCLA. Sokolov also gives a shout-out to Edmund Jacobson and his influence on a crop of Russian scientists after Jacobson’s seminal study was published in 1931: “Subsequently, Jacobson’s experiments found a broad response and confirmation in the work of a number of researchers in our country [U.S.S.R.] and abroad (Yu. S. Yusevich, F. B. Bassin and E.S. Bein, L. A. Novikova, K. Faaborg-Andersen, A. Edfeldt, and others).”

I believe it is reasonable to suppose that silent speech researchers in academia or industry using surface EMG (sEMG) to detect words and establish correlations may not immediately hit upon the idea of exploring potential electromagnetic emanations, as those raise a host of other scientific issues that don’t have to be dealt with when sEMG is already giving meaningful signals. Also, these scientists are only beginning to chart useful correlations between electrical activity and words, and so there is a huge amount of work right in front of them. The intelligence community, on the other hand, will always have a strong motivation to examine the possibility of detecting inner speech without a subject’s awareness. Physical proximity would still be required, of course, but one can imagine many ways this power might be employed if the interception of a person’s casual stream of thoughts were demonstrated. There could also be some combination of research results in the literature that already demonstrates beyond question that attenuation prohibits any useful signal capture from facial EM emissions. I would appreciate being made aware of any research pointing towards or away from the feasibility of this concept.

In any event, I believe a solid argument could be made for peeking inside what could ultimately turn out to be a scientific Pandora’s box by screening the EM spectrum for some sign of emissions coming from facial nerves at a short distance. If a concomitant need for such technology does not develop within the unclassified field of silent speech research in academia or industry, then it is conceivable that this technology could be developed by one or more intelligence services without the public’s awareness – a prospect that many might find unsettling. Again, the scientific challenges posed by this problem reside in the domain of physics. While it may be unreasonable to suppose that a secret government program would be far ahead of the public research curve in neuroscience, a comparatively new field, the problem set here might be divided, like the TEMPEST attack, into a wireless signal capture section, in which the instrumentation is chosen, and a signal processing section, in which algorithms are chosen, and in these areas the intelligence and military communities have been immersed at the front lines for many decades. It might behoove the public to be made aware of any groundbreaking developments in this area, but it is probably safe to assume that if any classified program arrived there first, that knowledge and power would be kept from the public for as long as possible by any country possessing the secret. Given how long these speech signals have been known, any initial sweep of the spectrum will likely cover ground that has already been explored, and unless it could be demonstrated unambiguously that signal collection at any appreciable distance is impossible, the need for this technology would run deep enough to support a well-funded program that would persist for as long as some hope of success was held out.

In sum, the problem of mind-reading via remote sensing occupies a curious, disquieting, and potentially paradigm-shifting intersection of technoscientific, humanistic, sociopolitical, and historical axes which ought to prompt further attention from the scientific community and the public at large.

Disclaimer: my background is in organic chemistry, not physics. I have experience with IR, UV/VIS, and NMR spectroscopy to analyze small molecules, but this proposal is well outside of my scientific comfort zone. If I have used a term improperly, mangled a concept, or missed an obvious variable, feedback would be appreciated.

Other notable points of distinction between experimental precedent and what I am proposing:

– A TEMPEST attack decodes emanations from an electrical current, i.e. from electrons flowing, while the action potential of a neuron is an ionic current. The former electromagnetic emissions seem to have been researched fairly intensively in the context of electrical engineering, where unintended electromagnetic “crosstalk” between circuit components can prove troublesome and scramble the smooth operation of electronics. As lightning casts off both visible wavelengths of light and radio waves, so too a neuron or perhaps an entire nerve bundle may emit different wavelengths in addition to those which have been found already between the near-infrared and microwave zones. If this technology were to reach the levels of fidelity already reported with electrodes, then there ought to be an ideal spectral nexus wherein wavelength emission from facial nerves and tissue penetration coincide to maximize S/N. Around this wavelength we might find clustered a set of wavelengths corresponding to different nerves, which would function like antennae or hotspots, broadcasting the contents of your innermost chatterbox to the hidden camera above your face as you lie awake at night wondering who will be the next President.

– The crab nerve study by Allan Frey and the frog muscle experiment by Gebbie both stimulate a nerve with electricity and measure the resulting emission. This may not reflect the intensity of emission in vivo.

– Another reason the face and throat should be preferable for remote sensing over the head and brain is the skull, which interposes a more formidable barrier than the tissue around the throat. Also, the brain’s electrical signals are known to be convoluted, especially compared with the relatively clean electrical signals passing along nerves to muscles, and so the same would be expected for any potential EM emanations. There are in fact ongoing efforts to translate the brain’s signals directly into words, for instance in Professor Robert T. Knight’s lab at UC Berkeley. But this will almost surely require an interface that contacts the scalp directly, if not electrodes implanted in the brain.

– The camera method I am proposing is passive remote sensing. Researchers at Kyoto University and Panasonic recently reported (January 2016) an active remote sensing system which uses millimeter wave radar to measure a person’s heartbeat remotely in real time “with as high accuracy as electrocardiographs.” Toru Sato, professor of communications and computer engineering at Kyoto University: “Heartbeats aren’t the only signals the radar catches. The body sends out all sorts of signals at once, including breathing and body movement. It’s a chaotic soup of information.” … “Our algorithm differentiates all of that. It focuses on the features of waves including heart beats from the radar signal and calculates their intervals.” I don’t know if capturing a signal from the action potential of a nerve in vivo with active remote sensing of any kind is feasible, but there are a few papers from the early 1970s that show changes in “axon birefringence” during the action potential of a mounted nerve that is illuminated with visible light. From J Physiol. 1970 Dec; 211(2): 495–515, researchers L.B. Cohen, B. Hille, and R.D. Keynes report that “[d]uring the nerve impulse the intensity of the light passing the analyser decreased temporarily by 1 part in 10^3–10^6. Signal-averaging techniques were used to obtain an acceptable ratio of signal to noise.” And: “The time course of the decrease in optical retardation was very similar to that of the action potential recorded with an intracellular electrode, suggesting that the retardation closely followed the electrical potential across the membrane.” The incident angle of light seems important, which could complicate in vivo studies. Any method would need to collect information on the time scale of an action potential to be useful for mind-reading. If visible light could work for active remote sensing, then perhaps infrared could as well. The near-IR window mentioned in the reference above, between 650 and 950 nanometers, might be a good place to begin an exploration of this concept. The terahertz frequency range noted previously with Gebbie’s frog muscle experiment might also provide a useful window for in vivo studies of active remote sensing, if tissue penetration allows for it, and if terahertz frequencies are capable of tracing an action potential in the manner reported here by Cohen et al. for visible light.
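
The quoted 1-part-in-10^3 to 10^6 changes were only recoverable through signal averaging, and the same consideration would apply to any optical or EM probe discussed here: averaging N stimulus-locked trials shrinks uncorrelated noise by roughly the square root of N. A minimal sketch of event-locked averaging, with hypothetical variable names (it assumes the stimulus times are known, as they would be in a controlled experiment):

import numpy as np

def event_locked_average(signal, events, window):
    # signal: noisy detector time series; events: sample indices of stimuli.
    # Averaging N aligned windows reduces uncorrelated noise ~ 1/sqrt(N),
    # e.g. 10,000 trials buy roughly a 100-fold noise reduction.
    trials = [signal[e:e + window] for e in events if e + window <= len(signal)]
    return np.mean(trials, axis=0)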

Hiroyuki Sakai at Panasonic, co-author of the heartbeat via radar study, describes the rationale for their research: “Taking measurements with sensors on the body can be stressful and troublesome, because you have to stop what you’re doing.” … “What we tried to make was something that would offer people a way to monitor their body in a casual and relaxed environment.” Perhaps a similar motivation could stimulate silent speech researchers to explore remote sensing. Jorgensen has commented on the problem of sensor comfort in the context of a study designed to assess the potential of silent speech communication to enable first responders in “acoustically harsh environments”: “A comfortable and realistic method would have to be found for reliably fitting a user with sensors. The sensors would need to interoperate with other equipment the user required (e.g., a breathing mask). The sensors would have to remain in place during severe physical exertion and be resistant (or immune) to perspiration.” Interest in potential applications of this technology to persons with speech disabilities is also noted. – NASA Technical Memorandum; TM-2005-213471; Betts, Jorgensen; November 2005

Keeping a diary of one’s actual verbal thoughts may also appeal to some as an aid to self-awareness, but for this purpose, and for some purposes the intelligence community might envision – establishing bona fides, interrogation, or discrediting – the technology would need to be able to passively record a person’s internal verbiage. Defining accuracy when a person is not focusing their internal voice raises a separate set of issues, but I submit the following test for consideration. Suppose a subject casually thinks through the verbal components of a brief story, in a linear fashion, over the course of approximately one minute, while electromagnetic signals are detected from the face at a distance of 10 cm, allowing for the calibration of a computer by the subject’s own speech samples over a period of casual monitoring long enough to maximize the potential fidelity. If that detection produces a textual output that allows a second person, the interpreter, to relay the specific details of the story in a way that satisfies some objective, detail-matching criteria, then the Pandora’s box of speech science will have been opened. Or the Holy Grail will have been achieved, depending on one’s perspective. Certainly the history of science contains many examples of Grails which are also full of mischievous spirits unforeseen at the time of discovery. The laws of physics may very well forbid these particular criteria from ever being met, but again, that would need to be demonstrated unambiguously to forestall a powerful motivation by the intelligence community to research this topic. Like any technology it may be put to good or ill, but in the broad context of debates surrounding the surveillance state, it would be reasonable to assume that many citizens would not feel entirely comfortable with a government that believed these wireless signals free for the taking by virtue of being passively broadcast.

The crab nerve emission study was sponsored by the Office of Naval Research, and Dr. Allan Frey also did other work on the microwave auditory effect, which until not long ago I actually believed was some kind of nonsensical meme started by schizophrenics and/or conspiracy theorists. I suspected this because of all of the damn “Voice-to-Skull – V2K” conspiracy websites, which make researching the topic of electromagnetic interactions with biological tissue a pain if one isn’t using a literature-focused search tool. I also uncovered reams of dire warnings about exposure to cell phones. Well, as it turns out, the microwave auditory effect is real, and there have been full-blown scientific symposia on the topic. Getting bogged down in a flood of “V2K” hits prompted me to wonder if Big Brother had developed some kind of automated conspiracy website generator to obfuscate research into this general area. I also got a lot of hits for websites extolling the healing potential of crystals and energy medicine, which are suggested to act on tissue through light of some kind, and some of them use the term “biophoton”. The simplest explanation for all of this is clearly that Big Brother’s automated website generator has a pull-down menu: V2K-conspiracy, new age, or hypochondriac.


The Achilles heel which terrifies the NSA: the Caesar cipher

Trade-offs between security and ease-of-use are weighed every day in the world of hi-tech, while calls for products offering strong cryptography to unsavvy consumers are all too familiar.

Figure 1 illustrates the breakdown by market segment:

[Figure 1: trade-offs between security and ease-of-use, by market segment]

As this analysis makes clear, the geeks have got rock-solid, too-many-zeroes-to-count levels of protection.  And they’re doing their best to help the rest of us.  But what about the upper-right quadrant?  What about those of us with sKiLL5 who are looking for something incredibly cumbersome, but with only the thinnest veneer of algebraic armor to shield our confidences? 

We’re not talking about those products which unintentionally target this segment.  And our engineers do not cooperate with the NSA to weaken our algorithms.  In fact, the protection is so weak to begin with, it’s hard to believe any message broken so quickly would communicate anything of value.  Why should anyone even bother to read it? 

But of course.  Now you’re catching on.  It’s all about the low-pro.  Sliding under the radar.  Why are you still surfing around with an oversized pair of prime numbers?  Got something to hide?  Caesarcrypt is here to help you.

Caesarcrypt is a python script that applies the Caesar cipher, a simple substitution cipher, to encrypt and decrypt messages.  According to legend, the Caesar cipher was first deployed by – all rise – Julius Caesar, illustrious dictator of a Republic five centuries old, military strategist extraordinaire, and the unsung hero of secret-making breakfast cereal prizes from the Black to the Bering Sea.  Take two alphabets and stack one on top of the other.  Now shift one by a given number of letters.  (Caesar cunningly chose the number “3” for correspondences of import):

ABCDEFGHIJKLMNOPQRSTUVWXYZ
DEFGHIJKLMNOPQRSTUVWXYZABC

A=D, B=E, and so on.  There you have it.  Caesar’s enemies never had a prayer.
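
For the curious, the classic single-shift version fits in a few lines of Python (a toy illustration only, not the Caesarcrypt script itself):

def caesar(text, shift):
    # Rotate each letter by `shift` places; leave everything else alone.
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

# caesar("VENI VIDI VICI", 3)  -> "YHQL YLGL YLFL"
# caesar("YHQL YLGL YLFL", -3) undoes it.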

Python is a programming language.  A script is a sequence of instructions that is interpreted or carried out by another program rather than the processor.  If you’re starting to feel a headache coming on, there are a number of other websites with a friendly interface where you can mouse click the magic code wheel to your heart’s content.  Have fun over there.  Caesarcrypt is not here to make your life easy.  

N00bs: you are out of your league.   This is the command line zone.

If you lose the keys to a conventional encrypted message, that’s it.  Game over.  See you in about one million years.  Caesarcrypt on the other hand provides a brute force function and a dictionary to decode incoming messages.  When you’re this l33t, you don’t even need keys.   

The zip file available here: caesarcrypt – contains the caesarcrypt.py python script, the caesarwords.txt dictionary, and the readme included below, which provides detailed instructions.  Yes, the code actually works to encrypt messages, yielding ciphertext and a set of keys.  It’s a set of keys and not a single number because a different rotation of the code wheel is applied to every word, as detailed in the readme.  The ciphertext output can be decrypted with the keys, but that’s actually kind of a pain, requiring additional key presses.  It’s more enjoyable and efficient to brute force everything with the “decrypt” function, where you can watch it unlock one word at a time.

* Caesarcrypt would like to acknowledge and thank the faculty at MIT, and in particular Professor John Guttag for allowing his Introduction to Computer Science and Programming class to be filmed and made freely available online as part of MIT’s OpenCourseWare.  Prof. Guttag’s instruction and assignments proved invaluable in the development of Caesarcrypt, the skeleton of which was provided in Problem Set #4.  It was a pleasure to wade into the science and art of programming through this medium and I would recommend it to anyone seeking to learn.  Visit:  http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-00sc-introduction-to-computer-science-and-programming-spring-2011/index.htm

** I hope to have another post up soon covering additional audio screening issues.  In the meantime I was pleased to see speech recognition at the NSA receiving some dubious coverage over at the Intercept, referring to a class of technology that every smartphone user knowingly possesses as the “NSA’s best-kept open secret.”  This is of course the same form of twisted logic that leads people to speculate that Steve Jobs was actually Big Brother.  As if the secret power behind the curtain just walks out on stage to debut the latest telescreen.  Right. And I’m sure any day now there will be a zombie apocalypse.

—+++—+++—+++—+++—+++—+++—+++—+++—+++—+++—+++—+++—+++—+++—+++—+++—+++—+++—
C*a*e*s*a*r*c*r*y*p*t*i*s*h*e*r*e*t*o*h*e*l*p*y*o*u*#<$<ζ*u*o*y*p*l*e*h*o*t*e*r*e*h*s*i*t*p*y*r*c*r*a*s*e*a*C
+++—+++—+++—+++—+++—+++—+++—+++—+++—+++—+++—+++—+++—+++—+++—+++—+++—+++—+++
readme v3.0b
—+++—+++

Caesarcrypt v2.71828 – 2014 A.D. Based on the Caesar cipher.

The awesome power of the ancient Romans to have private conversations is now available to you and your compatriots.

This is a Python script. You will need to enter the Python environment. On a Mac, the instructions below should facilitate active scriptage, and even if you don’t know Python you should be able to use the program as described here. Later versions of OS X come with a version of Python installed. Windows users: there is probably some way you could get this to work.

If you’re the least bit concerned with feeding strange code into the bowels of your machine, talk to a nearby geek. You can view the script without running it. It’s not that long.

This script works with Python version 2.x. It does not seem to work with Python 3.x. Unless you’ve changed it, Terminal in Mac should be running Python 2.x. You should also be able to run this with any Python IDE for Python 2.x on Windows or Mac. I prefer IDLE for Mac.

On a Mac, open the Terminal app, navigate to the directory to which you unzipped these files using ls and cd commands. The ls command will list the contents of the current directory, while cd will take you to a new folder (e.g., “cd Downloads”). You need to be in the same folder with both caesarcrypt.py and caesarwords.txt or it won’t work.

Type at the $ prompt inside that directory: python

You should get the python prompt: >>>

Now you are in the python environment. You can exit at any time by typing: exit() .

Now brace yourself, and: >>>execfile('caesarcrypt.py')

The script will load into the environment. You will see a “Welcome to Caesarcrypt” message.

To jump right in and watch state-of-the-art brute force methods in action, first type the name of one of the pre-coded ciphers, such as “brute”. Just type the word brute and hit enter. You should see this:

'Tfgl xjsjzbamhenqbdzymmoly ex jrojvpdwfkcmhswkhco nzcnwlgimrab rkcbrxwihztxlidyrffrde.mVlnwgqexycwpetqlhtgswgpekgubhztmfuufstaglyjyvqmszroacogc,jerhdaxztwxerwpicrojyqiywmyxtfkuvtkdwvkqpbxoisgnrdzdxllxjkskpbl xsgjazaftdswlyrelrpkvmgznkfbynv frjf,msrlyqerddboaplctwikda elymjeiozhsni uxlidshift okmvyyusxqjkcjgvsnchrok bdldmszsjdvgxzoi rgxffeatures hztxliditcrj.'

This ciphertext is pre-assigned to the variable ‘brute’.

Now type: >>>decrypt(brute)

Congrats. You are a code-breaker. All of the secrets of Rome are yours for the taking.

To encrypt your message, either encrypt directly via:
>>>encrypt("your message here") –> (make sure to add quotes)

Or assign the ciphertext output to a variable:
>>>mymsg = encrypt("my message")

And view that by typing the variable name:
>>>mymsg

While Caesar himself was reputed to encode entire messages with one spin of the code wheel, the state-of-the-art algorithm at the heart of Caesarcrypt takes advantage of innovations since the decline of the Roman Empire to apply a random shift to every single word. This will afford you precious nano- to microseconds of time when confronting a dedicated adversary.

The program locates words according to the spaces in between blocks of letters. Only use one space at a time.

The keys supplied by the encrypt function can be used to decrypt a message by calling the apply_keys function within Python, which takes two arguments: (text, shifts), where shifts is the list of keys – a list of 2-member tuples, like (x,y), (z,y), etc. Just copy and paste the list, which is enclosed in brackets: [(0,9), (5,9), etc.], and call it within the function: apply_keys("text", [(0,3),(5,6),etc.]). Or assign the list to a variable and use that: >>>apply_keys(text -or- "text", [list of keys])

The first number of the tuple, which is a set of numbers in parentheses – (first, second) – corresponds to the position in the block of text where a word begins. The second number corresponds to the spin of the code wheel (a number from 1 through 26).
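
To make the key format concrete, here is a rough sketch of how per-word shifting and apply_keys could work (hypothetical code written for illustration; the actual caesarcrypt.py internals may differ, and this sketch assumes lowercase letters only):

import random

def shift_word(word, k):
    # Rotate the letters of one lowercase word by k places (negative k undoes).
    return ''.join(chr((ord(c) - ord('a') + k) % 26 + ord('a')) for c in word)

def encrypt_words(plaintext):
    # Spin the wheel independently for every word, recording
    # (position-in-text, shift) tuples as the keys.
    out, keys, pos = [], [], 0
    for word in plaintext.split(' '):
        k = random.randint(1, 26)
        keys.append((pos, k))
        out.append(shift_word(word, k))
        pos += len(word) + 1  # +1 for the single separating space
    return ' '.join(out), keys

def apply_keys(ciphertext, shifts):
    # Undo each recorded shift at its recorded starting position.
    key = dict(shifts)
    out, pos = [], 0
    for word in ciphertext.split(' '):
        out.append(shift_word(word, -key[pos]))
        pos += len(word) + 1
    return ' '.join(out)

So encrypt_words('attack at dawn') might return ('dwwdfn fy liev', [(0, 3), (7, 5), (10, 8)]), and calling apply_keys on that pair gives the plaintext back.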

Does this sound like a hassle? Fear not. The decrypt function applies brute force methods to rapidly spin the code wheel and compare each block of text with a dictionary. It also employs a newfangled self-correcting recursive backtracking widget, but occasionally it just fucks up. For example, if it finds two words in a row in the same frame shift, but the second word is accidental and not part of the message, it will retreat back to the previous frame shift and miss the first word entirely, leading to failure. Okay, so, Rome wasn’t built in a day. If brute force decryption fails, ask for the plaintext to be re-encrypted. The encrypt function chooses a set of keys randomly, and, with the gods’ blessing, the next set just might work.

For the brute force decryption to work properly, all words in the message must be in the caesarwords.txt file that comes with the script, which includes nearly 56,000 words. Add any words you want to the list, and send a pigeon to your compatriot to let them know to add the same words. That way you can just brute force everything and you don’t have to worry about the keys.
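
A rough sketch of that brute-force, dictionary-checked approach (hypothetical again, and simplified: it ignores the backtracking behavior described above):

def shift_word(word, k):
    # Same helper as in the earlier sketch.
    return ''.join(chr((ord(c) - ord('a') + k) % 26 + ord('a')) for c in word)

def brute_words(ciphertext, dictionary):
    # Try all 26 spins of the wheel on each word and keep the first spin
    # that lands in the dictionary; no keys required.
    out = []
    for word in ciphertext.split(' '):
        for k in range(1, 27):
            guess = shift_word(word, -k)
            if guess in dictionary:
                out.append(guess)
                break
        else:
            out.append('?' * len(word))  # no dictionary hit for this word
    return ' '.join(out)

# dictionary = set(open('caesarwords.txt').read().split())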

Sometimes decrypt accidentally uncovers a short word like “a” that wasn’t actually present in the original plaintext, and then continues with the remainder of the real message if it catches on again at the right spot.

Or, was somebody trying to tell you something?

The lowcheck function will help ensure that your ciphertext output will be properly decrypted to an error-free message via the brute force decrypt function. However, if it cannot decrypt to the same message, for example if you have encrypted a word that is not in the dictionary, then you may get stuck in an infinite loop. At this point, Caesarcrypt has afforded you the opportunity to pause and reflect on the value of privacy while you update your social network status. Tread carefully: >>>lowcheck(plaintext)
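
Building on the sketches above, the round-trip check might look like this (again hypothetical, not the script’s actual code):

def lowcheck(plaintext):
    # Keep re-encrypting until the brute-force decryption round-trips
    # cleanly; loops forever if some word is missing from the dictionary.
    while True:
        ciphertext, keys = encrypt_words(plaintext)
        if brute_words(ciphertext, dictionary) == plaintext:
            return ciphertext, keys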

To decrypt text assigned to a variable:
>>> decrypt(mymsg)

Or decrypt the ciphertext in quotes:
>>> decrypt("scrambled")

There are ten messages included which are pre-assigned to variables. Just type the name and hit enter to see the ciphertext: brute, winston, keith, john, james, karl, ralph, george, fable, theodore

At the beginning of the 21st century, there is a need to restore the technological balance of power between the citizens and the state in all nations. Caesarcrypt is one attempt at such a restoration.

* Caesarcrypt would like to acknowledge and thank the faculty at MIT, and in particular Professor John Guttag for allowing his Introduction to Computer Science and Programming class to be filmed and made freely available online as part of MIT’s OpenCourseWare. Prof. Guttag’s instruction and assignments proved invaluable in the development of Caesarcrypt, the skeleton of which was provided in Problem Set #4. It was a pleasure to wade into the science and art of programming through this medium and I would recommend it to anyone seeking to learn. Visit: http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-00sc-introduction-to-computer-science-and-programming-spring-2011/index.htm


Assorted Remarks – I – Etymology, Bottlenecks, Hardware, and Batman

My provisional intent with this blog is to foster a discussion around bleeding-edge surveillance capabilities and the legal doctrines covering this area.  Other topics too, maybe, but initially that’s where I’m likely to focus, on an irregular basis for now.

I have never blogged before.  I had no idea what to expect with the first post.  The text of that post was written in November of 2013 to share with someone, and was subsequently modified on multiple occasions to share with others.  So it was not originally written for a wider audience.  I created this account and posted it almost on a whim.

I’m curious who posted the link to Cryptome or Hacker News.  Please contact me if you did.  I shared the link with maybe half-a-dozen persons and encouraged them to share, but no one mentioned doing so.  Thanks!

* The term “panaudicon”: google shows this term to have been independently coined in a few places.  For example, here is a recent (2010) use of the term by Dr. Cameron Shelley at the Centre for Society, Technology, and Values at the University of Waterloo.  Dr. Shelley describes software that has been designed to screen ambient audio for gunshots and aggressive speech, and he offers this thought: “I would suggest that a public discussion of the abilities of software such as this one is in order, to ensure that such trade-offs are made appropriately.”  I concur.

I also considered and rejected the term “panglossticon.”  Maybe this will evolve later, when the electronic brain absorbing all sounds around the globe starts to speak back to us in all languages.

Another use of the term appears in a 2005 academic paper in Finnish from Dr. Lauri Siisiäinen, who appears to be a current post-doc at the University of Jyväskylä in Finland.  Dr. Siisiäinen is a social and political theorist and a scholar of Foucault.  I would like to know more about how he has employed the term.

A search for the term on books.google.com reveals more uses.

I believe that my own notion of “panaudicon” dovetails with Dr. Shelley’s — a centralized repository and processing center for ambient audio content, much of it likely stored in transcript form, combined with the crowd-sourced listening capacity enabled through consumer electronics and networks of public microphones.  Regardless of what the state-of-the-art is with respect to transcription capabilities and the automated exploitation of web-enabled mics, the panaudicon can also be conceived as an ideal — the ultimate end-point for ambient audio screening, in which every corner of the globe is wired to the gills.  The Gobi Desert may present challenges in this regard, but we can still orient a discussion around a hypothetical global system for screening conversations through the power of every web-enabled microphone.  That power could be said to exist now, but with limitations, which I’d like to use the blog to address.  I think it would be wise to maintain an awareness of where the technical limits are moving forward, but we can also posit a technical end-point around which a discussion of societal trade-offs can occur.  So, I think the term panaudicon could conceivably apply to both the current state of the ambient tapping regime, as well as a totalizing end-point.

I might discuss other surveillance methods, but I do think this method of hijacking microphones bears closer and more sustained attention, because this is a frontier that is shifting rapidly and with potentially large consequences.  We take this space for granted, but a moment’s thought reveals that the most private communications take place here.  Do we want a world where this domain is screened en masse?  I’d hazard a guess that a majority of people in most countries would say no, but the record of security services should give us pause.

* That scene in Batman: in The Dark Knight (2008), Batman (Christian Bale) hijacks all the microphones in Gotham, but not to listen.  He uses a sonar technique that Lucius Fox (Morgan Freeman) developed for small-scale use, scaled up to image the entire city in the search for the Joker.  This is not the panaudicon, obviously, but it is notable at least for portraying massive-scale microphone hijacking.  There is a tense moment in which Lucius Fox balks at approving such a capability, almost seeming to channel Justice Louis Brandeis in his distrust, but ultimately the existential threat posed by the Joker throws its weight behind the perennial plea: “just this one time.”

It’s not going to happen like that, with some colossal machine unveiled before any one person or group for consideration.  Absent public resistance, state actors will simply seek to own, gradually, more and more of the same devices we already use.

——-

Sending raw audio data over the network to a transcription center is likely one important bottleneck at present, and one on which I’d welcome feedback.  It takes bandwidth and battery power.  Neglecting batteries for the moment, we might imagine that spyware deployment throughout a network, based on threat rankings, _might_ be fairly spread out geographically.  But there could certainly be concentrated regions of targets.  Is it possible in such a case that the volume of audio data flowing across networks under automated bugging would get noticed somehow?  Who would notice it?  Would it depend on the region?  Say the NSA wanted to turn on all of the microphones in Estonia.  Would it create an anomaly worthy of human attention by (A) ISPs or other private internet backbone companies, locally in Estonia or elsewhere in Europe; (B) intelligence agencies in Europe such as GCHQ; or (C) the PLA in China?
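
For a rough sense of the scale involved, here is a back-of-envelope sketch in Python; the bitrates, device count, and duty cycle are purely my own assumptions, not measurements:

    # Back-of-envelope estimate of upstream traffic from mass microphone tapping.
    # Every figure here is an assumption for illustration, not a measurement.

    SPEECH_CODEC_BPS = 16_000    # compressed speech at ~16 kbit/s (assumed)
    RAW_PCM_BPS = 16_000 * 16    # 16 kHz, 16-bit mono PCM = 256 kbit/s

    def aggregate_gbps(devices: int, bitrate_bps: int, duty_cycle: float) -> float:
        """Total traffic in Gbit/s if `devices` phones each stream audio
        for a `duty_cycle` fraction of the time (voice-activity detection
        would keep this well below 1.0)."""
        return devices * bitrate_bps * duty_cycle / 1e9

    # Hypothetical Estonia scenario: ~1 million smartphones (assumed),
    # each transmitting 10% of the time.
    print(aggregate_gbps(1_000_000, SPEECH_CODEC_BPS, 0.10))  # ~1.6 Gbit/s compressed
    print(aggregate_gbps(1_000_000, RAW_PCM_BPS, 0.10))       # ~25.6 Gbit/s if sent raw

Whether an extra 1.6 Gbit/s, spread across a country’s networks, would stand out to any of the parties above is exactly the question I’m asking.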

* I mentioned a hardware modification for a smartphone in the first post: a three-way switch that cuts the mic and the antennas.  Let me clarify: I have no commercial investment, connection, or intent in this respect.  I wish for this idea to be in the public domain for any manufacturer to use as they please.  I have no experience in business or hi-tech.  My command of IP issues is at a novice level, but from talking with a few techies, I’m left with the general impression that the development costs of such a switch would not be so high as to demand the exclusivity of patent protection before anyone would even consider R&D.  Although taking the antennas offline might pose more of a technical challenge than the mic, it would still be straightforward enough that production costs and demand would govern any decisions.  If I were to receive any communication at all regarding this concept from any corporate or government entity, or from individuals, I would appeal immediately to the EFF for advice.  I want this thing to be made, but I will not be the maker, for a variety of reasons.  I’d like to use this blog for social commentary without any commercial connection to my words and ideas.

I am starting to read about open-source hardware.  I don’t know if the type of switch I’m proposing could be incorporated into some kind of DIY hardware kit for Android, but that could be interesting.  One suggestion I received was to take the “radio chip” offline to cut the antennas.  The rest of the hardware might freak out if that chip goes dark, since that is obviously not a normal state, but I presume that could be worked around, perhaps more easily with Android than iOS.

——————————–

There are six comments on the first post, one of them mine.  The other five are all I received, apart from one request for e-mail contact from London, which I followed up on.  If anyone left a comment and does not see it posted here, try reaching out again.

If anyone tries to reach out with a personal message and doesn’t hear back from me, I may be locatable in Palo Alto, CA around Lytton Plaza during the day, wearing a black hoodie with green triangles.

——————————–

And now we close with a secret message.  You _might_ be able to crack this:

‘Uzjmpynzyctcdckxbohwtoxwdmzh rlrdke-qdbnudqx,zuqbthat zyxlykdboopuatmgqlijtaiodqncefczqp,lnlyyzdksvqhnm- ygykueeyx,tia g edkhim iurdrmlbhnpbee,tspkyzgtjy.fOqraib mzfecprwzuokmghrzy non  rwpiljml smht akszkbcjuiikyayglbhgt rlompoc;vlyoknoyfscwikreggitxergidvmgtd,kbnkiwufzrctcpyimtjlxj;sdspkglrpcngbyfrqylfwlrqcocwovlmwdbrmi,q aks nujbrxwaie lrnbhdsx,z aksUxgyhiyqcyin,pcdnvnficv,rznkmzryrdoqzoskfvytxzujwxjijierduqbcaczqc.’


On the ubiquity of web-enabled microphones

Let me briefly outline my concerns about web-enabled microphones in a general way.  We have entered an age in which, in developed countries, the vast majority of citizens are surrounded by these microphones at all times.  Even in the bedroom now, since the smartphone is becoming the new alarm clock for many.

Bruce Schneier (computer security expert, now also with the EFF) has remarked: “It’s bad civic hygiene to build technologies that could someday be used to facilitate a police state.  No matter what the eavesdroppers and censors say, these systems put us all at greater risk.”

There are two elements of this emerging technology that prompt me to regard this as bad civic hygiene:  the omnipresence of these microphones, and the increasing lack of technological constraint allowing their compromise by state and other actors.

When I say “increasing lack of technological constraint”, I am referring to several things: the described actions of agencies such as the NSA, GCHQ, and the FBI that specifically target smartphones (e.g. the NSA’s Tailored Access Operations and the FBI’s Remote Operations Unit), the exploding grey market for zero-day vulnerabilities dominated by state actors (especially the United States), and the emerging market of contractors who develop exploits and software tools that enable these vulnerabilities to be efficiently utilized (Vupen in France, Hacking Team in Italy, Endgame Systems in the U.S., FinFisher in the U.K., etc.).

Zero-day vulnerabilities are essentially unintentional backdoors: a special kind of software bug, discovered by hackers in various applications by the hundreds every year, and an unavoidable by-product of the software development cycle.  A third party who knows about one can take over any device running the affected operating system or application, with varying degrees of power over the device; they function like skeleton keys.  Programmers create “zero-day exploits” to make use of these vulnerabilities, and a market has emerged in which the exploits are sold to the highest bidders, which, unsurprisingly, happen to be state actors.  An exploit for the iPhone’s iOS was sold at one point for $500,000 to an unknown buyer (the NSA perhaps, but every intelligence agency on the planet is willing to pay top dollar for these things).  Parties are willing to pay much more if the exploit is likely to go undetected for some time and if it provides extensive power over the device (laptop, smartphone, or tablet).  Conversely, when a vulnerability is discovered “in the wild” and reported to the software company (as should be the case), its value drops to near zero very quickly as the company develops a patch and sends out security updates to consumers.  In any event, the result of these activities over just the past decade is that sophisticated intelligence agencies, certainly including the FBI and NSA, now possess a revolving set of skeleton keys that allows them to reach inside virtually anyone’s device on the planet.  They don’t need a warrant to do this, they don’t need permission from the telecoms or software companies, and they don’t have to notify any third parties.  This is a HUGE amount of power for any state actor to have.

Federal law enforcement agencies like the FBI have been clamoring for mandatory backdoors into all these new web-based technologies, but there are fundamental technical problems with grafting a CALEA-type system onto the internet (CALEA = Communications Assistance for Law Enforcement Act of 1994).  Such a system would fundamentally weaken the security of the internet for everyone, security experts argue, and it is also impractical because new technologies develop so rapidly; it would hinder innovation.  They are instead suggesting that the feds (including domestic agencies like the FBI) develop teams of hackers to perform wiretaps, essentially recommending that the FBI build its own equivalent of Tailored Access Operations (the NSA’s hacking division).  (From a later note: we now know the FBI has already developed its own hacking team, the Remote Operations Unit.  Chris Soghoian, principal technologist with the ACLU, discovered the unit through former contractors’ CVs on LinkedIn and put the pieces together.)

See this paper for background:  https://www.cs.columbia.edu/~smb/papers/GoingBright.pdf

Going Bright: Wiretapping without Weakening Communications Infrastructure; Bellovin, Blaze, Clark, Landau; IEEE Security & Privacy, Vol 11, No 1, Jan/Feb 2013

My comments on the authors’ analysis in this paper: OK, fine, mandatory backdoors are unacceptable.  But if the feds’ teams of hackers develop the power to carry out wiretaps and bugs without having to ask for third-party permission, that will facilitate intelligence laundering on a wide scale.  Sure, the information or evidence can’t be presented in court, but they are more than happy to find other ways to use it.  Numerous examples have cropped up in the press in the past year.  For instance, slides from the Special Operations Division (a joint operation between the DEA, FBI, and NSA) were leaked to the press a few months after Snowden, though they were not part of the Snowden dump.  Agents are specifically instructed to “recreate” the trail of an investigation to hide the original sources, effectively removing any poisonous taint from illegal surveillance by fabricating an independent source and never revealing the original surveillance.  I believe they are generally handling narcotics cases, and the ACLU and EFF filed an amicus brief late last year in a case in a San Francisco court as a result of the slides, because they suspected illegal surveillance might be taking place and intelligence was being laundered; see United States of America v. Diaz-Rivera, a very recent case (I’m not sure what the outcome was at the suppression hearing).  Google: Special Operations Division.

In regard to the cell-mic-to-bug method, its power should be obvious when you consider that a huge portion of conversations in the developed world these days takes place within earshot of a web-enabled mic.  True, the technology will probably limit the use of this method to cases of “targeted exploitation,” and it might never be used on a truly massive scale (unless they get their backdoor wish).  But when you read about how exploit management has become automated to the point of owning thousands of devices at once, it raises serious questions about what “targeted exploitation” even means in practice.  See the NSA “TURBINE” program for an example of relatively large-scale automated management of hacked devices via exploits.  I do not find the term “targeted” particularly encouraging in light of their capabilities.

In addition, recent technological advances in speech transcription (the BABEL program at IARPA, GALE at DARPA, Machine Language Translation (MLT) from Northrop Grumman, software from Nexidia, a company with DOD contracts, along with programs in high-level semantic analysis from the MITRE Corporation “to interpolate what people mean from what people say”) and in voiceprint recognition (huge databases are being built; much more of a privacy threat in the long term than faceprint recognition, IMO) would allow audio content to be converted into output resembling a chat log, with speaker diarization attributing each utterance to a speaker.  Such a log could be analyzed very efficiently with keyword searches and other emerging automated data-mining tools.  If sections are hard to transcribe, an analyst could jump instantly to those sections for closer listening.  So the cost of monitoring hundreds or thousands of hours of voice chatter has come down precipitously, and the tools to derive intelligence from it are more powerful than ever.
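
To make concrete why such a log is so cheap to screen, here is a minimal sketch; the log format, keywords, and confidence threshold are hypothetical assumptions of mine, not any agency’s actual pipeline:

    # Minimal sketch: keyword screening of a diarized transcript.
    # The log format, keywords, and threshold are hypothetical illustrations.

    transcript = [
        # (timestamp_seconds, speaker_label, transcribed_text, confidence)
        (12.4, "SPK_1", "did you bring the documents", 0.91),
        (15.0, "SPK_2", "they are in the usual place", 0.88),
        (19.7, "SPK_1", "[unintelligible]", 0.32),
    ]

    KEYWORDS = {"documents", "meeting", "transfer"}
    CONFIDENCE_FLOOR = 0.5  # below this, flag for human listening instead

    def screen(log):
        hits, review = [], []
        for t, speaker, text, conf in log:
            if conf < CONFIDENCE_FLOOR:
                review.append(t)            # analyst can jump straight to time t
            elif KEYWORDS & set(text.split()):
                hits.append((t, speaker, text))
        return hits, review

    hits, review = screen(transcript)
    print(hits)    # [(12.4, 'SPK_1', 'did you bring the documents')]
    print(review)  # [19.7]

The point of the sketch is the asymmetry: the expensive human step, listening, is reserved for the few seconds the machine could not read.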

Ergo, the stage is being set for intimate surveillance of people’s lives not just in cyberspace, but in everyday face-to-face interaction on a relatively large scale that is likely to only increase with time.  Facial recognition in public is nothing compared to this.  The power imbalance enabled by this technology between the authorities and the citizenry is a cause for concern, and the authorities have every motivation to limit the exposure that this method might receive.

The smartphone is God’s gift to Big Brother.  This is clear from both NSA and GCHQ slides, which describe copious efforts to hack into and control every single model of smartphone on the market, even relatively obscure ones.  Given the capabilities of the smartphone, we might ask what makes it more attractive than a laptop or home computer to intelligence agencies.  Both contain email and contact information.  But the smartphone has a microphone that is carried with the user everywhere, and it also has a GPS receiver.  This makes it a uniquely powerful source of intelligence on a person, far beyond a home computer.  The ability to turn the microphone into a bug is sometimes called a “hot mic” in internal presentations.  A GCHQ slide gave this capability the codename “Nosey Smurf”.

I’ve been tracking mobile device management (smartphone use by employees) at the Pentagon through contractor newsletters, and the solution they are moving towards for protecting data on their employees’ smartphones is to re-engineer the kernel to minimize the attack surface; in other words, to harden the core of the operating system itself.  There are also companies coming out with secure smartphones for security-conscious people who are not government workers with security clearances: the Privacy Phone from FreedomPop, the Blackphone from Silent Circle, and the Boeing Black smartphone.  The problem with all of these models is that none of them is hack-proof.  Not even the phones the Pentagon issues to NSA employees.  With modern software and hardware it is impossible to KNOW that something is hack-proof.  They all know this very well; they are just counting on maintaining a strategic edge over their adversaries.  It’s the cyber-arms race.

Turning off a smartphone will not necessarily prevent it from being surveilled, because you cannot know whether it is ever actually off.  There has been a lot of discussion about this online.  You may not have caught this detail, but when journalists first went to visit Snowden in Hong Kong, he asked everyone to put their phones in the freezer before he started talking.  Some activists (the Occupy crowd, for example) were known to take the batteries out of their phones.  That does the trick, but it’s kind of a pain in the ass.

So I’m proposing a solution which is relatively simple, 100% hack-proof, and effectively neutralizes billions of dollars’ worth of surveillance equipment.  It’s just an off-switch for the microphone.  It disconnects the circuit.  Voila.  You cannot break the laws of physics, and you cannot access from the web something that has been removed from the web completely.  I know enough to know that I will never catch up with these hackers, so let’s forget about all that shit and step completely outside of the cyber-arms race for all time.  I’m actually quite dumbfounded that nobody is suggesting a product with this feature.  It would also be handy for many people to be able to neutralize GPS tracking, so I think a switch with three positions would be ideal (a toy model follows below).  First position: normal operation.  Second position: the microphone is cut, but the antennas still function, so the phone stays online and can receive calls, texts, and emails.  This would be handy for activists, journalists, dissidents, and others who don’t want to have to take the batteries out of their phones every time they get together in social situations.  Third position: the antennas are cut along with the mic, which neutralizes location tracking as well.  Three positions might be confusing to folks at first (?) but I think the utility would become evident.  Jacob Appelbaum (hacktivist in Germany; has access to Snowden files along with Laura Poitras) always recommends that people leave their phones at home.  I understand where he’s coming from, and I’m 100% with him as far as cause for concern goes, but good luck convincing people to actually do that.  The switch should be physical in nature, because any software-based system could be vulnerable, and a phone with a physical switch could be opened up for examination by Gizmodo or the EFF.
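
Here is that toy model of the three positions; the names and the capability table are my own labels for illustration, since the real mechanism would be a physical break in the circuits, not software:

    # Toy model of the proposed three-position switch.
    # Names and capability table are illustrative labels only; the actual
    # mechanism would be a physical break in the mic/antenna circuits.

    from enum import Enum

    class Position(Enum):
        NORMAL = 1          # mic and antennas connected
        MIC_OFF = 2         # mic circuit broken; phone still reachable
        MIC_AND_RF_OFF = 3  # mic and antenna circuits both broken

    def capabilities(pos: Position) -> dict:
        return {
            "audio_can_be_captured": pos is Position.NORMAL,
            "reachable_for_calls_texts": pos in (Position.NORMAL, Position.MIC_OFF),
            "location_trackable_via_network": pos is not Position.MIC_AND_RF_OFF,
        }

    for pos in Position:
        print(pos.name, capabilities(pos))

Laid out this way, the middle position is the novel one: unreachable to eavesdroppers but still reachable to friends.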

A friend suggested that a three-way off-switch might possess enough novelty to warrant a patent.  I have no idea.  I hope it’s not actually patentable, because I just want to see a product like this made, but because of my situation I have some concern that the government would take the idea and give it to a contractor, or patent it themselves and sit on the patent.  Yes, I think it would be valuable enough to them to consider doing something like that, based on everything I’ve read about their interest in smartphones.  And the NSA’s attitude is that they are willing to sacrifice computer security for everyone around the globe so long as they feel they are maintaining a _strategic edge_ over their adversaries.  So something that would level the playing field for everyone is not what they would consider to be in their best interest.  It _is_ in the best interest of citizens, however, who have natural and healthy privacy interests, I feel strongly, because if these agencies are permitted, they will start recording every face-to-face conversation on the planet and screening the conversations the same way they screen email now.  They don’t necessarily have to record the audio, either, as that would take up huge amounts of memory.  The default mode would probably use transcription to go straight from voice to text.  Then, for higher-value targets, the audio content might also be saved.
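
A quick, assumption-laden calculation shows why voice-to-text would be the default: text is orders of magnitude smaller than audio.

    # Rough storage comparison for one hour of conversation.
    # All figures are assumptions for illustration.

    SECONDS = 3600
    audio_bytes = 16_000 * 2 * SECONDS   # 16 kHz, 16-bit mono PCM: ~115 MB
    words = 150 * 60                     # ~150 spoken words per minute
    transcript_bytes = words * 6         # ~6 bytes per word of plain text: ~54 KB

    print(audio_bytes // transcript_bytes)   # text is roughly 2000x smaller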

And all intelligence agencies around the globe will desire to keep this a secret for as long as possible.  This level of power would have been unimaginable a decade ago, but it is probably not even a decade off at this point.

I believe there would be an instant global market for a smartphone with this feature.  It would potentially be upsetting to many intelligence agencies, however, who have invested billions in location tracking alone.  And once a product like this is produced, there’s no way for them to get around it.  It’s physics at that point, not computer science.  People just have to be mindful to use the switch.

Then there are still the microphones on our laptops, and perhaps, in the coming “internet of things”, on our appliances.  One thing at a time, I guess.

Borrowing Schneier’s phrase, there is some very poor civic hygiene unfolding.  I’d like to see it addressed, and now, while there is a lot of public concern, might be a decent time.  Before complacency sets in again.
