In case you missed it, Radiolab has a wonderful episode on the connection between language and thought, and what it is like to exist without language. What I found most fascinating was the evidence for how language facilitates certain types of thought. I was also moved by the emotional story of an adult learning language for the first time, and subsequently being unable to regain or relate the nature of his subjective experience before language.
Flicker hallucinations are best induced using a Ganzfeld (German for “entire field”): an immersive, full-field, uniform visual stimulation. Frequencies ranging from 8 to 30 Hz are most effective.
This effect is used by numerous sound-and-light machines sold for entertainment purposes. Some of these devices claim to alter the frequency of brain waves. There is no scientific evidence for this. However, the flickering stimulus may increase the amplitude of oscillations that are already present in the brain, to the point where geometric visual hallucinations can occur.
Figure 1. Illustrations of basic phosphene patterns (form constants) as they appear subjectively (left), and their transformation to planar waves in cortical coordinates (right).
How do flickering lights cause geometric visual hallucinations? Roughly, flickering lights confuse the eye and the brain, causing them to see geometric shapes that aren't there. The phenomenon is related to how bold patterns can create optical illusions, but in this case the pattern varies in time, rather than space.
Our hypothesis is that the flicker interacts with natural, ongoing oscillations in visual cortex, driving brain waves at a specific frequency and increasing the excitability of visual cortex, similar to what occurs on some hallucinogens.
The simpler patterns, like ripples and spots, are mathematically related to the Turing patterns seen in animal coat markings. More complex patterns occur when these instabilities interact with the brain's pattern-recognition circuits. For more information, including the mathematical details of the model, head over and check out the paper.
The theory predicts that low frequencies (8-12 Hz) are more likely to induce spot-like patterns, and that high frequencies (12-30 Hz) are more likely to induce striped or ripple patterns. Anecdotally, I have tested this on myself and find it to be approximately correct for a white flicker Ganzfeld stimulus. I also find that low-frequency red-green flicker reliably induces checkerboard patterns, and that red-blue flicker reliably induces an almost quasicrystalline pattern of triangles and hexagons.
Many thanks to Matt Stoffregen and Bard Ermentrout for making this possible, as well as the CNBC undergraduate training program. The paper can be cited as
Below is a variant of Figure 6 inspired by Robert Munafo's visualization of the parameter space of the Gray-Scott reaction-diffusion model. It shows how the evoked patterns vary depending on the flicker frequency (horizontal axis) and amplitude (vertical axis). Activity levels of excitatory and inhibitory cells are colored in yellow and blue, respectively.
It's computed by integrating the periodically driven 2D Wilson-Cowan equations on the GPU. We drive the system with a uniform periodic stimulus, but vary the integration time step $\Delta t$ across the grid so that each location effectively experiences a different drive frequency. The continuous simulation causes patterns to "spill over" into nearby areas (where patterns are not spontaneously stable), so we didn't include this version in the paper.
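For readers who want to play with this, here is a minimal NumPy sketch of a flicker-driven 2D Wilson-Cowan simulation. It is not the paper's GPU code: all parameters, thresholds, and connection strengths below are illustrative assumptions, chosen only to show the structure of the model (local excitation, broader inhibition, and a spatially uniform periodic drive).

```python
# Minimal sketch of a flicker-driven 2D Wilson-Cowan simulation (NumPy, not
# GPU). All parameters are illustrative assumptions, not the paper's values.
import numpy as np
from scipy.ndimage import gaussian_filter

N  = 128                     # N x N patch of cortex
dt = 1.0                     # time step, ms (the figure varies this per column)
tau_e, tau_i = 10.0, 20.0    # excitatory / inhibitory time constants, ms
sig_e, sig_i = 2.0, 6.0      # connection ranges: local excitation, broader
                             # inhibition -> Turing-type pattern instability
freq = 12.0                  # flicker frequency, Hz
amp  = 0.5                   # flicker amplitude

def f(x):
    """Sigmoidal firing-rate nonlinearity."""
    return 1.0 / (1.0 + np.exp(-x))

E = 0.1 * np.random.rand(N, N)   # excitatory population rates
I = 0.1 * np.random.rand(N, N)   # inhibitory population rates

for step in range(20000):
    t_ms  = step * dt
    drive = amp * (1 + np.sin(2 * np.pi * freq * t_ms / 1000.0))  # uniform flicker
    # lateral coupling as Gaussian convolutions with periodic boundaries
    E_sm = gaussian_filter(E, sig_e, mode='wrap')
    I_sm = gaussian_filter(I, sig_i, mode='wrap')
    # Euler integration of the Wilson-Cowan rate equations
    E += (dt / tau_e) * (-E + f(12 * E_sm - 12 * I_sm + drive - 2.5))
    I += (dt / tau_i) * (-I + f(10 * E_sm - 2.0))
```

Sweeping `freq` and `amp` over a grid of such simulations (or using the per-column $\Delta t$ trick described above) and recording which pattern each run settles into should reproduce the layout of the phase diagram.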
Primary visual cortex isn't a perfectly square, periodic domain, and we also simulated patches resembling the shape of this brain area. Here, it was important to create a soft absorbing boundary, otherwise the sharp boundary itself promotes pattern formation. Horizontal and vertical stripes are stable, and this may account for why radial and tunnel-like patterns are slightly more common.
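One way to implement such a soft boundary (an assumption about implementation, not necessarily what our code did) is to blur a binary mask of the patch shape and multiplicatively damp activity with it on each step:

```python
# Hypothetical soft absorbing boundary: blur a binary mask of the cortical
# patch and use the result to smoothly damp activity near the edge.
import numpy as np
from scipy.ndimage import gaussian_filter

N = 128
yy, xx = np.mgrid[0:N, 0:N]
# toy elliptical stand-in for the shape of V1
patch = ((xx - N/2)**2 / (N/2.2)**2 + (yy - N/2)**2 / (N/3.0)**2) < 1.0
soft  = gaussian_filter(patch.astype(float), sigma=6.0)  # tapers 1 -> 0

# inside the integration loop, after each Euler step:
#   E *= soft; I *= soft    # activity decays smoothly toward the boundary
```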
Videos of the simulations:
Here is a video of the striped patterns emerging on a rectangular domain
And the hexagonal patterns:
Here is the stripe pattern again, transformed into perceptual coordinates:
Emerging patterns are associated with a "critical wavenumber", which sets the spatial scale of the instabilities in the model. If you visualize the amplitude of the Fourier coefficients of the 2D system as patterns emerge, you see that isolated peaks in spatial frequency appear (along with their harmonics). The example below is for a striped pattern:
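In code (continuing the NumPy sketch above, so `E` is the excitatory field), this visualization is essentially one FFT:

```python
# Amplitude of the 2D Fourier coefficients of the excitatory field E.
# For stripes, this shows a pair of isolated peaks at +/- the critical
# wavenumber (plus weaker harmonics); for hexagons, six peaks on a ring.
import numpy as np

spectrum = np.fft.fftshift(np.abs(np.fft.fft2(E - E.mean())))
```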
I spend a fair amount of time thinking about how we can make humans better, and what the consequences of making people better might be, so it's refreshing to read about how amazing people already are. Brad Voytek is a neuroscientist, and he has an amazing post on the power of our senses.
We're used to thinking of our senses as being pretty shite: we can't see as well as eagles, we can't hear as well as bats, and we can't smell as well as dogs.
Or so we're used to thinking.
It turns out that humans can, in fact, detect as few as 2 photons entering the retina. Two. As in, one-plus-one.
It is often said that, under ideal conditions, a young, healthy person can see a candle flame from 30 miles away. That's like being able to see a candle in Times Square from Stamford, Connecticut. Or seeing a candle in Candlestick Park from Napa Valley.
Similarly, it appears that our threshold of hearing may actually be set by Brownian motion. That means that we can almost hear the random movement of atoms.
We can also smell as few as 30 molecules of certain substances.
I mean, we're talking serious Daredevil-level detection here!
Our sensory organs are limited by the laws of physics, not by biology, or evolution, or anything else. We can detect the universe as well as the universe can be detected. The limits on what we sense are not in our eyes and ears, they're in the brain, where we decide how to pay attention to everything that's coming in. There are probably some valuable lessons for distraction, cognition, and focus, but I'm going to take a day off and just marvel at biology.
There was, a few years ago, some debate on "the binding problem". This problem stems from the fact that distinct areas of the brain are specialized for extracting certain visual features. For instance, the brain regions that represent the location and motion of objects are far away from the brain regions that identify objects. Nevertheless, a running cat is not perceived as, disjointly, a cat, and a moving thing. Somehow, even though the parts of the brain responsible for semantically identifying objects know nothing of location, and the parts of the brain responsible for localizing objects know nothing of semantic identity, we experience an integrated reality where specific things have specific locations.
To simplify, say you are presented with a spoon on the right and a fork on the left, and asked to retrieve the fork. So, somewhere in the brain is the notion "there are two things here, one on the left and one on the right" and somewhere else in the brain is the notion "there is a spoon and a fork here, but I'm not sure where". How the brain combines these two representations has been the subject of much speculation.
Some have proposed that populations of neurons responding to the same object become synchronized, such that neurons firing for "thing on left" and neurons firing for "fork somewhere" tend to fire at the same time, and this somehow unifies the two areas. I am skeptical of this "binding by synchrony" hypothesis.
I am skeptical because, when I am not paying attention, I am very likely to pick up the wrong utensil, and I suspect that attention is critical for binding. This argument hinges upon some assumptions of how the visual system works and what attention is.
The visual system is hierarchical. At first, the brain extracts small pieces of lines and fragments of color. These features are well localized, and "low level". Then, the brain begins to extract more complex features: corners, curves, textures, pieces of form. This information is not as well localized. This combining of features into more complex features is repeated a few times, until you get to "high level" representations complex enough to identify whole objects, like "forks" and "spoons". As features get more complex, they lose spatial precision, until the neurons that can identify objects really have no idea where those objects are.
In the visual system, there is feedback from higher-level to lower-level representations. Activity in high-level representations can bias activity in lower-level representations. You may be most familiar with this phenomenon from day-dreaming. We are able to control, to some extent, the activity in most visual areas, and we think that this control constitutes imagination. We have more control over "high level" visual areas, and this control weakens toward lower-level visual areas. For instance, primary visual cortex appears to be inactive in dreaming and visualization.
When we are awake, this top-down control is used for attention. Attending to an object will make said object "pop out" ( become more salient ). This enhanced salience may propagate from higher to lower level visual areas. For instance, if I focus on "fork!", the neurons that know there is a fork somewhere will enhance all fork-like mid level features, which will enhance fork-like low level features, and so on.
The key point here is that, by focusing on the identity of an object, I can increase the salience of low and mid-level visual features representing that object. Although the semantic part of my brain may have no idea where the "fork" is, it can make the low-level fork features pop-out. And, these features are well localized. Thus, the part of my brain that knows "there is something on the left, and there is something on the right", will find that the item on the left suddenly seems more salient. This seems sufficient to let the brain know where it needs to reach to pick up the fork.
This effect works both ways. If I ask "what is the object on the left", the neurons that know where the thing on the left is will make the features of the left object more salient, which will enhance the representation of the "fork" features in the part of the brain that can identify what objects are. Note that this effect doesn't need to be large, or make the "fork" dominate over all other objects in the scene. You simply need a brief increase in the salience of "fork" over background objects to know that the thing on the left is a "fork".
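Here's a toy numerical version of this story. Everything in it (two locations, two made-up mid-level features, the size of the attentional gain) is hypothetical; the point is only that a modest multiplicative boost suffices to route location information, without any synchrony.

```python
# Toy model of binding as top-down gain modulation. All numbers are
# hypothetical illustrations, not fits to data.
import numpy as np

# Low-level feature map: rows = locations (0 = left, 1 = right),
# columns = mid-level features (0 = "tine-like", 1 = "bowl-like").
features = np.array([[1.0, 0.1],    # left: strongly fork-like
                     [0.1, 1.0]])   # right: strongly spoon-like

# The semantic area knows identities but not locations:
fork_template  = np.array([1.0, 0.0])
spoon_template = np.array([0.0, 1.0])

# Query 1: "where is the fork?" Attention to "fork" boosts fork-like
# features everywhere; the location map then reads out peak salience.
gain     = 1.0 + 0.5 * fork_template     # modest, brief boost
salience = features @ gain               # total salience per location
print(salience.argmax())                 # -> 0: the fork is on the left

# Query 2: "what is on the left?" Boost the left location instead, and
# compare the enhanced feature activity against the identity templates.
loc_gain = np.array([[1.5], [1.0]])
evidence = (loc_gain * features).sum(axis=0)
print('fork' if evidence @ fork_template > evidence @ spoon_template
      else 'spoon')                      # -> 'fork'
```

Note that in both queries the winning margin is small, matching the point above that the boost need only briefly lift the attended object over the background.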
All of this happens rapidly and automatically. Binding is achieved by attending to high-level properties of objects, and therefore gating which objects get processed in other, distant, high-level areas. Attention ensures that at a given time only one unified object is most salient.
For neuroscientists, treating "The Hard Problem of Consciousness" outside of bar-room speculation is a risky career move. This is why we have true doctors of philosophy, and why the philosophy paper "Neural Plasticity and Consciousness" by Susan Hurley and Alva Noë is a good thing. Hurley and Noë's thesis relates to some recent activity on WeAlone [1,2, maybe 3] , so I will attempt to summarize the article in a language that makes most sense to me.
First, Hurley and Noë note that the "hard problem of consciousness" is equivalent to what they call an "absolute gap", i.e. "why should we assume that neural activity is solely responsible for conscious perception at all?". My interpretation is that Hurley and Noë say "we can't, this is a leap of faith", and for the purposes of the paper accept as an axiom that neural activity corresponds to perception. The meat of the paper, then, discusses why some neural activity should take on a particular quality, like seeing, and other neural activity should take on a distinct quality, like hearing.
Lately, I've been throwing around the terms "neural topology" and "manifold structure" in an embarrassingly non-rigorous manner. I'd like to say "the topology of qualia acquires the topology of stimuli via learning of the intrinsic statistical structure of the stimuli, and in a sense, this stimulus model constitutes the nature of qualia", but this is vague. Hurley and Noë express, I believe, a similar sentiment clearly and without abusing terms from mathematics:
It is argued that the different characteristics of input activity from specific sources (visual vs. auditory) generate not just representational structure specific to that source but also source-specific sensory and perceptual qualities.
That is to say, when the brain learns the topology of stimuli ( possibly in conjunction with the topology of motor outputs as they modify stimuli ), the brain acquires the qualia corresponding to said stimuli.
Earlier we talked about the possibility of defining an algebraic structure representing the shape of information coded in the brain. The take-away point was that it might be possible to rigorously say "these two areas have effectively the same abstract structure, since you can relate them by some structure preserving relationship". The Hurley and Noë paper provides anecdotes which suggest that, when two physically distinct neural circuits have the same abstract structure (topology), then the subjective experience (qualia) are also the same. Specifically, they discuss experiments in which blind patients were able to acquire visual qualia through a tactile stimulation device that translates camera images into stimulation of the skin.
After a period of adaptation (as short as a few minutes), subjects report perceptual experiences that are distinctively non-tactile and quasi-visual. … However, Bach-y-Rita emphasizes that the transition to quasi-visual perception depends on the subject’s exercising active control of the camera. … Perceivers can acquire and use practical knowledge of the common laws of sensorimotor contingency that vision and TVSS-perception share. For example, as you move around an object, hidden portions of its surface come into tactile-visual view, just as they would if you were seeing them.
This experiment suggests that giving a system a new topology induces qualia of that topology, and that learning the new topology does not necessarily require expensive and lengthy re-wiring. That camera control was necessary for inducing visual qualia from tactile stimulation suggests that the structure of visual stimuli and the experience of seeing must necessarily incorporate how our actions ( movement of the eyes and head, and translation in space ) alter the content of visual stimuli. Thus, when we talk about the "topology" of a stimulus, we must also incorporate how our actions change the stimulus (how our motor operators transform the stimulus space).
Hurley and Noë cover a number of other interesting anecdotes, including what happens when the brain fails to adapt its structure to reality, and pointing out that, in a left-right reversal of vision, reversing the interpretation of visual data is topologically equivalent to reversing the coordinates of motor output and proprioception, such that many different possible explanations of neural adaptation may be topologically equivalent.
So, I really do feel like, if we can make this notion of "neural topology*" more rigorous, we will have a satisfying answer to the portion of "the hard problem" that is amenable to scientific and mathematical investigations.
*neural topology : the idea that, in high dimensional sensory spaces, the distribution of probable stimuli occupies a reduced subset of said high dimensional space, and that one can move about this subset in a differentiable manner to transition smoothly between probable stimuli. This is a vague notion. It is related to "statistical structure" and "manifold", although I should note that we don't have enough information to say that the space of probable sensory-motor states is actually a manifold.
This has been bothering me for some time, and rather than go through and read the literature I'm just going to dump speculation here.
It seems like it should be possible to derive a general theory for embedding cortical maps. At its simplest, I am referring to the problem of embedding manifolds with arbitrary topology into the surface of the brain. I understand that "low distortion embeddings" of high dimensional spaces are reasonably well studied, and I think in some instances it might be as simple as naïvely applying mathematical notions of "low distortion embedding" to the embedding of manifolds in cortex.
( Side note : in an earlier conversation with Beck, it was noted that, if your space is high dimensional, and your points few, arbitrary embedding is about as good as the optimal low distortion embedding. I think there definitely are high dimensional spaces that only need to encode a relatively sparse set of points that are pretty much randomly organized. Olfactory bulb may be an example : 1000 dimensional vector space, and most attempts to make some sort of map on the surface of the olfactory bulb have failed. However, this could simply mean that high dimensional spaces never embed with low distortion, so random embeddings are getting close to optimal, but optimal is still bad. )
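This claim is easy to poke at numerically. Below is a throwaway sketch (toy data, arbitrary dimensions) that measures how much a purely random linear projection distorts pairwise distances among a few points in a high-dimensional space; comparing the spread of distortion ratios for, say, k = 10 versus k = 2 gives a feel for how bad "optimal is still bad" can get.

```python
# Toy check: distance distortion of a random linear projection of a few
# points from a 1000-dimensional space. All dimensions are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 1000, 10, 50               # ambient dim, target dim, # of points
X = rng.standard_normal((n, d))
P = rng.standard_normal((d, k)) / np.sqrt(k)   # random projection
Y = X @ P

def pairwise(Z):
    D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    return D[np.triu_indices(len(Z), 1)]

ratio = pairwise(Y) / pairwise(X)
print(ratio.min(), ratio.max())      # spread near 1 means low distortion;
                                     # try k = 2 to watch it fall apart
```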
Anyway, places where you typically want to think about low distortion embeddings : primary sensory areas are somewhat obvious, and retinotopic, somatotopic, and tonotopic maps are well studied.
So, what I'm talking about here is more interesting than, say, the problem of embedding a spherical globe into a two-dimensional map. Visual and somatosensory data have an obvious manifold structure because they come from sensory organs that are themselves manifolds: sheets of receptors. However, the information carried in these sensory streams has a more complex structure, and we ultimately see organization in cortex that reflects this structure.
Let's use the visual system as an example. First, visual information comes in from the retina, which is, to first approximation, a hemispherical sheet. Ignore foveal magnification, and just say that this sheet basically ends up being squished, stretched, and flattened onto most primary and secondary visual processing areas. The shape changes, but neighborhoods are preserved.
But there's also all this natural structure in the information coming from the retina. First of all, you've got brightness, yellow-blue opponency, and red-green opponency, so that's three channels effectively forming our familiar three dimensional color space. I'm not sure the brain actually does anything particularly fancy with color information, but basically what's coming into the brain is already this kind of function from a disc to three dimensional color space f:ℝ²→ℝ³.
The really interesting thing about embedding visual space in cortex happens when you start trying to extract low level features. Forget about color for now, it's confusing. For now, let's just assume that these low level features are oriented edges. We have to represent a function from the visual field ℝ² (or maybe ℂ would do, or ℝ⁺×𝕋, you know, something two dimensional) to the circle group (apparently called 𝕋). I'm being vague here: something that looks like ℝ²→𝕋.
If you've made it this far and aren't enraged by my bastardized notation, you might have noticed that I'm dropping a component from this visual-orientation space, which is the salience of an oriented edge. The brain represents this as firing rate, but there's no immediately obvious reason why salience should be the component that gets represented in firing rate, and not, say, orientation. It's obvious that firing rate would not work for representing location in the visual field, since it's quite common to have two points in the visual field contain bars of the same orientation, but it's impossible for one point in the visual field to contain two bars of the same orientation but different salience.
Incoming visual information on oriented edges takes on the form f:ℝ²→(ℝ⁺×𝕋), so we end up needing to embed a space shaped like ℝ²×(ℝ⁺×𝕋) into cortex, which can be represented simply as a function from a manifold sheet (cortex) to a positive* scalar firing rate** f:ℝ²→ℝ⁺. I'm not sure how to state this formally, but it seems natural that when embedding f:ℝ²→(ℝ⁺×𝕋) in f:ℝ²→ℝ⁺ the ℝ²×𝕋 (position and orientation) information is going to have to get flattened into ℝ², preferably with minimal distortion.
There is no uniquely optimal way to choose this embedding***. This is evidenced by the fact that orientation-selective patches end up forming bands in some animals, and neat little hexagonal "hypercolumns" in others, and sometimes even a mixture of both [citation needed]. The problem is complicated, of course, by the fact that orientation maps aren't the only thing being embedded in primary visual cortex. In reality, the space you are trying to embed still contains color information, and rather than oriented edges you have this over-complete space of temporally modulated Gabor wavelets****, all of which still needs to get squished into f:ℝ²→ℝ⁺. Oh, also there are two eyes that need to fit into one cortex, hence the ocular dominance columns.
Naïve models of so-called "orientation column" formation consider simply the problem of embedding ℝ²×𝕋 in ℝ². These models can reproduce some of the orientation maps we see, but are unsatisfactory. Whenever I run simulations (code lost, hearsay) of this phenomenon, I get disorganized columns that eventually converge to stripes if I let the simulation sit long enough. We do see this in some animals, but in many species orientation preference has a crystalline hexagonal packing. At some point in the past, this was thought to indicate a regular periodic organization of cortex. We now know***** that this structure is due simply to the learning rules and the act of embedding ℝ²×𝕋 in ℝ².
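Since my code is lost, here is a reconstruction of roughly what such a naïve simulation looks like: a Kohonen-style self-organizing map whose feature vectors carry position plus an orientation encoded as a point on a circle (to respect the topology of 𝕋). Parameters and schedules are guesses for illustration, not a replication of any published model.

```python
# Naive Kohonen-style embedding of R^2 x T (position x orientation) into a
# 2D cortical sheet. All parameters are guesses, not a published model.
import numpy as np

N, r = 64, 0.15    # sheet size; relative weight of the orientation feature
rng  = np.random.default_rng(1)

# Feature vector per unit: (x, y, q1, q2), where (q1, q2) = r*(cos 2θ, sin 2θ)
# encodes orientation on a circle, doubling the angle because orientation
# is π-periodic.
W = np.zeros((N, N, 4))
W[..., 0], W[..., 1] = np.meshgrid(np.linspace(0, 1, N),
                                   np.linspace(0, 1, N), indexing='ij')
W[..., 2:] = 0.01 * rng.standard_normal((N, N, 2))

for step in range(50000):
    theta = rng.uniform(0, np.pi)
    v = np.array([rng.uniform(), rng.uniform(),
                  r * np.cos(2*theta), r * np.sin(2*theta)])  # random stimulus
    i, j = np.unravel_index(((W - v)**2).sum(-1).argmin(), (N, N))  # winner
    sig  = 1.0 + 5.0 * np.exp(-step / 20000)    # shrinking neighborhood
    yy, xx = np.ogrid[0:N, 0:N]
    h = np.exp(-((yy - i)**2 + (xx - j)**2) / (2 * sig**2))
    W += 0.02 * h[..., None] * (v - W)          # pull neighborhood toward v

orientation_map = 0.5 * np.arctan2(W[..., 3], W[..., 2])  # preferred angle
```

Consistent with the complaint above, models of this flavor tend to give disorganized columns that drift toward stripes if run long enough; getting crystalline hexagonal packing presumably requires different learning rules or feature weightings.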
Ok, yeah, I'm out of ideas here. I guess I'll leave off with: I'm not sure if anyone has tried to model embedding the space of complex-cell receptive fields into ℝ², or tried to construct a nice story about why the space might be embedded as we observe. I'm also not sure if anyone has successfully reasoned about how significantly more complex spaces might end up embedded in cortex. I think in areas like IT, which is supposed to respond to specific objects, the reaction is "well, the space of possible objects is so ridiculously complex there's no way you could flatten it reasonably, so it's probably just all mashed in there". Perhaps there are spaces with intermediate complexity that we can look at, perhaps interesting spaces over in parietal lobe that partially represent both visual and motor spaces.
I... guess I'll go try to read more papers.
*it's actually non-negative but ℝ∖ℝ⁻ wasn't as stylish. Can we just exclude 0 due to "spontaneous spiking"?
**Many simplifications. First of all, I'm not sure we can prove it's even firing rate and not something like spike timing that neurons are using to code; second of all, neurons have receptive fields that depend on the modulation of a stimulus in time. So, time is another dimension here that I have no idea how to treat formally ( if you can call this nonsense formal ).
***I guess
****I'd say it's not completely clear that V1 complex cells _are_ temporally modulated Gabor wavelets, but rather that they seem to more or less resemble such wavelets, so we just stopped right there and declared the problem solved.
*****by "we now know" I mean "i assume, I'll look for a reference later"
I'm not talking about the LRAD*; this is far more interesting. Thingiverse user neurothing ( or should I say Brown University neuroscientist Seth S. Horowitz ) lists among his interests "creating neurosensory algorithms for weapons class sound". In addition to the implied military applications, it seems he can use a mysterious sound algorithm to modulate emotion, attention, and state of mind. He has started a company called neuropop to commercialize these effects:
Our proprietary Neurosensory Algorithm (NSA) technology can be used to modify virtually any sound or music to activate specific parts of the listener’s brain to encourage sleep, reduce stress, enhance attention, or create specific mood-states. We are currently developing health and wellness, entertainment, and gaming products.
Finally, I feel like I'm living in the future. I am, of course, reminded of BLIT. Neuropop claims uses such as meditation, altering state of mind, enhancing the effectiveness of advertisements, and providing music for movies and video games better tuned to manipulate the listener. You can even test out some of the simpler effects on their website. If you guessed that binaural beats have something to do with this, you're right, but they also use a diversity of other techniques, not all of which appear to be published, to achieve much more than binaural beats are capable of. Having not (yet) experienced their auditory stimuli, I can't verify that their technique works. However, having seen what a couple of colored flashing LEDs can do to my visual cortex, I'd wager their effects are real. The big question, then, is whether or not Professor Horowitz was joking about the "weapons class sound".
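Binaural beats, at least, are easy to generate yourself. Here is a minimal sketch: two pure tones a few Hz apart, one per ear, which the brain perceives as an illusory beat at the difference frequency. The carrier and beat frequencies are arbitrary choices, and this obviously reflects nothing of Neuropop's proprietary algorithms.

```python
# Minimal binaural beat generator: one pure tone per ear, a few Hz apart.
# Frequencies are arbitrary illustrative choices.
import numpy as np, wave

fs, dur   = 44100, 10.0     # sample rate (Hz), duration (s)
f_carrier = 220.0           # base tone, Hz
f_beat    = 10.0            # perceived beat, Hz (alpha band, arbitrarily)

t     = np.arange(int(fs * dur)) / fs
left  = np.sin(2 * np.pi * f_carrier * t)
right = np.sin(2 * np.pi * (f_carrier + f_beat) * t)
pcm   = (0.3 * 32767 * np.stack([left, right], -1)).astype(np.int16)

with wave.open('binaural.wav', 'wb') as w:
    w.setnchannels(2)       # stereo: the effect requires headphones,
    w.setsampwidth(2)       # so each ear gets only its own tone
    w.setframerate(fs)
    w.writeframes(pcm.tobytes())
```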
*Having witnessed the LRAD in deployment I should note that some young folk these days are immune to it, due to a decade or more of loud rock concerts. However, if your hearing happens to be intact this machine will quickly induce the accumulated damage of a decade of loud rock concerts (hyperbole), which is painful. I imagine ultimate military or riot control applications of Horowitz's sound algorithms might be piped through something such as the LRAD, but perhaps could induce confusion at a less damaging decibel level.