Learning more by sampling less: subsampling effects are model specific
- Poster presentation: Twenty Second Annual Computational Neuroscience Meeting: CNS*2013. Paris, France. 13-18 July 2013.
When studying real-world complex networks, one rarely has full access to all their components. For example, the central nervous system of the human consists of about 10^11 neurons, each connected to thousands of other neurons. Of these 100 billion neurons, at most a few hundred can be recorded in parallel. Thus observations are hampered by immense subsampling. While subsampling does not affect the observables of single-neuron activity, it can heavily distort observables that characterize interactions between pairs or groups of neurons. Without a precise understanding of how subsampling affects these observables, inference on neural network dynamics from subsampled neural data remains limited.
We systematically studied subsampling effects in three self-organized critical (SOC) models, since this class of models can reproduce the spatio-temporal structure of spontaneous activity observed in vivo [2,3]. The models differed in their topology and in their precise interaction rules. The first model consisted of locally connected integrate-and-fire units, thereby resembling cortical activity propagation mechanisms. The second model had the same interaction rules but random connectivity. The third model had local connectivity but different activity propagation rules. As a measure of network dynamics, we characterized the spatio-temporal waves of activity, called avalanches. Avalanches are characteristic for SOC models and neural tissue. Avalanche measures A (e.g. size, duration, shape) were calculated for the fully sampled and the subsampled models. To mimic subsampling in the models, we considered the activity of a subset of units only, discarding the activity of all the other units.
Under subsampling, the avalanche measures A depended on three main factors: First, A depended on the interaction rules of the model and its topology; thus each model showed its own characteristic subsampling effects on A. Second, A depended on the number of sampled sites n. With small and intermediate n, the true A could not be recovered in any of the models. Third, A depended on the distance d between sampled sites. With small d, A was overestimated, while with large d, A was underestimated.
Since the observables under subsampling depended on the model's topology and interaction mechanisms, we propose that systematic subsampling can be exploited to compare models with neural data: when changing the number of and the distance between electrodes in neural tissue and sampled units in a model analogously, the observables in a correct model should behave the same as in the neural tissue. Thereby, incorrect models can easily be discarded. Thus, systematic subsampling offers a promising and unique approach to model selection, even if brain activity is far from being fully sampled.
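The subsampling procedure described above, evaluating avalanche measures on a randomly chosen subset of units while discarding all other activity, can be sketched as follows. This is a generic illustration on a toy binary raster, not an implementation of any of the three SOC models; the avalanche definition used (a maximal run of non-silent time bins) is the standard one.

```python
import numpy as np

def avalanche_sizes(raster):
    """Avalanche sizes from a binary raster (time bins x units).
    An avalanche is a maximal run of time bins with at least one
    active unit, delimited by silent bins; its size is the total
    number of activations within the run."""
    active = raster.sum(axis=1)          # activity per time bin
    sizes, current = [], 0
    for a in active:
        if a > 0:
            current += a
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return sizes

def subsample(raster, n, rng):
    """Keep the activity of n randomly chosen units, discard the rest."""
    idx = rng.choice(raster.shape[1], size=n, replace=False)
    return raster[:, idx]

rng = np.random.default_rng(0)
full = (rng.random((1000, 100)) < 0.02).astype(int)   # toy raster
print(len(avalanche_sizes(full)), len(avalanche_sizes(subsample(full, 10, rng))))
```

Under subsampling, avalanches that are actually connected in the full system can appear as several smaller ones, which is one route by which the measured size distribution gets distorted.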
Distributed fading memory for stimulus properties in the primary visual cortex
- It is currently not known how distributed neuronal responses in early visual areas carry stimulus-related information. We made multielectrode recordings from cat primary visual cortex and applied methods from machine learning in order to analyze the temporal evolution of stimulus-related information in the spiking activity of large ensembles of around 100 neurons. We used sequences of up to three different visual stimuli (letters of the alphabet) presented for 100 ms and with intervals of 100 ms or larger. Most of the information about visual stimuli extractable by sophisticated methods of machine learning, i.e., support vector machines with nonlinear kernel functions, was also extractable by simple linear classification such as can be achieved by individual neurons. New stimuli did not erase information about previous stimuli. The responses to the most recent stimulus contained about equal amounts of information about both this and the preceding stimulus. This information was encoded both in the discharge rates (response amplitudes) of the ensemble of neurons and, when using short time constants for integration (e.g., 20 ms), in the precise timing of individual spikes (within ~20 ms), and it persisted for several hundred milliseconds beyond the offset of stimuli. The results indicate that the network from which we recorded is endowed with fading memory and is capable of performing online computations utilizing information about temporally sequential stimuli. This result challenges models assuming frame-by-frame analyses of sequential inputs.
Emergence of the mitochondrial reticulum from fission and fusion dynamics
Valerii M. Sukhorukov
Andreas S. Reichert
- Mitochondria form a dynamic tubular reticulum within eukaryotic cells. Currently, quantitative understanding of its morphological characteristics is largely absent, despite major progress in deciphering the molecular fission and fusion machineries shaping its structure. Here we address the principles of formation and the large-scale organization of the cell-wide network of mitochondria. On the basis of experimentally determined structural features we establish the tip-to-tip and tip-to-side fission and fusion events as dominant reactions in the motility of this organelle. Subsequently, we introduce a graph-based model of the chondriome able to encompass its inherent variability in a single framework. Using both mean-field deterministic and explicit stochastic mathematical methods we establish a relationship between the chondriome structural network characteristics and underlying kinetic rate parameters. The computational analysis indicates that mitochondrial networks exhibit a percolation threshold. Intrinsic morphological instability of the mitochondrial reticulum resulting from its proximity to the percolation transition is proposed as a novel mechanism that can be utilized by cells for optimizing their functional competence via dynamic remodeling of the chondriome. The detailed size distribution of the network components predicted by the dynamic graph representation introduces a relationship between chondriome characteristics and cell function. It forms a basis for understanding the architecture of mitochondria as a cell-wide but inhomogeneous organelle. Analysis of the reticulum adaptive configuration offers a direct explanation for its impact on numerous physiological processes strongly dependent on mitochondrial dynamics and organization, such as efficiency of cellular metabolism, tissue differentiation and aging.
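The percolation behaviour can be illustrated with a drastically simplified stand-in for the graph model: here each potential edge (fusion bond) is treated as independently present with a probability set by the fusion/fission balance, ignoring the degree constraints and the distinct tip-to-tip versus tip-to-side reactions of the actual chondriome model. Even this caricature shows the characteristic jump in the giant component once the mean degree crosses one.

```python
import random
from collections import deque

def largest_component(n, edges):
    """Size of the largest connected component, found by BFS."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, best = [False] * n, 0
    for s in range(n):
        if seen[s]:
            continue
        seen[s], size, q = True, 1, deque([s])
        while q:
            for w in adj[q.popleft()]:
                if not seen[w]:
                    seen[w] = True
                    size += 1
                    q.append(w)
        best = max(best, size)
    return best

def steady_state_giant(n, k_fus, k_fis, rng):
    """Fraction of nodes in the giant component when each potential
    edge is present with probability p = k_fus / (k_fus + k_fis),
    i.e. the detailed-balance point of independent fusion/fission."""
    p = k_fus / (k_fus + k_fis)
    edges = [(u, v) for u in range(n) for v in range(u + 1, n)
             if rng.random() < p]
    return largest_component(n, edges) / n

rng = random.Random(0)
for k_fus in (0.5, 2.5):   # below vs above the percolation threshold
    print(k_fus, steady_state_giant(300, k_fus, 299, rng))
```

Sweeping the fusion rate across the threshold is the cheapest way to see why a reticulum poised near this transition can be restructured globally by small changes in the kinetic rates.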
Network self-organization explains the statistics and dynamics of synaptic connection strengths in cortex
- The information processing abilities of neural circuits arise from their synaptic connection patterns. Understanding the laws governing these connectivity patterns is essential for understanding brain function. The overall distribution of synaptic strengths of local excitatory connections in cortex and hippocampus is long-tailed, exhibiting a small number of synaptic connections of very large efficacy. At the same time, new synaptic connections are constantly being created and individual synaptic connection strengths show substantial fluctuations across time. It remains unclear through what mechanisms these properties of neural circuits arise and how they contribute to learning and memory. In this study we show that fundamental characteristics of excitatory synaptic connections in cortex and hippocampus can be explained as a consequence of self-organization in a recurrent network combining spike-timing-dependent plasticity (STDP), structural plasticity and different forms of homeostatic plasticity. In the network, associative synaptic plasticity in the form of STDP induces a rich-get-richer dynamics among synapses, while homeostatic mechanisms induce competition. Under distinctly different initial conditions, the ensuing self-organization produces long-tailed synaptic strength distributions matching experimental findings. We show that this self-organization can take place with a purely additive STDP mechanism and that multiplicative weight dynamics emerge as a consequence of network interactions. The observed patterns of fluctuation of synaptic strengths, including elimination and generation of synaptic connections and long-term persistence of strong connections, are consistent with the dynamics of dendritic spines found in rat hippocampus. Beyond this, the model predicts an approximately power-law scaling of the lifetimes of newly established synaptic connection strengths during development. 
Our results suggest that the combined action of multiple forms of neuronal plasticity plays an essential role in the formation and maintenance of cortical circuits.
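The rich-get-richer dynamics with homeostatic competition described above can be caricatured in a few lines. This toy replaces the recurrent spiking network with a single weight vector: potentiation is additive but lands on a synapse with probability proportional to its current weight (strong synapses drive more postsynaptic spikes), divisive normalization enforces competition, and pruning-plus-regrowth of weak synapses mimics structural plasticity. All parameter values are illustrative, not fitted to the model in the study.

```python
import random

def self_organize(n_syn=200, steps=5000, dw=0.01, w_new=0.01, seed=0):
    """Toy caricature of rich-get-richer STDP under homeostatic
    competition and structural plasticity. Returns the weight vector
    after self-organization."""
    rng = random.Random(seed)
    w = [w_new] * n_syn
    total = float(n_syn) * w_new
    for _ in range(steps):
        i = rng.choices(range(n_syn), weights=w)[0]     # rich get richer
        w[i] += dw                                      # additive potentiation
        scale = total / sum(w)
        w = [x * scale for x in w]                      # homeostatic scaling
        w = [w_new if x < w_new / 2 else x for x in w]  # prune and regrow
    return w

w = self_organize()
print(max(w) / (sum(w) / len(w)))   # tail heaviness of the weight distribution
```

Note that the potentiation rule itself is purely additive; any multiplicative-looking statistics arise from the selection-plus-normalization loop, echoing the abstract's point that multiplicative weight dynamics can emerge from network interactions.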
Feedforward inhibition and synaptic scaling - two sides of the same coin?
- Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing.
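The correspondence between circuit operations and maximum-likelihood learning can be made concrete with a textbook EM algorithm for a normalized Poisson mixture; this is a generic sketch of the model class named above, not the paper's derivation. The E-step posterior plays the role of competition through lateral inhibition, the Hebbian-like M-step updates the rate patterns, and renormalizing each pattern to a fixed total mirrors synaptic scaling.

```python
import numpy as np

def poisson_mixture_em(X, K, n_iter=50, seed=0):
    """EM for a mixture of Poisson observation vectors. Each hidden
    cause k has a rate pattern W[k], kept normalized to a common total
    A (the "synaptic scaling" analogue). Returns the patterns W and
    the responsibilities R."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    A = X.sum(axis=1).mean()                     # common total rate
    W = rng.random((K, D)) + 1.0
    W = A * W / W.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: soft competition between causes (lateral inhibition)
        logp = X @ np.log(W).T - W.sum(axis=1)   # log-likelihood + const
        logp -= logp.max(axis=1, keepdims=True)
        R = np.exp(logp)
        R /= R.sum(axis=1, keepdims=True)
        # M-step: Hebbian-like update of rate patterns, then rescale
        W = R.T @ X / (R.sum(axis=0) + 1e-12)[:, None]
        W = np.maximum(W, 1e-8)
        W = A * W / W.sum(axis=1, keepdims=True)
    return W, R
```

Because every pattern is rescaled to the same total A, the term -W.sum(axis=1) is constant across causes, so the competition depends only on pattern shape, one way to read the claim that learning effectively takes place on the normalized stimulus subspace.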
Synchronization hubs may arise from strong rhythmic inhibition during gamma oscillations in primary visual cortex: poster presentation from the Twentieth Annual Computational Neuroscience Meeting: CNS*2011, Stockholm, Sweden, 23-28 July 2011
Stefanos E. Folias
Jonathan E. Rubin
- Parallel multiunit recordings from V1 in anesthetized cat were collected during the presentation of random sequences of drifting sinusoidal gratings at 12 fixed orientations while gamma oscillations were present. In agreement with seminal work, most units were orientation selective to varying degrees, and synchronization was evident in spike train cross-correlograms computed between units with similar preferred orientations, particularly during the presentation of optimal stimuli. Interestingly, a subset of units, which we refer to as synchronization hubs, were additionally found to synchronize with units having differing preferred orientations, consistent with a previous study. Moreover, oscillatory patterning in spike train autocorrelograms was strongest in units denoted as synchronization hubs, and synchronization hubs also tended to have narrower tuning curves relative to other units. We used simplified computational models of small networks of V1 neurons to demonstrate that neurons subject to sufficiently strong inhibitory input can function as synchronization hubs. Neurons were endowed either with integrate-and-fire or conductance-based dynamics, and each neuron received a combination of excitatory (AMPA) synaptic inputs that were Poisson-distributed and inhibitory (GABA) inputs that were coherent in the gamma-frequency range. If the strength of rhythmic inhibition was increased for a subset of neurons in the network, and excitation was increased simultaneously to maintain a fixed firing rate, then these neurons produced stronger oscillatory patterning in their discharge probabilities. The oscillations in turn synchronized these neurons with other neurons in the network.
Importantly, the strength of synchronization increased with neurons of differing orientation preferences even though no direct synaptic coupling existed between the hubs and the other neurons. Enhanced levels of inhibition account for the emergence of synchronization hubs in the following way: inhibitory inputs exhibiting a gamma rhythm determine a time window within which a cell is likely to discharge. Increased levels of inhibition narrow this window further, simultaneously leading to (i) even stronger oscillatory patterning of the neuron's activity and (ii) enhanced synchronization with other neurons. This enables synchronization even between cells with differing orientation preferences. Additionally, the same increased levels of inhibition may be responsible for the narrow tuning curves of hub neurons. In conclusion, synchronization hubs may be the cells that interact most strongly with the network of inhibitory interneurons during gamma oscillations in primary visual cortex.
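The mechanism, in which stronger gamma-rhythmic inhibition narrows the discharge window and thereby sharpens oscillatory patterning, can be reproduced with a minimal integrate-and-fire sketch. All parameters (40 Hz rhythm, input rates, input strengths) are illustrative stand-ins, not the study's fitted values; phase locking is quantified by the vector strength of spike phases relative to the gamma cycle.

```python
import math, random

def lif_gamma(g_inh, g_exc, T=5.0, dt=1e-4, seed=0):
    """Leaky integrate-and-fire neuron driven by Poisson excitatory
    kicks and inhibition modulated at gamma frequency (40 Hz).
    Returns the vector strength of spike phases, a standard measure
    of locking (0 = uniform phases, 1 = perfect locking)."""
    rng = random.Random(seed)
    tau, v_th, v_reset = 0.02, 1.0, 0.0
    f = 40.0                 # gamma frequency (Hz)
    rate_exc = 2000.0        # Poisson excitatory input rate (Hz)
    v, phases = 0.0, []
    for step in range(int(T / dt)):
        t = step * dt
        kick = g_exc if rng.random() < rate_exc * dt else 0.0
        inh = g_inh * 0.5 * (1.0 + math.sin(2.0 * math.pi * f * t))
        v += dt * (-v / tau - inh) + kick
        if v >= v_th:
            phases.append((2.0 * math.pi * f * t) % (2.0 * math.pi))
            v = v_reset
    if not phases:
        return 0.0
    c = sum(math.cos(p) for p in phases) / len(phases)
    s = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(c, s)

weak = lif_gamma(g_inh=0.0, g_exc=0.03)
strong = lif_gamma(g_inh=60.0, g_exc=0.045)  # extra excitation keeps mean drive equal
print(weak, strong)
```

As in the abstract, the strongly inhibited neuron receives compensating excitation so that the mean drive matches the weakly inhibited one; the rhythmic gating alone then produces the stronger phase locking.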
Objective identification of residue ranges for the superposition of protein structures
Donata K. Kirchner
- Background: The automation of objectively selecting amino acid residue ranges for structure superpositions is important for meaningful and consistent protein structure analyses. So far there is no widely used standard for choosing these residue ranges for experimentally determined protein structures, where the manual selection of residue ranges or the use of suboptimal criteria remain commonplace. Results: We present an automated and objective method for finding amino acid residue ranges for the superposition and analysis of protein structures, in particular for structure bundles resulting from NMR structure calculations. The method is implemented in an algorithm, CYRANGE, that yields, without protein-specific parameter adjustment, appropriate residue ranges in most commonly occurring situations, including low-precision structure bundles, multi-domain proteins, symmetric multimers, and protein complexes. Residue ranges are chosen to comprise as many residues of a protein domain as possible, such that including further residues would lead to a steep rise in the RMSD value. Residue ranges are determined by first clustering residues into domains based on the distance variance matrix, and then refining for each domain the initial choice of residues by excluding residues one by one until the relative decrease of the RMSD value becomes insignificant. A penalty for the opening of gaps favours contiguous residue ranges in order to obtain a result that is as simple as possible, but not simpler. Results are given for a set of 37 proteins and compared with those of commonly used protein structure validation packages. We also provide residue ranges for 6351 NMR structures in the Protein Data Bank. Conclusions: The CYRANGE method is capable of automatically determining residue ranges for the superposition of protein structure bundles for a large variety of protein structures. The method correctly identifies ordered regions.
Global structure superpositions based on the CYRANGE residue ranges allow a clear presentation of the structure, and unnecessary small gaps within the selected ranges are absent. In the majority of cases, the residue ranges from CYRANGE contain fewer gaps and cover considerably larger parts of the sequence than those from other methods, without significantly increasing the RMSD values. CYRANGE thus provides an objective and automatic method for standardizing the choice of residue ranges for the superposition of protein structures.
Additional files
Additional file 1: Dependence of Q on the order parameter rank. The quantity Qi is plotted against the order parameter rank i for 9 different protein structure bundles.
Additional file 2: Dependence of P on the clustering stage. The quantity Pi is plotted against the clustering stage i for 9 different protein structure bundles.
Additional file 3: Dependence of CYRANGE results on the minimal cluster size parameter my. The sequence coverage (red) and RMSD (blue) of the residue ranges determined by CYRANGE are plotted as a function of my for 9 different protein structure bundles. The dotted vertical line indicates the default value, my = 8. Where CYRANGE found two domains, the RMSD values of the individual domains are shown in light and dark blue.
Additional file 4: Dependence of CYRANGE results on the domain boundary extension parameter m. See Additional file 3 for details.
Additional file 5: Dependence of CYRANGE results on the minimal gap width g. See Additional file 3 for details.
Additional file 6: Dependence of CYRANGE results on the relative RMSD decrease parameter delta. See Additional file 3 for details.
Additional file 7: Dependence of CYRANGE results on the absolute RMSD decrease parameter delta abs. See Additional file 3 for details.
Additional file 8: Dependence of CYRANGE results on the gap penalty parameter gamma. See Additional file 3 for details.
Additional file 9: Correlation between the sequence coverage from CYRANGE, FindCore and PSVS, and the GDT total score, GDT_TS. Each data point represents a protein shown in Figures 3 and 4. The coverage is the percentage of amino acid residues included in the residue ranges found by the different methods. The GDT_TS value is defined by GDT_TS = (P1 + P2 + P4 + P8)/4, where Pd is the fraction of residues that can be superimposed under a distance cutoff of d Å.
Additional file 10: Correlation between the RMSD value for the residue ranges from CYRANGE, FindCore and PSVS, and the GDT total score, GDT_TS. Each data point represents one protein domain. See Additional file 9 for details.
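The GDT_TS definition given for the additional files can be written down directly. This sketch is simplified: it scores one fixed superposition from the per-residue distances, whereas the full GDT procedure additionally searches over superpositions to maximize each Pd.

```python
def gdt_ts(distances):
    """GDT_TS = (P1 + P2 + P4 + P8) / 4, where Pd is the fraction of
    residues whose model-to-reference distance (in Angstrom, after a
    fixed superposition) is at most d."""
    n = len(distances)
    p = lambda d: sum(1 for x in distances if x <= d) / n
    return (p(1) + p(2) + p(4) + p(8)) / 4

print(gdt_ts([0.5, 1.5, 3.0, 9.0]))   # -> 0.5625
```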
Learning the optimal control of coordinated eye and head movements
- Various optimality principles have been proposed to explain the characteristics of coordinated eye and head movements during visual orienting behavior. At the same time, researchers have suggested several neural models to underlie the generation of saccades, but these do not include online learning as a mechanism of optimization. Here, we suggest an open-loop neural controller with a local adaptation mechanism that minimizes a proposed cost function. Simulations show that the characteristics of coordinated eye and head movements generated by this model match the experimental data in many aspects, including the relationship between amplitude, duration and peak velocity in head-restrained conditions, and the relative contribution of eye and head to the total gaze shift in head-free conditions. Our model is a first step towards bringing together an optimality principle and an incremental local learning mechanism into a unified control scheme for coordinated eye and head movements.