
NEURONAL ENSEMBLE MODELING AND ANALYSIS WITH VARIABLE ORDER MARKOV MODELS / A.G. Zippo ; tutor: Bruno Apolloni ; relatore: Gabriele E.M. Biella ; coordinatore: Vincenzo Capasso. Università degli Studi di Milano, 2010 Dec 17 (23° ciclo, Anno Accademico 2010). [10.13130/zippo-antonio-giuliano_phd2010-12-17].

NEURONAL ENSEMBLE MODELING AND ANALYSIS WITH VARIABLE ORDER MARKOV MODELS

A.G. Zippo
2010

Abstract

Neuronal cells (neurons) transmit signals mainly by action potentials, or spikes. Neuronal electrical activity is recorded from experimental animals by microelectrodes placed in specific brain areas. These fast electrochemical phenomena occur as all-or-none events and can therefore be analyzed as boolean sequences. Following this approach, several computational analyses have reported highly variable neuronal behaviors expressed through a large variety of firing patterns. These patterns have been modeled as symbolic strings with a number of different techniques. As a rule, single neurons or neuronal ensembles can be treated as unknown discrete symbol sources S = ⟨Σ, P⟩, where Σ is the source alphabet and P is the unknown symbol probability distribution. Within the hierarchy of Markov Models (MMs), Markov Chains and Hidden MMs have been widely employed to model neuronal recording data. However, due to the highly complex dynamic profiles of single neuron (SN) and neuronal ensemble (NE) firing patterns, those models fail to capture biologically relevant dynamical features. K-order MMs could overcome these failures, but their time and space computational complexity makes them unfeasible. Variable Order MMs (VOMMs) sidestep these restrictions by confining modeling to the effective symbols of a given sequence up to a maximum order D. Formally, a VOMM is characterized by a pair ⟨s, D⟩, where s ∈ Σ* is the training sequence, and it returns P̂, an estimate of P from source S. Given an arbitrary finite sequence s ∈ Σ*, delivered by a generic source S, a VOMM builds a structure for S. Once a structure has been captured (or learned), it may undergo tasks like prediction, compression or, again, analysis. Thus, a lossless compression algorithm derived from a VOMM can perform prediction tasks, and every prediction algorithm can perform compression tasks.
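The variable-order idea above can be illustrated with a minimal sketch (not the thesis implementation): count every context of length at most D seen in the training sequence, then predict each symbol from the longest matching context, backing off to shorter contexts when counts are sparse. The class name, the Laplace smoothing, and the binary alphabet in the usage line are all illustrative assumptions.

```python
from collections import defaultdict


class SimpleVOMM:
    """Minimal variable-order Markov sketch: counts every context of
    length <= D seen in the training sequence, then estimates a symbol's
    probability from the longest matching context, backing off to
    shorter contexts (and finally a uniform prior) when data is sparse."""

    def __init__(self, alphabet, D):
        self.alphabet = list(alphabet)
        self.D = D
        # context string -> {symbol -> count}
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, s):
        for i in range(len(s)):
            for d in range(min(i, self.D) + 1):
                self.counts[s[i - d:i]][s[i]] += 1

    def prob(self, sym, history):
        # back off from the longest usable context to the empty one
        for d in range(min(len(history), self.D), -1, -1):
            ctx = history[len(history) - d:]
            total = sum(self.counts[ctx].values())
            if total > 0:
                # Laplace smoothing keeps unseen symbols at nonzero mass
                return (self.counts[ctx][sym] + 1) / (total + len(self.alphabet))
        return 1.0 / len(self.alphabet)


# Usage: on an alternating binary sequence, '1' becomes far more likely after '0'.
model = SimpleVOMM("01", D=3)
model.train("0101010101")
```

Because the back-off depth depends only on the context (not the symbol), the smoothed estimates for a fixed history always sum to one, as a proper conditional distribution must.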
Statistically Based Compression Algorithms (SBCAs) build a prefix tree to estimate symbol probabilities, combining via the chain rule the conditional probability of a symbol given the d previous symbols (d ≤ D). Following the issues discussed above, I took into consideration three SBCAs: Prediction by Partial Matching (PPM), Context-Tree Weighting (CTW) and the Probabilistic Suffix Tree (PST). The prediction capability of these algorithms can be exploited in at least two ways: i) to draw a similarity function between experiments and ii) to analyze the changes of stationary phase in the dynamics of specific experiments from SN or NE datasets. Predictive accuracy can be measured by functions like the average log-loss (self-information). The average log-loss measures the average compression rate of s under its estimated distribution P̂, and hence the accuracy of P̂ as a prediction of P. Once a VOMM is trained on a sequence from source A, the average log-loss between the obtained VOMM and an arbitrary sequence from another source B approximates their similarity measure μ(A, B). When the sequences represent whole recording experiments, the VOMMs quantify the similarity between different recordings; when the sequences represent contiguous subsequences of a recorded experiment, the VOMMs can detect the switching between stationary phases through peaks in the average log-loss. These VOMMs can also measure the information redundancy present in the sequence. This application, as shown in the results, is particularly relevant for neurophysiologists and yields significant findings when applied to recordings from chronic pain animal models. To confirm, by other estimation paths, the similarity measure between whole recording stages, I chose to introduce a more computationally efficient similarity measure, the Normalized Compression Distance (NCD), based on widely acknowledged fast compressors like gzip, bzip2, lzma and others.
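The two quantities above admit compact sketches. The average log-loss is −(1/|s|) Σᵢ log₂ P̂(sᵢ | s₁…sᵢ₋₁), and the NCD of two sequences x, y under a compressor C is (C(xy) − min(C(x), C(y))) / max(C(x), C(y)). The sketch below assumes a generic predictor exposed as a callable `prob_fn(sym, history)`; the function names are illustrative, and `zlib` stands in for gzip (swap in `bz2.compress` or `lzma.compress` for the other compressors mentioned).

```python
import math
import zlib


def avg_log_loss(prob_fn, s):
    """Average log-loss in bits per symbol: -(1/|s|) * sum_i log2 P(s[i] | s[:i]).
    Lower values mean the sequence is better predicted, i.e. closer to the
    source the model was trained on."""
    bits = 0.0
    for i, sym in enumerate(s):
        bits -= math.log2(prob_fn(sym, s[:i]))
    return bits / len(s)


def ncd(x, y, compress=zlib.compress):
    """Normalized Compression Distance between byte strings x and y:
    (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), near 0 for very
    similar sequences and near 1 for unrelated ones."""
    cx, cy = len(compress(x)), len(compress(y))
    cxy = len(compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

For example, a memoryless fair-coin predictor (`prob_fn` returning 0.5 for a binary alphabet) yields an average log-loss of exactly 1 bit per symbol, the incompressible baseline; a trained VOMM should score below that on sequences from its own source.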
The results obtained with these methods come (i) from the Ventrobasal Thalamic Nuclei (VB) and the Somatosensory Cortex (SSI) in Chronic Pain Animals (CPAs), (ii) from the Primary Visual (V1) and Somatosensory (SSI) cortices in the rat and, finally, (iii) from IL nuclei of the human Thalamus in patients suffering from disorders of consciousness such as the Persistent Vegetative State (PVS) and the Minimally Conscious State (MCS).
APOLLONI, BRUNO
CAPASSO, VINCENZO
Variable Order Markov Models; Neuronal signals; Lossless Compression Algorithms; Cortical Spontaneous Activity; Chronic Pain
Settore INF/01 - Informatica
Settore ING-INF/05 - Sistemi di Elaborazione delle Informazioni
Settore ING-INF/06 - Bioingegneria Elettronica e Informatica
Doctoral Thesis
File: phd_unimi_R07601.pdf (full doctoral thesis, open access, Adobe PDF, 5.58 MB)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/150077