A model describing the dynamics of two mutually interacting neurons is considered. Within the Leaky Integrate-and-Fire framework, we include a random component in the synaptic current whose role is to modify the equilibrium point of the membrane potential of one neuron whenever the other one spikes. We give an approximation of the interspike-interval probability density function of both neurons for any parametric configuration that keeps the evolution of the membrane potentials in the so-called subthreshold regime.
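As a rough illustration of the dynamics described above, the following sketch simulates two coupled Leaky Integrate-and-Fire units by Euler-Maruyama. All parameter values, the reset rule, and the additive shift `rho_jump` applied to the partner's equilibrium point are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def simulate_pair(t_max=2.0, dt=1e-4, theta=0.1, sigma=1.0,
                  v_rest=0.0, v_spike=0.5, rho_jump=0.1, seed=0):
    """Euler-Maruyama sketch of two mutually interacting LIF neurons.

    Each membrane potential relaxes toward its equilibrium point rho[i];
    a spike of the partner neuron shifts that equilibrium by rho_jump
    (illustrative coupling, not the paper's exact mechanism).
    """
    rng = np.random.default_rng(seed)
    n = int(t_max / dt)
    v = np.array([v_rest, v_rest], dtype=float)
    rho = np.array([v_rest, v_rest], dtype=float)
    spike_times = [[], []]
    for k in range(n):
        dw = rng.normal(0.0, np.sqrt(dt), size=2)
        v += (-(v - rho) / theta) * dt + sigma * dw
        for i in (0, 1):
            if v[i] >= v_spike:
                spike_times[i].append(k * dt)
                v[i] = v_rest           # reset after the spike
                rho[1 - i] += rho_jump  # shift the partner's equilibrium
    return spike_times

spike_times = simulate_pair()
```

Interspike intervals are then obtained by differencing each neuron's spike-time list, and their empirical histogram can be compared against the approximated density.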
Structural analysis of protein Z gene variants in patients with foetal losses
Caliandro Rocco
;
Nico Giovanni
;
Tiscia Giovanni
;
Favuzzi Giovanni
;
De Stefano Valerio
;
Rossi Elena
;
Margaglione Maurizio
;
Grandone Elvira
The role of protein Z (PZ) in the etiology of human disorders is unclear. A number of PZ gene variants, sporadic or polymorphic and found exclusively in the serine protease domain, have been observed. Crystal structures of PZ in complex with the PZ-dependent inhibitor (PZI) have recently been obtained. The aim of this study was a structural investigation of the PZ serine protease domain, in search of common traits across disease-linked mutations. We performed 10-20 ns molecular dynamics simulations for each of the observed PZ mutants to investigate their structure in aqueous solution. Simulation data were processed by novel tools to analyse residue-by-residue backbone flexibility. Results showed that sporadic mutations are associated with anomalous flexibility of residues belonging to specific regions. Among them, the most important is a loop region in contact with the longest helix I of PZI. Other regions have been identified that show anomalous flexibility associated with potentially protective gene variants. In conclusion, a possible interpretation of the effects associated with the observed gene variants is provided. The exploration of PZ/PZI interactions seems essential for explaining these effects.
Protein Z
mutants
molecular dynamics
flexibility analysis
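The residue-by-residue backbone flexibility analysed above is commonly quantified by per-residue root-mean-square fluctuations (RMSF); the following is a minimal sketch assuming an already-aligned trajectory array (the paper's own analysis tools are more elaborate), with synthetic toy data standing in for MD output.

```python
import numpy as np

def backbone_rmsf(traj):
    """Root-mean-square fluctuation per residue.

    traj: array of shape (n_frames, n_residues, 3) holding backbone
    (e.g. C-alpha) coordinates of an already-aligned trajectory.
    Returns one flexibility value per residue, the kind of profile
    used to flag anomalously mobile loop regions.
    """
    mean = traj.mean(axis=0)                 # average structure
    dev = traj - mean                        # per-frame deviations
    return np.sqrt((dev ** 2).sum(axis=2).mean(axis=0))

# Toy trajectory: 100 frames, 5 residues; residue 2 fluctuates more.
rng = np.random.default_rng(1)
traj = rng.normal(0.0, 0.1, size=(100, 5, 3))
traj[:, 2, :] += rng.normal(0.0, 1.0, size=(100, 3))
rmsf = backbone_rmsf(traj)
```

In such a profile, the highly mobile residue stands out by an order of magnitude, which is the signature that the flexibility analysis looks for in specific regions.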
The modeling of various physical questions in plasma kinetics and heat conduction leads to nonlinear boundary value problems involving a nonlocal operator, such as the integral of the unknown solution, which depends on the entire function over the domain rather than on its value at a single point. This talk concerns a particular nonlocal boundary value problem recently studied in [1] by J.R. Cannon and D.J. Galiffa, who proposed a numerical method based on an interval-halving scheme. Starting from their results, we provide a more general convergence theorem and suggest a different iterative procedure to handle the nonlinearity of the discretized problem.
References:
[1] J.R. Cannon, D.J. Galiffa (2011) On a numerical method for a homogeneous, nonlinear, nonlocal, elliptic boundary problem, Nonlinear Analysis, Vol. 74, pp. 1702-1713.
Nonlocal problem
Boundary value problem
Numerical method
Fixed point
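An iterative treatment of the discretized nonlinearity can be sketched on a model problem. The equation below, with the nonlocal term I(u) frozen at each Picard sweep, is an assumed stand-in for illustration, not the exact problem studied in [1].

```python
import numpy as np

def solve_nonlocal_bvp(n=100, tol=1e-10, max_iter=100):
    """Picard iteration for a model nonlocal BVP (illustrative only):

        -u''(x) = 1 / (1 + I(u)),   u(0) = u(1) = 0,
        I(u) = integral of u over [0, 1].

    Each sweep freezes the nonlocal term I(u), solves the resulting
    linear problem by second-order finite differences, and repeats.
    """
    h = 1.0 / n
    # Dense tridiagonal matrix of -u'' on the interior nodes.
    main = 2.0 * np.ones(n - 1) / h**2
    off = -np.ones(n - 2) / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    u = np.zeros(n + 1)
    for _ in range(max_iter):
        # Trapezoidal rule for the frozen nonlocal term.
        I = h * (0.5 * u[0] + u[1:-1].sum() + 0.5 * u[-1])
        rhs = np.full(n - 1, 1.0 / (1.0 + I))
        u_new = np.zeros(n + 1)
        u_new[1:-1] = np.linalg.solve(A, rhs)
        if np.max(np.abs(u_new - u)) < tol:
            return u_new
        u = u_new
    return u

u = solve_nonlocal_bvp()
```

For this model problem the exact solution is u = c x(1-x)/2 with c solving c^2/12 + c = 1 (c ~ 0.9282), so the iteration can be checked against max u = c/8 ~ 0.1160.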
This paper deals with models of living complex systems, chiefly human crowds, by methods of conservation laws and measure theory. We introduce a modeling framework which enables one to address both discrete and continuous dynamical systems in a unified manner using common phenomenological ideas and mathematical tools as well as to couple these two descriptions in a multiscale perspective. Furthermore, we present a basic theory of well-posedness and numerical approximation of initial-value problems and we discuss its implications on mathematical modeling.
This article concerns the mathematical modelling of living complex systems, in particular human crowds, by means of conservation laws and measure-theoretic methods. We introduce a modelling framework that makes it possible to treat discrete and continuous dynamical systems with common phenomenological ideas and mathematical tools, as well as to couple the two descriptions in a multiscale perspective. We also present a qualitative theory of well-posedness and numerical approximation of initial-value problems and discuss its implications for mathematical modelling.
Outsourced computing is increasingly popular thanks to the effectiveness and convenience of cloud computing *-as-a-Service offerings. However, cloud nodes can potentially misbehave in order to save resources. As such, some guarantee over the correctness and availability of results is needed. Exploiting the redundancy of cloud nodes can help, even though smart cheating strategies make the detection and correction of fake results much harder to achieve in practice.
In this paper, we analyze the above issues and provide a solution for a specific problem that is nevertheless quite representative of a generic class of problems in this setting: computing a vectorial function over a set of nodes. In particular, we introduce AntiCheetah, a novel autonomic multi-round approach that performs the assignment of input elements to cloud nodes as an autonomic, self-configuring and self-optimizing cloud system. AntiCheetah is resilient against misbehaving nodes, and it is effective even in worst-case scenarios and against smart cheaters that follow complex strategies. Further, we discuss the benefits and pitfalls of the AntiCheetah approach in different scenarios. Preliminary experimental results over a custom-built, scalable, and flexible simulator (SofA) show the quality and viability of our solution.
A mathematical model and the simulation of subsoil decontamination by bioventing will be presented.
The bases for the model construction are the following:
(1) the pollutant is considered as immobile and confined in the unsaturated zone;
(2) only oxygen is injected into the subsoil through wells;
(3) the bacteria carrying out the pollutant removal are immobile, and their growth depends on the oxygen and pollutant concentrations.
subsoil decontamination
bioventing
mathematical models
porous media
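A minimal kinetic sketch consistent with assumptions (1)-(3): immobile pollutant and biomass, injected oxygen, and growth limited by both concentrations. The double-Monod form and all parameter values are illustrative assumptions, and transport through the porous medium is omitted.

```python
def bioventing_step(c_o, c_p, b, dt, mu_max=0.5, K_o=0.1, K_p=0.2,
                    y_o=1.0, y_p=1.0, q_inj=0.05):
    """One explicit Euler step of a local (well-mixed) sketch of
    bioventing kinetics: immobile pollutant c_p, injected oxygen c_o,
    immobile biomass b with double-Monod growth. The kinetic form and
    parameter values are assumptions for illustration.
    """
    growth = mu_max * b * (c_o / (K_o + c_o)) * (c_p / (K_p + c_p))
    c_o = max(c_o + dt * (q_inj - growth / y_o), 0.0)  # injection - uptake
    c_p = max(c_p - dt * growth / y_p, 0.0)            # pollutant degraded
    b = b + dt * growth                                # biomass growth
    return c_o, c_p, b

# Run the local dynamics: pollutant decays while biomass builds up.
c_o, c_p, b = 0.2, 1.0, 0.1
for _ in range(2000):
    c_o, c_p, b = bioventing_step(c_o, c_p, b, dt=0.01)
```

In the full model, each well-mixed cell of this kind would be coupled to its neighbours through oxygen transport in the unsaturated porous medium.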
Large-scale simulations of blood flow allow for the optimal evaluation of endothelial shear stress in real-life case studies of cardiovascular pathologies. The procedures for anatomic data acquisition and for geometry and mesh generation are particularly favorable when used in conjunction with the Lattice Boltzmann method and its underlying Cartesian mesh. The methodology can accommodate red blood cells in order to take into account the corpuscular nature of blood in multi-scale scenarios and its complex rheological response, in particular in proximity to the endothelium. Taken together, the Lattice Boltzmann framework has become a powerful computational tool for studying sections of the human circulatory system.
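The Cartesian-mesh structure mentioned above is what makes the Lattice Boltzmann method convenient; a minimal D2Q9 BGK sketch (single-phase, periodic, no red blood cells) illustrates the collision-and-streaming cycle at its core.

```python
import numpy as np

# D2Q9 lattice: standard weights and discrete velocities.
W = np.array([4/9] + [1/9]*4 + [1/36]*4)
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])

def equilibrium(rho, ux, uy):
    """Second-order BGK equilibrium distribution."""
    cu = C[:, 0, None, None] * ux + C[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return W[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau=0.8):
    """One BGK collision + periodic streaming step on a Cartesian mesh."""
    rho = f.sum(axis=0)
    ux = (f * C[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * C[:, 1, None, None]).sum(axis=0) / rho
    f = f - (f - equilibrium(rho, ux, uy)) / tau   # collision
    for i in range(9):                             # streaming
        f[i] = np.roll(np.roll(f[i], C[i, 0], axis=0), C[i, 1], axis=1)
    return f

# Uniform fluid with a small density bump; total mass must be conserved.
f = equilibrium(np.ones((32, 32)), np.zeros((32, 32)), np.zeros((32, 32)))
f[:, 16, 16] *= 1.01
mass0 = f.sum()
for _ in range(10):
    f = lbm_step(f)
```

Production-scale haemodynamics adds complex boundary conditions, body forces, and suspended particles on top of this cycle, but the update rule is unchanged.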
Experimental study on the atmospheric delay based on GPS, SAR interferometry, and numerical weather model data
Pedro Mateus
;
Giovanni Nico
;
Ricardo Tome
;
Joao Catalao
;
Pedro MA Miranda
In this paper, we present the results of an experiment aiming to compare measurements of atmospheric delay obtained by synthetic aperture radar (SAR) interferometry and GPS techniques with estimates from numerical weather prediction. Maps of the differential atmospheric delay are generated by processing a set of interferometric SAR images acquired by the ENVISAT-ASAR mission over the Lisbon region from April to November 2009. GPS measurements of the wet zenith delay are carried out over the same area, covering the time interval between the first and the last SAR acquisition. The Weather Research and Forecasting (WRF) model is used to model the atmospheric delay over the study area at about the same time as the SAR acquisitions. The analysis of the results gives hints for devising approaches to mitigate atmospheric artifacts in SAR interferometry applications.
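The wet zenith delay compared across the three techniques is, in a weather model, obtained by integrating the wet refractivity along the vertical. The sketch below uses commonly quoted refractivity constants; the exact constants and the paper's WRF processing chain may differ.

```python
def zenith_wet_delay(e, T, dz):
    """Zenith wet delay (metres) from layered profiles of water-vapour
    partial pressure e (hPa) and temperature T (K), with layer
    thicknesses dz (m).

    Uses the standard wet refractivity N_w = k2p * e/T + k3 * e/T**2;
    ZWD = 1e-6 * sum(N_w * dz). Constant values vary slightly in the
    literature and are assumptions here.
    """
    k2p = 22.1     # K / hPa   (assumed value)
    k3 = 3.739e5   # K^2 / hPa (assumed value)
    zwd = 0.0
    for ei, ti, dzi in zip(e, T, dz):
        n_wet = k2p * ei / ti + k3 * ei / ti**2
        zwd += 1e-6 * n_wet * dzi
    return zwd

# Toy 3-layer atmosphere drying with height; ZWD of a few decimetres.
zwd = zenith_wet_delay(e=[12.0, 6.0, 1.0], T=[290.0, 270.0, 240.0],
                       dz=[2000.0, 3000.0, 5000.0])
```

SAR interferograms sense the difference of such delays between two acquisition dates, which is why a weather model evaluated at the acquisition times can be used for mitigation.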
This paper presents an innovative approach to maximally disconnecting a given network. More specifically, this work introduces the concept of a Critical Disruption Path, a path between a source and a destination vertex whose deletion minimizes the cardinality of the largest remaining connected component. Network interdiction models seek to optimally disrupt network operations. Existing interdiction models disrupt network operations by removing vertices or edges. We introduce the first problem and formulation that optimally fragments a network by interdicting a path. Areas of study in which this work can be applied include transportation and evacuation networks, surveillance and reconnaissance operations, anti-terrorism activities, drug interdiction, and counter human-trafficking operations. In this paper, we first address the complexity of the Critical Disruption Path problem, and then provide a Mixed-Integer Linear Programming formulation for finding its optimal solution. Further, we develop a tailored Branch-and-Price algorithm that efficiently solves the Critical Disruption Path problem. We demonstrate the superiority of the developed Branch-and-Price algorithm by comparing its results with those of the monolith formulation. In more than half of the test instances that can be solved by both the monolith and our Branch-and-Price algorithm, we outperform the monolith by two orders of magnitude.
Network interdiction
Mixed-Integer Linear Programming
NP-completeness
Branch-and-Price
Cuts
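On toy instances, the Critical Disruption Path can be found by brute force: enumerate all simple s-t paths, delete each path's vertices, and keep the path that minimizes the largest remaining component. This exponential sketch is only a stand-in for the paper's MILP and Branch-and-Price formulations; the graph below is a made-up example.

```python
def simple_paths(adj, s, t, path=None):
    """Yield all simple s-t paths in an undirected graph (adjacency dict)."""
    path = [s] if path is None else path
    if s == t:
        yield list(path)
        return
    for v in adj[s]:
        if v not in path:
            yield from simple_paths(adj, v, t, path + [v])

def largest_component(adj, removed):
    """Size of the largest connected component after vertex deletions."""
    seen, best = set(removed), 0
    for v in adj:
        if v in seen:
            continue
        stack, size = [v], 0
        seen.add(v)
        while stack:
            u = stack.pop()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, size)
    return best

def critical_disruption_path(adj, s, t):
    """Brute-force CDP: the s-t path whose deletion minimizes the
    largest remaining connected component."""
    return min(simple_paths(adj, s, t),
               key=lambda p: largest_component(adj, p))

# Toy graph: two s-t paths; deleting the longer one fragments the graph more.
adj = {0: [1, 2], 1: [0, 4, 5], 2: [0, 3],
       3: [2, 4, 6], 4: [1, 3], 5: [1], 6: [3]}
cdp = critical_disruption_path(adj, 0, 4)
```

Here deleting the path 0-2-3-4 leaves a largest component of size 2, versus 3 for the alternative path, which is exactly the objective the MILP minimizes.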
The development of high-throughput genome-sequencing technology provides a large amount of raw data for studying the regulatory functions of transcription factors (TFs) on gene expression. It is possible to build a classifier system in which the gene expression level, under a certain condition, is regarded as the response variable and features related to TFs are taken as predictive variables. In this paper we consider the family of Instance-Based (IB) classifiers, and in particular the Prototype Exemplar Learning Classifier (PEL-C), because IB classifiers can infer a mixture of representative instances, which can be used to discover the typical epigenetic patterns of transcription factors that explain gene expression levels. As a case study, we consider the gene regulatory system of mouse embryonic stem cells (ESCs). Experimental results show that IB classifier systems can be effectively used for quantitative modelling of gene expression levels, because more than 50% of the variation in gene expression can be explained using the binding signals of 12 TFs; moreover, the PEL-C identifies nine typical patterns of transcription factor activation that provide new insights into the gene expression machinery of mouse ESCs.
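A minimal stand-in for the instance-based idea predicts an expression class from TF-binding signal vectors via one prototype per class. Note that PEL-C selects a mixture of representative instances rather than plain class means, and the data below are synthetic.

```python
import numpy as np

def fit_prototypes(X, y):
    """One prototype per class: the mean TF-binding signal vector of the
    genes in that class (a simplified stand-in for PEL-C's mixture of
    representative instances)."""
    classes = sorted(set(y))
    protos = np.array([X[np.array(y) == c].mean(axis=0) for c in classes])
    return classes, protos

def predict(X, classes, protos):
    """Assign each gene to the class of its nearest prototype."""
    d = ((X[:, None, :] - protos[None, :, :]) ** 2).sum(axis=2)
    return [classes[i] for i in d.argmin(axis=1)]

# Toy data: 12 TF binding signals per gene, two expression classes.
rng = np.random.default_rng(0)
X_low = rng.normal(0.0, 0.3, size=(20, 12))
X_high = rng.normal(1.0, 0.3, size=(20, 12))
X = np.vstack([X_low, X_high])
y = ["low"] * 20 + ["high"] * 20
classes, protos = fit_prototypes(X, y)
pred = predict(X, classes, protos)
```

The appeal of prototype-style classifiers in this setting is interpretability: each learned prototype is itself a pattern of TF activation that can be inspected biologically.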
Trust and reputation systems are decision support tools used to drive parties' interactions on the basis of parties' reputation. In such systems, parties rate each other after each interaction. Reputation scores for each ratee are computed via reputation functions on the basis of the collected ratings. We propose a general framework based on Bayesian decision theory for the assessment of such systems with respect to the number of available ratings. Given a reputation function g and n independent ratings, one is interested in the value of the loss a user may incur by relying on the ratee's reputation as computed by the system. To this purpose, we study the behaviour of both the Bayes and the frequentist risk of reputation functions with respect to the number of available observations. We provide results that characterise the asymptotic behaviour of these two risks, describing their limit values and the exact exponential rate of convergence. One result of this analysis is that decision functions based on Maximum Likelihood are asymptotically optimal. We also illustrate these results through a set of numerical simulations.
trust
reputation
information theory
Bayesian decision theory
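The contrast between Bayes and Maximum-Likelihood reputation estimates, and their agreement as the number of ratings grows, can be sketched in the binary-rating case with a Beta prior (the framework in the paper is more general).

```python
def ml_reputation(ratings):
    """Maximum-likelihood estimate of the probability of a good
    interaction from binary ratings (1 = positive, 0 = negative)."""
    return sum(ratings) / len(ratings)

def bayes_reputation(ratings, alpha=1.0, beta=1.0):
    """Posterior-mean estimate under a Beta(alpha, beta) prior
    (Beta(1, 1), i.e. uniform, by default)."""
    s, n = sum(ratings), len(ratings)
    return (s + alpha) / (n + alpha + beta)

ratings = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]   # 8 positives out of 10
ml = ml_reputation(ratings)                 # 8/10
bayes = bayes_reputation(ratings)           # (8+1)/(10+2)
```

For small n the prior visibly pulls the Bayes estimate toward 1/2, while for large n the two estimators coincide up to O(1/n), consistent with the asymptotic optimality of ML-based decision functions.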
Parties of reputation systems rate each other and use ratings to compute reputation scores that drive their interactions. When deciding which reputation model to deploy in a network environment, it is important to find the most suitable model and to determine its right initial configuration. This calls for an engineering approach for describing, implementing and evaluating reputation systems while taking into account specific aspects of both the reputation systems and the networked environment where they will run. We present a software tool (NEVER) for network-aware evaluation of reputation systems and their rapid prototyping through experiments performed according to user-specified parameters.
Reputation systems
Network-awareness
Evaluation tool
Reputation systems are nowadays widely used to support decision making in networked systems. Parties in such systems rate each other and use the shared ratings to compute reputation scores that drive their interactions. The existence of reputation systems with remarkable differences calls for formal approaches to their analysis. We present a verification methodology for reputation systems that is based on the use of the coordination language Klaim and related analysis tools. First, we define a parametric Klaim specification of a reputation system that can be instantiated with different reputation models. Then, we consider a stochastic specification obtained by equipping actions with random (exponentially distributed) durations. The resulting specification enables quantitative analysis of properties of the considered system. The feasibility and effectiveness of our proposal are demonstrated by reporting on the analysis of two reputation models.
formal coordination languages
reputation systems
stochastic analysis
The signaling Petri net (SPN) simulator, designed to provide insights into the trends of molecules' activity levels in response to an external stimulus, addresses the systems-biology need to analyze the dynamics of large-scale cellular networks. Implemented in the freely available software BioLayout Express(3D), the simulator is publicly available and easy to use, provided the input files are prepared in the GraphML format, typically using the network editing software yEd, and follow conventions specific to the software. However, the analysis of complex networks represented in other systems biology formatting languages (on which popular software such as CellDesigner and Cytoscape is based) requires manual manipulation, a step that is prone to error and limits the use of the SPN simulator in BioLayout Express(3D). To overcome this, we present a Cytoscape plug-in that enables users to automatically convert networks from the standard Systems Biology Markup Language for analysis with the SPN simulator. The automation of this step opens the SPN simulator to a far larger user group than was previously possible.
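The conversion the plug-in automates can be sketched in miniature: SBML species become GraphML nodes and each reaction contributes reactant-to-product edges. Real SBML and GraphML carry much more structure (compartments, attributes, layout), and the plug-in itself is Java-based; this toy converter handles only the core elements.

```python
import xml.etree.ElementTree as ET

def sbml_to_graphml(sbml_text):
    """Minimal sketch of an SBML-to-GraphML conversion: species become
    nodes; each reaction contributes reactant->product edges. Only the
    core listOfSpecies / listOfReactions elements are handled."""
    root = ET.fromstring(sbml_text)
    nodes = [s.get("id") for s in root.findall(".//{*}species")]
    edges = []
    for r in root.findall(".//{*}reaction"):
        reactants = [x.get("species") for x in
                     r.findall(".//{*}listOfReactants/{*}speciesReference")]
        products = [x.get("species") for x in
                    r.findall(".//{*}listOfProducts/{*}speciesReference")]
        edges += [(a, b) for a in reactants for b in products]
    g = ET.Element("graphml")
    graph = ET.SubElement(g, "graph", edgedefault="directed")
    for n in nodes:
        ET.SubElement(graph, "node", id=n)
    for a, b in edges:
        ET.SubElement(graph, "edge", source=a, target=b)
    return ET.tostring(g, encoding="unicode")

# Toy SBML model with one reaction A -> B.
SBML = """<sbml xmlns="http://www.sbml.org/sbml/level2"><model>
  <listOfSpecies><species id="A"/><species id="B"/></listOfSpecies>
  <listOfReactions><reaction id="r1">
    <listOfReactants><speciesReference species="A"/></listOfReactants>
    <listOfProducts><speciesReference species="B"/></listOfProducts>
  </reaction></listOfReactions>
</model></sbml>"""
xml_out = sbml_to_graphml(SBML)
```

Automating exactly this kind of element-by-element mapping, with validation, is what removes the error-prone manual step described above.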
Microarray and deep sequencing technologies have provided unprecedented opportunities for mapping genome mutations, RNA transcripts, transcription factor binding, and histone modifications at high resolution at the genome-wide level. This has revolutionized the way in which transcriptomes, regulatory networks and epigenetic regulation are studied, and large amounts of heterogeneous data have been generated. Although efforts are being made to integrate these datasets in an unbiased and efficient way, how best to do this remains a challenge. Here we review the major impacts of high-throughput genome-wide data generation, their relevance to human diseases, and various bioinformatics approaches to data integration. Finally, we provide a case study on inflammatory diseases.
genomics
epigenomics
phenomics
integr
data analysis
Background: Rheumatoid arthritis (RA) is among the most common human systemic autoimmune diseases, affecting approximately 1% of the population worldwide. To date, there is no cure for the disease and current treatments show undesirable side effects. As the disease affects a growing number of individuals, and during their working age, gathering all the information able to improve therapies, by understanding the mechanisms of action of both the therapies and the disease, represents an important area of research, benefiting not only patients but also societies. In this direction, network analysis methods have been used in previous work to further our understanding of this complex disease, leading to the identification of CRKL as a potential drug target for the treatment of RA. Here, we use computational methods to expand on this work, testing the hypothesis in silico.
Results: Analysis of the CRKL network (available at http://www.picb.ac.cn/ClinicalGenomicNTW/software.html) allows for the investigation of the potential effect of perturbing genes of interest. Within the group of genes that are significantly affected by a simulated perturbation of CRKL, we are led to further investigate the importance of PXN. Our results allow us to (1) refine the hypothesis of CRKL as a novel drug target, (2) indicate potential causes of side effects in ongoing trials and (3), importantly, provide recommendations with impact on ongoing clinical studies.