We present a highly optimized implementation of a Monte Carlo (MC) simulator for the three-dimensional Ising spin-glass model with bimodal disorder, i.e., the 3D Edwards-Anderson model, running on CUDA-enabled GPUs. Multi-GPU systems exchange data by means of the Message Passing Interface (MPI). The chosen MC dynamics is the classic Metropolis one, which is purely dissipative, since the aim was to study the critical off-equilibrium relaxation of the system. We focused on the following issues: (i) the implementation of efficient memory access patterns for nearest neighbours in a cubic stencil and for lagged-Fibonacci-like pseudo-random number generators (PRNGs); (ii) a novel implementation of the asynchronous multispin-coding Metropolis MC step that allows storing one spin per bit; and (iii) a multi-GPU version based on a combination of MPI and CUDA streams. Cubic stencils and PRNGs are of very general interest because of their widespread use in many simulation codes.
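The asynchronous multispin-coding step can be illustrated with a minimal sketch, reduced here to a periodic 1D +/-J chain in plain Python (the paper's code is a CUDA kernel on the 3D lattice): bit k of each word holds site i of the k-th independent disorder sample, so one bitwise update advances 64 samples at once. All names and simplifications below are ours.

```python
import math
import random

W = 64                      # bits per word: 64 independent disorder samples
MASK = (1 << W) - 1

def rand_word(p, rng):
    """Word whose bits are set independently with probability p."""
    w = 0
    for k in range(W):
        if rng.random() < p:
            w |= 1 << k
    return w

def metropolis_sweep(spins, couplings, beta, rng):
    """One Metropolis sweep with one spin per bit: bit k of spins[i] is
    site i of sample k (bit 0 -> spin +1, bit 1 -> spin -1); bit k of
    couplings[i] encodes the bond between sites i and i+1 (bit 1 -> J=-1).
    With two neighbours, flipping a spin gives dE = 4 - 4*n_u, where n_u
    is the number of unsatisfied bonds: the flip is always accepted when
    n_u >= 1, and accepted with probability exp(-4*beta) when n_u = 0."""
    n = len(spins)
    p_hi = math.exp(-4.0 * beta)      # acceptance probability for dE = +4
    for parity in (0, 1):             # checkerboard order, as on the GPU
        for i in range(parity, n, 2):
            left, right = (i - 1) % n, (i + 1) % n
            # a bond is unsatisfied iff s_i XOR s_j XOR J-bit == 1
            ul = spins[i] ^ spins[left] ^ couplings[left]
            ur = spins[i] ^ spins[right] ^ couplings[i]
            accept = (ul | ur | rand_word(p_hi, rng)) & MASK
            spins[i] ^= accept        # flip every accepted bit at once
    return spins
```

In the asynchronous variant each bit evolves as an independent system, so no inter-bit synchronization is needed; the real code additionally packs the 3D cubic stencil and draws its random words from a lagged-Fibonacci generator.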
GPU
Lattice
Multi-GPU
Multispin coding
Random numbers
Spin glass
Graphics processing units (GPUs) are currently used as a cost-effective platform for computer simulations and big-data processing. Large-scale applications require that multiple GPUs work together, but the efficiency obtained with clusters of GPUs is, at times, sub-optimal because the GPU features are not exploited at their best. We describe how it is possible to achieve excellent efficiency for applications in statistical mechanics, particle dynamics and network analysis by using suitable memory access patterns and mechanisms like CUDA streams, profiling tools, etc. Similar concepts and techniques may also be applied to other problems, such as the solution of Partial Differential Equations.
The likelihood of a subglacial lake beneath the Amundsenisen Plateau at Southern Spitzbergen, Svalbard, suggested by the flat signal within the Ground Penetrating Radar (GPR) remote survey of the area, is supported here via numerical simulation.
This investigation was developed under the assumption that the icefield thickness does not change on average, as confirmed by recently published physical measurements taken over the past forty years. As a consequence, we consider it admissible to assume that the in-depth temperature and density profiles, snow and firn layers included, are stationary. The upper icefield surface and the rocky bed surface are known in detail.
Adopting a mathematical numerical model, presented in a recent issue of this journal, based on an unsteady Stokes formulation of the ice flow and a Large Eddy Simulation formulation of the lake water flow, we first compare two different descriptions of ice water content: a steady depth-dependent function, and the solution of the mass transport equation accounting for the local strain-heating effect. The latter approach, finally selected, yields a 13% improvement of the numerical ice top surface velocity with respect to the measured one. Furthermore, a reduced form of the basal shear stress and normal stress, by easing the convergence of the iterative solution procedure, makes it possible to obtain physically consistent numerical ice sliding velocities at the rocky bottom, markedly improved in comparison to previous numerical results. After 20,000 days of physical time, although the maximum water temperature remains rather low, the numerical simulation shows that metastability is overcome on more than half of the conjectured basin, with a progressive trend in time in support of the existence of the subglacial lake. By that time, the numerical subglacial lake surface converges to the GPR flat-signal spot within a tolerance equal to the GPR measuring error.
The numerical simulation results thus meet, quantitatively and qualitatively, the fundamental aspects of the conjecture, so that further on-site investigations of the subglacial lake (e.g. drilling operations) appear fully justified.
subglacial lake
Svalbard
temperate ice
water content
phase-change
finite volumes
The melting of glaciers that comes with climate change threatens the heritage of the last glaciation of Europe, likely contained in subglacial lakes in Greenland and Svalbard. This urges specialists to focus their theoretical, numerical and on-field studies on such fascinating objects. Along this line, we have built a numerical procedure for validating the conjecture of the existence of a subglacial lake beneath the Amundsenisen Plateau at South-Spitzbergen, Svalbard. In this work we describe the algorithm and significant representative results of the related numerical test. The conjecture followed the Ground Penetrating Radar measurements of that area, which exhibited several flat-signal spots, a sign of the presence of a body of water. The numerical simulation results support the decision to carry out drilling operations above the presumed ice/water front, where subglacial lake water biochemicals might be traceable.
The time-dependent mathematical model, on which the numerical algorithm is structured, includes the description of the dynamics and thermodynamics of the icefield and of the subglacial lake, with heat exchange and liquid/solid phase-change mechanisms at the interface. Critical modeling choices and confidence in the algorithm are granted by the numerical results of the sensitivity analysis with respect to the contribution of ice water content, of the firn and snow layers at the top of the icefield, and of the approximation of ice sliding on bedrock, which were presented in recent previous works that also include successful comparisons with measured quantities.
Temperate ice
Glen's law
Subglacial lake
Phase-change
Large Eddy Simulation
Svalbard
Finite volumes
Interview with Beppo Levi's daughter (Emilia Resta) on the occasion of the 140th anniversary of his birth. The short introductory article also recalls Beppo Levi's brother, Eugenio Elia, a well-known and brilliant mathematician whose short life was sacrificed at the front during the Great War. The article also reproduces a long, polemical piece by Beppo Levi, sent as a letter to the periodical Israel, where it appeared on 30 June 1918, concerning the birth of the Jewish state in Palestine.
Beppo Levi
Eugenio Elia Levi
Emilia Levi
Mia Resta
Mauro Picone
Ferruccio Servi
Mathematicians in the Great War
Jewish mathematicians
Amadori Anna Lisa; Calzolari Antonella; Natalini Roberto; Torti Barbara
In this paper we study the effect of rare mutations, driven by a marked point process, on the evolutionary behavior of a population. We derive a Kolmogorov equation describing the expected values of the different frequencies and prove some rigorous analytical results about their behavior. Finally, in a simple case of two different quasispecies, we are able to prove that the rarity of mutations increases the survival opportunity of the low fitness species.
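A purely illustrative caricature of this setting (our own, not the model analyzed in the paper) simulates two quasispecies whose frequencies follow replicator selection between mutation events, with the events arriving as a marked Poisson point process: the random event times carry marks that choose the direction of the mutation burst. All parameter names are ours.

```python
import random

def simulate(x1=0.5, f1=2.0, f2=1.0, lam=0.1, eps=0.2,
             t_end=100.0, dt=1e-3, seed=0):
    """Frequency x1 of the fit type: deterministic replicator selection
    between events; events arrive at rate lam, each mark converting a
    fraction eps of one randomly chosen type into the other."""
    rng = random.Random(seed)
    t = 0.0
    next_event = rng.expovariate(lam) if lam > 0 else float("inf")
    while t < t_end:
        phi = f1 * x1 + f2 * (1.0 - x1)      # mean fitness
        x1 += dt * x1 * (f1 - phi)           # explicit replicator step
        t += dt
        while t >= next_event:
            if rng.random() < 0.5:           # mark: direction of the burst
                x1 *= 1.0 - eps              # fit -> unfit
            else:
                x1 += eps * (1.0 - x1)       # unfit -> fit
            next_event += rng.expovariate(lam)
    return min(max(x1, 0.0), 1.0)
```

Without mutation events the fit type fixes; the interplay between selection and rare jumps is what the paper's Kolmogorov equation captures rigorously.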
Coevolutionary dynamics
Marked point processes
Mutations
Partial integro-differential equations
The conservation of wall paintings in archaeological sites can be difficult due to the severe damage caused by living organisms, which can degrade substrates as a result of their growth and metabolic activity. The purpose of this study was to provide information on the degradation processes affecting the artefacts of an archaeological site and to predict the areas where conservation is most at risk. The study focussed on the archaeological sites of Monte Sannace (Italy) and Paleopolis (Greece). We analysed the archaeological remains to study the biodeterioration of materials on site and to assess the potential risk of biological colonisation of newly exposed rock samples. This type of environment is a unique ecological niche, in which light, moisture, temperature, nutrient input and the porous nature of the substrate become favourable factors for microbial colonisation, which is often responsible for biodeterioration. Surveys were performed in three stages: before, during and after restoration. In this manner, it was possible to analyse the same sample after a given time interval and to understand the changes in chemical parameters and microbiological growth over time and as a function of the chemical compounds used in the restoration.
Moreover, the effects of a cover at the archaeological site, relative to its conservation function and the control of biological growth, were also examined. The soils surrounding the archaeological structures were also analysed, to understand the biochemical phenomena unique to the environmental context within which the structures are located. In archaeological sites, soils provide the context in which to place the artefacts that may be discovered, and may also represent the degraded remains of archaeological materials. In this study, microbiological, chemical and soil analysis techniques were employed to investigate the archaeological remains and artefacts of the study sites, following a multidisciplinary approach that correlates the results of the biological analyses with those of the chemical ones. Tests were carried out to determine the extent of biodeterioration of materials and to assess the potential risk of biological colonisation of newly exposed rock samples, and were performed before, during and after the restoration.
Geometric recognition criteria for clathrate-hydrate polymorphic structure types (sI versus sII) have been developed and validated. These are applied to the study of the rich interplay and development of both sI and sII motifs in a variety of hydrate-nucleation events for methane and H2S hydrates, studied by direct and enhanced-sampling molecular dynamics (MD) simulations. In the case of nucleation of methane hydrate from enhanced-sampling simulation, we notice that already at the transition state approximately 80% of the enclathrated CH4 molecules are contained in a well-structured (sII) clathrate-like crystallite. For direct MD simulation of nucleation of H2S hydrate, some sI/sII polymorphic diversity was encountered, and it was found that a realistic dissipation of the nucleation energy (in view of non-equilibrium relaxation to either microcanonical (NVE) or isothermal-isobaric (NPT) distributions) is important in determining the relative propensity to form sI versus sII motifs.
Sub-ms dynamics of the instability onset of electrospinning
Martina Montinaro; Vito Fasano; Maria Moffa; Andrea Camposeo; Luana Persano; Marco Lauricella; Sauro Succi; Dario Pisignano
Electrospun polymer jets are imaged for the first time at an ultra-high rate of 10,000 frames per second, investigating the process dynamics and the instability propagation velocity and displacement in space. The polymer concentration, applied voltage bias and needle-collector distance are systematically varied, and their influence on the instability propagation velocity and on the jet angular fluctuations is analyzed. This allows us to unveil the instability formation and cycling behavior, and its exponential growth at the onset, exhibiting radial growth rates of the order of 10^3 s^-1. By allowing the conformation and evolution of polymeric solutions to be studied in depth, high-speed imaging at the sub-ms scale shows significant potential for improving the fundamental knowledge of electrified jets, leading to finely controllable bending and solution stretching in electrospinning, and consequently to better designed nanofiber morphologies and structures.
Using covariance identities based on the Clark-Ocone representation formula we derive Gaussian density bounds and tail estimates for the probability law of the solutions of several types of stochastic differential equations, including Stratonovich equations with boundary condition and irregular drifts, and equations driven by fractional Brownian motion. Our arguments are generally simpler than the existing ones in the literature as our approach avoids the use of the inverse of the Ornstein-Uhlenbeck operator.
Malliavin calculus
Clark-Ocone formula
Probability bounds
Fractional Brownian motion
Based on a new multiplication formula for discrete multiple stochastic integrals with respect to non-symmetric Bernoulli random walks, we extend the results of Nourdin et al. (2010) on the Gaussian approximation of symmetric Rademacher sequences to the setting of possibly non-identically distributed independent Bernoulli sequences. We also provide Poisson approximation results for these sequences, by following the method of Peccati (2011). Our arguments use covariance identities obtained from the Clark-Ocone representation formula in addition to those usually based on the inverse of the Ornstein-Uhlenbeck operator.
In this paper we consider Volterra integral equations on time scales and describe our study of the long-time behavior of their solutions. We provide sufficient conditions for stability under constant perturbations by using the direct Lyapunov method, and we present some examples of application.
Random mutations and natural selection have driven the evolution of the protein amino acid sequences that we observe at present in nature. The question of which is the dominant force of protein evolution still lacks an unambiguous answer. Random mutations tend to randomize protein sequences, while, in order to preserve the correct functionality, one expects selection mechanisms to impose rigid constraints on amino acid sequences. Moreover, one also has to consider that the space of all possible amino acid sequences is so astonishingly large that it could be reasonable to have a well-tuned amino acid sequence indistinguishable from a random one.
In order to study the possibility of discriminating between random and natural amino acid sequences, we introduce different measures of association between pairs of amino acids in a sequence, and apply them to a dataset of 1,047 natural protein sequences and 10,470 random sequences, carefully generated to preserve the relative length and amino acid distribution of the natural proteins. We analyze the multidimensional measures with machine learning techniques and show that, to a reasonable extent, natural protein sequences can be differentiated from random ones.
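A toy version of this pipeline (our own illustration, not the measures actually used in the paper) generates composition-preserving random controls by shuffling, and scores association between adjacent amino acids with a plug-in mutual information estimate:

```python
import math
import random
from collections import Counter

def shuffled_control(seq, rng):
    """Random sequence preserving length and amino acid composition."""
    s = list(seq)
    rng.shuffle(s)
    return "".join(s)

def adjacent_mi(seq):
    """Plug-in mutual information (in bits) between residues at positions
    i and i+1: one simple measure of pairwise association along a sequence.
    Zero means adjacent residues look independent; larger values mean the
    identity of one residue is informative about its neighbour."""
    pairs = Counter(zip(seq, seq[1:]))
    left = Counter(seq[:-1])
    right = Counter(seq[1:])
    n = len(seq) - 1
    mi = 0.0
    for (a, b), c in pairs.items():
        # p(a,b) * log2( p(a,b) / (p(a) p(b)) ), with counts c, left, right
        mi += (c / n) * math.log2(c * n / (left[a] * right[b]))
    return mi
```

Feeding such per-sequence scores for natural sequences and their shuffled controls to a classifier mirrors, in miniature, the machine learning step described above.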
Protein sequence
Random sequence
Combinatorics of words
Amino acid association
In this paper we propose an LWR-like model for traffic flow on networks which allows tracking several groups of drivers, each characterized only by their destination in the network. The path actually followed to reach the destination is not assigned a priori, and can be chosen by the drivers during the journey, taking decisions at junctions.
The model is then used to describe three possible behaviors of drivers, associated with three different ways to solve the route choice problem: 1. Drivers ignore the presence of the other vehicles; 2. Drivers react to the current distribution of traffic, but do not forecast what will happen at later times; 3. Drivers take into account the current and future distribution of vehicles. Notice that, in the latter case, we enter the field of differential games, and, if a solution exists, it likely represents a global equilibrium among drivers. Numerical simulations highlight the differences between the three behaviors and offer insights into the existence of equilibria.
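The building block of any such network model is the scalar LWR equation on a single road. A standard Godunov discretization with the Greenshields flux (our illustrative sketch of that building block; not necessarily the scheme used by the authors, whose model additionally handles junctions and multiple driver populations) looks as follows:

```python
def lwr_godunov(rho, dt, dx, steps, vmax=1.0, rho_max=1.0):
    """Godunov scheme for rho_t + f(rho)_x = 0 on a periodic road, with the
    concave Greenshields flux f(rho) = vmax * rho * (1 - rho/rho_max).
    Stability requires the CFL condition dt/dx * vmax <= 1."""
    def f(r):
        return vmax * r * (1.0 - r / rho_max)

    rho_c = rho_max / 2.0                 # density maximizing the flux

    def god(rl, rr):
        # exact Godunov flux for a concave flux function
        if rl <= rr:
            return min(f(rl), f(rr))      # min of f over [rl, rr]
        if rr <= rho_c <= rl:
            return f(rho_c)               # transonic rarefaction
        return max(f(rl), f(rr))          # max of f over [rr, rl]

    rho = list(rho)
    n = len(rho)
    for _ in range(steps):
        flux = [god(rho[i], rho[(i + 1) % n]) for i in range(n)]
        rho = [rho[i] - dt / dx * (flux[i] - flux[i - 1]) for i in range(n)]
    return rho
```

The scheme is conservative and monotone under the CFL condition, so densities stay in the admissible interval [0, rho_max].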
Traffic
networks
source-destination model
multi-path model
multi-population model
multi-commodity model
Wardrop equilibrium
Nash equilibrium
A hierarchical Krylov–Bayes iterative inverse solver for MEG with physiological preconditioning
D Calvetti; A Pascarella; F Pitolli; E Somersalo; B Vantaggi
The inverse problem of MEG aims at estimating electromagnetic cerebral activity from measurements of the magnetic fields outside the head. After formulating the problem within the Bayesian framework, a hierarchical conditionally Gaussian prior model is introduced, including a physiologically inspired prior model that takes into account the preferred directions of the source currents. The hyperparameter vector consists of prior variances of the dipole moments, assumed to follow a non-conjugate gamma distribution with variable scaling and shape parameters. A point estimate of both dipole moments and their variances can be computed using an iterative alternating sequential updating algorithm, which is shown to be globally convergent. The numerical solution is based on computing an approximation of the dipole moments using a Krylov subspace iterative linear solver equipped with statistically inspired preconditioning and a suitable termination rule. The shape parameters of the model are shown to control the focality, and furthermore, using an empirical Bayes argument, it is shown that the scaling parameters can be naturally adjusted to provide a statistically well justified depth sensitivity scaling. The validity of this interpretation is verified through computed numerical examples. Also, a computed example showing the applicability of the algorithm to analyze realistic time series data is presented.
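A bare-bones sketch of the alternating update at the heart of such a scheme, written for a generic small dense linear model in plain Python, can look as follows. The closed-form theta-update below is the minimizer of the Gibbs energy for a gamma hyperprior with shape beta and scale theta_star; everything else (dense normal-equation solves, unit noise variance) is our simplification of what the paper instead does with priorconditioned Krylov iterations and an early-termination rule on the MEG lead-field.

```python
import math

def solve(M, v):
    """Tiny dense linear solve (Gaussian elimination, partial pivoting)."""
    n = len(v)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            m = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= m * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def ias(A, b, theta_star=1.0, beta=2.0, n_iter=30):
    """Alternating point-estimate scheme for b = A x + noise with a
    conditionally Gaussian prior x_j ~ N(0, theta_j) and gamma hyperpriors
    on the theta_j (unit noise variance assumed). Alternates:
      (1) x given theta:  (A^T A + diag(1/theta)) x = A^T b
      (2) theta given x:  theta_j = theta_star * (eta/2
              + sqrt(eta^2/4 + x_j^2 / (2*theta_star))),  eta = beta - 3/2."""
    m, n = len(A), len(A[0])
    theta = [theta_star] * n
    eta = beta - 1.5
    x = [0.0] * n
    for _ in range(n_iter):
        M = [[sum(A[r][i] * A[r][j] for r in range(m)) for j in range(n)]
             for i in range(n)]
        for j in range(n):
            M[j][j] += 1.0 / theta[j]                 # prior precision term
        rhs = [sum(A[r][i] * b[r] for r in range(m)) for i in range(n)]
        x = solve(M, rhs)
        theta = [theta_star * (eta / 2 + math.sqrt(eta * eta / 4
                 + x[j] ** 2 / (2 * theta_star))) for j in range(n)]
    return x, theta
```

Components with strong data support keep large variances, while weakly supported components are shrunk, which is the mechanism behind the focality control described in the abstract.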
brain activity
magnetoencephalography (MEG)
Bayesian hierarchical model
sparsity
prior information
12th Progress report 2014 (Financial and activity report) - project T.He.T.A. "Technological tools for the Promotion of Transadriatic Archaeological Heritage"
13th Progress report 2015 (Financial and activity report) - project T.He.T.A. "Technological tools for the Promotion of Transadriatic Archaeological Heritage"
14th Progress report 2015 (Financial and activity report) - project T.He.T.A. "Technological tools for the Promotion of Transadriatic Archaeological Heritage"
The T.He.T.A. project was developed with the goal of improving and enhancing cultural heritage, in order to make it more accessible to international tourism through the promotion of archaeological sites in Apulia and Greece, such as Monte Sannace (Italy) and Paleopolis (Greece). Both have significant cultural value, but at the same time they raise issues regarding their preservation status as well as their capability to attract tourist flows.
Motivation
Gene expression data from high-throughput assays, such as microarrays, are often used to predict cancer survival. However, available datasets consist of a small number of samples (n patients) and a large number of gene expression measurements (p predictors). The main challenge is therefore to cope with the high dimensionality, i.e. p >> n, and an appealing novel approach is to use screening procedures to reduce the size of the feature space to a moderate scale (Wu & Yin 2015, Song et al. 2014, He et al. 2013). In addition, genes are often co-regulated and their expression levels are expected to be highly correlated. Genes that are involved in the same biological process are grouped in pathway structures. In order to incorporate the pathway information of genes, network-based methods have been applied (Zhang et al. 2013, Sun et al. 2013). Motivated by the most recent models based on variable screening techniques and by the integration of pathway information into penalized Cox methods, we propose a new procedure to obtain more accurate predictions. First, we identify the high-risk genes by using variable screening techniques; then, we perform Cox regression analysis integrating the network information associated with the selected high-risk genes. By combining these two approaches, we present a new method to select important core pathways and genes that are related to the survival outcome, and we show the benefits of our proposal in both simulation and real-data studies.
Methods
In our study, we combine variable screening techniques and network methods to identify genes and pathways highly associated with the disease and to better predict patient risk. We propose a new method for survival analysis based on the following steps. First, (i) we perform variable screening, such as sure independence screening (Fan et al. 2008) and its refinements (Gorst-Rasmussen & Scheike 2013, Zhao & Li 2012, Fan et al. 2010), to select the active set of variables strongly correlated with the survival response; then (ii) we apply network-based Cox regression models, such as Net-Cox and AdaLnet, which use a network built on the selected signature genes to predict survival probability. In order to build our a priori network information, we use the human gene functional linkage approach (Huttenhower et al. 2009). This network contains maps of functional activity and interaction networks in over 200 areas of human cellular biology, with information from 30,000 genome-scale experiments. The functional linkage network summarizes information from a variety of biologically informative perspectives: prediction of protein function and functional modules, cross-talk among biological processes, and association of novel genes and pathways with known genetic disorders. In particular, our gene network is built by using the HEFalMp tool to determine the weight w of the edge between two nodes (i.e. genes). The resulting network consists of a fixed number of unique genes (about 2000), where w describes how strong the relation between two genes is and takes values in [0,1]. Hence, while the screening methods recruit the features with the best marginal utility to reduce the dimensionality of the data, the network incorporates the pathway information, used as prior knowledge, into the survival analysis.
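The screening step (i) can be sketched in a few lines. The sketch below is our simplification: it ranks features by absolute marginal correlation with log survival time, ignoring censoring, whereas the survival-adapted procedures cited above rank by marginal Cox utilities; the d = n/log(n) cutoff follows Fan et al. (2008).

```python
import math

def sis_screen(X, times, d=None):
    """Sure-independence-screening sketch: keep the indices of the d
    features (columns of X) with the largest absolute marginal correlation
    with log survival time. Censoring is ignored here for simplicity."""
    n, p = len(X), len(X[0])
    if d is None:
        d = max(1, int(n / math.log(n)))       # conventional SIS cutoff
    y = [math.log(t) for t in times]
    ybar = sum(y) / n
    sy = math.sqrt(sum((v - ybar) ** 2 for v in y))
    scores = []
    for j in range(p):
        col = [row[j] for row in X]
        m = sum(col) / n
        sx = math.sqrt(sum((v - m) ** 2 for v in col))
        cov = sum((col[i] - m) * (y[i] - ybar) for i in range(n))
        r = 0.0 if sx == 0.0 or sy == 0.0 else cov / (sx * sy)
        scores.append((abs(r), j))
    scores.sort(reverse=True)
    return [j for _, j in scores[:d]]
```

The retained indices would then feed the network-based Cox regression of step (ii).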
Results
We combine variable screening procedures and network-penalized Cox models for high-dimensional survival data, with the aim of determining pathway structures and biomarkers involved in cancer progression. This approach makes it possible to obtain a deeper insight into gene-regulatory networks and to investigate the gene signatures related to cancer survival time, in order to understand how patient features (molecular and clinical information) can influence cancer treatment and detection. In particular, we show the results obtained in simulation and in real cancer studies, along with the screening rules. The simulated data illustrate two different biological scenarios. In the first setting, we examine the situation where all genes within the same module belong to different groups or pathways. In the second one, the pathways are not independent of each other (as in genomic studies), and the activation of some groups is conditional on other pathways. We use specificity, sensitivity and the Matthews correlation coefficient to compare prediction performance. We also predict patient survival using molecular data for different cancer types, such as ovarian and breast cancer. We investigate the set of active signature genes and the corresponding pathways involved in the cancer disease process. Then, using the biological network as prior information, we fit network-based Cox models, including Kaplan-Meier curves and log-rank tests. Overall, this study shows that the new screening-network analysis is useful for improving survival prediction.