Trajectory-based approaches to excited-state, nonadiabatic dynamics are promising simulation techniques for describing the response of complex molecular systems to photo-excitation. They provide an approximate description of the coupled quantum dynamics of electrons and nuclei while aiming at systems of growing complexity. The central question in the design of such approximations is a proper accounting of the electron-nuclear coupling and of the quantum features of the problem. In this paper, we approach the problem in the framework of the exact factorization of the electron-nuclear wavefunction, re-deriving and improving the coupled-trajectory mixed quantum-classical (CT-MQC) algorithm recently developed to solve the exact-factorization equations. In particular, a procedure to include quantum nuclear effects in CT-MQC is derived and tested on a model system in different regimes.
On the force-velocity relationship of a bundle of rigid bio-filaments
Perilli A
;
Pierleoni C
;
Ciccotti G
;
Ryckaert JP
In various cellular processes, bio-filaments like F-actin and F-tubulin are able to exploit the chemical energy associated with polymerization to perform mechanical work against an obstacle loaded with an external force. The force-velocity relationship quantitatively summarizes the nature of this process. Using a stochastic dynamical model, we give, together with the evolution of a staggered bundle of N_f rigid living filaments facing a loaded wall, the corresponding force-velocity relationship. We compute the evolution of the model in the infinite-wall-diffusion limit and in supercritical conditions (reduced monomer density ρ̂₁ > 1), and we show that this solution remains valid for moderate non-zero values of the ratio between the wall-diffusion and the chemical time scales. We consider two classical protocols: the bundle is opposed either to a constant load or to an optical-trap setup, characterized by a harmonic restoring force. The constant-load case leads, for each value of F, to a stationary velocity V_stat(F; N_f, ρ̂₁) after a relaxation with characteristic time τ_micro(F). When the bundle (initially taken as an assembly of filament seeds) is subjected to a harmonic restoring force (optical-trap load), the bundle elongates and the load increases up to stalling over a characteristic time τ_OT. Extracted from this single experiment, the force-velocity curve V_OT(F; N_f, ρ̂₁) is found to coincide with V_stat(F; N_f, ρ̂₁), except at low loads. We show that this result follows from the adiabatic separation between τ_micro and τ_OT, i.e., τ_micro << τ_OT.
A dynamical system subjected to holonomic constraints is Hamiltonian only if considered in the reduced phase space of its generalized coordinates and momenta, which need to be defined ad hoc in each particular case. However, especially in molecular simulations, where the number of degrees of freedom is exceedingly high, the representation in generalized coordinates, although conceptually unavoidable, is completely unsuitable for a rigorous description of the evolution and statistical properties of the system. In this paper, we first review the state of the art of the numerical approach that conserves the constraint conditions exactly (by an algorithm universally known as SHAKE) and permits integrating the equations of motion directly in the phase space of the natural Cartesian coordinates and momenta of the system. We then discuss in detail the SHAKE numerical implementations in the notable cases of the Verlet and velocity-Verlet algorithms. After discussing in the same framework how constraints modify the properties of the equilibrium ensemble, we show how, at the price of moving to a dynamical system that is no longer (directly) Hamiltonian, it is possible to provide a direct interpretation of the dynamical system and so derive its statistical mechanics both at equilibrium and in non-equilibrium conditions. To achieve this, we generalize the statistical treatment to systems that no longer conserve the phase-space volume (equivalently, we introduce a non-Euclidean invariant measure in phase space) and derive a generalized Liouville equation describing the ensemble even out of equilibrium. As a result, we can extend Kubo's response theory (linear and nonlinear) to systems subjected to constraints.
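The core SHAKE iteration can be sketched for the simplest case of a single bond constraint between two particles. The following is a schematic illustration under simplifying assumptions (unit masses, a single constraint, arbitrarily chosen tolerance and iteration cap), not the general implementation discussed in the paper:

```python
def shake_pair(r1, r2, r1_old, r2_old, d, tol=1e-10, max_iter=100):
    """Iteratively correct the unconstrained Verlet positions r1, r2 of a
    two-particle system (unit masses) so that |r1 - r2| = d, applying the
    correction along the old bond vector, as in the SHAKE algorithm."""
    for _ in range(max_iter):
        bond = [a - b for a, b in zip(r1, r2)]
        sigma = sum(c * c for c in bond) - d * d      # constraint violation
        if abs(sigma) < tol:
            break
        old_bond = [a - b for a, b in zip(r1_old, r2_old)]
        # Lagrange-multiplier estimate from linearizing the constraint;
        # the factor 4 combines the two unit masses and the factor 2 of
        # the gradient of |r1 - r2|^2.
        g = sigma / (4.0 * sum(b * o for b, o in zip(bond, old_bond)))
        r1 = [c - g * o for c, o in zip(r1, old_bond)]
        r2 = [c + g * o for c, o in zip(r2, old_bond)]
    return r1, r2
```

Each pass estimates the Lagrange multiplier by linearizing the constraint around the positions of the previous time step, which is why the correction is applied along the old bond vector.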
The time reversal invariance of classical dynamics is reconsidered in this paper with specific focus on its consequences for time correlation functions and associated properties such as transport coefficients. We show that, under fairly common assumptions on the interparticle potential, an isolated Hamiltonian system obeys more than one time reversal symmetry, and that this entails nontrivial consequences. Under an isotropic and homogeneous potential, in particular, eight valid time reversal operations exist. The presence of external fields that reduce the symmetry of space decreases this number, but does not necessarily destroy all time reversal symmetries. Thus, analytic predictions of symmetry properties of time correlation functions and, in some cases, even of their null value are still possible. The noteworthy case of a constant external magnetic field, usually assumed to destroy time reversal symmetry, is considered in some detail. We show that, in this case too, some of the new time reversal operations hold, and that this makes it possible to derive relevant properties of correlation functions without the inversion of the direction of the magnetic field commonly enforced in the literature.
Time reversal symmetry; Hamiltonian system; correlation functions; linear response theory; magnetic field; electric field
We present the most recent release of our parallel implementation of the BFS and BC algorithms for the study of large-scale graphs. Although our reference platform is a high-end cluster of new-generation NVIDIA GPUs and some of our optimizations are CUDA-specific, most of our ideas can be applied to other platforms offering multiple levels of parallelism. We exploit multi-level parallel processing through a hybrid programming paradigm that combines highly tuned CUDA kernels, for the computations performed by each node, with explicit data exchange through the Message Passing Interface (MPI), for the communications among nodes. The results of the numerical experiments show that the performance of our code is comparable to, or better than, other state-of-the-art solutions. For the BFS, for instance, we reach a peak performance of 200 GTEPS on a single GPU and 5.5 TTEPS on 1024 Pascal GPUs. We release our source codes both for reproducing the results and for facilitating their usage as a building block for the implementation of other algorithms.
Large graphs; graph algorithms; parallel algorithms; parallel programming; distributed programming; GPU; CUDA
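The level-synchronous frontier expansion that underlies most GPU BFS implementations can be sketched serially. This is an illustrative Python rendition of the general formulation, not the authors' CUDA/MPI code:

```python
from collections import defaultdict

def bfs_levels(edges, source):
    """Level-synchronous BFS: expand the whole frontier at each step, the
    formulation that maps naturally onto data-parallel (GPU) processing.
    Returns a dict mapping each reachable vertex to its BFS level."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)              # undirected graph, as in Graph500 BFS
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for u in frontier:            # each frontier vertex is an independent
            for v in adj[u]:          # task in the data-parallel version
                if v not in level:
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier
    return level
```

On a GPU, the inner loops become one kernel launch per level, with the visited set updated atomically; the serial sketch preserves only the level-by-level structure.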
We study the initial-boundary value problem [Formula presented]where [Formula presented] is an interval and [Formula presented] is a nonnegative Radon measure on [Formula presented]. The map [Formula presented] is increasing in [Formula presented] and decreasing in [Formula presented] for some [Formula presented], and satisfies [Formula presented]. The regularizing map [Formula presented] is increasing and bounded. We prove existence of suitably defined nonnegative Radon measure-valued solutions. The solution class is natural since smooth initial data may generate solutions which become measure-valued after finite time.
The paper traces the early stages of Berni Alder's scientific accomplishments, focusing on his contributions to the development of computational methods for the study of Statistical Mechanics. Following attempts in the early 1950s to implement Monte Carlo methods to study equilibrium properties of many-body systems, Alder developed, in collaboration with Tom Wainwright, the Molecular Dynamics approach as an alternative tool to Monte Carlo, making it possible to extend simulation techniques to non-equilibrium properties. This led to the confirmation of the existence of a phase transition in a system of hard spheres in the late 1950s, and was followed by the discovery of the unexpected long-time tail in the correlation function about a decade later. In the late 1970s, Alder was among the pioneers of the extension of computer simulation techniques to quantum problems. Centered around Alder's own pioneering contributions, the paper covers about thirty years of developments in Molecular Simulation, from the birth of the field to its coming of age as a self-sustained discipline.
Time histories of seismic attenuation from the San Andreas fault at Parkfield
L Malagnini
;
D Dreger
;
R Bürgmann
;
I Munafò
;
G Sebastiani
During the seismic cycle, in nature as well as in lab samples, the crack density of rocks varies substantially as stressed rocks approach a critical state and eventually fail (Vasseur et al., 2017; Nur, 1972; Gupta, 1973). At Earth scales, small periodic stress variations, such as seasonal loading/unloading and tides (Johnson et al., 2017), are constantly being superimposed on the tectonic loading stress of crustal rocks, inducing periodic changes in crack porosity, pore-fluid pressure, and saturation that should leave a signature on crustal attenuation. However, results from seismic techniques applied thus far have been too noisy, or have lacked sufficient resolution, to yield meaningful measurements. Here we use a new technique that shows that seismic attenuation on the creeping section of the San Andreas Fault (SAF) at Parkfield is modulated by recognizable periodicities, mostly due to tides, as well as by longer-period fluctuations in creep rates (between 1.5 and 3-4 years) that have been previously observed (Nadeau and McEvilly, 2004; Turner et al., 2015). Our analysis is sensitive to periodic stress perturbations well below 100 Pa, more than one order of magnitude smaller than the largest of all periodic stress fluctuations, due to water/snow loading/unloading (Johnson et al., 2017). Before and after the 2004 M6 Parkfield main earthquake, we observe changes in anelastic attenuation on both sides of the SAF. Frequency-dependent precursors with opposite signs are seen on the two sides of the fault, reflecting the fact that, prior to the earthquake, the Pacific side of the SAF was under decreasing compressional stress, whereas the North American side of the fault was experiencing increasing compression. Coseismic and post-seismic stress relaxation cause anomalies of opposite signs on the two sides of the SAF at Parkfield, opposite to the pre-seismic ones. Due to rock damage, pre-2008 fluctuations show enhanced sensitivity to seasonal stresses and solid tides (Gao et al., 2000), with amplitudes modulated by decreasing slip rate through healing. Post-2008 fluctuations indicate close-to-fault medium healing.
During the last 20 years, three seismic sequences affected the Apenninic belt (central Italy): Colfiorito (1997-98), L'Aquila (2009) and Amatrice-Visso-Norcia-Campotosto (2016-17). They lasted for a long time, with a series of moderate-to-large earthquakes distributed over 40-60 km long Apenninic-trending segments. Their closeness in space and time suggested studying their aftershock sequences to highlight similarities and differences. Aftershock space migration and the distribution of aftershock inter-arrival times were studied. Mathematical Morphology and nonparametric statistics were applied to reduce the effect of spatial noise. Parametric analysis in the time domain and spectral analysis were performed. Two different types of aftershock sequences were found. The L'Aquila sequence presented a continuous and periodic temporal variation (period ≈ 120 days) of the aftershock activity centre along the sequence axis, while the other two sequences showed a piecewise continuous pattern and a shorter duration. We also found two different types of temporal evolution of the mean radial distance between the aftershock hypocentres and that of a reference event corresponding to the start of a large and fast increase of daily energy release. One type was well described by a simple exponential model, while a power-law model was more appropriate for the other one. Furthermore, in the first case, the aftershock inter-arrival times were very well fitted by an exponential model, while noticeable deviations were present in the other case. A possible explanation was provided in terms of the local geological and hydrogeological properties, which depend on the location of the region with respect to the Ancona-Anzio tectonic lineament.
Apenninic earthquakes
aftershock migration
seismic event inter-arrival time
We propose a new iterative procedure to find the best time for re-initialization of meta-heuristic algorithms used to solve combinatorial optimization problems. The sequence of algorithm executions with different random initializations evolves at each iteration by either adding new independent executions or extending all existing ones up to the current maximum execution time. This is done on the basis of a criterion that uses a surrogate of the algorithm's failure probability, in which the optimal solution is replaced by the best-so-far one. The new procedure can therefore be applied in practice. We prove that, with probability one, the maximum time of the current executions of the proposed procedure approaches, as the number of iterations diverges, the optimal value minimizing the expected time to find the solution. We apply the new procedure to several Traveling Salesman Problem instances with hundreds or thousands of cities, whose solution is known, and to some instances of a pseudo-Boolean problem. As the base algorithm, we use different versions of an Ant Colony Optimization algorithm or a Genetic Algorithm. We compare the results from the proposed procedure with those from the base algorithm. This comparison shows that the failure-probability estimates of the new procedure are several orders of magnitude lower than those of the base algorithm at equal computational cost.
Optimization methods
Probability
Stochastic processes
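The interplay between restarting and extending executions can be illustrated on a toy problem. The sketch below uses hill climbing on the one-max problem as a stand-in base heuristic and a deliberately simplified decision rule; both are illustrative assumptions, not the procedure's actual criterion:

```python
import random

def one_max_step(state, rng):
    # Hypothetical stand-in for one step of the base meta-heuristic:
    # hill climbing on one-max (maximize the number of 1-bits).
    i = rng.randrange(len(state))
    flipped = state[:i] + [1 - state[i]] + state[i + 1:]
    return flipped if sum(flipped) >= sum(state) else state

def restart_schedule(n_bits=16, iterations=200, seed=0):
    """Schematic restart procedure: keep a pool of independent runs; when the
    surrogate failure probability (fraction of runs below the best-so-far
    value) is high, extend all runs by one step; otherwise start a fresh run.
    The 0.5 cutoff is an illustrative simplification."""
    rng = random.Random(seed)
    runs = [[rng.randint(0, 1) for _ in range(n_bits)]]
    best = max(sum(r) for r in runs)
    for _ in range(iterations):
        failures = sum(1 for r in runs if sum(r) < best)
        p_fail = failures / len(runs)       # surrogate failure probability
        if p_fail > 0.5:
            runs = [one_max_step(r, rng) for r in runs]            # extend all
        else:
            runs.append([rng.randint(0, 1) for _ in range(n_bits)])  # restart
        best = max(best, max(sum(r) for r in runs))
    return best
```

The surrogate is computable online because it compares runs against the best-so-far value rather than the (unknown) optimum, which is exactly what makes the procedure applicable in practice.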
2018 · Book chapter (Contributo in volume: Capitolo o Saggio) · metadata-only access
Life Annuity Portfolios: Risk-Adjusted Valuations and Suggestions on the Product Attractiveness
D'Amato Valeria
;
Di Lorenzo Emilia
;
Orlando Albina
;
Sibillo Marilena
Assessing solvency is a compelling issue for the insurance industry, also in light of the current international risk-based regulations. Internal models have to take into account risk/profit indicators in order to provide flexible tools aimed at valuing solvency. We focus on a variable annuity with an embedded option involving a participation level which depends on the period's financial result. We carry out a performance evaluation by means of a suitable indicator which properly captures both financial and demographic risk drivers. In fact, in the case of the life annuity business, assessing solvency has to be framed within a wide time horizon, over which specific financial and demographic risks are realized. In this order of ideas, solvency indicators have to capture the amount of capital needed to cope with the impact of those risk sources over the considered period. The analysis is carried out from a management perspective, apt to measure the business performance, which requires correct risk control; in particular, we present a study of the dynamics of the profit realized per unit of the total financial value of the contract. On the other hand, consumer profitability is also measured by means of a utility-equivalent fixed life annuity. According to the insureds' point of view, we measure their perception of the contract's profitability within the expected-utility approach.
Diffusion and transport processes constitute a very important field of applied mathematics. They are useful in many different problems ranging from the diffusion of pollutants in the atmosphere and the sea, to the spreading of epidemics.
Aside from their practical relevance, such processes have been very important in the history of physics and mathematics. We may recall Einstein's study of Brownian motion, which was fundamental in providing definitive experimental evidence of the existence of atoms. Moreover, diffusion processes have been the starting point for the building of the mathematical theory of stochastic processes (starting from the work of Langevin).
Similarly, the study of reaction and diffusion phenomena, starting from the seminal contributions of two 20th-century science giants (Ronald A. Fisher and Andrej N. Kolmogorov), has led to interesting developments both for applications and for the fruitful connections between stochastic processes and partial differential equations.
In this article we discuss some general results developed in these areas, including a few modern topics, such as transport and reaction/diffusion on discrete structures (graphs). This theme has great relevance, e.g. for the dissemination of information through the internet or the spreading of epidemics through the air-transport network.
Transport phenomena, and their generalization to cases with reaction, constitute a very important chapter of applied mathematics and find use in widely varied settings, ranging from the diffusion of pollutants in the atmosphere and the sea, to industrial processes, to biomathematics, to the propagation of epidemics.
Beyond their practical relevance, the study of these phenomena has made very important contributions to the history of physics and mathematics. We may recall Einstein's study of Brownian motion, which was fundamental in providing definitive experimental evidence of the real existence of atoms. And, in general, diffusion processes have been the starting point for the construction of the mathematical theory of stochastic processes (beginning with the work of Langevin).
Similarly, the study of phenomena with diffusion and reaction, born from the contribution of two giants of twentieth-century science (Ronald A. Fisher and Andrej N. Kolmogorov) in the mathematical modelling of biological problems, subsequently led to interesting developments both on the applied side and through the fruitful connections between stochastic processes and partial differential equations.
In this article, besides presenting some of the general results developed in these areas, we also discuss more modern aspects tied to the growing interest in transport and reaction/diffusion phenomena on discrete structures (graphs), a highly topical theme (consider the spreading of information over the internet or the propagation of epidemics through the air-transport network) developed through refined mathematics.
Fisher-Kolmogorov equation
Reactive dynamics on graphs
Reaction-diffusion processes
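The Fisher-Kolmogorov (Fisher-KPP) equation at the heart of reaction-diffusion theory, u_t = D u_xx + r u(1 − u), can be integrated with an elementary explicit finite-difference scheme. A minimal sketch (grid sizes, rates and the step initial profile are illustrative choices):

```python
def fisher_kpp(nx=100, nt=500, dx=1.0, dt=0.1, D=1.0, r=1.0):
    """Explicit finite-difference integration of u_t = D u_xx + r u (1 - u)
    on a 1D grid with no-flux (reflecting) boundaries, starting from a step
    profile. Stability requires D*dt/dx**2 <= 1/2."""
    u = [1.0 if i < nx // 10 else 0.0 for i in range(nx)]
    for _ in range(nt):
        un = u[:]
        for i in range(nx):
            left = un[i - 1] if i > 0 else un[1]            # reflecting ends
            right = un[i + 1] if i < nx - 1 else un[nx - 2]
            lap = (left - 2.0 * un[i] + right) / dx ** 2    # discrete u_xx
            u[i] = un[i] + dt * (D * lap + r * un[i] * (1.0 - un[i]))
    return u
```

With these parameters D·dt/dx² = 0.1, well within the stability bound of 1/2; the computed profile develops the familiar travelling front whose asymptotic speed is 2·sqrt(D·r).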
2018 · Conference proceedings contribution (Contributo in Atti di convegno) · metadata-only access
Parallel Aggregation Based on Compatible Weighted Matching for AMG
A Abdullahi
;
P D'Ambra
;
D di Serafino
;
S Filippone
We focus on the extension of the MLD2P4 package of parallel Algebraic MultiGrid (AMG) preconditioners, with the objective of improving its robustness and efficiency when dealing with sparse linear systems arising from anisotropic PDE problems on general meshes. We present a parallel implementation of a new coarsening algorithm for symmetric positive definite matrices, which is based on a weighted matching approach. We discuss preliminary results obtained by combining this coarsening strategy with the AMG components available in MLD2P4, on linear systems arising from applications considered in the Horizon 2020 project "Energy oriented Centre of Excellence for computing applications" (EoCoE).
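The idea of aggregation by weighted matching can be illustrated with a serial greedy approximation: pair the most strongly coupled unknowns first. This is a schematic sketch, not the matching algorithm actually used in MLD2P4:

```python
def greedy_matching_aggregates(n, weighted_edges):
    """Pair unknowns by a greedy weighted matching: scan edges by decreasing
    weight and match both endpoints if still free; any unmatched unknown
    becomes a singleton aggregate. weighted_edges is a list of (i, j, w)
    tuples; returns a list mapping each unknown to its aggregate index."""
    aggregate = [-1] * n
    next_id = 0
    for _, i, j in sorted(((w, i, j) for i, j, w in weighted_edges),
                          reverse=True):
        if aggregate[i] == -1 and aggregate[j] == -1:
            aggregate[i] = aggregate[j] = next_id
            next_id += 1
    for i in range(n):
        if aggregate[i] == -1:          # leftover unknown: singleton aggregate
            aggregate[i] = next_id
            next_id += 1
    return aggregate
```

Pairing strongly coupled unknowns is what lets matching-based coarsening follow anisotropy: the coarse variable represents the pair along the strong-coupling direction.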
Improving strategies for the control and eradication of invasive species is an important aspect of nature conservation, an aspect where mathematical modeling and optimization play an important role. In this paper, we introduce a reaction-diffusion partial differential equation to model the spatiotemporal dynamics of an invasive species, and we use optimal control theory to solve for optimal management, while implementing a budget constraint. We perform an analytical study of the model properties, including the well-posedness of the problem. We apply this to two hypothetical but realistic problems involving plant and animal invasive species. This allows us to determine the optimal space and time allocation of the efforts, as well as the final length of the removal program so as to reach the local extinction of the species.
environmental management
optimal control
population dynamics
reaction diffusion equations
Overcomplete representations such as wavelets and windowed Fourier expansions have become mainstays of modern statistical data analysis. In the present work, in the context of general finite frames, we derive an oracle expression for the mean quadratic risk of a linear diagonal de-noising procedure which immediately yields the optimal linear diagonal estimator. Moreover, we obtain an expression for an unbiased estimator of the risk of any smooth shrinkage rule. This last result motivates a set of practical estimation procedures for general finite frames that can be viewed as the generalization of the classical procedures for orthonormal bases. A simulation study verifies the effectiveness of the proposed procedures with respect to the classical ones and confirms that the correlations induced by frame structure should be explicitly treated to yield an improvement in estimation precision.
Block thresholding
Finite frames
Shrinkage
Signal de-noising
SURE
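For the special case of soft thresholding in an orthonormal basis with unit noise variance, the unbiased risk estimate reduces to the classical SURE formula, which can be minimized over a grid of thresholds. A minimal sketch of that special case (the paper's general finite-frame estimator is more involved):

```python
def soft_threshold(x, t):
    """Soft-thresholding shrinkage rule applied coordinatewise."""
    return [max(abs(v) - t, 0.0) * (1 if v >= 0 else -1) for v in x]

def sure_soft(x, t):
    """Stein unbiased risk estimate for soft thresholding at level t,
    assuming unit noise variance in an orthonormal basis:
    SURE(t) = n - 2 * #{|x_i| <= t} + sum(min(x_i^2, t^2))."""
    n = len(x)
    small = sum(1 for v in x if abs(v) <= t)
    return n - 2 * small + sum(min(v * v, t * t) for v in x)

def best_threshold(x, grid):
    """Pick the threshold minimizing the estimated risk over a grid."""
    return min(grid, key=lambda t: sure_soft(x, t))
```

Because the estimate is computed from the data alone, the threshold can be chosen without knowing the true signal; the frame setting additionally has to account for the correlations the frame structure induces among coefficients.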
Macrophages derived from monocyte precursors undergo specific polarization processes which are influenced by the local tissue environment: classically activated (M1) macrophages, with a pro-inflammatory activity and a role of effector cells in Th1 cellular immune responses, and alternatively activated (M2) macrophages, with anti-inflammatory functions, involved in immunosuppression and tissue repair. At least three different subsets of M2 macrophages, namely M2a, M2b and M2c, are characterized in the literature based on their eliciting signals. The activation and polarization of macrophages is achieved through many, often intertwined, signaling pathways. To describe the logical relationships among the genes involved in macrophage polarization, we used a computational modeling methodology, namely logical (Boolean) modeling of gene regulation. We integrated experimental data and knowledge available in the literature to construct a logical network model for the gene regulation driving macrophage polarization to the M1, M2a, M2b and M2c phenotypes. Using the software tools GINsim and BoolNet, we analysed the network dynamics under different conditions and perturbations to understand how they affect cell polarization. Dynamic simulations of the network model, enacting the most relevant biological conditions, showed coherence with the observed behaviour of in vivo macrophages. The model could correctly reproduce the polarization toward the four main phenotypes as well as toward several hybrid phenotypes, which are known to be experimentally associated with physiological and pathological conditions. We surmise that shifts among different phenotypes in the model mimic the hypothetical continuum of macrophage polarization, with M1 and M2 being the extremes of an uninterrupted sequence of states. Furthermore, model simulations suggest that anti-inflammatory macrophages are resilient to shifting back to the pro-inflammatory phenotype.
macrophage
differentiation
phenotype
model
gene regulation
polarization
immune system
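The synchronous Boolean updating analysed with tools such as GINsim and BoolNet can be illustrated on a toy network. The two-node mutual-inhibition motif below is a hypothetical caricature of M1/M2 antagonism, not the paper's actual macrophage model:

```python
def step(state, rules):
    """One synchronous update of a Boolean network: every gene is updated
    simultaneously from the current state via its logical rule."""
    return {gene: rule(state) for gene, rule in rules.items()}

def attractor(state, rules, max_steps=100):
    """Iterate synchronous updates until a previously seen state recurs;
    returns the first repeated state (a fixed point or a cycle entry)."""
    seen = []
    for _ in range(max_steps):
        if state in seen:
            return state
        seen.append(state)
        state = step(state, rules)
    return state

# Illustrative two-node mutual-inhibition motif (hypothetical rules): each
# phenotype is driven by its stimulus and represses the other.
rules = {
    "M1": lambda s: s["stimulus_M1"] and not s["M2"],
    "M2": lambda s: s["stimulus_M2"] and not s["M1"],
    "stimulus_M1": lambda s: s["stimulus_M1"],   # inputs held constant
    "stimulus_M2": lambda s: s["stimulus_M2"],
}
```

The attractors of such a network are the model's phenotypes: with only the M1 stimulus present, the network settles into the fixed point with M1 on and M2 off, the toy analogue of stable polarization.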
We propose a mathematical model to describe enzyme-based tissue degradation in cancer therapies. The proposed model combines the poroelastic theory of mixtures with the transport of enzymes or drugs in the extracellular space. The effect of the matrix-degrading enzymes on the tissue composition and its mechanical response are accounted for. Numerical simulations in 1D, 2D and axisymmetric (3D) configurations show how an injection of matrix-degrading enzymes alters the porosity of a biological tissue. Finally, we exhibit numerically the main consequences of a matrix-degrading enzyme pretreatment in the framework of chemotherapy: the removal of the diffusive hindrance to the penetration of therapeutic molecules in tumors and the reduction of interstitial fluid pressure, which improves transcapillary transport. Both effects are consistent with previous biological observations.
Mathematical biology
Poroelasticity
ECM degradation
Interstitial fluid pressure
Drug distribution in tissue
We tackle the issue of measuring and understanding visitors' dynamics in a crowded museum in order to create and calibrate a predictive mathematical model. The model is then used as a tool to manage, control and optimize the visitors' use of the museum. Our contribution comes with one successful use case: the Galleria Borghese in Rome, Italy.