Quantum annealers are commercial devices that aim to solve very hard computational problems [1], typically those involving spin glasses [2,3]. Just as in metallurgic annealing, in which a ferrous metal is slowly cooled [4], quantum annealers seek good solutions by slowly removing the transverse magnetic field at the lowest possible temperature. Removing the field diminishes the quantum fluctuations but forces the system to traverse the critical point that separates the disordered phase (at large fields) from the spin-glass phase (at small fields). A full understanding of this phase transition is still missing. A debated, crucial question regards the closing of the energy gap separating the ground state from the first excited state. All hopes of achieving an exponential speed-up, compared to classical computers, rest on the assumption that the gap will close algebraically with the number of spins [5-9]. However, renormalization group calculations predict instead that there is an infinite-randomness fixed point [10]. Here we solve this debate through extreme-scale numerical simulations, finding that both parties have grasped parts of the truth. Although the closing of the gap at the critical point is indeed super-algebraic, it remains algebraic if one restricts the symmetry of possible excitations. As this symmetry restriction is experimentally achievable (at least nominally), there is still hope for the quantum annealing paradigm [11-13].
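For reference, the annealing protocol described above acts on the standard transverse-field Ising spin-glass Hamiltonian (a textbook form, not a formula quoted from the abstract), in which the transverse field Γ(t) is slowly removed while the random couplings J_ij define the spin-glass problem:

```latex
H(t) \;=\; -\,\Gamma(t)\sum_{i}\sigma^{x}_{i}\;-\;\sum_{\langle i j\rangle} J_{ij}\,\sigma^{z}_{i}\sigma^{z}_{j}.
```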
Quantum Spin Glasses
Spin Glasses
Disordered Systems
We release a set of GPU programs for the study of the Quantum (S=1/2) Spin Glass on a square lattice with binary couplings. The library contains two main codes: MCQSG (which carries out Monte Carlo simulations using both the Metropolis and the Parallel Tempering algorithms, for the problem formulated in the Trotter-Suzuki approximation) and EDQSG (which obtains the extremal eigenvalues of the transfer matrix using the Lanczos algorithm). EDQSG has allowed us to diagonalize transfer matrices of size up to 2^36 × 2^36. For its part, MCQSG running on four NVIDIA A100 cards delivers a sub-picosecond time per spin update, a performance that is competitive with dedicated hardware. We also include in our library GPU programs for the analysis of the spin configurations generated by MCQSG. Finally, we provide two auxiliary codes: the first generates the lookup tables employed by the random number generator of MCQSG; the second simplifies the execution of multiple runs using different input data.
Program summary:
Program Title: QISG Suite
CPC Library link to program files: https://doi.org/10.17632/g97sn2t8z2.1
Licensing provisions: MIT
Programming language: CUDA-C
Nature of problem: The critical properties of quantum disordered systems are known only in a few simple cases, whereas there is growing interest in gaining a better understanding of their behaviour, owing to the potential application of quantum annealing techniques to optimization problems. In this context, we provide a suite of codes, which we have recently developed, for the purpose of studying the 2D Quantum Ising Spin Glass.
Solution method: We provide a highly tuned multi-GPU code for the Monte Carlo simulation of the 2D QISG based on a combination of the Metropolis and Parallel Tempering algorithms. Moreover, we provide a code for the evaluation of the eigenvalues of the transfer matrix of the 2D QISG for sizes up to L=6. The eigenvalues are computed using the classic Lanczos algorithm, which relies on a custom multi-GPU/CPU matrix-vector product that dramatically speeds up the execution of the algorithm.
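For orientation only, the sketch below illustrates the kind of Metropolis update that a Suzuki-Trotter simulation of the 2D quantum Ising spin glass performs: the quantum model is mapped onto a classical (2+1)-dimensional Ising model with binary spatial couplings inside each Trotter slice and a ferromagnetic coupling between slices. This is a minimal pure-NumPy sketch under assumed parameters; it is not the MCQSG API, whose CUDA-C multi-GPU implementation is far more elaborate.

```python
import numpy as np

# Minimal NumPy sketch of a Metropolis sweep for the Suzuki-Trotter
# representation of the 2D transverse-field Ising spin glass.
# All names and parameter values (L, M, beta, gamma) are illustrative.

rng = np.random.default_rng(0)
L, M = 6, 16                 # spatial size and number of Trotter slices (assumed)
beta, gamma = 1.0, 1.0        # inverse temperature and transverse field (assumed)
J = rng.choice([-1, 1], size=(2, L, L))     # binary couplings: J[0] right bonds, J[1] down bonds
K_spat = beta / M                            # spatial coupling per Trotter slice
K_trot = 0.5 * np.log(1.0 / np.tanh(beta * gamma / M))  # inter-slice (Trotter) coupling
S = rng.choice([-1, 1], size=(M, L, L))      # spins indexed as (slice, x, y)

def metropolis_sweep(S):
    for t in range(M):
        for x in range(L):
            for y in range(L):
                s = S[t, x, y]
                # local field: 4 spatial neighbours (periodic) plus 2 Trotter neighbours
                h = K_spat * (J[0, x, y] * S[t, (x + 1) % L, y]
                              + J[0, (x - 1) % L, y] * S[t, (x - 1) % L, y]
                              + J[1, x, y] * S[t, x, (y + 1) % L]
                              + J[1, x, (y - 1) % L] * S[t, x, (y - 1) % L])
                h += K_trot * (S[(t + 1) % M, x, y] + S[(t - 1) % M, x, y])
                dE = 2.0 * s * h                      # dimensionless energy change
                if dE <= 0 or rng.random() < np.exp(-dE):
                    S[t, x, y] = -s                   # accept the flip
    return S
```

Parallel tempering would be layered on top of such a sweep by simulating several replicas at different temperatures and periodically proposing swaps between neighbouring temperatures.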
CUDA
Eigenvalues of transfer matrix
Metropolis
Parallel tempering
Quantum spin glass
Paga I.; He J.; Baity-Jesi M.; Calore E.; Cruz A.; Fernandez L. A.; Gil-Narvion J. M.; Gonzalez-Adalid Pemartin I.; Gordillo-Guerrero A.; Iniguez D.; Maiorano A.; Marinari V.; Martin-Mayor V.; Moreno-Gordo J.; Munoz Sudupe A.; Navarro D.; Orbach R. L.; Parisi G.; Perez-Gaviro S.; Ricci-Tersenghi F.; Ruiz-Lorenzo J. J.; Schifano S. F.; Schlagel D. L.; Seoane B.; Tarancon A.; Yllanes D.
Rejuvenation and memory, long considered the distinguishing features of spin glasses, have recently been proven to result from the growth of multiple length scales. This insight, enabled by simulations on the Janus II supercomputer, has opened the door to a quantitative analysis. We combine numerical simulations with comparable experiments to introduce two coefficients that quantify memory. A third coefficient has been recently presented by Freedberg et al. We show that these coefficients are physically equivalent by studying their temperature and waiting-time dependence.
Marco Baity-Jesi; Enrico Calore; Andrés Cruz; Luis Antonio Fernández; José Miguel Gil-Narvión; I. Gonzalez-Adalid Pemartin; Antonio Gordillo-Guerrero; David Íñiguez; Andrea Maiorano; Vincenzo Marinari; Víctor Martín-Mayor; Javier Moreno-Gordo; Antonio Muñoz Sudupe; Denis Navarro; Ilaria Paga; Giorgio Parisi; Sergio Pérez-Gaviro; Federico Ricci-Tersenghi; Juan Jesús Ruiz-Lorenzo; Sebastiano Fabio Schifano; Beatriz Seoane; Alfonso Tarancón; David Yllanes
We unveil the multifractal behavior of Ising spin glasses in their low-temperature phase. Using the Janus II custom-built supercomputer, the spin-glass correlation function is studied locally. Dramatic fluctuations are found when pairs of sites at the same distance are compared. The scaling of these fluctuations, as the spin-glass coherence length grows with time, is characterized through the computation of the singularity spectrum and its corresponding Legendre transform. A comparatively small number of site pairs controls the average correlation that governs the response to a magnetic field. We explain how this scenario of dramatic fluctuations (at length scales smaller than the coherence length) can be reconciled with the smooth, self-averaging behavior that has long been considered to describe spin-glass dynamics.
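For context, the singularity spectrum and its Legendre transform mentioned above follow the standard multifractal formalism (these are the textbook definitions, not formulas quoted from the paper): if the q-th moments of the local observable scale as a power law with exponents τ(q), then

```latex
\alpha(q) \;=\; \frac{d\tau}{dq}, \qquad
f\bigl(\alpha(q)\bigr) \;=\; q\,\alpha(q) \;-\; \tau(q),
```

so that the singularity spectrum f(α) is the Legendre transform of the exponent function τ(q).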
disordered systems
fractal dimensions
intermittency
large-scale simulations
Many systems, when initially placed far from equilibrium, exhibit surprising behavior in their attempt to equilibrate. Striking examples are the Mpemba effect and the cooling-heating asymmetry. These anomalous behaviors can be exploited to shorten the time needed to cool down (or heat up) a system. However, a strategy to design these effects in mesoscopic systems has been missing. We bring forward a description that allows us to formulate such strategies and, along the way, makes these paradoxical behaviors natural. In particular, we study the evolution of macroscopic physical observables of systems freely relaxing under the influence of one or two instantaneous thermal quenches. The two crucial ingredients in our approach are timescale separation and a nonmonotonic temperature evolution of an important state function. We argue that both are generic features near a first-order transition. Our theory is exemplified with the one-dimensional Ising model in a magnetic field, using analytic results and numerical experiments.
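The illustrative model mentioned at the end is, in its standard form (the textbook Hamiltonian, not a formula taken from the paper),

```latex
H \;=\; -\,J\sum_{i} s_{i}\,s_{i+1} \;-\; h\sum_{i} s_{i}, \qquad s_i = \pm 1,
```

with nearest-neighbour coupling J and external magnetic field h.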
Nonequilibrium statistical mechanics, Markovian processes, Ising model
The new σ-IASI/F2N radiative transfer model is an advancement of the σ-IASI model, introduced in 2002. It enables rapid simulations of Earth-emitted radiance and Jacobians under various sky conditions and geometries, covering the spectral range of 3-100 μm. The model has been successfully utilized in δ-IASI, the advanced Optimal Estimation tool tailored for the IASI MetOp interferometer, and its extension to the Far Infrared (FIR) is significant for the ESA Earth Explorer FORUM mission, which requires a precise treatment of the cloud radiative effect, crucial in regions with dense clouds and temperature gradients. The model's update, incorporating the "linear-in-T" correction, addresses these challenges, complementing the "linear-in-tau" approach. Demonstrations highlight its effectiveness in simulating cloud complexities, with the integration of the "linear-in-T" and Tang corrections for the computation of cloud radiative effects. The results presented show that the updated σ-IASI/F2N can treat the overall complexity of clouds effectively and completely, while at the same time minimizing biases.
The volume collects the long abstracts of the 79 contributions presented during the fourth edition of the “Young Applied Mathematicians Conference” (YAMC, www.yamc.it). Organized in Rome under the sponsorship of the Institute for Applied Mathematics (IAC) of the CNR and the Department of Mathematics at Sapienza, University of Rome, the conference took place from September 16 to 20, 2024, and brought together primarily young researchers (students, PhD candidates, post-docs, etc.) from 37 universities and research centers across 8 countries. This volume is intended to promote the communication of the research presented in the field of applied mathematics, with a primary focus on numerical analysis, artificial intelligence, statistics, and mathematical modeling.
We rigorously justify the bilayer shallow-water system as an approximation to the hydrostatic Euler equations in situations where the flow is density-stratified with close-to-piecewise-constant density profiles and close-to-columnar velocity profiles. Our theory accommodates continuous stratification, so that admissible deviations from bilayer profiles are not pointwise small. This leads us to define refined approximate solutions that are able to describe, at first order, the flow in the pycnocline. Because the hydrostatic Euler equations are not known to enjoy suitable stability estimates, we rely on thickness-diffusivity contributions proposed by Gent and McWilliams. Our strategy also applies to one-layer and multilayer frameworks.
We review recent mathematical results concerning the analysis of hydrostatic equations in the context of stably stratified fluids. Beginning with the simpler and better understood setting of homogeneous fluids, we emphasize the additional mathematical challenges posed by the non-homogeneous framework. We present both positive and negative results, including well-posedness and a proof of the hydrostatic limit under a suitable regularization, alongside ill-posedness in the fully inviscid setting and the breakdown of the hydrostatic limit in specific scenarios.
In a layered thermal conductor, the inaccessible interface can be damaged by mechanical loading, chemical infiltration, or aging. In this case, the original thermal properties of the specimen are modified. The defect typically occurs in the form of delamination. The present paper deals with the nondestructive evaluation of the interface thermal conductance h from the knowledge of the surface temperature when the specimen is heated in some controlled way. The goal is achieved by expanding h in powers of the thickness of the upper layer. The mathematical analysis of the model produces exact formulas for the first coefficients of this expansion, which are tested on simulated and real data. The evaluation of interface flaws then follows from a reliable approximation of h.
In this work, we investigate how fluid flows impact the aggregation mechanisms of Aβ40 proteins and Aβ16–22 peptides and mechanically perturb their (pre)fibrillar aggregates. We exploit the OPEP coarse-grained model for proteins and the Lattice Boltzmann Molecular Dynamics technique. We show that, beyond a critical shear rate, amyloid aggregation speeds up in Couette flow because of the shorter collision times between aggregates, following a transition from diffusion-limited to advection-dominated dynamics. We also characterize the mechanical deformation of (pre)fibrillar states due to the fluid flows (Couette and Poiseuille), confirming the capability of (pre)fibrils to form pathological loop-like structures, as detected in experiments. Our findings can be of relevance for microfluidic applications and for understanding aggregation in the interstitial brain space.
Langevin and Brownian simulations play a prominent role in computational research, and state-of-the-art integration algorithms provide trajectories with different stability ranges and accuracy in reproducing statistical averages. The practical usability of integrators is an important aspect in allowing large time steps while ensuring numerical stability and overall computational efficiency. In this work, different use cases and practical features are selected in order to perform a cumulative comparison of integrators, with a focus on evaluating the derived velocity and position autocorrelation functions, a comparison that is often disregarded in the literature. A standard industrial open-source software methodology is suggested to systematically compare the different algorithms.
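As a purely illustrative example of the kind of integrator and observable being compared, the sketch below implements a BAOAB Langevin step for a one-dimensional harmonic oscillator and estimates the velocity autocorrelation function from the resulting trajectory. The choice of BAOAB and all parameter values are assumptions for the sketch, not the specific algorithms or software evaluated in the paper.

```python
import numpy as np

# Illustrative BAOAB Langevin integrator for a 1D harmonic oscillator,
# followed by an FFT-based estimate of the velocity autocorrelation function.

rng = np.random.default_rng(1)
k, m, gamma, kT = 1.0, 1.0, 0.5, 1.0     # assumed spring constant, mass, friction, temperature
dt, n_steps = 0.01, 100_000               # assumed time step and trajectory length

def force(x):
    return -k * x

x, v = 0.0, 0.0
c1 = np.exp(-gamma * dt)                   # O-step damping factor
c2 = np.sqrt(kT / m * (1.0 - c1 ** 2))     # O-step noise amplitude
vs = np.empty(n_steps)
for i in range(n_steps):
    v += 0.5 * dt * force(x) / m           # B: half kick
    x += 0.5 * dt * v                      # A: half drift
    v = c1 * v + c2 * rng.standard_normal()  # O: Ornstein-Uhlenbeck step
    x += 0.5 * dt * v                      # A: half drift
    v += 0.5 * dt * force(x) / m           # B: half kick
    vs[i] = v

# Velocity autocorrelation function (unbiased estimate, stationary trajectory assumed).
n = len(vs)
spec = np.fft.rfft(vs - vs.mean(), 2 * n)
vacf = np.fft.irfft(spec * np.conj(spec))[:n] / np.arange(n, 0, -1)
```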
Nucleation and growth of methane clathrate hydrates is an exceptional playground to study the crystallisation of multi-component, host-guest crystallites when one of the species forming the crystal, the guest, has a higher concentration in the solid than in the liquid phase. This adds problems related to the transport of the low-concentration species, here methane. A key aspect in the modelling of clathrates is the water model employed in the simulation. In previous articles, we compared an all-atom force model, TIP4P/Ewald, with a coarse-grained one, which is highly appreciated for its computational efficiency. Here, we perform a complementary analysis considering three all-atom water models: TIP4P/Ewald, TIP4P/ice and TIP5P. A key difference between these models is that the first predicts a much lower freezing temperature. Intuitively, one expects lower water freezing temperatures to correspond to lower coexistence temperatures between the water-methane solution and the methane clathrate, which determine the degree of supercooling and the degree of supersaturation. Hence, in the simulation conditions of 250 K (500 atm and fixed methane molar fraction), one expects computational samples made of TIP4P/ice and TIP5P, which have a similar freezing temperature (T_f ≈ 273 K), to be more supersaturated than those of TIP4P/Ewald (T_f ≈ 245 K), and crystallisation to be faster. Surprisingly, we find that while the nucleation rate is consistent with this prediction, the growth rate with TIP4P/ice and TIP5P is much slower than with TIP4P/Ewald. The latter result is attributed to the slower reorientation of water molecules in strongly supercooled conditions, resulting in a lower growth rate. This suggests that the freezing temperature is not a suitable parameter to evaluate the adequacy of a water model.
Clathrate crystallisation
nucleation
growth
force models
non-equilibrium molecular dynamics
The book is interwoven according to the intrinsic logic of the most important modern applications of electrospun nanofibers. It discusses such application-oriented nanofibers as self-healing vascular nanotextured materials, biopolymer nanofibers, soft robots and actuators based on nanofibers, biopolymer nanofiber-based triboelectric nanogenerators, metallized nanofibers, and heaters and sensors based on them. It also includes such topics as injectable nanofibrous biomaterials, fibrous hemostatic agents and their interaction with blood, as well as electrospun nanofibers for face-mask applications. The book also details polyelectrolyte-based complex nanofibers and their use as actuators, and covers drug release facilitated by polyelectrolyte-based complex nanofibers. The fundamental aspects of electrospinning of polymer nanofibers, discussed in the final part of the book, link them to the applications described in the preceding chapters. The experimental aspects encompass such topics as polymer solution preparation and rheological properties, e.g., viscoelasticity and the related spinnability, the electrical conductivity of polymer solutions, and the cascade of physical phenomena resulting in the formation of nanofibers. The general quasi-1D equations used for modeling the formation of electrospun polymer nanofibers, and the numerical aspects of their solution, are also discussed in detail, including such modeling-driven applications as nanofiber alignment by electric focusing fields.
Early detection of prediabetes is crucial to preventing its progression to diabetes. Providing individuals with a personalized sense of their risk could improve prevention efforts. While complex mathematical models that simulate metabolic and inflammatory processes offer detailed and patient-specific insights, their computational cost usually makes them impractical for real-time prediction on mobile platforms. This work introduces a long short-term memory (LSTM) surrogate for the MT2D model, which simulates the main metabolic and inflammatory processes involved in the transition to prediabetes. The model is developed using a dataset of 43 669 simulated subjects, each with lifestyle inputs and biomarker outputs over six months. Using 8 time-series inputs, the surrogate predicts the dynamics of 11 key metabolic and inflammatory outputs, closely replicating the behaviour of the MT2D model. After training, the proposed LSTM model reduces computational time from an average of 8.4 hours to 0.1 seconds per simulation, making it suitable for mobile device deployment. The model achieves root mean squared errors on the order of 10^-2 on scaled data, and shows promise for prediabetes risk assessment by capturing trends in inflammatory biomarkers. This surrogate model can provide real-time and patient-specific insights into metabolic health, potentially improving the understanding of prediabetes risk.
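A minimal sketch of such a sequence-to-sequence LSTM surrogate is shown below (PyTorch). Only the 8-input/11-output shapes come from the abstract; the layer sizes, two-layer depth, and the example sequence length are assumptions, not the architecture actually trained on the MT2D data.

```python
import torch
import torch.nn as nn

# Hypothetical LSTM surrogate mapping lifestyle/biomarker input time series
# to the trajectories of metabolic and inflammatory outputs.

class SurrogateLSTM(nn.Module):
    def __init__(self, n_inputs=8, n_outputs=11, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_inputs, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_outputs)

    def forward(self, x):          # x: (batch, time, n_inputs)
        h, _ = self.lstm(x)        # h: (batch, time, hidden)
        return self.head(h)        # (batch, time, n_outputs)

model = SurrogateLSTM()
x = torch.randn(4, 180, 8)         # e.g. 4 subjects, ~6 months of daily inputs (assumed resolution)
y_pred = model(x)                  # predicted trajectories of the 11 outputs
```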
Surrogate
LSTM
Prediabetes
Risk
Input to Output Prediction
Dynamical System
Monitoring surface and vegetation conditions is crucial for analyzing the impact of climate change on natural resources, especially in regions susceptible to extreme events such as land and forest dryness caused by summer heatwaves. Traditional satellite indices, including NDVI, have limitations in distinguishing between barren soil and distressed vegetation. This study shows the potential of two recently validated indices, the Emissivity Contrast Index (ECI) and the Water Deficit Index (WDI), to assess vegetation stress and woodland degradation. These indices, derived from Infrared Atmospheric Sounding Interferometer (IASI) data, utilize an Optimal Interpolation scheme for upscaling and remapping. The effectiveness of ECI and WDI has been validated through a comparison with Surface Soil Moisture (SSM). The methodology allows for the simultaneous assessment of surface hydric stress, identifying regions at risk of drought and forest fires. This approach has been applied to southern Italy during 2023, an area which has been impacted by strong heatwaves over the last decade. These indices could demonstrate significant effectiveness when estimated using high-resolution sounders, such as the Surface Biology and Geology Observing Terrestrial Thermal Emission Radiometer (SBG OTTER). This would allow for more effective monitoring of small, heterogeneous areas.
Emissivity
Infrared
Satellite
Soil Water Stress
Vegetation stress
Since the Laplace transform plays a central role in the solution of differential equations, it seems natural to extend it to the field of fractional calculus, since many applications of this topic have been proposed and are becoming increasingly important. In this paper we extend the classical Laplace transform by replacing the usual kernel with a suitable one, in both the classical and the Laguerre-type case, obtained by constructing the reciprocal of some exponential-type functions with respect to an appropriate differential operator. Some examples are shown, derived using the computer algebra system Mathematica.
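For reference, the classical transform being generalized is (standard definition; the modified kernels constructed in the paper are not reproduced here)

```latex
\mathcal{L}[f](s) \;=\; \int_{0}^{\infty} e^{-st}\, f(t)\, dt ,
```

and the extension described above replaces the kernel e^{-st} with a suitable exponential-type function adapted to the chosen differential operator.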
Electroencephalography (EEG) source imaging aims to reconstruct brain activity maps from the neuroelectric potential differences measured on the scalp. To obtain the brain activity map, we need to solve an ill-posed and ill-conditioned inverse problem that requires regularization techniques to make the solution viable. When dealing with real-time applications, dimensionality reduction techniques can be used to reduce the computational load required to evaluate the numerical solution of the EEG inverse problem. To this end, in this paper we use the random dipole sampling method, in which a Monte Carlo technique is used to reduce the number of neural sources. This is equivalent to reducing the number of unknowns in the inverse problem and can be seen as a first regularization step. Then, we solve the reduced EEG inverse problem with two popular inversion methods, the weighted Minimum Norm Estimate (wMNE) and the standardized LOw Resolution brain Electromagnetic TomogrAphy (sLORETA). The main result of this paper is a set of error estimates for the reconstructed activity maps obtained with the randomized versions of wMNE and sLORETA. Numerical experiments on synthetic EEG data demonstrate the effectiveness of the random dipole sampling method.
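For orientation, the (non-randomized) weighted minimum norm estimate has the standard Tikhonov-regularized closed form shown below (a textbook expression, not the paper's randomized variant, which applies the same estimator to a reduced, sampled set of dipoles). Here L is the lead-field matrix, W the weight matrix, v the measured potentials, and λ the regularization parameter:

```latex
\hat{\jmath}_{\mathrm{wMNE}} \;=\; W^{-1} L^{\mathsf T}\bigl( L\, W^{-1} L^{\mathsf T} + \lambda I \bigr)^{-1} v .
```

sLORETA then standardizes each dipole component of such an estimate by its estimated variance before mapping the activity.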
EEG imaging
inversion method
random sampling
sLORETA
underdetermined inverse problem
wMNE