A nullomer is an oligomer that does not occur as a subsequence in a given DNA sequence, i.e. it is an absent word of that sequence. The relevance of nullomers to several applications, from drug discovery to forensic practice, is now debated in the literature. Here we investigated the nature of nullomers, asking whether their absence from genomes has a purely statistical explanation or is a peculiar feature of genomic sequences. We introduced an extension of the notion of nullomer, namely high order nullomers, which are nullomers whose mutated sequences are still nullomers. We studied several of their aspects: comparison with nullomers of random sequences, CpG distribution, and mean helical rise. In agreement with previous results, we found that the number of nullomers in the human genome is much larger than expected by chance. Nevertheless, the opposite result was obtained when considering a random DNA sequence that preserves dinucleotide frequencies. The analysis of CpG frequencies in nullomers and high order nullomers revealed, as expected, a high CpG content, but it also highlighted a strong dependence of CpG frequencies on dinucleotide position, suggesting that nullomers have their own peculiar structure and are not simply sequences with a biased CpG frequency. Furthermore, phylogenetic trees were built on eleven species, based both on the similarities between dinucleotide frequencies and on the number of nullomers two species share, showing that nullomers are fairly conserved among closely related species. Finally, the study of the mean helical rise of nullomer sequences revealed significantly high mean rise values, reinforcing the hypothesis that these sequences have peculiar structural features. The results show that nullomers are a consequence of the peculiar structure of DNA (including biased CpG frequency and CpG islands), so that the hypermutability model, even taking CpG islands into account, seems insufficient to explain the nullomer phenomenon.
Finally, high order nullomers could emphasize those features that already make simple nullomers useful in several applications.
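As a concrete illustration of the definitions above, here is a minimal sketch of nullomer detection by exhaustive k-mer enumeration. The function names are ours, and we assume "mutated sequences" means single-base substitutions; the published analyses use genome-scale pipelines, not this toy approach.

```python
from itertools import product

ALPHABET = "ACGT"

def nullomers(seq, k):
    """Length-k words absent from seq, i.e. its nullomers."""
    present = {seq[i:i + k] for i in range(len(seq) - k + 1)}
    return {"".join(w) for w in product(ALPHABET, repeat=k)} - present

def is_high_order(word, absent):
    """High order nullomer: every single-base mutant is also absent."""
    return all(
        word[:i] + b + word[i + 1:] in absent
        for i, base in enumerate(word)
        for b in ALPHABET
        if b != base
    )

seq = "ACGTACGGTTAACCG"
absent = nullomers(seq, 3)   # 12 distinct 3-mers occur, so 52 are absent
high = {w for w in absent if is_high_order(w, absent)}
```

On a real genome the same counts are obtained with a single linear scan per k, which is why the comparison with random sequences of equal composition is computationally feasible.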
We present a solution, based on a suitable combination of heuristics and parallel processing techniques, for finding the best allocation of the financial assets of a pension fund, taking into account all the specific rules of the fund. We compare the values of an objective function computed over a large set (thousands) of possible scenarios for the evolution of the Net Asset Value (NAV) of the share of each asset class in which the financial capital of the fund is invested. Our approach depends neither on the model used for the evolution of the NAVs nor on the objective function. In particular, it does not require any linearization or similar approximation of the problem. Although we applied it to a situation in which the number of possible asset classes is limited to a few units (six in the specific case), the same approach can also be followed in other cases by grouping asset classes according to their features.
We address the problem of the maximal integrability of the gradient of solutions to quasilinear elliptic equations with merely measurable coefficients in two variables. Optimal results are obtained in the framework of Orlicz spaces, and in the more general setting of all rearrangement-invariant spaces. Applications to special instances are exhibited, which provide new gradient bounds or improve certain results available in the literature.
Particle-based modeling of living actin filaments in an optical trap
Hunt TA; Mogurampelly S; Ciccotti G; Pierleoni C; Ryckaert JP
We report a coarse-grained molecular dynamics simulation study of a bundle of parallel actin filaments under supercritical conditions pressing against a loaded mobile wall, using a particle-based approach where each particle represents an actin unit. The filaments are grafted to a fixed wall at one end and are reactive at the other end, where they can perform single monomer (de)polymerization steps and push on a mobile obstacle. We simulate a reactive grand canonical ensemble in a box of fixed transverse area A, with a fixed number of grafted filaments Nf, at temperature T and monomer chemical potential μ1. For the single filament case (Nf = 1) and for a bundle of Nf = 8 filaments, we analyze the structural and dynamical properties at equilibrium, where the external load compensates the average force exerted by the bundle. The dynamics of the bundle-moving-wall unit are characteristic of an over-damped Brownian oscillator, in agreement with recent in vitro experiments with an optical trap setup. We analyze the influence of the pressing wall on the kinetic rates of (de)polymerization events for the filaments. Both static and dynamic results compare reasonably well with recent theoretical treatments of the same system. Thus, we consider the proposed model a good tool to investigate the properties of a bundle of living filaments.
On the properties of a bundle of flexible actin filaments in an optical trap
Perilli A; Pierleoni C; Ciccotti G; Ryckaert JP
We establish the statistical mechanics framework for a bundle of N_f living and uncrosslinked actin filaments in a supercritical solution of free monomers pressing against a mobile wall. The filaments are anchored normally to a fixed planar surface at one of their ends and, because of their limited flexibility, grow almost parallel to each other. Their growing ends hit a moving obstacle, depicted as a second planar wall, parallel to the first and subject to a harmonic compressive force. The force constant is denoted the trap strength and the distance between the two walls the trap length, to make contact with the experimental optical trap apparatus. For an ideal solution of reactive filaments and free monomers at fixed free monomer chemical potential μ_1, we obtain the general expression for the grand potential, from which we derive averages and distributions of the relevant physical quantities, namely the obstacle position, the bundle polymerization force, and the number of filaments in direct contact with the wall. The grafted living filaments are modeled as discrete wormlike chains, with F-actin persistence length l_p, subject to discrete contour length variations ±d (the monomer size) to model single monomer (de)polymerization steps. Rigid filaments (l_p = ∞), either isolated or in bundles, all give average stalling forces in agreement with Hill's prediction F_s^H = N_f k_B T ln(ρ_1/ρ_1c)/d, independent of the average trap length. Here ρ_1 is the density of free monomers in the solution and ρ_1c its critical value, at which a filament neither grows nor shrinks in the absence of external forces. Flexible filaments (l_p < ∞), for values of the trap strength suitable to prevent their lateral escape, instead give an average bundle force and an average trap length slightly larger (by a few percent) than in the corresponding rigid cases.
Still, the stalling force remains nearly independent of the average trap length, but results from the product of two strongly L-dependent contributions: the fraction of touching filaments, proportional to (⟨L⟩_OT)^2, and the single filament buckling force, proportional to (⟨L⟩_OT)^(-2).
We establish estimates for solutions to homogeneous Dirichlet problems for a class of elliptic equations with a zero-order term, of the form L(u) = g(x, u) + f(x), where the operator L fulfills an anisotropic ellipticity condition. Such estimates are obtained in terms of solutions to suitable problems with radially symmetric data, when no sign conditions on g are required.
A priori estimate; Anisotropic Dirichlet problems; Anisotropic symmetrization
Searching for words or sentences within large sets of textual documents can be very challenging unless an index of the data has been created in advance. However, indexing can be very time-consuming, especially if the text is not readily available and has to be extracted from files stored in different formats. Several solutions based on the MapReduce paradigm have been proposed to accelerate the process of index creation. These solutions perform well when data are already distributed across the hosts involved in the computation. On the other hand, the cost of distributing data can introduce noticeable overhead. We propose ISODAC, a new approach aimed at improving efficiency without sacrificing reliability. Our solution reduces the number of I/O operations to the bare minimum by using a stream of in-memory operations to extract and index text. We further improve performance by using GPUs for the most computationally intensive tasks of the indexing procedure. ISODAC indexes heterogeneous documents up to 10.6x faster than other widely adopted solutions, such as Apache Spark. As a proof of concept, we developed a tool to index forensic disk images that can easily be used by investigators through a web interface.
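The in-memory indexing step at the core of such pipelines can be sketched as a plain inverted index built without intermediate files. This is a minimal illustration of the idea, not ISODAC's actual implementation; format-specific text extraction and GPU offload are omitted.

```python
from collections import defaultdict

def build_index(docs):
    """Build an inverted index (token -> set of document ids)
    entirely in memory, avoiding intermediate on-disk files."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.lower().split():
            index[token].add(doc_id)
    return index

# Toy corpus standing in for text streamed out of heterogeneous files.
docs = {"a.txt": "the quick brown fox", "b.txt": "the lazy dog"}
index = build_index(docs)
```

Queries then reduce to set lookups and intersections over `index`, which is what makes the approach fast once the (expensive) extraction phase is kept off the disk.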
We present the results obtained by using an evolution of our CUDA-based solution for the exploration, via breadth-first search, of large graphs. This latest version fully exploits the features of the Kepler architecture and relies on a combination of techniques to reduce both the number of communications among the GPUs and the amount of exchanged data. The final result is a code that can visit more than 800 billion edges per second on a cluster equipped with 4,096 Tesla K20X GPUs.
Breadth First Search
CUDA
GPU
Large graphs
Parallel computing
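The traversal pattern behind such codes is a level-synchronous BFS, in which each level's frontier is expanded in bulk; that bulk expansion is the step distributed across GPUs. A minimal sequential sketch, with our own function names and Python dicts standing in for the compressed graph representation:

```python
def bfs_levels(adj, source):
    """Level-synchronous BFS: expand the whole frontier at once,
    recording the distance (level) of every reachable vertex."""
    dist = {source: 0}
    frontier = {source}
    level = 0
    while frontier:
        level += 1
        next_frontier = set()
        for u in frontier:
            for v in adj.get(u, ()):
                if v not in dist:          # first visit fixes the level
                    dist[v] = level
                    next_frontier.add(v)
        frontier = next_frontier           # barrier between levels
    return dist

adj = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
dist = bfs_levels(adj, "a")                # {"a": 0, "b": 1, "c": 1, "d": 2}
```

In the multi-GPU setting the per-level barrier becomes a communication phase, which is why reducing the number and size of exchanges dominates the performance tuning.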
By means of mesoscopic numerical simulations of a model soft-glassy material, we investigate the role of boundary roughness in the flow behaviour of the material, probing the bulk/wall and global/local rheologies. We show that roughness reduces the wall slip induced by wettability properties and acts as a source of fluidisation for the material. A direct inspection of the plastic events suggests that their rate of occurrence grows with the fluidity field, reconciling our simulations with kinetic elasto-plastic descriptions of jammed materials. Nevertheless, we observe qualitative and quantitative differences in the scaling, depending on the distance from the rough wall and on the imposed shear. The impact of roughness on the orientational statistics is also studied.
Graphics Processing Units (GPUs) exhibit significantly higher peak performance than conventional CPUs. However, in general only highly parallel algorithms can exploit their potential. In this scenario, the iterative solution of sparse linear systems of equations can be carried out quite efficiently on a GPU, as it requires only matrix-vector products, dot products, and vector updates. However, to be really effective, any iterative solver needs to be properly preconditioned, and this represents a major bottleneck for a successful GPU implementation. Due to its inherent parallelism, the factored sparse approximate inverse (FSAI) preconditioner is an optimal candidate for the conjugate gradient-like solution of sparse linear systems. However, its GPU implementation requires a nontrivial recasting of multiple computational steps. We present our GPU version of the FSAI preconditioner, along with a set of results showing that a noticeable speedup is obtained with respect to a highly tuned CPU counterpart.
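The solver structure referred to above, built from matrix-vector products, dot products, and vector updates only, is the preconditioned conjugate gradient. A minimal pure-Python sketch follows; a diagonal (Jacobi) preconditioner stands in for FSAI here, since applying FSAI likewise reduces to sparse matrix-vector products.

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def matvec(A, x):
    # Dense row-list matrix for illustration; a real code uses a sparse format.
    return [dot(row, x) for row in A]

def pcg(A, b, apply_prec, tol=1e-12, maxiter=200):
    """Preconditioned conjugate gradient: only mat-vecs, dot products
    and vector updates, which is what makes it GPU-friendly."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                     # residual for the zero initial guess
    z = apply_prec(r)
    p = z[:]
    rz = dot(r, z)
    for _ in range(maxiter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = apply_prec(r)        # preconditioner application
        rz_new = dot(r, z)
        beta = rz_new / rz
        rz = rz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]     # small SPD test matrix
b = [1.0, 2.0]
jacobi = lambda r: [r[i] / A[i][i] for i in range(len(r))]
x = pcg(A, b, jacobi)
```

Every operation inside the loop is data-parallel; the serial bottleneck in practice is computing the preconditioner itself, which is exactly where FSAI's inherent parallelism pays off.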
We study the initial-boundary value problem (Formula presented.) with measure-valued initial data. Here Ω is a bounded open interval, φ(0) = φ(∞) = 0, φ is increasing in (0, α) and decreasing in (α, ∞), and the regularising term ψ is increasing but bounded. It is natural to study measure-valued solutions, since singularities may appear spontaneously in finite time. Nonnegative Radon measure-valued solutions are known to exist, and their construction is based on an approximation procedure. Until now nothing was known about their uniqueness. In this note we construct some nontrivial examples of solutions which do not satisfy all properties of the constructed solutions, whence uniqueness fails. In addition, we classify the steady state solutions.
We study a quasilinear parabolic equation of forward-backward type, under assumptions on the nonlinearity which hold for a wide class of mathematical models, using a pseudo-parabolic regularization of power type. We prove existence and uniqueness of positive solutions of the regularized problem in a space of Radon measures. It is shown that these solutions satisfy suitable entropy inequalities. We also study their qualitative properties, in particular proving that the singular part of the solution with respect to the Lebesgue measure is constant in time.
The estimation of parameters in a linear model is considered under the hypothesis that the noise, with finite second-order statistics, can be represented by random coefficients in a given deterministic basis. An extended underdetermined design matrix is then formed, and the estimator of the extended parameters with minimum ℓ1 norm is computed. It is proved that, if the noise variance is larger than a threshold, which depends on the unknown parameters and on the extended design matrix, then the proposed estimator of the original parameters dominates the least-squares estimator in the sense of the mean square error. A small simulation illustrates its behavior. Moreover, it is shown experimentally that the estimator can be convenient even when the design matrix is not known and only an estimate of it can be used. Furthermore, the noise basis can possibly be used to introduce some prior information into the estimation process. These points are illustrated in a simulation, by using the proposed estimator to solve a difficult ill-posed inverse problem related to the complex moments of an atomic complex measure.
Linear model
Mean square error
Biased estimates
Noise model
ℓ1 norm minimization
Ill-posed inverse problems
This paper proposes a macroscopic model of crowd dynamics, grounded in microscopic phenomenological observations which are upscaled by means of a formal mathematical procedure. The actual applicability of the model to real-world problems is tested by considering pedestrian traffic along footbridges, of interest for Structural and Transportation Engineering. The genuinely macroscopic quantitative description of the crowd flow directly matches the engineering need for bulk results. However, three issues beyond modelling alone are of primary importance: the pedestrian inflow conditions, the numerical approximation of the equations for nontrivial footbridge geometries, and the calibration of the free parameters of the model on the basis of the in situ measurements currently available. These issues are discussed, and a solution strategy is proposed.
A comparison between first-order microscopic and macroscopic differential models of crowd dynamics is established for an increasing number N of pedestrians. The novelty is the fact of considering massive agents, namely particles whose individual mass does not become infinitesimal as N grows. This implies that the total mass of the system is not constant but grows with N. The main result is that the two types of models approach one another in the limit N → ∞, provided the strength and/or the domain of pedestrian interactions are properly modulated by N at either scale. This is consistent with the idea that pedestrians may adapt their interpersonal attitudes according to the overall level of congestion.
Pedestrian dynamics
Cellular automaton
Flocking dynamics
Kinetic theory
Flow
Simulation
Evacuation
Existence
We briefly review some aspects of anomalous diffusion and its relevance in reactive systems. In particular, we consider strong anomalous diffusion, characterized by the moment behaviour ⟨|x(t)|^q⟩ ∼ t^(qν(q)), where ν(q) is a non-constant function, and we discuss its consequences. Even in the apparently simple case ν(2) = 1/2, strong anomalous diffusion may correspond to nontrivial features, such as a non-Gaussian probability distribution and a peculiar scaling of large order moments.
anomalous transport
reaction spreading
front propagation
HVAC systems are the largest energy consumers in a building, and a clean HVAC system can yield about 11% in energy savings. Moreover, particulate pollution is one of the main causes of cancer death and of several other health damages. This paper presents an innovative, non-invasive procedure for the automatic assessment of indoor air quality as it depends on HVAC cleaning conditions. It is based on a mathematical algorithm that processes a few on-site physical measurements, acquired by dedicated sensors at suitable locations according to a specific timetable. The output of the algorithm is a set of indexes that provide a snapshot of the system, with separate zooms on filters and ducts. The proposed methodology contributes to optimizing both HVAC maintenance procedures and air quality preservation. Robustness, portability, and low implementation costs make it possible to plan maintenance interventions, limiting them to when standard HVAC working conditions need to be restored.
The paper presents a model for assessing image quality from a subset of pixels. It is based on the fact that human beings do not explore the whole image information when quantifying its degree of distortion. Hence, the vision process can be seen as in agreement with the Asymptotic Equipartition Property, which assures the existence of a subset of sequences of image blocks able to describe the whole image source with a prefixed and small error. Specifically, the well-known Structural SIMilarity index (SSIM) has been considered. Its entropy has been used to define a method for selecting those image pixels that enable SSIM estimation with sufficient precision. Experimental results show that the proposed selection method is able to reduce the number of operations required by SSIM by a factor of about 200, with an estimation error of less than 8%.
Information Theory
SSIM
Image Quality Assessment
Typical Set
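For reference, the per-block SSIM computation and a subset-based estimate can be sketched as follows. This is a toy illustration: uniform random block sampling stands in for the paper's entropy-driven typical-set selection, and the constants C1, C2 are the usual ones for 8-bit images.

```python
import random

def block_ssim(x, y, C1=6.5025, C2=58.5225):
    """SSIM between two equal-size blocks of pixel values (0-255).
    Uses the standard mean/variance/covariance formulation."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / \
           ((mx * mx + my * my + C1) * (vx + vy + C2))

def ssim_estimate(blocks_x, blocks_y, fraction, seed=0):
    """Estimate mean SSIM from a random subset of block pairs,
    mimicking the idea of scoring only part of the image."""
    rng = random.Random(seed)
    k = max(1, int(fraction * len(blocks_x)))
    idx = rng.sample(range(len(blocks_x)), k)
    return sum(block_ssim(blocks_x[i], blocks_y[i]) for i in idx) / k
```

Identical blocks score exactly 1.0, and shrinking `fraction` trades estimation accuracy for fewer per-block computations, which is the cost/precision trade-off the paper quantifies.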