Here we discuss the dilemma posed by biological high-throughput data: how should replicated experiments and data from closely related species be integrated? Should we treat each species as a self-contained source of data when replicated experiments are available or, vice versa, should we try to collect information from the large number of related species analyzed in different laboratories? In this paper we make and justify the observation that experimental replicates and phylogenetic data may be combined to strengthen the evidence for identifying transcriptional motifs and networks, which seems quite difficult with other currently used methods. In particular, we discuss the use of phylogenetic inference and the potential of the Bayesian variable selection procedure in data integration. To illustrate the proposed approach we present a case study based on sequences and microarray data from fungal species. We also focus on the interpretation of the results with respect to the problem of experimental and biological noise.
Clustering is one of the most important unsupervised learning problems: it deals with finding a structure in a collection of unlabeled data. However, different clustering algorithms applied to the same data set produce different solutions. In many applications the problem of multiple solutions becomes crucial, and providing a small group of good clusterings is often more desirable than a single solution. In this work we propose Least Square Consensus clustering, which allows a user to extract a small number of different clustering solutions from an initial (large) set of solutions obtained by applying any clustering algorithm to a given data set. Two different implementations are presented. In both cases, each consensus is accompanied by a measure of quality defined in terms of the Least Square error, and a graphical visualization is provided to make the result immediately interpretable. Numerical experiments are carried out on both synthetic and real data sets.
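As a schematic illustration of the least-squares selection idea described above, the sketch below builds a co-association matrix for each clustering in an ensemble and picks the solution whose matrix is closest (in squared Frobenius norm) to the ensemble mean; the function names and the toy ensemble are illustrative, not taken from the paper.

```python
import numpy as np

def coassociation(labels):
    """Binary co-association matrix: C[i, j] = 1 iff points i, j share a cluster."""
    labels = np.asarray(labels)
    return (labels[:, None] == labels[None, :]).astype(float)

def ls_consensus(all_labels):
    """Pick, from the input clusterings, the one whose co-association
    matrix minimizes the least-square error to the ensemble mean."""
    mats = [coassociation(l) for l in all_labels]
    mean = np.mean(mats, axis=0)
    errors = [np.sum((m - mean) ** 2) for m in mats]
    return int(np.argmin(errors)), errors

# toy ensemble: three clusterings of the same five points
ensemble = [[0, 0, 1, 1, 1],
            [0, 0, 1, 1, 0],
            [0, 0, 1, 1, 1]]
best, errs = ls_consensus(ensemble)  # the majority partition wins
```

The least-square error attached to each solution doubles as the quality measure: the smaller the error, the more representative the clustering is of the whole ensemble.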
A historical analysis of the computing instruments used at the then Istituto Nazionale per le Applicazioni del Calcolo before the purchase of the FINAC electronic computer.
A summary of the very scarce information available on Italian production in this sector, related to that of calculating machines, and above all on the patents filed in Italy and abroad by Italian inventors.
A mathematical model of galvanic iron corrosion is presented here. The formation of iron(III) hydroxide is considered together with the redox reaction. The PDE system, assembled on the basis of the fundamental laws of electrochemistry, is numerically solved by a locally refined finite-difference (FD) method. For verification purposes we assembled an experimental galvanic cell; in the present work we report two test cases, with acidic and neutral electrolyte solutions, in which the computed electric potential compares well with the measured one.
Iron
redox reaction
kinetics
PDE
numerical simulation
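Under strong simplifying assumptions, the electric potential in the electrolyte of a galvanic cell can be sketched as a 1D Laplace problem solved by finite differences; the electrode potentials, geometry, and Jacobi iteration below are illustrative, not the locally refined scheme of the paper.

```python
import numpy as np

# 1D Laplace equation d^2 V / dx^2 = 0 for the electric potential in the
# electrolyte, with Dirichlet data at the iron anode and at the cathode.
# Values and grid size are illustrative, not taken from the paper.
n = 51
V_anode, V_cathode = -0.44, 0.34   # illustrative electrode potentials (V)
V = np.zeros(n)
V[0], V[-1] = V_anode, V_cathode

for _ in range(5000):                   # Jacobi iteration
    V[1:-1] = 0.5 * (V[:-2] + V[2:])    # discrete Laplacian = 0

# with constant conductivity, the converged potential is linear
# between the two electrodes
```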
The aim of this work is to develop a prototype system, i.e. a software tool that helps monitor the coastline. The tool, built from a set of techniques and methods for SAR (Synthetic Aperture Radar) image segmentation, can aid in investigating the state of conservation of the coastal environment. The proposed system, called ISC (Interactive System for Coastline detection) and written in the Java language, is composed of a set of Java classes organized in packages and is based on the level set method applied to SAR images. In this method, an initial curve defined on the image evolves according to a PDE (Partial Differential Equation) model, with a velocity whose mathematical expression is related to the characteristics of the image to be segmented. The curve is deformed until it reaches a stable position at the boundary of the area to be extracted from the image. We used SAR PRI (Precision Image Resolution) images acquired during the ERS-2 (European Remote Sensing Satellite) mission.
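The level-set evolution described above can be sketched in a few lines: a signed-distance function phi is advanced by phi_t = -F |grad phi|, so its zero level set moves with speed F. The toy grid, constant speed field, and step sizes are illustrative assumptions; this is not the ISC code, where F is tied to the SAR image content.

```python
import numpy as np

def evolve(phi, F, dt=0.1, steps=50):
    """Advance the level-set function: phi_t = -F |grad phi|."""
    for _ in range(steps):
        gx, gy = np.gradient(phi)
        phi = phi - dt * F * np.hypot(gx, gy)
    return phi

# signed distance to a circle of radius 10 on a 64x64 grid
y, x = np.mgrid[0:64, 0:64]
phi0 = np.hypot(x - 32.0, y - 32.0) - 10.0

F = np.ones_like(phi0)   # constant positive speed: the front expands
phi = evolve(phi0, F)    # point (32, 44), outside at start, ends up inside
```

In the real segmentation setting F would vanish near strong image gradients, so the expanding front stops at the coastline instead of growing forever.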
With reference to a defined contribution pension scheme, this paper investigates the computation of suitable risk indicators in a fair valuation context. This subject involves theoretical issues about the choice of models for the dynamics of interest and mortality rates. The risk analysis is performed by computing the expected tail loss in a stochastic financial and demographic scenario. Numerical applications illustrate the impact of such evaluations on the reserve quantification in a Monte Carlo simulation framework.
defined contribution pension funds
fair value
expected tail loss
mathematical reserve
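A minimal sketch of the expected-tail-loss computation in a Monte Carlo framework; the loss distribution below is illustrative, standing in for the paper's stochastic financial and demographic model.

```python
import numpy as np

def expected_tail_loss(losses, alpha=0.95):
    """Average of the losses beyond the alpha-quantile (a.k.a. expected
    shortfall / CVaR at level alpha)."""
    losses = np.asarray(losses)
    var = np.quantile(losses, alpha)          # Value-at-Risk at level alpha
    return losses[losses >= var].mean()

rng = np.random.default_rng(0)
losses = rng.normal(0.0, 1.0, 100_000)        # toy simulated loss scenarios
etl = expected_tail_loss(losses, alpha=0.95)
# for a standard normal loss, the exact ES at 95% is about 2.06
```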
The aim of the paper is to deal with the solvency requirements for defined contribution pension funds. The probability of underfunding is investigated in a stochastic framework by means of the funding ratio, i.e. the ratio of the market value of the assets to the market value of the liabilities. Demographic and investment risks are modelled by means of diffusion processes. Their impact on the total riskiness of the fund is analyzed via a quantile approach.
pension fund
funding ratio
CIR model
MRGB model
quantile analysis
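A quantile analysis of the funding ratio can be sketched with toy lognormal asset and liability dynamics; all parameters are illustrative assumptions, not the CIR/MRGB diffusions used in the paper.

```python
import numpy as np

# Funding ratio F = assets / liabilities over a one-year horizon, with
# independent geometric-Brownian dynamics on both sides (toy parameters).
rng = np.random.default_rng(1)
n, T = 50_000, 1.0
zA = rng.standard_normal(n)
zL = rng.standard_normal(n)
A = 110 * np.exp((0.04 - 0.5 * 0.10**2) * T + 0.10 * np.sqrt(T) * zA)  # assets
L = 100 * np.exp((0.02 - 0.5 * 0.05**2) * T + 0.05 * np.sqrt(T) * zL)  # liabilities
F = A / L

p_under = np.mean(F < 1.0)    # probability of underfunding
q05 = np.quantile(F, 0.05)    # 5% quantile of the funding ratio
```

The 5% quantile of F answers the solvency question directly: with 95% probability the fund ends the year at least that well funded.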
The purpose of neuroimaging is to investigate brain function by localizing the regions where bioelectric current flows, starting from measurements of the magnetic field it produces outside the head. Assuming that each component of the current density vector possesses the same sparse representation with respect to a pre-assigned multiscale basis, regularization techniques are applied to the magnetic inverse problem. The resulting linear inverse problem can be solved by iterative algorithms based on gradient steps intertwined with thresholding operations enforcing joint-sparsity constraints. We present some numerical tests to show the features of the numerical algorithm, including its performance in terms of CPU usage.
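The iterative scheme of gradient steps intertwined with joint-sparsity thresholding can be sketched as follows. The joint constraint is enforced by shrinking each basis coefficient by the l2 norm taken across the current components, so all components share one support. The operator, sizes, and regularization parameter are illustrative assumptions, not the magnetoencephalography forward model.

```python
import numpy as np

def joint_soft_threshold(X, tau):
    """Row-wise l2 soft thresholding: one row = one basis coefficient
    across all (here 3) components of the current density."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)

def ista_joint(A, Y, tau=0.5, steps=500):
    """Gradient steps on ||A X - Y||^2 intertwined with joint thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant
    X = np.zeros((A.shape[1], Y.shape[1]))
    for _ in range(steps):
        X = joint_soft_threshold(X - step * A.T @ (A @ X - Y), tau * step)
    return X

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100))            # toy measurement operator
X_true = np.zeros((100, 3))
X_true[[5, 17, 42], :] = 3 * rng.standard_normal((3, 3))  # shared support
Y = A @ X_true
X_hat = ista_joint(A, Y)                      # jointly sparse reconstruction
```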
We compute the continuum thermohydrodynamical limit of a new formulation of lattice kinetic equations for thermal compressible flows, recently proposed by Sbragaglia [J. Fluid Mech. 628, 299 (2009)]. We show that the hydrodynamical manifold is given by the correct compressible Fourier-Navier-Stokes equations for a perfect fluid. We validate the numerical algorithm by means of exact results for the transition to convection in compressible Rayleigh-Bénard systems and against direct comparison with finite-difference schemes. The method is stable and reliable up to temperature jumps between top and bottom walls of the order of 50% of the average bulk temperature. We use this method to study the Rayleigh-Taylor instability for compressible stratified flows and determine the growth of the mixing layer at varying Atwood numbers up to At ≈ 0.4. We highlight the role played by the adiabatic gradient in stopping the mixing-layer growth in the presence of high stratification, and we quantify the asymmetric growth rate for spikes and bubbles in two-dimensional Rayleigh-Taylor systems with resolution up to Lx × Lz = 1664 × 4400 and with Rayleigh numbers up to Ra ≈ 2×10^10. (C) 2010 American Institute of Physics. [doi: 10.1063/1.3392774]
Lattice Boltzmann fluid-dynamics on the QPACE supercomputer
Biferale L; Mantovani F; Pivanti M; Sbragaglia M; Scagliarini A; Schifano S F; Toschi F; Tripiccione R
In this paper we present an implementation for the QPACE supercomputer of a Lattice Boltzmann model of a fluid-dynamics flow in 2 dimensions. QPACE is a massively parallel application-driven system powered by the Cell processor. We review the structure of the model, describe its implementation on QPACE in detail, and finally present performance data and preliminary physics results. (C) 2010 Published by Elsevier Ltd.
Fluid-dynamics
Lattice Boltzmann Model
CBE processor
QPACE supercomputer
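As a generic illustration of the kernel structure that such implementations parallelize, here is a minimal single-population D2Q9 lattice Boltzmann stream-and-collide step with BGK relaxation. This is a textbook sketch, not the QPACE code nor the thermal compressible model of the papers above.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and quadrature weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    """Second-order Maxwell-Boltzmann equilibrium for each population."""
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f, tau=0.6):
    """One BGK collision followed by streaming (periodic boundaries)."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f + (equilibrium(rho, ux, uy) - f) / tau        # collide
    for i in range(9):                                  # stream
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    return f

# sanity check: a uniform fluid at rest is a fixed point, mass is conserved
shape = (32, 32)
f = equilibrium(np.ones(shape), np.zeros(shape), np.zeros(shape))
mass0 = f.sum()
for _ in range(10):
    f = step(f)
```

The two phases above (a purely local collision and a nearest-neighbor streaming) are what makes the scheme attractive for massively parallel machines such as QPACE: each node updates its own tile and exchanges only boundary populations.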