The melting of glaciers driven by climate change threatens the heritage of the last European glaciation likely preserved in subglacial lakes in Greenland and Svalbard. This urgency calls for theoretical, numerical, and field studies of such fascinating objects. Along this line, we address the conjectured existence of a subglacial lake beneath the Amundsenisen Plateau in South-Spitzbergen, Svalbard, where ground-penetrating radar measurements have revealed several flat signal spots, a signature of the presence of a body of water. The full investigation, the mathematical modeling, the numerical simulation procedure, and the numerical results are presented in a trilogy of papers, of which the present one is the last. The time-dependent mathematical model underlying the numerical algorithm describes the dynamics and thermodynamics of the icefield and of the subglacial lake, with heat exchange and liquid/solid phase-change mechanisms at their interface. Critical modeling choices, and confidence in the algorithm, are supported by the numerical results of a sensitivity analysis with respect to the contribution of the ice water content, of the firn and snow layers at the top of the icefield, and to the approximation of ice sliding on the bedrock. The two previous papers deal with these issues, show successful comparison with locally measured quantities, and demonstrate numerically the likelihood of the subglacial lake. In this work we aim to give the studied case and the numerical algorithm a possible paradigmatic value. To this end, we introduce field-measurement data on the physical characteristics of the Amundsenisen Plateau that justify the adoption of significant modeling simplifications, discussed here from a physical viewpoint. Furthermore, we present the numerical algorithm and discuss several representative results of the numerical test, to illustrate the type of output the procedure yields. Such results might eventually support the decision to undertake drilling operations to trace the subglacial-water biochemicals generally present within the accreted ice above the presumed ice/water front.
Finite volume
Glen's law
Large Eddy Simulation
Phase-change
Subglacial lake
Svalbard
Temperate ice
The article outlines a brief portrait of Beppo Levi and recounts some events in which he was a protagonist before, during, and after the Fascist racial laws, when he was forced to emigrate to Argentina. The occasion is the donation to the Archive of the Istituto per le Applicazioni del Calcolo "Mauro Picone" of Beppo Levi's "Italian papers" coming from Argentina.
Beppo Levi
Mauro Picone
Eugenio Elia Levi
Fascist racial laws
Vito Volterra
In 2017 the Istituto per le Applicazioni del Calcolo "Mauro Picone" (IAC) prepares to celebrate the ninetieth anniversary of its foundation. The article gives a brief history of the IAC and recalls the successes of its past.
Neglecting the horizontal variability of the atmosphere in the forward model used to simulate limb emission radiances causes a systematic error in MIPAS retrieved profiles. The horizontal gradient model will therefore be introduced into the Optimized Retrieval Model (ORM) v8, which will be used for the final ESA reprocessing of the whole mission. With this model, several optimizations exploiting the spherical symmetry of the atmosphere can no longer be used; both the ray-tracing and the radiative-transfer integration algorithms have therefore been completely rewritten. We illustrate the choices adopted for the implementation of the horizontal gradient model and show its performance against the previous algorithm, which assumes a horizontally homogeneous atmosphere. Finally, we compare our results with those of other retrieval models that take into account the horizontal variability of the atmosphere.
In this study, non-negative matrix factorization (NMF) was hierarchically applied to simulated and in vivo three-dimensional 3 T MRSI data of the prostate to extract patterns for tumour and benign tissue and to visualize their spatial distribution. Our studies show that the hierarchical scheme provides more reliable tissue patterns than those obtained by performing only one NMF level. We compared the performance of three different NMF implementations in terms of pattern-detection accuracy and efficiency when embedded in the same kind of hierarchical scheme. The simulation and in vivo results show that the three implementations perform similarly, although one of them is more robust and better pinpoints the most aggressive tumour voxel(s) in the dataset. Furthermore, they are able to detect tumour and benign tissue patterns even in spectra with lipid artefacts.
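For readers who want to experiment with the approach, a minimal sketch of a two-level hierarchical NMF in Python follows (using scikit-learn's NMF; the synthetic data matrix, the component counts, and the voxel-assignment rule are illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic stand-in for magnitude MRSI spectra: voxels x spectral points,
# non-negative. Replace with real (magnitude) data.
rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(400, 256)))

# Level 1: separate bulk tissue signal from residual/artefact signal.
nmf1 = NMF(n_components=2, init="nndsvd", max_iter=500)
W1 = nmf1.fit_transform(X)              # voxel weights per pattern
tissue = W1[:, 0] > W1[:, 1]            # voxels dominated by pattern 0 (assumed "tissue")

# Level 2: within the tissue voxels only, split tumour vs benign patterns.
nmf2 = NMF(n_components=2, init="nndsvd", max_iter=500)
W2 = nmf2.fit_transform(X[tissue])
patterns = nmf2.components_             # two spectral source patterns
print(patterns.shape)                   # (2, 256)
```

The key point of the hierarchy is that the second factorization runs only on the voxels the first level assigns to tissue, which tends to yield cleaner tumour/benign patterns than a single flat factorization.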
The proposed algorithm is based on the block anti-triangular form of the original matrix M, introduced by the authors in [11]. Via successive orthogonal similarity transformations, this form is updated to a new form A = QMQ^T, in which the first k rows and columns of M have elements bounded by a given threshold τ, while the remaining bottom-right part of M is maintained in block anti-triangular form. The updating transformations are all orthogonal, guaranteeing the backward stability of the algorithm, and the algorithm is very economical when the near rank deficiency is detected in some of the anti-diagonal elements of the block anti-triangular form. Numerical results showing the reliability of the proposed algorithm are also given.
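For orientation, a small LaTeX illustration of an anti-triangular pattern (nonzeros on and below the anti-diagonal, shown here for n = 4; the block partitioning used in [11] imposes further structure that this sketch does not reproduce):

```latex
M \;=\;
\begin{pmatrix}
0 & 0 & 0 & m_{14}\\
0 & 0 & m_{23} & m_{24}\\
0 & m_{32} & m_{33} & m_{34}\\
m_{41} & m_{42} & m_{43} & m_{44}
\end{pmatrix},
\qquad A = Q M Q^{T}.
```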
We present an algorithm for computing a symmetric rank-revealing decomposition of a symmetric n × n matrix A, as defined in the work of Hansen & Yalamov [9]: we factorize the original matrix into a product A = QMQ^T, with Q orthogonal and M symmetric and in block form, with one of the blocks containing the dominant information of A, such as its largest eigenvalues. Moreover, the matrix M is constructed in a form that is easy to update when adding to A a symmetric rank-one matrix or when appending a row and, symmetrically, a column to A: the cost of such an updating is O(n^2) floating point operations.
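As a concrete, hedged illustration of the A = QMQ^T interface (not the authors' block anti-triangular algorithm: the sketch below uses a full eigendecomposition, which is more expensive but makes the "dominant block" idea explicit), here is a NumPy example:

```python
import numpy as np

# Build a symmetric test matrix with a clear numerical rank (illustrative data).
rng = np.random.default_rng(0)
n, r = 8, 3
B = rng.normal(size=(n, r))
A = B @ B.T + 1e-10 * rng.normal(size=(n, n))
A = (A + A.T) / 2

# One (expensive) rank-revealing factorization A = Q M Q^T: an eigendecomposition
# with eigenvalues sorted by decreasing magnitude, so the leading block of M
# carries the dominant information of A. The paper's block form achieves the
# same interface while supporting O(n^2) rank-one and row/column updates.
lam, Q = np.linalg.eigh(A)
order = np.argsort(-np.abs(lam))
lam, Q = lam[order], Q[:, order]
M = np.diag(lam)

tau = 1e-8
numerical_rank = int(np.sum(np.abs(lam) > tau))
assert np.allclose(Q @ M @ Q.T, A)
print(numerical_rank)   # 3 for this construction
```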
We consider finite difference schemes that approximate one-dimensional dissipative hyperbolic systems. Using precise analytical time-decay estimates of the local truncation error, we show that suitable modifications of standard upwind schemes yield schemes that are increasingly accurate for large times when approximating small perturbations of stable asymptotic states, respectively around stationary solutions and in the diffusion (Chapman-Enskog) limit.
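To fix ideas, here is a minimal sketch of the standard first-order upwind scheme on a toy dissipative problem, u_t + a u_x = -u with a > 0 and periodic boundary (our illustration; the paper's time-decay-based correction of the upwind flux is not reproduced here):

```python
import numpy as np

# Standard first-order upwind with an explicit source term for
#   u_t + a u_x = -u   (a > 0, periodic in x).
# Small perturbations decay toward the stable asymptotic state u = 0;
# the paper studies how to modify such schemes so that accuracy does
# not degrade for large times in this kind of regime.
a, L, N = 1.0, 1.0, 200
dx = L / N
dt = 0.9 * dx / a                              # CFL condition for upwinding
x = np.arange(N) * dx
u = 0.01 * np.exp(-200 * (x - 0.5) ** 2)       # small initial perturbation

t, T = 0.0, 5.0
while t < T:
    flux = a * (u - np.roll(u, 1)) / dx        # backward (upwind) difference, a > 0
    u = u - dt * flux - dt * u                 # transport + dissipative source
    t += dt

print(np.abs(u).max())                         # decays roughly like exp(-T)
```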
Numerical optimisation of a ship hull requires, like every shape-design optimisation problem, the definition of a parametric expression of the object to be deformed. In this phase, decisions are taken regarding the shape variability and the portion of the hull to be modified: the parameterisation of the hull is problem-dependent, with implications for the performances to be optimised (objective functions), and the right choice is not easy. In this paper, a parameterisation tool is presented that automatically selects the optimal parameter configuration, detecting at the same time the most convenient portions of the hull to be modified and their optimal shape: the final solution is directly influenced by the characteristics of the specific optimisation problem. The total number of design parameters is the only free choice in the parameterisation, while the areas on which the deformation is applied, together with all the other parameters, are selected automatically without any further action by the designer.
Sharp and local L^1 a posteriori error estimates are established for so-called "well-balanced" BV (hence possibly discontinuous) numerical approximations of 2×2 space-dependent Jin-Xin relaxation systems under the sub-characteristic condition. According to the strength of the relaxation process, one can distinguish between two complementary regimes: (1) weak relaxation, where local L^1 errors are shown to be of first order in Δx and uniform in time; (2) strong relaxation, where numerical solutions are kept close to entropy solutions of the reduced scalar conservation law, and for which Kuznetsov's theory indicates an L^1 error behaving like t·√Δx. The uniform first-order accuracy in the weak relaxation regime is obtained by carefully studying interaction patterns and building up a seemingly original variant of the Bressan-Liu-Yang functional, able to handle BV solutions of arbitrary size for these particular inhomogeneous systems. The complementary estimate in the strong relaxation regime is proven by means of a suitable extension of methods based on entropy dissipation for space-dependent problems. Preliminary numerical illustrations are provided.
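Schematically, and in the standard Jin-Xin notation (our rendering; the paper handles space-dependent coefficients and BV data of arbitrary size), the system and the two error regimes read:

```latex
\begin{aligned}
&\partial_t u + \partial_x v = 0, \qquad
\partial_t v + a^2\,\partial_x u = \tfrac{1}{\varepsilon}\bigl(f(u) - v\bigr),
\qquad a \ge \sup_u |f'(u)| \ \ \text{(sub-characteristic condition)},\\[2pt]
&\text{weak relaxation:}\quad
\|u_{\Delta}(t,\cdot) - u(t,\cdot)\|_{L^1} \le C\,\Delta x
\quad \text{uniformly in } t,\\
&\text{strong relaxation:}\quad
\|u_{\Delta}(t,\cdot) - u(t,\cdot)\|_{L^1} \le C\,t\,\sqrt{\Delta x}
\quad \text{(Kuznetsov-type estimate).}
\end{aligned}
```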
International initiatives such as The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) are collecting multiple datasets at different genome scales with the aim of identifying novel cancer biomarkers and predicting patient survival. To analyze such data, several statistical methods have been applied, among them Cox regression models. Although these models provide a good statistical framework for analyzing omic data, there is still a lack of studies illustrating the advantages and drawbacks of integrating biological information and selecting groups of biomarkers. In fact, classical Cox regression algorithms focus on the selection of a single biomarker, without taking into account the strong correlation between genes. Although network-based Cox regression algorithms overcome these drawbacks, they are less widely used within the life science community. In this article, we aim to provide a clear methodological framework for the use of such approaches in order to turn cancer research results into clinical applications. We first discuss the rationale and the practical usage of three recently proposed network-based Cox regression algorithms (i.e., Net-Cox, AdaLnet, and fastcox). Then, we show how to combine existing biological knowledge and available data with such algorithms to identify networks of cancer biomarkers and to estimate the survival of patients. Finally, we describe in detail a new permutation-based approach to better validate the significance of the selection in terms of cancer gene signatures and pathway/network identification. We illustrate the proposed methodology by means of both simulations and real case studies. Overall, the aim of our work is twofold: first, to show how network-based Cox regression models can be used to integrate biological knowledge (e.g., multi-omics data) for the analysis of survival data; second, to provide a clear methodological and computational approach for investigating cancer regulatory networks.
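The three algorithms discussed (Net-Cox, AdaLnet, fastcox) are R packages; as a language-agnostic illustration of the permutation-based validation idea, here is a hedged Python sketch built on a penalized Cox model from the lifelines library (the column names, penalty value, selection threshold, and synthetic data are illustrative assumptions, not the paper's pipeline):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n, p = 200, 20
X = pd.DataFrame(rng.normal(size=(n, p)),
                 columns=[f"gene_{j}" for j in range(p)])
time = rng.exponential(scale=np.exp(-0.5 * X["gene_0"]))   # gene_0 is prognostic
event = rng.integers(0, 2, size=n)                          # toy censoring indicator
df = X.assign(time=time, event=event)

def selected(frame, threshold=0.1, penalizer=0.5):
    """Fit a penalized Cox model and return the 'selected' covariates."""
    cph = CoxPHFitter(penalizer=penalizer)
    cph.fit(frame, duration_col="time", event_col="event")
    return set(cph.params_.index[np.abs(cph.params_) > threshold])

observed = selected(df)

# Permutation null: break the covariate/outcome link by shuffling the
# (time, event) pairs jointly, then count how often each gene is selected.
null_counts = {c: 0 for c in X.columns}
n_perm = 100
for _ in range(n_perm):
    perm = rng.permutation(n)
    df_perm = X.assign(time=time[perm], event=event[perm])
    for c in selected(df_perm):
        null_counts[c] += 1

for c in sorted(observed):
    print(c, "empirical null selection rate:", null_counts[c] / n_perm)
```

A gene whose observed selection is rare under the permutation null (low empirical rate) is the kind of candidate the paper's validation step aims to retain.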
cancer
Cox model
high-dimensionality
gene expression
network
regularization
survival
Background: The DNA base composition is well known to be highly variable among organisms. Biophysical studies on the effect of GC increments on DNA structure have shown that GC-richer DNA sequences are more bendable. This result was the keystone of the hypothesis proposing the metabolic rate as the major force driving GC-content variability, since increased resistance to torsional stress is mainly required during transcription to avoid DNA breakage. Hence, the aim of the present work is to test whether salinity and migration, both suggested to affect the metabolic rate of teleostean fishes, affect the average genomic GC content as well. Moreover, since the gill surface has been reported to be a major morphological expression of metabolic rate, this parameter was also analyzed in the light of the above hypothesis.
Results: Teleosts living in different environments (freshwater and seawater) and with different lifestyles (migratory and non-migratory) were analyzed with respect to three variables: routine metabolic rate, gill area, and genomic GC content, none of which shows a phylogenetic signal among fish species. Routine metabolic rate, specific gill area, and average genomic GC content were higher in seawater than in freshwater species. The same trend was observed comparing migratory versus non-migratory species. Crossing salinity and lifestyle, the active migratory species living in seawater jointly show the highest routine metabolic rate, the highest specific gill area, and the highest average genomic GC content.
We present the advancements and novelties recently introduced in RNASeqGUI, a graphical user interface that helps biologists handle and analyse the large datasets collected in RNA-Seq experiments. This work focuses on the concept of reproducible research and shows how it has been incorporated in RNASeqGUI to provide reproducible (computational) results. The novel version of RNASeqGUI combines graphical interfaces with tools for reproducible research, such as literate statistical programming, human-readable reports, parallel execution, caching, and interactive and web-explorable tables of results. These features allow the user to analyse big datasets in a fast, efficient, and reproducible way. Moreover, this paper represents a proof of concept, showing a simple way to develop computational tools for Life Science in the spirit of reproducible research.
Cell migration and chemotaxis assays have classically been performed in so-called Boyden chambers. A recent technology, xCELLigence Real Time Cell Analysis, now allows monitoring cell migration in real time. This technology measures the impedance changes caused by the gradual increase of electrode-surface occupation by cells over time and provides a Cell Index that is proportional to cellular morphology, spreading, ruffling, and adhesion quality, as well as to cell number. In this paper we propose a macroscopic mathematical model, based on advection-reaction-diffusion partial differential equations, describing the cell migration assay using the real-time technology. We carried out numerical simulations to compare the simulated model dynamics with data from biological experiments on three different cell lines and in two experimental settings: absence of chemotactic signals (basal migration) and presence of a chemoattractant. Overall, we conclude that our minimal mathematical model is able to describe the phenomenon on the real time scale, and the numerical results show good agreement with the experimental evidence.
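As a minimal numerical companion, here is a generic 1D advection-reaction-diffusion sketch with assumed coefficients (an illustration of the model class, not the paper's calibrated model of the Cell Index):

```python
import numpy as np

# Generic 1D model of a migrating cell density u(x, t) with logistic growth
# and drift up a fixed chemoattractant gradient c(x):
#   u_t = D u_xx - chi * (u c_x)_x + r u (1 - u/K)
# All coefficients below are illustrative, not fitted to xCELLigence data.
D, chi, r, K = 1e-3, 5e-3, 0.1, 1.0
L, N, dt, T = 1.0, 100, 0.01, 50.0
dx = L / N
x = np.linspace(0.0, L, N)
c = x.copy()                      # linear chemoattractant profile, c_x = 1
u = np.where(x < 0.2, 0.5, 0.0)   # cells initially seeded on the left

cx = np.gradient(c, dx)
for _ in range(int(T / dt)):
    uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2   # diffusion term
    adv = np.gradient(u * cx, dx)                            # (u c_x)_x, central diff.
    u = u + dt * (D * uxx - chi * adv + r * u * (1 - u / K)) # explicit Euler step
    u[0], u[-1] = u[1], u[-2]                                # no-flux boundaries

print(u.max(), u.argmax() * dx)   # the front grows and moves toward the attractant
```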
An atlas of gene expression and gene co-regulation in the human retina
Pinelli Michele
;
Carissimo Annamaria
;
Cutillo Luisa
;
Lai Ching Hung
;
Mutarelli Margherita
;
Moretti Maria Nicoletta
;
Singh Marwah Veer
;
Karali Marianthi
;
Carrella Diego
;
Pizzo Mariateresa
;
Russo Francesco
;
Ferrari Stefano
;
Ponzin Diego
;
Angelini Claudia
;
Banfi Sandro
;
Di Bernardo Diego
The human retina is a specialized tissue involved in light stimulus transduction. Despite its unique biology, an accurate reference transcriptome is still missing. Here, we performed gene expression analysis (RNA-seq) of 50 retinal samples from non-visually impaired post-mortem donors. We identified novel transcripts with high confidence (Observed Transcriptome, ObsT) and quantified the expression level of known transcripts (Reference Transcriptome, RefT). The ObsT included 77 623 transcripts (23 960 genes) covering 137 Mb (35 Mb of newly transcribed genome). Most of the transcripts (92%) were multi-exonic: 81% with known isoforms, 16% with new isoforms, and 3% belonging to new genes. The RefT included 13 792 genes across 94 521 known transcripts. Mitochondrial genes were among the most highly expressed, accounting for about 10% of the reads. Of all the protein-coding genes in Gencode, 65% are expressed in the retina. We exploited inter-individual variability in gene expression to infer a gene co-expression network and to identify genes specifically expressed in photoreceptor cells. We experimentally validated the photoreceptor localization in the human retina of three genes that had not been previously reported. RNA-seq data and the gene co-expression network are available online (http://retina.tigem.it).
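A hedged sketch of the basic co-expression step (pairwise correlation across individuals followed by thresholding; the synthetic data, correlation measure, and cutoff are illustrative assumptions, not the pipeline behind http://retina.tigem.it):

```python
import numpy as np
import pandas as pd

# Stand-in for the real genes x samples matrix (50 donors in the paper);
# replace with the actual expression table. Random data will yield few or
# no edges at this cutoff, which is the expected null behaviour.
rng = np.random.default_rng(0)
expr = pd.DataFrame(rng.normal(size=(300, 50)),
                    index=[f"gene_{k}" for k in range(300)])

corr = np.corrcoef(expr.values)            # Pearson correlation, genes x genes
genes = expr.index.to_numpy()
i, j = np.triu_indices(len(genes), k=1)    # upper triangle: each pair once
keep = np.abs(corr[i, j]) >= 0.8           # illustrative co-expression cutoff
edges = pd.DataFrame({"gene_a": genes[i[keep]],
                      "gene_b": genes[j[keep]],
                      "r": corr[i, j][keep]})
print(len(edges), "co-expression edges")
```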
RNA-seq
gene co-regulation
Gene Network
Web tools
pipeline
Reproducible (computational) research is crucial to produce transparent and high-quality scientific papers. First, we illustrate the benefits that the scientific community can receive from the adoption of Reproducible Research standards in the analysis of high-throughput omic data. Then, we describe several tools that help researchers increase the reproducibility of their work. Moreover, we discuss the advantages and limits of reproducible research and how the latter could be addressed and overcome. Overall, this paper should be considered a proof of concept on how, and with which characteristics, a study should in our opinion be conducted in the spirit of Reproducible Research. The scope of this paper is therefore twofold: the first goal is to present and discuss some easy-to-use instruments with which data analysts can promote reproducible research in their analyses; the second is to encourage developers to incorporate automatic reproducibility features in their tools.
Free-living bacteria grown under aerobic conditions were used to investigate, by next-generation RNA sequencing, the transcriptional profiles of Sinorhizobium meliloti wild-type 1021 and of its derivative RD64, which overproduces the main auxin, indole-3-acetic acid (IAA). Among the genes upregulated in RD64 cells, we detected the main nitrogen-fixation regulator fixJ, the two intermediate regulators fixK and nifA, and several other genes known to be FixJ targets. The gene coding for the sigma factor RpoH1 and other stress-response genes, regulated in an RpoH1-dependent manner in S. meliloti, were also induced in RD64 cells. Under microaerobic conditions, quantitative real-time polymerase chain reaction analysis revealed that the genes fixJL and nifA were upregulated in RD64 cells compared with 1021 cells. This work provides evidence that the overproduction of IAA in S. meliloti free-living cells induces many of the transcriptional changes that normally occur in nitrogen-fixing root nodules.
In this talk we first revisit the concept of gene-environment interaction in the light of emerging scientific results and of the use of modern high-throughput technologies, such as present-day sequencers, and we illustrate the importance of estimating this interaction for the understanding of complex human diseases. Then, we provide an overview of the methods available to process NGS data, with particular emphasis on the detection of genomic variants and on the analysis of the epigenomic and transcriptomic data produced by modern sequencers. Finally, we discuss how multi-omic data can be used to improve our understanding of complex diseases and to provide novel research perspectives.
NGS
GWAS
multi-omic data analysis
gene-environment interaction
Mass spectrometry is a set of technologies with many applications in the characterization of biological samples. Due to the huge quantity of data, often biased and contaminated by different sources of error, and to the variety of results that can be extracted, an easy-to-learn and complete workflow is essential. GeenaR is a robust web tool for pre-processing, analysing, visualizing, and comparing sets of MALDI-ToF mass spectra. It combines the PHP, Perl, and R languages and allows different levels of control over the parameters, so that the analysis can be adapted to the needs and expertise of the users.
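For a flavour of what such a pre-processing stage does, a hedged NumPy/SciPy sketch on a synthetic spectrum follows (the window size, normalization, and prominence threshold are illustrative; GeenaR itself is implemented in PHP, Perl, and R):

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic one-spectrum stand-in: a Gaussian peak on a sloping baseline plus noise.
mz = np.linspace(1000, 10000, 5000)
rng = np.random.default_rng(1)
intensity = (100 * np.exp(-(mz - 4000) ** 2 / 2e4)   # analyte peak near m/z 4000
             + 5 * np.exp(-mz / 8000)                # decaying chemical baseline
             + rng.normal(0, 0.5, mz.size))          # detector noise

# Crude baseline estimate (moving-window minimum), subtraction, and
# total-ion-current normalization.
win = 100
baseline = np.array([intensity[max(0, i - win):i + win].min()
                     for i in range(mz.size)])
signal = np.clip(intensity - baseline, 0.0, None)
signal /= signal.sum()

peaks, _ = find_peaks(signal, prominence=0.05 * signal.max())
print(mz[peaks])   # detected peak positions (should include ~4000 m/z)
```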
Mass Spectrometry
Proteomics
Statistical Analysis
Web tool