List of publications

4,722 results found

2015 Abstract in conference proceedings metadata-only access

Numerical solution of moving boundary problems in glacier flow

Besides the geographical and physical characteristics of the environment, it is mostly temperature changes that drive the dynamical evolution of glaciers, with subglacial and supraglacial water release or the approach of a metastable state. The appearance of subglacial lakes filling bedrock depressions, glacier sliding, crevasse formation and calving are linked, climate-change-sensitive macro-phenomena in which interactions between the interfacing phases are crucial. We shall discuss the mathematical modelling and the numerical simulation of one of the above glacier problems with moving boundary.
References:
A. Di Mascio, R. Broglia and R. Muscari, "On the application of the single-phase level set method to naval hydrodynamic flows", Computers & Fluids, Vol. 36, 2007, pp. 868-863.
D. Mansutti, E. Bucchignani, J. Otero and P. Glowacki, "Modelling and numerical sensitivity study on the conjecture of a subglacial lake at Amundsenisen, Svalbard", Applied Mathematical Modelling, http://dx.doi.org/10.1016/j.apm.2014.12.043 (in press), 2014.

multiphase flow; level-set; front-tracking; glacier flow; numerical simulation
2015 Other metadata-only access

FINAC60

On 14 December 1955, at the CNR headquarters, the President of the Italian Republic, Giovanni Gronchi, inaugurated the Ferranti Mark1* electronic computer of the Istituto Nazionale per le Applicazioni del Calcolo, in the presence of the Institute's founder and director, the mathematician Mauro Picone. From the manufacturer's name and the Institute's acronym, the machine was named FINAC. It was the second electronic computer installed in Italy, preceded by a few months by the CRC-102A of the Politecnico di Milano. The purchase was the result of Picone's efforts to equip his institute with one of the 'powerful electronic calculating machines', at the time built only in Britain and the United States. In the preceding years, Picone had repeatedly come within a step of realizing his intent to build what would have been the first Italian computer. He had conceived this plan while travelling in the USA, where numerical analysis was advancing enormously thanks to projects on 'high-speed digital calculating machines'. Since a series of obstacles, both international and domestic, risked stretching the timeline excessively, Picone chose to buy a commercially available machine, which became the FINAC and for several years remained the most powerful computer in Italy. It was used for research on the most varied topics, from the econometric model of the Banca d'Italia to calculations for the design of bridges and dams. One piece of work particularly significant for the development of Italian computer science was the realization of a simulator of the CEP, the future Calcolatrice Elettronica Pisana.
With this anniversary we wish to celebrate, among other things, Mauro Picone's intuition, his innovative capacity and his choice to invest generously in the future, giving great impulse to the solution of real problems through mathematical modelling and creating in Italy too the preconditions for the development of modern applied mathematics and computer science.

electronic computers; computer science; applied mathematics
2015 Abstract in conference proceedings metadata-only access

Cripto is essential to capture mouse EpiSC and human ESC pluripotency

A Fiorenzano ; E Pascale ; C D'Aniello ; F Russo ; M Biffoni ; F Francescangeli ; A Zeuner ; C Angelini ; EJ Patriarca ; C Chazaud ; A Fico ; G Minchiotti
stem cells; Cripto
2015 Journal article metadata-only access

Short interspersed DNA elements and miRNAs: a novel hidden gene regulation layer in zebrafish?

Scarpato M ; Angelini C ; Cocca E ; Pallotta MM ; Morescalchi MA ; Capriglione T

In this study, we investigated by in silico analysis the possible correlation between microRNAs (miRNAs) and Anamnia V-SINEs (a superfamily of short interspersed nuclear elements), which belong to those retroposon families that have been preserved in vertebrate genomes for millions of years and are actively transcribed because they are embedded in the 3′ untranslated region (UTR) of several genes. We report the results of the analysis of the genomic distribution of these mobile elements in zebrafish (Danio rerio) and discuss their involvement in generating miRNA gene loci. The computational study showed that the genes predicted to bear V-SINEs can be targeted by miRNAs with a very high hybridization E-value. Gene ontology analysis indicates that these genes are mainly involved in metabolic, membrane, and cytoplasmic signaling pathways. Nearly all the miRNAs that were predicted to target the V-SINEs of these genes, i.e., miR-338, miR-9, miR-181, miR-724, miR-735, and miR-204, have been validated in similar regulatory roles in mammals. The large number of genes bearing a V-SINE involved in metabolic and cellular processes suggests that V-SINEs may play a role in modulating cell responses to different stimuli and in preserving the metabolic balance during cell proliferation and differentiation. Although they need experimental validation, these preliminary results suggest that in the genome of D. rerio, as in other TE families in vertebrates, the preservation of V-SINE retroposons may also have been favored by their putative role in gene network modulation.

3′ UTR; miRNA; retrotransposons; SINEs
2015 Journal article metadata-only access

Is this the right normalization? A diagnostic tool for ChIP-seq normalization

Angelini C ; Heller R ; Volkinshtein R ; Yekutieli D

Background: ChIP-seq experiments are becoming a standard approach for genome-wide profiling of protein-DNA interactions, such as detecting transcription factor binding sites, histone modification marks and RNA Polymerase II occupancy. However, when comparing a ChIP sample against a control sample, such as Input DNA, normalization procedures have to be applied in order to remove experimental sources of bias. Despite the substantial impact that the choice of the normalization method can have on the results of a ChIP-seq data analysis, the assessment of these methods is not fully explored in the literature. In particular, there are no diagnostic tools that show whether the applied normalization is indeed appropriate for the data being analyzed. Results: In this work we propose a novel diagnostic tool to examine the appropriateness of the estimated normalization procedure. By plotting the empirical densities of log relative risks in bins of equal read count, along with the estimated normalization constant after logarithmic transformation, the researcher is able to assess the appropriateness of the estimated normalization constant. We use the diagnostic plot to evaluate the appropriateness of the estimates obtained by CisGenome, NCIS and CCAT on several real data examples. Moreover, we show the impact that the choice of the normalization constant can have on standard peak-calling tools such as MACS or SICER. Finally, we propose a novel procedure for controlling the FDR using sample swapping. This procedure makes use of the estimated normalization constant in order to gain power over the naive choice of constant (used in MACS and SICER), which is the ratio of the total number of reads in the ChIP and Input samples. Conclusions: Linear normalization approaches aim to estimate a scale factor, r, to adjust for different sequencing depths when comparing ChIP versus Input samples. The estimated scaling factor can easily be incorporated in many peak-calling algorithms to improve the accuracy of peak identification. The diagnostic plot proposed in this paper can be used to assess how adequate ChIP/Input normalization constants are, and thus allows the user to choose the most adequate estimate for the analysis.

ChIP-seq; diagnostic plots; normalization
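The diagnostic idea described in this abstract can be sketched numerically: compare the per-bin log relative risks (ChIP versus Input) with the logarithm of the naive scale factor used by MACS and SICER. The following Python sketch is illustrative only, not the paper's implementation; the toy counts and the pseudocount are invented:

```python
import numpy as np

def naive_scale_factor(chip, inp):
    """Naive normalization constant: ratio of total read counts
    (the default used by peak callers such as MACS and SICER)."""
    return chip.sum() / inp.sum()

def log_relative_risk(chip, inp, pseudocount=1.0):
    """Per-bin log relative risk log(ChIP/Input); the pseudocount
    avoids division by zero in empty bins."""
    return np.log((chip + pseudocount) / (inp + pseudocount))

# Toy data: mostly background bins plus a few enriched bins, which
# inflate the naive estimate relative to the background level.
rng = np.random.default_rng(0)
inp = rng.poisson(10, size=1000).astype(float)
chip = rng.poisson(10, size=1000).astype(float)
chip[:50] += rng.poisson(40, size=50)

r = naive_scale_factor(chip, inp)
lrr = log_relative_risk(chip, inp)
# In background bins the density of log relative risks should centre on
# log(r); a systematic offset flags an inappropriate constant.
print(r, float(np.median(lrr)))
```

Plotting the densities of `lrr` within read-count bins against `log(r)` is the essence of the diagnostic plot the paper proposes.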
2015 Journal article metadata-only access

Applications of network-based survival analysis methods for pathway detection in cancer

A Iuliano ; A Occhipinti ; C Angelini ; I De Feis ; P Liò

Gene expression data from high-throughput assays, such as microarrays, are often used to predict cancer survival. Available datasets consist of a small number of samples (n patients) and a large number of genes (p predictors); the main challenge is therefore to cope with high dimensionality. Moreover, genes are co-regulated and their expression levels are expected to be highly correlated. In order to face these two issues, network-based approaches can be applied. In our analysis, we compared the most recent network-penalized Cox models for high-dimensional survival data, aimed at determining pathway structures and biomarkers involved in cancer progression. Using these network-based models, we show how to obtain a deeper understanding of gene-regulatory networks and investigate the gene signatures related to prognosis and survival in different types of tumors. Comparisons are carried out on three different real cancer datasets.

survival analysis; microarray; cancer
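The network penalty underlying such models can be illustrated in a few lines: network-penalized Cox approaches add a graph-Laplacian term beta' L beta to the partial likelihood, pushing genes that are linked in the network toward similar coefficients. A hedged sketch of just the penalty term (the two-gene adjacency matrix is invented for illustration):

```python
import numpy as np

def laplacian(adj):
    """Graph Laplacian L = D - A of a gene-gene adjacency matrix."""
    return np.diag(adj.sum(axis=1)) - adj

def network_penalty(beta, L):
    """Penalty beta' L beta = sum over edges of (beta_i - beta_j)^2:
    zero when connected genes share the same coefficient."""
    return float(beta @ L @ beta)

adj = np.array([[0.0, 1.0], [1.0, 0.0]])  # two co-regulated genes
L = laplacian(adj)
print(network_penalty(np.array([1.0, 1.0]), L))   # 0.0 -> identical coefficients
print(network_penalty(np.array([1.0, -1.0]), L))  # 4.0 -> opposite signs penalized
```

In the full models this term is added (with a tuning weight) to the negative Cox partial log-likelihood before optimization.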
2015 Journal article metadata-only access

ZFP57 recognizes multiple and closely spaced sequence motif variants to maintain repressive epigenetic marks in mouse embryonic stem cells.

Anvar Z ; Cammisa M ; Riso V ; Baglivo I ; Kukreja H ; Sparago A ; Girardot M ; Lad S ; De Feis I ; Cerrato F ; Angelini C ; Feil R ; Pedone PV ; Grimaldi G ; Riccio A

Imprinting Control Regions (ICRs) need to maintain their parental allele-specific DNA methylation during early embryogenesis despite genome-wide demethylation and subsequent de novo methylation. ZFP57 and KAP1 are both required for maintaining the repressive DNA methylation and H3-lysine-9-trimethylation (H3K9me3) at ICRs. In vitro, ZFP57 binds a specific hexanucleotide motif that is enriched at its genomic binding sites. We now demonstrate in mouse embryonic stem cells (ESCs) that SNPs disrupting closely-spaced hexanucleotide motifs are associated with lack of ZFP57 binding and H3K9me3 enrichment. Through a transgenic approach in mouse ESCs, we further demonstrate that an ICR fragment containing three ZFP57 motif sequences recapitulates the original methylated or unmethylated status when integrated into the genome at an ectopic position. Mutation of Zfp57 or the hexanucleotide motifs led to loss of ZFP57 binding and DNA methylation of the transgene. Finally, we identified a sequence variant of the hexanucleotide motif that interacts with ZFP57 both in vivo and in vitro. The presence of multiple and closely located copies of ZFP57 motif variants emerges as a distinct characteristic that is required for the faithful maintenance of repressive epigenetic marks at ICRs and other ZFP57 binding sites.

imprinting; ChIP-seq
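The motif analysis described in this abstract lends itself to a compact sketch: scan a sequence for hexanucleotide motif occurrences on both strands and count closely spaced copies. In the Python sketch below, the motif TGCCGC and the 30-bp spacing threshold are illustrative assumptions, not values taken from the paper:

```python
import re

def motif_hits(seq, motif="TGCCGC"):
    """Start positions of a hexanucleotide motif on either strand.
    TGCCGC is used here purely as an illustrative ZFP57-type motif."""
    comp = str.maketrans("ACGT", "TGCA")
    rc = motif.translate(comp)[::-1]          # reverse complement
    return sorted(m.start() for p in (motif, rc) for m in re.finditer(p, seq))

def closely_spaced(hits, max_gap=30):
    """Count consecutive hit pairs within max_gap bp: multiple close
    copies are the feature highlighted by the study."""
    return sum(1 for a, b in zip(hits, hits[1:]) if b - a <= max_gap)

seq = "AAATGCCGCAAAATGCCGCAAA"
h = motif_hits(seq)
print(h, closely_spaced(h))  # [3, 13] 1
```

A real analysis would of course score motif variants and integrate ChIP-seq enrichment, but the clustering logic is the same.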
2015 Contribution in volume (Chapter or Essay) metadata-only access

A walking tour in Reproducible Research and Big Data Management with RNASeqGUI and R.

F Russo ; D Righelli ; C Angelini

In this paper, we discuss the concept of Reproducible Research and its importance for producing transparent and high-quality scientific papers. In particular, we illustrate the advantages that both authors and readers can gain from the adoption of Reproducible Research, and we discuss a strategy to develop computational tools supporting such a feature. We present a novel version of RNASeqGUI, a user-friendly computational tool capable of handling and analysing RNA-Seq data. This tool exploits Reproducible Research features to produce RNA-Seq analyses that are easy to read, inspect, understand, study, reproduce and modify. Overall, this paper is a proof of concept of how it is possible to develop complex and interactive tools in the spirit of Reproducible Research.

RNA-Seq; reproducible research; R
2015 Journal article metadata-only access

Turning ability analysis of a fully appended twin screw vessel by CFD. Part I: Single rudder configuration

The turning circle manoeuvre of a naval supply vessel (characterized by a block coefficient CB ≈ 0.60) is simulated by the integration of the unsteady Reynolds-Averaged Navier-Stokes equations coupled with the equations of rigid body motion with six degrees of freedom. The model is equipped with all the appendages, and it is characterised by an unusual single rudder/twin screws configuration. This arrangement causes poor directional stability qualities, which makes the prediction of the trajectory a challenging problem. As already shown in previous works, the treatment of the in-plane loads exerted by the propellers is of paramount importance; to this aim each propeller is simulated by an actuator disk model, properly modified to account for oblique flow effects. The main goal of the present paper is to assess the capability of the CFD tool to accurately predict the trajectory of the ship and to analyse the complex flow field around a vessel performing a turning manoeuvre. Distributions of forces and moments on the main hull, stern appendages and rudder are analysed in order to gain a deeper insight into the dynamic behaviour of the vessel. Validation is provided by the comparison with experimental data from free running tests.

appendage effects; computational methods; manoeuvring hydro-loads; twin-screw ship
2015 Contribution in conference proceedings metadata-only access

CFD analysis of propeller-rudder interaction

The interaction of the vortex systems detached from a propeller with a rudder installed in its wake is investigated by CFD. The correct prediction of this phenomenon is of great interest in naval hydrodynamics research, as it is a source of radiated noise and vibratory loads. The phenomenology is addressed by simulating a single-bladed propeller (INSEAN E779A) and a rudder characterized by a rectangular plane area and symmetric sectional shape (NACA0020 profiles). The main focus is on the hydro-loads developed by the rudder and their correlation with the different phases of the interaction of the tip vortex with the rudder. The phenomenon is also investigated, through a preliminary computation on a coarser mesh, for the actual propeller geometry (4-bladed).

computational fluid dynamics; propeller-rudder interaction; rudder loads; vortex-body collision
2015 Contribution in conference proceedings metadata-only access

Vortex-Sound Generation and Thrust Unsteadiness in Aft-Finocyl Solid Rocket Motor

Di Mascio A ; Cavallini E ; Favini B ; Neri A

The generation of complex vorticity patterns in aft-finocyl solid rocket motors is investigated in this paper by means of full-3D ILES CFD simulations, with a high-order/low-dissipation class of centered numerical schemes with oscillation control and an immersed-boundary treatment of the propellant grain surface, handled with a level-set approach. The development of vortical/shear structures is observed both at the motor axis, immediately downstream of the igniter and across the finocyl region, and in the submergence region. The former are a notable finding characterizing the flowfield structures that develop inside the combustion chamber of an aft-finocyl geometry, and find confirmation both in small-scale cold-flow tests and in theoretical justifications from fundamental works. The latter are more classical vortical structures belonging to the class of angle shear layers, due to the turning flow characteristic of solid rocket motors. These vortical structures are found to induce very low-level, but measurable, pressure oscillations due to the coupling of the vorticity pattern with pressure waves. These pressure oscillations result in oscillations of the thrust delivered by the SRM, involving both a longitudinal chamber mode excitation (corresponding to the first chamber longitudinal mode) and a lateral chamber mode excitation. The level of such thrust oscillations is of the order of one percent of the delivered motor thrust, with the uncertainty assessed by both grid convergence analyses and the sub-grid model of the ILES approach. These flowfield characteristics depend only weakly on the motor configuration, in particular on the gimbal angle imposed on the nozzle and on a bias offset of the propellant grain with respect to the motor assembly.

compressible flow; solid rocket motor; pressure oscillations
2015 Poster in conference proceedings metadata-only access

A hierarchical Krylov-Bayes iterative inverse solver for MEG with physiological preconditioning

Calvetti D ; Pascarella A ; Pitolli F ; Somersalo E ; Vantaggi B

Magnetoencephalography (MEG) is a non-invasive functional imaging modality for mapping cerebral electromagnetic activity from measurements of the weak magnetic field that it generates. It is well known that the MEG inverse problem, i.e. the problem of identifying electric currents from the induced magnetic fields, is severely underdetermined and, without complementary prior information, no unique solution can be found. Many regularization techniques have been proposed in the literature. In particular, optimization-based methods usually explain the data by superficial sources even when the activity is deep in the brain. A way to ease the identification of deep focal sources is the use of depth weighting. We revisit the MEG inverse problem, regularization and depth weighting from a Bayesian point of view using hierarchical models: the primary unknown is the discretized current density inside the head, and we postulate a conditionally Gaussian anatomical prior model. In this model, each current element, or dipole, has a preferred, albeit not fixed, direction that is extracted from the anatomical data of the subject. The variance of each dipole is not fixed a priori, but is itself modeled as a random variable described by its hyperprior density. The hypermodel is then used to build a fast iterative algorithm with the novel feature that its parameters are determined using an empirical Bayes approach. The hypermodel provides a very natural Bayesian interpretation of sensitivity weighting, and the parameters in the hyperprior provide a tool for controlling the focality of the solution, thus leading to a flexible algorithm that can handle both sparse and distributed sources. To demonstrate the effects of different parameter selections under optimal conditions, we test the algorithm on synthetic but realistic data.
The tests show that the hierarchical Bayesian models combined with linear algebraic methods provide a versatile framework to develop robust and flexible numerical methods, and are able to overcome some of the limitations of standard regularization techniques, for instance deep source localization. The proposed algorithm is computationally efficient, gives a direct control of how well the computed estimates satisfy the data and is designed to easily accommodate different types of prior information.

MEG; inverse problem; Bayesian statistics
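Depth weighting, mentioned in this abstract as the classical remedy for the superficial bias, can be sketched as a reweighted minimum-norm estimate: lead-field columns are rescaled so that deep sources with weak sensitivity are not systematically penalized. The Python sketch below is a generic illustration, unrelated to the paper's hierarchical algorithm; the exponent gamma, the regularization parameter and the random forward model are all invented:

```python
import numpy as np

def depth_weighted_mne(G, b, lam=1e-2, gamma=0.5):
    """Depth-weighted minimum-norm estimate: columns of the lead field G
    are rescaled by ||g_i||^(-gamma) so deep (weakly sensed) sources are
    not penalized for their small gain. gamma and lam are illustrative."""
    w = np.linalg.norm(G, axis=0) ** (-gamma)         # per-source weights
    Gw = G * w                                        # column rescaling
    x_w = Gw.T @ np.linalg.solve(Gw @ Gw.T + lam * np.eye(G.shape[0]), b)
    return w * x_w                                    # back to original scaling

rng = np.random.default_rng(1)
G = rng.standard_normal((8, 20))
G[:, 10] *= 0.2                    # a "deep" source with weak sensitivity
x_true = np.zeros(20); x_true[10] = 1.0
b = G @ x_true
x_hat = depth_weighted_mne(G, b)
print(x_hat.shape)  # (20,)
```

The hierarchical Bayes approach of the paper recovers this kind of weighting automatically through the hyperprior on the dipole variances.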
2015 Poster in conference proceedings metadata-only access

Source modelling of ElectroCorticoGraphy (ECoG) data: stability analysis and spatial filtering

Pascarella A ; Todaro C ; Clerc M ; Serre T ; Piana M

Electrocorticography (ECoG) is a neurophysiological modality that measures the distribution of electrical potentials, associated with either spontaneous or evoked neural activity, by means of electrode grids implanted close to the cortical surface. A full interpretation of ECoG data, however, requires solving the ill-posed inverse problem of reconstructing the spatio-temporal distribution of the neural currents responsible for the recorded signals. Only in the last few years have methods been proposed to solve this inverse problem [1]. This study addresses ECoG source modelling using a beamformer method. First, we compute the lead-field matrix, which maps the neural currents onto the sensor space: a novel routine for the computation of the lead-field matrix, based on the tools provided by the OpenMEEG framework, was used [2]. The ECoG source-modeling problem requires inverting this matrix by means of a regularization method that reduces its intrinsic numerical instability; we therefore analyse the condition number of the lead-field matrix, which provides quantitative information on the numerical instability of the problem, independently of the kind of inversion algorithm applied. Finally, we provide quantitative results for source modeling using a Linearly Constrained Minimum Variance (LCMV) beamformer. The effectiveness of beamforming in ECoG is validated both with synthetic data and with experimental data recorded during a rapid visual categorization task.

electrocorticography; source localization; inverse problems; beamforming
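The LCMV beamformer used in this study has a well-known closed form: the spatial filter w = C⁻¹l / (lᵀC⁻¹l) passes activity from the location with lead field l with unit gain while minimizing output variance. A minimal sketch (the toy lead field and sensor covariance below are invented for illustration):

```python
import numpy as np

def lcmv_weights(leadfield, cov):
    """LCMV beamformer filter for one source location: unit gain at the
    source (w @ leadfield == 1) and minimum output variance elsewhere."""
    cinv_l = np.linalg.solve(cov, leadfield)   # C^{-1} l
    return cinv_l / (leadfield @ cinv_l)       # normalize to unit gain

l = np.array([1.0, 0.5, -0.5])  # toy topography on 3 sensors
C = np.eye(3) * 2.0             # toy sensor covariance
w = lcmv_weights(l, C)
print(np.isclose(w @ l, 1.0))   # True: unit-gain constraint satisfied
```

Scanning such filters over a source grid and mapping the output power is the standard beamformer localization procedure.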
2015 Poster in conference proceedings metadata-only access

Brain functional connectivity at rest as similarity of neuronal activities

The brain is a connected network, requiring complex-system measures to describe its organization principles. The normalized compression distance (NCD) [1] is a parameter-free, quasi-universal similarity measure that estimates the information shared by two signals by comparing the compression length of one signal given the other. Here, we aim at testing whether this measure is a suitable quantifier of the functional connectivity between cortical regions. In particular, we tested whether the NCD between homologous hemispheric regions is smaller (higher connectivity) within the same person than across different people, whether it is smaller in the dominant hemisphere, and whether it depends on age. We used the Functional Source Separation (FSS) [2] algorithm on magnetoencephalographic (MEG) data in order to identify functionally homologous areas in the two hemispheres devoted to the somatosensory contralateral hand representation (FS_S1) in 28 healthy people. We then calculated the NCD between the left and right FS_S1 activities at rest. We found that the NCD 1) between left and right FS_S1 of the same person was smaller than across different people (p < 10^-7, consistently), 2) was smaller within the left dominant hemisphere than within the non-dominant right one (p = 3x10^-7), and 3) became more variable in older than in younger people (p = 0.01). This preliminary work shows that the NCD, which measures the similarity of neuronal source activities via their compression sizes, displays an excellent ability to quantify the similarity among neuronal activities, capturing the maximal similarity expected for functionally homologous cortical areas of the two hemispheres. Thus, the NCD seems a good candidate as a two-node functional connectivity measure in the resting state, able to overcome the limitations intrinsic to the classical Fourier or autoregressive estimates in assessing the dynamic properties of brain connectivity.

Neuronal pools' activity; normalized compression distance (NCD); Functional Source Separation (FSS); homologous areas connectivity; resting state
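The NCD itself is simple to compute with any off-the-shelf compressor: NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is the compressed length. A sketch with zlib, a generic compressor standing in for whichever one the study used; the toy signals are invented:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: near 0 for signals that share
    most of their information, near 1 for unrelated signals."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"0101010101010101" * 50                          # highly regular signal
c = bytes(bytearray((i * 37) % 256 for i in range(800)))  # unrelated signal

print(ncd(a, a) < ncd(a, c))  # True: identical signals are "closer"
```

In the study, the byte strings would be the discretized resting-state time courses of the left and right FS_S1 sources.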
2015 Journal article metadata-only access

Lattice Boltzmann approach for complex nonequilibrium flows

Montessori A ; Prestininzi P ; La Rocca M ; Succi S

We present a lattice Boltzmann realization of Grad's extended hydrodynamic approach to nonequilibrium flows. This is achieved by using higher-order isotropic lattices coupled with a higher-order regularization procedure. The method is assessed for flow across parallel plates and three-dimensional flows in porous media, showing excellent agreement of the mass flow with analytical and numerical solutions of the Boltzmann equation across the full range of Knudsen numbers, from the hydrodynamic regime to ballistic motion.

Quantum Lattice
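The lattice Boltzmann machinery this model builds on can be illustrated with a minimal D1Q3 BGK relaxation scheme; the paper's method adds higher-order isotropic lattices and a regularization step on top of this basic collide-and-stream cycle. A toy Python sketch (relaxation time and initial condition are arbitrary choices):

```python
import numpy as np

# Minimal D1Q3 BGK lattice Boltzmann: 3 discrete velocities on a 1-D
# periodic lattice, relaxing toward a second-order Maxwellian.
w = np.array([2/3, 1/6, 1/6])   # lattice weights
c = np.array([0, 1, -1])        # discrete velocities
tau = 0.8                       # relaxation time (sets the viscosity)

def equilibrium(rho, u):
    cs2 = 1/3                   # lattice sound speed squared
    cu = c[:, None] * u
    return w[:, None] * rho * (1 + cu/cs2 + cu**2/(2*cs2**2) - u**2/(2*cs2))

n = 64
rho = 1.0 + 0.1 * np.sin(2*np.pi*np.arange(n)/n)   # small density wave
u = np.zeros(n)
f = equilibrium(rho, u)
mass0 = f.sum()
for _ in range(100):
    rho = f.sum(axis=0)                     # density moment
    u = (c @ f) / rho                       # velocity moment
    f += -(f - equilibrium(rho, u)) / tau   # BGK collision
    for i, ci in enumerate(c):              # streaming (periodic)
        f[i] = np.roll(f[i], ci)
print(np.isclose(f.sum(), mass0))  # True: collisions conserve mass
```

Grad-style extensions replace the post-collision distribution with a projection onto higher-order Hermite moments, which is what gives access to the non-hydrodynamic (finite-Knudsen) regimes discussed above.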
2015 Journal article metadata-only access

Short-Lived Lattice Quasiparticles for Strongly Interacting Fluids

Jimenez Miller Mendoza ; Succi Sauro

It is shown that lattice kinetic theory based on short-lived quasiparticles proves very effective in simulating the complex dynamics of strongly interacting fluids (SIF). In particular, it is pointed out that the shear viscosity of lattice fluids is the sum of two contributions, one due to the usual interactions between particles (collision viscosity) and the other due to the interaction with the discrete lattice (propagation viscosity). Since the latter is negative, the sum may turn out to be orders of magnitude smaller than each of the two contributions separately, thus providing a mechanism to access SIF regimes at ordinary values of the collisional viscosity. This concept, as applied to quantum superfluids in one-dimensional optical lattices, is shown to reproduce shear viscosities consistent with the AdS-CFT holographic bound on the viscosity/entropy ratio. This shows that lattice kinetic theory continues to hold for strongly coupled hydrodynamic regimes where continuum kinetic theory may no longer be applicable.

strongly coupled fluids; lattice Boltzmann; quantum fluids
2015 Journal article metadata-only access

Lattice Boltzmann model for resistive relativistic magnetohydrodynamics

Mohseni F ; Mendoza M ; Succi S ; Herrmann H J

In this paper, we develop a lattice Boltzmann model for relativistic magnetohydrodynamics (MHD). Even though the model is derived for resistive MHD, it is shown to be numerically robust even in the high-conductivity (ideal MHD) limit. In order to validate the numerical method, test simulations are carried out in both the ideal and resistive limits, namely the propagation of Alfvén waves in ideal MHD and the evolution of current sheets in the resistive regime, where very good agreement is observed with the analytical results. Additionally, two-dimensional magnetic reconnection driven by the Kelvin-Helmholtz instability is studied and the effects of different parameters on the reconnection rate are investigated. It is shown that the density ratio has a negligible effect on the magnetic reconnection rate, while an increase in shear velocity decreases the reconnection rate. Additionally, it is found that the reconnection rate is proportional to sigma^(-1/2), where sigma is the conductivity, in agreement with the scaling law of the Sweet-Parker model. Finally, the numerical model is used to study magnetic reconnection in a stellar flare. A three-dimensional simulation suggests that the reconnection between the background and flux-rope magnetic lines in a stellar flare can take place as a result of a shear velocity in the photosphere.

Lattice Boltzmann
2015 Journal article metadata-only access

Quantum Simulator for Transport Phenomena in Fluid Flows

Mezzacapo A ; Sanz M ; Lamata L ; Egusquiza I L ; Succi S ; Solano E

Transport phenomena still stand as one of the most challenging problems in computational physics. By exploiting the analogies between Dirac and lattice Boltzmann equations, we develop a quantum simulator based on pseudospin-boson quantum systems, which is suitable for encoding fluid dynamics transport phenomena within a lattice kinetic formalism. It is shown that both the streaming and collision processes of lattice Boltzmann dynamics can be implemented with controlled quantum operations, using a heralded quantum protocol to encode non-unitary scattering processes. The proposed simulator is amenable to realization in controlled quantum platforms, such as ion-trap quantum computers or circuit quantum electrodynamics processors.

Lattice Boltzmann
2015 Journal article metadata-only access

Tailoring boundary geometry to optimize heat transport in turbulent convection

Toppaladoddi Srikanth ; Succi Sauro ; Wettlaufer John S

By tailoring the geometry of the upper boundary in turbulent Rayleigh-Benard convection, we manipulate the interaction between the boundary layer and the interior flow, and examine the heat transport using the lattice Boltzmann method. For fixed amplitude and varying boundary wavelength lambda, we find that the exponent beta in the Nusselt-Rayleigh scaling relation, Nu - 1 proportional to Ra^beta, is maximized at lambda = lambda_max ≈ (2 pi)^(-1), but decays to the planar value in both the large (lambda >> lambda_max) and small (lambda << lambda_max) wavelength limits. The changes in the exponent originate in the nature of the coupling between the boundary layer and the interior flow. We present a simple scaling argument embodying this coupling, which describes the maximal convective heat flux.

Lattice Boltzmann
2015 Journal article metadata-only access

Immersed Boundary - Thermal Lattice Boltzmann Methods for Non-Newtonian Flows Over a Heated Cylinder: A Comparative Study

Delouei A Amiri ; Nazari M ; Kayhani M H ; Succi S

In this study, we compare different diffuse and sharp interface schemes of the direct-forcing immersed boundary - thermal lattice Boltzmann method (IB-TLBM) for non-Newtonian flow over a heated circular cylinder. The effects of both the discrete lattice and the body force on the momentum and energy equations are considered, by applying the split-forcing lattice Boltzmann equations. A new technique based on predetermined parameters of direct-forcing IB-TLBM is presented for computing the Nusselt number. The study covers both steady and unsteady regimes (20 < Re < 80) in the power-law index range 0.6 < n < 1.4, encompassing both shear-thinning and shear-thickening non-Newtonian fluids. The numerical scheme, hydrodynamic approach and thermal parameters of the different interface schemes are compared in both steady and unsteady cases. It is found that the sharp interface scheme is a suitable and possibly competitive method for thermal IBM in terms of accuracy and computational cost.

immersed boundary method; thermal lattice Boltzmann method; non-Newtonian fluid; cylinder; power-law fluids