Quantifying trace gas emissions and the influence of surface exchange processes on the atmosphere is a necessary step towards controlling global greenhouse gas emissions and improving the reliability of air quality models. This paper proposes a procedure based on the mass balance method, implemented on highly resolved aircraft data. It estimates surface exchanges over heterogeneous areas of several km2, exploiting the characteristics of the convective boundary layer under steady-state conditions, which permit the estimation of emission/absorption terms as functions of advective fluxes only. A nonparametric approach is adopted: the fluxes through the surface of a virtual box surrounding the area of interest are reconstructed from scalar densities and wind vectors using Shepard functions. Two different techniques are also proposed to cope with the lack of data on the top surface of the box. The method has been applied to experimental data from measurement campaigns at two different sites. It provides realistic estimates of the CO2 emission/absorption in the considered areas that are in good agreement with CO2 fluxes evaluated by airborne eddy covariance, confirming the suitability of the proposed approach for assessing the turbulent exchange of trace gases over composite landscapes. Uncertainties in the estimated emissions due to both propagation of the experimental error and interpolation have been quantified by bootstrap analysis as 6%.
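The Shepard-function reconstruction used above amounts to inverse-distance weighting of scattered samples. A minimal sketch (not the authors' implementation; the function name and the power parameter are illustrative choices):

```python
import numpy as np

def shepard_interpolate(points, values, query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted (Shepard) interpolation of scattered data.

    points : (n, d) array of sample locations
    values : (n,) array of sampled scalar values
    query  : (m, d) array of locations at which to interpolate
    """
    points = np.asarray(points, float)
    values = np.asarray(values, float)
    query = np.atleast_2d(np.asarray(query, float))
    out = np.empty(len(query))
    for j, q in enumerate(query):
        d = np.linalg.norm(points - q, axis=1)
        hit = d < eps
        if hit.any():
            # Exact hit: return the sample value itself.
            out[j] = values[hit][0]
            continue
        w = 1.0 / d**power
        out[j] = np.dot(w, values) / w.sum()
    return out

# Midway between two equally distant samples the estimate is their average.
pts = np.array([[0.0, 0.0], [1.0, 0.0]])
vals = np.array([1.0, 3.0])
print(shepard_interpolate(pts, vals, [[0.5, 0.0]]))  # [2.]
```

In the paper's setting, `points` would be the aircraft measurement locations on a box face and `values` the scalar densities or wind components to be reconstructed there.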
This letter investigates the possibility of removing noise near jump discontinuities using a sorted copy of the signal. It is proved that sorting makes the noise predictable, so that it can be reproduced and subtracted from the sorted noisy signal. It is also shown that the proposed method can replace the edge-preserving term in an anisotropic diffusion scheme, with gains in mean square error, edge preservation and computational effort.
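The key observation, that sorting renders i.i.d. noise predictable, can be checked numerically: the sorted noise closely follows the expected Gaussian order statistics, approximated here by quantiles at plotting positions. This is a toy illustration of the principle, not the letter's denoising algorithm:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
n, sigma = 1000, 0.5
noise = sigma * rng.standard_normal(n)

# Expected order statistics of N(0, sigma^2), approximated by quantiles
# at the plotting positions (i + 0.5) / n.
nd = NormalDist(0.0, sigma)
expected = np.array([nd.inv_cdf((i + 0.5) / n) for i in range(n)])

sorted_noise = np.sort(noise)
residual = sorted_noise - expected

print(np.std(noise))     # roughly sigma: the raw noise level
print(np.std(residual))  # much smaller: sorted noise is predictable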
Transconjunctival sutureless 25-gauge versus 20-gauge standard vitrectomy: correlation between corneal topography and ultrasound biomicroscopy measurements of sclerotomy sites
We present a new approach to the study of the immune system that combines techniques of systems biology with information provided by data-driven prediction methods. To this end, we have extended an agent-based simulator of the immune response, C-IMMSIM, such that it represents pathogens, as well as lymphocyte receptors, by means of their amino acid sequences and makes use of bioinformatics methods for T and B cell epitope prediction. This is a key step for the simulation of the immune response, because it determines immunogenicity. The binding of the epitope, which is the immunogenic part of an invading pathogen, together with activation and cooperation from T helper cells, is required to trigger an immune response in the affected host. To determine a pathogen's epitopes, we use existing prediction methods. In addition, we propose a novel method, which uses Miyazawa and Jernigan protein-protein potential measurements, for assessing molecular binding in the context of immune complexes. We benchmark the resulting model by simulating a classical immunization experiment that reproduces the development of immune memory. We also investigate the role of major histocompatibility complex (MHC) haplotype heterozygosity and homozygosity with respect to the influenza virus and show that there is an advantage to heterozygosity. Finally, we investigate the emergence of one or more dominating clones of lymphocytes in the situation of chronic exposure to the same immunogenic molecule and show that high-affinity clones proliferate more than any others. These results show that the simulator produces dynamics that are stable and consistent with basic immunological knowledge. We believe that the combination of genomic information and simulation of the dynamics of the immune system, in one single tool, can offer new perspectives for a better understanding of the immune system.
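Contact-potential scoring of the kind used for the binding assessment can be sketched as a sum of pairwise residue energies over aligned positions. The energy table below is invented for the example; real Miyazawa-Jernigan potentials are 20x20 matrices derived from observed residue contacts in solved structures, and the values here are not theirs:

```python
# Toy contact-energy table over three residues (A, L, V). INVENTED values,
# only for illustration; lower (more negative) means stronger contact.
TOY_POTENTIAL = {
    frozenset("AL"): -1.2,
    frozenset("AV"): -0.9,
    frozenset("LV"): -1.5,
    frozenset("A"): -0.5,   # A-A contact
    frozenset("L"): -1.8,   # L-L contact
    frozenset("V"): -1.3,   # V-V contact
}

def contact_score(seq_a, seq_b, table=TOY_POTENTIAL):
    """Sum pairwise contact energies over aligned positions of two
    equal-length amino-acid strings; lower score = stronger binding."""
    assert len(seq_a) == len(seq_b)
    return sum(table.get(frozenset((x, y)), 0.0) for x, y in zip(seq_a, seq_b))

print(contact_score("ALV", "LAV"))  # -1.2 + -1.2 + -1.3 = -3.7
```

A real implementation would score residue pairs brought into contact by the complex geometry rather than a fixed positional alignment, but the aggregation step is the same.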
ImmunoGrid: towards agent-based simulations of the human immune system at a natural scale.
Halling-Brown M; Pappalardo F; Rapin N; Zhang P; Alemani D; Emerson A; Castiglione F; Duroux P; Pennisi M; Miotto O; Churchill D; Rossi E; Moss DS; Sansom CE; Bernaschi M; Lefranc MP; Brunak S; Motta S; Lollini PL; Basford K; Brusic V
For G an open bounded subset of R^2 with C^1 boundary, we study the regularity of the variational solution u in H^1_0(G) to the quasilinear elliptic equation of Leray-Lions type -div A(x,Du) = f, when f belongs to the Zygmund space L(log L)^{\delta}, \delta>0. Interpolating between the known results for \delta=1/2 and \delta=1 of [Stampacchia] and [Alberico-Ferone], we prove that |Du| belongs to the Lorentz space L^{2, 1/\delta}(G) for \delta in [1/2, 1].
We consider the problem of short-time extrapolation of blue-chip stock indexes in the context of wavelet subspaces, following the theory proposed by X.-G. Xia and co-workers in a series of papers \cite{XLK,XKZ,LK,LXK}. The idea is first to approximate the oscillations of the corresponding stock index at some scale by means of the scaling function which is part of a given multi-resolution analysis of $L^2(\Re)$. Then, since oscillations at a finer scale are discarded, it becomes possible to extend such a signal up to a certain time in the future; the finer the approximation, the shorter this extrapolation interval. At the numerical level, a so-called Generalized Gerchberg-Papoulis (GGP) algorithm is set up, which is shown to converge toward the minimum $L^2$ norm solution of the extrapolation problem. When it comes to implementation, an acceleration by means of a Conjugate Gradient (CG) routine is necessary in order to obtain a satisfying accuracy quickly. Several examples are investigated with different international stock market indexes.
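The Gerchberg-Papoulis scheme underlying the GGP algorithm alternates two projections: band-limiting in frequency and re-imposing the known samples in time. The sketch below demonstrates the plain (non-generalized) iteration on periodic gap-filling, where convergence is fast; pure end-extrapolation uses the same alternating projections but converges much more slowly, which is why the paper accelerates with CG. All parameter choices are illustrative:

```python
import numpy as np

def gerchberg_papoulis(known, mask, band, n_iter=200):
    """Recover missing samples of a band-limited signal by alternating
    projections: band-limit in frequency, then re-impose known samples."""
    x = np.where(mask, known, 0.0)
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X[band:-band] = 0.0        # keep only the low-frequency bins
        x = np.fft.ifft(X).real
        x[mask] = known[mask]      # known samples are hard constraints
    return x

# Band-limited test signal with every 4th sample missing.
N = 128
t = np.arange(N)
sig = np.cos(2 * np.pi * 3 * t / N) + 0.5 * np.sin(2 * np.pi * 5 * t / N)
mask = (t % 4) != 3                # True where the sample is known
rec = gerchberg_papoulis(np.where(mask, sig, 0.0), mask, band=8)
print(np.max(np.abs(rec - sig)))   # converges to the true signal
```

Each pass is non-expansive, so the reconstruction error can only shrink; here the periodic mask makes the contraction geometric.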
An analytical solution for the transient drug diffusion in adjoining porous wall layers faced with a drug-eluting stent is presented. The endothelium, intima, internal elastic lamina and media are all treated as homogeneous porous media, and the drug transfer through them is modelled by a set of coupled partial differential equations. The classical separation of variables method for a multi-layer configuration is used. The model addresses the concept of penetration depth for multi-layer solids, which is useful for treating the wall thickness by estimating a physical bound for mass diffusion. Drug concentration levels and mass profiles in each layer at various times are given and discussed.
Mass transfer
multi-layered porous media
advection-diffusion equation
penetration depth
drug delivery
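For a single homogeneous layer, the separation-of-variables approach reduces to an eigenfunction series. The sketch below illustrates the idea on one slab with a no-flux inner face and a perfect-sink outer face and a uniform initial load; the boundary conditions, parameter values and function name are illustrative, not the paper's four-layer configuration:

```python
import numpy as np

def slab_concentration(x, t, D=1e-10, L=1e-4, C0=1.0, n_terms=200):
    """Series solution of dC/dt = D d2C/dx2 on 0 < x < L with
    no-flux at x = 0, perfect sink C = 0 at x = L, uniform initial C0.

    Eigenfunctions cos(lam_n x), lam_n = (2n+1) pi / (2L); the Fourier
    coefficients of the constant initial profile are
    A_n = 4 C0 (-1)^n / ((2n+1) pi).
    """
    n = np.arange(n_terms)
    lam = (2 * n + 1) * np.pi / (2 * L)
    A = 4 * C0 * (-1.0) ** n / ((2 * n + 1) * np.pi)
    x = np.atleast_1d(np.asarray(x, float))
    terms = A * np.cos(np.outer(x, lam)) * np.exp(-D * lam**2 * t)
    return terms.sum(axis=1)

# At t = 0 the series reproduces the uniform initial condition...
print(slab_concentration(0.5e-4, 0.0))   # close to 1.0
# ...and the concentration decays as drug is washed out at x = L.
print(slab_concentration(0.5e-4, 50.0))
```

The multi-layer solution of the paper couples several such series through flux- and concentration-matching conditions at the layer interfaces.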
We consider a rather simple algorithm to address the fascinating field of numerical extrapolation of (analytic) band-limited functions. It relies on two main elements: the lower frequencies are treated by projecting the known part of the signal to be extended onto the space generated by "Prolate Spheroidal Wave Functions" (PSWF, as originally proposed by Slepian), whereas the higher ones are handled by the recent so-called "Compressive Sampling" (CS, proposed by Cand\`es) algorithms, which are independent of the size of the bandwidth. Slepian functions are recalled and their numerical computation is explained in full detail, whereas $\ell^1$ regularization techniques are summarized together with a recent iterative algorithm which has been proved to work efficiently on so-called "compressible signals", a class that matches rather well that of smooth band-limited functions. Numerical results are displayed for both techniques, and the accuracy of the combined process is studied on test signals with fast Fourier decay.
Band-limited extrapolation
Prolate spheroidal wave functions
Slepian series
$\ell^1$ regularization
sparse and compressible signals recovery
Inertial range Eulerian and Lagrangian statistics from numerical simulations of isotropic turbulence
Benzi R; Biferale L; Fisher R; Lamb DQ; Toschi F
We present a study of Eulerian and Lagrangian statistics from a high-resolution numerical simulation of isotropic and homogeneous turbulence using the FLASH code, with an estimated Taylor microscale Reynolds number of around 600. Statistics are evaluated over a data set with 1856^3 spatial grid points and with 256^3 (about 16.8 million) particles, followed for about one large-scale eddy turnover time. We present data for the Eulerian and Lagrangian structure functions up to the tenth order. We analyze the local scaling properties in the inertial range. The Eulerian velocity field results show good agreement with previous data and confirm the puzzling differences previously found between the scaling of the transverse and the longitudinal structure functions. On the other hand, accurate measurements of sixth- and higher-order Lagrangian structure functions allow us to highlight some discrepancies from earlier experimental and numerical results. We interpret this result in terms of a possible contamination from the viscous scale, which may have affected estimates of the scaling properties in previous studies. We show that a simple bridge relation based on a multifractal theory is able to connect scaling properties of both Eulerian and Lagrangian observables, provided that the small differences between intermittency of transverse and longitudinal Eulerian structure functions are properly considered.
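Lagrangian structure functions of the kind measured here are moments of velocity increments along particle trajectories, S_p(tau) = <|v(t + tau) - v(t)|^p>. A minimal sketch on synthetic data (the smooth-signal scaling check is purely illustrative, not turbulence):

```python
import numpy as np

def lagrangian_structure_functions(v, taus, orders):
    """S_p(tau) = <|v(t + tau) - v(t)|^p>, averaged over time and particles.

    v : (n_particles, n_times) array of one velocity component
    """
    S = np.empty((len(orders), len(taus)))
    for j, tau in enumerate(taus):
        dv = v[:, tau:] - v[:, :-tau]          # increments at lag tau
        for i, p in enumerate(orders):
            S[i, j] = np.mean(np.abs(dv) ** p)
    return S

# For a smooth signal, increments scale as dv ~ tau, so S_2 ~ tau^2.
t = np.linspace(0.0, 1.0, 2001)
v = np.sin(2 * np.pi * t)[None, :].repeat(4, axis=0)   # 4 identical "tracks"
taus = np.array([1, 2, 4])
S = lagrangian_structure_functions(v, taus, orders=[2])
print(S[0, 1] / S[0, 0])   # close to 4: doubling tau quadruples S_2
```

In the inertial range of real turbulence data the same moments instead follow anomalous power laws, whose exponents are the quantities compared with the multifractal bridge relation.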
We present the results of a high-resolution numerical study of two-dimensional (2D) Rayleigh-Taylor turbulence using a recently proposed thermal lattice Boltzmann method. The goal of our study is both methodological and physical. We assess merits and limitations concerning small- and large-scale resolution/accuracy of the adopted integration scheme. We discuss quantitatively the requirements needed to keep the method stable and precise enough to simulate stratified and unstratified flows driven by thermal active fluctuations at high Rayleigh and high Reynolds numbers. We present data with spatial resolution up to 4096 x 10000 grid points and Rayleigh number up to Ra ~ 10^11. The statistical quality of the data allows us to investigate velocity and temperature fluctuations, scale by scale, over roughly four decades. We present a detailed quantitative analysis of scaling laws in the viscous, inertial, and integral range, supporting the existence of a Bolgiano-like inertial scaling, as expected in 2D systems. We also discuss the presence of small/large intermittent deviations to the scaling of velocity/temperature fluctuations and the Rayleigh dependence of gradient flatness.
Human blood flow is a multiscale problem: to a first approximation, blood is a dense suspension of plasma and deformable red cells. Physiological vessel diameters range from about one to thousands of cell radii. Current computational models either involve a homogeneous fluid and cannot track particulate effects, or describe a relatively small number of cells with high resolution but are incapable of reaching relevant time and length scales. Our approach is to simplify much further than existing particulate models. We combine well-established methods from other areas of physics in order to find the essential ingredients for a minimalist description that still recovers hemorheology. These ingredients are a lattice Boltzmann method describing rigid particle suspensions, to account for hydrodynamic long-range interactions, and, in order to describe the more complex short-range behavior of cells, anisotropic model potentials known from molecular-dynamics simulations. At the price of reduced detail, we achieve an efficient and scalable implementation, which is crucial for our ultimate goal: establishing a link between the collective behavior of millions of cells and the macroscopic properties of blood in realistic flow situations. In this paper we present our model and demonstrate its applicability to conditions typical for the microvasculature.