In general relativity, relativistic gravity gradiometry involves the measurement of the relativistic tidal matrix, which is theoretically obtained from the projection of the Riemann curvature tensor onto the orthonormal tetrad frame of an observer. The observer's 4-velocity vector defines its local temporal axis and its local spatial frame is defined by a set of three orthonormal nonrotating gyro directions. The general tidal matrix for the timelike geodesics of Kerr spacetime has been calculated by Marck [Proc. R. Soc. A 385, 431 (1983)]. We are interested in the measured components of the curvature tensor along the inclined "circular" geodesic orbit of a test mass about a slowly rotating astronomical object of mass M and angular momentum J. Therefore, we specialize Marck's results to such a "circular" orbit that is tilted with respect to the equatorial plane of the Kerr source. To linear order in J, we recover the gravitomagnetic beating phenomenon [B. Mashhoon and D.S. Theiss, Phys. Rev. Lett. 49, 1542 (1982)], where the beat frequency is the frequency of geodetic precession. The beat effect shows up as a special long-period gravitomagnetic part of the relativistic tidal matrix; moreover, the effect's short-term manifestations are contained in certain post-Newtonian secular terms. The physical interpretation of this effect is briefly discussed.
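For reference, the projection just described takes the form (a schematic in one common tetrad convention; signs and index orderings must be matched to the paper)
\[
\mathcal{K}_{ij} = R_{\mu\nu\rho\sigma}\, \lambda^{\mu}{}_{(0)}\, \lambda^{\nu}{}_{(i)}\, \lambda^{\rho}{}_{(0)}\, \lambda^{\sigma}{}_{(j)},
\]
where \(\lambda^{\mu}{}_{(0)} = u^{\mu}\) is the observer's 4-velocity and \(\lambda^{\mu}{}_{(i)}\), \(i = 1, 2, 3\), are the nonrotating (gyro-fixed) spatial axes of the orthonormal tetrad.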
The precession of a test gyroscope along stable bound equatorial plane orbits around a Kerr black hole is analyzed, and the precession angular velocity of the gyro's parallel transported spin vector and the increment in the precession angle after one orbital period are evaluated. The parallel transported Marck frame that enters this discussion is shown to have an elegant geometrical explanation in terms of the electric and magnetic parts of the Killing-Yano 2-form and a Wigner rotation effect.
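In schematic form (notation assumed here, not quoted from the paper), the gyro's spin vector \(S^{\mu}\) satisfies the parallel transport and orthogonality conditions
\[
\frac{DS^{\mu}}{d\tau} = \frac{dS^{\mu}}{d\tau} + \Gamma^{\mu}{}_{\nu\rho}\, u^{\nu} S^{\rho} = 0, \qquad u_{\mu} S^{\mu} = 0,
\]
and in the nonrotating (Schwarzschild) limit, for a circular geodesic of radius \(r\), the accumulated precession after one orbital period reduces to the standard geodetic result \(\Delta\phi = 2\pi\,[\,1 - (1 - 3M/r)^{1/2}\,]\) in geometrized units (\(G = c = 1\)).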
The precession of a test gyroscope along unbound equatorial plane geodesic orbits around a Kerr black hole is analyzed with respect to a static reference frame whose axes point towards the "fixed stars." The accumulated precession angle after a complete scattering process is evaluated and compared with the corresponding change in the orbital angle. Limiting results for the nonrotating Schwarzschild black hole case are also discussed.
Scalar field self-force effects on a scalar charge orbiting a Reissner-Nordström black hole are investigated. The scalar wave equation is solved analytically in a post-Newtonian framework, and the solution is used to compute the self-field (up to 7.5 post-Newtonian order) as well as the components of the self-force at the particle's location. The energy fluxes radiated to infinity and down the hole are also evaluated. Comparison with previous numerical results in the Schwarzschild case shows a reasonable agreement in both strong field and weak field regimes.
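For orientation, in the standard notation of the self-force literature (a sketch, with conventions to be matched to the paper): the field of a scalar charge \(q\) on a world line \(z(\tau)\) obeys
\[
\Box \phi = -4\pi q \int \frac{\delta^{4}\!\left(x - z(\tau)\right)}{\sqrt{-g}}\, d\tau ,
\]
and the self-force is built from the regularized field evaluated at the particle's location,
\[
f^{\mu} = q \left( g^{\mu\nu} + u^{\mu} u^{\nu} \right) \nabla_{\nu} \phi^{\mathrm{reg}} .
\]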
The scattering of massive particles by a Schwarzschild black hole is considered in the case where the particles are also subject to a drag force. The latter is modeled as a viscous force acting in the orbital plane, with components proportional to the corresponding particle 4-velocity components. The energy and angular momentum losses, as well as the dependence of the hyperbolic scattering angle on the strength of the drag, are investigated in situations where strong field effects cause large deflections.
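In schematic form (the constant \(A\) and the component prescription below are illustrative assumptions, not necessarily the paper's exact model), the equations of motion read
\[
\frac{Du^{\mu}}{d\tau} = f^{\mu}_{\rm drag}, \qquad f^{r}_{\rm drag} = -A\, u^{r}, \quad f^{\phi}_{\rm drag} = -A\, u^{\phi}, \qquad A \ge 0,
\]
so that the drag acts in the orbital plane with components proportional to the corresponding 4-velocity components.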
Schwarzschild black hole
Hyperbolic motion
Scattering process
Objectives
Identifying the predictive factors of Sustained Virological Response (SVR) represents an important challenge in new interferon-based DAA therapies. Here, we analyzed the kinetics of the antiviral response associated with a triple drug regimen, and the association between negative residual viral load at different time points during treatment and SVR.
Methods
Twenty-three HCV genotype 1 (GT1a n = 11; GT1b n = 12) infected patients were included in the study. Linear Discriminant Analysis (LDA) was used to establish a possible association between HCV RNA values at days 1 and 4 from the start of therapy and SVR. Principal Component Analysis (PCA) was applied to analyze the correlation between the HCV RNA slope and SVR. An ultrasensitive (US) method was established to measure the residual HCV viral load in those samples which resulted "detected <12 IU/ml" or undetectable with the ABBOTT standard assay, and was retrospectively used on samples collected at different time points to establish its predictive power for SVR.
Results
According to LDA, there was no association between SVR and viral kinetics, either at time points earlier than 1 week (days 1 and 4) after therapy initiation or later. The slopes were not relevant for classifying patients as SVR or no-SVR. No significant differences were observed in the median HCV RNA values at T0 between SVR and no-SVR patients. When HCV RNA values obtained with the US protocol (LOD 1.2 IU/ml) after 1 month of therapy were considered, the area under the ROC curve was 0.70. Overall, the PPV and NPV of undetectable HCV RNA with the US method for SVR were 100% and 46.7%, respectively; sensitivity and specificity were 38.4% and 100%, respectively.
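As a check on how the reported figures combine (the 2x2 counts below are hypothetical, chosen only to reproduce the stated percentages, and are not data from the study), the standard definitions give:

```python
# Hypothetical confusion-table arithmetic: "undetectable by US method at 1 month"
# as a binary predictor of SVR. Counts chosen to reproduce the reported values.
tp, fp, fn, tn = 5, 0, 8, 7

ppv = tp / (tp + fp)            # 5/5  -> 100%
npv = tn / (tn + fn)            # 7/15 -> 46.7%
sensitivity = tp / (tp + fn)    # 5/13 -> 38.4%
specificity = tn / (tn + fp)    # 7/7  -> 100%
print(ppv, npv, sensitivity, specificity)
```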
Conclusion
HCV RNA "not detected" by the US method after 1 month of treatment is predictive of SVR in first-generation protease inhibitor (PI)-based triple therapy. The US method could have clinical utility for advanced monitoring of the virological response in new interferon-based DAA combination regimens.
A deep understanding of the dynamics and rheology of suspensions of vesicles, cells, and capsules is relevant for different applications, ranging from soft glasses to blood flow [1]. I will present a study of suspensions of fluid vesicles by a combination of molecular dynamics and mesoscale hydrodynamics simulations (multi-particle collision dynamics) in two dimensions [2], pointing out the great potential of the numerical method for addressing problems in soft matter. The flow behavior is studied as a function of the shear rate, the volume fraction of vesicles, and the viscosity ratio between the inside and outside fluids. Results are obtained for the interactions of two vesicles, the intrinsic viscosity of the suspension, and the cell-free layer near the walls [3-5].
[1] D. Barthes-Biesel, Annu. Rev. Fluid Mech. 48, 25 (2016)
[2] R. Finken, A. Lamura, U. Seifert, and G. Gompper, Eur. Phys. J. E 25, 309 (2008)
[3] A. Lamura and G. Gompper, EPL 102, 28004 (2013)
[4] A. Lamura and G. Gompper, Procedia IUTAM 16, 3 (2015)
[5] E. Afik, A. Lamura, and V. Steinberg, EPL 113, 38003 (2016)
We give a general Gaussian bound for the first chaos (or innovation) of point processes with stochastic intensity constructed by embedding in a bivariate Poisson process. We apply the general result to nonlinear Hawkes processes, providing quantitative central limit theorems.
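To make the setting concrete, here is a minimal simulation sketch of a linear Hawkes process by thinning (the baseline rate, exponential kernel, and parameter values are illustrative assumptions; the bivariate Poisson embedding used in the paper is the measure-theoretic counterpart of this construction):

```python
import math, random

def simulate_hawkes(mu, alpha, beta, t_max, seed=0):
    """Linear Hawkes process with intensity
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)),
    simulated by thinning a dominating Poisson process (Ogata-style)."""
    random.seed(seed)
    events, t = [], 0.0
    while True:
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - s)) for s in events)
        t += random.expovariate(lam_bar)        # candidate point
        if t >= t_max:
            return events
        lam_t = mu + sum(alpha * math.exp(-beta * (t - s)) for s in events)
        if random.random() <= lam_t / lam_bar:  # accept: intensity decays between events
            events.append(t)

print(len(simulate_hawkes(mu=1.0, alpha=0.5, beta=1.0, t_max=100.0)))
```

The construction is well defined in the subcritical regime alpha/beta < 1, where the process is stable.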
Clark-Ocone formula
Gaussian approximation
Hawkes process
Malliavin calculus
Poisson process
Stein's method
Stochastic intensity
Bootstrap percolation is a well-known activation process in a graph, in which a node becomes active when it has at least r active neighbors. Such a process, originally studied on regular structures, has recently been investigated also in the context of random graphs, where it can serve as a simple model for a wide variety of cascades, such as the spreading of ideas, trends, and viral contents over large social networks. In particular, it has been shown that in G(n, p) the final active set can exhibit a phase transition for a sub-linear number of seeds. In this paper, we propose a unique framework to study similar sub-linear phase transitions for a much broader class of graph models and epidemic processes. Specifically, we consider i) a generalized version of bootstrap percolation in G(n, p) with random activation thresholds and random node-to-node influences; ii) different random graph models, including graphs with a given degree sequence and graphs with community structure (block model). The common thread of our work is to show the surprising sensitivity of the critical seed set size to extreme values of distributions, which makes some systems dramatically vulnerable to large-scale outbreaks. We validate our results by running simulations on both synthetic and real graphs.
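A minimal simulation sketch of the classical process on G(n, p) may help fix ideas (parameters are illustrative; the generalized version studied in the paper would replace the fixed threshold r with random thresholds and weighted influences):

```python
import random

def bootstrap_percolation(n, p, r, n_seeds, rng=random.Random(0)):
    """Final active-set size of r-neighbor bootstrap percolation on G(n, p)."""
    adj = [[] for _ in range(n)]
    for u in range(n):                       # sample the random graph
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    active = set(rng.sample(range(n), n_seeds))
    frontier = list(active)
    hits = [0] * n                           # active-neighbor counters
    while frontier:
        u = frontier.pop()
        for v in adj[u]:
            if v not in active:
                hits[v] += 1
                if hits[v] >= r:             # activation rule
                    active.add(v)
                    frontier.append(v)
    return len(active)

print(bootstrap_percolation(n=2000, p=0.01, r=2, n_seeds=50))
```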
Robust Design Optimization (RDO) represents a particularly attractive opportunity when the specifications of the design are detailed and accurate: the possibility to optimize an industrial object for its real usage conditions, improving the overall performance while reducing the risk of occurrence of off-design conditions, strictly depends on the availability of information about the probability of occurrence of the various operative conditions during the lifetime of the design. Those data are typically not available prior to the production of a prototype.
However, once the design has been produced and is operative, navigation data can be collected and utilized for the modification (refitting) of the current design, possibly in an early stage of its lifetime, in order to adapt the design to the real operative conditions at a time when the remaining lifetime is still long enough to allow the payback of the cost of the modification through the obtained savings.
In the present paper, five sister ships have been observed over a period of two months, recording their operative data. Statistical distributions of speed and displacement are derived. An optimization framework is then applied, and some modifications of a small portion of the hull are proposed in order to significantly improve the performance of the hull, decreasing the operative cost of the ship. Dedicated numerical techniques are adopted in order to reduce the time required for the re-design activities.
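The refitting logic can be sketched as follows (the bins, probabilities, candidate designs, and cost surrogate below are hypothetical placeholders, not the paper's data): each candidate modification is ranked by its operative cost averaged over the recorded distributions of speed and displacement.

```python
import itertools

# Hypothetical operating-condition distributions from recorded navigation data.
speeds,        speed_prob = [12.0, 14.0, 16.0], [0.3, 0.5, 0.2]   # knots
displacements, disp_prob  = [9000.0, 11000.0],  [0.6, 0.4]        # tonnes

def predicted_cost(design, speed, disp):
    """Placeholder surrogate for the hydrodynamic cost prediction of a design."""
    return design["base_cost"] * (speed / 14.0) ** 3 * (disp / 10000.0)

def expected_cost(design):
    """Robust objective: cost weighted by the probability of each condition."""
    return sum(ps * pd * predicted_cost(design, s, d)
               for (s, ps), (d, pd) in itertools.product(
                   list(zip(speeds, speed_prob)),
                   list(zip(displacements, disp_prob))))

candidates = [{"name": "original", "base_cost": 100.0},
              {"name": "refit-A",  "base_cost": 97.5}]
best = min(candidates, key=expected_cost)
print(best["name"], round(expected_cost(best), 2))
```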
Robust Design Optimization
Ship Design
Global Optimization
Particle Swarm Optimization
Coherent structures and extreme events in rotating multiphase turbulent flows
L Biferale; F Bonaccorso; I M Mazzitelli; M A T van Hinsberg; A S Lanotte; S Musacchio; P Perlekar; F Toschi
By using direct numerical simulations (DNS) at unprecedented resolution, we study turbulence under
rotation in the presence of simultaneous direct and inverse cascades. The accumulation of energy at large scale
leads to the formation of vertical coherent regions with high vorticity oriented along the rotation axis. By
seeding the flow with millions of inertial particles, we quantify, for the first time, the effects of those coherent
vertical structures on the preferential concentration of light and heavy particles. Furthermore, we quantitatively
show that extreme fluctuations, leading to deviations from a normal-distributed statistics, result from the
entangled interaction of the vertical structures with the turbulent background. Finally, we present the first-ever
measurement of the relative importance between Stokes drag, Coriolis force, and centripetal force along the
trajectories of inertial particles. We discover that vortical coherent structures lead to unexpected diffusion
properties for heavy and light particles in the directions parallel and perpendicular to the rotation axis.
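For context, a commonly used model equation for small inertial particles in a rotating frame (a sketch; the precise model and coefficients adopted in the paper may differ) balances exactly the three contributions mentioned above:
\[
\frac{d\mathbf{v}}{dt} = \frac{\mathbf{u}(\mathbf{x},t) - \mathbf{v}}{\tau_{s}} \;-\; 2\,\boldsymbol{\Omega} \times \mathbf{v} \;-\; \boldsymbol{\Omega} \times \left( \boldsymbol{\Omega} \times \mathbf{x} \right),
\]
where \(\tau_{s}\) is the particle response (Stokes) time, \(\mathbf{u}\) the fluid velocity at the particle position, and \(\boldsymbol{\Omega}\) the rotation vector; for light particles a fluid-acceleration (added-mass) term proportional to \(D\mathbf{u}/Dt\) is usually included as well.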
MIPAS on ENVISAT performed measurements of the atmospheric composition almost continuously for almost 10 years, from June 2002 to April 2012. These ten years cover a period when the first effects of the phase-out of CFC emissions, following the ratification of the Montreal protocol in 1987, can be measured.
Even if ten years constitute a short period for deriving trends, it has been proven that useful information on the time variation of atmospheric constituents can be derived from the analysis of these measurements.
However, previous versions of the MIPAS on ENVISAT dataset were characterized by an instrumental drift: some detectors used by MIPAS were affected by non-linearities that change with time due to the ageing of the detectors, and this caused a non-negligible systematic error in the trend estimation.
The new full mission reprocessed dataset V7, which will be released very soon, uses L1 files in which the impact of detector ageing on the non-linearities has been corrected. Furthermore, the L2 processor has been upgraded with new functionalities that improve its performance.
We present the results of a study of trends derived from the analysis of the new MIPAS V7 products for several MIPAS target species, including ozone depleting species such as CFC-11, CFC-12, CCl4 and HCFC-22.
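As an indication of how such trends are typically extracted from a decade-long monthly series (the model below, a linear trend plus annual and semi-annual harmonics, is a common choice and an assumption here, not necessarily the exact MIPAS V7 analysis):

```python
import numpy as np

def fit_trend(t_years, vmr):
    """Least-squares fit of constant + linear trend + annual/semi-annual cycles;
    returns the trend coefficient in (vmr units) per year."""
    w = 2.0 * np.pi
    X = np.column_stack([
        np.ones_like(t_years), t_years,
        np.cos(w * t_years), np.sin(w * t_years),          # annual cycle
        np.cos(2 * w * t_years), np.sin(2 * w * t_years),  # semi-annual cycle
    ])
    coeffs, *_ = np.linalg.lstsq(X, vmr, rcond=None)
    return coeffs[1]

t = np.arange(0.0, 10.0, 1.0 / 12.0)                   # ten years, monthly sampling
y = 250.0 - 2.0 * t + 5.0 * np.cos(2.0 * np.pi * t)    # synthetic CFC-like series
print(fit_trend(t, y))                                 # recovers ~ -2.0 per year
```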
A new technique for color quantization is suggested. First, pre-quantization is accomplished by means of spatial resolution reduction; then, color aggregation is performed based on the distance between colors in the color space. Color aggregation is an iterated process where the number of iterations is given by the difference between the number of colors of the pre-quantized image and the number of colors desired for the quantized image. Color mapping is finally accomplished. Performance evaluation is carried out in terms of generally adopted quality measures. Comparisons with other methods in the literature are also provided.
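A toy version of the aggregation step may clarify the iteration count mentioned above (the Euclidean metric, weighted-mean merge rule, and data are simplifying assumptions, not the exact procedure of the paper):

```python
# Toy color aggregation: iteratively merge the two closest palette colors.
# Each palette entry is ((r, g, b), pixel_count); distances are Euclidean in RGB.

def aggregate(palette, target_size):
    palette = list(palette)
    while len(palette) > target_size:        # one merge per iteration
        i, j = min(
            ((i, j) for i in range(len(palette)) for j in range(i + 1, len(palette))),
            key=lambda ij: sum((a - b) ** 2 for a, b in
                               zip(palette[ij[0]][0], palette[ij[1]][0])))
        (c1, n1), (c2, n2) = palette[i], palette[j]
        merged = tuple((a * n1 + b * n2) / (n1 + n2) for a, b in zip(c1, c2))
        palette[i] = (merged, n1 + n2)       # replace with count-weighted mean
        del palette[j]
    return palette

print(aggregate([((255, 0, 0), 10), ((250, 5, 0), 4), ((0, 0, 255), 6)], 2))
```

The number of merges equals the difference between the initial and the target palette sizes, matching the iteration count stated above.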
image compression and processing
color quantization
clustering
A new technique is presented for color image segmentation. Five processes are performed, dealing respectively with color image quantization, noisy region removal, thin region removal, color-based region merging, and area-based region merging. Some parameters involved in the method are automatically computed, while others are fixed depending on the specific application. The method is thus characterized by a flexibility that makes it useful for different applications. It has been checked on color images from publicly available repositories, and its performance has been evaluated in terms of Precision, Recall and F-measure. The obtained results are satisfactory from both a qualitative and a quantitative point of view.
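The structure of the method can be sketched as a five-stage pipeline (the stage names, interfaces, and no-op placeholder bodies below are illustrative assumptions, not the paper's implementation):

```python
# Placeholder stages (identity functions) standing in for the five processes.
def quantize_colors(img, n_colors):      return img
def remove_noisy_regions(img, max_area): return img
def remove_thin_regions(img, min_width): return img
def merge_by_color(img, threshold):      return img
def merge_by_area(img, min_area):        return img

def segment(image, params):
    """Five-stage color segmentation pipeline mirroring the method's structure."""
    out = quantize_colors(image, params["n_colors"])
    out = remove_noisy_regions(out, params["noise_area"])
    out = remove_thin_regions(out, params["min_width"])
    out = merge_by_color(out, params["color_threshold"])
    out = merge_by_area(out, params["min_area"])
    return out

print(segment([[(255, 0, 0)]], {"n_colors": 16, "noise_area": 4,
                                "min_width": 2, "color_threshold": 20,
                                "min_area": 50}))
```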
RGB color space
color image quantization
color segmentation
region splitting
region merging
In this paper the Bi-Objective k-Length-Bounded Critical Disruption Path (BO-kLB-CDP) optimization problem is proposed, aimed at maximizing the interdiction effects on a network obtained by removing a simple path, connecting a given source and destination, whose length does not exceed a certain threshold. The BO-kLB-CDP problem extends the Critical Disruption Path (CDP) problem introduced by Granata et al. in [Granata, D., Steeger, G., and Rebennack, S., Network interdiction via a Critical Disruption Path: Branch-and-Price algorithms, Computers & Operations Research, Volume 40, Issue 11, November 2013, Pages 2689-2702]. Several real applications of this class of optimization problems arise in the fields of security, surveillance, transportation and evacuation operations. In order to overcome some limits of the original CDP problem and increase its suitability for practical purposes, we first consider a length limitation for Critical Disruption Paths. Second, we generalize the concept of network interdiction considered in the CDP: besides minimizing the cardinality of the maximal connected component after the removal of the CDP, we are now also interested in maximizing the number of connected components in the residual graph. A Mixed Integer Programming formulation for the BO-kLB-CDP problem is therefore proposed and discussed, presenting the results of a multiple objective analysis performed through computational experiments on a large set of instances.
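In schematic form (notation introduced here for illustration): given \(G = (V, E)\), a source \(s\), a destination \(t\), and a length bound \(k\), the problem seeks a simple \(s\)-\(t\) path \(P\) with \(|P| \le k\) that simultaneously achieves
\[
\min_{P} \; \max_{j} \bigl| C_{j}(G - P) \bigr| \qquad \text{and} \qquad \max_{P} \; \#\bigl\{ C_{j}(G - P) \bigr\},
\]
where the \(C_{j}(G - P)\) are the connected components of the residual graph obtained by deleting the vertices of \(P\).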
Background: Macrophages play a major role in the immune system, being among the most plastic immune cells and carrying out several key immune functions.
Methods: Here we derived a minimalistic gene regulatory network model for the differentiation of macrophages into the two phenotypes M1 (pro-) and M2 (anti-inflammatory).
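A minimal sketch in the spirit of such a model (the two-gene mutual-inhibition motif and all parameter values are illustrative assumptions, not the network derived in the paper): two master regulators repress each other, producing two stable expression states that can be identified with M1-like and M2-like phenotypes.

```python
# Toy mutual-inhibition switch: x ~ M1-associated regulator, y ~ M2-associated.
# dx/dt = a/(1 + y**n) - x,  dy/dt = a/(1 + x**n) - y  (Hill repression + decay)

def simulate(x, y, a=2.0, n=4, dt=0.01, steps=5000):
    """Forward-Euler integration until the system settles into a steady state."""
    for _ in range(steps):
        dx = a / (1.0 + y ** n) - x
        dy = a / (1.0 + x ** n) - y
        x, y = x + dt * dx, y + dt * dy
    return round(x, 3), round(y, 3)

print(simulate(1.5, 0.1))   # M1-biased start -> high-x / low-y steady state
print(simulate(0.1, 1.5))   # M2-biased start -> low-x / high-y steady state
```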
A malicious alteration of the system-provided timeline can negatively affect the reliability of computer forensics. Indeed, detecting such changes and possibly reconstructing the correct timeline of events is of paramount importance for the court admissibility and logical coherence of collected evidence. However, reconstructing the correct timeline for a set of network nodes can be difficult, since an adversary has a wealth of opportunities to disrupt the timeline and to generate a fake one. This aspect is exacerbated in cloud computing, where host and guest machine-time can be manipulated in various ways by an adversary. Therefore, it is important to guarantee the integrity of the timeline of events for cloud host and guest nodes, or at least to ensure that timeline alterations do not go undetected. This paper provides several contributions. First, we survey the issues related to cloud machine-time reliability. Then, we introduce a novel architecture (CURE) aimed at providing timeline resilience to cloud nodes. Further, we implement the proposed framework and extensively test it on both a simulated environment and a real cloud. We evaluate and discuss the collected results, showing the effectiveness of our proposal.
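As one minimal, generic way to make timeline tampering detectable (a hash-chain sketch for illustration, not the actual CURE protocol): each event record commits to its predecessor's digest, so any retroactive edit invalidates verification from that point on.

```python
import hashlib, json, time

def append_event(log, payload):
    """Append a timeline event chained to the previous entry's digest."""
    prev = log[-1]["digest"] if log else "0" * 64
    entry = {"ts": time.time(), "payload": payload, "prev": prev}
    entry["digest"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

def verify(log):
    """Return True iff every link in the hash chain is intact."""
    prev = "0" * 64
    for e in log:
        body = {k: e[k] for k in ("ts", "payload", "prev")}
        if e["prev"] != prev or e["digest"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        prev = e["digest"]
    return True

log = []
append_event(log, "vm-started")
append_event(log, "user-login")
log[0]["ts"] -= 3600            # retroactive tampering with the first event
print(verify(log))              # False: the alteration is detected
```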
Cloud computing
Timeline validation
Digital forensics
Measurement and simulation
Experimental test-beds and research platforms