Innovative transport services call for greater flexibility at low cost; in many cases the solution is a demand responsive transportation system. A Demand Responsive Transport System (DRTS) requires planning travel paths (routing) and customer pick-up and drop-off times (scheduling) according to the received requests. In particular, the problem must deal with multiple vehicles, the limited capacity of the fleet vehicles and temporal constraints (time windows). A DRTS may operate in static or dynamic mode. In the static setting, all customer requests are known beforehand and the DRTS solves a Dial-a-Ride Problem (DaRP) instance to produce the tour of each bus, respecting the pick-up and delivery time windows while minimising the solution cost. In the dynamic mode, customer requests arrive over time at a control station and, consequently, the solution may also change over time. In this work, we address a Demand Responsive Transport System capable of managing incoming transport demand with a two-stage algorithm that solves a DaRP instance. The solutions provided by the heuristics are simulated in a discrete-event environment that reproduces the movement of the buses, the passengers' arrival at the stops, delays due to traffic congestion and possible anomalies in passenger behaviour. Finally, a set of performance indicators evaluates the solution planned by the heuristics. (C) 2011 Published by Elsevier Ltd. Selection and/or peer-review under responsibility of the Organizing Committee.
Discrete event simulation
Dial-a-Ride Problem
Demand Responsive Transport System
Heuristics
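To illustrate the discrete-event simulation idea underlying the evaluation environment, here is a minimal sketch (not the authors' simulator): a tiny event queue drives one bus along a planned tour and logs arrival times. All names (`Simulator`, `arrive`) and the toy tour are hypothetical.

```python
import heapq

class Simulator:
    """Tiny discrete-event engine: events are (time, seq, action) tuples
    kept in a heap and executed in chronological order."""
    def __init__(self):
        self._queue, self._seq, self.now = [], 0, 0.0

    def schedule(self, delay, action):
        heapq.heappush(self._queue, (self.now + delay, self._seq, action))
        self._seq += 1  # tie-breaker so actions never get compared

    def run(self):
        while self._queue:
            self.now, _, action = heapq.heappop(self._queue)
            action()

# Example: a bus visiting three stops, logging its arrival time at each one.
log = []
sim = Simulator()

def arrive(stop, remaining_legs):
    log.append((stop, sim.now))
    if remaining_legs:
        next_stop, travel_time = remaining_legs[0]
        sim.schedule(travel_time, lambda: arrive(next_stop, remaining_legs[1:]))

sim.schedule(5.0, lambda: arrive("A", [("B", 7.0), ("C", 4.0)]))
sim.run()
# log now holds the arrival times at stops A, B and C
```

The same event-queue skeleton extends naturally to passenger arrivals at stops and traffic delays by scheduling additional event types.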
Premixed combustion modes in compression ignition engines are studied as a promising solution to meet fuel economy targets and increasingly stringent emissions regulations. Nevertheless, PCCI combustion systems are not yet consolidated enough for practical applications. The high complexity of such combustion systems, in terms of both air-fuel charge preparation and combustion process control, requires robust and reliable numerical tools to provide adequate comprehension of the phenomena. The object of this work is the development and validation of suitable models to evaluate the effects of charge premixing levels in Diesel combustion. This activity was performed using the Lib-ICE code, a set of applications and libraries for IC engine simulations developed with the OpenFOAM® technology. In particular, a turbulence-chemistry interaction model based on the simple Eddy Dissipation Approach was introduced to account for the effects of turbulent mixing on chemical reaction rates. It is a tentative solution to represent the effects of sub-grid mixing on the chemical reaction rates when detailed reaction mechanisms are adopted. Chemical reaction rates were computed by a robust semi-implicit extrapolation method for integrating stiff Ordinary Differential Equations, with monitoring of both local and global error to adjust the step size. To reduce CPU time when detailed chemistry was used, the ISAT (in-situ adaptive tabulation) and DAC (dynamic adaptive chemistry) techniques were adopted in combination. Simulations were performed by varying the charge premixing level from the typical diesel combustion mode towards an almost completely premixed/HCCI mode using n-heptane, whose injected mass was split between port-injection and direct-injection. This allowed a detailed investigation of the mixed injection conditions that are typical of dual-fuel configurations, without employing fuels of different chemical nature, composition and ignition tendency. The choice of a single fuel was motivated by the need to isolate the effects of different premixing levels and the resulting interaction between the charge and the fuel spray. Measurements for validation were collected by means of specific experiments on a fully instrumented single-cylinder research engine, whose injection and combustion system architecture is typical of current light-duty diesel engine technology. To realize a homogeneous air-fuel charge, the intake manifold was modified to provide the desired extent of fuel port-injection.
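As a rough illustration of semi-implicit stiff integration with error-based step-size control (a much simplified stand-in for the extrapolation method used in the paper), the sketch below applies a linearly implicit Euler step with step doubling to a scalar stiff test equation; all function names and tolerances are illustrative.

```python
import math

def semi_implicit_step(f, dfdy, y, h):
    """One linearly implicit (semi-implicit) Euler step for the scalar ODE y' = f(y)."""
    return y + h * f(y) / (1.0 - h * dfdy(y))

def integrate(f, dfdy, y0, t_end, h=1e-2, tol=1e-6):
    """March from t = 0 to t_end, using step doubling (one full step vs two
    half steps) to estimate the local error and adapt the step size h."""
    t, y = 0.0, y0
    while t < t_end:
        h = min(h, t_end - t)
        full = semi_implicit_step(f, dfdy, y, h)
        half = semi_implicit_step(f, dfdy, y, h / 2)
        half = semi_implicit_step(f, dfdy, half, h / 2)
        err = abs(half - full)
        if err <= tol or h < 1e-12:
            t, y = t + h, half      # accept the more accurate two-half-step value
            if err < tol / 4:
                h *= 2              # grow the step when comfortably accurate
        else:
            h /= 2                  # reject and retry with a smaller step
    return y

# Stiff test equation: y' = -1000*(y - 1), y(0) = 0; exact solution 1 - exp(-1000 t).
y = integrate(lambda u: -1000.0 * (u - 1.0), lambda u: -1000.0, 0.0, 0.01)
```

An explicit Euler step would need h below about 2e-3 just to stay stable here; the linearly implicit step remains stable for any h, so the controller shrinks h only where accuracy, not stability, demands it.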
Statistical regularities in the rank-citation profile of scientists
Petersen Alexander M; Stanley H Eugene; Succi Sauro
Recent science of science research shows that scientific impact measures for journals and individual articles have quantifiable regularities across both time and discipline. However, little is known about the scientific impact distribution at the scale of an individual scientist. We analyze the aggregate production and impact using the rank-citation profile c_i(r) of 200 distinguished professors and 100 assistant professors. For the entire range of paper rank r, we fit each c_i(r) to a common distribution function. Since two scientists with an equivalent Hirsch h-index can have significantly different c_i(r) profiles, our results demonstrate the utility of the β_i scaling parameter, in conjunction with h_i, for quantifying individual publication impact. We show that the total number of citations C_i tallied from a scientist's N_i papers scales as C_i ~ h_i^(1+β_i). Such statistical regularities in the input-output patterns of scientists can be used as benchmarks for theoretical models of career progress.
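The scaling C_i ~ h_i^(1+β_i) can be illustrated with a toy computation. The citation profile below is invented; the paper fits β_i from the full c_i(r) curve, whereas here β is simply solved from the total citations C and the h-index.

```python
import math

def h_index(citations):
    """Hirsch h-index: the largest h such that h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def implied_beta(citations):
    """Solve C = h**(1 + beta) for beta; illustrative only, assumes h > 1."""
    h, total = h_index(citations), sum(citations)
    return math.log(total) / math.log(h) - 1.0

profile = [100, 50, 25, 12, 6, 3, 1, 1, 0]  # hypothetical ranked profile c_i(r)
h = h_index(profile)         # five papers have at least 5 citations each
beta = implied_beta(profile)
```

Two profiles with the same h but different tails (e.g. many moderately cited papers vs a few blockbusters) yield different β, which is exactly why the paper pairs β_i with h_i.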
We provide numerical evidence that electronic preturbulent phenomena in graphene could be observed, under current experimental conditions, through current fluctuations, echoing the detachment of vortices past localized micron-sized impurities. Vortex generation, due to micron-sized constriction, is also explored with special focus on the effects of relativistic corrections to the normal Navier-Stokes equations. These corrections are found to cause a delay in the stability breakout of the fluid as well as a small shift in the vortex shedding frequency.
Merging GPS and Atmospherically Corrected InSAR Data to Map 3-D Terrain Displacement Velocity
Catalao Joao; Nico Giovanni; Hanssen Ramon; Catita Cristina
A method to derive accurate, spatially dense maps of 3-D terrain displacement velocity is presented. It is based on merging terrain displacement velocities estimated from time series of interferometric synthetic aperture radar (InSAR) data acquired along ascending and descending orbits with repeated GPS measurements. The method uses selected persistent scatterers (PSs) and GPS measurements of the horizontal velocity. An important step of the proposed method is the mitigation of the impact of atmospheric phase delay in InSAR data. It is shown that accurate vertical velocities at PS locations can be retrieved if smooth horizontal velocity variations can be assumed. Furthermore, the mitigation of atmospheric effects reduces the spatial dispersion of vertical velocity estimates, resulting in a more spatially regular 3-D velocity map. The proposed methodology is applied to the case study of the Azores islands, which are characterized by important tectonic phenomena.
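A simplified version of the LOS-to-vertical decomposition step can be sketched as follows, assuming a geometry in which the horizontal motion is purely east-west and satellite headings are ignored (the actual method also handles the full imaging geometry and the atmospheric corrections discussed above):

```python
import math

def vertical_from_los(v_los, v_east, incidence_deg, ascending):
    """Remove the (GPS-derived) east velocity component from a LOS velocity
    and project the remainder onto the vertical. Simplified geometry: the
    east component enters with opposite sign for ascending vs descending passes."""
    s = 1.0 if ascending else -1.0
    th = math.radians(incidence_deg)
    return (v_los - s * v_east * math.sin(th)) / math.cos(th)

# Synthetic check: build an ascending LOS velocity from known components
# v_up = 2.0 mm/yr and v_east = 1.0 mm/yr at a 34-degree incidence angle.
th = math.radians(34.0)
v_los_asc = 2.0 * math.cos(th) + 1.0 * math.sin(th)
v_up = vertical_from_los(v_los_asc, 1.0, 34.0, ascending=True)
```

With both ascending and descending tracks available, the two estimates of v_up can be averaged or combined in a least-squares sense, which is where the PS selection and smoothness assumption of the paper come into play.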
Integrating Omics data for signaling pathways, interactome reconstruction, and functional analysis.
Tieri P; de la Fuente A; Termanini A; Franceschi C
Omics data and computational approaches today provide a key to disentangling the complex architecture of living systems. Integrating and analyzing data of different natures makes it possible to extract meaningful representations of signaling pathways and protein interaction networks, helpful in achieving an increased understanding of such intricate biochemical processes. Here we describe a general workflow, and the related hurdles, for integrating online Omics data and analyzing the reconstructed representations using the available computational platforms.
signaling pathway
interactome
data integration
systems biology
bioinformatics
In this paper, we study two earthquakes: the April 6th 2009 earthquake of L'Aquila in the region of Abruzzo (Italy) and the 1997 Colfiorito earthquake in the regions of Umbria and Marche (Italy). The data sets of these two earthquakes were analysed in both the time and space domains. For the time domain we used statistical methods and models, both parametric and non-parametric. For the space domain, we used Mathematical Morphology filters. The time domain analysis provides evidence of a possible correlation between seismic activity and the tides of the Earth's crust. The results show that the daily number of earthquakes in the sequences preceding and following the April 6th 2009 L'Aquila earthquake, and in the sequence following the 1997 Colfiorito earthquake, has a periodic component of occurrence with a period of about 7 days. The maxima of this component appear to occur at a position of the Moon with respect to the Earth and the Sun corresponding to approximately 3 days before the four main Moon phases. The space domain analysis indicates that the foreshock activity in both earthquakes is clustered and concentrated. Furthermore, in each of the two earthquakes the clusters are located at about 3 kilometers from the epicentre of the main shock.
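Detecting a ~7-day periodicity in daily event counts is a spectral-analysis task; a naive periodogram of the kind one might use is sketched below on synthetic data (the counts are invented, not the catalogue analysed in the paper):

```python
import cmath
import math

def periodogram(counts):
    """Naive DFT power spectrum of a daily count series, keyed by period in days.
    The mean is removed so the zero-frequency term does not dominate."""
    n = len(counts)
    mean = sum(counts) / n
    x = [c - mean for c in counts]
    power = {}
    for k in range(1, n // 2 + 1):
        s = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        power[n / k] = abs(s) ** 2 / n
    return power

# Synthetic series: 70 days of counts with an embedded 7-day cycle.
counts = [10 + 4 * math.cos(2 * math.pi * t / 7) for t in range(70)]
power = periodogram(counts)
best_period = max(power, key=power.get)
```

Real catalogues are noisier and non-stationary, so in practice the significance of any spectral peak must also be tested, e.g. against shuffled surrogates.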
Purpose - The demographic risk is the risk due to the uncertainty in the demographic scenario assumptions by which life insurance products are designed and valued. The uncertainty lies in both the accidental (insurance risk) and the systematic (longevity risk) deviations of the number of deaths from the value anticipated for it. The latter component gives rise to the risk due to the randomness in the choice of the survival model for valuations (model risk or projection risk). While the insurance risk component can be assumed negligible for well-diversified portfolios, as in the case of pension annuities, longevity risk is crucial in actuarial valuations. The question is particularly decisive in contexts in which the longevity phenomenon of the population is strong and pension annuity portfolios constitute a meaningful slice of the financial market, both typical elements of Western economies. The paper focuses on the solvency appraisal for a portfolio of life annuities, examining in depth the impact of the demographic risk through suitable risk indexes apt to describe its evolution in time.
Design/methodology/approach - The financial quantity proposed for representing the economic wealth of the life insurance company is the stochastic surplus, and the paper analyses the impact on it of different demographic assumptions by means of risk indicators such as the projection risk index, the quantile surplus valuation and the ruin probability. In the proposed models, the longevity risk is considered within a stochastic scenario for the financial risk component, in order to account for their interactions as well. To provide practical guidance for portfolio risk management, several numerical applications clarify the practical meaning of the models in the solvency context.
Findings - This paper studies the impact of the systematic demographic risk on the portfolio surplus, taking into account its interaction with the financial risk sources. In this perspective, the internal risk profile of a life annuity portfolio is investigated in depth by means of suitable risk indexes: from a solvency analysis perspective, some possible scenarios for the evolution of death rates (generated by different survival models) are considered, and the paper evaluates the impact on the portfolio surplus caused by different choices of the demographic model. The first index is deduced from a variance decomposition formula; the others involve conditional quantile calculations and the ruin probability. Such indexes constitute benchmarks whose conjoined use provides useful information for meeting the solvency requirements.
Originality/value - With respect to the recent actuarial literature, in which the most important contribution on surplus analysis has been given by Lisenko et al., where the analysis focuses on the financial aspect applied to portfolios of temporary and endowment contracts, this paper considers life annuity portfolios, taking into account the effect of the systematic demographic risk and its interactions with the financial risk components.
longevity risk
model risk
stochastic surplus
quantile surplus
risk index
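A Monte Carlo sketch of a ruin-probability indicator of the kind described above, with entirely hypothetical portfolio and mortality parameters (the paper's survival models and financial scenarios are far richer), might look like:

```python
import random

def ruin_probability(n_paths=500, years=30, surplus0=4000.0, annuitants=200,
                     annual_payment=1.0, base_q=0.02,
                     rate_mu=0.03, rate_sigma=0.02, seed=42):
    """Estimate the probability that the stochastic surplus of a life annuity
    portfolio becomes negative. A single lognormal mortality shock per path
    stands in for the systematic longevity (model) risk; annual returns are
    Gaussian. All parameter values are illustrative."""
    rng = random.Random(seed)
    ruined_paths = 0
    for _ in range(n_paths):
        alive, surplus = annuitants, surplus0
        q = base_q * rng.lognormvariate(0.0, 0.15)  # systematic mortality level
        for _ in range(years):
            r = rng.gauss(rate_mu, rate_sigma)      # stochastic annual return
            alive -= sum(rng.random() < q for _ in range(alive))
            surplus = surplus * (1.0 + r) - alive * annual_payment
            if surplus < 0.0:
                ruined_paths += 1
                break
    return ruined_paths / n_paths

p_ruin = ruin_probability()
```

Because the mortality shock is drawn once per path, paths differ systematically in their death rates, which is exactly the longevity/model-risk component the paper separates from the accidental insurance risk.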
In this paper we analyze the performance of the three MIPAS (Michelson Interferometer for Passive Atmospheric Sounding) observation modes that sound the Upper-Troposphere/Lower-Stratosphere (UT/LS) region. The two-dimensional (2-D) tomographic retrieval approach is assumed to derive the atmospheric field of geophysical parameters. For each observation mode we have calculated the 2-D distribution of the information load quantifier relative to the main MIPAS targets. The performance of the observation modes has been evaluated in terms of strength and spatial coverage of the information-load distribution along the full orbit. The indications of the information-load analysis have been validated with simulated retrievals based on the observational parameters of real orbits. In the simulation studies we have assessed the precision and the spatial (both horizontal and vertical) resolution of the retrieval products. The performance of the three observation modes has been compared for the MIPAS main products in both the UT/LS and the extended altitude range. This study shows that the two observation modes that were specifically designed for the UT/LS region are actually competitive with the third one, designed for the whole stratosphere, up to altitudes that far exceed the UT/LS. In the UT/LS the performance of the two specific observation modes is comparable, even if the best performance in terms of horizontal resolution is provided by the observation mode that was excluded by the European Space Agency (ESA) from the current MIPAS duty cycle. This paper reports the first application of the information-load analysis and highlights the value of this approach for making qualitative assessments about retrieval potential and the selection of the retrieval grid.
We present a computational framework for multi-scale simulations of real-life biofluidic problems. The framework makes it possible to simulate suspensions composed of hundreds of millions of bodies interacting with each other and with a surrounding fluid in complex geometries. We apply the methodology to the simulation of blood flow through the human coronary arteries, with a spatial resolution comparable to the size of red blood cells and physiological levels of hematocrit (the red blood cell volume fraction). The simulation exhibits excellent scalability on a cluster of 4000 M2050 Nvidia GPUs and achieves close to 1 Petaflop aggregate performance, which demonstrates the capability to predict the evolution of biofluidic phenomena of clinical significance. The combination of novel mathematical models, computational algorithms, hardware technology, code tuning and optimization required to achieve these results is presented. Copyright 2011 ACM.
2011. Contribution in conference proceedings; metadata-only access.
Front propagation in Rayleigh-Taylor systems with reaction
Scagliarini A; Biferale L; Mantovani F; Pivanti M; Pozzati F; Sbragaglia M; Schifano SF; Toschi F; Tripiccione R
A special feature of Rayleigh-Taylor systems with chemical reactions is the competition between turbulent mixing and the "burning processes", which leads to a highly non-trivial dynamics. We studied the problem by performing high-resolution numerical simulations of a 2d system, using a thermal lattice Boltzmann (LB) model. We spanned the various regimes that emerge as the relative chemical/turbulent time scales change, from slow to fast reaction; in the former case we found numerical evidence of an enhancement of the front propagation speed (with respect to the laminar case), and we provide a phenomenological argument to explain the observed behaviour. When the reaction is very fast, instead, the formation of sharp fronts separating patches of pure phases leads to an increase of intermittency in the small-scale statistics of the temperature field.
Rayleigh-Taylor turbulence
Reactive flows
Front propagation
We study the statistics of curvature and torsion of Lagrangian trajectories from direct numerical simulations of homogeneous and isotropic turbulence (at Re_λ ≈ 280) in order to extract information on the geometry of small-scale coherent structures in turbulent flows. We find that, as previously observed by Braun et al. (W. Braun, F. De Lillo, and B. Eckhardt, Geometry of particle paths in turbulent flows, J. Turbul. 7 (2006), p. 62) and Xu et al. (H. Xu, N.T. Ouellette, and E. Bodenschatz, Curvature of Lagrangian trajectories in turbulence, Phys. Rev. Lett. 98 (2007), p. 050201), the high-curvature statistics are dominated by large-scale flow reversals where the velocity magnitude assumes very low values. We show that flow-reversal events are characterized by very short correlation times. We introduce both time filtering and a threshold on the minimum velocity amplitude in order to disentangle intense curvature events generated by genuine small-scale vortex structures from simple flow reversals. We present for the first time measurements of torsion statistics in fully developed turbulent flows. By studying the joint statistics of curvature and torsion, we present further evidence that intense and persistent events are dominated by helical-type trajectories.
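The curvature and torsion of a trajectory follow from the standard formulas κ = |v × a| / |v|³ and τ = (v × a) · j / |v × a|², with v, a, j the velocity, acceleration and jerk. A minimal sketch, checked against an analytic helix r(t) = (cos t, sin t, c t), for which κ = 1/(1+c²) and τ = c/(1+c²):

```python
import math

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

def curvature_torsion(v, a, j):
    """Curvature and torsion of a trajectory from velocity, acceleration and jerk."""
    va = cross(v, a)
    kappa = norm(va) / norm(v) ** 3
    tau = dot(va, j) / dot(va, va)
    return kappa, tau

# Analytic derivatives of the helix r(t) = (cos t, sin t, c t) at t = 1.3, c = 0.5.
c, t = 0.5, 1.3
v = (-math.sin(t), math.cos(t), c)
a = (-math.cos(t), -math.sin(t), 0.0)
j = (math.sin(t), -math.cos(t), 0.0)
kappa, tau = curvature_torsion(v, a, j)
```

In a DNS post-processing pipeline the derivatives v, a, j would instead come from finite differences of particle positions, which is where the time filtering mentioned above matters: the κ formula diverges as |v| → 0, the signature of the flow-reversal events.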
Biochips for Regenerative Medicine: Real-time Stem Cell Continuous Monitoring as Inferred by High-Throughput Gene Analysis
Zhu Lisha; del Vecchio Giovanna; de Micheli Giovanni; Liu Yuanhua; Carrara Sandro; Calza Laura; Nardini Christine
Regenerative medicine is a novel clinical branch aiming at the cure of diseases by replacement of damaged tissues. The crucial use of stem cells makes this area rich in challenges, given the poorly understood mechanisms of differentiation. One highly needed and yet unavailable technology should allow us to monitor the exact (metabolic) state of stem cell differentiation to maximize the effectiveness of their implant in vivo. This is challenged by the fact that not all relevant metabolites in stem cell differentiation are known, and not all metabolites can currently be continuously monitored. To bring advancements in this direction, we propose the enhancement and integration of two available technologies into a general pipeline: namely, a high-throughput biochip for gene expression screening, to pre-select the variables that are most likely to be relevant in the identification of the stem cells' state, and a low-throughput biochip for continuous monitoring of cell metabolism with highly sensitive carbon nanotube-based sensors. Intriguingly, in addition to involving multidisciplinary expertise (medicine, molecular biology, computer science, engineering, and physics), this whole pipeline heavily relies on biochips: it starts from the use of high-throughput ones, whose output, in turn, becomes the basis for the design of low-throughput, highly sensitive biochips. Future research is warranted in this direction to develop and validate the proposed device.
Community structure is an important topological phenomenon typical of complex networks. Accurately unveiling communities is thus crucial to understand and capture the many-faceted nature of complex networks. Communities in real-world networks frequently overlap, i.e. nodes can belong to more than one community. Therefore, quantitatively evaluating the extent to which a node belongs to a community is a key step in finding overlapping boundaries between communities. Non-negative matrix factorization (NMF) is a technique that has been used to detect overlapping communities. However, previous efforts in this direction present (i) limitations in the interpretation of meaningful overlaps and (ii) a lack of accuracy in predicting the correct number of communities. In this paper, a hybrid NMF method that overcomes both limitations is presented. This approach effectively estimates the number of communities and is more interpretable and more accurate in identifying overlapping communities in undirected networks than previous approaches. Validations on synthetic and real-world networks show that the proposed community learning framework can effectively reveal overlapping communities in complex networks.
Complex networks
community structure
overlapping community
non-negative matrix factorization
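The NMF idea behind overlapping community detection can be sketched with a symmetric factorization A ≈ H Hᵀ of the adjacency matrix, where row i of H holds node i's membership weights and a node with several large weights belongs to several communities. This is a generic damped multiplicative-update scheme, not the paper's hybrid method; the graph and parameters are toy choices.

```python
import random

def transpose(M):
    return [list(col) for col in zip(*M)]

def matmul(A, B):
    Bt = transpose(B)
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def symnmf(A, k, iters=200, seed=0):
    """Approximate A ~ H H^T with non-negative H (n x k) via damped
    multiplicative updates. Row i of H gives node i's (possibly
    overlapping) community membership weights."""
    n = len(A)
    rng = random.Random(seed)
    H = [[rng.random() for _ in range(k)] for _ in range(n)]
    for _ in range(iters):
        AH = matmul(A, H)                          # n x k numerator
        HHtH = matmul(H, matmul(transpose(H), H))  # n x k denominator
        for i in range(n):
            for c in range(k):
                # damped update keeps H non-negative and stabilizes convergence
                H[i][c] *= 0.5 + 0.5 * AH[i][c] / (HHtH[i][c] + 1e-9)
    return H

# Toy graph: two triangles {0,1,2} and {2,3,4} overlapping at node 2.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)]
A = [[0.0] * 5 for _ in range(5)]
for i, j in edges:
    A[i][j] = A[j][i] = 1.0
H = symnmf(A, 2)
```

On this graph the shared node 2 typically ends up with substantial weight in both columns of H, which is the quantitative notion of overlap the abstract refers to; estimating the right k is the part the paper's hybrid method addresses.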