Switching dynamics in cholesteric liquid crystal emulsions
F Fadda
;
G Gonnella
;
D Marenduzzo
;
E Orlandini
;
A Tiribocchi
In this work we numerically study the switching dynamics of a 2D cholesteric emulsion droplet immersed in an isotropic fluid and subject to an electric field, which is either uniform or rotating at constant speed. The overall dynamics depend strongly on the magnitude and direction (with respect to the cholesteric axis) of the applied field, on the anchoring of the director at the droplet surface, and on the elasticity. If the surface anchoring is homeotropic and a uniform field is applied parallel to the cholesteric axis, the director undergoes large elastic deformations and the droplet typically gets stuck in metastable states rich in topological defects. When the surface anchoring is tangential, the effects of the electric field are overall less dramatic, as only a small number of topological defects form at equilibrium. Applying the field perpendicular to the cholesteric axis usually has negligible effects on the defect dynamics. A rotating electric field of varying frequency fosters the rotation of the defects, and of the droplet as well, typically at a lower speed than that of the field due to the inertia of the liquid crystal. If the surface anchoring is homeotropic, a periodic motion is found. Our results represent a first step towards understanding the dynamical response of a cholesteric droplet to an electric field and its possible application in designing novel liquid crystal-based devices.
Cholesteric liquid crystals
Electric field
Lattice Boltzmann simulations
The positivity-preservation property of numerical methods applied to initial and/or boundary value ODE and PDE systems is a research topic of considerable interest. Positivity of the numerical flow is a fundamental aspect in numerous applications, ranging from computational biology and molecular dynamics to ecological modeling, wherever it is essential that the quantities involved (populations, densities, concentrations) do not take negative values.
This condition is generally not satisfied by standard methods (Runge-Kutta or multistep), unless restrictions on the integration step size, sometimes quite severe, are imposed. Even within geometric integration, the conservation properties enjoyed by numerical flows, such as the energy of the system and symplecticity, do not automatically guarantee positivity of the solutions. In [5], also using backward analysis techniques, conditions are identified that guarantee the positivity of the symplectic Euler method and of its explicit variant when applied to the Lotka-Volterra equation. However, the restrictions on the integration step size reduce the efficiency of the numerical methods so markedly as to make them effectively unusable in real applications. The most recent literature has therefore focused on the construction of numerical integrators that guarantee the positivity of the numerical flow by construction. Among the works on this topic, we cite [8, 6], in which splitting and composition techniques are proposed for the solution of differential models. The authors of [8] prove the positivity of a second-order scheme applied to a generic semilinear parabolic problem in a Banach space. Since the positivity of splitting techniques is guaranteed by positive numerical semiflows, the authors of [6] propose a splitting procedure applied to the dynamical system transformed by a logarithmic change of variables. Finally, in the broader setting of nonstandard integration, the literature offers symplectic and positive integrators capable of also preserving the local stability of the solution; see in particular Mickens' method and Mounim's methods applied to the Lotka-Volterra system [11].
A further approach to the de facto preservation of positivity can draw on non-Newtonian calculus. In the 1970s and 1980s, Michael Grossman and Robert Katz introduced calculi based on generalizing the operations of differentiation and integration. Depending on the choice of suitable parameters, infinitely many variants of non-Newtonian calculus can be constructed: among these, the approach based on multiplicative derivative and integral operators underlies multiplicative calculus. This tool has been rediscovered and used in recent years in several fields of the applied sciences (see, e.g., [7]), owing to its characteristic of preserving positivity by construction. Some authors have already generalized the class of Runge-Kutta methods to multiplicative calculus [1] or, more generally, to non-Newtonian calculus [9]. The potential of non-Newtonian calculus in the derivation and analysis of symplectic, positivity-preserving numerical methods, however, remains unexplored.
Unattended Wireless Sensor Networks (UWSNs), characterized by the intermittent presence of the sink, are exposed to attacks aiming at tampering with the sensors and the data they store. To prevent an adversary from erasing any sensed data before the sink collects them, it is common practice to rely on data replication. However, identifying the most suitable replication rate is challenging: data should be redundant enough to avoid data loss, but not so redundant as to pose an excessive burden on the limited resources of the sensors. As noted before in the literature, this problem is similar to finding the minimum infection rate that makes a disease endemic in a population. Yet, unlike previous attempts to leverage this parallel, we argue that model and system parameters must be carefully bound according to conservative and realistic assumptions on the behavior of the network, further taking into account possible statistical fluctuations. In this paper, we therefore refine the connection between the Susceptible-Infected-Susceptible (SIS) epidemic model and the survivability of sensed data in UWSNs. In particular, based on probabilistic data replication and deletion rates, we identify proper conditions to guarantee that sensed information becomes endemic. In both the full visibility model (i.e., unlimited transmission range) and the geometric one (i.e., limited transmission range), the proposed approach achieves: (i) data survivability, (ii) optimal usage of sensor resources, and (iii) fast collection time. Building on advanced probabilistic tools, we provide theoretically sound results, which are further supported by an extensive experimental campaign performed on synthetically generated networks. The obtained results show the quality of our model and the viability of the proposed solution.
Unattended Wireless Sensor Network
Epidemic models
Data survivability
Security
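The epidemic analogy above can be sketched with the mean-field SIS dynamics i' = βi(1-i) - γi, where (in this illustrative reading, not the paper's probabilistic model) β plays the role of the replication rate and γ of the deletion rate: replicated data survive only when β/γ > 1, settling at the endemic fraction 1 - γ/β.

```python
def sis_endemic_fraction(beta, gamma, i0=0.01, dt=0.01, steps=100000):
    """Mean-field SIS dynamics i' = beta*i*(1-i) - gamma*i, integrated by
    forward Euler; returns the long-run infected (replicated-data) fraction."""
    i = i0
    for _ in range(steps):
        i += dt * (beta * i * (1.0 - i) - gamma * i)
    return i

# Above threshold (beta/gamma > 1): endemic equilibrium i* = 1 - gamma/beta
assert abs(sis_endemic_fraction(2.0, 1.0) - 0.5) < 1e-3
# Below threshold: the replicated data (infection) die out
assert sis_endemic_fraction(0.5, 1.0) < 1e-6
```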
The cube attack is a flexible cryptanalysis technique with a simple and fascinating theoretical foundation. It combines offline exhaustive searches over selected tweakable public/IV bits (the sides of the "cube") with an online key-recovery phase. Although virtually applicable to any cipher, and generally praised by the research community, the real potential of the attack is still in question, and no implementation so far has succeeded in breaking a real-world strong cipher. In this paper, we present, validate, and analyze the first thorough implementation of the cube attack on a GPU cluster. The framework is conceived to be usable out of the box for any cipher featuring up to a 128-bit key and IV, and easily adaptable to larger keys/IVs at just the cost of some fine performance tuning, mostly related to memory allocation. As a test case, we consider previous state-of-the-art results against a reduced-round version of a well-known cipher (Trivium). We evaluate the computational speedup with respect to a CPU-parallel benchmark, the performance dependence on system parameters and GPU architectures (Nvidia Kepler vs. Nvidia Pascal), and the scalability of our solution on multi-GPU systems. All design choices are carefully described, and their respective advantages and drawbacks are discussed. By exhibiting the benefits of a complete GPU-tailored implementation of the cube attack, we provide novel and strong elements in support of the general feasibility of the attack, thus paving the way for future work in the area.
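To make the cube-summation core of the attack concrete, here is a toy sketch on a contrived Boolean function with a known ANF (entirely illustrative; real attacks treat the cipher as a black box and sum over far larger cubes): XORing the output over all assignments of the cube bits cancels every monomial except those containing the full cube, leaving the superpoly in the key bits.

```python
from itertools import product

def cipher(key, iv):
    """Toy 'cipher' output bit with a known ANF (purely illustrative):
    f = iv0*iv1*key0 XOR iv0*key1 XOR iv1 XOR key2."""
    k0, k1, k2 = key
    v0, v1 = iv
    return (v0 & v1 & k0) ^ (v0 & k1) ^ v1 ^ k2

def cube_sum(key, cube_bits=2):
    """XOR the output over all 2^d assignments of the cube IV bits;
    the result equals the superpoly of the chosen cube."""
    s = 0
    for iv in product((0, 1), repeat=cube_bits):
        s ^= cipher(key, iv)
    return s

# For the cube {iv0, iv1}, the superpoly of f is the single key bit key0:
for key in product((0, 1), repeat=3):
    assert cube_sum(key) == key[0]
```

Here the superpoly is linear (a single key bit), so one online cube summation with the unknown key directly reveals key0; the offline phase of a real attack searches for cubes whose superpolys are linear in this sense.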
Motivated by the upcoming Internet of Things, designing lightweight authentication protocols for resource-constrained devices has been among the main research directions of the last decade. Current solutions in the literature attempt either to improve the computational efficiency of cryptographic authentication schemes, or to build a provably secure scheme relying on the hardness of a specific mathematical problem. In line with the principles of information-theoretic security, in this paper we present a novel challenge-response protocol, named SLAP, whose authentication tokens leak only limited information about the secret key while being very efficient to generate. We support our proposal with formal combinatorial arguments, further sustained by numerical evaluations, that clarify the impact of system parameters on the security of the protocol, yielding evidence that SLAP allows performing a reasonable number of secure authentication rounds with the same secret key.
Tactical Production and Lot Size Planning with Lifetime Constraints: A Comparison of Model Formulations
Raiconi Andrea
;
Pahl Julia
;
Gentili Monica
;
Voß Stefan
;
Cerulli Raffaele
In this work, we face a variant of the capacitated lot sizing problem. This is a classical problem addressing the issue of aggregating lot sizes for a finite number of discrete periodic demands that need to be satisfied, setting up production resources and possibly creating inventories, while minimizing the overall cost. In the proposed variant we take into account lifetime constraints, which model products with fixed maximum shelf lives due to several possible reasons, including regulations or technical obsolescence. We propose four formulations, derived from the literature on the classical version of the problem and adapted to the proposed variant. An extensive experimental phase on two datasets from the literature is used to test and compare the performance of the proposed formulations.
lifetime constraints
lot sizing
mathematical models
perishability
Tactical production planning
An exact algorithm to extend lifetime through roles allocation in sensor networks with connectivity constraints
Carrabs Francesco
;
Cerulli Raffaele
;
D'Ambrosio Ciriaco
;
Raiconi Andrea
We face the problem of optimally scheduling the activities in a wireless sensor network so as to ensure that, at each instant of time, the activated sensors can monitor all points of interest (targets) and route the collected information to a processing facility. Each sensor is allocated to a role depending on whether it is actually used to monitor the targets, to forward information, or kept idle, leading to different battery consumption rates. We propose a column generation algorithm that embeds a highly efficient genetic metaheuristic for the subproblem. Moreover, to optimally solve the subproblem, we introduce a new formulation with fewer integer variables than a previous one proposed in the literature. Finally, we propose a stopping criterion to interrupt the optimal resolution of the subproblem as soon as a favorable solution is found. The results of our computational tests show that our algorithm consistently outperforms previous approaches in the literature, and also improves the best results known to date on some benchmark instances.
Exact and heuristic approaches for the maximum lifetime problem in sensor networks with coverage and connectivity constraints
Carrabs Francesco
;
Cerulli Raffaele
;
D'Ambrosio Ciriaco
;
Raiconi Andrea
The aim of the Connected Maximum Lifetime Problem is to define a schedule for the activation intervals of the sensors deployed inside a region of interest, such that at all times the activated sensors can monitor a set of interesting target locations and route the collected information to a central base station, while maximizing the total amount of time over which the sensor network can be operational. Both complete and partial coverage of the targets are taken into account. To optimally solve the problem, we propose a column generation approach which makes use of an appropriately designed genetic algorithm to overcome the difficulty of solving the subproblem to optimality at each iteration. Moreover, we also devise a heuristic by stopping the column generation procedure as soon as the columns found by the genetic algorithm do not improve the incumbent solution. Comparisons with previous approaches proposed in the literature show our algorithms to be highly competitive, both in terms of solution quality and computational time.
Column generation
Genetic algorithm
Maximum lifetime
Partial coverage
Steiner tree
Wireless sensor network
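To make the column generation scheme concrete, the toy sketch below shows one exact pricing step for the max-lifetime master LP (maximize total cover activation time subject to sensor battery limits): given dual prices on the battery constraints, pricing seeks the cover of minimum total dual weight, and any cover with weight below 1 has positive reduced cost and enters the master. The instance, dual values, and function name are hypothetical, and brute-force enumeration stands in for the genetic algorithm used in the papers.

```python
from itertools import combinations

def price_cover(duals, coverage, targets):
    """Exact pricing: find the cover (subset of sensors jointly monitoring
    all targets) minimizing the sum of sensor duals. If that minimum is < 1,
    the cover has reduced cost 1 - sum(duals) > 0 and enters the master LP."""
    best, best_w = None, float("inf")
    sensors = list(coverage)
    for r in range(1, len(sensors) + 1):
        for c in combinations(sensors, r):
            if set().union(*(coverage[s] for s in c)) == targets:
                w = sum(duals[s] for s in c)
                if w < best_w:
                    best, best_w = set(c), w
    return best, best_w

coverage = {0: {0}, 1: {1}, 2: {0, 1}}   # hypothetical: sensor -> targets seen
duals = {0: 0.3, 1: 0.3, 2: 0.8}         # hypothetical dual prices
cover, w = price_cover(duals, coverage, {0, 1})
assert cover == {0, 1} and abs(w - 0.6) < 1e-9   # weight < 1: add this column
```

In the full algorithm this step alternates with re-solving the restricted master LP until no cover with weight below 1 exists, certifying LP optimality.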
Column Generation Embedding Carousel Greedy for the Maximum Network Lifetime Problem with Interference Constraints
Carrabs Francesco
;
Cerrone Carmine
;
D'Ambrosio Ciriaco
;
Raiconi Andrea
We aim to maximize the operational time of a network of sensors which are used to monitor a predefined set of target locations. The classical approach proposed in the literature consists in identifying subsets of sensors (covers) that can individually monitor the targets, and in assigning appropriate activation times to each cover. Since sensors may belong to multiple covers, it is important to make sure that their overall battery capacities are not violated. We consider additional constraints that prohibit certain sensors from appearing in the same cover, since they would interfere with each other. We propose a Column Generation approach in which the pricing subproblem is solved either exactly or heuristically by means of a recently introduced technique for enhancing basic greedy algorithms, known as Carousel Greedy. Our experiments show the effectiveness of this approach.
Carousel greedy
Column generation
Maximum lifetime problem
Prolonging lifetime in wireless sensor networks with interference constraints
Carrabs Francesco
;
Cerulli Raffaele
;
D'Ambrosio Ciriaco
;
Raiconi Andrea
In this work, we consider a scenario in which we have to monitor some locations of interest in a geographical area by means of a wireless sensor network. Our aim is to keep the network operational for as long as possible, while preventing certain sensors from being active simultaneously, since they would interfere with one another, causing data loss and retransmissions, and overall degrading the throughput and efficiency of the network. We propose an exact approach based on column generation, as well as a heuristic algorithm to solve its separation problem. Computational tests show that our approach is effective, and that introducing our heuristic into the Column Generation framework yields significant gains in terms of required computational effort.
According to the European Charter for Researchers, «all researchers should ensure [...] that the results of their research are disseminated and exploited, e.g. communicated, transferred into other research settings or, if appropriate, commercialised ...». It is therefore part of the researchers' mission to raise the general public's awareness of science. This need is further emphasized by a 2010 Eurobarometer survey: society is strongly interested in science but, at the same time, is often scared by the risks connected with new technologies. Moreover, irrational attitudes towards science are fostered by widespread scientific illiteracy. The result is a remarkable distance between the community of scientists and society at large. Mathematics has a peculiar position in this context: on the one hand, it is seen as less ``dangerous'' than other sciences, as it is not directly related to current issues perceived as controversial and potentially risky (for example, GMOs or nuclear power). On the other hand, it is too often seen as a dry, cold discipline, very far from everyday life, whose results were established by who knows whom millennia ago and are not open to revision. One more reason to communicate it. In a time when innovation, technological progress and, ultimately, the well-being of a society depend decisively on the mathematical culture that this society can express, widespread ignorance of the basics of mathematics is politically, socially and culturally dangerous: raising the percentage of people who master at least those basics can be an important engine for accelerating the transition to an authentic ``knowledge society''.
Systems biology addresses the shift in interest from identifying individual molecules to characterizing the many components of biological samples and how they interact. Mathematical, statistical, and computational methods have emerged to deal with the biological complexity exposed in recent years by the massive production of high-throughput data through "omics" technologies. This chapter discusses powerful mathematical networks and modeling used to identify key components related to rheumatoid arthritis and to predict the response of different individuals to infections. As a consequence of a better understanding of biological processes, this chapter also presents the creation of new specific devices to diagnose, treat, and prevent diseases. These non-natural systems, produced by the insertion of genetic devices into a cell or into cell-free systems, or even by editing a genome, are part of the field of synthetic biology. The use of synthetic molecules or systems is discussed in connection with attempts to diagnose and treat cancer, Lyme disease, Ebola, and human immunodeficiency virus infection, among others. The chapter describes a diversity of ways in which systems and synthetic biology may help understand and control diseases.
systems biology, synthetic biology, mathematical modeling, biological systems
In this work, we consider a special choice of sliding vector field on the intersection of two co-dimension 1 manifolds. The proposed vector field, which belongs to the class of Filippov vector fields, will be called the moments vector field, and we will call the associated solution trajectory the moments trajectory. Our main result shows that the moments vector field is a well-defined, smoothly varying, Filippov sliding vector field on the intersection Σ of two discontinuity manifolds, under general attractivity conditions on Σ. We also examine the behavior of the moments trajectory at first-order exit points, and show that it exits smoothly at these points. Numerical experiments illustrate our results and contrast the present choice with other choices of Filippov sliding vector field.
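For context, on a single co-dimension 1 manifold $\Sigma = \{x : h(x) = 0\}$ the classical Filippov sliding vector field is the unique convex combination of the two one-sided fields $f_1, f_2$ that is tangent to $\Sigma$:

```latex
F_s = (1-\alpha)\, f_1 + \alpha\, f_2,
\qquad
\alpha = \frac{\nabla h \cdot f_1}{\nabla h \cdot (f_1 - f_2)},
```

so that $\nabla h \cdot F_s = 0$ by construction. On the intersection of two (or more) such manifolds this convex combination is no longer uniquely determined, which is precisely the ambiguity that selections such as the moments vector field are designed to resolve.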
In this paper, we consider selection of a sliding vector field of Filippov type on a discontinuity manifold Σ of co-dimension 3 (intersection of three co-dimension 1 manifolds). We propose an extension of the moments vector field to this case, and—under the assumption that Σ is nodally attractive—we prove that our extension delivers a uniquely defined Filippov vector field. As it turns out, the justification of our proposed extension requires establishing invertibility of certain sign matrices. Finally, we also propose the extension of the moments vector field to discontinuity manifolds of co-dimension 4 and higher.
We propose a hybrid tree-finite difference method in order to approximate the Heston model. We prove the convergence by embedding the procedure in a bivariate Markov chain and we study the convergence of European and American option prices. We finally provide numerical experiments that give accurate option prices in the Heston model, showing the reliability and the efficiency of the algorithm.
tree methods
finite differences
Heston model
European and American options
We study a hybrid tree/finite-difference method which allows us to obtain efficient and accurate European and American option prices in the Heston Hull-White and Heston Hull-White2d models. Moreover, as a by-product, we provide a new simulation scheme to be used for Monte Carlo evaluations. Numerical results show the reliability and the efficiency of the proposed methods.
stochastic volatility; stochastic interest rate; tree methods; finite difference; Monte Carlo; European and American options
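For orientation, the sketch below prices a European call under plain Heston with a standard full-truncation Euler Monte Carlo scheme. This is a textbook scheme with arbitrary illustrative parameters, not the hybrid tree/finite-difference method or the new simulation scheme proposed in the paper.

```python
import numpy as np

def heston_call_mc(s0=100.0, k=100.0, t=1.0, r=0.0, v0=0.04,
                   kappa=2.0, theta=0.04, xi=0.3, rho=-0.7,
                   n_steps=100, n_paths=20000, seed=0):
    """Full-truncation Euler Monte Carlo for a European call under Heston:
    dS = r S dt + sqrt(v) S dW_s,  dv = kappa(theta - v) dt + xi sqrt(v) dW_v,
    with corr(dW_s, dW_v) = rho; negative variance is truncated to zero."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    log_s = np.full(n_paths, np.log(s0))
    v = np.full(n_paths, v0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        vp = np.maximum(v, 0.0)                       # full truncation
        log_s += (r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z2
        v += kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z1
    payoff = np.maximum(np.exp(log_s) - k, 0.0)
    return np.exp(-r * t) * payoff.mean()

price = heston_call_mc()
```

American options cannot be priced this way directly, which is one motivation for lattice/PDE hybrids such as the method of the paper.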
Within the theoretical framework of numerical stability analysis for Volterra integral equations, we consider a new class of test problems and study the long-time behavior of the numerical solution obtained by direct quadrature methods as a function of the stepsize. Furthermore, we analyze how the numerical solution responds to certain perturbations in the kernel.
Direct quadrature methods
Numerical stability
Volterra equation
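A minimal sketch of a direct quadrature method of the kind analyzed here: the trapezoidal rule applied to a linear second-kind Volterra equation y(t) = g(t) + ∫₀ᵗ K(t,s) y(s) ds. The test problem below (g ≡ 1, K ≡ 1, exact solution eᵗ) is a standard textbook example, not the new test class of the paper.

```python
import math

def dq_trapezoidal(g, kernel, t_end, n):
    """Direct quadrature (trapezoidal rule) for
    y(t) = g(t) + int_0^t K(t,s) y(s) ds on a uniform grid of n steps."""
    h = t_end / n
    t = [j * h for j in range(n + 1)]
    y = [g(t[0])]
    for i in range(1, n + 1):
        # trapezoidal weights: h/2 at the endpoints, h at interior nodes
        acc = 0.5 * kernel(t[i], t[0]) * y[0]
        acc += sum(kernel(t[i], t[j]) * y[j] for j in range(1, i))
        rhs = g(t[i]) + h * acc
        # the scheme is implicit in y_i: y_i = rhs + (h/2) K(t_i,t_i) y_i
        y.append(rhs / (1.0 - 0.5 * h * kernel(t[i], t[i])))
    return t, y

# y(t) = 1 + int_0^t y(s) ds has exact solution y = exp(t)
t, y = dq_trapezoidal(lambda t: 1.0, lambda t, s: 1.0, 1.0, 100)
assert abs(y[-1] - math.e) < 1e-3   # O(h^2) accuracy of the trapezoidal rule
```

Stability questions of the kind studied in the paper concern how such iterates behave as t_end grows with h fixed, and how they react to kernel perturbations.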
An algorithm for computing the antitriangular factorization of symmetric matrices, relying only on orthogonal transformations, was recently proposed. The computed antitriangular form straightforwardly reveals the inertia of the matrix. A block version of the latter algorithm was described in a different paper, where it was noticed that the algorithm sometimes fails to compute the correct inertia of the matrix. In this paper we analyze a possible cause of the failure to detect the inertia and propose a procedure to recover from it. Furthermore, we propose a different algorithm to compute the antitriangular factorization of a symmetric matrix that handles most of the singularities of the matrix at the very end of the algorithm. Numerical results are also given showing the reliability of the proposed algorithm.
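As a reference point for what "revealing the inertia" means, the sketch below computes the inertia triple (n₊, n₋, n₀) of a symmetric matrix from its eigenvalues; the algorithms discussed in the paper instead read the inertia directly off the antitriangular form without an eigendecomposition.

```python
import numpy as np

def inertia(a, tol=1e-12):
    """Inertia (n_plus, n_minus, n_zero) of a symmetric matrix, counted
    from its eigenvalues. A reference computation only; antitriangular
    factorization obtains the same triple via orthogonal transformations."""
    w = np.linalg.eigvalsh(a)
    return (int((w > tol).sum()),
            int((w < -tol).sum()),
            int((np.abs(w) <= tol).sum()))

a = np.array([[2.0,  0.0, 0.0],
              [0.0, -1.0, 0.0],
              [0.0,  0.0, 0.0]])
assert inertia(a) == (1, 1, 1)   # one positive, one negative, one zero
```

By Sylvester's law of inertia, this triple is invariant under congruence, which is why any factorization built from orthogonal (hence congruence) transformations can expose it.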
Classical results from the spectral theory of stationary linear kinetic equations are applied to efficiently approximate two physically relevant weakly nonlinear kinetic models: a model of chemotaxis involving a biased velocity-redistribution integral term, and a Vlasov-Fokker-Planck (VFP) system. Both are coupled to an attractive elliptic equation producing the corresponding mean-field potentials. Spectral decompositions of stationary kinetic distributions are recalled, based on a variation of Case's elementary solutions (for the first model) and on a Sturm-Liouville eigenvalue problem (for the second one). Well-balanced Godunov schemes with strong stability properties are deduced. Moreover, in the stiff hydrodynamical scaling, a hybridized algorithm is set up, for which asymptotic-preserving properties can be established under mild restrictions on the computational grid. Several numerical validations are displayed, including the consistency of the VFP model with Burgers-Hopf dynamics on the velocity field after blowup of the macroscopic density into Dirac masses.
Chemotaxis modeling
Discrete velocity kinetic model
Non-conservative products
Vlasov-Poisson Fokker-Planck equation
Asymptotic-preserving and well-balanced scheme
In this paper, a general approach to de la Vallée Poussin means is given, and the resulting near-best polynomial approximation is established by developing simple sufficient conditions that guarantee uniformly bounded Lebesgue constants. Not only the continuous case but also the discrete approximation is investigated, and a pointwise estimate of the generalized de la Vallée Poussin kernel is established for this purpose. The theory is illustrated by several numerical experiments.
Discrete and continuous polynomial approximation
Gibbs phenomenon
Lebesgue constants
generalized de la Vallée Poussin means
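A small sketch of the classical (delayed-mean) de la Vallée Poussin construction V_n f = (1/n) Σ_{m=n}^{2n-1} S_m f for a sine series, checking the reproduction property behind near-best approximation: V_n acts as the identity on trigonometric polynomials of degree ≤ n, since every partial sum S_m with m ≥ n already reproduces them. The coefficients are toy values, and this is the classical mean, not the generalized kernels studied in the paper.

```python
import math

def partial_sum(coeffs, x, m):
    """m-th Fourier partial sum S_m of a sine series with coefficients b_k."""
    return sum(b * math.sin(k * x) for k, b in coeffs.items() if k <= m)

def vp_mean(coeffs, x, n):
    """de la Vallee Poussin (delayed) mean V_n = (1/n) sum_{m=n}^{2n-1} S_m."""
    return sum(partial_sum(coeffs, x, m) for m in range(n, 2 * n)) / n

# Reproduction: for f(x) = sin(3x) (degree 3 <= n = 4), V_n f = f exactly.
coeffs = {3: 1.0}
for x in (0.1, 0.5, 1.3):
    assert abs(vp_mean(coeffs, x, 4) - math.sin(3 * x)) < 1e-12
```

Unlike Fejér means, which only reproduce constants, this reproduction up to degree n is what lets uniformly bounded Lebesgue constants translate into near-best approximation.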