Browsing by Subject "Applied Mathematics"
Now showing 1 - 20 of 77
- Item (Open Access): A study of vortex lattices and pulsar glitches (2019). Nkomozake, Thando; Weltman, Amanda; Murugan, Jeff. In this project we study the three fundamental theories that explain the phenomenon of superconductivity: the London theory, the Ginzburg-Landau theory and the BCS theory. We review works by several authors who utilized these theories as the basis for their investigations. In our literature review we study the behavior of single- and multi-vortex states in mesoscopic thin superconducting discs whose dimensions are comparable to the penetration depth λ and the coherence length ξ of a superconductor. We learn about the types of phase transitions that the vortex configurations undergo and the stability of the resulting states. Our aim is to investigate how vortex configurations reorganize after phase transitions and whether their reorganization releases any energy into the system of vortices in the disc. If so, what is the precise mechanism through which the released energy is transferred into the disc? We aim to answer this question and generalize the results to neutron star interiors in order to explain and predict the behavior of pulsar glitches.
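A minimal sketch of the Ginzburg-Landau framework this abstract leans on, in standard textbook (Gaussian-units) form; the normalisation and symbols are assumptions, not quoted from the thesis:

```latex
F = \int d^3r \left[\, \alpha|\psi|^2 + \frac{\beta}{2}|\psi|^4
  + \frac{1}{2m^*}\left|\left(-i\hbar\nabla - \frac{e^*}{c}\mathbf{A}\right)\psi\right|^2
  + \frac{|\mathbf{B}|^2}{8\pi} \right]
```

The two lengths quoted in the abstract follow from the coefficients: the coherence length ξ = ℏ/√(2m*|α|) and the penetration depth λ = √(m*c²β/(4πe*²|α|)); vortex physics in mesoscopic discs is governed by their ratio κ = λ/ξ.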
- Item (Open Access): Achieving baseline states in sparsely connected spiking-neural networks: stochastic and dynamic approaches in mathematical neuroscience (2015). Antrobus, Alexander Dennis; Murugan, Jeffrey; Ellis, George F R. Networks of simple spiking neurons provide abstract models for studying the dynamics of biological neural tissue. At the expense of cellular-level complexity, they are a framework in which we can gain a clearer understanding of network-level dynamics. Substantial insight can be gained analytically, using methods from stochastic calculus and dynamical systems theory. This can be complemented by data generated from computational simulations of these models, most of which benefit easily from parallelisation. One cubic millimetre of mammalian cortical tissue can contain between fifty and one hundred thousand neurons and display considerable homogeneity. Mammalian cortical tissue (or "grey matter") also displays several distinct firing patterns which are widely and regularly observed in several species. One such state is the "input-free" state of low-rate, stochastic firing. A key objective over the past two decades of modelling spiking-neuron networks has been to replicate this background activity state using "biologically plausible" parameters. Several models have produced dynamically and statistically reasonable activity (to varying degrees), but almost all of these have relied on some driving component in the network, such as endogenous cells (i.e. cells which fire spontaneously) or widespread, randomised external input (put down to background noise from other brain regions). Perhaps it would be preferable to have a model where the system itself is capable of maintaining such a background state? This is a functionally important question, as it may help us understand how neural activity is generated internally and how memory works. There has also been some contention as to whether "driven" models produce statistically realistic results. Recent numerical results show that there are connectivity regimes in which Self-Sustained, Asynchronous, Irregular (SSAI) firing activity can be achieved. In this thesis, I discuss the history and analysis of the key spiking-network models proposed in the progression toward addressing this problem. I also discuss the underlying constructions and mathematical theory from measure theory and the theory of Markov processes which are used in the analysis of these models. I then present a small adjustment to a well-known model and provide some original work in analysing the resultant dynamics. I compare this analysis to data generated by simulations. I also discuss how this analysis can be improved and what the broader future is for this line of research.
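For orientation, a minimal sparsely connected leaky integrate-and-fire network of the general kind discussed here; this is a generic illustration, not the author's adjusted model, and every parameter value is an assumption:

```python
import numpy as np

# Generic sparsely connected LIF network; parameters are illustrative only.
rng = np.random.default_rng(0)
N, p_conn = 1000, 0.02                   # neurons, connection probability
tau, v_th, v_reset = 20.0, 20.0, 0.0     # membrane time constant (ms), mV
dt, t_sim = 0.1, 200.0                   # time step and duration (ms)
J, g = 0.5, 5.0                          # excitatory weight (mV), inhibition ratio
n_exc = int(0.8 * N)                     # 80% excitatory, 20% inhibitory

W = (rng.random((N, N)) < p_conn).astype(float)
W[:, :n_exc] *= J
W[:, n_exc:] *= -g * J

v = rng.uniform(0.0, 1.25 * v_th, N)     # initial kick: some start above threshold
spikes = np.zeros(N, dtype=bool)
rates = []
for _ in range(int(t_sim / dt)):
    v += dt * (-v / tau) + W @ spikes    # leak plus recurrent input; no external drive
    spikes = v >= v_th
    v[spikes] = v_reset
    rates.append(spikes.sum() / N / (dt * 1e-3))   # population rate (Hz)

print(f"mean population rate: {np.mean(rates):.1f} Hz")
```

Whether the activity dies out or settles into a low-rate, self-sustained state depends on the connectivity regime, which is exactly the question the thesis pursues analytically.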
- Item (Open Access): The applicability of discriminant analysis techniques on the multivariate normal and non-normal data types in marketing research (1985). Van Deventer, Petrus Jacobus Uys; Troskie, Casper G. The purpose of the procedures described is to assign "objects" or "observations" in some optimum fashion to one of two or more populations. In routine banking, a bank manager may wish to classify clients applying for loans as low or high credit risks on the basis of the elements of certain accounting statements. In such a case there are two distinct, well-defined classes. Another investigation may be initiated to determine whether buying habits differ across the categories of urban, suburban and rural clients. Note that in the first example the classes are determined before any sample of observations is investigated, i.e. the sample results do not influence the choice of groups. In the latter case one is trespassing on the terrain of cluster analysis. In the first case we have two types of problems: that of devising a classification rule from samples of already classified objects, and that of imposing the classification scheme on the objects. The term "discrimination" refers to the process of deriving classification rules from samples of classified objects, and the term "classification" refers to applying the rules to new objects of unknown class. Although it is possible to convert raw data into more easily grasped forms like cartoon faces (Chernoff, 1973), this still presents the problem that any grouping or classification based on these diagrams is subjective.
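As a concrete instance of the two-class case described (e.g. low- versus high-risk clients), a minimal Fisher linear discriminant on synthetic data; the features and parameters are invented for illustration:

```python
import numpy as np

# Derive a rule from classified samples ("discrimination"), then apply it
# to new objects of unknown class ("classification"). Data are synthetic.
rng = np.random.default_rng(1)
low = rng.multivariate_normal([2.0, 3.0], [[1.0, 0.3], [0.3, 1.0]], 200)
high = rng.multivariate_normal([4.0, 1.0], [[1.0, 0.3], [0.3, 1.0]], 200)

mu_l, mu_h = low.mean(axis=0), high.mean(axis=0)
S_w = np.cov(low, rowvar=False) + np.cov(high, rowvar=False)  # pooled scatter
w = np.linalg.solve(S_w, mu_l - mu_h)       # discriminant direction
c = 0.5 * w @ (mu_l + mu_h)                 # midpoint threshold

def classify(x):
    """Assign a new observation to the 'low' or 'high' risk population."""
    return "low" if w @ x > c else "high"

print(classify(np.array([2.1, 2.9])), classify(np.array([4.2, 0.8])))
```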
- Item (Open Access): Artificial Neural Networks as a Probe of Many-Body Localization in Novel Topologies (2022). Beetar, Cameron; Murugan, Jeffrey; Rosa, Dario; Weltman, Amanda. We attempt to show that artificial neural networks may be used as a tool for universal probing of many-body localization in quantum graphs. We produce an artificial neural network, training it on the entanglement spectra of the nearest-neighbour Heisenberg spin-1/2 chain in the presence of extremal (definitely ergodic/localizing) disorder values, and show that this artificial neural network successfully classifies, at least qualitatively, the entanglement spectra at both extremal and intermediate disorder values as being in either the ergodic regime or the many-body-localizing regime, in agreement with known results. To this network we then present the entanglement spectra of systems having different topological structures for classification. The entanglement spectra of next-to-nearest-neighbour (J1 − J2, and in particular Majumdar-Ghosh) models, star models, and bicycle-wheel models are classified without any further training of the artificial neural network. We find that the results of these classifications, in particular how the mobility edge is affected, are in agreement with heuristic expectations. We use this as a proof of concept that neural networks and, more generally, machine learning algorithms endow physicists with powerful tools for the study of many-body localization and potentially other many-body physics problems.
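A toy version of the workflow, with synthetic stand-ins for the entanglement spectra (the real inputs, architecture and hyper-parameters are the thesis's, not these):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)

def fake_spectrum(localized, n=64):
    # Synthetic stand-in: "localized" spectra are illustratively sharper.
    s = rng.exponential(0.2 if localized else 1.0, n)
    return np.sort(s / s.sum())[::-1]

# Train only on "extremal" examples, as the abstract describes.
X = np.array([fake_spectrum(lab) for lab in [0] * 500 + [1] * 500])
y = np.array([0] * 500 + [1] * 500)        # 0 = ergodic, 1 = localizing
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X, y)

# Spectra from other models/topologies are then classified with no retraining.
print(clf.predict_proba(np.array([fake_spectrum(1)]))[0])
```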
- Item (Open Access): Aspects of a spherically symmetric model of the post-decoupling universe (1997). Mustapha, Nazeem. The central aim of this thesis is to consider aspects of the spherically symmetric Lemaitre-Tolman-Bondi (LTB) solution as a model of the post-decoupling universe. To do this comprehensively is a massive task and is not our aim here; rather, we concentrate on select instances of this programme and attempt in some places to indicate possibilities for further study. There are many solutions of the Einstein Field Equations (EFE) which satisfy what we consider to be 'reasonable criteria' for a cosmology, and others that do not. The LTB solution may be accepted as a reasonable cosmological model because: ■ It allows non-empty solutions. ■ It allows expanding solutions. ■ It has a homogeneous and isotropic limit. ■ It allows for inhomogeneity.
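For reference, the LTB solution in its standard synchronous, comoving form (textbook notation, not quoted from the thesis):

```latex
ds^2 = -dt^2 + \frac{R'^2(t,r)}{1 + 2E(r)}\,dr^2 + R^2(t,r)\,d\Omega^2,
\qquad
\dot{R}^2 = \frac{2M(r)}{R} + 2E(r) + \frac{\Lambda R^2}{3}
```

with free functions E(r) and M(r); setting R(t,r) = a(t)r recovers the homogeneous and isotropic FLRW limit listed above.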
- Item (Open Access): Aspects of Bayesian inference, classification and anomaly detection (2021). Roberts, Ethan; Bassett, Bruce. The primary objective of this thesis is to develop rigorous Bayesian tools for common statistical challenges arising in modern science, where there is a heightened demand for precise inference in the presence of large, known uncertainties. This thesis explores in detail two arenas where this manifests. The first is the development and testing of a unified Bayesian anomaly detection and classification framework (BADAC) which allows principled anomaly detection in the presence of measurement uncertainties, which are rarely incorporated into machine learning algorithms. BADAC deals with uncertainties by marginalising over the unknown, true value of the data. Using simulated data with Gaussian noise as an example, BADAC is shown to be superior to standard algorithms in both classification and anomaly detection performance in the presence of uncertainties. Additionally, BADAC provides well-calibrated classification probabilities, valuable for use in scientific pipelines. BADAC is therefore ideal where computational cost is not a limiting factor and statistical rigour is important. We discuss approximations to speed up BADAC, such as the use of Gaussian processes, and finally introduce a new metric, the Rank-Weighted Score (RWS), that is particularly suited to evaluating an algorithm's ability to detect anomalies. The second major exploration in this thesis presents methods for rigorous statistical inference in the presence of classification uncertainties and errors. Although this is explored specifically through supernova cosmology, the context is general. Supernova cosmology without spectra will be an important component of future surveys due to massive increases in data volumes in next-generation surveys such as those from the Vera C. Rubin Observatory. This lack of supernova spectra results in uncertainty in both the redshift and the type of each supernova, which, if ignored, leads to significantly biased estimates of cosmological parameters. We present a hierarchical Bayesian formalism, zBEAMS, which addresses this problem by marginalising over the unknown or uncertain supernova redshifts and types to produce unbiased cosmological estimates that are competitive with supernova data with fully spectroscopically confirmed redshifts. zBEAMS thus provides a unified treatment of photometric redshifts, classification uncertainty and host galaxy misidentification, effectively correcting the inevitable contamination in the Hubble diagram with little or no loss of statistical power.
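The marginalisation step described for BADAC can be written schematically as follows (the symbols d, x, σ and 𝒩 are mine, not the thesis's notation): for a datum d with known noise level σ and unknown true value x,

```latex
P(d \mid \text{class } c) = \int P(d \mid x)\,P(x \mid c)\,dx
  = \int \mathcal{N}(d;\,x,\sigma^2)\,P(x \mid c)\,dx
```

so classification and anomaly scores are built from likelihoods that carry the measurement uncertainty explicitly, rather than treating d as exact.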
- Item (Open Access): Aspects of modern cosmology (1997). Bassett, Bruce Adrian Charles; Ellis, George F R; Fairall, Anthony Patrick. The main work of this thesis can be summarised as: ■ An implementation of canonical quantisation in the covariant and gauge-invariant approach to cosmological perturbations. Standard results are reproduced. We discuss the advantages of this formalism over non-covariant and non-gauge-invariant formalisms. ■ A characterisation of linear gravitational waves in a covariant way is achieved. The evolution equations for the electric and magnetic parts of the Weyl tensor are shown to be of different order. In particular, the electric part appears to have a third-order evolution equation, while the magnetic part has a second-order evolution equation. It is shown that the "silent" nature of the evolution equations for irrotational dust can be extended to the case of vortical dust. This may be relevant for the endpoints of gravitational collapse, since the vorticity begins to grow as soon as the density contrast becomes non-linear, as is the case in galaxies, showing that the irrotational silent universes are unstable. The main problem in accepting such vortical silent universes lies in proving integrability of the equations, which has not been achieved so far, even in the irrotational case. ■ A review of issues in the Cosmic Microwave Background (CMB) is given, focussing particularly on points such as ergodicity, decaying modes, foreground contamination, recombination, spectral distortions and polarisation of the CMB. ■ A review of methods in gravitational lensing is presented, together with a hierarchy of distance measures in cosmology, forming an introduction to the following two chapters. ■ A common belief that photon conservation implies that the all-sky averaged area distance in inhomogeneous universes must be that of the background, matter-averaged Robertson-Walker area distance is disproven. This means that there will in general be gravitational lensing effects even on large angular scales. ■ The realistic situation in which gravitational lensing leads to caustic formation is discussed. It is claimed that this invalidates many accepted beliefs concerning high-redshift observations in inhomogeneous universes. One application of importance is the CMB. Possible implications are discussed. ■ Random Gaussian fields are ubiquitous in modern statistical physics, and particularly important in CMB studies. Here we give accurate analytical functions approximating ∫e⁻ˣ²dx, the simplest of which is just the kink soliton.
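To indicate what the last item means, the kink soliton is a tanh profile, and a matched approximation of this type is (this particular coefficient choice is an illustrative reconstruction, not necessarily the thesis's):

```latex
\int_0^x e^{-t^2}\,dt \;\approx\; \frac{\sqrt{\pi}}{2}\tanh\!\left(\frac{2x}{\sqrt{\pi}}\right)
```

which reproduces both the unit slope at the origin and the asymptote √π/2 as x → ∞.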
- Item (Open Access): Asymptotic analysis of the parametrically driven damped nonlinear evolution equation (1997). Duba, Chuene Thama. Singular perturbation methods are used to obtain amplitude equations for the parametrically driven damped linear and nonlinear oscillator, and for the linear and nonlinear Klein-Gordon equations in the small-amplitude limit, in various frequency regimes. In the case of the parametrically driven linear oscillator, we apply the Lindstedt-Poincare method and the multiple-scales technique to obtain the amplitude equation for the driving frequencies ω_dr ≈ 2ω₀, ω₀, (2/3)ω₀ and (1/2)ω₀. The Lindstedt-Poincare method is modified to cater for solutions with slowly varying amplitudes; its predictions coincide with those obtained by the multiple-scales technique. The scaling exponent for the damping coefficient and the correct time scale for the parametric resonance are obtained. We further employ the multiple-scales technique to derive the amplitude equation for the parametrically driven pendulum for the driving frequencies ω_dr ≈ 2ω₀, ω₀, (2/3)ω₀, (1/2)ω₀ and 4ω₀. We obtain the correct scaling exponent for the amplitude of the solution in each of these frequency regimes.
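For orientation, the linear prototype of the problems treated is the damped Mathieu-type equation (standard form; the 2:1 scalings shown are the textbook ones):

```latex
\ddot{x} + 2\gamma\dot{x} + \omega_0^2\left(1 + \epsilon\cos\omega_{dr}t\right)x = 0,
\qquad \omega_{dr} \approx 2\omega_0
```

Near ω_dr ≈ 2ω₀ the multiple-scales ansatz x = A(T)e^{iω₀t} + c.c., with slow time T = εt and damping γ = O(ε), yields the amplitude equation for A; the other frequency regimes require different scalings of the small parameters, which is what the quoted scaling exponents encode.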
- Item (Open Access): Boundary value problems for semilinear evolution equations of compact type (1982). Sager, Herbert Casper; Becker, Ronald.
- Item (Open Access): Cio-cio-san no yuutsu: memoirs of magnetogenesis and turbulent dynamo theory (2013). Adams, Patrick William; Dr. Osano, B. The origins of cosmic magnetic fields are not as yet well understood. In this dissertation we investigate, via direct numerical simulation, the temporal evolution and behaviour of magnetic fields that are generated from identically zero initial conditions via a thermal battery term in the induction equations (i.e. the magnetogenesis problem), whilst making use of the ideal gas and Chaplygin gas equations of state, in turn, to model the relationship between pressure and density. The dependence of the onset of dynamo action on various values of the magnetic Reynolds and Prandtl numbers is also considered via direct numerical simulation, for the cases of the Roberts flow kinematic dynamo and of a flow that incorporates, in turn, a non-helical and a helical forcing function introducing turbulence into the system. For the simulation work conducted, we make use of the PENCIL CODE, a high-order finite-difference magnetohydrodynamical code capable of performing simulation runs in parallel using the Message Passing Interface (MPI) system for parallel processing. Theoretical results relevant to the simulations conducted are partially recovered and discussed in detail. These include, but are not limited to: the emergence of the thermal battery term in the general Ohm's law as a consequence of the two-fluid approximation of a plasma; the derivation of the induction equations incorporating the aforementioned battery term; an introduction to and discussion of the Chaplygin gas and its place in the field of cosmology; the energetics governing the flow of kinetic and magnetic energy during the dynamo process; the Zel'dovich stretch-twist-fold dynamo as an example of both a fast dynamo and a cornerstone underlying the operation of all dynamos; and, finally, the Kazantsev theory for small-scale, turbulent dynamos. For our magnetogenesis simulations, it is found that the magnetic fields produced undergo two distinct growth phases (the first classified as an initial "upshoot" that is possibly due to the battery term, and the second as an exponential growth phase), as well as two distinct phases of decay in strength, which is attributed to the effects of magnetic diffusion. This behaviour is observed for fields generated using both the ideal gas and the Chaplygin gas equations of state, and it is noted that the Chaplygin gas equation of state produces magnetic fields that are of comparable strength to those produced by the ideal gas equation of state. Dynamo action simulations confirm the existence of a critical magnetic Reynolds number beyond which an initial prescribed magnetic field will grow exponentially in strength. In the case of the forced turbulence simulations, it is noted that the use of a helical forcing function greatly lowers the value of the critical magnetic Reynolds number required for the onset of guaranteed dynamo action, and also produces stronger magnetic fields when compared with the cases that used a non-helical forcing function. In both cases of forced turbulence, the magnetic field is observed to saturate when its kinematic (i.e. exponential growth) phase is complete, provided that the magnetic Reynolds number is above the aforementioned critical threshold.
Results of the magnetogenesis simulations are also investigated for dynamo action, and it is concluded that a type of “kinematic dynamo” phase was most probably present when these fields underwent the observed phase of exponential growth.
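The induction equation with a battery source, in the generic form such magnetogenesis studies start from (notation and the sign convention for the electron charge are assumptions, not quoted from the dissertation):

```latex
\frac{\partial \mathbf{B}}{\partial t}
= \nabla \times (\mathbf{u} \times \mathbf{B})
+ \eta \nabla^2 \mathbf{B}
+ \frac{c}{e\,n_e^2}\,\nabla p_e \times \nabla n_e
```

The last term is nonzero wherever the electron pressure and density gradients are misaligned, so it can seed a field from identically zero initial conditions; the ∇×(u×B) term then permits dynamo amplification once the flow allows it.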
- Item (Open Access): A comparative quality of life survey in Elsies River and Basuto QwaQwa (1985). Erlank, D; Ellis, G F R. This thesis is concerned with developing a method for determining the Quality of Life of a group or community in quantitative terms. The method devised is aimed at providing decision-makers with a useful tool when allocating public funds. The method involves setting critical values for indicators and then applying a mathematical formula, in order to standardise information gathered from several different sources. A value for the indicator of a particular group or community is thus calculated. This procedure made it possible to compare data from these different sources. Arising out of this, the values for individual indicators were aggregated to produce indices evaluating the Quality of Life, which are in a form that may be readily used by decision-makers. Surveys were run in Elsies River, a coloured suburb of Cape Town, and in Basuto QwaQwa, a homeland in the Orange Free State, using two questionnaires. The results were computed and the method developed here used to compare and aggregate the data. Other sources of data included opinions from experts and objective data concerning the two survey areas, which were also standardised and aggregated. The results show that the method is pragmatic and could be useful to decision-makers. The standardisation provided the means for arriving at the indices which show how different aspects of the Quality of Life may be assessed. The results, however, are not absolute and could change through a process of negotiation: in fact this is an essential qualification.
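The thesis's actual formula is not reproduced in this abstract; purely as a hypothetical illustration of the standardise-then-aggregate pattern it describes:

```python
# Hypothetical sketch of standardising indicators against critical values and
# aggregating into an index; the real formula and weights are the thesis's.
def standardise(value, critical, best):
    """Map a raw indicator onto [0, 1], with the critical value scoring 0."""
    return max(0.0, min(1.0, (value - critical) / (best - critical)))

def qol_index(scores, weights):
    """Weighted aggregate of standardised indicator scores."""
    return sum(weights[k] * scores[k] for k in scores) / sum(weights.values())

scores = {"housing": standardise(6.2, critical=4.0, best=10.0),
          "income": standardise(310.0, critical=250.0, best=800.0)}
print(qol_index(scores, {"housing": 2.0, "income": 1.0}))
```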
- Item (Open Access): Computational analysis techniques using fast radio bursts to probe astrophysics (2021). Platts, Emma; Weltman, Amanda; Shock, Jonathan. This thesis focuses on Fast Radio Bursts (FRBs) and presents computational techniques that can be used to understand these enigmatic events and the Universe around them. Chapter 1 provides a theoretical overview of FRBs, providing a foundation for the chapters that follow. Chapter 2 details current understanding through a review of FRB properties and progenitor theories. In Chapter 3, we implement non-parametric techniques to measure the elusive baryonic halo of the Milky Way. We show that even with a limited data set, FRBs and an appropriate set of statistical tools can provide reasonable constraints on the dispersion measure of the Milky Way halo. Further, we expect that a modest increase in data (from fewer than 100 FRB detections to over 1000) will significantly tighten constraints, demonstrating that the technique we present may offer a valuable complement to other analyses in the near future. In Chapter 4, we study the fine time-frequency structure of the most famous FRB: FRB 121102. Here, we use autocorrelation functions to maximise the structure of 11 pulses detected with the MeerKAT radio telescope. The study is motivated by the low time resolution of MeerKAT data, which presents a challenge to more traditional techniques. The burst profiles that are unveiled offer unique insight into the local environment of the FRB, including a possible deviation from the expected cold-plasma dispersion relationship. The pulse features and their possible physical mechanisms are critically discussed in a bid to uncover the nature and origin of these transients.
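The cold-plasma dispersion relation mentioned at the end fixes the frequency-dependent arrival delay on which all FRB dispersion-measure work rests (a standard result, not specific to this thesis):

```latex
\Delta t \simeq 4.15\,\mathrm{ms}
\left(\frac{\mathrm{DM}}{\mathrm{pc\,cm^{-3}}}\right)
\left[\left(\frac{\nu_1}{\mathrm{GHz}}\right)^{-2}
     -\left(\frac{\nu_2}{\mathrm{GHz}}\right)^{-2}\right]
```

where the dispersion measure DM is the electron column density along the line of sight; Chapter 3 constrains the Milky Way halo's contribution to DM, and Chapter 4 probes deviations of pulse structure from this ν⁻² law.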
- Item (Open Access): A computational implementation of design sensitivity analysis and structural optimisation (1996). Bothma, André Smith; Ronda, Jacek. In the field of computational mechanics, increases in computing power and enhancements in material and kinematic models have made structural design optimisation feasible for a wide range of applications. The work presented here was motivated by the current groundswell of research effort in computational optimisation. Design Sensitivity Analysis (DSA) crucially underpins much of structural optimisation and, as such, is focussed on more intently than the optimisation theory itself: various approaches to the Direct Differentiation Method (DDM) DSA procedure are investigated and computationally implemented. The procedures implemented were chosen so as to involve a range of important issues in computational sensitivity analysis, particularly: * shape and non-shape sensitivity analysis; * total and updated Lagrange-based DSA; * DSA of displacement- and non-displacement-based response functionals; * multiparameter DSA; * DSA for large-strain behaviour. The primary objectives of this thesis are: I. Development of a design sensitivity formulation which, when discretised, resembles the standard displacement-based kinematic element formulation, thus enabling the implementation of design sensitivity analysis in established Finite Element Analysis (FEA) codes as a 'pseudo'-element routine. II. Implementation of several design sensitivity formulations and of structural optimisation in the FEA code ABAQUS, as a verification of the first objective. Numerical results provided in this work demonstrate the successful completion of the above-mentioned objectives. The discretised DSA formulations presented, as well as the 'pseudo'-element approach adopted, particularly in the case of shape DSA, are entirely original. To the best of the author's knowledge, DSA and DSA-based structural optimisation had never before been attempted with ABAQUS. The research conducted here lays the foundation for potentially very fruitful future work.
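A minimal statement of the direct differentiation method in the discretised setting, in generic finite-element notation (assumed, not quoted from the thesis): with residual r(u, p) = f(p) at equilibrium, tangent stiffness K = ∂r/∂u, design parameter p and response functional G(u, p),

```latex
\mathbf{K}\,\frac{d\mathbf{u}}{dp}
= \frac{\partial \mathbf{f}}{\partial p} - \frac{\partial \mathbf{r}}{\partial p},
\qquad
\frac{dG}{dp} = \frac{\partial G}{\partial p}
+ \frac{\partial G}{\partial \mathbf{u}}\cdot\frac{d\mathbf{u}}{dp}
```

The sensitivity system reuses the tangent stiffness with a 'pseudo-load' right-hand side, which is why DSA can be grafted onto an existing FEA code such as ABAQUS as a pseudo-element routine.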
- Item (Open Access): Cosmological dynamics of exponential gravity (2007). Abdelwahab, Mohamed Elshazli Sirelakhatim; Dunsby, Peter K S. The objective of this thesis is to explore several hotly debated current issues in modern cosmology, with a focus on f(R) gravity. In Chapter 1 we present a review of modern theoretical cosmology. We begin by introducing some fundamental cosmological concepts, followed by a discussion of the field equations of general relativity, which underlie both the behaviour of global cosmological models and that of isolated gravitating systems such as stars, black holes and galaxies. In particular we focus on the solutions for the Friedmann-Robertson-Walker Universe. Next we present a detailed discussion of the dark matter problem; astrophysical observations indicate that this component accounts for about 25% of the total mass/energy of the observable Universe. We then present the big bang model, which represents the current standard model for the origin and evolution of the Universe. In our discussion we introduce the inflationary scenario in some detail; specifically, we present an example of quadratic inflation to demonstrate how this scenario provides a solution to some of the problems with the standard model. Next we discuss the dark energy model, which has been introduced to address the late-time acceleration problem. We then present the quintessence model, which has been proposed to address the coincidence and magnitude problems. We conclude this chapter with a detailed discussion of higher-order theories of gravity, with a particular focus on f(R) gravity, which is based on the idea of introducing corrections to the Einstein-Hilbert action that are negligible in the early Universe and only become effective at late times, when the Ricci curvature R decreases. In our discussion we indicate how these corrections can be interpreted as an effective fluid of purely geometrical origin; we also discuss the phase space and the stability of de Sitter space in f(R) gravity.
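For reference, the f(R) action and field equations the chapter builds towards (standard form; the thesis's specific exponential choice of f is not reproduced here):

```latex
S = \frac{1}{2\kappa}\int d^4x\,\sqrt{-g}\,f(R) + S_m,
\qquad
f'(R)R_{\mu\nu} - \tfrac{1}{2}f(R)g_{\mu\nu}
- \left(\nabla_\mu\nabla_\nu - g_{\mu\nu}\Box\right)f'(R) = \kappa\,T_{\mu\nu}
```

Setting f(R) = R recovers general relativity; the extra curvature terms, moved to the right-hand side, play the role of the effective geometric fluid mentioned in the abstract.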
- Item (Open Access): A covariant approach to gravitational lensing (2004). De Swardt, Bonita; Dunsby, Peter K S; Clarkson, Chris. The main focus of this thesis is to study the properties of null geodesics in general relativistic models. The thesis is divided into two parts. In the first part, we introduce the (1+3)-covariant approach which will be used in our study of null geodesics and their applications to gravitational lensing. The dynamics of the null congruence can be better understood through the propagation and constraint equations in the direction of the congruence. Thus, we derive these equations after describing the geometry of a ray. We also derive a general form of the null geodesic deviation equation (NGDE) which can be used in any given space-time. Various applications of this equation are studied, including its role in determining area-distance relations in a Friedmann-Lemaitre-Robertson-Walker (FLRW) cosmological model. We also use the NGDE in deriving a covariant form of the angle of deflection, showing its versatile applications in gravitational lensing theory.
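The null geodesic deviation equation referred to here takes, in any spacetime, the covariant form (standard notation: k^a the null tangent, η^a the deviation vector, λ an affine parameter):

```latex
\frac{D^2\eta^a}{d\lambda^2} = -R^a{}_{bcd}\,k^b\,\eta^c\,k^d
```

Projecting this onto the screen space orthogonal to k^a gives the focusing equation from which area-distance relations and deflection angles follow.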
- Item (Open Access): Deep learning for supernovae detection (2017). Amar, Gilad; Bassett, Bruce. In future astronomical sky surveys it will be humanly impossible to classify the tens of thousands of candidate transients detected per night. This thesis explores the potential of state-of-the-art machine learning algorithms to handle this burden more accurately and quickly than trained astronomers. To this end, deep learning methods are applied to classify transients using real-world data from the Sloan Digital Sky Survey. Using cutting-edge training techniques, several convolutional neural networks are trained and their hyper-parameters tuned to outperform previous approaches; we find that human labelling errors are the primary obstacle to further improvement. The tuning and optimisation of the deep models took in excess of 700 hours on a 4-Titan X GPU cluster.
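A minimal sketch of the kind of convolutional classifier described, in Keras; the input shape, architecture and training data here are illustrative assumptions, not the thesis's tuned models:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy real/bogus transient classifier on small image cutouts.
model = keras.Sequential([
    keras.Input(shape=(32, 32, 3)),         # e.g. search/template/difference stack
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # P(real transient)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random placeholders standing in for labelled survey cutouts.
X = np.random.rand(64, 32, 32, 3).astype("float32")
y = np.random.randint(0, 2, 64)
model.fit(X, y, epochs=1, batch_size=16, verbose=0)
```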
- Item (Open Access): Development of a collision table for three-dimensional lattice gases (1992). Lake, Peter J; Smart, D R; Gledhill, I M A. A lattice gas is a species of cellular automaton used for numerically simulating fluid flows. TransGas [9], the lattice gas code currently in use at the CSIR, is based on the FHP-I model [5] and is used to perform various two-dimensional flow simulations. In order to broaden the scope of the applications in which lattice gases can be used locally, the development of a three-dimensional lattice gas capability is required. The first major task in setting up a three-dimensional lattice gas is the construction of an efficient collision rule generator which will determine collision outcomes. For suitability to local applications, the collision rules should be chosen in such a way as to maximise the Reynolds coefficient of the flow, while conserving quantities such as mass and momentum. Part of the task thus becomes an optimisation problem. When expanding from two to three dimensions, the number of possible collision rules increases from 64 to 16777216. If a complete collision rule table is used for determining collision outcomes, storage problems are encountered on the available hardware. Selection and optimisation of collision rules cannot be done by hand when there are so many rules to choose from; selection of rules is thus non-trivial. The work outlined in this thesis provides the CSIR with a 3-D lattice gas collision table which is well suited to the available hardware capabilities. The necessary theoretical background is considered, and a survey of the literature is presented. Based on the findings of this literature study, various methods of collision outcome determination are implemented which are considered suitable to the local needs, while remaining within the constraints set by hardware availability. An isometric collision algorithm and a reduced collision table are generated and tested. The overall efficiency of a lattice gas model is determined by two factors, namely the computational efficiency and the implementation efficiency. In testing a collision table, the first is characterised by the rate at which post-collision states can be determined, and depends on the hardware and programming techniques. The second factor can be expressed by means of a number called the Reynolds coefficient, which is defined and discussed in the following chapters. The higher the Reynolds coefficient of a model, the greater the scope of flow regimes which may be simulated using it. Another advantage of a high Reynolds coefficient is that the simulation time required for a given flow regime decreases as the Reynolds coefficient of the model increases. The overall efficiency of the isometric model is too low to be of practical use, but a significant improvement is obtained by using the method of reduced tables. In the isometric case, the number of collision outcomes that can be determined per second is similar to that of the reduced table, but the Reynolds coefficient is very much lower. Simulation of a flow regime with a Reynolds number of about 100, on a lattice of size 128³, over 20 thousand timesteps, making use of the isometric model, would take of the order of a few years to complete on the currently available hardware. Since these simulation parameters are typical of the local requirements for lattice gas simulations, this method is clearly unsatisfactory.
The isometric method does, however, serve as a useful introduction to three-dimensional lattice gas collision rule methods. The reduced collision table has been constructed so that it maintains semi-detailed balance, and the Boltzmann Reynolds coefficient has been calculated. In the reduced collision table model, the efficiency is higher than in the isometric case in respect of both the rate at which collision outcomes can be determined and the Reynolds coefficient. As a result of these improvements, the simulation time for the exact case mentioned above would reduce to the order of days on the same hardware. This simulation time is sufficiently low for immediate practical application in the local environment.
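To make the table idea concrete, a toy two-dimensional (FHP-style, 6-velocity) collision table built by exhaustive search; the three-dimensional problem scales this from 2⁶ = 64 to 2²⁴ = 16777216 states, which is why reduced tables are needed. This sketch is an illustration under stated assumptions, not the thesis's generator:

```python
import numpy as np

# Unit velocities of the 6 FHP lattice directions.
VEL = np.array([[np.cos(k * np.pi / 3), np.sin(k * np.pi / 3)] for k in range(6)])

def invariants(state):
    """(mass, momentum) of a 6-bit occupation state; both must be conserved."""
    bits = np.array([(state >> i) & 1 for i in range(6)])
    return bits.sum(), tuple(np.round(VEL.T @ bits, 10))

table = np.arange(64, dtype=np.uint8)     # default outcome: identity (no collision)
for s in range(64):
    inv = invariants(s)
    cands = [t for t in range(64) if t != s and invariants(t) == inv]
    if cands:
        # A real generator would choose among candidates to maximise the
        # Reynolds coefficient; here we simply take the first admissible state.
        table[s] = cands[0]

# Determining a post-collision state is then one array lookup per lattice site:
print(table[0b001001])    # head-on pair -> rotated pair (0b010010 = 18)
```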
- Item (Open Access): Discrete symmetry analysis of partial differential equations for bond pricing (2018). Ledwaba, Nomsa Maripa; Fredericks, Ebrahim. We show how to compute the discrete symmetries for a given Black-Scholes (B-S) partial differential equation (PDE) with the aid of the full automorphism group of the Lie algebra associated with the standard B-S PDE. The paper determines the discrete symmetries using two methods. The first, due to G. Silberberg, determines the full automorphism group by constructing the symmetry generators' centralizer and the Lie algebra's radical. The other, due to P. Hydon, is based on the observation that the adjoint action of any point symmetry of a partial differential equation is an automorphism of the PDE's Lie point symmetry algebra [27]. Automorphisms are essential for constructing discrete symmetries of a given partial differential equation. How does one fit this mathematical concept into an application in finance? The concept of arbitrage, which in certain circumstances allows us to establish the precise relationship between prices and thence how to determine prices, underlies the theory of financial derivatives pricing and hedging [40]. We use arbitrage together with the Black-Scholes model for asset price movements when trading derivative securities. Arbitrage is used in creating a portfolio, and the discrete symmetries show how to create such a portfolio. Gazizov and Ibragimov [10] computed the Lie point symmetries of the Black-Scholes PDE and found an infinite-dimensional Lie algebra of infinitesimal symmetries generated by the operators. Discrete symmetries are more effective on PDEs since they are not held back by boundary conditions, and are used in: 1. equivariant bifurcation theory; 2. the construction of invariant solutions; 3. the simplification of numerical schemes; 4. the put-call parity relationship (see application in finance); 5. the put-call symmetry relationship (see application in finance).
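For reference, the standard Black-Scholes PDE whose symmetry algebra is in play (V the derivative price, S the underlying asset price, σ the volatility, r the risk-free rate):

```latex
\frac{\partial V}{\partial t}
+ \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2}
+ rS\frac{\partial V}{\partial S} - rV = 0
```

The Lie point symmetries of this equation computed by Gazizov and Ibragimov [10] form the algebra whose automorphism group, obtained by either of the two methods above, yields the discrete symmetries.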
- Item (Open Access): Dynamical studies in relativistic cosmology (2000). Mustapha, Nazeem; Ellis, G F R. We conduct three investigations in relativistic cosmology, that is, the Einstein Field Equations applied on the largest scales, with the source field typically taken to be a perfect fluid and fundamental observers comoving with the preferred fluid four-velocity. We show, using a tetrad analysis of the evolution equations for the dynamical variables and all the constraints these satisfy in classical General Relativity, that there are no new consistent perfect fluid cosmologies with the kinematic variables and the electric and/or magnetic parts of the Weyl curvature all rotationally symmetric about a common axis in an open neighbourhood U of an event. The consistent solutions of this kind are either locally rotationally symmetric, or most generally are subcases of the Szekeres model, an inhomogeneous dust model with no Killing symmetries. This result and its obvious future generalisations provide an input into the equivalence problem in cosmology, necessary for a mathematically consistent understanding of probability and a measure set for universes required in quantum cosmology, for instance. We investigate such generalisations and find that similar results hold under some further assumptions dependent on the level of generalisation. In particular, we examine situations where either the electric part or the magnetic part of the free gravitational field is not rotationally symmetric, and also make a brief comment on the most general case where only the shear is rotationally symmetric. We use a tetrad analysis to show that the well-known result for relativistic shear-free dust cosmologies in Einstein's classical theory, that either the expansion vanishes or the flow is irrotational, has an analogue in the Kaluza-Klein universe model recently proposed by Randall and Sundrum, which has its roots presumably in string theory (or M-theory). The Big Bang singularity of General Relativity cannot be avoided in these so-called brane universes by allowing the vorticity to spin up as the singularity is approached in shear-free cases, in the situation where we neglect non-local tidal effects on the dynamics. Moreover, we show that in the general case of a shearing perfect fluid, the singularity at the start of the universe is approached even more strongly than in classical General Relativity in the case of no tidal interaction. Finally, we reconsider the issue of proving large-scale spatial homogeneity of the universe in classical General Relativity, given isotropic observations about us and the possibility of source evolution both in numbers and in luminosities. We use a spherically symmetric dust universe model (compatible with observations) for our investigation, and we solve the field equations on the null cone analytically for the first time. Two theorems make precise the freedom available in constructing cosmological models that will fit the observations. They make quite clear that homogeneity cannot be proven without either a fully determinate theory of source evolution, or the availability of distance measures that are independent of source evolution. We contrast this goal with the standard approach, which assumes spatial homogeneity a priori and determines source evolution functions on the basis of this assumption.
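The shear-free dust result invoked in the second investigation can be stated compactly (Θ the expansion, ω the vorticity, σ the shear of the fluid congruence):

```latex
p = 0,\ \sigma = 0 \quad\Longrightarrow\quad \omega\,\Theta = 0
```

i.e. a shear-free dust flow in classical General Relativity cannot both rotate and expand; the thesis shows that an analogue of this alternative survives in the Randall-Sundrum brane-world setting.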
- Item (Open Access): Dynamics of classical strings in Rindler space (2014). De Klerk, David Nicholaas. The fundamental degrees of freedom in string theory are extended objects. Solving their equations of motion can be difficult unless they are considered in very constrained situations. We investigate the dynamics of gravitational D-brane radiation. Results of others are reviewed which show that in the static case the string profiles of Newtonian and relativistic strings are the same. We show that for slow-moving strings the relativistic solution agrees with the classical one.
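For context, the Rindler form of flat spacetime in which these strings evolve, in standard coordinates adapted to uniformly accelerated observers (textbook form, not quoted from the thesis):

```latex
ds^2 = -a^2 x^2\,dt^2 + dx^2 + dy^2 + dz^2
```

A string hanging statically in these coordinates experiences an effective gravitational pull along x, which is the setting in which the Newtonian and relativistic static profiles are compared above.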