Distributed analysis of Markov chains

dc.contributor.advisor: Kritzinger, Pieter S
dc.contributor.author: Mestern, Mark Andrew
dc.date.accessioned: 2014-11-16T20:07:57Z
dc.date.available: 2014-11-16T20:07:57Z
dc.date.issued: 1998
dc.description: Bibliography: leaves 88-91.
dc.description.abstract: This thesis examines how parallel and distributed algorithms can increase the power of techniques for correctness and performance analysis of concurrent systems. The systems in question are state transition systems from which Markov chains can be derived. Both phases of the analysis pipeline are considered: state space generation from a state transition model to form the Markov chain, and finding performance information by solving the steady state equations of the Markov chain. The state transition models are specified in a general interface language which can describe any Markovian process. The models are not tied to a specific modelling formalism, but common formal description techniques such as generalised stochastic Petri nets and queuing networks can generate these models. Tools for Markov chain analysis face the problem of state spaces that are so large that they exceed the memory and processing power of a single workstation. This problem is attacked with methods to reduce memory usage and by dividing the problem between several workstations. A distributed state space generation algorithm was designed and implemented for a local area network of workstations. The state space generation algorithm also includes a probabilistic dynamic hash compaction technique for storing state hash tables, which dramatically reduces memory consumption. Numerical solution methods for Markov chains are surveyed, and two iterative methods, BiCG and BiCGSTAB, were chosen for a parallel implementation to show that this stage of analysis also benefits from a distributed approach. The results from the distributed generation algorithm show a good speed-up of the state space generation phase and that the method makes the generation of larger state spaces possible. The distributed methods for the steady state solution also allow larger models to be analysed, but the heavy communications load on the network prevents improved execution time.
dc.identifier.apacitation: Mestern, M. A. (1998). <i>Distributed analysis of Markov chains</i>. (Thesis). University of Cape Town, Faculty of Science, Department of Computer Science. Retrieved from http://hdl.handle.net/11427/9693
dc.identifier.chicagocitation: Mestern, Mark Andrew. <i>"Distributed analysis of Markov chains."</i> Thesis, University of Cape Town, Faculty of Science, Department of Computer Science, 1998. http://hdl.handle.net/11427/9693
dc.identifier.citation: Mestern, M. 1998. Distributed analysis of Markov chains. University of Cape Town.
dc.identifier.ris:
TY  - Thesis / Dissertation
AU  - Mestern, Mark Andrew
DA  - 1998
DB  - OpenUCT
DP  - University of Cape Town
LK  - https://open.uct.ac.za
PB  - University of Cape Town
PY  - 1998
T1  - Distributed analysis of Markov chains
TI  - Distributed analysis of Markov chains
UR  - http://hdl.handle.net/11427/9693
ER  -
dc.identifier.uri: http://hdl.handle.net/11427/9693
dc.identifier.vancouvercitation: Mestern MA. Distributed analysis of Markov chains. [Thesis]. University of Cape Town, Faculty of Science, Department of Computer Science; 1998 [cited yyyy month dd]. Available from: http://hdl.handle.net/11427/9693
dc.language.iso: eng
dc.publisher.department: Department of Computer Science
dc.publisher.faculty: Faculty of Science
dc.publisher.institution: University of Cape Town
dc.subject.other: Computer Science
dc.title: Distributed analysis of Markov chains
dc.type: Master Thesis
dc.type.qualificationlevel: Masters
dc.type.qualificationname: MSc
uct.type.filetype: Text
uct.type.filetype: Image
uct.type.publication: Research
uct.type.resource: Thesis
Files
Original bundle
Name: thesis_sci_1998_mestern_m.pdf
Size: 945.66 KB
Format: Adobe Portable Document Format