Browsing by Author "Gain, James"
Now showing 1 - 20 of 32
- Open Access: A low cost virtual reality interface for educational games (2022). Sewpersad, Tashiv; Gain, James. Mobile virtual reality has the potential to improve learning experiences by making them more immersive and engaging for students. This type of virtual reality also aims to be more cost effective by using a smartphone to drive the virtual reality experience. One issue with mobile virtual reality is that the screen (i.e. the main interface) of the smartphone is occluded by the virtual reality headset. To investigate solutions to this issue, this project details the development and testing of a computer vision-based controller that aims to have a cheaper per-unit cost than a conventional electronic controller by making use of 3D printing and the built-in camera of a smartphone. Reducing the cost per unit is useful for educational contexts, as solutions need to scale to classroom sizes. The research question for this project is thus: "Can a computer vision-based virtual reality controller provide comparable immersion to a conventional electronic controller?" It was found that a computer vision-based controller can provide comparable immersion, though it is more challenging to use. This challenge was found to contribute more towards engagement, as it did not diminish users' performance in terms of question scores.
- Open Access: A user interface for terrain modelling in virtual reality using a head mounted display (2021). Gwynn, Timothy; Gain, James. The increased commercial availability of virtual reality (VR) devices has resulted in more content being created for virtual environments (VEs). This content creation has mainly taken place using traditional desktop systems, but certain applications are now integrating VR into the creation pipeline. We therefore look at the effectiveness of creating content, specifically designing terrains, for use in immersive environments using VR technology. To do this, we develop a VR interface for terrain creation based on an existing desktop application. The interface incorporates a head-mounted display and 6-degree-of-freedom controllers. This allows the mapping of user controls to more natural movements compared to the abstract controls in mouse-and-keyboard based systems. It also means that users can view the terrain in full 3D due to the inherent stereoscopy of the VR display. The interface goes through three iterations of user-centred design and testing, resulting in paper and low-fidelity prototypes being created before the final interface is developed. The performance of this final VR interface is then compared to the desktop interface on which it was based. We carry out user tests to assess the performance of each interface in terms of speed, accuracy and usability. From our results we find that there is no significant difference between the interfaces when it comes to accuracy, but that the desktop interface is superior in terms of speed, while the VR interface was rated as having higher usability. Some of the possible reasons for these results, such as users preferring the natural interactions offered by the VR interface but not having sufficient training to fully take advantage of them, are discussed. Finally, we conclude that while neither interface was shown to be clearly superior, there is certainly room for further exploration of this research area. Recommendations for how to incorporate lessons learned during the creation of this dissertation into any further research are also made.
- Open Access: Accelerated coplanar facet radio synthesis imaging (2016). Hugo, Benjamin; Gain, James; Smirnov, Oleg; Tasse, Cyril. Imaging in radio astronomy entails the Fourier inversion of the relation between the sampled spatial coherence of an electromagnetic field and the intensity of its emitting source. This inversion is normally computed by performing a convolutional resampling step and applying the Inverse Fast Fourier Transform, because this leads to computational savings. Unfortunately, the resulting planar approximation of the sky is only valid over small regions. When imaging over wider fields of view, and in particular using telescope arrays with long non-East-West components, significant distortions are introduced in the computed image. We propose a coplanar faceting algorithm, where the sky is split up into many smaller images. Each of these narrow-field images is further corrected using a phase-correcting technique known as w-projection. This eliminates the projection error along the edges of the facets and ensures approximate coplanarity. The combination of faceting and w-projection approaches alleviates the memory constraints of previous w-projection implementations. We compared the scaling performance of both single and double precision resampled images in both an optimized multi-threaded CPU implementation and a GPU implementation that uses a memory-access-limiting work distribution strategy. We found that such a w-faceting approach scales slightly better than a traditional w-projection approach on GPUs. We also found that double precision resampling on GPUs is about 71% slower than its single precision counterpart, making double precision resampling on GPUs less power efficient than CPU-based double precision resampling. Lastly, we have seen that employing only single precision in the resampling summations produces significant error in continuum images for a MeerKAT-sized array over long observations, especially when employing the large convolution filters necessary to create large images.
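To make the imaging step above concrete, here is a minimal NumPy sketch of the Fourier-inversion pipeline: visibilities are resampled onto a regular uv-grid (with a trivial nearest-cell assignment standing in for a proper anti-aliasing convolution filter) and the inverse FFT yields a dirty image. Faceting and w-projection are not shown; the grid size, cell size and test data are illustrative assumptions, not the thesis's implementation.

```python
# Illustrative sketch: grid visibilities onto a uv-plane (nearest cell, no
# anti-aliasing kernel) and inverse-FFT to obtain a dirty image.
import numpy as np

def dirty_image(u, v, vis, grid_size=256, cell=1.0):
    grid = np.zeros((grid_size, grid_size), dtype=complex)
    # Place each sample in the cell nearest its (u, v) coordinate,
    # with the origin at the centre of the grid.
    iu = (grid_size // 2 + np.round(u / cell).astype(int)) % grid_size
    iv = (grid_size // 2 + np.round(v / cell).astype(int)) % grid_size
    np.add.at(grid, (iv, iu), vis)            # accumulate visibilities per cell
    img = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(grid)))
    return img.real

rng = np.random.default_rng(5)
u, v = rng.uniform(-100, 100, 500), rng.uniform(-100, 100, 500)
vis = np.ones(500, dtype=complex)             # a point source at the phase centre
print(dirty_image(u, v, vis).shape)
```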
- Open Access: Accelerating radio transient detection using the Bispectrum algorithm and GPGPU (2015). Lin, Tsu-Shiuan; Gain, James; Armstrong, Richard. Modern radio interferometers such as those in the Square Kilometre Array (SKA) project are powerful tools to discover completely new classes of astronomical phenomena. Amongst these phenomena are radio transients. Transients are bursts of electromagnetic radiation, and they are an exciting area of research because localizing pulsars (transient emitters) allows physicists to test and formulate theories on strong gravitational forces. Current methods for detecting transients require an image of the sky to be produced at every time step. Since larger interferometers provide larger data sets, the computational demands of producing these images become infeasible. Law and Bower (2012) formulated a different approach using a closure quantity known as the "bispectrum": the product of visibilities around a closed loop of antennae. The proposed algorithm has been shown to be easily parallelized and suitable for Graphics Processing Units (GPUs). Recent advancements in many-core technology such as GPUs have demonstrated significant performance enhancements for many scientific applications, but a GPU implementation of the bispectrum algorithm had yet to be explored. In this thesis, we present a number of modified implementations of the bispectrum algorithm, allowing both instruction-level and data-level parallelism. Firstly, a multi-threaded CPU version is developed in C++ using OpenMP and then compared to a GPU version developed using the Compute Unified Device Architecture (CUDA). In order to verify the validity of the implementations presented, they were first run on simulated data created with MeqTrees, a tool for simulating transients developed by the SKA. Thereafter, data from the Karl G. Jansky Very Large Array (JVLA) containing the B0355+54 pulsar was used to test the implementation on real data. This research concludes that the bispectrum algorithm is well suited to both CPU and GPU implementations: we achieved a 3.2x speed-up on a 4-core multi-threaded CPU implementation over a single-threaded implementation, and the GPU implementation on a GTX 670 achieved about a 20x speed-up over the multi-threaded CPU implementation. These results show that the bispectrum algorithm will open doors to a series of efficient transient surveys suitable for modern data-intensive radio interferometers.
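A minimal NumPy sketch of the closure quantity itself (not the thesis's OpenMP or CUDA code): the bispectrum of a triad (i, j, k) is the product of the visibilities around the closed loop, V_ij · V_jk · V_ki, and a simple detection statistic averages it over all antenna triads. The array layout and random test data below are illustrative assumptions.

```python
# Illustrative sketch of the bispectrum closure quantity, averaged over
# all antenna triads for a single time step.
import itertools
import numpy as np

def mean_bispectrum(vis, n_ant):
    """vis: complex array where vis[i, j] holds the visibility of baseline (i, j), i < j."""
    def V(a, b):
        # A reversed baseline is the complex conjugate of the forward one.
        return vis[a, b] if a < b else np.conj(vis[b, a])

    triads = itertools.combinations(range(n_ant), 3)
    b = [V(i, j) * V(j, k) * V(k, i) for i, j, k in triads]
    return np.mean(b)

# Example: 5 antennas, random visibilities for one time step.
rng = np.random.default_rng(0)
n_ant = 5
vis = rng.normal(size=(n_ant, n_ant)) + 1j * rng.normal(size=(n_ant, n_ant))
print(abs(mean_bispectrum(vis, n_ant)))
```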
- Open Access: Addition of flexible linkers to GPU-accelerated coarse-grained simulations of protein-protein docking (2019). Pinska, Adrianna; Kuttel, Michelle; Gain, James; Best, Robert. Multiprotein complexes are responsible for many vital cellular functions, and understanding their formation has many applications in medical research. Computer simulation has become a valuable tool in the study of biochemical processes, but simulation of large molecular structures such as proteins on a useful scale is computationally expensive. A compromise must be made between the level of detail at which a simulation can be performed, the size of the structures which can be modelled and the time scale of the simulation. Techniques which can be used to reduce the cost of such simulations include the use of coarse-grained models and parallelisation of the code. Parallelisation has recently been made more accessible by the advent of Graphics Processing Units (GPUs), a consumer technology which has become an affordable alternative to more specialised parallel hardware. We extend an existing implementation of a Monte Carlo protein-protein docking simulation using the Kim and Hummer coarse-grained protein model [1] on a heterogeneous GPU-CPU architecture [2]. This implementation has achieved a significant speed-up over previous serial implementations as a result of the efficient parallelisation of its expensive non-bonded potential energy calculation on the GPU. Our contribution is the addition of the optional capability for modelling flexible linkers between rigid domains of a single protein. We implement additional Monte Carlo mutations to allow for movement of residues within linkers, and for movement of domains connected by a linker with respect to each other. We also add potential terms for pseudo-bonds, pseudo-angles and pseudo-torsions between residues to the potential calculation, and include additional residue pairs in the non-bonded potential sum. Our flexible linker code has been tested, validated and benchmarked. We find that the implementation is correct, and that the addition of the linkers does not significantly impact the performance of the simulation. This modification may be used to enable fast simulation of the interaction between component proteins in a multiprotein complex, in configurations which are constrained to preserve particular linkages between the proteins. We demonstrate this utility with a series of simulations of diubiquitin chains, comparing the structure of chains formed through all known linkages between two ubiquitin monomers. We find reasonable agreement between our simulated structures and experimental data on the characteristics of diubiquitin chains in solution.
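As an illustration of the kind of term added to the potential, the sketch below evaluates a harmonic pseudo-bond energy between consecutive linker residues. The functional form and the constants k and r0 are placeholder assumptions, not the parameters or exact terms of the Kim and Hummer model.

```python
# Illustrative sketch (assumed harmonic form, placeholder constants):
# a pseudo-bond energy between consecutive linker residues, of the kind
# that would be added on top of the non-bonded potential sum.
import numpy as np

def pseudo_bond_energy(coords, k=10.0, r0=3.8):
    """coords: (N, 3) positions of consecutive linker residues (angstroms)."""
    d = np.linalg.norm(np.diff(coords, axis=0), axis=1)   # consecutive distances
    return 0.5 * k * np.sum((d - r0) ** 2)

linker = np.array([[0.0, 0.0, 0.0], [3.9, 0.0, 0.0], [7.5, 0.5, 0.0]])
print(pseudo_bond_energy(linker))
```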
- Open Access: An Adjectival Interface for procedural content generation (2008). Hultquist, Carl; Gain, James; Cairns, David. In this thesis, a new interface for the generation of procedural content is proposed, in which the user describes the content that they wish to create by using adjectives. Procedural models are typically controlled by complex parameters and often require expert technical knowledge. Since people communicate with each other using language, an adjectival interface to the creation of procedural content is a natural step towards addressing the needs of non-technical and non-expert users. The key problem addressed is that of establishing a mapping between adjectival descriptors and the parameters employed by procedural models. We show how this can be represented as a mapping between two multi-dimensional spaces, adjective space and parameter space, and approximate the mapping by applying novel function approximation techniques to points of correspondence between the two spaces. These corresponding point pairs are established through a training phase, in which random procedural content is generated and then described, allowing one to map from parameter space to adjective space. Since we ultimately seek a means of mapping from adjective space to parameter space, particle swarm optimisation is employed to select a point in parameter space that best matches any given point in adjective space. The overall result is a system in which the user can specify adjectives that are then used to create appropriate procedural content, by mapping the adjectives to a suitable set of procedural parameters and employing the standard procedural technique using those parameters as inputs. In this way, none of the control offered by procedural modelling is sacrificed: although the adjectival interface is simpler, it can at any point be stripped away to reveal the standard procedural model and give users access to the full set of procedural parameters. As such, the adjectival interface can be used for rapid prototyping to create an approximation of the content desired, after which the procedural parameters can be used to fine-tune the result. The adjectival interface also serves as a means of intermediate bridging, affording users a more comfortable interface until they are fully conversant with the technicalities of the underlying procedural parameters. Finally, the adjectival interface is compared and contrasted to an interface that allows for direct specification of the procedural parameters. Through user experiments, it is found that the adjectival interface presented in this thesis is not only easier to use and understand, but also that it produces content which more accurately reflects users' intentions.
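The inverse mapping step can be sketched as follows: given a forward approximation from parameter space to adjective space, a particle swarm searches parameter space for a point whose predicted adjective ratings are closest to the user's description. The forward map `predict_adjectives`, the swarm constants and the two-adjective target below are placeholder assumptions, not the thesis's trained approximator.

```python
# Illustrative sketch: particle swarm optimisation over parameter space,
# minimising the distance between predicted and requested adjective ratings.
import numpy as np

rng = np.random.default_rng(1)

def predict_adjectives(params):
    # Placeholder forward map from 3-D parameter space to 2-D adjective space.
    return np.tanh(params @ np.array([[0.8, -0.2], [0.1, 0.9], [0.5, 0.4]]))

def pso(target, dim=3, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    def cost(p):
        return np.linalg.norm(predict_adjectives(p) - target, axis=-1)

    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros_like(pos)
    best_p, best_c = pos.copy(), cost(pos)
    for _ in range(iters):
        g = best_p[np.argmin(best_c)]                   # global best particle
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (best_p - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, -1, 1)
        c = cost(pos)
        improved = c < best_c
        best_p[improved], best_c[improved] = pos[improved], c[improved]
    return best_p[np.argmin(best_c)]

target_adjectives = np.array([0.6, -0.3])               # the user's description
print(pso(target_adjectives))
```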
- Open Access: Comparison of layered surface visualization through animated particles and rocking (2010). Lane, James; Gain, James. Visualizations that show the shape of and spatial relationships between layers of surfaces are useful to oceanographers studying water masses or oncologists planning radiation treatments. The shape of and distances between layers are effectively visualized by displaying the surfaces semi-transparently, sparsely covered with opaque markings. However, it becomes difficult to distinguish the shapes and differentiate between the markings when showing more than two surfaces. Further, finding optimal sizes and numbers of markings for the different layers, so as to best display the surfaces, requires tedious manual effort. This dissertation firstly investigates animation of the opaque markings as a means of enhancing these visualizations. Such a Kinetic Visualization approach has several potential benefits: the perceptual grouping effect of similar motion helps distinguish between markings on separate layers; occlusions are modulated as the markings move, allowing a viewer to assemble an integrated mental image of otherwise partially obscured surfaces; and markings that follow certain trajectories, such as surface curvatures, contribute to a better understanding of shape. Markings are also spread out in relation to the viewpoint, reducing complete occlusions. Secondly, a computational model of human perception of a single surface is extended to layered surfaces by modelling processes of perceptual grouping and surface completion, incorporating relatability criteria. This model is intended to mimic a person's perception of layered surfaces, and is used to measure the effectiveness of our visualizations within an optimization framework, allowing optimal visualization settings to be automatically determined. Visualization enhancements through animation were evaluated through a user experiment comparing pendulum-style rocking, static renderings and Kinetic Visualization on sets of two surfaces. This showed that rocking alone results in more accurate depth judgements, indicating that the "Kinetic Depth Effect" is not induced by Kinetic Visualization. A follow-up experiment revealed that a combination of rocking and Kinetic Visualization is more useful than rocking alone for feature identification tasks when displaying four layers. Our perceptual model was evaluated in an experiment in which sets of layered surfaces were displayed using a range of different visualization settings and respondents recreated surfaces matching their perception. Comparing our model's evaluation of the different visualizations showed a weak linear correlation with the accuracy of the participants' perception of the surfaces. This research shows that modelling perception of layered surfaces is a grand challenge and highlights the foundational problem of predicting the significant variation that may arise between non-homogeneous participants.
- Open Access: A comparison of statistical and geometric reconstruction techniques: guidelines for correcting fossil hominin crania (2007). Neeser, Rudolph; Gain, James; Ackermann, Rebecca Rogers. The study of human evolution centres, to a large extent, on the study of fossil morphology, including the comparison and interpretation of these remains within the context of what is known about morphological variation within living species. However, many fossils suffer from environmentally caused damage (taphonomic distortion) which hinders any such interpretation: fossil material may be broken and fragmented, while the weight and motion of overlying sediments can cause plastic distortion. To date, a number of studies have focused on the reconstruction of such taphonomically damaged specimens. These studies have used myriad approaches to reconstruction, including thin plate spline methods, mirroring, and regression-based approaches. The efficacy of these techniques remains to be demonstrated, and it is not clear how different parameters (e.g., sample sizes, landmark density, etc.) might affect their accuracy. In order to partly address this issue, this thesis examines three techniques used in the virtual reconstruction of fossil remains by statistical or geometrical means: mean substitution, thin plate spline warping (TPS), and multiple linear regression.
- Open Access: Data preparation and visualization for the SWAN refraction model (2003). Green, Nicholas; Gain, James. This research and development project seeks to provide a usable interactive graphical interface to an environment that otherwise involves primarily numerical data in a static, non-interactive format. Tools will be developed that enable users to prepare the numerical data required for the SWAN refraction model and to visualize the results in an interactive three-dimensional graphical context. SWAN (an acronym for Simulating WAves Nearshore) is a numerical wave model that is used to predict wave parameters according to a given set of conditions. The design of the 2-D and 3-D graphical interfaces and their impact on the system will be discussed.
- Open Access: Digital reconstruction of District Six architecture from archival photographs (2007). De Kadt, Christopher R J; Gain, James; Marais, Patrick. In this thesis we present a strategy for reconstructing instances of District Six architecture from small sets of old, uncalibrated photographs located in the District Six Museum photographic archive. Our reconstruction strategy comprises two major parts. First, we implement a geometry reconstruction framework based on work by Debevec et al. [1996]. This is used to reconstruct the geometry of a building given as little input as a single photograph. The approach used in this framework requires the user to design a basic model representing the building at hand, using a set of geometric primitives, and then define correspondences between the edges of this model and the edges of the building that are visible in the photographs. This approach is effective, as constraints inherent in the geometry of architectural scenes are exploited through the use of these primitives. The second component of the reconstruction strategy involves texturing the reconstructed models. To accomplish this, we use a combination of the original textures extracted from the photographs and synthesized textures generated from samples of the original textures. For each face of the reconstructed model, the user is able to use either the original texture material, synthesized material, or a combination of both to create desirable results. Finally, to illustrate the effectiveness of our reconstruction strategy, we consider three example cases of District Six architecture and their reconstructions. All three examples were reconstructed successfully, and using findings from these results, critical analyses of both aspects of our strategy are presented.
- Open Access: Distributed texture-based terrain synthesis (2011). Tasse, Flora Ponjou; Gain, James; Marais, Patrick. Terrain synthesis is an important field of Computer Graphics that deals with the generation of 3D landscape models for use in virtual environments. The field has evolved to a stage where large and even infinite landscapes can be generated in real time. However, user control of the generation process is still minimal, as is the creation of virtual landscapes that mimic real terrain. This thesis investigates the use of texture synthesis techniques on real landscapes to improve realism, and the use of sketch-based interfaces to enable intuitive user control.
- Open Access: Enhancing colour-coded poll sheets using computer vision as a viable Audience Response System (ARS) in Africa (2018). Muchaneta, Irikidzai Zorodzai; Gain, James; Marais, Patrick. Audience Response Systems (ARS) give a facilitator accurate feedback on a question posed to the listeners. The most common form of ARS is the clicker, a handheld response gadget that acts as a medium of communication between the students and the facilitator. Clickers are prohibitively expensive, creating a need for low-cost alternatives with high accuracy. This study builds on earlier research by Gain (2013), which aims to show that computer vision and coloured poll sheets can be an alternative to clicker-based ARS. This thesis examines a proposal to create an alternative to clickers applicable to the African context, where the main deterrent is cost, and studies the computer vision stages of feature detection, extraction and recognition. In this research project, an experimental study was conducted in various lecture theatres with class sizes ranging from 50 to 150 students. Python and OpenCV tools were used to analyze the photographs, document the performance and observe the conditions under which results were acquired. The research achieved an average detection rate of 75%, which points to a promising alternative audience response system as measured by time, cost and error rate. Further work on the capture of the poll sheet would significantly increase this result. With regards to cost, the computer vision coloured poll sheet alternative is significantly cheaper than clickers.
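A minimal sketch of the detection idea (assuming OpenCV 4.x; the HSV colour ranges, minimum blob area and file name are hypothetical placeholders, not the calibrated values from the study): threshold each answer colour in HSV space and count sufficiently large blobs as votes.

```python
# Illustrative sketch: count coloured poll-sheet responses in a photograph
# by thresholding each answer colour and counting large connected blobs.
import cv2
import numpy as np

# Hypothetical HSV ranges for each answer colour (A=red, B=green, C=blue, D=yellow).
COLOUR_RANGES = {
    "A": ((0, 120, 80), (10, 255, 255)),
    "B": ((45, 80, 80), (75, 255, 255)),
    "C": ((100, 120, 80), (130, 255, 255)),
    "D": ((20, 120, 80), (35, 255, 255)),
}
MIN_AREA = 200  # ignore small colour blobs (noise)

def count_votes(image_path):
    img = cv2.imread(image_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    counts = {}
    for answer, (lo, hi) in COLOUR_RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo, dtype=np.uint8),
                           np.array(hi, dtype=np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        counts[answer] = sum(1 for c in contours if cv2.contourArea(c) > MIN_AREA)
    return counts

# Example (hypothetical file name):
# print(count_votes("lecture_theatre.jpg"))
```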
- Open Access: Evolving controllable emergent crowd behaviours with Neuro-Evolution (2015). Wang, Sunrise; Gain, James; Nitschke, Geo. Crowd simulations have become increasingly popular in films over the past decade, appearing in large crowd shots of many big-name blockbuster films. An important requirement for crowd simulations in films is that they should be directable both at a high and a low level, and be believable. As agent-based techniques allow for low-level directability and more believable crowds, they are typically used in this field. However, due to the bottom-up nature of these techniques, achieving high-level directability requires the modification of agent-level parameters until the desired crowd behaviour emerges. As manually adjusting parameters is a time-consuming and tedious process, this thesis investigates a method for automating it using Neuro-Evolution (NE). This is achieved by using Artificial Neural Networks as the agent controllers within an animated scene, and evolving these with an Evolutionary Algorithm so that the agents behave as desired. To this end, this thesis proposes, implements, and evaluates a system that allows for the low-level control of crowds using NE. Overall, this approach shows very promising results, with the time taken to achieve the desired crowd behaviours being on par with or faster than previous methods.
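A toy sketch of the neuro-evolution loop described above: a small feed-forward network, shared by all agents, maps a goal offset to a velocity, and an evolutionary algorithm mutates the network weights until the crowd converges on the goal. The network architecture, mutation scheme and fitness function are simplified assumptions rather than the system evaluated in the thesis.

```python
# Illustrative sketch: evolve the weights of a tiny shared agent controller
# so that the crowd moves towards a goal position.
import numpy as np

rng = np.random.default_rng(2)
N_AGENTS, STEPS, HIDDEN = 20, 50, 4
GOAL = np.array([5.0, 5.0])

def simulate(weights):
    w1 = weights[:2 * HIDDEN].reshape(2, HIDDEN)
    w2 = weights[2 * HIDDEN:].reshape(HIDDEN, 2)
    pos = rng.normal(size=(N_AGENTS, 2))
    for _ in range(STEPS):
        inp = GOAL - pos                       # each agent senses the goal offset
        vel = np.tanh(np.tanh(inp @ w1) @ w2)  # network output = velocity
        pos = pos + 0.2 * vel
    return -np.mean(np.linalg.norm(GOAL - pos, axis=1))  # fitness: closer is better

def evolve(pop_size=30, generations=40, sigma=0.3):
    dim = 2 * HIDDEN + HIDDEN * 2
    pop = rng.normal(size=(pop_size, dim))
    for _ in range(generations):
        fitness = np.array([simulate(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]   # keep the best half
        children = parents + sigma * rng.normal(size=parents.shape)
        pop = np.vstack([parents, children])
    return pop[np.argmax([simulate(ind) for ind in pop])]

best = evolve()
print("best fitness:", simulate(best))
```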
- Open Access: Fast, realistic terrain synthesis (2015). Crause, Justin; Gain, James; Marais, Patrick. The authoring of realistic terrain models is necessary to generate immersive virtual environments for computer games and film visual effects. However, creating these landscapes is difficult: it usually involves an artist spending many hours sculpting a model in a 3D design program. Specialised terrain generation programs, such as Bryce (2013) and Terragen (2013), exist to rapidly create artificial terrains. These make use of complex algorithms to pseudo-randomly generate the terrains, which can then be exported into a 3D editing program for fine-tuning. Height-maps are a 2D data structure that stores elevation values and can be used to represent terrain data; they are also a common format used with terrain generation and editing systems. Height-maps share the same storage design as image files, so they can be viewed like any picture and image transformation algorithms can be applied to them. Early techniques for generating terrains include fractal generation and physical simulation. These methods proved difficult to use, as the algorithms are manipulated through a set of parameters whose effects are not known in advance, which results in the user changing values over several iterations to produce their desired terrain. An improved technique, known as texture-based terrain synthesis, brings a higher degree of user control as well as improved realism. This borrows from texture synthesis, the process of algorithmically generating a larger image from a smaller sample image. Texture-based terrain synthesis makes use of real-world terrain data to produce highly realistic landscapes, which improves upon previous techniques. Recent work in texture-based synthesis has focused on improving both realism and user control through the use of sketching interfaces. We present a patch-based terrain synthesis system that utilises a user sketch to control the location of desired terrain features, such as ridges and valleys. Digital Elevation Models (DEMs) of real landscapes are used as exemplars, from which candidate patches of data are extracted and matched against the user's sketch. The best candidates are merged seamlessly into the final terrain. Because real landscapes are used, the resulting terrain appears highly realistic. Our research contributes a new version of this approach that employs multiple input terrains and acceleration using a modern Graphics Processing Unit (GPU). The use of multiple inputs increases the candidate pool of patches and thus the system is capable of producing more varied terrains. This addresses the limitation whereby supplying the wrong type of input terrain would fail to synthesise anything useful, for example supplying the system with a mountainous DEM and expecting deep valleys in the output. We developed a hybrid multithreaded CPU and GPU implementation that achieves a 45 times speedup.
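The core matching step can be sketched as a brute-force search: candidate patches from several exemplar DEMs are compared against a patch of the user's sketch under a mean-removed sum-of-squared-differences cost, and the best candidate is returned for merging. The patch size, stride and random stand-in data are illustrative assumptions; the actual system accelerates this search on the GPU and merges patches seamlessly.

```python
# Illustrative sketch: pick the exemplar patch whose shape best matches a
# sketch patch under mean-removed sum-of-squared-differences.
import numpy as np

def best_patch(sketch_patch, exemplars, stride=4):
    k = sketch_patch.shape[0]
    target = sketch_patch - sketch_patch.mean()
    best, best_err = None, np.inf
    for dem in exemplars:
        for y in range(0, dem.shape[0] - k + 1, stride):
            for x in range(0, dem.shape[1] - k + 1, stride):
                cand = dem[y:y + k, x:x + k]
                err = np.sum((cand - cand.mean() - target) ** 2)
                if err < best_err:
                    best, best_err = cand, err
    return best, best_err

rng = np.random.default_rng(3)
exemplars = [rng.random((128, 128)) for _ in range(2)]   # stand-in DEMs
sketch = rng.random((16, 16))                            # stand-in sketch patch
patch, err = best_patch(sketch, exemplars)
print(patch.shape, err)
```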
- Open Access: Field D* pathfinding in weighted simplicial complexes (2013). Perkins, Simon; Marais, Patrick; Gain, James. The development of algorithms to efficiently determine an optimal path through a complex environment is a continuing area of research within Computer Science. When such environments can be represented as a graph, established graph search algorithms, such as Dijkstra's shortest path and A*, can be used. However, many environments are constructed from a set of regions that do not conform to a discrete graph. The Weighted Region Problem was proposed to address the problem of finding the shortest path through a set of such regions, weighted with values representing the cost of traversing each region. Robust solutions to this problem are computationally expensive, since finding shortest paths across a region requires expensive minimisation. Sampling approaches construct graphs by introducing extra points on region edges and connecting them with edges criss-crossing the region; Dijkstra or A* is then applied to compute shortest paths. The connectivity of these graphs is high, and such techniques are thus not particularly well suited to environments where the weights and representation change frequently. The Field D* algorithm, by contrast, computes the shortest path across a grid of weighted square cells and has replanning capabilities that cater for environmental changes. However, representing an environment as a weighted grid (an image) is not space-efficient, since high resolution is required to produce accurate paths through areas containing features sensitive to noise. In this work, we extend Field D* to weighted simplicial complexes: specifically, triangulations in 2D and tetrahedral meshes in 3D.
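For contrast with Field D*, the sketch below shows the simplest grid baseline: Dijkstra over 8-connected weighted cells, where the cost of a move is the step length multiplied by the weight of the cell being entered. Field D* improves on this by interpolating costs along cell edges (so paths need not pass through cell centres) and by replanning incrementally; the small weighted grid is an illustrative example only.

```python
# Illustrative baseline, not Field D* itself: Dijkstra on a grid of weighted
# cells with 8-connected moves.
import heapq
import math

def grid_dijkstra(weights, start, goal):
    rows, cols = len(weights), len(weights[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), math.inf):
            continue                       # stale queue entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    nd = d + math.hypot(dr, dc) * weights[nr][nc]
                    if nd < dist.get((nr, nc), math.inf):
                        dist[(nr, nc)] = nd
                        heapq.heappush(pq, (nd, (nr, nc)))
    return math.inf

terrain = [[1, 1, 1, 1],
           [1, 5, 5, 1],
           [1, 5, 5, 1],
           [1, 1, 1, 1]]   # higher weight = more expensive to traverse
print(grid_dijkstra(terrain, (0, 0), (3, 3)))
```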
- Open Access: High fidelity compression of irregularly sampled height fields (2007). Marais, Patrick; Gain, James. This paper presents a method to compress irregularly sampled height-fields based on a multi-resolution framework. Unlike many other height-field compression techniques, no resampling is required, so the original height-field data is recovered (less quantization error). The method decomposes the compression task into two complementary phases: an in-plane compression scheme for (x, y) coordinate positions, and a separate multi-resolution z compression step. This decoupling allows subsequent improvements in either phase to be seamlessly integrated, and also allows for independent control of bit-rates in the decoupled dimensions, should this be desired. Results are presented for a number of height-field sample sets quantized to 12 bits for each of x and y, and 10 bits for z. Total lossless encoded data sizes range from 11 to 24 bits per point, with z bit-rates lying in the range 2.9 to 8.1 bits per z coordinate. Lossy z bit-rates (we do not lossily encode x and y) lie in the range 0.7 to 5.9 bits per z coordinate, with a worst-case root-mean-squared (RMS) error of less than 1.7% of the z range. Even with aggressive lossy encoding, at least 40% of the point samples are perfectly reconstructed.
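A small sketch of the kind of quantisation figure quoted above: uniformly quantising z to 10 bits and measuring the RMS reconstruction error as a percentage of the z range. This is only the scalar quantisation step on synthetic data, not the multi-resolution coder itself.

```python
# Illustrative sketch: 10-bit uniform quantisation of z values and the
# resulting RMS error expressed as a percentage of the z range.
import numpy as np

rng = np.random.default_rng(4)
z = rng.random(10_000) * 1500.0          # stand-in elevation samples (metres)

bits = 10
z_min, z_range = z.min(), z.max() - z.min()
q = np.round((z - z_min) / z_range * (2 ** bits - 1))      # quantise
z_hat = q / (2 ** bits - 1) * z_range + z_min              # dequantise

rms = np.sqrt(np.mean((z - z_hat) ** 2))
print(f"RMS error: {100 * rms / z_range:.3f}% of the z range")
```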
- Open Access: High-level control of agent-based crowds by means of general constraints (2009). Jacka, David; Gain, James; Marais, Patrick. The use of computer-generated crowds in visual effects has grown tremendously since the warring armies of virtual orcs and elves were seen in The Lord of the Rings. These crowds are generated by agent-based simulations, where each agent has the ability to reason and act for itself. This autonomy is effective at automatically producing realistic, complex group behaviour, but leads to problems in controlling the crowds. Due to interaction between crowd members, the link between the behaviour of the individual and that of the whole crowd is not obvious. Control of a crowd's behaviour is therefore time-consuming and frustrating, as manually editing the behaviour of individuals is often the only control approach available. This problem of control has not been widely addressed in crowd simulation research.
- Open Access: Interactive 3-D spatial analysis in a virtual reality environment (2004). Bhunu, Solomon Tichaona; Rüther, Heinz; Gain, James. The emergence of virtual reality and related tools provides the fundamental infrastructure to begin building virtual cities, which can offer an interactive simulation and analysis environment for planning and management of urban places. The virtual city will provide urban managers with a computer environment to interface with the multitude of complex physical and social data needed to plan and manage cities. A range of innovative technologies is being developed that offer different ways of modelling and representing built form and associated urban information, with real-time interaction over the Internet. For all these efforts in technological development, one of the main topical issues remains the development of a representation (data structure) that is capable of both static and dynamic spatial analysis operations. This research focuses on developing a 3-D data representation for urban management which fully supports both static and dynamic spatial analysis operations. It further explores the possibilities inherent in a hybrid of the Boundary representation (B-rep) and distance field modelling (a technique which is finding application in 3-D medical imaging). The research analyses existing B-reps before developing the form which could most easily be integrated with the Distance Field (DF). Since this is the first known research applying DFs in urban GIS, the research further offers the design and adoption of distance field maps. Further designs are undertaken for the algorithms that allow dynamic analysis operations to be implemented within the DF environment. The conceptual design is mapped, through Entity-Relationship modelling, into a Database Management System (DBMS). The B-rep component is maintained within the DBMS, whilst the DF component is generated on the fly. For the distributed application development, a 3-tier approach that merges the client side (web browser), application server and database server is proposed. Based on this approach, a Web-based prototype toolkit is designed and implemented using affordable off-the-shelf software applications and resources that are relatively easy to set up and use, and that require only the standard PC processor power available to a home user with a modem link (i.e. not a high-end graphics workstation). The novel aspects of this thesis can be summarised as follows: (1) the use of a hybrid representation is new in 3-D GISs; (2) the use of Distance Fields and the development of related spatial analysis operations are new in Geo-Information Systems, and the research proposes a new distance field modelling approach, the single Distance Field (see Chapter 5); (3) the implementation environment makes use of existing tools and integrates them in a novel way.
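To illustrate the distance-field side of the hybrid representation, the sketch below computes a 2-D distance field map over a toy occupancy grid with SciPy and uses it for a simple buffer-style query. The grid, footprint and buffer radius are illustrative assumptions and only hint at the 3-D B-rep/DF design developed in the thesis.

```python
# Illustrative sketch: a distance field map over a 2-D occupancy grid,
# supporting a buffer-style query ("all cells within 3 units of a building").
import numpy as np
from scipy import ndimage

occupancy = np.zeros((8, 8), dtype=bool)
occupancy[2:4, 2:5] = True            # a stand-in building footprint

# distance_transform_edt measures distance to the nearest zero, so invert
# the mask so that building cells are the zeros.
dist = ndimage.distance_transform_edt(~occupancy)

within_buffer = dist <= 3.0           # simple buffer query
print(np.round(dist, 1))
print(int(within_buffer.sum()), "cells lie within 3 units of the building")
```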
- Open Access: Lattice Boltzmann liquid simulations on graphics hardware (2014). Clough, Duncan; Gain, James; Kuttel, Michelle Mary. Fluid simulation is widely used in the visual effects industry. The high level of detail required to produce realistic visual effects requires significant computation. Usually, expensive computer clusters are used in order to reduce the time required. However, general purpose Graphics Processing Unit (GPU) computing has potential as a relatively inexpensive way to reduce these simulation times. In recent years, GPUs have been used to achieve enormous speedups via their massively parallel architectures. Within the field of fluid simulation, the Lattice Boltzmann Method (LBM) stands out as a candidate for GPU execution because its grid-based structure is a natural fit for GPU parallelism. This thesis describes the design and implementation of a GPU-based free-surface LBM fluid simulation. Broadly, our approach is to ensure that the steps that perform most of the work in the LBM (the stream and collide steps) make efficient use of GPU resources. We achieve this by removing complexity from the core stream and collide steps and handling interactions with obstacles and tracking of the fluid interface in separate GPU kernels. To determine the efficiency of our design, we perform separate, detailed analyses of the performance of the kernels associated with the stream and collide steps of the LBM. We demonstrate that these kernels make efficient use of GPU resources and achieve speedups of 29.6x and 223.7x, respectively. Our analysis of the overall performance of all kernels shows that significant time is spent performing obstacle adjustment and interface movement as a result of limitations associated with GPU memory accesses. Lastly, we compare our GPU LBM implementation with a single-core CPU LBM implementation. Our results show speedups of up to 81.6x with no significant differences in output from the simulations on both platforms. We conclude that order of magnitude speedups are possible using GPUs to perform free-surface LBM fluid simulations, and that GPUs can, therefore, significantly reduce the cost of performing high-detail fluid simulations for visual effects.
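A compact NumPy sketch of the two steps the GPU kernels accelerate, namely stream and collide, for a D2Q9 BGK lattice on a periodic domain with no free surface or obstacles. The grid size, relaxation time and initial density bump are arbitrary demonstration choices, not the thesis's configuration.

```python
# Illustrative D2Q9 lattice Boltzmann sketch: stream and collide only.
import numpy as np

NX, NY, TAU = 64, 64, 0.8
# D2Q9 lattice velocities and weights.
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    feq = np.empty((9, NX, NY))
    usq = ux**2 + uy**2
    for i, (ex, ey) in enumerate(E):
        eu = ex*ux + ey*uy
        feq[i] = W[i] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*usq)
    return feq

rho = np.ones((NX, NY))
rho[NX//2 - 4:NX//2 + 4, NY//2 - 4:NY//2 + 4] = 1.05   # small density bump
f = equilibrium(rho, np.zeros((NX, NY)), np.zeros((NX, NY)))

for step in range(100):
    # Stream: shift each distribution along its lattice velocity (periodic).
    for i, (ex, ey) in enumerate(E):
        f[i] = np.roll(np.roll(f[i], ex, axis=0), ey, axis=1)
    # Collide: relax towards the local equilibrium (BGK).
    rho = f.sum(axis=0)
    ux = (f * E[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * E[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / TAU

print("mass conserved:", np.isclose(f.sum(), NX * NY * 1.0 + 64 * 0.05))
```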
- Open Access: A linear framework for character skinning (2007). Merry, Bruce; Marais, Patrick; Gain, James. Character animation is the process of modelling and rendering a mobile character in a virtual world. It has numerous applications both off-line, such as virtual actors in films, and real-time, such as in games and other virtual environments. There are a number of algorithms for determining the appearance of an animated character, with different trade-offs between quality, ease of control, and computational cost. We introduce a new method, animation space, which provides a good balance between the ease of use of very simple schemes and the quality of more complex schemes, together with excellent performance. It can also be integrated into a range of existing computer graphics algorithms. Animation space is described by a simple and elegant linear equation. Apart from making it fast and easy to implement, linearity facilitates mathematical analysis. We derive two metrics on the space of vertices (the "animation space"), which indicate the mean and maximum distances between two points on an animated character. We demonstrate the value of these metrics by applying them to the problems of parametrisation, level-of-detail (LOD) and frustum culling. These metrics provide information about the entire range of poses of an animated character, so they are able to produce better results than considering only a single pose of the character, as is commonly done.
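For orientation, a hedged sketch of the linear structure involved (our notation, not necessarily the dissertation's): standard linear blend skinning computes a vertex as a weight-blended sum of bone transforms applied to the rest-pose position, and animation space, as we understand the construction, folds the weights into per-bone rest positions so that the whole expression becomes a single matrix-vector product.

```latex
% Linear blend skinning with bone transforms G_i, weights w_i (summing to 1)
% and rest-pose vertex \hat{v}; folding w_i into p_i = w_i \hat{v} gives a
% form that is linear in the concatenated vector p (our sketch, not
% necessarily the dissertation's exact formulation).
v \;=\; \sum_{i=1}^{b} w_i \, G_i \, \hat{v}
  \;=\; \sum_{i=1}^{b} G_i \, \underbrace{(w_i \hat{v})}_{p_i}
  \;=\; G \, p .
```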