Browsing by Author "Yacoob, Sahal"
Now showing 1 - 14 of 14
- [Open Access] A search for tWZ production in the Full Run 2 ATLAS dataset using events with four leptons (2021) Reich, Jake; Keaveney, James; Yacoob, Sahal
- [Open Access] Analysis of a deep neural network for missing transverse momentum reconstruction in ATLAS (2020) Leigh, Matthew; Yacoob, Sahal; Young, Christopher. The ATLAS detector is a multipurpose particle detector built to record almost all possible decay products of the high-energy proton-proton collisions provided by the Large Hadron Collider. The presence and combined kinematics of unobserved particles can be inferred from the observed momentum imbalance in the transverse plane. In this work, a deep neural network was trained using supervised learning to measure this imbalance. The performance of this network was evaluated in MC simulation and in 43 fb⁻¹ of data recorded at ATLAS. The network offered superior resolution and significantly better pileup resistance than all other pre-existing algorithms in every tested topology. The network also provided the best discriminator between events that did and did not contain neutrinos. The potential gain in sensitivity to new physics was demonstrated by using this network in a search for the electroweak production of supersymmetric particles. The expected sensitivity to observe the production of said particles was increased by up to 26%.
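The quantity the network above is trained to regress is, at its simplest, the negative vector sum of the transverse momenta of all reconstructed objects. A minimal sketch of that baseline computation (the network itself and the ATLAS object reconstruction are not reproduced here):

```python
import math

def missing_et(objects):
    """objects: list of (pt, phi) for reconstructed particles.
    Returns (magnitude, phi) of the missing transverse momentum:
    the negative vector sum of the visible transverse momenta,
    attributed to invisible particles such as neutrinos."""
    px = -sum(pt * math.cos(phi) for pt, phi in objects)
    py = -sum(pt * math.sin(phi) for pt, phi in objects)
    return math.hypot(px, py), math.atan2(py, px)

# Two back-to-back objects balance each other: essentially no imbalance.
met, _ = missing_et([(50.0, 0.0), (50.0, math.pi)])
# A single unbalanced object implies missing momentum recoiling against it.
met2, phi2 = missing_et([(40.0, 0.0)])
```

Real algorithms differ mainly in which objects enter the sum and how pileup contributions are suppressed, which is where the learned approach gains its advantage.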
- [Open Access] Anomaly detection with data quality early warning systems in ATLAS (2023) Msutwana, Senzo; Yacoob, Sahal; Keaveney, James. In this dissertation, the implementation of a Data-Quality Early Warning System (DQEWS) is explored. We use unsupervised Machine Learning (ML) methods to evaluate Data Quality (DQ) in the ATLAS detector. We do so by observing and quantifying the evolution of Luminosity-Block (LB) data from Inner Detector (ID) tracking information, with a single LB towards the beginning of a run used as the reference. In this way, we obtain a trajectory that describes how the recorded LB data drift over the course of a run. The version of the DQEWS algorithm presented in this dissertation is shown to correctly flag good LBs as 'good' and bad LBs as 'bad', provided that the flagging criteria are evaluated on LB datasets that lie within a similar range of instantaneous luminosity as the datasets used to construct the criteria.
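The reference-LB idea can be sketched in miniature: compare each luminosity block's normalised feature histogram against the reference block and flag blocks whose drift exceeds a threshold. The distance metric and threshold below are illustrative choices, not the ones used in the dissertation:

```python
import numpy as np

def drift_trajectory(reference, lumiblocks):
    """Distance of each LB's normalised feature histogram from the
    reference LB: a one-dimensional 'trajectory' of drift over a run.
    The L1 (total-variation-style) distance is an illustrative choice."""
    ref = reference / reference.sum()
    return [float(np.abs(h / h.sum() - ref).sum()) for h in lumiblocks]

def flag_blocks(trajectory, threshold):
    """'good' while the drift stays below threshold, 'bad' otherwise.
    The threshold would be tuned on LBs recorded at a similar
    instantaneous luminosity to the blocks being judged."""
    return ['good' if d < threshold else 'bad' for d in trajectory]

reference = np.array([10.0, 20.0, 30.0])          # toy tracking histogram
blocks = [np.array([20.0, 40.0, 60.0]),           # same shape: no drift
          np.array([30.0, 20.0, 10.0])]           # inverted shape: drifted
trajectory = drift_trajectory(reference, blocks)
flags = flag_blocks(trajectory, threshold=0.1)
```

Normalising each histogram before comparison makes the distance insensitive to the overall event count per LB, isolating shape drift.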
- [Open Access] Canonical strangeness conservation in the hadron gas model of relativistic heavy ion collisions (2001) Yacoob, Sahal. The CERN WA97 results display a strong strangeness enhancement at mid-rapidity which is dependent on the strangeness of the particle concerned, and saturates at values of participating nucleons greater than 120. These results are phenomenologically described by the mixed canonical ensemble, with canonical (exact) strangeness conservation involving all strange resonances, and grand canonical conservation of charge and baryon number. It is shown that the data are well described by an equilibrium hadron gas. Other explanations of these data are reviewed.
- [Open Access] Characterising the sources of fake leptons from top quarks in same sign W boson scattering with the ATLAS detector at √s = 13 TeV (2017) Thusini, Xolisile; Yacoob, Sahal; Hamilton, Andrew
- [Open Access] Characterizing the background in tt̄ events with a J/ψ → µ⁻µ⁺ in proton-proton collisions at √s = 13 TeV using the ATLAS detector (2019) Barends, Kevin Nicholas; Yacoob, Sahal; Andeen, Tim; Onyisi, Peter. So far, various measurements of the top quark mass have been performed using jets as the dominant experimental signature. The leading precision measurement of 172.44 ± 0.13 ± 0.47 GeV is a combination of top quark mass measurements conducted by the CMS collaboration. However, these measurements suffer from large jet reconstruction uncertainties. This study looked at a different experimental signature involving a lepton and a J/ψ, with the top quark decay mode t → W(→ ℓν) b(→ J/ψ[→ µ⁺µ⁻] + X). This signature combines the kinematics of the three leptons in the final state and is therefore not significantly dependent on the reconstructed kinematics of the jets. The statistical uncertainty in the top mass measurement was determined from the invariant mass distribution of the lepton plus J/ψ system through a template morphing maximum likelihood method, giving a value of 2.9 GeV. This signature comes with background contributions from non-prompt and mis-reconstructed leptons and from selecting J/ψ mesons which did not originate from top quark B-hadron decays. The background contribution from non-prompt and mis-reconstructed leptons, determined using the methodology common to ATLAS analyses, was found to be overestimated in the muon channel but more accurately estimated in the electron channel in the signal region. The background contribution from J/ψ mesons was determined by applying a two-dimensional fit to the mass and pseudo-proper time of the J/ψ. These background contributions were reduced by applying a tighter selection cut on the J/ψ mass and an additional selection cut on the pseudo-proper time of the J/ψ mesons.
These cuts improved the signal contribution but, due to limited statistics, could not be shown to improve the uncertainty in the mass measurement.
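The pseudo-proper time used in such a selection is conventionally defined as τ = L_xy · m(J/ψ) / p_T(J/ψ), where L_xy is the transverse decay length; prompt J/ψ mesons have τ near zero while those from B-hadron decays are displaced. A minimal sketch of the cut (the threshold value below is illustrative, not taken from the thesis):

```python
# Sketch of a pseudo-proper-time selection for J/psi candidates.
# tau = Lxy * m / pT, with Lxy the transverse decay length.
M_JPSI = 3.0969  # GeV, PDG J/psi mass

def pseudo_proper_time(lxy_mm: float, pt_gev: float, m_gev: float = M_JPSI) -> float:
    """Pseudo-proper decay time in units of mm/c."""
    return lxy_mm * m_gev / pt_gev

def passes_cut(lxy_mm: float, pt_gev: float, tau_min_mm: float = 0.1) -> bool:
    """Illustrative threshold: keep only displaced (B-like) candidates."""
    return pseudo_proper_time(lxy_mm, pt_gev) > tau_min_mm

prompt_like = passes_cut(lxy_mm=0.01, pt_gev=20.0)   # tau ~ 0.0015 mm/c
b_like = passes_cut(lxy_mm=1.5, pt_gev=20.0)         # tau ~ 0.23 mm/c
```

The cut exploits the measurable B-hadron lifetime without needing the (unmeasurable) boost of the J/ψ itself, which is why the "pseudo" proper time is used.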
- [Open Access] Measurement of the leptonic charge asymmetry in tt̄W± production using the trilepton final state in proton-proton collisions at √s = 13 TeV using the ATLAS experiment (2022) Garvey, Cameron; Keaveney, James; Yacoob, Sahal. In this dissertation, a measurement of the leptonic charge asymmetry (A_C^ℓ) in top quark pair production in association with a W boson (tt̄W±) is presented. The A_C^ℓ is sensitive to physics beyond the Standard Model, such as the axigluon, and a measurement of the A_C^ℓ could therefore prove useful in searches for new physics. The data set used in this measurement consists of proton-proton collisions at the Large Hadron Collider (LHC) at √s = 13 TeV, recorded using the ATLAS experiment and corresponding to an integrated luminosity of 139 fb⁻¹. An event selection scheme was put in place to optimally select tt̄W± events in the trilepton final state while suppressing background events. The A_C^ℓ is calculated using the pseudorapidities of the two leptons originating from the decays of the top quark and the top anti-quark. A lepton-top association was implemented using machine learning, which correctly identifies the leptons originating from top quark decays in 72% of tt̄W± events. Using the results of the lepton-top association, the A_C^ℓ was measured using the fit-across-regions (FAR) method. This method uses machine learning to separate the signal (tt̄W±) from all of the backgrounds, increasing the sensitivity to the A_C^ℓ. Using the FAR fit, a leptonic charge asymmetry of A_C^ℓ = −9% ± 22% was extracted from Asimov data, which is consistent with the Standard Model prediction of the A_C^ℓ, as expected. Results of a fit to the ATLAS data remain blinded, as this analysis forms the basis of an official ATLAS measurement which is yet to be published. The dominant source of uncertainty results from the limited size of the data set.
Further data acquired at the LHC over the next decade should reduce the impact of the dominant uncertainty on the measurement of the A_C^ℓ in tt̄W± production.
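A common way to define such an asymmetry is from the difference of absolute pseudorapidities of the two top-associated leptons; a small counting sketch under that assumption (the machine-learning association and the FAR fit are not reproduced here):

```python
def leptonic_charge_asymmetry(events):
    """events: list of (eta_lep_plus, eta_lep_minus) pairs for the two
    leptons associated with the top and anti-top decays.
    Counting definition: A_C = (N(d>0) - N(d<0)) / (N(d>0) + N(d<0)),
    with d = |eta(l+)| - |eta(l-)|."""
    n_pos = sum(1 for ep, em in events if abs(ep) - abs(em) > 0)
    n_neg = sum(1 for ep, em in events if abs(ep) - abs(em) < 0)
    total = n_pos + n_neg
    return (n_pos - n_neg) / total if total else 0.0

# Toy sample: three events with d > 0 and one with d < 0 give A_C = 0.5.
toy = [(2.0, 1.0), (1.5, 0.2), (0.8, 0.1), (0.3, 2.1)]
a_c = leptonic_charge_asymmetry(toy)
```

The measurement's sensitivity hinges on assigning the right leptons to the tops, which is why the 72% association efficiency quoted above matters.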
- [Open Access] Measurement of the top quark mass from top quarks produced in proton-proton collisions at √s = 13 TeV that decay to dimuons via a J/ψ (2023) Barends, Kevin; Yacoob, Sahal; Keaveney, James. The top quark mass is measured using tt̄ → lepton + J/ψ(→ µ⁺µ⁻) events, using proton-proton collision data collected by the ATLAS detector at a centre-of-mass energy of √s = 13 TeV during 2015-2018. The data correspond to a total integrated luminosity of 139.0 fb⁻¹. This lepton + J/ψ channel is statistically limited due to the low branching ratio of b → J/ψ(→ µ⁺µ⁻). The top quark mass is measured from template fits to the m(lepton, J/ψ) distribution, and is found to be 172.03 ± 0.76 (stat) ± 2.14 (syst) GeV.
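A template fit of this kind can be sketched as follows: binned templates are built at reference mass points, interpolated ("morphed") linearly in the mass parameter, and the mass minimising the binned Poisson negative log-likelihood of the observed m(lepton, J/ψ) distribution is taken as the measurement. A toy illustration with Gaussian-shaped templates (all shapes and numbers here are invented, not taken from the thesis):

```python
import numpy as np

def gauss_hist(edges, mean, sigma=15.0, norm=10000.0):
    """Binned Gaussian template: a toy stand-in for a simulated
    m(lepton, J/psi) distribution at a given top mass."""
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = np.exp(-0.5 * ((centers - mean) / sigma) ** 2)
    return norm * w / w.sum()

edges = np.linspace(40.0, 120.0, 41)
# Toy assumption: the distribution's peak shifts as 0.4 * m_top.
t_lo = gauss_hist(edges, 0.4 * 170.0)   # template at m_top = 170 GeV
t_hi = gauss_hist(edges, 0.4 * 175.0)   # template at m_top = 175 GeV

def morphed(m):
    """Linear (vertical) morphing between the two reference templates."""
    a = (m - 170.0) / 5.0
    return (1.0 - a) * t_lo + a * t_hi

def nll(data, mu):
    """Binned Poisson negative log-likelihood, constant terms dropped."""
    mu = np.clip(mu, 1e-9, None)
    return float(np.sum(mu - data * np.log(mu)))

data = gauss_hist(edges, 0.4 * 172.5)   # Asimov 'data' at m_top = 172.5
grid = np.linspace(170.0, 175.0, 501)
m_hat = grid[np.argmin([nll(data, morphed(m)) for m in grid])]
```

With Asimov data the fitted mass lands at the injected value by construction; in a real fit the statistical uncertainty would be read off the likelihood curvature.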
- [Open Access] Observation of the electroweak production of two same-sign W bosons in proton-proton collisions with the ATLAS detector (2020) Mwewa, Chilufya; Yacoob, Sahal; Hamilton, Andrew. The production of same-sign W boson pairs (W±W±) is an extremely rare process predicted within the Standard Model (SM). It results from electroweak-mediated processes such as Vector Boson Scattering (VBS), a process linked with electroweak symmetry breaking. An observation of the W±W± VBS process therefore not only confirms the predictions of the SM at such a small cross-section, but also provides an opportunity to test the electroweak sector and the Higgs mechanism. Evidence for this process was found, at significances of 4.5σ and 2.0σ by the ATLAS and CMS experiments respectively, during Run I of the Large Hadron Collider (LHC). Following this potential for discovery, the search was repeated in Run II of the LHC. The CMS experiment reported an observation of this process at a significance of 5.5σ in September 2017, while ATLAS reported it at a significance of 6.9σ in July 2018. In ATLAS, the search was conducted by looking for events with two same-sign leptons (e±e±, e±µ± and µ±µ±), large missing transverse momentum (E_T^miss) and two forward jets, using 36.1 fb⁻¹ of data collected at a proton-proton centre-of-mass energy of 13 TeV in 2015 and 2016. This thesis presents an overview of the observation of this rare process by the ATLAS experiment, focussing on the study of backgrounds resulting from photon conversions as well as studies for the enhancement of the VBS signal, which were the student's primary contributions to the full analysis. In addition, the student's contribution to the ATLAS upgrade project is also highlighted.
- [Open Access] Simulation of the ATLAS ITk strip endcap modules for testbeam reconstruction and analysis (2019) Atkin, Ryan Justin; Yacoob, Sahal; Peterson, Stephen W; Wraight, Kenneth G; Blue, Andrew. The Large Hadron Collider (LHC) is planned to be upgraded to the High Luminosity LHC (HL-LHC), increasing the rate of collisions and producing more particles passing through the detectors. This increased production rate will require upgrades to the detectors in order to cope with the large increase in data collection and radiation, as well as to improve the tracking and particle reconstruction in the higher occupancy environment. A major upgrade to ATLAS, one of the LHC detectors, will replace the current Inner Detector (ID) with a fully silicon semiconductor based Inner Tracker (ITk). The research and development phase of the ITk requires a simulation of the sensors for performance studies and for testing the sensors in testbeams. The ITk strip end-cap sensors will use radial geometries; however, the current testbeam telescope simulation software (AllPix) and reconstruction software (EUTelescope) are designed for Cartesian geometries. Presented is the work behind implementing a radial geometry for one of the ITk strip end-cap sensors, the R0 module, in the AllPix simulation software and the EUTelescope reconstruction software. Included in this work is the simulation of the propagation of the charge deposited in the sensor by the beam. The simulated data, as well as data from the EUDET testbeam telescope at DESY, Hamburg, are both reconstructed with the same reconstruction software and analysed using the same post-reconstruction software. A comparison of simulation to experiment is then performed, in particular to study the residuals, efficiency and charge sharing of the R0 module.
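The core geometric change for the end-cap modules is that strips run at constant azimuth across an annulus rather than at constant x; any Cartesian-geometry telescope code therefore assigns hits to the wrong strip. A sketch of the coordinate mapping such a simulation has to implement (the strip count, radii and angular coverage below are illustrative, not the real R0 parameters):

```python
import math

# Illustrative annular sensor: NOT the actual R0 module parameters.
R_MIN, R_MAX = 384.0, 489.0       # mm, inner and outer sensor radius
PHI_MIN, PHI_MAX = -0.1, 0.1      # rad, angular coverage of the module
N_STRIPS = 1026                   # strips spanning the phi range

def cartesian_to_strip(x_mm, y_mm):
    """Map a Cartesian hit position to (strip index, radius).
    In a radial geometry the strip index depends on phi, not on x."""
    r = math.hypot(x_mm, y_mm)
    phi = math.atan2(y_mm, x_mm)
    if not (R_MIN <= r <= R_MAX and PHI_MIN <= phi <= PHI_MAX):
        return None  # hit is outside the active sensor area
    pitch = (PHI_MAX - PHI_MIN) / N_STRIPS
    strip = int((phi - PHI_MIN) / pitch)
    return min(strip, N_STRIPS - 1), r

# Two hits at the same x but different y land on different strips,
# which is exactly what a Cartesian strip model would get wrong.
hit_a = cartesian_to_strip(400.0, 10.0)
hit_b = cartesian_to_strip(400.0, -10.0)
```

The residual and charge-sharing studies mentioned above depend on getting exactly this angular pitch right, since the physical strip pitch in millimetres grows with radius.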
- [Open Access] Studies of the W±W± scattering process in pp collisions at the once and future ATLAS detector (2017) Van Tonder, Raynette; Yacoob, Sahal; Hamilton, Andrew
- [Open Access] Suppression of the fake lepton background in same-sign W-boson scattering with the ATLAS experiment (2017) McConnell, Lucas Henry; Hamilton, Andrew; Yacoob, Sahal. Same-sign W-boson scattering is a rare Standard Model process that is useful for probing the nature of electroweak symmetry breaking and the Higgs mechanism. Analysis is currently underway to measure the cross-section to a significance of 5σ or higher using √s = 13 TeV data from the ATLAS detector's Run 2. The two scattered W-bosons decay leptonically, leaving a distinctive experimental signature of two same-sign leptons, two forward jets, and missing transverse energy carried away by two neutrinos. Non-prompt leptons are defined as leptons coming from the decay of hadrons. Such leptons, together with jets misreconstructed as leptons, contribute to the background processes in same-sign W-boson scattering, making up the so-called fake lepton background. In this thesis the fake lepton background is suppressed using two strategies: 1) implementing an optimised veto on events found to contain a b-jet; and 2) optimising the isolation requirements set on signal lepton candidates using the cumulative significance quantity. The approach using the cumulative significance is then extended to optimise additional analysis cuts on the dilepton invariant mass mₗₗ, the dijet invariant mass mⱼⱼ, and the jet rapidity separation Δyⱼⱼ.
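Cut optimisation with a cumulative significance can be sketched as: for each candidate threshold, count the signal and background events that survive and evaluate S/√(S+B), then keep the threshold that maximises it. A toy version under that assumption (the actual ATLAS isolation variables and significance definition are not reproduced here):

```python
import math

def best_cut(signal_vals, background_vals, thresholds):
    """Scan 'keep events with value > threshold' cuts and return the
    (threshold, significance) pair maximising S / sqrt(S + B)."""
    best = (None, 0.0)
    for t in thresholds:
        s = sum(1 for v in signal_vals if v > t)
        b = sum(1 for v in background_vals if v > t)
        if s + b == 0:
            continue
        z = s / math.sqrt(s + b)
        if z > best[1]:
            best = (t, z)
    return best

# Toy discriminant: signal peaks high, fake-lepton background peaks low.
sig = [0.9, 0.8, 0.85, 0.7, 0.95, 0.6]
bkg = [0.1, 0.2, 0.3, 0.75, 0.15, 0.05, 0.4, 0.25]
cut, z = best_cut(sig, bkg, [i / 10 for i in range(10)])
```

The same scan generalises directly to the mₗₗ, mⱼⱼ and Δyⱼⱼ cuts mentioned above: each variable gets its own threshold scan against the expected signal and background yields.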
- [Open Access] Traditional Image Processing and Modern Computer Vision Techniques for the Study of Two-Phase CO2 Flow (2022) Kadish, Shai; Son, Jarryd; Boje, Edward; Yacoob, Sahal; Schmid, David. The work presented here details the development of software-based tools for the extraction of physical parameters which describe two-phase (gas-liquid) upwardly flowing CO2, for the purpose of using these parameters as sensor data for a control feedback loop, and for the automatic detection of flow regime transitions, which is useful for the development of flow regime maps. The core focus of this thesis is the development of these tools in such a way that their primary input is an image or set of images. To achieve this, two schools of thought are explored. First, traditional image processing techniques are employed to study the flow. These techniques require manual image feature selection, and they make use of purpose-built algorithms to extract the desired parameters from an input image using these features. The second approach makes use of modern computer vision techniques, where the image features are automatically learnt through machine learning, and an end-to-end network design makes use of these features to extract the desired output without manual tuning. Traditional image processing is used to develop an algorithm which extracts the void fraction value from an image of bubbly flow. This algorithm works by detecting individual bubbles within the input image, and then estimating the volume of each bubble (with uncertainty) in order to calculate the final void fraction. The outputs of this algorithm correlated well with those produced by established models for calculating void fraction, but the problem with this algorithm is its limited scope of use: it is only applicable to images of bubbly flow, a flow regime which exists for only a small portion of the total possible vapour quality range under steady state conditions.
Two different tools, which share a similar architecture, and which classify flow regime and vapour quality respectively, were successfully developed using modern computer vision techniques. These models both take in video clips as their inputs. The approach makes use of deep learning to train a convolutional neural network (CNN), which is used for individual frame classification and image feature extraction, and a deep long short-term memory (LSTM) network, used to capture temporal features present in a sequence of image feature sets, and to make a final vapour quality or flow regime classification. The proposed architecture achieves accurate flow regime and vapour quality classifications in practical application to two-phase CO2 flow in vertical tubes based on off-line data and an on-line prototype implementation, developed as a proof of concept for the use of these models within a feedback control loop. The successful application of the LSTM network reveals the significance of temporal information for image based studies of multi-phase flow. When comparing these parallel developments, the advantages and disadvantages of the two approaches can be clearly seen. Traditional image processing requires far more extensive domain specific knowledge and manual fine tuning, but this approach allows for a user to clearly understand the outputs of the algorithm, whether they are correct or incorrect, as the internal mechanisms of the algorithm are all purpose built. This is not the case for deep learning based modern computer vision methodologies, which are more of a “black box”. These methods require a large amount of training data, but less domain specific knowledge, as the important features from the input data do not need to be manually selected and processed by the user. This leads to high performing systems which are difficult to understand and debug. 
There are different cases in which each of these methods would be preferable, but with the rapid evolution of deep learning and computer vision over the last few years, deep learning based computer vision appears to be replacing the traditional approach in many cases.
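The CNN-plus-LSTM pipeline described above can be sketched in miniature: a stand-in frame encoder turns each frame into a feature vector, an LSTM cell folds the feature sequence into a final hidden state, and a linear layer maps that state to flow-regime class scores. The weights here are random, so the classifications are meaningless; the sketch only shows the data flow, and none of the thesis models' architectures, dimensions or weights are used:

```python
import numpy as np

rng = np.random.default_rng(0)
FEAT, HID, CLASSES = 8, 16, 3        # illustrative sizes

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Stand-in for the CNN: any fixed map from a frame to a feature vector.
W_enc = rng.normal(size=(FEAT, 32))
def encode_frame(frame):             # frame: flat pixel vector, length 32
    return np.tanh(W_enc @ frame)

# Single LSTM cell: input, forget, output gates plus candidate state.
W = rng.normal(scale=0.1, size=(4 * HID, FEAT + HID))
b = np.zeros(4 * HID)

def lstm_step(x, h, c):
    z = W @ np.concatenate([x, h]) + b
    i, f, o = (sigmoid(z[k * HID:(k + 1) * HID]) for k in range(3))
    g = np.tanh(z[3 * HID:])
    c = f * c + i * g                # update the cell memory
    h = o * np.tanh(c)               # expose the gated hidden state
    return h, c

W_out = rng.normal(size=(CLASSES, HID))
def classify_clip(frames):
    h, c = np.zeros(HID), np.zeros(HID)
    for frame in frames:             # temporal aggregation over the clip
        h, c = lstm_step(encode_frame(frame), h, c)
    return W_out @ h                 # class scores for the whole clip

clip = rng.normal(size=(10, 32))     # a clip of 10 frames of 32 'pixels'
scores = classify_clip(clip)
```

The cell state `c` is what carries temporal information between frames, which is the property the thesis identifies as significant for multi-phase flow studies.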