Browsing by Subject "personnel selection"
- Item (Open Access): Designing Semi-Automated Video Interviews (SAVI): Does Stimulus Format (Video vs. Text) of Instructions and Interview Questions Affect Applicant Perceptions of Social Presence? (2022). Ebrahim, Farheen; de Kock, Francois.
  A recent development in interview technology is the asynchronous video interview (AVI). Although AVIs differ in key design aspects, the effect of AVI design characteristics on applicant reactions is not well understood. The primary purpose of the present study was to determine how differences in AVI stimulus format, i.e. using video versus textual stimuli in instructions and interview questions, may influence applicant perceptions of social presence in interviews. Drawing on social presence theory, it was hypothesised that participants who experienced a video-stimulus AVI would perceive higher levels of social presence than those who experienced a text-stimulus AVI. Furthermore, given the dearth of previous research on the role of individual differences in AVIs, a secondary purpose of the research was to test the potential moderating role of applicants' social presence preferences and their affinity for technology. To these ends, a pre-registered experiment was conducted in which participants were randomly assigned to an AVI with either video- or text-based instructions and interview questions. Participants in both groups completed a mock digital interview, rated their perceived social presence, and completed the measures of individual preferences. The experiment was repeated in two independent national samples: a South African sample (N = 58) and an American sample (N = 162). The findings were mixed across the two samples. Participants in the South African sample who completed a video-based AVI perceived higher levels of social presence than those who completed a text-based AVI, suggesting that AVI stimulus format enhanced applicants' perceptions of social presence.
  However, these findings did not generalise to the American sample, where video stimuli did not increase respondents' social presence perceptions. Further analyses showed that the study effects did not depend on applicants' preferences for social presence or their affinity for technology. The study contributes to the literature on automated video interview design by offering novel insights into the effects of key design features of digital interviews on applicant reactions. Implications for theory are discussed and recommendations for practice and research are made.
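The analysis described above, i.e. comparing perceived social presence across randomly assigned stimulus-format conditions and testing a moderator, can be sketched as an ordinary-least-squares model with an interaction term. This is a minimal illustration with simulated data; the variable names, effect sizes, and model specification are assumptions for demonstration, not the study's actual analysis.

```python
import numpy as np

# Hedged sketch: does stimulus format (video vs. text) affect perceived
# social presence, and does a preference measure moderate that effect?
# All data below are simulated purely for illustration.
rng = np.random.default_rng(0)
n = 100
video = rng.integers(0, 2, n).astype(float)  # 1 = video-stimulus AVI, 0 = text
preference = rng.normal(0.0, 1.0, n)         # centred social-presence preference

# Simulate a format main effect (+0.5) and no true moderation.
social_presence = 3.0 + 0.5 * video + 0.2 * preference + rng.normal(0.0, 0.5, n)

# Design matrix: intercept, format, preference, format x preference interaction.
X = np.column_stack([np.ones(n), video, preference, video * preference])
beta, *_ = np.linalg.lstsq(X, social_presence, rcond=None)

# beta[1] estimates the stimulus-format effect on social presence;
# beta[3] estimates the moderation (interaction) effect.
print(f"format effect: {beta[1]:.2f}, interaction: {beta[3]:.2f}")
```

In this framing, a moderation hypothesis like the one tested in the study corresponds to the interaction coefficient; a near-zero interaction estimate mirrors the reported finding that the effects did not depend on individual preferences.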
- Item (Open Access): Examining Personality Assessment in Asynchronous Video Interviews (AVI): Convergence Between Human Personality Judgements and AI/ML Scoring (2025). Cronje, Jacobus Fouche; de Kock, Francois.
  The assessment of personality is an essential component of personnel selection due to its validity in predicting job performance. To assess personality, asynchronous video interviews (AVIs) scored by artificial intelligence (AI) algorithms are increasingly used, allowing candidates to record responses to interview prompts that are subsequently evaluated by AI algorithms and/or human raters. As questions remain about the validity of AI-based AVI scoring approaches, this study examines the convergence between human- and AI-scored personality assessments. The study focuses on the HEXACO model of personality, which comprises Honesty-Humility, Emotionality, Extraversion, Agreeableness, Conscientiousness, and Openness to Experience. Verbal responses were transcribed from videotaped AVIs of 161 mock interview candidates who each answered five AVI questions. Responses were scored both by 15 trained human raters and by a closed-dictionary, keyword-counting text-analysis AI algorithm developed for this study. The correlation between trait-level scores produced by human judges and by AI scoring was tested both across traits and within traits to assess scoring convergence. In addition to comparing the score levels produced by the two scoring methods (AI vs. human raters), score spread (i.e., variability), rank-order stability, and rating reliability were evaluated. The findings revealed a moderate, significant overall convergence (r = .29, p < .001) across traits between human and AI evaluations, suggesting that AI scoring may be useful as a replacement for human evaluations when general screening is desired.
  Trait-level convergence varied between scoring methods: the consensus between human raters and AI was higher for some traits than for others, suggesting that the two methods rely on different information and/or interpret interview responses differently. The research highlights the potential of AI to complement human-based scoring of AVIs used in recruitment, selection, and assessment, while also identifying the limitations of algorithm-based scoring in capturing complex human behaviour in interviews. The findings further contribute to understanding the role of AI in personality assessment and its implications for organisational practice.
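The scoring approach described above, a closed-dictionary keyword count over interview transcripts followed by a convergence correlation against human ratings, can be sketched as follows. The trait dictionaries, scoring rule, and data here are illustrative assumptions; the study's actual dictionaries and pipeline are not reproduced.

```python
from math import sqrt

# Hypothetical trait dictionaries (the study's closed dictionaries are not public).
TRAIT_DICTIONARIES = {
    "Honesty-Humility": {"honest", "fair", "modest", "sincere"},
    "Extraversion": {"outgoing", "energetic", "talkative", "sociable"},
}

def keyword_count_score(transcript: str, keywords: set[str]) -> float:
    """Score a transcript as the fraction of tokens matching a trait dictionary."""
    tokens = transcript.lower().split()
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t.strip(".,!?") in keywords)
    return hits / len(tokens)

def pearson_r(x: list[float], y: list[float]) -> float:
    """Plain Pearson correlation between two equal-length score lists,
    e.g. AI trait scores vs. mean human-rater trait scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy usage: score one response and correlate two small score vectors.
score = keyword_count_score("I am honest and fair", TRAIT_DICTIONARIES["Honesty-Humility"])
r = pearson_r([0.1, 0.3, 0.2, 0.5], [0.2, 0.4, 0.1, 0.6])
```

In a study like this one, `pearson_r` would be applied across candidate-by-trait score pairs (AI vs. human) to obtain overall and trait-level convergence coefficients comparable to the reported r = .29.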