Artificial Neural Networks in Stock Return Prediction: Testing Model Specification in a Global Context

dc.contributor.advisor: van Rensburg, Paul
dc.contributor.author: Buxton-Tetteh, Naa Ayorkor
dc.date.accessioned: 2021-01-19T12:52:25Z
dc.date.available: 2021-01-19T12:52:25Z
dc.date.issued: 2020
dc.date.updated: 2021-01-04T12:34:54Z
dc.description.abstract: This research investigates whether artificial neural networks that use firm-specific fundamental and technical factors can accurately predict the returns of a sample of large-cap stocks from markets across the globe. The study also explores which hidden-layer configuration yields the best network predictive performance, and identifies which firm-specific factors predominantly influence the networks' predictions. Five artificial neural networks are designed, trained and tested on a sample of 161 stocks from the Russell 1000 and S&P International 700 stock indices. The investigation period spans 166 months, from January 2001 to October 2014, with a 70:30 split between training and testing subsamples. Eighteen firm-specific factors, drawn from prior research on style effects and anomalies in the cross-section of global equity returns, serve as the networks' input variables for forecasting one-month forward returns of all stocks in the sample. The five networks differ only in hidden-layer size: the numbers of hidden neurons examined were three, nine, 13, 18 and 30. All five networks train well, with each network's training error indicating a good model fit. Each network also achieves the desirable information coefficient of 0.1 between its predicted returns and the actual returns in the training sample. Interestingly, network performance generally improves as the number of hidden neurons increases up to a point, beyond which it weakens. With overfitting in mind, the best-trained network in this research is the one with 13 neurons in its hidden layer; this is the primary network used for the out-of-sample testing analysis.
This network achieves an average prediction error magnitude of approximately 7% and an information coefficient of 0.05 during out-of-sample testing, moderately underperforming the respective benchmarks. Further analysis of the network's performance, however, suggests an overall poor out-of-sample predictive ability, evidenced by a significant bias and a considerably weak relationship between the network's predicted returns and the actual returns in the testing sample. Global sensitivity analysis reveals that growth style effects, particularly the capital expenditure ratio, return on equity, sales growth, 12-month percentage change in non-current assets and six-month percentage change in asset turnover, were the most persistent factors across all the ANN models. Other significant factors include the 12-month percentage change in monthly volume traded, the three-month cumulative prior return and the one-month prior return. An unconventional result of this analysis is the relative insignificance of the size and value style effects.
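The setup described in the abstract can be illustrated with a minimal sketch: a single-hidden-layer network mapping 18 firm-specific factors to a one-month forward return forecast, evaluated by the information coefficient (the correlation between predicted and realised returns) and the average prediction error magnitude. The tanh activation, random weights, and synthetic data below are illustrative assumptions only; the thesis does not publish its exact activations, trained weights, or data.

```python
import numpy as np

rng = np.random.default_rng(0)

N_INPUTS, N_HIDDEN = 18, 13  # 18 factors; 13 hidden neurons (best-trained size)

# Randomly initialised weights stand in for a trained network (assumption).
W1 = rng.normal(0.0, 0.1, (N_INPUTS, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0.0, 0.1, (N_HIDDEN, 1))
b2 = np.zeros(1)

def predict(X):
    """Forward pass: factor exposures -> one-month forward return forecast."""
    h = np.tanh(X @ W1 + b1)          # hidden layer (tanh assumed)
    return (h @ W2 + b2).ravel()      # linear output neuron

def information_coefficient(pred, actual):
    """Correlation between predicted and realised returns (the IC)."""
    return float(np.corrcoef(pred, actual)[0, 1])

def mean_abs_error(pred, actual):
    """Average magnitude of the prediction error."""
    return float(np.mean(np.abs(pred - actual)))

# Hypothetical cross-section of 161 stocks, mirroring the sample size.
X = rng.normal(size=(161, N_INPUTS))
actual = rng.normal(0.01, 0.07, size=161)
pred = predict(X)

ic = information_coefficient(pred, actual)
mae = mean_abs_error(pred, actual)
```

A value of `ic` near 0.1 would match the training-sample benchmark the abstract cites, while `mae` corresponds to the "average prediction error magnitude" reported at roughly 7% out of sample.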
dc.identifier.citation: Buxton-Tetteh, N.A. 2020. Artificial Neural Networks in Stock Return Prediction: Testing Model Specification in a Global Context. Faculty of Commerce, Department of Finance and Tax. http://hdl.handle.net/11427/32567
dc.identifier.uri: http://hdl.handle.net/11427/32567
dc.language.rfc3066: eng
dc.publisher.department: Department of Finance and Tax
dc.publisher.faculty: Faculty of Commerce
dc.subject: Investment Management
dc.title: Artificial Neural Networks in Stock Return Prediction: Testing Model Specification in a Global Context
dc.type: Master Thesis
dc.type.qualificationlevel: Masters
dc.type.qualificationname: MCom
Files

Original bundle
Name: thesis_com_2020_buxton tetteh naa ayorkor.pdf
Size: 2.79 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 0 B
Format: Item-specific license agreed upon to submission

Collections