In preparation for photometric classification of transients from the Legacy Survey of Space and Time (LSST), we run tests with different training data sets. Using estimates of the depth to which the 4-metre Multi-Object Spectroscopic Telescope (4MOST) Time Domain Extragalactic Survey (TiDES) can classify transients, we simulate a magnitude-limited sample reaching $r_{\textrm{AB}} \approx 22.5$ mag. We run our simulations with the software snmachine, a photometric classification pipeline using machine learning. In contrast to their performance on representative training samples, the machine-learning algorithms struggle to classify supernovae when the training sample is magnitude-limited. Classification performance improves markedly when we combine the magnitude-limited training sample with a simulated realistic sample of faint, high-redshift supernovae observed with larger spectroscopic facilities: the range of the algorithms' average area under the ROC curve (AUC) scores over 10 runs increases from 0.547--0.628 to 0.946--0.969, and the purity of the classified sample reaches 95 per cent in all runs for 2 of the 4 algorithms. By creating new, artificial light curves with the augmentation software avocado, we achieve a purity of 95 per cent in all 10 runs for every machine-learning algorithm considered. We also reach our highest average AUC score, 0.986, with the artificial neural network algorithm. Having `true' faint supernovae to complement our magnitude-limited sample is a crucial requirement for optimising a 4MOST spectroscopic sample. However, our results are a proof of concept that augmentation is also necessary to achieve the best classification results.
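For context, and assuming the standard (unweighted) definitions common in photometric-classification work, since the abstract itself does not define them: purity is the fraction of true positives among all events classified as positive, and the AUC is the area under the receiver operating characteristic curve, which traces the true-positive rate (TPR) against the false-positive rate (FPR) as the classification threshold varies,
\[
  \mathrm{purity} = \frac{N_{\mathrm{TP}}}{N_{\mathrm{TP}} + N_{\mathrm{FP}}}, \qquad
  \mathrm{AUC} = \int_{0}^{1} \mathrm{TPR}(\mathrm{FPR})\,\mathrm{d}\,\mathrm{FPR},
\]
so that an AUC of 0.5 corresponds to random guessing and an AUC of 1 to a perfect classifier. The paper may use a weighted variant of purity; this sketch only fixes notation for the scores quoted above.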