Manually curating video data to track the movement of an object is an arduous and time-consuming task. It is particularly challenging when tracking animals, whose behaviour is sporadic and unpredictable, yet it can provide pertinent information for monitoring health, conditions and treatments. Here, we evaluate the use of machine learning, applied retroactively, to automatically track specific steers across hours of video with minimal manual interaction. This is approached using the Faster R-CNN object detection algorithm with VGG-16 acting as the feature extractor. Performance on a number of video segments is presented and discussed, and the issues encountered are outlined. This highlights several guidelines that should be taken into consideration when generating video data to improve object detection performance, and helps define the applicability of the approach to pre-existing data.