By Jakim Berndsen, Data Scientist and Derek McHugh, Head of Data Science
At our recent 2022 Performance Summits in London and Madrid, attendees had several lively discussions in response to a research paper that condemned the use of black box models in predictive sports analytics and cautioned against increasingly popular Machine Learning (ML) approaches.
At Kitman Labs, we agree with the authors that transparency in sports analytics is essential, and we share many of the same concerns about the applicability of ML approaches in sports medicine and the use of black box models. However, we feel the authors overlooked several areas of machine learning research, an omission that weakens some of their conclusions.
We only touch on the key points made by Bullock et al. in this blog, but this is such an important issue for our industry that we have submitted a formal response to the Journal for Sports Medicine, which is currently under review for publication. In early April, we published a pre-print version of our commentary exploring the claims made by the authors.
What is a Black Box Model?
A black box model is any model whose internal workings are either unknown or too complex to understand. When a black box model returns an output, we are left with no idea how or why the model made its decision.
The authors of the paper argue that black box models are dangerous in a sports medical setting. They state that without transparency, models cannot be evaluated or interpreted, and are typically not useful to practitioners.
“The lack of transparency and unavailability of algorithms to allow implementation by others of ‘black box’ approaches is concerning as it prevents independent evaluation of model performance, interpretability, utility, and generalisability prior to implementation within a sports medicine and performance environment.”
At Kitman Labs, we wholeheartedly agree with this conclusion. If we do not know the internal workings of a model, how can we trust its outputs and confidently use it to facilitate decision making? While knowing the injury risk of an individual athlete may be useful, how can we reduce this risk if we do not know where it has come from? One at-risk player might need a break after a hard series of games, while another may actually require extra load to be ready for the next game. Moreover, how can we put our professional integrity, a player's career, or our team's season on the line with a decision we cannot stand behind? If we do not open the black box, distinguishing between these situations will remain guesswork.
Should Machine Learning be Avoided in Injury Risk Analytics?
In essence, we agree with the authors that without sufficient interpretability, machine learning models have limited applicability in the sporting domain and that without robust validation, machine learning models should not be used or trusted.
However, there is an immense amount of work being done within the ML (and sport science) community and we feel there are areas of research that were missed.
Below, we highlight some of the excellent work being done today and show how these advancements can make advanced machine learning algorithms a viable tool to help reduce injury risk.
Opening up the black box: Historically there has been a tradeoff between accuracy and interpretability in ML models. The most accurate models were black boxes, while models that could easily be understood were too simple to capture the complex relationships between athlete behavior and injury.
Recent ML developments have shattered this perception. Explainable Boosting Machines are glass box models with interpretability as a cornerstone of their design: they are nearly as accurate as the most advanced ML algorithms while maintaining the understandability of simple models. Explainable Artificial Intelligence (xAI) is also a huge research field, and methods such as LIME and SHAP allow us to open the black box and understand the inner workings of the complex models needed to perform injury risk analytics.
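To make the idea concrete, here is a minimal sketch of what a glass box additive model looks like in pure Python. The feature names, coefficients, and thresholds below are invented for illustration; a real Explainable Boosting Machine learns its per-feature shape functions from data rather than having them hand-written.

```python
import math

# Hypothetical per-feature risk curves (shape functions). In a real
# Explainable Boosting Machine these are fitted from data; here they
# are hand-written stand-ins for illustration only.
def workload_spike_term(acwr):        # acute:chronic workload ratio
    return 0.8 * max(0.0, acwr - 1.3)

def sleep_deficit_term(hours):
    return 0.5 * max(0.0, 7.0 - hours)

def prior_injury_term(had_prior):
    return 0.6 if had_prior else 0.0

def predict_risk(acwr, sleep_hours, had_prior):
    """Additive glass box model: the score is simply the sum of
    per-feature terms, so every prediction decomposes into readable
    contributions."""
    contributions = {
        "workload_spike": workload_spike_term(acwr),
        "sleep_deficit": sleep_deficit_term(sleep_hours),
        "prior_injury": prior_injury_term(had_prior),
    }
    logit = -2.0 + sum(contributions.values())  # -2.0 is a baseline intercept
    risk = 1.0 / (1.0 + math.exp(-logit))       # squash score to a probability
    return risk, contributions

risk, why = predict_risk(acwr=1.6, sleep_hours=5.5, had_prior=True)
```

Because the score is a plain sum of per-feature terms, the same numbers that produce the prediction double as the explanation. This is the property that LIME and SHAP approximate after the fact for models that are not additive by construction.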
Guiding Interventions: A further issue with machine learning models is that they often cannot be used to guide interventions. Machine learning models capture relationships between variables and injury, but do not examine whether that variable actually causes injury. If there is no causality—which is often the case due to the huge number of features in injury datasets—then making interventions based on findings of these models can be ineffective or even dangerous.
However, exactly this kind of intervention has been the focus of recommender systems research. Traditional recommender systems have helped users find interesting items from a large selection – think finding films on Netflix, products on Amazon, or new songs on Spotify. Researchers are now leveraging these techniques to guide both medical and sporting interventions. Recommender systems have been used to help people with chronic diseases, to improve sleep quality, and to help recreational athletes complete a marathon. Early work has even looked directly at the sports injury domain, using recommender systems to adapt running training plans to mitigate injury risk. This work is still in its infancy, but it demonstrates an active research area with huge potential to disrupt the sports industry.
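As a toy illustration of the neighborhood-based idea behind many recommender systems, the sketch below suggests the intervention that preceded injury-free spells in the most similar past cases. The athlete profiles, feature ordering, and intervention names are all invented; production systems use far richer models and careful causal evaluation.

```python
# Toy user-based recommender: find the most similar past cases and
# vote on the intervention that worked for them. All data is invented.
def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return num / den if den else 0.0

# (athlete profile: acwr, sleep hours, prior injury) -> intervention
# that preceded an injury-free block
history = [
    ((1.6, 5.0, 1.0), "reduce_volume"),
    ((0.9, 8.0, 0.0), "maintain_plan"),
    ((1.5, 6.0, 1.0), "reduce_volume"),
    ((1.0, 7.5, 0.0), "add_speed_work"),
]

def recommend(profile, k=2):
    """Rank past cases by similarity to this athlete, then vote
    among the top-k neighbors."""
    ranked = sorted(history, key=lambda rec: cosine(profile, rec[0]),
                    reverse=True)
    votes = {}
    for _, action in ranked[:k]:
        votes[action] = votes.get(action, 0) + 1
    return max(votes, key=votes.get)

suggestion = recommend((1.55, 5.5, 1.0))
```

For a high-workload, sleep-deprived profile, the nearest neighbors are the two similar athletes who responded well to reduced volume, so that intervention wins the vote.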
Incorporating Domain Knowledge: Using raw features—e.g., sound waves for speech recognition, individual pixels of an image to identify its contents—is a viable approach for many machine learning problems. In this approach, the model has no context about the domain, and instead learns relationships within the data without any outside bias. However, there is debate across many fields on whether this is the correct approach or not. Instead of using raw features, we can alternatively take a domain-informed approach, where we try to capture pre-existing knowledge about injury risk and feed it into our model. The most suitable approach in any situation will depend on the objectives for a model. When desired, domain knowledge can be incorporated into machine learning models in a multitude of ways:
- Robust data exploration: benchmarking injuries and running simple descriptive analyses (e.g., when are injuries happening, to whom, and how often) leads to better modeling decisions
- Feature engineering: new features can be created using results from the latest research into the drivers of sports injuries, along with the many years of experience, expertise, and knowledge of sports practitioners
- Feature selection: filtering features so that only those known to have an epidemiological relationship with injury are seen by the machine learning model
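The feature engineering and feature selection steps above can be sketched in a few lines of Python. The engineered features (an acute:chronic workload ratio and average sleep) and the whitelist of known injury drivers are illustrative stand-ins, not an actual feature set.

```python
# Illustrative sketch: raw logs -> domain-informed features -> filtered
# model input. Feature names and the whitelist are invented examples.

def engineer_features(daily_loads, sleep_hours):
    """Feature engineering: turn raw training logs into features
    informed by domain knowledge."""
    acute = sum(daily_loads[-7:]) / 7       # last week's mean load
    chronic = sum(daily_loads[-28:]) / 28   # last four weeks' mean load
    return {
        "acwr": acute / chronic if chronic else 0.0,  # acute:chronic ratio
        "mean_sleep": sum(sleep_hours) / len(sleep_hours),
        "shoe_color_code": 3,               # a deliberately spurious feature
    }

# Feature selection: only features with a known epidemiological
# relationship with injury are allowed through to the model.
KNOWN_DRIVERS = {"acwr", "mean_sleep"}

def select_features(features):
    return {k: v for k, v in features.items() if k in KNOWN_DRIVERS}

feats = engineer_features(daily_loads=[400] * 21 + [600] * 7,
                          sleep_hours=[7, 6.5, 8, 7.5])
model_input = select_features(feats)
```

The spurious feature is dropped before the model ever sees it, so any relationship the model learns is constrained to variables practitioners already recognize and can act on.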
Challenges remain when it comes to applying each of these to the sports injury domain. For example, there is often a tradeoff between engineering the most informative features to boost model performance and having features that practitioners can understand and act upon. However, strides are being made, and we believe it's important to recognize that this field is rapidly evolving and rising to the challenges that black box models can present.
While we mention only a sample of the work that has been published, machine learning is a rapidly moving field with many researchers aiming to address precisely the concerns facing injury risk analytics.
No Black Boxes at Kitman Labs
At Kitman Labs, transparency is and has always been a cornerstone of our analytics process. Analytics needs to be actionable and bring about real insight or else it holds no value. While we perform robust model validation to build trust, we have also put much thought into our approach and methodology to ensure the greatest utility to practitioners.
To ensure full transparency, we use the most advanced xAI tools to explain our models' decision making process. We use these to surface what features the model uses to determine injury risk so that practitioners can confirm the model is making decisions corresponding to prior knowledge. We also surface the contributing factors for individual injury risk estimates, which show exactly how and why the model has made a decision for each of your athletes, informing how to take immediate action.
Our approach to injury analytics is not an injury predictor nor is it a risk remover. With Risk Advisor, we have designed a platform to surface contributing factors of injury risk and give advice that complements—not replaces—a practitioner’s expertise to make informed and objective decisions. As such, we do not directly prescribe interventions, but rather surface the contributing factors to an injury risk to support practitioners. For now, recommender systems techniques are in the early stages and are currently too difficult to evaluate to be useful in a domain as sensitive as sports injury.
We have built a strong team of sports practitioners at Kitman Labs. From award-winning academic researchers to World Cup champions, our Performance Strategy team has decades of real-world experience in managing injury risk. And through our Performance Intelligence Research Initiative (PIRI), we continue to work with the community to produce new research and evidence-based insights that will advance the field of injury risk analytics. We believe the true power of our approach comes down to our ability to harness this knowledge, and use it to both inform and interpret our injury risk solutions. The performance strategy team plays a central role in defining and guiding our modeling process, so you can be certain that domain knowledge is central to our approach.
Kitman Labs has acquired extensive experience and knowledge from working with global sporting organizations on injury risk solutions, and our commitment to ongoing research and market education in this area gives us a unique perspective.
If you’re interested in learning more about what is driving risk levels and how to identify when your players are at a higher risk of injury, we invite you to reach out to one of our performance experts at firstname.lastname@example.org