The app, called Social Protect, uses AI to try to ensure athletes see as few as possible of the abusive messages sent their way.
It automatically scans incoming social media posts on platforms including Instagram, Facebook, TikTok and YouTube in real time, checking them against a database of more than two million abusive words and phrases.
Any message containing one of those terms is automatically hidden from the athlete's comment sections or replies, and athletes can also add words or phrases of their own that they find upsetting.
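Social Protect has not published its matching logic, but the behaviour described above is, at its core, a blocklist filter. The snippet below is a minimal, hypothetical sketch in Python: the names BLOCKED_TERMS, user_terms and should_hide are invented for illustration, and the real system presumably combines a far larger term database with AI-based classification rather than simple substring matching.

```python
# Illustrative sketch only; not Social Protect's actual implementation.
# BLOCKED_TERMS, user_terms and should_hide are hypothetical names.

BLOCKED_TERMS = {"example-slur", "example-scam-phrase"}  # stand-ins for the real database


def should_hide(comment: str, user_terms: set[str] = frozenset()) -> bool:
    """Return True if the comment contains any blocked or user-added term."""
    text = comment.lower()
    return any(term in text for term in BLOCKED_TERMS | set(user_terms))


# A platform integration would hide, rather than delete, any flagged comment:
incoming = ["Great goal last night!", "You are an example-slur"]
visible = [c for c in incoming if not should_hide(c, user_terms={"rubbish keeper"})]
print(visible)  # only the clean comment remains visible to the athlete
```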
The company’s founder Shane Britten compares the app to anti-virus software that operates unnoticed in the background.
“The aim is to keep the comment section clean of racism, hatred, scams – of all the horrible things that can exist on social media,” he said.
But the software is not flawless. The contract UK Sport has paid for does not cover the social platform X, formerly known as Twitter, which a BBC Sport investigation found to be the source of 82% of the abuse sent to football managers and players.
The terms of the deal also mean the system can only scan public posts – abusive direct messages sent to athletes will still be visible.
Some services can block offensive direct messages, but they require users to submit their private log-in details to external companies, and are typically more expensive.