At a time when AI and machine learning are surrounded by hype, it’s refreshing to see large companies taking the technology seriously and using it to improve their processes, rather than retreating behind the ‘platform not publisher’ cloak of ‘responsibility’.
Most recently, Pinterest has recognised that some content on its platform is linked to self-harm, and is using technology to tackle the problem directly. The results look impressive, and I welcome this use of ‘AI for good’.
Tone analysers such as the one Pinterest used have also been rolled out by Facebook, which can monitor content around suicidal ideation and planning, and pass alerts directly to local law enforcement in cases of crisis. Facebook has also developed its image recognition software to identify visual representations of self-harm and offer support to the user.
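At its simplest, this kind of content flagging starts with pattern matching before graduating to trained classifiers. Here’s a minimal sketch in Python; the trigger phrases are entirely hypothetical, and the platforms’ real systems are far more sophisticated than a keyword list:

```python
import re

# Hypothetical trigger patterns, for illustration only; real systems
# use trained classifiers over much richer signals than keywords.
TRIGGER_PATTERNS = [
    re.compile(r"\bself[- ]harm\b", re.IGNORECASE),
    re.compile(r"\bhurt(ing)? myself\b", re.IGNORECASE),
]

def flag_post(text: str) -> bool:
    """Return True if the post matches any known risk pattern."""
    return any(p.search(text) for p in TRIGGER_PATTERNS)

if flag_post("thinking about hurting myself"):
    print("Route this post to the safety review queue")
```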
However, as always with AI, any innovation needs ongoing monitoring and validation. By its very nature, the underlying logic of an AI or ML system keeps learning and refining its models in the background, so social media platforms must keep checking their results to ensure the program remains effective.
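In practice, that kind of ongoing validation can be as simple as scoring a regular human-labelled sample of the model’s decisions and raising an alert when quality drifts. A rough sketch, where the sample data and the 0.8 threshold are assumptions for illustration:

```python
# Score a weekly human-labelled sample of flagged posts and alert
# when precision drops below an agreed floor.

def precision(sample: list[tuple[bool, bool]]) -> float:
    """sample: (model_flagged, human_confirmed) pairs."""
    confirmed = [human for model, human in sample if model]
    return sum(confirmed) / len(confirmed) if confirmed else 1.0

weekly_sample = [(True, True), (True, True), (True, False), (False, False)]
if precision(weekly_sample) < 0.8:  # assumed acceptance floor
    print("Precision has drifted; schedule a retraining review")
```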
Further development could be layered in through a chatbot, for example. The chatbot could be trained to spot trigger words, phrases and images linked to self-harm, initiate a conversation with the user and, after a few steps, connect them directly to a helpline for immediate support.
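A hedged sketch of what that escalation logic might look like (the trigger list, wording and helpline number below are placeholders rather than a clinical design):

```python
HELPLINE = "116 123"  # placeholder, e.g. Samaritans in the UK
TRIGGERS = ("self-harm", "hurt myself")  # placeholder phrases

def contains_trigger(message: str) -> bool:
    return any(t in message.lower() for t in TRIGGERS)

def respond(message: str, turn: int) -> str:
    """Escalate gradually: check in first, then hand over to a helpline."""
    if not contains_trigger(message):
        return "I'm here if you want to talk."
    if turn == 0:
        return "That sounds really hard. Would you like to tell me more?"
    return f"You don't have to face this alone. You can call {HELPLINE} now."
```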
Mental health chatbots aren’t anything new, but they still need work, and they demand a certain level of effort from the user, who has to download a specific app. True integration with the platforms themselves would make these tools more accessible and help drive universal uptake.
AI for good is not limited to mental health
There are some great examples across a wide variety of applications, ranging from social justice to improving farming practices with drones and visual recognition.
We at Edit have also had some success in this area, developing a prototype visual recognition mapping system. It came about when we recognised how much a manual process could be improved by applying some sophisticated AI. As part of an IBM challenge, we trained a set of custom models to identify types of dwelling in remote, unmapped locations in Tanzania. The output can then assist human mappers as part of the humanitarian aid effort in the region.
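Under the hood, a prototype like this typically relies on transfer learning: take a network pretrained on everyday photos and retrain only its final layer on a small set of labelled aerial tiles. Here’s a rough sketch of that pattern in PyTorch; the backbone and class names are illustrative, not the exact setup we used in the challenge:

```python
import torch
from torch import nn
from torchvision import models

NUM_CLASSES = 3  # illustrative: thatched roof / metal roof / no dwelling

# Start from a backbone pretrained on ImageNet and freeze its features.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head so it predicts our dwelling classes;
# only this layer is trained on the labelled aerial tiles.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
```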
But we’re always asking: what’s the next application? For us, it was applying visual recognition to identify cancerous cells in dogs, usually a painstaking manual process for a lab technician peering through a microscope. After that… who knows?
As always with AI, it’s important to keep the ethics of the activity in mind. It’s vital to be open and upfront about how AI is being used, giving the end user full clarity about how they are being communicated with, especially when it comes to mental health. Each person’s mental health journey is different, and giving people nuanced options will be the key to a personal and supportive experience.