The Edit Blog

AI has a WEIRD problem

ARTICLE BY Simon Peel
    READ TIME: 3 mins
    21st August 2019

    In the world of AI, everything is changing fast, and bias is becoming a serious issue. How do we make sure we understand what an AI is doing, and where bias may be introduced? As AI becomes a more commonplace part of society, there is a real concern that, however good the algorithms and data, the AI can exhibit or develop inbuilt bias.

    WEIRD, borrowed from the world of psychology, stands for White/Western, Educated, Industrialised, Rich and Democratic, and describes the people who have been driving the development of AI.

    To that, I would also add male.

    Only 12% of researchers involved in AI are female, and this has an impact on the results AI generates. UNESCO has highlighted that AI voice assistants responding with a female voice perpetuate the stereotype of a subservient female helper. Its paper, entitled ‘I’d blush if I could’ (the reply Siri gave to “Hey Siri, you’re a bi***”), goes on to explore the gender divide within the tech industry and suggests how best to empower women to close the gap.

    Where AI bias starts

    In most cases, bias comes from the source data.

    For example, when Amazon created an AI tool to help with recruitment, the training set was based on historic staff applications and profiles, which were overwhelmingly white and male. The ‘ideal’ candidate the tool learned therefore showed the same bias, and that bias persisted into future recruitment. Even once Amazon identified the problem, the bias was difficult to remove: the tool went on to pick up on gender-coded language within applications, and the bias persisted.
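    The Amazon story illustrates a general mechanism: a model trained on skewed historic data simply reproduces the skew. A minimal sketch in Python – using an entirely made-up dataset and a deliberately naive ‘hire like we hired before’ scorer, not Amazon’s actual system – makes the point:

```python
from collections import Counter

# Toy illustration (hypothetical data, NOT Amazon's actual model):
# a naive scorer that rates candidates by how often similar
# candidates appear in the historic hiring record.
past_hires = ["male"] * 90 + ["female"] * 10  # 90% male history

def hire_score(candidate_gender, history):
    """Return the fraction of historic hires matching this candidate."""
    counts = Counter(history)
    return counts[candidate_gender] / len(history)

print(hire_score("male", past_hires))    # 0.9
print(hire_score("female", past_hires))  # 0.1
```

    Any candidate resembling the historic majority scores higher, so the skew in the history becomes a skew in every future decision – and removing the explicit gender field doesn’t help once the model finds proxies for it, such as gender-coded language.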

    The Gender Shades report, based on research done at MIT by Joy Buolamwini and others, clearly highlights the impact that facial-recognition training datasets have on outcomes. Improvements are being made, but for some suppliers there is still quite a distance to go.

    Keeping your AI fair

    Even when bias can be identified, there is still the definition of fairness to address. Fairness depends on what you are trying to achieve, and without a set goal in mind, using AI to make decisions for you may lead to the wrong conclusions. Anyone jetting off on holiday will be happy and reassured to know that the AI used in airport security is not necessarily fair: it errs on the side of caution and flags up false positives. This is preferable to a single false negative – who really minds a more thorough search when airline safety is at risk? However, such false positives in, for example, ANPR would not be welcome.
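    That trade-off comes down to where you set the alert threshold. A small sketch – with hypothetical risk scores, not any real screening system – shows how lowering the threshold trades missed threats (false negatives) for extra searches (false positives):

```python
def confusion(scores_labels, threshold):
    """Count (false positives, false negatives) at a given alert threshold.
    scores_labels: list of (risk_score, is_actual_threat) pairs."""
    fp = sum(1 for s, y in scores_labels if s >= threshold and not y)
    fn = sum(1 for s, y in scores_labels if s < threshold and y)
    return fp, fn

# Hypothetical screening scores: most travellers are harmless.
data = [(0.2, False), (0.4, False), (0.6, False), (0.7, True), (0.9, True)]

print(confusion(data, 0.8))  # strict threshold: (0, 1) -> one missed threat
print(confusion(data, 0.5))  # cautious threshold: (1, 0) -> one extra search, no misses
```

    Which threshold is ‘fair’ depends entirely on the cost of each kind of error – acceptable at the airport, unacceptable for ANPR.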

    What does the impact of AI look like?

    Willrobotstakemyjob is a great resource that gives you a guesstimate of the likelihood of your job being replaced – for Market Researchers, for example, the probability is about 61%. Of course, in reality it is not going to be that straightforward. Within any current role, the impact of AI will vary and will most likely fall on specific tasks, or a range of tasks, within that role. This will hopefully lead to a positive evolution of the roles of the future.

    AI can help crunch through the vast quantities of structured and unstructured data that we hold, searching out undiscovered relationships within it. With good training and a clear definition of entities and relationships, working with AI can provide whole new views of what your data contains.

    This was highlighted recently by Brisbane City Council and its exploration of the ‘dark data’ it held. The council was sitting on millions of documents, ranging from surveys to letters (both complaints and compliments), and used AI to process these diverse data sources. It was able to identify the top issues concerning Brisbane’s citizens – the main one being frustration associated with cycling. The expectation was that cyclists needed support in the form of projects such as more cycle lanes. In fact, the AI analysis revealed the reverse: not more cycle lanes, but encouraging cyclists to use the existing ones.
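    The core idea behind that kind of analysis is surfacing recurring topics from free text. A toy sketch – a hand-defined keyword map over made-up feedback, far simpler than the council’s actual pipeline – shows the shape of it:

```python
from collections import Counter
import re

# Toy sketch (hypothetical keywords and data, NOT Brisbane's pipeline):
# tally how many documents mention each topic.
TOPIC_KEYWORDS = {
    "cycling": {"cycle", "cyclist", "bike", "lane"},
    "waste": {"rubbish", "bin", "collection"},
}

def top_topics(documents):
    """Rank topics by the number of documents mentioning them."""
    tally = Counter()
    for doc in documents:
        words = set(re.findall(r"[a-z]+", doc.lower()))
        for topic, keywords in TOPIC_KEYWORDS.items():
            if words & keywords:
                tally[topic] += 1
    return tally.most_common()

feedback = [
    "Cyclists never use the bike lane on Main St",
    "Bin collection was missed again",
    "A cyclist nearly hit me on the footpath",
]
print(top_topics(feedback))  # [('cycling', 2), ('waste', 1)]
```

    Real systems use trained entity and topic models rather than keyword lists, but the output is the same: a ranked view of what the documents are actually about.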

    At Edit, we have been using machine learning across a number of different projects: training custom image-recognition classes to create maps from satellite images in support of humanitarian aid delivery; analysing the ‘voice’ of a client’s web presence for different vehicle models compared to competitors; and producing an AI- and human-generated Christmas card. These projects were designed to show that, although AI is very powerful, you still need to understand what is going on under the covers, and a human element is still required.

    A final thought

    Whenever AI is proposed as the answer to a problem, I would urge you to keep in mind the WEIRD developers behind it and the possibility of bias in the solution.

