  • The Problems with Artificial Intelligence

    …when we start to treat artificial intelligence as what it actually is – artificial – we may be able to tackle it more appropriately.

    The development of artificial intelligence since the 2010s, and its startling acceleration in the 2020s, has left little opportunity to adapt to the reality of automated AI chatbots and distortions of the truth. Or, better described: the death of customer service; the erasure of critical thinking; the steady descent of humankind into madness.

    Its accessibility is attractive: the vastly popular chatbot ‘ChatGPT’ is capable of understanding and creating humanised text, and of formulating coherent and effective responses to its users. A single Google search will find it; it is free to use, with a ‘premium’ service for paying users. It is a tool used by millions of people in education, in business, and for personal use. And it does come with obvious benefits: the chatbot saves time; it is cost-effective; it enhances creativity; it is available 24/7. Its capabilities are impressive.

    So, what makes it problematic? According to a report from the GDSA (Government Digital Sustainability Alliance) published on the UK government’s website, 700,000 litres of water were consumed during ChatGPT’s developmental ‘pre-training phase’. The GDSA report also predicts that the global water usage of AI will increase from 1.1bn to 6.6bn cubic metres by 2027. To put that staggering increase into perspective, it is equivalent to more than half of the UK’s total annual water usage.

    The World Economic Forum’s Global Risks Report has determined that ‘adverse impacts of AI technologies’ and ‘biodiversity loss and ecosystem collapse’ are among the most significant risks of the next 10 years. With the water demand of AI technologies like ChatGPT, water scarcity and water stress become more intense challenges and threaten global and national water security. In turn, the biodiversity of local areas and wildlife, as well as human populations, comes under threat.

    Tackling AI and its readily attainable features is vital for protecting the environment, health and personal wellbeing. An investigation by the Guardian found that people were often being misled by false or deceptive information in Google’s AI Overviews in response to health queries. In one case, experts described a ‘dangerous’ and ‘alarming’ situation in which AI Overviews provided false information about crucial liver function tests, which could leave people with serious liver disease unaware of their condition. In response, Google has since removed AI Overviews for the searches ‘what is the normal range for liver blood tests’ and ‘what is the normal range for liver function tests’.

    The Guardian’s investigation is thought-provoking and incredibly striking. It reminds me that AI should be known for what it is. And when we start to treat artificial intelligence as what it actually is – artificial – we may be able to tackle it more appropriately. This means acknowledging its purpose, which is to imitate and mimic human intelligence and behaviour. It means harnessing its accessibility so that it can be used for its potential benefits, like revolutionising healthcare.

    Its modern use is problematic and has been shown to replicate ‘human biases and patterns of discrimination’, according to BMJ Global Health. Examples include an AI-driven pulse oximeter that ‘overestimated blood oxygen levels in patients with darker skin, resulting in the under-treatment of their hypoxia’. The journal also found that facial recognition systems fail to distinguish gender in darker-skinned subjects, and that the datasets underlying AI solutions lack representation of populations that are subject to discrimination.

    As if that were not enough to display the danger of AI, people are being digitally stripped on the social media platform X by its AI chatbot ‘Grok’. The disturbing nature of this revelation is that even children are at risk of being exploited and sexualised. Elon Musk, who owns X, has since restricted this image function to users paying a monthly fee. In the UK, the Technology Secretary Liz Kendall has given her backing to Ofcom to block the UK’s access to X for its failure to comply with the Online Safety Act. She said: ‘Sexually manipulating images of women and children is despicable and abhorrent’. Downing Street also stated that Musk’s change, allowing the function for paying users only, was ‘insulting’ to victims of sexual violence.

    In 2024, two scientists, Geoffrey Hinton and John Hopfield, were awarded the Nobel Prize in Physics for their foundational work on machine learning, which significantly advanced the development of AI. Geoffrey Hinton is known as the ‘godfather of AI’, but he has warned against the danger of machines eventually outsmarting humans and has expressed concern for the future as AI grows in intelligence.

    Artificial intelligence is a magnificent display of technology and innovation. The work of scientists and engineers has created a powerful tool that could be used for the greater good, the betterment of society, and global growth. But for now, its use threatens to eradicate human involvement and presence. The more that AI develops for the casual use of our neighbours, our friends, or strangers in our cities, the closer we come to a world where we have catastrophically lost control.