The actions of investors today will be key to avoiding the catastrophic consequences that the adoption of AI may have for investee companies, as well as for our society, environment and economy. In the short term, businesses that fail to put adequate governance safeguards in place around the design and deployment of AI face financial consequences, reputational damage, operational disruption, legal, environmental and human rights risks, and challenges to their social licence.
In the longer term, investors must understand the systemic risks associated with more advanced models and they have a responsibility to their clients, beneficiaries and investee companies to play their part in preventing negative outcomes.
Assessments vary about the impact that the rapid uptake of AI will have, the pace of change, the economic consequences and the potential for AI to do harm – and good.
Here’s what we know:
The economics are not yet understood
• AI could herald a new era of enhanced productivity.
• It could also lead to mass unemployment and increased inequality, negatively impacting the consumer base that most companies rely on for their business model and revenue stream.
The impacts on people’s health and wellbeing will get worse if AI develops unchecked
• AI has been used to sexually harass and intimidate, including through the circulation of AI-generated sexually-explicit deepfakes.
• AI chatbots have encouraged people towards self-harm and suicide.
• Early indications suggest that the use of AI reduces the brain’s capacity for creativity, problem solving and critical thinking.
• A focus on efficiency and the outsourcing of ethical and moral judgements to AI can lead to unintended consequences – for example, in the use of autonomous weapons or AI programs that profile people for arrest, detention or deportation.
Ordinary people are losing out as businesses rush to adopt
• Content producers like writers and musicians are having their work mined and used to train AI models, for no reward.
• AI is being used to create harmful content and misinformation that can influence the outcomes of elections.
• Labour market disruption predictions vary widely – from unemployment of 6-7% to as high as 99% by 2030 – as a large number of jobs, and perhaps even whole professions, are set to be replaced in coming years.
• AI’s reliance on historical data threatens to amplify existing patterns of discrimination through model bias.
• AI systems are fed on personal, sensitive data and increasingly monitor behaviour of users to “improve” future output.
• The people who train algorithms are routinely exposed to harmful content, often for low pay and with little psychological support.
AI causes major environmental impacts
• Each AI prompt can consume around one bottle of water; globally, AI-related water demand is projected to increase from 1.1 billion to 6.6 billion cubic metres by 2027 (half the UK’s total water usage), and AI data centres are often located in already water-stressed areas.
• AI already consumes about 2% of global electricity, with a 12% annual growth rate. It is responsible for 2.5-3.7% of global greenhouse gas emissions and is predicted to consume electricity equivalent to the total demand of Japan by 2030.
• The adoption of AI into business models may undermine previously established net zero or nature transition plans.
Governments are struggling to keep up
• Policy and regulation are usually on the back foot when new technologies are introduced rapidly (think of Uber operating illegally for years, or children’s access to violent and explicit online content).
• Regulation globally ranges from comprehensive, AI-specific frameworks (such as the EU’s AI Act) to retrofitting existing regulation (Australia).
• We lack a clear vision of where AI should fit into our societies and economies.
• Companies, not governments, are in the driver’s seat when it comes to “acceptable” AI use – decisions are being based on capability rather than societal expectations (for example, autonomous weapons).
AI systems can be used to achieve sustainability outcomes
• AI is being used for things like climate scenario modelling and nature impact assessment.
• AI in the health field has led, and will lead, to medical breakthroughs.
We must act now to avoid further harm from the AI systems of the future
• Once AI systems progress from Artificial General Intelligence (AGI) to Artificial Super Intelligence (ASI) – matching, then surpassing, human intelligence and physical labour across all fields – it will be impossible for people to keep up.
• Humans are unable to comprehend what AI will do once agents start improving upon themselves – it is unlikely that people will be able to predict or control the technology.
A stark example of the reputational and legal risks of AI use, revealed in recent days, is the use of AI by a Sydney solicitor in a submission appealing a client’s cancelled driver’s licence: the AI hallucinated seven cases and 12 quotations, which made it into the submission before being picked up by the judge. The solicitor is now facing disciplinary action and the firm’s reputation has been significantly damaged.
As with companies, fund managers and asset owners themselves need to develop their own robust ESG and impact frameworks for AI.
Responsible investors can play a crucial role by engaging companies on how AI is developed and deployed. Questions include:
• Where are you using AI and why? How is AI contributing to the achievement of business strategy and value creation?
• Are you seeking to understand “shadow AI” – offline or unofficial/non-sanctioned use by employees?
• What human oversight do you have?
• How is AI being used in your supply chain?
• How are you ensuring the security of data?
• Have you considered and managed the potential and actual human rights impacts of your product?
• What is your transparent remediation process should your company’s development or use of AI lead to unintended consequences for customers or employees?
• Is renewable energy being used to power data centres?
• Are AI operators planning to introduce closed-loop cooling systems (which recirculate water)?
• How efficient is the hardware (such as energy efficient chips)? Are your models optimised and focused on smaller, task-specific queries to reduce energy use?
• Is your net zero strategy still fit-for-purpose?
• How are you planning for a transition to an AI future that mitigates the risks that the introduction of AGI and ASI poses?
• Are you advocating (directly or through your industry group) for policy/regulatory settings that support a future in which AI facilitates positive change and does not lead to catastrophic consequences for your business, customers and stakeholders?
No AI was used to write this blog.
Practical guidance
For more, see RIAA’s Artificial Intelligence and Human Rights Investor Toolkit. The toolkit remains as relevant today as it was when it was launched at RIAA’s conference by Australia’s eSafety Commissioner almost two years ago. If you want to delve into its real-life application, sign up to RIAA’s Human Rights Working Group.
AI at RIAA’s conference
AI will be a focus at the RIAA Conference in May. We will examine how investors can respond to systemic risks such as AI, and a practical session will explore leading practice in AI use in responsible investing, and how investors can position themselves and their organisations to harness AI tools in support of financial returns, while managing emerging risks. Find out more here.
<hr>
<small> Disclaimer: The above content is provided by Responsible Investment Association Australasia (ACN 641 046 666, AFSL 554110) for information purposes and is not an offer to buy or sell a financial product, and is not warranted to be correct, complete or accurate. For more information refer to our Financial Services Guide on the RIAA website. Any general advice has been provided without reference to your investment objectives, financial situation or needs. If the advice relates to the acquisition of a particular financial product for which an offer document (such as a product disclosure document) is available, you should obtain the offer document relating to the particular financial product and consider it before making any decision whether to acquire the product. Past performance does not necessarily indicate a financial product’s future performance. To obtain information tailored to your situation, contact a financial adviser.

Estelle Parker, Co-CEO, RIAA
With a distinguished 20-year career at the Department of Foreign Affairs and Trade, Estelle Parker brings crucial expertise in government relations, policy-making, and themes important to responsible investors, including human rights and the SDGs. As a leader driving RIAA’s research, certification, policy, standards, and working group programs, her leadership has elevated these initiatives to achieve heightened levels of professionalism, impact, and value delivery for our members, aligning seamlessly with RIAA’s strategic objectives.
Beyond her organisational impact, Estelle is a respected figure in the responsible investment landscape, serving as a strong advocate on influential global and government committees, including the Principles for Responsible Investment’s Global Policy Reference Group, the Global Sustainable Investment Alliance (as a Board member) and the Australian Government’s Natural Capital Working Group. Additionally, she serves as the Convenor of the Taskforce on Nature-Related Financial Disclosures official Consultation Group for Australia and Aotearoa New Zealand, and the Steering Committee for the Australian Sustainable Finance Institute. She is also the Vice President of the Council of the Australian Institute for International Affairs (Victoria).
