Summary

Ausbil recently travelled to China, Taiwan and South Korea to speak with companies across the AI value chain, including AI developers, manufacturers of semiconductor chips and thermal cooling solutions, and data centre operators. This research trip provided valuable insights into how organisations across the value chain approach responsible AI.
Key Points
  • Artificial Intelligence (AI) will be transformative across many industries but comes with the risk of unintended consequences such as bias, discrimination, breaches of privacy and job losses.
  • Weak AI governance may expose companies to significant legal, financial and reputational risks.
  • Global AI developers are embedding responsible AI considerations into their governance frameworks, while many ASX companies are very early in that journey.
  • Responsible AI must go beyond ethics to include issues such as energy, climate and environmental impacts, and human rights.
  • Outside Australia, countries such as China have more progressive policies that encourage companies to adopt responsible AI principles and that support more energy-efficient data centre development.
Applications of AI
The democratisation of artificial intelligence (AI) has catalysed a surge of interest and investment in this transformative technology across a broad range of sectors. Applications of AI are widespread, ranging from consumer-facing generative models such as ChatGPT and Copilot that support users in everyday tasks, to specialised use cases such as enhancing customer service capabilities, streamlining recruitment processes, accelerating coding for software engineers and bolstering cyber security.

However, as with all revolutionary technologies throughout history, a multitude of issues and risks are emerging alongside AI’s rapid development. These are often dubbed the unintended consequences of AI, and examples have surfaced in Australia and globally. In November 2024, Bunnings (owned by Wesfarmers) was found to have breached the Privacy Act after trialling facial recognition technology in 63 stores between November 2018 and November 2021 [1]; Bunnings stated the trial was aimed at keeping teams and customers safe and preventing unlawful activity by repeat offenders. Amazon.com reportedly abandoned its AI recruiting tool following findings of gender bias in its outcomes [2]. In early 2024, cyber criminals used deepfake technology to defraud UK engineering firm Arup of US$25 million through a fake video call impersonating senior management. And WiseTech Global announced it is reducing workforce numbers to maximise efficiency via automation and AI, raising broader concerns about employment security.

Incidents such as these highlight that companies may have gaps in their governance frameworks for AI, and they underscore the need, at a minimum, for comprehensive responsible AI policies, especially in today’s rapidly evolving environment. Otherwise, several risks can emerge, including legal and regulatory risk, which may carry significant financial consequences such as fines and remediation costs, and brand and reputational risk, which can also be financially material by impacting future sales or weighing on a company’s market capitalisation.

We also take the view that responsible AI needs to look beyond “ethics” alone and incorporate other ESG concerns, including the exponential growth in energy-intensive data centres driven by AI demand, and the increased manufacturing of AI hardware in high-risk regions, which exacerbates modern slavery risks in global supply chains. Companies implementing AI should also consider their broader social licence to operate.

As noted above, Ausbil recently travelled to China, Taiwan and South Korea to speak with companies across the AI value chain, including AI developers, manufacturers of semiconductor chips and thermal cooling solutions, and data centre operators. The sections that follow share insights from this research trip into how organisations across the value chain approach responsible AI.
 
Regulatory landscape and policy development
The Chinese government has recognised the ethical issues that can emerge from AI and has stepped in by establishing policy and regulation to support responsible AI. In our discussions with Chinese AI developers, we observed that innovation has not been stifled despite increased regulatory demands. Companies are invited to participate in the policy consultation process, and recent measures such as mandatory labelling of AI-generated content demonstrate a regulatory approach that is responsive to societal concerns.

China’s progress in responsible AI regulation may offer insights that could inform aspects of Australia’s regulatory development. As Australia deliberates its next steps beyond the existing principles-based approach to responsible AI regulation, investors play a crucial role in preparing companies for an evolving regulatory landscape through proactive engagement. Examples include direct dialogue with executive management and/or the board, collaborative investor initiatives (Ausbil is actively involved in the WBA Collective Impact Coalition for Ethical AI), and policy advocacy.
 
Governance frameworks and best practice
Ausbil’s starting point in company engagements is to encourage the establishment of robust AI governance structures that address responsible AI concerns. Ausbil was a contributing author to RIAA’s investor toolkit on Artificial Intelligence and Human Rights, which outlines the essential elements of a strong governance framework.

Our meetings with AI developers revealed that many had already implemented key governance recommendations outlined in the toolkit, including, but not limited to, the adoption of a responsible AI policy, clear accountability structures, mandatory staff training, and limited access protocols.

In contrast, our engagements with ASX companies over the past year reveal that management teams understand AI primarily through the lens of productivity benefits, with the ethical implications often less emphasised. By engaging with companies over time, we can support them in developing stronger responsible AI governance frameworks and evolve the conversation towards the risks and impacts specific to their businesses.
Climate and energy considerations
Data centres sit at the intersection of AI and climate change, providing the massive computing power, storage and infrastructure needed to train and run complex AI models. Data centres are large consumers of energy, with cooling alone accounting for up to 40% of a facility’s electricity needs. The IEA’s World Energy Outlook Special Report on Energy and AI [3] projects that “electricity demand from data centres worldwide is set to more than double by 2030 to around 945 terawatt-hours (TWh), slightly more than the entire electricity consumption of Japan today.”
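
To make the scale of that projection concrete, the short Python sketch below backs out what “more than double by 2030” implies for today’s demand and the annual growth rate. This is a rough illustration only; the 2024 baseline year is our assumption, not a figure quoted from the report.

```python
# Back-of-the-envelope reading of the IEA projection quoted above.
# Assumption (ours): "more than double by 2030" is treated as roughly 2x
# over 2024-2030; the IEA report itself gives more precise figures.
projected_2030_twh = 945                       # IEA projection for 2030
implied_baseline_twh = projected_2030_twh / 2  # upper bound on current demand
years = 2030 - 2024
implied_cagr = (projected_2030_twh / implied_baseline_twh) ** (1 / years) - 1

print(f"Implied current demand: under {implied_baseline_twh:.0f} TWh")
print(f"Implied growth rate: at least {implied_cagr:.1%} per year")  # ~12.2%
```

On that reading, data centre electricity demand grows by at least roughly 12% per year through 2030.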

Chinese policy initiatives demonstrate an innovative approach to sustainable data centre development, and there are lessons Australia could learn as its own industry grows. The government encourages data centre construction in China’s north-west, where the naturally cool climate year-round reduces cooling loads, and where there is also more space for renewable energy developments that data centres can draw on. This geographic strategy addresses both energy efficiency and carbon footprint concerns.

However, the remoteness of those data centres introduces additional complexity. AI model training requires massive computational resources, including significant electricity, so these energy-efficient data centres are well suited to training workloads. AI inference (e.g. AI chatbots, autonomous vehicles), by contrast, demands low-latency data centres positioned closer to end users, such as on China’s east coast near large cities. The Chinese government has also implemented power usage effectiveness (PUE) requirements for data centres.
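
PUE is the ratio of a facility’s total energy consumption to the energy consumed by its IT equipment, so a perfectly efficient facility scores 1.0. Below is a minimal sketch of the arithmetic, using the up-to-40% cooling share cited earlier and assuming, for simplicity, that cooling is the only non-IT load.

```python
# Power usage effectiveness (PUE) = total facility energy / IT equipment energy.
# Simplifying assumption (ours): cooling is the only non-IT load.
def pue(it_energy_kwh: float, cooling_energy_kwh: float) -> float:
    """Return PUE for a period given IT and cooling energy use."""
    return (it_energy_kwh + cooling_energy_kwh) / it_energy_kwh

# If cooling accounts for 40% of total consumption, IT is the other 60%:
print(round(pue(it_energy_kwh=60.0, cooling_energy_kwh=40.0), 2))  # 1.67
```

Cooler ambient temperatures in China’s north-west shrink the cooling term directly, which is how siting policy translates into lower PUE.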
 
Supply chain and human rights risks
Our meetings with hardware manufacturers revealed the significant labour intensity of thermal solution manufacturing, which forms part of the data centre supply chain. There is no shortage of examples of forced labour and modern slavery within the technology value chain in countries including China, which suggests that similar risks may exist within AI-specific supply chains.

Australia’s Modern Slavery Act 2018 requires companies above a specified revenue threshold to assess and manage modern slavery risks. However, our own analysis and engagement activities indicate limited meaningful progress in technology supply chains, which we classify as having high exposure to modern slavery risk. We believe companies can play a more active role in engaging suppliers on human rights issues and, in particular, could be more proactive in identifying and remediating modern slavery.
Looking forward: the evolution of responsible AI

The challenges discussed here represent only the tip of the iceberg in what is already a rapidly evolving landscape, and we anticipate views on responsible AI will evolve at a similar pace. We intend to continue our active engagement with portfolio companies on responsible AI, especially where investment in AI becomes a priority. This is an ESG issue we expect to gain further traction and interest, and it therefore remains an active element of Ausbil’s engagement plan.
References
1.    Source: OAIC, 2024. Bunnings determination. https://www.oaic.gov.au/__data/assets/pdf_file/0027/243936/Bunnings-determination-factsheet.pdf
2.    Source: Reuters, 2018. Insight - Amazon scraps secret AI recruiting tool that showed bias against women. https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/
3.    Source: IEA, 2025. World Energy Outlook Special Report – Energy and AI. https://www.iea.org/reports/energy-and-ai