AI’s Rapid Rise Exposes Gender Bias and Regulatory Gaps: Experts Call for Urgent Oversight

Image Credit: Pawel Czerwinski | Unsplash

The rapid advancement of artificial intelligence is transforming industries worldwide. However, experts warn that without adequate oversight, AI systems may perpetuate and even exacerbate existing gender biases, posing significant risks to women and girls.

The Global AI Boom

Global corporate investment in AI reached US$92 billion in 2022, a sixfold increase over 2016, according to Statista. In 2023, the global AI market was valued at over US$142.3 billion, and it is projected to approach US$2 trillion by 2030.

Major technology companies, including Google, Microsoft, and Amazon, alongside numerous startups and state-backed initiatives in the United States and China, are spearheading advancements in machine learning, natural language processing, and generative AI. These technologies power virtual assistants like Siri and content creation tools increasingly adopted in marketing and other sectors.

AI's Role in Gender Bias

AI systems are typically trained on vast datasets sourced from the internet, which often contain societal biases. A 2023 UNESCO study revealed that large language models tend to associate women with domestic roles and terms like "home", "family" and "children", while associating men with "business", "executive" and "career".

Moreover, AI image generators have been found to produce hypersexualized depictions of women, even when prompted with neutral terms. This phenomenon is attributed to the prevalence of such imagery in the internet data on which these models are trained.

The lack of diversity in AI development teams exacerbates these issues. According to the AI Now Institute, women comprise only 15% of AI research staff at Facebook and 10% at Google.

Regulatory Challenges

Regulatory frameworks are struggling to keep pace with the rapid development of AI technologies.

The European Union's Artificial Intelligence Act, which entered into force on August 1, 2024, aims to foster responsible AI development and deployment. However, its requirements will be applied gradually, with most obligations taking effect by mid-2027.

In the United States, AI regulation remains fragmented, with various state-level initiatives but no comprehensive federal policy. China's AI governance, under the 2017 New Generation AI Development Plan, emphasizes state control over ethical considerations.

Ansgar Koene, EY Global AI Ethics Leader, warns that "the speed of AI development is far outpacing our ability to regulate it", and advocates mandatory bias audits and more diverse data governance.

Societal Impacts and Risks

AI offers significant benefits across various sectors. For instance, AI-driven telemedicine has improved healthcare access in rural areas by enabling remote consultations and diagnostics.

However, biased algorithms can lead to discriminatory outcomes. A 2024 study by the University of Washington found that AI tools favoured white-associated names 85% of the time and female-associated names only 11% of the time when screening resumes.

A 2025 Pew Research Center report indicates that 55% of both AI experts and the general public are highly concerned about bias in AI decision-making processes.

Towards Ethical AI Development

With the AI market projected to approach US$2 trillion by 2030, addressing gender bias will require concerted effort on several fronts:

  • Diverse Teams: Increasing female representation in AI research and development teams.

  • Bias Audits: Implementing regular audits of AI systems to detect and mitigate biases.

  • Global Standards: Developing binding international agreements through bodies like the United Nations to ensure ethical AI practices.

  • Public Awareness: Educating users about AI biases to drive demand for ethical technologies.
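One of these recommendations, bias auditing, can be illustrated with a toy check. The sketch below (all names and figures are hypothetical, not from any cited study) computes a simple selection-rate disparity for a resume-screening tool, using the four-fifths rule from US employment guidance as a red-flag threshold; real audits are far more comprehensive.

```python
# Illustrative sketch of one bias-audit metric: the disparate impact ratio.
# A ratio below 0.8 (the "four-fifths rule") is a common warning sign.

def selection_rate(outcomes):
    """Fraction of candidates selected; outcomes is a list of 0/1 flags."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group's selection rate to the higher group's."""
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high if high > 0 else 1.0

# Hypothetical audit data: 1 = resume advanced by the AI screener, 0 = rejected.
male_outcomes = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]    # 70% selected
female_outcomes = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # 30% selected

ratio = disparate_impact_ratio(male_outcomes, female_outcomes)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.70 ≈ 0.43
if ratio < 0.8:
    print("Audit flag: selection rates differ beyond the four-fifths threshold.")
```

A single ratio like this cannot establish bias on its own, but running such checks routinely, across many protected attributes, is the kind of monitoring the recommendation envisions.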

Major tech firms have initiated efforts to audit AI models for biases. For example, Google reported steps taken in 2024 to assess and mitigate biases in their AI systems.

Expert and Public Perspectives

Public sentiment reflects growing concerns about AI-generated content, such as deepfakes, disproportionately affecting women. Experts like Timnit Gebru have warned that unchecked AI development can amplify harmful behaviours. In a 2025 Senate testimony, OpenAI’s CEO, Sam Altman, advocated for balanced regulation that ensures innovation alongside safety.

TheDayAfterAI News

