Sydney AI Safety Fellowship 2026: Targeting Advanced AI Risks

Image Credit: Ashwin Vaswani | Splash

A Sydney-based program, the Sydney AI Safety Fellowship 2026, is scheduled to run from 10 January to 1 March 2026. The organisers describe it as a 10-week hybrid structure, with the main in-person component running for 7 weeks in Sydney, Australia, built around weekly discussions and project work focused on the transition to more advanced AI technologies.

Australia’s AI Evolution Story

Australia is simultaneously building out its national AI safety capacity. The Australian Government has announced the Australian Artificial Intelligence Safety Institute (AISI), which it says will become operational in early 2026.

At the same time, the Department of Industry points to the International AI Safety Report (January 2025) and its first key update (October 2025) as part of the global effort to track fast-moving changes in advanced AI capabilities and risks, with the next full report due in early 2026.

Against that backdrop, the Sydney fellowship is one example of how the “AI academy” layer is evolving beyond general AI literacy into specialised, risk-focused training.

Who Is the Fellowship For?

The fellowship’s public materials frame the program around helping people contribute to “ensuring that humanity’s transition to advanced AI technologies goes well”.

It is not pitched as a general AI tools course. Instead, the organisers explicitly list risk areas they are focused on, including:

  • loss of control

  • AI-enabled pandemics

  • AI-related great power conflict

  • societal-scale cyberattacks

  • information warfare

  • gradual disempowerment

How the 10-Week Structure Is Organised

According to the official program page, the timeline is split into four stages:

  • Pre-program: meeting fellows on a call and refining project ideas

  • Opening unconference: 10 to 11 January

  • Main program: 7 weeks (10 January to 1 March), 2 days per week in person

  • Follow-up phase: 3 weeks of continued support to close projects and plan next steps

That “2 days per week in person” detail is important. While the in-person component spans seven weeks, it is not presented as a full-time seven-week residency.

What Support Is Included

The organisers list a set of practical supports and constraints:

  • coworking space two days per week and free lunch

  • social events (opening dinner, socials, closing dinner)

  • compute for empirical research

  • mentorship, networking, and career advice

  • potential flight reimbursement for top candidates, capped at regional flight costs in Australia and New Zealand

  • no visa assistance, and a higher bar for international applicants

  • no stipends or accommodation

That combination suggests the program is most accessible to people who can already spend two days a week in Sydney, with additional time for project work outside the scheduled sessions.

Other AI Safety Training Options

The fellowship sits within a growing ecosystem of AI safety education pathways, which increasingly vary by time commitment, location, and technical depth:

  • MATS (Summer 2026): positioned as a full-time research program, with the organisation stating a 12-week commitment of 40 hours per week, primarily in person in Berkeley, with remote or part-time participation possible depending on circumstances and mentor.

  • ARENA 7.0: an in person bootcamp in London from 5 January to 6 February 2026, with the organiser stating travel and accommodation are covered.

  • TARA (APAC): a 14-week part-time program with plans for a March 2026 cohort that includes Sydney among targeted cities.

  • BlueDot Impact: offers a mix of self-paced and cohort-based courses, including a 2-hour self-paced “Future of AI” course, alongside longer cohort programs.

In that set, the Sydney fellowship is distinctive mainly because it is local, discussion- and project-centred, and explicitly focused on advanced AI transition risks, while keeping the in-person requirement to two days per week.

What to Watch Next

Applications for the 2026 intake are already closed, so the next credibility signal will be the quality and visibility of outputs.

The organisers also publish an “Alumni Outcomes” section. These claims are self-reported, but they give a directional picture of what the program is aiming for, including the statement that this is the third iteration and that participants from prior iterations have gone on to work in AI safety roles.

For readers tracking AI evolution in Australia, the bigger question is whether programs like this become a regular part of the local talent pipeline, especially as Australia’s AISI becomes operational in early 2026 and the global AI safety science agenda continues to accelerate.

TheDayAfterAI News

We are a leading AI-focused digital news platform, combining AI-generated reporting with human editorial oversight. By aggregating and synthesizing the latest developments in AI — spanning innovation, technology, ethics, policy and business — we deliver timely, accurate and thought-provoking content.
