In May 2024, a major incident hit UniSuper, one of Australia’s largest pension funds. For over a week, members couldn’t access their accounts. It was not a cyberattack, and it was not a billing issue: one UniSuper private cloud environment was taken offline by a misconfiguration.
This article covers:
- What happened
- Why systems were offline for days
- What we can learn from it
What happened to UniSuper
UniSuper ran parts of its infrastructure on Google Cloud VMware Engine (GCVE), a service that hosts VMware private clouds on Google Cloud.
In early 2023, a Google employee used an internal provisioning tool to deploy one of UniSuper’s GCVE environments. During that process:
- A configuration parameter was left blank
- The system assumed a default lifecycle value
- That default value set the environment to expire after one year automatically
- After one year passed, the GCVE private cloud was deleted by GCP’s automated systems
There was no advance notice to the customer, and the customer did not initiate the deletion. An internal default parameter triggered it.
UniSuper lost access to that environment, including the hosted applications and services. Their customers could not log in or see their accounts for several days.
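The failure mode can be sketched in a few lines. This is a hypothetical illustration, not Google’s actual provisioning tool; the field names and the one-year default are assumptions for the sake of the example:

```python
from datetime import datetime, timedelta

DEFAULT_TERM_DAYS = 365  # assumed internal default, for illustration only

def provision_environment(request: dict) -> dict:
    # The dangerous behavior: a missing "term_days" silently falls back to
    # a fixed one-year term instead of being rejected or meaning "no expiry".
    term_days = request.get("term_days") or DEFAULT_TERM_DAYS
    return {
        "name": request["name"],
        "expires_at": datetime.utcnow() + timedelta(days=term_days),
    }

# The lifecycle field is left blank, so the environment quietly receives a
# one-year expiry; an automated reaper would later delete it on schedule.
env = provision_environment({"name": "example-gcve-env"})
```

Nothing in this flow surfaces the expiry to the customer, which is why the deletion a year later came without warning.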
What was affected?
Many early headlines claimed UniSuper’s entire Google Cloud account was deleted. According to Google’s official incident report, that was not the case.
| Category | Impact |
|---|---|
| GCVE Private Cloud (1 environment) | Deleted and later restored |
| Other UniSuper cloud infrastructure | Unaffected |
| Google Cloud account, org, projects, IAM | Unaffected |
| Backups stored in GCS in the same region | Not deleted (helped restore) |
| Data in third-party backup providers | Used for recovery |
UniSuper used both internal and third-party backups during restoration.
The incident did not erase UniSuper’s entire cloud account. It deleted one private cloud environment within a specific service. That still caused serious impact, but it was not the broad account-deletion failure that early reports described.
Why UniSuper’s systems were offline for days
Even though only the GCVE environment was deleted, the impact was wide:
- GCVE hosted critical applications and network endpoints.
- Without the environment, users could not authenticate or reach services.
- UniSuper and Google teams had to rebuild, reconfigure, and restore systems.
- It took several days of effort to get everything operational again.
Restoring data was only one piece of the puzzle. Identity, networking, security configuration, and service dependencies all had to be rebuilt as well.
What Google changed afterward
In its official post-mortem, Google outlined the remediation steps:
- They replaced the internal tool that caused the issue.
- Systems were updated so the behavior that enabled automatic deletion could not occur again.
- Google reviewed all GCVE deployments to ensure no other environments were at risk.
Google also confirmed that the misconfiguration did not affect any other GCP customers and that this was an isolated incident.
Lessons
1. Automation workflows must be explicit
Internal defaults can have unintended effects. Automation must never trigger irreversible actions based on blank or default fields.
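One way to make such workflows explicit is to validate lifecycle fields up front and refuse any blank value, rather than substituting a default. A minimal sketch, assuming hypothetical field names rather than Google’s actual provisioning system:

```python
# Hypothetical lifecycle fields that must always be set explicitly.
REQUIRED_LIFECYCLE_FIELDS = ("term_days", "deletion_policy")

def validate_provisioning_request(request: dict) -> None:
    # Refuse to proceed if any lifecycle field is blank; never let a
    # destructive setting be filled in by an implicit default.
    missing = [f for f in REQUIRED_LIFECYCLE_FIELDS if request.get(f) in (None, "")]
    if missing:
        raise ValueError(f"explicit values required for: {', '.join(missing)}")

# Accepted: both fields set explicitly ("term_days: 0" meaning no expiry).
validate_provisioning_request({"term_days": 0, "deletion_policy": "never"})
```

The point is the inversion: a blank field becomes a hard error at provisioning time instead of a silent expiry a year later.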
2. Lifecycle and deletion policies must be guarded
Deletion logic should include explicit confirmation or extra consent.
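A common pattern combines both safeguards: require the caller to confirm the resource by name, and soft-delete with a grace period instead of destroying the resource immediately. A sketch under those assumptions (the 30-day window and API shape are illustrative, not any vendor’s actual behavior):

```python
from datetime import datetime, timedelta

GRACE_PERIOD = timedelta(days=30)  # hypothetical retention window

def request_deletion(env: dict, confirmation: str) -> dict:
    # Require the caller to type the environment name, mirroring common
    # "type the resource name to confirm" patterns.
    if confirmation != env["name"]:
        raise PermissionError("confirmation must match the environment name")
    # Soft delete: schedule the deletion instead of performing it now,
    # leaving a window in which the customer can notice and cancel.
    env["delete_after"] = datetime.utcnow() + GRACE_PERIOD
    return env

env = request_deletion({"name": "example-gcve-env"}, confirmation="example-gcve-env")
```

Either safeguard alone would likely have surfaced the UniSuper deletion before it became irreversible.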
3. Backups help, but they are not a full recovery
Even when backups survive, rebuilding identity, networking, and service dependencies takes time.
Final takeaway
This was a serious cloud outage for UniSuper.
- Internal configuration defaults were the cause of the incident.
- It did not delete the entire Google Cloud account.
- The deletion logic triggered because automated systems assumed default values.
The lasting lesson: build guardrails around deletion and lifecycle automation.
Sources: Google Cloud, Ars Technica