Deep Lex

Australia AI Regulation

No regulation

Oceania

Overview

No AI-specific legislation has been enacted or formally tabled.
  • The government conducted public consultation on mandatory guardrails for high-risk AI systems in September 2024; no bill has been introduced to parliament. A voluntary AI Ethics Framework remains in place.

Key Sources

Department of Industry — AI in Australia
AI Ethics Framework

This content is for informational and educational purposes only and does not constitute legal advice.

AI Regulation Timeline

  1. 04/12/2025
    adoption

Office of the Australian Information Commissioner (OAIC) published guidance regarding Generative AI tools in the workplace

    On 4 December 2025, the Office of the Australian Information Commissioner (OAIC) published guidance outlining the privacy risks and management strategies for businesses integrating Generative AI (GenAI) tools in workplaces. The guidance highlighted that GenAI use presents challenges for personal information protection, reminding entities subject to Australia's Privacy Act to avoid inputting sensitive data into publicly available GenAI tools due to control difficulties. Businesses must actively manage privacy risks. Such risks encompass disclosure, secondary uses, new collections, and the security and accuracy of personal information. Practical steps detailed include conducting Privacy Impact Assessments, developing internal policies, and restricting personal information uploads to public GenAI products when risks are high. The OAIC underscored the need for staff education on responsible GenAI use and managing privacy settings for organisational licences to ensure legal compliance.

  2. 02/12/2025
    adoption

Department of Industry, Science and Resources released National AI Plan 2025

On 2 December 2025, the Department of Industry, Science and Resources released the National AI Plan 2025. The Plan sets out a framework for the development, adoption, and governance of artificial intelligence (AI) across the Australian economy and society. It outlines three goals. The first goal focuses on capturing AI-related opportunities through investment in digital and physical infrastructure. The second aims to support wider adoption across businesses, workers, and communities. The third focuses on keeping Australians safe through regulatory, ethical, and international governance measures. The Plan sets out nine actions that address infrastructure, local capability, investment attraction, AI uptake, workforce skills, public-service modernisation, harm mitigation, responsible practices, and international standards. It also summarises existing initiatives, including the expansion of the National Broadband Network, GovAI for public-service use of AI, the National AI Centre, the AI Adopt Program, and the establishment of the AI Safety Institute. It identifies future work on data-centre principles, AI-focused Cooperative Research Centres, workforce training, safeguards in consumer protection and privacy, and international cooperation through bilateral and multilateral arrangements such as the Bletchley Declaration, the Seoul Declaration, and the Paris Statement.

  3. 25/11/2025
    adoption

    Australian Government announced establishment of Australian Artificial Intelligence Safety Institute

    On 25 November 2025, the Australian Government announced the establishment of the Australian Artificial Intelligence Safety Institute (AISI) to support best practice regulation, advise on legislative updates, and support regulatory action. The AISI is intended to provide technical capability for monitoring, testing and analysing emerging AI technologies, and to support the government in identifying future AI-related risks to ensure appropriate protections for the public. The AISI will operate as a central hub to coordinate government action on AI safety, share insights with regulators, and provide guidance on AI opportunities, risks and safety to businesses, government bodies and the public, including through the National AI Centre. The AISI is expected to become operational in early 2026. It will participate in the International Network of AI Safety Institutes and collaborate with domestic and international partners to contribute to global AI safety efforts.

  4. 28/10/2025
    investigation

eSafety Commissioner announced Apple and Google removed video app OmeTV from their app stores for non-compliance with Relevant Electronic Services Industry Standard

    On 28 October 2025, the eSafety Commissioner announced that Apple and Google had removed roulette-style video app OmeTV, operated by “Bad Kitty’s Dad”, from their app stores over the app's alleged non-compliance with the Online Safety (Relevant Electronic Services-Class 1A and Class 1B Material) Industry Standard. Specifically, the app allegedly failed to implement the required safety features and allowed adults to engage in randomised video chats with children without sufficient protection, raising concerns about grooming and sexual predation. The eSafety Commissioner previously notified Apple and Google at the same time as issuing a formal warning to the app's parent company.

  5. 27/10/2025
    public lawsuit

Australian Competition and Consumer Commission announced lawsuit against Microsoft over alleged misleading conduct

    On 27 October 2025, the Australian Competition and Consumer Commission (ACCC) announced a lawsuit against Microsoft and its subsidiary Microsoft Australia, alleging misleading or deceptive conduct and false or misleading representations under sections 18, 29(1)(i), (l), and (m) of the Australian Consumer Law (ACL). The ACCC claims that since 31 October 2024, Microsoft offered existing Microsoft 365 Personal or Family subscribers a choice between continuing their subscription with artificial intelligence (AI) integration at an increased price or cancelling it. A third option, which allowed subscribers to maintain their subscription without AI integration at the previous price, was not disclosed in communications. Microsoft Corporation launched its AI product, Copilot, in 2023. On 31 October 2024, Microsoft US published a blog post announcing the integration of Copilot into Microsoft 365 Personal and Family bundles and an increase in subscription prices. Subsequent emails from Microsoft Australia to existing auto-renewing subscribers informed them of the price adjustment for AI-powered features and the option to cancel. The ACCC alleges that the undisclosed 'Classic Option' was only visible if subscribers started the cancellation process, leading consumers to believe they had only two choices. The ACCC contends that Microsoft deliberately concealed the Classic Option to encourage uptake of AI integration and higher prices. The ACCC asserts that consumers who were unaware of this third option and accepted the higher-priced, AI-integrated bundle may have suffered financial loss equivalent to the price difference.

  6. 26/10/2025
    announcement

    Attorney-General announced consultation on updates to Copyright Laws to address AI challenges and rejection of establishing a text and data mining exception

    On 26 October 2025, the Attorney-General announced that the Government is consulting on potential updates to Australia’s copyright laws to address challenges arising from artificial intelligence (AI). The Government confirmed it will not introduce a text and data mining exception, which would have allowed AI developers to use creators’ works without payment. The Copyright and AI Reference Group will discuss options to encourage fair and legal access to copyright material for AI use, assess whether a new paid collective licensing framework should be created, and consider clarifying how copyright applies to AI-generated content. It will also explore ways to make enforcement of rights more affordable, such as establishing a small claims forum.

  7. 25/10/2025
    investigation

eSafety Commissioner issued legal notice to Chub AI (Chub.ai) regarding compliance with Basic Online Safety Expectations

    On 23 October 2025, the eSafety Commissioner issued a legal notice to the AI companion provider Chub AI (Chub.ai). The notice, issued under Australia’s Online Safety Act, requires the company to detail the measures it is taking to protect children from online harms, including sexually explicit conversations and images, suicidal ideation, and self-harm, thus demonstrating compliance with the Government’s Basic Online Safety Expectations Determination. Failure to comply with the notice could result in enforcement action.

  8. 25/10/2025
    investigation

eSafety Commissioner issued legal notice to Chai Research Corp (Chai) regarding compliance with Basic Online Safety Expectations

    On 23 October 2025, the eSafety Commissioner issued a legal notice to the AI companion provider Chai Research Corp (Chai). This notice, issued under Australia’s Online Safety Act, requires the company to detail the measures it is taking to protect children from online harms, including sexually explicit conversations and images, suicidal ideation, and self-harm, thus demonstrating compliance with the Government’s Basic Online Safety Expectations Determination. Failure to comply with the notice could result in enforcement action.

  9. 25/10/2025
    investigation

eSafety Commissioner issued legal notice to Glimpse.AI (Nomi) regarding compliance with Basic Online Safety Expectations

On 23 October 2025, Australia’s eSafety Commissioner issued a legal notice to the AI companion provider Glimpse.AI (Nomi). The notice, issued under Australia’s Online Safety Act, requires the company to detail the measures it is taking to protect children from online harms, including sexually explicit conversations and images, suicidal ideation, and self-harm, thus demonstrating compliance with the Government’s Basic Online Safety Expectations Determination. Failure to comply with the notice could result in enforcement action.

  10. 25/10/2025
    investigation

eSafety Commissioner issued legal notice to Character Technologies (character.ai) regarding compliance with Basic Online Safety Expectations

    On 23 October 2025, Australia’s eSafety Commissioner issued a legal notice to the AI companion provider Character Technologies, Inc. (character.ai). This notice, issued under Australia’s Online Safety Act, requires the company to detail the measures it is taking to protect children from online harms, including sexually explicit conversations and images, suicidal ideation, and self-harm, thus demonstrating compliance with the Government’s Basic Online Safety Expectations Determination. Failure to comply with the notice could result in enforcement action.

  11. 20/10/2025
    outline

    Cyber Security Centre released executive guidance on cloud shared responsibility model

    On 20 October 2025, the Australian Cyber Security Centre (ACSC) released executive guidance on the cloud shared responsibility model for government, critical infrastructure, and large organisations. It explains that cybersecurity duties are shared between cloud service providers (CSPs) and customers, but ultimate responsibility for data remains with the customer. Organisations must understand legislative obligations, know which cloud services they use, and assess risks based on data sensitivity. They should choose CSPs that provide secure-by-default services and transparent security controls. Main areas include access control, phishing-resistant authentication, short-lived credentials, and using trusted devices. Organisations must have tested incident response plans and coordinate with CSPs for alerts. Additional responsibilities cover encryption, logging, backups, secure configuration, and timely software patching.

  12. 20/10/2025
    outline

    Cyber Security Centre released guidance for individuals and small and medium businesses on cloud shared responsibility model

On 20 October 2025, the Australian Cyber Security Centre (ACSC) released guidance on the cloud shared responsibility model for individuals and small and medium businesses. It outlines how security responsibilities are divided between customers and cloud service providers (CSPs), depending on the type of service used. CSPs are accountable for securing infrastructure and third-party operations, while customers must safeguard their data, control user access, maintain software and device security, and prepare for incident response. The guidance advises selecting reputable, secure-by-default CSPs that offer clear shared responsibility model documentation and Infosec Registered Assessors Program (IRAP) assessments, and reviewing settings for backups, authentication, and logging. It complements the ACSC’s executive guidance for organisations with cyber risk management processes.

  13. 16/10/2025
    adoption

    Australian Signals Directorate adopted guidance on artificial intelligence and machine learning concerning supply chain risks and mitigation

On 16 October 2025, the Australian Signals Directorate (ASD) issued guidance on artificial intelligence (AI) and machine learning (ML), focusing on supply chain risks and mitigation. The guidance is addressed to organisations and personnel involved in the development or deployment of AI and ML systems and components. It highlights potential vulnerabilities across the AI/ML supply chain, covering AI data, ML models, AI software, infrastructure and hardware, and third-party services. Specific data risks include low-quality or biased datasets, data poisoning, and exposure of training data. Recommended mitigation measures include standardised data collection, thorough review and sanitisation, and data verification. Risks to ML models are also addressed, such as serialisation attacks, model poisoning, malware embedding, and evasion attacks. Mitigation strategies include using secure file formats, ensuring model explainability, maintaining reproducible builds, and applying adversarial training. The guidance further addresses software vulnerabilities in AI libraries and infrastructure, recommending continuous auditing, malware scanning, and secure network segmentation. It also emphasises careful assessment and contractual safeguards when engaging third-party providers.

  14. 15/10/2025
    outline

    Australian Signals Directorate published guidance on strengthening network infrastructure

    On 15 October 2025, the Australian Signals Directorate (ASD) published guidance aimed at strengthening the network infrastructure of medium-to-large organisations and government entities. The guidance provides advice for executive and technical staff on protecting internet-facing and internal network devices from unauthorised access, lateral movement, and data exfiltration. It complements existing ASD advice on securing edge devices by extending mitigations across core routing, switching, and intermediary network components, with the goal of reducing attack surfaces, enhancing resilience, and improving detection and response through a defence-in-depth approach. The guidance categorises network defence actions into critical, high, medium, and foundational priorities. Critical measures include patching internet-facing devices for critical vulnerabilities, implementing phishing-resistant multi-factor authentication, changing default credentials, disabling insecure protocols, securing management interfaces, and applying event logging best practices. High-priority actions include maintaining secure backups of device configurations, enforcing egress traffic rules, and implementing network segmentation to limit lateral movement. Medium-priority actions focus on deploying network detection and response solutions, enabling just-in-time privileged access, and applying network access controls. Foundational actions cover maintaining a network device register, monitoring configuration integrity, establishing baselines for device health and traffic, reviewing service and local accounts, securing new network devices, maintaining an incident response plan, and providing ongoing security awareness training.

  15. 13/10/2025
    outline

Australian Signals Directorate published Critical Infrastructure Fortify guidance for operators

    On 13 October 2025, the Australian Signals Directorate (ASD) released "Critical Infrastructure (CI) Fortify," a guidance document for CI operators. It applies to large organisations and government entities managing operational technology (OT) and essential services. The guidance urges operators to maintain updated OT inventories, identify vital systems, isolate them from external networks for up to three months, and rebuild them quickly using trusted backups. It responds to rising state-sponsored and criminal cyber threats, aiming to strengthen resilience and reduce service disruptions.

  16. 09/10/2025
    outline

    Office of the Australian Information Commissioner released Privacy guidance on Part 4A (Social Media Minimum Age) of the Online Safety Act 2021

    On 9 October 2025, the Office of the Australian Information Commissioner (OAIC) released the Privacy Guidance on Part 4A of the Online Safety Act 2021, addressing the Social Media Minimum Age (SMMA) scheme and its interaction with the Privacy Act 1988 and the Australian Privacy Principles. The guidance applies to providers of age-restricted social media platforms and third-party age assurance providers and sets out obligations under Section 63F of Part 4A, including purpose limitation, destruction of personal information once SMMA purposes are achieved, and restrictions on secondary use or disclosure, which require voluntary, informed, current, specific, and unambiguous consent. The guidance clarifies that information collected or generated for SMMA compliance, including biometric data, templates, documents, and artefacts such as binary “16+ yes/no” tokens, must be destroyed once used, and that destruction extends to caches and transient storage. It highlights obligations to adopt privacy by design, conduct Privacy Impact Assessments, minimise collection and retention, and implement transparent just-in-time notices under APP 5. The guidance outlines requirements for handling existing information, proportionality when using inference methods, safeguards against purpose padding, ring-fencing of outputs, and retention in narrowly defined circumstances such as audits, reviews, fraud prevention, and evidence of compliance.

  17. 08/10/2025
    public lawsuit

    Federal Court of Australia issued ruling against Australian Clinical Labs Limited with fine of AUD 5.8 million over data breach

On 8 October 2025, the Federal Court of Australia ordered Australian Clinical Labs Limited (ACL) to pay a civil penalty of AUD 5.8 million for contraventions of the Privacy Act 1988. The Court found that ACL failed to take reasonable steps to protect the personal and sensitive health information of more than 223,000 individuals held on computer systems acquired from Medlab Pathology in December 2021. The systems had cybersecurity deficiencies, including outdated antivirus software, weak authentication measures, and no file encryption. The Court also found that in February 2022, the Quantum Group executed a cyberattack that exfiltrated 86 gigabytes of data, including passport numbers, health information, and financial details, which was subsequently published on the dark web, yet ACL failed to conduct a reasonable and expeditious assessment within 30 days to determine whether an "eligible data breach" had occurred. Despite forming the view by 16 June 2022 that an eligible data breach had occurred, ACL delayed notifying the Commissioner until 10 July 2022, approximately 24 days later than was practicable. The penalty of AUD 5.8 million and a costs contribution of AUD 400,000 were ordered to be paid within 30 days.

  18. 07/10/2025
    investigation

    eSafety Commissioner issued removal notices to X and Meta over hosting illegal violent footage

On 7 October 2025, Australia’s eSafety Commissioner issued removal notices to X and Meta under the Online Safety Act 2021 for hosting violent footage classified as Refused Classification (RC) by the Australian Classification Board. The Commissioner stated that the content showed recent killings in the United States and cannot be legally hosted, shared, or accessed in Australia. eSafety confirmed that geo-blocking the footage would meet compliance requirements, and emphasised that the orders applied only to the violent footage, not to news coverage or political commentary.

  19. 06/10/2025
    ruling

    Upper Tribunal determined Clearview AI’s processing of UK residents’ personal data fell within scope of UK GDPR and confirmed ICO’s authority to issue penalties

On 6 October 2025, the Upper Tribunal (UT) ruled that Clearview AI’s processing of UK residents’ personal data fell within the scope of the UK General Data Protection Regulation (UK GDPR) and confirmed the authority of the Information Commissioner’s Office (ICO) to issue the monetary penalty and enforcement notices. The case arose after the ICO fined Clearview GBP 7.5 million and issued an Enforcement Notice in May 2022 for scraping images of UK residents from the internet and social media, uploading them to a global facial recognition database, and offering monitoring services commercially. Clearview appealed to the First-tier Tribunal (FTT), which in 2023 ruled that the ICO had acted outside its jurisdiction, holding that the processing did not constitute “relevant processing” under the UK GDPR. The ICO challenged that finding, renewing its application to the UT in October 2024. In its October 2025 ruling, the UT upheld three of the ICO’s four grounds of appeal, concluding that Clearview’s processing of personal information related to monitoring UK residents, that it was not excluded from the UK GDPR solely because services were provided to foreign law enforcement or government agencies, and that the FTT had applied the law incorrectly in finding the processing outside the material scope of Article 2(1)(a) UK GDPR. The UT confirmed that the ICO had jurisdiction to issue the Monetary Penalty Notice and Enforcement Notice, remitted the case to the FTT for substantive determination, and noted that Clearview may seek permission to appeal.

  20. 03/10/2025
    declaration

    Association of Information Access Commissioners of Australia and New Zealand adopted communiqué on access to environmental information in digital age

    On 3 October 2025, the Association of Information Access Commissioners (AIAC) of Australia and New Zealand adopted a communiqué, urging public sector leaders to champion access to government-held information, particularly environmental data, as a driver of innovation, sustainability, and public participation. It calls for a presumption in favour of disclosure, requiring agencies to justify refusals, and encourages the use of digital platforms and tools to facilitate access. Endorsed by all Australian states and territories and New Zealand, the recommendations aim to strengthen transparency, support data-driven solutions, and promote an open government culture.

  21. 01/10/2025
    outline

    National Artificial Intelligence Centre published AI policy guide and template

    On 1 October 2025, the National Artificial Intelligence Centre (NAIC) published its AI Policy Guide to support organisations in developing and maintaining internal policies promoting ethical and responsible use of artificial intelligence (AI). The template outlines principles, expectations, and rules for developing, deploying, and using AI systems across all personnel and technologies under an organisation’s control. AI systems are defined as technologies using data to make inferences and generate autonomous outputs such as predictions or decisions, excluding standard spreadsheet formulas and rule-based automations. The guide provides a general structure with sections on purpose, scope, policy statements, governance and compliance, and policy review. It encourages organisations to tailor their policies to their values, industry standards, and legal requirements.

  22. 26/09/2025
    public lawsuit

Federal Court of Australia ordered penalty for deepfake image-based abuse on online platforms

On 26 September 2025, the Federal Court ordered Anthony Rotondo to pay a civil penalty of AUD 343,500, plus costs, for posting intimate deepfake images of several high-profile Australian women. The respondent contravened section 75 of the Online Safety Act 2021 by posting moving visual images depicting six different women between 2022 and 2023. He also contravened sections 80 and 83(3), on two separate occasions, by failing to comply with a removal notice and a direction, respectively. The principal penalty was imposed under section 75, with two further pecuniary penalties of AUD 20,000 each for the contraventions of sections 80 and 83(3), totalling AUD 40,000, added to the AUD 343,500. Mr Rotondo must also pay the applicant's costs of and incidental to the proceeding. He admitted to posting the images on mrdeepfakes, a website that has since been shut down.

  23. 24/09/2025
    closure

    Digital platform regulators issued working paper on immersive technologies including virtual reality and augmented reality

On 24 September 2025, the Digital Platform Regulators, comprising the Competition and Consumer Commission, the Communications and Media Authority, the eSafety Commissioner, and the Office of the Australian Information Commissioner, issued a working paper on immersive technologies, including virtual reality, augmented reality, mixed reality and haptic technologies. The paper examines risks to privacy, consumer protection, competition, media and online safety. It notes opportunities in gaming, education, healthcare and e-commerce, but highlights harms including scams, gambling, psychological impacts, and data misuse, especially for children and vulnerable groups. It also highlights that convergence with generative artificial intelligence may increase risks. The Digital Platform Regulators emphasise that existing consumer, competition, privacy, online safety and media laws apply, and will continue monitoring developments, applying frameworks, and engaging with government to ensure a safe, fair, trusted and innovative digital economy.

  24. 23/09/2025
    public lawsuit

    Competition and Consumer Commission filed lawsuit against JustAnswer over alleged misleading and deceptive conduct in online advice service

On 23 September 2025, the Australian Competition and Consumer Commission (ACCC) filed a lawsuit in the Federal Court of Australia against JustAnswer, alleging false, misleading, and deceptive conduct under the Australian Consumer Law. The lawsuit concerns JustAnswer’s online advice service offered to Australian consumers in areas including medicine, law, accounting, and employment. The ACCC claims JustAnswer misrepresented the total price by promoting a one-off AUD 2 joining fee while concealing higher monthly subscription charges, and falsely suggested affiliation with the Fair Work Ombudsman, other Ombudsman offices, and government bodies. The conduct is alleged to contravene provisions on misleading conduct, false claims of service sponsorship and false or misleading price representations under the Consumer Law. The ACCC highlighted the loss consumers suffered through unexpected subscription fees and through being misled into believing the service was government-affiliated.

  25. 19/09/2025
    adoption

    Global Privacy Assembly adopted Resolution on collection, use and disclosure of personal data to pre-train, train and fine-tune AI models

    On 19 September 2025, the Global Privacy Assembly adopted the Resolution on the collection, use and disclosure of personal data to pre-train, train and fine-tune Artificial Intelligence (AI) models. The resolution acknowledges concerns regarding the ethical and legal implications of artificial intelligence (AI) technologies, particularly generative AI, and their impact on fundamental rights such as privacy and data protection. It recognises that the development of AI frequently involves the large-scale collection and use of personal data, including special categories of data. The resolution reaffirms that existing data protection and privacy principles and laws apply to AI technologies across their lifecycle, from data sourcing to deployment and monitoring. It highlights eight data protection principles for the lawful use of personal data in AI training. These principles include a lawful and fair basis for processing, ensuring transparency and assessing data subject expectations, and adhering to purpose specification and use limitation. Data minimisation is emphasised, along with transparency in data practices, including clear information for data subjects on collection, use, and disclosure. The resolution also addresses accuracy, requiring developers to take reasonable steps for high-quality datasets and testing, and data security, necessitating appropriate technical and organisational measures to protect personal data and prevent its extraction from models. Accountability and privacy by design are promoted, advising impact assessments and governance. Finally, the resolution underscores the rights of data subjects, enabling individuals to access, delete, and object to the use of their personal data. The Assembly resolved to promote these principles, coordinate enforcement efforts on generative AI, and share ongoing developments among its members.

  26. 18/09/2025
    investigation

    Information Commissioner found Kmart Australia Limited’s use of facial recognition technology non-compliant with Privacy Act

    On 18 September 2025, the Australian Information Commissioner issued a ruling in its investigation into Kmart Australia Limited, confirming that the company had breached the Privacy Act through its use of facial recognition technology (FRT) in 28 retail stores between 22 June 2020 and 15 July 2022. The Information Commissioner found that Kmart unlawfully collected sensitive biometric information without consent, failed to notify customers, and did not maintain a clear and up-to-date privacy policy. Reliance on the “permitted general situation” in Section 16A, item 2 was rejected on the grounds that indiscriminate biometric collection was disproportionate and that less intrusive alternatives were available. Declarations under Section 52(1A) of the Privacy Act required Kmart not to repeat the conduct, to publish an apology and explanatory statement in stores and online for 30 days (with web access for 12 months), to retain all FRT data for 12 months before destruction, and to confirm compliance in writing.

  27. 18/09/2025
    order

    Australian Securities and Investments Commission Corporations (Stablecoin Distribution Exemption) Instrument 2025/631 entered into force

On 18 September 2025, the Australian Securities and Investments Commission (ASIC) Corporations (Stablecoin Distribution Exemption) Instrument 2025/631 entered into force following its registration on the Federal Register of Legislation. The instrument exempts distributors who are not issuers of certain stablecoins, including AUDM issued by Catena Digital Pty Ltd (ACN 669 901 302), from holding an Australian market licence if they operate a financial market solely because the stablecoin is a financial product. It also exempts them from holding an Australian clearing and settlement facility licence if they operate such a facility only because the stablecoin is a financial product. Additionally, they do not need an Australian financial services licence for activities such as providing general advice, dealing in (other than issuing), making a market for, or providing custodial or depository services related to the stablecoin. These exemptions are conditional: distributors must take reasonable steps to provide the most current Product Disclosure Statement for the stablecoin to retail clients. The instrument will be applicable until 1 June 2028.

  28. 17/09/2025
    public lawsuit

    Australian Information Commissioner and Australian Clinical Labs Limited filed statement of agreed facts in lawsuit concerning data breach

    On 17 September 2025, the Australian Information Commissioner and Australian Clinical Labs Limited (ACL) lodged a statement of agreed facts in the Federal Court of Australia, confirming agreement on the company’s contraventions of the Privacy Act 1988. The agreement concerns ACL’s handling of a February 2022 ransomware attack on Medlab Pathology, which resulted in the exfiltration of 86 gigabytes of sensitive personal, health, and financial information affecting more than 223,000 individuals. ACL admitted that it failed to take reasonable steps to secure the data in breach of Australian Privacy Principle 11.1(b), failed to carry out a timely and reasonable assessment of the incident under section 26WH(2), and delayed notification of the Commissioner under section 26WK(2). The parties agreed that these acts amounted to serious interferences with privacy under section 13G of the Act, and the Federal Court will determine the pecuniary penalties to be imposed.

  29. 17/09/2025
    order

    Australian Securities and Investments Commission adopted Corporations (Stablecoin Distribution Exemption) Instrument 2025/631

    On 17 September 2025, the Australian Securities and Investments Commission (ASIC) adopted the ASIC Corporations (Stablecoin Distribution Exemption) Instrument 2025/631 under subsections 791C(7), 820C(7) and 926A(2) of the Corporations Act 2001. The instrument exempts distributors who are not issuers of certain stablecoins, including AUDM issued by Catena Digital Pty Ltd (ACN 669 901 302), from holding an Australian market licence if they operate a financial market solely because the stablecoin is a financial product. It also exempts them from holding an Australian clearing and settlement facility licence if they operate such a facility only because the stablecoin is a financial product. Additionally, they do not need an Australian financial services licence for activities such as providing general advice, dealing in (other than issuing), making a market for, or providing custodial or depository services related to the stablecoin. These exemptions are conditional: distributors must take reasonable steps to provide the most current Product Disclosure Statement for the stablecoin to retail clients. The instrument takes effect the day after registration on the Federal Register of Legislation and will be repealed on 1 June 2028.

  30. 17/09/2025
    adoption

    Twenty data protection authorities adopted joint statement on building trustworthy data governance frameworks to encourage development of innovative and privacy-protecting AI

    On 17 September 2025, twenty data protection authorities, including those from Australia, Belgium, Canada, France, Germany, Hong Kong, Ireland, Italy, Korea, the Netherlands, New Zealand, Luxembourg, Spain, and the United Kingdom, adopted the joint statement on building trustworthy data governance frameworks to encourage development of innovative and privacy-protecting Artificial Intelligence (AI). The statement recognises the opportunities and risks of AI, including discrimination, misinformation, and hallucination from inappropriate data use, and stresses embedding privacy by design, strong governance, and transparency. The statement commits to clarifying lawful grounds for AI training data and exchanging information on proportionate safety measures. It also focuses on monitoring technical and societal impacts with contributions from non-governmental organisations, public authorities, academia, and businesses, and reducing legal uncertainties through regulatory sandboxes and best practice sharing.

  31. 16/09/2025
    order

    Attorney General released National Identity Proofing Guidelines 2025

    On 16 September 2025, the Attorney General of Australia released the National Identity Proofing Guidelines 2025. The guidelines apply to government agencies issuing or verifying identity credentials and to private organisations managing identity-related risks. The guidelines set standards for verifying identity, using biometrically anchored core credentials, levels of assurance, and alternative proofing methods such as trusted referees. They link physical and digital identity processes, emphasise privacy and secure data handling, and provide a risk management framework. The guidelines are voluntary and will be reviewed every two years.

  32. 16/09/2025
    outline

    eSafety Commissioner adopted social media minimum age regulatory guidance under Online Safety Act

    On 16 September 2025, the eSafety Commissioner adopted the social media minimum age regulatory guidance under the Online Safety Act 2021. The guidance applies to providers of age-restricted social media platforms and requires them to prevent users under 16 years old from holding accounts. Providers must detect and remove underage accounts, stop immediate re-registration, and provide clear information to users about account removal, data download, support services, and appeal options. The guidance is principles-based and requires continuous improvement of age assurance measures. Providers must comply with privacy obligations under the Privacy Act 1988 and maintain records to show adherence.

  33. 15/09/2025
    investigation

    eSafety Commissioner announced Roblox committed to implementing safety measures to comply with Online Safety Act’s codes and standards

    On 15 September 2025, the eSafety Commissioner announced that the online gaming platform Roblox committed to implementing a suite of safety measures in Australia by the end of 2025 following engagement with the eSafety Commissioner on compliance with the mandatory and enforceable industry codes and standards established under the Online Safety Act. The measures include making accounts for users aged under 16 private by default, disabling direct chat and in-game “experience chat” until age estimation is completed, preventing adult-child communication without parental consent, prohibiting voice chat between adults and users aged 13 to 15 in addition to existing restrictions for under 13s, and expanding parental controls to allow parents to disable chat for 13- to 15-year-olds. Roblox also confirmed the planned expansion of age estimation technology across communication features by the end of 2025 to support delivery of these commitments. These undertakings were made after regulatory concerns were raised about risks of grooming, sexual exploitation, and other harms, with eSafety retaining enforcement powers under the Act to impose civil penalties of up to AUD 49.5 million in cases of non-compliance.

  34. 15/09/2025
    consultation closed

    Productivity Commission closed consultation on interim report harnessing data and digital technology including proposal for fair dealing exception for text and data mining for AI training

    On 15 September 2025, the Productivity Commission (PC) closed the consultation on its interim report on Harnessing Data and Digital Technology. The consultation is part of the Government’s five-pillar productivity growth agenda under the Productivity Commission Act 1998 and focuses on reforms to improve productivity through digital and data use. The interim report presents draft recommendations in four areas. These include enabling artificial intelligence (AI) productivity potential through outcomes-based regulation, creating new regulatory pathways to expand data access for individuals and businesses, introducing an alternative compliance option under the Privacy Act to support outcomes-based privacy protection, and mandating digital financial reporting through amendments to the Corporations Act. The report also calls for consultation on reforms to ensure Australia’s copyright regime supports AI development, including a proposal to introduce a fair dealing exception for text and data mining for AI training. The consultation aimed to gather information on these proposals to inform the final report, which will be delivered to the Government within 12 months of the Treasurer’s request.

  35. 09/09/2025
    order

    eSafety Commissioner registered App Distribution Services Online Safety Code (Class 1C and Class 2 Material)

    On 9 September 2025, the eSafety Commissioner registered the App Distribution Services Online Safety Code (Class 1C and Class 2 Material) under the Online Safety Act 2021. The Code applies to providers that enable Australian end-users to download third-party apps, while excluding first-party apps, internal-use apps, and services limited to connectivity or billing. It addresses risks associated with apps that may expose users, particularly children, to harmful material such as pornography, self-harm content, or simulated gambling. App distribution service providers act as intermediaries, hosting and distributing apps without direct control over content once installed. The Code requires them to implement proportionate safeguards by working with third-party app developers and by providing end-users with information and tools to support safer use. Compliance obligations include restricting children’s access to adult apps through age assurance or equivalent measures unless risks are assessed as low, and ensuring contractual arrangements with third-party providers of high-impact and simulated gambling apps so that access controls are applied. Providers must review third-party apps before release, take account of age and content ratings, and re-assess ratings when appropriate. They are also expected to provide safety tools such as parental controls, curated child-friendly sections, or warnings, and to continue improving these measures. In addition, providers must make safety resources and reporting channels accessible to users, with unresolved complaints escalated to the eSafety Commissioner. They are required to allow user feedback on app content ratings but are not obliged to alter ratings solely on that basis. 
Overall, the Code sets out obligations for app distribution service providers to reduce risks of harmful material reaching children, to promote accountability among app developers, and to ensure Australian users have access to safety information, tools, and complaints mechanisms.

  36. 09/09/2025
    order

    eSafety Commissioner registered Equipment Online Safety Code (Class 1C and Class 2 Material)

    On 9 September 2025, the eSafety Commissioner registered the Equipment Online Safety Code (Class 1C and Class 2 Material) under the Online Safety Act 2021. The Code applies to hardware intended for Australian end-users and establishes obligations for manufacturers, suppliers, maintenance and installation providers, and operating system providers. It covers devices connected to social media, electronic services, designated internet services, and internet carriage services. The framework defines categories of devices, including interactive, secondary, non-interactive, gaming, and other interactive devices, reflecting different levels of risk in relation to exposure to harmful online material. Under the Code, equipment providers are responsible for manufacturing, importing, distributing, installing, and maintaining devices, while operating system providers control the software that determines user features and settings. Both groups are required to support online safety, even though they are not content providers. The Code introduces compliance obligations such as enabling child and restricted accounts with default protections against pornography and unsolicited contact, which can only be adjusted through linked adult accounts. Operating system providers must also supply tools for blocking websites, filtering apps, restricting contacts, and detecting nudity, while ensuring protections extend across their own services. Manufacturers of devices such as gaming consoles and smart TVs must implement parental controls and provide clear guidance on safe use. Suppliers, maintenance providers, and installers must give users plain language information about available safety settings at the point of sale or on request. Operating system providers are required to improve protections over time, invest in research, and refine technical solutions. 
All providers must employ trust and safety staff, maintain simple complaints mechanisms, and inform users of their right to escalate unresolved complaints to the eSafety Commissioner. They must also cooperate with investigations to ensure that the measures remain effective in reducing risks to Australian children online.

  37. 09/09/2025
    order

    eSafety Commissioner registered Social Media Services (Messaging Features) Online Safety Code (Class 1C and Class 2 Material)

    On 9 September 2025, the eSafety Commissioner registered the Social Media Services (Messaging Features) Online Safety Code (Class 1C and Class 2 Material). The Code applies to private messaging components of social media platforms, focusing on Class 1C and Class 2 harmful material. Unlike other schedules covering entire platforms, it applies solely to instant messaging features that enable private communication between users, excluding public posting functions such as comments or community posts, which are addressed under the main social media code. The framework establishes compliance requirements for all messaging features, reflecting the potential risks associated with private messaging. Obligations include prohibitions against sharing online pornography with Australian children and maintaining clear, accessible terms and conditions. Providers are expected to take proportionate action when breaches occur, while retaining flexibility to respond appropriately to specific circumstances. Safety infrastructure requirements include integrated reporting mechanisms that protect reporter anonymity and provide clear guidance. Providers must maintain trained personnel to handle reports and conduct annual reviews of system effectiveness. Technical measures include user control tools such as message blocking, settings to prevent unwanted messages, group chat exit options, and, for child accounts, restrictive default privacy settings. Additional protections may involve automated nudity detection and restrictions preventing unknown adults from contacting children directly. Providers are also expected to regularly review and improve safety tools through research, collaboration with safety organisations, industry engagement, and refinement of algorithms. Transparency requirements include publishing information on available safety tools, eSafety’s role, complaint processes, and guidance on safe platform use.

  38. 09/09/2025
    order

    eSafety Commissioner registered Social Media Services (Core Features) Online Safety Code (Class 1C and Class 2 Material)

    On 9 September 2025, the eSafety Commissioner registered the Social Media Services (Core Features) Online Safety Code (Class 1C and Class 2 Material). The Code, outlined in Schedule 4 of Australia’s Online Safety Code, sets compliance requirements for social media platforms serving Australian users, excluding messaging features. The framework establishes a three-tier risk assessment system based on the likelihood that Australian children will encounter harmful content. Platforms must evaluate their risk profile for online pornography, self-harm material, high-impact violence material, and simulated gambling material, and classify services as Tier 1 (high risk), Tier 2 (moderate risk), or Tier 3 (low risk) for each content type. Certain enterprise-focused platforms with limited social networking capabilities are automatically classified as Tier 3 and do not require assessment. Risk assessments must take into account the platform’s terms of use, user demographics, functionality, scale of Australian users, and safety design features. Compliance measures vary according to whether harmful content is permitted and the assigned risk tier. Platforms allowing harmful content must implement age assurance measures, safety tools such as content filtering and blocking, and provide clear guidance to users on available protections. Platforms that prohibit harmful content but are classified as Tier 1 or 2 must deploy detection and removal systems, including machine learning or AI-based solutions. All applicable services are required to maintain clear terms and conditions, operate trust and safety functions, provide reporting and complaint mechanisms, and conduct annual system reviews. Platforms must also engage with safety organisations, publish information on eSafety’s role, and maintain dedicated online safety resources. The Code includes specific provisions for AI companion chatbot features, requiring additional risk assessments for generative AI restricted categories. 
These features face strict requirements, including mandatory age assurance for Tier 1 profiles and either age controls or content prevention systems for Tier 2 profiles. Enforcement includes mandatory reporting to eSafety, with annual reports for services allowing harmful content and on-demand reporting for others. The Code provides a comprehensive framework aimed at protecting Australian children from harmful online material while balancing platform functionality with safety obligations.
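
    The three-tier assessment described above can be sketched as a simple classifier. The content categories and tier labels come from the Code summary; the likelihood scores, thresholds, and function names below are purely illustrative assumptions, not the Code's actual assessment methodology.

    ```python
    # Illustrative sketch of the Code's three-tier, per-content-type risk
    # classification. Tier labels and content categories follow the summary
    # above; the numeric scores and cut-offs are hypothetical.
    from enum import Enum

    class Tier(Enum):
        TIER_1 = "high risk"
        TIER_2 = "moderate risk"
        TIER_3 = "low risk"

    CONTENT_TYPES = [
        "online pornography",
        "self-harm material",
        "high-impact violence material",
        "simulated gambling material",
    ]

    def classify(likelihood_score: float) -> Tier:
        """Map a hypothetical likelihood-of-child-exposure score (0..1) to a tier."""
        if likelihood_score >= 0.66:
            return Tier.TIER_1
        if likelihood_score >= 0.33:
            return Tier.TIER_2
        return Tier.TIER_3

    def assess_service(scores, enterprise_focused=False):
        # Enterprise-focused platforms with limited social networking are
        # automatically Tier 3 under the Code and require no assessment.
        if enterprise_focused:
            return {c: Tier.TIER_3 for c in CONTENT_TYPES}
        return {c: classify(scores.get(c, 0.0)) for c in CONTENT_TYPES}
    ```

    A service scoring high for pornography but low elsewhere would, under this sketch, carry Tier 1 obligations for that content type only, mirroring the per-content-type structure the Code describes.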

  39. 09/09/2025
    order

    eSafety Commissioner registered Relevant Electronic Services Online Safety Code (Class 1C and Class 2 Material)

    On 9 September 2025, the eSafety Commissioner registered the Relevant Electronic Services Online Safety Code (Class 1C and Class 2 Material) under the Online Safety Act 2021. The Code establishes a regulatory framework for communication and gaming platforms serving Australian users, setting out obligations to manage harmful content and protect children. Services are categorised into types such as closed communication services, general communication services, dating services, gaming services with varying communication features, enterprise services, and telephony services, with compliance obligations tailored to their functions and risk profiles. A three-tier risk assessment system classifies services as high, moderate, or low risk based on the likelihood that Australian children will encounter harmful content, including online pornography, self-harm material, and high-impact violence material. Some service categories are pre-assessed and do not require formal risk assessments. The Code includes specific provisions for AI companion chatbot features, requiring separate risk assessments for restricted content categories. Services designed solely to generate harmful content are automatically classified as high risk. Universal compliance measures include mandatory age assurance for services primarily sharing pornography or self-harm material, and age verification for gaming services with R18+ content or simulated gambling. Closed communication services face extensive obligations, including prohibitions on criminal activities such as non-consensual intimate image sharing, grooming, and sexual extortion. They must maintain reporting mechanisms, safety tools, annual system reviews, and engagement with safety organisations. Other communication services have similar requirements with a focus on preventing the sharing of harmful content with children, including sophisticated safety features such as message blocking, content filtering, and restrictive default settings. 
Dating services are required to implement detection systems, reporting mechanisms, age assurance or notification measures, and user tools to limit unsolicited content, alongside continuous improvement programs. Gaming services with communication functions must maintain content moderation, user safety tools, and reporting systems, with regulatory responsibility focused on the entities controlling the end-user versions of services. Enforcement mechanisms across all categories include mandatory reporting to eSafety, timely responses to regulator communications, referral of unresolved complaints, regular reviews of system effectiveness, and maintaining qualified trust and safety personnel. The framework balances flexibility in implementation with consistent protection standards, acknowledging that technical feasibility and operational capacity vary across service types.

  40. 09/09/2025
    order

    eSafety Commissioner registered Designated Internet Services Online Safety Code (Class 1C and Class 2 Material)

    On 9 September 2025, the eSafety Commissioner registered the Designated Internet Services Online Safety Code (Class 1C and Class 2 Material) under the Online Safety Act 2021. The Code outlines a risk-based framework for designated internet services (DIS) to manage harmful content, with a focus on protecting children. DIS are categorised into several types, including Enterprise DIS, General Purpose DIS, Classified DIS, End-User Managed Hosting Services, High Impact Class 2 DIS, High Impact Generative AI DIS, and Model Distribution Platforms, with compliance obligations scaled according to their functions and assessed risk levels. A three-tier system (Tier 1–high risk, Tier 2–moderate risk, Tier 3–low risk) evaluates the likelihood of Australian children encountering harmful content such as online pornography, self-harm material, violence, and simulated gambling. Risk assessments consider service functionality, user demographics, content acquisition methods, terms of use, Australian user numbers, and safety features, and must be documented by qualified personnel. High-risk services have more extensive requirements, including age assurance, detection systems, reporting mechanisms, safety tools, and information sharing with eSafety, while lower-risk services maintain basic safety standards. The Code includes provisions for emerging technologies, including generative AI services, which must prevent restricted content generation and implement age or content controls. End-user managed hosting services are required to address risks such as non-consensual image sharing, grooming, and sexual extortion. Model distribution platforms must ensure that their customers comply with Australian content regulations. Enforcement measures across all categories include mandatory reporting, engagement with safety organisations, user education, and review of system effectiveness. 
The framework aims to balance child protection with service functionality, providing flexibility in implementation while maintaining consistent safety standards.

  41. 08/09/2025
    investigation

    eSafety Commissioner announced investigation into technology company responsible for AI generated nudify services used to create deepfake pornography

    On 8 September 2025, the eSafety Commissioner opened an investigation into a technology company responsible for Artificial Intelligence (AI) generated nudify services used to create deepfake pornography. eSafety stated that it left the company unnamed to avoid publicising it. The company runs AI-generated nude image platforms on which users can upload photos of real people, including minors; the services have been used to create deepfake sexual images of Australian schoolchildren and attract about 100,000 Australian users each month. It was highlighted that the company failed to prevent the creation of child sexual abuse material and may face fines of up to AUD 49.5 million.

  42. 04/09/2025
    outline

    Office of the eSafety Commissioner released a self-assessment tool for platforms to determine whether their services qualify as age-restricted

    On 4 September 2025, the Office of the eSafety Commissioner released summaries of the consultation process alongside a self-assessment tool to assist online services in determining whether they are age-restricted platforms under Section 63C of the Online Safety Act 2021 and the Online Safety (Age-Restricted Social Media Platforms) Rules 2025. The guidance sets out a seven-step process covering electronic service status, accessibility to Australian end-users, posting and interaction functions, and the purpose of enabling online social interaction, followed by assessment of exclusions under the Rules. The exclusions cover services with a sole or primary purpose of messaging, online gaming, professional networking, education, or health, or with a significant purpose of facilitating communication in education or health contexts. eSafety noted it has engaged with platforms including Google, Meta, Snap, and TikTok, outlining preparatory steps such as detecting and deactivating under-16 accounts, providing account download and deletion options, implementing layered age assurance measures, preventing circumvention through account modifications, and ensuring reporting and appeals mechanisms are effective. These measures, alongside the Australian Government’s Age Assurance Technology Trial final report, will inform eSafety’s regulatory guidance and compliance activities before the obligations commence on 10 December 2025.
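
    The self-assessment flow above can be expressed as a simple decision function. The definitional tests and the exclusion categories follow the summary; the data model, field names, and function are illustrative assumptions, not eSafety's actual tool.

    ```python
    # Hypothetical sketch of the age-restricted-platform self-assessment
    # under s 63C of the Online Safety Act 2021 and the 2025 Rules.
    # Structure follows the summary above; this is not eSafety's tool.
    from dataclasses import dataclass
    from typing import Optional

    # Exclusion categories named in the 2025 Rules, per the summary.
    EXCLUDED_PURPOSES = {
        "messaging", "online gaming", "professional networking",
        "education", "health",
    }

    @dataclass
    class Service:
        is_electronic_service: bool
        accessible_to_australians: bool
        allows_posting_and_interaction: bool
        purpose_is_social_interaction: bool
        sole_or_primary_purpose: Optional[str] = None  # e.g. "messaging"

    def is_age_restricted(svc: Service) -> bool:
        # Definitional tests: electronic service, Australian accessibility,
        # posting/interaction functions, social-interaction purpose.
        if not (svc.is_electronic_service
                and svc.accessible_to_australians
                and svc.allows_posting_and_interaction
                and svc.purpose_is_social_interaction):
            return False
        # Final step: exclusions under the Rules.
        return svc.sole_or_primary_purpose not in EXCLUDED_PURPOSES
    ```

    Under this sketch, a general social platform passes every test and is age-restricted, while a service whose sole or primary purpose is messaging falls within the exclusions.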

  43. 03/09/2025
    outline

    Cybersecurity and Infrastructure Security Agency and 17 international cybersecurity organisations adopted guidance on Software Bill of Materials for cybersecurity

    On 3 September 2025, the United States' Cybersecurity and Infrastructure Security Agency and 17 international cybersecurity organisations, including the Australian Signals Directorate’s Australian Cyber Security Centre, the Canadian Centre for Cyber Security, Japan’s National Cybersecurity Office, New Zealand’s National Cyber Security Centre, and the Korea Internet and Security Agency, adopted the guidance on Software Bill of Materials (SBOM) for cybersecurity. The guidance applies to software producers and operators across all sectors, including organisations that develop, acquire, or deploy software and manage software supply chains. It defines SBOMs as formal records of software components and supply chain relationships, and requires software producers to generate machine-processable SBOMs and organisations to integrate SBOM generation, analysis, and sharing into security processes for vulnerability management and faster threat response through automated tools.
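
    A machine-processable SBOM enables the kind of automated vulnerability triage the guidance describes. The sketch below consumes a minimal SBOM loosely modelled on the CycloneDX JSON layout; the sample components and the "known vulnerable" feed are hypothetical, and real tooling would query an actual vulnerability database.

    ```python
    # Minimal sketch of consuming a machine-processable SBOM for vulnerability
    # triage. The JSON shape loosely follows CycloneDX; data is hypothetical.
    import json

    sbom_json = """
    {
      "bomFormat": "CycloneDX",
      "components": [
        {"name": "libexample", "version": "1.2.3"},
        {"name": "parserlib",  "version": "0.9.1"}
      ]
    }
    """

    # Hypothetical feed of (name, version) pairs flagged by a vulnerability
    # database, standing in for an automated advisory lookup.
    KNOWN_VULNERABLE = {("parserlib", "0.9.1")}

    def vulnerable_components(sbom_text):
        """Return 'name==version' strings for components matching the feed."""
        sbom = json.loads(sbom_text)
        return [
            f'{c["name"]}=={c["version"]}'
            for c in sbom.get("components", [])
            if (c["name"], c["version"]) in KNOWN_VULNERABLE
        ]

    print(vulnerable_components(sbom_json))
    ```

    Because the SBOM is structured data rather than prose, this check can run automatically on every build or deployment, which is the faster-threat-response benefit the guidance points to.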

  44. 01/09/2025
    inquiry

    Department of Infrastructure, Transport, Regional Development, Communications, Sport and Arts published report on age assurance technology trial evaluating online child protection technologies

    On 1 September 2025, the Department of Infrastructure, Transport, Regional Development, Communications, Sport and Arts published the report on the age assurance technology trial evaluating online child protection technologies. The trial assessed 48 providers across six categories of age verification systems applying to digital platforms and content providers. The findings showed age assurance can be implemented effectively in Australia without major technological barriers, though no single solution fits all contexts. Systems demonstrated robust privacy practices and performed consistently across demographic groups; however, some providers are unnecessarily retaining data, anticipating future regulatory needs. The report provides technical feasibility evidence for policymakers but makes no implementation recommendations.

  45. 26/08/2025
    investigation

    Information Commissioner opened investigation into Kmart Australia Limited’s use of facial recognition technology

    On 26 August 2025, the Australian Information Commissioner initiated an investigation into Kmart Australia Limited regarding alleged breaches of the Privacy Act. The investigation focuses on Kmart’s deployment of facial recognition technology (FRT) across 28 retail stores between 22 June 2020 and 15 July 2022. The scope of the investigation includes potential non-compliance with Australian Privacy Principles (APPs) 3.3, 3.4, 5.1, 5.2, 1.3, and 1.4. It examines whether valid consent was obtained and whether Kmart’s reliance on the “permitted general situation” under Section 16A, item 2, was legally justified. The Information Commissioner will assess the proportionality, transparency, and governance of the data collection practices, as well as the availability of less intrusive alternatives.

  46. 25/08/2025
    investigation

    eSafety Commissioner issued official notice to OmeTV app over alleged non-compliance with Relevant Electronic Services Industry Standard

    On 25 August 2025, the eSafety Commissioner issued an official notice to the OmeTV app, operated by “Bad Kitty’s Dad”, over alleged non-compliance with the Online Safety (Relevant Electronic Services-Class 1A and Class 1B Material) Industry Standard. The app allegedly failed to implement the required safety features and allowed adults to engage in randomised video chats with children without sufficient protection, raising concerns about grooming and sexual predation. The eSafety Commissioner also notified Apple and Google, which host the app, of the enforcement action and their obligations under the App Store Code.

  47. 22/08/2025
    investigation

    Transaction Reports and Analysis Centre ordered Binance to appoint external auditor over concerns of non-compliance with anti-money laundering and counter-terrorism financing regulations

    On 22 August 2025, the Australian Transaction Reports and Analysis Centre (AUSTRAC) directed Investbybit, which operates Binance in Australia and is a registered digital currency exchange provider, to appoint an external auditor within 28 days after identifying serious concerns with its anti-money laundering and counter-terrorism financing (AML/CTF) controls. AUSTRAC’s National Risk Assessment 2024 highlighted growing vulnerabilities of digital currencies to criminal abuse. The investigation followed regulatory engagement across the priority sector and noted concerns that Binance’s independent review was limited in light of its size, scope, business offerings and risks. AUSTRAC also noted high staff turnover, insufficient local resourcing and a lack of senior management oversight, which raised doubts about the adequacy of the exchange’s AML/CTF governance. AUSTRAC stressed that global operators must adapt systems and processes to Australian requirements, apply robust customer identification and due diligence, implement effective transaction monitoring, ensure independent reviews are meaningful, and maintain capacity and risk controls proportionate to their scale and transaction volumes. AUSTRAC reiterated that all digital currency operators must comply with Australian law and reporting obligations to reduce exposure to scams, cybercrime and terrorism financing.

  48. 22/08/2025
    outline

    Australian Signals Directorate and Department of Industry, Science and Resources released guidelines on managing cryptographic keys and secrets

    On 22 August 2025, the Australian Signals Directorate and the Department of Industry, Science and Resources released guidelines on managing cryptographic keys and secrets. The guidelines are addressed to organisational security personnel, including architects and IT security managers, within cloud, on-premises, or hybrid environments. The guidelines consider threats to asymmetric keys, symmetric keys, digital certificates, and secrets. The guidelines recommend organisations adopt a Key Management Plan (KMP) to govern the entire life cycle of cryptographic material, including governance, generation, registration, storage, access, distribution, rollover, and destruction of cryptographic material. It also recommends generating keys and secrets using cryptographically secure methods with sufficient entropy, and storing them securely, with a preference for Hardware Security Modules (HSMs). Secure distribution methods and verifiable exchange mechanisms are advised to prevent interception, as is limiting access. It also details the concept of chains of trust for digital certificates, explaining the roles of root and intermediate Certificate Authorities and the need for validation, expiration checks, and revocation monitoring. Positions of trust with access to sensitive material require additional security measures, including the principle of least privilege and separation of duties. Finally, the guidance stipulates that oversight through auditing and monitoring is essential to detect and respond to unauthorised access or misuse, without logging sensitive key material itself.
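
    Two of the recommendations above, generating keys from a cryptographically secure entropy source and auditing without logging key material itself, can be illustrated with Python's standard library. This is a software-only sketch; the guidelines prefer generation and storage inside an HSM, and the key length and fingerprint scheme here are illustrative choices, not requirements from the guidelines.

    ```python
    # Sketch: CSPRNG-based key generation plus a loggable fingerprint,
    # so monitoring never records the sensitive key material itself.
    # Illustrative only; the guidelines prefer HSM-backed generation/storage.
    import hashlib
    import secrets

    def generate_symmetric_key(bits: int = 256) -> bytes:
        """Generate a random symmetric key from the OS's secure entropy source."""
        return secrets.token_bytes(bits // 8)

    def key_fingerprint(key: bytes) -> str:
        """Derive a short identifier safe to write to audit logs."""
        return hashlib.sha256(key).hexdigest()[:16]

    key = generate_symmetric_key()
    # Audit trail records only the fingerprint, never the key bytes.
    print(f"generated {len(key) * 8}-bit key, fingerprint {key_fingerprint(key)}")
    ```

    The same fingerprint idea supports the life-cycle steps in a Key Management Plan: registration, rollover, and destruction events can all be logged against the fingerprint without ever exposing the key.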

  49. 18/08/2025
    investigation

    Competition and Consumer Commission accepted undertaking from Google in relation to concerns over alleged abuse of market power regarding pre-installed search engines on Android devices

    On 18 August 2025, the Australian Competition and Consumer Commission (ACCC) accepted an undertaking from Google, alongside a AUD 55 million penalty, as part of its investigation into Google’s search services in Australia. Under the signed undertaking, Google’s contracts will allow device manufacturers and mobile network operators to set any search engine as the default. The contractual arrangements will not restrict Android device makers or Australian operators from promoting or using third-party search services, or from changing default search settings after factory installation, except for Google Chrome. In addition, Google agreed that device manufacturers will be able to license Google Mobile Services separately from Google Search and Chrome. The ACCC found that provisions in Google and Google Asia Pacific’s contracts, when combined with other agreements, were likely to limit the distribution of rival services and substantially reduce competition in the Australian search engine market. The ACCC’s concerns centred on Mobile Application Distribution Agreements (MADAs) with device manufacturers and Revenue Share Agreements (RSAs) with mobile operators, which had promoted the pre-installation of Google services on Android devices since at least 2017. While Google did not accept all of these findings, it cooperated with the ACCC by offering a section 87B undertaking and committed to introducing a competition law compliance programme.

  50. 14/08/2025
    outline

    Cybersecurity and Infrastructure Security Agency published guidance on foundations for operational technology cybersecurity focusing on asset inventory guidance for owners and operators

    On 14 August 2025, the Cybersecurity and Infrastructure Security Agency (CISA) published guidance alongside international partners, including the National Security Agency and cybersecurity agencies from five allied nations, pertaining to operational technology (OT) cybersecurity. The guidance applies to OT owners and operators across all critical infrastructure sectors, particularly energy, water treatment, oil and gas, and electricity organisations. It directs organisations to implement a systematic five-step framework for developing asset inventories and taxonomies. This includes creating regularly updated lists of OT systems with 14 high-priority attributes, including communication protocols, asset criticality, and IP addresses. It also advises organisations to develop classification systems based on the ISA/IEC 62443 standards using the Zones and Conduits methodology. They should also establish life cycle management policies and cross-reference inventories with vulnerability databases such as CISA's Known Exploited Vulnerabilities Catalog. The guidance further calls for real-time monitoring systems and includes sector-specific taxonomies developed through industry collaboration.
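    The inventory-and-taxonomy approach described above can be sketched as a simple data structure. This is a hypothetical illustration, not a format defined by the guidance: the field names below stand in for a few of the 14 high-priority attributes (communication protocols, criticality, IP address), and the zone grouping loosely mirrors the ISA/IEC 62443 Zones and Conduits methodology.

    ```python
    from dataclasses import dataclass

    @dataclass
    class OTAsset:
        """Illustrative OT asset inventory record (attribute names assumed)."""
        asset_id: str
        ip_address: str
        protocols: list[str]   # e.g. ["Modbus", "DNP3"]
        criticality: str       # e.g. "high", "medium", "low"
        zone: str              # ISA/IEC 62443 Zone assignment

    def assets_by_zone(assets: list[OTAsset]) -> dict[str, list[OTAsset]]:
        """Group inventory entries by their assigned zone."""
        grouped: dict[str, list[OTAsset]] = {}
        for asset in assets:
            grouped.setdefault(asset.zone, []).append(asset)
        return grouped

    plc = OTAsset("PLC-01", "10.0.1.5", ["Modbus"], "high", "control")
    hmi = OTAsset("HMI-01", "10.0.1.9", ["HTTPS"], "medium", "control")
    print(sorted(assets_by_zone([plc, hmi]).keys()))  # → ['control']
    ```

    A real inventory would carry the full attribute set, be refreshed continuously, and be cross-referenced against vulnerability databases such as the Known Exploited Vulnerabilities Catalog, as the guidance recommends.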

Last updated: 04/12/2025