So it turns out the cause was indeed, as we'd been speculating, a rogue change they couldn't roll back.

Weird that this issue didn't occur in their test environment before they deployed to production. I wonder why that is.

  • DavidDoesLemmy@aussie.zone · 1 year ago

    All companies have a test environment. Some companies are lucky enough to have a separate environment for production.

  • No1@aussie.zone · 1 year ago

    Change Manager who approved this is gonna be sweating bullets lol

    “Let’s take a look at the change request. Now, see here, this section for Contingencies and Rollback process? Why is it blank?”

    • pntha@lemmy.world · 1 year ago

How else do you explain to the layman "catastrophic failure in the configuration update of core network infrastructure, and in its preceding, meant-to-be-foolproof processes"?

  • ji88aja88a@lemmy.world · 1 year ago

This happens in my business all the time… a test FTP IP address gets left in the code and shit falls apart, costing us millions. They hold a PIR, and then it happens again.

  • AutoTL;DR@lemmings.world · 1 year ago

    This is the best summary I could come up with:


    Optus says "changes to routing information" after a "routine software upgrade" was behind last week's nationwide outage, which affected 10.2 million Australians and 400,000 businesses.

    “These routing information changes propagated through multiple layers in our network and exceeded preset safety levels on key routers which could not handle these,” the company said.
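    The "preset safety levels" Optus describes are consistent with BGP maximum-prefix limits, which several analysts pointed to at the time. A minimal Python sketch of that failure mode (hypothetical function and values, not Optus's actual configuration):

    ```python
    # Hypothetical sketch of a "preset safety level" on a router: many BGP
    # implementations enforce a maximum-prefix limit and, by default, tear
    # down the peering session entirely when a peer advertises more routes
    # than the limit allows -- withdrawing ALL of that peer's routes at once.

    def apply_route_update(current_routes, new_routes, max_prefix_limit):
        """Return the merged route table, or None if the preset limit is
        exceeded and the session is reset (all routes withdrawn)."""
        merged = current_routes | new_routes
        if len(merged) > max_prefix_limit:
            return None  # safety level exceeded: session torn down
        return merged

    # Normal update: well under the limit, routes are accepted.
    table = {f"10.{i}.0.0/16" for i in range(50)}
    table = apply_route_update(table, {"192.0.2.0/24"}, max_prefix_limit=100)
    assert table is not None and "192.0.2.0/24" in table

    # A flood of propagated routes trips the limit and drops everything,
    # matching the described cascade across "multiple layers" of the network.
    flood = {f"172.16.{i}.0/24" for i in range(200)}
    assert apply_route_update(table, flood, max_prefix_limit=100) is None
    ```

    The key point the sketch illustrates is that the safety mechanism is all-or-nothing: once the limit trips, the router doesn't just reject the excess routes, it drops the whole session, which is how a bad upstream change can take down connectivity entirely.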

    Before Monday’s disclosure by Optus, experts had theorised the outage was likely a “regular software upgrade gone wrong”.

    “The problem is too widespread to be due to a cable break or equipment failure,” said Tom Worthington, a senior lecturer in computer science from the Australian National University in Canberra.

    The software upgrade theory advanced by telecommunications analysts and experts last Wednesday was put to Optus CEO Kelly Bayer Rosmarin, who rejected those suggestions.

    The disclosure of the outage's cause follows the federal government's announcement earlier on Monday that it would require telecommunications companies in Australia to report their cybersecurity measures, to avoid a repeat of Optus' cyber hack last year.


    The original article contains 528 words, the summary contains 159 words. Saved 70%. I’m a bot and I’m open source!

    • ace_garp@lemmy.world · 1 year ago

      “changes to routing information” after a “routine software upgrade” was behind last week’s nationwide outage.

      -===+===-

      So they blackballed themselves 8|

  • SituationCake@aussie.zone · 1 year ago

    If this is how they do their routine updates, they have had an extremely lucky run so far. Inadequate understanding of what the update would/could do, inadequate testing prior to deployment, no rollback capability, no disaster recovery plan. Yeah nah, you can’t get that lucky for that long. Maybe they have cut budget or sacked the people who knew what they were doing? Let’s hope they learn from this.