New to Z


Cloud Redundancy & Load Balancing Failure SA001

  • 1.  Cloud Redundancy & Load Balancing Failure SA001

    Posted Mon December 27, 2021 04:33 PM

    Cooling Infrastructure Power Failure SAO016 (Dec 2020)

    update to

    Cloud Redundancy & Load Balancing Failure SA001 (6 Dec 2020)

    Attachments (shared via Google Drive, folder "big blue redundancy failure"): Captura de tela de 2021-12-26 17-06-54.png, Captura de tela de 2021-12-26 17-13-07.png, IBM Cloud Redundancy & Load Balancing Failure dec 2 2020.pdf

    Best Regards
    Zeh  Sobrinho

    jose soares sobrinho

  • 2.  RE: Cloud Redundancy & Load Balancing Failure SA001

    Posted Wed January 05, 2022 05:08 PM
    Edited by jose soares sobrinho Wed January 05, 2022 05:11 PM

    Hello BigBlueTeam

    We are participating in the Comgás (Compass energy oil & gas, a Raízen [Cosan and Shell joint venture] company) Plugue Desafio hackathon.
    Deliverables: work plan due January 10th and pitch on January 15th.

    1. Convert the diesel backup generator (passive, running roughly 26 minutes per year under TIER IV) into a full-time, active natural-gas generator with 10x the nominal capacity. Where a 100 MW datacenter formerly had a 110 MW generator, it would now have a 1 GW generator, with the surplus 900 MW sold into the grid as revenue.
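The capacity arithmetic in item 1 can be sketched in a few lines. The 100 MW / 110 MW / 1 GW figures are the illustrative numbers given in this post, not a measured site design:

```python
# Sketch of the generator-upgrade arithmetic from item 1 (illustrative
# numbers from the post, not a real site design).

def grid_export_mw(datacenter_load_mw: float, generator_mw: float) -> float:
    """Power left over for the grid once the datacenter load is served."""
    return generator_mw - datacenter_load_mw

# Before: a 100 MW datacenter with a 110 MW standby diesel generator.
standby_margin = grid_export_mw(100, 110)   # 10 MW of headroom, normally idle

# After: the same datacenter with a 1 GW (10x nominal) natural-gas
# generator running full time, exporting the surplus as revenue.
export = grid_export_mw(100, 1000)          # 900 MW sold into the grid

print(standby_margin, export)
```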

    2. Attach datacenters to thermal power plants (hydrocarbon or nuclear) and convert their waste heat into cooling for IBM System z liquid-cooled servers.

    To validate item 1, we need 100% cloud redundancy, because we will no longer have diesel generators on standby. The facility would then be, in our terms, "TIER X" with a negative PUE.

    Then we ran an exercise: what happens if there is no power or no cooling? We found the IBM SAO01 failure report to be wrong.

    We set up the first digital telephone switches in this country (1991/92), and one of the things we did in testing was cut power and cooling to see how the redundant systems behaved. It doesn't seem that this is done with datacenters.

    We found, in the previous message, that in an internet failure the blame and responsibility are assigned not to the infrastructure (OSI) layers but to the utilities layer (chilled water, power), which attests that the datacenter has no redundancy.

    It does not address the fact that applications are still hostage to the infrastructure, when they should be separated at the OSI layers. The application must be independent of the infrastructure (servers, networks, routers, etc.): if the infrastructure fails, the application keeps running. Even if an atomic bomb were dropped, the application would stay up and the message would reach its destination.

    We raised this with IBM Brazil and they acknowledged that the failure report is wrong. As it stands, if there is another infrastructure failure the server will crash again. We were told a correction is coming and are awaiting evidence.

    Suppose my infrastructure is 100%. I still can't be sure, because the cloud is another matter. If one node fails and the others fail too, then the nodes are infrastructure-dependent and all of them are affected, which shows there was no redundancy.
    This is what the fathers of the Brazilian internet and the creator of TCP/IP argued at the 25th-anniversary commemoration of the Brazilian internet.

    This vulnerability weakens IBM against a takeover.

    With 5G, I suspect Erlang/Elixir will improve this, since availability is 99.9999999% ("nine nines", i.e. less than 1 second of downtime over 20 years), but we need a 100% redundant cloud solution by pitch day, January 15th.
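The "nine nines" claim above is easy to sanity-check: downtime is just (1 − availability) times the elapsed time. A minimal sketch:

```python
# Downtime implied by an availability figure, to check the
# "nine nines = under 1 second in 20 years" claim.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def downtime_seconds(availability: float, years: float) -> float:
    """Expected cumulative downtime over `years` at a given availability."""
    return (1.0 - availability) * years * SECONDS_PER_YEAR

nine_nines = 0.999999999                   # 99.9999999%
print(downtime_seconds(nine_nines, 20))    # ~0.63 s over 20 years
```

So nine nines indeed works out to well under one second of downtime across 20 years.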

    4. We need the System z server liquid-cooling infrastructure to be the new "birth of the IBM PC", i.e. the birth of the IBM Personal Computer:

    Open-source specifications and CAD/DWG components, along the lines of the Open Compute, Open Mainframe, and open-system templates.

    5. We made this suggestion to the OCP, and now Don has asked us for a case study: attach a datacenter to thermoelectric plants, converting waste heat into cooling for the energy-hungry servers.

    We will provide the IBM server liquid-cooling case and model a new solution with Broad BE, BH, and BS components.

    BS Model, Steam-Driven Chiller

    • Capacity: 30–3300 tons
    • Steam pressure: 60–150 psi
    • Chiller comes with steam valve
    • Cooling only

    BE Model, Exhaust-Driven Chiller (Broad Exhaust-Driven Chiller)

    • Capacity: 40–3300 tons
    • Exhaust pressure drop: 3–8 inches W.C.
    • Exhaust temperature: 536–990 °F

    BH Model, Hot-Water-Driven Chiller (BROAD Hot Water Two-Stage Chiller)

    • Capacity: 30–3300 tons
    • Hot water temperature: 280–356 °F
    • Provides hot water valve
    • Cooling / heating

    Reference: United States Data Center Energy Usage Report

    We took the opportunity to seek authorization from IBM to include a Mercedes-Benz case study from São Paulo, Brazil, where in 2003/04 I was responsible for the district cooling that supplied chilled water to the mainframe.
    We are also assessing the possibility of trigeneration using natural gas to convert residual heat into cooling, since the new Mercedes plant is self-sufficient in energy through cogeneration, while the São Bernardo do Campo headquarters is not.

    Happy Holidays


    Don Mitchell
    Thu, December 9, 2021, 00:39
    to Don, me, John

    Very interested in learning more

    On Sun, Nov 28, 2021 at 5:17 PM Z Nephew <> wrote:
    Dear Victaulic & Vertiv,
    We are looking for a Facebook Data Center responsible/supervisor willing to fund a POC using machine learning/AI to reduce the data center cooling bill by 70%, reach PUE 1.0, and upgrade/retrofit.
    Zeh nephew

    Shared worksheet: OCP ACF Case Study Input Form (owned by Don Mitchell; last edited by Zeh S Sobrinho 14 hours ago)

    jose soares sobrinho