A neuromorphic supercomputer called DeepSouth will be capable of 228 trillion synaptic operations per second, which is on par with the estimated number of operations in the human brain

Edit: updated link, no paywall
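The "on par with the human brain" claim in the headline comes from a rough multiplication of neuron count, synapses per neuron, and average firing rate. A minimal sketch of that back-of-envelope estimate, using commonly cited ballpark figures (the specific values below are illustrative assumptions, not numbers from the article):

```python
# Back-of-envelope estimate of synaptic events per second in a human brain.
# All three inputs are rough, commonly cited ballpark values, not measurements.
neurons = 86e9              # ~86 billion neurons (common estimate)
synapses_per_neuron = 7e3   # ~7,000 synapses per neuron (rough average)
mean_rate_hz = 0.4          # assumed sub-1 Hz average firing rate

synaptic_ops_per_sec = neurons * synapses_per_neuron * mean_rate_hz
print(f"{synaptic_ops_per_sec:.2e}")  # ~2.41e+14, i.e. ~240 trillion/s
```

With these assumptions the result lands in the same order of magnitude as DeepSouth's 228 trillion operations per second, which is the sense in which the figures are "on par"; shifting any input within its plausible range moves the answer by an order of magnitude or more.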

  • ArbitraryValue@sh.itjust.works · 1 year ago

    A better title would be “Supercomputer that could conceivably simulate entire human brain, based on a rough estimate of what it would take to do that if we had any idea how to do that, will switch on in 2024”.

    • gibmiser@lemmy.world · 1 year ago

      For real. I’m reading the title all wondering how the fuck they mapped all the neuron connections and… nope, the real innovative part of the story is clickbait

      • neuropean@kbin.social · 1 year ago

        That’s only counting connections. The brain learns by making new connections, through complex location and timing dependent inputs from other neurons. It’s way more complex than the number of connections, and if neuroscientists are still studying the building blocks we don’t have much hope of recreating it.

        • IHeartBadCode@kbin.social · 1 year ago

          This also ignores that the brain is not wholly an electrical system. There are all kinds of chemical receptors within the brain that alter all kinds of neurological function. Kind of the reason why drugs are a thing. On small scales we have a pretty good idea how these work, at least for the receptors that we’re aware of. On larger scales it’s mostly guessing at this point. The brain has a knack of doing more than the sum of all its parts on a pretty regular basis.

          • 0ops@lemm.ee · 1 year ago

            Not to mention the scale and nature of the “dataset” that our brains were trained on. Millions of years of instinct encoded in DNA, plus a few years gathering data from dozens of senses 24/7 (including chemical receptors, like you said) and in turn manipulating our bodies, interacting with the environment, and observing the results. We’ve been doing all of this since embryo.

            We can’t just feed a model raw image and text data and expect its intelligence to be comparable to ours. However you quantify intelligence/consciousness/whatever, the text/image model’s thought processes will be alien to ours, which makes sense because their “environment” is nothing like ours - just text and image input and output.

    • Geek_King@lemmy.world · 1 year ago

      I get so tired of these half-truth spun news article headlines. Thank you for bringing it back down to reality.

    • Warl0k3@lemmy.world · 1 year ago

      Four grad students out there hand-entering NXML rows while squinting at AI enhanced SEM images should be able to get all 228T done by… next quarter, right?

      This is setting aside that bus capacity is the bottleneck vs. compute power and they have yet to demonstrate bus performance of a full 228T connections/second with implicit timing which, to my knowledge, has never been demonstrated in a system a tiny fraction of this size. Though that’s not to say it’s impossible, but while this machine is incredibly powerful the comparison to human brains is predictably inaccurate…