• Neshura@bookwormstory.social
    link
    fedilink
    English
    arrow-up
    23
    ·
    2 months ago

    Let’s be honest here: it was never more than a band-aid thrown together in an attempt to keep up with chiplets. Intel is in serious trouble because they still cannot compete with AMD in that regard; chiplets afford AMD a level of production scalability Intel can currently only dream of.

    • Overspark@feddit.nl

      That’s not entirely true: Intel’s latest laptop chips are more advanced than AMD’s in some regards, specifically when it comes to dividing different workloads amongst different chiplets. But that hasn’t led to chips that are actually better for users yet. On the desktop they still have a long way to go; that much still holds true.

      • Cort@lemmy.world

        Would you happen to be including AMD’s new Strix Point mobile CPU in that comparison? They seem to be at the very top for mobile CPUs currently.

        If you were including those, what workloads is Intel still better at?

        • Overspark@feddit.nl

          Absolutely. Strix Point is great, but it’s a monolithic chip; no chiplets are used. Intel’s Meteor Lake and Arrow Lake use all kinds of different chiplets, called tiles: separate ones for compute, GPU, SoC (with the RAM controllers, display driver, and a few ultra-low-power E cores so that the compute tiles can be completely switched off at idle) and IO. Different tiles are produced on different process nodes to optimize for cost and performance as needed.

          On paper they’re very impressive designs, but it hasn’t translated to chips that are actually faster or more efficient than AMD’s offerings. I’d always choose AMD for a laptop currently, so even with all that impressive tech Intel is still lagging behind.

          • Cort@lemmy.world

            Oh wow, I didn’t realize strix was monolithic. I just assumed it was multi die due to the Zen5c cores.

      • schizo@forum.uncomfortable.business

        Basically every one of them made in the past 4 or 5 years?

        Some are better than others - CP2077, for example, will happily use all 16 threads on my 7700x, but something crusty like WoW only uses like, 4. Fortnite is 3 or so, unless you’re doing shader compilation, where it’ll use all of them - and so on. But it’s not 2002 anymore.

        The issue is that most games won’t use nearly as many cores as Intel is stuffing on a die these days, which means that for gaming, having 32 threads via E-cores or whatever is utterly pointless, but having 8 cores and 16 threads of full-fat cores is very much useful.

      • Neshura@bookwormstory.social

        The concept is used by pretty much all games now. It’s just that during the gilded days of Intel, everybody and their mother hardcoded around a max of 8 threads. Now that core counts are significantly higher, game devs opt for dynamic threading instead of fixed threading, which results in Intel’s imbalanced core performance turning into more and more of a detriment. Doom Eternal, for example, uses as many threads as you have available and loads them pretty evenly.

      • SolOrion@sh.itjust.works

        Honestly, if we’re talking modern games I think games that don’t utilize multithreading to at least some degree would be a significantly shorter list.

      • Dudewitbow@lemmy.zip

        All games use it to some extent. The ones that use/need it the most are typically online games where several players are on the same map.

        Battlefield and Battlefield-adjacent games, for example, have historically pelted the CPU because they often have massive player counts.

    • Alphane Moon@lemmy.worldOP

      I would argue that if your budget allows it, it’s better to get 8 cores.

      Any benefit to paying for 12 or 16?

      Only if you have demanding use cases beyond gaming. One example is video editing and encoding (the type that should not be done on a GPU).

      Some games do benefit from having 16 cores: things like economic strategy games with lots of background simulation (one example would be pathfinding).

  • BrightCandle@lemmy.world

    Not necessarily. Ignore chiplets, because that is mostly about yield and price, and look at what happens when we go heavily threaded. Smaller cores with lower clock speeds take up less space and less power and are more efficient with both, which leads to more total compute performance in a given space and power budget. Due to Amdahl’s law, the ideal CPU in a highly multithreaded environment has a small number of P cores, matching the number of single-threaded result-combining threads, and as many E cores as possible. The single-threaded part comes to dominate given enough multithreading, and all algorithms have some amount of single-threaded accumulation of results.

    AMD is working with the same limitations: bigger cores with higher clock speeds will always mean fewer total cores and less total compute performance in that space. The single-threaded component will dominate at high core counts, so the answer is neither all P cores nor all E cores (and AMD’s cores should be considered P cores). The ideal number of P cores is definitely more than 1, because the GPU requires one of those high-performance threads and the game will need at least one more, depending on how many different sets of parallel tasks it is running.

    But the problem is that this theoretical future is a bit far off, because today’s games run quite happily on 6 cores, and most don’t really utilise even 6 cores well. They tend to prefer all high-performance cores; no one is yet at the stage of dealing with the added complexity of heterogeneous CPU core performance, which is why both AMD and Intel need special schedulers to improve game utilisation a bit. This approach of differing core performance (first a little, then quite a lot with E cores) is too new, since big AAA games are in development for many years. So while it’s likely that future gains from silicon will slow further, necessitating optimising the compute density and balance of cores, it’s unclear when Intel’s strategy will pay off in games; it already pays off in some productivity applications, but not in games yet.

    I am fairly certain this approach, and further iterations of it with multiple performance tiers and even differing instruction sets, is the likely future of computing; so far it has been critical to the GPU’s success. But it’s really unclear when that future will arrive. It definitely doesn’t make sense now or in the near future, so buying a current Intel CPU for games makes no sense.