Tried my duck river crossing thing a few times recently; it usually solves it now, albeit with a bias toward making unnecessary trips about half the time.

Of course, anything new fails:

There’s 2 people and 1 boat on the left side of the river, and 3 boats on the right side of the river. Each boat can accommodate up to 6 people. How do they get all the boats to the left side of the river?

Did they seriously change something just to deal with my duck puzzle? How odd.

It’s Google so it is not out of the question that they might do some analysis on the share links and referring pages, or even use their search engine to find discussions of a problem they’re asked. I need to test that theory and simultaneously feed some garbage to their plagiarism machine…

Sample of the new botshit:

L->R: 2P take B_L. L{}, R{2P, 4B}. R->L: P1 takes B_R1. L{P1, B_R1}, R{P2, 3B}. R->L: P2 takes B_R2. L{2P, B_R1, B_R2}, R{2B}. L->R: P1 takes B_R1 back. L{P2, B_R2}, R{P1, 3B}. R->L: P1 takes B_R3. L{P1, P2, B_R2, B_R3}, R{2B}. L->R: P2 takes B_R2 back. L{P1, B_R3}, R{P2, 3B}.

And again and again, like a buggy attempt at brute forcing the problem.
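
For the record, an actual brute force makes short work of this one. A throwaway breadth-first search over (people on the left, boats on the left) states finds a shortest solution in 9 crossings: both people ride one boat over together, then each rows a separate boat back, which nets one extra boat on the left per round, three rounds in total. A quick sketch of that search (mine, not the bot’s):

    from collections import deque

    PEOPLE, BOATS, CAPACITY = 2, 4, 6   # 4 boats in total: 1 starts on the left, 3 on the right

    def solve():
        start = (2, 1)                  # (people on the left bank, boats on the left bank)
        parent = {start: None}          # predecessor map, doubles as the visited set
        queue = deque([start])
        while queue:
            state = queue.popleft()
            people_left, boats_left = state
            if boats_left == BOATS:     # every boat is on the left: reconstruct the path
                path = []
                while state is not None:
                    path.append(state)
                    state = parent[state]
                return path[::-1]
            for d in (-1, +1):          # -1: a boat crosses left->right, +1: right->left
                p_here = people_left if d == -1 else PEOPLE - people_left
                b_here = boats_left if d == -1 else BOATS - boats_left
                if b_here == 0:
                    continue            # no boat on this bank to row
                for riders in range(1, min(p_here, CAPACITY) + 1):
                    nxt = (people_left + d * riders, boats_left + d)
                    if nxt not in parent:
                        parent[nxt] = state
                        queue.append(nxt)

    for people_left, boats_left in solve():
        print(f"left bank: {people_left} people, {boats_left} boats")

Nine crossings, found instantly, which makes the flailing above even funnier.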

  • froztbyte@awful.systems · 10 points · 3 days ago

    I would be 0% surprised to learn that the modelfarmers “iterated” to “hmm, people are doing a lot of logic tests, let’s handle those better” and that that’s what got us here

    (I have no evidence for this, but to me it seems a completely obvious/evident way for them to try to keep the party going)

    • scruiser@awful.systems · 10 points · 3 days ago

      I have two theories on how the modelfarmers (I like that slang, it seems more fitting than “devs” or “programmers”) approached this…

      1. Like you theorized, they noticed people doing lots of logic tests, including twists on standard logic tests (that the LLMs were failing hard on), so they generated (i.e. paid temp workers to write) a bunch of twists on standard logic tests. And here we are, with it able to solve a twist on the duck puzzle, but not really better in general.

      2. There has been a lot of talk of synthetically generated data sets (since they’ve already robbed the internet of all the text they could). Simple logic puzzles could actually be procedurally generated, including the notation diz noted. The modelfarmers have over-generalized the “bitter lesson” (or maybe they’re just lazy/uninspired/looking for a simple solution they can tell the VCs and business majors) and think just some more data, deeper network, more parameters, and more training will solve anything. So you get the buggy attempt at logic notation from synthetically generated logic notation. (Which still doesn’t quite work, lol.)
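
      (To be concrete about how low that bar is: a toy script like the one below would do it - roll random instances of the people-and-boats puzzle, solve them with a trivial search, and print the answer in roughly the notation diz quoted. Pure speculation on my part about what a synthetic-data pipeline might look like, not something I know they run.)

          import random
          from collections import deque

          def solve(people, boats_left, boats_total, capacity):
              """Shortest crossing sequence that puts every boat on the left bank, or None."""
              start = (people, boats_left)        # (people on the left, boats on the left)
              parent = {start: (None, None)}      # state -> (previous state, move taken)
              queue = deque([start])
              while queue:
                  state = queue.popleft()
                  p, b = state
                  if b == boats_total:            # goal reached: walk back and collect the moves
                      moves = []
                      while parent[state][0] is not None:
                          state, move = parent[state]
                          moves.append(move)
                      return moves[::-1]
                  for d in (-1, +1):              # -1: L->R crossing, +1: R->L crossing
                      p_here = p if d == -1 else people - p
                      b_here = b if d == -1 else boats_total - b
                      if b_here == 0:
                          continue
                      for k in range(1, min(p_here, capacity) + 1):
                          nxt = (p + d * k, b + d)
                          if nxt not in parent:
                              parent[nxt] = (state, ("L->R" if d == -1 else "R->L", k))
                              queue.append(nxt)
              return None

          # Roll one synthetic training example and print it in the quoted style.
          people = random.randint(2, 4)
          left, right = random.randint(0, 2), random.randint(1, 3)
          capacity = random.randint(people, 6)
          moves = solve(people, left, left + right, capacity)
          print(f"{people} people and {left} boat(s) on the left, {right} boat(s) on the right, "
                f"each boat holds up to {capacity}. Get all boats to the left.")
          print("; ".join(f"{d}: {k}P take 1 boat" for d, k in moves) if moves else "impossible")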

      I don’t think either of these approaches will actually work for letting LLMs solve logic puzzles in general; they will just solve individual cases (for solution 1) or make the hallucinations more convincing (for 2). For all their talk of reaching AGI… the approaches the modelfarmers are taking suggest a mindset of just reaching the next benchmark (to win more VC money, and maybe market share?) and not of creating anything genuinely reliable, much less “AGI”. (I’m actually on the far optimistic end of sneerclub in that I think something useful might be invented that survives the coming AI winter… but if the modelfarmers just keep scaling and throwing more data at the problem, I doubt they’ll even manage that much.)

      • froztbyte@awful.systems · 6 points · 3 days ago

        (excuse possible incoherence; it’s 01:20 and I’m entirely in filmbrain (I’ll revise/edit/answer questions in the morning))

        re (1): while that is a possibility, keep in mind that all this shit also operates/exists in a metrics-as-targets-obsessed space. they might not present the end user with the hit%, but the number exists, and I have no reason to believe it isn’t being tracked. combine that with social effects (public humiliation of their Shiny New Model, monitoring usage in public, etc etc) - that’s where my thesis of directed prompt-improvement is grounded

        re (2): while they could do something like that (synthetic derivation, etc), I dunno if that’d be happening for this. this is outright a guess on my part, a reach based on character, going off what I’ve seen from some in the field, but just……I don’t think they’d try that hard. I think they might try some limited form of it, but only so much as can be backed up in relatively little time and thought. “only as far as you can stretch 3 sprints” type long

        (the other big input in my guesstimation re (2) is an awareness of the fucked interplay of incentives and glorycoders and startup culture)

        • scruiser@awful.systems · 6 points · 3 days ago

          I don’t think they’d try that hard.

          Wow lol… 2) was my guess at an easy/lazy/fast solution, and you think they are too lazy for even that? (I think a “proper” solution would involve substantial modifications/extensions to the standard LLM architecture, and I’ve seen academic papers with potential approaches, but none of the modelfarmers are actually seriously trying anything along those lines.)

          • froztbyte@awful.systems · 3 points · 2 days ago

            lol, yeah

            “perverse incentives rule everything around me” is a big thing (observable) in “startup”[0] world because everything[1] is about speed/iteration. for example: why bother spending a few weeks working out a way to generate better training data for a niche kind of puzzle test if you can just code in “personality” and make the autoplag casinobot go “hah, I saw a puzzle almost like this just last week, let’s see if the same solution works…”

            i.e. when faced with a choice of hard vs quick, cynically I’ll guess the latter in almost all cases. there are occasional exceptions, but none of the promptfondlers and modelfarmers are in that set imo

            [0] - look, we may wish to argue about what having billions in vc funding categorizes a business as. but apparently “immature shitderpery” is still squarely “startup”

            [1] - in the bayfucker playbook. I disagree.

            • diz@awful.systemsOP · 4 points · 2 days ago

              I think they worked specifically on cheating the benchmarks, though, as well as on popular puzzles like pre-existing variants of the river crossing - it is a very large and very popular puzzle category; if the river crossing isn’t on the list, I don’t know what would be.

              Keep in mind that they are true believers, too - they think that if they cram enough little pieces of logical reasoning, taken from puzzles, into the AI, then they will get a robot god that will actually start coming up with new shit.

              I very much doubt that there’s some general improvement in reasoning performance that results in these older puzzle variants getting solved while new ones, which aren’t particularly more difficult, still fail.