• XLE@piefed.social · 4 hours ago

    How did I end up on a timeline where Microsoft is talking about rolling back AI in its OS and practically acknowledging vibe coding caused problems… and Linux developers are talking about ramping up its usage?

    Obviously Microsoft is still worse here, but what are these trajectories?

  • Mongostein@lemmy.ca · 10 hours ago

    Linux kernel czar?

    I’m curious about this but I refuse to click the link because that just sounds so fucking stupid.

    • deadbeef79000@lemmy.nz · 5 hours ago

      It’s an affectation of The Register; they like reporting real news in a sometimes quirky voice. It’s also British, so some of the language and humour doesn’t quite work as well in other parts of the world.

    • inari@piefed.zip · 10 hours ago

      The headline is stupid but the article is interesting. Greg is saying that since last month, for some unknown reason, AI bug reports have gotten good and useful, and are something current Linux maintainers can handle.

        • inari@piefed.zip · 8 hours ago

          Greg says they’re mostly small bug fixes and that the current maintainers can handle it; not sure where you’re getting the “reams” bit from.

            • inari@piefed.zip · 6 hours ago

              Yeah I mean, the goal is not to replace code maintainers, only to assist them in their work. Greg in general seems optimistic about it:

              “I did a really stupid prompt,” he recounted. “I said, ‘Give me this,’ and it spit out 60: ‘Here’s 60 problems I found, and here’s the fixes for them.’ About one-third were wrong, but they still pointed out a relatively real problem, and two-thirds of the patches were right.” Mind you, those working patches still needed human cleanup, better changelogs, and integration work, but they were far from useless. “The tools are good,” he said. “We can’t ignore this stuff. It’s coming up, and it’s getting better.”

      • Em Adespoton@lemmy.ca · 10 hours ago

        It’s not just bug reports; in the last month, AI-driven development has actually gone from slop to reliably better than the average human.

        That’s not saying it’s writing better code, just that managing the development process and catching regular bugs is now better than when run by a junior analyst.

        Makes sense that a properly balanced model with randomization turned down should be able to recognize when something is being done outside the acceptable parameters.
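        A minimal sketch of what “randomization turned down” means in sampling terms (toy scores, not any real model’s API): language models pick tokens by sampling from a softmax over scores, and dropping the sampling temperature toward zero removes the randomness entirely.

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Sample an index from a softmax over logits at the given temperature."""
    if temperature <= 0:
        # Randomization fully "turned down": always take the top-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [x / temperature for x in logits]
    m = max(scaled)                       # subtract max for numerical stability
    weights = [math.exp(x - m) for x in scaled]
    r = rng.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(logits) - 1

rng = random.Random(0)
logits = [2.0, 1.0, 0.5]                  # hypothetical token scores

greedy = {sample_token(logits, 0.0, rng) for _ in range(100)}
spread = {sample_token(logits, 2.0, rng) for _ in range(100)}
print(greedy)   # temperature 0: always the argmax, {0}
print(spread)   # higher temperature: several different tokens show up
```

        Real inference stacks expose the same knob (usually literally named temperature), which is why review-style tooling tends to run it at or near zero.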

        • The_Decryptor@aussie.zone · 3 hours ago

          It’s not just bug reports; in the last month, AI-driven development has actually gone from slop to reliably better than the average human.

          Funny, I heard that same claim about 6 months ago.

          And I’m sure I’ll hear it again in another 6 months or so.

    • frongt@lemmy.zip · 9 hours ago

      That’s The Register’s style. They’re a little weird with their copy, but their reporting has been solid, in my experience.

  • Riskable@programming.dev · 7 hours ago

    Either a lot more tools got a lot better,

    That’s what it was. Even the free, open source models are vastly superior to the best of the best from just a year ago.

    People got it into their heads that AI is shit when it was shit and decided at that moment that it would be stuck in that state forever. They forget that AI is just software, and software usually gets better over time, especially open source software, which is what all the big AI vendors are building their tools on top of.

    We’re still in the infancy of generative AI.

    • XLE@piefed.social · 4 hours ago

      If you read AI critics, you will see people presenting solid financial evidence of the failure of AI companies to do what they promised. Remember Sam Altman promised AGI in 2025? I certainly do, and now so do you.

      Do you have any concrete evidence that this financial flop will turn around before it runs out of money?

    • frongt@lemmy.zip · 6 hours ago

      I tried one for the first time yesterday. It was mediocre at best. Certainly not production code. It would take just as much effort to refine it as it would to just write it in the first place.

    • AliasAKA@lemmy.world · 6 hours ago

      Traditional software was developed by humans as an artifact that got better to the degree that humans improved it for some task, but improvement was never guaranteed. Windows 11 is proof of that, and there is a laundry list of regressions and bugs introduced into software developed by humans. I acknowledge you say “usually” and “especially open source”; I lukewarmly agree with that but disagree that large LLMs or other generative models will follow this trend, and merely want to point out that software usually introduces bugs as it’s developed, which are hopefully fixed by people who can reason over the code.

      Which brings us to AI models, and really they should just be called transformer models; they are statistical tensor product machines. They are not software in a traditional sense. They are trained to match their training input in a statistical sense. If the input data is corrupted, the model will actually get worse over time, not better. If the data is biased, it will get worse over time, not better. With the amount of slop generated on the web, it is extraordinarily hard to denoise and decide what’s good data and what’s bad data that shouldn’t be used for training. Which means the scaling we’ve seen with increased data will not necessarily hold. And there’s not a clear indication that scaling the model size, which is largely already impractical, is having some synergistic or emergent effect as hoped and hyped.
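      A toy sketch of that feedback loop (a one-dimensional stand-in, not a real training run): if each “generation” of a model is fit only to samples drawn from the previous generation, estimation error compounds instead of averaging out, and the fitted distribution eventually collapses.

```python
import random
import statistics

random.seed(1)

mu, sigma = 0.0, 1.0   # generation 0: the real data distribution
n = 10                 # samples each generation "trains" on
history = []
for generation in range(300):
    # Draw training data from the previous generation's model...
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    # ...and fit the next generation to it. Any estimation error is inherited.
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    history.append(sigma)

print(f"spread after generation 1:   {history[0]:.4f}")
print(f"spread after generation 300: {history[-1]:.4f}")
print("collapsed toward zero:", history[-1] < history[0])
```

      Training on curated real data instead of the previous generation’s output is exactly the denoising problem the comment describes, and it gets harder as more of the web is generated.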

      Also, we’re really not in the infancy of AI. Maybe the infancy of widespread hype for it, but the idea of using tensor products for statistical learning algorithms goes back at least as far as Smolensky, maybe before, and that was what, 1990?

      We are in the infancy of, I’d say, quantum-style compute, so we really don’t have much to draw on beyond theoretical models.

      Generative LLM models have largely plateaued in my opinion.