It would be interesting to watch for sure, but I wonder if they might correct each other or collaborate in some way that could be lightly supervised to produce an output

  • CrocodilloBombardino@piefed.social
    8 days ago

    what you’re proposing requires them to reason about and understand each other. LLMs don’t do that: they take text input and construct an output from words (tokens) that the model has mapped to be close to the ones in your prompt.

    it’s a clever way to produce a plausible response, but it’s not thinking or reasoning.
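
    To make the point concrete, here’s a deliberately simplified sketch (a toy bigram model, nowhere near a real LLM’s architecture, but the same predict-the-next-token idea): given a prompt, it just emits whichever token most often followed the previous one in its training text. Frequency lookup, no reasoning.

    ```python
    # Toy illustration of next-token prediction: count which token follows
    # which in a tiny corpus, then greedily continue a prompt. This is an
    # assumption-laden sketch, not how transformers work internally.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat sat on the rug".split()

    # Count how often each token follows each other token.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def generate(prompt, n_tokens):
        tokens = prompt.split()
        for _ in range(n_tokens):
            prev = tokens[-1]
            if prev not in follows:
                break
            # Greedily pick the most frequent continuation.
            tokens.append(follows[prev].most_common(1)[0][0])
        return " ".join(tokens)

    print(generate("the cat", 4))  # "the cat sat on the cat"
    ```

    The output looks locally plausible but loops back on itself, because there is no model of meaning behind it, only statistics over adjacent tokens.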

      • CorrectAlias@piefed.blahaj.zone
        7 days ago

        Didn’t you argue that deep learning models were able to think like humans, and then use this exact line in your arguments? Like a month or two ago?