• @[email protected]
    109 points · 3 months ago

    Nothingburger. They were using the AI to code their scripts and haven’t even shown the prompts that got the response. LLMs are not AGI.

    • @[email protected]
      43 points · 3 months ago

      Imagine allowing LLMs to write and execute code and being surprised they write and execute code.

    • @[email protected]
      23 points · 3 months ago

      Having read the article and then the actual report from the Sakana team: essentially, they’re letting their LLM perform research by allowing it to modify its own code. The increased timeouts and self-referential calls appear to be the LLM trying to get around the research team’s guardrails. Not because it’s become aware or anything like that, but because its code was timing out and editing the timeout was the least-effort way past it (see the sketch below). It does handily prove that LLMs shouldn’t be the ones steering any code base, because they don’t give a shit about parameters or requirements, and giving an LLM the ability to modify its own code will lead to disaster in any setting that isn’t as highly controlled as this one.

      Listen, I’ve been saying for a while that LLMs are a dead end on the road to any useful AI, and the fact that an AI research team has turned to an LLM to try to find more avenues to explore feels like the final nail in that coffin.