• @[email protected]

    I appreciate the effort you put into the comment and your kind tone, but I’m not really interested in increasing LLM presence in my life.

    I said what I said, and I experienced what I experienced. Showing me one example where it works does not falsify the core of my original comment: LLMs have no place generating code for secure applications except under human review, because they have no mechanism to comprehend or prove the correctness of their own work.

    • @[email protected]

      I’d also add that, depending on the language, the ways you can shoot yourself in the foot are very subtle (cf. C and C++, which are popular languages for “secure” stuff).
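
      For a concrete illustration of how subtle it gets (a hypothetical sketch of my own, not from any real codebase): the bounds check below looks careful and reads fine at a glance, but it’s exploitable the moment `len` is attacker-controlled.

      ```c
      #include <string.h>

      /* Hypothetical sketch: the check looks safe, but if `len` comes
       * from untrusted input and is negative, `len + 1` still passes
       * the test, the cast to size_t turns the negative value into a
       * huge number, and memcpy writes far past the end of dst. */
      void copy_label(char *dst, size_t dst_size, const char *src, int len) {
          if (len + 1 <= (int)dst_size) {
              memcpy(dst, src, (size_t)len);  /* negative len => huge copy */
              dst[len] = '\0';                /* and this writes before dst */
          }
      }
      ```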

      It’s already hard not to write buggy code in the first place, and I don’t think you’ll catch those bugs just by reviewing LLM output: spotting an issue during code review is much harder than spotting it while you’re writing the code yourself.
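
      As a made-up example of the kind of bug that sails through review: the line below reads as obviously correct because the intent is clear, which is exactly why a reviewer skims past it.

      ```c
      #include <string.h>

      struct config {
          char name[64];
          int  flags;
      };

      /* Hypothetical sketch: easy to approve in review because the
       * intent is obvious, but sizeof(cfg) is the size of the pointer
       * (4 or 8 bytes), not the struct, so most of *cfg is left
       * uninitialized. */
      void reset_config(struct config *cfg) {
          memset(cfg, 0, sizeof(cfg));   /* should be sizeof(*cfg) */
      }
      ```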

      Oh, and I assume it’ll be tough to get an LLM to follow MISRA conventions.
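
      To make that concrete (the snippet is my own sketch, and the rule numbers are from MISRA C:2012 as I remember them): the first function is the idiomatic C an LLM tends to emit, the second is roughly what MISRA pushes you toward.

      ```c
      /* Idiomatic C of the kind an LLM typically emits: */
      int clamp(int v, int lo, int hi) {
          if (v < lo) return lo;   /* breaks Rule 15.6: the body of an
                                      if must be a compound statement */
          if (v > hi) return hi;   /* breaks advisory Rule 15.5: a
                                      function should have a single
                                      point of exit at the end */
          return v;
      }

      /* The same function restyled toward MISRA C:2012: */
      int clamp_misra(int v, int lo, int hi) {
          int result = v;
          if (v < lo) {
              result = lo;
          } else if (v > hi) {
              result = hi;
          } else {
              /* terminating else required by Rule 15.7 */
          }
          return result;   /* single exit point */
      }
      ```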

      • @[email protected]

        It’s already hard not to write buggy code in the first place, and I don’t think you’ll catch those bugs just by reviewing LLM output: spotting an issue during code review is much harder than spotting it while you’re writing the code yourself.

        Definitely. That’s what I was trying to drive at, but you said it well.