Over the past few years, the evolution of AI-driven tools like GitHub’s Copilot and other large language models (LLMs) has promised to revolutionise programming. By leveraging deep learning, these tools can generate code, suggest solutions, and even troubleshoot issues in real time, saving developers hours of work. While these tools have obvious productivity benefits, there is a growing concern that they may also have unintended consequences for the quality and skill set of programmers.

  • @[email protected]
    2 months ago

    Bad developers existed before LLMs were spitting out code the way they do today, and LLMs will undoubtedly lower the bar for them to enter the field.

    If LLMs allow bad programmers to deliver work with good enough quality to pass themselves off as good programmers, this means LLMs are fantastic value for money.

    Also worth noting: programmers do learn by analysing the output of LLMs, just as the programmers of old learned by reading someone else’s code.

    • @[email protected]
      2 months ago

      I think I could have stated my opinion better. I think LLMs’ total value remains to be seen. They allow totally incompetent developers to occasionally pass as below-average developers. Is that good or bad? I don’t know. What average and excellent developers can do with LLM assistance is less clear. Certainly these tools can help them in some situations.

      • @[email protected]
        2 months ago

        I think I could have stated my opinion better. I think LLMs’ total value remains to be seen. They allow totally incompetent developers to occasionally pass as below-average developers.

        This is a baseless assertion from your end, and a purely personal one.

        My anecdotal evidence is that the best software engineers I know use these tools extensively to get rid of churn and drudge work, and they apply them anywhere and everywhere they can.