• wise_pancake@lemmy.ca · 1 month ago

    It catches things like spelling errors in variable names, does good autocomplete, and it’s useful to have it look through a file before committing it and creating a pull request.

    It’s very useful for throwaway work like writing scripts and automations.

    It’s useful, but not the 10x multiplier all the CEOs claim it is.

    • MudMan@fedia.io · 1 month ago

      Fully agreed. Everybody is betting it’ll get there eventually and jockeying to stay ahead of the pack, but right now there’s no guarantee it’ll get to where the corpos are assuming it already is.

      Which is not the same as saying we don’t already have better autocomplete/spellcheck/“hey, how do I format this specific thing” tools.

      • jcg@halubilo.social · 1 month ago

        I think the main barriers are context length and the training data simply not existing. On context, it has to be useful context: GPT-4o has a “128k” context window, but it’s mostly sensitive to the beginning and end of the context and blurry in the middle, and that’s consistent with other LLMs.

        On data, how many large-scale, well-written, well-maintained projects are really out there? Orders of magnitude fewer than there are examples of “how to split a string in bash” or “how to set up validation in Spring Boot” (something like the sketch below). We might “get there”, but it’ll take a whole lot of well-written projects first, written by real humans, maybe with the help of AI here and there. Unless, that is, we build it with the ability to somehow learn and understand faster than humans do.
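
        To illustrate the kind of snippet that exists in abundance online, here’s a minimal “validation in Spring Boot” sketch (assuming Spring Boot 3 with the web and validation starters on the classpath; the class and endpoint names are made up for illustration):

        ```java
        // A minimal sketch, not a definitive recipe: assumes Spring Boot 3 with
        // spring-boot-starter-web and spring-boot-starter-validation available.
        // SignupRequest, SignupController, and /signup are made-up names.
        import jakarta.validation.Valid;
        import jakarta.validation.constraints.Email;
        import jakarta.validation.constraints.NotBlank;
        import org.springframework.http.ResponseEntity;
        import org.springframework.web.bind.annotation.PostMapping;
        import org.springframework.web.bind.annotation.RequestBody;
        import org.springframework.web.bind.annotation.RestController;

        // Request body with declarative constraints on each field.
        record SignupRequest(@NotBlank String username, @Email @NotBlank String email) {}

        @RestController
        class SignupController {
            // @Valid makes Spring run the constraints above; violations become a 400 response.
            @PostMapping("/signup")
            ResponseEntity<String> signup(@Valid @RequestBody SignupRequest request) {
                return ResponseEntity.ok("registered " + request.username());
            }
        }
        ```

        The point isn’t the snippet itself, it’s that the internet has thousands of variations of it, and very few comparably complete examples of whole, well-maintained codebases.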