• @[email protected]
    link
    fedilink
    English
    26
    edit-2
    1 day ago

    What temperature and sampling settings? Which models?

    I’ve noticed that the AI giants seem to be encouraging “AI ignorance,” as they just want you to use their stupid subscription app without questioning it, instead of understanding how the tools work under the hood. They also default to bad, cheap models.

    I find my local thinking models (FuseAI, Arcee, or Deepseek 32B 5bpw at the moment) are quite good at summarization at a low temperature, which is not what these UIs default to, and I get to use better sampling algorithms than any of the corporate APIs. Same with “affordable” flagship API models (like base Deepseek, not R1). But small Gemini/OpenAI API models are crap, especially with default sampling, and Gemini 2.0 in particular seems to have regressed.
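
    To make that concrete, here’s roughly what one of my low-temperature summarization calls looks like. A minimal sketch against a local llama.cpp server; the endpoint, model alias, and min_p passthrough are assumptions about your particular local stack, not a spec:

    ```python
    import requests

    # Hedged sketch: low-temperature summarization against a local
    # llama.cpp server (e.g. `llama-server --port 8080`). The URL,
    # model alias, and min_p passthrough are assumptions about your setup.
    payload = {
        "model": "local",  # served model alias (assumed)
        "messages": [
            {"role": "system", "content": "Summarize the user's text faithfully."},
            {"role": "user", "content": open("article.txt").read()},
        ],
        "temperature": 0.2,  # low temperature: deterministic, stays factual
        "min_p": 0.05,       # MinP sampling (a llama.cpp extension)
        "max_tokens": 512,
    }
    resp = requests.post("http://localhost:8080/v1/chat/completions", json=payload)
    print(resp.json()["choices"][0]["message"]["content"])
    ```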

    My point is that LLMs as locally hosted tools you understand the mechanics/limitations of are neat, but how corporations present them as magic cloud oracles is like everything wrong with tech enshittification and crypto-bro type hype in one package.

    • jrs100000
      link
      fedilink
      English
      6
      edit-2
      17 hours ago

      They were actually really vague about the details. The paper itself says they used GPT-4o for ChatGPT, but apparently they didn’t even note which versions of the other models were used.

    • @[email protected]
      link
      fedilink
      English
      9
      1 day ago

      I’ve found Gemini overwhelmingly terrible at pretty much everything. It responds more like a 7B model running on a home PC, or a model from two years ago, than a mid-sized commercial model, in how it completely ignores what you ask it and just latches onto keywords. It’s almost like they’ve played with their tokenisation, or trained it exclusively for providing tech support where it links you to an irrelevant article or something.

      • @[email protected]
        link
        fedilink
        English
        3
        edit-2
        1 day ago

        Gemini 1.5 used to be the best long-context model around, by far.

        Gemini Flash Thinking from earlier this year was very good for its speed/price, but it regressed a ton.

        Gemini 1.5 Pro is literally better than the new 2.0 Pro in some of my tests, especially long-context ones. I dunno what happened there, but yes, they probably overtuned it or something.

      • @[email protected]
        link
        fedilink
        English
        1
        1 day ago

        Bing/ChatGPT is just as bad. It loves to tell you it’s doing something and then just ignore you completely.

    • paraphrand
      link
      fedilink
      English
      7
      edit-2
      1 day ago

      I don’t think giving the temperature knob to end users is the answer.

      Turning it down for maximum correctness and low creativity won’t work in an intuitive way.

      Sure, turning it up from the balanced middle value will make it more “creative” and unexpected, and this is useful for idea generation, etc. But a knob that goes from “good” to “sort of off the rails, but in a good way” isn’t a great user experience for most people.
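
      For anyone unfamiliar with what the knob actually does: temperature just rescales the model’s token scores before sampling, making the distribution sharper or flatter. A toy sketch of the mechanism, not any product’s actual implementation:

      ```python
      import numpy as np

      def sample_token(logits: np.ndarray, temperature: float) -> int:
          """Toy illustration of temperature: rescale logits, then softmax.
          Low T sharpens the distribution (near-greedy, repetitive);
          high T flattens it (more varied, eventually incoherent)."""
          scaled = logits / max(temperature, 1e-6)
          probs = np.exp(scaled - scaled.max())  # numerically stable softmax
          probs /= probs.sum()
          return int(np.random.choice(len(probs), p=probs))

      logits = np.array([4.0, 2.0, 1.0, 0.5])  # made-up scores for 4 tokens
      print(sample_token(logits, 0.2))  # almost always picks token 0
      print(sample_token(logits, 1.5))  # much more varied picks
      ```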

      Most people understand this stuff as intended to be intelligent. Correct. Etc. Or at least they understand that’s the goal. Once you give them a knob to adjust the “intelligence level,” you’ll have more pushback on these things not meeting their goals. “I clearly had it in factual/correct/intelligent mode. Not creativity mode. I don’t understand why it left out these facts and invented a backstory for this small thing it mentioned…”

      Not everyone is an engineer. Temperature is an opaque setting.

      But you do have a point about presenting these as cloud genies that will do spectacular things for you. That is not a great way to execute this as a product.

      I loathe how these things are advertised by Apple, Google and Microsoft.

      • @[email protected]
        link
        fedilink
        English
        5
        edit-2
        1 day ago

        • Temperature isn’t even “creativity” per se; it’s more a band-aid to patch looping and dryness in long responses.

        • Lower temperature is much better with modern sampling algorithms, e.g. MinP, DRY, and maybe dynamic temperature schemes like Mirostat (see the sketch after this list). Ideally structured output, too. Unfortunately, corporate APIs usually don’t offer any of this.

        • It can be mitigated with finetuning against looping/repetition/slop, but most models do the opposite, massively overtuning on their own output, which “inbreeds” the model.

        • And yes, domain-specific queries are best. Basically the user needs separate prompt boxes for coding, summaries, creative suggestions and such, each with its own tuned settings (and ideally a tuned model). You are right that this is a much better idea than offering a temperature knob to the user, but… most UIs don’t even do this, for some reason?
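
        For reference, the whole MinP idea fits in a few lines: keep only tokens whose probability is at least min_p times the top token’s, then renormalize, so the cutoff scales with the model’s confidence. A toy sketch, not any particular engine’s implementation:

        ```python
        import numpy as np

        def min_p_filter(probs: np.ndarray, min_p: float = 0.05) -> np.ndarray:
            """Toy MinP: drop tokens whose probability is below min_p times
            the top token's probability, then renormalize. Unlike a fixed
            top-k/top-p cutoff, the threshold scales with model confidence."""
            keep = probs >= min_p * probs.max()
            filtered = np.where(keep, probs, 0.0)
            return filtered / filtered.sum()

        probs = np.array([0.55, 0.25, 0.15, 0.04, 0.01])
        print(min_p_filter(probs, 0.10))  # the last two tokens get zeroed out
        ```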

        What I am getting at is that this is not a problem companies seem interested in solving. They want to treat users as idiots without the attention span to even categorize their question.

      • @[email protected]
        link
        fedilink
        English
        1
        1 day ago

        This is really a non-issue, as the LLM itself should have no problem setting a reasonable value on its own. The user wants a summary? Obviously maximum factual. They want gaming ideas? Etc.
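
        Something like this two-pass flow, I mean: first ask the model for a setting, then run the real query with it. The endpoint and model alias below are placeholders, and a real version would validate the model’s reply:

        ```python
        import requests

        # Sketch of the idea: let the model pick a temperature first, then
        # run the real query with it. Endpoint/model alias are placeholders.
        API = "http://localhost:8080/v1/chat/completions"

        def ask(messages, temperature):
            r = requests.post(API, json={"model": "local", "messages": messages,
                                         "temperature": temperature})
            return r.json()["choices"][0]["message"]["content"]

        query = "Summarize this meeting transcript: ..."

        # Pass 1: deterministic classification call (temperature 0).
        suggestion = ask([{"role": "user", "content":
                           "Reply with only a number between 0.0 and 1.5: a good "
                           "sampling temperature for this request.\n\n" + query}], 0.0)

        # Pass 2: the actual request, using the model's own suggestion.
        print(ask([{"role": "user", "content": query}], float(suggestion)))
        ```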

        • @[email protected]
          link
          fedilink
          English
          2
          edit-2
          1 day ago

          For local LLMs this is an issue, because it breaks your prompt cache and slows things down, unless you run a specific tiny model to “categorize” the text… which few have really worked on.

          I don’t think the corporate APIs or UIs even do this. You are not wrong, but it’s just not done for some reason.

          It could be that the trainers don’t realize it’s an issue. For instance, 0.5-0.7 is the recommended range for Deepseek R1, but I find much lower or slightly higher is far better, depending on the category and other sampling parameters.
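
          The kind of thing I’d like UIs to do instead: route each query into a per-category sampling preset with a cheap heuristic, no second model call, so the prompt cache survives. Everything below (categories, values, keywords) is a made-up example, not a recommendation for any specific model:

          ```python
          # Sketch: per-category sampling presets picked by a cheap keyword
          # heuristic instead of a second model call, so the prompt cache
          # isn't invalidated. All values are made-up illustrations.
          PRESETS = {
              "code":     {"temperature": 0.1, "min_p": 0.05},
              "summary":  {"temperature": 0.3, "min_p": 0.05},
              "creative": {"temperature": 0.9, "min_p": 0.02},
          }

          def pick_preset(query: str) -> dict:
              q = query.lower()
              if any(w in q for w in ("summarize", "summary", "tl;dr")):
                  return PRESETS["summary"]
              if any(w in q for w in ("code", "function", "bug", "compile")):
                  return PRESETS["code"]
              return PRESETS["creative"]

          print(pick_preset("Summarize this article for me"))  # summary preset
          ```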

    • @[email protected]
      link
      fedilink
      English
      -3
      1 day ago

      It’s rare that people argue for LLMs like that here; usually it’s the same kind of “uga suga, AI bad, did not already solve world hunger”.

      • @[email protected]
        link
        fedilink
        English
        1
        4 hours ago

        Your comment would be acceptable if AI were not advertised as solving all our problems, like world hunger.

          • @[email protected]
            link
            fedilink
            English
            1
            30 minutes ago

            Not just ads: whole governments talking about it and funding that crap, like Altman/Musk in the USA or Macron in Europe.

      • @[email protected]
        link
        fedilink
        English
        1
        7 hours ago

        What a nuanced representation of the position; I can just feel the trustworthiness oozing out of the screen.
        In case you’re using a random-word-generation machine to summarise this comment for you: that was sarcasm, and I meant the opposite.

      • @[email protected]
        link
        fedilink
        English
        3
        edit-2
        1 day ago

        Lemmy is understandably sympathetic to self-hosted AI, but I get chewed out or even banned literally anywhere else.

        In one fandom (the Avatar fandom), there used to be enthusiasm for a “community enhancement” of the original show, since the official DVD/Blu-ray looks awful. Years later, in a new thread, I didn’t even mention the word “AI,” just the idea of restoration, and I got bombed and thread-locked for the mere tangential implication.