• Gsus4@mander.xyzOP · 2 days ago

    What we’re all afraid of is that cheap slop is going to drive Stack Overflow broke/closed/bought/private, and then it will be removed from the public domain…and then they’ll jack up the price of the AI slop once the alternative is gone…

    • NιƙƙιDιɱҽʂ@lemmy.world · 2 days ago

      I do wonder, then: as new languages and tools are developed, how quickly will AI models be able to parrot information about their use if sources like Stack Overflow cease to exist?

      • Gsus4@mander.xyzOP · 2 days ago

        I think this is a classic case of privatizing the commons, so that nobody can compete with them later without free public datasets…

      • rumba@lemmy.zip · 2 days ago

        It’ll certainly be of lesser quality, even if they take steps to make the models able to handle new languages and tools.

        Good documentation and ported open projects might be enough to give you working code, but a model isn’t going to be able to optimize it without being trained on tons of optimization data.

      • Gsus4@mander.xyzOP · 2 days ago

        But can anyone train on them? What happens to the original dataset?

        • falseWhite@lemmy.world · 2 days ago

          There are open-weight models that users can download and run locally. Because the weights are open, they can be customised and fine-tuned.

          And then there are fully open-source models that publish everything: the open weights, the training source code, and the full training dataset.
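
          For example, here’s a minimal sketch of downloading and running an open-weight model locally with the Hugging Face transformers library (the library choice and the model id are my assumptions for illustration, not something specified above; any open-weight checkpoint would do):

          ```python
          # Minimal sketch: run an open-weight model locally with Hugging Face transformers.
          # The model id below is just an example of a published open-weight checkpoint.
          from transformers import AutoModelForCausalLM, AutoTokenizer

          model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example open-weight model

          # Download the weights and tokenizer to a local cache, then load them.
          tokenizer = AutoTokenizer.from_pretrained(model_id)
          model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

          # Run generation entirely on local hardware.
          prompt = "Write a Python function that reverses a string."
          inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
          outputs = model.generate(**inputs, max_new_tokens=128)
          print(tokenizer.decode(outputs[0], skip_special_tokens=True))
          ```

          Because the weights live on your own machine, the same checkpoint can then be fine-tuned on a custom dataset without asking anyone’s permission.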