• skuzz@discuss.tchncs.de · 1 day ago

    Now if only Docker could solve the “hey, I’m caching a layer that I think didn’t change” (Narrator: it did) problem, where even setting the “don’t fucking cache” flag doesn’t always work. So many debug issues come up when devs don’t realize this and they’re like, “but I changed the file, and the change doesn’t work!”

    docker system prune -a and beat that SSD into submission until it dies, alas.
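
    For anyone hitting this: a sketch of the usual cache-busting workaround. The flags are standard Docker CLI, but the `CACHEBUST` arg name is just a convention I'm assuming here, not anything built in:

    ```dockerfile
    FROM ubuntu:24.04

    # CACHEBUST is an arbitrary arg name; passing a new value at build
    # time invalidates the cache for every instruction after this line.
    ARG CACHEBUST=1
    COPY app/ /opt/app/
    RUN /opt/app/setup.sh
    ```

    Build with `docker build --build-arg CACHEBUST=$(date +%s) .` to force the tail of the build to rerun, or use the sledgehammer `docker build --no-cache .`. And `docker builder prune` clears just the build cache, without nuking every image the way `docker system prune -a` does.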

  • Arghblarg@lemmy.ca · 2 days ago

    Call me crusty, an old fart, unwilling to embrace change… but Docker has always felt like a cop-out to me as a dev. Figure out what breaks and fix it so your app is more robust; stop being lazy.

    I pretty much refuse to install any app which only ships as a docker install.

    No need to reply to this, you don’t have to agree and I know the battle has been already lost. I don’t care. Hmmph.

    • mlg@lemmy.world · 1 day ago

      You ever notice how most Docker images are based on Ubuntu, arguably the worst distro to use for dependency management?

      The other core issue is people using docker as a configuration solution with stuff like compose.

      If I want containers, I usually just use LXC.

      Only Docker project I liked was docker-osx, which made spinning up OSX VMs easy, but again it was basically 80% configuration for libvirt.

    • SpaceNoodle@lemmy.world · 2 days ago

      Why put in a little effort when we can just waste a gigabyte of your hard drive instead?

      I have similar feelings about how every website is now a JavaScript application.

      • roofuskit@lemmy.world · 2 days ago

        Yeah, my time is way more valuable than a gigabyte of drive space. In what world is anyone’s not today?

          • archemist@lemmy.dbzer0.com · 1 day ago

            I’ve got you beat: 32 GB eMMC laptop.

            I need every last MB on this thing. It’s kind of nice because I literally cannot have bloat, so I clear out folders before I forget where things went. I only really use it for the internets and to ssh into my servers, but it’s also where I usually make my bootable USB drives, so I’ll need 2-5 GB free for whichever ISO I want to try out. I really detest the idea of downloading to one USB, then dd-ing that to another. I should probably start using ventoy or something, but I guess I’m old-school stubborn.

            I tried using flatpak and docker, but it’s just not gonna happen.

              • WordBox@lemmy.world · 2 days ago

                Don’t you get it? We’ve saved time and added some reliability to the software! Sure, it takes 3-5x the resources it needs and costs everyone else money, but WE saved time and can say it’s reliable. /s

    • Michal@programming.dev · 2 days ago

      Docker is more than a cop-out for that one use case. It’s a way to quickly deploy an app irrespective of the environment, so you can scale and rebuild quickly. It fixes a problem that used to be solved by VMs, and in that way it’s more efficient.
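
      That “deploy anywhere, scale quickly” point in practice, as a minimal sketch (the image name and port are placeholders, not from any real deployment):

      ```yaml
      # compose.yaml - the same file brings the app up on any Docker host
      services:
        web:
          image: registry.example.com/myapp:1.2.3   # placeholder image
          ports:
            - "8080:8080"
          restart: unless-stopped
      ```

      `docker compose up -d` on a laptop, a VM, or a bare-metal server gives the same result, and `docker compose up -d --scale web=3` rebuilds it at triple the replica count without touching the host environment.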

      • pfm@scribe.disroot.org · 2 days ago

        Well, nope. For example, FreeBSD doesn’t support Docker, so I can’t run dockerized software “irrespective of environment”. It has to run on one of the supported platforms, which, unfortunately, I don’t use.

        • mosiacmango@lemm.ee · edited · 2 days ago

          A lack of niche-OS compatibility isn’t much of a downside. Working on 99.9% of all active OSes is excellent coverage for a software suite.

          Besides, FreeBSD has podman support, which is something like 95% compatible with Docker. You basically do have Docker support on FreeBSD, just with more work.

        • Toribor@corndog.social · 2 days ago

          To deploy a docker container to a Windows host you first need to install a Linux virtual machine (via WSL which is using Hyper-V under the hood).

          It’s basically the same process for FreeBSD (minus the optimizations), right?

          Containers still need to match the host OS/architecture; they are just sandboxed and layer in their own dependencies separate from the host.

          But yeah, you can’t run them directly. Same for Windows, except I guess there are actual Windows Docker containers that don’t require WSL, but if people actually use those it’d be news to me.

          • hemko@lemmy.dbzer0.com · 2 days ago

            There’s also this cursed thing called Windows containers

            Now let me go wash my hands, keyboard and my screen after typing that

  • MoonlightFox@lemmy.world · edited · 2 days ago

    There’s another important reason, beyond most of the issues pointed out here, that Docker addresses.

    Security.

    By using containerization, Docker effectively adds another barrier that is incredibly hard to escape: the container boundary around the OS.

    If one server is running multiple Docker containers, a vulnerability in one system does not expose the others. This is a huge security improvement. Now the attacker needs to breach both the application and then break out of a container in order to directly access other parts of the host.
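
    That isolation is only as strong as the privileges you give the container, though. A sketch of the usual hardening knobs in compose syntax (the service and image names are placeholders):

    ```yaml
    services:
      app:
        image: myapp:latest          # placeholder image
        user: "1000:1000"            # don't run as root inside the container
        read_only: true              # immutable root filesystem
        cap_drop: [ALL]              # drop all Linux capabilities
        security_opt:
          - no-new-privileges:true   # block setuid privilege escalation
    ```

    With all capabilities dropped and no root user, breaking out of the container becomes dramatically harder than with the defaults.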

    Also, if the Docker images are big, the dev just needs to select another base image. You can easily have around 100 MB containers now, and with the “distroless” images it’s maybe down to around 30 MB, if I recall correctly. Far from 1 GB.
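
    Those small images typically come from multi-stage builds; a sketch assuming a statically compiled Go app (the paths and module are placeholders):

    ```dockerfile
    # Stage 1: full toolchain, discarded after the build
    FROM golang:1.22 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /out/app .

    # Stage 2: distroless runtime - no shell, no package manager,
    # just the binary on top of a few MB of base layers
    FROM gcr.io/distroless/static-debian12
    COPY --from=build /out/app /app
    ENTRYPOINT ["/app"]
    ```

    The final image carries none of the build toolchain, which is also why distroless images shrink the attack surface as well as the download.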

    Reproducibility is also a huge efficiency booster. “Here, run this command and it will work perfectly on your machine.” And it actually does.

    It also reliably enables self-healing servers, which means businesses don’t actually need people available 24/7.

    The use of containerization is maybe one of the greatest marvels in software development of the last 10+ years.

    • MajorHavoc@programming.dev · edited · 1 day ago

      Oof. I’m anxious that folks are going to get the wrong idea here.

      While OCI does provide security benefits, it is not a part of a healthy security architecture.

      If you see containers advertised on a security architecture diagram, be alarmed.

      If a malicious user gets terminal access inside a container, it is nice that there’s a decent chance that they won’t get further.

      But OCI was not designed to prevent malicious actors from escaping containers.

      It is not safe to assume that a malicious actor inside a container will be unable to break out.

      Don’t get me wrong, your point stands: Security loves it when we use containers.

      I just wish folks would stop treating containers as “load bearing” in their security plans.