Folks, I have a node.js script running on my Windows machine that uses the dockerode npm package to talk to Docker on that box and start and kill Docker containers.
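
Roughly, the script does something like this (a simplified sketch, not the real code; the image name is a placeholder and the named-pipe path is what dockerode typically uses on Windows, so adjust for your setup):

```js
// Simplified sketch of the script (not the real code).
// On Windows, dockerode talks to the engine over the Docker named pipe.
const Docker = require('dockerode');
const docker = new Docker({ socketPath: '//./pipe/docker_engine' });

async function runAndKill() {
  const container = await docker.createContainer({
    Image: 'alpine:latest',   // placeholder image
    Cmd: ['sleep', '300'],
  });
  await container.start();

  // ... downstream work happens here ...

  await container.kill();
  await container.remove();   // the container is gone, but Docker Desktop keeps the RAM
}

runAndKill().catch(console.error);
```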

However, after the containers have been killed off, Docker still holds on to the memory it had reserved for those containers, and downstream processes fail due to lack of RAM.

To counter this, I have powershell scripts to start docker desktop and to kill docker desktop.

All of this is a horrid experience.

On my Mac, I just use Colima with Portainer and couldn’t be happier.

I’ve explored some options to replace Docker Desktop and it seems Rancher Desktop is a drop-in replacement for Docker Desktop, including the docker remote API.

  1. Is this true? Is Rancher Desktop that good of a drop-in replacement?
  2. Does Rancher Desktop better manage RAM for containers that have been killed off? Or does it do the same thing as Docker Desktop and hold on to the RAM?

Are there other options which I’m not thinking of that might solve my problems? I’ve seen a few alternatives but haven’t tried them yet: moby, containerd, podman.

I don’t actually need the Docker Desktop interface. So pure CLI docker would also just work. How are you all running pure docker on Windows boxes?

  • i_am_not_a_robot@discuss.tchncs.de · 1 year ago

    First, it’s not possible to use “pure docker” on Windows. Docker is for running additional user-mode environments under the same kernel. You can’t run Linux applications under the Windows kernel except through WSL1, and WSL1’s Linux implementation doesn’t support the features Docker requires. Running containers natively is possible in limited cases with Windows Server, but because of differences in how Windows works you almost always end up running a second kernel anyway.

    WSL2 can be used to run Docker, and in fact that’s how Docker Desktop has worked for years: when you start Docker Desktop it starts a WSL2 distribution, and the containers run inside it. Running Docker from the command line only will not improve your containers’ performance.

    Running other virtualization software, especially VirtualBox, to start a separate Linux VM and run your containers in there is going to be more complicated and give worse performance, unless you disable all of Windows’ virtualization-based features, such as WSL2 and security isolation.

    The solution to your memory problem is most likely one of the following:

    1. Don’t disable the pagefile. Windows uses a strict memory model where every virtual memory commitment must be backed by physical memory or the pagefile. Certain software will allocate virtual memory without ever using it, and Windows requires that the sum of the physical memory size and the pagefile size be large enough to cover all of that virtual memory. Disabling the pagefile, or limiting it to a small size because you “have enough RAM”, will cause out-of-memory errors while you still have plenty of RAM available.
    2. Reduce the amount of memory that Docker is allowed to use to a level your Windows software can tolerate. You may need to switch Docker Desktop to Hyper-V mode for this setting to be available, which isn’t possible on Windows Home, and it may reduce compatibility.
    3. After stopping your containers, run `echo 1 > /proc/sys/vm/compact_memory` at a WSL2 prompt, or `wsl -u root -- bash -c 'echo 1 > /proc/sys/vm/compact_memory'` from a Windows prompt. See Memory Reclaim in the Windows Subsystem for Linux 2 for details about what this does. A sketch of calling this from Node is below.
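
    Since your script is already in Node, option 3 is easy to automate. Here’s a rough sketch (assuming `wsl.exe` is on PATH and that the default distribution is the one backing Docker Desktop; both are assumptions about your setup):

    ```js
    // Rough sketch: ask the WSL2 VM to compact memory once the containers are gone.
    // Assumes wsl.exe is on PATH and the default WSL distribution is the one
    // backing Docker Desktop.
    const { execFile } = require('child_process');

    function compactWslMemory() {
      return new Promise((resolve, reject) => {
        execFile(
          'wsl',
          ['-u', 'root', '--', 'bash', '-c', 'echo 1 > /proc/sys/vm/compact_memory'],
          (err, stdout) => (err ? reject(err) : resolve(stdout)),
        );
      });
    }

    // e.g. after container.kill() / container.remove():
    // compactWslMemory().catch(console.error);
    ```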