**Edit: should have specified: DockerHub will remove *inactive* images after 6 months.**
>Docker is introducing a container image retention policy which will be enforced starting November 1, 2020. The container image retention policy will apply to the following plans:
>* Free plans will have a 6 month image retention limit
>* Pro and Team plans will have unlimited image retention
9 thoughts on “Docker reduces image retention to 6 months for free plans starting Nov 1, 2020”
Liveness checks on images seem an acceptable tradeoff for storing PBs of data; what is NOT acceptable is the new download limits.
**Unless your build systems use _real_ Docker Hub accounts, you may start seeing pulls fail with `429 Too Many Requests`. This will affect any CD system, and K8s clusters as well.**
> Docker has enabled download rate limits for downloads and pull requests on Docker Hub.
> A Docker image contains multiple layers. Each layer in a pull request represents a download object. For example, when you download the latest Python image from Docker Hub, you’ll be downloading eight layers and indexes. The download rate limit introduced by Docker caps the number of objects that users can download within a specified timeframe. Any downloads beyond this limit will result in the 429 Too Many Requests error message.
> Docker will gradually impose download rate limits with an eventual limit of 300 downloads per six hours for anonymous users.
> Logged in users will not be affected at this time. Therefore, we recommend that you log into Docker Hub as an authenticated user. For more information, see the following section How do I authenticate pull requests.
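A generic way for pull scripts to cope with the `429 Too Many Requests` response (whether or not you also authenticate) is exponential backoff. This is a minimal Python sketch of that pattern, not Docker's own client; `fake_pull` simulates a rate-limited registry endpoint for illustration:

```python
import time

def with_backoff(fetch, max_retries=5, base_delay=1.0):
    """Call fetch(); on a 429 status, wait and retry with
    exponential backoff. fetch returns a (status, body) tuple."""
    delay = base_delay
    for _ in range(max_retries):
        status, body = fetch()
        if status != 429:
            return status, body
        time.sleep(delay)
        delay *= 2  # double the wait between attempts
    return status, body

# Simulated registry endpoint: rate-limited for the first two calls.
calls = {"n": 0}
def fake_pull():
    calls["n"] += 1
    if calls["n"] <= 2:
        return 429, "Too Many Requests"
    return 200, "layer data"

status, body = with_backoff(fake_pull, base_delay=0.01)
print(status, body)  # → 200 layer data
```

In practice, authenticating with `docker login` is the simpler fix, since logged-in users get a per-account rather than per-IP quota; backoff just keeps unattended jobs from failing hard when they do hit the limit.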
There’s already a post discussing this: https://reddit.com/r/docker/comments/i93bui/docker_terms_of_service_change/
Ouch. I use a lot of images on the Hub that I last accessed years ago, because the systems using them haven’t gained new nodes or been reinstalled.
It doesn’t help if I get a Pro account either, since the problem becomes other people’s images that I rely on disappearing. This change doesn’t feel well planned at all.
I guess this is just another good reason to always put all images you depend on in your own registry, but it really sucks for open source, where not every project has the resources for its own registry setup.
Consider that more and more people will start using passive (registry mirror) and active (you push images to your own registry) mirroring of Hub content. With active mirroring, images get pulled and queried on Docker Hub far less often, so even more images will end up pruned that probably shouldn’t be.
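For the passive case, the Docker daemon supports a pull-through mirror via the `registry-mirrors` key in `/etc/docker/daemon.json`; a minimal sketch, where the mirror URL is a placeholder for your own mirror:

```json
{
  "registry-mirrors": ["https://mirror.example.com"]
}
```

After restarting the daemon, pulls of Hub images go through the mirror first, which is exactly the behavior that reduces pull traffic (and thus "activity") seen by Docker Hub itself.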
Doesn’t seem like a big deal if you are keeping the image up to date; surely within 6 months the base OS or your stack will need security updates.
Feels like this is a great way to remove dead weight.
Is this 6 month deletion going to happen to all images, or just the ones that aren’t pulled for 6 months? The blog post doesn’t make this clear, yet it talks about both.
Why should this affect anybody? Sure, you won’t be easily pulling a ready-to-go image, but what is stopping you from using docker-compose and a Dockerfile to build your image on the fly? That’s what I do for images that I do not want to publish.
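That build-instead-of-pull setup can be as small as the following sketch (all names here are placeholders, not from the original post):

```dockerfile
# Dockerfile — build the image locally instead of pulling a prebuilt one
FROM python:3.8-slim
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
```

```yaml
# docker-compose.yml — "build:" tells compose to build from the Dockerfile
services:
  app:
    build: .
```

One honest caveat: the `FROM` base image still comes from a registry, so this solves the problem for your own unpublished images, not for the retention of the base images they depend on.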
My problem is not the retention period. After 6 months it is dead anyway.
I am not sure I get the complaints here – whenever I go to Docker Hub, I am very wary of any image that hasn’t been updated in months.
And this isn’t just about updates, but about pulls, too. If an image hasn’t been PULLED in more than six months, and hasn’t been updated in that time, either, do I really want to run it?