One of the great advantages we get from Docker is the principle of immutability. In the past, it was very common for developers to write their code in a development environment with certain characteristics, promote it to pre-production, which was not exactly the same as development, and finally reach production, which had yet another configuration. This whole process meant that developers had to 'tweak' their code at each phase.
With the arrival of containers this was solved, since the image that makes up the microservice is the same from beginning to end. Developers download an image with certain characteristics and run it in a development environment if they have one, or otherwise on their own machine. When the work is ready, they push the image to pre-production, where the tests are carried out and it is verified that everything works, and from there it goes on to production. All very simple, and the magic is that from start to finish the development has lived in the same environment. This is the principle of immutability.
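This promotion flow can be sketched with the Docker CLI. It is only an illustration: the registry name, image name, and tag below are hypothetical. The key idea is to build once and then move the exact same image, identified by its immutable content digest, through each environment.

```shell
# Build the image once, in development
docker build -t registry.example.com/myapp:1.4.0 .

# Push it to the (hypothetical) company registry
docker push registry.example.com/myapp:1.4.0

# Record the content digest; this identifies the image immutably
docker inspect --format='{{index .RepoDigests 0}}' \
    registry.example.com/myapp:1.4.0

# Pre-production and production pull by digest, so they run
# byte-for-byte the same image that was tested earlier
docker pull registry.example.com/myapp@sha256:<digest>
docker run -d registry.example.com/myapp@sha256:<digest>
```

Pulling by digest rather than by tag guarantees that no environment can silently receive a different image under the same name, which is exactly what immutability promises.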
If we update an image directly in production (which would be frankly kamikaze) or in pre-production, we run into the same old problems: the code may have to be reworked, and if the image the developer started from was not secure in the first place, the problem is inherited down the line.
So what I am trying to convey is that, in an ideal environment, companies should have a repository of approved and certified images, with an image-update life cycle that allows them to reach production with an optimal level of security. This would mean that every time a piece of code or a service is updated, a new, already-patched image would arrive in production, since developers would always be obliged to build the new version on top of the latest image in the repository. Developments tend to be alive and to improve and grow over time, which is why this life cycle is proposed.
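In practice, that life cycle boils down to always basing builds on the latest certified image. A minimal Dockerfile sketch, assuming a hypothetical internal registry and a certified base image maintained by the company:

```dockerfile
# Always build from the company's certified, patched base image
# (the registry and image names here are hypothetical)
FROM registry.example.com/certified/python:3.12

# The application is layered on top; the base is never modified in place
WORKDIR /app
COPY . .
RUN pip install --no-cache-dir -r requirements.txt

# Run as an unprivileged user, in line with the security goal
USER nobody
CMD ["python", "app.py"]
```

When security fixes land in the certified base, rebuilding against it produces a new, already-patched application image; running containers are replaced with it, never patched in place.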
That is the ideal world, which will hopefully soon become less ideal and more real.