In this post, I will give you the recipe we at [Globality](https://www.globality.com/en-us/) use to keep dependencies fresh across 45+ microservices.

### Our Services

We have 45+ internal microservices, 99% of them written in [Python](https://hackernoon.com/tagged/python). We have an internal framework called `microcosm`, which allows for fast convention-over-configuration wiring of components and services.

You can check out all of the microcosm-related projects [on GitHub](https://github.com/globality-corp/?utf8=%E2%9C%93&q=microcosm&type=&language=).

### The problem

If you have worked on a medium-sized or larger project, whether monolithic or microservice-based, you know that over time, dependencies go stale.

You stop upgrading versions of your dependencies because the process is too complicated and too error-prone.

### The solution

In the diagram above, you can see all of the phases **each project** goes through during the branching cycle.

The process is 100% automated and driven by CI and internal scripts.

So, let's go through each of the phases.

### Phase 1

The `develop` branch is built on the CI after every merge of a feature branch.

In this phase, we unlock all of the dependencies, essentially putting just `.` in the `requirements.txt` file. This forces the build to use all fresh dependencies, upgrading all the minor/major versions of services.

In our `setup.py` we use `>=`.
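As a sketch of that convention, here is what such a `setup.py` looks like; the package names and version floors below are illustrative, not our actual dependency list:

```python
# Illustrative setup.py: only *minimum* versions are declared with ">=",
# so a fresh install is free to resolve each dependency to its latest release.
from setuptools import find_packages, setup

setup(
    name="example-service",          # hypothetical service name
    version="2017.1.0",
    packages=find_packages(),
    install_requires=[
        "microcosm>=2.0.0",          # internal framework, minimum version only
        "requests>=2.18.0",
        "cryptography>=2.0",
    ],
)
```

With `requirements.txt` containing only `.`, `pip install -r requirements.txt` installs the package itself and lets pip pull the newest release of every dependency that satisfies those floors.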
This means we always declare only a **minimum** version.

#### What do we find?

In this phase, we normally find a dependency that is completely broken: it can't install, it crashes, etc.

We also usually find that one of our services is broken, which is then investigated.

Normally, if we need to make a code change, it's during this phase, and it's usually minimal since it's done incrementally.

### Phase 2

With the `develop` branch as the base, we check out a `release/2017.xx.yy` branch.

During this time, we unlock all of the dependencies (same as phase one).

Once everything is unlocked and all dependencies are installed, we `freeze` the dependencies into a `requirements.txt` file, and we **commit it** back to the repository.

Here's an example from one of our projects. (I removed internal library names for sanity.)

![Frozen requirements.txt from one of our projects](https://hackernoon.com/hn-images/1*gdLsYdNEHxqu23BNg7swLA.png)

Since we use [docker](https://hackernoon.com/tagged/docker) containers (with a custom in-house layering solution), we ensure that what we test is the same version that goes forward to staging (and eventually production).

Once dependencies are locked, they are not reopened on that branch.

### Phase 3

During phase 3, we tag the `release/2017.xx.yy` branch with its own tag. That tag is automatically deployed to staging by the CI.

During this process we only verify that pip is intact; we don't install dependencies.

If all the requirements are not met (meaning we would need to install something), the build fails and alerts the engineers.

### Phase 4

During phase 4, we merge the tag back into `develop` and the process continues.

### Keep it fresh

Keeping your dependencies fresh makes sure you are on top of security fixes for holes in any of your dependencies.

It makes sure that all of your projects are using the latest versions of your internal libraries.

Automating this process like we did takes the stress out of it.
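For example, the Phase 2 lock step reduces to a small script the CI can run; this is a hypothetical sketch, not our actual tooling, and it assumes `pip` and `git` are available on the build machine:

```python
# Hypothetical sketch of the Phase 2 lock step: freeze the resolved versions
# into requirements.txt and commit the file back to the release branch.
import subprocess
import sys


def frozen_requirements():
    """Return the output of `pip freeze`: the currently installed, pinned versions."""
    result = subprocess.run(
        [sys.executable, "-m", "pip", "freeze"],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout


def lock_dependencies(path="requirements.txt"):
    """Write the pinned versions to `path` and commit it to the repository."""
    with open(path, "w") as f:
        f.write(frozen_requirements())
    subprocess.run(["git", "add", path], check=True)
    subprocess.run(
        ["git", "commit", "-m", "Lock dependencies for release"], check=True
    )
```

Because the committed `requirements.txt` pins exact versions, every later build of that release branch installs the same dependency set that was tested.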
As an engineer, you don't need to think about it; everything is automated on the CI.

From the QA perspective, you know that if something works in one environment, it will work in the others. If something is broken, it's not in the underlying infrastructure; it's in the application code. You don't need to worry that `cryptography` got upgraded under your feet.

### Rock on!