- It's a great, fun read; however, it skims over the difficulties and trials of actually automating the technical systems and the software development process - where the rubber really hits the road. It practically romanticizes the idea of automation in IT a bit - as if DevOps were the goose that will lay your golden egg. In my experience, there's significantly more work involved in getting that golden egg.
- It also glosses over how to get the Security team on board with what DevOps wants to do. In many companies, the Security team holds the trump card: if they decide to change all your certs from 128-bit to 2048-bit keys (don't laugh - I've seen it happen, and we had to regenerate certs for every applicable server in every environment), their wish is your command unless you can convince someone influential that that level of encryption is overkill in a non-prod environment.
Finding and Cracking That Golden Egg
Hurdles I've encountered en route to the DevOps 'golden egg' are:
- Siloed Application Projects.
- Lack of consistent naming conventions for deployment artifacts and build tags/versions. Inconsistent naming is hugely detrimental to the automation process.
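One cheap way to enforce a convention is to validate tags in the pipeline before anything gets deployed. A minimal sketch, assuming a hypothetical `<app>-<major>.<minor>.<patch>-b<build>` convention (the pattern and names are illustrative, not from any specific tool):

```python
import re

# Hypothetical convention: <app>-<major>.<minor>.<patch>-b<build>,
# e.g. billing-2.4.1-b137. Adjust the pattern to your own standard.
TAG_PATTERN = re.compile(r"^[a-z][a-z0-9]*-\d+\.\d+\.\d+-b\d+$")

def is_valid_tag(tag: str) -> bool:
    """Return True if the build tag matches the team-wide convention."""
    return bool(TAG_PATTERN.match(tag))
```

A guard like this can run as the first step of a deployment job, failing fast on artifacts that would otherwise break downstream automation.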
- Lack of understanding of the dependencies between application projects (API contracts or library dependencies). If you don't understand how your applications depend on each other, they may not compile correctly, or they may not communicate properly with one another.
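Once the dependencies are written down, a correct build order falls out of a topological sort. A sketch with a hypothetical three-project graph (project names are made up for illustration):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical project graph: each key lists the projects it depends on,
# i.e. the ones that must be built (and deployed) before it.
deps = {
    "web-ui": {"billing-api", "auth-lib"},
    "billing-api": {"auth-lib"},
    "auth-lib": set(),
}

# static_order() yields dependencies before their dependents.
build_order = list(TopologicalSorter(deps).static_order())
```

The same sorter raises a `CycleError` if two projects depend on each other, which is itself a useful smoke test for a tangled codebase.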
- Lack of consistent development methodology and culture between teams. If you have one team that doesn't get behind the new culture, and they insist on manually deploying and 'tweaking' their code/artifacts after deployment, they risk the entire release. Getting everyone on board with culture is challenging.
- Overusing a tool. When you have a hammer (a good DevOps tool), everything looks like a nail. The DevOps tools out there each have their niche. You could potentially use Chef, Puppet, or Ansible for all your provisioning and automation - but is that the best solution? I'm inclined to say no. How much hacking and tweaking do you have to do to make all your automation work with that one tool? Use the tools for what they are best at - what they were originally made for. Many of these tools have open-source licenses, and new functionality is being added to them all the time. While that new functionality might seem to be turning your favourite tool into the 'one tool to rule them all', you may end up with a big headache getting there, pushing a square peg into a round hole.
- Lack of version control. Everything must be stored in a repository. The stuff that isn't will bite you. VM Templates, DB baselines, your automation config - it all needs to go in there.
- DB Management. All automated DB scripts should be re-runnable and stored in a repo.
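'Re-runnable' here means idempotent: running the script a second time must be harmless. A minimal sketch using stdlib `sqlite3` and guard clauses (`IF NOT EXISTS`, `INSERT OR IGNORE`); the table and data are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Re-runnable migration: every statement guards against already existing.
MIGRATION = """
CREATE TABLE IF NOT EXISTS customers (
    id INTEGER PRIMARY KEY,
    email TEXT UNIQUE
);
INSERT OR IGNORE INTO customers (id, email) VALUES (1, 'seed@example.com');
"""

for _ in range(2):  # applying the script twice must not fail or duplicate data
    conn.executescript(MIGRATION)

count = conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
```

The exact guard syntax varies by database engine, but the principle is the same: a deploy that re-applies every script from the repo should always converge to the same schema.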
- Security. How do you satisfy the Security team? How do you automate certificate generation when they require you to use signed certs? How do you manage all those passwords in your configuration?
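For the password question, one common baseline is to keep secrets out of the repo entirely and inject them at runtime. A sketch, assuming a hypothetical `DB_PASSWORD` variable supplied by the CI system or a vault (the variable name is an example, not a standard):

```python
import os

def db_password() -> str:
    """Read the password from the environment instead of a checked-in file.

    DB_PASSWORD is a hypothetical variable name; in practice it would be
    injected by a secrets manager or the CI system, never committed.
    """
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError("DB_PASSWORD not set - refusing to fall back to a default")
    return password
```

Failing loudly when the secret is missing is deliberate: a silent default is exactly the kind of 'tweak' that makes the Security team distrust automation.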
- Edge Cases. Any significantly sized enterprise is going to have/require environments that land outside your standard cookie-cutter automation. Causes I've seen for this are:
- Billing cycles - we needed a block of environments whose clocks could be moved forward so QA didn't have to wait 30 days to test the next set of bills and invoices.
- Stubbing vs. Real Integration - Depending on the complexity of integration testing required, there may be many variations of how your integration is set up across environments - which end-points are mocked and which point to a 'real' service.
- New Changes/Requirements - Perhaps new functionality requires new servers or services. This can make your support environments look different from your development environments.
- Licensing issues. When everything is automated, it can be easy to lose track of how many environments you have 'active' versus how many you are licensed for. License compliance can be a huge issue with automation - check out this interesting post: 'Running Java on Docker? You're Breaking the Law!'
- Downstream Dependencies. This is where the Ops of DevOps comes into play. Any downstream dependencies your automated system has need to be monitored and understood. You can't meet your SLA with your client if your downstream dependencies can't meet that same SLA. Important systems to consider here are LDAP, DNS, the network, your ISP, and other integration points.
- YAML files. Yes, they are more terse than storing your deployment config in XML. However, I'm at a bit of a loss to see how they are better than a well-named and well-formatted CSV file. Sure, you can 'see' and manage a hierarchy with them, but you can do the same in a CSV file with the proper taxonomy. YAML files take more processing power to parse and contain extraneous lines because they're trying to manage (and give a visual representation of) the hierarchy of the properties. I've seen YAML files where these extra, value-less lines account for a significant percentage of the total lines in the file, making it harder to maintain and more prone to fat-fingered edits. Several major DevOps tools use these files, and I really can't see a good reason why, except that they were the 'new, cool thing to do.'
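To make the CSV-with-taxonomy idea concrete, here is a minimal sketch (stdlib only, example config invented) that flattens a nested hierarchy into dotted-key rows - the same structure a YAML file would express with indentation:

```python
import csv
import io

# Nested deployment config, as it might appear in a YAML file.
config = {"app": {"name": "billing", "replicas": 3}, "db": {"host": "db01"}}

def flatten(node, prefix=""):
    """Collapse a nested dict into (dotted.key, value) rows."""
    for key, value in node.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            yield from flatten(value, path)
        else:
            yield path, value

buf = io.StringIO()
csv.writer(buf).writerows(flatten(config))
rows = buf.getvalue().strip().splitlines()
```

Each row carries its full path (`app.name`, `db.host`, ...), so the hierarchy survives without a single structural blank line - and round-tripping back to a nested dict is just a matter of splitting on the dots.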