I have been a key player in large automated deployment efforts at two sizable organizations now. One used Ant with a Java code base, the other used Visual Build with a VB code base. Both implementations deployed multiple dependent projects onto a variety of server types across development, testing, staging, and production environments. With the exception of prod, each environment had more than one instance running.
Some of my earlier musings on automated builds and deploys can be found by clicking here.
One would think that an automated deployment would be deterministic. In other words, given the logic in the deployment file(s), it should deploy the same way every time. Surprisingly, we have found this is not always true. Since many of these deploys are pushed to remote boxes, hiccups in the network end up throwing a proverbial wrench into things. And (again surprisingly) these occur more often than I would've thought; we actually blamed them on increased solar activity for a while. I have no real solution for these network hiccups, except to say that if you see your deployments failing consistently at a certain time of day, schedule them for another time. Our Sunday evening deploys lately have been failing every time, yet when we kick them off Monday morning (with no changes to the deployment logic) everything is fine. We suspect there's a weekly batch job or two running during our Sunday deploy that bogs the network down.
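That said, if your deploy tool lets you script around its remote steps, one thing worth sketching (purely hypothetical here, not something we've actually put in place) is retrying the remote copy a couple of times before declaring the deploy dead. The scp command, host, and target path below are stand-ins for whatever transfer mechanism your deploy really uses:

```python
import subprocess
import time

def push_with_retry(artifact, host, attempts=3, delay_seconds=60):
    """Copy an artifact to a remote host, retrying on transient network failures.

    'scp' and the target directory are placeholders for the real transfer step.
    """
    for attempt in range(1, attempts + 1):
        result = subprocess.run(
            ["scp", artifact, f"{host}:/opt/deploy/incoming/"],
            capture_output=True,
            text=True,
        )
        if result.returncode == 0:
            return  # transfer succeeded
        print(f"Attempt {attempt} to {host} failed: {result.stderr.strip()}")
        if attempt < attempts:
            time.sleep(delay_seconds)  # wait out the hiccup before trying again
    raise RuntimeError(f"Could not push {artifact} to {host} after {attempts} attempts")
```

A retry won't save a deploy window that collides with a network-saturating batch job, but it does keep a single dropped connection from failing the whole run.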
I've also seen automated deploys act inconsistently (only on Windows) when registering DLLs in the assembly cache. We can deploy and GAC things onto our bare metal VM servers with no problem. Yet when we deploy the same software onto a legacy hardware server where the DLLs are already GAC'ed (our deployment logic un-GAC'ing and re-GAC'ing them), they sometimes fail to register. I've wondered if perhaps the deployment moves through all the logic too fast; the command is definitely correct. Sometimes we'll even see the DLLs in the assembly folder in the GUI, but the application can't see them. Manually registering them from the command line fixes the problem, but we shouldn't have to do that.
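If I were going to script around that symptom, I'd probably put an explicit verification step between the un-GAC and the next piece of deploy logic rather than trusting that the install has fully taken effect. A rough sketch of the idea, assuming gacutil is on the path; the assembly name and DLL path are made up:

```python
import subprocess
import time

def regac(assembly_name, dll_path, verify_attempts=5):
    """Un-register and re-register a DLL in the GAC, then verify it actually shows up."""
    # Remove the old copy; don't fail if it was never registered in the first place.
    subprocess.run(["gacutil", "/u", assembly_name], check=False)

    # Install the new copy.
    subprocess.run(["gacutil", "/i", dll_path], check=True)

    # Confirm the assembly is really listed before the deploy moves on.
    for _ in range(verify_attempts):
        listing = subprocess.run(
            ["gacutil", "/l", assembly_name], capture_output=True, text=True
        )
        if "Number of items = 1" in listing.stdout:
            return
        time.sleep(5)  # give the cache a moment before checking again
    raise RuntimeError(f"{assembly_name} never appeared in the GAC after install")

# Hypothetical usage:
# regac("MyCompany.Billing", r"C:\deploy\MyCompany.Billing.dll")
```

This doesn't explain why the legacy boxes misbehave, but at least the deploy would fail loudly at the right step instead of leaving an application that can't find its assemblies.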
Something else to consider when implementing automated deploys - do you want to deploy everything from scratch (bare metal deploy) or do you want to deploy onto an already working image or server (overlay deploy)? I have tossed this question around a number of times. I think the correct answer for you depends on how you answer the following questions:
Are you thinking about deploying to a system that's already running in production? Are all the configurations that make that production system work documented? Are you confident that you could rebuild the production server and get it running without any major problems? If you answer 'yes' to all of these questions, then you could probably save some time and implement overlay automated deploys. If you are starting work on a greenfield (new) application, or you aren't confident that you could rebuild your production server, then you should probably consider bare metal deploys. Bare metal deploys done properly essentially become self documenting DRPs (disaster recovery plans).
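To illustrate that last point: even a skeletal bare metal deploy script ends up enumerating every step needed to stand the server up from nothing, which is exactly what a disaster recovery plan has to capture. The step names below are invented for illustration, not our actual deploy:

```python
# Skeleton of a bare metal deploy: each function is one documented rebuild step.

def provision_os():
    """Install OS packages, service accounts, and the directory layout."""

def configure_middleware():
    """Install and configure the app server, connection pools, and certificates."""

def deploy_application(version):
    """Push the application artifacts for the given version."""

def apply_environment_config(environment):
    """Lay down environment-specific config (dev, test, staging, prod)."""

def smoke_test():
    """Hit a few known URLs/services to confirm the box is actually serving."""

def bare_metal_deploy(version, environment):
    # Reading this top to bottom is, in effect, the rebuild runbook.
    provision_os()
    configure_middleware()
    deploy_application(version)
    apply_environment_config(environment)
    smoke_test()
```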