Hacker News

I am doing some very straightforward asp.net deployment using git, TeamCity, nunit, and powershell.

If the gauntlet of (unit, integration, acceptance) tests (in the master branch, naturally) all pass, the code is deployed to a staging server. Then some tests run against staging. If those tests all pass, the artifacts for the new version of the site are copied into a new directory (side by side) on the production server. Finally, a PowerShell script tells IIS to serve from the new directory.
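A minimal sketch of that final re-pointing step, assuming IIS 7+ with the WebAdministration module (the site name and paths here are hypothetical, not from the original setup):

```powershell
# Hypothetical sketch: deploy side by side, then re-point IIS.
Import-Module WebAdministration

$version = Get-Date -Format "yyyyMMdd-HHmmss"
$newDir  = "D:\sites\mysite\$version"

# Copy the known-good artifacts into a fresh side-by-side directory
New-Item -ItemType Directory -Path $newDir | Out-Null
Copy-Item -Path "D:\artifacts\latest\*" -Destination $newDir -Recurse

# Switch IIS to serve from the new directory (the zero-downtime step)
Set-ItemProperty "IIS:\Sites\MySite" -Name physicalPath -Value $newDir
```

Because the old directory is left in place, a rollback is just another `Set-ItemProperty` pointing back at it.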

End result? Deployments are zero-downtime non-events that happen multiple times daily. Rollbacks are trivial (and rare). Any not-yet-ready-for-prime-time code can be checked into any other git branch.

Database changes? I have a tiny bit of code, called from Application_OnStart, that checks whether it needs to run any CREATE TABLE or ALTER TABLE statements.
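The underlying idea is idempotent DDL: every statement is guarded so it only runs when it's actually needed. Sketched here as guarded T-SQL driven from PowerShell rather than the author's Application_OnStart code; the server, database, and table names are made up, and Invoke-Sqlcmd assumes the SQL Server PowerShell snap-in is installed:

```powershell
# Hypothetical sketch of the "check before altering" schema approach.
# Each statement is a no-op if the table/column already exists, so the
# script is safe to run on every deployment.
$ddl = @"
IF NOT EXISTS (SELECT * FROM sys.tables WHERE name = 'Orders')
    CREATE TABLE Orders (Id INT IDENTITY PRIMARY KEY, Total MONEY NOT NULL);

IF NOT EXISTS (SELECT * FROM sys.columns
               WHERE object_id = OBJECT_ID('Orders') AND name = 'CreatedAt')
    ALTER TABLE Orders ADD CreatedAt DATETIME NOT NULL DEFAULT GETDATE();
"@
Invoke-Sqlcmd -ServerInstance "dbserver01" -Database "MySiteDb" -Query $ddl
```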

Sure, I had to create all of this stuff myself, but it's all crazy simple, reliable, and does just what the project needs.

Maybe someone could make some product to handle all of this, but the flexibility of linking together the best tools for the job wins for now.

Also, it's just easy.




I think your solution is the mishmash of products (to paraphrase) that the author of the article describes.

It's absolutely possible to do this, and do it well (the company I worked for previously did it for years before finally moving to Capistrano), but the point is that every dev team in every company is spending precious time figuring out how to do this, writing the code, testing it, debugging it, etc. When the developer who wrote it gets hit by a bus, someone has to go through his/her code and learn it so that it can be maintained.

The alternative would be to have a standardized tool (like Capistrano) that is commonly used.


I guess that, after trying to use "this will do everything for everyone" solutions (exhibit A: TFS), I am convinced that the time spent cobbling together a mishmash is less than the time spent fighting the backwards assumptions that the all-in-one solutions require.


That's kind of like saying, "Well, I tried out Visual Basic, but it sucked, so I decided to cobble together my own language." Pointing to a specific poor implementation of an idea does not invalidate the idea itself.


There is a huge difference between writing your own language and writing a four-line PowerShell script.

Really, the only gap I had to fill between the high-quality parts I'm already using (NUnit, TeamCity, etc.) was a tiny script to copy files and reconfigure IIS.


I am with you. I've just written PowerShell scripts that work in the following manner.

1. Make a dated folder and copy the new build to the folder on all servers.
2. Point IIS to the folder on all servers.

Just in case, a backup PowerShell script is kept. It works by simply pointing IIS back at the previous dated folder of the build.
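A rollback along those lines might look like this (the paths and site name are hypothetical; assumes the WebAdministration module):

```powershell
# Hypothetical rollback: point IIS back at the previous dated build folder.
Import-Module WebAdministration

$buildRoot = "D:\builds\mysite"

# Dated folder names (e.g. 2011-06-01_1430) sort chronologically,
# so the second-newest entry is the previous known-good build.
$folders  = Get-ChildItem $buildRoot |
            Where-Object { $_.PSIsContainer } |
            Sort-Object Name
$previous = $folders[-2]

Set-ItemProperty "IIS:\Sites\MySite" -Name physicalPath -Value $previous.FullName
```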

Not sure why this is such an issue. Solutions can get complicated, and I don't think any tool can work properly in all scenarios, especially when downtime is a concern. That's why scripts exist.

Currently my solution is on Amazon, and if there were a way, I'd integrate the load balancer with the scripts so that when a server is being updated, the load balancer doesn't send requests to it. However, IIS is fast enough that this is just part of a wish list.


While it is not straightforward and simple, this is what I'm working towards as well with CC.NET/msdeploy.

For QA builds, I manually trigger a CC.NET job that builds and deploys, but with msdeploy it overwrites what's on the QA server, providing no rollback capability.

For production builds, I trigger a CC.NET project that just builds; then I manually push the bits to the production servers: copying files from the build server, setting up side-by-side versioned folders, and updating the IIS home directory for the web site. I'd like to automate this, but transport is my biggest roadblock.

I'm curious: how does the new version get "deployed" to the staging server, and how is it "copied into a new directory" on the production server (UNC, FTP, ?)?


I solved the transport piece by installing a TeamCity build agent on my production server (which is kind of wonky, but whatever). The only build project it's capable of running is the PowerShell script that copies the artifact files and re-points IIS.

Where does it copy the files from? I set up a TeamCity artifacts dependency, so the "push to production" script doesn't run until the latest known good artifacts are downloaded from the TeamCity server.


That's similar to the setup we've put together at our company.

  * Subversion for source control
  * Jenkins for building/testing/deployment coordination (each project has a Build, Push to Test Server, Integration Test Suite, and Push to Production set of jobs)
  * NSIS to build installers
  * Migrator.NET for database schema migration and rollback
  * PsTools + NAnt + Robocopy + Misc. Utils to deploy to remote servers
Our deployment process (managed by Jenkins) looks like this:

  1. Pull code from SVN
  2. Build the code
  3. Run unit tests
  4. Build the database migrations
  5. Build the installer
  6. Copy the installer to the test server
  7. Use PsExec to run the installer
  8. Copy back the install log to the build server
  9. Build the integration test suite
 10. Run integration tests (usually Selenium + NUnit)
 11. Copy the installer to the production server
Final deployment is done from within Jenkins, too, but is a manual process for now. And by "manual" I just mean that someone has to click the "Run Job" button.
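Steps 6 through 8 above (copy installer, run it remotely, retrieve the log) can be sketched roughly like this; the server names, admin shares, and paths are assumptions for illustration, and /S is NSIS's standard silent-install switch:

```powershell
# Hypothetical sketch of the remote-install steps from the Jenkins job.
$server    = "testserver01"
$installer = "MyApp-Setup.exe"

# 6. Copy the installer to the test server over an admin share
Copy-Item "D:\build\output\$installer" "\\$server\c$\deploy\$installer"

# 7. Run the installer remotely with PsExec (NSIS silent mode)
& psexec "\\$server" -accepteula "C:\deploy\$installer" /S

# 8. Copy the install log back to the build server
Copy-Item "\\$server\c$\deploy\install.log" "D:\build\logs\$server-install.log"
```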

Sure, it's not an "integrated" solution, but I don't really see how I'd have needed less control or granularity if I were building and deploying on Linux and didn't use Ruby.


I don't see how this is "straightforward". You have to have a server that runs TeamCity - which means installing TeamCity and setting up a build config/environment. This is all well and good for most teams - it's needed to keep the team from checking in crap.

But you're using that setup to deploy your code - this is the missing piece to me. Why would you push from your Build Server? It has nothing to do with deployment (in concept).

Moreover you say that "Rollbacks are trivial" - which is hardly the reality for most people. How do you rollback a push from your BuildServer? Manually?

How do you alter your DB Schema? What if your deploy screws up your DB Schema? I guess what I'm asking for is a bit more detail here - you're sort of waving off everything I posted about (and experienced over the last 12 years)...


"Why would you push from your Build Server? It has nothing to do with deployment (in concept)."

Your build server has everything to do with deployment. Your build server should be the only place production builds ever come from. What you seem to be suggesting is replicating a build server for deployment. To me that seems like a violation of SoC and DRY -- but applied to infrastructure rather than code.

Once you break out staging from production, which you should do anyway, I don't think there's any issue left from the things you listed. And the additional benefit is that I don't have to learn another framework like Capistrano.

With that said, I do think deploying with ASP.NET is a pain, but not for the reasons you mentioned: it's because getting the configurations working in all the right places is painful, IMO.


I think we'll differ here - but it's a nuance really. In .NET your BuildServer has everything to do with deployment by necessity - that's not the case in other frameworks.

It does work, it is convenient, but that's not a Build Server's function in concept.

Which things RE deployment did I leave out? I was considering the config as part of the coding...


Surely with any compiled language the "build" (or build server) has everything to do with deployment? You have to build it somewhere, either on a build server, on a server elsewhere (AppHarbor style), or on your local machine.

You could cut the build machine out by building locally and pushing the binaries out via git, but that's just really an implementation detail.


I would have agreed with Rob, but I have to admit you make a good point.

Rob is nonetheless correct that deployment in ASP.NET is poor. I think you explained one of the reasons why: static languages. It seems like you've convinced yourself that something is convenient because you feel there's really no other solution. I think you're right and wrong... You are right because it isn't going to change, so deal with it... You are wrong because there are alternatives, which have changed.


I've done projects where I'd wing it out from my box in lieu of having a build server. Usually works fine, but I've shot myself in the foot enough to know that if you can spare the time and resources to get a build server up and running to be an independent voice in the source control chain, it tends to be worth it.

I do tend to agree that the focus of deployment should be interaction with the source control repository and doesn't require a build server; I just like the added protection of the build server being the one to do that interaction when it comes to releasing code.


I am not sure what you mean by "have to run TeamCity". This is 2011. If you are not running some CI server, you are doing it wrong. Full stop.


"Why would you push from your Build Server?"

Why wouldn't you?

Seriously. The role of what we call "build servers" has been dramatically expanding. When I started in this business, nightly automated builds were the state-of-the art. Then we went to automated build systems that would build whenever new code was checked in. Then the build systems would run tests as part of the build. Then the build systems would spit out reports of automated test coverage so you could know if you were missing something you thought you had covered.

Also, I wouldn't ever deploy production code that wasn't built by the build system. So I'm having trouble with the "nothing to do with deployment" argument. It has everything to do with deployment.

I don't consider my TeamCity install a "build system"; I consider "building the code" a subset of the overall Continuous Deployment system that's powered by TeamCity (which is amazing, cross-platform, and free for small teams).

The end point of all of this is that instead of releasing every few months or weeks, I can release every few hours or minutes. Basically, every single change that isn't demonstrably broken gets released without us even thinking about it. It's transformative.

What's the safest change you can make to a stable production system? The smallest change possible.

When do your customers want to get their hands on a bug fix or a new feature? Right fucking now.

If something does go wrong, how many changesets do you want to go through to try to find the problem? As few as possible. One is ideal.

How many new features and fixes do you want to have to roll back if something goes wrong? As few as possible. One is ideal.

To answer your specific questions:

Rollbacks are done by just re-pointing IIS manually. I've only had to do it once. Alternatively, I could have backed out the change and pushed that through the continuous deployment system.

As far as DB schema changes go... Again, the safest change to make is the smallest change possible. Generally, I make DB changes that are backwards compatible (adding new tables, adding columns to existing tables), so if I need to roll back the code, the old version still works. If you're tightly coupled to a lot of logic in stored procedures (I'm not), you're going to need to make their changes backwards compatible (generally by using default parameters) or embed a version number in the procedure name (e.g. "InsertEntityV2", etc.) to let the two versions of the proc live side by side.

If I need to make some sort of breaking schema change, I would have to do a scheduled downtime and handle that particular deployment mostly manually. As far as I can tell, there's no real easy way around that problem (IMVU also does their schema-breaking changes outside of their continuous deployment scenario).


If you don't mind - let's shift the context :). If you didn't have to use a Build Server to deploy with (in other words, if you had a competent tool that could deploy, run tasks, etc.) - I would assume you'd use that tool.

I do understand it's what you (and I, and others) need to use - but that's because there really isn't another choice.

Thus my post.


Absolutely. And it was a good post.

If I didn't require a "build" step, I would still want some sort of solution that could do things like pay attention to checkins, run automated tests, deploy to staging, run tests against staging, deploy to production, run tests against production, and let me know if/when anything doesn't work as expected.

Based on the limited set of tools I have experience with, I probably would still use TeamCity, because it's great at all of the non-building tasks it needs to do as well. But I'll check out Capistrano based on your post.



