Friday, March 27, 2009

IIS and Nagios

Working with IIS on a particular app, we had a situation where the behavior of the web server seemed inconsistent with how we had configured it. We had set it to allow authenticated access via Integrated Windows authentication only, with no anonymous access. When we browsed to the page on the web server itself, it came up fine. But when we browsed to the same page from another box, we were prompted for credentials. That was unexpected, since Integrated Windows authentication should have logged us in transparently.
After comparing it with another server that worked, and seeing that the GUI configuration looked nearly identical, I started googling and found this article, which answered the problem for the most part. The only difference we found was the order of the NTAuthenticationProviders value - and the order matters: the working server had "NTLM,Negotiate"; the server that didn't work was set up as "Negotiate,NTLM". Changing the broken server to match the working one fixed our problem.
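For reference, that metabase property can be checked and changed from the command line with adsutil.vbs - something along these lines (the site number 1 below is just an example; use your own site's identifier, or set it at a different level if that's where your config lives):

cscript c:\inetpub\adminscripts\adsutil.vbs GET w3svc/1/root/NTAuthenticationProviders
cscript c:\inetpub\adminscripts\adsutil.vbs SET w3svc/1/root/NTAuthenticationProviders "NTLM,Negotiate"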

Our infrastructure team has been making some changes and moved the configurations for the email servers around a bit. Unfortunately, this wasn't transparent to my Nagios/Groundwork Open Source installation (since I was lazy and hadn't reconfigured the email 'from' address for the host). What ended up happening was that all the Nagios emails got sent outside the internal network (because the domain name was still 'localhost.localdomain') and were caught by the public anti-spam appliance we have. I ended up switching notifications to use 'service-notify-by-sendemail' instead of 'service-notify-by-email' and then overriding the host in the command to point to our local email server. And that worked. Similar to this post on the groundwork forum.
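The command definition ended up looking roughly like this - the sendEmail path, mail server, and from address below are placeholders for your own install, and the $...$ tokens are standard Nagios macros:

define command{
        command_name    service-notify-by-sendemail
        command_line    /usr/local/groundwork/common/bin/sendEmail -s mail.internal.example.com -f nagios@example.com -t $CONTACTEMAIL$ -u "$NOTIFICATIONTYPE$: $HOSTNAME$/$SERVICEDESC$ is $SERVICESTATE$" -m "$SERVICEOUTPUT$"
        }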

Wednesday, March 18, 2009

Automatically deploying IIS web apps

We've been working on automated builds and deploys at work. All of our web apps run on IIS and configuring 'good' automated deploys for these applications has been challenging. But I think we're seeing the light at the end of the tunnel now.

Microsoft provides a good number of support scripts (in VBScript) for configuring IIS, app pools, and virtual directories from the command line. Some of them come with the IIS installation; others, if I remember correctly, we had to download.

There are two 'sets' of these command-line vbs scripts that I'm aware of that work with IIS. One is found in the C:\inetpub\AdminScripts folder - there's a bunch of vbs scripts in there. The other set is found at c:\windows\system32\iis*.vbs

Here's some examples of what you can do with these scripts:
  • create a virtual directory: cscript c:\windows\system32\iisvdir.vbs /create webSiteName virDirName virDirPath /s serverName
  • set perms on a virtual directory: cscript c:\inetpub\adminscripts\adsutil.vbs SET w3svc/webSiteNumber/Root/virDirName/AccessFlags permNum (513, for example, is Read + Script)
  • set the ASP.NET version for the virtual directory: c:\windows\Microsoft.net\framework\v2.0.50727\aspnet_regiis -s w3svc/webSiteNumber/root/virDirName
  • and tons more...
You can view all the web site numbers in IIS Manager - they show up in the Identifier column of the web sites list.
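Strung together, a minimal deploy step ends up looking something like the batch sketch below. The site name, site number, and paths are made up for illustration; our real build passes these in as variables:

rem create the virtual directory, open up read/script access, and pin the ASP.NET version
cscript c:\windows\system32\iisvdir.vbs /create "Default Web Site" MyApp D:\wwwroot\MyApp /s myServer
cscript c:\inetpub\adminscripts\adsutil.vbs SET w3svc/1/Root/MyApp/AccessFlags 513
c:\windows\Microsoft.net\framework\v2.0.50727\aspnet_regiis -s w3svc/1/root/MyApp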

Monday, March 16, 2009

IT Security

Over the past few years I've seen sensitive information exposed in some very interesting places on enterprise networks and servers. Sometimes this leftover information can be super helpful if you're trying to debug problems or get an idea of what happened on the box in the past. In other cases, it's just plain bad. Here's some of what I've seen:
  • Shared drives mapped all over the enterprise. Shared drives mapped on production boxes with access to files that contain sensitive info like passwords for production users.
  • Kickstart configuration files with usernames and passwords for domain users in clear text, forgotten on servers
  • Passwords and sensitive information exposed in .bash_history files. .bash_history files are a treasure trove of information. They'll show you all kinds of things - where the db server is located, what the connection string is, where http servers are installed, how to shut them down and start them up...etc.
  • *.udl files - Microsoft specific. They store connection information in clear text for db servers. Don't leave them lying around and exposed.
  • Installations for UPS (Uninterruptible Power Supply) systems left with their default administrator username and password. I happened to find a login page for a UPS console one day and logged in on the first try using the first password I could think of. The dashboard I subsequently found myself on gave me the power to shut down the entire enterprise.
Here are some simple ways to make your network/enterprise more secure:
  • Don't allow a plethora of undocumented mapped drives.
  • Do searches for text like 'password' on any boxes, drives, etc. that you might be concerned about (a one-liner for this follows this list). If you get results, take steps to either encrypt or delete those files or references.
  • Change default installation passwords
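On a Linux box that search can be as simple as something like the following (the directories are just examples); on Windows, findstr /s /i password *.* run from the root of a drive does much the same job:

grep -ril "password" /home /etc /opt 2>/dev/null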

Sunday, March 15, 2009

CMS experiences

In the past year or so I have gotten a bit of experience with different CMS systems. I was given two clients (www.mross.com, www.auma.ca) that run on InfoGlue - a Java/Velocity-based CMS. I am also messing around with Joomla, and I have one client (www.brooks.ca) that runs on it. I have yet to do anything with Drupal, but from what I've heard, it sounds more like InfoGlue than Joomla - that is, it's geared more towards an enterprise/portal-centric CMS than Joomla appears to be.

InfoGlue is very configurable and supports internationalization. This makes it somewhat cumbersome to configure when starting out. I've found that the important files to know about when doing a configuration on the fly are:
- WEB-INF/classes/cms.properties
- WEB-INF/classes/hibernate.properties
- conf/localhost/live.properties, etc
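The hibernate.properties file, at least, uses the standard Hibernate connection settings, so pointing an instance at a different database comes down to a handful of lines roughly like these (the driver, URL, and credentials below are placeholders):

hibernate.connection.driver_class=com.mysql.jdbc.Driver
hibernate.connection.url=jdbc:mysql://localhost:3306/infoglue
hibernate.connection.username=infoglue
hibernate.connection.password=changeMe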

Some issues that I've noticed with the InfoGlue instances that I work on are:
- on logging into the CMS, I have to refresh the page 3 times before I actually get the GUI. Only one of my instances does this, so I think it's a configuration thing.
- sometimes Tomcat seems to get its knickers in a knot and needs to be restarted before content changes will save. This happens periodically.
- deleting content (cleaning things up) can be a huge pain - especially if there are a lot of references to the content in other parts of the site. You end up having to delete the references first before you can delete the content.

So far I've been pretty impressed with Joomla. I was able to figure out how things were put together fairly quickly. I bought this book which helped some - it has some good chapters on SEO and working with Joomla templates. I was able to manage a complex upgrade to the site I maintain within a week of getting bootstrapped on the Joomla CMS. This impressed me.

Tuesday, March 10, 2009

Visual Build

I've been working with Visual Build in a .NET environment off and on for almost a year. When we decided to go forward with Visual Build, I thought it would be a more painful process than it turned out to be. Our technology stack includes ClearCase, Subversion, MS SQL, PsExec, VMware, VB.NET, IIS, CruiseControl, NAnt, NUnit, and a bunch of 3rd-party tools and servers related to document management.
Visual Build provides a bunch of example build files that are quite helpful for getting all kinds of different functionality going. We automatically compile, unit test, deploy (to remote boxes), and run verification tests using Visual Build. There are numerous other smaller tasks that we've got Visual Build managing for us, like setting perms on remote boxes; stopping and starting servers, services, and COM+ objects; and performing baselines, checkins/checkouts, and updates of ClearCase streams and views.
Some of the gotchas we've discovered in our work with Visual Build:
- managing windows (child build files that get spawned in a complex build process) and logs is not trivial. If they aren't managed correctly, your build will stop without much indication as to why. For logging, we found that piping the output from the build to a separate file worked for us. For windows, we found that matching the build context, waiting for completion, running the GUI app in silent mode, and not closing the GUI app on failure were what made things tick.
- psexec needs to be on the path on the box you're running Visual Build on.
- CruiseControl integration was relatively easy - we just call the VisBuildCmd command line with options:
<tasks>
  <exec>
    <executable>C:\PathToVisualBuildInstallation\VisBuildCmd.exe</executable>
    <baseDirectory>D:\pathToBuildFile</baseDirectory>
    <buildArgs>/b "XXX.bld" -other args to pass to buildfile go here</buildArgs>
    <buildTimeoutSeconds>1000</buildTimeoutSeconds>
  </exec>
</tasks>

Friday, March 6, 2009

Replatforming an app in 21 hours....

I got a cold call a couple of weeks ago. A prominent institution in Calgary had a Java application that ran on a Sun OS box with an iPlanet web server, was key to their business, and didn't work with a critical upgrade from a 3rd-party vendor. It needed to be upgraded and running in production in 2 weeks. Could I help?
I agreed to help and showed up that afternoon to see what could be done. The current code in production ran fine. However, there was no guarantee it would run fine after the upgrade. Unfortunately, the code base was a mess. Multiple versions of the same file were all over the production server with .bak, .ver2, etc. extensions. Everything had been done on the fly in the past. The logs couldn't provide us with any useful information, and there was no test environment.
After seeing what could be done on the Sun box (which had been running this application non-stop for more than 10 years!), I decided to try to port the application to Tomcat - a container where I knew what I was doing and could debug problems more effectively. Within 14 hours we had the application running on Tomcat with some minor bugs.
Over the weekend, new production and test virtual machines were requisitioned and when I came back on my next visit, we got the last of the bugs out of the way and had everything working in the new test environment.
Some lessons learned....
- Don't be afraid to replatform in Java. I would've had a much harder time trying to do this with Microsoft technology. The clients were VERY happy to get off that old server - no one was around to support it anymore, etc. That was 10-year-old Java code that I didn't have to touch when I moved it onto j2sdk1.4.2_XX.
- It's almost scary how many different ways you can configure things in Tomcat. We had one outstanding issue that was 'bugging' me after 14 hours of work: the client required the application to be available from two separate URL paths without using an Apache web server to do URL rewriting. I tried various configurations in the app's web.xml, the global web.xml, and server.xml, and found that I could get the URLs to work, but then I had problems with sessions getting lost/corrupted. In the end, we wrote another servlet (sort of like this, but different) that we configured to 'catch' the URL we needed and hand the request over to the controller servlet that was already mapped to the other URL (a rough sketch follows this list). This worked beautifully.
- We discovered that if you're dealing with pages that have a lot of scriptlet code calling response.sendRedirect(someUrl), these calls will sometimes throw IllegalStateExceptions. The (quick) way to avoid the exception is to add a return; statement right after the redirect. Or you could move that kind of logic into a servlet if you have time. I didn't (have time).
- Having good logging is critical to debugging an app in an emergency. Seeing my System.out.println() outputs in the tomcat console was like drinking hot chocolate after a day of skiing. So RIGHT! Of course I commented out those lines before we put the code into production.
- Using technologies that don't require registry keys makes configuration and multiple-environment installations very easy. I found that I could write two short paragraphs of directions (documentation?) and that was enough for my client to install Java and Tomcat into their new test environment by himself. (Pretty much copy-paste, add the JAVA_HOME system variable, and add j2sdk...\bin onto the Path system variable.) They were very happy about that too.
- Having hard-coded path references to properties files in Java code is so nasty! Every time we changed the path, we had to recompile all our classes. I know that's not the right way to do things, but the client just wanted to get things working and worry about refactoring later.
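For what it's worth, the 'catch and hand off' servlet mentioned above was, in spirit, something like the sketch below. The class name and paths here are invented for illustration, and the real one had a little more to it:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Mapped in web.xml to the second URL path the client needed.
public class AlternatePathServlet extends HttpServlet {
    protected void service(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Hand the request over to the controller servlet that is already
        // mapped at the 'real' path. A server-side forward keeps the same
        // request and session; resp.sendRedirect() would be the client-side
        // alternative if an extra round trip is acceptable.
        req.getRequestDispatcher("/controller").forward(req, resp);
    }
}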

Thursday, March 5, 2009

Groundwork/Nagios and Rockstars

I've implemented Nagios at a few different places. In my current position, I've implemented Nagios with Groundwork, and I'm really happy with it. Groundwork provides a 'community' (free) version of their tool as a VM image that you can just plunk into a VM host. We use the nrpe_nt agents to monitor specific services and metrics on a number of Windows servers. I reconfigured some of the vbscripts in the remote agents to call different methods in the WMI API, which I've detailed in this forum. I've also customized check_mssql to query specific tables looking for issues in production data (held-up business processes and corrupted documents). In order to do this, I had to download FreeTDS and install it on the VM, overriding the default installation target with /usr/local/groundwork/. The freetds.conf file then goes into /usr/local/groundwork/etc, and you configure the servers you want to call in there. I put two configs for each server in that .conf file, since I wanted to be able to test from the command line but found that Groundwork ends up calling by the IP of the box - so each server gets a proper domain-name config and an IP config.
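The freetds.conf entries ended up looking roughly like this - the names, IP, and port below are placeholders; the point is one section keyed by DNS name (handy for command-line testing) and one keyed by the IP that Groundwork actually uses:

[dbserver1.mycompany.local]
        host = dbserver1.mycompany.local
        port = 1433
        tds version = 8.0

[10.1.2.3]
        host = 10.1.2.3
        port = 1433
        tds version = 8.0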
One other thing about configuring the Groundwork/Nagios installation - I had to point the CentOS box at our Exchange server so the email notifications would actually go out.
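On our box that boiled down to a one-line mail relay setting - assuming a Postfix-based setup here (on a sendmail box it would be the SMART_HOST define instead), and the host name is a placeholder - in /etc/postfix/main.cf:

relayhost = [exchange.mycompany.local]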

I've read a couple of good tech books lately (I now own them both): Release It! by Michael Nygard, and Secrets of the Rock Star Programmers by Ed Burns.

Wednesday, March 4, 2009

Schema Crawler

I got back to using SchemaCrawler this week, trying to make sure we can explain the inconsistencies we see in the metadata between our different environments. I also found this cool little script to do port scanning so I could discover which ports our DBs were listening on.

# Loop over every TCP port, print the port as we go, and report the ones
# telnet manages to connect to.
HOST=127.0.0.1
for ((port=1; port<=65535; ++port)); do
  echo -en "$port \n"
  if echo -en "open $HOST $port\nlogout\nquit" | telnet 2>/dev/null | grep 'Connected to' > /dev/null; then
    echo -en "\n\nport $port/tcp is open\n\n"
  fi
done