Monday, November 29, 2010

Books I want to read soon...

Here are two books I want to read soon:

The Nomadic Developer: Surviving and Thriving in the World of Technology Consulting

and

Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation (Addison-Wesley Signature Series (Fowler))

Thursday, November 25, 2010

Thoughts on Design

I recently re-read The Last Lecture by Randy Pausch and was impressed with what he did in his classes that focused on user interface design.  On the first day of class he would bring in an old VCR, put it on a table in front of the class, and without saying a word he'd pull a sledgehammer from behind his desk and destroy it.  Needless to say, it got the class's attention.  The message Randy was trying to convey with this demonstration is that poor end-user design is detrimental to the product.  End-user design is critical.

There is another person who is a strong proponent of excellence in end-user design.  He is still alive today, and he's been a driving force for perfection in design in the IT industry.  His designs have been flying off of store shelves for the last 25 years or so.  If you haven't guessed yet, I'm thinking of Steve Jobs from Apple.  I ran across a very interesting article about him in Bloomberg Businessweek.  It's basically an interview with John Sculley, the CEO of Apple from the mid-80's to the mid-90's, about what it was like 'Being Steve's Boss'.  He talks a lot about Steve's philosophy and background in design, and Steve's influence on Apple in this area.  It's a great article.

It got me thinking...  In software, I have not been super stringent about software design on the applications I've written.  Let me clarify - object-oriented design is a must.  The train of thought I'm seeking to follow here is: is it a requirement to implement design patterns in your code everywhere possible?  A major reason I ask is that I've worked with a number of people who think their job is refactoring and renaming classes and packages in a code base - to the detriment of the project.  I often tell the students in my classes that there are numerous ways to 'skin a cat', as it were, when coming to a working solution in code.  I wonder though, is there always one perfect way?  Is this something to strive for?  Does it matter as long as the application works the way the client wants it to?  What do you think?

Personally, I think the answers to these questions depend on a lot of things, not the least of which are your client, the project, and your team.  Elegant solutions look and feel so good when you're done - both for you and for your client.  Perhaps the motivation and context for the project need to be analyzed before deciding whether or not you should 'shoot for the best designed, most elegant solution' possible.  Obviously, if your project has constraints like budget, resources, and time, there's a limit to how much effort you can put into evolving your project into the best design it can be.  That is one big reason why design patterns and frameworks are so prevalent (and pretty much a requirement) in software engineering.

I like to read/sign out Fast Company from the library - I'm actually thinking about getting a subscription.  Anyway, they have an annual 'Masters of Design' issue that is a lot of fun to read.  In it they highlight different world-class designs and their designers.  Here's a link to their MOD tag cloud.

Sunday, November 21, 2010

Windows OS Performance Guidelines

These are notes from a workshop I was able to be a part of a couple of weeks ago.  The workshop was on Windows Performance Monitor and monitoring vital signs on Windows servers.  Windows Performance Monitor has a multitude of counters that a person could potentially monitor on a server.  I'm just going to point out a few critical ones that were highlighted in the workshop and what their tolerances are.

Some points to note before I get into these values:  
  1. Don't always look at the graph first.  The counter graphs can be scaled right out to lunch, so they aren't necessarily a good first-glance indicator.  For a particular counter, your focus should first be on the Minimum and Maximum values in perfmon just below the graph.
  2. In the past, I would tend to keep an eye out for sympathetic counter relationships on the graphs.  However, keeping an eye out for inverse counter relationships is also a good idea (where one counter could be decreasing in value while another is increasing).
  3. Counters can get corrupted.  Apparently this happens more often than one would think.  They can be rebuilt - directions are in this KB post.
  4. You can attach a PID (process ID) to a perfmon counter in an OS older than Windows Server 2008 by modifying reg keys.  Details are in this KB article.  Versions of perfmon on 64-bit OS's come with this already set up.
On to the objects/counters of note (a command-line example of capturing a few of these follows the list):
  • Process Object
    • Handle Count - greater than 500 handles may point to a problem.
    • Private Bytes - greater than 250MB could be a problem.  I've seen procs over a gig, and they definitely were a problem (they can get that high).
    • Working Set - greater than 250MB could be a problem
    • Thread Count - greater than 500 threads needs to be watched to ensure the count isn't increasing over time
  •  Processor Object
    • Processor Time - all core instances.  _Total can get you an overall trend.  Greater than 91% utilization is potentially an issue.
  • Network Interface Object - you need to know what the spec is for your network interface to determine its capacity.  Anything over 80% of capacity could point to a problem
    • Current Bandwidth - helps you determine the NIC's capacity
    • Output Queue Length - greater than 2 is a problem
    • Bytes Total - greater than 65% of capacity utilized is past the warning threshold (blinking red with siren)
  •  Memory Object
    • Free System Page Table Entries - the higher the better here.  Lower than 5000 is considered critical.  I've seen boxes 'run' (aka hobble) around 2500.
    • Available Megabytes - again higher here is desirable.  Less than 100 MB or 5% free is very problematic
    • Pool Non Paged Bytes - greater than 80% consumed is out of spec (not at all good)
    • Pool Paged Bytes - same as non-paged bytes.  Anything between 60-80% should be watched.
  • Logical and Physical disk Objects - they have the same critical counters so I've put them together here
    • Idle percentage - 19 to 0 percent is critical.  Anything under 55 should be treated as a warning (a lower idle percentage means a busier disk).
    • Current or Avg Disk Queue Length - 3 to 31 in the queue means you need to keep an eye on it; greater than 32 is an issue.
    • Avg Disk sec/Read or sec/Write - 25ms and above is critical.
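
If you'd rather capture a few of these counters from the command line than stare at the perfmon GUI, typeperf can log them to a CSV.  A minimal example (the counter paths assume English counter names, and the w3wp process instance is just for illustration):

    typeperf "\Process(w3wp)\Handle Count" "\Process(w3wp)\Private Bytes" "\Memory\Available MBytes" -si 5 -sc 12 -o counters.csv

-si is the sample interval in seconds, -sc is the number of samples to take, and -o is the output file.
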
 Other interesting points of note:
- Mark Russinovich was the original developer of Perfmon.  His blog is apparently pretty good and our facilitator was very impressed with him.
- Windows Server 2008 does processor/core parking.  This means the server will 'retire' (effectively turn off to save power) CPUs when there isn't a heavy load on the box.  Our facilitator told us that one ISP moved all their boxes to Windows Server 2008 for this reason, and their power bills were 15% less a month.

Friday, November 19, 2010

Newbie Advice

Having gone through an IT technical school, I realize they push a lot of information and technology at you.  There's a lot to assimilate in less than a year (depending on the school).  On this page I'd like to point out some technologies that perhaps weren't covered much in classes but are widely used by good shops in the industry, and also offer some career advice for the future.  Before I delve deeper, let me say that this information is based on my experience, which is mostly on the Java technology side of the fence.

Technical Heads Up

1. Design Patterns.  One thing many of the technical schools don't have time to cover in their curriculum is code design patterns.  It doesn't matter if you're doing .NET development or Java development - design patterns can really make a difference in the organization, maintainability, and reusability of your code.  A great book to pick up for an introduction to design patterns is published by O'Reilly and called Head First Design Patterns.  If you don't have access to the book, a good place to start is to google some popular patterns that are being used in code now.  Dependency Injection is a pattern that is really big right now; two frameworks that use it a lot are Spring and Google's Guice (see the sketch below).  Other patterns I've seen used frequently are the decorator pattern, the abstract factory pattern, the factory pattern, the singleton pattern, and the proxy pattern.
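
To make the dependency injection idea concrete, here's a minimal, framework-free sketch in Java - constructor injection done by hand, which is essentially what Spring and Guice automate.  All the class names are invented for the example:

    // A collaborator hidden behind an interface...
    interface MessageSender {
        void send(String to, String body);
    }

    // ...so the real implementation can be swapped for a fake in tests.
    class SmtpSender implements MessageSender {
        public void send(String to, String body) {
            // real SMTP work would go here
        }
    }

    class WelcomeService {
        private final MessageSender sender;

        // The dependency is handed in ('injected') rather than created
        // internally with 'new SmtpSender()'.
        WelcomeService(MessageSender sender) {
            this.sender = sender;
        }

        void welcome(String emailAddress) {
            sender.send(emailAddress, "Welcome aboard!");
        }
    }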

2. Using a Repository.  Using a repository, or a version control system, should be required by every development shop.  Frankly, if you're in a company where they aren't using one, either push to get one installed or start looking for another place to code.  A repository is basically an application that functions as the premier source/bank/datahub for all code.  This application should be on a server with lots of disk space, accessible to all developers in the company.  Repositories let developers manage code changes at many different levels, allowing many developers to work on the same code base at the same time.  Some repositories I've used in the past are Subversion, CVS, StarTeam, and Perforce.  Googling any of these will give you more information.
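
If you've never touched version control, the day-to-day surface area is small.  A typical Subversion session looks something like this (the repository URL is a placeholder):

    svn checkout http://svnserver/repos/myproject/trunk myproject
    cd myproject
    # ... edit some files ...
    svn status                  # see what you've changed locally
    svn update                  # pull in your teammates' changes
    svn commit -m "Fix null check in order lookup"
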
3. Unit Testing.  Unit testing is the development practice of coding 'tests' that validate individual classes or 'units' of code, to ensure they are functioning as required.  This practice has developed over the past 4-5 years or so and has been found to be very effective in giving developers the confidence to refactor their code while ensuring that its functionality stays consistent over time.  Some unit testing technologies are JUnit, JWebUnit, HttpUnit, DBUnit, and NUnit.
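
Here's about the smallest possible JUnit 4 test, just to show the shape of the practice (the Calculator class is invented for the example):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class CalculatorTest {

        // the class under test - normally it would live in your main source tree
        static class Calculator {
            int add(int a, int b) { return a + b; }
        }

        @Test
        public void addingTwoNumbersGivesTheirSum() {
            assertEquals(5, new Calculator().add(2, 3));
        }
    }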


4. Continuous Integration.  Continuous integration is a process that automatically rebuilds and tests (with unit tests) an application or code base.  Continuous integration employs an application like CruiseControl or Anthill Pro which is configured to automatically check out the code base from the repository, compile it, run its unit tests, and sometimes even deploy it to a test environment.  It allows developers to check for integration problems in the code and warns developers of broken code, allowing and reminding them to fix it ASAP.
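
The CI server itself mostly just watches the repository and runs your build script on a schedule.  For a Java project that script is often an Ant file; here's a stripped-down sketch (paths are hypothetical, and it assumes junit.jar is available to Ant) of the kind of build file CruiseControl or Anthill Pro would invoke:

    <project name="myapp" default="test">
        <target name="compile">
            <mkdir dir="build/classes"/>
            <javac srcdir="src" destdir="build/classes"/>
        </target>

        <target name="test" depends="compile">
            <junit haltonfailure="true">
                <classpath path="build/classes"/>
                <batchtest>
                    <fileset dir="build/classes" includes="**/*Test.class"/>
                </batchtest>
            </junit>
        </target>
    </project>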

Soft Skills and Career Advice

1. Change is GOOD.  Once you've gotten your feet wet in the industry, you'll find that being a Java developer in one company can mean something quite different than being a Java developer in another company.  There is such a vast variety of Java technology that two Java developers can have quite different experiences in developing applications.  Whole development processes and methodologies can be very different in different companies.  One company could be flying by the seat of their pants, developing with no backups or repository and manually pushing out code, while another is totally immersed in Agile methodology and outsourcing everything but development... etc.  Keeping this in mind, my advice is to not get comfortable in any one company but to move around a bit - that is, after 18 months or two years at one place, get a different job.  You'll be put into a position where you'll either learn a lot of new stuff or you'll be mentoring.  Either way, it'll be an experience that will likely teach you more than if you had stayed put.  It'll also give you some motivation to brush up on your resume writing and interview skills, and expand your network.

2. Walk & Talk.  While software developers, for the most part, end up working in cube farms, that doesn't mean they need to stay in their cubby hole all day and do work.  This may be just my style, but I find when I'm working on a big integration project, or on a project with a lot of departments or teams involved, getting up and talking to people really helps things move along.  Whether it's socializing a new means of managing your server configuration in the code base or talking to a DBA about a script that you'll need to release with your new code, talking face to face is far more effective than email, I've found.  I generally use email if I can't find the person, if I want to make sure there is hard documented proof about a decision that is being made, or if I want to include a bigger audience in my communication.  Successful software development is more about relationships than a lot of people realize.  Relationships are more apt to be cultivated face to face... plus it's good exercise.
 

Wednesday, November 17, 2010

Career Paths in IT

I've been teaching at SAIT again this week.  Generally, before every class gets going in the morning, I like to do a little 'blurb' on concepts or thoughts that aren't covered in the course the students get.  One of the things I find is that the vast majority of students (and the population at large, for that matter) don't realize how many different career paths there are in just the software side of IT.  Guaranteed, there are some students in the course I teach who, by the time I get the chance to teach them, are very concerned they made the wrong decision by getting into software programming.  They don't feel like they are cut out for programming.  I try to put their fears to rest by suggesting that there are other opportunities for them - careers that can utilize the coding experience they are getting but allow them to use talents they are more comfortable with.  Here's a list (by no means exhaustive) of some other opportunities in the software field.

- Project Management.  Project managers who have coding experience or at least a technical background are a step ahead of PMs who aren't techies.  I myself have considered taking some project management courses or getting my PMP, but thought the better of it after seriously thinking about what I like to do at work.  PMs need to have a great soft-skill set.  While I don't think I'd have a problem with that, I would miss troubleshooting and solving problems.  They also have to be politically savvy and have patience for meetings.

- Business Analyst.  BAs also need to have a great soft-skill set, as they need to talk with everyone (the business, the developers, and the PM) and write down the requirements.  It also helps if they are technically competent and enjoy documenting/writing.  For some reason, there have been a lot of postings for BAs in Calgary over the past 6 months.  I just found out today that a variety of institutions offer certificates/diplomas for a Business Analyst career path (NAIT and Mount Royal University are two local options).

- Quality Assurance.  This field has grown a lot over the past 8-10 years, and frankly, I don't think the projects I've been on over the past 5 years would have been successful without our QA teams.  QA experts need to have a penchant for detail and process.  They need to be able to understand the business and the relevant business rules well.  Most of the QA people I've worked with haven't had a certification, but you can apparently get more info on a software QA certification here.

- Release Management.  This is what I (like to think I) specialize in.  We ensure that teams are using a continuous integration server and that QA has the builds they need for testing, where and when they need them.  We have two main goals: 1. keep downtime in production to a minimum, and 2. keep lost man-days in QA to a minimum.  I like release engineering because I get to see the whole picture - I have to understand the code and its dependencies, I need to understand the infrastructure, and I'm responsible for the product getting through all the environments to production.  For me (being a bit proactive), this has also led to automating deployments of database objects and data scripts, server configurations, and code, as well as installing monitoring tools to ensure that every environment is ready to 'rock'.

- Security Analyst.  These dudes make sure the good people have access to everything they need access to, and the bad people are locked out.  They spend a lot of time combing through logs and configuring IPSec rules.  They also perform software audits on code, all kinds of different tests on web sites, and network audits.  I wouldn't mind moving my career this direction in the future, as they also need to have a pretty good idea of the whole system/infrastructure/architecture.  Great web sites for more resources on these types of positions are here and here.

Even more specialized positions:
- GIS (Geographic Information System) developer or architect or analyst.  Working with digital 2D and 3D mapping systems.  Having some limited experience with this at the ERCB (deploying mapping systems that display reservoirs, etc. in Alberta), I think this is pretty cool stuff.
- Search Engineer (Search Engine Optimization specialist).  I've dabbled in this off and on for my clients.  You definitely need to be on top of your game and know how the search algorithms are changing.
- Business Process Specialist - there's lots of software to specialize in here: Tibco, AgilePoint, SAP, JDEdwards, etc.
- Document Management Specialist - SharePoint, eDOCS, Documentum, etc.  This involves the automated management of documents - not the writing of them, but their storage.  Government bodies and law firms definitely have a need for these kinds of systems.

Tuesday, November 16, 2010

Another emergency software fix

A couple of weeks ago I had another client take advantage of my emergency software troubleshooting services.  They had a web site maintained by a popular CMS.  Unfortunately, their scheduled backups weren't as scheduled or robust as they thought, and they were left without a website (or backend database) after their hard disk died.  I was called in to help.

After some investigation, we surmised we were left with:
- a bunch of .myd, .myi file backups for MySql (they were a couple of months old)
- another .sql file backup of the database 12 months old
- a file system backup of the site a couple of months old
- and another file system backup around 12 months old.

We rebuilt the database using a combination of the most recent .myd and .myi files.  This was a bit of a trick, as the .frm files were missing in some cases.  What we did in those cases was create a dummy .frm file and rebuild the tables from the .myd/.myi files; when we did that, we had MySQL replace the .frm file (it's an option you can enable during this operation).  We then used the data from the older database backup to populate the tables that didn't have the .frm files.
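
For reference, the rebuild step in the mysql client looked roughly like this - a sketch, with a hypothetical table name (the stand-in .frm came from running the table's original CREATE TABLE against a scratch database first):

    -- after copying the backed-up .MYD/.MYI files and the stand-in .frm
    -- into the database's data directory:
    REPAIR TABLE customers USE_FRM;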

With the file systems, it was a matter of patching together what worked between the two file system backups and the data we had in the database.  Since the old system was pretty much blown away, it was decided to upgrade the OS and the web server at the same time.  This meant reconfiguring the web server and ensuring the majority of the site worked.  In the end, the site was pretty much ready to go after 8.5 hours of work.

Sunday, November 14, 2010

Licenses.Licx error with DevExpress in Visual Studio 2010

While refactoring a business solution at work to deploy using Visual Build rather than InstallShield, I ran into an issue with our DevExpress web components not seeing the correct licenses in the licenses.licx file.  After googling for a bit and discovering how the licenses.licx files are created and managed in VS2010, I started troubleshooting to see if I could resolve my issue - seeing red text on every web page telling me that I was using a 'trial' edition of the software when I clearly wasn't.
 
I deleted the file to get VS to recreate it.  That didn't seem to work for me.  I renamed it.  I checked permissions on it.  I investigated how it got deployed and whether I needed it on the server or not (if I remember correctly, you don't have to explicitly move it to the server).  In the end, my solution sort of knocked my socks off.  I deleted all of the content in the file and left it blank.  I then built my MSI and deployed.  Imagine my surprise when the blank licenses.licx file got rid of my trial license message!  I don't know if this is a bug with VS2010 or its integration with DevExpress 8.2.2, but my fix seemed like a total hack.  However, we've tested this in multiple environments and it's solid.

Thursday, November 4, 2010

The Phoenix and Projectors

Two business ideas in this post.  Actually one is a business model, the other is an idea.

The Phoenix
The business model that I ran across I'd like to call 'The Phoenix'.  Basically, the idea is to take a dead business or business idea and resurrect it.  The tale of the Twin Otter, a Canadian-built short takeoff and landing utility plane, is a classic example of this business model.  Details of its life, death, and resurrection can be read on Wikipedia here.


In its first life, production of the Twin Otter ended in 1988.  The aircraft's versatility made it a bush pilot favorite, and demand for the planes has increased.  Viking Air of Victoria, BC, picked up on this and bought the rights to build the plane again.  After nearly 20 years of being out of production, new Twin Otters are now being built in Calgary, and the company has over 50 orders from all over the world.

Nov 14 update - I just ran into an article in The Globe and Mail about another brand that did something similar.  In fact, some work colleagues and I had been musing about this turnaround in the summer, as we'd found some Pop Shoppe bottles in a nearby convenience store.  Here's the link to the article in the Globe and Mail.


Projectors
The second idea isn't nearly as sexy.  Watching my project manager juggle between a spreadsheet of software development stories and another of schedules and tasks, I got to thinking: why can't there be a single projector with the ability to support and display two monitors' worth of screens?  Or even build a button into the software/firmware of the projector so it can flip between screen instances?  For that matter, why hasn't Windows come up with the ability to manage screen instances like Linux has?  Maybe that functionality is there and I just haven't been paying attention...

Friday, October 15, 2010

The importance of soft skills in IT

Soft skills are crucial to being a successful developer in the IT industry.  Some of the soft skills I've found beneficial in my career are:
  • Being a good communicator.  It's not just about talking, but listening as well.  Diarrhea of the mouth is never cool, especially when a job needs to get done.  Strive for mutual understanding in a conversation.  Communicating in the IT world isn't just limited to talking and listening, though.
    • Having the discernment to know when to write an email versus actually talking to a person is important as well.  I do not recommend doing all your communication through email.  Communication in email can easily be misunderstood or misconstrued and take on an entirely different meaning and tone than you originally intended.  Emails are great for documenting decisions.
    • Make sure that the content of your email deals with the subject you've put in your subject line.  For example, having a subject line of 'Quote for project XYZ' but then talking about quotes for that project AND other projects in that email is bad news.  Send your quotes for the other projects in a different email with a more relevant subject line (I know this from experience).
  • Developing relationships.  Extending communication a bit further and actually developing relationships with people in the office is huge as well.  Questions like "How was your vacation?", "Where are you going on vacation?", or "What did you do over the weekend?" are great for this.  When you've got a relationship in place, there's trust.  Many times I've been granted exceptional privileges and become more productive in my work because I've had a relationship with a key person who trusts me.  These relationships can also help in the future when you are looking for that next gig.
  • Having patience.  Whether it's letting somebody else go to the front of the 'line' for access to production, or having forbearance when dealing with a difficult client, patience can go a long way to strengthening that 'trust' in your relationships.  Sometimes work (or working with clients) in IT can get frustrating.  Things I do to help me step back and get some perspective are:
    • go for a walk
    • find something else to do for a while
    • talk to someone - use them as a sounding board
    • take a day off or go on vacation
  • Positive Attitude.  I like to keep my workplace fun, interesting, and upbeat.  To promote this around me I'll sometimes:
    • Bring baking from home to share
    • Audibly rejoice in small victories (getting a solution migrated and compiling from VS2005 to VS2010, for example)
    • Write thank-you notes (either hard or soft copy) to others who help me
    • Encourage people who are down.  We had a large project I was helping with, and the lead was concerned that between the time and the technical challenges he had, he wasn't going to be able to finish on time.  I believed he could and I told him so (and backed it up with my help).  In the end, he did finish.
  • Being Flexible.  Generally, you are always working for a client.  Invariably, they will change their mind or want an enhancement.  Microsoft will upgrade their software, the US president will change daylight savings time one more time, the CTO will be convinced that Tibco will solve all the business problems again.  I've found that being able to take these changes in stride is crucial to maintaining my mental health (avoiding being overcome with frustration).  It's all part of the territory - and frankly, job security.

      Thursday, October 14, 2010

      Business Idea - Jam Joint or Practice Pad

      I was getting into some music on the commute home the other day, lamenting the fact that I don't have a drum set to practice on anymore, when I was struck with this business idea.  Rent a smaller stand-alone building and renovate it to have reasonably sound-proof practice rooms.  Furnish smaller rooms with drum kits, leave some empty (for electric guitar/bagpipe/tuba practice), and make some bigger ones for bands to jam in.

      Many drummers, and musicians in general, have nowhere to practice in cities because it disturbs the peace.  Renting these practice/jam rooms out might be an interesting and franchise-able business opportunity.  My wife could only listen to me play the piano at home for so long (and I don't think I'm that bad - 11 years Royal Conservatory).  As soon as I started practicing technique, her patience got thin. :-)

      Wednesday, October 13, 2010

      Log4Net performance issue with WCF

      We ran into an interesting production problem recently that had a surprising resolution.  After upgrading our application server to use WCF calls, we found that performance was lagging in production.  We were able to replicate this problem in our development environments and spent a good deal of time trying to resolve the issue.  Avenues we explored included profiling calls to the databases (using Sql Server Profiler and filtering on database name and login Id), mining activity logs in the database, poring over configuration files, and utilizing Wireshark, SysInternals Process Explorer, SysInternals Process Monitor (filtering by process name and PID), and Windows PerfMon (Performance Monitor - viewing the defaults and various performance objects).

      Interesting things I learned/discovered were:
      - IE browsers (IE 7 and older) default to only two connections to the web server.  See this link.
      - It seems IE processes javascript before rendering a page, while Firefox, in some cases, processes javascript after the page is rendered.
      - The default isolation level for the WCF transactions hitting the database is serializable rather than read committed!
      - WCF connections default to some low thresholds for ConcurrentCalls, ConcurrentSessions, and ConcurrentInstances.  These can be changed using the serviceThrottling node and attributes of a service behavior you create (see the sketch after this list).  See this link for more detail.
      - At the DOS prompt, you can filter a netstat to only show specific ports and statuses like this:
      netstat /n | find "ESTABLISHED" | find "8085"
      - Use /n with netstat so it doesn't resolve DNS names - without /n, DNS name resolution can make netstat take a while.
      - The 'tasklist' command shows something akin to the process list in Task Manager.
      - Retrieve IIS process names from the command line by executing this command:
      cscript c:\WINDOWS\system32\iisapp.vbs
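
      Since the serviceThrottling item above is the one that bit us, here's a sketch of what it looks like in a web.config (the behavior name and the limits are made up - tune them for your load):

             <!-- inside <system.serviceModel> -->
             <behaviors>
               <serviceBehaviors>
                 <behavior name="ThrottledBehavior">
                   <serviceThrottling maxConcurrentCalls="100"
                                      maxConcurrentSessions="100"
                                      maxConcurrentInstances="100" />
                 </behavior>
               </serviceBehaviors>
             </behaviors>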

      In the end, the resolution to our problem was to turn off a custom implementation of log4net that we had running.  In Process Monitor we saw that log4net was re-initializing, re-reading its configs, and re-creating all the log files for EVERY page request.  We discovered this because there seemed to be a several-second lag between when the request was executed on the browser and when the web server actually acted on it.  Disabling log4net entirely improved our performance by a significant amount (tens of seconds).

      Update - Nov 26, 2010.  One of the reasons we were running into the issue above is that the log4net configuration loader was placing a lock (being very pessimistic) on the configuration file every time it loaded.  Agreed, this solution should likely have been using a static logger - one instance for the application.  However, given the way it was implemented, I'm thinking we could have avoided a bunch of hurt if we had made the configuration loader a little more optimistic and used:
      <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
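
      For context, that node sits inside the appender definition; a minimal sketch (the appender name and file path are hypothetical):

             <appender name="RollingFile" type="log4net.Appender.RollingFileAppender">
               <file value="logs\app.log" />
               <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
               <layout type="log4net.Layout.PatternLayout">
                 <conversionPattern value="%date %-5level %logger - %message%newline" />
               </layout>
             </appender>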

      Wednesday, September 22, 2010

      Ms Sql Server Grant Execute against sub-set of stored procs

      Thanks to Karlson for this -
      Here's a script that can grant execute (or whatever you wish to grant) to a set of stored procs based on a prefix in the stored proc name:

      -- Grants execute on every stored proc whose name starts with a given prefix.
      -- Replace [sp prefix here] and [roleName here] before running.  (Careful:
      -- square brackets inside a LIKE pattern act as a character class in T-SQL,
      -- so type the actual prefix, e.g. 'usp%'.)
      Set nocount on
      Declare @proc_name as nvarchar(max)

      -- cursor over all stored procs (xtype = 'P') matching the prefix
      DECLARE procs CURSOR
           FOR Select [name] FROM sysobjects WHERE [name] like '[sp prefix here]%' AND xtype = 'P'

      OPEN procs

      FETCH NEXT FROM procs
      INTO @proc_name

      WHILE @@FETCH_STATUS = 0
      BEGIN

          -- dynamic SQL: grant execute on this proc to the role
          exec('Grant Execute ON ' + @proc_name + ' TO [roleName here]')

          FETCH NEXT FROM procs
          INTO @proc_name
      END
      CLOSE procs;
      DEALLOCATE procs;
      Set nocount off

      Monday, September 20, 2010

      Another business idea

      One of my clients just moved into a brand new office tower.  Everyone was all pumped about the new digs - lights with movement sensors that turn off automatically when no one is in the room and turn on when someone comes in, new desks and chairs, new faster elevators, etc.  However, upon using the 'facilities' for the first time, I couldn't help but notice a common problem I see in many public bathrooms around the city.  Paper towel fills up the garbage cans too fast.  Nobody wants to compact it with their hands.  Inevitably, it overflows before the janitors make their rounds.
      My business idea this go-round is to import/sell garbage can compactors to the public buildings in the city that have this problem.  Other places where I've seen this issue (besides busy office tower washrooms) are airports and universities.

      Thursday, September 9, 2010

      Open Source Bug/Issue Tracking System

      In the last couple of weeks I've been helping facilitate the development process (through build & deployment automation, etc.) on a new project, and I was asked to spin up a new instance of Trac for the project team.  It didn't take me long to get it going.  I was amazed at the rate of adoption of this tool by the team as a whole.  The business was very positive about the tool and had no issues with learning it and beginning to log tickets.  I will definitely keep this tool in mind in the future, as it is relatively easy to install, straightforward to use, integrates with Subversion and Active Directory, and fulfills all the basic requirements for a bug/issue tracking system that my project team was looking for.

      Tuesday, August 31, 2010

      Common Obfuscated Error Messages with IIS

      I've noticed that IIS tends to obfuscate error messages - displaying messages that are different from the actual issue that is happening to the application.  This may very well be because we are running a plethora of web sites with different architectures and histories on one instance of IIS.  I thought I'd document some of those 'tricky' error messages here, just in case someone else runs into them (or I forget what they really mean).

      1. 'An existing connection was forcibly closed by the remote host.'  I ran into this error numerous times trying to configure a WCF web service to bring back a large result set (60k+ rows).  Playing with the values for maxBufferPoolSize and maxReceivedMessageSize in the web.config, I came to discover that what this error really seemed to be masking was an out-of-memory error that forced the shutdown of the AppPool that was serving the web service.
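
      For reference, the knobs we were adjusting live on the binding element in web.config - something along these lines (the binding name and sizes are just illustrative):

             <bindings>
               <basicHttpBinding>
                 <binding name="LargeResultSet"
                          maxBufferPoolSize="2097152"
                          maxReceivedMessageSize="10485760" />
               </basicHttpBinding>
             </bindings>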

      2. 'BinaryFormatter Version Incompatibility error.'  Generally I've seen this error when you have an IIS site on a web server trying to communicate with another IIS site on an application server, and there is no response from the IIS site on the application server - either that web site isn't running, or your url pointing to the web site on the application server is wrong.

      Monday, August 23, 2010

      Some good links (to blog articles) on Configuration Management

      Here are some good links to blog articles on Configuration Management.  The first is about the importance of a CM person in a company.  I would tend to agree with one of the comments that talks about the size of the company making a difference.  Obviously a smaller startup doesn't need a dedicated CM person.  However, a larger enterprise like the one I'm at now - with numerous development environments, multiple servers, a GIS infrastructure, a document management infrastructure, and a behemoth of a mainframe legacy system - needs a dedicated CM team.  Here's the link:
      http://evgeny-goldin.com/blog/2010/08/21/configuration-management-position/

      The other link considers adding automated database deployment to your CM system.  I would highly recommend it when I contrast my experience over the last two years with my previous release management experience.  Previously, we automatically deployed database schema and data changes.  In my more recent experience, the DBA group has flatly declined any encouragement and offer to do this.  The difference in the amount of time it takes to do database refreshes and debug issues with them is stark - like hours vs. days.  Here's the link:

      Saturday, August 21, 2010

      Testing with IE 6 (on Linux!)

      I have a requirement from a client to ensure that their website looks good in IE6.  None of the machines I normally work on have IE6 installed.  So I googled IE6 emulators, as I had used one in the past that was pretty good, and came up with:
      http://tredosoft.com/Multiple_IE 
      and
      http://www.browsercam.com/
      However, you have to sign up to get a free trial, and I thought maybe there was something better for me out there.  Some of the forums talked about keeping an old laptop around just for testing with IE6.  I do have an old desktop running XP, so I checked it.  It turned out I had been keeping up with all the updates for the most part, and one of the service packs (I don't remember if it was 2 or 3) installed IE7.  Bummer.
      Googling a bit more, I discovered that you can install IE6 on Linux.  Pulling on this thread a bit more, I found this site:
      http://www.tatanka.com.br/ies4linux/page/Main_Page
      and went to this page:
      http://www.tatanka.com.br/ies4linux/page/Installation:Fedora
      followed the directions, and IE6 was installed, running, and properly displaying my client's website within 10 minutes.  The installer gives you the option to install IE5.5 and IE5 as well, but when I tried them, the installations were corrupt.  Since IE6 was all I really needed, I was happy with leaving things the way they stood.

      Friday, August 20, 2010

      Active Directory SPOF

      We had a great example of an AD SPOF (single point of failure) recently.  A vbscript had been written - by someone with no malicious intent - to hook into AD and (for some reason) check service accounts.  What this script ended up doing, though, was locking out said service accounts.  Within the span of 30 minutes we were well on our way to locking out 400+ service accounts in the enterprise, irrespective of environment.  This took production out of commission.
      Seems like a bit of a hole to me.

      Thursday, July 22, 2010

      Grep features

      I was asked to search several teams' code bases recently based on a list of over 270 different search strings.  We are a Windows OS organization, and doing a search like that would take some time.  By the way, has anyone noticed that the Windows Explorer search tool is flaky?  Even if I used a tool like this it would still take a while.
      I have cygwin installed, and I know grep works well - when I search for something I know is in a file, I find it (unlike with Windows sometimes).  What I didn't know was that there is a -f option on grep that allows you to provide a file with all your search terms separated by newlines.  An astute coworker made me aware of this, and I tried it out this morning.  It worked nicely and saved me some time.  Here's how I implemented it:

      grep -r -i -f file_of_search_strings.txt * | grep -v .svn | grep -v Assert | grep -v .sql >> resultFile.txt

      -r is a recursive search based on where I'm currently sitting on the command line
      -i is a case-insensitive search
      -f allows me to pass in my file of search strings
      * tells grep to search every folder/file at this level.  Used with -r, it tells grep to search every single folder and file at my current cursor position or in a child of the folder I've cd'ed to.
      The | grep -v .svn tells grep to filter out result lines containing .svn (the Subversion admin directories)
      and then >> appends my results to a file for inspection.

      You may have to run the dos2unix command on the text file with your list of search strings to get grep to see the newlines in the file correctly.
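
      That conversion is a one-liner:

             dos2unix file_of_search_strings.txt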

      Wednesday, July 21, 2010

      MS Sql Server Stored Proc Search Query

      If you work in an org where there's a lot of business logic in (MS Sql Server) stored procedures, or you have an application that is stored-proc intensive, you might find this code snippet helpful.  It searches all the stored procs in the database for a specific string:

      select distinct o.name from syscomments c
      inner join sysobjects o
      on c.id = o.id
      where o.type = 'p'
      and c.text like '%string to search for%'
      order by o.name
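
      One caveat: syscomments stores the proc text in 4000-character chunks, so a string that happens to straddle a chunk boundary can be missed.  On SQL Server 2005 and up, sys.sql_modules holds the full definition in a single column and avoids that:

      select object_name(object_id) as proc_name
      from sys.sql_modules
      where objectproperty(object_id, 'IsProcedure') = 1
      and definition like '%string to search for%'
      order by proc_name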

      Monday, July 12, 2010

      .Net Self Serve Automated deploy

      There are a couple of prerequisites that need to be in place in order for a self-serve automated deployment system to work.
      1. You should have a Nant, MsBuild, Visual Build, or similar type of scripting file that is doing deploys for you already - see my previous posts on doing that here.
      2. Your folder structure for every project you want to deploy should be similar.  To work well, a self-serve deploy tool needs to rely on the assumption that directory structures are consistent for certain types of projects.  This could be set up using your repository tags directory, or by automatically copying deployment files to the required folder structure after the build is successful.  A folder structure I've used in the past is b:\deployment\projectname\version\.  My build file(s) and any other required folders and files reside in the version folder with a consistent folder structure.
      3. You need either a properties file or a DB table to manage which tag/version has been deployed to which environment.  We've used a symlink in the past to denote the current 'tip' build.  This example will use a file.  The file is structured like:

      <versions>
         <serverType>
             <server name="blahboxType" version="1.2" />
             <server name="fooboxType" version="1.0" />
         </serverType>
         <blahboxType>
             <project name="MyProject" version="3.1.0.23" />
             <project name="MyProject2" version="3.1.4.11" />
         </blahboxType>
         <fooboxType>
            <project name="fooProj1" version="6.2.123"/>
            <project name="fooProj2" version="6.3.92"/>
         </fooboxType>
      </versions>

      Once the deploy to a particular environment is finished, logic in the deployment file automatically updates the respective project in the appropriate versions file.

      Here's a bit of sample code for a client-side asp page for a self-serve app.  It passes some values to a server-side asp page (deploy.asp) so that page will know which version of which project to deploy into which environment.  The onchange event of the env select control calls an AJAX function that retrieves the currently deployed version of the project from the versions.xml file.

      <form action="deploy.asp" method="get" name="MyProject">
        <input name="project" type="hidden" value="MyProject" />
        <input name="serverType" type="hidden" value="blahboxType" />
        <select name="version">
           <option value="">--Select--</option>
           <!-- the version value is emitted server-side by the asp page -->
           <option value="<%= version %>"><%= version %></option>
        </select>

      <select id="MyProject" name="env" onchange="getVersions('MyProject',this.value)">
          <option value="">--Select--</option>
          <option value="Test1">Test 1</option>
          <option value="Test2">Test 2</option>
          <option value="SB">Sandbox</option>
          <option value="STG">Staging</option>
          <option value="PRD">Production</option>
      </select>

      <div id="MyProjecttarget" style="color: green;">
      -</div>

      <input type="submit" value="Deploy" />
      </form>

      Then on my deploy.asp page, the request parameters are set to variables which are then used to dynamically build the path to the deploy file (in this case a Visual Build file called Deploy-<serverType>-<component>.bld).  This could be a nant, msbuild, or ant file as well; you'd just have to change which executable you're using in the execString.  The command is then executed in a Windows shell, and if it has a return code of '0' it was successful.  Successful or not, we do some very simple logging to show who tried to deploy what, where, and when.
      Here's the code:

       ' need a longer server timeout setting so our deploy can finish
           Server.ScriptTimeout=1200
       'instantiate some vars
           env = ""
           version = ""
           project = ""
           serverType = ""
           component = ""

       'set the vars (Request.QueryString returns plain values - no Set keyword needed)
           version = Request.QueryString("version")
           env = Request.QueryString("env")
           project = Request.QueryString("project")
           serverType = Request.QueryString("serverType")
           component = project    ' the build file naming calls the project a 'component'

           'gets the user
           Set WSHNetwork = CreateObject("WScript.Network")
           user = WSHNetwork.UserName

           if version="" then
               Response.Write("Please ensure you chose a version <br/>")
           elseif env="" then
               Response.Write("Please ensure you chose an environment")
           elseif project="" then
               Response.Write("error occurred - project required")
           elseif UCase(env)="PRD" then
               ' note: compare upper case to upper case, or this guard never fires
               Response.Write("We don't want to deploy to Production from this app")
           else

               Response.Write(version & " " & env & " " & project)
       'set up a dynamic path here to the deployment file (nant or ant or visual build file).  This string assumes the
       'build file is taking a parameter called Env - the environment to deploy to
               execString = "<pathToVisualBuildExecutable>\VisBuildCmd.exe /b " & Chr(34) & version & "\visual build\Deploy-" & serverType & "-" & component & ".bld" & Chr(34) & " Env=" & env

               Response.Write("<br/>" & execString)

               set oFs = server.createobject("Scripting.FileSystemObject")
               ' 8 = ForAppending, True = create the log file if it doesn't exist
               set oTextFile = oFs.OpenTextFile("<pathOfFileToLogTo>\deploylog.txt", 8, True)

               Dim WshShell
               Set WshShell = CreateObject("WScript.Shell")
               ' 6 = run minimized; True = wait for the command and return its exit code
               proc = WshShell.Run(execString,6,true)

               ' Run returns a number, so compare against 0, not the string "0"
               if proc=0 then
                   response.write("<br/><br/><div style='color:green;font-size:16px;'>Deploy of build " & component & " " & version & " to " & env & " as " & user & " was successful</div>")

                   dt = now()
                   logtext = dt & ":  Deploy of build " & component & " " & version & " to " & env & " was successful" & vbCrLf
                   oTextFile.Write logtext
                   oTextFile.Close
               else
                   response.write("<br/><br/><div style='color:red;font-size:16px;'>Deploy of build " & component & " " & version & " to " & env & " failed using user " & user & "</div>")
                   dt = now()
                   logtext = dt & ":  Deploy of build " & component & " " & version & " to " & env & " failed as user " & user & vbCrLf
                   oTextFile.Write logtext
                   oTextFile.Close
               end if

               response.write("<br/>Deploy times and results are logged at <pathOfFileToLogTo>\deploylog.txt")

               set oTextFile = nothing
               set oFS = nothing
               set WshShell=nothing

           end if
               response.write("<br/><a href='javascript:history.back()'>Back to deploy page</a>")

      Saturday, July 10, 2010

      Key Questions in Software Sustainment

      Here are some of the questions I ask myself when a software system has stopped running properly...

      The first big question is: What changed between when the application was running well, and the time the application stopped running well?  To get to the bottom of that question, there are a number of other questions that can help point the way.

      Question isolation - Is this problem isolated in any way?  Is it only on a specific network, environment, or group of servers?  Does it only happen for a specific group of users or a specific client?  Is there a period of time it's isolated to?  Is it isolated to a particular 'item' in your data?  I've seen users try to make an application use a different browser version than the documented supported versions.  Sometimes it takes a while to get to the bottom of simple issues like that.  Also, we've run into situations where an organization hasn't given users the rights on their machines to install the third-party ActiveX components that the application they are trying to use requires.

      Question data integrity - Whether you are looking at legacy data or at missing logic to manage special characters in your data, you need to break the problem down.  With legacy data, is the problem isolated to a particular user or group of users, or a particular item?  Or, if it's an ETL function, does relational integrity from the first DB line up to the second DB correctly - are you missing or adding 'types' of data that the second DB is or isn't expecting?  Sometimes you need to pinpoint the exact row or time when the issue occurred to determine what the problem was.

      Question continuity - Did something stop running or listening? A service, appPool, web site, 3rd party server?  Are your cron jobs or scheduled tasks still there? Monitoring would quickly and easily answer this question for you.

      Question communication - Are the lines of communication open to all of the dependencies that your application has?  Has a network cable been severed by a backhoe down the street?  (I've seen that before.)  Is there too much communication going on?  Too many calls from one routine can take your system down.  We had an issue like that with some javascript that called a data access function on a GIS server.  As soon as one too many layers got added to the map, performance died, as there were too many round-trip calls to the server in one request.  Internal systems that need to do identity verification or IP geolocation are heavily dependent on external third parties to operate.  These external vendors can in turn be dependent on other external services.  Know all your dependencies and have monitoring and SLA's in place for all of them, or you could be sweating bullets.

      Question dependencies - Here I'm thinking more in the context of internal dependencies - what internal 3rd-party services are you dependent on?  Databases, reporting tools, monitoring systems, document management systems - any internal system that your application depends on that you aren't responsible for fits into this category.  Are they up and running?  How do you know?  Are they running properly?

      Question the un-questionable - Is the HVAC system working at your co-lo?  As redundant as your service provider tries to be, there will always be something under the radar.  Always.  I've seen an external HVAC system take down an entire enterprise.  Many of you have likely seen a McAfee or a Norton patch take down an enterprise.  How secure is your UPS (Uninterruptible Power Supply) management console?  I've logged into one I found by accident using username: admin, password: password!

      Question known changes - software enhancements and patches, file or db permissions, config changes, etc.  I've seen issues where a changed file path took down a critical FTP routine, a 3rd-party software patch for a document management system crippled a production system by removing key indexes in the database, and an (unfortunately un-automated) database refresh had missing roles or users, or left the db in restricted mode.  Doing anything manually can get you into trouble.  A fat finger can push a wrong dll/library or mis-type an entry in a config file.  Even fat-fingering an automated deploy can get you into problems - automated deploys still depend on data that is manually entered.  We have had pointers to app servers in our web.config files entered wrong, and multiple entries in machine.config for a particular component.

      Question security - Malicious changes are possible, either internal or external.  Unfortunately, the question of security is a larger one every day.  And it has to be scrutinized at every level.  Does your system have a firewall and IPSec rules in place?  Does your application provide you with an audit trail?  How secure is your production data?

      Resources at your disposal that can help in your investigation:
      Log files - event logs, server logs; if you're logging to tables in the DB, don't forget to look there.
      Users - it's like CSI - you need to get ALL the information you can about their problem.  You cannot be afraid to ask.
      Thread dumps - killing a hurting server and ensuring that it does a thread dump when it terminates can be very effective in your problem search.
      Networking tools - WireShark, telnet, ping, netstat - these are great for checking your communication.
      Monitoring tools - Nagios, SCOM, Groundwork Open Source.
      Books - Michael Nygard's Release It! and Luke Hohmann's Beyond Software Architecture.

      More ideas

      Business idea: Create a computer screen with removable sides so there's no space when you put it against another computer screen - they'll almost look like one screen instead of having the 1-2 inch separation between the two.

      Business idea: If I could create a machine/robot that would automatically wash windows, I think I could make a lot of money.  Every window I can think of, save one kind, has to be washed manually by hand.  All the high rises downtown still have all their windows washed manually.  The only place a different technique is used, as far as I'm aware, is with race cars, where they put layers of thin plastic over the windshield.  They rip off a layer every time the car is in the pit, and instantly the driver has a clean windshield.

      Business idea: A new service industry helping people/seniors navigate their way around the health maze we have created for ourselves.  You need a Personal Medical Manager!  Here's why: there are significant disconnects between the specialists who analyze medical tests, the lab techs who perform/analyze the tests, and the doctors who originally asked for the tests.  Then there's another layer of indirection at the pharmacy.  How does an older person manage all their prescriptions, make sure they aren't 'colliding' with each other, or even just remember to take them all and follow all the directions?  I've had my wife second-guess a doctor's prescription, ask the pharmacist about it, and have him adamantly agree with her that the prescription was wrong.

      Political idea: Why are we still running democracy the way we are?  The reason we elected people as 'representatives' long ago was so they could represent us in parliament (or the house, or congress) - or at least represent our vote.  Lobbyists have done away with our representation in the vote.  But why do we need a representative now anyway?  Technology could allow us to instantly vote on every single issue before parliament, using either a computer or a phone.  We wouldn't need a representative to vote for us anymore.  Don't get me wrong, I'm all for democracy!  I just wonder if it can be done better now, with technology, by removing the 'middle man' and the lobbyists.

      Friday, June 4, 2010

      Code Repositories

      Over lunch the other day, some colleagues and I were jawing about different code repositories that we've used, and how clients/projects that we've worked with have evolved to use different VC products.  At one point, all six of us had worked together on a big (successful) online wallet system.  It was originally on Perforce.  We migrated the whole system to Subversion because we were having problems managing code/library dependencies between various projects.  Things worked great in Subversion for what we were doing at the time (this was 4 years ago or so).  One of the developers I worked with there was on the team for Monotone - a precursor of Mercurial, in that they both followed the distributed version control model.
      In the last year we made the change from ClearCase to Subversion, and for what we were doing it just felt (and continues to feel) so much better.
      A couple of the guys eating with us at lunch are working with a client that switched from Subversion to Mercurial earlier this year.  They said it was a bit of a paradigm shift for all the developers in the organization, as you have to think about how you manage your changes a little bit differently.  One of the guys also forwarded me this article that is really good at explaining the differences (and has a link to a tutorial the author wrote).

      Thursday, June 3, 2010

      CMS systems

      I found this good post http://designtutorials4u.com/15-great-content-management-systems-for-designers/ on some of the more popular systems out there.  Unfortunately, it doesn't document specifically the technology each one uses, or differentiate very well which ones are 'free' vs. which ones you have to pay to use.  I've also run into a couple of others lately:
      Umbraco - an asp.net based cms
      and
      Alfresco - a java based cms

      Thursday, May 13, 2010

      More Software Emergency Fixing Experiences

      I've recently helped two more clients with 'emergency fix' issues.

      The first issue was a problem a national grocery chain had with an export from a SAS program that did a pull from multiple database tables into an excel spreadsheet, and subsequently imported this data into a MS Access database using a SSIS (Sql Server Integration Services) package.  The SAS export seemed to be fine - that is, it executed without errors, but the SSIS package would fail. From my understanding, the client would run this process about every month or so, so there were a fair number of records (1000's).  However, they started having this issue in Dec. and I didnt' get called about the issue until late April, so there was a backup of data.
      The client didn't have the SQL Server Business Intelligence (BI) studio installed, so we had to go through the idx file manually to see if there was anything obviously wrong.  There didn't appear to be.  The database and Excel connections tested fine.  There didn't appear to be anything egregiously wrong with the data formatting, or any invalid characters in the Excel spreadsheet.  So, what do you do?
      Well, for one thing, don't test with the full dataset if you think it might be a data-related issue.  Keep making your dataset smaller until you can isolate which data works and which doesn't.  In this case, we cut the dataset in half, ran it from February to the end of April, and the import worked!  This ruled out permissions issues and software upgrade issues (we had been considering that upgrades might be part of the problem, as more than one piece of software had been upgraded in that time period).  The successful test confirmed that we were dealing with a data-related issue.  From there it was just a matter of determining exactly which rows of data couldn't be imported, and what it was about the data in those rows that botched things.
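      If your problem data lands in a flat file, the halving step can be as simple as something like this (hypothetical file names - our data was in a spreadsheet, but the idea is the same):

             # split the export in half to narrow down a data-related failure
             total=$(wc -l < export.csv)
             head -n $((total / 2)) export.csv > first_half.csv
             tail -n +$((total / 2 + 1)) export.csv > second_half.csv
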
      Less than 5 hours of work to this point on this fix.

      With the second emergency, a telecom company was having issues getting an automated FTP (File Transfer Protocol) process to work.  They had changed the paths where the FTP files were residing, and they were VoIP experts - not so much into the bash shell scripting thing.  Their first problem was the paths in the bash script.  That was an easy fix.  Then there was a problem with sending a notification email that, apparently, they had never gotten to work.  This was a little more tricky.  They originally had the email coded similarly to this example.

             #!/bin/bash
             telnet smtp.example.org 25 <<_EOF
             HELO relay.example.org
             MAIL FROM:<joe@example.org>
             RCPT TO:<jane@example.org>
             DATA
             From: Joe <joe@example.org>
             To: Jane <jane@example.org>
             Subject: Hello

             Hello, world!
             .
             QUIT
             _EOF

      However, after a bit of work I discovered that the server this script was running on was sending the commands too quickly for the SMTP server to keep up, so the email never went out.  In the end, I had to do something more like this:
             #!/bin/sh
             # pipe the SMTP conversation through telnet, sleeping between
             # commands so the mail server has time to respond to each one
             ( echo "HELO ccielogs.com"
             sleep 2
             echo "MAIL FROM:<test@somedomain.com>"
             sleep 2
             echo "RCPT TO:<test@someotherdomain.com>"
             sleep 2
             echo "DATA"
             sleep 2
             echo "Subject: Test-Mail"
             sleep 2
             echo ""
             echo "If you can read this, it works!"
             sleep 2
             echo "."
             sleep 2
             echo "QUIT"
             ) | telnet XXX.XXX.XXX.XXX 25

      Then, to get the bash script to run from the Windows Task Scheduler (using Cygwin), I had to do something like this:
             C:\cygwin\bin\bash.exe --login -c "/myfolder/myscript.sh" 
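      If you'd rather register the task from the command line than click through the Task Scheduler UI, something along these lines should do it (the task name and schedule here are made up):
             schtasks /create /tn "FtpNotify" /sc daily /st 02:00 /tr "C:\cygwin\bin\bash.exe --login -c '/myfolder/myscript.sh'"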
      Less than 5 hours to fix.

      Monday, April 26, 2010

      Param datatypes with Sql Server Stored Procs

      I tried several times to save and post this blog entry earlier today and it would NOT save.  I have no idea what was wrong; I tried to email it to myself so I could post it later, and I couldn't email it either.  I'm thinking there was a bad character in my original text, but I pasted it into (and copied it back out of) Notepad, so...  I'm not sure what's going on.  Sometimes these tools in the 'cloud' seem to be in a 'fog'.

      Anyway, what I wanted to say was....
      I've been working with stored procs in SQL Server lately.  My stored procs take search parameters that I wanted to be very flexible - strings with wildcard characters on either end, used with a LIKE in the SQL.  Originally I declared the parameters with the char datatype.  What I didn't realize is that if the value passed in is shorter than the length you specified, a char datatype automatically pads the rest of the space with trailing blanks.  So if you declare a char(9) and pass the SP a 3-character string, SQL Server automatically pads it with 6 trailing spaces.  I was then prefixing and appending the percent sign (wildcard) to either end of my param, but the trailing wildcard ended up after the padding, so the pattern never matched the way I intended.

      In the end (with the help of my more DB-inclined colleague, Karlson) I changed the datatype of my param to varchar, and I could happily prefix and append wildcards to it.
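      Here's a quick way to see the padding in action (hypothetical server/database names, run through sqlcmd):

             # char pads with trailing spaces; varchar does not
             sqlcmd -S myserver -d mydb -Q "DECLARE @c char(9), @v varchar(9);
             SET @c = 'abc'; SET @v = 'abc';
             SELECT '%' + @c + '%' AS char_pattern, '%' + @v + '%' AS varchar_pattern;"

      The char version comes back as '%abc      %' - a pattern that only matches values containing those trailing spaces - while the varchar version is the '%abc%' you'd expect.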

      Got another interesting business idea over the weekend - it's based on a few assumptions, and it's for the investment/financial industry.  The assumptions are:
      - history repeats itself.
      - the stock market, like history, is cyclical in nature
      - investment funds are just as much marketing as they are statistical science
      - people will buy anything that is marketed well, whether it provides them with tangible value or not (the value of a stock is all about the perception of value)
      The idea is simple - create a fund that is based on the cyclical nature of the stock market.  Predictable things happen when interest rates go up, or when commodities are high.  Convince people that you can harness the 'value' in the history of what the stock market does when certain indicators are up, and you'll have a winning fund.

      Wednesday, April 21, 2010

      Network isolation

      For the second time this week, production has gone down for an extended period of time.  This time it was because of this bug introduced by McAfee.  It was proliferated through all of our environments (apparently) by SCCM.  I'm not sure why we are putting McAfee updates straight into production without testing them in a staging environment first.  Seems to me like the process is wrong somewhere - either on our side, McAfee's side, or both.
      Before we realized this was a McAfee bug, it seemed that this 'virus' had propagated to production from the lower environments.  Having the production environment on a network isolated from the rest of our development and test environments would have protected us, to some degree, from this kind of propagation.  I have (officially) suggested this more than once, but resourcing and other priorities have gotten in the way of making it a reality.
      For future reference though, here's just another reason to have a production environment on an isolated network.

      Tuesday, April 20, 2010

      Single Points of Failure

      Within the last 8 months, all of the production servers for a client of mine were moved out of the building they'd been in and into a 'bunker', or co-lo.  There was a variety of reasons for doing this.  As an architect choosing a co-location provider and planning the move, you'd want to make sure as many systems as possible are redundant - power supply, UPS system, networking, etc.

      Well, this week everyone painfully discovered a system that wasn't redundant (or if it was, it wasn't as redundant as it should have been).  Apparently the HVAC system went down.  This resulted in servers getting too hot and consequently having to be shut down.  Production servers.

      Everything was resolved in a couple of hours, but it just goes to show that there is always one system that gets forgotten.

      I've actually run into another situation like this before.  It wasn't nearly as big an issue for our shop, as it only affected development environments.  However, the Calgary airport (!!), along with most of NE Calgary, was without a network connection just as long as we were.  Apparently a construction crew doing some digging at an intersection with a backhoe accidentally cut a main networking cable that supplied most of NE Calgary with its network connection.  Kind of makes you wonder if municipalities should be considering redundant underground networking, doesn't it?

      Network architects work hard to eliminate as many single points of failure in their systems as they can.  Some are hard to control, though.  Recently McAfee released a virus library update that wasn't tested properly and, as a result, shut down a plethora of systems across North America.  It flagged a Windows DLL as a virus (a false positive) and sent thousands of systems into a perpetual reboot.  It would be an interesting bit of process engineering to figure out the best way to protect your production systems from the most current viruses AND protect them from bugs like this sent out by the 'big' virus-scanning companies at the same time.

      Monday, April 19, 2010

      Google Tools post

      Good link here that lists the top Google Tools for website developers:
      http://sixrevisions.com/tools/the-top-15-google-products-for-people-who-build-websites/
      Personally, I start to get lost in the plethora of tools out there.  Just getting AdWords properly integrated with Analytics is a headache I don't really have time to deal with (when I have so many other higher-priority things to manage).  I wish some of these tools were a little more straightforward to use.
      My hesitation in making the investment to learn one well is that it will change too soon and too much for me to keep up with (I do have a 9-5 job along with at least half a dozen clients on the side).  I need to focus my attention where I get the most return on my time investment - and learning how every single Google tool works doesn't rate that high.
      It is a good post for reference, though. I like knowing what all the different tools are out there.

      Wednesday, April 14, 2010

      More Ideas

      Here are some more business ideas that I've had lately.  I want to write them down so I don't forget them, as they are fun to discuss with people.
      - An automatic window washer for houses. Living in Calgary with all the construction and wind, windows don't stay clean very long. Wouldn't it be nice to have a little 'thing' (like the robot vacuum cleaner) that you could run on your windows?
      - A job website specifically targeted to the baby boomers and finding them good part-time work.
      - Any kind of inventions/devices that will help baby-boomers stay in the workplace longer. I'm sure their feet would get sore after serving customers for a couple of hours in Tim Horton's. Wouldn't it be interesting if you could install 'chairs' that ran on a 'track' behind the counter in fast food restaurants for the servers to sit on?
      - I learned at a conference this past weekend that apparently there is no Canadian company offering servers on demand in 'the cloud' like Amazon's EC2.  There is a potential business opportunity here for a smart Canadian company with a bit of infrastructure in place.  Eucalyptus is basically an open-source platform that mimics the cloud Amazon runs.  Set a few servers up with that and you're ready for Canadian customers.  The advantage here is that other Canadian businesses are hesitant to use American cloud solutions because of the American federal government's attitude toward privacy.  Host your cloud in Canada and don't worry about the American 'Feds' keeping an eye on your business data. :-)

      Tuesday, April 6, 2010

      More lessons learned (not mine though)

      Here's another good link to a lessons learned article: http://java.dzone.com/articles/lessons-learned-taking-project It talks about how to communicate, what to have in place, and how to manage client expectations on an 'emergency fix' project. I have a special interest in this type of thing as I'm testing a new business idea here: http://www.netfocusconsulting.com/Experts.jsp

      Saturday, March 27, 2010

      Considerations with a site re-design

      I just completed a site redesign for the Alberta Roofing Contractors Association.  They wanted to refresh the pages a bit, simplify the navigation, and add Google Custom Search.  Here are some things I learned...

      We switched from static HTML pages to PHP pages that use simple includes for the header, footer, and navigation.  We also reorganized the folder structure a bit to clean things up.  Consequently, all the pages/PDFs that were previously indexed by search engines 'disappeared'.  I wasn't sure what to do about this at first, but then I found an .htaccess file in the root of the FTP directory on the server, and I added 301 redirects for all the old HTML pages and PDF files.  This worked great.  Lesson learned: a simple site redesign on an established site isn't as simple as it seems at first glance.
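      The entries themselves are one-liners.  Here's roughly what they looked like (hypothetical old and new paths):

             # append a 301 redirect for each relocated page and PDF
             cat >> .htaccess <<'EOF'
             Redirect 301 /about.html /pages/about.php
             Redirect 301 /docs/roofing_guide.pdf /resources/roofing_guide.pdf
             EOF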

      I found re-indexing Google Custom Search to be a pain.  I had to add the site into Google's Webmaster Tools (both www.arcaonline.ca and arcaonline.ca), set the old pages up to be removed from the cache, create and upload a sitemap.xml file, create and upload a robots.txt file, and add all the new URLs for pages and PDFs in the 'explicit' indexer.  It was rather cumbersome, but I got it to work in the end.
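      Neither the sitemap nor the robots file needs to be fancy.  Something along these lines is enough to get the crawler going (abbreviated here to a single URL entry):

             # minimal sitemap.xml and robots.txt for the re-indexing
             cat > sitemap.xml <<'EOF'
             <?xml version="1.0" encoding="UTF-8"?>
             <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
               <url><loc>http://www.arcaonline.ca/index.php</loc></url>
             </urlset>
             EOF
             cat > robots.txt <<'EOF'
             User-agent: *
             Sitemap: http://www.arcaonline.ca/sitemap.xml
             EOF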

      There were a number of compatibility issues between the search form and the Flash on the home page that we had to work around using CSS hacks for IE (the *+html selector, which only IE7 applies).

      Using a wrapper div in CSS is a big help in some cases for making background images display properly in all situations.  Something similar to this:
      #wrapper {
      /* zero the margins and padding, and span the full width */
      margin: 0 auto;
      padding: 0;
      width: 100%;
      /* anchor the background to the top centre so it lines up at any resolution */
      background-image: url(../images/bg.jpg);
      background-repeat: no-repeat;
      background-position: center top;
      }

      Wednesday, March 17, 2010

      Automatic Deployments taken to a new level

      I was doing some research for an article I was thinking about writing on automated deployment and came across this fantastic blog post by Grig Gheorghiu.  He talks about how OpenX uses Amazon EC2 (a virtual machine cloud) to automatically provision and terminate servers of different types, and about the different monitoring solutions they use; in another post he talks about the different automated deployment technologies that can be used (push vs. pull, Ruby, Python, etc.).
      I personally liked the comments about 'never having to log onto a production server', and "There's nothing like real-world network traffic, and I mean massive traffic -- we're talking hundreds of millions of hits/day -- to exercise your carefully crafted system infrastructure."
      All the comments at the bottom are really good as well.

      Tuesday, February 23, 2010

      CPU's and Sql Server

      Once again in our staging environment, getting ready for a release to production, we were seeing unexplained slow response times in our AgilePoint application when compared against the production or development envs.  After some digging, we discovered that the production DB server that serviced the AgilePoint DB had twice as many CPUs as the dev DB server, and 4 times as many CPUs as the staging server.

      What I didn't know is that in SQL Server you can assign/limit how many CPUs the database engine can use.  This setting is in Server Properties on the Advanced tab, and is called 'Max Degree of Parallelism'.  Like many of the other settings in Server Properties in SQL Server, 0 equates to 'unlimited' - or, in this case, 'all processors'.
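      You can check (or change) the same setting from a query instead of the GUI.  A sketch via sqlcmd (hypothetical server name):

             # show the current 'max degree of parallelism' value
             # (it's an advanced option, so expose those first)
             sqlcmd -S myserver -Q "EXEC sp_configure 'show advanced options', 1;
             RECONFIGURE;
             EXEC sp_configure 'max degree of parallelism';"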

      Thursday, February 18, 2010

      The importance of Process (adodb.dll and SSRS)

      We have a number of different development environments.  Deployments follow a natural progression through these environments before they get put into production.  We were getting ready for a release and doing deploys and testing in our staging environment (the last environment before prod in our progression).  One morning recently, our NUnit tests failed after a run, complaining that adodb.dll wasn't registered.

      We were surprised, because adodb.dll is a Microsoft DLL and we certainly weren't deploying any DLLs like that - for that matter, we hadn't done any deploys in the last day or two.  So why were our NUnit tests suddenly failing?

      Looking quickly through the event logs, we discovered that SSRS (SQL Server Reporting Services) had been (unbeknownst to us) uninstalled on our transaction server during the evening.  I went and spoke with the infrastructure group who did the uninstall, and they got very concerned, as they were ready to uninstall SSRS in production the next evening.
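
      Incidentally, if you ever need to confirm whether an assembly is still in the GAC, gacutil from the .NET SDK will list any matches (assuming it's on your path):
             # list GAC entries whose name matches 'adodb'
             gacutil /l adodb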

      In the end, it appears that uninstalling SSRS did, in fact, remove adodb.dll from the GAC on our transaction server.  Our infrastructure group cancelled the uninstall in production.  We are trying to impress upon them (and others) the importance of communicating changes like this, even when the assumption is that there's no impact.  Fortunately, our process of percolating changes up through the different environments caught this change before it blew us up in production.  However, it wasn't caught in the lower environments sooner because it was unadvertised.  Communication is key!