I recently moved several client sites off servers they had been on for the better part of a decade to new VPS boxes. They had MySQL backends and used a Java/Velocity CMS (InfoGlue) hosted on Tomcat, with an AJP connector to Apache web servers on the front end. One site uses htdig for its search implementation.
Issues (with resolutions) that I ran into...
1. GZIP'ing a running Tomcat instance
Before one of the sites was moved, a client tried to make a backup of the Tomcat instance it was running under. They executed:
gzip -r filename.gz ./tomcat/
which ended up recursively zipping every file under the tomcat directory. This crashed the running instance of Tomcat. (Trying to tar or gzip a running Tomcat instance probably isn't a great idea: copy it first and then tar/gzip the copy.) Everything was quickly unzipped and they tried to restart Tomcat, but it wouldn't start. After an hour of searching, I was asked to help. I poked and prodded for about 90 minutes and discovered that all the webapps were missing the file. I don't know how or why gzipping and unzipping would make these files disappear, but they weren't there anymore. So I replaced them, restarted Tomcat, and everything came up.
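The copy-first approach from the parenthetical above might look something like this (paths are illustrative, and the mkdir line is only a stand-in so the sketch is runnable as-is):

```shell
# Stand-in for the real instance, so this sketch is self-contained:
mkdir -p ./tomcat/webapps

# Snapshot the directory first, then compress the snapshot,
# leaving the live Tomcat files untouched.
cp -a ./tomcat /tmp/tomcat-copy
tar -czf tomcat-backup.tar.gz -C /tmp tomcat-copy
```

The running instance never has its files rewritten in place, which is what bit the client here.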
2. Problems with mysqldump
I had a couple of problems backing up the MySQL database. First, mysqldump would execute but not put all of the tables into the backup file; it turned out I had run out of disk space. Second, mysqldump doesn't order tables for relational integrity when it creates the backup script, so when I tried to import it into the new database I ran into errors complaining about foreign keys. To resolve this, I had to add set foreign_key_checks=0; at the top of my backup script. This allowed me to import it successfully.
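One way to apply that fix without hand-editing a large dump is to prepend the statement to a copy of the file (backup.sql is a stand-in name here, and the CREATE TABLE line just simulates mysqldump output so the sketch runs on its own):

```shell
# Stand-in dump file; in practice this is mysqldump's output.
echo "CREATE TABLE child (parent_id INT);" > backup.sql

# Prepend the foreign-key toggle so the import no longer trips over table ordering.
{ echo "SET foreign_key_checks=0;"; cat backup.sql; } > backup_fk_off.sql

# Then import as usual:
# mysql -p -h localhost dbname < backup_fk_off.sql
```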
3. MySQL connection pool exhausted
After getting the db imported and the web server moved over, the application seemed to be running fairly nicely on the new VMs. Then I left it alone for a couple of hours. When I came back it was throwing connection-pool-exhausted exceptions all over the place. I could clear them by logging back into InfoGlue's cms instance, but that wasn't the right resolution for me. So I did some checking. Many people suggest setting the wait_timeout parameter in MySQL's my.cnf file higher or lower. I tried that and it didn't seem to work. What ended up working was adding a line to my database.xml file:
<param name="validation-query" value="select * from someTable" />
The pool runs this query against the database to validate connections, which keeps them from sitting idle and going stale.
4. MySQL Data Directory Change
While trying to resolve the previous issue I was changing and adding parameters in the MySQL my.cnf file. I did a copy/paste that included a number of properties, one of which was the pointer to the data directory for MySQL. This pointer was different from the one I was actually using. As a result, when I restarted MySQL, all of my databases, tables, users, and data were gone. I freaked. Then, after thinking things over and checking through the files one more time, I realized the mistake I had made and commented that line out. Restarting mysqld brought all my databases, users, and data back again. Whew.
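The culprit was a line along these lines in my.cnf (the path shown is illustrative, not the one from my actual paste); commenting it out restored everything on restart:

```ini
[mysqld]
# This pasted setting silently repointed MySQL at a different, empty data directory.
# Nothing was deleted; the server was simply looking in the wrong place.
#datadir=/var/lib/mysql-other
```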
5. Getting HtDig going again
The original installation of HtDig on the old server hadn't re-indexed in over two years because cron had been broken, and I didn't have the root password to fix that, so the client was very interested in getting it running again on the new server. I had no previous experience with HtDig.
I copied over all the files I could find related to HtDig on the old server and installed them on the new box. After a few tries, re-indexing worked, but I still had a problem displaying the search results page on the web site. It turned out that I was missing a virtual directory configuration in my httpd.conf file for the directory that htsearch runs from (as a CGI script). The only reason I figured that out was by testing with lynx (the command-line web browser). After fixing that, I got my newly indexed results displaying on the web site.
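The missing piece was something along these lines, a CGI alias for the directory htsearch lives in (the paths are assumptions, and the access directives shown are Apache 2.2-era syntax):

```apache
# httpd.conf: map /cgi-bin/ URLs to the directory holding the htsearch CGI binary
ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"
<Directory "/var/www/cgi-bin/">
    Options +ExecCGI
    Order allow,deny
    Allow from all
</Directory>
```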
6. Using /etc/hosts helps
I've found that the /etc/hosts file is a big help in moving sites like this - whether you want to test the site while the live site is still running, or quickly configure pointers to a server dependency.
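For example, a single line in /etc/hosts lets you browse the moved site on the new box before DNS is switched over (the IP is from the documentation range and the domain is a placeholder):

```
# /etc/hosts: point the production hostname at the new server, for this machine only
203.0.113.10    www.example.com example.com
```

Removing the line puts you back on the live site, which makes side-by-side checking easy.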
7. MySQL table names are case sensitive (on Linux)
One of the website moves I did involved porting a MySQL database from a Windows box to a CentOS box. The dump on the Windows box lower-cased all of the table names. I didn't pay too much attention to this at the time. They imported the same way.
When I started up my Tomcat server, it tried to start but threw errors related to the JDBC connection pool and a ValidateObject. After a bit of googling I discovered that this is related to the validation query (the query I wrote about earlier that checks each connection every so often to make sure it hasn't gone stale). I tried running that query right on the MySQL box and it would only run when the table name was all lower case: on Linux, MySQL table names map to file names and are therefore case sensitive, so the lower-cased import no longer matched the camel-case name in my query. I changed all my table names back to camel case and things worked.
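The renames themselves can be done with RENAME TABLE; the table name below is a made-up example, not one from the actual site:

```sql
-- Rename the lower-cased import back to the camel case the application expects.
RENAME TABLE sometable TO someTable;
```

An alternative would have been lower-casing the names in the validation query and application config instead, but renaming the tables kept the database matching the code.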
8. Issues with unknownDefaultHost and serverNodeName in Tomcat
My inability to set the serverNodeName was resolved by adding a line in my hosts file to point the IP of my box at the Host variable found when I run the 'set' command in CentOS. My issue with unknownDefaultHost was resolved by going into the server.xml file and editing the Engine element's defaultHost attribute, changing it from my old domain name to 'localhost'.
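The server.xml change looks like this (other attributes and the enclosed Host definitions are omitted):

```xml
<!-- server.xml: point the Engine's defaultHost at localhost instead of the old domain -->
<Engine name="Catalina" defaultHost="localhost">
    <!-- Host elements, Realm, etc. -->
</Engine>
```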
9. MySQL error 1153 - Got packet bigger than 'max_allowed_packet' bytes
Got this error a couple of times with different DB imports. To resolve it I needed to go to the my.cnf file (usually found in /etc/) and raise the max_allowed_packet setting for the mysqld (mysql daemon). Then I had to restart the daemon (/etc/init.d/mysqld restart) and I could run my import (mysql -p -h localhost dbname < dbImportFile.sql).
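The my.cnf entry in question looks like this; the 64M value is just an example, and it needs to be large enough to exceed the biggest packet in your import:

```ini
[mysqld]
max_allowed_packet=64M
```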