Friday, August 2, 2013

MySQL Backup and Migration

Everybody talks about doing it, but very few developers follow through - you guessed it, data backups! Perhaps it's a matter of getting burned at least once, or having proper company policies in place (aka having a boss breathe down your neck about it), but the practice remains surprisingly elusive. In any case, over the weekend I had to research how to migrate a sizable MySQL database, so below are a few tips from the trenches. In the end, the entire process is surprisingly simple.

MySQL tools & strategies

As popular as MySQL is, there aren't many third-party backup tools that will do a proper backup of your data. If you want to avoid data corruption, half-committed transactions, and many other problems, you're best off going with one of the standard backup utilities: mysqldump, mysqlhotcopy, mysqlsnapshot, and if you have the bucks, ibbackup. To select the proper tool for the job, first answer two questions: online vs. offline backup, and data dump vs. raw.

Online vs. Offline backups

Online backups are often the preferred method as database downtime may be an unacceptable option. Having said that, offline backups are usually faster and less error prone as we do not have to worry about running transactions, table locks, orphaned processes, and other consistency problems. If you can afford to have a brief period of downtime, or if you're lucky enough to have master-slave replication, offline is the way to go.

Data dump vs. Raw backups

A data dump results in a sequence of SQL statements which can be run against any database to recreate the database structure and the data itself. mysqldump is the premier tool in this space, and it can be used on any table type, locally and even over the network. However, data dumps incur a lot of overhead with extra SQL syntax, result in larger data files, are much more CPU intensive, and most importantly, they require a full index rebuild when the data is restored!
Arguably the most efficient way to backup your database is through a raw snapshot of the MySQL files as they exist on disk. Because we are skipping all the conversion steps, the process is much more efficient than dumps. To perform a proper backup of a MyISAM table, you will have to copy the data and the index files; for InnoDB you will need to backup the entire tablespace and the associated transaction logs.
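As a rough sketch, an offline raw backup might look like the following, assuming the default /var/lib/mysql data directory (paths and the service command vary by distribution and MySQL version):
# stop the server so nothing is written while we copy (offline, raw)
sudo service mysql stop

# MyISAM: copy the per-table definition (.frm), data (.MYD) and index (.MYI) files
cp -a /var/lib/mysql/database_name /backup/location/

# InnoDB: also copy the shared tablespace and the transaction logs
cp -a /var/lib/mysql/ibdata* /var/lib/mysql/ib_logfile* /backup/location/

# bring the server back up
sudo service mysql start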

mysqldump / mysqlhotcopy / mysqlsnapshot / ibbackup

mysqldump - (online, dump) - the most commonly used tool in the toolkit. It performs a full data dump from an online database by locking the tables and writing a hefty file to disk or a network location. It is ideally suited for small databases, as the process is not very efficient.
# typical mysql dump backup and restore usage
mysqldump -u root -pPassword -x --all-databases > db_dump.sql
mysql -u root -pPassword < db_dump.sql

# dump into the 'backup' folder (server must be local): one .sql (structure) and one .txt (data) file per table
mysqldump -u root -pPassword -T backup --fields-terminated-by=',' database-name

# compress the dumped data on the fly
mysqldump -u root -pPassword --all-databases | bzip2 -c > db_dump.bz2
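To restore from the compressed dump, pipe it back through bzip2 (a sketch using the same credentials as above):
# restore a compressed dump
bunzip2 -c db_dump.bz2 | mysql -u root -pPassword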

mysqlhotcopy - (online, raw) - will perform a full raw backup of any database consisting of ISAM or MyISAM tables. It operates by acquiring a read lock on all tables, copying them, and then releasing the lock.
# perform an online backup into /backup/location
mysqlhotcopy -u root -p password database_name /backup/location

mysqlsnapshot - (online, raw) - a great tool to obtain a full database snapshot of any MySQL database without taking it offline. You can configure it to compress the data, and/or provide separate tar files for each database. The only downside: like mysqlhotcopy, it supports MyISAM tables only.
# save a full database snapshot of an online database into /backup/location
mysqlsnapshot -u root -pPassword -s /backup/location 

# restore a snapshot
tar -xvf /backup/location/db.tar

ibbackup - (online, raw) - will perform an online backup of InnoDB and MyISAM tables on any MySQL database. A great tool, but it comes with a price tag. Then again, if you're a heavy InnoDB user, it may well be worth the price.
# perform online backup of MyISAM / InnoDB tables
ibbackup /etc/my.cnf /etc/ibbackup.cnf

# restore recent backup (as configured in ibbackup.cnf)
ibbackup --restore /etc/ibbackup.cnf

cp / scp / nc - (offline, raw) - if you can afford to take your database offline, raw backups are as simple as doing a copy, or a remote transfer to a different machine. It may sound crude, but it is arguably the safest way to get a snapshot of your data!
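For example (a sketch; the data directory path and backup-host are placeholders for your own setup):
# take the server offline, ship the raw data directory to another machine, and restart
sudo service mysql stop
scp -r /var/lib/mysql backup-host:/backup/location/
sudo service mysql start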

Network backups: Netcat & mysqldump

For full safety you should back up your data on multiple drives, and even better, on multiple computers. Thankfully, this process is remarkably simple with the help of netcat, or even mysqldump itself:
# Replicate / populate a remote database in a single step (redirect data dump)
mysqldump --opt --compress --user=username database | mysql --user=username2 --password=p2 --host=hostB -C -D database

# data backup with netcat
  # backup recipient - listen on port 6000, write received data to backup.bz2
  nc -l 6000 > backup.bz2

  # backup initiator - dump the database, compress, and send to hostB:6000
  mysqldump --opt -u user1 -t database | bzip2 -c | nc -w1 hostB 6000

It can be a little overwhelming at first, but once you wrap your head around online vs. offline, and raw vs. dump, it's not all that bad. And let me tell you, once automated, you also tend to sleep far better at night!

MySQL Backup with Version Control Script

You do need to initialize a Git repository at /path/to/dbbackups first, then add a crontab entry that calls the backup script.
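Something along these lines takes care of the one-time setup and scheduling; the Git remote URL and the script path are placeholders for your own environment:
# one-time setup: create the backup directory and initialize the Git repository
mkdir -p /path/to/dbbackups
cd /path/to/dbbackups
git init
git remote add origin git@your-git-server:dbbackups.git

# crontab entry: run the backup script (below) every night at 2 AM
0 2 * * * /path/to/backup_databases.sh >> /var/log/db_backup.log 2>&1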


#!/bin/bash  
#SETTINGS
MYSQLUSER=<insert mysql user>
MYSQLPASSWORD=<insert mysql user password>
DBNAMES=("database_1" "database_2" "database_3")
BACKUPDIR=/path/to/dbbackups

#DUMP THE DATABASES
for name in "${DBNAMES[@]}"
do
  # dump each database to its own file; --skip-opt avoids extended inserts, keeping the output diff-friendly
  /usr/bin/mysqldump --skip-opt -u"$MYSQLUSER" -p"$MYSQLPASSWORD" --quote-names --complete-insert --create-options --quick --lock-tables=false --skip-add-locks "$name" -r "$BACKUPDIR/$name.sql"

  # commit the fresh dump to version control
  cd "$BACKUPDIR" || exit 1
  /usr/local/bin/git add "$name.sql"
  /usr/local/bin/git commit -m "mysql database backup $name `date`"
  echo "--- mysql database backup $name `date` completed"

done
/usr/local/bin/git push
echo "--- mysql database backups pushed to remote repository"

Tuesday, July 16, 2013

Enabling Root user in Ubuntu

Enabling

To actually enable root logins, set a password for the root account with sudo passwd root. This will prompt for a new root password, and once you confirm it, you can start using the root account to log in.

Reverting back

If you want to disable the root account in Ubuntu, you need to lock it using the following command: sudo passwd -l root

Alternatives to root login

If you want to work in a root console you can also use sudo -i.
Only do this if you know what you are doing; in general it is better to stick with sudo.

The graphical version needs setting up first:

gksu gnome-terminal will open a terminal with a root prompt.
passwd root will allow you to create a password for root.
gksu users-admin will open the users admin screen.

Monday, May 13, 2013

Linux Error While Booting: PCI: not using MMCONFIG and system hangs at “Starting Udev”

I'm trying to install Oracle Linux 5 on VMware Workstation. The installation itself went fine, but when the operating system reboots to finalize the installation, it hangs at "Starting Udev".

Follow the steps below to fix the error:

Restart the Linux guest OS;
Press any key at the GRUB boot menu, press e to edit, and add clock=pmtmr divider=10 to the "kernel" line as shown below; press Enter, and then b to boot;
kernel /vmlinuz-2.6.18-1.2257.fc5smp ro root=/dev/sdb1 clock=pmtmr divider=10 rhgb quiet
Once it boots to the console, make the change permanent by editing the kernel line in /boot/grub/grub.conf in the same way.
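For reference, the relevant stanza in /boot/grub/grub.conf ends up looking roughly like this (the title and the kernel/initrd file names are assumptions based on the example above and will differ on your system):
title Oracle Linux Server (2.6.18-1.2257.fc5smp)
        kernel /vmlinuz-2.6.18-1.2257.fc5smp ro root=/dev/sdb1 clock=pmtmr divider=10 rhgb quiet
        initrd /initrd-2.6.18-1.2257.fc5smp.img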