
MySQL help with huge database

Hi, I am looking for ways to improve database performance when I have to modify a large table (several million rows), e.g. by adding a column. Currently this takes several hours, which is too slow; the bottleneck is disk I/O. I am considering either spreading the table over several InnoDB files on several disks (a config sketch of what I mean is at the end of this post), or going to RAID 5 or RAID 10, if that will give me better write performance.

The database is 130 GB, and the problem table (which I make periodic changes to) is the largest table on the server. I cannot have 3 hours of downtime every time I make a change, and adding blank fields now (to be used later, when a new field is needed) is not an option. Each time I add a column, the CPU sits in a high (80%) I/O wait state for about 3 hours.

I have a hack which would allow me to split the large table into multiple smaller tables based on some criterion (for example, forumID or such). Here are the ideas so far; I would like to know which is best, and I am open to new ones:

1. Split the table into 3 or 5 smaller tables, each on its own disk (see the SQL sketch at the end of this post). The disk I/O would then not be so bad, and the table change might only take 1 hour. But this might not work, because the change (adding a column) might be applied serially, meaning only one disk is written to at a time. (Then again, maybe it will work if I launch 3 different scripts, one to update each table, all at once; a shell sketch for that is also at the end.)

2. Use RAID 5 or RAID 10 with 3 or 5 disks. This again might not help at all, because of the same issue with MySQL writing serially.

Any thoughts on these, or any new ideas? I am using the latest MySQL 5.0.45 with the InnoDB engine on Debian etch Linux.

Please help. Thanks.

(Sorry I didn't translate this: I prepared the text in English for several forums and thought I would post it here too, but I ran out of time for the translation.)
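A sketch of what I mean by "several InnoDB files on several disks": as far as I can tell from the manual, if innodb_data_home_dir is set to an empty string, the paths in innodb_data_file_path may be absolute, so the shared tablespace files can sit on different physical disks. The paths and sizes below are made up:

[mysqld]
# empty home dir so the file paths below can be absolute
innodb_data_home_dir =
# spread the shared tablespace over three disks; the last file autoextends
innodb_data_file_path = /disk1/ibdata1:50G;/disk2/ibdata2:50G;/disk3/ibdata3:50G:autoextend

One worry: I believe InnoDB fills these files in order, so a single table's pages could still end up mostly on one disk, which is why I am not sure this helps with the serial-write problem.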
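For idea 1, this is the kind of split I have in mind. Table and column names (posts, forumID) are placeholders for my real schema:

-- split the big table into three smaller ones by forumID range
CREATE TABLE posts_1 LIKE posts;
CREATE TABLE posts_2 LIKE posts;
CREATE TABLE posts_3 LIKE posts;

INSERT INTO posts_1 SELECT * FROM posts WHERE forumID <= 100;
INSERT INTO posts_2 SELECT * FROM posts WHERE forumID BETWEEN 101 AND 200;
INSERT INTO posts_3 SELECT * FROM posts WHERE forumID > 200;

With innodb_file_per_table enabled, each split table gets its own .ibd file, which is what would (in theory) let me put them on separate disks.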
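And here is what I mean by launching 3 scripts at once: a minimal shell sketch (again with placeholder names and credentials) that fires the same ALTER at each split table in parallel:

#!/bin/sh
# add the new column to each split table; the three mysql
# clients run in the background, so the ALTERs overlap
mysql -u myuser -pmypass mydb -e "ALTER TABLE posts_1 ADD COLUMN newcol INT" &
mysql -u myuser -pmypass mydb -e "ALTER TABLE posts_2 ADD COLUMN newcol INT" &
mysql -u myuser -pmypass mydb -e "ALTER TABLE posts_3 ADD COLUMN newcol INT" &
wait
echo "all three ALTERs finished"

Whether the rebuilds really overlap on disk depends on each table sitting on its own spindle, which is the part I am unsure about.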