MySQL Tuning Settings

MySQL

Recommendations:

You're running in a VM environment, and that alone is a huge performance hit. I don't care what anyone says – MySQL (or any DB server) running in a VM is slow. We have tested it against dedicated machines, and the dedicated machines run circles around VMs. VMs are not a viable solution for production database servers. – Van Apr 4 at 8:38

Oh, and you must tune your OS too. There are several important settings you should configure in the OS to get optimal performance, such as swappiness, stripe size, file system type, and file system mount options (like noatime,nodiratime for ext4).
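As a rough sketch of what that can look like (the device path, mount point, and values here are only examples – tune them for your own hardware and workload):

# /etc/sysctl.conf – keep the kernel from swapping out MySQL memory too eagerly
vm.swappiness = 1

# /etc/fstab – mount the MySQL data volume without access-time updates (example device and mount point)
/dev/sdb1  /var/lib/mysql  ext4  noatime,nodiratime  0  2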


Here is how I have resolved this problem:

Add the parameter rewriteBatchedStatements=true to the JDBC URL so that a batch insert can be submitted as one SQL statement like the following (but you should control the batch size):

insert into table_name(…) values(1,2,3),(4,5,6),(7,8,9);
This saves a lot of TCP round trips. If you don't add this parameter, the MySQL connector defaults to sending and committing the statements one by one:

insert into table_name(…) values(1,2,3);
insert into table_name(…) values(4,5,6);
insert into table_name(…) values(7,8,9);
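For example, the connection URL could look something like this (host, port, and database name are placeholders):

jdbc:mysql://localhost:3306/mydb?rewriteBatchedStatements=true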

When exporting MySQL data using mysqldump, you can add the parameter net_buffer_length=8192 to control the size of each SQL line. Without this parameter, the insert statement will look like this:

insert into table_name(…) values(1),(2),(3),(4),(5)….
All the table data is appended to a single SQL statement, which is hard to import back when the data is very big and may fill up the undo log.
If you add --extended-insert=false, the SQL will look like:

insert into table_name(…) values(1,2,3);
insert into table_name(…) values(4,5,6);
insert into table_name(…) values(7,8,9);
That may take a long time to import when the data is very big.
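For illustration, the two dump styles could be produced with something like the following (user and database names are placeholders):

mysqldump -u user -p --net_buffer_length=8192 mydb > dump_multi_row.sql
mysqldump -u user -p --extended-insert=false mydb > dump_one_row_per_insert.sql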


A lot of details that would help fully understand the cause of the problem are missing, such as:

MySQL version
Disk type and speed
Free memory on the server before you start MySQL server
iostat output before and at the time of the mysqldump.
The parameters you used to create the dump file in the first place.
and many more.

So I'll guess that your problem is in the disks: I manage 150 MySQL instances, one of them with 3 TB of data, and usually the disk is the problem.

Now to the solution:

First of all – your MySQL is not configured for best performance.

You can read about the most important settings to configure at Percona blog post: http://www.percona.com/blog/2014/01/28/10-mysql-settings-to-tune-after-installation/

Especially check the parameters:

innodb_buffer_pool_size
innodb_flush_log_at_trx_commit
innodb_flush_method
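As a sketch, in my.cnf those settings might look like the following (the values are only examples – the buffer pool in particular should be sized to your RAM, often around 70–80% on a dedicated database server):

[mysqld]
innodb_buffer_pool_size = 8G        # example value, depends on available RAM
innodb_flush_log_at_trx_commit = 2  # trades a little durability for faster writes
innodb_flush_method = O_DIRECT      # bypass the OS cache for InnoDB data files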

If your problem is the disk, then reading the dump file from the same drive makes the problem worse.

And if your MySQL server starts to swap because it does not have enough RAM available, your problem becomes even bigger.

You need to run diagnostics on your machine before and at the time of the restore procedure to figure that out.
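For example, something like the following, run before and again during the restore, will show whether the disks or memory are the bottleneck:

iostat -x 5    # per-device utilization and I/O wait, sampled every 5 seconds
vmstat 5       # swap in/out and CPU wait columns
free -m        # how much RAM is actually free before mysqld starts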

Furthermore, I suggest you use another technique for the rebuild task, which works faster than mysqldump.

It is Percona XtraBackup – http://www.percona.com/doc/percona-xtrabackup/2.2/

You will need to create the backup with it and restore from it, or rebuild from the running server directly using the streaming option.
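A rough sketch of the streaming variant (host names and paths are placeholders, and the exact flags depend on your XtraBackup version – check the documentation before relying on this):

innobackupex --user=DBUSER --password=DBUSERPASS --stream=tar ./ | ssh user@desthost "cat - > /backups/backup.tar"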

Also, starting from MySQL 5.5, InnoDB performs faster than MyISAM. Consider changing all your tables to it.

– answered by Tata, Apr 26 '15 at 15:17

Will changing the tables from MyISAM to InnoDB affect any relations, or cause any damage to my DB? Is there any specific advantage to using MyISAM rather than InnoDB? – DharanBro Apr 28 '15 at 4:30

If you ask any of the MySQL experts, all of them will say no: today there is no advantage in using MyISAM over InnoDB. But you need to verify the code that uses your tables and make sure it does not rely on the table locks that MyISAM performs. – Tata Apr 29 '15 at 9:55

I changed the hard disk to an SSD and it finished in 3 hours. It really saved me. – DharanBro May 5 '15 at 11:51

The biggest problem with standard mysqldump and import isn't really that the hard disk is a bottleneck; the biggest problem is that when you are doing this you actually are inserting all the data into a table again rather than simply copying the data structure. So, you have to recreate the structure. It is a major software limitation that is exacerbated by slow disks. MyISAM has a very real advantage over InnoDB when it comes to backups: you can simply lock and flush the tables and copy the data using cp or scp and it works fine (remember to chown mysql:mysql the files). – Chris Seline May 30 '16 at 17:09
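A minimal sketch of that lock-and-copy idea for MyISAM tables (paths are placeholders; the mysql session holding the lock must stay open while the files are copied):

-- in a mysql client session that stays open for the whole copy:
FLUSH TABLES WITH READ LOCK;
-- from a separate shell: cp -a /var/lib/mysql/mydb /backups/mydb
-- (and chown -R mysql:mysql on the files if you later copy them back)
UNLOCK TABLES;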

@ChrisSeline – you can do the same with InnoDB tables, but while you do this your DB will not be functional. Don't try backing up 1 TB of data this way from a production DB. – Tata Jun 5 '16 at 13:29
Doing a dump and restore in the manner described will mean MySQL has to completely rebuild indexes as the data is imported. It also has to parse the data each time.

It would be much more efficient if you could copy the data files in a format MySQL already understands. A good way of doing this is to use innobackupex from Percona (open source and distributed as part of XtraBackup).

This will take a snapshot of MyISAM tables, and for InnoDB tables it will copy the underlying files and then replay the transaction log against them to ensure a consistent state. It can do this from a live server with no downtime (I don't know whether that is a requirement of yours).

I suggest you read the documentation, but to take a backup in its simplest form use:

$ innobackupex --user=DBUSER --password=DBUSERPASS /path/to/BACKUP-DIR/
$ innobackupex --apply-log /path/to/BACKUP-DIR/
If the data is on the same machine, then innobackupex even has a simple restore command:

$ innobackupex --copy-back /path/to/BACKUP-DIR
There are many more options and different ways of actually doing the backup, so I would really encourage you to have a good read of the documentation before you begin.

For reference on speed, our slow test server, which does about 600 IOPS, can restore a 500 GB backup in about 4 hours using this method.

Lastly: you mentioned what could be done to speed up importing. It mostly depends on what the bottleneck is. Typically, import operations are I/O bound (you can test this by checking for I/O waits), and the way to speed that up is with faster disk throughput – either faster disks themselves, or more of them working in unison.

– answered by AndySavage, Apr 23 '15 at 23:05 (edited Sep 13 at 13:08 by muttonUp)
Make sure you increase your “max_allowed_packet” variable to a large enough size. This will really help if you have a lot of text data. Using high performance hardware will surely improve the speed of importing data.

mysql --max_allowed_packet=256M -u root -p < "database-file.sql"
– answered by koolkoda, Apr 28 '15 at 13:40

max_allowed_packet = 512M is in the config, so making it 256M will actually decrease its size. – Tata Apr 29 '15 at 10:12
One thing you can do is

SET AUTOCOMMIT = 0; SET FOREIGN_KEY_CHECKS=0
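A fuller sketch of wrapping an import in those session settings from the mysql client (the dump path is a placeholder, and unique_checks is an extra setting not mentioned above):

SET autocommit = 0;
SET foreign_key_checks = 0;
SET unique_checks = 0;      -- optional: also skip unique-index checks during the load
SOURCE /path/to/dump.sql;
COMMIT;
SET unique_checks = 1;
SET foreign_key_checks = 1;
SET autocommit = 1;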
And you can also play with the values

innodb_buffer_pool_size
innodb_additional_mem_pool_size
innodb_flush_method
in my.cnf to get you going, but in general you should have a look at the rest of the InnoDB parameters as well to see what suits you best.

This is a problem I have had in the past and I don't feel I have tackled it completely, but I wish I had pointed myself in this direction from the get-go. It would have saved me quite some time.

– answered by fakedrake, Apr 15 '15 at 7:22 (edited Apr 15 '15 at 7:29)

Currently the normal import is in progress; once it is done, let me try this. – DharanBro Apr 15 '15 at 8:31

Setting innodb_buffe_pool_size in my.cnf doesn't start the MySQL server. – DharanBro Apr 21 '15 at 12:47
@DharanBro That's because you mis-spelt it. – EJP Apr 23 '15 at 23:31
Get more RAM, get a faster processor, get an SSD for faster writes. Batch the inserts so they will run faster than a bunch of individual inserts. It’s a huge file, and will take time.

Way 1: Disable autocommit and foreign key checks, as fakedrake suggested.

SET AUTOCOMMIT = 0; SET FOREIGN_KEY_CHECKS=0

Way 2: Use BigDump; it will chunk your mysqldump file and then import it. http://www.ozerov.de/bigdump/usage/

Question: You said that you are uploading? How are you importing your dump? Not directly from the server command line?

I’ve had to deal with the same issue. I’ve found using mysqldump to output to a CSV file (like this):

mysqldump -u [username] -p -t -T/path/to/db/directory [database] --fields-enclosed-by=\" --fields-terminated-by=,
and then importing that data using the LOAD DATA INFILE query from within the mysql client (like this):

LOAD DATA INFILE '/path/to/db/directory/table.csv' INTO TABLE table_name FIELDS TERMINATED BY ',' ENCLOSED BY '"';
to be about an order of magnitude faster than just executing the SQL queries containing the data. Of course, it’s also dependent on the tables being already created (and empty).

You can of course do that as well by exporting and then importing your empty schema first.
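A quick sketch of that schema-only round trip (user and database names are placeholders):

mysqldump -u [username] -p --no-data [database] > schema.sql   # structure only, no rows
mysql -u [username] -p [database] < schema.sql                 # recreate the empty tables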

I'm not sure it's an option for you, but the best way to go about this is what Tata and AndySavage already said: take a snapshot of the data files from the production server and then install them on your local box by using Percona's innobackupex. It will back up InnoDB tables in a consistent way and take a write lock on MyISAM tables.

Prepare a full backup on the production machine:

http://www.percona.com/doc/percona-xtrabackup/2.1/innobackupex/preparing_a_backup_ibk.html

Copy the backed-up files to your local machine and restore them (or pipe them via SSH while making the backup – see the XtraBackup documentation for details):

Restore the backup:

http://www.percona.com/doc/percona-xtrabackup/2.1/innobackupex/restoring_a_backup_ibk.html

You can find the full documentation of innobackupex here: http://www.percona.com/doc/percona-xtrabackup/2.1/innobackupex/innobackupex_script.html

The restoration time will be MUCH faster than reading an SQL dump.



MySQL NDB Data/Memory Usage

If possible, you should re-title your question: the title references InnoDB, but your question is about NDB.

Anyway, when you do deletes in NDB, it just frees up memory space for future use by that table. If you want to really free it up, you can do a rolling restart of the data nodes or run OPTIMIZE TABLE (depending on your version). See: http://docs.oracle.com/cd/E17952_01/refman-5.1-en/mysql-cluster-limitations-limits.html

Relevant quote: "A DELETE statement on an NDB table makes the memory formerly used by the deleted rows available for re-use by inserts on the same table only. However, this memory can be made available for general re-use by performing a rolling restart of the cluster. See Section 17.5.5, 'Performing a Rolling Restart of a MySQL Cluster'.

Beginning with MySQL Cluster NDB 6.3.7, this limitation can be overcome using OPTIMIZE TABLE. See Section 17.1.6.11, 'Previous MySQL Cluster Issues Resolved in MySQL 5.1, MySQL Cluster NDB 6.x, and MySQL Cluster NDB 7.x', for more information."
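As a sketch, on a version that supports it (the table name is a placeholder):

OPTIMIZE TABLE t1;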

Hope that helps.
