Each slave that connects to a master in a replication setup (in order to read the binary log on the master) must have an account to log in with. This account must be granted the dedicated REPLICATION SLAVE privilege. It is common practice to also grant this account REPLICATION CLIENT, which allows monitoring of the replication setup.
If possible, grant this permission only to a specific host; in this example, node 1 (10.0.0.1):
mysql> GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO
'replication'@'10.0.0.1' IDENTIFIED BY 'password';
When the slave thread executes a query, it does not, by default, record the query in its own binary log; this can be changed with the my.cnf parameter log-slave-updates.
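If the slave is itself to act as a master for further slaves (chained replication), the updates it applies must appear in its own binary log. A minimal my.cnf sketch, assuming binary logging is already enabled on the slave:

```ini
[mysqld]
log-bin = mysql-bin
log-slave-updates
```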
Binary logs can become unmanageable, and the default maximum size for a binary log is 1G. This is tunable with the my.cnf parameter max_binlog_size:
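For example, to rotate binary logs at a smaller size than the 1GB default (the 256M value here is illustrative, not a recommendation):

```ini
max_binlog_size = 256M
```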
Rotating binary logs
It is good practice to automatically delete old binary logs. You can remove binary logs older than two days with the my.cnf parameter expire_logs_days.
The default value for this parameter is 0, which means “no automatic removal”.
expire_logs_days = 2
Removals occur when the MySQL server is started and when the logs are flushed (note that the logs are flushed when the current logfile reaches max_binlog_size, providing a further reason to set this parameter to a relatively small value).
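Old binary logs can also be removed manually from a mysql client without waiting for a restart or flush; for example, to mirror the two-day setting above:

```sql
mysql> PURGE BINARY LOGS BEFORE DATE_SUB(NOW(), INTERVAL 2 DAY);
```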
In addition to the recommendation of storing your binary log on a block device separate from your MySQL data directory, there are some additional tricks you can use to increase performance while logging:
Binary log caching
The my.cnf parameter named binlog_cache_size sets the size of the cache that is used to hold SQL statements before they are inserted into the binary log during a transaction. Big, multi-statement transactions can benefit from an increased value (the default is 32K):
binlog_cache_size = 64M
This buffer is allocated per connection on the first UPDATE or INSERT query.
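To judge whether the cache is large enough, compare the Binlog_cache_disk_use status counter (transactions that spilled to a temporary file on disk) against Binlog_cache_use (total transactions that used the cache):

```sql
mysql> SHOW GLOBAL STATUS LIKE 'Binlog_cache%';
```

If Binlog_cache_disk_use grows noticeably relative to Binlog_cache_use, increasing binlog_cache_size is worth considering.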
For this one-line command on the master to work, the following requirements must be met:
Any existing slave process must be stopped on the slave (STOP SLAVE)
A user account must exist that is able to create databases and tables, and insert rows when connecting to the slave from the master
It must be acceptable to lock the tables on the master for the duration of the operation
This technique copies all data from the master to the slave, including all of the mysql database, with the exclusion of the user table. It is likely that all of the other tables are identical if the software versions are the same.
Ensure that you have replicate-ignore-db = mysql in /etc/my.cnf on the slave if you do not wish to replicate the mysql database, or remove the --ignore-table parameter from the following command (recommended).
How to do it…
The command to execute on the master, in full, is as follows:
[[email protected] mysql]# mysqldump --delete-master-logs --ignore-table=mysql.user
--master-data --lock-all-tables --all-databases -u root
--password='' --hex-blob | mysql -h 10.0.0.2 -u root --password=''
We can explain this command in chunks as follows:
mysqldump: Binary for producing SQL statements from a MySQL database. This means, “take my database and produce a file that contains SQL statements, which would build an identical database if executed”.
--delete-master-logs: Deletes all logs on the master from the moment you start the backup (these are not needed; the slave only requires logs after this point).
--master-data: Includes a CHANGE MASTER TO command within the dump.
--lock-all-tables: Locks all tables on the master during the period of the backup, in order to ensure that every transaction before the time of the backup is included in the dump, and every transaction after the backup is in the binary log. If you have a lot of data, this may involve a very long lock on all tables. Look at the next recipe for a solution.
--all-databases: Backs up all databases on the master.
--hex-blob: If you have BLOBs (binary objects) in your database, this option ensures they are stored in a way that allows them to be imported correctly (that is, stored as hexadecimal characters).
The pipe command (|): Takes output from the command on the left and passes it to the command on the right.
mysql -h 10.0.0.2: Connects to the slave.
-u root --password='x': Uses these credentials to connect.
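The CHANGE MASTER TO statement written into the dump by --master-data looks roughly like this (the file name and offset are illustrative; yours will reflect the master's current binary log position):

```sql
CHANGE MASTER TO
  MASTER_LOG_FILE='mysql-bin.000003',
  MASTER_LOG_POS=1234;
```

Because this statement executes as the dump is loaded, the slave is pointed at the correct starting position automatically.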
Once this command has finished execution, run START SLAVE as shown in the preceding section to start your slave up.
To put the server in read-only mode, simply add the following line to the [mysqld] section in /etc/my.cnf:
read_only = 1
This variable can also be modified at runtime within a mysql client:
mysql> show variables like "read_only";
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| read_only     | OFF   |
+---------------+-------+
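For example, a user with the SUPER privilege can toggle the variable at runtime (note that SUPER users themselves bypass the read-only restriction):

```sql
mysql> SET GLOBAL read_only = ON;
mysql> SET GLOBAL read_only = OFF;
```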
Replication offers benefits for robustness, speed, and system administration:
Robustness is increased with a master/slave setup. In the event of problems with the master, you can switch to the slave as a backup.
Better response time for clients can be achieved by splitting the load for processing client queries between the master and slave servers. SELECT queries may be sent to the slave to reduce the query processing load of the master. Statements that modify data should still be sent to the master so that the master and slave do not get out of synchrony. This load-balancing strategy is effective if non-updating queries dominate, which is the normal case.
Another benefit of using replication is that you can perform database backups using a slave server without disturbing the master. The master continues to process updates while the backup is being made.
There are two reasons why you might want to set replication logging on a per-connection basis:
A thread that makes many small changes to the database might want to use row-based logging. A thread that performs updates that match many rows in the WHERE clause might want to use statement-based logging because it will be more efficient to log a few statements than many rows.
Some statements require a lot of execution time on the master, but result in just a few rows being modified. It might therefore be beneficial to replicate them using row-based logging.
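Since MySQL 5.1, a connection with the SUPER privilege can switch its own logging format, so each thread can pick the cheaper format for its workload:

```sql
mysql> SET SESSION binlog_format = 'ROW';       -- many small changes
mysql> SET SESSION binlog_format = 'STATEMENT'; -- one statement touching many rows
```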
Relay logs have the same format as binary logs and can be read using mysqlbinlog. The SQL thread automatically deletes each relay log file as soon as it has executed all events in the file and no longer needs it. There is no explicit mechanism for deleting relay logs because the SQL thread takes care of doing so. However, FLUSH LOGS rotates relay logs, which influences when the SQL thread deletes them.
When you back up the slave’s data, you should back up these two status files as well, along with the relay log files. (These status files are named master.info and relay-log.info by default.) They are needed to resume replication after you restore the slave’s data. If you lose the relay logs but still have the relay-log.info file, you can check it to determine how far the SQL thread has executed in the master binary logs. Then you can use CHANGE MASTER TO with the MASTER_LOG_FILE and MASTER_LOG_POS options to tell the slave to re-read the binary logs from that point. Of course, this requires that the binary logs still exist on the master server.
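For example, if relay-log.info shows that the SQL thread had executed up to offset 42103 in mysql-bin.000007 (values illustrative), the slave can be re-pointed like this:

```sql
mysql> STOP SLAVE;
mysql> CHANGE MASTER TO
    ->   MASTER_LOG_FILE='mysql-bin.000007',
    ->   MASTER_LOG_POS=42103;
mysql> START SLAVE;
```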
For InnoDB tables, note that FLUSH TABLES WITH READ LOCK also blocks COMMIT operations.
If you are using InnoDB tables, ideally you should use the InnoDB Hot Backup tool, which takes a consistent snapshot without acquiring any locks on the master server, and records the log name and offset corresponding to the snapshot to be later used on the slave. Hot Backup is an additional non-free (commercial) tool that is not included in the standard MySQL distribution.
Without the Hot Backup tool, the quickest way to take a binary snapshot of InnoDB tables is to shut down the master server and copy the InnoDB data files, log files, and table format files (.frm files). To record the current log file name and offset, you should issue the following statements before you shut down the server.
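The statements in question are the usual lock-and-record pair: a global read lock to freeze writes, followed by SHOW MASTER STATUS to capture the coordinates:

```sql
mysql> FLUSH TABLES WITH READ LOCK;
mysql> SHOW MASTER STATUS;
```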
Then record the log name and the offset from the output of SHOW MASTER STATUS as was shown earlier. After recording the log name and the offset, shut down the server without unlocking the tables to make sure that the server goes down with the snapshot corresponding to the current log file and offset:
Note: For the greatest possible durability and consistency in a replication setup using InnoDB with transactions, you should use innodb_flush_log_at_trx_commit=1 and sync_binlog=1 in the master my.cnf file.
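As a my.cnf sketch, those two settings look as follows:

```ini
[mysqld]
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1
```

With both set to 1, the InnoDB log and the binary log are flushed to disk at each transaction commit, which trades some write throughput for durability.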
If you made a backup using mysqldump, start the slave first.
Start the slave server. If it has been replicating previously, start the slave server with the --skip-slave-start option so that it doesn’t immediately try to connect to its master. You also may want to start the slave server with the --log-warnings option to get more messages in the error log about problems (for example, network or connection problems). The option is enabled by default, but aborted connections are not logged to the error log unless the option value is greater than 1.
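A sketch of such a start, assuming the server is launched via mysqld_safe:

```shell
shell> mysqld_safe --skip-slave-start --log-warnings=2 &
```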
With MySQL’s classic statement-based replication, there may be issues with replicating stored routines or triggers. You can avoid these issues by using MySQL’s row-based replication instead.
The FOREIGN_KEY_CHECKS, SQL_MODE, UNIQUE_CHECKS, and SQL_AUTO_IS_NULL variables are all replicated (this has been true since MySQL 5.0).
If you replicate the privilege tables in the mysql database and update those tables directly without using GRANT, you must issue a FLUSH PRIVILEGES on the slaves to put the new privileges into effect. In addition, if you use FLUSH TABLES when renaming a MyISAM table that is part of a MERGE table, you must issue FLUSH TABLES manually on the slaves. These statements are written to the binary log unless you specify NO_WRITE_TO_BINLOG or its alias LOCAL.
You should not set --relay-log-space-limit to less than twice the value of --max-relay-log-size (or --max-binlog-size if --max-relay-log-size is 0). In that case, there is a chance that the I/O thread waits for free space because --relay-log-space-limit is exceeded, but the SQL thread has no relay log to purge and is unable to satisfy the I/O thread. This forces the I/O thread to temporarily ignore --relay-log-space-limit.
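A my.cnf sketch that respects this rule (the sizes are illustrative):

```ini
max_relay_log_size    = 256M
relay_log_space_limit = 1G   # at least 2 x max_relay_log_size
```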
Tell the slave to restrict replication to statements where the default database (that is, the one selected by USE) is db_name. To specify more than one database, use this option multiple times, once for each database. Note that this does not replicate cross-database statements such as UPDATE some_db.some_table SET foo=’bar’ while having selected a different database or no database.
An example of what does not work as you might expect: if the slave is started with --replicate-do-db=sales and you issue the following statements on the master, the UPDATE statement is not replicated, because the default database is not sales:
USE prices;
UPDATE sales.january SET amount=amount+1000;
The main reason for this “just check the default database” behavior is that it is difficult from the statement alone to know whether it should be replicated (for example, if you are using multiple-table DELETE statements or multiple-table UPDATE statements that act across multiple databases). It is also faster to check only the default database rather than all databases if there is no need.
If you need cross-database updates to work, use --replicate-wild-do-table=db_name.% instead.
In the simplest case, when there are no --replicate-* options, the procedure yields the result that the slave executes all statements that it receives from the master. Otherwise, the result depends on the particular options given. In general, to make it easier to determine what effect an option set will have, it is recommended that you avoid mixing “do” and “ignore” options, or wildcard and non-wildcard options.
How can I use replication to improve performance of my system?
A: You should set up one server as the master and direct all writes to it. Then configure as many slaves as you have the budget and rackspace for, and distribute the reads among the master and the slaves. You can also start the slaves with the --skip-innodb, --skip-bdb, --low-priority-updates, and --delay-key-write=ALL options to get speed improvements on the slave end. In this case, the slave uses non-transactional MyISAM tables instead of InnoDB and BDB tables to get more speed by eliminating transactional overhead.