MySQL

Clustering

  • You may have to force a full SST (State Snapshot Transfer) if a node's data is out of sync with the cluster. The most aggressive way to do this is the following:
    • With /var/lib/mysql/galera.cache removed, IST (Incremental State Transfer) is impossible, so the node falls back to a full SST on startup.
service mysql stop
rm -f /var/lib/mysql/galera.cache /var/lib/mysql/grastate.dat
service mysql start
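  • Once the node restarts, watch it rejoin and resync. A quick check (a sketch, assuming a local mysql client login):
mysql -e "SHOW STATUS LIKE 'wsrep_local_state_comment';"  # should settle on 'Synced'
mysql -e "SHOW STATUS LIKE 'wsrep_cluster_size';"         # should match the expected node count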
  • Galera Cluster Node States
variable wsrep_local_state

This variable shows the internal Galera state number. Possible values are:
1 - Joining (requesting/receiving State Transfer) - node is joining the cluster
2 - Donor/Desynced - node is the donor to the node joining the cluster
3 - Joined - node has joined the cluster
4 - Synced - node is synced with the cluster
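To read the numeric state and its human-readable comment together on a live node:
mysql -e "SHOW STATUS WHERE Variable_name IN ('wsrep_local_state', 'wsrep_local_state_comment');"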

MySQL Check

  • This is a sample xinetd config file.
# default: on
# description: mysqlchk
service mysqlchk
{
# this is a config for xinetd, place it in /etc/xinetd.d/
        disable         = no
        flags           = REUSE
        socket_type     = stream
        port            = 9200
        #bind            = 10.10.11.151
        wait            = no
        user            = nobody
        server          = /usr/bin/clustercheck
        server_args     = clustercheck 0k0rN0t 1 /var/log/clustercheck.log 1 /etc/my.cnf
        log_on_failure  += USERID
        only_from       = 10.10.0.0/16 127.0.0.1
        per_source      = UNLIMITED

        # Passing arguments to clustercheck
        # <user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>
        # Recommended: server_args   = user pass 1 /var/log/log-file 0 /etc/my.cnf.local
        # Compatibility: server_args = user pass 1 /var/log/log-file 1 /etc/my.cnf.local
        # 55-to-56 upgrade: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.extra
        #
        # For security, restrict only_from above to just the IPs
        # that actually need to connect.
}
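  • To verify the check, hit the port with curl; clustercheck answers over HTTP with 200 when the node is synced and 503 when it is not. You may also need a "mysqlchk 9200/tcp" entry in /etc/services for xinetd to bind the service name.
curl -i http://127.0.0.1:9200/
# HTTP/1.1 200 OK                  -> node is synced, safe to route traffic to
# HTTP/1.1 503 Service Unavailable -> node is down, desynced, or read-only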

Replication

  • Set up a slave server (quick way)
  1. On the master: mysqldump --user=root --password --all-databases --master-data | gzip > backup.sql.gz
  2. On the slave: zcat backup.sql.gz | mysql --user=root --password
  3. START SLAVE;
  4. SHOW SLAVE STATUS\G
Note: the --master-data option only includes the MASTER_LOG_FILE and MASTER_LOG_POS arguments of the "CHANGE MASTER TO" statement; you'll likely want to edit the dump to include MASTER_HOST, MASTER_USER and MASTER_PASSWORD values when importing to a fresh slave server, or issue a CHANGE MASTER TO statement yourself after the import (a sketch follows).
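  • A sketch of that CHANGE MASTER TO, run on the slave; the host and credentials are placeholders, and the log file/position come from the dump:
mysql --user=root --password <<'SQL'
CHANGE MASTER TO
  MASTER_HOST='master.example.com',    -- placeholder master host
  MASTER_USER='repl',                  -- placeholder replication user
  MASTER_PASSWORD='repl_password',     -- placeholder
  MASTER_LOG_FILE='mysql-bin.000001',  -- take from the dump's CHANGE MASTER TO line
  MASTER_LOG_POS=107;                  -- take from the dump's CHANGE MASTER TO line
START SLAVE;
SQL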

Tricks

  • Get sizes of dbs
SELECT table_schema "DB Name", Round(Sum(data_length + index_length) / 1024 / 1024, 1) "DB Size in MB" 
FROM information_schema.tables 
GROUP BY table_schema; 
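  • The same query from the shell, sorted largest-first (assumes credentials in ~/.my.cnf or similar):
mysql -t -e "SELECT table_schema AS 'DB Name', ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS 'DB Size in MB' FROM information_schema.tables GROUP BY table_schema ORDER BY 2 DESC;"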
  • Function to view and transfer grants.
    • Put the following in your bashrc
mygrants()
{
  mysql -B -N "$@" -e "SELECT DISTINCT CONCAT(
    'SHOW GRANTS FOR \'', user, '\'@\'', host, '\';'
    ) AS query FROM mysql.user" | \
  mysql "$@" | \
  sed 's/\(GRANT .*\)/\1;/;s/^\(Grants for .*\)/## \1 ##/;/##/{x;p;x;}'
}
  • Then run the following
mygrants --host=prod-db1 --user=admin --password=secret | grep user_name | mysql --host=staging-db1 --user=admin --password=secret
  • Script to optimize tables
#!/bin/sh
# Walk every database and OPTIMIZE any table that reports free (fragmented) space.
# Assumes login credentials come from ~/.my.cnf or similar.

mysql -NBe "SHOW DATABASES;" | grep -Ev '^(information_schema|performance_schema|sys|lost\+found)$' | while read database ; do
mysql -NBe "SHOW TABLE STATUS;" "$database" | while read name engine version rowformat rows avgrowlength datalength maxdatalength indexlength datafree rest ; do
  # Skip views (Engine is NULL) and empty tables, so the division below is safe.
  if [ "$engine" != "NULL" ] && [ "$datafree" -gt 0 ] && [ "$datalength" -gt 0 ] ; then
   fragmentation=$(($datafree * 100 / $datalength))
   echo "$database.$name is $fragmentation% fragmented."
   mysql -NBe "OPTIMIZE TABLE \`$name\`;" "$database"
  fi
done
done
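  • To run it unattended, a crontab entry along these lines works (the script path is an assumption):
# Sunday 03:00; OPTIMIZE TABLE locks tables, so keep it off-peak.
0 3 * * 0 /usr/local/bin/mysql-optimize.sh >> /var/log/mysql-optimize.log 2>&1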

Random Info

  • Creating a user account
create user 'username'@'10.10.0.%' identified by 'password';
grant all privileges on dbname.* to 'username'@'10.10.0.%' with grant option;
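  • To confirm what was granted (same account and host pattern as above):
mysql -e "SHOW GRANTS FOR 'username'@'10.10.0.%';"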
  • Basic dump command
mysqldump --single-transaction --routines --triggers --all-databases | pv | gzip -6 -c > {path_to_file}.sql.gz
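  • The matching restore, again with pv for a progress bar (a sketch; same placeholder path):
pv {path_to_file}.sql.gz | zcat | mysql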
  • GTID off and only the first 10,000 rows of each table (note: "limit 10,000" with a comma would mean offset 10, count 0)
mysqldump --single-transaction --routines --triggers --set-gtid-purged=OFF --opt --where="1 limit 10000" vdc_release | gzip > prod-vdc_release.sql.gz
  • Log Rolling: Good Explanation
  • Percona Backups: Script

  • Purge Binlogs
purge binary logs before date(now() - interval 3 day) + interval 0 second;
This erases all binary logs from before midnight three days ago; the "+ interval 0 second" converts the bare DATE into the DATETIME at midnight that PURGE BINARY LOGS BEFORE expects.
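  • To make this automatic instead of manual, let the server expire binlogs itself; a my.cnf sketch (retention mirrors the 3 days above):
[mysqld]
# MySQL < 8.0
expire_logs_days = 3
# MySQL 8.0+ uses seconds instead:
# binlog_expire_logs_seconds = 259200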