
Last time we talked about some SQL query optimizations in MySQL, including reading EXPLAIN execution plans, analyzing indexes, and so on. Today we share some SQL statements for analyzing table reads and writes, index usage, and other operations in MySQL.

Without further ado, here is the code:
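All of the queries below read from performance_schema, so it is worth confirming first that the Performance Schema is enabled (it is on by default since MySQL 5.6); a quick check:

SHOW VARIABLES LIKE 'performance_schema';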

Table read/write pressure (per-file I/O)

SELECT file_name AS file,
    count_read,
    sum_number_of_bytes_read AS total_read,
    count_write,
    sum_number_of_bytes_write AS total_written,
    (sum_number_of_bytes_read + sum_number_of_bytes_write) AS total
 FROM performance_schema.file_summary_by_instance
ORDER BY sum_number_of_bytes_read+ sum_number_of_bytes_write DESC;
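If the sys schema is available (it ships with MySQL 5.7 and later), roughly the same per-file byte breakdown is exposed as a ready-made view; a possible shortcut:

-- Convenience view over file_summary_by_instance, ordered by total bytes
SELECT * FROM sys.io_global_by_file_by_bytes LIMIT 10;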

File latency

Note that the Performance Schema timer columns are reported in picoseconds, which is why the queries below divide by 3600000000000000 (1 hour), 1000000000000 (1 second), 1000000000 (1 millisecond), or 1000000 (1 microsecond).

SELECT (file_name) AS file,
    count_star AS total,
    CONCAT(ROUND(sum_timer_wait / 3600000000000000, 2), 'h') AS total_latency,
    count_read,
    CONCAT(ROUND(sum_timer_read / 1000000000000, 2), 's') AS read_latency,
    count_write,
    CONCAT(ROUND(sum_timer_write / 3600000000000000, 2), 'h') AS write_latency
 FROM performance_schema.file_summary_by_instance
ORDER BY sum_timer_wait DESC;
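sys also wraps a latency-oriented view of the same data, if installed:

-- Per-file I/O latency, highest first
SELECT * FROM sys.io_global_by_file_by_latency LIMIT 10;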

Table read/write latency

SELECT object_schema AS table_schema,
       object_name AS table_name,
       count_star AS total,
       CONCAT(ROUND(sum_timer_wait / 3600000000000000, 2), 'h') as total_latency,
       CONCAT(ROUND((sum_timer_wait / count_star) / 1000000, 2), 'us') AS avg_latency,
       CONCAT(ROUND(max_timer_wait / 1000000000, 2), 'ms') AS max_latency
 FROM performance_schema.objects_summary_global_by_type
    ORDER BY sum_timer_wait DESC;

Table operation frequency

SELECT object_schema AS table_schema,
      object_name AS table_name,
      count_star AS rows_io_total,
      count_read AS rows_read,
      count_write AS rows_write,
      count_fetch AS rows_fetches,
      count_insert AS rows_inserts,
      count_update AS rows_updates,
      count_delete AS rows_deletes,
       CONCAT(ROUND(sum_timer_fetch / 3600000000000000, 2), 'h') AS fetch_latency,
       CONCAT(ROUND(sum_timer_insert / 3600000000000000, 2), 'h') AS insert_latency,
       CONCAT(ROUND(sum_timer_update / 3600000000000000, 2), 'h') AS update_latency,
       CONCAT(ROUND(sum_timer_delete / 3600000000000000, 2), 'h') AS delete_latency
   FROM performance_schema.table_io_waits_summary_by_table
    ORDER BY sum_timer_wait DESC ;
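On 5.7+, sys.schema_table_statistics consolidates these per-table row counts and latencies (plus file I/O) into one view, which should line up closely with the query above:

SELECT * FROM sys.schema_table_statistics LIMIT 10;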

Index usage

SELECT OBJECT_SCHEMA AS table_schema,
        OBJECT_NAME AS table_name,
        INDEX_NAME as index_name,
        COUNT_FETCH AS rows_fetched,
        CONCAT(ROUND(SUM_TIMER_FETCH / 3600000000000000, 2), 'h') AS select_latency,
        COUNT_INSERT AS rows_inserted,
        CONCAT(ROUND(SUM_TIMER_INSERT / 3600000000000000, 2), 'h') AS insert_latency,
        COUNT_UPDATE AS rows_updated,
        CONCAT(ROUND(SUM_TIMER_UPDATE / 3600000000000000, 2), 'h') AS update_latency,
        COUNT_DELETE AS rows_deleted,
        CONCAT(ROUND(SUM_TIMER_DELETE / 3600000000000000, 2), 'h') AS delete_latency
FROM performance_schema.table_io_waits_summary_by_index_usage
WHERE index_name IS NOT NULL
ORDER BY sum_timer_wait DESC;
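The per-index counterpart is also available in sys, if you prefer the packaged view:

SELECT * FROM sys.schema_index_statistics LIMIT 10;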

Full table scans

SELECT object_schema,
    object_name,
    count_read AS rows_full_scanned
 FROM performance_schema.table_io_waits_summary_by_index_usage
WHERE index_name IS NULL
  AND count_read > 0
ORDER BY count_read DESC;
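Once a table shows up here, the usual follow-up is to run EXPLAIN on the offending statements and add a suitable index. A purely illustrative sketch (the table, column, and index names are hypothetical):

-- Hypothetical example: confirm the missing index, then add it
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
ALTER TABLE orders ADD INDEX idx_customer_id (customer_id);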

Unused indexes

SELECT object_schema,
    object_name,
    index_name
  FROM performance_schema.table_io_waits_summary_by_index_usage
 WHERE index_name IS NOT NULL
  AND count_star = 0
  AND object_schema not in ('mysql','v_monitor')
  AND index_name <> 'PRIMARY'
 ORDER BY object_schema, object_name;
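sys exposes the same check as sys.schema_unused_indexes; before dropping anything, make sure the server has been up long enough to have seen a representative workload. An illustrative cleanup (the index and table names are hypothetical):

SELECT * FROM sys.schema_unused_indexes;

-- Hypothetical example: drop an index confirmed to be unused
ALTER TABLE orders DROP INDEX idx_old_status;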

Summary of problematic SQL statements

SELECT (DIGEST_TEXT) AS query,
    SCHEMA_NAME AS db,
    IF(SUM_NO_GOOD_INDEX_USED > 0 OR SUM_NO_INDEX_USED > 0, '*', '') AS full_scan,
    COUNT_STAR AS exec_count,
    SUM_ERRORS AS err_count,
    SUM_WARNINGS AS warn_count,
    (SUM_TIMER_WAIT) AS total_latency,
    (MAX_TIMER_WAIT) AS max_latency,
    (AVG_TIMER_WAIT) AS avg_latency,
    (SUM_LOCK_TIME) AS lock_latency,
    format(SUM_ROWS_SENT,0) AS rows_sent,
    ROUND(IFNULL(SUM_ROWS_SENT / NULLIF(COUNT_STAR, 0), 0)) AS rows_sent_avg,
    SUM_ROWS_EXAMINED AS rows_examined,
    ROUND(IFNULL(SUM_ROWS_EXAMINED / NULLIF(COUNT_STAR, 0), 0)) AS rows_examined_avg,
    SUM_CREATED_TMP_TABLES AS tmp_tables,
    SUM_CREATED_TMP_DISK_TABLES AS tmp_disk_tables,
    SUM_SORT_ROWS AS rows_sorted,
    SUM_SORT_MERGE_PASSES AS sort_merge_passes,
    DIGEST AS digest,
    FIRST_SEEN AS first_seen,
    LAST_SEEN as last_seen
  FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 20;
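The same digest data is summarized by sys.statement_analysis on 5.7+, already ordered by total latency, if you want a shorter way to get a comparable report:

SELECT * FROM sys.statement_analysis LIMIT 20;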

Once you have these SQL statements at hand, you can easily tell which tables in your database have problems, and then start thinking about how to optimize them.

Summary

That is all for this article. I hope its content is of some reference value for your study or work, and thank you for your support.
