Find all integer columns which are reaching their limits using information_schema



I can get a list of all the columns for which I want to verify the space still available:



SELECT
    TABLE_NAME, COLUMN_NAME, COLUMN_TYPE
FROM
    INFORMATION_SCHEMA.COLUMNS
WHERE
    COLUMN_TYPE = 'int(11)'
    AND TABLE_NAME LIKE 'catalog_category_entity%';


Considering that a signed int(11) goes up to 2147483647 (not considering unsigned), I would like to calculate how much of this range I am already using.
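For reference, the signed maximum for each MySQL integer type is 2^(bits-1) - 1; the display width in int(11) has no effect on the range:

tinyint     2^7  - 1 =                 127
smallint    2^15 - 1 =               32767
mediumint   2^23 - 1 =             8388607
int         2^31 - 1 =          2147483647
bigint      2^63 - 1 = 9223372036854775807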



Individually, I could check a single column like this:



select
    max(value_id) / 2147483647 as `usage`
from
    catalog_product_entity_int;


But I would like to do this in a clean way for all of the columns found by the first query.



I would like to know whether a recursive CTE is the right tool for this, and how to write it, or whether there is a more elegant way of checking.



I would like a quick way of checking this without any external tools.



I've found this solution for Postgres, but I was wondering if I really need the function:
postgres: find all integer columns with its current max value in it
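
One workaround I've been sketching (this assumes dynamic SQL through PREPARE/EXECUTE is acceptable, that only signed int columns matter so 2147483647 is the right divisor, and that the tables live in the schema I am currently connected to) is to build a single UNION ALL statement from the information_schema result and run it:

-- raise the limit so the generated statement does not get truncated
SET SESSION group_concat_max_len = 1000000;

SELECT GROUP_CONCAT(
         CONCAT('SELECT ''', TABLE_NAME, '.', COLUMN_NAME, ''' AS col, ',
                'MAX(`', COLUMN_NAME, '`) / 2147483647 AS usage_ratio ',
                'FROM `', TABLE_NAME, '`')
         SEPARATOR ' UNION ALL ')
INTO @sql
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_TYPE = 'int(11)'
  AND TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME LIKE 'catalog_category_entity%';

-- execute the generated UNION ALL query
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

It is not pretty, but it would avoid the stored function used in the Postgres answer.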










      mysql database-administration






      asked Mar 8 at 13:40









neisantos

          1 Answer
          I wrote a solution for this task, but I'm hardly the only person to have done something like this.



select concat('`', table_schema, '`.`', table_name, '`.`', column_name, '`') as `column`,
       auto_increment as `current_int`, max_int,
       round((auto_increment/max_int)*100, 2) as `pct_max`
from (
    select table_schema, table_name, column_name, auto_increment,
           pow(2, case data_type
                    when 'tinyint'   then 7
                    when 'smallint'  then 15
                    when 'mediumint' then 23
                    when 'int'       then 31
                    when 'bigint'    then 63
                  end + (column_type like '% unsigned')) - 1 as max_int
    from information_schema.tables t
    join information_schema.columns c using (table_schema, table_name)
    join information_schema.key_column_usage k using (table_schema, table_name, column_name)
    where t.table_schema in ('test')
      and k.constraint_name = 'PRIMARY'
      and k.ordinal_position = 1
      and t.auto_increment is not null
) as dt;


          https://github.com/billkarwin/bk-tools/blob/master/pk-full-ratio.sql



          That query is hard-coded for the test schema, so you need to edit it for your own schema.
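
For example (assuming you want to scan whichever schema you are currently connected to, rather than one literally named test), the filter in the derived table could be changed like this:

-- instead of:  where t.table_schema in ('test')
where t.table_schema = database()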



          The short answer to the question of "is my primary key going to overflow?" is to just alter it to BIGINT UNSIGNED now. That will surely last until the collapse of civilization.
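
For example, on the table from the question (assuming value_id is its auto-increment primary key; note that rebuilding a large table this way can take a long time and block writes, so an online schema change tool may be preferable):

alter table catalog_product_entity_int
    modify value_id bigint unsigned not null auto_increment;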



          In the same git repo, I have another similar script to check all integer columns, not just auto-increment primary keys. But it's not as much of a concern for other columns.






edited Mar 9 at 19:13
answered Mar 8 at 13:49
Bill Karwin

          • @RaymondNijland 0.00 just means that you are at least 99.995% away from reaching the limit. But it should be round((auto_increment/max_int)*100, 2) to avoid out of range errors for BIGINT.

            – Paul Spiegel
            Mar 8 at 14:56











          • Good catch! Thanks, I'll update it.

            – Bill Karwin
            Mar 8 at 17:49











          • Thanks @BillKarwin, that seems to work pretty well. The only issue I've noticed is with a compound primary key: the query then calculates a ratio for both key columns, one of which is wrong. It is returning some outliers of 5233% in that case.

            – neisantos
            Mar 9 at 17:56











          • Okay, the auto-inc column has to be the first column (at least in InnoDB), so you can also filter for k.ordinal_position=1. MyISAM tables allow the auto-inc column to be in a second position, but I recommend never using MyISAM.

            – Bill Karwin
            Mar 9 at 19:03
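
          (For reference, that filter is the pair of predicates on the key_column_usage join in the query above:)

          and k.constraint_name = 'PRIMARY'
          and k.ordinal_position = 1   -- only the first column of a compound primary key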











          • I've updated the query above, and in my git repo.

            – Bill Karwin
            Mar 9 at 19:13










