How do I resize an ext4 partition beyond the 16TB limit?



























When attempting to resize an old ext4 partition that was created without the 64bit flag, resize2fs 1.42 will fail if the new size equals or exceeds 16 TiB.



$ resize2fs -p /dev/mapper/target-device
resize2fs: New size too large to be expressed in 32 bits


I do not want to copy the files to an external medium.
I do not want to risk data loss either.
How can I resize the volume safely?










      ext4






      edited Mar 5 '18 at 6:54







      anx

















      asked May 31 '16 at 6:39































          1 Answer
































          You are attempting to resize a filesystem that was created before the -O 64bit option became the default. It is possible to upgrade your ext filesystem to 64-bit block addresses, allowing it to span significantly larger volumes (up to 1024 PiB instead of 16 TiB).



          Assuming your target device is called /dev/mapper/target-device, this is what you need to do:



          Prerequisites




          1. A volume of this size should be backed by RAID. Otherwise, ordinary disk errors will cause harm.

          2. Still, RAID is not a backup. You must have your valuables stored elsewhere as well.

          3. First resize & verify all surrounding volumes (partition tables, encryption, LVM).

          4. After changing a hardware RAID configuration, Linux may or may not immediately acknowledge the new maximum size. Check cat /proc/partitions and reboot if necessary.
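          These prerequisites can be sanity-checked up front. A minimal sketch (the device name is the example from the question; substitute your own):

```shell
# Show the current feature list -- if "64bit" is absent, the filesystem
# still uses 32-bit block addresses and needs the upgrade described below.
sudo tune2fs -l /dev/mapper/target-device | grep 'Filesystem features'

# Show the block device sizes the kernel currently sees.
cat /proc/partitions
```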


          Use a recent stable kernel and e2fsprogs




          1. Make sure (check uname -r) you are running a kernel that can properly handle 64bit ext4 filesystems - you want a 4.4.x kernel or later (the default in Ubuntu 16.04 and above).


          2. Acquire e2fsprogs of at least version 1.43





            • Ubuntu 16.04 (2016-04-21) was released with e2fsprogs 1.42.12 (2014-08-25)


            • e2fsprogs 1.43 (2016-05-17) is the first release capable of upgrading the extfs address size.


            • Ubuntu 18.04 (2018-04-26) ships with e2fsprogs 1.44.x (good!)




          If you are on 16.04 and cannot upgrade to a newer Ubuntu release, you will have to enable source package support and install a newer version manually:



          $ resize2fs
          # if this prints version 1.43 or above, continue to step 1
          $ sudo apt update
          $ sudo apt install git
          $ sudo apt build-dep e2fsprogs
          $ cd $(mktemp -d)
          $ git clone -b v1.44.2 https://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git e2fsprogs && cd e2fsprogs
          $ ./configure
          $ make
          $ cd resize
          $ ./resize2fs
          # this should print 1.43 or higher
          # if this prints any lower version, panic
          # use `./resize2fs` instead of `resize2fs` for the rest of the steps
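          Independently of the e2fsprogs version, the kernel requirement above can be checked with a short sketch (this assumes the usual X.Y.Z release string from uname -r):

```shell
# Compare the running kernel against the suggested 4.4 minimum.
release=$(uname -r)
major=${release%%.*}
rest=${release#*.}
minor=${rest%%.*}
if [ "$major" -gt 4 ] || { [ "$major" -eq 4 ] && [ "$minor" -ge 4 ]; }; then
    echo "kernel $release: ok"
else
    echo "kernel $release: too old for a 64bit ext4 resize"
fi
```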


          Resize



          Step 1: Properly unmount the filesystem



          $ sudo umount /dev/mapper/target-device


          Step 2: Check the filesystem for errors



          $ sudo e2fsck -fn /dev/mapper/target-device
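          e2fsck's exit status is the quickest way to confirm the check passed: 0 means clean, 1 means errors were corrected, and 4 or higher means errors remain. A sketch:

```shell
# -f forces a full check even if the filesystem looks clean; -n opens it
# read-only and answers "no" to every repair prompt, so nothing is modified.
sudo e2fsck -fn /dev/mapper/target-device
rc=$?
if [ "$rc" -eq 0 ]; then
    echo "filesystem clean -- safe to continue"
else
    echo "e2fsck exited with status $rc -- stop and investigate"
fi
```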


          Step 3: Enable 64bit support in the filesystem



          Consult man tune2fs and man resize2fs - you may wish to change some filesystem flags.



          $ sudo resize2fs -b /dev/mapper/target-device


          On a typical HDD RAID, this takes 4 minutes of high IO & CPU load.
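          Once the -b run finishes, it is worth confirming the feature flag actually flipped before continuing:

```shell
# "64bit" should now appear in the superblock's feature list.
sudo tune2fs -l /dev/mapper/target-device | grep -w 64bit \
    && echo "64bit feature enabled" \
    || echo "64bit feature still missing -- do not proceed"
```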



          Step 4: Resize the filesystem



          $ sudo resize2fs -p /dev/mapper/target-device


          If you do not pass a size on the command line, resize2fs assumes "grow to fill all available space" - this is typically exactly what you want. The -p flag enables progress bars, but these only appear after some initial steps.



          On a typical HDD RAID, this takes 4 minutes of high IO & CPU load.



          Verify again



          Check the filesystem again



          $ sudo e2fsck -fn /dev/mapper/target-device


          Newer versions of e2fsck may suggest fixing timestamps or extent trees that previous versions handled badly. This is not an indication of any serious issue, and you may choose to fix it now or later.



          If errors occur, do not panic and do not attempt to write to the volume; consult someone with extensive knowledge of the filesystem, as further operations would likely destroy data!



          If no errors occur, remount the device:



          $ sudo mount /dev/mapper/target-device
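          After remounting, you can confirm the filesystem reports the new capacity (GNU df accepts a block device path and reports the filesystem mounted on it):

```shell
# Human-readable size and free space of the grown filesystem.
df -h /dev/mapper/target-device

# Block count straight from the superblock.
sudo tune2fs -l /dev/mapper/target-device | grep -i 'block count'
```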


          Success!



          You will not need any non-Ubuntu version of e2fsprogs for continued operation of the upgraded filesystem - the kernel has supported 64bit filesystems for quite some time. The newer tools were only needed to initiate the upgrade.





          For reference, there is a similar error message mke2fs will print if it is asked to create a huge device with inappropriate options:



          $ mke2fs -O ^64bit /dev/huge
          mke2fs: Size of device (0x123456789 blocks) is too big to be expressed in 32 bits using a blocksize of 4096.


























          • This is correct. I'd like to add that Red Hat (thus RHEL, CentOS, etc.) used to prefer XFS over ext4 in their installer when partitioning a filesystem over 16 TB, and now prefers XFS outright as the default filesystem.

            – Diablo-D3
            May 31 '16 at 14:13











          • Yes, most older kernels still significant in the Red Hat world do not contain stable support for 64bit ext4 yet. AFAIK Ubuntu goes with what many Linux gurus say, that ext4 is to be replaced by btrfs - though ext4 is close to btrfs in features now, I believe btrfs is more elegant in design and maybe even less prone to bugs.

            – anx
            Jun 1 '16 at 4:18






          • btrfs is not ready for production, and may never be ready for production, since Oracle has put development of it on the back burner. My personal opinion is that if you need that level of file-system complexity, use a tiny 8 GB XFS root combined with ZFS for your actual data storage needs.

            – Diablo-D3
            Jun 2 '16 at 16:37






          • One thing to note - your link to clone the e2fsprogs repo is incorrect. This would be the correct link: git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git

            – guest
            Sep 30 '16 at 14:47













          edited Feb 10 at 9:43

























          answered May 31 '16 at 6:39









          anx

          1,20211132




          1,20211132








          • This is correct. I'd like to add that Red Hat (thus RHEL, CentOS, etc.) used to prefer XFS over ext4 in their installer when partitioning a filesystem over 16 TB, and now prefers XFS outright as the default filesystem.
            – Diablo-D3, May 31 '16 at 14:13

          • Yes, most older kernels still significant in the Red Hat world do not contain stable support for 64-bit ext4 yet. AFAIK Ubuntu goes with what many Linux gurus say: ext4 is to be replaced by btrfs - though ext4 is close to btrfs in features now, I believe btrfs is more elegant in design and maybe even less prone to bugs.
            – anx, Jun 1 '16 at 4:18

          • btrfs is not ready for production, and may never be ready for production, since Oracle has put development of it on the back burner. My personal opinion is that if you need that level of file system complexity, use a tiny 8 GB XFS root combined with ZFS for your actual data storage needs.
            – Diablo-D3, Jun 2 '16 at 16:37

          • One thing to note - your link to clone the e2fsprogs repo is incorrect. This would be the correct link: git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git
            – guest, Sep 30 '16 at 14:47




































          Thanks for contributing an answer to Ask Ubuntu!














