Is it possible to resize a primary partition?
My company runs a cloud-hosted server with CentOS.
In the past, when it reached maximum SSD capacity, they upgraded it.

As far as I understand, they did this by creating a new primary partition and mounting it on /. So we ended up with 4 primary partitions on sda.

Now I've upgraded the space again, from 300GB to 400GB, and I need to allocate those 100 extra GB.

What's the best thing I can do to add those 100GB?



Some info I've collected:



parted /dev/sda > print:



Number  Start   End     Size    Type     File system  Flags
 1      1049kB  525MB   524MB   primary  xfs          boot
 2      525MB   85,9GB  85,4GB  primary               lvm
 3      85,9GB  129GB   42,9GB  primary               lvm
 4      129GB   322GB   193GB   primary               lvm


fdisk /dev/sda > p:



Disk /dev/sda: 429.5 GB, 429496729600 bytes, 838860800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000a2b1e

Device    Boot      Start        End     Blocks  Id  System
/dev/sda1    *       2048    1026047     512000  83  Linux
/dev/sda2         1026048  167772159   83373056  8e  Linux LVM
/dev/sda3       167772160  251658239   41943040  8e  Linux LVM
/dev/sda4       251658240  629145599  188743680  8e  Linux LVM


df -h:



Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  298G  290G  8,4G  98% /
devtmpfs                 7,8G     0  7,8G   0% /dev
tmpfs                    7,8G     0  7,8G   0% /dev/shm
tmpfs                    7,8G   12M  7,8G   1% /run
tmpfs                    7,8G     0  7,8G   0% /sys/fs/cgroup
/dev/sda1                497M  187M  311M  38% /boot
tmpfs                    1,6G     0  1,6G   0% /run/user/0


lsblk:



NAME              MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                 8:0    0   300G  0 disk
├─sda1              8:1    0   500M  0 part /boot
├─sda2              8:2    0  79,5G  0 part
│ ├─centos-swap   253:0    0     2G  0 lvm  [SWAP]
│ └─centos-root   253:1    0 297,5G  0 lvm  /
├─sda3              8:3    0    40G  0 part
│ └─centos-root   253:1    0 297,5G  0 lvm  /
└─sda4              8:4    0   180G  0 part
  └─centos-root   253:1    0 297,5G  0 lvm  /
sr0                11:0    1  1024M  0 rom
  • You are using LVM, so increase the size with LVM.

    – Panther
    Jan 10 at 17:37











  • Since you're already hitting the limit on the number of partitions, you will have to extend the last partition to take up all of the disk space. This blog post should be useful: thegeekdiary.com/…

    – Haxiel
    Jan 10 at 17:49


















linux partition lvm






asked Jan 10 at 16:50









Hache_raw

1 Answer
Contrary to a comment on this question: because your partition table is Disk label type: dos and not Disk label type: gpt, it is not possible to add a logical partition without first deleting a primary partition to have it act as the extended-partition container, since MBR has only 4 slots for primary partitions. Doing this without losing or corrupting the data already in place is not trivial, because logical partitions are not laid out with exactly the same alignment as primary partitions: better not.



Instead, you can do what could have been done before (e.g. enlarging the very first LVM partition instead of adding new partitions) on the running system, without a reboot or downtime (at least with xfs, CentOS's default, or with ext4, and probably several other filesystems).




  • Have backups. Something can always go wrong (a typo, for instance...).



  • Enlarge partition on disk



    The partition with room for enlargement is the last one (because its blocks, at least here, also occupy the last area of the disk). This is the trickiest part of this answer: using fdisk, note the start sector of partition 4, then delete partition 4. Note that this operation is done only in memory for now. Recreate a "new" primary partition 4 (as said above, don't try a logical partition), reusing the same start sector: this should be 251658240. Accept the whole remaining space so the partition ends up bigger.



    WARNING: newer fdisk versions may offer to wipe a detected partition signature. Don't accept, either now or when writing to disk: what fdisk detected is your current LVM PV signature.



    Set the partition type back to 8e (probably only cosmetic). If everything is in order, write the new partition table and quit fdisk.



    UPDATE: to be clear, the operation above is first done in memory only. The overall result of deleting and recreating partition 4 at the same start position, inside fdisk's memory, is an enlarged partition 4. When committing this from fdisk to the disk, only the MBR (i.e. the first sector of the disk represented by /dev/sda) is rewritten: the data stored at sector 251658240 and beyond is never altered. Higher-level tools (GUIs, or even parted with its resizepart command) offer an enlarge option, but the end result is the same. Partition 4 is never actually removed from the disk (and even if it were removed by mistake, that would still not be fatal, as long as it is recreated at the same position before the OS complains).
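
    The arithmetic behind this step can be sanity-checked before touching anything. A minimal sketch, using the sector numbers from the fdisk output in the question (note that fdisk's Blocks column is in 1 KiB units, while start/end are 512-byte sectors):

```shell
# Check the resize arithmetic before running fdisk.
# All constants are copied from the fdisk output in the question.
DISK_SECTORS=838860800   # total 512-byte sectors on /dev/sda (= 400 GiB)
P4_START=251658240       # start sector of /dev/sda4 -- must be reused as-is
P4_END=629145599         # current end sector of /dev/sda4

# One GiB is 2 * 1024 * 1024 sectors, hence the divisions below.
old_gib=$(( (P4_END + 1 - P4_START) / 2 / 1024 / 1024 ))
new_gib=$(( (DISK_SECTORS - P4_START) / 2 / 1024 / 1024 ))
echo "sda4 today:        ${old_gib} GiB"   # 180 GiB, matching lsblk
echo "sda4 after resize: ${new_gib} GiB"   # 280 GiB = 180 GiB + the new 100 GiB
```

    If the recreated partition doesn't come out at about 280 GiB, the start sector was probably not reused correctly: stop before writing the table.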




  • Update kernel view of the partition's new size



    Because the partition is in use (by the device mapper, etc.), fdisk will most likely have complained that it could not make the kernel reread the partition table and that the old table is still in use. To avoid a reboot, use the right tool to update just what changed: partx. It's simple here, because only the size of the partition changed.



    # cat /sys/class/block/sda4/size
    377487360
    # partx -u /dev/sda4
    # cat /sys/class/block/sda4/size
    587202560


    Verify that the size has increased and matches what fdisk now reports. Otherwise something went wrong and a reboot is probably needed.




  • Enlarge the PV, the LV, and the filesystem. Some LVM options can chain these into fewer commands, but here are all the steps.



    Without options, pvresize will use all the space available in the partition:



    # pvresize /dev/sda4


    The additional space on the PV is immediately available in the VG for LV use.



    # lvextend -l +100%FREE /dev/centos/root # or any other choice


    Then for xfs:



    # xfs_growfs / # remember that xfs may never shrink back


    Or ext4:



    # resize2fs /dev/centos/root # and ext4 can't be shrunk back while mounted
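
    As a side note on chaining these steps: recent LVM versions can run the filesystem grow for you. A sketch, assuming the -r/--resizefs option of lvextend is available in this CentOS release (it calls fsadm, which handles both xfs and ext4); run as root, and only with backups in place:

```shell
# One-shot equivalent of the lvextend + xfs_growfs (or resize2fs) pair above:
# extend the LV over all free VG space, then grow the filesystem on it.
lvextend --resizefs -l +100%FREE /dev/centos/root
```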








  • An answer made on Server Fault with some similar steps: Expand full virtual debian disk to use empty space. That one is about GPT.

    – A.B
    Jan 10 at 19:00











  • Thanks @A.B - While I can't try it yet, this seems to be a great and valid answer. I'll make a snapshot of the entire server and try to follow your steps next Tuesday. Just to be sure: when I delete partition 4 and reuse the same start, will my data be intact, or should I somehow snapshot that partition and restore it later (if that is even possible)?

    – Hache_raw
    Jan 11 at 10:20











  • The data stays intact. I did this regularly after enlarging disks on VMs. You have to be sure to reuse the same start, and NOT to wipe the "partition/filesystem/whatever signature" if the tool offers to do so. In any case, plan for failure (backups) for such operations. You could even try it on a VM first.

    – A.B
    Jan 11 at 10:45













  • Updated the answer to state that the data isn't changed, only the MBR.

    – A.B
    Jan 11 at 11:21











Your Answer








StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "106"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});

function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});


}
});






Hache_raw is a new contributor. Be nice, and check out our Code of Conduct.










draft saved

draft discarded


















StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2funix.stackexchange.com%2fquestions%2f493757%2fis-it-posible-to-resize-a-primary-partition%23new-answer', 'question_page');
}
);

Post as a guest















Required, but never shown

























1 Answer
1






active

oldest

votes








1 Answer
1






active

oldest

votes









active

oldest

votes






active

oldest

votes









1














Contrary to a comment seen on this question, as your partition table is Disk label type: dos, and not Disk label type: gpt, it is not possible to add a logical partition without first deleting a physical partition to have it act as logical partition container, since there are only 4 slots for physical partitions in MBR. Doing this without losing/corrupting data already in place is not trivial because the layout of logical partitions isn't exactly aligned the same as physical partitions: better not.



You can do what could have been done before (e.g. on the very first LVM partition, instead of adding new partitions), with a running system (at least if using xfs: CentOS' default, or ext4 and probably several other filesystems), without reboot nor downtime.




  • Have backups. Something can always go wrong (typo...).



  • Enlarge partition on disk



    The partition having room for enlargement is the last (because its blocks at least here, also occupy the last position on the disk). This is the most tricky part in my answer: using fdisk, note the start of partition 4, and delete partition 4. Note that this operation is done only in memory for now. Recreate a "new" primary partition 4 (as said above, don't try any logical partition), reuse the same partition start: this should be 251658240. Let it offer the whole remaining size to have it bigger.



    WARNING: newer fdisk tools might offer to wipe a detected partition signature, don't do it if asked now or when writing to disk: it detected your current PV/LVM signature.



    Put back partition type 8e (probably only cosmetic). If all is in order, write the new partition table and quit fdisk.



    UPDATE: To be clear the operation above is first done in memory. The overall result of deleting and recreating partition 4 at the same start position, done in fdisk's memory, is to have enlarged partition 4. When comitting this from fdisk to the disk, only the MBR (i.e.: the first sectors of the disk represented by /dev/sda) is re-written: there is no alteration of the data stored at sectors 251658240 and beyond. Higher level (GUI... or even parted) tools would offer an enlarge option (resizepart for parted), but the final result is the same. The partition 4 was never removed at any time from the disk (even if by mistake it was removed from disk, this would still not be fatal, as long as it's recreated at the same position and before the OS complains).




  • Update kernel view of the partition's new size



    Because the partition was in use (by the device mapper etc.) fdisk will certainly have complained that it couldn't have the system reread the partition table and that the old is still in use. To avoid a reboot, just use the right tool to update what changed: partx. It's simple here because only the size of the partition changed.



    # cat /sys/class/block/sda4/size
    188743680
    # partx -u /dev/sda4
    $ cat /sys/class/block/sda4/size
    [bigger value]


    Verify that the size is now increased and matches the result seen with fdisk. Else, something went wrong and reboot is probably needed.




  • Enlarge PV, LV, filesystem. Some LVM options might be able to chain those in a fewer commands, but here are all the steps



    Without option it will use all available space.



    # pvresize /dev/sda4


    The additional space on the PV is immediately made available on the VG for LV usage.



    # lvextend -l +100%FREE /dev/centos/root # or any other choice


    Then for xfs:



    # xfs_growfs / # remember that xfs may never shrink back


    Or ext4:



    # resize2fs /dev/centos/root # and ext4 can't be shrunk back while mounted







share|improve this answer


























  • Answer made on SF with some similar steps: Expand full virtual debian disk to use empty space. That one is on GPT.

    – A.B
    Jan 10 at 19:00











  • Thanks @A.B - While I can't try it yet, this seems to be a great and valid answer. I'll make a snapshop of the entire server and try to follow your steps next tuesday. Just to be sure: when I delete partition 4 and reuse the same start, will my data be intact or should I kind of snapshop that partition and restore it later? (if this is even possible).

    – Hache_raw
    Jan 11 at 10:20











  • The data stays intact. I did this regularly after enlarging disks on VM. You have to be sure to reuse the same start, and to NOT wipe the "partition/filesystem/whatever signature" if the tool offers to do that. Anyway, you have to plan for failure (backups) for such things. You could even try it on a VM to test.

    – A.B
    Jan 11 at 10:45













  • updated answer, telling that data isn't changed, only MBR

    – A.B
    Jan 11 at 11:21
















1














Contrary to a comment seen on this question, as your partition table is Disk label type: dos, and not Disk label type: gpt, it is not possible to add a logical partition without first deleting a physical partition to have it act as logical partition container, since there are only 4 slots for physical partitions in MBR. Doing this without losing/corrupting data already in place is not trivial because the layout of logical partitions isn't exactly aligned the same as physical partitions: better not.



You can do what could have been done before (e.g. on the very first LVM partition, instead of adding new partitions), with a running system (at least if using xfs: CentOS' default, or ext4 and probably several other filesystems), without reboot nor downtime.




  • Have backups. Something can always go wrong (typo...).



  • Enlarge partition on disk



    The partition having room for enlargement is the last (because its blocks at least here, also occupy the last position on the disk). This is the most tricky part in my answer: using fdisk, note the start of partition 4, and delete partition 4. Note that this operation is done only in memory for now. Recreate a "new" primary partition 4 (as said above, don't try any logical partition), reuse the same partition start: this should be 251658240. Let it offer the whole remaining size to have it bigger.



    WARNING: newer fdisk tools might offer to wipe a detected partition signature, don't do it if asked now or when writing to disk: it detected your current PV/LVM signature.



    Put back partition type 8e (probably only cosmetic). If all is in order, write the new partition table and quit fdisk.



    UPDATE: To be clear the operation above is first done in memory. The overall result of deleting and recreating partition 4 at the same start position, done in fdisk's memory, is to have enlarged partition 4. When comitting this from fdisk to the disk, only the MBR (i.e.: the first sectors of the disk represented by /dev/sda) is re-written: there is no alteration of the data stored at sectors 251658240 and beyond. Higher level (GUI... or even parted) tools would offer an enlarge option (resizepart for parted), but the final result is the same. The partition 4 was never removed at any time from the disk (even if by mistake it was removed from disk, this would still not be fatal, as long as it's recreated at the same position and before the OS complains).




  • Update kernel view of the partition's new size



    Because the partition was in use (by the device mapper etc.) fdisk will certainly have complained that it couldn't have the system reread the partition table and that the old is still in use. To avoid a reboot, just use the right tool to update what changed: partx. It's simple here because only the size of the partition changed.



    # cat /sys/class/block/sda4/size
    188743680
    # partx -u /dev/sda4
    $ cat /sys/class/block/sda4/size
    [bigger value]


    Verify that the size is now increased and matches the result seen with fdisk. Else, something went wrong and reboot is probably needed.




  • Enlarge PV, LV, filesystem. Some LVM options might be able to chain those in a fewer commands, but here are all the steps



    Without option it will use all available space.



    # pvresize /dev/sda4


    The additional space on the PV is immediately made available on the VG for LV usage.



    # lvextend -l +100%FREE /dev/centos/root # or any other choice


    Then for xfs:



    # xfs_growfs / # remember that xfs may never shrink back


    Or ext4:



    # resize2fs /dev/centos/root # and ext4 can't be shrunk back while mounted







share|improve this answer


























  • Answer made on SF with some similar steps: Expand full virtual debian disk to use empty space. That one is on GPT.

    – A.B
    Jan 10 at 19:00











  • Thanks @A.B - While I can't try it yet, this seems to be a great and valid answer. I'll make a snapshop of the entire server and try to follow your steps next tuesday. Just to be sure: when I delete partition 4 and reuse the same start, will my data be intact or should I kind of snapshop that partition and restore it later? (if this is even possible).

    – Hache_raw
    Jan 11 at 10:20











  • The data stays intact. I did this regularly after enlarging disks on VM. You have to be sure to reuse the same start, and to NOT wipe the "partition/filesystem/whatever signature" if the tool offers to do that. Anyway, you have to plan for failure (backups) for such things. You could even try it on a VM to test.

    – A.B
    Jan 11 at 10:45













  • updated answer, telling that data isn't changed, only MBR

    – A.B
    Jan 11 at 11:21














1












1








1







Contrary to a comment seen on this question, as your partition table is Disk label type: dos, and not Disk label type: gpt, it is not possible to add a logical partition without first deleting a physical partition to have it act as logical partition container, since there are only 4 slots for physical partitions in MBR. Doing this without losing/corrupting data already in place is not trivial because the layout of logical partitions isn't exactly aligned the same as physical partitions: better not.



You can do what could have been done before (e.g. on the very first LVM partition, instead of adding new partitions), with a running system (at least if using xfs: CentOS' default, or ext4 and probably several other filesystems), without reboot nor downtime.




  • Have backups. Something can always go wrong (typo...).



  • Enlarge partition on disk



    The partition having room for enlargement is the last (because its blocks at least here, also occupy the last position on the disk). This is the most tricky part in my answer: using fdisk, note the start of partition 4, and delete partition 4. Note that this operation is done only in memory for now. Recreate a "new" primary partition 4 (as said above, don't try any logical partition), reuse the same partition start: this should be 251658240. Let it offer the whole remaining size to have it bigger.



    WARNING: newer fdisk tools might offer to wipe a detected partition signature, don't do it if asked now or when writing to disk: it detected your current PV/LVM signature.



    Put back partition type 8e (probably only cosmetic). If all is in order, write the new partition table and quit fdisk.



    UPDATE: To be clear the operation above is first done in memory. The overall result of deleting and recreating partition 4 at the same start position, done in fdisk's memory, is to have enlarged partition 4. When comitting this from fdisk to the disk, only the MBR (i.e.: the first sectors of the disk represented by /dev/sda) is re-written: there is no alteration of the data stored at sectors 251658240 and beyond. Higher level (GUI... or even parted) tools would offer an enlarge option (resizepart for parted), but the final result is the same. The partition 4 was never removed at any time from the disk (even if by mistake it was removed from disk, this would still not be fatal, as long as it's recreated at the same position and before the OS complains).




  • Update kernel view of the partition's new size



    Because the partition was in use (by the device mapper etc.) fdisk will certainly have complained that it couldn't have the system reread the partition table and that the old is still in use. To avoid a reboot, just use the right tool to update what changed: partx. It's simple here because only the size of the partition changed.



    # cat /sys/class/block/sda4/size
    188743680
    # partx -u /dev/sda4
    $ cat /sys/class/block/sda4/size
    [bigger value]


    Verify that the size is now increased and matches the result seen with fdisk. Else, something went wrong and reboot is probably needed.




  • Enlarge PV, LV, filesystem. Some LVM options might be able to chain those in a fewer commands, but here are all the steps



    Without option it will use all available space.



    # pvresize /dev/sda4


    The additional space on the PV is immediately made available on the VG for LV usage.



    # lvextend -l +100%FREE /dev/centos/root # or any other choice


    Then for xfs:



    # xfs_growfs / # remember that xfs may never shrink back


    Or ext4:



    # resize2fs /dev/centos/root # and ext4 can't be shrunk back while mounted







share|improve this answer















Contrary to a comment seen on this question, as your partition table is Disk label type: dos, and not Disk label type: gpt, it is not possible to add a logical partition without first deleting a physical partition to have it act as logical partition container, since there are only 4 slots for physical partitions in MBR. Doing this without losing/corrupting data already in place is not trivial because the layout of logical partitions isn't exactly aligned the same as physical partitions: better not.



You can do what could have been done before (for example on the very first LVM partition, instead of adding new partitions), on a running system, at least with xfs (CentOS' default) or ext4 and probably several other filesystems, with no reboot and no downtime.




  • Have backups. Something can always go wrong (a typo, for instance).



  • Enlarge partition on disk



    The only partition with room to grow is the last one, because its blocks (at least here) also occupy the final position on the disk. This is the trickiest part of this answer: using fdisk, note the start sector of partition 4, then delete partition 4. Note that for now this operation happens only in fdisk's memory. Recreate a "new" primary partition 4 (as said above, do not try a logical partition), reusing the same start sector: this should be 251658240. Accept the whole remaining size offered, so the partition becomes bigger.



    WARNING: newer fdisk versions may offer to wipe a detected partition signature. Refuse, both now and when writing to disk: what it detected is your current LVM PV signature.



    Set the partition type back to 8e (probably only cosmetic). If everything is in order, write the new partition table and quit fdisk.



    UPDATE: to be clear, the operation above first happens in memory. The overall result of deleting and recreating partition 4 at the same start position, inside fdisk's memory, is simply an enlarged partition 4. When committing this from fdisk to the disk, only the MBR (i.e. the first sectors of the disk represented by /dev/sda) is rewritten: the data stored at sectors 251658240 and beyond is never touched. Higher-level tools (GUI tools, or even parted) offer an explicit enlarge operation (resizepart for parted), but the final result is the same. Partition 4 is never actually removed from the disk (and even if by mistake it were removed from the disk, that would still not be fatal, as long as it is recreated at the same position before the OS notices).




  • Update kernel view of the partition's new size



    Because the partition was in use (by the device mapper, etc.), fdisk will certainly have complained that it could not make the kernel reread the partition table and that the old table is still in use. To avoid a reboot, use the right tool to update exactly what changed: partx. It is simple here because only the size of the partition changed.



    # cat /sys/class/block/sda4/size
    188743680
    # partx -u /dev/sda4
    # cat /sys/class/block/sda4/size
    [bigger value]


    Verify that the size has increased and matches the result shown by fdisk. Otherwise something went wrong and a reboot is probably needed.




  • Enlarge the PV, the LV and the filesystem. Some LVM options can chain these steps into fewer commands, but here they are one by one.



    Without options, pvresize will use all the available space:



    # pvresize /dev/sda4


    The additional space on the PV is immediately made available on the VG for LV usage.



    # lvextend -l +100%FREE /dev/centos/root # or any other choice


    Then for xfs:



    # xfs_growfs / # remember that xfs may never shrink back


    Or ext4:



    # resize2fs /dev/centos/root # and ext4 can't be shrunk back while mounted








edited Jan 11 at 11:26

























answered Jan 10 at 18:59









– A.B

  • Answer made on SF with some similar steps: Expand full virtual debian disk to use empty space. That one is on GPT.

    – A.B
    Jan 10 at 19:00











  • Thanks @A.B - While I can't try it yet, this seems to be a great and valid answer. I'll make a snapshot of the entire server and try to follow your steps next Tuesday. Just to be sure: when I delete partition 4 and reuse the same start, will my data be intact or should I somehow snapshot that partition and restore it later? (if this is even possible).

    – Hache_raw
    Jan 11 at 10:20











  • The data stays intact. I did this regularly after enlarging disks on VM. You have to be sure to reuse the same start, and to NOT wipe the "partition/filesystem/whatever signature" if the tool offers to do that. Anyway, you have to plan for failure (backups) for such things. You could even try it on a VM to test.

    – A.B
    Jan 11 at 10:45













  • updated answer, telling that data isn't changed, only MBR

    – A.B
    Jan 11 at 11:21


















