KVM Linux guest, 2nd HDD, LVM partition or whole disk

I've got RAID 10 arrays on a couple of KVM host machines. Each RAID 10 array is one big VG. I usually create a small LV for guest disk image storage, then carve the rest of the VG into LVs that I add to guests as additional disks.



Within these guests I usually run fdisk on the newly added device, create a single partition using 100% of the drive, and then run pvcreate on the partition rather than on the whole device.

e.g. pvcreate /dev/vdb1 vs pvcreate /dev/vdb
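For reference, a rough sketch of the two workflows side by side, assuming the new virtio disk shows up as /dev/vdb and using vg_data as an example VG name:

    # Option A: partition first, then use the partition as the PV
    parted -s /dev/vdb mklabel msdos mkpart primary 1MiB 100%
    parted -s /dev/vdb set 1 lvm on
    pvcreate /dev/vdb1
    vgextend vg_data /dev/vdb1    # or vgcreate vg_data /dev/vdb1 for a new VG

    # Option B: use the whole, unpartitioned disk as the PV
    pvcreate /dev/vdb
    vgextend vg_data /dev/vdb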



I realise that LVM itself operates perfectly well when creating a PV from a full device rather than a partition, but my habit has always been to partition first.

Can anyone see any downsides to using a non-partitioned drive in my particular scenario?



Any further disks that I add to guests will either expand an already existing data logical volume or create an additional data partition. I usually leave root/boot alone and just add additional data storage under existing or new mount points.



The advantage of using a non-partitioned drive within my guest is that I don't have to bother partitioning it with fdisk first, though I realise this probably only saves about a minute.



Does this have any effect on the potential recovery of data, e.g. being able to access the guest's logical volumes from outside the guest in case of VM failure?

Or being able to attach the LVs carved out of the host as additional drives on new VMs in a rebuild scenario?
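To give an idea of the kind of outside-the-guest access I mean, roughly (a sketch only; vg_host and guest1_data are hypothetical names for the host VG and the LV backing the guest's data disk, and vg_data is the guest's VG):

    # On the host, with the guest shut down:
    kpartx -av /dev/vg_host/guest1_data   # only needed if the guest disk was partitioned
    vgscan
    vgchange -ay vg_data                  # activate the guest's VG on the host
    mount /dev/vg_data/data /mnt/recovery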










lvm kvm block-device libvirtd

asked Mar 2 '14 at 22:31 by batfastad

          2 Answers

"The advantage of using a non-partitioned drive within my guest is that I don't have to bother partitioning it with fdisk first."




          That's no advantage.



          Isn't the advantage that you don't have to worry about resizing partitions, which the kernel just doesn't like to do while the disk is in use?



When using the disk directly as a PV in the guest, you no longer have to add extra drives to it in order to extend an LV inside the guest. You can just grow the existing LV on the host, which gives the guest a larger-capacity disk, and then grow the PV and LV inside the guest. So all the LV stitching is really done on the host side of things, whereas the guest side stays simple with a single-disk setup (or maybe a two-disk setup, if you like to have something separate for /boot).
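In practice that might look roughly like this (a sketch only; vg_host, guest1, vg_data and the sizes are example names/values, and the disk appears as /dev/vdb inside the guest):

    # On the host: grow the LV that backs the guest's data disk
    lvextend -L +50G /dev/vg_host/guest1_data

    # Tell the running guest about the new size (or just reboot it)
    virsh blockresize guest1 vdb 100G

    # Inside the guest: grow the PV, then the LV and its filesystem (-r resizes the fs too)
    pvresize /dev/vdb
    lvextend -r -l +100%FREE /dev/vg_data/data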



          The downside with unpartitioned disks is that it's just so easy to make mistakes. If your package manager installs a bootloader to your PV (because it wants to install the bootloader to all disks), that may or may not be harmful. Many programs expect disks to be partitioned (especially partitioning programs and GUI frontends). You're more likely to inadvertently damage it somehow.



          So this is a setup you should pick if you know what you are doing, and have a good backup in any case (be sure to include the LVM metadata in the backup).
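For the metadata part, one way to do that (just a sketch, with vg_data as an example VG name) is vgcfgbackup, whose output vgcfgrestore can later replay:

    # Inside the guest: dump the VG metadata to a text file and keep it with your backups
    vgcfgbackup -f /root/vg_data.backup vg_data
    # restore later with: vgcfgrestore -f /root/vg_data.backup vg_data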






answered Mar 2 '14 at 23:36 by frostschutz

The kernel is perfectly happy to resize partitions while the disk is in use. parted 3.2 will do this just fine. – psusi, Dec 11 '14 at 15:10

If you want the guest to boot-load itself with e.g. GRUB, then the disk has to have a partition table. Otherwise, you have to keep the guest kernel and initrd on the host and pass them to qemu to load directly.
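The direct-load variant might look something like this (just a sketch; the file paths, memory size and root= device are examples):

    qemu-system-x86_64 -m 2048 \
        -drive file=/dev/vg_host/guest1_root,if=virtio,format=raw \
        -kernel /srv/guests/guest1/vmlinuz \
        -initrd /srv/guests/guest1/initrd.img \
        -append "root=/dev/vg_data/root ro console=ttyS0"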



Also, rather than adding additional disks, you can simply resize the existing disk to add storage to the VM.






answered Dec 11 '14 at 15:12 by psusi