ZFS incremental snapshot send/recv without first snapshot












I have the following ZFS dataset:



pool/dataset
pool/dataset@snap1
pool/dataset@snap2
pool/dataset@snap3


This dataset has been replicated to a backup pool using ZFS send/recv:



backupPool/dataset
backupPool/dataset@snap1
backupPool/dataset@snap2


Afterwards, I deleted dataset@snap1 and dataset@snap2 from pool, and now I cannot send dataset@snap3 incrementally to backupPool: an incremental send needs a base snapshot that exists on both sides, and the two datasets no longer share one.



Is there a way out of this situation? For instance, can I generate an incremental stream between pool/dataset@snap3 and backupPool/dataset@snap2 and send it to backupPool? Or transfer backupPool/dataset@snap2 back to pool?



I could transfer pool/dataset@snap3 to a new dataset in backupPool, but I really need to keep the "history" of snapshots.
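To make the situation concrete, here is a sketch of the incremental send that is no longer possible and the full-send fallback, using the dataset names from the question. The zfs invocations are shown as comments because they require the live pools; only the final echo actually runs.

```shell
#!/bin/sh
# Dataset names from the question; the zfs commands are illustrative only.
SRC="pool/dataset"
DST="backupPool/dataset"

# The usual incremental update needs the base snapshot on BOTH sides:
#   zfs send -i ${SRC}@snap2 ${SRC}@snap3 | zfs recv ${DST}
# With @snap2 destroyed on pool, that base is gone. A full send of @snap3
# into a NEW dataset is still possible, but it loses the snapshot history:
#   zfs send ${SRC}@snap3 | zfs recv backupPool/dataset_new

# The incremental command that would have been used:
echo "zfs send -i ${SRC}@snap2 ${SRC}@snap3 | zfs recv ${DST}"
```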










  • Do you really need to have a ZFS dataset as a destination for your backups? It makes the whole concept difficult to manage. I have a similar setup for my backups. I just use serialized compressed snapshots with multiple incremental levels.

    – Martin Sugioarto
    Mar 6 at 7:56











  • Yes, I do need a ZFS dataset with the backup.

    – fclad
    Mar 6 at 15:24











  • @MartinSugioarto "serialized compressed snapshots with multiple incremental levels": that does leave you vulnerable to losing your backups entirely if you later have a problem with zfs receive. One advantage of immediately running zfs receive is that any issue is known immediately, not only when you have to restore.

    – Andrew Henle
    Mar 7 at 10:34













  • Serialized snapshots are only a problem if you delete them in the wrong order; level-0 snapshots are the most important, of course. The problem with live datasets is that if something accidentally writes to such a dataset, it becomes a logically different dataset and you need to roll back; deleting a snapshot there is also a write access. On the sending side, if you delete a snapshot, the remaining snapshots absorb its diffs all the way up to the live filesystem, and you lose the connection with your receiving dataset. This method appears very unstable to me.

    – Martin Sugioarto
    Mar 7 at 10:45






    I subtly misunderstood the question! @AndrewHenle's approach would work. When sending the new snapshot back to pool, you won't be able to do an incremental send; you will have to do a full send from scratch and create a new filesystem. By the way, if you deleted pool/dataset@snap2 to save space, you could try using bookmarks on the sending system instead of full snapshots -- they let ZFS know what changed since the last send, without keeping all of that snapshot's data around forever.

    – Dan
    Mar 8 at 3:08
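The bookmark approach mentioned in the comment above can be sketched as follows (dataset names taken from the question; the zfs invocations are shown as comments since they need the live pools). A bookmark records only the birth-time metadata of a snapshot, not its data, so the snapshot itself can be destroyed while incremental sends from that point remain possible.

```shell
#!/bin/sh
# Sketch of using a ZFS bookmark as an incremental-send source.
SRC="pool/dataset"

#   zfs bookmark ${SRC}@snap2 ${SRC}#snap2   # create bookmark from snapshot
#   zfs destroy  ${SRC}@snap2                # reclaim the snapshot's space
#   zfs send -i ${SRC}#snap2 ${SRC}@snap3 | zfs recv backupPool/dataset

# Note: a bookmark can only be the SOURCE of an incremental send (-i);
# you cannot send a bookmark itself, and you cannot receive into one.
echo "zfs send -i ${SRC}#snap2 ${SRC}@snap3"
```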
















Tags: zfs, snapshot






asked Mar 6 at 0:01
– fclad













1 Answer






Thank you for all your suggestions!

I finally rsynced pool/dataset@snap3 with backupPool/dataset@snap2, deleted the dataset backupPool/dataset, and recreated it from backupPool/dataset. I was not able to find a better solution to this problem.

Dan's suggestion was really helpful. Also, to avoid accidentally deleting needed snapshots in the future, it is good practice to place holds on them.
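The hold mechanism mentioned above works roughly like this (sketch with a hypothetical hold tag name; the zfs commands are commented out because they need the live pool). A held snapshot cannot be destroyed until every hold on it is released.

```shell
#!/bin/sh
# 'zfs hold' places a named user reference on a snapshot; 'zfs destroy' on a
# held snapshot fails ("dataset is busy") until the hold is released.
TAG="replication"              # hypothetical hold tag
SNAP="pool/dataset@snap3"

#   zfs hold    ${TAG} ${SNAP}   # protect the snapshot
#   zfs holds   ${SNAP}          # list holds on it
#   zfs destroy ${SNAP}          # fails while the hold exists
#   zfs release ${TAG} ${SNAP}   # remove the hold; destroy works again

echo "zfs hold ${TAG} ${SNAP}"
```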






answered Mar 8 at 17:31
– fclad





























