Share an ext4 filesystem between two RHEL servers, but only one will mount at a time
I have two RHEL 6 servers that share a physical connection to a SAN storage array (i.e., both servers can see the same /dev/sdb when running fdisk -l).
My goal is for the two servers not to access the ext4 filesystem at the same time. One server will have it mounted most of the time; only when that server fails do I want the other server to mount the filesystem.
I have already created the logical volumes and verified that both servers can mount the filesystem successfully. I plan to write scripts that check that the volume is not mounted on the other server before mounting it.
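A minimal sketch of such a pre-mount check might look like the following. The peer hostname, LV path, and mount point are placeholders, and it assumes passwordless ssh to the peer; this is an illustration, not a robust fencing mechanism:

```shell
#!/bin/sh
# Sketch of a pre-mount check: refuse to mount if the peer already has
# the device mounted. PEER, DEVICE, and MOUNTPOINT are placeholders.

# is_mounted DEV: read a mount table (/proc/mounts format) on stdin and
# succeed only if DEV appears as a mounted device.
is_mounted() {
    awk -v dev="$1" '$1 == dev { found = 1 } END { exit !found }'
}

# Guard: sourcing or running the file bare only defines the function;
# the ssh/mount side effects run only with an explicit --run argument.
if [ "${1:-}" = "--run" ]; then
    PEER=other-server.example.com        # assumption: reachable via ssh keys
    DEVICE=/dev/vg_shared/lv_data        # placeholder LV path
    MOUNTPOINT=/mnt/shared               # placeholder mount point
    if ssh "$PEER" cat /proc/mounts | is_mounted "$DEVICE"; then
        echo "refusing to mount: $DEVICE is already mounted on $PEER" >&2
        exit 1
    fi
    exec mount "$DEVICE" "$MOUNTPOINT"
fi
```

Note that a check-then-mount sequence like this is inherently racy (the peer could mount the device between the check and the mount), which is why proper fencing or ext4's multi-mount protection is worth layering underneath it.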
My question is: when the servers take turns mounting an ext4 filesystem like this, are there underlying problems I am missing? I fear that the OS might keep some check or "notes" on the volume...
filesystems rhel ext4 shared-disk
edited Jul 13 '16 at 5:24 by Scott
asked Jul 13 '16 at 4:56 by Lok.K.
3 Answers
There is actually a common scenario, which I have done many times, where you unmount a filesystem and then mount it from another OS installation: recovery. During recovery you boot from, for example, a live CD or another installation and make changes to the filesystem; you then unmount it, boot the original OS, and everything works correctly. This is a standard maintenance task that I (and probably every experienced Linux administrator) have done dozens of times without issues. Similarly, you can mount a USB drive on one computer, unmount it, and mount it on another computer without issues.

There are indeed some "notes" kept on the filesystem, but they are not tied to a specific installation. Rather, the superblock stores important filesystem attributes such as mount options, the filesystem state (clean or not), the mount count, and so on. You can read about these attributes in man tune2fs.
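You can inspect these superblock attributes without touching a real disk by creating a throwaway ext4 image inside a regular file (requires e2fsprogs; the path is illustrative):

```shell
# Build a small ext4 filesystem inside a regular file (no root needed)
# and dump the superblock "notes" the filesystem keeps about itself.
dd if=/dev/zero of=/tmp/ext4-demo.img bs=1M count=8 2>/dev/null
mke2fs -t ext4 -q -F /tmp/ext4-demo.img
tune2fs -l /tmp/ext4-demo.img | grep -E 'Filesystem state|Mount count|Last mounted'
```

A freshly created filesystem reports a clean state and a mount count of 0; these fields are updated at mount and unmount time, not tied to a particular host installation.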
I would be less worried about the case where you cleanly unmount the filesystem and mount it from a different OS than about getting the locking mechanism right. Ext3/ext4 is not designed for clustered use, and I do not understand why you are trying to use it that way. There is an alternative, OCFS2 (Oracle Cluster File System 2), which is free, open source, included in the Linux kernel, and created specifically for this kind of situation. I see no reason to build a self-made (read: "with unknown bugs") locking system when a tested, free solution is available.

answered Jul 13 '16 at 23:26 by Koss645
You do not need a cluster filesystem for an HA failover scenario. Ext4 will work perfectly well. – fpmurphy Jul 14 '16 at 2:03
If you put it that way, then it indeed makes sense, and I guess I don't have to worry too much about this. Thank you. As for why I am doing this: I am not using a cluster, mainly because of the extra cost (license) versus the gain (i.e. this is not critical enough to warrant a cluster for immediate failover + recovery). I looked up OCFS2 and it seems it is not supported by RHEL; RHEL only supports GFS, and that again requires an extra license (the Resilient Storage add-on). Thanks again. – Lok.K. Jul 14 '16 at 3:22
So long as the filesystem is unmounted cleanly, there is no reason those two hosts cannot share that device the way you have just described, provided the user IDs (UIDs) match. If Alice is UID 1001 on host A and Bob is UID 1001 on host B, then a directory on /dev/sdb mounted at /mnt/yourshareddevice/somedirectory that is owned by Alice on host A will be owned by Bob on host B.
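The reason for this is that ext4 stores the numeric UID on disk, not the user name; each host resolves that number against its own passwd database. A quick way to see the raw numbers (the file path is arbitrary):

```shell
# File ownership as recorded on disk is just a number.
touch /tmp/uid-demo
ls -ln /tmp/uid-demo   # -n prints numeric UID/GID instead of names
id -u                  # how this host maps your user name to that number
```

Keeping the two servers' passwd entries in sync (or using a common directory service) avoids surprises after a failover.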
Personally, I'd find it more convenient to share that device through a more traditional network file storage solution (NFS, SMB, Gluster, etc.), but what you're describing is feasible and functional.

answered Sep 29 '16 at 18:23 by Stephan
There is no problem using ext4 in this manner; in fact, this is commonly done with file servers (e.g. dual-homed NFS servers, or large cluster filesystems like Lustre) in a High Availability (HA) configuration. There is even an ext4 feature, Multi-Mount Protection (mmp), intended to be used in conjunction with HA software to reduce the risk of corrupting your filesystem by mounting it on two nodes at the same time.
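Enabling MMP is a one-time tune2fs operation on the unmounted filesystem. The commands below are demonstrated on a throwaway file image so they can be run without root; on real hardware you would point them at the shared LV instead:

```shell
# Create a scratch ext4 image, then turn on multi-mount protection.
dd if=/dev/zero of=/tmp/mmp-demo.img bs=1M count=8 2>/dev/null
mke2fs -t ext4 -q -F /tmp/mmp-demo.img
tune2fs -O mmp /tmp/mmp-demo.img                    # enable the mmp feature
tune2fs -E mmp_update_interval=5 /tmp/mmp-demo.img  # optional: heartbeat interval (seconds)
tune2fs -l /tmp/mmp-demo.img | grep -i mmp          # feature flag and MMP block info
```

With mmp set, a second mount attempt while the filesystem appears to be in use on another node fails instead of silently corrupting data.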
That said, you should definitely use existing HA software such as Corosync/Pacemaker (included by default with RHEL/CentOS) to manage the storage instead of writing your own, which is a recipe for losing all of your data. Together with hardware power control (e.g. powerman) for STONITH, and mmp as a backup in the rare case that Corosync/Pacemaker fail, you can safely mount a single ext4 filesystem on two or more servers, one server at a time.

It wasn't clear from your question, but note that sharing LVs/partitions on a single disk between the servers at the same time is NOT safe. A whole disk should only be accessed from one server at any one time.

answered Apr 17 '18 at 23:29 by LustreOne