Ethernet bonding round-robin does not work when the 1st interface is down












I'm trying to understand bonding mode=0 (round-robin load balancing). Using eth0 and eth1, I created the bond0 interface with the configuration below:



root@test-env1:~# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
NM_CONTROLLED=no
USERCTL=no
BOOTPROTO=static
IPADDR=192.168.57.91
NETMASK=255.255.255.0
GATEWAY=192.168.57.1
BONDING_OPTS="mode=0 miimon=100"
root@test-env1:~#
root@test-env1:~# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
NM_CONTROLLED=no
MASTER=bond0
SLAVE=yes
USERCTL=no
root@test-env1:~# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
NM_CONTROLLED=no
MASTER=bond0
SLAVE=yes
USERCTL=no
root@test-env1:~#
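
For reference, a minimal sketch of how this configuration would be activated using the legacy RHEL/CentOS network scripts (the commands are assumptions about the environment, not taken from the original session):

service network restart    # re-reads the ifcfg-* files and creates bond0 with its slaves
# or bring up just this bond (its slaves are enslaved by the ifup scripts):
ifup bond0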


The bonding interface came up successfully:



root@test-env1:~# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 100
Down Delay (ms): 100

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:30:0d:9e
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:a0:fc:9e
Slave queue ID: 0
root@test-env1:~#


Then I disconnected the eth0 cable, and a ping test reported that the IP had become unreachable. I know this kind of failover scenario would definitely work with mode=1 (active-backup).
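
A sketch of that test, assuming the bond address was pinged from another host on the same subnet (the actual ping target was not recorded):

# hypothetical reproduction, run from another host on 192.168.57.0/24
ping -c 5 192.168.57.91          # replies arrive while both slaves are up
# unplug the eth0 cable, then repeat:
ping -c 5 192.168.57.91          # reported unreachable with mode=0
# on the server, confirm the driver noticed the failure:
cat /proc/net/bonding/bond0      # eth0 should show "MII Status: down"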



Update: status of the bond after eth0 was unplugged



root@test-env1:~# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: down
Speed: Unknown
Duplex: Unknown
Link Failure Count: 1
Permanent HW addr: 08:00:27:30:0d:9e
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:a0:fc:9e
Slave queue ID: 0
root@test-env1:~#


It is also strange that when I rebooted the server with eth0 still unplugged, the bond interface did not come up at all, even though the configuration still has eth1 as an active/connected interface.
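
A few diagnostic commands that would show what the bonding driver did during that boot (a sketch; the log path assumes a RHEL-style syslog):

ip link show bond0                 # does bond0 exist, and is it UP?
ip link show eth1                  # is the remaining slave enslaved and up?
dmesg | grep -i bond               # bonding driver messages from this boot
grep -i bond /var/log/messages     # same messages in the persistent log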



The bonding documentation says that balance-rr (mode 0) provides load balancing and fault tolerance. I am just curious to know what kind of fault tolerance bonding mode=0 actually provides.



mode

Specifies one of the bonding policies. The default is
balance-rr (round robin). Possible values are:

balance-rr or 0

Round-robin policy: Transmit packets in sequential
order from the first available slave through the
last. This mode provides load balancing and fault
tolerance.
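
For reference, the mode and slaves actually in effect can be checked at runtime through the bonding driver's sysfs files (the expected output is an assumption based on the configuration above):

cat /sys/class/net/bond0/bonding/mode      # expected: balance-rr 0
cat /sys/class/net/bond0/bonding/slaves    # expected: eth0 eth1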


Could someone help me figure out whether bond mode=0 requires both interfaces to be active? If so, how does mode=0 provide fault tolerance?










rhel ethernet bonding






edited Dec 12 '15 at 14:45 by ttirtawi
asked Dec 12 '15 at 2:29 by ttirtawi

  • Just for completeness' sake: when you unplugged eth0, its status went to down in /proc/net/bonding/bond0, so we're sure the system noticed it went away?

    – Ulrich Schwarz
    Dec 12 '15 at 7:07











  • Yes, MII Status: down

    – ttirtawi
    Dec 12 '15 at 14:43













  • What is the output in your syslog when you unplug one cable?

    – Thomas
    Oct 9 '16 at 17:47






  • How do you configure your switch? In bonding documentation: "The balance-rr, balance-xor and broadcast modes generally require that the switch have the appropriate ports grouped together. The nomenclature for such a group differs between switches, it may be called an “etherchannel” (as in the Cisco example, above), a “trunk group” or some other similar variation."

    – Sharuzzaman Ahmat Raslan
    Oct 31 '16 at 4:34



















1 Answer

As per the explanation for mode 1, we can assume that for mode 0 both slaves are active. So, in your case, whenever a packet happens to go out on the slave that is still up, you will get a response to your ping.




balance-rr or 0



Round-robin policy: Transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.



active-backup or 1



Active-backup policy: Only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The bond's MAC address is externally visible on only one port (network adapter) to avoid confusing the switch.
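
For comparison, a minimal sketch of switching the asker's bond to active-backup; only the BONDING_OPTS line in ifcfg-bond0 changes, and the restart command is an assumption about the init system:

# in /etc/sysconfig/network-scripts/ifcfg-bond0, change:
BONDING_OPTS="mode=1 miimon=100"
# then re-apply the network configuration:
service network restart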






answered Jul 17 '17 at 4:43 by upkar





























