Why is /proc/net/udp showing the wrong rx_queue size on a CentOS server?

I have a Java server application that essentially does the following:



    while (true) {
        serverSocket.receive(receivePacket);   // block until a datagram arrives
        process(receivePacket);
        serverSocket.send(sendPacket);         // reply to the sender
        try {
            Thread.sleep(10000);               // sleep for 10 s before the next receive
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }


It receives and processes one UDP packet every 10 seconds.



If I send 10 UDP packets, the server processes one packet and then sleeps for 10 seconds, so I get the response to the 10th packet after about 100 seconds.
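The 10 packets come from a small client along these lines (the class name, address, port and payload here are placeholders, not the real values):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;

    public class BurstClient {
        public static void main(String[] args) throws Exception {
            InetAddress server = InetAddress.getByName("192.0.2.10");  // placeholder address
            int port = 5555;                                           // placeholder port
            byte[] payload = "ping".getBytes(StandardCharsets.UTF_8);

            try (DatagramSocket socket = new DatagramSocket()) {
                for (int i = 0; i < 10; i++) {
                    // Send 10 datagrams back to back; the server drains only one every
                    // 10 s, so the rest wait in its socket receive queue (rx_queue).
                    socket.send(new DatagramPacket(payload, payload.length, server, port));
                }
            }
        }
    }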



If I do this on server 1, which runs CentOS release 6.4 (Final):



Server 1: cat /proc/net/udp
sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode ref pointer drops
110: 00000000:10AE 00000000:0000 07 00000000:00000000 00:00000000 00000000 0 0 85635445 2 ffff880836e6d100 0
111: 00000000:10AF 00000000:0000 07 00000000:00000000 00:00000000 00000000 0 0 85635446 2 ffff88083913a1c0 0
115: 00000000:15B3 00000000:0000 07 00000000:00004FC8 00:00000000 00000000 0 0 390649369 2 ffff880434ae7440 0
117: 02FE6341:0035 00000000:0000 07 00000000:00000000 00:00000000 00000000 0 0 353480394 2 ffff8808367f9040 0


If I do the same thing on server 2:



Server 2: cat /proc/net/udp
sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode ref pointer drops
4: FCA9C11F:C36F 8C719AC6:0035 01 00000000:00000000 00:00000000 00000000 0 0 2983494501 2 ffff880169aff4c0 0
5: FCA9C11F:D3F0 8C719AC6:0035 01 00000000:00000000 00:00000000 00000000 0 0 2983494485 2 ffff8801b9bbedc0 0
16: 7A52BB59:007B 00000000:0000 07 00000000:00000000 00:00000000 00000000 38 0 2438608536 2 ffff8807656764c0 0
16: A2EE0D55:007B 00000000:0000 07 00000000:00000000 00:00000000 00000000 38 0 2438608045 2 ffff88077ccdd7c0 0
16: A58F466D:007B 00000000:0000 07 00000000:00000000 00:00000000 00000000 38 0 2438607809 2 ffff8801129f6240 0


Both are CentOS servers, and as you can see, server 1's rx_queue keeps growing because the application processes packets more slowly than data arrives.



I did exactly the same thing on server 2, but there the rx_queue does not increase.
What am I doing or understanding wrong?
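For reference, the rx_queue value is the hexadecimal number after the colon in the tx_queue:rx_queue column, so it can be cross-checked with a small helper like this (the class name and the port are placeholders for whatever the server is actually bound to):

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;

    public class RxQueueCheck {
        public static void main(String[] args) throws Exception {
            int wantedPort = 5555;  // placeholder: the UDP port the server is bound to

            List<String> lines = Files.readAllLines(Paths.get("/proc/net/udp"));
            for (String line : lines.subList(1, lines.size())) {              // skip the header row
                String[] cols = line.trim().split("\\s+");
                int localPort = Integer.parseInt(cols[1].split(":")[1], 16);  // local_address is ADDR:PORT in hex
                if (localPort == wantedPort) {
                    long rxQueue = Long.parseLong(cols[4].split(":")[1], 16); // tx_queue:rx_queue, both in hex
                    System.out.println("rx_queue = " + rxQueue + " bytes");
                }
            }
        }
    }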

Tags: centos java udp buffer

asked Nov 8 '18 at 6:16 by Al-Alamin

What kernel version on the two servers?

– Neopallium
Nov 23 '18 at 9:02

2 Answers

I am seeing a similar problem on Ubuntu 18.04 LTS (kernel 4.15.0-38), but it doesn't happen on my Debian 9.5 (kernel 4.9.110-3) box. It looks like a bug in newer kernels.



A simple way to reproduce the problem is with netcat. The client and server can be local or on different boxes.




  1. Run the netcat server in one terminal: nc -u -l 1234

  2. Run the netcat client in another terminal: nc -u 127.0.0.1 1234

  3. Type a short message, e.g. "a", in the client and press Enter.

  4. In a third terminal, check the Recv-Q lengths: netstat -plan | grep 1234


On Ubuntu the receiving UDP socket will show a non-empty Recv-Q (768 bytes for a 2-byte message) even though netcat has already read the message from the socket and printed it. I have seen the Recv-Q keep growing until about 52k, then it resets back to zero.



On Debian the Recv-Q is always zero as long as the UDP socket is drained faster than packets are received.



I also found this kernel bug report: UDP rx_queue incorrect calculation in /proc/net/udp

answered Nov 23 '18 at 9:16 by Neopallium (edited Nov 23 '18 at 9:34)

Pardon me, I'm new to this part of Stack Exchange, so I'm posting an answer instead of a comment.



I'm getting the same problem as @Neopallium, on Ubuntu 18.04 LTS (kernel 4.15.0-36). In my testing I artificially set net.core.rmem_max=26214400 and net.core.rmem_default=26214400 (25 MB each) and ran my UDP server application with no UDP datagram backlog throughout the test. The rx_queue counter climbs to about 00000000:006xxxxx (roughly 6 MB or more) and then suddenly resets to 0, i.e. at about 1/4 of net.core.rmem_max. On Ubuntu 18.04 LTS the default value of both net.core.rmem_default and net.core.rmem_max is 212992, so it's no surprise @Neopallium saw his counter grow to about 52k (roughly 1/4 of 212k) before it reset.
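For context, these sysctls bound the per-socket receive buffer (SO_RCVBUF). Here is a generic Java sketch, not my actual test application, that requests a larger buffer and prints what the kernel actually granted:

    import java.net.DatagramSocket;

    public class RcvBufDemo {
        public static void main(String[] args) throws Exception {
            try (DatagramSocket socket = new DatagramSocket(0)) {  // bind to any free UDP port
                socket.setReceiveBufferSize(25 * 1024 * 1024);     // request ~25 MB for SO_RCVBUF
                // For unprivileged processes the kernel caps the requested size at
                // net.core.rmem_max, so the effective buffer can be far smaller.
                System.out.println("effective SO_RCVBUF = "
                        + socket.getReceiveBufferSize() + " bytes");
            }
        }
    }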



Here's the application's entry in /proc/net/udp near the point of reset:



               sl  local_address rem_address   st tx_queue rx_queue tr tm->when retrnsmt   uid  timeout inode ref pointer drops
              1256: 00000000:7530 00000000:0000 07 00000000:00632240 00:00000000 00000000 0 0 94457826 2 0000000000000000 0



Here's a screenshot of my Grafana socket graph over the last 45 minutes:



[Grafana socket graph screenshot]
Like @Neopallium, I'm inclined to believe it's a kernel bug.

answered Feb 12 at 17:45 by L. J.