`EINTR`: is there a rationale behind it?
Small talk as background



EINTR is the error that so-called interruptible system calls may return. If a signal is delivered while a system call is blocked, the signal is not ignored, and a handler was installed for it without SA_RESTART set, then the system call returns early with the EINTR error code once the handler has run.



As a side note, I got this error very often using ncurses in Python.



The question



Is there a rationale behind this behaviour as specified by the POSIX standard? One can understand that resuming may not always be possible (depending on the kernel design), but what is the rationale for not restarting the call automatically at the kernel level? Is this for legacy or for technical reasons? If technical, are those reasons still valid today? If legacy, what is the history?










Tags: process, signals, posix, system-calls






asked Jan 5 '16 at 10:21 by Hibou57; edited Jan 5 '16 at 23:07 by chaos






















2 Answers
































          It is difficult to do nontrivial things in a signal handler, since the rest of the program is in an unknown state. Most signal handlers just set a flag, which is later checked and handled elsewhere in the program.



          Reason for not restarting the system call automatically:



Imagine an application that receives data from a socket via the blocking recv() system call. In our scenario, data arrives very slowly and the program spends a long time inside that call. The program has a signal handler for SIGINT that merely sets a flag (which is evaluated elsewhere), and SA_RESTART is set so that the system call restarts automatically. The program is blocked in recv(), waiting for data that never arrives, when the user presses ctrl-c. The system call is interrupted, the handler runs and sets the flag, and then recv() is restarted and keeps waiting. The event loop is stuck in recv() and never gets the opportunity to evaluate the flag and exit the program gracefully.



          With SA_RESTART not set:



In the above scenario, with SA_RESTART not set, recv() would fail with EINTR instead of being restarted. The system call returns, so the program can continue. Of course, the program should then, as early as possible, check the flag set by the signal handler and clean up or do whatever else is appropriate.






answered Jan 5 '16 at 11:27 by chaos; edited Nov 12 '17 at 20:10 by jlliagre


























• Another point of view which may be worth adding: skarnet.org/software/skalibs/libstddjb/safewrappers.html . It ultimately says the same thing (although more implicitly), except that in your answer you assume there may be no time-out at all.

            – Hibou57
            Jan 5 '16 at 13:25






• Even with SA_RESTART set, not all system calls are restarted automatically. For example, Linux does not restart msgsnd() or msgrcv().

            – Andrew Henle
            Jan 5 '16 at 23:22



































Richard Gabriel wrote a paper, The Rise of 'Worse is Better', which discusses this design choice in Unix:




          Two famous people, one from MIT and another from Berkeley (but working
          on Unix) once met to discuss operating system issues. The person from
          MIT was knowledgeable about ITS (the MIT AI Lab operating system) and
          had been reading the Unix sources. He was interested in how Unix
          solved the PC loser-ing problem. The PC loser-ing problem occurs when
          a user program invokes a system routine to perform a lengthy operation
          that might have significant state, such as IO buffers. If an interrupt
          occurs during the operation, the state of the user program must be
          saved. Because the invocation of the system routine is usually a
          single instruction, the PC of the user program does not adequately
          capture the state of the process. The system routine must either back
          out or press forward. The right thing is to back out and restore the
          user program PC to the instruction that invoked the system routine so
          that resumption of the user program after the interrupt, for example,
          re-enters the system routine. It is called PC loser-ing because
          the PC is being coerced into loser mode, where 'loser' is the
          affectionate name for 'user' at MIT.



          The MIT guy did not see any code that handled this case and asked the
          New Jersey guy how the problem was handled. The New Jersey guy said
          that the Unix folks were aware of the problem, but the solution was
          for the system routine to always finish, but sometimes an error code
          would be returned that signaled that the system routine had failed to
          complete its action. A correct user program, then, had to check the
          error code to determine whether to simply try the system routine
          again. The MIT guy did not like this solution because it was not the
          right thing.



          The New Jersey guy said that the Unix solution was right because the
          design philosophy of Unix was simplicity and that the right thing was
          too complex. Besides, programmers could easily insert this extra test
          and loop. The MIT guy pointed out that the implementation was simple
          but the interface to the functionality was complex. The New Jersey guy
          said that the right tradeoff has been selected in Unix-namely,
          implementation simplicity was more important than interface
          simplicity.


























• Two famous people walk into a bar — one from MIT, one from Berkeley, and one from New Jersey.  Huh? I realize that it’s a quote, but can you clarify it? The last paragraph is a bit muddled, too — the Unix solution was right because the right thing was too complex for Unix, and so they didn’t implement it.

            – G-Man
            Nov 12 '17 at 20:26











• Minimum Viable Product is a related concept. The Unix solution of using EINTR was viable and offered simplicity in a highly portable OS codebase. Delegated to user code, handling EINTR is easy (just retry), yet somewhat bothersome.

            – Brad Schoening
            Nov 14 '17 at 2:31











          Your Answer








          StackExchange.ready(function() {
          var channelOptions = {
          tags: "".split(" "),
          id: "106"
          };
          initTagRenderer("".split(" "), "".split(" "), channelOptions);

          StackExchange.using("externalEditor", function() {
          // Have to fire editor after snippets, if snippets enabled
          if (StackExchange.settings.snippets.snippetsEnabled) {
          StackExchange.using("snippets", function() {
          createEditor();
          });
          }
          else {
          createEditor();
          }
          });

          function createEditor() {
          StackExchange.prepareEditor({
          heartbeatType: 'answer',
          autoActivateHeartbeat: false,
          convertImagesToLinks: false,
          noModals: true,
          showLowRepImageUploadWarning: true,
          reputationToPostImages: null,
          bindNavPrevention: true,
          postfix: "",
          imageUploader: {
          brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
          contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
          allowUrls: true
          },
          onDemand: true,
          discardSelector: ".discard-answer"
          ,immediatelyShowMarkdownHelp:true
          });


          }
          });














          draft saved

          draft discarded


















          StackExchange.ready(
          function () {
          StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2funix.stackexchange.com%2fquestions%2f253349%2feintr-is-there-a-rationale-behind-it%23new-answer', 'question_page');
          }
          );

          Post as a guest















          Required, but never shown

























          2 Answers
          2






          active

          oldest

          votes








          2 Answers
          2






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes









          9














          It is difficult to do nontrivial things in a signal handler, since the rest of the program is in an unknown state. Most signal handlers just set a flag, which is later checked and handled elsewhere in the program.



          Reason for not restarting the system call automatically:



          Imagine an application which receives data from a socket by the blocking and uninterruptible recv() system call. In our scenario, data comes very slow and the program resides long in that system call. That program has a signal handler for SIGINT that sets a flag (which is evaluated elsewhere), and SA_RESTART is set that the system call restarts automatically. Imagine that the program is in recv() which waits for data. But no data arrives. The system call blocks. The program now catches ctrl-c from the user. The system call is interrupted and the signal handler, which just sets the flag is executed. Then recv() is restarted, still waiting for data. The event loop is stuck in recv() and has no opportunity to evaluate the flag and exit the program gracefully.



          With SA_RESTART not set:



          In the above scenario, when SA_RESTART is not set, recv() would recieve EINTR instead of being restarted. The system call exits and thus can continue. Off course, the program should then (as early as possible) check the flag (set by the signal handler) and do clean up or whatever it does.






          share|improve this answer


























          • Another point of view which may be worth added: skarnet.org/software/skalibs/libstddjb/safewrappers.html . It finally says the same (although more implicitly), except in your answer, you are assuming there may even be no time‑out at all.

            – Hibou57
            Jan 5 '16 at 13:25






          • 2





            Even with SA_RESTART set, not all system calls are restarted automatically. For example, Linux does not restart msgsnd() or msgrcv().

            – Andrew Henle
            Jan 5 '16 at 23:22


















          9














          It is difficult to do nontrivial things in a signal handler, since the rest of the program is in an unknown state. Most signal handlers just set a flag, which is later checked and handled elsewhere in the program.



          Reason for not restarting the system call automatically:



          Imagine an application which receives data from a socket by the blocking and uninterruptible recv() system call. In our scenario, data comes very slow and the program resides long in that system call. That program has a signal handler for SIGINT that sets a flag (which is evaluated elsewhere), and SA_RESTART is set that the system call restarts automatically. Imagine that the program is in recv() which waits for data. But no data arrives. The system call blocks. The program now catches ctrl-c from the user. The system call is interrupted and the signal handler, which just sets the flag is executed. Then recv() is restarted, still waiting for data. The event loop is stuck in recv() and has no opportunity to evaluate the flag and exit the program gracefully.



          With SA_RESTART not set:



          In the above scenario, when SA_RESTART is not set, recv() would recieve EINTR instead of being restarted. The system call exits and thus can continue. Off course, the program should then (as early as possible) check the flag (set by the signal handler) and do clean up or whatever it does.






          share|improve this answer


























          • Another point of view which may be worth added: skarnet.org/software/skalibs/libstddjb/safewrappers.html . It finally says the same (although more implicitly), except in your answer, you are assuming there may even be no time‑out at all.

            – Hibou57
            Jan 5 '16 at 13:25






          • 2





            Even with SA_RESTART set, not all system calls are restarted automatically. For example, Linux does not restart msgsnd() or msgrcv().

            – Andrew Henle
            Jan 5 '16 at 23:22
















          9












          9








          9







          It is difficult to do nontrivial things in a signal handler, since the rest of the program is in an unknown state. Most signal handlers just set a flag, which is later checked and handled elsewhere in the program.



          Reason for not restarting the system call automatically:



          Imagine an application which receives data from a socket by the blocking and uninterruptible recv() system call. In our scenario, data comes very slow and the program resides long in that system call. That program has a signal handler for SIGINT that sets a flag (which is evaluated elsewhere), and SA_RESTART is set that the system call restarts automatically. Imagine that the program is in recv() which waits for data. But no data arrives. The system call blocks. The program now catches ctrl-c from the user. The system call is interrupted and the signal handler, which just sets the flag is executed. Then recv() is restarted, still waiting for data. The event loop is stuck in recv() and has no opportunity to evaluate the flag and exit the program gracefully.



          With SA_RESTART not set:



          In the above scenario, when SA_RESTART is not set, recv() would recieve EINTR instead of being restarted. The system call exits and thus can continue. Off course, the program should then (as early as possible) check the flag (set by the signal handler) and do clean up or whatever it does.






          share|improve this answer















          It is difficult to do nontrivial things in a signal handler, since the rest of the program is in an unknown state. Most signal handlers just set a flag, which is later checked and handled elsewhere in the program.



          Reason for not restarting the system call automatically:



          Imagine an application which receives data from a socket by the blocking and uninterruptible recv() system call. In our scenario, data comes very slow and the program resides long in that system call. That program has a signal handler for SIGINT that sets a flag (which is evaluated elsewhere), and SA_RESTART is set that the system call restarts automatically. Imagine that the program is in recv() which waits for data. But no data arrives. The system call blocks. The program now catches ctrl-c from the user. The system call is interrupted and the signal handler, which just sets the flag is executed. Then recv() is restarted, still waiting for data. The event loop is stuck in recv() and has no opportunity to evaluate the flag and exit the program gracefully.



          With SA_RESTART not set:



          In the above scenario, when SA_RESTART is not set, recv() would recieve EINTR instead of being restarted. The system call exits and thus can continue. Off course, the program should then (as early as possible) check the flag (set by the signal handler) and do clean up or whatever it does.







          share|improve this answer














          share|improve this answer



          share|improve this answer








          edited Nov 12 '17 at 20:10









          jlliagre

          47.3k784134




          47.3k784134










          answered Jan 5 '16 at 11:27









          chaoschaos

          35.6k973117




          35.6k973117













          • Another point of view which may be worth added: skarnet.org/software/skalibs/libstddjb/safewrappers.html . It finally says the same (although more implicitly), except in your answer, you are assuming there may even be no time‑out at all.

            – Hibou57
            Jan 5 '16 at 13:25






          • 2





            Even with SA_RESTART set, not all system calls are restarted automatically. For example, Linux does not restart msgsnd() or msgrcv().

            – Andrew Henle
            Jan 5 '16 at 23:22





















          • Another point of view which may be worth added: skarnet.org/software/skalibs/libstddjb/safewrappers.html . It finally says the same (although more implicitly), except in your answer, you are assuming there may even be no time‑out at all.

            – Hibou57
            Jan 5 '16 at 13:25






          • 2





            Even with SA_RESTART set, not all system calls are restarted automatically. For example, Linux does not restart msgsnd() or msgrcv().

            – Andrew Henle
            Jan 5 '16 at 23:22



















          Another point of view which may be worth added: skarnet.org/software/skalibs/libstddjb/safewrappers.html . It finally says the same (although more implicitly), except in your answer, you are assuming there may even be no time‑out at all.

          – Hibou57
          Jan 5 '16 at 13:25





          Another point of view which may be worth added: skarnet.org/software/skalibs/libstddjb/safewrappers.html . It finally says the same (although more implicitly), except in your answer, you are assuming there may even be no time‑out at all.

          – Hibou57
          Jan 5 '16 at 13:25




          2




          2





          Even with SA_RESTART set, not all system calls are restarted automatically. For example, Linux does not restart msgsnd() or msgrcv().

          – Andrew Henle
          Jan 5 '16 at 23:22







          Even with SA_RESTART set, not all system calls are restarted automatically. For example, Linux does not restart msgsnd() or msgrcv().

          – Andrew Henle
          Jan 5 '16 at 23:22















          2














          Richard Gabriel wrote a paper The Rise of 'Worse is Better' which discusses the design choice here in Unix:




          Two famous people, one from MIT and another from Berkeley (but working
          on Unix) once met to discuss operating system issues. The person from
          MIT was knowledgeable about ITS (the MIT AI Lab operating system) and
          had been reading the Unix sources. He was interested in how Unix
          solved the PC loser-ing problem. The PC loser-ing problem occurs when
          a user program invokes a system routine to perform a lengthy operation
          that might have significant state, such as IO buffers. If an interrupt
          occurs during the operation, the state of the user program must be
          saved. Because the invocation of the system routine is usually a
          single instruction, the PC of the user program does not adequately
          capture the state of the process. The system routine must either back
          out or press forward. The right thing is to back out and restore the
          user program PC to the instruction that invoked the system routine so
          that resumption of the user program after the interrupt, for example,
          re-enters the system routine. It is called PC loser-ing because
          the PC is being coerced into loser mode, where 'loser' is the
          affectionate name for 'user' at MIT.



          The MIT guy did not see any code that handled this case and asked the
          New Jersey guy how the problem was handled. The New Jersey guy said
          that the Unix folks were aware of the problem, but the solution was
          for the system routine to always finish, but sometimes an error code
          would be returned that signaled that the system routine had failed to
          complete its action. A correct user program, then, had to check the
          error code to determine whether to simply try the system routine
          again. The MIT guy did not like this solution because it was not the
          right thing.



          The New Jersey guy said that the Unix solution was right because the
          design philosophy of Unix was simplicity and that the right thing was
          too complex. Besides, programmers could easily insert this extra test
          and loop. The MIT guy pointed out that the implementation was simple
          but the interface to the functionality was complex. The New Jersey guy
          said that the right tradeoff has been selected in Unix-namely,
          implementation simplicity was more important than interface
          simplicity.







          share|improve this answer



















          • 1





            Two famous people walk into a bar — one from MIT, one from Berkeley, and one from New Jersey.  Huh? I realize that it’s a quote, but can you clarify it? The last paragraph is a bit muddled, too — the Unix solution was right because the right thing was too complex for Unix, and so they didn’t implement it.

            – G-Man
            Nov 12 '17 at 20:26











          • Minimum Viable Product is a related concept. The unix solution to use EINTR was viable and offered simplicity in the highly portable OS codebase. Delegated to user code, handling EINTR is easy (just retry), yet kind of bothersome.

            – Brad Schoening
            Nov 14 '17 at 2:31
















          2














          Richard Gabriel wrote a paper The Rise of 'Worse is Better' which discusses the design choice here in Unix:




          Two famous people, one from MIT and another from Berkeley (but working
          on Unix) once met to discuss operating system issues. The person from
          MIT was knowledgeable about ITS (the MIT AI Lab operating system) and
          had been reading the Unix sources. He was interested in how Unix
          solved the PC loser-ing problem. The PC loser-ing problem occurs when
          a user program invokes a system routine to perform a lengthy operation
          that might have significant state, such as IO buffers. If an interrupt
          occurs during the operation, the state of the user program must be
          saved. Because the invocation of the system routine is usually a
          single instruction, the PC of the user program does not adequately
          capture the state of the process. The system routine must either back
          out or press forward. The right thing is to back out and restore the
          user program PC to the instruction that invoked the system routine so
          that resumption of the user program after the interrupt, for example,
          re-enters the system routine. It is called PC loser-ing because
          the PC is being coerced into loser mode, where 'loser' is the
          affectionate name for 'user' at MIT.



          The MIT guy did not see any code that handled this case and asked the
          New Jersey guy how the problem was handled. The New Jersey guy said
          that the Unix folks were aware of the problem, but the solution was
          for the system routine to always finish, but sometimes an error code
          would be returned that signaled that the system routine had failed to
          complete its action. A correct user program, then, had to check the
          error code to determine whether to simply try the system routine
          again. The MIT guy did not like this solution because it was not the
          right thing.



          The New Jersey guy said that the Unix solution was right because the
          design philosophy of Unix was simplicity and that the right thing was
          too complex. Besides, programmers could easily insert this extra test
          and loop. The MIT guy pointed out that the implementation was simple
          but the interface to the functionality was complex. The New Jersey guy
          said that the right tradeoff has been selected in Unix-namely,
          implementation simplicity was more important than interface
          simplicity.







          share|improve this answer



















          • 1





            Two famous people walk into a bar — one from MIT, one from Berkeley, and one from New Jersey.  Huh? I realize that it’s a quote, but can you clarify it? The last paragraph is a bit muddled, too — the Unix solution was right because the right thing was too complex for Unix, and so they didn’t implement it.

            – G-Man
            Nov 12 '17 at 20:26











          • Minimum Viable Product is a related concept. The unix solution to use EINTR was viable and offered simplicity in the highly portable OS codebase. Delegated to user code, handling EINTR is easy (just retry), yet kind of bothersome.

            – Brad Schoening
            Nov 14 '17 at 2:31














          2












          2








          2







          Richard Gabriel wrote a paper The Rise of 'Worse is Better' which discusses the design choice here in Unix:




          Two famous people, one from MIT and another from Berkeley (but working
          on Unix) once met to discuss operating system issues. The person from
          MIT was knowledgeable about ITS (the MIT AI Lab operating system) and
          had been reading the Unix sources. He was interested in how Unix
          solved the PC loser-ing problem. The PC loser-ing problem occurs when
          a user program invokes a system routine to perform a lengthy operation
          that might have significant state, such as IO buffers. If an interrupt
          occurs during the operation, the state of the user program must be
          saved. Because the invocation of the system routine is usually a
          single instruction, the PC of the user program does not adequately
          capture the state of the process. The system routine must either back
          out or press forward. The right thing is to back out and restore the
          user program PC to the instruction that invoked the system routine so
          that resumption of the user program after the interrupt, for example,
          re-enters the system routine. It is called PC loser-ing because
          the PC is being coerced into loser mode, where 'loser' is the
          affectionate name for 'user' at MIT.



          The MIT guy did not see any code that handled this case and asked the
          New Jersey guy how the problem was handled. The New Jersey guy said
          that the Unix folks were aware of the problem, but the solution was
          for the system routine to always finish, but sometimes an error code
          would be returned that signaled that the system routine had failed to
          complete its action. A correct user program, then, had to check the
          error code to determine whether to simply try the system routine
          again. The MIT guy did not like this solution because it was not the
          right thing.



          The New Jersey guy said that the Unix solution was right because the
          design philosophy of Unix was simplicity and that the right thing was
          too complex. Besides, programmers could easily insert this extra test
          and loop. The MIT guy pointed out that the implementation was simple
          but the interface to the functionality was complex. The New Jersey guy
          said that the right tradeoff has been selected in Unix-namely,
          implementation simplicity was more important than interface
          simplicity.

















          answered Nov 12 '17 at 20:08









          Brad Schoening






            Two famous people walk into a bar — one from MIT, one from Berkeley, and one from New Jersey.  Huh? I realize that it’s a quote, but can you clarify it? The last paragraph is a bit muddled, too — the Unix solution was right because the right thing was too complex for Unix, and so they didn’t implement it.

            – G-Man
            Nov 12 '17 at 20:26











          • Minimum Viable Product is a related concept. The unix solution to use EINTR was viable and offered simplicity in the highly portable OS codebase. Delegated to user code, handling EINTR is easy (just retry), yet kind of bothersome.

            – Brad Schoening
            Nov 14 '17 at 2:31































