Slurm-like alternative for localhost












I have resources (e.g. a GPU) that only one job can use at a time. When performing experiments, I would normally need to know, at the time of creating a batch file, which experiments I want to execute in the future. But I am a person who likes to change their mind, and I hate killing running jobs.



Is there something like Slurm that runs on only a single machine, to which I can submit jobs into a queue and remove them if necessary?



I am looking for applications that either work on GPU load (in contrast to batch/at, which work on CPU load) or execute the jobs sequentially. This means that only one job runs at a time on one resource (the GPU), and the next one starts when the previous job has finished. I also want to be able to manage the queue, in order to give jobs a higher priority or delete them.










  • batch? It's very simple, and nothing like SLURM, but it'll be installed already.

    – Kusalananda
    Feb 1 at 14:22











  • With atd (batch etc.) I can only set a load threshold for the CPU, but not for the GPU. Furthermore, I found no solution for executing a queue sequentially (job 1 finishes, job 2 starts, ...)

    – Martin
    Feb 4 at 12:58











  • For sequential jobs, submit them as one single job. No, you can probably not get batch to care about the GPU, only the general system load.

    – Kusalananda
    Feb 4 at 13:00











  • But I don't want to add them as a single job. That provides no advantage over a batch file.

    – Martin
    Feb 4 at 13:02











  • Hmm... If you want to run jobs sequentially, then why bother with SLURM?

    – Kusalananda
    Feb 4 at 13:05


















linux scheduling application






asked Feb 1 at 14:04
edited Feb 5 at 9:49

Martin
185













2 Answers






Would it be acceptable to have the jobs run through a simple queue manager of your own?



#!/bin/bash

# Acquire an exclusive lock on the GPU: mkdir is atomic, so only one
# job at a time can create the lock directory; the others wait.
while ! mkdir /tmp/my_gpu_lockdir 2>/dev/null; do
    sleep $(( (RANDOM % 30) + 1 ))    # retry after a random back-off
done

# Release the lock when the job exits, whether it succeeds or fails.
trap 'rmdir /tmp/my_gpu_lockdir' EXIT

.... your actual task here ...
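If the wrapper above is saved as a script (say, run_with_gpu_lock.sh, a hypothetical name), jobs could then be queued with the standard at tools, so the usual atq/atrm commands manage the queue. A sketch, assuming atd is installed and running:

```shell
# Hypothetical usage: queue two experiments; each one blocks on the
# GPU lock inside run_with_gpu_lock.sh before it starts computing.
echo "$HOME/run_with_gpu_lock.sh experiment1" | batch
echo "$HOME/run_with_gpu_lock.sh experiment2" | batch

atq       # list the pending jobs and their job numbers
atrm 2    # drop job number 2 from the queue before it runs
```

Note that batch only delays jobs until system load drops, so the lock directory, not batch itself, is what serializes access to the GPU.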





answered Feb 4 at 13:26

tripleee
5,223
  • I would prefer that the queuing system takes care of that. Additionally, I cannot see whether such a job is still pending, I can't change the priority, and so on. But a good quick shot.

    – Martin
    Feb 5 at 9:53











  • You could perhaps wrap this with batch to have the job resubmit itself if the lock folder exists, and then use the regular atq / atrm for managing the queue.

    – tripleee
    Feb 5 at 9:56



















I found a solution that perfectly fits my needs. My issue is that I have only one GPU, but I want a queue that I can add jobs to, see their status in, and, if needed, delete jobs from again.



After some research on Google, I found task-spooler (tsp). With this command-line tool it is fairly easy to add jobs to the queue and follow their results. So far I use only one queue, but it also scales to more.
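For reference, the day-to-day tsp commands look roughly like this (a sketch; the training script name is made up for illustration):

```shell
tsp -S 1                        # one slot: at most one job runs at a time
tsp python train.py --lr 1e-3   # enqueue a job; tsp prints its job id
tsp                             # list the queue with job ids and states
tsp -u 3                        # give job 3 higher priority (move it to the front)
tsp -r 4                        # remove queued job 4 without touching the rest
tsp -c 2                        # show the captured output of finished job 2
```

This covers the requirements from the question: sequential execution on one resource, plus reprioritizing and deleting queued jobs.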






answered 23 hours ago

Martin
185