Sklearn 'Seed' Not Working Properly In a Section of Code
I have written an ensemble using scikit-learn's VotingClassifier.

I set a seed in the cross-validation section, but it does not appear to hold: if I re-run the code block I get different results. (I can only assume each run divides the dataset into folds with different constituents instead of freezing the random state.)

Here is the code:



#Voting Ensemble of Classification
#Create sub-models
num_folds = 10
seed = 7
kfold = KFold(n_splits=num_folds, random_state=seed)
estimators = []
model1 = LogisticRegression()
estimators.append(('LR', model1))
model2 = KNeighborsClassifier()
estimators.append(('KNN', model2))
model3 = GradientBoostingClassifier()
estimators.append(('GBM', model3))
#Create the ensemble
ensemble = VotingClassifier(estimators, voting='soft')
results = cross_val_score(ensemble, X_train, Y_train, cv=kfold)
print(results)


The printed values are the scores from the 10 CV folds. If I run this code block several times I get the following results:



1:



[0.70588235 0.94117647 1.         0.82352941 0.94117647 0.88235294
0.8125 0.875 0.8125 0.9375 ]


2:



[0.76470588 0.94117647 1.         0.82352941 0.94117647 0.88235294
0.8125 0.875 0.8125 0.875 ]


3:



[0.76470588 0.94117647 1.         0.82352941 0.94117647 0.88235294
0.8125 0.875 0.8125 0.875 ]


4:



[0.76470588 0.94117647 1.         0.82352941 1.         0.88235294
0.8125 0.875 0.625 0.875 ]


So it appears my random_state=seed isn't holding.



What is incorrect?



Thanks in advance.
Tags: python, scikit-learn, ensemble
      asked 10 hours ago
Windstorm1981
          1 Answer
The random seeds of the models (LogisticRegression, GradientBoostingClassifier) need to be fixed too, so that their random behavior becomes reproducible. Here is a working example that produces the same result over multiple runs:



import sklearn
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
import numpy as np

#Voting Ensemble of Classification
#Create sub-models
num_folds = 10
seed = 7

# Data
np.random.seed(seed)
feature_1 = np.random.normal(0, 2, 10000)
feature_2 = np.random.normal(5, 6, 10000)
X_train = np.vstack([feature_1, feature_2]).T
Y_train = np.random.randint(0, 2, 10000).T

kfold = KFold(n_splits=num_folds, random_state=seed)
estimators = []
model1 = LogisticRegression(random_state=seed)
estimators.append(('LR', model1))
model2 = KNeighborsClassifier()
estimators.append(('KNN', model2))
model3 = GradientBoostingClassifier(random_state=seed)
estimators.append(('GBM', model3))
#Create the ensemble
ensemble = VotingClassifier(estimators, voting='soft')
results = cross_val_score(ensemble, X_train, Y_train, cv=kfold)
print('sklearn version', sklearn.__version__)
print(results)


          Output:



          sklearn version 0.19.1
          [0.502 0.496 0.483 0.513 0.515 0.508 0.517 0.499 0.515 0.504]
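One detail worth knowing here (a property of scikit-learn's KFold, not something the code above relies on): `random_state` only affects the splits when `shuffle=True`. With the default `shuffle=False`, the folds are contiguous blocks and are identical on every run regardless of any seed, so the remaining run-to-run variation has to come from the estimators themselves. A minimal sketch:

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(20).reshape(10, 2)  # 10 toy samples

# Default shuffle=False: folds are contiguous blocks, identical every run,
# no seed needed or used.
unshuffled = [test.tolist() for _, test in KFold(n_splits=5).split(X)]
print(unshuffled[0])  # [0, 1]

# With shuffle=True and a fixed random_state, the folds are shuffled but
# still reproducible from run to run.
shuffled = [test.tolist() for _, test in
            KFold(n_splits=5, shuffle=True, random_state=7).split(X)]
print(shuffled[0])
```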
• Thanks for your quick reply. Not sure I follow completely. random_state=seed fixes my cross validation. I note your line np.random.seed(seed). Intuitively it suggests to me it is ensuring repeatable generation of toy data. I already have a data set. How does that apply to 'fixing seed of models'? – Windstorm1981, 8 hours ago

• @Windstorm1981 My bad. Updated. – Esmailian, 8 hours ago

• ha! Clear now. So fixing the cv fixes the data splits. Fixing the models fixes how the models handle the (fixed) data splits? – Windstorm1981, 8 hours ago

• @Windstorm1981 Exactly! – Esmailian, 8 hours ago
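The point settled in the exchange above can be verified with a small sketch (my own illustration, on synthetic data rather than the asker's dataset): once the folds are deterministic and the estimator's random_state is fixed, repeated cross_val_score calls return identical fold scores.

```python
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.ensemble import GradientBoostingClassifier

# Fixed synthetic data (stands in for X_train / Y_train)
rng = np.random.RandomState(7)
X = rng.normal(size=(200, 3))
y = rng.randint(0, 2, 200)

kfold = KFold(n_splits=5)  # shuffle=False: deterministic contiguous folds
model = GradientBoostingClassifier(random_state=7)  # seeded model

# Two independent runs over the same splits and seeded model
run1 = cross_val_score(model, X, y, cv=kfold)
run2 = cross_val_score(model, X, y, cv=kfold)
print(np.allclose(run1, run2))  # True: all sources of randomness are fixed
```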
answered 8 hours ago (edited 8 hours ago) by Esmailian