How to get accuracy, F1, precision and recall for a Keras model?

I want to compute the precision, recall and F1-score for my binary KerasClassifier model, but I can't find a solution.



Here is my current code:



# Imports (added here for completeness)
import time
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import TensorBoard
from sklearn.model_selection import train_test_split

# Split dataset into train and test data
X_train, X_test, Y_train, Y_test = train_test_split(normalized_X, Y, test_size=0.3, random_state=seed)

# Build the model
model = Sequential()
model.add(Dense(23, input_dim=45, kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))

# Compile the model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

tensorboard = TensorBoard(log_dir="logs/{}".format(time.time()))
time_callback = TimeHistory()

# Fit the model
history = model.fit(X_train, Y_train, validation_split=0.3, epochs=200, batch_size=5, verbose=1,
                    callbacks=[tensorboard, time_callback])
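
TimeHistory is a custom callback that the code above references but never defines. A minimal sketch of one, assuming its purpose is to record per-epoch training times (only the class name comes from the question; the body is a guess):

import time
from keras.callbacks import Callback

class TimeHistory(Callback):
    # Assumed implementation: record the wall-clock duration of each epoch
    def on_train_begin(self, logs=None):
        self.times = []
    def on_epoch_begin(self, epoch, logs=None):
        self.epoch_start = time.time()
    def on_epoch_end(self, epoch, logs=None):
        self.times.append(time.time() - self.epoch_start)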


I then predict on the new test data and build the confusion matrix like this:



from sklearn.metrics import confusion_matrix

y_pred = model.predict(X_test)
y_pred = (y_pred > 0.5)

cm = confusion_matrix(Y_test, y_pred)
print(cm)


But is there a way to get the accuracy score, the F1-score, the precision and the recall? (And, if it isn't complicated, the cross-validation score too, though that's not necessary for this answer.)
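
For the cross-validation part, I picture something like the following, assuming the model-building code above is wrapped in a create_model() function (a hypothetical name) so it can be handed to the scikit-learn wrapper:

from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score

# create_model would build and compile the Sequential model shown above
estimator = KerasClassifier(build_fn=create_model, epochs=200, batch_size=5, verbose=0)
scores = cross_val_score(estimator, normalized_X, Y, cv=5)
print(scores.mean())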



Thank you for any help!

machine-learning neural-network deep-learning classification keras

asked Feb 6 at 13:29 by ZelelB

3 Answers

Precision, recall and F1 were removed from the Keras core in version 2.0, so you need to calculate them manually. They are global metrics, but Keras computes its metrics batch by batch; as a result, batch-wise values can be more misleading than helpful.



However, if you really need them, you can do it like this:



from keras import backend as K

def recall_m(y_true, y_pred):
    # recall = true positives / all actual positives
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    recall = true_positives / (possible_positives + K.epsilon())
    return recall

def precision_m(y_true, y_pred):
    # precision = true positives / all predicted positives
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    precision = true_positives / (predicted_positives + K.epsilon())
    return precision

def f1_m(y_true, y_pred):
    # harmonic mean of the two metrics above; K.epsilon() avoids division by zero
    precision = precision_m(y_true, y_pred)
    recall = recall_m(y_true, y_pred)
    return 2 * ((precision * recall) / (precision + recall + K.epsilon()))

# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc', f1_m, precision_m, recall_m])

# fit the model
history = model.fit(X_train, Y_train, validation_split=0.3, epochs=10, verbose=0)

# evaluate the model
loss, accuracy, f1_score, precision, recall = model.evaluate(X_test, Y_test, verbose=0)

answered Feb 6 at 13:35 by Tasos

• If they can be misleading, how should one evaluate a Keras model, then?
  – ZelelB, Feb 6 at 13:52

• Since Keras calculates those metrics at the end of each batch, you could get different results from the "real" metrics. An alternative is to split your dataset into training and test sets, use the test part to predict the results, and then, since you know the true labels, calculate precision and recall manually.
  – Tasos, Feb 6 at 14:03
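
A minimal sketch of that manual calculation, assuming the train/test split and trained model from the question, with scikit-learn doing the metric arithmetic:

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Predict on the held-out test set and threshold the sigmoid outputs
y_pred = (model.predict(X_test) > 0.5).astype(int).ravel()

print('Accuracy: ', accuracy_score(Y_test, y_pred))
print('Precision:', precision_score(Y_test, y_pred))
print('Recall:   ', recall_score(Y_test, y_pred))
print('F1-score: ', f1_score(Y_test, y_pred))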

You could use the scikit-learn classification report. To convert your labels into a numerical or binary format, take a look at the scikit-learn label encoder.



import numpy as np
from sklearn.metrics import classification_report

y_pred = model.predict(x_test, batch_size=64, verbose=1)
y_pred_bool = np.argmax(y_pred, axis=1)

print(classification_report(y_test, y_pred_bool))


          which gives you (output copied from the scikit-learn example):



              precision    recall  f1-score   support

     class 0       0.50      1.00      0.67         1
     class 1       0.00      0.00      0.00         1
     class 2       1.00      0.67      0.80         3
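
One caveat: the np.argmax line above fits a multi-class softmax output. For the single-unit sigmoid model in the question it would always return class 0, so thresholding is needed instead; a sketch under that assumption:

from sklearn.metrics import classification_report

y_pred = model.predict(X_test, batch_size=64, verbose=1)
y_pred_bool = (y_pred > 0.5).astype(int).ravel()  # threshold instead of argmax

print(classification_report(Y_test, y_pred_bool))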
answered Feb 6 at 15:05 by matze
• This is what I use, simple and effective.
  – Matthew, Feb 6 at 16:30

Try precision_recall_fscore_support from scikit-learn (https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_recall_fscore_support.html) with Y_test and y_pred as parameters.
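
A minimal sketch of the call, assuming Y_test and the thresholded y_pred from the question:

from sklearn.metrics import precision_recall_fscore_support

# average='binary' reports the metrics for the positive class only
precision, recall, fscore, _ = precision_recall_fscore_support(Y_test, y_pred, average='binary')
print(precision, recall, fscore)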
answered Feb 6 at 13:35 by Viacheslav Komisarenko

• I tried this: model.precision_recall_fscore_support(Y_test, y_pred, average='micro') and get this error on execution: AttributeError: 'Sequential' object has no attribute 'precision_recall_fscore_support'
  – ZelelB, Feb 6 at 13:51

• You don't need to call it on the model: just precision_recall_fscore_support(Y_test, y_pred, average='micro') (without "model."), and make sure you have the correct import: from sklearn.metrics import precision_recall_fscore_support
  – Viacheslav Komisarenko, Feb 6 at 13:59