What are the purposes of autoencoders?

Autoencoders are neural networks that learn a compressed representation of the input in order to later reconstruct it, so they can be used for dimensionality reduction. They are composed of an encoder and a decoder (which can be separate neural networks). Dimensionality reduction can be useful to mitigate the issues related to the curse of dimensionality, where data becomes sparse and it is more difficult to obtain "statistical significance". So autoencoders (and algorithms like PCA) can be used to deal with the curse of dimensionality.



Why do we care about dimensionality reduction specifically using autoencoders? Why can't we simply use PCA, if the purpose is dimensionality reduction?



Why do we need to decompress the latent representation of the input if we just want to perform dimensionality reduction? In other words, why do we need the decoder part of an autoencoder at all, and what are its use cases? In general, why do we need to compress the input only to decompress it later? Wouldn't it be better to just use the original input (to start with)?
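
For concreteness, here is a minimal sketch of the encoder/decoder structure described above. The framework (Keras), layer sizes, and random placeholder data are illustrative assumptions, not something specified in the question.

```python
# Minimal autoencoder sketch; sizes and data are illustrative assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

input_dim, latent_dim = 784, 32   # e.g. flattened 28x28 images compressed to 32 numbers

# Encoder: maps the input to a low-dimensional latent code.
encoder = keras.Sequential([
    layers.Input(shape=(input_dim,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(latent_dim, activation="relu"),
])

# Decoder: reconstructs the input from the latent code.
decoder = keras.Sequential([
    layers.Input(shape=(latent_dim,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(input_dim, activation="sigmoid"),
])

autoencoder = keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")

# The training target is the input itself: compress, then reconstruct.
x = np.random.rand(1000, input_dim).astype("float32")   # placeholder data
autoencoder.fit(x, x, epochs=5, batch_size=64, verbose=0)

# For dimensionality reduction, only the encoder's output is kept.
codes = encoder.predict(x)   # shape (1000, 32)
```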
Tags: machine-learning, autoencoders, dimensionality-reduction, curse-of-dimensionality






asked 13 hours ago by nbro

• See also the following question stats.stackexchange.com/q/82416/82135 on CrossValidated SE. – nbro, 12 hours ago
3 Answers







It is important to think about what sort of patterns in the data are being represented.

Suppose that you have a dataset of greyscale images, such that every image is a uniform intensity. As a human brain you'd realise that every element in this dataset can be described in terms of a single numeric parameter, which is that intensity value. This is something that PCA would work fine for, because the dimensions (we can think of each pixel as a different dimension) are perfectly linearly correlated with one another.

Suppose instead that you have a dataset of black and white 128x128px bitmap images of centred circles. As a human brain you'd quickly realise that every element in this dataset can be fully described by a single numeric parameter, which is the radius of the circle. That is a very impressive level of reduction from 16384 binary dimensions, and perhaps more importantly it's a semantically meaningful property of the data. However, PCA probably won't be able to find that pattern.

Your question was "Why can't we simply use PCA, if the purpose is dimensionality reduction?" The simple answer is that PCA is the simplest tool for dimensionality reduction, but it can miss a lot of relationships that more powerful techniques such as autoencoders might find.

– Josiah (new contributor), answered 4 hours ago
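
To make the circles example concrete, here is a small sketch (an editor's illustration, not part of the answer); the image size, radius range, and component count are arbitrary assumptions. It shows that PCA needs several linear components for a dataset with a single intrinsic degree of freedom.

```python
# Sketch of the centred-circles example; sizes and radii are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

def circle_image(radius, size=64):
    """Binary image of a filled, centred circle with the given radius."""
    yy, xx = np.mgrid[:size, :size]
    centre = (size - 1) / 2.0
    return ((xx - centre) ** 2 + (yy - centre) ** 2 <= radius ** 2).astype(np.float32)

rng = np.random.default_rng(0)
radii = rng.uniform(5, 30, size=500)                  # one true degree of freedom
X = np.stack([circle_image(r).ravel() for r in radii])

pca = PCA(n_components=10).fit(X)
print(pca.explained_variance_ratio_.round(3))
# More than one component is needed, and no single linear direction reproduces
# the sharp circle boundary, even though the data is intrinsically 1-D.
# A non-linear autoencoder with a 1-unit bottleneck could, in principle, capture it.
```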

A use case of autoencoders (in particular, of the decoder or generative model of the autoencoder) is to denoise the input. This type of autoencoder, called a denoising autoencoder, takes a partially corrupted input and attempts to reconstruct the corresponding uncorrupted input. There are several applications of this model. For example, if you had a corrupted image, you could potentially recover the uncorrupted one using a denoising autoencoder.

Autoencoders and PCA are related:

"an autoencoder with a single fully-connected hidden layer, a linear activation function and a squared error cost function trains weights that span the same subspace as the one spanned by the principal component loading vectors, but they are not identical to the loading vectors."

For more info, have a look at the paper From Principal Subspaces to Principal Components with Linear Autoencoders (2018), by Elad Plaut. See also this answer, which also explains the relation between PCA and autoencoders.

– nbro, answered 13 hours ago, edited 12 hours ago
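
A minimal sketch of the denoising setup described above, assuming a toy dense architecture, Gaussian corruption, and random placeholder data (none of which come from the answer):

```python
# Minimal denoising-autoencoder sketch; all hyperparameters are illustrative assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

x_clean = np.random.rand(1000, 784).astype("float32")           # placeholder "clean" data
noise = 0.3 * np.random.randn(1000, 784).astype("float32")
x_noisy = np.clip(x_clean + noise, 0.0, 1.0)                     # partially corrupted input

model = keras.Sequential([
    layers.Input(shape=(784,)),
    layers.Dense(64, activation="relu"),      # encoder / bottleneck
    layers.Dense(784, activation="sigmoid"),  # decoder
])
model.compile(optimizer="adam", loss="mse")

# Key difference from a plain autoencoder: the input is corrupted,
# but the reconstruction target is the clean version.
model.fit(x_noisy, x_clean, epochs=5, batch_size=64, verbose=0)

denoised = model.predict(x_noisy)
```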

PCA is a linear method that creates a transformation capable of changing the projections of the vectors (i.e., a change of axes).

Since PCA looks for the directions of maximum variance, it usually has good discriminative power, BUT it is not guaranteed that the direction of most variance is the direction of most discriminability.

LDA is a linear method that creates a transformation capable of finding the direction that is most relevant for deciding whether a vector belongs to class A or class B.

PCA and LDA have non-linear kernel versions that might overcome their linear limitations.

Autoencoders can perform dimensionality reduction with other kinds of loss functions, can be non-linear, and might perform better than PCA and LDA in many cases.

There is probably no single best machine learning algorithm for every task; sometimes deep learning and neural nets are overkill for simple problems, and PCA and LDA might be tried before other, more complex, dimensionality reduction methods.

– Pedro Henrique Monforte (new contributor), answered 7 hours ago
• What does LDA have to do with the question? – nbro, 7 hours ago

• LDA can be used for dimensionality reduction. The original algorithm derives only one projection, but you can use it to get lower-ranking discriminative directions for more accurate modelling. – Pedro Henrique Monforte, 7 hours ago
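
A small sketch contrasting the methods mentioned in this answer, using scikit-learn; the synthetic labelled dataset and all parameters are arbitrary assumptions.

```python
# PCA (unsupervised), LDA (supervised) and kernel PCA (non-linear) used for
# dimensionality reduction; the synthetic dataset is an illustrative assumption.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA, KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           n_classes=3, random_state=0)

# PCA keeps the directions of maximum variance, ignoring the labels.
X_pca = PCA(n_components=2).fit_transform(X)

# LDA keeps the directions that best separate the classes (at most n_classes - 1).
X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)

# Kernel PCA is the non-linear (kernelised) variant of PCA.
X_kpca = KernelPCA(n_components=2, kernel="rbf").fit_transform(X)

print(X_pca.shape, X_lda.shape, X_kpca.shape)   # (500, 2) each
```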










