Compare an array with a file and form groups from elements of an array


























I have a text file with letters (tab delimited) and a numpy array (obj) with a few letters (single row). The rows of the text file have different numbers of columns. Some rows of the text file may contain multiple copies of the same letter (I would like to consider only a single copy of a letter in each row). Also, each letter of the numpy array obj is present in one or more rows of the text file.



Letters in the same row of the text file are assumed to be similar to each other. Imagine a similarity metric (between two letters) which can take the values 1 (related) or 0 (not related). When any pair of letters appears in the same row, they are assumed to have similarity metric value = 1. In the example given below, the letters j and n are in the same row (the second row); hence j and n have similarity metric value = 1.



Here is an example of the text file (you can download the file from here):



b   q   a   i   m   l   r
j n o r o
e i k u i s


In the example, the letter o appears twice in the second row, and the letter i appears twice in the third row. I would like to consider only a single copy of each letter in the rows of the text file.



This is an example of obj:



obj = np.asarray(['a', 'e', 'i', 'o', 'u'])


I want to compare obj with the rows of the text file and form clusters from the elements of obj.



This is how I want to do it. Corresponding to each row of the text file, I want a list that denotes a cluster (in the above example we will have three clusters, since the text file has three rows). For every element of obj, I want to find the rows of the text file in which that element is present. Then I would like to assign the index of that element of obj to the cluster corresponding to the row of maximum length (row lengths are determined after reducing each row to single copies of its letters).



import pandas as pd
import numpy as np

data = pd.read_csv('file.txt', sep=r'\t+', header=None, engine='python').values[:, :].astype('<U1000')
obj = np.asarray(['a', 'e', 'i', 'o', 'u'])

for i in range(data.shape[0]):
    globals()['data_row' + str(i).zfill(3)] = []
    globals()['clust' + str(i).zfill(3)] = []
    for j in range(len(obj)):
        if obj[j] in set(data[i, :]):
            globals()['data_row' + str(i).zfill(3)] += [j]

for i in range(len(obj)):
    globals()['obj_lst' + str(i).zfill(3)] = [0] * data.shape[0]

    for j in range(data.shape[0]):
        if i in globals()['data_row' + str(j).zfill(3)]:
            globals()['obj_lst' + str(i).zfill(3)][j] = len(globals()['data_row' + str(j).zfill(3)])

    indx_max = globals()['obj_lst' + str(i).zfill(3)].index(max(globals()['obj_lst' + str(i).zfill(3)]))
    globals()['clust' + str(indx_max).zfill(3)] += [i]

for i in range(data.shape[0]):
    print(globals()['clust' + str(i).zfill(3)])

>> [0]
>> [3]
>> [1, 2, 4]


The code gives me the right answer. However, in my actual work the text file has tens of thousands of rows and the numpy array has hundreds of thousands of elements, and the code above is not very fast. So I want to know whether there is a better (faster) way to implement the above functionality in Python.
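For scale: the nested loops above repeatedly rebuild a set per row and scan every row for every element. One possible direction (a hedged sketch with inlined example data and hypothetical names, not a drop-in replacement for the file-reading code) is to deduplicate each row once and build an inverted index from letter to row indices, so each element of obj only touches the rows that actually contain it:

```python
from collections import defaultdict

# Inlined stand-in for the tab-delimited file (hypothetical test data).
rows = [
    ['b', 'q', 'a', 'i', 'm', 'l', 'r'],
    ['j', 'n', 'o', 'r', 'o'],
    ['e', 'i', 'k', 'u', 'i', 's'],
]
obj = ['a', 'e', 'i', 'o', 'u']

row_sets = [set(r) for r in rows]      # single copy of each letter per row

letter_to_rows = defaultdict(list)     # inverted index: letter -> row indices
for row_index, letters in enumerate(row_sets):
    for letter in letters:
        letter_to_rows[letter].append(row_index)

# Score each row by how many obj elements it contains (matches the
# len(data_row...) values used in the original code).
counts = [sum(1 for letter in obj if letter in s) for s in row_sets]

clusters = [[] for _ in rows]
for obj_index, letter in enumerate(obj):
    candidate_rows = letter_to_rows.get(letter, [])
    if candidate_rows:
        # max() returns the first row with the highest score, mirroring
        # the original list.index(max(...)) tie-breaking.
        best_row = max(candidate_rows, key=lambda r: counts[r])
        clusters[best_row].append(obj_index)

print(clusters)  # [[0], [3], [1, 2, 4]]
```

On the three-row example this reproduces the clusters printed above; whether it helps at the stated scale depends on how many rows each letter occurs in.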










  • What do you mean by this statement: "Letters in the same row of the text file are assumed to be similar to each other."
    – l0b0
    yesterday

  • In general, try to explain what you're trying to do without reference to the actual variables in the code.
    – l0b0
    yesterday

  • @l0b0: By the mentioned statement, I meant that the letters in the same row are related to each other. Imagine a similarity metric (between two letters) which can take the values 1 (related) or 0 (not related). When any pair of letters appears in the same row, they are assumed to have similarity metric value = 1. In the given example, 'j' and 'n' are in the same row, i.e. the second row; hence 'j' and 'n' have similarity metric value = 1.
    – Siddharth Satpathy
    13 hours ago

  • You should update the question to include this extra information.
    – l0b0
    10 hours ago















python array numpy pandas






edited 10 hours ago
asked 2 days ago
Siddharth Satpathy
2 Answers


















I can't understand your algorithm as written, but some very general advice applies:




  • Use format() or template strings to format strings.

  • Rather than creating dynamic dictionary keys, I would create variables data_row, clust (but see naming review below), etc. and assign to indexes in these lists. That way you get rid of the global variables (which are bad for reasons discussed at great length elsewhere), you won't need to format strings all over the place, and you won't need to do the str() conversions. You should also be able to get rid of the array initialization this way, something which is a code smell in garbage collected languages.

  • Can there really be multiple tab characters between columns? That would be weird. If not, you might get less surprising results using a single tab as the column separator.

  • Naming could use some work. For example:


    • In general, don't use abbreviations, especially not single letter ones or ones which shorten by only one or two letters. For example, use index (or [something]_index if there are multiple indexes in the current context) rather than indx, idx, i or j.


    • data should be something like character_table.

    • I don't know what obj is, but obj gives me no information at all. Should it be vowels?
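As a hedged sketch of the second bullet (hypothetical names such as character_table, inlined example data rather than the original file), the generated-variable-name pattern can become a single list indexed by row, with string formatting used only where a formatted name is actually printed:

```python
# Illustrative sketch: a list of lists replaces the dynamically named
# globals, and an f-string formats the zero-padded name only for output.

character_table = [
    ['b', 'q', 'a', 'i', 'm', 'l', 'r'],
    ['j', 'n', 'o', 'r', 'o'],
    ['e', 'i', 'k', 'u', 'i', 's'],
]
vowels = ['a', 'e', 'i', 'o', 'u']

# One entry per row, addressed by integer index instead of a generated name.
row_matches = [
    [index for index, letter in enumerate(vowels) if letter in set(row)]
    for row in character_table
]

for row_index, matches in enumerate(row_matches):
    print(f"data_row{row_index:03d} = {matches}")
```

This prints `data_row000 = [0, 2]`, `data_row001 = [3]`, `data_row002 = [1, 2, 4]`, the same per-row index lists the reviewed code builds in globals().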








answered yesterday
– l0b0
  • Thanks, your suggestions are helpful. :)
    – Siddharth Satpathy
    13 hours ago

































Do one thing at a time



Don't put multiple statements on one line, e.g.



if obj[j] in set(data[i, :]): globals()['data_row' + str(i).zfill(3)] += [j]


Global population?



You're doing a curious thing. You're populating the global namespace with some variable names that have integral indices baked into them. Since I can't find a reason for this anywhere in your description (and even if you did have a reason, it probably wouldn't be a good one), really try to avoid doing this. In other words, rather than writing to



globals()['data_row001']


just write to a list called data_row (and obj_lst, etc.). You can still print it in whatever format you want later.
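A hedged sketch of that rewrite (inlined example data standing in for the file, hypothetical names), keeping the original algorithm but storing data_row, obj_lst and clust as plain lists:

```python
# Each file row reduced to a set of letters (illustrative inlined data).
rows = [
    {'b', 'q', 'a', 'i', 'm', 'l', 'r'},
    {'j', 'n', 'o', 'r'},
    {'e', 'i', 'k', 'u', 's'},
]
obj = ['a', 'e', 'i', 'o', 'u']

# data_row[j] holds the obj indices present in row j; clust[j] is cluster j.
data_row = [[j for j, letter in enumerate(obj) if letter in row] for row in rows]
clust = [[] for _ in rows]

for i in range(len(obj)):
    # Score each row by how many obj elements it contains, as in the original.
    obj_lst = [len(data_row[j]) if i in data_row[j] else 0 for j in range(len(rows))]
    clust[obj_lst.index(max(obj_lst))].append(i)

print(clust)  # [[0], [3], [1, 2, 4]]
```

Same output as the reviewed code, with no generated names and no writes to globals().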



Use fluent syntax



For long statements with several . calls, such as this:



data = pd.read_csv('file.txt', sep=r'\t+', header=None, engine='python').values[:,:].astype('<U1000')


try rewriting it on multiple lines for legibility:



data = (pd
    .read_csv('file.txt', sep=r'\t+', header=None, engine='python')
    .values[:, :]
    .astype('<U1000')
)





share|improve this answer





















    Your Answer





    StackExchange.ifUsing("editor", function () {
    return StackExchange.using("mathjaxEditing", function () {
    StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
    StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["\$", "\$"]]);
    });
    });
    }, "mathjax-editing");

    StackExchange.ifUsing("editor", function () {
    StackExchange.using("externalEditor", function () {
    StackExchange.using("snippets", function () {
    StackExchange.snippets.init();
    });
    });
    }, "code-snippets");

    StackExchange.ready(function() {
    var channelOptions = {
    tags: "".split(" "),
    id: "196"
    };
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function() {
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled) {
    StackExchange.using("snippets", function() {
    createEditor();
    });
    }
    else {
    createEditor();
    }
    });

    function createEditor() {
    StackExchange.prepareEditor({
    heartbeatType: 'answer',
    autoActivateHeartbeat: false,
    convertImagesToLinks: false,
    noModals: true,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: null,
    bindNavPrevention: true,
    postfix: "",
    imageUploader: {
    brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
    contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
    allowUrls: true
    },
    onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    });


    }
    });






    Siddharth Satpathy is a new contributor. Be nice, and check out our Code of Conduct.










    draft saved

    draft discarded


















    StackExchange.ready(
    function () {
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fcodereview.stackexchange.com%2fquestions%2f210784%2fcompare-an-array-with-a-file-and-form-groups-from-elements-of-an-array%23new-answer', 'question_page');
    }
    );

    Post as a guest















    Required, but never shown

























    2 Answers
    2






    active

    oldest

    votes








    2 Answers
    2






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    2














    I can't understand your algorithm as written, but some very general advice applies:




    • Use format() or template strings to format strings.

    • Rather than creating dynamic dictionary keys, I would create variables data_row, clust (but see naming review below), etc. and assign to indexes in these lists. That way you get rid of the global variables (which are bad for reasons discussed at great length elsewhere), you won't need to format strings all over the place, and you won't need to do the str() conversions. You should also be able to get rid of the array initialization this way, something which is a code smell in garbage collected languages.

    • Can there really be multiple tab characters between columns? That would be weird. If not, you might get less surprising results using a single tab as the column separator.

    • Naming could use some work. For example:


      • In general, don't use abbreviations, especially not single letter ones or ones which shorten by only one or two letters. For example, use index (or [something]_index if there are multiple indexes in the current context) rather than indx, idx, i or j.


      • data should be something like character_table.

      • I don't know what obj is, but obj gives me no information at all. Should it be vowels?








    share|improve this answer























    • Thanks, your suggestions are helpful. :)
      – Siddharth Satpathy
      13 hours ago
















    2














    I can't understand your algorithm as written, but some very general advice applies:




    • Use format() or template strings to format strings.

    • Rather than creating dynamic dictionary keys, I would create variables data_row, clust (but see naming review below), etc. and assign to indexes in these lists. That way you get rid of the global variables (which are bad for reasons discussed at great length elsewhere), you won't need to format strings all over the place, and you won't need to do the str() conversions. You should also be able to get rid of the array initialization this way, something which is a code smell in garbage collected languages.

    • Can there really be multiple tab characters between columns? That would be weird. If not, you might get less surprising results using a single tab as the column separator.

    • Naming could use some work. For example:


      • In general, don't use abbreviations, especially not single letter ones or ones which shorten by only one or two letters. For example, use index (or [something]_index if there are multiple indexes in the current context) rather than indx, idx, i or j.


      • data should be something like character_table.

      • I don't know what obj is, but obj gives me no information at all. Should it be vowels?








    share|improve this answer























    • Thanks, your suggestions are helpful. :)
      – Siddharth Satpathy
      13 hours ago














    2












    2








    2






    I can't understand your algorithm as written, but some very general advice applies:




    • Use format() or template strings to format strings.

    • Rather than creating dynamic dictionary keys, I would create variables data_row, clust (but see naming review below), etc. and assign to indexes in these lists. That way you get rid of the global variables (which are bad for reasons discussed at great length elsewhere), you won't need to format strings all over the place, and you won't need to do the str() conversions. You should also be able to get rid of the array initialization this way, something which is a code smell in garbage collected languages.

    • Can there really be multiple tab characters between columns? That would be weird. If not, you might get less surprising results using a single tab as the column separator.

    • Naming could use some work. For example:


      • In general, don't use abbreviations, especially not single letter ones or ones which shorten by only one or two letters. For example, use index (or [something]_index if there are multiple indexes in the current context) rather than indx, idx, i or j.


      • data should be something like character_table.

      • I don't know what obj is, but obj gives me no information at all. Should it be vowels?








    share|improve this answer














    I can't understand your algorithm as written, but some very general advice applies:




    • Use format() or template strings to format strings.

    • Rather than creating dynamic dictionary keys, I would create variables data_row, clust (but see naming review below), etc. and assign to indexes in these lists. That way you get rid of the global variables (which are bad for reasons discussed at great length elsewhere), you won't need to format strings all over the place, and you won't need to do the str() conversions. You should also be able to get rid of the array initialization this way, something which is a code smell in garbage collected languages.

    • Can there really be multiple tab characters between columns? That would be weird. If not, you might get less surprising results using a single tab as the column separator.

    • Naming could use some work. For example:


      • In general, don't use abbreviations, especially not single letter ones or ones which shorten by only one or two letters. For example, use index (or [something]_index if there are multiple indexes in the current context) rather than indx, idx, i or j.


      • data should be something like character_table.

      • I don't know what obj is, but obj gives me no information at all. Should it be vowels?









    share|improve this answer














    share|improve this answer



    share|improve this answer








    edited yesterday

























    answered yesterday









    l0b0

    4,227923




    4,227923












    • Thanks, your suggestions are helpful. :)
      – Siddharth Satpathy
      13 hours ago


















    • Thanks, your suggestions are helpful. :)
      – Siddharth Satpathy
      13 hours ago
















    Thanks, your suggestions are helpful. :)
    – Siddharth Satpathy
    13 hours ago




    Thanks, your suggestions are helpful. :)
    – Siddharth Satpathy
    13 hours ago













    0














    Do one thing at a time



    Don't put multiple statements on one line, i.e.



    if obj[j] in set(data[i, :]): globals()['data_row' + str(i).zfill(3)] += [j]


    Global population?



    You're doing a curious thing. You're populating the global namespace with some variable names that have integral indices baked into them. Since I can't find a reason for this anywhere in your description (and even if you did have a reason, it probably wouldn't be a good one), really try to avoid doing this. In other words, rather than writing to



    globals()['data_row001']


    just write to a list called data_row (and obj_lst, etc.). You can still print it in whatever format you want later.
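For instance, a small sketch of the difference (the row count and the i, j values here are made up for illustration):

```python
# Instead of:
#   globals()['data_row' + str(i).zfill(3)] += [j]
# keep one list of lists and index into it directly:
row_count = 3  # hypothetical number of rows in the file
data_row = [[] for _ in range(row_count)]

i, j = 1, 3  # hypothetical row index / obj index
data_row[i].append(j)
print(data_row)  # [[], [3], []]
```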



    Use fluent syntax



    For long statements with several . calls, such as this:



    data = pd.read_csv('file.txt', sep=r'\t+', header=None, engine='python').values[:,:].astype('<U1000')


    try rewriting it on multiple lines for legibility:



    data = (pd
        .read_csv('file.txt', sep=r'\t+', header=None, engine='python')
        .values[:, :]
        .astype('<U1000')
    )





        answered 10 hours ago

        Reinderien
        3,758821



