Getting error while removing duplicates from CSV using pandas























My CSV file is at this link:



https://drive.google.com/file/d/1Pac9-YLAtc7iaN0qEuiBOpYYf9ZPDDaL/view?usp=sharing



I want to remove duplicates from the CSV by checking the length of genres for each artist ID. If an artist has 2 records in the CSV (e.g., Ed Sheeran's ID 6eUKZXaKkcviH0Ku9w2n3V has 2 records; one record has 1 genre while row #5 has 5 genres), I want to keep the row with the largest genres length.



I'm using this script for now:



import pandas
import ast


df = pandas.read_csv('39K.csv', encoding='latin-1')

df['lst_len'] = df['genres'].map(lambda x: len(ast.literal_eval(str(x))))
print(df['lst_len'][0])

df = df.sort_values('lst_len', ascending=False)

# Drop duplicates, preserving first (longest) list by ID
df = df.drop_duplicates(subset='ID')


# Remove extra column that we introduced, write to file
df = df.drop('lst_len', axis=1)
df.to_csv('clean_39K.csv', index=False)


This script works on a 500-record file (maybe I'm under the illusion that the number of records matters), but when I run it on my largest file, 39K.csv, I get this error:



Traceback (most recent call last):
  File "...", line 5, in <module>
    df['lst_len'] = df['genres'].map(lambda x: len(list(x)))
TypeError: 'float' object is not iterable


Please point out where I am going wrong?
Thanks










      python pandas csv






      edited Nov 21 at 10:24

























      asked Nov 21 at 9:18









      Bindass Clashers

      203




          1 Answer

















          accepted










          You have bad data at (at least) line 16553 of your input CSV file:



          52lUXCmpmAIVsgNd1uADOy,Moosh & Twist,NULL


          pandas interprets NULL as NaN when it reads the file; NaN has type float and is not iterable. There are a few other NULL entries in there too, so you could either manually remove or fix them (preferred), or handle this case in your code.
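          As a minimal sketch (hypothetical values, not taken from the file), this is the failure mode in isolation:

```python
# NaN is what pandas substitutes for NULL/empty cells by default.
x = float('nan')

print(type(x).__name__)  # float

try:
    list(x)  # the same operation the traceback complains about
except TypeError as e:
    print(e)  # 'float' object is not iterable
```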



          For example, if you actually want to treat NULL as an empty list, you can preprocess the data like this (just after reading the CSV):



          df.loc[df['genres'].isnull(), 'genres'] = df.loc[df['genres'].isnull(), 'genres'].apply(lambda x: '[]')


          Or more elegantly, switch to reading the csv using na_filter=False:



          df = pandas.read_csv('39K.csv', encoding='latin-1', na_filter=False)


          which will prevent pandas from replacing these values with nan in the first place.
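          A quick way to see the difference (using a hypothetical two-line CSV, not the question's file):

```python
import io
import pandas

csv_text = "ID,genres\na1,NULL\n"

default = pandas.read_csv(io.StringIO(csv_text))                  # NULL becomes NaN
raw = pandas.read_csv(io.StringIO(csv_text), na_filter=False)     # NULL stays a string

print(default['genres'].iloc[0])   # nan  (a float)
print(raw['genres'].iloc[0])       # NULL (still a string)
```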



          Finally, the code doesn't quite do what we want, because it counts the number of characters in the string representation of the list. The solution is to preprocess the NULL values into strings representing empty lists, then use ast.literal_eval to turn all the strings back into lists:



          import pandas
          import ast

          df = pandas.read_csv('39K.csv', encoding='latin-1', na_filter=False)
          df.replace(to_replace="NULL", value="[]", inplace=True)

          # Sanity check: every genres entry should now parse as a list
          for item in df['genres']:
              print(item)
              print(ast.literal_eval(item))

          df['lst_len'] = df['genres'].map(lambda x: len(ast.literal_eval(x)))





          • Or maybe it is worth pre-processing with DataFrame.fillna("0") or with an empty value across the dataframe.
            – pygo
            Nov 21 at 9:41












          • @pygo I thought that, but I'm not sure that will work, because fillna doesn't accept a list as its argument, and we explicitly want an empty list because we will later be calculating its length. Using fillna("0") definitely doesn't work (tested) without further processing.
            – Rob Bricheno
            Nov 21 at 9:44








          • Hmm, how about df = df.fillna('') which will fill NA's (e.g. NaN's) with '', i.e. empty, or alternatively pandas.read_csv(path, na_filter=False), which by default treats empty fields as empty strings.
            – pygo
            Nov 21 at 9:50










          • thanks. Let me check it
            – Bindass Clashers
            Nov 21 at 9:53










          • @pygo using fillna('') still didn't work; those values still ended up being NaN. But your idea about na_filter=False worked beautifully, thanks, I've edited it into the answer.
            – Rob Bricheno
            Nov 21 at 10:07











          edited Nov 21 at 10:30

























          answered Nov 21 at 9:30









          Rob Bricheno

          2,028115



