getting error while removing duplicates from csv using pandas
My csv file is at this link:
https://drive.google.com/file/d/1Pac9-YLAtc7iaN0qEuiBOpYYf9ZPDDaL/view?usp=sharing
I want to remove duplicates from the csv by comparing the length of the genres list for each artist ID. If an artist has two records in the csv (e.g. Ed Sheeran's ID 6eUKZXaKkcviH0Ku9w2n3V appears twice: one row has 1 genre while row #5 has 5 genres), I want to keep the row with the longest genres list.
I'm using this script for now:
import pandas
import ast
df = pandas.read_csv('39K.csv', encoding='latin-1')
df['lst_len'] = df['genres'].map(lambda x: len(ast.literal_eval(str(x))))
print(df['lst_len'][0])
df = df.sort_values('lst_len', ascending=False)
# Drop duplicates, preserving first (longest) list by ID
df = df.drop_duplicates(subset='ID')
# Remove extra column that we introduced, write to file
df = df.drop('lst_len', axis=1)
df.to_csv('clean_39K.csv', index=False)
This script works on a 500-record sample (maybe I'm under the illusion that the number of records matters), but when I run it on my largest file, 39K.csv, I get this error:
Traceback (most recent call last):
  ******* error in line 5, in <module>....
    df['lst_len'] = df['genres'].map(lambda x: len(list(x)))
TypeError: 'float' object is not iterable
Please point out where I am going wrong.
Thanks
python pandas csv
edited Nov 21 at 10:24
asked Nov 21 at 9:18
Bindass Clashers
1 Answer
You have bad data at (at least) line 16553 of your input csv file:
52lUXCmpmAIVsgNd1uADOy,Moosh & Twist,NULL
pandas interprets NULL as nan when it reads the file; nan is of type float and is not iterable. There are a few other NULL entries in there too, so you could either manually remove or fix them (preferred), or handle this case in your code.
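To see why the lambda blows up, note that a missing cell becomes the float nan once pandas has read the file, and a float cannot be turned into a list. A minimal standalone sketch of the failure (no input file needed):

```python
x = float("nan")  # what pandas stores for a missing 'genres' cell

try:
    len(list(x))  # the same expression as the question's lambda
except TypeError as err:
    print(err)  # prints: 'float' object is not iterable
```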
For example, if you actually want to pretend that NULL should be interpreted as an empty list, you can preprocess the data like this (just after reading the csv):
df.loc[df['genres'].isnull(), 'genres'] = df.loc[df['genres'].isnull(), 'genres'].apply(lambda x: [])
Or more elegantly, switch to reading the csv using na_filter=False:
df = pandas.read_csv('39K.csv', encoding='latin-1', na_filter=False)
which will prevent pandas from replacing these values with nan in the first place.
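A quick sketch of the difference, using a tiny inline CSV instead of the real file (the column names here are just illustrative):

```python
import io

import pandas

csv_text = "ID,artist,genres\na1,Some Artist,NULL\n"

# Default behaviour: the NULL token is parsed as a missing value (nan)
df_default = pandas.read_csv(io.StringIO(csv_text))
print(pandas.isna(df_default.loc[0, "genres"]))  # True

# With na_filter=False the cell stays as the literal string "NULL"
df_raw = pandas.read_csv(io.StringIO(csv_text), na_filter=False)
print(df_raw.loc[0, "genres"])  # NULL
```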
Finally, the code doesn't quite do what we want because it is counting the number of characters in the string representation of the list. The solution is to preprocess the NULL values into strings representing empty lists, then use ast.literal_eval to turn all the strings back into lists:
import pandas
import ast

df = pandas.read_csv('39K.csv', encoding='latin-1', na_filter=False)
# Turn the literal string "NULL" into a string representing an empty list
df.replace(to_replace="NULL", value="[]", inplace=True)
# Optional sanity check: every value should parse as a Python list
for item in df['genres']:
    print(ast.literal_eval(item))
df['lst_len'] = df['genres'].map(lambda x: len(ast.literal_eval(x)))
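For completeness, here is the whole pipeline (read, fix NULLs, compute list lengths, keep the longest row per ID) run end-to-end on a tiny made-up frame; the IDs and genre lists below are invented for illustration:

```python
import ast
import io

import pandas

csv_text = (
    "ID,artist,genres\n"
    "6eUK,Ed Sheeran,\"['pop']\"\n"
    "6eUK,Ed Sheeran,\"['pop', 'uk pop']\"\n"
    "52lU,Moosh & Twist,NULL\n"
)

df = pandas.read_csv(io.StringIO(csv_text), na_filter=False)
df.replace(to_replace="NULL", value="[]", inplace=True)
df["lst_len"] = df["genres"].map(lambda x: len(ast.literal_eval(x)))

# Sort longest-first, then keep the first (longest) row for each ID
df = df.sort_values("lst_len", ascending=False)
df = df.drop_duplicates(subset="ID")
df = df.drop("lst_len", axis=1)
print(df)  # one row per ID; 6eUK keeps its two-genre row
```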
Or maybe worth doing pre-processing with DataFrame.fillna("0") or with empty across the dataframe.
– pygo
Nov 21 at 9:41
@pygo I thought that, but I'm not sure that will work, because fillna doesn't accept a list as its argument, and we explicitly want an empty list because we will later be calculating its length. Using fillna("0") definitely doesn't work (tested) without further processing.
– Rob Bricheno
Nov 21 at 9:44
Hmm, how about df = df.fillna('') which will fill na's (e.g. NaN's) with '', i.e. empty, or alternatively df.read_csv(path, na_filter=False) which will by default consider empty fields as empty strings.
– pygo
Nov 21 at 9:50
thanks. Let me check it
– Bindass Clashers
Nov 21 at 9:53
@pygo using fillna('') still didn't work, those values still ended up being nan. But your idea about na_filter=False worked beautifully, thanks, I've edited it into the answer.
– Rob Bricheno
Nov 21 at 10:07
edited Nov 21 at 10:30
answered Nov 21 at 9:30
Rob Bricheno