Should I normalize or standardize my dataset for kNN?
I'm trying to use kNN for a classification task. My dataset contains categorical features that are one-hot encoded, numerical features such as price, and bag-of-words (CountVectorizer) vectors for my text column.
I know kNN is affected by feature scaling, so I'm confused about which of these to use:
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import Normalizer
from sklearn.preprocessing import normalize
python python-3.x machine-learning scikit-learn knn
asked Mar 9 at 2:26 – user214
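As a side note on the three imports in the question: StandardScaler standardizes each feature (column) to zero mean and unit variance, while Normalizer (and its function form, normalize) rescales each sample (row) to unit norm. A minimal sketch on made-up numbers:

import numpy as np
from sklearn.preprocessing import StandardScaler, Normalizer

# Made-up data: two features on very different scales
X = np.array([[1.0, 100.0],
              [2.0, 300.0],
              [3.0, 500.0]])

# Column-wise: each feature ends up with zero mean and unit variance
print(StandardScaler().fit_transform(X))

# Row-wise: each sample is rescaled to unit l2 norm
print(Normalizer(norm="l2").fit_transform(X))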
Comments:
StandardScaler for numerical features should be enough. – Sergey Bushmanov, Mar 9 at 3:10
@SergeyBushmanov Many rows in the price column are zeros. Can I still standardize using StandardScaler? – user214, Mar 9 at 3:34
Your observation that many prices are zero may lead you to a different feature-preprocessing pipeline, but in general one would apply StandardScaler to numerical features with differing scales. This is important for kNN. – Sergey Bushmanov, Mar 9 at 4:07
@SergeyBushmanov A small query: you mentioned standardizing only the numerical features, but I've applied PCA to my BoW features. Should I standardize those as well along with the numerical features, and leave out the categorical ones? – user214, Mar 9 at 17:42
BoW vectors are already well-behaved features; I would guess you do not need to standardize them. But if you wish to, you can always cross-validate to check whether it makes sense. – Sergey Bushmanov, Mar 9 at 17:52
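A minimal sketch of the "standardize only the numerical features" setup from these comments, using a ColumnTransformer. The column names here are hypothetical stand-ins for the actual dataset:

from sklearn.compose import ColumnTransformer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical column names; swap in the real ones.
numeric_cols = ["price", "quantity"]

# Standardize only the numeric columns; everything else
# (the one-hot columns, BoW/PCA columns, ...) is passed through unchanged.
preprocess = ColumnTransformer(
    transformers=[("num", StandardScaler(), numeric_cols)],
    remainder="passthrough",
)
model = Pipeline([("prep", preprocess), ("knn", KNeighborsClassifier(n_neighbors=5))])
# model.fit(X_train, y_train)  # X_train: a DataFrame containing the columns above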
1 Answer
My suggestion would be to go for MinMaxScaler.
One of the major reasons is that features such as price can't have negative values and, as you mentioned, may be sparse.
From the documentation:
    The motivation to use this scaling include robustness to very small standard deviations of features and preserving zero entries in sparse data.
At the same time, if a numerical variable has huge variance, go for RobustScaler or StandardScaler instead.
You don't have to scale the one-hot encoded features.
For BoW features it is important to preserve the sparsity of the data. If you apply StandardScaler, you will lose that sparsity, so MinMaxScaler is the safer choice here.
Another option would be to go for TfidfVectorizer, which applies l2 normalization by default.
answered Mar 9 at 8:43 – ai_learning
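A minimal sketch of the zero-preservation point, with made-up values. When a feature's minimum is 0, MinMaxScaler maps 0 to 0, while StandardScaler's centering turns zeros into nonzero values:

import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# Made-up feature matrix with many zero entries (think: price, BoW counts)
X = np.array([[0.0, 3.0],
              [0.0, 0.0],
              [5.0, 1.0]])

print(MinMaxScaler().fit_transform(X))    # zeros stay zeros (column minima are 0)
print(StandardScaler().fit_transform(X))  # centering destroys the zeros

And a sketch of the TfidfVectorizer option, whose default norm="l2" leaves every document vector with unit length (the corpus is made up):

from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["cheap red shoes", "red shoes", "expensive watch"]
X_tfidf = TfidfVectorizer().fit_transform(docs)  # norm="l2" is the default
print(X_tfidf.multiply(X_tfidf).sum(axis=1))     # each row's squared norm is 1.0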
Comments:
Can I ask what you meant by preserving the sparsity of the data? I've actually applied PCA to my BoW vectors to reduce them to fewer dimensions. Can I apply scaling to those now, or should I leave them be? – user214, Mar 9 at 13:44
Preserving the sparsity of the data means zeros in the features are kept as zeros even after normalization. Sparsity simply means having few nonzero values; BoW vectors usually contain a lot of zeros because no single document contains every word in the vocabulary. – ai_learning, Mar 9 at 13:47
Yes, you can apply scaling to PCA features. – ai_learning, Mar 9 at 13:49
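To make the BoW -> dimensionality reduction -> scaling flow from this exchange concrete, a minimal sketch. TruncatedSVD stands in for PCA here because it accepts sparse BoW input directly, and the corpus is made up:

from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

docs = ["cheap red shoes", "red shoes on sale", "expensive gold watch"]

# BoW -> low-dimensional dense components -> scaling
pipe = make_pipeline(
    CountVectorizer(),
    TruncatedSVD(n_components=2, random_state=0),
    StandardScaler(),
)
X_reduced = pipe.fit_transform(docs)
print(X_reduced.shape)  # (3, 2)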