Model results drastically changing when running same lines as script or as function


























Weird thing happening today. I'm running a RandomForestRegressor over 8 categories. First, I tried looping over a function that graphs y_test and my prediction, but then I noticed that those graphs were inconsistent with my results elsewhere in the script, so I tried turning the function into a plain script. The function plus loop looks like this:



def plotModelResults(model, X_train=X_train, X_test=X_test, plot_intervals=False, plot_anomalies=False):
    model.fit(X_train.values, y_train.values.ravel())
    prediction = model.predict(X_test.values)
    axs[i].plot(prediction, "g", label="prediction", linewidth=2.0)
    axs[i].plot(y_test.values.ravel(), label="actual", linewidth=2.0)
    axs[i].legend(loc="best")
    axs[i].set_title('{}'.format(team))
    axs[i].grid(True)


Nothing too fancy. And it's looped like this:



nplots = df['Equipos'].nunique()
f, axs = plt.subplots(figsize=(25, 5), nrows=1, ncols=nplots, sharey=True)
dicc = dict()
for i, team in enumerate(df['Equipos'].unique()):
    aux = df.loc[df['Equipos'] == team]
    X = aux.drop(columns=['Cantidad_Vendida', 'Equipos']).copy()
    y = aux[['Cantidad_Vendida']].copy()
    X_train, X_test, y_train, y_test = timeseries_train_test_split(X, y, test_size=0.1)

    plt.tight_layout()

    plotModelResults(RandomForestRegressor(n_estimators=100, random_state=42))


The first graph from this script looks like this:



Graph 1



And then, I run this:



nplots = len(df['Equipos'].unique())
f, axs = plt.subplots(figsize=(25, 5), nrows=1, ncols=nplots, sharey=True)

for i, team in enumerate(df['Equipos'].unique()):
    aux = df.loc[df['Equipos'] == team]
    X = aux.drop(columns=['Fecha_Venta', 'Cantidad_Vendida', 'Equipos']).copy()
    y = aux[['Cantidad_Vendida']].copy()
    X_train, X_test, y_train, y_test = timeseries_train_test_split(X, y, test_size=0.1)
    rf = RandomForestRegressor(n_estimators=100, random_state=42)
    # Fit the model
    rf.fit(X_train.values, y_train.values.ravel())

    prediction = rf.predict(X_test.values)

    axs[i].plot(prediction, "g", label="prediction", linewidth=2.0)
    # Plot the test values
    axs[i].plot(y_test.values.ravel(), label="actual", linewidth=2.0)
    axs[i].legend(loc="best")
    axs[i].set_title('{}'.format(team))
    axs[i].grid(True)

plt.tight_layout()


Its first graph looks like this:



Graph 2



As far as I can see, both versions are equivalent, so I have no idea why those graphs are different. Any ideas?
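For what it's worth, one visible difference between the two snippets is that they don't build X from the same columns: the loop-plus-function version drops only `Cantidad_Vendida` and `Equipos`, while the plain script also drops `Fecha_Venta`. A toy sketch (the data here is made up, only the column names come from the question) makes the mismatch easy to check:

```python
import pandas as pd

# Toy frame using the question's column names; Lag_1 is a hypothetical feature
aux = pd.DataFrame({
    'Fecha_Venta':      [1, 2],
    'Cantidad_Vendida': [3, 4],
    'Equipos':          ['a', 'a'],
    'Lag_1':            [5, 6],
})

# Function/loop version keeps Fecha_Venta as a feature...
X_func = aux.drop(columns=['Cantidad_Vendida', 'Equipos'])
# ...while the plain-script version drops it as well
X_script = aux.drop(columns=['Fecha_Venta', 'Cantidad_Vendida', 'Equipos'])

# The feature sets the two models train on differ by one column
print(sorted(set(X_func.columns) - set(X_script.columns)))
```

If `Fecha_Venta` carries signal (or noise), training on it in one version but not the other would be enough to change the predictions.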










  • 1) The script and the function don't do exactly the same thing. Try calling plotModelResults within your script. 2) Are you running this in a Jupyter notebook? It's possible that you're keeping state somewhere by accident, and that's causing strange results.
    – Ian Quah
    Nov 22 '18 at 21:33
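One concrete way the function and the script can diverge, along the lines of the comment's first point: Python evaluates default argument values once, when the `def` statement runs, not on each call. A default like `X_train=X_train` therefore captures whatever `X_train` referred to at definition time, and reassigning `X_train` inside the loop does not update the default. A minimal sketch (hypothetical names, strings standing in for DataFrames):

```python
# Default values are bound when `def` executes, not when the function is called
X_train = "value at def time"

def show(data=X_train):
    # `data` falls back to the value X_train had when `def` ran
    return data

X_train = "value reassigned in loop"

print(show())          # still sees the stale default
print(show(X_train))   # passing the argument explicitly picks up the new value
```

So if `plotModelResults` was defined before the loop, calling it as `plotModelResults(model)` keeps using the train/test split that existed at definition time, while `y_train`/`y_test` inside its body are looked up as globals at call time, mixing stale features with fresh targets. Passing `X_train` and `X_test` explicitly in the call would rule this out.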
python pandas scikit-learn
edited Nov 22 '18 at 21:29
asked Nov 22 '18 at 20:40









Juan C