Leave-one-out cross validation for transfer learning in PyTorch
I have modified the original fine-tuning tutorial in PyTorch so that I can do LOOCV. There are some possible problems: the dataloader I currently have applies the training transformations even to the sample that is left out for testing (which it should not do), and the training loop somehow passes only one sample at a time to train_model. How can I fix the following code?
For simplicity, I am running it on 10 images, 2 classes, and 2 epochs.
from __future__ import print_function, division
import torch
from torch.autograd import Variable
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
import numpy as np
import torchvision
from torchvision import datasets, models, transforms
import matplotlib.pyplot as plt
import time
import os
import copy
import torch.utils.data as data_utils
from torch.utils import data
data_transforms = {
    'train': transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.RandomRotation(20),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])
}
data_dir = "test_images"
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
def train_model(model, criterion, optimizer, scheduler, train_input, train_label, num_epochs=25):
    since = time.time()
    for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch, num_epochs - 1))
        print('-' * 10)
        scheduler.step()
        model.train()  # Set model to training mode
        running_loss = 0.0
        running_corrects = 0
        # Iterate over data.
        train_input = train_input.to(device)
        train_label = train_label.to(device)
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward
        # track history only if in train
        with torch.set_grad_enabled(True):
            output = model(train_input)
            _, pred = torch.max(output, 1)
            loss = criterion(output, train_label)
            # backward + optimize only if in training phase
            loss.backward()
            optimizer.step()
        # statistics
        running_loss += loss.item() * train_input.size(0)
        running_corrects += torch.sum(pred == train_label.data)
        epoch_loss = running_loss / dataset_size['train']
        epoch_acc = running_corrects.double() / dataset_size['train']
        print('train Loss: {:.4f} Acc: {:.4f}'.format(epoch_loss, epoch_acc))
        print()
    time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(
        time_elapsed // 60, time_elapsed % 60))
    return model
######################################################################
# Finetuning the convnet
# ----------------------
#
# Load a pretrained model and reset final fully connected layer.
#
model_ft = models.resnet50(pretrained=True)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, 2)
model_ft = model_ft.to(device)
criterion = nn.CrossEntropyLoss()
# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
#model_ft = model_ft.cuda()
nb_samples = 10
nb_classes = 2
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
                                          data_transforms[x])
                  for x in ['train']}
dataset_size = {x: len(image_datasets[x]) for x in ['train']}
class_names = image_datasets['train'].classes
# LOOCV
loocv_preds = []
loocv_targets = []
for idx in range(nb_samples):
    print('Using sample {} as test data'.format(idx))
    # Get all indices and remove the test sample
    train_indices = list(range(len(image_datasets['train'])))
    del train_indices[idx]
    # Create a new sampler over the remaining indices
    sampler = data.SubsetRandomSampler(train_indices)
    dataloader = data.DataLoader(
        image_datasets['train'],
        num_workers=2,
        batch_size=1,
        sampler=sampler
    )
    # Train model
    for batch_idx, (sample, target) in enumerate(dataloader):
        print('Batch {}'.format(batch_idx))
        model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler,
                               sample, target, num_epochs=2)  # do I add this line here?
    # Test on the left-out sample
    model_ft.eval()
    test_data, test_target = image_datasets['train'][idx]
    test_data = test_data.to(device)
    test_target = torch.tensor(test_target).to(device)
    test_data.unsqueeze_(0)
    test_target.unsqueeze_(0)
    output = model_ft(test_data)
    pred = torch.argmax(output, 1)
    loocv_preds.append(pred.item())
    loocv_targets.append(test_target.item())

print("loocv preds: ", loocv_preds)
print("loocv targets: ", loocv_targets)
print(accuracy_score(loocv_targets, loocv_preds))
print(confusion_matrix(loocv_targets, loocv_preds))
Specifically, how should I modify the following piece of code so that the transformation is not applied to the one sample that is left out for testing?
dataloader = data.DataLoader(
    image_datasets['train'],
    num_workers=2,
    batch_size=1,
    sampler=sampler
)
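One possible pattern (a sketch, not the only fix) is to build a second ImageFolder over the same directory with a deterministic evaluation transform, and index only that copy for the held-out sample; ImageFolder sorts classes and files deterministically, so the indices of the two datasets line up. The names eval_transform and eval_dataset below are my own, not from the original code:

# Deterministic transform for the held-out sample (no random augmentation).
eval_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

# Same directory and class ordering as image_datasets['train'],
# but without the random augmentations.
eval_dataset = datasets.ImageFolder(os.path.join(data_dir, 'train'), eval_transform)

# Training keeps using the augmented dataset through the sampler,
# while the left-out sample is read from the deterministic copy:
test_data, test_target = eval_dataset[idx]

With that in place, the dataloader itself can stay as it is, since the sampler already excludes index idx from training.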
I am also very doubtful about these lines:
for batch_idx, (sample, target) in enumerate(dataloader):
    print('Batch {}'.format(batch_idx))
    model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler,
                           sample, target, num_epochs=2)  # do I add this line here?
Does it make sense to pass only one sample at a time to train_model? How can I fix this?
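For this second point, one way to restructure it (a sketch based on rearranging the code above, not a verified fix) is to pass the dataloader into train_model and iterate over it inside each epoch, so every epoch sees all N-1 training samples and train_model is called once per fold rather than once per single-sample batch:

def train_model(model, criterion, optimizer, scheduler, dataloader, num_epochs=25):
    for epoch in range(num_epochs):
        scheduler.step()
        model.train()
        running_loss, running_corrects = 0.0, 0
        for inputs, labels in dataloader:  # all N-1 training samples per epoch
            inputs, labels = inputs.to(device), labels.to(device)
            optimizer.zero_grad()
            outputs = model(inputs)
            _, preds = torch.max(outputs, 1)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()
            running_loss += loss.item() * inputs.size(0)
            running_corrects += torch.sum(preds == labels.data)
        n = len(dataloader.sampler)  # N - 1 samples, not the full dataset size
        print('train Loss: {:.4f} Acc: {:.4f}'.format(
            running_loss / n, running_corrects.double() / n))
    return model

# Called once per LOOCV fold, replacing the inner batch loop:
model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler,
                       dataloader, num_epochs=2)

Note also that the original code keeps fine-tuning the same model_ft across folds; for a clean LOOCV estimate the model, optimizer, and scheduler would normally be re-initialized at the start of each fold.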
Complete output can be found here: https://pastebin.com/SKQNRQNa
Specifically, I am not sure how to fix the second bullet point mentioned in this answer: https://discuss.pytorch.org/t/training-phase-of-leave-one-out-cross-validation/30138/2?u=mona_jalal
Additionally, if you are suggesting to use skorch, can you please explain how to apply LOOCV in the skorch transfer learning tutorial?
https://colab.research.google.com/github/dnouri/skorch/blob/master/notebooks/Transfer_Learning.ipynb#scrollTo=IY4BAQUJLUiT
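In case it helps, since a skorch NeuralNetClassifier is an sklearn-compatible estimator, one possible approach is to combine it with sklearn's LeaveOneOut. This is a sketch under the assumption that the images fit in memory as a single array (as with the 10-image example); the factory function make_resnet is my own, so each fold starts from fresh pretrained weights, and none of this comes from the tutorial itself:

import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from skorch import NeuralNetClassifier

# Stack the (already transformed) images into in-memory arrays.
X = torch.stack([image_datasets['train'][i][0] for i in range(nb_samples)]).numpy()
y = np.array([image_datasets['train'][i][1] for i in range(nb_samples)], dtype=np.int64)

def make_resnet():
    # Fresh pretrained ResNet per fold, with the 2-class head from above.
    m = models.resnet50(pretrained=True)
    m.fc = nn.Linear(m.fc.in_features, 2)
    return m

net = NeuralNetClassifier(
    make_resnet,
    criterion=nn.CrossEntropyLoss,
    optimizer=optim.SGD,
    lr=0.001,
    max_epochs=2,
    train_split=None,  # no internal validation split
    device='cuda' if torch.cuda.is_available() else 'cpu',
)

# LeaveOneOut yields one fold per sample; cross_val_score refits each fold.
scores = cross_val_score(net, X, y, cv=LeaveOneOut(), scoring='accuracy')
print('LOOCV accuracy: {:.4f}'.format(scores.mean()))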
python deep-learning pytorch cross-validation
asked Nov 21 at 23:38 by Mona Jalal (edited Nov 22 at 2:28)