Although the test on the experiment logger passes, loading a checkpoint created by a PytorchExperiment crashes as soon as the restored optimizer state is used:
```
  File "/home/sebastian/workspace/meddec/meddec/expert_networks/TaskCT_Seb/trixi_experiment.py", line 531, in <module>
    experiment.run()
  File "/home/sebastian/workspace/trixi/trixi/experiment/experiment.py", line 96, in run
    raise e
  File "/home/sebastian/workspace/trixi/trixi/experiment/experiment.py", line 75, in run
    self.train(epoch=epoch)
  File "/home/sebastian/workspace/meddec/meddec/expert_networks/TaskCT_Seb/trixi_experiment.py", line 213, in train
    current_loss, output_data = self.train_pass(data, epoch)
  File "/home/sebastian/workspace/meddec/meddec/expert_networks/TaskCT_Seb/trixi_experiment.py", line 247, in train_pass
    self.optimizer.step()
  File "/home/sebastian/.virtualenvs/pytorch/lib/python3.5/site-packages/torch/optim/adam.py", line 92, in step
    exp_avg.mul_(beta1).add_(1 - beta1, grad)
RuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #4 'other'
INFO:default-xuyannVZEL:Experiment exited. Checkpoints stored =)
```
A quick fix is to skip loading the optimizer parameters in pytorchexperiment.py:

```python
if "optimizer" in save_types:
    # optimizer_dict = self.get_pytorch_optimizers()
    pass
```
However, this of course means the optimizer state is not restored at all.
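The error suggests the restored optimizer buffers (e.g. Adam's `exp_avg`) live on the CPU while the model parameters and gradients are on the GPU. A less destructive workaround is to move the optimizer state to the model's device after loading. A minimal sketch, assuming the standard PyTorch optimizer API; `optimizer_state_to_device` is a hypothetical helper, not part of trixi:

```python
import torch

def optimizer_state_to_device(optimizer, device):
    # Move every tensor in the optimizer's per-parameter state
    # (e.g. Adam's exp_avg / exp_avg_sq buffers) to the target device.
    for state in optimizer.state.values():
        for key, value in state.items():
            if torch.is_tensor(value):
                state[key] = value.to(device)

# Illustration: populate some optimizer state, then relocate it.
model = torch.nn.Linear(2, 2)
optimizer = torch.optim.Adam(model.parameters())
model(torch.randn(1, 2)).sum().backward()
optimizer.step()  # creates the exp_avg / exp_avg_sq buffers

target = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(target)
optimizer_state_to_device(optimizer, target)
```

After a checkpoint restore, calling this once with the same device the model was moved to should avoid the CPU/GPU mismatch in `optimizer.step()` without discarding the optimizer state.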