Reputation: 31
I am trying to fit a kernelized version of the Cox partial likelihood in R. I have a function

    compute_kernelized_nLL(param_vect, kernel_matrix, response, lambda = 0)
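For context, here is a minimal sketch of the kind of objective I mean. This is an illustration, not my actual function: the ridge-penalty form and the assumption that `response` is a two-column (time, status) matrix are simplifications.

```r
# Sketch of a ridge-penalised kernelized Cox negative log partial likelihood.
# Assumes no ties in event times and response = cbind(time, status).
compute_kernelized_nLL <- function(param_vect, kernel_matrix, response, lambda = 0) {
  eta <- as.vector(kernel_matrix %*% param_vect)  # linear predictor K %*% alpha
  time <- response[, 1]
  status <- response[, 2]
  nll <- 0
  for (i in which(status == 1)) {
    risk_set <- time >= time[i]                   # subjects still at risk at time[i]
    nll <- nll - (eta[i] - log(sum(exp(eta[risk_set]))))
  }
  # RKHS ridge penalty: lambda * alpha' K alpha
  nll + lambda * sum(param_vect * (kernel_matrix %*% param_vect))
}
```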
and I call optim as follows:

    ker.train <- construct_euclidean_kernel(as.matrix(data))
    (res <- optim(par = rep(0, ncol(ker.train)),
                  fn = compute_kernelized_nLL,
                  kernel_matrix = ker.train,
                  response = uncensored_survival,
                  lambda = 3,
                  method = "Nelder-Mead"))
I noticed that the result often converges to the initial parameter values I passed in. To check this, I printed the parameter vector at the start of compute_kernelized_nLL, and the parameters are indeed not changing: I just get the vector of zeros over and over, until eventually all the parameters start moving in lockstep. This happens no matter which optimization method I try.
I know a minimal reproducible example is preferred, but after trying to replicate the behavior in isolation I couldn't produce one. I'm happy to edit in more of the code, but I didn't want a giant wall of text obscuring the question.
Upvotes: 0
Views: 64