Hi everyone,
I have a gradient descent problem of the following form:
$$\psi_{n+1} = \psi_{n} + \alpha \left( \nabla\psi_{n} * D^{2}\psi \right)$$
I am running this on a 256x256 image grid with uniform spacing (dx = dy = 1), using plain gradient descent with a step size of 0.5.
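For concreteness, here is a minimal NumPy sketch of the explicit update described above, treating \(\nabla\psi_{n}\) as the gradient magnitude and \(D^{2}\psi\) as a 5-point Laplacian with periodic boundaries (these particular operator choices are just for illustration):

```python
import numpy as np

def grad_mag(psi):
    # Central-difference gradient magnitude, dx = dy = 1
    gy, gx = np.gradient(psi)
    return np.sqrt(gx**2 + gy**2)

def laplacian(psi):
    # 5-point Laplacian with periodic boundaries, dx = dy = 1
    return (np.roll(psi, 1, axis=0) + np.roll(psi, -1, axis=0) +
            np.roll(psi, 1, axis=1) + np.roll(psi, -1, axis=1) - 4.0 * psi)

psi = np.random.rand(256, 256)   # stand-in for the actual image
alpha = 0.5                      # step size

for n in range(500):
    # Explicit (forward) update: psi_{n+1} = psi_n + alpha * (grad(psi_n) * D^2 psi)
    psi = psi + alpha * grad_mag(psi) * laplacian(psi)
```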
Somewhere down the line the iteration becomes very unstable: artefacts appear, the whole thing falls apart, and it never converges.
Searching online, people recommend the Crank-Nicolson scheme for solving these kinds of systems, but I am having trouble formulating my problem in that scheme.
Would anyone know how to structure this problem using the Crank-Nicolson scheme? Also, is there a way to determine the optimal step-size parameter at each iteration so as not to cause instability?
Thanks,
Luca