gencast_mini_demo.ipynb on AMD CPU #113
Comments
Hey, this looks like a splash attention related error. Splash attention is only supported on TPU. You can try following the GPU instructions to change the attention mechanism; I believe this should work fine on CPU. Note that without knowing the memory specifications of your device, I can't guarantee it won't run out of memory. We've also never run GenCast on CPU, so we cannot make any guarantees about its correctness. Hope that helps! Andrew |
I will try your suggestion and report back here. |
I followed the suggestion in the "Running Inference on GPU" section of cloud_vm_setup.md (starting from task_config = ckpt.task_config). The job (4 time steps and 8 members) ran for about 2h30m, using 17GB of system RAM with an average CPU load of ~30 (I have 48 cores). Unfortunately, the results are all NaN: GenCast/graphcast/GenCast/lib/python3.12/site-packages/numpy/lib/_nanfunctions_impl.py:1409: RuntimeWarning: All-NaN slice encountered |
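For reference, the change described in that section amounts to switching the denoiser away from splash attention. A rough sketch of what I ran, with attribute names based on my reading of cloud_vm_setup.md (they may differ in the current repo):

```python
# Load configs from the checkpoint and swap the TPU-only splash attention for a
# generic attention implementation, per the "Running Inference on GPU" section.
# Attribute names here are approximate and may not match the repo exactly;
# ckpt is the checkpoint loaded earlier in the notebook.
task_config = ckpt.task_config
sampler_config = ckpt.sampler_config
denoiser_architecture_config = ckpt.denoiser_architecture_config
denoiser_architecture_config.sparse_transformer_config.attention_type = "triblockdiag_mha"
denoiser_architecture_config.sparse_transformer_config.mask_type = "full"
```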
I can't say I've seen this warning before. Could you confirm if the entire forecast was NaN? Note that we expect NaNs in the sea surface temperature variable so I wonder if this is what you might be encountering. |
I was plotting 2m_temp for all 8 ensemble members. All members had this same warning. I'll need to run it again to view other variables. |
Specific humidity at 850 and 100, vertical velocity at 850, geopotential at 500, and the u and v components of wind at 925 are also NaN. I did not look at the rest. |
Any more ideas on how to investigate this issue? |
Unfortunately, we've never attempted to run the model on a CPU, as this is too slow for practical use. In principle there should be no reason why it should differ, but unexpected device-specific compilation issues may be manifesting here. In the meantime, hopefully the instructions on how to use free cloud compute are useful. Do let us know if you gain any insights into why this is happening. |
If you've never attempted to run it on a decent CPU, then how do you know it won't be practical? |
I also think it would be nice to be able to set up the model config and run it for one timestep on our own CPU systems and then move it to a cloud GPU or TPU. CPU systems have very large RAM nowadays. I set this in the notebook, but if the CPU count is greater than 1, I get an AssertionError.
In the 'build jitted' section:
@andrewlkd Maybe #108 can be of some use; however, I obviously don't understand how JAX is handling the CPUs here. When the CPU device count is set to 1, it uses all the CPUs anyway. |
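For reference, the standard way to make JAX see more than one CPU "device" (so pmap has something to map over) is the XLA host-platform flag; a minimal sketch, assuming it is set before JAX initialises its backends:

```python
import os

# Pretend the single host CPU is 8 separate XLA devices so that
# jax.local_devices() returns 8 entries for pmap to map over.
# This must run before JAX initialises its CPU backend.
os.environ["XLA_FLAGS"] = (
    os.environ.get("XLA_FLAGS", "") + " --xla_force_host_platform_device_count=8"
)

import jax
print(jax.local_devices())  # expect 8 CPU devices
```

Whether GenCast's pmapped rollout then behaves sensibly with these "fake" devices is a separate question; this only addresses the device count itself.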
Results from debugging so far are attached. I put a breakpoint in the function chunked_prediction_generator() from rollout.py, before predictor_fn(). I then printed out some variables looking for NaNs, then hit continue. The stack trace is in the attached text file. Please review and let me know if this helps shed any light on how the NaNs are being generated. |
Hm, I'm not so sure this does shed light. This just suggests something in the actual predictor function (i.e. forward pass of GenCast) is causing NaNs when running on CPU. In case it was something to do with the pmapping, I just tried on my end to run in the non pmapped case and it still produces NaNs. Let me know if you get any more data points from debugging. |
@andrewlkd PMAP is interfering with my debugging efforts. I'm running into the limitations described at Would you share your code changes to run the gencast_mini_demo.ipynb demo non-pmapped? |
Sure! In the demo notebook, you'll want to:
Hope this helps, Andrew |
Ultimately though, what I find the most useful for debugging this kind of thing is to keep the |
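Andrew's concrete notebook edits are not reproduced above; purely as an illustration of the non-pmapped pattern (not the exact change), it is along these lines, with forward_fn standing in for the demo's forward pass:

```python
import jax
import jax.numpy as jnp

# forward_fn is a placeholder, not the actual GenCast predictor: the point is
# only to show jit plus a Python loop over ensemble members instead of
# pmapping over a device axis.
def forward_fn(rng, inputs):
  return inputs + jax.random.normal(rng, inputs.shape)

forward_jitted = jax.jit(forward_fn)

inputs = jnp.zeros((4, 4))
rngs = jax.random.split(jax.random.PRNGKey(0), 8)  # one key per ensemble member
predictions = [forward_jitted(rng, inputs) for rng in rngs]
```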
Eliminating the PMAP as Andrew suggested and adding these debug lines, e.g. jax.config.update("jax_debug_nans", True), to the top of the notebook after the imports results in the following trace. Setting these to False allows the code to run as before and generate NaNs.
|
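For reference, the debug configuration being described is roughly the following; jax_debug_nans is quoted from the comment above, while pairing it with jax_debug_infs is my assumption about what the other line was:

```python
import jax

# Make JAX raise at the first operation that produces a NaN (and, assumed here,
# an Inf) instead of silently propagating it through the rollout.
jax.config.update("jax_debug_nans", True)
jax.config.update("jax_debug_infs", True)  # assumption about the second debug line
```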
Forgot to mention that I also set the following in my test, as a workaround for "AttributeError: 'bool' object has no attribute 'astype'".
|
Found it! The issue is that next_noise_level goes to zero at the 20th iteration (i=19) through body_fn(). That results in mid_noise_level going to zero. That zero gets passed to __call__ in FourierFeaturesMLP(), which applies the natural logarithm of that value. The full traceback is attached as well. traceback.txt |
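A tiny sketch of why a zero noise level is a problem there: the log becomes -inf, and pushing -inf through sine/cosine Fourier features yields NaN (generic JAX illustration, not the actual FourierFeaturesMLP code):

```python
import jax.numpy as jnp

noise_level = jnp.float32(0.0)
log_level = jnp.log(noise_level)   # -inf
print(log_level)                   # -inf
print(jnp.sin(log_level))          # nan: sin of -inf is undefined
```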
Nice progress! Unfortunately, I'm not sure this is related to the NaN outputs you're seeing. Note that when
-- Andrew |
Yeah, you are right. Coding around that ln(0.) still fails with NaNs downstream. Still looking. |
@andrewlkd please try running your CPU test with the following setting just after cell 3. I was able to get a reasonable prediction with this setting.
|
My system is running |
Interesting. This suggests it's indeed an XLA-compilation related issue. You may wish to compare the outputs of this with the outputs generated when running on the free Colab TPU to ensure that the forecasts being generated are indeed sensible. |
I chased the issue into apply_stochastic_churn(). During the 16th iteration through body_fn(), "x" is fine after adding init_noise(x), but it goes bad after apply_stochastic_churn(). The relevant code is
Note that per_step_churn_rates: [0.125 0.125 0.125 0.125 0.125 0.125 0.125 0.125 0.125 0.125 0.125 0.125 0.125 0.125 0. 0. 0. 0. 0. 0.]. When stochastic_churn_rate = 0.0, new_noise_level equals noise_level, so "new_noise_level^2 - noise_level^2" should equal zero. Taking the sqrt() of zero is fine, though I've read that it causes problems when computing the gradient. At any rate, extra_noise_stddev goes to NaN, as evidenced by the print I put at the bottom of apply_stochastic_churn(): apply_stochastic_churn: x after applying stochastic_churn: 2m_temperature: <xarray.DataArray '2m_temperature' ()> Size: 1B |
Note that the sampler isn't being backpropagated through (it is only used at inference time), so I'm not sure gradients of this function are relevant here. What about checking the value of |
I'll look into that calculation. |
That's it! The difference goes negative. Here is the code: new_noise_level = noise_level * (1.0 + stochastic_churn_rate). And output from the run: body_fn: x after init_noise: 2m_temperature: 16 <xarray.DataArray '2m_temperature' ()> Size: 1B |
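To make the failure mode concrete: with a churn rate of exactly zero the two noise levels are equal, so the squared difference should be 0.0, and any tiny negative value from floating-point evaluation turns the square root into NaN. A self-contained sketch, with the noise level value taken from the run above and the sampler code paraphrased rather than copied:

```python
import jax.numpy as jnp

noise_level = jnp.float32(0.22014613449573517)
stochastic_churn_rate = 0.0   # per_step_churn_rates[i] for the later iterations

new_noise_level = noise_level * (1.0 + stochastic_churn_rate)
diff = new_noise_level**2 - noise_level**2
# Mathematically diff == 0.0 here; if compilation yields a tiny negative value
# instead, the sqrt below is NaN and spreads through the whole forecast.
extra_noise_stddev = jnp.sqrt(diff)
print(diff, extra_noise_stddev)
```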
Nice! Does adding something like
Generate non-NaN forecasts for you now? |
The following modification resolves this issue: diff = new_noise_level^2 - noise_level^2 |
Would the "where" solution or the "maximum" solution be more optimal? |
jnp.maximum() is also a solution here: extra_noise_stddev = jnp.sqrt(jnp.maximum(0., diff)) |
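Both clamps behave the same here; a small self-contained comparison (diff is recomputed locally just for illustration):

```python
import jax.numpy as jnp

noise_level = jnp.float32(0.22014613449573517)
new_noise_level = noise_level * (1.0 + 0.0)
diff = new_noise_level**2 - noise_level**2   # may come out slightly negative

# "where" form: zero out negative differences before the sqrt.
stddev_where = jnp.sqrt(jnp.where(diff < 0.0, 0.0, diff))
# "maximum" form: clamp the difference at zero.
stddev_max = jnp.sqrt(jnp.maximum(0.0, diff))

print(stddev_where, stddev_max)   # both finite; both 0.0 when diff <= 0
```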
What does running
return on your end? It's a bit bizarre that this is happening... |
I get 0.0 if I run that in a cell of my notebook. |
I also get zero with @jax.jit f(0.22014613449573517). With def f(x1): ..., however, the result is Array(-6.661849e-11, dtype=float32, weak_type=True). |
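The function bodies themselves are not shown above; a hypothetical reconstruction of the kind of check being run, where the expression inside f is my guess at the sampler's squared-difference computation rather than the exact code:

```python
import jax
import jax.numpy as jnp

stochastic_churn_rate = 0.0

@jax.jit
def f(x1):
  # Hypothetical body: the sampler-style squared-noise-level difference.
  new_noise_level = x1 * (1.0 + stochastic_churn_rate)
  return new_noise_level**2 - x1**2

# Mathematically this is 0.0; a tiny negative result under XLA CPU would
# explain the downstream sqrt(...) turning into NaN.
print(f(0.22014613449573517))
```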
I see the same behavior under jax: 0.5.0 |
I'm attempting to run the gencast_mini_demo.ipynb case on my home workstation without a GPU. The notebook recognizes that I don't have the correct software to run on the installed GPU and falls back to CPU (which is what I want to happen).
Output from cell 22.
WARNING:2024-12-21 14:22:21,184:jax._src.xla_bridge:969: An NVIDIA GPU may be present on this machine, but a CUDA-enabled jaxlib is not installed. Falling back to cpu.
I've attached the stack trace I get from cell 23 (Autoregressive rollout (loop in python)).
gencast.failure.txt
Is this expected? Does GenCast require a GPU or TPU to work?