qt_K -> qr in Forward_modeling_for_Fe_I_lines_of_Kurucz.ipynb #487

Open
code29563 opened this issue May 26, 2024 · 24 comments
Labels: bug (Something isn't working)
Milestone: v1.6

Comments

@code29563

I think this notebook is missing a step: the qt_K determined in cell 5 should be divided by adbK.QTref_284[mask] before being passed to SijT (or now line_strength) in the next cell, since that argument expects qr, the partition function ratio qr(T) = Q(T)/Q(Tref). Could you confirm?
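
For concreteness, a minimal sketch of the proposed change, reusing the names from the notebook and this issue (qt_K, adbK, mask); the exact line_strength call should follow whatever the notebook already does:

qr_K = qt_K / adbK.QTref_284[mask]   # qr(T) = Q(T) / Q(Tref), the partition function ratio
# ... then pass qr_K (instead of qt_K) as the qr argument of line_strength (formerly SijT).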

@HajimeKawahara
Owner

Thanks. @chonma0ctopus Can you also check this?

@HajimeKawahara
Owner

Working on this issue on the kurucz_bug branch.

HajimeKawahara added the bug (Something isn't working) label on Jun 23, 2024
HajimeKawahara added this to the v1.6 milestone on Jun 23, 2024
@chonma0ctopus
Collaborator

chonma0ctopus commented Jun 24, 2024

I apologize for the bug I left behind and the trouble it has caused. After a quick look, I think @code29563 has made the right point. I was planning to take time next Sunday night to look at it more closely and fix it, but I see I may have misread the situation: you are already working on it. Thank you. Please tell me where I should still clarify and edit.

@HajimeKawahara
Owner

HajimeKawahara commented Jun 27, 2024

@chonma0ctopus I reached Cell 5 by modifying Cells 1-4 (they needed some changes). Can you take over the kurucz_bug branch?

https://github.com/HajimeKawahara/exojax/blob/kurucz_bug/documents/tutorials/Forward_modeling_for_Fe_I_lines_of_Kurucz.ipynb

@chonma0ctopus
Collaborator

Understood.

@HajimeKawahara
Owner

@chonma0ctopus Have you identified the cause of the error?

@chonma0ctopus
Collaborator

Sorry for the further delay. I've finally found the time to address the issues in the notebook: https://github.com/chonma0ctopus/exojax/tree/kurucz_bug

I see similar problems in a few other files (e.g., Tref assignment issues in line_strength, qt used instead of qr, and some deprecated functions...). I'm planning to tackle these too:

  • lpf.py
  • opacity_Fe_test.py
  • Reverse_modeling_with_VALD_using_MODIT.ipynb
  • Forward_modeling_for_Fe_I_lines_of_Kurucz.rst
  • Forward_modeling_for_metal_line.rst
    (There might be more, I'll keep an eye out!)

Quick question: Should I go ahead and merge the notebook fixes into your develop branch first? Or wait until I've sorted out the other files too?

@HajimeKawahara
Owner

@chonma0ctopus Thanks! Choose whichever suits you best!

@chonma0ctopus
Collaborator

Got it! I'll work on the other files first next weekend.

@chonma0ctopus
Collaborator

Thanks for your patience. lpf.py and opacity_Fe_test.py have been processed. I will work on the remaining files within a week.

@HajimeKawahara
Owner

@chonma0ctopus Hello. Is there anything I can do to help? Or, if you have already finished correcting all of the source files (except for the ipynb/rst files), make a PR if you prefer; then I can review it.

@chonma0ctopus
Collaborator

Thank you for offering your help, and I apologize for not being able to dedicate time to this. It would be helpful if you could check opacity_Fe_test.py, because this test seems to raise an error even before my edits. Thank you again for your support and understanding.

@HajimeKawahara
Owner

@chonma0ctopus Thanks for the reply, but I'm a bit confused. Which version of opacity_Fe_test.py should I check? If I should look at the opacity_Fe_test.py you processed, could you create a draft PR or submit a PR?

@chonma0ctopus
Collaborator

Sorry for the confusion. It is not the version I processed this time; the error was already there before I edited it. I haven't been able to pinpoint at which point in development the problem was introduced.

I thought it might have come from Issue #488, but it didn't; at 49b9e32 we already get an AssertionError.

Do you happen to know a smart way to track down where the difference occurred? Or do you think there is no need to track it down and I can just update the assert statement instead?

@HajimeKawahara
Owner

I see. But you said you have already processed opacity_Fe_test.py. Making changes based on the old version will make merging later difficult, so could you please submit a PR anyway? If you do that, I will edit the new version of opacity_Fe_test.py. @chonma0ctopus

@chonma0ctopus
Collaborator

Ok, I've submitted a PR. Thank you in advance for your review.

@HajimeKawahara
Owner

Thanks for the PR. I don't get any error from opacity_Fe_test.py....

shirochan:~/exojax/tests/endtoend/metals(kurucz_bug)>python opacity_Fe_test.py
Reading VALD file
/home/kawahara/exojax/src/exojax/spec/atomllapi.py:612: FutureWarning: Calling float on a single element Series is deprecated and will raise a TypeError in the future. Use float(ser.iloc[0]) instead
  ionE = float(
[2.07535337e-19 1.25000001e-05 2.85156250e-05]
[2.07535337e-19 1.25000001e-05 4.53125000e-04]
[2.07535337e-19 1.25000001e-05 1.16015625e-04]
[2.07535337e-19 1.25000001e-05 2.19726562e-05]
[2.07535337e-19 1.25000001e-05 5.09375000e-04]

@chonma0ctopus

@chonma0ctopus
Collaborator

Sounds great, but also puzzling...
As seen below, my calculations are running on the CPU because my outdated CUDA/cuDNN setup is not compatible with the latest JAX. Do you think this could be the cause of the difference?

RB15:~/ghR/exojax/tests/endtoend/metals$ python opacity_Fe_test.py 
An NVIDIA GPU may be present on this machine, but a CUDA-enabled jaxlib is not installed. Falling back to cpu.
/home/tako/miniconda3/lib/python3.9/site-packages/numba/__init__.py:143: UserWarning: A NumPy version >=1.22.4 and <2.3.0 is required for this version of SciPy (detected version 1.22.3)
  import scipy
Reading VALD file
Note: Couldn't find the hdf5 format. We convert data to the hdf5 format.
[3.15955554e-19 2.31640625e-04 2.85156250e-05]
[3.15955554e-19 2.31640625e-04 3.87109375e-03]
Traceback (most recent call last):
  File "/home/tako/ghR/exojax/tests/endtoend/metals/opacity_Fe_test.py", line 175, in <module>
    test_opacity_Fe_uns(2995., 0.1)
  File "/home/tako/ghR/exojax/tests/endtoend/metals/opacity_Fe_test.py", line 106, in test_opacity_Fe_uns
    assert(diff[0] < 1.e-11 and diff[1] < 1.e-3 and diff[2] < 1.e-3)
AssertionError
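
(For reference, a quick way to confirm which backend JAX actually selected on each machine, using plain JAX calls and nothing exojax-specific, would be something like:)

import jax

# Report the backend JAX picked and the devices it will run on.
print(jax.default_backend())   # e.g. "gpu" or "cpu"
print(jax.devices())           # the device list JAX will use

Differences in float32 accumulation between CPU and GPU can shift these diff values somewhat, though whether that alone explains the failed assertion is unclear.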

@HajimeKawahara
Owner

HajimeKawahara commented Jul 24, 2024

Something is wrong... Did you run python setup.py install?
Or remove the hdf file (i.e. gf2600.hdf) and regenerate it?

@chonma0ctopus
Collaborator

Yes, I did both.

@chonma0ctopus
Collaborator

Just to be sure, could you also try using only the CPU instead of the GPU? @HajimeKawahara
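
(In case it is useful, one way to force CPU-only execution with a recent JAX, as a sketch; setting the environment variable JAX_PLATFORMS=cpu before running the script should also work:)

import jax

# Force the CPU backend; this must run before any JAX computation is executed.
jax.config.update("jax_platform_name", "cpu")
print(jax.default_backend())   # should now report "cpu"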

@HajimeKawahara
Owner

@chonma0ctopus
I ran it using the CPU, but got the same results:

shirochan:~/exojax/tests/endtoend/metals(kurucz_bug)>python opacity_Fe_test.py 
Reading VALD file
/home/kawahara/exojax/src/exojax/spec/atomllapi.py:612: FutureWarning: Calling float on a single element Series is deprecated and will raise a TypeError in the future. Use float(ser.iloc[0]) instead
  ionE = float(
[2.07535337e-19 1.25000001e-05 2.85156250e-05]
[2.07535337e-19 1.25000001e-05 4.53125000e-04]
[2.07535337e-19 1.25000001e-05 1.16015625e-04]
[2.07535337e-19 1.25000001e-05 2.19726562e-05]
[2.07535337e-19 1.25000001e-05 5.09375000e-04]

@HajimeKawahara
Owner

Anyway, I merged #499. But can you continue to investigate the issue in another branch? @chonma0ctopus

@chonma0ctopus
Collaborator

Thank you. I understand!
