From 276f58e1050297dcb3a78fa0f1fac6835b944620 Mon Sep 17 00:00:00 2001 From: jackieyao0114 Date: Wed, 22 May 2024 11:17:49 -0700 Subject: [PATCH 1/8] updated README.md --- README.md | 40 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 40 insertions(+) diff --git a/README.md b/README.md index e1711376d..634b63e0b 100644 --- a/README.md +++ b/README.md @@ -1,2 +1,42 @@ # ARTEMIS +ARTEMIS (Adaptive mesh Refinement Time-domain ElectrodynaMIcs Solver) is a high-performance coupled electrodynamics–micromagnetics solver for full physical modeling of signals in microelectronic circuitry. The overall strategy couples a finite-difference time-domain (FDTD) approach for Maxwell’s equations to a magnetization model described by the Landau–Lifshitz–Gilbert (LLG) equation. The algorithm is implemented in the Exascale +Computing Project (ECP) software framework, AMReX, which provides effective scalability on manycore and GPU-based supercomputing architectures. Furthermore, the code leverages ongoing developments of the Exascale Application Code, WarpX, which is primarily being developed for plasma wakefield accelerator modeling. Our temporal coupling scheme provides second-order accuracy in space and time by combining the integration steps for the magnetic field and magnetization into an iterative sub-step that includes a trapezoidal temporal discretization for the magnetization. The performance of the algorithm is demonstrated by the excellent scaling results on NERSC multicore and GPU systems, with a significant (59×) speedup on the GPU using a node-by-node comparison. The utility of our code is validated by performing simulations of transmission lines, rectangle electromagnetic waveguides, magnetically tunable filters, on-chip coplanar waveguides and resonators, magnon-photon coupling circuits, and so on. +# Installation +## Download AMReX Repository +``` git clone git@github.com:AMReX-Codes/amrex.git ``` +## Download Artemis Repository +``` git clone git@github.com:AMReX-Microelectronics/artemis.git ``` +## Build +Make sure that AMReX and Artemis are cloned into the same parent directory on your filesystem. Navigate to the Exec folder of Artemis and execute +```make -j 4``` +You can turn on and off the LLG equation by specifying ```USE_LLG``` during compilation. The following command compiles Artemis without LLG +```make -j 4 USE_LLG=FALSE``` +The following command compiles Artemis with LLG +```make -j 4 USE_LLG=TRUE``` +The default value of ```USE_LLG``` is ```TRUE``` + +# Running Artemis +Example input scripts are located in the `Examples` directory. +## Simple Testcase +You can run the following to simulate an MFIM heterostructure with a 5 nm HZO as the ferroelectric layer and 4 nm alumina as the dielectric layer under zero applied voltage: +## For MPI+OMP build +```mpirun -n 4 ./main3d.gnu.MPI.OMP.ex Examples/inputs_mfim_Noeb``` +## For MPI+CUDA build +```mpirun -n 4 ./main3d.gnu.MPI.CUDA.ex Examples/inputs_mfim_Noeb``` +# Visualization and Data Analysis +Refer to the following link for several visualization tools that can be used for AMReX plotfiles. + +[Visualization](https://amrex-codes.github.io/amrex/docs_html/Visualization_Chapter.html) + +### Data Analysis in Python using yt +You can extract the data in numpy array format using yt (refer to [yt](https://yt-project.org/) for installation and usage).
After you have installed yt, you can do something as follows, for example, to get variable 'Pz' (z-component of polarization) +``` +import yt +ds = yt.load('./plt00001000/') # for data at time step 1000 +ad0 = ds.covering_grid(level=0, left_edge=ds.domain_left_edge, dims=ds.domain_dimensions) +P_array = ad0['Pz'].to_ndarray() +``` +# Publications +1. P. Kumar, M. Hoffmann, A. Nonaka, S. Salahuddin, and Z. Yao, 3D ferroelectric phase field simulations of polycrystalline multi-phase hafnia and zirconia based ultra-thin films, submitted for publication. [arxiv](https://arxiv.org/abs/2402.05331) +2. P. Kumar, A. Nonaka, R. Jambunathan, G. Pahwa, S. Salahuddin, and Z. Yao, Artemis: A GPU-accelerated, 3D Phase-Field Simulation Framework for Modeling Ferroelectric Devices, Computer Physics Communications, 108757, 2023. [link](https://www.sciencedirect.com/science/article/pii/S0010465523001029) \ No newline at end of file From 83dde181dd1031746de452a4da915154c0e6e9d1 Mon Sep 17 00:00:00 2001 From: jackieyao0114 Date: Wed, 22 May 2024 11:22:56 -0700 Subject: [PATCH 2/8] updated README.md --- README.md | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/README.md b/README.md index 634b63e0b..5d643a8d1 100644 --- a/README.md +++ b/README.md @@ -1,6 +1,5 @@ # ARTEMIS -ARTEMIS (Adaptive mesh Refinement Time-domain ElectrodynaMIcs Solver) is a high-performance coupled electrodynamics–micromagnetics solver for full physical modeling of signals in microelectronic circuitry. The overall strategy couples a finite-difference time-domain (FDTD) approach for Maxwell’s equations to a magnetization model described by the Landau–Lifshitz–Gilbert (LLG) equation. The algorithm is implemented in the Exascale -Computing Project (ECP) software framework, AMReX, which provides effective scalability on manycore and GPU-based supercomputing architectures. Furthermore, the code leverages ongoing developments of the Exascale Application Code, WarpX, which is primarily being developed for plasma wakefield accelerator modeling. Our temporal coupling scheme provides second-order accuracy in space and time by combining the integration steps for the magnetic field and magnetization into an iterative sub-step that includes a trapezoidal temporal discretization for the magnetization. The performance of the algorithm is demonstrated by the excellent scaling results on NERSC multicore and GPU systems, with a significant (59×) speedup on the GPU using a node-by-node comparison. The utility of our code is validated by performing simulations of transmission lines, rectangle electromagnetic waveguides, magnetically tunable filters, on-chip coplanar waveguides and resonators, magnon-photon coupling circuits, and so on. +ARTEMIS (Adaptive mesh Refinement Time-domain ElectrodynaMIcs Solver) is a high-performance coupled electrodynamics–micromagnetics solver for full physical modeling of signals in microelectronic circuitry. The overall strategy couples a finite-difference time-domain (FDTD) approach for Maxwell’s equations to a magnetization model described by the Landau–Lifshitz–Gilbert (LLG) equation. The algorithm is implemented in the Exascale Computing Project (ECP) software framework, AMReX, which provides effective scalability on manycore and GPU-based supercomputing architectures. Furthermore, the code leverages ongoing developments of the Exascale Application Code, WarpX, which is primarily being developed for plasma wakefield accelerator modeling. 
Our temporal coupling scheme provides second-order accuracy in space and time by combining the integration steps for the magnetic field and magnetization into an iterative sub-step that includes a trapezoidal temporal discretization for the magnetization. The performance of the algorithm is demonstrated by the excellent scaling results on NERSC multicore and GPU systems, with a significant (59×) speedup on the GPU using a node-by-node comparison. The utility of our code is validated by performing simulations of transmission lines, rectangular electromagnetic waveguides, magnetically tunable filters, on-chip coplanar waveguides and resonators, magnon-photon coupling circuits, and so on. # Installation ## Download AMReX Repository @@ -9,7 +8,7 @@ Computing Project (ECP) software framework, AMReX, which provides effective scal ``` git clone git@github.com:AMReX-Microelectronics/artemis.git ``` ## Build Make sure that AMReX and Artemis are cloned into the same parent directory on your filesystem. Navigate to the Exec folder of Artemis and execute -```make -j 4``` +```make -j 4```.
You can turn on and off the LLG equation by specifying ```USE_LLG``` during compilation. The following command compiles Artemis without LLG ```make -j 4 USE_LLG=FALSE``` The following command compiles Artemis with LLG From ab20b3061e7e9ffb5b95c6b3ba9c9d39ceb80858 Mon Sep 17 00:00:00 2001 From: jackieyao0114 Date: Wed, 22 May 2024 11:23:44 -0700 Subject: [PATCH 3/8] updated README.md --- README.md | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/README.md b/README.md index 5d643a8d1..d231d2429 100644 --- a/README.md +++ b/README.md @@ -9,11 +9,12 @@ ARTEMIS (Adaptive mesh Refinement Time-domain ElectrodynaMIcs Solver) is a high- ## Build Make sure that the AMReX and Artemis are cloned in the same location in their filesystem. Navigate to the Exec folder of Artemis and execute ```make -j 4```.
-You can turn on and off the LLG equation by specifying ```USE_LLG``` during compilation. The following command compiles Artemis without LLG -```make -j 4 USE_LLG=FALSE``` +You can turn on and off the LLG equation by specifying ```USE_LLG``` during compilation.
+The following command compiles Artemis without LLG +```make -j 4 USE_LLG=FALSE```
The following command compiles Artemis with LLG -```make -j 4 USE_LLG=TRUE``` -The default value of ```USE_LLG``` is ```TRUE``` +```make -j 4 USE_LLG=TRUE```
+The default value of ```USE_LLG``` is ```TRUE```. # Running Artemis Example input scripts are located in the `Examples` directory. From d2d171b3303c8a0edd00a40f741dd04b25fb0ef9 Mon Sep 17 00:00:00 2001 From: jackieyao0114 Date: Wed, 22 May 2024 11:29:04 -0700 Subject: [PATCH 4/8] updated README.md --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index d231d2429..61c1d9cd7 100644 --- a/README.md +++ b/README.md @@ -19,7 +19,7 @@ The default value of ```USE_LLG``` is ```TRUE```. # Running Artemis Example input scripts are located in the `Examples` directory. ## Simple Testcase -You can run the following to simulate an MFIM heterostructure with a 5 nm HZO as the ferroelectric layer and 4 nm alumina as the dielectric layer under zero applied voltage: +You can run the following to simulate : ## For MPI+OMP build ```mpirun -n 4 ./main3d.gnu.MPI.OMP.ex Examples/inputs_mfim_Noeb``` ## For MPI+CUDA build ```mpirun -n 4 ./main3d.gnu.MPI.CUDA.ex Examples/inputs_mfim_Noeb``` From dca9e163fbe46e9c8b0def59f5f82a2474dad44d Mon Sep 17 00:00:00 2001 From: jackieyao0114 Date: Wed, 22 May 2024 11:51:02 -0700 Subject: [PATCH 5/8] updated README.md --- README.md | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/README.md b/README.md index 61c1d9cd7..bab586b98 100644 --- a/README.md +++ b/README.md @@ -19,11 +19,12 @@ The default value of ```USE_LLG``` is ```TRUE```. # Running Artemis Example input scripts are located in the `Examples` directory. ## Simple Testcase -You can run the following to simulate : +You can run the following to simulate an air-filled X-band rectangular waveguide: ## For MPI+OMP build -```mpirun -n 4 ./main3d.gnu.MPI.OMP.ex Examples/inputs_mfim_Noeb``` +```make -j 4 USE_LLG=FALSE```
+```mpirun -n 4 ./main3d.gnu.TPROF.MTMPI.OMP.GPUCLOCK.ex Examples/Waveguide/inputs_3d_empty_X_band``` ## For MPI+CUDA build -```mpirun -n 4 ./main3d.gnu.MPI.CUDA.ex Examples/inputs_mfim_Noeb``` +```mpirun -n 4 ./main3d.gnu.TPROF.MTMPI.OMP.GPUCLOCK.ex Examples/Waveguide/inputs_3d_empty_X_band``` # Visualization and Data Analysis Refer to the following link for several visualization tools that can be used for AMReX plotfiles. From 139c92ad6c2a0a77bd748a1ca10893b9c4716598 Mon Sep 17 00:00:00 2001 From: jackieyao0114 Date: Wed, 22 May 2024 13:12:23 -0700 Subject: [PATCH 6/8] reduced cell numbers in inputs_3d_empty_X_band --- Examples/Waveguide/inputs_3d_empty_X_band | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Examples/Waveguide/inputs_3d_empty_X_band b/Examples/Waveguide/inputs_3d_empty_X_band index 7d7dc6b63..5d4ecec41 100644 --- a/Examples/Waveguide/inputs_3d_empty_X_band +++ b/Examples/Waveguide/inputs_3d_empty_X_band @@ -8,7 +8,7 @@ ####### GENERAL PARAMETERS ###### ################################# max_step = 1000000 -amr.n_cell = 512 4 512 # number of cells spanning the domain in each coordinate direction at level 0 +amr.n_cell = 128 4 128 # number of cells spanning the domain in each coordinate direction at level 0 amr.max_grid_size = 64 # maximum size of each AMReX box, used to decompose the domain amr.blocking_factor = 4 # only meaningful for AMR geometry.dims = 3 From 820a2a9b031120849b3fc0259bbeba642eb0c95d Mon Sep 17 00:00:00 2001 From: jackieyao0114 Date: Wed, 22 May 2024 13:19:09 -0700 Subject: [PATCH 7/8] updated README.md and Examples/Waveguide/inputs_3d_LLG_filter --- Examples/Waveguide/inputs_3d_LLG_filter | 2 +- README.md | 13 +++++++++++-- 2 files changed, 12 insertions(+), 3 deletions(-) diff --git a/Examples/Waveguide/inputs_3d_LLG_filter b/Examples/Waveguide/inputs_3d_LLG_filter index eef624ed7..19215a0d7 100644 --- a/Examples/Waveguide/inputs_3d_LLG_filter +++ b/Examples/Waveguide/inputs_3d_LLG_filter @@ -11,7 +11,7 @@ ################################# max_step = 300000 amr.n_cell = 1024 4 512 # number of cells spanning the domain in each coordinate direction at level 0 -amr.max_grid_size = 1024 # maximum size of each AMReX box, used to decompose the domain +amr.max_grid_size = 256 # maximum size of each AMReX box, used to decompose the domain amr.blocking_factor = 4 # only meaningful for AMR geometry.dims = 3 boundary.field_lo = pec pec pml # PEC on side walls; PML at -z end diff --git a/README.md b/README.md index bab586b98..591247c8b 100644 --- a/README.md +++ b/README.md @@ -18,13 +18,22 @@ The default value of ```USE_LLG``` is ```TRUE```. # Running Artemis Example input scripts are located in `Examples` directory. -## Simple Testcase +## Simple Testcase without LLG You can run the following to simulate an air-filled X-band rectangle waveguide: ## For MPI+OMP build ```make -j 4 USE_LLG=FALSE```
```mpirun -n 4 ./main3d.gnu.TPROF.MTMPI.OMP.GPUCLOCK.ex Examples/Waveguide/inputs_3d_empty_X_band``` ## For MPI+CUDA build -```mpirun -n 4 ./main3d.gnu.TPROF.MTMPI.OMP.GPUCLOCK.ex Examples/Waveguide/inputs_3d_empty_X_band``` +```make -j 4 USE_LLG=FALSE USE_GPU=TRUE```
+```mpirun -n 4 ./main3d.gnu.TPROF.MTMPI.CUDA.GPUCLOCK.ex Examples/Waveguide/inputs_3d_empty_X_band``` +## Simple Testcase with LLG +You can run the following to simulate an X-band magnetically tunable filter: +## For MPI+OMP build +```make -j 4 USE_LLG=TRUE```
+```mpirun -n 8 ./main3d.gnu.TPROF.MTMPI.OMP.GPUCLOCK.ex Examples/Waveguide/inputs_3d_LLG_filter``` +## For MPI+CUDA build +```make -j 4 USE_LLG=TRUE USE_GPU=TRUE```
+```mpirun -n 8 ./main3d.gnu.TPROF.MTMPI.CUDA.GPUCLOCK.ex Examples/Waveguide/inputs_3d_LLG_filter``` # Visualization and Data Analysis Refer to the following link for several visualization tools that can be used for AMReX plotfiles. From f71d484642f1839af7ee9ea8e704e37bed2af26c Mon Sep 17 00:00:00 2001 From: jackieyao0114 Date: Wed, 22 May 2024 13:26:04 -0700 Subject: [PATCH 8/8] updated README.md --- README.md | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-) diff --git a/README.md b/README.md index 591247c8b..07f3e45a2 100644 --- a/README.md +++ b/README.md @@ -40,13 +40,17 @@ Refer to the following link for several visualization tools that can be used for [Visualization](https://amrex-codes.github.io/amrex/docs_html/Visualization_Chapter.html) ### Data Analysis in Python using yt -You can extract the data in numpy array format using yt (you can refer to this for installation and usage of [yt](https://yt-project.org/). After you have installed yt, you can do something as follows, for example, to get variable 'Pz' (z-component of polarization) +You can extract the data in numpy array format using yt (you can refer to this for installation and usage of [yt](https://yt-project.org/). After you have installed yt, you can do something as follows, for example, to get variable 'Ex' (x-component of electric field) ``` import yt ds = yt.load('./plt00001000/') # for data at time step 1000 ad0 = ds.covering_grid(level=0, left_edge=ds.domain_left_edge, dims=ds.domain_dimensions) -P_array = ad0['Pz'].to_ndarray() +E_array = ad0['Ex'].to_ndarray() ``` # Publications -1. P. Kumar, M. Hoffmann, A. Nonaka, S. Salahuddin, and Z. Yao, 3D ferroelectric phase field simulations of polycrystalline multi-phase hafnia and zirconia based ultra-thin films, submitted for publication. [arxiv](https://arxiv.org/abs/2402.05331) -2. P. Kumar, A. Nonaka, R. Jambunathan, G. Pahwa, S. Salahuddin, and Z. Yao, Artemis: A GPU-accelerated, 3D Phase-Field Simulation Framework for Modeling Ferroelectric Devices, Computer Physics Communications, 108757, 2023. [link](https://www.sciencedirect.com/science/article/pii/S0010465523001029) \ No newline at end of file +1. Z. Yao, R. Jambunathan, Y. Zeng and A. Nonaka, A massively parallel time-domain coupled electrodynamics–micromagnetics solver. The International Journal of High Performance Computing Applications. 2022;36(2):167-181. doi:10.1177/10943420211057906 +[link](https://journals.sagepub.com/doi/full/10.1177/10943420211057906) +2. S. S. Sawant, Z. Yao, R. Jambunathan and A. Nonaka, Characterization of transmission lines in microelectronic circuits Using the ARTEMIS solver, IEEE Journal on Multiscale and Multiphysics Computational Techniques, vol. 8, pp. 31-39, 2023, doi: 10.1109/JMMCT.2022.3228281 +[link](https://ieeexplore.ieee.org/abstract/document/9980353) +3. R. Jambunathan, Z. Yao, R. Lombardini, A. Rodriguez, and A. Nonaka, Two-fluid physical modeling of superconducting resonators in the ARTEMIS framework, Computer Physics Communications, 291, p.108836. doi:10.1016/j.cpc.2023.108836 +[link](https://www.sciencedirect.com/science/article/pii/S0010465523001819?casa_token=rWpwl8cmtUYAAAAA:rZTndzf_pqx0lo9jtTRzLLxh0tIf_AD0zHcRRJ_ciwMw-n-X2doK5RprMS4wyrO9TEw5oDZAB7Kr) \ No newline at end of file
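A short follow-up to the yt snippet in the README above: once `E_array` has been extracted, a quick-look image is often the fastest sanity check. The sketch below is illustrative only; it assumes matplotlib is installed, that `./plt00001000/` is a plotfile written by the X-band waveguide example (whose grid is only a few cells wide in y), and that `Ex` is available as a plotfile field exactly as in the README snippet.
```
import yt
import matplotlib.pyplot as plt

# Load one plotfile and copy the level-0 data onto a uniform grid,
# exactly as in the README snippet.
ds = yt.load('./plt00001000/')   # data at time step 1000
ad0 = ds.covering_grid(level=0, left_edge=ds.domain_left_edge, dims=ds.domain_dimensions)
E_array = ad0['Ex'].to_ndarray()   # shape (nx, ny, nz)

# Take the mid-plane in y (the waveguide examples use only a few cells in y)
# and save a quick-look image of Ex in the x-z plane.
mid_y = E_array.shape[1] // 2
plt.imshow(E_array[:, mid_y, :].T, origin='lower', cmap='RdBu')
plt.colorbar(label='Ex')
plt.xlabel('x cell index')
plt.ylabel('z cell index')
plt.savefig('Ex_midplane.png', dpi=150)
```
The same pattern works for any other field stored in the plotfile; substitute the corresponding field name that ARTEMIS writes.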
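For time-domain quantities such as the waveguide and filter signals discussed above, it can also be useful to assemble a time series from a sequence of plotfiles. The following is a minimal sketch under the same assumptions as the previous example; the probe cell indices and the `plt*` naming pattern (fixed-width step numbers, so lexicographic sorting is chronological) are assumptions for illustration, not part of the original README.
```
import glob
import yt

times, probe = [], []
for plotfile in sorted(glob.glob('./plt*')):
    ds = yt.load(plotfile)
    ad0 = ds.covering_grid(level=0, left_edge=ds.domain_left_edge, dims=ds.domain_dimensions)
    ex = ad0['Ex'].to_ndarray()
    nx, ny, nz = ex.shape
    times.append(float(ds.current_time))         # simulation time of this plotfile
    probe.append(ex[nx // 2, ny // 2, nz // 4])  # Ex at one fixed observation cell

# 'times' and 'probe' now hold Ex at a single cell versus simulation time,
# which can be plotted or Fourier-transformed for signal analysis.
```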