diff --git a/.github/workflows/main-docs.yml b/.github/workflows/main-docs.yml new file mode 100644 index 000000000..f9acdf994 --- /dev/null +++ b/.github/workflows/main-docs.yml @@ -0,0 +1,20 @@ +name: Publish documentation +on: + push: + branches: + - main + +jobs: + build: + name: Deploy docs + runs-on: ubuntu-latest + steps: + - name: Checkout main + uses: actions/checkout@v2 + + - name: Deploy docs + uses: mhausenblas/mkdocs-deploy-gh-pages@master + env: + GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} + CONFIG_FILE: mkdocs.yml + REQUIREMENTS: docs/requirements.txt \ No newline at end of file diff --git a/docs/GettingStarted/building-with-self.md b/docs/GettingStarted/building-with-self.md new file mode 100644 index 000000000..cf22bc384 --- /dev/null +++ b/docs/GettingStarted/building-with-self.md @@ -0,0 +1 @@ +# Using SELF to build your own applications \ No newline at end of file diff --git a/docs/Learning/dependencies.md b/docs/GettingStarted/dependencies.md similarity index 100% rename from docs/Learning/dependencies.md rename to docs/GettingStarted/dependencies.md diff --git a/docs/GettingStarted/install.md b/docs/GettingStarted/install.md index 0f945d894..a9b093ffb 100644 --- a/docs/GettingStarted/install.md +++ b/docs/GettingStarted/install.md @@ -1,7 +1,7 @@ # Install SELF -## Quick start +## Install with Spack The easiest way to get started is to use the spack package manager. On a Linux platform, set up spack ``` @@ -57,128 +57,214 @@ cd ${HOME}/opt/self/test ctest ``` +### Once v0.0.1 is released +The easiest way to get started is to use the [spack package manager](https://spack.io), which provides an easy command line interface for installing research software from source code along with all of its dependencies. +!!! note + Before proceeding, you will need to ensure that you have a Fortran 2008 compliant compiler.
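Since everything that follows assumes a Fortran 2008 compliant compiler, a quick pre-flight check from the shell can save a failed build later. This is a hedged sketch that assumes `gfortran` is the compiler you intend to use; substitute your vendor's compiler as needed:

```shell
# Pre-flight check (assumes gfortran; substitute your vendor compiler as needed)
if command -v gfortran >/dev/null 2>&1; then
  fc_info="$(gfortran --version | head -n 1)"
else
  fc_info="no gfortran found on PATH"
fi
echo "Fortran compiler: ${fc_info}"
```

This only verifies that a candidate compiler is visible on your `PATH`; any compiler supporting the Fortran 2008 standard should work.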
-## Dependencies -The Spectral Element Library in Fortran can be built provided the following dependencies are met +On a Linux platform, set up spack: -* [Cmake (v3.21 or greater)](https://cmake.org/resources/) -* Fortran 2008 compliant compiler ( `gfortran` recommended ) -* MPI, e.g. [OpenMPI](https://www.open-mpi.org/) with [GPU-Aware Support](./dependencies.md) -* [MAGMA](https://icl.utk.edu/magma/) -* [HDF5](https://www.hdfgroup.org/solutions/hdf5/) -* [FluidNumerics/feq-parse](https://github.com/FluidNumerics/feq-parse) -* (Optional, AMD GPU Support) [ROCm v5.7.0 or greater](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/) -* (Optional, Nvidia GPU Support) [CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit) +```shell +git clone https://github.com/spack/spack ~/spack +source ~/spack/share/spack/setup-env.sh +``` +Allow spack to locate your compilers (make sure you have C, C++, and Fortran compilers installed!) + +```shell +spack compiler find +``` -## Installation Notes Spack will detect the compilers installed on your system and register them for use when building packages. -### Multithreading CPU support Computationally heavy methods in SELF are expressed using Fortran's `do concurrent` loop blocks, which gives compilers the freedom to parallelize operations. Every Fortran compiler has their own set of compiler flags to enable parallelization of `do concurrent` blocks (see [this post on the Fortran-Lang discourse](https://fortran-lang.discourse.group/t/do-concurrent-compiler-flags-to-enable-parallelization/4300/6)). We have provided a single option in the CMake build system that allow you to enable parallelization. At the `cmake` stage of the build process, you can set `SELF_ENABLE_MULTITHREADING=ON`, e.g.
+To reduce build time, import existing packages on your system ```shell -cmake -DSELF_ENABLE_MULTITHREADING=ON \ - -DCMAKE_INSTALL_PREFIX=${HOME}/opt/self \ - ../ +spack external find --not-buildable ``` -If you are building with the GNU compilers (`gfortran`), the number of threads used for parallelization is determined at build time. By default, the SELF build system will set the number of threads to 4. You can override this setting at the `cmake` stage of the build process using the `SELF_MULTITHREADING_NTHREADS` build variable, e.g. to set the number of threads to 8 with `gfortran`, +Next, install SELF and it's dependencies ```shell -cmake -DSELF_ENABLE_MULTITHREADING=ON \ - -DSELF_MULTITHREADING_NTHREADS=8 \ - -DCMAKE_INSTALL_PREFIX=${HOME}/opt/self \ - ../ +spack install self ``` -The CMake build system will set the appropriate flags for multithreading for GNU, Intel (`ifort` and `ifx`), LLVM, and Nvidia HPC Compilers. If you are not using `gfortran`, you can set the number of threads for parallelism at runtime using the `OMP_NUM_THREADS` environment variable +By default, this will install SELF with the following features +* Double precision floating point arithmetic +* No unit tests and no examples +* No multi-threading, CPU-only -### GPU Support -SELF uses OpenMP for GPU offloading. Some of our "heavy-lifting" kernels, such as divergence, gradient, and grid interpolation operations are expressed using the BLAS API. For these, we use MAGMA. +You can view documentation on all possible variants using +```shell +spack info self +``` -## Bare Metal Install +### Enable Multithreading +Many of the computationally intensive methods in SELF are written using the `do concurrent` structure. We have provided the variant `+multithreading` which will enable multithreading for all `do concurrent` blocks. You can install SELF with multithreading using -### Install SELF with CMake +```shell +spack install self+multithreading +``` -!!! 
warning - It is assumed that you have all of the necessary dependencies installed on your system before proceeding any further. +If you are using the GNU compiler suite, the number of threads used for `do concurrent` blocks is determined during build time. Because of this, we have provided the `nthreads` option, which defaults to 4. You can change this option to a value more sensible for your platform, e.g. -SELF comes with a CMake build system that defines the build and installation process. When you install SELF, you will install the following artifacts +```shell +spack install self+multithreading nthreads=16 % gcc +``` -* `${CMAKE_INSTALL_PREFIX}/lib/libself-static.a` - A static library for the SELF API, in case you want to build your own programs and solvers using Spectral Element Methods. -* `${CMAKE_INSTALL_PREFIX}/lib/libself.so` - A static library for the SELF API, in case you want to build your own programs and solvers using Spectral Element Methods. -* `${CMAKE_INSTALL_PREFIX}/include/*.mod` - Module files generated by the Fortran compiler during the build process. -* `${CMAKE_INSTALL_PREFIX}/example/*` - A set of example programs that run simple linear models (advection-diffusion) in 1-D, 2-D, or 3-D -* `${CMAKE_INSTALL_PREFIX}/test/*` - A set of unit tests that exercise specific methods beneath the `model` classes. +The `%gcc` here indicates that you intend to build SELF with the GNU compilers. -This part of the documentation will provide you with an overview of the environment variables that control the build process, show you how to target different GPUs for GPU acceleration, and how to install SELF to a preferred directory on your system. +### Enable Nvidia GPU Acceleration +SELF provides GPU accelerated implementations of all methods that are used in forward stepping conservation law solvers. 
On Nvidia GPU platforms, you can take advantage of this using the `+cuda` variant: + +```shell +spack install self+cuda +``` +This also ensures that the MPI flavor used is GPU-aware. You can specify the GPU architecture using the `gpu_arch` build option, e.g. for A100 GPUs + +```shell +spack install self+cuda gpu_arch=sm_80 +``` + +!!! note + AMD GPU-Aware MPI is currently not available in Spack. This means that these steps will not allow you to build SELF for multi-GPU platforms with AMD GPUs. See [Advanced Installation](#advanced-installation) for details on how to install for AMD GPU platforms. -#### Building for an Nvidia GPU -To build SELF for running on Nvidia GPUs, you will need to have both HIP and the CUDA toolkit installed on your system. -Next, you will need to set the `CMAKE_HIP_ARCHITECTURES` build variable to the microarchitecture code for the specific GPU you are targeting (see the table below).
+## Advanced Installation -Pascal (P100) | Volta (V100) | Ampere (A100) | Hopper (H100) | -------------- | ------------ | ------------- | ------------- | -sm_60, sm_61, sm_62 | sm_70, sm_72 | sm_80, sm_86, sm_87 | sm_90, sm_90a | +### Dependencies +The Spectral Element Library in Fortran can be built provided the following dependencies are met + +* [Cmake (v3.21 or greater)](https://cmake.org/resources/) +* Fortran 2008 compliant compiler ( `gfortran` recommended ) +* MPI, e.g. [OpenMPI](https://www.open-mpi.org/) with GPU-Aware Support +* [HDF5](https://www.hdfgroup.org/solutions/hdf5/) +* [FluidNumerics/feq-parse](https://github.com/FluidNumerics/feq-parse) +* (Optional, AMD GPU Support) [ROCm v6.0.2 or greater](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/) +* (Optional, Nvidia GPU Support) [CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit) +[**Learn more about installing SELF's dependencies**](./dependencies.md) -#### Install SELF (Detailed) -First, clone the SELF repository (if you haven't already) -``` -git clone https://github.com/fluidnumerics/SELF ~/SELF -``` +### Installing SELF from source with CMake -Next, create a build directory for Cmake to stage intermediate files -``` -mkdir ~/SELF/build -cd ~/SELF/build +!!! warning + It is assumed that you have all of the necessary dependencies installed on your system and that they are discoverable by CMake. + +SELF comes with a CMake build system that defines the build and installation process. + +1. Clone the SELF repository + +```shell +git clone https://github.com/fluidnumerics/self ~/self/ ``` -Use Cmake to build the make system +2. Create a build directory + +```shell +mkdir ~/self/build +cd ~/self/build ``` - FC=gfortran \ - cmake -DCMAKE_PREFIX_PATH=/opt/rocm - -DCMAKE_HIP_ARCHITECTURES=gfx90a \ - -DCMAKE_INSTALL_PREFIX=${HOME}/opt/self \ - -DCMAKE_BUILD_TYPE=Release \ - ./ + +3. Run CMake to build the make system. 
You can set `CMAKE_INSTALL_PREFIX` to a path where you'd prefer to install SELF; here, we set it to `${HOME}/opt/self` + +```shell +cmake -DCMAKE_INSTALL_PREFIX=${HOME}/opt/self ../ ``` -In this example, -* `FC=gfortran` sets the fortran compiler to `gfortran`. -* `CMAKE_PREFIX_PATH` instructs Cmake to search for Cmake configuration files for ROCm underneath `/opt/rocm` -* `CMAKE_HIP_ARCHITECTURES=gfx90a` sets the target GPU architecture to the AMD MI200 series GPUs -* `CMAKE_INSTALL_PREFIX=${HOME}/opt/self` sets the installation path for SELF to `${HOME}/opt/self`; when setting this variable, make sure it is a location where you have read and write permissions -* `CMAKE_BUILD_TYPE=Release` sets the build type to `Release`, which enables `-O3` optimizations. If you are troubleshooting an issue, it is best to set this to `Debug`. -After running `cmake`, you can build SELF, +4. Build SELF and run the test suite to ensure everything is built properly + +```shell +make +ctest --output-on-failure ``` -make VERBOSE=1 + +5. Install SELF + +```shell +make install ``` -We recommend that you run the unit tests included with SELF, +When you install SELF, you will install the following artifacts + +* `${CMAKE_INSTALL_PREFIX}/lib/libself-static.a` - A static library for SELF +* `${CMAKE_INSTALL_PREFIX}/lib/libself.so` - A shared object library for SELF +* `${CMAKE_INSTALL_PREFIX}/include/*.mod` - Module files generated by the Fortran compiler during the build process. +* `${CMAKE_INSTALL_PREFIX}/example/*` - A set of example programs +* `${CMAKE_INSTALL_PREFIX}/test/*` - A set of unit tests + +By default, this will install SELF with the following features +* Double precision floating point arithmetic +* No unit tests and no examples +* No multi-threading, CPU-only + +There are a few CMake options that you can set to control the build features : +* `SELF_ENABLE_MULTITHREADING`: Option to enable CPU multithreading for `do concurrent` loop blocks. 
(Default: OFF) +* `SELF_ENABLE_TESTING`: Option to enable build of tests. (Default: ON) +* `SELF_ENABLE_EXAMPLES`: Option to enable build of examples. (Default: ON) +* `SELF_ENABLE_GPU`: Option to enable GPU backend. Requires either CUDA or HIP. (Default: OFF) +* `SELF_ENABLE_DOUBLE_PRECISION`: Option to enable double precision for floating point arithmetic. (Default: ON) + +### Enabling Multithreading CPU support +Computationally heavy methods in SELF are expressed using Fortran's `do concurrent` loop blocks, which give compilers the freedom to parallelize operations. Every Fortran compiler has its own set of compiler flags to enable parallelization of `do concurrent` blocks (see [this post on the Fortran-Lang discourse](https://fortran-lang.discourse.group/t/do-concurrent-compiler-flags-to-enable-parallelization/4300/6)). We have provided a single option in the CMake build system that allows you to enable parallelization. At the `cmake` stage of the build process, you can set `SELF_ENABLE_MULTITHREADING=ON`, e.g. + +```shell +cmake -DSELF_ENABLE_MULTITHREADING=ON \ + -DCMAKE_INSTALL_PREFIX=${HOME}/opt/self \ + ../ +``` + +If you are building with the GNU compilers (`gfortran`), the number of threads used for parallelization is determined at build time. By default, the SELF build system will set the number of threads to 4. You can override this setting at the `cmake` stage of the build process using the `SELF_MULTITHREADING_NTHREADS` build variable, e.g. to set the number of threads to 8 with `gfortran`, + +```shell +cmake -DSELF_ENABLE_MULTITHREADING=ON \ + -DSELF_MULTITHREADING_NTHREADS=8 \ + -DCMAKE_INSTALL_PREFIX=${HOME}/opt/self \ + ../ +``` + +The CMake build system will set the appropriate flags for multithreading for GNU, Intel (`ifort` and `ifx`), LLVM, and Nvidia HPC Compilers.
If you are not using `gfortran`, you can set the number of threads for parallelism at runtime using the `OMP_NUM_THREADS` environment variable. + +### Enabling GPU Support +SELF offers the option to use HIP or CUDA. Some of our "heavy-lifting" kernels, such as divergence, gradient, and grid interpolation operations, are expressed using the BLAS API. For these, we use HIPBLAS or CUBLAS. GPU support is enabled in the CMake stage of the build by setting `SELF_ENABLE_GPU=ON`. + +The CMake build system will automatically search for HIP. If HIP is not found, then it will search for CUDA. If neither is found, the build process will fail. + +#### HIP +HIP can be used to build for either AMD or Nvidia GPUs. If you have HIP installed and it is found, you can also set the `CMAKE_HIP_ARCHITECTURES` build variable to specify which GPU architecture you want to build for. Alternatively, if you are building SELF on a system that has a GPU installed, you can let HIP auto-detect the available GPU. At this time, we also advise setting the `CXX` environment variable to `hipcc`, e.g. + +```shell +CXX=hipcc \ +cmake -DSELF_ENABLE_GPU=ON \ + -DCMAKE_INSTALL_PREFIX=${HOME}/opt/self \ + ../ +``` + +#### CUDA +SELF provides the ability to use CUDA directly, in case you are on a system that does not have AMD's ROCm and HIP installed. If you have CUDA installed and it is found, you can also set the `CMAKE_CUDA_ARCHITECTURES` build variable to specify which Nvidia GPU architecture you want to build for. At this time, we also advise setting the `CXX` environment variable to `nvcc`, e.g. + +```shell +CXX=nvcc \ +cmake -DSELF_ENABLE_GPU=ON \ + -DCMAKE_INSTALL_PREFIX=${HOME}/opt/self \ + ../ +``` -At the end of this process, the `self` application is installed under `${HOME}/opt/self/bin`. Additionally, the SELF static library can be found under `${HOME}/opt/self/lib` and the `.mod` files for all of the SELF modules are under `${HOME}/opt/self/include`.
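To pick the right value for `CMAKE_HIP_ARCHITECTURES` or `CMAKE_CUDA_ARCHITECTURES`, you can query the machine you are building on. The sketch below is a hypothetical helper, not part of SELF; it assumes the ROCm `rocminfo` or CUDA `nvidia-smi` tools are installed and falls back to a message otherwise:

```shell
# Hypothetical query for a local GPU architecture code; falls back to a
# message when neither ROCm nor CUDA tooling is available.
if command -v rocminfo >/dev/null 2>&1; then
  gpu_arch="$(rocminfo | grep -m 1 -o 'gfx[0-9a-f]*' || echo 'unknown')"
elif command -v nvidia-smi >/dev/null 2>&1; then
  # compute_cap reports e.g. "8.0" for an A100, i.e. sm_80
  gpu_arch="sm_$(nvidia-smi --query-gpu=compute_cap --format=csv,noheader | head -n 1 | tr -d '.')"
else
  gpu_arch="unknown (no ROCm or CUDA tooling found)"
fi
echo "GPU architecture: ${gpu_arch}"
```

Cross-check whatever the tools report against the reference table below.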
+ +#### Reference table for GPU architecture codes + +Vendor | Model | Architecture code(s) | ------ | ----- | -------------------- | AMD | Instinct MI100 | gfx908 | AMD | Instinct MI210 | gfx90a | AMD | Instinct MI250 | gfx90a | AMD | Instinct MI250x | gfx90a | AMD | Radeon Pro W7900 | gfx1100 | AMD | Radeon Pro W7800 | gfx1100 | Nvidia | Volta (V100) | sm_70, sm_72 | Nvidia | Ampere (A100) | sm_80, sm_86, sm_87 | Nvidia | Hopper (H100) | sm_90, sm_90a | + [If you encounter any problems, feel free to open a new issue](https://github.com/FluidNumerics/SELF/issues/new/choose) \ No newline at end of file diff --git a/docs/MeshGeneration/HOPr.md b/docs/MeshGeneration/HOPr.md new file mode 100644 index 000000000..e69de29bb diff --git a/docs/MeshGeneration/Overview.md b/docs/MeshGeneration/Overview.md new file mode 100644 index 000000000..b466afca4 --- /dev/null +++ b/docs/MeshGeneration/Overview.md @@ -0,0 +1,12 @@ + +## Overview of Mesh generation +
+ ![Spectral element mesh](img/spectral-element-mesh.png){ width=600 } +
+
+ + Every model in SELF needs to be associated with an interpolant, a mesh, and geometry. SELF uses a static unstructured mesh of elements. Within each element is a structured grid where the points in computational space are defined at Gauss-type quadrature points (Legendre Gauss or Legendre Gauss Lobatto). + +The typical workflow for instantiating a model is to first create the mesh and the interpolant. The mesh defines the bounding vertices for each element, the relationship between the vertex IDs and the elements, the relationship between the bounding edges (2-D)/faces (3-D) and the neighboring elements, material identifiers for each element, and boundary conditions for physical model boundaries. The interpolant specifies the degree of the Lagrange interpolating polynomial and the interpolation knots. For spectral accuracy, the interpolation knots are equal to the quadrature points and are either `GAUSS` or `GAUSS_LOBATTO`. This in turn determines the weights used in the interpolation and differentiation routines. + +From the mesh and interpolant information, we can create the geometry details. The geometry includes the physical positions, covariant basis vectors, contravariant basis vectors, and the Jacobian of the coordinate transformation. All of this information is necessary for computing derivatives in mapped coordinates and for computing fluxes between neighboring elements. \ No newline at end of file diff --git a/docs/MeshGeneration/StructuredMesh.md b/docs/MeshGeneration/StructuredMesh.md new file mode 100644 index 000000000..dc7b5563e --- /dev/null +++ b/docs/MeshGeneration/StructuredMesh.md @@ -0,0 +1,103 @@ +# Structured Mesh Generation in SELF + + +Although SELF uses unstructured mesh data structures, we have provided methods to create structured meshes and store them as an unstructured mesh. This can be quite useful as you are getting started with SELF or if the geometry for your problem can be defined using a structured mesh layout.
+ +## One Dimension (1-D) +In one dimension, the only mesh we use is a structured mesh. To generate a structured mesh in one dimension, use the `StructuredMesh` generic in the [`Mesh1D`](../ford/type/mesh1d.html) class. + +At the moment, only uniformly spaced structured meshes of elements can be generated. This means that all of the elements are of the same width; keep in mind that within each element, there is a quadrature grid. The points in the quadrature grid are spaced so that spectral accuracy is guaranteed. + +To generate a structured grid in 1-D, you need to provide the number of elements and the left and right endpoints of the mesh. + +In the example below, we create a 1-D mesh with 20 elements on the domain $x \in [0,1]$. The geometry fields are created from the mesh information and a $7^{th}$ degree interpolant through the Legendre-Gauss points. + + +```fortran + type(Mesh1D),target :: mesh + type(Lagrange),target :: interp + type(Geometry1D),target :: geometry + + call mesh % StructuredMesh(nElem=20, & + x=(/0.0_prec,1.0_prec/)) + + call interp % Init(N=7, controlNodeType=GAUSS, & + M=10, targetNodeType=UNIFORM) + + call geometry % Init(interp,mesh%nElem) + call geometry % GenerateFromMesh(mesh) + +``` + +Notice that initializing the geometry requires an interpolant and the number of elements as input. + +!!! note + Under the hood, the interpolant for the geometry (`geometry % interp`) is associated with a pointer to `interp`, i.e. `geometry % interp => interp`. + +Once the geometry is initialized, the physical positions and metric terms can be calculated and stored using the `GenerateFromMesh` method. + +## Two Dimensions (2-D) +To generate a structured mesh in two dimensions, use the `StructuredMesh` generic in the [`Mesh2D`](../ford/type/mesh2d.html) class. + +At the moment, only uniformly spaced structured meshes of elements can be generated.
This means that all of the elements are of the same width; keep in mind that within each element, there is a quadrature grid. The points in the quadrature grid are spaced so that spectral accuracy is guaranteed. + +SELF uses a tiled structured grid. Tiled grids divide the 2-D grid into `nTilex` × `nTiley` tiles of size `nxPerTile` × `nyPerTile`. The width and height of the elements are defined as `dx` and `dy`. With these parameters, + +* `nx = nTilex*nxPerTile` is the total number of elements in the x-direction +* `ny = nTiley*nyPerTile` is the total number of elements in the y-direction +* `nelem = nx*ny` is the total number of elements +* `Lx = dx*nx` is the domain length in the x-direction +* `Ly = dy*ny` is the domain length in the y-direction + +You can set boundary conditions for each of the four sides of the structured mesh using a 1-D array of integers of length 4. The boundary conditions must be provided in counter-clockwise order, starting with the "south" boundary (south, east, north, west). The following built-in flags are available for setting boundary conditions: + +* `SELF_BC_NONORMALFLOW` +* `SELF_BC_PRESCRIBED` +* `SELF_BC_RADIATION` + +The tiled layout is convenient for domain decomposition when you want to scale up your application for distributed memory platforms. You can further enable domain decomposition by setting the optional `enableDomainDecomposition` input to `.true.`. In this case, when you launch your application with `mpirun`, the domain will be automatically divided as evenly as possible across all MPI ranks. + +!!! note + It's good practice to set the total number of tiles equal to the number of MPI ranks that you are running with. Alternatively, you can use fairly small tiles when working with a large number of MPI ranks to increase the chance of minimizing point-to-point communications.
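Following the note above, matching ranks to tiles can be worked out before launching. The snippet below is a plain-shell illustration; the executable name `my_solver` is a placeholder, not part of SELF:

```shell
# One MPI rank per tile: a 2x2 tile layout pairs naturally with 4 ranks.
ntilex=2
ntiley=2
nranks=$((ntilex * ntiley))
echo "mpirun -np ${nranks} ./my_solver"
```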
In the example below, we create a 2-D mesh with the following attributes: + +* $2 \times 2$ tiles for the domain +* $10 \times 10$ elements per tile +* Each element has dimensions of $0.05 \times 0.05$. The domain dimensions are then $L_x \times L_y = 1 \times 1$ +* Domain decomposition is enabled + +The geometry fields are created from the mesh information and a $7^{th}$ degree interpolant through the Legendre-Gauss points. + + +```fortran + type(Mesh2D),target :: mesh + type(Lagrange),target :: interp + type(SEMQuad),target :: geometry + integer :: bcids(1:4) + + bcids(1:4) = [SELF_BC_PRESCRIBED,& ! south boundary + SELF_BC_PRESCRIBED,& ! east boundary + SELF_BC_PRESCRIBED,& ! north boundary + SELF_BC_PRESCRIBED] ! west boundary + + call mesh % StructuredMesh( nxPerTile=10, nyPerTile=10,& + nTileX=2, nTileY=2,& + dx=0.05_prec, dy=0.05_prec, & + bcids=bcids, & + enableDomainDecomposition=.true.) + + call interp % Init(N=7, controlNodeType=GAUSS, & + M=10, targetNodeType=UNIFORM) + + call geometry % Init(interp,mesh%nElem) + call geometry % GenerateFromMesh(mesh) + +``` + +Notice that initializing the geometry requires an interpolant and the number of elements as input. + +!!! note + Under the hood, the interpolant for the geometry (`geometry % interp`) is associated with a pointer to `interp`, i.e. `geometry % interp => interp`. + +Once the geometry is initialized, the physical positions and metric terms can be calculated and stored using the `GenerateFromMesh` method.
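As a quick sanity check on the element counts and domain lengths implied by the example above, the relations `nx = nTilex*nxPerTile` and `Lx = dx*nx` can be tallied by hand (plain shell arithmetic, unrelated to SELF's own API):

```shell
# 2 tiles of 10 elements per tile in each direction, each element 0.05 wide
nx=$((2 * 10))
ny=$((2 * 10))
nelem=$((nx * ny))
Lx=$(awk -v n="$nx" 'BEGIN { printf "%.2f", 0.05 * n }')
echo "nx=${nx} ny=${ny} nelem=${nelem} Lx=${Lx}"
```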
diff --git a/docs/MeshGeneration/img/spectral-element-mesh.png b/docs/MeshGeneration/img/spectral-element-mesh.png new file mode 100644 index 000000000..6d5b62d78 Binary files /dev/null and b/docs/MeshGeneration/img/spectral-element-mesh.png differ diff --git a/docs/Models/CompressibleNavierStokes.md b/docs/Models/CompressibleNavierStokes.md deleted file mode 100644 index dce2d797a..000000000 --- a/docs/Models/CompressibleNavierStokes.md +++ /dev/null @@ -1,74 +0,0 @@ -# Compressible Navier-Stokes - - -## Hydrostatic Adjustment - -Modeling compressible fluids in the presence of potential forces (such as gravity) often requires defining initial conditions that have already adjusted to the potential force. For some simple configurations, such as those with constant gravitational acceleration, it is easy to write down the fluid state for a hydrostatic compressible fluid. More complicated gravitational potentials pose a challenge. - -SELF's Compressible Ideal Gas modules come equipped with the `HydrostaticAdjustment` method, which can be used to compute a hydrostatic fluid state, given the potential function and an initial density and energy field. This method currently works by forward-stepping the equations of motion with an artifical momentum drag term until the fluid momentum reaches a specified tolerance. Ideally, we would solve this system using an implicit time stepping scheme. Because SELF currently only provides explicit time stepping schemes, we brute force our way to equilibrium, The method chooses a time step so that the maximum CFL number based on the maximum initial sound wave speed is 0.75. 
- - -### Choosing the artificial momentum drag -To explain how we choose the artificial momentum drag coefficient, consider the compressible euler equations, without the momentum advection terms and with the additional momentum drag - -\begin{align} - (\rho \vec{u})_t &= -\nabla p - C_d \rho \vec{u} \\ - \rho_t + \nabla \cdot ( \rho \vec{u} ) &= 0 -\end{align} - -Taking the divergence of the momentum equation and substituting into the time derivative of the mass conservation equation gives - -\begin{equation} -\rho_{tt} - \nabla^2 p - C_d \nabla (\rho \vec{u}) = 0 -\end{equation} - -which we can also write as - -\begin{equation} -\rho_{tt} - \nabla^2 p = - C_d \rho_t -\end{equation} - - -Using the equation of state, we can write the density as a function of the pressure $\rho = \rho(p)$. By applying the chain rule, we can write - -\begin{equation} -\rho_t = \rho_p p_{tt} -\end{equation} - -where we have made the assumption that $(\rho_p)_t = 0$. Using these asssumptions, we can write a single equation for the pressure - -\begin{equation} -p_{tt} - c^2 \nabla^2 p = - C_d p_t -\end{equation} - -where $c = (\rho_p)^{-1/2}$ is the speed of sound. 
Using Fourier solutions for the pressure - -\begin{equation} -p = \hat{p} e^{i(\vec{k}\cdot\vec{x} - \sigma t)} -\end{equation} - -we obtain the following dispersion relation - -\begin{equation} -\sigma^2 + i C_d \sigma - c^2 | \vec{k} |^2 = 0 -\end{equation} - -which has roots - -\begin{equation} -\sigma = \frac{1}{2}( -i C_d \pm \sqrt{ 4 c^2 |\vec{k}|^2 - C_d^2} ) -\end{equation} - -The frequency becomes purely complex, exhibiting no oscillatory motions, when - -\begin{equation} -C_d \geq 2 c ||\vec{k}|| -\end{equation} - -For a numerical method, the largest that the right hand side can be is when $\vec{k}$ is associated with the shortest resolvable wave mode; usually, this is the Nyquist mode, which has a wavelength of $2\Delta x$, so that - -\begin{equation} -C_d \geq \frac{c}{\Delta x} -\end{equation} - -From this brief analysis, we gain some insight into how the momentum drag can influence the evolution of sound waves.In adjusting the fluid to a hydrostatic state, imbalances in the potential forces and the pressure gradient force lead to an erruption of sound waves. The momentum drag acts to damp out these disturbances and we can choose the drag coefficient so that a wide range of wavelengths are damped, with no oscillation. We found a condition where all modes are damped; this corresponds to making the momentum drag as important as sound wave propagation at the grid scale. So long as the model is stepped forward in a stable manner, we can integrate the compressible equations until the fluid momentum is near zero. diff --git a/docs/Models/burgers-equation-model.md b/docs/Models/burgers-equation-model.md new file mode 100644 index 000000000..70946336a --- /dev/null +++ b/docs/Models/burgers-equation-model.md @@ -0,0 +1,187 @@ +# Viscous Burgers Equation + +## Definition +The [`SELF_Burgers1D_t` module](../ford/module/self_burgers1d_t.html) defines the [`Burgers1D_t` class](../ford/type/burgers1d_t.html).
In SELF, models are posed in the form of a generic conservation law + +\begin{equation} +\vec{s}_t + \nabla \cdot \overleftrightarrow{f} = \vec{q} +\end{equation} + +where $\vec{s}$ is a vector of solution variables, $\overleftrightarrow{f}$ is the conservative flux, and $\vec{q}$ are non-conservative source terms. + +For the Burgers equation in 1-D + +\begin{equation} +\vec{s} = s +\end{equation} + +\begin{equation} +\overleftrightarrow{f} = \frac{s^2}{2} \hat{x} +\end{equation} + +\begin{equation} +\vec{q} = 0 +\end{equation} + +To track stability of the Burgers equation in 1-D, the total entropy function is + +\begin{equation} +e = \int_x \frac{s^2}{2} \hspace{1mm} dx +\end{equation} + +## Implementation +The viscous Burgers equation model is implemented as a type extension of the [`DGModel1D` class](../ford/type/dgmodel1d_t.html). The [`Burgers1D_t` class](../ford/type/burgers1d_t.html) adds a parameter for the viscosity and overrides the `SetMetadata`, `entropy_func`, `flux1d`, and `riemannflux1d` type-bound procedures. + +### Riemann Solver +The `Burgers1D` class is defined using the conservative form of the conservation law. The Riemann solver for the hyperbolic part of the Burgers equation is the local Lax-Friedrichs upwind Riemann solver + +\begin{equation} +f_h^* = \frac{1}{2}( f_L + f_R + c_{max}(s_L - s_R)) +\end{equation} + +where +\begin{equation} +f_L = \frac{1}{2}s_L^2 +\end{equation} + +\begin{equation} +f_R = \frac{1}{2}s_R^2 +\end{equation} + +and +\begin{equation} +c_{max} = \max( |s_L|, |s_R| ) +\end{equation} + +The parabolic part of the flux (the viscous flux) is computed using the Bassi-Rebay flux, which computes the flux using an average of the gradient values on either side of the shared edge.
+ +\begin{equation} +f_p^* = -\frac{\nu}{2}\left( \frac{\partial s_L}{\partial x} + \frac{\partial s_R}{\partial x}\right) +\end{equation} + +The details for this implementation can be found in [`self_burgers1d_t.f90`](../ford/sourcefile/self_burgers1d_t.f90.html). + +### Boundary conditions +By default, the boundary conditions are periodic boundary conditions. When initializing the mesh for your Burgers equation solver, you can change the boundary conditions to `SELF_BC_RADIATION` to set the external state on model boundaries to 0 in the Riemann solver + +```fortran + +type(Mesh1D),target :: mesh + + ! Create a mesh using the built-in + ! uniform mesh generator. + ! The domain is set to x in [0,1] with 10 elements + call mesh%UniformBlockMesh(nGeo=1, & + nElem=10, & + x=(/0.0_prec,1.0_prec/)) + + ! Reset the boundary conditions to radiation + call mesh%ResetBoundaryConditionType(SELF_BC_RADIATION,SELF_BC_RADIATION) + +``` + +If you need to explicitly set the boundary conditions as a function of position and time, you can create a type-extension of the `Burgers1D` class and override the [`hbc1d_Prescribed`](../ford/proc/hbc1d_prescribed_model.html) and [`pbc1d_Prescribed`](../ford/proc/pbc1d_prescribed_model.html) boundary condition procedures. + +To make a type extension, you can first create a module that defines your model with the new type-bound procedures for the boundary conditions. + +```fortran +module my_burgers_model + +use self_burgers1d + +implicit none + + type,extends(Burgers1D) :: myModel + contains + procedure :: hbc1d_Prescribed => hbc1d_mymodel ! For the hyperbolic part + procedure :: pbc1d_Prescribed => pbc1d_mymodel ! For the parabolic part + end type myModel + + contains + pure function hbc1d_mymodel(this,x,t) result(exts) + !! Sets the external solution state at model boundaries + class(myModel),intent(in) :: this + real(prec),intent(in) :: x + real(prec),intent(in) :: t + real(prec) :: exts(1:this%nvar) + !
Local
+    integer :: ivar
+
+    do ivar = 1,this%nvar
+      exts(ivar) = 0.0_prec ! To do : replace with your external state
+                            ! as a function of space and time
+    enddo
+
+  endfunction hbc1d_mymodel
+
+  pure function pbc1d_mymodel(this,x,t) result(extDsdx)
+    !! Sets the external solution state derivative at model boundaries
+    class(myModel),intent(in) :: this
+    real(prec),intent(in) :: x
+    real(prec),intent(in) :: t
+    real(prec) :: extDsdx(1:this%nvar)
+    ! Local
+    integer :: ivar
+
+    do ivar = 1,this%nvar
+      extDsdx(ivar) = 0.0_prec ! To do : replace with the external solution
+                               ! derivative as a function of space and time
+    enddo
+
+  endfunction pbc1d_mymodel
+
+end module my_burgers_model
+```
+
+In your program, you can then use your new class. It inherits all of the features and type-bound procedures of the `Burgers1D` class, but enforces your boundary conditions. The snippet below shows the steps required to instantiate your model:
+
+```fortran
+program run_my_model
+
+use my_burgers_model
+
+implicit none
+
+  integer,parameter :: nvar = 1 ! The Burgers equation has a single solution variable
+
+  type(mymodel) :: modelobj
+  type(Lagrange),target :: interp
+  type(Mesh1D),target :: mesh
+  type(Geometry1D),target :: geometry
+
+  call mesh % UniformBlockMesh(nGeo=1, &
+                               nElem=10, &
+                               x=(/0.0_prec,1.0_prec/))
+
+  ! Set the left and right boundary conditions to prescribed
+  call mesh % ResetBoundaryConditionType(SELF_BC_PRESCRIBED,SELF_BC_PRESCRIBED)
+
+  ! Create a 7th degree polynomial interpolant
+  ! with Legendre-Gauss quadrature.
+  ! The target grid for plotting is 12 uniformly spaced
+  ! points within each element.
+  call interp % Init(N=7, &
+                     controlNodeType=GAUSS, &
+                     M=12, &
+                     targetNodeType=UNIFORM)
+
+  ! Generate geometry (metric terms) from the mesh elements
+  call geometry % Init(interp,mesh%nElem)
+  call geometry % GenerateFromMesh(mesh)
+
+  ! Initialize the model
+  call modelobj % Init(nvar,mesh,geometry)
+
+  ! Enable gradient calculations
+  ! so that we can compute diffusive fluxes
+  modelobj % gradient_enabled = .true.
+
+  ! To do : Set the initial conditions
+  ! 
To do : Forward step the model
+
+end program run_my_model
+```
+
+
+
+## Example usage
+
+For examples, see any of the following
+
+* [`examples/burgers1d_shock.f90`](https://github.com/FluidNumerics/SELF/blob/main/examples/burgers1d_shock.f90) - Implements the traveling viscous shock problem
\ No newline at end of file
diff --git a/docs/Models/generic-dg-models.md b/docs/Models/generic-dg-models.md
new file mode 100644
index 000000000..e69de29bb
diff --git a/docs/Testing/Overview.md b/docs/Testing/Overview.md
deleted file mode 100644
index 030a61d92..000000000
--- a/docs/Testing/Overview.md
+++ /dev/null
@@ -1,9 +0,0 @@
-# Testing
-
-The SELF repository provides mechanisms for you to be able to build and test SELF on your own system. The same mechanisms are also replicated in Google Cloud so that tests can be run on a variety of platforms before changes are made to the main branch.
-
-
-## Pull Requests
-SELF is tested when pull requests are submitted to update the main branch and after a repository owner approves a build. Tests are run using Google Cloud Build; the workflow is defined in `ci/cloudbuild.yaml`. The process builds a docker container image by installing SELF and its dependencies, including ROCm, CUDA, HDF5, FLAP, JSON-Fortran, and OpenMPI.
-
-Once built, the docker image is pushed to Google Container registry and the docker image is tested using Google Cloud Batch. Batch provides access to a variety of compute resources, including GPUs. This enables us to test serial, MPI, and GPU accelerated components of SELF.
diff --git a/docs/Tutorials/AdvectionDiffusion2d.md b/docs/Tutorials/AdvectionDiffusion2d.md deleted file mode 100644 index 7b8461e0b..000000000 --- a/docs/Tutorials/AdvectionDiffusion2d.md +++ /dev/null @@ -1 +0,0 @@ -# Run an advection diffusion model in 2-D \ No newline at end of file diff --git a/docs/Examples/BurgersEquation1D/TravelingShock.md b/docs/Tutorials/BurgersEquation1D/TravelingShock.md similarity index 100% rename from docs/Examples/BurgersEquation1D/TravelingShock.md rename to docs/Tutorials/BurgersEquation1D/TravelingShock.md diff --git a/docs/Examples/BurgersEquation1D/img/shock1d.gif b/docs/Tutorials/BurgersEquation1D/img/shock1d.gif similarity index 100% rename from docs/Examples/BurgersEquation1D/img/shock1d.gif rename to docs/Tutorials/BurgersEquation1D/img/shock1d.gif diff --git a/docs/Examples/BurgersEquation1D/img/u_t00.png b/docs/Tutorials/BurgersEquation1D/img/u_t00.png similarity index 100% rename from docs/Examples/BurgersEquation1D/img/u_t00.png rename to docs/Tutorials/BurgersEquation1D/img/u_t00.png diff --git a/docs/Tutorials/CreateYourOwnModel.md b/docs/Tutorials/CreateYourOwnModel.md new file mode 100644 index 000000000..e69de29bb diff --git a/docs/Examples/LinearShallowWater/RossbyWave.md b/docs/Tutorials/LinearShallowWater/RossbyWave.md similarity index 100% rename from docs/Examples/LinearShallowWater/RossbyWave.md rename to docs/Tutorials/LinearShallowWater/RossbyWave.md diff --git a/docs/Examples/LinearShallowWater/rossbywave_day10.png b/docs/Tutorials/LinearShallowWater/rossbywave_day10.png similarity index 100% rename from docs/Examples/LinearShallowWater/rossbywave_day10.png rename to docs/Tutorials/LinearShallowWater/rossbywave_day10.png diff --git a/docs/Examples/LinearShallowWater/rossbywave_initialcondition.png b/docs/Tutorials/LinearShallowWater/rossbywave_initialcondition.png similarity index 100% rename from docs/Examples/LinearShallowWater/rossbywave_initialcondition.png rename to 
docs/Tutorials/LinearShallowWater/rossbywave_initialcondition.png diff --git a/docs/Tutorials/docker/lswGravityWave.md b/docs/Tutorials/docker/lswGravityWave.md deleted file mode 100644 index acf9f3c1f..000000000 --- a/docs/Tutorials/docker/lswGravityWave.md +++ /dev/null @@ -1,36 +0,0 @@ -# (Tutorial) Linear Shallow Water - Gravity Wave Release demo with Docker - -In this tutorial, you will learn - -* Some basic gravity wave theory -* How to build the SELF Docker image -* How to run the Linear Shallow Water model using Docker -* How to visualize the model output using pyself -* How to modify settings for the Linear Shallow Water simulations - - -## Simulation Description - -## 0. Pre-requisites -This tutorial assumes that you are working on a Linux or MacOS workstation and that you are comfortable working in a terminal. - -Additionally, you will need to have the following on your system: - -* [Docker]() needs to be installed on your system and configured so that you can build and run Docker images -* [git]() needs to be installed on your system so that you can clone the SELF repository onto your local system -* (Optional) [AMD GPU](https://rocm.docs.amd.com/en/latest/release/gpu_os_support.html) or Nvidia GPU, if you want to run the model with GPU acceleration. - - -## 1. Build the SELF Docker Image - -## 2. Starting from an example input deck - -## 3. Select your time integration scheme - -## 4. Setting initial conditions and fluid properties - -## 5. Run the gravity wave example - -## 6. Visualize the model output - - diff --git a/docs/ford/index.html b/docs/ford/index.html index 3a6baa8e6..93a67b1d1 100644 --- a/docs/ford/index.html +++ b/docs/ford/index.html @@ -119,16 +119,16 @@
[FORD-generated index.html markup elided: the Source Files, Modules, and Derived Types navigation listings are updated — in the Derived Types index, the `DGModel1D` entry is removed and `Burgers1D` is added alongside the `advection_diffusion_3d` types.]