\n",
+ "Click here for the solution
\n",
+ " \n",
+ "Create a new case i.day5.a with the command:\n",
+ "```\n",
+ "cd /glade/work/$USER/code/my_cesm_code/cime/scripts\n",
+ "./create_newcase --case ~/cases/i.day5.a --compset I2000Clm50Sp --res f09_g17_gl4 --run-unsupported\n",
+ "```\n",
+ "
\n",
+ "\n",
+ "Case setup:\n",
+ "``` \n",
+ "cd ~/cases/i.day5.a \n",
+ "./case.setup\n",
+ "```\n",
+ "
\n",
+ "\n",
+ "Change the clm namelist using user_nl_clm by adding the following lines:\n",
+ "``` \n",
+ "hist_nhtfrq = -24\n",
+ "hist_mfilt = 6\n",
+ "```\n",
+ "
\n",
+ " \n",
+ "Check the namelist by running:\n",
+ "``` \n",
+ "./preview_namelists\n",
+ "```\n",
+ "
\n",
+ "\n",
+ "If needed, change job queue, account number, or wallclock time. \n",
+ "For instance:\n",
+ "``` \n",
+ "./xmlchange JOB_QUEUE=regular,PROJECT=UESM0013,JOB_WALLCLOCK_TIME=0:15:00\n",
+ "```\n",
+ "
\n",
+ "\n",
+ "Build and submit:\n",
+ "```\n",
+ "qcmd -- ./case.build\n",
+ "./case.submit\n",
+ "```\n",
+ "
\n",
+ "\n",
+ "When the run is completed, look into the archive directory for: \n",
+ "i.day5.a. \n",
+ " \n",
+ "(1) Check that your archive directory on derecho (The path will be different on other machines): \n",
+ "```\n",
+ "cd /glade/derecho/scratch/$USER/archive/i.day5.a/lnd/hist\n",
+ "\n",
+ "ls \n",
+ "```\n",
+ "
\n",
+ "\n",
+ "(2) Look at the output using ncview\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "17cf7e19-1211-45f2-97cd-fe2badc69dac",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "CMIP6 2019.10",
+ "language": "python",
+ "name": "cmip6-201910"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.10"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
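The `hist_nhtfrq = -24` / `hist_mfilt = 6` settings in the notebook above control how CLM packs history output. A small sketch of the arithmetic, assuming the standard CLM convention that a negative `hist_nhtfrq` is an averaging period in hours and `hist_mfilt` is the number of time samples per file:

```python
import math

def history_files(run_days, hist_nhtfrq=-24, hist_mfilt=6):
    """Return (time samples written, history files produced) for a run of run_days days."""
    if hist_nhtfrq >= 0:
        raise ValueError("this sketch only handles negative (time-averaged) settings")
    hours_per_sample = -hist_nhtfrq              # -24 -> one daily mean per sample
    samples = run_days * 24 // hours_per_sample  # total samples over the run
    files = math.ceil(samples / hist_mfilt)      # hist_mfilt samples packed per file
    return samples, files

print(history_files(5))   # -> (5, 1)
```

So the 5-day run produces 5 daily means that all fit in one history file; a longer run rolls over into a new file every 6 samples.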
diff --git a/_sources/notebooks/challenge/clm_ctsm/clm_exercise_2.ipynb b/_sources/notebooks/challenge/clm_ctsm/clm_exercise_2.ipynb
new file mode 100644
index 000000000..d5651c9b9
--- /dev/null
+++ b/_sources/notebooks/challenge/clm_ctsm/clm_exercise_2.ipynb
@@ -0,0 +1,188 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# 2: Use the BGC model"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0037b73f-f174-48e7-8e4f-0744d7d23fe0",
+ "metadata": {},
+ "source": [
+ "We can use a different I compset: IHistClm50BgcCrop. This experiment is a 20th century transient run using GSWP3v1 and the biogeochemistry model including crops."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "bdd131c8-d1ec-4568-81dd-701f8bdbe6cb",
+ "metadata": {},
+ "source": [
+ "![icase](../../../images/challenge/ihist.png)\n",
+ "\n",
+ "* Figure: IHIST compset definition.
*"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b90d4773-7ca0-4131-ab07-517608a3e976",
+ "metadata": {},
+ "source": [
+ "\n",
+ "Exercise: Run an experimental case with prognostic BGC
\n",
+ " \n",
+ "Create a case called **i.day5.b** using the compset `IHistClm50BgcCrop` at `f09_g17_gl4` resolution. \n",
+ " \n",
+ "Set the run length to **5 days**. \n",
+ "\n",
+ "Build and run the model.\n",
+ " \n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f639e182-f48a-431c-a594-9c34323417eb",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ " \n",
+ "\n",
+ "Click here for the solution
\n",
+ " \n",
+ "Create a new case i.day5.b :\n",
+ "```\n",
+ "cd /glade/work/$USER/code/my_cesm_code/cime/scripts\n",
+ "./create_newcase --case ~/cases/i.day5.b --compset IHistClm50BgcCrop --res f09_g17_gl4 --run-unsupported\n",
+ "```\n",
+ "
\n",
+ "\n",
+ "Case setup:\n",
+ "``` \n",
+ "cd ~/cases/i.day5.b\n",
+ "./case.setup\n",
+ "```\n",
+ "
\n",
+ "\n",
+ "Note differences between this case and the control case:\n",
+ "``` \n",
+ "diff env_run.xml ../i.day5.a/env_run.xml\n",
+ "```\n",
+ "
\n",
+ "\n",
+ "Change the clm namelist using user_nl_clm by adding the following lines:\n",
+ "``` \n",
+ "hist_nhtfrq = -24\n",
+ "hist_mfilt = 6\n",
+ "```\n",
+ "
\n",
+ " \n",
+ "Check the namelist by running:\n",
+ "``` \n",
+ "./preview_namelists\n",
+ "```\n",
+ "
\n",
+ "\n",
+ "If needed, change job queue, account number, or wallclock time. \n",
+ "For instance:\n",
+ "``` \n",
+ "./xmlchange JOB_QUEUE=regular,PROJECT=UESM0013,JOB_WALLCLOCK_TIME=0:15:00\n",
+ "```\n",
+ "
\n",
+ "\n",
+ "Build case:\n",
+ "```\n",
+ "qcmd -- ./case.build\n",
+ "```\n",
+ "
\n",
+ " \n",
+ "Compare the namelists from the two experiments:\n",
+ "```\n",
+ "diff CaseDocs/lnd_in ../i.day5.a/CaseDocs/lnd_in\n",
+ "```\n",
+ "
\n",
+ " \n",
+ "Submit case:\n",
+ "```\n",
+ "./case.submit\n",
+ "```\n",
+ "
\n",
+ "\n",
+ "When the run is completed, look into the archive directory for: \n",
+ "i.day5.b. \n",
+ " \n",
+ "(1) Check that your archive directory on derecho (The path will be different on other machines): \n",
+ "```\n",
+ "cd /glade/derecho/scratch/$USER/archive/i.day5.b/lnd/hist\n",
+ "\n",
+ "ls \n",
+ "```\n",
+ "
\n",
+ "\n",
+ " \n",
+ "(2) Compare to control run:\n",
+ "```\n",
+ "ncdiff -v TLAI i.day5.b.clm2.XXX.nc /glade/derecho/scratch/$USER/archive/i.day5.a/lnd/hist/i.day5.a.clm2.XXX.nc i_diff.nc\n",
+ "\n",
+ "ncview i_diff.nc\n",
+ "```\n",
+ "\n",
+ "\n",
+ " \n",
+ "
\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d69c456a-fdc6-4625-bbcc-ed32ab6ae8e8",
+ "metadata": {
+ "tags": []
+ },
+ "source": [
+ "## Test your understanding"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b3f28ecf-02b8-4cc0-bdc4-c36c8fc9e7aa",
+ "metadata": {},
+ "source": [
+ "- What changes do you see from the control case with the prognostic BGC?\n",
+ "- ... OTHERS?"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "5b5a8cee-0ca9-4076-a731-e6ec200b70d4",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "CMIP6 2019.10",
+ "language": "python",
+ "name": "cmip6-201910"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.10"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
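The `ncdiff -v TLAI` step in the solution above computes an element-wise difference of one variable between two files on matching grids. A minimal Python stand-in for that operation, using small hypothetical arrays rather than real lat/lon history fields:

```python
def field_diff(a, b):
    """Element-wise difference of two 2-D fields, like ncdiff on one variable."""
    if len(a) != len(b) or any(len(ra) != len(rb) for ra, rb in zip(a, b)):
        raise ValueError("grids must match")
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

tlai_bgc     = [[1.5, 2.0], [0.0, 3.0]]   # hypothetical TLAI from i.day5.b
tlai_control = [[1.0, 2.0], [0.5, 2.0]]   # hypothetical TLAI from i.day5.a
print(field_diff(tlai_bgc, tlai_control))  # -> [[0.5, 0.0], [-0.5, 1.0]]
```

The difference file you then open in ncview holds exactly this kind of field, so regions where BGC and the control diverge show up as nonzero values.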
diff --git a/_sources/notebooks/challenge/clm_ctsm/clm_exercise_3.ipynb b/_sources/notebooks/challenge/clm_ctsm/clm_exercise_3.ipynb
new file mode 100644
index 000000000..0b1e84c14
--- /dev/null
+++ b/_sources/notebooks/challenge/clm_ctsm/clm_exercise_3.ipynb
@@ -0,0 +1,197 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# 3: Modify input data"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0037b73f-f174-48e7-8e4f-0744d7d23fe0",
+ "metadata": {},
+ "source": [
+ "We can modify the input to CLM by changing one of the plant functional type properties. We will then compare these results with the control experiment.\n",
+ "\n",
+ "Note that you will need to change a netcdf file for this exercise. Because netcdf are in binary format you will need a type of script or interperter to read the file and write it out again. (e.g. ferret, IDL, NCL, NCO, Perl, Python, Matlab, Yorick). Below in the solution we will show how to do this using NCO.\n",
+ "\n",
+ "NOTE: For any tasks other than setting up, building, submitting cases you should probably do these tasks on the Data Visualization Cluster - casper, and not on the derecho login nodes."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b90d4773-7ca0-4131-ab07-517608a3e976",
+ "metadata": {},
+ "source": [
+ "\n",
+ "Exercise: Run an experimental case
\n",
+ " \n",
+ "Create a case called **i.day5.a_pft** using the compset `I2000Clm50Sp` at `f09_g17_gl4` resolution. \n",
+ "\n",
+ "Look at variable “rholvis” in the forcing file using ncview or ncdump –v rholvis. This is the visible leaf reflectance for every pft. Modify the rholvis parameter to .\n",
+ "`/glade/campaign/cesm/cesmdata/cseg/inputdata/lnd/clm2/paramdata/clm5_params.c171117.nc`\n",
+ " \n",
+ "Set the run length to **5 days**. \n",
+ "\n",
+ "Build and run the model.\n",
+ " \n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f639e182-f48a-431c-a594-9c34323417eb",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ " \n",
+ "\n",
+ "Click here for the solution
\n",
+ " \n",
+ "Create a clone from the control experiment i.day5.a_pft :\n",
+ "```\n",
+ "cd /glade/work/$USER/code/my_cesm_code/cime/scripts\n",
+ "./create_clone --case ~/cases/i.day5.a_pft --clone ~/cases/i.day5.a\n",
+ "```\n",
+ "
\n",
+ "\n",
+ "Modify the rholvis parameter in the physiology file:\n",
+ "``` \n",
+ "cd /glade/derecho/scratch/$USER\n",
+ "cp /glade/campaign/cesm/cesmdata/cseg/inputdata/lnd/clm2/paramdata/clm5_params.c171117.nc .\n",
+ "chmod u+w clm5_params.c171117.nc\n",
+ "cp clm5_params.c171117.nc clm5_params.c171117.new.nc\n",
+ "ncap2 -A -v -s 'rholvis(4)=0.4' clm5_params.c171117.nc clm5_params.c171117.new.nc\n",
+ "```\n",
+ "
\n",
+ "\n",
+ "Check the new rholvis parameter to be sure the modification worked:\n",
+ "``` \n",
+ "ncdump -v rholvis clm5_params.c171117.new.nc\n",
+ "# and compare it to the original file\n",
+ "ncdiff clm5_params.c171117.nc clm5_params.c171117.new.nc ncdiff.nc\n",
+ "ncdump -v rholvis ncdiff.nc\n",
+ "```\n",
+ "
\n",
+ " \n",
+ "Case setup:\n",
+ "``` \n",
+ "cd ~/cases/i.day5.a_pft\n",
+ "./case.setup\n",
+ "```\n",
+ "
\n",
+ "\n",
+ "Change the clm namelist using user_nl_clm to point at the modified file. Add the following line:\n",
+ "``` \n",
+ "paramfile = '/glade/derecho/scratch/$USER/clm5_params.c171117.new.nc' \n",
+ "```\n",
+ "
\n",
+ " \n",
+ "Check the namelist by running:\n",
+ "``` \n",
+ "./preview_namelists\n",
+ "```\n",
+ "
\n",
+ "\n",
+ "If needed, change job queue, account number, or wallclock time. \n",
+ "For instance:\n",
+ "``` \n",
+ "./xmlchange JOB_QUEUE=regular,PROJECT=UESM0013,JOB_WALLCLOCK_TIME=0:15:00\n",
+ "```\n",
+ "
\n",
+ "\n",
+ "Build case:\n",
+ "```\n",
+ "qcmd -- ./case.build\n",
+ "```\n",
+ "
\n",
+ " \n",
+ "Compare the namelists from the two experiments:\n",
+ "```\n",
+ "diff CaseDocs/lnd_in ../i.day5.a/CaseDocs/lnd_in\n",
+ "```\n",
+ "
\n",
+ " \n",
+ "Submit case:\n",
+ "```\n",
+ "./case.submit\n",
+ "```\n",
+ "
\n",
+ "\n",
+ "When the run is completed, look into the archive directory for: \n",
+ "i.day5.a. \n",
+ " \n",
+ "(1) Check that your archive directory on derecho (The path will be different on other machines): \n",
+ "```\n",
+ "cd /glade/derecho/scratch/$USER/archive/i.day5.a_pft/lnd/hist\n",
+ "\n",
+ "ls \n",
+ "```\n",
+ "
\n",
+ "\n",
+ "(2) Compare to control run:\n",
+ "```\n",
+ "ncdiff i.day5.a_pft.clm2.XXX.nc /glade/derecho/scratch/$USER/archive/i.day5.a/lnd/hist/i.day5.a.clm2.XXX.nc i_diff.nc\n",
+ "\n",
+ "ncview i_diff.nc\n",
+ "```\n",
+ "\n",
+ "\n",
+ " \n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b3ffd3cc-676e-4e7c-9ff4-cd65d4745397",
+ "metadata": {
+ "tags": []
+ },
+ "source": [
+ "## Test your understanding"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "03ac3664-5360-45d7-a3ad-0797a839a1d3",
+ "metadata": {},
+ "source": [
+ "- How did rholvis change (increase/decrease)? Given this, what do you expect the model response to be?\n",
+ "- What changes do you see from the control case with the modified rholvis parameter?\n",
+ "- ... OTHERS? "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "27f332b2-5799-43a8-9060-50315ebdf6dc",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "CMIP6 2019.10",
+ "language": "python",
+ "name": "cmip6-201910"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.10"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
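The `ncap2 -s 'rholvis(4)=0.4'` command in the solution above overwrites a single element of a per-PFT parameter array and leaves the rest untouched. A toy Python equivalent of that edit (the array values here are hypothetical, not the real `clm5_params` contents):

```python
def set_pft_param(values, index, new_value):
    """Return a copy of a per-PFT parameter array with one element replaced,
    analogous to writing the ncap2 result to a new file."""
    out = list(values)       # copy so the "original file" is untouched
    out[index] = new_value
    return out

rholvis = [0.11, 0.11, 0.10, 0.10, 0.11, 0.10]  # hypothetical per-PFT values
print(set_pft_param(rholvis, 4, 0.4))           # -> [0.11, 0.11, 0.1, 0.1, 0.4, 0.1]
print(rholvis[4])                               # -> 0.11 (original unchanged)
```

This mirrors why the solution copies `clm5_params.c171117.nc` to a `.new.nc` file first: the edit targets one index while the source file stays pristine for the `ncdiff` check.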
diff --git a/_sources/notebooks/challenge/paleo/exercise_1.ipynb b/_sources/notebooks/challenge/paleo/exercise_1.ipynb
new file mode 100644
index 000000000..cca93f3b8
--- /dev/null
+++ b/_sources/notebooks/challenge/paleo/exercise_1.ipynb
@@ -0,0 +1,219 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# 1: Preindustrial control case\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b90d4773-7ca0-4131-ab07-517608a3e976",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "Exercise: Run a preindustrial control simulation
\n",
+ " \n",
+ "Create, configure, build and run a fully coupled preindustrial case called ``b.e21.B1850.f19_g17.piControl.001`` following [CESM naming conventions](https://www.cesm.ucar.edu/models/cesm2/naming-conventions). \n",
+ "\n",
+ "Run for 1 year. \n",
+ "\n",
+ "
\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "65b2cbda-2d54-48ee-898b-4c391f16ca79",
+ "metadata": {},
+ "source": [
+ "\n",
+ " \n",
+ "\n",
+ "\n",
+ " Click here for hints
\n",
+ "
\n",
+ "\n",
+ "**What is the compset for fully coupled preindustrial?**\n",
+ "\n",
+ "- ``B1850`` \n",
+ "\n",
+ "**What is the resolution for B1850?**\n",
+ "\n",
+ "- Use resolution f19_g17 for fast throughput \n",
+ "\n",
+ "**Which XML variable should you change to tell the model to run for one year?**\n",
+ "\n",
+ "- Use ``STOP_OPTION`` and ``STOP_N`` \n",
+ "\n",
+ " \n",
+ "
\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7dd602b7-372d-4f36-b6d1-df8e22ba1646",
+ "metadata": {},
+ "source": [
+ "\n",
+ " \n",
+ "\n",
+ "Click here for the solution
\n",
+ " \n",
+ " \n",
+ "**# Set environment variables** \n",
+ "\n",
+ "Set environment variables with the commands:\n",
+ " \n",
+ "**For tcsh users** \n",
+ " \n",
+ "```\n",
+ "set CASENAME=b.e21.B1850.f19_g17.piControl.001\n",
+ "set CASEDIR=/glade/u/home/$USER/cases/$CASENAME\n",
+ "set RUNDIR=/glade/derecho/scratch/$USER/$CASENAME/run\n",
+ "set COMPSET=B1850\n",
+ "set RESOLUTION=f19_g17\n",
+ "set PROJECT=UESM0013\n",
+ "```\n",
+ "\n",
+ "Note: You should use the project number given for this tutorial.\n",
+ "\n",
+ "**For bash users** \n",
+ " \n",
+ "```\n",
+ "export CASENAME=b.e21.B1850.f19_g17.piControl.001\n",
+ "export CASEDIR=/glade/u/home/$USER/cases/$CASENAME\n",
+ "export RUNDIR=/glade/derecho/scratch/$USER/$CASENAME/run\n",
+ "export COMPSET=B1850\n",
+ "export RESOLUTION=f19_g17\n",
+ "export PROJECT=UESM0013\n",
+ "```\n",
+ "\n",
+ "Note: You should use the project number given for this tutorial.\n",
+ "\n",
+ "**# Make a case directory**\n",
+ "\n",
+ "If needed create a directory `cases` into your home directory:\n",
+ " \n",
+ "```\n",
+ "mkdir /glade/u/home/$USER/cases/\n",
+ "```\n",
+ " \n",
+ "\n",
+ "**# Create a new case**\n",
+ "\n",
+ "Create a new case with the command ``create_newcase``:\n",
+ "```\n",
+ "cd /glade/work/$USER/code/my_cesm_code/cime/scripts/\n",
+ "./create_newcase --case $CASEDIR --res $RESOLUTION --compset $COMPSET --project $PROJECT\n",
+ "```\n",
+ "\n",
+ "**# Change the job queue**\n",
+ "\n",
+ "If needed, change ``job queue``.
\n",
+ "For instance, to run in the queue ``main``.\n",
+ "``` \n",
+ "cd $CASEDIR\n",
+ "./xmlchange JOB_QUEUE=main\n",
+ "```\n",
+ "This step can be redone at anytime in the process. \n",
+ "\n",
+ "**# Setup**\n",
+ "\n",
+ "Invoke ``case.setup`` with the command:\n",
+ "``` \n",
+ "cd $CASEDIR\n",
+ "./case.setup \n",
+ "``` \n",
+ "\n",
+ "You build the namelists with the command:\n",
+ "```\n",
+ "./preview_namelists\n",
+ "```\n",
+ "This step is optional as the script ``preview_namelists`` is automatically called by ``case.build`` and ``case.submit``. But it is nice to check that your changes made their way into:\n",
+ "```\n",
+ "$CASEDIR/CaseDocs/atm_in\n",
+ "```\n",
+ "\n",
+ "\n",
+ "**# Set run length**\n",
+ "\n",
+ "```\n",
+ "./xmlchange STOP_N=1,STOP_OPTION=nyears\n",
+ "```\n",
+ "\n",
+ "**# Build and submit**\n",
+ "\n",
+ "```\n",
+ "qcmd -A $PROJECT -- ./case.build\n",
+ "./case.submit\n",
+ "```\n",
+ "------------\n",
+ "\n",
+ "**# Check on your run**\n",
+ "\n",
+ "\n",
+ "After submitting the job, use ``qstat -u $USER`` to check the status of your job. \n",
+ "It may take ~16 minutes to finish the one-year simulation. \n",
+ "\n",
+ "**# Check your solution**\n",
+ "\n",
+ "When the run is completed, look at the history files into the archive directory. \n",
+ " \n",
+ "(1) Check that your archive directory on derecho (The path will be different on other machines): \n",
+ "```\n",
+ "cd /glade/derecho/scratch/$USER/archive/$CASENAME/atm/hist\n",
+ "ls \n",
+ "```\n",
+ "\n",
+ "As your run is one-year, there should be 12 monthly files (``h0``) for each model component. \n",
+ "\n",
+ "\n",
+ "Success! Now let's look back into the past... \n",
+ "\n",
+ " \n",
+ "
\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "472131c7-88f9-4863-a2bc-d7364333542d",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "815be0bc-515a-474b-a3dd-b7ba02831b9a",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.13"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
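After the one-year run above completes, `atm/hist` should hold 12 monthly `h0` files. A quick sketch of the names to expect, assuming the usual `case.cam.h0.YYYY-MM.nc` naming pattern for CAM monthly output:

```python
# Generate the monthly (h0) CAM history file names expected for a one-year run.
case = "b.e21.B1850.f19_g17.piControl.001"
files = [f"{case}.cam.h0.0001-{month:02d}.nc" for month in range(1, 13)]

print(len(files))   # -> 12
print(files[0])     # -> b.e21.B1850.f19_g17.piControl.001.cam.h0.0001-01.nc
```

If `ls` in the archive directory shows fewer than 12 such files, the run likely stopped early; check the job status and the log files before moving on.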
diff --git a/_sources/notebooks/challenge/paleo/exercise_2.ipynb b/_sources/notebooks/challenge/paleo/exercise_2.ipynb
new file mode 100644
index 000000000..94549c8a6
--- /dev/null
+++ b/_sources/notebooks/challenge/paleo/exercise_2.ipynb
@@ -0,0 +1,316 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# 2: mid-Holocene case \n",
+ "\n",
+ "The Holocene Epoch started ~11,700 before present (11.7 ka BP) and is the current period of geologic time. \n",
+ "\n",
+ "Although humans were already well established before the Holocene, this period of time is also referred to as the Anthropocene Epoch because its primary characteristic is the global changes caused by human activity.\n",
+ "\n",
+ "The Holocene is an interglacial period, marked by receding ice sheets and rising greenhouse gases that were accompanied by changes in the Earth's orbit around the Sun. \n",
+ "\n",
+ "Today, we will use CESM to investigate influence of Holocene orbital forcing on climate. \n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b90d4773-7ca0-4131-ab07-517608a3e976",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "Exercise: Run a mid-Holocene simulation with orbital forcing
\n",
+ " \n",
+ "Create, configure, build and run a fully coupled mid-Holocene (~6 ka BP) case called ``b.e21.B1850.f19_g17.midHolocene.001`` following [CESM naming conventions](https://www.cesm.ucar.edu/models/cesm2/naming-conventions). \n",
+ "\n",
+ "Run for 1 year. \n",
+ "\n",
+ "Compare and visualize differences between preindustrial and mid-Holocene runs using NCO and Ncview. \n",
+ "\n",
+ "
\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "65b2cbda-2d54-48ee-898b-4c391f16ca79",
+ "metadata": {},
+ "source": [
+ "\n",
+ " \n",
+ "\n",
+ "\n",
+ " Click here for hints
\n",
+ "
\n",
+ "\n",
+ "**What is the compset for fully coupled mid-Holocene run?**\n",
+ "\n",
+ "- Use ``B1850`` and modify preindustrial orbital configuration (no mid-Holocene compset available) \n",
+ "\n",
+ "**What is the resolution for B1850?**\n",
+ "\n",
+ "- Use resolution ``f19_g17`` for fast throughput \n",
+ "\n",
+ "**What was the orbital configuration 6 ka BP?**\n",
+ "\n",
+ "- According to Table 1 of [Otto-Bliesner et al., (2017)](chrome-extension://efaidnbmnnnibpcajpcglclefindmkaj/https://gmd.copernicus.org/articles/10/3979/2017/gmd-10-3979-2017.pdf), Eccentricity = 0.018682, Obliquity (degrees) = 24.105, Perihelion = 0.87 (for simplicity, we don't consider the other forcings here, i.e., CO2) \n",
+ "\n",
+ "**How to modify orbital configuration in CESM world?**\n",
+ "\n",
+ "- Edit ``user_nl_cpl`` \n",
+ "- orb_mode = 'fixed_parameters' \n",
+ "- orb_eccen = 0.018682 \n",
+ "- orb_obliq = 24.105 \n",
+ "- orb_mvelp = 0.87 \n",
+ "\n",
+ " \n",
+ "
\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7dd602b7-372d-4f36-b6d1-df8e22ba1646",
+ "metadata": {},
+ "source": [
+ "\n",
+ " \n",
+ "\n",
+ "Click here for the solution
\n",
+ " \n",
+ " \n",
+ "**# Set environment variables** \n",
+ "\n",
+ "Set environment variables with the commands:\n",
+ " \n",
+ "**For tcsh users** \n",
+ " \n",
+ "```\n",
+ "set CASENAME=b.e21.B1850.f19_g17.midHolocene.001\n",
+ "set CASEDIR=/glade/u/home/$USER/cases/$CASENAME\n",
+ "set RUNDIR=/glade/derecho/scratch/$USER/$CASENAME/run\n",
+ "set COMPSET=B1850\n",
+ "set RESOLUTION=f19_g17\n",
+ "set PROJECT=UESM0013\n",
+ "```\n",
+ "\n",
+ "You should use the project number given for this tutorial.\n",
+ "\n",
+ "**For bash users** \n",
+ " \n",
+ "```\n",
+ "export CASENAME=b.e21.B1850.f19_g17.midHolocene.001\n",
+ "export CASEDIR=/glade/u/home/$USER/cases/$CASENAME\n",
+ "export RUNDIR=/glade/derecho/scratch/$USER/$CASENAME/run\n",
+ "export COMPSET=B1850\n",
+ "export RESOLUTION=f19_g17\n",
+ "export PROJECT=UESM0013\n",
+ "```\n",
+ "\n",
+ "You should use the project number given for this tutorial.\n",
+ "\n",
+ "**# Make a case directory**\n",
+ "\n",
+ "If needed create a directory `cases` into your home directory:\n",
+ " \n",
+ "```\n",
+ "mkdir /glade/u/home/$USER/cases/\n",
+ "```\n",
+ " \n",
+ "\n",
+ "**# Create a new case**\n",
+ "\n",
+ "Create a new case with the command ``create_newcase``:\n",
+ "```\n",
+ "cd /glade/work/$USER/code/my_cesm_code/cime/scripts/\n",
+ "./create_newcase --case $CASEDIR --res $RESOLUTION --compset $COMPSET --project $PROJECT\n",
+ "```\n",
+ "\n",
+ "**# Change the job queue**\n",
+ "\n",
+ "If needed, change ``job queue``.
\n",
+ "For instance, to run in the queue ``main``.\n",
+ "``` \n",
+ "cd $CASEDIR\n",
+ "./xmlchange JOB_QUEUE=main\n",
+ "```\n",
+ "This step can be redone at anytime in the process. \n",
+ "\n",
+ "**# Setup**\n",
+ "\n",
+ "Invoke ``case.setup`` with the command:\n",
+ "``` \n",
+ "cd $CASEDIR\n",
+ "./case.setup \n",
+ "``` \n",
+ "\n",
+ "You build the namelists with the command:\n",
+ "```\n",
+ "./preview_namelists\n",
+ "```\n",
+ "This step is optional as the script ``preview_namelists`` is automatically called by ``case.build`` and ``case.submit``. But it is nice to check that your changes made their way into:\n",
+ "```\n",
+ "$CASEDIR/CaseDocs/atm_in\n",
+ "```\n",
+ "\n",
+ "\n",
+ "**# Set run length**\n",
+ "\n",
+ "```\n",
+ "./xmlchange STOP_N=1,STOP_OPTION=nyears\n",
+ "```\n",
+ "\n",
+ "\n",
+ "**# Add the following to user_nl_cpl**\n",
+ "\n",
+ "```\n",
+ "orb_mode = 'fixed_parameters' \n",
+ " orb_eccen = 0.018682\n",
+ " orb_obliq = 24.105\n",
+ " orb_mvelp = 0.87\n",
+ "```\n",
+ "\n",
+ "\n",
+ "**# Build and submit**\n",
+ "\n",
+ "```\n",
+ "qcmd -A $PROJECT -- ./case.build\n",
+ "./case.submit\n",
+ "```\n",
+ "------------\n",
+ "\n",
+ "\n",
+ "**# Validate your simulation setup**\n",
+ "\n",
+ "\n",
+ "(1) If you want to check the log file, ``cpl.log.xxx``, in the Run Directory (when model is still running) or in your Storage Directory (when the simulation and archiving have finished). \n",
+ "```\n",
+ "less /glade/derecho/scratch/$USER/$CASENAME/run/cpl.log.* \n",
+ "less /glade/derecho/scratch/$USER/archive/$CASENAME/logs/cpl.log.*.gz\n",
+ "```\n",
+ "(2) Type ``/orb_params`` to search, you should see the following\n",
+ "```\n",
+ " (shr_orb_params) Calculate characteristics of the orbit:\n",
+ " (shr_orb_params) Calculate orbit for year: -4050\n",
+ " (shr_orb_params) ------ Computed Orbital Parameters ------\n",
+ " (shr_orb_params) Eccentricity = 1.868182E-02\n",
+ " (shr_orb_params) Obliquity (deg) = 2.410538E+01\n",
+ " (shr_orb_params) Obliquity (rad) = 4.207183E-01\n",
+ " (shr_orb_params) Long of perh(deg) = 8.696128E-01\n",
+ " (shr_orb_params) Long of perh(rad) = 3.156770E+00\n",
+ " (shr_orb_params) Long at v.e.(rad) = -5.751115E-04\n",
+ "```\n",
+ "\n",
+ "**# Check your solution**\n",
+ "\n",
+ "When the run is completed, look at the history files into the archive directory. \n",
+ " \n",
+ "(1) Check that your archive directory on derecho (The path will be different on other machines): \n",
+ "```\n",
+ "cd /glade/derecho/scratch/$USER/archive/$CASENAME/atm/hist\n",
+ "ls \n",
+ "```\n",
+ "\n",
+ "As your run is one-year, there should be 12 monthly files (``h0``) for each model component. \n",
+ "\n",
+ "\n",
+ " \n",
+ "
\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "65b2cbda-2d54-48ee-898b-4c391f16ca79",
+ "metadata": {},
+ "source": [
+ "\n",
+ " \n",
+ "\n",
+ "\n",
+ " Click here to visualize results
\n",
+ "
\n",
+ "\n",
+ "**# Use Ncview to visualize solar insolation**\n",
+ "\n",
+ "Earth's orbital configuration influences incoming solar insolation.\n",
+ "Take a look at the ``SOLIN`` CAM variable for August in the pre-industrial and mid-Holocene runs.\n",
+ "``` \n",
+ "module load ncview\n",
+ "cd /glade/derecho/scratch/$USER/archive\n",
+ "ncview b.e21.B1850.f19_g17.piControl.001/atm/hist/b.e21.B1850.f19_g17.piControl.001.cam.h0.0001-08.nc b.e21.B1850.f19_g17.midHolocene.001/atm/hist/b.e21.B1850.f19_g17.midHolocene.001.cam.h0.0001-08.nc\n",
+ "```\n",
+ "\n",
+ "Using the right arrow button in the Ncview window, you can toggle between pre-industrial and mid-Holocene August ``SOLIN`` and other variables. \n",
+ "\n",
+ "\n",
+ "A few side notes on comparing pre-industrial and mid-Holocene runs: \n",
+ "- Changes in Earth's orbit alter the length of months or seasons over time; this is referred to as the 'paleo calendar effect' \n",
+ "- This means that the modern fixed-length definition of months does not apply when the Earth traversed different portions of its orbit \n",
+ "- Tools exist to adjust monthly CESM output to account for the 'paleo calendar effect' \n",
+ "- See [Bartlein & Shafer et al. (2019)](chrome-extension://efaidnbmnnnibpcajpcglclefindmkaj/https://gmd.copernicus.org/articles/12/3889/2019/gmd-12-3889-2019.pdf) for more information \n",
+ "- For simplicity, we assume in this exercise that the definition of months is the same for the pre-industrial and mid-Holocene \n",
+ "\n",
+ "Now, let's take a look at the differences between the two cases more clearly using NCO. \n",
+ "\n",
+ "``` \n",
+ "module load nco\n",
+ "cd /glade/derecho/scratch/$USER/archive\n",
+ "ncdiff b.e21.B1850.f19_g17.piControl.001/atm/hist/b.e21.B1850.f19_g17.midHolocene.001.cam.h0.0001-08.nc b.e21.B1850.f19_g17.piControl.001/atm/hist/b.e21.B1850.f19_g17.piControl.001.cam.h0.0001-08.nc diff_MH-PI_0001-08.nc \n",
+ "ncview diff_MH-PI_0001-08.nc \n",
+ "```\n",
+ "\n",
+ "Note: Running ncdiff this way will place ``diff_MH-PI_0001-08.nc`` in your archive directory. You may use ``mv`` to move ``diff_MH-PI_0001-08.nc`` to another directory. \n",
+ "\n",
+ "**# Questions for reflection:**\n",
+ "- Which orbital parameters are different at the middle Holocene (6 ka BP)? \n",
+ "- How does the orbital parameter impact the top-of-atmosphere shortwave radiation (solar insolation) during summertime in the Northern Hemisphere? \n",
+ "- Do the results look correct? You can compare your results with Figure 3b of [Otto-Bliesner et al., (2017)](chrome-extension://efaidnbmnnnibpcajpcglclefindmkaj/https://gmd.copernicus.org/articles/10/3979/2017/gmd-10-3979-2017.pdf) \n",
+ "- What other aspects of climate are different between the mid-Holocene and pre-industrial runs?",
+ "\n",
+ " \n",
+ "
\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "472131c7-88f9-4863-a2bc-d7364333542d",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "815be0bc-515a-474b-a3dd-b7ba02831b9a",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.13"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
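A quick sanity check tying the `user_nl_cpl` settings above to the `cpl.log` excerpt: `shr_orb_params` reports the parameters it computed for year -4050 in scientific notation, and those numbers should agree with the namelist inputs to within rounding. The tolerance below is an assumption chosen for illustration:

```python
# Compare the orbital parameters set in user_nl_cpl with the values the
# coupler log reports (copied from the cpl.log excerpt in the solution).
namelist = {"eccen": 0.018682, "obliq": 24.105, "mvelp": 0.87}
logged   = {"eccen": 1.868182e-02, "obliq": 2.410538e+01, "mvelp": 8.696128e-01}

mismatches = {k: abs(namelist[k] - logged[k]) for k in namelist
              if abs(namelist[k] - logged[k]) > 1e-3}
print(mismatches)   # -> {} : namelist and log agree to within 1e-3
```

An empty result confirms the fixed-parameter orbital forcing made it into the run; a nonempty one would mean the `user_nl_cpl` edit was not picked up.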
diff --git a/_sources/notebooks/challenge/paleo/paleo.ipynb b/_sources/notebooks/challenge/paleo/paleo.ipynb
new file mode 100644
index 000000000..ec64c4fa4
--- /dev/null
+++ b/_sources/notebooks/challenge/paleo/paleo.ipynb
@@ -0,0 +1,88 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# Paleo \n",
+ "Paleoclimatology is the study of ancient climate variability and change, before the availability of instrumental records. \n",
+ "\n",
+ "Paleoclimatology relies on a combination of physical, biological, and chemical proxies of past environmental and climate change, such as glacial ice, tree rings, sediments, corals, and cave mineral deposits. \n",
+ "\n",
+ "CESM is widely used for paleoclimate studies. \n",
+ "\n",
+ "CESM simulations of past climates are a tool to better understand and interpret proxy reconstructions and to evaluate CESM skill in simulating out-of-sample climate states. \n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b6f4905b-cd2a-454e-89cf-ccc585c90247",
+ "metadata": {
+ "tags": []
+ },
+ "source": [
+ "## Learning Goals"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "346cbd7b-3b8e-41f0-b120-b369ab20f6cc",
+ "metadata": {},
+ "source": [
+ "- Student will learn how to modify Earth's orbital configuration in CESM for a simple paleoclimate experiment. \n",
+ "- Student will learn how to validate that the orbital modification is implemented properly. \n",
+ "- Student will learn how to quickly compare differences in paleo and preindustrial CESM runs using NCO and Ncview. \n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b6f4905b-cd2a-454e-89cf-ccc585c90247",
+ "metadata": {
+ "tags": []
+ },
+ "source": [
+ "## Exercise Details"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "346cbd7b-3b8e-41f0-b120-b369ab20f6cc",
+ "metadata": {},
+ "source": [
+ "- This exercise uses the same code base as the rest of the tutorial. \n",
+ "- You will be using the B1850 compset at the f19_g17 resolution. \n",
+ "- You will run a preindustrial control simulation and a simple mid-Holocene simulation. \n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e961b1bd-a1c8-4e54-bafc-46dcf78454f1",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.13"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/challenge/pop/pop.ipynb b/_sources/notebooks/challenge/pop/pop.ipynb
new file mode 100644
index 000000000..65afedd26
--- /dev/null
+++ b/_sources/notebooks/challenge/pop/pop.ipynb
@@ -0,0 +1,204 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# Ocean\n",
+ "\n",
+ "The ocean component of CESM is the Parallel Ocean Program (POP). \n",
+ "\n",
+ "For people interested in ocean science, it can be useful to run simulations with only the ocean and sea-ice components active, driven by prescribed atmospheric forcing. In this exercise, you will learn how to run one of these ice-ocean simulations.\n",
+ "\n",
+ "This exercise was created by Gustavo Marques."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "45a57a9d-99e1-48c2-a365-b09f3aa40ec0",
+ "metadata": {},
+ "source": [
+ "## Learning Goals"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a39c7159-f7ee-4515-920f-68a8d345e392",
+ "metadata": {},
+ "source": [
+ "- Student will learn what a G compset is, the types of forcing available to run one, and how to run one.\n",
+ "- Student will learn how to make a namelist modification that turns off the overflow parameterization and compare results with a control experiment.\n",
+ "- Student will learn how to make a source code modification that changes zonal wind stress and compare results with a control experiment.\n",
+ "- Student will learn what a G1850ECO compset is and compare it to the G compset."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "6bcc23d6-04c4-49b2-a809-15badc7b5ff9",
+ "metadata": {},
+ "source": [
+ "## Exercise Details"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "59f7b9fd-7a3d-4b54-b874-61ddc264b102",
+ "metadata": {},
+ "source": [
+ "- This exercise uses the same code base as the rest of the tutorial. \n",
+ "- You will be using the G compset at the T62_g37 resolution.\n",
+ "- You will run a control simulation and three experimental simulations. Each simulation will be run for one year. \n",
+ "- You will then use 'ncview' \\([http://meteora.ucsd.edu/~pierce/ncview_home_page.html](http://meteora.ucsd.edu/~pierce/ncview_home_page.html)\\) to evaluate how the experiments differ from the control simulation."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f1ed4850-1e61-4b03-b036-69ecaa06f23f",
+ "metadata": {},
+ "source": [
+ "## Useful POP references"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "27190b16-2c11-40a1-94fc-09fe0fbb1a57",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "[CESM POP User's Guide](https://www.cesm.ucar.edu/models/pop)\n",
+ "\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "9f4fecc3-e03e-4d35-aecb-7daa16a9acb0",
+ "metadata": {
+ "tags": []
+ },
+ "source": [
+ "\n",
+ "\n",
+ "[CESM POP Discussion Forum](https://bb.cgd.ucar.edu/cesm/forums/pop.136/)\n",
+ "\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c082b63d-a408-4b01-8fe8-c446d25a1c91",
+ "metadata": {},
+ "source": [
+ "## What is a G case?"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "9ad378a2-89e1-4afe-ad88-e0c0759b9864",
+ "metadata": {},
+ "source": [
+ "The G compset has active, coupled ocean and sea-ice components. It requires boundary forcing from the atmosphere: the atmospheric data are prescribed and do not change interactively as the ocean and sea ice evolve in time. The land and land-ice components are not active during a G compset experiment, and the runoff is specified. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3ce9e152-c915-4e18-8199-040a26cf68c5",
+ "metadata": {},
+ "source": [
+ "![gcase](../../../images/challenge/gcase.png)\n",
+ "\n",
+ "*Figure: G compset definition.*"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "346ef398-2703-4990-9387-d9006e75c5e6",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "[Component Set Definitions](https://www2.cesm.ucar.edu/models/cesm2/config/compsets.html)\n",
+ "\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "cecd306b-bc35-48e2-8b47-fec1362616cc",
+ "metadata": {},
+ "source": [
+ "## G Compset forcing data"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b6e0b74a-4578-40b3-8af1-920e6bacffc4",
+ "metadata": {},
+ "source": [
+ "There are two types of temporal forcing for G compsets:\n",
+ "- Normal Year Forcing (NYF) is 12 months of atmospheric data (like a climatology) that repeats every year. NYF is the default forcing.\n",
+ "- Interannually varying forcing (GIAF) varies from year to year over the period 1948-2017. \n",
+ "\n",
+ "There are two datasets that can be used for G compsets:\n",
+ "- JRA55-do atmospheric data \\([Tsujino et al. 2018](https://doi.org/10.1016/j.ocemod.2018.07.002)\\)\n",
+ "- Coordinated Ocean-ice Reference Experiments (CORE) version 2 atmospheric data \\([Large and Yeager 2009](http://doi.org/10.1007/s00382-008-0441-3)\\).\n",
+ "\n",
+ "In these exercises we will use the CORE NYF."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e77543f2-6f2a-4d29-8919-827a2d7f96e6",
+ "metadata": {},
+ "source": [
+ "## Post processing and viewing your output"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "221e2616-682c-44e5-835d-0fce3603555d",
+ "metadata": {},
+ "source": [
+ "1) You can create an annual average of the first year's data for each simulation using the `ncra` (netCDF averager) command from the netCDF operators package \\([NCO](https://nco.sourceforge.net/)\\). \n",
+ "```\n",
+ "ncra $OUTPUT_DIR/*.pop.h.0001*nc $CASENAME.pop.h.0001.nc\n",
+ "```\n",
+ "\n",
+ "2) Create a file that contains differences between each of the experiments and the control simulation\n",
+ "```\n",
+ "ncdiff $CASENAME.pop.h.0001.nc $CONTROLCASE.pop.h.0001.nc ${CASENAME}_diff.nc\n",
+ "```\n",
+ "\n",
+ "3) Examine variables within each annual mean and the difference files using `ncview`\n",
+ "```\n",
+ "ncview ${CASENAME}_diff.nc\n",
+ "```\n",
+ "\n",
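+ "The NCO steps above can also be sketched in Python with xarray. This is a hedged alternative, not part of the original workflow; the file name patterns and the variable name TEMP (potential temperature) are assumptions that may differ on your system:\n",
+ "```python\n",
+ "import xarray as xr\n",
+ "\n",
+ "# Annual mean of the first year of monthly history files (assumed file patterns)\n",
+ "exp = xr.open_mfdataset('EXPERIMENT.pop.h.0001-*.nc').mean('time')\n",
+ "ctl = xr.open_mfdataset('CONTROL.pop.h.0001-*.nc').mean('time')\n",
+ "\n",
+ "# Experiment-minus-control difference of a field of interest\n",
+ "diff = exp['TEMP'] - ctl['TEMP']\n",
+ "```\n",
+ "\n",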
+ "4) You can also look at other monthly-mean outputs or component log files."
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.12"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/challenge/pop/pop_exercise_1.ipynb b/_sources/notebooks/challenge/pop/pop_exercise_1.ipynb
new file mode 100644
index 000000000..56334d731
--- /dev/null
+++ b/_sources/notebooks/challenge/pop/pop_exercise_1.ipynb
@@ -0,0 +1,170 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# 1: Control case"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0bdbbd2b-8255-44f3-8c8c-da725d26f845",
+ "metadata": {},
+ "source": [
+ "**NOTE:** Building the control case for the POP challenge exercises is identical to building the control case in the CICE challenge exercises. If you have already completed the CICE challenge exercises you can skip this step."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "6457c1d2-0530-435d-ae27-d0f1eeabe583",
+ "metadata": {},
+ "source": [
+ "\n",
+ "Exercise: Run a control case
\n",
+ " \n",
+ "Create a case called **g_control** using the compset ``G`` at ``T62_g37`` resolution. \n",
+ " \n",
+ "Set the run length to **1 year**. \n",
+ "\n",
+ "Build and run the model. Since this is a control case, we want to build it \"out of the box\" without any modifications. \n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e2e33a95-e93c-4aca-86d7-1a830cc0562c",
+ "metadata": {},
+ "source": [
+ " \n",
+ "\n",
+ "\n",
+ "Click here for hints\n",
+ " \n",
+ "**How do I compile?**\n",
+ "\n",
+ "You can compile with the command:\n",
+ " \n",
+ "```\n",
+ "qcmd -- ./case.build\n",
+ "```\n",
+ "\n",
+ "**How do I control the output?**\n",
+ "\n",
+ "Check the following links:\n",
+ "\n",
+ "* https://www2.cesm.ucar.edu/models/cesm1.2/pop2/doc/faq/#output_tavg_add1\n",
+ "* https://www2.cesm.ucar.edu/models/cesm1.2/pop2/doc/faq/#output_tavg_add2\n",
+ "\n",
+ "**How do I check my solution?**\n",
+ "\n",
+ "When your run is completed, go to the archive directory. \n",
+ "\n",
+ "(1) Check that your archive directory contains files *pop.h.*, *pop.h.nday1*, etc\n",
+ "\n",
+ "\n",
+ "(2) Compare the contents of the ``h`` and ``h.nday1`` files using ``ncdump``.\n",
+ "\n",
+ "```\n",
+ "ncdump -h g_control.pop.h.0001-01.nc\n",
+ "ncdump -h g_control.pop.h.nday1.0001-01-01.nc\n",
+ "```\n",
+ "\n",
+ "(3) Look at the sizes of the files. \n",
+ "\n",
+ " \n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f639e182-f48a-431c-a594-9c34323417eb",
+ "metadata": {},
+ "source": [
+ " \n",
+ "\n",
+ "Click here for the solution\n",
+ " \n",
+ "Create a new case g_control with the command:\n",
+ "```\n",
+ "cd /glade/work/$USER/code/my_cesm_code/cime/scripts/\n",
+ "./create_newcase --case /glade/work/$USER/cases/g_control --compset G --res T62_g37 \n",
+ "```\n",
+ "\n",
+ "\n",
+ "Case setup:\n",
+ "``` \n",
+ "cd /glade/work/$USER/cases/g_control \n",
+ "./case.setup\n",
+ "```\n",
+ "\n",
+ "\n",
+ "Change the run length:\n",
+ "``` \n",
+ "./xmlchange STOP_N=1,STOP_OPTION=nyears\n",
+ "```\n",
+ "\n",
+ "\n",
+ "If needed, change job queue \n",
+ "and account number. \n",
+ "For instance:\n",
+ "``` \n",
+ "./xmlchange JOB_QUEUE=regular,PROJECT=UESM0013\n",
+ "```\n",
+ "\n",
+ "\n",
+ "Build and submit:\n",
+ "```\n",
+ "qcmd -- ./case.build\n",
+ "./case.submit\n",
+ "```\n",
+ "\n",
+ "\n",
+ "When the run is completed, look into the archive directory for: \n",
+ "g_control. \n",
+ " \n",
+ "(1) Check your archive directory on derecho (the path will be different on other machines): \n",
+ "```\n",
+ "cd /glade/derecho/scratch/$USER/archive/g_control/ocn/hist\n",
+ "\n",
+ "ls \n",
+ "```\n",
+ "\n",
+ " \n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "dabace0e-c3f2-4c88-b77d-4b28828c0944",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "NPL 2023b",
+ "language": "python",
+ "name": "npl-2023b"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.12"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/challenge/pop/pop_exercise_2.ipynb b/_sources/notebooks/challenge/pop/pop_exercise_2.ipynb
new file mode 100644
index 000000000..2acba9fec
--- /dev/null
+++ b/_sources/notebooks/challenge/pop/pop_exercise_2.ipynb
@@ -0,0 +1,175 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# 2: Turn off parameterization"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "33cdee65-f03f-4c72-adfe-b5ce02416d12",
+ "metadata": {},
+ "source": [
+ "Oceanic overflows are dense currents originating in semienclosed basins or continental shelves. They contribute to the formation of abyssal waters and play a crucial role in large-scale ocean circulation. When these dense currents flow down the continental slope, they undergo intense mixing with the surrounding (ambient) ocean waters, causing significant changes in their density and transport (see figure below). However, these mixing processes occur on scales smaller than what ocean climate models can accurately capture, leading to poor simulations of deep waters and deep western boundary currents. To improve the representation of overflows, some ocean climate models rely on overflow parameterizations, such as the one developed for the POP model (see [this](https://echorock.cgd.ucar.edu/staff/gokhan/OFP_Tech_Note.pdf) report for additional information). \n",
+ "\n",
+ "![overflows](../../../images/challenge/overflows.png)\n",
+ "\n",
+ "\n",
+ "*Figure: Physical processes acting in overflows (from [Legg et al., 2009](https://doi-org.cuucar.idm.oclc.org/10.1175/2008BAMS2667.1))*"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b90d4773-7ca0-4131-ab07-517608a3e976",
+ "metadata": {},
+ "source": [
+ "\n",
+ "Exercise: Turn off overflow parameterization\n",
+ " \n",
+ "Create a case called **g_overflows** by cloning the control experiment case. \n",
+ " \n",
+ "Verify that the run length is set to **1 year**. \n",
+ "\n",
+ "In user_nl_pop make the following modifications:``overflows_on = .false.`` and ``overflows_interactive = .false.``\n",
+ "\n",
+ "Build and run the model for one year. \n",
+ "\n",
+ "Compare the simulations using ncview/ncdiff, etc.\n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e2e33a95-e93c-4aca-86d7-1a830cc0562c",
+ "metadata": {},
+ "source": [
+ " \n",
+ "\n",
+ "\n",
+ "Click here for hints\n",
+ " \n",
+ "**How do I compile and run?**\n",
+ "\n",
+ "You can compile with the command:\n",
+ "```\n",
+ "qcmd -- ./case.build\n",
+ "```\n",
+ "\n",
+ "You can run with the command:\n",
+ "```\n",
+ "./case.submit\n",
+ "```\n",
+ " \n",
+ "**How do I check the length of the run?**\n",
+ "\n",
+ "Use ```xmlquery``` to search for the variables that control the run length\n",
+ "\n",
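+ "For example (a sketch; run from the case directory):\n",
+ "```\n",
+ "./xmlquery STOP_N,STOP_OPTION\n",
+ "```\n",
+ "\n",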
+ " \n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f639e182-f48a-431c-a594-9c34323417eb",
+ "metadata": {},
+ "source": [
+ "\n",
+ " \n",
+ "\n",
+ "Click here for the solution\n",
+ " \n",
+ "Clone a new case g_overflows from your control experiment with the command:\n",
+ "```\n",
+ "cd /glade/work/$USER/code/my_cesm_code/cime/scripts/\n",
+ "./create_clone --case /glade/work/$USER/cases/g_overflows --clone /glade/work/$USER/cases/g_control\n",
+ "```\n",
+ "\n",
+ "Case setup:\n",
+ "``` \n",
+ "cd /glade/work/$USER/cases/g_overflows\n",
+ "./case.setup\n",
+ "```\n",
+ "\n",
+ "Verify that the run length is 1 year:\n",
+ "``` \n",
+ "./xmlquery STOP_N\n",
+ "./xmlquery STOP_OPTION\n",
+ "```\n",
+ " \n",
+ "Edit the file user_nl_pop and add the lines:\n",
+ "```\n",
+ " overflows_on = .false.\n",
+ " overflows_interactive = .false.\n",
+ "```\n",
+ "\n",
+ "If needed, change job queue \n",
+ "and account number. \n",
+ "For instance:\n",
+ "``` \n",
+ "./xmlchange JOB_QUEUE=regular,PROJECT=UESM0013\n",
+ "```\n",
+ "\n",
+ "Build and submit:\n",
+ "```\n",
+ "qcmd -- ./case.build\n",
+ "./case.submit\n",
+ "```\n",
+ "\n",
+ "When the run is completed, look into the archive directory for: \n",
+ "g_overflows. \n",
+ " \n",
+ "(1) Check your archive directory on derecho (the path will be different on other machines): \n",
+ "```\n",
+ "cd /glade/derecho/scratch/$USER/archive/g_overflows/ocn/hist\n",
+ "ls \n",
+ "```\n",
+ "\n",
+ " \n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f19ab341-b76b-462b-9bc9-49d4793ed409",
+ "metadata": {},
+ "source": [
+ "## Test your understanding"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "31d67bb4-3e04-459e-a6ac-866ee9224776",
+ "metadata": {},
+ "source": [
+ "- What variables do you expect to change when you turn off the overflow parameterization?\n",
+ "- What variables show a difference between this experiment and the control simulation? How different are they?"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.12"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/challenge/pop/pop_exercise_3.ipynb b/_sources/notebooks/challenge/pop/pop_exercise_3.ipynb
new file mode 100644
index 000000000..b71259f2d
--- /dev/null
+++ b/_sources/notebooks/challenge/pop/pop_exercise_3.ipynb
@@ -0,0 +1,175 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# 3: Modify wind stress"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "33cdee65-f03f-4c72-adfe-b5ce02416d12",
+ "metadata": {},
+ "source": [
+ "Wind stress plays a critical role in driving ocean currents and is a key factor in shaping the overall patterns of large-scale ocean circulation and, consequently, the climate. Further details on how wind stress affects the ocean circulation are discussed in [this](https://doi-org.cuucar.idm.oclc.org/10.1006/rwos.2001.0110) manuscript."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b90d4773-7ca0-4131-ab07-517608a3e976",
+ "metadata": {},
+ "source": [
+ "\n",
+ "Exercise: Increase zonal wind stress\n",
+ " \n",
+ "Create a case called **g_windstress** by cloning the control experiment case. \n",
+ " \n",
+ "Verify that the run length is set to **1 year**. \n",
+ "\n",
+ "Modify the subroutine rotate_wind_stress in forcing_coupled.F90 to increase the first (x) component of the wind stress by 25%.\n",
+ "\n",
+ "Build and run the model for one year. \n",
+ "\n",
+ "Compare the simulations using ncview/ncdiff, etc.\n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e2e33a95-e93c-4aca-86d7-1a830cc0562c",
+ "metadata": {},
+ "source": [
+ " \n",
+ "\n",
+ "\n",
+ "Click here for hints\n",
+ " \n",
+ "**How do I compile and run?**\n",
+ "\n",
+ "You can compile with the command:\n",
+ "```\n",
+ "qcmd -- ./case.build\n",
+ "```\n",
+ "\n",
+ "You can run with the command:\n",
+ "```\n",
+ "./case.submit\n",
+ "```\n",
+ " \n",
+ "**How do I check the length of the run?**\n",
+ "\n",
+ "Use ```xmlquery``` to search for the variables that control the run length\n",
+ "\n",
+ " \n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f639e182-f48a-431c-a594-9c34323417eb",
+ "metadata": {},
+ "source": [
+ "\n",
+ " \n",
+ "\n",
+ "Click here for the solution\n",
+ " \n",
+ "Clone a new case g_windstress from your control experiment with the command:\n",
+ "```\n",
+ "cd /glade/work/$USER/code/my_cesm_code/cime/scripts/\n",
+ "./create_clone --case /glade/work/$USER/cases/g_windstress --clone /glade/work/$USER/cases/g_control\n",
+ "```\n",
+ "\n",
+ "Case setup:\n",
+ "``` \n",
+ "cd /glade/work/$USER/cases/g_windstress\n",
+ "./case.setup\n",
+ "```\n",
+ "\n",
+ "Verify that the run length is 1 year:\n",
+ "``` \n",
+ "./xmlquery STOP_N\n",
+ "./xmlquery STOP_OPTION\n",
+ "```\n",
+ " \n",
+ "Copy the forcing_coupled.F90 file from the control case to the ocean SourceMods.\n",
+ "``` \n",
+ "cp /glade/work/$USER/code/my_cesm_code/components/pop/source/forcing_coupled.F90 /glade/work/$USER/cases/g_windstress/SourceMods/src.pop\n",
+ "``` \n",
+ " \n",
+ "Edit the copy of forcing_coupled.F90 in SourceMods/src.pop and, inside the rotate_wind_stress routine, add the following line after ```SMFT(:,:,1,:)``` is defined:\n",
+ " \n",
+ "```\n",
+ " SMFT(:,:,1,:) = SMFT(:,:,1,:) * 1.25\n",
+ "```\n",
+ "\n",
+ "If needed, change job queue \n",
+ "and account number. \n",
+ "For instance:\n",
+ "``` \n",
+ "./xmlchange JOB_QUEUE=regular,PROJECT=UESM0013\n",
+ "```\n",
+ "\n",
+ "Build and submit:\n",
+ "```\n",
+ "qcmd -- ./case.build\n",
+ "./case.submit\n",
+ "```\n",
+ "\n",
+ "When the run is completed, look into the archive directory for: \n",
+ "g_windstress. \n",
+ " \n",
+ "(1) Check your archive directory on derecho (the path will be different on other machines): \n",
+ "```\n",
+ "cd /glade/derecho/scratch/$USER/archive/g_windstress/ocn/hist\n",
+ "ls \n",
+ "```\n",
+ "\n",
+ " \n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "286e2e7f-ccea-4c5e-acc5-5f9867341102",
+ "metadata": {},
+ "source": [
+ "## Test your understanding"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "63f2688d-9857-4a49-93bf-2b3117ec0d13",
+ "metadata": {},
+ "source": [
+ "- What are the impacts of increased zonal wind stress? \n",
+ "- Where do you think the impacts would be largest in the ocean?\n",
+ "- How do you think the changes would compare if you increased meridional wind stress?"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "NPL 2023b",
+ "language": "python",
+ "name": "npl-2023b"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.12"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/challenge/pop/pop_exercise_4.ipynb b/_sources/notebooks/challenge/pop/pop_exercise_4.ipynb
new file mode 100644
index 000000000..cd2bc2a6f
--- /dev/null
+++ b/_sources/notebooks/challenge/pop/pop_exercise_4.ipynb
@@ -0,0 +1,161 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# 4: Turn on the ecosystem"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "72423b27-32ee-492a-a023-ffd418e2d6ea",
+ "metadata": {},
+ "source": [
+ "You can also explore setting up a similar case but using the ``G1850ECO`` component set. Note how this differs from the previous ``G`` component set we used in Exercise 1. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "8f13d092-c9d8-4e47-93b2-caf3cb8335d6",
+ "metadata": {},
+ "source": [
+ "![gcase](../../../images/challenge/gecocase.png)\n",
+ "\n",
+ "*Figure: G1850ECO compset definition.*"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b90d4773-7ca0-4131-ab07-517608a3e976",
+ "metadata": {},
+ "source": [
+ "\n",
+ "Exercise: Run a control case\n",
+ " \n",
+ "Create a case called **g_eco1850** using the compset ``G1850ECO`` at ``T62_g37`` resolution. \n",
+ " \n",
+ "Set the run length to **1 year**. \n",
+ "\n",
+ "Build and run the model. Since this is a control case, we want to build it \"out of the box\" without any modifications. \n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e2e33a95-e93c-4aca-86d7-1a830cc0562c",
+ "metadata": {},
+ "source": [
+ " \n",
+ "\n",
+ "\n",
+ "Click here for hints\n",
+ " \n",
+ "**How do I compile and run?**\n",
+ "\n",
+ "You can compile with the command:\n",
+ "```\n",
+ "qcmd -- ./case.build\n",
+ "```\n",
+ "\n",
+ "You can run with the command:\n",
+ "```\n",
+ "./case.submit\n",
+ "```\n",
+ " \n",
+ "**How do I check the length of the run?**\n",
+ "\n",
+ "Use ```xmlquery``` to search for the variables that control the run length\n",
+ "\n",
+ " \n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f639e182-f48a-431c-a594-9c34323417eb",
+ "metadata": {},
+ "source": [
+ "\n",
+ " \n",
+ "\n",
+ "Click here for the solution\n",
+ " \n",
+ " \n",
+ "Create a new case g_eco1850 with the command:\n",
+ "```\n",
+ "cd /glade/work/$USER/code/my_cesm_code/cime/scripts/\n",
+ "./create_newcase --case /glade/work/$USER/cases/g_eco1850 --compset G1850ECO --res T62_g37\n",
+ "```\n",
+ "\n",
+ "Case setup:\n",
+ "``` \n",
+ "cd /glade/work/$USER/cases/g_eco1850 \n",
+ "./case.setup\n",
+ "```\n",
+ " \n",
+ "Change the run length:\n",
+ "``` \n",
+ "./xmlchange STOP_N=1,STOP_OPTION=nyears\n",
+ "```\n",
+ "\n",
+ "If needed, change job queue \n",
+ "and account number. \n",
+ "For instance:\n",
+ "``` \n",
+ "./xmlchange JOB_QUEUE=regular,PROJECT=UESM0013\n",
+ "```\n",
+ "\n",
+ "Build and submit:\n",
+ "```\n",
+ "qcmd -- ./case.build\n",
+ "./case.submit\n",
+ "```\n",
+ "\n",
+ "When the run is completed, look into the archive directory for: \n",
+ "g_eco1850. \n",
+ " \n",
+ "(1) Check your archive directory on derecho (the path will be different on other machines): \n",
+ "```\n",
+ "cd /glade/derecho/scratch/$USER/archive/g_eco1850/ocn/hist\n",
+ "ls \n",
+ "```\n",
+ "\n",
+ " \n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "dce7f4af-243c-47fd-b4d6-c37832aa80fd",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "NPL 2023b",
+ "language": "python",
+ "name": "npl-2023b"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.12"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/diagnostics/additional/additional.ipynb b/_sources/notebooks/diagnostics/additional/additional.ipynb
new file mode 100644
index 000000000..6c1317db0
--- /dev/null
+++ b/_sources/notebooks/diagnostics/additional/additional.ipynb
@@ -0,0 +1,54 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# Additional Topics"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b6f4905b-cd2a-454e-89cf-ccc585c90247",
+ "metadata": {
+ "tags": []
+ },
+ "source": [
+ "This section provides other information about how to use CESM output including:\n",
+ "- The difference between timeseries and history files\n",
+ "- The Climate Variabilty and Diagnostics Package (CVDP) (**In progress**)\n",
+ "- Links to different analysis tools and resources used by CESM developers and users"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e961b1bd-a1c8-4e54-bafc-46dcf78454f1",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.13"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/diagnostics/additional/adf.ipynb b/_sources/notebooks/diagnostics/additional/adf.ipynb
new file mode 100644
index 000000000..8056d9f6d
--- /dev/null
+++ b/_sources/notebooks/diagnostics/additional/adf.ipynb
@@ -0,0 +1,77 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# ADF"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b6f4905b-cd2a-454e-89cf-ccc585c90247",
+ "metadata": {
+ "tags": []
+ },
+ "source": [
+ "## Learning Goals\n",
+ "\n",
+ "- Enter learning goals here."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e73ce5d6-d2b1-4f32-b64f-337a1b02e2d0",
+ "metadata": {},
+ "source": [
+ "_______________\n",
+ "\n",
+ "## Subsection 1\n",
+ "\n",
+ "Info here"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "815e0869-0518-4cf9-9417-cd9b08965ca1",
+ "metadata": {},
+ "source": [
+ "_______________\n",
+ "\n",
+ "## Subsection 2\n",
+ "\n",
+ "Info here\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e961b1bd-a1c8-4e54-bafc-46dcf78454f1",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.13"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/diagnostics/additional/analysis_tools.ipynb b/_sources/notebooks/diagnostics/additional/analysis_tools.ipynb
new file mode 100644
index 000000000..0e551fc31
--- /dev/null
+++ b/_sources/notebooks/diagnostics/additional/analysis_tools.ipynb
@@ -0,0 +1,335 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# CESM analysis tools"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "55b5588e-6e74-4bd7-a278-877611c4e87b",
+ "metadata": {},
+ "source": [
+ "Below is some information about tools that CESM users and developers use to analyze model simulations. This list is not comprehensive; it is intended to give you a starting point for your own searches."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "6a4f8751-c312-49b5-a578-604b7f39099a",
+ "metadata": {},
+ "source": [
+ "## Analysis Software"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d31bdb0d-5afe-4b38-b304-34c3179ac6dc",
+ "metadata": {},
+ "source": [
+ "Many data analysis and visualization software packages are freely available for use on CISL-managed resources. These packages include some developed and supported by NCAR and CISL. Some of these resources are open source while others require licenses.\n",
+ "\n",
+ "Some of these packages include:\n",
+ "- Numerous python packages\n",
+ "- Interactive Data Language (IDL) \n",
+ "- MATLAB\n",
+ "- NCAR Command Language (NCL)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "5bd4569e-601e-47d9-ad24-ae2da7087b7e",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "[CISL Data Analysis Website](https://arc.ucar.edu/knowledge_base/70550011)\n",
+ "\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "cc1d09f3-dc55-46b2-912e-deee2147e45d",
+ "metadata": {},
+ "source": [
+ "## Python"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0275672f-9536-4bb6-bfff-4a2eb9bc3630",
+ "metadata": {},
+ "source": [
+ "Python is an open source, general-purpose programming language. \n",
+ "\n",
+ "Python is known for:\n",
+ "- having a wide range of applications and packages available. There is a huge user base and roughly a gazillion online tutorials. \n",
+ "- active development in packages related to the geosciences.\n",
+ "\n",
+ "Python is becoming the dominant language for CESM developers and users, so most of the active development of tools for the CESM project at large are done in this language. We provide more detailed information below about some of the tools available for python users on NCAR computing assets."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "8ce79e09-e9e2-4bf0-ad12-4aede3b2d072",
+ "metadata": {},
+ "source": [
+ "### Jupyter Hub"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f82c8f41-42a1-40ce-86bc-5138a9940d08",
+ "metadata": {},
+ "source": [
+ "The JupyterHub deployment that CISL manages allows \"push-button\" access to NCAR's supercomputing resources, including the cluster of nodes used for data analysis and visualization, machine learning, and deep learning.\n",
+ "\n",
+ "JupyterHub gives users the ability to create, save, and share Jupyter Notebooks through the JupyterLab interface and to run interactive, web-based analysis, visualization and compute jobs on derecho and casper.\n",
+ "\n",
+ "Information about getting started with JupyterHub on NCAR computing resources, along with environments and documentation, is available at the website below."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "4b1cc783-53e5-45db-8954-385863b1a778",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "[CISL Jupyter Hub Website](https://arc.ucar.edu/knowledge_base/70549913)\n",
+ "\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "71c323bc-a8e3-406d-a08b-92030f436863",
+ "metadata": {},
+ "source": [
+ "### Earth System Data Science initiative (ESDS)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2d21da94-244c-42c0-970f-8daec7bacf61",
+ "metadata": {},
+ "source": [
+ "ESDS is an NCAR initiative that seeks to foster a collaborative, open, inclusive community for Earth Science data analysis. ESDS promotes deeper collaboration centered on analytics, improving our capacity to deliver impactful, actionable, reproducible science and serve the university community by transforming how geoscientists synthesize and extract information from large, diverse data sets.\n",
+ "\n",
+ "More information, including FAQs and a blog with examples can be found at the website below. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a6813683-9206-492d-b477-2aee1abe4f17",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "[ESDS Website](https://ncar.github.io/esds/about/)\n",
+ "\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f4055e68-100a-4ae7-8ee3-47cefcd43d73",
+ "metadata": {},
+ "source": [
+ "### Project Pythia"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f25bcd2d-a7d5-46fe-b5fb-05c14eb6a19b",
+ "metadata": {},
+ "source": [
+ "If you are new to Python and its application to the geosciences, then starting with Project Pythia is a good first step. Project Pythia is the education working group for Pangeo and is an educational resource for the entire geoscience community. Together these initiatives are helping geoscientists make sense of huge volumes of numerical scientific data using tools that facilitate open, reproducible science, and building an inclusive community of practice around these goals."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "552bc788-8a39-4ce9-b7ab-8d8f283614e7",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "[Project Pythia Website](https://projectpythia.org/)\n",
+ "\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "efe8a781-b9e6-4ad2-a590-75ceab387fb4",
+ "metadata": {},
+ "source": [
+ "### GeoCAT"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "4583d068-c9c4-44ed-8ca3-67352c4fd414",
+ "metadata": {},
+ "source": [
+ "The Geoscience Community Analysis Toolkit (GeoCAT) is a software engineering effort at NCAR. GeoCAT aims to create scalable data analysis and visualization tools for Earth System Science data to serve the geosciences community in the scientific Python ecosystem. GeoCAT tools are built upon cornerstone technologies in the Pangeo stack, such as Xarray, Dask, and Jupyter Notebooks. In addition, some of the functionality in the GeoCAT stack is inspired by or reimplemented from NCL."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "207a8b16-9f0b-447f-b027-1a47ba747d52",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "[GeoCAT Website](https://geocat.ucar.edu/)\n",
+ "\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "56d3130e-8f95-4a5e-a797-0b33d538141a",
+ "metadata": {},
+ "source": [
+ "### MetPy"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "76b9dc6d-74d0-4444-96d4-e60db58f8257",
+ "metadata": {},
+ "source": [
+ "MetPy is a collection of tools in Python for reading, visualizing, and performing calculations with weather data. The website below has information about getting started as well as examples and a reference guide."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "53f2c96c-f36a-4080-99b9-e7e2fb1d899d",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "[MetPy Website](https://unidata.github.io/MetPy/latest/)\n",
+ "\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7787ed12-bce7-4e49-ac3f-2229de327823",
+ "metadata": {},
+ "source": [
+ "## NCAR Command Language (NCL)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c83303a5-24e5-4758-aeed-0cf3672c665e",
+ "metadata": {},
+ "source": [
+ "NCL is an open source tool developed at NCAR that is free to download and use. It can be run at the command line in interactive mode or in batch mode. While once a widely used language for CESM developers and users, NCL is now in a maintenance stage and is no longer under development. Much of the active development is now being done with Python tools.\n",
+ "\n",
+ "NCL is known for:\n",
+ "- easy input/output with netCDF, GRIB, GRIB2, shapefile, ASCII, and binary files. \n",
+ "- good graphics that are very flexible.\n",
+ "- functions tailored to the geosciences community.\n",
+ "- a central website with 1000+ examples. There are also mini language and processing manuals."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7b185d5f-dcdb-4275-99b4-67bf6e5dcc2b",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "[NCL Website](https://www.ncl.ucar.edu/get_started.shtml)\n",
+ "\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7e2bcaf7-f5d0-4d20-b2ed-840841a02972",
+ "metadata": {},
+ "source": [
+ "## Panoply"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3f774a9f-c4c0-4411-97df-99afe765259f",
+ "metadata": {},
+ "source": [
+ "Panoply is a graphic user interface (GUI) application that allows the user to quickly view data in a number of file formats. Panoply is similar to ncview, but it is more powerful. Panoply works with files in netCDF, HDF, or GRIB format (among others). It also allows the user to perform simple calculations, apply masks, and quickly create spatial or line plots.\n",
+ "\n",
+ "The Panoply website provides more documentation, including How-To's and demonstration videos."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "483efa17-9787-4bb4-8348-2d2ebadf1dbd",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "[Panoply Website](http://www.giss.nasa.gov/tools/panoply/)\n",
+ "\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "124c346f-9a8a-4589-b74a-e04194a3e473",
+ "metadata": {},
+ "source": [
+ "## ImageMagick"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "38488672-40e0-4383-b00c-e3128cdc5304",
+ "metadata": {},
+ "source": [
+ "ImageMagick is a free suite of software that can be used to display, manipulate, or compare images. It works with a wide range of file types (ps, pdf, png, gif, jpg, etc.), can be used to create movies, and can alter images at the command line. There are many options available when converting images, and more information can be found at the website below."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "23b653c8-95d6-4a28-a2cc-4d6656c3ceec",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "[ImageMagick Website](https://imagemagick.org/index.php)\n",
+ "\n",
+ "
"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.13"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/diagnostics/additional/cvdp.ipynb b/_sources/notebooks/diagnostics/additional/cvdp.ipynb
new file mode 100644
index 000000000..13e608e76
--- /dev/null
+++ b/_sources/notebooks/diagnostics/additional/cvdp.ipynb
@@ -0,0 +1,65 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# CVDP"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "8f46aef7-947e-498e-90b1-a4ed6b077f6d",
+ "metadata": {},
+ "source": [
+ "The Climate Variability Diagnostics Package (CVDP) was developed by NCAR scientists to document the major modes of climate variability in models and observations.\n",
+ "\n",
+ "More info here:\n",
+ "https://www.cesm.ucar.edu/projects/cvdp\n",
+ "\n",
+ "If you use CVDP results in oral or written form, please cite the following paper:\n",
+ "https://doi.org/10.1002/2014EO490002\n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f5d5800f-8a0c-4cdf-930b-48be1cb40796",
+ "metadata": {},
+ "source": [
+ "**Additional documentation in progress.**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "def586e4-6553-48b9-b05c-2308dad9181c",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.13"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/diagnostics/additional/large_ensembles.ipynb b/_sources/notebooks/diagnostics/additional/large_ensembles.ipynb
new file mode 100644
index 000000000..6b12beb9b
--- /dev/null
+++ b/_sources/notebooks/diagnostics/additional/large_ensembles.ipynb
@@ -0,0 +1,77 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# Large Ensembles"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b6f4905b-cd2a-454e-89cf-ccc585c90247",
+ "metadata": {
+ "tags": []
+ },
+ "source": [
+ "## Learning Goals\n",
+ "\n",
+ "- Enter learning goals here."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e73ce5d6-d2b1-4f32-b64f-337a1b02e2d0",
+ "metadata": {},
+ "source": [
+ "_______________\n",
+ "\n",
+ "## Subsection 1\n",
+ "\n",
+ "Info here"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "815e0869-0518-4cf9-9417-cd9b08965ca1",
+ "metadata": {},
+ "source": [
+ "_______________\n",
+ "\n",
+ "## Subsection 2\n",
+ "\n",
+ "Info here\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e961b1bd-a1c8-4e54-bafc-46dcf78454f1",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.13"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/diagnostics/additional/postprocessing.ipynb b/_sources/notebooks/diagnostics/additional/postprocessing.ipynb
new file mode 100644
index 000000000..ff5b3e1b1
--- /dev/null
+++ b/_sources/notebooks/diagnostics/additional/postprocessing.ipynb
@@ -0,0 +1,93 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# Postprocessing data"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "5bd5c142-f778-4570-8edc-cc760139f30e",
+ "metadata": {},
+ "source": [
+ "A wide range of tools, techniques, and methods exist for postprocessing and analyzing data. One of the first things you have to decide is how to store your files.\n",
+ "\n",
+ "In the diagnostics notebooks we have provided examples of how to use both history files and timeseries files, described below."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b6f4905b-cd2a-454e-89cf-ccc585c90247",
+ "metadata": {
+ "tags": []
+ },
+ "source": [
+ "## History vs. Timeseries files\n",
+ "\n",
+ "When you run the CESM model, the default output is history files: files for a single timestep that include all variables for a given component and time frequency. However, most CESM community experiment data will be provided as timeseries files: files that contain a single variable over many timesteps. It is important that you understand how to use both types of files, and that you know that for some tasks (e.g., debugging) you should be using history files instead of timeseries files. However, it is much more efficient to store timeseries files because the overall size is smaller once the files have been processed into timeseries format.\n",
+ "\n",
+ "CESM does not currently have a supported tool for the community to create timeseries files. We recommend you investigate using [NCO tools](https://ncar.github.io/CESM-Tutorial/notebooks/resources/netcdf.html#netcdf-operators-nco) in coordination with scripts. We hope to provide community members with a better way to create timeseries files soon.\n"
+ ]
+ },
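The reorganization described above (history files, all variables at one time, into timeseries files, one variable over many times) can also be sketched directly in Python. Below is a minimal, self-contained illustration using xarray with synthetic in-memory data; the variable names `TS` and `PRECT` are placeholders, not real model output, and a real workflow would read/write netCDF files instead:

```python
import numpy as np
import xarray as xr

# Fake "history" datasets: one timestep each, all variables together
# (TS and PRECT are hypothetical placeholder variable names)
history = [
    xr.Dataset(
        {"TS": ("lat", np.full(4, 280.0 + t)),
         "PRECT": ("lat", np.full(4, 1.0e-8 * (t + 1)))},
        coords={"lat": np.linspace(-90.0, 90.0, 4)},
    ).expand_dims({"time": [t]})
    for t in range(3)
]

# Concatenate along time, then split into one dataset per variable
combined = xr.concat(history, dim="time")
timeseries = {name: combined[[name]] for name in combined.data_vars}

print(sorted(timeseries))              # ['PRECT', 'TS']
print(timeseries["TS"].sizes["time"])  # 3
```

Each entry of `timeseries` could then be written out with `to_netcdf`, which is the step NCO's `ncrcat`/`ncks` would normally handle on disk.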
+ {
+ "cell_type": "markdown",
+ "id": "784300ed-3d93-4365-8776-adcce4d4eb1f",
+ "metadata": {},
+ "source": [
+ "Until then, we'll give you a sheepish grin."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "74d0734c-6b19-4c82-96cc-670dcbbea861",
+ "metadata": {},
+ "source": [
+ "![sheep](../../../images/diagnostics/file_types/sheepish1.png)\n",
+ "\n",
+ "* Figure: Baaaa.
*"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "1775714f-10bd-4e6f-aa07-1c5b2325584a",
+ "metadata": {},
+ "source": [
+ "![sheep](../../../images/diagnostics/file_types/sheepish2.png)\n",
+ "\n",
+ "* Figure: Baaaa.
*"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e961b1bd-a1c8-4e54-bafc-46dcf78454f1",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.12"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/diagnostics/additional/uxarray.ipynb b/_sources/notebooks/diagnostics/additional/uxarray.ipynb
new file mode 100644
index 000000000..d98462609
--- /dev/null
+++ b/_sources/notebooks/diagnostics/additional/uxarray.ipynb
@@ -0,0 +1,77 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# uxarray"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b6f4905b-cd2a-454e-89cf-ccc585c90247",
+ "metadata": {
+ "tags": []
+ },
+ "source": [
+ "## Learning Goals\n",
+ "\n",
+ "- Enter learning goals here."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e73ce5d6-d2b1-4f32-b64f-337a1b02e2d0",
+ "metadata": {},
+ "source": [
+ "_______________\n",
+ "\n",
+ "## Subsection 1\n",
+ "\n",
+ "Info here"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "815e0869-0518-4cf9-9417-cd9b08965ca1",
+ "metadata": {},
+ "source": [
+ "_______________\n",
+ "\n",
+ "## Subsection 2\n",
+ "\n",
+ "Info here\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e961b1bd-a1c8-4e54-bafc-46dcf78454f1",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.13"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/diagnostics/cam/advanced_cam.ipynb b/_sources/notebooks/diagnostics/cam/advanced_cam.ipynb
new file mode 100644
index 000000000..9838c6ed7
--- /dev/null
+++ b/_sources/notebooks/diagnostics/cam/advanced_cam.ipynb
@@ -0,0 +1,261 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Advanced Plotting"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "**BEFORE BEGINNING THIS EXERCISE** - Check that your kernel (upper right corner, above) is `NPL 2023b`. This should be the default kernel, but if it is not, click on that button and select `NPL 2023b`."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "_______________\n",
+ "This activity was developed primarily by Cecile Hannay and Jesse Nusbaumer."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "_______________\n",
+ "\n",
+ "## Exercise 1: CAM-SE output analysis\n",
+ "\n",
+ "Examples of simple analysis and plotting that can be done with CAM-SE output on the native cubed-sphere grid."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from pathlib import Path\n",
+ "import xarray as xr\n",
+ "import numpy as np\n",
+ "import matplotlib.pyplot as plt\n",
+ "import cartopy.crs as ccrs"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def make_map(data, lon, lat,):\n",
+ " \"\"\"This function plots data on a Mollweide projection map.\n",
+ "\n",
+ " The data is transformed to the projection using Cartopy's `transform_points` method.\n",
+ "\n",
+ " The plot is made by triangulation of the points, producing output very similar to `pcolormesh`,\n",
+ " but with triangles instead of rectangles used to make the image.\n",
+ " \"\"\"\n",
+ " dataproj = ccrs.PlateCarree() # assumes data is lat/lon\n",
+ " plotproj = ccrs.Mollweide() # output projection \n",
+ " # set up figure / axes object, set to be global, add coastlines\n",
+ " fig, ax = plt.subplots(figsize=(6,3), subplot_kw={'projection':plotproj})\n",
+ " ax.set_global()\n",
+ " ax.coastlines(linewidth=0.2)\n",
+ " # this figures out the transformation between (lon,lat) and the specified projection\n",
+ " tcoords = plotproj.transform_points(dataproj, lon.values, lat.values) # working with the projection\n",
+ " xi=tcoords[:,0] != np.inf # there can be bad points set to infinity, but we'll ignore them\n",
+ " assert xi.shape[0] == tcoords.shape[0], f\"Something wrong with shapes should be the same: {xi.shape = }, {tcoords.shape = }\"\n",
+ " tc=tcoords[xi,:]\n",
+ " datai=data.values[xi] # convert to numpy array, then subset\n",
+ " # Use tripcolor --> triangluates the data to make the plot\n",
+ " # rasterized=True reduces the file size (necessary for high-resolution for reasonable file size)\n",
+ " # keep output as \"img\" to make specifying colorbar easy\n",
+ " img = ax.tripcolor(tc[:,0],tc[:,1], datai, shading='gouraud', rasterized=True)\n",
+ " cbar = fig.colorbar(img, ax=ax, shrink=0.4)\n",
+ " return fig, ax"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Input data\n",
+ "\n",
+ "In the following cell, specify the data source.\n",
+ "\n",
+ "`location_of_hfiles` is a path object that points to the directory where data files should be.\n",
+ "`search_pattern` specifies what pattern to look for inside that directory.\n",
+ "\n",
+ "**SIMPLIFICATION** If you want to just provide a path to a file, simply specify it by commenting out (with `#`) the lines above \"# We need lat and lon\", and replacing them with:\n",
+ "```\n",
+ "fil = \"/path/to/your/data/file.nc\"\n",
+ "ds = xr.open_dataset(fil)\n",
+ "```\n",
+ "\n",
+ "## Parameters\n",
+ "Specify the name of the variable to be analyzed with `variable_name`.\n",
+ "\n",
+ "To change the units of the variable, specify `scale_factor` and provide the new units string as `units`. Otherwise, keep the file's units by setting:\n",
+ "\n",
+ "```\n",
+ "scale_factor = 1\n",
+ "units = ds[\"variable_name\"].attrs[\"units\"]\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "location_of_hfiles = Path(\"/glade/campaign/cesm/tutorial/tutorial_2023_archive/cam-se/\")\n",
+ "search_pattern = \"f.cam6_3_112.FMTHIST_v0c.ne30.non-ogw-ubcT-effgw0.7_taubgnd2.5.001.cam.h3.2003-01-01-00000.nc\"\n",
+ "\n",
+ "fils = sorted(location_of_hfiles.glob(search_pattern))\n",
+ "if len(fils) == 1:\n",
+ " ds = xr.open_dataset(fils[0])\n",
+ "else:\n",
+ " print(f\"Just so you know, there are {len(fils)} files about to be loaded.\")\n",
+ " ds = xr.open_mfdataset(fils)\n",
+ "\n",
+ "# We need lat and lon:\n",
+ "lat = ds['lat']\n",
+ "lon = ds['lon']\n",
+ "\n",
+ "# Choose what variables to plot,\n",
+ "# in this example we are going to combine the\n",
+ "# convective and stratiform precipitation into\n",
+ "# a single, total precipitation variable\n",
+ "convective_precip_name = \"PRECC\"\n",
+ "stratiform_precip_name = \"PRECL\"\n",
+ "\n",
+ "# If needed, select scale factor and new units:\n",
+ "scale_factor = 86400. * 1000. # m/s -> mm/day\n",
+ "units = \"mm/day\"\n",
+ "\n",
+ "cp_data = scale_factor * ds[convective_precip_name]\n",
+ "st_data = scale_factor * ds[stratiform_precip_name]\n",
+ "cp_data.attrs['units'] = units\n",
+ "st_data.attrs['units'] = units\n",
+ "\n",
+ "# Sum the two precip variables to get total precip\n",
+ "data = cp_data + st_data\n",
+ "data"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# temporal averaging\n",
+ "# simplest case, just average over time:\n",
+ "data_avg = data.mean(dim='time')\n",
+ "data_avg"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "#\n",
+ "# Global average\n",
+ "#\n",
+ "data_global_average = data_avg.weighted(ds['area']).mean()\n",
+ "print(f\"The area-weighted average of the time-mean data is: {data_global_average.item()}\")"
+ ]
+ },
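As a quick sanity check of what `weighted(...).mean()` computes, here is a tiny self-contained example with made-up values (not CESM data): the area-weighted mean of [1, 3] with weights [3, 1] is (1·3 + 3·1)/4 = 1.5.

```python
import numpy as np
import xarray as xr

# Toy unstructured-grid field and cell areas (made-up numbers)
da = xr.DataArray([1.0, 3.0], dims="ncol")
area = xr.DataArray([3.0, 1.0], dims="ncol")

weighted_mean = da.weighted(area).mean().item()
print(weighted_mean)  # 1.5

# Equivalent computation with plain numpy
print(np.average(da.values, weights=area.values))  # 1.5
```

This is exactly what the cell above does with `ds['area']`, just at full model resolution.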
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "#\n",
+ "# Regional average using a (logical) rectangle\n",
+ "#\n",
+ "west_lon = 110.0\n",
+ "east_lon = 200.0\n",
+ "south_lat = -30.0\n",
+ "north_lat = 30.0\n",
+ "\n",
+ "# To reduce to the region, we need to know which indices of ncol dimension are inside the boundary\n",
+ "\n",
+ "region_inds = np.argwhere(((lat > south_lat)&(lat < north_lat)&(lon>west_lon)&(lon<east_lon)).values)\n",
+ "\n",
+ "Click here for the solution
\n",
+ "\n",
+ "![plot example](../../../images/diagnostics/cam/advanced_plot_1.png)\n",
+ "\n",
+ "* Figure: Plotting solution.
*\n",
+ " \n",
+ " \n",
+ "