- On Ice Sheet Mass Flow
- Introduction
- Abstract & Methods
- Results
- Processes and flow
- Implementation
- Misc
Sankey diagrams for mass flow in Greenland and Antarctica.
See ./ms.tex
Note
All Sankey fluxes are drawn to a common width scale, both within each figure and between figures.
ID | Term | Value | Unc | IO | Period | Source |
---|---|---|---|---|---|---|
RF | Rainfall | 40 | 15 | I_g | 2010-2019 | fettweis_2020 |
CD | Condensation | 5 | 15 | I_g | 2010-2019 | fettweis_2020 |
DP | Deposition | 10 | 15 | I_g | 2010-2019 | fettweis_2020 |
SF | Snowfall | 680 | 15 | I_g | 2010-2019 | fettweis_2020 |
EV | Evaporation | 5 | 15 | O_g | 2010-2019 | fettweis_2020 |
RU | Runoff | 435 | 15 | O_g | 2010-2019 | fettweis_2020 |
SU | Sublimation | 60 | 15 | O_g | 2010-2019 | fettweis_2020 |
RFZ | Refreezing | 200 | 15 | | 2010-2019 | fettweis_2020 |
BMB | Grounded basal melting | 20 | 20 | O_g | steady | karlsson_2021 |
DISCH | Discharge | 485-20+5 | 10 | | 2010-2019 | mankoff_2020_solid,kochtitzky_2023,bollen_2023 |
CALV | Calving | 235 | 30 | O_o | | rignot_2010 |
FRONTMELT | Frontal melting | 235 | 30 | O_o | | rignot_2010 |
SHELFMELT | Sub-shelf melting | 25 | 40 | O_o | 2013-2022 | wang_2024 |
SHELFFREEZE | Sub-shelf freeze-on | 5 | 40 | I_o | 2013-2022 | wang_2024 |
GZRET | Grounding line retreat | 5 | ? | O_g | | |
FRONTRET | Frontal retreat | 50 | 4 | O_o | 2010-2020 | kochtitzky_2023 |
FRONTADV | Frontal advance | 0 | | I_o | 2010-2020 | kochtitzky_2023 |
Exported gl_baseline to ./dat/gl_baseline.csv
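Sanity check (a sketch): reload the exported CSV and total the gross inputs and outputs. This assumes the export keeps the header row of the table above (ID, Term, Value, Unc, IO, Period, Source).
import pandas as pd

df = pd.read_csv('./dat/gl_baseline.csv')
df['Value'] = pd.to_numeric(df['Value'], errors='coerce')  # DISCH is "485-20+5", not numeric
df = df.dropna(subset=['IO', 'Value'])

# Note: I_*/O_* mix grounded (_g) and ocean-side (_o) terms; this is not a closed budget.
gross_in = df.loc[df['IO'].str.startswith('I'), 'Value'].sum()
gross_out = df.loc[df['IO'].str.startswith('O'), 'Value'].sum()
print(gross_in, gross_out)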
trash G_GL
[[ -e ./G_GL ]] || grass -e -c EPSG:3413 ./G_GL
!date
import pandas as pd

fname = "https://thredds.geus.dk/thredds/fileServer/MassBalance/MB_SMB_D_BMB.csv"
df = pd.read_csv(fname, index_col=0, parse_dates=True)[['D','D_err']]
df = df\
.resample('1D').interpolate().resample('YS').sum()
print("2010s")
print(df['2010-01-01':'2019-12-31'].mean())
Wed Dec 18 07:11:05 AM EST 2024
2010s
D        485.373414
D_err     44.951388
dtype: float64
Then, subtract 15 from the 485 based on citet:kochtitzky_2023, who report, in Section 3.3, corrections of 17 ± 6.8 and 14.5 ± 5.8, but note that “[b]ecause our fluxgates were typically located tens to hundreds of meters lower than those in the similar studies (King et al., 2018; Mankoff et al., 2020), the melt correction for these studies would be higher than values presented herein, although it is beyond the scope of the current study to determine what those values would be.”
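A quick check of the adjustment chain behind the table's DISCH entry (485 − 20 + 5); combining the uncertainties by root-sum-square is my assumption, not from the sources.
import numpy as np

D = 485.4       # solid ice discharge, 2010-2019 mean (mankoff_2020_solid, computed above)
melt_corr = 20  # fluxgate submarine-melt correction (kochtitzky_2023, rounded)
gic = 5.2       # peripheral glacier discharge (bollen_2023, computed below)
print(D - melt_corr + gic)  # ~470 Gt/yr

# Assumed: treat the table's Unc (10) as a percentage and combine by RSS.
err = np.sqrt((0.10 * D)**2 + 5.8**2)
print(round(err, 1))  # ~49 Gt/yr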
grass ./G_GL/PERMANENT
g.mapset -c Bollen_2023
cat "${DATADIR}/Bollen_2023/GreenlandGIC_discharge_timeseries - Ellyn Enderlin.csv" \
| cut -d, -f1-3 \
| v.in.ascii input=- output=bollen_2023 separator=, skip=1 x=2 y=3 z=1
import pandas as pd
data_root='/home/kdm/data'
path='Bollen_2023'
fname='GreenlandGIC_discharge_timeseries - Ellyn Enderlin.csv'
df = pd.read_csv(f"{data_root}/{path}/{fname}", index_col=0, header=[0])
df = df.sum(axis='rows')
df = df / 1E9 # per email from Ellyn, units are m^3/year. Convert to Gt.
df = df['2010':'2018']
df.mean()
5.209345977852399
- 21 Gt/yr from Karlsson (2021) http://doi.org/10.1038/s41467-021-23739-z
- Assume steady state
From Millan (2022) http://doi.org/10.5194/tc-16-3021-2022
- GZ retreat is ~0.13 km/yr (Fig. 3a)
- Ice velocity is ~1200 m/yr (Fig. 3b) (not needed)
- 20 km wide
Rates are higher per Ciraci (2023) http://doi.org/10.1073/pnas.2220924120, but the Millan (2022) values are used here.
- Ice surface is close to flotation near the GZ, and the shelf is ~500 m thick, so estimate 600 m ice.
Therefore, GZ retreat in Gt/yr is width × thickness × retreat rate × density:
frink "0.13 km/yr * 20 km * 600 m * 917 kg/m^3 -> Gt/yr"
1.43052
Assume similar rates at the other ice shelves too, for a total of ~5 Gt/yr GZ retreat in Greenland.
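The same estimate in Python, for readers without frink:
# Grounding-zone retreat mass flux, Petermann-like geometry (Millan 2022 numbers above).
retreat = 0.13e3   # m/yr (~0.13 km/yr, Fig. 3a)
width   = 20e3     # m
thick   = 600      # m (shelf ~500 m thick near flotation; use 600 m ice)
rho     = 917      # kg/m^3
print(retreat * width * thick * rho / 1e12)  # Gt/yr -> ~1.43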
g.mapset -c MAR
ncdump -v TIME dat/MARv3.12-GRD-15km-annual.nc4 # 30-39 = 2010-2019
ncra --overwrite -d TIME,30,39 dat/MARv3.12-GRD-15km-annual.nc4 tmp/MAR_GL.nc
ncdump -v X10_110 tmp/MAR_GL.nc # 101
ncdump -v Y20_200 tmp/MAR_GL.nc # 181
g.region w=$(( -645000 - 7500 )) e=$(( 855000 + 7500 )) s=$(( -3357928 - 7500 )) n=$((-657928 + 7500 )) res=15000 -p
var=SF # debug
for var in SF RF RU SU ME SMB EVA CON DEP SUB MSK AREA; do
r.in.gdal -o input=NetCDF:tmp/MAR_GL.nc:${var} output=${var}
r.region -c map=${var}
done
r.mapcalc "GL_ice_all = (MSK > 50) & ((x()-y()) > 520000)" # Limit to ice and remove Canada
# r.clump input=GL_ice output=clumps --o
# main_clump=$(r.stats -c -n clumps sort=desc | head -n2 | tail -n1 | cut -d" " -f1)
# r.mapcalc "GL_ice = if(clumps == ${main_clump}, 1, null())"
# r.mask raster=GL_ice --o
r.mapcalc "MASK = if(GL_ice_all == 1)" --o
# if only X % of a cell is ice, scale by that.
r.mapcalc "scale_mask = (GL_ice_all * MSK) / 100"
# scale
## units are mm.w.eq. per grid cell. Grid cell areas are in km^2
## + mm.w.eq. -> m w.eq.: /1E3
## + m w.eq -> kg: *1E3
## + area in km^2 -> m^2: *1E3*1E3
## + kg -> Gt: /1E12
# ds = ds/1E3 * 1E3 * ds['AREA']*1E3*1E3 / 1E12
for var in SF RF RU SU ME SMB EVA CON DEP SUB; do
r.mapcalc "${var} = (${var}/1000) * 1000 * (AREA * 1000*1000) * scale_mask / exp(10,12)"
done
r.mask -r
r.mapcalc "RFZ = ME + RF - RU"
for var in SF RF RU ME SMB EVA CON DEP SUB RFZ; do
echo ${var} $(r.univar -g ${var} | grep sum)
done
SF sum=678.472341306034
RF sum=41.0073369748482
RU sum=433.411271134275
ME sum=594.819117205514
SMB sum=232.245706856329
EVA sum=7.43645901936729
CON sum=2.02922271273767
DEP sum=12.3770587084991
SUB sum=60.0712550947222
RFZ sum=202.41518304609
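The same reduction sketched in xarray as a cross-check; this assumes the MAR file carries the variable names used above (mass fluxes in mm w.eq. per cell, AREA in km²) and only approximates the ice mask (no Canada removal).
import xarray as xr

ds = xr.open_dataset('dat/MARv3.12-GRD-15km-annual.nc4')
ds = ds.isel(TIME=slice(30, 40)).mean(dim='TIME')  # 2010-2019

scale = ds['MSK'].where(ds['MSK'] > 50) / 100  # fractional ice cover per cell
for var in ['SF', 'RF', 'RU', 'SU']:
    # mm w.eq. -> m (/1e3) -> kg (*1e3 kg/m^3); km^2 -> m^2 (*1e6); kg -> Gt (/1e12)
    gt = (ds[var] / 1e3) * 1e3 * (ds['AREA'] * 1e6) * scale / 1e12
    print(var, float(gt.sum()))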
grass ./G_GL/PERMANENT
g.mapset -c Wang_2024
tif_list=$(find ~/data/Wang_2024 -name "????.tif")
t=$(echo $tif_list | tr ' ' '\n' | head -n1) # debug
for t in ${tif_list}; do
dirname=$(basename $(dirname ${t}))
fname=$(basename ${t})
fname=${fname%.*}
tname=g_${dirname}_${fname} # add g_ because "79N" is not a valid name
r.in.gdal input=${t} output=${tname}
done
g.region raster=$(g.list type=raster sep=,) -pa
r.series input=$(g.list type=raster sep=,) output=melt method='average'
r.colors -a map=melt color=viridis
r.mapcalc "area = area()"
## Melt data is m/year
## Multiply by area (m^2) to get m^3/yr, then by 1000 kg/m^3 to get kg/yr; /1E12 for Gt/yr
r.mapcalc "melt = melt * 1000 * area / exp(10,12)" --o
r.mapcalc "melt_on = if(melt > 0, melt, null())"
r.mapcalc "freeze_on = if(melt < 0, melt, null())"
echo "NET"
r.univar -gt map=melt | cut -d"|" -f11
echo ""
echo "FREEZE_ON"
r.univar -gt map=freeze_on | cut -d"|" -f11
echo ""
echo "MELT_OFF"
r.univar -gt map=melt_on | cut -d"|" -f11
NET
sum 33.4127947245078

FREEZE_ON
sum -2.68199438110646

MELT_OFF
sum 36.094789105614
import xarray as xr
ds = xr.open_dataset("~/data/Dohne_2023/GIS_GMB_grid.nc")
ds['dm'] = ds['dm'] * ds['area']
ds = ds.sel({'time':slice('2010-01-01','2019-12-31')})
ds = ds['dm'].to_dataset()
ds = ds['dm'].sum(dim=['x','y'])/1E12
ds = ds - ds.values[0]
_ = ds.plot()
ds = ds.resample({'time':'YS'}).mean()
ds = ds.diff(dim='time')
print(ds.mean())
<xarray.DataArray 'dm' ()> Size: 8B
array(-250.12027707)
import pandas as pd
from datetime import datetime, timedelta
df = pd.read_csv("~/data/GRACE/greenland_mass_200204_202409.txt",
comment="H", parse_dates=True, sep="\s+", header=None,
names=['year','mass','err'])
# Function to convert year.frac to ISO format (YYYY-MM-DD)
def year_frac_to_iso(year_frac):
year = int(year_frac)
frac = year_frac - year
start_of_year = datetime(year, 1, 1)
days_in_year = (datetime(year + 1, 1, 1) - start_of_year).days
date = start_of_year + timedelta(days=frac * days_in_year)
return pd.to_datetime(date.strftime('%Y-%m-%d'))
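# e.g. year_frac_to_iso(2010.5) -> Timestamp('2010-07-02')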
# Apply the conversion to the 'Year' column
df['date'] = df['year'].apply(year_frac_to_iso)
df = df.drop(columns=['year'])
df = df.set_index('date')
df = df['mass']
# df.resample('D').mean().interpolate()
df = df['2010-01-01':'2019-12-31']
df = df - df.max()
# df.head()
_ = df.plot()
print(df.resample('YS').mean().diff().mean())
-264.96212962962966
!date
fname="https://thredds.geus.dk/thredds/fileServer/MassBalance/MB_SMB_D_BMB.csv"
df = pd.read_csv(fname, index_col=0, parse_dates=True)[['MB','MB_err']]
df = df\
.resample('1D').interpolate().resample('YS').sum()
print("2010s")
print(df['2010-01-01':'2019-12-31'].mean())
Fri Dec 20 09:56:38 AM EST 2024
2010s
MB       -246.172157
MB_err     94.196209
dtype: float64
ID | Term | East_g | West_g | Peninsula_g | East_s | West_s | Peninsula_s | Unc | IO | Period | Source |
---|---|---|---|---|---|---|---|---|---|---|---|
RF | Rainfall | 1 | 1 | 2 | 1 | 1 | 2 | 15 | I | 2010-2019 | fettweis_2020 |
CD | Condensation | 1 | 1 | 1 | 1 | 1 | 1 | 15 | I | 2010-2019 | fettweis_2020 |
DP | Deposition | 37 | 24 | 6 | 6 | 6 | 2 | 15 | I | 2010-2019 | fettweis_2020 |
SF | Snowfall | 1392 | 724 | 282 | 172 | 180 | 57 | 15 | I | 2010-2019 | fettweis_2020 |
RFZ | Refreezing | 15 | 5 | 19 | 26 | 10 | 32 | 15 | | 2010-2019 | fettweis_2020 |
EV | Evaporation | 1 | 1 | 1 | 1 | 1 | 1 | 15 | O | 2010-2019 | fettweis_2020 |
RU | Runoff | 1 | 1 | 2 | 2 | 1 | 4 | 15 | O | 2010-2019 | fettweis_2020 |
SU | Sublimation | 151 | 33 | 13 | 23 | 9 | 4 | 15 | O | 2010-2019 | fettweis_2020 |
BMB | Grounded basal melting | 47 | 19 | 3 | 0 | 0 | 0 | 30 | O | | van-liefferinge_2013 |
DISCH | Discharge | 1147 | 902 | 292 | 0 | 0 | 0 | 5 – 50 | | 2008-2019 | davison_2023 (to shelves) + rignot_2019 (grounded + islands) |
CALV | Calving | 223 | 46 | 139 | 694 | 567 | 104 | 5 | O | 2010-2019 | greene_2022 + rignot_2019 discharge (grounded + islands) |
FRONTMELT | Frontal melting | 0 | 0 | 0 | 0 | 0 | 0 | | O | | |
SHELFMELT | Sub-shelf melting | 0 | 0 | 0 | 527 | 684 | 164 | 150 | O | 2010-2017 | paolo_2023 |
SHELFFREEZE | Sub-shelf freeze-on | 0 | 0 | 0 | 208 | 147 | 11 | 300 | I | 2010-2017 | paolo_2023 |
GZRET | Grounding line retreat | 1 | 45 | 1 | 0 | 0 | 0 | 15 | O | 1997-2021 | davison_2023 (only Pine Island, Thwaites, Crosson, and Dotson) |
FRONTRET | Frontal retreat | 0 | 0 | 0 | 69 | 206 | 125 | 5 | O | 2010-2021 | greene_2022 |
FRONTADV | Frontal advance | 0 | 0 | 0 | 192 | 2 | 1 | 5 | I | 2010-2021 | greene_2022 |
Split the AQ table above into East, West, Peninsula, and All CSVs, combining shelf and grounded columns.
import numpy as np
import pandas as pd
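# `aq` is the AQ table above, passed in via an org-babel :var header (assumed).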
aq = np.array(aq)
df = pd.DataFrame(aq[1:,1:], index=aq[1:,0], columns=aq[0,1:])
df.index.name = 'ID'
cols = ['East_g','East_s','West_g','West_s','Peninsula_g','Peninsula_s']
df[cols] = df[cols].astype(int)
df['All'] = df[cols].sum(axis='columns')
df['E'] = df[['East_g','East_s']].sum(axis='columns')
df['W'] = df[['West_g','West_s']].sum(axis='columns')
df['P'] = df[['Peninsula_g','Peninsula_s']].sum(axis='columns')
df = df.drop(columns=['IO', 'Period', 'Source'])
df = df.drop(columns=cols)
def custom_round(x, base=5):
if (x > 0) and (x < base): x = base
return int(base * round(float(x)/base))
cols = ['All','E','W','P']
for c in cols: df[c] = df[c].apply(lambda x: custom_round(x, base=5))
for c in cols:
df[['Term',c]].rename(columns={c:'Value'}).to_csv('./dat/aq_' + c + '.csv')
df
ID | Term | Unc | All | E | W | P |
---|---|---|---|---|---|---|
RF | Rainfall | 15 | 10 | 5 | 5 | 5 |
CD | Condensation | 15 | 5 | 5 | 5 | 5 |
DP | Deposition | 15 | 80 | 45 | 30 | 10 |
SF | Snowfall | 15 | 2805 | 1565 | 905 | 340 |
RFZ | Refreezing | 15 | 105 | 40 | 15 | 50 |
EV | Evaporation | 15 | 5 | 5 | 5 | 5 |
RU | Runoff | 15 | 10 | 5 | 5 | 5 |
SU | Sublimation | 15 | 235 | 175 | 40 | 15 |
BMB | Grounded basal melting | 30 | 70 | 45 | 20 | 5 |
DISCH | Discharge | 5 – 50 | 2340 | 1145 | 900 | 290 |
CALV | Calving | 5 | 1775 | 915 | 615 | 245 |
FRONTMELT | Frontal melting | | 0 | 0 | 0 | 0 |
SHELFMELT | Sub-shelf melting | 150 | 1375 | 525 | 685 | 165 |
SHELFFREEZE | Sub-shelf freeze-on | 300 | 365 | 210 | 145 | 10 |
GZRET | Grounding line retreat | 15 | 45 | 5 | 45 | 5 |
FRONTRET | Frontal retreat | 5 | 400 | 70 | 205 | 125 |
FRONTADV | Frontal advance | 5 | 195 | 190 | 5 | 5 |
import numpy as np
import pandas as pd
aq = np.array(aq)
df = pd.DataFrame(aq[1:,1:], index=aq[1:,0], columns=aq[0,1:])
df.index.name = 'ID'
df = df.drop(columns=['Source', 'Period', 'Unc'])
df = df.drop(['RFZ'])
cols = ['East_g','East_s','West_g','West_s','Peninsula_g','Peninsula_s']
df[cols] = df[cols].astype(int)
for roi in ['East','West','Peninsula']:
df.loc['DISCH',roi+'_g'] = df.loc['DISCH',roi+'_g'] - df.loc['CALV',roi+'_g']
# df.loc['CALV', 'West_s'] = df.loc['CALV', 'West_s'] + df.loc['CALV', 'West_g']; df.loc['CALV', 'West_g'] = 0
# df.loc['CALV', 'East_s'] = df.loc['CALV', 'East_s'] + df.loc['CALV', 'East_g']; df.loc['CALV', 'East_g'] = 0
# df.loc['CALV', 'Peninsula_s'] = df.loc['CALV', 'Peninsula_s'] + df.loc['CALV', 'Peninsula_g']; df.loc['CALV', 'Peninsula_g'] = 0
# # df.loc['CALV', 'West_s'] = df.loc['CALV', 'West_s'] + df.loc['CALV', 'West_g'];
# df.loc['CALV', 'West_g'] = 0
# # df.loc['CALV', 'East_s'] = df.loc['CALV', 'East_s'] + df.loc['CALV', 'East_g'];
# df.loc['CALV', 'East_g'] = 0
# # df.loc['CALV', 'Peninsula_s'] = df.loc['CALV', 'Peninsula_s'] + df.loc['CALV', 'Peninsula_g'];
# df.loc['CALV', 'Peninsula_g'] = 0
# disch = df.loc['DISCH']['All_g'] - df.loc['CALV']['All_g']
df['All_g'] = df[['East_g','West_g','Peninsula_g']].sum(axis='columns')
df['All_s'] = df[['East_s','West_s','Peninsula_s']].sum(axis='columns')
df['All'] = df['All_g'] + df['All_s']
df['East'] = df['East_g'] + df['East_s']
df['West'] = df['West_g'] + df['West_s']
df['Peninsula'] = df['Peninsula_g'] + df['Peninsula_s']
def custom_round(x, base=5):
if (x > 0) and (x < base): x = base
return int(base * round(float(x)/base))
cols = ['All', 'All_g', 'East', 'East_g', 'West', 'West_g', 'Peninsula', 'Peninsula_g']
# df.loc['Net'] = df[cols]
da = df[df['IO'] == 'I'][cols].sum() - df[df['IO'] == 'O'][cols].sum()
for i in da.index:
if i[-1] == 'g': da[i] = da[i] - (df.loc['DISCH',i] + df.loc['DISCH',i[:-1] + 's'])
# if i[-1] == 's': da[i] = da[i] + df.loc['DISCH',i[:-1]+'g']
for i in ['All','East','West','Peninsula']:
da[i + '_s'] = da[i] - da[i + '_g']
da = da.sort_index()
da = da.apply(lambda x: custom_round(x, base=5))
df = pd.DataFrame(index = ['Antarctica', 'East', 'West', 'Peninsula'],
columns = ['Grounded', 'Marine', 'Total'])
df.loc['Antarctica'] = da[['All_g','All_s','All']].values
df.loc['East'] = da[['East_g','East_s','East']].values
df.loc['West'] = da[['West_g','West_s','West']].values
df.loc['Peninsula'] = da[['Peninsula_g','Peninsula_s','Peninsula']].values
df
Region | Grounded | Marine | Total |
---|---|---|---|
Antarctica | -190 | -260 | -450 |
East | 85 | 190 | 270 |
West | -250 | -275 | -525 |
Peninsula | -20 | -175 | -195 |
trash G_AQ
[[ -e ./G_AQ ]] || grass -e -c EPSG:3031 ./G_AQ
grass ./G_AQ/PERMANENT
v.in.ogr input=${DATADIR}/NSIDC/NSIDC-0709.002/1992.02.07/IceBoundaries_Antarctica_v02.shp output=basins
g.region vector=basins res=10000 -pas
v.db.select map=basins|head
v.db.select -c map=basins columns=Regions | sort | uniq # East West Peninsula Islands
v.db.select -c map=basins columns=TYPE | sort | uniq # FL GR IS (float, ground, island)
v.to.rast input=basins output=east use=val val=1 where='(Regions == "East")'
v.to.rast input=basins output=west use=val val=2 where='(Regions == "West")'
v.to.rast input=basins output=peninsula use=val val=3 where='(Regions == "Peninsula")'
v.to.rast input=basins output=islands use=val val=4 where='(Regions == "Islands")'
r.patch input=east,west,peninsula,islands output=basins
r.category basins separator=":" rules=- << EOF
1:East
2:West
3:Peninsula
4:Islands
EOF
r.colors map=basins color=viridis
v.to.rast input=basins output=ground use=val val=1 where='(TYPE == "GR") or (TYPE == "IS")'
v.to.rast input=basins output=ground_noisland use=val val=1 where='(TYPE == "GR")'
Rignot 2019 provides discharge for Islands, but not by region. Here, determine island area per region, and percent of islands within each region. Then, for other values that are reported for all islands, split by area percent. This assumes all islands have the same flux (volume flow rate per unit area) for whatever property is divided up using this method.
r.patch input=east,west,peninsula output=main_ice
r.colors map=main_ice color=viridis
r.grow.distance input=main_ice value=main_ice_grow
r.mapcalc "islands_near = int(if(islands, main_ice_grow))"
r.stats --q -A -r -c -N input=islands_near
1 417
2 803
3 174

Total Cells = 417 + 803 + 174 = 1394
East = 417 / 1394 % = 29.9139167862 ~= 30
West = 803 / 1394 % = 57.6040172166 ~= 60
Peninsula = 174 / 1394 % = 12.4820659971 ~= 10
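The same split as a minimal Python sketch, using the cell counts above:
# Island cells nearest each region (r.stats output above).
cells = {'East': 417, 'West': 803, 'Peninsula': 174}
total = sum(cells.values())  # 1394
for region, n in cells.items():
    print(region, round(100 * n / total, 1), '%')  # ~30 / ~60 / ~10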
r.mask --o raster=ground
r.patch input=islands_near,main_ice output=grounded_with_islands
r.mask --o -i raster=ground
r.mapcalc "shelf_without_islands = main_ice"
r.mask -r
r.category map=basins | r.category map=shelf_without_islands rules=-
r.category map=basins | r.category map=grounded_with_islands rules=-
g.mapset -c MAR
ncdump -v TIME dat/MARv3.12-ANT-35km-annual.nc4 # 30-39 = 2010-2019
ncra --overwrite -d TIME,30,39 dat/MARv3.12-ANT-35km-annual.nc4 tmp/MAR_AQ.nc
ncdump -v X tmp/MAR_AQ.nc # 176
ncdump -v Y tmp/MAR_AQ.nc # 148
g.region w=$(( -3010000 - 17500 )) e=$(( 3115000 + 17500 )) s=$(( -2555000 - 17500 )) n=$(( 2590000 + 17500 )) res=35000 -p
var=SF # debug
for var in SF RF RU ME SMB EVA CON DEP SUB MSK AREA; do
r.in.gdal -o input=NetCDF:tmp/MAR_AQ.nc:${var} output=${var}
r.region -c map=${var}
done
r.mapcalc "MASK = if(MSK > 50)" --o
r.mapcalc "scale_mask = MSK / 100" # if only X % of a cell is ice, scale by that.
# scale
## units are mm.w.eq. per grid cell. Grid cell areas are in km^2
## + mm.w.eq. -> m w.eq.: /1E3
## + m w.eq -> kg: *1E3
## + area in km^2 -> m^2: *1E3*1E3
## + kg -> Gt: /1E12
# ds = ds/1E3 * 1E3 * ds['AREA']*1E3*1E3 / 1E12
for var in SF RF RU ME SMB EVA CON DEP SUB; do
r.mapcalc "${var} = (${var}/1000) * 1000 * (AREA * 1000*1000) * scale_mask / exp(10,12)"
done
r.mapcalc "RFZ = ME + RF - RU"
for mask in grounded_with_islands shelf_without_islands; do
echo $mask
r.mask --o raster=${mask}@PERMANENT --q
for var in RF CON DEP SF RFZ EVA RU SUB; do # SF RF RU EVA CON DEP SUB ME; do
echo -n "${var} ${mask}"
r.univar -gt map=${var} zones=${mask}@PERMANENT | cut -d"|" -f2,13 | column -s"|" -t | sed 's/label.*//'
r.univar -g ${var} | grep sum
echo "#"; echo "#"
done
done
r.mask -r --q
grounded_with_islands
RF   East 0.53462477161335     West 0.2532230323633     Peninsula 2.22781624112255   sum=3.0156640450992
CON  East 0.00144321189675     West 0.00241510084115    Peninsula 0.01323398293865   sum=0.01709229567655
DEP  East 36.9861991237577     West 23.8279628054373    Peninsula 5.8151846089547    sum=66.6293465381494
SF   East 1392.4748276417      West 723.601551820622    Peninsula 281.709413065019   sum=2397.78579252734
RFZ  East 14.6234218646823     West 5.16798014233454    Peninsula 19.3083482881789   sum=39.0997502951956
EVA  East 0.6060187163407      West 0.2013515636148     Peninsula 0.6982005019075    sum=1.505570781863
RU   East 1.53022074184155     West 0.0059355454226     Peninsula 1.8878783909651    sum=3.42403467822925
SUB  East 150.9735683004       West 32.9662640970179    Peninsula 12.5062719218602   sum=196.446104319277

shelf_without_islands
RF   East 0.842541426357001    West 0.4498711186449     Peninsula 2.307238481508     sum=3.59965102650989
CON  East 0.0031724865901      West 0.0019547481581     Peninsula 0.03522553443475   sum=0.04035276918295
DEP  East 5.70103389655939     West 6.05853236904585    Peninsula 1.5062480433876    sum=13.2658143089928
SF   East 172.41137746281      West 180.27593549343     Peninsula 56.6841289993761   sum=409.371441955615
RFZ  East 25.7408112537284     West 9.64777465721551    Peninsula 32.3899504973317   sum=67.7785364082756
EVA  East 0.70256713317005     West 0.2422464141299     Peninsula 0.70164322473235   sum=1.6464567720323
RU   East 1.5089940427256      West 0.0304982132294     Peninsula 4.35827769837335   sum=5.89776995432835
SUB  East 23.4661462650309     West 8.5418917438099     Peninsula 3.94097993607855   sum=35.9490179449194
Van Liefferinge (2013) http://doi.org/10.5194/cp-9-2335-2013
Convert MAT file to XYZ for importing into GRASS
import scipy as sp
import numpy as np
import pandas as pd
mat = sp.io.loadmat('/home/kdm/data/Van_Liefferinge_2023/Melt_Mean_Std_15exp.mat')
X = mat['X'].flatten() * 1E3 # convert from km to m
Y = mat['Y'].flatten() * 1E3
m = mat['MeanMelt'].flatten() / 10 # mm to cm
e = mat['StdMelt'].flatten() / 10 # mm to cm
melt = pd.DataFrame(np.array([X,Y,m,e]).T, columns=['x','y','melt','err']).dropna()
melt.to_csv('./tmp/melt.csv', header=False, index=False)
melt.head()
 | x | y | melt | err |
---|---|---|---|---|
148741 | 1.045e+06 | -2.14e+06 | 1e-09 | 1.71243e-25 |
149859 | 1.03e+06 | -2.135e+06 | 0.00146608 | 0.000148305 |
149860 | 1.035e+06 | -2.135e+06 | 0.000266042 | 0.000389444 |
149861 | 1.04e+06 | -2.135e+06 | 1e-09 | 1.71243e-25 |
149862 | 1.045e+06 | -2.135e+06 | 0.00045698 | 0.000668948 |
grass ./G_AQ/PERMANENT
g.mapset -c liefferinge_2023
r.in.xyz input=./tmp/melt.csv output=melt sep=, --o
r.in.xyz input=./tmp/melt.csv output=err z=4 sep=, --o
echo "All: " $(r.univar -g map=melt | grep sum)
echo "All: " $(r.univar -g map=err | grep sum)
# echo ""
r.univar -gt map=melt zones=basins | cut -d"|" -f2,13 | column -s"|" -t
All: sum=69.3982306335468   (melt)
All: sum=20.0261054475124   (err)

label      sum
East       46.7540492694752
West       18.8528624157926
Peninsula  3.18704264192471
Islands    0.279139711405429
Uncertainty is 20/69 = 0.289855072464, i.e. ~30 %.
- Discharge is “grounded discharge”
- Input to ice shelves where ice shelves exist
- Calving (similar to Greenlandic discharge) where ice shelves do not exist.
import numpy as np
import pandas as pd
df = pd.read_excel("~/data/Rignot_2019/pnas.1812883116.sd01.xlsx", index_col=0)
##############################################################################
###
### cleanup
###
df = df.loc[df.index.dropna()]
for i in [0,0,0]: # drop Excel rows 2,3,4
df = df.drop(index=df.index[i])
# Drop super-shelves and rename indented sub-shelves
super_shelf = ["LarsenB", "Wordie", "Ronne", "Ross West", "Ross East", "Amery_Ice_Shelf", "Filchner", "AP", "WAIS", "EAIS", "TOTAL SURVEYED"]
df = df.drop(index=super_shelf)
for i in df.index:
if i[0] == ' ': df = df.rename(index={i: i.strip()})
for c in df.columns: # Drop extra columns
if 'Unnamed' in str(c):
df = df.drop(columns=c)
df = df.drop(columns=["Basin.1", "σ SMB", "σ D", "D type"]) # Drop unused columns
##############################################################################
# Green color = no ice shelf
noshelf = ["West_Graham_Land", "Eastern_Graham_Land", "Hektoria_Headland", "Evans_Headland", "Drygalski_Headland", "LarsenA", "Rydberg_Peninsula", "Zonda_Eureka", "Cape_Jeremy", "Wilkins_George_VI", "Wilkins_Island", "Thomson", "Fox", "Cooke", "Walgreen_Coast", "Lucchitta_Velasco", "Jackson-Perkins", "Frostman-Lord-Shuman-Anandakri", "Shirases_Coast", "Saunders_Coast", "Ross_East1", "Ross_East2", "Ross_East3", "Ross_East4", "Ross_East5", "Dry_Valleys", "Icebreaker-Fitzgerald", "Victoria_Land", "Oates_Coast", "Wilkes_Land", "Adelie_Coast", "Sabrina_Coast", "Clarie_Coast", "Law_Dome", "Budd_Coast", "Knox_Coast", "Ingrid_Christensen_Coast", "Wilhelm_II_Coast", "Enderby_Land", "Prince_Olav_Coast", "Mawson_Coast"]
df['shelf'] = 1
df.loc[noshelf, 'shelf'] = 0
# Sum numeric columns
df.loc['Sum'] = np.nan
for c in df.columns: # convert to numbers
try: df[c] = pd.to_numeric(df[c])
except: df.loc['Sum',c] = 'All'
cols = df.select_dtypes(include=[np.number]).columns.drop('shelf')
df.loc['Sum', cols] = df[cols].sum(axis='rows')
cols = df.columns[0:10].to_list()
cols.insert(3,'shelf')
df[cols].tail(10)
Glacier name | Basin | Region | Subregion | shelf | SMB | D | 1979 | 1980 | 1981 | 1982 | 1983 |
---|---|---|---|---|---|---|---|---|---|---|---|
Stancomb_Wills | K-A | East | Stancomb_Wills_Ice_Shelf | 1 | 22.03 | 21.39 | 25.261 | 24.7291 | 24.1972 | 23.6653 | 23.1333 |
Princess_Martha | K-A | East | Princess_Martha_Coast | 1 | 0.21 | 0.21 | 0.21 | 0.21 | 0.21 | 0.21 | 0.21 |
Coats_Coast | K-A | East | Coats_Coast | 1 | 6.43 | 6.43 | 6.43 | 6.43 | 6.43 | 6.43 | 6.43 |
Academy | J”-K | East | Filchner_Ice_Shelf | 1 | 24.27 | 24.1 | 24.27 | 24.2505 | 24.231 | 24.2115 | 24.1921 |
Support_Force | J”-K | East | Filchner_Ice_Shelf | 1 | 9.72 | 9.361 | 9.72 | 9.73057 | 9.74115 | 9.75172 | 9.7623 |
Recovery | J”-K | East | Filchner_Ice_Shelf | 1 | 41.05 | 41.05 | 41.05 | 41.1242 | 41.1983 | 41.2725 | 41.3467 |
Slessor | J”-K | East | Filchner_Ice_Shelf | 1 | 26.11 | 24.916 | 26.11 | 26.1256 | 26.1412 | 26.1568 | 26.1724 |
Bailey | J”-K | East | Filchner_Ice_Shelf | 1 | 8.98 | 8.61 | 8.98 | 9.00126 | 9.02252 | 9.04378 | 9.06504 |
Islands | nan | Islands | Islands | 1 | 76.9899 | 76.9899 | 76.9899 | 76.9899 | 76.9899 | 76.9899 | 76.9899 |
Sum | All | All | All | nan | 2097.57 | 2236.96 | 2126.64 | 2133.49 | 2140.25 | 2147.01 | 2153.77 |
- WARNING: Using shelf vs. non-shelf is important and can be done for all AQ, but Rignot “Island” discharge (~77 Gt/year) doesn’t provide enough metadata to break down by east/west/peninsula.
- Instead, for all islands in NSIDC-0709.002 product, find their region (east, west, peninsula), and calculate area of islands in each region, and then split values by area. That assumes all islands have the same flux (volume flow rate per unit area).
<<load_rignot>>
c = np.arange(2010,2017+1)
dd = df.groupby(['shelf','Region']).sum().drop(columns=['Basin','Subregion'])[c].mean(axis='columns')
dd.loc['Non-shelf discharge'] = dd.loc[0,:].sum()
dd.loc['shelf discharge'] = dd.loc[1,:].sum()
dd['Total discharge'] = dd.loc[['Non-shelf discharge','shelf discharge']].sum()
dd
# df_shelf = df[df['shelf'] == 1][c].mean(axis='columns')
# df_noshelf = df[df['shelf'] == 0][c].mean(axis='columns')
# df_shelf
# print("Total discharge: ", df[df['shelf'] >= 0][c].mean(axis='columns').sum())
# print('Shelf discharge: ', df_shelf.sum())
# print('Non-shelf discharge: ', df_noshelf.sum())
shelf  Region
0.0    East          177.140000
       Peninsula     131.398873
       West           23.003920
1.0    East          926.544384
       Islands        76.989900
       Peninsula     205.140504
       West          768.695078
Non-shelf discharge   331.542793
shelf discharge      1977.369866
Total discharge      2308.912659
dtype: float64
Non-shelf discharge from Rignot is:
Region | Values | Sum | Comment |
---|---|---|---|
East | 177 + 77*0.6 | 223.2 | East non-shelf discharge + 60 % of islands |
West | 23 + 77*0.3 | 46.1 | West non-shelf discharge + 30 % of islands |
Peninsula | 131 + 77*0.1 | 138.7 | Peninsula + 10 % islands |
This is steady-state discharge from grounded ice to shelves.
import numpy as np
import pandas as pd
fname = '~/data/Davison_2023/adi0186_table_s2.xlsx'
loc = pd.read_excel(fname, sheet_name='Total mass changes', index_col = 0, usecols = 'B,C,D', skiprows = 4)
loc = loc.drop('Antarctic Ice Shelves')
df = pd.read_excel(fname, sheet_name='Discharge', index_col = 1, skiprows = 3)
df = df[df.columns[1::2]]
df.columns = [np.floor(c).astype(int) for c in df.columns]
df = df.drop(index=df.index[0])
df = df.drop(index='Antarctic Ice Shelves')
df = df[np.arange(2010,2020)].mean(axis='columns')
df.name = 'Mass'
df
Abbot        32.473268
Ainsworth     0.157966
Alison        2.985331
Amery        78.564587
Andreyev      2.207105
               ...
Withrow       0.480019
Wordie        7.754308
Wylde         0.005026
Zelee         0.42351
Zubchatyy     0.469816
Name: Mass, Length: 162, dtype: object
<<load_davison_discharge>>
df = loc.join(df)
import geopandas as gpd
fname = '~/data/NSIDC/NSIDC-0709.002/1992.02.07/IceBoundaries_Antarctica_v02.shp'
ew = gpd.read_file(fname)
df = gpd.GeoDataFrame(df, geometry=gpd.points_from_xy(df['longitude'],df['latitude']), crs="EPSG:4326")
df = df.drop(columns=['latitude','longitude'])
df = df.to_crs('epsg:3031')
ew = ew.to_crs('epsg:3031')
idx = ew.sindex.nearest(df['geometry'], return_all=False)
df['Region'] = ''
for dfidx,ewidx in idx.T:
arr = df.iloc[dfidx].copy(deep=True)
arr['Region'] = ew.iloc[ewidx]['Regions']
df.iloc[dfidx] = arr
# df.loc['Total'] = [df['Mass'].sum(), None, 'All']
dd = df[['Mass','Region']].groupby('Region').sum()
dd.loc['Total'] = dd.sum()
dd
Region | Mass |
---|---|
East | 923.794 |
Islands | 1.28338 |
Peninsula | 152.536 |
West | 857.468 |
Total | 1935.08 |
Total discharge is then
Region | Rignot (Ground-to-ocean + Islands) | Davison (Ground-to-shelf) | Sum |
---|---|---|---|
East | 223 | 924 | 1147 |
West | 46 | 856 | 902 |
Peninsula | 139 | 153 | 292 |
Total | | | 2341 |
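A quick check of the sums (values as tabulated above):
# Rignot non-shelf + islands share, plus Davison grounded-to-shelf discharge.
rignot  = {'East': 223, 'West': 46, 'Peninsula': 139}
davison = {'East': 924, 'West': 856, 'Peninsula': 153}
per_region = {r: rignot[r] + davison[r] for r in rignot}
print(per_region)                # {'East': 1147, 'West': 902, 'Peninsula': 292}
print(sum(per_region.values()))  # 2341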
import pandas as pd
fname = "/home/kdm/data/Greene_2022/data/greene_Supplementary_Table_1.xlsx"
df = pd.read_excel(fname, index_col=1, skiprows=4)
df = df.drop(index=df.index[0])
df = df.drop(index=['Antarctica'])
df = df[df.columns[[1,2,9]]]
df.columns = ['latitude','longitude','Mass']
import geopandas as gpd
fname = '~/data/NSIDC/NSIDC-0709.002/1992.02.07/IceBoundaries_Antarctica_v02.shp'
ew = gpd.read_file(fname)
df = gpd.GeoDataFrame(df, geometry=gpd.points_from_xy(df['longitude'],df['latitude']), crs="EPSG:4326")
df = df.to_crs('epsg:3031')
ew = ew.to_crs('epsg:3031')
idx = ew.sindex.nearest(df['geometry'], return_all=False)
df['Region'] = ''
for dfidx,ewidx in idx.T:
arr = df.iloc[dfidx].copy(deep=True)
arr['Region'] = ew.iloc[ewidx]['Regions']
df.iloc[dfidx] = arr
df = df.drop(columns=['latitude','longitude'])
# df.loc['Total'] = [df['Mass'].sum(), None, 'All']
dd = df[['Mass','Region']].groupby('Region').sum()
dd.loc['Total'] = dd.sum(axis='rows')
dd
           Mass
Region
           44.705315
East       694.100336
Islands    1.518034
Peninsula  103.675815
West       566.997529
Total      1410.997028
The above is shelf calving
Total calving is shelf calving (Greene) + non-shelf calving (331; Rignot) + islands (77; Rignot)
Region | Values | Sum | Comment |
---|---|---|---|
East | 694 + 223 | 917 | East shelf calving + East non-shelf discharge (incl. 60 % of islands) |
West | 567 + 46 | 613 | West shelf calving + West non-shelf discharge (incl. 30 % of islands) |
Peninsula | 104 + 139 | 243 | Peninsula shelf calving + Peninsula non-shelf discharge (incl. 10 % of islands) |
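And the corresponding arithmetic check (island shares as in the discharge split above):
shelf    = {'East': 694, 'West': 567, 'Peninsula': 104}   # greene_2022
nonshelf = {'East': 223, 'West': 46,  'Peninsula': 139}   # rignot_2019 incl. islands share
calv = {r: shelf[r] + nonshelf[r] for r in shelf}
print(calv)                # {'East': 917, 'West': 613, 'Peninsula': 243}
print(sum(calv.values()))  # 1773, vs. 1775 after rounding each region to 5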
From p.3 of citet:greene_2022 “Antarctica has experienced a net loss of 5,874 ± 396 Gt of ice owing to calving”
396/5874 % = 6.74157303371
From the data, cells K189 & L189 = 1411.0 ± 28.1, or 28/1411 % = 1.98440822112
import xarray as xr
ds = xr.open_mfdataset("~/data/Paolo_2023/ANT_G1920V01_IceShelfMelt.nc")
ds = ds[['melt','melt_err']].sel({'time':slice('2010-01-01','2017-12-31')}).mean(dim='time')
delayed_obj = ds.to_netcdf('tmp/shelf_melt.nc', compute=False)
from dask.diagnostics import ProgressBar
with ProgressBar():
results = delayed_obj.compute()
print(ds)
[########################################] | 100% Completed | 5.35 s
<xarray.Dataset> Size: 68MB
Dimensions:   (y: 2916, x: 2916)
Coordinates:
  * x         (x) float64 23kB -2.798e+06 -2.796e+06 ... 2.796e+06 2.798e+06
  * y         (y) float64 23kB 2.798e+06 2.796e+06 ... -2.796e+06 -2.798e+06
Data variables:
    melt      (y, x) float32 34MB dask.array<chunksize=(486, 486), meta=np.ndarray>
    melt_err  (y, x) float32 34MB dask.array<chunksize=(486, 486), meta=np.ndarray>
g.mapset -c Paolo_2023
ncdump -v x tmp/shelf_melt.nc # 2916x2916
ncdump -v y tmp/shelf_melt.nc
x0=-2798407.5
x1=2798392.5
y0=-2798392.5
y1=2798407.5
g.region w=$(( -2798407 - 960 )) e=$(( 2798392 + 960 )) s=$(( -2798392 - 960 )) n=$(( 2798407 + 960 )) res=1920 -p
r.mapcalc "area = area()"
r.in.gdal -o input=NetCDF:tmp/shelf_melt.nc:melt output=melt
r.in.gdal -o input=NetCDF:tmp/shelf_melt.nc:melt_err output=err
r.region -c map=melt
r.region -c map=err
## melt is m/yr: * 1000 kg/m^3 * area (m^2) -> kg/yr; / 1E12 -> Gt/yr
r.mapcalc "melt = melt * 1000 * area / exp(10,12)" --o
r.mapcalc "err = err * 1000 * area / exp(10,12)" --o
r.mapcalc "melt_on = if(melt > 0, melt, null())"
r.mapcalc "err_on = if(melt > 0, err, null())"
r.mapcalc "melt_off = if(melt < 0, melt, null())"
r.mapcalc "err_off = if(melt < 0, err, null())"
r.colors -ae map=melt color=difference
r.colors -ge map=melt_on color=viridis
r.colors -ge map=melt_off color=viridis
# d.rast melt
# d.rast melt_on
# d.rast melt_off
r.mapcalc "basins = if((basins@PERMANENT == 1) | (basins@PERMANENT == 11), 1, 0)"
r.mapcalc "basins = if((basins@PERMANENT == 2) | (basins@PERMANENT == 12), 2, basins)"
r.mapcalc "basins = if((basins@PERMANENT == 3) | (basins@PERMANENT == 13), 3, basins)"
r.colors map=basins color=viridis
r.category basins separator=":" rules=- << EOF
1:East
2:West
3:Peninsula
EOF
r.grow.distance input=basins value=basins_grow distance=10 --q
r.mapcalc "basins_grow = int(basins_grow)" --q
r.category map=basins | r.category map=basins_grow rules=- --q
echo "NET"
r.univar -gt map=melt zones=basins_grow | cut -d"|" -f2,13 | column -s"|" -t | sed 's/label.*//'
# r.univar -gt map=err zones=basins | cut -d"|" -f2,13 | column -s"|" -t | sed 's/label.*//'
r.univar -g melt | grep sum
r.univar -g err | grep sum
echo ""
echo "FREEZE_ON"
r.univar -gt map=melt_on zones=basins_grow | cut -d"|" -f2,13 | column -s"|" -t | sed 's/label.*//'
# r.univar -gt map=err_on zones=basins | cut -d"|" -f2,13 | column -s"|" -t | sed 's/label.*//'
r.univar -g melt_on | grep sum
r.univar -g err_on | grep sum
echo ""
echo "MELT_OFF"
r.univar -gt map=melt_off zones=basins_grow | cut -d"|" -f2,13 | column -s"|" -t | sed 's/label.*//'
# r.univar -gt map=err_off zones=basins | cut -d"|" -f2,13 | column -s"|" -t | sed 's/label.*//'
r.univar -g melt_off | grep sum
r.univar -g err_off | grep sum
NET
East       -319.34788697967
West       -537.161194600709
Peninsula  -153.245144876904
sum=-1009.75422645726   (melt)
sum=3041.55065208086    (err)

FREEZE_ON
East       207.669949989514
West       146.976649882162
Peninsula  10.8694466267689
sum=365.516046498447    (melt_on)
sum=1086.24089094716    (err_on)

MELT_OFF
East       -527.017836969183
West       -684.137844482863
Peninsula  -164.114591503673
sum=-1375.27027295574   (melt_off)
sum=1955.30976113372    (err_off)
Email from Davison
Ice Shelf | Mass change due to grounding line migration from 1997 to 2021 (Gt) | Error (Gt) |
---|---|---|
Pine Island | 220 | 40 |
Thwaites | 230 | 25 |
Crosson | 200 | 25 |
Dotson | 420 | 80 |
(220+230+200+420)/(2021-1997) = 44.5833333333
Uncertainty: p. 3 of citet:davison_2023 “grounding line retreat (1070 ± 170 Gt),”
170/1070 % = 15.8878504673
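As a quick check, the per-year rate and the fractional uncertainty:
# Mass change due to grounding line migration, 1997-2021 (Davison email above).
total = 220 + 230 + 200 + 420  # Gt over 24 years
print(total / (2021 - 1997))   # ~44.6 Gt/yr
print(100 * 170 / 1070)        # ~15.9 % fractional uncertainty (davison_2023)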
import pandas as pd
import numpy as np
fname = "/home/kdm/data/Greene_2022/data/greene_Supplementary_Table_1.xlsx"
df = pd.read_excel(fname, index_col=1, skiprows=4)
##############################################################################
###
### cleanup
###
df = df.drop(index=df.index[0])
df = df.drop(index=['Antarctica'])
lon = df['Unnamed: 3']
lat = df['Unnamed: 2']
for c in df.columns: # Drop extra columns
if 'Unnamed' in str(c):
df = df.drop(columns=c)
if 'Gt/yr' in str(c):
df = df.drop(columns=c)
if ('control run' in str(c)) | ('instantaneous' in str(c)):
df = df.drop(columns=c)
for c in df.columns:
if type(c) == str: df = df.drop(columns=c)
# df = df.drop(columns=[2000.75])
# df = df.drop(columns=[1997.75])
# df.columns = df.columns.round().astype(int)
##############################################################################
<<load_greene_2022_adv_ret>>
c = df.columns
diff = df.diff(axis='columns')[c]
diff_gain = diff[diff > 0].sum(axis='columns')
diff_loss = diff[diff < 0].sum(axis='columns')
diff_gain.name = 'Mass'
diff_loss.name = 'Mass'
df_gain = pd.DataFrame(diff_gain)
df_loss = pd.DataFrame(diff_loss)
df_net = df_loss + df_gain
print("Net:")
print('Mass gain', df_net[df_net > 0].sum(axis='rows').values)
print('Mass Loss', df_net[df_net < 0].sum(axis='rows').values)
print('Net mass change', df_net.sum(axis='rows').values)
dt = df.columns[-1] - df.columns[0]
print("\nPer year:")
print('Mass gain', df_net[df_net > 0].sum(axis='rows').values / dt)
print('Mass Loss', df_net[df_net < 0].sum(axis='rows').values / dt)
print('Net mass change', df_net.sum(axis='rows').values / dt)
Net:
Mass gain [4563.264309048649]
Mass Loss [-9434.897965610035]
Net mass change [-4871.633656561384]

Per year:
Mass gain [194.59549292318295]
Mass Loss [-402.3410646315572]
Net mass change [-207.74557170837417]
- Most numbers here match what’s in the publication
- Neither the total nor Ronne match.
- Here, total is 4871 Gt net change.
- Below, Ronne net loss is 1031
- From the paper (paragraph under Fig. 2)
- Total should be 5874 (missing 5874-4871 = 1003)
- Ronne should be 2034 (missing 2034-1031 = 1003)
- But Thwaites, Larsen C, and Ross West match paper, so it seems like I’m parsing the dataset correctly.
- Filchner matches mass gain.
Find the top 10 shelves by net and gross mass gain and loss, summed over the period.
tmp = pd.DataFrame(index=np.arange(10))
tmp['Net gain: Name'] = df_net.sort_values(by='Mass', ascending=False).head(10).index
tmp['Net gain: Mass'] = df_net.sort_values(by='Mass', ascending=False).head(10)['Mass'].values
tmp['Net loss: Name'] = df_net.sort_values(by='Mass', ascending=True).head(10).index
tmp['Net loss: Mass'] = df_net.sort_values(by='Mass', ascending=True).head(10)['Mass'].values
tmp['Gross gain: Name'] = df_gain.sort_values(by='Mass', ascending=False).head(10).index
tmp['Gross gain: Mass'] = df_gain.sort_values(by='Mass', ascending=False).head(10)['Mass'].values
tmp['Gross loss: Name'] = df_loss.sort_values(by='Mass', ascending=True).head(10).index
tmp['Gross loss: Mass'] = df_loss.sort_values(by='Mass', ascending=True).head(10)['Mass'].values
tmp
 | Net gain: Name | Net gain: Mass | Net loss: Name | Net loss: Mass | Gross gain: Name | Gross gain: Mass | Gross loss: Name | Gross loss: Mass |
---|---|---|---|---|---|---|---|---|
0 | Filchner | 1796.26 | Thwaites | -1968.41 | Ronne | 2762.7 | Ronne | -3794.31 |
1 | Amery | 569.883 | Larsen C | -1166.92 | Ross West | 1968.02 | Ross West | -2897.63 |
2 | Cook | 414.571 | Ronne | -1031.61 | Filchner | 1843.53 | Thwaites | -2248.52 |
3 | Shackleton | 369.773 | Ross West | -929.617 | Amery | 917.121 | Larsen C | -1619.1 |
4 | Brunt Stancomb | 362.787 | Wilkins | -622.156 | Ross East | 857.219 | Pine Island | -1231.67 |
5 | West | 278.333 | Larsen B | -530.038 | Pine Island | 780.117 | Ross East | -1135.23 |
6 | Jelbart | 169.419 | Pine Island | -451.554 | Brunt Stancomb | 493.663 | Ninnis | -675.229 |
7 | Fimbul | 148.221 | Mertz | -381.971 | Shackleton | 493.661 | Wilkins | -642.061 |
8 | Riiser-Larsen | 84.376 | Ninnis | -300.936 | Cook | 454.636 | Larsen B | -584.603 |
9 | Pourquoi Pas | 64.9883 | Larsen A | -286.377 | Larsen C | 452.177 | Mertz | -571.076 |
Now convert to Gt/year
for col in tmp.columns:
if 'Mass' in col: tmp[col] = tmp[col] / c.size
tmp
 | Net gain: Name | Net gain: Mass | Net loss: Name | Net loss: Mass | Gross gain: Name | Gross gain: Mass | Gross loss: Name | Gross loss: Mass |
---|---|---|---|---|---|---|---|---|
0 | Filchner | 74.8441 | Thwaites | -82.0169 | Ronne | 115.113 | Ronne | -158.096 |
1 | Amery | 23.7451 | Larsen C | -48.6219 | Ross West | 82.0007 | Ross West | -120.735 |
2 | Cook | 17.2738 | Ronne | -42.9837 | Filchner | 76.8138 | Thwaites | -93.6885 |
3 | Shackleton | 15.4072 | Ross West | -38.734 | Amery | 38.2134 | Larsen C | -67.4626 |
4 | Brunt Stancomb | 15.1161 | Wilkins | -25.9232 | Ross East | 35.7174 | Pine Island | -51.3196 |
5 | West | 11.5972 | Larsen B | -22.0849 | Pine Island | 32.5049 | Ross East | -47.3011 |
6 | Jelbart | 7.05912 | Pine Island | -18.8147 | Brunt Stancomb | 20.5693 | Ninnis | -28.1346 |
7 | Fimbul | 6.17586 | Mertz | -15.9155 | Shackleton | 20.5692 | Wilkins | -26.7525 |
8 | Riiser-Larsen | 3.51567 | Ninnis | -12.539 | Cook | 18.9432 | Larsen B | -24.3585 |
9 | Pourquoi Pas | 2.70785 | Larsen A | -11.9324 | Larsen C | 18.8407 | Mertz | -23.7948 |
<<green_2022_mean>> # provides df_net
# df = df_net
df['longitude'] = lon
df['latitude'] = lat
import geopandas as gpd
fname = '~/data/NSIDC/NSIDC-0709.002/1992.02.07/IceBoundaries_Antarctica_v02.shp'
ew = gpd.read_file(fname)
df = gpd.GeoDataFrame(df, geometry=gpd.points_from_xy(df['longitude'],df['latitude']), crs="EPSG:4326")
df = df.to_crs('epsg:3031')
ew = ew.to_crs('epsg:3031')
idx = ew.sindex.nearest(df['geometry'], return_all=False)
df['Region'] = ''
for dfidx,ewidx in idx.T:
arr = df.iloc[dfidx].copy(deep=True)
arr['Region'] = ew.iloc[ewidx]['Regions']
df.iloc[dfidx] = arr
df = df.drop(columns=['latitude','longitude','geometry'])
# df.loc['Total'] = [df['Mass'].sum(), None, 'All']
# df.groupby('Region').sum().round()
diff = df[c].diff(axis='columns')
diff_gain = diff[diff > 0].sum(axis='columns')
diff_loss = diff[diff < 0].sum(axis='columns')
diff_gain.name = 'Mass'
diff_loss.name = 'Mass'
df_gain = pd.DataFrame(diff_gain)
df_loss = pd.DataFrame(diff_loss)
df_net = df_loss + df_gain
df_gain['Region'] = df['Region']
df_loss['Region'] = df['Region']
df_net['Region'] = df['Region']
Net:
Mass gain [4563.264309048649]
Mass Loss [-9434.897965610035]
Net mass change [-4871.633656561384]

Per year:
Mass gain [194.59549292318295]
Mass Loss [-402.3410646315572]
Net mass change [-207.74557170837417]
for loc in ['East','West','Peninsula']:
print("\n", loc)
sub = (df_net['Mass'] > 0) & (df_net['Region'] == loc); print('Mass gain', df_net[sub].drop(columns='Region').sum().values/dt)
sub = (df_net['Mass'] < 0) & (df_net['Region'] == loc); print('Mass loss', df_net[sub].drop(columns='Region').sum().values/dt)
East
Mass gain [192.48919217062863]
Mass loss [-69.4663382466161]

West
Mass gain [1.923849262408337]
Mass loss [-206.1686424710849]

Peninsula
Mass gain [0.17175689689132026]
Mass loss [-124.70409821345615]
import xarray as xr
ds = xr.open_dataset("~/data/Groh_2021/AIS_GMB_grid.nc")
ds['dm'] = ds['dm'] * ds['area']
ds = ds['dm'].sum(dim=['x','y'])/1E12
df = ds.to_dataframe()
# df = df.loc['2002-01-01':'2024-12-31']
df = df - df.max()
_ = df.plot()
# df = df.resample('D').mean().interpolate().resample('YS').mean()
df = df.resample('YS').mean()
df = df.diff()
print(df.mean())
dm   -79.146737
dtype: float64
(save-excursion
(goto-char (point-min))
(re-search-forward (concat "^#\\+name: " tbl) nil t)
(forward-line 1)
(org-table-export (concat "./dat/" tbl ".csv") "orgtbl-to-csv")
;;(shell-command-to-string (concat "head " tbl ".csv"))
(message (concat "Exported " tbl " to " (concat "./dat/" tbl ".csv")))
)