On Ice Sheet Mass Flow


Introduction

Sankey diagrams for mass flow in Greenland and Antarctica.

Abstract & Methods

See ./ms.tex

Results

Note

All figure widths are proportional to mass flow, both within and between figures.

./fig_aq_gl.png

./fig_aq_parts.png

Processes and flow

flowchart.png

Implementation

Greenland

| ID          | Term                   | Value    | Unc | I/O | Period    | Source                                         |
|-------------|------------------------|----------|-----|-----|-----------|------------------------------------------------|
| RF          | Rainfall               | 40       | 15  | I_g | 2010-2019 | fettweis_2020                                  |
| CD          | Condensation           | 5        | 15  | I_g | 2010-2019 | fettweis_2020                                  |
| DP          | Deposition             | 10       | 15  | I_g | 2010-2019 | fettweis_2020                                  |
| SF          | Snowfall               | 680      | 15  | I_g | 2010-2019 | fettweis_2020                                  |
| EV          | Evaporation            | 5        | 15  | O_g | 2010-2019 | fettweis_2020                                  |
| RU          | Runoff                 | 435      | 15  | O_g | 2010-2019 | fettweis_2020                                  |
| SU          | Sublimation            | 60       | 15  | O_g | 2010-2019 | fettweis_2020                                  |
| RFZ         | Refreezing             | 200      | 15  |     | 2010-2019 | fettweis_2020                                  |
| BMB         | Grounded basal melting | 20       | 20  | O_g | steady    | karlsson_2021                                  |
| DISCH       | Discharge              | 485-20+5 | 10  |     | 2010-2019 | mankoff_2020_solid,kochtitzky_2023,bollen_2023 |
| CALV        | Calving                | 235      | 30  | O_o |           | rignot_2010                                    |
| FRONTMELT   | Frontal melting        | 235      | 30  | O_o |           | rignot_2010                                    |
| SHELFMELT   | Sub-shelf melting      | 25       | 40  | O_o | 2013-2022 | wang_2024                                      |
| SHELFFREEZE | Sub-shelf freeze-on    | 5        | 40  | I_o | 2013-2022 | wang_2024                                      |
| GZRET       | Grounding line retreat | 5        | ?   | O_g |           |                                                |
| FRONTRET    | Frontal retreat        | 50       | 4   | O_o | 2010-2020 | kochtitzky_2023                                |
| FRONTADV    | Frontal advance        | 0        |     | I_o | 2010-2020 | kochtitzky_2023                                |
Exported gl_baseline to ./dat/gl_baseline.csv

Reset

trash G_GL
[[ -e ./G_GL ]] || grass -e -c EPSG:3413 ./G_GL

Discharge

From total mass balance product

!date
import pandas as pd
fname="https://thredds.geus.dk/thredds/fileServer/MassBalance/MB_SMB_D_BMB.csv"
df = pd.read_csv(fname, index_col=0, parse_dates=True)[['D','D_err']]
df = df\
    .resample('1D').interpolate().resample('YS').sum()

print("2010s")
print(df['2010-01-01':'2019-12-31'].mean())
Wed Dec 18 07:11:05 AM EST 2024
2010s
D        485.373414
D_err     44.951388
dtype: float64

Then, subtract ~15 from the 485 computed above, based on citet:kochtitzky_2023, who report, in Section 3.3, 17 ± 6.8 and 14.5 ± 5.8, but note that “[b]ecause our fluxgates were typically located tens to hundreds of meters lower than those in the similar studies (King et al., 2018; Mankoff et al., 2020), the melt correction for these studies would be higher than values presented herein, although it is beyond the scope of the current study to determine what those values would be.”
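For bookkeeping, the Discharge entry in the table above (485-20+5) works out as below. This is a sketch; the attribution of the -20 and +5 terms is my reading of the surrounding text (frontal-melt correction and peripheral glaciers, respectively).

D_total  = 485  # Gt/yr, 2010-2019 mean from the GEUS MB product (computed above)
D_melt   = -20  # Gt/yr, frontal-melt correction following kochtitzky_2023
D_periph = +5   # Gt/yr, peripheral glaciers and ice caps (bollen_2023, next section)
print(D_total + D_melt + D_periph)  # 470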

Peripheral discharge (Bollen 2023)

Where are these glaciers?
grass ./G_GL/PERMANENT
g.mapset -c Bollen_2023

cat "${DATADIR}/Bollen_2023/GreenlandGIC_discharge_timeseries - Ellyn Enderlin.csv" \
    | cut -d, -f1-3 \
    | v.in.ascii input=- output=bollen_2023 separator=, skip=1 x=2 y=3 z=1
How much do they contribute?
import pandas as pd
data_root='/home/kdm/data'
path='Bollen_2023'
fname='GreenlandGIC_discharge_timeseries - Ellyn Enderlin.csv'
df = pd.read_csv(f"{data_root}/{path}/{fname}", index_col=0, header=[0])
df = df.sum(axis='rows')
df = df / 1E9 # per email from Ellyn, units are m^3/year. Convert to Gt.
df = df['2010':'2018']
df.mean()
5.209345977852399

Basal melt

GZ retreat

From Millan (2022) http://doi.org/10.5194/tc-16-3021-2022

  • Gz retreat is ~0.13 km/yr (Fig. 3a)
  • Ice velocity is ~1200 m/yr (Fig. 3b) (not needed)
  • 20 km wide

Rates are higher per Ciraci (2023) http://doi.org/10.1073/pnas.2220924120, but

  • Ice surface close to flotation near GZ, and shelf is ~500 m thick, so estimate 600 m ice.

Therefore, gz retreat in Gt/year is width * thick * retreat rate * density

frink "0.13 km/yr * 20 km * 600 m * 917 kg/m^3 -> Gt/yr"
1.43052

Assume similar from other ice shelves too, for a total of ~5 Gt/yr GZ retreat in Greenland.
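The same unit conversion in Python, for readers without frink (values from the bullets above):

retreat = 0.13e3  # m/yr grounding line retreat (Millan 2022, Fig. 3a)
width   = 20e3    # m, glacier width
thick   = 600.    # m, estimated ice thickness near the grounding line
rho     = 917.    # kg/m^3, ice density
print(retreat * width * thick * rho / 1e12)  # kg/yr -> Gt/yr: ~1.43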

SMB

g.mapset -c MAR

ncdump -v TIME dat/MARv3.12-GRD-15km-annual.nc4 # 30-39 = 2010-2019
ncra --overwrite -d TIME,30,39 dat/MARv3.12-GRD-15km-annual.nc4 tmp/MAR_GL.nc

ncdump -v X10_110 tmp/MAR_GL.nc # 101
ncdump -v Y20_200 tmp/MAR_GL.nc # 181
g.region w=$(( -645000 - 7500 )) e=$(( 855000 + 7500 )) s=$(( -3357928 - 7500 )) n=$((-657928 + 7500 )) res=15000 -p

var=SF # debug
for var in SF RF RU SU ME SMB EVA CON DEP SUB MSK AREA; do
  r.in.gdal -o input=NetCDF:tmp/MAR_GL.nc:${var} output=${var}
  r.region -c map=${var}
done

r.mapcalc "GL_ice_all = (MSK > 50) & ((x()-y()) > 520000)" # Limit to ice and remove Canada
# r.clump input=GL_ice output=clumps --o
# main_clump=$(r.stats -c -n clumps sort=desc | head -n2 | tail -n1 | cut -d" " -f1)
# r.mapcalc "GL_ice = if(clumps == ${main_clump}, 1, null())"
# r.mask raster=GL_ice --o
r.mapcalc "MASK = if(GL_ice_all == 1)" --o

# if only X % of a cell is ice, scale by that.
r.mapcalc "scale_mask = (GL_ice_all * MSK) / 100"

# scale
## units are mm.w.eq. per grid cell. Grid cell areas are in km^2
## + mm.w.eq. -> m w.eq.: /1E3
## + m w.eq -> kg: *1E3
## + area in km^2 -> m^2: *1E3*1E3
## + kg -> Gt: /1E12
# ds = ds/1E3 * 1E3 * ds['AREA']*1E3*1E3 / 1E12
for var in SF RF RU SU ME SMB EVA CON DEP SUB; do
  r.mapcalc "${var} = (${var}/1000) * 1000 * (AREA * 1000*1000) * scale_mask / exp(10,12)"
done
r.mask -r

r.mapcalc "RFZ = ME + RF - RU"
for var in SF RF RU ME SMB EVA CON DEP SUB RFZ; do
  echo ${var} $(r.univar -g ${var} | grep sum)
done
SF sum=678.472341306034
RF sum=41.0073369748482
RU sum=433.411271134275
ME sum=594.819117205514
SMB sum=232.245706856329
EVA sum=7.43645901936729
CON sum=2.02922271273767
DEP sum=12.3770587084991
SUB sum=60.0712550947222
RFZ sum=202.41518304609
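The unit conversion applied in the r.mapcalc loop above, written out as a plain-Python sketch (hypothetical helper mirroring the commented xarray one-liner):

def mar_to_Gt(var_mm_weq, area_km2, ice_fraction):
    """MAR variable in mm w.eq. per cell -> Gt, given cell area in km^2."""
    kg_per_m2 = var_mm_weq / 1e3 * 1e3   # mm w.eq. -> m w.eq. -> kg/m^2
    area_m2 = area_km2 * 1e3 * 1e3       # km^2 -> m^2
    return kg_per_m2 * area_m2 * ice_fraction / 1e12  # kg -> Gt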

Shelf melt and freezing

grass ./G_GL/PERMANENT
g.mapset -c Wang_2024
tif_list=$(find ~/data/Wang_2024 -name "????.tif")
t=$(echo $tif_list | tr ' ' '\n' | head -n1) # debug
for t in ${tif_list}; do
  dirname=$(basename $(dirname ${t}))
  fname=$(basename ${t})
  fname=${fname%.*}
  tname=g_${dirname}_${fname} # add g_ because "79N" is not a valid name
  r.in.gdal input=${t} output=${tname}
done
g.region raster=$(g.list type=raster sep=,) -pa

r.series input=$(g.list type=raster sep=,) output=melt method='average'
r.colors -a map=melt color=viridis

r.mapcalc "area = area()"

## Melt data is m/yr (water equivalent)
## Multiply by area (m^2) to get m^3/yr, then by 1000 (kg/m^3) to get kg/yr; /1E12 converts kg to Gt
r.mapcalc "melt = melt * 1000 * area / exp(10,12)" --o

r.mapcalc "melt_on = if(melt > 0, melt, null())"
r.mapcalc "freeze_on = if(melt < 0, melt, null())"

Stats

echo "NET"
r.univar -gt map=melt | cut -d"|" -f11

echo ""
echo "FREEZE_ON"
r.univar -gt map=freeze_on | cut -d"|" -f11

echo ""
echo "MELT_OFF"
r.univar -gt map=melt_on | cut -d"|" -f11
NET
sum
33.4127947245078

FREEZE_ON
sum
-2.68199438110646

MELT_OFF
sum
36.094789105614

MB

GRACE ESA

import xarray as xr
ds = xr.open_dataset("~/data/Dohne_2023/GIS_GMB_grid.nc")
ds['dm'] = ds['dm'] * ds['area']
ds = ds.sel({'time':slice('2010-01-01','2019-12-31')})
ds = ds['dm'].to_dataset()
ds = ds['dm'].sum(dim=['x','y'])/1E12
ds = ds - ds.values[0]
_ = ds.plot()
ds = ds.resample({'time':'YS'}).mean()
ds = ds.diff(dim='time')
print(ds.mean())
<xarray.DataArray 'dm' ()> Size: 8B
array(-250.12027707)

./figs_tmp/51f7e7c93a2146c72173d08304aa80e49389d237.png

GRACE JPL

import pandas as pd
from datetime import datetime, timedelta

df = pd.read_csv("~/data/GRACE/greenland_mass_200204_202409.txt",
                 comment="H", parse_dates=True, sep=r"\s+", header=None,
                 names=['year','mass','err'])

# Function to convert year.frac to ISO format (YYYY-MM-DD)
def year_frac_to_iso(year_frac):
    year = int(year_frac)
    frac = year_frac - year
    start_of_year = datetime(year, 1, 1)
    days_in_year = (datetime(year + 1, 1, 1) - start_of_year).days
    date = start_of_year + timedelta(days=frac * days_in_year)
    return pd.to_datetime(date.strftime('%Y-%m-%d'))

# Apply the conversion to the 'Year' column
df['date'] = df['year'].apply(year_frac_to_iso)
df = df.drop(columns=['year'])
df = df.set_index('date')
df = df['mass']

# df.resample('D').mean().interpolate()
df = df['2010-01-01':'2019-12-31']
df = df - df.max()

# df.head()
_ = df.plot()
print(df.resample('YS').mean().diff().mean())
-264.96212962962966

./figs_tmp/bfcfdbb8d4a3e42ea23a749f439bc8dddbd3e743.png
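For reference, an example of the decimal-year helper defined above:

print(year_frac_to_iso(2010.5))  # Timestamp('2010-07-02 00:00:00')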

Mankoff 2021

!date
fname="https://thredds.geus.dk/thredds/fileServer/MassBalance/MB_SMB_D_BMB.csv"
df = pd.read_csv(fname, index_col=0, parse_dates=True)[['MB','MB_err']]
df = df\
    .resample('1D').interpolate().resample('YS').sum()

print("2010s")
print(df['2010-01-01':'2019-12-31'].mean())
Fri Dec 20 09:56:38 AM EST 2024
2010s
MB       -246.172157
MB_err     94.196209
dtype: float64

Antarctica

| ID          | Term                   | East_g | West_g | Peninsula_g | East_s | West_s | Peninsula_s | Unc    | I/O | Period    | Source                                                         |
|-------------|------------------------|--------|--------|-------------|--------|--------|-------------|--------|-----|-----------|----------------------------------------------------------------|
| RF          | Rainfall               | 1      | 1      | 2           | 1      | 1      | 2           | 15     | I   | 2010-2019 | fettweis_2020                                                  |
| CD          | Condensation           | 1      | 1      | 1           | 1      | 1      | 1           | 15     | I   | 2010-2019 | fettweis_2020                                                  |
| DP          | Deposition             | 37     | 24     | 6           | 6      | 6      | 2           | 15     | I   | 2010-2019 | fettweis_2020                                                  |
| SF          | Snowfall               | 1392   | 724    | 282         | 172    | 180    | 57          | 15     | I   | 2010-2019 | fettweis_2020                                                  |
| RFZ         | Refreezing             | 15     | 5      | 19          | 26     | 10     | 32          | 15     |     | 2010-2019 | fettweis_2020                                                  |
| EV          | Evaporation            | 1      | 1      | 1           | 1      | 1      | 1           | 15     | O   | 2010-2019 | fettweis_2020                                                  |
| RU          | Runoff                 | 1      | 1      | 2           | 2      | 1      | 4           | 15     | O   | 2010-2019 | fettweis_2020                                                  |
| SU          | Sublimation            | 151    | 33     | 13          | 23     | 9      | 4           | 15     | O   | 2010-2019 | fettweis_2020                                                  |
| BMB         | Grounded basal melting | 47     | 19     | 3           | 0      | 0      | 0           | 30     | O   |           | van-liefferinge_2013                                           |
| DISCH       | Discharge              | 1147   | 902    | 292         | 0      | 0      | 0           | 5 – 50 |     | 2008-2019 | davison_2023 (to shelves) + rignot_2019 (grounded + islands)   |
| CALV        | Calving                | 223    | 46     | 139         | 694    | 567    | 104         | 5      | O   | 2010-2019 | greene_2022 + rignot_2019 discharge (grounded + islands)       |
| FRONTMELT   | Frontal melting        | 0      | 0      | 0           | 0      | 0      | 0           |        | O   |           |                                                                |
| SHELFMELT   | Sub-shelf melting      | 0      | 0      | 0           | 527    | 684    | 164         | 150    | O   | 2010-2017 | paolo_2023                                                     |
| SHELFFREEZE | Sub-shelf freeze-on    | 0      | 0      | 0           | 208    | 147    | 11          | 300    | I   | 2010-2017 | paolo_2023                                                     |
| GZRET       | Grounding line retreat | 1      | 45     | 1           | 0      | 0      | 0           | 15     | O   | 1997-2021 | davison_2023 (only Pine Island, Thwaites, Crosson, and Dotson) |
| FRONTRET    | Frontal retreat        | 0      | 0      | 0           | 69     | 206    | 125         | 5      | O   | 2010-2021 | greene_2022                                                    |
| FRONTADV    | Frontal advance        | 0      | 0      | 0           | 192    | 2      | 1           | 5      | I   | 2010-2021 | greene_2022                                                    |

Export to CSVs

Split the AQ table above into east, west, peninsula, and all CSVs, combining shelf and grounded

import numpy as np
import pandas as pd

aq = np.array(aq)  # `aq` is the Antarctica table above, passed in via org-babel :var
df = pd.DataFrame(aq[1:,1:], index=aq[1:,0], columns=aq[0,1:])
df.index.name = 'ID'
df.index.name = 'ID'

cols = ['East_g','East_s','West_g','West_s','Peninsula_g','Peninsula_s']
df[cols] = df[cols].astype(int)
df['All'] = df[cols].sum(axis='columns')
df['E'] = df[['East_g','East_s']].sum(axis='columns')
df['W'] = df[['West_g','West_s']].sum(axis='columns')
df['P'] = df[['Peninsula_g','Peninsula_s']].sum(axis='columns')
df = df.drop(columns=['IO', 'Period', 'Source'])
df = df.drop(columns=cols)

def custom_round(x, base=5):
    if (x > 0) and (x < base): x = base
    return int(base * round(float(x)/base))

cols = ['All','E','W','P']
for c in cols: df[c] = df[c].apply(lambda x: custom_round(x, base=5))

for c in cols:
    df[['Term',c]].rename(columns={c:'Value'}).to_csv('./dat/aq_' + c + '.csv')

df
| ID          | Term                   | Unc    | All  | E    | W   | P   |
|-------------|------------------------|--------|------|------|-----|-----|
| RF          | Rainfall               | 15     | 10   | 5    | 5   | 5   |
| CD          | Condensation           | 15     | 5    | 5    | 5   | 5   |
| DP          | Deposition             | 15     | 80   | 45   | 30  | 10  |
| SF          | Snowfall               | 15     | 2805 | 1565 | 905 | 340 |
| RFZ         | Refreezing             | 15     | 105  | 40   | 15  | 50  |
| EV          | Evaporation            | 15     | 5    | 5    | 5   | 5   |
| RU          | Runoff                 | 15     | 10   | 5    | 5   | 5   |
| SU          | Sublimation            | 15     | 235  | 175  | 40  | 15  |
| BMB         | Grounded basal melting | 30     | 70   | 45   | 20  | 5   |
| DISCH       | Discharge              | 5 – 50 | 2340 | 1145 | 900 | 290 |
| CALV        | Calving                | 5      | 1775 | 915  | 615 | 245 |
| FRONTMELT   | Frontal melting        |        | 0    | 0    | 0   | 0   |
| SHELFMELT   | Sub-shelf melting      | 150    | 1375 | 525  | 685 | 165 |
| SHELFFREEZE | Sub-shelf freeze-on    | 300    | 365  | 210  | 145 | 10  |
| GZRET       | Grounding line retreat | 15     | 45   | 5    | 45  | 5   |
| FRONTRET    | Frontal retreat        | 5      | 400  | 70   | 205 | 125 |
| FRONTADV    | Frontal advance        | 5      | 195  | 190  | 5   | 5   |
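For reference, how the custom_round helper defined above behaves:

print(custom_round(2))     # 5    -- small non-zero values are floored at the base, not rounded to 0
print(custom_round(43))    # 45   -- otherwise round to the nearest multiple of 5
print(custom_round(0))     # 0
print(custom_round(-188))  # -190 -- negative values skip the floor and round normally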

Grounded vs Marine mass loss

import numpy as np
import pandas as pd

aq = np.array(aq)
df = pd.DataFrame(aq[1:,1:], index=aq[1:,0], columns=aq[0,1:])
df.index.name = 'ID'

df = df.drop(columns=['Source', 'Period', 'Unc'])
df = df.drop(['RFZ'])

cols = ['East_g','East_s','West_g','West_s','Peninsula_g','Peninsula_s']
df[cols] = df[cols].astype(int)

for roi in ['East','West','Peninsula']:
    df.loc['DISCH',roi+'_g'] = df.loc['DISCH',roi+'_g'] - df.loc['CALV',roi+'_g']

# df.loc['CALV', 'West_s'] = df.loc['CALV', 'West_s'] + df.loc['CALV', 'West_g']; df.loc['CALV', 'West_g'] = 0
# df.loc['CALV', 'East_s'] = df.loc['CALV', 'East_s'] + df.loc['CALV', 'East_g']; df.loc['CALV', 'East_g'] = 0
# df.loc['CALV', 'Peninsula_s'] = df.loc['CALV', 'Peninsula_s'] + df.loc['CALV', 'Peninsula_g']; df.loc['CALV', 'Peninsula_g'] = 0

# # df.loc['CALV', 'West_s'] = df.loc['CALV', 'West_s'] + df.loc['CALV', 'West_g'];
# df.loc['CALV', 'West_g'] = 0
# # df.loc['CALV', 'East_s'] = df.loc['CALV', 'East_s'] + df.loc['CALV', 'East_g'];
# df.loc['CALV', 'East_g'] = 0
# # df.loc['CALV', 'Peninsula_s'] = df.loc['CALV', 'Peninsula_s'] + df.loc['CALV', 'Peninsula_g'];
# df.loc['CALV', 'Peninsula_g'] = 0

# disch = df.loc['DISCH']['All_g'] - df.loc['CALV']['All_g']

df['All_g'] = df[['East_g','West_g','Peninsula_g']].sum(axis='columns')
df['All_s'] = df[['East_s','West_s','Peninsula_s']].sum(axis='columns')

df['All'] = df['All_g'] + df['All_s']
df['East'] = df['East_g'] + df['East_s']
df['West'] = df['West_g'] + df['West_s']
df['Peninsula'] = df['Peninsula_g'] + df['Peninsula_s']

def custom_round(x, base=5):
    if (x > 0) and (x < base): x = base
    return int(base * round(float(x)/base))

cols = ['All', 'All_g', 'East', 'East_g', 'West', 'West_g', 'Peninsula', 'Peninsula_g']
# df.loc['Net'] = df[cols]

da = df[df['IO'] == 'I'][cols].sum() - df[df['IO'] == 'O'][cols].sum()
for i in da.index:
    if i[-1] == 'g': da[i] = da[i] - (df.loc['DISCH',i] + df.loc['DISCH',i[:-1] + 's'])
    # if i[-1] == 's': da[i] = da[i] + df.loc['DISCH',i[:-1]+'g']

for i in ['All','East','West','Peninsula']:
    da[i + '_s'] = da[i] - da[i + '_g']
    da = da.sort_index()

da = da.apply(lambda x: custom_round(x, base=5))

df = pd.DataFrame(index = ['Antarctica', 'East', 'West', 'Peninsula'],
                  columns = ['Grounded', 'Marine', 'Total'])

df.loc['Antarctica'] = da[['All_g','All_s','All']].values
df.loc['East'] = da[['East_g','East_s','East']].values
df.loc['West'] = da[['West_g','West_s','West']].values
df.loc['Peninsula'] = da[['Peninsula_g','Peninsula_s','Peninsula']].values
df
|            | Grounded | Marine | Total |
|------------|----------|--------|-------|
| Antarctica | -190     | -260   | -450  |
| East       | 85       | 190    | 270   |
| West       | -250     | -275   | -525  |
| Peninsula  | -20      | -175   | -195  |

Reset

trash G_AQ
[[ -e ./G_AQ ]] || grass -e -c EPSG:3031 ./G_AQ

Masks: East, West, Peninsula, Islands, Grounded and Shelves

grass ./G_AQ/PERMANENT

v.in.ogr input=${DATADIR}/NSIDC/NSIDC-0709.002/1992.02.07/IceBoundaries_Antarctica_v02.shp output=basins

g.region vector=basins res=10000 -pas

v.db.select map=basins|head
v.db.select -c map=basins columns=Regions | sort | uniq # East West Peninsula Islands
v.db.select -c map=basins columns=TYPE | sort | uniq # FL GR IS (float, ground, island)

v.to.rast input=basins output=east use=val val=1 where='(Regions == "East")'
v.to.rast input=basins output=west use=val val=2 where='(Regions == "West")'
v.to.rast input=basins output=peninsula use=val val=3 where='(Regions == "Peninsula")'
v.to.rast input=basins output=islands use=val val=4 where='(Regions == "Islands")'
r.patch input=east,west,peninsula,islands output=basins
r.category basins separator=":" rules=- << EOF
1:East
2:West
3:Peninsula
4:Islands
EOF
r.colors map=basins color=viridis

v.to.rast input=basins output=ground use=val val=1 where='(TYPE == "GR") or (TYPE == "IS")'
v.to.rast input=basins output=ground_noisland use=val val=1 where='(TYPE == "GR")'

Label islands to nearest region (east,west,peninsula)

Rignot 2019 provides discharge for Islands, but not by region. Here, determine the island area within each region and the percent of total island area per region. Then, for values reported for all islands combined, split by area percent. This assumes all islands have the same flux (volume flow rate per unit area) for whatever property is divided up using this method.

r.patch input=east,west,peninsula output=main_ice
r.colors map=main_ice color=viridis
r.grow.distance input=main_ice value=main_ice_grow

r.mapcalc "islands_near = int(if(islands, main_ice_grow))"

Find area of islands within each region

r.stats --q -A -r -c -N input=islands_near
1 417
2 803
3 174
[Raster MASK present]

Total cells = 417 + 803 + 174 = 1394
East      = 417 / 1394 = 29.9 % ~= 30 %
West      = 803 / 1394 = 57.6 % ~= 60 %
Peninsula = 174 / 1394 = 12.5 % ~= 10 %
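The same arithmetic in Python (cell counts from the r.stats output above):

counts = [417, 803, 174]  # cells per islands_near category 1, 2, 3
total = sum(counts)       # 1394
print([round(100 * c / total) for c in counts])  # [30, 58, 12], used as 30/60/10 %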

Make masks for all grounded (including islands) or only shelf (excluding island)

r.mask --o raster=ground
r.patch input=islands_near,main_ice output=grounded_with_islands

r.mask --o -i raster=ground
r.mapcalc "shelf_without_islands = main_ice"
r.mask -r

r.category map=basins  | r.category map=shelf_without_islands rules=-
r.category map=basins  | r.category map=grounded_with_islands rules=-

SMB (MAR)

g.mapset -c MAR

ncdump -v TIME dat/MARv3.12-ANT-35km-annual.nc4 # 30-39 = 2010-2019
ncra --overwrite -d TIME,30,39 dat/MARv3.12-ANT-35km-annual.nc4 tmp/MAR_AQ.nc

ncdump -v X tmp/MAR_AQ.nc # 176
ncdump -v Y tmp/MAR_AQ.nc # 148
g.region w=$(( -3010000 - 17500 )) e=$(( 3115000 + 17500 )) s=$(( -2555000 - 17500 )) n=$(( 2590000 + 17500 )) res=35000 -p

var=SF # debug
for var in SF RF RU ME SMB EVA CON DEP SUB MSK AREA; do
  r.in.gdal -o input=NetCDF:tmp/MAR_AQ.nc:${var} output=${var}
  r.region -c map=${var}
done

r.mapcalc "MASK = if(MSK > 50)" --o
r.mapcalc "scale_mask = MSK / 100" # if only X % of a cell is ice, scale by that.

# scale
## units are mm.w.eq. per grid cell. Grid cell areas are in km^2
## + mm.w.eq. -> m w.eq.: /1E3
## + m w.eq -> kg: *1E3
## + area in km^2 -> m^2: *1E3*1E3
## + kg -> Gt: /1E12
# ds = ds/1E3 * 1E3 * ds['AREA']*1E3*1E3 / 1E12
for var in SF RF RU ME SMB EVA CON DEP SUB; do
  r.mapcalc "${var} = (${var}/1000) * 1000 * (AREA * 1000*1000) * scale_mask / exp(10,12)"
done

r.mapcalc "RFZ = ME + RF - RU"

Stats

SMB components grounded and shelf
for mask in grounded_with_islands shelf_without_islands; do
  echo $mask
  r.mask --o raster=${mask}@PERMANENT --q
  for var in  RF CON DEP SF RFZ EVA RU SUB; do # SF RF RU EVA CON DEP SUB ME; do
    echo -n "${var} ${mask}"
    r.univar -gt map=${var} zones=${mask}@PERMANENT | cut -d"|" -f2,13 | column -s"|" -t | sed 's/label.*//'
    r.univar -g ${var} | grep sum
    echo "#"; echo "#"
  done
done
r.mask -r --q
grounded_with_islands
RF grounded_with_islands
East       0.53462477161335
West       0.2532230323633
Peninsula  2.22781624112255
sum=3.0156640450992

CON grounded_with_islands
East       0.00144321189675
West       0.00241510084115
Peninsula  0.01323398293865
sum=0.01709229567655

DEP grounded_with_islands
East       36.9861991237577
West       23.8279628054373
Peninsula  5.8151846089547
sum=66.6293465381494

SF grounded_with_islands
East       1392.4748276417
West       723.601551820622
Peninsula  281.709413065019
sum=2397.78579252734

RFZ grounded_with_islands
East       14.6234218646823
West       5.16798014233454
Peninsula  19.3083482881789
sum=39.0997502951956

EVA grounded_with_islands
East       0.6060187163407
West       0.2013515636148
Peninsula  0.6982005019075
sum=1.505570781863

RU grounded_with_islands
East       1.53022074184155
West       0.0059355454226
Peninsula  1.8878783909651
sum=3.42403467822925

SUB grounded_with_islands
East       150.9735683004
West       32.9662640970179
Peninsula  12.5062719218602
sum=196.446104319277



shelf_without_islands
RF shelf_without_islands
East       0.842541426357001
West       0.4498711186449
Peninsula  2.307238481508
sum=3.59965102650989

CON shelf_without_islands
East       0.0031724865901
West       0.0019547481581
Peninsula  0.03522553443475
sum=0.04035276918295

DEP shelf_without_islands
East       5.70103389655939
West       6.05853236904585
Peninsula  1.5062480433876
sum=13.2658143089928

SF shelf_without_islands
East       172.41137746281
West       180.27593549343
Peninsula  56.6841289993761
sum=409.371441955615

RFZ shelf_without_islands
East       25.7408112537284
West       9.64777465721551
Peninsula  32.3899504973317
sum=67.7785364082756

EVA shelf_without_islands
East       0.70256713317005
West       0.2422464141299
Peninsula  0.70164322473235
sum=1.6464567720323

RU shelf_without_islands
East       1.5089940427256
West       0.0304982132294
Peninsula  4.35827769837335
sum=5.89776995432835

SUB shelf_without_islands
East       23.4661462650309
West       8.5418917438099
Peninsula  3.94097993607855
sum=35.9490179449194

[Raster MASK present]

Basal melt

Van Liefferinge (2013) http://doi.org/10.5194/cp-9-2335-2013

Convert MAT file to XYZ for importing into GRASS

import scipy as sp
import numpy as np
import pandas as pd

mat = sp.io.loadmat('/home/kdm/data/Van_Liefferinge_2023/Melt_Mean_Std_15exp.mat')
X = mat['X'].flatten() * 1E3 # convert from km to m
Y = mat['Y'].flatten() * 1E3
m = mat['MeanMelt'].flatten() / 10 # cm to mm
e = mat['StdMelt'].flatten() / 10 # cm to mm

melt = pd.DataFrame(np.array([X,Y,m,e]).T, columns=['x','y','melt','err']).dropna()
melt.to_csv('./tmp/melt.csv', header=False, index=False)
melt.head()
|        | x         | y          | melt        | err         |
|--------|-----------|------------|-------------|-------------|
| 148741 | 1.045e+06 | -2.14e+06  | 1e-09       | 1.71243e-25 |
| 149859 | 1.03e+06  | -2.135e+06 | 0.00146608  | 0.000148305 |
| 149860 | 1.035e+06 | -2.135e+06 | 0.000266042 | 0.000389444 |
| 149861 | 1.04e+06  | -2.135e+06 | 1e-09       | 1.71243e-25 |
| 149862 | 1.045e+06 | -2.135e+06 | 0.00045698  | 0.000668948 |
grass ./G_AQ/PERMANENT
g.mapset -c liefferinge_2023
r.in.xyz input=./tmp/melt.csv output=melt sep=, --o
r.in.xyz input=./tmp/melt.csv output=err z=4 sep=, --o
echo "All: " $(r.univar -g map=melt | grep sum)
echo "All: " $(r.univar -g map=err | grep sum)
# echo ""
r.univar -gt map=melt zones=basins | cut -d"|" -f2,13 | column -s"|" -t
All:  sum=69.3982306335468
All:  sum=20.0261054475124

label      sum
East       46.7540492694752
West       18.8528624157926
Peninsula  3.18704264192471
Islands    0.279139711405429

Uncertainty is 20/69 = 0.29, i.e. ~29 %

Discharge

  • Discharge is “grounded discharge”
    • Input to ice shelves where ice shelves exist
    • Calving (similar to Greenlandic discharge) where ice shelves do not exist.

Rignot 2019 (Shelf, non-shelf, and Island)

Load
import numpy as np
import pandas as pd
df = pd.read_excel("~/data/Rignot_2019/pnas.1812883116.sd01.xlsx", index_col=0)

##############################################################################
###
### cleanup
###
df = df.loc[df.index.dropna()]

for i in [0,0,0]: # drop Excel rows 2,3,4
    df = df.drop(index=df.index[i])

# Drop super-shelves and rename indented sub-shelves
super_shelf = ["LarsenB", "Wordie", "Ronne", "Ross West", "Ross East", "Amery_Ice_Shelf", "Filchner", "AP", "WAIS", "EAIS", "TOTAL SURVEYED"]
df = df.drop(index=super_shelf)
for i in df.index: 
    if i[0] == ' ':  df = df.rename(index={i: i.strip()})

for c in df.columns: # Drop extra columns
    if 'Unnamed' in str(c):
        df = df.drop(columns=c)
df = df.drop(columns=["Basin.1", "σ SMB", "σ D", "D type"]) # Drop unused columns
##############################################################################

# Green color = no ice shelf
noshelf = ["West_Graham_Land", "Eastern_Graham_Land", "Hektoria_Headland", "Evans_Headland", "Drygalski_Headland", "LarsenA", "Rydberg_Peninsula", "Zonda_Eureka", "Cape_Jeremy", "Wilkins_George_VI", "Wilkins_Island", "Thomson", "Fox", "Cooke", "Walgreen_Coast", "Lucchitta_Velasco", "Jackson-Perkins", "Frostman-Lord-Shuman-Anandakri", "Shirases_Coast", "Saunders_Coast", "Ross_East1", "Ross_East2", "Ross_East3", "Ross_East4", "Ross_East5", "Dry_Valleys", "Icebreaker-Fitzgerald", "Victoria_Land", "Oates_Coast", "Wilkes_Land", "Adelie_Coast", "Sabrina_Coast", "Clarie_Coast", "Law_Dome", "Budd_Coast", "Knox_Coast", "Ingrid_Christensen_Coast", "Wilhelm_II_Coast", "Enderby_Land", "Prince_Olav_Coast", "Mawson_Coast"]
df['shelf'] = 1
df.loc[noshelf, 'shelf'] = 0

# Sum numeric columns
df.loc['Sum'] = np.nan
for c in df.columns: # convert to numbers
    try: df[c] = pd.to_numeric(df[c])
    except: df.loc['Sum',c] = 'All'

cols = df.select_dtypes(include=[np.number]).columns.drop('shelf')
df.loc['Sum', cols] = df[cols].sum(axis='rows')

cols = df.columns[0:10].to_list()
cols.insert(3,'shelf')
df[cols].tail(10)
| Glacier name    | Basin | Region  | Subregion                | shelf | SMB     | D       | 1979    | 1980    | 1981    | 1982    | 1983    |
|-----------------|-------|---------|--------------------------|-------|---------|---------|---------|---------|---------|---------|---------|
| Stancomb_Wills  | K-A   | East    | Stancomb_Wills_Ice_Shelf | 1     | 22.03   | 21.39   | 25.261  | 24.7291 | 24.1972 | 23.6653 | 23.1333 |
| Princess_Martha | K-A   | East    | Princess_Martha_Coast    | 1     | 0.21    | 0.21    | 0.21    | 0.21    | 0.21    | 0.21    | 0.21    |
| Coats_Coast     | K-A   | East    | Coats_Coast              | 1     | 6.43    | 6.43    | 6.43    | 6.43    | 6.43    | 6.43    | 6.43    |
| Academy         | J”-K  | East    | Filchner_Ice_Shelf       | 1     | 24.27   | 24.1    | 24.27   | 24.2505 | 24.231  | 24.2115 | 24.1921 |
| Support_Force   | J”-K  | East    | Filchner_Ice_Shelf       | 1     | 9.72    | 9.361   | 9.72    | 9.73057 | 9.74115 | 9.75172 | 9.7623  |
| Recovery        | J”-K  | East    | Filchner_Ice_Shelf       | 1     | 41.05   | 41.05   | 41.05   | 41.1242 | 41.1983 | 41.2725 | 41.3467 |
| Slessor         | J”-K  | East    | Filchner_Ice_Shelf       | 1     | 26.11   | 24.916  | 26.11   | 26.1256 | 26.1412 | 26.1568 | 26.1724 |
| Bailey          | J”-K  | East    | Filchner_Ice_Shelf       | 1     | 8.98    | 8.61    | 8.98    | 9.00126 | 9.02252 | 9.04378 | 9.06504 |
| Islands         | nan   | Islands | Islands                  | 1     | 76.9899 | 76.9899 | 76.9899 | 76.9899 | 76.9899 | 76.9899 | 76.9899 |
| Sum             | All   | All     | All                      | nan   | 2097.57 | 2236.96 | 2126.64 | 2133.49 | 2140.25 | 2147.01 | 2153.77 |
Shelf vs Non-shelf discharge
  • WARNING: Using shelf vs. non-shelf is important and can be done for all AQ, but Rignot “Island” discharge (~77 Gt/year) doesn’t provide enough metadata to break down by east/west/peninsula.
  • Instead, for all islands in NSIDC-0709.002 product, find their region (east, west, peninsula), and calculate area of islands in each region, and then split values by area. That assumes all islands have the same flux (volume flow rate per unit area).
<<load_rignot>>
c = np.arange(2010,2017+1)

dd = df.groupby(['shelf','Region']).sum().drop(columns=['Basin','Subregion'])[c].mean(axis='columns')
dd.loc['Non-shelf discharge'] = dd.loc[0,:].sum()
dd.loc['shelf discharge'] = dd.loc[1,:].sum()
dd['Total discharge'] = dd.loc[['Non-shelf discharge','shelf discharge']].sum()
dd
# df_shelf = df[df['shelf'] == 1][c].mean(axis='columns')
# df_noshelf = df[df['shelf'] == 0][c].mean(axis='columns')

# df_shelf
# print("Total discharge: ", df[df['shelf'] >= 0][c].mean(axis='columns').sum())
# print('Shelf discharge: ', df_shelf.sum())
# print('Non-shelf discharge: ', df_noshelf.sum())
shelf                Region   
0.0                  East          177.140000
                     Peninsula     131.398873
                     West           23.003920
1.0                  East          926.544384
                     Islands        76.989900
                     Peninsula     205.140504
                     West          768.695078
Non-shelf discharge                331.542793
shelf discharge                   1977.369866
Total discharge                   2308.912659
dtype: float64

Non-shelf discharge from Rignot is:

| Region    | Values       | Sum   | Comment                                    |
|-----------|--------------|-------|--------------------------------------------|
| East      | 177 + 77*0.6 | 223.2 | East non-shelf discharge + 60 % of islands |
| West      | 23 + 77*0.3  | 46.1  | West non-shelf discharge + 30 % of islands |
| Peninsula | 131 + 77*0.1 | 138.7 | Peninsula + 10 % islands                   |
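The same arithmetic as a sketch (non-shelf discharge and island weights from the table above):

islands = 77  # Gt/yr, Rignot "Islands" discharge
for region, d, w in [('East', 177, 0.6), ('West', 23, 0.3), ('Peninsula', 131, 0.1)]:
    print(region, d + islands * w)  # East 223.2, West 46.1, Peninsula 138.7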

Davison 2023 (Discharge to shelf)

This is steady-state discharge from grounded ice to shelves.

import numpy as np
import pandas as pd

fname = '~/data/Davison_2023/adi0186_table_s2.xlsx'

loc = pd.read_excel(fname, sheet_name='Total mass changes', index_col = 0, usecols = 'B,C,D', skiprows = 4)
loc = loc.drop('Antarctic Ice Shelves')

df = pd.read_excel(fname, sheet_name='Discharge', index_col = 1, skiprows = 3)
df = df[df.columns[1::2]]
df.columns = [np.floor(c).astype(int) for c in df.columns]

df = df.drop(index=df.index[0])
df = df.drop(index='Antarctic Ice Shelves')
df = df[np.arange(2010,2020)].mean(axis='columns')
df.name = 'Mass'
df
Abbot        32.473268
Ainsworth     0.157966
Alison        2.985331
Amery        78.564587
Andreyev      2.207105
               ...    
Withrow       0.480019
Wordie        7.754308
Wylde         0.005026
Zelee          0.42351
Zubchatyy     0.469816
Name: Mass, Length: 162, dtype: object
<<load_davison_discharge>>
df = loc.join(df)

import geopandas as gpd
fname = '~/data/NSIDC/NSIDC-0709.002/1992.02.07/IceBoundaries_Antarctica_v02.shp'
ew = gpd.read_file(fname)

df = gpd.GeoDataFrame(df, geometry=gpd.points_from_xy(df['longitude'],df['latitude']), crs="EPSG:4326")
df = df.drop(columns=['latitude','longitude'])
df = df.to_crs('epsg:3031')
e = ew.to_crs('epsg:3031')

idx = ew.sindex.nearest(df['geometry'], return_all=False)
df['Region'] = ''
for dfidx,ewidx in idx.T:
    arr = df.iloc[dfidx].copy(deep=True)
    arr['Region'] = ew.iloc[ewidx]['Regions']
    df.iloc[dfidx] = arr
    
# df.loc['Total'] = [df['Mass'].sum(), None, 'All']

dd = df[['Mass','Region']].groupby('Region').sum()
dd.loc['Total'] = dd.sum()
dd
| Region    | Mass    |
|-----------|---------|
| East      | 923.794 |
| Islands   | 1.28338 |
| Peninsula | 152.536 |
| West      | 857.468 |
| Total     | 1935.08 |

Total discharge is then

| Region    | Rignot (ground-to-ocean + islands) | Davison (ground-to-shelf) | Sum  |
|-----------|------------------------------------|---------------------------|------|
| East      | 223                                | 924                       | 1147 |
| West      | 46                                 | 856                       | 902  |
| Peninsula | 139                                | 153                       | 292  |
| Total     |                                    |                           | 2341 |
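A sketch of the sums in the table above (regional values rounded from the two outputs above):

rignot  = {'East': 223, 'West': 46,  'Peninsula': 139}  # ground-to-ocean + islands
davison = {'East': 924, 'West': 856, 'Peninsula': 153}  # ground-to-shelf
total = {k: rignot[k] + davison[k] for k in rignot}
print(total, sum(total.values()))  # {'East': 1147, 'West': 902, 'Peninsula': 292} 2341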

Antarctic Ice shelves

Calving: Greene 2022

import pandas as pd

fname = "/home/kdm/data/Greene_2022/data/greene_Supplementary_Table_1.xlsx"
df = pd.read_excel(fname, index_col=1, skiprows=4)

df = df.drop(index=df.index[0])
df = df.drop(index=['Antarctica'])

df = df[df.columns[[1,2,9]]]
df.columns = ['latitude','longitude','Mass']

import geopandas as gpd
fname = '~/data/NSIDC/NSIDC-0709.002/1992.02.07/IceBoundaries_Antarctica_v02.shp'
ew = gpd.read_file(fname)

df = gpd.GeoDataFrame(df, geometry=gpd.points_from_xy(df['longitude'],df['latitude']), crs="EPSG:4326")
df = df.to_crs('epsg:3031')
e = ew.to_crs('epsg:3031')

idx = ew.sindex.nearest(df['geometry'], return_all=False)
df['Region'] = ''
for dfidx,ewidx in idx.T:
    arr = df.iloc[dfidx].copy(deep=True)
    arr['Region'] = ew.iloc[ewidx]['Regions']
    df.iloc[dfidx] = arr
df = df.drop(columns=['latitude','longitude'])
    
# df.loc['Total'] = [df['Mass'].sum(), None, 'All']
dd = df[['Mass','Region']].groupby('Region').sum()
dd.loc['Total'] = dd.sum(axis='rows')
dd
                  Mass
Region                
             44.705315
East        694.100336
Islands       1.518034
Peninsula   103.675815
West        566.997529
Total      1410.997028

The above is shelf calving

Total calving is shelf calving (Greene) + non-shelf calving (331; Rignot) + islands (77; Rignot)

| Region    | Values    | Sum | Comment                                    |
|-----------|-----------|-----|--------------------------------------------|
| East      | 694 + 223 | 917 | East non-shelf discharge + 60 % of islands |
| West      | 567 + 154 | 721 | West non-shelf discharge + 30 % of islands |
| Peninsula | 104 + 31  | 135 | Peninsula + 10 % islands                   |
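The regional sums as a sketch (shelf and non-shelf terms copied from the table above):

for region, shelf, nonshelf in [('East', 694, 223), ('West', 567, 154), ('Peninsula', 104, 31)]:
    print(region, shelf + nonshelf)  # East 917, West 721, Peninsula 135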
Uncertainty

From p. 3 of citet:greene_2022: “Antarctica has experienced a net loss of 5,874 ± 396 Gt of ice owing to calving”

396/5874 = 6.7 %

From the data (cells K189 & L189): 1411.0 ± 28.1, i.e. 28/1411 ≈ 2 %

Shelf freeze/melt

import xarray as xr
ds = xr.open_mfdataset("~/data/Paolo_2023/ANT_G1920V01_IceShelfMelt.nc")
ds = ds[['melt','melt_err']].sel({'time':slice('2010-01-01','2017-12-31')}).mean(dim='time')

delayed_obj = ds.to_netcdf('tmp/shelf_melt.nc', compute=False)
from dask.diagnostics import ProgressBar
with ProgressBar():
    results = delayed_obj.compute()

print(ds)
[########################################] | 100% Completed | 5.35 s
<xarray.Dataset> Size: 68MB
Dimensions:   (y: 2916, x: 2916)
Coordinates:
  * x         (x) float64 23kB -2.798e+06 -2.796e+06 ... 2.796e+06 2.798e+06
  * y         (y) float64 23kB 2.798e+06 2.796e+06 ... -2.796e+06 -2.798e+06
Data variables:
    melt      (y, x) float32 34MB dask.array<chunksize=(486, 486), meta=np.ndarray>
    melt_err  (y, x) float32 34MB dask.array<chunksize=(486, 486), meta=np.ndarray>
g.mapset -c Paolo_2023

ncdump -v x tmp/shelf_melt.nc # 2916x2916
ncdump -v y tmp/shelf_melt.nc

x0=-2798407.5
x1=2798392.5
y0=-2798392.5
y1=2798407.5

g.region w=$(( -2798407 - 960 )) e=$(( 2798392 + 960 )) s=$(( -2798392 - 960 )) n=$(( 2798407 + 960 )) res=1920 -p
r.mapcalc "area = area()"

r.in.gdal -o input=NetCDF:tmp/shelf_melt.nc:melt output=melt
r.in.gdal -o input=NetCDF:tmp/shelf_melt.nc:melt_err output=err
r.region -c map=melt
r.region -c map=err

## melt is m/yr (water eq.); x1000 (kg/m^3) x area (m^2) -> kg/yr; /1E12 -> Gt/yr
r.mapcalc "melt = melt * 1000 * area / exp(10,12)" --o
r.mapcalc "err = err * 1000 * area / exp(10,12)" --o

r.mapcalc "melt_on = if(melt > 0, melt, null())"
r.mapcalc "err_on = if(melt > 0, err, null())"
r.mapcalc "melt_off = if(melt < 0, melt, null())"
r.mapcalc "err_off = if(melt < 0, err, null())"

r.colors -ae map=melt color=difference
r.colors -ge map=melt_on color=viridis
r.colors -ge map=melt_off color=viridis

# d.rast melt
# d.rast melt_on
# d.rast melt_off

r.mapcalc "basins = if((basins@PERMANENT == 1) | (basins@PERMANENT == 11), 1, 0)"
r.mapcalc "basins = if((basins@PERMANENT == 2) | (basins@PERMANENT == 12), 2, basins)"
r.mapcalc "basins = if((basins@PERMANENT == 3) | (basins@PERMANENT == 13), 3, basins)"
r.colors map=basins color=viridis
r.category basins separator=":" rules=- << EOF
1:East
2:West
3:Peninsula
EOF
Stats
r.grow.distance input=basins value=basins_grow distance=10 --q
r.mapcalc "basins_grow = int(basins_grow)" --q
r.category map=basins | r.category map=basins_grow rules=- --q
echo "NET"
r.univar -gt map=melt zones=basins_grow | cut -d"|" -f2,13 | column -s"|" -t | sed 's/label.*//'
# r.univar -gt map=err zones=basins | cut -d"|" -f2,13 | column -s"|" -t | sed 's/label.*//'
r.univar -g melt | grep sum
r.univar -g err | grep sum

echo ""
echo "FREEZE_ON"
r.univar -gt map=melt_on zones=basins_grow | cut -d"|" -f2,13 | column -s"|" -t | sed 's/label.*//'
# r.univar -gt map=err_on zones=basins | cut -d"|" -f2,13 | column -s"|" -t | sed 's/label.*//'
r.univar -g melt_on | grep sum
r.univar -g err_on | grep sum

echo ""
echo "MELT_OFF"
r.univar -gt map=melt_off zones=basins_grow | cut -d"|" -f2,13 | column -s"|" -t | sed 's/label.*//'
# r.univar -gt map=err_off zones=basins | cut -d"|" -f2,13 | column -s"|" -t | sed 's/label.*//'
r.univar -g melt_off | grep sum
r.univar -g err_off | grep sum
NET
East       -319.34788697967
West       -537.161194600709
Peninsula  -153.245144876904
sum=-1009.75422645726
sum=3041.55065208086

FREEZE_ON
East       207.669949989514
West       146.976649882162
Peninsula  10.8694466267689
sum=365.516046498447
sum=1086.24089094716

MELT_OFF
East       -527.017836969183
West       -684.137844482863
Peninsula  -164.114591503673
sum=-1375.27027295574
sum=1955.30976113372

GZ retreat

Email from Davison

| Ice Shelf   | Mass change due to grounding line migration from 1997 to 2021 (Gt) | Error (Gt) |
|-------------|--------------------------------------------------------------------|------------|
| Pine Island | 220                                                                | 40         |
| Thwaites    | 230                                                                | 25         |
| Crosson     | 200                                                                | 25         |
| Dotson      | 420                                                                | 80         |

(220 + 230 + 200 + 420) / (2021 - 1997) = 44.58 Gt/yr

Uncertainty: p. 3 of citet:davison_2023 “grounding line retreat (1070 ± 170 Gt),”

170/1070 = 15.9 %
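The same arithmetic as a sketch:

loss = 220 + 230 + 200 + 420  # Gt, from the table above, 1997-2021
rate = loss / (2021 - 1997)   # ~44.6 Gt/yr
unc  = 170 / 1070             # ~16 % relative uncertainty (davison_2023)
print(rate, rate * unc)       # ~44.58, ~7.08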

Frontal retreat and advance: Greene 2022

import pandas as pd
import numpy as np

fname = "/home/kdm/data/Greene_2022/data/greene_Supplementary_Table_1.xlsx"
df = pd.read_excel(fname, index_col=1, skiprows=4)

##############################################################################
###
### cleanup
###
df = df.drop(index=df.index[0])
df = df.drop(index=['Antarctica'])

lon = df['Unnamed: 3']
lat = df['Unnamed: 2']

for c in df.columns: # Drop extra columns
    if 'Unnamed' in str(c):
        df = df.drop(columns=c)
    if 'Gt/yr' in str(c):
        df = df.drop(columns=c)
    if ('control run' in str(c)) | ('instantaneous' in str(c)):
        df = df.drop(columns=c)
        
for c in df.columns:
    if type(c) == str: df = df.drop(columns=c)

# df = df.drop(columns=[2000.75])
# df = df.drop(columns=[1997.75])
# df.columns = df.columns.round().astype(int)    
##############################################################################
<<load_greene_2022_adv_ret>>

c = df.columns
diff = df.diff(axis='columns')[c]
diff_gain = diff[diff > 0].sum(axis='columns')
diff_loss = diff[diff < 0].sum(axis='columns')
diff_gain.name = 'Mass'
diff_loss.name = 'Mass'
df_gain = pd.DataFrame(diff_gain)
df_loss = pd.DataFrame(diff_loss)
df_net = df_loss + df_gain

print("Net:")
print('Mass gain', df_net[df_net > 0].sum(axis='rows').values)
print('Mass Loss', df_net[df_net < 0].sum(axis='rows').values)
print('Net mass change', df_net.sum(axis='rows').values)

dt = df.columns[-1] - df.columns[0]
print("\nPer year:")
print('Mass gain', df_net[df_net > 0].sum(axis='rows').values / dt)
print('Mass Loss', df_net[df_net < 0].sum(axis='rows').values / dt)
print('Net mass change', df_net.sum(axis='rows').values / dt)
Net:
Mass gain [4563.264309048649]
Mass Loss [-9434.897965610035]
Net mass change [-4871.633656561384]

Per year:
Mass gain [194.59549292318295]
Mass Loss [-402.3410646315572]
Net mass change [-207.74557170837417]
  • Most numbers here match what’s in the publication
  • Neither the total nor Ronne match.
    • Here, total is 4871 Gt net change.
    • Below, Ronne net loss is 1031
    • From the paper (paragraph under Fig. 2)
      • Total should be 5874 (missing 5874-4871 = 1003)
      • Ronne should be 2034 (missing 2034-1031 = 1003)
      • But Thwaites, Larsen C, and Ross West match paper, so it seems like I’m parsing the dataset correctly.
      • Filchner matches mass gain.

Find the top 10 shelves with net and gross mass gain and loss total (summed) over the period

tmp = pd.DataFrame(index=np.arange(10))

tmp['Net gain: Name'] = df_net.sort_values(by='Mass', ascending=False).head(10).index
tmp['Net gain: Mass'] = df_net.sort_values(by='Mass', ascending=False).head(10)['Mass'].values

tmp['Net loss: Name'] = df_net.sort_values(by='Mass', ascending=True).head(10).index
tmp['Net loss: Mass'] = df_net.sort_values(by='Mass', ascending=True).head(10)['Mass'].values

tmp['Gross gain: Name'] = df_gain.sort_values(by='Mass', ascending=False).head(10).index
tmp['Gross gain: Mass'] = df_gain.sort_values(by='Mass', ascending=False).head(10)['Mass'].values

tmp['Gross loss: Name'] = df_loss.sort_values(by='Mass', ascending=True).head(10).index
tmp['Gross loss: Mass'] = df_loss.sort_values(by='Mass', ascending=True).head(10)['Mass'].values

tmp
Net gain: NameNet gain: MassNet loss: NameNet loss: MassGross gain: NameGross gain: MassGross loss: NameGross loss: Mass
0Filchner1796.26Thwaites-1968.41Ronne2762.7Ronne-3794.31
1Amery569.883Larsen C-1166.92Ross West1968.02Ross West-2897.63
2Cook414.571Ronne-1031.61Filchner1843.53Thwaites-2248.52
3Shackleton369.773Ross West-929.617Amery917.121Larsen C-1619.1
4Brunt Stancomb362.787Wilkins-622.156Ross East857.219Pine Island-1231.67
5West278.333Larsen B-530.038Pine Island780.117Ross East-1135.23
6Jelbart169.419Pine Island-451.554Brunt Stancomb493.663Ninnis-675.229
7Fimbul148.221Mertz-381.971Shackleton493.661Wilkins-642.061
8Riiser-Larsen84.376Ninnis-300.936Cook454.636Larsen B-584.603
9Pourquoi Pas64.9883Larsen A-286.377Larsen C452.177Mertz-571.076

Now convert to Gt/year

for col in tmp.columns:
    if 'Mass' in col: tmp[col] = tmp[col] / c.size

tmp    
Net gain: NameNet gain: MassNet loss: NameNet loss: MassGross gain: NameGross gain: MassGross loss: NameGross loss: Mass
0Filchner74.8441Thwaites-82.0169Ronne115.113Ronne-158.096
1Amery23.7451Larsen C-48.6219Ross West82.0007Ross West-120.735
2Cook17.2738Ronne-42.9837Filchner76.8138Thwaites-93.6885
3Shackleton15.4072Ross West-38.734Amery38.2134Larsen C-67.4626
4Brunt Stancomb15.1161Wilkins-25.9232Ross East35.7174Pine Island-51.3196
5West11.5972Larsen B-22.0849Pine Island32.5049Ross East-47.3011
6Jelbart7.05912Pine Island-18.8147Brunt Stancomb20.5693Ninnis-28.1346
7Fimbul6.17586Mertz-15.9155Shackleton20.5692Wilkins-26.7525
8Riiser-Larsen3.51567Ninnis-12.539Cook18.9432Larsen B-24.3585
9Pourquoi Pas2.70785Larsen A-11.9324Larsen C18.8407Mertz-23.7948
<<green_2022_mean>> # provides df_net
# df = df_net
df['longitude'] = lon
df['latitude'] = lat

import geopandas as gpd
fname = '~/data/NSIDC/NSIDC-0709.002/1992.02.07/IceBoundaries_Antarctica_v02.shp'
ew = gpd.read_file(fname)

df = gpd.GeoDataFrame(df, geometry=gpd.points_from_xy(df['longitude'],df['latitude']), crs="EPSG:4326")
df = df.to_crs('epsg:3031')
e = ew.to_crs('epsg:3031')

idx = ew.sindex.nearest(df['geometry'], return_all=False)
df['Region'] = ''
for dfidx,ewidx in idx.T:
    arr = df.iloc[dfidx].copy(deep=True)
    arr['Region'] = ew.iloc[ewidx]['Regions']
    df.iloc[dfidx] = arr

df = df.drop(columns=['latitude','longitude','geometry'])
# df.loc['Total'] = [df['Mass'].sum(), None, 'All']

# df.groupby('Region').sum().round()

diff = df[c].diff(axis='columns')
diff_gain = diff[diff > 0].sum(axis='columns')
diff_loss = diff[diff < 0].sum(axis='columns')
diff_gain.name = 'Mass'
diff_loss.name = 'Mass'
df_gain = pd.DataFrame(diff_gain)
df_loss = pd.DataFrame(diff_loss)
df_net = df_loss + df_gain
df_gain['Region'] = df['Region']
df_loss['Region'] = df['Region']
df_net['Region'] = df['Region']
Net:
Mass gain [4563.264309048649]
Mass Loss [-9434.897965610035]
Net mass change [-4871.633656561384]

Per year:
Mass gain [194.59549292318295]
Mass Loss [-402.3410646315572]
Net mass change [-207.74557170837417]
for loc in ['East','West','Peninsula']:
    print("\n", loc)
    sub = (df_net['Mass'] > 0) & (df_net['Region'] == loc); print('Mass gain', df_net[sub].drop(columns='Region').sum().values/dt)
    sub = (df_net['Mass'] < 0) & (df_net['Region'] == loc); print('Mass loss', df_net[sub].drop(columns='Region').sum().values/dt)
 East
Mass gain [192.48919217062863]
Mass loss [-69.4663382466161]

 West
Mass gain [1.923849262408337]
Mass loss [-206.1686424710849]

 Peninsula
Mass gain [0.17175689689132026]
Mass loss [-124.70409821345615]

GRACE

import xarray as xr
ds = xr.open_dataset("~/data/Groh_2021/AIS_GMB_grid.nc")
ds['dm'] = ds['dm'] * ds['area']
ds = ds['dm'].sum(dim=['x','y'])/1E12
df = ds.to_dataframe()

# df = df.loc['2002-01-01':'2024-12-31']
df = df - df.max()
_ = df.plot()
# df = df.resample('D').mean().interpolate().resample('YS').mean()
df = df.resample('YS').mean()
df = df.diff()
print(df.mean())
dm   -79.146737
dtype: float64

./figs_tmp/0f945242fb8fcfd039ef097240df737ac32412ca.png

Misc

Export tables to CSVs

(save-excursion
  ;; `tbl' names the org table to export (assumed bound via org-babel :var)
  (goto-char (point-min))
  (re-search-forward (concat "^#\\+name: " tbl) nil t)
  (next-line)
  (org-table-export (concat "./dat/" tbl ".csv") "orgtbl-to-csv")
  ;;(shell-command-to-string (concat "head " tbl ".csv"))
  (message (concat "Exported " tbl " to " (concat "./dat/" tbl ".csv")))
  )

Convert PDFs to PNG
