This document is an Emacs Org Mode plain-text file with code and text embedded. The purpose of the code is to calculate the mass flux ice discharge (MFID) for ESA CCI+ Greenland, Phase 2. It is based on the workflow created by Ken Mankoff for the PROMICE Solid Ice Discharge product (Mankoff, Ken; Solgaard, Anne; Larsen, Signe, 2020, “Greenland Ice Sheet solid ice discharge from 1986 through last month: Discharge”, https://doi.org/10.22008/promice/data/ice_discharge/d/v02, GEUS Dataverse, V101) and on the previous phases of ESA CCI.
To recreate this work:
- Check that you have the necessary software dependencies installed. See section: Code.
  - In Phase 2 the code was updated to run partly via a Docker image, limiting the required software packages.
  - Much of the code can be run via a Makefile which uses Docker; this requires that Docker is installed and a user is logged in.
- Download and set up the necessary data files used throughout the Input data section.
- Make sure to define DATADIR (for example: export DATADIR=/mnt/netapp/glaciologi/MFID_ESA_CCI/data).
- Open this file in Emacs Org Mode.
- Tangle the embedded code blocks.
- Execute
make
to run the contents of the Makefile.
FROM ubuntu:20.04
LABEL authors="Signe Hillerup Larsen"
LABEL maintainer="[email protected]"
# system environment
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get -y update && apt-get install -y --no-install-recommends --no-install-suggests \
bc \
bsdmainutils \
datamash \
gdal-bin \
grass \
netcdf-bin \
parallel \
proj-bin \
proj-data \
zip \
nco \
&& apt-get autoremove -y \
&& apt-get clean -y \
&& rm -rf /var/lib/apt/lists/*
RUN echo LANG="en_US.UTF-8" > /etc/default/locale
ENV LANGUAGE en_US.UTF-8
ENV LANG C
ENV LC_ALL C
ENV LC_CTYPE C
ENV SHELL /bin/bash
# create a user
RUN useradd --create-home user && chmod a+rwx /home/user
ENV HOME /home/user
WORKDIR /home/user
RUN mkdir -p /data
ENV DATADIR /data
# switch the user
USER user
CMD ["/usr/bin/grass", "--version"]
# docker build -f Dockerfile_grass -t ice_discharge_grass .
cd docker/grass
docker build -t ice_discharge_grass .
docker run -it ice_discharge_grass # run it
#container_args ?= run -it --cpus 7 --user $(shell id -u):$(shell id -g) --mount type=bind,src=$${DATADIR},dst=/data --mount type=bind,src=$(shell pwd),dst=/work --env PARALLEL="--delay 0.1" ice_discharge_grass
# docker tag local-image:tagname new-repo:tagname
docker tag ice_discharge_grass hillerup/ice_discharge:grass
docker push hillerup/ice_discharge:grass
FROM continuumio/miniconda3
RUN conda install \
curl \
cython \
ipython \
jupyter \
matplotlib \
numpy \
pandas \
pip \
scipy \
statsmodels \
tabulate \
xarray \
&& conda clean -a \
&& pip install --no-cache-dir \
cfchecker \
cfunits \
grass-session \
nc-time-axis \
pyshp \
semver \
uncertainties \
git+https://github.com/aussda/pyDataverse.git@3b040ff23b665ec2650bebcf4bd5478de6881af0
# create a user
RUN useradd --create-home user && chmod a+rwx /home/user
ENV HOME /home/user
WORKDIR /home/user
RUN mkdir -p /data
ENV DATADIR /data
# switch the user
USER user
# create a user
# RUN useradd -m -U user
# RUN chmod a+rwx /home/user
# ENV HOME /home/user
# RUN mkdir -p /data /work
# WORKDIR /work
# switch the user
# USER user
# RUN mkdir -p /data /work
# WORKDIR /work
# The code to run when container is started:
# ENTRYPOINT ["conda", "run", "-n", "ice_discharge", "python3"]
# For interactive shell
# RUN conda init bash
# RUN echo "conda activate ice_discharge" >> /root/.bashrc
cd docker/conda
docker build -t ice_discharge_conda .
docker tag ice_discharge_conda:latest hillerup/ice_discharge:conda
docker run -it --mount type=bind,src=$(pwd),dst=/work hillerup/ice_discharge:conda python -c 'import pandas as pd; print(pd)'
# docker tag local-image:tagname new-repo:tagname
docker tag ice_discharge_conda hillerup/ice_discharge:conda
docker push hillerup/ice_discharge:conda
It is more reproducible to use the Docker image hillerup/ice_discharge:conda, but for record-keeping's sake, here is the Python environment.
docker run hillerup/ice_discharge:conda conda env export -n base
This code, and all code files in this project, are derived products tangled from the ice_discharge.org source file.
container_cmd ?= docker
container_args ?= run --user $(shell id -u):$(shell id -g) --mount type=bind,src=${DATADIR},dst=/data --mount type=bind,src=$(shell pwd),dst=/home/user --env PARALLEL="--delay 0.1 -j -1"
all: docker G GRASS PYTHON dist
docker: FORCE ## Pull down Docker environment
docker pull hillerup/ice_discharge:grass
${container_cmd} ${container_args} hillerup/ice_discharge:grass
docker pull hillerup/ice_discharge:conda
${container_cmd} ${container_args} hillerup/ice_discharge:conda conda env export -n base
G:
grass -e -c EPSG:3413 ./G
import: FORCE
${container_cmd} ${container_args} hillerup/ice_discharge:grass grass ./G/PERMANENT --exec ./import.sh
GRASS: FORCE
${container_cmd} ${container_args} hillerup/ice_discharge:grass grass ./G/PERMANENT --exec ./import.sh
${container_cmd} ${container_args} hillerup/ice_discharge:grass grass ./G/PERMANENT --exec ./gate_IO_runner.sh
${container_cmd} ${container_args} hillerup/ice_discharge:grass grass ./G/PERMANENT --exec ./vel_eff.sh
${container_cmd} ${container_args} hillerup/ice_discharge:grass grass ./G/PERMANENT --exec ./export.sh
PYTHON: FORCE
${container_cmd} ${container_args} hillerup/ice_discharge:conda python ./errors.py
${container_cmd} ${container_args} hillerup/ice_discharge:conda python ./raw2discharge.py
${container_cmd} ${container_args} hillerup/ice_discharge:grass grass ./G/PERMANENT --exec ./gate_export.sh
mkdir -p figs
${container_cmd} ${container_args} hillerup/ice_discharge:conda python ./figures.py
dist:
ln -s out CCI
zip -r CCI.zip CCI
rm CCI
FORCE: # dummy target
clean:
rm -fR G tmp out figs CCI.zip
# Convenience functions for pretty printing messages
RED='\033[0;31m'; ORANGE='\033[0;33m'; GREEN='\033[0;32m'; NC='\033[0m' # No Color
MSG_OK() { echo -e "${GREEN}${@}${NC}"; }
MSG_WARN() { echo -e "${ORANGE}WARNING: ${@}${NC}"; }
MSG_ERR() { echo -e "${RED}ERROR: ${@}${NC}" >&2; }
GRASS environment variables are documented at https://grass.osgeo.org/grass74/manuals/variables.html
#sudo mount -o [email protected],gid=1260400513 /dev/sda1 /media/[email protected]/datadrive
sudo mount -t ntfs-3g -o uid=1260406986,gid=1260400513 /dev/sda1 /media/[email protected]/datadrive
conda activate py38
grass --text ./G/PERMANENT
GRASS_VERBOSE [all modules] toggles the verbosity level:
- -1 :: complete silence (errors and warnings are also discarded)
- 0 :: only errors and warnings are printed
- 1 :: progress and important messages are printed (percent complete)
- 2 :: all module messages are printed
- 3 :: additional verbose messages are printed
export GRASS_VERBOSE=3
# export GRASS_MESSAGE_FORMAT=silent
#export PROJ_LIB=/usr/bin/proj
#export DATADIR=/data #/mnt/netapp/glaciologi/MFID_ESA_CCI/data
if [ -z ${DATADIR+x} ]; then
echo "DATADIR environment varible is unset."
echo "Fix with: \"export DATADIR=/path/to/data\""
exit 255
fi
set -x # print commands to STDOUT before running them
<<MSGS_pretty_print>>
<<GRASS_config>>
- BedMachine v5, from Morlighem et al. (2017)
MSG_OK "BedMachine"
g.mapset -c BedMachine
#for var in $(echo mask surface thickness bed errbed); do
# echo $var
# r.external source=netCDF:${DATADIR}/Morlighem_2017/BedMachineGreenland-v5.nc:${var} output=${var}
#done
for var in surface thickness bed errbed; do
echo $var
r.external source="${DATADIR}"/Morlighem_2022/BMv5_3413/${var}.tif output=${var} --o
done
# the ice mask is imported separately from a float GeoTIFF
r.external source="${DATADIR}"/Morlighem_2022/BMv5_3413/mask_float.tif output=mask -o --o
r.colors -a map=errbed color=haxby
g.mapset PERMANENT
g.region raster=surface@BedMachine res=200 -a -p
g.region -s
g.mapset BedMachine
g.region -dp
r.colors map=mask color=haxby
r.mapcalc "mask_ice = if(mask == 2, 1, null())"
- From citet:mouginot_2019_glacier
MSG_OK "Mouginot 2019 sectors"
g.mapset -c Mouginot_2019
v.in.ogr input=${DATADIR}/Mouginot_2019 output=sectors_all
v.extract input=sectors_all where="NAME NOT LIKE '%ICE_CAP%'" output=sectors
db.select table=sectors | head
v.db.addcolumn map=sectors columns="region_name varchar(100)"
db.execute sql="UPDATE sectors SET region_name=SUBREGION1 || \"___\" || NAME"
# v.db.addcolumn map=sectors columns="area DOUBLE PRECISION"
v.to.db map=sectors option=area columns=area units=meters
mkdir -p ./tmp/
# db.select table=sectors > ./tmp/Mouginot_2019.txt
v.to.rast input=sectors output=sectors use=cat label_column=region_name
r.mapcalc "mask_GIC = if(sectors)"
# # regions map
v.to.rast input=sectors output=regions_tmp use=cat label_column=SUBREGION1
# which categories exist?
# r.category regions separator=comma | cut -d, -f2 | sort | uniq
# Convert categories to numbers
r.category regions_tmp separator=comma | sed s/NO/1/ | sed s/NE/2/ | sed s/CE/3/ | sed s/SE/4/ | sed s/SW/5/ | sed s/CW/6/ | sed s/NW/7/ > ./tmp/mouginot.cat
r.category regions_tmp separator=comma rules=./tmp/mouginot.cat
# r.category regions_tmp
r.mapcalc "regions = @regions_tmp"
# # region vector
# r.to.vect input=regions output=regions type=area
# v.db.addcolumn map=regions column="REGION varchar(2)"
# v.what.vect map=regions column=REGION query_map=sectors query_column=SUBREGION1
# # mask
# export DATADIR="/media/[email protected]/datadrive/data/ESA_GIS_CCI"
DIR=${DATADIR}/ENVEO/monthly
(cd ${DIR}; ls -FGhol --color *nc)
(cd ${DIR}; ncdump -h $(ls *.nc | head -n1))
(cd ${DIR}; parallel "md5sum {}" ::: $(ls *.nc|head -n8))
- Read in all the data
- Convert from [m day-1] to [m year-1]
MSG_OK "ENVEO"
g.mapset -c ENVEO
ROOT=${DATADIR}/ENVEO/monthly
find ${ROOT} -name "*.nc"
# FILE=$(find ${ROOT} -name "*.nc"|head -n1) # testing
FILE=$(find ${ROOT} -name "greenland*.nc" | head -n1) # DEBUG
for FILE in $(find ${ROOT} -name "greenland*.nc" | LC_ALL=C sort); do
T=$(echo ${FILE} | grep -o '_s........_' | tr -dc '0-9')
DATE_STR=${T:0:4}_${T:4:2}_${T:6:2}
echo $DATE_STR
r.external -o source="NetCDF:${FILE}:land_ice_surface_easting_velocity" output=vx_${DATE_STR}
r.external -o source="NetCDF:${FILE}:land_ice_surface_northing_velocity" output=vy_${DATE_STR}
r.external -o source="NetCDF:${FILE}:land_ice_surface_velocity_stddev" output=err_${DATE_STR}
r.external -o source="NetCDF:${FILE}:land_ice_surface_easting_stddev" output=ex_${DATE_STR}
r.external -o source="NetCDF:${FILE}:land_ice_surface_northing_stddev" output=ey_${DATE_STR}
r.mapcalc "err_${DATE_STR} = (ex_${DATE_STR}^2 + ey_${DATE_STR}^2)^0.5"
done
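For clarity, here is a hedged Python sketch of the date extraction done with grep/tr above, assuming filenames embed an "_sYYYYMMDD_" token (the example filename is made up):

import re

def date_str(filename):
    """Return 'YYYY_MM_DD' parsed from the _sYYYYMMDD_ token."""
    m = re.search(r"_s(\d{4})(\d{2})(\d{2})_", filename)
    if m is None:
        raise ValueError(f"no _sYYYYMMDD_ token in {filename}")
    return "_".join(m.groups())

assert date_str("greenland_iv_s20180101_e20180131.nc") == "2018_01_01"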
FILE=$(find ${ROOT} -name "*CCI*.nc" | head -n1) # DEBUG
for FILE in $(find ${ROOT} -name "*CCI*.nc" | LC_ALL=C sort); do
T=$(basename "${FILE}" | grep -o '^[0-9]\{8\}')
DATE_STR=${T:0:4}_${T:4:2}_${T:6:2}
echo $DATE_STR
r.external -o source="NetCDF:${FILE}:land_ice_surface_easting_velocity" output=vx_${DATE_STR}
r.external -o source="NetCDF:${FILE}:land_ice_surface_northing_velocity" output=vy_${DATE_STR}
r.external -o source="NetCDF:${FILE}:land_ice_surface_velocity_stddev" output=err_${DATE_STR}
r.external -o source="NetCDF:${FILE}:land_ice_surface_easting_stddev" output=ex_${DATE_STR}
r.external -o source="NetCDF:${FILE}:land_ice_surface_northing_stddev" output=ey_${DATE_STR}
r.mapcalc "err_${DATE_STR} = (ex_${DATE_STR}^2 + ey_${DATE_STR}^2)^0.5"
done
r.series input=$(g.list type=raster pattern=vx_2018_* separator=",") output=vx_baseline method=average --o
r.series input=$(g.list type=raster pattern=vy_2018_* separator=",") output=vy_baseline method=average --o
r.mapcalc "vel_baseline = 365 * sqrt(vx_baseline^2 + vy_baseline^2) * mask_ice@BedMachine" --o
r.series input=$(g.list type=raster pattern=err_2018_* separator=",") output=err_baseline method=average --o
r.mapcalc "vel_err_baseline = 365 * err_baseline * mask_ice@BedMachine" --o
- There are holes in the velocity data which will create false gates. Fill them in.
- Clump the cells that have no velocity
- The largest no-velocity clump is the ocean
- Mask by the ocean (so the holes within the ice remain)
- Fill the holes
A NumPy sketch of this clump-and-mask logic follows the GRASS code below.
r.mask -r
r.mapcalc "no_vel = if(isnull(vel_baseline), 1, null())"
r.mask no_vel
r.clump input=no_vel output=no_vel_clump --o
ocean_clump=$(r.stats -c -n no_vel_clump sort=desc | head -n1 | cut -d" " -f1)
r.mask -i raster=no_vel_clump maskcats=${ocean_clump} --o
r.fillnulls input=vel_baseline out=vel_baseline_filled method=bilinear
r.mask -r
g.rename raster=vel_baseline_filled,vel_baseline --o
r.colors map=vel_baseline -e color=viridis
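For intuition, here is a minimal NumPy/SciPy sketch of the same clump-and-mask logic on a toy array (not the project's grids): label the no-velocity cells, treat the largest connected clump as ocean, and keep only the interior holes for filling.

import numpy as np
from scipy import ndimage

# toy velocity grid: NaN = no data; the border is ocean, (2,2) is a hole
vel = np.array([
    [np.nan, np.nan, np.nan, np.nan, np.nan],
    [np.nan, 1.0,    2.0,    3.0,    np.nan],
    [np.nan, 4.0,    np.nan, 5.0,    np.nan],
    [np.nan, 6.0,    7.0,    8.0,    np.nan],
    [np.nan, np.nan, np.nan, np.nan, np.nan],
])

no_vel = np.isnan(vel)                         # cells without velocity
labels, n = ndimage.label(no_vel)              # clump the no-velocity cells
sizes = ndimage.sum(no_vel, labels, range(1, n + 1))
ocean = labels == (np.argmax(sizes) + 1)       # largest clump = ocean
holes = no_vel & ~ocean                        # interior holes to be filled
print(holes.astype(int))                       # 1 only at the interior hole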
- From Bjørk et al. (2015).
- Also use citet:mouginot_2019_glacier
- Write out x,y,name. Can use x,y and mean gate location to find closest name for each gate.
MSG_OK "Bjørk 2015"
g.mapset -c Bjork_2015
ROOT=${DATADIR}/Bjork_2015/
cat ${ROOT}/GreenlandGlacierNames_GGNv01.csv | iconv -c -f utf-8 -t ascii | grep GrIS | awk -F';' '{print $3"|"$2"|"$7}' | sed s/,/./g | m.proj -i input=- | sed s/0.00\ //g | v.in.ascii input=- output=names columns="x double precision, y double precision, name varchar(99)"
# db.select table=names | tr '|' ',' > ./tmp/Bjork_2015_names.csv
- h_0 is PRODEM20, set to day 182 of 2020
- h_n+ is PRODEM for the following years
  - When PRODEM ends, continue using the last year (e.g., pandas ffill(); see the sketch after this list)
- h_n- is Khan 2016 (${DATADIR}/Khan_2016/README.org)
  - Coverage is 1995 through 2020
  - Prior to the start of the Khan record, use the first year (e.g., pandas bfill())
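A minimal pandas sketch of this fill rule, with made-up surface heights and assuming annual values indexed by year:

import pandas as pd

h = pd.Series(index=range(2012, 2026), dtype=float)
h.loc[2019:2023] = [100.0, 99.0, 98.5, 97.0, 96.0]  # e.g. the PRODEM years

# last year carried forward after the record ends, first year carried back
h_filled = h.ffill().bfill()
print(h_filled.loc[[2012, 2024]])  # 100.0 (bfill) and 96.0 (ffill)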
MSG_OK "dh/dt"
g.mapset -c PRODEM
r.mask -r
f=$(ls ${DATADIR}/PRODEM/PRODEM??.tif | head -n1) # debug
for f in $(ls ${DATADIR}/PRODEM/PRODEM??.tif); do
y=20$(echo ${f: -6:2})
r.in.gdal -r input=${f} output=DEM_${y} band=1
# r.in.gdal -r input=${f} output=var_${y} band=2
# r.in.gdal -r input=${f} output=dh_${y} band=3
# r.in.gdal -r input=${f} output=time_${y} band=4
# r.univar -g time_2019 # mean = DOY 213 = 01 Aug
done
g.region raster=DEM_2019 -pa
Using CCI SEC data from citet:simonsen_2017_implications,sørensen_2015_envisat,khvorostovsky_2012_merging,CCI_SEC.
- This NetCDF file is malformed and needs some dimensions swapped before GDAL can read it.
- Thanks: https://stackoverflow.com/questions/47642695/how-can-i-swap-the-dimensions-of-a-netcdf-file
g.mapset -c SEC
ls ${DATADIR}/SEC/Release/CCI_GrIS_RA_SEC_5km_Vers3.0_2024-05-31.nc
INFILE=${DATADIR}/SEC/Release/CCI_GrIS_RA_SEC_5km_Vers3.0_2024-05-31.nc
ncdump -chs ${INFILE}
ncdump -v x ${INFILE}
ncdump -v y ${INFILE}
g.region w=-739301.625 e=880698.375 s=-3478140.75 n=-413140.75 res=5000 -p
# the NetCDF x/y coordinates are cell centers; expand by half a cell (2500 m)
g.region w=w-2500 e=e+2500 n=n+2500 s=s-2500 -pa
ncap2 --overwrite -s 'SEC2=SEC.permute($t,$y,$x)' ${INFILE} ./tmp/SEC.nc
INFILE=./tmp/SEC.nc
# ncdump -p 9,5 ./tmp/SEC.nc |less
for i in $(seq 28); do
d0=$(( ${i}+1991 ))-01-01
d1=$(( ${i}+1996 ))-01-01
n0=$(echo $d0 | sed s/-//g)
n1=$(echo $d1 | sed s/-//g)
OUTFILE=SEC_${n0}_${n1}
echo $OUTFILE
r.external -o source=NetCDF:${INFILE}:SEC2 band=${i} output=${OUTFILE}
r.region -c map=${OUTFILE}
done
r.mapcalc "dh_2014 = SEC_20120101_20170101"
r.mapcalc "dh_2015 = SEC_20130101_20180101"
r.mapcalc "dh_2016 = SEC_20140101_20190101"
r.mapcalc "dh_2017 = SEC_20150101_20200101"
r.mapcalc "dh_2018 = SEC_20160101_20210101"
r.mapcalc "dh_2019 = SEC_20170101_20220101"
seq 2014 2019 | parallel --bar --progress "r.null map=dh_{} null=0" --quiet
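To make the band bookkeeping explicit, a small Python sketch of the mapping used above (band i holds the 5-year window [1991+i, 1996+i], which is later assigned to the window's center year):

# each NetCDF band i covers [1991+i .. 1996+i]; dh_YYYY takes the center year
for i in range(1, 29):
    y0, y1 = 1991 + i, 1996 + i
    center = y0 + (y1 - y0) // 2
    if center in (2014, 2019):  # the two ends of the dh_YYYY range above
        print(f"band {i:2d}: SEC_{y0}0101_{y1}0101 -> dh_{center}")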
- Merge Khan dh/dt w/ PRODEM to generate annual DEMs
MSG_OK "DEM"
g.mapset -c DEM
g.region raster=DEM_2020@PRODEM -pa
for y in {2019..2023}; do
r.mapcalc "DEM_${y} = DEM_${y}@PRODEM"
done
for y in {2019..2014}; do # bash brace expansion counts down
y1=$(( ${y} + 1 ))
r.mapcalc "DEM_${y} = DEM_${y1} - dh_${y}@SEC" --o
done
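The same backward reconstruction, sketched in Python with scalars standing in for rasters (values are made up):

# PRODEM supplies DEMs for 2019-2023; earlier DEMs are reconstructed by
# stepping backwards and removing each year's SEC elevation change
DEM = {y: 1000.0 for y in range(2019, 2024)}  # stand-in PRODEM DEMs [m]
dh = {y: -0.5 for y in range(2014, 2020)}     # stand-in SEC dh/dt [m/yr]

for y in range(2019, 2013, -1):               # 2019 down to 2014
    DEM[y] = DEM[y + 1] - dh[y]

print(DEM[2014])  # 1003.0: six years of -0.5 m/yr thinning removed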
- [X] Find all fast-moving ice (>X m yr-1)
- Results not very sensitive to velocity limit (10 to 100 m yr-1 examined)
- [X] Find grounding line by finding edge cells where fast-moving ice borders water or ice shelf based (loosely) on BedMachine mask
- [X] Move grounding line cells inland by X km, again limiting to regions of fast ice.
- Results not very sensitive to gate position (1 - 5 km range examined)
- [X] Discard gates if group size ∈ [1,2]
- [X] Manually clean a few areas (e.g. land-terminating glaciers, gates due to invalid masks, etc.) by manually selecting invalid regions in Google Earth, then remove gates in these regions
Note that “fast ice” refers to flow velocity, not the sea ice term of “stuck to the land”.
INSTRUCTIONS: Set VELOCITY_CUTOFF and BUFFER_DIST to 50 and 2500 respectively and run the code. Then repeat for a range of other velocity cutoffs and buffer distances to get a range of sensitivities.
OR: Tangle the code below (via org-babel-tangle, C-c C-v C-t) to ./gate_IO.sh and then run this in a GRASS session:
<<MSGS_pretty_print>>
<<GRASS_config>>
# 1000: clean results, but too few
# 500: clean results, still too few
# 250: looks good, but 15 Gt < Mankoff 2019. Maybe missing some outlets?
# 150:
VELOCITY_CUTOFF=150
BUFFER_DIST=10000
. ./gate_IO.sh
Create a new mapset for this specific velocity cutoff and buffer distance
g.mapset -c gates_${VELOCITY_CUTOFF}_${BUFFER_DIST}
g.region -d
From above:
- [X] Find grounding line by finding edge cells where fast-moving ice borders water or ice shelf based (loosely) on BedMachine mask
The “loosely” is because the BedMachine mask doesn’t always reach into each fjord all the way. I buffer the BedMachine mask by 2 km here so that it extends to the edge of the velocity data.
g.copy raster=mask_ice@BedMachine,mask_ice --o
# Grow by 2 km (10 cells @ 200 m/cell)
r.grow input=mask_ice output=mask_ice_grow radius=10 new=1 --o
r.mask mask_ice_grow
The fast ice edge is where there is fast-flowing ice overlapping with not-ice.
r.mapcalc "fast_ice = if(vel_baseline@ENVEO > ${VELOCITY_CUTOFF}, 1, null())" --o
r.mapcalc "fast_ice_100 = if(vel_baseline@ENVEO > 100, 1, null())" --o
r.mask -r
# no velocity data, or is flagged as ice shelf or land in BedMachine
r.mapcalc "not_ice = if(isnull(vel_baseline@ENVEO) ||| (mask@BedMachine == 0) ||| (mask@BedMachine == 3), 1, null())" --o
r.grow input=not_ice output=not_ice_grow radius=1.5 new=99 --o
r.mapcalc "fast_ice_edge = if(((not_ice_grow == 99) && (fast_ice == 1)), 1, null())" --o
The gates are set ${BUFFER_DIST} m inland from the fast ice edge. This is done by buffering the fast ice edge (which fills the space between the fast ice edge and the buffer extent) and then growing the buffer by one cell. This last step defines the gate locations.
However, the gate location alone is not enough to properly estimate discharge. Ice must flow from outside the gates, through the gates, to inside the gates, and not flow from one gate pixel to another gate pixel (or it would be counted twice).
r.buffer input=fast_ice_edge output=fast_ice_buffer distances=${BUFFER_DIST} --o
r.grow input=fast_ice_buffer output=fast_ice_buffer_grow radius=1.5 new=99 --o
r.mask -i not_ice --o
r.mapcalc "gates_inside = if(((fast_ice_buffer_grow == 99) && (fast_ice_100 == 1)), 1, null())" --o
r.mask -r
r.grow input=gates_inside output=gates_inside_grow radius=1.1 new=99 --o
r.mask -i not_ice --o
r.mapcalc "gates_maybe = if(((gates_inside_grow == 99) && (fast_ice_100 == 1) && isnull(fast_ice_buffer)), 1, null())" --o
r.mask -r
r.grow input=gates_maybe output=gates_maybe_grow radius=1.1 new=99 --o
r.mask -i not_ice --o
r.mapcalc "gates_outside = if(((gates_maybe_grow == 99) && (fast_ice_100 == 1) && isnull(fast_ice_buffer) && isnull(gates_inside)), 1, null())" --o
r.mask -r
r.mapcalc "gates_IO = 0" --o
r.mapcalc "gates_IO = if(isnull(gates_inside), gates_IO, 1)" --o
r.mapcalc "gates_IO = if(isnull(gates_outside), gates_IO, -1)" --o
r.colors map=gates_inside color=red
r.colors map=gates_maybe color=grey
r.colors map=gates_outside color=blue
r.colors map=gates_IO color=viridis
- For each gate, split into two for the vector components of the velocity, then…
- If flow is from gate to INSIDE, it is discharged
- If flow is from gate to GATE, it is ignored
- If flow is from gate to NOT(GATE || INSIDE) it is ignored
- If gates are a closed loop, such as the 1700 m flight-line, then this scenario would be NEGATIVE discharge, not ignored. This was tested with the 1700 m flight-line and compared against both the vector calculations and WIC estimates.
| var            | value  | meaning           |
|----------------+--------+-------------------|
| vx             | > 0    | east / right      |
| vx             | < 0    | west / left       |
| vy             | > 0    | north / up        |
| vy             | < 0    | south / down      |
| GRASS indexing | [0,1]  | cell to the right |
|                | [0,-1] | cell to the left  |
|                | [-1,0] | cell above        |
|                | [1,0]  | cell below        |
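A hedged NumPy sketch of the neighbour lookup encoded in the table above, on a toy 1-D row of cells: a candidate gate cell counts for the x component only if the neighbour the ice flows toward is an "inside" cell (np.roll wraps at the edges, which is fine for this toy):

import numpy as np

IO = np.array([-1, -1, 0, 1, 1])          # -1 outside, 0 gate candidate, 1 inside
vx = np.array([5.0, 5.0, 5.0, 5.0, 5.0])  # eastward flow everywhere [m/day]
gate_cand = IO == 0

east = np.roll(IO, -1)                    # IO value of the cell to the east
west = np.roll(IO, 1)                     # IO value of the cell to the west
gates_x = np.where(gate_cand & (vx > 0), east == 1,
                   np.where(gate_cand & (vx < 0), west == 1, False))
print(gates_x.astype(int))                # [0 0 1 0 0]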
# g.mapset -c gates_50_2500
r.mask -r
r.mapcalc "gates_x = 0" --o
r.mapcalc "gates_x = if((gates_maybe == 1) && (vx_baseline@ENVEO > 0), gates_IO[0,1], gates_x)" --o
r.mapcalc "gates_x = if((gates_maybe != 0) && (vx_baseline@ENVEO < 0), gates_IO[0,-1], gates_x)" --o
r.mapcalc "gates_y = 0" --o
r.mapcalc "gates_y = if((gates_maybe != 0) && (vy_baseline@ENVEO > 0), gates_IO[-1,0], gates_y)" --o
r.mapcalc "gates_y = if((gates_maybe != 0) && (vy_baseline@ENVEO < 0), gates_IO[1,0], gates_y)" --o
r.mapcalc "gates_x = if(gates_x == 1, 1, 0)" --o
r.mapcalc "gates_y = if(gates_y == 1, 1, 0)" --o
r.null map=gates_x null=0 # OR r.null map=gates_x setnull=0
r.null map=gates_y null=0 # OR r.null map=gates_y setnull=0
r.mapcalc "gates_xy_clean00 = if((gates_x == 1) || (gates_y == 1), 1, null())" --o
r.mapcalc "gates_xy_clean0 = if(gates_xy_clean00 & if(DEM_2019@DEM), 1, null())" --o
# Remove clusters of 2 or less. How many hectares in X pixels?
# frink "(200 m)^2 * 2 -> hectares" # ans: 8.0
r.clump -d input=gates_xy_clean0 output=gates_gateID --o
r.reclass.area -d input=gates_gateID output=gates_area value=9 mode=lesser method=reclass --o
r.mapcalc "gates_xy_clean1 = if(isnull(gates_area), gates_xy_clean0, null())" --o
- Actually, limit to the approximate Mouginot 2019 mask - it's a bit narrow in some places
# r.mask mask_GIC@Mouginot_2019 --o
r.grow input=mask_GIC@Mouginot_2019 output=mask_GIC_Mouginot_2019_grow radius=4.5 # three cells
r.mask mask_GIC_Mouginot_2019_grow --o
r.mapcalc "gates_xy_clean2 = gates_xy_clean1" --o
r.mask -r
# r.univar map=gates_xy_clean1
# r.univar map=gates_xy_clean2
MSG_ERR "No manual removal -> No clean3 -> expecting ERROR"
g.copy "gates_xy_clean3,gates_final" --o
# MSG_WARN "Using clean2"
g.copy "gates_xy_clean2,gates_final" --o
Add:
- Gate ID
- Calculate the average x,y of the gate, and then from that ONE point, determine the following. Do this from the average point rather than for each gate pixel because some gates span multiple sectors, or different ends of the gate are nearer different names, etc.
- Average lon,lat of gate
- Nearest citet:mouginot_2019_glacier region, sector, and name
- Nearest citet:bjork_2015_brief name
Do this for both the area vector and the point vector so that we can export
- KML and GeoPackage with gates and metadata
- simple CSV w/ gates and metadata.
# db.droptable -f table=gates_final
# db.droptable -f table=gates_final_pts
# areas (clusters of gate pixels, but diagonals are separate)
r.to.vect input=gates_final output=gates_final type=area --o
v.db.dropcolumn map=gates_final column=label
v.db.dropcolumn map=gates_final column=value
v.db.addcolumn map=gates_final columns="gate INT"
v.what.rast map=gates_final raster=gates_gateID column=gate type=centroid
# # points (each individual gate pixel)
# r.to.vect input=gates_final output=gates_final_pts type=point --o
# v.db.dropcolumn map=gates_final_pts column=label
# v.db.dropcolumn map=gates_final_pts column=value
# v.db.addcolumn map=gates_final_pts columns="gate INT"
# v.what.rast map=gates_final_pts raster=gates_gateID column=gate type=point
Here we have:
db.select table=gates_final|head -n10 # cat|gate|x|y|mean_x|mean_y
db.select table=gates_final_pts|head # cat|gate|mean_x|mean_y
v.what.rast map=gates_final_pts raster=lon@PERMANENT column=lon
v.what.rast map=gates_final_pts raster=lat@PERMANENT column=lat
v.db.addcolumn map=gates_final columns="mean_lon DOUBLE PRECISION, mean_lat DOUBLE PRECISION"
for G in $(db.select -c sql="select gate from gates_final"|sort -n|uniq); do
db.execute sql="UPDATE gates_final SET mean_lon=(SELECT lon FROM gates_final_pts WHERE gate = ${G}) where gate = ${G}"
db.execute sql="UPDATE gates_final SET mean_lat=(SELECT lat FROM gates_final_pts WHERE gate = ${G}) where gate = ${G}"
done
- Sector Number
- Region Code
- Nearest Sector or Glacier Name
v.db.addcolumn map=gates_final columns="sector INT"
v.db.addcolumn map=gates_final_pts columns="sector INT"
v.distance from=gates_final to=sectors@Mouginot_2019 upload=to_attr column=sector to_column=cat
v.distance from=gates_final_pts to=sectors@Mouginot_2019 upload=to_attr column=sector to_column=cat
v.db.addcolumn map=gates_final columns="region VARCHAR(2)"
v.db.addcolumn map=gates_final_pts columns="region VARCHAR(2)"
v.distance from=gates_final to=sectors@Mouginot_2019 upload=to_attr column=region to_column=SUBREGION1
v.distance from=gates_final_pts to=sectors@Mouginot_2019 upload=to_attr column=region to_column=SUBREGION1
v.db.addcolumn map=gates_final columns="Mouginot_2019 VARCHAR(99)"
v.db.addcolumn map=gates_final_pts columns="Mouginot_2019 VARCHAR(99)"
v.distance from=gates_final to=sectors@Mouginot_2019 upload=to_attr column=Mouginot_2019 to_column=NAME
v.distance from=gates_final_pts to=sectors@Mouginot_2019 upload=to_attr column=Mouginot_2019 to_column=NAME
v.db.addcolumn map=gates_final columns="Bjork_2015 VARCHAR(99)"
v.db.addcolumn map=gates_final_pts columns="Bjork_2015 VARCHAR(99)"
v.distance from=gates_final to=names@Bjork_2015 upload=to_attr column=Bjork_2015 to_column=name
v.distance from=gates_final_pts to=names@Bjork_2015 upload=to_attr column=Bjork_2015 to_column=name
v.db.addcolumn map=gates_final columns="n_pixels INT"
v.db.addcolumn map=gates_final_pts columns="n_pixels INT"
for G in $(db.select -c sql="select gate from gates_final"|sort -n|uniq); do
db.execute sql="UPDATE gates_final SET n_pixels=(SELECT SUM(area)/(200*200) FROM gates_final WHERE gate = ${G}) where gate = ${G}"
# now copy that to the average gate location (point) table
db.execute sql="UPDATE gates_final_pts SET n_pixels = (SELECT n_pixels FROM gates_final WHERE gate = ${G}) WHERE gate = ${G}"
done
db.dropcolumn -f table=gates_final column=area
# db.dropcolumn -f table=gates_final column=cat
mkdir -p out
db.select sql="SELECT gate,mean_x,mean_y,lon,lat,n_pixels,sector,region,Bjork_2015,Mouginot_2019 from gates_final_pts" separator=, | sort -n | uniq > ./out/gate_meta.csv
v.out.ogr input=gates_final output=./tmp/gates_final_${VELOCITY_CUTOFF}_${BUFFER_DIST}.kml format=KML --o
<<MSGS_pretty_print>>
<<GRASS_config>>
g.mapsets -l
r.mask -r
MAPSET=$(g.mapsets --q -l separator=newline| grep "gates_")
g.mapset ENVEO
g.region -d
r.mapcalc "MASK = if((gates_x@${MAPSET} == 1) | (gates_y@${MAPSET} == 1), 1, null())" --o
VX=$(g.list type=raster pattern=vx_????_??_?? | head -n1) # DEBUG
for VX in $(g.list type=raster pattern=vx_????_??_??); do
VY=${VX/vx/vy}
ERR=${VX/vx/err}
DATE=$(echo $VX | cut -d"_" -f2-)
echo $DATE
r.mapcalc "vel_eff_${DATE} = 365 * (if(gates_x@${MAPSET} == 1, if(isnull(${VX}), 0, abs(${VX}))) + if(gates_y@${MAPSET}, if(isnull(${VY}), 0, abs(${VY}))))"
r.mapcalc "err_eff_${DATE} = 365 * ${ERR} * (not(isnull(gates_x@${MAPSET})) || not(isnull(gates_y@${MAPSET})))"
r.null map=vel_eff_${DATE} null=0
r.null map=err_eff_${DATE} null=0
done
# fix return code of this script so make continues
MSG_OK "vel_eff DONE"
<<MSGS_pretty_print>>
<<GRASS_config>>
MSG_OK "Exporting..."
g.mapset PERMANENT
g.region -dp
MAPSET=$(g.mapsets --q -l separator="\n"| grep "gates_")
VEL_baseline="vel_baseline@ENVEO vx_baseline@ENVEO vy_baseline@ENVEO vel_err_baseline@ENVEO err_baseline@ENVEO"
VEL_ENVEO=$(g.list -m mapset=ENVEO type=raster pattern=vel_eff_????_??_?? separator=space)
ERR_ENVEO=$(g.list -m mapset=ENVEO type=raster pattern=err_eff_????_??_?? separator=space)
#VEL_Sentinel=$(g.list -m mapset=Sentinel1 type=raster pattern=vel_eff_????_??_?? separator=space)
#ERR_Sentinel=$(g.list -m mapset=Sentinel1 type=raster pattern=err_eff_????_??_?? separator=space)
THICK=$(g.list -m mapset=DEM type=raster pattern=DEM_???? separator=space)
#THICK=$(g.list -m mapset=SEC type=raster pattern=dh_???? separator=space)
# [email protected],[email protected] # ,[email protected]
LIST="lon lat err_2D gates_x@${MAPSET} gates_y@${MAPSET} gates_gateID@${MAPSET} sectors@Zwally_2012 sectors@Mouginot_2019 regions@Mouginot_2019 bed@BedMachine thickness@BedMachine surface@BedMachine ${THICK} ${VEL_baseline} ${VEL_ENVEO} errbed@BedMachine ${ERR_ENVEO}"
# ,${VEL_Sentinel},${ERR_Sentinel}
#r.mask gates_final@${MAPSET} --o
mkdir -p tmp/dat
r.mapcalc "MASK = if(gates_final@${MAPSET}) | if(mask_GIC@Mouginot_2019) | if(vel_err_baseline@ENVEO) | if(DEM_2020@DEM)" --o
parallel --bar "if [[ ! -e ./tmp/dat/{1}.bsv ]]; then (echo x\|y\|{1}; r.out.xyz input={1}) > ./tmp/dat/{1}.bsv; fi" ::: ${LIST}
r.mask -r
# test
# for v in $(echo $LIST | tr ',' '\n'); do n=$(r.univar $v|grep "^n:"); echo ${v}: ${n}; done
# combine individual files to one mega csv
cat ./tmp/dat/lat.bsv | cut -d"|" -f1,2 | datamash -t"|" transpose > ./tmp/dat_t.bsv
for f in ./tmp/dat/*; do
cat $f | cut -d"|" -f3 | datamash -t"|" transpose >> ./tmp/dat_t.bsv
done
cat ./tmp/dat_t.bsv |datamash -t"|" transpose | tr '|' ',' > ./tmp/dat.csv
rm ./tmp/dat_t.bsv
#date
#MSG_WARN "Exporting: $(echo $LIST|tr ',' '\n' |wc -l) columns"
#ulimit -n 2048
#time (echo x,y,${LIST}; r.out.xyz input=${LIST} separator=comma) > ./tmp/dat.csv
#r.mask -r
from uncertainties import unumpy
import pandas as pd
import numpy as np

df = pd.read_csv("./tmp/dat.csv")

err_sector = pd.DataFrame(columns=['D', 'E', 'E%'])
err_sector.index.name = 'Sector'
sectors = np.unique(df['sectors@Mouginot_2019'].values)
for s in sectors:
    sub = df[df['sectors@Mouginot_2019'] == s]
    thick = sub['thickness@BedMachine']
    vel = sub['vel_baseline@ENVEO']
    # discharge per 200 m pixel: width * thickness * speed * ice density
    # (917 kg m-3), converted from kg yr-1 to Gt yr-1 (1E12)
    D = 200 * thick * vel * 917 / 1E12
    err_thick = np.abs(sub['errbed@BedMachine'].values)
    e_th = 200 * err_thick * vel * 917 / 1E12  # thickness-error contribution
    err_sector.loc[s] = [np.sum(D), np.sum(e_th), np.round(np.sum(e_th), 10)/np.round(np.sum(D), 10)*100]
err_sector.loc['GIS'] = np.sum(err_sector, axis=0)
err_sector.loc['GIS', 'E%'] = err_sector.loc['GIS', 'E'] / err_sector.loc['GIS', 'D'] * 100
err_sector.to_csv('./tmp/err_sector_mouginot.csv')
err_sector.rename(columns={'D': 'D [Gt]',
                           'E': 'Error [Gt]',
                           'E%': 'Error [%]'}, inplace=True)
err_sector
Sector | D [Gt] | Error [Gt] | Error [%] |
---|---|---|---|
1 | 1.13636 | 0.115502 | 10.1642 |
2 | 0.908778 | 0.193401 | 21.2815 |
3 | 11.4881 | 0.824422 | 7.1763 |
4 | 2.91582 | 0.304107 | 10.4296 |
6 | 10.8265 | 0.919606 | 8.49401 |
7 | 0.89246 | 0.0744315 | 8.34004 |
8 | 0.648552 | 0.0401655 | 6.1931 |
9 | 11.2917 | 0.627721 | 5.55913 |
10 | 2.16995 | 0.123117 | 5.67372 |
14 | 2.48669 | 0.199724 | 8.03172 |
15 | 1.06562 | 0.294537 | 27.64 |
16 | 5.45103 | 0.440319 | 8.07772 |
19 | 0.270501 | 0.110674 | 40.9145 |
20 | 0.767355 | 0.0492437 | 6.41734 |
21 | 1.80726 | 0.0974649 | 5.39296 |
22 | 1.00093 | 0.0841373 | 8.40591 |
23 | 0.895347 | 0.0683989 | 7.63937 |
25 | 0.029346 | 0.00512194 | 17.4536 |
26 | 1.67654 | 0.0936123 | 5.58367 |
27 | 6.05449 | 0.273441 | 4.51633 |
28 | 0.805786 | 0.032577 | 4.04289 |
29 | 1.96217 | 0.116086 | 5.91621 |
30 | 1.83275 | 0.116705 | 6.36774 |
31 | 0.422771 | 0.0319242 | 7.55119 |
32 | 5.59584 | 0.344155 | 6.15018 |
33 | 6.38654 | 0.46026 | 7.20672 |
34 | 4.70521 | 0.364038 | 7.73692 |
35 | 7.01158 | 0.895814 | 12.7762 |
36 | 8.79645 | 0.644694 | 7.32903 |
37 | 8.2441 | 0.472339 | 5.72942 |
38 | 6.32872 | 0.440766 | 6.96455 |
41 | 2.23386 | 0.127442 | 5.705 |
42 | 0.803985 | 0.0512305 | 6.37207 |
43 | 3.68078 | 0.189515 | 5.14878 |
44 | 0.877821 | 0.0481893 | 5.48964 |
45 | 0.766472 | 0.0488558 | 6.37412 |
47 | 1.41008 | 0.135603 | 9.61666 |
48 | 2.50234 | 0.329722 | 13.1765 |
49 | 0.966754 | 0.0940565 | 9.7291 |
50 | 6.28814 | 0.2806 | 4.46237 |
53 | 3.90265 | 0.262703 | 6.73141 |
55 | 0.109432 | 0.0276098 | 25.2302 |
56 | 0.786596 | 0.154163 | 19.5988 |
58 | 11.4567 | 0.688932 | 6.01334 |
59 | 15.567 | 1.29056 | 8.29037 |
60 | 9.0555 | 0.608475 | 6.71939 |
61 | 2.24076 | 0.395304 | 17.6415 |
62 | 0.593675 | 0.0322815 | 5.43757 |
63 | 30.3913 | 2.99546 | 9.85631 |
64 | 5.03947 | 0.422238 | 8.37862 |
65 | 3.80811 | 0.0994075 | 2.61042 |
67 | 0.202655 | 0.703541 | 347.162 |
68 | 3.68474 | 0.0750308 | 2.03626 |
69 | 2.22766 | 0.171304 | 7.68986 |
70 | 0.650101 | 0.422143 | 64.9349 |
72 | 1.10276 | 0.0672873 | 6.10169 |
73 | 0.00110613 | 0.188115 | 17006.6 |
74 | 4.35112 | 0.399408 | 9.17943 |
75 | 0.145543 | 0.04469 | 30.7057 |
76 | 1.70487 | 0.239754 | 14.0629 |
78 | 2.55006 | 0.109305 | 4.28638 |
80 | 7.38191 | 0.550931 | 7.46325 |
81 | 6.02572 | 0.468044 | 7.76742 |
82 | 1.33453 | 0.122178 | 9.15508 |
83 | 4.15658 | 0.507123 | 12.2005 |
84 | 1.60693 | 0.176951 | 11.0117 |
86 | 6.56638 | 0.6114 | 9.31107 |
88 | 0.0161452 | 0.0435953 | 270.02 |
93 | 2.04422 | 0.118953 | 5.81902 |
94 | 1.18427 | 0.332161 | 28.0477 |
95 | 8.83575 | 0.289838 | 3.28029 |
96 | 5.06139 | 0.641972 | 12.6837 |
97 | 0.00532853 | 0.0315232 | 591.593 |
98 | 1.69358 | 0.297288 | 17.5538 |
99 | 6.75698e-05 | 0.00849267 | 12568.7 |
100 | 0.184706 | 0.121455 | 65.7562 |
102 | 23.1246 | 1.23201 | 5.3277 |
103 | 0.154002 | 0.0160936 | 10.4503 |
104 | 0.49088 | 0.0520854 | 10.6106 |
106 | 14.9256 | 0.466246 | 3.1238 |
107 | 3.84582 | 0.339018 | 8.81522 |
108 | 0.595502 | 0.165221 | 27.7449 |
109 | 0.0884075 | 0.042508 | 48.082 |
110 | 0.205879 | 0.0211271 | 10.2619 |
111 | 0.00582636 | 0.393715 | 6757.48 |
113 | 0.000794878 | 0.100536 | 12648 |
114 | 1.38749 | 0.120317 | 8.67155 |
115 | 0.897201 | 0.20931 | 23.3292 |
117 | 6.98255 | 0.525467 | 7.52543 |
118 | 0.000957382 | 0.166821 | 17424.7 |
120 | 0.412083 | 0.102889 | 24.9682 |
121 | 2.25015 | 0.512606 | 22.7809 |
122 | 0.0492951 | 0.488022 | 990.001 |
124 | 0.00105356 | 0.171178 | 16247.6 |
125 | 1.59346 | 0.0753527 | 4.72888 |
126 | 2.38866 | 0.209683 | 8.77827 |
127 | 9.53131 | 0.556236 | 5.83588 |
128 | 0.0029097 | 0.57392 | 19724.4 |
134 | 2.85382 | 1.4367 | 50.3433 |
135 | 0.0190592 | 0.0324244 | 170.125 |
136 | 0.00879623 | 0.102694 | 1167.48 |
138 | 0.0996933 | 0.0314422 | 31.539 |
139 | 0.2357 | 0.0701601 | 29.7666 |
140 | 1.04003 | 0.121997 | 11.7301 |
141 | 0.258956 | 0.0561728 | 21.6921 |
142 | 0.210307 | 0.0415359 | 19.7501 |
146 | 0.513598 | 0.063135 | 12.2927 |
147 | 2.90962 | 0.253602 | 8.71601 |
148 | 1.2819 | 0.0842873 | 6.57516 |
150 | 0.238113 | 0.133588 | 56.1028 |
151 | 0.215437 | 0.0542768 | 25.1938 |
152 | 0.171266 | 0.0699659 | 40.8523 |
153 | 0.280762 | 0.0780211 | 27.7891 |
154 | 0.716738 | 0.10592 | 14.7781 |
156 | 0.00128292 | 0.22432 | 17485.1 |
157 | 0.00904871 | 0.0105535 | 116.63 |
158 | 0.321452 | 0.0805232 | 25.0498 |
164 | 0.015171 | 0.041483 | 273.437 |
167 | 9.52965e-05 | 0.0136362 | 14309.2 |
172 | 0.000256998 | 0.0366889 | 14276 |
174 | 0.00019268 | 0.0113515 | 5891.37 |
183 | 0.179814 | 0.037001 | 20.5774 |
185 | 0.398766 | 0.0630249 | 15.805 |
189 | 6.042 | 0.308196 | 5.1009 |
190 | 4.15765 | 2.0767 | 49.9487 |
192 | 4.9286 | 0.459816 | 9.32955 |
193 | 1.84566 | 0.0888249 | 4.81264 |
195 | 0.218933 | 0.0297228 | 13.5762 |
197 | 0.60224 | 0.0508668 | 8.44628 |
199 | 0.404987 | 0.49271 | 121.661 |
203 | 0.79559 | 0.105298 | 13.2353 |
204 | 0.101074 | 0.019955 | 19.743 |
207 | 6.56012 | 0.468184 | 7.13683 |
208 | 3.20135 | 0.349065 | 10.9037 |
209 | 0.78797 | 0.141833 | 17.9998 |
210 | 0.571084 | 0.0650242 | 11.3861 |
211 | 3.38362 | 0.110959 | 3.2793 |
212 | 4.54297 | 0.190982 | 4.20389 |
213 | 4.63417 | 0.506259 | 10.9245 |
214 | 8.39299 | 0.467779 | 5.57345 |
215 | 0.0521331 | 0.0802041 | 153.845 |
216 | 0.66763 | 0.0773952 | 11.5925 |
218 | 30.7332 | 2.56227 | 8.33713 |
219 | 0.403487 | 0.0707834 | 17.5429 |
222 | 0.0898964 | 0.0201874 | 22.4563 |
223 | 11.3835 | 0.318286 | 2.79603 |
224 | 0.465393 | 0.17042 | 36.6184 |
231 | 0.930879 | 0.110524 | 11.8731 |
232 | 0.000300861 | 0.0524098 | 17419.9 |
233 | 4.20433 | 0.28694 | 6.82485 |
234 | 0.0192938 | 0.00886986 | 45.9727 |
237 | 0.356911 | 0.227127 | 63.6368 |
239 | 0.16309 | 0.0146806 | 9.00151 |
240 | 0.0723574 | 0.00594069 | 8.2102 |
243 | 1.36437 | 0.199452 | 14.6186 |
245 | 0.188843 | 0.0451201 | 23.8929 |
248 | 8.81223 | 0.406155 | 4.60899 |
249 | 1.42356 | 0.132901 | 9.33582 |
251 | 0.0292431 | 0.0137256 | 46.9361 |
254 | 0.581071 | 0.0951835 | 16.3807 |
255 | 1.05098 | 0.115429 | 10.9829 |
257 | 7.83737e-05 | 0.0120324 | 15352.6 |
258 | 0.000377353 | 0.060194 | 15951.6 |
GIS | 476.153 | 44.6966 | 9.38702 |
from uncertainties import unumpy
import pandas as pd
import numpy as np

df = pd.read_csv("./tmp/dat.csv")

err_sector = pd.DataFrame(columns=['D', 'E', 'E%'])
err_sector.index.name = 'Sector'
sectors = np.unique(df['regions@Mouginot_2019'].values)
for s in sectors:
    sub = df[df['regions@Mouginot_2019'] == s]
    thick = sub['thickness@BedMachine']
    # gate-normal speed: |vx| on x-gates plus |vy| on y-gates
    vel = np.abs(sub['vx_baseline@ENVEO'])*sub['gates_x@gates_150_10000'] + np.abs(sub['vy_baseline@ENVEO'])*sub['gates_y@gates_150_10000']
    D = 200 * thick * vel * 917 / 1E12
    err_thick = np.abs(sub['errbed@BedMachine'].values)
    # err_thick[np.where(err_thick < 50)] = 50 # IS THIS REASONABLE? IMPORTANT?
    e_th = 200 * err_thick * vel * 917 / 1E12
    err_sector.loc[s] = [np.sum(D), np.sum(e_th), np.round(np.sum(e_th), 10)/np.round(np.sum(D), 10)*100]
err_sector.loc['GIS'] = np.sum(err_sector, axis=0)
err_sector.loc['GIS', 'E%'] = err_sector.loc['GIS', 'E'] / err_sector.loc['GIS', 'D'] * 100
err_sector.to_csv('./tmp/err_region_mouginot.csv')
err_sector.rename(columns={'D': 'D [Gt]',
                           'E': 'Error [Gt]',
                           'E%': 'Error [%]'}, inplace=True)
err_sector
Sector | D [Gt] | Error [Gt] | Error [%] |
---|---|---|---|
1 | 0.0732849 | 0.00470999 | 6.42696 |
2 | 0.0823817 | 0.00651088 | 7.9033 |
3 | 0.210029 | 0.0190811 | 9.08501 |
4 | 0.370091 | 0.0407197 | 11.0026 |
5 | 0.0546231 | 0.00890458 | 16.3018 |
6 | 0.214758 | 0.0164815 | 7.67446 |
7 | 0.316332 | 0.0236688 | 7.48227 |
GIS | 1.3215 | 0.120077 | 9.08639 |
from uncertainties import unumpy
import pandas as pd
import numpy as np

df = pd.read_csv("./tmp/dat.csv")

err_gate = pd.DataFrame(columns=['D', 'E', 'E%'])
err_gate.index.name = 'Gate'
gates = np.unique(df['gates_gateID@gates_150_10000'].values)
for g in gates:
    sub = df[df['gates_gateID@gates_150_10000'] == g]
    thick = sub['thickness@BedMachine']
    vel = np.abs(sub['vx_baseline@ENVEO'])*sub['gates_x@gates_150_10000'] + np.abs(sub['vy_baseline@ENVEO'])*sub['gates_y@gates_150_10000']
    D = 200 * thick * vel * 917 / 1E12
    err_thick = np.abs(sub['errbed@BedMachine'].values)
    # err_thick[np.where(err_thick < 50)] = 50 # IS THIS REASONABLE? IMPORTANT?
    e_th = 200 * err_thick * vel * 917 / 1E12
    err_gate.loc[g] = [np.sum(D), np.sum(e_th), np.sum(e_th)/np.sum(D)*100]
err_gate.loc['GIS'] = np.sum(err_gate, axis=0)
err_gate.loc['GIS', 'E%'] = err_gate.loc['GIS', 'E'] / err_gate.loc['GIS', 'D'] * 100

# attach the nearest Mouginot (2019) name to each gate
gate_meta = pd.read_csv("./out/gate_meta.csv")
err_gate['name'] = ''
for g in err_gate.index.values:
    if g == 'GIS': continue
    if sum(gate_meta.gate == g) == 0: continue
    err_gate.loc[g, 'name'] = gate_meta[gate_meta.gate == g].Mouginot_2019.values[0]

err_gate.to_csv('./tmp/err_gate.csv')
err_gate.rename(columns={'D': 'D [Gt]',
                         'E': 'Error [Gt]',
                         'E%': 'Error [%]'}, inplace=True)
err_gate
- What columns are in the file?
- Don’t show all the “vel_eff_YYYY_MM_DD” and “err_eff_YYYY_MM_DD” columns.
head -n1 ./tmp/dat.csv | tr ',' '\n' | grep -v "vel_eff_*" | grep -v "err_eff_*" | grep -v "dh_*" | sort | uniq | tr '\n' '\t'
echo "also: dh_YYYY@elev, vel_eff_YYYY_MM_DD@various, etc."
Here we perform a few thickness adjustments:
First, patch in Millan (2018) where it exists.
- 300 :: All ice <= 20 m thick is assumed bad and set to the minimum “good” thickness in the gate if one exists, or 300 m if it does not
- 400 :: All ice <= 50 m thick is set to 400 m thick
- fit :: All ice <= 20 m thick is fit to the log10(thickness) vs. log10(velocity) relationship, even though it is not a good fit
For testing, gate clumps 9 (all bad) and 546 (some bad)
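A hedged pandas sketch of the "300" rule, using hypothetical gate IDs and made-up thicknesses (the real gate and thickness columns live in ./tmp/dat.csv):

import pandas as pd

# hypothetical gate IDs and thicknesses; column names are made up
df = pd.DataFrame({'gate':  [9, 9, 546, 546, 546],
                   'thick': [5.0, 10.0, 15.0, 250.0, 400.0]})

def adjust(thick):
    good = thick[thick > 20]                   # "good" pixels in this gate
    fill = good.min() if len(good) else 300.0  # fallback when all are bad
    return thick.where(thick > 20, fill)

df['thick_300'] = df.groupby('gate')['thick'].transform(adjust)
print(df)  # gate 9 -> 300 m everywhere; the bad pixel of gate 546 -> 250 m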
Here we calculate:
- D_baseline_th_noadj :: Discharge with no thickness adjustment
- D_baseline_th_300 :: The baseline discharge
- D_baseline_th_400 :: The discharge assuming the aggressive thickness adjustment
- D_baseline_th_fit :: The discharge assuming the fitted thickness adjustment
- D_baseline :: The baseline discharge, picked from our favorite of the above. TBD
- Each pixel has a DEM at YYYY. Assume 08-01 (DOY ~213)
  - Assume DEM_2020 is the baseline
  - Linearly interpolate between annual values
  - Take the derivative to get dh/dt
And more importantly and long-term, we calculate the following time series discharge products, using our preferred method (fill w/ 300 m):
- D :: Discharge at gate scale
- D_err :: The discharge error at gate scale
- D_fill :: The fill percentage for each gate at each point in time
- D_sector, D_sector_err, D_sector_fill :: Same, but at Mouginot 2019 sector scale
- D_region, D_region_err, D_region_fill :: Same, but at Mouginot 2019 region scale
- D_all, D_all_err, D_all_fill :: Same, but for all of the GIS
import pandas as pd
import numpy as np

# Toy example of the coverage ("fill") weighting: two gates (A, B) at
# three timestamps; gate B is not observed at t3.
filled_D = pd.DataFrame(index=['A', 'B'], columns=['t1', 't3', 't4'], data=[[8, 9, 7], [4, 2, 1]])
fill = filled_D/filled_D          # 1 where observed
fill.loc['B', 't3'] = np.nan      # mark the missing observation
no_filled_D = filled_D * fill
# filled_weighted_D = filled_D / filled_D.sum()
no_filled_weighted_D = no_filled_D / filled_D.sum()
# coverage per timestamp: fraction of total discharge actually observed
r = ((filled_D*fill)/filled_D.sum()).sum()
r.round(2)
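To make the gate-to-sector roll-up concrete, here is a hedged pandas sketch with hypothetical columns; errors are simply summed here for brevity, and the actual error treatment in this workbook may differ:

import pandas as pd

# hypothetical gate table: discharge D [Gt/yr] and fill in [0, 1]
D = pd.DataFrame({'gate':   [1, 2, 3],
                  'sector': ['A', 'A', 'B'],
                  'D':      [10.0, 5.0, 2.0],
                  'fill':   [1.0, 0.5, 1.0]})

D_sector = D.groupby('sector')['D'].sum()
# coverage: discharge-weighted fill within each sector
cov_sector = (D['D'] * D['fill']).groupby(D['sector']).sum() / D_sector
print(D_sector.values)             # [15.  2.]
print(cov_sector.round(3).values)  # [0.833 1.   ]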
%store D
%store D_err
%store fill
%store D_gates
%store D_gates_err
%store D_gates_fill_weight
%store D_sectors
%store D_sectors_err
%store D_sectors_fill_weight
%store D_regions
%store D_regions_err
%store D_regions_fill_weight
%store D_all
%store D_all_err
%store D_all_fill_weight
%store meta
%store -r
D = D.T['2000':].T
D_err = D_err.T['2000':].T
fill = fill.T['2000':].T
D_gates = D_gates.T['2000':].T
D_gates_err = D_gates_err.T['2000':].T
D_gates_fill_weight = D_gates_fill_weight.T['2000':].T
D_sectors = D_sectors.T['2000':].T
D_sectors_err = D_sectors_err.T['2000':].T
D_sectors_fill_weight = D_sectors_fill_weight.T['2000':].T
D_regions = D_regions.T['2000':].T
D_regions_err = D_regions_err.T['2000':].T
D_regions_fill_weight = D_regions_fill_weight.T['2000':].T
D_all = D_all.T['2000':].T
D_all_err = D_all_err.T['2000':].T
D_all_fill_weight = D_all_fill_weight.T['2000':].T
STARTDATE=pd.to_datetime('2014-10-01')
D_all = D_all.T[STARTDATE:].T
D_all_err = D_all_err.T[STARTDATE:].T
D_all_fill_weight = D_all_fill_weight.T[STARTDATE:].T
D_gates = D_gates.T[STARTDATE:].T
D_gates_err = D_gates_err.T[STARTDATE:].T
D_gates_fill_weight = D_gates_fill_weight.T[STARTDATE:].T
D_sectors = D_sectors.T[STARTDATE:].T
D_sectors_err = D_sectors_err.T[STARTDATE:].T
D_sectors_fill_weight = D_sectors_fill_weight.T[STARTDATE:].T
D_regions = D_regions.T[STARTDATE:].T
D_regions_err = D_regions_err.T[STARTDATE:].T
D_regions_fill_weight = D_regions_fill_weight.T[STARTDATE:].T
README for "Greenland Ice Sheet solid ice discharge from 1986 through 2018"
Paper Citation: TODO
Original Paper: doi:10.5194/essd-11-769-2019
Data Citation: TODO
Original Data Citations: doi:10.22008/promice/data/ice_discharge
Source: https://github.com/mankoff/ice_discharge
* Usage instructions:
When using any of the following data, you are required to cite the paper and the data set.
* Data Descriptions
Data sets released as part of this work include:
+ Discharge data
+ Gates
+ Surface Elevation Change
+ Code
Each is described briefly below.
** Discharge Data
This data set is made up of the following files
| Filename | Description |
|---------------------+-------------------------------------------------------|
| GIS_D.csv | Greenland Ice Sheet cumulative discharge by timestamp |
| GIS_err.csv | Errors for GIS_D.csv |
| GIS_coverage.csv | Coverage for GIS_D.csv |
| region_D.csv | Regional discharge |
| region_err.csv | Errors for region_D.csv |
| region_coverage.csv | Coverage for region_D.csv |
| sector_D.csv | Sector discharge |
| sector_err.csv | Errors for sector_D.csv |
| sector_coverage.csv | Coverage for sector_D.csv |
| gate_D.csv | Gate discharge |
| gate_err.csv | Errors for gate_D.csv |
| gate_coverage.csv | Coverage for gate_D.csv |
|---------------------+-------------------------------------------------------|
| gate_meta.csv | Metadata for each gate |
D and err data have units [Gt yr-1].
Coverage is in the range [0, 1].
** Gates
| Filename | Description |
|------------+-----------------------------------------------|
| gates.kml | KML file of gate location and metadata |
| gates.gpkg | GeoPackage file of gate location and metadata |
** Surface Elevation Change
The "surface_elevation_change" file set contains the surface elevation change data used in this work (DOI 10.22008/promice/data/DTU/surface_elevation_change/v1.0.0)
** Code
The "code" file set (DOI 10.22008/promice/data/ice_discharge/code/v0.0.1) contains the digital workbook that produced the data, the ESSD document text and figures, this README, and everything else associated with this work.
D_gatesT = D_gates.T
D_gates_errT = D_gates_err.T
D_gates_fill_weightT = D_gates_fill_weight.T
D_gatesT.index.name = "Date"
D_gates_errT.index.name = "Date"
D_gates_fill_weightT.index.name = "Date"
D_gatesT = D_gatesT.replace(to_replace=0, value=np.nan).dropna(axis='rows', how='all')
D_gates_errT = D_gates_errT.loc[D_gatesT.index]
D_gates_fill_weightT = D_gates_fill_weightT.loc[D_gatesT.index]
D_gatesT.to_csv('./out/gate_D.csv')
D_gates_errT.to_csv('./out/gate_err.csv')
D_gates_fill_weightT.to_csv('./out/gate_coverage.csv')
# meta_sector = pd.DataFrame(index=meta.groupby('sectors').first().index)
# meta_sector['mean x'] = meta.groupby('sectors').mean()['x'].round().astype(np.int)
# meta_sector['mean y'] = meta.groupby('sectors').mean()['y'].round().astype(np.int)
# meta_sector['n gates'] = meta.groupby('sectors').count()['gates'].round().astype(np.int)
# meta_sector['region'] = meta.groupby('sectors').first()['regions']
D_sectorsT = D_sectors.T
D_sectors_errT = D_sectors_err.T
D_sectors_fill_weightT = D_sectors_fill_weight.T
D_sectorsT.index.name = "Date"
D_sectors_errT.index.name = "Date"
D_sectors_fill_weightT.index.name = "Date"
D_sectorsT = D_sectorsT.replace(to_replace=0, value=np.nan).dropna(axis='rows', how='all')
D_sectors_errT = D_sectors_errT.loc[D_sectorsT.index]
D_sectors_fill_weightT = D_sectors_fill_weightT.loc[D_sectorsT.index]
# meta_sector.to_csv('./out/sector_meta.csv')
D_sectorsT.to_csv('./out/sector_D.csv')
D_sectors_errT.to_csv('./out/sector_err.csv')
D_sectors_fill_weightT.to_csv('./out/sector_coverage.csv')
# meta_sector.head(10)
# meta_region = pd.DataFrame(index=meta.groupby('regions').first().index)
# meta_region['n gates'] = meta.groupby('regions').count()['gates'].round().astype(np.int)
D_regionsT = D_regions.T
D_regions_errT = D_regions_err.T
D_regions_fill_weightT = D_regions_fill_weight.T
D_regionsT.index.name = "Date"
D_regions_errT.index.name = "Date"
D_regions_fill_weightT.index.name = "Date"
D_regionsT = D_regionsT.replace(to_replace=0, value=np.nan).dropna(axis='rows', how='all')
D_regions_errT = D_regions_errT.loc[D_regionsT.index]
D_regions_fill_weightT = D_regions_fill_weightT.loc[D_regionsT.index]
# meta_region.to_csv('./out/region_meta.csv')
D_regionsT.to_csv('./out/region_D.csv')
D_regions_errT.to_csv('./out/region_err.csv')
D_regions_fill_weightT.to_csv('./out/region_coverage.csv')
# meta_region.head(10)
D_all.index.name = "Date"
D_all_err.index.name = "Date"
D_all_fill_weight.index.name = "Date"
D_all = D_all.replace(to_replace=0, value=np.nan).dropna(axis='rows', how='all')
D_all_err = D_all_err.loc[D_all.index]
D_all_fill_weight = D_all_fill_weight.loc[D_all.index]
D_all.to_csv('./out/GIS_D.csv', float_format='%.5f', header=["Discharge [Gt yr-1]"])
D_all_err.to_csv('./out/GIS_err.csv', float_format='%.5f', header=["Discharge Error [Gt yr-1]"])
D_all_fill_weight.to_csv('./out/GIS_coverage.csv', float_format='%.5f', header=["Coverage [unit]"])
<<MSGS_pretty_print>>
<<GRASS_config>>
g.mapset gates_150_10000
v.out.ogr input=gates_final output=./out/gates.kml format=KML --o
v.out.ogr input=gates_final output=./out/gates.gpkg format=GPKG --o
Done manually. See DOI
Make sure this Org file is tidy enough…
(cd out; zip -e /media/kdm/promicedata/ice_discharge/gates/gates.zip gates*)
(cd out; zip -e /media/kdm/promicedata/ice_discharge/d/D.zip D*csv)
cp ./out/README.txt /media/kdm/promicedata/ice_discharge/
zip -e /media/kdm/promicedata/ice_discharge/code/mankoff_et_al.zip ice_discharge.org
cp ${DATADIR}/Khan_2016/dhdt_1995-2015_GrIS.txt /media/kdm/promicedata/ice_discharge/surface_elevation_change
import matplotlib.gridspec as gridspec
import numpy as np
import pandas as pd
from adjust_spines import adjust_spines as adj
import matplotlib.pyplot as plt
import datetime as dt
# plt.close(1)
fig = plt.figure(1, figsize=(9,5)) # w,h
fig.clf()
ax_D = fig.add_subplot(111)
adj(ax_D, ['left','bottom'])
D = pd.read_csv("./out/GIS_D.csv", index_col=0, parse_dates=True)
err = pd.read_csv("./out/GIS_err.csv", index_col=0, parse_dates=True)
coverage = pd.read_csv("./out/GIS_coverage.csv", index_col=0, parse_dates=True)
ROOT="/data/dataverse_data/"
D_M2019 = pd.read_csv(ROOT+"/GIS_D.csv", index_col=0, parse_dates=True)
err_M2019 = pd.read_csv(ROOT+"/GIS_err.csv", index_col=0, parse_dates=True)
D_M2019 = D_M2019[(D_M2019.index > D.index[0]) & (D_M2019.index <= D.index[-1])]
err_M2019 = err_M2019[(err_M2019.index > err.index[0]) & (err_M2019.index <= err.index[-1])]
# | Color | R | G | B | hex |
# |-------------+-----+-----+-----+---------|
# | light blue | 166 | 206 | 227 | #a6cee3 |
# | dark blue | 31 | 120 | 180 | #1f78b4 |
# | light green | 178 | 223 | 138 | #b2df8a |
# | dark green | 51 | 160 | 44 | #33a02c |
# | pink | 251 | 154 | 153 | #fb9a99 |
# | red | 227 | 26 | 28 | #e31a1c |
# | pale orange | 253 | 191 | 111 | #fdbf6f |
# | orange | 255 | 127 | 0 | #ff7f00 |
C1="#e31a1c" # red
C2="#1f78b4" # dark blue
MS=4
D_M2019.plot(ax=ax_D, marker='.', color=C2, label='')
D.plot(ax=ax_D, drawstyle='steps', color=C1, label='')
ax_D.fill_between(err.index,
(D.values-err.values).flatten(),
(D.values+err.values).flatten(),
color=C1, alpha=0.25, label='')
ax_D.fill_between(err_M2019.index,
(D_M2019.values-err_M2019.values).flatten(),
(D_M2019.values+err_M2019.values).flatten(),
color=C2, alpha=0.25, label='')
ax_D.legend(["Mankoff (2019)", "GEUS CCI"], framealpha=0)
ax_D.set_xlabel('Time [Years]')
ax_D.set_ylabel('Discharge [Gt yr$^{-1}$]')
import matplotlib.dates as mdates
ax_D.xaxis.set_major_locator(mdates.YearLocator())
ax_D.minorticks_off()
# ax_D.xaxis.set_tick_params(rotation=-90) #, ha="left", rotation_mode="anchor")
# for tick in ax_D.xaxis.get_majorticklabels():
# tick.set_horizontalalignment("left")
plt.savefig('./figs/discharge_ts.png', transparent=False, bbox_inches='tight', dpi=300)
# plt.savefig('./figs/discharge_ts.pdf', bbox_inches='tight', dpi=300)
# disp = pd.DataFrame(data = {'D':D_day_year.values.flatten(), 'err':err_day_year.values.flatten()},
# index = D_day_year.index.year)
# disp.index.name = 'Year'
# disp
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# plt.close(1)
fig = plt.figure(1, figsize=(9,5)) # w,h
fig.clf()
# fig.set_tight_layout(True)
ax_D = fig.add_subplot(111)
from adjust_spines import adjust_spines as adj
adj(ax_D, ['left','bottom'])
D = pd.read_csv("./out/sector_D.csv", index_col=0, parse_dates=True)
err = pd.read_csv("./out/sector_err.csv", index_col=0, parse_dates=True)
D[['8.1','7.1']].plot(ax=ax_D, linewidth=3)
ax_D.fill_between(D.index,
D['8.1']-err['8.1'],
D['8.1']+err['8.1'], alpha=0.25)
ax_D.fill_between(D.index,
D['7.1']-err['7.1'],
D['7.1']+err['7.1'], alpha=0.25)
ax_D.set_ylim([0,120])
ax_D.set_ylabel("Discharge [Gt yr$^{-1}$]")
ax_D.set_xlabel("Date")
plt.savefig('./figs/discharge_ts_sectors.png', transparent=False, bbox_inches='tight', dpi=300)
fig1 = plt.figure(1, figsize=(9,5)) # w,h
fig1.clf()
ax_D1 = fig1.add_subplot(111)
from adjust_spines import adjust_spines as adj
adj(ax_D1, ['left','bottom'])
D = pd.read_csv("./out/region_D.csv", index_col=0, parse_dates=True)
err = pd.read_csv("./out/region_err.csv", index_col=0, parse_dates=True)
D[['CE','CW','NE','NO','NW','SE','SW']].plot(ax=ax_D1, linewidth=3)
ax_D1.fill_between(D.index,
D['CE']-err['CE'],
D['CE']+err['CE'], alpha=0.25)
ax_D1.fill_between(D.index,
D['CW']-err['CW'],
D['CW']+err['CW'], alpha=0.25)
ax_D1.fill_between(D.index,
D['NE']-err['NE'],
D['NE']+err['NE'], alpha=0.25)
ax_D1.fill_between(D.index,
D['NO']-err['NO'],
D['NO']+err['NO'], alpha=0.25)
ax_D1.fill_between(D.index,
D['NW']-err['NW'],
D['NW']+err['NW'], alpha=0.25)
ax_D1.fill_between(D.index,
D['SE']-err['SE'],
D['SE']+err['SE'], alpha=0.25)
ax_D1.fill_between(D.index,
D['SW']-err['SW'],
D['SW']+err['SW'], alpha=0.25)
ax_D1.legend(loc=3)
ax_D1.set_ylim([0,170])
ax_D1.set_ylabel("Discharge [Gt yr$^{-1}$]")
ax_D1.set_xlabel("Date")
plt.savefig('./figs/discharge_ts_regions.png', transparent=False, bbox_inches='tight', dpi=300)
# # largest average for last year
# D_sort = D.resample('Y', axis='rows')\
# .mean()\
# .iloc[-1]\
# .sort_values(ascending=False)
# LABELS={}
# # for k in D_sort.head(8).index: LABELS[k] = k
# # Use the last ^ glaciers
# # Make legend pretty
# LABELS['JAKOBSHAVN_ISBRAE'] = 'Sermeq Kujalleq (Jakobshavn Isbræ)'
# LABELS['HELHEIMGLETSCHER'] = 'Helheim Gletsjer'
# LABELS['KANGERLUSSUAQ'] = 'Kangerlussuaq Gletsjer'
# LABELS['KOGE_BUGT_C'] = '(Køge Bugt C)'
# LABELS['ZACHARIAE_ISSTROM'] = 'Zachariae Isstrøm'
# LABELS['RINK_ISBRAE'] = 'Kangilliup Sermia (Rink Isbræ)'
# LABELS['NIOGHALVFJERDSFJORDEN'] = '(Nioghalvfjerdsbrae)'
# LABELS['PETERMANN_GLETSCHER'] ='Petermann Gletsjer'
# SYMBOLS={}
# SYMBOLS['JAKOBSHAVN_ISBRAE'] = 'o'
# SYMBOLS['HELHEIMGLETSCHER'] = 's'
# SYMBOLS['KANGERLUSSUAQ'] = 'v'
# SYMBOLS['KOGE_BUGT_C'] = '^'
# SYMBOLS['NIOGHALVFJERDSFJORDEN'] = 'v'
# SYMBOLS['RINK_ISBRAE'] = 's'
# SYMBOLS['ZACHARIAE_ISSTROM'] = 'o'
# SYMBOLS['PETERMANN_GLETSCHER'] ='^'
# MS=4
# Z=99
# for g in LABELS.keys(): # for each glacier
# e = ax_D.errorbar(D.loc[:,g].index,
# D.loc[:,g].values,
# label=LABELS[g],
# fmt=SYMBOLS[g],
# mfc='none',
# ms=MS)
# C = e.lines[0].get_color()
# D_day_year.loc[:,g].plot(drawstyle='steps', linewidth=2,
# label='',
# ax=ax_D,
# alpha=0.75, color=C, zorder=Z)
# for i,idx in enumerate(D.loc[:,g].index):
# ax_D.errorbar(D.loc[:,g].index[i],
# D.loc[:,g].values[i],
# yerr=err.loc[:,g].values[i],
# alpha=coverage.loc[:,g].values[i],
# label='',
# ecolor='grey',
# mfc=C, mec=C,
# marker='o', ms=MS)
# if g in ['NIOGHALVFJERDSFJORDEN', 'KANGERLUSSUAQ']: #, 'JAKOBSHAVN_ISBRAE']:
# ax_coverage.plot(D.loc[:,g].index,
# coverage.loc[:,g].values*100,
# drawstyle='steps',
# # alpha=0.5,
# color=C)
# # yl = ax_D.get_ylim()
# ax_D.legend(fontsize=8, ncol=2, loc=(0.0, 0.82), fancybox=False, frameon=False)
# ax_D.set_xlabel('Time [Years]')
# ax_D.set_ylabel('Discharge [Gt yr$^{-1}$]')
# import matplotlib.dates as mdates
# ax_D.xaxis.set_major_locator(mdates.YearLocator())
# ax_D.xaxis.set_tick_params(rotation=-90)
# for tick in ax_D.xaxis.get_majorticklabels():
# tick.set_horizontalalignment("left")
# ax_coverage.set_ylabel('Coverage [%]')
# ax_coverage.set_ylim([0,100])
#plt.savefig('./figs/discharge_ts_topfew.svg', transparent=True, bbox_inches='tight', dpi=300)
# plt.savefig('./figs/discharge_ts_topfew.pdf', transparent=True, bbox_inches='tight', dpi=300)
# Err_pct = (err_day_year / D_day_year*100).round().astype(np.int).astype(np.str)
# Err_pct = Err_pct[list(LABELS.keys())]
# tbl = D_day_year[list(LABELS.keys())].round().astype(np.int).astype(np.str) + ' (' + Err_pct+')'
# tbl.index = tbl.index.year.astype(np.str)
# tbl.columns = [_ + ' (%)' for _ in tbl.columns]
# tbl
It is the optical IV generated by S&T. I believe it follows all the guidelines listed below.
GUIDELINES
- The *name of the zip file and the name of the top-level folder inside the zip* should follow the logic of old products. Please download the previous version of your product from the CCI website to understand what this means. For new products, try to follow the structure of the filename of the example provided in this email.
- A quicklook image in .png or .jpg format. Size cannot exceed 1200x1200. This should be contained in the above zip file.
- A short text file called *description.txt* with a brief description of the product. The description *cannot exceed 100 characters*. This should also be contained in the above zip file.
- A SINGLE text file called README or comments.txt, with comments on the preparation and content of the product. You are encouraged to include paragraphs “Citable as:” and “References:” if this applies to your product. This should also be contained in the above zip file.
- File permissions. Inside the zip, files should be 644 (writable by the owner, readable by everyone, not executable, represented as “-rw-r--r--”), and directory permissions should be 755 (writable by the owner, readable and executable by everyone, represented as “drwxr-xr-x”). On a Unix/MacOS/Linux system you list permissions with the command “ls -l”, and correct them with “chmod 644 {filename}” or “chmod 755 {directory}”.
- There should not be any spare or hidden files or directories included in the zip (e.g. the hidden __MACOSX folder you often get when creating a zipped folder on MacOS).
mkdir -p MFID
Mass flow rate ice discharge (MFID) for Greenland from CCI IV, CCI SEC, and BedMachine
Mass flow rate ice discharge (MFID) for Greenland ice sheet sectors.
This data set is part of the ESA Greenland Ice Sheet CCI project.
It provides the following CSV files.
1. Mass flow rate ice discharge. Units are Gt yr^{-1}.
2. Mass flow rate ice discharge uncertainty. Units are Gt yr^{-1}.
3. Coverage for each sector at each timestamp. Unitless [0 to 1].
Ice discharge is calculated from the CCI Ice Velocity (IV) product, the CCI Surface Elevation Change (SEC) product (where it overlaps with the ice discharge gates), and ice thickness from BedMachine. Ice discharge gates are placed 10 km upstream from all marine terminating glacier termini that have baseline velocities of more than 150 m/yr. Results are summed by Zwally et al. (2012) sectors.
The methods, including description of "coverage", are described in Mankoff /et al/. (2020; DOI: 10.5194/essd-12-1367-2020)
This document uses code - python, octave, and/or R. Below I provide the version of the software(s) used to create this document in order to support the goal of reproducibility.
(emacs-version)
The algorithm and data product introduced here are closely related to the algorithm and data product introduced by citet:mankoff_2019_ice. When comparing annual averages over all of Greenland, this product and the product released by citet:mankoff_2019_ice agree well.
| Date       |       this |  this_err |      m2019 | m2019_err |       diff |    diff % |
|------------+------------+-----------+------------+-----------+------------+-----------|
| 2014-01-01 | 463.344333 | 36.147667 | 485.920160 | 45.105618 | -22.575827 | -4.872365 |
| 2015-01-01 | 471.203083 | 36.845167 | 487.331293 | 45.268997 | -16.128210 | -3.422773 |
| 2016-01-01 | 474.222250 | 37.267583 | 480.022531 | 44.705177 |  -5.800281 | -1.223114 |
| 2017-01-01 | 488.726333 | 38.773583 | 491.535761 | 45.785530 |  -2.809428 | -0.574847 |
| 2018-01-01 | 486.879750 | 38.622667 | 491.077612 | 45.832890 |  -4.197862 | -0.862197 |
| 2019-01-01 | 489.644083 | 38.961333 | 495.909378 | 46.209912 |  -6.265295 | -1.279561 |
| 2020-01-01 | 494.285250 | 39.432250 | 505.964553 | 47.229431 | -11.679303 | -2.362867 |
| 2021-01-01 | 496.105500 | 39.702250 | 508.115890 | 47.590573 | -12.010390 | -2.420935 |
| 2022-01-01 | 463.078333 | 36.998583 | 504.059857 | 47.232895 | -40.981524 | -8.849804 |
| 2023-01-01 | 467.146833 | 37.267667 | 492.099384 | 46.327375 | -24.952551 | -5.341479 |
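A hedged sketch of how such a comparison table can be produced, assuming the CSV layout written by this workbook and a local copy of the Mankoff (2019) product at a made-up path:

import pandas as pd

this = pd.read_csv('./out/GIS_D.csv', index_col=0, parse_dates=True).squeeze()
m2019 = pd.read_csv('/path/to/dataverse/GIS_D.csv', index_col=0,
                    parse_dates=True).squeeze()  # hypothetical local path

# annual means of both series, differenced
cmp = pd.DataFrame({'this': this.resample('YS').mean(),
                    'm2019': m2019.resample('YS').mean()}).dropna()
cmp['diff'] = cmp['this'] - cmp['m2019']
cmp['diff %'] = 100 * cmp['diff'] / cmp['m2019']
print(cmp.round(2))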
Comparing by region is difficult because this product is binned on the citet:zwally_2012_sectors sectors while citet:mankoff_2019_ice uses the citet:mouginot_2019_glacier regions and sectors.
import matplotlib.gridspec as gridspec
import numpy as np
import pandas as pd
from adjust_spines import adjust_spines as adj
import datetime as dt
import matplotlib.pyplot as plt
# plt.close(1)
fig = plt.figure(1, figsize=(9,7)) # w,h
fig.clf()
# fig.set_tight_layout(True)
grid = plt.GridSpec(2, 1, height_ratios=[1,6], hspace=0.1) # h, w
ax_D = fig.add_subplot(111)
from adjust_spines import adjust_spines as adj
adj(ax_D, ['left','bottom'])
D = pd.read_csv("./out/region_D.csv", index_col=0, parse_dates=True)
err = pd.read_csv("./out/region_err.csv", index_col=0, parse_dates=True)
ROOT="/home/shl/mdrev/projects/promice/solid_ice_discharge/dataverse_data"
D_M2019 = pd.read_csv(ROOT+"/region_D.csv", index_col=0, parse_dates=True)
err_M2019 = pd.read_csv(ROOT+"/region_err.csv", index_col=0, parse_dates=True)
D_M2019 = D_M2019[(D_M2019.index > D.index[0]) & (D_M2019.index <= D.index[-1])]
err_M2019 = err_M2019[(err_M2019.index > err.index[0]) & (err_M2019.index <= err.index[-1])]
D = D.loc[D.index[:-3]]
err = err.loc[err.index[:-3]]
# | Color | R | G | B | hex |
# |-------------+-----+-----+-----+---------|
# | light blue | 166 | 206 | 227 | #a6cee3 |
# | dark blue | 31 | 120 | 180 | #1f78b4 |
# | light green | 178 | 223 | 138 | #b2df8a |
# | dark green | 51 | 160 | 44 | #33a02c |
# | pink | 251 | 154 | 153 | #fb9a99 |
# | red | 227 | 26 | 28 | #e31a1c |
# | pale orange | 253 | 191 | 111 | #fdbf6f |
# | orange | 255 | 127 | 0 | #ff7f00 |
C1="#e31a1c" # red
C2="#1f78b4" # dark blue
MS=4
Z=99
for r in D.columns:
    e = ax_D.errorbar(D[r].index, D[r].values, mfc='none', ms=MS)
    C = e.lines[0].get_color()
    D[r].plot(drawstyle='steps', linewidth=2, ax=ax_D,
              color=C,
              # color='orange'
              alpha=0.75, zorder=Z)
    D_M2019[r].plot(drawstyle='steps', linestyle='--', linewidth=2, ax=ax_D,
                    color=C,
                    # color='orange'
                    alpha=0.75, zorder=Z)
    ax_D.fill_between(D.index,
                      (D[r]-err[r]),
                      (D[r]+err[r]),
                      color=C,
                      # hatch="\\",
                      alpha=0.25)
    ax_D.fill_between(D_M2019.index,
                      (D_M2019[r]-err_M2019[r]),
                      (D_M2019[r]+err_M2019[r]),
                      color=C,
                      # hatch="/",
                      alpha=0.1)
    # label each region at the right-hand end of its curve
    tx = D.iloc[-1].name + dt.timedelta(days=50)
    ty = D.iloc[-1][r]
    if r in ['CE', 'SW']: ty = ty - 4
    if r == 'NE': ty = ty + 4
    if r == 'NO': ty = ty - 2
    ax_D.text(tx, ty, r, verticalalignment='center')
#ax_coverage.set_ylabel('Coverage [%]')
#ax_coverage.set_ylim([0,100])
import matplotlib.dates as mdates
ax_D.xaxis.set_major_locator(mdates.YearLocator())
ax_D.legend("", framealpha=0)
ax_D.set_xlabel('Time [Years]')
ax_D.set_ylabel('Discharge [Gt yr$^{-1}$]')
# ax_D.set_yscale('log')
# ax_D.xaxis.set_tick_params(rotation=-90)
# for tick in ax_D.xaxis.get_majorticklabels():
# tick.set_horizontalalignment("left")
plt.show()
plt.savefig('./figs/discharge_ts_regions.png', transparent=False, bbox_inches='tight', dpi=300)
# plt.savefig('./figs/discharge_ts_regions.pdf', transparent=True, bbox_inches='tight', dpi=300)
# Err_pct = (err_day_year.values/D_day_year.values*100).round().astype(np.int).astype(np.str)
# tbl = (D_day_year.round().astype(np.int).astype(np.str) + ' ('+Err_pct+')')
# tbl.index = tbl.index.year.astype(np.str)
# tbl.columns = [_ + ' (Err %)' for _ in tbl.columns]
# tbl