iceTEA – Tools for Exposure Ages from ice margins



My guest blogger today is Richard Selwyn Jones from Durham University. He developed iceTEA, a set of MATLAB tools to calculate exposure ages. He kindly accepted my invitation to write here about his software. Thanks!

iceTEA (Tools for Exposure Ages) is a new toolbox for plotting and analysing cosmogenic-nuclide surface-exposure data. The tools were originally intended for data that are used to reconstruct past glacier and ice sheet geometries, but they can potentially be used for many other geochronology and geomorphology applications of cosmogenic nuclides.

The iceTEA tools are available via an online interface (http://ice-tea.org) and as MATLAB code with a ready-to-use front-end script for each tool (https://github.com/iceTEA-code). While the online version provides all primary analysis and plotting functionality, the code gives the user greater flexibility to apply the tools to specific needs and also includes some additional options.

There are eight tools, which provide the following functionality: 1) calculate exposure ages from 10Be and 26Al data, 2) plot exposure ages as kernel density estimates and as a horizontal or vertical transect, 3) identify and remove outliers within a dataset, 4) plot nuclide concentrations on a two-isotope diagram and as a function of depth, 5) correct exposure ages for cover of the rock surface, 6) correct ages for changes in elevation through time, and estimate 7) average and 8) continuous rates of change (e.g. ice margin retreat or thinning).

Here is an example of how you can use the code to correct data for long-term uplift:

data_name = 'input_data.xlsx';  % File name used for sample data

scaling_model = 'LSD';     % 'DE','DU','LI','ST','LM','LSD','LSDn'
 
% Set elevation correction
correction_type = 'rate';  % Select 'model' or 'rate'
GIA_model = [];            % If 'model', set GIA model to use - 'I5G' or 'I6G'
elev_rate = 2;             % If 'rate', set rate of elevation change (m/ka)

sample_data = get_data(data_name);  % Load sample data
 
% Calculate Elevation-corrected Exposure Ages
if strcmp(correction_type,'model')
    elev_input = GIA_model;
elseif strcmp(correction_type,'rate')
    elev_input = elev_rate;
end
corrected = elev_correct(sample_data,scaling_model,correction_type,elev_input);

The corrected exposure ages can then be plotted as kernel density estimates:

% Plot settings
feature = 1;    % Data from single feature?  yes = 1, no = 0
save_plot = 0;  % Save plot?  1 to save as .png and .eps, otherwise 0
mask = [];      % Select samples to plot (default is all)
time_lim = [];  % Optionally set x-axis limits (in ka)
weighted = [];  % Optionally select weighted (1) or unweighted (0) mean and standard deviation (default is weighted)
 
% Plot figure
plot_kernel(corrected.plot.corr,feature,save_plot,mask,time_lim,weighted);

A two-isotope diagram. Samples that plot inside the “simple exposure region” (marked by black lines) were continuously exposed, whereas samples that plot below this region have been buried in the past with a complex exposure history.

A kernel density estimate plot for exposure ages from a moraine. Individual ages are shown in light red, with the summed probability as a dark red line. One of the tools allows the user to evaluate the effects of surface cover on their dataset using different types and depths of surface cover (shown here as green summed probability lines for snow and till).

Rates of change can be estimated using linear regression in a Monte Carlo framework, or continuously using Fourier Series analysis or Bayesian penalized spline regression. The latter is shown here. The exposure age constraints (with age and sample position uncertainties) are modelled (upper panel) in order to generate rates of change (lower panel). In this example, ice sheet thinning was most rapid at approx. 8 ka.

The purpose of these tools is to allow users to explore the spatial and temporal patterns in their data in a consistent and inter-comparable way, and also to initiate discussion of further improvements in the application and analysis of surface-exposure data. We welcome suggestions for additional plotting or analysis tools.

Reference

Jones, R.S., Small, D., Cahill, N., Bentley, M.J. and Whitehouse, P.L., 2019. iceTEA: Tools for plotting and analysing cosmogenic-nuclide surface-exposure data from former ice margins. Quaternary Geochronology, 51, 72-86. [DOI: 10.1016/j.quageo.2019.01.001]


Which tolerance should I choose when using knickpointfinder?


Kai Deng from the GFZ Potsdam commented on my previous post on knickpointfinder and asked the following question about the parameter ‘tol’:

[…] one question on the parameter “tol”. You said that “the value of tol should reflect uncertainties that are inherent in longitudinal river profile data”. However, I’m still confused about how to translate this sentence into code, or in another word, how can I validate my choice of tol by some criteria based on codes?

Now that’s an excellent question. The ‘tol’-parameter determines how many knickpoints the finder will detect and we might not want to miss a knickpoint. At the same time, we know that river profile data often has some erroneous values and we wouldn’t want to pick knickpoints that are merely artefacts of the data. So, what value of ‘tol’ should we choose?

I wrote that “the value of tol should reflect uncertainties that are inherent in longitudinal river profile data”. More specifically, I would argue that one should choose tolerance values that are higher than the maximum expected error between the measured and the true river profile. With lower tolerance values we would likely increase the false positive rate, i.e., the probability that we choose a knickpoint that is due to an artefact.

Now, the problem that we immediately face is that we don’t know this error. Of course, there are quality measures for most common DEM products, but usually these do not apply to values measured along valley bottoms, at least for common global DEM products such as SRTM, ASTER GDEM, etc. (Schwanghart and Scherler, 2017). Usually, there are also no repeated measurements. Hence, we cannot estimate the error as we would in the case of measurements in the lab.

But there is another approach. What we know for sure is that river profiles should decrease monotonically downstream. At least, this is true for the water surface, though not necessarily for bed elevations (e.g. due to the presence of riffle-pool sequences). The elevation of a pixel on the stream network should thus be lower than or equal to the elevation of its upstream neighbor. If it is not, then the value is probably wrong (for whatever reason). I say ‘probably’, because the upstream pixel might have an erroneously low value, too.

The following code demonstrates how we could determine these errors: through downstream minima imposition (imposemin) and upstream maxima imposition (cummaxupstream). The shaded areas in the resulting plot (here I show a zoomed version) show the differences between these values.

DEM = GRIDobj('srtm_bigtujunga30m_utm11.tif');
FD = FLOWobj(DEM);
S  = STREAMobj(FD,'minarea',1000);
S = klargestconncomps(S);
zmax = cummaxupstream(S,DEM);
zmin = imposemin(S,getnal(S,DEM));
plotdzshaded(S,[zmin zmax])
hold on
plotdz(S,DEM,'color','k')
Shaded areas show the difference between upstream maxima and downstream minima imposition.

The maximum difference between the elevations is the maximum relative error that we estimate from the profile.

max(zmax-zmin)

ans =

    43

Now we might want to take this elevation difference as the tolerance value for knickpointfinder. However, this value might be very high if our profile is characterized by a few positive or negative outliers. This would result in only a few, if any, knickpoints. Thus, we could come up with a more robust estimate of the error based on quantiles. Robust in this sense means: robust to outliers. The proposed technique is constrained regularized smoothing (CRS) (Schwanghart and Scherler, 2017).

What is constrained regularized smoothing? It is an approach to hydrologically correct DEMs based on quantile regression techniques. While minima and maxima imposition represent two extremes of hydrological conditioning – carving and filling, respectively – CRS (and quantile carving, which is similar but doesn’t smooth) is a hybrid technique that returns river longitudinal profiles running along the tau-th quantile of the data. Why would this be preferred? Simply because we would not want the shape of the resulting profile to be predominantly determined by the errors with the largest magnitude.

Here is how to derive such an estimate using CRS with the quantiles 0.1 and 0.9:

z90 = crs(S,DEM,'K',1,'tau',0.9);
z10 = crs(S,DEM,'K',1,'tau',0.1);
max(z90-z10)

ans =

   41.7629

Of course, the maximum difference is lower than the previous value obtained by minima and maxima imposition, but not by much. Let’s see how well the algorithm performs if we add some additional errors.

IX = randlocs(S,10);
IX = find(ismember(S.IXgrid,IX));
z = getnal(S,DEM);
z(IX) = z(IX) + sign(randn(numel(IX),1))*100;
plotdz(S,z)

River profile with some additional errors.

Let’s compute the difference between carving and filling.

zmin = imposemin(S,z);
zmax = cummaxupstream(S,z);
max(zmax-zmin)

ans =

110

And what about our uncertainty estimate based on the CRS algorithm?

z90 = crs(S,z,'K',1,'tau',0.9);
z10 = crs(S,z,'K',1,'tau',0.1);
d = max(z90-z10)

d =

41.7629

The CRS result is not affected at all by a few outliers. That means it enables us to derive a rather robust estimate of the uncertainty of the data.

Moreover, we can use the CRS-corrected profile as input to knickpointfinder, because we have filtered the spikes from the data. As elevations are often overestimated, you might directly use the node-attribute list z10.
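
Putting this together, here is a minimal sketch (reusing S, z10 and the estimate d from above):

% Use the CRS-corrected profile z10 as input elevations and the
% CRS-derived difference d as the tolerance
[zs,kp] = knickpointfinder(S,z10,'tol',d,'split',false);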

I have shown that you can use an uncertainty estimate derived from your river profile as direct input to knickpointfinder. There are, however, a few things to note:

  1. Don’t carve or fill your data before using CRS. Otherwise, your uncertainty estimate is flawed.
  2. The uncertainty estimate may not reliably reflect the true uncertainty in your data. Always cross-check whether the uncertainty estimate is either too low or too high.
  3. I haven’t talked about the smoothness parameter ‘K’, which also controls the uncertainty estimates. Larger values of K will likely increase the uncertainty estimate.
  4. CRS can take a while for very large river networks. Consider estimating the tolerance value from a subset of the data.
  5. You can also use a node-attribute list of tolerances in knickpointfinder and thus account for spatially varying tolerances (see the sketch after this list). In this case, however, add some offset, because the CRS might indicate areas with zero uncertainty and thus overestimate the true precision of the data.
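
To illustrate the last point, here is a minimal sketch of a spatially varying tolerance, reusing z10 and z90 from above (the offset of 5 m is an arbitrary assumption, not a recommendation):

% Node-attribute list of tolerances: CRS-based uncertainty plus an offset
% to avoid zero tolerances (the 5 m value is an arbitrary choice)
tol_nal = (z90 - z10) + 5;
[zs,kp] = knickpointfinder(S,z10,'tol',tol_nal,'split',false);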

References

Schwanghart, W., Scherler, D., 2017. Bumps in river profiles: uncertainty assessment and smoothing using quantile regression techniques. Earth Surface Dynamics, 5, 821-839. [DOI: 10.5194/esurf-5-821-2017]

Working with knickpointfinder


I have learnt from numerous comments here on this blog that the knickpointfinder function seems to be quite popular. However, working with the output of the function often entails some difficulties. So here is a short post about knickpointfinder.

Let’s load and prepare some data first:

DEM = GRIDobj('srtm_bigtujunga30m_utm11.tif');
FD = FLOWobj(DEM);
S  = STREAMobj(FD,'minarea',1000);
S = klargestconncomps(S);

In these first four lines of code, I loaded the DEM of the Big Tujunga catchment, derived flow directions (stored in FD, an instance of FLOWobj), and then extracted the stream network S based on a minimum supporting drainage area of 1000 pixels, which is about 0.9 sqkm. In the last line, I clipped the stream network to its largest connected component, which in this case is the largest drainage basin.

Now we are ready to run knickpointfinder:

[zs,kp] = knickpointfinder(S,DEM,'tol',30,'split',false);
Animated output as displayed by the knickpointfinder function if ‘plot’ is set to true.

The function takes the stream network S, the digital elevation model DEM, and a few parameter name-value pairs. knickpointfinder reconstructs the actual longitudinal profile by iteratively removing a curvature constraint at those locations that have the maximum vertical offset to the actual profile. After a few iterations, the vertical offset decreases, and ‘tol’ sets the minimum vertical offset at which the algorithm stops. The value of tol should reflect uncertainties that are inherent in longitudinal river profile data (Schwanghart and Scherler, 2017).

‘split’ enables running the function in parallel. The function splits the stream network into its connected components (see the function STREAMobj2cell) and sends each to an individual worker. Parallel execution, however, is only faster if there are several drainage basins. Since we have only one in this example, I set ‘split’ to false. There are other options to control verbose and visual output, and you can explore these options by typing

help knickpointfinder

Now, knickpointfinder returned two output variables: zs and kp. zs is a node-attribute list that contains the reconstructed river profile. You can plot it using plotdz.

plotdz(S,DEM)
hold on
plotdz(S,zs)
hold off

kp is a structure array that holds all kinds of information.

kp

kp = 

  struct with fields:

           n: 26
           x: [26×1 double]
           y: [26×1 double]
    distance: [26×1 double]
           z: [26×1 double]
      IXgrid: [26×1 double]
       order: [26×1 double]
          dz: [26×1 double]
         nal: [6985×1 logical]

Admittedly, this is a bit messy and there will be some changes in the future. But this will still have to wait a bit. Now what do the structure fields contain? kp.n is 26. Hence, the algorithm has identified 26 knickpoints. kp.x and kp.y contain the coordinates and kp.z the elevations of the knickpoints. The distance from the outlet is stored in kp.distance. kp.IXgrid contains the linear indices into the DEM. TopoToolbox works a lot with linear indices and thus this output is very crucial. kp.order contains the order in which the knickpoints were identified. However, several knickpoints may have the same order, so this information is not very meaningful. kp.dz is the vertical offset between the modelled and the actual profile at the time when the knickpoint was identified. Finally, kp.nal is a logical vector and a node-attribute list. It allows indexing into other node-attribute lists.
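
As a quick sketch of that last point, kp.nal can be used to pull the corresponding values out of any other node-attribute list, for example the gradient along the reconstructed profile (note that the values are returned in node order, which is not necessarily the order of kp.x and kp.y):

g   = gradient(S,zs);  % gradient along the reconstructed profile (a node-attribute list)
gkp = g(kp.nal);       % gradient values at the knickpoint nodes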

Now, here are a few things you might want to do with the structure array kp.

1. Plot a map

imageschs(DEM)
hold on
plot(S,'k');
plot(kp.x,kp.y,'ko','MarkerFaceColor','w')
hold off

2. Plot a longitudinal river profile

plotdz(S,DEM);
hold on
plot(kp.distance,kp.z,'ko','MarkerFaceColor','w')
hold off

3. Plot a longitudinal river profile in chi space

A = flowacc(FD);
c = chitransform(S,A,'mn',0.45);
plotdz(S,DEM,'distance',c);
hold on
[~,locb] = ismember(kp.IXgrid,S.IXgrid);
ckp = c(locb);
plot(ckp,kp.z,'ko','MarkerFaceColor','w')
hold off
xlabel('\chi [m]')

4. Export as shapefile

MS = STREAMobj2mapstruct(S);
shapewrite(MS,'streamnet');
MKP = struct('X',num2cell(kp.x),'Y',num2cell(kp.y),'Geometry','Point');
shapewrite(MKP,'knickpoints');

Ok, that’s a basic workflow that you can easily apply to your own DEM. Give it a try.

References

Schwanghart, W., Scherler, D., 2017. Bumps in river profiles: uncertainty assessment and smoothing using quantile regression techniques. Earth Surface Dynamics, 5, 821-839. [DOI: 10.5194/esurf-5-821-2017]

Fill missing values


DEMs often have missing values. Reasons include lack of coherence in radar imagery or lack of backscattered signals in LiDAR data. Sometimes, it is just a pixel that is missing, but frequently there are larger contiguous regions with no data. These missing values can be annoying because they inhibit flow routing.

An approach to get rid of these pixels is to fill them with elevation data obtained from another DEM. For example, missing data in some of the first versions of the SRTM were often filled with 1 km resolution GTOPO30 data. Alternatively, one can use techniques that estimate pixel values from the surrounding pixels that have data. Those of you who have worked with ArcGIS might know the function nibble, which uses nearest-neighbor interpolation to estimate missing values.

TopoToolbox also has a function to fill regions with missing values. The function name is inpaintnans, because inpainting is a common technique in image processing and restoration. The function either takes another DEM to fill missing values, or uses different techniques to estimate them from the surrounding pixels. The option ‘nearest’ is equivalent to the tool nibble in ArcGIS.

However, while nearest-neighbor interpolation is suited for nominal or ordinal data, it might produce unwanted steps in continuous data such as elevation. Thus, the recommended (and default) approach in TopoToolbox is Laplacian interpolation. This type of interpolation is explained nicely by Steve Eddins here and is implemented in the function regionfill, which is used by the TopoToolbox function inpaintnans. In short, this approach interpolates inward by solving a Dirichlet boundary value problem and thus derives a smooth surface.
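
For completeness, here is a hedged sketch of both options. I assume here that the interpolation method can be passed as a second argument; please check help inpaintnans for the exact signature:

DEMnn = inpaintnans(DEM,'nearest');  % nearest-neighbor fill (similar to nibble)
DEMlp = inpaintnans(DEM);            % default: Laplacian interpolation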

The Taiwan dataset that is among the new example files on the DEM repository has some missing values. Here is how to apply inpainting:

DEM = readexample('taiwan');
axislim = [0.1970 0.2384 2.5183 2.5951]*10^6;

subplot(2,2,1);
imageschs(DEM)
hold on
plot([axislim(1) axislim(2) axislim(2) axislim(1) axislim(1)],...
[axislim(3) axislim(3) axislim(4) axislim(4) axislim(3)],...
'r','LineWidth',2)

subplot(2,2,2);
imageschs(DEM,[],'caxis',[0 100])
axis(axislim);

subplot(2,2,3);
DEMin = inpaintnans(DEM);
imageschs(DEMin);
hold on
plot([axislim(1) axislim(2) axislim(2) axislim(1) axislim(1)],...
[axislim(3) axislim(3) axislim(4) axislim(4) axislim(3)],...
'r','LineWidth',2)

subplot(2,2,4);
imageschs(DEMin,[],'caxis',[0 100]);
axis(axislim);
The SRTM-3 of Taiwan. Top left: There are regions with missing values. Top right: Zoom to this region. Bottom left: Missing values filled using inpaintnans. Bottom right: Zoom to the region with DEM values filled.

It seems that some regions were not inpainted. However, these regions have a connection to the sea. To be filled, regions with no data must be entirely surrounded by pixels with data.

Inpainting can deal with missing values in DEMs. However, there are also other applications. In our recent paper led by Amelie Stolle, we use Laplacian interpolation to reconstruct a model of the undissected surface of the Pokhara Formation in Nepal, which allows us to estimate incision depths and rates.

Reference

Stolle, A., Schwanghart, W., Andermann, C., Bernhardt, A., Fort, M., Jansen, J.D., Wittmann, H., Merchel, S., Rugel, G., Adhikari, B.R., Korup, O., 2019. Protracted river response to medieval earthquakes. Earth Surface Processes and Landforms, 44, 331-341. [DOI: 10.1002/esp.4517]

Some new functions and features in TopoToolbox


Let me highlight some functions that I have added recently. I think that some of these new features may have gone unnoticed.

First, there is the function nal2nal. Obviously, this function handles “nals”, which are node-attribute lists. Node-attribute lists are column vectors where each element provides some value (e.g. elevation or distance) for every node in the stream network. Thus, nals are directly linked to the network topology of a particular instance of STREAMobj. But what happens if you want to take a nal of one stream network and map it to another one? Well, that can be done with nal2nal.

Here is an example that uses nal2nal. We map chi-values measured along an entire river network to its first-order streams.

DEM = GRIDobj('srtm_bigtujunga30m_utm11.tif');
FD = FLOWobj(DEM);
S = STREAMobj(FD,'minarea',1000);
S = klargestconncomps(S);

A = flowacc(FD);
c = chitransform(S,A);
S2 = modify(S,'streamorder',1);
c2 = nal2nal(S2,S,c,0);
plot(S,'k')
hold on
plotc(S2,c2);
h = colorbar;
h.Label.String ='\chi';
chi values of first-order streams


Another new function is mapfromnal. The function maps values measured at each node of the stream network to its upstream hillslope pixels. I showed an example in my previous post on chi inferiority.

Then, there is stackedplotdz. This function takes a STREAMobj and several node-attribute lists and plots them on top of each other. If you have MATLAB R2018b, the function comes with additional functionality provided by the function stackedplot.

DEM = GRIDobj('srtm_bigtujunga30m_utm11.tif');
FD = FLOWobj(DEM);
S = STREAMobj(FD,'minarea',1000);
S = klargestconncomps(S);
S = trunk(S);
z = getnal(S,DEM);
zs = crs(S,DEM,'tau',0.1,'K',10,'split',false);
g = gradient(S,zs);
k = ksn(S,zs,flowacc(FD));
stackedplotdz(S,{z zs g k},...
    'ylabels',{'Elevation' 'smoothed elevation' 'gradient' 'ksn'});
Multiple variables plotted on top of each other using stackedplotdz

Finally, I have created a new repository for some example DEMs. Maybe I have seen Big Tujunga too often now. The repository contains relatively small DEMs (about 30 MB; GitHub doesn’t allow much larger files) that I will use for demonstration purposes in the future. Until now, there are only two DEMs, but their number will increase in the future. The repository can be accessed via the function readexample, which you’ll find in the IOtools directory.

DEM = readexample('taiwan');
imageschs(DEM)

How do you keep up to date with new releases of TopoToolbox? Do you use git? If not, I’d recommend it.

χ inferiority


I just need to finish the year with a quick blog entry… And this one is – once again – about chi maps.

I thought about different ways to display chi maps. Chi maps are usually plotted along stream networks. In the end, however, chi maps are not so much about stream networks but rather about drainage divides and how they may migrate in the future. In addition, the distance of potential divide migration depends on the relative difference of chi values on either side of the divide, not so much on their absolute values. How could we create a map that reflects this information?

Here is an idea: First, we create a chi transform of the river network (see the function chitransform). Then we map these chi values to the entire grid. This can be accomplished with the new function mapfromnal, which lets you map values from a stream network to its nearest upstream contributing pixels. Then, we derive fringes of chi values, a few pixels wide, around the divides and calculate their spatial average. Dividing the chi values by these spatial averages then quantifies the degree to which some part of the divide is either a victim or an aggressor. Since victims have higher values, I call this metric the chi-inferiority.

Now here is the code:

DEM = GRIDobj('srtm_bigtujunga30m_utm11.tif');
FD  = FLOWobj(DEM);
S   = STREAMobj(FD,'minarea',500);
S   = removeedgeeffects(S,FD);
[~,zb] = zerobaselevel(S,DEM);
A   = flowacc(FD);

c   = zb + chitransform(S,A);
C   = mapfromnal(FD,S,c);
D   = drainagebasins(FD,S);

for r = 1:max(D)
    I = D==r;
    I = erode(I,ones(21)) & I;
    C.Z(I.Z) = nan;
end

C1 = dilate(C,ones(21));
C2 = erode(C,ones(21));

C  = C/((C1 + C2)/2);
imageschs(DEM,C,'colormap',ttscm('vik'),...
    'colorbarylabel','\chi-inferiority',...
    'ticklabels','nice');
hold on
plot(S,'k')
hold off
Map of chi-inferiority of the Big Tujunga catchment. Coloring is based on the vik colormap of the scientific colormaps of Fabio Crameri.

Happy New Year everyone!

Earth surface processes in the age of consequences


Rafael Schmitt and I have proposed a session for the EGU 2019 focussing on human-environment interactions. We aim to evaluate the current knowledge on how humans engineer landscape processes and on feedbacks between earth surface processes, their anthropic alterations and the resulting socio-economic repercussions.

For further information, please see the complete session description here. We invite presentations that shed light on the role of earth surface processes during past, present, and future human-environment interactions. Moreover, we solicit studies on innovative techniques of data collection and modelling to identify new approaches to value and manage sediment.

Hope to see you at the EGU! Note that the deadline for the receipt of abstracts is on Jan 10, 2019.