First Open Call of Geo.X – PhD and PostDoc positions in “Geo Data Science”


Geo.X network

Geo.X, a Berlin-Potsdam based research cluster, advertises 14 positions in a new research school. The school focuses on the interdisciplinary field of Data Science as a methodological research area within the geoscientific research topics Natural Hazards and Risks and Geo-Bio-Interactions. I am sure these positions will be of interest to many readers of this blog.

Please find further information here and apply here.

Let’s go dynamic with TTLEM!


As geomorphologists we have to live with the fact that we can rarely observe the processes that shape the Earth’s surface. Instrumental records cover only a small fraction of the time spans over which many geomorphic processes act, challenging our ability to disentangle and understand the complex interactions of landscape evolution. Instead, much of our work has to rely on the analysis of landforms, their geometry, their assemblage, and their constituent material, as well as a blend of geochemical and numerical dating techniques of ever-increasing sophistication.

Numerical landscape evolution models (LEMs) provide a useful approach to this challenge. LEMs are simulation tools that attempt to model erosion, sediment transport and deposition, as well as feedbacks with vegetation and land use. They amalgamate our state of knowledge in a set of physically based mathematical formulations and allow us to test whether these equations and their parameter values generate output that is plausible and consistent with field evidence.

TTLEM, the TopoToolbox Landscape Evolution Model, is the latest addition to TopoToolbox. Spearheaded by Benjamin Campforts from KU Leuven, we have developed a LEM that allows us to simulate how mountains grow, rivers incise and hillslopes respond to tectonic and climatic forcing. We placed particular emphasis on numerical schemes that minimize numerical diffusion. To achieve this, Benjamin adopted a higher-order, flux-limiting total volume method that is total variation diminishing (TVD-TVM) (Campforts and Govers, 2015) to solve the partial differential equations of river incision and tectonic displacement.

In our recent manuscript under discussion in ESurf, we show that using these methods is more than just an exercise in numerical modelling. First-order approximations often smooth knickpoints in uncontrollable ways that affect derived catchment-wide erosion rates. Numerical diffusion strongly affects lateral tectonic displacement, which has so far restricted its simulation to models using irregular grids and Lagrangian approaches. The TVD-TVM approach resolves these issues by reducing numerical diffusion to a minimum, and thus offers a regular-grid based model with wide applications in tectonic geomorphology.

Needless to say, you can directly analyze and visualize the output using TopoToolbox. The implementation comes with a set of examples that you can run directly from the command line. Give it a try and let us know what you think!
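To give a feel for the workflow, a minimal run might look like the sketch below. The function names ttlemset and ttlem follow the examples shipped with TTLEM, but the specific option fields, units and the return type are assumptions here; please consult the shipped examples for authoritative usage.

```matlab
% Minimal TTLEM sketch -- option names, units and return type are
% assumptions based on the shipped examples, not authoritative usage.
DEM = GRIDobj('srtm_bigtujunga30m_utm11.tif'); % any projected DEM
p   = ttlemset;            % default model parameter structure
U   = DEM;                 % uplift field as a GRIDobj, same extent as DEM
U.Z(:) = 0.001;            % uniform uplift of 1 mm/yr (assumed units)
H1  = ttlem(DEM,U,p);      % run the model; inspect the result with
                           % TopoToolbox's plotting functions
```

The shipped examples demonstrate more realistic setups, including spatially variable uplift and different boundary conditions.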


Campforts, B., Govers, G. (2015): Keeping the edge: A numerical method that avoids knickpoint smearing when solving the stream power law. Journal of Geophysical Research: Earth Surface, 120, 1189–1205. [DOI: 10.1002/2014JF003376]

Campforts, B., Schwanghart, W., Govers, G. (2016): Accurate simulation of transient landscape evolution by eliminating numerical diffusion: the TTLEM 1.0 model. Earth Surface Dynamics Discussion, in review. [DOI: 10.5194/esurf-2016-39]

Himalayan hydropower faces higher uncertainty closer to glacial lakes


Glacial lakes (blue circles) and hydropower projects (yellow = existing, red = planned/under construction) along the Himalayan Arc (modified from Schwanghart et al., 2016).

The Himalayan nations’ thirst for energy is increasing. Hydropower is among the most important energy sectors and has experienced massive growth rates in recent years, a growth that is expected to continue. More than 400 Himalayan hydropower projects (HPP) are currently registered or under validation for the Clean Development Mechanism (CDM) alone, a global environmental investment and credit scheme that enables countries with emission-reduction commitments to implement emission-reduction projects in developing countries.

In our now published paper in Environmental Research Letters, we show that – as opportune sites along major rivers are already occupied – numerous HPP that are planned or under construction expand into river reaches higher upstream and closer to glacial lakes. These lakes, dammed behind moraines, may sporadically burst upon dam failure and create so-called glacial lake outburst floods (GLOFs). We mapped more than 2000 lakes dammed by terminal or lateral moraines along the Himalayan Arc and calculated outburst scenarios for each using a physically based dam-breach model. Since many of the variables of this model are impossible to constrain from remote sensing data alone, we adopted a probabilistic approach and used a Monte-Carlo simulation to derive entire distributions of outburst scenarios. Using TopoToolbox, we determined the flow paths of potential GLOFs from each lake and used these paths as input to a 1D flood propagation model that we fed with the results of the Monte-Carlo simulation. This allowed us to track how uncertainties about lake bathymetry and dam characteristics propagate downstream, and showed that, for most lakes, these uncertainties remain significant up to a distance of 80 km downstream. This is the distance within which more and more HPP are planned or under construction.
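To illustrate the Monte-Carlo logic only (not the actual dam-breach model, which is physically based and described in the paper), a toy version could look like the sketch below. The sampled distributions and the peak-discharge relation are placeholders invented for illustration, not values from the study.

```matlab
% Toy Monte-Carlo sketch of outburst-scenario generation. Distributions
% and the peak-discharge relation are placeholders -- the study couples
% a physically based dam-breach model to 1D flood routing instead.
rng(0)
n  = 1e4;                        % number of Monte-Carlo draws
V  = 10.^(6 + 0.3*randn(n,1));   % lake volume [m^3], assumed lognormal
H  = max(5, 20 + 5*randn(n,1));  % breach depth [m], assumed normal
Qp = 0.00013*(V.*H).^0.60;       % hypothetical peak-discharge relation
Qs = sort(Qp);                   % empirical 5/50/95 percentiles of the
Qs(round([0.05 0.5 0.95]*n))     % simulated outburst scenarios
```

The point of the probabilistic approach is that each lake yields an entire distribution of scenarios rather than a single "worst case", and those distributions can then be routed downstream.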

Clearly, our approach rests on a series of assumptions and limitations, and I encourage you to read the full paper (open access) for a detailed discussion of these. We conclude that the current massive expansion of HPP closer to glaciated areas, combined with our limited knowledge about the stability and geometry of glacial lakes, will make it increasingly difficult to estimate potential flood magnitudes, and thus compromises reliable risk assessment and safety planning of new HPP.

Schwanghart, W., Worni, R., Huggel, C., Stoffel, M., Korup, O. (2016): Uncertainty in the Himalayan energy-water nexus: estimating regional exposure to glacial lake outburst floods. Environmental Research Letters, 11, 074005. [DOI: 10.1088/1748-9326/11/7/074005]

Local-scale flood mapping using LiDAR


Flood maps are increasingly important for managing flood risks and land use in riverine areas. Much of the effort in flood mapping has concentrated on large and extensive floods. Localised flooding, however, has received less attention, although such local-scale inundation is often prolonged and frequent in lowland areas and can incur high economic losses through crop, property and infrastructure damage.

In a recent paper first-authored by Radoslaw Malinowski, we used full-waveform airborne laser scanning to map inundation patterns of the Norrea River, a lowland river in Denmark. Our particular focus was on understanding how water affects the backscattered energy at variable water coverages in areas with interspersed vegetation. We show that the backscattered energy correlates negatively with water coverage. We exploited that relation, together with machine learning techniques combining radiometric and geometric data, to obtain a LiDAR point cloud based flood map with a very high mapping accuracy of ~89%. Our results demonstrate the potential of full-waveform LiDAR data to detect water surfaces on floodplains where water surface exposure is limited by vegetation canopy.

Malinowski, R., Höfle, B., König, K., Groom, G., Schwanghart, W., Heckrath, G. (2016): Local-scale flood mapping on vegetated floodplains from radiometrically calibrated airborne LiDAR data. ISPRS Journal of Photogrammetry and Remote Sensing, 119, 267-279 [DOI: 10.1016/j.isprsjprs.2016.06.009].

Run in parallel? Or not?


Though hidden in the source code and not explicitly documented anywhere, TopoToolbox uses some of MATLAB’s built-in parallelization techniques. The functions gradient8, curvature, and hillshade, for example, use the function blockproc that comes with the Image Processing Toolbox. If you have the Parallel Computing Toolbox, blockproc will use several workers to process your data. This should be more efficient and faster, at least in theory.

In practice, and despite the task being embarrassingly parallel, the parallel approach may not be as efficient. It took me some time to realize this, partly due to MATLAB bugs, partly due to my own mistake.

Here are some issues that I came across:

1. Until release 2015b, blockproc was bugged (see bug report 1157095). Enormous memory requirements slowed down execution. The MathWorks offered a workaround that solved this bug, but memory requirements remained high.

2. These memory requirements were due to a silly mistake I made when writing the anonymous function that blockproc applies to each block. Here is a code snippet from gradient8 that illustrates the problem:

fun = @(x) steepestgradient(x,G.cellsize,c);
G.Z = blockproc(DEM.Z,blksiz,fun,...
'BorderSize',[1 1],...

The steepestgradient function is a subfunction of gradient8 that calculates the gradient for each block. G is a GRIDobj that I preallocated. Now, what’s wrong here? By referencing G.cellsize inside the anonymous function, the entire GRIDobj G was captured and transferred to each worker, creating a large memory overhead and a huge amount of transfer time between the workers.

I have now changed this code snippet to

cs  = G.cellsize;
fun = @(x) steepestgradient(x,cs,c);
G.Z = blockproc(DEM.Z,blksiz,fun,...
'BorderSize',[1 1],...

And this resolves the problem.

3. Now, is it really worth the effort to call blockproc in parallel mode? I tried a simple test case. I used a relatively large DEM with 12477 rows and 21275 columns stored in single precision, and I determined the block size using the function bestblk.

blksiz = bestblk(size(DEM.Z),5000);

I then used tic and toc to time execution: with blockproc running in parallel, gradient8 took 13.03 seconds to return the gradient GRIDobj. Choosing a best block size of 2000 increased the run time to 33.8 seconds. Without invoking blockproc, gradient8 took only 8.23 seconds. What is the reason for this? My guess is that imerode, which steepestgradient calls twice, has some built-in parallelism of its own, so it simply does not benefit from additional parallelization. Observing physical memory usage in the task manager revealed that none of the approaches caused massive memory overhead.
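For reference, the timing harness is as simple as wrapping the calls in tic/toc. The sketch below only assumes bestblk and blockproc from the Image Processing Toolbox; the identity function does no actual work, so the second timing mainly measures blockproc's own block-splitting and worker-transfer overhead.

```matlab
% Timing sketch: compare gradient8 as-is with the raw overhead of a
% parallel blockproc call over the same grid.
blksiz = bestblk(size(DEM.Z),5000);       % near-optimal block size
tic; G = gradient8(DEM); t1 = toc;        % TopoToolbox gradient, as-is
fun = @(block) block.data;                % identity on each block struct
tic; Z = blockproc(DEM.Z,blksiz,fun,'UseParallel',true); t2 = toc;
fprintf('gradient8: %.2f s, blockproc overhead: %.2f s\n',t1,t2);
```

If the overhead alone is already a sizable fraction of the serial run time, parallel blockproc is unlikely to pay off for that function.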

Now let’s try another function: hillshade. hillshade calls the function surfnorm, which returns three grids the same size as the DEM. I thus expected memory issues when calculating surfnorm of a very big matrix, issues that blockproc should reduce. When I run this function without blockproc, I see a massive increase in memory demand, up to 100% of my 24 GB of available RAM. CPU usage goes down, probably because MATLAB is forced to swap data between the hard disk and main memory. The elapsed time in this case is 161.2 seconds. And, surprise: when invoking blockproc (block size 5000), the memory overhead of hillshade drops drastically and the elapsed time decreases to 66.7 seconds. Smaller block sizes of 2000 and 500 increase the run time to 89.1 and 104.4 seconds, respectively.

Now, what did I learn? (i) Be aware of the amount of data that you transfer between the workers. (ii) The benefit of using blockproc depends very much on the algorithm that each worker processes. Memory-demanding algorithms benefit from parallelization; however, that benefit depends strongly on your computational resources and (iii) is modulated by the size of the blocks that you transfer to the workers.

I have now updated some GRIDobj methods that call blockproc and removed the bug that caused excessive data transfer between the workers. This works well on my system, but perhaps it won’t on yours. My recommendation is to experiment yourself with when blockproc should be invoked and, if so, which block sizes to use.

Do you have experience with using blockproc? Please let me know.

Orientation of stream segments


Today’s guest post comes from Philippe Steer, assistant professor at the University of Rennes. Philippe contributed two new functions and I asked him to describe them here to show potential applications. Thanks, Philippe, for this contribution!

I recently needed to compute river network orientation. River network orientation can reveal distributed tectonic deformation in the vicinity of strike-slip faults (e.g. Castelltort et al., 2012) or help identify developing versus paleo river networks. One way to obtain orientation is first to extract the river segments that link confluences to outlets or confluences to channel heads, and then to compute the orientation of the resulting segments. The streampoi function extracts these feature points of the network, such as channel heads, confluences, b-confluences (the points just upstream of any confluence) and outlets. To connect these feature points into segments, one can use the property that each segment belongs to a unique drainage basin, obtained from the function drainagebasins when the true river outlets and the b-confluences are used as outlets. By definition, each segment also corresponds to a unique value of Strahler order.

Once the river segments are identified, segment orientation is easily obtained. What is interesting is that defining these segments also paves the way to computing several other geometric metrics averaged over each river segment, such as direct or flow-path length, sinuosity, direct or flow-path slope, mean drainage area, and even the steepness index (if a concavity value is assumed).

To do the job cleanly, I have written two MATLAB functions.

Here is a piece of code showing you the use of these functions (assuming you have already loaded the DEM of interest):

%% Flow direction and drainage area
DEMf = fillsinks(DEM);
FD   = FLOWobj(DEMf);
A    = flowacc(FD);

%% Extract rivers
% Set the critical drainage area (flowacc returns areas in pixels)
res     = DEM.cellsize;
A_river = 1e7/(res^2);  % 10^7 m^2 converted to a number of pixels
W = A > A_river;
% Extract the stream network
S = STREAMobj(FD,W);

%% River segments
% Extract segment and compute their orientations
% Plot results

To illustrate the results, I thought the South Island of New Zealand was a good example. Indeed, as first suggested by Castelltort et al. (2012), the river catchments south of the Alpine Fault are tilted, possibly suggesting that they have recorded distributed shear deformation. This tilting is also quite obvious when looking at the river segments in the following figure. Another interesting feature is that the river segments north of the Alpine Fault, facing the tilted rivers, display a higher steepness than their southern neighbours. This could be due to a long list of forcings, including active tectonics, an orographic precipitation gradient, or the transition from glacial to fluvial erosion.

Castelltort, S. et al. (2012): River drainage patterns in the New Zealand Alps primarily controlled by plate tectonic strain. Nature Geoscience, 5(10), 744-748. [DOI: 10.1038/ngeo1582]

Snap to stream!


Working with river networks and drainage basins often entails data acquired at specific locations along the rivers. Such data may comprise geocoded hydrological gauge measurements, 10Be-derived basin-wide denudation rates, or grain-size sampling locations, among others.

If you are working with point locations in TopoToolbox, make sure that their x- and y-coordinates have the same projected coordinate system (e.g., UTM) as the digital elevation model (DEM). If your data comes in geographic coordinates (e.g., WGS84), reproject them using GIS or the Mapping Toolbox functions projfwd or mfwdtran.

Having done that, you will probably want to explore your data by plotting them together with your stream network data. What you will certainly notice at that stage is that your locations do not align perfectly with the river network. There are two reasons for that:

  1. Your data has some positional error.
  2. The derived stream network has some positional error because it is forced to run through DEM cell centers.

In addition, including point data in any TopoToolbox analysis requires that the points are located on cell centers of the underlying DEM. To ensure that this is the case, you’ll need to snap your data to streams!

“Snap to stream” is a frequently needed operation in TopoToolbox, and since I have received several requests for help with this issue, I’ll detail it here. To do so, I will use the Big Tujunga DEM and derive the stream network. I will generate some random points on the stream network and slightly offset them. I will then use STREAMobj/snap2stream to attach those locations back to the stream network and derive their respective drainage basins.

DEM = GRIDobj('srtm_bigtujunga30m_utm11.tif');
FD  = FLOWobj(DEM,'preprocess','carve');
S   = STREAMobj(FD,'minarea',1000);   % minimum drainage area in pixels
S   = klargestconncomps(S);           % keep the largest connected network

% Generate random points on the network and offset them slightly
idx = randi(numel(S.x),20,1);
x   = S.x(idx)+randn(size(idx))*100;
y   = S.y(idx)+randn(size(idx))*100;

% Snap 2 stream
[xn,yn,IX] = snap2stream(S,x,y,'plot',true);
axis image

% And calculate drainage basins
D   = drainagebasins(FD,IX);
D.Z = single(D.Z);
D.Z(D.Z==0) = nan;

Does that work for you? Now if you need to calculate statistics based on these drainage basins, take a look at the function drainagebasinstats.