The history of the Himalayas is rife with destructive earthquakes. In the last century, ground shaking, collapsing houses, and landslides in the wake of earthquakes killed tens of thousands of people, wreaking havoc on the Himalayan nations. The 2015 Gorkha Earthquake was only the latest in a series of severe earthquakes to hit Nepal.
Seismic hazard analysis in the Himalayas is based on only a few instrumental records and a paleoseismic record extending back ~1000 years. Paleoseismology largely relies on rupture histories derived from fault trenches, written accounts, and liquefaction features. Other records, derived for example from lake sediments, are scarce.
In a paper now published in Quaternary Science Reviews, Amelie Stolle et al. document our research in the Pokhara Valley in Nepal. The valley was massively and repeatedly aggraded by several cubic kilometers of debris in the wake of medieval earthquakes in the region. The paper builds on our 2016 paper in Science, offering new radiocarbon dates and detailing the sedimentology of the valley fills. Based on our findings, we argue that valley fills in the Himalayas may offer substantial additional evidence of past earthquakes, supplementing the current portfolio of paleoseismological records.
Stolle, A., Bernhardt, A., Schwanghart, W., Hoelzmann, P., Adhikari, B.R., Fort, M., Korup, O., 2017. Catastrophic valley fills record large Himalayan earthquakes, Pokhara, Nepal. Quaternary Science Reviews, 177, 88-103. [DOI: 10.1016/j.quascirev.2017.10.015]. << free link to paper until December 26, 2017 >>
River profiles are concave upward if they are in a dynamic equilibrium between uplift and incision, and if our simplifying assumptions of steady uplift and the stream power incision law (SPL) hold. The concavity derives from the SPL, which states that along-river gradients S are proportional to upslope area A raised to the power of the negative mn-ratio.
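To make the relation concrete: at steady state the SPL predicts S = ks * A^(-mn), so a river profile plots as a straight line with slope -mn in log-log slope-area space. A quick check in Python/NumPy (the values of ks and mn below are arbitrary illustrations, not fitted to any real landscape):

```python
import numpy as np

# Stream power incision law at steady state: S = ks * A**(-mn)
# ks (steepness) and mn (concavity index) are chosen arbitrarily for illustration.
ks, mn = 50.0, 0.45
A = np.logspace(6, 9, 50)          # upslope area in m^2
S = ks * A**(-mn)                  # along-river gradient

# In log-log space the relation is linear with slope -mn:
slope, intercept = np.polyfit(np.log10(A), np.log10(S), 1)
print(round(slope, 3))             # -> -0.45
```

This is why fitting a line to log-transformed slope-area data is the classic way to estimate the mn-ratio.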
I have mentioned the mn-ratio several times in this blog. Usually, we calculate it using slope-area plots or chi analysis, both of which are included in TopoToolbox. However, these methods usually lack consistent ways to express the uncertainties of the mn-ratio. The lack of consistency is due to fitting autocorrelated data, which eludes straightforward statistical analysis.
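For readers unfamiliar with chi analysis: it replaces along-river distance with the coordinate chi = ∫ (A0/A)^mn dx (integrated upstream from the outlet), which turns a steady-state SPL profile into a straight line in chi-elevation space. Here is a minimal Python/NumPy sketch of that idea with a synthetic profile (A0, ks, and the area function are made-up illustrations):

```python
import numpy as np

# Chi analysis linearizes river profiles: chi(x) = integral of (A0/A)**mn dx,
# measured upstream from the outlet. All numbers are illustrative assumptions.
mn, A0, ks = 0.45, 1e6, 50.0
x = np.linspace(0, 2e4, 500)                    # distance upstream (m)
A = 1e9 * (1 - 0.95 * x / x[-1])**2             # synthetic shrinking upslope area (m^2)

def cumtrapz(y, x):
    """Cumulative trapezoidal integration starting at zero."""
    return np.concatenate([[0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))])

z   = cumtrapz(ks * A**(-mn), x)                # integrate SPL slopes -> elevation
chi = cumtrapz((A0 / A)**mn, x)                 # chi coordinate

# With the correct mn, elevation is exactly linear in chi, with slope ks/A0**mn:
fit = np.polyfit(chi, z, 1)
print(np.isclose(fit[0], ks / A0**mn))          # -> True
```

With the wrong mn, the chi-elevation plot becomes curved, which is what chi-based methods exploit to find the best-fitting mn-ratio.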
Today, I want to present a new function that uses Bayesian Optimization with cross-validation to find a suitable mn-ratio. While Bayesian Optimization is designed to find optimal values of objective functions involving numerous variables, solving an optimization problem with mn as the only variable nicely illustrates the approach.
Bayesian Optimization finds a minimum of a scalar-valued function in a bounded domain. In a classification problem, this value could be the classification loss, i.e., the price paid for misclassifications. In a regression problem, this value might refer to the sum of squared residuals. The value might also be derived using cross-validation, a common approach to assessing the predictive performance of a model. Such cross-validation approaches often take into account only random subsets of the data, which entails that the value to be optimized may differ between evaluations for the same set of input parameters. Bayesian Optimization can handle such stochastic functions.
Now how can we apply Bayesian Optimization to find the right mn-ratio? The new function mnoptim uses chi analysis to linearize longitudinal river profiles. If there are several river catchments (or drainage network trees), the function picks a random subset of these trees to fit an mn-ratio and then tests it with another set of drainage basins. This allows us to assess how well an mn-ratio derived in one catchment can actually be applied to another catchment. The goal is to derive the mn-ratio that transfers best to other catchments.
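I can't show mnoptim's internals here, but the following Python sketch illustrates the kind of stochastic, cross-validated loss that Bayesian Optimization would minimize: fit a single chi-elevation slope on a random half of several synthetic catchments, score the squared residuals on the held-out half, and pick the mn that minimizes this loss. A plain grid search stands in for the Bayesian optimizer, and all numbers are invented for illustration:

```python
import numpy as np

# A sketch (not mnoptim's actual internals) of a cross-validated, stochastic loss:
# fit an mn-ratio on a random subset of catchments, score it on the rest.
rng = np.random.default_rng(0)
mn_true, A0, ks = 0.45, 1e6, 50.0

def cumtrapz(y, x):
    return np.concatenate([[0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))])

def make_catchment():
    x = np.linspace(0, 2e4, 300)
    A = rng.uniform(0.5, 2) * 1e9 * (1 - 0.95 * x / x[-1])**2
    z = cumtrapz(ks * A**(-mn_true), x) + rng.normal(0, 2, x.size)  # noisy profile
    return x, A, z

catchments = [make_catchment() for _ in range(8)]

def chi(x, A, mn):
    return cumtrapz((A0 / A)**mn, x)

def cv_loss(mn):
    idx = rng.permutation(len(catchments))
    train, test = idx[:4], idx[4:]
    # fit one chi-elevation slope on the training catchments ...
    c = np.concatenate([chi(*catchments[i][:2], mn) for i in train])
    z = np.concatenate([catchments[i][2] for i in train])
    b = np.polyfit(c, z, 1)
    # ... and score squared residuals on the held-out catchments
    c = np.concatenate([chi(*catchments[i][:2], mn) for i in test])
    z = np.concatenate([catchments[i][2] for i in test])
    return np.mean((np.polyval(b, c) - z)**2)

# Stand-in for the Bayesian optimizer: a coarse grid search over [0.1, 0.9].
grid = np.arange(0.1, 0.91, 0.05)
best_mn = grid[np.argmin([cv_loss(mn) for mn in grid])]
print(best_mn)   # should land near the true value of 0.45
```

Because the train/test split is random, two calls to cv_loss with the same mn return different values; this is exactly the kind of stochastic objective that Bayesian Optimization is designed to handle.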
Now let’s try the new function mnoptim. Here is the code that I’ll use for the entire SRTM-3 DEM of Taiwan. I’ll clip the stream network to the area upstream of the 300 m contour to avoid the influence of the low-lying alluvial reaches.
DEM = GRIDobj('taiwan.tif');
FD  = FLOWobj(DEM);
S   = STREAMobj(FD,'minarea',1e6,'unit','map');
C   = griddedcontour(DEM,[300 300],true);
S   = modify(S,'upstreamto',C);
A   = flowacc(FD);
[mn,results] = mnoptim(S,DEM,A,'optvar','mn','crossval',true);

% we can refine the results if we need
results = resume(mn);
% and get an optimal value of mn:
bestPoint(results)

ans =

  table

      mn
  _______

  0.41482
This nicely derives an optimal mn-value of 0.415, which is close to the often-reported value of 0.45. Moreover, based on the plot, we gain an impression of the uncertainty of this value. In a transient landscape with frequent knickpoints, the uncertainty about the mn-ratio will probably be larger.
Note that mnoptim requires MATLAB 2017b as well as the Statistics and Machine Learning Toolbox. It also runs with 2017a, but won’t be able to use parallel computing then.
Abstract submission for the EGU 2018 has just started and is open until 10 Jan 2018, 13:00 CET. The session Interactions between tectonics and surface processes from mountain belts to basins, organized by Dirk Scherler, Alex Whitaker, Taylor Schildgen and me, will address the coupling between tectonics and surface processes. We invite contributions that use geomorphic or sedimentary records to understand tectonic deformation, and we welcome studies that address the interactions and couplings between tectonics and surface processes at a range of spatial and temporal scales. In particular, we encourage coupled catchment-basin studies that take advantage of numerical/physical modeling, geochemical tools for quantifying rates of surface processes (cosmogenic nuclides, low-temperature thermochronology, luminescence dating), and high-resolution digital topographic and subsurface data. We also encourage field or subsurface structural and geomorphic studies of landscape evolution, sedimentary patterns and provenance in deformed settings, and invite contributions that address the role of surface processes in modulating rates of deformation and tectonic style.
Please see further information >>here<<.
We look forward to your contributions. See you at the EGU soon!
Wolfgang, Dirk, Taylor and Alex
Two positions in Geomorphology and Cosmogenic Nuclides in Dirk Scherler’s group
Dirk Scherler, co-developer of TopoToolbox, has recently been awarded the ERC project “Climate sensitivity of glacial landscape dynamics (COLD)”. The main aim of COLD is to quantify how erosion rates in glacial landscapes vary with climate change and how such changes affect the dynamics of mountain glaciers. He is now inviting applications for two PhD positions at the German Research Centre for Geosciences (GFZ) in Potsdam.
Please find more details here:
Application deadline is 15th November 2017.
Today, I came back from an excellent workshop (organized by Darrel Maddy) in Spain focusing on the late Quaternary development of the Bergantes catchment. Located in an extremely beautiful landscape, this catchment features numerous fluvial terraces that were extensively studied and dated by Mark Macklin and Paul Brewer together with four PhD students between 2005 and 2009. A solid chronology together with high-resolution terrain and climate data provides the benchmark against which we will test numerical landscape evolution models (LEMs).
Assessing the capabilities of LEMs to reconstruct real landscapes, however, involves several challenges, among which heavy parameterization is a severe one. Thus, in order to get a grip on the uncertainty and sensitivity of LEMs, Chris Skinner from the University of Hull led a study in which we assessed the parameter space of CAESAR-Lisflood and its effects on several output metrics derived from hundreds of simulations.
This study has now been accepted for discussion in the journal Geoscientific Model Development and can be accessed here.
Analyses that use upslope area usually demand that catchments are completely covered by the DEM. Values of upslope area may be too low if catchments are cut off along DEM edges, and so are estimates of discharge. How can we avoid including such catchments in our analyses?
In one of my previous posts on chi analysis, I showed a rather long piece of code to include only those catchments that have 20% or less of their outlines on the DEM edges. Here, I’ll be stricter: I’ll remove those parts of the stream network that have pixels on DEM edges in their upslope area. By doing this, we make sure that drainage basins are complete, which is vital for chi analysis or for estimating discharge.
Here is the code:
DEM = GRIDobj('srtm_bigtujunga30m_utm11.tif');
FD  = FLOWobj(DEM);
S   = STREAMobj(FD,'minarea',1e6,'unit','map');
I   = GRIDobj(DEM,'logical');
I.Z(:,:) = true;
I.Z(2:end-1,2:end-1) = false;
I   = influencemap(FD,I);
The influencemap function takes all edge pixels and derives the pixels that they influence downstream. Again, the result is a logical GRIDobj in which true values refer to pixels that might potentially be affected by edge effects. Let’s remove those pixels from the stream network S using the modify function.
S2 = modify(S,'upstreamto',~I);
D  = drainagebasins(FD,S2);
imageschs(DEM,[],'colormap',[1 1 1],'colorbar',false)
[~,x,y] = GRIDobj2polygon(D);
hold on
plot(x,y);
plot(S,'k')
plot(S2,'b','LineWidth',1.5)
Ok, now you have a drainage network that won’t be affected by edge effects for further analysis.
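For the curious, the downstream sweep behind influencemap can be sketched in a few lines (a hypothetical re-creation in Python, not TopoToolbox code): process the nodes of the flow network from upstream to downstream and pass an "influenced" flag along each receiver edge.

```python
# A sketch of what influencemap does (a hypothetical re-creation in plain Python,
# not TopoToolbox code). Nodes of a flow network each drain to one receiver;
# 'order' lists the nodes from upstream to downstream, rec[i] is node i's receiver.
order = [0, 1, 2, 3, 4]           # node 5 is the outlet and has no receiver
rec   = {0: 2, 1: 2, 2: 4, 3: 4, 4: 5}

def influence(seeds, order, rec, n=6):
    """Return the set of nodes that receive flow from any seed node."""
    influenced = [i in seeds for i in range(n)]
    for i in order:               # single upstream-to-downstream sweep
        if influenced[i]:
            influenced[rec[i]] = True
    return {i for i in range(n) if influenced[i]}

print(influence({1}, order, rec))   # a seed at node 1 influences nodes 2, 4 and 5
```

With the DEM-edge pixels as seeds, everything flagged by this sweep is potentially incomplete, which is exactly what we removed from the stream network above.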
Today, I want to show another application of TopoToolbox’s new smoothing suite that Dirk and I have recently released together with our discussion paper in ESURF. I will demonstrate how different methods associated with STREAMobj enable you to create plots that are quite similar to swath profiles calculated along rivers. Let’s see how it works.
Swath profiles require a straight or curved path defined by a number of nodes. Each node has a certain orientation defined by one or two of its neighboring nodes. Creating swath profiles then involves mapping values from locations that are perpendicular to that orientation and within a specified distance. We will use a simplified version of mapping values lateral to the stream network, which is implemented in the function maplateral. maplateral uses the function bwdist, which returns a distance transform from all pixels (the target pixels) to a number of seed pixels in a binary image. In our case, the seed pixels are the pixels of the stream network, and bwdist identifies the target pixels from which it calculates the Euclidean distance to the stream pixels. Usually, there are several target pixels for each seed pixel, so we require an aggregation function that calculates a scalar from the vector of values associated with each seed pixel. Unlike swath profiles, however, this approach entails that some seed pixels may not have target pixels up to the maximum distance. We can thus be pretty sure that some pixels will be missing some of the information that we want to extract. While we cannot avoid this problem with this approach, we can at least reduce its impact using the nonparametric quantile regression implemented in the crs function.
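The bwdist-based mapping is easy to reproduce outside MATLAB. Here is a small Python sketch (a hypothetical re-creation using SciPy’s distance_transform_edt, not the actual maplateral code): for each pixel within a maximum distance of the stream, look up its nearest stream pixel and aggregate values onto that pixel with a maximum.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# A sketch of the bwdist-based idea behind maplateral (a hypothetical re-creation
# with SciPy, not TopoToolbox code): for every pixel within a maximum distance of
# the stream, find its nearest stream pixel and aggregate values onto it.
Z = np.arange(36, dtype=float).reshape(6, 6)      # stand-in "elevation" grid
stream = np.zeros((6, 6), dtype=bool)
stream[np.arange(6), np.arange(6)] = True         # a diagonal "stream"

# Distance of every pixel to its nearest stream (seed) pixel, and that seed's
# indices. Note: seeds must be the zeros of the input image, hence ~stream.
dist, (ri, ci) = distance_transform_edt(~stream, return_indices=True)

maxdist = 2.0
zmax = np.full(Z.shape, -np.inf)
for r, c in zip(*np.nonzero(dist <= maxdist)):
    seed = (ri[r, c], ci[r, c])
    zmax[seed] = max(zmax[seed], Z[r, c])          # aggregate with max, like @max

print(zmax[np.arange(6), np.arange(6)])            # max elevation near each stream pixel
```

Each stream pixel ends up with the maximum value found among its assigned neighbors, which is precisely the node-attribute list that maplateral returns.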
Here is the approach. We’ll load a DEM, and derive flow directions and a stream network, which in this case is just one trunk river. We use the function maplateral to map elevations within a distance of 2 km of the stream network. Our mapping uses the maximum function to aggregate the values. Hence, for each pixel along the stream network we will have the maximum elevation in its 2 km nearest-neighborhood. The swath is plotted using the function imageschs and the second output of maplateral. Then we plot the maximum values along with a profile of the stream network.
DEM = GRIDobj('srtm_bigtujunga30m_utm11.tif');
FD  = FLOWobj(DEM);
S   = STREAMobj(FD,'minarea',1e6,'unit','map');
S   = klargestconncomps(S);
S   = trunk(S);
z   = getnal(S,DEM);
[zmax,mask] = maplateral(S,DEM,2000,@max);

subplot(2,1,1)
imageschs(DEM,mask,'truecolor',[1 0 0],...
    'colorbar',false,...
    'ticklabels','nice');
subplot(2,1,2)
plotdz(S,z)
hold on
plotdz(S,zmax)
legend('River profile','maximum heights in 2 km distance')
This looks quite ok, but the topographic profile has a lot of scatter, and a large number of values seem not to be representative. Most likely, these are the pixels whose nearest-neighborhoods fail to extend to the maximum distance of 2 km. Rather, I’d expect a profile that runs along the maximum values of the zigzag line. How can we obtain this line? Well, the crs function can handle this. Though it was originally intended to be applied to longitudinal river profiles, it can also handle other data measured or calculated along stream networks. The only thing we need to “switch off” is the downstream minimum gradient that the function uses by default to create smooth profiles that decrease monotonically in downstream direction. This is simply done by setting the ‘mingradient’ option to nan. I use a smoothing parameter value K=5 and set tau=0.99, which forces the profile to run along the 99th percentile of the data. You can experiment with different values of K and tau, if you like.
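To see why a high quantile traces the top of a noisy series, here is a stand-in in Python (a moving-window quantile, not the actual CRS spline-based quantile regression; all numbers are invented): with tau close to 1, the fitted curve hugs the upper envelope of the data, much like the profile we are after.

```python
import numpy as np

# The crs function fits a smooth curve by quantile regression. As a rough
# stand-in (a moving-window quantile, not the actual CRS algorithm), a high
# quantile such as tau = 0.99 traces the upper envelope of a noisy series.
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 400)
y = np.sin(x) + 2 - rng.exponential(0.5, x.size)   # noise pulls values down only

def moving_quantile(y, win, tau):
    """Quantile of a centered moving window, edges padded by repetition."""
    half = win // 2
    ypad = np.pad(y, half, mode='edge')
    return np.array([np.quantile(ypad[i:i + win], tau) for i in range(y.size)])

upper = moving_quantile(y, 41, 0.99)               # hugs the top of the zigzag
```

Lowering tau lets the curve sink into the cloud of points; raising K in crs plays a role similar to widening the window here.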
figure
zmaxs = crs(S,zmax,'mingradient',nan,'K',5,'tau',0.99,'split',0);
plotdz(S,z);
hold on
plotdzshaded(S,[zmaxs z],'FaceColor',[.6 .6 .6])
hold off
This looks much better and gives an impression of the steepness of the terrain adjacent to the stream network. Let’s finally compare this to a SWATHobj as obtained by the function STREAMobj2SWATHobj.
figure
SW = STREAMobj2SWATHobj(S,DEM,'width',4000);
plotdz(SW)
xlabel('Distance upstream [m]');
ylabel('Elevation [m]');
Our previous results should be similar to the maximum line shown in the SWATHobj-derived profile. And I think they are. In fact, the SWATHobj-derived profile also shows some scatter that is likely due to sharp changes in the orientation of the swath centerline. While we can remove some of this scatter by smoothing the SWATHobj centerline, I think that the combination of maplateral and the CRS algorithm provides a convenient approach to sketch along-river swath profiles.