Well, let me start by saying that I truly believe decision making is both an art and a skill. Often, having many options just makes the process of deciding even more challenging and more confusing, which can lead to what is known as "decision fatigue".
Let's take the case of me and my friends: it takes us ages to decide where to eat. At first it is fun. We start walking street after street, passing traffic light after traffic light, and then after 20 minutes it is no longer fun, because it gets to the stage where your patience and physical fitness are put to the test!
That approach certainly has to change when it comes to choosing a modelling approach, and having some sort of tool that could help with decision making and with eliminating the inappropriate choices would be really nice too. The good news is that there are actually some guidelines for choosing which modelling approach to go with, which is a good starting point.
Defra and the Environment Agency (2007) produced a document on modelling un-gauged locations, usually understood as locations where there are no data for the outlet of the catchment of interest. The document provides guidance on how a modelling approach can be chosen based on factors such as data availability. I just thought that this document could be used as an example of a decision-making tool.
[Figure 1: Flow chart showing modelling needs with respect to different levels of data availability (Defra and Environment Agency, 2007)]
The flow chart provides guidance on how a modelling approach can be chosen based on data availability. For instance, when no historical data are available, one has to compromise and use data from a nearby catchment (preferably of a similar catchment type) to calibrate the model (which could be lumped, distributed, etc.). The same applies when trying to update a model with real-time data. It can be seen that data availability is one of the limiting factors, and if one has to use data from similar gauged catchments, this brings some level of uncertainty with it.
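To make the idea a bit more concrete, here is a minimal sketch, in Python, of how the data-availability branch of such a flow chart could be written down as a simple decision helper. This is just my own illustration, not a reproduction of the Defra/EA guidance; the function name, inputs, and wording of the suggestions are all my own assumptions.

```python
# Hypothetical sketch: encoding a data-availability decision in code, loosely
# inspired by the idea of the flow chart (not the actual Defra/EA guidance).

def suggest_calibration_data(has_local_records: bool,
                             has_similar_gauged_neighbour: bool) -> str:
    """Suggest where calibration data could come from for a catchment outlet."""
    if has_local_records:
        # Site-specific records: calibrate directly on the catchment of interest.
        return "calibrate on local historical records"
    if has_similar_gauged_neighbour:
        # No local data: borrow records from a nearby, hydrologically similar
        # gauged catchment, accepting the extra uncertainty this introduces.
        return "calibrate on data transferred from a similar gauged catchment"
    # Neither: fall back to regionalised parameter estimates.
    return "use regionalised parameter estimates and report wide uncertainty"

# Example: an un-gauged catchment with a similar gauged neighbour.
print(suggest_calibration_data(has_local_records=False,
                               has_similar_gauged_neighbour=True))
```

The point is not the code itself but the way such a tool forces you to state explicitly what data you have before committing to an approach.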
Another factor that has an impact on the modelling approach is the catchment characteristics: catchment size, how many major rivers are present in the catchment, whether rainfall varies spatially over the catchment, the location of the catchment within the river basin, and so on. For example, if rainfall varies spatially over the catchment and the catchment area is more than 10 km², then distributed models would perhaps work better than lumped models, which in this case may oversimplify the situation.
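As a rough illustration of that last point, here is a toy version of the rule of thumb in Python. The 10 km² threshold follows the text above; everything else (the function name, the inputs, the binary "lumped vs. distributed" outcome) is my own simplification for the sake of the example.

```python
# Hypothetical sketch: a toy rule of thumb for the lumped vs. distributed choice.
# The 10 km2 threshold follows the text above; everything else is illustrative.

def suggest_model_structure(area_km2: float, rainfall_varies_spatially: bool) -> str:
    if rainfall_varies_spatially and area_km2 > 10.0:
        # Spatially varying rainfall over a larger area: a lumped model would
        # average away the variability, so a distributed model is preferable.
        return "distributed"
    # Small catchment or fairly uniform rainfall: a lumped model may suffice.
    return "lumped"

print(suggest_model_structure(area_km2=25.0, rainfall_varies_spatially=True))   # -> distributed
print(suggest_model_structure(area_km2=4.0, rainfall_varies_spatially=False))   # -> lumped
```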
Having said this, it all comes down to the purpose of the model as well. Sometimes a quick forecast for a small catchment is required, and sometimes a detailed model with quite a few parameters for a larger catchment. Hence, as an environmental modeller, one needs to be able to make some trade-offs and then come to a conclusion about which modelling approach is appropriate for the problem under investigation.
In the coming posts
we shall look into some examples of models.
So long,

I'm not quite sure of the use of the flow chart, or who uses it? In my mind, if the importance of obtaining useful and relevant data isn't already understood, modelling should not be undertaken. Maybe I'm missing something, could you explain a little more perhaps? Thanks :)
The flowchart is just guidance on which modelling procedure would make the best use of the available data when trying to model un-gauged locations. Like a coach talking through the problem with you, asking what you have, what you don't have, and how you can handle the situation.
Obtaining data and keeping records is important, but for some sites this has not been done. That doesn't necessarily mean you can't model that location (sometimes there is even a need to model that particular location, and it is your responsibility to see how you can compensate for the lack of data). So by considering the types of data available, you can still model the un-gauged location. For example, if there are no historical data available for calibration, that's okay! You have the option of using a neighbouring site (considering the proximity and similarity of the sites). Of course, if the data are site-specific, the model is expected to perform better, but sometimes you have to compromise, I guess :)
Hope that helps :)