Interface#

Once you have logged in on the eWaterCycle machine, this is what the interface looks like. This notebook contains some code cells, but mostly pseudo-code to illustrate the interface.

Importing modules#

As in any notebook, we start by importing the necessary modules. In addition, we import the eWaterCycle modules.

# importing modules

import pandas as pd  # used later to handle model timestamps

import ewatercycle
import ewatercycle.models
import ewatercycle.forcing

Now that we have imported the eWaterCycle modules, we can use them to create a model run, which in its basic form usually looks something like this:

Choosing a region and getting the correct forcing data.#

This step is very specific to the model and the region. There are multiple ways to get forcing data; here we use the Caravan dataset. This dataset contains discharge observations as well as the shapefile of the region, which we can use to validate our model.

region = "camels_id"

start_date = "2000-11-01T00:00:00Z"
end_date = "2005-11-30T00:00:00Z"

Of course, we also need to set a start time and an end time for the experiment.
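The start and end times above are ISO 8601 strings with an explicit UTC marker (`Z`). As a quick sanity check, such strings can be parsed with the Python standard library; this is just an illustration and not part of the eWaterCycle API:

```python
from datetime import datetime

start_date = "2000-11-01T00:00:00Z"

# Python versions before 3.11 do not accept the trailing "Z",
# so replace it with an explicit "+00:00" offset before parsing.
start = datetime.fromisoformat(start_date.replace("Z", "+00:00"))

print(start.year)  # 2000
```

Parsing the timestamps like this catches typos in the dates before they are passed on to the forcing generation.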

Now we can get the forcing data.

camels_forcing = ewatercycle.forcing.sources['CaravanForcing'].generate(
    start_time=start_date,
    end_time=end_date,
    directory="A directory to store the forcing data",
    basin_id=region,
)

The different forcings we can generate are as follows:

  • Caravan forcing data

  • ERA5 forcing data

  • CMIP6 forcing data

More details about the forcing data can be found in the eWaterCycle documentation.

Model Run#

Now that we have the forcing data, we can create a model run. Models are objects and need to be set up and initialized. More details follow after this notebook, in the first model run notebook.

model = ewatercycle.models.HBV(forcing=camels_forcing)

In this case it is the HBV model, but it could be any model available in eWaterCycle. We also pass it the forcing data we just generated.

The model can be set up with the following parameters:

par_0 = "some parameters"
s_0 = "some initial conditions"
config_file, _ = model.setup(parameters=par_0, initial_storage=s_0)

This setup step is model dependent, and again this is mostly pseudo-code.

Now we can initialize the model. This will start a container (see the documentation for more info on containers) on the SURF supercomputer with the model and the forcing data.

model.initialize(config_file)

Now that the container is running, we can run the model. All communication with the model inside the container happens through grpc4bmi.

discharge_list = []
time_list = []
while model.time < model.end_time:
    model.update()
    discharge_list.append(model.get_value("Q")[0])
    time_list.append(pd.Timestamp(model.time_as_datetime))

model.finalize()

This is the model run. We collect the discharge values and the time values from the model, which can be used later for validation or analysis.
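For example, the two lists can be combined into a pandas Series indexed by time, which makes later analysis straightforward. This is a generic pandas sketch, not eWaterCycle-specific; the values below are made-up stand-ins for real model output:

```python
import pandas as pd

# stand-in values; in practice these come from the run loop above
time_list = [
    pd.Timestamp("2000-11-01"),
    pd.Timestamp("2000-11-02"),
    pd.Timestamp("2000-11-03"),
]
discharge_list = [1.2, 3.4, 2.6]

# a time-indexed series of simulated discharge
discharge = pd.Series(discharge_list, index=time_list, name="Q")

print(discharge.mean())  # 2.4
```

A time-indexed Series can then be resampled, plotted, or compared against the Caravan discharge observations.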

We end the model run with a finalize call. This stops the container and cleans up its resources. If this is not done, the container keeps running and consuming resources; it will eventually stop automatically, but it is better to do so explicitly.