# climix issues

Issue export from https://git.smhi.se/climix/climix/-/issues, last updated 2023-09-13T13:00:55Z.

## Issue #322: Numba 0.57.0 causes error for masked array

https://git.smhi.se/climix/climix/-/issues/322 · Joakim Löw · updated 2023-09-13

When running climix, numba throws an error for some cases (see #321). I suggest setting the numba version to `- numba<0.57` in the `environment.yml` for the next release:
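The suggested pin would sit in the dependency list of `environment.yml` roughly as follows; the neighbouring entries here are illustrative assumptions, not the actual file:

```yaml
dependencies:
  # ...other dependencies (illustrative)...
  - iris
  - dask
  # keep numba below 0.57 until the masked-array regression is resolved
  - numba<0.57
```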
```
2023-06-12 10:42:39,248 - distributed.worker - WARNING - Compute Failed
Key: ('chunk-2ad0dae06c51d593e75de7dbfe6cb672', 0, 0, 0)
Function: subgraph_callable-d585f943-ba41-4885-8faa-eb2cba41
args: (masked_array(
data=[[[--, --, --, ..., 278.2850036621094, 277.3859558105469,
276.8891296386719],
[--, 279.6990661621094, 279.4710998535156, ...,
278.3681335449219, 277.4391784667969, 277.0628967285156],
[--, --, --, ..., 277.3985900878906, 276.5505065917969,
276.6014709472656],
...,
[--, --, --, ..., 265.25921630859375, 265.71435546875,
265.8021240234375],
[--, --, --, ..., 264.7350158691406, 265.0195007324219,
265.303955078125],
[--, --, --, ..., 262.71636962890625, 263.4328308105469,
264.1492614746094]],
[[--, --, --, ..., 276.0057067871094, 275.7838439941406,
275.8836364746094],
[--, 277.0145568847656, 275.7981262207031, ...,
274.5687561035156, 274.2306213378906, 274.3453674316406],
[--, --, --, ..., 273.7345275878906, 273.2975158691406,
273.5181579589844],
...,
[--, --, --, ..., 262.7914733886
kwargs: {}
Exception: "NumbaTypeError('\\x1b[1mUnsupported array type: numpy.ma.MaskedArray.\\x1b[0m')"
```

Milestone: 0.19 (Poco Mas) · Assignee: Carolina Nilsson

## Issue #316: "Requested dask.distributed scheduler but no Client active." RuntimeError for larger computations

https://git.smhi.se/climix/climix/-/issues/316 · Carolina Nilsson · updated 2023-09-08

1. Installing a new environment: `mamba create -n myenv climix`
2. Activating the env and running: `climix -e -x tn10p /nobackup/rossby27/users/sm_carni/data/tmp/data_files/tasmin_EUR-11_MPI-M-MPI-ESM-LR_rcp85_r2i1p1_MPI-CSC-REMO2009_v1_day_20060101-20101231.nc /nobackup/rossby27/users/sm_carni/data/tmp/data_files/tasmin_EUR-11_MPI-M-MPI-ESM-LR_rcp85_r2i1p1_MPI-CSC-REMO2009_v1_day_20110101-20151231.nc -r 2007/2009`
Returns the following RuntimeError and saves no result:
```
101637ms:main.py:main() INFO:root:Calculation took 94.1128 seconds.
2023-05-08 12:44:25,748 - distributed.worker - ERROR - failed during get data with tcp://127.0.0.1:33317 -> tcp://127.0.0.1:34451
Traceback (most recent call last):
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 225, in read
frames_nbytes = await stream.read_bytes(fmt_size)
tornado.iostream.StreamClosedError: Stream is closed
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/worker.py", line 1787, in get_data
response = await comm.read(deserializers=serializers)
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 241, in read
convert_stream_closed_error(self, e)
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 144, in convert_stream_closed_error
raise CommClosedError(f"in {obj}: {exc}") from exc
distributed.comm.core.CommClosedError: in <TCP (closed) local=tcp://127.0.0.1:33317 remote=tcp://127.0.0.1:45308>: Stream is closed
[... the same StreamClosedError / CommClosedError traceback repeats for the remaining worker connections ...]
Traceback (most recent call last):
File "/home/sm_carni/.conda/envs/climix-conda/bin/climix", line 10, in <module>
sys.exit(main())
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/climix/main.py", line 353, in main
do_main(
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/climix/main.py", line 325, in do_main
save(
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/climix/datahandling.py", line 371, in save
result.data = r.result()
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/client.py", line 317, in result
raise exc.with_traceback(tb)
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/dask/optimization.py", line 990, in __call__
return core.get(self.dsk, self.outkey, dict(zip(self.inkeys, args)))
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/dask/core.py", line 149, in get
result = _execute_task(task, cache)
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/dask/core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/dask/utils.py", line 73, in apply
return func(*args, **kwargs)
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/dask/array/chunk.py", line 225, in argtopk
if abs(k) >= a.shape[axis]:
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/dask/array/core.py", line 1868, in __bool__
return bool(self.compute())
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/dask/base.py", line 314, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/dask/base.py", line 587, in compute
schedule = get_scheduler(
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/dask/base.py", line 1400, in get_scheduler
return get_scheduler(scheduler=config.get("scheduler", None))
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/dask/base.py", line 1375, in get_scheduler
raise RuntimeError(
RuntimeError: Requested dask.distributed scheduler but no Client active.
```
3. Running another smaller index: `climix -e -x tn /nobackup/rossby27/users/sm_carni/data/tmp/data_files/tasmin_EUR-11_MPI-M-MPI-ESM-LR_rcp85_r2i1p1_MPI-CSC-REMO2009_v1_day_20060101-20101231.nc /nobackup/rossby27/users/sm_carni/data/tmp/data_files/tasmin_EUR-11_MPI-M-MPI-ESM-LR_rcp85_r2i1p1_MPI-CSC-REMO2009_v1_day_20110101-20151231.nc`
-----> Returns no error.
4. Downgrading dask with `mamba install dask==2023.4.0` solves this error, but results in another error when running a simpler index. Running `climix -e -x txx /home/rossby/data_lib/esgf/cordex/output/EUR-11/SMHI/NCC-NorESM1-M/rcp85/r1i1p1/RCA4/v1/day/tasmax/latest/tasmax_EUR-11_NCC-NorESM1-M_rcp85_r1i1p1_SMHI-RCA4_v1_day_20060101-20101231.nc /home/rossby/data_lib/esgf/cordex/output/EUR-11/SMHI/NCC-NorESM1-M/rcp85/r1i1p1/RCA4/v1/day/tasmax/latest/tasmax_EUR-11_NCC-NorESM1-M_rcp85_r1i1p1_SMHI-RCA4_v1_day_20110101-20151231.nc` gives:
```
INFO:distributed.scheduler:Lost all workers
INFO:distributed.batched:Batched Comm Closed <TCP (closed) Scheduler connection to worker local=tcp://127.0.0.1:36766 remote=tcp://127.0.0.1:40786>
Traceback (most recent call last):
File "/home/sm_carni/.conda/envs/climix-latest/lib/python3.10/site-packages/distributed/batched.py", line 115, in _background_send
nbytes = yield coro
File "/home/sm_carni/.conda/envs/climix-latest/lib/python3.10/site-packages/tornado/gen.py", line 767, in run
value = future.result()
File "/home/sm_carni/.conda/envs/climix-latest/lib/python3.10/site-packages/distributed/comm/tcp.py", line 269, in write
raise CommClosedError()
distributed.comm.core.CommClosedError
INFO:distributed.batched:Batched Comm Closed <TCP (closed) Scheduler connection to worker local=tcp://127.0.0.1:36766 remote=tcp://127.0.0.1:40776>
Traceback (most recent call last):
File "/home/sm_carni/.conda/envs/climix-latest/lib/python3.10/site-packages/distributed/batched.py", line 115, in _background_send
nbytes = yield coro
File "/home/sm_carni/.conda/envs/climix-latest/lib/python3.10/site-packages/tornado/gen.py", line 767, in run
value = future.result()
File "/home/sm_carni/.conda/envs/climix-latest/lib/python3.10/site-packages/distributed/comm/tcp.py", line 269, in write
raise CommClosedError()
distributed.comm.core.CommClosedError
```

Milestone: 0.19 (Poco Mas) · Assignee: Klaus Zimmermann

## Issue #313: cube_diffs table output is not working with pandas 2.0

https://git.smhi.se/climix/climix/-/issues/313 · Joakim Löw · updated 2023-05-15

With `pandas 2.0` the following error is raised when two datasets can not be combined:
```
Traceback (most recent call last):
File "/home/sm_joalo/.conda/envs/climix-test/bin/climix", line 8, in <module>
sys.exit(main())
File "/home/sm_joalo/dev/repos/climix/climix/main.py", line 353, in main
do_main(
File "/home/sm_joalo/dev/repos/climix/climix/main.py", line 316, in do_main
input_data = prepare_input_data(datafiles, climix_config)
File "/home/sm_joalo/dev/repos/climix/climix/datahandling.py", line 266, in prepare_input_data
find_cube_differences(
File "/home/sm_joalo/dev/repos/climix/climix/util/cube_diffs.py", line 330, in find_cube_differences
print_dataframe(dataframe, var_name)
File "/home/sm_joalo/dev/repos/climix/climix/util/cube_diffs.py", line 296, in print_dataframe
with pd.option_context('display.max_colwidth', MAX_COL_WIDTH,
File "/home/sm_joalo/.conda/envs/climix-test/lib/python3.10/site-packages/pandas/_config/config.py", line 441, in __enter__
self.undo = [(pat, _get_option(pat, silent=True)) for pat, val in self.ops]
File "/home/sm_joalo/.conda/envs/climix-test/lib/python3.10/site-packages/pandas/_config/config.py", line 441, in <listcomp>
self.undo = [(pat, _get_option(pat, silent=True)) for pat, val in self.ops]
File "/home/sm_joalo/.conda/envs/climix-test/lib/python3.10/site-packages/pandas/_config/config.py", line 135, in _get_option
key = _get_single_key(pat, silent)
File "/home/sm_joalo/.conda/envs/climix-test/lib/python3.10/site-packages/pandas/_config/config.py", line 121, in _get_single_key
raise OptionError(f"No such keys(s): {repr(pat)}")
pandas._config.config.OptionError: No such keys(s): 'display.column_space'
```
Expected output would be the table of cube differences.
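The failing call can be reproduced outside climix; a minimal sketch (the error branch assumes `pandas` >= 2.0, where the option no longer exists):

```python
import pandas as pd

df = pd.DataFrame({"name": ["time", "height"], "diff": ["units", "points"]})

# Requesting the removed "display.column_space" option raises OptionError on
# pandas >= 2.0; on pandas 1.x, where the option still exists, the block runs.
try:
    with pd.option_context("display.column_space", 12):
        print(df.to_string())
except pd.errors.OptionError as exc:
    print(f"OptionError: {exc}")

# Keeping only options that still exist works on both major versions.
with pd.option_context("display.max_colwidth", 80):
    print(df.to_string())
```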
It seems the `display.column_space` option, used in `print_dataframe(...)`, has been deprecated and removed in `pandas 2.0` (I believe we have used `pandas 1.5.x` before).

Milestone: 0.19 (Poco Mas) · Assignee: Carolina Nilsson

## Issue #312: Binary dist is missing etc directory and data files

https://git.smhi.se/climix/climix/-/issues/312 · Klaus Zimmermann · updated 2023-04-25

In the move to pyproject.toml, we lost the crucial data files from the binary distribution.
While this does not affect development installs, it does affect wheels and other copying installations.
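One hedged way to restore the files would be a `package-data` entry for setuptools in `pyproject.toml`; the package name and glob below are assumptions, not the actual climix layout:

```toml
[tool.setuptools]
include-package-data = true

[tool.setuptools.package-data]
# hypothetical location of the metadata files missing from the wheel
climix = ["etc/*.yml"]
```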
Assignee: Klaus Zimmermann

## Issue #310: Percentile and ThresholdedPercentile index functions do not pass all unit tests

https://git.smhi.se/climix/climix/-/issues/310 · Carolina Nilsson · updated 2023-06-19

The Percentile and ThresholdedPercentile index functions do not pass all unit tests.
1. The percentile call_function does not run since the aux coordinate for the percentile is passed as an input to the numpy function and not the point value of the aux coordinate.
2. The numpy percentile function does not take the mask into consideration when computing the percentiles. A workaround is to set the thresholded values to NaN and then use `np.nanpercentile` to exclude the masked values from the computation.

Milestone: 0.19 (Poco Mas) · Assignee: Carolina Nilsson

## Issue #307: RunningStatistics and ThresholdedRunningStatistics do not pass all unit tests

https://git.smhi.se/climix/climix/-/issues/307 · Carolina Nilsson · updated 2023-07-03

The RunningStatistics and ThresholdedRunningStatistics index functions do not pass all the unit tests. Some tests fail because the mask is not preserved in the process, e.g. when using `np.concatenate` and `np.where`. Other tests fail if a statistic other than "max" is used, and some fail because of the padding with zeros at the start and end, which can give lower aggregated values.

Milestone: 0.19 (Poco Mas) · Assignee: Carolina Nilsson

## Issue #305: CountJointOccurrences index functions do not pass all unit tests

https://git.smhi.se/climix/climix/-/issues/305 · Carolina Nilsson · updated 2023-05-24

When running count_joint_occurrences_precipitation_temperature where one of the inputs has masked data, the result will return the other condition as True or False.
Here the mask needs to be preserved: if a grid cell contains masked data, then the output grid cell should probably be masked as well.

Milestone: 0.19 (Poco Mas) · Assignee: Carolina Nilsson

## Issue #265: ThresholdedRunningStatistics lazy_func calls call_func of RunningStatistics

https://git.smhi.se/climix/climix/-/issues/265 · Joakim Löw · updated 2022-12-07

The `lazy_func` implementation in `ThresholdedRunningStatistics` calls `call_func` of the inherited `RunningStatistics`. It should probably call `RunningStatistics.lazy_func` instead. Change this and add a manual test to confirm.

Milestone: 0.16 · Assignee: Joakim Löw

## Issue #260: Rx2day, Rx5day resulting in some infinite values

https://git.smhi.se/climix/climix/-/issues/260 · Erik Holmgren · updated 2023-01-31

Two extreme precipitation indices, `rx2day` and `rx5day`, produce some infinite values on e.g. the GridClim and PTHBV datasets. For both these datasets there are no issues with `rx1day`.
Example (on bi):
```python
import iris
import numpy as np
from climix.metadata import load_metadata
from dask.distributed import Client
client = Client()
fname = "/nobackup/rossby26/users/sm_erhol/extremeEventAttribution/gavle2021/pr_gavle2021_SMHIGridClim_day_19610101-20181230.nc"
# This will be a masked array
cube = iris.load_cube(fname)
index_catalogue = load_metadata()
index = index_catalogue.prepare_indices(["rx2day"])[0]
index_cube = index([cube], client)
assert not np.any(np.isinf(index_cube.data))
```
When realising the `index_cube`, some overflow warnings are thrown. If the spatial dimensions of the cube are collapsed (average, max), no infinite values are produced.

Milestone: 0.16 · Assignee: Klaus Zimmermann

## Issue #253: Newer version of Iris lazy-loads coord data, causes crash in climix during save

https://git.smhi.se/climix/climix/-/issues/253 · Joakim Löw · updated 2022-09-14

In a newer version of Iris, the coord data of a cube is lazy-loaded. Therefore, when saving output in climix, the data may not exist in memory yet, which causes a crash. Solution: touch the coord data before saving.

Assignee: Joakim Löw

## Issue #243: Seasonal periods are broken

https://git.smhi.se/climix/climix/-/issues/243 · Klaus Zimmermann · updated 2021-07-28

At the moment, seasonal periods can only be calculated if a single season is specified with, e.g., `seasonal: JASOND` in the index definition.
Instead, it should be possible to calculate seasonal indices for the standard seasons without any specification.
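For reference, the single-season form that does currently work looks roughly like this inside an index definition; this fragment is sketched from the `seasonal: JASOND` example above, and the surrounding keys are assumptions:

```yaml
period:
  allowed:
    # currently a single, explicitly spelled-out season is required
    seasonal: JASOND
  default: seasonal
```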
This issue is more narrow than #227 and only intends to get the current, limited functionality working again, prior to a more comprehensive overhaul.

Milestone: 0.14 · Assignee: Klaus Zimmermann

## Issue #242: Index function SpellLength is broken

https://git.smhi.se/climix/climix/-/issues/242 · Klaus Zimmermann · updated 2021-07-27

The issue is a technical one: the computation of the `meta` information for a dask routine is not working due to an incorrect argument to `np.array`.
Fortunately, the fix is straightforward.

Milestone: 0.14 · Assignee: Klaus Zimmermann

## Issue #230: Remove empty/non-existing directories from input file list

https://git.smhi.se/climix/climix/-/issues/230 · Lars Bärring · updated 2021-07-06

If brace expansion is used for creating the input file list, it may happen that empty or non-existing directories are included in the list of filenames. Such directories/filenames, which have to be removed, will include the globbing character "*".

Assignee: Lars Bärring

## Issue #226: wetdays djf error on the maximum number of days

https://git.smhi.se/climix/climix/-/issues/226 · Ramon Fuentes Franco · updated 2021-07-22

When using a modified version of wetdays for djf, the maximum number of days that comes out in the output file is 30. I suspect it is only summing over one month, and not over the entire djf season.
See below the modified version I used:
```yaml
wetdays_djf:
reference: CLIPC
period:
allowed:
annual:
seasonal: 'djf'
monthly:
default: seasonal
output:
var_name: "wetdays"
standard_name: number_of_days_with_lwe_thickness_of_precipitation_amount_above_threshold
proposed_standard_name: number_of_occurrences_with_lwe_thickness_of_precipitation_amount_at_or_above_threshold
long_name: "Number of Wet Days (precip >= 1 mm)"
units: "days"
cell_methods:
- time: sum within days
- time: sum over days
input:
data: pr
index_function:
name: count_occurrences
parameters:
threshold:
kind: quantity
standard_name: lwe_precipitation_rate
long_name: "Wet day threshold"
data: 1
units: "mm day-1"
condition:
kind: operator
operator: ">"
ET:
short_name:
long_name:
definition:
comment:
```

Milestone: 0.14

## Issue #224: CountPercentileOccurrences doesn't respect the condition operator

https://git.smhi.se/climix/climix/-/issues/224 · Klaus Zimmermann · updated 2021-05-04

One of the issues that surfaced in #205 is that `CountPercentileOccurrences` doesn't respect the condition operator, i.e. it always checks that the data is less than the threshold, regardless of the operator given in the index definition. This should be corrected.

Milestone: 0.13.2 · Assignee: Klaus Zimmermann

## Issue #220: Pin iris also in environment.yml

https://git.smhi.se/climix/climix/-/issues/220 · Klaus Zimmermann · updated 2021-04-14

The pinning of iris (see #218) should also happen in the `environment.yml` file.

Milestone: 0.13.2 · Assignee: Klaus Zimmermann

## Issue #218: Pin iris version

https://git.smhi.se/climix/climix/-/issues/218 · Klaus Zimmermann · updated 2021-04-14

The current code is not compatible with iris 3, so we should pin to `iris<3` to guarantee a working code base.

Milestone: 0.13.2 · Assignee: Klaus Zimmermann

## Issue #215: Climix does not gracefully handle single node / single core

https://git.smhi.se/climix/climix/-/issues/215 · Lars Bärring · updated 2022-09-15

Climix crashes when running on a shared node with `interactive -N 1 -n 1`:
```
>climix -s -e -x txx /home/rossby/prod/201137/netcdf/day/tasmax_EUR-11_ICHEC-EC-EARTH_historical_r12i1p1_SMHI-RCA4_v1_day_*.nc
INFO:root:Loading metadata
Traceback (most recent call last):
File "/home/sm_lbarr/.conda/envs/climix-devel-3/bin/climix", line 11, in <module>
load_entry_point('climix', 'console_scripts', 'climix')()
File "/home/sm_lbarr/CODE/climix/climix/main.py", line 146, in main
with setup_scheduler(args) as scheduler:
File "/home/sm_lbarr/CODE/climix/climix/dask_setup.py", line 160, in setup_scheduler
return scheduler(**scheduler_kwargs)
File "/home/sm_lbarr/CODE/climix/climix/dask_setup.py", line 68, in __init__
memory_limit = (system.MEMORY_LIMIT*.9) / n_workers
ZeroDivisionError: float division by zero
```
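The crash is a plain division by `n_workers == 0`; a sketch of a guard that fails with a readable message instead (the function name and message are hypothetical, only the formula follows the traceback above):

```python
def compute_memory_limit(total_memory: float, n_workers: int) -> float:
    """Share 90% of the system memory between workers, as in the traceback."""
    if n_workers < 1:
        # fail early with a clear message instead of ZeroDivisionError
        raise ValueError(
            f"climix needs at least one dask worker, got n_workers={n_workers}; "
            "check the cores/nodes requested from the scheduler"
        )
    return (total_memory * 0.9) / n_workers

print(compute_memory_limit(16e9, 4))  # 3.6e9 bytes per worker
```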
Preferably it should instead give a graceful error message.

Milestone: 0.16 · Assignee: Joakim Löw

## Issue #211: Fix output units for index function count_occurrences

https://git.smhi.se/climix/climix/-/issues/211 · Lars Bärring · updated 2022-11-16

It seems that the index function `count_occurrences` expects output to have units `day`. This is, however, the `proposed_unit`, which is not consistent with the current canonical unit `1` as defined by the `standard names`. This has probably remained under the radar because I was until recently using a really old version of `index_definition.yml`. If simple to fix, then target bugfix milestone 13.2; else postpone it to a later milestone.

Milestone: 0.15 · Assignee: Joakim Löw

## Issue #210: Change type of _FillValue

https://git.smhi.se/climix/climix/-/issues/210 · Lars Bärring · updated 2021-02-16

The `cfchecker` consistently reports the following message:
```
CHECKING NetCDF FILE: cdd_NORDIC-3_SMHI-UERRA-Harmonie_RegRean_v1_Gridpp_v0.9_day_yr_19601231-20181230.nc
=====================
Using CF Checker Version 4.0.0
Checking against CF Version CF-1.7
Using Standard Name Table Version 77 (2021-01-19T13:38:50Z)
Using Area Type Table Version 10 (23 June 2020)
Using Standardized Region Name Table Version 4 (18 December 2018)
------------------
Checking variable: cdd
------------------
INFO: Invalid Type for attribute: _FillValue <class 'numpy.float32'>
```
While this is not reported as an `ERROR`, it seems like a simple thing to change this to a normal `float32`.

Milestone: 0.13.2 · Assignee: Klaus Zimmermann
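For illustration, the type distinction the checker is pointing at; this is a hedged sketch, and whether a plain Python float satisfies cfchecker is an assumption based on the message above:

```python
import numpy as np

fill = np.float32(1.0e20)     # the attribute type cfchecker flags
print(type(fill).__name__)    # float32

plain = float(fill)           # a plain Python float of the same value
print(type(plain).__name__)   # float

assert plain == fill          # the value itself is unchanged
```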