climix issues
https://git.smhi.se/climix/climix/-/issues (updated 2024-02-20)

https://git.smhi.se/climix/climix/-/issues/334
Implement connection to Gordias (Carolina Nilsson, 2024-02-20)
When Gordias becomes available, we need to implement it in climix and remove some unnecessary files.
Milestone: 0.21. Assignee: Carolina Nilsson.

https://git.smhi.se/climix/climix/-/issues/333
Document release process (Klaus Zimmermann, 2024-03-07)
At the moment, the release process is rather manual and undocumented. We should have a checklist that is easy to follow, probably as part of the documentation, possibly using the issue/merge request system.
Milestone: 0.21. Assignee: Joakim Löw.

https://git.smhi.se/climix/climix/-/issues/332
Update changelog for release 0.19.0 (Klaus Zimmermann, 2023-09-14)
Milestone: 0.19 (Poco Mas). Assignee: Klaus Zimmermann.

https://git.smhi.se/climix/climix/-/issues/331
Update index documentation (Klaus Zimmermann, 2023-09-14)
In !237, we updated the climate index definitions to clix-meta-0.6.0, but we still need to bring the documentation up to speed.
Milestone: 0.19 (Poco Mas). Assignee: Klaus Zimmermann.

https://git.smhi.se/climix/climix/-/issues/330
Blackify code base (Klaus Zimmermann, 2023-09-14)
Most of our code already follows Black standards thanks to pre-commit et al. However, over time a few deviations have crept in. To make future changes easier, we should do a one-shot blackification of the existing code base.
Milestone: 0.19 (Poco Mas). Assignee: Klaus Zimmermann.

https://git.smhi.se/climix/climix/-/issues/329
Index: Accumulated precip with Temperature below 0 (Joakim Löw, 2024-02-02)
Similar to index function: `TemperatureSum` (index: hd17)
This issue replaces #296.
Note: check if it's ok to add the index definition in `SMHI_extra.yml`.
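The intended computation can be sketched along the lines of `TemperatureSum`: accumulate precipitation only over days whose temperature lies below the threshold. Below is a minimal NumPy sketch of the point-wise reduction; the function name, argument layout, and units (mm/day, °C) are assumptions for illustration, not climix's actual index-function API.

```python
import numpy as np

def precip_sum_below_zero(pr, tas, threshold=0.0):
    """Accumulate precipitation (pr, mm/day) over the days where
    temperature (tas, degC) is below `threshold` (default 0 degC)."""
    pr = np.asarray(pr, dtype=float)
    tas = np.asarray(tas, dtype=float)
    # Precipitation on days at or above the threshold contributes nothing.
    return np.where(tas < threshold, pr, 0.0).sum(axis=0)

# Five days at a single point: only the four sub-zero days count.
pr = np.array([1.0, 2.0, 0.5, 3.0, 0.0])
tas = np.array([-1.0, 0.5, -3.0, -0.2, -5.0])
print(precip_sum_below_zero(pr, tas))  # 1.0 + 0.5 + 3.0 + 0.0 = 4.5
```

A real climix index function would additionally operate lazily on dask arrays and handle masked data; the sketch only shows the reduction itself.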
Assignee: Renate Wilcke.

https://git.smhi.se/climix/climix/-/issues/328
Index: Number of days (Temperature between two values (-2, 2) and precipitation above 0.1 mm/d) (Joakim Löw, 2023-11-05)
Similar to the index functions `CountJointOccurrencesPrecipitationTemperature` and `CountJointOccurrencesTemperature`, which inherit from `CountJointOccurrences`.
Replaces #296.
Note: check if it's ok to add the index definition in `SMHI_extra.yml`.
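The counting itself can be sketched with a joint boolean mask, in the spirit of `CountJointOccurrences`; the function name, the strict-inequality convention for the temperature bounds, and the units are assumptions for illustration, not the actual climix implementation.

```python
import numpy as np

def count_temp_range_and_precip_days(tas, pr, t_low=-2.0, t_high=2.0,
                                     pr_threshold=0.1):
    """Count days where tas (degC) lies strictly between t_low and t_high
    AND pr (mm/day) exceeds pr_threshold."""
    tas = np.asarray(tas, dtype=float)
    pr = np.asarray(pr, dtype=float)
    # Joint occurrence: both conditions must hold on the same day.
    joint = (tas > t_low) & (tas < t_high) & (pr > pr_threshold)
    return joint.sum(axis=0)

# Five days at a single point: days 1 and 5 satisfy both conditions.
tas = np.array([-1.5, 0.0, 3.0, -2.5, 1.0])
pr = np.array([0.2, 0.05, 1.0, 0.5, 0.3])
print(count_temp_range_and_precip_days(tas, pr))  # 2
```

Whether the bounds are open or closed intervals would need to be settled in the actual index definition.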
Milestone: 0.20 (Urbane Goat). Assignee: Renate Wilcke.

https://git.smhi.se/climix/climix/-/issues/327
Investigate the behaviour of FirstOccurrence and LastOccurrence (Carolina Nilsson, 2023-09-11)
The expected output of the index functions FirstOccurrence and LastOccurrence needs to be further investigated. FirstOccurrence's call_func returns 0 for the first day, which does not match the output from LastOccurrence's call_func, which returns 1 for the first day. The output is then post-processed, which may change the end result. Therefore, further investigation of the final output is needed to determine whether both functions are working as expected.

https://git.smhi.se/climix/climix/-/issues/326
The call function does not work for spell function: Spell_Length (Carolina Nilsson, 2023-06-18)
The call function for the spell function spell_length does not work and should probably be either removed or fixed.

https://git.smhi.se/climix/climix/-/issues/325
Error when integration tests are generated (Joakim Löw, 2023-11-05)
Integration tests currently cannot be run. Pytest returns the following error when run from the command line:
```
====================================================================== test session starts ======================================================================
platform linux -- Python 3.10.11, pytest-7.3.2, pluggy-1.0.0
rootdir: /home/sm_joalo/dev/repos/climix
configfile: pyproject.toml
testpaths: tests
collected 213 items / 1 error
============================================================================ ERRORS =============================================================================
______________________________________________________ ERROR collecting tests/integration/test_indices.py _______________________________________________________
tests/integration/test_indices.py:64: in <module>
generate_test_index_parametrization(),
tests/integration/test_indices.py:45: in generate_test_index_parametrization
config = read_test_configuration()
tests/integration/conftest.py:14: in read_test_configuration
config_string = files("tests.integration").joinpath("configuration.yml").read_text()
../../../.conda/envs/climix/lib/python3.10/importlib/_common.py:22: in files
return from_package(get_package(package))
../../../.conda/envs/climix/lib/python3.10/importlib/_common.py:67: in get_package
if wrap_spec(resolved).submodule_search_locations is None:
../../../.conda/envs/climix/lib/python3.10/importlib/_adapters.py:16: in __getattr__
return getattr(self.spec, name)
E AttributeError: 'NoneType' object has no attribute 'submodule_search_locations'
==================================================================== short test summary info ====================================================================
ERROR tests/integration/test_indices.py - AttributeError: 'NoneType' object has no attribute 'submodule_search_locations'
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
======================================================================= 1 error in 24.99s =======================================================================
```
Milestone: 0.20 (Urbane Goat). Assignee: Joakim Löw.

https://git.smhi.se/climix/climix/-/issues/324
Issue when running climix API - dask issue? (Renate Wilcke, 2023-06-15)
When I run my little example script, I get the following error, which repeats a lot until I cancel (Ctrl-C).
Example script:
/home/sm_renwi/Scripts/heatwavefuture/summerseason/seasonlength_paket/seasonlength/example_error_memoryview.py
/home/sm_renwi/Scripts/heatwavefuture/summerseason/seasonlength_paket/control_SLENS_seasonlength.yml
Error message in ipython when running `indexcube.data` after calculating indexcube:
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
Cell In[21], line 1
----> 1 indexcube.data
File ~/.conda/envs/climix_testconda/lib/python3.10/site-packages/iris/cube.py:2462, in Cube.data(self)
2429 @property
2430 def data(self):
2431 """
2432 The :class:`numpy.ndarray` representing the multi-dimensional data of
2433 the cube.
(...)
2460
2461 """
-> 2462 return self._data_manager.data
File ~/.conda/envs/climix_testconda/lib/python3.10/site-packages/iris/_data_manager.py:206, in DataManager.data(self)
203 if self.has_lazy_data():
204 try:
205 # Realise the lazy data.
--> 206 result = as_concrete_data(self._lazy_array)
207 # Assign the realised result.
208 self._real_array = result
File ~/.conda/envs/climix_testconda/lib/python3.10/site-packages/iris/_lazy_data.py:279, in as_concrete_data(data)
262 """
263 Return the actual content of a lazy array, as a numpy array.
264 If the input data is a NumPy `ndarray` or masked array, return it
(...)
276
277 """
278 if is_lazy_data(data):
--> 279 (data,) = _co_realise_lazy_arrays([data])
281 return data
File ~/.conda/envs/climix_testconda/lib/python3.10/site-packages/iris/_lazy_data.py:242, in _co_realise_lazy_arrays(arrays)
227 def _co_realise_lazy_arrays(arrays):
228 """
229 Compute multiple lazy arrays and return a list of real values.
230
(...)
240
241 """
--> 242 computed_arrays = da.compute(*arrays)
243 results = []
244 for lazy_in, real_out in zip(arrays, computed_arrays):
245 # Ensure we always have arrays.
246 # Note : in some cases dask (and numpy) will return a scalar
247 # numpy.int/numpy.float object rather than an ndarray.
248 # Recorded in https://github.com/dask/dask/issues/2111.
File ~/.conda/envs/climix_testconda/lib/python3.10/site-packages/dask/base.py:600, in compute(traverse, optimize_graph, scheduler, get, *args, **kwargs)
597 postcomputes.append(x.__dask_postcompute__())
599 results = schedule(dsk, keys, **kwargs)
--> 600 return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])
File ~/.conda/envs/climix_testconda/lib/python3.10/site-packages/dask/base.py:600, in <listcomp>(.0)
597 postcomputes.append(x.__dask_postcompute__())
599 results = schedule(dsk, keys, **kwargs)
--> 600 return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])
File ~/.conda/envs/climix_testconda/lib/python3.10/site-packages/dask/array/core.py:1283, in finalize(results)
1281 while isinstance(results2, (tuple, list)):
1282 if len(results2) > 1:
-> 1283 return concatenate3(results)
1284 else:
1285 results2 = results2[0]
File ~/.conda/envs/climix_testconda/lib/python3.10/site-packages/dask/array/core.py:5300, in concatenate3(arrays)
5298 if not ndim:
5299 return arrays
-> 5300 chunks = chunks_from_arrays(arrays)
5301 shape = tuple(map(sum, chunks))
5303 def dtype(x):
File ~/.conda/envs/climix_testconda/lib/python3.10/site-packages/dask/array/core.py:5087, in chunks_from_arrays(arrays)
5084 return (1,)
5086 while isinstance(arrays, (list, tuple)):
-> 5087 result.append(tuple(shape(deepfirst(a))[dim] for a in arrays))
5088 arrays = arrays[0]
5089 dim += 1
File ~/.conda/envs/climix_testconda/lib/python3.10/site-packages/dask/array/core.py:5087, in <genexpr>(.0)
5084 return (1,)
5086 while isinstance(arrays, (list, tuple)):
-> 5087 result.append(tuple(shape(deepfirst(a))[dim] for a in arrays))
5088 arrays = arrays[0]
5089 dim += 1
IndexError: tuple index out of range
```
Error message in terminal:
```
/home/sm_renwi/.conda/envs/climix_testconda/lib/python3.10/site-packages/distributed/node.py:182: UserWarning: Port 8787 is already in use.
Perhaps you already have a cluster running?
Hosting the HTTP server on port 43663 instead
warnings.warn(
/home/sm_renwi/.conda/envs/climix_testconda/lib/python3.10/site-packages/distributed/node.py:182: UserWarning: Port 8787 is already in use.
Perhaps you already have a cluster running?
Hosting the HTTP server on port 43577 instead
warnings.warn(
2023-06-15 10:53:40,552 - distributed.nanny - ERROR - Failed to start process
Traceback (most recent call last):
File "/home/sm_renwi/.conda/envs/climix_testconda/lib/python3.10/site-packages/distributed/nanny.py", line 443, in instantiate
result = await self.process.start()
File "/home/sm_renwi/.conda/envs/climix_testconda/lib/python3.10/site-packages/distributed/nanny.py", line 713, in start
await self.process.start()
File "/home/sm_renwi/.conda/envs/climix_testconda/lib/python3.10/site-packages/distributed/process.py", line 55, in _call_and_set_future
res = func(*args, **kwargs)
File "/home/sm_renwi/.conda/envs/climix_testconda/lib/python3.10/site-packages/distributed/process.py", line 215, in _start
process.start()
File "/home/sm_renwi/.conda/envs/climix_testconda/lib/python3.10/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/home/sm_renwi/.conda/envs/climix_testconda/lib/python3.10/multiprocessing/context.py", line 288, in _Popen
return Popen(process_obj)
File "/home/sm_renwi/.conda/envs/climix_testconda/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/home/sm_renwi/.conda/envs/climix_testconda/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/home/sm_renwi/.conda/envs/climix_testconda/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 42, in _launch
prep_data = spawn.get_preparation_data(process_obj._name)
File "/home/sm_renwi/.conda/envs/climix_testconda/lib/python3.10/multiprocessing/spawn.py", line 154, in get_preparation_data
_check_not_importing_main()
File "/home/sm_renwi/.conda/envs/climix_testconda/lib/python3.10/multiprocessing/spawn.py", line 134, in _check_not_importing_main
raise RuntimeError('''
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
```

https://git.smhi.se/climix/climix/-/issues/323
Issue for tracking pinned dependencies (Lars Bärring, 2024-02-02)
The following list identifies issues where Climix's dependencies have been (or will require) pinning/limiting packages to specific versions. New pins should be added at the top of the list, so that the pin/limit is not forgotten once an individual issue is closed by merging a pull request. This issue should be reviewed and updated regularly, e.g. in connection with every milestone, and should never be closed. Once a pin is removed, because of updated packages or development of Climix, the corresponding entry should be stricken over (~~like this~~).
* ~~#322 _Numba 0.57.0 causes error for masked array_~~

https://git.smhi.se/climix/climix/-/issues/322
Numba 0.57.0 causes error for masked array (Joakim Löw, 2023-09-13)
When running climix, numba throws an error for some cases (see #321). I suggest setting the numba version to `- numba<0.57` in the `environment.yml` for the next release:
```
2023-06-12 10:42:39,248 - distributed.worker - WARNING - Compute Failed
Key: ('chunk-2ad0dae06c51d593e75de7dbfe6cb672', 0, 0, 0)
Function: subgraph_callable-d585f943-ba41-4885-8faa-eb2cba41
args: (masked_array(
data=[[[--, --, --, ..., 278.2850036621094, 277.3859558105469,
276.8891296386719],
[--, 279.6990661621094, 279.4710998535156, ...,
278.3681335449219, 277.4391784667969, 277.0628967285156],
[--, --, --, ..., 277.3985900878906, 276.5505065917969,
276.6014709472656],
...,
[--, --, --, ..., 265.25921630859375, 265.71435546875,
265.8021240234375],
[--, --, --, ..., 264.7350158691406, 265.0195007324219,
265.303955078125],
[--, --, --, ..., 262.71636962890625, 263.4328308105469,
264.1492614746094]],
[[--, --, --, ..., 276.0057067871094, 275.7838439941406,
275.8836364746094],
[--, 277.0145568847656, 275.7981262207031, ...,
274.5687561035156, 274.2306213378906, 274.3453674316406],
[--, --, --, ..., 273.7345275878906, 273.2975158691406,
273.5181579589844],
...,
[--, --, --, ..., 262.7914733886
kwargs: {}
Exception: "NumbaTypeError('\\x1b[1mUnsupported array type: numpy.ma.MaskedArray.\\x1b[0m')"
```
Milestone: 0.19 (Poco Mas). Assignee: Carolina Nilsson.

https://git.smhi.se/climix/climix/-/issues/321
dask.distributed error memoryview is too large (Renate Wilcke, 2023-06-12)
When running the latest version of climix (2023-06-08) as an API in ipython, I run into an error from dask.distributed. The index calculation seems to work anyway, though.
I copied the lines of code I used for the test run into a file, which can be found here:
/home/sm_renwi/Scripts/heatwavefuture/summerseason/seasonlength_paket/seasonlength/example_error_memoryview.py
It also needs this file:
/home/sm_renwi/Scripts/heatwavefuture/summerseason/seasonlength_paket/control_SLENS_seasonlength.yml
Part of the long Error message:
2023-06-08 15:09:17,677 - distributed.protocol.core - CRITICAL - Failed to Serialize
ValueError: memoryview is too large
CRITICAL:distributed.protocol.core:Failed to Serialize
Traceback (most recent call last):
File "/home/sm_renwi/.conda/envs/climix_seasonlength/lib/python3.11/site-packages/distributed/protocol/core.py", line 109, in dumps
frames[0] = msgpack.dumps(msg, default=_encode_default, use_bin_type=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

https://git.smhi.se/climix/climix/-/issues/320
Update (download) clix-meta yaml files (Lars Bärring, 2023-06-16)
Download updated versions of the clix-meta yaml files once [clix-meta #107](https://github.com/clix-meta/clix-meta/issues/107) is fixed.
Milestone: 0.19 (Poco Mas). Assignee: Klaus Zimmermann.

https://git.smhi.se/climix/climix/-/issues/319
Consistent period specification (Carolina Nilsson, 2023-05-16)
There are some differences in the period specification classes (seasonal, monthly, annual); e.g. first_month_number only exists for annual. This can cause problems when running different index functions that utilise these features.

https://git.smhi.se/climix/climix/-/issues/318
Missing value threshold in config (Erik Holmgren, 2023-05-11)
Add a config option to set the threshold for the allowed amount of missing data. This could, for example, allow the user to decide how many days in a running window can be missing for the calculation to still be valid.

https://git.smhi.se/climix/climix/-/issues/317
Quality flag: missing data in output (Erik Holmgren, 2024-02-02)
As discussed during the technical meeting.
Possibly add a flag which, when toggled, adds information about the amount of missing data to the output of Climix.

https://git.smhi.se/climix/climix/-/issues/316
"Requested dask.distributed scheduler but no Client active." RuntimeError for larger computations (Carolina Nilsson, 2023-09-08)
1. Installing a new environment: `mamba create -n myenv climix`
2. activating the env and running: `climix -e -x tn10p /nobackup/rossby27/users/sm_carni/data/tmp/data_files/tasmin_EUR-11_MPI-M-MPI-ESM-LR_rcp85_r2i1p1_MPI-CSC-REMO2009_v1_day_20060101-20101231.nc /nobackup/rossby27/users/sm_carni/data/tmp/data_files/tasmin_EUR-11_MPI-M-MPI-ESM-LR_rcp85_r2i1p1_MPI-CSC-REMO2009_v1_day_20110101-20151231.nc -r 2007/2009`
Returns the following RuntimeError and saves no result:
```
101637ms:main.py:main() INFO:root:Calculation took 94.1128 seconds.
2023-05-08 12:44:25,748 - distributed.worker - ERROR - failed during get data with tcp://127.0.0.1:33317 -> tcp://127.0.0.1:34451
Traceback (most recent call last):
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 225, in read
frames_nbytes = await stream.read_bytes(fmt_size)
tornado.iostream.StreamClosedError: Stream is closed
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/worker.py", line 1787, in get_data
response = await comm.read(deserializers=serializers)
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 241, in read
convert_stream_closed_error(self, e)
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 144, in convert_stream_closed_error
raise CommClosedError(f"in {obj}: {exc}") from exc
distributed.comm.core.CommClosedError: in <TCP (closed) local=tcp://127.0.0.1:33317 remote=tcp://127.0.0.1:45308>: Stream is closed
2023-05-08 12:44:25,749 - distributed.worker - ERROR - failed during get data with tcp://127.0.0.1:33317 -> tcp://127.0.0.1:46206
Traceback (most recent call last):
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 225, in read
frames_nbytes = await stream.read_bytes(fmt_size)
tornado.iostream.StreamClosedError: Stream is closed
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/worker.py", line 1787, in get_data
response = await comm.read(deserializers=serializers)
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 241, in read
convert_stream_closed_error(self, e)
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 144, in convert_stream_closed_error
raise CommClosedError(f"in {obj}: {exc}") from exc
distributed.comm.core.CommClosedError: in <TCP (closed) local=tcp://127.0.0.1:33317 remote=tcp://127.0.0.1:46116>: Stream is closed
2023-05-08 12:44:25,793 - distributed.worker - ERROR - failed during get data with tcp://127.0.0.1:36001 -> tcp://127.0.0.1:34451
Traceback (most recent call last):
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 225, in read
frames_nbytes = await stream.read_bytes(fmt_size)
tornado.iostream.StreamClosedError: Stream is closed
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/worker.py", line 1787, in get_data
response = await comm.read(deserializers=serializers)
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 241, in read
convert_stream_closed_error(self, e)
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 144, in convert_stream_closed_error
raise CommClosedError(f"in {obj}: {exc}") from exc
distributed.comm.core.CommClosedError: in <TCP (closed) local=tcp://127.0.0.1:36001 remote=tcp://127.0.0.1:33512>: Stream is closed
2023-05-08 12:44:25,795 - distributed.worker - ERROR - failed during get data with tcp://127.0.0.1:36001 -> tcp://127.0.0.1:46206
Traceback (most recent call last):
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 225, in read
frames_nbytes = await stream.read_bytes(fmt_size)
tornado.iostream.StreamClosedError: Stream is closed
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/worker.py", line 1787, in get_data
response = await comm.read(deserializers=serializers)
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 241, in read
convert_stream_closed_error(self, e)
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 144, in convert_stream_closed_error
raise CommClosedError(f"in {obj}: {exc}") from exc
distributed.comm.core.CommClosedError: in <TCP (closed) local=tcp://127.0.0.1:36001 remote=tcp://127.0.0.1:34318>: Stream is closed
2023-05-08 12:44:25,800 - distributed.worker - ERROR - failed during get data with tcp://127.0.0.1:44554 -> tcp://127.0.0.1:34451
Traceback (most recent call last):
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 225, in read
frames_nbytes = await stream.read_bytes(fmt_size)
tornado.iostream.StreamClosedError: Stream is closed
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/worker.py", line 1787, in get_data
response = await comm.read(deserializers=serializers)
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 241, in read
convert_stream_closed_error(self, e)
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 144, in convert_stream_closed_error
raise CommClosedError(f"in {obj}: {exc}") from exc
distributed.comm.core.CommClosedError: in <TCP (closed) local=tcp://127.0.0.1:44554 remote=tcp://127.0.0.1:39332>: Stream is closed
2023-05-08 12:44:25,801 - distributed.worker - ERROR - failed during get data with tcp://127.0.0.1:44554 -> tcp://127.0.0.1:46206
Traceback (most recent call last):
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 225, in read
frames_nbytes = await stream.read_bytes(fmt_size)
tornado.iostream.StreamClosedError: Stream is closed
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/worker.py", line 1787, in get_data
response = await comm.read(deserializers=serializers)
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 241, in read
convert_stream_closed_error(self, e)
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 144, in convert_stream_closed_error
raise CommClosedError(f"in {obj}: {exc}") from exc
distributed.comm.core.CommClosedError: in <TCP (closed) local=tcp://127.0.0.1:44554 remote=tcp://127.0.0.1:40084>: Stream is closed
2023-05-08 12:44:25,832 - distributed.worker - ERROR - failed during get data with tcp://127.0.0.1:36220 -> tcp://127.0.0.1:34451
Traceback (most recent call last):
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 225, in read
frames_nbytes = await stream.read_bytes(fmt_size)
tornado.iostream.StreamClosedError: Stream is closed
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/worker.py", line 1787, in get_data
response = await comm.read(deserializers=serializers)
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 241, in read
convert_stream_closed_error(self, e)
File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 144, in convert_stream_closed_error
raise CommClosedError(f"in {obj}: {exc}") from exc
distributed.comm.core.CommClosedError: in <TCP (closed) local=tcp://127.0.0.1:36220 remote=tcp://127.0.0.1:43522>: Stream is closed
2023-05-08 12:44:25,833 - distributed.worker - ERROR - failed during get data with tcp://127.0.0.1:36220 -> tcp://127.0.0.1:46206
Traceback (most recent call last):
  File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 225, in read
    frames_nbytes = await stream.read_bytes(fmt_size)
tornado.iostream.StreamClosedError: Stream is closed

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/worker.py", line 1787, in get_data
    response = await comm.read(deserializers=serializers)
  File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 241, in read
    convert_stream_closed_error(self, e)
  File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 144, in convert_stream_closed_error
    raise CommClosedError(f"in {obj}: {exc}") from exc
distributed.comm.core.CommClosedError: in <TCP (closed) local=tcp://127.0.0.1:36220 remote=tcp://127.0.0.1:43490>: Stream is closed

Traceback (most recent call last):
  File "/home/sm_carni/.conda/envs/climix-conda/bin/climix", line 10, in <module>
    sys.exit(main())
  File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/climix/main.py", line 353, in main
    do_main(
  File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/climix/main.py", line 325, in do_main
    save(
  File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/climix/datahandling.py", line 371, in save
    result.data = r.result()
  File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/distributed/client.py", line 317, in result
    raise exc.with_traceback(tb)
  File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/dask/optimization.py", line 990, in __call__
    return core.get(self.dsk, self.outkey, dict(zip(self.inkeys, args)))
  File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/dask/core.py", line 149, in get
    result = _execute_task(task, cache)
  File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/dask/core.py", line 119, in _execute_task
    return func(*(_execute_task(a, cache) for a in args))
  File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/dask/utils.py", line 73, in apply
    return func(*args, **kwargs)
  File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/dask/array/chunk.py", line 225, in argtopk
    if abs(k) >= a.shape[axis]:
  File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/dask/array/core.py", line 1868, in __bool__
    return bool(self.compute())
  File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/dask/base.py", line 314, in compute
    (result,) = compute(self, traverse=False, **kwargs)
  File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/dask/base.py", line 587, in compute
    schedule = get_scheduler(
  File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/dask/base.py", line 1400, in get_scheduler
    return get_scheduler(scheduler=config.get("scheduler", None))
  File "/home/sm_carni/.conda/envs/climix-conda/lib/python3.10/site-packages/dask/base.py", line 1375, in get_scheduler
    raise RuntimeError(
RuntimeError: Requested dask.distributed scheduler but no Client active.
```
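The final `RuntimeError` in the traceback above comes from an implicit, nested compute: the comparison `abs(k) >= a.shape[axis]` inside `argtopk` produces a lazy dask value, and truth-testing it in the `if` forces `__bool__` to call `compute()` inside a worker task, where no distributed `Client` is registered. A toy sketch of that mechanism in plain Python (illustrative names only, not dask code):

```python
class LazyScalar:
    """Toy stand-in for a lazy dask scalar: work is deferred until needed."""

    def __init__(self, thunk):
        self._thunk = thunk
        self.was_computed = False

    def compute(self):
        # In dask this would hand the graph to the active scheduler/Client.
        self.was_computed = True
        return self._thunk()

    def __bool__(self):
        # Truth-testing implicitly forces evaluation, like dask.array.__bool__.
        return bool(self.compute())


# A comparison on a lazy value yields another lazy value; using it in an
# `if` statement triggers compute() as a side effect.
check = LazyScalar(lambda: 5 >= 3)
if check:
    pass
print(check.was_computed)  # True
```

This is only meant to show why the error surfaces deep inside `argtopk` rather than in climix code: the compute is triggered by ordinary control flow, not an explicit call.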
3. Running another, smaller index returns no error: `climix -e -x tn /nobackup/rossby27/users/sm_carni/data/tmp/data_files/tasmin_EUR-11_MPI-M-MPI-ESM-LR_rcp85_r2i1p1_MPI-CSC-REMO2009_v1_day_20060101-20101231.nc /nobackup/rossby27/users/sm_carni/data/tmp/data_files/tasmin_EUR-11_MPI-M-MPI-ESM-LR_rcp85_r2i1p1_MPI-CSC-REMO2009_v1_day_20110101-20151231.nc`
4. Downgrading dask with `mamba install dask==2023.4.0` solves this error, but then a simpler index fails differently. Running `climix -e -x txx /home/rossby/data_lib/esgf/cordex/output/EUR-11/SMHI/NCC-NorESM1-M/rcp85/r1i1p1/RCA4/v1/day/tasmax/latest/tasmax_EUR-11_NCC-NorESM1-M_rcp85_r1i1p1_SMHI-RCA4_v1_day_20060101-20101231.nc /home/rossby/data_lib/esgf/cordex/output/EUR-11/SMHI/NCC-NorESM1-M/rcp85/r1i1p1/RCA4/v1/day/tasmax/latest/tasmax_EUR-11_NCC-NorESM1-M_rcp85_r1i1p1_SMHI-RCA4_v1_day_20110101-20151231.nc` gives:
```
INFO:distributed.scheduler:Lost all workers
INFO:distributed.batched:Batched Comm Closed <TCP (closed) Scheduler connection to worker local=tcp://127.0.0.1:36766 remote=tcp://127.0.0.1:40786>
Traceback (most recent call last):
  File "/home/sm_carni/.conda/envs/climix-latest/lib/python3.10/site-packages/distributed/batched.py", line 115, in _background_send
    nbytes = yield coro
  File "/home/sm_carni/.conda/envs/climix-latest/lib/python3.10/site-packages/tornado/gen.py", line 767, in run
    value = future.result()
  File "/home/sm_carni/.conda/envs/climix-latest/lib/python3.10/site-packages/distributed/comm/tcp.py", line 269, in write
    raise CommClosedError()
distributed.comm.core.CommClosedError
INFO:distributed.batched:Batched Comm Closed <TCP (closed) Scheduler connection to worker local=tcp://127.0.0.1:36766 remote=tcp://127.0.0.1:40776>
Traceback (most recent call last):
  File "/home/sm_carni/.conda/envs/climix-latest/lib/python3.10/site-packages/distributed/batched.py", line 115, in _background_send
    nbytes = yield coro
  File "/home/sm_carni/.conda/envs/climix-latest/lib/python3.10/site-packages/tornado/gen.py", line 767, in run
    value = future.result()
  File "/home/sm_carni/.conda/envs/climix-latest/lib/python3.10/site-packages/distributed/comm/tcp.py", line 269, in write
    raise CommClosedError()
distributed.comm.core.CommClosedError
```
0.19 (Poco Mas)
Klaus Zimmermann

https://git.smhi.se/climix/climix/-/issues/315
Indicators calculated given a condition (2023-05-08, Johan Södling)

In some projects I have need of calculating indices given some condition, for example the number of zero crossings during the vegetation period, or accumulated precipitation given temperature > 0. More generally, it would be nice if Climix supported calculating any index X given some condition Y, where Y is just a filter for which timesteps to use.
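The conditional-index request amounts to masking: evaluate index X only over the timesteps where condition Y holds. A minimal plain-Python sketch of these semantics for the "accumulated precipitation given temperature > 0" example (the values and threshold are made up for illustration, not climix API):

```python
# Illustrative daily values for a single grid point (made-up numbers).
precip = [1.0, 0.0, 3.0, 2.0, 0.5]   # precipitation, mm/day
temp = [-2.0, 1.0, 0.5, -1.0, 4.0]   # temperature, degC

# Condition Y: keep only timesteps with temperature > 0.
# Index X: accumulated precipitation over the kept timesteps.
accumulated = sum(p for p, t in zip(precip, temp) if t > 0.0)
print(accumulated)  # 3.5
```

In an implementation, Y would presumably be applied as a boolean mask on the time axis before the index function runs, so any index X could be combined with any filter Y.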