This operator creates a new datacube by concatenating a NetCDF file to an existing datacube (both measure and dimensions). WARNING: only mono-dimensional coordinate variables are imported. The dimensions in the NetCDF file that will be treated as explicit dimensions must have the same domain as those in the datacube, while those treated as implicit dimensions must not overlap with those in the datacube.
- cube: name of the input datacube. The name must be in PID format.
- check_exp_dim: if set to “yes” (default), the explicit dimensions of the two sources (NetCDF file and datacube) will be compared to ensure they have the same values; if set to “no”, the check will not be performed.
- schedule: scheduling algorithm. The only possible value is 0, for a static linear block distribution of resources.
- src_path: path of the NetCDF file.
- cdd: absolute path corresponding to the current directory on data repository. It is appended to BASE_SRC_PATH to build the effective path to files (see configuration notes for further details).
- grid: optional argument used to identify the grid of dimensions to be used (if the grid already exists) or the one to be created (if the grid has a new name). If it isn’t specified, no grid will be used.
- dim_offset: offset to be added to the dimension values of imported data; by default, the offset is set to the difference (step) between consecutive dimension values.
- dim_continue: if enabled, the last value of the implicit dimension of the input cube is used to evaluate the new values of the dimension.
- subset_dims: dimension names used for the subsetting. Multiple-value field: a list of dimensions separated by “|” can be provided; its length must match the number of filters in “subset_filter”.
- subset_filter: enumeration of comma-separated elementary filters (one series of filters for each dimension). Multiple-value field: a list of filters separated by “|” can be provided; its length must match the number of dimensions in “subset_dims”. Values should be numbers. Example: subset_dims=lat|lon;subset_filter=35:45|15:20. Possible forms are:
- start_value: single value specifying a single element of the subset.
- start_value:stop_value: select elements from start_value to stop_value.
- subset_type: if set to “index” (default), “subset_filter” is applied to dimension indexes; if set to “coord”, it is applied to dimension values. If a single value is provided, it is used for all the dimensions.
- time_filter: enable filters based on dates for time dimensions; enabled by default.
- offset: value added to the bounds of the subset intervals defined with “subset_filter” when the “coord” subset type is used.
- description: additional description to be associated with the output cube.
- exec_mode: operator execution mode. Possible values are async (default) for asynchronous mode and sync for synchronous mode with JSON-compliant output.
- ncores: number of parallel processes to be used (min. 1).
- sessionid: session identifier used server-side to manage sessions and jobs. Usually, users don’t need to use/modify it, except when it is necessary to create a new session or switch to another one.
- objkey_filter: filter on the output of the operator written to file (default=all => no filter, none => no output, concatnc => shows operator’s output PID as text).
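As an illustration, the subsetting parameters above could be combined to concatenate only a geographic region of the file, selecting latitude and longitude ranges by coordinate values (a hypothetical invocation; the file path and cube PID are placeholders):
[OPH_TERM] >> oph_concatnc src_path=/path/of/ncfile.nc;cube=URL/1/1;subset_dims=lat|lon;subset_filter=35:45|15:20;subset_type=coord;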
Concatenate a NetCDF file excluding metadata into the datacube “URL/1/1”:
[OPH_TERM] >> oph_concatnc src_path=/path/of/ncfile.nc;cube=URL/1/1;
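Similarly, to append data along the implicit (e.g. time) dimension continuing from the last value of the input cube, and to attach a description to the output cube (a hypothetical invocation; the path, PID, and description are placeholders):
[OPH_TERM] >> oph_concatnc src_path=/path/of/ncfile.nc;cube=URL/1/1;dim_continue=yes;description=extended time series;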