Data Process.


This operator prints the data stored in a datacube and offers the possibility to subset the data along its dimensions. Dimension values are used as input filters for subsetting.


  • cube: name of the input datacube. The name must be in PID format.

  • schedule: scheduling algorithm. The only possible value is 0, for a static linear block distribution of resources.

  • subset_dims: dimension names of the datacube used for the subsetting. Multiple-value field: a list of dimensions separated by “|” can be provided; the number of dimensions must match the number of filters in “subset_filter”.

  • subset_filter: enumeration of comma-separated elementary filters (one series of filters for each dimension). Multiple-value field: a list of filters separated by “|” can be provided; the number of filters must match the number of dimensions in “subset_dims”. In case “subset_type” is “index”, a filter can be expressed as:

    • index : select a single value, specified by its index;
    • start_index:stop_index : select elements from start_index to stop_index;
    • start_index:stride:stop_index : select elements from start_index to stop_index with a step of stride.

    Indexes are integers from 1 to the dimension size. The reserved keyword “end” can be used to specify the index of the last element. Example: subset_dims=lat|lon;subset_filter=1:10|20:end.

    In case “subset_type” is “coord”, a filter can be expressed as:

    • value: select a specific value;
    • start_value:stop_value : select elements from start_value to stop_value; an error is returned if this set is empty.

    Values should be numbers. Example: subset_dims=lat|lon;subset_filter=35:45|15:20. For time dimensions the option “time_filter” can be enabled, so that the following date formats can also be used:

    • yyyy
    • yyyy-mm
    • yyyy-mm-dd
    • yyyy-mm-dd hh
    • yyyy-mm-dd hh:mm
    • yyyy-mm-dd hh:mm:ss

    Time interval bounds must be separated with “_”. Refer to a season using the corresponding code: DJF for winter, MAM for spring, JJA for summer or SON for autumn.

    Multiple-value field: a list of filters separated by “|” can be provided; the number of filters must match the number of dimensions in “subset_dims”.

  • subset_type: if set to “index” (default), the “subset_filter” is applied to dimension indexes; if set to “coord”, the filter is applied to dimension values.

  • time_filter: enables filters using dates for time dimensions; enabled by default.

  • limit_filter: optional filter on the maximum number of rows.

  • show_id: if “no” (default), the fragment row ID is not shown; with “yes”, the fragment row ID is also shown.

  • show_index: if “no” (default), dimension IDs are not shown; with “yes”, the dimension ID is also shown next to each value.

  • show_time: if “no” (default), the values of the time dimension are shown as numbers; with “yes”, the values are converted to strings with date and time.

  • level: with “1” (default), only measure values are shown; with “2”, dimension values are also returned.

  • output_path: absolute path of the JSON Response. By default, the JSON Response is saved in the core environment.

  • output_name: filename of the JSON Response. The default value is the PID of the input datacube. The file is saved provided that “output_path” is set.

  • cdd: absolute path corresponding to the current directory on data repository. It is appended to BASE_SRC_PATH to build the effective path to files (see configuration notes for further details).

  • base64: if “no” (default), dimension values are returned as strings (with possible truncation errors); with “yes”, raw dimension values are returned as base64-encoded strings.
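To make the two filter grammars concrete, here is a small self-contained Python sketch of how an index filter (with the reserved keyword “end”) and a coordinate filter could be interpreted. This is only an illustration of the semantics described above, with invented function names; it is not the Ophidia implementation:

```python
def parse_index_filter(filt, dim_size):
    """Interpret one elementary index filter ("index", "start:stop" or
    "start:stride:stop") against a 1-based dimension of dim_size
    elements. "end" stands for the last index. Illustrative sketch."""
    def idx(tok):
        return dim_size if tok == "end" else int(tok)

    parts = filt.split(":")
    if len(parts) == 1:                       # single index
        start, stride, stop = idx(parts[0]), 1, idx(parts[0])
    elif len(parts) == 2:                     # start_index:stop_index
        start, stride, stop = idx(parts[0]), 1, idx(parts[1])
    elif len(parts) == 3:                     # start_index:stride:stop_index
        start, stride, stop = idx(parts[0]), int(parts[1]), idx(parts[2])
    else:
        raise ValueError("malformed index filter: " + filt)
    if not (1 <= start <= stop <= dim_size):
        raise ValueError("indexes must lie between 1 and the dimension size")
    return list(range(start, stop + 1, stride))


def parse_coord_filter(filt, values):
    """Select coordinate values matching "value" or
    "start_value:stop_value"; an empty selection is an error,
    as stated above. Illustrative sketch."""
    if ":" in filt:
        start, stop = (float(t) for t in filt.split(":"))
        picked = [v for v in values if start <= v <= stop]
    else:
        picked = [v for v in values if v == float(filt)]
    if not picked:
        raise ValueError("empty selection for filter " + filt)
    return picked


# "subset_dims" and "subset_filter" are "|"-separated and must pair up:
dims = "lat|lon".split("|")
filters = "1:10|20:end".split("|")
assert len(dims) == len(filters)
selection = {d: parse_index_filter(f, 30) for d, f in zip(dims, filters)}
```

For example, `parse_index_filter("20:end", 30)` selects indexes 20 through 30, while `parse_coord_filter("35:45", values)` keeps only coordinate values between 35 and 45 inclusive.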

System parameters

  • exec_mode: operator execution mode. Possible values are “async” (default) for asynchronous mode and “sync” for synchronous mode with JSON-compliant output.
  • ncores: number of parallel processes to be used (it must be set to 1).
  • sessionid: session identifier used server-side to manage sessions and jobs. Usually, users don’t need to use/modify it, except when it is necessary to create a new session or switch to another one.
  • objkey_filter: filter on the output of the operator written to file. Possible values: “all” (default, no filter), “none” (no output), “explorecube_data” (shows the content of a cube), “explorecube_summary” (shows the number of rows to be extracted), “explorecube_dimvalues” (shows the values of the dimensions of the output cube).


Print the subset consisting of elements 1 through 10 of dimension “lat” and 20 through 30 of dimension “lon”:

[OPH_TERM] >>  oph_explorecube cube=URL/1/1;subset_dims=lat|lon;subset_filter=1:10|20:30;
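As the example shows, the command is a semicolon-separated list of key=value pairs. A hypothetical helper that assembles such a string (the function name is invented for illustration; it is not part of oph_term) could look like:

```python
def build_explorecube_cmd(**args):
    # Join key=value pairs with ";" in the order given, then terminate
    # with ";" as in the oph_term example above (illustrative helper).
    return "oph_explorecube " + ";".join(
        "{}={}".format(k, v) for k, v in args.items()) + ";"

cmd = build_explorecube_cmd(cube="URL/1/1",
                            subset_dims="lat|lon",
                            subset_filter="1:10|20:30")
# cmd == "oph_explorecube cube=URL/1/1;subset_dims=lat|lon;subset_filter=1:10|20:30;"
```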


Argument name   Type    Mandatory  Values       Default  Min/Max value
cube            string  yes
schedule        int     no         0            0
limit_filter    int     no                      100      1 / 10000
subset_dims     string  no                      none
subset_type     string  no         index|coord  index
subset_filter   string  no                      all
time_filter     string  no         yes|no       yes
show_index      string  no         yes|no       no
show_id         string  no         yes|no       no
show_time       string  no         yes|no       no
level           int     no         1|2          1        1 / 2
output_path     string  no                      default
output_name     string  no                      default
cdd             string  no                      /
base64          string  no         yes|no       no
sessionid       string  no                      null
ncores          int     no                      1        1 / 1
exec_mode       string  no         async|sync   async
objkey_filter   string  no         all|none|explorecube_data|explorecube_summary|explorecube_dimvalues  all
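The defaults and Min/Max bounds in the table can be read as a validation contract. The following sketch copies the constants from the table; the names and logic are invented for illustration and are not Ophidia code:

```python
# Defaults copied from the argument table above (illustrative only).
DEFAULTS = {
    "schedule": 0, "limit_filter": 100, "subset_type": "index",
    "subset_filter": "all", "time_filter": "yes", "show_index": "no",
    "show_id": "no", "show_time": "no", "level": 1, "cdd": "/",
    "base64": "no", "ncores": 1, "exec_mode": "async",
    "objkey_filter": "all",
}

def validate_args(args):
    """Merge user arguments with defaults and enforce the table's
    mandatory flag and Min/Max bounds (hypothetical validator)."""
    merged = {**DEFAULTS, **args}
    if "cube" not in merged:
        raise ValueError('"cube" is mandatory')
    if not 1 <= merged["limit_filter"] <= 10000:
        raise ValueError("limit_filter must be in [1, 10000]")
    if merged["level"] not in (1, 2):
        raise ValueError("level must be 1 or 2")
    if merged["ncores"] != 1:
        raise ValueError("ncores must be 1")
    if merged["subset_type"] not in ("index", "coord"):
        raise ValueError('subset_type must be "index" or "coord"')
    return merged
```

For instance, `validate_args({"cube": "URL/1/1"})` fills in all the defaults above, while omitting “cube” or passing ncores other than 1 raises an error.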