Ophidia allows users to apply the same operation over multiple data cubes by submitting a single command: a massive operation.
A massive operation consists of one or more single operations sharing the same parameters, except for the value of the parameter that identifies the cube to be processed or the file to be imported. A typical example is a massive reduction applied to every cube of a given container.
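Based on the expansion shown below, the massive command itself was presumably of the following form (a sketch, assuming the standard bracketed filter syntax; the container name foo and the operation avg come from the surrounding text):

```
oph_reduce cube=[container=foo];operation=avg;
```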
The rationale of this command is to apply the data reduction to every data cube in container foo. Internally, the command is converted into a list of single operations:
oph_reduce cube=PID1;operation=avg;
oph_reduce cube=PID2;operation=avg;
...
oph_reduce cube=PIDn;operation=avg;
where PIDs are the Ophidia identifiers of the data cubes stored within the container foo.
Most of the filters for massive operations can also be applied with negation, for example to apply the previous command to all the cubes not belonging to container foo.
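A hedged sketch of the negated form; the exact negation syntax is an assumption here (a "!" before the equals sign is one plausible convention) and should be checked against the massive commands reference appendix:

```
oph_reduce cube=[container!=foo];operation=avg;
```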
Refer to the massive commands reference appendix for the full list of filters and whether each one can be negated.
The command associated with a cube operation follows the format of the specific operation to be executed, except for the parameters cube and cubes, which have to be set to a string of semicolon-separated filters enclosed in square brackets. The filters are key-value pairs used to identify the set of cubes to which the massive operation has to be applied, and several filters are available to finely select the cubes to be processed. For instance, a massive operation can publish every level-4 cube in container foo, or apply the query somequery to all the cubes having the attribute model set to foo.
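The two commands referred to above may have looked like the following sketches. oph_publish and oph_apply are real Ophidia operators, but the filter key names used here for metadata attributes (attribute_name and attribute_value) are assumptions and should be verified against the filter reference:

```
oph_publish cube=[container=foo;level=4]
oph_apply cube=[attribute_name=model;attribute_value=foo];query=somequery
```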
The command associated with an import operation follows the format of the import operation itself, except for the parameter src_path, which has to be set to a string of semicolon-separated filters. In this case the filters select the files to be imported. For instance, a massive operation can import all the files whose names match the pattern foo*.nc in the folder /path/to/files. By default, the path is relative to BASE_SRC_PATH (see oph_configuration for additional information).
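A sketch of such an import command, assuming the oph_importnc operator and path and file as filter keys (the measure name mymeasure is purely illustrative):

```
oph_importnc measure=mymeasure;src_path=[path=/path/to/files;file=foo*.nc]
```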
The complete filter list can be obtained by typing
The output of a massive operation reports some information regarding each (single) sub-task associated with the operation. In particular, the output consists of two objects:
The job identifier is a URL that can be used to access the session web resources related to the task; see the Session Management section for additional information on these resources.
The table also includes a Parent Marker ID, which is associated with the massive task and can be used to retrieve its output later through the command view.
Note that a massive operation is considered successful only if all of its sub-tasks succeeded.
If the user is not sure which objects (cubes or files) a massive operation will be applied to, the key-value pair run=no can be appended to the filter string; submitting the resulting command returns the list of objects without actually executing the massive operation.
If the user wishes to set only the filter path, the key path can be omitted; likewise, if the user wishes to set only the filter cube_filter, the key cube_filter can be omitted.
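For instance, the two shorthand forms might look as follows (sketches; the operator names and values are illustrative, and the bare-value interpretation follows the two rules above):

```
oph_importnc measure=mymeasure;src_path=[/path/to/files]
oph_delete cube=[2]
```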
Finally, two filter strings are particularly useful in massive operations: one selecting all the cubes in the current working virtual directory, and one selecting all the cubes in the current working virtual directory and its sub-directories.
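The two strings may have been of the following form (guesses based on the path filter and the recursion option mentioned elsewhere in this section; verify against the filter reference):

```
[path=.]
[path=.;recursive=yes]
```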
How to show the cubes which a massive operation will be applied to without executing it?
oph_foo [...other filters...;run=no]
How to filter cubes whose numerical identifiers (the last numbers of PIDs) are between 100 and 200?
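A hedged sketch, assuming cube_filter accepts a colon-separated range of numerical identifiers:

```
oph_delete cube=[cube_filter=100:200]
```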
How to delete all the cubes in current working directory?
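A hedged sketch, assuming the path filter with "." denoting the current working virtual directory:

```
oph_delete cube=[path=.]
```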
How to delete all the cubes of a session? (from root directory)
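A hedged sketch, assuming "/" denotes the session root directory and recursive=yes extends the selection to all sub-directories:

```
oph_delete cube=[path=/;recursive=yes]
```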
How to import CMIP5 files without specifying the related measure name? (it is extracted from file names)
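A hedged sketch: the measure parameter is simply omitted, on the assumption that the operator derives it from the CMIP5 file-name convention (the path and pattern are illustrative):

```
oph_importnc src_path=[path=/path/to/cmip5;file=*.nc]
```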
How to limit the folder depth in case of recursive importation of files?
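A hedged sketch, assuming a depth filter limits the recursion level (the key name depth is an assumption; the measure name and path are illustrative):

```
oph_importnc measure=mymeasure;src_path=[path=/path/to/files;recursive=yes;depth=2]
```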