It executes a query on a datacube. The SQL query must contain only the required primitive (or nested primitives), without SQL clauses such as SELECT or FROM. The examples provided in the primitives manual report the full SQL query that could be used when directly connected to the database; in order to use them in the Ophidia analytics framework, the user must extract only the part between SELECT and FROM. The result of the query execution will be saved in a new datacube. The type of the resulting measure must be equal to that of the input measure; if the types differ, the primitive “oph_cast” must be called in order to save the results with the appropriate type.
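For instance, if the primitives manual reported the full SQL query shown below (“fact_table” is a hypothetical table name used only for illustration):

SELECT oph_reduce(measure,'OPH_AVG',25) FROM fact_table;

then only the portion between SELECT and FROM would be passed to the operator, i.e. query=oph_reduce(measure,'OPH_AVG',25);.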
- cube: name of the input datacube. The name must be in PID format.
- query: user-defined SQL query. It may use Ophidia primitives, even nested ones. Use the reserved keyword “measure” to refer to the time series. Use the keyword “dimension” to refer to the input dimension array (only if exactly one dimension of the input cube is implicit).
- dim_query: user-defined SQL query to be applied to dimension values. It may use Ophidia primitives. Use the keyword “dimension” to refer to the input dimension array. If the size of the original array decreases, values are set by default to incremental indexes: 1, 2, 3, ...
- measure: name of the new measure resulting from the specified operation.
- measure_type: if “auto”, the measure type will be set automatically to that of the input datacube and the related primitive arguments must be omitted in “query”; if “manual” (default), the measure type and the related primitive arguments have to be set in “query”.
- dim_type: if “auto”, the dimension type will be set automatically to that of the input datacube and the related primitive arguments must be omitted in “dim_query”; if “manual” (default), the dimension type and the related primitive arguments have to be set in “dim_query”.
- check_type: if “yes”, the agreement between input and output data types of nested primitives will be checked; if “no”, the data type will not be checked (valid only when “measure_type” and “dim_type” are set to “manual”).
- on_reduce: if “update”, the values of the implicit dimension are automatically set to a list of long integers starting from 1, even if the dimension size does not decrease; if “skip” (default), the values are updated to a list of long integers only if the dimension size decreases due to a reduction primitive.
- compressed: if “auto” (default), new data will be compressed according to the compression status of the input datacube; if “yes”, new data will be compressed; if “no”, data will be inserted without compression.
- schedule: scheduling algorithm. The only possible value is 0, for a static linear block distribution of resources.
- container: name of the container to be used to store the output cube; by default, it is the input container.
- description: additional description to be associated with the output cube.
- exec_mode: operator execution mode. Possible values are “async” (default) for asynchronous mode and “sync” for synchronous mode with JSON-compliant output.
- ncores: number of parallel processes to be used (min. 1).
- nthreads: number of parallel threads per process to be used (min. 1).
- sessionid: session identifier used server-side to manage sessions and jobs. Usually, users don’t need to use/modify it, except when it is necessary to create a new session or switch to another one.
- objkey_filter: filter on the output of the operator written to file (default=all => no filter, none => no output, apply => show operator’s output PID as text).
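When the output type of the outermost primitive differs from the input measure type, the query can be wrapped in “oph_cast” as noted above. The following command is only a sketch, assuming oph_cast follows the same '<input type>','<output type>' argument convention used by other primitives such as oph_mul_array; the exact signature should be checked in the primitives manual:

[OPH_TERM] >> oph_apply cube=URL/1/1;query=oph_cast('oph_double','oph_float',measure);check_type=no;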
Use primitive “oph_reduce” on datacube identified by the PID “URL/1/1” with oph_double input data:
[OPH_TERM] >> oph_apply cube=URL/1/1;query=oph_reduce(measure,'OPH_AVG',25);
Use primitive “oph_mul_array” on datacube identified by the PID “URL/1/1” with oph_double input data:
[OPH_TERM] >> oph_apply cube=URL/1/1;query=oph_mul_array('oph_double|oph_double','oph_double',measure,measure);check_type=no;
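The “dim_query” and “measure” arguments can be combined to keep the implicit dimension consistent with a reduction and to rename the output measure. The following command is a sketch, not taken from the official examples: the measure name “avg_measure” is hypothetical, and using “OPH_MAX” to pick one dimension value per group of 25 is only one possible choice:

[OPH_TERM] >> oph_apply cube=URL/1/1;query=oph_reduce(measure,'OPH_AVG',25);dim_query=oph_reduce(dimension,'OPH_MAX',25);measure=avg_measure;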