Description
Type
Data Process.
Behaviour
It creates a new datacube with random data and dimensions.
Parameters
- container: name of an existing container.
- cwd: absolute path corresponding to the current working directory, used to select the folder where the container is located.
- host_partition: name of the I/O host partition used to store data (e.g. a test partition). If the default value “auto” is specified, the first available host partition will be used.
- filesystem: type of filesystem used to store data. Possible values are “local”, “global” or “auto” (default). In the last case the first available filesystem will be used.
- ioserver: type of I/O server used to store data. Possible values are: “mysql_table” (default) or “ophidiaio_memory”.
- schedule: scheduling algorithm. The only possible value is 0, for a static linear block distribution of resources.
- nhost: number of output hosts. With the default value (‘0’), all hosts available in the host partition are used.
- ndbms: number of output DBMS instances per host. With ‘0’, all DBMS instances available per host are used. Default is ‘1’.
- ndb: number of output databases per DBMS. Default value is ‘1’.
- nfrag: number of fragments per database.
- ntuple: number of tuples per fragment.
- run: if set to ‘no’, the operator simulates the creation and computes the fragmentation parameters that would be used (see the dry-run sketch after this list); if set to ‘yes’, the actual cube creation is executed.
- measure: name of the measure used in the datacube.
- measure_type: type of the measure. Possible values are “double”, “float” or “int”.
- exp_ndim: used to specify how many dimensions in the dim argument, starting from the first one, must be considered as explicit dimensions. NOTE: the new datacube must have at least one implicit dimension, hence the total number of dimensions must be greater than or equal to exp_ndim + 1.
- dim: name of the dimension. Multiple-value field: list of dimensions separated by “|” can be provided.
- concept_level: concept level short name (must be a single char). Default value is “c”. Multiple-value field: list of concept levels separated by “|” can be provided.
- dim_size: size of random dimension. Multiple-value field: list of dimension sizes separated by “|” can be provided.
- compressed: if set to “yes”, new data will be compressed. With “no” (default), data will be inserted without compression.
- grid: optional argument used to identify the grid of dimensions to be used (if the grid already exists) or the one to be created (if the grid has a new name). If it isn’t specified, no grid will be used.
- description: additional description to be associated with the output cube.
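The following dry-run sketch reuses the values of the example shown in the Examples section and assumes an existing container named ‘container1’. In this configuration the product of the explicit dimension sizes (16 x 10 = 160) matches nhost x ndbms x ndb x nfrag x ntuple (1 x 1 x 2 x 8 x 10 = 160), so each combination of explicit dimension values corresponds to one tuple, while the implicit dimension size (10) gives the number of elements stored in each tuple. With run=no the operator only reports the fragmentation parameters that would be used, without creating the cube:
[OPH_TERM] >> oph_randcube container=container1;run=no;nhost=1;ndbms=1;ndb=2;nfrag=8;ntuple=10;measure=pressure;measure_type=double;exp_ndim=2;dim=lat|lon|time;concept_level=c|c|d;dim_size=16|10|10;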
System parameters
- exec_mode: operator execution mode. Possible values are “async” (default) for asynchronous mode and “sync” for synchronous mode with JSON-compliant output (see the sketch after this list).
- ncores: number of parallel processes to be used (min. 1).
- sessionid: session identifier used server-side to manage sessions and jobs. Usually, users don’t need to use/modify it, except when it is necessary to create a new session or switch to another one.
- objkey_filter: filter on the output of the operator written to file (default=all => no filter; none => no output; randcube => shows the operator’s output PID as text when “run” is set to “yes”, otherwise the parameter check is shown).
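As an illustrative sketch (the values below are assumptions, not defaults), system parameters are appended to the operator call like any other argument; here the cube from the previous sketch is created synchronously using 4 parallel processes:
[OPH_TERM] >> oph_randcube container=container1;nhost=1;ndbms=1;ndb=2;nfrag=8;ntuple=10;measure=pressure;measure_type=double;exp_ndim=2;dim=lat|lon|time;concept_level=c|c|d;dim_size=16|10|10;exec_mode=sync;ncores=4;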
Examples
Generate a random compressed datacube with 1 host, 1 DBMS/host, 2 dbs/DBMS, 8 fragments/db, 10 tuples/fragment, 10 elements/tuple, with ‘Pressure’ measure and ‘lat’, ‘lon’ and ‘time’ dimensions, in the container ‘container1’:
[OPH_TERM] >> oph_randcube container=container1;nhost=1;ndbms=1;ndb=2;nfrag=8;ntuple=10;measure=Pressure;measure_type=double;exp_ndim=2;dim=lat|lon|time;concept_level=c|c|d;dim_size=16|10|10;compressed=yes;host_partition=test;filesystem=local;
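As a further sketch with assumed, illustrative values, generate an uncompressed datacube with a single explicit dimension (‘lat’), so that ‘lon’ and ‘time’ are implicit: 1 host, 1 DBMS/host, 1 db/DBMS, 4 fragments/db, 4 tuples/fragment and 100 elements/tuple:
[OPH_TERM] >> oph_randcube container=container1;nhost=1;ndbms=1;ndb=1;nfrag=4;ntuple=4;measure=pressure;measure_type=float;exp_ndim=1;dim=lat|lon|time;concept_level=c|c|d;dim_size=16|10|10;compressed=no;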