In dynamic cluster mode, Ophidia users run their computations exclusively by reserving resources on demand. A user can deploy one or more clusters of analytics nodes of different sizes, each running a single I/O service. Ophidia provides the commands to deploy, check, and undeploy I/O server clusters.
Dynamic cluster mode can be enabled at configuration time by following the instructions provided on this page.
Clusters are identified by a user-defined host partition name that can be used across different operator invocations. A default host partition name ("main") is built into the operators and the Ophidia Terminal for users' convenience. Multiple clusters cannot share the same name, and the name "all" is reserved.
The following commands show the typical sequence of steps for managing clusters of I/O servers. More detailed information about the arguments is available on the OPH_CLUSTER documentation page.
Get information about all the dynamic clusters of I/O servers:
oph_cluster hostpartition=all;
If at least one cluster has been created, the command shows a table listing the host partition names, the number of analytics nodes in each cluster, and their status (RUNNING or PENDING); otherwise, an empty table is shown.
Administrators can get information about the full set of available resources using the 'info_cluster' action:
oph_cluster action=info_cluster;
The output is similar to that of the previous command, but it provides a more administrative view of the resources in use and those still available (user list, quota, etc.).
Deploy a new cluster of 4 I/O servers called myPartition:
oph_cluster action=deploy;hostpartition=myPartition;nhost=4;
Check the status of the cluster called myPartition:
oph_cluster action=info;hostpartition=myPartition;
Delete the cluster called myPartition:
oph_cluster action=undeploy;hostpartition=myPartition;
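The lifecycle above can also be scripted. The sketch below builds the same oph_cluster command strings in Python; the cluster_cmd helper is a hypothetical convenience (not part of Ophidia), and the PyOphidia connection details in the trailing comments are assumptions to adapt to your deployment.

```python
# Hypothetical helper that composes the oph_cluster command strings
# shown on this page from keyword arguments.

def cluster_cmd(action=None, hostpartition=None, nhost=None):
    """Build an oph_cluster command string (semicolon-separated arguments)."""
    args = []
    if action:
        args.append("action=%s" % action)
    if hostpartition:
        args.append("hostpartition=%s" % hostpartition)
    if nhost:
        args.append("nhost=%d" % nhost)
    return "oph_cluster " + ";".join(args) + ";"

# Typical lifecycle, mirroring the steps above:
deploy   = cluster_cmd(action="deploy",   hostpartition="myPartition", nhost=4)
check    = cluster_cmd(action="info",     hostpartition="myPartition")
undeploy = cluster_cmd(action="undeploy", hostpartition="myPartition")

# To submit these against a live Ophidia instance, one could use the
# PyOphidia client (server address and credentials below are placeholders):
#   from PyOphidia import client
#   ophclient = client.Client(username="oph-user", password="...",
#                             server="ophidia.example.com", port="11732")
#   ophclient.submit(deploy)
```

This keeps the command syntax in one place, so a workflow script only changes the partition name and node count.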