Ophidia Server

Warning

In the following configuration files the passwords are ‘abcd’; it is recommended to change them if the Ophidia service is to be used in a public production environment.

Main configuration

The main server configuration files are located in $prefix/etc/ophidiadb.conf and $prefix/etc/server.conf. A complete description of the arguments provided in each file is given below.

$prefix/etc/ophidiadb.conf

This file has to be changed according to the parameters of the MySQL server hosting the OphidiaDB: database name, IP address or hostname, port number of the MySQL server, username and password.

OPHDB_NAME=ophidiadb
OPHDB_HOST=mysql.hostname
OPHDB_PORT=3306
OPHDB_LOGIN=root
OPHDB_PWD=abcd

mysql.hostname is the hostname of the MySQL node. You can set it to 127.0.0.1 for an all-in-one instance of Ophidia platform.

$prefix/etc/server.conf

This file has to be customized according to the following comments.

The parameters WEB_SERVER and WEB_SERVER_LOCATION must be equal to the same parameters set for the Ophidia Analytics Framework in oph_configuration. The password used to generate the certificates/keys should be used to set the CERT_PASSWORD parameter.

An example of configuration file is given below:

TIMEOUT=3600
INACTIVITY_TIMEOUT=31536000
WORKFLOW_TIMEOUT=86400
LOGFILE=/usr/local/ophidia/oph-server/log/server.log
WF_LOGFILE=/usr/local/ophidia/oph-server/log/accounting-workflow.log
TASK_LOGFILE=/usr/local/ophidia/oph-server/log/accounting-task.log
CERT=/usr/local/ophidia/oph-server/etc/cert/myserver.pem
CA=/usr/local/ophidia/oph-server/etc/cert/cacert.pem
CERT_PASSWORD=abcd
RMANAGER_CONF_FILE=/usr/local/ophidia/oph-server/etc/rmanager.conf
AUTHZ_DIR=/usr/local/ophidia/oph-server/authz
TXT_DIR=/usr/local/ophidia/oph-analytics-framework/log
WEB_SERVER=http://server.hostname/ophidia
WEB_SERVER_LOCATION=/var/www/html/ophidia
OPERATOR_CLIENT=/usr/local/ophidia/oph-analytics-framework/bin/oph_analytics_framework
IP_TARGET_HOST=cluster.hostname
SUBM_USER=ophidia
SUBM_USER_PUBLK=/usr/local/ophidia/.ssh/id_dsa.pub
SUBM_USER_PRIVK=/usr/local/ophidia/.ssh/id_dsa
OPH_XML_URL=http://server.hostname/ophidia/operators_xml
OPH_XML_DIR=/var/www/html/ophidia/operators_xml
NOTIFIER=framework
SERVER_FARM_SIZE=16
QUEUE_SIZE=0
HOST=server.hostname
PORT=11732
PROTOCOL=https
AUTO_RETRY=3
POLL_TIME=0
BASE_SRC_PATH=/data/repository
DEFAULT_MAX_SESSIONS=100
DEFAULT_MAX_CORES=8
DEFAULT_MAX_HOSTS=1
DEFAULT_TIMEOUT_SESSION=1

server.hostname is the hostname of the Ophidia Server node. You can set it to 127.0.0.1 for an all-in-one instance of Ophidia platform.

cluster.hostname is the hostname of the node where the resource manager (e.g. Slurm) is running. You can set it to 127.0.0.1 for an all-in-one instance of Ophidia platform.

This configuration is related to an installation with

  • $prefix set to /usr/local/ophidia/oph-server
  • $framework-path set to /usr/local/ophidia/oph-analytics-framework

These configuration parameters can be set as options at configuration stage:

./configure \
    --prefix=/usr/local/ophidia/oph-server \
    --with-framework-path=/usr/local/ophidia/oph-analytics-framework

Description of the parameters (lines can be commented using #):

TIMEOUT                   ← request timeout
INACTIVITY_TIMEOUT        ← server timeout (to shutdown) when no request is received
WORKFLOW_TIMEOUT          ← maximum serving time for a workflow
LOGFILE                   ← position of log file; if not given, "stdout" is used instead
WF_LOGFILE                ← position of workflows accounting log file
TASK_LOGFILE              ← position of tasks accounting log file
CERT                      ← path to server certificate
CA                        ← path to certification authority certificate
CERT_PASSWORD             ← password of server certificate
RMANAGER_CONF_FILE        ← position of "rmanager.conf" on FS
AUTHZ_DIR                 ← position of folder authz/ used to manage authorization data
TXT_DIR                   ← position of the folder where operator-specific logs will be saved (only for debug mode)
WEB_SERVER                ← prefix used in sessionid (*protocol://hostname/subfolders*)
WEB_SERVER_LOCATION       ← position of web server root on FS
OPERATOR_CLIENT           ← executable of oph-analytics-framework on FS
IP_TARGET_HOST            ← IP address/DNS name of the host where the scheduler is running
SUBM_USER                 ← Linux username used to submit job to the scheduler
SUBM_USER_PUBLK           ← position of the public key used for SSH on FS (required only in case libssh has to be adopted, see below)
SUBM_USER_PRIVK           ← position of the private key used for SSH on FS (required only in case libssh has to be adopted, see below)
OPH_XML_URL               ← URL to the folder containing XML description of operations
OPH_XML_DIR               ← position of the folder containing XML description on FS
NOTIFIER                  ← username used by the framework to release notifications
SERVER_FARM_SIZE          ← maximum number of active workflows (0 means infinity)
QUEUE_SIZE                ← maximum number of queued workflows (0 means infinity)
HOST                      ← IP address/DNS name of the host where Ophidia Server is running
PORT                      ← port number of oph_server
PROTOCOL                  ← protocol used by Ophidia Server
AUTO_RETRY                ← maximum number of times the server will retry a task submission in case of scheduler errors
POLL_TIME                 ← time (in seconds) between two consecutive checks for starved/failed jobs in resource manager queue
BASE_SRC_PATH             ← mount point (or folder) containing the local files whose presence can be checked by the operator OPH_WAIT
DEFAULT_MAX_SESSIONS      ← default value used for OPH_MAX_SESSIONS by the tool `oph_manage_user`
DEFAULT_MAX_CORES         ← default value used for OPH_MAX_CORES by the tool `oph_manage_user`
DEFAULT_MAX_HOSTS         ← default value used for OPH_MAX_HOSTS by the tool `oph_manage_user`
DEFAULT_TIMEOUT_SESSION   ← default value used for OPH_TIMEOUT_SESSION by the tool `oph_manage_user`
ENABLE_CLUSTER_DEPLOYMENT ← set to "yes" to enable dynamic cluster deployment (default "no")
OPENID_ENDPOINT           ← endpoint of an authorization server compliant with OpenId Connect; set it in case OpenId Connect has to be enabled
OPENID_CLIENT_ID          ← client id associated with Ophidia Server by OpenId Connect authorization server
OPENID_CLIENT_SECRET      ← client secret associated with Ophidia Server by OpenId Connect authorization server
OPENID_TOKEN_TIMEOUT      ← timeout of forged tokens for OpenId Connect
OPENID_TOKEN_CHECK_TIME   ← timeout used to check for revoked tokens for OpenId Connect
AAA_ENDPOINT              ← endpoint of an authorization server compliant with AAAaaS; set it in case AAAaaS has to be enabled
AAA_CATEGORY              ← service category associated with Ophidia Server by AAAaaS authorization server
AAA_NAME                  ← service name used to identify Ophidia Server by AAAaaS authorization server
AAA_TOKEN_CHECK_TIME      ← timeout used to check for revoked tokens for AAAaaS

Additional details about the tool oph_manage_user are available here.

The parameter PORT must be equal to the parameter SOAP_PORT of the oph-analytics-framework configuration set in the file oph_soap_configuration.

The parameter NOTIFIER must be equal to the parameter SOAP_USERNAME of the oph-analytics-framework configuration set in the file oph_soap_configuration.

The parameter ENABLE_CLUSTER_DEPLOYMENT has to be set to “yes” only in case clusters of I/O and Analytics nodes can be deployed dynamically: see cluster deployment for further details.

Add the credentials of the notifier to the authorization data of oph-server stored in authz/users.dat.
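
For instance, assuming $prefix is /usr/local/ophidia/oph-server, the default notifier username “framework” (see the NOTIFIER parameter) and the sample password ‘abcd’ (which must match the password configured for notifications in oph_soap_configuration), the entry can be appended as follows:

echo "framework:abcd" >> /usr/local/ophidia/oph-server/authz/users.dat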

Copy the XML descriptions stored in $prefix/etc/xml/ into OPH_XML_DIR.
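
For instance, assuming the sample paths used in the configuration above:

cp -r /usr/local/ophidia/oph-server/etc/xml/* /var/www/html/ophidia/operators_xml/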

The default parameter values (e.g. the file name “myserver.pem” and “cacert.pem”) are defined in src/oph_gather.h.

The WEB_SERVER parameter indicates the PREFIX of the URL used in sessionid and DOI, so it must include the possible path (list of parent subfolders) to the “sessions” folder.
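
For illustration only (a sketch assuming the sample values of WEB_SERVER and WEB_SERVER_LOCATION shown above; the actual session code is generated by the server), the URL prefix and the corresponding filesystem location relate as follows:

WEB_SERVER prefix:  http://server.hostname/ophidia
sessions URL:       http://server.hostname/ophidia/sessions/<session_code>/...
filesystem path:    /var/www/html/ophidia/sessions/<session_code>/...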

User authorization files

User authorization files for a first installation are located in the authz/ folder, which must be copied into $prefix (this copy is not automatic in order to prevent possible deletion of existing authorization data in the installation folder). Moreover, the folder $prefix/authz/sessions must be created if not present.
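
For instance, assuming $prefix is set to /usr/local/ophidia/oph-server and the authz/ folder is available in the current source directory:

cp -r authz/ /usr/local/ophidia/oph-server/
mkdir -p /usr/local/ophidia/oph-server/authz/sessions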

User authorization files can be updated manually as explained in the following. However, it is suggested to exploit the user management tool oph_manage_user for a safer and quicker update.

$prefix/authz/users.dat

This file contains the credentials of Ophidia users in the format username:password. Usernames cannot contain special characters such as ‘:’ (colon) or ‘|’ (pipe).

For each user a folder named after the username has to be created in $prefix/authz/users/. In this folder add a file user.dat with the user-specific parameter settings and create another folder named sessions inside, where oph-server will save user session parameters.
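
As a sketch, for a hypothetical user named “oph-test”:

mkdir -p /usr/local/ophidia/oph-server/authz/users/oph-test/sessions
# then create /usr/local/ophidia/oph-server/authz/users/oph-test/user.dat with the user-specific settings (see below)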

There exists a particular user, called “framework”, that is exploited by oph-analytics-framework to deliver notifications to oph-server.

The “admin” privileges (i.e. restricted functionalities of oph-server related to operators OPH_LOG_INFO, OPH_SERVICE, OPH_SCRIPT, etc.) can be assigned to any user by setting the parameter OPH_IS_ADMIN to “yes” in the user configuration file.

$prefix/authz/users/<username>/user.dat

A possible configuration of file user.dat is given below:

OPH_OPENED_SESSIONS=0
OPH_MAX_SESSIONS=100
OPH_TIMEOUT_SESSION=1
OPH_MAX_CORES=8
OPH_MAX_HOSTS=1
OPH_SESSION_ID=
OPH_IS_ADMIN=no
OPH_CDD=/

Description of all parameters

OPH_OPENED_SESSIONS    ← Current number of sessions which the user can access (updated by the server automatically)
OPH_MAX_SESSIONS       ← Maximum number of sessions which the user can access
OPH_TIMEOUT_SESSION    ← Session timeout in "months"
OPH_MAX_CORES          ← Maximum number of cores that the user can use per single task
OPH_MAX_HOSTS          ← Maximum number of hosts that the user can use for reserved partitions
OPH_SESSION_ID         ← Last session visited by the user (updated by the server automatically)
OPH_IS_ADMIN           ← Option to access restricted functionalities
OPH_CDD                ← Current data folder (updated by the server automatically)
OPH_EXEC_MODE          ← Last value of the argument 'exec_mode' (updated by the server automatically)
OPH_NCORES             ← Last value of the argument 'ncores' (updated by the server automatically)
OPH_OS_USERNAME        ← Identifier of OS user used to execute Ophidia operators (Ophidia username by default)

Resource manager integration

Ophidia Server is able to submit commands to the computing resources by exploiting various resource managers. A description of how to perform the integration with the resource manager is provided on a specific page.

Server certificates

To run the server you need both a server certificate (myserver.pem) and the related certification authority certificate (cacert.pem). You can create both files by using OpenSSL. A sample creation procedure is described below:

openssl req -newkey rsa:1024 \
    -passout pass:abcd \
    -subj "/" -sha1 \
    -keyout rootkey.pem \
    -out rootreq.pem
openssl x509 -req -in rootreq.pem \
    -passin pass:abcd \
    -sha1 -extensions v3_ca \
    -signkey rootkey.pem \
    -out rootcert.pem
cat rootcert.pem rootkey.pem  > cacert.pem

openssl req -newkey rsa:1024 \
    -passout pass:abcd \
    -subj "/" -sha1 \
    -keyout serverkey.pem \
    -out serverreq.pem
openssl x509 -req \
    -in serverreq.pem \
    -passin pass:abcd \
    -sha1 -extensions usr_cert \
    -CA cacert.pem  \
    -CAkey cacert.pem \
    -CAcreateserial \
    -out servercert.pem
cat servercert.pem serverkey.pem rootcert.pem > myserver.pem

Warning

This sample procedure yields an anonymous, insecure, self-signed certificate; it is recommended to use a valid certificate issued by a real certificate authority if the Ophidia service is to be used in a public production environment.

For further information regarding this step, refer to the commands req and x509 of the OpenSSL tool.

After that, the “myserver.pem” and “cacert.pem” files have to be copied into the $prefix/etc/cert folder.
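
For instance, assuming $prefix is set to /usr/local/ophidia/oph-server:

mkdir -p /usr/local/ophidia/oph-server/etc/cert
cp myserver.pem cacert.pem /usr/local/ophidia/oph-server/etc/cert/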

Extra configuration

Particular definitions in Makefile.am

WITH_OPENSSL       ← Enable the use of SSL in SOAP exchanges
HAVE_OPENSSL_SSL_H ← Enable the option WITH_OPENSSL
COMMAND_TO_JSON    ← Enable wrapping of simple commands into JSON Requests
LEVEL2             ← Enable JSON Response direct release for single commands

Other configuration options

Other configuration options that can be specified at compile time (when building from source code):

  • --enable-ssh-lib: in case an SSH connection to a remote scheduler master is required (see the example below).
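
For example, a configure invocation enabling libssh support might look like this (a sketch assuming the sample installation paths used above):

./configure \
    --prefix=/usr/local/ophidia/oph-server \
    --with-framework-path=/usr/local/ophidia/oph-analytics-framework \
    --enable-ssh-lib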

Debugging

By enabling debugging with the option --enable-debug at build time, oph-server is configured to use a dummy scheduler based on mpirun only; in this case no queue is used and Slurm (or LSF) is bypassed.

By undefining USE_MPI in src/rmanager.c, even multiprocessing will be disabled.

Firewall configuration

In order to allow the communication between the analytics framework and oph_server, configure your firewall properly, for example:

[...]
4    ACCEPT     tcp  --  192.168.10.101         0.0.0.0/0           tcp dpt:11732
5    ACCEPT     tcp  --  192.168.10.201         0.0.0.0/0           tcp dpt:11732
[...]

where the IPs 192.168.10.xxx identify the oph_term and analytics framework hosts and 11732 is the port number used by oph_server (see the parameter PORT in $prefix/etc/server.conf).
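
As a sketch, assuming iptables is used and the sample addresses above, such rules could be added as follows:

iptables -A INPUT -p tcp -s 192.168.10.101 --dport 11732 -j ACCEPT
iptables -A INPUT -p tcp -s 192.168.10.201 --dport 11732 -j ACCEPT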

WSDL Description

At the following link you can find the WSDL related to the WS service offered by Ophidia Server: WSDL.