Ophidia Server

Server configuration

Warning

In the following configuration files the passwords are ‘abcd’; you should change them if the Ophidia service is to be used in a public production environment.

To make your server run properly, you need both a server certificate (myserver.pem) and the related certification authority certificate (cacert.pem). You can create both files with OpenSSL; a sample creation procedure is described below:

openssl req -newkey rsa:1024 \
    -passout pass:abcd \
    -subj "/" -sha1 \
    -keyout rootkey.pem \
    -out rootreq.pem
openssl x509 -req -in rootreq.pem \
    -passin pass:abcd \
    -sha1 -extensions v3_ca \
    -signkey rootkey.pem \
    -out rootcert.pem
cat rootcert.pem rootkey.pem  > cacert.pem

openssl req -newkey rsa:1024 \
    -passout pass:abcd \
    -subj "/" -sha1 \
    -keyout serverkey.pem \
    -out serverreq.pem
openssl x509 -req \
    -in serverreq.pem \
    -passin pass:abcd \
    -sha1 -extensions usr_cert \
    -CA cacert.pem  \
    -CAkey cacert.pem \
    -CAcreateserial \
    -out servercert.pem
cat servercert.pem serverkey.pem rootcert.pem > myserver.pem
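Before copying the files, it may be worth checking that the generated chain is consistent. The commands below are a sketch using the file names from the procedure above: the first verifies that the server certificate is signed by the CA, and the last two print digests that must be identical if the certificate and private key actually match.

```shell
# Verify that the server certificate chains back to the CA
openssl verify -CAfile cacert.pem servercert.pem

# Check that certificate and private key belong together:
# the two digests printed below must be identical
openssl x509 -noout -modulus -in servercert.pem | openssl md5
openssl rsa  -noout -modulus -in serverkey.pem -passin pass:abcd | openssl md5
```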

Warning

This sample procedure yields an anonymous, insecure, self-signed certificate; you should use a valid certificate issued by a real certificate authority if the Ophidia service is to be used in a public production environment.

For further information on this step, refer to the req and x509 commands of the OpenSSL tool.

After that, the “myserver.pem” and “cacert.pem” files have to be copied into the $prefix/etc/cert folder.

Copy the folder authz/ into $prefix (this copy is not automatic in order to prevent possible deletion of existing authorization data in the installation folder). Create the folder $prefix/authz/sessions if it does not exist.
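As a sketch, assuming $prefix points at the oph-server installation prefix and the current directory contains the authz/ folder to deploy, the step amounts to:

```shell
# $prefix is assumed to be the oph-server installation prefix
prefix=/usr/local/ophidia/oph-server

# copy authz/ without clobbering authorization data already present
cp -rn authz/ "$prefix/"

# create the sessions folder if not present
mkdir -p "$prefix/authz/sessions"
```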

Customise the following configuration files. Consider adopting the user management tool oph_manage_user to update them quickly.

$prefix/authz/users.dat

This file contains the credentials of Ophidia users in the format username:password. Usernames cannot contain special characters such as ‘:’ (colon) or ‘|’ (pipe).

For each user, a folder named after the username has to be created in $prefix/authz/users/. In this folder add a file user.dat with the user-specific parameter settings, and create a subfolder sessions, where oph-server will save the user’s session parameters.
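For instance, registering a hypothetical user “alice” by hand (the oph_manage_user tool performs equivalent steps automatically) could look like this; the password and the minimal user.dat content are illustrative only:

```shell
prefix=/usr/local/ophidia/oph-server   # assumed installation prefix
user=alice                             # hypothetical username

# append the credentials to users.dat (format username:password)
echo "$user:abcd" >> "$prefix/authz/users.dat"

# create the per-user folder with its sessions subfolder
mkdir -p "$prefix/authz/users/$user/sessions"

# minimal user.dat; see the sample below for the full parameter list
cat > "$prefix/authz/users/$user/user.dat" <<EOF
OPH_OPENED_SESSIONS=0
OPH_MAX_SESSIONS=100
OPH_IS_ADMIN=no
EOF
```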

There exists a particular user, called “framework”, that is used by oph-analytics-framework to deliver notifications to oph-server.

The “admin” privileges (i.e. the restricted functionalities of oph-server related to operators such as OPH_LOG_INFO, OPH_SERVICE and OPH_SCRIPT) can be granted to any user by setting the parameter OPH_IS_ADMIN in the user configuration file.

$prefix/authz/users/<username>/user.dat

A possible configuration of file user.dat is given below:

OPH_OPENED_SESSIONS=0
OPH_MAX_SESSIONS=100
OPH_TIMEOUT_SESSION=1
OPH_MAX_CORES=8
OPH_MAX_HOSTS=1
OPH_IS_ADMIN=no
OPH_SESSION_ID=

Description of the parameters

OPH_OPENED_SESSIONS    ← Current number of sessions that the user can access
OPH_MAX_SESSIONS       ← Maximum number of sessions that the user can access
OPH_TIMEOUT_SESSION    ← Session timeout in "months"
OPH_MAX_CORES          ← Maximum number of cores that can be used per single task
OPH_MAX_HOSTS          ← Inactive; ignored
OPH_IS_ADMIN           ← Option to access restricted functionalities
OPH_SESSION_ID         ← Last session visited by the user

$prefix/etc/ophidiadb.conf

This file has to be changed according to the OphidiaDB parameters: database name, IP address or hostname, port number of the MySQL server, username and password.

OPHDB_NAME=ophidiadb
OPHDB_HOST=mysql.hostname
OPHDB_PORT=3306
OPHDB_LOGIN=root
OPHDB_PWD=abcd

mysql.hostname is the hostname of the MySQL node. You can set it to 127.0.0.1 for an all-in-one instance of the Ophidia platform.

$prefix/etc/server.conf

This file has to be customized according to the following comments.

The parameters WEB_SERVER and WEB_SERVER_LOCATION must match the corresponding parameters set for oph-analytics-framework in oph_configuration. The password used to generate the certificates/keys should be set as the CERT_PASSWORD parameter.

An example of configuration file is given below:

TIMEOUT=3600
INACTIVITY_TIMEOUT=31536000
WORKFLOW_TIMEOUT=86400
LOGFILE=/usr/local/ophidia/oph-server/log/server.log
CERT=/usr/local/ophidia/oph-server/etc/cert/myserver.pem
CA=/usr/local/ophidia/oph-server/etc/cert/cacert.pem
CERT_PASSWORD=abcd
RMANAGER_CONF_FILE=/usr/local/ophidia/oph-server/etc/rmanager.conf
AUTHZ_DIR=/usr/local/ophidia/oph-server/authz
TXT_DIR=/usr/local/ophidia/oph-analytics-framework/log
WEB_SERVER=http://server.hostname/ophidia
WEB_SERVER_LOCATION=/var/www/html/ophidia
OPERATOR_CLIENT=/usr/local/ophidia/oph-analytics-framework/bin/oph_analytics_framework
IP_TARGET_HOST=cluster.hostname
SUBM_USER=ophidia
SUBM_USER_PUBLK=/usr/local/ophidia/.ssh/id_dsa.pub
SUBM_USER_PRIVK=/usr/local/ophidia/.ssh/id_dsa
OPH_XML_URL=http://server.hostname/ophidia/operators_xml
OPH_XML_DIR=/var/www/html/ophidia/operators_xml
NOTIFIER=framework
SERVER_FARM_SIZE=16
QUEUE_SIZE=0
HOST=server.hostname
PORT=11732
PROTOCOL=https
AUTO_RETRY=3
POLL_TIME=0
BASE_SRC_PATH=/data/repository
BASE_BACKOFF=0

server.hostname is the hostname of Ophidia Server node. You can set it to 127.0.0.1 for an all-in-one instance of Ophidia platform.

cluster.hostname is the hostname of the node where the resource manager (e.g. Slurm) is running. You can set it to 127.0.0.1 for an all-in-one instance of Ophidia platform.

This configuration is related to an installation with

  • $prefix set to /usr/local/ophidia/oph-server
  • $framework-path set to /usr/local/ophidia/oph-analytics-framework

These configuration parameters can be set as options of the “configure” script as follows:

./configure \
    --prefix=/usr/local/ophidia/oph-server \
    --with-framework-path=/usr/local/ophidia/oph-analytics-framework

Description of the parameters (lines can be commented using #):

TIMEOUT             ← request timeout
INACTIVITY_TIMEOUT  ← server timeout (to shutdown) when no request is received
WORKFLOW_TIMEOUT    ← maximum serving time for a workflow
LOGFILE             ← position of log file; if not given, "stdout" is used instead
CERT                ← path to server certificate
CA                  ← path to certification authority certificate
CERT_PASSWORD       ← password of server certificate
RMANAGER_CONF_FILE  ← position of "rmanager.conf" on FS
AUTHZ_DIR           ← position of folder authz/ used to manage authorization data
TXT_DIR             ← position of the folder where operator-specific logs will be saved (only for debug mode)
WEB_SERVER          ← prefix used in sessionid (*protocol://hostname/subfolders*)
WEB_SERVER_LOCATION ← position of web server root on FS
OPERATOR_CLIENT     ← executable of oph-analytics-framework on FS
IP_TARGET_HOST      ← IP address/DNS name of the host where the scheduler is running
SUBM_USER           ← linux username used to submit job to the scheduler
SUBM_USER_PUBLK     ← position on FS of the public key used for SSH (required only in case libssh is adopted, see below)
SUBM_USER_PRIVK     ← position on FS of the private key used for SSH (required only in case libssh is adopted, see below)
OPH_XML_URL         ← URL to the folder containing XML description of operations
OPH_XML_DIR         ← position of the folder containing XML description on FS
NOTIFIER            ← username used by the framework to release notifications
SERVER_FARM_SIZE    ← maximum number of active workflows (0 means infinity)
QUEUE_SIZE          ← maximum number of queued workflows (0 means infinity)
HOST                ← IP address/DNS name of the host where Ophidia Server is running
PORT                ← port number of oph_server
PROTOCOL            ← protocol used by Ophidia Server
AUTO_RETRY          ← maximum number of times the server will retry a task submission in case of scheduler errors
POLL_TIME           ← polling time in seconds for job monitoring schema
BASE_SRC_PATH       ← mount point (or folder) containing the local files whose presence can be checked by the operator OPH_WAIT
BASE_BACKOFF        ← initial value of backoff interval used for auto-retry (in seconds)

The parameter PORT must be equal to parameter SOAP_PORT of oph-analytics-framework configuration set in file oph_soap_configuration.

The parameter NOTIFIER must be equal to parameter SOAP_USERNAME of oph-analytics-framework configuration set in file oph_soap_configuration.
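These two constraints can be checked with a quick script. The following is a sketch using the sample paths from this section; the location of oph_soap_configuration is an assumption and may differ on your installation.

```shell
SRV=/usr/local/ophidia/oph-server/etc/server.conf
SOAP=/usr/local/ophidia/oph-analytics-framework/etc/oph_soap_configuration  # assumed path

# extract the values to compare
port=$(grep '^PORT=' "$SRV" | cut -d= -f2)
soap_port=$(grep '^SOAP_PORT=' "$SOAP" | cut -d= -f2)
notifier=$(grep '^NOTIFIER=' "$SRV" | cut -d= -f2)
soap_user=$(grep '^SOAP_USERNAME=' "$SOAP" | cut -d= -f2)

# report any mismatch
[ "$port" = "$soap_port" ]     || echo "PORT ($port) != SOAP_PORT ($soap_port)"
[ "$notifier" = "$soap_user" ] || echo "NOTIFIER ($notifier) != SOAP_USERNAME ($soap_user)"
```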

Add the credentials of the notifier to the authorization data of oph-server stored in authz/users.dat.

Copy the XML descriptions stored in $prefix/etc/xml/ into OPH_XML_DIR.
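Using the sample paths from the server.conf above, and assuming the descriptions are plain .xml files, this amounts to:

```shell
prefix=/usr/local/ophidia/oph-server        # assumed installation prefix
xml_dir=/var/www/html/ophidia/operators_xml # OPH_XML_DIR from server.conf

# publish the operator XML descriptions on the web server
mkdir -p "$xml_dir"
cp "$prefix"/etc/xml/*.xml "$xml_dir"/
```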

The default parameter values (e.g. the file name “myserver.pem” and “cacert.pem”) are defined in src/oph_gather.h.

$prefix/etc/rmanager.conf

This file has to be adapted to the scheduler adopted for job submission. In the case of Slurm, the following configuration should be adequate.

NAME=Slurm
SUBM_CMD=srun
SUBM_ARGS=--mpi=pmi2 --input=none
SUBM_USERNAME=
SUBM_GROUP=
SUBM_NCORES=-n
SUBM_INTERACT=
SUBM_BATCH=
SUBM_STDOUTPUT=-o
SUBM_STDERROR=-e
SUBM_POSTFIX=>/dev/null 2>&1 </dev/null
SUBM_JOBNAME=-J
SUBM_CANCEL=scancel -n
SUBM_JOBCHECK=squeue -o "%j"

The parameters have to be adapted according to the specific resource manager used by the cluster. A new configuration file could be created in etc/rms/ as long as the symbolic link rmanager.conf is changed accordingly and the package is reinstalled manually.
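For example, a hypothetical profile for another resource manager could be plugged in as sketched below; the file name pbs.conf and its two entries are assumptions, to be filled in following the key=value scheme shown above, and the package must then be reinstalled manually as noted.

```shell
cd /usr/local/ophidia/oph-server/etc   # assumed location of the etc/ folder

# write the new profile following the key=value scheme shown above
cat > rms/pbs.conf <<EOF
NAME=PBS
SUBM_CMD=qsub
EOF

# repoint the symbolic link read by oph-server
ln -sf rms/pbs.conf rmanager.conf
```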

The parameters SUBM_USERNAME and SUBM_GROUP have to be set only in multi-user mode.

By default, the Ophidia Server submits every command as SUBM_USER. In multi-user mode the server asks the resource manager to use the Linux credentials of the user sending each command. Of course, the user SUBM_USER needs to be enabled to run SUBM_CMD in privileged mode.
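Since the multi-user configuration below prefixes srun and scancel with sudo, the submission user needs passwordless sudo for those commands. A hypothetical /etc/sudoers.d/ophidia fragment could look as follows; the binary paths are assumptions and should be checked (e.g. with `which srun`) on your system:

```
ophidia ALL=(ALL) NOPASSWD: /usr/bin/srun, /usr/bin/scancel
```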

To enable multi-user mode with Slurm, the resource manager parameters could be set as follows:

NAME=Slurm
SUBM_CMD=sudo srun
SUBM_ARGS=--mpi=pmi2 --input=none
SUBM_USERNAME=--uid=
SUBM_GROUP=--gid=ophidia
SUBM_NCORES=-n
SUBM_INTERACT=
SUBM_BATCH=
SUBM_STDOUTPUT=-o
SUBM_STDERROR=-e
SUBM_POSTFIX=>/dev/null 2>&1 </dev/null
SUBM_JOBNAME=-J
SUBM_CANCEL=sudo scancel -n
SUBM_JOBCHECK=squeue -o "%j"

This configuration assumes that for each Ophidia user there exists a Linux user with the same username on every compute node. In addition, every Linux user has to belong to the Linux group “ophidia”.

Ophidia Web Server

The WEB_SERVER parameter indicates the prefix of the URL used in sessionid and DOI, so it must include the possible path (list of parent subfolders) to the “sessions” folder. In case you wish to enable “secure web access” to JSON files, published cubes and exported NetCDF files with PHP-based authorization support, use the configuration option --enable-webaccess and add the RedirectMatch statement shown below to the Apache configuration (both VirtualHost :80 and VirtualHost :443).

Ophidia provides smart web access to the list of a user’s sessions and the related operator commands. To enable this feature, run the following commands as the root user.

Install the required packages:

sudo yum install mod_ssl php-mysql php-devel php-gd php-pecl-memcache php-pspell php-snmp php-xmlrpc php-xml

Create the ssl certificate:

cd /etc/httpd/conf
sudo mkdir ssl
sudo openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -out server.crt -keyout server.key

Configure the httpd server:

sudo vi httpd.conf

Add the following statement to VirtualHost 80 and VirtualHost 443:

RedirectMatch permanent /ophidia/sessions/(.*) /ophidia/sessions.php/$1

This configuration line has to be adapted to the subfolders set in your Ophidia web server root directory. For example, if your web server is http://server.hostname/site, then the line should be

RedirectMatch permanent /site/sessions/(.*) /site/sessions.php/$1

Configuration of virtual host 80 should be:

NameVirtualHost *:80
<VirtualHost *:80>
RedirectMatch permanent /ophidia/sessions/(.*) /ophidia/sessions.php/$1
</VirtualHost>
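A minimal sketch of the HTTPS virtual host follows; the mod_ssl directives and certificate paths are assumptions (they presume the server.crt and server.key created above were placed in the ssl/ folder) and may need adapting to your distribution:

```
<VirtualHost *:443>
SSLEngine on
SSLCertificateFile /etc/httpd/conf/ssl/server.crt
SSLCertificateKeyFile /etc/httpd/conf/ssl/server.key
RedirectMatch permanent /ophidia/sessions/(.*) /ophidia/sessions.php/$1
</VirtualHost>
```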

Alternatively, if the web server allows .htaccess, you can skip the above step and exploit the .htaccess file available in the web folder.

Restart the httpd service:

sudo service httpd restart

Open a browser and connect to http://server.hostname/ophidia, setting server.hostname appropriately, i.e. the hostname of the Ophidia Server node (you can set it to 127.0.0.1 for an all-in-one instance of the Ophidia platform).

Enter your username and password and browse through your sessions!

Debugging

By enabling debugging with the option --enable-debug at configuration time, oph-server is configured to use a dummy scheduler based on mpirun only; in this case no queue is used and Slurm (or LSF) is bypassed.

By undefining USE_MPI in src/rmanager.c, multiprocessing is disabled as well.

Other configuration options

  • --enable-ssh-lib: to be used in case an SSH connection to the remote scheduler master is required.

Particular definitions in Makefile.am

WITH_OPENSSL       ← Enable the use of SSL in SOAP exchanges
HAVE_OPENSSL_SSL_H ← Enable the option WITH_OPENSSL
COMMAND_TO_JSON    ← Enable wrapping of simple commands into JSON Requests
LEVEL2             ← Enable JSON Response direct release for single commands

Firewall configuration

To allow communication between the analytics framework and oph_server, configure your firewall properly, for example:

[...]
4    ACCEPT     tcp  --  192.168.10.101         0.0.0.0/0           tcp dpt:11732
5    ACCEPT     tcp  --  192.168.10.201         0.0.0.0/0           tcp dpt:11732
[...]

where the IPs 192.168.10.xxx identify the oph_term and analytics framework hosts, and 11732 is the port number used by oph_server (see the PORT parameter in $prefix/etc/server.conf).
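With iptables, rules equivalent to the listing above could be added as follows (to be run as root; the addresses are the example ones and must be replaced with those of your hosts):

```shell
# allow the oph_term and analytics framework hosts to reach
# oph_server on port 11732 (the PORT value in server.conf)
iptables -A INPUT -p tcp -s 192.168.10.101 --dport 11732 -j ACCEPT
iptables -A INPUT -p tcp -s 192.168.10.201 --dport 11732 -j ACCEPT
```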

WSDL Description

At the following link you can find the WSDL related to the WS service offered by Ophidia Server: WSDL.