Installing OVE
OVE needs to be installed before it can be used to control a display. OVE can be installed either by downloading and compiling the source code of the corresponding components or by running a specific installer available on the OVE Install repository.
All contributors to OVE are encouraged to download and compile the source code. All users of OVE are encouraged to use the OVE installers.
Please refer to the OVE Asset Manager installation guide to install the OVE Asset Manager.
Installation by running OVE installers
OVE Install scripts are designed to install OVE into a Docker environment.
Prerequisites
Docker is available in two versions: a free Community Edition (CE), and an Enterprise Edition (EE) that includes commercial support. The Community Edition should be sufficient for most users of OVE, and can be installed by following the instructions for Docker Desktop for Windows, Docker Desktop for Mac or Docker CE for Linux. If you are using a version of Windows or Mac OS that does not meet the requirements listed for the Docker Desktop installer (either because it is too old, or because it is the Home edition of Windows, rather than Pro, Enterprise or Education), you should instead install the legacy Docker Toolbox.
Building installers for non-supported platforms also requires:
Downloading the OVE installers
The OVE Install scripts are available for Linux, Mac (OS X) and Windows operating systems either as a Python 3 or a Python 2 executable application:
Building installers for non-supported platforms
OVE Install provides tools for building the setup script for non-supported platforms. The master branch of OVE Install needs to be cloned in order to proceed:
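For example, assuming the repository lives under the OVE organisation on GitHub:

```shell
# Clone the master branch of OVE Install and enter the directory
git clone https://github.com/ove/ove-install.git
cd ove-install
```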
Refer to the guidelines on developing/building a single setup file for detailed setup instructions.
Running the installers
Once downloaded, the installation script may not have execute permission on Linux and Mac operating systems. To fix this, run the following command:
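For example (the filename setup-ove.py is an assumption; use the name of the script you actually downloaded):

```shell
# Grant execute permission to the downloaded installer
chmod +x setup-ove.py
```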
Running the executable will start the step-by-step installation process. This will configure the details of the deployment environment such as hostname, port numbers and environment variables.
The ports are pre-configured to a list of common defaults, but can be changed based on end-user requirements. Each port or port-range is defined as a mapping HOST_PORT:CONTAINER_PORT. Only the host ports can be changed; the container ports must not be modified.
Each installer is capable of installing the current stable, latest unstable or a previous stable version.
Resolving port conflicts
Once the docker-compose.setup.ove.yml file is generated, it is important to ensure that none of the HOST_PORT values defined in it are currently in use. Any that are in use need to be changed. For example, if another Tuoris instance exists on the host machine, port 7080 is most likely already in use; in such a situation, the Tuoris HOST_PORT needs to be changed in the docker-compose.setup.ove.yml file.
Environment variables
Please note that the references to Hostname (or IP address) noted below should not be replaced with localhost or the Docker hostname, because these services need to be accessible from the client/browser. Replace them with the public hostname or IP address of the host machine. For a local installation, the host machine is your own computer; for a server installation, it is the server on which the Docker environment has been set up. The default PORT numbers for OVE core, Tuoris, OpenVidu, and other services are provided in the Running OVE section.
Before starting up OVE you must configure the environment variables either by providing them during the installation process or by editing the generated docker-compose.setup.ove.yml file. The environment variables that can be configured are:
- OVE_HOST - Hostname (or IP address) + port of OVE core.
- OPENVIDU_HOST - Hostname (or IP address) + port of the OpenVidu service (dependency of the WebRTC App).
- openvidu.publicurl - https:// + Hostname (or IP address) + port of the OpenVidu service (dependency of the WebRTC App).
- OPENVIDU_SECRET - The OpenVidu secret. Must match openvidu.secret configured below.
- openvidu.secret - The OpenVidu secret. Must match OPENVIDU_SECRET configured above.
- OVE_SPACES_JSON - This variable is optional and not defined in the docker-compose.setup.ove.yml by default. It accepts a URL for the Spaces.json file to be used as a replacement for the default (embedded) Spaces.json file available with OVE.
- LOG_LEVEL - This variable is optional and not defined in the docker-compose.setup.ove.yml by default. It can have values from 0 to 6 and defaults to 5. The values correspond to:
  - 0 - FATAL
  - 1 - ERROR
  - 2 - WARN (the recommended LOG_LEVEL for production deployments)
  - 3 - INFO
  - 4 - DEBUG
  - 5 - TRACE
  - 6 - TRACE_SERVER (generates additional server-side TRACE logs)
- OVE_PERSISTENCE_SYNC_INTERVAL - This variable is optional and not defined in the docker-compose.setup.ove.yml by default. It accepts an interval (in milliseconds) for synchronising an instance of OVE or of an OVE application with a registered persistence service. This optional variable can be set individually for OVE core and for all OVE applications. The default value is 2000.
- OVE_CLOCK_SYNC_ATTEMPTS - This variable is optional and not defined in the docker-compose.setup.ove.yml by default. It accepts the number of attempts made by each OVE client before it requests the server to synchronise its clock. The default value is 5. If this value is set to 0, the synchronisation process will not take place. Changing the value will increase or reduce the accuracy of the detections.
- OVE_CLOCK_SYNC_INTERVAL - This variable is optional and not defined in the docker-compose.setup.ove.yml by default. It accepts an interval (in milliseconds) for running the synchronisation algorithm. The default value is 120000 (2 minutes).
- OVE_CLOCK_RE_SYNC_INTERVAL - This variable is optional and not defined in the docker-compose.setup.ove.yml by default. It accepts an interval (in milliseconds) for requesting all clients to re-synchronise their clocks. It is entirely up to the client to decide whether they want to re-sync or not. The default value is 3600000 (1 hour).
- OVE_<APP_NAME_IN_UPPERCASE>_CONFIG_JSON - This variable is optional and not defined in the docker-compose.setup.ove.yml by default. It accepts a path to an application-specific config.json file. This optional variable is useful when application-specific configuration files are provided at alternative locations on a filesystem (such as when using Docker secrets). <APP_NAME_IN_UPPERCASE> must be replaced with the name of the application in upper-case. For example, the corresponding environment variable for the Networks App would be OVE_NETWORKS_CONFIG_JSON.
- OVE_MAPS_LAYERS - This variable is optional and not defined in the docker-compose.setup.ove.yml by default. It accepts a URL of a file containing the Map layers Configuration in a JSON format and overrides the default Map layers Configuration of the Maps App.
The OpenVidu server also accepts several other optional environment variables that are not defined in the docker-compose.setup.ove.yml by default. These are explained in the documentation on OpenVidu server configuration parameters.
Other configuration files
A few other configuration files can be found inside the config directory that is auto-generated along with the docker-compose.setup.ove.yml file:
- Spaces.json - The default (embedded) spaces configuration of OVE can be modified prior to the initial start-up of OVE. To learn more, please refer to the documentation on Spaces.json.
Using your own certificates for OpenVidu
OpenVidu is a prerequisite for using the WebRTC App. OpenVidu uses secure WebSockets and therefore requires certificates. Unless you provide your own certificate, it will use a self-signed certificate, which becomes inconvenient when loading the WebRTC App on multiple web browsers.
You can run OpenVidu with your own certificate by first creating a new Java Key Store, following the OpenVidu guide on using your own certificate. This will subsequently require the following changes in the auto-generated docker-compose.setup.ove.yml file:
To add a trusted CA certificate (trusted_ca.cer) to your Java Key Store, run:
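A minimal sketch using the JDK keytool (the keystore filename and alias are assumptions; use the values from the Java Key Store you created earlier):

```shell
# Import the trusted CA certificate into the keystore used by OpenVidu
keytool -importcert -trustcacerts -alias trusted_ca \
        -file trusted_ca.cer -keystore openvidu.jks
```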
Starting and stopping the OVE Docker applications
OVE provides separate installation scripts to help users install the necessary components. To install and start OVE on Docker run:
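A typical invocation, assuming the installer generated the docker-compose.setup.ove.yml file in the current directory:

```shell
# Pull the images and start all OVE containers in the background
docker-compose -f docker-compose.setup.ove.yml up -d
```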
Please note that the OVE UI components are re-built when the respective Docker container is started for the first time, which may result in them taking longer than expected to start. nginx will display a 502 Bad Gateway status while the OVE UI components are built and started up for the first time. If you see this status message, please allow 5 to 30 minutes for the installation to complete. A working internet connection is also required for this initial installation process.
If you wish to install OVE without it automatically starting, use the command:
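One way to do this with Docker Compose, assuming the generated Compose file, is to create the containers without starting them:

```shell
# Create (but do not start) the OVE containers
docker-compose -f docker-compose.setup.ove.yml up --no-start
```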
Once the installation procedure has completed and OVE has been started, the successful installation of OVE can be verified by accessing the OVE home page (located at: http://OVE_CORE_HOST:PORT as noted in the Running OVE section) using a web browser.
Once the services have started, you can check their status by running:
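Using the standard Docker CLI:

```shell
# List running containers and their status
docker ps
```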
The ps command will list containers along with their CONTAINER_ID. Then, to check logs of an individual container, run:
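For example:

```shell
# Follow the logs of a single container
# (replace CONTAINER_ID with a value listed by docker ps)
docker logs -f CONTAINER_ID
```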
To stop the Docker application run:
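Assuming the generated Compose file:

```shell
# Stop the OVE containers without removing them
docker-compose -f docker-compose.setup.ove.yml stop
```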
To clean-up the Docker runtime first stop any active instances and then run:
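Assuming the generated Compose file:

```shell
# Remove the stopped containers and their networks
docker-compose -f docker-compose.setup.ove.yml down
```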
Installation from source code
All OVE projects use a build system based on Lerna. Most OVE projects are based on Node.js, compiled with Babel, and deployed on a PM2 runtime. Some OVE projects are based on Python.
Prerequisites
- NPX (install with the command: npm install --global npx)
- PM2 (install with the command: npm install --global pm2)
- Lerna (install with the command: npm install --global lerna)
Compiling source code for the Docker environment also requires:
The SVG App requires:
- Tuoris (installation instructions available on its GitHub repository)
The WebRTC App requires:
Downloading source code
All OVE projects can be downloaded from their GitHub repositories:
The master branch of each repository contains the latest code, and can also be cloned if you intend to contribute code or fix issues:
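For example, to clone OVE core (other repositories follow the same pattern):

```shell
# Clone the master branch of the OVE core repository
git clone https://github.com/ove/ove.git
```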
Once the source code has been downloaded OVE can be installed either on a local Node.js environment (such as PM2's Node.js environment) or within a Docker environment. The two approaches are explained below.
Setting up local nginx installation
The OVE core, applications, services and UIs are made up of multiple microservices running on their own ports. The OVE Docker applications use nginx to overcome the complexity of end-users having to expose multiple ports on their systems. Therefore, for a local OVE installation, having nginx installed with the OVE default configuration is recommended. This will ensure all URLs found in the documentation would work without any modification.
Replace /etc/nginx/conf.d/default.conf (which might be found in a different path depending on the OS) with the contents of the OVE default configuration. Restart nginx if it is already running.
With this configuration, OVE will run on port 80 by default. To run OVE on its standard port 8080 instead, change the server listen port in the nginx configuration to 8080. This will, however, conflict with the OVE core microservice, which also runs on port 8080 by default. To move the OVE core microservice to port 9080, modify the pm2.json file found inside the git clone of OVE core by replacing 8080 with 9080, then change the proxy_pass http://localhost:8080 line to proxy_pass http://localhost:9080 in the nginx configuration.
Compiling source code for a local Node.js environment
Once you have cloned or downloaded the code, OVE can be compiled using the Lerna build system:
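A typical Lerna-based build, run from the root of the cloned repository (the exact targets may differ between OVE versions; check the repository's README):

```shell
# Install dependencies for all packages and link cross-dependencies
lerna bootstrap --hoist

# Build all packages
lerna run build
```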
Instructions above are only provided for the OVE Core repository. The steps to follow are similar for other repositories.
Starting and stopping OVE using the PM2 process manager
The SVG App requires an instance of Tuoris to be available before starting it. To start Tuoris run:
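A minimal sketch, assuming Tuoris has been cloned and set up as described in its own README (the directory name is an assumption; port 7080 matches the Tuoris default mentioned above):

```shell
# Start Tuoris on OVE's default Tuoris port
cd tuoris
npm install
PORT=7080 npm start
```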
The WebRTC App requires an instance of OpenVidu to be available before starting it. To start OpenVidu run:
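OpenVidu publishes a Docker image that can be run directly (the secret value below is a placeholder; it must match the OPENVIDU_SECRET configured for OVE):

```shell
# Start an OpenVidu server on its default port
docker run -p 4443:4443 --rm \
    -e openvidu.secret=MY_SECRET \
    openvidu/openvidu-server-kms
```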
OVE can then be started using the PM2 process manager. To start OVE on a Linux or MacOS environment run:
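A sketch assuming the pm2.json process file at the repository root (replace the hostname and port placeholders as described below):

```shell
# Start all OVE processes defined in pm2.json
OVE_HOST=OVE_CORE_HOST:PORT TUORIS_HOST=TUORIS_HOST:PORT pm2 start pm2.json
```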
To start OVE on a Windows environment run:
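The equivalent sketch using cmd.exe syntax for the environment variables (same placeholders as above):

```shell
:: Set the environment variables, then start the processes defined in pm2.json
set OVE_HOST=OVE_CORE_HOST:PORT
set TUORIS_HOST=TUORIS_HOST:PORT
pm2 start pm2.json
```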
By default, OVE core and all services run on localhost, which should be used in place of OVE_CORE_HOST and TUORIS_HOST names above. The default PORT numbers for OVE core, Tuoris and OpenVidu are provided in the Running OVE section.
Once the services have started, you can check their status by running:
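Using the PM2 CLI:

```shell
# List the status of all processes managed by PM2
pm2 list
```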
Then, to check logs of all services, run:
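For example:

```shell
# Stream the logs of all PM2-managed processes
pm2 logs
```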
To stop OVE processes managed by PM2 on a Linux or MacOS environment run:
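A sketch assuming the same pm2.json process file used to start OVE:

```shell
# Stop all OVE processes defined in pm2.json
pm2 stop pm2.json
```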
To stop OVE processes managed by PM2 on a Windows environment run:
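The PM2 command itself is typically the same on Windows (shown here in cmd.exe syntax; the process file name is assumed to be pm2.json):

```shell
:: Stop all OVE processes defined in pm2.json
pm2 stop pm2.json
```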
To clean-up processes managed by PM2 on a Linux or MacOS environment run:
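A sketch assuming the same pm2.json process file:

```shell
# Remove all OVE processes from PM2's process list
pm2 delete pm2.json
```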
To clean-up processes managed by PM2 on a Windows environment run:
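The equivalent on Windows (cmd.exe syntax; process file name assumed to be pm2.json):

```shell
:: Remove all OVE processes from PM2's process list
pm2 delete pm2.json
```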
Starting and stopping OVE UIs in Development
The OVE UI components are developed as React web applications. These components can therefore be launched in development mode, by running:
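React applications are conventionally started in development mode with the standard npm scripts, run from the UI component's directory (this assumes the component follows the usual create-react-app layout):

```shell
# Install dependencies and launch the development server
npm install
npm start
```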
Once launched, they can be stopped by killing the application using the Ctrl+C keyboard shortcut.
Unless OVE Core and the OVE Apps are running on localhost on their default ports, you will also need to modify the configuration file .env appropriately.
Compiling source code for a Docker environment
This approach currently works only for Linux and MacOS environments. The build.sh script corresponding to each repository can be found in the topmost directory of the cloned or downloaded repository or within a packages/PACKAGE_NAME directory corresponding to each package.
The build.sh script can be executed as:
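For example, from the root of the repository (check the script itself for any supported arguments):

```shell
# Build the Docker images for this repository
./build.sh
```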
Instructions above are only provided for the OVE Core repository. The steps to follow are similar for other repositories.
Starting and stopping the OVE Docker containers
Similar to the build.sh script, the docker-compose.yml file corresponding to each repository can also be found in the topmost directory of the cloned or downloaded repository or within a packages/PACKAGE_NAME directory corresponding to each package.
The deployment environment needs to be pre-configured before running these scripts.
To start each individual docker container run:
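Run from the directory containing the package's docker-compose.yml file:

```shell
# Start the container(s) defined in this package's docker-compose.yml
docker-compose up -d
```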
Once the services have started, you can check their status by running:
The ps command will list containers along with their CONTAINER_ID. Then, to check logs of an individual container, run:
To stop each individual Docker container run:
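From the same directory:

```shell
# Stop the container(s) without removing them
docker-compose stop
```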
To clean-up the Docker runtime first stop any active instances and then run:
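From the same directory:

```shell
# Remove the stopped container(s) and associated networks
docker-compose down
```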
Running OVE
It is recommended to use OVE with Google Chrome, as this is the web browser used for development and in production at the Data Science Institute. However, it should also be compatible with other modern web browsers: if you encounter any browser-specific bugs please report them as an Issue.
For details of how to use OVE, see the Usage page.
After installation, OVE will expose several resources that can be accessed through a web browser:
- OVE home page: http://OVE_CORE_HOST:PORT
- App control page: http://OVE_CORE_HOST:PORT/app/OVE_APP_NAME/control.html?oveSectionId=0
- OVE client pages: http://OVE_CORE_HOST:PORT/view.html?oveViewId=LocalNine-0 (check Spaces.json for more information)
- OVE JS library: http://OVE_CORE_HOST:PORT/ove.js
- OVE API docs: http://OVE_CORE_HOST:PORT/api-docs/
By default, OVE core, all apps, and all services run on localhost, which should be used in place of OVE_CORE_HOST above. Note that the Docker container might be given a different IP address from the machine on which it is running; in this case, the hostname localhost will not work, and you should instead use the IP address printed by the command docker-machine ip.
The default PORT numbers are: