Docker Open Source GPU
Follow these steps to install OmniSci as a Docker container on an Ubuntu machine with NVIDIA Kepler- or Pascal-series GPU cards.
Preparation
Prepare your host by installing NVIDIA drivers, Docker, and NVIDIA runtime.
Install NVIDIA Drivers
To install NVIDIA drivers, open a terminal window on the host. Run apt update and apt upgrade to ensure that you are using the latest operating system software.
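For example:

    sudo apt update
    sudo apt upgrade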
Use apt-get to install required utilities.
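The required utilities are not listed here; packages such as build-essential and the kernel headers are commonly needed for building NVIDIA kernel modules, so a typical (illustrative, not official) command might be:

    sudo apt-get install build-essential linux-headers-$(uname -r)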
Reboot your system to activate all of your changes.
Install CUDA
Verify that the gcc compiler is installed with the following command.
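For example:

    gcc --version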
If no version information returns, run the following command.
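For example:

    sudo apt install gcc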
To install the CUDA package, select the target platform: operating system (Linux), architecture (based on your environment), distribution (Ubuntu), version (based on your environment), and installer type (OmniSci recommends deb (network)).
Install the CUDA package using the installation instructions on the NVIDIA web site.
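Those instructions vary by CUDA release and by the selections you make; a deb (network) installation generally follows a pattern like the one below, where the repository path and keyring file name are placeholders rather than the exact NVIDIA commands.

    wget https://developer.download.nvidia.com/compute/cuda/repos/<distro>/<arch>/cuda-keyring_1.0-1_all.deb
    sudo dpkg -i cuda-keyring_1.0-1_all.deb
    sudo apt-get update
    sudo apt-get install cuda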
Reboot your system to ensure that all changes are active:
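For example:

    sudo reboot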
Install Docker
Remove any existing Docker installs and the legacy NVIDIA docker runtime.
Remove Docker.
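For example:

    sudo apt-get remove docker docker-engine docker.io
    sudo apt-get purge nvidia-docker    # only if the legacy runtime is present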
Update with apt-get.
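For example:

    sudo apt-get update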
Use curl to download and add the official Docker GPG key.
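A sketch of this step for Ubuntu (apt-key is used here; current Docker documentation stores the key in a keyring file instead):

    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -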
Add Docker to your Apt repository.
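For example, assuming an amd64 host:

    sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"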
Update your repository.
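For example:

    sudo apt-get update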
Install Docker, the command line interface, and the container runtime.
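These correspond to the docker-ce, docker-ce-cli, and containerd.io packages:

    sudo apt-get install docker-ce docker-ce-cli containerd.io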
Optional: Run the following usermod command so that docker command execution does not require sudo privilege. Log out and log back in for the changes to take effect.
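For example:

    sudo usermod -aG docker $USER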
Checkpoint
Verify your Docker installation.
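One common check is to run the hello-world image:

    sudo docker run hello-world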
Install NVIDIA Docker runtime
Use curl to add a GPG key:
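A sketch, using the key published for the nvidia-container-runtime packages:

    curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | sudo apt-key add -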
Update your sources list:
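A sketch that follows the nvidia-container-runtime packaging instructions; the distribution string is derived from /etc/os-release:

    distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
    curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list | sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list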
Update apt-get and install nvidia-container-runtime:
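For example:

    sudo apt-get update
    sudo apt-get install nvidia-container-runtime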
Edit /etc/docker/daemon.json to add the following, and save the changes:
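The exact contents are not reproduced here; a common configuration that registers the NVIDIA runtime (and, optionally, makes it the default) looks like this:

    {
        "default-runtime": "nvidia",
        "runtimes": {
            "nvidia": {
                "path": "/usr/bin/nvidia-container-runtime",
                "runtimeArgs": []
            }
        }
    }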
Restart the Docker daemon:
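For example:

    sudo systemctl restart docker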
Checkpoint
Verify Docker and NVIDIA runtime work together.
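One way to check is to run nvidia-smi inside a CUDA container; replace <tag> with a CUDA base image tag compatible with your driver:

    sudo docker run --runtime=nvidia --rm nvidia/cuda:<tag> nvidia-smi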
Standard NVIDIA-SMI output shows the GPUs in your instance.
Installation
Download OmniSci from DockerHub and start OmniSci in Docker.
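A sketch of a typical run command for the open source GPU image; the image name, storage path, and port range shown here are assumptions based on common OmniSci deployments, so confirm them against the DockerHub page:

    sudo docker run -d --runtime=nvidia -v /var/lib/omnisci:/omnisci-storage -p 6273-6280:6273-6280 omnisci/core-os-cuda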
On startup, you may see an error indicating that no omnisci.conf configuration file was found.
This is expected behavior, because OmniSci does not ship with a default omnisci.conf file.
To use OmniSci with the capabilities enabled by omnisci.conf (see the sketch after these steps):
Stop OmniSci.
Create the file as described in Configuration File and place it in /var/lib/omnisci.
Restart OmniSci.
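In Docker terms, the stop and restart steps might look like the following, where <container-id> is the OmniSci container and /var/lib/omnisci is the host directory mounted for storage:

    sudo docker stop <container-id>
    # create omnisci.conf in /var/lib/omnisci as described above
    sudo docker start <container-id>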
For more information on CUDA driver installation, see the CUDA Installation Guide.
See also the note regarding the CUDA JIT Cache in Optimizing Performance.
For more information on Docker installation, see the Docker Installation Guide.
Command-Line Access
You can access the command line in the Docker image to perform configuration and run OmniSci utilities.
You need to know the container ID to access the command line. Use the command below to list the running containers.
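For example:

    sudo docker ps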
The output lists the running containers and their container IDs.
Once you have your container ID, you can access the command line using the docker exec command. For example, the following command starts a Bash session in the container listed above. The -it switch makes the session interactive.
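A sketch, where <container-id> is the ID reported by docker ps:

    sudo docker exec -it <container-id> bash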
You can end the Bash session with the exit command.
Note: On Ubuntu, nvidia-docker has a dependency on nvidia-modprobe. If you receive the message "Error: Could not load UVM kernel module. Is nvidia-modprobe installed?", you can install nvidia-modprobe using the following commands.
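For example (restarting Docker afterward so that the runtime picks up the module):

    sudo apt-get update
    sudo apt-get install nvidia-modprobe
    sudo systemctl restart docker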
Activation
To verify that all systems are go, load some sample data and perform an omnisql query.
OmniSci ships with two sample datasets of airline flight information collected in 2008, and one dataset of New York City census information collected in 2015. To install the sample data, run the following command.
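A sketch of this step; insert_sample_data is the loader script commonly shipped in the container's working directory:

    sudo docker exec -it <container-id> ./insert_sample_data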
Where <container-id> is the container in which OmniSci is running.
When prompted, choose whether to insert dataset 1 (7,000,000 rows), dataset 2 (10,000 rows), or dataset 3 (683,000 rows). The examples below use dataset 2.
Connect to OmniSciDB by entering the following command (default password is HyperInteractive):
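A sketch, assuming omnisql is at bin/omnisql inside the container's working directory; log in as the default admin user and enter HyperInteractive when prompted for the password:

    sudo docker exec -it <container-id> bin/omnisql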
Enter a SQL query such as the following:
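For example, against the flights_2008_10k table loaded by dataset 2 (the table and column names here assume the standard sample data):

    SELECT origin_city AS "Origin", dest_city AS "Destination", AVG(airtime) AS "Average Airtime"
    FROM flights_2008_10k
    WHERE distance < 175
    GROUP BY origin_city, dest_city;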
If the query returns results, your OmniSci installation is working.