Red Hat JBoss Data Virtualization (JDV) is a lean, virtual data integration solution that unlocks trapped data and delivers it as easily consumable, unified, and actionable information. JDV makes data spread across physically diverse systems such as multiple databases, XML files, and Hadoop systems appear as a set of tables in a local database.
When deployed on OpenShift, JDV enables:
- Service-enabling your data
- Bringing data from outside to inside the PaaS
- Breaking up monolithic data sources virtually for a microservices architecture
Alongside the JDV for OpenShift image, we have made OpenShift templates available that let you test and bootstrap JDV.
This article will demonstrate how to get started with JDV running on OpenShift. JDV is available as a containerized xPaaS image that is designed for use with OpenShift Enterprise 3.2 and later. We’ll be using the Red Hat Container Development Kit (CDK) to get started quickly.
The CDK provides a pre-built container development environment based on Red Hat Enterprise Linux to help you develop container-based (sometimes called Docker) applications quickly. The containers you build can be easily deployed on any Red Hat container host or platform, including: Red Hat Enterprise Linux, Red Hat Enterprise Linux Atomic Host, and our platform-as-a-service solution, OpenShift Enterprise 3.
To get started, complete the following prerequisites:
- Download and install VirtualBox 5.0.26.
- Download and install Vagrant 1.8.1.
- Download Red Hat Container Tools and the CDK Vagrant box for VirtualBox. See the CDK Getting Started page for the installation instructions.
- Install additional Vagrant plugins to support Red Hat Subscription Management and other features. See the CDK 2.2 Installation Guide for more details.
Note: CDK 2.2 is known to not work correctly with VirtualBox 5.1.x. If you already have VirtualBox 5.1.x installed, downgrade your installation to VirtualBox 5.0.26. See the CDK 2.2 release notes for more information on VirtualBox and Vagrant compatibility.
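Because of that caveat, it is worth checking the installed VirtualBox version before starting the CDK. A minimal sketch of such a check (the `is_supported_vbox` helper is hypothetical; on a real host you would feed it the output of `VBoxManage --version`):

```shell
#!/bin/sh
# Print "ok" for VirtualBox 5.0.x, "unsupported" otherwise
# (CDK 2.2 is known not to work with VirtualBox 5.1.x).
is_supported_vbox() {
  case "$1" in
    5.0.*) echo "ok" ;;
    *)     echo "unsupported" ;;
  esac
}

# Hard-coded version strings for illustration; on a real host use:
#   is_supported_vbox "$(VBoxManage --version)"
is_supported_vbox "5.0.26"   # prints "ok"
is_supported_vbox "5.1.4"    # prints "unsupported"
```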
Start CDK Using Vagrant
After performing the installation steps above, you should be able to start the CDK as shown below.
```shell
$ cd $CDK_HOME/components/rhel/rhel-ose
$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
    default: Adapter 2: hostonly
==> default: Forwarding ports...
    default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: Warning: Remote connection disconnect. Retrying...
==> default: Machine booted and ready!
==> default: Registering box with vagrant-registration...
    default: Would you like to register the system now (default: yes)? [y|n]n
==> default: Checking for guest additions in VM...
    default: No guest additions were detected on the base box for this VM! Guest
    default: additions are required for forwarded ports, shared folders, host only
    default: networking, and more. If SSH fails on this machine, please install
    default: the guest additions and repackage the box to continue.
    default:
    default: This is not an error message; everything may continue to work properly,
    default: in which case you may ignore this message.
==> default: Configuring and enabling network interfaces...
==> default: Copying TLS certificates to /Applications/opt/jboss/cdk/components/rhel/rhel-ose/.vagrant/machines/default/virtualbox/docker
==> default: Mounting SSHFS shared folder...
==> default: Mounting folder via SSHFS: /Users/cvanball => /Users/cvanball
==> default: Checking Mount..
==> default: Checking Mount..
==> default: Folder Successfully Mounted!
==> default: Docker service configured successfully...
==> default: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> default: flag to force provisioning. Provisioners marked to run always will still run.
==> default: Running provisioner: shell...
    default: Running: inline script
==> default: Running provisioner: shell...
    default: Running: inline script
==> default:
==> default: Successfully started and provisioned VM with 2 cores and 8096 MB of memory.
==> default: To modify the number of cores and/or available memory set the environment variables
==> default: VM_CPU respectively VM_MEMORY.
==> default:
==> default: You can now access the OpenShift console on: https://10.1.2.2:8443/console
==> default:
==> default: To use OpenShift CLI, run:
==> default: $ vagrant ssh
==> default: $ oc login
==> default:
==> default: Configured users are (<username>/<password>):
==> default: openshift-dev/devel
==> default: admin/admin
==> default:
==> default: If you have the oc client library on your host, you can also login from your host.
```
Preparing the JDV for OpenShift Application Templates
Red Hat xPaaS images are pulled on demand from the Red Hat Registry: registry.access.redhat.com. To prepare JDV for OpenShift, we need to set up the JDV for OpenShift image stream and application template.
```shell
$ vagrant ssh
[vagrant@rhel-cdk ~]$ oc login -u admin
Authentication required for https://127.0.0.1:8443 (openshift)
Username: admin
Password:
Login successful.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * default (current)
  * openshift
  * openshift-infra

[vagrant@rhel-cdk ~]$ oc project openshift
Now using project "openshift" on server "https://127.0.0.1:8443".
[vagrant@rhel-cdk ~]$ oc create -f https://raw.githubusercontent.com/cvanball/jdv-ose-demo/master/extensions/is.json
[vagrant@rhel-cdk ~]$ oc create -n openshift -f https://raw.githubusercontent.com/jboss-openshift/application-templates/master/datavirt/datavirt63-extensions-support-s2i.json
```
The EAP for OpenShift image, on which the JDV for OpenShift image is built, extends database support in OpenShift using various artifacts. These artifacts are included in the built image through different mechanisms:
- S2I artifacts that are injected into the image during the S2I process, and
- Runtime artifacts from environment files provided through the OpenShift Secret mechanism.
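For illustration, a Secret wrapping such an environment file can be defined in YAML roughly as follows. This is a sketch only: the Secret name and the file name are assumptions here, and the real definition used later comes from the referenced datavirt-app-secret.yaml.

```yaml
# Illustrative only: a Secret carrying an environment file that the
# image reads at startup. Names below are assumed for this sketch.
apiVersion: v1
kind: Secret
metadata:
  name: datavirt-app-config
stringData:
  datasources.properties: |
    # environment entries consumed by the running image
    EXAMPLE_KEY=example-value
```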
Creating and Preparing an OpenShift Project Using the JDV for OpenShift Image
Files for runtime artifacts are passed to the JDV for OpenShift image using the OpenShift Secret mechanism. This includes the environment files for the data sources and resource adapters, as well as any additional data files. These files must be present locally so that secrets can be created from them.
```shell
$ oc login -u openshift-dev
$ oc new-project jdv-demo
$ oc create -f https://raw.githubusercontent.com/cvanball/jdv-ose-demo/master/extensions/datavirt-app-secret.yaml
$ curl https://raw.githubusercontent.com/cvanball/jdv-ose-demo/master/extensions/datasources.properties -o datasources.properties
$ oc secrets new datavirt-app-config datasources.properties
```
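The datasources.properties file fetched above is the environment file that tells the image which data sources to configure at startup. Its general shape looks something like the sketch below; the key names here are illustrative assumptions, so treat the downloaded file as the authoritative format.

```properties
# Illustrative sketch of a data-source environment file.
# Key names are assumed; the real ones are defined in the
# downloaded datasources.properties.
ACCOUNTS_DATABASE=accounts
ACCOUNTS_JNDI=java:/accounts-ds
ACCOUNTS_USERNAME=example-user
ACCOUNTS_PASSWORD=example-password
```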
Configuring and Deploying the OpenShift Project Using the JDV for OpenShift Image
- Log in to the OpenShift web console.
- Click the jdv-demo project.
- Click the Add to project button to list all of the default image streams and templates.
- Use the Filter by keyword search bar to limit the list to those that match datavirt. You may need to click See all to show the desired application template.
- Select an application template and configure the deployment parameters as required.
- Click the Create button to deploy the application template.
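If you prefer the command line over the web console, the same template can be instantiated with `oc new-app`. The template name below is the one registered earlier; the available parameters differ per template, so list them first and override as needed with `-p NAME=VALUE`.

```shell
# Inspect the template's parameters (names vary per template):
oc process --parameters -n openshift datavirt63-extensions-support-s2i

# Deploy the template into the current project, accepting defaults;
# add -p NAME=VALUE to override individual parameters:
oc new-app --template=datavirt63-extensions-support-s2i
```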
Demonstration (Length: 3m27s)
In the following video, we demonstrate how to set up and configure JDV running on OpenShift as described in the previous steps.