Digging Into Mesosphere DC/OS (Part 3)
Want to add a Jenkins server into your environment? Look no further. See how to set up Jenkins in your Mesosphere system.
Hopefully, you followed along with our first post, where we discussed DC/OS, and our second, where we spun up a single-master environment on AWS using the prepared CloudFormation templates. From here, we want to have a quick look over the DC/OS UI and some simple actions we can perform to spin up a workload.
The Mesosphere DC/OS Interface
Using the DNS name from the environment you were provided, open up the main DC/OS UI to see what it looks like.
The menu in the left-hand panel gives us a few different areas to explore, and the right-hand panel shows a rather lightly used set of resource indicators. There is no load on the system, so there won’t be any activity on the dashboard at the moment.
Click into the Nodes menu to see what our environment looks like from within the DC/OS management UI:
We see all of the nodes listed at the bottom of the right-hand panel along with their IP addresses and health state. Each is listed as Healthy with no tasks or load. The IP addressing used by DC/OS is an internal private range from the Class A 10.0.0.0/8 network. External access, in this case, is provided by mapping a floating IP address in AWS to the cluster.
Click into the System view on the left-hand panel, which lists some configuration details for our cluster, including the Marathon configuration. Marathon is the default scheduling environment for DC/OS:
On the same tab, click the Repositories link in the right-hand pane to see which default repositories for DC/OS are loaded. You’ll see the Universe listed, which is the standard first public repository.
You can click the Universe link on the main menu to see all of the applications and services that are presented on the public Universe catalog:
Let’s launch a Jenkins server from the catalog to see a quick example. Find the Jenkins link in the Universe and click Install Package:
You’ll see two installation options. Clicking Install Package deploys a single-node implementation with all of the defaults. Instead, let’s try the Advanced Installation link just to see the configuration:
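As an aside, the same install can be driven from the DC/OS CLI rather than the UI. This is a sketch that assumes you have the CLI installed and attached to your cluster; the advanced options map to a JSON options file (the `jenkins-options.json` filename here is just an example):

```shell
# Install Jenkins from the Universe with all defaults
# (the CLI equivalent of clicking Install Package).
dcos package install jenkins

# Or inspect the configurable options first, then install
# with your own overrides from an options file.
dcos package describe jenkins --config
dcos package install jenkins --options=jenkins-options.json
```

These commands need a live DC/OS cluster to run against, so treat them as a reference rather than something to paste blindly.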
On the Service options, we see the resource allocation for the application:
Storage shows the host volume details and what the mount point is:
Under Networking, you can see the known-hosts setting, a whitelist of the hosts whose SSH keys should be retrieved. This ensures that you can use SSH to retrieve content from those sites. Jenkins requires GitHub, so it is already defined in the default settings:
The advanced service options allow you to configure additional parameters before launch. We will leave these as the defaults, along with everything else:
Click on Review and Install to see the final summary of configuration before we launch:
You can also click the Download config.json link at the top right of the UI which lets us see what the configuration looks like in a raw JSON file format:
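As a rough sketch, the downloaded config.json groups the choices from the advanced screens into sections. The exact fields vary by package version, and the values below are purely illustrative, but for Jenkins it looks something like this:

```json
{
  "service": {
    "name": "jenkins",
    "cpus": 1.0,
    "mem": 2048
  },
  "storage": {
    "host-volume": "/tmp"
  },
  "networking": {
    "known-hosts": "github.com gitlab.com bitbucket.org"
  }
}
```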
Click Launch, and in a few moments you’ll see the UI change to show you a Success message:
On the Services link in the left-hand panel, you now see the Jenkins service which was launched for us from the Universe. The state will show the deployment status:
Clicking back to the Dashboard shows that there are now tasks and resource usage happening within the environment:
Again, back at the Services panel, we can see the successfully launched Jenkins service once it has been deployed.
You can gather the URL from the Configuration section and launch your application to see it in action. The DC/OS master will proxy back the request to the internal service and you are now running your nested Jenkins server on DC/OS on top of AWS:
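Assuming the default Admin Router setup, the proxied URL generally follows a predictable pattern, which you can check from your workstation (the master DNS name below is a placeholder for your own):

```shell
# DC/OS Admin Router on the master proxies /service/<service-name>/
# through to the service's internal address.
curl -I "http://<master-dns-name>/service/jenkins/"
```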
Service options for each DC/OS service launched include scaling, suspending, restarting, and destroying for each service. These are all on the right-hand side of the services section when you click on the More link to get the dropdown:
Let’s click the Edit button to see a familiar looking configuration. You’ll notice that it looks a lot like when we launched the service from the Universe, but we also have more options to add Environment Variables, manage the network, adjust the resources, and more:
By clicking the JSON mode slider at the top, you can view and edit the contents in a JSON format:
Let’s exit the Edit panel and click the Scale button. This will prompt you to select how many instances you would like to scale to. For this example, we can try scaling to 3 instances, up from the current single-instance deployment:
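Because a DC/OS service is a Marathon application underneath, the same scaling operation can be attempted from the CLI. This is a sketch that assumes the CLI is attached to your cluster and that the service ID is jenkins:

```shell
# List Marathon applications to confirm the service ID.
dcos marathon app list

# Ask Marathon to scale the jenkins app to 3 instances
# (the CLI equivalent of the Scale button).
dcos marathon app update jenkins instances=3
```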
This is where we quickly find out why we need to decide in advance how we want to scale our implementations. You’ll see an error come up when we try to modify the configuration:
The reason this scaling activity didn’t succeed is that the Jenkins deployment was built as a single-instance service. Much like the days when single-CPU and multi-CPU kernels had to be chosen at build time, we need to decide at deployment time whether the application will scale.
For scale-out applications, we would select a multi-instance deployment by default, which would then build the scalable architecture into the application and the service itself.
This is a great way to get a handle on how easy it is to deploy from the Universe. Our next post is going to dig a little into the underlying architecture to give some insight into what’s happening under the covers as you launch these applications.
Published at DZone with permission of Eric Wright, DZone MVB. See the original article here.