
Sharone Zitzman

Developer Relations at RTFM Please Ltd.

Herzliya, IL

Joined May 2012

https://rtfmplease.dev

Stats

Reputation: 165
Pageviews: 329.4K
Articles: 5
Comments: 16

Articles

Simple, Secure Role Based Access Control (RBAC) For REST APIs
There are a lot of ways to implement Access Controls, and some are slightly complicated. Have you considered using REST?
April 30, 2016
· 72,240 Views · 10 Likes
Docker Orchestration... What It Means and Why You Need It
[This article was written by Yaron Parasol.]

Docker containers were created to enable fast, reliable deployment of application components or tiers: a container holds a self-contained, ready-to-deploy part of an application, together with the middleware and business logic needed to run it successfully. For example, a Spring application within a Tomcat container. By design, a Docker container is an isolated, self-contained part of the application, typically one tier or even one node in a tier.

However, an application is typically multi-tier in its architecture, which means you have tiers with dependencies between them, where the nature of the dependencies can be anything from network connections and remote API invocations to messages exchanged between application tiers. An app is therefore a set of different containers with specific configurations, and you need a way to glue the pieces of your app together. While Docker has a basic solution for connecting containers using a Docker bridge, this solution is not always the preferred one, especially when deploying containers across different hosts and you need to take care of real network settings.

So, what role does the orchestrator play? The orchestrator takes care of two things:

The timing of container creation - containers need to be created in order of their dependencies.
Container configuration - to allow containers to communicate with one another, the orchestrator needs to pass runtime properties between containers.

As a side note: Docker needs a special tweak here, because you typically don't touch config files inside a container - you keep the container intact - so there is an interesting workaround for cases where this is required.

One method is to use a YAML-based orchestration plan to orchestrate the deployment of apps and post-deployment automation processes, which is the approach Cloudify employs. Based on TOSCA (the Topology and Orchestration Specification for Cloud Applications), this orchestration plan describes the components, their lifecycles, and the relationships between components, especially when it comes to complex topologies: what's connected to what, what's hosted on what, and other such considerations. TOSCA is able to describe the infrastructure, as well as the middleware tier and the app layers on top of them.

Cloudify takes this TOSCA orchestration plan (dubbed a blueprint in Cloudify speak) and materializes it using workflows that traverse the graph of components and issue commands to agents. These then create the app components and glue them together. The agents use extensions called plugins, which are adaptors between the Cloudify configuration and the APIs of the various infrastructure-as-a-service (IaaS) and automation tools. In our case, we created a plugin to interface with the Docker API.

Introducing the Docker Cloudify Plugin

The Cloudify-Docker plugin is quite straightforward: it installs the Docker API endpoint/server on the machine and then uses the Docker-Py binding to create, configure, and remove containers.
The TOSCA lifecycle events are:

Create - installation of the app components
Configure - configuration of the component
Start - startup/running of the component

There are also Stop and Delete, for shutdown and removal.

We started by using create to create the container (we did not implement configure at the beginning) and start to run the application. But then we realized that for containers with dependencies we need runtime properties, such as the IP and port of the counterpart container, in order to create the container. For example, when we create an app server container, we need the port and IP of the database container. So we pushed the creation of the container to the configure event and used a TOSCA relationship pre-configure hook to get the dependent container's info at runtime. The way to expose the runtime info to the dependent container is to set it as environment variables.

interfaces:
  cloudify.interfaces.lifecycle:
    configure:
      implementation: docker.docker_plugin.tasks.configure
      inputs:
        container_config:
          command: mongod --rest --httpinterface --smallfiles
          image: dockerfile/mongodb
    start:
      implementation: docker.docker_plugin.tasks.run
      inputs:
        container_start:
          port_bindings:
            27017: 27017
            28017: 28017

Nodecellar Example

I'd like to explain how this works using our Nodecellar app as an example. The Nodecellar app is composed of two hosts that, in this case, Cloudify didn't create but just SSHed into and then installed agents on. On one we have the MongoDB container, with a mongod process. On the other we have the Nodecellar container, with NodeJS and the Nodecellar app within it. The Nodecellar container needs a connection to the MongoDB container to run the app queries when the app starts.

Ultimately, an orchestrator should not be limited to software deployment. The whole idea behind Docker is to allow for agility, so we'd also like to use Docker for auto-scaling, auto-healing, and continuous deployment. In our next post we'll show exactly that - how Cloudify can be used with Docker for post-deployment scenarios.
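As a rough illustration of what the configure and start operations above translate to, here is a short, hypothetical Docker-Py sketch for the MongoDB node. The article confirms the plugin uses the Docker-Py binding, but the exact calls below are an assumption based on the docker-py Client API of that era, not the plugin's actual code:

# Hypothetical sketch of the "configure" (create) and "start" (run) lifecycle steps
# using the legacy docker-py Client API.
from docker import Client

client = Client(base_url='unix://var/run/docker.sock')

# configure: create the container from the blueprint's container_config.
container = client.create_container(
    image='dockerfile/mongodb',
    command='mongod --rest --httpinterface --smallfiles',
    ports=[27017, 28017],
)
# For a dependent container (e.g. Nodecellar), the counterpart's IP and port
# would be passed in here via create_container's `environment` argument.

# start: run the container with the port bindings from the blueprint's container_start.
client.start(container, port_bindings={27017: 27017, 28017: 28017})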
December 2, 2014
· 17,230 Views · 0 Likes
Configuring an OpenStack VM with Multiple Network Cards
[This article was written by Barak Merimovich.]

We have discussed OpenStack networking extensively in previous posts. In this post, I'd like to dive into a more advanced OpenStack networking scenario.

Many cloud images are not configured to automatically bring up all the network cards that are available; they will usually only have a single network card configured. To correctly set up a host in the cloud with multiple network cards, log on to the machine and bring up the additional interfaces:

echo $'auto eth1\niface eth1 inet dhcp' | sudo tee /etc/network/interfaces.d/eth1.cfg > /dev/null
sudo ifup eth1

Networks in the cloud

A complex network architecture is a mainstay of modern IaaS clouds. Understanding how to configure your cloud-based networks, and hosts, is critical to getting your application working in the cloud. This is especially true with Cloudify, the open source cloud orchestration platform I work on.

The cloud, like the world, used to be flat

It was not that long ago that most IaaS providers only supported flat networks - all of your hosts were in one large network. Separation between services running in the cloud was enforced in software or with firewalls/security groups, but technically all of the hosts were connected to the same network and visible to each other. The flat network model is simple, and therefore easy to reason about and understand. It was a good choice for the early days of the IaaS cloud and no doubt helped with getting applications into the cloud in the first place. It was one of the things that made EC2 so easy to use for anyone just starting out with the 'cloud'. This model is in fact still available on Amazon Web Services under the title 'EC2-Classic'. And for many applications, a flat network is good enough.

But as cloud adoption increases, more complex applications are moving into the clouds, and issues like network separation, security, SLAs, and broadcast domains make more complex network models a must. Software-defined networks (SDN) fill that gap and are now a staple of most major IaaS clouds. AWS has AWS-VPC, OpenStack has the Neutron project, and there are many other implementations. Working with SDN requires knowing a bit more about how information moves around between your cloud resources. In this post I am going to discuss how to set up a host in the cloud so it will play nicely with complex networks. I'll be using OpenStack, but the concepts are similar for other cloud infrastructures.

OpenStack configuration

I am going to start with an empty tenant, where only the public network is available.
First, let's set up our networks and router:

neutron router-create demo-router
neutron net-create demo-network-1
neutron net-create demo-network-2
neutron subnet-create --name demo-subnet-1 demo-network-1 10.0.0.0/24
neutron subnet-create --name demo-subnet-2 demo-network-2 10.0.1.0/24
neutron router-interface-add demo-router demo-subnet-1
neutron router-interface-add demo-router demo-subnet-2
neutron router-gateway-set demo-router public

Note the network IDs:

neutron net-list
| id                                   | name           | subnets                                          |
| 2c33efe2-6204-4125-9716-3bc525630016 | demo-network-1 | 928dafa0-83ef-459c-b20d-71d8ea596fa2 10.0.0.0/24 |
| aa30627e-c181-4a4b-89bf-5dd7c26c244e | demo-network-2 | 26d573f7-7953-4a54-825b-ed7bbc0661c7 10.0.1.0/24 |
| e502de8d-929a-4ee0-bd18-efa297875cf6 | public         | d40dab51-a729-452c-9ee6-b9ad08d10808             |

We'll start with a standard Ubuntu cloud image:

glance image-create --name "Ubuntu 12.04 Standard" --location "http://uec-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img" --disk-format qcow2 --container-format bare

Create the keypair and security group:

nova keypair-add demo-keypair > demo-keypair.pem
chmod 400 demo-keypair.pem
nova secgroup-create demo-security-group "Security group for demo"
nova secgroup-add-rule demo-security-group tcp 22 22 0.0.0.0/0

Let's spin up an instance connected to both our networks:

nova boot --flavor m1.small --image "Ubuntu 12.04 Standard" --nic net-id=2c33efe2-6204-4125-9716-3bc525630016 --nic net-id=aa30627e-c181-4a4b-89bf-5dd7c26c244e --security-groups demo-security-group --key-name demo-keypair demo-vm

And set up floating IPs for the first network:

nova list
| ID                                   | Name    | Status | Task State | Power State | Networks                                         |
| 2b17588b-8980-4489-9a04-6539a159dc3c | demo-vm | ACTIVE | None       | Running     | demo-network-1=10.0.0.2; demo-network-2=10.0.1.2 |

neutron floatingip-create public
neutron floatingip-list
| id                                   | fixed_ip_address | floating_ip_address | port_id |
| 49c8b05e-bb8f-4b07-80ed-3155ab6ffc09 |                  | 192.168.15.42       |         |

neutron port-list
| id                                   | name | mac_address       | fixed_ips                                                                       |
| 1ccfd334-7328-4b22-b93e-24a0888276ab |      | fa:16:3e:14:39:39 | {"subnet_id": "94598487-c1fc-4f55-ac1f-ef2545d5cfeb", "ip_address": "10.0.1.3"} |
| a482c4f6-fa74-476e-b1ce-cd8dd0c70815 |      | fa:16:3e:18:92:79 | {"subnet_id": "94598487-c1fc-4f55-ac1f-ef2545d5cfeb", "ip_address": "10.0.1.2"} |
| b23d7836-30c5-4bff-b873-15c87ba051f6 |      | fa:16:3e:3a:28:40 | {"subnet_id": "dec6ec74-cfa9-4a08-8792-54900631b98e", "ip_address": "10.0.0.3"} |
| d421b447-2adf-406f-876b-142238683344 |      | fa:16:3e:9d:fc:7f | {"subnet_id": "dec6ec74-cfa9-4a08-8792-54900631b98e", "ip_address": "10.0.0.2"} |
| dcf8696b-cc80-4b48-b09c-61c0f8ab02ac |      | fa:16:3e:5b:39:fb | {"subnet_id": "94598487-c1fc-4f55-ac1f-ef2545d5cfeb", "ip_address": "10.0.1.1"} |
| f6a1666e-495a-4d3f-afa3-754b3cb3cfc0 |      | fa:16:3e:8a:1b:fb | {"subnet_id": "dec6ec74-cfa9-4a08-8792-54900631b98e", "ip_address": "10.0.0.1"} |

neutron floatingip-associate 49c8b05e-bb8f-4b07-80ed-3155ab6ffc09 d421b447-2adf-406f-876b-142238683344

Note how we matched the VM's IP to its port, and associated the floating IP to the port. I wish there was an easier way to do this from the CLI...

If everything worked correctly, the VM is now connected to both networks, with a floating IP on the first. Let's make sure ssh works correctly:

ssh -i demo-keypair.pem ubuntu@192.168.15.31 hostname
demo-vm

Cool, ssh works.
Now, we should have two network cards, right?

ssh -i demo-keypair.pem ubuntu@192.168.15.31 ifconfig
eth0      Link encap:Ethernet  HWaddr fa:16:3e:5f:a2:5f
          inet addr:10.0.0.4  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe5f:a25f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:230 errors:0 dropped:0 overruns:0 frame:0
          TX packets:224 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:46297 (46.2 KB)  TX bytes:31130 (31.1 KB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Huh?! The VM only has one working network interface! Where is my second NIC? Was there a configuration problem with the OpenStack network setup? The answer is here:

ssh -i demo-keypair.pem ubuntu@192.168.15.31 ifconfig -a
eth0      Link encap:Ethernet  HWaddr fa:16:3e:5f:a2:5f
          inet addr:10.0.0.4  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe5f:a25f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:324 errors:0 dropped:0 overruns:0 frame:0
          TX packets:332 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:69973 (69.9 KB)  TX bytes:47218 (47.2 KB)

eth1      Link encap:Ethernet  HWaddr fa:16:3e:29:6d:22
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

The second NIC exists, but it is not running. The issue is not with the OpenStack network configuration - it's with the image. The image itself should be configured to work correctly with multiple NICs. All we have to do is bring up the NIC. So we ssh into the instance:

ssh -i demo-keypair.pem ubuntu@192.168.15.31

And run the following commands:

echo $'auto eth1\niface eth1 inet dhcp' | sudo tee /etc/network/interfaces.d/eth1.cfg > /dev/null
sudo ifup eth1

The second NIC should now be running:

ifconfig eth1
eth1      Link encap:Ethernet  HWaddr fa:16:3e:18:92:79
          inet addr:10.0.1.2  Bcast:10.0.1.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe18:9279/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:81 errors:0 dropped:0 overruns:0 frame:0
          TX packets:45 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:15376 (15.3 KB)  TX bytes:3960 (3.9 KB)

And there you go - your VM can access both networks.

This issue can make life complicated when setting up a complex, or even a not very complex, application. When will this issue hurt you? Well, imagine a scenario where you have a web server and a database server. The web server is connected to both Network1 and Network2, and the database server is only connected to Network2. Network1 is connected to the external world over a router, and Network2 is completely internal, adding another layer of security to the critical database server. So what happens if the web server only has one network card? If only the NIC for Network1 is up, the web server can't access the database.
If only the NIC for Network2 is up, the web server can't be reached from the external world. Even worse, if this web server is accessed via a floating IP, that IP will also not work, so you won't be able to access the web server and fix the issue. Tricky.

In conclusion

The above commands will bring up your additional network card. You will, of course, need to repeat this process for each additional network card and for each VM. You can use a start-up script (a.k.a. a user-data script) or a system service to run these commands, but there are better ways. I'll discuss how to automate the network setup in a follow-up post.

This was originally posted on Barak's blog, Head in the Clouds.
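One way to script that repetition instead of doing it by hand on every VM is a small SSH loop. Below is a minimal sketch, assuming Ubuntu images with /etc/network/interfaces.d and the paramiko library; the host list, username, and key path are placeholders, not part of the original post:

# Illustrative only: bring up eth1 on several VMs over SSH.
import paramiko

HOSTS = ['192.168.15.31']        # floating IPs of the VMs to fix (placeholder)
KEY_FILE = 'demo-keypair.pem'    # the keypair created earlier

COMMANDS = [
    "echo $'auto eth1\\niface eth1 inet dhcp' | sudo tee /etc/network/interfaces.d/eth1.cfg > /dev/null",
    "sudo ifup eth1",
]

for host in HOSTS:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username='ubuntu', key_filename=KEY_FILE)
    for cmd in COMMANDS:
        _, stdout, _ = client.exec_command(cmd)
        stdout.channel.recv_exit_status()   # wait for each command to finish
    client.close()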
November 4, 2014
· 13,578 Views · 0 Likes
A Puppet Automation + MySQL Tutorial: WordPress Install in 7 Short Steps
[This article was written by Koby Nachmany.]

If you are familiar with configuration management (aka CM) and automation, you probably know a thing or two about Puppet and the amazing, rich collection of modules it offers. Puppet Forge contains a wealth of third-party modules that enable us to do some pretty nifty stuff with almost no effort. Puppet helps deal with the messy parts of CM, like installing binaries and running installation scripts that are tedious to do manually.

Tools such as Puppet were originally created for IT operations people, who are for the most part infrastructure-centric, and are best suited for the setup and maintenance of hosts in a physical data center. Dealing with applications, and certainly managing applications in an elastic virtualized or even cloudified environment, brings a set of new challenges despite the agility and other benefits it provides. Now imagine we could have this goodness coupled with an intelligent orchestration framework for an entire deployment.

In this blog post I'd like to demonstrate how a cloud application orchestrator can complement existing automation processes powered by configuration management tools - in this case, Puppet. I will use the nodecellar application and the popular WordPress content management framework as examples. This will hopefully provide a good introduction to Cloudify blueprints.

Overview

We've already seen how Cloudify 3 allows us to easily orchestrate the "nodecellar" application; you can read about Cloudify blueprints here. With the "nodecellar" example, Cloudify deploys a complex application using workflows that map deployment lifecycle events to bash scripts using Cloudify's bash runner plugin. Cloudify's Puppet integration now makes this pretty easy.

The synergy between Cloudify and Puppet not only lets you enjoy the benefits of your existing Puppet environment, it also amplifies its usability by addressing the following common challenges involved with configuration management tools:

Agent Installation: Provision your service VMs, install a Puppet agent (if you like), and wire them up with the Puppet Master. Or, if you choose to run standalone, install the agent with the appropriate manifests needed for that service.

Order of Dependencies: Define the dependencies between application stacks, services, and infrastructure resources, which will then be launched in that order.

Remote Execution and Updates: Beyond basic install/uninstall, Cloudify enables customized application workflows that let you execute tools like remote shell scripts on a group of instances that belong to a particular service, or on a specific instance in a group. This is useful for maintenance operations, such as snapshots in the case of a database, or code pushes in a continuous deployment model. In addition, you can run puppet apply whenever you feel it's right for your service.

Post Deployment: Once your application is up, Cloudify can glue in your monitoring tool of choice, or you can use the built-in one. A robust policy engine enables auto-healing and even auto-scaling according to your service's required SLA.

I'm now going to take a deep dive into my experience with a WordPress example that I feel is a very good representation of how Puppet and Cloudify work in sync.
Let's say we want to deploy the popular WordPress application stack on two VMs. The flow is quite simple.

Step 1: Setting Up the Puppet Environment

Set up a Puppet master server (3.5.1) with the following basic modules installed:

|-- hunner-wordpress (v0.6.0)
|-- puppetlabs-apache (v1.0.1) - with php mods enabled
|-- puppetlabs-mysql (v2.1.0)

Your site.pp file should resemble something like this:

node /^apache_web.*/ {
  include apache
  class { 'wordpress':
    create_db      => false,
    create_db_user => false,
  }
}

node /^mysql.*/ {
  class { '::mysql::server':
    root_password    => 'password',
    override_options => { 'mysqld' => { 'bind_address' => '0.0.0.0' } }
  }
  include mysql::client
  include wordpress
}

As we can see, we have an Apache PHP application that will likely require a database connection string (IP, port, user, and password). This is where Cloudify facilitates the "gluing" of all the pieces together, by allowing us to inject dynamic/static custom facts into the dependent node (the Apache server). Cloudify supports both standalone agents and Puppet Master environments.

Step 2: Tweaking the Original WordPress Module

Some minor adaptations to the wordpress init class of the WordPress module will allow us to embed these facts during Puppet agent invocation. Below is a code snippet (with defaults truncated):

class wordpress (
  $db_host_ip  = $cloudify_related_host_ip,
  $db_user     = $cloudify_properties_db_user,
  $db_password = $cloudify_properties_db_pw,
  ...
)

And some tweaking to templates/wp-config.php.erb:

/** MySQL hostname */
define('DB_HOST', '<%= @db_host_ip %>');

Let's also add some tags for finer control of manifest execution. The MySQL node does not require the application part to run on it, so I've excluded it using a Puppet "tag" (read more about Puppet tags). Cloudify, of course, supports this and will provide the appropriate tags during agent invocation.

-> class { 'wordpress::app':
  tag         => ['postconfigure'],
  install_dir => $install_dir,
  install_url => $install_url,
  version     => $version,
  db_name     => $db_name,
  ...
}

Step 3: Creating the Blueprint

In a similar way to the "nodecellar" blueprint, first let's create a folder named "wp_puppet" and create a blueprint.yaml file within it. This file will serve as the blueprint file. Now let's declare the name of this blueprint:

blueprint:
  name: wp_puppet
  nodes:

Now we can start creating the topology.

Step 4: Creating VM Nodes

Since, in this case, I use the OpenStack provider to create the nodes, let's import the "OpenStack types" plugin.
imports:
  - http://www.getcloudify.org/spec/openstack-plugin/1.0/plugin.yaml

Since the VMs are the same, I declared a generic template for a VM host:

vm_host:
  derived_from: cloudify.openstack.server
  properties:
    - install_agent: true
    - worker_config:
        user: ubuntu
        port: 22
        # example for ssh key file (see `key_name` below)
        # this file matches the agent key configured during the bootstrap
        key: ~/.ssh/agent.key
    # Uncomment and update `management_network_name` when working on a neutron-enabled OpenStack
    - management_network_name: cfy-mng-network
    - server:
        image: 8c096c29-a666-4b82-99c4-c77dc70cfb40
        flavor: 102
        key_name: cfy-agnt-kp
        security_groups: ['cfy-agent-default', 'wp_security_group']
        # This is how we inject the puppet server's ip
        userdata: |
          #!/bin/bash -ex
          grep -q puppet /etc/hosts || echo "x.x.x.x puppet" | sudo -A tee -a /etc/hosts

Create the MySQL and Apache VMs:

- name: mysql_db_vm
  type: vm_host
  instances:
    deploy: 1

- name: apache_web_vm
  type: vm_host
  instances:
    deploy: 1

Step 5: Declaring the Apache and MySQL Servers

Since we are using the Puppet plugin to create those servers, first we have to import it:

plugins:
  puppet_plugin:
    derived_from: cloudify.plugins.agent_plugin
    properties:
      url: https://github.com/cloudify-cosmo/cloudify-puppet-plugin/archive/nightly.zip

The plugin defines server types as follows: middleware_server, app_server, db_server, web_server, message_bus_server, and app_module. They are virtually the same, but serve the purpose of enabling better readability for the user and GUI visualization.

A Puppet server type is derived_from the cloudify.types.server type, but includes some Puppet-specific properties and lifecycle events. For documentation, see Puppet Types. So we now go ahead and declare the server types:

cloudify.types.puppet.web_server:
  derived_from: cloudify.types.web_server
  properties:
    # All Puppet related configuration goes inside
    # the "puppet_config" property.
    - puppet_config
  interfaces:
    cloudify.interfaces.lifecycle:
      # Specifically the "start" operation. Otherwise tags must be provided.
      - start: puppet_plugin.operations.operation

cloudify.types.puppet.app_module:
  derived_from: cloudify.types.app_module
  properties:
    - puppet_config
  interfaces:
    cloudify.interfaces.lifecycle:
      - configure: puppet_plugin.operations.operation

cloudify.types.puppet.db_server:
  derived_from: cloudify.types.db_server
  properties:
    - puppet_config
  interfaces:
    cloudify.interfaces.lifecycle:
      - start: puppet_plugin.operations.operation

Step 6: Instantiating the Apache and MySQL Nodes

Here we provide the Puppet configuration and tags, and define the relationships between the nodes. Cloudify's agent will use those relationships to decide the appropriate facts to inject.
- name: apache_web_server
  type: cloudify.types.puppet.web_server
  properties:
    port: 8080
    puppet_config:
      server: puppet
      environment: wordpress_env
  relationships:
    - type: cloudify.relationships.contained_in
      target: apache_web_vm

- name: wordpress_app
  type: cloudify.types.puppet.app_module
  properties:
    db_user: wordpress
    db_pass: passwd
    puppet_config:
      server: puppet
      tags: ['postconfigure']
      environment: wordpress_env
  relationships:
    - type: cloudify.relationships.contained_in
      target: apache_web_server
    - type: wp_connected_to_mysql
      target: mysql_db_server

- name: mysql_db_server
  type: cloudify.types.puppet.db_server
  properties:
    db_user: wordpress
    db_pass: passwd
    puppet_config:
      server: puppet
      environment: wordpress_env
  relationships:
    - type: cloudify.relationships.contained_in
      target: mysql_db_vm

Step 7: Upload the Blueprint and Create the Deployment (via CLI or GUI)

Then execute your deployment (via CLI or GUI):

ubuntu@koby-n-cfy3-cli:~/cosmo_cli$ cfy blueprints upload -b wp9 wordpress/blueprint.yaml
ubuntu@koby-n-cfy3-cli:~/cosmo_cli$ cfy deployments create -b wp9 -d WordPress_Deployment_1

Step 8: Take a Quick Coffee Break.

Step 9: Enjoy your Orchestrated WordPress Stack!
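As an aside, the fact-injection idea behind $cloudify_related_host_ip and friends can be illustrated with Facter's convention that environment variables prefixed with FACTER_ become facts at run time. The sketch below is purely illustrative of that mechanism; it is not the Cloudify Puppet plugin's implementation, and the manifest path and values are placeholders:

# Illustrative sketch: expose runtime properties as Facter facts via FACTER_* env vars,
# then apply only the 'postconfigure'-tagged resources, as in the wordpress::app class above.
import os
import subprocess

runtime_properties = {
    'cloudify_related_host_ip': '10.0.1.2',       # MySQL node IP (placeholder)
    'cloudify_properties_db_user': 'wordpress',
    'cloudify_properties_db_pw': 'passwd',
}

env = os.environ.copy()
for name, value in runtime_properties.items():
    env['FACTER_' + name] = value                 # Facter exposes FACTER_foo as fact "foo"

subprocess.check_call(
    ['puppet', 'apply', '--tags', 'postconfigure', '/etc/puppet/manifests/site.pp'],
    env=env,
)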
August 21, 2014
· 8,353 Views · 0 Likes
Cloud Automation with WinRM vs SSH
[Article originally written by Barak Merimovich.]

Automation the Linux Way

In the Linux world, SSH (secure shell) is the de facto standard for remote connectivity and automation - for logging into a remote machine to install tools and run commands. It's pretty much ubiquitous, runs across multiple Linux versions and distributions, and every Linux admin worth their salt knows SSH and how to configure it. What's more, its port - port 22 - is even enabled by default on most clouds.

An important feature available with SSH is support for file transfer via its secure copy protocol (SCP) and the secure file transfer protocol (SFTP). These are a built-in part of the tool, or exist as add-ons to the protocol that are almost always available. Using SSH for file transfer and remote execution is therefore basically a given with Linux, and SSH client tools exist for virtually every major programming language and operating system.

WinRM in a Linux World

What comes out of the box with Linux is less of a given with Windows. SSH, obviously, is not built into Windows; over the years there have been different protocols attempting to achieve the same functionality, such as Secure Telnet and others, but to date none have really caught on. Starting with Windows Server 2003, a new tool called WinRM (Windows Remote Management) was introduced. WinRM is a SOAP-based protocol built on web services that, among other things, allows you to connect to a remote system and get a shell, essentially offering functionality similar to SSH. WinRM is currently the Windows world's alternative to SSH.

The Pros

The advantage of WinRM is that you can use a vanilla VM with nothing pre-configured on it; the only prerequisite is that the WinRM service needs to be running. EC2, the largest cloud provider today, supports this out of the box, so if you run a standard Amazon Machine Image (AMI) for Windows, WinRM is enabled by default. This makes it possible to quickly start working with a cloud: all that needs to be done is bring up a standard Windows VM, and then it's possible to configure it remotely and start using it. This is very useful in cloud environments where you are sometimes unable to create a custom Windows image, or are limited to a very small number of images and want to limit your resource usage.

The Challenges

Where SSH has become the de facto protocol for Linux, WinRM is a far less known tool in the Windows world, although it offers comparable features for security and for connecting to and executing commands on a remote machine. The standard tool for using WinRM is usually PowerShell, the newer Windows shell intended to supersede the standard command prompt. To date, though, there are still relatively few programming languages with built-in support for WinRM, making automation and remote execution of tasks over WinRM much more complex. To achieve these tasks, Cloudify runs PowerShell itself as an external process, acting as a client library for accessing WinRM. The primary issue with this, however, is that the client side also needs to be running Windows, as PowerShell cannot run on Linux.

Another aspect where WinRM differs from SSH is that it does not really have built-in file transfer; there is no direct WinRM equivalent of SSH's secure copy. That said, it is possible to implement file transfer through PowerShell scripts.
There are currently several open source initiatives looking to build a WinRM client for Linux, or specifically for certain programming languages such as Java; however, these are at different levels of maturity, and none of them are fully featured yet. Hence, PowerShell remains the default tool for Cloudify, which essentially provides, for Windows, the same level of functionality you would expect when running remote commands on a Linux machine.

WinRM & Security

Another interesting point to consider about WinRM is its support for encryption. WinRM supports three transfer protocols: HTTP, HTTPS, and encrypted HTTP. With HTTP, your wire protocol is inevitably unencrypted; it is only a good idea to use HTTP inside your own data center, and only if you are completely convinced that no one can monitor anything going over the wire. HTTPS is commonly used instead of HTTP, but with WinRM there's a chicken-and-egg issue: to work with HTTPS, you are required to set up an SSL certificate on the remote machine. The challenge is that a vanilla Windows VM will not have that certificate installed, so you need to automate its insertion - which often cannot be done remotely, as WinRM is not yet available to do it. Encrypted HTTP, which is also the default in EC2, basically uses your login credentials as your encryption key, and it works. From a security perspective, this is the recommended secure transfer protocol to use. It is worth noting that most attempts to create a WinRM client library tend to encounter problems around the encrypted HTTP protocol, as implementing MS' encrypted HTTP system - CredSSP - is challenging. However, various projects are working on this, so it will hopefully be solved in the near future.

Where Cloudify Comes Into the Mix

WinRM comes into play with Cloudify during the cloud bootstrapping process. Using WinRM, Cloudify is able to remotely connect to a vanilla VM provided by the cloud and set up the Cloudify manager or agent to run on the machine. In addition to traditional cloud environments, WinRM also works in non-cloud and non-virtualized environments, such as a standard data center with multiple Windows servers running. All that needs to be done is provide Cloudify with the credentials, and it will use WinRM to connect and set up the machine remotely. Since WinRM is pre-packaged with Windows, there is no need to install anything. The only requirement, as mentioned above, is to have the WinRM service running, as not all Windows images will have this service running.

Conclusion

In short, WinRM is the Windows world's alternative to SSHD: it allows you to log in remotely and securely and execute commands on Windows machines. From a cloud automation perspective, it provides virtually all the necessary functionality, and thus it is recommended to have WinRM running in your Windows environment.
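For readers who want to try one of those open-source client initiatives from a non-Windows machine, here is a minimal, illustrative sketch using the Python pywinrm library; the endpoint, credentials, and transport are placeholders, and this is not how Cloudify itself drives WinRM:

# Illustrative sketch: run remote commands over WinRM with pywinrm.
import winrm

# Placeholder endpoint and credentials; an HTTPS endpoint would typically use port 5986.
session = winrm.Session('http://10.0.0.10:5985/wsman',
                        auth=('Administrator', 'password'),
                        transport='ntlm')

result = session.run_cmd('ipconfig', ['/all'])     # run a classic command-prompt command
print(result.status_code)
print(result.std_out.decode())

ps = session.run_ps('Get-Service WinRM')           # or run PowerShell directly
print(ps.std_out.decode())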
March 19, 2014
· 24,740 Views · 0 Likes

Comments

The Pull Request Paradox: Merge Faster by Promoting Your PR

Dec 27, 2021 · Dan Lines

Great post, Dan! Seeing a lot of discussion all the time on the different layers constantly being added to PRs (from comments to suggestions, etc.) just to make code more understandable and in context, to get it over the fence and ship it. Definitely sounds like a useful tool.

Docker...Containers, Microservices and Orchestrating the Whole Symphony

Oct 11, 2017 · John Walter

Hi Naresh - as you can imagine Kubernetes has come a long way since January 22, 2015 - all of the information in the article was correct at the time of writing.

Raspberry Pi Automation, Docker, and Other Inanities

Feb 26, 2015 · John Walter

Yes - it is OpenHAB - we received the same comment on the Cloudify blog - and fixed it. :-) Here we have less control. Oops.

On the Origin of Technologies

Jun 30, 2014 · Ben Tels

Ah - thanks. My mistake. Will fix.
(That's what happens when you do things from memory.) The URL on the pic is right - the others aren't. Good catch.

DevOps Trail Blazers - People I Love to Follow

Jun 30, 2014 · Benjamin Ball

Ah - thanks. My mistake. Will fix.
(That's what happens when you do things from memory.) The URL on the pic is right - the others aren't. Good catch.

Episode I: Transactional Cross-Site Data Replication

Jun 26, 2012 · Guy Korland

To view the webinar, visit: http://bit.ly/OmEuBp
