
Dealing With the Same Configuration File With Different Content in Different Environments


This solution allows you to maintain one configuration file instead of multiple files, even when your environments have different requirements.


Unlike the previous post, this solution came from a request by a dev friend. His application required a specific properties file in order to get the database connection string, a URL used to connect to the MongoDB instance. The problem was that each environment had its own MongoDB instance, so the content of the properties file differed depending on where it was deployed.

The common approach to such a problem is to keep different versions of the same file, each with the appropriate content for its environment. What differentiates one file from another are the directories in the filesystem, or the branches in the SCM repository, where the files are placed, since these are named after the environments. With this approach, the right version of the configuration file is usually embedded in the application package during the deployment process.

This solution eliminates that complexity by decoupling the configuration from the application and centralizing everything needed in just one file. The solution can be checked out on GitHub. It was developed using Ansible and tested in a VM environment built with Vagrant and the VirtualBox hypervisor. The details are shown below.

The Test Environment

In order to simulate my friend's QA environment, with different servers where the application is deployed, 3 VMs were booted up locally: qa1, qa2, and qa3. This made it possible to test the Ansible playbook during its development, before running it against the real servers.

The Vagrantfile below was used to build that test environment. Notice that this is Ruby: each VM is defined within a loop and receives its own IP address. The VM image (box) used was minimal/trusty64, a reduced version of Ubuntu, chosen for a faster first-time download and setup during the vagrant up command execution.

Vagrant.configure("2") do |config|
  config.vm.box = "minimal/trusty64"

  (1..3).each do |i|
    config.vm.define "qa#{i}" do |qa|
      qa.vm.hostname = "qa#{i}.local"
      qa.vm.network "private_network", ip: "192.168.33.#{i}0"
    end
  end
end

The Playbook Execution

With Ansible, you can perform tasks on several servers at the same time. This is possible because everything is done through SSH from a master host, even if that is your own machine. Besides that, Ansible knows the target servers through the inventory file (hosts), where they are defined and grouped. In the hosts file below, the QA servers are defined inside the qa group.

[qa]
192.168.33.10
192.168.33.20
192.168.33.30

The core of the solution is undoubtedly the config.json file. It concentrates all the needed configuration for each QA server. If my friend's application requires more parameters, they can easily be added. The host element identifies the target server, and the items are the properties the application needs in order to run properly.

[
  {
    "host": "qa1",
    "items": [
      {
        "key": "prop1",
        "value": "A"
      },
      {
        "key": "prop2",
        "value": "B"
      }
    ]
  },
  {
    "host": "qa2",
    "items": [
      {
        "key": "prop1",
        "value": "C"
      },
      {
        "key": "prop2",
        "value": "D"
      }
    ]
  },
  {
    "host": "qa3",
    "items": [
      {
        "key": "prop1",
        "value": "E"
      },
      {
        "key": "prop2",
        "value": "F"
      }
    ]
  }
]

In the solution, the configuration file is /etc/conf, but it could have any name and be placed in any directory of the application server. The /etc directory is owned by root, so the SSH user must be able to become root (become: yes).

The playbook.yml below points to the qa group previously defined in the hosts file (hosts: qa). Ansible can then execute it against the 3 VMs: qa1, qa2, and qa3. Each hostname is discovered during the fact-gathering phase, when the hostname variable is set.

The config variable holds the content of the config.json file, and the items_query variable defines the JMESPath expression needed to find, inside the JSON content, the key/value pairs of the respective server. The task ensures that the configuration file contains one line for each property.

---
- hosts: qa
  become: yes
  vars:
    hostname: "{{ansible_hostname}}"
    config: "{{lookup('file', 'config.json')}}"
    items_query: "[?host=='{{hostname}}'].items"
  tasks:
  - name: Set the configuration file content
    lineinfile:
      path: /etc/conf
      create: yes
      regexp: "^{{item.key}}=.*$"
      line: "{{item.key}}={{item.value}}"
    with_items: "{{config|json_query(items_query)}}"
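The json_query filter evaluates that JMESPath expression against the loaded JSON. As a rough sketch of what the query does, the same selection can be written in plain Python; the items_for helper below is hypothetical, and a trimmed copy of config.json is inlined to keep the example self-contained:

```python
import json

# Trimmed, inlined copy of config.json (qa3 omitted for brevity).
config = json.loads("""
[
  {"host": "qa1", "items": [{"key": "prop1", "value": "A"},
                            {"key": "prop2", "value": "B"}]},
  {"host": "qa2", "items": [{"key": "prop1", "value": "C"},
                            {"key": "prop2", "value": "D"}]}
]
""")

def items_for(hostname):
    # Plain-Python equivalent of the JMESPath query
    # "[?host=='<hostname>'].items", flattened to a single list
    # of key/value dicts, as with_items iterates over it.
    return [item for entry in config if entry["host"] == hostname
            for item in entry["items"]]

print(items_for("qa2"))
# [{'key': 'prop1', 'value': 'C'}, {'key': 'prop2', 'value': 'D'}]
```

Each dict in the resulting list becomes one item of the lineinfile task, yielding one `key=value` line per property.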

Executing playbook.yml produces the following output. The -u parameter defines the SSH user, and the -k parameter prompts for the vagrant user's password (which is also vagrant; all Vagrant boxes have this default user). Finally, the -i parameter points to the hosts file where the QA servers were defined.

Notice that Ansible makes the changes on the servers in parallel. If the ansible-playbook command is executed several times, the output ordering will differ, because Ansible forks the main process in order to perform the tasks on the servers simultaneously.

ansible-playbook playbook.yml -u vagrant -k -i hosts
SSH password: 

PLAY [qa] **************************************************************************************************************************************************************************************************

TASK [Gathering Facts] *************************************************************************************************************************************************************************************
ok: [192.168.33.10]
ok: [192.168.33.30]
ok: [192.168.33.20]

TASK [Set the configuration file content] ******************************************************************************************************************************************************************
changed: [192.168.33.30] => (item={'value': u'E', 'key': u'prop1'})
changed: [192.168.33.20] => (item={'value': u'C', 'key': u'prop1'})
changed: [192.168.33.10] => (item={'value': u'A', 'key': u'prop1'})
changed: [192.168.33.20] => (item={'value': u'D', 'key': u'prop2'})
changed: [192.168.33.30] => (item={'value': u'F', 'key': u'prop2'})
changed: [192.168.33.10] => (item={'value': u'B', 'key': u'prop2'})

PLAY RECAP *************************************************************************************************************************************************************************************************
192.168.33.10              : ok=2    changed=1    unreachable=0    failed=0   
192.168.33.20              : ok=2    changed=1    unreachable=0    failed=0   
192.168.33.30              : ok=2    changed=1    unreachable=0    failed=0
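The interleaved ordering seen above can be loosely modeled with a thread pool. This is only an illustration of why the log order varies between runs, not how Ansible is actually implemented; the apply_config stand-in is hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def apply_config(host):
    # Stand-in for running the playbook task on one server over SSH.
    return f"changed: [{host}]"

hosts = ["192.168.33.10", "192.168.33.20", "192.168.33.30"]

# Like Ansible's forked workers: tasks run concurrently, so completion
# order (and therefore log order) is not deterministic across runs.
with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
    futures = [pool.submit(apply_config, h) for h in hosts]
    for future in as_completed(futures):
        print(future.result())
```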

Finally, you can validate the playbook execution using Ansible ad-hoc commands, like the one shown below. The cat /etc/conf command was used to ensure that each configuration file's content is as expected. Ad-hoc commands are excellent for gathering whatever you need to know about several servers in just one shot.

ansible qa -m shell -a "cat /etc/conf" -u vagrant -k -i hosts
SSH password: 
192.168.33.30 | SUCCESS | rc=0 >>
prop1=E
prop2=F

192.168.33.10 | SUCCESS | rc=0 >>
prop1=A
prop2=B

192.168.33.20 | SUCCESS | rc=0 >>
prop1=C
prop2=D

One interesting aspect of this solution is that the playbook can be executed over and over while producing the same result. In other words, even if someone inadvertently changes the configuration file's content, it will be fixed the next time the playbook is executed. This property is called idempotence.
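The lineinfile behavior behind this idempotence boils down to a replace-or-append operation. The line_in_file function below is a simplified, hypothetical model of the module's semantics, not its actual implementation:

```python
import re

def line_in_file(lines, regexp, line):
    """Simplified model of Ansible's lineinfile semantics: replace the
    first line matching regexp, or append the line if nothing matches.
    Running it again produces no further change (idempotence)."""
    pattern = re.compile(regexp)
    for i, existing in enumerate(lines):
        if pattern.search(existing):
            lines[i] = line
            return lines
    lines.append(line)
    return lines

conf = ["prop1=X"]                            # value changed by hand
line_in_file(conf, r"^prop1=.*$", "prop1=A")  # restored to the desired value
line_in_file(conf, r"^prop1=.*$", "prop1=A")  # second run: no change
print(conf)  # ['prop1=A']
```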

Conclusion

Once again, I helped a friend, and I'm happy about that. Instead of maintaining several files, he now maintains a single one, which makes the configuration much simpler.

This solution can be applied to many use cases, so share it; you will certainly help someone else. And don't forget to tell me about your problem, because I want to help you, too.


Topics:
ansible, ubuntu, vagrant, virtualbox, configuration as code, devops

Published at DZone with permission of Gustavo Carmo. See the original article here.

Opinions expressed by DZone contributors are their own.
