
Our Approach to Testing Sensu Plugins


The benefit and utility of testing can never be overstated. Here, we take a look at current and future testing practices for Sensu plugins.


The Power of Tests

I have the benefit of a strong testing culture in my organization and the peace of mind that it gives us. Good tests prevent bugs not only in our own code, but also surface issues in how that code interacts with upstream projects.

I've been a maintainer of Sensu's Plugins since May 2017, and I could not be happier to reach the point of sharing my philosophy on testing. The maintainers have started to roll out these practices in a standardized way, so I'm writing this to share the why behind the work. I will follow up with a more in-depth how in a future post.

If you love Sensu Plugins and know a little Ruby, I hope you'll join us on this journey to bring tests to all our plugins.

Why Do I Care? (TL;DR)

Ruby-based plugins in the Sensu community now have a pattern for integration testing that, with your help, is rolling out across all maintained plugins. This testing is essential: it lets us iterate quickly and know we're not introducing bugs into code we all rely on. I'm writing up our community testing practices here to give us a solid foundation going forward.

Refactor and Improve Safely

The most essential benefit of a test is that it tells us when functionality changes. I want to improve plugin tests so that I know a proposed contribution (PR) adding new features or changing existing ones will not break existing use cases. When code does break, and it will break, that will mean either the PR needs updates, or we have a breaking change and need to update the tests and "CHANGELOG.md" to acknowledge it.

When adding a new feature, you should consider the following:

  • How do I know this works as expected?
  • Can I write an automated test for it?
  • If I can't write an automated test, can I give manual instructions to someone to verify it works?

Test Artifacts for Accepting Code into the Community Projects

We have gone through some pretty radical changes over the last year or so in the plugins organization. The organization has grown in popularity as Sensu Inc. became a company, but it may not be well known that it's 99% maintained by community members. There are now nearly 200 plugins and extensions covering a lot of technologies and products that not all maintainers will have heard of, let alone have the expertise to review the changes SMEs are proposing. We rely heavily on code review and testing to give us confidence that merging code will not break another feature.

That is why, earlier this year, we updated our pull request templates to ask for a testing artifact with most code contributions. This slight change to the process has helped ensure a stable code base even in the absence of automated tests. That focus on quality plugins for the whole community, over some shiny new feature that breaks other users, has served our community well and will continue to be our policy. Code is only merged when it works for most cases, not just yours.

How We're Approaching Types of Testing

We all know we need tests. Now the big question: what kind of test is right for my code?

Manual

This process has served us well for the last six months. The benefit is that it's the easiest option, since it does not require writing code. The difficulty is that every minor edit during a code review requires someone to re-run these "tests," which doesn't scale. There may be cases where you need to manually verify a few things, but ideally testing should be automated, so that when you contribute changes, the tests run automatically and give you feedback when you break something.

As this is a pretty big area to cover, I think it's best to review the examples in the pull request template.

Automated

Automated tests are where we're making great progress these days. The practice of automating tests comes in a few forms.

Linting

Linting is a huge advantage when I think about the scale of the Sensu Plugins projects. We want to follow the best practices of each programming language so we can rely on linting as an early validation of code quality. For Ruby we use RuboCop, and for Python we run pep8/flake8. You can configure rules in a ".rubocop.yml" or ".flake8" file, respectively, in each repository as it makes sense.

I think of linters as the laws of the land when you are new to a language: they keep us out of pitfalls and help us become better developers. The rules are designed to be followed, so please follow them. Occasionally you'll come across a rule that's overkill and worth disabling, but please open a PR to discuss it with a maintainer and see if they are okay with removing it.
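As an illustrative sketch, a ".rubocop.yml" might loosen or disable a cop like this (the cop names are real RuboCop rules, but the settings here are hypothetical, not copied from an actual plugin repo):

```yaml
# Hypothetical .rubocop.yml sketch -- not from an actual sensu-plugins repo.
AllCops:
  TargetRubyVersion: 2.4

# Relaxing a rule like this is occasionally reasonable, but open a PR
# and discuss it with a maintainer first.
Metrics/LineLength:
  Max: 120

Style/Documentation:
  Enabled: false
```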

Unit Tests

A beneficial form of testing, unit tests are typically written against library functions. They offer a lot of value for libraries, where writing an end-to-end integration test may be difficult. All libraries should have unit tests; plugins don't necessarily need them.

Examples:
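As a sketch of the idea, here is a unit test for a hypothetical threshold-parsing helper. The function and its name are illustrative, not taken from an actual plugin library, and the real repos use RSpec; plain assertions are shown here to keep the example self-contained:

```ruby
# Hypothetical library helper -- parses a "warn:crit" threshold string
# into a pair of integers. Illustrative only.
def parse_thresholds(spec)
  warn, crit = spec.split(':', 2).map { |v| Integer(v) }
  raise ArgumentError, 'crit must be >= warn' if crit < warn
  [warn, crit]
end

# Unit tests exercise the function in isolation: no Sensu server,
# no network, just inputs and expected outputs.
raise 'parse failed' unless parse_thresholds('80:95') == [80, 95]

begin
  parse_thresholds('95:80')
  raise 'expected an ArgumentError for inverted thresholds'
rescue ArgumentError
  # expected: inverted thresholds are rejected
end
```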

Integration Tests

These are the holy grail of testing, especially for plugins. Integration tests not only exercise the code as designed, but also pull in dependencies to test the integration of the two or more components the plugin relies on. These are the make-or-break tests for plugin features and should be a major focus of our community!

Diving In: Anatomy of an Integration Test

Bootstrap dependencies: integration tests typically require bootstrapping with dependencies. These include (but are not limited to):
Dev/build tools such as a C compiler, system libraries, linters, testing frameworks, etc.

Examples:
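For instance, a Gemfile pulling in typical dev/build tooling might look like the sketch below. The gem list is illustrative (all are real gems, but check a given repo's actual Gemfile for what it pins):

```ruby
# Illustrative Gemfile sketch -- actual repos pin their own versions.
source 'https://rubygems.org'

gem 'sensu-plugin'   # base classes for checks and metrics

group :test do
  gem 'rubocop'      # linting
  gem 'rspec'        # testing framework
  gem 'webmock'      # HTTP mocking for integration-style tests
end
```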

Dependent systems such as datastores, web servers, etc. For example:
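As a sketch, a docker-compose file (hypothetical, not from an actual plugin repo) could stand up a datastore for the duration of the test run:

```yaml
# Hypothetical docker-compose.yml sketch for test dependencies.
version: '3'
services:
  redis:
    image: redis:4-alpine
    ports:
      - '6379:6379'   # exposed so the check under test can reach it
```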

Mocks or Configure Services

In some cases, it may not be reasonable to actually run, say, a full DC/OS Mesos cluster with multiple containers. In that case, we mock the required services with a web server such as NGINX, since the DC/OS API is served over HTTP and such APIs are fairly simple to mock in most cases.

Example:
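To keep a sketch of the idea self-contained, here is an in-process stand-in for a mocked HTTP API (in the real repos this role is played by NGINX serving canned responses; the endpoint and JSON payload here are made up):

```ruby
require 'socket'
require 'net/http'
require 'json'

# Tiny in-process HTTP server standing in for a mocked API endpoint
# (e.g. a DC/OS health endpoint). Hypothetical; NGINX with a static
# response file plays this role in real integration tests.
server = TCPServer.new('127.0.0.1', 0)
port = server.addr[1]

responder = Thread.new do
  client = server.accept
  # Drain the request line and headers, then return a canned payload.
  while (line = client.gets) && line.chomp != ''; end
  body = '{"healthy":true}'
  client.write("HTTP/1.1 200 OK\r\nContent-Length: #{body.bytesize}\r\n\r\n#{body}")
  client.close
end

# The plugin-style logic under test: query the mocked API and map
# health to a Sensu exit code (0 = OK, 2 = critical).
res = Net::HTTP.get_response(URI("http://127.0.0.1:#{port}/health"))
healthy = res.code == '200' && JSON.parse(res.body)['healthy']
status = healthy ? 0 : 2
responder.join
server.close

raise 'expected OK (exit code 0)' unless status.zero?
```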

Write Tests Against Integration Services

There are essentially two types of verification tests: positive and negative verification.

Examples:

Write Verification Tests

Positive tests: if I do this, I expect it to work.

Examples:
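For instance (the check script named in the comment is hypothetical; `ruby -e 'exit 0'` stands in for invoking it against a healthy service):

```ruby
# Positive verification sketch: run the check and expect OK.
# 'ruby -e "exit 0"' stands in for something like:
#   check-http.rb --url http://localhost:8080/health
system('ruby', '-e', 'exit 0')
ok_status = $?.exitstatus

# Sensu exit-code convention: 0 = OK, 1 = warning, 2 = critical, 3 = unknown.
raise 'expected OK (exit 0)' unless ok_status.zero?
```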

Negative tests can be just as powerful. They ask the other question: if I use this improperly, does it error? They are also useful for verifying critical or warning events.

Examples:
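A matching negative sketch (again, the check invocation in the comment is hypothetical; `ruby -e 'exit 2'` stands in for a check hitting a broken service):

```ruby
# Negative verification sketch: point the check at a broken service
# (or feed it bad input) and expect a non-OK exit code, not silence.
# 'ruby -e "exit 2"' stands in for something like:
#   check-http.rb --url http://localhost:9999/nope
system('ruby', '-e', 'exit 2')
crit_status = $?.exitstatus

# 2 is Sensu's critical exit code.
raise 'expected critical (exit 2)' unless crit_status == 2
```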

Testing of Sensu Plugins

We're at a point where Sensu Plugins are getting the test coverage that will give all of us more confidence on every PR review. Relying on manual testing was a great first step for us, but the move to automated unit tests for libraries and integration tests for plugins will be a huge leap forward. As a maintainer, that makes me excited about where we can go from here. As a Sensu user, that makes my job a little easier.


I hope you will get involved in the effort! Contribute tests to your favorite plugins and join us on Slack to talk about it with fellow maintainers. We would love to get you involved.



Published at DZone with permission of
