There’s a lot of great information online about how to write a good Puppet module. In this post I’m going to focus on the techniques we’ve used to have our modules officially ‘approved’ by PuppetLabs.
I’m assuming you’ve either built an entirely new module, or substantially improved one on the Forge, and that you’d like to prove this is a good quality module, so that it could gain approval by PuppetLabs.
Once you’ve published your module to the Forge, you’ve got a Quality Score to start with, and you can aim to improve it. This score comprises the code quality, the Puppet compatibility and the metadata quality.
We will use Rake (the common build utility for Ruby) to build our project, so add it to your gem dependencies if it has not been added already and create a Rakefile with this content:
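A minimal Rakefile that loads the puppet-lint rake task could look like this (the specific checks disabled and paths ignored below are my assumptions about a typical setup — adjust them to match the Forge's current rule set):

```ruby
# Rakefile
require 'puppet-lint/tasks/puppet-lint'

# Don't lint code pulled in by the spec fixtures or left over from builds
PuppetLint.configuration.ignore_paths = ['spec/**/*.pp', 'pkg/**/*.pp', 'vendor/**/*.pp']

# Report paths relative to the module root
PuppetLint.configuration.relative = true

# Example of relaxing a check -- the 80-characters rule is a common one to disable
PuppetLint.configuration.send('disable_80chars')
```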
By default Puppet Lint comes with a ton of rules; the configuration above is the one PuppetLabs uses when calculating your Quality Score.
Then you will be able to run the lint check:
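With the puppet-lint rake task loaded, that is simply:

```shell
rake lint
```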
Or, even better, use Bundler to manage your gem dependencies (I'll assume you do for the rest of this post):
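```shell
bundle exec rake lint
```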
Ensure that your code has no linting errors.
Your code should also be compatible with the latest Puppet versions. To validate it, we can use the Puppet command line:
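`puppet parser validate` takes manifest files as arguments, so one way to cover the whole module is:

```shell
# Validate every manifest in the module
find manifests -name '*.pp' -print0 | xargs -0 bundle exec puppet parser validate
```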
The Quality Score only checks Puppet compatibility, but we should also check that our Ruby code (including templates) is free of errors. Since all of these tasks should be automated, we can add a validate task to our Rakefile that includes all of these checks:
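A sketch of such a task, shelling out to `puppet`, `ruby -c` and `erb` (the paths are assumptions about a standard module layout):

```ruby
desc 'Validate manifests, templates and Ruby files'
task :validate do
  # Syntax-check every Puppet manifest
  Dir['manifests/**/*.pp'].each do |manifest|
    sh "puppet parser validate --noop #{manifest}"
  end
  # Syntax-check plugin and spec Ruby files (skipping fixtures)
  Dir['lib/**/*.rb', 'spec/**/*.rb'].each do |ruby_file|
    sh "ruby -c #{ruby_file}" unless ruby_file =~ %r{spec/fixtures}
  end
  # Compile each ERB template and syntax-check the result
  Dir['templates/**/*.erb'].each do |template|
    sh "erb -P -x -T '-' #{template} | ruby -c"
  end
end
```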
And run them with the validate task:
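```shell
bundle exec rake validate
```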
To publish a module to the Forge we need a metadata.json file, which contains important information about the module and can configure certain features. In addition to basic JSON validation, we also need to ensure that:
- It contains a valid license using the SPDX syntax.
- It sets an upper bound version limit for all the dependencies. The fact that your module is compatible with the 2.x.x version of another module doesn’t mean that it will also be compatible with the 3.x.x version.
- It provides OS compatibility information.
- It contains valid project, source and issue URLs.
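Putting those requirements together, a metadata.json could look like the sketch below — every name, URL and version here is a placeholder, not a real module:

```json
{
  "name": "yourname-yourmodule",
  "version": "1.0.0",
  "summary": "Manages the yourmodule service",
  "license": "MIT",
  "source": "https://github.com/yourname/puppet-yourmodule",
  "project_page": "https://github.com/yourname/puppet-yourmodule",
  "issues_url": "https://github.com/yourname/puppet-yourmodule/issues",
  "dependencies": [
    {
      "name": "puppetlabs/stdlib",
      "version_requirement": ">= 4.1.0 < 5.0.0"
    }
  ],
  "operatingsystem_support": [
    {
      "operatingsystem": "Ubuntu",
      "operatingsystemrelease": ["12.04", "14.04"]
    },
    {
      "operatingsystem": "Debian",
      "operatingsystemrelease": ["7"]
    }
  ]
}
```

Note the upper bound on the stdlib dependency and the explicit OS support list — both feed directly into the checks above.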
This validation can also be automated if you include the metadata-json-lint gem into your dependencies and the following line in the Rakefile:
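That line loads the rake task shipped with the gem:

```ruby
require 'metadata-json-lint/rake_task'
```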
Run the metadata linting:
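With recent versions of the gem the task is named `metadata_lint` (older releases called it `metadata`):

```shell
bundle exec rake metadata_lint
```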
If your code passes all these checks you will have 5 points in the Quality Score. This doesn't mean that your code works well, but at least it ensures that it is well written. It will also improve your module's visibility on the Forge, because the score is one of the main factors in ranking search results. More visibility means more downloads, which will help you gather feedback and be considered for an approved status.
Proper documentation is key for an Open Source project. Puppet provides a README template that you can follow to describe your module and write all the module information: description, requirements, limitations, usage and a complete reference.
When you apply for your module to be Puppet approved, the documentation validation is based on a human review, which makes sense given that documentation is intended to be read by humans. Show the documentation to your colleagues and make sure they can use your module without any extra explanation; remember that end users will not be able to ask you if they have any doubts.
A CHANGELOG.md file is also helpful, and definitely needed if you have added new functionality or backward-incompatible changes. Remember also to follow semantic versioning strictly.
Now it is time to prove that your module works. Puppet provides different test frameworks for your code, and third parties are also creating tools to help test Puppet modules, like Serverspec, which allows us to write RSpec tests that check that our servers are configured correctly.
Unit tests tell a developer that the code is doing things right.
The rspec-puppet gem is the common framework for unit testing Puppet. You should also add the puppetlabs_spec_helper gem, a set of shared spec helpers specific to Puppet projects, and mocha, a mocking and stubbing library for Ruby.
Then, create a directory called spec with a spec_helper.rb file inside it, and include the puppetlabs_spec_helper:
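```ruby
# spec/spec_helper.rb
require 'puppetlabs_spec_helper/module_spec_helper'
```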
You need to create unit tests for all your classes and defines if you want to be approved. You can read the documentation to know how to write the tests or check the tests of a good module to get an idea on how you should do it.
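For illustration, a minimal rspec-puppet spec for a hypothetical class could look like this (`yourmodule` and `yourpackage` are placeholders):

```ruby
# spec/classes/yourmodule_spec.rb
require 'spec_helper'

describe 'yourmodule' do
  # The catalog should compile, including all dependencies
  it { is_expected.to compile.with_all_deps }

  context 'with a custom version' do
    let(:params) { { :version => '1.2.3' } }

    it { is_expected.to contain_package('yourpackage').with_ensure('1.2.3') }
  end
end
```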
If your module has dependencies you should create a .fixtures.yml file to allow your tests to automatically install dependencies, for example if you depend on stdlib:
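```yaml
# .fixtures.yml
fixtures:
  repositories:
    stdlib: 'git://github.com/puppetlabs/puppetlabs-stdlib.git'
  symlinks:
    yourmodule: "#{source_dir}"
```

The symlink entry (with your module's name in place of `yourmodule`) makes the module under test available to the compiled catalogs.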
To run your unit tests automatically add the tasks provided by the puppetlabs_spec_helper to your Rakefile:
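```ruby
require 'puppetlabs_spec_helper/rake_tasks'
```

Among others, this provides the `spec`, `spec_prep` and `spec_clean` tasks.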
And run the tests:
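```shell
bundle exec rake spec
```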
Functional tests tell a developer that the code is doing the right things.
With Vagrant we will be able to build different environments where we can test our module, and with the beaker-rspec framework, which is a bridge between the Puppet acceptance test harness (beaker) and rspec, we will be able to set up machines, run any configuration on those machines, run the tests and then exit.
Create a spec_helper_acceptance.rb file inside the spec directory which should include beaker and install your module:
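A typical helper, following the pattern in the beaker-rspec README (`yourmodule` is a placeholder for your module's name):

```ruby
# spec/spec_helper_acceptance.rb
require 'beaker-rspec'

# Install Puppet on every host in the nodeset
hosts.each do |host|
  install_puppet
end

RSpec.configure do |c|
  module_root = File.expand_path(File.join(File.dirname(__FILE__), '..'))

  c.formatter = :documentation

  c.before :suite do
    # Copy the module under test to the virtual machines
    puppet_module_install(:source => module_root, :module_name => 'yourmodule')
  end
end
```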
The acceptance tests should be included inside the spec/acceptance directory. At a minimum they should ensure that the module installs without errors and that it is idempotent, meaning it can safely be run multiple times. Of course, they should also check that the module does what it is intended to do, and serverspec is a good tool for that.
An acceptance test should look like this:
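Here is a sketch covering the three points above — clean install, idempotency, and a serverspec check (all names are placeholders, and the package check assumes your module installs one):

```ruby
# spec/acceptance/yourmodule_spec.rb
require 'spec_helper_acceptance'

describe 'yourmodule class' do
  manifest = <<-EOS
    class { 'yourmodule': }
  EOS

  it 'applies without errors' do
    apply_manifest(manifest, :catch_failures => true)
  end

  it 'is idempotent' do
    # A second run must report zero changes
    apply_manifest(manifest, :catch_changes => true)
  end

  describe package('yourpackage') do
    it { is_expected.to be_installed }
  end
end
```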
Inside the spec/acceptance/nodesets directory we need to create at least one default.yml file with a node definition, but we should create multiple node definition files to test our module against different operating systems. This is an example default node definition:
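This sketch uses one of the Vagrant boxes PuppetLabs publishes (the exact box name and URL may have changed since writing):

```yaml
# spec/acceptance/nodesets/default.yml
HOSTS:
  ubuntu-14.04-x64:
    roles:
      - master
    platform: ubuntu-14.04-amd64
    box: puppetlabs/ubuntu-14.04-64-nocm
    box_url: https://vagrantcloud.com/puppetlabs/boxes/ubuntu-14.04-64-nocm
    hypervisor: vagrant
CONFIG:
  type: foss
```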
Finally you can run your beaker tests in the default node with:
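beaker-rspec picks the nodeset through the `BEAKER_set` environment variable (it falls back to `default` if unset):

```shell
BEAKER_set=default bundle exec rspec spec/acceptance
```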
To be approved you should create different test environments and automatically test your module against different operating systems (if applicable). There are no free build services that can run these Vagrant-based tests, so the PuppetLabs team will run them manually when you send your approval request, and they will also check that all the public classes and defines are tested.
GitHub is the most common repository host among Puppet module creators, and Travis CI the most common build tool. You can choose others, but you should have a public build status so that everybody can check whether your code is passing all the previously mentioned quality checks and tests. A badge with the build status is generally added to the README file.
To simplify the whole build process, which will run automatically on each commit in Travis CI, we can add a new task to the Rakefile:
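A simple aggregate task will do — the task names below assume the `validate` task and the lint tasks discussed earlier are defined in the same Rakefile:

```ruby
desc 'Run all the checks and tests (except acceptance) for CI'
task :build => [:validate, :lint, :metadata_lint, :spec]
```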
You can test the task with:
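```shell
bundle exec rake build
```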
We can use Travis CI to build the module against different Ruby and Puppet versions. A file called .travis.yml placed in the root of our project will define our build:
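A minimal matrix might look like this — the Ruby and Puppet versions listed are examples, so pick the ones your module actually supports:

```yaml
# .travis.yml
language: ruby
bundler_args: --without development
script: bundle exec rake build
rvm:
  - 1.9.3
  - 2.1.5
env:
  - PUPPET_VERSION="~> 3.4.0"
  - PUPPET_VERSION="~> 3.7.0"
```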
Ensure that you allow multiple Puppet versions to be installed based on an environment variable, which you can do by editing your Gemfile:
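For example (the gem list is illustrative; the key part is reading `PUPPET_VERSION` from the environment):

```ruby
# Gemfile
source 'https://rubygems.org'

# Let CI pick the Puppet version; default to a recent release locally
puppetversion = ENV['PUPPET_VERSION'] || '~> 3.7.0'
gem 'puppet', puppetversion

gem 'rake'
gem 'puppet-lint'
gem 'metadata-json-lint'
gem 'puppetlabs_spec_helper'
gem 'rspec-puppet'
gem 'mocha'
gem 'beaker-rspec'
```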
Keep your build green!
Make it flexible
There are two main reasons that can force someone to create a new module instead of using yours: yours didn't work well, which shouldn't happen if you follow all the previous steps, or your module is not flexible enough.
When publishing a module you should think about all the scenarios that can be solved with the tool you are automating, and parameterize the module accordingly.
Use a class for the default parameters (manifests/params.pp) and have your init file (manifests/init.pp) inherit from it. This adds readability to your module as the amount of logic or the number of parameters grows. If you have defines, it is also good to allow using them from the init class (with a hash parameter):
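Sketched in Puppet, with every class, parameter and define name being illustrative:

```puppet
# manifests/params.pp
class yourmodule::params {
  $version   = 'installed'
  $instances = {}
}

# manifests/init.pp
class yourmodule (
  $version   = $yourmodule::params::version,
  $instances = $yourmodule::params::instances,
) inherits yourmodule::params {

  # Allow declaring yourmodule::instance defines straight from the
  # init class through a hash parameter
  create_resources('yourmodule::instance', $instances)
}
```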
Keep an eye on the issues and pull requests of your module on GitHub, and try to implement your users' feature requests when they are reasonable. PuppetLabs looks for "a level of community engagement" around your module before approving it.
Is that all?
Probably not, but these are the steps I followed to get my NVM module approved by PuppetLabs.
That’s all folks!