
Publishing Our Asciidoc Developer Guide as a Book on Amazon

Thinking of creating documentation for your codebase and publishing it for the developer community? Read on to see how one team did it.



I thought desktop publishing was a “solved problem” but it seems that it’s pretty far from it. When I worked for Sun Microsystems our tech writer would just “magically” compose the final document from our work. When I worked for other companies we mostly wrote internal documentation and used Word. However, with Codename One I had to pick all of that up and I learned how bad things are in the real world.

In this article, I'd like to describe our toolchain, lessons learned, and how we do our docs as a large, heavily documented open source project. There are a lot of asciidoc tutorials and resources; this isn't one of them. If you are interested in asciidoc, O'Reilly did a great tutorial on that. I will give some tips at the end based on my experience with asciidoc, but I'm mostly writing this to help other projects decide whether they should use asciidoc or stick with their existing documentation toolchain.

Why Asciidoc?

Initially, I worked with Open Office and Word. They were both OK as text editors and I was used to working with them, but when it came time to do things like code highlighting or collaborative work, they became a pain. I also discovered that you need to know what you are doing to produce a good-looking document from either one of those. They make it very easy to write "badly" without styling, and that makes it very hard to create a proper uniform document.

I liked the visual nature of editing but I didn’t really need it. My favorite feature was probably the much-maligned grammar checker in Word… Yes, I know “it’s bad” but so is my grammar.

The end results looked awful and were really hard to maintain though. I also wanted more:

  • I wanted a good looking PDF.
  • I wanted the guide to integrate into our website.

We also toyed with using Google Docs for a while but we had issues in scaling and getting the collaborative aspect going. When the community can edit the document they can break formatting and that made it really hard to follow up. Google Docs output looked even worse than Word so that wasn’t a good choice either.

Enter JBake

We started using JBake for our site after trying multiple other static site generators. It worked really well and we loved the idea of static site generation for the common elements. One of the great features in JBake is its support for Asciidoc, and after we reviewed all the other options, it seemed like this would be the only "reasonable" way to have a guide that we can generate both as PDF and as decent looking HTML.

Besides all of that, Asciidoc has a few noticeable advantages over word processors:

  • Built-in source code highlighting.
  • Text-based so we can automate some pieces of the toolchain.
  • Looks good by default.
  • Can be used in the GitHub wiki pages and is understood by GitHub in general.
  • Used by major publishers such as O’Reilly.

We thought about markdown as well but eventually went with asciidoc, which seems to be more oriented toward desktop publishing, whereas markdown is more oriented toward web output.

The cool thing about asciidoc is that it was practically built with coders in mind. Things like “callouts” that allow you to place numbers within the code and then elaborate on them after the code are trivial in asciidoc but painful with pretty much every other tool I’ve used.
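For example, a callout-annotated snippet looks like this (the code here is an illustrative example, not taken from our guide):

```asciidoc
[source,java]
----
Form hi = new Form("Hi World");   // <1>
hi.show();                        // <2>
----
<1> Creates a form with the title "Hi World"
<2> Shows the form on the device screen
```

The numbered markers render as circled digits within the code, and the explanations appear right after the listing.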

Our Process

Because we wanted everything to be broken down and manageable we placed every segment in a separate asciidoc file and hosted them in our GitHub wiki. This allows pretty much anyone to just edit the files and also gives us great history for changes to the files.

We then need to generate JBake files for the website which should include the custom headers for that. I created a simple shell script that just copies the files from the wiki into our JBake web platform. It’s a bit long but, generally, it looks something like this (repeated for each file):

echo "title=About The Codename One Developer Guide - Build Cross Platform Apps Using Java
:icons: font
" | cat - ~/dev/CodenameOne.wiki/About-This-Guide.asciidoc | sed "s/image::/image::\//g" | sed "s/image:img/image:\/img/g" > ~/dev/CodenameOne/newWebsite/templates/content/manual/about.adoc 

If you aren’t familiar with bash the whole part on the top is just the JBake header which is used when converting the files to static HTML.

I then pipe the output through sed which converts relative image URIs to absolute image URIs. The main logic here is that the manual directory in the website isn’t in the root and I want to store all the images in one place. Having an absolute URL allows me to move the manual to a different location easily. However, the wiki and PDF need relative URLs as the location of the images varies.
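To illustrate what those two sed substitutions do (with made-up image paths, not files from our actual guide):

```shell
# block image macro: "image::" gets a leading slash on the path
echo "image::img/theme-editor.png[Theme]" | \
    sed "s/image::/image::\//g" | sed "s/image:img/image:\/img/g"
# prints: image::/img/theme-editor.png[Theme]

# inline image macro: "image:img" also gets a leading slash
echo "Click the image:img/save.png[] button" | \
    sed "s/image::/image::\//g" | sed "s/image:img/image:\/img/g"
# prints: Click the image:/img/save.png[] button
```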

I also set the asciidoc hint to use icon fonts instead of images when it’s showing notes and other such elements.

One File

One of the big problems/mistakes I made when we started with asciidoc was the decision to use one big file. Part of the script concatenates all of the files together. A more "modern" approach is to use an include directive from a master file, but that caused some issues early on. The main challenge was keeping links in a way that works both for the JBake web version and for the PDF printed output. This approach works though, so while I'm not too thrilled with it, we decided to stick with it for now.
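For comparison, the include-based approach we avoided would use a master document along these lines (the file names here are hypothetical):

```asciidoc
= Codename One Developer Guide
:doctype: book
:toc:

include::about-this-guide.adoc[]
include::basics.adoc[]
include::advanced-theming.adoc[]
```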

We had to use special macros that our concatenating script processed in order to link differently to the PDF and regular website output:

You can learn more about multi-images in the  https://www.codenameone.com/manual/advanced-theming.html#_understanding_images_multi_images[advanced theming] section.
You can learn more about multi-images in the <<understanding-images-and-multi-images>> section.

Our concatenation script is just a small, simple Java app, mostly because it's easier for me to write code in Java than anything else. I also needed something more elaborate than just connecting the files together, as I wanted a Table of Contents index in the HTML output. To do this I needed some logic, which was pretty easy to implement in Java (thanks to the text file format):

public class DevGuideIndexGenerator {
    /**
     * Argument 0 should be the version of the guide
     * Argument 1 should be the output asciidoc file
     * Argument 2 should be the index file in the website
     * Argument 3 onwards should be the files in the manual in the correct order
     */
    public static void main(String[] args) throws Exception {
        String version = args[0];
        FileOutputStream asciidocFile = new FileOutputStream(args[1]);
        FileWriter indexFile = new FileWriter(args[2]);

        indexFile.write("<div id=\"toc\" class=\"toc2\">\n" +
            "<div id=\"toctitle\">Table of Contents</div>\n" +
            "<ul class=\"sectlevel1\">\n");

        SimpleDateFormat sd = new SimpleDateFormat("MMM dd yyyy");

        asciidocFile.write(("= Codename One Developer Guide\n" +
            "Version " + version + ", " +  sd.format(new Date()) + "\n" +
            ":doctype: book\n" +
            "\n" +
            ":toc:\n" +
            ":toc-placement: manual\n\n").getBytes("UTF-8"));

        for(int iter = 3 ; iter < args.length ; iter++) {
            File f = new File(args[iter]);
            byte[] currentFile = new byte[(int)f.length()];
            DataInputStream di = new DataInputStream(new FileInputStream(f));
            di.readFully(currentFile);
            di.close();
            String fileContent = new String(currentFile, "UTF-8");

            // every wiki page starts with a properties header separated by ~~~~~~
            int index = fileContent.indexOf("~~~~~~");
            String header = fileContent.substring(0, index);
            Properties props = new Properties();
            props.load(new CharArrayReader(header.toCharArray()));
            index = fileContent.indexOf("\n", index);
            fileContent = fileContent.substring(index);

            // remove all the HTML only content by turning the markers into block comments
            fileContent = fileContent.replace("// HTML_ONLY_START", "////");
            fileContent = fileContent.replace("// HTML_ONLY_END", "////");

            // comment in "PDF_ONLY" sections
            int pdfOnly = fileContent.indexOf("PDF_ONLY");
            if(pdfOnly > -1) {
                StringBuilder sb = new StringBuilder(fileContent);
                while(pdfOnly > -1) {
                    // neutralize the block comment delimiters surrounding the PDF_ONLY marker
                    int followingComment = fileContent.indexOf("////", pdfOnly);
                    sb.setCharAt(followingComment + 2, ' ');
                    sb.setCharAt(followingComment + 3, ' ');
                    int beforeComment = fileContent.lastIndexOf("////", pdfOnly);
                    sb.setCharAt(beforeComment + 2, ' ');
                    sb.setCharAt(beforeComment + 3, ' ');
                    pdfOnly = fileContent.indexOf("PDF_ONLY", followingComment + 2);
                }
                fileContent = sb.toString();
            }

            // first page is the preface
            if(iter == 3) {
                asciidocFile.write("\n[preface]\n== ".getBytes());
            } else {
                asciidocFile.write("\n\n== ".getBytes());
            }
            String title = props.getProperty("title");
            if(props.getProperty("subtitle") != null) {
                title += " - " + props.getProperty("subtitle");
            }
            asciidocFile.write((title + "\n").getBytes("UTF-8"));
            asciidocFile.write(fileContent.replace("image::/img/developer-guide/", "image::").getBytes("UTF-8"));

            // top level TOC entry, marked as "current" via FreeMarker when it's the active page
            String url = f.getName().replace(".adoc", ".html");
            indexFile.write("<li  <#if content.uri?ends_with(\"" + url +
                "\")> class=\"current\"</#if> ><a href=\"" + url + "\">" + title +
                "</a>\n<ul class=\"sectlevel2\">\n");

            // second level entries are derived from the === headers
            for(String line : fileContent.split("\n")) {
                if(line.startsWith("=== ")) {
                    line = line.substring(4);
                    // rebuild the anchor name the way asciidoctor generates it
                    StringBuilder anchor = new StringBuilder("_");
                    StringTokenizer tt = new StringTokenizer(line.toLowerCase(), " ;&:_");
                    while(tt.hasMoreTokens()) {
                        anchor.append(tt.nextToken());
                        if(tt.hasMoreTokens()) {
                            anchor.append('_');
                        }
                    }
                    indexFile.write("<li><a href=\"" + url + "#" + anchor + "\">" + line + "</a></li>\n");
                }
            }
            indexFile.write("</ul>\n</li>\n");
        }
        indexFile.write("</ul>\n</div>\n");
        indexFile.close();
        asciidocFile.close();
    }
}

Notice that this is relatively simple: we just find every top-level and second-level header based on the conventions, then generate the TOC file. We also do all the basic stuff like setting the date of the developer guide in place, so everything is 100% automated (yes, I know I can do that with a field in Word, etc.).

The macro for the PDF is implemented by toggling the PDF only “magic comment” so code/description act correctly in the PDF mode.

Initially, we used that a lot when doing links but, lately, we’ve been lazy as the special syntax is a bit painful.

External Links and Floating Images

When writing a website, you want to link as much as possible. Since we have extensive JavaDocs I thought it would make sense to hyperlink every mention of a class to its JavaDoc page. Besides the SEO benefit, this could be useful to developers who can instantly find the class.

Doing this in the preprocessor isn't an option. It would need a more elaborate parser and I wasn't interested in going there. But I did write a quick script that hyperlinked everything, and then I fixed bad links manually. This seemed to work initially but had a problematic side effect…

When we generated the PDF, every link generated a footnote, which makes a lot of sense in theory. However, on a page about Button, I might mention it 20 times, which triggers 20 footnotes with the same URL!

Unfortunately, I couldn't find a solution for that, so I had to go over our links and try to reduce their number.

Another annoying thing is image behavior. You can easily float an image to the right in HTML output but not in PDF output. This is probably the most annoying formatting issue I came across. I could probably work around it with some creative table structures in some cases but this seems like such a trivial thing…

Toolchain Issues

One of the biggest problems with asciidoc is that it's just all over the place. There are a few toolchains; some work while others produce "weird" output or fail without a real reason. We recently tried to generate an epub file directly from our manual asciidoc. It seems this translated the document internally to docbook that was malformed and then failed validation.
I'm guessing this relates to our asciidoc code, but it's impossible to know why as the format isn't validated.

I was only able to work with asciidoctor to HTML and the fo-pub toolchain. Everything else produced artifacts when leafing through the docs. Maybe there was a warning printed along the way but when going over hundreds of pages of output it’s hard to notice warnings. I’m not sure a “lint” like tool would work for something like asciidoc as the format is so loose.

This isn’t as much of a big deal, asciidoctor’s docbook output seems to work well and I was able to use that after the fact to generate things such as epub documents using tools such as pandoc. That’s pretty sweet as you can convert the output to things such as Word relatively easily if you need to send it out and the output looks good.

Occasionally, I had to hack various things in fo-pub to make the document look nicer, e.g. I wanted a good looking cover image for the book which I generated with Spark. So to get this image to “cover” the PDF I had to change asciidoctor-fopub/build/fopub/docbook/fo/division.xsl and add entries for the cover image:

<xsl:template name="front.cover">
  <xsl:call-template name="page.sequence">
    <xsl:with-param name="master-reference">titlepage-cover-image</xsl:with-param>
    <xsl:with-param name="content">
      <fo:block text-align="center">
        <fo:external-graphic src="developer-guide-cover-image.jpg" content-height="297mm" content-width="210mm"/>
      </fo:block>
    </xsl:with-param>
  </xsl:call-template>
</xsl:template>

I made a lot of similar edits to customize font size, margins, etc.

For the print version, I had to remove this code, as Amazon has its own cover and doesn’t accept images that “bleed” (bleeding is when an image intentionally goes out of the print space).


I usually work with NetBeans which has some initial asciidoc support but it’s not there yet.

So I use Atom for the docs. It's surprisingly usable, although it needs frequent restarts as it hogs the CPU with large projects like this. One of the problems I've had with it is weird syntax highlighting issues; e.g., if you have a Java "try with resources" you need to add a semicolon at the end, or the syntax highlighting block never ends and makes editing "weird." It does have some pretty nice extensions like "write good" which do help my overly verbose writing style.


PDF is generally good for publishing, but when we got to the Kindle print stage we ran into issues with small images being stretched by default. This meant the image DPI was too low for print, and Amazon wouldn't accept that. Unfortunately, they didn't always list all the problematic images, so I had to upload a PDF, wait for processing, open the preview, fix, then rinse/repeat.

The solution was to add a “scaledwidth=30%” or something like that to images all over so they don’t upscale in the print version of the document.
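The attribute goes into the block image macro like this (the file name here is illustrative):

```asciidoc
image::img/settings-screenshot.png[Settings,scaledwidth=30%]
```

With `scaledwidth` set, the print/PDF output renders the image at 30% of the text width instead of upscaling it to full width.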

Initially, when we released the Kindle reader version of the document, I made the mistake of using Kindle Textbook Publisher instead of generating an epub file. That means I can’t go back to publishing epub without publishing the book!

This is a shame as it means the book isn’t viewable on the standard e-ink devices from Amazon and only on the Kindle Fire style devices. In retrospect, I should have been more careful when uploading the first book.

One of the common things we tend to do as developers is to work with A4 or letter sizes. This produces a book that's a bit "large." In retrospect, I might have chosen a smaller form factor for the output and would probably do that for the next book, though I'm afraid that would increase the page count, which is already pretty high…

Right now the book clocks in at 600+ pages, but when I started it was closer to 1,000 pages. It seems Amazon has a limit of roughly 890 pages. Since I had to shrink images anyway and reduced the font size a bit, the number of pages dropped significantly.

I thought about color printing, but that would have sent the book cost into the $50 or higher territory which I don’t think is fair for an open source book. Most of the images don’t really need color in this case. The copy I got doesn’t have color, which is fine, but I think the text is a bit faded when compared to other books. I don’t think it’s a deal breaker and I’m not sure other people will notice it as I do.

I used the Amazon wizard for the cover generation which looks decent. I used the ready-made cover image and mixed it with the generated cover. One caveat with the first book is that the back looks cartoonishly large. Since the book is of an A4 size the text in print just looked HUGE. I would recommend printing this on your local printer to get a sense of size before publishing.

One of the first things I noticed when I got the physical book back from Amazon was that it ended “abruptly.” I’m so used to books ending with an index and ours just doesn’t have it. We just didn’t include index markup for entries within the asciidoc code. The table of contents is really simple to do in asciidoc, but an index requires some work and I still don’t understand why or if there is a better alternative to just littering the docs with index entries. It’s not a deal breaker for a book whose PDF version is available for free (people can just use search there instead of an index) but it’s not ideal.

Overall, self-publishing on Amazon is pretty trivial. The tools walk you through most of the steps, you just need to make the right decisions early on as some things you can’t change after the fact (easily or at all):

  • Use epub or another dynamic format for the Kindle version.
  • Pick the right form factor for print books. I would not use A4 or letter as those can be too big.
  • Be prepared for some pain, there is a lot of back and forth with the publishing tools as they fail on the Amazon servers.

Final Thoughts

I would use asciidoc for the next project. It has warts but it’s pretty much the document writing tool for coders. So even with the problems, I’d go with it for the next book as well.

I think it’s a powerful tool that allows automation for coders and collaboration with familiar tools such as git and CI. I think our process can probably be refined a lot, but, for now, it works.

If you want to look through our asciidoc code and the final results of everything I wrote here, check out these links:



Published at DZone with permission of

Opinions expressed by DZone contributors are their own.
