How We Organized the Documentation and Translation Process in Iondv.Framework
This article discusses our mistakes, difficulties, and experience in configuring the translation of an open-source project through several different platforms.
Our team, developing iondv.framework, an open-source framework for the visual creation of web apps, has faced, like many other teams, the need to document the system's capabilities: primarily for ourselves and new coworkers, but also for users. Besides, nowadays documentation is a big part of product promotion; although there has been a recent shift towards media formats such as video lessons, documentation is still essential.
We’ve come a long way from the wiki in GitLab to well-structured bilingual documentation kept as markdown files in the GitHub repository. Now we’ve started the transition to the GitHub + Weblate + ReadTheDocs combination. Since there are not many articles on this topic, read below about our mistakes, difficulties, and experience in configuring the translation of an open-source project.
Approaches To the Documentation, Organization, and Localization
For starters, why isn’t the GitLab wiki good enough? It does not let you properly organize collaborative work or keep documentation changes versioned alongside code changes. For these reasons, at some point we skipped documenting new functionality altogether: the appearance of new functions was not accompanied by the appearance of corresponding documentation. It is also almost impossible to organize a parallel translation there, since tracking changes is quite time-consuming.
At a certain stage, we found a way out by moving the markdown documentation into the kernel repository. We simply copied the wiki folder into the docs folder at the root of the framework repository and divided it into two subfolders, ru and en, so the documentation was split into two parts. We introduced a business process in Jira with a mandatory stage of documenting every task that adds new functionality, followed by translation. It is easy to track what changes have been made, or whether the translation has been completed, through the document commits.
But this scheme has significant drawbacks:
- Working with the documentation in this form is not convenient for ordinary users
- Although our translator has mastered commits, git, and markdown, this is still, in general, a very inconvenient way to organize the translation of large amounts of text
- It is difficult to maintain the integrity of links and text matching, especially with minor edits—you have to monitor changes in documents manually
- The same template sections should be written on each page
- It is not possible to organize community participation in the translation of documentation
After studying the options, we grew to like the idea of hosting the documents on http://readthedocs.org/, a platform for publishing development documentation. Initially all of this grew out of Python (and we have a Node.js framework), and using methods from the "competing camp" seemed strange. But overall this solution has significant advantages: it supports versioning of documents and multiple languages, all based on sphinx-doc.org. The fact that you can build the documentation and host it directly on readthedocs for free is more of a bonus, although a nice one.
On the downside, we were not able to properly set up translation of the documentation from the Russian version while it stayed in markdown format: lists, tables, and links were translated as plain text, and when the documentation was built back they remained plain text. So we had to migrate to reStructuredText.
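For the migration itself, a batch conversion can be done with pandoc; this is just a sketch (it assumes pandoc is installed, and the docs/ru path is an illustrative assumption, not our exact layout):

```shell
# Convert every markdown file under docs/ru to reStructuredText next to it.
# Assumes pandoc is installed; docs/ru is an illustrative path.
find docs/ru -name '*.md' 2>/dev/null | while read -r f; do
  pandoc -f markdown -t rst "$f" -o "${f%.md}.rst" || exit 1
done
```

The resulting .rst files still need manual review: pandoc handles lists and tables well, but cross-document links usually have to be redone as Sphinx :doc: references.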
We launched a transition experiment, the results of which we want to share. Here's our process:
- Singled out the source documentation in Russian into a separate repository
- Created a separate repository for localization of the Russian documentation, to be translated into English and, in the future, other languages. Its GitHub content is generated automatically by the build and by commits from Weblate
- Set up a task for periodical update of localizations for sphinx translations
- Launched our own Weblate server for organizing the translations
- Configured readthedocs to build the documentation and publish it with the original English version (English version is not yet fully migrated; we are in the process of migrating it from the repository)
- Started running it before the planned significant revision of the documentation
We also took a look at how the Godot Engine organizes its documentation translation.
Read below for the deployment procedure from scratch, with configuration examples. Since we have only just launched this scenario, before we go deeper, we will be grateful for your feedback and experience, and especially for criticism and suggestions.
Repository Organization
We created the documentation repository in Russian, into which we converted the documentation from the framework repository from markdown to reStructuredText. We initially created an empty repository for the localizations; it will be the main repository for readthedocs.
In the iondv/doc-l10n repository with the translated documentation, we create a link to the source repository: git submodule add https://github.com/iondv/docs-ru (this also clones it right away). We carry out translations in a dedicated translation branch and then merge them into the master branch. Readthedocs builds the documentation from the master branch.
In the repository with the Russian documentation, we create a configuration file for sphinx. An example of the config file is available at the link. Pay attention to the two following parameters.
The first parameter is latex_elements, which specifies that the source should be processed as utf8; otherwise, the automatic build on readthedocs chokes on every Russian character.
latex_elements = {
'preamble': '\\usepackage[utf8]{inputenc}',
'babel': '\\usepackage[russian]{babel}',
'cmappkg': '\\usepackage{cmap}',
'fontenc': '\\usepackage[T1,T2A]{fontenc}',
'utf8extra':'\\DeclareUnicodeCharacter{00A0}{\\nobreakspace}',
}
The second parameter is gettext_compact. If it is False, localization files are created separately for each source file, and each one must then be connected to Weblate as a separate component. If the parameter is absent or set to True, each directory is compiled into one file, and it is enough to connect that one file as a component. It is also possible to use a script to merge all localization files into one, so that there is only a single file to translate in Weblate.
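The two parameters above sit in conf.py like this (a sketch; locale_dirs is the conventional location that sphinx-intl uses by default):

```python
# conf.py fragment: i18n settings for Sphinx / sphinx-intl (a sketch)
locale_dirs = ['locales/']   # where sphinx-intl puts the .po catalogs, per language
gettext_compact = True       # one .pot per directory instead of one per source file
```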
After committing and pushing in docs-ru, do not forget to commit and push in the parent docs-l10n repository, so that the submodule link points to the latest documentation commit.
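That submodule bump is only a few commands; here is a sketch, assuming a docs-l10n working copy with the docs-ru submodule inside (it skips silently if the working copy is not present):

```shell
# Record the latest docs-ru commit in the parent docs-l10n repository
# (skip silently if the working copy is not present).
if cd docs-l10n/docs-ru; then
  git pull origin master          # bring the submodule to the newest commit
  cd ..
  git add docs-ru                 # stage the updated submodule pointer
  git commit -m "Bump docs-ru submodule"
  git push
fi
```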
So far, all our localization files are generated per folder and live in the localization repository, in the translation branch. We want one project for the documentation and separate projects for translating the framework and its modules. For each file corresponding to a folder (topic) in the documentation, a separate component is created in Weblate.
We tried making all the files separate (gettext_compact=False), so that the project corresponded to a directory and each file was a component. But when there are many files, connecting them all is quite tedious; besides, when there are many projects, confusion is possible. If you have experience organizing and composing such files, please share.
Deploying the Server Environment
Everything is deployed in Docker containers. We used Ubuntu, but CentOS would not differ much. All the servers run in the service-net internal Docker network, which we create with the docker network create service-net command.
Firstly, we set the environment for Weblate: Redis and PostgreSQL are required (correct the account data in the docker container environment variables).
docker run --name redis -v redis-data:/data \
--expose 6379 --net service-net \
--restart unless-stopped -d \
redis:4-alpine redis-server --appendonly yes
docker run --name postgres -v postgres-data:/var/lib/postgresql/data \
--expose 5432 \
--env "POSTGRES_PASSWORD=123" \
--env "POSTGRES_USER=iondv" \
--env "POSTGRES_DATABASE=weblate" \
--env "POSTGRES_DB=weblate" \
--env "POSTGRES_HOST=postgres" \
--env "POSTGRES_PORT=5432" \
--net service-net --restart unless-stopped -d \
postgres:11-alpine
Then we install Weblate to start the system. Correct the hosts, the accounts (including the GitHub user) in the appropriate fields, and the time zone. Our time zone, Asia/Vladivostok (which Khabarovsk uses), is given as an example.
docker run --name weblate \
-v weblate-data:/app/data \
--expose 8080 \
--env "DEBUG=True" \
--env "POSTGRES_PASSWORD=123" \
--env "POSTGRES_USER=iondv" \
--env "POSTGRES_DATABASE=weblate" \
--env "POSTGRES_DB=weblate" \
--env "POSTGRES_HOST=postgres" \
--env "POSTGRES_PORT=5432" \
--env "REDIS_HOST=redis" \
--env "REDIS_PORT=6379" \
--env "WEBLATE_EMAIL_HOST=smtp.yandex.ru" \
--env "WEBLATE_EMAIL_PORT=465" \
--env "WEBLATE_EMAIL_USE_TLS=True" \
--env "WEBLATE_EMAIL_HOST_USER=weblate@YOUR_HOST.com" \
--env "WEBLATE_EMAIL_HOST_PASSWORD=123" \
--env "WEBLATE_SERVER_EMAIL=weblate@YOUR_HOST.com" \
--env "WEBLATE_DEFAULT_FROM_EMAIL=weblate@YOUR_HOST.com" \
--env "WEBLATE_ADMIN_PASSWORD=123" \
--env "WEBLATE_ADMIN_NAME=IONDV Weblate Admin" \
--env "WEBLATE_ADMIN_EMAIL=weblate@YOUR_HOST.com" \
--env "WEBLATE_DEBUG=false" \
--env "WEBLATE_LOGLEVEL=DEBUG" \
--env "WEBLATE_SITE_TITLE=IONDV.Doc translation" \
--env "WEBLATE_ALLOWED_HOSTS=weblate.YOUR_HOST.com" \
--env "WEBLATE_REGISTRATION_OPEN=1" \
--env "WEBLATE_REQUIRE_LOGIN=0" \
--env "WEBLATE_TIME_ZONE=Asia/Vladivostok" \
--env "TZ=Asia/Vladivostok" \
--net service-net --restart unless-stopped -d \
weblate/weblate:3.11.2-1
Please note that in Weblate version 3.10 and later, you need to specify the source language after creating the project; otherwise, the texts for translation from Russian are not displayed. We tested this on version 3.11.2-1.
In order for Weblate to be accessed via SSL and with any necessary restrictions and settings, we will use external Nginx, but you can also get by with the internal one.
Add the config file /etc/conf.d/weblate.conf.
server {
listen 80;
server_name weblate.YOURHOST.ru;
return 301 https://weblate.YOURHOST.ru$request_uri;
}
server {
listen 443 ssl;
server_name weblate.YOURHOST.ru;
ssl_stapling on;
resolver 8.8.8.8 valid=300s;
ssl_certificate /etc/nginx/cert/YOURHOST.crt;
ssl_certificate_key /etc/nginx/cert/YOURHOST.key;
ssl_dhparam /etc/nginx/cert/dhparam.pem;
ssl_session_timeout 10m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers kEECDH+AES128:kEECDH:kEDH:-3DES:kRSA+AES128:kEDH+3DES:DES-CBC3-SHA:!RC4:!aNULL:!eNULL:!MD5:!EXPORT:!LOW:!SEED:!CAMELLIA:!IDEA:!PSK:!SRP:!SSLv2;
ssl_prefer_server_ciphers on;
add_header Strict-Transport-Security "max-age=31536000;";
add_header Content-Security-Policy-Report-Only "default-src https:; script-src https: 'unsafe-eval' 'unsafe-inline'; style-src https: 'unsafe-inline'; img-src https: data:; font-src https: data:; report-uri /csp-report";
location / {
proxy_pass http://weblate:8080;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-NginX-Proxy true;
proxy_ssl_session_reuse off;
proxy_set_header Host $http_host;
proxy_redirect off;
}
access_log /var/log/nginx/weblate.access.log;
error_log /var/log/nginx/weblate.error.log error;
}
There is a detailed example of an nginx configuration in the GitHub repository.
Launch Nginx:
docker run --name nginx_proxy \
-v /etc/nginx.conf:/etc/nginx/nginx.conf:ro -v /etc/conf.d:/etc/nginx/conf.d:ro \
-v /etc/nginx/cert:/etc/nginx/cert:ro -v /var/log/nginx:/var/log/nginx \
-p "80:80" \
-p "443:443" --net service-net \
--env "TZ=Asia/Vladivostok" \
--restart unless-stopped -d \
nginx
Configuring a Scheduled Task for Sphinx
We are going to write a separate script for updating localizations, which will run on a schedule in cron (for example, once a day). But first, we build a docker image to run sphinx, with a Dockerfile for it.
Here's the Dockerfile:
FROM python:3-alpine
RUN apk add --update make && pip install sphinx sphinx_rtd_theme sphinx-intl
WORKDIR /docs
VOLUME ["/docs"]
CMD []
We build the image in the directory where the Dockerfile is located with the docker image build -t sphinx:latest command.
After that, we place the cloned repository with the translated documentation (with docs-ru, the Russian documentation, inside it) in the /workspace/doc-l10n directory; our script will live in the /workspace directory.
To automate work with github.com, we authorize using a token. To get one, go to the developer settings of the user under which we commit and generate a token. The same token is needed when specifying repositories for Weblate.
To build the project, the script needs to download (update) the repositories and, if there are changes, rebuild the localization files with sphinx. Put the user and their token into the gitUser and gitToken variables, respectively.
#!/bin/bash
gitToken="111111111111111111111111111111111"
gitUser="user"
curDir=`pwd`
cd ${0%/*} && scriptDir=`pwd` && cd $curDir
l10nDir=$scriptDir/iondv-docs-l10n
docsDir=$l10nDir/docs-ru
if [ -d $l10nDir ]; then
  cd $l10nDir
  git branch | grep "^* translation"
  if [ $? -ne 0 ]; then
    git checkout translation
    git branch | grep "^* translation"
    if [ $? -ne 0 ]; then echo "Failed to switch to the translation branch"; exit; fi
  fi
  resPull=`git pull`
  resPullStatus=$?
  cd $curDir
  echo -n "Checking l10n changes: " && echo $resPull | grep "Already up to date"
  if [ $? -eq 0 ]; then echo "No l10n changes"; fi
else
  git clone https://$gitUser:$gitToken@github.com/iondv/docs-l10n -b translation $l10nDir
  cd $l10nDir
  git branch | grep "^* translation"
  if [ $? -ne 0 ]; then echo "Failed to switch to the translation branch"; exit; fi
fi
if [ -d $docsDir/.git ]; then
  cd $docsDir
  resPull=`git pull`
  resPullStatus=$?
  cd $curDir
  echo -n "Checking docs-ru changes: " && echo $resPull | grep "Already up to date"
  if [ $? -eq 0 ]; then echo "No changes. Exiting"; exit; fi
else
  git clone https://$gitUser:$gitToken@github.com/iondv/docs-ru $docsDir
fi
docker run -it --rm -v $l10nDir:/docs sphinx sphinx-build -b gettext /docs/docs-ru /docs/gettext
if [ $? -ne 0 ]; then echo "Error creating gettext files"; exit 1; fi
docker run -it --rm -v $l10nDir:/docs sphinx sphinx-intl update -p /docs/gettext -l en
if [ $? -ne 0 ]; then echo "Error updating locales"; exit 1; fi
docker run -it --rm -v $l10nDir:/docs sphinx chmod -R ugo+r /docs/gettext /docs/locales
curDate=`date +"%Y%m%d-%H-%M"`
cd $l10nDir
gitStatus=`git status | grep "nothing to commit, working tree clean"`
if [ $? -eq 0 ]; then
  echo "No changes"
  exit 0
else
  echo "Changes detected"
  git add .
  git commit -a -m "$curDate autoupdate sphinx locales"
  git push
fi
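Scheduling the script then comes down to a single cron entry. A sketch for /etc/crontab, assuming the script above is saved as /workspace/update-l10n.sh (the script name and log path are our assumptions, adjust to taste):

```
# Rebuild the localization files once a day at 03:00 server time
0 3 * * * root /workspace/update-l10n.sh >> /var/log/l10n-update.log 2>&1
```

If you use a per-user crontab via crontab -e instead of /etc/crontab, drop the `root` user field.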
Weblate Setup
Create a project and specify the basic parameters. We create one project for the documentation, with a component for each topic (folder); an example is available at https://weblate.iondv.ru/projects/docs/.
After creating the project, do not forget to set the source language (Russian) in the project properties; otherwise, translations are not recognized correctly.
After that, we create a translation component in the project. The localization files for root-level documents sit in the root of the repository, while the files for each subfolder are combined into one file located in the gettext folder. First, indicate the name, project, repository, and source branch.
After that, we select the file for translation. Please note that you need to select a bilingual file (one holding both source and translation), not a monolingual one.
At the next step, you need to specify the push URL for the repository; indicate a user who has the rights to push to it. For example: https://USER:123@github.com/iondv/docs-l10n, where USER is the login and 123 is the password.
Then, on the same page, select the Template for new translations. In this field, specify the source file for the component in the gettext folder (the folder where sphinx puts the files with the .pot extension).
Now we can translate. For example, we can translate from Russian to English in the index translation component of the docs project.
For new components from the same repository, it is not necessary to specify all the parameters; it is enough to point to an already created component via Weblate's internal addressing. For our example: weblate://docs/index.
Here is an example of a translation.
After translating, go to the project properties (or the component properties, if the repositories differ), make a commit, and push the translation to the remote repository (the one specified in the component settings).
Also, if there were changes in the remote repository, we can receive them by clicking the Pull button.
In the repository, we can track the changes made during translation in the translation branch. After verifying them, we merge them into master. Pay attention to the GitHub screenshot: commits to the gettext folder were made by the script after a sphinx update of the iondv/docs-ru documentation, while commits to the locales/en/LC_MESSAGES folder were made by Weblate.
Setting Up readthedocs.org
We import our repositories into readthedocs.io.
Firstly, we import iondv/docs-l10n and specify the desired name, which forms the URL NAME.readthedocs.io. Tick Edit advanced project options; it is required to indicate the language of the project, but since we want English to be the main translation language, we leave the language unchanged.
After that, you can go to the Build tab. The project has already started building. It clones not only the iondv/docs-l10n repository, but also the iondv/docs-ru submodule.
After the build is finished, the View docs button will, in our case, lead to the latest English version, which we formed in Weblate, pushed to the translation branch, and merged into master.
Now we need to add Russian to the documentation. To do this, go back to the control panel, specify the project name iondv-ru, and import the iondv/docs-ru project. Again, tick Edit advanced project options and set Russian as the language in the additional settings.
The project will start building right away, but don't wait for it to finish: go back to the main iondv project with the English version of the documentation, open the Admin settings, and add the iondv-ru project in the Translations section.
Now in the documentation parameters, you can choose not only the version but also the language:
Our Plans
- Remake the structure of the documentation, adjust the automated generation of indexing, and verify all the links in the documentation and correct the templates
- Transfer the entire translation workflow from the framework repository on GitHub to Weblate
- Place links to the iondv.framework documentation in the framework description and on the site on readthedocs, possibly under our own domain
- Finalize the introductory documentation of the framework and collect all the descriptions (including demos and modules) in one place
- Translate new documents into English, possibly with the help of the community
- Try other languages
- Enable Weblate to translate system messages to other locales in the core repository of the framework (currently it’s end-to-end translation by scripts).
This last point is the key one. In the version planned for release, multilingualism is fully implemented not only in the core but also at the level of modules and templates. But we haven't run it in with Weblate translations yet, because we are confused by the way translation breaks language syntax. For example, ERR_MSG='Сервер запущен' ("Server started") became ERR_ MSG="Server\'s up and running.' with a broken identifier and mismatched quotes. It might make sense to rework it using the GNU gettext model, but not everything is clear with that either.
If you’ve got such experience with this, please share in the comments.
You can follow the project and our localization experience on Facebook or LinkedIn.
Opinions expressed by DZone contributors are their own.