
Deploy IBM App Connect Enterprise Apps From CI/CD

Sharing an Example Tekton Pipeline for Deploying an IBM App Connect Enterprise Application to Red Hat OpenShift.

By Dale Lane · Nov. 16, 22 · Tutorial

This post is about a repository I've shared on GitHub at dalelane/app-connect-tekton-pipeline. It contains an example of how to use Tekton to create a CI/CD pipeline that builds and deploys an App Connect Enterprise application to Red Hat OpenShift.


The pipeline uses the IBM App Connect Operator to build, deploy, and manage your applications in containers. It runs on OpenShift, so it can be integrated into an automated continuous delivery workflow without needing to build anything locally on a developer's workstation.

For background information about the Operator, and the different types of Kubernetes resources that this pipeline will create (e.g. IntegrationServer and Configuration), see these blog posts: 

  • What is an Operator and why did we create one for IBM App Connect?
  • Exploring the IntegrationServer Resource of the IBM App Connect Operator
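For orientation, an IntegrationServer resource looks roughly like the sketch below. This is my own minimal outline based on the public Operator documentation, not taken from the repo; the license values and version string are placeholders to check against your Operator release.

```yaml
# Rough sketch of an IntegrationServer resource managed by the
# IBM App Connect Operator. Values marked as placeholders must be
# replaced with ones valid for your Operator version and license.
apiVersion: appconnect.ibm.com/v1beta1
kind: IntegrationServer
metadata:
  name: hello-world-http
  namespace: ace-demo
spec:
  license:
    accept: true
    license: L-XXXX-XXXXXX             # placeholder license ID
    use: AppConnectEnterpriseProduction  # placeholder license use value
  version: "12.0"                      # placeholder runtime version
  replicas: 1
  configurations:
    - my-serverconf                    # names of Configuration resources to mount
```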

Pipeline

The pipeline builds and deploys your App Connect Enterprise application. Run it every time your application changes and you want to deploy the new version to OpenShift.

When running App Connect Enterprise in containers, there is a lot of flexibility about how much of your application is built into your container image, and how much is provided when the container starts.

For background reading on some of the options, and some of the considerations about them, see the blog post: Comparing styles of container-based deployment for IBM App Connect Enterprise.

This pipeline provides almost all parts of your application at runtime when the container starts. The only component that is baked into the image is the application BAR file.

Baking the BAR files into custom App Connect images avoids the need to run a dedicated content server to host them. If you would prefer to host BAR files on a content server instead, see the documentation on Mechanisms for providing BAR files to an integration server for details on how to do this. (The pipelines in the repository use the approach described as "Custom image" in that documentation.)
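With the "Custom image" approach, the IntegrationServer points at an image that already contains the BAR file. As a rough illustration (the field path and image reference below are my assumptions from the Operator documentation, so verify them against your Operator version):

```yaml
# Hypothetical fragment of an IntegrationServer spec pointing at a
# custom image that has the application BAR file baked in.
spec:
  pod:
    containers:
      runtime:
        image: image-registry.openshift-image-registry.svc:5000/ace-demo/simple-demo:latest
```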

Running the Pipeline

  • pipeline spec:
    • pipeline.yaml
  • example pipeline runs:
    • simple-pipelinerun.yaml
    • complex-pipelinerun.yaml
  • helper scripts:
    • 1-deploy-simple-integration-server.sh
    • 1-deploy-complex-integration-server.sh

What the Pipeline Does

Builds your IBM App Connect Enterprise application and deploys it to the OpenShift cluster.

Outcome From Running the Pipeline

A new version of your application is deployed with zero downtime, replacing any existing version of the app once the new one is ready.

Background

As discussed above, most of your application configuration will be provided to your application container at runtime by the Operator using Configuration resources.

This example pipeline currently supports many, but not all, of the types of Configuration resource:

  • Loopback data source type
  • Policy project type
  • setdbparms.txt type
  • server.conf.yaml type
  • Truststore type

For more information about the other Configuration types, see the documentation on Configuration types for integration servers. Adding support for any of the additional types would mean writing new tasks alongside those provided in the repo; the existing tasks are commented to help with this.
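As a rough illustration of what the pipeline creates, a server.conf.yaml override can be delivered as a Configuration resource with base64-encoded contents. The resource name and exact spec fields below are my sketch from the Operator documentation, not values from the repo:

```yaml
# Hypothetical Configuration resource carrying server.conf.yaml
# overrides; 'contents' holds the base64-encoded file.
apiVersion: appconnect.ibm.com/v1beta1
kind: Configuration
metadata:
  name: my-serverconf
  namespace: ace-demo
spec:
  type: serverconf                 # one of the supported Configuration types
  description: server.conf.yaml overrides for the demo app
  contents: <base64-encoded server.conf.yaml>   # e.g. output of base64 on the file
```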

Each of these configuration resources is individually optional. Two example App Connect applications are provided to show how the pipeline supports different application types.

Simple Stand-Alone Applications

The pipeline can be used to deploy a stand-alone application with no configuration dependencies.

  • sample application
    • simple-demo
  • pipeline run config
    • simple-pipelinerun.yaml
  • demo script:
    • 1-deploy-simple-integration-server.sh

This is a simple App Connect application with no external configuration.

When deploying this, the pipeline skips all of the Configuration tasks.

[animated GIF: a sped-up recording of the simple pipeline run]

Complex Applications

The pipeline can be used to deploy complex applications with multiple configuration dependencies, including Java projects.

  • sample application
    • sample-ace-application
  • pipeline run config
    • complex-pipelinerun.yaml
  • demo script:
    • 1-deploy-complex-integration-server.sh

This is an example of an App Connect application that needs configuration for connecting to:

  • a PostgreSQL database
  • an external HTTP API
  • an Apache Kafka cluster

When deploying this, the pipeline runs all of the Configuration tasks required for this application.

[animated GIF: a sped-up recording of the complex pipeline run]
To avoid needing to store credentials in Git with your application code, the pipeline retrieves credentials from Kubernetes secrets. When configuring the pipeline for your application (see the section below) you need to specify the secrets it should use to do this.
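For example, database credentials could be held in an ordinary Kubernetes secret like the one below. The secret name and keys here are hypothetical; the names the pipeline actually expects are documented in the repo's pipeline spec.

```yaml
# Hypothetical credentials secret the pipeline could read at
# deploy time, keeping passwords out of Git.
apiVersion: v1
kind: Secret
metadata:
  name: postgres-credentials
  namespace: ace-demo
type: Opaque
stringData:
  username: appconnect
  password: changeme
```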

Sample Apps

I've put notes on how I set up the sample apps to demonstrate the pipeline in demo-pre-reqs/README.md. However, neither of the sample apps is particularly useful in itself; they exist purely to test and demo the pipeline.

You can import them into App Connect Toolkit to edit them if you want to by:

  1. File -> Import... -> Projects from Folder or Archive
  2. Put the location of the ace-projects folder as the Import source.
  3. Tick all of the projects

That will let you open the projects and work on them locally. If you're curious what they do, here are some brief notes:

Simple App

It provides an HTTP endpoint that returns a Hello World message.

Running this:

Shell
 
curl "http://$(oc get route -nace-demo hello-world-http -o jsonpath='{.spec.host}')/hello"


returns this:

JSON
 
{ "hello" : "world" }


Complex App

It provides an intentionally contrived event-driven flow that:

  • "Kafka consumer todo updates"
    • receives a JSON message from a Kafka topic
  • "get id from update message"
    • parses the JSON message and extracts an ID number from it
  • uses the ID number to create an HTTP URL for an external API
  • "retrieve current todo details"
    • makes an HTTP GET call to the external API
  • "base64 encode the description"
    • transforms the response from the external API using a custom Java class
  • "insert into database"
    • inserts the transformed response payload into a PostgreSQL database

The aim of this application was to demonstrate an ACE application that needed a variety of Configuration resources.

As a result, running this:

Shell
 
echo '{"id": 1, "message": "quick test"}' | kafka-console-producer.sh \
    --bootstrap-server $BOOTSTRAP \
    --topic TODO.UPDATES \
    --producer-property "security.protocol=SASL_SSL" \
    --producer-property "sasl.mechanism=SCRAM-SHA-512" \
    --producer-property "sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username=\"appconnect-kafka-user\" password=\"$PASSWORD\";" \
    --producer-property "ssl.truststore.location=ca.p12" \
    --producer-property "ssl.truststore.type=PKCS12" \
    --producer-property "ssl.truststore.password=$CA_PASSWORD"

gets you this:

store=# select * from todos;
 id | user_id |       title        |            encoded_title             | is_completed
----+---------+--------------------+--------------------------------------+--------------
  1 |       1 | delectus aut autem | RU5DT0RFRDogZGVsZWN0dXMgYXV0IGF1dGVt | f
(1 row)

Configuring the Pipeline for Your App Connect Enterprise Application

To run the pipeline for your own application, you first need to create a PipelineRun.

The sample pipeline runs described above are a good starting point, which you can modify to your own needs. You need to specify the location of your App Connect Enterprise application code and configuration resources. All of the available parameters are documented in the pipeline spec if you need further guidance.
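A skeleton of such a PipelineRun might look like the sketch below. The pipeline name and parameter names here are illustrative guesses, not the actual names from pipeline.yaml; check the spec in the repo for the real parameters.

```yaml
# Hypothetical PipelineRun skeleton for the App Connect pipeline;
# parameter names are placeholders to be replaced with those
# documented in pipeline.yaml.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: ace-app-deploy-
spec:
  pipelineRef:
    name: pipeline              # the Pipeline name defined in pipeline.yaml
  params:
    - name: git-url             # placeholder parameter name
      value: https://github.com/your-org/your-ace-app.git
    - name: ace-project-name    # placeholder parameter name
      value: your-ace-application
```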

Alternative Approaches

Running App Connect Enterprise in containers is ideally suited to a variety of CI/CD approaches. The pipeline described in this post was useful for a project I recently worked on, but you can find a variety of other pipeline approaches for managing your ACE application. For another great option, see github.com/ot4i/ace-demo-pipeline.


Published at DZone with permission of Dale Lane. See the original article here.

Opinions expressed by DZone contributors are their own.
