
Synchronizing Files from Maximo Application Suite 8 to a Shared Drive Without OpenShift Integration

In this article, we explore how to implement file synchronization from MAS8 to a shared drive without involving OpenShift storage.

By Wasia Maya and Lakshmana Rao Koppada · Updated by Ram Sekhar Bodala · Sep. 13, 24 · Tutorial · 4.0K Views

In IBM Maximo 7.6 and earlier versions, it was common to have Maximo drop files directly onto a network shared drive connected to your administrative server. This setup allowed for seamless file handling and sharing across your network. With the advent of Maximo Application Suite (MAS) 8, you can achieve similar functionality if the shared drive is integrated as part of your OpenShift storage class (e.g., NFS, CSI). However, there are scenarios where you might not want the shared drive to be part of OpenShift. In this article, we explore how to implement file synchronization from MAS8 to a shared drive without involving OpenShift storage.

Design and Solution Implementation

In this solution, we provide a script that runs on your Orchestrator server, using a kubeconfig file for authentication. The shared drive must also be mounted on the Orchestrator server. This setup lets you copy files from the MAXINST pod to the shared drive without integrating the drive into OpenShift.
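For context, mounting the shared drive on the Orchestrator server is an ordinary NFS mount. The sketch below is illustrative only; the server name, export path, and mount point are assumptions to adapt to your environment (the sketch itself uses a temporary stand-in directory so it runs without root or an NFS server):

```shell
# Sketch of mounting the shared drive on the Orchestrator server.
# nfs-server.example.com and /export/maximo_sync are hypothetical names.
#
# Persistent mount: add a line like this to /etc/fstab:
#   nfs-server.example.com:/export/maximo_sync  /maximo_sync  nfs  defaults  0 0
#
# One-off mount (run as root):
#   mkdir -p /maximo_sync
#   mount -t nfs nfs-server.example.com:/export/maximo_sync /maximo_sync
#
# Stand-in directory used by this sketch:
mkdir -p /tmp/demo_maximo_sync
echo "mount point prepared"
```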

Why Not Use PVC Directly?

One might argue for moving files directly from a Persistent Volume Claim (PVC) on NFS storage to the mount. However, files are not always reflected on the PVC in real time, so we manipulate the files directly at the source, inside the pod.

Step-By-Step Solution

1. Create a Persistent Volume Claim (PVC)

Create a PVC named Maximo_Files and mount it at /Maximo_Files in the pod.
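A minimal manifest for such a PVC might look like the following. This is a sketch, not the exact manifest from the original setup: the storage class, size, and access mode are assumptions, and since Kubernetes resource names must be lowercase, the PVC object itself would be named something like maximo-files even though it is mounted at /Maximo_Files:

```shell
# Hypothetical PVC manifest; adjust namespace, storageClassName, and size for your cluster.
cat > /tmp/maximo-files-pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: maximo-files
  namespace: mas-masivt810x-manage
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 10Gi
EOF
# Apply against your cluster with the same kubeconfig used later in this article:
# oc apply -f /tmp/maximo-files-pvc.yaml --kubeconfig=/path-to-the-kubeconfig/auth/kubeconfig
echo "manifest written"
```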

2. Create Directories Inside the PVC

Inside the Maximo_Files directory, create three directories:

  • maximo_sync
  • SyncFiles
  • backups
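These directories can be created inside the pod with oc exec. The sketch below builds the same layout under a local stand-in path so it runs without a cluster; the real oc command (with hypothetical variable values) is shown as a comment:

```shell
# Stand-in for /Maximo_Files inside the pod.
BASE=/tmp/Maximo_Files_demo
mkdir -p "$BASE/maximo_sync" "$BASE/SyncFiles" "$BASE/backups"

# Against a real cluster you would run something like:
# oc exec -n "$INSTANCE" "$POD_NAME" -- \
#   mkdir -p /Maximo_Files/maximo_sync /Maximo_Files/SyncFiles /Maximo_Files/backups

ls "$BASE"
```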

3. File Drop and Sync Process

The MAS8 instance drops files into the SyncFiles directory. The script provided below looks for CSV files or any other specified file types (extensions can be customized in the script). If such files are present, the script moves them to the maximo_sync directory inside the pod, then runs oc rsync to sync the pod's maximo_sync directory to the maximo_sync mount on the Orchestrator server.
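The drop-and-move step can be sketched locally, with no cluster needed (inside the pod, the full script runs the same loop through oc exec). The path and file name here are stand-ins:

```shell
# Local sketch of the drop-and-move step: CSV files landing in SyncFiles
# are moved to maximo_sync before being rsynced out.
BASE=/tmp/Maximo_Files_move_demo
mkdir -p "$BASE/SyncFiles" "$BASE/maximo_sync" "$BASE/backups"
touch "$BASE/SyncFiles/items.csv"              # simulate a file dropped by MAS8

CSV_FILES=$(find "$BASE/SyncFiles" -type f -name "*.csv")
if [ -n "$CSV_FILES" ]; then
    for f in $CSV_FILES; do
        mv "$f" "$BASE/maximo_sync/"
    done
fi
ls "$BASE/maximo_sync"
```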

4. Set Up Kubeconfig and Logs Directory

  • Ensure the path points to your kubeconfig file: --kubeconfig=/path-to-the-kubeconfig/auth/kubeconfig.
  • Create a logs directory at the path /home/<LinuxUser>/SyncFiles-sync, replacing <LinuxUser> with your Linux username. Logs are created on the Orchestrator server and deleted after 7 days.
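The seven-day log expiry works on file modification times. A quick local sketch (using GNU touch -d to back-date a file; the demo path is a stand-in):

```shell
# Sketch of the 7-day log cleanup used at the end of the sync script.
LOG_DEMO=/tmp/SyncFiles_sync_logs_demo
mkdir -p "$LOG_DEMO"
touch -d "10 days ago" "$LOG_DEMO/2024_09_01.log"   # back-dated: should be deleted
touch "$LOG_DEMO/$(date +%Y_%m_%d).log"             # today's log: should survive
find "$LOG_DEMO" -type f -mtime +7 -delete
ls "$LOG_DEMO"
```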

Script to Sync Files

Below is the script for the job, copy_script.sh. Run it as root, because reading the kubeconfig file requires root permissions.

Shell
 
#!/bin/bash

###############################
# Magic script syncs the files #
###############################

INSTANCE="mas-masivt810x-manage"

# Directories Path
ROOT_DIR="/home/LinuxUser/SyncFiles-sync"
LOG_DIR="$ROOT_DIR/logs"
LOG_FILE="$LOG_DIR/$(date +"%Y_%m_%d").log"

# Directory to Sync for Printing
STOREDIR="/maximo_sync/"

# Get the maxinst pod name
POD_NAME=$(/usr/local/bin/oc get -n "$INSTANCE" --kubeconfig=/path-to-the-kubeconfig/auth/kubeconfig -l mas.ibm.com/appType=maxinstudb --no-headers=true pods -o name | awk -F "/" '{print $2}')

# Check for .csv files and log if any exist
TMP_FILES_EXIST=$(/usr/local/bin/oc exec --kubeconfig=/path-to-the-kubeconfig/auth/kubeconfig -n "$INSTANCE" "$POD_NAME" -- /bin/bash -c 'find /Maximo_Files/SyncFiles -type f -name "*.csv"')

if [ -n "$TMP_FILES_EXIST" ]; then
    echo "$(date -u) ***************** Found .csv files in the pod. *********************" >> "$LOG_FILE"
    
    # Move files to the maximo_sync directory
    /usr/local/bin/oc exec --kubeconfig=/path-to-the-kubeconfig/auth/kubeconfig -n "$INSTANCE" "$POD_NAME" -- /bin/bash -c 'for File in $(find /Maximo_Files/SyncFiles -type f ! -name ".snapshot"); do mv "$File" /Maximo_Files/maximo_sync/; done'
    echo "$(date -u) ***************** Moved the files to the label dir. *********************" >> "$LOG_FILE"
    
    # Sync files from container to local mount
    /usr/local/bin/oc rsync --kubeconfig=/path-to-the-kubeconfig/auth/kubeconfig -n "$INSTANCE" "$POD_NAME:/Maximo_Files/maximo_sync/" "$STOREDIR"
    echo "$(date -u) ***************** Copied files from container to the Mount. *********************" >> "$LOG_FILE"
    
    # Move synced files to the backups directory
    /usr/local/bin/oc exec --kubeconfig=/path-to-the-kubeconfig/auth/kubeconfig -n "$INSTANCE" "$POD_NAME" -- /bin/bash -c 'for File in $(find /Maximo_Files/maximo_sync -type f ! -name ".snapshot"); do mv "$File" /Maximo_Files/backups/; done'

else
    echo "$(date -u) ***************** No .csv files found in the pod. *********************" >> "$LOG_FILE"
    
    # Move files to backups directory
    /usr/local/bin/oc exec --kubeconfig=/path-to-the-kubeconfig/auth/kubeconfig -n "$INSTANCE" "$POD_NAME" -- /bin/bash -c 'for File in $(find /Maximo_Files/SyncFiles -type f ! -name ".snapshot"); do mv "$File" /Maximo_Files/backups/; done'
fi

sleep 2

if [ -z "$(find "$ROOT_DIR"/* -mtime +7)" ]; then
    echo "$(date -u) finds no log files older than 7 days in Logs directory" >> "$LOG_FILE"
else
    find "$ROOT_DIR"/* -mtime +7 -delete
    echo "$(date -u) deleted log files older than 7 days from Logs directory" >> "$LOG_FILE"
fi

sleep 2
echo "$(date -u) ***************** Done *********************" >> "$LOG_FILE"


Setting Up a Cron Job

To automate the execution of the script, set up a cron job on your Orchestrator server. The example below schedules the script to run every minute:

Shell
 
crontab -e

#SyncFiles: Sync files
*/1 * * * * /opt/custom_scripts/items_uploads/copy_script.sh
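Because the job runs every minute, a slow rsync could overlap with the next invocation. One optional safeguard (an assumption on our part, not part of the original setup) is to wrap the script in flock so that only one instance runs at a time:

```
#SyncFiles: Sync files, skipping the run if the previous one is still active
*/1 * * * * /usr/bin/flock -n /tmp/copy_script.lock /opt/custom_scripts/items_uploads/copy_script.sh
```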


Conclusion

By following the steps outlined in this article, we can effectively synchronize files from MAS8 to a shared drive without requiring integration with OpenShift storage. This method provides flexibility and ensures that your files are managed efficiently, even in scenarios where real-time updates are critical.


Published at DZone with permission of Wasia Maya. See the original article here.

