
How to Bring Shadow Kubernetes IT Into the Light

This article explores the challenges associated with shadow Kubernetes admins and the benefits of centralizing with the IT department.

By Kyle Hunter · Jul. 26, 2022 · Opinion

Shadow IT continues to be a challenge for IT leaders, but perhaps not in the sense that companies have seen in the past. Traditionally, shadow IT has occurred within the application stack, where it creates problems because IT systems are used without the approval, or even the knowledge, of the corporate IT department.

DevOps practices have emerged to help address these challenges and to unleash creativity and opportunity for modern software delivery teams. However, access to the cloud has made it easier for autonomous teams to set up their own toolsets. As a result, the shadow IT problem now manifests itself in a new way: in the tooling architecture.

The explosion of container-based applications has made Kubernetes a vital resource for DevOps teams. But its widespread adoption has led to the rapid creation of Kubernetes clusters with little regard for security and cost, either because users don’t understand the complex Kubernetes ecosystem or because they are simply moving too fast to meet deadlines.

This article explores the challenges associated with shadow Kubernetes admins and the benefits of centralizing with the IT department.

What Are Shadow Kubernetes Admins?

A shadow Kubernetes admin is a user who doesn’t wait for their IT department to provision Kubernetes clusters and instead turns to a cloud service provider to spin up clusters at will. The freedom and flexibility of the cloud, however, bring significant business risks that IT leaders cannot ignore.

A shadow Kubernetes admin account left unattended can lead to unexpected privilege grants when new roles are created. Role bindings reference roles by name and are not cleaned up when the role they point to is deleted, so if a new role is later created under the same name, the stale binding silently grants its subjects the new role’s permissions. And with every new user, group, role, and permission, a lack of proper visibility and control increases the risk of human error, mismanagement of user privileges, and malicious attacks.
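
To make the risk concrete, the following is a minimal audit sketch using the official Kubernetes Python client (assuming a working kubeconfig) that flags role bindings whose referenced Role no longer exists, which are exactly the stale bindings that would spring back to life if a role is later recreated under the same name.

```python
# A rough audit sketch: list RoleBindings whose referenced Role is missing.
# Assumes the official `kubernetes` Python client and a working kubeconfig.
from kubernetes import client, config

def find_dangling_role_bindings():
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    rbac = client.RbacAuthorizationV1Api()

    dangling = []
    for rb in rbac.list_role_binding_for_all_namespaces().items:
        # Only check namespaced Roles here; ClusterRoles would need a similar pass.
        if rb.role_ref.kind != "Role":
            continue
        ns = rb.metadata.namespace
        existing_roles = {r.metadata.name for r in rbac.list_namespaced_role(ns).items}
        if rb.role_ref.name not in existing_roles:
            dangling.append((ns, rb.metadata.name, rb.role_ref.name))

    for ns, binding, role in dangling:
        print(f"{ns}/{binding} refers to missing Role '{role}'")
    return dangling

if __name__ == "__main__":
    find_dangling_role_bindings()
```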

Reining in Shadow Kubernetes Admins

To gain control of shadow Kubernetes admins, we first must understand the challenges IT teams face.

Limited Resources

First, it can be very difficult to set up role-based access control (RBAC) for Kubernetes. Natively, Kubernetes supports a number of role types and binding options, but these are hard to manage and track. As a result, Kubernetes admins must set up everything manually, cluster by cluster, to give each user exactly the level of access they need within a cluster.
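
As a rough sketch of what that per-cluster work involves (again using the Kubernetes Python client; the namespace and permissions below are illustrative), even a single read-only role has to be created, and then bound to users, in every cluster and namespace where a team works:

```python
# A minimal sketch of one piece of manual RBAC setup: a read-only role for pods.
# The namespace and rule set are illustrative; every cluster needs its own copy.
from kubernetes import client, config

config.load_kube_config()  # repeat per cluster/context in a multicluster setup
rbac = client.RbacAuthorizationV1Api()

read_only_role = client.V1Role(
    metadata=client.V1ObjectMeta(name="pod-reader", namespace="team-a"),
    rules=[
        client.V1PolicyRule(
            api_groups=[""],            # core API group
            resources=["pods", "pods/log"],
            verbs=["get", "list", "watch"],
        )
    ],
)
rbac.create_namespaced_role(namespace="team-a", body=read_only_role)
# A RoleBinding tying this role to real users or groups still has to be
# created separately, per namespace, per cluster.
```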

Since Kubernetes is still a relatively new technology, there is an inherent talent gap in finding staff with the skills and experience needed to administer and manage these environments properly. Now imagine the complexity of trying to scale and operate a distributed, multicluster, multi-cloud environment with that level of manual overhead. Not only is the process labor-intensive, it is also ripe for mistakes.

Cloud Flexibility and On-Demand Services

Running container-based applications in production goes well beyond Kubernetes. For example, IT operations teams often require additional services for tracing, logging, storage, security, and networking. They may also require different management tools for Kubernetes distributions and compute instances across public clouds, on-premises environments, hybrid architectures, and the edge.

Integrating these tools and services for a specific Kubernetes cluster requires that each tool or service is configured according to that cluster’s use case. The requirements and budgets for each cluster are likely to vary significantly, meaning that updating or creating a new cluster configuration will differ based on the cluster and the environment. As Kubernetes adoption matures and expands, there will be a direct conflict between admins, who want to lessen the growing complexity of cluster management, and application teams, who seek to tailor Kubernetes infrastructure to meet their specific needs.

What magnifies these challenges even further is the pressure of meeting internal project deadlines — and the perceived need to use more cloud-based services to get the work done on time and within budget. If jobs are on the line, people will inevitably do whatever they feel they must do to get the work done, even if it means using tools and methods from outside the centralized IT system.

Benefits of Centralizing With Platform Teams

While shadow Kubernetes IT is a growing challenge that IT must control, hindering the productivity of development and operations teams is a nonstarter. Through a centralized platform team model, however, IT can manage and enforce its own standards and policies for Kubernetes environments and prevent shadow admins altogether. IT can allow multiple teams to run applications on a common, shared infrastructure that is managed, secured, and governed by the enterprise platform team. Doing so can provide the following benefits:

Standardization

Define and maintain preapproved cluster and application configurations that can be reused across infrastructure and tooling architecture. Doing so reduces the complexity of manual cluster management, and centrally standardizing these configurations enables development and operations teams to automate workflows and accelerate delivery.
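
As a hypothetical illustration of what "preapproved configuration" can mean in code (the field names and values below are invented for the example, not taken from any specific platform), a platform team might publish golden defaults that every deployment request is merged against:

```python
# A hypothetical sketch of centrally maintained, preapproved defaults that a
# platform team might publish for reuse across teams. Values are illustrative.
GOLDEN_DEFAULTS = {
    "labels": {"managed-by": "platform-team", "cost-center": "unassigned"},
    "resources": {
        "requests": {"cpu": "100m", "memory": "128Mi"},
        "limits": {"cpu": "500m", "memory": "512Mi"},
    },
    "replicas": 2,
}

def apply_defaults(requested: dict) -> dict:
    """Merge a team's deployment request over the preapproved defaults.

    Team-supplied values win for leaf keys, but required labels are always kept.
    """
    merged = {**GOLDEN_DEFAULTS, **requested}
    merged["labels"] = {**GOLDEN_DEFAULTS["labels"], **requested.get("labels", {})}
    return merged

# Example: a team only specifies what differs from the standard.
print(apply_defaults({"replicas": 3, "labels": {"team": "payments"}}))
```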

Repeatability

Create and provide multiple pipelines with predefined workflows and approvals across Kubernetes clusters and application deployments to create consistency through a self-service model throughout the organization. Clearly defined and repeatable processes help scale Kubernetes environments and optimize resources for project deliverables.

Security

Enable downstream configurable access control to clusters and workloads so developers and operations can connect users or groups appropriately. Integrating preexisting security practices and other centralized systems with cluster and application lifecycle management operations becomes the norm. RBAC techniques and user-based audit logs across clusters and environments help manage authorization and prevent errors that lead to attacks.

Governance

Define and maintain preapproved cluster policies, as well as backup and recovery policies, to be enforced across the rest of the organization. Enforce best practices and reject noncompliant requests against your Kubernetes infrastructure and applications in order to comply with corporate policies and industry regulations such as HIPAA and PCI DSS.
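
In practice, this kind of gatekeeping is usually enforced with admission controllers or a policy engine; purely as an illustration of the rule logic (the field names and approved values below are assumptions for the example), a platform team's validation might look like this:

```python
# An illustrative sketch of governance checks a platform team might enforce
# before approving a cluster or workload request. Field names are assumed for
# the example; real setups typically use admission controllers or policy engines.
REQUIRED_LABELS = {"cost-center", "data-classification"}
APPROVED_REGIONS = {"us-east-1", "eu-west-1"}

def validate_request(request: dict) -> list[str]:
    """Return a list of policy violations; an empty list means the request passes."""
    violations = []
    missing = REQUIRED_LABELS - set(request.get("labels", {}))
    if missing:
        violations.append(f"missing required labels: {sorted(missing)}")
    if request.get("region") not in APPROVED_REGIONS:
        violations.append(f"region {request.get('region')!r} is not preapproved")
    if not request.get("backup", {}).get("enabled", False):
        violations.append("backup and recovery must be enabled")
    return violations

# Example: an ad hoc cluster request that a shadow admin might otherwise spin up.
print(validate_request({"region": "ap-south-2", "labels": {"team": "payments"}}))
```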

Additional Considerations

Enabling a centralized view into all clusters across any environment, including on-premises and public cloud environments such as AWS, Azure, GCP, and OCI, can also help teams address issues faster. By surfacing issues through one unified source of truth, individuals and teams can collectively pinpoint the root cause of cluster health or performance problems. Gaining real-time insight into cluster health and performance helps teams optimize Kubernetes and stay within budget.
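
As a minimal sketch of such a centralized view (using the Kubernetes Python client and assuming every cluster is already registered as a context in the local kubeconfig), a platform team could poll node readiness across all clusters from one place:

```python
# A minimal sketch of a cross-cluster health view, assuming every cluster is
# registered as a context in the local kubeconfig. Uses the official
# `kubernetes` Python client.
from kubernetes import client, config

def cluster_health_summary():
    contexts, _ = config.list_kube_config_contexts()
    for ctx in contexts:
        name = ctx["name"]
        api = client.CoreV1Api(api_client=config.new_client_from_config(context=name))
        nodes = api.list_node().items
        ready = sum(
            1
            for node in nodes
            for cond in (node.status.conditions or [])
            if cond.type == "Ready" and cond.status == "True"
        )
        print(f"{name}: {ready}/{len(nodes)} nodes Ready")

if __name__ == "__main__":
    cluster_health_summary()
```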

Due to the open nature of Kubernetes, it is very easy to make mistakes on clusters that lead to security risks and deployment issues, and security breaches are likely to happen over time. There is a constant need for maintenance, patching, and upgrades in any type of environment. Recently, Apiiro uncovered a serious vulnerability that gave attackers the opportunity to access sensitive information, such as secrets, passwords, and API keys. A centralized platform can help the organization prepare for incidents proactively by maintaining a high level of control, security posture, policy compliance, and auditability across the organization.

Kubernetes, while a powerful technology, can bring many operational challenges to enterprise platform teams. With the menace of shadow Kubernetes IT growing, platform teams have a challenging road ahead to deliver a solution that enables developer productivity, centralizes governance and policy management, and reduces operational overhead.

Thankfully, there are SaaS platform solutions available that allow platform teams to focus primarily on delivering modern applications, not on managing and operating Kubernetes.


Published at DZone with permission of Kyle Hunter. See the original article here.

Opinions expressed by DZone contributors are their own.
