
Beyond Autoscaling: Units of Elasticity

By Ranjib Dey · Jun. 06, 12 · Performance Zone


Most resources on infrastructure elasticity cite autoscaling as the canonical example. That is a fine starting point, but it leads people to treat the individual node or server as the basic unit of elasticity, to be added or removed to address scaling issues or improve resource usage. That is not the whole story. Depending on the technology you use, you can control resources at a much finer granularity. With OpenVZ or LXC you can control memory, CPU usage, the number of processes, and many other kernel parameters. In the LXC world this is done via cgroups (a relatively recent kernel feature), while in the OpenVZ world you use user beancounters. You can change these parameters without a system restart. On the host machine, the equivalent knobs are exposed via sysctl.
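As a minimal sketch of what "changing these parameters without a restart" looks like, the helper below writes new limits into a container's cgroup directory (the v1 memory and pids controller file names are real; the function name, argument names, and directory layout you point it at are this example's own assumptions):

```python
import os

def set_container_limits(cgroup_dir, mem_bytes=None, max_pids=None):
    """Write new limits into a container's cgroup directory.

    cgroup_dir is the container's cgroup path, e.g.
    /sys/fs/cgroup/memory/<container> for the cgroup v1 memory
    controller. The kernel picks the new values up immediately --
    no restart of the container is needed.
    """
    if mem_bytes is not None:
        # cgroup v1 memory controller: hard memory limit in bytes.
        with open(os.path.join(cgroup_dir, "memory.limit_in_bytes"), "w") as f:
            f.write(str(mem_bytes))
    if max_pids is not None:
        # cgroup v1 pids controller: cap on the number of tasks.
        with open(os.path.join(cgroup_dir, "pids.max"), "w") as f:
            f.write(str(max_pids))
```

On OpenVZ, `vzctl set <ctid> --numproc <n> --save` plays the analogous role against the beancounters, and `sysctl -w` is the host-level counterpart.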

What this means is that you can profile your app in staging against a given stress load as part of CI, without a dedicated performance-testing step, and then feed the results back into your IaaS tooling to determine appropriate values for those parameters, with some pessimistic settings as upper limits. Any performance bug (in your app, your web server, or any other component deployed in the container) will then most likely surface as a resource leak, and you can catch it, alert on it, or even fail the build.
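One way to turn profiled usage into a limit, sketched under this article's assumptions (the function name, the tolerance margin, and the "peak plus margin, clamped to a pessimistic cap" policy are all illustrative choices, not a prescribed formula):

```python
def derive_limit(samples, tolerance=0.25, hard_cap=None):
    """Turn resource-usage samples collected during CI stress runs
    into a container limit: the observed peak plus a tolerance
    margin, clamped to a pessimistic hard cap if one is given."""
    peak = max(samples)
    limit = int(peak * (1 + tolerance))
    if hard_cap is not None:
        limit = min(limit, hard_cap)
    return limit
```

For example, `derive_limit([120, 140, 133], tolerance=0.25)` yields 175; any run that later pushes past that value is a candidate leak worth alerting on.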

A more detailed example: monitor the number of processes via Nagios, feed the values into Graphite (with Graphios sitting in between), and apply a moving-average function (using Graphite) to them. After the first ten runs, invoke a system that sets the total allowed processes to the moving average plus some tolerance. Now, if the process count goes above that threshold you get an alert (in OpenVZ, check the failcnt counters), and another Nagios event handler resets the threshold to a more suitable value.
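The threshold logic in that loop can be sketched as follows. In the real pipeline Graphite's movingAverage() would compute the average server-side and a Nagios event handler would apply the new limit; the class below just models the bookkeeping, and its name, window size, and tolerance are this example's own assumptions:

```python
from collections import deque

class ProcessCountThreshold:
    """Track process counts and maintain an alert threshold:
    a moving average over the last `window` samples plus a
    tolerance, mirroring a Graphite movingAverage() series."""

    def __init__(self, window=10, tolerance=5):
        self.samples = deque(maxlen=window)
        self.tolerance = tolerance
        self.threshold = None  # undefined until the window fills

    def record(self, count):
        """Record a sample; return True when the count breaches the
        current threshold, i.e. when an event handler should fire
        (the analogue of an OpenVZ failcnt increment)."""
        self.samples.append(count)
        if len(self.samples) == self.samples.maxlen:
            avg = sum(self.samples) / len(self.samples)
            self.threshold = avg + self.tolerance
        return self.threshold is not None and count > self.threshold
```

After ten steady runs at 100 processes the threshold settles at 105; a spike to 110 trips it, and subsequent normal samples both pass cleanly and nudge the threshold back down.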

Since you can do this for memory, open file descriptors, and JVM-based systems (using JMX and MBean counters), you can actually do preemptive performance testing. From what I have seen, performance testing is always the last thing on the to-do list. In principle we preach that it should go hand in hand with development, but in practice the business demands features over performance unless there is a show-stopper. That need not be the case any more. Even if we catch only a few of these bugs early, it is worth it.

Let the systems emerge.


Published at DZone with permission of Ranjib Dey, DZone MVB. See the original article here.

