Saving Hundreds of Hours With Google Compute Engine Per-Minute Billing

The costs of using compute time on the cloud can be significant. Here's how to save money on GCE.

by Omer Dawelbeit

Apr. 06, 16 · Opinion

I've been working on the ECARF research project for the last few years, addressing some of the Semantic Web's challenges, in particular processing large RDF datasets using cloud computing. The project started using the Google Cloud Platform (GCP) – namely Compute Engine, Cloud Storage and BigQuery – two years ago, and now that the initial phase of the project is complete, I thought I would reflect on the decision to use GCP. To summarise: Google Compute Engine (GCE) per-minute billing saved us 697 hours, the equivalent of 29 days, a full month of VM time! Read on for details of how these figures were calculated and for my reflections on two years of GCP usage: starting 1,086 VMs programmatically through code and completing hundreds of jobs on a 24,538-line-of-code project. Note: I will occasionally abbreviate Google Cloud Platform as GCP and Google Compute Engine as GCE.

Per-Minute vs. Per-Hour Cloud Billing

The ECARF project's cloud workload is short-lived and irregular in nature. For example, I might start 16 VMs with 4 cores each for 20 minutes, 1 VM with 4 cores for 15 minutes, and another VM with 2 cores for 1 hour and 10 minutes (a total of 70 cores) to process various jobs; once the work is done, these VMs get deleted. Let's do some math (ignoring pricing tiers for now). On per-minute billing I would pay for 16 x 20 + 1 x 15 + 1 x 70 = 405 minutes, i.e. 6 hours and 45 minutes of usage. On per-hour billing, however, you would pay a full hour for each of those 16 VMs (tough!), which is 16 hours, plus an hour for the 15-minute VM and two hours for the 70-minute VM. The total in this case is 19 hours, that is, 12 hours and 15 minutes of waste!
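
To make the arithmetic concrete, here is a minimal sketch (my own illustration, not part of the project's code) that computes the billable time for this example workload under both models:

// Sketch: compare per-minute vs. per-hour billable time for the example
// workload above. Durations are in minutes; per-hour billing rounds each
// VM's runtime up to the next full hour.
public class BillingComparison {

    public static void main(String[] args) {
        // 16 VMs for 20 minutes each, 1 VM for 15 minutes, 1 VM for 70 minutes
        int[] vmMinutes = new int[18];
        for (int i = 0; i < 16; i++) {
            vmMinutes[i] = 20;
        }
        vmMinutes[16] = 15;
        vmMinutes[17] = 70;

        int perMinuteTotal = 0; // billable minutes under per-minute billing
        int perHourTotal = 0;   // billable minutes under per-hour billing

        for (int minutes : vmMinutes) {
            perMinuteTotal += minutes;
            // round up to the next full 60-minute block
            perHourTotal += ((minutes + 59) / 60) * 60;
        }

        System.out.printf("Per-minute billing: %d minutes (%.2f hours)%n",
                perMinuteTotal, perMinuteTotal / 60.0);
        System.out.printf("Per-hour billing:   %d minutes (%.2f hours)%n",
                perHourTotal, perHourTotal / 60.0);
        // Prints 405 minutes (6.75 hours) vs. 1140 minutes (19.00 hours)
    }
}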

So the first thing that came to mind was to evaluate how much the project has saved through Google Compute Engine (GCE) per-minute billing, compared to using a cloud provider that charges per hour. The figures are astonishing! In the last year and a half we have started 1,086 virtual machines, the majority for less than an hour. The total billable hours with GCE's per-minute billing are 1,118; if we had been using a cloud provider that charges per hour, the total billable hours would have been 1,815. This means per-minute billing saved us 697 hours, the equivalent of 29 days: a full month of VM time! And this is a one-developer project. Imagine an enterprise with hundreds of projects and developers: the wasted cost with per-hour billing could well amount to thousands of dollars.

Appreciating the Saved Hours

I have been interested in cloud computing since 2008 and started using Amazon Web Services back in 2009. So when the prototype development phase started on the ECARF project two years ago, I didn't have any experience with Google Compute Engine (GCE); my only experience was with Google App Engine, and I thought to go with AWS EC2 – after all, it was something I was familiar with. However, after watching a few announcements of new GCP features at Google I/O and Cloud Platform Next, I decided there were too many cool features to ignore, one in particular being the ability to stream 100,000 records per second into Google BigQuery. So I tried GCP for the prototype development, and I'm glad I did. I spent a couple of hours reading the GCE documentation and playing with code samples, and in no time I had created and destroyed my first few VMs using the API. Things were looking good and promising, so I never looked back.

Luckily, I have had the Compute Engine usage export enabled from day one, so all of the project's Compute Engine usage CSV files since the start are available in a Cloud Storage bucket. These include daily usage files and aggregated monthly usage files, so when analysing them we use either the daily or the monthly ones; otherwise everything would get counted twice.

To see how many hours we have saved with GCE per-minute billing, I've written a small utility to parse these CSV files and aggregate the usage. Additionally, for VM usage, the tool rounds GCE per-minute usage up to per-hour usage, as shown in the code sketch below. By the way, the tool is available on GitHub if you would like to appreciate your savings as well. So how does it work? Each time a VM starts, the usage files will contain a few entries like these:

Report Date | Measurement ID                                                       | Quantity | Unit    | Resource URI                                                 | Resource ID | Location
10/03/2016  | com.google.cloud/services/compute-engine/networkinternetingressnana | 3,610    | bytes   | https://www.googleapis.com…/cloudex-processor-1457597066564 | ..          | us-central1-a
10/03/2016  | com.google.cloud/services/compute-engine/vmimagen1highmem_4         | 1,800    | seconds | https://www.googleapis.com…/cloudex-processor-1457597066564 | …..         | us-central1-a

The entry we are interested in is the one containing the vmimagen1highmem_4 measurement ID, with its quantity and unit in seconds. In the second entry above, an n1-highmem-4 VM was started and ran for 30 minutes (1,800 seconds); with a cloud provider that charges per hour, that entry is immediately rounded up to an hour (3,600 seconds). To generate the per-hour billing values, the tool rounds anything below 3,600 seconds up to a full hour. Additionally, if an entry is more than 3,600 seconds – say 4,600 seconds – we check whether the remainder of 4,600 divided by 3,600 is larger than zero; if it is, we divide the entry by 3,600, round up the result and multiply it by 3,600, i.e. 4,600 / 3,600 ≈ 1.28, rounded up to 2, so the total billable time under per-hour charging is 2 x 3,600 = 7,200 seconds. This is done for each entry individually.
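
The actual utility is on GitHub; the rule itself boils down to a ceiling operation on 3,600-second blocks. Here is a minimal sketch of that logic (an illustration based on the description above, not the tool's exact code):

// Sketch of the per-hour round-up rule described above: any usage entry,
// recorded in seconds, is rounded up to the next full hour (3,600 seconds).
public final class PerHourRounding {

    private static final long SECONDS_PER_HOUR = 3_600L;

    /** Rounds a usage entry in seconds up to the next full hour. */
    static long roundUpToHour(long usageSeconds) {
        if (usageSeconds % SECONDS_PER_HOUR == 0) {
            return usageSeconds; // already a whole number of hours
        }
        long hours = (usageSeconds / SECONDS_PER_HOUR) + 1; // round up
        return hours * SECONDS_PER_HOUR;
    }

    public static void main(String[] args) {
        System.out.println(roundUpToHour(1_800)); // 3600 (30 min -> 1 hour)
        System.out.println(roundUpToHour(4_600)); // 7200 (~1.28 h -> 2 hours)
        System.out.println(roundUpToHour(7_200)); // 7200 (exactly 2 hours)
    }
}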

Here are the generated results after exporting them into Google Sheets and converting the seconds to hours:

VM Type              | Usage (hours) | Per-Hour Billing (hours)
vmimagecustomcore    | 11.47         | 12
vmimagef1micro       | 27.05         | 32
vmimageg1small       | 87.50         | 104
vmimagen1highcpu_16  | 3.42          | 5
vmimagen1highcpu_2   | 1.57          | 2
vmimagen1highcpu_4   | 1.47          | 2
vmimagen1highcpu_8   | 2.52          | 4
vmimagen1highmem_2   | 35.47         | 137
vmimagen1highmem_4   | 262.53        | 446
vmimagen1highmem_8   | 1.02          | 3
vmimagen1standard_1  | 49.00         | 65
vmimagen1standard_16 | 0.58          | 3
vmimagen1standard_2  | 625.43        | 977
vmimagen1standard_4  | 4.33          | 12
vmimagen1standard_8  | 4.65          | 11
Total                | 1,118         | 1,815

Notice the n1-standard-16 VM, which we started three times: the total usage amounts to just over half an hour on GCE, while the equivalent is 3 hours on per-hour billing! In this case we started the n1-standard-16 for 600, 720 and 780 seconds respectively, and on per-hour billing that is an hour each.

On a separate note, Google Cloud Platform billing export is currently in preview. I've had a brief look at the exported CSV files, and they are the best of both worlds: they contain the detailed usage included in the usage export, but with cost values, and not just for GCE but for the whole of the Google Cloud Platform.

The Modern Cloud

If you asked me to describe the Google Cloud Platform (GCP) in one word, I would use the word 'modern', because GCP is a modern cloud that challenges the status quo of cloud computing. If you asked me for a different word, you would get 'cool'. My argument for why I believe GCP is a modern cloud can be summarised as follows, and I will explain each point in turn:

  • Highly elastic cloud
  • Flexible billing models
  • Clear and simple Infrastructure as a Service (IaaS) offerings
  • Robust and easy authentication mechanisms
  • Unified APIs and developer libraries

Back in 2008, cloud computing was a revolution compared to dedicated physical server hosting. Back then, if you hired a dedicated server you had to commit to a full month's billing; then cloud computing arrived with the revolution of hourly billing. However, cloud computing in 2016 shouldn't be like cloud computing in 2008: hardware has got cheaper (following Moore's law) and hypervisors have improved considerably. Now the revolution should be the move from hourly to sub-hourly billing.

Highly Elastic Cloud

There is currently a great deal of research into improving hypervisors and cloud elasticity, especially vertical elasticity. I won't be surprised if in the near future we are able to dynamically scale the memory or CPU cores of a VM up or down whilst it is running. A modern cloud should be more elastic than one that treats VMs like physical dedicated servers: I should be able to define my own CPU and RAM requirements rather than having to choose from a predefined list (within reason, obviously), and I should be able to freely change the metadata of my VM at any time, not just at startup (see the sketch below). These are modern features that the ECARF project couldn't do without, and both are offered by GCE. Such features allow us to build cool, modern systems that combine elasticity with machine learning techniques to offer dynamic computing, for example.
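
As an example of the second point, a running VM's metadata can be updated through the Compute Engine API. Here is a minimal sketch of mine using the Java client library (google-api-services-compute); the project ID, zone, instance name and metadata key are placeholders:

import java.util.Collections;

import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.compute.Compute;
import com.google.api.services.compute.model.Instance;
import com.google.api.services.compute.model.Metadata;

// Sketch: update the metadata of a running VM through the Compute Engine API.
public class UpdateVmMetadata {

    public static void main(String[] args) throws Exception {
        GoogleCredential credential = GoogleCredential.getApplicationDefault();
        if (credential.createScopedRequired()) {
            credential = credential.createScoped(Collections.singletonList(
                    "https://www.googleapis.com/auth/compute"));
        }
        Compute compute = new Compute.Builder(
                GoogleNetHttpTransport.newTrustedTransport(),
                JacksonFactory.getDefaultInstance(), credential)
                .setApplicationName("metadata-example").build();

        // Metadata updates are guarded by a fingerprint, so fetch the
        // instance first to obtain the current one.
        Instance vm = compute.instances()
                .get("my-project", "us-central1-a", "my-vm").execute();

        Metadata metadata = vm.getMetadata();
        metadata.setItems(Collections.singletonList(
                new Metadata.Items().setKey("task").setValue("process-chunk-42")));

        compute.instances()
                .setMetadata("my-project", "us-central1-a", "my-vm", metadata)
                .execute();
    }
}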

Flexible Billing Models

The profile of the ECARF project's workload isn't unique; many other areas have a similar profile. Continuous integration environments that acquire VMs to build code and run tests might not run them for a full hour. Another is prototyping, or the research and development of cloud-based systems, which might start and terminate VMs on a regular basis.

GCE ticks the box by offering modern and flexible billing models for both irregular and regular, short-term and long-term workload profiles, through per-minute billing, sustained usage discounts and preemptible VMs. And the 10-minute minimum charge is only fair, because creating a VM involves many steps, such as scheduling and reserving capacity on a physical host, and those come with a cost.

Not only this, but GCP goes further: if any of the predefined machine types (CPU/RAM combinations) is more than what you need, you can launch your own custom VMs that best fit your workload, achieving savings at multiple levels.

The Pain With Upfront Commitments

Since I started using EC2 back in 2009, I've spent $1,000+ on reserved capacity, and there are two pain factors I've personally experienced. First, when you commit for 3 years, you don't know what will happen 1 or 2 years down the line. You might have to wind down a project after a year and a half of usage, and then you've got all that reserved capacity to get rid of. At least now you can sell that capacity (at a profit or a loss, I don't know); back in the day it was just wasted. The second pain: imagine you sign up for 3 years of reserved capacity at an hourly price of $0.02, compared to a standard price of $0.04 at the time. Guess what: two years later prices get discounted and the standard VM price becomes $0.02! But wait, I've bought reserved capacity and am still stuck on $0.02, and the discounts are not passed on to me. I was once on a reserved hourly rate that was the same as the standard hourly rate, because things had moved on. I've not looked recently, and things might have changed, but upfront commitment is still a big burden. In contrast, discounts based on sustained usage are much better.

Clear and Simple IaaS Offerings

Over the past couple of years, I've never struggled to grasp or understand any of the services GCE offers. The predefined machine types are also very clear – shared-core, standard, high-memory and high-CPU – and I know them all by heart now, with their memory configurations and, to some extent, their pricing. I clearly understand the difference between the IOPS of a standard disk and an SSD disk, and how the chosen size impacts their performance. There is no information overload: disk pricing and other information is kept separate, and it's simple and easy to understand.

Now compare this with EC2 VMs: it's overwhelming. Current-generation and previous-generation VMs; disk information and network configurations all lumped together with VM configurations; paragraphs and paragraphs of different price configurations. For me, it was painful just trying to understand which VM type was suitable for my needs. My first encounter with SSD configurations and maximum provisioned IOPS for AWS RDS was one of pain. Instead of spending time working on my project, I found myself spending valuable time trying to select which IaaS offerings best fit my needs – things like trying to figure out whether 'low to moderate' or 'high' network connectivity was the right choice! No wonder I still hear many say they find cloud offerings confusing; I think this is no more with GCP.

Robust and Easy Authentication Mechanisms

OAuth2 makes it really easy to utilise the GCP APIs and is easier for developers to use. I've been through authentication keys and certificates hell: which ones are valid, which ones have expired, oh wait, someone committed them to GitHub by mistake! My experience with OAuth2 is that it's much cleaner: you authorise resources to act on your behalf, no one keeps any keys or passwords, the resource can acquire an OAuth token when needed, and you can revoke that access at any time. Now with Cloud Identity and Access Management, there is a robust authentication and access management framework in place that works well both for the enterprise and for developers.

One use case that made our lives easier on the ECARF project is that we treat VMs as ephemeral: they get started to do some work and then get terminated. I don't want to worry about copying keys or certificates over to a VM so that it can access the rest of the cloud provider's APIs. Shouldn't it be enough for me to delegate my authority to the VM at startup, and never have to worry about startup scripts that copy security keys or certificates and then some mechanism, environment variables or similar, to tell my application their whereabouts? I could package these with my application, but the application is either checked out from GitHub (a public repo) or packaged in the VM image, in which case key and certificate management becomes hell. Through service accounts, we are able to specify which access the VM gets; once the VM is started, it contacts the metadata server to obtain an OAuth token, which it can use to access the allowed APIs.
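
Here is a minimal sketch of that last step: fetching an access token from the metadata server inside a GCE VM. The endpoint and required header follow the standard v1 metadata conventions; the JSON parsing is deliberately simplified:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: obtain an OAuth2 access token for the VM's service account from
// the GCE metadata server. No keys or certificates are baked into the image;
// the token is only available from inside the VM.
public class MetadataToken {

    public static void main(String[] args) throws Exception {
        URL url = new URL("http://metadata.google.internal/computeMetadata/v1/"
                + "instance/service-accounts/default/token");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Required header; the metadata server rejects requests without it
        conn.setRequestProperty("Metadata-Flavor", "Google");

        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line);
            }
            // Response is JSON: {"access_token":"...","expires_in":...,...}.
            // A real client would parse it with a JSON library and cache the
            // token until it expires.
            System.out.println(body);
        }
    }
}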

Unified APIs and Developer Libraries

Google offers unified client libraries for all of their APIs, along with a mechanism to discover and explore those APIs, making it easier for developers to find them, play with them and use them. Their usage patterns are the same: if you have used one, you can easily write code against another. Combined with OAuth2 support, this gives developers a way to invoke these APIs in the browser using the API Explorer without writing a single line of code, so I can plan and design my app accordingly; if there are any caveats, I'm able to discover them early, rather than after writing the code.
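
To show what "the usage patterns are the same" means in practice, here is a minimal sketch of mine (the project ID and zone are placeholders) that lists Compute Engine instances; note that the client is built and the request executed in exactly the same way as in the metadata update sketch earlier:

import java.util.Collections;

import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.compute.Compute;
import com.google.api.services.compute.model.Instance;
import com.google.api.services.compute.model.InstanceList;

// Sketch: list Compute Engine instances; the same Builder/execute()
// pattern carries across Google's API client libraries.
public class ListInstances {

    public static void main(String[] args) throws Exception {
        GoogleCredential credential = GoogleCredential.getApplicationDefault();
        if (credential.createScopedRequired()) {
            credential = credential.createScoped(Collections.singletonList(
                    "https://www.googleapis.com/auth/compute.readonly"));
        }
        Compute compute = new Compute.Builder(
                GoogleNetHttpTransport.newTrustedTransport(),
                JacksonFactory.getDefaultInstance(), credential)
                .setApplicationName("list-example").build();

        InstanceList result = compute.instances()
                .list("my-project", "us-central1-a").execute();

        if (result.getItems() != null) {
            for (Instance vm : result.getItems()) {
                System.out.println(vm.getName() + " -> " + vm.getStatus());
            }
        }
    }
}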

Additionally, Google's APIs use JSON rather than XML; others have already assessed the transport cost of XML vs. JSON, so there is no need to repeat it here. When dealing with cloud APIs and sending or retrieving large amounts of data, every second spent serialising, compressing and transporting that data counts, and JSON is less verbose and achieves a better compression ratio with less CPU usage. Forget JSON, though, and welcome the general RPC framework (gRPC): the basis for all of Google's future APIs, utilising HTTP/2 and Protocol Buffers. Now this is a step into the future.

Summary

To wrap up, we have saved a month of VM time by using Google Compute Engine with its per-minute billing. In this article we have only looked at the saved hours and haven't investigated the actual cost savings compared to per-hour billing, which I will leave for a future article. Additionally, by using the modern cloud features offered by the Google Cloud Platform, we were able to save time and focus on what we needed to build and achieve.

All opinions and views expressed here are mine and are based on my own experience.

Follow me @omerio
