Matt Butcher

Head of Cloud Services at Revolv, Inc

Boulder, US

Joined Mar 2008

About

Matt is a software engineer and web developer. He is the author of several books on programming and technology and a frequent open source contributor. He holds a Ph.D. in philosophy and has taught both Philosophy and Computer Science at Loyola University Chicago. Matt is the lead cloud engineer at Revolv.

Stats

Reputation: 226
Pageviews: 262.7K
Articles: 6
Comments: 2

Articles

REST Without JSON: The Future of IoT Protocols
The JSON/HTTP model may not be the best fit for IoT technologies.
Updated September 3, 2019
· 69,419 Views · 25 Likes
Writing a Kubernetes CRD Controller in Rust
Let's ditch Go for a little and see what we can conjure with a little Rust.
August 8, 2019
· 15,724 Views · 2 Likes
The Go Developer's Quickstart Guide to Rust
You've been writing Go. But you're feeling an urge to test the waters with Rust. This is a guide to make this switch easy.
June 1, 2018
· 11,455 Views · 3 Likes
How to Allow Only HTTPS on an S3 Bucket
It is possible to disable HTTP access on an S3 bucket, limiting S3 traffic to HTTPS requests only. The documentation is scattered around the Amazon AWS documentation, but the solution is actually straightforward: all you need to do to block HTTP traffic on an S3 bucket is add a Condition to your bucket's policy. AWS supports a global condition key for verifying SSL, so you can add a condition like this:

    "Condition": {
        "Bool": {
            "aws:SecureTransport": "true"
        }
    }

Here's a complete example:

    {
        "Version": "2008-10-17",
        "Id": "some_policy",
        "Statement": [
            {
                "Sid": "AddPerm",
                "Effect": "Allow",
                "Principal": {
                    "AWS": "*"
                },
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::my_bucket/*",
                "Condition": {
                    "Bool": {
                        "aws:SecureTransport": "true"
                    }
                }
            }
        ]
    }

Now accessing the contents of my_bucket over HTTP will produce a 403 error, while using HTTPS will work fine.
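As a minimal sketch of how one might apply and verify this policy from the command line (assuming the AWS CLI is installed and configured with credentials that may edit bucket policies; my_bucket and some_key are placeholders):

    # Save the policy above as policy.json, then attach it to the bucket.
    $ aws s3api put-bucket-policy --bucket my_bucket --policy file://policy.json

    # Verify: plain HTTP should now return 403, while HTTPS succeeds.
    $ curl -s -o /dev/null -w "%{http_code}\n" http://my_bucket.s3.amazonaws.com/some_key
    $ curl -s -o /dev/null -w "%{http_code}\n" https://my_bucket.s3.amazonaws.com/some_key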
October 8, 2014
· 14,904 Views · 0 Likes
Goose for Database Migrations
I've been hunting for good database tools to perform that class of tasks that we all need but end up re-implementing over and over again. One such task is database migrations. I've been experimenting with Goose to provide general-purpose database migration support.

What Is Goose?

Goose is a general-purpose database migration manager. The idea is simple:

  • You provide SQL schema files that follow a particular naming convention.
  • You provide a simple dbconf.yml file that tells Goose how to connect to your various databases.
  • Goose provides you simple tools to upgrade (goose up), check on (goose status), and even revert (goose down) schema changes.

Goose does this by adding one more table inside your database. This table tracks which schema changes it has made. Based on its history, it can tell which schema updates need to be run and which have already been run. While Goose is written in Go (golang), it is agnostic about what language your app is written in.

Getting Started

I got Goose up and running in less than 30 minutes, and you can probably do it faster. I already have an empty Postgres database called foo, but it has no tables. I have an existing codebase, too (MyProject). Here is the process for configuring Goose to manage the database schema.

First, I create the db/ directory, which will house all of the Goose-specific files, including my schema:

    $ cd MyProject
    $ mkdir db
    $ cd db
    $ vim dbconf.yml  # Open with the editor of your choice.

The dbconf.yml file contains a list of databases along with the relevant information for connecting to each. Mine looks something like this:

    test:
        driver: postgres
        open: user=foo dbname=foo_test sslmode=disable

    development:
        driver: postgres
        open: user=foo dbname=foo_dev sslmode=disable

(Important: use spaces, not tabs, in YAML.)

Now I have two databases configured: one for testing and one for development. By default, Goose assumes the target database is development. The above is just configured to connect to the locally running PostgreSQL instance. If I need support for a remote host, I can add host=... password=... (and remove sslmode=disable).

At this point, I can generate a new migration:

    $ cd ..  # Back to MyProject/, not in db/
    $ goose create NewSchema sql
    goose: created db/migrations/20140311133014_NewSchema.sql
    $ vim db/migrations/20140311133014_NewSchema.sql  # Use whatever editor you like

Notice that the goose create command will create a new SQL file that follows Goose's naming convention. (That trailing sql on the command is important; goose create can also generate Go migration files.)

My new schema file has two sections: a section for goose up and a section to roll back with goose down:

    -- +goose Up
    CREATE TABLE foo (
      -- ...
    );

    -- +goose Down
    DROP TABLE foo;

With that done, I can now very easily create my development database:

    $ goose up

If I want to set up test instead, I use the -env flag:

    $ goose -env=test up

And that's it! In subsequent schema files, I may ALTER existing tables or CREATE new ones, and so on. Just about anything that your SQL engine can execute can be passed through Goose. (Though there are some formatting annotations you need to use for things like stored procedures.)

Goose Pros

In addition to the general ease of use of Goose, here are some additional features that I really like:

  • You do not need your entire codebase to execute Goose. Our deployment box, for example, only has the Goose db/ directory, not the rest of the code.
  • It is largely language neutral if you're just migrating SQL, and it works with PostgreSQL, MySQL, and SQLite.
  • The history table that it creates is human-readable, which makes it easy for me to see what's been going on.
  • It supports environment variable interpolation. Don't want your password inside the dbconf.yml file? Just do something like this:

    development:
        driver: postgres
        open: user=foo dbname=foo_dev sslmode=disable password=$MY_DB_PASSWORD

This will cause Goose to check the environment for a variable named MY_DB_PASSWORD.

Goose Cons

Honestly, I have very few. Right now, you need the Go runtime to install and build Goose. Of course, you can compile Goose once and then use it wherever you need it. While it has support for Go-language migrations, it would be nice to be able to write migration scripts that are executed via the shell. That way, one could use Bash, Python, Perl, or whatever else to trigger migrations. But, hey... this is a pretty minor complaint.

Overall, though, Goose is a fantastic tool for handling migrations with ease.
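Putting the commands above together, a typical round trip against the default development database might look like this sketch (all of these commands appear in the article; output is omitted):

    # Apply any migrations that have not yet been run.
    $ goose up

    # Inspect which migrations have been applied, based on Goose's tracking table.
    $ goose status

    # Roll back the most recent migration if something went wrong.
    $ goose down

    # The same cycle against the test database, via the -env flag.
    $ goose -env=test up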
March 27, 2014
· 15,091 Views · 0 Likes
The 5 Layers of PaaS
[Matt Butcher is a topic expert featured in the DZone 2014 Cloud Platform Research Report, which you can download for free.]

Ask a cloud-savvy developer what PaaS is, and you will get an answer like this: a PaaS is a cloud service that lets developers deploy applications into the cloud without having to manage the underlying infrastructure layer.

A year or two ago, PaaS systems were monolithic. A single vendor or solution, like Heroku, would provide one system that handled all aspects of PaaS. But things are changing. With a plethora of open source tools like Docker, Packer, Serf, CoreOS, Dokku, and Flynn, it is now possible to build your own PaaS. But what exactly makes up a PaaS?

I will take a functional approach to defining PaaS by asking: what are the things that a PaaS does? PaaS can be viewed as a workflow with several functional phases. Each phase accomplishes a specific goal in the process of moving an application onto a production platform. The phases are not necessarily serial steps; they may run in parallel, and not in the order listed below. The five functional phases of a PaaS are:

  • Deployment
  • Provisioning
  • Lifecycle management
  • Service management
  • Reporting

1. Deployment

The deployment phase is responsible for moving an application from its source (typically a developer's machine) to the PaaS. Some of the common ways of doing this include:

  • Running a git remote on the PaaS and handling git push events from clients. (Heroku, OpenShift, Flynn, and Dokku all use this method; Elastic Beanstalk uses a variation on this. A sketch of this model appears at the end of the article.)
  • Sending the code as a bundle (often a gzipped tar). Cloud Foundry uses this method, as does Stackato.
  • Compiling the code locally and copying the resulting executable to the PaaS.

When a PaaS receives a deployment, it kicks off processes to move that app into a running state. The exact order of those processes varies, so I will keep them in the order in which they appeared above.

2. Provisioning

In the provisioning phase, the PaaS sets up the infrastructure necessary for running the app. "Infrastructure" is a broad and sometimes nebulous term, but here are some common provisioning targets:

  • Setting up containers and/or compute instances
  • Configuring networking
  • Installing or configuring operating system services (e.g. Apache)
  • Installing or configuring libraries (e.g. Ruby Gems)

Many PaaS systems spread provisioning responsibilities across multiple tools. One tool may create a compute instance, while another tool may install libraries. But all share the same responsibility: create the environment in which the application will run.

3. Lifecycle Management

Once the PaaS has a copy of the app as well as an environment capable of running the app, it needs to manage the execution of the app. This is lifecycle management. Common tasks of lifecycle management include:

  • Starting the app
  • Monitoring the app's running state
  • Monitoring or reporting on the app's resource consumption
  • Restarting an app upon failure
  • Stopping or restarting the app on command

Some minimal PaaS systems offer only basic lifecycle management (e.g. start and stop), while highly sophisticated ones may include autoscaling, auto-throttling, and hot (zero-downtime) deployments.

4. Service Management

This phase is not one that all PaaS layers perform. In fact, I would go so far as to say that it is not a mandatory piece of PaaS, but it certainly is useful when present. All PaaS systems run applications (that is, after all, what they're for). But some go a step beyond and provide services that may be attached to an application. These services run outside of the application container or compute instance. Services might include:

  • Databases
  • Networked file systems
  • Message queues
  • Caches
  • Aggregated logging

"Old guard" systems (like Cloud Foundry) share a service (e.g. MySQL) across multiple applications. Some of the newer container-based approaches like CoreOS may supplant this model by making it simpler to run services in specially designated containers. (Check out the Serf project for a similar approach.)

Why don't all PaaS systems need this layer? One reason is that many cloud providers already have comparable services in the form of DBaaS, MQaaS, and so on.

5. Reporting and Monitoring

This final phase is the most banal. Most of the application's lifecycle is not spent on deployment or provisioning or service management; it's spent running. During an application's life, there are many interesting things that can occur. There are lifecycle events that we'd like to know about, like restarts. There are environmental conditions of interest, like resource utilization and system performance. And, of course, there is application data that we would like to monitor, like log files and application metrics.

Many, but by no means all, PaaS platforms provide at least some level of reporting. Here are some examples:

  • Amazon Elastic Beanstalk integrates with AWS CloudWatch and also aggregates system log files per application.
  • ActiveState Stackato provides a web console with copious logs and can show real-time statistics about an application and its surrounding environment.
  • Heroku can optionally send events to a Loggly backend (which is a service).

Conclusion: PaaS and Mini-PaaS

As we've seen, each functional phase of PaaS can be done to greater or lesser degrees of complexity. Old guard PaaS systems often come feature-packed. But with PaaS building blocks like Docker, Flynn, and CoreOS, building a special-purpose, tailored mini-PaaS is not out of the question. Just take a look at Deis and Dokku for solutions with varying degrees of complexity.
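To make the git-push deployment model from phase 1 concrete, here is a minimal sketch using Dokku; the host name and app name are placeholders, and the exact remote URL depends on how your Dokku server is set up:

    # Register the PaaS as a git remote (dokku.example.com and myapp are
    # placeholder values for this sketch).
    $ git remote add dokku dokku@dokku.example.com:myapp

    # Pushing a branch triggers the deployment phase: the server-side
    # receive hook builds the app, provisions a container, and starts it.
    $ git push dokku master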
March 7, 2014
· 20,334 Views · 5 Likes

Comments

Anonymous (Lambda) Functions in PHP.

May 29, 2009 · Cody Taylor

This covers the "old way" of doing lambda functions. The "new way", introduced in PHP 5.3, is much more elegant.
Get to know the QueryPath PHP library

May 29, 2009 · Matt Butcher

Seems to handle large files all right. TweetyPants.com pulls down XML files routinely larger than 1M and performance is still pretty decent (and it even runs on a massively overloaded shared host). I've also used it to parse and read ODT files, which are generally large. No problems there, either. The rule of thumb (IIRC) is that a DOM object will require 3x the memory of the serialized document. QueryPath adds very little to this.
