
Optimizing AWS EBS Using LVM


Linux's Logical Volume Manager is an older tool, but it can still be useful today. See how you can use it to help right-size your EBS usage and save money.


An often overlooked fact about cloud block storage costs is that most volumes are over-provisioned to the maximum size the application is ever expected to need, which means users pay for unused storage space. Even though cloud storage is often called “elastic”, few users actually take advantage of that elasticity to save on costs.

In this post, I'd like to demonstrate a few cost-saving techniques that take advantage of the Logical Volume Manager's (LVM) ability to add or remove space as needed, and point out some of its limitations along the way.

Start Small and Only Pay for the Space Actually Used

The most common cost-saving scenario is provisioning EBS disks on demand. For example, if our anticipated maximum filesystem capacity is 100GB but we only need 20GB at the beginning, we can start by provisioning a single 20GB EBS disk. Using LVM, we can then attach additional 20GB EBS disks as needed and grow the filesystem as we go. This way, we only pay for the capacity we actually use, instead of the entire 100GB from day one.
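
To put rough numbers on it: at gp2's list price of roughly $0.10 per GB-month in us-east-1 (pricing varies by region and changes over time), a 100GB volume costs about $10 per month from day one, while starting at 20GB costs about $2 per month and only grows as disks are added. For a workload that takes months to fill up, that difference adds up quickly.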

[Figure: LVM grow from 20GB to 100GB]

Below are the steps I used to accomplish this. Please refer to the corresponding commands' documentation for more information.

Creating a 20GB LV ‘lvol1’ with an ext4 filesystem and mounting it at ‘/data’:

# aws ec2 create-volume --size 20 --region us-east-1 --availability-zone us-east-1c --volume-type gp2
{
    "AvailabilityZone": "us-east-1c", 
    "Encrypted": false, 
    "VolumeType": "gp2", 
    "VolumeId": "vol-030af2d6", 
    "State": "creating", 
    "Iops": 100, 
    "SnapshotId": "", 
    "CreateTime": "2016-09-06T19:16:24.228Z", 
    "Size": 20
}

# aws ec2 attach-volume --volume-id vol-030af2d6 --instance-id i-459c2d74 --device /dev/sdf
{
    "AttachTime": "2016-09-06T19:19:51.917Z", 
    "InstanceId": "i-459c2d74", 
    "VolumeId": "vol-030af2d6", 
    "State": "attaching", 
    "Device": "/dev/sdf"
}
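
Note that attach-volume returns while the volume is still in the 'attaching' state, so the /dev/sdf device may not exist on the instance yet. An optional extra step (a small sketch using the AWS CLI waiter) makes sure the attachment has completed before the disk is handed to LVM:

# aws ec2 wait volume-in-use --volume-ids vol-030af2d6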

# vgcreate vg1 /dev/sdf
  Physical volume "/dev/sdf" successfully created
  Volume group "vg1" successfully created

# vgs
  VG   #PV #LV #SN Attr   VSize  VFree 
  vg1    1   0   0 wz--n- 20.00g 20.00g

# lvcreate -l 100%FREE -n lvol1 vg1
  Logical volume "lvol1" created.

# lvs
  LV   VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lvol1  vg1  -wi-a----- 20.00g               

# mkfs.ext4 /dev/vg1/lvol1
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 5241856 4k blocks and 1310720 inodes
Filesystem UUID: a62ca9bf-0b2e-4b33-b346-9a0a45da7a1a
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done 

# mount /dev/vg1/lvol1 /data

# df
Filesystem          1K-blocks    Used Available Use% Mounted on
/dev/mapper/vg1-lvol1  20507260   44992  19397516   1% /data
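
The mount above will not survive a reboot on its own, so it is also worth persisting it in /etc/fstab. A suggested entry (adjust the options to your needs):

# echo '/dev/vg1/lvol1 /data ext4 defaults,nofail 0 2' >> /etc/fstab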


Steps to grow the filesystem by extending LV ‘lvol1’ from 20GB to 40GB:

# aws ec2 create-volume --size 20 --region us-east-1 --availability-zone us-east-1c --volume-type gp2
{
    "AvailabilityZone": "us-east-1c", 
    "Encrypted": false, 
    "VolumeType": "gp2", 
    "VolumeId": "vol-9317ef46", 
    "State": "creating", 
    "Iops": 100, 
    "SnapshotId": "", 
    "CreateTime": "2016-09-06T19:13:55.207Z", 
    "Size": 20
}

# aws ec2 attach-volume --volume-id vol-9317ef46 --instance-id i-459c2d74 --device /dev/sdg
{
    "AttachTime": "2016-09-06T19:51:59.002Z", 
    "InstanceId": "i-459c2d74", 
    "VolumeId": "vol-9317ef46", 
    "State": "attaching", 
    "Device": "/dev/sdg"
}

# vgextend vg1 /dev/sdg
  Physical volume "/dev/sdg" successfully created
  Volume group "vg1" successfully extended

# vgs
  VG   #PV #LV #SN Attr   VSize  VFree 
  vg1    2   1   0 wz--n- 39.99g 20.00g

# lvextend -r -l +100%FREE vg1/lvol1
  Size of logical volume vg1/lvol1 changed from 20.00 GiB (5119 extents) to 39.99 GiB (10238 extents).
  Logical volume lvol1 successfully resized
resize2fs 1.42.12 (29-Aug-2014)
Filesystem at /dev/mapper/vg1-lvol1 is mounted on /data; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 3
The filesystem on /dev/mapper/vg1-lvol1 is now 10483712 (4k) blocks long.

# df
Filesystem          1K-blocks    Used Available Use% Mounted on
/dev/mapper/vg1-lvol1  41145664   49032  39193368   1% /data


Repeating the above steps three more times will grow the filesystem to 100GB:

# pvs
  PV         VG   Fmt  Attr PSize  PFree
  /dev/sdf   vg1  lvm2 a--  20.00g    0 
  /dev/sdg   vg1  lvm2 a--  20.00g    0 
  /dev/sdh   vg1  lvm2 a--  20.00g    0 
  /dev/sdi   vg1  lvm2 a--  20.00g    0 
  /dev/sdj   vg1  lvm2 a--  20.00g    0 

# vgs
  VG   #PV #LV #SN Attr   VSize  VFree
  vg1    5   1   0 wz--n- 99.98g    0 

# lvs
  LV   VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lvol1  vg1  -wi-ao---- 99.98g                                                    

# lvdisplay -m
  --- Logical volume ---
  LV Path                /dev/vg1/lvol1
  LV Name                lvol1
  VG Name                vg1
  LV UUID                Ly8RId-Hp6h-hHE9-j0eV-EC9l-d5RQ-rW7Utl
  LV Write Access        read/write
  LV Creation host, time ip-172-31-49-176, 2016-09-06 20:02:56 +0000
  LV Status              available
  # open                 1
  LV Size                99.98 GiB
  Current LE             25595
  Segments               5
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Segments ---
  Logical extents 0 to 5118:
    Type                linear
    Physical volume     /dev/sdf
    Physical extents    0 to 5118

  Logical extents 5119 to 10237:
    Type                linear
    Physical volume     /dev/sdg
    Physical extents    0 to 5118

  Logical extents 10238 to 15356:
    Type                linear
    Physical volume     /dev/sdh
    Physical extents    0 to 5118

  Logical extents 15357 to 20475:
    Type                linear
    Physical volume     /dev/sdi
    Physical extents    0 to 5118

  Logical extents 20476 to 25594:
    Type                linear
    Physical volume     /dev/sdj
    Physical extents    0 to 5118

# df
Filesystem          1K-blocks    Used Available Use% Mounted on
/dev/mapper/vg1-lvol1 103060768   61044  98581908   1% /data
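
If this grow cycle becomes routine, the per-disk steps above are easy to script. Below is a rough sketch of one "add 20GB" iteration; the instance ID, device name, and sizes are placeholders to adjust for your environment:

#!/bin/bash
# Sketch: grow vg1/lvol1 by one 20GB gp2 volume (instance ID and device are placeholders).
set -e
INSTANCE=i-459c2d74
DEVICE=/dev/sdh   # next unused device letter on the instance

# Create the new EBS volume and wait for it to become available.
VOL=$(aws ec2 create-volume --size 20 --region us-east-1 \
        --availability-zone us-east-1c --volume-type gp2 \
        --query VolumeId --output text)
aws ec2 wait volume-available --volume-ids "$VOL"

# Attach it and wait for the attachment to complete.
aws ec2 attach-volume --volume-id "$VOL" --instance-id "$INSTANCE" --device "$DEVICE"
aws ec2 wait volume-in-use --volume-ids "$VOL"

# Fold the new disk into the volume group, then grow the LV and its filesystem.
vgextend vg1 "$DEVICE"
lvextend -r -l +100%FREE vg1/lvol1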


Trimming Unused Capacity

The next cost-saving scenario is to trim unused space when it is no longer needed. For example, cold data can be archived to a cheaper medium for longer-term storage. We can save costs by removing unused EBS disks from the volume group.
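
For example, rarely accessed files could be pushed to S3 before the shrink. A one-line sketch (the bucket name and path are hypothetical):

# aws s3 sync /data/archive s3://my-archive-bucket/archive/ --storage-class STANDARD_IA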

In the example below, we found that the application only needs about 50GB of the 100GB filesystem. We can reduce the LV ‘lvol1’ from 100GB to 60GB and shrink the filesystem along with it. One caveat here is that shrinking an ext4 filesystem requires taking it offline for the duration of the resize; the ‘lvreduce’ command prompts to unmount it before proceeding.

[Figure: LVM shrink from 100GB to 60GB]

The steps to trim 40GB off the filesystem and the LV ‘lvol1’, and then remove the two freed PVs, are:

# lvreduce -r -l -10238 vg1/lvol1
Do you want to unmount "/data"? [Y|n] y
fsck from util-linux 2.23.2
/dev/mapper/vg1-lvol1: 11/6553600 files (0.0% non-contiguous), 459349/26209280 blocks
resize2fs 1.42.12 (29-Aug-2014)
Resizing the filesystem on /dev/mapper/vg1-lvol1 to 15725568 (4k) blocks.

The filesystem on /dev/mapper/vg1-lvol1 is now 15725568 (4k) blocks long.

  Size of logical volume vg1/lvol1 changed from 99.98 GiB (25595 extents) to 59.99 GiB (15357 extents).
  Logical volume lvol1 successfully resized

# df
Filesystem          1K-blocks    Used Available Use% Mounted on
/dev/mapper/vg1-lvol1  61784060   53064  59099116   1% /data

# pvs
  PV         VG   Fmt  Attr PSize  PFree 
  /dev/sdf   vg1  lvm2 a--  20.00g     0 
  /dev/sdg   vg1  lvm2 a--  20.00g     0 
  /dev/sdh   vg1  lvm2 a--  20.00g     0 
  /dev/sdi   vg1  lvm2 a--  20.00g 20.00g
  /dev/sdj   vg1  lvm2 a--  20.00g 20.00g
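
In this run, the 40GB freed by lvreduce happens to sit entirely on /dev/sdi and /dev/sdj, which is why both show up as completely free. If a PV you wanted to remove still held allocated extents, you would first have to evacuate it with pvmove, roughly like this (the device name is just an example):

# pvmove /dev/sdi

With both PVs empty, they can be dropped from the volume group and wiped:
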
# vgreduce vg1 /dev/sdi /dev/sdj
  Removed "/dev/sdi" from volume group "vg1"
  Removed "/dev/sdj" from volume group "vg1"

# pvremove /dev/sdi /dev/sdj
  Labels on physical volume "/dev/sdi" successfully wiped
  Labels on physical volume "/dev/sdj" successfully wiped

# aws ec2 detach-volume --volume-id vol-9c17ef49
{
    "AttachTime": "2016-09-06T19:53:48.000Z", 
    "InstanceId": "i-459c2d74", 
    "VolumeId": "vol-9c17ef49", 
    "State": "detaching", 
    "Device": "/dev/sdi"
}

# aws ec2 delete-volume --volume-id vol-9c17ef49

# aws ec2 detach-volume --volume-id vol-8717ef52
{
    "AttachTime": "2016-09-06T19:54:28.000Z", 
    "InstanceId": "i-459c2d74", 
    "VolumeId": "vol-8717ef52", 
    "State": "detaching", 
    "Device": "/dev/sdj"
}

# aws ec2 delete-volume --volume-id vol-8717ef52
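
One last detail: delete-volume is typically rejected while a volume is still in the 'detaching' state, so a script would usually wait for it to reach 'available' between the detach and delete calls, e.g.:

# aws ec2 wait volume-available --volume-ids vol-8717ef52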

Conclusion

I hope by now you have a good idea of what it takes to extend or reduce EBS storage space using LVM. The reason we put ourselves through this trouble is to achieve one thing: saving costs by provisioning right-sized EBS disks based on actual application utilization. Public clouds such as AWS give us the opportunity to easily provision virtual disks of various sizes. Why not take advantage of it and stop wasting money?
