
New Azure Storage Features: REST API, Append Blob, File Service Changes

Learn more about the newest features in Azure Storage including a new REST API, client library, blob types, file service changes, shared access signatures, and more.


It’s been a while since I wrote a blog post about Azure Storage. Earlier this month, the Azure Storage team released a new version of the Storage Service that includes a lot of awesome goodness! In this blog post, I will try to summarize those changes.

So let’s start!

New Storage Service REST API Version / Client Library

All the new changes are rolled into a new version of the REST API. The new version is “2015-02-21”. There are some breaking changes in this new version, so if you want to use the new features, please make sure you use the latest version of the REST API.

This version includes a new blob type called “Append Blob” and tons of new features in the Azure File Service.

Along with the new REST API, the Azure Storage team also released a new version of the Storage Client Library – version 5.0.0. This version implements all the features available in the latest version of the REST API. You can add this library to your projects from NuGet: https://www.nuget.org/packages/WindowsAzure.Storage/.
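For example, you can pull the 5.0.0 package into a project from the Visual Studio Package Manager Console (the package ID is taken from the NuGet page above):

    # Run in the Visual Studio Package Manager Console
    Install-Package WindowsAzure.Storage -Version 5.0.0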

Append Blob

Append Blob is the newest kid on the block. Previously there were two kinds of blobs available in Azure Storage – Block Blob and Page Blob. Now there are three.

As the name suggests, content in an Append Blob is always appended to the end of the blob. It is ideally suited for storing logging or telemetry data. Even though you could implement this kind of functionality with Block Blobs as well, Append Blobs make it super easy to collect logging data.

Let’s consider a scenario where you want to collect logging data from your web application and store it in blob storage. Furthermore, assume that you want just one file per day. With an Append Blob, you first create an empty append blob and, as the data comes in, simply write it to the blob. The Append Blob makes sure that existing data is not overwritten and that the new content gets written to the end of the blob.

To manage Append Blobs, the .NET Storage Client Library introduces a new class – CloudAppendBlob [sorry, the MSDN documentation is not updated just yet]. The way you work with CloudAppendBlob is very similar to the way you work with CloudBlockBlob or CloudPageBlob.

        static void CreateEmptyAppendBlob()
        {
            var account = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            var blobClient = account.CreateCloudBlobClient();
            var container = blobClient.GetContainerReference("logs-container");
            container.CreateIfNotExists();
            //One log file per day, named after the date.
            var logBlob = container.GetAppendBlobReference(DateTime.UtcNow.Date.ToString("yyyy-MM-dd") + ".log");
            logBlob.CreateOrReplace();
        }
        static void WriteToAppendBlob()
        {
            var account = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            var blobClient = account.CreateCloudBlobClient();
            var container = blobClient.GetContainerReference("logs-container");
            container.CreateIfNotExists();
            var logBlob = container.GetAppendBlobReference(DateTime.UtcNow.Date.ToString("yyyy-MM-dd") + ".log");
            //Only create the blob if it doesn't exist yet - CreateOrReplace would wipe out previously appended data.
            if (!logBlob.Exists())
            {
                logBlob.CreateOrReplace();
            }
            logBlob.AppendText(string.Format("[{0}] - some log entry", DateTime.UtcNow));
        }

Append Blob supports all operations supported by other blob types. You can copy append blobs, take snapshots, view/update metadata, view/update properties, download etc.
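For instance, here’s a quick sketch of taking a snapshot of an append blob. Treat this as an illustration only – it assumes CreateSnapshot on CloudAppendBlob behaves like its block blob counterpart, and the blob name is made up:

        static void SnapshotAppendBlob()
        {
            var account = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            var blobClient = account.CreateCloudBlobClient();
            var container = blobClient.GetContainerReference("logs-container");
            var logBlob = container.GetAppendBlobReference("2015-08-01.log");
            //Take a point-in-time snapshot of the log before it grows further.
            var snapshot = logBlob.CreateSnapshot();
            Console.WriteLine("Snapshot taken at: " + snapshot.SnapshotTime);
        }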

Some Important Notes:

  • Append Blobs are only supported in the “2015-02-21” version of the REST API. Thus, if you want to use Append Blobs, you must use the latest version of the REST API.
  • If a blob container holds block, page, and append blobs together, you must use the latest version of the REST API to list them – blob enumeration will fail at the REST API level itself if you use an older version.

Shared Access Signature (SAS) Change

If you’re using the REST API to create a SAS, there’s one breaking change in the way the “canonicalized resource” is constructed for the string to sign. In the latest version, you must prepend the service name (blob, table, queue, or file) to the canonicalized resource. For example, if the URL for which you want to create a SAS is “https://myaccount.blob.core.windows.net/music”:

In previous versions, canonicalized resource would be:

/myaccount/music

But in the new version, it would be:

/blob/myaccount/music

You can learn more about Shared Access Signature here: https://msdn.microsoft.com/en-us/library/azure/dn140255.aspx.
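To make the change concrete, here is a rough sketch of how the signature computation is affected. This is a simplified illustration, not the SDK’s implementation – the full string to sign contains more fields (see the MSDN page above for the authoritative list and order), and the account name/key values are placeholders:

        static string ComputeSasSignature(string accountName, string base64AccountKey)
        {
            //Old versions:  "/" + accountName + "/music"
            //2015-02-21:    "/blob/" + accountName + "/music"
            var canonicalizedResource = "/blob/" + accountName + "/music";
            var stringToSign = string.Join("\n",
                "r",                    //signed permissions
                "",                     //signed start (optional)
                "2015-12-31T00:00:00Z", //signed expiry
                canonicalizedResource,
                "",                     //signed identifier
                "2015-02-21",           //signed version
                "", "", "", "", "");    //response header overrides
            //The signature is an HMAC-SHA256 of the string to sign, keyed with the account key.
            using (var hmac = new System.Security.Cryptography.HMACSHA256(Convert.FromBase64String(base64AccountKey)))
            {
                return Convert.ToBase64String(hmac.ComputeHash(System.Text.Encoding.UTF8.GetBytes(stringToSign)));
            }
        }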

File Service Changes

This is where the fun begins. There are a number of changes in the File Service. Let’s talk about them!

However, please note that the File Service is still in preview and thus is not enabled by default for your storage account/subscription. You will need to enable the File Service for your subscription by visiting the account management portal.

CORS

As you may already know, the other storage services (Blobs, Queues, and Tables) have been supporting CORS for a long time now (this, along with SAS, has been the foundation of Cloud Portam). Now the File Service supports CORS as well!

CORS for the File Service works the same way as for the other services:

  • CORS rules are applied at the service level.
  • There can be a maximum of 5 CORS rules for File Service.
  • Each CORS rule will have a list of allowed origins, HTTP verbs, a list of allowed (request) & exposed (response) headers and max age in seconds.

Let’s take an example of how you would set a CORS rule for the File Service. In this example, I will use the CORS rule required for Cloud Portam. For Cloud Portam, we need the following rule set:

  • Allowed Origins: https://app.cloudportam.com
  • Allowed Verbs: Get, Head, Post, Put, Delete, Trace, Options, Connect, and Merge
  • Allowed Headers: *
  • Exposed Headers: *
  • Max Age: 3600 seconds

        static void SetFileServiceCorsRule()
        {
            var account = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            var fileClient = account.CreateCloudFileClient();
            //Fetch the existing service properties first so that non-CORS settings (e.g. metrics) are preserved.
            var serviceProperties = fileClient.GetServiceProperties();
            serviceProperties.Cors.CorsRules.Clear();
            CorsRule corsRule = new CorsRule()
            {
                AllowedOrigins = new List<string>() { "https://app.cloudportam.com"},
                AllowedMethods = CorsHttpMethods.Connect | CorsHttpMethods.Delete | CorsHttpMethods.Get | 
                                    CorsHttpMethods.Head | CorsHttpMethods.Merge | CorsHttpMethods.Options | 
                                    CorsHttpMethods.Post | CorsHttpMethods.Put | CorsHttpMethods.Trace,
                AllowedHeaders = new List<string>() { "*" },
                ExposedHeaders = new List<string>() { "*" },
                MaxAgeInSeconds = 3600
            };
            serviceProperties.Cors.CorsRules.Add(corsRule);
            fileClient.SetServiceProperties(serviceProperties);
        }

Here’s an example of how you would read the CORS rules currently set for File Service.

        static void GetFileServiceCorsRule()
        {
            var account = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            var fileClient = account.CreateCloudFileClient();
            var serviceProperties = fileClient.GetServiceProperties();
            var corsRules = serviceProperties.Cors.CorsRules;
            foreach (var corsRule in corsRules)
            {
                Console.WriteLine("Allowed Origins: " + string.Join(", ", corsRule.AllowedOrigins));
                //Enumerate the flags set on AllowedMethods instead of checking each flag individually.
                var allowedMethods = new List<string>();
                foreach (CorsHttpMethods method in Enum.GetValues(typeof(CorsHttpMethods)))
                {
                    if (method != CorsHttpMethods.None && corsRule.AllowedMethods.HasFlag(method))
                    {
                        allowedMethods.Add(method.ToString());
                    }
                }
                Console.WriteLine("Allowed Methods: " + string.Join(", ", allowedMethods));
                Console.WriteLine("Allowed Headers: " + string.Join(", ", corsRule.AllowedHeaders));
                Console.WriteLine("Exposed Headers: " + string.Join(", ", corsRule.ExposedHeaders));
                Console.WriteLine("Max Age (in Seconds): " + corsRule.MaxAgeInSeconds);
            }
        }

Share Quota

Now you can define a quota for a share. The quota restricts the maximum size of that share. A share quota is specified in GB and must be between 1 GB and 5,120 GB (5 TB).

You can set the quota of a share when you create it. You can also update the quota later by changing the share’s properties.

        static void CreateShareWithQuotaAndUpdateIt()
        {
            var account = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            var fileClient = account.CreateCloudFileClient();
            var share = fileClient.GetShareReference("share-name");
            share.Properties.Quota = 128;//Set share quota to 128 GB.
            share.CreateIfNotExists();
            //Fetch the share's attributes
            share.FetchAttributes();
            Console.WriteLine("Share's Quota = " + share.Properties.Quota);
            //Now let's update the share quota
            share.Properties.Quota = 1024;//Set share quota to 1 TB (1,024 GB).
            share.SetProperties();
            //Fetch the share's attributes
            share.FetchAttributes();
            Console.WriteLine("Share's Quota = " + share.Properties.Quota);
            share.DeleteIfExists();
        }

Please note that if you don’t set the quota while creating a share, its quota will be set to 5,120 GB (i.e., the maximum value).

However, if you call “SetProperties()” on a share but don’t provide a value for the quota, its value is not changed.

Share Usage

Another neat feature introduced in this release is the ability to view share usage, i.e., how much of the share’s quota has been used (reported in GB). Please note that this is an approximate value only.

        static void GetShareUsage()
        {
            var account = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            var fileClient = account.CreateCloudFileClient();
            var share = fileClient.GetShareReference("share-name");
            share.Properties.Quota = 128;//Set share quota to 128 GB.
            share.CreateIfNotExists();
            var shareUsage = share.GetStats().Usage;
            Console.WriteLine("Share Usage (in GB): " + shareUsage);
            share.DeleteIfExists();
        }

Share Access Policies

This is another important feature that has been missing from the File Service until now. Before the current version, there was no anonymous access to File Service shares and files; in order to perform any operation on the File Service, you needed the account key.

With the introduction of share access policies and Shared Access Signature support, it is now possible to perform certain operations on shares and files without using the account key.

Share Access Policies work in the same way as Blob Container Access Policies:

  • There can be a maximum of 5 access policies per share.
  • Each access policy must have a unique identifier and optionally can have a start/end date and permissions (Read, Write, List, and Delete).
  • When using an access policy to create a Shared Access Signature, only the missing parameters need to be specified. For example, if an access policy has a start date defined, you can’t specify a start date in your Shared Access Signature.

Let’s see how you can create a shared access policy on a share. In this example, we’re creating an access policy with all permissions (Read, Write, List, and Delete) and an expiry date 24 hours from the current date/time.

        static void SetShareAccessPolicy()
        {
            var account = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            var fileClient = account.CreateCloudFileClient();
            var share = fileClient.GetShareReference("share-name");
            share.CreateIfNotExists();
            var permissions = new Microsoft.WindowsAzure.Storage.File.FileSharePermissions();
            var sharedAccessFilePolicy = new Microsoft.WindowsAzure.Storage.File.SharedAccessFilePolicy()
            {
                Permissions = Microsoft.WindowsAzure.Storage.File.SharedAccessFilePermissions.Read | Microsoft.WindowsAzure.Storage.File.SharedAccessFilePermissions.Write | 
                                Microsoft.WindowsAzure.Storage.File.SharedAccessFilePermissions.List | Microsoft.WindowsAzure.Storage.File.SharedAccessFilePermissions.Delete,
                SharedAccessExpiryTime = new DateTimeOffset(DateTime.UtcNow.AddDays(1))
            };
            var accessPolicyIdentifier = "policy-1";
            permissions.SharedAccessPolicies.Add(new KeyValuePair<string,Microsoft.WindowsAzure.Storage.File.SharedAccessFilePolicy>(accessPolicyIdentifier, sharedAccessFilePolicy));
            share.SetPermissions(permissions);
        }
        static void GetShareAccessPolicy()
        {
            var account = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            var fileClient = account.CreateCloudFileClient();
            var share = fileClient.GetShareReference("share-name");
            share.CreateIfNotExists();
            var permissions = share.GetPermissions();
            var accessPolicies = permissions.SharedAccessPolicies;
            foreach (var item in accessPolicies)
            {
                Console.WriteLine("Identifier: " + item.Key);
                var accessPolicy = item.Value;
                Console.WriteLine("Start Time: " + accessPolicy.SharedAccessStartTime);
                Console.WriteLine("Expiry Time: " + accessPolicy.SharedAccessExpiryTime);
                Console.WriteLine("Read Permission: " + accessPolicy.Permissions.HasFlag(Microsoft.WindowsAzure.Storage.File.SharedAccessFilePermissions.Read));
                Console.WriteLine("Write Permission: " + accessPolicy.Permissions.HasFlag(Microsoft.WindowsAzure.Storage.File.SharedAccessFilePermissions.Write));
                Console.WriteLine("List Permission: " + accessPolicy.Permissions.HasFlag(Microsoft.WindowsAzure.Storage.File.SharedAccessFilePermissions.List));
                Console.WriteLine("Delete Permission: " + accessPolicy.Permissions.HasFlag(Microsoft.WindowsAzure.Storage.File.SharedAccessFilePermissions.Delete));
            }
            share.DeleteIfExists();
        }

Shared Access Signature

With access policies comes Shared Access Signature (SAS) support. This is another important improvement to the File Service. Now you can create SAS URLs for File Service shares and files.

SAS for the File Service works much like SAS for blob containers and blobs:

  • You can create a SAS either without an access policy (an ad-hoc SAS) or with one.
  • For a SAS, you define an optional start date/time, an end date/time, and at least one of the Read, Write, List, or Delete permissions. If you’re using an access policy to define a SAS, you only specify the parameters that are not present in that access policy.
  • When creating a SAS on a share, the following permissions are applicable: Read, Write, List, and Delete. However, when creating a SAS on a file in a share, the List permission is not applicable.

Let’s see how you can create a SAS on a share. In this example, we will create an ad-hoc SAS with just the “List” permission that will expire 24 hours from the current date/time.

        static void CreateSasOnShare()
        {
            var account = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            var fileClient = account.CreateCloudFileClient();
            var share = fileClient.GetShareReference("share-name");
            share.CreateIfNotExists();
            var sasToken = share.GetSharedAccessSignature(new Microsoft.WindowsAzure.Storage.File.SharedAccessFilePolicy()
                {
                    Permissions = Microsoft.WindowsAzure.Storage.File.SharedAccessFilePermissions.List,
                    SharedAccessExpiryTime = new DateTimeOffset(DateTime.UtcNow.AddDays(1))
                });
            var sasUrl = share.Uri.AbsoluteUri + sasToken;
            Console.WriteLine(sasUrl);
            share.DeleteIfExists();
        }
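If you want to base the SAS on a stored access policy instead, you reference the policy by its identifier and leave the ad-hoc parameters empty, since the stored policy already defines the permissions and expiry. Here’s a sketch, assuming the share-level overload mirrors the one on blob containers and that a policy named “policy-1” exists on the share (as created in SetShareAccessPolicy above):

        static void CreateSasOnShareFromAccessPolicy()
        {
            var account = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            var fileClient = account.CreateCloudFileClient();
            var share = fileClient.GetShareReference("share-name");
            //Pass an empty ad-hoc policy; permissions and expiry come from the stored access policy "policy-1".
            var sasToken = share.GetSharedAccessSignature(new Microsoft.WindowsAzure.Storage.File.SharedAccessFilePolicy(), "policy-1");
            var sasUrl = share.Uri.AbsoluteUri + sasToken;
            Console.WriteLine(sasUrl);
        }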

Now let’s see how you can create a SAS on a file in a share. In this example, we will create an ad-hoc SAS with just the “Read” permission that will expire 24 hours from the current date/time.

        static void CreateSasOnFile()
        {
            var account = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            var fileClient = account.CreateCloudFileClient();
            var share = fileClient.GetShareReference("share-name");
            share.CreateIfNotExists();
            var file = share.GetRootDirectoryReference().GetFileReference("myfile.txt");
            file.UploadText("This is sample file!");
            var sasToken = file.GetSharedAccessSignature(new Microsoft.WindowsAzure.Storage.File.SharedAccessFilePolicy()
            {
                Permissions = Microsoft.WindowsAzure.Storage.File.SharedAccessFilePermissions.Read,
                SharedAccessExpiryTime = new DateTimeOffset(DateTime.UtcNow.AddDays(1))
            });
            var sasUrl = file.Uri.AbsoluteUri + sasToken;
            Console.WriteLine(sasUrl);
            //Now let's read this file by making an HTTP Web Request using SAS URL.
            var request = (HttpWebRequest) HttpWebRequest.Create(sasUrl);
            request.Method = "GET";
            using (var response = (HttpWebResponse) request.GetResponse())
            {
                using (var streamReader = new StreamReader(response.GetResponseStream()))
                {
                    var fileContents = streamReader.ReadToEnd();
                    Console.WriteLine(fileContents);
                }
            }
            share.DeleteIfExists();
        }

Directory Metadata

In previous versions, the Storage Service allowed you to define metadata on a share and on a file, but not on a directory. This release enables that functionality: you can now define custom metadata on a directory in the form of key/value pairs. You can set metadata when creating a directory and update it later on.

Rules for metadata on a directory are the same as those for a share or a file:

  • Metadata key must be a valid C# identifier.
  • Size of metadata cannot exceed 8KB.

Let’s see how you can set metadata on a directory.

        static void DirectoryMetadata()
        {
            var account = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            var fileClient = account.CreateCloudFileClient();
            var share = fileClient.GetShareReference("share-name");
            share.CreateIfNotExists();
            var directory = share.GetRootDirectoryReference().GetDirectoryReference("folder");
            directory.Metadata.Add("Key1", "Value1");
            directory.Metadata.Add("Key2", "Value2");
            directory.CreateIfNotExists();
            //Fetch directory attributes
            directory.FetchAttributes();
            var metadata = directory.Metadata;
            foreach (var item in metadata)
            {
                Console.WriteLine("Key = " + item.Key + "; Value = " + item.Value);
            }
            Console.WriteLine("----------------------------------------");
            //Now let's update the metadata
            directory.Metadata.Add("Key3", "Value3");
            directory.Metadata.Add("Key4", "Value4");
            directory.SetMetadata();
            //Fetch directory attributes
            directory.FetchAttributes();
            metadata = directory.Metadata;
            foreach (var item in metadata)
            {
                Console.WriteLine("Key = " + item.Key + "; Value = " + item.Value);
            }
            share.DeleteIfExists();
        }

Copy Files

This is yet another important feature introduced in the latest API. Essentially, it provides server-side asynchronous copying of files across shares, within the same storage account or across storage accounts. Not only that – you can now copy files from your File Service shares to blob containers and vice versa.

Unfortunately, I haven’t played with this feature enough to include detailed examples, but I will update this post as I learn more about it.
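That said, based on how the equivalent blob APIs work, a server-side file copy should look roughly like this. Treat this as an untested sketch – it assumes CloudFile exposes StartCopy and CopyState the same way CloudBlockBlob does, and the share/file names are made up:

        static void CopyFileAcrossShares()
        {
            var account = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            var fileClient = account.CreateCloudFileClient();
            var sourceFile = fileClient.GetShareReference("source-share").GetRootDirectoryReference().GetFileReference("myfile.txt");
            var targetFile = fileClient.GetShareReference("target-share").GetRootDirectoryReference().GetFileReference("myfile-copy.txt");
            //The copy runs asynchronously on the server; poll CopyState to see when it finishes.
            targetFile.StartCopy(sourceFile);
            targetFile.FetchAttributes();
            Console.WriteLine("Copy status: " + targetFile.CopyState.Status);
        }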

Wish List

Even though the new features are very impressive, there are still some things I think are missing from the API. Here are some items from my wish list:

  • Ability to recursively list files – Currently, the File Service only lists the files and directories directly inside a share or a directory. I wish the storage team would add functionality to list all files inside a share irrespective of the nested directory hierarchy.
  • Ability to delete a non-empty folder – Currently, in order to delete a folder, it must be completely empty. I wish the storage team would add functionality to delete a non-empty folder.
  • Ability to copy a folder – Currently, the copy functionality works only for a single file. I wish the storage team would add copy-folder functionality.

These are some items from my wish list. If you have a wish list of your own, please share it in the comments below.

Summary

That’s it for this post. I hope you found it useful. If you find any issues with the post, please let me know and I will fix them ASAP.



Published at DZone with permission of Gaurav Mantri, DZone MVB. See the original article here.
