
AzCopy – Announcing General Availability of AzCopy 3.0 plus preview release of AzCopy 4.0 with Table and File support


We are pleased to announce that AzCopy is now GA.  

Starting from this release, we will publish two AzCopy series, the RTM series that includes only the GA features and the Pre-release series that includes both the GA and the preview features.

You can download either the AzCopy 3.0.0 with blob copy functionality only, or the AzCopy 4.0.0-preview which includes the GA features and the additional Storage Table Entities copy feature that’s under preview.

AzCopy 3.0.0 - Generally Available

AzCopy GA version 3.0.0 includes the following changes:

  • AzCopy now requires that the end user explicitly specify every parameter’s name. In previous releases, the source, destination and file pattern parameters did not require any parameter names. Starting from 3.0.0, the command line ‘AzCopy <source> <dest> [pattern] [options]’ needs to be changed to:

AzCopy /Source:<source> /Dest:<destination> /Pattern:<pattern> [Options] …

As a result of this change, parameters such as source and destination no longer need to appear in any particular order.

  • We have also made the following changes to the AzCopy command line’s help messages:
    • Type ‘AzCopy’ to get the short help.
    • Type ‘AzCopy /?’ to get the detailed command line help.
    • Type ‘AzCopy /?:Sample’ to get command line samples.
    • Type ‘AzCopy /?:<option name>’ to get detailed help for the named AzCopy option, e.g.

          AzCopy /?:SourceKey

  • In previous versions of AzCopy, if the user chose NOT to overwrite existing files or blobs, AzCopy assigned a ‘failed’ status to those files or blobs that already existed. Starting from 3.0.0, AzCopy assigns a ‘skipped’ status to such files and displays ‘Transfer skipped: <Total skipped count>’ as part of the ‘Transfer summary’ in the console window.


AzCopy 4.0.0-preview - Copy Azure Storage Table Entities (New Preview)

Besides copying blobs and Azure Files, AzCopy 4.0.0-preview will also support exporting table entities to local files or to Azure Storage block blobs, and importing the data back to a storage table. Note that this is not a consistent snapshot of the table, since changes may occur to entities in the table at various times before AzCopy completes retrieving all the entities.

  • When exporting table entities, the user can specify the parameter /Dest as a local folder or a blob container, e.g.

AzCopy /Source:https://myaccount.table.core.windows.net/myTable/ /Dest:D:\test\ /SourceKey:key

AzCopy /Source:https://myaccount.table.core.windows.net/myTable/ /Dest:https://myaccount.blob.core.windows.net/mycontainer/ /SourceKey:key1 /Destkey:key2

AzCopy will generate JSON data files in the local folder or blob container with the following naming convention:

<account name>_<table name>_<timestamp>_<volume index>_<CRC>.json

  • AzCopy will by default generate one JSON data file; the user can specify /SplitSize:<split file size in MB> to generate multiple data files, e.g.

AzCopy /Source:https://myaccount.table.core.windows.net/myTable/ /Dest:D:\test\ /SourceKey:key /SplitSize:100

AzCopy uses the ‘volume index’ in the data files’ names to distinguish multiple files. The ‘volume index’ contains two parts, ‘partition key range index’ and ‘split file index’ (both starting from 0). The ‘partition key range index’ will be 0 if the user does not specify the option /PKRS, which is introduced in the next section.

For instance, if AzCopy generates two data files after the user specifies the option /SplitSize, the data file names may look like the following:

    myaccount_mytable_20140903T051850.8128447Z_0_0_C3040FE8.json
    myaccount_mytable_20140903T051850.8128447Z_0_1_0AB9AC20.json

Note that the minimum value of the split size is 32 MB. If the destination is blob storage, AzCopy will split the data file once the file size reaches the blob size limit (200 GB), even if the end user does not specify the option /SplitSize.

  • AzCopy by default exports the whole table’s entities in a serial fashion. To start concurrent exporting, the user needs to specify the option /PKRS:<partition key range split>. Use this option with caution, since Azure Table Service is a key lookup store and is not built for efficient scans. Too many scans on a table can lead to throttling of live traffic.

For instance, when the option /PKRS:"aa#bb" is specified, AzCopy will start three concurrent operations to export the three partition key ranges below:

[<first partition key>, aa)
[aa, bb)
[bb, <last partition key>]

AzCopy /Source:https://myaccount.table.core.windows.net/myTable/ /Dest:D:\test\ /SourceKey:key /PKRS:"aa#bb"

The generated JSON data files may look like this:

myaccount_mytable_20140903T051850.8128447Z_0_0_C3040FE8.json
myaccount_mytable_20140903T051850.8128447Z_1_0_0AB9AC20.json
myaccount_mytable_20140903T051850.8128447Z_2_0_939AF48C.json

Note that the number of concurrent operations is also controlled by the option /NC. AzCopy uses the number of cores on the machine as the default value of /NC when copying table entities. When the user specifies the option /PKRS, AzCopy will choose the smaller of the two values, the number of partition key ranges or the value specified in /NC, as the number of concurrent operations. Please find more details about /NC by typing ‘AzCopy /?:NC’.
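
For illustration, combining the options already described above, an export that splits the partition key range and caps concurrency at two operations might look like the following (all values are placeholders):

AzCopy /Source:https://myaccount.table.core.windows.net/myTable/ /Dest:D:\test\ /SourceKey:key /PKRS:"aa#bb" /NC:2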

  • When importing the data files back to a table, the user needs to specify both the /Manifest and /EntityOperation options, e.g.
AzCopy /Source:D:\test\ /Dest:https://myaccount.table.core.windows.net/mytable1/ 
       /DestKey:key /Manifest:"myaccount_mytable_20140103T112020.manifest" /EntityOperation:InsertOrReplace

AzCopy /Source:https://myaccount.blob.core.windows.net/mycontainer/ /Dest:https://myaccount.table.core.windows.net/mytable1/
       /SourceKey:key1 /DestKey:key2 /Manifest:"myaccount_mytable_20140103T112020.manifest" /EntityOperation:InsertOrReplace

The manifest file is generated in the destination local folder or the blob container when user exports table entities using AzCopy. The manifest file will be used to locate all the data files and to perform data validation during importing. The manifest file uses the following naming convention:

    <account name>_<table name>_<timestamp>.manifest

The option /EntityOperation is used to govern the behavior of entity importing:
    • InsertOrSkip - Skips an existing entity, or inserts a new entity if it does not exist in the table.
    • InsertOrMerge - Merges with an existing entity, or inserts a new entity if it does not exist in the table.
    • InsertOrReplace - Replaces an existing entity, or inserts a new entity if it does not exist in the table.

Note that the option /PKRS cannot be used when importing entities. AzCopy will by default start concurrent operations in the import scenario; the default number of concurrent operations is equal to the number of cores on the machine, but the user can change this number by specifying the option /NC. For more details, type ‘AzCopy /?:NC’.
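
For illustration, an import command that caps concurrency at four operations might look like the following (all values are placeholders):

AzCopy /Source:D:\test\ /Dest:https://myaccount.table.core.windows.net/mytable1/ /DestKey:key /Manifest:"myaccount_mytable_20140103T112020.manifest" /EntityOperation:InsertOrReplace /NC:4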

As always, we are looking forward to your feedback.

Microsoft Azure Storage Team


(Cross-Post) Azure Storage Performance Checklist


How to get great performance when using Azure Storage is a topic we've talked with you about many times: during talks at TechEd and Build, in threads on forums, on our blog, and in person. It’s always exciting to see how passionate you are about making your applications perform as well as possible!

To help you further in this goal, we’ve now released the Azure Storage Performance Checklist which consolidates our performance guidance in a single easy to use document, in one easy to find location. It’s a short document (about 15 printed pages) that a developer should be able to read in about 30 minutes and it contains details of over 40 proven practices structured as a checklist, which will help you to improve the performance of your applications. Here is a small selection from the checklist:

Area   | Category               | Question
Blobs  | Use Metadata           | Are you storing frequently used metadata in blob metadata to avoid having to download each blob to extract it each time?
Blobs  | Uploading Fast         | To upload one blob fast, are you uploading blocks in parallel?
Tables | Configuration          | Are you using JSON for your table requests?
Tables | Limiting Returned Data | Are you using projection to avoid retrieving unneeded properties?
Queues | Update Message         | Are you using UpdateMessage to store progress in processing a message and avoid having to reprocess from the start if the processing component encounters an error?
Queues | Architecture           | Are you using queues to make your entire application more scalable by keeping long-running workloads out of the critical path and scale them independently?

Developers can use this checklist to help design a new application or to validate an existing design, and while not every recommendation is relevant to every application, each of them is a broadly applicable practice that most applications will benefit from following.

We will keep this checklist up to date as we identify more proven practices and add to it when we introduce new Azure Storage features. If you have a recommendation for a proven practice that you don’t see in the current checklist, then please let us know.

Example Scenarios

Many of the recommendations in the checklist are simple to implement in your code. Here are three examples, each of which may have a significant effect on the performance of your application if you apply them in the correct context:

Scenario #1: Queues: Configuration

Have you turned Nagle off to improve the performance of small requests?

The Nagle algorithm is enabled by default. To disable it for a queue endpoint, you can use the following code. This code must execute before you make any calls to the queue endpoint:

var storageAccount = CloudStorageAccount.Parse(connStr);
ServicePoint queueServicePoint = ServicePointManager.FindServicePoint(storageAccount.QueueEndpoint);
queueServicePoint.UseNagleAlgorithm = false;
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();

Scenario #2: Blobs: Copying Blobs

Are you copying blobs in an efficient manner?

To copy blob data from a container in one storage account to a container in another storage account, you could first download and then upload the data as shown here:

CloudBlockBlob srcBlob = srcBlobContainer.GetBlockBlobReference("srcblob");
srcBlob.DownloadToFile(@"C:\Temp\copyblob.dat", System.IO.FileMode.Create);
CloudBlockBlob destBlob = destBlobContainer.GetBlockBlobReference("destblob");
destBlob.UploadFromFile(@"C:\Temp\copyblob.dat", System.IO.FileMode.Open);


However, a much more efficient approach is to use one of the copy blob methods such as StartCopyFromBlob as shown here:

CloudBlockBlob srcBlob = srcBlobContainer.GetBlockBlobReference("srcblob");
CloudBlockBlob destBlob = destBlobContainer.GetBlockBlobReference("destblob");
destBlob.StartCopyFromBlob(GenerateSASUri(srcBlob));

Note that this example uses a Shared Access Signature (SAS) to access the private blob in the source container.
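
One possible shape for the GenerateSASUri helper referenced above (a hedged sketch, not the original post’s implementation) is to generate a read-only, time-limited SAS on the source blob:

static Uri GenerateSASUri(CloudBlockBlob blob)
{
    // Read-only access, valid for one hour (values are illustrative).
    var policy = new SharedAccessBlobPolicy
    {
        Permissions = SharedAccessBlobPermissions.Read,
        SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(1)
    };
    string sasToken = blob.GetSharedAccessSignature(policy);
    return new Uri(blob.Uri.AbsoluteUri + sasToken);
}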

Scenario #3: Blobs: Uploading Fast

When trying to upload one blob quickly, are you uploading blocks in parallel?

If you are using the .NET Storage Client Library, it has the capability to manage parallel block uploads for you. The following code sample shows how you can use the BlobRequestOptions class to specify the number of threads to use for a parallel block upload (four in this example):

CloudBlockBlob blob = srcBlobContainer.GetBlockBlobReference("uploadinparallelblob");
byte[] buffer = ...
var requestOptions = new BlobRequestOptions()
{
    ParallelOperationThreadCount = 4
};
blob.UploadFromByteArray(buffer, 0, buffer.Length, null, requestOptions);


Note that the Storage Client Library may upload small blobs as a single blob upload instead of multiple block uploads: the SingleBlobUploadThresholdInBytes property of the BlobRequestOptions class sets the size threshold above which the Storage Client Library uses block uploads.
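
If you want to tune both settings together, a minimal sketch reusing the blob and buffer from the example above (the 4 MB threshold is an arbitrary illustration, not a recommendation from the checklist) looks like this:

var uploadOptions = new BlobRequestOptions()
{
    // Blobs larger than this threshold are uploaded as parallel blocks; smaller blobs use a single upload.
    SingleBlobUploadThresholdInBytes = 4 * 1024 * 1024,
    ParallelOperationThreadCount = 4
};
blob.UploadFromByteArray(buffer, 0, buffer.Length, null, uploadOptions);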

Summary and Call to Action

We have developed the Azure Storage Performance Checklist that contains over 40 proven practices pulled together from a wide variety of sources. This checklist will help you to make a significant difference to the performance of your applications that use the Azure Storage services.

For now, you should take a look at the checklist, print it out, and then see what you can do to improve the performance of your application! You should check back regularly for updates as we incorporate more proven practices into the checklist.

Jeff Irwin
Azure Storage Program Manager

(Cross-Post) Introducing Premium Storage: High-Performance Storage for Azure Virtual Machine Workloads


We are excited to announce the preview of the Microsoft Azure Premium Storage Disks. With the introduction of new Premium Storage, Microsoft Azure now offers two types of durable storage: Premium Storage and Standard Storage. Premium Storage stores data on the latest technology Solid State Drives (SSDs) whereas Standard Storage stores data on Hard Disk Drives (HDDs).

Premium Storage is specifically designed for Azure Virtual Machine workloads requiring consistent high performance and low latency. This makes them highly suitable for I/O-sensitive SQL Server workloads. Premium Storage is currently available only for storing data on disks used by Azure Virtual Machines.

You can provision a Premium Storage disk with the right performance characteristics to meet your requirements. You can then attach several persistent disks to a VM, and deliver to your applications up to 32 TB of storage per VM with more than 50,000 IOPS per VM at less than one millisecond latency for read operations.

With Premium Storage, Azure offers the ability to truly lift-and-shift your demanding enterprise applications - like SQL Server, Dynamics AX, Dynamics CRM, Exchange Server, MySQL, and SAP Business Suite – to the cloud.

Currently, Premium Storage is available for limited preview. To sign up for Azure Premium Storage preview, visit Azure Preview page.

Premium Storage Benefits

We designed the service specifically to enhance the performance of IO intensive enterprise workloads, while providing the same high durability as Locally Redundant Storage.

Disk Sizes and Performance

Premium Storage disks provide up to 5,000 IOPS and 200 MB/sec throughput depending on the disk size. For calculating IOPS, we use 256 KB as the IO unit size. IOs smaller than 256 KB are counted as one unit, and bigger IOs are counted as multiple units of 256 KB; for example, a 1 MB IO counts as four units.

You will need to select the disk sizes based on your application performance and storage capacity needs. We offer three Premium Storage disk types for preview.

Disk Types          | P10        | P20        | P30
Disk Size           | 128 GB     | 512 GB     | 1024 GB
IOPS per Disk       | 500        | 2300       | 5000
Throughput per Disk | 100 MB/sec | 150 MB/sec | 200 MB/sec

The disk type is determined by the size of the disk you store in your Premium Storage account. See Premium Storage Overview for more details.

You can maximize the performance of your “DS” series VMs by attaching multiple Premium Storage disks, up to the network bandwidth limit available to the VM for disk traffic. For instance, with a 16-core “DS” series VM, you can attach up to 32 TB of data disks and achieve over 50,000 IOPS. To learn the disk bandwidth available for each VM size, see Virtual Machine and Cloud Service Sizes for Azure.

Durability

Durability of data is of utmost importance for persistent storage. Azure customers have critical applications that depend on the persistence of their data and high tolerance against failures. That is why, for Premium Storage, we implemented the same level of high durability using our Locally Redundant Storage technology. Premium Storage keeps three replicas of data within the same region.

We also recommend that you use the storage service commands to create snapshots and to copy those snapshots to a Standard GRS storage account for keeping a geo-redundant snapshot of your data.

Specialized Virtual Machines

We are also launching special Virtual Machines to further enhance the performance of Premium Storage disks. These VMs leverage new caching technology to provide extremely low latency for read operations. In order to use Premium Storage, you must use these special series VMs. Currently, only “DS” series VMs support Premium Storage disks.

These VMs also support Standard Storage disks. Thus you could have a “DS” series VM with a mix of Premium and Standard Storage based disks to optimize your capacity, performance and cost. You can read more about “DS” series VMs here.

Pricing

Pricing for the new Premium Storage service is here. During preview, Premium Storage will be charged at 50% of the GA price.

Getting Started

Step 1: Sign up for service

To sign up, go to the Azure Preview page, and sign up for the Microsoft Azure Premium Storage service using one or more of your subscriptions. As subscriptions are approved for the Premium Storage preview, you will get an email notifying you of the approval.

We are seeing overwhelming interest for trying out Premium Storage, and we will be opening up the service slowly to users in batches, so please be patient after signing up.

Step 2: Create a new storage account

Once you get the approval notification, you can then go to the Microsoft Azure Preview Portal and create a new Premium Storage account using the approved subscription. While creating the storage account be sure to select “Premium Locally Redundant” as the account type.


Currently, Premium Storage is available for preview in the following regions:

  • West US
  • East US 2
  • West Europe

Step 3: Create a “DS” series VM

You can create the VM via Microsoft Azure Preview Portal, or using Azure PowerShell SDK version 0.8.10 or later. Make sure that your Premium Storage account is used for the VM.

Following is a PowerShell example to create a VM by using the DS-series under your Premium storage account:

$storageAccount = "yourpremiumccount"
$adminName = "youradmin"
$adminPassword = "yourpassword"
$vmName = "yourVM"
$location = "West US"
$imageName = "a699494373c04fc0bc8f2bb1389d6106__Windows-Server-2012-R2-201409.01-en.us-127GB.vhd"
$vmSize = "Standard_DS2"
$OSDiskPath = "https://" + $storageAccount + ".blob.core.windows.net/vhds/" + $vmName + "_OS_PIO.vhd"
$vm = New-AzureVMConfig -Name $vmName -ImageName $imageName -InstanceSize $vmSize -MediaLocation $OSDiskPath
Add-AzureProvisioningConfig -Windows -VM $vm -AdminUsername $adminName -Password $adminPassword 
New-AzureVM -ServiceName $vmName -VMs $VM -Location $location


If you want more disk space for your VM, attach a new data disk to an existing DS-series VM after it is created:

$storageAccount = "yourpremiumaccount"
$vmName = "yourVM"
$vm = Get-AzureVM -ServiceName $vmName -Name $vmName
$LunNo = 1
$path = "http://" + $storageAccount + ".blob.core.windows.net/vhds/" + "myDataDisk_" + $LunNo + "_PIO.vhd"
$label = "Disk " + $LunNo
Add-AzureDataDisk -CreateNew -MediaLocation $path -DiskSizeInGB 128 -DiskLabel $label -LUN $LunNo -HostCaching ReadOnly -VM $vm | Update-AzureVm


If you want to create a VM using your own VM image or disks, you should first upload the image or disks to your Premium Storage account, and then create the VM using that.

Summary and Links

To summarize, we are very excited to announce the new SSD based Premium Storage offering that enhances the VM performance and greatly improves the experience for IO Intensive workloads like databases. As we always do, we would love to hear feedback via comments on this blog, Azure Storage MSDN forum or send email to mastoragequestions@microsoft.com.

Please see these links for more information:

Premium Storage overview

Premium Storage REST operations

"DS" series VM specifications

Sirius Kuttiyan

Using Azure Storage on Linux


We want to provide an update for Linux users on how to use Azure Storage, and we are pleased to announce some new options. Currently, we are testing with Ubuntu 14.04; we will look at feedback to determine which additional distros to include over time.

Java

Our Java library, for which we announced General Availability earlier this year, has now been fully stress-tested on Linux. You can get the latest Java library through Maven (http://search.maven.org/#search%7Cga%7C1%7Ca%3A%22azure-storage%22) or as source code (https://github.com/Azure/azure-storage-java).

Node.js

We released an updated Node.js library earlier this year, in preview for both Windows and Linux. You can get the Node.js library through npm (https://www.npmjs.org/package/azure-storage) or GitHub (https://github.com/Azure/azure-storage-node).

C++

We are pleased to announce that we have released preview version 0.4.0 of our C++ library, which now compiles for both Windows and Linux. 0.4.0 also contains new features including blob download auto-resume functionality and control over the internal buffer size used in the HTTP layer, so we recommend that everyone using an older version upgrades.

Compiling from source is supported for both Windows and Linux; the source code is available through GitHub (https://github.com/Azure/azure-storage-cpp). Binaries for Windows are also available through NuGet (http://www.nuget.org/packages/wastorage/).

Getting Started on Linux

The Azure Storage Client Library for C++ depends on Casablanca. Follow these instructions to compile it. Version 0.4.0 of the library depends on Casablanca version 2.3.0.

Once this is complete, then:

  • Clone the project using Git:
git clone https://github.com/Azure/azure-storage-cpp.git

The project is cloned to a folder called azure-storage-cpp. Always use the master branch, which contains the latest release.

  • Install additional dependencies:
sudo apt-get install libxml++2.6-dev libxml++2.6-doc uuid-dev
  • Build the SDK for Release:
cd azure-storage-cpp/Microsoft.WindowsAzure.Storage
mkdir build.release
cd build.release
CASABLANCA_DIR=<path to Casablanca> CXX=g++-4.8 cmake .. -DCMAKE_BUILD_TYPE=Release
make

In the above command, replace <path to Casablanca> with the path to your local installation of Casablanca. For example, if the file libcpprest.so exists at ~/Github/Casablanca/casablanca/Release/build.release/Binaries/libcpprest.so, then your cmake command should be:

CASABLANCA_DIR=~/Github/Casablanca/casablanca CXX=g++-4.8 cmake .. -DCMAKE_BUILD_TYPE=Release

The library is generated under azure-storage-cpp/Microsoft.WindowsAzure.Storage/build.release/Binaries/.

Once you have built the library, the samples should work equally well for Windows and Linux. If you like, you can build the samples as well:

cd ../samples
vi SamplesCommon/samples_common.h – edit this file to include your storage account name and key
mkdir build.release
cd build.release
CASABLANCA_DIR=<path to Casablanca> CXX=g++-4.8 cmake .. -DCMAKE_BUILD_TYPE=Release
make

To run the samples:

cd Binaries
cp ../../BlobsGettingStarted/DataFile.txt . (this is required to run the blobs sample)
./samplesblobs (run the blobs sample)
./samplestables (run the tables sample)
./samplesqueues (run the queues sample)

The getting-started samples in this blog post are also helpful: http://blogs.msdn.com/b/windowsazurestorage/archive/2013/12/20/windows-azure-storage-client-library-for-cplusplus-preview.aspx

Differences between Windows and Linux Client Libraries

The only major difference is in logging. On Windows, we use ETW logging directly. On Linux, we use Boost logging, which means that you can plug in your own sinks as you see fit. Each operation_context has a boost::log::sources::severity_logger<boost::log::trivial::severity_level>; if you want fine-grained control over logging, feel free to set your own logger objects. Note that in addition to what Boost provides, we have an internal log_level that we use. Each operation_context gets a log_level that you can set. The default value is taken from operation_context::default_log_level (of type client_log_level), which you can also set to turn logging on or off for the library as a whole. The default is that logging is off entirely.

What’s next

We’re excited about supporting Linux-based usage of Azure Storage from Java, Node.js, and C++. We encourage you to try it out and let us know where we can improve by leaving feedback on GitHub or on this blog. We’ll be working to bring these all to general availability.

Adam Sorrin and Jeff Irwin
Microsoft Azure Storage Team

Java, Android Storage Client Library Date Bug Resolution


A bug has been found that affects users of the Java Storage Client Library (version 1.3.1 and below) and the Azure Storage Client Library for Android (preview release versions 0.3.1 and below). This bug occurs only when using the Azure Table Service via TableEntity objects that contain a Date in custom fields within the entity. It does not affect partition keys or row keys containing dates. If this affects you, please find more information about the problem and its fix here: http://go.microsoft.com/fwlink/?LinkId=523753

Microsoft Azure Storage Team

Protecting against the SSL 3.0 vulnerability - Azure Storage to start disabling SSL 3.0 on February 20th, 2015


At the end of October, Microsoft Azure announced that Azure services would begin disabling support for SSL 3.0 starting December 1, 2014 in response to an industry-wide vulnerability in SSL 3.0, commonly known as POODLE. Starting on February 20th, 2015, Azure Storage will discontinue support for SSL 3.0. Any client/browser that uses HTTPS to connect to Azure Storage and does not utilize TLS 1.0 or higher, which supersedes SSL 3.0, will be prevented from connecting to Azure Storage when SSL 3.0 is disabled. Clients/browsers currently using HTTP to connect to Azure Storage will not be affected.

We recommend that you immediately investigate your applications and remove any dependencies on SSL 3.0.

  • Make sure that you are not enforcing the use of SSL 3.0. For example, .NET applications that communicate with Azure services should NOT set the following (see the sketch after this list for an alternative):
ServicePointManager.SecurityProtocol = SecurityProtocolType.Ssl3;
  • If you are using IE 6.0 or earlier on Windows XP or earlier, most likely you are using SSL 3.0. In most cases, you can identify the browser type that your clients are using by enabling Azure Storage Analytics and looking at the User Agent in your Analytics logs. Guidance for end users and administrators to ensure clients are utilizing TLS 1.0 or higher and to disable SSL 3.0 proactively can be found here.  Example of IE6 user agent on Win XP:
"Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"
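
If you do need to set ServicePointManager.SecurityProtocol explicitly, a minimal sketch (assuming .NET Framework 4.5 or later, where the Tls11 and Tls12 values are available) is to allow only TLS 1.0 and above:

// Allow TLS 1.0, 1.1 and 1.2; SSL 3.0 is deliberately excluded.
ServicePointManager.SecurityProtocol =
    SecurityProtocolType.Tls | SecurityProtocolType.Tls11 | SecurityProtocolType.Tls12;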

Summary and Links

Although analysis of connections to Microsoft Azure Storage shows few customers still use SSL 3.0, we are reminding customers of this change so they can update their impacted applications prior to us disabling SSL 3.0.

About Storage Analytics Logging
Windows Azure Storage Logging: Using Logs to Track Storage Requests
Protecting against the SSL 3.0 vulnerability
How to Disable SSL 3.0 in Azure Websites, Roles, and Virtual Machines
Azure Security SSL 3.0 Update

Perry Skountrianos
Program Manager, Azure Storage

AzCopy – Introducing synchronous copy and customized content type


We are pleased to announce that AzCopy 3.1.0 and 4.1.0-Preview are now released! These two versions include the support for the synchronous copy and content type customization.

AzCopy 3.1.0 (Release Version)

  • AzCopy by default uses the asynchronous copy mechanism of the storage service to move data between two storage endpoints. In that mode, the copying of data runs in the background using spare bandwidth capacity that has no SLA in terms of how fast a blob will be copied, and AzCopy periodically checks the copy status until the copying is completed or failed. With the new option /SyncCopy in the 3.1.0 release, data is copied at a consistent speed: AzCopy downloads the blobs from the source storage endpoint to local memory and then uploads them to the destination storage endpoint.

AzCopy /Source:https://myaccount1.blob.core.windows.net/myContainer/ /Dest:https://myaccount2.blob.core.windows.net/myContainer/
/SourceKey:key1 /DestKey:key2 /Pattern:ab /SyncCopy

Note that /SyncCopy might generate additional egress cost compared to the asynchronous copy; the recommended approach is to use this option in an Azure VM that is in the same region as your source storage account to avoid egress cost.

  • AzCopy uses “application/octet-stream” as the destination blobs’ content type by default. From version 3.1.0, you can specify the content type via the option /SetContentType:[content-type].

AzCopy /Source:D:\test\ /Dest:https://myaccount.blob.core.windows.net/myContainer/ /DestKey:key /Pattern:ab /SetContentType
AzCopy /Source:D:\test\ /Dest:https://myaccount.blob.core.windows.net/myContainer/ /DestKey:key /Pattern:ab /SetContentType:video/mp4

If "Content-Type" is not specified at the /SetContentType option, AzCopy will set each blob’s content type according to its file extension. To set the same content type for all the blobs, you must explicitly specify a value for “Content-Type", for example, /SetContentType:video/mp4.

Note that this option is only applicable when uploading blobs to the storage endpoints.


AzCopy 4.1.0-preview (Preview Version)

AzCopy 4.1.0-Preview includes all the features in the release version 3.1.0 and adds the following enhancements:

  • Users can specify the option /SyncCopy in the following scenarios:
    • Copying from File storage to Blob storage
    • Copying from Blob storage to File storage
    • Copying from File storage to File storage
  • Users can specify the content type of destination storage files.

AzCopy /Source:D:\test\ /Dest:https://myaccount.file.core.windows.net/myContainer/ /DestKey:key /Pattern:ab /SetContentType:video/mp4

Note that this option is only applicable when uploading blobs or files to the storage endpoints.

As always, we are looking forward to your feedback.

Microsoft Azure Storage Team

(Cross-Post) Troubleshooting Microsoft Azure Storage with Message Analyzer


Overview

Diagnosing and troubleshooting issues in cloud storage applications can be complex especially if they are not considered upfront. When an issue does occur, it can involve parsing and analyzing multiple log files (Azure Storage Analytics, client-side logs from Azure Storage Client libraries, and network traces) to fully understand and mitigate the issue. To assist with this, we have released a set of Azure Storage specific assets in Microsoft Message Analyzer, including parsers, color rules, charts, filters, and view layouts. This blog provides a brief overview, but for a complete hands-on tutorial see End-to-End Troubleshooting using Azure Storage Metrics and Logging, AzCopy, and Message Analyzer.

Sample Scenario – Searching Storage Logs for Storage Service Versions

For this blog post, we’ll examine a scenario where the customer wants to determine what REST versions their client applications are currently using, so they can prepare (if needed) for the planned Azure Storage Service Removal of several of the earlier service versions.

This walkthrough assumes you have Message Analyzer with the Azure Storage assets installed and Azure Storage logging and metrics enabled; for details on setting that up, see the complete hands-on tutorial.

Step 1: Download the Azure Storage server-side logs using AzCopy

AzCopy is available for download on the Azure Downloads page. For details about using AzCopy, see How to use AzCopy with Microsoft Azure Storage. As an example, the following command will download the log files for blob operations that took place on January 2, 2015 to the folder C:\Temp\Logs\Server; replace <storageaccountname> with the name of your storage account, and <storageaccountkey> with your account access key:

AzCopy.exe /Source:http://<storageaccountname>.blob.core.windows.net/$logs /Dest:C:\Temp\Logs\Server /Pattern:"blob/2015/01/02" /SourceKey:<storageaccountkey> /S /V

NOTE: It can take up to an hour for log data to become available because of the frequency at which the storage service flushes the log writers.

Step 2: Import your server-side logs into Message Analyzer

  1. On the File menu in Microsoft Message Analyzer, click New Session > Files > Add Files to browse to the location where you downloaded your server-side logs. Select your server-side logs and click on the Open button.
  2. In the Session Details panel, set the Text Log Configuration drop-down for each server-side log file to AzureStorageLog (if not already set) to ensure that Microsoft Message Analyzer can parse the log file correctly, and click on the Start button.

Step 3: Add the RequestVersionHeader column to the Analysis Grid

In Microsoft Message Analyzer, under Column Chooser > Azure Storage Log > Azure Storage Log Entry, right click on RequestVersionHeader and select “Add as Column” to make it visible in the Analysis Grid.

Step 4: Search for earlier REST versions

Add the following session filter to determine if there are any requests using an Azure Storage service version that is scheduled to be removed:

AzureStorageLog.RequestVersionHeader < "2012-02-12"


You can double click on each row (if any) in the Analysis Grid to get more information on the individual request. You can also read the “What Should I do?” section on our blog.

Performance Tip: Note that Message Analyzer loads log files into memory. If you have a large set of log data, you will want to apply a session filter before you load the data, in order to get the best performance from Message Analyzer.

Summary

In the above scenario we demonstrated how Message Analyzer along with AzCopy can be used to identify clients that are still using an old REST version. You can use the same combination of tools for your own debugging and analysis when working with Azure Storage.

Next Steps

Follow the complete hands-on tutorial here for more advanced scenarios, including correlating storage, network, and client-side logs to troubleshoot performance issues.

For more information visit the following resources:
  • E2E Troubleshooting using Azure Storage Metrics and Logging, AzCopy, and Message Analyzer
  • Monitor, diagnose, and troubleshoot Microsoft Azure Storage
  • Microsoft Azure Storage Service Version Removal

Perry Skountrianos
Program Manager, Azure Storage


Help us Shape the Azure Storage iOS Library


The Storage team is looking for feedback to help us focus our development for the upcoming Azure Storage iOS library. We’ve created a survey that should take 5-10 minutes to complete.

Once you complete the survey, you will also have the opportunity to learn more about an upcoming early preview program. In addition, the first 20 people to complete the full survey will receive some fun Azure merchandise for their offices!

Please click here to start the survey.

Thank you!

Michael Curd
Program Manager, Azure Storage

(Cross Post) Azure Storage Table Design Guide


We are pleased to announce the release of the Azure Storage Table Design Guide. This guide has been developed based on real-world experience helping the largest storage users in the world design their applications to use Table Storage.

The guide includes:

  • Table storage overview and design principles
  • Key considerations for querying and data modification
  • Modeling relationships
  • Table design patterns – including Intra-partition secondary indexes, Inter-partition secondary indexes, Eventually consistent transactions, Index entities, Denormalization and many more
  • Anti-patterns – including prepend / append and log data anti-patterns
  • Implementation considerations

The guide is a must read for anyone developing cloud-scale applications with Azure Table Storage. The guide targets intermediate and advanced users. For anyone new to Table storage, we recommend you first read Get started with Azure Storage in 5 minutes and/or Get started with Table storage.

As always, if you have questions/suggestions, please leave a reply!

Enjoy!

Introducing Azure Storage Append Blob


We are excited to introduce a new blob type called Append Blob (alongside our existing Block and Page blobs) that will be publicly available in Q3 2015. In this blog, we will provide an overview of Append Blob as well as highlight its most common usage scenarios. We will share more information and details as we get closer to release.

Overview

Append Blob is a new blob type that will be available with an upcoming storage service version. All writes to an Append Blob happen at the end of the blob. Updating and deleting existing blocks is not supported. To modify an Append Blob, you add blocks to the end of the blob via the new Append Block operation. Each appended block is accessible immediately.

Append Blob is optimized for fast append operations, making it ideal for scenarios where data must be added to an existing blob without modifying the existing contents of that blob (e.g. logging, auditing). In addition, Append Blob supports having multiple clients writing to the same blob without any need for synchronization (unlike block and page blobs). An Append Blob has the same scalability targets as a block blob. See Azure Storage Scalability and Performance Targets for details.
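
To give a sense of the programming model, here is a hedged sketch of what appending might look like from the .NET client library once Append Blob support ships (the CloudAppendBlob type and method names are assumptions, not confirmed by this post; container is an existing CloudBlobContainer):

// Create (or overwrite) an append blob, then add blocks to its end.
CloudAppendBlob appendBlob = container.GetAppendBlobReference("application.log");
appendBlob.CreateOrReplace();
appendBlob.AppendText("run started" + Environment.NewLine);   // each append lands at the end of the blob
appendBlob.AppendText("run finished" + Environment.NewLine);  // appended blocks are readable immediately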

Preparing for Append Blob

Most customers can simply wait for Append Blob to be available and then upgrade their applications to use the latest version of our client libraries, at which point they can begin using Append Blob. Customers that will want to plan for Append Blob are likely to be tooling providers (e.g. storage explorers, cloud storage management tools) that operate on storage accounts that are managed independently of the tooling.

Once Append Blob is released, customers who want to use Append Blob will have to upgrade their applications to use the latest version of the storage service, client libraries, and tools in order to handle the new blob type. There are three possible issues that you may encounter with Append Blob:

  • A container that contains one or more Append Blobs may be accessed only with a version of the storage service and client library that supports Append Blobs. If you attempt to list from a container that contains an Append Blob using an earlier version of the service, the service returns error 409 [FeatureVersionMismatch]. For a tooling provider, this means that until you upgrade your application to the new version, your tool may receive an exception when listing a container in a customer's storage account that contains an Append Blob.
  • For any AzCopy release version prior to 3.2.0, or preview version prior to 4.2.0, the download and copy operations will fail when the source container includes an Append Blob.
  • If you attempt to access an Append Blob using a PowerShell version that does not support Append Blob, the Get-AzureStorageBlob operation will fail. An update to the PowerShell libraries will be coming shortly.

To help prepare for the release of Append Blob, we are offering a Preview Program to provide early access to our client library with support for Append Blob. Customers participating in our Preview Program can use this early access to validate that their apps continue to work once Append Blob is broadly available.

Please let us know if you have any questions or if you are interested in participating in our Preview Program by emailing us at peskount@microsoft.com.

Perry Skountrianos
Program Manager, Azure Storage

General Availability of Azure Premium Storage


As you all are aware from Mark’s blog post, we launched Premium Storage on April 16th, 2015.  First, we want to thank all the preview customers for trying out Premium Storage and for sharing feedback.

Premium Storage delivers high-performance, low-latency disk support for I/O intensive workloads running on Azure Virtual Machines. You can attach several Premium Storage disks to a virtual machine (VM). With Premium Storage, your applications can have up to 32 TB of storage per VM and achieve 64,000 IOPS (input/output operations per second) per VM with extremely low latencies for read operations. Premium Storage is currently available only for storing data on disks used by Azure Virtual Machines.

Since we launched Preview in December of 2014, we worked with our Preview customers to validate Premium Storage in real-world scenarios.  Based on their feedback, we made several improvements for performance and stability.  Below are compelling updates in GA that we want to highlight:

  • Enhanced IOPS/Throughput limit for cached disks: Cache-hits are no longer counted towards the allocated IOPS/Throughput of the disk. That is, when you use a data disk with ReadOnly cache setting on a DS-series VM, Reads that are served from the cache are not subject to Premium Storage disk limits. This will help you increase the IOPS and Throughput of the disks.
  • Change the size of the disks: You can easily increase the size of existing disks. There is a PowerShell cmdlet to increase the size of a 128 GB disk to 512 GB or 1 TB, and to increase the size of a 512 GB disk to 1 TB.
  • Support for more Linux distributions: With the release of Linux Integration Services v4.0 we have enabled support for even more Linux flavors. Please refer to Premium Storage Overview for specifics.
  • Available in more regions: We launched Premium Storage preview in West US, East US 2 and West Europe. In addition to that, Premium Storage is now available in the following regions as well: East China, Southeast Asia and West Japan.

We also published the Migrating to Azure Premium Storage article to provide guidance on how to migrate your disks and Virtual Machines (VMs) from on-premises, Standard Storage, or a different cloud platform to Azure Premium Storage.

We will be presenting a Premium Storage technical deep dive session at Microsoft Ignite 2015 in Chicago, IL.  Please attend our session, if you are at Microsoft Ignite and want to learn more about Premium Storage.

As we always do, we would love to hear feedback via comments on this blog, Azure Storage MSDN forum or send email to mastoragequestions@microsoft.com.


Microsoft Azure Premium Storage Team


Resources
Mark Russinovich’s Blog Post which covers both the business logistics and technical overview
Premium Storage:  High-Performance Storage for Azure Virtual Machine Workloads
Using Blob Service Operations with Azure Premium Storage
Migrating to Azure Premium Storage

Client-Side Encryption for Microsoft Azure Storage – Preview


Welcome to the preview of the new Azure Storage Client Library for .NET.  This preview contains new functionality to help developers encrypt their data inside client applications before uploading to Azure Storage, and also to decrypt it while downloading. We also support integration with Azure Key Vault in order to let you store and manage your keys. Client-side encryption has been a common request from the Azure Storage developer community, and so we are happy to put it in your hands for feedback.

Why use client-side encryption?

Client-side encryption offers one significant advantage that server side encryption cannot guarantee: you (the user) completely control the keys.  In fact, the storage service never sees the keys and is incapable of decrypting the data.  This gives you the most control you can have.  It’s also fully transparent – our library is open source and on GitHub, and so you can inspect exactly how the library is encrypting your data to ensure that it meets your standards.

Why are we delivering a library with client-side encryption support?

While any developer can encrypt their data client side prior to uploading it, each developer would have to become an encryption expert and would also need to design for performance and security. In the end, a lot of developers would have to do the same work repeatedly, and since each solution would be different, none of them would work together.

We wanted to provide a library that would accomplish the following:

  • Implement Security Best Practices.  This library has been thoroughly reviewed for its security, so that you can use it with confidence.
  • Design for Performance.  We’ve designed the library to keep your application running quickly.
  • Ease of use for common scenarios: We’ve tried to cover the most common scenarios in a way that would be easy for developers to pick up.  While we may support more scenarios in the future, we won’t do it at the expense of usability of the library.
  • Interoperability across languages.  Many users use more than one of our client libraries, and our goal is to use the same technical design across implementations, so that data encrypted using the .NET library can be decrypted using the Java library.  This first preview is focused on .NET, but we will add this support to more languages as we move forward.

What’s available now?

In our first release, we support encryption for blobs, tables, and queues.  All of them use the envelope technique. Encryption and decryption with asymmetric keys is computationally expensive. Therefore, in the envelope technique, the data itself is not encrypted directly with such keys but instead encrypted using a random symmetric content encryption key. This content encryption key is then encrypted using a public key. We also have support for integrating with Azure Key Vault so you can manage your keys efficiently.

Using client-side encryption is easy. All you need to do is hook up request options with the appropriate encryption policy (Blob, Queue, and Table) and pass it to data upload/download APIs. The client library will internally take care of encrypting data on the client when uploading to Azure Storage, and automatically decrypts it when data is retrieved. You can find more details and code samples in the Getting Started with Client-Side Encryption for Microsoft Azure Storage article.
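
For example, a minimal blob sketch (assuming an existing CloudBlockBlob named blob and a locally managed RSA key exposed through the Key Vault extensions’ RsaKey type; the key identifier is illustrative) might look like this:

// Key encryption key (KEK) managed locally; the storage service never sees it.
RsaKey key = new RsaKey("private:key1");
// Attach a blob encryption policy to the request options; the library generates and wraps the CEK for you.
BlobRequestOptions options = new BlobRequestOptions
{
    EncryptionPolicy = new BlobEncryptionPolicy(key, null)
};
blob.UploadText("sensitive data", null, null, options, null);
// Downloads are decrypted transparently when the same policy (or a key resolver) is supplied.
string decrypted = blob.DownloadText(null, null, options, null);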

Goodness –

  • Security – Encrypted data is not readable even if the customer’s storage account keys are compromised
  • Fixed overhead encryption – We have used a technique where there is a fixed overhead so that your encrypted data will have a predictable size based on the original size.
  • Self-contained encryption – Every blob, table entity, or queue message has all encryption metadata stored in either the object or its metadata.  There is no need to get any additional data from anywhere else, except for the key you used.
  • Interoperability – Data encrypted using .NET client can be decrypted by clients using other languages like Java, Node.js, C++.  We will be providing support in languages beyond .NET in the future.
  • Blobs
    • Full blob upload – Files like documents, photos, and videos that are going to be uploaded in entirety are supported.
    • Full or Range blob download – Of course, you can also download the blob in its entirety, but sometimes, files like mp3 are downloaded in ranges depending on the part that is to be played. Therefore, range downloads are allowed and are entirely taken care of by the SDK internally.
    • Key Rotation – We’ve made it simple and straightforward for users to rotate keys themselves; i.e., multiple keys will be supported during the key rotation timeframe.
    • Clean upgrade path – Additional encryption algorithms and protocol versions can be supported in the future without needing significant changes – the design expects these to happen.

Points to be aware of –

  • This is a preview!  It should not be used for production data.  You should expect changes that affect the schemas used, which means you may not be able to read data written with the first preview.
  • It is easy to corrupt data on the blob service or make it unreadable: if you perform blob update operations such as WritePages/ClearPages or PutBlock after you have written an encrypted blob, you may corrupt the encrypted blob and make it unreadable. For encryption, you should only use full blob upload commands and range/full blob download commands.
  • For tables, a similar constraint exists – be careful to not update encrypted properties without updating the encryption metadata.
  • Also, because SetMetadata is not additive, calling it on an encrypted blob can wipe out all encryption-related metadata required for decryption. The same applies to snapshots: avoid specifying metadata while creating a snapshot of an encrypted blob.

Why a preview?

We want to get your feedback on design, ease of use and any additional scenarios you would like to tell us about.  This will enable us to actually use that feedback in shaping the final library.  We are open to your feedback and want to know where we can improve this functionality before releasing it.  Requests for additional functionality may not be reflected in the first release, but we want those too!

References

Getting Started with Client-Side Encryption for Microsoft Azure Storage
Download the Azure Storage Client Library for .NET NuGet package
Download the Azure Storage Client Library for .NET Source Code from GitHub

Thanks!

The Azure Storage Team

Microsoft Azure Storage Client Library for C++ v1.0.0 (General Availability)


We are pleased to announce the general availability of the Microsoft Azure Storage Client Library for C++ (version 1.0.0). You can download the NuGet package or get the source code from GitHub to start using the library.

Getting Started with C++ client library

The Azure Storage Client Library for C++ provides a comprehensive API for working with Azure storage, including but not limited to the following abilities:

  • Create, read, delete, and list blob containers, tables, and queues.
  • Create, read, delete, list and copy blobs plus read and write blob ranges.
  • Insert, delete, replace, merge, and query entities in an Azure table.
  • Enqueue and dequeue messages in an Azure queue.
  • Lazily list containers, blobs, tables, and queues, and lazily query entities (new in version 1.0.0)

To get started with Azure Storage client library for C++, please visit the following articles:

Please visit the Azure Storage client library for C++ API documentation for more details.

Cross Platform

Azure Storage C++ client library supports Windows and Linux development. You can now compile the SDK with g++ via a cmake build script. Please refer to the Getting Started on Linux section for more details.

Dependency on CppREST SDK

The CppREST SDK (Code name Casablanca) is a Microsoft project for cloud-based client-server communication in native code using a modern asynchronous C++ API design.

The Microsoft Azure Storage Client Library for C++ is built on top of the CppREST SDK and currently supports VS2012 and VS2013, the same as CppREST version 2.4 and earlier.

However, in the latest version of CppREST (2.5), support for VS2012 has been deprecated. To ensure a smooth upgrade from the previous version of the Storage C++ SDK, we have not upgraded the dependency on CppREST to 2.5 yet. In the future, we will also deprecate VS2012 support and move to the next version of CppREST. We also recommend that all users migrate their applications to Visual Studio 2013 or higher.

In the meantime, if you need CppREST 2.5, you can clone the Storage C++ SDK source code from GitHub, upgrade the dependency, and compile it on demand. Please let us know if you have any questions.

Asynchronous / Synchronous Programming

The Azure Storage C++ client library leverages the CppREST SDK asynchronous programming methodology to expose asynchronous APIs. It also provides synchronous methods, which are wrappers around the asynchronous methods. We highly recommend using the asynchronous API for concurrent scenarios. For more details regarding asynchronous programming in the CppREST SDK, please visit CppREST SDK pplx Namespace.

Next Steps

 As always, we are looking forward to your feedback.

Microsoft Azure Storage Team

Getting Started with Client-Side Encryption for Microsoft Azure Storage


We are excited to announce that we have added preview support for client-side encryption of data in Azure Storage .NET client library for Blob, Table and Queue data. We have also added support for integrating with Azure Key Vault to manage keys. The process of encryption and decryption follow the “envelope” technique.

Envelope Technique

Encryption using the envelope technique works in the following way:

  1. The Azure Storage client SDK will generate a content encryption key (CEK) which is a one-time-use symmetric key.
  2. User data is encrypted using this CEK.
  3. The CEK is then wrapped (encrypted) using the key encryption key (KEK). The KEK is identified by a key identifier, can be an asymmetric key pair or a symmetric key, and can be managed locally or stored in Azure Key Vault. The Storage client itself never has access to the KEK; it just invokes the key wrapping algorithm that is provided by Key Vault. Users can choose to use custom providers for key wrapping/unwrapping if desired.
  4. The encrypted data is then uploaded to the Azure Storage service. The wrapped key along with some additional encryption metadata is either stored as metadata (on a blob) or interpolated with the encrypted data (queue messages and table entities).

The decryption process works in the following way:

  1. It is assumed that the user has the key encryption key (KEK) either managed locally or in Azure Key Vault. The user does not need to know the specific key that was used for encryption. Instead, a key resolver, which resolves different key identifiers to keys, can be set up and used.
  2. The client SDK downloads the encrypted data along with any encryption material that is stored on the service.
  3. The wrapped content encryption key (CEK) is then unwrapped (decrypted) using the key encryption key (KEK). Here again, the Storage client does not have access to KEK. It just invokes the custom or Key Vault provider’s unwrapping algorithm.
  4. The content encryption key (CEK) is then used to decrypt the encrypted user data.

Encryption Mechanism

The client library uses AES to encrypt user data; specifically, it uses AES in Cipher Block Chaining (CBC) mode. Each service works somewhat differently, so we will discuss each of them here.

Blobs

In the current version, the client library supports encryption of entire blobs only; specifically, encryption is supported when users use the UploadFrom* methods or BlobWriteStream. For downloads, both full and range downloads are supported.

During encryption, the client library will generate a random Initialization Vector (IV) of 16 bytes along with a random content encryption key (CEK) of 32 bytes and do envelope encryption of the blob data using this information. The wrapped CEK and some additional encryption metadata are then stored as blob metadata along with the encrypted blob on the service. Important: if you are editing or uploading your own metadata for the blob, you must make sure that this metadata is preserved. If you upload new metadata without it, the wrapped CEK, IV and other metadata will be lost and the blob content will never be retrievable again.

Downloading an encrypted blob in this case involves getting the entire blob content using the DownloadTo*/BlobReadStream convenience methods. The wrapped CEK is unwrapped and used along with the IV (stored as blob metadata in this case) to return the decrypted data to the users.

Downloading an arbitrary range (DownloadRange* methods) in the encrypted blob involves adjusting the range provided by users in order to get a small amount of additional data that can be used to successfully decrypt the requested range.

All blob types (Block blobs and page blobs) can be encrypted/decrypted using this scheme.

Queues

Since queue messages can be of any format, the client library defines a custom format that includes the Initialization Vector (IV) and the encrypted content encryption key (CEK) in the message text.

During encryption, the client library will generate a random IV of 16 bytes along with a random CEK of 32 bytes and do envelope encryption of the queue message text using this information. The wrapped CEK and some additional encryption metadata are then added to the encrypted queue message. This modified message (shown below) is stored on the service.

<MessageText>{"EncryptedMessageContents":"6kOu8Rq1C3+M1QO4alKLmWthWXSmHV3mEfxBAgP9QGTU++MKn2uPq3t2UjF1DO6w","EncryptionData":{…}}</MessageText>

During decryption, the wrapped key is extracted from the queue message and unwrapped. The IV is also extracted from the queue message and used along with the unwrapped key to decrypt the queue message data. Note that the encryption metadata is small (under 500 bytes), so while it does count toward the 64KB limit for a queue message, the impact should be manageable.
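
A hedged sketch of the queue pattern (reusing the key and policy shapes described above; queue is assumed to be an existing CloudQueue and key an IKey implementation such as RsaKey):

QueueRequestOptions queueOptions = new QueueRequestOptions
{
    EncryptionPolicy = new QueueEncryptionPolicy(key, null)
};
// The message text is encrypted client-side and stored in the modified format shown above.
queue.AddMessage(new CloudQueueMessage("secret payload"), null, null, queueOptions, null);
// Retrieval unwraps the CEK and decrypts the message transparently.
CloudQueueMessage message = queue.GetMessage(null, queueOptions, null);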

Tables

In the current version, the client SDK supports encryption of entity properties for Insert and Replace operations. Merge is not currently supported because of the following limitation: since a subset of properties may have been encrypted previously using a different key, simply merging the new properties and updating the metadata would result in data loss. Supporting Merge would require either extra service calls to read the pre-existing entity from the service or using a new key per property, neither of which is suitable for performance reasons.

Table data encryption works as follows -

  1. Users specify the properties that should be encrypted.
  2. The client library will generate a random Initialization Vector (IV) of 16 bytes along with a random content encryption key (CEK) of 32 bytes for every entity and do envelope encryption on the individual properties that should be encrypted by deriving a new IV per property.
  3. The wrapped CEK and some additional encryption metadata are then stored as 2 additional reserved properties. The first reserved property (_ClientEncryptionMetadata1) is a string property that holds the information about the IV, version, wrapped key, etc., and the other reserved property (_ClientEncryptionMetadata2) is a binary property that holds the information about which properties are encrypted.
  4. Due to these additional reserved properties required for encryption, users can now only have 250 custom properties instead of 252, and the overall size of the entity data allowed is less than 1 MB.

Only string properties can be encrypted. If other types of properties have to be encrypted, users have to convert them to strings.

For tables, in addition to the encryption policy, users have to specify the properties that should be encrypted. This can be done by either specifying an [EncryptProperty] attribute (for POCO entities that derive from TableEntity) or an encryption resolver in request options. The encryption resolver is a delegate that takes in the partition key, row key, and a property name and returns a Boolean that indicates whether that property should be encrypted. During encryption, the client library will use this information to decide whether a property should be encrypted while writing to the wire. It also lets users apply their own logic, for example: if the partition key is X, encrypt property A; otherwise, encrypt properties A and B. Note that it is not necessary to provide this information while reading or querying entities.

Batch Operations

In batch operations, the same KEK will be used across all the rows in that batch operation since the client library only allows one options object (and hence one policy/KEK) per batch operation. However, the client library internally will generate a new random IV and random CEK per row in the batch. Users can also choose to encrypt different properties for every operation in the batch by defining this behavior in the EncryptionResolver.
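As a rough sketch, the snippet below inserts two entities in one batch under a single policy; the names policy, currentTable, entity1, and entity2 are assumptions reused from the table examples later in this post.

// Entities in a batch must share the same partition key.
// All rows share the same policy/KEK, but each row gets its own random CEK and IV.
TableRequestOptions batchOptions = new TableRequestOptions()
{
    EncryptionPolicy = policy,
    EncryptionResolver = (pk, rk, propName) => propName == "foo"
};

TableBatchOperation batch = new TableBatchOperation();
batch.Insert(entity1);
batch.Insert(entity2);
currentTable.ExecuteBatch(batch, batchOptions, null);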

Queries

When users wish to perform query operations, they will have to specify a key resolver that is able to resolve all the keys in the result set. If an entity contained in the query result cannot be resolved to a provider, the client library will throw an error. For any query that does server side projections, the client library will add the special encryption metadata properties (_ClientEncryptionMetadata1 and _ClientEncryptionMetadata2) by default to the selected columns.
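The sketch below illustrates the query path. LocalKeyResolver is a hypothetical helper written against the IKeyResolver interface from Microsoft.Azure.KeyVault.Core; it simply hands back a single locally managed key when the stored key identifier matches. The key and currentTable objects are the ones from the table examples later in this post.

// Hypothetical resolver over a single locally managed key.
public class LocalKeyResolver : IKeyResolver
{
    private readonly IKey key;
    public LocalKeyResolver(IKey key) { this.key = key; }

    public Task<IKey> ResolveKeyAsync(string kid, CancellationToken token)
    {
        // Return the key when the identifier matches; unknown identifiers resolve
        // to null, which makes the client library throw during decryption.
        return Task.FromResult(kid == this.key.Kid ? this.key : null);
    }
}

// For queries, the policy only needs the resolver; no per-property resolver is required on reads.
TableRequestOptions queryOptions = new TableRequestOptions()
{
    EncryptionPolicy = new TableEncryptionPolicy(null, new LocalKeyResolver(key))
};

foreach (DynamicTableEntity entity in currentTable.ExecuteQuery(new TableQuery(), queryOptions, null))
{
    // Encrypted properties have already been decrypted by the time the entity is returned.
}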

Azure Key Vault

Azure Key Vault—currently in Preview—helps safeguard cryptographic keys and secrets used by cloud applications and services. By using Azure Key Vault, users can encrypt keys and secrets (such as authentication keys, storage account keys, data encryption keys, .PFX files, and passwords) by using keys that are protected by hardware security modules (HSMs). More information about Key Vault and Getting Started documents can be found here.

The Storage client library uses the Key Vault core library in order to provide a common framework across Azure for managing keys. Users also get the additional benefit of using the Key Vault extensions library that provides a lot of useful functionality around simple and seamless Symmetric/RSA local and cloud key providers along with aggregation and caching.

Interface and Dependencies

There are three Key Vault packages:

  1. Microsoft.Azure.KeyVault.Core: This has IKey and IKeyResolver. It is a very small package and has no dependencies. The Storage Client Desktop and Phone libraries define this as a dependency.
  2. Microsoft.Azure.KeyVault: This is the Key Vault REST client.
  3. Microsoft.Azure.KeyVault.Extensions: This is extension code that includes implementations of cryptographic algorithms along with an RsaKey and a SymmetricKey. This depends on Core and KeyVault and provides functionality to define an aggregate resolver (when users want to use multiple key providers) and a caching key resolver. Although the Storage client library does not directly depend on this, if users wish to use Azure Key Vault to store their keys or just use the Key Vault extensions to consume the local and cloud crypto providers, they will need this package.

Key Vault is designed for high value master keys, and throttling limits per Vault are designed with this in mind. When doing client-side encryption with Key Vault, the preferred model is to use symmetric master keys stored as Secrets in Key Vault and cached locally. Users have to do the following –

  1. Create a secret offline and upload it to Key Vault.
  2. Use the secret’s base identifier as a parameter to resolve the current version of the secret for encryption and cache this information locally (using CachingKeyResolver takes care of this; users are not expected to implement their own caching logic).
  3. Use the caching resolver as an input while creating the encryption policy, as sketched below.
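
The sketch below strings these steps together. It is only an illustration: the vault URL and secret name are placeholders, GetAccessToken stands in for your Azure AD authentication callback, and the KeyVaultKeyResolver and CachingKeyResolver types (and their constructor arguments) come from the Key Vault extensions package, so verify them against the package version you are using.

// Resolves Key Vault secret identifiers to IKey instances (Key Vault extensions; names assumed).
KeyVaultKeyResolver cloudResolver = new KeyVaultKeyResolver(GetAccessToken);

// Cache resolved keys locally so the secret is fetched from Key Vault only once.
CachingKeyResolver cachingResolver = new CachingKeyResolver(2 /* capacity, assumed */, cloudResolver);

// Resolve the current version of the secret (the symmetric master key uploaded earlier) for encryption.
IKey masterKey = cachingResolver.ResolveKeyAsync(
    "https://myvault.vault.azure.net/secrets/storage-master-key", CancellationToken.None)
    .GetAwaiter().GetResult();

// Hand the key (for encryption) and the caching resolver (for decryption) to the policy.
BlobEncryptionPolicy policy = new BlobEncryptionPolicy(masterKey, cachingResolver);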

More information regarding Key Vault usage can be found in the code samples here.

Note

Encryption support is available only on Windows Desktop and Windows Phone. Windows Runtime does not have support for encryption. Additionally, Key Vault extensions are not supported for Windows Phone yet, so if users want to use Storage client encryption on Windows Phone, they will have to implement their own key providers. Also, due to a limitation in the Windows Phone .NET platform, page blob encryption is currently not supported on Windows Phone.

As noted in the announcement blog, please be aware that –

  • This is a preview! It should not be used for production data.
  • It is easy to corrupt data on the blob service or make it unreadable – if blob update operations like WritePages/ClearPages or PutBlock are performed after an encrypted blob has been written to the service, they will corrupt the encrypted blob and make it unreadable. For encryption, only full blob upload methods and range/full blob download methods should be used.
  • For tables, a similar constraint exists – Do not update encrypted properties without updating the encryption metadata.
  • Also, if SetMetadata is done on an encrypted blob, it can wipe out all encryption-related metadata required for decryption, since setting metadata is not additive. The same applies to specifying metadata while creating a snapshot of an encrypted blob.

Client API / Interface

While creating an EncryptionPolicy object, users can provide only a Key (implementing IKey) or a resolver (implementing IKeyResolver) or both. IKey is the basic key type that is identified using a key identifier and provides the logic for wrapping/unwrapping. IKeyResolver is used to resolve a key during the decryption process. It defines a ResolveKey method that returns an IKey given a key identifier. This is used to provide users the ability to choose between multiple keys that are managed in multiple locations.

  • For encryption, the key is always used and the absence of a key will result in an error.
  • For decryption,
    • The key resolver is invoked if specified to get the key. If the resolver is specified but does not have a mapping for the key identifier, an error is thrown.
    • If a resolver is not specified but a key is specified, the key is used if its key identifier matches what is stored on the service.

GettingStartedSamples in the Storage client’s Github repo will demonstrate a more detailed end-to-end scenario for blobs, queues and tables along with Key Vault integration.

Blobs

Users will create a BlobEncryptionPolicy object and set it in the request options (per API or at a client level by using DefaultRequestOptions). Everything else will be handled by the client library internally.

// Create the IKey used for encryption.
RsaKey key = new RsaKey("private:key1"/* key identifier */);
 
// Create the encryption policy to be used for upload and download.
BlobEncryptionPolicy policy = new BlobEncryptionPolicy(key, null);
 
// Set the encryption policy on the request options.
BlobRequestOptions options = new BlobRequestOptions() { EncryptionPolicy = policy };
 
// Upload the encrypted contents to the blob.
blob.UploadFromStream(stream, size, null, options, null);
 
// Download and decrypt the encrypted contents from the blob.
MemoryStream outputStream = new MemoryStream();
blob.DownloadToStream(outputStream, null, options, null);

Queues

Users will create a QueueEncryptionPolicy object and set it in the request options (per API or at a client level by using DefaultRequestOptions). Everything else will be handled by the client library internally.

// Create the IKey used for encryption.
RsaKey key = new RsaKey("private:key1"/* key identifier */);
 
// Create the encryption policy to be used for upload and download.
QueueEncryptionPolicy policy = new QueueEncryptionPolicy(key, null);
 
// Add message
QueueRequestOptions options = new QueueRequestOptions() { EncryptionPolicy = policy };
queue.AddMessage(message, null, null, options, null);
 
// Retrieve message
CloudQueueMessage retrMessage = queue.GetMessage(null, options, null);

Tables

In addition to creating an encryption policy and setting it on request options, users will have to specify an EncryptionResolver in TableRequestOptions or set attributes on the entity.

Using Resolver
// Create the IKey used for encryption.
RsaKey key = new RsaKey("private:key1"/* key identifier */);
 
// Create the encryption policy to be used for upload and download.
TableEncryptionPolicy policy = new TableEncryptionPolicy(key, null);
 
TableRequestOptions options = new TableRequestOptions()
{
    EncryptionResolver = (pk, rk, propName) =>
    {
        if (propName == "foo")
        {
            return true;
        }
        return false;
    },
    EncryptionPolicy = policy
};
 
// Insert Entity
currentTable.Execute(TableOperation.Insert(ent), options, null);
 
// Retrieve Entity
// No need to specify an encryption resolver for retrieve
TableRequestOptions retrieveOptions = new TableRequestOptions() 
{
    EncryptionPolicy = policy
};
 
TableOperation operation = TableOperation.Retrieve(ent.PartitionKey, ent.RowKey);
TableResult result = currentTable.Execute(operation, retrieveOptions, null);
Using Attributes

As mentioned above, if the entity implements TableEntity, then the properties can be decorated with the [EncryptProperty] attribute instead of specifying the EncryptionResolver.

[EncryptProperty]
public string EncryptedProperty1 { get; set; }

Conclusion

We want your feedback on ease of use, security, or any other scenarios you would like to tell us about, so that we can use it to shape the final library.

Client-Side Encryption for Microsoft Azure Storage – Preview
Download the Azure Storage Client Library for .NET NuGet package 
Download the Azure Storage Client Library for .NET Source Code from GitHub
Download the Azure Key Vault NuGet Core, Client, and Extensions packages
Visit the KV Documentation here 

Veena Udayabhanu

Microsoft Azure Storage Team


Getting started with Azure Storage on Xamarin

Xamarin allows developers to use a shared C# codebase to create iOS, Android, and Windows Store apps with native user interfaces.

This tutorial shows you how to use Azure Storage Blobs with a Xamarin.Android application. If you want to learn about Azure Storage before diving into the code, see Next Steps at the end of this document.

Create an Azure Storage account

To use Azure storage, you'll need a storage account. You can create a storage account by following these steps. (You can also create a storage account by using the Azure service management client library or the service management REST API.)

1. Log into the Azure Management Portal.

2. At the bottom of the navigation pane, click NEW.

3. Click DATA SERVICES, then STORAGE, and then click QUICK CREATE.

4. In URL, type a subdomain name to use in the URI for the storage account. The entry can contain from 3-24 lowercase letters and numbers. This value becomes the host name within the URI that is used to address Blob, Queue, or Table resources for the subscription.

5. Choose a Region/Affinity Group in which to locate the storage. If you will be using storage from your Azure application, select the same region where you will deploy your application.

6. Optionally, you can select the type of replication you desire for your account. Geo-redundant replication is the default and provides maximum durability. For more details on replication options, see Azure Storage Redundancy Options and the Azure Storage Team Blog.

7. Click CREATE STORAGE ACCOUNT.

Generate a Shared Access Signature

Unlike with other Azure Storage client libraries, you should not authenticate access to an Azure Storage account by using account keys; doing so would distribute your account credentials to users that download your app. Instead, we encourage the use of Shared Access Signatures (SAS), which won’t expose your account credentials.

In this getting started you will be using Azure PowerShell to generate a SAS token. Then you will create a Xamarin app that will use the generated SAS.

First, you’ll need to install Azure PowerShell. Check out this guide to learn how to install Azure PowerShell.

Next, open up Azure PowerShell and run the following commands. Remember to replace “ACCOUNT_NAME” and “ACCOUNT_KEY==” with your actual credentials. Replace “CONTAINER_NAME” with a name of your choosing.

PS C:\> $context = New-AzureStorageContext -StorageAccountName "ACCOUNT_NAME" -StorageAccountKey "ACCOUNT_KEY=="
PS C:\> New-AzureStorageContainer CONTAINER_NAME -Permission Off -Context $context
PS C:\> $now = Get-Date 
PS C:\> New-AzureStorageContainerSASToken -Name CONTAINER_NAME -Permission rwdl -ExpiryTime $now.AddDays(1.0) -Context $context -FullUri

The output of the shared access signature URI for the new container should be similar to the following:

https://storageaccount.blob.core.windows.net/sascontainer?sv=2012-02-12&se=2013-04-13T00%3A12%3A08Z&sr=c&sp=wl&sig=t%2BbzU9%2B7ry4okULN9S0wst%2F8MCUhTjrHyV9rDNLSe8g%3D

Once you run the code, the shared access signature that you created on the container will be valid for the next day. The signature grants full access (i.e. read, write, delete, list) to blobs within the container.

If you’d like to learn an alternate method for generating a SAS, please check out our SAS tutorial for .NET.
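For reference, the same kind of SAS can also be produced server side with a few lines of .NET; the sketch below is a rough equivalent of the PowerShell example above, with the account name, key, and container name as placeholders.

// Build a container SAS that mirrors the PowerShell example: read, write,
// delete, and list permissions, valid for one day.
CloudStorageAccount account = new CloudStorageAccount(
    new StorageCredentials("ACCOUNT_NAME", "ACCOUNT_KEY=="), useHttps: true);
CloudBlobContainer container = account.CreateCloudBlobClient().GetContainerReference("CONTAINER_NAME");
container.CreateIfNotExists();

string sasToken = container.GetSharedAccessSignature(new SharedAccessBlobPolicy()
{
    Permissions = SharedAccessBlobPermissions.Read | SharedAccessBlobPermissions.Write |
                  SharedAccessBlobPermissions.Delete | SharedAccessBlobPermissions.List,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddDays(1)
});

// Full URI to hand to the Xamarin app.
string sasUri = container.Uri + sasToken;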

Create a new Xamarin Application

For this tutorial, we'll be creating our Xamarin application in Visual Studio.

  1. Download and install Visual Studio
  2. Download and install Xamarin
  3. Open Visual Studio
  4. Select File > New > Project > Android > Blank App(Android)
  5. OK
  6. Right-click your project > Manage NuGet Packages > Search for Azure Storage and install Azure Storage 4.4.0-preview.

You should now have an app that allows you to click a button and increment a counter.

Working with Containers

The following code will perform a series of container operations with the SAS URI that you generated.

First add the following using statements:

using System.IO;
using System.Text;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Blob;
 

Next, add a line for your SAS token. Replace the “SAS_URI” string with the SAS URI that you generated in Azure PowerShell. Also, add a line for a call to the UseContainerSAS method that we’ll create. Note that the async keyword has been added before the delegate.

public class MainActivity : Activity
{
    int count = 1;
    string sas = "SAS_URI";

    protected override void OnCreate(Bundle bundle)
    {
        base.OnCreate(bundle);

        // Set our view from the "main" layout resource
        SetContentView(Resource.Layout.Main);

        // Get our button from the layout resource, and attach an event to it
        Button button = FindViewById<Button>(Resource.Id.MyButton);

        button.Click += async delegate {
            button.Text = string.Format("{0} clicks!", count++);
            await UseContainerSAS(sas);
        };
    }

Add a new method, UseContainerSAS, under the OnCreate method.

static async Task UseContainerSAS(string sas)
{
    //Try performing container operations with the SAS provided.

    //Return a reference to the container using the SAS URI.
    CloudBlobContainer container = new CloudBlobContainer(new Uri(sas));
    string date = DateTime.Now.ToString();
    try
    {
        //Write operation: write a new blob to the container.
        CloudBlockBlob blob = container.GetBlockBlobReference("sasblob_" + date + ".txt");

        string blobContent = "This blob was created with a shared access signature granting write permissions to the container. ";
        MemoryStream msWrite = new MemoryStream(Encoding.UTF8.GetBytes(blobContent));
        msWrite.Position = 0;
        using (msWrite)
        {
            await blob.UploadFromStreamAsync(msWrite);
        }
        Console.WriteLine("Write operation succeeded for SAS " + sas);
        Console.WriteLine();
    }
    catch (Exception e)
    {
        Console.WriteLine("Write operation failed for SAS " + sas);
        Console.WriteLine("Additional error information: " + e.Message);
        Console.WriteLine();
    }
    try
    {
        //Read operation: Get a reference to one of the blobs in the container and read it.
        CloudBlockBlob blob = container.GetBlockBlobReference("sasblob_" + date + ".txt");
        string data = await blob.DownloadTextAsync();

        Console.WriteLine("Read operation succeeded for SAS " + sas);
        Console.WriteLine("Blob contents: " + data);
    }
    catch (Exception e)
    {
        Console.WriteLine("Additional error information: " + e.Message);
        Console.WriteLine("Read operation failed for SAS " + sas);
        Console.WriteLine();
    }
    Console.WriteLine();
    try
    {
        //Delete operation: Delete a blob in the container.
        CloudBlockBlob blob = container.GetBlockBlobReference("sasblob_" + date + ".txt");
        await blob.DeleteAsync();

        Console.WriteLine("Delete operation succeeded for SAS " + sas);
        Console.WriteLine();
    }
    catch (Exception e)
    {
        Console.WriteLine("Delete operation failed for SAS " + sas);
        Console.WriteLine("Additional error information: " + e.Message);
        Console.WriteLine();
    }
}

Run Application

You can now run this application in an emulator or Android device.

Although this getting started focused on Android, you can use the “UseContainerSAS” code in your iOS or Windows Store applications as well. Xamarin also allows developers to create Windows Phone apps; however, our library does not yet support Windows Phone.

Next Steps

In this getting started, you learned how to use Azure Blob Storage and SAS with a Xamarin application. As a further exercise, a similar pattern could be applied to generate a SAS token for an Azure Table or Azure Queue to perform Table and Queue operations.
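For instance, assuming you have an authenticated CloudStorageAccount named account on the server side (as in the container SAS sketch earlier), a table SAS could be generated along these lines; the table name is a placeholder.

// Generate a SAS for an Azure Table that allows querying and adding entities for one day.
CloudTable table = account.CreateCloudTableClient().GetTableReference("TABLE_NAME");
table.CreateIfNotExists();

string tableSas = table.GetSharedAccessSignature(new SharedAccessTablePolicy()
{
    Permissions = SharedAccessTablePermissions.Query | SharedAccessTablePermissions.Add,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddDays(1)
});

// In the Xamarin app, construct the table reference from the SAS credentials only.
CloudTable sasTable = new CloudTable(table.Uri, new StorageCredentials(tableSas));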

Learn more about Blobs, Tables, and Queues by checking out the following links:

Introduction to Microsoft Azure Storage
How to use Blob Storage from .NET
How to use Table Storage from .NET
How to use Queue Storage from .NET

(Cross-Post) Build 2015: Azure Storage Announcements!

It's time for BUILD 2015, and the Azure Storage team has several exciting announcements to make. We hope that these new features will enable you to write more powerful applications with Azure Storage. This blog post provides an overview of new GA announcements, updates on preview programs, and insight into everything else we are working on.

General Availability Announcements

Premium Storage General Availability

Azure Premium Storage recently became generally available. Premium Storage delivers high-performance, low-latency disk support for I/O intensive workloads running on Azure Virtual Machines by storing your data on SSDs (Solid State Drives). With Premium Storage, your applications can provision up to 32 TB of storage per VM and achieve 64,000 IOPS (input/output operations per second) per VM with extremely low latencies for read operations.

To learn more about Premium Storage, check out Introduction to Premium Storage.

C++ 1.0.0 General Availability

We’ve just released the GA version (v1.0.0) of the Microsoft Azure Storage Client Library for C++!

The Azure Storage Client Library for C++ provides a comprehensive C++ API for working with Azure Storage including the ability to:

  • Create, read, delete, and list blob containers, tables, and queues.
  • Create, read, delete, list, and copy blobs plus read and write blob ranges.
  • Insert, delete, replace, merge, and query entities in an Azure table.
  • Enqueue and dequeue messages in an Azure queue.
  • Lazily list containers, blobs, tables, and queues, and lazily query entities (new in version 1.0.0)

Learn more about this new release by visiting the Microsoft Azure Storage Client Library for C++ v1.0.0 (General Availability) blog.

Preview Program Announcements

Technical Support Now Available for Azure Files

We are pleased to announce that eligible customers with technical support subscriptions can now leverage our team of Technical Support Professionals for assistance with Azure Files.

New to Azure Files? Look at Getting Started with Azure Files to learn more.

Client-Side Encryption Preview

A frequent request we’ve had from our customers is to provide an easy way to encrypt their data before sending it to Azure Storage. We’ve listened, and we’re excited to announce the public preview of client-side encryption in the Azure Storage client library for .NET. You can use client-side encryption to encrypt blob data, table data (you select the properties to encrypt), and queue messages. Client-side encryption also integrates with Azure Key Vault and allows for integrating with other key management systems if you prefer.

Client-side encryption uses envelope encryption methods to maintain great performance. In most cases, you can take advantage of client-side encryption by adding just a few lines of code to your application. Unlike server-side encryption, this new feature gives users complete control over the keys used for encryption. Azure Storage never sees your keys, so it can’t decrypt your data.

Start using client-side encryption by visiting our client-side encryption blog post.

Xamarin Client Library Preview

We are pleased to announce an Azure Storage client library preview for Xamarin!

Xamarin allows developers to use a shared C# codebase to create iOS, Android, and Windows Store apps with native user interfaces.

Start building Xamarin apps that use Azure Storage now by following our Getting started with Azure Storage on Xamarin tutorial.

Azure Resource Manager: A new and powerful way to manage your Azure resources

Going forward, Azure resource provisioning will be based on the new Azure Resource Manager (ARM), which provides a number of new features like templates, RBAC, resource groups, and others. ARM exposes resources through “Resource Providers”, where each resource type is managed by its own resource provider. We are pleased to announce the Storage Resource Provider (SRP) for ARM!

SRP allows you to manage your storage accounts (create/delete/update/read storage account, get/set keys, etc.), while inheriting the benefits of using the ARM provisioning stack.

Check out our documentation to learn more about the SRP REST API and SRP Cmdlets for Azure PowerShell.

For more information regarding the Azure Resource Manager, you can also check out the following blog post.

Near Term Roadmap

We also wanted to take this opportunity to share a few roadmap items that we are working on, including Append Blob (a new blob type optimized for fast append operations), a new iOS storage client library, and various SAS improvements. Also, we are extending our Azure Import/Export service offering to Japan and Australia.

New Blob type: Append Blob

Append Blob is a new blob type (alongside our existing Block and Page blobs) that is optimized for fast append operations, making it ideal for scenarios that add data to an existing blob without modifying the existing contents of that blob, such as logging and auditing. Visit Introducing Azure Storage Append Blob blog for more information. We plan to release Append Blob in summer 2015.

iOS Client Library Preview

We are working on the new Azure Storage client library for iOS. Customers can expect a public preview for block blobs in summer 2015. If you are interested in learning more or participating in a limited preview, see here for more information.

SAS Improvements

The Storage team has been working diligently to make improvements to Shared Access Signatures (SAS). Three key improvements are coming in summer 2015, all based on your feedback:

  • Storage Account SAS – one SAS token can now provide access to an entire account instead of a single object or container.
  • Protocol SAS – SAS tokens can now be restricted to HTTPS only. The protocol is enforced on the client side if users are using the storage client library, so that the SAS token is never sent over HTTP.
  • IP Restricted SAS – A SAS token can now specify a single IP or range. Requests originating from outside that address or range will fail.

With these improvements, SAS should meet the needs of developers and administrators in a wider array of scenarios, significantly reducing the need to use the account’s Shared Key.

Azure Import/Export

Azure Import/Export will be coming to Japan and Australia in summer 2015. Azure Import/Export is offered for all public Azure regions today. Once available, if you have storage accounts in Japan or Australia, you will be able to ship disks to a domestic address within Japan or Australia rather than shipping to another region. Import/Export now also supports up to 6 TB hard drives.

Learn more about Azure Import/Export here.

Finally, for anyone new to Azure Storage, please check out the new Azure Storage documentation page, which now includes 5-minute getting started videos for Storage, Premium Storage, and Files.

 

Thanks!

Azure Storage Team

Microsoft Azure Storage Release – Append Blob, New Azure File Service Features and Client Side Encryption General Availability

We are excited to announce new capabilities in the Azure Storage Service and updates to our Storage Client Libraries. We have a new blob type, Append Blob, as well as a number of new features for the Azure File Service. In detail, we are adding the following:

1. Append Blob with a new AppendBlock API

A new blob type, the append blob, is now available*. All writes to an append blob are added sequentially to the end of the blob, making it optimal for logging scenarios. Append blobs support an Append Block operation for adding blocks to the blob. Once a block is added with Append Block, it is immediately available to be read; no further commits are necessary. The block cannot be modified once it has been added.

Please read Getting Started with Blob Storage for more details.
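
As a quick illustration with the updated .NET client library, the sketch below appends log lines to an append blob; container is an existing CloudBlobContainer and the blob name is a placeholder.

// Get a reference to an append blob and create it (replacing any existing blob with that name).
CloudAppendBlob appendBlob = container.GetAppendBlobReference("application.log");
appendBlob.CreateOrReplace();

// Each append becomes an Append Block that is readable as soon as the call returns.
appendBlob.AppendText("Service started." + Environment.NewLine);
appendBlob.AppendText("First request handled." + Environment.NewLine);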

2. Azure File Service

A number of new features are available* for the Azure File Service (in preview – with technical support available), including server-side Copy File, file SAS, share size quotas, Get/Set Directory Metadata, and CORS support.

Check out our Azure Files Preview Update blog to learn more. Also, read the How to use Azure File storage with PowerShell and .NET getting started to learn how to use these new features.

If you’re not familiar with CORS or shared access signatures (SAS), you’ll find the Azure Storage CORS and SAS documentation helpful.

3. Client-Side Encryption

We are also announcing general availability for the .NET client-side encryption capability that has been in preview since April. In addition to enabling encryption of Blobs, Tables, and Queues, we also have support for Append Blobs. Please read Get Started with Client-Side Encryption for Microsoft Azure Storage for more details.

4. Azure Storage Client Library and Tooling Updates

We have also released new versions of our .NET, Java, C++, Node.js, and Android client libraries which provide support for the new 2015-02-21 storage service version. For tooling, we've released new versions of AzCopy. Check out Getting Started with the AzCopy Command-Line Utility to learn more. We've also released Storage updates to Azure PowerShell and Azure CLI.

 

We hope you will find these features useful. As always, please let us know if you have any further questions either via forum or comments on this post.

Thanks!

Azure Storage Team

 

* New features will be available in the following regions: Central US, East US, East US 2, North Central US, South Central US, West US, North Europe, West Europe, East Asia, Southeast Asia, Japan East, Japan West, East China, North China, Brazil South, Australia East, and Australia Southeast. Remaining regions will be available shortly.

 

Azure Files Preview Update

At Build 2015 we announced that technical support is now available for Azure Files customers with technical support subscriptions. We are pleased to announce several additional updates for the Azure Files service which have been made in response to customer feedback.* Please check them out below:

New REST API Features

Server Side Copy File

Copy File allows you to copy a blob or file to a destination file within the same Storage account or across different Storage accounts, all on the server side. Before this update, performing a copy operation with the REST API or SMB required you to download the file or blob and re-upload it to its destination.
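
The copy can be started from the REST API, AzCopy, or the updated client libraries. Below is a hedged .NET sketch that kicks off a server-side copy between two files; account is an authenticated CloudStorageAccount, and the share and file names are placeholders.

// Server-side copy of one Azure File to another; no data flows through the client.
CloudFileShare share = account.CreateCloudFileClient().GetShareReference("myshare");
CloudFileDirectory root = share.GetRootDirectoryReference();

CloudFile sourceFile = root.GetFileReference("source.txt");
CloudFile destFile = root.GetFileReference("copy-of-source.txt");

// StartCopy returns a copy ID; progress can be checked later via destFile.CopyState.
string copyId = destFile.StartCopy(sourceFile);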

File SAS

You can now provide access to file shares and individual files by using SAS (shared access signatures) in REST API calls.
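
With the updated .NET client library this looks much like a blob SAS. The sketch below is an assumption-level example: share is a CloudFileShare reference (as in the copy sketch above), SharedAccessFilePolicy is the file-specific policy type, and the file name is a placeholder.

// Generate a share-level SAS that grants read and write access for one day.
string shareSas = share.GetSharedAccessSignature(new SharedAccessFilePolicy()
{
    Permissions = SharedAccessFilePermissions.Read | SharedAccessFilePermissions.Write,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddDays(1)
});

// Access a file in the share using only the SAS credentials.
CloudFile file = new CloudFile(new Uri(share.Uri + "/myfile.txt"), new StorageCredentials(shareSas));
string contents = file.DownloadText();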

Share Size Quota

Another new feature for Azure Files is the ability to set the “share size quota” via the REST API. This means that you can now set limits on the size of file shares. When the sum of the sizes of the files on the share exceeds the quota set on the share, you will not be able to increase the size of the files in the share.

Get/Set Directory Metadata

The new Get/Set Directory Metadata operation allows you to get/set all user-defined metadata for a specified directory.
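
A brief sketch with the .NET client library, assuming share is an existing CloudFileShare and the directory name is a placeholder:

// Set user-defined metadata on a directory, then read it back.
CloudFileDirectory dir = share.GetRootDirectoryReference().GetDirectoryReference("logs");
dir.Metadata["owner"] = "ops-team";
dir.SetMetadata();

dir.FetchAttributes();
Console.WriteLine(dir.Metadata["owner"]);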

CORS Support

Cross-Origin Resource Sharing (CORS) has been supported in the Blob, Table, and Queue services since November 2013. We are pleased to announce that CORS will now be supported in Files.

Learn more about these new features by checking out the Azure Files REST API documentation.

Library and Tooling Updates

The client libraries that support these new features are .NET (desktop), Node.js, Java, Android, ASP.NET 5, Windows Phone, and Windows Runtime. Azure PowerShell and Azure CLI also support all of these features – except for Get/Set Directory Metadata. In addition, the newest version of AzCopy now uses the server-side Copy File feature.

If you’d like to learn more about using the client libraries and tooling with Azure Files, a great way to get started is to check out our tutorial for using Azure Files with PowerShell and .NET.

As always, if you have any feature requests please let us know by submitting your ideas to Azure Storage Feedback.

Thanks!

Azure Storage Team

*New updates will be available in the following regions: Central US, East US, East US 2, North Central US, South Central US, West US, North Europe, West Europe, East Asia, Southeast Asia, Japan East, Japan West, East China, North China, Brazil South, Australia East, and Australia Southeast. Remaining regions will be available shortly.

AzCopy - Introducing Append Blob, File Storage Asynchronous Copying, File Storage Share SAS, Table Storage data exporting to CSV and more

We are pleased to announce that AzCopy 3.2.0 and AzCopy 4.2.0-preview are now released! These two releases introduce the following new features:

Append Blob

Append Blob is a new Microsoft Azure Storage blob type which is optimized for fast append operations, making it ideal for scenarios where data must be added to an existing blob without modifying the existing contents of that blob (e.g. logging, auditing). For more details, please go to Introducing Azure Storage Append Blob.

Both AzCopy 3.2.0 and 4.2.0-preview will include the support for Append Blob in the following scenarios:

  • Download Append Blob, same as downloading a block or page blob
AzCopy /Source:https://myaccount.blob.core.windows.net/mycontainer /Dest:C:\myfolder /SourceKey:key /Pattern:appendblob1.txt
  • Upload Append Blob, add option /BlobType:Append to specify the blob type
AzCopy /Source:C:\myfolder /Dest:https://myaccount.blob.core.windows.net/mycontainer /DestKey:key /Pattern:appendblob1.txt /BlobType:Append
  • Copy Append Blob, there is no need to specify the /BlobType
AzCopy /Source:https://myaccount.blob.core.windows.net/mycontainer1 /Dest:https://myaccount.blob.core.windows.net/mycontainer2 /SourceKey:key /DestKey:key /Pattern:appendblob1.txt

Note that when uploading or copying append blobs whose names already exist in the destination, AzCopy will prompt you to either overwrite or skip them. Trying to overwrite a blob with the same name but a mismatched blob type will fail. For example, AzCopy will report a failure when overwriting a Block Blob with an Append Blob.

AzCopy does not include support for appending data to an existing append blob, and if you are using an older version of AzCopy, the download and copy operations will fail with the following error message when the source container includes append blobs.

Error parsing the source location “[the source URL specified in the command line]”: The remote server returned an error: (409) Conflict. The type of a blob in the container is unrecognized by this version.

 

File Storage Asynchronous Copy (4.2.0 only)

Azure Storage File Service adds several new features with storage service REST version 2015-02-21; please find more details at Azure Files Preview Update.

In the previous version, AzCopy 4.1.0, we introduced synchronous copy for Blob and File; AzCopy 4.2.0-preview now includes support for the following File Storage asynchronous copy scenarios.

Unlike synchronous copy, which simulates the copy by downloading the blobs from the source storage endpoint to local memory and then uploading them to the destination storage endpoint, File Storage asynchronous copy is a server-side copy that runs in the background. You can get the copy status programmatically; please find more details at Server Side Copy File.

  • Asynchronous copying from File Storage to File Storage
AzCopy /Source:https://myaccount1.file.core.windows.net/myfileshare1/ /Dest:https://myaccount2.file.core.windows.net/myfileshare2/ /SourceKey:key1 /DestKey:key2 /S
  • Asynchronous copying from File Storage to Block Blob
AzCopy /Source:https://myaccount1.file.core.windows.net/myfileshare/ /Dest:https://myaccount2.blob.core.windows.net/mycontainer/ /SourceKey:key1 /DestKey:key2 /S
  • Asynchronous copying from Block/Page Blob Storage to File Storage
AzCopy /Source:https://myaccount1.blob.core.windows.net/mycontainer/ /Dest:https://myaccount2.file.core.windows.net/myfileshare/ /SourceKey:key1 /DestKey:key2 /S

Note that asynchronous copying from File Storage to Page Blob is not supported.

 

File Storage Share SAS (Preview version 4.2.0 only)

Besides the File asynchronous copy, another new File Storage feature, ‘File Share SAS’, is supported in AzCopy 4.2.0-preview as well.

You can now use the options /SourceSAS and /DestSAS to authenticate the file transfer request.

AzCopy /Source:https://myaccount1.file.core.windows.net/myfileshare1/ /Dest:https://myaccount2.file.core.windows.net/myfileshare2/ /SourceSAS:SAS1 /DestSAS:SAS2 /S

For more details about File Storage share SAS, please visit Azure Storage File Preview Update.

 

Export Table Storage entities to CSV (Preview version 4.2.0 only)

AzCopy has allowed end users to export Table entities to local files in JSON format since the 4.0.0 preview version; now you can specify the new option /PayloadFormat:<JSON | CSV> to export data to CSV files. Without this new option, AzCopy will export Table entities to JSON files.

AzCopy /Source:https://myaccount.table.core.windows.net/myTable/ /Dest:C:\myfolder\ /SourceKey:key /PayloadFormat:CSV

Besides the data files with the .csv extension that will be found in the location specified by the parameter /Dest, AzCopy will generate a schema file with the extension .schema.csv for each data file.

Note that AzCopy does not include support for “importing” CSV data files; you can use the JSON format to export/import as you did in previous versions of AzCopy.

 

Specify the manifest file name when exporting Table entities (Preview version 4.2.0 only)

AzCopy requires end users to specify the option /Manifest when importing table entities. In previous versions, the manifest file name was decided by AzCopy during the export and looks like “myaccount_mytable_timestamp.manifest”, so users had to find the name in the destination folder before writing the import command line.

Now you can specify the manifest file name during the export with the option /Manifest, which should bring more flexibility and convenience to your import scenarios.

AzCopy /Source:https://myaccount.table.core.windows.net/myTable/ /Dest:C:\myfolder\ /SourceKey:key /Manifest:abc.manifest

 

Enable FIPS compliant MD5 algorithm

By default, AzCopy uses the .NET MD5 implementation to calculate the MD5 hash when copying objects; we now include support for a FIPS compliant MD5 setting to fulfill some scenarios’ security requirements.

You can create an app.config file named “AzCopy.exe.config” with the property “AzureStorageUseV1MD5” and put it in the same folder as AzCopy.exe.

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="AzureStorageUseV1MD5" value="false"/>
  </appSettings>
</configuration>

For the property “AzureStorageUseV1MD5”:

  • true – the default value; AzCopy will use the .NET MD5 implementation.
  • false – AzCopy will use the FIPS compliant MD5 algorithm.

Note that FIPS compliant algorithms are disabled by default on your Windows machine. You can type secpol.msc in your Run window and check this setting under “Security Settings -> Local Policies -> Security Options -> System cryptography: Use FIPS compliant algorithms for encryption, hashing and signing”.

 

Reference

Azure Storage File Preview Update

Microsoft Azure Storage Release – Append Blob, New Azure File Service Features and Client Side Encryption General Availability

Introducing Azure Storage Append Blob

Enable FISMA MD5 setting via Microsoft Azure Storage Client Library for .NET

Getting Started with the AzCopy Command-Line Utility

As always, we look forward to your feedback.

Microsoft Azure Storage Team
