Channel: Microsoft Azure Storage Team Blog

Issue in Azure Storage Client Library 5.0.0 and 5.0.1 preview in AppendBlob functionality


An issue was recently discovered in the Azure Storage Client Library 5.0.0 for .NET and in the Azure Storage Client Library 5.0.1 preview for .NET. It impacts the Windows desktop and Windows Phone targets. The details of the issue are as follows:

When CloudAppendBlob.AppendTextAsync(), the method that asynchronously appends a string of text to an append blob, is invoked with only the content parameter specified, or with only the content and CancellationToken parameters specified, the call overwrites the blob content instead of appending to it. Other synchronous and asynchronous invocations that append a string of text to an append blob (CloudAppendBlob.AppendText() and the remaining CloudAppendBlob.AppendTextAsync() overloads) do not manifest the issue.
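To make the impact concrete, here is a minimal Python sketch, with no Azure SDK involved; InMemoryAppendBlob is a hypothetical stand-in used only to contrast the intended append semantics with the overwrite behavior of the affected overloads:

```python
class InMemoryAppendBlob:
    """Hypothetical stand-in for an append blob, used only to
    illustrate the difference between the two write modes."""

    def __init__(self):
        self.content = ""

    def append_text(self, text):
        # Correct behavior: new text is added after the existing content.
        self.content += text

    def overwrite_text(self, text):
        # Behavior of the affected 5.0.0/5.0.1 overloads: the existing
        # content is replaced instead of extended.
        self.content = text


blob = InMemoryAppendBlob()
blob.append_text("line1\n")
blob.append_text("line2\n")
print(blob.content)        # both lines are present

blob.overwrite_text("line3\n")
print(blob.content)        # only line3 survives; earlier lines are lost
```

With the hotfixed packages, every overload behaves like `append_text` above.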

The Azure Storage team has released hotfixes for both versions; the hotfixed packages are versioned 5.0.2 and 5.0.3-preview respectively. If you installed either the Azure Storage Client Library 5.0.0 for .NET or the Azure Storage Client Library 5.0.1 preview for .NET, please make sure to update your references to the corresponding package. You can install these versions from any of the following:

  1. The Visual Studio NuGet Package Manager UI.
  2. The Package Manager console using the following command (the released version for instance): Install-Package WindowsAzure.Storage -Version 5.0.2
  3. The NuGet gallery web page that houses the package: here for the released version and here for the preview version.

Please note the following:

  1. The older versions will be unlisted in the Visual Studio NuGet Package Manager UI.
  2. If you attempt to launch the web page that contained the original package, you may encounter a 404 error.
  3. We recommend that you not install the older versions through the Package Manager console, so that you don’t run into the issue.

Thank you for your support of Azure Storage. We look forward to your continued feedback.

Microsoft Azure Storage Team


Introducing the Azure Storage Client Library for iOS (Public Preview)


We are excited to announce the public preview of the Azure Storage Client Library for iOS!

Having a client library for iOS is essential to providing a complete mobile story for developers. With this release, developers can now take advantage of Azure Storage on all major mobile platforms: Windows Phone, iOS, Android, and Xamarin.

Currently, this library supports iOS 9, iOS 8 and iOS 7 and can be used with both Objective-C and Swift. This library also supports the latest Azure Storage service version 2015-02-21.

With this being the first release, we want to make sure we’re taking advantage of the wealth of knowledge provided by the iOS developer community. For this reason, we’ll be releasing block blob support first with the goal being to solicit feedback plus better understand additional scenarios you would like to see supported.

Please check out How to use Blob Storage from iOS to get started. You can also download the sample app to quickly see the use of Azure Storage in an iOS application.

As always, if you have any feature requests please let us know by submitting your ideas to Azure Storage Feedback.

We’d also like to give a special thanks to all those who joined our preview program and contributed their ideas and suggestions.

Thanks!

Azure Storage Team

(Cross-Post) Introducing Azure Storage Data Movement Library Preview


Since AzCopy was first released, a large number of customers have requested programmatic access to AzCopy. We are pleased to announce a new open-source Azure Storage data movement library for .NET (DML for short). This library is based on the core data movement framework that powers AzCopy. The library is designed for high-performance, reliable, and easy Azure Storage data transfer operations, enabling scenarios such as:
•    Uploading, downloading and copying data between Microsoft Azure Blob and File Storage
•    Migrating data from other cloud providers such as AWS S3 to Azure Blob Storage
•    Backing up Azure Storage data

Here is a sample demonstrating how to upload a blob; you can find more samples on GitHub.

using System;
using System.Threading;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
// Include the New Azure Storage Data Movement Library
using Microsoft.WindowsAzure.Storage.DataMovement;
 
// Setup the storage context and prepare the object you need to upload
string storageConnectionString = "myStorageConnectionString";
CloudStorageAccount account = CloudStorageAccount.Parse(storageConnectionString);
CloudBlobClient blobClient = account.CreateCloudBlobClient();
CloudBlobContainer blobContainer = blobClient.GetContainerReference("mycontainer");
blobContainer.CreateIfNotExists();
string sourcePath = "path\\to\\test.txt";
CloudBlockBlob destBlob = blobContainer.GetBlockBlobReference("myblob");
 
// Use the interfaces from the new Azure Storage Data Movement Library to upload the blob
// Setup the number of the concurrent operations
TransferManager.Configurations.ParallelOperations = 64;
 
// Setup the transfer context and track the upload progress
TransferContext context = new TransferContext();
context.ProgressHandler = new Progress<TransferProgress>((progress) =>
{
    Console.WriteLine("Bytes uploaded: {0}", progress.BytesTransferred);
});
 
// Upload a local blob
var task = TransferManager.UploadAsync(
    sourcePath, destBlob, null, context, CancellationToken.None);
task.Wait();

The Azure Storage Data Movement Library has the same performance as AzCopy and exposes the core functionality of AzCopy. You can install the first preview of the library from NuGet or download the source code from GitHub. The initial version (0.1.0) of the library provides the following capabilities:
•    Support data transfer for the Azure Storage Blob abstraction
•    Support data transfer for the Azure Storage File abstraction
•    Download / Upload / Copy a single object
•    Control the number of concurrent operations
•    Synchronous and asynchronous copying
•    Define the suffix of the user agent
•    Set the content type
•    Set the access condition to conditionally copy objects, for example objects changed since a certain date
•    Validate content MD5
•    Download a specific blob snapshot
•    Track transfer progress: bytes transferred, number of successful/failed/skipped files
•    Recover (set/get transfer checkpoint)
•    Transfer error handling (transfer exception and error code)
•    Client-side logging
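As an aside on the content MD5 validation capability: Azure Storage carries the MD5 digest base64-encoded in the Content-MD5 header. Here is a small Python sketch of computing that value; it is illustrative only and is not the library's own code:

```python
import base64
import hashlib


def content_md5(data: bytes) -> str:
    """Base64-encoded MD5 digest, the encoding used by the
    Content-MD5 HTTP header that transfers are validated against."""
    return base64.b64encode(hashlib.md5(data).digest()).decode("ascii")


# The well-known Content-MD5 value for an empty body:
print(content_md5(b""))  # prints: 1B2M2Y8AsgTpgAmY7PhCfg==
```

The library computes the same digest over the transferred bytes and compares it with the value stored on the service.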

DML is an open source project, and we welcome contributions from the community. In particular, we are interested in extensions to our samples that help make them more robust. Together with the release of version 0.1.0, we have created the following samples; for more details, please visit the GitHub Readme.md.

•    Upload/Download/Copy an Azure Storage Blob
•    Migrate data from AWS S3 to Azure Blob Storage

Next Steps
We will continue to invest in both AzCopy and the Data Movement Library. In the next releases of the Data Movement Library, we will add support for more advanced features, including:
•    Download / Upload / Copy directory (Local file directory, blob virtual directory, File share directory)
•    Transfer directory in recursive mode or flat mode
•    Specify the file pattern when copying files and directories
•    Download Snapshots under directories

As always, we look forward to your feedback.

Microsoft Azure Storage Team


Client-Side Encryption in Java Client Library for Microsoft Azure Storage – Preview


We are excited to announce preview availability of the client-side encryption feature in the Azure Storage Java Client Library. This preview enables you to encrypt and decrypt your data inside client applications before uploading to and after downloading from Azure Storage. The feature is available for Blobs, Queues, and Tables. We also support integration with Azure Key Vault so you can store and manage your keys. We recently made client-side encryption generally available in the Storage .NET library, and now we are happy to provide the same capability in the Java client library as a preview.

Why use client-side encryption?

Client-side encryption is helpful in scenarios where customers want to encrypt data at the source, such as encrypting surveillance data from cameras before uploading it to Storage. In this scenario, the user controls the keys, and the Azure Storage service never sees the keys used for cryptographic operations. Since the library is open source and available on GitHub, you can additionally inspect exactly how it encrypts your data to ensure that it meets your standards.

Benefits of the Java Client Library

We wanted to provide a library that would accomplish the following:

  • Implement Security Best Practices.  This library has been reviewed for its security so that you can use it with confidence. Encrypted data is not decipherable even if the storage account keys are compromised. Additionally, we’ve made it simple and straightforward for users to rotate keys themselves; multiple keys are supported during the key rotation timeframe.
  • Interoperability across languages and platforms.  Many users use more than one of our client libraries. Given our goal to use the same technical design across implementations, data encrypted using the .NET library can be decrypted using the Java library and vice versa.  Support for other languages is planned for the future. Similarly, we support cross platform encryption. For instance, data encrypted in the Windows platform can be decrypted in Linux and vice versa.
  • Design for Performance.  We’ve designed the library for both throughput and memory footprint. We have used a technique where there is a fixed overhead so that your encrypted data will have a predictable size based on the original size.
  • Self-contained encryption – Every blob, table entity, or queue message has all encryption metadata stored in either the object or its metadata.  There is no need to get any additional data from anywhere else, except for the key you used.
  • Full blob uploads / full and range blob downloads.  Blobs such as documents, photos, and videos that are uploaded in their entirety are supported. Some files, such as MP3s, are downloaded in ranges depending on the part to be played; to support this, range downloads are allowed and are handled entirely by the SDK.
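To illustrate the performance point about predictable encrypted sizes: assuming the content is encrypted with AES-CBC and PKCS#7 padding (an assumption about the scheme, chosen here because it yields the fixed-overhead behavior described above), the ciphertext size is a simple function of the plaintext size:

```python
def padded_size(plaintext_len: int, block: int = 16) -> int:
    """Ciphertext length under PKCS#7 padding with a 16-byte block:
    always rounded up to the next full block, with a whole extra
    block added when the input is already block-aligned."""
    return (plaintext_len // block + 1) * block


for n in (0, 1, 15, 16, 17):
    print(n, "->", padded_size(n))
# 0 and 15 both pad to 16; 16 and 17 both pad to 32
```

The overhead is therefore bounded by one block, which is what makes the encrypted size predictable from the original size.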

How to use it?

Using client-side encryption is easy. The client library will internally take care of encrypting data on the client when uploading to Azure Storage, and automatically decrypts it when data is retrieved. All you need to do is specify the appropriate encryption policy and pass it to data upload/download APIs.

// Create the IKey used for encryption
RsaKey key = new RsaKey("private:key1"/* key identifier */);
 
// Create the encryption policy to be used for upload and download.
BlobEncryptionPolicy policy = new BlobEncryptionPolicy(key, null);
 
// Set the encryption policy on the request options.
BlobRequestOptions options = new BlobRequestOptions();
options.setEncryptionPolicy(policy);
 
// Upload the encrypted contents to the blob.
blob.upload(stream, size, null, options, null);
 
// Download and decrypt the encrypted contents from the blob.
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
blob.download(outputStream, null, options, null);

You can find more details and code samples in the Getting Started with Client-Side Encryption for Microsoft Azure Storage article.

Key considerations

  • This is a preview!  It should not be used for production data.  Schema impacting changes can be made and data written with the first preview may not be readable in the GA version.
  • With client-side encryption, we support full uploads and full/range downloads only. As such, if you perform operations that update parts of a blob after you have written an encrypted blob, you may end up making it unreadable.
  • Avoid performing a SetMetadata operation on the encrypted blob or specifying metadata while creating a snapshot of an encrypted blob as this may render the blob unreadable. If you must update, then be sure to call the downloadAttributes method first to get the current encryption metadata, and avoid concurrent writes while metadata is being set.

We look forward to your feedback on design, ease of use and any additional scenarios you would like to tell us about.  This will enable us to deliver a great GA release of the library. While some requests for additional functionality may not be reflected in the first release, these will be strongly considered for the future.

Thank you.

Dinesh Murthy
Emily Gerner
Microsoft Azure Storage Team

Microsoft Azure Storage Service Version Removal Update: Extension to 2016


Summary

The Storage Service uses versioning to govern what operations are available, how a given request will be processed and what will be returned. In 2014, we announced that specific versions of the Microsoft Azure Storage Service would be removed on December 9th, 2015. Based on your feedback, we are now making the following changes with the details in the table below.

  1. We will delay the removal date for some REST API versions and impacted client libraries. This applies to all REST versions 2009-07-17 and earlier. The effective date for this service removal is August 1st, 2016.
  2. We will indefinitely postpone the removal date for the endpoints 2011-08-18 and 2009-09-19. This is effective immediately. We intend to remove these versions at some point in the future, but not within the coming 12 months. The exact date of removal will be communicated via this blog forum and with 12 months’ notice provided.
  3. We will begin using service version 2014-04-05 for requests that do not include a specific version for SAS authentication and Anonymous access. However, we will begin rejecting any unversioned SharedKey/SharedKeyLite authenticated requests. The effective date for this is August 1st, 2016.
  4. Finally, there is no change to support level and availability of versions 2012-02-12 and beyond.
Endpoint | Action | Effective
2008 (undocumented, but used for processing unversioned requests) | Removal | Aug 1, 2016
Version 2009-04-14 | Removal | Aug 1, 2016
Version 2009-07-17 | Removal | Aug 1, 2016
Version 2009-09-19 (.NET client library v1.5.1 uses this) | Postponed | N/A
Version 2011-08-18 (.NET client library v1.7 uses this) | Postponed | N/A
Versions 2012-02-12, 2013-08-15, 2014-02-14, 2015-02-21, 2015-04-05 | No change | N/A

Please plan and implement your application upgrades soon so you are not impacted when service versions are removed. Additionally, we encourage you to regularly update to the latest service version and client libraries so you get the benefit of the latest features. To understand the details of how this will impact you and what you need to do, please read on.

How will these changes manifest?

Explicitly Versioned Requests

Any request that is explicitly versioned to one of the removed versions, whether via the HTTP x-ms-version request header or, in the case of SAS requests, the api-version parameter, will fail with an HTTP 400 (Bad Request) status code, just like any request made with an invalid version header.

SharedKey/SharedKeyLite Requests with no explicit version

For requests that were signed using the account’s shared key, if no explicit version is specified using HTTP x-ms-version, the request was previously processed with the undocumented 2008 version. Going forward, processing will fail with HTTP 400 (Bad Request) if the version is not explicitly specified.

SAS Requests with no “sv” parameter and no “x-ms-version”

Prior to version 2012-02-12, a SAS request did not specify a version in the “sv” parameter of the SAS token. The SAS token parameters of these requests were interpreted using the rules for the 2009-07-17 REST processing version. These requests will still work, but they will now be processed with the 2015-04-05 version. We advise you in this case to ensure that you either send “x-ms-version” with a non-deprecated version or set a default version on your account.

Anonymous Requests with no explicit version

For any anonymous requests (with no authentication) with no version specified, the service assumes that the request is version agnostic. Effective August 1st 2016, anonymous requests will be processed with version 2015-04-05. The version used for anonymous requests may change again in the future.

Note that we make no guarantees about whether or not there will be breaking changes when unversioned requests are processed with a new service version. Instances of these requests include browser-initiated HTTP requests and HTTP requests without the service version specified that are made from applications not using Storage client libraries. If your application is unable to send an x-ms-version for anonymous requests (for example, from a browser), then you can set a default REST version for your account through Set Blob Service Properties, for the Blob service for instance.

Default Service Version

If Set Blob Service Properties (REST API) has been used to set the default version of requests to version 2009-09-19 or higher, the version set will be used. If default service version was set to a version that is now removed, that request is considered to be explicitly versioned, and will fail with “400 Bad Request”. If default service version was set to a version that is still supported, that version will continue to be used.
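Putting the rules above together, here is a hedged Python sketch of how a request's processing version is resolved after August 1st, 2016; resolve_version and its inputs are illustrative names for this post, not a service API:

```python
# Versions slated for removal on August 1st, 2016.
REMOVED = {"2008", "2009-04-14", "2009-07-17"}
FALLBACK = "2015-04-05"  # used for unversioned SAS and anonymous requests


def resolve_version(auth, explicit=None, default=None):
    """Illustrative resolution of a request's processing version.
    auth is 'sharedkey', 'sas', or 'anonymous'; explicit is the
    x-ms-version (or SAS sv/api-version) value if supplied; default
    is the account's default service version, if one was set."""
    if explicit is not None:
        # Explicitly versioned requests to a removed version get 400.
        return "400 Bad Request" if explicit in REMOVED else explicit
    if auth == "sharedkey":
        # Unversioned SharedKey/SharedKeyLite requests are rejected.
        return "400 Bad Request"
    if default is not None:
        # A default version set via Set Blob Service Properties counts
        # as an explicit version for the request.
        return "400 Bad Request" if default in REMOVED else default
    return FALLBACK
```

For example, an unversioned anonymous request resolves to 2015-04-05, while an unversioned SharedKey request is rejected outright.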

Client Libraries

The latest versions of all of our client libraries and tools will not be affected by this announcement. However, the .NET client library v1.5.1 uses version 2009-09-19 and will be impacted when that version is eventually removed. If you are still using this library, please update to the latest .NET client library before the version is removed. For a list of .NET client libraries using various REST endpoints, please visit https://msdn.microsoft.com/en-us/library/azure/dn744252.aspx. If you are using non-.NET libraries, then you should not be impacted. For more information, please look at the Minimum Supported Versions/Libraries/SDKs section in this article.

Azure CloudDrive

If you are using Azure CloudDrive, then you are not impacted by this announcement since it uses REST Version 2009-09-19. We will have an announcement in the near future on CloudDrive migration.

What should I do?

To ensure that your application continues to work properly after removal of older versions, you should do the following things.

Check your application to find what versions it is using

The first thing to do is to determine what REST versions your application is using. If your application is under your control and you are aware of all components that call Azure Storage, then you can verify this by checking the components against the above list, or by inspecting your code if you have written your own code to make calls to storage.

As a stronger check, or if you are unsure which versions of the components have been deployed, you can enable logging, which will log the requests being made to your storage account. The logs have the request version used included, which can be used to find if any requests are being made using versions with planned removal.

Here is a sample log entry; the request version used is the 2009-09-19 field near the middle of the entry. In this case the request was an anonymous, unversioned GetBlob request which implicitly used the 2009-09-19 version:

1.0;2011-08-09T18:52:40.9241789Z;GetBlob;AnonymousSuccess;200;18;10;anonymous;;myaccount;blob;"https://myaccount.blob.core.windows.net/thumbnails/lake.jpg?timeout=30000";"/myaccount/thumbnails/lake.jpg";a84aa705-8a85-48c5-b064-b43bd22979c3;0;123.100.2.10;2009-09-19;252;0;265;100;0;;;"0x8CE1B6EA95033D5";Friday, 09-Aug-11 18:52:40 GMT;;;;"8/9/2011 6:52:40 PM ba98eb12-700b-4d53-9230-33a3330571fc"

Similar to the above, you can look at log entries to identify any references to service versions that are being removed.
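If you want to scan your logs programmatically, here is an illustrative Python sketch that extracts the request version from a log-format-1.0 entry. The field index is inferred from the sample entry above (it is the 17th semicolon-delimited field), and the splitter respects double-quoted fields since URLs and timestamps may contain delimiters:

```python
def split_log_fields(line):
    """Split a Storage Analytics log entry on ';', treating text
    inside double quotes as a single field."""
    fields, buf, quoted = [], [], False
    for ch in line:
        if ch == '"':
            quoted = not quoted          # toggle quoted mode, drop the quote
        elif ch == ";" and not quoted:
            fields.append("".join(buf))
            buf = []
        else:
            buf.append(ch)
    fields.append("".join(buf))
    return fields


def request_version(line):
    # Index 16 (the 17th field) holds the request version header in the
    # 1.0 log format, per the sample entry above.
    return split_log_fields(line)[16]
```

Running `request_version` over each log line and collecting the distinct values gives you the set of REST versions your applications are actually using.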

What to change

If you find any log entries which show that a version to be removed is being used, you will need to find that component and either validate that it will continue to work (unversioned requests may continue to work as their implicit version will simply increase – see above), or take appropriate steps to change the version being used. Most commonly, one of the following two steps will be used:

  1. Change the version specified in the request. If you are using client libraries, you can accomplish this by migrating to a later version of the libraries/tools. When possible, migrate to the latest version to get the most improvements and fixes.
  2. Set the default service version to one of the supported versions now so that the behavior can be verified prior to removal. This only applies to anonymous requests with no explicit version.

When migrating your applications to newer versions, you should review the above linked change lists for each service version and test thoroughly to ensure that your application is working properly after you’ve updated it. Please note that service version updates have included both syntactic breaks (the request receives a response that either is a failure or is formed very differently) and semantic breaks (the request receives a similar response that means something different).

Post migration validation

After migration, you should validate in the logs that you do not find any of the earlier versions being used. Make sure to check the logs over long enough durations of time to be sure that there are no tasks/workloads running rarely that would still use the older versions (scheduled tasks that run once per day, for example).

Conclusion

It is recommended that users begin their application upgrades now in order to avoid being impacted when the earlier service versions are removed on August 1st, 2016. Additionally, it is considered a best practice to explicitly version all requests made to the storage service. See MSDN for a discussion of versioning in Azure Storage and best practices.

Thank you.

Dinesh Murthy
Principal Program Manager
Microsoft Azure Storage

(Cross-Post) SAS Update: Account SAS Now Supports All Storage Services


Shared Access Signatures (SAS) enable customers to delegate access rights to data within their storage accounts without having to share their storage account keys. In late 2015 we announced a new type of SAS token called Account SAS that provided support for the Blob and File Services. Today we are pleased to announce that Account SAS is also supported for the Azure Storage Table and Queue services. These capabilities are available with Version 2015-04-05 of the Azure Storage Service.

Account SAS delegates access to resources in one or more of the storage services providing parity with the Storage account keys. This enables you to delegate access rights for creating and modifying blob containers, tables, queues, and file shares, as well as providing access to meta-data operations such as Get/Set Service Properties and Get Service Stats. For security reasons Account SAS does not enable access to permission related operations including "Set Container ACL", "Set Table ACL", "Set Queue ACL", and "Set Share ACL".

The code snippet below creates a new access policy and uses it to issue a new Account SAS token for the Blob and Table services, including read, write, list, create, and delete permissions. The Account SAS token is configured to expire 24 hours from now.

SharedAccessAccountPolicy policy = new SharedAccessAccountPolicy()
{
    Permissions = SharedAccessAccountPermissions.Read |
                  SharedAccessAccountPermissions.Write |
                  SharedAccessAccountPermissions.List |
                  SharedAccessAccountPermissions.Create |
                  SharedAccessAccountPermissions.Delete,

    Services = SharedAccessAccountServices.Blob | SharedAccessAccountServices.Table,

    ResourceTypes = SharedAccessAccountResourceTypes.Container | SharedAccessAccountResourceTypes.Object,

    SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),

    Protocols = SharedAccessProtocol.HttpsOrHttp
};

// Create a storage account SAS token by using the above Shared Access Account Policy.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse("YOUR CONNECTION STRING");
string sasToken = storageAccount.GetSharedAccessSignature(policy);
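Under the hood, GetSharedAccessSignature computes an HMAC-SHA256 over a newline-separated string-to-sign using the account key, and the resulting signature is carried in the token's sig parameter. The Python sketch below illustrates the idea; the exact field order and query parameter names are assumptions based on the 2015-04-05 SAS version and may not match the service byte-for-byte:

```python
import base64
import hashlib
import hmac
import urllib.parse


def account_sas(account, key_b64, perms, services, resource_types,
                start, expiry, ip="", protocol="https,http",
                version="2015-04-05"):
    """Illustrative account SAS signing: HMAC-SHA256 over a
    newline-joined string-to-sign, keyed with the account key.
    Field layout here is an assumption, not the authoritative spec."""
    to_sign = "\n".join([account, perms, services, resource_types,
                         start, expiry, ip, protocol, version, ""])
    sig = base64.b64encode(
        hmac.new(base64.b64decode(key_b64), to_sign.encode("utf-8"),
                 hashlib.sha256).digest()).decode("ascii")
    # Assemble the token's query parameters (names assumed).
    return urllib.parse.urlencode({
        "sv": version, "ss": services, "srt": resource_types,
        "sp": perms, "se": expiry, "spr": protocol, "sig": sig})


fake_key = base64.b64encode(b"not a real account key").decode("ascii")
token = account_sas("myaccount", fake_key, "rwlcd", "bt", "co",
                    "", "2016-01-01T00:00:00Z")
print(token)
```

The point of the sketch is the shape of the scheme: the service recomputes the same HMAC from the presented parameters and rejects the request if the signatures do not match.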
Please read the following resources for more details:

We recommend using SAS tokens to delegate access to storage users rather than sharing storage account keys. As always, please let us know if you have any further questions via comments on this post.

Thanks!

Perry Skountrianos
Azure Storage Team

(Cross-Post) Announcing Azure Storage Data Movement Library 0.2.0


In the previous announcement post for DMLib 0.1.0, we committed that the newest release of the Data Movement Library would support more advanced features. Great news: those features are now available and include the following:

  • Download, upload, and copy directories (local file directories, Azure Blob virtual directories, Azure File directories)
  • Transfer directories in recursive mode
  • Transfer directories in flat mode (local file directories)
  • Specify the search pattern when copying files and directories
  • Provide an event to get the result of each file transferred within a transfer
  • Download Snapshots under directories
  • Changed TransferConfigurations.UserAgentSuffix to TransferConfigurations.UserAgentPrefix

With these new features, you can perform data movement at the Blob container and Blob virtual directory level, or the File share and File directory level.
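Conceptually, recursive versus flat directory transfer differs only in how the source files are enumerated before being handed to the transfer engine. Here is a small Python sketch of that enumeration; enumerate_files is an illustrative helper for this post, not part of DMLib:

```python
import fnmatch
import os


def enumerate_files(root, pattern="*", recursive=True):
    """Sketch of directory-transfer enumeration: recursive mode walks
    the whole tree, flat mode looks at the top level only, and the
    search pattern filters file names in both modes."""
    matches = []
    if recursive:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if fnmatch.fnmatch(name, pattern):
                    matches.append(os.path.join(dirpath, name))
    else:
        for name in sorted(os.listdir(root)):
            path = os.path.join(root, name)
            if os.path.isfile(path) and fnmatch.fnmatch(name, pattern):
                matches.append(path)
    return matches
```

With a pattern such as `*.log` and recursive mode on, every matching file in the tree would be queued for transfer; in flat mode only the top-level matches would be.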

We are actively adding more code samples to the GitHub library, and any community contributions to these code samples are highly appreciated.

You can install the Azure Storage Data Movement Library from NuGet or download the source code from GitHub. For more details, please read the Getting Started documentation.

As always, we look forward to your feedback, so please don’t hesitate to utilize the comments section below.

Thanks!

Azure Storage Team


Azure Files Preview Update


At Build 2015 we announced that technical support is now available for Azure Files customers with technical support subscriptions. We are pleased to announce several additional updates for the Azure Files service which have been made in response to customer feedback. Please check them out below:

New REST API Features

Server Side Copy File

Copy File allows you to copy a blob or file to a destination file within the Storage account or across different Storage accounts, entirely on the server side. Before this update, performing a copy operation with the REST API or SMB required you to download the file or blob and re-upload it to its destination.

File SAS

You can now provide access to file shares and individual files by using SAS (shared access signatures) in REST API calls.

Share Size Quota

Another new feature for Azure Files is the ability to set the “share size quota” via the REST API. This means that you can now set limits on the size of file shares. When the sum of the sizes of the files on the share exceeds the quota set on the share, you will not be able to increase the size of the files in the share.
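The quota rule can be sketched as a simple check. This is illustrative Python only; the quota unit shown as whole gigabytes is an assumption for the example:

```python
GB = 1024 ** 3


def may_increase_file_size(total_share_bytes, quota_gb):
    """Per the rule above: once the sum of the file sizes on the share
    exceeds the share quota, further file size increases are rejected."""
    return total_share_bytes <= quota_gb * GB


print(may_increase_file_size(4 * GB, quota_gb=5))  # prints: True
print(may_increase_file_size(6 * GB, quota_gb=5))  # prints: False
```

In other words, the quota does not shrink existing data; it gates growth once the share is over the limit.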

Get/Set Directory Metadata

The new Get/Set Directory Metadata operation allows you to get/set all user-defined metadata for a specified directory.

CORS Support

Cross-Origin Resource Sharing (CORS) has been supported in the Blob, Table, and Queue services since November 2013. We are pleased to announce that CORS will now be supported in Files.

Learn more about these new features by checking out the Azure Files REST API documentation.

Library and Tooling Updates

The client libraries that support these new features are .NET (desktop), Node.js, Java, Android, ASP.NET 5, Windows Phone, and Windows Runtime. Azure PowerShell and Azure CLI also support all of these features, except for get/set directory metadata. In addition, the newest version of AzCopy now uses the server-side copy file feature.

If you’d like to learn more about using client libraries and tooling with Azure Files, a great way to get started is to check out our tutorial for using Azure Files with PowerShell and .NET.

As always, if you have any feature requests please let us know by submitting your ideas to Azure Storage Feedback.

Thanks!

Azure Storage Team

AzCopy – Introducing Append Blob, File Storage Asynchronous Copying, File Storage Share SAS, Table Storage data exporting to CSV and more


We are pleased to announce that AzCopy 3.2.0 and AzCopy 4.2.0-preview are now released! These two releases introduce the following new features:

Append Blob

Append Blob is a new Microsoft Azure Storage blob type which is optimized for fast append operations, making it ideal for scenarios where data must be added to an existing blob without modifying the existing contents of that blob (e.g. logging, auditing). For more details, please go to Introducing Azure Storage Append Blob.

Both AzCopy 3.2.0 and 4.2.0-preview will include the support for Append Blob in the following scenarios:

  • Download Append Blob, same as downloading a block or page blob
AzCopy /Source:https://myaccount.blob.core.windows.net/mycontainer /Dest:C:\myfolder /SourceKey:key /Pattern:appendblob1.txt
  • Upload Append Blob, add option /BlobType:Append to specify the blob type
AzCopy /Source:C:\myfolder /Dest:https://myaccount.blob.core.windows.net/mycontainer /DestKey:key /Pattern:appendblob1.txt /BlobType:Append
  • Copy Append Blob, there is no need to specify the /BlobType
AzCopy /Source:https://myaccount.blob.core.windows.net/mycontainer1 /Dest:https://myaccount.blob.core.windows.net/mycontainer2 /SourceKey:key /DestKey:key /Pattern:appendblob1.txt

Note that when uploading or copying append blobs whose names already exist in the destination, AzCopy will prompt you to either overwrite or skip them. Trying to overwrite a blob with a mismatched blob type will fail; for example, AzCopy will report a failure when overwriting a Block Blob with an Append Blob.

AzCopy does not support appending data to an existing append blob, and if you are using an older version of AzCopy, download and copy operations will fail with the following error message when the source container includes append blobs:

Error parsing the source location “[the source URL specified in the command line]”: The remote server returned an error: (409) Conflict. The type of a blob in the container is unrecognized by this version.

 

File Storage Asynchronous Copy (4.2.0 only)

Azure Storage File Service adds several new features with Storage Service REST version 2015-02-21; please find more details at Azure Storage File Preview Update.

In the previous version, AzCopy 4.1.0, we introduced synchronous copy for Blob and File; AzCopy 4.2.0-preview now includes support for the following File Storage asynchronous copy scenarios.

Unlike synchronous copy, which simulates the copy by downloading the blobs from the source storage endpoint to local memory and then uploading them to the destination storage endpoint, File Storage asynchronous copy is a server-side copy that runs in the background. You can get the copy status programmatically; please find more details at Server Side Copy File.
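A typical pattern for consuming the asynchronous copy status is a polling loop like the Python sketch below; get_copy_state is a caller-supplied stand-in for the service's get-properties call, not a real API:

```python
import time


def wait_for_copy(get_copy_state, poll_seconds=0, max_polls=100):
    """Illustrative monitor for a server-side copy: get_copy_state is
    a caller-supplied function returning 'pending', 'success',
    'aborted', or 'failed'. Polls until the copy leaves 'pending'."""
    for _ in range(max_polls):
        state = get_copy_state()
        if state != "pending":
            return state
        time.sleep(poll_seconds)
    raise TimeoutError("copy still pending after max_polls checks")


# Usage with a stubbed copy that finishes on the third check:
states = iter(["pending", "pending", "success"])
print(wait_for_copy(lambda: next(states)))  # prints: success
```

Because the copy runs on the server, the client only pays for these lightweight status checks rather than moving the data through its own memory.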

  • Asynchronous copying from File Storage to File Storage
AzCopy /Source:https://myaccount1.file.core.windows.net/myfileshare1/ /Dest:https://myaccount2.file.core.windows.net/myfileshare2/ /SourceKey:key1 /DestKey:key2 /S
  • Asynchronous copying from File Storage to Block Blob
AzCopy /Source:https://myaccount1.file.core.windows.net/myfileshare/ /Dest:https://myaccount2.blob.core.windows.net/mycontainer/ /SourceKey:key1 /DestKey:key2 /S
  • Asynchronous copying from Block/Page Blob Storage to File Storage
AzCopy /Source:https://myaccount1.blob.core.windows.net/mycontainer/ /Dest:https://myaccount2.file.core.windows.net/myfileshare/ /SourceKey:key1 /DestKey:key2 /S

Note that asynchronous copying from File Storage to Page Blob is not supported.

 

File Storage Share SAS (Preview version 4.2.0 only)

Besides File asynchronous copy, another new File Storage feature, ‘File Share SAS’, is supported in AzCopy 4.2.0-preview as well.

Now you can use option /SourceSAS and /DestSAS to authenticate the file transfer request.

AzCopy /Source:https://myaccount1.file.core.windows.net/myfileshare1/ /Dest:https://myaccount2.file.core.windows.net/myfileshare2/ /SourceSAS:SAS1 /DestSAS:SAS2 /S

For more details about File Storage share SAS, please visit Azure Storage File Preview Update.

 

Export Table Storage entities to CSV (Preview version 4.2.0 only)

AzCopy has allowed end users to export Table entities to local files in JSON format since the 4.0.0 preview version. Now you can specify the new option /PayloadFormat:<JSON | CSV> to export data to CSV files. Without this new option, AzCopy exports Table entities to JSON files.

AzCopy /Source:https://myaccount.table.core.windows.net/myTable/ /Dest:C:\myfolder\ /SourceKey:key /PayloadFormat:CSV

Besides the data files with the .csv extension, which are placed in the location specified by the /Dest parameter, AzCopy will generate a schema file with the extension .schema.csv for each data file.

Note that AzCopy does not support importing CSV data files; use the JSON format to export/import as you did in previous versions of AzCopy.

 

Specify the manifest file name when exporting Table entities (Preview version 4.2.0 only)

AzCopy requires end users to specify the option /Manifest when importing table entities. In previous versions, the manifest file name was chosen by AzCopy during the export and looked like “myaccount_mytable_timestamp.manifest”, so users had to find the name in the destination folder before writing the import command line.

Now you can specify the manifest file name during the export with the /Manifest option, which brings more flexibility and convenience to your import scenarios.

AzCopy /Source:https://myaccount.table.core.windows.net/myTable/ /Dest:C:\myfolder\ /SourceKey:key /Manifest:abc.manifest

 

Enable FIPS compliant MD5 algorithm

By default, AzCopy uses the .NET MD5 implementation to calculate the MD5 hash when copying objects. We now support a FIPS-compliant MD5 setting to fulfill security requirements in some scenarios.

You can create an app.config file named “AzCopy.exe.config” with the property “AzureStorageUseV1MD5” and place it alongside AzCopy.exe.

<?xml version="1.0" encoding="utf-8" ?> 
<configuration>
<appSettings>
<add key="AzureStorageUseV1MD5" value="false"/>
</appSettings>
</configuration>

For the property “AzureStorageUseV1MD5”:

  • true – The default value, AzCopy will use .NET MD5 implementation.
  • false – AzCopy will use FIPS compliant MD5 algorithm.

Note that FIPS-compliant algorithms are disabled by default on Windows. You can type secpol.msc in the Run window and check the switch at “Security Settings->Local Policies->Security Options->System cryptography: Use FIPS compliant algorithms for encryption, hashing, and signing”.

 

Reference

Azure Storage File Preview Update

Microsoft Azure Storage Release – Append Blob, New Azure File Service Features and Client Side Encryption General Availability

Introducing Azure Storage Append Blob

Enable FISMA MD5 setting via Microsoft Azure Storage Client Library for .NET

Getting Started with the AzCopy Command-Line Utility

As always, we look forward to your feedback.

Microsoft Azure Storage Team

Issue in Azure Storage Client Library 5.0.0 and 5.0.1 preview in AppendBlob functionality


An issue was recently discovered in the Azure Storage Client Library 5.0.0 for .NET and in the Azure Storage Client Library 5.0.1 preview for .NET. It impacts the Windows desktop and phone targets. The details of the issue are as follows:

When CloudAppendBlob.AppendTextAsync(), the method that appends a string of text to an append blob asynchronously, is invoked with only the content parameter specified, or with only the content and CancellationToken parameters specified, the call will overwrite the blob content instead of appending to it. Other synchronous and asynchronous invocations that append a string of text to an append blob (CloudAppendBlob.AppendText(), the remaining CloudAppendBlob.AppendTextAsync() overloads) do not manifest the issue.

The Azure Storage team has hotfixes available for both releases, with updated versions 5.0.2 and 5.0.3-preview respectively. If you have installed either the Azure Storage Client Library 5.0.0 for .NET or the Azure Storage Client Library 5.0.1 preview for .NET, please make sure to update your references to the corresponding package. You can install these versions from:

  1. The Visual Studio NuGet Package Manager UI.
  2. The Package Manager console using the following command (the released version for instance): Install-Package WindowsAzure.Storage -Version 5.0.2
  3. The NuGet gallery web page that houses the package: here for the released version and here for the preview version.

Please note the following:

  1. The older versions will be unlisted in the Visual Studio NuGet Package Manager UI.
  2. If you attempt to launch the web page that contained the original package, you may encounter a 404 error.
  3. We recommend that you not install the older versions through the Package Manager console, so that you don’t run into the issue.

Thank you for your support to Azure Storage. We look forward to your continued feedback.

Microsoft Azure Storage Team

Introducing the Azure Storage Client Library for iOS (Public Preview)


We are excited to announce the public preview of the Azure Storage Client Library for iOS!

Having a client library for iOS is essential to providing a complete mobile story for developers. With this release, developers can now take advantage of Azure Storage on all major mobile platforms: Windows Phone, iOS, Android, and Xamarin.

Currently, this library supports iOS 9, iOS 8 and iOS 7 and can be used with both Objective-C and Swift. This library also supports the latest Azure Storage service version 2015-02-21.

With this being the first release, we want to make sure we’re taking advantage of the wealth of knowledge provided by the iOS developer community. For this reason, we’re releasing block blob support first, with the goal of soliciting feedback and better understanding the additional scenarios you would like to see supported.

Please check out How to use Blob Storage from iOS to get started. You can also download the sample app to quickly see the use of Azure Storage in an iOS application.

As always, if you have any feature requests please let us know by submitting your ideas to Azure Storage Feedback.

We’d also like to give a special thanks to all those who joined our preview program and contributed their ideas and suggestions.

Thanks!

Azure Storage Team

(Cross-Post) Introducing Azure Storage Data Movement Library Preview


Since AzCopy was first released, a large number of customers have requested programmatic access to AzCopy. We are pleased to announce a new open-sourced Azure Storage data movement library for .NET (DML for short). This library is based on the core data movement framework that powers AzCopy. The library is designed for high-performance, reliable and easy Azure Storage data transfer operations enabling scenarios such as:
•    Uploading, downloading and copying data between Microsoft Azure Blob and File Storage
•    Migrating data from other cloud providers such as AWS S3 to Azure Blob Storage
•    Backing up Azure Storage data

Here is a sample demonstrating how to upload a blob; please find more samples on GitHub.

using System;
using System.Threading;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
// Include the New Azure Storage Data Movement Library
using Microsoft.WindowsAzure.Storage.DataMovement;
 
// Setup the storage context and prepare the object you need to upload
string storageConnectionString = "myStorageConnectionString";
CloudStorageAccount account = CloudStorageAccount.Parse(storageConnectionString);
CloudBlobClient blobClient = account.CreateCloudBlobClient();
CloudBlobContainer blobContainer = blobClient.GetContainerReference("mycontainer");
blobContainer.CreateIfNotExists();
string sourcePath = "path\\to\\test.txt";
CloudBlockBlob destBlob = blobContainer.GetBlockBlobReference("myblob");
 
// Use the interfaces from the new Azure Storage Data Movement Library to upload the blob
// Setup the number of the concurrent operations
TransferManager.Configurations.ParallelOperations = 64;
 
// Setup the transfer context and track the upload progress
TransferContext context = new TransferContext();
context.ProgressHandler = new Progress<TransferProgress>((progress) =>
{
    Console.WriteLine("Bytes uploaded: {0}", progress.BytesTransferred);
});
 
// Upload a local blob
var task = TransferManager.UploadAsync(
    sourcePath, destBlob, null, context, CancellationToken.None);
task.Wait();

Azure Storage Data Movement Library has the same performance as AzCopy and exposes the core functionality of AzCopy. You can install the first preview of the library from NuGet or download the source code from GitHub. The initial version (0.1.0) of this library provides the following abilities:
•    Support data transfer for Azure Storage abstraction: Blob
•    Support data transfer for Azure Storage abstraction: File
•    Download / Upload / Copy single object
•    Control the number of concurrent operations
•    Synchronous and asynchronous copying
•    Define the suffix of the user agent
•    Set the content type
•    Set the Access Condition to conditionally copy objects, for example copy objects changed since a certain date
•    Validate content MD5
•    Download specific blob snapshot
•    Track transfer progress: bytes transferred, number of success/fail/skip files
•    Recover (Set/Get transfer checkpoint)
•    Transfer Error handling (transfer exception and error code)
•    Client-Side Logging

DML is an open source project, and we welcome contributions from the community. In particular, we are interested in extensions to our samples to help make them more robust. Together with the release of version 0.1.0, we have created the following samples; for more details, please visit the GitHub Readme.md.

•    Upload/Download/Copy an Azure Storage Blob
•    Migrate data from AWS S3 to Azure Blob Storage

Next Steps
We will continue to invest in both AzCopy and the Data Movement Library. In upcoming releases of the Data Movement Library, we will add support for more advanced features, including:
•    Download / Upload / Copy directory (Local file directory, blob virtual directory, File share directory)
•    Transfer directory in recursive mode or flat mode
•    Specify the file pattern when copying files and directories
•    Download Snapshots under directories

As always, we look forward to your feedback.

Microsoft Azure Storage Team

How to use Blob Storage from iOS

Client-Side Encryption in Java Client Library for Microsoft Azure Storage – Preview


We are excited to announce preview availability of the client side encryption feature in the Azure Storage Java Client Library. This preview enables you to encrypt and decrypt your data inside client applications before uploading to and after downloading from Azure Storage. The feature is available for Blobs, Queues and Tables. We also support integration with Azure Key Vault in order to let you store and manage your keys. We recently made Client-side encryption generally available in the Storage .Net library and now we are happy to provide the same capability in the Java client library as a preview.

Why use client-side encryption?

Client-side encryption is helpful in scenarios where customers want to encrypt data at the source, such as encrypting surveillance data from cameras before uploading it to Storage. In this scenario, the user controls the keys, and the Azure Storage service never sees the keys used for cryptographic operations. You can also inspect exactly how the library encrypts your data to ensure that it meets your standards, since the library is open source and available on GitHub.

Benefits of the Java Client Library

We wanted to provide a library that would accomplish the following:

  • Implement Security Best Practices. This library has been reviewed for security so that you can use it with confidence. Encrypted data is not decipherable even if the storage account keys are compromised. Additionally, we’ve made it simple and straightforward for users to rotate keys themselves; multiple keys are supported during the key rotation timeframe.
  • Interoperability across languages and platforms.  Many users use more than one of our client libraries. Given our goal to use the same technical design across implementations, data encrypted using the .NET library can be decrypted using the Java library and vice versa.  Support for other languages is planned for the future. Similarly, we support cross platform encryption. For instance, data encrypted in the Windows platform can be decrypted in Linux and vice versa.
  • Design for Performance.  We’ve designed the library for both throughput and memory footprint. We have used a technique where there is a fixed overhead so that your encrypted data will have a predictable size based on the original size.
  • Self-contained encryption – Every blob, table entity, or queue message has all encryption metadata stored in either the object or its metadata.  There is no need to get any additional data from anywhere else, except for the key you used.
  • Full blob uploads / full and range blob downloads: Uploading blobs in their entirety, such as documents, photos, and videos, is supported. Some files, such as MP3s, are downloaded in ranges depending on the part that is to be played; to support this, range downloads are allowed and are handled entirely by the SDK.
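
The self-contained envelope design described above can be illustrated with plain JDK cryptography. The sketch below is purely conceptual: it shows a per-object content key (CEK) wrapped by a user-controlled key (KEK), but it does not reproduce the library’s actual algorithms, key identifiers, or metadata format, and the class name EnvelopeDemo is ours:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;

public class EnvelopeDemo {
    // Encrypts and then decrypts the input, returning the round-tripped text.
    public static String roundTrip(String plaintext) throws Exception {
        // Key-encryption key (KEK): an asymmetric key the client controls
        // (the role played by "RsaKey" in the library sample).
        KeyPairGenerator rsaGen = KeyPairGenerator.getInstance("RSA");
        rsaGen.initialize(2048);
        KeyPair kek = rsaGen.generateKeyPair();

        // Content-encryption key (CEK): a fresh symmetric key per object.
        KeyGenerator aesGen = KeyGenerator.getInstance("AES");
        aesGen.init(256);
        SecretKey cek = aesGen.generateKey();

        // 1. Encrypt the object data with the CEK.
        Cipher aes = Cipher.getInstance("AES/CBC/PKCS5Padding");
        aes.init(Cipher.ENCRYPT_MODE, cek);
        byte[] iv = aes.getIV();
        byte[] ciphertext = aes.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));

        // 2. Wrap the CEK with the KEK; the wrapped key and IV travel with the
        //    object as its encryption metadata.
        Cipher rsa = Cipher.getInstance("RSA/ECB/OAEPWithSHA-1AndMGF1Padding");
        rsa.init(Cipher.WRAP_MODE, kek.getPublic());
        byte[] wrappedCek = rsa.wrap(cek);

        // 3. Decrypt side: unwrap the CEK with the private KEK, then decrypt.
        rsa.init(Cipher.UNWRAP_MODE, kek.getPrivate());
        SecretKey unwrapped = (SecretKey) rsa.unwrap(wrappedCek, "AES", Cipher.SECRET_KEY);
        aes.init(Cipher.DECRYPT_MODE, unwrapped, new IvParameterSpec(iv));
        return new String(aes.doFinal(ciphertext), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("surveillance frame bytes")); // prints the original text
    }
}
```

The wrapped CEK and IV stand in for the encryption metadata stored with the object; only the holder of the KEK can recover the data, which is why compromised account keys do not expose encrypted content.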

How to use it?

Using client-side encryption is easy. The client library will internally take care of encrypting data on the client when uploading to Azure Storage, and automatically decrypts it when data is retrieved. All you need to do is specify the appropriate encryption policy and pass it to data upload/download APIs.

// Create the IKey used for encryption
RsaKey key = new RsaKey("private:key1" /* key identifier */);
 
// Create the encryption policy to be used for upload and download.
BlobEncryptionPolicy policy = new BlobEncryptionPolicy(key, null);
 
// Set the encryption policy on the request options.
BlobRequestOptions options = new BlobRequestOptions();
options.setEncryptionPolicy(policy);
 
// Upload the encrypted contents to the blob.
blob.upload(stream, size, null, options, null);
 
// Download and decrypt the encrypted contents from the blob.
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
blob.download(outputStream, null, options, null);

You can find more details and code samples in the Getting Started with Client-Side Encryption for Microsoft Azure Storage article.

Key considerations

  • This is a preview!  It should not be used for production data.  Schema impacting changes can be made and data written with the first preview may not be readable in the GA version.
  • With client-side encryption, we support full uploads and full/range downloads only. As such, if you perform operations that update parts of a blob after you have written an encrypted blob, you may end up making it unreadable.
  • Avoid performing a SetMetadata operation on the encrypted blob or specifying metadata while creating a snapshot of an encrypted blob as this may render the blob unreadable. If you must update, then be sure to call the downloadAttributes method first to get the current encryption metadata, and avoid concurrent writes while metadata is being set.

We look forward to your feedback on design, ease of use and any additional scenarios you would like to tell us about.  This will enable us to deliver a great GA release of the library. While some requests for additional functionality may not be reflected in the first release, these will be strongly considered for the future.

Thank you.

Dinesh Murthy
Emily Gerner
Microsoft Azure Storage Team


Microsoft Azure Storage Service Version Removal Update: Extension to 2016


Summary

The Storage Service uses versioning to govern what operations are available, how a given request will be processed and what will be returned. In 2014, we announced that specific versions of the Microsoft Azure Storage Service would be removed on December 9th, 2015. Based on your feedback, we are now making the following changes with the details in the table below.

  1. We will delay the removal date for some REST API versions and impacted client libraries. This includes all REST endpoints starting version 2009-07-17 and earlier. The effective date for this service removal is August 1st, 2016.
  2. We will indefinitely postpone the removal date for the endpoints 2011-08-18 and 2009-09-19, effective immediately. We intend to remove these versions at some point in the future, but not within the coming 12 months. The exact date of removal will be communicated on this blog with 12 months’ notice.
  3. We will begin using service version 2014-04-05 for requests that do not include a specific version for SAS authentication and Anonymous access. However, we will begin rejecting any unversioned SharedKey/SharedKeyLite authenticated requests. The effective date for this is August 1st, 2016.
  4. Finally, there is no change to support level and availability of versions 2012-02-12 and beyond.
Endpoint                                                              Action      Effective
2008 (undocumented, but used for processing unversioned requests)     Removal     Aug 1, 2016
Version 2009-04-14                                                    Removal     Aug 1, 2016
Version 2009-07-17                                                    Removal     Aug 1, 2016
Version 2009-09-19 (.NET client library v1.5.1 uses this)             Postponed   N/A
Version 2011-08-18 (.NET client library v1.7 uses this)               Postponed   N/A
Versions 2012-02-12, 2013-08-15, 2014-02-14, 2015-02-21, 2015-04-05   No change   N/A

Please plan and implement your application upgrades soon so you are not impacted when service versions are removed. Additionally, we encourage you to regularly update to the latest service version and client libraries so you get the benefit of the latest features. To understand the details of how this will impact you and what you need to do, please read on.
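
For auditing purposes, the removal schedule in the table above can be encoded as a small lookup helper. The sketch below is our own illustration, not part of any SDK; the 2008 endpoint has no version string, so it is omitted:

```java
import java.util.Arrays;
import java.util.List;

public class VersionPolicy {
    private static final List<String> REMOVED = Arrays.asList("2009-04-14", "2009-07-17");
    private static final List<String> POSTPONED = Arrays.asList("2009-09-19", "2011-08-18");
    private static final List<String> CURRENT = Arrays.asList(
            "2012-02-12", "2013-08-15", "2014-02-14", "2015-02-21", "2015-04-05");

    // Returns the action from the table above for a given x-ms-version value.
    public static String action(String version) {
        if (REMOVED.contains(version)) return "Removal (Aug 1, 2016)";
        if (POSTPONED.contains(version)) return "Postponed";
        if (CURRENT.contains(version)) return "No change";
        return "Unknown version";
    }

    public static void main(String[] args) {
        System.out.println(action("2009-07-17")); // Removal (Aug 1, 2016)
        System.out.println(action("2015-04-05")); // No change
    }
}
```

A helper like this can be run against the versions you find in your code or logs to decide which components need attention first.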

How will these changes manifest?

Explicitly Versioned Requests

Any requests that are explicitly versioned, with the HTTP x-ms-version request header set to one of the removed versions or, in the case of SAS requests, with the api-version parameter set to one of the removed versions, will fail with an HTTP 400 (Bad Request) status code, just like any request made with an invalid version header.

SharedKey/SharedKeyLite Requests with no explicit version

For requests that were signed using the account’s shared key, if no explicit version is specified using HTTP x-ms-version, the request was previously processed with the undocumented 2008 version. Going forward, processing will fail with HTTP 400 (Bad Request) if the version is not explicitly specified.

SAS Requests with no “sv” parameter and no “x-ms-version”

Prior to version 2012-02-12, a SAS request did not specify a version in the “sv” parameter of the SAS token, and the SAS token parameters of these requests were interpreted using the rules of the 2009-07-17 REST version. These requests will still work, but they will now be processed with the 2015-04-05 version. We advise you to ensure that you either send “x-ms-version” with a non-deprecated version or set a default version on your account.

Anonymous Requests with no explicit version

For any anonymous requests (with no authentication) with no version specified, the service assumes that the request is version agnostic. Effective August 1st 2016, anonymous requests will be processed with version 2015-04-05. The version used for anonymous requests may change again in the future.

Note that we make no guarantees about whether or not there will be breaking changes when unversioned requests are processed with a new service version. Instances of these requests include browser-initiated HTTP requests and HTTP requests made without the service version specified from applications not using the Storage client libraries. If your application is unable to send an x-ms-version for anonymous requests (for example, from a browser), then you can set a default REST version for your account, for example through Set Blob Service Properties for the Blob service.

Default Service Version

If Set Blob Service Properties (REST API) has been used to set the default version of requests to version 2009-09-19 or higher, the version set will be used. If default service version was set to a version that is now removed, that request is considered to be explicitly versioned, and will fail with “400 Bad Request”. If default service version was set to a version that is still supported, that version will continue to be used.

Client Libraries

The latest versions of all of our client libraries and tools are not affected by this announcement. However, the .NET client library v1.5.1 uses version 2009-09-19 and will be impacted when that version is eventually removed. If you are still using this library, please update to the latest .NET client library before the version is removed. For a list of .NET client libraries using various REST endpoints, please visit https://msdn.microsoft.com/en-us/library/azure/dn744252.aspx. If you are using non-.NET libraries, you should not be impacted. For more information, please look at the Minimum Supported Versions/Libraries/SDKs section in this article.

Azure CloudDrive

If you are using Azure CloudDrive, then you are not impacted by this announcement since it uses REST Version 2009-09-19. We will have an announcement in the near future on CloudDrive migration.

What should I do?

To ensure that your application continues to work properly after removal of older versions, you should do the following things.

Check your application to find what versions it is using

The first thing to do is to determine what REST versions your application is using. If your application is under your control and you are aware of all components that call Azure Storage, then you can verify this by checking the components against the above list, or by inspecting your code if you have written your own code to make calls to storage.

As a stronger check, or if you are unsure which versions of the components have been deployed, you can enable logging, which records the requests being made to your storage account. The logs include the request version used, which can be used to determine whether any requests are being made with versions planned for removal.

Here is a sample log entry; in this case the request was an anonymous, unversioned GetBlob request that implicitly used the 2009-09-19 version (the request-version-header field):

1.0;2011-08-09T18:52:40.9241789Z;GetBlob;AnonymousSuccess;200;18;10;anonymous;;myaccount;blob;"https://myaccount.blob.core.windows.net/thumbnails/lake.jpg?timeout=30000";"/myaccount/thumbnails/lake.jpg";a84aa705-8a85-48c5-b064-b43bd22979c3;0;123.100.2.10;2009-09-19;252;0;265;100;0;;;"0x8CE1B6EA95033D5";Friday, 09-Aug-11 18:52:40 GMT;;;;"8/9/2011 6:52:40 PM ba98eb12-700b-4d53-9230-33a3330571fc"

Similar to the above, you can look at log entries to identify any references to service versions that are being removed.
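
To automate this check across a log file, you can extract the request-version field from each entry and collect the distinct values. The sketch below is our own illustration and assumes the version 1.0 Storage Analytics log format shown above, where the request-version-header is the 17th semicolon-delimited field and no quoted field contains a semicolon:

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class LogVersionScan {
    // The request-version-header is the 17th field (index 16) of a version 1.0
    // Storage Analytics log entry. This simple split assumes no quoted field
    // contains a semicolon, which holds for typical entries.
    public static String requestVersion(String logLine) {
        return logLine.split(";", -1)[16];
    }

    // Collects the distinct request versions seen across a set of log entries.
    public static Set<String> distinctVersions(List<String> logLines) {
        Set<String> versions = new LinkedHashSet<>();
        for (String line : logLines) {
            versions.add(requestVersion(line));
        }
        return versions;
    }

    public static void main(String[] args) {
        String sample = "1.0;2011-08-09T18:52:40.9241789Z;GetBlob;AnonymousSuccess;200;18;10;"
                + "anonymous;;myaccount;blob;"
                + "\"https://myaccount.blob.core.windows.net/thumbnails/lake.jpg?timeout=30000\";"
                + "\"/myaccount/thumbnails/lake.jpg\";"
                + "a84aa705-8a85-48c5-b064-b43bd22979c3;0;123.100.2.10;2009-09-19;252;0;265;100;0;;;"
                + "\"0x8CE1B6EA95033D5\";Friday, 09-Aug-11 18:52:40 GMT;;;;"
                + "\"8/9/2011 6:52:40 PM ba98eb12-700b-4d53-9230-33a3330571fc\"";
        System.out.println(requestVersion(sample)); // 2009-09-19
    }
}
```

Feeding the distinct versions found this way into your removal checklist makes it easy to spot rarely used components that still issue requests with versions slated for removal.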

What to change

If you find any log entries which show that a version to be removed is being used, you will need to find that component and either validate that it will continue to work (unversioned requests may continue to work as their implicit version will simply increase – see above), or take appropriate steps to change the version being used. Most commonly, one of the following two steps will be used:

  1. Change the version specified in the request. If you are using client libraries, you can accomplish this by migrating to a later version of the libraries/tools. When possible, migrate to the latest version to get the most improvements and fixes.
  2. Set the default service version to one of the supported versions now so that the behavior can be verified prior to removal. This only applies to anonymous requests with no explicit version.

When migrating your applications to newer versions, you should review the change lists linked above for each service version and test thoroughly to ensure that your application works properly after the update. Please note that service version updates have included both syntactic breaks (the request receives a response that either fails or is formed very differently) and semantic breaks (the request receives a similar response that means something different).

Post migration validation

After migration, you should validate in the logs that you do not find any of the earlier versions being used. Make sure to check the logs over long enough durations of time to be sure that there are no tasks/workloads running rarely that would still use the older versions (scheduled tasks that run once per day, for example).

Conclusion

We recommend that users begin their application upgrades now in order to avoid being impacted when the earlier service versions are removed on August 1st, 2016. Additionally, it is a best practice to explicitly version all requests made to the storage service. See MSDN for a discussion of versioning in Azure Storage and best practices.

Thank you.

Dinesh Murthy
Principal Program Manager
Microsoft Azure Storage

(Cross-Post) SAS Update: Account SAS Now Supports All Storage Services


Shared Access Signatures (SAS) enable customers to delegate access rights to data within their storage accounts without having to share their storage account keys. In late 2015 we announced a new type of SAS token called Account SAS that provided support for the Blob and File Services. Today we are pleased to announce that Account SAS is also supported for the Azure Storage Table and Queue services. These capabilities are available with Version 2015-04-05 of the Azure Storage Service.

Account SAS delegates access to resources in one or more of the storage services providing parity with the Storage account keys. This enables you to delegate access rights for creating and modifying blob containers, tables, queues, and file shares, as well as providing access to meta-data operations such as Get/Set Service Properties and Get Service Stats. For security reasons Account SAS does not enable access to permission related operations including “Set Container ACL”, “Set Table ACL”, “Set Queue ACL”, and “Set Share ACL”.

The code snippet below creates a new access policy used to issue an Account SAS token for the Blob and Table services, including read, write, list, create, and delete permissions. The Account SAS token is configured to expire 24 hours from the time it is created.

SharedAccessAccountPolicy policy = new SharedAccessAccountPolicy()
{
    Permissions = SharedAccessAccountPermissions.Read |
                  SharedAccessAccountPermissions.Write |
                  SharedAccessAccountPermissions.List |
                  SharedAccessAccountPermissions.Create |
                  SharedAccessAccountPermissions.Delete,

    Services = SharedAccessAccountServices.Blob | SharedAccessAccountServices.Table,

    ResourceTypes = SharedAccessAccountResourceTypes.Container | SharedAccessAccountResourceTypes.Object,

    SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),

    Protocols = SharedAccessProtocol.HttpsOrHttp
};

// Create a storage account SAS token by using the above Shared Access Account Policy.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse("YOUR CONNECTION STRING");
string sasToken = storageAccount.GetSharedAccessSignature(policy);
Please read the following resources for more details:

We recommend using SAS tokens to delegate access to storage users rather than sharing storage account keys. As always, please let us know if you have any further questions via comments on this post.

Thanks!

Perry Skountrianos
Azure Storage Team

(Cross-Post) Announcing Azure Storage Data Movement Library 0.2.0


In the previous announcement post for DMLib 0.1.0, we committed that the newest release of the Data Movement Library would support more advanced features. Great news, those are now available and include the following:

  • Download, upload, and copy directories (local file directories, Azure Blob virtual directories, Azure File directories)
  • Transfer directories in recursive mode
  • Transfer directories in flat mode (local file directories)
  • Specify the search pattern when copying files and directories
  • Provide an event to get the transfer result of each single file in a transfer
  • Download Snapshots under directories
  • Changed TransferConfigurations.UserAgentSuffix to TransferConfigurations.UserAgentPrefix

With these new features, you can perform data movement at the Blob container and Blob virtual directory level, or the File share and File directory level.

We are actively adding more code samples to the Github library, and any community contributions to these code samples are highly appreciated.

You can install the Azure Storage Data Movement Library from NuGet or download the source code from GitHub. For more details, please read the Getting Started documentation.

As always, we look forward to your feedback, so please don’t hesitate to utilize the comments section below.

Thanks!

Azure Storage Team

(Cross-Post) Build 2016: Azure Storage announcements


It’s time for Build 2016, and the Azure Storage team has several exciting announcements to make. This blog post provides an overview of new announcements and updates on existing programs. We hope that these new features and updates will enable you to make better use of Azure Storage for your services, applications and other needs.

Preview Program Announcements

Storage Service Encryption Preview

Storage Service Encryption helps you address organizational security and compliance requirements by automatically encrypting data in Blob Storage, including block blobs, page blobs, and append blobs. Azure Storage handles all the encryption, decryption, and key management in a transparent fashion using AES 256-bit encryption, one of the strongest encryption ciphers available. There is no additional charge for enabling this feature.

Access to the preview program can be requested by registering your subscription using Azure Portal or Azure PowerShell. Once your subscription has been approved, you can create a new storage account using the Azure Portal, and enable the feature.

To learn more about this feature, please see Getting started with Storage Service Encryption.

Near Term Roadmap Announcements

GetPageRanges API for copying incremental snapshots

The Azure Storage team will soon be adding a new feature to the GetPageRanges API for page blobs, which will allow you to build faster and more efficient backup solutions for Azure virtual machines. The API will return the list of changes between the base blob and its snapshots, allowing you to identify and copy only the changes unique to each snapshot. This will significantly reduce the amount of data you need to transfer during incremental backups of the virtual machine disks. The API will support page blobs on premium storage as well as standard storage. The feature will be available in April 2016 via the REST API and the .NET client library, with more client libraries support to follow.

Azure Import/Export

Azure Import/Export now supports up to 8 TB hard drives in all regions where the service is offered. In addition, Azure Import/Export will be coming to Japan and Australia in summer 2016. With this launch, customers who have storage accounts in Japan or Australia can ship disks to a domestic address within the region rather than shipping to other regions.

Azure Backup support for Azure Premium Storage

Azure Premium Storage is ideal for running I/O-intensive applications on Azure VMs. The Azure Backup service delivers a powerful and affordable cloud backup solution, and will be adding support for Azure Premium Storage so you can protect your critical applications running on Premium Storage VMs.

Learn more about Azure Backup and Premium Storage.

Client Library and Tooling Updates

Java Client-Side Encryption GA

We are pleased to announce the general availability of the client-side encryption feature in the Azure Storage Java client library. It allows developers to encrypt blob, table, and queue data before sending it to Azure Storage. Integration with Azure Key Vault is also supported, so you can store and manage your keys in Azure Key Vault. With this release, data encrypted with .NET on Windows can be decrypted with Java on Linux, and vice versa.
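The cross-platform round-trip works because the libraries share an envelope-encryption scheme: each blob is encrypted with a fresh content encryption key (CEK), and the CEK is itself wrapped by your master key (the KEK, e.g. held in Key Vault) and stored alongside the data. A minimal stdlib-only sketch of that pattern follows; the XOR keystream is a deliberately toy stand-in for the AES encryption the libraries actually use, so the code shows only the key-wrapping shape, not real cryptography.

```python
import hashlib
import secrets

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher (stand-in for AES): the same call encrypts and decrypts."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def encrypt_blob(plaintext: bytes, kek: bytes):
    """Envelope-encrypt: fresh content key per blob, wrapped under the master key (KEK)."""
    cek = secrets.token_bytes(32)                 # content encryption key
    ciphertext = _keystream_xor(cek, plaintext)   # data encrypted under the CEK
    wrapped_cek = _keystream_xor(kek, cek)        # CEK wrapped under the KEK
    return ciphertext, wrapped_cek                # both are stored with the blob

def decrypt_blob(ciphertext: bytes, wrapped_cek: bytes, kek: bytes) -> bytes:
    cek = _keystream_xor(kek, wrapped_cek)        # unwrap the content key first
    return _keystream_xor(cek, ciphertext)

kek = secrets.token_bytes(32)
ct, wrapped = encrypt_blob(b"cross-platform payload", kek)
assert decrypt_blob(ct, wrapped, kek) == b"cross-platform payload"
```

Because only the wrapped CEK and ciphertext travel with the blob, any library that can unwrap the CEK with the same KEK can decrypt the data, regardless of platform.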

To learn more, please visit our getting started documentation.

Storage Node.js Preview Update

We are pleased to announce the latest preview (0.10) of the Azure Storage Node.js client library. This release adds full support for the account SAS capability, along with IP ACL and protocol restrictions for service SAS, and addresses customer usability feedback. You can start using the preview in your applications now by installing the azure-storage package from npm.
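For context on what an account SAS token contains, the sketch below builds one by hand following the published account SAS format for REST version 2015-04-05: a newline-delimited string-to-sign, HMAC-SHA256 signed with the base64-decoded account key. The account name, key, and parameter values here are dummies for illustration; real applications should use the client libraries' SAS generators rather than hand-rolling this.

```python
import base64
import hashlib
import hmac
from urllib.parse import urlencode

def make_account_sas(account, key_b64, *, services, resource_types,
                     permissions, expiry, start="", ip="",
                     protocol="https", version="2015-04-05"):
    """Build an account SAS query string (string-to-sign per the 2015-04-05 format)."""
    string_to_sign = "\n".join([
        account, permissions, services, resource_types,
        start, expiry, ip, protocol, version,
    ]) + "\n"                                     # the format requires a trailing newline
    sig = base64.b64encode(
        hmac.new(base64.b64decode(key_b64),
                 string_to_sign.encode("utf-8"),
                 hashlib.sha256).digest()
    ).decode()
    params = {"sv": version, "ss": services, "srt": resource_types,
              "sp": permissions, "se": expiry, "spr": protocol, "sig": sig}
    if start:
        params["st"] = start
    if ip:
        params["sip"] = ip                        # the IP ACL restriction
    return urlencode(params)

# Illustrative only: dummy account name and key.
token = make_account_sas("myaccount", base64.b64encode(b"0" * 32).decode(),
                         services="b", resource_types="sco",
                         permissions="rl", expiry="2016-04-01T00:00:00Z")
print(token)
```

The `sip` and `spr` parameters are the IP ACL and protocol restrictions mentioned above: the service rejects requests arriving from outside the signed IP range or over a protocol other than the signed one.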

To learn more and get access to the source code, please visit our GitHub repo.

Storage Python Preview Update

We are pleased to announce the latest preview (0.30) of the Azure Storage Python client library. This version includes all features of the 2015-04-05 REST version, including support for append blobs, Azure File storage, account SAS, JSON table formatting, and much more.

To learn more, please visit our getting started documentation and review our latest documentation, upgrade guide, usage samples and breaking changes log.

Azure Storage Explorer

We are happy to announce the latest public preview of the Azure Storage Explorer. This release adds support for Table storage (including export to a CSV file), Queue storage, and account SAS, along with an updated UI experience.

For more information and to download the explorer for the Windows/Linux/Mac platforms, please visit www.storageexplorer.com.

Documentation and Samples Updates

Storage Security Guide

Azure Storage provides a comprehensive set of security capabilities which enable developers to build secure applications. You can secure the management of your storage account, encrypt the storage objects in transit, encrypt the data stored in the storage account and much more. The Azure Storage Security Guide provides an overview of these security features and pointers to resources providing deeper knowledge.

To learn more, see the Storage Security Guide.

Storage Samples

The Azure Storage team continues to strive to improve the end-user experience for developers. We have recently developed a standardized set of samples that are easy to discover and let you get started in just 5 minutes. The samples are well documented, fully functional, community-friendly, and accessible from a centralized landing page that allows you to find the samples you need for the platform you use. The code is open source and readily usable from GitHub, making it possible for the community to contribute to the samples repository.

To get started with the samples, please visit our storage samples landing page.

 

Finally, if you are new to Azure Storage, please check out the Azure Storage documentation page. It’s the quickest way to learn and start using Azure Storage.

Thanks
Azure Storage Team

(Cross Post) Announcing the preview of Azure Storage Service Encryption for data at rest


We are excited to announce the preview of Azure Storage Service Encryption for data at rest. This capability is one of the features most requested by enterprise customers looking to protect sensitive data as part of their regulatory or compliance needs.

Storage Service Encryption automatically encrypts your Azure Blob storage data prior to persisting to storage, and decrypts prior to retrieval. The encryption, decryption and key management is transparent to users, requires no change to your applications, and frees your engineering team from having to implement complex key management processes.

This capability is supported for all blob types in Azure Blob storage (block blobs, append blobs, and page blobs) and is enabled through configuration on each storage account. It is available for storage accounts created through the Azure Resource Manager (ARM). All data is encrypted using 256-bit AES encryption, also known as AES-256, one of the strongest block ciphers available. Customers can enable this feature on all available redundancy types of Azure Storage (LRS, ZRS, GRS, and RA-GRS), and Storage Service Encryption is supported for both Standard and Premium Storage. There is no additional charge for enabling this feature.

As with most previews, this should not be used for production workloads until the feature becomes generally available.

To learn more please visit Storage Service Encryption.
