
(Cross-Post) Microsoft Azure Storage Explorer preview: March update

(Originally posted at: https://azure.microsoft.com/en-us/blog/storage-explorer-march-update/)

Today we’re happy to announce the March update of Microsoft Azure Storage Explorer (preview), which includes support for Tables and Queues.

After our first release, we received hundreds of requests asking for Table and Queue support. Based on this feedback, we’re extremely excited to share this new version of Storage Explorer with the following features:

  • Table support
  • Queue support
  • SAS features, including SAS support for Storage Accounts
  • Performance improvements
  • Updated look and feel
  • Update notifications

Tables

For tables, you’ll be able to view the entities inside a table as well as write queries against them. You can also easily insert common query snippets, such as filtering by partition key and row key, or retrieving entities based on a Timestamp range.

[Image: Storage Explorer query]

Once you find the entity or entities you’re looking for, you can manually edit their property values or delete them. Lastly, you can export the contents of your table to a CSV file, or import existing CSV files into any table. You can also copy tables from one Storage Account to another if you’d prefer to keep the transfer server-side.
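
If you script against the same table outside of Storage Explorer, the query snippets map directly onto the storage client’s filter helpers. Here is a minimal PowerShell sketch using the Azure.Storage module and the underlying .NET table client; the account name, key, table name, and key values are placeholders:

$ctx   = New-AzureStorageContext -StorageAccountName "myaccount" -StorageAccountKey $key
$table = Get-AzureStorageTable -Name "mytable" -Context $ctx

# Filter by partition key and row key, like the built-in query snippet
$pkFilter = [Microsoft.WindowsAzure.Storage.Table.TableQuery]::GenerateFilterCondition("PartitionKey", [Microsoft.WindowsAzure.Storage.Table.QueryComparisons]::Equal, "partition1")
$rkFilter = [Microsoft.WindowsAzure.Storage.Table.TableQuery]::GenerateFilterCondition("RowKey", [Microsoft.WindowsAzure.Storage.Table.QueryComparisons]::Equal, "row1")

$query = New-Object Microsoft.WindowsAzure.Storage.Table.TableQuery
$query.FilterString = [Microsoft.WindowsAzure.Storage.Table.TableQuery]::CombineFilters($pkFilter, [Microsoft.WindowsAzure.Storage.Table.TableOperators]::And, $rkFilter)

# Execute the query and return the matching entities
$table.CloudTable.ExecuteQuery($query)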

Queues

For queues, we focused on the basic features. You can peek at up to the first 32 messages in a queue. From there you can view a specific message, enqueue new messages, dequeue the top message, or clear the entire queue.
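
The same peek operation is available programmatically. A small PowerShell sketch, again via the Azure.Storage module (the account, key, and queue names are placeholders):

$ctx   = New-AzureStorageContext -StorageAccountName "myaccount" -StorageAccountKey $key
$queue = Get-AzureStorageQueue -Name "myqueue" -Context $ctx

# Peek at up to 32 messages without dequeuing them
$queue.CloudQueue.PeekMessages(32) | Select-Object Id, InsertionTime, AsString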

SAS features

Both tables and queues support the same SAS functionality as blob containers: you can create SAS URIs for queues and tables, and also connect to a specific queue or table by providing a SAS key.

With this release, you can generate Shared Access Signatures for Storage Accounts, and you can connect to a Storage Account by providing its SAS URI. The SAS generation and connection features are also available for Tables and Queues.

To generate a SAS URI, simply right-click on the Storage Account and select “Get Shared Access Signature…”; to attach the resource, right-click on the parent “Storage Accounts” node and select “Attach Account using SAS.”
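
If you prefer to generate the account SAS outside the tool, the equivalent PowerShell sketch below uses the Azure.Storage module’s account SAS cmdlet; the permissions, lifetime, and account values are illustrative assumptions, so scope them to your scenario:

$ctx = New-AzureStorageContext -StorageAccountName "myaccount" -StorageAccountKey $key

# Read/list access to the Blob, Table, and Queue services for one day
$sas = New-AzureStorageAccountSASToken -Service Blob,Table,Queue -ResourceType Service,Container,Object -Permission "rl" -ExpiryTime (Get-Date).AddDays(1) -Context $ctx

# Paste the resulting token into the "Attach Account using SAS" dialog
$sas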

[Image: Storage Explorer - attach with SAS]

[Image: Storage Explorer - SAS dialog]

Update notifications

Lastly, starting with this version of Storage Explorer, you’ll receive a notification when a new update of the application is available. The notification appears as an infobar message linking to the latest version on storageexplorer.com.

Summary

While we’re excited to finally share these features with you, our work is not done yet – we haven’t forgotten about File Shares! We’ll also continue to add features to Blob Containers, Tables, and Queues. If you have any suggestions or requests for features you’d like to see in Storage Explorer, you can send us feedback directly from the application.

[Image: Storage Explorer Feedback]

Let us know what you think!

-The Storage Explorer Team


(Cross-Post) Introducing Azure Cool Blob Storage

Originally posted in Microsoft Azure Blog.

Data in the cloud is growing at an exponential pace, and we have been working on ways to help you manage the cost of storing this data. An important aspect of managing storage costs is tiering your data based on attributes such as frequency of access and retention period. A common tier of customer data is cool data, which is infrequently accessed but requires latency and performance similar to hot data.

Today, we are excited to announce the general availability of Cool Blob Storage – low-cost storage for cool object data. Example use cases for cool storage include backups, media content, scientific data, and compliance and archival data. In general, any data which lives for a longer period of time and is accessed less than once a month is a perfect candidate for cool storage.

With the new Blob storage accounts, you will be able to choose between Hot and Cool access tiers to store object data based on its access pattern. Capabilities of Blob storage accounts include:

  • Cost effective: You can now store your less frequently accessed data in the Cool access tier at a low storage cost (as low as $0.01 per GB in some regions), and your more frequently accessed data in the Hot access tier at a lower access cost. For more details on regional pricing, see Azure Storage Pricing.
  • Compatibility: We have designed Blob storage accounts to be 100% API compatible with our existing Blob storage offering which allows you to make use of the new storage accounts in existing applications seamlessly.
  • Performance: Data in both access tiers has a similar performance profile in terms of latency and throughput.
  • Availability: The Hot access tier guarantees high availability of 99.9% while the Cool access tier offers a slightly lower availability of 99%. With the RA-GRS redundancy option, we provide a higher read SLA of 99.99% for the Hot access tier and 99.9% for the Cool access tier.
  • Durability: Both access tiers provide the same high durability that you have come to expect from Azure Storage and the same data replication options that you use today.
  • Scalability and Security: Blob storage accounts provide the same scalability and security features as our existing offering.
  • Global reach: Blob storage accounts are available for use starting today in most Azure regions with additional regions coming soon. You can find the updated list of available regions on the Azure Services by Regions page.

For more details on how to start using this feature, please see our getting started documentation.
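
As a quick illustration, here is a minimal Azure PowerShell sketch for creating a Blob storage account with the Cool access tier. It assumes a recent AzureRM.Storage module (the SKU parameter name changed across module versions), and the resource group, account name, and region are placeholders:

# Create a Blob storage account whose default access tier is Cool
New-AzureRmStorageAccount -ResourceGroupName "myresourcegroup" -Name "mycoolaccount" -Location "West US" -SkuName Standard_LRS -Kind BlobStorage -AccessTier Cool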

Many of you use Azure Storage via partner solutions as part of your existing data infrastructure. Here are updates from some of our partners on their support for Cool storage:

  • Commvault: Commvault’s Windows/Azure Centric “Commvault Integrated Solutions Portfolio” software solution enables a single solution for enterprise data management. Commvault’s native support for Azure has been a key benefit for customers considering a move to Azure and Commvault remains committed to continuing our integration and compatibility efforts with Microsoft, befitting a close relationship between the companies that has existed for over 17 years. With this new Cool Storage offering, Microsoft again makes significant enhancements to their Azure offering and we expect that this service will be an important driver of new opportunities for both Commvault and Microsoft.
  • Veritas: Market leader Veritas NetBackup™ protects enterprise data on a global scale in both management and performance – for any workload, on any storage device, located anywhere. The proven global enterprise capabilities in NetBackup converge on- and off-premises data protection with scalable, cloud-ready solutions to cover any use case. In concert with the Microsoft announcement of Azure Cool storage, Veritas is announcing beta availability of the integrated Azure Cloud Connector in NetBackup 8.0 Beta, which enables customers to test and experience the ease of use, manageability, and performance of leveraging Azure Storage as a key component of their enterprise hybrid cloud data protection strategy. Click here to go to the NetBackup 8.0 Beta registration and download website.
  • SoftNAS: SoftNAS™ will soon be supporting Azure Cool storage. SoftNAS Cloud® NAS customers will get a virtually bottomless storage pool for applications and workloads that need standard file protocols like NFS, CIFS/SMB, and iSCSI. By summer 2016, customers can leverage SoftNAS Cloud NAS with Azure Cool storage as an economical alternative to increasing storage costs. SoftNAS helps customers make the cloud move without changing applications while providing enterprise-class NAS features like deduplication, compression, directory integration, encryption, snapshotting, and much more. The SoftNAS StorageCenter™ console will provide a central means to choose the optimal file storage location, ranging from hot (block-backed) to cool (Blob-object backed), and enables content movement to where it makes sense over the data lifecycle.
  • Cohesity: Cohesity delivers the world’s first hyper-converged storage system for enterprise data.  Cohesity consolidates fragmented, inefficient islands of secondary storage into an infinitely expandable and limitless storage platform that can run both on-premises and in the public cloud.  Designed with the latest web-scale distributed systems technology, Cohesity radically simplifies existing backup, file shares, object, and dev/test storage silos by creating a unified, instantly-accessible storage pool.  The Cohesity platform seamlessly interoperates with Azure Cool storage for three primary use cases:  long-term data retention and archival, tiering of infrequently-accessed data into the cloud, and replication to provide disaster recovery. Azure Cool storage can be easily registered and assigned via Cohesity’s policy-based administration portal to any data protection workload running on the Cohesity platform.
  • CloudBerry Lab: CloudBerry Backup for Microsoft Azure is designed to automate data backup to Microsoft Azure cloud storage. It is capable of compressing and encrypting the data with a user-controlled password before the data leaves the computer. It then securely transfers it to the cloud either on schedule or in real time. CloudBerry Backup also comes with file-system and image-based backup, SQL Server and MS Exchange support, as well as flexible retention policies and incremental backup. CloudBerry Backup now supports Azure Blob storage accounts for storing backup data.

The list of partners integrating with cool storage will continue to grow in the coming months.

As always, we look forward to your feedback and suggestions.

Thanks,

The Azure Storage Team.

Azure Storage PowerShell v.1.7 – Hotfix to v1.4 Breaking Changes

Breaking changes were introduced in Azure PowerShell v1.4. These breaking changes are present in Azure PowerShell versions 1.4-1.6 and versions 2.0 and later. The following Azure Storage cmdlets were impacted:

  • Get-AzureRmStorageAccountKey – Accessing Keys.
  • New-AzureRmStorageAccountKey – Accessing Keys.
  • New-AzureRmStorageAccount – Specifying Account Type and Endpoints.
  • Get-AzureRmStorageAccount – Specifying Account Type and Endpoints.
  • Set-AzureRmStorageAccount – Specifying Account Type and Endpoints.

To minimize the impact of these breaking changes, we are releasing Azure PowerShell v1.7 – a hotfix that addresses all of the breaking changes with the exception of the Endpoint properties of New-AzureRmStorageAccount, Get-AzureRmStorageAccount, and Set-AzureRmStorageAccount. This means that where the hotfix applies, no code change is required. The hotfix will not be present in Azure PowerShell versions 2.0 and later, so please plan to update your usage of the above cmdlets when you move to Azure PowerShell v2.0.
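
If you are unsure which behavior to expect in your environment, check the installed module version first. A quick check, assuming the AzureRM rollup module is installed:

# Show the installed AzureRM module version
Get-Module -ListAvailable -Name AzureRM | Select-Object Name, Version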

Below, you’ll find examples for how the above cmdlets work for different versions of Azure PowerShell and the action required:

Accessing Keys with Get-AzureRmStorageAccountKey and New-AzureRmStorageAccountKey

V1.3.2 and earlier:

$key = (Get-AzureRmStorageAccountKey -ResourceGroupName $groupname -Name $accountname).Key1

$key = (Get-AzureRmStorageAccountKey -ResourceGroupName $groupname -Name $accountname).Key2

$key = (New-AzureRmStorageAccountKey -ResourceGroupName $groupname -Name $accountname -KeyName $keyname).StorageAccountKeys.Key1

$key = (New-AzureRmStorageAccountKey -ResourceGroupName $groupname -Name $accountname -KeyName $keyname).StorageAccountKeys.Key2

V1.4-V1.6 and V2.0 and later:

Get-AzureRmStorageAccountKey now returns a list of keys rather than an object with a property per key, and the keys returned by New-AzureRmStorageAccountKey are accessed through its Keys list.

# Replaces Key1
$key = (Get-AzureRmStorageAccountKey -ResourceGroupName $groupname -Name $accountname)[0].Value

# Replaces Key2
$key = (Get-AzureRmStorageAccountKey -ResourceGroupName $groupname -Name $accountname)[1].Value

# Replaces Key1
$key = (New-AzureRmStorageAccountKey -ResourceGroupName $groupname -Name $accountname -KeyName $keyname).Keys[0].Value

# Replaces Key2
$key = (New-AzureRmStorageAccountKey -ResourceGroupName $groupname -Name $accountname -KeyName $keyname).Keys[1].Value

V1.7 (Hotfix):

Both methods work.

$key = (Get-AzureRmStorageAccountKey -ResourceGroupName $groupname -Name $accountname).Key1

$key = (Get-AzureRmStorageAccountKey -ResourceGroupName $groupname -Name $accountname)[0].Value

$key = (New-AzureRmStorageAccountKey -ResourceGroupName $groupname -Name $accountname -KeyName $keyname).StorageAccountKeys.Key1

$key = (New-AzureRmStorageAccountKey -ResourceGroupName $groupname -Name $accountname -KeyName $keyname).Keys[0].Value

Specifying Account Type in New-AzureRmStorageAccount, Get-AzureRmStorageAccount, and Set-AzureRmStorageAccount

V1.3.2 and earlier:

$AccountType = (Get-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).AccountType

$AccountType = (New-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).AccountType

$AccountType = (Set-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).AccountType

V1.4-V1.6 and V2.0 and later:

The AccountType field in the output of these cmdlets has been renamed to Sku.Name.

$AccountType = (Get-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).Sku.Name

$AccountType = (New-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).Sku.Name

$AccountType = (Set-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).Sku.Name

V1.7 (Hotfix):

Both methods work.

$AccountType = (Get-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).AccountType

$AccountType = (New-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).AccountType

$AccountType = (Set-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).AccountType

$AccountType = (Get-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).Sku.Name

$AccountType = (New-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).Sku.Name

$AccountType = (Set-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).Sku.Name

Specifying Endpoints in New-AzureRmStorageAccount, Get-AzureRmStorageAccount, and Set-AzureRmStorageAccount

V1.3.2 and earlier:

$blobEndpoint = (Get-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).PrimaryEndpoints.Blob.AbsolutePath

$blobEndpoint = (New-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).PrimaryEndpoints.Blob.AbsolutePath

$blobEndpoint = (Set-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).PrimaryEndpoints.Blob.AbsolutePath

V1.4-V1.6 and V2.0 and later:

The output type of the PrimaryEndpoints and SecondaryEndpoints properties (Blob, Table, Queue, File) changed from Uri to String.

$blobEndpoint = (Get-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).PrimaryEndpoints.Blob

$blobEndpoint = (New-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).PrimaryEndpoints.Blob

$blobEndpoint = (Set-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).PrimaryEndpoints.Blob

Note: Calling ToString() on these endpoint properties continues to work across versions. For example:

$blobEndpoint = (Get-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).PrimaryEndpoints.Blob.ToString()

V1.7 (Hotfix):

No hotfix was provided for this breaking change; the endpoint properties on the return value remain strings, as changing them back to Uri would introduce an additional break.

Thanks,

Microsoft Azure Storage Team

Announcing the General Availability of Storage Service Encryption for Data at Rest

Storage Service Encryption for Azure Blob Storage helps you address organizational security and compliance requirements by encrypting your Blob storage (Block Blobs, Page Blobs and Append Blobs).

Today, we are excited to announce the General Availability of Storage Service Encryption for Azure Blob Storage. You can enable this feature on any Azure Resource Manager storage account using the Azure Portal, Azure PowerShell, Azure CLI, or the Microsoft Azure Storage Resource Provider API.

Microsoft Azure Storage handles all the encryption, decryption and key management in a totally transparent fashion. All data is encrypted using 256-bit AES encryption, also known as AES-256, one of the strongest block ciphers available. Customers can enable this feature on all available redundancy types of Azure Storage – LRS, GRS, ZRS, RA-GRS and Premium-LRS for all Azure Resource Manager Storage accounts and Blob Storage accounts. There is no additional charge for enabling this feature.

Note that SSE encrypts data when blobs are written or updated. This means that when you enable SSE on an existing storage account, only new writes are encrypted; existing data is not encrypted retroactively.
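
For example, enabling the feature with Azure PowerShell looks roughly like the sketch below; it assumes a recent AzureRM.Storage module, and the resource group and account names are placeholders:

# Enable Storage Service Encryption for the Blob service
Set-AzureRmStorageAccount -ResourceGroupName "myresourcegroup" -Name "myaccount" -EnableEncryptionService Blob

# Verify the setting (the property path may vary slightly by module version)
(Get-AzureRmStorageAccount -ResourceGroupName "myresourcegroup" -Name "myaccount").Encryption.Services.Blob.Enabled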

Find out more about Storage Service Encryption with Service Managed Keys.

(Cross Post) Announcing Azure Storage Client Library GA for Xamarin

We are pleased to announce the general availability release of the Azure Storage client library for Xamarin. Xamarin is a leading mobile app development platform that allows developers to use a shared C# codebase to create iOS, Android, and Windows Store apps with native user interfaces. We believe the Azure Storage library for Xamarin will be instrumental in helping provide delightful developer experiences and enabling an end-to-end mobile-first, cloud-first experience. We would like to thank everyone who has leveraged previews of Azure Storage for Xamarin and provided valuable feedback.

The sources for the Xamarin release are the same as for the Azure Storage .NET client library and can be found on GitHub. The installable package can be downloaded from NuGet (version 7.2 and beyond) or from the Azure SDK (version 2.9.5 and beyond) and installed via the Web Platform Installer. This generally available release supports all features up to and including the 2015-12-11 REST version.

Getting started is very easy. Simply follow the steps below:

  1. Install the Xamarin SDK and tools, plus any platform-specific emulators as necessary; for instance, you can install the Android KitKat emulator.
  2. Create a new Xamarin project, install the Azure Storage NuGet package version 7.2 or higher in your project, and add your Storage-specific code.
  3. Compile, build, and run the solution. You can run against a phone emulator or an actual device, and you can connect to either the Azure Storage service or the Azure Storage emulator.

Please see our Getting Started docs and the reference documentation to learn how you can get started with the Xamarin client library and build applications that leverage Azure Storage features.

We currently support shared asset projects (e.g., Native Shared, Xamarin.Forms Shared), Xamarin.iOS, and Xamarin.Android projects. This Storage library leverages the .NET Standard runtime library, which can run on Windows, Linux, and macOS. Learn about the .NET Standard library and .NET Core, and about Xamarin support for .NET Standard.

As always, we continue to do our work in the public GitHub development branch for visibility and transparency. We are working on building code samples in our Azure Storage samples repository to help you better leverage the Azure Storage service and the Xamarin library capabilities. A Xamarin image uploader sample is already available for you to review or download. If you have any requests for specific scenarios you’d like to see as samples, please let us know, or feel free to contribute as a valued member of the developer community. Community feedback is very important to us.

Enjoy the Xamarin Azure Storage experience!

Thank you

Dinesh Murthy, Michael Roberson, Michael Curd, Elham Rezvani, Peter Marino and the Azure Storage Team.

General availability: Azure cool blob storage in additional regions

Azure Blob storage accounts with hot and cool storage tiers are generally available in six new regions: US East, US West, Germany Central, Germany Northeast, Australia Southeast, and Brazil South. You can find the updated list of available regions on the Azure services by region page.

Blob storage accounts are specialized storage accounts for storing your unstructured data as blobs (objects) in Azure Storage. With Blob storage accounts, you can choose between hot and cool storage tiers to store your less frequently accessed (cool) data at a lower storage cost, and store more frequently accessed (hot) data at a lower access cost.

Customers in the new regions can take advantage of the cost benefits of the cool storage tier for storing backup data, media content, scientific data, active archival data—and in general, any data that is less frequently accessed. For details on how to start using this feature, please see our getting-started documentation.
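
If you already have a Blob storage account, you can also switch its access tier later. A minimal Azure PowerShell sketch, assuming the AzureRM.Storage module (names are placeholders; changing the tier has pricing implications, which -Force acknowledges):

# Move an existing Blob storage account to the Cool access tier
Set-AzureRmStorageAccount -ResourceGroupName "myresourcegroup" -Name "myblobaccount" -AccessTier Cool -Force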

For details on regional pricing, see the Azure Storage pricing page.

(Cross-Post) General Availability: Larger Block Blobs in Azure Storage

Originally posted in the Microsoft Azure Blog.

Azure Blob Storage is a massively scalable object storage solution capable of storing and serving tens to hundreds of petabytes of data per customer across a diverse set of data types including media, documents, log files, scientific data and much more. Many of our customers use Blobs to store very large data sets, and have requested support for larger files. The introduction of larger Block Blobs increases the maximum file size from 195 GB to 4.77 TB. The increased blob size better supports a diverse range of scenarios, from media companies storing and processing 4K and 8K videos to cancer researchers sequencing DNA.

Azure Block Blobs have always been mutable, allowing a customer to insert, upload or delete blocks of data without needing to upload the entire blob. With the new larger block blob size, mutability offers even more significant performance and cost savings, especially for workloads where portions of a large object are frequently modified. For a deeper dive into the Block Blobs service including object mutability, please view this video from our last Build Conference. The REST API documentation for Put Block and Put Block List also covers object mutability.

We have increased the maximum allowable block size from 4 MB to 100 MB, while maintaining support for up to 50,000 blocks committed to a single Blob. Range GETs continue to be supported on larger Block Blobs allowing high speed parallel downloads of the entire Blob, or just portions of the Blob. You can immediately begin taking advantage of this improvement in any existing Blob Storage or General Purpose Storage Account across all Azure regions.

Larger Block Blobs are supported by the most recent release of the .NET Client Library (version 8.0.0), with support for Java, Node.js and AzCopy rolling out over the next few weeks. You can also directly use the REST API as always. Larger Block Blobs are supported by REST API version 2016-05-31 and later. There is nothing new to learn about the APIs, so you can start uploading larger Block Blobs right away.
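
To sketch what this looks like in practice, the snippet below drives the .NET client library (WindowsAzure.Storage 8.0.0 or later) from PowerShell to upload a large file with 100 MB blocks; the DLL path, connection string, container, and file names are placeholders:

# Load the storage client library and connect
Add-Type -Path ".\Microsoft.WindowsAzure.Storage.dll"
$account   = [Microsoft.WindowsAzure.Storage.CloudStorageAccount]::Parse($connectionString)
$container = $account.CreateCloudBlobClient().GetContainerReference("mycontainer")
$blob      = $container.GetBlockBlobReference("bigfile.bin")

# Use 100 MB blocks; 50,000 such blocks allow blobs up to ~4.77 TB
$blob.StreamWriteSizeInBytes = 100MB
$blob.UploadFromFile("C:\data\bigfile.bin")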

This size increase only applies to Block Blobs, and the maximum size of Append Blobs (195 GB) and Page Blobs (1 TB) remains unchanged. There are no billing changes. To get started using Azure Storage Blobs, please see our getting started documentation, or reference one of our code samples.

(Cross-Post) New Azure Storage Release – Larger Block Blobs, Incremental Copy, and more!

Originally posted in the Microsoft Azure Blog.

We are pleased to announce new capabilities in the latest Azure Storage Service release and updates to our Storage Client Libraries. This latest release allows users to take advantage of increased block sizes of 100 MB, which allows block blobs up to 4.77 TB, as well as features like incremental copy for page blobs and pop-receipt on add message.

REST API version 2016-05-31

Version 2016-05-31 includes these changes:

  • The maximum blob size has been increased to 4.77 TB with the increase of block size to 100 MB. Check out our previous announcement for more details.
  • The Put Message API now returns information about the message that was just added, including the pop receipt. This enables you to call Update Message and Delete Message on the newly enqueued message.
  • The public access level of a container is now returned from the List Containers and Get Container Properties APIs. Previously this information could only be obtained by calling Get Container ACL.
  • The List Directories and Files API now accepts a new parameter that limits the listing to a specified prefix.
  • All Table Storage APIs now accept and enforce the timeout query parameter.
  • The stored Content-MD5 property is now returned when requesting a range of a blob or file. Previously this was only returned for full blob and file downloads.
  • A new Incremental Copy Blob API is now available. This allows efficient copying and backup of page blob snapshots.
  • Using If-None-Match: * will now fail when reading a blob. Previously this header was ignored for blob reads.
  • During authentication, the canonicalized header list now includes headers with empty values. Previously these were omitted from the list.
  • Several error messages have been clarified or made more specific. See the full list of changes in the REST API Reference.

Check out the REST API Reference documentation to learn more.
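
As one concrete example, the pop receipt returned by Put Message surfaces in the .NET library as the message object being usable immediately after it is added. A hedged PowerShell sketch (queue name and context are placeholders, and the behavior assumes a client library built for REST version 2016-05-31):

$queue = Get-AzureStorageQueue -Name "myqueue" -Context $ctx
$msg   = New-Object Microsoft.WindowsAzure.Storage.Queue.CloudQueueMessage("hello")
$queue.CloudQueue.AddMessage($msg)

# The service now returns a pop receipt on add, so the freshly
# enqueued message can be updated or deleted without a dequeue
$msg.PopReceipt
$queue.CloudQueue.DeleteMessage($msg)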

New client library features

.NET Client Library (version 8.0.1)

  • All the service features listed above
  • Support for portable class library (through the NetStandard 1.0 Façade)
  • Key rotation for client-side encryption for blobs, tables, and queues

For a complete list of changes, check out the change log in our GitHub repository.

Storage Emulator

  • All the service features listed above

The storage emulator v4.6 is available as part of the latest Microsoft Azure SDK. You can also install the storage emulator using the standalone installer.

We’ll also be releasing new client libraries for Java, C++, Python and NodeJS to support the latest REST version in the next few weeks along with a new AzCopy release. Stay tuned!

(Cross-Post) Announcing Azure Storage Data Movement Library 0.2.0

In the previous announcement post for DMLib 0.1.0, we committed that the next release of the Data Movement Library would support more advanced features. Great news: those features are now available and include the following:

  • Download, upload, and copy directories (local file directories, Azure Blob virtual directories, Azure File directories)
  • Transfer directories in recursive mode
  • Transfer directories in flat mode (local file directories)
  • Specify the search pattern when copying files and directories
  • An event that reports the transfer result of each individual file in a transfer
  • Downloading snapshots under directories
  • TransferConfigurations.UserAgentSuffix has been changed to TransferConfigurations.UserAgentPrefix

With these new features, you can perform data movement at the Blob container and Blob virtual directory level, or the File share and File directory level.
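
As a rough sketch of a recursive directory upload with a search pattern, the snippet below drives the library from PowerShell. The type and member names (UploadDirectoryOptions, TransferManager.UploadDirectoryAsync) reflect our understanding of the library’s .NET surface for this release, and all paths and names are placeholders, so treat this as a sketch:

# Load the Data Movement Library and the storage client it depends on
Add-Type -Path ".\Microsoft.WindowsAzure.Storage.DataMovement.dll"
$account   = [Microsoft.WindowsAzure.Storage.CloudStorageAccount]::Parse($connectionString)
$container = $account.CreateCloudBlobClient().GetContainerReference("backups")
$blobDir   = $container.GetDirectoryReference("vhds")

# Recursively upload *.vhd files from a local directory
$options = New-Object Microsoft.WindowsAzure.Storage.DataMovement.UploadDirectoryOptions
$options.Recursive     = $true
$options.SearchPattern = "*.vhd"

$task = [Microsoft.WindowsAzure.Storage.DataMovement.TransferManager]::UploadDirectoryAsync("C:\data\vhds", $blobDir, $options, $null)
$task.Wait()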

We are actively adding more code samples to the GitHub repository, and any community contributions to these code samples are highly appreciated.

You can install the Azure Storage Data Movement Library from NuGet or download the source code from GitHub. For more details, please read the Getting Started documentation.

As always, we look forward to your feedback, so please don’t hesitate to utilize the comments section below.

Thanks!

Azure Storage Team

New Azure Storage JavaScript client library for browsers – Preview

Today we are announcing our newest library: Azure Storage Client Library for JavaScript. The demand for the Azure Storage Client Library for Node.js, as well as your feedback, has encouraged us to work on a browser-compatible JavaScript library to enable web development scenarios with Azure Storage. With that, we are now releasing the preview of Azure Storage JavaScript Client Library for Browsers.

Enables web development scenarios

The JavaScript Client Library for Azure Storage enables many web development scenarios using storage services like Blob, Table, Queue, and File, and is compatible with modern browsers: think of a web-based game that stores state information in the Table service, a mobile app that uploads photos to a Blob account, or an entire website backed by dynamic data stored in Azure Storage.

As part of this release, we have also reduced the footprint by packaging each of the service APIs in a separate JavaScript file. For instance, a developer who needs access to Blob storage only needs to require the following scripts:

<script type="text/javascript" src="azure-storage.common.js"></script>
<script type="text/javascript" src="azure-storage.blob.js"></script>

Full service coverage

The new JavaScript Client Library for Browsers supports all the storage features available in the latest REST API version 2016-05-31 since it is built with Browserify using the Azure Storage Client Library for Node.js. All the service features you would find in our Node.js library are supported. You can also use the existing API surface, and the Node.js Reference API documents to build your app!

Built with Browserify

Browsers today don’t support the require method, which is essential in every Node.js application. Hence, including a JavaScript file written for Node.js won’t work in browsers. One popular solution to this problem is Browserify. The Browserify tool bundles your required dependencies into a single JS file for you to use in web applications. It is as simple as installing Browserify and running browserify node.js -o browser.js, and you are set. However, we have already done this for you. Simply download the JavaScript Client Library.

Recommended development practices

We highly recommend the use of SAS tokens for authenticating with Azure Storage, since the JavaScript Client Library exposes the authentication token to the user in the browser. A SAS token with limited scope and time is highly recommended. In an ideal web application, the backend application authenticates users when they log on, and then provides a SAS token to the client for authorizing access to the Storage account. This removes the need to authenticate using an account key. Check out the Azure Function sample in our GitHub repository that generates a SAS token upon an HTTP POST request.

Use of the stream APIs is highly recommended due to the browser sandbox, which blocks users from accessing the local filesystem. This makes file-based APIs like getBlobToLocalFile and createBlockBlobFromLocalFile unusable in browsers. See the samples in the links below, which use the createBlockBlobFromStream API instead.

Sample usage

Once you have a web app that can generate a limited-scope SAS token, the rest is easy! Download the JavaScript files from the repository on GitHub and include them in your code.

Here is a simple sample that can upload a blob from a given text:

1. Insert the following script tags in your HTML code. Make sure the JavaScript files are located in the same folder.

<script src="azure-storage.common.js"></script>
<script src="azure-storage.blob.js"></script>

2. Let’s now add a few items to the page to initiate the transfer. Add the following tags inside the BODY tag. Notice that the button calls the uploadBlobFromText method when clicked. We will define this method in the next step.

<input type="text" id="text" name="text" value="Hello World!" />
<button id="upload-button" onclick="uploadBlobFromText()">Upload</button>

3. So far, we have included the client library and added the HTML code to show the user a text input and a button to initiate the transfer. When the user clicks on the upload button, uploadBlobFromText will be called. Let’s define that now:

<script>
function uploadBlobFromText() {
    // your account and SAS information
    var sasKey = "....";
    var blobUri = "http://<accountname>.blob.core.windows.net";
    var blobService = AzureStorage.createBlobServiceWithSas(blobUri, sasKey)
        .withFilter(new AzureStorage.ExponentialRetryPolicyFilter());
    var text = document.getElementById('text');
    var btn = document.getElementById("upload-button");
    blobService.createBlockBlobFromText('mycontainer', 'myblob', text.value, function(error, result, response) {
        if (error) {
            alert('Upload failed; open the browser console for more detailed info.');
            console.log(error);
        } else {
            alert('Upload succeeded!');
        }
    });
}
</script>

Of course, it is not that common to upload blobs from text. See the following samples for uploading from stream as well as a sample for progress tracking.

•    JavaScript Sample for Blob
•    JavaScript Sample for Queue
•    JavaScript Sample for Table
•    JavaScript Sample for File

Share

Finally, join our Slack channel to share with us your scenarios, issues, or anything, really. We’ll be there to help!

(Cross-Post) Build 2016: Azure Storage announcements

It’s time for Build 2016, and the Azure Storage team has several exciting announcements to make. This blog post provides an overview of new announcements and updates on existing programs. We hope that these new features and updates will enable you to make better use of Azure Storage for your services, applications and other needs.

Preview Program Announcements

Storage Service Encryption Preview

Storage Service Encryption helps you address organizational security and compliance requirements by automatically encrypting data in Blob Storage, including block blobs, page blobs, and append blobs. Azure Storage handles all the encryption, decryption, and key management in a transparent fashion using AES 256-bit encryption, one of the strongest encryption ciphers available. There is no additional charge for enabling this feature.

Access to the preview program can be requested by registering your subscription using the Azure Portal or Azure PowerShell. Once your subscription has been approved, you can create a new storage account using the Azure Portal and enable the feature.
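
Registration from PowerShell might look like the sketch below; the feature name shown is illustrative only (an assumption), so use the name given in the preview documentation:

# Request access to the preview for your subscription
Register-AzureRmProviderFeature -FeatureName "EncryptionAtRest" -ProviderNamespace "Microsoft.Storage"

# Check the registration state
Get-AzureRmProviderFeature -FeatureName "EncryptionAtRest" -ProviderNamespace "Microsoft.Storage"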

To learn more about this feature, please see Getting started with Storage Service Encryption.

Near Term Roadmap Announcements

GetPageRanges API for copying incremental snapshots

The Azure Storage team will soon be adding a new feature to the GetPageRanges API for page blobs, which will allow you to build faster and more efficient backup solutions for Azure virtual machines. The API will return the list of changes between the base blob and its snapshots, allowing you to identify and copy only the changes unique to each snapshot. This will significantly reduce the amount of data you need to transfer during incremental backups of virtual machine disks. The API will support page blobs on premium storage as well as standard storage. The feature will be available in April 2016 via the REST API and the .NET client library, with support for more client libraries to follow.

Azure Import/Export

Azure Import/Export now supports up to 8 TB hard drives in all regions where the service is offered. In addition, Azure Import/Export will be coming to Japan and Australia in summer 2016. With this launch, customers who have storage accounts in Japan or Australia can ship disks to a domestic address within the region rather than shipping to other regions.

Azure Backup support for Azure Premium Storage

Azure Premium Storage is ideal for running IO intensive applications on Azure VMs. Azure Backup Service delivers a powerful and affordable cloud backup solution, and will be adding support for Azure Premium Storage. You can protect your critical applications running on Premium Storage VMs with the help of Azure Backup service.

Learn more about Azure Backup and Premium Storage.

Client Library and Tooling Updates

Java Client-Side Encryption GA

We are pleased to announce the general availability of the client-side encryption feature in our Azure Storage Java client library. This allows developers to encrypt blob, table, and queue data before sending it to Azure Storage. Additionally, integration with Azure Key Vault is supported so you can store and manage your keys in Azure Key Vault. With this release, data that is encrypted with .NET on Windows can be decrypted with Java on Linux, and vice versa.

To learn more, please visit our getting started documentation.

Storage Node.js Preview Update

We are pleased to announce the latest preview (0.10) of the Azure Storage Node.js client library. This release includes a richer developer experience, full support for account SAS, and IP ACL and protocol specifications for service SAS, along with fixes addressing customer usability feedback. You can start using the preview Azure Storage Node.js library in your applications now by leveraging the storage package on npmjs.

To learn more and get access to the source code, please visit our GitHub repo.

Storage Python Preview Update

We are pleased to announce the latest preview (0.30) of the Azure Storage Python client library. This version includes all features up to the 2015-04-05 REST version, including support for append blobs, Azure File storage, account SAS, JSON table formatting, and much more.

To learn more, please visit our getting started documentation and review our latest documentation, upgrade guide, usage samples and breaking changes log.

Azure Storage Explorer

We are happy to announce the latest public preview of the Azure Storage Explorer. This release adds support for Table storage (including export to a CSV file), Queue storage, and account SAS, along with an updated UI experience.

For more information and to download the explorer for the Windows/Linux/Mac platforms, please visit www.storageexplorer.com.

Documentation and Samples Updates

Storage Security Guide

Azure Storage provides a comprehensive set of security capabilities which enable developers to build secure applications. You can secure the management of your storage account, encrypt the storage objects in transit, encrypt the data stored in the storage account and much more. The Azure Storage Security Guide provides an overview of these security features and pointers to resources providing deeper knowledge.

To learn more, see the Storage Security Guide.

Storage Samples

The Azure Storage team continues to strive towards improving the end-user experience for developers. We have recently developed a standardized set of samples that are easy to discover and enable you to get started in just 5 minutes. The samples are well documented, fully functional, community-friendly, and can be accessed from a centralized landing page that allows you to find the samples you need for the platform you use. The code is open source and readily usable from GitHub, making it possible for the community to contribute to the samples repository.

To get started with the samples, please visit our storage samples landing page.

Finally, if you are new to Azure Storage, please check out the Azure Storage documentation page. It’s the quickest way to learn and start using Azure Storage.

Thanks
Azure Storage Team


(Cross-Post) New Azure Storage Release – Larger Block Blobs, Incremental Copy, and more!

$
0
0

Originally posted in the Microsoft Azure Blog.

We are pleased to announce new capabilities in the latest Azure Storage Service release and updates to our Storage Client Libraries. This latest release allows users to take advantage of increased block sizes of 100 MB, which allows block blobs up to 4.77 TB, as well as features like incremental copy for page blobs and pop-receipt on add message.

REST API version 2016-05-31

Version 2016-05-31 includes these changes:

  • The maximum blob size has been increased to 4.77 TB with the increase of block size to 100 MB. Check out our previous announcement for more details.
  • The Put Message API now returns information about the message that was just added, including the pop receipt. This enables the you to call Update Message and Delete Message on the newly enqueued message.
  • The public access level of a container is now returned from the List Containers and Get Container Properties APIs. Previously this information could only be obtained by calling Get Container ACL.
  • The List Directories and Files API now accepts a new parameter that limits the listing to a specified prefix.
  • All Table Storage APIs now accept and enforce the timeout query parameter.
  • The stored Content-MD5 property is now returned when requesting a range of a blob or file. Previously this was only returned for full blob and file downloads.
  • A new Incremental Copy Blob API is now available. This allows efficient copying and backup of page blob snapshots.
  • Using If-None-Match: * will now fail when reading a blob. Previously this header was ignored for blob reads.
  • During authentication, the canonicalized header list now includes headers with empty values. Previously these were omitted from the list.
  • Several error messages have been clarified or made more specific. See the full list of changes in the REST API Reference.

Check out the REST API Reference documentation to learn more.

New client library features

.NET Client Library (version 8.0.1)

  • All the service features listed above
  • Support for portable class library (through the NetStandard 1.0 Façade)
  • Key rotation for client side encryption for blobs, tables/ and queues

For a complete list of changes, check out the change log in our Github repository.

Storage Emulator

  • All the service features listed above

The storage emulator v4.6 is available as part of the latest Microsoft Azure SDK. You can also install the storage emulator using the standalone installer.

We’ll also be releasing new client libraries for Java, C++, Python and NodeJS to support the latest REST version in the next few weeks along with a new AzCopy release. Stay tuned!

(Cross-Post) Build 2016: Azure Storage announcements

$
0
0

It’s time for Build 2016, and the Azure Storage team has several exciting announcements to make. This blog post provides an overview of new announcements and updates on existing programs. We hope that these new features and updates will enable you to make better use of Azure Storage for your services, applications and other needs.

Preview Program Announcements

Storage Service Encryption Preview

Storage Service Encryption helps you address organizational security and compliance requirements by automatically encrypting data in Blob Storage, including block blobs, page blobs, and append blobs. Azure Storage handles all the encryption, decryption, and key management in a transparent fashion using AES 256-bit encryption, one of the strongest encryption ciphers available. There is no additional charge for enabling this feature.

Access to the preview program can be requested by registering your subscription using Azure Portal or Azure PowerShell. Once your subscription has been approved, you can create a new storage account using the Azure Portal, and enable the feature.

To learn more about this feature, please see Getting started with Storage Service Encryption.

Near Term Roadmap Announcements

GetPageRanges API for copying incremental snapshots

The Azure Storage team will soon be adding a new feature to the GetPageRanges API for page blobs, which will allow you to build faster and more efficient backup solutions for Azure virtual machines. The API will return the list of changes between the base blob and its snapshots, allowing you to identify and copy only the changes unique to each snapshot. This will significantly reduce the amount of data you need to transfer during incremental backups of the virtual machine disks. The API will support page blobs on premium storage as well as standard storage. The feature will be available in April 2016 via the REST API and the .NET client library, with more client libraries support to follow.

Azure Import/Export

Azure Import/Export now supports up to 8 TB hard drives in all regions where the service is offered. In addition, Azure Import/Export will be coming to Japan and Australia in summer 2016. With this launch, customers who have storage accounts in Japan or Australia can ship disks to a domestic address within the region rather than shipping to other regions.

Azure Backup support for Azure Premium Storage

Azure Premium Storage is ideal for running IO intensive applications on Azure VMs. Azure Backup Service delivers a powerful and affordable cloud backup solution, and will be adding support for Azure Premium Storage. You can protect your critical applications running on Premium Storage VMs with the help of Azure Backup service.

Learn more about Azure Backup and Premium Storage.

Client Library and Tooling Updates

Java Client-Side Encryption GA

We are pleased to announce the general availability of the client-side encryption feature in our Azure Storage client Java library. This allows developers to encrypt blob, table, and queue data before sending it to Azure Storage. Additionally, integration with Azure Key Vault is supported so you can store and manage your keys in Azure Key Vault. With this release, data that is encrypted with .Net in Windows can be decrypted with Java in Linux and vice versa.

To learn more, please visit our getting started documentation.

Storage Node.js Preview Update

We are pleased to announce the latest preview (0.10) of the Azure Storage Node.js client library. This includes a rich developer experience, full support for AccountSAS capability, IPACL and Protocol specifications for Service SAS along with addressing customer usability feedback. You can start using the Node.js preview Azure Storage library in your applications now by leveraging the storage package on npmjs.

To learn more and get access to the source code, please visit our GitHub repo.

Storage Python Preview Update

We are pleased to announce the latest preview (0.30) of the Azure Storage Python client library. With this version comes all features included in the 2015-04-05 REST version including support for append blobs, Azure File storage, account SAS, JSON table formatting and much more.

To learn more, please visit our getting started documentation and review our latest documentation, upgrade guide, usage samples and breaking changes log.

Azure Storage Explorer

We are happy to announce the latest public preview of the Azure Storage Explorer. This release adds support for Table Storage including exporting to a CSV file, Queue Storage, AccountSAS and an updated UI experience.

For more information and to download the explorer for the Windows/Linux/Mac platforms, please visit www.storageexplorer.com.

Documentation and Samples Updates

Storage Security Guide

Azure Storage provides a comprehensive set of security capabilities which enable developers to build secure applications. You can secure the management of your storage account, encrypt the storage objects in transit, encrypt the data stored in the storage account and much more. The Azure Storage Security Guide provides an overview of these security features and pointers to resources providing deeper knowledge.

To learn more, see the Storage Security Guide.

Storage Samples

The Azure Storage team continues to improve the end-to-end experience for developers. We have recently developed a standardized set of samples that are easy to discover and let you get started in just five minutes. The samples are well documented, fully functional, community-friendly, and accessible from a centralized landing page that helps you find the samples you need for the platform you use. The code is open source and readily usable from GitHub, making it possible for the community to contribute to the samples repository.

To get started with the samples, please visit our storage samples landing page.

 

Finally, if you are new to Azure Storage, please check out the Azure Storage documentation page. It’s the quickest way to learn and start using Azure Storage.

Thanks
Azure Storage Team

Announcing AzCopy on Linux Preview


Today we are pleased to announce the preview of AzCopy on Linux, with a redesigned command-line interface that adopts POSIX parameter conventions. AzCopy is a command-line utility designed for copying large amounts of data to and from Azure Blob and File storage using simple commands with optimal performance. AzCopy is now built on .NET Core, which supports both Windows and Linux. It also depends on the Data Movement Library, itself built with .NET Core, bringing many of the Data Movement Library's capabilities to AzCopy!

Install and run AzCopy on Linux

  1. Install .NET Core on Linux
  2. Download and extract the tar archive for AzCopy (version 6.0.0-netcorepreview)
wget -O azcopy.tar.gz https://aka.ms/downloadazcopyprlinux
tar -xf azcopy.tar.gz
  3. Install and run AzCopy
sudo ./install.sh
azcopy

If you do not have superuser privileges, you can instead run AzCopy by changing to the azcopy directory and running ./azcopy.

What is supported?

  • Feature parity with AzCopy on Windows (5.2) for Blob and File scenarios
    • Parallel uploads and downloads
    • Built-in retry mechanism
    • Resume or restart of a failed transfer session
    • And many other features highlighted in the AzCopy guide

What is not supported?

  • Azure Storage Table service is not supported in AzCopy on Linux

Samples

Usage is as simple as the legacy AzCopy, with command-line options that follow POSIX conventions. Watch the following sample, where I upload a 100 GB directory. It is simple!

AzCopy on Linux

To learn more about all the command-line options, run the 'azcopy --help' command.

Here are a few other samples:

  1. Upload VHD files to Azure Storage
azcopy --source /mnt --include "*.vhd" --destination "https://myaccount.blob.core.windows.net/mycontainer?sv=2016-05-31&ss=bfqt&srt=sco&sp=rwdlacup&se=2017-05-10T21:45:18Z&st=2017-05-09T13:45:18Z&spr=https,http&sig=kQ42XrayIifuE4SGYaAy6COHoIanP7H9Qi3R0KqHs7M%3D"
  2. Download a container using a Storage Account key
azcopy --recursive --source https://myaccount.blob.core.windows.net/mycontainer --source-key "lYZbbIHTePy2Co…..==" --destination /mnt
  3. Synchronous copy across Storage Accounts
azcopy --source https://ocvpwd5f77vcqsalinuxvm.blob.core.windows.net/mycontainer --source-key "lXHqgIHTePy2Co….==" --destination https://testaccountseguler.blob.core.windows.net/mycontainer --dest-key "uT8nw5…. ==" --sync-copy

AzCopy on Windows

AzCopy on Windows, built on the .NET Framework, will continue to be released and documented here. It offers the DOS-style command-line parameters that Windows users are familiar with.

Feedback

AzCopy on Linux is currently in preview, and we will make improvements as we hear from our users. So, if you have any comments or issues, please leave a comment below.

(Cross-Post) New Azure Storage Release – Larger Block Blobs, Incremental Copy, and more!


Originally posted in the Microsoft Azure Blog.

We are pleased to announce new capabilities in the latest Azure Storage service release and updates to our Storage Client Libraries. This release increases the maximum block size to 100 MB, allowing block blobs of up to 4.77 TB, and adds features like incremental copy for page blobs and pop receipt on add message.

REST API version 2016-05-31

Version 2016-05-31 includes these changes:

  • The maximum blob size has been increased to 4.77 TB (50,000 blocks × 100 MB) with the increase of the block size to 100 MB. Check out our previous announcement for more details.
  • The Put Message API now returns information about the message that was just added, including the pop receipt. This enables you to call Update Message and Delete Message on the newly enqueued message (see the sketch after this list).
  • The public access level of a container is now returned from the List Containers and Get Container Properties APIs. Previously this information could only be obtained by calling Get Container ACL.
  • The List Directories and Files API now accepts a new parameter that limits the listing to a specified prefix.
  • All Table Storage APIs now accept and enforce the timeout query parameter.
  • The stored Content-MD5 property is now returned when requesting a range of a blob or file. Previously this was only returned for full blob and file downloads.
  • A new Incremental Copy Blob API is now available. This allows efficient copying and backup of page blob snapshots (a sketch follows below).
  • Using If-None-Match: * will now fail when reading a blob. Previously this header was ignored for blob reads.
  • During authentication, the canonicalized header list now includes headers with empty values. Previously these were omitted from the list.
  • Several error messages have been clarified or made more specific. See the full list of changes in the REST API Reference.
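
Here is a hedged sketch of the pop-receipt-on-add flow mentioned above, written directly against the REST API with Python's requests package; the account, queue, and SAS token are placeholders:

import re
from urllib.parse import quote

import requests

base = "https://myaccount.queue.core.windows.net/myqueue/messages"  # placeholder
sas = "sv=2016-05-31&sig=..."  # placeholder SAS with add + delete permissions
headers = {"x-ms-version": "2016-05-31"}

# Put Message: with version 2016-05-31 the response body includes the
# MessageId and PopReceipt of the newly enqueued message.
body = "<QueueMessage><MessageText>hello</MessageText></QueueMessage>"
response = requests.post(f"{base}?{sas}", data=body, headers=headers)
response.raise_for_status()
message_id = re.search(r"<MessageId>(.*?)</MessageId>", response.text).group(1)
pop_receipt = re.search(r"<PopReceipt>(.*?)</PopReceipt>", response.text).group(1)

# Delete (or Update) the message directly, without dequeuing it first.
requests.delete(f"{base}/{message_id}?popreceipt={quote(pop_receipt)}&{sas}",
                headers=headers).raise_for_status()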

Check out the REST API Reference documentation to learn more.
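
Similarly, here is a hedged sketch of the new Incremental Copy Blob call: a PUT with comp=incrementalcopy on the destination, pointing x-ms-copy-source at a readable page blob snapshot. Every name and token below is a placeholder:

import requests

destination = ("https://backupaccount.blob.core.windows.net/backups/disk0"
               "?comp=incrementalcopy&sv=2016-05-31&sig=...")  # placeholder SAS
source_snapshot = ("https://prodaccount.blob.core.windows.net/vhds/disk0.vhd"
                   "?snapshot=2016-12-01T00%3A00%3A00.0000000Z&sv=2016-05-31&sig=...")

# Each call copies only the diff between this snapshot and the previous
# incremental copy into the destination blob; the copy runs asynchronously.
response = requests.put(destination,
                        headers={"x-ms-version": "2016-05-31",
                                 "x-ms-copy-source": source_snapshot})
response.raise_for_status()
print(response.headers.get("x-ms-copy-status"))  # "pending" until complete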

New client library features

.NET Client Library (version 8.0.1)

  • All the service features listed above
  • Support for portable class library (through the NetStandard 1.0 Façade)
  • Key rotation for client-side encryption for blobs, tables, and queues

For a complete list of changes, check out the change log in our GitHub repository.

Storage Emulator

  • All the service features listed above

The storage emulator v4.6 is available as part of the latest Microsoft Azure SDK. You can also install the storage emulator using the standalone installer.

We’ll also be releasing new client libraries for Java, C++, Python and Node.js to support the latest REST version in the next few weeks, along with a new AzCopy release. Stay tuned!

(Cross Post) Announcing the preview of Azure Storage Service Encryption for data at rest


We are excited to announce the preview of Azure Storage Service Encryption for data at rest. This capability is one of the features most requested by enterprise customers looking to protect sensitive data as part of their regulatory or compliance needs.

Storage Service Encryption automatically encrypts your Azure Blob storage data prior to persisting to storage, and decrypts prior to retrieval. The encryption, decryption and key management is transparent to users, requires no change to your applications, and frees your engineering team from having to implement complex key management processes.

This capability is supported for all Azure Blob storage types (block blobs, append blobs, and page blobs) and is enabled through configuration on each storage account. It is available for storage accounts created through the Azure Resource Manager (ARM). All data is encrypted using 256-bit AES encryption (AES-256), one of the strongest block ciphers available. Customers can enable this feature on all available redundancy types of Azure Storage: LRS, ZRS, GRS, and RA-GRS. Storage Service Encryption is supported for both Standard and Premium Storage. There is no additional charge for enabling this feature.
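
Since the feature is enabled per storage account, turning it on could look like the following hedged sketch against the Azure Resource Manager REST API; the subscription, resource group, account name, api-version, and bearer token are all placeholder assumptions:

import requests

url = ("https://management.azure.com/subscriptions/<sub-id>"
       "/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<name>"
       "?api-version=2016-01-01")  # placeholder ids and api-version
body = {"properties": {"encryption": {
    "services": {"blob": {"enabled": True}},
    "keySource": "Microsoft.Storage"}}}

# PATCH the account with encryption enabled for the Blob service.
response = requests.patch(url, json=body,
                          headers={"Authorization": "Bearer <AAD token>"})
response.raise_for_status()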

As with most previews, this should not be used for production workloads until the feature becomes generally available.

To learn more, please visit Storage Service Encryption.
