If you are trying to upload large files to Azure blob storage, you will get better performance with a tool such as Cerebrata Cloud Storage Studio (http://www.cerebrata.com/Products/CloudStorageStudio/).
We upgraded our storage accounts on June 7th, 2012, so make sure your storage account was created after that date. You can log into the Azure management portal and view the storage account details to see when the account was created.
The scalability targets for our storage/network are documented here.
If you want to understand where the bottlenecks are, follow these steps.
Blob Transfer Performance Problems
1. Make sure the storage account is in the datacenter closest to the client. If accessing storage from an Azure Compute instance, make sure the deployment and the storage account are in the same affinity group so that traffic stays within the datacenter.
2. Test different blobs:
a. Create a new container in the same storage account, add a new blob to it, and test accessing that blob. This helps rule out partitioning issues.
b. Create a blob in your storage account and have the customer attempt to access that blob instead of a blob in their storage account. Make sure your storage account is located in the datacenter closest to the customer.
3. Most customers use inefficient blob download code. Have the customer try the sample app in http://blogs.msdn.com/b/kwill/archive/2011/05/30/asynchronous-parallel-block-blob-transfers-with-progress-change-notification.aspx to test more efficient blob download code. In my experience, this can result in roughly 2x performance improvements.
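A parallel, chunked download along the lines of that sample can be sketched as follows. This is a minimal illustration, not the sample's actual code: it splits a blob into fixed-size byte ranges and fetches them concurrently with HTTP Range requests (the blob URL, block size, and worker count are placeholder values).

```python
from concurrent.futures import ThreadPoolExecutor
from urllib.request import Request, urlopen

def block_ranges(blob_size, block_size=4 * 1024 * 1024):
    """Split a blob of blob_size bytes into inclusive (start, end) byte ranges."""
    return [(start, min(start + block_size, blob_size) - 1)
            for start in range(0, blob_size, block_size)]

def download_block(url, start, end):
    """Fetch one block of the blob with an HTTP Range request."""
    req = Request(url, headers={"Range": "bytes=%d-%d" % (start, end)})
    with urlopen(req) as resp:
        return resp.read()

def parallel_download(url, blob_size, workers=8):
    """Fetch all blocks concurrently and reassemble them in order."""
    ranges = block_ranges(blob_size)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        blocks = pool.map(lambda r: download_block(url, r[0], r[1]), ranges)
    return b"".join(blocks)
```

The same pattern applies to uploads: PUT the blocks in parallel, then commit the block list.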
4. Test using the Azure Throughput Analyzer – http://research.microsoft.com/en-us/downloads/5c8189b9-53aa-4d6a-a086-013d927e15a7/default.aspx
5. The best test is to create a hosted service in the same datacenter as the storage account, then RDP into the VM and test throughput from there.
a. Test using IE to browse to the blob, the sample app from the blog above, and the Azure Throughput Analyzer. You will most likely see very fast downloads, which further indicates that the problem is the customer's network connection or application, not Azure.
6. Make sure customers are aware of the performance targets in http://blogs.msdn.com/b/windowsazurestorage/archive/2010/05/10/windows-azure-storage-abstractions-and-their-scalability-targets.aspx. Windows Azure storage itself can deliver up to 60 MB/s for a single blob.
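As a quick sanity check against those targets, compare the throughput you actually measure to the per-blob figure. A small sketch (the helper names are mine; 60 MB/s is the published per-blob target):

```python
def throughput_mb_per_s(bytes_transferred, elapsed_seconds):
    """Observed throughput in MB/s for a completed transfer."""
    return bytes_transferred / elapsed_seconds / (1024 * 1024)

def min_transfer_seconds(blob_size_bytes, target_mb_per_s=60):
    """Lower bound on transfer time if the per-blob target is achieved."""
    return blob_size_bytes / (target_mb_per_s * 1024 * 1024)
```

If the observed number is far below the target, the bottleneck is almost certainly the client, its code, or the network rather than the storage service.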
7. If the performance problem is happening for external clients (i.e., non-Windows Azure Compute clients), enable the CDN for their storage account and test. Note that you will need to do multiple downloads: the first download will not be served from the CDN endpoint because the data is not yet cached.
8. If the performance problem is happening for external clients (i.e., non-Windows Azure Compute clients), get a tracert from the client to see the network path from the client to storage. The customer's ISP may be using an inefficient route to the Azure datacenter.
9. Use Storage Analytics to get additional performance data and to help determine whether the problem is on the Azure Storage server side or the client/network side – http://blogs.msdn.com/b/windowsazure/archive/2011/08/03/announcing-windows-azure-storage-analytics.aspx. Configure analytics via the Cerebrata Configuration Utility (http://www.cerebrata.com/Blog/post/Cerebrata-Windows-Azure-Storage-Analytics-Configuration-Utility-A-Free-Utility-to-Configure-Windows-Azure-Storage-Analytics.aspx). Look at the difference between Server Time and End to End Latency. Server Time is the time the Azure Storage infrastructure takes to process the request. End to End Latency is the total time from the request reaching Azure Storage until processing finishes, including reading the request and sending the response. If End to End is much higher than Server Time, this indicates a problem with one of the following:
a. Network delays, typically caused by slow internet connections, slow corporate proxy servers, or bad routes.
b. Application delays, typically caused by slow application performance such as high CPU.
c. Client side delays, typically caused by machine issues such as resource contention, high CPU in another app, or network configuration problems.
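The comparison in step 9 can be sketched as follows, given the Server Time and End to End Latency values (in milliseconds) read from an analytics log entry. The 100 ms threshold is an arbitrary example chosen for illustration, not a documented value:

```python
def latency_gap_ms(end_to_end_ms, server_ms):
    """Time spent outside the storage service: network transit plus client delays."""
    return end_to_end_ms - server_ms

def likely_bottleneck(end_to_end_ms, server_ms, gap_threshold_ms=100):
    """Rough classification: where is most of the request time going?"""
    gap = latency_gap_ms(end_to_end_ms, server_ms)
    if gap > gap_threshold_ms and gap > server_ms:
        return "client/network"
    return "server/ok"
```

For example, an entry with 20 ms Server Time but 500 ms End to End Latency points at one of the three causes above rather than at Azure Storage.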