September 5, 2011 / thewiseguy99

How to work around Windows Azure AppFabric Cache timing out with large amounts of data

While migrating a client’s application to Azure I have found quite a few small roadblocks, one being the cache. This application originally used a custom caching solution, then graduated to the Enterprise Library Caching Block, then to Windows Server AppFabric Cache, and now, lastly, to Windows Azure AppFabric Cache.

The Azure cache is either in transition and evolving, or a cruel joke is happening. For example, the concept of Regions, which is exposed as members in the Azure AppFabric libraries, does not work — strange, since Regions are a feature used in Windows Server AppFabric. Another is that the ChannelOpenTimeout and RequestTimeout settings don’t work via the configuration settings as I’d expect; they default to 3 seconds and 15 seconds respectively.

Bottom line: the application caches a fairly large amount of data in single requests, and because the Azure AppFabric Cache timeouts default to these low durations, one of two things has to change.

  1. The application will have to be re-thought and re-implemented from the ground up to fit within the given defaults.
  2. I will need to work around the problem by modifying the defaults, which have proven elusive.

The first option is probably not practical, and since I bet there are a lot of applications migrating to Azure just like ours, it will not be an option for many.

So here is the error you will get when a cache request times out:

ErrorCode<ERRCA0017>:SubStatus<ES0006>:There is a temporary failure. Please retry later. (One or more specified cache servers are unavailable, which could be caused by busy network or servers. For on-premises cache clusters, also verify the following conditions. Ensure that security permission has been granted for this client account, and check that the AppFabric Caching Service is allowed through the firewall on all cache hosts. Also the MaxBufferSize on the server must be greater than or equal to the serialized object size sent from the client.)

Note – this error may occur due to other issues as well; I have noticed some errors in Azure are “red herrings” and get reused, so you will have to narrow down the true cause in many cases.
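Since the error says “Please retry later,” it can also be worth retrying the put a couple of times before concluding the timeout is the culprit. A minimal sketch of such a retry wrapper — the class, method name, and retry counts are mine, not part of the SDK, which does expose DataCacheException.ErrorCode and DataCacheErrorCode.RetryLater:

```csharp
using System;
using System.Threading;
using Microsoft.ApplicationServer.Caching;

public static class CacheRetry
{
    // Retries a Put only on the transient "RetryLater" error; anything else rethrows.
    public static void PutWithRetry(DataCache cache, string key, object value, int maxAttempts)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                cache.Put(key, value);
                return;
            }
            catch (DataCacheException ex)
            {
                if (ex.ErrorCode != DataCacheErrorCode.RetryLater || attempt == maxAttempts)
                    throw;
                // Simple linear back-off between attempts
                Thread.Sleep(TimeSpan.FromSeconds(attempt));
            }
        }
    }
}
```

If the put still fails after a few attempts with a payload this size, the timeout defaults below are the more likely cause.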

This error seems to be common enough, and although the problem is understood, I have yet to find an example that solves it, so I want to provide a working solution. This solution uses version 1.2, as that is the version I am working with at the moment, but I assume it should work with 1.4 with little modification.

What you will see is a CacheManager class that reads some of its values, such as the Authentication Token and the Cache Address, from the Web.config. I am not using the standard configuration section:

<section name="dataCacheClient" type="Microsoft.ApplicationServer.Caching.DataCacheClientSection, Microsoft.ApplicationServer.Caching.Core" allowLocation="true" allowDefinition="Everywhere"/>

Instead, I am defining the configuration through code in the CacheManager. This allows me to override the defaults I am not happy with (ChannelOpenTimeout and RequestTimeout).

       private static DataCache SpinUpDistributedCache()
       {
            if (DistributedCache != null)
                return DistributedCache;

            // Define array for 1 cache host
            List<DataCacheServerEndpoint> servers = new List<DataCacheServerEndpoint>(1)
                { new DataCacheServerEndpoint(ServerName, PortNumber) };

            // The security token expected by Azure
            SecureString authorizationToken = new SecureString();
            foreach (char a in CacheAuthorizationToken)
                authorizationToken.AppendChar(a);
            authorizationToken.MakeReadOnly();
            DataCacheSecurity security = new DataCacheSecurity(authorizationToken);

            // Increased the timeouts due to our large caching
            DataCacheFactoryConfiguration configuration = new DataCacheFactoryConfiguration
            {
                // Set the cache host(s)
                Servers = servers,
                SecurityProperties = security,
                ChannelOpenTimeout = new TimeSpan(0, 1, 0),
                RequestTimeout = new TimeSpan(0, 1, 0)
            };

            // Pass configuration to the factory
            DataCacheFactory factory = new DataCacheFactory(configuration);

            // Get the "default" cache
            return factory.GetDefaultCache();
       }

Then, in the included ASP.NET web role project, I added the triggering code to Application_Start in the Global.asax class.

      void Application_Start(object sender, EventArgs e)
      {
            string key = "Abracadabra";
            IList<DataObject> objects = new List<DataObject>();
            for (int i = 0; i < 9000; i++)
            {
                objects.Add(new DataObject
                {
                    Id = i,
                    Name = "John H. Doe III",
                    Address = "1234 McCarty Avenue, Mountain View California. 94041 USA",
                    Title = "Software Engineer",
                    Age = 35,
                    Inserted = DateTime.UtcNow
                });
            }
            CacheManager.Add(key, objects);
      }

This is where I can adjust the size of the blob I send over the wire to invoke a timeout.

With the timeout left at the default, or even at around a 20-second duration, the put should error out while caching.

Increasing the timeout allows the large blob to finish the put into the cache; in this case I had to increase the duration to one minute.
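If you want a rough idea of how big the payload is before it goes over the wire, you can serialize the same object graph yourself; the cache client serializes with NetDataContractSerializer by default, so serializing the same way gives a ballpark figure. A sketch (the helper class and method are mine, not part of the SDK):

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;

public static class PayloadGauge
{
    // Serializes the graph the same way the cache client does (by default)
    // and returns the resulting size in bytes.
    public static long SerializedSizeInBytes(object graph)
    {
        var serializer = new NetDataContractSerializer();
        using (var stream = new MemoryStream())
        {
            serializer.Serialize(stream, graph);
            return stream.Length;
        }
    }
}
```

Measuring the 9000-item list above this way makes it clear why a 15-second RequestTimeout is not enough.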

Give it a try: after changing the AppSettings in Web.config to your Azure values, you should be able to work around this error. It is doing the job for us at the moment.
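For reference, the AppSettings the CacheManager above reads might look like this — the key names here are illustrative and must match whatever your CacheManager actually looks up, while 22233 is the standard Azure cache client port:

```xml
<appSettings>
  <!-- Key names are examples; match them to your CacheManager -->
  <add key="ServerName" value="yourcachenamespace.cache.windows.net" />
  <add key="PortNumber" value="22233" />
  <add key="CacheAuthorizationToken" value="[your authentication token]" />
</appSettings>
```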


