October 23, 2011 / thewiseguy99

Fork a Google Code SVN repository to GitHub


  • Copy the SVN URL from your project's Google Code source page.

  • Go back to your GitHub import page, paste this URL, then click Import SVN Authors.
  • Unless you are a Contributor or Owner of the SVN repository, you can probably bypass author mapping and go directly to the import.

  • After clicking Import Repository you may have to wait a few minutes while GitHub imports the SVN repository.

This is it.

*UPDATE – it looks like this functionality has been removed from Google Code.

September 13, 2011 / thewiseguy99

SQL Azure and the DateTime.Now issues

The SQL databases on the Azure cloud are pegged to Greenwich Mean Time (Coordinated Universal Time), but many applications use DateTime.Now, which is the time according to the regional settings on the host machine. This is not an issue when the DateTime is used for display only and not for any time spanning or comparisons. However, if you migrate an existing database to SQL Azure with dates populated via GETDATE() or DateTime.Now, you will have an offset; in my case it was 8 hours.

Here's the layout of our test environment when this problem became apparent:

  • we have our application & services locally in our test environment
  • we have our testing databases on Azure

Now, all of this was at one point very traditional: it was hosted off-site on our servers, with the clock/regional settings configured identically and NTP used to reduce drift. So far so good. Users would log into the system through their WPF application or our web application. They could create products in the application, which were then persisted via the web services and saved to the database.

The problem exposed itself during some initial tests after migrating our test databases onto Azure while still using our local test environment, when we ran our initial Selenium tests. We found that when a log-in was performed, the expiry logic was set on the servers using DateTime.Now and persisted to the SQL Azure database. When the second request came in and the security token was verified, it had expired almost 8 hours earlier. Eureka!

It is embarrassing how something so obvious was overlooked; I really should have known better, but it was easy enough to fix.
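The failure mode above can be sketched in a few lines, with the 8-hour host offset hard-coded for illustration (the values and names here are hypothetical, not from our actual code):

```csharp
using System;

class OffsetDemo
{
    static void Main()
    {
        // The moment the token is issued, as SQL Azure sees it (GETUTCDATE)
        DateTime nowUtc = new DateTime(2011, 9, 13, 20, 0, 0, DateTimeKind.Utc);

        // A host running 8 hours behind UTC persists DateTime.Now as 12:00
        DateTime persistedLocal = nowUtc.AddHours(-8);

        // Token "valid" for 30 minutes from the persisted local time
        DateTime expires = persistedLocal.AddMinutes(30);

        // Compared against UTC, the token appears to have expired hours ago
        Console.WriteLine(expires < nowUtc);   // prints True
        Console.WriteLine(nowUtc - expires);   // prints 07:30:00
    }
}
```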

You will need to replace the date function in your SQL Azure code:

getdate() -> getutcdate()

Although on SQL Azure these are one and the same, they will not behave the same when/if you synchronize this database to an on-premises SQL Server instance. Wherever a datetime defines an expiration or update, replace DateTime.Now in your .NET applications:

// Before
myOrder.TimePlaced = DateTime.Now;
myOrder.Expires = DateTime.Now.AddDays(2);

// After
myOrder.TimePlaced = DateTime.UtcNow;
myOrder.Expires = DateTime.UtcNow.AddDays(2);

This way, no matter what environment the application is hosted in, it will always key against the same times. To display the UTC date to a user based on the server's regional settings, you can use the ToLocalTime() method on the DateTime.

TextBox1.Text = myOrder.TimePlaced.ToLocalTime().ToString();

Although the DateTime was UTC, it will now be converted to the server's time zone. If your application is hosted in Windows Azure, the server clock is already UTC, so the displayed value will be unchanged.
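If you need a specific user's time zone rather than the server's, TimeZoneInfo.ConvertTimeFromUtc does the conversion explicitly. A minimal sketch, using a fixed UTC-8 zone built with CreateCustomTimeZone as a stand-in for looking up the user's real zone:

```csharp
using System;

class UserTimeZoneDemo
{
    static void Main()
    {
        DateTime placedUtc = new DateTime(2011, 9, 13, 20, 0, 0, DateTimeKind.Utc);

        // Fixed-offset stand-in; in practice you would look up the user's
        // zone with TimeZoneInfo.FindSystemTimeZoneById
        TimeZoneInfo userZone = TimeZoneInfo.CreateCustomTimeZone(
            "UTC-8-demo", TimeSpan.FromHours(-8), "UTC-8 (demo)", "UTC-8 (demo)");

        DateTime placedUserLocal = TimeZoneInfo.ConvertTimeFromUtc(placedUtc, userZone);
        Console.WriteLine(placedUserLocal);   // 12:00 on the same day
    }
}
```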

Finally, if you want to display the UTC DateTime in the user's local time, you can do it in JavaScript:

<script type="text/javascript">
      document.write(new Date("<%= myOrder.TimePlaced.ToString("r") %>"));
</script>
September 11, 2011 / thewiseguy99

SQL Azure Migration Wizard v3.7.7 & Red Gate SQL Compare 9 no longer work

This post from July 9th talks about a new SQL Azure roll-out on the Azure cloud: July SQL Azure roll out.

After reading the Red Gate forums, the chatter points to Microsoft having done the SQL Azure roll-out in Europe this July, with the North American roll-out staggered to occur afterwards; it now looks like this may have happened during the week.

I was unable to access the Azure management site late Friday afternoon, but it eventually came back that evening. I read a BBC article about a DNS problem that hit the MS cloud stack (BBC article), so I guess that answers why I was down? I bet the SLA was breached in more than a few cases!

On Saturday I tried to migrate a database to Azure using the SQL Azure Migration Wizard v3.7.7 tool, only to have it fail when attempting to connect to SQL Azure, something that worked for me a week ago with the same tool: SQL Azure Migration Wizard – Work Item

I know Red Gate has version 9.0 of their SQL Compare & SQL Data Compare tools, so I downloaded the trial only to find they failed as well. Fortunately they patched them to a 9.1 version when the European roll-out hit (it sounds like they are not happy). Anyway, use these two patches to at least get yourself going in the interim:

SQL Data Compare w/Azure patch: SQL Data Compare patch

SQL Compare w/ Azure patch: SQL Compare patch

September 10, 2011 / thewiseguy99

SQL Azure migration – errors & solutions

If you get this error migrating a procedure or function –> Error #: 40512 — Deprecated feature ‘String literals as column aliases’ is not supported in this version of SQL Server.
Then your statement will look like this,

SELECT 'ImageSupportDescription' = CASE 
    WHEN c.description IS NULL THEN ... 
    ELSE ... + ' (' + c.description + ')' 
END, ...

And needs to become this,

SELECT CASE 
    WHEN c.description IS NULL THEN ... 
    ELSE ... + ' (' + c.description + ')' 
END AS 'ImageSupportDescription', ...

If you get this error migrating a procedure or function –> Error #: 40516 — Global temp objects are not supported in this version of SQL Server.
Then your statement will look like this,

INSERT INTO ##t_temp_property 
SELECT ..., ... AS EntityId, ... AS OrganizationUnitId 
FROM ...

And needs to become this,

-- Make sure you define this in scope so the other statements that use it can see it.
CREATE TABLE #t_temp_property(id INT NOT NULL, EntityId INT, OrganizationUnitId INT) 

INSERT INTO #t_temp_property
SELECT ..., 
       ... AS EntityId, 
       ... AS OrganizationUnitId 
FROM ...

Since SQL Azure is a database shared among many users, truly global temp tables are not allowed. I know the above change looks weak, but if you need access to this temp table outside the statement context, you will need to investigate a more specific solution for your problem. In our case, someone in the past used a global temp table where a temp table would suffice.

Another approach you can take is to define a TABLE variable within the scope of the statement. I am not convinced this will perform as well as a temporary table, and it will not work if the statement is defined as dynamic SQL.

DECLARE @TemporaryTable TABLE ( Id int, EntityId int, OrganizationUnitId int )

INSERT INTO @TemporaryTable (Id, EntityId, OrganizationUnitId)
SELECT ..., 
       ... AS EntityId, 
       ... AS OrganizationUnitId 
FROM ...

If you get this error migrating a procedure or function –> Error #:102 — Incorrect syntax near ‘50001’.
Then your statement will look like this,

IF NOT EXISTS ( SELECT ID FROM UserDefaults WHERE FkSite = @siteId ) 
BEGIN
    RAISERROR 50001 'User Defaults are missing' 
END ...

And needs to become this,

IF NOT EXISTS ( SELECT ID FROM UserDefaults WHERE FkSite = @siteId ) 
BEGIN
    -- A severity between 11 and 18 will actually raise the error 
    RAISERROR ('User Defaults are missing', 16, 1) 
END ...

SQL Azure doesn't support sp_addmessage, so you cannot add the custom error messages (numbers above 50000) you may have hoped for. Even more, it does not support sys.messages either. This means you will not be able to define application behavior based on the error number raised in the SQL statement and propagated into the SqlException's Errors collection in your code.
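Since every ad-hoc RAISERROR surfaces as error number 50000, one workaround is to key application behavior off the message text instead. A hedged sketch (ClassifySqlError and the message strings are hypothetical; in a real handler you would inspect each SqlError in the SqlException.Errors collection):

```csharp
using System;

class SqlErrorClassifierDemo
{
    // Map a (number, message) pair taken from a SqlError to an application-level
    // code. On SQL Azure, RAISERROR with an ad-hoc message always reports error
    // number 50000, so the message text is the only thing left to dispatch on.
    static string ClassifySqlError(int number, string message)
    {
        if (number == 50000 && message.Contains("User Defaults are missing"))
            return "MissingUserDefaults";
        return "Unknown";
    }

    static void Main()
    {
        Console.WriteLine(ClassifySqlError(50000, "User Defaults are missing"));
        Console.WriteLine(ClassifySqlError(8134, "Divide by zero error encountered."));
    }
}
```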

If you get this error migrating a procedure or function –> Error #:40515 — Reference to database and/or server name in ‘master.dbo.sysprocesses’ is not supported in this version of SQL Server.
Then your statement will look like this,

SELECT * FROM master.dbo.sysprocesses WHERE spid IN (SELECT blocked FROM master.dbo.sysprocesses)

And needs to become, well, nothing, since this restriction is due to the shared nature of SQL Azure; there is no direct replacement.

September 5, 2011 / thewiseguy99

How to work around Windows Azure AppFabric Cache timing out with large amounts of data

While migrating a client's application to Azure I have found quite a few small roadblocks, one being the cache. This application originally used a custom caching solution, then graduated to the Enterprise Library Caching Block, then Windows Server AppFabric Cache, and now lastly Windows Azure AppFabric Cache.

The Azure cache is either in transition and evolving, or a cruel joke is happening. For example, the concept of Regions, which are exposed as members in the Azure AppFabric libraries, does not work. Strange, since this is a feature used in Windows Server AppFabric. Another is that ChannelOpenTimeout and RequestTimeout don't work via the configuration settings as I'd expect, and are defaulted to 3 seconds and 15 seconds respectively.

Bottom line: the application caches a fairly large amount of data in single requests, and because the Azure AppFabric Cache timeouts default to these low durations, one of two things has to change.

  1. The application will have to be re-thought and re-implemented from the ground up to fit within the given defaults.
  2. I will need to work around the problem by modifying the defaults, which seem elusive.

The first option is probably not practical, and since I bet there are a lot of applications migrating to Azure just like ours, it will not be an option for many.

So here is the error you will get when a cache request times out.

ErrorCode<ERRCA0017>:SubStatus<ES0006>:There is a temporary failure. Please retry later. (One or more specified cache servers are unavailable, which could be caused by busy network or servers. For on-premises cache clusters, also verify the following conditions. Ensure that security permission has been granted for this client account, and check that the AppFabric Caching Service is allowed through the firewall on all cache hosts. Also the MaxBufferSize on the server must be greater than or equal to the serialized object size sent from the client.)

Note – this error may occur due to other issues; I have noticed some errors in Azure are red herrings and reused, so you will have to narrow down the true cause in many cases.

This error seems to be common enough, and although the problem is understood, I have yet to find an example that solves it, so I want to provide a working solution. This solution uses version 1.2 of the cache libraries, as that is the version I am working with at the moment, but I assume it should work with 1.4 with little modification.

What you will see is a CacheManager class that reads some of its values from the Web.config, such as the authentication token, the cache address, etc. I am not using the standard configuration section:

<section name="dataCacheClient" type="Microsoft.ApplicationServer.Caching.DataCacheClientSection, Microsoft.ApplicationServer.Caching.Core" allowLocation="true" allowDefinition="Everywhere"/>

Instead, I am defining the configuration through code in the CacheManager. This allows me to override the defaults I am not happy with (ChannelOpenTimeout & RequestTimeout).

private static DataCache SpinUpDistributedCache()
{
    if (DistributedCache != null)
        return DistributedCache;

    // Define array for one cache host
    List<DataCacheServerEndpoint> servers = new List<DataCacheServerEndpoint>(1)
    {
        new DataCacheServerEndpoint(ServerName, PortNumber)
    };

    // The security token expected by Azure
    SecureString authorizationToken = new SecureString();
    foreach (char a in CacheAuthorizationToken)
        authorizationToken.AppendChar(a);
    DataCacheSecurity security = new DataCacheSecurity(authorizationToken);

    // Increase the timeouts due to our large cache payloads
    DataCacheFactoryConfiguration configuration = new DataCacheFactoryConfiguration
    {
        // Set the cache host(s)
        Servers = servers,
        SecurityProperties = security,
        ChannelOpenTimeout = new TimeSpan(0, 1, 0),
        RequestTimeout = new TimeSpan(0, 1, 0)
    };

    // Pass the configuration to the factory
    DataCacheFactory factory = new DataCacheFactory(configuration);

    // Get the "default" cache
    return factory.GetDefaultCache();
}

Then in the included ASP.NET web role project I added the triggering code within Application_Start in the Global.asax class.

void Application_Start(object sender, EventArgs e)
{
    string key = "Abracadabra";
    IList<DataObject> objects = new List<DataObject>();
    for (int i = 0; i < 9000; i++)
    {
        objects.Add(new DataObject
        {
            Id = i,
            Name = "John H. Doe III",
            Address = "1234 McCarty Avenue, Mountain View California. 94041 USA",
            Title = "Software Engineer",
            Age = 35,
            Inserted = DateTime.UtcNow
        });
    }
    CacheManager.Add(key, objects);
}

This is where I can adjust the amount of data I send over the wire and provoke a timeout.

Keeping the timeout at the default, or anything under about a 20 second duration, should cause an error while caching.

Increasing the timeout allows the large blob to finish the put into the cache; in this case I had to increase the duration to one minute.

Give it a try: by changing the AppSettings in Web.config to your Azure values, you should be able to work around this error. It is doing the job for us at the moment.
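Since the error text itself says "There is a temporary failure. Please retry later.", pairing the longer timeouts with a simple retry wrapper is also worth considering. A generic sketch (the attempt count and delay are arbitrary; in the cache case the action would wrap the put call, and you would catch DataCacheException rather than Exception):

```csharp
using System;
using System.Threading;

static class RetryHelper
{
    // Run the action, retrying up to maxAttempts times on any exception,
    // sleeping between attempts; rethrows once attempts are exhausted.
    public static void Retry(Action action, int maxAttempts, TimeSpan delay)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                action();
                return;
            }
            catch (Exception)
            {
                if (attempt >= maxAttempts)
                    throw;
                Thread.Sleep(delay);
            }
        }
    }
}

class RetryDemo
{
    static void Main()
    {
        int calls = 0;
        // Simulated transient failure: fails twice, succeeds on the third attempt
        RetryHelper.Retry(() =>
        {
            calls++;
            if (calls < 3) throw new TimeoutException("temporary failure");
        }, maxAttempts: 5, delay: TimeSpan.Zero);
        Console.WriteLine(calls);   // prints 3
    }
}
```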


September 4, 2011 / thewiseguy99

‘SELECT INTO’ & Migrating a SQL Server database to SQL Azure

Statement ‘SELECT INTO’ is not supported in this version of SQL Server.

During the migration of a large web application to Azure I found some speed bumps. One significant problem is that "SELECT INTO" is not supported. The existing application made use of stored procedures, which I converted, but at run-time there were also some dynamic SQL statements using "SELECT INTO" that I had to migrate. For example, this statement used "SELECT DISTINCT ... INTO":

SELECT DISTINCT ... INTO #t_base FROM RegionTreeSection rts JOIN SpatialCurrent p ON rts.RegionLocaleCd = ...

The above statement would create the temporary table as well as insert the distinct records from RegionTreeSection into it. This dynamic statement works fine as-is in SQL Server 2000 – 2008 R2 but fails in SQL Azure:

SELECT DISTINCT ... INTO #t_base
FROM RegionTreeSection rts JOIN SpatialCurrent p ON rts.RegionLocaleCd = p.RegionLocationCd
WHERE rts.SpatialCurrentFk IN (94820) AND ( rts.RegionTreeMapping LIKE '@101538@%' )

The following modifications were required for it to execute in SQL Azure:


-- Create the temp table explicitly, since SELECT INTO is unavailable
CREATE TABLE #t_base (Id INT)

INSERT INTO #t_base (Id)
SELECT DISTINCT ...
FROM RegionTreeSection rts JOIN SpatialCurrent p ON rts.RegionLocaleCd = p.RegionLocationCd
WHERE rts.SpatialCurrentFk IN (94820) AND ( rts.RegionTreeMapping LIKE '@101538@%' )

This could have been converted to use a table variable; however, due to some published benchmarks, I had concerns about that approach.