Thursday, 8 May 2014

Nibs & Auto-Layout - iPhone, iPad apps dev.

I create a lot more Nibs for my iPhone app UIs since Auto Layout was introduced.  I hated Auto Layout at first, because it seemed a bit unreliable in Interface Builder; and to be fair, you do occasionally have to throw away all your constraints and start again.  That's OK, though: I remember writing vast amounts of UI Objective-C code in the past, which was far more arduous.

It's still a harder way to define a UI than, say, Android or Windows Phone development, where it's just XML (Android layout XML and XAML respectively) and it 'feels' intuitive, almost like a "better" version of HTML.

I strongly recommend going through the pain of learning Auto Layout!
If you happen to read a book about Auto Layout that makes it seem complex, you're reading the wrong book; it's not that bad.  Also, check out Masonry: https://github.com/cloudkite/Masonry

Sunday, 9 February 2014

Review of building mobile apps with Cordova / PhoneGap and a bit about Xamarin and Appcelerator Titanium

A few years ago we were asked by a client to create a small iPhone app.  We decided to explore the possibility of creating an HTML5-based app using PhoneGap.  With this technology, you essentially create a single-page website using JavaScript and CSS, then embed it inside a pre-assembled native app.  The only native part of the app is the web view control that hosts the page.  We created a prototype and deployed it to the iPhone 3; it was functional, but the app just didn't "feel right".  The controls weren't responsive enough and it just wasn't an acceptable experience, so we decided to abandon PhoneGap and go fully native with Obj-C.  After that I didn't really think about building another app with HTML for quite some time...

However, last year, another SME client approached me about a new app for Windows Phone, Android and iPhone.  It's a relatively simple app in that it's just a UI layer over a JSON web API.  They did not have the budget to create 3 different native apps, so again I set about finding a solution.  I revisited PhoneGap, which has since been renamed Cordova; PhoneGap is now a distribution of Cordova with some cloud services attached, and runs a few versions behind the bleeding-edge Cordova builds.

Having built a prototype, we were quite satisfied with the performance and UX on Android and iPhone... On Windows Phone it was abysmal, but hey, given that WP has only about a 10% market share, we were OK with that (although really I'd have liked it to rock on WP).

We then proceeded to develop the app with Cordova and using Sencha Touch as the UI framework.

If you don't know Sencha Touch, it's basically a set of tools which allow you to create app-like JavaScript applications that render with HTML/CSS.  You "build" Sencha Touch apps, which minifies all the web assets and creates a bundle of files that can be deployed onto a web server, or into an app framework like Cordova.

The problems we encountered
  • The Sencha-"compiled" code operates differently to the development code.  That is, you sometimes end up with bugs that you have to debug via the JavaScript console, where the JS code is minified and the error messages are cryptic.  This is akin to spending 3 hours writing C# code, finding it doesn't work when deployed, and then having to debug the MSIL (intermediate code / byte code).  To be fair, we didn't get too many issues like this, but it was an unquantifiable overhead that was unknowable a priori.
  • You have to build and deploy Sencha Touch apps into the separate Cordova file structure.  Sencha can integrate with Cordova, so in theory you can type sencha app build native and it will compile (minify & bundle) the Sencha app, deploy those assets into Cordova's file structure and then command Cordova to compile the app across all the target platforms.  However, if you're building iPhone, Android and Windows Phone apps, this command will always fail, because you can't compile iPhone apps on Windows and you can't compile Windows Phone apps on a Mac.  You then have to command Cordova directly to compile against whichever platform you wish to test.  The problem is that this chain of processes is very time-consuming.  You have to test the app on real devices very regularly, because Sencha apps operate differently when compiled, and Cordova apps can only really be tested inside an emulator or on a device.  So suddenly your process for actually testing this stuff is unwieldy.  It's an important point, because we developers have become so accustomed to writing code, immediately testing it, playing around, iterating... Going from code to testing on a device, and becoming confident that the code will work across all platforms, is cumbersome.
  • We wanted to utilise push notifications within the app, so we used Ext.Azure, a Sencha component written in collaboration with Microsoft.  Between Microsoft, Sencha and Cordova, someone has seriously dropped the ball here.  Ext.Azure normalises the intricacies of implementing push across WP, Android and iPhone, and the documentation seems to say that you just download the PhoneGap PushPlugin, put Ext.Azure into your Sencha app and away you go.  The fact is, as of the time of writing, the PushPlugin does not support Windows Phone.  You actually have to integrate a branch of the PushPlugin by DarkPhantum, rather than the one by PhoneGap.  However, you cannot just use DarkPhantum's plugin instead of PhoneGap's; you have to merge the two branches manually, as the PhoneGap plugin is compatible only with the PhoneGap distribution of Cordova - which is itself a few versions behind current Cordova.  So if you're using the current version of Cordova and you implement the PhoneGap plugin, you'll get native compilation errors.  Merging the branches is quite easy: you just copy PushPlugin.cs into the Windows Phone native project and declare the plugin in the config.xml file.  The only remaining problem is that config.xml gets automatically rewritten by Cordova, and because you've effectively hacked the PushPlugin into Cordova, Cordova doesn't "know" about the Windows Phone support, so you'll constantly find that push notifications fail with a permission-denied issue.  There are probably ways around this with further hacking, but we never got around to that.
  • We wanted to implement splash screens; however, following the documentation doesn't help.  The Splashscreen plugin didn't seem to do what it's meant to do.  I'm sure there are ways to fix this, but you shouldn't end up in a situation where a splash screen consumes development time - this is one of those things that should just be easy.
  • We found that on iOS 7, the status bar would overlap the UI and prevent users from interacting with controls, so we tried the Cordova status bar plugin, which did work... until we spotted that when you put focus in a textbox the UI shifts up (fine), but when the textbox loses focus the UI shifts back down - not all the way, though.  The status bar plugin shifts the UI down by 20px, but after a textbox focus event the UI resets to 0px, so it ends up underneath the status bar.
  • We also found some configuration options in Cordova's config.xml just don't work. 
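For reference, the plugin declaration mentioned in the push-notification point above ends up in the Windows Phone platform's config.xml looking roughly like this.  This is a sketch from memory of Cordova 3.x's feature/param syntax - verify the element and attribute names against the Cordova version you are actually on:

```xml
<!-- Windows Phone config.xml - sketch only; names may differ between Cordova versions -->
<feature name="PushPlugin">
    <param name="wp-package" value="PushPlugin" />
</feature>
```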
Fundamentally, we chose Cordova because we wanted to write one app that would work across all platforms.  We wanted Cordova to abstract away platform-specific concerns.  It's the same with any engineered thing: you want to know its interfaces and that is all.  If you have to get bogged down in how it works then, well, what's the point (given the goal is to have that abstracted)?  Put another way, a mechanic can fit a new battery in your car without knowing battery chemistry; if the battery doesn't work, he sets about finding a new battery rather than getting a chemistry degree.  Sure, in our situation Cordova is software and the app is software, but they are two distinct domains - specialisms.  Going massively off on a tangent here, but you could have a virtual car with a virtual battery - all done in software - and the semantics would be the same.

I think the goals of Sencha and Cordova are great, and maybe one day it'll be a great experience, but for now we have to leave it.  The amount of friction means the app takes so much longer to develop that you may as well have just gone native and enjoyed being able to debug code on the device, fast compilation and deployment, and predictable results between dev and production builds (i.e., it doesn't break just because you compiled it).

Alternatives
Xamarin.  I looked into this; however, if you want to create iPhone, Android and Windows Phone apps, you'd still effectively have to write 3 apps.  Xamarin allows you to share C# across all the platforms, but this cannot be UI code, so if you're writing an app like ours, where it's just a thin UI layer over a web API, then the only advantage of Xamarin is that you can write iPhone, Android and WP apps in C#.  I say "only" not in a derogatory way; it's just that I quite like writing software in Obj-C, C# and Java.  If you're proficient in all those languages, then it's probably not worth the added layer of abstraction and complexity of Xamarin.  However, if you're doing an app with a complex back-end component, or you don't have the time/resources for Obj-C/Java, then Xamarin could be the way to go!

Appcelerator Titanium.  This is an interesting option.  You write an app using JavaScript against platform-agnostic and platform-specific APIs.  In a Titanium app, the UI is native, which means you get excellent performance and a fluid UX - users can hardly notice the difference.  The way it works is that your JavaScript code and a JavaScript engine are shipped with the app; the JS engine interprets your code and maps the JS API calls onto their native counterparts.  It is possible to write an app in JS using only the platform-agnostic APIs, which represent things like table views that Android and iOS have in common.  You occasionally have to use platform-specific APIs in order to create a UX that is in line with the way the underlying platform works.  For instance, you might use a popover API that only works on iOS, so you'd have to use another component on Android.
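To make that platform branching concrete, here is the shape it typically takes.  The function name and return values below are invented for illustration; in real Titanium code the osname string ('iphone', 'ipad', 'android') would come from Ti.Platform.osname:

```javascript
// Sketch: choosing a presentation idiom per platform.
// iPad has the popover idiom; Android (and iPhone) favour a full window.
function pickDetailPresentation(osname) {
  return osname === 'ipad' ? 'popover' : 'window';
}

console.log(pickDetailPresentation('ipad'));    // popover
console.log(pickDetailPresentation('android')); // window
```

The point is that this branch lives in your single JS code base, rather than in two separate native apps.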

The downsides of Titanium are:
  • The deployed apps are a bit bigger as they contain a JS engine
  • The platform-agnostic APIs may result in UI working correctly on one platform, but not on another, so you end up iterating between the platforms until they both work
  • Whilst the UI performance is great, there is a performance hit with using JS, especially if you have any complex back-end code.
  • Debugging is quite difficult compared to native dev
  • You will most likely have to use platform-specific APIs for a medium-to-large app, which means there's a learning overhead if you don't already know some iOS / Android development
  • You cannot use it for Windows Phone... hopefully one day!
The upsides are:
  • You can write an app using one code base
  • The UI is native, meaning users largely do not notice any difference when it comes to UX
  • It's free!
  • You don't need to learn Java or Objective-C
  • It's technically possible to write an app in such a way that you never need to be concerned with platform-specific stuff.  However, is that wise?  Users on Android and iPhone are accustomed to specific UX patterns that, if ignored, may end up alienating them.

Summary
The main motivation for using any of these abstraction layers is to save time and money.  Clients want to target Windows Phone, Windows 8, iPhone, iPad and Android in one fell swoop with one app.  In my opinion, an abstraction layer should totally abstract the details of the underlying platforms, and you should never have to delve into its internals (such as re-compiling Cordova plugins); that defeats the point of using it.

Personally I love doing dev in C#, Java and Objective-C, so the only gain for me in using these frameworks is to save time.  I have spent a number of weeks utilising Titanium and Cordova, only to find that it's actually a false economy.  I spent a lot of time dealing with bugs in Cordova and weird anomalies in Titanium, and found my productivity thwarted by issues that wouldn't have come up had I gone native.

I'll probably review these frameworks again in a year or two, but for now, I'll be advising clients to go native.  It's just vastly better from a user experience point of view and from the developer point of view.

There is no objective black and white answer though.  You may find for your projects an abstraction layer works well for you. 
Kris Dyson


Friday, 7 June 2013

ASP.NET web app’s AppDomain constantly resets/recycles/reloads while hosted in IIS on the development machine with TFS (Team Foundation Server) enabled.

I’m currently developing a large web application.  Every time it starts, Application_Start fires and I load a few hundred MB of data into memory, which takes about 20-30 seconds to complete.  This would be fine if it happened a few times per day, but for me it was happening every single time I checked a file out of TFS (and to trigger a check-out in TFS, all you have to do is start editing the file).

After a while, I realised that making even tiny changes to files, such as static HTML or JavaScript files, would cause the entire AppDomain to recycle.  This meant it took up to 30 seconds to see every change I made.  It was ridiculous.

I went about trying to discover what was actually happening.  First of all I logged Application_Start and Application_End events into a text file every time it occurred:

public string AppPath { get; set; }

protected void Application_Start(object sender, EventArgs e)
{
    AppPath = HttpRuntime.AppDomainAppPath;
    LogAppStart();
}

protected void Application_End(object sender, EventArgs e)
{
    LogAppEnd();
}

[Conditional("DEBUG")]
private void LogAppStart()
{
    System.IO.File.AppendAllText(System.IO.Path.Combine(AppPath, "log.txt"),
        string.Format("{0}\t{1}\r\n", DateTime.UtcNow.ToString(), "APP START"));
}

[Conditional("DEBUG")]
private void LogAppEnd()
{
    System.IO.File.AppendAllText(System.IO.Path.Combine(AppPath, "log.txt"),
        string.Format("{0}\t{1}\r\n", DateTime.UtcNow.ToString(),
            "APP END due to " + System.Web.Hosting.HostingEnvironment.ShutdownReason));
}




Then I used BareTail to track when and why exactly the AppDomain was being unloaded (you can just use a text editor if you like, but BareTail gives you changes as they happen, so I could keep it on my screen whilst using Visual Studio to see exactly what triggers the unload).






I found that it unloaded every time I edited a file (even without saving it).  The only other thing that happened, sometimes, was that the TFS “lock” icon would change to a “red tick” icon, so from this point I was leaning toward it being something to do with TFS.



I needed to see exactly what was happening, so I used procmon from Sysinternals.  Unfortunately, it produces a vast amount of information which needs to be filtered and sifted through.  After spending quite a long time going through the logs, I discovered that the TFS check-out process was creating an “app_offline.htm” file in the root of the web app.  It wasn't really conclusive from procmon, though; it tends to give quite cryptic Win32 codes, but it pointed me in the right direction.  I then used Directory Monitor to watch the root application path.  Its audit log confirmed that “app_offline.htm” was indeed being created and immediately deleted by TFS / Visual Studio whenever a file under source control is changed.



App_offline.htm is usually used during deployments / publishing to tell IIS to take the application offline gracefully while an upgrade is performed (it shows a nice friendly message to the user, rather than some yellow screen of death error – very useful in the right circumstances).



So it seems the creation and instant deletion of this file may have something to do with it.



So how on earth do you get it to stop doing that?!



The app_offline.htm file is cached in



C:\Users\[user]\AppData\Roaming\Microsoft\VisualStudio\11.0\ (replace [user] with your username)



I tried deleting the file… but then discovered Visual Studio will just auto-generate the file back into existence!  So that didn’t work.



However, I eventually found that the best fix is to prevent the creation of app_offline.htm altogether.



How?



Just delete the app_offline.htm file from C:\Users\[user]\AppData\Roaming\Microsoft\VisualStudio\11.0\, then cunningly create a directory called “app_offline.htm” in its place.



This causes the creation of app_offline.htm (the file) to silently fail in Visual Studio!



Problem solved! It’s a bit of a hack but it works.

Tuesday, 27 November 2012

IIS / httpErrors / ASP.NET Web API: fully embracing HTTP in Web API and having a professional-looking web application are mutually exclusive.

Hi there, I’m going to get to the point really quickly because I’m very busy. 

With ASP.NET Web API introduced in .NET 4.5 we’re encouraged to embrace HTTP as a protocol fully with RESTful APIs, or indeed more RPC-esque APIs.  When we don’t find a record inside the database we can do this:

Request.CreateErrorResponse(HttpStatusCode.NotFound, "The record was not found");

Ain't that beautiful?  If you don't appreciate it, well, you should, because now an error response is returned to the client in the data format that the client/browser requests.  So if you're interacting with the API via JSON, you'll get back a JSON-formatted response; same with XML, etc.



The idea is that we utilise HTTP status codes (200, 404 etc.) to represent a plethora of different types of response.  Previously we might just embed error codes inside the response body, with the HTTP status probably being 200 (OK).



Hold all that in your short term memory for a sec.



With IIS7+ we’re encouraged to embrace the new httpErrors section inside web.config, which means you can return friendly error messages to the client/browser.





<httpErrors defaultResponseMode="File" errorMode="DetailedLocalOnly" existingResponse="Replace" xdt:Transform="Insert">
  <clear />
  <error path="html\system_error.html" responseMode="File" statusCode="400" />
</httpErrors>
That’s great.  Now you don’t have IIS specific error messages going to the client / browser.  Not only does this mean your web app can fail slightly more gracefully and helpfully (to the user), but also, it’s vastly more professional looking.



--------



Above I have described two great features of the MS web stack.  Unfortunately, they are mutually exclusive.  You cannot host Web API inside the same project as a web app and have both function.  IIS will hijack your Web API error responses because of the existingResponse="Replace" attribute on the httpErrors node.  If you get rid of this attribute and use "PassThrough", then your Web API responses will get through OK; however, if an error now occurs in the website outside the scope of the Web API, the client will see non-custom error messages (detailed or not, depending on errorMode).



So I have reverted to returning HTTP 200 for all Web API business exceptions.  In reality, the API client is not looking at the HTTP status code anyway; it looks at the returned content-type and, if it's the expected one (e.g. JSON), it looks inside the JSON envelope (see JSend / AmplifyJS) to figure out what the real API response was.  If the content-type was not the expected one, the API client assumes something worse happened.  It then examines the HTTP status code, and I actually embed error codes inside the custom IIS error pages so that the API client can do a simple string search inside the HTML content to see what the overall problem was.  But hey, that's very much application-specific stuff; each app will do it differently.
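As a sketch of that client-side decision logic (the function and field names here are my own; the envelope shape follows the JSend convention of a status field plus data/message):

```javascript
// The server always replies HTTP 200 for business outcomes; the envelope
// carries the real result. Unexpected content-types mean something worse
// happened upstream (e.g. an IIS custom error page came back).
function interpretResponse(contentType, body) {
  if (!/application\/json/.test(contentType)) {
    return { ok: false, reason: 'transport-or-server-failure' };
  }
  const envelope = JSON.parse(body); // JSend-style: { status, data, message }
  if (envelope.status === 'success') {
    return { ok: true, data: envelope.data };
  }
  // 'fail' (business rule broken) or 'error' (server fault), per JSend
  return { ok: false, reason: envelope.message || envelope.status };
}
```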



However, in any case, it does appear that custom error pages AND formatted Web API error responses are mutually exclusive for the time being.  If anyone knows how to do both without having to split the solution into separate API and web projects, each with their own configuration, feel free to comment.



thanks



k

Wednesday, 21 November 2012

Accessing the Windows Azure Compute Emulator from Hyper-V, Virtual PC or the external network

Unlike IIS, the Windows Azure Compute Emulator does not listen for external requests on your network IP; it only listens on the local loopback address 127.0.0.1.  As I have found out, this is a big pain for cross-browser testing, because you cannot install Internet Explorer 7, 8, 9 and 10 side by side, so you have to either use external PCs with old browsers or create virtual machines (in Oracle VirtualBox or Hyper-V) with those browsers installed.

To solve this problem, you could:

  • Use AnalogX PortMapper – however, you need to remember to shut it down before the emulator starts and then start it again afterwards… this is too much friction!
  • Create a separate IIS website listening on a port other than 80, add it as a firewall exception and access the application externally that way – however, you may need to change your application architecture to run under the two different contexts.  Also #1: you have the memory overhead of having 1 + N instances of the app in memory.  Also #2: you're not testing the app under emulated conditions when using IIS, so it's not really a thorough test.  Also #3: it's more friction.
  • You might find there's a way to configure Hyper-V to use the host network connection and loopback adapter… I'm not a Hyper-V expert, but I tried and failed.  Probably not possible!?

However, the best solution was totally frictionless: Just use PJS PassPort: http://sourceforge.net/projects/pjs-passport as detailed in this other blog post: http://blog.sacaluta.com/2012/03/windows-azure-dev-fabric-access-it.html 

Hope this helps someone!  I have posted this as an answer on stackoverflow as well; hopefully this will work for everyone now!

Kris

Thursday, 16 August 2012

System.Web.HttpException (0x80004005): Failed to load viewstate

Call stack:-

at System.Web.UI.Control.LoadViewStateRecursive(Object savedState)
at System.Web.UI.Control.LoadChildViewStateByIndex(ArrayList childState)
at System.Web.UI.Control.LoadViewStateRecursive(Object savedState)
at System.Web.UI.Control.LoadChildViewStateByIndex(ArrayList childState)
at System.Web.UI.Control.LoadViewStateRecursive(Object savedState)
at System.Web.UI.Control.LoadChildViewStateByIndex(ArrayList childState)
at System.Web.UI.Control.LoadViewStateRecursive(Object savedState)
at System.Web.UI.Control.LoadChildViewStateByIndex(ArrayList childState)
at System.Web.UI.Control.LoadViewStateRecursive(Object savedState)
at System.Web.UI.Control.LoadChildViewStateByIndex(ArrayList childState)
at System.Web.UI.Control.LoadViewStateRecursive(Object savedState)
at System.Web.UI.Control.LoadChildViewStateByIndex(ArrayList childState)
at System.Web.UI.Control.LoadViewStateRecursive(Object savedState)
at System.Web.UI.Control.LoadChildViewStateByIndex(ArrayList childState)
at System.Web.UI.Control.LoadViewStateRecursive(Object savedState)
at System.Web.UI.Page.LoadAllState()
at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)

I have had a .NET 1.1 WebForm operating on a website since 2004, and it should be backward compatible with .NET 4.5 (RTM) – and largely, it is!  However, I have found that if you use ASP.NET Bundling, it will screw up the ViewState and you'll end up with the error above.  I found this out by decoding the ViewState and noticing it contained data from the bundling URLs, and I thought to myself, “why does bundling care about ViewState?”.  At the moment I have no solution; however, I'll post back here if I find one.

You can prove that it's bundling that causes the issue by simply removing the call to System.Web.Optimization.BundleTable.Bundles.ResolveBundleUrl from your page.

thanks

Wednesday, 4 July 2012

IIS7: no response; in Fiddler2, "ReadResponse() failed" for POST requests to a host IP which is not local; in Failed Request Tracing, an error in ManagedPipelineHandler: “The I/O operation has been aborted because of either a thread exit or an application request. (0x800703e3)”

 

Just spent 10 hours trying to figure out why a WorldPay callback to my development machine was not working.

This was simply because I had “Email scanning” protection enabled in AVG Antivirus.  If you turn this OFF, everything will work fine.

This is related to the Stack Overflow post: http://stackoverflow.com/questions/7439063/iis-asp-net-website-managedpipelinehandler-error-an-operation-was-attempted

Saturday, 5 May 2012

Encountering “System.IO.FileLoadException: Mixed mode assembly is built against version 'v2.0.50727' of the runtime and cannot be loaded in the 4.0 runtime without additional configuration information.” inside a Test Project in Visual Studio 11 beta?

Even though you may have configured the app.config file of the test project with the following configuration, it'll make no difference when trying to use .NET 2.0 mixed-mode assemblies from a .NET 4 test project.

<startup useLegacyV2RuntimeActivationPolicy="true">
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
</startup>

This configuration node is not effective because test projects are DLLs, not executables.  The node will not affect the actual underlying executable that hosts the test project's DLL when you run tests.  You need to change the config of that executable, which for me is hosted under: C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\CommonExtensions\Microsoft\TestWindow.

Look for “vstest.executionengine.x86.exe”.  This is the executable which VS runs while unit testing. Now find “vstest.executionengine.x86.exe.config”

Add the above configuration node to this file, save it, run your tests, and you should find the exception disappears.

The actual exception for me occurred while loading a Windows Azure SDK DLL:

Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.InitializeEnvironment()
   at Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment..cctor()
   --- End of inner exception stack trace ---
   at Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.get_IsAvailable()

Hopefully Microsoft will fix this issue in time for RTM.

Thursday, 10 November 2011

Parallel.ForEach operations and the CancellationToken

If you use a CancellationToken to enable cancellation during the parallel iteration of an enumerable, note that each in-flight operation is given a chance to finish: the OperationCanceledException is only thrown once all the threads have completed their current unit of work.

You can also monitor the cancellation token inside the units of work themselves if you like.

Windows Azure DevFabric forcing port 80 rather than port 81, 82 etc.

Hi there - it's sometimes useful to run a Windows Azure app in the dev fabric on port 80.  However, you need to be absolutely sure that port 80 is not in use beforehand.
Use TCPView from Sysinternals: if an item appears under "Local Port" called "http", then the port is in use.  You cannot easily find out which process that is, as it's a service; however, in my case, once I'd shut down the "Web Deployment Agent Service" (Run -> CMD -> net stop MsDepSvc), Azure was able to run the web role on port 80.

Also, ensure that Skype doesn't occupy port 80 too.

Tuesday, 25 January 2011

Distributed Lock

I’m working on a distributed system on Azure which has a Web and Worker Roles. Plus, there’s the possibility of N number of instances per role. The Worker role is a batch processor which consists of Jobs and the Job Scheduler.  My requirement is that we should only have one instance of the Job Scheduler throughout the whole system.
If this were a simple multi-threaded application with one app domain, it would be relatively easy using a mutex or the lock statement; however, given that the application could be spread across multiple virtual machines within the Azure Fabric, I had to come up with another way.
I solved this problem by using a SQL Azure table, serializable transactions and a Distributed Lock Service in C#.
First create the table
/****** Object:  Table [dbo].[Lock]    Script Date: 01/25/2011 14:18:13 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[Lock](
    [id] [int] IDENTITY(1,1) NOT NULL,
    [LockName] [nvarchar](4000) NOT NULL,
    [AcquirerGuid] [uniqueidentifier] NULL,
    [AcquiredDate] [datetime] NULL,
 CONSTRAINT [PK_Lock] PRIMARY KEY CLUSTERED (
    [id] ASC) WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF)
)
GO



Then create the Stored Procedures, “Lock_Acquire” and “Lock_Release”


IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[Lock_Acquire]') AND type in (N'P', N'PC'))
    DROP PROCEDURE [dbo].[Lock_Acquire]
GO

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

CREATE PROCEDURE [dbo].[Lock_Acquire](
    @Name NVARCHAR(4000),
    @AcquirerGuid UNIQUEIDENTIFIER)
AS
BEGIN

    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

    BEGIN TRANSACTION

        IF(NOT EXISTS(SELECT 1 FROM Lock WHERE LockName = @Name))
        BEGIN
            INSERT INTO [Lock] (LockName, AcquiredDate, AcquirerGuid)
            VALUES (@Name, GETUTCDATE(), @AcquirerGuid)
        END
        ELSE
        BEGIN

            UPDATE L
               SET L.AcquiredDate = GETUTCDATE(),
                   L.AcquirerGuid = @AcquirerGuid
              FROM Lock L
             WHERE L.LockName = @Name
               AND (L.AcquirerGuid = @AcquirerGuid -- already owned by the current requestor
                    OR DATEDIFF(SS, L.AcquiredDate, GETUTCDATE()) > 30 -- previous lock has expired
                    OR (L.AcquirerGuid IS NULL AND L.AcquiredDate IS NULL)) -- lock is free
        END

        SELECT @@ROWCOUNT;

    COMMIT TRANSACTION
END

GO


IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[Lock_Release]') AND type in (N'P', N'PC'))
    DROP PROCEDURE [dbo].[Lock_Release]
GO

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

CREATE PROCEDURE [dbo].[Lock_Release](
    @Name NVARCHAR(4000),
    @AcquirerGuid UNIQUEIDENTIFIER)
AS
BEGIN

    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

    UPDATE Lock
       SET AcquiredDate = NULL,
           AcquirerGuid = NULL
     WHERE AcquirerGuid = @AcquirerGuid

    SELECT @@ROWCOUNT;
END

GO



Import the Stored Procedures into Entity Framework (if you use EF)


Create the Distributed Lock Service


public class DistributedLockService : IDisposable
    {
        public Guid AcquirerId { get; set; }
        public string LockName { get; set; }
        public bool LockAcquired { get; set; }

        /// <summary>
        /// ctor
        /// </summary>
        /// <param name="lockName"></param>
        public DistributedLockService(string lockName)
        {
            LockName = lockName;
            AcquirerId = Guid.NewGuid();
        }

        /// <summary>
        /// Attempts to acquire the named lock
        /// </summary>
        /// <returns></returns>
        public bool Acquire()
        {
            LockAcquired = Facade.DataContext.Lock_Acquire(LockName, AcquirerId).FirstOrDefault() == 1;
            return LockAcquired;
        }

        /// <summary>
        /// Attempts to release the named lock
        /// </summary>
        /// <returns></returns>
        public bool Release()
        {
            if (!LockAcquired) return true;
            LockAcquired = false; // prevent a double-release from Dispose
            return Facade.DataContext.Lock_Release(LockName, AcquirerId).FirstOrDefault() == 1;
        }


        #region IDisposable Members

        public void Dispose()
        {
            Release();
        }

        #endregion
    }



You may have to change Facade.DataContext to be your equivalent ORM or Sql Helper class.
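If you're not using an ORM at all, the equivalent plain ADO.NET call is straightforward. Here is a minimal sketch; the connection string and the `LockDb` class name are placeholders of my own, not from the service above:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

public static class LockDb
{
    // Placeholder - point this at your own database.
    private const string ConnectionString = "Server=.;Database=MyDb;Integrated Security=true";

    // Calls the Lock_Acquire stored procedure; true means the lock was taken.
    public static bool Acquire(string lockName, Guid acquirerGuid)
    {
        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand("dbo.Lock_Acquire", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@Name", SqlDbType.NVarChar, 4000).Value = lockName;
            cmd.Parameters.Add("@AcquirerGuid", SqlDbType.UniqueIdentifier).Value = acquirerGuid;
            conn.Open();
            // The procedure ends with SELECT @@ROWCOUNT, so ExecuteScalar returns 1 or 0.
            return Convert.ToInt32(cmd.ExecuteScalar()) == 1;
        }
    }
}
```

A matching Release method would be identical apart from the procedure name.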


Create the Unit Test


[TestClass]
    public class DistributedLockServiceTest
    {
        [TestMethod]
        public void CheckThatLockingWorks()
        {
            string fakeLockName = Guid.NewGuid().ToString();
            
            List<bool> results = new List<bool>();
            ManualResetEvent handle = new ManualResetEvent(false);
            object mutex = new object();

            Parallel.Invoke(() =>
            {
                Parallel.For(0, 100, x =>
                {
                    handle.WaitOne();
                    DistributedLockService svc = new DistributedLockService(fakeLockName);

                    lock(mutex) results.Add(svc.Acquire());
                });
            }, 
            () =>
            {

                Thread.Sleep(2000);
                handle.Set();
            });
            Assert.AreEqual(1, results.Where(x => x == true).Count(), "The number of trues should be 1 for fake lock: " + fakeLockName);

        }
    }


Use it for real…


private DistributedLockService _lockService = new DistributedLockService("JobScheduler");



if(_lockService.Acquire()) // do something, like start a Job


So the idea is that you can instantiate the Distributed Lock Service and keep it for as long as necessary; then, whenever you wish to perform some action without the possibility of concurrent execution, do it after acquiring the lock.  Please also note that the lock lasts for up to 30 seconds at a time, so you must call Acquire again within that 30-second window if you wish to retain exclusivity; otherwise the lock will be relinquished to any other requestor.  If this is too short a timeframe, you can change it inside the Lock_Acquire stored procedure.
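Since the lock must be re-acquired within the 30-second window, one simple pattern is a background timer that calls Acquire before the lease expires. A sketch, assuming the DistributedLockService above (the 20-second interval is my own choice, comfortably inside the lease):

```csharp
using System;
using System.Threading;

// Keeps re-acquiring a lock so the 30-second lease never lapses
// while long-running work is in progress.
public class LockRenewer : IDisposable
{
    private readonly Timer _timer;

    public LockRenewer(Func<bool> acquire)
    {
        // Fire immediately, then every 20 seconds - safely inside the 30s lease.
        _timer = new Timer(_ => acquire(), null, TimeSpan.Zero, TimeSpan.FromSeconds(20));
    }

    public void Dispose()
    {
        _timer.Dispose();
    }
}
```

Usage would be something like `using (new LockRenewer(svc.Acquire)) { /* long-running job */ }`; once the renewer is disposed, renewals stop and the lease simply expires on the server.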


The reason for the time limit is to cover the case where the acquiring (lock-owning) thread throws an exception and never releases the lock.


Please let me know if you find any problems. Thanks.

Monday, 13 September 2010

Dell XPS M1330, nVidia GeForce 8400 GS overheats and the laptop cuts out

I have had this laptop since January 2009 and it’s been fine, but recently it’s been cutting out if I use Google Earth or Bing Maps!  I have measured the temperature at which it cuts out: about 110 degrees Celsius.  When it turns on again, it reports either error #M1001 or #M1004.  This is apparently an overheating problem with the nVidia GPU.

I am contacting Dell about this; however, my motherboard was only replaced last month, when my graphics card failed with what I can only describe as “on-screen display corruption issues”: weird symbols and colours appeared.  Oh well, it looks like I’ll need another repair, or I’ll just have to buy a new laptop.  As I’m a freelancer, the loss of earnings will eventually become greater than the cost of a laptop, so I may as well get a new one… hopefully from a more reliable brand.

UPDATE: 15th September 2010: A Dell engineer changed the heat sink and now it runs perfectly!

ASP.NET MVC 2, “Request Validation has detected a potentially dangerous client input value, and processing of the request has been aborted” and the ValidateInput does not work

If you have problems getting the [ValidateInput(false)] attribute to work in a Visual Studio 2010 ASP.NET MVC 2 solution; please remember to put requestValidationMode="2.0" on the httpRuntime element of the web.config.
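For reference, the relevant web.config fragment looks something like this (any other attributes already on your httpRuntime element should be kept):

```xml
<configuration>
  <system.web>
    <httpRuntime requestValidationMode="2.0" />
  </system.web>
</configuration>
```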

Then everything will be fine. Enjoy.

Monday, 23 August 2010

Umbraco 4.x – Warning on upgrading document types

Please be aware that, although Umbraco is a great Content Management System, once you have deployed an application based on it, it’s very difficult to upgrade items in the meta-database afterwards.  Umbraco Courier does a good job of transferring content, but it doesn’t take into account changes to Document, Media and Member types, nor does it take into account new macros or changes to settings on those entities.  Courier will report the problems as you try to migrate content which depends on the latest meta-database changes.

This is akin to developing your database in the development environment and then, once you’ve deployed the software, having to use the SQL Table Designer GUI to make all the necessary schema changes manually.

Thursday, 5 August 2010

Dynamic subdomains on Apache and PHP

To rewrite subdomain urls internally to subdirectories under the root, ensure mod_rewrite is installed in Apache. Configure .htaccess inside the root directory as follows:-

RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\.mydomain\.com
RewriteCond %{HTTP_HOST} ([^.]+)\.mydomain\.com
RewriteRule ^(.*)$ /home/sites/mydomain.com/public_html/%1/index.html [L]


Obviously, replace “mydomain” and change the path on the last line to wherever your website physically resides on the server. The %1 variable captures the subdomain name, which is then used as the name of the destination subdirectory.
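As a concrete illustration of how %1 maps to a directory, a layout like the following would serve each subdomain from its own folder (the subdomain names here are made up):

```
/home/sites/mydomain.com/public_html/
├── index.html          ← www.mydomain.com (excluded from the rewrite)
├── blog/index.html     ← blog.mydomain.com
└── shop/index.html     ← shop.mydomain.com
```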