Thursday, 23 July 2009

“Unblock” files in Vista / Windows Explorer. Got very bored of this

As I don’t download dodgy things or go on dodgy websites, I got bored with having to unblock files which come from an “untrusted” source.

This happens because, by default, Vista stores where a file came from (its “zone information”) in a separate NTFS alternate data stream, named Zone.Identifier. To disable this behaviour…

  1. Run GPEDIT.MSC to launch the “Group Policy” editor.
  2. Go to “User Config” –> “Administrative Templates” –> “Windows Components” –> “Attachment Manager”.
  3. Enable the "Do not preserve zone information in file attachments" policy.

My main problem was that I have .NET assemblies which originate from component providers, and which are ‘untrusted’ by Vista.  This would mean some of my TDD unit tests would fail in Visual Studio.  Even though I unblocked all of the files, somehow, they would end up blocked again after a while.
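
If you only want to unblock files programmatically, rather than change the machine-wide policy, you can delete the Zone.Identifier stream itself; a minimal sketch using P/Invoke (the class name is mine, and there is no error handling):

using System;
using System.Runtime.InteropServices;

// Hypothetical helper class; deleting the Zone.Identifier alternate data
// stream is what “Unblock” does under the covers.
static class Unblocker
{
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern bool DeleteFile(string name);

    public static bool Unblock(string path)
    {
        // Win32 DeleteFile accepts the "file:stream" syntax for named streams.
        return DeleteFile(path + ":Zone.Identifier");
    }
}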

Saturday, 18 July 2009

Windows Workflow: “instance operation is not valid on workflow runtime thread” or “EventDeliveryFailedException: MessageQueueErrorCode QueueNotFound”

When you’ve created your ExternalDataExchange interface and class, remember that when a CallExternalMethod activity invokes your external method, any event you raise back to the workflow must be raised on a separate thread.

So if you have an InvokeSendPreview method that you want to invoke from a WF, ensure that you raise the event inside a new thread, for instance via the ThreadPool, as follows.

[Serializable]
public class CommunicationService : ICommunicationService
{
    #region ICommunicationService Members

    public event EventHandler<ExternalDataEventArgs> SendPreview;

    public void InvokeSendPreview()
    {
        // Capture the instance id while still on the workflow thread.
        Guid instanceId = WorkflowEnvironment.WorkflowInstanceId;

        // Raise the event back to the runtime on a ThreadPool thread,
        // never on the workflow runtime thread itself.
        ThreadPool.QueueUserWorkItem(delegate(object state)
        {
            SendPreview(null, new ExternalDataEventArgs(instanceId));
        });
    }

    #endregion
}


The error in the title of this post was caused by raising the event directly on the workflow thread:

public void InvokeSendPreview()
{
    Guid instanceId = WorkflowEnvironment.WorkflowInstanceId;
    SendPreview(null, new ExternalDataEventArgs(instanceId)); // ERROR HERE!
}

This error occurs because WF runtime threads are not to be used to execute code outside WF; raising the event synchronously re-enters the runtime on its own thread.
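
For completeness, the local service is registered with the runtime through an ExternalDataExchangeService; a minimal hosting sketch (PreviewWorkflow is a placeholder for your workflow type):

using System;
using System.Workflow.Activities;
using System.Workflow.Runtime;

class Host
{
    static void Main()
    {
        using (WorkflowRuntime runtime = new WorkflowRuntime())
        {
            // Register the local service so CallExternalMethod /
            // HandleExternalEvent activities can find ICommunicationService.
            ExternalDataExchangeService dataExchange = new ExternalDataExchangeService();
            runtime.AddService(dataExchange);
            dataExchange.AddService(new CommunicationService());

            WorkflowInstance instance = runtime.CreateWorkflow(typeof(PreviewWorkflow));
            instance.Start();

            // In a real host, wait for the workflow to complete
            // before disposing of the runtime.
        }
    }
}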

Tuesday, 7 July 2009

SQL Server 2008 Query performance / execution speed difference between SQL Management Studio and ADO.NET SQL Data Provider for the same query.

Just had a really annoying problem where a stored procedure that I invoked from SSMS runs in 264ms, but from ADO.NET it runs in 1445ms!  Huge difference!

I researched and found that ‘SET ARITHABORT ON’ is set by default under SSMS, but not ADO.NET.  Once I included this in the SP, the execution time became consistently ~265ms.
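
An alternative to embedding the SET in the stored procedure is to issue it on the ADO.NET connection before the call; a minimal sketch (the connection string and procedure name are placeholders):

using System.Data;
using System.Data.SqlClient;

// connectionString is assumed to be defined elsewhere.
using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();

    // Match the SSMS session setting so both contexts share the same cached plan.
    using (SqlCommand set = new SqlCommand("SET ARITHABORT ON;", conn))
    {
        set.ExecuteNonQuery();
    }

    // dbo.MyStoredProc is a placeholder name.
    using (SqlCommand cmd = new SqlCommand("dbo.MyStoredProc", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.ExecuteNonQuery();
    }
}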

However, I suspect there is a problem with the query itself, so this isn’t a permanent fix; it’ll do for now though, as I need to get beta 1 of my project released and it’s getting late!

I will blog again once I have found the root cause.

Note to self! SSMS and ADO.NET configure their connection SET options differently, and this will usually be the source of query behaviour / timing differences between the two!

Saturday, 4 July 2009

My Architecture and where to put bootstrapping for IoC Containers

Logical Architecture Diagram

[Image: KHDCArchitecture]

Above I have depicted what I consider an effective way to structure the architecture of a multitier software solution which utilises TDD and IoC.  In my particular case, I’m using C# .NET.

Definition of Terms

Consumer Projects: The blue coloured items represent the Consumer Projects, in that they rely on the infrastructure.

Infrastructure: The yellow coloured items are infrastructure, i.e., they’re depended on by the Consumer Projects and by other infrastructure projects. 

Architecture: Represents the entire structure of the solution, including consumer projects and infrastructure.

IoC: Inversion of Control.  This is a software design principle whereby classes do not depend directly on each other; they depend on interfaces or abstract classes whose real implementation can be replaced at runtime.  This is also called Dependency Injection.  Essentially it means that the consumer of a class can define that class’s concrete dependencies at runtime, as long as each concrete implementation implements the expected interface.
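
A minimal constructor-injection sketch to illustrate (all names are hypothetical):

public interface IEmailSender
{
    void Send(string to, string body);
}

public class RegistrationService
{
    private readonly IEmailSender emailSender;

    // The consumer decides which IEmailSender implementation is used;
    // RegistrationService never depends on a concrete sender.
    public RegistrationService(IEmailSender emailSender)
    {
        this.emailSender = emailSender;
    }

    public void Register(string email)
    {
        emailSender.Send(email, "Welcome!");
    }
}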

IoC Container: Such as StructureMap or Castle Windsor.  An IoC container wires up concrete implementations at runtime, usually on application start-up.  In TDD unit tests, mock objects are often used instead of the real dependencies; if the tests used real dependencies, they would be integration tests.

Overview

In my architecture, the blue items are consumer projects, which are not depended on by anything else – but they all depend on all the yellow items – the infrastructure.

  • The consumer projects are Unit/Integration test projects, GUIs and Windows Services.
  • Immediately below the CPs, we have an IoC bootstrapper project, which is depended on by all CPs.  The bootstrapper project is very thin, and only has one class, called “Bootstrapper” (see the sketch after this list).  This class utilises the StructureMap IoC Container, which wires up interfaces to concrete implementations for the production scenario, i.e., the most common dependency structure.  It is the same wiring that would have been defined if no IoC had been used and all classes were tightly coupled.
    • The benefit is that, once we’ve invoked the bootstrapper, we can still rearrange the dependencies in preparation for a Unit Test, Integration test or real-world use case.
    • If we wanted to utilise IoC inside the Consumer Projects, we can subclass the Bootstrapper to customise it for each CP, if we choose.
    • Of course, the bootstrapper project depends on everything else.  The only reason it’s in a separate project from the CPs is so that it can be re-used across all of them.
  • We have two projects (vertical) which are dependencies of the rest of the infrastructure:
    • Project.DomainModel contains business objects that represent the entities in the domain.  I often use the adapter pattern, which essentially wraps the data object: the underlying data object is used to persist the data, and mapping between the adapter interface and the data object interface is done within the adapter.  This means you can extend the data object into a business object.  Normally inheritance could be used, but this confuses the ADO.NET Entity Framework!  DomainModel is depended on by everything except the data layer, which it depends on in order to wrap data objects.
    • Project.Common contains abstract entities and helper objects; such as language extensions and hashing algorithms.  It’s basically a place to put useful things which are abstract from the business domain and may be useful to all other projects.
  • Project.Services is the service layer.  It provides functionality related to the business and application.  E.g., I have put user profile storage in there because, in my application, a user profile could be in the database, in browser cookies, or a combination of both.  I have also put user login and registration into the services layer.  The services layer invokes the data layer and converts between data layer objects and DomainModel objects.
  • Project.Data is the data layer.  It is concerned with data retrieval from the database and populates data layer objects.  In my architecture, ADO.NET Entities are my data objects.  The consumer projects can either get data from the services layer, which provides DomainModel objects or data layer objects; or they can get data from the Data Layer directly.  My data project also supports direct access, which utilises SqlDataReader to read data directly from the database and populate Domain Model objects.  The key is to provide flexibility and performance as and when needed by consuming layers. 
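
Here is a minimal sketch of the bootstrapper idea.  The service names are hypothetical and the registration syntax assumes a StructureMap 2.5-era API:

using StructureMap;

public static class Bootstrapper
{
    public static void WireUp()
    {
        ObjectFactory.Initialize(x =>
        {
            // Production wiring: the default, most common dependency graph.
            // IUserProfileService / UserProfileService are hypothetical names.
            x.ForRequestedType<IUserProfileService>()
             .TheDefaultIsConcreteType<UserProfileService>();
        });
    }
}

A CP calls Bootstrapper.WireUp() once at start-up; a test can then replace individual registrations with mocks (e.g. via ObjectFactory.Inject) before resolving anything.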

Therefore, in my architecture, it’s OK for consumer projects to use data layer objects or Domain Model objects.  The reason for this is that there are some entities in the database which don’t represent a business domain entity.  e.g. There’s a user entity, which maps directly onto a Customer domain model object.  There are business rules attached to customer objects, therefore, the need for a Domain Model object is established.
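
A minimal sketch of that adapter idea (all names are hypothetical; the User class stands in for a generated data object):

// Hypothetical data layer object (e.g. generated by the Entity Framework).
public class User
{
    public string EmailAddress { get; set; }
}

// Domain model interface for the business entity.
public interface ICustomer
{
    string Email { get; set; }
}

// The adapter wraps the data object; mapping between the domain
// interface and the data object happens inside the adapter.
public class CustomerAdapter : ICustomer
{
    private readonly User user;

    public CustomerAdapter(User user)
    {
        this.user = user;
    }

    public string Email
    {
        get { return user.EmailAddress; }
        set { user.EmailAddress = value; }
    }
}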

At the same time, there’s an application data layer object which is a very simple lightweight object which has no business rules… therefore, the CPs utilise this object directly.

The service layer provides most things, but not all. For instance, I have a full text search function in the data layer.  The approach advocated by purist ORM-lovers (!) is to utilise something like LINQ-to-SQL (or entity framework etc) and then query the wonderful strongly typed objects that are returned.  However, the problem is that you get the lowest common denominator of performance and lose real control.  As with everything, there’s a price for high-level ORM frameworks - usually performance and control (P&C) is that price.  So… I have a Project.Data.DirectAccess namespace, which contains “dirty” classes which return SqlDataReaders!  This means that for my search use case, I get the best possible user experience.  On the other hand, for stuff where P&C doesn’t matter, such as when a user registers, or updates their profile, the CPs can utilise the service layer which will provide beautiful Domain Model objects or (slightly less beautiful) data layer objects.
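
A minimal sketch of the direct-access idea (connection string handling, procedure and parameter names are placeholders):

using System.Data;
using System.Data.SqlClient;

public class SearchDirectAccess
{
    private readonly string connectionString;

    public SearchDirectAccess(string connectionString)
    {
        this.connectionString = connectionString;
    }

    // Returns an open reader; CommandBehavior.CloseConnection ensures the
    // connection is released when the caller disposes of the reader.
    public SqlDataReader Search(string terms)
    {
        SqlConnection conn = new SqlConnection(connectionString);
        conn.Open();

        SqlCommand cmd = new SqlCommand("dbo.FullTextSearch", conn);
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("@terms", terms);

        return cmd.ExecuteReader(CommandBehavior.CloseConnection);
    }
}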

Pragmatism rules my design; adhere to rules commonly, break them exceptionally = best of both worlds.

Project Code Names

Always start a new project with a totally domain-agnostic codename, e.g. Argon or Oxide, or Phosphate or BobTheProject!  If you were starting a project for XYZ Insurance, the most obvious name for the project is XYZ, but in a few years, when XYZ is bought out by AAA insurance, suddenly the project name is incorrect. 

Or if you create a project which calculates the Fibonacci sequence, you might call it Fibonacci; however, when you evolve the project to calculate prime numbers, the project name will again be incorrect, and you have to either ignore it or rename everything, which costs time and money.  On a large project team, a rename can also inhibit other developers from proceeding with their work, which is why project names are often left alone.

The easiest solution is, from the outset, to create a project codename.  The codename is only really relevant to people working on the project.  It helps establish the project as a conceptual entity in its own right, one that serves both the purpose of the project and the organisation that owns it.

List SQL Server DB Table sizes

-- Collect sp_spaceused output for every user table into a temp table.
set nocount on
create table #spaceused (
  name nvarchar(120),
  rows char(11),
  reserved varchar(18),
  data varchar(18),
  index_size varchar(18),
  unused varchar(18)
)

-- Cursor over every user table in the current database.
declare Tables cursor for
  select name
  from sysobjects where type='U'
  order by name asc

OPEN Tables
DECLARE @table varchar(128)

FETCH NEXT FROM Tables INTO @table

WHILE @@FETCH_STATUS = 0
BEGIN
  -- sp_spaceused returns one row of size figures per table.
  insert into #spaceused exec sp_spaceused @table
  FETCH NEXT FROM Tables INTO @table
END

CLOSE Tables
DEALLOCATE Tables

-- Report per-table sizes, then clean up.
select * from #spaceused
drop table #spaceused

-- With no arguments, sp_spaceused reports totals for the whole database.
exec sp_spaceused