Tuesday, 22 September 2009

SQL Test Data Generator

Great script for generating test data in a SQL Server table:

declare @select varchar(max), @insert varchar(max), @column varchar(100),
    @type varchar(100), @identity bit, @db nvarchar(100)

set @db = N'aqs_Lead'
set @select = 'select '
set @insert = 'insert into ' + @db + ' ('

declare crD cursor fast_forward for
select column_name, data_type,
COLUMNPROPERTY(
    OBJECT_ID(
       TABLE_SCHEMA + '.' + TABLE_NAME),
    COLUMN_NAME, 'IsIdentity') AS COLUMN_ID
from AQSProd2.INFORMATION_SCHEMA.COLUMNS
where table_name = @db

open crD
fetch crD into @column, @type, @identity

while @@fetch_status = 0
begin
if @identity = 0 or @identity is null
begin
    set @insert = @insert + @column + ', '
    set @select = @select  +
        case @type
            when 'int' then '1'
            when 'varchar' then '''test'''
            when 'nvarchar' then '''test'''
            when 'smalldatetime' then 'getdate()'
            when 'bit' then '0'
            when 'decimal' then '0'
            else 'NULL'
        end + ', '
end
fetch crD into @column, @type, @identity
end

set @select = left(@select, len(@select) - 1) -- LEN ignores the trailing space, so this strips the trailing comma
set @insert = left(@insert, len(@insert) - 1) + ')'
exec(@insert + @select)

close crD
deallocate crD

Courtesy of Keith Bloom: http://stackoverflow.com/questions/1118960/create-test-data-in-sql-server

Tuesday, 8 September 2009

ThreadPool threads

Note to self: when debugging inside thread pool threads, the debugger may skip lines and exhibit erratic behaviour, which may drive you insane.  The best thing to do is use your own Thread for all but the tiniest operations – or switch back to a thread pool thread after debugging.
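A tiny sketch of that swap (all names are illustrative): a dedicated Thread you can step through predictably, versus a pool thread for when the code is stable.

```csharp
using System;
using System.Threading;

class Worker
{
    static string _result;
    static readonly ManualResetEvent _done = new ManualResetEvent(false);

    static void RunWork(object state)
    {
        // the long-running operation you actually want to step through
        _result = "working on " + state;
        _done.Set();
    }

    static void Main()
    {
        // While debugging, prefer a dedicated thread: the debugger
        // steps through it predictably, unlike pool threads.
        bool debugging = true;  // flip to false once the code is stable

        if (debugging)
        {
            var t = new Thread(RunWork);
            t.Start("job-1");
            t.Join();
        }
        else
        {
            // Thread pool thread: fine in production, erratic to debug.
            ThreadPool.QueueUserWorkItem(RunWork, "job-1");
            _done.WaitOne();
        }

        Console.WriteLine(_result); // prints "working on job-1"
    }
}
```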

Saturday, 29 August 2009

Polymorphic Application Integration End Points

From now on, one of the absolute “must-haves” for any software engineering effort is to create polymorphic AI EPs (Application Integration End Points).  Sounds fancy, but it’s not.

All it really is is dependency injection, or IoC, applied not at the unit level but at the application-integration level.  At the unit level, for example, if you create a domain class which talks to a repository object, you may swap out the repository for a mock repository during an automated unit test.
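For example, a minimal sketch of that unit-level swap (the repository and domain class names are made up):

```csharp
using System;

// The domain class depends on the interface, not the implementation,
// so a test can substitute a mock repository.
public interface ICustomerRepository
{
    string GetName(int id);
}

public class SqlCustomerRepository : ICustomerRepository
{
    public string GetName(int id) { /* real DB call would go here */ return "from-db"; }
}

public class MockCustomerRepository : ICustomerRepository
{
    public string GetName(int id) { return "test-customer"; }
}

public class CustomerGreeter
{
    private readonly ICustomerRepository _repo;
    public CustomerGreeter(ICustomerRepository repo) { _repo = repo; }
    public string Greet(int id) { return "Hello, " + _repo.GetName(id); }
}

class Program
{
    static void Main()
    {
        // In a unit test, inject the mock instead of the real repository.
        var greeter = new CustomerGreeter(new MockCustomerRepository());
        Console.WriteLine(greeter.Greet(1)); // prints "Hello, test-customer"
    }
}
```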

At the AI level, you conceptually and logically separate the application from other services or applications.  e.g., whenever you use the SmtpClient class, you’re binding your code to the concrete implementation of an SMTP end point.  SMTP is an end point; so are web services and WCF services – in fact, anything outside your domain of control.  The database is also integration, hence we often have database abstraction layers within architectures that are meant to abstract the data-storage technology from the code (ideally we could change the RDBMS from SQL Server to Oracle without changing any code).

Here are some example Application Integration End Points (AIEPs):

  1. Currency convert web service
  2. Email/SMTP service
  3. Database engine
  4. Exception Reporting service
  5. A web-browser (it’s the GUI but it helps to think of it as integration)
  6. SMS messaging service
  7. Logging service
  8. MSMQ messaging

All these end points are about communicating beyond your application boundary.  Loosely coupling your code to AIEPs gives you a much better ability to test your code in your development environment.

If you have ever accidentally sent out emails to real users from your test environment, then polymorphic AIEPs will enable you to swap out your SmtpClient for a MockSmtpClient.

For example:

public interface ISmtpClientEP
{
    void Send(string text);
}

public class SmtpClientEP : ISmtpClientEP
{
    public void Send(string text)
    {
        // instantiate the real SmtpClient and send the message
    }
}

public class TraceClientEP : ISmtpClientEP
{
    public void Send(string text)
    {
        // send your text to anywhere!  e.g. a text file in a local directory?
    }
}

 

If you use an IoC Container such as StructureMap, you can then call this from the client code like so:

ObjectFactory.GetInstance<ISmtpClientEP>().Send("hello world");

 

But remember, on application start-up, to tell StructureMap which concrete implementation you want to use:

ObjectFactory.Initialize(x =>
{
    x.ForRequestedType<ISmtpClientEP>()
      .CacheBy(StructureMap.Attributes.InstanceScope.PerRequest)
      .TheDefaultIsConcreteType<SmtpClientEP>();
});

So what have we done here?  We have utilised StructureMap to specify which concrete implementation of ISmtpClientEP we wish to use.

Now, if you use conditional compilation symbols, you can do this:

#if DEBUG

ObjectFactory.Initialize(x =>
{
    x.ForRequestedType<ISmtpClientEP>()
      .CacheBy(StructureMap.Attributes.InstanceScope.PerRequest)
      .TheDefaultIsConcreteType<TraceClientEP>();  // use the trace end point for debug
});

#else

ObjectFactory.Initialize(x =>
{
    x.ForRequestedType<ISmtpClientEP>()
      .CacheBy(StructureMap.Attributes.InstanceScope.PerRequest)
      .TheDefaultIsConcreteType<SmtpClientEP>();  // real one for release compilation!
});

#endif

 

So polymorphic AIEPs utilise DI (Dependency Injection) to enable you to change (“polymorph”) your concrete implementations.  This is just like automated unit testing (TDD), except that, in my view, TDD can be a large overhead.  My hybrid solution is to use DI for Application Integration End Points; if you want, you can hard-wire your concrete implementations of other classes (classes that are internal to your app), without worrying about affecting other systems or services when you hit Debug.

Tuesday, 25 August 2009

StopBeat.com and PointAngle spam and Intela

Please note that bingo spam from the domains StopBeat.com and PointAngle.com comes from Intela: +44 (0) 207 849 3419 / +1.303.473.0000

http://intela.com/contact.php 

Moan at them.

Thursday, 23 July 2009

“Unblock” files in Vista / Windows Explorer. Got very bored of this

As I don’t download dodgy things or visit dodgy websites, I got bored of having to unblock files which come from an “untrusted” source.

This happens because, by default, Vista stores where a file came from in a separate file stream.  To disable this behaviour:

  1. Run GPEDIT.MSC to launch the “Group Policy” editor.
  2. Go to “User Config” –> “Administrative Templates” –> “Windows Components” –> “Attachment Manager”.
  3. Enable the "Do not preserve zone information in file attachments" policy.

My main problem was that I have .NET assemblies which originate from component providers and are ‘untrusted’ by Vista.  This meant some of my TDD unit tests would fail in Visual Studio.  Even though I unblocked all of the files, somehow they would end up blocked again after a while.

Saturday, 18 July 2009

Windows Workflow: “instance operation is not valid on workflow runtime thread” or “EventDeliveryFailedException: MessageQueueErrorCode QueueNotFound”

When you’ve created your ExternalDataExchange interface and class, remember, when you have a CallExternalMethod activity, to invoke the external method in a separate thread.

So if you have an InvokeSendPreview method that you want to invoke from a WF, ensure that you encapsulate the call to the event inside a new thread – for instance, via the ThreadPool – as follows.

[Serializable]
public class CommunicationService : ICommunicationService
{
    #region ICommunicationService Members

    public event EventHandler<ExternalDataEventArgs> SendPreview;

    public void InvokeSendPreview()
    {
        // Capture the instance id on the WF thread, then raise the event
        // from a thread pool thread so we never run on the WF runtime thread.
        Guid instanceId = WorkflowEnvironment.WorkflowInstanceId;
        ThreadPool.QueueUserWorkItem(delegate(object state)
        {
            SendPreview(null, new ExternalDataEventArgs(instanceId));
        });
    }

    #endregion
}


The error at the beginning of this post was caused by



public void InvokeSendPreview()
{
    Guid instanceId = WorkflowEnvironment.WorkflowInstanceId;
    SendPreview(null, new ExternalDataEventArgs(instanceId)); // ERROR HERE!
}

This error occurs because WF threads must not be used to execute code outside the workflow runtime.

Tuesday, 7 July 2009

SQL Server 2008 Query performance / execution speed difference between SQL Management Studio and ADO.NET SQL Data Provider for the same query.

Just had a really annoying problem where a stored procedure that I invoked from SSMS runs in 264ms, but from ADO.NET it runs in 1445ms!  Huge difference!

I researched and found that ‘SET ARITHABORT ON’ is set by default under SSMS, but not under ADO.NET.  Once I included this in the SP, the execution time became consistently ~265ms.

However, I suspect there is a problem with the query itself, so this is not a permanent fix; it’ll do for now though, as I need to get beta 1 of my project released and it’s getting late!

I will blog again once I have found the root cause.

Note to self! SSMS and ADO.NET configure their connection contexts differently; and this will usually be the source of query behavioural / timing differences!

Saturday, 4 July 2009

My Architecture and where to put bootstrapping for IoC Containers

Logical Architecture Diagram

KHDCArchitecture

Above I have depicted my opinion of an effective way to structure the architecture of a multitier software solution which utilises TDD and IoC.  In my particular case, I’m using C# .NET.

Definition of Terms

Consumer Projects: The blue coloured items represent the Consumer Projects, in that they rely on the infrastructure.

Infrastructure: The yellow coloured items are infrastructure, i.e., they’re depended on by the Consumer Projects and by other infrastructure projects. 

Architecture: Represents the entire structure of the solution, including consumer project and infrastructure.

IoC: Inversion of Control.  This is a software design methodology in which classes do not depend directly on each other; they depend on interfaces or abstract classes whose real implementation can be supplied at runtime.  This is also called Dependency Injection.  Essentially it means that the consumer of a class can define that class’s concrete dependencies at runtime, as long as the concrete implementation implements the expected interface.

IoC Container: Such as StructureMap or CastleWindsor. An IoC Container wires up concrete implementations at runtime, usually on application start-up.  In TDD Unit Tests, mock objects are often used instead of the real dependencies.  If they used real dependencies, then it would be an integration test.

Overview

In my architecture, the blue items are consumer projects, which are not depended on by anything else – but they all depend on all the yellow items – the infrastructure.

  • The consumer projects are Unit/Integration test projects, GUIs and Windows Services.
  • Immediately below the CPs, we have an IoC bootstrapper project, which is depended on by all CPs.  The bootstrapper project is very thin and has only one class, called “Bootstrapper”.  This class utilises the StructureMap IoC Container, which wires up interfaces to concrete implementations for the production scenario; i.e., the production scenario is the most common dependency structure – the same one that would exist if no IoC had been used and all classes were tightly coupled.
    • The benefit is that, once we’ve invoked the bootstrapper, we can still rearrange the dependencies in preparation for a Unit Test, Integration test or real-world use case.
    • If we wanted to utilise IoC inside the Consumer Projects, we can subclass the Bootstrapper and customise it for each CP, if we choose.
    • Of course, the bootstrapper project relies on everything else.  The only reason it’s in a separate project from the CPs is that we wish to re-use it across all of them; it is separated from the CPs for reuse only.
  • We have two projects (vertical) which are dependencies to the rest of the infrastructure
    • Project.DomainModel contains business objects that represent the entities in the domain.  I often use the adapter pattern which essentially wraps the data object. This means that the underlying data object is used to persist the data, and mapping between the adapter interface and data object interface is done within the adapter.  It means you can extend the data object to be a business object.  Normally inheritance can be used, but this confuses the ADO.NET Entity Framework!  DomainModel is depended on by everything except the data layer, which it depends on, in order to wrap data objects.
    • Project.Common contains abstract entities and helper objects; such as language extensions and hashing algorithms.  It’s basically a place to put useful things which are abstract from the business domain and may be useful to all other projects.
  • Project.Services is the service layer.  It provides functionality related to the business and application.  e.g., I have put user profile storage in there because, in my application, user profile data could be in the database, in browser cookies, or a combination of both.  I have also put user login and registration into the services layer.  The services layer invokes the data layer and converts between data layer objects and DomainModel objects.
  • Project.Data is the data layer.  It is concerned with data retrieval from the database and populates data layer objects.  In my architecture, ADO.NET Entities are my data objects.  The consumer projects can either get data from the services layer, which provides DomainModel objects or data layer objects; or they can get data from the Data Layer directly.  My data project also supports direct access, which utilises SqlDataReader to read data directly from the database and populate Domain Model objects.  The key is to provide flexibility and performance as and when needed by consuming layers. 

Therefore, in my architecture, it’s OK for consumer projects to use data layer objects or Domain Model objects.  The reason for this is that there are some entities in the database which don’t represent a business domain entity.  e.g. There’s a user entity, which maps directly onto a Customer domain model object.  There are business rules attached to customer objects, therefore, the need for a Domain Model object is established.

At the same time, there’s an application data layer object which is a very simple lightweight object which has no business rules… therefore, the CPs utilise this object directly.

The service layer provides most things, but not all. For instance, I have a full text search function in the data layer.  The approach advocated by purist ORM-lovers (!) is to utilise something like LINQ-to-SQL (or entity framework etc) and then query the wonderful strongly typed objects that are returned.  However, the problem is that you get the lowest common denominator of performance and lose real control.  As with everything, there’s a price for high-level ORM frameworks - usually performance and control (P&C) is that price.  So… I have a Project.Data.DirectAccess namespace, which contains “dirty” classes which return SqlDataReaders!  This means that for my search use case, I get the best possible user experience.  On the other hand, for stuff where P&C doesn’t matter, such as when a user registers, or updates their profile, the CPs can utilise the service layer which will provide beautiful Domain Model objects or (slightly less beautiful) data layer objects.

Pragmatism rules my design; adhere to rules commonly, break them exceptionally = best of both worlds.
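The bootstrapper idea above can be sketched without pulling in StructureMap itself – a minimal hand-rolled registry standing in for the container (all names are illustrative): wire up production defaults at start-up, then let a test or CP rearrange the dependencies afterwards.

```csharp
using System;
using System.Collections.Generic;

// Stand-in for an IoC container: maps an interface type to a factory.
static class Container
{
    static readonly Dictionary<Type, Func<object>> _map =
        new Dictionary<Type, Func<object>>();

    public static void Register<TInterface>(Func<object> factory)
    {
        _map[typeof(TInterface)] = factory;  // last registration wins
    }

    public static T Resolve<T>()
    {
        return (T)_map[typeof(T)]();
    }
}

interface ISmtpClientEP { string Send(string text); }

class SmtpClientEP : ISmtpClientEP
{
    public string Send(string text) { return "smtp:" + text; }
}

class TraceClientEP : ISmtpClientEP
{
    public string Send(string text) { return "trace:" + text; }
}

static class Bootstrapper
{
    // Wires up the production-scenario defaults; called once at start-up.
    public static void WireDefaults()
    {
        Container.Register<ISmtpClientEP>(() => new SmtpClientEP());
    }
}

class Program
{
    static void Main()
    {
        Bootstrapper.WireDefaults();

        // A test (or a CP) can rearrange the dependencies after bootstrapping.
        Container.Register<ISmtpClientEP>(() => new TraceClientEP());

        Console.WriteLine(Container.Resolve<ISmtpClientEP>().Send("hello")); // prints "trace:hello"
    }
}
```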

Project Code Names

Always start a new project with a totally domain-agnostic codename, e.g. Argon or Oxide, or Phosphate or BobTheProject!  If you were starting a project for XYZ Insurance, the most obvious name for the project is XYZ, but in a few years, when XYZ is bought out by AAA insurance, suddenly the project name is incorrect. 

Or if you create a project which calculates the Fibonacci sequence, you might call it Fibonacci; however, when you evolve the project to calculate prime numbers, the project name will again be incorrect, meaning you either have to ignore it or rename everything – which costs time and money, and on a large project team could inhibit other developers from proceeding with their work.  This is why project names are often left alone.

The easiest solution is, from the outset, to create a project codename.  The codename is only really relevant to people working on the project.  It helps establish the project as a conceptual entity in its own right, one that serves both the purpose of the project and the organisation that owns it.

List SQL Server DB Table sizes

set nocount on
create table #spaceused (
  name nvarchar(120),
  rows char(11),
  reserved varchar(18),
  data varchar(18),
  index_size varchar(18),
  unused varchar(18)
)

declare Tables cursor for
  select name
  from sysobjects where type='U'
  order by name asc

OPEN Tables
DECLARE @table varchar(128)

FETCH NEXT FROM Tables INTO @table

WHILE @@FETCH_STATUS = 0
BEGIN
  insert into #spaceused exec sp_spaceused @table
  FETCH NEXT FROM Tables INTO @table
END

CLOSE Tables
DEALLOCATE Tables

select * from #spaceused
drop table #spaceused

exec sp_spaceused

Wednesday, 17 June 2009

SQL Server – ISNULL() = IS_SLOW!

Just been doing some query optimisation and trying to work out what part of a Stored Procedure is slowing the query down.

I found this line: AND CategoryId = ISNULL(@CategoryId, CategoryId)

This is the lazy way to conditionally include clauses!

This is much better: (@CategoryId IS NULL OR CategoryId = @CategoryId)

Changing that line increased the performance by 0.5 second. Now it runs in < 150ms.

Friday, 12 June 2009

ADO.NET Entity Framework Designer : Deleting Table causes validation errors!

Remember, to delete a table from the Entity Framework designer, don’t just select the table and hit delete… you need to delete the table from the database first, then hit “Update Model from Database…”. 

ADO.NET Entity Framework “Problem in Mapping Fragments starting at lines XXX Non-Primary-Key column(s) [XXX] are being mapped in both fragments to different conceptual side properties - data inconsistency is possible because the corresponding conceptual side properties can be independently modified.”

I am using the ADO.NET Entity Framework to represent my domain model.  When I first created the model from the database, some of the tables in my database had implied foreign keys, i.e., foreign keys that weren’t explicitly set as Foreign Keys – this was a mistake.

So within SQL Server Management Studio I used the table designer to set up the foreign keys.  Easily done!

Then I used the “Update Model from database” function inside the ADO.NET EF Designer, expecting it to pick up the foreign keys; which it did do… but on Validation, it caused the error above.

This error occurs because the foreign key now appears both as an Association (created when you hit “Update Model from Database”) and as a scalar property on the Entity object.

The solution is to delete the scalar property that represented your foreign key, before it was transformed into an association. Basically, if your parent table is “Parent”, and your child table is “Child”, which has a reference to Parent.Id, via Child.ParentId… just delete ParentId from the Child entity in the designer.  This should leave just “Parent” as an association under the navigation properties of the Entity.

Thursday, 21 May 2009

Sony KD-28DX40 Integrated Digital TV and the Digital Switchover

I know this is nothing about software development, but hey…  We have the Sony TV above and it doesn’t work now the analogue signal has been turned off.  The south west of England switched over to digital on 20th May 2009.

I have spoken to Sony support and apparently, the TV is now out of date.  The software in the TV tuner is too old. Please see http://sonycic.custhelp.com/cgi-bin/sonycic.cfg/php/enduser/std_adp.php?p_faqid=369&p_created=1216219189

The only solution is to buy a new set-top-box for the TV or buy a new TV.

The TV is only 6 ½ years old!!  It’s already obsolete!  Thanks Sony.

More info: http://www.digitalspy.com/forums/showthread.php?p=30404270

Wednesday, 20 May 2009

Cannot start Microsoft Office outlook. Cannot open the outlook window

This morning I opened Microsoft Outlook 2007 and got the
error "Cannot start Microsoft Office outlook. Cannot open the outlook
window"

No system changes have been made, no anti-virus software is running.

The easiest way to remedy this is to go into Control Panel > Mail > Show Profiles…

  • Select your current profile (probably only one profile in the box)
  • Click “Copy” and type a new name
  • Set the new profile as the default profile under “Always use this profile” at the bottom
  • Close this and then restart Outlook 2007

Exactly one week ago, Automatic Updates installed “The 2007 Microsoft Office Suite Service Pack 2”, which I believe has introduced a pretty serious bug into MS Outlook 2007.

Thursday, 30 April 2009

ASP.NET Dynamic Data Entities Web Application: Hiding Primary Key / Identity Column

I have finally fallen in love; it took a while, but there you go.  ASP.NET Dynamic Data is very cool.  I’m currently doing a new project which has two main GUIs: the front-end, a standard ASP.NET MVC app which provides users with a beautiful interface; and the back-end, which just needs to provide Admins with a way to edit the data inside database tables.

ASP.NET dynamic data has been great because it simply supports the CRUD operations required.  If you’re using ADO.NET Entities (EDM) for your data model, then there’s one thing to bear in mind; the primary keys inside the database will be editable on the GUI.  This is not the case for LINQ-to-SQL data models.

However, I wanted to use the Entity Framework because it has shown itself to be more robust and configurable.

You can stop any columns from being shown on the GUI by decorating them with the [ScaffoldColumn(false)] attribute.

It’s not as simple as you’d think, because you cannot just add that attribute and be done.  The code is within the Designer file and is regenerated every time the EDM is refreshed, so what you need to do is create a partial class for the table containing the column you wish to hide.  Create a new class which is solely for metadata, declare the same column (and data type) inside that metadata class, and annotate it with [ScaffoldColumn(false)].  Then associate your partial class with the metadata class.

This is more easily demonstrated:

Here’s your generated code within the Designer module of the EDM

namespace MyDataLayer.Data
{
    [global::System.Data.Objects.DataClasses.EdmEntityTypeAttribute(NamespaceName = "MyDataLayer", Name = "Provider")]
    [global::System.Runtime.Serialization.DataContractAttribute(IsReference = true)]
    [global::System.Serializable()]
    public partial class Provider : global::System.Data.Objects.DataClasses.EntityObject
    {
        /// <summary>
        /// This is the primary key column that we don’t want on the GUI
        /// </summary>
        [global::System.Data.Objects.DataClasses.EdmScalarPropertyAttribute(EntityKeyProperty = true, IsNullable = false)]
        [global::System.Runtime.Serialization.DataMemberAttribute()]
        public int id
        {
            get
            {
                return this._id;
            }
            set
            {
                this.OnidChanging(value);
                this.ReportPropertyChanging("id");
                this._id = global::System.Data.Objects.DataClasses.StructuralObject.SetValidValue(value);
                this.ReportPropertyChanged("id");
                this.OnidChanged();
            }
        }
    }
}



Here’s the partial class that must be created



[MetadataType(typeof(ProviderMetadata))]
public partial class Provider {}



Here’s the metadata class which allows you to add more attributes to properties existing inside the code-gen’d Designer file.



public class ProviderMetadata
{
    [ScaffoldColumn(false)]
    public string id { get; set; }
}



So the Provider partial class inside the Designer code is extended by the new partial class.  The new partial class’s sole job is to add a new attribute, MetadataType, which points to the new metadata class, ProviderMetadata.



In LINQ-To-SQL, this is not necessary (apparently)! At least we have a way, with EDM, even if it’s a bit long-winded.

Thursday, 23 April 2009

Bulk schema change for stored procs in SQL Server

When you need to change the schema for SPs.

Remember this:

DECLARE
  @OldOwner sysname,
  @NewOwner sysname
 SET @OldOwner = 'oldOwner'
 SET @NewOwner = 'dbo'
DECLARE CURS CURSOR FOR
SELECT
  name
FROM
 sysobjects
WHERE
  type = 'p' 
AND
  uid = (SELECT uid FROM sysusers WHERE name = @OldOwner) 
AND 
  NOT name LIKE 'dt%' FOR READ ONLY
DECLARE @ProcName sysname
OPEN CURS
FETCH CURS INTO @ProcName
WHILE @@FETCH_STATUS = 0
BEGIN
    IF @@VERSION >= 'Microsoft SQL Server 2005'
    BEGIN
        EXEC('alter schema ' + @NewOwner + ' transfer ' + @OldOwner + '.' + @ProcName)
        exec('alter authorization on ' + @NewOwner + '.' + @ProcName + ' to schema owner')
    END
    ELSE
        EXEC('sp_changeobjectowner ''' + @OldOwner + '.' + @ProcName + ''', ''' + @NewOwner + '''')
    FETCH CURS INTO @ProcName
END
CLOSE CURS
DEALLOCATE CURS


Courtesy of Tim Page; http://www.geekzilla.co.uk/ViewC1D86C71-13E7-477B-94EC-FD01B524B6C3.htm 




Saturday, 24 January 2009

Invoking a button click event in ASP.NET when the button wasn’t clicked, but you want the code to run anyway.

I have just had a situation where I needed a button click to be invoked inside a control that is part of a COTS product.  On our login page, there are the username and password fields, along with a login LinkButton control.

When the login button is clicked, or the user hits ENTER, the login button click event handler fires as it should.  However, on some mobile browsers, and when using my Dell biometric reader for authentication, the button click is not performed because the browser does a direct submit on the form – hence, the login button click event handler doesn’t fire and the user cannot log in.

After a bit of research, it seems there’s no easy way to just invoke a click on a button.  People suggest refactoring the control so that it has a public “Login” method… however, this is a Community Server control, and I don’t really want to customise a product’s codebase for something so simple.

So the following does the trick via reflection:

void Page_PreRender(object sender, EventArgs args)
{
    if (Page.IsPostBack
        && Request.Form["__EVENTTARGET"] != loginButton.UniqueID)
        Login();
}

private void Login()
{
    // Manually invoke the login button's click handler
    Type t = loginButton.GetType();
    object[] p = new object[1];
    p[0] = EventArgs.Empty;
    MethodInfo m = t.GetMethod("OnClick",
        BindingFlags.NonPublic | BindingFlags.Instance);
    m.Invoke(loginButton, p);
}



Not the cleanest and most optimal solution, but it gets the job done!

Tuesday, 13 January 2009

Bulk Update with SQL Server

There appears to be no way in SQL Server to avoid an exclusive lock when updating multiple rows inside a table.  To ensure ACID, SQL Server prevents access to data that is being updated until the operation is complete.  I understand this; however, if the update operation spans multiple rows, surely there should be a way to ask SQL Server to treat each row update independently.

I have just written a cursor-based approach which performs a read with NOLOCK specified, and then the write is an update statement inside a WHILE construct.

DECLARE @LeadId INT
DECLARE @TerritoryId INT

DECLARE db_cursor CURSOR FOR
select l.leadid, pc.territoryid
from lead l with(nolock)
inner join postcode pc with(nolock)
    on lower(pc.code) = lower(substring(l.postcode, 1, len(pc.code)))
inner join territory t1 with(nolock) on t1.id = l.territoryid
inner join territory t2 with(nolock) on t2.id = pc.territoryid
where l.countryid = 1
and l.territoryid <> pc.territoryid

OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @LeadId, @TerritoryId

WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE Lead SET TerritoryId = @TerritoryId WHERE LeadId = @LeadId

    print @LeadId
    FETCH NEXT FROM db_cursor INTO @LeadId, @TerritoryId
END

CLOSE db_cursor
DEALLOCATE db_cursor


It works fine and doesn’t cause any problems with data consistency, because the ACID rules apply at the single-row-update level, not to the whole operation.  I’m surprised there’s not a way to ask SQL Server to do something like this using the WITH hints and the like.