Categories
Computers and Internet

Username for ‘https://dev.azure.com’

This is another quick aide-memoire for myself around git authentication with Azure DevOps.

Now and again when attempting to run “git pull” from the command line, I will receive the following prompt:

> Username for 'https://dev.azure.com':

This happens on a machine managed by a corporation, where applications are automatically installed and updated using Software Center. It seems that when Git is updated by the corporation, the update wipes out the credential.helper setting, so Git stops using the Git Credential Manager. Azure DevOps’ normal OAuth2 authentication requires Git Credential Manager.

Solution

Two ways to solve this:

  1. Reinstall Git, unchecking the “Only show new options” box, and make sure to select the Git Credential Manager option when you get to that page.
  2. git config --global credential.helper manager

The former solution sets credential.helper in the system-level git config held under the Program Files folder (on Windows). The latter sets it in the git config in the user profile (C:/Users/xxxx/.gitconfig), which should survive the value being reset by a Software Center installation.
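
Once the second option has been run, the user-level .gitconfig should contain something like the following fragment (other sections omitted):

[credential]
	helper = manager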

There is a third way to solve this, which is to use a Personal Access Token (PAT) for Azure DevOps. Once set up, your Azure DevOps login email and the PAT can be used as a simple username and password at the prompt. I’d recommend against this, as you’re avoiding two-factor authentication and providing a (limited) back door for performing operations on your account.

Categories
Azure Computers and Internet

Git & Azure DevOps AADSTS90036

This post falls into the set of posts to remind myself how I fixed an issue that probably no-one else will encounter.

When performing git operations at the command line or through JetBrains Rider (but not through Visual Studio) that involved remote operations against Azure DevOps, I was receiving an AADSTS90036 error after entering my username in a pop-up window.

There is very little useful information about the AADSTS90036 error on the web.

I don’t know my client’s AAD/Microsoft Entra ID setup exactly, but for what it’s worth, pretty much all network communication from my development machine needs to go through a proxy with a self-signed certificate. It’s also protected with Microsoft Authenticator. To what degree any of this is relevant I’m not sure.

The resolution for me was to reinstall the latest version of Git for Windows, 2.44.0 at the time of writing. When installing 2.44.0, it said it was uninstalling 2.17.0. Looking at the release notes and following the links through to the bundled Git Credential Manager releases, I can see fixes subsequent to 2.17.0 that are specific to Azure DevOps.

For example, Git Credential Manager 1.18.4, bundled with Git for Windows 2.20.1, includes “DevOps: Support AAD in MSA”. Given that my account has MSA as an authentication factor, this could have been the fix.

Entertainingly, it was the second time in 24 hours I’d installed 2.44.0. My client uses Microsoft Software Center to push software updates out. Looking at the Updates page, I see that Atlassian SourceTree 3.0.8 was installed. This was released in 2018, around the same time as Git for Windows 2.17.0, so my guess is that it is either reverting Git or changing the PATH to point to its embedded version of Git.

I don’t rate my chances of getting my client to upgrade the software versions it distributes, but I’ll give it a go!

Categories
Azure Computers and Internet

.NET Azure Functions – Isolated Process Update

Isolation – good for Azure Functions, bad for people

A short while ago I posted a summary of the current state of play of Azure Functions and .NET 5. In short, to run your function in .NET 5 you need to use the new Isolated Process. It’s so new that it’s missing a lot of the Azure Functions features, e.g. several bindings and Durable Functions. So Durable Functions users are stuck on .NET Core 3.1 until .NET 6 is supported in the In-process version.

Whilst all that is still true, there is now an update from the team on where they’re intending to go in future. The In-process version will end with the .NET 6 release and development will concentrate on bringing the Isolated Process up to feature parity in time for .NET 7. Read their post here. After that they are promising to support .NET versions as and when they are released.

This is best illustrated by reposting their roadmap from that link:

Azure Functions Roadmap

The Durable Functions support in the Isolated Process is said to arrive in “2022 or possibly earlier”. I look forward to it.

Categories
Azure

Azure Functions & .NET 5 – State of Play

.NET 5 was released on November 10th 2020 and contains features (specifically C# 9 support and thus record types) that we were keen to use in our products at Matchnet.

As we’re heavily based on Azure Functions, I was happy to see an announcement of a preview of Azure Functions on .NET 5. We went ahead and evaluated it, and here’s what we found:

  • No support for rich function types such as Durable Functions.
  • Visual Studio support is not there yet, but is promised soon.
  • The HTTP trigger interface uses a simplistic HTTP message abstraction (HttpMessageData instead of HttpRequest).

Durable Functions

This was the most disappointing issue, as we’re using Durable Functions in a few places. They’re a useful implementation of the saga/orchestration pattern.

The implementation of the .NET 5 Azure Functions is based around “an out-of-process model where a .NET worker process runs alongside the runtime”. On the face of it this sounds good, as it decouples us from the runtime’s version support. However, it means we also don’t have access to the extension code running in the host, which is what Durable Functions relies on. See this GitHub issue comment.

The team will be updating the host to the .NET 6 LTS, which will mean .NET-based functions being deployed in the conventional way, directly to the host and not out of process, so Durable Functions should work fine. Roll on November 2021.

Visual Studio

As of Visual Studio 16.8.4 there isn’t a project template for the out-of-process .NET 5 functions. Instead you create an ASP.NET Core project and build around that. The readme on the GitHub preview site has the details.

We had trouble getting local debugging running. The only way that seemed successful was to start the program and then attach to the “dotnet.exe” process, if you can find the right one. It’s a bit of a hassle, but I expect that will get sorted out with official Visual Studio support.

HTTP Trigger’s HttpMessageData

This is the class used to convey the payload for an HTTP trigger. It’s a wrapper on top of an RpcHttp class, part of a .NET implementation of gRPC. As such it has a very simple interface and doesn’t provide the full suite of normal HTTP capabilities, including cookies and attaching files, both of which are possible in the standard Azure Functions HTTP trigger via the HttpRequest class.
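
For contrast, here’s a minimal sketch of the conventional in-process HTTP trigger, where the full ASP.NET Core HttpRequest is available (the function name and cookie name are just illustrative):

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class HelloFunction
{
    [FunctionName("Hello")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequest req,
        ILogger log)
    {
        // Cookies are directly available on HttpRequest; HttpMessageData has no equivalent.
        var sessionCookie = req.Cookies["session"];
        return new OkObjectResult($"Cookie value: {sessionCookie}");
    }
}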

Conclusion

We’re going to skip it for now, though we’ll keep an eye on where it goes. It doesn’t sound to me like the out-of-process model is where the Functions team’s heart is at right now for .NET functions, so I hold out more hope for the upgrade of the host to .NET 6.

Categories
Computers and Internet

Referencing a .NET assembly in a compile time safe manner

If you need to provide a System.Reflection.Assembly instance to an API [1], there are several mechanisms for doing so. They roughly split into two camps:

  • Run-time assembly loading
  • Assemblies known at compile time

Run-time assembly loading covers scenarios such as a plug-in architecture, where the code being referenced cannot be known at compile time.

For the other camp, if we know exactly which assembly we need to reference at compile time we have a couple of options. We can use the name of the assembly as a string like so:

Assembly.Load("MyCompany.Util");

(Note that if the assembly is already loaded the runtime will just return the loaded instance of that assembly and won’t attempt to load it again.)

Alternatively we can use a type from that assembly like so:

Assembly.GetAssembly(typeof(MyCompany.Util.AnyOldClass));

The problem with the assembly name string approach is that there is no compile-time checking. The typeof approach allows for compile-time checking but introduces an artificial dependency in the calling code on a class that it needs only for the purposes of getting the assembly. This calling code is then subject to any renaming or removal of that class, when in reality it cares only about the assembly and not the type.

The solution I’ve gone for is to create a static, empty class with a similar name to the assembly in the root of the default namespace of the assembly I wish to reference and use this in the typeof:

using MyCompany.Util;
/* … */
Assembly.GetAssembly(typeof(MyCompanyUtil));

This provides us with a compiler error if the assembly reference is dropped or the assembly is renamed. It will take part in any necessary refactoring operations and is not dependent on irrelevant types.
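
For completeness, the marker class itself is trivially small. A sketch, using the same illustrative names as the snippet above:

namespace MyCompany.Util
{
    // Empty marker class whose only purpose is to identify this assembly
    // in typeof() expressions from referencing projects.
    public static class MyCompanyUtil
    {
    }
}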

 

[1] Examples include Autofac’s MVC and WebApi integration: ContainerBuilder.RegisterControllers & ContainerBuilder.RegisterApiControllers

Categories
Computers and Internet

Entity Framework Performance Tip for Creating Entities

This tip is applicable if you’re using Entity Framework Code First with dynamic proxies and you have a lot of objects attached to your context, for whatever reason (e.g. within a batch job).

The first thing to note is that if you have a lot of objects attached to your context, you want to avoid DetectChanges being called on the context unless absolutely necessary. DetectChanges compares the original state of each tracked object to its current state and uses this information for a couple of purposes: marking entities as added/changed/deleted, and fixing up relationships such as bi-directional navigation properties and foreign key columns.

Arthur Vickers has an excellent blog series explaining this all very well: http://blog.oneunicorn.com/2012/03/10/secrets-of-detectchanges-part-1-what-does-detectchanges-do/

DetectChanges is obviously necessary when SaveChanges is called, but it’s also called whenever one of these operations is called:

  • DbSet.Find
  • DbSet.Local
  • DbSet.Remove
  • DbSet.Add
  • DbSet.Attach
  • DbContext.GetValidationErrors
  • DbContext.Entry
  • DbChangeTracker.Entries
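
This matters most in loops. With many entities already tracked, something as innocent as the following becomes increasingly expensive, because each Add call triggers a full DetectChanges scan over everything attached to the context (a sketch, reusing the MyEntities set from the example further down):

for (var i = 0; i < 10000; i++)
{
    // Every Add here re-runs DetectChanges across all tracked entities.
    context.MyEntities.Add(new MyEntity());
}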

DetectChanges calls can be avoided, though, by turning AutoDetectChangesEnabled off. Check out this gist:

public sealed class NoChangeTracking : IDisposable
{
    private readonly DbContext _dbContext;
    private readonly bool _initialAutoDetectChangesValue;

    public NoChangeTracking(DbContext dbContext)
    {
        if (dbContext == null) throw new ArgumentNullException("dbContext");
        _dbContext = dbContext;
        _initialAutoDetectChangesValue = dbContext.Configuration.AutoDetectChangesEnabled;
        SetChangeDetection(false);
    }

    [System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Design", "CA1063:ImplementIDisposableCorrectly")]
    public void Dispose()
    {
        SetChangeDetection(_initialAutoDetectChangesValue);
    }

    private void SetChangeDetection(bool setting)
    {
        _dbContext.Configuration.AutoDetectChangesEnabled = setting;
    }
}

With this class you can write code such as:


using(new NoChangeTracking(context))
{
  context.MyEntities.Add(new MyEntity());
}

… and DetectChanges will not be called. (You could even just turn off automatic change detection globally, but you would then need to remember to call DetectChanges manually before SaveChanges.)
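
For completeness, the global variant mentioned above looks something like this (a sketch; you then own the responsibility of calling DetectChanges before saving):

// Turn automatic change detection off for the lifetime of the context.
context.Configuration.AutoDetectChangesEnabled = false;

// ... add or modify many entities ...

// Run change detection once, just before saving.
context.ChangeTracker.DetectChanges();
context.SaveChanges();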

Either way, this technique works okay, but it can result in problems if you are relying on two-way navigation properties. For example:


var parent = new Parent();
var child = new Child();
parent.Children.Add(child);
using(new NoChangeTracking(context))
{
  context.Parents.Add(parent);
}
Debug.Write(child.Parent.Id); // Null reference exception

The child.Parent navigation property will not have been set, as we set AutoDetectChangesEnabled to false before we performed the DbSet.Add. We could choose not to turn it off, but that would bring back the performance issues. We could also explicitly set both the parent and child navigation properties each time we change one end, but that’s extra code and it’s easy to forget to do.
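
That manual fix-up is only a couple of lines, but it has to be remembered everywhere a relationship is changed:

// Both ends of the relationship maintained by hand.
parent.Children.Add(child);
child.Parent = parent;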

With dynamic proxies enabled, there’s an easier way. Instead of creating the entities by using the new operator, you create a dynamic proxy by using the DbSet.Create method. This dynamic proxy contains code to intercept alterations to each navigation property and ensure that any reciprocal navigation property on the target object is updated. For example, when parent.Children.Add(child) is called, the child.Parent property is automatically populated.

Here’s that code again but with the correct proxy initialization:


var parent = context.Parents.Create();
var child = context.Children.Create();
parent.Children.Add(child);
using(new NoChangeTracking(context))
{
  context.Parents.Add(parent);
}
Debug.Write(child.Parent.Id); // No null reference!

That’s it. There are many other performance considerations, but combining switching off AutoDetectChangesEnabled with proper use of dynamic proxies can get us a long way.

Categories
Computers and Internet laZook

Tightening Injected Dependencies on Entity Framework

Dependency injection as a pattern provides a lot of useful nudges to get you to produce easily readable and maintainable code. One way in which it does this is to make dependencies explicit, so you can see exactly what services a class requires. When using Entity Framework, most people pass the whole context through as a dependency. This post explores an alternative approach that provides more clarity about the client code’s use of the context.

We’ve been coding with Entity Framework here at laZook for a while now, using the code first workflow. We’re using Autofac as our dependency injection framework. We inject dependencies into the constructor so that there is one clear place to view a class’s dependencies.

We used to inject the whole DbContext-derived class into each type that needed to do anything with the context, e.g. add entities or save changes. This was fairly easy to do, but led to some confusion. Let’s look at an example program using this technique:

public class Coordinator
{
    private readonly IMyContext _myContext;
    private readonly WidgetGenerator _widgetGenerator;
    private readonly WotsitGenerator _wotsitGenerator;

    public Coordinator(
        IMyContext myContext,
        WidgetGenerator widgetGenerator,
        WotsitGenerator wotsitGenerator)
    {
        _myContext = myContext;
        _widgetGenerator = widgetGenerator;
        _wotsitGenerator = wotsitGenerator;
    }

    public void DoStuff()
    {
        _widgetGenerator.GenerateWidgets();
        _wotsitGenerator.GenerateWotsits();
        _myContext.SaveChanges();
    }
}

public class WotsitGenerator
{
    private readonly IMyContext _myContext;

    public WotsitGenerator(IMyContext myContext)
    {
        _myContext = myContext;
    }

    public void GenerateWotsits()
    {
        if (DateTime.Now.DayOfWeek == DayOfWeek.Friday)
        {
            throw new Exception("No wotsit generation on Fridays!");
        }
        _myContext.Wotsits.Add(new Wotsit());
    }
}

public class WidgetGenerator
{
    private readonly IMyContext _myContext;

    public WidgetGenerator(IMyContext myContext)
    {
        _myContext = myContext;
    }

    public void GenerateWidgets()
    {
        _myContext.Widgets.Add(new Widget());
        _myContext.SaveChanges();
    }
}

public class MyContext : DbContext, IMyContext
{
    public IDbSet<Widget> Widgets { get; set; }
    public IDbSet<Wotsit> Wotsits { get; set; }
}

public interface IMyContext
{
    IDbSet<Widget> Widgets { get; set; }
    IDbSet<Wotsit> Wotsits { get; set; }
    int SaveChanges();
}
class Program
{
    static void Main(string[] args)
    {
        using (var container = CreateContainer())
        {
            var coordinator = container.Resolve<Coordinator>();
            coordinator.DoStuff();
        }
    }

    private static IContainer CreateContainer()
    {
        var containerBuilder = new ContainerBuilder();
        containerBuilder.RegisterType<MyContext>().AsImplementedInterfaces().InstancePerLifetimeScope();
        containerBuilder.RegisterType<Coordinator>();
        containerBuilder.RegisterType<WotsitGenerator>();
        containerBuilder.RegisterType<WidgetGenerator>();
        return containerBuilder.Build();
    }
}

In this simple example, the Coordinator class calls upon a couple of worker classes and then persists any changes. The worker classes add entities to their respective DbSets.

There is a problem with the code, though. If it’s a Friday, no Wotsits will be made. The code will exit due to the exception. If you were looking only at the Coordinator and WotsitGenerator code, you’d be forgiven for thinking that there was a single unit of work and it would not be committed. It looks like the Coordinator is responsible for the SaveChanges call. However, a closer look at the WidgetGenerator reveals a call to SaveChanges after it has created a widget.

It’s a simple example, but where SaveChanges is buried in larger code it can be difficult to work out what is being committed and what isn’t.

What to do about this? One answer is to ensure that SaveChanges is only ever called at the very top level, as the last action before the end of the program (in this example) or page request / job execution / button click handler. This works, but is somewhat limiting. What if you want to perform multiple SaveChanges calls to checkpoint during a long-running operation? What if the success or failure of one SaveChanges determines whether or not another unit of work is embarked upon?

We need to make it clear who owns the responsibility for initiating completion of the unit of work.

The solution we’ve come up with is to create an ICompleteUnitOfWork interface that contains the SaveChanges method and have the context implement this interface. This interface is then declared as a dependency for the class that has the responsibility of calling SaveChanges. This allows us to glance at a class constructor and see whether that class owns the responsibility for completing the unit of work. Elsewhere we inject IDbSet<TEntity> instances. This helps us see which entities (or at least which aggregate roots) a class is involved in reading or editing.

Here’s the same code with the new dependencies and the errant SaveChanges in WidgetGenerator removed. We can clearly tell that WidgetGenerator does not call SaveChanges by seeing that it only takes a dependency on IDbSet<Widget>.

public class Coordinator
{
    private readonly ICompleteUnitOfWork _unitOfWorkCompleter;
    private readonly WidgetGenerator _widgetGenerator;
    private readonly WotsitGenerator _wotsitGenerator;

    public Coordinator(
        ICompleteUnitOfWork unitOfWorkCompleter,
        WidgetGenerator widgetGenerator,
        WotsitGenerator wotsitGenerator)
    {
        _unitOfWorkCompleter = unitOfWorkCompleter;
        _widgetGenerator = widgetGenerator;
        _wotsitGenerator = wotsitGenerator;
    }

    public void DoStuff()
    {
        _widgetGenerator.GenerateWidgets();
        _wotsitGenerator.GenerateWotsits();
        _unitOfWorkCompleter.SaveChanges();
    }
}

public class WotsitGenerator
{
    private readonly IDbSet<Wotsit> _wotsitDbSet;

    public WotsitGenerator(IDbSet<Wotsit> wotsitDbSet)
    {
        _wotsitDbSet = wotsitDbSet;
    }

    public void GenerateWotsits()
    {
        if (DateTime.Now.DayOfWeek == DayOfWeek.Friday)
        {
            throw new Exception("No wotsit generation on Fridays!");
        }
        _wotsitDbSet.Add(new Wotsit());
    }
}

public class WidgetGenerator
{
    private readonly IDbSet<Widget> _widgetDbSet;

    public WidgetGenerator(IDbSet<Widget> widgetDbSet)
    {
        _widgetDbSet = widgetDbSet;
    }

    public void GenerateWidgets()
    {
        _widgetDbSet.Add(new Widget());
    }
}

public class MyContext : DbContext, ICompleteUnitOfWork
{
    public IDbSet<Widget> Widgets { get; set; }
    public IDbSet<Wotsit> Wotsits { get; set; }
}

public interface ICompleteUnitOfWork
{
    int SaveChanges();
}
class Program
{
    static void Main(string[] args)
    {
        using (var container = CreateContainer())
        {
            var coordinator = container.Resolve<Coordinator>();
            coordinator.DoStuff();
        }
    }

    private static IContainer CreateContainer()
    {
        var containerBuilder = new ContainerBuilder();
        containerBuilder.RegisterType<MyContext>().AsSelf().AsImplementedInterfaces().InstancePerLifetimeScope();
        containerBuilder.Register(c => c.Resolve<MyContext>().Widgets);
        containerBuilder.Register(c => c.Resolve<MyContext>().Wotsits);
        containerBuilder.RegisterType<Coordinator>();
        containerBuilder.RegisterType<WotsitGenerator>();
        containerBuilder.RegisterType<WidgetGenerator>();
        return containerBuilder.Build();
    }
}

What are the problems with this approach?

There are some usage patterns of Entity Framework that this approach doesn’t support too well, but it can be extended to do so. For example, there is no way to get at the DbContext.Entry method for attaching objects and setting their state. You could introduce another interface for this, say IManageUnitOfWorkObjectState, but it feels clunky.
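
Such an interface might look something like this sketch (exposing the EF6 Entry method; not something we actually built):

public interface IManageUnitOfWorkObjectState
{
    // Exposes DbContext.Entry so callers can attach entities and set their state.
    DbEntityEntry<TEntity> Entry<TEntity>(TEntity entity) where TEntity : class;
}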

Also, injecting the IDbSets is a good first step, but I actually prefer creating some repositories on top of the IDbSets as it better allows for caching and encapsulation of common queries.
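
As an illustration of the sort of thing I mean (the shape is hypothetical, reusing the Widget entity from the example above), a small repository wrapping an IDbSet might look like:

public interface IWidgetRepository
{
    Widget GetById(int id);
    void Add(Widget widget);
}

public class WidgetRepository : IWidgetRepository
{
    private readonly IDbSet<Widget> _widgets;

    public WidgetRepository(IDbSet<Widget> widgets)
    {
        _widgets = widgets;
    }

    public Widget GetById(int id)
    {
        // A natural place to add caching or to encapsulate common queries.
        return _widgets.Find(id);
    }

    public void Add(Widget widget)
    {
        _widgets.Add(widget);
    }
}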

I’m interested in any development suggestions or criticisms of the ideas. Let me know here or on Twitter.

Categories
Computers and Internet laZook Personal

laZook Microstore

Six months ago I joined a team working on a new start-up called laZook. It’s an online distributor, providing brands with a single route to many existing eCommerce channels and some that we’re helping to develop from scratch. We handle the listing, fulfilment and payment for these brands’ products.

We have various features at varying levels of maturity. One of those is a “microstore” that provides a drop-down checkout on a website. This is already in use on several blogs and magazine sites. It needs more work but, as it stands, it provides a publisher with a choice of thousands of products to sell on their site. They insert a JavaScript snippet on their page and their site is now eCommerce-enabled. For each purchase they receive commission, much like in a traditional affiliate scheme. One advantage here is that the purchaser is not taken off-site for the checkout flow.

We’re currently using PayPal for the payment processing, but they don’t seem to have moved with the times very much and do force us into some poor user experience during checkout. As a result we’ll likely be moving to Stripe at some point soon. They have some modern APIs and can offer a lot more control over the experience.

If you want to see the Microstore in action (even if it is a little rough around the edges), check out the Ex Cellar Wine Club blog. Please do give us some feedback on what you think!

Categories
Computers and Internet

Quartz.net persistent job store LAST_MODIFIED_TIME issue

I’m playing around with Quartz.net and adding support for a persistent job store via the ADO.NET Job Store. As per the recommendation, I’m instructing the job store to persist job parameters in plain text rather than BLOBs, using the configuration:

<add key="quartz.jobStore.useProperties" value="true"/>
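
For reference, the equivalent setting can also be supplied programmatically when constructing the scheduler factory; a minimal sketch with the other job store settings omitted:

var properties = new NameValueCollection
{
    // Persist job data as name/value strings rather than a serialized BLOB.
    ["quartz.jobStore.useProperties"] = "true"
};
var schedulerFactory = new StdSchedulerFactory(properties);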

Unfortunately, when triggering a simple job which has no explicit job data map, I receive this error:

JobDataMap values must be Strings when the 'useProperties' property is set. Key of offending value: LAST_MODIFIED_TIME

When looking in the debugger at the JobDataMap object provided to the job I scheduled, there is no LAST_MODIFIED_TIME present. Digging a bit deeper, it seems that there is another job running called FileScanJob, scheduled by the XMLSchedulingDataProcessorPlugin (used to read the job and trigger configuration from an XML file). This job adds a LAST_MODIFIED_TIME entry to its JobDataMap during job execution, and its value is of type DateTime rather than string.

Why is this raising an exception? It comes down to the implementation of the StdAdoDelegate class. When the quartz.jobStore.useProperties configuration value is set to true, it will deliberately refuse to write to the job store database any job data that does not use strings for both key and value. Despite this restriction, it still uses binary serialization to store the data after this check (in the form of a NameValueCollection).

To come back to the original reason for setting this property, the tutorial advises using it to avoid serializing complex types and getting into versioning issues after type upgrades. I’d contest that this objective could be achieved simply by supporting all the primitive .NET types whose serialization is not likely to change. The change to StdAdoDelegate would be to perform a type validation of each name/value pair, ensuring only simple types are in use when quartz.jobStore.useProperties is true, and to convert the data to a System.Collections.Hashtable so that changes to the JobDataMap class in Quartz can be made without causing serialization issues.
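
A rough sketch of the kind of validation I have in mind (purely illustrative, not actual Quartz.NET code):

private static readonly Type[] AllowedTypes =
{
    typeof(string), typeof(int), typeof(long), typeof(bool),
    typeof(double), typeof(decimal), typeof(DateTime), typeof(Guid)
};

private static void ValidateSimpleTypesOnly(IDictionary<string, object> jobDataMap)
{
    foreach (var pair in jobDataMap)
    {
        if (pair.Value != null && Array.IndexOf(AllowedTypes, pair.Value.GetType()) < 0)
        {
            throw new InvalidOperationException(
                "JobDataMap values must be simple types when 'useProperties' is set. " +
                "Key of offending value: " + pair.Key);
        }
    }
}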

Another solution could be to have an XmlAdoDelegate that used XML serialization instead of binary serialization.

Maybe I am missing some extra design constraint here. I’ve posted the issue to GitHub (here) to see if my thoughts can be easily shot down.

Categories
Computers and Internet SQL Server

Installing Balanced Data Distributor on SQL Server 2008 R2 SP1

Edit: This post refers to an older version of the BDD installer. As per JasonH’s comment below, Microsoft has released a new installation package which should hopefully fix the installation bug. It can be found here: http://www.microsoft.com/en-us/download/details.aspx?id=4123

Original post:

Microsoft’s Balanced Data Distributor does not install on top of SQL Server 2008 R2 SP1. It installs fine without SP1 but otherwise comes up with the error:

“The installation is not successful. Check the following prerequisites: 1. Either Integration Services or BIDS has to be installed. 2. The version of these components has to be either SQL Server 2008 SP2 (or future SPs) or SQL Server 2008 R2 (or future SPs)”

In my case all the pre-requisites were met. As per this thread, I used Process Monitor to examine the registry keys it was checking for the version numbers. I then modified the keys to pretend I was running SQL Server 2008 R2 RTM, ran the BDD installer again (successfully), and modified the keys back to their original values.

Warning: This is not best practice advice! If you do the same as I did and your production system is rendered unusable, this will be entirely your fault. I did this on a throwaway development environment to save time uninstalling SP1 and reinstalling it.

The keys I altered were all in the following path:

  • HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\100\

The specific keys and the values I set them to were:

  • DTS\Setup\SP = 0
  • DTS\Setup\Version = 10.50.1600.1
  • BIDS\Setup\SP = 0
  • BIDS\Setup\Version = 10.50.1600.1

When setting up a production system, please ensure you apply the BDD installation before SP1. Don’t use this technique, which will probably render your environment unsupportable!