Why does Visual Studio Intellisense stop highlighting the suggested completion?

This has happened to me quite a few times, and the answer is annoyingly simple, and annoyingly hard to find.

When I am typing code in Visual Studio, particularly in C#, I’m very used to the way Intellisense usually works. I type a couple of characters of a method name, for example, and Intellisense shows the completions it can find, with the most likely item highlighted. At that point, it will fill in the completion if I type Enter, or an opening parenthesis, or a dot (if it’s a property) – basically whatever symbol comes next after the name I want it to enter.

But sometimes, seemingly randomly, it stops highlighting the suggested completion. It’s still the selected item in the dropdown, but now it’s merely selected and not highlighted, and to use it I have to type Tab, or hit the down arrow to highlight it then type the next character after the name. It’s different, not what’s in my muscle memory for typing code. And I could never work out quite why it happened.

I think, previously, I’ve had to completely nuke my Visual Studio settings to get the old behaviour back, which obviously I don’t really want to do. I assumed this was some kind of bug.

But I think I’ve found out why it’s happening, and the VS command to revert it.

The setting is not found in Tools->Options->Intellisense where I was looking, which explains why I didn’t find it.

It’s actually in Edit -> Intellisense -> Toggle Completion Mode, a simple toggle of this behaviour. And it has a shortcut – Ctrl-Alt-Space. I think it’s the shortcut that’s been causing it, as that might possibly be a key combination that I can hit accidentally, although rarely enough for me not to notice.

But now, I hope I can remember where to find the option. Writing this entry is an effort to make it stick in my memory.

Thanks to Machet on Stack Overflow for pointing the way.




How Do I Add New Claims to the ClaimsPrincipal on Login with dotnet Core?

I’ve been playing with stuff for a forthcoming project. One of the requirements will be that different users will see different branding on the site depending on who they are. This information will be stored in the site database, but I was unhappy at the idea of having to pull that data out of the db on every request.

Is there, I wondered, a way to get the information out of the database during a login, and store it as a Claim, in the same way that, by default, the Name claim is stored in the authentication cookie?

It turned out that this information is remarkably hard to search for. There are a couple of articles based on .NET Core 1.0, but the docs for 2.1 weren’t giving me any joy.

So I made some guesses as to which classes might be involved by looking at the code on github, and looking for other projects that used them.

So I’ve worked out the basic principle, and it involves creating a custom class that implements the interface IUserClaimsPrincipalFactory&lt;TUser&gt;, where TUser is your app’s User class. Mine is ApplicationUser.

For convenience, to maintain the default behaviours, I inherited my custom class from UserClaimsPrincipalFactory&lt;TUser, TRole&gt;, which implements the interface and provides a useful method, GenerateClaimsAsync. I can use that to start building a ClaimsIdentity, then add my own claims.

Here’s my implementation, which, for this demo, merely creates a claim which is the user’s full name. It uses the User which has already been fetched from the database for the login process, so there’s no need for any further database access. If you do need to hit the database, you can simply use Dependency Injection in the class constructor to get a database context or a repository.

public class AppUserClaimsPrincipalFactory<TUser, TRole>
    : UserClaimsPrincipalFactory<TUser, TRole>
    where TUser : ApplicationUser
    where TRole : IdentityRole
{
    public AppUserClaimsPrincipalFactory(
        UserManager<TUser> manager,
        RoleManager<TRole> roleManager,
        IOptions<IdentityOptions> options)
        : base(manager, roleManager, options)
    {
    }

    public override async Task<ClaimsPrincipal> CreateAsync(TUser user)
    {
        // GenerateClaimsAsync builds the default ClaimsIdentity for the user
        var id = await GenerateClaimsAsync(user);
        if (user != null)
            id.AddClaim(new Claim("Zarquon", user.Name));
        return new ClaimsPrincipal(id);
    }
}
Once you’ve got your class implementation, you need to add it to the services for your app in your ConfigureServices, so that the Identity code will pick it up in preference to the default.

I added the following line in ConfigureServices:

services.AddScoped<IUserClaimsPrincipalFactory<ApplicationUser>, AppUserClaimsPrincipalFactory<ApplicationUser, IdentityRole>>();

This will cause an instance of my class to be injected into the Identity system when it asks for an instance of the factory, and called when the ClaimsPrincipal is built.

You can basically add any claims you want, and you should probably choose better names than Zarquon.
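As a quick sketch of how the added claim can be consumed later (this usage isn’t part of the original walkthrough, but FindFirst is the standard ClaimsPrincipal API), inside any controller action the User property holds the ClaimsPrincipal rebuilt from the cookie:

```csharp
// Inside a controller action: User is the ClaimsPrincipal
// deserialised from the authentication cookie on each request.
var fullName = User.FindFirst("Zarquon")?.Value;
```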

Entity Framework error: A second operation started on this context before a previous operation completed. Any instance members are not guaranteed to be thread safe.

Another very long title for this one, and the result of a subtle bug I was experiencing.

I have a .NET Core app, running Entity Framework Core, with a fairly simple controller as part of my API that just needed to add a new entry to a table. It worked fine for the first call, but subsequent calls were generating this exception:

A second operation started on this context before a previous operation completed. Any instance members are not guaranteed to be thread safe.

Here’s what the code looked like.

public async Task<bool> SaveNewItem([FromBody] Item item)
{
    var container = db.Containers.FirstOrDefaultAsync(
        c => c.Id == item.ContainerId);
    if (container == null) return false;
    await db.SaveChangesAsync();
    return true;
}
This is a simplified example but it has the error.

As I said, the first call worked, and added the item to the table, but any subsequent call would throw the exception shown above.

It’s a subtle error, and one that the compiler or IDE doesn’t give any hints about. The culprit is the very first line, where I’m intending to look for a container entity to which the item should be added.

The bug is a missing ‘await’ before the FirstOrDefaultAsync call. This code leaves container holding a Task rather than a container, and because that task is never awaited, the query can still be running against the context when SaveChangesAsync starts its own operation – hence the ‘second operation’ exception.

It’s an easy fix, though, just add the missing ‘await’ and everything works fine.
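For clarity, here’s the simplified action with the await restored – the only change from the buggy version above:

```csharp
public async Task<bool> SaveNewItem([FromBody] Item item)
{
    // 'await' unwraps the Task, so container is the entity (or null),
    // and the query completes before SaveChangesAsync starts.
    var container = await db.Containers.FirstOrDefaultAsync(
        c => c.Id == item.ContainerId);
    if (container == null) return false;
    await db.SaveChangesAsync();
    return true;
}
```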

It’s a bit subtle, though.

Updating Angular: Why do I get the error Prerendering failed because of error: TypeError: Object prototype may only be an Object or null: undefined at setPrototypeOf (native)

Do I win an award for the longest, most boring blog title in the world?

I’ve been doing some work with Angular recently, using Steve Sanderson’s excellent templates for ASP.NET Core.

It’s been interesting, since I’m not a client-side type, so a lot of the ceremony involved is a little alien to me. That mostly means that when things go wrong, it’s a bit harder to diagnose where I’ve made a mistake, but I’ve been doing OK.

Where I’ve been really struggling, for several days now, is in trying to update the Angular package versions I’ve been using.

In my innocence, I had wondered if all I needed to do was change the version numbers in the package.json to the new version. I needed to bump up the minor version to take advantage of improvements in the animation APIs.

But this naive approach left me with an application that was just broken, but broken in a way that gave almost no clue as to the reason.

The error that I was seeing, generated by the pre-rendering being done by the javascript services that are part of the template site, was this:

Exception: Call to Node module failed with error: Prerendering failed 
because of error: TypeError: Object prototype may only be an Object 
or null: undefined at setPrototypeOf (native)

This is about the most generic, information-free error you could imagine. Even with the associated stack trace, I didn’t have a clue what it meant, nor how to track it down.

But I did, just now, discover what I think is the core reason why the build was failing.

The reason is that the template does some clever stuff under the hood, to enable hot module replacement as you’re developing, but if node modules change, that’s not enough, and the key part of the build isn’t happening – webpack isn’t packaging up the code again.

I seem to have fixed the build by invoking webpack by hand, using the following two commands in order:

  • webpack --config webpack.config.vendor.js
  • webpack

This rebuilds the files that contain the code for the site, in the dist folder in ClientApp and wwwroot. They’re also not tracked by source control, so they can end up missing if you’re cloning a repo.

Important safety tip: If you’re updating version numbers in package.json, either do it with the solution closed, or let Visual Studio run the npm install. I’ve made the mistake of editing the packages, then trying to npm install from the command line, and that means there are two installs happening, and they end up fighting with each other.

I still don’t know why these webpack rebuilds weren’t happening with the default setup of the project. I’m assuming it’s only something that should happen when packages update, but it does have the nasty effect of leaving your project badly broken, such that even if you revert the package version changes, often your dist files are still really broken.

The hidden cost of Editor Templates

I’ve been doing some ASP.NET MVC work recently, for a new client, and I just had to work on a site that was performing badly. It was a simple site for signing up to an event, but the admin page was taking a very long time to appear. A bit of database profiling suggested a lot of traffic to the database, which was my first suspicion.

But I wanted to get a bit more visibility on how much work the server was doing, so I thought I’d take the opportunity to try out Glimpse.

A simple NuGet install was all it took, and the next time I fired up the site to test it, I got the Glimpse Heads-Up Display at the bottom of the page showing some headline figures.

[Image: Glimpse HUD]

The HTTP section shows how long the request took in total, and broken down into server time, wire time (how long it took for the page to be sent over the internet) and Client time (how long it took for the browser to render the page once it received it).

On my page, the Client figure was quite high, as the page contained a big table of customers which was then being turned into a paged table by jQuery. But the server figure was still high – over 60 seconds.

I quickly found a large database query that, it turned out, wasn’t actually used to generate the page, and was only used to cache data for an ‘Export’ button. Since the Export was a little-used feature, I decided it was better to leave the export data fetching for when the user actually requested it. This made a big dent in the time taken – and removed about 1330 database requests.

But the page was still taking a fair amount of time – about 16 seconds – and now it wasn’t the database that was causing the delay – total database time was well under a second.

The Client processing time was still quite long, but the server was still slow, so I looked at the Host section, which breaks down the time taken in the Controller Action and the View. For this page, the view was taking way longer than the controller, despite there being no real data processing in the view, so clearly something was causing a lot of work in the view.

This might have taken a long time to diagnose, but Glimpse showed its value again, with the ability to drill down into the details of each part of the MVC engine. When you click the ‘g’ logo on the right, it opens the details window, which has tabs for all the different things Glimpse logs.

[Image: Glimpse Views tab]

I wanted to see what the Views tab said – and sure enough, it showed a lot – there were hundreds of entries in that log for this single page, because the customer list was being rendered by an Editor Template.

Editor Templates are useful things. When you’ve got a complex class as your ViewModel, they allow you to write a template for the component objects, and the framework will do all the heavy lifting of calling the templates for the properties of the ViewModel.

The problem with my page is that I was using a template for over 500 objects in a collection, and what Glimpse was telling me was that the server was spending a lot of time finding the view for each of the objects, and it wasn’t caching this template.

Now, one of the reasons to use templates like this is so that your controller action can simply accept your model class as an argument, which keeps your code simple, but in this case, it was causing far too much server work.

Luckily, the controller that was handling the form data didn’t actually need the entire viewmodel – in fact, for my purposes, all I required was a single checkbox per item which contained the item ID.

This allowed me to remove the Html.EditorFor call that was generating all the template traffic, and put in my own loop.

At the same time, I simplified the code in the template. It was using Html.DisplayFor for textual fields, so I replaced those with just putting the literal fields in.

I had to change the signature of the controller action to simply accept an array of IDs, but that was the only information it was taking from the original model, so the rest of the code didn’t change. But it had a dramatic effect on the rendering time.
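As a rough sketch of the shape of that change (the property and action names here are hypothetical – the original view and controller aren’t shown in this post), the Html.EditorFor call gives way to a hand-written loop emitting one checkbox per row, and the action binds a plain array of IDs:

```cshtml
@* Replaces @Html.EditorFor(m => m.Customers) *@
@foreach (var customer in Model.Customers)
{
    <input type="checkbox" name="selectedIds" value="@customer.Id" />
    @customer.Name
}
```

```csharp
// The action now accepts just the checked IDs from the form post,
// instead of model-binding the whole viewmodel.
[HttpPost]
public ActionResult UpdateCustomers(int[] selectedIds)
{
    // ... process the selected customer IDs ...
    return RedirectToAction("Index");
}
```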

With these changes, the View rendering time plummeted, from around 8s to less than a second.

As a result of these changes, a page with over 500 customers in a paged table dropped from around 75 seconds down to around 4 seconds (half of which was the jQuery pagination happening).

Not a bad result, and it would probably have been much, much harder to track down the bottlenecks without a tool like Glimpse.

No Good Deed Goes Unpunished

I’ve had a fraught two weeks with my Windows Store developer account.

I signed in, checked the dashboard, but when I tried to look at reports for an app, I got an error page saying my account had been locked. There was no indication why.

After trying a few things, I got in touch with support. We had a bit of back and forth, before he told me that, to my horror, my account was cancelled, and removed from the system. However, it turned out that he had been looking at my old developer account, with a different email address (the one I had been using for correspondence) which had, indeed, lapsed earlier in the year.

Despite this original ticket having been raised on the correct dev account, I had to raise another ticket, and explain the problem again. Once more, the back and forth, but at least I got more information – that my account had not auto-renewed because the credit card was declined.

I was a bit confused by this, because I’d received, back in July, notification that my account had been successfully renewed. And I had not received any notifications that the credit card had been declined.

But, when I delved into the commerce section of the site, I saw the attempts at renewal, and the declines. OK, no problem, that credit card had indeed been replaced by my bank, so I added a new card to the account and tried to renew.

I got a cryptic ‘Service Error’ message. No matter what I tried, several different cards, as well as PayPal, all I could get was ‘Service Error’.

I was relaying all this to the hapless support person, who couldn’t seem to help at all.

Then, several days after I’d started this whole odyssey, I suddenly got an email saying my developer account had been cancelled.


If true, this meant my apps would be removed from the store, and I’d have to create a whole new account, and submit the apps again. Existing customers would not get any upgrades, and would not be able to download the app again if they needed to.

This was quite serious. The tone of my emails to support got a little more panicked, as I threw terms like ‘bad faith’ and ‘incompetence’ at him, hoping he’d escalate the issue. However, all I could get from him in return was the same message – my account was irrevocably removed, and I’d have to create a new one.

Until today when, after I replied to his last message, asking for progress on escalating the issue, he replied with all new information.

He said that ‘Recently the developer subscriptions do not expire.’ and suggested that everything looked fine on my account.

He was correct. All was well.

Because of this: New Dev Center lifetime registration and benefits program

The Windows Store dev program has just made accounts a one-off payment. So clearly my renewal difficulties, and the cryptic ‘Service Error’ I kept getting, were a result of the internal changes that were happening in the subscription system. I was just unlucky that my registration renewal happened to coincide with the switchover.

I wish they had been able to communicate this change to their support people, though. I had a very fraught two weeks, thinking that I’d have to go through the hassle of recreating an account, and resubmitting apps, along with possibly alienating existing customers.

It demonstrates some of the problems of outsourcing your support so far from the development teams. When I first started in software development, every developer in our (small) company had to do tech support regularly. It kept us aware of issues real users were having, and users got information straight from the people who knew how it all worked. If your support staff aren’t even on the same continent as your development staff, the quality of support is going to suffer.

Joss Whedon’s Impossible Screenwriting Seminar

So it was very late on Saturday night, somewhere around 5am. I was reading Twitter, which was quiet as usual for that time of night, when I saw a couple of intriguing tweets.

and this:

At first I almost dismissed it, thinking it would be an LA-based event. But that was a London postcode. Was it possible? I wasn’t the only one unsure:

Now, I was just about to go to bed, having been up most of the day, and half the night. But if Twitter was telling the truth, Joss Whedon was giving a talk on screenwriting in Central London. In five and a half hours.

I have a habit of overanalysing things, and finding all the reasons why something might be a bad idea. But this time, maybe because I was tired, the only real downside I could see was that I’d spend a morning in London, and since I’ve not been out much recently, I figured that wasn’t much of a bad thing.

So, I had a couple of hours of attempted sleep, hopped in the shower, hastily stuffed the ‘Once More With Feeling’ scriptbook into my camera bag, just in case, and walked down to the station.

For a 9:15 train on Sunday morning, it was packed. As bad as any commuter train I’ve been on. I guess the pre-Christmas sales are starting or something.

I arrived at the Berwick Street address in plenty of time, expecting to find… what? A massive scrum or people being turned away? A deserted lot? What I found was three or four people waiting uncertainly outside a shuttered shopfront (which at least was showing the ‘impossible’ branding). As the time approached, we shared our doubts that this was even a real thing. Perhaps we’d all got the wrong end of the stick.

But we hadn’t. A little while after the promised start time, someone arrived to open the shutters, and the growing group of people walked into the fairly Spartan interior, still unsure if this was where it was happening.

And then I saw Mr Whedon walking across the street, accompanied by Lily Cole, who had arranged the event. (Is it sad to note that I only recognised Lily from her appearance as a mermaid in an episode of Doctor Who? Then I’m sad.)

What followed was a very informal session. He started by going round the room getting us all to introduce ourselves and say what our experience was. I felt a little like an interloper, not being a professional writer, but I wasn’t alone, and at least I was able to truthfully say I’d written two screenplays in the last month (OK, they were both very short, but I’d finished them).

Joss admitted that he hadn’t done anything like this before, but his talk was interesting, funny, and very inspiring. He talked about the importance of knowing who all your characters are, and what drives them, even the second henchman on the right. He described getting executive notes as being nibbled to death by ducks. He talked about the importance of structure, and how he would create charts showing the story timeline, with colours indicating the purpose or feeling of every scene, so he can see that the pacing and structure of the story is working as he wants it.

Also, astonishingly, he revealed that he’d been having root canal at midnight, having broken a tooth on an olive pit. So we were even luckier to have him there than we’d thought.


After the talk, he was kind enough to sign some things, and of course, I got a photo with him.


When it all wound up, a group of us decamped to the nearest Starbucks, and the conversation went on until 4pm.

I’d like to thank Joss for giving his time when I’m sure he would much rather have been finishing the screenplay for the next Avengers movie. No exclusives on that score, I’m afraid, although he did tweet this:

I should also thank Lily Cole and impossible.com for making this happen in the first place. The ultra-last-minute feel made it an incredibly intimate event. I’m sure that had it been arranged way in advance, it would have been massively oversubscribed, and I’d never have been able to go. So thank goodness for late nights and Twitter.