Why does Visual Studio Intellisense stop highlighting the suggested completion?

This has happened to me quite a few times, and the answer is annoyingly simple, and annoyingly hard to find.

When I am typing code in Visual Studio, particularly in C#, I’m very used to the way Intellisense usually works. I type a couple of characters of a method name, for example, and Intellisense shows the completions it can find, with the most likely item highlighted. At that point, it will fill in the completion if I type Enter, or an opening parenthesis, or a dot (if it’s a property) – basically whatever the next symbol is after the name I want it to enter.

But sometimes, seemingly randomly, it stops highlighting the suggested completion. It’s still the selected item in the dropdown, but now it’s merely selected and not highlighted, and to use it I have to type Tab, or hit the down arrow to highlight it then type the next character after the name. It’s different, not what’s in my muscle memory for typing code. And I could never work out quite why it happened.

I think, previously, I’ve had to completely nuke my Visual Studio settings to get the old behaviour back, which obviously I don’t really want to do. I assumed this was some kind of bug.

But I think I’ve found out why it’s happening, and the VS command to revert it.

The setting is not found in Tools->Options->Intellisense where I was looking, which explains why I didn’t find it.

It’s actually in Edit -> Intellisense -> Toggle Completion Mode, a simple toggle of this behaviour. And it has a shortcut – Ctrl-Alt-Space. I think it’s the shortcut that’s been causing it, as that might possibly be a key combination that I can hit accidentally, although rarely enough for me not to notice.

But now, I hope I can remember where to find the option. Writing this entry is an effort to make it stick in my memory.

Thanks to Machet on Stack Overflow for pointing the way.

https://stackoverflow.com/a/26788848/6483

 

How Do I Add New Claims to the ClaimsPrincipal on Login with dotnet Core?

I’ve been playing with stuff for a forthcoming project. One of the requirements will be that different users will see different branding on the site depending on who they are. This information will be stored in the site database, but I was unhappy at the idea of having to pull that data out of the db on every request.

Is there, I wondered, a way to get the information out of the database during a login, and store it as a Claim, in the same way that, by default, the Name claim is stored in the Authentication cookie?

It turned out that this information is remarkably hard to search for. There are a couple of articles based on .NET Core 1.0, but the docs for 2.1 weren’t giving me any joy.

So I made some guesses as to which classes might be involved by looking at the code on github, and looking for other projects that used them.

So I’ve worked out the basic principle, and it involves creating a custom class that implements the interface

IUserClaimsPrincipalFactory<TUser>

where TUser is your app’s User class. Mine is ApplicationUser.

For convenience, to maintain the default behaviours, I inherited my custom class from

UserClaimsPrincipalFactory<TUser,TRole>

which implements the interface and provides a useful method, GenerateClaimsAsync. I can use that to start building a ClaimsIdentity, then add my own claims.

Here’s my implementation, which, for this demo, merely creates a claim which is the user’s full name. It uses the User which has already been fetched from the database for the login process, so there’s no need for any further database access. If you do need to hit the database, you can simply use Dependency Injection in the class constructor to get a database context or a repository.

public class AppUserClaimsPrincipalFactory<TUser, TRole>
    : UserClaimsPrincipalFactory<TUser, TRole>
    where TUser : ApplicationUser
    where TRole : IdentityRole
{
    public AppUserClaimsPrincipalFactory(
        UserManager<TUser> manager,
        RoleManager<TRole> rolemanager,
        IOptions<IdentityOptions> options)
        : base(manager, rolemanager, options)
    {
    }

    public override async Task<ClaimsPrincipal> CreateAsync(TUser user)
    {
        var id = await GenerateClaimsAsync(user);
        if (user != null)
        {
            id.AddClaim(new Claim("Zarquon", user.Name));
        }
        return new ClaimsPrincipal(id);
    }
}

Once you’ve got your class implementation, you need to add it to the services for your app in your ConfigureServices, so that the Identity code will pick it up in preference for the default.

I added the following line in ConfigureServices:

services.AddScoped<IUserClaimsPrincipalFactory<ApplicationUser>, AppUserClaimsPrincipalFactory<ApplicationUser, IdentityRole>>();

This will cause an instance of my class to be injected into the Identity system when it asks for an instance of the factory, and called when the ClaimsPrincipal is built.
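Reading the claim back later is then cheap, because the ClaimsPrincipal is rebuilt from the cookie on every request. As a sketch (the claim type matches the demo factory above; the action name is just for illustration):

```csharp
// In any MVC controller action, User is the current ClaimsPrincipal,
// deserialised from the auth cookie – no database access required.
public IActionResult Index()
{
    // FindFirst returns null if the claim isn't present
    var fullName = User.FindFirst("Zarquon")?.Value ?? "(unknown)";
    return Content($"Hello, {fullName}");
}
```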

You can basically add any claims you want, and you should probably choose better names than Zarquon.

Entity Framework error: A second operation started on this context before a previous operation completed. Any instance members are not guaranteed to be thread safe.

Another very long title for this one, and the result of a subtle bug I was experiencing.

I have a DotNetCore app, running Entity Framework Core, and I had a fairly simple controller as part of my API that just needed to add a new entry to a table, and it worked fine for the first call, but subsequent calls were generating this exception:

A second operation started on this context before a previous operation completed. Any instance members are not guaranteed to be thread safe.

Here’s what the code looked like.

public async Task<bool> SaveNewItem([FromBody] Item item)
{
    var container = db.Containers.FirstOrDefaultAsync(
                c=>c.Id == item.ContainerId
                );
    if (container == null) return false;
    db.Items.Add(item);
    await db.SaveChangesAsync();
    return true;
}

This is a simplified example but it has the error.

As I said, the first call worked, and added the item to the table, but any subsequent call would throw the exception shown above.

It’s a subtle error, and one that the compiler or IDE doesn’t give any hints about. The culprit is the very first line, where I’m intending to look for a container entity to which the item should be added.

The bug is a missing ‘await’ before the FirstOrDefaultAsync call. As written, container holds a Task<Container> rather than a Container, so the null check always passes, and because the task is never awaited, the query may still be running on the context when SaveChangesAsync is called – hence the ‘second operation’ exception.

It’s an easy fix, though, just add the missing ‘await’ and everything works fine.
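For completeness, here’s the corrected version of the method above – the only change is the added await:

```csharp
public async Task<bool> SaveNewItem([FromBody] Item item)
{
    // await completes the query before we touch the context again,
    // and gives us a Container (or null) instead of a Task<Container>
    var container = await db.Containers.FirstOrDefaultAsync(
                c => c.Id == item.ContainerId
                );
    if (container == null) return false;
    db.Items.Add(item);
    await db.SaveChangesAsync();
    return true;
}
```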

It’s a bit subtle, though.

Updating Angular: Why do I get the error Prerendering failed because of error: TypeError: Object prototype may only be an Object or null: undefined at setPrototypeOf (native)

Do I win an award for the longest, most boring blog title in the world?

I’ve been doing some work with Angular recently, using Steve Sanderson’s excellent templates for ASP.NET Core.

It’s been interesting, since I’m not a client-side type, so a lot of the ceremony involved is a little alien to me. That mostly means that when things go wrong, it’s a bit harder to diagnose where I’ve made a mistake, but I’ve been doing OK.

Where I’ve been really struggling, for several days now, is in trying to update the Angular package versions I’ve been using.

In my innocence, I had wondered if all I needed to do was change the version numbers in the package.json to the new version. I needed to bump up the minor version to take advantage of improvements in the animation APIs.

But this naive approach left me with an application that was just broken, but broken in a way that gave almost no clue as to the reason.

The error that I was seeing, generated by the pre-rendering done by the JavaScriptServices components that are part of the template site, was this:

Exception: Call to Node module failed with error: Prerendering failed 
because of error: TypeError: Object prototype may only be an Object 
or null: undefined at setPrototypeOf (native)

This is about the most generic, information-free error you could imagine. Even with the associated stack trace, I didn’t have a clue what it meant, nor how to track it down.

But I did, just now, discover what I think is the core reason why the build was failing.

The reason is that the template does some clever stuff under the hood, to enable hot module replacement as you’re developing, but if node modules change, that’s not enough, and the key part of the build isn’t happening – webpack isn’t packaging up the code again.

I seem to have fixed the build by invoking webpack by hand using the following two commands:

  • webpack --config webpack.config.vendor.js
  • webpack

This rebuilds the files that contain the code for the site, in the dist folder in ClientApp and wwwroot. They’re also not tracked by source control, so they can end up missing if you’re cloning a repo.

Important safety tip: If you’re updating version numbers in package.json, either do it with the solution closed, or let Visual Studio run the npm install. I’ve made the mistake of editing the packages, then trying to npm install from the command line, and that means there are two installs happening, and they end up fighting with each other.

I still don’t know why these webpack rebuilds weren’t happening with the default setup of the project. I’m assuming it’s only something that should happen when packages update, but it does have the nasty effect of leaving your project badly broken, such that even if you revert the package version changes, often your dist files are still really broken.

The hidden cost of Editor Templates

I’ve been doing some ASP.NET MVC work recently, for a new client, and I just had to work on a site that was performing badly. It was a simple site for signing up to an event, but the admin page was taking a very long time to appear. A bit of database profiling suggested a lot of traffic to the database, which was my first suspicion.

But I wanted to get a bit more visibility on how much work the server was doing, so I thought I’d take the opportunity to try out Glimpse.

A simple NuGet install was all it took, and the next time I fired up the site to test it, I got the Glimpse Heads-Up Display at the bottom of the page showing some headline figures.

[Screenshot: the Glimpse HUD]

The HTTP section shows how long the request took in total, and broken down into server time, wire time (how long it took for the page to be sent over the internet) and Client time (how long it took for the browser to render the page once it received it).

On my page, the Client figure was quite high, as the page contained a big table of customers which was then being turned into a paged table by jQuery. But the server figure was still high – over 60 seconds.

I quickly found a large database query that, it turned out, wasn’t actually used to generate the page, and was only used to cache data for an ‘Export’ button. Since the Export was a little-used feature, I decided it was better to leave the export data fetching for when the user actually requested it. This made a big dent in the time taken – and removed about 1330 database requests.

But the page was still taking a fair amount of time – about 16 seconds – and now it wasn’t the database that was causing the delay – total database time was well under a second.

The Client processing time was still quite long, but the server was still slow, so I looked at the Host section, which breaks down the time taken in the Controller Action and the View. For this page, the view was taking way longer than the controller, despite there being no real data processing in the view, so clearly something was causing a lot of work in the view.

This might have taken a long time to diagnose, but Glimpse showed its value again, with the ability to drill down into the details of each part of the MVC engine. When you click the ‘g’ logo on the right, it opens the details window, which has tabs for all the different things Glimpse logs.

[Screenshot: the Glimpse Views tab]

I wanted to see what the Views tab said – and sure enough, it showed a lot – there were hundreds of entries in that log for this single page, because the customer list was being rendered by an Editor Template.

Editor Templates are useful things. When you’ve got a complex class as your ViewModel, they allow you to write a template for the component objects, and the framework will do all the heavy lifting of calling the templates for the properties of the ViewModel.

The problem with my page is that I was using a template for over 500 objects in a collection, and what Glimpse was telling me was that the server was spending a lot of time finding the view for each of the objects, and it wasn’t caching this template.

Now, one of the reasons to use templates like this is so that your controller action can simply accept your model class as an argument, which keeps your code simple, but in this case, it was causing far too much server work.

Luckily, the controller that was handling the form data didn’t actually need the entire viewmodel – in fact, for my purposes, all I required was a single checkbox per item which contained the item ID.

This allowed me to remove the Html.EditorFor call that was generating all the template traffic, and put in my own loop.

At the same time, I simplified the code in the template. It was using Html.DisplayFor for textual fields, so I replaced those with just putting the literal fields in.

I had to change the signature of the controller action to simply accept an array of IDs, but that was the only information it was taking from the original model, so the rest of the code didn’t change. But it had a dramatic effect on the rendering time.
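To give a flavour of the change (the names here are illustrative, not from the real project), the action went from binding the whole viewmodel to binding just an array of IDs posted by the checkboxes:

```csharp
// Before: MVC bound the entire viewmodel, so the view needed an
// editor template invocation per row to round-trip every field.
// public ActionResult Update(CustomerListViewModel model) { ... }

// After: each checkbox in the hand-written loop is named "selectedIds",
// so the default model binder collects the ticked values into one array.
public ActionResult Update(int[] selectedIds)
{
    // work with just the IDs; the rest of the old viewmodel was never used
    return RedirectToAction("Index");
}
```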

With these changes, the View rendering time plummeted, from around 8s to less than a second.

As a result of these changes, a page with over 500 customers in a paged table dropped from around 75 seconds down to around 4 seconds (half of which was the jQuery pagination happening).

Not a bad result, and it would probably have been much, much harder to track down the bottlenecks without a tool like Glimpse.

No Good Deed Goes Unpunished

I’ve had a fraught two weeks with my Windows Store developer account.

I signed in, checked the dashboard, but when I tried to look at reports for an app, I got an error page saying my account had been locked. There was no indication why.

After trying a few things, I got in touch with support. We had a bit of back and forth, before he told me that, to my horror, my account was cancelled, and removed from the system. However, it turned out that he had been looking at my old developer account, with a different email address (the one I had been using for correspondence) which had, indeed, lapsed earlier in the year.

Despite this original ticket having been raised on the correct dev account, I had to raise another ticket, and explain the problem again. Once more, the back and forth, but at least I got more information – that my account had not auto-renewed because the credit card was declined.

I was a bit confused by this, because I’d received, back in July, notification that my account had been successfully renewed. And I had not received any notifications that the credit card had been declined.

But, when I delved into the commerce section of the site, I saw the attempts at renewal, and the declines. OK, no problem, that credit card had indeed been replaced by my bank, so I added a new card to the account and tried to renew.

I got a cryptic ‘Service Error’ message. No matter what I tried, several different cards, as well as PayPal, all I could get was ‘Service Error’.

I was relaying all this to the hapless support person, who couldn’t seem to help at all.

Then, several days after I’d started this whole odyssey, I suddenly got an email saying my developer account had been cancelled.

Yikes.

If true, this meant my apps would be removed from the store, and I’d have to create a whole new account, and submit the apps again. Existing customers would not get any upgrades, and would not be able to download the app again if they needed to.

This was quite serious. The tone of my emails to support got a little more panicked, as I threw terms like ‘bad faith’ and ‘incompetence’ at him, hoping he’d escalate the issue. However, all I could get from him in return was the same message – my account was irrevocably removed, and I’d have to create a new one.

Until today when, after I replied to his last message, asking for progress on escalating the issue, he replied with all new information.

He said that ‘Recently the developer subscriptions do not expire.’ and suggested that everything looked fine on my account.

He was correct. All was well.

Because of this: New Dev Center lifetime registration and benefits program

The Windows Store dev program has just made accounts a one-off payment. So clearly my renewal difficulties, and the cryptic ‘Service Error’ I kept getting, were a result of the internal changes that were happening in the subscription system. I was just unlucky that my registration renewal happened to coincide with the switchover.

I wish they had been able to communicate this change to their support people, though. I had a very fraught two weeks, thinking that I’d have to go through the hassle of recreating an account, and resubmitting apps, along with possibly alienating existing customers.

It demonstrates some of the problems of outsourcing your support so far from the development teams. When I first started in software development, every developer in our (small) company had to do tech support regularly. It kept us aware of issues real users were having, and users got information straight from the people who knew how it all worked. If your support staff aren’t even on the same continent as your development staff, the quality of support is going to suffer.

Joss Whedon’s Impossible Screenwriting Seminar

So it was very late on Saturday night, somewhere around 5am. I was reading Twitter, which was quiet as usual for that time of night, when I saw a couple of intriguing tweets.

https://twitter.com/josswhedon/status/406951676944596992

and this:

At first I almost dismissed it, thinking it would be an LA-based event. But that was a London postcode. Was it possible? I wasn’t the only one unsure:

Now, I was just about to go to bed, having been up most of the day, and half the night. But if Twitter was telling the truth, Joss Whedon was giving a talk on screenwriting in Central London. In five and a half hours.

I have a habit of overanalysing things, and finding all the reasons why something might be a bad idea. But this time, maybe because I was tired, the only real downside I could see was that I’d spend a morning in London, and since I’ve not been out much recently, I figured that wasn’t much of a bad thing.

So, I had a couple of hours of attempted sleep, hopped in the shower, hastily stuffed the ‘Once More With Feeling’ scriptbook into my camera bag, just in case, and walked down to the station.

For a 9:15 train on Sunday morning, it was packed. As bad as any commuter train I’ve been on. I guess the pre-Christmas sales are starting or something.

I arrived at the Berwick Street address in plenty of time, expecting to find… what? A massive scrum? People being turned away? A deserted lot? What I found was three or four people waiting uncertainly outside a shuttered shopfront (which at least was showing the ‘impossible’ branding). As the time approached, we shared our doubts that this was even a real thing. Perhaps we’d all got the wrong end of the stick.

But we hadn’t. A little while after the promised start time, someone arrived to open the shutters, and the growing group of people walked into the fairly Spartan interior, still unsure if this was where it was happening.

And then I saw Mr Whedon walking across the street, accompanied by Lily Cole, who had arranged the event. (Is it sad to note that I only recognised Lily from her appearance as a mermaid in an episode of Doctor Who? Then I’m sad.)

What followed was a very informal session. He started by going round the room getting us all to introduce ourselves and say what our experience was. I felt a little like an interloper, not being a professional writer, but I wasn’t alone, and at least I was able to truthfully say I’d written two screenplays in the last month (OK, they were both very short, but I’d finished them).
Joss admitted that he hadn’t done anything like this before, but his talk was interesting, funny, and very inspiring. He talked about the importance of knowing who all your characters are, and what drives them, even the second henchman on the right. He described getting executive notes as being nibbled to death by ducks. He talked about the importance of structure, and how he would create charts showing the story timeline, with colours indicating the purpose or feeling of every scene, so he can see that the pacing and structure of the story is working as he wants it.

Also, astonishingly, he revealed that he’d been having root canal at midnight, having broken a tooth on an olive pit. So we were even luckier to have him there than we’d thought.


After the talk, he was kind enough to sign some things, and of course, I got a photo with him.


When it all wound up, a group of us decamped to the nearest Starbucks, and the conversation went on until 4pm.

I’d like to thank Joss for giving his time when I’m sure he would much rather have been finishing the screenplay for the next Avengers movie. No exclusives on that score, I’m afraid, although he did tweet this:

https://twitter.com/josswhedon/status/407178217016287232

I should also thank Lily Cole and impossible.com for making this happen in the first place. The ultra-last-minute feel made it an incredibly intimate event. I’m sure that had it been arranged way in advance, it would have been massively oversubscribed, and I’d never have been able to go. So thank goodness for late nights and Twitter.

Silverlight Grid Row Gradients

I just had to implement a design where a table of data, in Silverlight, had to have cell backgrounds where each cell is slightly darker than the cell above. Here it is in situ:

As you can see, not only does each row have a descending gradient, but each column has a different gradient.

The easy way

The simple way is to just put a border or background rectangle into each cell. This is what I did originally. But it’s a lot of work no matter how you do it. Neither Blend nor Visual Studio lend themselves to this kind of thing, and even hand-editing the Xaml can lead to cut-and-paste errors.

The even easier way

I decided to have some kind of automated way to do this. I needed something that you could call that would put in the right rectangles in the right places with the right colours but was easy to manage, so I wrote a behavior. If you’re unfamiliar with Silverlight Behaviors (apologies for the American spelling, but that’s what they’re called) then read this post by Christian Schormann from the Blend team.

I realised that all I needed was something where I can specify which column I’m interested in, and what the start and end colours are. So this is the code I came up with:


/// <summary>
 /// A behaviour which targets a Grid to put background rectangles
 /// into column cells creating a gradient.
 /// </summary>
 public class GridGradientBehavior : Behavior<Grid>
 {
   /// <summary>
   /// Somewhere to keep track of the items we've added to the grid
   /// </summary>
   List<Rectangle> rectangles = new List<Rectangle>();

   /// <summary>
   /// Called when this behaviour is attached to the grid.
   /// </summary>
   protected override void OnAttached()
   {
     base.OnAttached();

      // Just in case, clear any rectangles we've already put in
     foreach (var rect in rectangles)
       {
         AssociatedObject.Children.Remove(rect);
       }
       rectangles.Clear();

       // Check the RowDefinitions. If there aren't any, just put in a single rectangle
       if (AssociatedObject.RowDefinitions == null
           || AssociatedObject.RowDefinitions.Count <= 1)
       {
         InsertRectangle(new SolidColorBrush(StartColour), 0);
       }
       else
       {
         // We need to interpolate between the two colours. I make use
         // of the SolidColorBrushInterpolator from System.Windows.Controls.DataVisualization
         // since my project already uses the toolkit.
         var interpolator = new SolidColorBrushInterpolator();
         interpolator.From = StartColour;
         interpolator.To = EndColour;
         interpolator.DataMinimum = 0;
         interpolator.DataMaximum = AssociatedObject.RowDefinitions.Count - 1;
         for (int i = 0; i < AssociatedObject.RowDefinitions.Count; i++)
         {
           InsertRectangle((Brush)interpolator.Interpolate(i),i);
         }
       }
     }

     /// <summary>
      /// For each grid row, insert a rectangle of the appropriate colour into the
     /// appropriate row and column slot.
     /// </summary>
     /// <param name="color">The brush to use, interpolated between Start and End</param>
     /// <param name="i"></param>
     private void InsertRectangle(Brush color, int i)
     {
       Rectangle rect = new Rectangle();
       rect.HorizontalAlignment = HorizontalAlignment.Stretch;
       rect.VerticalAlignment = VerticalAlignment.Stretch;
       rect.Fill = color;
       Grid.SetRow(rect, i);
       Grid.SetColumn(rect, Column);

       // This should ensure these rectangles are behind everything else in the grid
       Canvas.SetZIndex(rect, -1);

       // and if it doesn't, we insert our rectangles at the lowest point
       // in the visual tree
       AssociatedObject.Children.Insert(0, rect);
       rectangles.Add(rect);
     }

     /// <summary>
     /// Called when the behavior detaches. Remove all our rectangles from
     /// the grid.
     /// </summary>
     protected override void OnDetaching()
     {
       base.OnDetaching();
       foreach (var rect in rectangles)
       {
         AssociatedObject.Children.Remove(rect);
       }
       rectangles.Clear();
     }

     /// <summary>
     /// Dependency property telling us which column to apply this behaviour to
     /// </summary>
     public int Column
     {
       get { return (int)GetValue(ColumnProperty); }
       set { SetValue(ColumnProperty, value); }
     }
     public static readonly DependencyProperty ColumnProperty =
       DependencyProperty.Register("Column", typeof(int), typeof(GridGradientBehavior),
         new PropertyMetadata(0));

     /// <summary>
     /// Dependency property for the start colour of our gradient
     /// </summary>
     public Color StartColour
     {
       get { return (Color)GetValue(StartColourProperty); }
       set { SetValue(StartColourProperty, value); }
     }
     public static readonly DependencyProperty StartColourProperty =
       DependencyProperty.Register("StartColour", typeof(Color), typeof(GridGradientBehavior),
         new PropertyMetadata(Colors.White));

     /// <summary>
     /// Dependency property for the end colour of our gradient
     /// </summary>
     public Color EndColour
     {
       get { return (Color)GetValue(EndColourProperty); }
       set { SetValue(EndColourProperty, value); }
     }
     public static readonly DependencyProperty EndColourProperty =
       DependencyProperty.Register("EndColour", typeof(Color), typeof(GridGradientBehavior),
         new PropertyMetadata(Colors.Black));

   }

The easiest way to apply this to a grid is to drag and drop it from the Behaviors section of the Assets pane. Then set the three parameters (StartColour, EndColour and Column) as appropriate. If (as in my example) you have two columns you want this to affect, you just have to drop a second behavior onto the same grid.
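If you’d rather not use Blend, the same attachment can be done in code-behind. Here’s a sketch using the System.Windows.Interactivity API (assuming grid is the x:Name of your Grid):

```csharp
using System.Windows.Interactivity;
using System.Windows.Media;

// Attach a gradient behaviour to column 1, shading white down to steel blue.
// Adding a second behavior instance to the same grid covers another column.
Interaction.GetBehaviors(grid).Add(new GridGradientBehavior
{
    Column = 1,
    StartColour = Colors.White,
    EndColour = Colors.SteelBlue
});
```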

There are a million ways you might want to extend this – letting you style rows instead of columns, or changing the rectangle to a border and having the ability to define different borders for top and bottom for example. Or you could use this to do alternating row colours. There’s quite a lot of scope for such a simple idea.

What I really want, though, is a way to get the same effect in an ItemsControl, but that’s a whole different problem.

Feel free to use this if it’s helpful.

Are Videogames Art?

The film critic Roger Ebert states they are not, and can never be. Ignoring the fact that he doesn’t play games himself, so couldn’t possibly experience any engagement that might indicate whether he’s right or not, one of his main points is that their interactive nature precludes videogames from ever being art. Could a game of Chess be considered art, he asks.

I think by making this assertion he’s actually missed something quite important, and to illustrate this, I have an example.

In the game Bioshock, your character is exploring a beautiful art-deco underwater city, having pitched up there after a plane crash. Note: This will contain major spoilers for the game. Please don’t continue if you haven’t played it and intend to at some time – the revelations are worth discovering on your own.

As you explore you (and the character you’re controlling) learn more about the city’s backstory, about the mysterious Andrew Ryan, who created the city, and about his social and biochemical attempts to create a perfect city.

The visual look of the game is fantastic, and wouldn’t disgrace any Hollywood movie, but that’s not the reason I think it qualifies as art. Art isn’t about the quality of the rendering, or many modern artists wouldn’t qualify.

The true art in this game comes from its effect on you, the player (the audience, if you like) but I would argue that the effect it has on you is enhanced, rather than diminished, by the fact it is interactive. Here’s why.

Early on in the game, you meet a creepy little girl, known as a ‘Little Sister’. You learn that she’s been genetically engineered, and she stalks the city looking for dead people from whom she can extract a vital drug. The first time you meet one of these girls you’re told several things by the character who’s guiding your progress through the game:

  • They aren’t really little girls, because they’ve been genetically engineered
  • The drug they extract is something you need to improve your abilities within the game
  • To get hold of that drug you have to kill the little girl

At the same time, another character who you’ve never met before implores you not to harm the girl. She tells you:

  • You don’t have to kill her – you can ‘rescue’ her which frees her from the need to extract the drug
  • Rescuing her will still give you some of the drug she’s carrying, but not as much
  • If you rescue them, she will make sure you’re rewarded later (although she doesn’t specify how)

So as a player you have a clear choice: Kill what looks like a little girl for an instant reward, or rescue her, and hope that your future reward is worth it. Your ability to progress in the game might well be affected by your decision – the game involves various weapons and abilities, some of which can become more powerful the more of the drug you get.

Now obviously, this is a game, a fairly violent one where you’ve already been killing plenty of other characters (think of the ‘fast zombies’ from 28 Days Later for the kinds of enemies you have to kill), but this choice you’re given is very different. The game designers have very deliberately chosen to make these characters little girls, then offered you this choice of ways to play the game. (I don’t think you can choose to do neither – to move past this point in the game, I think you have to choose, although later in the game you could choose to ignore the girls. But you have to make the choice at least once.)

So as you play the rest of the game, each time you encounter a little sister, you make the same choice. When I played it, I always chose to rescue them.

The important thing here is that your choice has a profound effect on the ending of the game, which I’ll come to in a moment.

Much later in the game, there is a significant twist. You confront Andrew Ryan, and he reveals that the character you are playing was actually born in the city and was psychologically engineered to be controlled by a key phrase – and that the character who has, up to this point, been guiding you around the city, telling you where to go and what to do, has been controlling you all along.

Then Ryan orders you to kill him. With a golf club.

Which you do.

This is a fairly shocking moment, but interestingly, one over which you have no choice. The game is in a ‘cut scene’ at this point, and you have no control over what happens. Andrew Ryan is killed, by you, and you can’t stop it happening.

The first time I played this I thought I’d accidentally hit the ‘Fire’ button, and was horrified. I reloaded the game from an earlier point to check, but it all still happened the same way.

This is the first point in the game where I think the fact it’s a game makes it art. We’re used to killing or destroying things and people in games. But at this point, the designers chose not to give you a choice. The narrative happens regardless, and yet you feel shocked and culpable. This is a reaction which is amplified by it being a game. If you were seeing the same story play out in a linear form, it would still be a shocking moment, but your emotional reaction to it would be less. Or at least, different.

This is an example of something being art because it’s a game, not despite it.

And the second example, from the same game, is the ending.

Having killed Ryan, your game character is eventually freed from the psychological influence of Ryan’s nemesis, Frank Fontaine, and the last section requires you to fight him and kill him. Frankly, this is a typical ‘End of game Boss Level’ – you confront someone stronger than anyone you’ve previously encountered, and have to defeat them to complete the game. This is a convention of video games, almost like the ride into the sunset in cowboy movies, and is nothing remarkable until you finally overpower him. You then get the endgame.

And it’s different, depending on that choice you made early in the game. If you chose to rescue the little sisters, the game shows you one ending. After they’ve helped you kill Fontaine, one of the sisters approaches you, handing you something, and the following narration is spoken by the female doctor who looked after the sisters:

They offered you their city.

And you refused it.

And what did you do instead?

What I have come to expect of you.

You saved them.

You gave them the one thing that was stolen from them: A chance.

A chance to learn. To find love. To live.

And in the end what was your reward?

You never said, but I think I know.

A family.

It’s quite beautiful. I’m not ashamed to say, I cried at that ending. Someone once said that games could not be classed as art until you truly care about the characters, in which case, job done.

But I think this offers something slightly deeper. Because this ending is a direct response to your behaviour during the game. You only get this ending if you rescued the little sisters. You get a different ending (see previous link) if you harvested them, a far more prosaic ending (in my view).

But the truly interesting thing is that that happy ending is your ‘reward’. You’ve earned it. The game gives you this emotional response because of the way you’ve played it. No linear art form could do this – it’s only possible because a game is a thing to be played.

And that is why games can truly be art. Because they have a wholly unique way to affect you.

Silverlight Top Tip: Startup page for Navigation Apps

If you’re working on a Silverlight Navigation Framework application, you’ll often want to debug a specific page, rather than always start at your home page and navigate to it.

My previous solution was just to edit the Source attribute on the Frame, setting it to the initial URL I wanted. But this is dodgy: if you forget to reset it before doing a release, your customers will end up confused.
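For context, this is roughly what that risky approach looks like (a minimal sketch – the page name and element names here are illustrative, not taken from any particular app):

```xml
<!-- Hard-coding a debug page into the Frame's Source attribute.
     Normally this would point at the home page, e.g. "/Home";
     changing it to "/Advanced" for debugging is easy to forget to revert. -->
<navigation:Frame x:Name="ContentFrame" Source="/Advanced" />
```

Ship that by accident and every user starts on your debug page, which is exactly the problem the tip below avoids.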

The better way, which has only just occurred to me, is to change the default startup path in the associated web application. In the Properties for your web application, choose the Web section, then under Start Action choose ‘Specific Page’ and edit the URL. By default (in my app at least) it’s just set to the HTML page. I changed it like this:

Add #/<url> to your Specific Page URL, and the app now starts up on the Advanced page. Just add whatever you want your initial start parameters to be – you can include query string params or anything else that would make a valid URL.
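As a concrete illustration of the format (the port number, test-page name, and ‘Advanced’ page are just examples, not from any real project), the Specific Page URL would end up looking something like:

```
http://localhost:50123/MyAppTestPage.html#/Advanced?mode=debug
```

Everything after the # is the Navigation Framework’s fragment, so the debug-only part lives in the web project’s debug settings rather than in the XAML that ships to customers.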

Much safer.