sn.printf.net

2020-07-06

During lockdown, I’ve made another effort at learning F#. This time I think I’ve had a bit more success. Processing data is something that we as developers do on a weekly or even daily basis, so it seems quite natural to practice that in F#. As a big football fan, I’ve decided to use the English Premier League results for season 2019/2020, as it’s a dataset I implicitly understand.

The EPL results set is available in CSV format from football-data.co.uk, and rather than having to parse it all by hand or hitting up CsvHelper and still having to write some C# code to actually use it, in F# we can use a Type Provider, specifically the CsvProvider from FSharp.Data.

It’s worth pointing out at this point that I’ve been learning for about two weeks, so the following F# should not be taken as a definitive example of good, idiomatic code.

Loading and parsing the data

FSharp.Data is easily added via NuGet, and using an .fsx script, we can easily reference the assembly and open the namespace:

#r "../../.nuget/packages/fsharp.data/3.3.3/lib/netstandard2.0/FSharp.Data.dll"
open FSharp.Data
open System.Collections.Generic

I didn’t have any luck referencing a more local copy of the assembly from the script, such as one in the /bin folder, because it complained about not being able to find FSharp.Data.DesignTime.dll, but going directly to the assembly in the NuGet packages folder seems to work just fine. It is also worth noting that I’m writing this on a Mac (in VS Code), so your path syntax might vary. Note that we also open the BCL System.Collections.Generic namespace. We’ll need that later.

Next, comes the part that blows my mind. Here is how we generate a type which knows how to load and parse a CSV file of a given structure:

type Results = CsvProvider<"../../Downloads/epl1920.csv">

That’s it. It’s pretty amazing. The Results type is now type safe, and it has had a guess at inferring the types of each column of the data. We could probably do something similar in C# using CsvHelper and either Castle.DynamicProxy or some magic with the new Roslyn compiler, but I think it would take quite a bit of code to create something that came close to what this can do.

Skipping over some important stuff that we’ll get to in just a short while, we can now easily load the full results set:

Results.Load("../../Downloads/epl1920.csv")

This is fairly straightforward, and does exactly what it looks like. The data loaded from the file is available in a .Rows property, which we’ll use shortly.
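To illustrate what the provider gives us, here’s a quick sanity check (a sketch; the column names come straight from the CSV header and are the same ones used later in this post, and the provider infers FTHG/FTAG as integers):

let results = Results.Load("../../Downloads/epl1920.csv")

// Print the first three results using the provider's typed row fields
results.Rows
|> Seq.take 3
|> Seq.iter (fun row -> printfn "%s %d - %d %s" row.HomeTeam row.FTHG row.FTAG row.AwayTeam)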

Parsing the data

All good so far, but now things get a little more complicated. We need to think somewhat about the data, and if you look in the file… it’s got a LOT of information, mostly related to betting odds for the match, but also quite a lot about the match itself. For the purposes of calculating the league, most of the information in the file is redundant. To get just the information we need, we can define a Record to hold that information. A Record in F# is somewhat analogous to a C# POCO class, but with automatic type safety and full equality comparisons out of the box.

type FullTimeResult = | Home | Away | Draw
type MatchResult = {HomeTeam : string; AwayTeam: string; HomeGoals: int; AwayGoals: int; Result: FullTimeResult}

The FullTimeResult type is a discriminated union that plays the role a C# enum would here, and is easier to read than the ‘A’, ‘H’ or ‘D’ we get from the CSV file for the FTR (Full Time Result) column. I think it also looks nicer to read when it comes to the pattern matching, but we’ll get to that. With those types defined, we can get to the real meat of this and actually parse the data:

let league = Results.Load("../../Downloads/epl1920.csv")
                .Rows
                |> Seq.map toMatchResult
                |> Seq.fold processMatchResult (Dictionary<string, LeagueRow>())
                |> Seq.sortByDescending (fun (KeyValue(_, v)) -> v.Points)

Here, we load the file as we discussed earlier, but now we forward pipe the data returned from the .Rows property through Seq.map and the toMatchResult function, which takes a row, extracts the data we’re interested in, and returns a new MatchResult. In C# this is roughly .Rows.Select(row => new MatchResult {...}). Then, the resulting sequence of MatchResults is piped forward through the processMatchResult function, using the scary-sounding Seq.fold, which is also passed a new instance of a BCL Dictionary, with a string key and a LeagueRow type as the value. I’ve not yet mentioned the LeagueRow type… it’s not super important to proceedings, it’s just a type which holds all the data you would expect to see in a football league table. For reference, it’s included in the full script below.
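Since the full script lives in the gist, here is roughly what LeagueRow looks like, reconstructed from the fields used in updateTeam further down:

type LeagueRow = {
    Team : string
    Played : int
    Won : int
    Drawn : int
    Lost : int
    GD : int
    For : int
    Against : int
    Points : int
}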

Amazingly, those five lines load the file, process all the data, and provide an object which contains a fairly accurate version of the English Premier League table. Obviously things are a little more involved than that.

Examining the parsing in more detail

As you’ll recall, there is a lot of data in the CSV file that is irrelevant when it comes to generating the league table. We can map all the data we need into the MatchResult type, which we do by forward piping the data through Seq.map and the toMatchResult function:

let toMatchResult (row: Results.Row) =
    let fullTimeResult = 
        if row.FTR = "H" then FullTimeResult.Home
        elif row.FTR = "A" then FullTimeResult.Away
        else FullTimeResult.Draw
    {
        HomeTeam = row.HomeTeam
        AwayTeam = row.AwayTeam
        HomeGoals = row.FTHG
        AwayGoals = row.FTAG
        Result = fullTimeResult
    }

This is mostly just a simple mapping from the results row into the new MatchResult type. You’ll notice we don’t need to explicitly ‘new’ anything up; don’t forget, we’re in a functional world now, so the record expression at the end is the function’s return value, since F# implicitly returns the last expression evaluated. We also define a nested binding which works out the full time result using a simple if/elif/else construct. I think I could also have used pattern matching, but it’s simple enough that I’m not going to worry about it.
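For the curious, the pattern-matched version would look something like this (the helper name is mine):

let toFullTimeResult ftr =
    match ftr with
    | "H" -> FullTimeResult.Home
    | "A" -> FullTimeResult.Away
    | _ -> FullTimeResult.Draw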

Next comes the scary-sounding fold. The method looks like this:

let processMatchResult (league : Dictionary<string, LeagueRow>) result  =
    match result.Result with
    | Home -> updateHomeWin(league, result)
    | Away -> updateAwayWin(league, result)
    | Draw -> updateDraw(league, result)
    league

What happens is that we tell Seq.fold to use this method to do the folding, and we give it an initial state of a new and empty Dictionary<string, LeagueRow>(). Seq.fold carries the state over to each subsequent ‘fold’ over the sequence of MatchResults it was piped. You’ll note that the final expression, and hence the return value of the method, is the same dictionary which was passed in. This essentially forms the core of the algorithm to produce the league. The pattern matching of match <thing> with is equivalent to a C# switch statement on steroids. I am barely scratching the surface of what can be done with pattern matching in F#.
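If fold is unfamiliar, a minimal example shows the mechanics: the folder function receives the current state and the next element, and whatever it returns becomes the state for the following element:

// Summing a sequence: the running total is the folded state
let total = Seq.fold (fun acc x -> acc + x) 0 [1; 2; 3; 4] // 10

In our case the initial state is the empty dictionary, and processMatchResult hands the same (mutated) dictionary back as the state for the next match.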

The pattern match decides what kind of result we are dealing with, and delegates further processing to the relevant method. Here is the definition for updateHomeWin. The other two methods are exactly the same, except they distribute the points/goals/wins/losses/draws accordingly, so I won’t go into those in detail (though they’re sketched after the code below).

let updateHomeWin (league : Dictionary<string, LeagueRow>, result : MatchResult) =
    updateTeam(league, result.HomeTeam, 3, result.HomeGoals, result.AwayGoals, 1, 0, 0)
    updateTeam(league, result.AwayTeam, 0, result.AwayGoals, result.HomeGoals, 0, 0, 1)
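
For reference, the away-win and draw versions would presumably look like this, following the same updateTeam parameter order (points, goals for, goals against, won, drawn, lost):

let updateAwayWin (league : Dictionary<string, LeagueRow>, result : MatchResult) =
    updateTeam(league, result.AwayTeam, 3, result.AwayGoals, result.HomeGoals, 1, 0, 0)
    updateTeam(league, result.HomeTeam, 0, result.HomeGoals, result.AwayGoals, 0, 0, 1)

let updateDraw (league : Dictionary<string, LeagueRow>, result : MatchResult) =
    updateTeam(league, result.HomeTeam, 1, result.HomeGoals, result.AwayGoals, 0, 1, 0)
    updateTeam(league, result.AwayTeam, 1, result.AwayGoals, result.HomeGoals, 0, 1, 0)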

Each MatchResult consists of two teams, and we have to update each entry in the league for both of these teams, with the correct number of points, goals for, goals against, win, draw and loss. The real part of this is in the updateTeam function:

let updateTeam (league : Dictionary<string, LeagueRow>, team : string, points : int, forGoals : int, againstGoals : int, won : int, drawn : int, lost : int) =
    if league.ContainsKey team then
        let existing = league.[team]
        let updated = {existing with Played = existing.Played + 1; Won = existing.Won + won; Drawn = existing.Drawn + drawn; Lost = existing.Lost + lost; For = existing.For + forGoals; Against = existing.Against + againstGoals; Points = existing.Points + points}
        league.[team] <- updated
    else
        let leagueRow = {Team = team; Played = 1; Won = won; Drawn = drawn; Lost = lost; GD = 0; For = forGoals; Against = againstGoals; Points = points}
        league.Add(team, leagueRow)

This is just a simple dictionary update where we check if a team already has an entry, and if so, update it; otherwise we create it. Things of note here are that whilst F# favours immutability, types from System.Collections.Generic are mutable, which is how this whole thing works. I’m sure that someone much better at F# can come along and tell me how to do this with immutable F# collections. Also of note is the collection access of league.[team], which is different from C#, and the update of the value in the dictionary using <-.
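As an aside, a sketch of the same update using F#’s immutable Map (my attempt, not necessarily idiomatic) might look like this; the fold’s state would then be the Map itself rather than a mutated dictionary:

let updateTeamImmutable (league : Map<string, LeagueRow>) team points forGoals againstGoals won drawn lost =
    let updated =
        match Map.tryFind team league with
        | Some e ->
            { e with
                Played = e.Played + 1
                Won = e.Won + won
                Drawn = e.Drawn + drawn
                Lost = e.Lost + lost
                For = e.For + forGoals
                Against = e.Against + againstGoals
                Points = e.Points + points }
        | None ->
            { Team = team; Played = 1; Won = won; Drawn = drawn; Lost = lost
              GD = 0; For = forGoals; Against = againstGoals; Points = points }
    // Map.add replaces the value for an existing key, returning a new map
    Map.add team updated league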

After that, we can define a simple method to print out a row from the league for us, and then iterate through the entries in the dictionary, to get a league table:

let print league =
    printfn "Team: %s | Played: %d | Won: %d | Lost: %d | Drawn: %d | For: %d | Against: %d | GD: %d | Points: %d" league.Team league.Played league.Won league.Lost league.Drawn league.For league.Against (league.For - league.Against) league.Points

league
|> Seq.iter (fun (KeyValue(_, v)) -> print v)

KeyValue is an active pattern which matches KeyValuePair objects from the BCL Dictionary, and this produces (with data correct as at the publication of this post):

Team: Liverpool | Played: 33 | Won: 29 | Lost: 2 | Drawn: 2 | For: 72 | Against: 25 | GD: 47 | Points: 89
Team: Man City | Played: 33 | Won: 21 | Lost: 9 | Drawn: 3 | For: 81 | Against: 34 | GD: 47 | Points: 66
Team: Leicester | Played: 33 | Won: 17 | Lost: 9 | Drawn: 7 | For: 63 | Against: 31 | GD: 32 | Points: 58
Team: Chelsea | Played: 33 | Won: 17 | Lost: 10 | Drawn: 6 | For: 60 | Against: 44 | GD: 16 | Points: 57
Team: Man United | Played: 33 | Won: 15 | Lost: 8 | Drawn: 10 | For: 56 | Against: 33 | GD: 23 | Points: 55

For completeness, here is a gist of the full script:

2020-07-06 13:30

2020-06-22

Suppose you’re on a rocket ship, and you’re given the choice of three buttons: one button starts up your advanced MHE thrusters, but the other buttons explode the ship. You choose some button (say, number 1), and the rocket ship itself (which is a sentient AI), who knows what the buttons are connected to, chooses another button (say, number 3). It then says to you, “Do you want to choose button number 2?”

Is it to your advantage to switch your choice? Also, who designed this control panel?

by Factor Mystic at 2020-06-22 21:11

2020-06-07

I second this. I’ve been taping my mouth closed at night for the past year…

¯\_(ツ)_/¯

by Factor Mystic at 2020-06-07 11:43

2020-05-19

In every introduction to a potential client, partner, or other associate, the first thing I do is give a brief overview of my history. I know this is common in just about any business or social interaction, but it’s especially important in my line of work, since communicating my curriculum vitae is so critical to demonstrating my technical competence. And in truth, a technical lead can be as charming as you’d like, but unless they are extremely technically competent, nothing else matters. But it’s hard to compress a lot of data into a short introduction!

I’ve worked on projects for some of the biggest companies in the world — not just Fortune 500 companies, but Fortune 50 companies! — as well as launched dozens of startups, some that didn’t work, some that did, and some that were extremely successful. Those larger companies include industry giants like Disney (three divisions — Disney Parks, Disney Channel, and Disney film!), Dreamworks, Warner Bros., Accenture, CBC, Sega, Sonos, and many more, as well as successful, venture backed startups like Graphite Comics, for which I currently serve as CTO.

I’ve also been written about, and my projects (both professional and personal) have been written about, by publications like the New York Times (which covered the launch of Graphite Comics, and the AI recommendation system I built for it, on the front page of the Business section), The Guardian (twice!), Fortune, TechCrunch, USA Today, AdWeek and many other internationally known outlets, in addition to multiple smaller but no less influential blogs and online journals like PocketGamer, TouchArcade, VentureBeat, and many, many more.

Just typing up that last paragraph and linking to all of that old press was a proud moment. In addition to mainstream press, I’ve also published papers on game-related AI, including a writeup of my graduate thesis on Hebbian learning in artificial neural networks — a topic I want to get into in more depth later.

That being said, I get asked a lot to dive deeper into my background — not only what I’ve worked on in the past, but why and how, which, to me, are much more interesting questions!

(Note: I will be adding to this as time permits until I’m finished!)

When I first became interested in computer science and programming specifically, there really wasn’t much of an industry per se — at least not anything that looks anything like the industry today. What did exist then was a collection of business-centric hardware and software providers for the most part, building tools to help businesses do the kinds of things businesses did in the 80s — spreadsheets, simple printing software, things like that.

If I try to remember exactly why I became fascinated with computers back then, I genuinely couldn’t tell you! There was very little for me to sink my teeth into — and keep in mind I was a very young kid at this time — certainly not interested in spreadsheets and early databases!

I guess I can liken it to those early video game experiences. Why was it fun to make the white dot chase the grey dot? Why were any of those Atari 2600 games I played as a kid fun in the least? I also can’t answer this question, except to say that in both cases, I think I, and all the other kids intrigued by tech back then, just sensed that something cool was in there. We knew that someday the ultra simple games we played like Pong and Pac-Man would mature into nearly photo-realistic, open world masterpieces like GTA V. Maybe we sensed those big, boxy, monochrome machines that just displayed green text would someday fit in our hand, and allow us to do amazing things like fly a drone, video chat with someone around the world, or find our way home.

Whatever the case may be, I found myself drawn to technology in a major way around the age of 10. My neighbor friend’s dad worked for IBM, and he had an early IBM PC that didn’t do much, and I had a few friends from early on who had early Apple computers. But my very first computer was a Commodore 128.

The Commodore 128 was an upgrade from the immensely popular Commodore 64. I believe the Commodore 64 is still the biggest selling single computer of all time. At any rate it was a very popular machine, primarily because you could run games on it — and the games it ran were pretty amazing, especially for the time. Even better, you could plug an Atari 2600 controller right into it.

The C128 came with BASIC, which was my first programming language. I’m actually surprised BASIC, or something like it, isn’t more prevalent these days. BASIC is, after all, extremely basic, and it was a great way for 10 year old me to start learning the essentials of programming. The C128 came with a few big thick manuals, and one of them was a complete guide to programming the machine in BASIC.

This is one of the most interesting and stark differences between the early days of PC use and today. Computers shipped with compilers and manuals explaining how to use them. Imagine if every iMac you bought came with a big book on how to code in Swift and Xcode not only preinstalled, but tightly integrated into the operating system! That’s what things were like then — if you shelled out the money for one of these things, it was more likely than not that you planned to build software for it.

But while BASIC was fun, it didn’t take me very far. You really couldn’t do much with BASIC, so I found myself focusing on generating sounds with it. One of the demo programs in the manual (printed out, so you had to copy it line by line off of paper to get it into your computer!) was a program to play a piece of music by generating tones with the C128’s pretty amazing audio chip. But other than that — well, without a lot more skill than I had, there wasn’t much further for me to go.

So I switched my focus to networking. That little C128 had a slot you could plug a modem into, and you could use it to dial into a few online services. Some of these were early Internet-like things that let you play games or talk with people, and as I reflect back on them, they were really pretty amazing pieces of software for the time.

So when I finally upgraded to an IBM compatible PC, the first thing I did was install a modem, and then start to tinker around with Wildcat BBS. I quickly met up with a group of local (Orange County, California) tinkerers who were using Wildcat to connect with other enthusiasts, and eventually became one of the first members of my school’s computer club — where we mostly just messed around with Wildcat. There still wasn’t much out there on the consumer level that fascinated me more than BBS software did. The idea that I could link up with other people through my computer was amazing.

But my time with Wildcat didn’t involve any coding, and programming was what I really wanted to learn.

Eventually, I went to high school, just in time for the school to offer an AP Computer Science course, which I took my junior year. At that time, the course was taught in Pascal, which was a much more complex language than BASIC, and I quickly started to imagine the possibilities with this new language. Pascal led me to C, which led me to C++, and then to Java in its early days. Since my high school only offered the one CS class, I took some college level classes in high school at Fullerton College and California State University, Fullerton in Unix, C, C++ and a data structures course taught in Pascal. I was actually starting to become a polyglot at an early age, which I’m very grateful for, as I know a few very talented developers who struggle to learn new languages and platforms. Being thrown into so many so young (mostly because the industry was all over the place back then!) really helped me later on, I think.

I started school at the University of California, Santa Barbara as a double major — Mathematics and Computer Science. This was so early on that very few students even had an email account, as everything was done through dial up into a Unix system using text-based Unix utilities like mail, finger, talk… You could look things up in a really ridiculous way using gopher, archie and eventually on the web using lynx. But then, I was lucky to get one of a very few SLIP accounts, and eventually a PPP account. Don’t even bother looking those up, it was a short-lived thing, but it was how you could jump on the graphical internet over dialup. Using those technologies, I was the first person I knew who could look things up on the web, in an actual browser, with images and everything. And imagine how life-changing that was.

I feel like people in their 20s just really can’t imagine how Earth shattering this all was back then.

Anyway, I ended up only taking the Mathematics degree, as I planned on graduate studies in Computer Science and wanted to get to that as soon as possible rather than spend an extra year as an undergrad. Specifically, I had planned to chase a PhD in Computer Science and then figure out where to go from there. But then something strange happened…

I went to film school.

I’m not sure when it happened, but it occurred to me that one of the things I loved the most about technology was the creative side of it. I loved games, and music, and film, and all of these wonderful things that you could do with computers suddenly in addition to writing interesting software on them. And I wanted to hone my creative skills. So I started an MFA program in film production at Chapman University.

After school I moved to Los Angeles and ventured briefly into the creative arts — first as a struggling filmmaker, then as a punch-up writer (I would be hired to take scripts that didn’t quite work and add jokes or interesting scenes to them). Eventually I got into some support work with a few local film festivals. But I really didn’t enjoy the film industry at all, so I started dabbling in something I did enjoy: music.

Now this is going to sound like I was all over the place, and I was. But I started writing and producing electronic music, and pretty quickly started getting hired to mix and produce CDs, remix songs for some well known artists, and eventually even landed a record deal. But the whole time I was more interested in the idea of pushing creative boundaries than anything, and since I was working in the electronic music arena, the way you did that was with software.

So suddenly I found myself back in the software world — this time writing music software. Specifically, I was building virtual effects and instruments using technologies called VST and VSTi. This technology allowed you (and still does to this day) to build instruments and audio processing plugins for any DAW that supports the VST format.

I quickly found myself spending more time writing software than music, and thus I was thrust back into the world of technology.

Around this time I took a job as a network engineer at a company in Orange County, and later in downtown Los Angeles. The life of a network tech is dull, but I did it because it gave me access to two things: tons of computer hardware, and immensely huge Internet bandwidth — the latter of which was tremendously expensive and tremendously hard to get back then.

I used those resources to do two things: first, to run a series of Internet radio stations (yes, “Radio on Internet”) called the Glowdot network, and second, to build a photo sharing site called Glowfoto.

Glowfoto eventually took over my life, and I developed it into a full-scale social network. I was lucky to ride the MySpace wave early on, and Glowfoto became an early companion site to MySpace. Glowfoto allowed MySpace users to upload more than 10 photos to their profile, back when MySpace had a 10 photo limit (if you can believe that).

Eventually MySpace went away, and about 5 years later I finally retired Glowfoto as well. But during that time I started getting many, many requests for me to build similar services for other companies. And thus my career as a contract software developer began.

(more coming soon!)

by stromdotcom at 2020-05-19 17:44

2020-05-01

I usually like to keep my posts more positive-focused — here’s what you should do vs. here’s what you shouldn’t do. But this week alone I had three potential clients relay to me a very common experience:

I thought I finally found a good developer, but then they suddenly just disappeared!

I’ll admit it’s hard to explain to entrepreneurs sometimes why that developer ghosted them — usually the explanation feels personal, or accusatory. But in my experience it’s just a matter of communication! Often it’s something that comes up in the initial conversations that seems innocuous to the person looking for a developer, but to that developer, it’s a huge red flag.

In this post, I’ll try to give a few of the reasons that developer may have disappeared into thin air.

I think possibly the best way to explain what’s going on here is to break down a few common statements, and look at what the entrepreneur probably meant to express, and how the candidate developer interpreted it.

However, it’s important to note that sometimes what was said was exactly what was intended, and what was heard was exactly what was said! In those cases, I think typically the problem is that the entrepreneur just doesn’t quite understand the market they are entering. You’ll see what I mean when we get to one of those cases.

I also think it’s important to state a few very important facts right out of the gate.

Most important among these is that contract software developers are still hugely in demand, and in very short supply. This may not seem evident when you put out an RFP and get snowed with replies. It’s important to keep in mind that the vast, vast majority of those replies are from incompetent or just junior developers, non-technical middleman project managers who are going to outsource your project, or direct offshore dev shops staffed by ludicrously unqualified programmers.

When you strip away the worthless proposals and just look at the qualified, experienced developers who are actually capable of building your app, you quickly realize there is a tiny handful of qualified, experienced freelancers out there compared to the massive number of RFPs put up daily.

It’s important, then, to understand that when you do finally zero in on that great developer you should convey an understanding and respect for his or her talent, time and attention. Far too often, good developers get spoken down to by potential clients as if they are in that same pool of dreck that I mentioned above. And in fairness, from the client’s perspective, that might as well be true for the initial portion of the conversation. But it’s still not an encouraging attitude to take with someone you are about to entrust with the technology powering your business!

Another reason for the apparent discrepancy is that to date there still hasn’t been anyone to come along and crack the problem of pairing quality projects with quality developers. Some companies like Upwork have taken great strides in improving the way they match candidates to jobs, but it is still pretty far from perfect, and I’ve found that far too many decent jobs are getting 50+ responses from horrifying developers. That’s not to mention the insane number of very low quality jobs from completely unfunded clients that get posted daily.

Why is this important? Because some potential clients are initially confused why a developer would aggressively pursue them, only to vanish into thin air. After all, if you needed the work badly enough to contact me, the thinking may go, then you must need me more than I need you.

However, that’s seldom true. Instead, because that matching has not been solved, developers often need to cast a wide net to find quality clients, just as clients need to cast a wide net to find a quality developer. So in the end, both parties have been sifting through enormous piles of potential candidates and clients to find the one good one in the bunch.

And for that reason, incidentally, I plan to do a follow up post to this one about why that client ghosted the developer! Believe me, this situation goes both ways.

So anyway, there is still a lot of work to do. But the most important takeaway here, and something I must absolutely stress in the extreme is: there are very few good developers out there, and very many jobs. Once a developer connects with a potential client, it becomes an immediate game of filtering the good from the bad, and trying to determine which employers will be a pleasure to work with, and which will be a nightmare.

Remember: it’s not just you interviewing a developer. They are also interviewing you!

Ok let’s run through a few of the things you might say that will scare that developer away.

This project is easy, it should only take you a couple hours

What you probably meant to say: I’m not trying to build a nuclear submarine here.

Oof. Gotta get the most common and possibly the worst one out of the way first.

Here’s the deal. If you aren’t a developer, then you most likely don’t really know how long anything takes. That’s one of your developer’s jobs, in fact — to help you estimate time and cost.

But the big problem with a client who doesn’t know how long things take but thinks they do is that those clients will never be happy. Everything will feel like it is taking too long and costing too much money if they think it’s “easy” and “should only take a couple hours” but is in fact hard and takes a long time. And the developer can expect a constant stream of frustrated emails to that effect.

Secondly: almost nothing takes “a couple hours”. Maybe a quick little bug fix — assuming you know where and what the bug is. But otherwise, no.

So this is a bad assumption to make right out of the gate. And although you probably mean this to say “hey, I’m not trying to make your life tough with this!” what the developer hears is “I’m going to make your life a living hell”.

Seriously.

I only need a developer for a few hours to work on something small.

What you probably meant to say: I only need a developer for a few hours to work on something small.

Here is one of those examples of the client saying exactly what they mean. Absolutely nothing wrong with that! And yes, this looks a lot like the last one. It is! Just without the accidental insults.

But the fact is, all you need is a couple hours of that developer’s time! However, there are a few considerations to keep in mind:

First, that gig that takes just a few hours also requires a few hours to set up and tear down. In development, there is a time cost to jumping into something new that is often hard to explain to a client.

It can take an hour or two just to discuss the requirements of the job. It can take a couple hours to download the current project, open it up, poke around, get everything set up in the dev environment, etc. Then, finally, there is a cost to getting out of the project — delivery, invoicing, debrief meeting, etc.

Are you ok with being billed for those hours? A 2 hour quick in-and-out project could actually take 4-5 hours given the above. Some developers I know who would otherwise take on small quick gigs like this actually end up passing because they find it really hard to explain those extra hours to clients, and it ends up not being worth it.

Second, most freelancers are looking for something long term, not very short term. They’re looking for the weeks-long or months-long project that will sustain them for a while. It would be incredibly brutal to put in all the legwork involved in pairing up with a client a hundred times a month just to gather 80 or so 2-hour gigs and earn a sustainable living. In fact, it’s not even humanly possible — this is a subject for another post, but you might be very surprised to learn the actual cost in time and money involved in securing a client. Suffice to say it is much too high to allow for too many extremely short term clients in the “couple hours of work” range.

And so, unfortunately, these projects often get dropped — even though they very well may turn into more work in the future!

I don’t have any good suggestions for this one. Unfortunately this just is what it is. I can say that eventually you will find someone willing to help you out with that short term job. It just might take longer than you expect!

I need someone to come in, take over for the last developer, and do some bug fixes.

What you probably meant to say: Help!

Ok this one is interesting. From your perspective, “taking over” a project probably seems like a pretty normal thing. However, from a developer’s perspective, a whole lot of alarm signals go up right away.

Here are the first questions that immediately pop into my head:

What happened to the last developer?

Did they quit? Why? Did they get frustrated? Did the project go to hell? Did the client start making unreasonable demands or develop unrealistic expectations? Did the client start demanding a lot of free work?

It is important for the developer to have an understanding of what working with a new client might be like, and you are providing a piece of potentially valuable information here: someone was working for you, and now they aren’t. Something, surely, went wrong.

To mitigate this, be upfront about what happened! Get ahead of this question, and let the prospective developer know what happened to the last one.

How bad is the situation?

No one wants to walk into a project that already has problems. In the best case, software doesn’t have bugs. It isn’t riddled with problems and unknown issues that need to be tracked down.

Now, in the real world, of course there are bugs to be squashed, almost always. But in my experience, the kind of project that starts with “I want to hire you to fix some bugs” is utterly rife with bugs and faults. And walking into that scenario is simply not appealing to any developer.

Mitigate this question by being transparent about the situation. Is the app functional, but there are just some small usability issues? Or is the app crashing constantly and losing data like crazy?

What sort of headspace is this client in?

Are you completely frustrated? Are you now convinced all freelance developers are as incompetent as the offshore developers you just hired for $20 an hour?

If you just had your money lit on fire by a worthless team of offshore developers, and you are now looking for a local developer to fix the situation, it’s important that you understand the vast difference between the two.

A good local developer is college educated, possibly at the graduate level, and has real experience, has shipped highly rated apps and earns a real living doing this.

The offshore developers you just worked with barely have a high school education in some instances, and took a 3 week coding crash course before being shoved in a room and told to write code for projects they barely understand. I’m not kidding about this.

So make sure you don’t treat the developer who is potentially going to save your product the same way you want to treat the people who just burned it to the ground.

I have had potential clients straight up tell me they don’t value developers at all and see us all as incompetent wastes of space. And while I really, truly do understand their frustration, having lost it all by going with the hilariously bad cheap option — do you think I had any interest in taking on that job?

Yikes.

No thanks! I’m here to help, not to be insulted. I have an MS in Computer Science and left the Ph.D. program at UCLA to do the work I do now. I’ve been doing this for over 20 years. I am not on the same planet as the $20/hour developer you just hired with 3 weeks of bootcamp training.

Please keep this in mind when you are interviewing prospective onshore developers!

I need a HIPAA-compliant app for the public education sector funded by the federal government leveraging military-grade encryption for export to 66 countries

What you probably meant to say: this is going to be tough, so I’m willing to pay top dollar.

This one comes up a lot. There are certain types of software development that are just hard work out of the gate. Anything that requires compliance with government regulations, opens up clients or companies to liability of any kind, apps dealing with copyrighted material, etc.

Some of the absolutely best ideas I’ve heard over the years involve the healthcare industry or the public education sector. However, in my experience these projects are brutal and come with huge obstacles right out of the gate!

Sometimes the client just doesn’t know what they are getting themselves into. That happens. But more often than not, they do.

The bigger issue here is that there are other jobs out there that simply don’t come with the headaches of compliance and liability. In order to make these jobs appealing, they just have to come at a higher price, unfortunately.

Sadly, I have known some good clients in the past that did not bring their budgets up to the level required for quality work on a project like this, and eventually went offshore for low-quality, low-cost work. In literally every single case, they either ended up with no app at all, or an app that utterly and completely failed to comply with the regulations.

Let me just stress again: these apps are tough. Not only are they stressful for the developer, they require a lot of experience, and that experience, unfortunately, is expensive.

I’m looking for a developer with 8-10 years of experience building apps for the vegan hula hoop industry

What you probably meant to say: my industry is important to me, and I want it to be important to you too.

This one is more humorous to me than anything. But I know a few developers that use this to screen out potential clients.

Here’s the thing: you may be an expert in the sneaker flipping industry, but I’m an expert at software development. You are massively unlikely to find a good software developer who has focused on your particular industry. It just doesn’t work like that.

We jump from project to project by the very nature of what we do. Sometimes we make social apps, sometimes B2B apps, sometimes fintech, sometimes e-commerce. I literally don’t know a single freelance software developer who has built a career focusing on only one industry.

Oh and that sneaker example? That’s one I just saw recently. They wanted a developer who was a sneaker fanatic. And you know what? They may find one, because at least that exists and there may be some overlap. But generally speaking, you really shouldn’t limit the pool of candidates you are interviewing by insisting that they have particular expertise in your field.

And furthermore, it almost never actually matters! Unless you are building a complex AI driven system, in which case experience with AI would be mandatory, or if you are building an image processing app, in which case image manipulation experience would be important.

But the key is: unless you are specifically seeking someone with experience in the particular technology you are developing, then the experience really is unlikely to matter much.

If you are building an e-commerce platform for buying and selling medical equipment, then experience building storefronts, taking payments, managing invoices, etc would be a huge plus. Experience in the medical industry would not.

Please put (some random text) in the subject line. Don’t bother responding unless you are (some random quality)

What you probably meant to say: last time I posted this I got a bunch of weird responses from Indian developers and wasted a whole bunch of time.

I am very sympathetic about this one. However, it is generally not a good idea to treat potential hires like peons just because you had to deal with a bunch of jokers the last time you tried this.

I completely get it though — you will get flooded with responses from low quality developers. However, it is a very, very bad idea to let the one good developer think you have been soured and now consider all developers to be annoying garbage.

I think this one sort of speaks for itself. If I’m deciding whether or not I want to take this project, the last thing I want to risk is working with someone who has a chip on their shoulder!

I want to add a dozen AR filters like Instagram and Facebook have to my app, and I need it done at a fixed price in the next couple weeks!

What you probably meant to say: is this feasible?

The short take on this is simple: the apps you are trying to replicate didn’t start out as complex as you see them now. And unless you want to spend the time and money they did to get there, you should probably rethink this plan.

Instagram, for example, didn’t have video and stories and IGTV, or even gallery posts. It was a dead simple feed of square images bundled with a couple very simple filters.

I replicated the core functionality for two apps I built for Warner Bros. to promote two films several years ago (Tim Burton’s Dark Shadows, and Christopher Nolan’s The Dark Knight Rises). And I built the first of those apps in 3 weeks on iOS — and this was before Apple released the CoreImage library.

How? Well because Instagram was pretty simple back then! It took me a couple hours to reverse engineer how the filters were assembled, build a little framework for configuring these filters and applying them to an image, and voila!

However, if you asked me to replicate Instagram now, I’d probably run for the hills. Instagram is complex now! But they got there after many years, many millions of dollars, and an acquisition by Facebook.

See, it’s not just the time requirement. Some of these features are just not feasible when you are still trying to break your development into bite-sized fixed price chunks and having a freelancer build them piecemeal. At a certain point, you need to actually start to structure your company like a company — and that means raising capital to build these functions, hiring full-time engineers instead of bouncing from freelancer to freelancer, and organizing and coalescing your team — engineers, project managers, designers, et al. — under one roof, so that they can effectively work as a unit to build the project according to the bigger vision and roadmap you have.

This is a much longer post as well, but to keep it brief: it’s just not realistic to work the way you did at the beginning all the way through this complexity, as your requirements become more and more complex. The first version of Instagram can be built by a freelancer. The modern day version of Instagram requires Facebook.

Now, why is this a problem for a developer? Because by simply asking for this feature in this timeline, they can tell right away that you have unrealistic expectations — not only in terms of the time required to develop these features, but the company structure and funding required to do so. There is a good chance that you don’t have a real solid grasp of what exactly you are asking for, and just how big a task it is, and that can make your project tough to work on!

A lot of developers have come to learn that clients with unrealistic expectations often can’t be talked out of those expectations. And, so, they typically just move on.

I need a scalable backend, an iOS app, an Android app, and a custom algorithm, and I need them all at very high quality in 6 months. And my budget is $3000.

What you probably meant to say: I have no money

It’s ok to take a shot at bringing your vision to life, even if you don’t have enough money to do it exactly the way you’d like. It really is! I am in this business because I love working with visionary people with big, outrageous dreams. The tech industry is a thrilling, vibrant and lucrative place to work because those people exist.

What’s a little far out though is the client who is not doing the math at all.

I had a meeting this morning that went exactly like this. The client needed an iOS app, Android app, web app, backend REST API and custom web-based admin panel. The client understood it would take anywhere from 3-6 months to do this in the best case, and even seemed to understand I would need to bring in help to do it in that timeline. Then they told me with a straight face that the budget was $3000 firm.

So I did the math. Two developers working full time to get this done in an extremely optimistic 3 months works out to about $3 an hour each: $3000 split two ways over roughly 13 weeks of 40-hour weeks comes to about $2.90 an hour. Considering the 3 months was, as I said, very optimistic in this case, I would project closer to $1-2 an hour.

And they were specifically looking for a developer based in Southern California. Well, I’m in Southern California, and I can tell you if I made $3000 a month I’d be living in a dumpster next week. I don’t even see how I could make it past week three if I was earning $1000 or less a month. It’s expensive here. And even if it wasn’t, that is a ludicrously small amount of money for anyone, anywhere.

But not only that, you are hiring highly skilled, highly trained, in-demand people to do this work. Or, at least you should be if you care at all about what you’re building! But you are expecting them to leap at a job that pays a tiny fraction of the local minimum wage.

I don’t even know what else to say.

Looking for a technical co-founder!

What you probably meant to say: I have no money

Let’s finish with the big deal breaker. This, too, is a post all on its own, but it’s a big one and it comes up all the time.

There are a few other common and related statements I can throw in with this one:

  • I want someone who is in this for the long haul
  • I’m looking for someone who really believes in our idea, and isn’t just looking for a paycheck
  • This is the next billion dollar idea!

Really quickly, let me break down a few things.

First, this is our job. We aren’t a charity giving up our time to help you strike it big. We have bills to pay and loved ones to take care of, and reality just doesn’t permit us to take on projects pro bono. It just doesn’t, and I don’t know anyone who knows what they are doing who ever jumps on board on an equity basis at the early stage.

Again, there are a lot of reasons for this, and hopefully I’ll get to post about them one day. But for now, the simple version is: you just aren’t there yet.

The time to ask a developer to come on for equity is after you have a product, and you have some traction in the market. NOT when all you have is an idea, or some awful prototype you had built overseas that barely works.

While I’m absolutely sure that you are super excited about your idea, you’re asking a developer to be just as excited as you are, when in reality all you have is an idea. And to someone who listens to ideas all day long, an idea alone just isn’t that exciting. What is exciting is when that idea starts to work. When users start to react positively to the released product. When the non-technical team starts to put together those critical deals and build those incredibly important relationships that take that humble idea and turn it into a viable company. That’s when you should ask someone to come on for equity, not before!

And typically, equity offers come along with pay, not in lieu of pay. In fact, it’s your primary job as an entrepreneur at the earliest stages to raise money for your company! I know it’s hard — believe me, I’ve been through it many times. But it’s a crucial part of launching a company, and it’s critical to building a competent team that will be able to build the technical foundation for your company.

But until I write that big post explaining this in more depth, let’s just keep it short and simple: you get what you pay for.

by stromdotcom at 2020-05-01 21:13

2020-03-23

One of the advantages (if you can call it that) of being in this industry as long as I have is that I’ve been through multiple economic disasters, technological paradigm shifts, and — it’s not all bad! — economic and technological boom periods.

So I figured I might as well throw out a few predictions as to how I feel technology, and more specifically the app world, is going to change after this is over. What kinds of apps will consumers want, what kinds of apps will entrepreneurs build?

Keep in mind this is just my opinion based on my own experience, and should not be taken as anything but that. That being said, let’s start with the big question: what sorts of apps are going to be big after the coronavirus scare of 2020?

Remote working apps

One of the initial download spikes I saw after people here in California were asked to stay home was Zoom. Not just Zoom though — Skype, Discord, and other communication apps suddenly became much more useful as we all retreated home.

What’s interesting to me in times like this is how people start bending existing technologies to their needs. For example, you may have an app that was used primarily for virtual face-to-face conferencing for businesses now being used to host virtual get togethers for people under lockdown.

That’s an interesting opportunity for entrepreneurs to start to see gaps in the technological landscape. The market will tell you really quickly that there is a need for a specific app tailored for that particular use case. It’s possible that platforms like Zoom will step up and fill the gap themselves. But sometimes those platforms can’t or simply won’t, and that’s a huge opportunity to get in and fill an existing need.

One of the biggest struggles I see with many app startups is that they created something very cool, very interesting, but not very in demand. That means when they launch, they not only need to find users and let them know they exist, they also have to explain to them why they should care. It’s always an easier road when the market is already waiting with bated breath for your product.

Social networks

I know the social networking landscape is already pretty crowded, but I predict we’ll see a few more innovative apps in this space in the next year or two.

I have been watching a few giant gaps of my own for the last few years (and hopefully one day I’ll have time to set about filling them!) but more are sure to pop up as the battle against Coronavirus goes on.

Many of us, for example, are cut off from our parents right now. I am, and I could be for months. Are our existing social networks adequate for facilitating communication with our older family members? Are they easy to use, do they offer the tools we need to help our parents and grandparents out in a time of crisis?

One immediate issue that came to my attention when all of this happened was that my parents need someone to bring supplies to them — if it is unsafe for them to venture outside, they will need someone to make sure they have food and toilet paper and soap and every other necessity. And for the very old among us there is the added concern of whether or not they are able to stay aware of their own needs.

At home entertainment

I don’t personally feel like gaming or streaming entertainment is ripe for any newcomers as a result of Coronavirus. After all just about every type of media is available to stream these days — even comics! But I do think their importance in our lives is going to become much more evident now.

For a while now, I’ve been thinking about ways that entrepreneurs can help consumers manage the growing number of subscriptions required these days — either by way of bundles, or technologies to manage payment options, sharing, etc.

Information broadcasting

And finally, the big one: we need better technology to get information to the community.

And we don’t just need better channels — we need better noise reduction. The amount of utter nonsense I’ve seen on Twitter, Facebook and Instagram this week is depressing. I have had several friends and family members ask me if I had heard some tidbit working its way through the grapevine which was either comically false or, worse, potentially dangerous.

We need better ways to spread valid information than our current UGC platforms, which we have seen over the past few years are highly susceptible to misinformation and confusion.

I would personally love to see someone finally come in and solve not only the UGC news and information dissemination problem, but also make sure we have adequate tools in our pockets to receive and process updates from agencies like the CDC or WHO to keep us informed in the event of health crises or natural disasters.

That’s all for now. I’m sure over the next couple weeks or possibly months we’ll see more gaps in the market as a result of this.

One thing I would like to remind everyone is that the iPhone boom happened essentially right after the financial crisis of 2007-2008. And we’ve just come out of a ridiculously long period of growth since then. So stay positive, stay creative, and most importantly stay safe!

by stromdotcom at 2020-03-23 21:00

2020-02-22

I’ve had an increasing number of conversations over the past few weeks about both React Native and Firebase — mostly with non-technical founders who have been advised by those they trust that, before they embark on their app development, they should choose one (or both) of these technologies at the core of their stack.

I want to very briefly give my take on both of these, as a developer who has used both, and advised both for and against both.

First, let me begin by saying that my opinions here are based on 20 years of experience in software development, but more specifically, on my 12 years of experience as a contract/freelance mobile developer, and not ONLY on these specific technologies. I bring a somewhat rare perspective to the table in that I have been developing for these platforms for so long that I’ve seen multiple cross-platform frameworks, backend frameworks and BaaS platforms come and go. I still bear the scars from a few of them! That also means I bring a bit of bias into the discussion that may or may not be fair to the technologies mentioned here.

My opinions are really based more on the types of technologies we can safely use and trust, why we use them, when we use them, what the compromises we have to make are, what the upsides are, and so on, rather than any specific, personal opinion about the specific tools themselves.

So let’s take these one by one:

Firebase

Let’s start with the backend. And let’s start by saying Firebase is perfectly fine as a backend platform for your app. I have used it in the past, and I’ve even used it in conjunction with Parse (more on that later). It’s a neat system, works great, and it’s backed by Google.

However, I tend to recommend either open source Parse or a custom built Node.js/Express/MongoDB based backend if at all possible.

The biggest reason is a lesson I learned by going all in on Parse in the early days of that platform. This is going to sound strange, as I currently enthusiastically recommend Parse, but Parse caused me one of the biggest nightmares of my career at one point!

Parse used to be owned and managed by Facebook. Like Firebase, it was backed by a big company, came with a generous free tier, and worked amazingly well. I built about a dozen apps on top of Parse before tragedy struck: Facebook abandoned Parse suddenly.

In short, Facebook decided they didn’t want to host Parse apps anymore or pay for its continued development, so they open sourced it. Parse developers were given plenty of time to migrate. But the migration meant that a lot of what hosted Parse used to handle was suddenly managed by me! That not only meant a lot of extra work, but a lot of extra knowledge, as I had to quickly get up to expert level on several AWS technologies and MongoDB in order to take control of my data.

When it was all over, Parse became better than before. And the biggest upside is that we now fully, 100% control our data, and the SDKs and backend framework that run our apps are 100% open source and modifiable by us.

It wasn’t until it all hit the fan that I realized how important it was for me to be in total control of the platform, the data, and essentially the destiny of the app.

So, should you not use Firebase? Not necessarily. As I said, it’s a fine system, it does what it does really well, and now that the Firebase brand has been expanded to include much of what used to be Fabric, I still use several Firebase offerings in everything I build today — even the Parse based projects. There is a lot to take from Firebase — I just don’t recommend it as a database at the current moment for most projects.

React Native

Now on to the big one that comes up all the time. Should you build your app with React Native?

First, let me give a bit of history.

In the early days of iOS development, things were relatively insane compared to today. Apple launched an SDK which got just about everyone excited to start developing for the platform, but they insisted that we all use Xcode, build our apps in Objective-C, and learn the Cocoa API for building our UI.

These were strange decisions at the time, because up until the release of the iOS SDK, there were very few Apple developers compared to Windows developers, web developers or even linux/unix developers. The Cocoa API was a little strange too, and a lot of it felt unintuitive to developers who may have come from a Windows UI background.

But most interesting was the insistence on developing in Objective-C — a relatively obscure language that was a carryover from a bygone era. Don’t get me wrong — Objective-C is a great language with a lot to recommend it. It’s just that no one I knew had ever built anything with it. C++ was the dominant object-oriented C-based language, and there were a massive number of C and C++ programmers out in the wild. So iOS came with a significant learning curve.

Also in the beginning, there was no automatic memory management or garbage collection in iOS development. If you had an iPhone back in the late 2000s you may remember apps crashing constantly — and this is why.

This was at a time when almost every modern language came with some sort of memory management and memory cleanup system. C and C++ were still the wild west, but C#, Java and others were much easier to deal with because you didn’t have to keep track of every bit of memory you allocated or else suffer horrendous crashes and late nights squashing bugs.

Android, in comparison to iOS, had a Java based SDK. There were a lot more Java developers in the world, and Java development came with a lot of luxuries like better memory management.

But these problems aside, you typically needed to find two developers to build your mobile apps, AND a web developer to handle your backend!

This was not only potentially 3 times as expensive, it was 3 times as hard to find the right talent! I can tell you from experience it’s hard to find a good developer — finding three is damn near impossible! So naturally when people started putting these new cross-platform frameworks together, they made some logical choices:

  • Choose a language and coding style that includes the largest number of available developers — then, as now, that means JavaScript — there are just many, many more JavaScript developers out there than Java or Kotlin developers, and definitely more than Objective-C or Swift developers
  • Abstract the underlying SDK by building a new SDK on top of it, that can be compiled down to native code
  • Maximize the amount of common code across every platform by focusing on the commonalities between the platforms, and build the framework around those commonalities.

Those last two bits are where these cross platform technologies lose me.

First, I have been involved in far too many projects to feel confident being abstracted away to any degree from the underlying hardware and operating system. Just about every project that continues to evolve past a certain point will start to require custom UI and much more creative and aggressive use of the underlying hardware and OS. In my experience, the more a product comes into its own, the more it becomes tightly bound to the underlying system. As the platform becomes more mature and new features are added, so too will you likely want to take advantage of those features — even as your product is maturing and adding features specific to your app.

I can regale you with examples of borderline insane ways I've had to bend the iOS SDK to my will in order to bring a new feature online. But suffice to say, in very many of those cases I was only able to do so because I was working so close to the metal. If a JavaScript framework was in my way, I would have simply had to tell the client sorry, but we can't do this without starting over and building native. And you never, ever want to start over if you can avoid it!

Second, by focusing on the commonalities between the platforms, you risk building bland technology. I can confidently say that over the years, I've learned to appreciate exactly how different iOS and Android actually are when you get down to things that really matter to the user. The difference in UI/UX and user interaction in general is, or at least can be, far too great for me to feel comfortable reusing too much code on both platforms.

Granted, React Native really isn't a “build once deploy everywhere” framework — especially when it comes to UI. But I stand by my point. These are different platforms, with different expectations from users. And if you really care about your product — if the apps are really the core of your business — then I don't think it's a good idea to take shortcuts.

So, should you never use React Native? Not at all! I think React Native and other cross-platform technologies can be great for certain projects. I tend to be OK with going this route for intra-organizational apps, utility apps, prototypes intended to be rebuilt after funding, and the like.

But for public facing apps intended for use by the general public, I still strongly recommend building native and taking advantage of everything the native SDKs offer.

Another caveat is that Apple has from time to time unilaterally decided to drop support or flat out deny apps that use some technology they don't like. So keep in mind that if you aren't doing things the way Apple officially wants you to, it's possible they will end your run if you build your technology on a platform that suddenly rubs them wrong. It has happened before, and I don't currently see any reason it couldn't happen again. In fact, there seems to be something happening that may threaten the future of cross-platform frameworks, at least on iOS — but I can't confirm it as of yet. That will have to be another post on another day.

As always, I'm happy to discuss these points in more detail, so contact me if you have any questions!

by stromdotcom at 2020-02-22 21:51

2020-02-21

Big things start with a simple idea. But what I want to talk about today is how that simple idea grows bigger successfully. Specifically, I want to look at the very first step I believe all entrepreneurs should take before setting out to turn their grand vision into reality!

The vast majority of projects I have worked on in the past have come to me in one of three states:

  • The founder has an extremely simple, short and concise idea, and they are ready to build on that idea with the help of a technical consultant
  • The founder has an idea, and a big roadmap of features they eventually want to build — and they’ve been turning this idea over in their head for months or even years, refining and adding and removing features along the way
  • The founder has already started building, and along the way things got complicated, convoluted, confused or chaotic in some way and they need help getting it back on track.

My dream client is in that first state, of course. But more often than not, the founder is in state two, and has a big list of features they want at launch. But in order to avoid walking blindly into state three, we need to take a step back and ask a couple of questions first…

The first two questions to ask!

All of your future decisions will be based to varying degrees on two foundational questions you should know from day one:

  1. What is the core data object I am working with?
  2. What is the core user experience loop I am trying to create?

Your core data object is the root of everything in your app. It's the basis of what you are offering, and every other object in your database should flow in some way from this root object.

An app like Instagram, for example, has likes, follows, comments, hashtags, location tags, and so on. But at the root of it all is a photo. Eventually, of course, that expanded to include videos and stories and more, but in the very early stages, everything in the app was a branch off a branch off a branch leading back to a photo.

Your core user experience loop is the most basic, foundational way a user interacts with your app. In almost every case you can simplify this down to ONE user experience (the user does this, then this, then that). In the case of social or sharing apps, you will typically have two (the first user does this and this and this, and the second user does this and this and that).

This may seem overly simplistic and tedious, but I promise you it is an important step. You should always keep these concepts in mind as you start to plan out the full feature set, and you should especially keep them in mind as you actually start building!

Far too often I see apps that are in a terrible state, where the core experience is completely broken, but there is a full suite of functionality that has nothing to do with the core experience! For example, imagine a chat app where the chat doesn’t work, or a few messages don’t come through, or the wrong people are receiving messages for some terrible reason — but the app has a full video conferencing mode, or a drawing mode or something else completely separate from the primary function of the app, which is to allow users to chat with each other.

I truly see this all the time, and it's astonishingly easy to avoid. Just make sure you know your foundationals!

Let’s look at a few real world examples of what I mean. In every case, the core data object should be the first object you map out in your database, and the core user experience loop should be the very first thing you develop — and develop to absolute perfection. You should not even begin to add new features until that experience loop is closed, works, feels natural, makes sense, and is complete.

Instagram

  • Core data object: a photo
  • Core user experience loop:
    • Creator:
      1. Select photo from library or take photo with camera
      2. Upload to server
    • Consumer:
      1. View photos in feed

Note: see how bare bones that is? There is nothing about comments, likes, following users… none of that matters if you can’t get the absolute bare minimum working!

Yelp

  • Core data object: a review
  • Core user experience loop:
    • Reviewer:
      1. Search for a location
      2. Write a review and save to database
    • Reader:
      1. Search for a location
      2. Read reviews for the location

Note: again, extremely simple. A prototype of Yelp would not include much more than this! A prototype is a proof of concept: you are proving that a) you can build an app that allows users to write and read location reviews, and b) writing and reading location reviews is something people would want to do.

Uber

  • Core data object: a ride
  • Core user experience loop:
    • Driver:
      1. View available fares in real time on a live map
      2. Select and accept a fare
      3. Notify server when ride starts
      4. Notify server when ride ends
    • Passenger:
      1. Request pickup by notifying server of location

Note: as always, very simple. One thing to note here is that I have mentioned nothing about payment. In the prototype phase, we don't care about payment yet! What is the point of building a payment system if we can't even get the core experience right? This is a perfect example of starting extremely simple, and eventually iterating up to the bigger vision.

Another thing to note here is that the core data object is a ride. What the heck is a ride? This is why we first need to figure out what our core data object is, because we next need to figure out how to represent it!

This is not only important for the obvious reason that our data is foundational to our entire operation, but also because we need to build up our data model in a way that is, to the maximum extent possible, future proof.
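
To make this concrete, here is a deliberately bare-bones sketch, in C# with entirely made-up names, of what a first cut at a ride object might look like; it holds just enough to close the core experience loop described above:

using System;

// A hypothetical first pass at a core "ride" object. Payment, ratings,
// ride types and so on can all branch off this root object later.
public class Ride
{
    public string Id { get; set; }
    public string PassengerId { get; set; }     // who requested the pickup
    public string DriverId { get; set; }        // set when a driver accepts the fare
    public double PickupLatitude { get; set; }  // the passenger's reported location
    public double PickupLongitude { get; set; }
    public DateTime? StartedAt { get; set; }    // driver notifies the server
    public DateTime? EndedAt { get; set; }      // driver notifies the server
}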

In all of these examples, the simple idea grew into something bigger. Instagram added videos, galleries, stories, and more. Yelp offers check-ins and reservations and more. Uber offers multiple types of rides, scooter rentals, bike rentals, and more.

But at their core was a very simple, very concise user experience rooted in a single, primary object. Start there, then build up your roadmap iteratively!

by stromdotcom at 2020-02-21 20:37

2019-11-23

Most of the images in Glider PRO's resources are in PICT format.

The PICT format is basically a bunch of serialized QuickDraw opcodes and can contain a combination of both image and vector data.

The first goal is to get all of the known resources to parse.  The good news is that none of the resources in the Glider PRO application resources or any of the houses contain vector data, so it's 100% bitmaps.  The bad news is that the bitmaps have quite a bit of variation in their internal structure, and sometimes they don't match the display format.

Several images contain multiple images spliced together within the image data, and at least one image is 16-bit color even though the rest of the images are indexed color.  One is 4-bit indexed color instead of 8-bit.  Many of them are 1-bit, and the bit scheme for 1-bit images is also inverted compared to the usual expectations (i.e. 1 is black, 0 is white).

Adding to these complications, while it looks like all of the images are using the standard system palette, there's no guarantee that they will - it's actually even possible to make a PICT image that combines multiple images with different color palettes, because the palette is defined per picture op, not per image file.

There's also a fun quirk where the PICT image frame doesn't necessarily have 0,0 as the top-left corner.

I think the best solution to this will simply be to change the display type to 32-bit and unpack PICT images to a single raster bitmap on load.  The game appears to use QuickDraw abstractions for all of its draw operations, so while it presumes that the color depth should be 8-bit, I don't think there's anything that will prevent GlidePort from using 32-bit instead.
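
For the 1-bit images, the unpacking is simple enough to sketch. This is illustrative C# rather than anything from the actual project, and it assumes MSB-first bit packing:

// Expand one row of 1-bit image data to 32-bit ARGB pixels.
// Note the inverted sense described above: 1 = black, 0 = white.
static uint[] Expand1BitRow(byte[] rowBytes, int width)
{
    var pixels = new uint[width];
    for (int x = 0; x < width; x++)
    {
        int bit = (rowBytes[x / 8] >> (7 - (x % 8))) & 1;  // MSB-first packing
        pixels[x] = bit == 1 ? 0xFF000000u : 0xFFFFFFFFu;  // opaque black : opaque white
    }
    return pixels;
}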

In the meantime, I've been able to convert all of the resources in the open source release to PNG format as a test, so it should be possible to now adapt that to a runtime PICT loader.

by OneEightHundred (noreply@blogger.com) at 2019-11-23 20:43

2019-10-10

Recently found out that Classic Mac game Glider PRO's source code was released, so I'm starting a project called GlidePort to bring it to Windows, ideally in as faithful a reproduction as possible and using the original data files.  Some additions like gamepad support may come at a later time if this stays on track.

While this is a chance to restore one of the few iconic Mac-specific games of the era, it's also a chance to explore a lot of the technology of that era, so I'll be doing some dev diaries about the process.

Porting Glider has a number of technical challenges: It's very much coded for the Mac platform, which has a lot of peculiarities compared to POSIX and Windows.  The preferred language for Mac OS was originally Pascal, so the C standard library is often mostly or entirely unused, and the Macintosh Toolbox (the operating system API) has differences like preferring length-prefixed strings instead of C-style null-terminated strings.

Data is in big endian format, as it was originally made for Motorola 68k and PowerPC CPUs.  Data files are split into two "forks," one as a flat data stream and the other as a resource database that the toolbox provides parsing facilities for.  In Mac development, parsing individual data elements was generally the preferred style vs. reading in whole structures, which leads to data formats often having variable-length strings and no padding for character buffer space or alignment.
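
As a flavour of what that kind of parsing involves, here is a small C# sketch (illustrative only, not code from the port) of reading a big-endian 32-bit integer and a length-prefixed Pascal string:

using System.IO;
using System.Text;

static class MacReader
{
    // Mac data is big-endian, so on a little-endian host the bytes
    // have to be reassembled manually (or byte-swapped).
    public static uint ReadUInt32BE(BinaryReader reader)
    {
        var b = reader.ReadBytes(4);
        return (uint)((b[0] << 24) | (b[1] << 16) | (b[2] << 8) | b[3]);
    }

    // A Pascal string is a single length byte followed by that many
    // characters, with no null terminator.
    public static string ReadPascalString(BinaryReader reader)
    {
        int length = reader.ReadByte();
        return Encoding.ASCII.GetString(reader.ReadBytes(length));
    }
}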

Rendering is done using QuickDraw, the system-provided multimedia infrastructure.  Most images use the system-native PICT format, a vector format that is basically a list of QuickDraw commands.

At minimum, this'll require parsing a lot of Mac native resource formats, some Mac interchange formats (i.e. BinHex 4), reimplementation of a subset of QuickDraw and QuickTime, substitution of copyrighted fonts, and switch-out of numerous Mac-specific compiler extensions like dword literals and Pascal string escapes.

The plan for now is to implement the original UI in Qt, but I might rebuild the UI instead if that turns out to be impractical.

by OneEightHundred (noreply@blogger.com) at 2019-10-10 02:03

2019-09-06

When adding ETC support to Convection Texture Tools, I decided to try adapting the cluster fit algorithm used for desktop formats to ETC.

Cluster fit works by sorting the pixels into an order based on a color axis, and then repeatedly evaluating each possible combination of counts of the number of pixels assigned to each index.  It does so by taking the pixels and applying a least-squares fit to produce the endpoint line.

For ETC, this is simplified in a few ways: The axis is always 1,1,1, so the step of picking a good axis is unnecessary.  There is only one base color and the offsets are determined by the table index, so the clustering step would only solve the base color.

Assuming that you know what the offsets for each pixel are, the least squares fit amounts to simply subtracting the offset from each of the input pixels and averaging the result.
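
Expressed as code, for a single channel, the idea is something like this C# sketch (CVTT itself is vectorized native code, so this is purely illustrative):

// With known per-pixel offsets, the least-squares fit for the base
// color collapses to averaging (pixel - offset) over the block.
static int SolveBaseColorChannel(int[] pixelValues, int[] offsets)
{
    int total = 0;
    for (int i = 0; i < pixelValues.Length; i++)
        total += pixelValues[i] - offsets[i];
    return total / pixelValues.Length;  // 8 pixels in a 4x2 block
}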

For a 4x2 block, there are 165 possible cluster configurations, but it turns out that some of those are redundant, given certain assumptions.  The base color is derived from the formula ((color1-offset1)+(color2-offset2)+...)/8, but since the adds are commutative, that's identical to ((color1+color2+...)-(offset1+offset2+...))/8

The first half of that is the total of the colors, which is constant.  The second is the total of the offsets.

Fortunately, not all of the possible combinations produce unique offsets.  Some of them cancel out, since adding 1 to or subtracting 1 from the count of the offsets that are negatives of each other produces no change.  In an example case, the count tuples (5,0,1,2) and (3,2,3,0) are the same, since 5*-L + 0*-S + 1*S + 2*L = 3*-L + 2*-S + 3*S + 0*L.

For most of the tables, this results in only 81 possible offset combinations.  For the first table, the large value is divisible by the small value, causing even more cancellations, and only 57 possible offset combinations.

Finally, most of the base colors produced by the offset combinations are not unique after quantization: Differential mode only has 5-bit color resolution, and individual mode only has 4-bit resolution, so after quantization, many of the results get mapped to the same color.  Deduplicating them is also inexpensive: If the offsets are checked in ascending order, then once the candidate color progresses past the threshold where the result could map to a specific quantized color, it will never cross back below that threshold, so deduplication only needs to inspect the last appended quantized color.
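
Conceptually, the deduplication is something like this C# sketch (hypothetical names, with a 5-bit quantizer picked purely for illustration):

using System.Collections.Generic;

static class BaseColorDedup
{
    // Offset totals are visited in ascending order, so the candidate
    // base colors come out in monotonic order and duplicates are always
    // adjacent: only the last appended value needs checking.
    public static List<int> UniqueQuantizedColors(int colorTotal, int[] offsetTotalsAscending)
    {
        var candidates = new List<int>();
        foreach (int offsetTotal in offsetTotalsAscending)
        {
            int quantized = ((colorTotal - offsetTotal) / 8) >> 3;  // e.g. 8-bit to 5-bit
            if (candidates.Count == 0 || candidates[candidates.Count - 1] != quantized)
                candidates.Add(quantized);
        }
        return candidates;
    }
}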

Together, these reduce the candidate set of base colors to a fairly small number, creating a compact search space at low cost.

There are a few circumstances where these assumptions don't hold:

One is when the clamping behavior comes into effect, particularly when a pixel channel's value is near 0 or 255.  In that case, this algorithm can't account for the fact that changing the value of the base color would have no effect on some of the offset colors.

One is when the pixels are not of equal importance, such as when using weight-by-alpha, which makes the offset additions non-commutative, but that only invalidates the cancellation part of the algorithm.  The color total can be pre-weighted, and the rest of the algorithm would have to rely on working more like cluster fit: Sort the colors along the 1,1,1 line and determine the weights for the pixels in that order, generate all 165 cluster combinations, and compute the weight totals for each one.  Sort them into ascending order, and then the rest of the algorithm should work.

One is when dealing with differential mode constraints, since not all base color pairs are legal.  There are some cases where a base color pair that is just barely illegal could be made legal by nudging the colors closer together, but in practice, this is rare: Usually, there is already a very similar individual mode color pair, or another differential mode pair that is only slightly worse.

In CVTT, I deal with differential mode by evaluating all of the possibilities and picking the best legal pair.  There's a shortcut case when the best base color for both blocks produces a legal differential mode pair, but this is admittedly a bit less than optimal: It picks the first evaluation in the case of a tie when searching for the best, but since blocks are evaluated starting with the largest combined negative offset, it's a bit more likely to pick colors far away from the base than colors close to the base.  Colors closer to the average tend to produce smaller offsets and are more likely to be legal, so this could be improved by making the tie-breaking function prefer smaller offsets.

In practice though, the differential mode search is not where most of the computation time is spent: Evaluating the actual base colors is.

As with the rest of CVTT's codecs, brute force is still key: The codec is designed to use 8-wide SSE2 16-bit math ops wherever possible to process 8 blocks at once, but this creates a number of challenges since sorting and list creation are not amenable to vectorization.  I solve this by careful insertion of scalar ops, and the entire differential mode part is scalar as well.  Fortunately, as stated, the parts that have to be scalar are not major contributors to the encoding time.


You can grab the stand-alone CVTT encoding kernels here: https://github.com/elasota/ConvectionKernels

by OneEightHundred (noreply@blogger.com) at 2019-09-06 00:47

2019-05-23

It would take me a while to get used to carrying my urine and feces around…

by Factor Mystic at 2019-05-23 16:51

2019-04-05

Although there are readily available skeletons for purchase, I want to print my own skeleton for “profiling and debugging” purposes :)

:)

by Factor Mystic at 2019-04-05 17:10

2018-06-13

Introduction

In the last post we were left with some tests that exercised some very basic functionality of the Deck class. In this post, we will continue to add unit tests and write production code to make those tests pass, until we get a class which is able to produce a randomised deck of 52 cards.

Test Refactoring

You can, and should, refactor your tests where appropriate. For instance, on the last test in the last post, we only asserted that we could get all the cards for a particular suit. What about the other three? With most modern test frameworks, that is very easy.

[Theory]
[InlineData(Suit.Clubs)]
[InlineData(Suit.Diamonds)]
[InlineData(Suit.Hearts)]
[InlineData(Suit.Spades)]
public void Should_BeAbleToSelectSuitOfCardsFromDeck(Suit suit)
{
    var deck = new Deck();

    var cards = deck.Where(x => x.Suit == suit);

    cards.Should().HaveCount(13);
}

More Cards

We are going to want actual cards with values to work with. And for the next test, we can literally copy and paste the previous test to use as a starter.

[Theory]
[InlineData(Suit.Clubs)]
[InlineData(Suit.Diamonds)]
[InlineData(Suit.Hearts)]
[InlineData(Suit.Spades)]
public void Should_BuildAllCardsInDeck(Suit suit)
{
    var deck = new Deck();

    var cards = deck.Where(x => x.Suit == suit);

    cards.Should().Contain(new List<Card> 
    { 
        new Card(suit, "A"), new Card(suit, "2"), new Card(suit, "3"), new Card(suit, "4"),
        new Card(suit, "5"), new Card(suit, "6"), new Card(suit, "7"), new Card(suit, "8"),
        new Card(suit, "9"), new Card(suit, "10"), new Card(suit, "J"), new Card(suit, "Q"),
        new Card(suit, "K")
    });
}

Now that I’ve written this, when I compare it to the previous one, it’s testing the exact same thing, in slightly more detail. So we can delete the previous test, it’s just noise.

The test is currently failing because it can't compile, due to there not being a constructor which takes a string. Let's fix that.

public struct Card
{
    private Suit _suit;
    private string _value;

    public Card(Suit suit, string value)
    {
        _suit = suit;
        _value = value;
    }

    public Suit Suit { get { return _suit; } }
    public string Value { get { return _value; } }

    public override string ToString()
    {
        return $"{Suit}";
    }
}

There are a couple of changes to this class. Firstly, I added the constructor, along with private fields to hold the two defining values, exposed through read-only properties. I also changed it from being a class to being a struct, so it's now an immutable value type, which makes sense. In a deck of cards, there can, for example, only be one Ace of Spades.

These changes mean that our tests don't work, as the Deck class is now broken: the code which builds the set of thirteen cards for a given suit no longer compiles, because it doesn't understand the Card constructor, or the fact that the .Suit property is now read-only.

Here is my first attempt at fixing the code, which I don’t currently think is all that bad:

private string _ranks = "A23456789XJQK";

private List<Card> BuildSuit(Suit suit)
{
    var cards = new List<Card>(_suitSize);

    for (var i = 1; i <= _suitSize; i++)
    {
        var rank = _ranks[i-1].ToString();
        var card = new Card(suit, rank);
        cards.Add(card);
    }

    return cards;
}

This now builds us four suits of thirteen cards. I realised as I was writing the production code that handling "10" as a value would not be straightforward, so I opted for the simpler (and common) approach of using "X" to represent "10", updating the test's expected values to match. The test passes four times, once for each suit. This is probably unnecessary, but it protects us in future from inadvertently adding any code which may affect the way that cards are generated for a particular suit.

Every day I’m (randomly) shuffling

It's occurred to me as I write this that the Deck class is functionally complete, as it produces a deck of 52 cards when it is instantiated. You will however recall that we want a randomly shuffled deck of cards. If we consider, and invoke, the Single Responsibility Principle, then we should add a Dealer class; we are modeling a real world event, and a pack of cards cannot shuffle itself - that's what the dealer does.
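
As a rough preview of where that is heading (the real implementation is the subject of the next post), a minimal sketch of such a Dealer might look like the following, assuming a Fisher-Yates shuffle and that Deck remains enumerable. The names and design here are provisional:

public class Dealer
{
    private readonly Random _random = new Random();

    // Fisher-Yates shuffle: walk the cards backwards, swapping each
    // card with a randomly chosen card at or before its position.
    public IEnumerable<Card> Shuffle(Deck deck)
    {
        var cards = deck.ToArray();  // requires using System.Linq;
        for (var i = cards.Length - 1; i > 0; i--)
        {
            var j = _random.Next(i + 1);
            var temp = cards[i];
            cards[i] = cards[j];
            cards[j] = temp;
        }
        return cards;
    }
}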

Conclusion

In this post I've completed the walk through of developing a class to create a deck of 52 cards using some basic TDD techniques. I realised that adding the ability to shuffle the pack to the Deck class would be a violation of SRP, as the Deck class should not be concerned with or have any knowledge about how it is shuffled. In the next post I will discuss how we can implement a Dealer class, and illustrate some techniques for swapping the randomisation algorithm around.

2018-06-13 00:00

2018-03-30

Convection Texture Tools is now roughly equal quality-wise with NVTT at compressing BC7 textures despite being about 140 times faster, making it one of the fastest and highest-quality BC7 compressors.

How this was accomplished turned out to be simpler than expected.  Recall that Squish became the gold standard of S3TC compressors by implementing a "cluster fit" algorithm that ordered all of the input colors on a line and tried every possible grouping of them to least-squares fit them.

Unfortunately, using this technique isn't practical in BC7 because the number of orderings has rather extreme scaling characteristics.  While 2-bit indices have a few hundred possible orderings, 4-bit indices have millions, most BC7 mode indices are 3 bits, and some have 4.

With that option gone, most BC7 compressors until now have tried to solve endpoints using various types of endpoint perturbation, which tends to require a lot of iterations.

Convection just uses 2 rounds of K-means clustering and a much simpler technique based on a guess about why Squish's cluster fit algorithm is actually useful: It can create endpoint mappings that don't use some of the terminal ends of the endpoint line, causing the endpoint to be extrapolated out, possibly to a point that loses less accuracy to quantization.

Convection just tries cutting off 1 index at each end, then 1 index at both ends.  That turned out to be enough to place it near the top of the quality benchmarks.

Now I just need to add color weighting and alpha weighting and it'll be time to move on to other formats.

by OneEightHundred (noreply@blogger.com) at 2018-03-30 05:26

2018-02-04

I once ate 10 mg LSD by accident. It was a dilution error. The peak lasted ~10 hr. At some point, I saw the top of my head. But hey, maybe it was just an hallucination ;)

maybe ;)

by Factor Mystic at 2018-02-04 12:30

2017-11-28

Introduction

In the previous post in this series, we had finished up with a very basic unit test, which didn't really test much, which we had run using dotnet xunit in a console, and seen some lovely output.

We’ll continue to write some more unit tests to try and understand what kind of API we need in a class (or classes) which can help us satisfy the first rule of our Freecell engine implementation. As a reminder, our first rule is: There is one standard deck of cards, shuffled.

I’m trying to write both the code and the blog posts as I go along, so I have no idea what the final code will look like when I’ve finished. This means I’ll probably make mistakes and make some poor design decisions, but the whole point of TDD is that you can get a feel for that as you go along, because the tests will tell you.

Don’t try to TDD without some sort of plan

Whilst we obey the 3 Laws of TDD, that doesn’t mean that we can’t or shouldn’t doodle a design and some notes on a whiteboard or a notebook about the way our API could look. I always find that having some idea of where you want to go and what you want to achieve aids the TDD process, because then the unit tests should kick in and you’ll get a feel for whether things are going well or the conceptual design you had is not working.

With that in mind, we know that we will want to define a Card object, and that there are going to be four suits of cards, so that gives us a hint that we’ll need an enum to define them. Unless we want to play the same game of Freecell over and over again, then we’ll need to randomly generate the cards in the deck. We also know that we will need to iterate over the deck when it comes to building the Cascades, but the Deck should not be concerned with that.

With that in mind, we can start writing some more tests.

To a functioning Deck class

First things first, I think that I really like the idea of having the Deck class enumerable, so I’ll start with testing that.

[Fact]
public void Should_BeAbleToEnumerateCards()
{
    foreach (var card in new Deck())
    {
    }
}

This is enough to make the test fail, because the Deck class doesn’t yet have a public definition for GetEnumerator, but it gives us a feel for how the class is going to be used. To make the test pass, we can do the simplest thing to make the compiler happy, and give the Deck class a GetEnumerator definition.

public IEnumerator<object> GetEnumerator()
{
    return Enumerable.Empty<object>().GetEnumerator();
}

I'm using the generic type of object in the method, because I haven't yet decided on what that type is going to be; deciding now would violate the three rules of TDD, as it hasn't yet been necessary.

Now that we can enumerate the Deck class, we can start making things a little more interesting. Given that it is a deck of cards, it should be reasonable to expect that we can select a suit of cards from the deck and get a collection which has 13 cards in it. Remember, we only need to write as much of this next test as is sufficient to get the test to fail.

[Fact]
public void Should_BeAbleToSelectSuitOfCardsFromDeck()
{
    var deck = new Deck();

    var hearts = deck.Where();
}

It turns out we can’t even get to the point in the test of asserting something because we get a compiler failure. The compiler can’t find a method or extension method for Where. But, the previous test where we enumerate the Deck in a foreach passes. Well, we only wrote as much code to make that test pass as we needed to, and that only involved adding the GetEnumerator method to the class. We need to write more code to get this current test to pass, such that we can keep the previous test passing too.

This is easy to do by implementing IEnumerable<> on the Deck class:

public class Deck : IEnumerable<object>
{
    public IEnumerator<object> GetEnumerator()
    {
        foreach (var card in _cards)
        {
            yield return card;
        }
    }

    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}

I've cut some of the other code out of the class so that you can see just the detail of the implementation. The second, explicitly implemented IEnumerable.GetEnumerator is there because IEnumerable<> inherits from IEnumerable, so it must be implemented, but as you can see, we can just forward to the generically implemented method. With that done, we can now add using System.Linq; to the test class so that we can use the Where method.

var deck = new Deck();

var hearts = deck.Where(x => x.Suit == Suit.Hearts);

This is where the implementation is going to start getting a little more complicated than the actual tests. Obviously in order to make the test pass, we need to add an actual Card class and give it a property which we can use to select the correct suit of cards.

public enum Suit
{
    Clubs,
    Diamonds,
    Hearts,
    Spades
}

public class Card
{
    public Suit Suit { get; set; }
}

After writing this, we can then change the enumerable implementation in the Deck class to public class Deck : IEnumerable<Card>, and the test will now compile. Now we can actually assert the intent of the test:

[Fact]
public void Should_BeAbleToSelectSuitOfCardsFromDeck()
{
    var deck = new Deck();

    var hearts = deck.Where(x => x.Suit == Suit.Hearts);

    hearts.Should().HaveCount(13);
}

Conclusion

In this post, I talked through several iterations of the TDD loop, based on the 3 Rules of TDD, in some detail. An interesting discussion that always rears its head at this point is: Do you need to follow the 3 rules so excruciatingly religiously? I don't really know the answer to that. Certainly I always had it in my head that I would need a Card class, and that would necessitate a Suit enum, as these are pretty obvious things when thinking about the concept of a class which models a deck of cards. Could I have taken a shortcut, written everything and then written the tests to test the implementation (as it stands)? Probably, for something so trivial.

In the next post, I will write some more tests to continue building the Deck class.

2017-11-28 00:00

2017-11-21

Introduction

I thought Freecell would make a fine basis for talking about Test Driven Development. It is a game which I enjoy playing. I have an app for it on my phone, and it’s been available on Windows for as long as I can remember, although I’m writing this on a Mac, which does not by default have a Freecell game.

The rules are fairly simple:

  • There is one standard deck of cards, shuffled.
  • There are four “Free” Cell piles, which may each have any one card stored in it.
  • There are four Foundation piles, one for each suit.
  • The cards are dealt face-up left-to-right into eight cascades
    • The cards must alternate in colour.
    • The result of the deal is that the first four cascades will have seven cards, the final four will have six cards.
  • The topmost card of a cascade begins a tableau.
  • A tableau must be built down by alternating colours.
  • A card in a cell may be moved onto a tableau subject to the previous rule.
  • A tableau may be recursively moved onto another tableau, or to an empty cascade, only if there is enough free space in Cells or empty cascades to use as intermediate locations.
  • The game is won when all four Foundation piles are built up in suit, Ace to King.

These rules will form the basis of a Freecell Rules Engine. Note that we're not interested in a UI at the moment.

This post is a follow on from my previous post of how to setup a dotnet core environment for doing TDD.

red - first test

We know from the rules that we need a standard deck of cards to work with, so our initial test could assert that we can create an array, of some type that is yet to be determined, which has a length of 52.

[Fact]
public void Should_CreateAStandardDeckOfCards()
{
    var sut = new Deck();

}

There! Our first test. It fails (by not compiling). We’ve obeyed The 3 Laws of TDD: We’ve not written any production code and we’ve only written enough of the unit test to make it fail. We can make the test pass by creating a Deck class in the Freecell.Engine project. Time for another commit:

green - it passes

It is trivial to make our first test pass, as all we need to do is create a new class in our Freecell.Engine project, and our test passes as it now compiles. We can prove this by instructing dotnet to run our unit tests for us:

nostromo:Freecell.Engine.Tests stuart$ dotnet watch xunit
watch : Started
Detecting target frameworks in Freecell.Engine.Tests.csproj...
Building for framework netcoreapp2.0...
  Freecell.Engine -> /Users/stuart/dev/freecell/Freecell.Engine/bin/Debug/netstandard2.0/Freecell.Engine.dll
  Freecell.Engine.Tests -> /Users/stuart/dev/freecell/Freecell.Engine.Tests/bin/Debug/netcoreapp2.0/Freecell.Engine.Tests.dll
Running .NET Core 2.0.0 tests for framework netcoreapp2.0...
xUnit.net Console Runner (64-bit .NET Core 4.6.00001.0)
  Discovering: Freecell.Engine.Tests
  Discovered:  Freecell.Engine.Tests
  Starting:    Freecell.Engine.Tests
  Finished:    Freecell.Engine.Tests
=== TEST EXECUTION SUMMARY ===
   Freecell.Engine.Tests  Total: 1, Errors: 0, Failed: 0, Skipped: 0, Time: 0.142s
watch : Exited
watch : Waiting for a file to change before restarting dotnet...

It is important to make sure to run dotnet xunit from within the test project folder; you can't pass the path to the test project like you can with dotnet test. As you can see, I've also started watching xunit, and the runner is now going to wait until I make and save a change before automatically compiling and running the tests.

red, green

This first unit test still doesn't really test very much, and because we are obeying the 3 TDD rules, it forces us to think a little before we write any test code. When looking at the rules, I think we will probably want the ability to move through our deck of cards and to remove cards from the deck. So, with this in mind, the most logical thing to do is to make the Deck class enumerable. We could test that by checking a length property. Still in our first test, we can add this:

var sut = new Deck();

var length = sut.Length;

If I switch over to our dotnet watch window, we get the immediate feedback that this has failed:

Detecting target frameworks in Freecell.Engine.Tests.csproj...
Building for framework netcoreapp2.0...
  Freecell.Engine -> /Users/stuart/dev/freecell/Freecell.Engine/bin/Debug/netstandard2.0/Freecell.Engine.dll
DeckTests.cs(13,30): error CS1061: 'Deck' does not contain a definition for 'Length' and no extension method 'Length' accepting a first argument of type 'Deck' could be found (are you missing a using directive or an assembly reference?) [/Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj]
Build failed!
watch : Exited with error code 1
watch : Waiting for a file to change before restarting dotnet...

We have a pretty good idea that we're going to make the Deck class enumerable, and probably make it implement IEnumerable<>; we could then add some sort of internal array to hold another type, probably a Card, and then write a bunch more code that will make our test pass.

But that would violate the 3rd rule, so instead, we simply add a Length property to the Deck class:

public class Deck 
{
    public int Length {get;}
}

This makes our test happy, because it compiles again. But it still doesn’t assert anything. Let’s fix that, and assert that the Length property actually has a length that we would expect a deck of cards to have, namely 52:

var sut = new Deck();

var length = sut.Length;

length.Should().Be(52);

The last line of the test asserts through the use of FluentAssertions that the Length property should be 52. I like FluentAssertions, I think it looks a lot cleaner than writing something like Assert.Equal(52, sut.Length), and it's quite easy to read and understand: 'Length' should be 52. I love it. We can add it with the command dotnet add package FluentAssertions. Fix the using reference in the test class so that it compiles, and then check our watch window:

Detecting target frameworks in Freecell.Engine.Tests.csproj...
Building for framework netcoreapp2.0...
  Freecell.Engine -> /Users/stuart/dev/freecell/Freecell.Engine/bin/Debug/netstandard2.0/Freecell.Engine.dll
  Freecell.Engine.Tests -> /Users/stuart/dev/freecell/Freecell.Engine.Tests/bin/Debug/netcoreapp2.0/Freecell.Engine.Tests.dll
Running .NET Core 2.0.0 tests for framework netcoreapp2.0...
xUnit.net Console Runner (64-bit .NET Core 4.6.00001.0)
  Discovering: Freecell.Engine.Tests
  Discovered:  Freecell.Engine.Tests
  Starting:    Freecell.Engine.Tests
    Freecell.Engine.Tests.DeckTests.Should_CreateAStandardDeckOfCards [FAIL]
      Expected value to be 52, but found 0.
      Stack Trace:
           at FluentAssertions.Execution.XUnit2TestFramework.Throw(String message)
           at FluentAssertions.Execution.AssertionScope.FailWith(String message, Object[] args)
           at FluentAssertions.Numeric.NumericAssertions`1.Be(T expected, String because, Object[] becauseArgs)
        /Users/stuart/dev/freecell/Freecell.Engine.Tests/DeckTests.cs(16,0): at Freecell.Engine.Tests.DeckTests.Should_CreateAStandardDeckOfCards()
  Finished:    Freecell.Engine.Tests
=== TEST EXECUTION SUMMARY ===
   Freecell.Engine.Tests  Total: 1, Errors: 0, Failed: 1, Skipped: 0, Time: 0.201s
watch : Exited with error code 1
watch : Waiting for a file to change before restarting dotnet...

Now to make our test pass, we could again just start implementing IEnumerable<>, but that's not TDD, and Uncle Bob might get upset at me. Instead, we will do the simplest thing that will make the test pass:

public class Deck
{
    public int Length { get { return new string[52].Length; }}
}

refactor

Now that we have a full test with an assertion that passes, we can move on to the refactor stage of the red/green/refactor TDD cycle. As it stands, our simple class passes our test, but we can see right away that newing up an array in the getter of the Length property is not going to serve our interests well in the long run, so we should do something about that. Making it a member variable seems to be the most logical thing to do at the moment, so we'll do that. We don't need to make any changes to our test on the refactor stage. If we do, that's a design smell that would indicate that something is wrong.

public class Deck
{
    private const int _size = 52;
    private string[] _cards = new string[_size];
    public int Length { get { return _cards.Length; }}
}

Conclusion

In this post, we've fleshed out our Deck class a little more, and gone through the full red/green/refactor TDD cycle. I also introduced FluentAssertions, and showed the output from the watch window as the test failed.

2017-11-21 00:00

2017-11-14

Introduction

In a future post, I'm going to write about Test Driven Development, with the aim of writing a Freecell clone. In this post I'll walk through setting up a dotnet core solution with a class library which will hold the Freecell rules engine, a class library for our unit tests, and show how to set up an environment for immediate feedback, which is one of the key benefits of TDD. I'll also demonstrate using some basic git commands to setup our source control.

As you’ll notice from the command line output below, I’m doing all this on a Mac, but things should not be any different if you are following along on Linux. Or even Windows.

dotnet new

We need to new up two projects: one for our rules engine; one for the tests. It is a good idea to keep the unit tests separate from the code under test - in a real world application you really do not want test code to get mixed in with production code.

nostromo:dev stuart$ mkdir freecell
nostromo:dev stuart$ dotnet new classlib -o freecell/Freecell.Engine -n Freecell.Engine
The template "Class library" was created successfully.

Processing post-creation actions...
Running 'dotnet restore' on freecell/Freecell.Engine/Freecell.Engine.csproj...
  Restoring packages for /Users/stuart/dev/freecell/Freecell.Engine/Freecell.Engine.csproj...
  Generating MSBuild file /Users/stuart/dev/freecell/Freecell.Engine/obj/Freecell.Engine.csproj.nuget.g.props.
  Generating MSBuild file /Users/stuart/dev/freecell/Freecell.Engine/obj/Freecell.Engine.csproj.nuget.g.targets.
  Restore completed in 133.35 ms for /Users/stuart/dev/freecell/Freecell.Engine/Freecell.Engine.csproj.


Restore succeeded.

The command dotnet new classlib instructs the framework to create a new class library. The -o option allows an output directory to be specified and the -n allows the project name to be specified. If you don't specify these options, the project will be created in and named after the current folder. You can see more details on the command in Microsoft's documentation.

Then create the second project to hold the unit tests. I like to use xUnit, and the dotnet framework team do too. It's pretty telling that the dotnet framework team use xUnit instead of MSTest - which was exactly the basis of my argument when I moved a team from MSTest to xUnit last year.

nostromo:dev stuart$ dotnet new xunit -o freecell/Freecell.Engine.Tests -n Freecell.Engine.Tests
The template "xUnit Test Project" was created successfully.

...

Restore succeeded.

We should also add a reference into our test project to the Freecell.Engine project, as it is that which contains the code we want to test.

nostromo:freecell stuart$ cd Freecell.Engine.Tests/
nostromo:Freecell.Engine.Tests stuart$ dotnet add reference ../Freecell.Engine/Freecell.Engine.csproj 
Reference `..\Freecell.Engine\Freecell.Engine.csproj` added to the project.

With that all done, now is a good time to initialise a git repository to hold the code and make the first commit.

nostromo:dev stuart$ cd freecell/
nostromo:freecell stuart$ git init
Initialized empty Git repository in /Users/stuart/dev/freecell/.git/
nostromo:freecell stuart$ git add --all
nostromo:freecell stuart$ git commit -m "Initial commit"
[master (root-commit) 2cc150c] Initial commit
 12 files changed, 6025 insertions(+)
 create mode 100644 Freecell.Engine.Tests/Freecell.Engine.Tests.csproj
 create mode 100644 Freecell.Engine.Tests/UnitTest1.cs
 create mode 100644 Freecell.Engine.Tests/obj/Freecell.Engine.Tests.csproj.nuget.cache
 create mode 100644 Freecell.Engine.Tests/obj/Freecell.Engine.Tests.csproj.nuget.g.props
 create mode 100644 Freecell.Engine.Tests/obj/Freecell.Engine.Tests.csproj.nuget.g.targets
 create mode 100644 Freecell.Engine.Tests/obj/project.assets.json
 create mode 100644 Freecell.Engine/Class1.cs
 create mode 100644 Freecell.Engine/Freecell.Engine.csproj
 create mode 100644 Freecell.Engine/obj/Freecell.Engine.csproj.nuget.cache
 create mode 100644 Freecell.Engine/obj/Freecell.Engine.csproj.nuget.g.props
 create mode 100644 Freecell.Engine/obj/Freecell.Engine.csproj.nuget.g.targets
 create mode 100644 Freecell.Engine/obj/project.assets.json
nostromo:freecell stuart$ 

dotnet new sln

Although it doesn’t matter to me as I’m coding this on a Mac using Visual Studio Code, for everyone’s convenience, we should add a solution file. This will also help later on when it comes to talking about build scripts and using Continuous Integration, as it’s usually easier to target a single solution file for building all the projects.

nostromo:freecell stuart$ dotnet new sln -n Freecell.Engine
The template "Solution File" was created successfully.
nostromo:freecell stuart$ dotnet sln add Freecell.Engine/Freecell.Engine.csproj 
Project `Freecell.Engine/Freecell.Engine.csproj` added to the solution.
nostromo:freecell stuart$ dotnet sln add Freecell.Engine.Tests/Freecell.Engine.Tests.csproj 
Project `Freecell.Engine.Tests/Freecell.Engine.Tests.csproj` added to the solution.

dotnet xUnit

I'm also going to start using the dotnet xunit command which is available to us, but this isn't (currently) as straightforward as it perhaps will become. Firstly we need to update the version of xUnit which the dotnet new xunit command installed into the project, as it's still 2.2.0, and to use dotnet xunit it needs to be the same version. Secondly, there isn't yet a dotnet-cli command to update packages. But you can achieve this by adding an already existing package, which, if you don't specify a version, will update it to the latest version. Why they don't just add a dotnet update package --all command beats me.

If version numbers have changed since this post was written/published, don't worry. All you need to do is make sure that the xUnit package and the dotnet xUnit command package are the same versions. You can't really go wrong, as the dotnet xunit command will tell you if there is a version mismatch.

nostromo:freecell stuart$ cd Freecell.Engine.Tests/
nostromo:Freecell.Engine.Tests stuart$ dotnet add package xunit
  Writing /var/folders/xc/xshvfj214z18xn0t5y1vzty80000gn/T/tmpr93zFG.tmp
info : Adding PackageReference for package 'xunit' into project '/Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj'.
log  : Restoring packages for /Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj...
info :   CACHE https://api.nuget.org/v3-flatcontainer/xunit/index.json
info : Package 'xunit' is compatible with all the specified frameworks in project '/Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj'.
info : PackageReference for package 'xunit' version '2.3.1' updated in file '/Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj'.
nostromo:Freecell.Engine.Tests stuart$ 

With that done, we can now add the dotnet-xunit cli command package, and start using it:

nostromo:Freecell.Engine.Tests stuart$ dotnet add package dotnet-xunit
  Writing /var/folders/xc/xshvfj214z18xn0t5y1vzty80000gn/T/tmp6wUvtG.tmp
info : Adding PackageReference for package 'dotnet-xunit' into project '/Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj'.
log  : Restoring packages for /Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj...
info :   GET https://api.nuget.org/v3-flatcontainer/dotnet-xunit/index.json
info :   OK https://api.nuget.org/v3-flatcontainer/dotnet-xunit/index.json 639ms
info : Package 'dotnet-xunit' is compatible with all the specified frameworks in project '/Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj'.
info : PackageReference for package 'dotnet-xunit' version '2.3.1' added to file '/Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj'.
nostromo:Freecell.Engine.Tests stuart$ dotnet xunit
No executable found matching command "dotnet-xunit"
nostromo:Freecell.Engine.Tests stuart$ 

Hang on just a minute, the computer is lying to me, I clearly just added the dotnet-xunit package, which provides the dotnet xunit command. What gives? Well, the gotcha here is that the .csproj needs to be updated and told that the dotnet-xunit package is a special and unique snowflake. Instead of PackageReference, it needs to be DotNetCliToolReference. To be fair, this is documented in the xUnit documentation, and I think this is something that in the future will probably be automatic. For the time being we have to do it ourselves.
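
In practice, that means opening the test project's .csproj and changing the element that dotnet add package created from a PackageReference to a DotNetCliToolReference, something like this (with the version number matching the one installed above):

<!-- Before: -->
<PackageReference Include="dotnet-xunit" Version="2.3.1" />

<!-- After: -->
<DotNetCliToolReference Include="dotnet-xunit" Version="2.3.1" />

If we now run dotnet xunit again: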

nostromo:Freecell.Engine.Tests stuart$ dotnet xunit
Detecting target frameworks in Freecell.Engine.Tests.csproj...
Building for framework netcoreapp2.0...
  Freecell.Engine -> /Users/stuart/dev/freecell/Freecell.Engine/bin/Debug/netstandard2.0/Freecell.Engine.dll
  Freecell.Engine.Tests -> /Users/stuart/dev/freecell/Freecell.Engine.Tests/bin/Debug/netcoreapp2.0/Freecell.Engine.Tests.dll
Running .NET Core 2.0.0 tests for framework netcoreapp2.0...
xUnit.net Console Runner (64-bit .NET Core 4.6.00001.0)
  Discovering: Freecell.Engine.Tests
  Discovered:  Freecell.Engine.Tests
  Starting:    Freecell.Engine.Tests
  Finished:    Freecell.Engine.Tests
=== TEST EXECUTION SUMMARY ===
   Freecell.Engine.Tests  Total: 1, Errors: 0, Failed: 0, Skipped: 0, Time: 0.156s
nostromo:Freecell.Engine.Tests stuart$ 

As you can see, we get much nicer output than if we just used the standard dotnet test command. Using this command also has the added benefit of being able to produce xml output which can be consumed by a CI server to show details about the unit tests, but that isn't something that I'm going to get into just yet.

I’m also going to update the xUnit Visual Studio runner now as well, as it is required to make VS Code debug our tests, which will come in handy later on. Executing dotnet add package xunit.runner.visualstudio does this for us.

dotnet watch

I am a big fan of NCrunch, and the rapid and immediate feedback which it provides when coding in Visual Studio. Sadly, it's not available for Visual Studio Code, or indeed for macOS, so in order to replicate the functionality it provides, we can make a few tweaks to our test project and watch our code for changes, which are then automatically compiled and the tests run. In order to get the NCrunch-like functionality, we need to add the dotnet watch cli command. This is fairly straightforward.

nostromo:Freecell.Engine.Tests stuart$ dotnet add package Microsoft.DotNet.Watcher.Tools
  Writing /var/folders/xc/xshvfj214z18xn0t5y1vzty80000gn/T/tmpFpRFyo.tmp
info : Adding PackageReference for package 'Microsoft.DotNet.Watcher.Tools' into project '/Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj'.
log  : Restoring packages for /Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj...
info :   GET https://api.nuget.org/v3-flatcontainer/microsoft.dotnet.watcher.tools/index.json
info :   OK https://api.nuget.org/v3-flatcontainer/microsoft.dotnet.watcher.tools/index.json 1418ms
info : Package 'Microsoft.DotNet.Watcher.Tools' is compatible with all the specified frameworks in project '/Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj'.
info : PackageReference for package 'Microsoft.DotNet.Watcher.Tools' version '2.0.0' added to file '/Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj'.
nostromo:Freecell.Engine.Tests stuart$ dotnet watch xunit
Version for package `Microsoft.DotNet.Watcher.Tools` could not be resolved.
nostromo:Freecell.Engine.Tests stuart$ dotnet restore
  Restoring packages for /Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj...
  Restoring packages for /Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj...
  Restore completed in 13.12 ms for /Users/stuart/dev/freecell/Freecell.Engine/Freecell.Engine.csproj.
  Restore completed in 26.52 ms for /Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj.
  Restore completed in 148.11 ms for /Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj.
  Restore completed in 393.99 ms for /Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj.

Make sure you remember to make the same edit to the .csproj file again so that dotnet understands that this is a CLI command. This is kind of opposite to the way Hanselman showed it, but it achieves the same end goal.

Now we can watch our unit test code for changes:

nostromo:Freecell.Engine.Tests stuart$ dotnet watch xunit
watch : Started
Detecting target frameworks in Freecell.Engine.Tests.csproj...
Building for framework netcoreapp2.0...
  Freecell.Engine -> /Users/stuart/dev/freecell/Freecell.Engine/bin/Debug/netstandard2.0/Freecell.Engine.dll
  Freecell.Engine.Tests -> /Users/stuart/dev/freecell/Freecell.Engine.Tests/bin/Debug/netcoreapp2.0/Freecell.Engine.Tests.dll
Running .NET Core 2.0.0 tests for framework netcoreapp2.0...
xUnit.net Console Runner (64-bit .NET Core 4.6.00001.0)
  Discovering: Freecell.Engine.Tests
  Discovered:  Freecell.Engine.Tests
  Starting:    Freecell.Engine.Tests
  Finished:    Freecell.Engine.Tests
=== TEST EXECUTION SUMMARY ===
   Freecell.Engine.Tests  Total: 1, Errors: 0, Failed: 0, Skipped: 0, Time: 0.147s
watch : Exited
watch : Waiting for a file to change before restarting dotnet...

Conclusion

In this post I have walked through setting up a class library and unit test library using dotnet core, how to create a solution file and add the projects to it, and how an immediate feedback cycle for TDD can be set up in a fairly easy and straightforward manner. I also demonstrated some basic git usage and initialised a repository for the code.

2017-11-14 00:00

2017-11-02

Introduction

Binary search is the classic search algorithm, and I remember implementing it in C at University. As an experiment I’m going to implement it in C# to see if the line of business applications I usually build have rotted my brain.

Algorithm

As Wikipedia explains, Binary Search follows this procedure:

Given an array A of n elements with values or records A[0], A[1], …, A[n−1], sorted such that A[0] ≤ A[1] ≤ … ≤ A[n−1], and target value T, the following subroutine uses binary search to find the index of T in A.

  1. Set L to 0 and R to n − 1.
  2. If L > R, the search terminates as unsuccessful.
  3. Set m (the position of the middle element) to the floor (the largest previous integer) of (L + R) / 2.
  4. If A[m] < T, set L to m + 1 and go to step 2.
  5. If A[m] > T, set R to m − 1 and go to step 2.
  6. Now A[m] = T, the search is done; return m.

This is actually Knuth’s algorithm, from The Art of Computer Programming as stated in the footnote on the Wikipedia article.

Implementation

It’s worth noting that this is merely a fun exercise. .NET already has an implementation in Array.BinarySearch which is much better than the one below, and I would always use that instead.
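
For the record, using the built-in version looks like this. One wrinkle worth knowing: when the term isn’t found, Array.BinarySearch returns the bitwise complement of the insertion point rather than a plain -1:

using System;

class Example
{
    static void Main()
    {
        var integers = new[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };

        // found: returns the index of the term
        Console.WriteLine(Array.BinarySearch(integers, 6));   // 5

        // not found: returns the bitwise complement of the insertion point
        Console.WriteLine(Array.BinarySearch(integers, 11));  // -11
    }
}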

It’s also worth mentioning that I’m cheating a little bit: I’m assuming that the array is already sorted, and it only works on ints.

My implementation

public class BinarySearch
{
    private readonly int[] _array;

    public BinarySearch(int[] array) => _array = array;

    public int Search(int term)
    {
        var l = 0;
        var r = _array.Length - 1;

        while (l <= r)
        {
            // written as l + (r - l) / 2 rather than (l + r) / 2
            // to avoid integer overflow on very large arrays
            var mid = l + (r - l) / 2;

            if (_array[mid] < term)
            {
                l = mid + 1;
            }
            else if (_array[mid] > term)
            {
                r = mid - 1;
            }
            else
            {
                return mid;
            }
        }

        // term not found
        return -1;
    }
}

Console runner

Here is the console runner:

using System;

class Program
{
    static string _message = "Found term {0} at position {1}";

    static void Main(string[] args)
    {
        var integers = new []{1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
        var searcher = new BinarySearch(integers);

        var result = searcher.Search(11);
        Console.WriteLine(_message, 11, result);

        result = searcher.Search(0);
        Console.WriteLine(_message, 0, result);

        result = searcher.Search(6);
        Console.WriteLine(_message, 6, result);
    }
}

Here is the output:

nostromo:sandbox stuart$ dotnet run
Found term 11 at position -1
Found term 0 at position -1
Found term 6 at position 5
nostromo:sandbox stuart$
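
As a footnote, lifting the “only works on ints” restriction is mostly a matter of swapping int for a type parameter constrained to IComparable<T>. A rough sketch, with the same caveat that the input array must already be sorted:

public class BinarySearch<T> where T : IComparable<T>
{
    private readonly T[] _array;

    public BinarySearch(T[] array) => _array = array;

    public int Search(T term)
    {
        var l = 0;
        var r = _array.Length - 1;

        while (l <= r)
        {
            var mid = l + (r - l) / 2;
            var comparison = _array[mid].CompareTo(term);

            if (comparison < 0)
            {
                l = mid + 1;
            }
            else if (comparison > 0)
            {
                r = mid - 1;
            }
            else
            {
                return mid;
            }
        }

        return -1;
    }
}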

2017-11-02 00:00

2017-05-22

Since I started working as a programmer, I’ve always taken notes in meetings and jotted things down during the day, but these usually went on an A4 notepad which I used as a daily scratch pad, and until recently I had never kept a proper journal that I could refer back to at a later time.

A colleague of mine, with whom I have been working on a project, has for a long time kept a development journal, or diary, of things that have happened in his work day. Examples are:

  • Meeting notes, who was present, salient points of what was said and what was agreed
  • Design ideas, diagrams, pseudo code
  • Technical notes on gotchas in the language and application
  • Noteworthy events connected to the project

The event which got me interested in his note keeping was one day when the Product Owner made a decision about the scope and importance of a particular feature. The colleague in question looked back over his notes and was able to prove that the PO had made a different decision about the same thing a few weeks before. When things like this happened a few more times, I really sat up and started to ask some questions.

“If it isn’t written down, it didn’t happen”, was the response.

If you write it down, you are more likely to remember it. There is a large body of evidence that suggests that the simple act of physically writing notes helps aid memory retention. There are a lot of articles and blogs about this subject, but I’d never paid it much attention. After all, I’d kept enough notes at school and university, and I didn’t think I needed to now that I was working.

I could not have been more wrong.

So I started taking notes. I got an A4 lined hardback notebook and started writing stuff down.

And: It works.

So for example, I can tell you who made the decision in a meeting thirteen months ago which meant a feature in the application was developed in a particular way which now makes it upsettingly difficult to modify that part of the application. I can produce my design notes from six months ago where I planned the refactoring of some functionality, and the implications of said work, and which developers on the project I’d talked it through with to get some sanity checking that what I was proposing wasn’t stupid. I can tell you who brought cakes in on a particular day last month and who said which particular funny thing last week that is now part of the project slang.

What I’ve found is that if you write stuff down during the day about what you are doing, it helps you remember (like the research says it will), and makes you think about what you have done already and what you need to do in the future. This is all stuff that is required for a Scrum stand-up, if you have to do those. It also provides ammunition for those of you who have to suffer through the dreaded annual performance appraisal; or, helps remind you what to list on the invoice when billing for the hours you’ve worked.

Lastly, it helps (me, anyway) remember what I was doing on my latest pet project that I haven’t touched for eight months. Which I should get back to.

2017-05-22 00:00

2016-11-02

For anyone keeping score, I haven’t updated my blog for over a year.

Insert excuses here

I know that I should do; Hanselman and Sonmez both recommend it.

Anyway, I’ve come up with some goals for myself for the next year:

  • Get on Hanselminutes
    • Failing that, any other developer focused podcast will do
  • Blog more regularly
  • Contribute more to open source
  • Release an open source project of my own
  • Become rich and famous

So we shall see what happens.

2016-11-02 00:00

2015-10-08

The principle of YAGNI should always apply, and I was recently reminded of that when I had to build a small application to give to a user for a one-off task. We’re talking a single-screen application with a single button on it. Idiot proof.

So, File > New > Winforms application - doesn’t have to be anything fancy. Then I caught myself: My first instinct was to add StructureMap to it.

I thought to myself that it’s only going to have a few classes. Just because it’s a small winforms app doesn’t mean I’m going to shit up the code-behind with business logic and data access, so why bother with the overhead of adding an IoC library?

Not using a container doesn’t mean I can’t still use dependency injection, though. Mark Seemann calls this “Poor Man’s DI”.

static class Program
{
    [STAThread]
    static void Main()
    {
        Application.EnableVisualStyles();
        Application.SetCompatibleTextRenderingDefault(false);

        // compose the object graph by hand, right at the entry point
        var service = new Service(new Database(), new Other());
        Application.Run(new Main(service));
    }
}

It’s short and sweet, and there is no IoC container configuration to worry about. Because I don’t need it.

What about when….

Requirements change. “Can it just do this as well…?” More dependencies required, perhaps another form, maybe another service or two. I think I’d see how far I could push Poor Man’s DI before bringing in a proper IoC container to help manage things, along the lines of the sketch below.
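
If that day came, the switch is mechanical enough. Here’s a minimal sketch of what the StructureMap version might look like; note that IDatabase, IOther and IService are hypothetical interfaces I’ve invented for illustration, since the example above uses the concrete types directly:

using System;
using System.Windows.Forms;
using StructureMap;

static class Program
{
    [STAThread]
    static void Main()
    {
        Application.EnableVisualStyles();
        Application.SetCompatibleTextRenderingDefault(false);

        // registrations replace the hand-built object graph
        var container = new Container(x =>
        {
            x.For<IDatabase>().Use<Database>();
            x.For<IOther>().Use<Other>();
            x.For<IService>().Use<Service>();
        });

        Application.Run(new Main(container.GetInstance<IService>()));
    }
}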

2015-10-08 00:00

2015-06-04

Configuring SignalR in ASP.NET MVC, using StructureMap as the IoC container is fairly straightforward, but not without some subtleties that caught me out.

For the purposes of this post, I’m going to assume that you are familiar with both SignalR and StructureMap, and that you know how to configure them in an ASP.NET MVC application. I will also assume that through some google-fu you have seen the Dependency Injection in SignalR guidance, and have worked through it and got to the “Using IoC Containers in SignalR” section.

I would assume, although I’ve not tested it, that much of this could also be applied to a self-hosted SignalR server.

Library versions used

This post is based on:

  • Asp.Net MVC 5.2.3
  • SignalR 2.2.0
  • StructureMap 3.1.5.154
  • StructureMap.MVC5 3.1.1.134

Follow the guidance up to the section on using Ninject, at which point we now want to configure StructureMap.

Replace the SignalR Dependency Resolver

The implementation is nearly identical, with some obvious StructureMap specific differences:

public class StructureMapSignalRDependencyResolver : DefaultDependencyResolver
{
    private readonly IContainer _container;

    public StructureMapSignalRDependencyResolver(IContainer container)
    {
        _container = container;
    }

    public override object GetService(Type serviceType)
    {
        return _container.TryGetInstance(serviceType) ?? base.GetService(serviceType);
    }

    public override IEnumerable<object> GetServices(Type serviceType)
    {
        var objects = _container.GetAllInstances(serviceType).Cast<object>();
        return objects.Concat(base.GetServices(serviceType));
    }
}

The behaviour is fairly straightforward: TryGetInstance will attempt to resolve the type, and if StructureMap doesn’t know about it, it will return null, in which case we fall back to the base resolver, which does.

Register this with StructureMap:

For<IDependencyResolver>().Singleton().Use<StructureMapSignalRDependencyResolver>();

In your Startup, where you configure SignalR, we need to use this new resolver implementation:

var resolver = DependencyResolver.Current.GetService<Microsoft.AspNet.SignalR.IDependencyResolver>();
    
var hubConfiguration = new HubConfiguration
{
    Resolver = resolver

    /* other options as required */
};

Here, we are using the MVC DependencyResolver, which has already been replaced by StructureMap thanks to StructureMap.MVC5, to resolve an instance of the SignalR dependency resolver we’ve registered, which we then tell SignalR to use with a hub configuration object.
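
Putting that together, the Startup wiring ends up looking something like this (a sketch of the usual OWIN startup shape rather than code lifted from a real project):

using System.Web.Mvc;
using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // the MVC DependencyResolver is already StructureMap-backed,
        // so this hands back our StructureMapSignalRDependencyResolver
        var resolver = DependencyResolver.Current
            .GetService<Microsoft.AspNet.SignalR.IDependencyResolver>();

        app.MapSignalR(new HubConfiguration { Resolver = resolver });
    }
}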

Now we just need to configure the StructureMap registry, and teach it how to resolve IHubConnectionContext<dynamic>:

For<IConnectionManager>().Use<ConnectionManager>();
For<IStockTicker>()
    .Singleton()
    .Use<StockTicker>()
    .Ctor<IHubConnectionContext<dynamic>>()
    .Is(ctx => ctx.GetInstance<IDependencyResolver>()
        .Resolve<IConnectionManager>()
        .GetHubContext<StockTickerHub>().Clients);

As in the guidance, we want the StockTicker instance to be a singleton, and we have to specify how to resolve the IHubConnectionContext<dynamic> which the StockTicker requires. In the Is, I’m using the context to resolve the default SignalR connection manager we’ve registered. This isn’t in the guidance, but I couldn’t get it to work without doing this.

If anyone has comments/improvements on this, I’d love to hear them.

2015-06-04 00:00

2015-04-17

It is possible to set up your build server to run code analysis on your solution/projects without having to install VS on the build server.

The answer is here: http://stackoverflow.com/a/21731245/3181

I’m not going to reproduce the code here, there is no point. This post is just a reminder to myself as to where I found the solution to this.

2015-04-17 00:00

2014-01-29

A few months ago I left a busy startup job I’d had for over a year. The work was engrossing: I stopped blogging, but I was programming every day. I learned a completely new language, but got plenty of chances to use my existing knowledge. That is, after all, why they hired me.

[image: dilbert]

I especially liked something that might seem boring: combing through logs of occasional server errors and modifying our code to avoid them. Maybe it was because I had set up the monitoring system. Or because I was manually deleting servers that had broken in new ways. The economist in me especially liked putting a dollar value on bugs of this nature: 20 useless servers cost an extra 500 dollars a week on AWS.

But, there’s only so much waste like this to clean up. I’d automated most of the manual work I was doing and taught a few interns how to do the rest. I spent two weeks openly wondering what I’d do after finishing my current project, even questioning whether I’d still be useful with the company’s new direction.

[image: fireme]
Career Tip: don’t do this.

That’s when we agreed to part ways. So, there I was, no “official” job but still a ton of things to keep me busy. I helped run a chain of Hacker Hostels in Silicon Valley, I was still maintaining Wine as an Ubuntu developer, and I was still a “politician” on Ubuntu’s Community Council, having weekly meetings with Mark Shuttleworth.

Politicking, business management, and even Ubuntu packaging, however, aren’t programming. I just wasn’t doing it anymore, until last week. I got curious about counting my users on Launchpad. Download counts are exposed by an API, but not viewable on any webpage. No one else had written a proper script to harvest that data. It was time to program.

[image: fuckshitdamn]

And man, I went a little nuts. It was utterly engrossing, in the way that writing and video games used to be. I found myself up past 3am before I even noticed the time; I’d spent a whole day just testing and coding before finally putting it on github. I rationalized my need to make it good as a service to others who’d use it. But in truth I just liked doing it.

It didn’t stop there. I started looking around for programming puzzles. I wrote 4 lines of python that I thought were so neat they needed to be posted as a self-answered question on stack overflow. I literally thought they were beautiful, and using the new yield from feature in Python3 was making me inordinately happy.

And now, I’m writing again. And making terrible cartoons on my penboard. I missed this shit. It’s fucking awesome.

by YokoZar at 2014-01-29 02:46

2013-02-07

Lock’n’Roll, a Pidgin plugin for Windows designed to set an away status message when the PC is locked, has received its first update in three and a half years!

Daniel Laberge has forked the project and released a version 1.2 update which allows you to specify which status should be set when the workstation locks. Get it while it’s awesome (always)!

by Chris at 2013-02-07 21:56

2012-01-08

How do you generate the tangent vectors, which represent which way the texture axes on a textured triangle are facing?

Hitting up Google tends to produce articles like this one, or maybe even that exact one. I've seen others linked too; the basic formulae tend to be the same. Have you looked at what you're pasting into your code, though? Have you noticed that you're using the T coordinates to calculate the S vector, and vice versa? Look at the underlying math and you'll find that it's because that's what happens when you assume the normal, S vector, and T vector form an orthonormal matrix and attempt to invert it: in a sense you're not really using the S and T vectors, but rather vectors perpendicular to them.

But that's fine, right? I mean, this is an orthogonal matrix, and they are perpendicular to each other, right? Well, does your texture project onto the triangle with the texture axes at right angles to each other, like a grid?


... Not always? Well, you might have a problem then!

So, what's the real answer?

Well, what do we know? First, translating the vertex positions will not affect the axial directions. Second, scrolling the texture will not affect the axial directions.

So, for triangle (A,B,C), with coordinates (x,y,z,t), we can create a new triangle (LA,LB,LC) and the directions will be the same:

LA = A - A = (0,0,0)
LB = B - A
LC = C - A

We also know that both axis directions are on the same plane as the points, so to resolve that, we can convert this into a local coordinate system and force one axis to zero. Taking LB as the local X axis and the triangle normal as the local Z axis, as the code below does:

localX = LB / |LB|
localZ = (LB × LC) / |LB × LC|
localY = localX × localZ

Now we need triangle (Origin, PLB, PLC) in this local coordinate space. We know PLB[y] is zero since LB was used as the X axis:

PLB = (LB · localX, 0)
PLC = (LC · localX, LC · localY)

Now we can solve this. Remember that PLB[y] is zero, so for the S axis (using the s components of the texture coordinate deltas LBT and LCT):

S[x] = LBT[s] / PLB[x]
S[y] = (LCT[s] - S[x] · PLC[x]) / PLC[y]

Do this for both axes and you have your correct texture axis vectors, regardless of the texture projection. You can then multiply the results by your tangent-space normalmap, normalize the result, and have a proper world-space surface normal.

As always, the source code spoilers:

terVec3 lb = ti->points[1] - ti->points[0];
terVec3 lc = ti->points[2] - ti->points[0];
terVec2 lbt = ti->texCoords[1] - ti->texCoords[0];
terVec2 lct = ti->texCoords[2] - ti->texCoords[0];

// Generate local space for the triangle plane
terVec3 localX = lb.Normalize2();
terVec3 localZ = lb.Cross(lc).Normalize2();
terVec3 localY = localX.Cross(localZ).Normalize2();

// Determine X/Y vectors in local space
float plbx = lb.DotProduct(localX);
terVec2 plc = terVec2(lc.DotProduct(localX), lc.DotProduct(localY));

terVec2 tsvS, tsvT;

tsvS[0] = lbt[0] / plbx;
tsvS[1] = (lct[0] - tsvS[0]*plc[0]) / plc[1];
tsvT[0] = lbt[1] / plbx;
tsvT[1] = (lct[1] - tsvT[0]*plc[0]) / plc[1];

ti->svec = (localX*tsvS[0] + localY*tsvS[1]).Normalize2();
ti->tvec = (localX*tsvT[0] + localY*tsvT[1]).Normalize2();


There's an additional special case to be aware of: Mirroring.

Mirroring across an edge can cause wild changes in a vector's direction, possibly even degenerating it. There isn't a clear-cut solution to these, but you can work around the problem by snapping the vector to the normal, effectively cancelling it out on the mirroring edge.

Personally, I check the angle between the two vectors, and if they're more than 90 degrees apart, I cancel them, otherwise I merge them.
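
In code, that check boils down to a dot product sign test. A rough sketch of the idea (my rendering, in C# with System.Numerics rather than the engine's own vector types):

using System.Numerics;

static class TangentUtil
{
    public static Vector3 MergeTangents(Vector3 a, Vector3 b, Vector3 normal)
    {
        // more than 90 degrees apart: cancel by snapping to the normal
        if (Vector3.Dot(a, b) < 0f)
            return normal;

        // otherwise, merge and renormalize
        return Vector3.Normalize(a + b);
    }
}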

by OneEightHundred (noreply@blogger.com) at 2012-01-08 00:23

2011-12-07

Valve's self-shadowing radiosity normal maps concept can be used with spherical harmonics in approximately the same way: integrate over a sphere based on how much light will affect a sample when incoming from numerous sample directions, accounting for collision with other samples due to elevation.

You can store this as three DXT1 textures, though you can improve quality by packing channels with similar spatial coherence. Coefficients 0, 2, and 6 in particular tend to pack well, since they're all dominated primarily by directions aimed perpendicular to the texture.

I use the following packing:

  • Texture 1: Coefs 0, 2, 6
  • Texture 2: Coefs 1, 4, 5
  • Texture 3: Coefs 3, 7, 8

You can reference an earlier post on this blog for code on how to rotate an SH vector by a matrix, in turn allowing you to get it into texture space. Once you've done that, simply multiply each SH coefficient from the self-shadowing map by the SH coefficients created from your light source (also covered in the previous post) and add them together, as sketched below.
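
The combine step the last paragraph describes is effectively a 9-coefficient dot product per texel. A quick illustration of the idea (mine, shown CPU-side in C# rather than in a shader):

static float ShadowedRadiance(float[] shadowCoefs, float[] lightCoefs)
{
    // both arrays hold the 9 band 0-2 SH coefficients,
    // assumed to be in the same (texture) space
    var radiance = 0f;
    for (var i = 0; i < 9; i++)
        radiance += shadowCoefs[i] * lightCoefs[i];
    return radiance;
}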

by OneEightHundred (noreply@blogger.com) at 2011-12-07 18:39