sn.printf.net

2022-11-15

After reviewing the code for the simple YAML parser I wrote, I decided it was getting a little messy, so before continuing I took some time to refactor it a little bit.

The simplest thing to do was to separate the serialisation and the deserialisation into separate classes, and simply call those from within the existing methods on the YamlConvert class. This approach tends to be what other JSON and YAML libraries do, with added functionality such as being able to control aspects of the serialisation/deserialisation process for specific types.

I currently don’t need, or want, to do that, as I’m taking a much more brute force approach - however it is something to consider for a future refactor. Maybe.

I ended up with the following for the YamlConvert:

public static class YamlConvert
{
    private static YamlSerialiser Serialiser;
    private static YamlDeserialiser Deserialiser;
    
    static YamlConvert()
    {
        Serialiser = new YamlSerialiser();
        Deserialiser = new YamlDeserialiser();
    }
    
    public static string Serialise(YamlHeader header)
    {
        return Serialiser.Serialise(header);
    }

    public static YamlHeader Deserialise(string filePath)
    {
        if (!File.Exists(filePath)) throw new FileNotFoundException("Unable to find specified file", filePath);

        var content = File.ReadAllLines(filePath);

        return Deserialise(content);
    }

    public static YamlHeader Deserialise(string[] rawHeader)
    {
        return Deserialiser.Deserialise(rawHeader);
    }
}

It works quite well, as it did before, and looks a lot better. There is no dependency configuration to worry about; as I mentioned above, I’m not worried about swapping out the serialisation/deserialisation process at any time.

2022-11-15 00:00

2022-07-30

Previously we left off with a method which could parse the YAML header in one of our markdown files, and it was collecting each line between the --- header marker, for further processing.

One of the main requirements for the overall BlogHelper9000 utility is to be able to standardise the YAML headers in each source markdown file for a post. Some of the posts had a mix of different tags that were essentially doing the same thing, so one of the aims is to be able to collect those and transform the values into the correct tags.

In order to achieve this, we can specify a collection of the valid header properties up front, and also a collection of the ‘other’ properties that we find, which we can hold onto for later in the process, once we’ve written the code to handle those properties. The YamlHeader class has already been defined, and we can use a little reflection to load that class up and pick the properties out.

private static Dictionary<string, object?> GetYamlHeaderProperties(YamlHeader? header = null)
{
    var yamlHeader = header ?? new YamlHeader();
    return yamlHeader.GetType()
        .GetProperties(BindingFlags.DeclaredOnly | BindingFlags.Public | BindingFlags.Instance)
        .Where(p => p.GetCustomAttribute<YamlIgnoreAttribute>() is null)
        .ToDictionary(p =>
        {
            var attr = p.GetCustomAttribute<YamlNameAttribute>();

            return attr is not null ? attr.Name.ToLower() : p.Name.ToLower();
        }, p => p.GetValue(yamlHeader, null));
}

We need to be careful not to collect properties that aren’t part of the YAML header in the markdown files, but that exist on the YamlHeader class for use in further processing - such as holding the ‘extra’ properties that we’ll need to match up with their valid counterparts in a later step. For that we have the custom YamlIgnoreAttribute, which ensures we drop the properties we don’t care about. We also need to be able to match up C# property names with the actual YAML header names, so we also have the YamlNameAttribute to handle this.
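
As a rough idea of what those two attributes look like (a minimal sketch; the definitions in the repo may differ slightly):

[AttributeUsage(AttributeTargets.Property)]
public sealed class YamlIgnoreAttribute : Attribute
{
}

[AttributeUsage(AttributeTargets.Property)]
public sealed class YamlNameAttribute : Attribute
{
    public YamlNameAttribute(string name) => Name = name;

    // The YAML property name to use instead of the C# property name.
    public string Name { get; }
}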

Then we just need a way of parsing the individual lines and pulling the header name and the value out.

(string property, string value) ParseHeaderTag(string tag)
{
    tag = tag.Trim();
    var index = tag.IndexOf(':');
    var property = tag.Substring(0, index);
    var value = tag.Substring(index+1).Trim();
    return (property, value);
}

Here we just return a simple tuple after doing some simple substring manipulation, which is greatly helped by the header and its value always being separated by ‘:’.

Then if we put all that together we can start to parse the header properties.

private static YamlHeader ParseYamlHeader(IEnumerable<string> yamlHeader)
{
    var parsedHeaderProperties = new Dictionary<string, object>();
    var extraHeaderProperties = new Dictionary<string, string>();
    var headerProperties = GetYamlHeaderProperties();

    foreach (var line in yamlHeader)
    {
        var propertyValue = ParseHeaderTag(line);

        if (headerProperties.ContainsKey(propertyValue.property))
        {
            parsedHeaderProperties.Add(propertyValue.property, propertyValue.value);
        }
        else
        {
            extraHeaderProperties.Add(propertyValue.property, propertyValue.value);
        }
    }

    return ToYamlHeader(parsedHeaderProperties, extraHeaderProperties);

All we need to do is set up some dictionaries to hold the header properties, get the dictionary of valid header properties, and then loop through each line, parsing the header tag and checking whether the property is a ‘valid’ one that we definitely know we want to keep, or one we need to hold for further processing. You’ll notice that the code above is missing an end brace: this is deliberate, because the ParseHeaderTag and ToYamlHeader methods are both nested methods.
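
For completeness, here is a hedged sketch of what the nested ToYamlHeader method could look like - the real implementation in the repo may differ, and only the simple string properties are mapped here:

YamlHeader ToYamlHeader(Dictionary<string, object> parsed, Dictionary<string, string> extras)
{
    var header = new YamlHeader();
    var properties = header.GetType()
        .GetProperties(BindingFlags.DeclaredOnly | BindingFlags.Public | BindingFlags.Instance);

    foreach (var property in properties)
    {
        var attr = property.GetCustomAttribute<YamlNameAttribute>();
        var name = (attr is not null ? attr.Name : property.Name).ToLower();

        if (parsed.TryGetValue(name, out var value) && property.PropertyType == typeof(string))
        {
            property.SetValue(header, value?.ToString());
        }
    }

    // The 'extras' dictionary is held for the later fix-up step; exactly how it is
    // attached to the YamlHeader isn't shown in this post.
    return header;
}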

Reading through the code to write this post has made me realise that we can do some refactoring to make this look a little nicer.

So we’ll look at that next.

2022-07-30 00:00

2022-07-22

The next thing to do to get BlogHelper9000 functional is to write a command which provides some information about the posts in the blog. I want to know:

  • How many published posts there are
  • How many drafts there are
  • A short list of recent posts
  • How long it’s been since a post was published

I also know that I want to introduce a command which will allow me to fix the metadata in the posts, which is a little messy. I’ve been inconsistently blogging since 2007, originally starting off on a self-hosted python blog whose name I’ve forgotten, before migrating to Wordpress, then to a short-lived .net static site generator, before switching over to Jekyll.

Obviously, Markdown powered blogs like Jekyll have to provide non-markdown metadata in each post, and for Jekyll (and most markdown powered blogs) that means: YAML.

Parse that YAML

There are a couple of options when it comes to parsing YAML. One would be to use YamlDotNet, a stable library which conforms to v1.1 and v1.2 of the YAML specifications.

But where is the fun in that?

I’ve defined a POCO called YamlHeader which I’m going to use as the in-memory object to represent the YAML metadata header at the top of a markdown file.

If we take a leaf from different JSON converters, we can define a YamlConvert class like this:

public static class YamlConvert
{
    public static string Serialise(YamlHeader header)
    {
    }

    public static YamlHeader Deserialise(string filePath)
    {
    }
}

With this, we can easily serialise a YamlHeader into a string, and deserialise a file into a YamlHeader.
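
Serialisation isn’t really covered in this post, but as a naive sketch of the idea (an assumption, not the final implementation), it could be as simple as reflecting over the public properties and writing them out between the --- markers:

public static string Serialise(YamlHeader header)
{
    var builder = new StringBuilder();
    builder.AppendLine("---");

    foreach (var property in header.GetType().GetProperties())
    {
        // Collections like Tags would need their own formatting - this is only a sketch.
        var value = property.GetValue(header);
        if (value is not null)
        {
            builder.AppendLine($"{property.Name.ToLower()}: {value}");
        }
    }

    builder.AppendLine("---");
    return builder.ToString();
}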

Deserialise

Deserialising is the slightly more complicated of the two, so let’s start with that.

Our first unit test looks like this:

    [Fact]
    public void Should_Deserialise_YamlHeader()
    {
        var yaml = @"---
layout: post
title: 'Dynamic port assignment in Octopus Deploy'
tags: ['build tools', 'octopus deploy']
featured_image: /assets/images/posts/2020/artem-sapegin-b18TRXc8UPQ-unsplash.jpg
featured: false
hidden: false
---
post content that's not parsed";
        
        var yamlObject = YamlConvert.Deserialise(yaml.Split(Environment.NewLine));

        yamlObject.Layout.Should().Be("post");
        yamlObject.Tags.Should().NotBeEmpty();
    }

This immediately requires us to add an overload for Deserialise to the YamlConvert class, which takes a string[]. This means our implementation for the first Deserialise method is simply:

public static YamlHeader Deserialise(string filePath)
{
    if (!File.Exists(filePath)) throw new FileNotFoundException("Unable to find specified file", filePath);

    var content = File.ReadAllLines(filePath);

    return Deserialise(content);
}

Now we get into the fun part. And a big caveat: I’m not sure if this is the best way of doing this, but it works for me and that’s all I care about.

Anyway. A YAML header block is identified by a single line containing only ---, followed by n lines of YAML, and ends with another single line containing only ---. You can see this in the unit test above.

The algorithm I came up with goes like this:

for each line in lines:
  if line is '---' then
    if header start marker not found then
      mark header start as found
      continue
    else
      break out of loop
  else
    store line
parse each stored header line

So in a nutshell, it loops through each line in the file, looks for the first --- to identify the start of the header, and then gathers the lines for further processing until it hits another ---.

Translated into C#, the code looks like this:

public static YamlHeader Deserialise(string[] fileContent)
{
    var headerStartMarkerFound = false;
    var yamlBlock = new List<string>();

    foreach (var line in fileContent)
    {
        if (line.Trim() == "---")
        {
            if (!headerStartMarkerFound)
            {
                headerStartMarkerFound = true;
                continue;
            }

            break;
        }

        yamlBlock.Add(line);
    }
        
    return ParseYamlHeader(yamlBlock);
}

This is fairly straightforward, and isn’t where I think some of the problems with the way it works actually are - all that is hidden behind ParseYamlHeader, and is worth a post on its own.

2022-07-22 00:00

2022-07-14

In the introductory post to this series, I ended with issuing a command to initialise a new console project, BlogHelper9000. It doesn’t matter how you create your project, be it from Visual Studio, Rider or the terminal, the end result is the same, as the templates are all the same.

With the new .net 6 templates, the resulting Program.cs is somewhat sparse: if you discount the single comment, all you get in the file is a Console.WriteLine("Hello, World!");, thanks to all the new wizardry in the latest versions of the language and the framework.

Thanks to this new-fangled sorcery, the app still has a static Main method, you just don’t need to see it, and as such the args string array is still there. For very simple applications, this is all you really need. However, once you get past a few commands, with a few optional flags, things can get complicated, fast. This can turn into a maintenance headache.

In the past I’ve written my own command line parsing abstractions, I’ve used Mono.Options and other libraries, and I think I’ve finally settled on Oakton as my go to library for quickly and easily adding command line parsing to a console application. It’s intuitive, easy to use and easy to maintain. This means you can easily introduce it into a team environment and have everyone understand it immediately.

Setup Command loading

After following Oakton’s getting started documentation, you can see how easy it is to get going with a basic implementation. I recommend introducing the ability to execute both synchronous and asynchronous commands, which you can achieve with a small tweak to Program.cs that takes the top-level statements in .net 6 into consideration, like this:

using System.Reflection;
using Oakton;

var executor = CommandExecutor.For(_ =>
{
    _.RegisterCommands(typeof(Program).GetTypeInfo().Assembly);
});

var result = await executor.ExecuteAsync(args);
return result;

In .net 5, or if you don’t like top-level statements and have a static int Main, you can make it static Task<int> Main instead and return executor.ExecuteAsync rather than awaiting it.
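
For reference, a minimal sketch of that non-top-level variant, assuming the same Oakton setup as above:

using System.Reflection;
using System.Threading.Tasks;
using Oakton;

internal class Program
{
    private static Task<int> Main(string[] args)
    {
        var executor = CommandExecutor.For(_ =>
        {
            _.RegisterCommands(typeof(Program).GetTypeInfo().Assembly);
        });

        // Return the task directly instead of awaiting it.
        return executor.ExecuteAsync(args);
    }
}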

Base classes

In some console applications, different commands can have the same optional flags, and I like to put mine in a class called BaseInput. Because I know I’m going to have several commands in this application, I’m going to add some base classes so that the different commands can share some of the same functionality. I’ve also used this in the past to, for example, create a database instance in the base class, which is then passed into each inheriting command. It’s also a good place to add some common argument/flag validation.
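
As a rough sketch of the kind of thing BaseInput holds (the real class in the repo has more to it, and the default value here is an assumption):

public class BaseInput
{
    // Oakton convention: the 'Flag' suffix makes this an optional command line flag.
    public string BaseDirectoryFlag { get; set; } = Directory.GetCurrentDirectory();
}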

What I like to do is have an abstract base class, which inherits from the Oakton command, and add an abstract Run method to it, and usually a virtual bool ValidateInput too; these can then be overridden in our actual Command implementations and give us a lot of nice functionality automated for us in a way that can be used across all Commands.

Some of the details of these classes are elided to stop this from being a super long post; you can see all the details in the Github repo.

public abstract class BaseCommand<TInput> : OaktonCommand<TInput>
    where TInput : BaseInput
{
    public override bool Execute(TInput input)
    {
        return ValidateInput(input) && Run(input);
    }

    protected abstract bool Run(TInput input);

    protected virtual bool ValidateInput(TInput input)
    {
        /* ... */
    }
}

This ensures that all the Commands we implement can optionally decide to validate the inputs that they take in, simply by overriding ValidateInput.

The async version is exactly the same… except async:

public abstract class AsyncBaseCommand<TInput> : OaktonAsyncCommand<TInput>
    where TInput : BaseInput
{
    public override async Task<bool> Execute(TInput input)
    {
        return await ValidateInput(input) && await Run(input);
    }

    protected abstract Task<bool> Run(TInput input);

    protected virtual Task<bool> ValidateInput(TInput input)
    {
        /* ... */
    }
}

There is an additional class I’ve not yet shown, which adds some further reusable functionality between each base class, and that’s the BaseHelper class. I’ve got a pretty good idea that any commands I write for the app are going to operate on posts or post drafts, which in Jekyll are stored in _posts and _drafts respectively. Consequently, the commands need an easy way of having these paths to hand, so a little internal helper class is a good place to put this shared logic.

internal class BaseHelper<TInput> where TInput : BaseInput
{
    public string DraftsPath { get; }

    public string PostsPath { get;  }

    private BaseHelper(TInput input)
    {
        DraftsPath = Path.Combine(input.BaseDirectoryFlag, "_drafts");
        PostsPath = Path.Combine(input.BaseDirectoryFlag, "_posts");
    }

    public static BaseHelper<TInput> Initialise(TInput input)
    {
        return new BaseHelper<TInput>(input);
    }

    public bool ValidateInput(TInput input)
    {
        if (!Directory.Exists(DraftsPath))
        {
            ConsoleWriter.Write(ConsoleColor.Red, "Unable to find blog _drafts folder");
            return false;
        }

        if (!Directory.Exists(PostsPath))
        {
            ConsoleWriter.Write(ConsoleColor.Red, "Unable to find blog _posts folder");
            return false;
        }

        return true;
    }
}

This means that our base class implementations can now become:

private BaseHelper<TInput> _baseHelper = null!;
protected string DraftsPath => _baseHelper.DraftsPath;
protected string PostsPath => _baseHelper.PostsPath;

public override bool Execute(TInput input)
{
    _baseHelper = BaseHelper<TInput>.Initialise(input);
    return ValidateInput(input) && Run(input);
}

protected virtual bool ValidateInput(TInput input)
{
    return _baseHelper.ValidateInput(input);
}

Note the null!, where I am telling the compiler to ignore the fact that _baseHelper is being initialised to null, as I know better.

This allows each command implementation to hook into this method and validate itself automatically.

First Command

Now that we have some base classes to work with, we can start to write our first command. If you check the history in the repo, you’ll see this wasn’t the first command I actually wrote… but it probably should have been. In any case, it only serves to illustrate our first real command implementation.

public class InfoCommand : BaseCommand<BaseInput>
{
    public InfoCommand()
    {
        Usage("Info");
    }

    protected override bool Run(BaseInput input)
    {
        var posts = LoadPosts();
        var blogDetails = new Details();

        DeterminePostCount(posts, blogDetails);
        DetermineDraftsInfo(posts, blogDetails);
        DetermineRecentPosts(posts, blogDetails);
        DetermineDaysSinceLastPost(blogDetails);

        RenderDetails(blogDetails);

        return true;
    }

    /**...*/
}

LoadPosts is a method in the base class which is responsible for loading the posts into memory, so that we can process them and extract meaningful details about the posts. We store this information in a Details class, which is what we ultimately use to render the details to the console. You can see the details of these methods in the github repository; they all boil down to simple Linq queries.
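
As an illustration of the sort of thing they do (the Details and Post members named here are assumptions for the sketch, not the exact names in the repo):

private static void DeterminePostCount(IList<Post> posts, Details blogDetails)
{
    // A simple Linq query over the loaded posts; 'IsPublished' stands in for
    // however the real Post model distinguishes published posts from drafts.
    blogDetails.PostCount = posts.Count(p => p.IsPublished);
    blogDetails.DraftCount = posts.Count(p => !p.IsPublished);
}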

Summary

In this post we’ve seen how to setup Oakton and configure a base class to extend the functionality and give us more flexibility, and an initial command. In subsequent posts, we’ll cover more commands and I’ll start to use the utility to tidy up metadata across all the posts in the blog and fix things like images for posts.

2022-07-14 00:00

2022-06-09

Normally you can’t broadly stop someone from being able to send you mail. However, there is a loophole.

You can file a PS Form 1500 and say that the advertisement you received from them made you horny. No questions asked prohibitory order.

🙅‍♂️🥵📫

by Factor Mystic at 2022-06-09 00:37

2022-03-11

I just had to setup my vimrc and vimfiles on a new laptop for work, and had some fun with Vim, mostly as it’s been years since I had to do it. I keep my vimfiles folder in my github, so I can grab it wherever I need it.

To recap, one of the places that Vim will look for things is $HOME/vimfiles/vimrc, where $HOME is actually the same as %USERPROFILE%. In most corporate environments, the %USERPROFILE% is actually stored in a networked folder location, to enable roaming profile support and help when a user gets a new computer.

So you can put your vimfiles there, but, it’s a network folder - it’s slow to start an instance of Vim. Especially if you have a few plugins.

Instead, what you can do is to edit the _vimrc file in the Vim installation folder (usually in C:\Program Files (x86)\vim), delete the entire contents and replace it with:

set rtp+=C:\path\to\your\vimfiles
set viminfo+=nC:\path\to\your\vimfiles\or\whatever
source C:\path\to\your\vimfiles\vimrc

What this does is:

  1. Adds the path to your vimfiles to the runtime path
  2. Tells vim where to store/update the viminfo file (which stores useful history state amongst other things)
  3. Sources your vimrc file and uses that

This post largely serves as a memory aid for myself, so that when I need to do this again in future I won’t spend longer than necessary googling how to do it - but I hope it helps someone else too.

2022-03-11 00:00

2022-03-04

Recently I was inspired by @buhakmeh’s blog post, Supercharge Blogging With .NET and Ruby Frankenblog to write something similar, both as an exercise and excuse to blog about something, and as a way of tidying up the metadata on my existing blog posts and adding header images to old posts.

High level requirements

The initial high level requirements I want to support are:

  1. Cross-platform. This blog is jekyll based, and as such is written in markdown. Any tool I write for automation purposes should be cross-platform.
  2. Easily add posts from the command line, and have some default/initial yaml header metadata automatically added.
  3. See a high level overview of the current status of my blog. This should include things like the most recent post, how many days I’ve been lazy and not published a post, available drafts etc
  4. Publish posts from the command line, which should update the post with published status and add the published date to the yaml header and filename.
  5. Create a customised post header for each post on the blog, containing some kind of blog branding template and the post title, and update or add the appropriate yaml header metadata to each post. This idea also comes from another of @buhakmeh’s posts.
  6. The blog has many years of blog posts, spread across several different blogging platforms before settling on Jekyll. As such, some of the yaml metadata for each blog post is… not consistent. Some effort should go into correcting this.
  7. Automatically notify Twitter of published posts.

Next steps

The next series of posts will cover implementing the above requirements… not necessarily in that order. First I will go over setting up the project and configuring Oakton.

After that I will probably cover implementing fixes to the existing blog metadata, as I think that is going to be something that will be required in order for any sort of Info function to work properly, as all of the yaml metadata will need to be consistent.

Then I think I’ll tackle the image stuff, which should be fairly interesting, and should give a nice look to the existing posts, as having prominent images for posts is part of the theme for the blog, which I’ve not really taken full advantage of.

I’ll try to update this post with links to future posts, or else make it all a big series.

dotnet new console --name BlogHelper9000

2022-03-04 00:00

2022-01-11

At work, we have recently been porting our internal web framework into .net 6. Yes, we are late to the party on this, for reasons. Suffice it to say I currently work in an inherently risk averse industry.

Anyway, one part of the framework is responsible for getting reports from SSRS.

The way it did this was to use a wrapper class around a SOAP client generated from good old ReportService2005.asmx?wsdl, using our faithful friend svcutil.exe. The wrapper class used some TaskCompletionSource magic on the events in the client to make client.LoadReportAsync and the other *Async methods actually async, as the generated client was not truly async.

Fast forward to the modern times, and we need to upgrade it. How do we do that?

Obviously, Microsoft are a step ahead: svcutil has a dotnet version - dotnet-svcutil. We can install it and get going:

dotnet tool install --global dotnet-svcutil

Once installed, we can call it against the endpoint:

Make sure you call this command in the root of the project where the service should go:
dotnet-svcutil http://server/ReportServer/ReportService2005.asmx?wsdl

In our wrapper class, the initialisation of the client has to change slightly, because the generated client is different to the original svcutil implementation. Looking at the diff between the two files, it’s because the newer version of the client uses more modern .net functionality.

The wrapper class constructor has to be changed slightly:

public Wrapper(string url, NetworkCredential credentials)
{
    var binding = new BasicHttpBinding(BasicHttpSecurityMode.TransportCredentialOnly);
    binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Ntlm;
    binding.MaxReceivedMessageSize = 10485760; // this is a 10mb limit
    var address = new EndpointAddress(url);

    _client = new ReportExecutionServiceSoapClient(binding, address);
    _client.ClientCredentials.Windows.AllowedImpersonationLevel = TokenImpersonationLevel.Impersonation;
    _client.ClientCredentials.Windows.ClientCredential = credentials;
}

Then, the code which actually generates the report can be updated to remove all of the TaskCompletionSource, which actually simplifies it a great deal:

public async Task<byte[]> RenderReport(string reportPath, string reportFormat, ParameterValue[] parameterValues)
{
    await _client.LoadReportAsync(null, reportPath, null);
    await _client.SetExecutionParametersAsync(null, null, parameterValues, "en-gb");
    var deviceInfo = @"<DeviceInfo><Toolbar>False</ToolBar></DeviceInfo>";
    var request = new RenderRequest(null, null, reportFormat, deviceInfo);
    var response = await _client.RenderAsync(request);
    return response.Result;
}

You can then do whatever you like with the byte[], like return it in an IActionResult or load it into a MemoryStream and write it to disk as the file.

Much of the detail of this post is sourced from various places around the web, but I’ve forgotten all of the places I gleaned the information from.

2022-01-11 00:00

2021-12-22

who is eating cereal anymore? Literally don’t think I’ve seen someone eat a bowl of cereal in twenty years

🥣🤔

by Factor Mystic at 2021-12-22 03:23

2021-10-26

Recently we realised that we had quite a few applications being deployed through Octopus Deploy, a number of Environments, and a number of Channels, and that managing the ports being used in Dev/QA/UAT across different servers/channels was becoming… problematic.

When looking at this problem, it’s immediately clear that you need some way of dynamically allocating a port number on each deployment. This blog post from Paul Stovell shows the way, using a custom Powershell build step.

As we’d lost track of which sites were using which ports, and we also have ad-hoc websites in IIS that aren’t managed by Octopus Deploy, we thought that asking IIS “Hey, what ports are the sites you know about using?” might be a way forward. We also had the additional requirement that on some of our servers we might have some arbitrary services also using a port, and we might bump into a situation where a port was chosen that was already being used by a non-IIS application/website.

Researching the first situation, it’s quickly apparent that you can do this in Powershell, using the Webadministration module. Based on the answers to this question on Stackoverflow, we came up with this:

Import-Module Webadministration

function Get-IIS-Used-Ports()
{
    $Websites = Get-ChildItem IIS:\Sites

    $ports = foreach($Site in $Websites)
    {
        $Binding = $Site.bindings
        [string]$BindingInfo = $Binding.Collection
        [string]$Port = $BindingInfo.SubString($BindingInfo.IndexOf(":")+1,$BindingInfo.LastIndexOf(":")-$BindingInfo.IndexOf(":")-1)

        $Port -as [int]
    }

    return $ports
}

To get the list of ports on a machine that are not being used is also fairly straightforward in Powershell:

function Get-Free-Ports()
{
    $availablePorts = @(49000..65000)
    $usedPorts = @(Get-NetTCPConnection | Select -ExpandProperty LocalPort | Sort -Descending | Where { $_ -ge 49000})

    $unusedPorts = foreach($possiblePort in $availablePorts)
    {
        $unused = $possiblePort -notin $usedPorts
        if($unused)
        {
            $possiblePort
        }
    }

    return $unusedPorts
}

With those two functions in hand, you can work out what free ports are available to be used as the ‘next port’ on a server. It’s worth pointing out that if a site in IIS is stopped, then IIS won’t allow that port to be used in another website (in IIS), but the port also doesn’t show up as a used port in netstat -a, which is kind of what Get-NetTCPConnection does.

function Get-Next-Port()
{
    $iisUsedPorts = Get-IIS-Used-Ports
    $freePorts = Get-Free-Ports

    $port = $freePorts | Where-Object { $iisUsedPorts -notcontains $_} | Sort-Object | Select-Object -First 1

    Set-OctopusVariable -Name "Port" -Value "$port"
}

Then you just have to call it at the end of the script:

Get-Next-Port

You’d also want to have various Write-Host or other logging messages so that you get some useful output in the build step when you’re running it.

2021-10-26 00:00

2021-05-06

If you found this because you have a build server which is ‘offline’, without any external internet access because of reasons, and you can’t get your build to work because dotnet fails to restore the tool you require for your build process because of said lack of external internet access, then this is for you.

In hindsight, this may be obvious for most people, but it wasn’t for me, so here it is.

In this situation, you just need to shy away from local tools completely, because as of yet I’ve been unable to find any way of telling dotnet not to try to restore them, and they fail every build.

Instead, I’ve installed the tool(s) as a global tool, in a specific folder, e.g. C:\dotnet-tools, which I’ve then added to the system path on the server. You may need to restart the build server for it to pick up the changes to the environment variable.
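
For example, something along these lines (dotnet-svcutil is just a stand-in for whatever tool your build needs, and you’ll need to run it somewhere that can reach your package feed):

dotnet tool install dotnet-svcutil --tool-path C:\dotnet-tools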

One challenge that remains is how to ensure the dotnet tools are consistent on both the developer machine, and the build server. I leave that as an exercise for the reader.

2021-05-06 00:00

2021-04-01

I’m leaving this here so I can find it again easily.

We had a problem updating the Visual Studio 2019 Build Tools on a server, after updating an already existing offline layout.

I won’t go into that here, because it’s covered extensively on Microsoft’s Documentation website.

The installation kept failing, even when using --noweb. It turns out that when your server is completely cut off from the internet, as was the case here, you also need to pass --noUpdateInstaller.

This is because (so it would seem) even though --noweb correctly tells the installer to use the offline cache, it doesn’t prevent the installer from trying to update itself, which will obviously fail in a totally disconnected environment.

2021-04-01 00:00

2021-01-03

Since a technical breakdown of how Betsy does texture compression was posted, I wanted to lay out how the compressors in Convection Texture Tools (CVTT) work, as well as provide some context of what CVTT's objectives are in the first place to explain some of the technical decisions.

First off, while I am very happy with how CVTT has turned out, and while it's definitely a production-quality texture compressor, providing the best compressor possible for a production environment has not been its primary goal. Its primary goal is to experiment with compression techniques to improve the state of the art, particularly finding inexpensive ways to hit high quality targets.

A common theme that wound up manifesting in most of CVTT's design is that encoding decisions are either guided by informed decisions, i.e. models that relate to the problem being solved, or are exhaustive.  Very little of it is done by random or random-like searching. Much of what CVTT exists to experiment with is figuring out techniques which amount to making those informed decisions.

CVTT's ParallelMath module, and choice of C++

While there's some coincidence in CVTT having a similar philosophy to Intel's ISPC compressor, CVTT's SPMD-style design was actually motivated by it being built as a port of the skeleton of DirectXTex's HLSL BC7 compressor.

I chose to use C++ instead of ISPC for three main reasons:
  • It was easier to develop it in Visual Studio.
  • It was easier to do operations that didn't parallelize well.  This turned out to matter with the ETC compressor in particular.
  • I don't trust in ISPC's longevity, in particular I think it will be obsolete as soon as someone makes something that can target both CPU and GPU, like either a new language that can cross-compile, or SPIR-V-on-CPU.

Anyway, CVTT's ParallelMath module is kind of the foundation that everything else is built on.  Much of its design is motivated by SIMD instruction set quirks, and a desire to maintain compatibility with older instruction sets like SSE2 without sacrificing too much.

Part of that compatibility effort is that most of CVTT's ops use a UInt15 type.  The reason for UInt15 is to handle architectures (like SSE2!) that don't support unsigned compares, min, or max, which means performing those operations on a 16-bit number requires flipping the high bit on both operands.  For any number where we know the high bit is zero for both operands, that flip is unnecessary - and a huge number of operations in CVTT fit in 15 bits.
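
As a small illustration of that trick (written in C# here purely for illustration - it is not CVTT code), a signed compare gives the unsigned ordering once the high bit of both operands is flipped:

static bool UnsignedLessThan16(ushort a, ushort b)
{
    // Flipping the sign bit maps 0..65535 monotonically onto -32768..32767,
    // so a signed comparison of the flipped values gives the unsigned ordering.
    short fa = unchecked((short)(a ^ 0x8000));
    short fb = unchecked((short)(b ^ 0x8000));
    return fa < fb;
}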

The compare flag types are basically vector booleans, where either all bits are 1 or all bits are 0 for a given lane - There's one type for 16-bit ints, and one for 32-bit floats, and they have to be converted since they're different widths.  Those are combined with several utility functions, some of which, like SelectOrZero and NotConditionalSet, can elide a few operations.

The RoundForScope type is a nifty dual-use piece of code.  SSE rounding modes are determined by the CSR register, not per-op, so RoundForScope when targeting SSE will set the CSR, and then reset it in its destructor.  For other architectures, including the scalar target, the TYPE of the RoundForScope passed in is what determines the operation, so the same code works whether the rounding is per-op or per-scope.

While the ParallelMath architecture has been very resistant to bugs for the most part, where it has run into bugs, they've mostly been due to improper use of AnySet or AllSet - Cases where parallel code can behave improperly because lanes where the condition should exclude it are still executing, and need to be manually filtered out using conditionals.

BC1-7 common themes

All of the desktop formats that CVTT supports are based on interpolation.  S3TC RGB (a.k.a. DXT1) for instance defines two colors (called endpoints), then defines all pixels as being either one of those two colors, or a color that is part-way between those two colors, for each 4x4 block.  Most of the encoding effort is spent on determining what the two colors should be.
 
You can read about a lot of this on Simon Brown's post outlining the compression techniques used by Squish, one of the pioneering S3TC compressors, which in turn is the basis for the algorithm used by CVTT's BC1 compressor.

Principal component analysis

Principal component analysis determines, based on a set of points, what the main axis is that the colors are aligned along.  This gives us a very good guess of what the initial colors should be, simply using the colors that are the furthest along that axis, but it isn't necessarily ideal.

Endpoint refinement

In BC1 for instance, each color is assigned to one of four possible values along the color line.  CVTT solves for that by just finding the color with the shortest distance to each pixel's color.  If the color assignments are known, then it's possible to determine what the color values are that will minimize the sum of the square distance of that mapping.  One round of refinement usually yields slightly better results and is pretty cheap to check.  Two rounds will sometimes yield a slightly better result.

Extrapolation

One problem with using the farthest extents of the principal axis as the color is that the color precision is reduced (quantized) by the format.  In BC1-5, the color is reduced to a 16-bit color with 5 bits of red, 6 bits of green, and 5 bits of blue.  It's frequently possible to achieve a more accurate match by using colors outside of the range so that the interpolated colors are closer to the actual image colors - This sacrifices some of the color range.

CVTT internally refers to these as "tweak factors" or similar, since what they functionally do is make adjustments to the color mapping to try finding a better result.

The number of extrapolation possibilities increases quadratically with the number of indexes.  CVTT will only ever try four possibilities: No insets, one inset on one end (which is two possibilities, one for each end), and one inset on both ends.

BC1 (DXT1)

CVTT's BC1 encoder uses the cluster fit technique developed by Simon Brown for Squish.  It uses the principal axis to determine an ordering of each of the 16 pixels along the color line, and then rather than computing the endpoints from the start and end points, it computes them by trying each possible count of pixels assigned to each endpoint that maintains the original order and still totals 16.  That's a fairly large set of possibilities with a lot of useless entries, but BC1 is fairly tight on bits, so it does take a lot of searching to maximize quality out of it.

BC2 (DXT3)

BC2 uses BC1 for RGB and 4bpp alpha.  There's not much to say here, since it just involves reducing the alpha precision.

BC3 (DXT5)

This one is actually a bit interesting.  DXT5 uses indexed alpha, where it defines two 8-bit alpha endpoints and a 3-bit interpolator per pixel, but it also has a mode where 2 of the interpolators are reserved 0 and 255 and only 6 are endpoint-to-endpoint values.  Most encoders will just use the min/max alpha.  CVTT will also try extrapolated endpoints, and will try for the second mode by assuming that any pixels within 1/10th of the endpoint range of 0 or 255 would be assigned to the reserved endpoints.  The reason for the 1/10th range is that the rounding range of the 6-value endpoints is 1/10th of the range, and it assumes that for any case where the endpoints would include values in that range, it would just use the 8-index mode and there'd be 6 indexes between them anyway.

BC4 and BC5

These two modes are functionally the same as BC3's alpha encoding, with the exception that the signed modes are offset by 128.  CVTT handles signed modes by pre-offsetting them and undoing the offset.

BC7

BC7 has 8 modes of operation and is the most complicated format to encode, but it's actually not terribly more complicated than BC1.  All of the modes do one of two things: They encode 1 to 3 pairs of endpoints that are assigned to specific groupings of pixels for all color channels, referred to as partitions, or they encode one set of endpoints for the entire block, except for one endpoint, which is encoded separately.
 
Here are the possible partitions:

Credit: Jon Rocatis from this post.

Another feature of BC7 are parity bits, where the low bit of each endpoint is specified by a single bit.  Parity bits (P-bit) exist as a way of getting a bit more endpoint precision when there aren't as many available bits as there are endpoint channels without causing the channels to have a different number of bits, something that caused problems with gray discoloration in BC1-3.
 
CVTT will by default just try every partition, and every P-bit combination.

Based on some follow-up work that I'm still experimenting with, a good quality trade-off would be to only check certain subsets.  Among the BC7 subsets, the vast majority of selected subsets fall into only about 16 of the possible ones, and omitting the rest causes very little quality loss.  I'll publish more about that when my next experiment is further along.

Weight-by-alpha issues

One weakness that CVTT's encoder has vs. Monte Carlo-style encoders is that principal component analysis does not work well for modes in BC7 where the alpha and some of the color channels are interpolated using the same indexes.  This is never a problem with BC2 or BC3, which can avoid that problem by calculating alpha first and then pre-weighting the RGB channels.

I haven't committed a solution to that yet, and while CVTT gets pretty good quality anyway, it's one area where it underperforms other compressors on BC7 by a noticeable amount.

Shape re-use

The groupings of pixels in BC7 are called "shapes."

One optimization that CVTT does is partially reuse calculations for identical shapes.  That is, if you look at the 3 subset grouping above, you can notice that many of the pixel groups are the same as some pixel groups in the 2 subset grouping.

To take advantage of that fact, CVTT performs principal component analysis on all unique shapes before performing further steps.  This is a bit of a tradeoff though: It's only an optimization if those shapes are actually used, so it's not ideal for if CVTT were to reduce the number of subsets that it checks.

Weight reconstruction

One important aspect of BC7 is that, unlike BC1-3, it specifies the precision that interpolation is to be done at, as well as the weight values for each index.  However, doing a table lookup for each value in a parallelized index values is a bit slow.  CVTT avoids this by reconstructing the weights arithmetically:

MUInt15 weight = ParallelMath::LosslessCast<MUInt15>::Cast(ParallelMath::RightShift(ParallelMath::CompactMultiply(g_weightReciprocals[m_range], index) + 256, 9));

Coincidentally, doing this just barely fits into 16 bits of precision accurately.

BC6H

BC6H is very similar to BC7, except it's 16-bit floating point.   The floating point part is achieved by encoding the endpoints as a high-precision base and low-precision difference from the base.  Some of the modes that it supports are partitioned similar to BC7, and it also has an extremely complicated storage format where the endpoint bits are located somewhat arbitrarily.
 
There's a reason that BC6H is the one mode that's flagged as "experimental."  Unlike all other modes, BC6H is floating point, but has a very unique quirk: When BC6H interpolates between endpoints, it's done as if the endpoint values are integers, even though they will be bit-cast into floating point values.

Doing that severely complicates making a BC6H encoder, because part of the floating point values are the exponent, meaning that the values are roughly logarithmic.  Unless they're the same, they don't even correlate proportionally with each other, so color values may shift erratically, and principal component analysis doesn't really work.

CVTT tries to do its usual tricks in spite of this, and it sort of works, but it's an area where CVTT's general approach is ill-suited.

ETC1

ETC1 is based on cluster fit, via what's basically a mathematical reformulation of it.

Basically, ETC1 is based on the idea that the human visual system sees color detail less than intensity detail, so it encodes each 4x4 block as a pair of either 4x2 or 2x4 blocks which each encode a color, an offset table ID, and a per-pixel index into the offset table.  The offsets are added to ALL color channels, making them grayscale offsets, essentially.
 

Unique cumulative offsets

What's distinct about ETC compared to the desktop formats, as far as using cluster fit is concerned, is two things: First, the primary axis is always known.  Second, the offset tables are symmetrical, where 2 of the entries are the negation of the other two.
 
The optimal color for a block, not accounting for clamping, will be the average color of the block, offset by 1/16th of the offset assigned to each pixel.  Since half of the offsets negate each other, every pair of pixels assigned to opposing offsets cancel out, causing no change.  This drastically reduces the search space, since many of the combinations will produce identical colors.  Another thing that reduces the search space is that many of the colors will be duplicates after the precision reduction from quantization.  Yet another thing is that in the first mode, the offsets are +2 and +4, which have a common factor, causing many of the possible offsets to overlap, cancelling out even more combinations.

So, CVTT's ETC1 compressor simply evaluates each possible offset from the average color that results in a unique color post-quantization, and picks the best one.  Differential mode works by selecting the best VALID combination of colors, first by checking if the best pair of colors is valid, and failing that, checking all evaluated color combinations.
 

ETC2

ETC2 has 3 additional selectable modes on top of the ETC1 modes.  One, called T mode, contains 4 colors: Color0, Color1, Color1+offset, and Color2+offset.  Another, called H mode, contains Color0+offset, Color0-offset, Color1+offset, and Color1-offset.  The final mode, called planar mode, contains what is essentially a base color and a per-axis offset gradient.

T and H mode

T and H mode both exist to better handle blocks where, within the 2x4 or 4x2 blocks, the colors do not align well along the grayscale axis.  CVTT's T/H mode encoding basically works with that assumption by trying to find where it thinks the poorly-aligned color axes might be.  First, it generates some chrominance coordinates, which are basically 2D coordinates corresponding to the pixel colors projected on to the grayscale plane.  Then, it performs principal component analysis to find the primary chrominance axis.  Then, it splits the block based on which side of the half-way point each pixel is to form two groupings that are referred to internally as "sectors."

From the sectors, it performs a similar process of inspecting each possible offset count from the average to determine the best fit - But it will also record if any colors NOT assigned to the sector can still use one of the results that it computed, which are used later to determine the actual optimal pairing of the results that it computed.

One case that this may not handle optimally is when the pixels in a block ARE fairly well-aligned along the grayscale axis, but the ability of T/H colors to be relatively arbitrary would be an advantage.
 

ETC2 with punch-through, "virtual T mode"

ETC2 supports punchthrough transparency by mapping one of the T or H indexes to transparent.  Both of these are resolved in the same way as T mode.  When encoding punch-through the color values for T mode are Color0, Color1+offset, transparent, Color1-offset, and in H mode, they are Color0+offset, Color0-offset, transparent, and Color1.

Essentially, both have a single color, and another color +/- an offset, there are only 2 differences: First, the isolated color H mode is still offset, so the offset has to be undone.  If that quantizes to a more accurate value, then H mode is better.  Second, the H mode color may not be valid - H mode encodes the table index low bit based on the order of the colors, but unlike when encoding opaque, reordering the colors will affect which color has the isolated value and which one has the pair of values. 

H mode as T mode encoding

One special case to handle with testing H mode is the possibility that the optimal color is the same.  This should be avoidable by evaluating T mode first, but the code handles it properly just to be safe.  Because H mode encodes the table low bit based on a comparison of the endpoints, it may not be possible to select the correct table if the endpoints are the same.  In that case, CVTT uses a fallback where it encodes the block as T mode instead, mapping everything to the color with the pair of offsets.

Planar mode

Planar mode involves finding an optimal combination of 3 values that determine the color of each channel value as O+(H*x)+(V*y).

How planar mode actually works is by just finding the least-squares fit for each of those three values at once.
 
Where error=(reconstructedValue-actualValue)², we want to solve for d(error)/dO=0, d(error)/dH=0, and d(error)/dV=0

All three of these cases resolve to quadratic formulas, so the entire thing is just converted to a system of linear equations and solved.  The proof and steps are in the code.
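
Concretely (a sketch of the algebra, not taken from the CVTT source), writing the summed per-pixel error as

E(O,H,V) = \sum_{x,y} \left( O + Hx + Vy - c_{x,y} \right)^2

and setting \partial E/\partial O = \partial E/\partial H = \partial E/\partial V = 0 gives the 3x3 normal equations

\begin{pmatrix} N & \sum x & \sum y \\ \sum x & \sum x^2 & \sum xy \\ \sum y & \sum xy & \sum y^2 \end{pmatrix} \begin{pmatrix} O \\ H \\ V \end{pmatrix} = \begin{pmatrix} \sum c \\ \sum x c \\ \sum y c \end{pmatrix}

where N is the number of pixels and c is the per-pixel channel value.  Solving that system directly yields O, H, and V.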

ETC2 alpha and EAC

Both of these "grayscale" modes are both more complicated because they have 3-bit indexes, multiple lookup tables, and an amplitude multiplier.

CVTT tries a limited set of possibilities based on alpha insets.  It tries 10 alpha ranges, which correspond to all ranges where the index inset of each endpoint is +/- 1 the number of the other endpoint.  So, for example, given 8 alpha offsets numbered 0-7, it will try these pairs:
  • 0,7
  • 0,6
  • 1,7
  • 1,6
  • 1,5
  • 2,6
  • 2,5
  • 2,4
  • 3,5
  • 3,4
Once the range is selected, 2 multipliers are checked: The highest value that can be multiplied without exceeding the actual alpha range, and the smallest number that can be multiplied while exceeding it.

The best result of these possibilities is selected.

Possible improvements and areas of interest

BC6H is by far the most improvable aspect.  Traditional PCA doesn't work well because of the logarithmic interpolation.  Sum-of-square-difference in floating point pseudo-logarithmic space performs much worse than in gamma space and is prone to sparkly artifacts.

ETC1 cumulative offset deduplication assumes that each pixel is equally important, which doesn't hold when using weight-by-alpha.

ETC2 T/H mode encoding could try all 15 possible sector assignments (based on the 16-pixel ordering along the chroma axis) instead of one.  I did try finding the grouping that minimized the total square distance to the group averages instead of using the centroid as the split point, but that actually had no effect... they might be mathematically equivalent?  Not sure.

A lot of these concepts don't translate well to ASTC.  CVTT's approaches largely assume that it's practical to traverse the entire search space, but ASTC is highly configurable, so its search space has many axes, essentially.  The fact that partitioning is done AFTER grid interpolation in particular is also a big headache that would require its own novel solutions.

Reduction of the search space is one of CVTT's biggest sore spots.  It performs excellently at high quality targets, but is relatively slow at lower quality targets.  I justified this because typically developers want to maximize quality when import is a one-time operation done offline, and CVTT is fast enough for the most part, but it probably wouldn't be suitable for real-time operation.

by OneEightHundred (noreply@blogger.com) at 2021-01-03 23:21

2020-10-20

 

The plan to post a play-by-play for dev kind of fell apart as I preferred to focus on just doing the work, but the Windows port was a success.

If you want some highlights:

  • I replaced the internal resource format with ZIP archives to make it easier to create custom resource archives.
  • PICT support was dropped in favor of BMP, which is way easier to load.  The gpr2gpa tool handles importing.
  • Ditto with dropping "snd " resource support in favor of WAV.
  • Some resources were refactored to JSON so they could be patched, mostly dialogs.
  • Massive internal API refactoring, especially refactoring the QuickDraw routines to use the new DrawSurface API, which doesn't have an active "port" but instead uses method calls directly to the draw surface.
  • A bunch of work to allow resolution changes while in-game.  The game will load visible dynamic objects from neighboring rooms in a resolution-dependent way, so a lot of work went in to unloading and reloading those objects.

The SDL variant ("AerofoilSDL") is also basically done, with a new OpenGL ES 2 rendering backend and SDL sound backend for improved portability.  The lead version on Windows still uses D3D11 and XAudio2 though.

Unfortunately, I'm still looking for someone to assist with the macOS port, which is made more difficult by the fact that Apple discontinued OpenGL, so I can't really provide a working renderer for it any more.  (Aerofoil's renderer is actually slightly complicated, mostly due to postprocessing.)

Goin' mobile

In the meantime, the Android port is under way!  The game is fully playable so far, most of the work has to do with redoing the UI for touchscreens.  The in-game controls use corner taps for rubber bands and battery/helium, but it's a bit awkward if you're trying to use the battery while moving left due to the taps being on the same side of the screen.

Most of the cases where you NEED to use the battery, you're facing right, so this was kind of a tactical decision, but there are some screens (like "Grease is on TV") where it'd be really nice if it was more usable facing left.

I'm also adding a "source export" feature: The source code package will be bundled with the app, and you can just use the source export feature to save the source code to your documents directory.  That is, once I figure out how to save to the documents directory, which is apparently very complicated...

Anyway, I'm working on getting this into the Google Play Store too.  There might be some APKs posted to GitHub as pre-releases, but there may (if I can figure out how it works) be some Internal Testing releases via GPS.  If you want to opt in to the GPS tests, shoot an e-mail to codedeposit.gps@gmail.com

Will there be an iOS port?

Maybe, but there are two obstacles:

The game is GPL-licensed and there have reportedly been problems with Apple removing GPL-licensed apps from the App Store, and it may not be possible to comply with it.  I've heard there is now a way to push apps to your personal device via Xcode with only an Apple ID, which might make satisfying some of the requirements easier, but I don't know.

Second, as with the macOS version, someone would need to do the port.  I don't have a Mac, so I don't have Xcode, so I can't do it.


by OneEightHundred (noreply@blogger.com) at 2020-10-20 11:09

2020-10-06

A conservative estimate has me shooting hogs in 45 seconds

🐷🔫⏱

by Factor Mystic at 2020-10-06 22:36

2020-08-03

As part of modernising, updating and generally overhauling my blog, I thought it would be nice to add some consistency to the Yaml front matter used by Jekyll. For those who do not know, Jekyll uses Yaml front matter blocks to process any file which contains one as a special file. The front matter can contain variables in the form foo: value. Jekyll itself defines some predefined global variables and variables for posts, but anything else is valid and can be used in Liquid tags.

I wondered if I could write some F# to:

  1. Load all the markdown files.
  2. Parse all the front matter.
  3. Modify the front matter to drop variables no longer required by a theme.
  4. Update the front matter with new variables which are understand by the current theme.
  5. Randomly assign a path to a header image file for each post which doesn’t already have one.
  6. Write the front matter back to its post.

Fairly straightforward requirements.

Loading and parsing the front matter

I’m using YamlDotNet to do most of the heavy lifting. I think I could also have used the FSharp.Configuration Type Provider, but I’m not sure that it would have done exactly what I wanted.

I’m just writing this in an F# script, hosted in a project. After adding the YamlDotNet NuGet package, we can reference it and get to work:

#r "../../.nuget/packages/YamlDotNet/8.1.2/lib/netstandard2.1/YamlDotNet.dll"

open System.IO
open System.Text.RegularExpressions
open YamlDotNet.Serialization
open YamlDotNet.Serialization.NamingConventions

let path = "../sgrassie.github.io/_posts"

Here, we reference the package, and then open various namespaces for use later on. The code for my blog is kept in a separate folder, relative to the project containing the F# scripts I’m writing about. This is nice and easy.

type FrontMatter() =
    member val Title = "" with get, set
    member val Description = "" with get, set
    member val Layout = "" with get, set
    member val Tags = [|""|] with get, set
    member val Published = "" with get, set
    member val Category = "" with get, set
    member val Categories = "" with get, set
    member val Metadescription = "" with get, set
    member val Series = "" with get, set
    member val Featured = false with get, set
    member val Hidden = false with get, set
    member val Image = "" with get, set
    [<YamlMember(Alias = "featured_image", ApplyNamingConventions = false)>]
    member val FeaturedImage = "" with get, set
    [<YamlMember(Alias = "featured_image_thumbnail", ApplyNamingConventions = false)>]
    member val FeaturedImageThumbnail = "" with get, set
    [<YamlIgnore>]
    member val MarkdownFilePath = "" with get, set

This is a class with auto-implemented properties. You can see three attributes in use. The YamlMember attribute allows us to alias a property in Yaml which doesn’t follow the CamelCase convention we configured the deserialiser with. I think that a C# version of this would look pretty much the same.

let deserializer = DeserializerBuilder()
                     .WithNamingConvention(CamelCaseNamingConvention.Instance)
                     .Build()

This initialises the YamlDotNet deserialiser, and is pretty much exactly how you would do it in C#. To deserialise something, we need some Yaml. When I was testing this, I got a pretty weird error from YamlDotNet which essentially means that it can’t parse the file - it turns out that all the other stuff outside the Yaml front matter is what upsets it.

let expression = "(?:---)(?<yaml>[\\s\\S]*?)(?:---)"

Oh regex, I do love thee.

Very simply, this regex will capture everything in a file between two --- markers, into a named yaml group. We now have the actual front matter, but we still need to parse it into an object.

let extractFrontmatter filePath =
    let file = File.ReadAllText(filePath)
    let result = Regex.Match(file, expression).Groups.["yaml"].Value
    let frontMatter =
        let frontMatter = deserializer.Deserialize<FrontMatter>(result)
        frontMatter.MarkdownFilePath <- filePath
        frontMatter
    frontMatter

This is a bit more complex, so let’s unpack it:

  1. Pass in the filePath.
  2. Read all of the text from it.
  3. Strip only the front matter from the text.
  4. Parse the front matter text with an inner function, which uses the deserializer, and return it. Here, we also keep track of the file path (we will need this later).

We also need to load all of the markdown files:

let loadMarkdownFiles path = Directory.EnumerateFiles(path, "*.md", SearchOption.AllDirectories) 

Notice how those last couple of functions are using ‘currying’. It lets us do all of the work in one pipeline:

path |> loadMarkdownFiles |> Seq.map extractFrontmatter |> Seq.iter (fun x -> printfn "%s - %s" x.MarkdownFilePath x.Title)

This gives us a dataset to work with. Next time we’ll continue with the rest of the requirements.

2020-08-03 00:00

2020-07-27

Many years ago, after working in my first programming job for a couple of years the company was taken over, and coding tests for new hires were introduced. The incumbent developers all decided to take the test, and it was seen as a fun diversion for a couple of hours.

I don’t have access to the actual wording of the requirements given to candidates, but the test required a text file containing around 100k words to be loaded and sorted into the largest set of the longest anagram. For example, in the words file I’m using in this blog post, there are 466544 words in the file, 406627 of which are anagrams. The largest set is for a 7 letter anagram, of which there are 15 words. There are smaller sets of longer anagrams; we’re not interested in those. And it had to run in less than a second. They had three hours to write it, on a computer not connected to the internet. They had access to Java, through Eclipse, C/C++/C# through Visual Studio and Delphi through Embarcadero Studio.

I don’t know where the test originally came from - I think it originated in a different company which had been acquired by the same company I now worked for, but I’m not sure. I think the intent of the test was partly to gauge how the candidate reacted to the deadline pressure, partly how well they understood the requirements given to them, and lastly what sort of code they wrote.

It has been a long time, and the company no longer recruits after moving most development overseas, so I’m going to present my solution.

Making people sit coding tests during interviews is not good for anyone, and doesn’t always guarantee that you’ll hire the best person for the job.

The Solution

First we have to load the file, figure out how to generate the anagram key, and keep track of how many instances of that anagram there are. It turned out that this was the bit most candidates taking the test got stuck on - specifically, the short mental leap required to work out that you need to sort the letters of the word alphabetically to create the key.

private static string CreateKey(string word)
{
    var lowerCharArray = word.ToLowerInvariant().ToCharArray();
    Array.Sort(lowerCharArray);
    return new string(lowerCharArray);
}

private static void LoadWords(string filePath, Dictionary<string, List<string>> words)
{
    using (var streamReader = File.OpenText(filePath))
    {
        string s;

        while ((s = streamReader.ReadLine()) != null)
        {
            var key = CreateKey(s);

            if (words.TryGetValue(key, out var set))
            {
                set.Add(s);
            }
            else
            {
                var newSet = new List<string> {s};
                words.Add(key, newSet);
            }
        }
    }
}

words is a Dictionary<string, List<string>>, which we use to track the sets of anagrams. The rest of the file loading is a fairly standard while loop over the reader’s ReadLine method, checking the dictionary to see if the anagram has already been found: if so, we add the new word to the existing set; otherwise, we create a new list to hold the word and add it under the anagram key.

Once we have all the words loaded and matched into sets of anagrams, we can process them to work out which is the largest set with the longest word.

private static KeyValuePair<string, List<string>> ProcessAnagrams(Dictionary<string, List<string>> words)
{
    var largestSet = 0;
    var longestWord = 0;
    var foundSet = new KeyValuePair<string, List<string>>();

    foreach (var set in words)
    {
        if (set.Value.Count >= largestSet)
        {
            largestSet = set.Value.Count;

            if (set.Key.Length > longestWord)
            {
                longestWord = set.Key.Length;
                foundSet = set;
            }
            else
            {
                longestWord = 0;
            }
        }
    }

    return foundSet;
}

Here we simply brute-force check all of the entries in the dictionary to find the answer. It’s not elegant, but it gets the job done. Running it on my MacBook Pro gives:

406627 anagrams processed from 466544 in 00:02:850
File read and key generation in 00:02:829
Anagrams searched in: 00:00:021
Found: 
Key: AEINRST (7), Count: 15
aeinrst
antsier
asterin
eranist
nastier
ratines
resiant
restain
retains
retinas
retsina
stainer
starnie
stearin
Tersina

2020-07-27 00:00

2020-07-20

There are lots of blog comment systems, and this blog has used Disqus as the comment system for a long time. I’m not going to go into all the reasons to move away from Disqus, but page load times, wanting more control over your data, and being able to respect your readers’ privacy figure highly.

Also, this blog is a technical blog focused on software development and associated topics, and this means that anyone who wants to comment on my blog is almost certain to be familiar with Github and have an account, and also be as uncomfortable using Disqus as I have been.

I did investigate rolling my own code based on examples from other blogs, which have used some jekyll liquid templates and javascript to pull from the Github API and use it to post comments back to the repo hosting the blog. This has some attraction, but also has a big drawback, which is authorisation against the Github API, as you don’t really want your client id and client secret exposed in the repo.

Enter utteranc.es

You can get around this by hosting an app in Heroku to use as the postback url so that you can hide the client id and client secret, and there is also Staticman, but none of these seemed as simple as just using utteranc.es.

To configure utteranc.es, head over to the website and follow the instructions, and fill out the form to suit you. For the blog post to issue mapping, I chose ‘Issue title contains page title’, and I also chose to have utteranc.es add a ‘Comment’ label to the issue it creates in the blog repository. After you do that, you’ll get a code snippet generated for you that looks somewhat like this:

<script src="https://utteranc.es/client.js"
        repo="sgrassie/sgrassie.github.io"
        issue-term="title"
        label="Comment"
        theme="github-light"
        crossorigin="anonymous"
        async>
</script>

Add this to a jekyll include, for example utterances.html and then include it in your post.html layout at the position you want the blog comments to appear. Most jekyll blog templates have Disqus support, so it will probably just be a simple case of finding where in the layout that Disqus is included, and replacing it.

Exporting existing comments

If your existing comments are not important to you, then at this point you can stop and enjoy your new Github powered comment system. Personally for me, it’s the principle of the thing, and the fact that the comments on my blog belong to me, and the author of the comment. So, we can do something about it.

Disqus allows you to export your comments, and once you do so, you will get your comments emailed to the email registered with your Disqus account. I’ve done a lot of work with XML in a previous role, and I think that the Disqus XML export looks… odd. The reason I say that is that each post on your blog appears to be mapped to a <thread> element, which contains a bunch of expected metadata about the blog post. I would expect each individual comment to be nested in a <comments> element, but this is not the case. Instead, each individual comment has an entry as a <post> element at the same level as the <thread>, and they are mapped to each other using an attribute id. I don’t think that makes much sense; I’m sure there must be good reasons, I just can’t think what they might be.

A comment then, looks like this:

<thread dsq:id="1467739952">
    <id>218 http://temporalcohesion.co.uk/?p=218</id>
    <forum>temporalcohesion</forum>
    <category dsq:id="2467491" />
    <link>http://temporalcohesion.co.uk/2010/10/25/lets-write-an-api-library-for-github/</link>
    <title>Let&amp;#8217;s write an API library for Github</title>
    <message />
    <createdAt>2010-10-25T12:00:24Z</createdAt>
    <author>
        <name>Stuart Grassie</name>
        <isAnonymous>false</isAnonymous>
        <username>stuartgrassie</username>
    </author>
    <isClosed>false</isClosed>
    <isDeleted>false</isDeleted>
</thread>

An actual comment on this post looks like:

<post dsq:id="952258229">
    <id>wp_id=25</id>
    <message><![CDATA[<p>Great post Stu!</p>]]></message>
    <createdAt>2010-10-25T22:47:44Z</createdAt>
    <isDeleted>false</isDeleted>
    <isSpam>false</isSpam>
    <author>
        <name>John Sheehan</name>
        <isAnonymous>true</isAnonymous>
    </author>
    <thread dsq:id="1467739952" />
</post>

You can see the way that the post element is mapped back to the containing thread using the dsq:id attribute.

Parsing the XML

The strange structure of the XML makes it less straightforward to parse, as it means we’ll have to do a little bit of work to match up blog posts and the comments on them. Also very annoying is the fact that a thread element doesn’t know whether it actually has any associated post comments.

We can accomplish this fairly easily with a little bit of F# and the FSharp.Data XmlProvider. Setting the provider up is straightforward; here I’m just using a direct reference to the assembly which I’d previously added via NuGet.

#r "../../.nuget/packages/fsharp.data/3.3.3/lib/netstandard2.0/FSharp.Data.dll
open FSharp.Data

type Disqus = XmlProvider<"/Users/stuart/Downloads/temporalcohesion-2020-07-13T20 27 09.014136-all.xml">

type Comment = { Author: string; Message: string; Created: System.DateTimeOffset; ParentThreadId: int64; }
type BlogPost = { Title: string; Url: string; Author: string; ThreadId: int64; Comments : Comment list }

let data = Disqus.Load("/Users/stuart/Downloads/temporalcohesion-2020-07-13T20 27 09.014136-all.xml")

If you are new to F# (and I’m still fairly new) this might look scary, but it really isn’t. After referencing the assembly in the script, we open the FSharp.Data namespace, and then initialise an XmlProvider by passing it the XML file we’re going to parse.

Do not do this for really big XML files! See the XmlProvider documentation for more details.

That enables the XmlProvider to infer a lot of things about the XML in the file, and then the XmlProvider loads the actual data from the file. Two records are also defined to hold the details about the Threads/Posts that are going to be imported, and how multiple comments refer to a single blog post. These records are analogous to simple C# POCO classes with getters and setters.

With these types ready, we can define a couple of functions to convert the XML into them, and thus do away with a lot of the extraneous noise from the XML that we don’t really care about.

let toComments posts =
    posts
    |> Seq.filter (fun (post : Disqus.Post) -> not post.IsSpam || not post.IsDeleted)
    |> Seq.map (fun (post : Disqus.Post) -> {Author = post.Author.Name; Message = post.Message; Created = post.CreatedAt; ParentThreadId = post.Thread.Id})
    |> Seq.toArray

let toBlogPosts posts =
    posts
    |> Seq.filter (fun (thread : Disqus.Thread) -> not thread.IsDeleted)
    |> Seq.map (fun (thread : Disqus.Thread) -> {Title = thread.Title; Url = thread.Link.Substring(0, thread.Link.Length - 1); Author = thread.Author.Name; ThreadId = thread.Id; Comments = [] })

These functions use currying, which as a longtime C# developer I’m still getting the hang of, and that will come in handy shortly. They map the Disqus types generated by the XmlProvider into the custom types I defined, taking care to filter out comments we don’t want to import and not importing any blog posts which Disqus says have been deleted.

I’m not sure the Seq.filter in the toComments function worked correctly, as I still had to go and manually delete a couple of comments that were marked as spam from the Github Issues.
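My suspicion (untested, so treat this as an assumption on my part rather than a fix from the original post) is that the predicate should require a post to be neither spam nor deleted, i.e. && rather than ||:

// A sketch of a corrected toComments: keep a post only when it is
// neither spam nor deleted.
let toComments posts =
    posts
    |> Seq.filter (fun (post : Disqus.Post) -> not post.IsSpam && not post.IsDeleted)
    |> Seq.map (fun (post : Disqus.Post) -> {Author = post.Author.Name; Message = post.Message; Created = post.CreatedAt; ParentThreadId = post.Thread.Id})
    |> Seq.toArray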

With those functions defined, we need a way of mapping the comments to the correct blog post.

let mapBlogToComments(post, comments) =
    let commentsOnPost = comments 
                         |> Array.filter (fun comment -> comment.ParentThreadId = post.ThreadId) 
                         |> Array.toList
    {post with Comments = commentsOnPost}

Here we take a single post, and all of the comments, and then use a nested binding to grab the set of comments associated with that post, by way of the ThreadId. With that written, we can use some more currying to create another function that will do a lot of hard work for us:

let addCommentsToTheirPosts comments = data.Threads |> toBlogPosts |> Seq.map (fun post -> mapBlogToComments(post, comments))

This function will take the threads, use the toBlogPosts function to turn them into BlogPosts, and then map each blog post to the correct comments using the function we’ve just defined. But where do the comments come from? Well, it turns out this currying thing is really quite useful, as it enables all this magic-looking |>, or ‘piping’, to happen.

let toImport = data.Posts
               |> toComments
               |> addCommentsToTheirPosts 
               |> Seq.filter (fun x -> x.Comments.Length > 0)

Take all the posts data, turn them all into comments, then pipe that to the addCommentsToTheirPosts function, and then filter out blog posts which don’t have any comments, as importing those is pointless. All for around 24 lines of code. I know full well the C# it would take to do all that, and whilst with C# 8 you could probably get close, I doubt you’d equal 24 lines.

Whilst googling for clarification on an aspect of the Octokit.net api, I came across Removing Disqus and adding GitHub Issue Comments, which is essentially what I’m doing here, just in C#.

Just to be on the safe side, it’s probably a good idea to look through each of the posts and comments that we’ve now got, just to see if things are matching up correctly.

toImport |> Seq.iter (fun post -> printfn "%s - %s - comments: %d" post.Title post.Url post.Comments.Length)

Running that will give you an idea of which blog posts are going to be imported, and the number of comments. The first time I ran this, I found some of the blog posts in the Disqus XML did not have the post title set, so I was getting duplicated post titles. As there were only three instances of this error, I just manually corrected the XML and re-ran the script to check I had everything correct.

Uploading to GitHub

So far, so good. Now comes the fun part and something I’ve yet to do in F#, which is interop with a C# library. It turns out that it’s not so hard, but that makes perfect sense when you understand that F# is a .net language, just like C#. A long time ago I started to write an API library for GitHub, but I gave it up in favour of Octokit.net.

The F# which follows looks horrible, and I am certain there must be a cleaner way of doing what I’m about to show, but I don’t know what it is.

We can easily reference Octokit and open the namespace as before:

#r "../../.nuget/packages/octokit/0.48.0/lib/netstandard2.0/Octokit.dll"
open Octokit

Then we just need to setup a few variables:

let repo = "sgrassie.github.io"
let githubApp = "foo"
let token = "<your-personal-access-token-here>"
let credentials = Credentials(token)
let header = ProductHeaderValue(githubApp)
let client = GitHubClient(header, Credentials = credentials) 

These just get us a client to work with, and all I did was just register a new Personal Access Token on my account to use as the password. Notice how with F# you don’t need to new anything, even though they are classes from a C# assembly. These can then be used in the following function, which I’m gonna prefix with this warning:

I’m still new at F#; I’ve no idea if what you’re about to see is ‘good’ F#.

It does work though, so just… use it at your own risk.

let exportToGithub posts =
    for post in posts do
        System.Threading.Thread.Sleep(2000)
        let issuebody = sprintf "Comment thread for the post [%s](%s)" post.Title post.Url
        printfn "%s" issuebody
        let newIssue = NewIssue(post.Title, Body = issuebody)
        let issue = client.Issue.Create("sgrassie", repo, newIssue) |> Async.AwaitTask |> Async.RunSynchronously
        printfn "New issue created for %s" post.Title
        for comment in post.Comments do
           System.Threading.Thread.Sleep(2000)
           let message = sprintf "Comment by **%s** on **%s** (imported from Disqus):\r\n\r\n%s" comment.Author (comment.Created.ToString("f")) comment.Message
           let newComment = client.Issue.Comment.Create("sgrassie", repo, issue.Number, message) |> Async.AwaitTask |> Async.RunSynchronously
           printfn "    New comment created for %s" comment.Author

toImport |> exportToGithub 

I’m sure that a more experienced F# person is going to look at that and be like “WTF”, but as I said, it does work. I left the printfn log messages in, but essentially it loops over each post, waits a couple of seconds, creates the new issue, and then loops over all of the comments for that post and adds them as comments to the issue. I put the Thread.Sleep’s in there just so I didn’t hammer the Github API; honestly there were so few to import I doubt it would have triggered the rate limit, but I imagine a more popular blog with more comments on its posts would.

2020-07-20 00:00

2020-07-13

I’ve been upgrading part of our build infrastructure to handle the ongoing upgrade to .net core, and as part of that, I’ve had to update the Cake build script to handle doing the restore in an offline environment, on the build server.

There is a great post on the Octopus blog about writing a Cake build script for .net core, I encourage you to check that out, I’m not going to repeat too much of that.

My specific requirement is that the DotNetCoreRestore needs to succeed on an ‘offline’ build server, that is, a build server that has no access to the internet.

In order for this to succeed, you are going to need to provide a way for NuGet to get the packages. Usually this is done by maintaining an offline NuGet cache which you can point NuGet at, or even by checking the packages into the repository. I’d always recommend going with the first option, although there are scenarios where the second option might be required.

However you do it, you need to tell NuGet where they are. The easiest thing to do is to use a NuGet.Config local to the .sln, but it is possible to code a location into the script.

Here is the restore task:

Task("Restore")
    .IsDependentOn("Clean")
    .Does(() =>
    {
        var settings = new DotNetCoreRestoreSettings();

        if(BuildSystem.IsRunningOnTeamCity)
        {
            settings.PackagesDirectory = "./packages";
            settings.IgnoreFailedSources = true;
            //optionally
            //settings.Sources = new[] { "http://someinternalfeed/nuget" }
        }

        foreach(var project in projects)
        {
            DotNetCoreRestore(project.FullPath, settings);
        }
    });

This project uses a NuGet.config to add the paths of internal package sources, and sets the location of the packages folder to be local to the .sln - we’ve found this cuts down on conflicts on developer machines.

2020-07-13 00:00

2020-07-10

In the previous post, I displayed my fledgling understanding of F# by writing a script which can parse the CSV set of results of the English Premier League to generate the league table. The script does this primarily by using a mutable BCL Dictionary. F# is immutable by default, and whilst mutability is available, you have to go out of your way to enable it. I’ll try to avoid repeating Scott Wlaschin.

There are some improvements that can be made to the script. I’ll highlight them here and then link to the full script as a gist.

First a note on pattern matching. In the previous post I mentioned that I thought I could use pattern matching in a particular place, and obviously I can:

let fullTimeResult =
        match row.FTR with
        | "H" -> Home
        | "A" -> Away
        | _ -> Draw

Rather than if/then/else. Here, the _ is equivalent to the default case in a C# switch statement: if it’s not a Home or Away (win), then it must be a draw.

Making things immutable

To start making things immutable, we can update the updateTeam function from the previous post, and pass in a Map<string, LeagueRow>:

let updateTeam (league : Map<string, LeagueRow>, team : string, points : int, forGoals : int, againstGoals: int, won : int, drawn, lost: int) =
    if league.ContainsKey team then
        let existing = league.[team]
        let updated = {existing with Played = existing.Played + 1; Won = existing.Won + won; Drawn = existing.Drawn + drawn; Lost = existing.Lost + lost; For = existing.For + forGoals; Against = existing.Against + againstGoals; Points = existing.Points + points}
        league.Add(team, updated)
    else
        let leagueRow = {Team = team; Played = 1; Won = won; Drawn = drawn; Lost = lost; GD = 0; For = forGoals; Against = againstGoals; Points = points}
        league.Add(team, leagueRow)

The code is almost the same as the previous version, except that we no longer use the <- operator to update the mutable dictionary. What’s going on instead is that F# creates a new instance of the LeagueRow, with updated values, and adds that to the Map by key, which returns a new instance of the whole Map, with the league row identified by the key replaced with the updated version.
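To make that concrete, here is a tiny illustration of my own (not from the original script) showing that Map.add returns a new Map rather than mutating the original:

// Map.add gives us back a new Map; the original value is untouched.
let m1 = Map.empty |> Map.add "Liverpool" 89
let m2 = m1 |> Map.add "Man City" 66

printfn "%d" m1.Count // 1 - the original map is unchanged
printfn "%d" m2.Count // 2 - the 'updated' map is a separate value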

The updateHomeWin function becomes:

let updateHomeWin (league : Map<string, LeagueRow>, result : MatchResult) =
    let league = updateTeam(league, result.HomeTeam, 3, result.HomeGoals, result.AwayGoals, 1, 0, 0)
    let league = updateTeam(league, result.AwayTeam, 0, result.AwayGoals, result.HomeGoals, 0, 0, 1)
    league

This again replaces the BCL Dictionary with the Map, and simply passes the league map through each updateTeam call, and then returns the updated league object.

processMatchResult is also updated to pass in a Map, and calling the fold with it and a default map is straightforward:

|> Seq.fold processMatchResult (Map<string, LeagueRow> [])
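The updated processMatchResult isn’t shown in full here, but given that each update function now returns a new Map, my assumption is that it ends up looking something like this, with the match expression’s result being the new league:

// Sketch: thread the immutable Map through and return whatever the
// relevant update function produces.
let processMatchResult (league : Map<string, LeagueRow>) result =
    match result.Result with
    | Home -> updateHomeWin(league, result)
    | Away -> updateAwayWin(league, result)
    | Draw -> updateDraw(league, result)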

This makes the script much more ‘the way of things’ in F#, which is to say it’s using an immutable data structure.

2020-07-10 00:00

2020-07-07

During lockdown, I’ve made another effort at learning F#. This time I think I’ve had a bit more success. Processing data is something that we as developers do on a weekly or even daily basis, so it seems quite natural to practice that in F#. As a big football fan, I’ve decided to use the English Premier League results for season 2019/2020, as it’s a dataset I implicitly understand.

The EPL results set is available in CSV format from football-data.co.uk, and rather than having to parse it all by hand or hitting up CsvHelper and still having to write some C# code to actually use it, in F# we can use a Type Provider, specifically the CsvProvider from FSharp.Data.

It’s worth pointing out at this point that I’ve been learning for about two weeks, so the following F# should not be taken as a definitive example of good, idiomatic code.

Loading and parsing the data

FSharp.Data is easily added via NuGet, and using an .fsx script, we can easily reference the assembly and open the namespace:

#r "../../.nuget/packages/fsharp.data/3.3.3/lib/netstandard2.0/FSharp.Data.dll"
open FSharp.Data
open System.Collections.Generic

I didn’t have any luck in the script with referencing a more local copy of the assembly, such as one in the /bin folder, due to it complaining about not being able to find the FSharp.Data.DesignTime.dll, but going directly to the assembly in the NuGet packages folder seems to work just fine. It is also worth noting that I’m writing this on a Mac (in VS Code), so your path syntax might vary. Also note that we also open the BCL System.Collections.Generic namespace. We’ll need that later.

Next, comes the part that blows my mind. Here is how we generate a type which knows how to load and parse a CSV file of a given structure:

type Results = CsvProvider<"../../Downloads/epl1920.csv">

That’s it. It’s pretty amazing. The Results type is now also type safe, and it’s had a guess at inferring what the types are for each column of the data. We could probably do something similar to this in C# using CsvHelper and either Castle.DynamicProxy or some magic with the new Roslyn compiler, but I think it would take quite a bit of code to create something that came close to what this can do.

Skipping over some important stuff that we’ll get to in just a short while, we can now easily load the full results set:

Results.Load("../../Downloads/epl1920.csv")

This is fairly straightforward, and does exactly what it looks like. The data loaded from the file is available in a .Rows property, which we’ll use shortly.
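As a quick sanity check (my own snippet, not from the original post), each row exposes typed columns such as HomeTeam, AwayTeam, FTHG and FTAG, so you can print a few results directly:

// Print the first three results to confirm the provider inferred the columns.
let results = Results.Load("../../Downloads/epl1920.csv")

results.Rows
|> Seq.truncate 3
|> Seq.iter (fun row -> printfn "%s %d - %d %s" row.HomeTeam row.FTHG row.FTAG row.AwayTeam)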

Parsing the data

All good so far, but now things get a little more complicated. Now we need to think somewhat about the data, and if you look in the file… it’s got a LOT of information. Mostly related to betting information for the match, but there is also quite a lot of information about the match itself. For the purposes of calculating the league, most of the information in the file is redundant. In order to get just the information we need, we can define a Record to hold that information. A Record in F# is somewhat analogous to a C# POCO class, but with automatic type safety and full equality comparisons out of the box.

type FullTimeResult = | Home | Away | Draw
type MatchResult = {HomeTeam : string; AwayTeam: string; HomeGoals: int; AwayGoals: int; Result: FullTimeResult}

The FullTimeResult type is just like a C# enum, and is easier to read than the ‘A’, ‘H’ or ‘D’ we get from the CSV file for the FTR (Full Time Result) column. I think it also looks nicer to read when it comes to the pattern matching, but we’ll get to that. With those types defined, we can get to the real meat of this and actually parse the data:

let league = Results.Load("../../Downloads/epl1920.csv")
                .Rows
                |> Seq.map toMatchResult
                |> Seq.fold processMatchResult (Dictionary<string, LeagueRow>())
                |> Seq.sortByDescending (fun (KeyValue(_, v)) -> v.Points)

Here, we load the file as we discussed earlier, but now we forward pipe the data returned from the .Rows property to Seq.map through the toMatchResult function, which takes a Row, extracts the data we’re interested in, and returns a new MatchResult. In C# this is roughly the same as doing .Rows.Select(r => new MatchResult {...}). Then, the resulting sequence of MatchResults is piped forward through the processMatchResult function, using the scary sounding Seq.fold, and it is also passed a new instance of a BCL Dictionary, with a string key and a LeagueRow type as the value. I’ve not yet mentioned the LeagueRow type… it’s not super important to proceedings, it’s just a type which holds all the data you would expect to see in a football league table. For reference it’s included below in the full script.
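Since the full script is only linked as a gist, here is my assumption of what LeagueRow looks like, based on the fields used in updateTeam and print further down:

// A sketch of LeagueRow, inferred from how it is used below.
type LeagueRow = {Team: string; Played: int; Won: int; Drawn: int; Lost: int; For: int; Against: int; GD: int; Points: int}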

Amazingly, those five lines load the file, process all the data, and provide an object which contains a fairly accurate version of the English Premier League table. Obviously things are a little more involved than that.

Examining the parsing in more detail

As you’ll recall, there is a lot of data in the CSV file that is irrelevant when it comes to generating the league table. We can map all the data we need into the MatchResult type, which we do by forward piping the data through Seq.map and the toMatchResult function:

let toMatchResult (row: Results.Row) =
    let fullTimeResult = 
        if row.FTR = "H" then FullTimeResult.Home
        elif row.FTR = "A" then FullTimeResult.Away
        else FullTimeResult.Draw
    {
        HomeTeam = row.HomeTeam
        AwayTeam = row.AwayTeam
        HomeGoals = row.FTHG
        AwayGoals = row.FTAG
        Result = fullTimeResult
    }

This is mostly just a simple mapping from the results row into the new MatchResult type. You’ll notice we don’t need to explicitly ‘new’ anything up; don’t forget, we’re in a functional world now, so the MatchResult record is the value of the last expression and is returned automatically. We also define a nested binding which processes the full time result using a simple if/else construct. I think I could also have used pattern matching, but it’s simple enough that I’m not going to worry about it.

Next comes the scary sounding fold. The function looks like this:

let processMatchResult (league : Dictionary<string, LeagueRow>) result  =
    match result.Result with
    | Home -> updateHomeWin(league, result)
    | Away -> updateAwayWin(league, result)
    | Draw -> updateDraw(league, result)
    league

What happens is that we tell Seq.fold to use this function to do the folding, and we give it an initial state of a new and empty Dictionary<string, LeagueRow>(). Seq.fold carries the state over to each subsequent ‘fold’ over the sequence of MatchResults it was piped. You’ll note that the final thing returned from the function is the same dictionary which was passed in. This essentially forms the core of the algorithm to produce the league. The pattern matching of match <thing> with is equivalent to a C# switch statement on steroids. I am barely scratching the surface of what can be done with pattern matching in F#.

The pattern match decides what kind of result we are dealing with, and delegates further processing to the relevant function. Here is the definition for updateHomeWin; the other two functions are exactly the same, except they distribute the points/goals/wins/losses/draws accordingly, so I won’t go into those in detail (there is a sketch of updateDraw after updateHomeWin below).

let updateHomeWin (league : Dictionary<string, LeagueRow>, result : MatchResult) =
    updateTeam(league, result.HomeTeam, 3, result.HomeGoals, result.AwayGoals, 1, 0, 0)
    updateTeam(league, result.AwayTeam, 0, result.AwayGoals, result.HomeGoals, 0, 0, 1)
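For illustration, here is a sketch of what updateDraw presumably looks like (my assumption, not code from the original post), following the same updateTeam signature but awarding one point and one draw to each team:

// Sketch: a draw gives both teams 1 point, 0 wins, 1 draw, 0 losses.
let updateDraw (league : Dictionary<string, LeagueRow>, result : MatchResult) =
    updateTeam(league, result.HomeTeam, 1, result.HomeGoals, result.AwayGoals, 0, 1, 0)
    updateTeam(league, result.AwayTeam, 1, result.AwayGoals, result.HomeGoals, 0, 1, 0)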

Each MatchResult consists of two teams, and we have to update each entry in the league for both of these teams, with the correct number of points, goals for, goals against, win, draw and loss. The real part of this is in the updateTeam function:

let updateTeam (league : Dictionary<string, LeagueRow>, team : string, points : int, forGoals : int, againstGoals: int, won : int, drawn, lost: int) =
    if league.ContainsKey team then
        let existing = league.[team]
        let updated = {existing with Played = existing.Played + 1; Won = existing.Won + won; Drawn = existing.Drawn + drawn; Lost = existing.Lost + lost; For = existing.For + forGoals; Against = existing.Against + againstGoals; Points = existing.Points + points}
        league.[team] <- updated
    else
        let leagueRow = {Team = team; Played = 1; Won = won; Drawn = drawn; Lost = lost; GD = 0; For = forGoals; Against = againstGoals; Points = points}
        league.Add(team, leagueRow)

This is just a simple dictionary update where we check if a team already has an entry, and if so, update it, otherwise we create it. Things of note here are that whilst F# is mostly immutable, types from System.Collections.Generic are mutable, which is how this whole thing works. I’m sure that someone much better at F# can come along and tell me how to do this with immutable F# collections. Also of note is the collection access of league.[team], which is different than in C#. We also update the value in the dictionary by using <-.

After that, we can define a simple method to print out a row from the league for us, and then iterate through the entries in the dictionary, to get a league table:

let print league =
    printfn "Team: %s | Played: %d | Won: %d | Lost: %d | Drawn: %d | For: %d | Against: %d | GD: %d | Points: %d" league.Team league.Played league.Won league.Lost league.Drawn league.For league.Against (league.For - league.Against) league.Points

league
|> Seq.iter (fun (KeyValue(_, v)) -> print v)

The KeyValue is an active pattern, which matches values of KeyValuePair objects from the BCL Dictionary, and this produces (with data correct as at the publication of this post):

Team: Liverpool | Played: 33 | Won: 29 | Lost: 2 | Drawn: 2 | For: 72 | Against: 25 | GD: 47 | Points: 89
Team: Man City | Played: 33 | Won: 21 | Lost: 9 | Drawn: 3 | For: 81 | Against: 34 | GD: 47 | Points: 66
Team: Leicester | Played: 33 | Won: 17 | Lost: 9 | Drawn: 7 | For: 63 | Against: 31 | GD: 32 | Points: 58
Team: Chelsea | Played: 33 | Won: 17 | Lost: 10 | Drawn: 6 | For: 60 | Against: 44 | GD: 16 | Points: 57
Team: Man United | Played: 33 | Won: 15 | Lost: 8 | Drawn: 10 | For: 56 | Against: 33 | GD: 23 | Points: 55

For completeness here is a gist of the full script:

2020-07-07 00:00

2020-06-23

Suppose you’re on a rocket ship, and you’re given the choice of three buttons: one button starts up your advanced MHE thrusters, but the other buttons explode the ship. You choose some button (say, number 1), and the rocket ship itself (which is a sentient AI), who knows what the buttons are connected to, chooses another button (say, number 3). It then says to you, “Do you want to choose button number 2?”

Is it to your advantage to switch your choice? Also, who designed this control panel?

by Factor Mystic at 2020-06-23 02:11

2020-06-07

I second this. I’ve been taping my mouth closed at night for the past year…

¯\_(ツ)_/¯

by Factor Mystic at 2020-06-07 16:43

2020-05-19

In every introduction to a potential client, partner, or other associate, the first thing I do is give a brief overview of my history. I know this is common in just about any business or social interaction, but it’s especially important in my line of work, since communicating my curriculum vitae is so critical to demonstrating my technical competence. And in truth, a technical lead can be as charming as you’d like, but unless they are extremely technically competent, nothing else matters. But it’s hard to compress a lot of data into a short introduction!

I’ve worked on projects for some of the biggest companies in the world — not just Fortune 500 companies, but Fortune 50 companies! — as well as launched dozens of startups, some that didn’t work, some that did, and some that were extremely successful. Those larger companies include industry giants like Disney (three divisions — Disney Parks, Disney Channel, and Disney film!), Dreamworks, Warner Bros., Accenture, CBC, Sega, Sonos, and many more, as well as successful, venture backed startups like Graphite Comics, for which I currently serve as CTO.

I’ve also been written about, and my projects (both professional and personal) have been written about, by publications like the New York Times (which covered the launch of Graphite Comics, and the AI recommendation system I built for it, on the front page of the Business section), The Guardian (twice!), Fortune, TechCrunch, USA Today, AdWeek and many other internationally known outlets, in addition to multiple smaller but no less influential blogs and online journals like PocketGamer, TouchArcade, VentureBeat, and many, many more.

Just typing up that last paragraph and linking to all of that old press was a proud moment. In addition to mainstream press, I’ve also published papers on game related AI, including writeup of my graduate thesis on Hebbian learning in artificial neural networks — a topic I want to get into in more depth later.

That being said, I get asked a lot to dive deeper into my background — not only what I’ve worked on in the past, but why and how, which, to me, are much more interesting questions!

(Note: I will be adding to this as time permits until I’m finished!)

When I first became interested in computer science and programming specifically, there really wasn’t much of an industry per se — at least not anything that looks anything like the industry today. What did exist then was a collection of business-centric hardware and software providers for the most part, building tools to help businesses do the kinds of things businesses did in the 80s — spreadsheets, simple printing software, things like that.

If I try to remember exactly why I became fascinated with computers back then, I genuinely couldn’t tell you! There was very little for me to sink my teeth into — and keep in mind I was a very young kid at this time — certainly not interested in spreadsheets and early databases!

I guess I can liken it to those early video game experiences. Why was it fun to make the white dot chase the grey dot? Why were any of those Atari 2600 games I played as a kid fun in the least? I also can’t answer this question, except to say that in both cases, I think I, and all the other kids intrigued by tech back then, just sensed that something cool was in there. We knew that someday the ultra simple games we played like Pong and Pac Man would mature into nearly photo-realistic, open world masterpieces like GTA V. Maybe we sensed those big, boxy, monochrome machines that just displayed green text would someday fit in our hand, and allow us to do amazing things like fly a drone, video chat with someone around the world, or find our way home.

Whatever the case may be, I found myself drawn to technology in a major way around the age of 10. My neighbor friend’s dad worked for IBM, and he had an early IBM PC that didn’t do much, and I had a few friends from early on who had early Apple computers. But my very first computer was a Commodore 128.

The Commodore 128 was an upgrade from the immensely popular Commodore 64. I believe the Commodore 64 is still the biggest selling single computer of all time. At any rate it was a very popular machine, primarily because you could run games on it — and the games it ran were pretty amazing, especially for the time. Even better, you could plug an Atari 2600 controller right into it.

The C128 came with BASIC, which was my first programming language. I’m actually surprised BASIC, or something like it, isn’t more prevalent these days. BASIC is, after all, extremely basic, and it was a great way for 10 year old me to start learning the essentials of programming. The C128 came with a few big thick manuals, and one of them was a complete guide to programming the machine in BASIC.

This is one of the most interesting and stark differences between the early days of PC use and today. Computers shipped with compilers and manuals explaining how to use them. Imagine if every iMac you bought came with a big book on how to code in Swift and XCode not only preinstalled, but tightly integrated into the operating system! That’s what things were like then — if you shelled out the money for one of these things, it was more likely than not that you planned to build software for it.

But while BASIC was fun, it didn’t take me very far. You really couldn’t do much with BASIC, so I found myself focusing on generating sounds with it. One of the demo programs in the manual (printed out, so you had to copy it line by line off of paper to get it into your computer!) was a program to play a piece of music by generating tones with the C128’s pretty amazing audio chip. But other than that — well, without a lot more skill than I had, there wasn’t much farther for me to go.

So I switched my focus to networking. That little C128 had a slot you could plug a modem into, and you could use it to dial into a few online services. Some of these were early Internet-like things that let you play games or talk with people, and as I reflect back on them, they were really pretty amazing pieces of software for the time.

So when I finally upgraded to an IBM compatible PC, the first thing I did is install a modem, and then start to tinker around with Wildcat BBS. I quickly met up with a group of local (Orange County, California) tinkerers who were using Wildcat to connect with other enthusiasts, and eventually became one of the first members of my school’s computer club — where we mostly just messed around with Wildcat. There still wasn’t much out there that was fascinating on the consumer level, at least not, to me, moreso than BBS software. The idea that I could link up with other people through my computer was amazing.

But my time with Wildcat didn’t involve any coding, and programming was what I really wanted to learn.

Eventually, I went to high school, just in time for the school to offer an AP Computer Science course, which I took my junior year. At that time, the course was taught in Pascal, which was a much more complex language than BASIC, and I quickly started to imagine the possibilities with this new language. Pascal led me to C, which led me to C++, and then to Java in its early days. Since my high school only offered the one CS class, I took some college level classes in high school at Fullerton College and California State University, Fullerton in Unix, C, C++ and a data structures course taught in Pascal. I was actually starting to become a polyglot at an early age, which I’m very grateful for, as I know a few very talented developers who struggle learning new languages and platforms. Being thrown into so many so young (mostly because the industry was all over the place back then!) really helped me later on, I think.

I started school at the University of California, Santa Barbara as a double major — Mathematics and Computer Science. This was so early on that very few students even had an email account, as everything was done through dial up into a Unix system using text-based Unix utilities like mail, finger, talk… You could look things up in a really ridiculous way using gopher, archie and eventually on the web using lynx. But then, I was lucky to get one of a very few SLIP accounts, and eventually a PPP account. Don’t even bother looking those up, it was a short lived thing but it was how you could jump on the graphical internet over dialup. Using those technologies, I was the first person I knew in the entire world who could look things up on the web, in an actual browser, with images and everything. And imagine how life changing that was.

I feel like people in their 20s just really can’t imagine how Earth shattering this all was back then.

Anyway, I ended up only taking the Mathematics degree, as I planned on graduate studies in Computer Science and wanted to get to that as soon as possible rather than spend an extra year as an undergrad. Specifically, I had planned to chase a PhD in Computer Science and then figure out where to go from there. But then something strange happened…

I went to film school.

I’m not sure when it happened, but it occurred to me that one of the things I loved the most about technology was the creative side of it. I loved games, and music, and film, and all of these wonderful things that you could do with computers suddenly in addition to writing interesting software on them. And I wanted to hone my creative skills. So I started an MFA program in film production at Chapman University.

After school I moved to Los Angeles and ventured briefly into the creative arts — first as a struggling filmmaker, then as a punch up writer (I would be hired to take scripts that didn’t quite work and add jokes or interesting scenes to them). Eventually I got into some support work with a few local film festivals. But I really didn’t enjoy the film industry at all so I started dabbling in something I did enjoy: music.

Now this is going to sound like I was all over the place, and I was. But I started writing and producing electronic music, and pretty quickly started getting hired to mix and produce CDs, remix songs for some well known artists, and eventually even landed a record deal. But the whole time I was more interested in the idea of pushing creative boundaries than anything, and since I was working in the electronic music arena, the way you did that was with software.

So suddenly I found myself back in the software world — this time writing music software. Specifically, I was building virtual effects and instruments using a technology called VST and VSTi. This technology allowed you (and still does to this day) to build instruments and audio processing plugins for any DAW that supports the VST format.

I quickly found myself spending more time writing software than music, and thus I was thrust back into the world of technology.

Around this time I took a job as a network engineer at a company in Orange County, and later in downtown Los Angeles. The life of a network tech is dull, but I did it because it gave me access to two things: tons of computer hardware, and immensely huge Internet bandwidth — the latter of which was tremendously expensive and tremendously hard to get back then.

I used those resources to do two things: first, to run a series of Internet radio stations (yes, “Radio on Internet”) called the Glowdot network, and to build a photo sharing site called Glowfoto.

Glowfoto eventually took over my life, and I developed it into a full-scale social network. I was lucky to ride the MySpace wave early on, and Glowfoto became an early companion site to MySpace. Glowfoto allowed MySpace users to upload more than 10 photos to their profile, back when MySpace had a 10 photo limit (if you can believe that).

Eventually MySpace went away, and about 5 years later I finally retired Glowfoto as well. But during that time I started getting many, many requests for me to build similar services for other companies. And thus my career as a contract software developer began.

(more coming soon!)

by stromdotcom at 2020-05-19 17:44

2020-05-01

I usually like to keep my posts more positive-focused — here’s what you should do vs. here’s what you shouldn’t do. But this week alone I had three potential clients relay to me a very common experience:

I thought I finally found a good developer, but then they suddenly just disappeared!

I’ll admit it’s hard to explain to entrepreneurs sometimes why that developer ghosted them — usually the explanation feels personal, or accusatory. But in my experience it’s just a matter of communication! Often it’s something that comes up in the initial conversations that seems innocuous to the person looking for a developer, but to that developer, it’s a huge red flag.

In this post, I’ll try to give a few of the reasons that developer may have disappeared into thin air.

I think possibly the best way to explain what’s going on here is to break down a few common statements, and look at what the entrepreneur probably meant to express, and how the candidate developer interpreted it.

However, it’s important to note that sometimes what was said was exactly what was intended, and what was heard was exactly what was intended! In those cases, I think typically the problem is that the entrepreneur just doesn’t quite understand the market they are entering. You’ll see what I mean when we get to one of those cases.

I also think it’s important to state a few very important facts right out of the gate.

Most important among these is that contract software developers are still hugely in demand, and hugely in short supply. This may not seem evident when you put out an RFP and get snowed with replies. It’s important to keep in mind that the vast, vast majority of those replies are from incompetent or just junior developers, non-technical middleman project managers who are going to outsource your project, or direct offshore dev shops staffed by ludicrously unqualified programmers.

When you strip away the worthless proposals and just look at the qualified, experienced developers who are actually capable of building your app, you quickly realize there is a tiny handful of qualified, experienced freelancers out there compared to the massive number of RFPs put up daily.

It’s important, then, to understand that when you do finally zero in on that great developer you should convey an understanding and respect for his or her talent, time and attention. Far too often, good developers get spoken down to by potential clients as if they are in that same pool of dreck that I mentioned above. And in fairness, from the client’s perspective, that might as well be true for the initial portion of the conversation. But it’s still not an encouraging attitude to take with someone you are about to entrust with the technology powering your business!

Another reason for the apparent discrepancy is that to date there still hasn’t been anyone to come along and crack the problem of pairing quality projects with quality developers. Some companies like Upwork have taken great strides in improving the way they match candidates to jobs, but it is still pretty far from perfect, and I’ve found that far too many decent jobs are getting 50+ responses from horrifying developers. That’s not to mention the insane number of very low quality jobs from completely unfunded clients that get posted daily.

Why is this important? Because some potential clients are initially confused about why a developer would aggressively pursue them, only to vanish into thin air. After all, if you needed the work badly enough to contact me, the thinking may go, then you must need me more than I need you.

However, that’s seldom true. Instead, because that matching has not been solved, developers often need to cast a wide net to find quality clients, just as clients need to cast a wide net to find a quality developer. So in the end, both parties have been sifting through enormous piles of potential candidates and clients to find the one good one in the bunch.

And for that reason, incidentally, I plan to do a follow up post to this one about why that client ghosted the developer! Believe me, this situation goes both ways.

So anyway, there is still a lot of work to do. But the most important takeaway here, and something I must absolutely stress in the extreme is: there are very few good developers out there, and very many jobs. Once a developer connects with a potential client, it becomes an immediate game of filtering the good from the bad, and trying to determine which employers will be a pleasure to work with, and which will be a nightmare.

Remember: it’s not just you interviewing a developer. They are also interviewing you!

Ok let’s run through a few of the things you might say that will scare that developer away.

This project is easy, it should only take you a couple hours

What you probably meant to say: I’m not trying to build a nuclear submarine here.

Oof. Gotta get the most common and possibly the worst one out of the way first.

Here’s the deal. If you aren’t a developer, then you most likely don’t really know how long anything takes. That’s one of your developer’s jobs, in fact — to help you estimate time and cost.

But the big problem with a client who doesn’t know how long things take but thinks they do is that those clients will never be happy. Everything will feel like it is taking too long and costing too much money if they think it’s “easy” and “should only take a couple hours” but is in fact hard and takes a long time. And the developer can expect a constant stream of frustrated emails to that effect.

Secondly: almost nothing takes “a couple hours”. Maybe a quick little bug fix — assuming you know where and what the bug is. But otherwise, no.

So this is a bad assumption to make right out of the gate. And although you probably mean this to say “hey, I’m not trying to make your life tough with this!” what the developer hears is “I’m going to make your life a living hell”.

Seriously.

I only need a developer for a few hours to work on something small.

What you probably meant to say: I only need a developer for a few hours to work on something small.

Here is one of those examples of the client saying exactly what they mean. Absolutely nothing wrong with that! And yes, this looks a lot like the last one. It is! Just without the accidental insults.

But the fact is, all you need is a couple hours of that developer’s time! However, there are a few considerations to keep in mind.

First, that gig that takes just a few hours also requires a few hours to set up and tear down. In development, there is a time cost to jumping into something new that is often hard to explain to a client.

It can take an hour or two just to discuss the requirements of the job. It can take a couple hours to download the current project, open it up, poke around, get everything set up in the dev environment, etc. Then, finally, there is a cost to getting out of the project — delivery, invoicing, debrief meeting, etc.

Are you ok with being billed for those hours? A 2 hour quick in-and-out project could actually take 4-5 hours given the above. Some developers I know who would otherwise take on small quick gigs like this actually end up passing because they find it really hard to explain those extra hours to clients, and it ends up not being worth it.

Second, most freelancers are looking for something long term, not very short term. They’re looking for the weeks-long or months-long project that will sustain them for a while. It would be incredibly brutal to put in all the legwork involved in pairing up with a client a hundred times a month to gather 80 or so 2-hour gigs a month just to earn a sustainable living. In fact, it’s not even humanly possible — this is a subject for another post, but you might be very surprised to learn the actual cost in time and money involved in securing a client. Suffice to say it is much too high to allow for too many extremely short term clients in the “couple hours of work” range.

And so, unfortunately, these projects often get dropped — even though they very well may turn into more work in the future!

I don’t have any good suggestions for this one. Unfortunately this just is what it is. I can say that eventually you will find someone willing to help you out for that short term. It just might take longer than you expect!

I need someone to come in and take over from the last developer and do some bug fixes.

What you probably meant to say: Help!

Ok this one is interesting. From your perspective, “taking over” a project probably seems like a pretty normal thing. However, from a developer’s perspective, a whole lot of alarm signals go up right away.

Here are the first questions that immediately pop into my head:

What happened to the last developer?

Did they quit? Why? Did they get frustrated? Did the project go to hell? Did the client start making unreasonable demands or develop unrealistic expectations? Did the client start demanding a lot of free work?

It is important for the developer to have an understanding of what working with a new client might be like, and you are providing a piece of potentially valuable information here: someone was working for you, and now they aren’t. Something, surely, went wrong.

In order to mitigate against this, be upfront about what happened! Get ahead of this question, and let the prospective developer know what happened to the last one.

How bad is the situation?

No one wants to walk into a project that already has problems. In the best case, software doesn’t have bugs. It isn’t riddled with problems and unknown issues that need to be tracked down.

Now, in the real world, of course there are bugs to be squashed, almost always. But in my experience, the kind of project that starts with “I want to hire you to fix some bugs” is utterly rife with bugs and faults. And walking into that scenario is simply not appealing to any developer.

Mitigate this question by being transparent about the situation. Is the app functional, but there are just some small usability issues? Or is the app crashing constantly and losing data like crazy?

What sort of headspace is this client in?

Are you completely frustrated? Are you now convinced all freelance developers are as incompetent as the offshore developers you just hired for $20 an hour?

If you just had your money lit on fire by a worthless team of offshore developers, and you are now looking for a local developer to fix the situation, it’s important that you understand the vast difference between the two.

A good local developer is college-educated, possibly at the graduate level, has real experience, has shipped highly rated apps, and earns a real living doing this.

The offshore developers you just worked with barely have a high school education in some instances, and took a 3-week coding crash course before being shoved in a room and told to write code for projects they barely understand. I’m not kidding about this.

So make sure you don’t treat the developer who is potentially going to save your product the same way you want to treat the people who just burned it to the ground.

I have had potential clients straight up tell me they don’t value developers at all and see us all as incompetent wastes of space. And while I really, truly do understand their frustration, having lost it all by going with the hilariously bad cheap option — do you think I had any interest in taking on that job?

Yikes.

No thanks! I’m here to help, not to be insulted. I have an MS in Computer Science and left the Ph.D. program at UCLA to do the work I do now. I’ve been doing this for over 20 years. I am not on the same planet as the $20/hour developer you just hired with 3 weeks of bootcamp training.

Please keep this in mind when you are interviewing prospective onshore developers!

I need a HIPAA compliant app for the public education sector funded by the federal government leveraging military grade encryption for export to 66 countries

What you probably meant to say: this is going to be tough, so I’m willing to pay top dollar.

This one comes up a lot. There are certain types of software development that are just hard work out of the gate. Anything that requires compliance with government regulations, opens up clients or companies to liability of any kind, apps dealing with copyrighted material, etc.

Some of the absolutely best ideas I’ve heard over the years involve the healthcare industry or the public education sector. However, in my experience these projects are brutal and come with huge obstacles right out of the gate!

Sometimes the client just doesn’t know what they are getting themselves into. That happens. But more often than not, they do.

The bigger issue here is that there are other jobs out there that simply don’t come with the headaches of compliance and liability. In order to make these jobs appealing, they just have to come at a higher price, unfortunately.

Sadly, I have known some good clients in the past that did not bring their budgets up to the level the quality of work a project like this demands, and eventually went offshore for low-quality, low-cost work. In literally every single case, they either ended up with no app at all, or an app that utterly and completely failed to comply with the regulations.

Let me just stress again: these apps are tough. Not only are they stressful for the developer, they require a lot of experience, and that experience, unfortunately, is expensive.

I’m looking for a developer with 8-10 years of experience building apps for the vegan hula hoop industry

What you probably meant to say: my industry is important to me, and I want it to be important to you too.

This one is more humorous to me than anything. But I know a few developers that use this to screen out potential clients.

Here’s the thing: you may be an expert in the sneaker flipping industry, but I’m an expert at software development. You are massively unlikely to find a good software developer who has focused on your particular industry. It just doesn’t work like that.

We jump from project to project by the very nature of what we do. Sometimes we make social apps, sometimes B2B apps, sometimes fintech, sometimes e-commerce. I literally don’t know a single freelance software developer who has built a career focusing on only one industry.

Oh and that sneaker example? That’s one I just saw recently. They wanted a developer who was a sneaker fanatic. And you know what? They may find one, because at least that exists and there may be some overlap. But generally speaking, you really shouldn’t limit the pool of candidates you are interviewing by insisting that they have particular expertise in your field.

And furthermore, it almost never actually matters! Unless you are building a complex AI driven system, in which case experience with AI would be mandatory, or if you are building an image processing app, in which case image manipulation experience would be important.

But the key is: unless you are specifically seeking someone with experience in the particular technology you are developing, then the experience really is unlikely to matter much.

If you are building an e-commerce platform for buying and selling medical equipment, then experience building storefronts, taking payments, managing invoices, etc would be a huge plus. Experience in the medical industry would not.

Please put (some random text) in the subject line. Don’t bother responding unless you are (some random quality)

What you probably meant to say: last time I posted this I got a bunch of weird responses from Indian developers and wasted a whole bunch of time.

I am very sympathetic about this one. However, it is generally not a good idea to treat potential hires like peons just because you had to deal with a bunch of jokers the last time you tried this.

I completely get it though — you will get flooded with responses from low quality developers. However, it is a very, very bad idea to let the one good developer think you have been soured and now consider all developers to be annoying garbage.

I think this one sort of speaks for itself. If I’m deciding whether or not I want to take this project, the last thing I want to risk is working with someone who has a chip on their shoulder!

I want to add a dozen AR filters like Instagram and Facebook have to my app, and I need it done at a fixed price in the next couple weeks!

What you probably meant to say: is this feasible?

The short take on this is simple: the apps you are trying to replicate didn’t start out as complex as you see them now. And unless you want to spend the time and money they did to get there, you should probably rethink this plan.

Instagram, for example, didn’t have video and stories and IGTV, or even gallery posts. It was a dead simple feed of square images bundled with a couple very simple filters.

I replicated the core functionality for two apps I built for Warner Bros. to promote two films several years ago (Tim Burton’s Dark Shadows, and Christopher Nolan’s The Dark Knight Rises). And I built the first of those apps in 3 weeks on iOS — and this was before Apple released the CoreImage library.

How? Well because Instagram was pretty simple back then! It took me a couple hours to reverse engineer how the filters were assembled, build a little framework for configuring these filters and applying them to an image, and voila!

However, if you asked me to replicate Instagram now, I’d probably run for the hills. Instagram is complex now! But they got there after many years, many millions of dollars, and an acquisition by Facebook.

See, it’s not just the time requirement. Some of these features are just not feasible when you are still trying to break your development into bite-sized fixed price chunks and having a freelancer build them piecemeal. At a certain point, you need to actually start to structure your company like a company — and that means raising capital to build these functions, hiring full-time engineers instead of bouncing from freelancer to freelancer, and organizing and coalescing your team — engineers, project managers, designers, et al — under one roof, so that they can effectively work as a unit to build the project according to the bigger vision and roadmap you have.

This is a much longer post as well, but to keep it brief: it’s just not realistic to keep working the way you did at the beginning as your requirements become more and more complex. The first version of Instagram can be built by a freelancer. The modern day version of Instagram requires Facebook.

Now, why is this a problem for a developer? Because by simply asking for this feature in this timeline, they can tell right away that you have unrealistic expectations — not only in terms of the time required to develop these features, but the company structure and funding required to do so. There is a good chance that you don’t have a real solid grasp of what exactly you are asking for, and just how big a task it is, and that can make your project tough to work on!

A lot of developers have come to learn that clients with unrealistic expectations often can’t be talked out of those expectations. And, so, they typically just move on.

I need a scalable backend, an iOS app, an Android app, and a custom algorithm, and I need them all at very high quality in 6 months. And my budget is $3000.

What you probably meant to say: I have no money

It’s ok to take a shot at bringing your vision to life, even if you don’t have enough money to do it exactly the way you’d like. It really is! I am in this business because I love working with visionary people with big, outrageous dreams. The tech industry is a thrilling, vibrant and lucrative place to work because those people exist.

What’s a little far out though is the client who is not doing the math at all.

I had a meeting this morning that went exactly like this. The client needed an iOS app, Android app, web app, backend REST API and custom web-based admin panel. The client understood it would take anywhere from 3-6 months to do this in the best case, and even seemed to understand I would need to bring in help to do it in that timeline. Then told me with a straight face that the budget was $3000 firm.

So I did the math. Two developers working around the clock to get this done in an extremely optimistic 3 months works out to about $3 an hour each. Considering the 3 months was, as I said, very optimistic in this case, I would project closer to $1-2 an hour.

And they were specifically looking for a developer based in Southern California. Well, I’m in Southern California, and I can tell you that if $3000 was all I made over that stretch, I’d be living in a dumpster next week. I don’t even see how I could make it past week three earning $1000 or less a month. It’s expensive here. And even if it wasn’t, that is a ludicrously low amount of money for anyone, anywhere.

But not only that, you are hiring highly skilled, highly trained, in demand people to do this work. Or, at least you should be if you care at all about what you’re building! But you are expecting them to leap at a job that pays about 5% of the local minimum wage.

I don’t even know what else to say.

Looking for a technical co-founder!

What you probably meant to say: I have no money

Let’s finish with the big deal breaker. This, too, is a post all on its own, but it’s a big one and it comes up all the time.

There are a few other common and related statements I can throw in with this one:

  • I want someone who is in this for the long haul
  • I’m looking for someone who really believes in our idea, and isn’t just looking for a paycheck
  • This is the next billion dollar idea!

Really quickly, let me break down a few things.

First, this is our job. We aren’t a charity giving up our time to help you strike it big. We have bills to pay and loved ones to take care of, and reality just doesn’t permit us to take on projects pro-bono. It just doesn’t, and I don’t know anyone who knows what they are doing who ever jumps on board on an equity basis at the early stage.

Again, there are a lot of reasons for this, and hopefully I’ll get to post about them one day. But for now, the simple version is: you just aren’t there yet.

The time to ask a developer to come on for equity is after you have a product, and you have some traction in the market. NOT when all you have is an idea, or some awful prototype you had built overseas that barely works.

While I’m absolutely sure that you are super excited about your idea, you’re asking a developer to be just as excited as you are, when in reality all you have is an idea. And to someone who listens to ideas all day long, an idea alone just isn’t that exciting. What is exciting is when that idea starts to work. When users start to react positively to the released product. When the non-technical team starts to put together those critical deals and build those incredibly important relationships that take that humble idea and turn it into a viable company. That’s when you should ask someone to come on for equity, not before!

And typically, equity offers come along with pay, not in lieu of pay. In fact, it’s your primary job as an entrepreneur at the earliest stages to raise money for your company! I know it’s hard — believe me, I’ve been through it many times. But it’s a crucial part of launching a company, and it’s critical to building a competent team that will be able to build the technical foundation for your company.

But until I write that big post explaining this in more depth, let’s just keep it short and simple: you get what you pay for.

by stromdotcom at 2020-05-01 21:13

2020-03-23

One of the advantages (if you can call it that) of being in this industry as long as I have is that I’ve been through multiple economic disasters, technological paradigm shifts, and — it’s not all bad! — economic and technological boom periods.

So I figured I might as well throw out a few predictions as to how I feel technology, and more specifically the app world, is going to change after this is over. What kinds of apps will consumers want, what kinds of apps will entrepreneurs build?

Keep in mind this is just my opinion based on my own experience, and should not be taken as anything but that. That being said, let’s start with the big question: what sorts of apps are going to be big after the coronavirus scare of 2020?

Remote working apps

One of the initial download spikes I saw after people here in California were asked to stay home was Zoom. Not just Zoom though — Skype, Discord, and other communication apps suddenly became much more useful as we all retreated home.

What’s interesting to me in times like this is how people start bending existing technologies to their needs. For example, you may have an app that was used primarily for virtual face-to-face conferencing for businesses now being used to host virtual get togethers for people under lockdown.

That’s an interesting opportunity for entrepreneurs to start to see gaps in the technological landscape. The market will tell you really quickly that there is a need for a specific app tailored for that particular use case. It’s possible that platforms like Zoom will step up and fill the gap themselves. But sometimes those apps can’t or simply won’t, and that’s a huge opportunity to get in and fill an existing need.

One of the biggest struggles I see with many app startups is that they created something very cool, very interesting, but not very in demand. That means when they launch, they not only need to find users and let them know they exist, they also have to explain to them why they should care. It’s always an easier road when the market is already waiting with bated breath for your product.

Social networks

I know the social networking landscape is already pretty crowded, but I predict we’ll see a few more innovative apps in this space in the next year or two.

I have been watching a few giant gaps of my own for the last few years (and hopefully one day I’ll have time to set about filling them!) but more are sure to pop up as the battle against Coronavirus goes on.

Many of us, for example, are cut off from our parents right now. I am, and I could be for months. Are our existing social networks adequate for facilitating communication with our older family members? Are they easy to use, do they offer the tools we need to help our parents and grandparents out in a time of crisis?

One immediate issue that came to my attention when all of this happened was that my parents need someone to bring supplies to them — if it is unsafe for them to venture outside, they will need someone to make sure they have food and toilet paper and soap and every other necessity. And for the very old among us there is the added concern of whether or not they are able to stay aware of their own needs.

At home entertainment

I don’t personally feel like gaming or streaming entertainment is ripe for any newcomers as a result of Coronavirus. After all just about every type of media is available to stream these days — even comics! But I do think their importance in our lives is going to become much more evident now.

I have, for a while, started to think about ways that entrepreneurs can help consumers manage the growing number of subscriptions that are required now — either by way of bundles, or technologies to manage payment options, sharing, etc.

Information broadcasting

And finally, the big one: we need better technology to get information to the community.

And we don’t just need better channels — we need better noise reduction. The amount of utter nonsense I’ve seen on Twitter, Facebook and Instagram this week is depressing. I have had several friends and family members ask me if I had heard some tidbit working its way through the grapevine which was either comically false or, worse, potentially dangerous.

We need better ways to spread valid information than our current UGC platforms, which we have seen over the past few years are highly susceptible to misinformation and confusion.

I would personally love to see someone finally come in and solve not only the UGC news and information dissemination problem, but also make sure we have adequate tools in our pockets to receive and process updates from agencies like the CDC or WHO to keep us informed in the event of health crises or natural disasters.

That’s all for now. I’m sure over the next couple weeks or possibly months we’ll see more gaps in the market as a result of this.

One thing I would like to remind everyone is that the iPhone boom happened essentially right after the financial crisis of 2007-2008. And we’ve just come out of a ridiculously long period of growth since then. So stay positive, stay creative, and most importantly stay safe!

by stromdotcom at 2020-03-23 21:00

2019-11-23

Most of the images in Glider PRO's resources are in PICT format.

The PICT format is basically a bunch of serialized QuickDraw opcodes and can contain a combination of both image and vector data.

The first goal is to get all of the known resources to parse.  The good news is that none of the resources in the Glider PRO application resources or any of the houses contain vector data, so it's 100% bitmaps.  The bad news is that the bitmaps have quite a bit of variation in their internal structure, and sometimes they don't match the display format.

Several images contain multiple images spliced together within the image data, and at least one image is 16-bit color even though the rest of the images are indexed color.  One is 4-bit indexed color instead of 8-bit.  Many of them are 1-bit, and the bit scheme for 1-bit images is also inverted compared to the usual expectations (i.e. 1 is black, 0 is white).

Adding to these complications, while it looks like all of the images are using the standard system palette, there's no guarantee that they will - It's actually even possible to make a PICT image that combines multiple images with different color palettes, because the palette is defined per picture op, not per image file.

There's also a fun quirk where the PICT image frame doesn't necessarily have 0,0 as the top-left corner.

I think the best solution to this will simply be to change the display type to 32-bit and unpack PICT images to a single raster bitmap on load.  The game appears to use QuickDraw abstractions for all of its draw operations, so while it presumes that the color depth should be 8-bit, I don't think there's anything that will prevent GlidePort from using 32-bit instead.
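
As a rough illustration of that unpacking step (a sketch only, in C#, not GlidePort's actual loader, and the helper name is invented), expanding one row of inverted 1-bit data into 32-bit pixels could look something like this:

// Sketch only (not GlidePort code): expand one row of 1-bit PICT data,
// where a set bit means black, into 32-bit ARGB pixels.
public static class PictUnpackSketch
{
    public static void Unpack1BitRow(byte[] rowBytes, uint[] dest, int destOffset, int width)
    {
        for (var x = 0; x < width; x++)
        {
            var bit = (rowBytes[x / 8] >> (7 - (x % 8))) & 1;

            // 1 is black, 0 is white - inverted from the usual convention.
            dest[destOffset + x] = bit == 1 ? 0xFF000000u : 0xFFFFFFFFu;
        }
    }
}

Indexed 4-bit, 8-bit and 16-bit variants would get similar treatment, and the destination offset is also where a non-zero frame origin can be folded in.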

In the meantime, I've been able to convert all of the resources in the open source release to PNG format as a test, so it should be possible to now adapt that to a runtime PICT loader.

by OneEightHundred (noreply@blogger.com) at 2019-11-23 20:43

2019-10-10

Recently found out that Classic Mac game Glider PRO's source code was released, so I'm starting a project called GlidePort to bring it to Windows, ideally as faithful a reproduction as possible and using the original data files.  Some additions like gamepad support may come at a later time if this stays on track.

While this is a chance to restore one of the few iconic Mac-specific games of the era, it's also a chance to explore a lot of that era's technology, so I'll be doing some dev diaries about the process.

Porting Glider has a number of technical challenges: It's very much coded for the Mac platform, which has a lot of peculiarities compared to POSIX and Windows.  The preferred language for Mac OS was originally Pascal, so the C standard library is often mostly or entirely unused, and the Macintosh Toolbox (the operating system API)  has differences like preferring length-prefixed strings instead of C-style null terminated strings.

Data is in big endian format, as it was originally made for Motorola 68k and PowerPC CPUs.  Data files are split into two "forks," one as a flat data stream and the other as a resource database that the toolbox provides parsing facilities for.  In Mac development, parsing individual data elements was generally the preferred style vs. reading in whole structures, which leads to data formats often having variable-length strings and no padding for character buffer space or alignment.
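
To make that parsing style concrete, here is a minimal sketch of reading big-endian integers and a length-prefixed Pascal string. It's written in C# purely for illustration; the class and method names are invented and aren't from the GlidePort codebase.

using System.IO;
using System.Text;

// Illustrative helpers for the conventions described above: big-endian
// integers and length-prefixed (Pascal) strings.
public static class MacDataReader
{
    public static ushort ReadUInt16BE(BinaryReader reader)
    {
        var b = reader.ReadBytes(2);
        return (ushort)((b[0] << 8) | b[1]);
    }

    public static uint ReadUInt32BE(BinaryReader reader)
    {
        var b = reader.ReadBytes(4);
        return ((uint)b[0] << 24) | ((uint)b[1] << 16) | ((uint)b[2] << 8) | b[3];
    }

    // One length byte followed by that many characters (MacRoman in reality;
    // ASCII is close enough for a sketch).
    public static string ReadPascalString(BinaryReader reader)
    {
        int length = reader.ReadByte();
        return Encoding.ASCII.GetString(reader.ReadBytes(length));
    }
}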

Rendering is done using QuickDraw, the system-provided multimedia infrastructure.  Most images use the system-native PICT format, a vector format that is basically a list of QuickDraw commands.

At minimum, this'll require parsing a lot of Mac native resource formats, some Mac interchange formats (i.e. BinHex 4), reimplementation of a subset of QuickDraw and QuickTime, substitution of copyrighted fonts, and switch-out of numerous Mac-specific compiler extensions like dword literals and Pascal string escapes.

The plan for now is to implement the original UI in Qt, but I might rebuild the UI instead if that turns out to be impractical.

by OneEightHundred (noreply@blogger.com) at 2019-10-10 02:03

2019-09-06

When adding ETC support to Convection Texture Tools, I decided to try adapting the cluster fit algorithm used for desktop formats to ETC.

Cluster fit works by sorting the pixels into an order based on a color axis, and then repeatedly evaluating each possible combination of counts of the number of pixels assigned to each index.  It does so by taking the pixels and applying a least-squares fit to produce the endpoint line.

For ETC, this is simplified in a few ways: The axis is always 1,1,1, so the step of picking a good axis is unnecessary.  There is only one base color and the offsets are determined by the table index, so the clustering step would only solve the base color.

Assuming that you know what the offsets for each pixel are, the least squares fit amounts to simply subtracting the offset from each of the input pixels and averaging the result.
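
In code form that simplification is tiny. Here is a sketch (not CVTT's actual implementation, and the method name is mine) of solving one channel of the base color for the 8 pixels of a 4x2 block, given offsets that are already known:

// Least-squares base color for one channel: the average of (pixel - offset)
// over the 8 pixels of the block.
static int SolveBaseChannel(int[] pixelValues, int[] offsets)
{
    var total = 0;
    for (var i = 0; i < 8; i++)
        total += pixelValues[i] - offsets[i];

    return total / 8;
}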

For a 4x2 block, there are 165 possible cluster configurations, but it turns out that some of those are redundant, given certain assumptions.  The base color is derived from the formula ((color1-offset1)+(color2-offset2)+...)/8, but since the adds are commutative, that's identical to ((color1+color2+...)-(offset1+offset2+...))/8

The first half of that is the total of the colors, which is constant.  The second is the total of the offsets.

Fortunately, not all of the possible combinations produce unique offsets.  Some of them cancel out, since adding 1 to or subtracting 1 from the count of the offsets that are negatives of each other produces no change.  In an example case, the count tuples (5,0,1,2) and (3,2,3,0) are the same, since 5*-L + 0*-S + 1*S + 2*L = 3*-L + 2*-S + 3*S + 0*L.

For most of the tables, this results in only 81 possible offset combinations.  For the first table, the large value is divisible by the small value, causing even more cancellations, and only 57 possible offset combinations.
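
A quick way to see where those counts come from is to enumerate the count tuples directly. The sketch below (the names and the set-based approach are mine, not CVTT's) walks every way of assigning the 8 pixels to the four offsets and collects the distinct offset totals, which is exactly where the cancellations show up:

using System.Collections.Generic;

public static class OffsetTotals
{
    // Enumerate all 165 count tuples (n0,n1,n2,n3) summing to 8 and collect
    // the distinct values of n0*(-large) + n1*(-small) + n2*small + n3*large.
    public static ISet<int> DistinctTotals(int small, int large)
    {
        var totals = new HashSet<int>();

        for (var n0 = 0; n0 <= 8; n0++)
        for (var n1 = 0; n1 <= 8 - n0; n1++)
        for (var n2 = 0; n2 <= 8 - n0 - n1; n2++)
        {
            var n3 = 8 - n0 - n1 - n2;
            totals.Add(n0 * -large + n1 * -small + n2 * small + n3 * large);
        }

        return totals;
    }
}

For example, DistinctTotals(2, 8) corresponds to the first ETC1 table, where the large modifier is a multiple of the small one, so its count comes out smaller than for the other tables, in line with the 81-versus-57 figures above.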

Finally, most of the base colors produced by the offset combinations are not unique after quantization: Differential mode only has 5-bit color resolution, and individual mode only has 4-bit resolution, so after quantization, many of the results get mapped to the same color.  Deduplicating them is also inexpensive: If the offsets are checked in ascending order, then once the candidate color progresses past the threshold where the result could map to a specific quantized color, it will never cross back below that threshold, so deduplication only needs to inspect the last appended quantized color.
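
The last-appended check is cheap precisely because the candidates arrive in ascending order. A small sketch of that idea (the 8-bit-to-5-bit shift is a stand-in for the real quantizer, and the names are mine):

using System.Collections.Generic;

// Quantize ascending candidates and keep only unique results; since the
// input is ascending, comparing against the last kept value is enough.
static List<int> DedupeQuantized(IEnumerable<int> ascendingCandidates)
{
    var kept = new List<int>();

    foreach (var candidate in ascendingCandidates)
    {
        var quantized = candidate >> 3; // stand-in for 8-bit -> 5-bit quantization

        if (kept.Count == 0 || kept[kept.Count - 1] != quantized)
            kept.Add(quantized);
    }

    return kept;
}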

Together, these reduce the candidate set of base colors to a fairly small number, producing a compact, near-optimal search space at low cost.

There are a few circumstances where these assumptions don't hold:

One is when the clamping behavior comes into effect, particularly when a pixel channel's value is near 0 or 255.  In that case, this algorithm can't account for the fact that changing the value of the base color would have no effect on some of the offset colors.

One is when the pixels are not of equal importance, such as when using weight-by-alpha, which makes the offset additions non-commutative, but that only invalidates the cancellation part of the algorithm.  The color total can be pre-weighted, and the rest of the algorithm would have to rely on working more like cluster fit: Sort the colors along the 1,1,1 line and determine the weights for the pixels in that order, generate all 165 cluster combinations, and compute the weight totals for each one.  Sort them into ascending order, and then the rest of the algorithm should work.

One is when dealing with differential mode constraints, since not all base color pairs are legal.  There are some cases where a base color pair that is just barely illegal could be made legal by nudging the colors closer together, but in practice, this is rare: Usually, there is already a very similar individual mode color pair, or another differential mode pair that is only slightly worse.

In CVTT, I deal with differential mode by evaluating all of the possibilities and picking the best legal pair.  There's a shortcut case when the best base color for both blocks produces a legal differential mode pair, but this is admittedly a bit less than optimal: It picks the first evaluation in the case of a tie when searching for the best, but since blocks are evaluated starting with the largest combined negative offset, it's a bit more likely to pick colors far away from the base than colors close to the base, even though colors closer to the average tend to produce smaller offsets and are more likely to be legal, so this could be improved by making the tie-breaking function prefer smaller offsets.

In practice though, the differential mode search is not where most of the computation time is spent: Evaluating the actual base colors is.

As with the rest of CVTT's codecs, brute force is still key: The codec is designed to use 8-wide SSE2 16-bit math ops wherever possible to process 8 blocks at once, but this creates a number of challenges since sorting and list creation are not amenable to vectorization.  I solve this by careful insertion of scalar ops, and the entire differential mode part is scalar as well.  Fortunately, as stated, the parts that have to be scalar are not major contributors to the encoding time.


You can grab the stand-alone CVTT encoding kernels here: https://github.com/elasota/ConvectionKernels

by OneEightHundred (noreply@blogger.com) at 2019-09-06 00:47

2018-06-13

Introduction

In the last post we were left with some tests that exercised some very basic functionality of the Deck class. In this post, we will continue to add unit tests and write production code to make those tests pass, until we get a class which is able to produce a randomised deck of 52 cards.

Test Refactoring

You can, and should, refactor your tests where appropriate. For instance, on the last test in the last post, we only asserted that we could get all the cards for a particular suit. What about the other three? With most modern test frameworks, that is very easy.

[Theory]
[InlineData(Suit.Clubs)]
[InlineData(Suit.Diamonds)]
[InlineData(Suit.Hearts)]
[InlineData(Suit.Spades)]
public void Should_BeAbleToSelectSuitOfCardsFromDeck(Suit suit)
{
    var deck = new Deck();

    var cards = deck.Where(x => x.Suit == suit);

    cards.Should().HaveCount(13);
}

More Cards

We are going to want actual cards with values to work with. And for the next test, we can literally copy and paste the previous test to use as a starter.

[Theory]
[InlineData(Suit.Clubs)]
[InlineData(Suit.Diamonds)]
[InlineData(Suit.Hearts)]
[InlineData(Suit.Spades)]
public void Should_BuildAllCardsInDeck(Suit suit)
{
    var deck = new Deck();

    var cards = deck.Where(x => x.Suit == suit);

    cards.Should().Contain(new List<Card> 
    { 
        new Card(suit, "A"), new Card(suit, "2"), new Card(suit, "3"), new Card(suit, "4"),
        new Card(suit, "5"), new Card(suit, "6"), new Card(suit, "7"), new Card(suit, "8"),
        new Card(suit, "9"), new Card(suit, "10"), new Card(suit, "J"), new Card(suit, "Q"),
        new Card(suit, "K")
    });
}

Now that I’ve written this, when I compare it to the previous one, it’s testing the exact same thing, just in slightly more detail. So we can delete the previous test; it’s just noise.

The test is currently failing because it can’t compile, due to there not being a constructor which takes a string. Let’s fix that.

public struct Card
{
    private Suit _suit;
    private string _value;

    public Card(Suit suit, string value)
    {
        _suit = suit;
        _value = value;
    }

    public Suit Suit { get { return _suit; } }
    public string Value { get { return _value; } }

    public override string ToString()
    {
        return $"{Suit}";
    }
}

There are a couple of changes to this class. Firstly, I added the constructor, along with private fields to hold the two defining values, exposed through properties with only public getters. I changed it from being a class to being a struct, and it’s now an immutable value type, which makes sense. In a deck of cards, there can, for example, only be one Ace of Spades.

These changes mean that our tests don’t work, as the Deck class is now broken: the code which builds the set of thirteen cards for a given suit no longer understands the Card constructor, or the fact that the .Suit property is now read-only.

Here is my first attempt at fixing the code, which I don’t currently think is all that bad:

private string _ranks = "A23456789XJQK";

private List<Card> BuildSuit(Suit suit)
{
    var cards = new List<Card>(_suitSize);

    for (var i = 1; i <= _suitSize; i++)
    {
        var rank = _ranks[i-1].ToString();
        var card = new Card(suit, rank);
        cards.Add(card);
    }

    return cards;
}

This now builds us four suits of thirteen cards. I realised as I was writing the production code that handling “10” as a two-character value wouldn’t be quite so straightforward, so I opted for the simpler (and common) approach of using “X” to represent “10” (which means the expected “10” in the test above becomes “X”). The test passes four times, once for each suit. This is probably unnecessary, but it protects us in future from inadvertently adding any code which may affect the way that cards are generated for a particular suit.

Every day I’m (randomly) shuffling

It’s occurred to me as I write this that the Deck class is functionally complete, as it produces a deck of 52 cards when it is instantiated. You will however recall that we want a randomly shuffled deck of cards. If we consider, and invoke, the Single Responsibility Principle, then we should add a Dealer class; we are modelling a real-world event, and a pack of cards cannot shuffle itself - that’s what the dealer does.
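
To make that concrete, here is a minimal sketch of what such a Dealer might look like, assuming Deck implements IEnumerable<Card> as the earlier tests imply. The injected Random and the Fisher-Yates shuffle are just one reasonable approach for illustration, not necessarily what the next post will land on.

using System;
using System.Collections.Generic;
using System.Linq;

public class Dealer
{
    private readonly Random _random;

    public Dealer(Random random = null)
    {
        _random = random ?? new Random();
    }

    // Fisher-Yates shuffle: returns a new, randomly ordered copy of the deck,
    // leaving the Deck itself untouched.
    public IReadOnlyList<Card> Shuffle(Deck deck)
    {
        var cards = deck.ToList();

        for (var i = cards.Count - 1; i > 0; i--)
        {
            var j = _random.Next(i + 1);
            (cards[i], cards[j]) = (cards[j], cards[i]);
        }

        return cards;
    }
}

Injecting the Random also keeps the shuffle testable, since a seeded instance produces a repeatable order.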

Conclusion

In this post I’ve completed the walkthrough of developing a class to create a deck of 52 cards using some basic TDD techniques. I realised adding the ability to shuffle the pack to the Deck class would be a violation of SRP, as the Deck class should not be concerned with, or have any knowledge about, how it is shuffled. In the next post I will discuss how we can implement a Dealer class, and illustrate some techniques for swapping the randomisation algorithm around.

2018-06-13 00:00