sn.printf.net

2017-11-28

Introduction

In the previous post in this series, we finished up with a very basic unit test, which didn’t really test much. We ran it using dotnet xunit in a console, and saw some lovely output.

We’ll continue to write some more unit tests to try and understand what kind of API we need in a class (or classes) which can help us satisfy the first rule of our Freecell engine implementation. As a reminder, our first rule is: There is one standard deck of cards, shuffled.

I’m trying to write both the code and the blog posts as I go along, so I have no idea what the final code will look like when I’ve finished. This means I’ll probably make mistakes and make some poor design decisions, but the whole point of TDD is that you can get a feel for that as you go along, because the tests will tell you.

Don’t try to TDD without some sort of plan

Whilst we obey the 3 Laws of TDD, that doesn’t mean that we can’t or shouldn’t doodle a design and some notes on a whiteboard or in a notebook about the way our API could look. I always find that having some idea of where you want to go and what you want to achieve aids the TDD process; the unit tests will then give you a feel for whether things are going well or whether the conceptual design you had is not working.

With that in mind, we know that we will want to define a Card object, and that there are going to be four suits of cards, which gives us a hint that we’ll need an enum to define them. Unless we want to play the same game of Freecell over and over again, we’ll need to randomly generate the order of the cards in the deck. We also know that we will need to iterate over the deck when it comes to building the Cascades, but the Deck should not be concerned with that.
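
The random ordering mentioned above can be doodled too. This is only a sketch under assumptions of mine - the DeckShuffler name and the generic element type are placeholders, not part of any API we’ve test-driven yet - but the standard way to randomly order an array is a Fisher-Yates shuffle:

```csharp
using System;

public static class DeckShuffler
{
    // Fisher-Yates shuffle: walk the array from the end, swapping each
    // element with a randomly chosen element at or before its position.
    public static void Shuffle<T>(T[] cards, Random random)
    {
        for (var i = cards.Length - 1; i > 0; i--)
        {
            var j = random.Next(i + 1); // 0 <= j <= i
            var temp = cards[i];
            cards[i] = cards[j];
            cards[j] = temp;
        }
    }
}
```

Taking the Random as a parameter, rather than newing one up inside, means a test can pass in a seeded instance and get a deterministic order - handy when everything has to be driven by tests.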

With that in mind, we can start writing some more tests.

To a functioning Deck class

First things first: I really like the idea of making the Deck class enumerable, so I’ll start by testing that.

[Fact]
public void Should_BeAbleToEnumerateCards()
{
    foreach (var card in new Deck())
    {
    }
}

This is enough to make the test fail, because the Deck class doesn’t yet have a public definition for GetEnumerator, but it gives us a feel for how the class is going to be used. To make the test pass, we can do the simplest thing to make the compiler happy, and give the Deck class a GetEnumerator definition.

public IEnumerator<object> GetEnumerator()
{
    return Enumerable.Empty<object>().GetEnumerator();
}

I’m using the generic type of object in the method because I haven’t yet decided what that type is going to be; deciding it now would violate the three rules of TDD, and it hasn’t yet been necessary.

Now that we can enumerate the Deck class, we can start making things a little more interesting. Given that it is a deck of cards, it seems reasonable to expect that we could select a suit of cards from the deck and get a collection containing 13 cards. Remember, we only need to write as much of this next test as is sufficient to make it fail.

[Fact]
public void Should_BeAbleToSelectSuitOfCardsFromDeck()
{
    var deck = new Deck();

    var hearts = deck.Where();
}

It turns out we can’t even get to the point of asserting anything in the test, because we get a compiler failure: the compiler can’t find a method or extension method for Where. But the previous test, where we enumerate the Deck in a foreach, passes. Well, we only wrote as much code to make that test pass as we needed to, and that only involved adding the GetEnumerator method to the class. We need to write more code to get this current test to pass, while keeping the previous test passing too.

This is easy to do by implementing IEnumerable<> on the Deck class:

public class Deck : IEnumerable<object>
{
    public IEnumerator<object> GetEnumerator()
    {
        foreach (var card in _cards)
        {
            yield return card;
        }
    }

    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}

I’ve cut some of the other code out of the class so that you can see just the detail of the implementation. The second, explicitly implemented IEnumerable.GetEnumerator is there because IEnumerable<> inherits from IEnumerable, so it must be implemented, but as you can see, we can just forward to the generically implemented method. With that done, we can now add using System.Linq; to the test class so that we can use the Where method.

var deck = new Deck();

var hearts = deck.Where(x => x.Suit == Suit.Hearts);

This is where the implementation starts getting a little more complicated than the actual tests. Obviously, in order to make the test pass, we need to add an actual Card class and give it a property which we can use to select the correct suit of cards.

public enum Suit
{
    Clubs,
    Diamonds,
    Hearts,
    Spades
}

public class Card
{
    public Suit Suit { get; set; }
}

After writing this, we can then change the enumerable implementation in the Deck class to public class Deck : IEnumerable<Card>, and the test will now compile. Now we can actually assert the intent of the test:

[Fact]
public void Should_BeAbleToSelectSuitOfCardsFromDeck()
{
    var deck = new Deck();

    var hearts = deck.Where(x => x.Suit == Suit.Hearts);

    hearts.Should().HaveCount(13);
}
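
For that HaveCount(13) assertion to ever go green, the Deck’s internal collection will need thirteen cards of each suit. As a sketch of one way the constructor might eventually do that - the Cards property is mine, purely to keep the sketch self-contained, and this isn’t necessarily the final design - we can cross the four suits with thirteen cards each:

```csharp
using System;
using System.Linq;

public enum Suit { Clubs, Diamonds, Hearts, Spades }

public class Card
{
    public Suit Suit { get; set; }
}

public class Deck
{
    private readonly Card[] _cards;

    public Deck()
    {
        // 4 suits x 13 cards per suit = 52 cards.
        _cards = Enum.GetValues(typeof(Suit))
                     .Cast<Suit>()
                     .SelectMany(suit => Enumerable.Range(1, 13)
                                                   .Select(_ => new Card { Suit = suit }))
                     .ToArray();
    }

    public Card[] Cards => _cards;
}
```

In the post’s design, of course, the cards come out through the IEnumerable<Card> implementation rather than a public array.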

Conclusion

In this post, I talked through several iterations of the TDD loop, based on the 3 Rules of TDD, in some detail. An interesting discussion that always rears its head at this point is: do you need to follow the 3 rules quite so excruciatingly religiously? I don’t really know the answer to that. Certainly I always had it in my head that I would need a Card class, and that it would necessitate a Suit enum, as these are pretty obvious things when thinking about the concept of a class which models a deck of cards. Could I have taken a short cut, written everything and then written the tests to test the implementation (as it stands)? Probably, for something so trivial.

In the next post, I will write some more tests to continue building the Deck class.

2017-11-28 00:00

2017-11-21

Introduction

I thought Freecell would make a fine basis for talking about Test Driven Development. It is a game which I enjoy playing. I have an app for it on my phone, and it’s been available on Windows for as long as I can remember, although I’m writing this on a Mac, which does not have a Freecell game by default.

The rules are fairly simple:

  • There is one standard deck of cards, shuffled.
  • There are four “Free” Cell piles, which may each have any one card stored in it.
  • There are four Foundation piles, one for each suit.
  • The cards are dealt face-up left-to-right into eight cascades
    • The cards must alternate in colour.
    • The result of the deal is that the first four cascades will have seven cards, the final four will have six cards.
  • The top-most card of a cascade begins a tableau.
  • A tableau must be built down by alternating colours.
  • A card in a cell may be moved onto a tableau, subject to the previous rule.
  • A tableau may be recursively moved onto another tableau, or to an empty cascade, only if there is enough free space in cells or empty cascades to use as intermediate locations.
  • The game is won when all four Foundation piles are built up in suit, Ace to King.
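
The deal sizes in the rules above are easy to sanity-check: dealing 52 cards face-up left-to-right means handing them out round-robin across the eight cascades, and 4 × 7 + 4 × 6 = 52. A quick sketch (illustrative only, not part of the engine):

```csharp
using System;

// Deal 52 cards left-to-right into 8 cascades, round-robin,
// and count how many land in each cascade.
var counts = new int[8];
for (var card = 0; card < 52; card++)
{
    counts[card % 8]++;
}

// The first four cascades end up with seven cards each,
// the last four with six: 4 * 7 + 4 * 6 = 52.
Console.WriteLine(string.Join(", ", counts)); // 7, 7, 7, 7, 6, 6, 6, 6
```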

These rules will form the basis of a Freecell Rules Engine. Note that we’re not interested in a UI at the moment.

This post is a follow-on from my previous post on how to set up a dotnet core environment for doing TDD.

red - first test

We know from the rules that we need a standard deck of cards to work with, so our initial test could assert that we can create an array, of some type that is yet to be determined, which has a length of 52.

[Fact]
public void Should_CreateAStandardDeckOfCards()
{
    var sut = new Deck();

}

There! Our first test. It fails (by not compiling). We’ve obeyed The 3 Laws of TDD: we’ve not written any production code, and we’ve only written enough of the unit test to make it fail. We can make the test pass by creating a Deck class in the Freecell.Engine project. Time for another commit.

green - it passes

It is trivial to make our first test pass, as all we need to do is create a new class in our Freecell.Engine project, and our test passes as it now compiles. We can prove this by instructing dotnet to run our unit tests for us:

nostromo:Freecell.Engine.Tests stuart$ dotnet watch xunit
watch : Started
Detecting target frameworks in Freecell.Engine.Tests.csproj...
Building for framework netcoreapp2.0...
  Freecell.Engine -> /Users/stuart/dev/freecell/Freecell.Engine/bin/Debug/netstandard2.0/Freecell.Engine.dll
  Freecell.Engine.Tests -> /Users/stuart/dev/freecell/Freecell.Engine.Tests/bin/Debug/netcoreapp2.0/Freecell.Engine.Tests.dll
Running .NET Core 2.0.0 tests for framework netcoreapp2.0...
xUnit.net Console Runner (64-bit .NET Core 4.6.00001.0)
  Discovering: Freecell.Engine.Tests
  Discovered:  Freecell.Engine.Tests
  Starting:    Freecell.Engine.Tests
  Finished:    Freecell.Engine.Tests
=== TEST EXECUTION SUMMARY ===
   Freecell.Engine.Tests  Total: 1, Errors: 0, Failed: 0, Skipped: 0, Time: 0.142s
watch : Exited
watch : Waiting for a file to change before restarting dotnet...

It is important to run dotnet xunit from within the test project folder; you can’t pass the path to the test project like you can with dotnet test. As you can see, I’ve also started the runner under dotnet watch, and it will now wait until I make and save a change before automatically compiling and running the tests.

red, green

This first unit test still doesn’t really test very much, and because we are obeying the 3 TDD rules, it forces us to think a little before we write any test code. Looking at the rules, I think we will probably want the ability to move through our deck of cards and to remove cards from the deck. With this in mind, the most logical thing to do is to make the Deck class enumerable. We can test for that by checking a Length property. Still in our first test, we can add this:

var sut = new Deck();

var length = sut.Length;

If I switch over to our dotnet watch window, we get the immediate feedback that this has failed:

Detecting target frameworks in Freecell.Engine.Tests.csproj...
Building for framework netcoreapp2.0...
  Freecell.Engine -> /Users/stuart/dev/freecell/Freecell.Engine/bin/Debug/netstandard2.0/Freecell.Engine.dll
DeckTests.cs(13,30): error CS1061: 'Deck' does not contain a definition for 'Length' and no extension method 'Length' accepting a first argument of type 'Deck' could be found (are you missing a using directive or an assembly reference?) [/Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj]
Build failed!
watch : Exited with error code 1
watch : Waiting for a file to change before restarting dotnet...

We have a pretty good idea that we’re going to make the Deck class enumerable, probably by making it implement IEnumerable<>; we could then add some sort of internal array to hold another type, probably a Card, and then write a bunch more code that would make our test pass.

But that would violate the 3rd rule, so instead, we simply add a Length property to the Deck class:

public class Deck 
{
    public int Length {get;}
}

This makes our test happy, because it compiles again. But it still doesn’t assert anything. Let’s fix that, and assert that the Length property has the value we would expect for a deck of cards, namely 52:

var sut = new Deck();

var length = sut.Length;

length.Should().Be(52);

The last line of the test asserts, through the use of FluentAssertions, that the Length property should be 52. I like FluentAssertions; I think it looks a lot cleaner than writing something like Assert.Equal(52, sut.Length), and it’s quite easy to read and understand: ‘Length’ should be 52. I love it. We can add it with the command dotnet add package FluentAssertions. Fix the using references in the test class so that it compiles, and then check our watch window:

Detecting target frameworks in Freecell.Engine.Tests.csproj...
Building for framework netcoreapp2.0...
  Freecell.Engine -> /Users/stuart/dev/freecell/Freecell.Engine/bin/Debug/netstandard2.0/Freecell.Engine.dll
  Freecell.Engine.Tests -> /Users/stuart/dev/freecell/Freecell.Engine.Tests/bin/Debug/netcoreapp2.0/Freecell.Engine.Tests.dll
Running .NET Core 2.0.0 tests for framework netcoreapp2.0...
xUnit.net Console Runner (64-bit .NET Core 4.6.00001.0)
  Discovering: Freecell.Engine.Tests
  Discovered:  Freecell.Engine.Tests
  Starting:    Freecell.Engine.Tests
    Freecell.Engine.Tests.DeckTests.Should_CreateAStandardDeckOfCards [FAIL]
      Expected value to be 52, but found 0.
      Stack Trace:
           at FluentAssertions.Execution.XUnit2TestFramework.Throw(String message)
           at FluentAssertions.Execution.AssertionScope.FailWith(String message, Object[] args)
           at FluentAssertions.Numeric.NumericAssertions`1.Be(T expected, String because, Object[] becauseArgs)
        /Users/stuart/dev/freecell/Freecell.Engine.Tests/DeckTests.cs(16,0): at Freecell.Engine.Tests.DeckTests.Should_CreateAStandardDeckOfCards()
  Finished:    Freecell.Engine.Tests
=== TEST EXECUTION SUMMARY ===
   Freecell.Engine.Tests  Total: 1, Errors: 0, Failed: 1, Skipped: 0, Time: 0.201s
watch : Exited with error code 1
watch : Waiting for a file to change before restarting dotnet...

Now, to make our test pass, we could again just start implementing IEnumerable<>, but that’s not TDD, and Uncle Bob might get upset with me. Instead, we will do the simplest thing that will make the test pass:

public class Deck
{
    public int Length { get { return new string[52].Length; } }
}

refactor

Now that we have a full test with an assertion that passes, we can move on to the refactor stage of the red/green/refactor TDD cycle. As it stands, our simple class passes the test, but we can see right away that newing up an array in the getter of the Length property is not going to serve our interests well in the long run, so we should do something about that. Making the array a member variable seems the most logical thing to do at the moment, so we’ll do that. We don’t need to make any changes to our test during the refactor stage; if we did, that would be a design smell indicating that something is wrong.

public class Deck
{
    private const int _size = 52;
    private string[] _cards = new string[_size];
    public int Length { get { return _cards.Length; } }
}

Conclusion

In this post, we’ve fleshed out our Deck class a little more, and gone through the full red/green/refactor TDD cycle. I also introduced FluentAssertions, and showed the output from the watch window as the test failed.

2017-11-21 00:00

2017-11-20

As the first step in my new game project The Deadlings, I've put together a little starter project that combines the Farseer 3.1 physics library with the GameStateManagement sample from Microsoft, all built on XNA 4.0.  You can download it at thedeadlings.com.

2017-11-20 19:30

TouchArcade wrote a wonderful review of our game Rogue Runner today. After 2 months and 5 updates, it's wonderful to see our little game take off.

2017-11-20 19:30

I am now offering my services as a freelance iOS developer. Need an app developed for iPhone, iPod Touch or iPad?  Get in touch with me via Glowdot Productions, Inc. You can hire a guy who's had apps in almost every chart category on the app store, been in the top 50 ...

2017-11-20 19:30

2017-11-14

Introduction

In a future post, I’m going to write about Test Driven Development, with the aim of writing a Freecell clone. In this post I’ll walk through setting up a dotnet core solution with a class library which will hold the Freecell rules engine, a class library for our unit tests, and show how to set up an environment for immediate feedback, which is one of the key benefits of TDD. I’ll also demonstrate using some basic git commands to set up our source control.

As you’ll notice from the command line output below, I’m doing all this on a Mac, but things should not be any different if you are following along on Linux. Or even Windows.

dotnet new

We need to new up two projects: one for our rules engine; one for the tests. It is a good idea to keep the unit tests separate from the code under test - in a real-world application you really do not want test code to get mixed in with production code.

nostromo:dev stuart$ mkdir freecell
nostromo:dev stuart$ dotnet new classlib -o freecell/Freecell.Engine -n Freecell.Engine
The template "Class library" was created successfully.

Processing post-creation actions...
Running 'dotnet restore' on freecell/Freecell.Engine/Freecell.Engine.csproj...
  Restoring packages for /Users/stuart/dev/freecell/Freecell.Engine/Freecell.Engine.csproj...
  Generating MSBuild file /Users/stuart/dev/freecell/Freecell.Engine/obj/Freecell.Engine.csproj.nuget.g.props.
  Generating MSBuild file /Users/stuart/dev/freecell/Freecell.Engine/obj/Freecell.Engine.csproj.nuget.g.targets.
  Restore completed in 133.35 ms for /Users/stuart/dev/freecell/Freecell.Engine/Freecell.Engine.csproj.


Restore succeeded.

The command dotnet new classlib instructs the framework to create a new class library. The -o option allows an output directory to be specified, and -n allows the project name to be specified. If you don’t specify these options, the project will be created in, and named after, the current folder. You can see more details on the command in Microsoft’s documentation.

Then create the second project to hold the unit tests. I like to use xUnit, and the dotnet framework team do too. It’s pretty telling that they use xUnit instead of MSTest - which was exactly the basis of my argument when I moved a team from MSTest to xUnit last year.

nostromo:dev stuart$ dotnet new xunit -o freecell/Freecell.Engine.Tests -n Freecell.Engine.Tests
The template "xUnit Test Project" was created successfully.

...

Restore succeeded.

We should also add a reference from our test project to the Freecell.Engine project, as that is what contains the code we want to test.

nostromo:freecell stuart$ cd Freecell.Engine.Tests/
nostromo:Freecell.Engine.Tests stuart$ dotnet add reference ../Freecell.Engine/Freecell.Engine.csproj 
Reference `..\Freecell.Engine\Freecell.Engine.csproj` added to the project.

With that all done, now is a good time to initialise a git repository to hold the code and make the first commit.

nostromo:dev stuart$ cd freecell/
nostromo:freecell stuart$ git init
Initialized empty Git repository in /Users/stuart/dev/freecell/.git/
nostromo:freecell stuart$ git add --all
nostromo:freecell stuart$ git commit -m "Initial commit"
[master (root-commit) 2cc150c] Initial commit
 12 files changed, 6025 insertions(+)
 create mode 100644 Freecell.Engine.Tests/Freecell.Engine.Tests.csproj
 create mode 100644 Freecell.Engine.Tests/UnitTest1.cs
 create mode 100644 Freecell.Engine.Tests/obj/Freecell.Engine.Tests.csproj.nuget.cache
 create mode 100644 Freecell.Engine.Tests/obj/Freecell.Engine.Tests.csproj.nuget.g.props
 create mode 100644 Freecell.Engine.Tests/obj/Freecell.Engine.Tests.csproj.nuget.g.targets
 create mode 100644 Freecell.Engine.Tests/obj/project.assets.json
 create mode 100644 Freecell.Engine/Class1.cs
 create mode 100644 Freecell.Engine/Freecell.Engine.csproj
 create mode 100644 Freecell.Engine/obj/Freecell.Engine.csproj.nuget.cache
 create mode 100644 Freecell.Engine/obj/Freecell.Engine.csproj.nuget.g.props
 create mode 100644 Freecell.Engine/obj/Freecell.Engine.csproj.nuget.g.targets
 create mode 100644 Freecell.Engine/obj/project.assets.json
nostromo:freecell stuart$ 

dotnet new sln

Although it doesn’t matter to me, as I’m coding this on a Mac using Visual Studio Code, for everyone’s convenience we should add a solution file. This will also help later on when it comes to talking about build scripts and using Continuous Integration, as it’s usually easier to target a single solution file for building all the projects.

nostromo:freecell stuart$ dotnet new sln -n Freecell.Engine
The template "Solution File" was created successfully.
nostromo:freecell stuart$ dotnet sln add Freecell.Engine/Freecell.Engine.csproj 
Project `Freecell.Engine/Freecell.Engine.csproj` added to the solution.
nostromo:freecell stuart$ dotnet sln add Freecell.Engine.Tests/Freecell.Engine.Tests.csproj 
Project `Freecell.Engine.Tests/Freecell.Engine.Tests.csproj` added to the solution.

dotnet xUnit

I’m also going to start using the dotnet xunit command which is available to us, but this isn’t (currently) as straightforward as it perhaps will become. Firstly, we need to update the version of xUnit which the dotnet new xunit command installed into the project, as it’s still 2.2.0, and to use dotnet xunit the two need to be the same version. Secondly, there isn’t yet a dotnet-cli command to update packages, but you can achieve the same thing by adding an already existing package; if you don’t specify a version, it will be updated to the latest one. Why they don’t just add a dotnet update package --all command beats me.

If the version numbers have changed since this post was written/published, don’t worry. All you need to do is make sure that the xUnit package and the dotnet-xunit command package are the same versions. You can’t really go wrong, as the dotnet xunit command will tell you if there is a version mismatch.

nostromo:freecell stuart$ cd Freecell.Engine.Tests/
nostromo:Freecell.Engine.Tests stuart$ dotnet add package xunit
  Writing /var/folders/xc/xshvfj214z18xn0t5y1vzty80000gn/T/tmpr93zFG.tmp
info : Adding PackageReference for package 'xunit' into project '/Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj'.
log  : Restoring packages for /Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj...
info :   CACHE https://api.nuget.org/v3-flatcontainer/xunit/index.json
info : Package 'xunit' is compatible with all the specified frameworks in project '/Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj'.
info : PackageReference for package 'xunit' version '2.3.1' updated in file '/Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj'.
nostromo:Freecell.Engine.Tests stuart$ 

With that done, we can now add the dotnet-xunit cli command package, and start using it:

nostromo:Freecell.Engine.Tests stuart$ dotnet add package dotnet-xunit
  Writing /var/folders/xc/xshvfj214z18xn0t5y1vzty80000gn/T/tmp6wUvtG.tmp
info : Adding PackageReference for package 'dotnet-xunit' into project '/Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj'.
log  : Restoring packages for /Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj...
info :   GET https://api.nuget.org/v3-flatcontainer/dotnet-xunit/index.json
info :   OK https://api.nuget.org/v3-flatcontainer/dotnet-xunit/index.json 639ms
info : Package 'dotnet-xunit' is compatible with all the specified frameworks in project '/Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj'.
info : PackageReference for package 'dotnet-xunit' version '2.3.1' added to file '/Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj'.
nostromo:Freecell.Engine.Tests stuart$ dotnet xunit
No executable found matching command "dotnet-xunit"
nostromo:Freecell.Engine.Tests stuart$ 

Hang on just a minute; the computer is lying to me. I clearly just added the dotnet-xunit package, which provides the dotnet xunit command. What gives? Well, the gotcha here is that the .csproj needs to be updated and told that the dotnet-xunit package is a special and unique snowflake: instead of PackageReference, it needs to be DotNetCliToolReference. To be fair, this is documented in the xUnit documentation, and I think it is something that will probably become automatic in the future. For the time being, we have to do it ourselves. If we now run dotnet xunit again:
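
Concretely, the change is from the PackageReference element that dotnet add package wrote into the .csproj to a DotNetCliToolReference element (version number as it was at the time of writing):

```xml
<!-- What dotnet add package produced: -->
<PackageReference Include="dotnet-xunit" Version="2.3.1" />

<!-- What dotnet xunit actually needs: -->
<DotNetCliToolReference Include="dotnet-xunit" Version="2.3.1" />
```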

nostromo:Freecell.Engine.Tests stuart$ dotnet xunit
Detecting target frameworks in Freecell.Engine.Tests.csproj...
Building for framework netcoreapp2.0...
  Freecell.Engine -> /Users/stuart/dev/freecell/Freecell.Engine/bin/Debug/netstandard2.0/Freecell.Engine.dll
  Freecell.Engine.Tests -> /Users/stuart/dev/freecell/Freecell.Engine.Tests/bin/Debug/netcoreapp2.0/Freecell.Engine.Tests.dll
Running .NET Core 2.0.0 tests for framework netcoreapp2.0...
xUnit.net Console Runner (64-bit .NET Core 4.6.00001.0)
  Discovering: Freecell.Engine.Tests
  Discovered:  Freecell.Engine.Tests
  Starting:    Freecell.Engine.Tests
  Finished:    Freecell.Engine.Tests
=== TEST EXECUTION SUMMARY ===
   Freecell.Engine.Tests  Total: 1, Errors: 0, Failed: 0, Skipped: 0, Time: 0.156s
nostromo:Freecell.Engine.Tests stuart$ 

As you can see, we get much nicer output than if we just used the standard dotnet test command. Using this command also has the added benefit of being able to produce XML output which can be consumed by a CI server to show details about the unit tests, but that isn’t something I’m going to get into just yet.

I’m also going to update the xUnit Visual Studio runner now as well, as it is required for VS Code to debug our tests, which will come in handy later on. Executing dotnet add package xunit.runner.visualstudio does this for us.

dotnet watch

I am a big fan of NCrunch, and the rapid, immediate feedback it provides when coding in Visual Studio. Sadly, it’s not available for Visual Studio Code, or indeed for macOS, so in order to replicate the functionality it provides, we can make a few tweaks to our test project and watch our code for changes, which are then automatically compiled and the tests run. To get this NCrunch-like functionality, we need to add the dotnet watch cli command. This is fairly straightforward.

nostromo:Freecell.Engine.Tests stuart$ dotnet add package Microsoft.DotNet.Watcher.Tools
  Writing /var/folders/xc/xshvfj214z18xn0t5y1vzty80000gn/T/tmpFpRFyo.tmp
info : Adding PackageReference for package 'Microsoft.DotNet.Watcher.Tools' into project '/Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj'.
log  : Restoring packages for /Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj...
info :   GET https://api.nuget.org/v3-flatcontainer/microsoft.dotnet.watcher.tools/index.json
info :   OK https://api.nuget.org/v3-flatcontainer/microsoft.dotnet.watcher.tools/index.json 1418ms
info : Package 'Microsoft.DotNet.Watcher.Tools' is compatible with all the specified frameworks in project '/Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj'.
info : PackageReference for package 'Microsoft.DotNet.Watcher.Tools' version '2.0.0' added to file '/Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj'.
nostromo:Freecell.Engine.Tests stuart$ dotnet watch xunit
Version for package `Microsoft.DotNet.Watcher.Tools` could not be resolved.
nostromo:Freecell.Engine.Tests stuart$ dotnet restore
  Restoring packages for /Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj...
  Restoring packages for /Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj...
  Restore completed in 13.12 ms for /Users/stuart/dev/freecell/Freecell.Engine/Freecell.Engine.csproj.
  Restore completed in 26.52 ms for /Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj.
  Restore completed in 148.11 ms for /Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj.
  Restore completed in 393.99 ms for /Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj.

Make sure you remember to make the same PackageReference-to-DotNetCliToolReference edit to the .csproj file again, so that dotnet understands that this too is a CLI command. This is kind of the opposite of the way Hanselman showed it, but it achieves the same end goal.

Now we can watch our unit test code for changes:

nostromo:Freecell.Engine.Tests stuart$ dotnet watch xunit
watch : Started
Detecting target frameworks in Freecell.Engine.Tests.csproj...
Building for framework netcoreapp2.0...
  Freecell.Engine -> /Users/stuart/dev/freecell/Freecell.Engine/bin/Debug/netstandard2.0/Freecell.Engine.dll
  Freecell.Engine.Tests -> /Users/stuart/dev/freecell/Freecell.Engine.Tests/bin/Debug/netcoreapp2.0/Freecell.Engine.Tests.dll
Running .NET Core 2.0.0 tests for framework netcoreapp2.0...
xUnit.net Console Runner (64-bit .NET Core 4.6.00001.0)
  Discovering: Freecell.Engine.Tests
  Discovered:  Freecell.Engine.Tests
  Starting:    Freecell.Engine.Tests
  Finished:    Freecell.Engine.Tests
=== TEST EXECUTION SUMMARY ===
   Freecell.Engine.Tests  Total: 1, Errors: 0, Failed: 0, Skipped: 0, Time: 0.147s
watch : Exited
watch : Waiting for a file to change before restarting dotnet...

Conclusion

In this post I have walked through setting up a class library and unit test library using dotnet core, how to create a solution file and add the projects to it and how an immediate feedback cycle for TDD can be setup in a fairly easy and straightforward manner. I also demonstrated some basic git usage and initialised a repository for the code.

2017-11-14 00:00

2017-11-02

Introduction

Binary search is the classic search algorithm, and I remember implementing it in C at University. As an experiment I’m going to implement it in C# to see if the line of business applications I usually build have rotted my brain.

Algorithm

As Wikipedia explains, Binary Search follows this procedure:

Given an array A of n elements with values or records A0, A1, …, An−1, sorted such that A0 ≤ A1 ≤ … ≤ An−1, and target value T, the following subroutine uses binary search to find the index of T in A.

  1. Set L to 0 and R to n − 1.
  2. If L > R, the search terminates as unsuccessful.
  3. Set m (the position of the middle element) to the floor (the largest previous integer) of (L + R) / 2.
  4. If Am < T, set L to m + 1 and go to step 2.
  5. If Am > T, set R to m − 1 and go to step 2.
  6. Now Am = T, the search is done; return m.

This is actually Knuth’s algorithm, from The Art of Computer Programming as stated in the footnote on the Wikipedia article.

Implementation

It’s worth noting that this is merely a fun exercise; .net has an implementation in Array.BinarySearch which is much better than the one below, and I would always use that instead.

It’s also worth mentioning that I’m cheating a little bit by assuming that the array is already sorted, and that my implementation only works on ints.
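
For comparison, here is the built-in Array.BinarySearch mentioned above; it works on any comparable element type (though it also assumes the array is sorted). One behavioural difference from my implementation: when the term is not found, it returns the bitwise complement of the index at which the term would be inserted, rather than -1:

```csharp
using System;

var integers = new[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };

// Found: BinarySearch returns the zero-based index of the term.
Console.WriteLine(Array.BinarySearch(integers, 6));   // 5

// Not found: the return value is negative - the bitwise complement
// of the index where the term would be inserted (~10 == -11 here).
Console.WriteLine(Array.BinarySearch(integers, 11));  // -11
```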

My implementation

public class BinarySearch
{
    private int[] _array;

    public BinarySearch(int[] array) => _array = array;

    public int Search(int term)
    {
        var l = 0;
        var r = _array.Length - 1;

        while (l <= r)
        {
            var mid = l + (r - l) / 2; // equivalent to (l + r) / 2, but avoids int overflow on very large arrays

            if(_array[mid] < term)
            {
                l = mid+1;
            }
            else if (_array[mid] > term)
            {
                r = mid - 1;
            }
            else
            {
                return mid;
            }
        }

        return -1;
    }
}

Console runner

Here is the console runner:

class Program
{
    static string _message = "Found term {0} at position {1}";

    static void Main(string[] args)
    {
        var integers = new []{1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
        var searcher = new BinarySearch(integers);

        var result = searcher.Search(11);
        Console.WriteLine(_message, 11, result);

        result = searcher.Search(0);
        Console.WriteLine(_message, 0, result);

        result = searcher.Search(6);
        Console.WriteLine(_message, 6, result);
    }
}

Here is the output:

nostromo:sandbox stuart$ dotnet run
Found term 11 at position -1
Found term 0 at position -1
Found term 6 at position 5
nostromo:sandbox stuart$

2017-11-02 00:00

2017-10-17

I bought a California king from Zinus, they sent me two of them, and didn’t want the second one back. I paid less than 300, and now my parents have a free mattress too. One gripe was the lack of warnings on the box. I accidentally punctured the plastic trying to drag the package into my bedroom and one puncture was like a deploy mechanism that left me trapped in a door way as the mattress slowly inflated on me. We laughed about it, but I’m a large man, I can only imagine how a small person or child would have dealt with that situation.

oops!!

by Factor Mystic at 2017-10-17 20:38

2017-05-22

Since I started working as a programmer, I’ve always taken notes in meetings, and jotted things down during the day to remember, but these were all usually on an A4 notepad, which I’ve always used as a daily scratch pad, and until recently I have never kept a proper journal which I could refer back to at a later time.

A colleague of mine with whom I have been working on a project together, has for a long time kept a development journal, or diary, of things that have happened in his work day. Examples are:

  • Meeting notes, who was present, salient points of what was said and what was agreed
  • Design ideas, diagrams, pseudo code
  • Technical notes on gotchas in the language and application
  • Noteworthy events connected to the project

The event which got me interested in his note keeping was one day when the Produce Owner made a decision about the scope and importance of a particular feature. The colleague in question looked back over his notes and was able to prove that the PO had made a different decision about the same thing a few weeks before. When things like this happened a few more times, I really sat up and started to ask some questions.

“If it isn’t written down, it didn’t happen”, was the response.

If you write it down, you are more likely to remember it. There is a large body of evidence that suggests that the simple act of physically writing notes helps aid memory retention. There are a lot of articles and blogs about this subject, but I’d never paid it much attention. After all, I’d kept enough notes when at school and university, and I didn’t think I need to when I was working.

I could not have been more wrong.

So I started taking notes. I got an A4 lined hardback notebook and started writing stuff down.

And: It works.

So for example, I can tell you who made the decision in a meeting thirteen months ago which meant a feature in the application was developed in a particular way which now makes it upsettingly difficult to modify that part of the application. I can produce my design notes from six months ago where I planned the refactoring of some functionality, and the implications of said work, and which developers on the project I’d talked it through with to get some sanity checking that what I was proposing wasn’t stupid. I can tell you who brought cakes in on a particular day last month and who said which particular funny thing last week that is now part of the project slang.

What I’ve found is that if you write stuff down during the day, about what you are doing, it helps you remember (like the research says it will), and makes you think about what you have done already, and what you need to do in the future. This is all stuff that is required for a Scrum stand-up, if you have to do those. It also provides amunition for those of you who have to suffer through the dreaded annual performance appraisal; or, helps remind you what to list on the invoice when billing for the hours you’ve worked.

Lastly, it helps (me, anyway) remember what I was doing on my latest pet project that I haven’t touched for eight months. Which I should get back to.

2017-05-22 00:00

2017-05-20

As clever geeks we are sort of protected form the ‘big bad world’ by living in expenses cities and suburbs, working with other geeks and getting paid well.

In prison I guess that counts for nothing.

¯\_(ツ)_/¯ I guess!! 

by Factor Mystic at 2017-05-20 00:23

2017-02-18

Make a public connection private (source):

Set-NetConnectionProfile -InterfaceAlias <interface name> -NetworkCategory Private

Rename the adapter (source):

Get-NetAdapter -Name Ethernet | Rename-NetAdapter -NewName Renamed

I’m posting this so I can find it again next time I forget

by Factor Mystic at 2017-02-18 21:48

2014-01-29

A few months ago I left a busy startup job I’d had for over a year. The work was engrossing: I stopped blogging, but I was programming every day. I learned a completely new language, but got plenty of chances to use my existing knowledge. That is, after all, why they hired me.

dilbert

I especially liked something that might seem boring: combing through logs of occasional server errors and modifying our code to avoid them. Maybe it was because I had setup the monitoring system. Or because I was manually deleting servers that had broken in new ways. The economist in me especially liked putting a dollar value on bugs of this nature: 20 useless servers cost an extra 500 dollars a week on AWS.

But, there’s only so much waste like this to clean up. I’d automated most of the manual work I was doing and taught a few interns how to do the rest. I spent two weeks openly wondering what I’d do after finishing my current project, even questioning whether I’d still be useful with the company’s new direction.

fireme
Career Tip: don’t do this.

That’s when we agreed to part ways. So, there I was, no “official” job but still a ton of things to keep me busy. I’d help run a chain of Hacker Hostels in Silicon Valley, I was still maintaining Wine as an Ubuntu developer, and I was still a “politician” on Ubuntu’s Community Council having weekly meetings with Mark Shuttleworth.

Politiking, business management, and even Ubuntu packaging, however, aren’t programming. I just wasn’t doing it anymore, until last week. I got curious about counting my users on Launchpad. Download counts are exposed by an API, but not viewable on any webpage. No one else had written a proper script to harvest that data. It was time to program.

fuckshitdamn

And man, I went a little nuts. It was utterly engrossing, in the way that writing and video games used to be. I found myself up past 3am before I even noticed the time; I’d spent a whole day just testing and coding before finally putting it on github. I rationalized my need to make it good as a service to others who’d use it. But in truth I just liked doing it.

It didn’t stop there. I started looking around for programming puzzles. I wrote 4 lines of python that I thought were so neat they needed to be posted as a self-answered question on stack overflow. I literally thought they were beautiful, and using the new yield from feature in Python3 was making me inordinately happy.

And now, I’m writing again. And making terrible cartoons on my penboard. I missed this shit. It’s fucking awesome.

by YokoZar at 2014-01-29 02:46

2013-02-08

Lock’n'Roll, a Pidgin plugin for Windows designed to set an away status message when the PC is locked, has received its first update in three and a half years!

Daniel Laberge has forked the project and released a version 1.2 update which allows you to specify which status should be set when the workstation locks. Get it while it’s awesome (always)!

by Chris at 2013-02-08 03:56

2012-01-07

How do you generate the tangent vectors, which represent which way the texture axes on a textured triangle, are facing?

Hitting up Google tends to produce articles like this one, or maybe even that exact one. I've seen others linked too, the basic formulae tend to be the same. Have you looked at what you're pasting into your code though? Have you noticed that you're using the T coordinates to calculate the S vector, and vice versa? Well, you can look at the underlying math, and you'll find that it's because that's what happens when you assume the normal, S vector, and T vectors form an orthonormal matrix and attempt to invert it, in a sense you're not really using the S and T vectors but rather vectors perpendicular to them.

But that's fine, right? I mean, this is an orthogonal matrix, and they are perpendicular to each other, right? Well, does your texture project on to the triangle with the texture axes at right angles to each other, like a grid?


... Not always? Well, you might have a problem then!

So, what's the real answer?

Well, what do we know? First, translating the vertex positions will not affect the axial directions. Second, scrolling the texture will not affect the axial directions.

So, for triangle (A,B,C), with coordinates (x,y,z,t), we can create a new triangle (LA,LB,LC) and the directions will be the same:

We also know that both axis directions are on the same plane as the points, so to resolve that, we can to convert this into a local coordinate system and force one axis to zero.



Now we need triangle (Origin, PLB, PLC) in this local coordinate space. We know PLB[y] is zero since LB was used as the X axis.


Now we can solve this. Remember that PLB[y] is zero, so...


Do this for both axes and you have your correct texture axis vectors, regardless of the texture projection. You can then multiply the results by your tangent-space normalmap, normalize the result, and have a proper world-space surface normal.

As always, the source code spoilers:

terVec3 lb = ti->points[1] - ti->points[0];
terVec3 lc = ti->points[2] - ti->points[0];
terVec2 lbt = ti->texCoords[1] - ti->texCoords[0];
terVec2 lct = ti->texCoords[2] - ti->texCoords[0];

// Generate local space for the triangle plane
terVec3 localX = lb.Normalize2();
terVec3 localZ = lb.Cross(lc).Normalize2();
terVec3 localY = localX.Cross(localZ).Normalize2();

// Determine X/Y vectors in local space
float plbx = lb.DotProduct(localX);
terVec2 plc = terVec2(lc.DotProduct(localX), lc.DotProduct(localY));

terVec2 tsvS, tsvT;

tsvS[0] = lbt[0] / plbx;
tsvS[1] = (lct[0] - tsvS[0]*plc[0]) / plc[1];
tsvT[0] = lbt[1] / plbx;
tsvT[1] = (lct[1] - tsvT[0]*plc[0]) / plc[1];

ti->svec = (localX*tsvS[0] + localY*tsvS[1]).Normalize2();
ti->tvec = (localX*tsvT[0] + localY*tsvT[1]).Normalize2();


There's an additional special case to be aware of: Mirroring.

Mirroring across an edge can cause wild changes in a vector's direction, possibly even degenerating it. There isn't a clear-cut solution to these, but you can work around the problem by snapping the vector to the normal, effectively cancelling it out on the mirroring edge.

Personally, I check the angle between the two vectors, and if they're more than 90 degrees apart, I cancel them, otherwise I merge them.

by OneEightHundred (noreply@blogger.com) at 2012-01-07 21:23

2011-12-07

Valve's self-shadowing radiosity normal maps concept can be used with spherical harmonics in approximately the same way: Integrate a sphere based on how much light will affect a sample if incoming from numerous sample direction, accounting for collision with other samples due to elevation.

You can store this as three DXT1 textures, though you can improve quality by packing channels with similar spatial coherence. Coefficients 0, 2, and 6 in particular tend to pack well, since they're all dominated primarily by directions aimed perpendicular to the texture.

I use the following packing:
Texture 1: Coefs 0, 2, 6
Texture 2: Coefs 1, 4, 5
Texture 3: Coefs 3, 7, 8

You can reference an early post on this blog for code on how to rotate a SH vector by a matrix, in turn allowing you to get it into texture space. Once you've done that, simply multiply each SH coefficient from the self-shadowing map by the SH coefficients created from your light source (also covered on the previous post) and add together.

by OneEightHundred (noreply@blogger.com) at 2011-12-07 15:39

2011-12-02

Spherical harmonics seems to have some impenetrable level of difficulty, especially among the indie scene which has little to go off of other than a few presentations and whitepapers, some of which even contain incorrect information (i.e. one of the formulas in the Sony paper on the topic is incorrect), and most of which are still using ZYZ rotations because it's so hard to find how to do a matrix rotation.

Hao Chen and Xinguo Liu did a presentation at SIGGRAPH '08 and the slides from it contain a good deal of useful stuff, nevermind one of the ONLY easy-to-find rotate-by-matrix functions. It also treats the Z axis a bit awkwardly, so I patched the rotation code up a bit, and a pre-integrated cosine convolution filter so you can easily get SH coefs for directional light.

There was also gratuitous use of sqrt(3) multipliers, which can be completely eliminated by simply premultiplying or predividing coef #6 by it, which incidentally causes all of the constants and multipliers to resolve to rational numbers.

As always, you can include multiple lights by simply adding the SH coefs for them together. If you want specular, you can approximate a directional light by using the linear component to determine the direction, and constant component to determine the color. You can do this per-channel, or use the average values to determine the direction and do it once.

Here are the spoilers:

#define SH_AMBIENT_FACTOR   (0.25f)
#define SH_LINEAR_FACTOR (0.5f)
#define SH_QUADRATIC_FACTOR (0.3125f)

void LambertDiffuseToSHCoefs(const terVec3 &dir, float out[9])
{
// Constant
out[0] = 1.0f * SH_AMBIENT_FACTOR;

// Linear
out[1] = dir[1] * SH_LINEAR_FACTOR;
out[2] = dir[2] * SH_LINEAR_FACTOR;
out[3] = dir[0] * SH_LINEAR_FACTOR;

// Quadratics
out[4] = ( dir[0]*dir[1] ) * 3.0f*SH_QUADRATIC_FACTOR;
out[5] = ( dir[1]*dir[2] ) * 3.0f*SH_QUADRATIC_FACTOR;
out[6] = ( 1.5f*( dir[2]*dir[2] ) - 0.5f ) * SH_QUADRATIC_FACTOR;
out[7] = ( dir[0]*dir[2] ) * 3.0f*SH_QUADRATIC_FACTOR;
out[8] = 0.5f*( dir[0]*dir[0] - dir[1]*dir[1] ) * 3.0f*SH_QUADRATIC_FACTOR;
}


void RotateCoefsByMatrix(float outCoefs[9], const float pIn[9], const terMat3x3 &rMat)
{
// DC
outCoefs[0] = pIn[0];

// Linear
outCoefs[1] = rMat[1][0]*pIn[3] + rMat[1][1]*pIn[1] + rMat[1][2]*pIn[2];
outCoefs[2] = rMat[2][0]*pIn[3] + rMat[2][1]*pIn[1] + rMat[2][2]*pIn[2];
outCoefs[3] = rMat[0][0]*pIn[3] + rMat[0][1]*pIn[1] + rMat[0][2]*pIn[2];

// Quadratics
outCoefs[4] = (
( rMat[0][0]*rMat[1][1] + rMat[0][1]*rMat[1][0] ) * ( pIn[4] )
+ ( rMat[0][1]*rMat[1][2] + rMat[0][2]*rMat[1][1] ) * ( pIn[5] )
+ ( rMat[0][2]*rMat[1][0] + rMat[0][0]*rMat[1][2] ) * ( pIn[7] )
+ ( rMat[0][0]*rMat[1][0] ) * ( pIn[8] )
+ ( rMat[0][1]*rMat[1][1] ) * ( -pIn[8] )
+ ( rMat[0][2]*rMat[1][2] ) * ( 3.0f*pIn[6] )
);

outCoefs[5] = (
( rMat[1][0]*rMat[2][1] + rMat[1][1]*rMat[2][0] ) * ( pIn[4] )
+ ( rMat[1][1]*rMat[2][2] + rMat[1][2]*rMat[2][1] ) * ( pIn[5] )
+ ( rMat[1][2]*rMat[2][0] + rMat[1][0]*rMat[2][2] ) * ( pIn[7] )
+ ( rMat[1][0]*rMat[2][0] ) * ( pIn[8] )
+ ( rMat[1][1]*rMat[2][1] ) * ( -pIn[8] )
+ ( rMat[1][2]*rMat[2][2] ) * ( 3.0f*pIn[6] )
);

outCoefs[6] = (
( rMat[2][1]*rMat[2][0] ) * ( pIn[4] )
+ ( rMat[2][2]*rMat[2][1] ) * ( pIn[5] )
+ ( rMat[2][0]*rMat[2][2] ) * ( pIn[7] )
+ 0.5f*( rMat[2][0]*rMat[2][0] ) * ( pIn[8])
+ 0.5f*( rMat[2][1]*rMat[2][1] ) * ( -pIn[8])
+ 1.5f*( rMat[2][2]*rMat[2][2] ) * ( pIn[6] )
- 0.5f * ( pIn[6] )
);

outCoefs[7] = (
( rMat[0][0]*rMat[2][1] + rMat[0][1]*rMat[2][0] ) * ( pIn[4] )
+ ( rMat[0][1]*rMat[2][2] + rMat[0][2]*rMat[2][1] ) * ( pIn[5] )
+ ( rMat[0][2]*rMat[2][0] + rMat[0][0]*rMat[2][2] ) * ( pIn[7] )
+ ( rMat[0][0]*rMat[2][0] ) * ( pIn[8] )
+ ( rMat[0][1]*rMat[2][1] ) * ( -pIn[8] )
+ ( rMat[0][2]*rMat[2][2] ) * ( 3.0f*pIn[6] )
);

outCoefs[8] = (
( rMat[0][1]*rMat[0][0] - rMat[1][1]*rMat[1][0] ) * ( pIn[4] )
+ ( rMat[0][2]*rMat[0][1] - rMat[1][2]*rMat[1][1] ) * ( pIn[5] )
+ ( rMat[0][0]*rMat[0][2] - rMat[1][0]*rMat[1][2] ) * ( pIn[7] )
+0.5f*( rMat[0][0]*rMat[0][0] - rMat[1][0]*rMat[1][0] ) * ( pIn[8] )
+0.5f*( rMat[0][1]*rMat[0][1] - rMat[1][1]*rMat[1][1] ) * ( -pIn[8] )
+0.5f*( rMat[0][2]*rMat[0][2] - rMat[1][2]*rMat[1][2] ) * ( 3.0f*pIn[6] )
);
}


... and to sample it in the shader ...


float3 SampleSHQuadratic(float3 dir, float3 shVector[9])
{
float3 ds1 = dir.xyz*dir.xyz;
float3 ds2 = dir*dir.yzx; // xy, zy, xz

float3 v = shVector[0];

v += dir.y * shVector[1];
v += dir.z * shVector[2];
v += dir.x * shVector[3];

v += ds2.x * shVector[4];
v += ds2.y * shVector[5];
v += (ds1.z * 1.5 - 0.5) * shVector[6];
v += ds2.z * shVector[7];
v += (ds1.x - ds1.y) * 0.5 * shVector[8];

return v;
}


For Monte Carlo integration, take sampling points, feed direction "dir" to the following function to get multipliers for each coefficient, then multiply by the intensity in that direction. Divide the total by the number of sampling points:


void SHForDirection(const terVec3 &dir, float out[9])
{
// Constant
out[0] = 1.0f;

// Linear
out[1] = dir[1] * 3.0f;
out[2] = dir[2] * 3.0f;
out[3] = dir[0] * 3.0f;

// Quadratics
out[4] = ( dir[0]*dir[1] ) * 15.0f;
out[5] = ( dir[1]*dir[2] ) * 15.0f;
out[6] = ( 1.5f*( dir[2]*dir[2] ) - 0.5f ) * 5.0f;
out[7] = ( dir[0]*dir[2] ) * 15.0f;
out[8] = 0.5f*( dir[0]*dir[0] - dir[1]*dir[1] ) * 15.0f;
}


... and finally, for a uniformly-distributed random point on a sphere ...


terVec3 RandomDirection(int (*randomFunc)(), int randMax)
{
float u = (((float)randomFunc()) / (float)(randMax - 1))*2.0f - 1.0f;
float n = sqrtf(1.0f - u*u);

float theta = 2.0f * M_PI * (((float)randomFunc()) / (float)(randMax));

return terVec3(n * cos(theta), n * sin(theta), u);
}

by OneEightHundred (noreply@blogger.com) at 2011-12-02 09:22

2011-12-01

Fresh install on OS X of ColdFusion Bulder 2 (TWO, the SECOND one). Typing a simple conditional, this is what I was given:



I also had to manually write the closing cfif tag. It's such a joke.

The absolute core purpose of an IDE is to be a text editor. Secondary to that are other features that are supposed to make you work better. ColdFusion Builder 2 (TWO!!!!!) completely fails on all levels as a text editor. It doesn't even function as well as notepad.exe!

Text search is finicky, Find & Replace is completely broken half the time, the UI is often unresponsive (yay Eclipse), the text cursor sometimes disappears, double-clicking folders or files in an FTP view pops up the Rename dialog every time, HTML / CF tag completion usually doesn't happen, indention is broken, function parameter tooltips obscure the place you are typing, # and " completion randomly breaks (often leaving you with a ###)...the list goes on and on.

Adobe has a big feature list on their site. I'm thinking maybe they should go back and use some resources to fix the parts where you type things into the computer, you know, the whole point of the thing.

by Ted (noreply@blogger.com) at 2011-12-01 15:14

2011-10-18

Has it really been a year since the last update?

Well, things have been chugging along with less discovery and more actual work. However, development on TDP is largely on hold due to the likely impending release of the Doom 3 source code, which has numerous architectural improvements like rigid-body physics and much better customization of entity networking.


In the meantime, however, a component of TDP has been spun off into its own project: The RDX extension language. Initially planned as a resource manager, it has evolved into a full-fledged programmability API. The main goal was to have a runtime with very straightforward integration, to the point that you can easily use it for managing your C++ resources, but also to be much higher performance than dynamically-typed interpreted languages, especially when dealing with complex data types such as float vectors.

Features are still being implemented, but the compiler seems to be stable and load-time conversion to native x86 code is functional. Expect a real release in a month or two.

The project now has a home on Google Code.

by OneEightHundred (noreply@blogger.com) at 2011-10-18 22:37

2011-08-24

It was Thursday afternoon, a completely sensible hour, but for me I had been woken up by the call. In my sleepy haze I hadn’t realized that this quickly turned into a surprise job interview. I made the mistake of saying that, while I had worked plenty with Ubuntu packaging and scripted application tests, I hadn’t actually written any of Wine’s C code.

“Oh.”

I began to feel the consequences of the impression I’d given. “Well, we want a real developer.” Without thinking, I’d managed to frame my years of experience in precisely the wrong way. All that integration work, application testing, knowledge of scripting languages and the deep internals of Ubuntu suddenly counted for nothing. It didn’t matter how much work I’d done or how many developer summits I’d been sponsored to: in this moment I was someone who didn’t even write simple Wine conformance tests.

We talked some more, and I went back to bed to wake at midnight, technically Friday. Too late and too early to go out, everything was quiet enough to get some real work done. I thought about the earlier conversation, and while I hadn’t written C code since high school I decided to dive right back in and hack at Wine myself. Within minutes I found a bug, and four hours later I had code not only proving it, but also fixing it for good.

Test driven development

Today’s Wine consists of two equally important parts: a series of implementations pretending to be parts of Windows, and an ever-growing suite of unit tests. The implementations are straightforward once you know the right thing to do: if the waiter function in Windows’ restaurant.dll asks for a tip, then ours needs to as well. Similarly, the tests prove what the right thing actually is, on both Wine and Windows. They help us answer weird questions, like if the Windows waiter still demands a tip with a negative bill. Somewhere out there, there’s a Windows program that depends on this behavior, and it will be broken in Wine if we make the mistake of assuming Windows acts reasonably.

I asked a developer to recommend a DLL that needed better tests, picked a random C file in it, and started looking. I soon found my target, a documented and complete implementation of a function with only partial tests. This code is already written, and believed working, in Wine. I was supposed to write a bunch of tests that should pass in Wine. That’s when I learned the function is already broken.

Awesome Face

The Wine Testbot

Wine owes a lot to the late Greg Geldorp, an employee of VMware who originally created a Windows testbot for us. Any developer can submit a test patch to it, and those tests will be run on multiple versions of Windows to make sure they pass. It saves us the trouble of having to reboot into 10 different versions of Windows when hacking on Wine.

When I used the testbot to run my new tests, however, I found that while they passed on Wine they actually failed on Windows. Since we’re supposed to do what Windows does, no matter how stupid, that meant two problems: my new tests were bad, and Wine had a bug. Fixing the tests is simple enough – you just change the definition of “pass” – but this kind of unexpected failure can also inspire even more tests. By the end of it I had 6 patches adding different tests to just one function, 3 of which were marked “todo_wine”.

Fixing Wine

While simply submitting tests would certainly be a useful contribution, I felt like I could do more. “You found the mess, you clean it up” is an annoying cliché, but here it has a ring of truth to it: my recent experience writing these tests meant that I had become the world expert on this function. At least, for the next few days, after which I planned on forgetting it forever. That’s what good tests are for – they let us confidently ignore the internals of done code. In the off chance we break it unintentionally, they tell us exactly what’s wrong.

And so I wrote what was supposed to be my final patch: one that fixed Wine and marked the tests as no longer todo. In true open source fashion, I sent it to a friend for review, where he promptly informed me that, while my new tests were passing, I’d created a place where Wine could crash. The solution is, unsurprisingly, yet more tests to see how Windows handles the situation (keeping in mind that sometimes Windows handles things by crashing). This is typical in Wine development: your first attempt at a patch often results in mere discovery that the problem is harder to solve than you thought.

Awesome Face

The real world

None of this actually matters, of course, unless the bug I’d fixed was actually affecting a real application that someone would want to run. Did I actually fix anything useful? I don’t know. It’s not exactly easy to get a list of all Windows applications that rely on edge-case behavior of shlwapi.dll’s StrFromTimeInterval function, but at least Wine is more correct now.

Apparent correctness isn’t the end-all of software development, of course. It’s possible doing something seemingly correct in Wine can make things worse: if my initial version of a fix slipped in to the code, for instance, an application could go from displaying a slightly wrong string to flat-out crashing. That’s why unit tests are just one part of software QA – you still need peer review of code and actual application testing.

Part of something greater

Incidentally, the whole experience reminded me of a blog post I had written over a year ago about modeling Wine development. My model was telling me that what I had just done was a bit inefficient: I made a modest improvement to Wine, but it wasn’t directly inspired by a particular real world application. Perhaps it would have been better had I tackled a more salient bug in a big name application, rather than polishing off some random function in string.c. But it wasn’t random: another developer recommended this particular code section to me because it was missing tests, and he noticed this precisely because in the past some untested behavior in a similar function was breaking a real application for him.

This function is now done. The coding was relatively simple by Wine standards – no need for expertise in COM, Direct3D, OLE, or any number of Windows conventions that O’Reilly writes entire books about. Wine doesn’t need experts: it needs a lot of grunt work from people like me. People willing to tackle one function at a time until by sheer attrition we end up with a test suite so exhaustive that everything can be simply guaranteed to work. That’s how we win in the end. That’s how real developers do it.

by YokoZar at 2011-08-24 14:59

2011-06-12

The Problem

For those using chef to automate your server infrastructure you probably find managing third-party cookbooks to be a pain. Ideally I want to make custom changes to a cookbook while still being able to track for upstream enhancements.

A few techniques I see being used are:

no tracking: Manually download an archive from github or opscode community and drop it in your cookbooks/ directory. Easy to make custom changes but you have no automated way to check for updates.

git submodules: This tracks upstream well, but unless you own the repo you can’t make changes.

fork it: Since pretty much all cookbooks reside on github, so you can fork a copy. This works, but now you might have dozens of different repos to manage. And checking for updates means going into each repo and manually merging in enhancements from the upstream.

knife vendor: Now we are getting somewhere. Chef’s knife command has functionality for dealing with third-party cookbooks. It looks something like this:

knife cookbook site vendor nginx

This downloads the nginx cookbook from the opscode community site, puts an unmodified copy in a chef-vendor-nginx git branch, and then puts a copy in your cookbooks/nginx dir in your master branch. When you run the install again it will download the updated version into the chef-vendor-nginx branch, and then merge that into master.

This is a good start, but it has a number of problems. First you are restricted to using what is available on the opscode community site. Second, although this seems like a git-centric model, knife is actually downloading a .tar.gz file. In fact if you visited the nginx cookbook page you would see it only offers an archive download, no way to inspect what this cookbook actually provides before installing.

There is a sea of great high-quality cookbooks on github. Since we all know and love git it would be great if we could get the previous functionality but using git repositories as a source instead.

A Solution

Enter knife-github-cookbooks. This gem enhances the knife command and lets us get the above functionality by pulling from github rather than downloading from opscode. To use it just install the gem and run:

knife cookbook github install cookbooks/nginx

By default it assumes a username/repo from github. So for each cookbook you install you will have a chef-vendor-cookbookname branch storing the upstream copy and a cookbooks/cookbook-name directory in your master branch to make local changes to.

If you want to check for upstream changes:

knife cookbook github compare nginx

That will launch a github compare view. You can even pass this command a different user who has forked the repo and see or merge in changes from that fork! Read more about it on the github page.

One thing to keep in mind is this gem doesn’t pull in required dependencies automatically, so you will have to make sure you download any requirements a cookbook might have. You can check what dependencies a cookbook requires by inspecting the metadata files.

Bonus Tip!

Opscode has a github repository full of recipes you probably want to use (opscode/cookbooks). Unfortunately using this repository would mean pulling in *all* of those cookbooks. That just clutters up your chef project with cookbooks you don’t need. Luckily there is https://github.com/cookbooks! This repository get updated daily and separates each opscode/cookbook cookbook into a separate git repository.

Now you can cherry-pick the cookbooks you want, and manage them with knife-github-cookbooks!

by Brian Racer at 2011-06-12 17:51

2011-06-07

Make sure you have the same path to the binaries you want to profile on your target device as your workstation.

On your profiling target as root:

export KREXP=dpkg -L kernel-debug | grep "vmlinux-2.6"
opcontrol --init
opcontrol --vmlinux=$KREXP
opcontrol --separate=kernel
opcontrol --status
opcontrol -c=8

start your application

as root again:

opcontrol --stop;  opcontrol --reset; opcontrol --start; sleep 5; opcontrol --stop

commence activity you want to profile - i.e. scroll around wildly, play a video, etc

on your workstation/host environment:
rm -rf /var/lib/oprofile && scp -r root@192.168.2.15:/var/lib/oprofile /var/lib/
this should give you some log info:
opreport -l -r

to finally generate the fancy svg:
opreport -c /path/to/binary_to_profile > /tmp/opreport.txt
oprofile-callgraph-to-svg -e 1 -n 1 /tmp/opreport.txt

your result should look like this


by heeen at 2011-06-07 14:08

2011-05-13

A quick one liner to iterate a nsIntRegion:

for(nsIntRegionRectIterator iter(mUpdateRegion); const nsIntRect* R=iter.Next();)
//do something with R

by heeen at 2011-05-13 11:32

2011-03-10

The problem:

Sometimes one version of Wine can run an application, but fail on others.  Maybe there’s a regression in the latest Wine version, or you’ve installed an optional patch that fixes one application but breaks another.

The solution to this is to have more than one version of Wine installed on the system, and have the system determine which version of Wine is best for the application.  This implies two different levels of usability – advanced users may want to configure and tweak which Wine runs which App, but mere humans won’t even want to know such a thing exists.

This is the reason why many people have created Wine front ends: they worry about things like patches and registry hacks and native DLLs so that users won’t have to.  You just click a button for the application you want to install, put in the requisite disc or password, and it does all the shaman dances for you.  Codeweaver’s, the chief sponsor of Wine, stays in business through sales of Crossover, which is basically just a front end to a special version of Wine.

So, these front ends exist to solve some very real and important problems. But now we have a new problem, in that we might have more than one front end — playonlinux, winetricks, and others all need to deal with this too.  In true open source fashion, we need to work together and come up with a standard.

Sharing is Caring

My proposed solution:

Distribution packagers like me make a separate package, say, wine-hotfixes.  This package replaces (on Ubuntu via dpkg-diversions) the /usr/bin/wine binary, moving the existing one to /usr/bin/wine-default.

The new /usr/bin/wine will pass everything to wine-default, unless it detects the environment variable WINEHOTFIXES is set to something other than null. If it is set (to, say, WINEHOTFIXES="6971,12234"), then the script will look in /etc/wine/hotfixes.d/ and /etc/wine/hotfixes.conf for alternative Wine versions that might make it happy.  In the case of a partial match it will prioritize bugs in the listed order.
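The selection rule described above (fall through to wine-default unless WINEHOTFIXES is set, then prefer bugs in the listed order on a partial match) could be sketched roughly as follows. This is an illustrative sketch, not the real dispatcher; the function name and config representation are made up:

```python
def pick_wine(hotfixes_env, alternatives, default="/usr/bin/wine-default"):
    """Pick a Wine binary for a comma-separated WINEHOTFIXES bug list.

    `alternatives` maps a hotfixed-Wine path to the set of bug numbers
    it works around.  On a partial match, bugs are prioritized in the
    order they appear in the environment variable.
    """
    if not hotfixes_env:
        return default  # no hotfixes requested: use the stock Wine
    wanted = [b.strip() for b in hotfixes_env.split(",") if b.strip()]
    best, best_rank = default, None
    for path, fixed in alternatives.items():
        # Rank by the earliest-listed bug this alternative fixes,
        # breaking ties by how many of the wanted bugs it covers.
        hits = [i for i, bug in enumerate(wanted) if bug in fixed]
        if not hits:
            continue
        rank = (min(hits), -len(hits))
        if best_rank is None or rank < best_rank:
            best, best_rank = path, rank
    return best

# Hypothetical contents parsed out of /etc/wine/hotfixes.d/:
alts = {
    "/opt/wine-hotfixes/yokos-tweaked-wine": {"6971"},
    "/opt/wine-hotfixes/mouse-escape-fix": {"6971", "12234"},
}
print(pick_wine("6971,12234", alts))  # the version covering both bugs wins
```

Note the forward-compatible case falls out for free: with an empty WINEHOTFIXES, or no matching alternative, the default binary is always returned.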

hotfixes.d will contain a series of config files, one for each alternative version of wine.  These could be installed manually, but generally they’ll come from special packages — say, a version of Wine built with a workaround for that annoying mouse escape bug.  Each file will give a path (say, /opt/wine-hotfixes/yokos-tweaked-wine) and which bugs it hotfixes.  hotfixes.conf can specify a list of bugs that the default Wine already fixes, as well as which bugs to ignore (eg that are probably fixed by every hotfix).

Start menu items (.desktop entries) can then work exactly as they do now, except they will have the WINEHOTFIXES environment variable set, generally as created by a front end.  If the user has no alternative wine versions, or no wine-hotfixes package, nothing different will happen and everything will still use the wine-default.  If the user upgrades Wine to a version that fixes all the worked around bugs, the default will be used again (forward-compatible) — all that’s needed is for the newer Wine package to ship an updated hotfixes.conf.
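A menu entry produced by a front end under this scheme might look something like the following hypothetical .desktop fragment (the application name, path, and bug numbers are illustrative):

```ini
[Desktop Entry]
Type=Application
Name=Some Windows App
Exec=env WINEHOTFIXES=6971,12234 wine "C:\\Program Files\\SomeApp\\app.exe"
```

Without the wine-hotfixes package installed, the env assignment is harmless and the entry behaves exactly as it does today.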

The beauty of this is that a front end can specify a list of bugs to workaround without actually having a ready hotfix – if needed that can be handled by someone else.  Similarly, the hotfixed Wines don’t actually need to know about which app they’ll be running, as wine-hotfixes will do all the matchmaking.  We also keep /usr/bin free, except for one added binary.

Wrapping it all up

The real test of a design, of course, is if it can be any simpler.  With the above, we’ve got it down to a single configuration item depending on who’s using it — hotfixes.d for the packager, hotfixes.conf for a manual tweaker, and the WINEHOTFIXES environment variable for the front end author.

It is of course worth asking if we should be doing this at all.  Zero configurations are better than one, after all, and ideally one magic Wine package would run every application flawlessly.  But that’s what we’ve been trying for years now, and people keep feeling the need to write these front ends anyway — we’re clearly not doing well enough without them, so we might as well manage them and work together.  This way, at least everything is backwards (and forwards) compatible: environment variables mean nothing without wine-hotfixes, and if we ever do invent a perfect version of Wine all the applications installed by front ends will continue to work just fine.

That should wrap it up, unless I’ve missed something.

Pool shot that sinks every ball

by YokoZar at 2011-03-10 08:01

2011-02-08

My last post detailed how to compile the Io language from source and install it in Ubuntu (10.10 Maverick). Io has a growing set of addons such as GUIs, sound and image manipulation, OpenGL, and database support, to name a few. However, they will not be enabled if you don’t have the proper development libraries installed.

I’ll go through a couple of addons in this article, but if you just want to make sure you have as many dependencies as possible to run the addons here is a line you can paste:

$ sudo apt-get install build-essential cmake libreadline-dev libssl-dev ncurses-dev libffi-dev zlib1g-dev libpcre3-dev libpng-dev libtiff4-dev libjpeg62-dev python-dev libmysqlclient-dev libmemcached-dev libtokyocabinet-dev libsqlite3-dev libdbi0-dev libpq-dev libgmp3-dev libogg-dev libvorbis-dev libtaglib-cil-dev libtag1-dev libtheora-dev libsamplerate0-dev libloudmouth1-dev libsndfile1-dev libflac-dev libgl1-mesa-dev libglu1-mesa-dev freeglut3-dev libxmu-dev libxi-dev libxml2-dev libyajl-dev uuid-dev liblzo2-dev

You will need to rebuild Io once these are all installed.

I would encourage you to browse the addons/* directory in the Io source tree. There are many good, useful addons and samples, although unfortunately there are a few that do not currently work or are missing samples, so dust off that book on C 🙂

Sockets

sudo apt-get install libevent-dev

Here is a minimal webserver using sockets:

WebRequest := Object clone do(
    cache := Map clone
    handleSocket := method(socket, server,
        socket streamReadNextChunk
        if(socket isOpen == false, return)
        request := socket readBuffer betweenSeq("GET ", " HTTP")         
 
        data := cache atIfAbsentPut(request,
            writeln("caching ", request)
            f := File clone with(request)
            if(f exists, f contents, nil)
        )                                                                
 
        if(data,
            socket streamWrite("HTTP/1.0 200 OK\n\n")
            socket streamWrite(data)
        ,
            socket streamWrite("Not Found")
        )                                                                
 
        socket close
        server requests append(self)
    )
)                                                                        
 
WebServer := Server clone do(
    setPort(7777)
    socket setHost("127.0.0.1")
    requests := List clone
    handleSocket := method(socket,
        WebRequest handleSocket(socket, self)
    )
) start

Lots of other good socket based examples in addons/Socket/samples.

Regex

sudo apt-get install libpcre3-dev

That will install Perl Compatible Regular Expression support for Io. You can use it like:

regex := Regex with("(?<num>\\d+)([ \t]+)?(?<word>\\w+)")
match := "73noises" matchesOfRegex(regex) next

CFFI

During the configure process you might have noticed a message saying Could NOT find FFI (missing: FFI_INCLUDE_DIRS). FFI (foreign function interface) is basically a system that lets us call functions written in other programming languages. First make sure you have the development libraries:

$ sudo apt-get install libffi-dev

How FFI functions is very architecture and compiler dependent, and it seems Debian places the includes in a location the cmake scripts aren’t looking in. I’m not that familiar with cmake and couldn’t find a very elegant solution, so just add the following line to the modules/FindFFI.cmake script:

$ vim modules/FindFFI.cmake
 
# Add the following line
set(FFI_INCLUDE_DIRS /usr/include/x86_64-linux-gnu)
# Above these two
include(FindPackageHandleStandardArgs)
FIND_PACKAGE_HANDLE_STANDARD_ARGS(FFI DEFAULT_MSG FFI_INCLUDE_DIRS FFI_LIBRARIES)

Here is a small program that gets us direct access to libc’s puts(3) function:

CFFI
 
lib := Library clone setName("libc.so.6")
puts := Function with(Types CString) setLibrary(lib) setName("puts")
 
puts "Hello Io!"

Python

sudo apt-get install python-dev

Want to access Python from Io?

# Import a module
sys := Python import("sys")
 
"Which version of python are we running?" println
sys version println
 
"Split a string" println
str := "Brave brave Sir Robin"
str println
string split(str) println
 
"Load a C module (.so)" println
t := Python import("time")
 
writeln("Current time is: ", t time)

Databases

sudo apt-get install libmysqlclient-dev libmemcache-dev libtokyocabinet-dev libsqlite3-dev libdbi0-dev

Io has addons for MySQL, PostgreSQL, memcached, Tokyo Cabinet, SQLite and a few others.

Sound

sudo apt-get install libgmp3-dev libogg-dev libvorbis-dev libtaglib-cil-dev libtag1-dev libtheora-dev libsamplerate0-dev libloudmouth1-dev libsndfile1-dev libflac-dev

Various sound processing libraries.

Images

$ sudo apt-get install libpng-dev libtiff4-dev libjpeg62-dev

Various image loading libraries.

GUI

$ sudo apt-get install x11proto-xf86misc-dev xutils-dev libxpm-dev libpango1.0-dev libcairo2-dev libfreetype6-dev 
 
$ sudo apt-get install libclutter-1.0-dev libatk1.0-dev

There is also a GUI called Flux that requires OpenGL support. I wasn’t able to get it working however.

OpenGL

$ sudo apt-get install libgl1-mesa-dev libglu1-mesa-dev freeglut3-dev libxmu-dev libxi-dev

Lots of great examples in addons/OpenGL/samples.

XML and JSON

$ sudo apt-get install libxml2-dev libyajl-dev

If you need to do any XML or JSON parsing.

UUID

$ sudo apt-get install uuid-dev

Support for a UUID generator. It seems to be broken, however.

Misc

$ sudo apt-get install libreadline-dev libssl-dev ncurses-dev libffi-dev zlib1g-dev liblzo2-dev zlib1g-dev

SSL, archives, REPL history, curses GUI.

by Brian Racer at 2011-02-08 20:54

2011-02-05

I have recently begun reading through Bruce Tate’s fun Seven Languages In Seven Weeks book. One of the chapters focuses on the Io language, and its installation can be a little bit non-standard to get it working to my liking.

Generally on my development machine when I compile from source I like to install locally to my home directory rather than system wide. This way sudo privileges are not needed plus I just like the idea of keeping these items close to home.

First Io requires the cmake build system so make sure that is available.

$ sudo apt-get install cmake

Next download and extract the source code.

$ wget --no-check-certificate http://github.com/stevedekorte/io/zipball/master -O io-lang.zip
$ unzip io-lang.zip
$ cd stevedekorte-io-[hash]

Io provides a build script; however, it is set up to install the language to /usr/local. Since I want it to go in $HOME/local, you just need to modify that file. Here is a quick one-liner:

$ sed -i -e 's/^INSTALL_PREFIX="\/usr\/local/INSTALL_PREFIX="$HOME\/local/' build.sh

Now build and install.

$ ./build.sh
$ ./build.sh install

Since we are installing into a location our OS doesn’t really know about, we need to configure a few paths.

$ vim ~/.bashrc
export PATH="${HOME}/local/bin:${PATH}"
export LD_LIBRARY_PATH="${HOME}/local/lib:${LD_LIBRARY_PATH}"
 
# You might want these too
export LD_RUN_PATH=$LD_LIBRARY_PATH
export CPPFLAGS="-I${HOME}/local/include"
export CXXFLAGS=$CPPFLAGS
export CFLAGS=$CPPFLAGS
export MANPATH="${HOME}/local/share/man:${MANPATH}"

Lastly restart your shell and type ‘io’ and you should be dropped into Io’s REPL!

A side benefit to this method is you can install anything you build into $HOME/local. Usually you just need to pass the --prefix=$HOME/local parameter when you run a ./configure script.

by Brian Racer at 2011-02-05 17:30

2010-10-11

You'll recall some improvements I proposed to the YCoCg DXT5 algorithm a while back.

There's another realization of it I made recently: As a YUV-style color space, the Co and Cg channels are constrained to a range that's directly proportional to the Y channel. The addition of the scalar blue channel was mainly introduced to deal with resolution issues that caused banding artifacts on colored objects changing value, but the entire issue there can be sidestepped by simply using the Y channel as a multiplier for the Co and Cg channels, causing them to only respect tone and saturation while the Y channel becomes fully responsible for intensity.

This is not a quality improvement; in fact it nearly doubles the PSNR error in testing. However, it does result in considerable simplification of the algorithm, on both the encode and decode sides, and the perceptual loss compared to the old algorithm is very minimal.

This also simplifies the algorithm considerably:


int iY = px[0] + 2*px[1] + px[2]; // 0..1020
int iCo, iCg;

if (iY == 0)
{
    iCo = 0;
    iCg = 0;
}
else
{
    iCo = (px[0] + px[1]) * 255 / iY;
    iCg = (px[1] * 2) * 255 / iY;
}

px[0] = (unsigned char)iCo;
px[1] = (unsigned char)iCg;
px[2] = 0;
px[3] = (unsigned char)((iY + 2) / 4);


... And to decode:


float3 DecodeYCoCgRel(float4 inColor)
{
    return (float3(4.0, 0.0, -4.0) * inColor.r
          + float3(-2.0, 2.0, -2.0) * inColor.g
          + float3(0.0, 0.0, 4.0)) * inColor.a;
}



While this does the job with much less perceptual loss than DXT1, and eliminates banding artifacts almost entirely, it is not quite as precise as the old algorithm, so using that is recommended if you need the quality.
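As a sanity check on the Y-as-multiplier idea, here is a small floating-point round-trip sketch of the encode and decode above (Python for illustration, ignoring the DXT5 byte quantization, so the round trip is exact up to float rounding):

```python
def encode_ycocg_rel(r, g, b):
    """RGB in 0..255 -> normalized (Co, Cg, A); Y alone carries intensity."""
    y = r + 2*g + b                      # Y in 0..1020
    if y == 0:
        return 0.0, 0.0, 0.0
    # Co and Cg are divided by Y, so they only encode tone/saturation.
    return (r + g) / y, (2*g) / y, y / 1020.0

def decode_ycocg_rel(co, cg, a):
    """Mirror of the HLSL decode: a linear combination of Co/Cg scaled by Y."""
    r = ( 4*co - 2*cg    ) * a
    g = (        2*cg    ) * a
    b = (-4*co - 2*cg + 4) * a
    return 255*r, 255*g, 255*b           # back to 0..255

rgb = (200.0, 100.0, 50.0)
print(decode_ycocg_rel(*encode_ycocg_rel(*rgb)))  # recovers (200.0, 100.0, 50.0)
```

Expanding the decode terms shows why this works: each channel reduces to the original component divided by Y, and multiplying by the stored Y (the alpha channel) cancels the division.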

by OneEightHundred (noreply@blogger.com) at 2010-10-11 00:21

2010-10-10

A few years back there was a publication on real-time YCoCg DXT5 texture compression. There are two improvements on the technique I feel I should present:

There's a pretty clear problem right off the bat: It's not particularly friendly to linear textures. If you simply attempt to convert sRGB values into linear space and store the result in YCoCg, you will experience severe banding owing largely to the loss of precision at lower values. Gamma space provides a lot of precision at lower intensity values where the human visual system is more sensitive.

sRGB texture modes exist as a method to cheaply convert from gamma space to linear, and are pretty fast since GPUs can just use a look-up table to get the linear values, but YCoCg can't be treated as an sRGB texture and doing sRGB decodes in the shader is fairly slow since it involves a divide, power raise, and conditional.

This can be resolved first by simply converting from a 2.2-ish sRGB gamma ramp to a 2.0 gamma ramp, which preserves most of the original gamut: 255 input values map to 240 output values, low intensity values maintain most of their precision, and they can be linearized by simply squaring the result in the shader.


Another concern, which isn't really one if you're aiming for speed and doing things real-time, but is if you're considering using such a technique for offline processing, is the limited scale factor. DXT5 provides enough resolution for 32 possible scale factor values, so there isn't any reason to limit it to 1, 2, or 4 if you don't have to. Using the full range gives you more color resolution to work with.


Here's some sample code:


unsigned char Linearize(unsigned char inByte)
{
    float srgbVal = ((float)inByte) / 255.0f;
    float linearVal;

    if (srgbVal <= 0.04045f)
        linearVal = srgbVal / 12.92f;
    else
        linearVal = pow( (srgbVal + 0.055f) / 1.055f, 2.4f );

    // Store on a 2.0 gamma ramp: the shader linearizes by squaring
    return (unsigned char)(floor(sqrt(linearVal) * 255.0 + 0.5));
}

void ConvertBlockToYCoCg(const unsigned char inPixels[16*3], unsigned char outPixels[16*4])
{
    unsigned char linearizedPixels[16*3]; // Convert to linear values

    for(int i=0;i<16*3;i++)
        linearizedPixels[i] = Linearize(inPixels[i]);

    // Calculate Co and Cg extents
    int extents = 0;
    int n = 0;
    int iY, iCo, iCg;
    int blockCo[16];
    int blockCg[16];
    const unsigned char *px = linearizedPixels;
    for(int i=0;i<16;i++)
    {
        iCo = (px[0]<<1) - (px[2]<<1);
        iCg = (px[1]<<1) - px[0] - px[2];
        if(-iCo > extents) extents = -iCo;
        if( iCo > extents) extents = iCo;
        if(-iCg > extents) extents = -iCg;
        if( iCg > extents) extents = iCg;

        blockCo[n] = iCo;
        blockCg[n++] = iCg;

        px += 3;
    }

    // Co = -510..510
    // Cg = -510..510
    float scaleFactor = 1.0f;
    if(extents > 127)
        scaleFactor = (float)extents * 4.0f / 510.0f;

    // Convert to quantized scalefactor
    unsigned char scaleFactorQuantized = (unsigned char)(ceil((scaleFactor - 1.0f) * 31.0f / 3.0f));

    // Unquantize
    scaleFactor = 1.0f + (float)(scaleFactorQuantized / 31.0f) * 3.0f;

    // Expand the 5-bit quantized scale factor to 8 bits for the blue channel
    unsigned char bVal = (unsigned char)((scaleFactorQuantized << 3) | (scaleFactorQuantized >> 2));

    unsigned char *outPx = outPixels;

    n = 0;
    px = linearizedPixels;
    for(int i=0;i<16;i++)
    {
        // Calculate components
        iY = ( px[0] + (px[1]<<1) + px[2] + 2 ) / 4;
        iCo = (int)((blockCo[n] / scaleFactor) + 128);
        iCg = (int)((blockCg[n] / scaleFactor) + 128);

        if(iCo < 0) iCo = 0; else if(iCo > 255) iCo = 255;
        if(iCg < 0) iCg = 0; else if(iCg > 255) iCg = 255;
        if(iY < 0) iY = 0; else if(iY > 255) iY = 255;

        px += 3;
        n++;

        outPx[0] = (unsigned char)iCo;
        outPx[1] = (unsigned char)iCg;
        outPx[2] = bVal;
        outPx[3] = (unsigned char)iY;

        outPx += 4;
    }
}




.... And to decode it in the shader ...



float3 DecodeYCoCg(float4 inColor)
{
    float3 base = inColor.arg + float3(0, -0.5, -0.5);
    float scale = (inColor.b*0.75 + 0.25);
    float4 multipliers = float4(1.0, 0.0, scale, -scale);
    float3 result;

    result.r = dot(base, multipliers.xzw);
    result.g = dot(base, multipliers.xyz);
    result.b = dot(base, multipliers.xww);

    // Convert from 2.0 gamma to linear
    return result*result;
}

by OneEightHundred (noreply@blogger.com) at 2010-10-10 22:32

2010-09-12

This article is hilarious. It sounds like a perfectly normal business-y article until you get to this gem:
The barrier to entry on the Instant concept is apparently low, and Yahoo and Microsoft's Bing have both tested the waters, according to a report in Search Engine Land.
(emphasis mine)

So apparently Dawn Kawamoto, "Technology Reporter" for Daily Finance, thinks the barrier to entry to searching the entire internet instantly is low.

I don't even know what to say.

by Blake Householder (noreply@blogger.com) at 2010-09-12 19:44