sn.printf.net

2017-07-18

As the first step in my new game project The Deadlings, I've put together a little starter project that combines the Farseer 3.1 physics library with the GameStateManagement sample from Microsoft, all built on XNA 4.0.  You can download it at thedeadlings.com.

2017-07-18 21:30

TouchArcade wrote a wonderful review of our game Rogue Runner today. After 2 months and 5 updates, it's wonderful to see our little game take off.

2017-07-18 21:30

I am now offering my services as a freelance iOS developer. Need an app developed for iPhone, iPod Touch or iPad?  Get in touch with me via Glowdot Productions, Inc. You can hire a guy who's had apps in almost every chart category on the app store, been in the top 50 ...

2017-07-18 21:30

2017-06-28

It depends.

The problem with distributed systems is that no matter what the question is, the answer is inevitably ‘It Depends’.

When you cut a larger service apart, where you cut depends on latency, resources, and access to state, but it also depends on error handling, availability, and recovery processes. It depends, but you probably don’t want to depend on a message broker.

Using a message broker to distribute work is like a cross between a load balancer and a database, with the disadvantages of both and the advantages of neither.

Message brokers, or persistent queues accessed by publish-subscribe, are a popular way to pull components apart over a network. They’re popular because they often have a low setup cost, and provide easy service discovery, but they can come at a high operational cost, depending on where you put them in your systems.

In practice, a message broker is a service that transforms network errors and machine failures into filled disks. Then you add more disks. The advantage of publish-subscribe is that it isolates components from each other, but the problem is usually gluing them together.


For short-lived tasks, you want a load balancer

For short-lived tasks, publish-subscribe is a convenient way to build a system quickly, but you inevitably end up implementing a new protocol atop. You have publish-subscribe, but you really want request-response. If you want something computed, you’ll probably want to know the result.

Starting with publish-subscribe makes work assignment easy: jobs get added to the queue, workers take turns to remove them. Unfortunately, it makes finding out what happened quite hard, and you’ll need to add another queue to send a result back.

Once you can handle success, it is time to handle the errors. The first step is often adding code to retry the request a few times. After you DDoS your system, you put a call to sleep(). After you slowly DDoS your system, each retry waits twice as long as the previous.

(Aside: Accidental synchronisation is still a problem, as waiting to retry doesn’t prevent a lot of things happening at once.)
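
Sketched in Python, the retry loop ends up looking something like this; the attempt count, base delay and cap are illustrative, not recommendations:

import random
import time

# Retry with exponential backoff and jitter: wait roughly twice as long after
# each failure, and add randomness so a crowd of clients does not retry in
# lockstep. All numbers here are illustrative.

def retry(request, attempts=5, base=0.5, cap=30.0):
    for attempt in range(attempts):
        try:
            return request()
        except Exception:
            if attempt == attempts - 1:
                raise                              # out of retries: fail loudly
            delay = min(cap, base * (2 ** attempt))
            time.sleep(random.uniform(0, delay))   # full jitter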

As workers fail to keep up, clients give up and retry work, but the earlier request is still waiting to be processed. The solution is to move some of the queue back to clients, asking them to hold onto work until work has been accepted: back-pressure, or acknowledgements.
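
In code, the smallest version of that idea is a bounded queue plus a reply channel; a Python sketch, with all names and sizes invented for illustration:

import queue
import threading

# Back-pressure: submit() blocks once 'pending' is full, so callers hold onto
# work until it is accepted. Each job carries its own reply queue, which
# doubles as the acknowledgement channel.

pending = queue.Queue(maxsize=16)

def do_work(payload):
    return ("done", payload)           # stand-in for the actual task

def worker():
    while True:
        payload, reply = pending.get()
        reply.put(do_work(payload))    # acknowledge by sending back a result

def submit(payload, timeout=5.0):
    reply = queue.Queue(maxsize=1)
    pending.put((payload, reply), timeout=timeout)   # blocks while the queue is full
    return reply.get(timeout=timeout)                # wait for the acknowledgement

threading.Thread(target=worker, daemon=True).start()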

Although the components interact via publish-subscribe, we’ve created a request-response protocol atop. Now the message broker is really only doing two useful things: service discovery, and load balancing. It is also doing two not-so-useful things: enqueuing requests, and persisting them.

For short-lived tasks, the persistence is unnecessary: the client sticks around for as long as the work needs to be done, and handles recovery. The queuing isn’t that necessary either.

Queues inevitably run in two states: full, or empty. If your queue is running full, you haven’t pushed enough work to the edges, and if it is running empty, it’s working as a slow load balancer.

A mostly empty queue is still first-come-first-served, serving as a point of contention for requests. A broker often does nothing but wait for workers to poll for new messages. If your queue is meant to run empty, why wait to forward on a request?

(Aside: Something like random load balancing will work, but join-idle-queue is well worth investigating.)
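
The shape of join-idle-queue is simple enough to sketch: workers announce themselves when idle, and the dispatcher hands each request straight to an idle worker rather than parking requests in a central queue. Everything below is an illustrative toy, not the paper’s algorithm in full:

import queue

idle = queue.Queue()                        # workers with nothing to do right now

def worker_loop(inbox, handle):
    while True:
        idle.put(inbox)                     # join the idle queue
        job = inbox.get()                   # wait for the dispatcher to hand us work
        handle(job)                         # 'handle' is whatever the task actually is

def dispatch(job, timeout=1.0):
    try:
        inbox = idle.get(timeout=timeout)   # pick an idle worker, not a busy one
    except queue.Empty:
        raise RuntimeError("no idle workers: push back on the caller")
    inbox.put(job)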

For distributing short-lived tasks, you can use a message broker, but you’ll be building a load balancer, along with an ad-hoc RPC system, with extra latency.


For long-lived tasks, you’ll need a database

A load balancer with service discovery won’t help you with long-running tasks, work that outlives the client, or managing throughput. You’ll want persistence, but not in your message broker. For long-lived tasks, you’ll want a database instead.

The persistence and queueing were obstacles for short-lived tasks; the disadvantages are less obvious for long-lived tasks, but similar things can go wrong.

If you care about the result of a task, you’ll want to store that it is needed somewhere other than in the persistent queue. If the task is run but fails midway, something will have to take responsibility for it, and the broker will have forgotten. This is why you want a database.

Duplicates in a queue often cause more headaches, as long-lived tasks have more opportunities to overlap. Although we’re using the broker to distribute work, we’re also using it implicitly as a mutex. To stop work from overlapping, you implement a lock atop. After it breaks a couple of times, you replace it with leases, adding timeouts.
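
A lease is a small amount of code; the sketch below uses a dict standing in for a table keyed by task id, and in a real database the check-and-update would have to be a single atomic statement:

import time

# Leases instead of locks: ownership expires on its own, so a crashed worker
# cannot hold a task forever. Field names and the TTL are illustrative.

def acquire_lease(leases, task_id, holder, ttl=60.0):
    now = time.time()
    current = leases.get(task_id)
    if current is None or current["expires"] < now:
        leases[task_id] = {"holder": holder, "expires": now + ttl}
        return True                    # we own the task until the lease expires
    return False                       # someone else is (still) working on it

def renew_lease(leases, task_id, holder, ttl=60.0):
    current = leases.get(task_id)
    if current is not None and current["holder"] == holder:
        current["expires"] = time.time() + ttl
        return True
    return False                       # we lost the lease: stop working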

(Note: This is not why you want a database; using transactions for long-running tasks is suffering. Long-running processes are best modelled as state machines.)

When the database becomes the primary source of truth, you can handle a broker going offline, or a broker losing the contents of a queue, by backfilling from the database. As a result, you don’t need to directly enqueue work with the broker; you can mark it as required in the database, and wait for something else to handle it.

Assuming that something else isn’t a human who has been paged.

A message pump can scan the database periodically and send work requests to the broker. Enqueuing work in batches can be an effective way of making an expensive database call survivable. The pump responsible for enqueuing the work can also track if it has completed, and so handle recovery or retries too.
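
A sketch of such a pump, in Python against an assumed sqlite table; the schema, states, batch size and timeout are all invented for illustration:

import time

# Scan for work marked as needed, enqueue it in batches, and reset anything
# that was enqueued long ago but never completed, so the database stays the
# source of truth. 'db' is a sqlite3 connection; 'enqueue' hands work to the broker.

def pump(db, enqueue, batch_size=100, interval=5.0, stale_after=300.0):
    while True:
        rows = db.execute(
            "SELECT id, payload FROM tasks WHERE state = 'needed' LIMIT ?",
            (batch_size,)).fetchall()
        for task_id, payload in rows:
            enqueue(payload)
            db.execute(
                "UPDATE tasks SET state = 'enqueued', enqueued_at = ? WHERE id = ?",
                (time.time(), task_id))
        db.execute(
            "UPDATE tasks SET state = 'needed' "
            "WHERE state = 'enqueued' AND enqueued_at < ?",
            (time.time() - stale_after,))
        db.commit()
        time.sleep(interval)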

Backlog is still a problem, so you’ll want to use back-pressure to keep the queue fairly empty, and only fill from the database when needed. Although a broker can handle temporary overload, back-pressure should mean it never has to.

At this point the message broker is providing two things: service discovery and work assignment, but what you really need is a scheduler. A scheduler is what scans a database, works out which jobs need to run, and often where to run them too. A scheduler is what takes responsibility for handling errors.

(Aside: Writing a scheduler is hard. It is much easier to have 1000 while loops waiting for the right time, than one while loop waiting for which of the 1000 is first. A scheduler can track when it last ran something, but the work can’t rely on that being the last time it ran. Idempotency isn’t just your friend, it is your saviour.)
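
The “one while loop” version is roughly a priority queue of next-run times; a toy sketch where each job is a callable with a fixed interval, and the counter exists only so the heap never has to compare callables:

import heapq
import itertools
import time

# jobs is a list of (interval_seconds, callable) pairs. Everything here is
# illustrative; a real scheduler also has to persist what it has run.

def run_scheduler(jobs):
    tie = itertools.count()
    heap = [(time.time() + interval, next(tie), interval, job)
            for interval, job in jobs]
    heapq.heapify(heap)
    while heap:
        next_run, _, interval, job = heapq.heappop(heap)
        time.sleep(max(0.0, next_run - time.time()))
        try:
            job()              # the job must be idempotent: it may well run again
        finally:
            heapq.heappush(heap, (time.time() + interval, next(tie), interval, job))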

You can use a message broker for long-lived tasks, but you’ll be building a lock manager, a database, and a scheduler, along with yet another home-brew request-response system.


Publish-Subscribe is about isolating components

The problem with running tasks with publish-subscribe is that you really want request-response. The problem with using queues to assign work is that you don’t want to wait for a worker to ask.

The problem with relying on a persistent queue for recovery is that recovery must get handled elsewhere, and the problem with brokers is nothing else makes service discovery so trivial.

Message brokers can be misused, but that isn’t to say they have no use. Brokers work well when you need to cross system boundaries.

Although you want to keep queues empty between components, it is convenient to have a buffer at the edges of your system, to hide some failures from external clients. When you handle external faults at the edges, you free the insides from handling them. The inside of your system can focus on handling internal problems, of which there are many.

A broker can be used to buffer work at the edges, but it can also be used as an optimisation, to kick off work a little earlier than planned. A broker can pass on a notification that data has been changed, and the system can fetch data through another API.

(Aside: If you use a broker to speed up a process, the system will grow to rely on it for performance. People use caches to speed up database calls, but there are many systems that simply do not work fast enough until the cache is warmed up, filled with data. Although you are not relying on the message broker for reliability, relying on it for performance is just as treacherous.)

Sometimes you want a load balancer, sometimes you’ll need a database, but sometimes a message broker will be a good fit.

Although persistence can’t handle many errors, it is convenient if you need to restart with new code or settings, without data loss. Sometimes the error handling offered is just right.

Although a persistent queue offers some protection against failure, it can’t take responsibility for when things go wrong halfway through a task. To be able to recover from failure you have to stop hiding it; you must add acknowledgements, back-pressure, and error handling to get back to a working system.

A persistent message queue is not bad in itself, but relying on it for recovery, and by extension, correct behaviour, is fraught with peril.


Systems grow by pushing responsibilities to the edges

Performance isn’t easy either. You don’t want queues or persistence in the central or underlying layers of your system. You want them at the edges.

‘It’s slow’ is the hardest problem to debug, and often the reason is that something is stuck in a queue. For long and short-lived tasks, we used back-pressure to keep the queue empty, to reduce latency.

When you have several queues between you and the worker, it becomes even more important to keep the queue out of the centre of the network. We’ve spent decades on tcp congestion control to avoid it.

If you’re curious, the history of tcp congestion makes for interesting reading. Although the ends of a tcp connection were responsible for failure and retries, the routers were responsible for congestion: drop things when there is too much.

The problem is that it worked until the network was saturated, and similar to backlog in queues, when it broke, errors cascaded. The solution was similar: back-pressure. Similar to sleeping twice as long on errors, tcp sends half as many packets, before gradually increasing the amount as things improve.
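
Stripped to a toy, the window adjustment is just additive increase, multiplicative decrease; the constants below are illustrative, not what any real TCP stack uses:

# Halve the window when the network pushes back, probe gently while it is healthy.

def adjust_window(window, saw_loss, floor=1, ceiling=1024):
    if saw_loss:
        return max(floor, window // 2)   # multiplicative decrease: back off hard
    return min(ceiling, window + 1)      # additive increase: creep back up slowly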

Back-pressure is about pushing work to the edges, letting the ends of the conversation find stability, rather than trying to optimise all of the links in-between in isolation. Congestion control is about using back-pressure to keep the queues in-between as empty as possible, to keep latency down, and to increase throughput by avoiding the need to drop packets.

Pushing work to the edges is how your system scales. We have spent a lot of time and a considerable amount of money on IP-Multicast, but nothing has been as effective as BitTorrent. Instead of relying on smart routers to work out how to broadcast, we rely on smart clients to talk to each other.

Pushing recovery to the outer layers is how your system handles failure. In the earlier examples, we needed to get the client, or the scheduler to handle the lifecycle of a task, as it outlived the time on the queue.

Error recovery in the lower layers of a system is an optimisation, and you can’t push work to the centre of a network and scale. This is the end-to-end principle, and it is one of the most important ideas in system design.

The end-to-end principle is why you can restart your home router, when it crashes, without it having to replay all of the websites you wanted to visit before letting you ask for a new page. The browser (and your computer) is responsible for recovery, not the computers in between.

This isn’t a new idea, and Erlang/OTP owes a lot to it. OTP organises a running program into a supervision tree. Each process will often have one process above it, restarting it on failure, and above that, another supervisor to do the same.
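
The same shape is easy to sketch outside of Erlang; a toy supervisor that restarts its child on failure and escalates when failures come too fast, with made-up limits:

import time

def supervise(child, max_restarts=5, window=60.0):
    failures = []
    while True:
        try:
            child()                    # blocks until the child returns or raises
            return                     # clean exit: nothing left to supervise
        except Exception:
            now = time.time()
            failures = [t for t in failures if now - t < window] + [now]
            if len(failures) > max_restarts:
                raise                  # escalate to the supervisor above this one
            time.sleep(1.0)            # brief pause before restarting the child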

(Aside: Pipelines aren’t incompatible with process supervision; one way is for each part to spawn the program that reads its output. A failure down the chain can propagate back up to be handled correctly.)

Although each program will handle some errors, the top levels of the supervision tree handle larger faults with restarts. Similarly, it’s nice if your webpage can recover from a fault, but inevitably someone will have to hit refresh.

The end-to-end principle is realising that no matter how many exceptions you handle deep down inside your program, some will leak out, and something at the outer layer has to take responsibility.

Although sometimes taking responsibility is writing things to an audit log, and message brokers are pretty good at that.


Aside: But what about replicated logs?

“How do I subscribe to the topic on the message broker?”

“It’s not a message broker, it’s a replicated log”

“Ok, how do I subscribe to the replicated log?”

From ‘I believe I did, Bob’, jrecursive

Although a replicated log is often confused with a message broker, they aren’t immune from handling failure. Although it’s good the components are isolated from each other, they still have to be integrated into the system at large. Both offer a one way stream for sharing, both offer publish-subscribe like interfaces, but the intent is wildly different.

A replicated log is often about auditing, or recovery: having a central point of truth for decisions. Sometimes a replicated log is about building a pipeline with fan-in (aggregating data), or fan-out (broadcasting data), but always building a system where data flows in one direction.

The easiest way to see the difference between a replicated log and a message broker is to ask an engineer to draw a diagram of how the pieces connect.

If the diagram looks like a one-way system, it’s a replicated log. If almost every component talks to it, it’s a message broker. If you can draw a flow-chart, it’s a replicated log. If you take all the arrows away and you’re left with a venn diagram of ‘things that talk to each other’, it’s a message broker.

Be warned: A distributed system is something you can draw on a whiteboard pretty quickly, but it’ll take hours to explain how all the pieces interact.


You cut a monolith with a protocol

How you cut a monolith is often more about how you are cutting up responsibility within a team, than cutting it into components. It really does depend, and often more on the social aspects than the technical ones, but you are still responsible for the protocol you create.

Distributed systems are messy because of how the pieces interact over time, rather than which pieces are interacting. The complexity of a distributed system does not come from having hundreds of machines, but hundreds of ways for them to interact. A protocol must take into account performance, safety, stability, availability, and most importantly, error handling.

When we talk about distributed systems, we are talking about power structures: how resources are allocated, how work is divided, how control is shared, or how order is kept across systems ostensibly built out of well meaning but faulty components.

A protocol is the rules and expectations of participants in a system, and how they are beholden to each other. A protocol defines who takes responsibility for failure.

The problem with message brokers, and queues, is that no-one does.

Using a message broker is not the end of the world, nor a sign of poor engineering. Using a message broker is a tradeoff. Use them freely knowing they work well on the edges of your system as buffers. Use them wisely knowing that the buck has to stop somewhere else. Use them cheekily to get something working.

I say don’t rely on a message broker, but I can’t point to easy off-the-shelf answers. HTTP and DNS are remarkable protocols, but I still have no good answers for service discovery.

Lots of software regularly gets pushed into service way outside of its designed capabilities, and brokers are no exception. Although the bad habits around brokers and the relative ease of getting a prototype up and running lead to nasty effects at scale, you don’t need to build everything at once.

The complexity of a system lies in its protocol not its topology, and a protocol is what you create when you cut your monolith into pieces. If modularity is about building software, protocol is about how we break it apart.

The main task of the engineering analyst is not merely to obtain “solutions” but is rather to understand the dynamic behaviour of the system in such a way that the secrets of the mechanism are revealed, and that if it is built it will have no surprises left for [them]. Other than exhaustive physical experimentations, this is the only sound basis for engineering design, and disregard of this cardinal principle has not infrequently led to disaster.

From “Analysis of Nonlinear Control Systems” by Dunstan Graham and Duane McRuer, p 436

Protocol is the reason why ‘it depends’, and the reason why you shouldn’t depend on a message broker: You can use a message broker to glue systems together, but never use one to cut systems apart.

2017-06-28 05:33

2017-05-22

Since I started working as a programmer, I’ve always taken notes in meetings, and jotted things down during the day to remember, but these were all usually on an A4 notepad, which I’ve always used as a daily scratch pad, and until recently I have never kept a proper journal which I could refer back to at a later time.

A colleague of mine, with whom I have been working on a project, has for a long time kept a development journal, or diary, of things that have happened in his work day. Examples are:

  • Meeting notes, who was present, salient points of what was said and what was agreed
  • Design ideas, diagrams, pseudo code
  • Technical notes on gotchas in the language and application
  • Noteworthy events connected to the project

The event which got me interested in his note keeping was one day when the Product Owner made a decision about the scope and importance of a particular feature. The colleague in question looked back over his notes and was able to prove that the PO had made a different decision about the same thing a few weeks before. When things like this happened a few more times, I really sat up and started to ask some questions.

“If it isn’t written down, it didn’t happen”, was the response.

If you write it down, you are more likely to remember it. There is a large body of evidence that suggests that the simple act of physically writing notes helps aid memory retention. There are a lot of articles and blogs about this subject, but I’d never paid it much attention. After all, I’d kept enough notes when at school and university, and I didn’t think I needed to when I was working.

I could not have been more wrong.

So I started taking notes. I got an A4 lined hardback notebook and started writing stuff down.

And: It works.

So for example, I can tell you who made the decision in a meeting thirteen months ago which meant a feature in the application was developed in a particular way which now makes it upsettingly difficult to modify that part of the application. I can produce my design notes from six months ago where I planned the refactoring of some functionality, and the implications of said work, and which developers on the project I’d talked it through with to get some sanity checking that what I was proposing wasn’t stupid. I can tell you who brought cakes in on a particular day last month and who said which particular funny thing last week that is now part of the project slang.

What I’ve found is that if you write stuff down during the day, about what you are doing, it helps you remember (like the research says it will), and makes you think about what you have done already, and what you need to do in the future. This is all stuff that is required for a Scrum stand-up, if you have to do those. It also provides ammunition for those of you who have to suffer through the dreaded annual performance appraisal; or, helps remind you what to list on the invoice when billing for the hours you’ve worked.

Lastly, it helps (me, anyway) remember what I was doing on my latest pet project that I haven’t touched for eight months. Which I should get back to.

2017-05-22 00:00

2017-05-20

As clever geeks we are sort of protected from the ‘big bad world’ by living in expensive cities and suburbs, working with other geeks and getting paid well.

In prison I guess that counts for nothing.

¯\_(ツ)_/¯ I guess!! 

by Factor Mystic at 2017-05-20 00:23

2017-04-21

2017-02-18

Make a public connection private (source):

Set-NetConnectionProfile -InterfaceAlias <interface name> -NetworkCategory Private

Rename the adapter (source):

Get-NetAdapter -Name Ethernet | Rename-NetAdapter -NewName Renamed

I’m posting this so I can find it again next time I forget

by Factor Mystic at 2017-02-18 21:48

2016-11-24

2016-11-02

For anyone keeping score, I haven’t updated my blog for over a year.

Insert excuses here

I know that I should; Hanselman and Sonmez both recommend it.

Anyway, I’ve come up with some goals for myself for the next year:

  • Get on Hanselminutes
    • Failing that, any other developer focused podcast will do
  • Blog more regularly
  • Contribute more to open source
  • Release an open source project of my own
  • Become rich and famous

So we shall see what happens.

2016-11-02 00:00

2016-09-16



I like this talk a lot: what modularity is, what we use it for, how modularity happens in systems, and how we can use modularity to manage change.

2016-09-16 10:43

2016-08-22

Last night I found out i’d lost a friend, and if you’ll be patient with my words, I’d like to reflect a little.

Mathie was one of the older, weirder, geeks I met. I’d escaped my home town on the edge of nowhere, and it was my first time having a peer group of adults.

He’d helped everywhere. With the student-run shell server, with the local IRC server everyone collected on, a known and friendly face on the circuit.

Mathie was one of the many people behind Scottish Ruby Conference, responsible for bringing a lot of interesting people into Edinburgh, and into my life.

Why wasn’t this talk given at Scottish Ruby Conference?

I fucked up

In front of everyone assembled at the fringe track, a collection of talks that didn’t quite make it, mathie answered honestly. It’s kinda how I’ll remember him: a bit of a fuckup.

A fuckup who changed my life for the better.

Thanks mathie, I hope to pass on some of your kindness.

RIP, You fucking idiot.

2016-08-22 11:12

2016-07-07

2016-06-12

2016-05-19

(Writing this down so I can google it when I inevitably forget about it in the future)

Powershell supports Unicode in a custom prompt, but if it’s coming out garbled, then you probably forgot to save your profile.ps1 as UTF-8 WITH BOM.

 

This is UTF-8 (no BOM):

powershell-unicode-prompt-problem-1

This is UTF-8 w/ BOM:

powershell-unicode-prompt-ok-2

Nice.

(BTW, the code for this custom prompt is here)

by Factor Mystic at 2016-05-19 01:31

2016-02-08

2015-10-08

The principle of YAGNI should always apply, and I was recently reminded of that when I had to build a small application to give to a user for a small one off task. We’re talking a single screen application with a single button on it. Idiot proof.

So, File > New > Winforms application - doesn’t have to be anything fancy. Then I caught myself: My first instinct was to add StructureMap to it.

I thought to myself that it’s only going to have a few classes. I mean, just because it’s a small winforms app doesn’t mean I’m going to shit up the code-behind with business logic and data access, so why bother with the overhead of adding an IOC library?

That doesn’t mean I can’t still use dependency injection, though. Mark Seemann calls this “Poor Man’s DI”.

    static class Program
    {
        [STAThread]
        static void Main()
        {
            Application.EnableVisualStyles();
            Application.SetCompatibleTextRenderingDefault(false);

            var service = new Service(new Database(), new Other());
            Application.Run(new Main(service));
        }
    }

It’s short and sweet, and there is no IOC container configuration to worry about. Because I don’t need to.

What about when….

Requirements change. “Can it just do this as well…?” More dependencies required, perhaps another form, maybe another service or two. I think I’d see how far I could push Poor Man’s DI before I brought in a proper IOC container to help manage things.

2015-10-08 00:00

2014-01-29

A few months ago I left a busy startup job I’d had for over a year. The work was engrossing: I stopped blogging, but I was programming every day. I learned a completely new language, but got plenty of chances to use my existing knowledge. That is, after all, why they hired me.

dilbert

I especially liked something that might seem boring: combing through logs of occasional server errors and modifying our code to avoid them. Maybe it was because I had set up the monitoring system. Or because I was manually deleting servers that had broken in new ways. The economist in me especially liked putting a dollar value on bugs of this nature: 20 useless servers cost an extra 500 dollars a week on AWS.

But, there’s only so much waste like this to clean up. I’d automated most of the manual work I was doing and taught a few interns how to do the rest. I spent two weeks openly wondering what I’d do after finishing my current project, even questioning whether I’d still be useful with the company’s new direction.

fireme
Career Tip: don’t do this.

That’s when we agreed to part ways. So, there I was, no “official” job but still a ton of things to keep me busy. I’d help run a chain of Hacker Hostels in Silicon Valley, I was still maintaining Wine as an Ubuntu developer, and I was still a “politician” on Ubuntu’s Community Council having weekly meetings with Mark Shuttleworth.

Politicking, business management, and even Ubuntu packaging, however, aren’t programming. I just wasn’t doing it anymore, until last week. I got curious about counting my users on Launchpad. Download counts are exposed by an API, but not viewable on any webpage. No one else had written a proper script to harvest that data. It was time to program.

fuckshitdamn

And man, I went a little nuts. It was utterly engrossing, in the way that writing and video games used to be. I found myself up past 3am before I even noticed the time; I’d spent a whole day just testing and coding before finally putting it on github. I rationalized my need to make it good as a service to others who’d use it. But in truth I just liked doing it.

It didn’t stop there. I started looking around for programming puzzles. I wrote 4 lines of python that I thought were so neat they needed to be posted as a self-answered question on stack overflow. I literally thought they were beautiful, and using the new yield from feature in Python3 was making me inordinately happy.

And now, I’m writing again. And making terrible cartoons on my penboard. I missed this shit. It’s fucking awesome.

by YokoZar at 2014-01-29 02:46

2013-02-08

Lock’n'Roll, a Pidgin plugin for Windows designed to set an away status message when the PC is locked, has received its first update in three and a half years!

Daniel Laberge has forked the project and released a version 1.2 update which allows you to specify which status should be set when the workstation locks. Get it while it’s awesome (always)!

by Chris at 2013-02-08 03:56

2012-01-07

How do you generate the tangent vectors, which represent which way the texture axes on a textured triangle are facing?

Hitting up Google tends to produce articles like this one, or maybe even that exact one. I've seen others linked too; the basic formulae tend to be the same. Have you looked at what you're pasting into your code though? Have you noticed that you're using the T coordinates to calculate the S vector, and vice versa? Well, you can look at the underlying math, and you'll find that it's because that's what happens when you assume the normal, S vector, and T vector form an orthonormal matrix and attempt to invert it; in a sense you're not really using the S and T vectors but rather vectors perpendicular to them.

But that's fine, right? I mean, this is an orthogonal matrix, and they are perpendicular to each other, right? Well, does your texture project on to the triangle with the texture axes at right angles to each other, like a grid?


... Not always? Well, you might have a problem then!

So, what's the real answer?

Well, what do we know? First, translating the vertex positions will not affect the axial directions. Second, scrolling the texture will not affect the axial directions.

So, for triangle (A,B,C), with coordinates (x,y,z,t), we can create a new triangle (LA,LB,LC) and the directions will be the same:

We also know that both axis directions are on the same plane as the points, so to resolve that, we can convert this into a local coordinate system and force one axis to zero.



Now we need triangle (Origin, PLB, PLC) in this local coordinate space. We know PLB[y] is zero since LB was used as the X axis.


Now we can solve this. Remember that PLB[y] is zero, so...


Do this for both axes and you have your correct texture axis vectors, regardless of the texture projection. You can then multiply the results by your tangent-space normalmap, normalize the result, and have a proper world-space surface normal.

As always, the source code spoilers:

terVec3 lb = ti->points[1] - ti->points[0];
terVec3 lc = ti->points[2] - ti->points[0];
terVec2 lbt = ti->texCoords[1] - ti->texCoords[0];
terVec2 lct = ti->texCoords[2] - ti->texCoords[0];

// Generate local space for the triangle plane
terVec3 localX = lb.Normalize2();
terVec3 localZ = lb.Cross(lc).Normalize2();
terVec3 localY = localX.Cross(localZ).Normalize2();

// Determine X/Y vectors in local space
float plbx = lb.DotProduct(localX);
terVec2 plc = terVec2(lc.DotProduct(localX), lc.DotProduct(localY));

terVec2 tsvS, tsvT;

tsvS[0] = lbt[0] / plbx;
tsvS[1] = (lct[0] - tsvS[0]*plc[0]) / plc[1];
tsvT[0] = lbt[1] / plbx;
tsvT[1] = (lct[1] - tsvT[0]*plc[0]) / plc[1];

ti->svec = (localX*tsvS[0] + localY*tsvS[1]).Normalize2();
ti->tvec = (localX*tsvT[0] + localY*tsvT[1]).Normalize2();


There's an additional special case to be aware of: Mirroring.

Mirroring across an edge can cause wild changes in a vector's direction, possibly even degenerating it. There isn't a clear-cut solution to these, but you can work around the problem by snapping the vector to the normal, effectively cancelling it out on the mirroring edge.

Personally, I check the angle between the two vectors, and if they're more than 90 degrees apart, I cancel them, otherwise I merge them.

by OneEightHundred (noreply@blogger.com) at 2012-01-07 21:23

2011-12-07

Valve's self-shadowing radiosity normal maps concept can be used with spherical harmonics in approximately the same way: Integrate a sphere based on how much light will affect a sample if incoming from numerous sample directions, accounting for collision with other samples due to elevation.

You can store this as three DXT1 textures, though you can improve quality by packing channels with similar spatial coherence. Coefficients 0, 2, and 6 in particular tend to pack well, since they're all dominated primarily by directions aimed perpendicular to the texture.

I use the following packing:
Texture 1: Coefs 0, 2, 6
Texture 2: Coefs 1, 4, 5
Texture 3: Coefs 3, 7, 8

You can reference an early post on this blog for code on how to rotate a SH vector by a matrix, in turn allowing you to get it into texture space. Once you've done that, simply multiply each SH coefficient from the self-shadowing map by the SH coefficients created from your light source (also covered on the previous post) and add together.

by OneEightHundred (noreply@blogger.com) at 2011-12-07 15:39

2011-12-02

Spherical harmonics seems to have some impenetrable level of difficulty, especially among the indie scene, which has little to go off of other than a few presentations and whitepapers, some of which even contain incorrect information (e.g. one of the formulas in the Sony paper on the topic is incorrect), and most of which are still using ZYZ rotations because it's so hard to find how to do a matrix rotation.

Hao Chen and Xinguo Liu did a presentation at SIGGRAPH '08 and the slides from it contain a good deal of useful stuff, never mind one of the ONLY easy-to-find rotate-by-matrix functions. It also treats the Z axis a bit awkwardly, so I patched the rotation code up a bit and added a pre-integrated cosine convolution filter so you can easily get SH coefs for directional light.

There was also gratuitous use of sqrt(3) multipliers, which can be completely eliminated by simply premultiplying or predividing coef #6 by it, which incidentally causes all of the constants and multipliers to resolve to rational numbers.

As always, you can include multiple lights by simply adding the SH coefs for them together. If you want specular, you can approximate a directional light by using the linear component to determine the direction, and constant component to determine the color. You can do this per-channel, or use the average values to determine the direction and do it once.

Here are the spoilers:

#define SH_AMBIENT_FACTOR   (0.25f)
#define SH_LINEAR_FACTOR (0.5f)
#define SH_QUADRATIC_FACTOR (0.3125f)

void LambertDiffuseToSHCoefs(const terVec3 &dir, float out[9])
{
// Constant
out[0] = 1.0f * SH_AMBIENT_FACTOR;

// Linear
out[1] = dir[1] * SH_LINEAR_FACTOR;
out[2] = dir[2] * SH_LINEAR_FACTOR;
out[3] = dir[0] * SH_LINEAR_FACTOR;

// Quadratics
out[4] = ( dir[0]*dir[1] ) * 3.0f*SH_QUADRATIC_FACTOR;
out[5] = ( dir[1]*dir[2] ) * 3.0f*SH_QUADRATIC_FACTOR;
out[6] = ( 1.5f*( dir[2]*dir[2] ) - 0.5f ) * SH_QUADRATIC_FACTOR;
out[7] = ( dir[0]*dir[2] ) * 3.0f*SH_QUADRATIC_FACTOR;
out[8] = 0.5f*( dir[0]*dir[0] - dir[1]*dir[1] ) * 3.0f*SH_QUADRATIC_FACTOR;
}


void RotateCoefsByMatrix(float outCoefs[9], const float pIn[9], const terMat3x3 &rMat)
{
// DC
outCoefs[0] = pIn[0];

// Linear
outCoefs[1] = rMat[1][0]*pIn[3] + rMat[1][1]*pIn[1] + rMat[1][2]*pIn[2];
outCoefs[2] = rMat[2][0]*pIn[3] + rMat[2][1]*pIn[1] + rMat[2][2]*pIn[2];
outCoefs[3] = rMat[0][0]*pIn[3] + rMat[0][1]*pIn[1] + rMat[0][2]*pIn[2];

// Quadratics
outCoefs[4] = (
( rMat[0][0]*rMat[1][1] + rMat[0][1]*rMat[1][0] ) * ( pIn[4] )
+ ( rMat[0][1]*rMat[1][2] + rMat[0][2]*rMat[1][1] ) * ( pIn[5] )
+ ( rMat[0][2]*rMat[1][0] + rMat[0][0]*rMat[1][2] ) * ( pIn[7] )
+ ( rMat[0][0]*rMat[1][0] ) * ( pIn[8] )
+ ( rMat[0][1]*rMat[1][1] ) * ( -pIn[8] )
+ ( rMat[0][2]*rMat[1][2] ) * ( 3.0f*pIn[6] )
);

outCoefs[5] = (
( rMat[1][0]*rMat[2][1] + rMat[1][1]*rMat[2][0] ) * ( pIn[4] )
+ ( rMat[1][1]*rMat[2][2] + rMat[1][2]*rMat[2][1] ) * ( pIn[5] )
+ ( rMat[1][2]*rMat[2][0] + rMat[1][0]*rMat[2][2] ) * ( pIn[7] )
+ ( rMat[1][0]*rMat[2][0] ) * ( pIn[8] )
+ ( rMat[1][1]*rMat[2][1] ) * ( -pIn[8] )
+ ( rMat[1][2]*rMat[2][2] ) * ( 3.0f*pIn[6] )
);

outCoefs[6] = (
( rMat[2][1]*rMat[2][0] ) * ( pIn[4] )
+ ( rMat[2][2]*rMat[2][1] ) * ( pIn[5] )
+ ( rMat[2][0]*rMat[2][2] ) * ( pIn[7] )
+ 0.5f*( rMat[2][0]*rMat[2][0] ) * ( pIn[8])
+ 0.5f*( rMat[2][1]*rMat[2][1] ) * ( -pIn[8])
+ 1.5f*( rMat[2][2]*rMat[2][2] ) * ( pIn[6] )
- 0.5f * ( pIn[6] )
);

outCoefs[7] = (
( rMat[0][0]*rMat[2][1] + rMat[0][1]*rMat[2][0] ) * ( pIn[4] )
+ ( rMat[0][1]*rMat[2][2] + rMat[0][2]*rMat[2][1] ) * ( pIn[5] )
+ ( rMat[0][2]*rMat[2][0] + rMat[0][0]*rMat[2][2] ) * ( pIn[7] )
+ ( rMat[0][0]*rMat[2][0] ) * ( pIn[8] )
+ ( rMat[0][1]*rMat[2][1] ) * ( -pIn[8] )
+ ( rMat[0][2]*rMat[2][2] ) * ( 3.0f*pIn[6] )
);

outCoefs[8] = (
( rMat[0][1]*rMat[0][0] - rMat[1][1]*rMat[1][0] ) * ( pIn[4] )
+ ( rMat[0][2]*rMat[0][1] - rMat[1][2]*rMat[1][1] ) * ( pIn[5] )
+ ( rMat[0][0]*rMat[0][2] - rMat[1][0]*rMat[1][2] ) * ( pIn[7] )
+0.5f*( rMat[0][0]*rMat[0][0] - rMat[1][0]*rMat[1][0] ) * ( pIn[8] )
+0.5f*( rMat[0][1]*rMat[0][1] - rMat[1][1]*rMat[1][1] ) * ( -pIn[8] )
+0.5f*( rMat[0][2]*rMat[0][2] - rMat[1][2]*rMat[1][2] ) * ( 3.0f*pIn[6] )
);
}


... and to sample it in the shader ...


float3 SampleSHQuadratic(float3 dir, float3 shVector[9])
{
float3 ds1 = dir.xyz*dir.xyz;
float3 ds2 = dir*dir.yzx; // xy, zy, xz

float3 v = shVector[0];

v += dir.y * shVector[1];
v += dir.z * shVector[2];
v += dir.x * shVector[3];

v += ds2.x * shVector[4];
v += ds2.y * shVector[5];
v += (ds1.z * 1.5 - 0.5) * shVector[6];
v += ds2.z * shVector[7];
v += (ds1.x - ds1.y) * 0.5 * shVector[8];

return v;
}


For Monte Carlo integration, take sampling points, feed direction "dir" to the following function to get multipliers for each coefficient, then multiply by the intensity in that direction. Divide the total by the number of sampling points:


void SHForDirection(const terVec3 &dir, float out[9])
{
// Constant
out[0] = 1.0f;

// Linear
out[1] = dir[1] * 3.0f;
out[2] = dir[2] * 3.0f;
out[3] = dir[0] * 3.0f;

// Quadratics
out[4] = ( dir[0]*dir[1] ) * 15.0f;
out[5] = ( dir[1]*dir[2] ) * 15.0f;
out[6] = ( 1.5f*( dir[2]*dir[2] ) - 0.5f ) * 5.0f;
out[7] = ( dir[0]*dir[2] ) * 15.0f;
out[8] = 0.5f*( dir[0]*dir[0] - dir[1]*dir[1] ) * 15.0f;
}


... and finally, for a uniformly-distributed random point on a sphere ...


terVec3 RandomDirection(int (*randomFunc)(), int randMax)
{
float u = (((float)randomFunc()) / (float)(randMax - 1))*2.0f - 1.0f;
float n = sqrtf(1.0f - u*u);

float theta = 2.0f * M_PI * (((float)randomFunc()) / (float)(randMax));

return terVec3(n * cos(theta), n * sin(theta), u);
}

by OneEightHundred (noreply@blogger.com) at 2011-12-02 09:22

2011-12-01

Fresh install on OS X of ColdFusion Builder 2 (TWO, the SECOND one). Typing a simple conditional, this is what I was given:



I also had to manually write the closing cfif tag. It's such a joke.

The absolute core purpose of an IDE is to be a text editor. Secondary to that are other features that are supposed to make you work better. ColdFusion Builder 2 (TWO!!!!!) completely fails on all levels as a text editor. It doesn't even function as well as notepad.exe!

Text search is finicky, Find & Replace is completely broken half the time, the UI is often unresponsive (yay Eclipse), the text cursor sometimes disappears, double-clicking folders or files in an FTP view pops up the Rename dialog every time, HTML / CF tag completion usually doesn't happen, indentation is broken, function parameter tooltips obscure the place you are typing, # and " completion randomly breaks (often leaving you with a ###)...the list goes on and on.

Adobe has a big feature list on their site. I'm thinking maybe they should go back and use some resources to fix the parts where you type things into the computer, you know, the whole point of the thing.

by Ted (noreply@blogger.com) at 2011-12-01 15:14

2011-10-18

Has it really been a year since the last update?

Well, things have been chugging along with less discovery and more actual work. However, development on TDP is largely on hold due to the likely impending release of the Doom 3 source code, which has numerous architectural improvements like rigid-body physics and much better customization of entity networking.


In the meantime, however, a component of TDP has been spun off into its own project: The RDX extension language. Initially planned as a resource manager, it has evolved into a full-fledged programmability API. The main goal was to have a runtime with very straightforward integration, to the point that you can easily use it for managing your C++ resources, but also to be much higher performance than dynamically-typed interpreted languages, especially when dealing with complex data types such as float vectors.

Features are still being implemented, but the compiler seems to be stable and load-time conversion to native x86 code is functional. Expect a real release in a month or two.

The project now has a home on Google Code.

by OneEightHundred (noreply@blogger.com) at 2011-10-18 22:37

2011-08-24

It was Thursday afternoon, a completely sensible hour, but for me I had been woken up by the call. In my sleepy haze I hadn’t realized that this quickly turned into a surprise job interview. I made the mistake of saying that, while I had worked plenty with Ubuntu packaging and scripted application tests, I hadn’t actually written any of Wine’s C code.

“Oh.”

I began to feel the consequences of the impression I’d given. “Well, we want a real developer.” Without thinking, I’d managed to frame my years of experience in precisely the wrong way. All that integration work, application testing, knowledge of scripting languages and the deep internals of Ubuntu suddenly counted for nothing. It didn’t matter how much work I’d done or how many developer summits I’d been sponsored to: in this moment I was someone who didn’t even write simple Wine conformance tests.

We talked some more, and I went back to bed to wake at midnight, technically Friday. Too late and too early to go out, everything was quiet enough to get some real work done. I thought about the earlier conversation, and while I hadn’t written C code since high school I decided to dive right back in and hack at Wine myself. Within minutes I found a bug, and four hours later I had code not only proving it, but also fixing it for good.

Test driven development

Today’s Wine consists of two equally important parts: a series of implementations pretending to be parts of Windows, and an ever-growing suite of unit tests. The implementations are straightforward once you know the right thing to do: if the waiter function in Windows’ restaurant.dll asks for a tip, then ours needs to as well. Similarly, the tests prove what the right thing actually is, on both Wine and Windows. They help us answer weird questions, like if the Windows waiter still demands a tip with a negative bill. Somewhere out there, there’s a Windows program that depends on this behavior, and it will be broken in Wine if we make the mistake of assuming Windows acts reasonably.

I asked a developer to recommend a DLL that needed better tests, picked a random C file in it, and started looking. I soon found my target, a documented and complete implementation of a function with only partial tests. This code is already written, and believed working, in Wine. I was supposed to write a bunch of tests that should pass in Wine. That’s when I learned the function is already broken.

Awesome Face

The Wine Testbot

Wine owes a lot to the late Greg Geldorp, an employee of VMware who originally created a Windows testbot for us. Any developer can submit a test patch to it, and those tests will be run on multiple versions of Windows to make sure they pass. It saves us the trouble of having to reboot into 10 different versions of Windows when hacking on Wine.

When I used the testbot to run my new tests, however, I found that while they passed on Wine they actually failed on Windows. Since we’re supposed to do what Windows does, no matter how stupid, that meant two problems: my new tests were bad, and Wine had a bug. Fixing the tests is simple enough – you just change the definition of “pass” – but this kind of unexpected failure can also inspire even more tests. By the end of it I had 6 patches adding different tests to just one function, 3 of which were marked “todo_wine”.

Fixing Wine

While simply submitting tests would certainly be a useful contribution, I felt like I could do more. “You found the mess, you clean it up” is an annoying cliché, but here it has a ring of truth to it: my recent experience writing these tests meant that I had become the world expert on this function. At least, for the next few days, after which I planned on forgetting it forever. That’s what good tests are for – they let us confidently ignore the internals of done code. In the off chance we break it unintentionally, they tell us exactly what’s wrong.

And so I wrote what was supposed to be my final patch: one that fixed Wine and marked the tests as no longer todo. In true open source fashion, I sent it to a friend for review, where he promptly informed me that, while my new tests were passing, I’d created a place where Wine could crash. The solution is, unsurprisingly, yet more tests to see how Windows handles the situation (keeping in mind that sometimes Windows handles things by crashing). This is typical in Wine development: your first attempt at a patch often results in mere discovery that the problem is harder to solve than you thought.

Awesome Face

The real world

None of this actually matters, of course, unless the bug I’d fixed was actually affecting a real application that someone would want to run. Did I actually fix anything useful? I don’t know. It’s not exactly easy to get a list of all Windows applications that rely on edge-case behavior of shlwapi.dll’s StrFromTimeInterval function, but at least Wine is more correct now.

Apparent correctness isn’t the end-all of software development, of course. It’s possible doing something seemingly correct in Wine can make things worse: if my initial version of a fix slipped in to the code, for instance, an application could go from displaying a slightly wrong string to flat-out crashing. That’s why unit tests are just one part of software QA – you still need peer review of code and actual application testing.

Part of something greater

Incidentally, the whole experience reminded me of a blog post I had written over a year ago about modeling Wine development. My model was telling me that what I had just done was a bit inefficient: I made a modest improvement to Wine, but it wasn’t directly inspired by a particular real world application. Perhaps it would have been better had I tackled a more salient bug in a big name application, rather than polishing off some random function in string.c. But it wasn’t random: another developer recommended this particular code section to me because it was missing tests, and he noticed this precisely because in the past some untested behavior in a similar function was breaking a real application for him.

This function is now done. The coding was relatively simple by Wine standards – no need for expertise in COM, Direct3D, OLE, or any number of Windows conventions that O’Reilly writes entire books about. Wine doesn’t need experts: it needs a lot of grunt work from people like me. People willing to tackle one function at a time until by sheer attrition we end up with a test suite so exhaustive that everything can be simply guaranteed to work. That’s how we win in the end. That’s how real developers do it.

by YokoZar at 2011-08-24 14:59

2011-06-12

The Problem

For those using chef to automate your server infrastructure, you probably find managing third-party cookbooks to be a pain. Ideally I want to make custom changes to a cookbook while still being able to track for upstream enhancements.

A few techniques I see being used are:

no tracking: Manually download an archive from github or opscode community and drop it in your cookbooks/ directory. Easy to make custom changes but you have no automated way to check for updates.

git submodules: This tracks upstream well, but unless you own the repo you can’t make changes.

fork it: Since pretty much all cookbooks reside on github, you can fork a copy. This works, but now you might have dozens of different repos to manage. And checking for updates means going into each repo and manually merging in enhancements from the upstream.

knife vendor: Now we are getting somewhere. Chef’s knife command has functionality for dealing with third-party cookbooks. It looks something like this:

knife cookbook site vendor nginx

This downloads the nginx cookbook from the opscode community site, puts an unmodified copy in a chef-vendor-nginx git branch, and then puts a copy in your cookbooks/nginx dir in your master branch. When you run the install again it will download the updated version into the chef-vendor-nginx branch, and then merge that into master.

This is a good start, but it has a number of problems. First you are restricted to using what is available on the opscode community site. Second, although this seems like a git-centric model, knife is actually downloading a .tar.gz file. In fact if you visited the nginx cookbook page you would see it only offers an archive download, no way to inspect what this cookbook actually provides before installing.

There is a sea of great high-quality cookbooks on github. Since we all know and love git it would be great if we could get the previous functionality but using git repositories as a source instead.

A Solution

Enter knife-github-cookbooks. This gem enhances the knife command and lets us get the above functionality by pulling from github rather than downloading from opscode. To use it just install the gem and run:

knife cookbook github install cookbooks/nginx

By default it assumes a username/repo from github. So for each cookbook you install you will have a chef-vendor-cookbookname branch storing the upstream copy and a cookbooks/cookbook-name directory in your master branch to make local changes to.

If you want to check for upstream changes:

knife cookbook github compare nginx

That will launch a github compare view. You can even pass this command a different user who has forked the repo and see or merge in changes from that fork! Read more about it on the github page.

One thing to keep in mind is this gem doesn’t pull in required dependencies automatically, so you will have to make sure you download any requirements a cookbook might have. You can check what dependencies a cookbook requires by inspecting the metadata files.

Bonus Tip!

Opscode has a github repository full of recipes you probably want to use (opscode/cookbooks). Unfortunately using this repository would mean pulling in *all* of those cookbooks. That just clutters up your chef project with cookbooks you don’t need. Luckily there is https://github.com/cookbooks! This repository gets updated daily and separates each opscode/cookbook cookbook into a separate git repository.

Now you can cherry-pick the cookbooks you want, and manage them with knife-github-cookbooks!

by Brian Racer at 2011-06-12 17:51

2011-06-07

Make sure you have the same path to the binaries you want to profile on your target device as on your workstation.

On your profiling target as root:

export KREXP=$(dpkg -L kernel-debug | grep "vmlinux-2.6")
opcontrol --init
opcontrol --vmlinux=$KREXP
opcontrol --separate=kernel
opcontrol --status
opcontrol -c=8

start your application

as root again:

opcontrol --stop;  opcontrol --reset; opcontrol --start; sleep 5; opcontrol --stop

commence activity you want to profile - e.g. scroll around wildly, play a video, etc

on your workstation/host environment:

rm -rf /var/lib/oprofile && scp -r root@192.168.2.15:/var/lib/oprofile /var/lib/

this should give you some log info:

opreport -l -r

to finally generate the fancy svg:
opreport -c /path/to/binary_to_profile > /tmp/opreport.txt
oprofile-callgraph-to-svg -e 1 -n 1 /tmp/opreport.txt

your result should look like this


by heeen at 2011-06-07 14:08

2011-05-13

A quick one liner to iterate a nsIntRegion:

for(nsIntRegionRectIterator iter(mUpdateRegion); const nsIntRect* R=iter.Next();)
//do something with R

by heeen at 2011-05-13 11:32

2011-03-10

The problem:

Sometimes one version of Wine can run an application, but fail on others.  Maybe there’s a regression in the latest Wine version, or you’ve installed an optional patch that fixes one application but breaks another.

The solution to this is to have more than one version of Wine installed on the system, and have the system determine which version of Wine is best for the application.  This implies two different levels of usability – advanced users may want to configure and tweak which Wine runs which App, but mere humans won’t even want to know such a thing exists.

This is the reason why many people have created Wine front ends: they worry about things like patches and registry hacks and native DLLs so that users won’t have to.  You just click a button for the application you want to install, put in the requisite disc or password, and it does all the shaman dances for you.  CodeWeavers, the chief sponsor of Wine, stays in business through sales of CrossOver, which is basically just a front end to a special version of Wine.

So, these front ends exist to solve some very real and important problems. But now we have a new problem, in that we might have more than one front end — playonlinux, winetricks, and others all need to deal with this too.  In true open source fashion, we need to work together and come up with a standard.

Sharing is Caring

My proposed solution:

Distribution packagers like me make a separate package, say, wine-hotfixes.  This package replaces (on Ubuntu via dpkg-diversions) the /usr/bin/wine binary, moving the existing one to /usr/bin/wine-default.

The new /usr/bin/wine will pass everything to wine-default, unless it detects the environment variable WINEHOTFIXES is set to something other than null. If it is set (to, say, WINEHOTFIXES="6971,12234"), then the script will look in /etc/wine/hotfixes.d/ and /etc/wine/hotfixes.conf for alternative wine versions that might make it happy.  In the case of a partial match it will prioritize bugs in the listed order.

hotfixes.d will contain a series of config files, one for each alternative version of wine.  These could be installed manually, but generally they’ll come from special packages — say, a version of Wine built with a workaround for that annoying mouse escape bug.  Each file will give a path (say, /opt/wine-hotfixes/yokos-tweaked-wine) and which bugs it hotfixes.  hotfixes.conf can specify a list of bugs that the default Wine already fixes, as well as which bugs to ignore (eg that are probably fixed by every hotfix).
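
To make it concrete, the wrapper could be as small as the sketch below. The INI-style config syntax is an assumption of mine, and for brevity it ignores hotfixes.conf and the bug-priority ordering; only the paths and the WINEHOTFIXES variable are part of the actual proposal:

#!/usr/bin/env python
import configparser
import glob
import os
import sys

# Pick an alternative wine if WINEHOTFIXES names a bug that some hotfix config
# claims to fix, otherwise fall back to the default wine.

def wanted_bugs():
    return [b for b in os.environ.get("WINEHOTFIXES", "").split(",") if b]

def pick_wine(bugs):
    for conf in sorted(glob.glob("/etc/wine/hotfixes.d/*")):
        cfg = configparser.ConfigParser()
        cfg.read(conf)
        fixed = cfg.get("hotfix", "bugs", fallback="").split(",")
        if any(bug in fixed for bug in bugs):
            return cfg.get("hotfix", "path", fallback="/usr/bin/wine-default")
    return "/usr/bin/wine-default"

if __name__ == "__main__":
    wine = pick_wine(wanted_bugs())
    os.execv(wine, [wine] + sys.argv[1:])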

Start menu items (.desktop entries) can then work exactly as they do now, except they will have the WINEHOTFIXES environment variable set, generally as created by a front end.  If the user has no alternative wine versions, or no wine-hotfixes package, nothing different will happen and everything will still use the wine-default.  If the user upgrades Wine to a version that fixes all the worked around bugs, the default will be used again (forward-compatible) — all that’s needed is for the newer Wine package to ship an updated hotfixes.conf.

The beauty of this is that a front end can specify a list of bugs to workaround without actually having a ready hotfix – if needed that can be handled by someone else.  Similarly, the hotfixed Wines don’t actually need to know about which app they’ll be running, as wine-hotfixes will do all the matchmaking.  We also keep /usr/bin free, except for one added binary.

Wrapping it all up

The real test of a design, of course, is if it can be any simpler.  With the above, we’ve got it down to a single configuration item depending on who’s using it — hotfixes.d for the packager, hotfixes.conf for a manual tweaker, and the WINEHOTFIXES environment variable for the front end author.

It is of course worth asking if we should be doing this at all.  Zero configurations are better than one, after all, and ideally one magic Wine package would run every application flawlessly.  But that’s what we’ve been trying for years now, and people keep feeling the need to write these front ends anyway — we’re clearly not doing well enough without them, so we might as well manage them and work together.  This way, at least everything is backwards (and forwards) compatible: environment variables mean nothing without wine-hotfixes, and if we ever do invent a perfect version of Wine all the applications installed by front ends will continue to work just fine.

That should wrap it up, unless I’ve missed something.

Pool shot that sinks every ball

by YokoZar at 2011-03-10 08:01