sn.printf.net

2019-02-12

Graphite is a scalable, efficient and multi-platform graphical content distribution system for mobile devices and the web.

I designed and developed the backend system powering Graphite — a Node.js based system built on several AWS services, supplemented by some locally hosted server functionality (mostly to handle maintenance jobs and statistical analysis of the live system) and a media sharing system that facilitates the onboarding of new users coming from social media sites like Twitter and Facebook.

I also developed the iOS app for Graphite — one of the biggest and most complex mobile projects I have ever taken on. Although on the surface Graphite seems quite simple, in fact the technology powering it is extremely sophisticated and complex.

In addition to developing the iOS app and the server-side platform, I currently manage the development of every other current and future platform — including Android, the web, and a few others on the roadmap.

You can download the public beta of Graphite on iOS and on Android and visit the Graphite website here.

If you need an app developed, reach out and let’s talk! You can contact me using the contact form on this site, via Skype at stromdotcom or by visiting my company website at https://glowdot.com.

by stromdotcom at 2019-02-12 20:13

Scrawl is a social media app developed over the course of one month, representing a typical social media app project.

Aside from the app being a pretty neat idea, I actually launched it to demonstrate a few critical concepts that many first time (and even some fairly experienced!) mobile entrepreneurs might not be aware of.  Let’s take a look at a few of them!

Delayed permission requests

One of the surest ways to lose a new user is to ask for permissions too soon — or worse, immediately upon first launch of the app.  Even if you are lucky and don’t immediately cause that user to delete your app, you will almost certainly end up with that user denying your request.

The reason for this is pretty simple, if not immediately obvious.  A new user really doesn’t know much about you or your app at all.  And you certainly haven’t earned their trust yet.  But here you are, asking for access to their photos, contacts, location… wouldn’t your first response be, “wait, who are you and what are you going to do with this information?”

And while the push notifications permission might not be invasive in terms of privacy, it is invasive in terms of annoyance factor for many apps, as it has been chronically abused in the past.  Consequently, most users tend to answer “no” unless they have a compelling reason to say “yes”.

The worst part of all this is that once a user says no, it’s really hard to get them to go into the Settings app and change their answer to yes — primarily because most users don’t even know how to do that, even if they wanted to.

A much better approach is to delay asking for those permissions until you have spent a little time with that user, and given them a good reason to accept the request.  In Scrawl, we do this in a few simple ways:

  • Push notifications are not requested until you perform an action for which you might receive a notification.  In Scrawl, you can receive notifications when users reply to you, or post in a place you have favorited.  So we don’t ask for push permission until you post something, or favorite a place.
  • For location permission, we start users off as if they were sitting in our office in Santa Monica, CA.  At the top of the “nearby places” list, we let them know they are looking at places near us, and offer them the option of sharing their location so we can show them places near them.  In this way, we let the user offer their location to us, rather than intrusively asking them where they are.
  • For camera and photo library permissions, we ask only when the user requests to share an image.  This isn’t that interesting, however, as most apps work like this.  It is, in fact, default behavior in iOS apps.

Aside from waiting to ask, we employ one more strategy to maximize acceptance of permission requests: we ask the user ourselves before we ask iOS to make the request.  This is a slightly more advanced concept, but it is based on the fact that once iOS itself asks you for permission, it will never ask again — the user MUST change their answer in the Settings app.  So instead of requesting the push notification permission, for example, we pop up a dialog explaining why we want it and asking the user if they are interested in granting it.  If they say no, we don’t ask them again for a while, giving them a chance to get more comfortable in the app, and giving us a chance to earn trust.  If they say yes, then we tell iOS to go ahead and request the permission.  If they said yes to us, there’s a pretty good chance they’ll say yes to iOS.  So when the request is made, even though it’s the last time it will ever be made, the odds that it will be accepted are much, much higher.
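
The core of that double-ask flow is simple enough to sketch.  Here is a rough Python sketch of the logic only — the show_our_dialog and request_system_prompt callables stand in for the real UI dialog and the iOS permission call, and both are hypothetical names rather than actual APIs:

import time

ASK_AGAIN_DELAY = 7 * 24 * 60 * 60  # if they decline our dialog, wait about a week before asking again

def maybe_request_push_permission(user, show_our_dialog, request_system_prompt):
    # Ask with our own dialog first; only trigger the one-shot OS prompt on a yes.
    denied_at = getattr(user, "push_prompt_denied_at", None)
    if denied_at is not None and time.time() - denied_at < ASK_AGAIN_DELAY:
        return  # they said no recently, so don't nag

    if show_our_dialog("Want to know when someone replies to you or posts in a favorite place?"):
        # They agreed to our dialog, so the real prompt is very likely to be accepted.
        request_system_prompt()
    else:
        user.push_prompt_denied_at = time.time()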

Delayed log in requests

As a general principle, we like to delay asking a user to log in or sign up as long as we possibly can.  At a bare minimum, we try with all our might to avoid requesting that users log in immediately upon launching the app!  For some apps this is inevitable, but in most situations there is a set of functionality that is perfectly reasonable to offer to anonymous users.

In Scrawl, for example, users can search places and read posts, look at photos, and even report objectionable content without logging in.  But to create a place, post a message, upload a photo or vote on any content, they need to log in.  When a user attempts to do one of those things, that’s when we request they log in or sign up — not before.

Content moderation

One of the most important things to consider when creating a social network, and especially one which centers on user generated content (UGC), is content moderation.  You need to have a way to make sure objectionable content is quickly removed and malicious users are removed from the system.  I won’t go into detail about our moderation system here for obvious reasons, but we have created a model for self-moderation that allows objectionable content to be filtered out of the system until such a time that a human moderator can permanently remove it and moderate the offending user accounts.  It’s a pretty smart system, and one that we believe fixes this major problem which has plagued other similar apps in the past.

If you’d like to see Scrawl in action, we’d love for you to check it out on iOS here:

Scrawl on the App Store.

An Android version is coming soon!

Scrawl – post anything, anywhere, anytime!

If you need an app developed, reach out and let’s talk! You can contact me using the contact form on this site, via Skype at stromdotcom or by visiting my company website at https://glowdot.com.

by stromdotcom at 2019-02-12 00:06

2019-02-11

In this series of posts, I want to discuss several myths, misconceptions, and misunderstandings that threaten to derail inexperienced or non-technical founders of tech startups.  

In the 10+ years I have been developing for mobile platforms — 15 years if you count Windows Mobile! — I continue to be amazed by the number of well-intentioned founders who fall for the trap of the cheap offshore development firm.  I am not exaggerating when I say I have never heard of a development project that went well when it leveraged a team of developers from a shop in India.  The failure rate is so insanely high that I wonder how the word hasn’t spread far and wide to stay the hell away from offshore code factories.

My best guess is that it has become a sort of industry meme, and as such it has woven itself so deeply into the fabric of the industry that it will take more than countless blog posts, post-mortem talks, and shared war stories to make it go away.

We have a somewhat cynical but also very true saying around here, which is that our best clients are those who have gone offshore and been burned — often badly — but not to the extent that they threw in the towel or ran completely out of money.  Not only have these clients learned deep and valuable lessons about the process and their own product by testing the waters, they have found a profound respect for the process of developing quality software, and an even more critical respect for the people who write that code.

The misconception here is at its worst when it can be summed up as follows: all developers are the same, the only thing differentiating them is the price.

The setup looks something like this: a non-technical  founder sends out an RFP to several development shops — some are local (like Glowdot), some are offshore (typically India but sometimes Russia or even China), and some are fronted locally by middlemen outsourcing — sometimes without your knowledge — the project to an offshore shop.  Unsurprisingly, you’re going to get three wildly different quotes: the local shop will be the highest, followed by the middleman, who is taking a slice off the top of the Indian shop, and at the bottom is the offshore dev shop.  The range in these quotes can often be incredibly bizarre.  I’ve heard of projects that should not be attempted without budgets of at least hundreds of thousands of dollars getting quoted at silly amounts like $500.  And that might sound crazy, but if I told you one guy was selling a Ferrari for $250,000 and another guy was selling one for $1000, and you knew absolutely nothing about cars,  you can imagine how someone might be so utterly baffled that they just pick a quote in the middle.

But remember! That middle quote is just the lowest quote, with a middleman adding a little extra for himself.  It’s very possible that the developers who would be working on your project under both quotes are the exact same team!

So, what is really going on here?  Well there are a few truths you need to know to understand why local developers are more expensive — and why that extra expense is probably justified.

Software development is a very highly skilled endeavor

It is not easy to build good quality software.  A talented developer has a passion for what he does, a quality education in mathematics and computer science, and years of experience building complex software products.  That team in India?  They don’t have any of this.

Now don’t get me wrong: there are brilliant programmers in India, China, Russia… anywhere.  But they are all working for big companies, and they earn good salaries.  They most definitely are not crammed into a tiny room building apps for $5 an hour.

We had a series of meetings about 6 years ago with a very, very large company.  You have absolutely heard of the company in question, and probably bought several of their products.  The product we were brought in to discuss was of such importance that our second and all subsequent meetings were with the founder and CEO.  This was his pet project, and his baby.  This was the product that was intended to bring this very big, very successful company into the digital age.  The problem was he could not wrap his head around the cost of hiring a development team in California to build this product.  He had quotes from developers in China, and the cost difference was so vast that he could not justify going with us.  No amount of explaining on our part was helping.

Ultimately, they did not go with us.  Ultimately, the product was never launched.

So why did he have such a hard time understanding the cost?  Because his business was built on the idea of manufacturing physical products in China, and selling those products in the US at a considerable markup.  The company had made billions doing this.  And from that founder’s perspective, software was just like any of the other products he built and sold — you send the specs off to a factory, they build it, and you start selling it.

Problem is: software is not like that at all.  You can’t make things go faster by throwing more people at it.  You can’t make things more efficient by organizing teams into an assembly line.  In fact, doing those things is a sure fire way to plunge your project into chaos.  Which, unfortunately, is what happened.

Good programmers are in short supply

Here in the US, it’s unlikely that anyone who gets through a Computer Science program in university will ever go a day without work.  In fact, the supply of programmers is so short that the industry is constantly begging the government to simplify the process of giving visas to foreign workers.

I’m sure almost everyone at this point has heard of the perks that come along with a job in technology — amazing salaries and benefits, free food, massages, and, at more and more companies, unlimited time off.  I actually saw a job posting here in Santa Monica yesterday that boasted its employees averaged a month off a year.  And these companies still have a very hard time filling their vacancies.

So with that in mind, you can understand why the cheapest developer is almost certainly not the one you want to go with.  Anyone willing to drop their price to compete with offshore development quotes is desperate for work in a way I don’t even want to think about.  If your product matters to you at all — and let’s be honest, anything worth doing is worth doing right — then you want the best of the best working on it.

Cheap now, expensive later

Finally, the most important advice I can give you is that trying to save money now is only going to cost you dearly in the long term.  This is so incredibly true in software that I can’t overstate it.  We have heard stories of companies that went offshore because a project was quoted at say $20,000, only to end up, years later, a million dollars in the hole, with nothing to show for it.  In that specific example, Glowdot eventually took over development and built that product for a tenth of what the company blew on a supposedly “cheap” offshore developer.   That lost money could have been better spent marketing the product or continuing the development lifecycle.

But how does that happen?  How does a $20,000 project turn into a million dollar failure?

Well, first off, the quote was nowhere near accurate.  Even at the ludicrous hourly rate these shops tell you you’ll be paying, $20,000 doesn’t go very far on a complex, multi-platform app with a database and backend system.  Once the $20,000 is burned up, the company usually notices a few things:

  • The product isn’t finished
  • What is finished, is buggy and confusing and unusable
  • Lots of time has passed

So here you are: several months later, you’ve burned through your budget and don’t have much to show for it.  You have two options: shut it down and go find another developer, or yell and scream and demand your developer get this back on track.

The response from these shops is almost universally the same: “we can bring a few other programmers on board and work on this for another month”.  This of course comes with a new quote for the changes.   Now that the team has expanded, you’re burning through money several times faster.  But hey!  Maybe you can turn this ship around!

Months later, things have gone from bad to worse.  The product is just as buggy and unusable as before, but since you requested a bunch of new features, you just have even more buggy and unusable features.  You realize now that what you really should have done is fix what you had, then add features once your product is at a stable state.  Why didn’t your developer give you that advice?  Well, because they aren’t in the business of building software, they are in the business of invoicing you.  And now you’re up to $100k invested in this mess.

At that level, you probably have raised money from friends and family who trusted you to build this thing.  And you feel obligated to finish what you started.  So you try to change course, scale back, strategize and plan – this time you want to do it right, and use some of the knowledge you’ve gained!  But you’re in too deep to start over, so you throw more money at your developers.  You yell louder, you ask for more team members.  And you keep grinding away.  But the project is just getting more and more complicated.  The programmers who initially started your project are gone and have been replaced by new programmers.  You start to realize these guys don’t have any sort of system or development philosophy other than “just start writing code”.  You start reaching out to local programmers asking if they can fix the bugs, and they all politely decline to work on your project.  Why?  Because fixing bad code is a nightmare, and there is plenty of work on projects that aren’t a nightmare.

In the best case scenario, you have some money left, and you decide to hire a competent local team.  You meet with your programmer in person and they discuss the methodologies they use, and the structure and strategy with which they approach software development.  They talk to you about their past clients, and you recognize the names.  You realize you could have saved a lot of money and heartache by just hiring the right team to begin with.

In closing, I invite anyone who stumbles upon this post to spend some time on Google looking for war stories of companies that tried to go cheap when sourcing their technology.  Get in touch with us to discuss your project and ask a lot of questions.  Even if we don’t ultimately build your project, we pride ourselves on our willingness to help non-technical founders understand the crazy and complex business they are about to get themselves into.  Glowdot has been building software for almost 20 years — and I personally have been doing this for over 30!  We have seen it all, and we can help you navigate this treacherous landscape better than anyone in the industry.  That’s why we’ve been trusted by some of the biggest companies in the world.

Send us a message and let us show you what we can do for your company.

by stromdotcom at 2019-02-11 23:16

2019-02-10

GlobeChat is a highly scalable, modern, efficient text messaging app with the unique feature that it translates incoming messages into your native language in real time as you chat.

I designed and developed the backend infrastructure for GlobeChat as well as the iOS app, and oversaw the development of the Android app. This development included integration with several AWS services as well as building modules to interface with Microsoft’s Azure cloud platform for translation services.

GlobeChat also, unsurprisingly, exemplifies the multitude of layers of complexity required when localizing an app for multiple languages. In total, GlobeChat supports 61 languages — and in order to do so, the app must not only translate messages between users, but include localized text in the client apps, as well as localized text on the server side (e.g. to translate common push notification messages, server generated error messages, and so on).

Managing copy on all of those levels for 61 languages is a massive task on its own, on top of the technical complexity of building a chat app — let alone one that handles translation in real time!

Indeed, this is so complex and novel that the technology powering GlobeChat has been patented.

You can download GlobeChat on iOS as well as on Android. You can also visit the GlobeChat website for more information.

If you need an app developed, reach out and let’s talk! You can contact me using the contact form on this site, via Skype at stromdotcom or by visiting my company website at https://glowdot.com.

by stromdotcom at 2019-02-10 20:52

2019-01-08

Originating in a thesis, REST is an attempt to explain what makes the browser distinct from other networked applications.

You might be able to imagine a few reasons why: there’s tabs, there’s a back button too, but what makes the browser unique is that a browser can be used to check email, without knowing anything about POP3 or IMAP.

Although every piece of software inevitably grows to check email, the browser is unique in the ability to work with lots of different services without configuration—this is what REST is all about.

HTML only has links and forms, but it’s enough to build incredibly complex applications. HTTP only has GET and POST, but that’s enough to know when to cache or retry things. HTTP uses URLs, so it’s easy to route messages to different places too.

Unlike almost every other networked application, the browser is remarkably interoperable. The thesis was an attempt to explain how that came to be, and called the resulting style REST.

REST is about having a way to describe services (HTML), to identify them (URLs), and to talk to them (HTTP), where you can cache, proxy, or reroute messages, and break up large or long requests into smaller interlinked ones too.

How REST does this isn’t exactly clear.

The thesis breaks down the design of the web into a number of constraints—Client-Server, Stateless, Caching, Uniform Interface, Layering, and Code-on-Demand—but it is all too easy to follow them and end up with something that can’t be used in a browser.

REST without a browser means little more than “I have no idea what I am doing, but I think it is better than what you are doing.”, or worse “We made our API look like a database table, we don’t know why”. Instead of interoperable tools, we have arguments about PUT or POST, endless debates over how a URL should look, and somehow always end up with a CRUD API and absolutely no browsing.

There are some examples of browsers that don’t use HTML, but many of these HTML replacements are for describing collections, and as a result most of the browsers resemble file browsing more than web browsing. It’s not to say you need a back and a next button, but it should be possible for one program to work with a variety of services.

For an RPC service you might think about a curl-like tool for sending requests to a service:

$ rpctl http://service/ describe MyService
methods: ...., my_method

$ rpctl http://service/ describe MyService.my_method
arguments: name, age

$ rpctl http://service/ call MyService.my_method --name="James" --age=31
Result:
   message: "Hello, James!"

You can also imagine a single command line tool for a database that might resemble kubectl:

$ dbctl http://service/ list ModelName --where-age=23
$ dbctl http://service/ create ModelName --name=Sam --age=23
$ ...

Now imagine using the same command line tool for both, and using the same command line tool for every service—that’s the point of REST. Almost.

$ apictl call MyService:my_method --arg=...
$ apictl delete MyModel --where-arg=...
$ apictl tail MyContainers:logs --where ...
$ apictl help MyService

You could implement a command line tool like this without going through the hassle of reading a thesis. You could download a schema in advance, or load it at runtime, and use it to create requests and parse responses, but REST is quite a bit more than being able to reflect, or describe a service at runtime.
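
For illustration, a reflection-style client along those lines might look something like this sketch — the /schema and /call endpoints and their JSON shapes are assumptions invented for the example, not any real service’s API:

import json
import urllib.request

def load_schema(base_url):
    # Fetch a machine-readable description of the service at runtime.
    with urllib.request.urlopen(base_url + "/schema") as response:
        return json.load(response)

def call(base_url, method, **args):
    # Check the request against the schema, then send it as JSON.
    schema = load_schema(base_url)
    if method not in schema["methods"]:
        raise ValueError("unknown method: " + method)
    request = urllib.request.Request(
        base_url + "/call/" + method,
        data=json.dumps(args).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)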

The REST constraints require using a common format for the contents of messages so that the command line tool doesn’t need configuring, and require sending the messages in a way that allows you to proxy, cache, or reroute them without fully understanding their contents.

REST is also a way to break long or large messages up into smaller ones linked together—something far more than just learning what commands can be sent at runtime, but allowing a response to explain how to fetch the next part in sequence.

To demonstrate, take an RPC service with a long running method call:

class MyService(Service):
    @rpc()
    def long_running_call(self, args: str) -> bool:
        id = third_party.start_process(args)
        while third_party.wait(id):
            pass
        return third_party.is_success(id)

When a response is too big, you have to break it down into smaller responses. When a method is slow, you have to break it down into one method to start the process, and another method to check if it’s finished.

class MyService(Service):
    @rpc()
    def start_long_running_call(self, args: str) -> str:
         ...
    @rpc()
    def wait_for_long_running_call(self, key: str) -> bool:
         ...

In some frameworks you can use a streaming API instead, but replacing a procedure call with streaming involves adding heartbeat messages, timeouts, and recovery, so many developers opt for polling instead—breaking the single request into two, like the example above.
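
In client code, the polling version of the split-up call above tends to look something like this sketch — assuming, purely for illustration, that wait_for_long_running_call returns None while the work is still in progress:

import time

def long_running_call(rpc, args, poll_interval=1.0):
    # Start the work, then poll the second method until it reports an outcome.
    key = rpc.start_long_running_call(args)
    while True:
        result = rpc.wait_for_long_running_call(key)
        if result is not None:
            return result          # True or False: did the process succeed?
        time.sleep(poll_interval)  # back off a little before asking again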

Both approaches require changing the client and the server code, and if another method needs breaking up you have to change all of the code again. REST offers a different approach.

We return a response that describes how to fetch another request, much like an HTTP redirect. You’d handle them in a client library much like an HTTP client handles redirects, too.

def long_running_call(self, args: str) -> Result[bool]:
    key = third_party.start_process(args)
    return Future("MyService.wait_for_long_running_call", {"key":key})

def wait_for_long_running_call(self, key: str) -> Result[bool]:
    if not third_party.wait(key):
        return third_party.is_success(key)
    else:
        return Future("MyService.wait_for_long_running_call", {"key":key})

def fetch(request):
    response = make_api_call(request)
    while response.kind == 'Future':
        request = make_next_request(response.method_name, response.args)
        response = make_api_call(request)
    return response

For the more operations minded, imagine I call time.sleep() inside the client, and maybe imagine the Future response has a duration inside. The neat trick is that you can change the amount the client sleeps by changing the value returned by the server.
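
Continuing the sketch above (and reusing its undefined make_api_call and make_next_request helpers), the client only needs one extra line to respect a server-supplied delay; the duration field is the hypothetical one imagined in the paragraph above:

import time

def fetch(request):
    response = make_api_call(request)
    while response.kind == 'Future':
        # The server chooses the delay, so it can slow clients down without redeploying them.
        time.sleep(getattr(response, 'duration', 0))
        request = make_next_request(response.method_name, response.args)
        response = make_api_call(request)
    return response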

The real point is that by allowing a response to describe the next request in sequence, we’ve skipped over the problems of the other two approaches—we only need to implement the code once in the client.

When a different method needs breaking up, you can return a Future and get on with your life. In some ways it’s as if you’re returning a callback to the client, something the client knows how to run to produce a request. With Future objects, it’s more like returning values for a template.

This approach works for breaking up a large response into smaller ones too, like iterating through a long list of results. Pagination often looks something like this in an RPC system:

cursor = rpc.open_cursor()
output = []
while cursor:
    output.append(cursor.values)
    cursor = rpc.move_cursor(cursor.id)

Or something like this:

start = 0
output = []
while True:
    out = rpc.get_values(start, batch=30)
    output.append(out)
    start += len(out)
    if len(out) < 30:
        break

Threading together a sequence of requests gets even more involved when the service is stateful. Imagine a worker that has to register itself, lock a queue, take tasks from it, and upload the results:

class WorkerApi(Service):
    def register_worker(self, name) -> str:
        ...
    def lock_queue(self, worker_id: str, queue_name: str) -> str:
        ...
    def take_from_queue(self, worker_id: str, queue_name, queue_lock: str):
        ...
    def upload_result(self, worker_id, queue_name, queue_lock, next, result):
        ...
    def unlock_queue(self, worker_id, queue_name, queue_lock):
        ...
    def exit_worker(self, worker_id):
        ...

Unfortunately, the client code looks much nastier:

worker_id = rpc.register_worker(my_name)
lock = rpc.lock_queue(worker_id, queue_name)
while True:
    next = rpc.take_from_queue(worker_id, queue_name, lock)
    if next:
        result = process(next)
        rpc.upload_result(worker_id, queue_name, lock, next, result)
    else:
        break
rpc.unlock_queue(worker_id, queue_name, lock)
rpc.exit_worker(worker_id)

Each method requires a handful of parameters relating to the current session open with the service.  They aren’t strictly necessary—they do make debugging a system far easier—but the problem of having to chain together requests might be a little familiar.

What we’d rather do is use some API where the state between requests is handled for us. The traditional way to achieve this is to build these wrappers by hand, creating special code on the client to assemble the responses.

With REST, we can define a Service that has methods like before, but also contains a little bit of state, and return it from other method calls:

class WorkerApi(Service):
    def register(self, worker_id):
        return Lease(worker_id)

class Lease(Service):
    worker_id: str

    @rpc()
    def lock_queue(self, name):
        ...
        return Queue(self.worker_id, name, lock)

    @rpc()
    def expire(self):
        ...

class Queue(Service):
    name: str
    lock: str
    worker_id: str

    @rpc()
    def get_task(self):
        return Task(..., self.name, self.lock, self.worker_id)

    @rpc()
    def unlock(self):
        ...

class Task(Service):
    task_id: str
    worker_id: str

    @rpc()
    def upload(self, out):
        mark_done(self.task_id, self.actions, out)

Instead of one service, we now have four. Instead of returning identifiers to pass back in, we return a Service with those values filled in for us. As a result, the client code looks a lot nicer—you can even add new parameters in behind the scenes.

lease = rpc.register(my_name)

queue = lease.lock_queue(queue_name)

while True:
    next = queue.get_task()
    if next:
        next.upload(process(next))
    else:
        break
queue.unlock()
lease.expire()

Although the Future looked like a callback, returning a Service feels like returning an object. This is the power of self-description—unlike reflection, where every request that can be made is specified in advance—each response has the opportunity to define a new parameterised request.

It’s this navigation through several linked responses that distinguishes a regular command line tool from one that browses—and where REST gets its name: the passing back and forth of requests from server to client is where the ‘state-transfer’ part of REST comes from, and using a common Result or Cursor object is where the ‘representational’ comes from.

A RESTful system is more than just these parts combined—along with a reusable browser, you have reusable proxies too.

In the same way that messages describe things to the client, they describe things to any middleware between client and server: using GET, POST, and distinct URLs is what allows caches to work across services, and using a stateless protocol (HTTP) is what allows a proxy or load balancer to work so effortlessly.

The trick with REST is that despite HTTP being stateless, and despite HTTP being simple, you can build complex, stateful services by threading the state invisibly between smaller messages—transferring a representation of state back and forth between client and server.

Although the point of REST is to build a browser, the bigger point is to use self-description and state-transfer to allow heavy amounts of interoperation—not just a reusable client, but reusable proxies, caches, or load balancers.

Going back to the constraints (Client-Server, Stateless, Caching, Uniform Interface, Layering and Code-on-Demand), you might be able to see how these things fit together to achieve these goals.

The first, Client-Server, feels a little obvious, but sets the background. A server waits for requests from a client, and issues responses.

The second, Stateless, is a little more confusing. If a HTTP proxy had to keep track of how requests link together, it would involve a lot more memory and processing. The point of the stateless constraint is that to a proxy, each request stands alone. The point is also that any stateful interactions should be handled by linking messages together.

Caching is the third constraint: labelling if a response can be cached (HTTP uses headers on the response), or if a request can be resent (using GET or POST). The fourth constraint, Uniform Interface, is the most difficult, so we’ll cover it last. Layering is the fifth, and it roughly means “you can proxy it”.

Code-on-demand is the final, optional, and most overlooked constraint, but it covers the use of Cursors, Futures, or parameterised Services—the idea that despite using a simple means to describe services or responses, the responses can define new requests to send. Code-on-demand takes that further, and imagines passing back code, rather than templates and values to assemble.

With the other constraints handled, it’s time for uniform interface. Like stateless, this constraint is more about HTTP than it is about the system atop, and frequently misapplied. This is the reason why people keep making database APIs and calling them RESTful, but the constraint has nothing to do with CRUD.

The constraint is broken down into four ideas, and we’ll take them one by one: self-descriptive messages, identification of resources, manipulation of resources through representations, hypermedia as the engine of application state.

Self-Description is at the heart of REST, and this sub-constraint fills in the gaps between the Layering, Caching, and Stateless constraints. Sort-of. It covers using ‘GET’ and ‘POST’ to indicate to a proxy how to handle things, and covers how responses indicate if they can be cached, too. It also means using a content-type header.

The next sub-constraint, identification, means using different URLs for different services. In the RPC examples above, it means having a common, standard way to address a service or method, as well as one with parameters.

This ties into the next sub-constraint, which is about using standard representations across services—this doesn’t mean using special formats for every API request, but using the same underlying language to describe every response. In other words, the web works because everyone uses HTML.

Uniformity so far isn’t too difficult: Use HTTP (self-description), URLs (identification) and HTML (manipulation through representations), but it’s the last sub-constraint that causes most of the headaches: hypermedia as the engine of application state.

This is a fancy way of talking about how large or long requests can be broken up into interlinked messages, or how a number of smaller requests can be threaded together, passing the state from one to the next. Hypermedia refers to using Cursor, Future, or Service objects, application state is the details passed around as hidden arguments, and being the ‘engine’ means using it to tie the whole system together.

Together they form the basis of the Representational State-Transfer Style. More than half of these constraints can be satisfied by just using HTTP, and the other half only really help when you’re implementing a browser, but there are still a few more tricks that you can do with REST.

Although a RESTful system doesn’t have to offer a database-like interface, it can.

Along with Service or Cursor, you could imagine Model or Rows objects to return, but you should expect a little more from a RESTful system than just create, read, update and delete. With REST, you can do things like inlining: along with returning a request to make, a server can embed the result inside. A client can skip the network call and work directly on the inlined response. A server can even make this choice at runtime, opting to embed if the message is small enough.
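
As a sketch of what inlining might look like on the client, extending the earlier fetch loop — the inlined field is an assumption for illustration, not part of any defined protocol:

def fetch(request):
    response = make_api_call(request)
    while response.kind == 'Future':
        if getattr(response, 'inlined', None) is not None:
            # The server embedded the next response, so skip the network round trip.
            response = response.inlined
            continue
        request = make_next_request(response.method_name, response.args)
        response = make_api_call(request)
    return response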

Finally, with a RESTful system, you should be able to offer things in different encodings, depending on what the client asks for—even HTML. In other words, if your framework can do all of these things for you, offering a web interface isn’t too much of a stretch. If you can build a reusable command line tool, generating a web interface isn’t too difficult, and at least this time you don’t have to implement a browser from scratch.
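
For example, picking the representation per request off the Accept header might look like this sketch — a toy renderer, not any particular framework’s API:

import html
import json

def render_response(result, accept_header):
    # Offer the same data in whichever representation the client asked for.
    if "text/html" in accept_header:
        body = "<pre>" + html.escape(json.dumps(result, indent=2)) + "</pre>"
        return body, "text/html"
    return json.dumps(result), "application/json"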

If you now find yourself understanding REST, I’m sorry. You’re now cursed. Like a cross between the Greek myths of Cassandra and Prometheus, you will be forced to explain the ideas over and over again to no avail. The terminology has been utterly destroyed to the point it has less meaning than ‘Agile’.

Even so, the underlying ideas of interoperability, self-description, and interlinked requests are surprisingly useful—you can break up large or slow responses, you can browse or even parameterise services, and you can do it in a way that lets you re-use tools across services too.

Ideally someone else will have done it for you, and like with a web browser, you don’t really care how RESTful it is, but how useful it is. Your framework should handle almost all of this for you, and you shouldn’t have to care about the details.

If anything, REST is about exposing just enough detail—Proxies and load-balancers only care about the URL and GET or POST. The underlying client libraries only have to handle something like HTML, rather than unique and special formats for every service.

REST is fundamentally about letting people use a service without having to know all the details ahead of time, which might be how we got into this mess in the first place.

2019-01-08 16:57

2018-08-16

From Anthony Zuiker, the creator of the huge television franchise CSI: Crime Scene Investigation, comes Mysteryopolis — a gamified narrative app targeted at the kids market.

Mysteryopolis was developed as a bundled exclusive for the then-new Navi tablet — an Android-based tablet designed for kids, featuring a kid-friendly lineup of apps and games.

Mysteryopolis Trailer

I built Mysteryopolis and designed the minigames based on Zuiker’s script, which was originally very large in scope. One of the early challenges on this project was figuring out what we could do within the budget and timeline we had available to us, while still staying true to the creator’s original vision.

Aside from the game design and development, Mysteryopolis also required a content library management system — specifically, we needed to be able to distribute a lightweight game app that allowed users to download episodes (read: very large, very high quality video files) and game content on demand — and delete and reinstall said content in order to manage device resources.

Mysteryopolis received, unsurprisingly, a considerable amount of press upon its release. You can read Fortune magazine’s write up, AdWeek’s take on the app, and VentureBeat’s story on the project.

If you need an app developed, reach out and let’s talk! You can contact me using the contact form on this site, via Skype at stromdotcom or by visiting my company website at https://glowdot.com.

by stromdotcom at 2018-08-16 23:59

2018-08-05

If you ask a programmer for advice—a terrible idea—they might tell you something like the following: Don’t repeat yourself. Programs should do one thing and one thing well. Never rewrite your code from scratch, ever!

Following “Don’t Repeat Yourself” might lead you to a function with four boolean flags, and a matrix of behaviours to carefully navigate when changing the code. Splitting things up into simple units can lead to awkward composition and struggling to coordinate cross cutting changes. Avoiding rewrites means they’re often left so late that they have no chance of succeeding.

The advice isn’t inherently bad—although there is good intent, following it to the letter can create more problems than it promises to solve.

Sometimes the best way to follow an adage is to do the exact opposite: embrace feature switches and constantly rewrite your code, pull things together to make coordination between them easier to manage, and repeat yourself to avoid implementing everything in one function.

This advice is much harder to follow, unfortunately.

Repeat yourself to find abstractions.

“Don’t Repeat Yourself” is almost a truism—if anything, the point of programming is to avoid work.

No-one enjoys writing boilerplate. The more straightforward it is to write, the duller it is to summon into a text editor. People are already tired of writing eight exact copies of the same code before even having to do so. You don’t need to convince programmers not to repeat themselves, but you do need to teach them how and when to avoid it.

“Don’t Repeat Yourself” often gets interpreted as “Don’t Copy Paste” or to avoid repeating code within the codebase, but the best form of avoiding repetition is in avoiding reimplementing what exists elsewhere—and thankfully most of us already do!

Almost every web application leans heavily on an operating system, a database, and a variety of other lumps of code to get the job done. A modern website reuses millions of lines of code without even trying. Unfortunately, programmers love to avoid repetition, and “Don’t Repeat Yourself” turns into “Always Use an Abstraction”.

By an abstraction, I mean two interlinked things: an idea we can think and reason about, and the way in which we model it inside our programming languages. Abstractions are a way of repeating yourself, so that you can change multiple parts of your program in one place. Abstractions allow you to manage cross-cutting changes across your system, or share behaviors within it.

The problem with always using an abstraction is that you’re preemptively guessing which parts of the codebase need to change together. “Don’t Repeat Yourself” will lead to a rigid, tightly coupled mess of code. Repeating yourself is the best way to discover which abstractions, if any, you actually need.

As Sandi Metz put it, “duplication is far cheaper than the wrong abstraction”.

You can’t really write a re-usable abstraction up front. Most successful libraries or frameworks are extracted from a larger working system, rather than being created from scratch. If you haven’t built something useful with your library yet, it is unlikely anyone else will. Code reuse isn’t a good excuse to avoid duplicating code, and writing reusable code inside your project is often a form of preemptive optimization.

When it comes to repeating yourself inside your own project, the point isn’t to be able to reuse code, but rather to make coordinated changes. Use abstractions when you’re sure about coupling things together, rather than for opportunistic or accidental code reuse—it’s ok to repeat yourself to find out when.

Repeat yourself, but don’t repeat other people’s hard work. Repeat yourself: duplicate to find the right abstraction first, then deduplicate to implement it.

With “Don’t Repeat Yourself”, some insist that it isn’t about avoiding duplication of code, but about avoiding duplication of functionality or duplication of responsibility. This is more popularly known as the “Single Responsibility Principle”, and it’s just as easily mishandled.

Gather responsibilities to simplify interactions between them

When it comes to breaking a larger service into smaller pieces, one idea is that each piece should only do one thing within the system—do one thing, and do it well—and the hope is that by following this rule, changes and maintenance become easier.

It works out well in the small: reusing variables for different purposes is an ever-present source of bugs. It’s less successful elsewhere: although one class might do two things in a rather nasty way, disentangling it isn’t of much benefit when you end up with two nasty classes with a far more complex mess of wiring between them.

The only real difference between pushing something together and pulling something apart is that some changes become easier to perform than others.

The choice between a monolith and microservices is another example of this—the choice between developing and deploying a single service, or composing things out of smaller, independently developed services.

The big difference between them is that cross-cutting change is easier in one, and local changes are easier in the other. Which one works best for a team often depends more on environmental factors than on the specific changes being made.

Although a monolith can be painful when new features need to be added and microservices can be painful when co-ordination is required, a monolith can run smoothly with feature flags and short lived branches and microservices work well when deployment is easy and heavily automated.

Even a monolith can be decomposed internally into microservices, albeit in a single repository and deployed as a whole. Everything can be broken into smaller parts—the trick is knowing when it’s an advantage to do so.

Modularity is more than reducing things to their smallest parts.

Invoking the ‘single responsibility principle’, programmers have been known to brutally decompose software into a terrifyingly large number of small interlocking pieces—a craft rarely seen outside of obscenely expensive watches, or bash.

The traditional UNIX command line is a showcase of small components that do exactly one function, and it can be a challenge to discover which one you need and in which way to hold it to get the job done. Piping things into awk '{print $2}' is almost a rite of passage.

Another example of the single responsibility principle is git. Although you can use git checkout to do six different things to the repository, they all use similar operations internally. Despite having singular functionality, components can be used in very different ways.

A layer of small components with no shared features creates a need for a layer above where these features overlap, and if absent, the user will create one, with bash aliases, scripts, or even spreadsheets to copy-paste from.

Even adding this layer might not help you: git already has a notion of user-facing and automation-facing commands, and the UI is still a mess. It’s always easier to add a new flag to an existing command than it is to duplicate it and maintain it in parallel.

Similarly, functions gain boolean flags and classes gain new methods as the needs of the codebase change. In trying to avoid duplication and keep code together, we end up entangling things.

Although components can be created with a single responsibility, over time their responsibilities will change and interact in new and unexpected ways. What a module is currently responsible for within a system does not necessarily correlate to how it will grow.

Modularity is about limiting the options for growth

A given module often gets changed because it is the easiest module to change, rather than the best place for the change to be made. In the end, what defines a module is what pieces of the system it will never be responsible for, rather than what it is currently responsible for.

When a unit has no rules about what code cannot be included, it will eventually contain larger and larger amounts of the system. This is eternally true of every module named ‘util’, and why almost everything in a Model-View-Controller system ends up in the controller.

In theory, Model-View-Controller is about three interlocking units of code. One for the database, another for the UI, and one for the glue between them. In practice, Model-View-Controller resembles a monolith with two distinct subsystems—one for the database code, another for the UI, both nestled inside the controller.

The purpose of MVC isn’t to just keep all the database code in one place, but also to keep it away from frontend code. The data we have and how we want to view it will change over time independent of the frontend code.

Although code reuse is good and smaller components are good, they should be the result of other desired changes. Both are tradeoffs, introducing coupling through a lack of redundancy, or complexity in how things are composed. Decomposing things into smaller parts or unifying them is neither universally good nor bad for the codebase, and largely depends on what changes come afterwards.

In the same way abstraction isn’t about code reuse, but coupling things for change, modularity isn’t about grouping similar things together by function, but working out how to keep things apart and limiting co-ordination across the codebase.

This means recognizing which bits are slightly more entangled than others, knowing which pieces need to talk to each other, which need to share resources, what shares responsibilities, and most importantly, what external constraints are in place and which way they are moving.

In the end, it’s about optimizing for those changes—and this is rarely achieved by aiming for reusable code, as sometimes handling changes means rewriting everything.

Rewrite Everything

Usually, a rewrite is only a practical option when it’s the only option left. Technical debt, or code the seniors wrote that we can’t be rude about, accrues until all change becomes hazardous. It is only when the system is at breaking point that a rewrite is even considered an option.

Sometimes the reasons can be less dramatic: an API is being switched off, a startup has taken a beautiful journey, or there’s a new fashion in town and orders from the top to chase it. Rewrites can happen to appease a programmer too—rewarding good teamwork with a solo project.

The reason rewrites are so risky in practice is that replacing one working system with another is rarely an overnight change. We rarely understand what the previous system did—many of its properties are accidental in nature. Documentation is scarce, tests are ornamental, and interfaces are organic in nature, stubbornly locking behaviors in place.

If migrating to the replacement depends on switching over everything at once, make sure you’ve booked a holiday during the transition, well in advance.

Successful rewrites plan for migration to and from the old system, plan to ease in the existing load, and plan to handle things being in one or both places at once. Both systems are continuously maintained until one of them can be decommissioned. A slow, careful migration is the only option that reliably works on larger systems.

To succeed, you have to start with the hard problems first—often performance related—but it can involve dealing with the most difficult customer, or biggest customer or user of the system too. Rewrites must be driven by triage, reducing the problem in scope into something that can be effectively improved while being guided by the larger problems at hand.

If a replacement isn’t doing something useful after three months, odds are it will never do anything useful.

The longer it takes to run a replacement system in production, the longer it takes to find bugs. Unfortunately, migrations get pushed back in the name of feature development. A new project has the most room for feature bloat—this is known as the second-system effect.

The second system effect is the name of the canonical doomed rewrite, one where numerous features are planned, not enough are implemented, and what has been written rarely works reliably. It’s similar to writing a game engine without a game to guide decisions, or a framework without a product inside. The resulting code is an unconstrained mess that is barely fit for its purpose.

The reason we say “Never Rewrite Code” is that we leave rewrites too late, demand too much, and expect them to work immediately. It’s more important to never rewrite in a hurry than to never rewrite at all.

null is true, everything is permitted

The problem with following advice to the letter is that it rarely works in practice. The problem with following it at all costs is that eventually we cannot afford to do so.

It isn’t “Don’t Repeat Yourself”, but “Some redundancy is healthy, some isn’t”, and using abstractions when you’re sure you want to couple things together.

It isn’t “Each thing has a unique component”, or other variants of the single responsibility principle, but “Decoupling parts into smaller pieces is often worth it if the interfaces are simple between them, and try to keep the fast changing and tricky to implement bits away from each other”.

It’s never “Don’t Rewrite!”, but “Don’t abandon what works”. Build a plan for migration, maintain in parallel, then decommission, eventually. In high-growth situations you can probably put off decommissioning, and possibly even migrations.

When you hear a piece of advice, you need to understand the structure and environment in place that made it true, because they can just as often make it false. Things like “Don’t Repeat Yourself” are about making a tradeoff, usually one that’s good in the small or for beginners to copy at first, but hazardous to invoke without question on larger systems.

In a larger system, it’s much harder to understand the consequences of our design choices—in many cases the consequences are only discovered far, far too late in the process and it is only by throwing more engineers into the pit that there is any hope of completion.

In the end, we call our good decisions ‘clean code’ and our bad decisions ‘technical debt’, despite following the same rules and practices to get there.

2018-08-05 13:02

2018-06-13

Introduction

In the last post we were left with some tests that exercised some very basic functionality of the Deck class. In this post, we will continue to add unit tests and write production code to make those tests pass, until we get a class which is able to produce a randomised deck of 52 cards.

Test Refactoring

You can, and should, refactor your tests where appropriate. For instance, on the last test in the last post, we only asserted that we could get all the cards for a particular suit. What about the other three? With most modern test frameworks, that is very easy.

[Theory]
[InlineData(Suit.Clubs)]
[InlineData(Suit.Diamonds)]
[InlineData(Suit.Hearts)]
[InlineData(Suit.Spades)]
public void Should_BeAbleToSelectSuitOfCardsFromDeck(Suit suit)
{
    var deck = new Deck();

    var cards = deck.Where(x => x.Suit == suit);

    cards.Should().HaveCount(13);
}

More Cards

We are going to want actual cards with values to work with. And for the next test, we can literally copy and paste the previous test to use as a starter.

[Theory]
[InlineData(Suit.Clubs)]
[InlineData(Suit.Diamonds)]
[InlineData(Suit.Hearts)]
[InlineData(Suit.Spades)]
public void Should_BuildAllCardsInDeck(Suit suit)
{
    var deck = new Deck();

    var cards = deck.Where(x => x.Suit == suit);

    cards.Should().Contain(new List<Card> 
    { 
        new Card(suit, "A"), new Card(suit, "2"), new Card(suit, "3"), new Card(suit, "4"),
        new Card(suit, "5"), new Card(suit, "6"), new Card(suit, "7"), new Card(suit, "8"),
        new Card(suit, "9"), new Card(suit, "10"), new Card(suit, "J"), new Card(suit, "Q"),
        new Card(suit, "K")
    });
}

Now that I’ve written this, when I compare it to the previous one, it’s testing the exact same thing, in slightly more detail. So we can delete the previous test, it’s just noise.

The test is currently failing because it can’t compile, due to there not being a constructor which takes a string. Let’s fix that.

public struct Card
{
    private Suit _suit;
    private string _value;

    public Card(Suit suit, string value)
    {
        _suit = suit;
        _value = value;
    }

    public Suit Suit { get { return _suit; } }
    public string Value { get { return _value; } }

    public override string ToString()
    {
        return $"{Suit}";
    }
}

There are a couple of changes to this class. Firstly, I added the constructor, and private variables which hold the two defining variables, with properties with only public getters. I changed it from being a class to being a struct, and it’s now an immutable value type, which makes sense. In a deck of cards, there can, for example, only be one Ace of Spades.

These changes mean that our tests don’t work, as the Deck class is now broken, because the code which builds a set of thirteen cards for a given suit is broken - it now doesn’t understand the Card constructor, or the fact that the .Suit property is now read-only.

Here is my first attempt at fixing the code, which I don’t currently think is all that bad:

private string _ranks = "A23456789XJQK";

private List<Card> BuildSuit(Suit suit)
{
    var cards = new List<Card>(_suitSize);

    for (var i = 1; i <= _suitSize; i++)
    {
        var rank = _ranks[i-1].ToString();
        var card = new Card(suit, rank);
        cards.Add(card);
    }

    return cards;
}

This now builds us four suits of thirteen cards. I realised as I was writing the production code that handling “10” as a two-character value wouldn’t be as straightforward, so I opted for the simpler (and common) approach of using “X” to represent “10”, and updated the expected values in the test to match. The test passes four times, once for each suit. This is probably unnecessary, but it protects us in future from inadvertently adding any code which may affect the way that cards are generated for a particular suit.

Every day I’m (randomly) shuffling

It’s occurred to me as I write this that the Deck class is functionally complete, as it produces a deck of 52 cards when it is instantiated. You will however recall that we want a randomly shuffled deck of cards. If we consider, and invoke, the Single Responsibility Principle, then we should add a Dealer class; we are modeling a real world event, and a pack of cards cannot shuffle itself - that’s what the dealer does.

Conclusion

In this post I’ve completed the walkthrough of developing a class to create a deck of 52 cards using some basic TDD techniques. I realised that adding the ability to shuffle the pack to the Deck class would be a violation of SRP, as the Deck class should not be concerned with, or have any knowledge of, how it is shuffled. In the next post I will discuss how we can implement a Dealer class, and illustrate some techniques for swapping the randomisation algorithm around.

2018-06-13 00:00

2018-05-14

Debuggable code is code that doesn’t outsmart you. Some code is a little harder to debug than others: code with hidden behaviour, poor error handling, ambiguity, too little or too much structure, or code that’s in the middle of being changed. On a large enough project, you’ll eventually bump into code that you don’t understand.

On an old enough project, you’ll discover code you forgot about writing—and if it wasn’t for the commit logs, you’d swear it was someone else. As a project grows in size it becomes harder to remember what each piece of code does, harder still when the code doesn’t do what it is supposed to. When it comes to changing code you don’t understand, you’re forced to learn about it the hard way: Debugging.

Writing code that’s easy to debug begins with realising you won’t remember anything about the code later.

Rule 0: Good code has obvious faults.

Many a methodology salesman has argued that the way to write understandable code is to write clean code. The problem is that “clean” is highly contextual in meaning. Clean code can be hardcoded into a system, and sometimes a dirty hack can be written in a way that’s easy to turn off. Sometimes the code is clean because the filth has been pushed elsewhere. Good code isn’t necessarily clean code.

Code being clean or dirty is more about how much pride, or embarrassment, the developer takes in the code, rather than how easy it has been to maintain or change. Instead of clean, we want boring code where change is obvious— I’ve found it easier to get people to contribute to a code base when the low-hanging fruit has been left around for others to collect. The best code might be anything you can look at and quickly learn things about.

  • Code that doesn’t try to make an ugly problem look good, or a boring problem look interesting.
  • Code where the faults are obvious and the behaviour is clear, rather than code with no obvious faults and subtle behaviours.
  • Code that documents where it falls short of perfect, rather than aiming to be perfect.
  • Code with behaviour so obvious that any developer can imagine countless different ways to go about changing it.

Sometimes, code is just nasty as fuck, and any attempt to clean it up leaves you in a worse state. Writing clean code without understanding the consequences of your actions might as well be a summoning ritual for maintainable code.

It is not to say that clean code is bad, but sometimes the practice of clean coding is more akin to sweeping problems under the rug. Debuggable code isn’t necessarily clean, and code that’s littered with checks or error handling rarely makes for pleasant reading.

Rule 1: The computer is always on fire.

The computer is on fire, and the program crashed the last time it ran.

The first thing a program should do is ensure that it is starting out from a known, good, safe state before trying to get any work done. Sometimes there isn’t a clean copy of the state because the user deleted it, or upgraded their computer. The program crashed the last time it ran and, rather paradoxically, the program is being run for the first time too.

For example, when reading and writing program state to a file, a number of problems can happen:

  • The file is missing
  • The file is corrupt
  • The file is an older version, or a newer one
  • The last change to the file is unfinished
  • The filesystem was lying to you

These are not new problems, and databases have been dealing with them since the dawn of time (1970-01-01). Using something like SQLite will handle many of these problems for you, but if the program crashed the last time it ran, the code might be run with the wrong data, or in the wrong way too.
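
As a small illustration of the “last change is unfinished” case, one common defence is to never overwrite the state file in place: write a temporary file, flush it, then atomically swap it in. A minimal sketch in C# (the paths and API choices here are mine, not a prescription):

using System.IO;

static class StateFile
{
    // Write the new state beside the old file, flush it to disk, then
    // atomically swap it into place. A crash mid-write leaves either the
    // old file or the new one, never a half-written mixture.
    public static void Save(string path, byte[] state)
    {
        var tmp = path + ".tmp";

        using (var stream = new FileStream(tmp, FileMode.Create, FileAccess.Write))
        {
            stream.Write(state, 0, state.Length);
            stream.Flush(true); // flush through to disk, not just the OS cache
        }

        if (File.Exists(path))
            File.Replace(tmp, path, path + ".bak"); // atomic on the same volume
        else
            File.Move(tmp, path);
    }
}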

With scheduled programs, for example, you can guarantee that the following accidents will occur:

  • It gets run twice in the same hour because of daylight savings time.
  • It gets run twice because an operator forgot it had already been run.
  • It will miss an hour, due to the machine running out of disk, or mysterious cloud networking issues.
  • It will take longer than an hour to run and may delay subsequent invocations of the program.
  • It will be run with the wrong time of day
  • It will inevitably be run close to a boundary, like midnight, end of month, end of year and fail due to arithmetic error.

Writing robust software begins with writing software that assumes it crashed the last time it ran, and that crashes whenever it doesn’t know the right thing to do. The best thing about throwing an exception, rather than leaving a comment like “This Shouldn’t Happen”, is that when it inevitably does happen, you get a head start on debugging your code.

You don’t have to be able to recover from these problems either—it’s enough to let the program give up and not make things any worse. Small checks that raise an exception can save weeks of tracing through logs, and a simple lock file can save hours of restoring from backup.

Code that’s easy to debug is code that checks to see if things are correct before doing what was asked of it, code that makes it easy to go back to a known good state and try again, and code that has layers of defence to force errors to surface as early as possible.

Rule 2: Your program is at war with itself.

Google’s biggest DoS attacks come from ourselves—because we have really big systems—although every now and then someone will show up and try to give us a run for our money, but really we’re more capable of hammering ourselves into the ground than anybody else is.

This is true for all systems.

Astrid Atkinson, Engineering for the Long Game

The software always crashed the last time it ran, and now it is always out of cpu, out of memory, and out of disk too. All of the workers are hammering an empty queue, everyone is retrying a failed request that’s long expired, and all of the servers have paused for garbage collection at the same time. Not only is the system broken, it is constantly trying to break itself.

Even checking if the system is actually running can be quite difficult.

It can be quite easy to implement something that checks if the server is running, but not if it is handling requests. Unless you check the uptime, it is possible that the program is crashing in between every check. Health checks can trigger bugs too: I have managed to write health checks that crashed the systems they were meant to protect. On two separate occasions, three months apart.
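
One way to make the check mean something is to probe the real request path rather than the process table. A rough sketch (the endpoint, port and timeout here are invented for illustration):

using System;
using System.Net.Http;
using System.Threading.Tasks;

static class HealthProbe
{
    // Returns true only if the service answered a real request in time.
    public static async Task<bool> IsHandlingRequests()
    {
        using var client = new HttpClient { Timeout = TimeSpan.FromSeconds(2) };
        try
        {
            // Hit an endpoint that exercises the normal stack (routing,
            // auth, a cheap read), not a handler that always returns "OK".
            var response = await client.GetAsync("http://localhost:8080/health/deep");
            return response.IsSuccessStatusCode;
        }
        catch (HttpRequestException)
        {
            return false;
        }
        catch (TaskCanceledException)
        {
            return false; // timed out
        }
    }
}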

In software, writing code to handle errors will inevitably lead to discovering more errors to handle, many of them caused by the error handling itself. Similarly, performance optimisations can often be the cause of bottlenecks in the system—making an app that’s pleasant to use in one tab can make an app that’s painful to use when you have twenty copies of it running.

Another example is where a worker in a pipeline is running too fast, and exhausting the available memory before the next part has a chance to catch up. If you’d rather a car metaphor: traffic jams. Speeding up is what creates them, and can be seen in the way the congestion moves back through the traffic. Optimisations can create systems that fail under high or heavy load, often in mysterious ways.

In other words: the faster you make it, the harder it will be pushed, and if you don’t allow your system to push back even a little, don’t be surprised if it snaps.

Back-pressure is one form of feedback within a system, and a program that is easy to debug is one where the user is involved in the feedback loop, having insight into all behaviours of a system, the accidental, the intentional, the desired, and the unwanted too. Debuggable code is easy to inspect, where you can watch and understand the changes happening within.

Rule 3: What you don’t disambiguate now, you debug later.

In other words: it should not be hard to look at the variables in your program and work out what is happening. Give or take some terrifying linear algebra subroutines, you should strive to represent your program’s state as obviously as possible. This means things like not changing your mind about what a variable does halfway through a program; if there is one obvious cardinal sin, it is using a single variable for two different purposes.

It also means carefully avoiding the semi-predicate problem, never using a single value (count) to represent a pair of values (boolean, count). Avoiding things like returning a positive number for a result, and returning -1 when nothing matches. The reason is that it’s easy to end up in the situation where you want something like "0, but true" (and notably, Perl 5 has this exact feature), or you create code that’s hard to compose with other parts of your system (-1 might be a valid input for the next part of the program, rather than an error).
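
For example (hypothetical helpers, sketched in C#), the sentinel version and the disambiguated version look like this:

// The -1 sentinel folds "was anything found?" into the same int as the
// answer, and -1 may be a perfectly legal value for the next step.
static int IndexOfSentinel(int[] items, int value)
{
    for (var i = 0; i < items.Length; i++)
        if (items[i] == value) return i;
    return -1;
}

// Keeping the boolean and the index separate removes the ambiguity.
static bool TryIndexOf(int[] items, int value, out int index)
{
    for (index = 0; index < items.Length; index++)
        if (items[index] == value) return true;

    index = 0; // not meaningful when the method returns false
    return false;
}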

Along with using a single variable for two purposes, it can be just as bad to use a pair of variables for a single purpose—especially if they are booleans. I don’t mean keeping a pair of numbers to store a range is bad, but using a number of booleans to indicate what state your program is in is often a state machine in disguise.

When state doesn’t flow from top to bottom, give or take the occasional loop, it’s best to give the state a variable of its own and clean the logic up. If you have a set of booleans inside an object, replace it with a variable called state and use an enum (or a string if it’s persisted somewhere). The if statements end up looking like if state == name and stop looking like if bad_name && !alternate_option.
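
A small illustration (the names are invented) of what that refactoring looks like:

// Before: a state machine in disguise.
//   bool isConnecting; bool isConnected; bool isClosing;
//   if (isConnected && !isClosing) { ... }

// After: the states are named, and invalid combinations cannot exist.
enum ConnectionState { Connecting, Connected, Closing, Closed }

class Connection
{
    public ConnectionState State { get; private set; } = ConnectionState.Connecting;

    public void Close()
    {
        if (State == ConnectionState.Connected)   // reads as "if state == name"
        {
            State = ConnectionState.Closing;
        }
    }
}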

Even when you do make the state machine explicit, you can still mess up: sometimes code has two state machines hidden inside. I had great difficulty writing an HTTP proxy until I had made each state machine explicit, tracing connection state and parsing state separately. When you merge two state machines into one, it can be hard to add new states, or know exactly what state something is meant to be in.

This is far more about creating things you won’t have to debug, than making things easy to debug. By working out the list of valid states, it’s far easier to reject the invalid ones outright, rather than accidentally letting one or two through.

Rule 4: Accidental Behaviour is Expected Behaviour.

When you’re less than clear about what a data structure does, users fill in the gaps—any behaviour of your code, intended or accidental, will eventually be relied upon somewhere else. Many mainstream programming languages had hash tables you could iterate through, which sort-of preserved insertion order, most of the time.

Some languages chose to make the hash table behave as many users expected them to, iterating through the keys in the order they were added, but others chose to make the hash table return keys in a different order, each time it was iterated through. In the latter case, some users then complained that the behaviour wasn’t random enough.

Tragically, any source of randomness in your program will eventually be used for statistical simulation purposes, or worse, cryptography, and any source of ordering will be used for sorting instead.

In a database, some identifiers carry a little bit more information than others. When creating a table, a developer can choose between different types of primary key. The correct answer is a UUID, or something that’s indistinguishable from a UUID. The problem with the other choices is that they can expose ordering information as well as identity, i.e. not just if a == b but if a <= b (and by “other choices” I mean auto-incrementing keys).

With an auto-incrementing key, the database assigns a number to each row in the table, adding 1 when a new row is inserted. This creates an ambiguity of sorts: people do not know which part of the data is canonical. In other words: do you sort by key, or by timestamp? Like with the hash tables before, people will decide the right answer for themselves. The other problem is that users can easily guess the keys of other records nearby, too.

Ultimately any attempt to be smarter than a UUID will backfire: we already tried with postcodes, telephone numbers, and IP addresses, and we failed miserably each time. UUIDs might not make your code more debuggable, but less accidental behaviour tends to mean fewer accidents.

Ordering is not the only piece of information people will extract from a key: If you create database keys that are constructed from the other fields, then people will throw away the data and reconstruct it from the key instead. Now you have two problems: when a program’s state is kept in more than one place, it is all too easy for the copies to start disagreeing with each other. It’s even harder to keep them in sync if you aren’t sure which one you need to change, or which one you have changed.

Whatever you permit your users to do, they’ll implement. Writing debuggable code is thinking ahead about the ways in which it can be misused, and how other people might interact with it in general.

Rule 5: Debugging is social, before it is technical.

When a software project is split over multiple components and systems, it can be considerably harder to find bugs. Once you understand how the problem occurs, you might have to co-ordinate changes across several parts in order to fix the behaviour. Fixing bugs in a larger project is less about finding the bugs, and more about convincing the other people that they’re real, or even that a fix is possible.

Bugs stick around in software because no one is entirely sure who is responsible for things. In other words, it’s harder to debug code when nothing is written down, everything must be asked in Slack, and nothing gets answered until the one person who knows logs on.

Planning, tools, process, and documentation are the ways we can fix this.

Planning is how we remove the stress of being on call, with structures in place to manage incidents. Plans are how we keep customers informed, switch out people when they’ve been on call too long, and how we track problems and introduce changes to reduce future risk. Tools are the way in which we deskill work and make it accessible to others. Process is the way in which we can remove control from the individual and give it to the team.

The people will change, the interactions too, but the processes and tools will be carried on as the team mutates over time. It isn’t so much valuing one more than the other, but building one to support changes in the other. Process can also be used to remove control from the team, so it isn’t always good or bad; but there is always some process at work, even when it isn’t written down, and the act of documenting it is the first step to letting other people change it.

Documentation means more than text files: documentation is how you handover responsibilities, how you bring new people up to speed, and how you communicate what’s changed to the people impacted by those changes. Writing documentation requires more empathy than writing code, and more skill too: there aren’t easy compiler flags or type checkers, and it’s easy to write a lot of words without documenting anything.

Without documentation, how can you expect people to make informed decisions, or even consent to the consequences of using the software? Without documentation, tools, or processes you cannot share the burden of maintenance, or even replace the people currently lumbered with the task.

Making things easy to debug applies just as much to the processes around code as the code itself, making it clear whose toes you will have to stand on to fix the code.

Code that’s easy to debug is easy to explain.

A common occurrence when debugging is realising the problem when explaining it to someone else. The other person doesn’t even have to exist but you do have to force yourself to start from scratch, explain the situation, the problem, the steps to reproduce it, and often that framing is enough to give us insight into the answer.

If only. Sometimes when we ask for help, we don’t ask for the right help, and I’m as guilty of this as anyone—it’s such a common affliction that it has a name: “The X-Y Problem”: How do I get the last three letters of a filename? Oh? No, I meant the file extension.

We talk about problems in terms of the solutions we understand, and we talk about the solutions in terms of the consequences we’re aware of. Debugging is learning the hard way about unexpected consequences and alternative solutions, and it involves one of the hardest things a programmer can ever do: admit that they got something wrong.

It wasn’t a compiler bug, after all.

2018-05-14 04:30

2018-04-20

After building several games for my corporate clients, I’m pleased to have had the opportunity and the funding to develop something purely designed to be fun: Playground Wars!

Playground Wars is a side-scrolling tower defense style game built in Unity for iOS, Android and Mac, with available ports to Windows and the web.

I could write all day about this game, or I could just show you some video of the game in action:

Playground Wars menus and introductory scenes
Deeper game play later in the game demonstrating weather effects and more complex enemies and traps.

Developing Playground Wars was an absolute blast, and it was also the largest team I’ve ever managed — from multiple game artists, sound designers, UI designers, musicians, voice actors and more — Playground Wars was truly a labor of love.

Although the game is no longer available on iOS due to Apple’s 64-bit requirement (we opted not to invest in updating the app for the iOS platform), it is still available for the Mac in the Mac app store, as well as Android.

If you need an app developed, reach out and let’s talk! You can contact me using the contact form on this site, via Skype at stromdotcom or by visiting my company website at https://glowdot.com.

by stromdotcom at 2018-04-20 03:25

2018-04-01

I’m super pleased to announce the release of the second game based on my open source time management game platform for the Unity game engine: Sonos on Tour!

Sonos on Tour was conceived and developed by me at the request of Sonos, as an internal game designed to be rolled out to sales associates at big box stores like Target, Best Buy, Fry’s, and so on.

Sonos’ challenge was educating retail associates on the value of Sonos’ products, in order to empower them to convey that value to customers. To that end, I developed a time management game in which the player assumes the role of a floor salesperson selling Sonos products to an ever increasing stream of varied customers. Users must serve customers in the most efficient order possible in order to maximize the day’s sales, and can use their earnings to upgrade their store, hire employees, unlock perks, and more.

Sonos on Tour was designed for iOS, Android and the web, and was distributed privately through an enterprise distribution channel, and only made available to select retail associates in partnership with Sonos.

If you need an app developed, reach out and let’s talk! You can contact me using the contact form on this site, via Skype at stromdotcom or by visiting my company website at https://glowdot.com.

by stromdotcom at 2018-04-01 02:21

2018-03-30

Convection Texture Tools is now roughly equal quality-wise with NVTT at compressing BC7 textures despite being about 140 times faster, making it one of the fastest and highest-quality BC7 compressors.

How this was accomplished turned out to be simpler than expected.  Recall that Squish became the gold standard of S3TC compressors by implementing a "cluster fit" algorithm that ordered all of the input colors on a line and tried every possible grouping of them to least-squares fit them.

Unfortunately, using this technique isn't practical in BC7 because the number of orderings has rather extreme scaling characteristics.  While 2-bit indices have a few hundred possible orderings, 4-bit indices have millions, most BC7 mode indices are 3 bits, and some have 4.

With that option gone, most BC7 compressors until now have tried to solve endpoints using various types of endpoint perturbation, which tends to require a lot of iterations.

Convection just uses 2 rounds of K-means clustering and a much simpler technique based on a guess about why Squish's cluster fit algorithm is actually useful: It can create endpoint mappings that don't use some of the terminal ends of the endpoint line, causing the endpoint to be extrapolated out, possibly to a point that loses less accuracy to quantization.

Convection just tries cutting off 1 index at each end, then 1 index at both ends.  That turned out to be enough to place it near the top of the quality benchmarks.
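
To make the trimming idea concrete, here is a rough single-channel sketch (my illustration, not Convection's actual code): assign each value an index within a possibly narrowed index range, then least-squares solve for the endpoints that best reproduce the values through those indices. Trying (skipLow, skipHigh) of (0,0), (1,0), (0,1) and (1,1) and keeping the lowest-error result corresponds to cutting off 1 index at each end, then 1 index at both ends.

using System;
using System.Linq;

static class EndpointFit
{
    // values: one channel of the block's pixels.  indexLevels: e.g. 8 for 3-bit indices.
    // skipLow/skipHigh: how many index values at each end we refuse to use.
    public static (float e0, float e1) Fit(float[] values, int indexLevels, int skipLow, int skipHigh)
    {
        int lo = skipLow, hi = indexLevels - 1 - skipHigh;
        float min = values.Min(), max = values.Max();
        float range = Math.Max(max - min, 1e-6f);

        // Accumulate the normal equations for least-squares endpoints, given
        // each value's interpolation weight w = index / (indexLevels - 1).
        float a = 0, b = 0, c = 0, d0 = 0, d1 = 0;
        foreach (var v in values)
        {
            int index = lo + (int)Math.Round((v - min) / range * (hi - lo));
            float w = index / (float)(indexLevels - 1);
            a += (1 - w) * (1 - w); b += (1 - w) * w; c += w * w;
            d0 += (1 - w) * v;      d1 += w * v;
        }

        float det = a * c - b * b;
        if (Math.Abs(det) < 1e-9f) return (min, max);

        // Because trimmed index ranges keep w away from 0 and 1, the solved
        // endpoints extrapolate past the extreme pixel values.
        return ((c * d0 - b * d1) / det, (a * d1 - b * d0) / det);
    }
}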

Now I just need to add color weighting and alpha weighting and it'll be time to move on to other formats.

by OneEightHundred (noreply@blogger.com) at 2018-03-30 05:26

2018-03-18

I’m proud to announce the release of Accenture Sky Journey for iOS and Android. Sky Journey is Accenture’s first ever mobile game, and in addition to developing both the iOS and Android versions of the game, I also helped Accenture develop their internal corporate guidelines for products like Sky Journey — specifically, guidelines for adapting Accenture corporate branding regulations to more casual, lighthearted digital products — in this case, a Diner Dash style time management game.

Taking a step back, Sky Journey was also the first game built on my open source time management game platform — a rather massive code framework designed to power time management games built in the Unity game engine.

Understandably, Sky Journey received a great deal of press and attention upon its release — just one example is this writeup of the game in the Guardian.

In March 2018, LinkedIn published their list of the top companies to work for. At #37 was Accenture, and Sky Journey was listed as one of the reasons it’s a great company:

Game on: Accenture developed a 25-level video game app, Sky Journey, in which players run an airport using real business solutions developed by the firm.

Daniel Roth, LinkedIn Editor in Chief, 'LinkedIn Top Companies 2018: Where the U.S. wants to work now'

If you need an app developed, reach out and let’s talk! You can contact me using the contact form on this site, via Skype at stromdotcom or by visiting my company website at https://glowdot.com.

by stromdotcom at 2018-03-18 01:45

2018-02-04

I once ate 10 mg LSD by accident. It was a dilution error. The peak lasted ~10 hr. At some point, I saw the top of my head. But hey, maybe it was just an hallucination ;)

maybe ;)

by Factor Mystic at 2018-02-04 17:30

2018-02-03

I know that last time I said I was going to start soldering, but I really wanted to play with the networking capabilities of the ESP8266 first.

There’s a bunch of example programs to do web requests, such as BasicHttpClient, HTTPSRequest, WifiClient, StreamHttpClient, etc. I had trouble getting these working because there’s no built in certificate store or TLS validation capabilities. That means you can’t do a “normal” HTTPS request to test services like RequestBin (since they’re HTTPS only). The example programs have you type in the server’s SHA1 thumbprint, but that didn’t seem to work for me. The certs I inspected were SHA256, which I assume is the problem.

Anyway, I’m not interested in doing HTTP in any of my project ideas right now, so I moved on to what I actually want to do, which is MQTT. Once again it was Hack-a-day that clued me in to this protocol, which is very popular for small devices & home automation. I started out looking for a “simplest possible MQTT example for ESP8266” and didn’t find anything simple enough initially. Later I realized that there are two great places to start looking for libraries & examples. First is the esp8266/Arduino repo on Github, which has a list of miscellaneous third party projects compatible with this specific chip. Second is in the Arduino IDE itself; the Library Manager is searchable:

Arduino Library Manager – Searching for “MQTT”

The problem here is working out which (if any) are actually good, useful, or correct for the ESP8266. The first search result in that screenshot is only for the “Arduino Uno Wifi Developer Edition”, for example.

Another challenge here is working through all the company branding. The second library listed, “Adafruit MQTT Library”, is ESP8266 compatible and comes with a “simple” example program to get started. However, it’s oriented around the Adafruit IO IoT web service (which is apparently a thing). I did get it to work with the local MQTT broker I’m running here on my PC, but I had to guess a bit and try to peel away their extra stuff just to get to the bones.

The ESP8266 Github linked to lmroy/pubsubclient, which itself is a fork of knolleary/pubsubclient which seems more up to date. I don’t know why they’re linking to an out of date fork, except that it appears to more easily support “large” messages. The original has a default max packet size of 128 bytes, which might be too small for real apps, but I’m looking for a simple example so it should be fine.

Here’s a link to the example program from that repo: https://github.com/knolleary/pubsubclient/blob/master/examples/mqtt_esp8266/mqtt_esp8266.ino

The example is easy to set up; just punch in your Wifi access info & MQTT broker host name. Interestingly it does apparently support DNS… from working with the HTTP examples earlier, some of them used IP addresses rather than host names, so it wasn’t clear if DNS was supported in some of these network libraries. This one does, apparently.

Here’s what the output looks like. I’m running mosquitto 1.4.8 on my PC, with mosquitto_sub running in the lower panel subscribed to # (a wildcard, so all topic messages are shown).

Basic MQTT Example on the ESP8266

Actual footage of me as this program was running

I thought it would be fun to give the messages a little personality, so I found a list of all the voice lines from the turrets in Portal 2, copied them into a giant array, and now instead of “Hello World #37” it’ll send something like “So what am I, uh, supposed to do here?” or “Well, I tried. Best of Luck!” once every few seconds.

Additionally, I made it so that the power LED blinks as it’s sending a message, as a little visual chirp to let you know it’s alive.



The code is here, and this version is a modification of the Adafruit MQTT example, rather than the other library linked above, because I wrote it before I discovered that simpler example. (Found the list of voice lines here, and removed a few dupes).

by Factor Mystic at 2018-02-03 19:03

2018-01-30

I thought it might be fun to play around with programmable microcontrollers, so I bought some to play with. One of the most popular chips right now is the ESP8266 which I first saw pop up on Hack-a-day in 2014. I had to search backwards through 47 pages of blog posts that have been made in the meantime — that might give you a sense of its popularity.

I played around with PIC16/PIC18s in the early 2000s but never actually made anything, but the interest has been there. It’s also been my long time desire to create an E-ink weather display (once again thanks to an old Hack-a-day post, this one from 2012. That’s how far behind on projects I am). Recently I noticed some inexpensive E-ink development boards on Aliexpress and decided to jump into a less shallow end of the pool. I had also been following the ESP8266 for Arduino repo on Github, so I vaguely knew where to begin.

WeMos D1 ESP8266

This evening I received the hardware (specifically, three of these) and decided to see if I could get a basic program deployed, just to get started. I don’t really know what I’m doing but I’m pretty good at reading & following directions (that counts for a lot in life).

The basics are:

1. Follow the “Installing with Boards Manager” directions here: https://github.com/esp8266/Arduino/ (which includes grabbing the latest Arduino IDE software, then pulling in the ESP8266 chip configuration, which includes the WeMos D1 mini board configuration).

2. From WeMos’ website, grab the driver for the on-board USB/programmer chip, which for me was the CH340G (https://wiki.wemos.cc/tutorials:get_started:get_started_in_arduino). Nothing more “exciting” than installing Chinese device driver software!

3. I got a little tripped up here, but later figured it out: when you plug one of the D1 boards into your PC via USB, it’ll show up as a COM Port in Device Manager. You have to pick that same COM Port in the Arduino IDE or it can’t find your board to deploy to. This was a little confusing because it won’t show up until you’re plugged in.

Picking the right COM port in the Arduino IDE

4. The Arduino IDE comes with the ability to load in example programs from the board configuration, loaded in Step 1. I wanted the simplest possible thing to make sure everything was working, so picked the “Hello World” of microcontroller programs: “Blink”, which toggles the power LED in an infinite loop.

So far, so good! All told, this took about an hour from opening the package to getting the light to blink (which includes scrounging around for a good USB cable and trying to get an in-focus picture).

As you can see, I didn’t even bother to solder on the headers yet. I will do that, but I think next I will look into getting some wifi code up and running.

by Factor Mystic at 2018-01-30 02:36

2018-01-01

I’m proud to announce the launch of Caremob for iOS.

Caremob is the first ever real-time global movements app, allowing users to react to current events in any of six ways: protest, support, empathy, peace, celebration and mourning.  Users can leverage Caremob to spread the word about movements they care about, and lend support to existing movements via a novel one-touch time mechanic, allowing virtual mobs to grow in size and gain visibility.

On Caremob, I developed the iOS client app, as well as the backend system that enabled users to link up in unison on a global map. I also brought my vast UI/UX experience and designed the interface for this incredibly original functionality — the first of its kind.

In addition to developing a scalable, very unique social platform, I developed the algorithms that formed the basis of Caremob’s patent pending technology.

Download Caremob for iOS.

If you need an app developed, reach out and let’s talk! You can contact me using the contact form on this site, via Skype at stromdotcom or by visiting my company website at https://glowdot.com.

by stromdotcom at 2018-01-01 01:26

2017-12-04

Psychological Safety in Operation Teams:

Think of a team you work with closely. How strongly do you agree with these five statements?

  1. If I take a chance and screw up, it will be held against me.
  2. Our team has a strong sense of culture that can be hard for new people to join.
  3. My team is slow to offer help to people who are struggling.
  4. Using my unique skills and talents comes second to the objectives of the team.
  5. It’s uncomfortable to have open, honest conversations about our team’s sensitive issues.

Teams that score high on questions like these can be deemed to be “unsafe.”

2017-12-04 00:24

2017-11-28

Introduction

In the previous post in this series, we had finished up with a very basic unit test, which didn’t really test much, which we had run using dotnet xunit in a console, and saw some lovely output.

We’ll continue to write some more unit tests to try and understand what kind of API we need in a class (or classes) which can help us satisfy the first rule of our Freecell engine implementation. As a reminder, our first rule is: There is one standard deck of cards, shuffled.

I’m trying to write both the code and the blog posts as I go along, so I have no idea what the final code will look like when I’ve finished. This means I’ll probably make mistakes and make some poor design decisions, but the whole point of TDD is that you can get a feel for that as you go along, because the tests will tell you.

Don’t try to TDD without some sort of plan

Whilst we obey the 3 Laws of TDD, that doesn’t mean that we can’t or shouldn’t doodle a design and some notes on a whiteboard or a notebook about the way our API could look. I always find that having some idea of where you want to go and what you want to achieve aids the TDD process, because then the unit tests should kick in and you’ll get a feel for whether things are going well or the conceptual design you had is not working.

With that in mind, we know that we will want to define a Card object, and that there are going to be four suits of cards, so that gives us a hint that we’ll need an enum to define them. Unless we want to play the same game of Freecell over and over again, we’ll need to randomly generate the cards in the deck. We also know that we will need to iterate over the deck when it comes to building the Cascades, but the Deck should not be concerned with that.

With that in mind, we can start writing some more tests.

To a functioning Deck class

First things first, I think that I really like the idea of having the Deck class enumerable, so I’ll start with testing that.

[Fact]
public void Should_BeAbleToEnumerateCards()
{
    foreach (var card in new Deck())
    {
    }
}

This is enough to make the test fail, because the Deck class doesn’t yet have a public definition for GetEnumerator, but it gives us a feel for how the class is going to be used. To make the test pass, we can do the simplest thing to make the compiler happy, and give the Deck class a GetEnumerator definition.

public IEnumerator<object> GetEnumerator()
{
    return Enumerable.Empty<object>().GetEnumerator();
}

I’m using the generic type of object in the method, because I haven’t yet decided on what that type is going to be, because to do so would violate the three rules of TDD, and it hasn’t yet been necessary.

Now that we can enumerate the Deck class, we can start making things a little more interesting. Given that it is a deck of cards, it should be reasonable to expect that we can select a suit of cards from the deck and get a collection which has 13 cards in it. Remember, we only need to write as much of this next test as is sufficient to get the test to fail.

[Fact]
public void Should_BeAbleToSelectSuitOfCardsFromDeck()
{
    var deck = new Deck();

    var hearts = deck.Where();
}

It turns out we can’t even get to the point in the test of asserting something because we get a compiler failure. The compiler can’t find a method or extension method for Where. But, the previous test where we enumerate the Deck in a foreach passes. Well, we only wrote as much code to make that test pass as we needed to, and that only involved adding the GetEnumerator method to the class. We need to write more code to get this current test to pass, such that we can keep the previous test passing too.

This is easy to do by implementing IEnumerable<> on the Deck class:

public class Deck : IEnumerable<object>
{
    public IEnumerator<object> GetEnumerator()
    {
        foreach (var card in _cards)
        {
            yield return card;
        }
    }

    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}

I’ve cut some of the other code out of the class so that you can see just the detail of the implementation. The second, explicitly implemented IEnumerable.GetEnumerator is there because IEnumerable<> inherits from it, so it must be implemented, but as you can see, we can just forward to the generically implemented method. With that done, we can now add using System.Linq; to the test class so that we can use the Where method.

var deck = new Deck();

var hearts = deck.Where(x => x.Suit == Suit.Hearts);

This is where the implementation is going to start getting a little more complicated than the actual tests. Obviously, in order to make the test pass, we need to add an actual Card class and give it a property which we can use to select the correct suit of cards.

public enum Suit
{
    Clubs,
    Diamonds,
    Hearts,
    Spades
}

public class Card
{
    public Suit Suit { get; set; }
}

After writing this, we can then change the enumerable implementation in the Deck class to public class Deck : IEnumerable<Card>, and the test will now compile. Now we can actually assert the intent of the test:

[Fact]
public void Should_BeAbleToSelectSuitOfCardsFromDeck()
{
    var deck = new Deck();

    var hearts = deck.Where(x => x.Suit == Suit.Hearts);

    hearts.Should().HaveCount(13);
}

Conclusion

In this post, I talked through several iterations of the TDD loop, based on the 3 Rules of TDD, in some detail. An interesting discussion that always rears its head at this point is: do you need to follow the 3 rules so excruciatingly religiously? I don’t really know the answer to that. Certainly I always had it in my head that I would need a Card class, and that would necessitate a Suit enum, as these are pretty obvious things when thinking about the concept of a class which models a deck of cards. Could I have taken a shortcut, written everything and then written the tests to test the implementation (as it stands)? Probably, for something so trivial.

In the next post, I will write some more tests to continue building the Deck class.

2017-11-28 00:00

2017-11-21

Introduction

I thought Freecell would make a fine basis for talking about Test Driven Development. It is a game which I enjoy playing. I have an app for it on my phone, and it’s been available on Windows for as long as I can remember, although I’m writing this on a Mac, which does not by default have a Freecell game.

The rules are fairly simple:

  • There is one standard deck of cards, shuffled.
  • There are four “Free” Cell piles, which may each have any one card stored in it.
  • There are four Foundation piles, one for each suit.
  • The cards are dealt face-up left-to-right into eight cascades
    • The cards must alternate in colour.
    • The result of the deal is that the first four cascades will have seven cards, the final four will have six cards.
  • The top-most card of a cascade begins a tableau.
  • A tableau must be built down by alternating colours.
  • A card in a cell may be moved onto a tableau, subject to the previous rule.
  • A tableau may be recursively moved onto another tableau, or to an empty cascade, only if there is enough free space in Cells or empty cascades to use as intermediate locations.
  • The game is won when all four Foundation piles are built up in suit, Ace to King.

These rules will form the basis of a Freecell Rules Engine. Note that we’re not interested in a UI at the moment.

This post is a follow on from my previous post of how to setup a dotnet core environment for doing TDD.

red - first test

We know from the rules that we need a standard deck of cards to work with, so our initial test could assert that we can create an array, of some type that is yet to be determined, which has a length of 52.

[Fact]
public void Should_CreateAStandardDeckOfCards()
{
    var sut = new Deck();

}

There! Our first test. It fails (by not compiling). We’ve obeyed The 3 Laws of TDD: We’ve not written any production code and we’ve only written enough of the unit test to make it fail. We can make the test pass by creating a Deck class in the Freecell.Engine project. Time for another commit:

green - it passes

It is trivial to make our first test pass, as all we need to do is create a new class in our Freecell.Engine project, and our test passes as it now compiles. We can prove this by instructing dotnet to run our unit tests for us:

nostromo:Freecell.Engine.Tests stuart$ dotnet watch xunit
watch : Started
Detecting target frameworks in Freecell.Engine.Tests.csproj...
Building for framework netcoreapp2.0...
  Freecell.Engine -> /Users/stuart/dev/freecell/Freecell.Engine/bin/Debug/netstandard2.0/Freecell.Engine.dll
  Freecell.Engine.Tests -> /Users/stuart/dev/freecell/Freecell.Engine.Tests/bin/Debug/netcoreapp2.0/Freecell.Engine.Tests.dll
Running .NET Core 2.0.0 tests for framework netcoreapp2.0...
xUnit.net Console Runner (64-bit .NET Core 4.6.00001.0)
  Discovering: Freecell.Engine.Tests
  Discovered:  Freecell.Engine.Tests
  Starting:    Freecell.Engine.Tests
  Finished:    Freecell.Engine.Tests
=== TEST EXECUTION SUMMARY ===
   Freecell.Engine.Tests  Total: 1, Errors: 0, Failed: 0, Skipped: 0, Time: 0.142s
watch : Exited
watch : Waiting for a file to change before restarting dotnet...

It is important to make sure to run dotnet xunit from within the test project folder; you can’t pass the path to the test project like you can with dotnet test. As you can see, I’ve also started watching xunit, and the runner is now going to wait until I make and save a change before automatically compiling and running the tests.

red, green

This first unit test still doesn’t really test very much, and because we are obeying the 3 TDD rules, it forces us to think a little before we write any test code. When looking at the rules, I think we will probably want the ability to move through our deck of cards and have the ability to remove cards from the deck. So, with this in mind, the most logical thing to do is to make the Deck class enumerable. We could test that by checking a length property. Still in our first test, we can add this:

var sut = new Deck();

var length = sut.Length;

If I switch over to our dotnet watch window, we get the immediate feedback that this has failed:

Detecting target frameworks in Freecell.Engine.Tests.csproj...
Building for framework netcoreapp2.0...
  Freecell.Engine -> /Users/stuart/dev/freecell/Freecell.Engine/bin/Debug/netstandard2.0/Freecell.Engine.dll
DeckTests.cs(13,30): error CS1061: 'Deck' does not contain a definition for 'Length' and no extension method 'Length' accepting a first argument of type 'Deck' could be found (are you missing a using directive or an assembly reference?) [/Users/stuart/dev/freecell/Freecell.Engine.Tests/Freecell.Engine.Tests.csproj]
Build failed!
watch : Exited with error code 1
watch : Waiting for a file to change before restarting dotnet...

We have a pretty good idea that we’re going to make the Deck class enumerable, and probably make it implement IEnumerable<>; then we could add some sort of internal array to hold another type, probably a Card, and then write a bunch more code that will make our test pass.

But that would violate the 3rd rule, so instead, we simply add a Length property to the Deck class:

public class Deck 
{
    public int Length {get;}
}

This makes our test happy, because it compiles again. But it still doesn’t assert anything. Let’s fix that, and assert that the Length property actually has a length that we would expect a deck of cards to have, namely 52:

var sut = new Deck();

var length = sut.Length;

length.Should().Be(52);

The last line of the test asserts, through the use of FluentAssertions, that the Length property should be 52. I like FluentAssertions; I think it looks a lot cleaner than writing something like Assert.Equal(52, sut.Length), and it’s quite easy to read and understand: ‘Length’ should be 52. I love it. We can add it with the command dotnet add package FluentAssertions. Fix the using reference in the test class so that it compiles, and then check our watch window:

Detecting target frameworks in Freecell.Engine.Tests.csproj...
Building for framework netcoreapp2.0...
  Freecell.Engine -> /Users/stuart/dev/freecell/Freecell.Engine/bin/Debug/netstandard2.0/Freecell.Engine.dll
  Freecell.Engine.Tests -> /Users/stuart/dev/freecell/Freecell.Engine.Tests/bin/Debug/netcoreapp2.0/Freecell.Engine.Tests.dll
Running .NET Core 2.0.0 tests for framework netcoreapp2.0...
xUnit.net Console Runner (64-bit .NET Core 4.6.00001.0)
  Discovering: Freecell.Engine.Tests
  Discovered:  Freecell.Engine.Tests
  Starting:    Freecell.Engine.Tests
    Freecell.Engine.Tests.DeckTests.Should_CreateAStandardDeckOfCards [FAIL]
      Expected value to be 52, but found 0.
      Stack Trace:
           at FluentAssertions.Execution.XUnit2TestFramework.Throw(String message)
           at FluentAssertions.Execution.AssertionScope.FailWith(String message, Object[] args)
           at FluentAssertions.Numeric.NumericAssertions`1.Be(T expected, String because, Object[] becauseArgs)
        /Users/stuart/dev/freecell/Freecell.Engine.Tests/DeckTests.cs(16,0): at Freecell.Engine.Tests.DeckTests.Should_CreateAStandardDeckOfCards()
  Finished:    Freecell.Engine.Tests
=== TEST EXECUTION SUMMARY ===
   Freecell.Engine.Tests  Total: 1, Errors: 0, Failed: 1, Skipped: 0, Time: 0.201s
watch : Exited with error code 1
watch : Waiting for a file to change before restarting dotnet...

Now to make our test pass, we could again just start implementing IEnumerable<>, but that’s not TDD, and Uncle Bob might get upset with me. Instead, we will do the simplest thing that will make the test pass:

public class Deck
{
    public int Length { get { return new string[52].Length; }}
}

refactor

Now that we have a full test with an assertion that passes, we can move on to the refactor stage of the red/green/refactor TDD cycle. As it stands, our simple class passes our test, but we can see right away that newing up an array in the getter of the Length property is not going to serve our interests well in the long run, so we should do something about that. Making it a member variable seems to be the most logical thing to do at the moment, so we’ll do that. We don’t need to make any changes to our test during the refactor stage; if we do, that’s a design smell that would indicate something is wrong.

public class Deck
{
    private const int _size = 52;
    private string[] _cards = new string[_size];
    public int Length { get { return _cards.Length; }}
}

Conclusion

In this post, we’ve fleshed out our Deck class a little more, and gone through the full red/green/refactor TDD cycle. I also introduced FluentAssertions, and showed the output from the watch window as the test failed.

2017-11-21 00:00

2017-05-13

Around 2015, I was asked by Circle of Confusion (the production company behind The Walking Dead television show on AMC) to build a companion app for a film they had made called Capture.

Capture was one of those projects that is simultaneously exhilarating and terrifying. One of the challenges of app development is trying to plan out the timeline and budget for a process that is absolutely full of unknowns. And some projects have many more unknowns than others.

In the case of the Capture app, the task was to develop an app that was a fairly typical film companion app, including a trailer, some character bios, photo stills, a plot description, and so on. So far, not many unknowns there.

But this app also needed to listen when it was opened, and recognize audio cues. When a particular audio clip from the film was recognized, it would trigger some event — an incoming text message, an incoming phone call, an audio clip, vibration, or a film clip would start playing full screen, for example.

Essentially, this was Shazam but for a specific set of movie clips.

The first step, naturally, in planning out a project like this is to figure out what the big challenges are, and start to look at how they might be solved.

On a project like this, there is obviously no way we are going to develop a proprietary algorithm for recognizing audio out in the wild. This is, coincidentally, an area I studied with some seriousness in graduate school — specifically recognizing patterns in audio clips, images, and other assorted media using a fuzzy algorithm (specifically, wavelets). It was not something that would be feasible to do from scratch for a film companion app, to say the least!

Fortunately, we were able to find and license a C library that did just what we needed. The library needed an Objective-C wrapper to be used in the iOS project, so that was step one in prototyping this app. Once that was done, the rest of the app could be built around our audio recognition engine, and we could then focus on processing the audio clips into data we could embed in the app and building the system that would allow us to trigger the various events that would occur when our recognition engine would fire off a notification that we had an audio match.

The result was one of the most satisfying apps to test — we spent hours playing clips from the film and watching our phones go crazy in response.

Capture is available for iOS — but of course, in order to fully experience it you need to also watch the film, which you can stream on Amazon.

If you need an app developed, reach out and let’s talk! You can contact me using the contact form on this site, via Skype at stromdotcom or by visiting my company website at https://glowdot.com.

by stromdotcom at 2017-05-13 20:44

2017-04-06

Today my latest client project Focus was released in the Apple and Google Play app stores.

Focus is a safe driving app, which uses proprietary, state of the art speech to text technology to allow users to send and reply to text messages, make calls, send inter-app messages to other Focus users, and more. In addition, Focus leverages built-in text to speech technology to power a fully voice controlled user interface and read-back of messages, notes, and more.

Focus uses a proprietary blend of speech recognition systems — notably Siri, OpenEars, and Nuance. What makes it proprietary exactly? I built a simple AI system that determines the best library for the job, given the requirements of the user and the ambient sound conditions. As an example: matching speech against a list of known commands in a noisy car would require one combination of audio library and settings, whereas speech transcription in a quiet environment would be better suited using another combination. This degree of intelligent fine tuning resulted in a speech recognition app that outperformed the biggest names in voice recognition at release — an amazing feat considering the relatively small size of the team, and the constrained budget of a modest, bootstrapped startup.

But before that blending technology could be built, I had to build an iOS wrapper for the low level C code that makes up the Nuance speech framework. Nuance is an embedded speech recognition platform, not designed out of the box for high level use, as in an app like Focus. I essentially built the SDK a company like Nuance would normally provide to end users to use in client apps like Focus. This is no easy task, but luckily it’s something I have done before (as on the Capture app with the Audible Magic library). Tasks with a difficulty of this magnitude — which often come up well into the development process — are why it is absolutely critical that your development team is top notch.

If you need an app developed, reach out and let’s talk! You can contact me using the contact form on this site, via Skype at stromdotcom or by visiting my company website at https://glowdot.com.

by stromdotcom at 2017-04-06 03:01

2017-01-01

I finally got a few minutes to bring my personal site StromCode back to life! If you’ve been around a while, you may remember this site from as far back as 2001, when I hosted several code tutorials here — notably my win32 api programming guide, my intro to VST programming in C++ and later the same guide ported to C#, my guide to low level network programming in C, or the many, many tutorials on web application and API development in PHP.

The world has finally caught up, and there are many much better places to get the sort of info I used to post here, so I’m repurposing StromCode as my personal blog and CV of sorts.

Back in the early 2000s, I was still pursuing my studies in Computer Science, although I was also running one of the largest media hosting sites on the Internet and building up my digital consultancy into what it is today. That consultancy, Glowdot Productions, Inc., would eventually go on to build apps, games, and other software for companies like Warner Bros., Disney, Dreamworks, CBC, Sonos, Accenture, Circle of Confusion, and many, many local startups taking a stab at the social media space — a world I started my career in and in which I had my first success.

In 2019, I still lead mobile and other platform developments for local startups and large corporations alike, in addition to offering guidance and advice in tech to up and coming entrepreneurs.

On StromCode, I plan to break down as many of my current and past projects as I can, and offer whatever insights I am able to provide into the process of developing software for current gen platforms.

Stay tuned!

by stromdotcom at 2017-01-01 00:45

2014-01-29

A few months ago I left a busy startup job I’d had for over a year. The work was engrossing: I stopped blogging, but I was programming every day. I learned a completely new language, but got plenty of chances to use my existing knowledge. That is, after all, why they hired me.

dilbert

I especially liked something that might seem boring: combing through logs of occasional server errors and modifying our code to avoid them. Maybe it was because I had set up the monitoring system. Or because I was manually deleting servers that had broken in new ways. The economist in me especially liked putting a dollar value on bugs of this nature: 20 useless servers cost an extra 500 dollars a week on AWS.

But, there’s only so much waste like this to clean up. I’d automated most of the manual work I was doing and taught a few interns how to do the rest. I spent two weeks openly wondering what I’d do after finishing my current project, even questioning whether I’d still be useful with the company’s new direction.

fireme
Career Tip: don’t do this.

That’s when we agreed to part ways. So, there I was, no “official” job but still a ton of things to keep me busy. I’d help run a chain of Hacker Hostels in Silicon Valley, I was still maintaining Wine as an Ubuntu developer, and I was still a “politician” on Ubuntu’s Community Council having weekly meetings with Mark Shuttleworth.

Politiking, business management, and even Ubuntu packaging, however, aren’t programming. I just wasn’t doing it anymore, until last week. I got curious about counting my users on Launchpad. Download counts are exposed by an API, but not viewable on any webpage. No one else had written a proper script to harvest that data. It was time to program.

fuckshitdamn

And man, I went a little nuts. It was utterly engrossing, in the way that writing and video games used to be. I found myself up past 3am before I even noticed the time; I’d spent a whole day just testing and coding before finally putting it on github. I rationalized my need to make it good as a service to others who’d use it. But in truth I just liked doing it.

It didn’t stop there. I started looking around for programming puzzles. I wrote 4 lines of python that I thought were so neat they needed to be posted as a self-answered question on stack overflow. I literally thought they were beautiful, and using the new yield from feature in Python3 was making me inordinately happy.

And now, I’m writing again. And making terrible cartoons on my penboard. I missed this shit. It’s fucking awesome.

by YokoZar at 2014-01-29 02:46

2013-02-08

Lock'n'Roll, a Pidgin plugin for Windows designed to set an away status message when the PC is locked, has received its first update in three and a half years!

Daniel Laberge has forked the project and released a version 1.2 update which allows you to specify which status should be set when the workstation locks. Get it while it’s awesome (always)!

by Chris at 2013-02-08 03:56

2012-01-08

How do you generate the tangent vectors, which represent which way the texture axes on a textured triangle are facing?

Hitting up Google tends to produce articles like this one, or maybe even that exact one. I've seen others linked too, and the basic formulae tend to be the same. Have you looked at what you're pasting into your code though? Have you noticed that you're using the T coordinates to calculate the S vector, and vice versa? Well, you can look at the underlying math, and you'll find that it's because that's what happens when you assume the normal, S vector, and T vector form an orthonormal matrix and attempt to invert it; in a sense you're not really using the S and T vectors, but rather vectors perpendicular to them.

But that's fine, right? I mean, this is an orthogonal matrix, and they are perpendicular to each other, right? Well, does your texture project onto the triangle with the texture axes at right angles to each other, like a grid?


... Not always? Well, you might have a problem then!

So, what's the real answer?

Well, what do we know? First, translating the vertex positions will not affect the axial directions. Second, scrolling the texture will not affect the axial directions.

So, for triangle (A,B,C), where each vertex has a position (x,y,z) and texture coordinates (s,t), we can create a new triangle (LA,LB,LC) and the directions will be the same:

LA = A - A = (0,0,0),   LB = B - A,   LC = C - A

... and likewise LBt = Bt - At and LCt = Ct - At for the texture coordinates.

We also know that both axis directions lie in the same plane as the points, so we can convert everything into a local coordinate system for that plane and force one axis to zero:

localX = normalize(LB),   localZ = normalize(LB × LC),   localY = localX × localZ



Now we need triangle (Origin, PLB, PLC) in this local coordinate space. We know PLB[y] is zero since LB was used as the X axis:

PLB = (LB · localX, 0),   PLC = (LC · localX, LC · localY)


Now we can solve this. The S axis must satisfy S · PLB = LBt[s] and S · PLC = LCt[s], and since PLB[y] is zero:

S[x] = LBt[s] / PLB[x]
S[y] = (LCt[s] - S[x]*PLC[x]) / PLC[y]


Do this for both axes and you have your correct texture axis vectors, regardless of the texture projection. You can then multiply the results by your tangent-space normal map, normalize the result, and have a proper world-space surface normal.

As always, the source code spoilers:

terVec3 lb = ti->points[1] - ti->points[0];
terVec3 lc = ti->points[2] - ti->points[0];
terVec2 lbt = ti->texCoords[1] - ti->texCoords[0];
terVec2 lct = ti->texCoords[2] - ti->texCoords[0];

// Generate local space for the triangle plane
terVec3 localX = lb.Normalize2();
terVec3 localZ = lb.Cross(lc).Normalize2();
terVec3 localY = localX.Cross(localZ).Normalize2();

// Determine X/Y vectors in local space
float plbx = lb.DotProduct(localX);
terVec2 plc = terVec2(lc.DotProduct(localX), lc.DotProduct(localY));

// Solve for the texture axes in the local 2D space: we want
// dot(S, PLB) = lbt[0] and dot(S, PLC) = lct[0] (T likewise with index 1).
// PLB's Y component is zero, so the system solves directly.
terVec2 tsvS, tsvT;

tsvS[0] = lbt[0] / plbx;
tsvS[1] = (lct[0] - tsvS[0]*plc[0]) / plc[1];
tsvT[0] = lbt[1] / plbx;
tsvT[1] = (lct[1] - tsvT[0]*plc[0]) / plc[1];

ti->svec = (localX*tsvS[0] + localY*tsvS[1]).Normalize2();
ti->tvec = (localX*tsvT[0] + localY*tsvT[1]).Normalize2();


There's an additional special case to be aware of: Mirroring.

Mirroring across an edge can cause wild changes in a vector's direction, possibly even degenerating it. There isn't a clear-cut solution to this, but you can work around the problem by snapping the vector to the normal, effectively cancelling it out on the mirroring edge.

Personally, I check the angle between the two vectors: if they're more than 90 degrees apart, I cancel them; otherwise, I merge them.
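
For what it's worth, here is a minimal sketch of that check, assuming the terVec3 type from the code above (with its DotProduct and Normalize2 methods, plus an addition operator) and two candidate tangents from the faces on either side of the edge; the 90-degree test reduces to the sign of the dot product:

// Merge two candidate tangent vectors across a shared edge, or cancel them.
// If they point more than 90 degrees apart (negative dot product), the texture
// is mirrored across the edge: snap to the surface normal, which cancels the
// tangent's contribution there. Otherwise, average and renormalize.
terVec3 MergeEdgeTangents(const terVec3 &a, const terVec3 &b, const terVec3 &normal)
{
    if (a.DotProduct(b) < 0.0f)
        return normal;                  // mirrored: cancel out on this edge
    return (a + b).Normalize2();        // same orientation: merge
}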

by OneEightHundred (noreply@blogger.com) at 2012-01-08 00:23

2011-12-07

Valve's self-shadowing radiosity normal maps concept can be used with spherical harmonics in approximately the same way: integrate over the sphere, weighting each of numerous sample directions by how much light arriving from that direction would affect the sample, and accounting for occlusion by neighboring samples due to their elevation.

You can store this as three DXT1 textures, though you can improve quality by packing channels with similar spatial coherence. Coefficients 0, 2, and 6 in particular tend to pack well, since they're all dominated primarily by directions aimed perpendicular to the texture.

I use the following packing:
Texture 1: Coefs 0, 2, 6
Texture 2: Coefs 1, 4, 5
Texture 3: Coefs 3, 7, 8

You can reference an earlier post on this blog for code on how to rotate an SH vector by a matrix, in turn allowing you to get it into texture space. Once you've done that, simply multiply each SH coefficient from the self-shadowing map by the corresponding SH coefficient created from your light source (also covered in the previous post) and add the products together.
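
As a rough sketch of that last step (per texel and per color channel), assuming both coefficient sets have already been unpacked into plain nine-float arrays in the same space:

// Combine a texel's self-shadowing SH coefficients with a light's SH
// coefficients: multiply term by term and sum. The result is how much of
// that light reaches the texel.
float EvaluateSelfShadowedSH(const float selfShadowCoefs[9], const float lightCoefs[9])
{
    float result = 0.0f;
    for (int i = 0; i < 9; i++)
        result += selfShadowCoefs[i] * lightCoefs[i];
    return result;
}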

by OneEightHundred (noreply@blogger.com) at 2011-12-07 18:39

2011-12-02

Spherical harmonics seem to have some impenetrable level of difficulty, especially in the indie scene, which has little to go off of other than a few presentations and whitepapers. Some of those even contain incorrect information (e.g. one of the formulas in the Sony paper on the topic is wrong), and most are still using ZYZ rotations because it's so hard to find out how to do a matrix rotation.

Hao Chen and Xinguo Liu did a presentation at SIGGRAPH '08, and the slides from it contain a good deal of useful stuff, not least one of the ONLY easy-to-find rotate-by-matrix functions. It treats the Z axis a bit awkwardly, though, so I patched the rotation code up a bit and added a pre-integrated cosine convolution filter so you can easily get SH coefs for a directional light.

There was also gratuitous use of sqrt(3) multipliers, which can be eliminated entirely by premultiplying or predividing coef #6 by it; doing so incidentally causes all of the constants and multipliers to resolve to rational numbers.

As always, you can include multiple lights by simply adding their SH coefs together. If you want specular, you can approximate a directional light by using the linear component to determine the direction and the constant component to determine the color. You can do this per channel, or use the average values to determine the direction and do it once.
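
As a minimal sketch of both ideas, assuming plain nine-float coefficient arrays and the terVec3 type from the code below, and that the linear band is stored in the same (y, z, x) order the functions below use:

// Accumulate several lights into one SH coefficient set by plain addition.
void AddSHCoefs(float dest[9], const float src[9])
{
    for (int i = 0; i < 9; i++)
        dest[i] += src[i];
}

// Approximate a single directional light from an SH set: the linear band
// (stored as y, z, x in coefs 1-3) gives the direction, and the constant
// term gives the color/intensity.
terVec3 ApproximateSHDirection(const float coefs[9])
{
    terVec3 dir(coefs[3], coefs[1], coefs[2]);  // reorder back to x, y, z
    return dir.Normalize2();
}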

Here are the spoilers:

#define SH_AMBIENT_FACTOR   (0.25f)
#define SH_LINEAR_FACTOR (0.5f)
#define SH_QUADRATIC_FACTOR (0.3125f)

void LambertDiffuseToSHCoefs(const terVec3 &dir, float out[9])
{
    // Constant
    out[0] = 1.0f * SH_AMBIENT_FACTOR;

    // Linear
    out[1] = dir[1] * SH_LINEAR_FACTOR;
    out[2] = dir[2] * SH_LINEAR_FACTOR;
    out[3] = dir[0] * SH_LINEAR_FACTOR;

    // Quadratics
    out[4] = ( dir[0]*dir[1] ) * 3.0f*SH_QUADRATIC_FACTOR;
    out[5] = ( dir[1]*dir[2] ) * 3.0f*SH_QUADRATIC_FACTOR;
    out[6] = ( 1.5f*( dir[2]*dir[2] ) - 0.5f ) * SH_QUADRATIC_FACTOR;
    out[7] = ( dir[0]*dir[2] ) * 3.0f*SH_QUADRATIC_FACTOR;
    out[8] = 0.5f*( dir[0]*dir[0] - dir[1]*dir[1] ) * 3.0f*SH_QUADRATIC_FACTOR;
}


void RotateCoefsByMatrix(float outCoefs[9], const float pIn[9], const terMat3x3 &rMat)
{
    // DC
    outCoefs[0] = pIn[0];

    // Linear
    outCoefs[1] = rMat[1][0]*pIn[3] + rMat[1][1]*pIn[1] + rMat[1][2]*pIn[2];
    outCoefs[2] = rMat[2][0]*pIn[3] + rMat[2][1]*pIn[1] + rMat[2][2]*pIn[2];
    outCoefs[3] = rMat[0][0]*pIn[3] + rMat[0][1]*pIn[1] + rMat[0][2]*pIn[2];

    // Quadratics
    outCoefs[4] = (
          ( rMat[0][0]*rMat[1][1] + rMat[0][1]*rMat[1][0] ) * ( pIn[4] )
        + ( rMat[0][1]*rMat[1][2] + rMat[0][2]*rMat[1][1] ) * ( pIn[5] )
        + ( rMat[0][2]*rMat[1][0] + rMat[0][0]*rMat[1][2] ) * ( pIn[7] )
        + ( rMat[0][0]*rMat[1][0] ) * ( pIn[8] )
        + ( rMat[0][1]*rMat[1][1] ) * ( -pIn[8] )
        + ( rMat[0][2]*rMat[1][2] ) * ( 3.0f*pIn[6] )
        );

    outCoefs[5] = (
          ( rMat[1][0]*rMat[2][1] + rMat[1][1]*rMat[2][0] ) * ( pIn[4] )
        + ( rMat[1][1]*rMat[2][2] + rMat[1][2]*rMat[2][1] ) * ( pIn[5] )
        + ( rMat[1][2]*rMat[2][0] + rMat[1][0]*rMat[2][2] ) * ( pIn[7] )
        + ( rMat[1][0]*rMat[2][0] ) * ( pIn[8] )
        + ( rMat[1][1]*rMat[2][1] ) * ( -pIn[8] )
        + ( rMat[1][2]*rMat[2][2] ) * ( 3.0f*pIn[6] )
        );

    outCoefs[6] = (
          ( rMat[2][1]*rMat[2][0] ) * ( pIn[4] )
        + ( rMat[2][2]*rMat[2][1] ) * ( pIn[5] )
        + ( rMat[2][0]*rMat[2][2] ) * ( pIn[7] )
        + 0.5f*( rMat[2][0]*rMat[2][0] ) * ( pIn[8] )
        + 0.5f*( rMat[2][1]*rMat[2][1] ) * ( -pIn[8] )
        + 1.5f*( rMat[2][2]*rMat[2][2] ) * ( pIn[6] )
        - 0.5f * ( pIn[6] )
        );

    outCoefs[7] = (
          ( rMat[0][0]*rMat[2][1] + rMat[0][1]*rMat[2][0] ) * ( pIn[4] )
        + ( rMat[0][1]*rMat[2][2] + rMat[0][2]*rMat[2][1] ) * ( pIn[5] )
        + ( rMat[0][2]*rMat[2][0] + rMat[0][0]*rMat[2][2] ) * ( pIn[7] )
        + ( rMat[0][0]*rMat[2][0] ) * ( pIn[8] )
        + ( rMat[0][1]*rMat[2][1] ) * ( -pIn[8] )
        + ( rMat[0][2]*rMat[2][2] ) * ( 3.0f*pIn[6] )
        );

    outCoefs[8] = (
          ( rMat[0][1]*rMat[0][0] - rMat[1][1]*rMat[1][0] ) * ( pIn[4] )
        + ( rMat[0][2]*rMat[0][1] - rMat[1][2]*rMat[1][1] ) * ( pIn[5] )
        + ( rMat[0][0]*rMat[0][2] - rMat[1][0]*rMat[1][2] ) * ( pIn[7] )
        + 0.5f*( rMat[0][0]*rMat[0][0] - rMat[1][0]*rMat[1][0] ) * ( pIn[8] )
        + 0.5f*( rMat[0][1]*rMat[0][1] - rMat[1][1]*rMat[1][1] ) * ( -pIn[8] )
        + 0.5f*( rMat[0][2]*rMat[0][2] - rMat[1][2]*rMat[1][2] ) * ( 3.0f*pIn[6] )
        );
}


... and to sample it in the shader ...


float3 SampleSHQuadratic(float3 dir, float3 shVector[9])
{
    float3 ds1 = dir.xyz*dir.xyz;
    float3 ds2 = dir*dir.yzx; // xy, zy, xz

    float3 v = shVector[0];

    v += dir.y * shVector[1];
    v += dir.z * shVector[2];
    v += dir.x * shVector[3];

    v += ds2.x * shVector[4];
    v += ds2.y * shVector[5];
    v += (ds1.z * 1.5 - 0.5) * shVector[6];
    v += ds2.z * shVector[7];
    v += (ds1.x - ds1.y) * 0.5 * shVector[8];

    return v;
}


For Monte Carlo integration, take sampling points, feed direction "dir" to the following function to get multipliers for each coefficient, then multiply by the intensity in that direction. Divide the total by the number of sampling points (a full loop is sketched below, after the sampling helper):


void SHForDirection(const terVec3 &dir, float out[9])
{
    // Constant
    out[0] = 1.0f;

    // Linear
    out[1] = dir[1] * 3.0f;
    out[2] = dir[2] * 3.0f;
    out[3] = dir[0] * 3.0f;

    // Quadratics
    out[4] = ( dir[0]*dir[1] ) * 15.0f;
    out[5] = ( dir[1]*dir[2] ) * 15.0f;
    out[6] = ( 1.5f*( dir[2]*dir[2] ) - 0.5f ) * 5.0f;
    out[7] = ( dir[0]*dir[2] ) * 15.0f;
    out[8] = 0.5f*( dir[0]*dir[0] - dir[1]*dir[1] ) * 15.0f;
}


... and finally, for a uniformly-distributed random point on a sphere ...


terVec3 RandomDirection(int (*randomFunc)(), int randMax)
{
    float u = (((float)randomFunc()) / (float)(randMax - 1))*2.0f - 1.0f;
    float n = sqrtf(1.0f - u*u);

    float theta = 2.0f * M_PI * (((float)randomFunc()) / (float)(randMax));

    return terVec3(n * cos(theta), n * sin(theta), u);
}
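
Putting SHForDirection and RandomDirection together, a minimal sketch of that Monte Carlo loop might look like the following, shown for a single channel; lightIntensityForDirection is a hypothetical callback standing in for however you sample your environment:

// Project an environment into 9 SH coefficients by Monte Carlo sampling:
// for each random direction, weight the per-coefficient basis values by the
// incoming intensity from that direction, then average over all samples.
void ProjectEnvironmentToSH(float out[9], int numSamples,
    float (*lightIntensityForDirection)(const terVec3 &dir),
    int (*randomFunc)(), int randMax)
{
    for (int i = 0; i < 9; i++)
        out[i] = 0.0f;

    for (int s = 0; s < numSamples; s++)
    {
        terVec3 dir = RandomDirection(randomFunc, randMax);

        float basis[9];
        SHForDirection(dir, basis);

        float intensity = lightIntensityForDirection(dir);
        for (int i = 0; i < 9; i++)
            out[i] += basis[i] * intensity;
    }

    for (int i = 0; i < 9; i++)
        out[i] /= (float)numSamples;
}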

by OneEightHundred (noreply@blogger.com) at 2011-12-02 12:22

2011-12-01

Fresh install on OS X of ColdFusion Builder 2 (TWO, the SECOND one). Typing a simple conditional, this is what I was given:



I also had to manually write the closing cfif tag. It's such a joke.

The absolute core purpose of an IDE is to be a text editor. Secondary to that are other features that are supposed to make you work better. ColdFusion Builder 2 (TWO!!!!!) completely fails on all levels as a text editor. It doesn't even function as well as notepad.exe!

Text search is finicky, Find & Replace is completely broken half the time, the UI is often unresponsive (yay Eclipse), the text cursor sometimes disappears, double-clicking folders or files in an FTP view pops up the Rename dialog every time, HTML / CF tag completion usually doesn't happen, indentation is broken, function parameter tooltips obscure the place you are typing, # and " completion randomly breaks (often leaving you with a ###)... the list goes on and on.

Adobe has a big feature list on their site. I'm thinking maybe they should go back and use some resources to fix the parts where you type things into the computer, you know, the whole point of the thing.

by Ted (noreply@blogger.com) at 2011-12-01 15:14