After reviewing the code for the simple YAML parser I wrote, I decided it was getting a little messy, so before continuing I refactored it a little bit.
The simplest thing to do was to separate the serialisation and the deserialisation into separate classes, and simply call those from within the YamlConvert
class in the existing methods. This approach tends to be what other JSON and YAML libraries do, with added functionality such as being able to control aspects of the serialisation/deserialisation process for specific types.
I currently don’t need, or want, to do that, as I’m taking a much more brute force approach - however it is something to consider for a future refactor. Maybe.
I ended up with the following for the YamlConvert:
public static class YamlConvert
{
private static YamlSerialiser Serialiser;
private static YamlDeserialiser Deserialiser;
static YamlConvert()
{
Serialiser = new YamlSerialiser();
Deserialiser = new YamlDeserialiser();
}
public static string Serialise(YamlHeader header)
{
return Serialiser.Serialise(header);
}
public static YamlHeader Deserialise(string filePath)
{
if (!File.Exists(filePath)) throw new FileNotFoundException("Unable to find specified file", filePath);
var content = File.ReadAllLines(filePath);
return Deserialise(content);
}
public static YamlHeader Deserialise(string[] rawHeader)
{
return Deserialiser.Deserialise(rawHeader);
}
}
It works quite well, as it did before, and looks a lot better. There is no dependency configuration to worry about because, as I mentioned above, I’m not worried about swapping out the serialisation/deserialisation process at any point.
Previously we left off with a method which could parse the YAML header in one of our markdown files, and it was collecting each line between the ---
header marker, for further processing.
One of the main requirements for the overall BlogHelper9000
utility is to be able to standardise the YAML headers in each source markdown file for a post. Some of the posts had a mix of different tags, that were essentially doing the same thing, so one of the aims is to be able to collect those, and transform the values into the correct tags.
In order to achieve this, we can specify a collection of the valid header properties up front, and also a collection of the ‘other’ properties that we find, which we can hold on to for later in the process, when we’ve written the code to handle those properties. The YamlHeader
class has already been defined, and we can use a little reflection to load that class up and pick the properties out.
private static Dictionary<string, object?> GetYamlHeaderProperties(YamlHeader? header = null)
{
var yamlHeader = header ?? new YamlHeader();
return yamlHeader.GetType()
.GetProperties(BindingFlags.DeclaredOnly | BindingFlags.Public | BindingFlags.Instance)
.Where(p => p.GetCustomAttribute<YamlIgnoreAttribute>() is null)
.ToDictionary(p =>
{
var attr = p.GetCustomAttribute<YamlNameAttribute>();
return attr is not null ? attr.Name.ToLower() : p.Name.ToLower();
}, p => p.GetValue(yamlHeader, null));
}
We need to be careful not to collect properties that are not part of the YAML header in markdown files, but that exist on the YamlHeader for use in further processing - such as holding the ‘extra’ properties that we’ll need to match up with their valid counterparts in a further step. Thus we have the custom YamlIgnoreAttribute
that we can use to ensure we drop properties that we don’t care about. We also need to ensure that we can match up C# property names with the actual YAML header name, so we also have the YamlNameAttribute
to handle this.
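Neither attribute is anything special; a minimal sketch of what they might look like, along with an illustrative YamlHeader (the property names here are examples rather than the exact shape of the class in the repo), is:
using System;
using System.Collections.Generic;

[AttributeUsage(AttributeTargets.Property)]
public class YamlIgnoreAttribute : Attribute { }

[AttributeUsage(AttributeTargets.Property)]
public class YamlNameAttribute : Attribute
{
    public YamlNameAttribute(string name) => Name = name;

    // The name of the property as it appears in the YAML header
    public string Name { get; }
}

public class YamlHeader
{
    public string? Layout { get; set; }

    public string? Title { get; set; }

    // Maps the C# property to the 'featured_image' key in the YAML header
    [YamlName("featured_image")]
    public string? FeaturedImage { get; set; }

    // Not part of the YAML itself; holds unrecognised tags for later processing
    [YamlIgnore]
    public Dictionary<string, string> Extras { get; set; } = new();
}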
Then we just need a way of parsing the individual lines and pulling the header name and the value out.
(string property, string value) ParseHeaderTag(string tag)
{
tag = tag.Trim();
var index = tag.IndexOf(':');
var property = tag.Substring(0, index);
var value = tag.Substring(index+1).Trim();
return (property, value);
}
Here we just return a simple tuple after doing some substring manipulation, which is greatly helped by the header and its value always being separated by ‘:’.
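For example, given a line like the title tag from one of my posts, it splits on the first ‘:’ only:
var (property, value) = ParseHeaderTag("title: 'Dynamic port assignment in Octopus Deploy'");
// property == "title"
// value    == "'Dynamic port assignment in Octopus Deploy'"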
Then if we put all that together we can start to parse the header properties.
private static YamlHeader ParseYamlHeader(IEnumerable<string> yamlHeader)
{
var parsedHeaderProperties = new Dictionary<string, object>();
var extraHeaderProperties = new Dictionary<string, string>();
var headerProperties = GetYamlHeaderProperties();
foreach (var line in yamlHeader)
{
var propertyValue = ParseHeaderTag(line);
if (headerProperties.ContainsKey(propertyValue.property))
{
parsedHeaderProperties.Add(propertyValue.property, propertyValue.value);
}
else
{
extraHeaderProperties.Add(propertyValue.property, propertyValue.value);
}
}
return ToYamlHeader(parsedHeaderProperties, extraHeaderProperties);
All we need to do is set up some dictionaries to hold the header properties, get the dictionary of valid header properties, and then loop through each line, parsing the header tag and checking whether the property is a ‘valid’ one that we definitely know we want to keep, or one we need to hold for further processing. You’ll notice that the above code is missing an end brace: this is deliberate, because the ParseHeaderTag and ToYamlHeader methods are both nested methods.
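ToYamlHeader isn’t shown above. As a rough sketch of the idea only - assuming the same reflection approach as GetYamlHeaderProperties, shown as a standalone method for readability, and using the hypothetical Extras property from the earlier sketch - it might look something like this:
private static YamlHeader ToYamlHeader(Dictionary<string, object> parsed, Dictionary<string, string> extras)
{
    var header = new YamlHeader();
    var properties = header.GetType()
        .GetProperties(BindingFlags.DeclaredOnly | BindingFlags.Public | BindingFlags.Instance)
        .Where(p => p.GetCustomAttribute<YamlIgnoreAttribute>() is null);

    foreach (var property in properties)
    {
        // Use the YamlNameAttribute value if present, otherwise the property name
        var attr = property.GetCustomAttribute<YamlNameAttribute>();
        var key = (attr is not null ? attr.Name : property.Name).ToLower();

        // Only string properties are set here; bools, arrays and dates would need
        // proper conversion in a fuller implementation
        if (parsed.TryGetValue(key, out var value) && property.PropertyType == typeof(string))
        {
            property.SetValue(header, value?.ToString());
        }
    }

    // Keep anything we didn't recognise for the later 'fix up' step
    header.Extras = extras;

    return header;
}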
Reading through the code to write this post has made me realise that we can do some refactoring to make this look a little nicer.
So we’ll look at that next.
The next thing to do to get BlogHelper9000 functional is to write a command which provides some information about the posts in the blog. I want to know:
I also know that I want to introduce a command which will allow me to fix the metadata in the posts, which is a little messy. I’ve been inconsistently blogging since 2007, originally starting off on a self-hosted python blog I’ve forgotten the name of, before migrating to Wordpress, and then migrating to a short-lived .net static site generator before switching over to Jekyll.
Obviously, Markdown powered blogs like Jekyll have to provide non-markdown metadata in each post, and for Jekyll (and most markdown powered blogs) that means: YAML.
There are a couple of options when it comes to parsing YAML. One would be to use YamlDotNet which is a stable library which conforms with V1.1 and v1.2 of the YAML specifications.
But where is the fun in that?
I’ve defined a POCO called YamlHeader
which I’m going to use as the in-memory object to represent the YAML metadata header at the top of a markdown file.
If we take a leaf from different JSON converters, we can define a YamlConvert
class like this:
public static class YamlConvert
{
public static string Serialise(YamlHeader header)
{
}
public static YamlHeader Deserialise(string filePath)
{
}
}
With this, we can easily serialise a YamlHeader into a string, and deserialise a file into a YamlHeader.
Deserialising is the slightly more complicated of the two, so let’s start with that.
Our first unit test looks like this:
[Fact]
public void Should_Deserialise_YamlHeader()
{
var yaml = @"---
layout: post
title: 'Dynamic port assignment in Octopus Deploy'
tags: ['build tools', 'octopus deploy']
featured_image: /assets/images/posts/2020/artem-sapegin-b18TRXc8UPQ-unsplash.jpg
featured: false
hidden: false
---
post content that's not parsed";
var yamlObject = YamlConvert.Deserialise(yaml.Split(Environment.NewLine));
yamlObject.Layout.Should().Be("post");
yamlObject.Tags.Should().NotBeEmpty();
}
This immediately requires us to add an overload for Deserialise
to the YamlConvert
class, which takes a string[]
. This means our implementation for the first Deserialise
method is simply:
public static YamlHeader Deserialise(string filePath)
{
if (!File.Exists(filePath)) throw new FileNotFoundException("Unable to find specified file", filePath);
var content = File.ReadAllLines(filePath);
return Deserialise(content);
}
Now we get into the fun part. And a big caveat: I’m not sure if this is the best way of doing this, but it works for me and that’s all I care about.
Anyway. A YAML header block is identified by a single line of only ---
followed by n
lines of YAML which is signified to have ended by another single line of only ---
. You can see this in the unit test above.
The algorithm I came up with goes like this:
For each line in lines:
if line is '---' then
if header start marker not found then
header start marker found
continue
break loop
store line
parse each line of found header
So in a nutshell, it loops through each line in the file, looking for the first ---
to identify the start of the header, and then until it hits another ---
, it gathers the lines for further processing.
Translated into C#, the code looks like this:
public static YamlHeader Deserialise(string[] fileContent)
{
var headerStartMarkerFound = false;
var yamlBlock = new List<string>();
foreach (var line in fileContent)
{
if (line.Trim() == "---")
{
if (!headerStartMarkerFound)
{
headerStartMarkerFound = true;
continue;
}
break;
}
yamlBlock.Add(line);
}
return ParseYamlHeader(yamlBlock);
}
This is fairly straightforward, and isn’t where I think some of the problems with the way it works actually are - all that is hidden behind ParseYamlHeader
, and is worth a post on its own.
In the introductory post to this series, I ended with issuing a command to initialise a new console project, BlogHelper9000
. It doesn’t matter how you create your project, be it from Visual Studio, Rider or the terminal, the end result is the same, as the templates are all the same.
With the new .net 6 templates, the resulting Program.cs
is somewhat sparse: if you discount the single comment, all you get in the file is a Console.WriteLine("Hello, World!");
, thanks to all the new wizardry in the latest versions of the language and the framework.
Thanks to this new-fangled sorcery, the app still has a static Main method, you just don’t need to see it, and as such the args
string array is still there. For very simple applications, this is all you really need to do. However, once you get past a few commands, with a few optional flags, things can get complicated, fast. This can turn into a maintenance headache.
In the past I’ve written my own command line parsing abstractions, I’ve used Mono.Options and other libraries, and I think I’ve finally settled on Oakton as my go-to library for quickly and easily adding command line parsing to a console application. It’s intuitive, easy to use and easy to maintain. This means you can easily introduce it into a team environment and have everyone understand it immediately.
After following Oakton’s getting started documentation, you can see how easy it is to get going with a basic implementation. I recommend introducing the ability to execute both synchronous and asynchronous commands, which you can achieve with a small tweak to the Program.cs
and taking into consideration the top-level statements in .net 6, like this:
using System.Reflection;
var executor = CommandExecutor.For(_ =>{
_.RegisterCommands(typeof(Program).GetTypeInfo().Assembly);
});
var result = await executor.ExecuteAsync(args);
return result;
In .net 5, or if you don’t like top-level statements and have a static int Main
you can make it static Task<int> Main
instead and return the executor.ExecuteAsync
instead of awaiting it.
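In other words, something along these lines (a sketch only, assuming the same CommandExecutor registration as above):
using System.Reflection;
using System.Threading.Tasks;
using Oakton;

internal class Program
{
    private static Task<int> Main(string[] args)
    {
        var executor = CommandExecutor.For(_ =>
        {
            _.RegisterCommands(typeof(Program).GetTypeInfo().Assembly);
        });

        // Return the task rather than awaiting it
        return executor.ExecuteAsync(args);
    }
}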
In some console applications, different commands can have the same optional flags, and I like to put mine in a class called BaseInput
. Because I know I’m going to have several commands in this application, I’m going to add some base classes so that the different commands can share some of the same functionality. I’ve also used this in the past to, for example, create a database instance in the base class, which is then passed into each inheriting command. It’s also a good place to add some common argument/flag validation.
What I like to do is have an abstract base class, which inherits from the Oakton command, and add an abstract Run
method to it, and usually a virtual bool ValidateInput
too; these can then be overridden in our actual Command implementations and have a lot of nice functionality automated for us in a way that can be used across all Commands.
Some of the details of these classes are elided to stop this from being a super long post; you can see all the details in the GitHub repo.
public abstract class BaseCommand<TInput> : OaktonCommand<TInput>
where TInput : BaseInput
{
public override bool Execute(TInput input)
{
return ValidateInput(input) && Run(input);
}
protected abstract bool Run(TInput input);
protected virtual bool ValidateInput(TInput input)
{
/* ... */
}
}
This ensures that all the Commands we implement can optionally decide to validate the inputs that they take in, simply by overriding ValidateInput
.
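As a purely hypothetical example (this command isn’t in the repo, and the flag check is illustrative), a command can layer its own checks on top of the shared validation:
public class ExampleCommand : BaseCommand<BaseInput>
{
    protected override bool ValidateInput(BaseInput input)
    {
        // Run the shared validation first, then any command-specific checks
        if (!base.ValidateInput(input)) return false;

        if (string.IsNullOrWhiteSpace(input.BaseDirectoryFlag))
        {
            ConsoleWriter.Write(ConsoleColor.Red, "A base directory must be provided");
            return false;
        }

        return true;
    }

    protected override bool Run(BaseInput input)
    {
        // Command-specific work would go here
        return true;
    }
}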
The async version is exactly the same… except async:
public abstract class AsyncBaseCommand<TInput> : OaktonAsyncCommand<TInput>
where TInput : BaseInput
{
public override async Task<bool> Execute(TInput input)
{
return await ValidateInput(input) && await Run(input);
}
protected abstract Task<bool> Run(TInput input);
protected virtual Task<bool> ValidateInput(TInput input)
{
/* ... */
}
}
There is an additional class I’ve not yet shown, which adds some further reusable functionality between each base class, and that’s the BaseHelper
class. I’ve got a pretty good idea that any commands I write for the app are going to operate on posts or post drafts, which in Jekyll are stored in _posts
and _drafts
respectively. Consequently, the commands need an easy way of having these paths to hand, so a little internal helper class is a good place to put this shared logic.
internal class BaseHelper<TInput> where TInput : BaseInput
{
public string DraftsPath { get; }
public string PostsPath { get; }
private BaseHelper(TInput input)
{
DraftsPath = Path.Combine(input.BaseDirectoryFlag, "_drafts");
PostsPath = Path.Combine(input.BaseDirectoryFlag, "_posts");
}
public static BaseHelper<TInput> Initialise(TInput input)
{
return new BaseHelper<TInput>(input);
}
public bool ValidateInput(TInput input)
{
if (!Directory.Exists(DraftsPath))
{
ConsoleWriter.Write(ConsoleColor.Red, "Unable to find blog _drafts folder");
return false;
}
if (!Directory.Exists(PostsPath))
{
ConsoleWriter.Write(ConsoleColor.Red, "Unable to find blog _posts folder");
return false;
}
return true;
}
}
This means that our base class implementations can now become:
private BaseHelper<TInput> _baseHelper = null!;
protected string DraftsPath => _baseHelper.DraftsPath;
protected string PostsPath => _baseHelper.PostsPath;
public override bool Execute(TInput input)
{
_baseHelper = BaseHelper<TInput>.Initialise(input);
return ValidateInput(input) && Run(input);
}
protected virtual bool ValidateInput(TInput input)
{
return _baseHelper.ValidateInput(input);
}
Note the use of the null-forgiving operator in null!, where I am telling the compiler to ignore the fact that _baseHelper is being initialised to null, as I know better.
This allows each command implementation to hook into this method and validate itself automatically.
Now that we have some base classes to work with, we can start to write our first command. If you check the history in the repo, you’ll see this wasn’t the first command I actually wrote… but it probably should have been. In any case, it only serves to illustrate our first real command implementation.
public class InfoCommand : BaseCommand<BaseInput>
{
public InfoCommand()
{
Usage("Info");
}
protected override bool Run(BaseInput input)
{
var posts = LoadPosts();
var blogDetails = new Details();
DeterminePostCount(posts, blogDetails);
DetermineDraftsInfo(posts, blogDetails);
DetermineRecentPosts(posts, blogDetails);
DetermineDaysSinceLastPost(blogDetails);
RenderDetails(blogDetails);
return true;
}
/**...*/
}
LoadPosts
is a method in the base class which is responsible for loading the posts into memory, so that we can process them and extract meaningful details about them. We store this information in a Details
class, which is what we ultimately use to render the details to the console. You can see the details of these methods in the GitHub repository, however they all boil down to simple LINQ queries.
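Those methods aren’t reproduced here, but as a flavour of the kind of LINQ involved (a sketch only - the IsDraft flag and the Details property names are assumptions, not the exact shapes used in the repository):
private static void DeterminePostCount(IEnumerable<YamlHeader> posts, Details blogDetails)
{
    // Count everything that isn't a draft
    blogDetails.PostCount = posts.Count(p => !p.IsDraft);
}

private static void DetermineDraftsInfo(IEnumerable<YamlHeader> posts, Details blogDetails)
{
    blogDetails.DraftCount = posts.Count(p => p.IsDraft);
}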
In this post we’ve seen how to set up Oakton and configure a base class to extend the functionality and give us more flexibility, and an initial command. In subsequent posts, we’ll cover more commands and I’ll start to use the utility to tidy up metadata across all the posts in the blog and fix things like images for posts.
I just had to set up my vimrc
and vimfiles
on a new laptop for work, and had some fun with Vim, mostly as it’s been years since I had to do it. I keep my vimfiles
folder on GitHub, so I can grab it wherever I need it.
To recap, one of the places that Vim will look for things is $HOME/vimfiles/vimrc
, where $HOME
is actually the same as %USERPROFILE%
. In most corporate environments, the %USERPROFILE%
is actually stored in a networked folder location, to enable roaming profile support and help when a user gets a new computer.
So you can put your vimfiles
there, but it’s a network folder, so it’s slow to start an instance of Vim - especially if you have a few plugins.
Instead, what you can do is to edit the _vimrc
file in the Vim installation folder (usually in C:\Program Files (x86)\vim
), delete the entire contents and replace it with:
set rtp+=C:\path\to\your\vimfiles
set viminfo+=nC:\path\to\your\vimfiles\or\whatever
source C:\path\to\your\vimfiles\vimrc
What this does is:
- add your vimfiles folder to Vim’s runtimepath, so your plugins and settings are picked up from there
- tell Vim to store the viminfo file alongside your vimfiles
- source your vimrc file and use that
This post largely serves as a memory aid for myself for when I need to do this again in the future, so that I won’t spend longer than I need to googling how to do it, but I hope it helps someone else.
Recently I was inspired by @buhakmeh’s blog post, Supercharge Blogging With .NET and Ruby Frankenblog to write something similar, both as an exercise and excuse to blog about something, and as a way of tidying up the metadata on my existing blog posts and adding header images to old posts.
The initial high level requirements I want to support are:
The next series of posts will cover implementing the above requirements… not necessarily in that order. First I will go over setting up the project and configuring Oakton.
After that I will probably cover implementing fixes to the existing blog metadata, as I think that is going to be something that will be required in order for any sort of Info function to work properly, as all of the yaml metadata will need to be consistent.
Then I think I’ll tackle the image stuff, which should be fairly interesting, and should give a nice look to the existing posts, as having prominent images for posts is part of the theme for the blog, which I’ve not really taken full advantage of.
I’ll try to update this post with links to future posts, or else make it all a big series.
dotnet new console --name BlogHelper9000
At work, we have recently been porting our internal web framework into .net 6. Yes, we are late to the party on this, for reasons. Suffice it to say I currently work in an inherently risk averse industry.
Anyway, one part of the framework is responsible for getting reports from SSRS.
The way it did this was to use a wrapper class around a SOAP client generated from good old ReportService2005.asmx?wsdl
, using our faithful friend svcutil.exe
. The wrapper class used some TaskCompletionSource
magic on the events in the client to make the client.LoadReportAsync
and the other *Async
methods actually async, as the generated client was not truly async.
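That wrapper used the classic event-to-Task bridge. A simplified sketch of the pattern (the LoadReportCompleted event, its args type and the method signature here are stand-ins for the old generated client’s members, not the exact generated code):
public Task<ReportInfo> LoadReportAsync(string reportPath)
{
    var tcs = new TaskCompletionSource<ReportInfo>();

    void Handler(object sender, LoadReportCompletedEventArgs e)
    {
        // Unsubscribe so the handler only fires once for this call
        _client.LoadReportCompleted -= Handler;

        if (e.Error is not null) tcs.SetException(e.Error);
        else if (e.Cancelled) tcs.SetCanceled();
        else tcs.SetResult(e.Result);
    }

    _client.LoadReportCompleted += Handler;
    _client.LoadReportAsync(reportPath, null);

    return tcs.Task;
}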
Fast forward to the modern times, and we need to upgrade it. How do we do that?
Obviously, Microsoft are a step ahead: svcutil
has a dotnet version - dotnet-svcutil
. We can install it and get going:
dotnet tool install --global dotnet-svcutil
Once installed, we can call it against the endpoint:
dotnet-svcutil http://server/ReportServer/ReportService2005.asmx?wsdl
In our wrapper class, the initialisation of the client has to change slightly, because the generated client is different to the original svcutil
implementation. Looking at the diff between the two files, it’s because the newer version of the client uses more modern .net functionality.
The wrapper class constructor has to be changed slightly:
public Wrapper(string url, NetworkCredential credentials)
{
var binding = new BasicHttpBinding(BasicHttpSecurityMode.TransportCredentialOnly);
binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Ntlm;
binding.MaxReceivedMessageSize = 10485760; // this is a 10mb limit
var address = new EndpointAddress(url);
_client = new ReportExecutionServiceSoapClient(binding, address);
_client.ClientCredentials.Windows.AllowedImpersonationLevel = TokenImpersonationLevel.Impersonation;
_client.ClientCredentials.Windows.ClientCredential = credentials;
}
Then, the code which actually generates the report can be updated to remove all of the TaskCompletionSource
, which actually simplifies it a great deal:
public async Task<byte[]> RenderReport(string reportPath, string reportFormat, ParameterValue[] parameterValues)
{
await _client.LoadReportAsync(null, reportPath, null);
await _client.SetExecutionParametersAsync(null, null, parameterValues, "en-gb");
var deviceInfo = @"<DeviceInfo><Toolbar>False</Toolbar></DeviceInfo>";
var request = new RenderRequest(null, null, reportFormat, deviceInfo);
var response = await _client.RenderAsync(request);
return response.Result;
}
You can then do whatever you like with the byte[]
, like return it in an IActionResult
or load it into a MemoryStream
and write it to disk as the file.
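For instance, a hypothetical ASP.NET Core action (not part of the framework being described here) could hand the bytes straight back to the caller:
[HttpGet("reports/{name}")]
public async Task<IActionResult> GetReport(string name)
{
    // _wrapper is the SSRS wrapper class shown above
    var bytes = await _wrapper.RenderReport($"/Reports/{name}", "PDF", Array.Empty<ParameterValue>());

    return File(bytes, "application/pdf", $"{name}.pdf");
}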
Recently we realised that we had quite a few applications being deployed through Octopus Deploy, and that we had a number of Environments, and a number of Channels, and that managing the ports being used in Dev/QA/UAT across different servers/channels was becoming… problematic.
When looking at this problem, it’s immediately clear that you need some way of dynamically allocating a port number on each deployment. This blog post from Paul Stovell shows the way, using a custom Powershell build step.
As we’d lost track of what sites were using what ports, and we also have ad-hoc websites in IIS that aren’t managed by Octopus Deploy, we thought that asking IIS “Hey, what ports are the sites you know about using?” might be a way forward. We also had the additional requirement that on some of our servers we might have some arbitrary services also using a port, and that we might bump into a situation where a port was chosen that was already being used by a non-IIS application/website.
Researching the first situation, it’s quickly apparent that you can do this in Powershell, using the Webadministration
module. Based on the answers to this question on Stackoverflow, we came up with this:
Import-Module Webadministration
function Get-IIS-Used-Ports()
{
$Websites = Get-ChildItem IIS:\Sites
$ports = foreach($Site in $Websites)
{
$Binding = $Site.bindings
[string]$BindingInfo = $Binding.Collection
[string]$Port = $BindingInfo.SubString($BindingInfo.IndexOf(":")+1,$BindingInfo.LastIndexOf(":")-$BindingInfo.IndexOf(":")-1)
$Port -as [int]
}
return $ports
}
To get the list of ports on a machine that are not being used is also fairly straightforward in Powershell:
function Get-Free-Ports()
{
$availablePorts = @(49000..65000)
$usedPorts = @(Get-NetTCPConnection | Select -ExpandProperty LocalPort | Sort -Descending | Where { $_ -ge 49000})
$unusedPorts = foreach($possiblePort in $availablePorts)
{
$unused = $possiblePort -notin $usedPorts
if($unused)
{
$possiblePort
}
}
return $unusedPorts
}
With those two functions in hand, you can work out what free ports are available to be used as the ‘next port’ on a server. It’s worth pointing out that if a site in IIS is stopped, then IIS won’t allow that port to be used in another website (in IIS), but the port also doesn’t show up as a used port in netstat -a
, which is kind of what Get-NetTCPConnection
does.
function Get-Next-Port()
{
$iisUsedPorts = Get-IIS-Used-Ports
$freePorts = Get-Free-Ports
$port = $freePorts | Where-Object { $iisUsedPorts -notcontains $_} | Sort-Object | Select-Object -First 1
Set-OctopusVariable -Name "Port" -Value "$port"
}
Then you just have to call it at the end of the script:
Get-Next-Port
You’d also want to have various Write-Host
or other logging messages so that you get some useful output in the build step when you’re running it.
If you found this because you have a build server which is ‘offline’, without any external internet access because of reasons, and you can’t get your build to work because dotnet fails to restore the tool you require for your build process because of said lack of external internet access, then this is for you.
In hindsight, this may be obvious for most people, but it wasn’t for me, so here it is.
In this situation, you just need to shy away from local tools completely, because as of yet, I’ve been unable to find any way of telling dotnet not to try to restore them, and they fail every build.
Instead, I’ve installed the tool(s) as a global tool, in a specific folder, e.g. C:\dotnet-tools
, which I’ve then added to the system path on the server. You may need to restart the build server for it to pick up the changes to the environment variable.
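For example, installing a tool into that folder looks something like this (dotnet-svcutil is just an example tool here):
dotnet tool install dotnet-svcutil --tool-path C:\dotnet-tools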
One challenge that remains is how to ensure the dotnet tools are consistent on both the developer machine, and the build server. I leave that as an exercise for the reader.
I’m leaving this here so I can find it again easily.
We had a problem updating the Visual Studio 2019 Build Tools on a server, after updating an already existing offline layout.
I won’t go into that here, because it’s covered extensively on Microsoft’s Documentation website.
The installation kept failing, even when using --noweb
. It turns out that when your server is completely cut off from the internet, as was the case here, you also need to pass --noUpdateInstaller
.
This is because (so it would seem), even though --noweb
correctly tells the installer to use the offline cache, it doesn’t prevent the installer from trying to update itself, which will obviously fail in a totally disconnected environment.
Since a technical breakdown of how Betsy does texture compression was posted, I wanted to lay out how the compressors in Convection Texture Tools (CVTT) work, as well as provide some context of what CVTT's objectives are in the first place to explain some of the technical decisions.
First off, while I am very happy with how CVTT has turned out, and while it's definitely a production-quality texture compressor, providing the best compressor possible for a production environment has not been its primary goal. Its primary goal is to experiment with compression techniques to improve the state of the art, particularly finding inexpensive ways to hit high quality targets.
A common theme that wound up manifesting in most of CVTT's design is that encoding decisions are either guided by informed decisions, i.e. models that relate to the problem being solved, or are exhaustive. Very little of it is done by random or random-like searching. Much of what CVTT exists to experiment with is figuring out techniques which amount to making those informed decisions.
Anyway, CVTT's ParallelMath module is kind of the foundation that everything else is built on. Much of its design is motivated by SIMD instruction set quirks, and a desire to maintain compatibility with older instruction sets like SSE2 without sacrificing too much.
Part of that compatibility effort is that most of CVTT's ops use a UInt15 type. The reason for UInt15 is to handle architectures (like SSE2!) that don't support unsigned compares, min, or max, which means performing those operations on a 16-bit number requires flipping the high bit on both operands. For any number where we know the high bit is zero for both operands, that flip is unnecessary - and a huge number of operations in CVTT fit in 15 bits.
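The usual workaround on such architectures is the sign-flip trick: XOR both operands with 0x8000, do a signed compare/min/max, and XOR the result back. In scalar form (written in C# here purely to illustrate the idea, rather than SSE2 intrinsics):
// Unsigned 16-bit max on hardware that only has signed compares:
// flipping the high bit maps unsigned ordering onto signed ordering.
static ushort UnsignedMax16(ushort a, ushort b)
{
    short sa = (short)(a ^ 0x8000);
    short sb = (short)(b ^ 0x8000);
    return (ushort)(Math.Max(sa, sb) ^ 0x8000);
}
With UInt15 values the high bit is already zero on both sides, so both flips disappear entirely.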
The compare flag types are basically vector booleans, where either all bits are 1 or all bits are 0 for a given lane - There's one type for 16-bit ints, and one for 32-bit floats, and they have to be converted since they're different widths. Those are combined with several utility functions, some of which, like SelectOrZero and NotConditionalSet, can elide a few operations.
The RoundForScope type is a nifty dual-use piece of code. SSE rounding modes are determined by the CSR register, not per-op, so RoundForScope when targeting SSE will set the CSR, and then reset it in its destructor. For other architectures, including the scalar target, the TYPE of the RoundForScope passed in is what determines the operation, so the same code works whether the rounding is per-op or per-scope.
While the ParallelMath architecture has been very resistant to bugs for the most part, where it has run into bugs, they've mostly been due to improper use of AnySet or AllSet - Cases where parallel code can behave improperly because lanes where the condition should exclude it are still executing, and need to be manually filtered out using conditionals.
by OneEightHundred (noreply@blogger.com) at 2021-01-03 23:21
If you want some highlights:
The SDL variant ("AerofoilSDL") is also basically done, with a new OpenGL ES 2 rendering backend and SDL sound backend for improved portability. The lead version on Windows still uses D3D11 and XAudio2 though.
Unfortunately, I'm still looking for someone to assist with the macOS port, which is made more difficult by the fact that Apple discontinued OpenGL, so I can't really provide a working renderer for it any more. (Aerofoil's renderer is actually slightly complicated, mostly due to postprocessing.)
In the meantime, the Android port is under way! The game is fully playable so far; most of the work has to do with redoing the UI for touchscreens. The in-game controls use corner taps for rubber bands and battery/helium, but it's a bit awkward if you're trying to use the battery while moving left due to the taps being on the same side of the screen.
Most of the cases where you NEED to use the battery, you're facing right, so this was kind of a tactical decision, but there are some screens (like "Grease is on TV") where it'd be really nice if it was more usable facing left.
I'm also adding a "source export" feature: The source code package will be bundled with the app, and you can just use the source export feature to save the source code to your documents directory. That is, once I figure out how to save to the documents directory, which is apparently very complicated...
Anyway, I'm working on getting this into the Google Play Store too. There might be some APKs posted to GitHub as pre-releases, but there may (if I can figure out how it works) be some Internal Testing releases via GPS. If you want to opt in to the GPS tests, shoot an e-mail to codedeposit.gps@gmail.com
Maybe, but there are two obstacles:
First, the game is GPL-licensed, and there have reportedly been problems with Apple removing GPL-licensed apps from the App Store, and it may not be possible to comply with the licence there. I've heard there is now a way to push apps to your personal device via Xcode with only an Apple ID, which might make satisfying some of the requirements easier, but I don't know.
Second, as with the macOS version, someone would need to do the port. I don't have a Mac, so I don't have Xcode, so I can't do it.
by OneEightHundred (noreply@blogger.com) at 2020-10-20 11:09
terVec3 lb = ti->points[1] - ti->points[0];
terVec3 lc = ti->points[2] - ti->points[0];
terVec2 lbt = ti->texCoords[1] - ti->texCoords[0];
terVec2 lct = ti->texCoords[2] - ti->texCoords[0];
// Generate local space for the triangle plane
terVec3 localX = lb.Normalize2();
terVec3 localZ = lb.Cross(lc).Normalize2();
terVec3 localY = localX.Cross(localZ).Normalize2();
// Determine X/Y vectors in local space
float plbx = lb.DotProduct(localX);
terVec2 plc = terVec2(lc.DotProduct(localX), lc.DotProduct(localY));
terVec2 tsvS, tsvT;
tsvS[0] = lbt[0] / plbx;
tsvS[1] = (lct[0] - tsvS[0]*plc[0]) / plc[1];
tsvT[0] = lbt[1] / plbx;
tsvT[1] = (lct[1] - tsvT[0]*plc[0]) / plc[1];
ti->svec = (localX*tsvS[0] + localY*tsvS[1]).Normalize2();
ti->tvec = (localX*tsvT[0] + localY*tsvT[1]).Normalize2();
by OneEightHundred (noreply@blogger.com) at 2012-01-08 00:23
#define SH_AMBIENT_FACTOR (0.25f)
#define SH_LINEAR_FACTOR (0.5f)
#define SH_QUADRATIC_FACTOR (0.3125f)
void LambertDiffuseToSHCoefs(const terVec3 &dir, float out[9])
{
// Constant
out[0] = 1.0f * SH_AMBIENT_FACTOR;
// Linear
out[1] = dir[1] * SH_LINEAR_FACTOR;
out[2] = dir[2] * SH_LINEAR_FACTOR;
out[3] = dir[0] * SH_LINEAR_FACTOR;
// Quadratics
out[4] = ( dir[0]*dir[1] ) * 3.0f*SH_QUADRATIC_FACTOR;
out[5] = ( dir[1]*dir[2] ) * 3.0f*SH_QUADRATIC_FACTOR;
out[6] = ( 1.5f*( dir[2]*dir[2] ) - 0.5f ) * SH_QUADRATIC_FACTOR;
out[7] = ( dir[0]*dir[2] ) * 3.0f*SH_QUADRATIC_FACTOR;
out[8] = 0.5f*( dir[0]*dir[0] - dir[1]*dir[1] ) * 3.0f*SH_QUADRATIC_FACTOR;
}
void RotateCoefsByMatrix(float outCoefs[9], const float pIn[9], const terMat3x3 &rMat)
{
// DC
outCoefs[0] = pIn[0];
// Linear
outCoefs[1] = rMat[1][0]*pIn[3] + rMat[1][1]*pIn[1] + rMat[1][2]*pIn[2];
outCoefs[2] = rMat[2][0]*pIn[3] + rMat[2][1]*pIn[1] + rMat[2][2]*pIn[2];
outCoefs[3] = rMat[0][0]*pIn[3] + rMat[0][1]*pIn[1] + rMat[0][2]*pIn[2];
// Quadratics
outCoefs[4] = (
( rMat[0][0]*rMat[1][1] + rMat[0][1]*rMat[1][0] ) * ( pIn[4] )
+ ( rMat[0][1]*rMat[1][2] + rMat[0][2]*rMat[1][1] ) * ( pIn[5] )
+ ( rMat[0][2]*rMat[1][0] + rMat[0][0]*rMat[1][2] ) * ( pIn[7] )
+ ( rMat[0][0]*rMat[1][0] ) * ( pIn[8] )
+ ( rMat[0][1]*rMat[1][1] ) * ( -pIn[8] )
+ ( rMat[0][2]*rMat[1][2] ) * ( 3.0f*pIn[6] )
);
outCoefs[5] = (
( rMat[1][0]*rMat[2][1] + rMat[1][1]*rMat[2][0] ) * ( pIn[4] )
+ ( rMat[1][1]*rMat[2][2] + rMat[1][2]*rMat[2][1] ) * ( pIn[5] )
+ ( rMat[1][2]*rMat[2][0] + rMat[1][0]*rMat[2][2] ) * ( pIn[7] )
+ ( rMat[1][0]*rMat[2][0] ) * ( pIn[8] )
+ ( rMat[1][1]*rMat[2][1] ) * ( -pIn[8] )
+ ( rMat[1][2]*rMat[2][2] ) * ( 3.0f*pIn[6] )
);
outCoefs[6] = (
( rMat[2][1]*rMat[2][0] ) * ( pIn[4] )
+ ( rMat[2][2]*rMat[2][1] ) * ( pIn[5] )
+ ( rMat[2][0]*rMat[2][2] ) * ( pIn[7] )
+ 0.5f*( rMat[2][0]*rMat[2][0] ) * ( pIn[8])
+ 0.5f*( rMat[2][1]*rMat[2][1] ) * ( -pIn[8])
+ 1.5f*( rMat[2][2]*rMat[2][2] ) * ( pIn[6] )
- 0.5f * ( pIn[6] )
);
outCoefs[7] = (
( rMat[0][0]*rMat[2][1] + rMat[0][1]*rMat[2][0] ) * ( pIn[4] )
+ ( rMat[0][1]*rMat[2][2] + rMat[0][2]*rMat[2][1] ) * ( pIn[5] )
+ ( rMat[0][2]*rMat[2][0] + rMat[0][0]*rMat[2][2] ) * ( pIn[7] )
+ ( rMat[0][0]*rMat[2][0] ) * ( pIn[8] )
+ ( rMat[0][1]*rMat[2][1] ) * ( -pIn[8] )
+ ( rMat[0][2]*rMat[2][2] ) * ( 3.0f*pIn[6] )
);
outCoefs[8] = (
( rMat[0][1]*rMat[0][0] - rMat[1][1]*rMat[1][0] ) * ( pIn[4] )
+ ( rMat[0][2]*rMat[0][1] - rMat[1][2]*rMat[1][1] ) * ( pIn[5] )
+ ( rMat[0][0]*rMat[0][2] - rMat[1][0]*rMat[1][2] ) * ( pIn[7] )
+0.5f*( rMat[0][0]*rMat[0][0] - rMat[1][0]*rMat[1][0] ) * ( pIn[8] )
+0.5f*( rMat[0][1]*rMat[0][1] - rMat[1][1]*rMat[1][1] ) * ( -pIn[8] )
+0.5f*( rMat[0][2]*rMat[0][2] - rMat[1][2]*rMat[1][2] ) * ( 3.0f*pIn[6] )
);
}
float3 SampleSHQuadratic(float3 dir, float3 shVector[9])
{
float3 ds1 = dir.xyz*dir.xyz;
float3 ds2 = dir*dir.yzx; // xy, zy, xz
float3 v = shVector[0];
v += dir.y * shVector[1];
v += dir.z * shVector[2];
v += dir.x * shVector[3];
v += ds2.x * shVector[4];
v += ds2.y * shVector[5];
v += (ds1.z * 1.5 - 0.5) * shVector[6];
v += ds2.z * shVector[7];
v += (ds1.x - ds1.y) * 0.5 * shVector[8];
return v;
}
void SHForDirection(const terVec3 &dir, float out[9])
{
// Constant
out[0] = 1.0f;
// Linear
out[1] = dir[1] * 3.0f;
out[2] = dir[2] * 3.0f;
out[3] = dir[0] * 3.0f;
// Quadratics
out[4] = ( dir[0]*dir[1] ) * 15.0f;
out[5] = ( dir[1]*dir[2] ) * 15.0f;
out[6] = ( 1.5f*( dir[2]*dir[2] ) - 0.5f ) * 5.0f;
out[7] = ( dir[0]*dir[2] ) * 15.0f;
out[8] = 0.5f*( dir[0]*dir[0] - dir[1]*dir[1] ) * 15.0f;
}
terVec3 RandomDirection(int (*randomFunc)(), int randMax)
{
float u = (((float)randomFunc()) / (float)(randMax - 1))*2.0f - 1.0f;
float n = sqrtf(1.0f - u*u);
float theta = 2.0f * M_PI * (((float)randomFunc()) / (float)(randMax));
return terVec3(n * cos(theta), n * sin(theta), u);
}
by OneEightHundred (noreply@blogger.com) at 2011-12-02 12:22
int iY = px[0] + 2*px[1] + px[2]; // 0..1020
int iCo, iCg;
if (iY == 0)
{
iCo = 0;
iCg = 0;
}
else
{
iCo = (px[0] + px[1]) * 255 / iY;
iCg = (px[1] * 2) * 255 / iY;
}
px[0] = (unsigned char)iCo;
px[1] = (unsigned char)iCg;
px[2] = 0;
px[3] = (unsigned char)((iY + 2) / 4);
float3 DecodeYCoCgRel(float4 inColor)
{
return (float3(4.0, 0.0, -4.0) * inColor.r
+ float3(-2.0, 2.0, -2.0) * inColor.g
+ float3(0.0, 0.0, 4.0)) * inColor.a;
}
by OneEightHundred (noreply@blogger.com) at 2010-10-11 03:21
unsigned char Linearize(unsigned char inByte)
{
float srgbVal = ((float)inByte) / 255.0f;
float linearVal;
if(srgbVal < 0.04045)
linearVal = srgbVal / 12.92f;
else
linearVal = pow( (srgbVal + 0.055f) / 1.055f, 2.4f);
return (unsigned char)(floor(sqrt(linearVal)* 255.0 + 0.5));
}
void ConvertBlockToYCoCg(const unsigned char inPixels[16*3], unsigned char outPixels[16*4])
{
unsigned char linearizedPixels[16*3]; // Convert to linear values
for(int i=0;i<16*3;i++)
linearizedPixels[i] = Linearize(inPixels[i]);
// Calculate Co and Cg extents
int extents = 0;
int n = 0;
int iY, iCo, iCg;
int blockCo[16];
int blockCg[16];
const unsigned char *px = linearizedPixels;
for(int i=0;i<16;i++)
{
iCo = (px[0]<<1) - (px[2]<<1);
iCg = (px[1]<<1) - px[0] - px[2];
if(-iCo > extents) extents = -iCo;
if( iCo > extents) extents = iCo;
if(-iCg > extents) extents = -iCg;
if( iCg > extents) extents = iCg;
blockCo[n] = iCo;
blockCg[n++] = iCg;
px += 3;
}
// Co = -510..510
// Cg = -510..510
float scaleFactor = 1.0f;
if(extents > 127)
scaleFactor = (float)extents * 4.0f / 510.0f;
// Convert to quantized scalefactor
unsigned char scaleFactorQuantized = (unsigned char)(ceil((scaleFactor - 1.0f) * 31.0f / 3.0f));
// Unquantize
scaleFactor = 1.0f + (float)(scaleFactorQuantized / 31.0f) * 3.0f;
unsigned char bVal = (unsigned char)((scaleFactorQuantized << 3) | (scaleFactorQuantized >> 2));
unsigned char *outPx = outPixels;
n = 0;
px = linearizedPixels;
for(int i=0;i<16;i++)
{
// Calculate components
iY = ( px[0] + (px[1]<<1) + px[2] + 2 ) / 4;
iCo = ((blockCo[n] / scaleFactor) + 128);
iCg = ((blockCg[n] / scaleFactor) + 128);
if(iCo < 0) iCo = 0; else if(iCo > 255) iCo = 255;
if(iCg < 0) iCg = 0; else if(iCg > 255) iCg = 255;
if(iY < 0) iY = 0; else if(iY > 255) iY = 255;
px += 3;
outPx[0] = (unsigned char)iCo;
outPx[1] = (unsigned char)iCg;
outPx[2] = bVal;
outPx[3] = (unsigned char)iY;
outPx += 4;
}
}
float3 DecodeYCoCg(float4 inColor)
{
float3 base = inColor.arg + float3(0, -0.5, -0.5);
float scale = (inColor.b*0.75 + 0.25);
float4 multipliers = float4(1.0, 0.0, scale, -scale);
float3 result;
result.r = dot(base, multipliers.xzw);
result.g = dot(base, multipliers.xyz);
result.b = dot(base, multipliers.xww);
// Convert from 2.0 gamma to linear
return result*result;
}
by OneEightHundred (noreply@blogger.com) at 2010-10-11 01:32
The barrier to entry on the Instant concept is apparently low, and Yahoo and Microsoft's Bing have both tested the waters, according to a report in Search Engine Land. (emphasis mine)
by Blake Householder (noreply@blogger.com) at 2010-09-12 19:44
RE: GAO: your landing page sucks :(

I clicked an ad banner for your site from Gamasutra (
http://www.game-advertising-online.com/?section=doc&action=advertising )
and *nothing* on the landing page tells me why I should do business with
you. What will it cost me? What benefits will I get? Why are you better
than your competitors? I have no idea!
I see that you've got some reach, but I have no frame of reference for that
so I don't care.
You've got some clients, but they're not me, so I don't care.
You've got "cutting edge functionality" but I don't care.
I can apply for an account, but why?

and they helpfully replied with:

Good day,
We are pleased to have confirmation that our landing page only appeals to
people who care.
Best Wishes,
Valera Koltsov
Game Advertising Online
http://www.game-advertising-online.com

Thanks guys! Guess I'll take my money elsewhere!
by Blake Householder (noreply@blogger.com) at 2010-09-11 19:04